Multivariate Covariance Generalized Linear Models for the Analysis of Experimental Data

Wagner H. Bonat (wbonat@ufpr.br) | Walmes M. Zeviani (walmes@ufpr.br)
LEG/DEST/UFPR

62a RBras & 17o SEAGRO, July 24–28, 2017, UFLA, Lavras/MG
lm()
The data used below are from Pimentel Gomes and Garcia (2002), table 10.2.1, page 254. The measured responses are the content of nitrogen (y1) and phosphorus (y2) in plants fertilized in pots. The experiment was carried out in a completely randomized design.
#-----------------------------------------------------------------------
# Packages.
library(lattice)
library(latticeExtra)
#-----------------------------------------------------------------------
# Pimentel Gomes, page 254.
da <- expand.grid(rep = 1:5,
trt = gl(3, 1,
labels = c("Test", "TurFer", "TurNat")),
KEEP.OUT.ATTRS = TRUE)
da <- transform(da,
y1 = c(463, 438, 494, 496, 448, 603, 596, 616, 633,
608, 471, 481, 449, 443, 456)/100,
y2 = c(950, 890, 1010, 1230, 940, 1080, 1050, 1080,
1190, 1080, 960, 930, 870, 820, 910)/1000)
str(da)
## 'data.frame': 15 obs. of 4 variables:
## $ rep: int 1 2 3 4 5 1 2 3 4 5 ...
## $ trt: Factor w/ 3 levels "Test","TurFer",..: 1 1 1 1 1 2 2 2 2 2 ...
## $ y1 : num 4.63 4.38 4.94 4.96 4.48 6.03 5.96 6.16 6.33 6.08 ...
## $ y2 : num 0.95 0.89 1.01 1.23 0.94 1.08 1.05 1.08 1.19 1.08 ...
# ATTENTION: This is a toy example, a small dataset that is easy to handle.
xyplot(y2 ~ y1,
groups = trt,
auto.key = list(columns = 3,
title = "Treatments",
cex.title = 1),
data = da,
grid = TRUE) +
glayer(panel.ellipse(...))
# Fitting the MANOVA model (full model).
m_full <- lm(cbind(y1, y2) ~ trt, data = da)
# Summaries for separate univariate models.
summary(m_full)
## Response y1 :
##
## Call:
## lm(formula = y1 ~ trt, data = da)
##
## Residuals:
## Min 1Q Median 3Q Max
## -0.298 -0.131 -0.040 0.160 0.282
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 4.67800 0.08737 53.545 1.18e-15 ***
## trtTurFer 1.43400 0.12355 11.606 7.01e-08 ***
## trtTurNat -0.07800 0.12355 -0.631 0.54
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.1954 on 12 degrees of freedom
## Multiple R-squared: 0.9406, Adjusted R-squared: 0.9307
## F-statistic: 94.96 on 2 and 12 DF, p-value: 4.407e-08
##
##
## Response y2 :
##
## Call:
## lm(formula = y2 ~ trt, data = da)
##
## Residuals:
## Min 1Q Median 3Q Max
## -0.114 -0.050 -0.016 0.022 0.226
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 1.00400 0.03973 25.269 8.96e-12 ***
## trtTurFer 0.09200 0.05619 1.637 0.1275
## trtTurNat -0.10600 0.05619 -1.886 0.0837 .
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.08884 on 12 degrees of freedom
## Multiple R-squared: 0.509, Adjusted R-squared: 0.4271
## F-statistic: 6.219 on 2 and 12 DF, p-value: 0.01402
# Separate ANOVA tables.
summary.aov(m_full)
## Response y1 :
## Df Sum Sq Mean Sq F value Pr(>F)
## trt 2 7.2476 3.6238 94.956 4.407e-08 ***
## Residuals 12 0.4580 0.0382
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Response y2 :
## Df Sum Sq Mean Sq F value Pr(>F)
## trt 2 0.098173 0.049087 6.2187 0.01402 *
## Residuals 12 0.094720 0.007893
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
# MANOVA table.
anova(m_full)
## Analysis of Variance Table
##
## Df Pillai approx F num Df den Df Pr(>F)
## (Intercept) 1 0.99923 7152.9 2 11 < 2.2e-16 ***
## trt 2 1.24157 9.8 4 24 7.472e-05 ***
## Residuals 12
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
# Estimated parameters.
coef(m_full)
## y1 y2
## (Intercept) 4.678 1.004
## trtTurFer 1.434 0.092
## trtTurNat -0.078 -0.106
# Estimated covariance matrix (least squares; var() uses denominator n - 1).
var(residuals(m_full))
## y1 y2
## y1 0.03271143 0.012320000
## y2 0.01232000 0.006765714
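# NOTE: var() divides by n - 1 = 14; the usual unbiased least-squares
# estimate divides the error SSP by n - rank(X) = 12, reproducing the
# residual Mean Sq of the univariate ANOVA tables above.
crossprod(residuals(m_full))/(nrow(da) - nlevels(da$trt))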
# Null model.
m_null <- update(m_full, . ~ 1)
#-----------------------------------------------------------------------
# Doing the math.
# Error SSP of the full model.
E_full <- crossprod(residuals(m_full))
E_full
## y1 y2
## y1 0.45796 0.17248
## y2 0.17248 0.09472
# Error SSP of the null model.
E_null <- crossprod(residuals(m_null))
E_null
## y1 y2
## y1 7.7056 0.9051000
## y2 0.9051 0.1928933
# Extra error SSP = SSP_null - SSP_full (the hypothesis/treatment SSP).
E_extr <- E_null - E_full
E_extr
## y1 y2
## y1 7.24764 0.73262000
## y2 0.73262 0.09817333
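# Cross-check: the diagonal of E_extr reproduces the univariate trt
# sums of squares shown in the ANOVA tables above.
diag(E_extr)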
# Eigenvalues of R^{-1} H (here, solve(E_full) %*% E_extr).
lambdas <- eigen(solve(E_full, E_extr))$values
# Wilks' lambda.
det(E_full)/det(E_full + E_extr)
## [1] 0.02042803
1/prod(1 + lambdas)
## [1] 0.02042803
anova(m_full, test = "Wilks")
## Analysis of Variance Table
##
## Df Wilks approx F num Df den Df Pr(>F)
## (Intercept) 1 0.0007683 7152.9 2 11 < 2.2e-16 ***
## trt 2 0.0204280 33.0 4 22 5.302e-09 ***
## Residuals 12
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
# Roy's largest root.
lambdas[1]
## [1] 34.75792
anova(m_full, test = "Roy")
## Analysis of Variance Table
##
## Df Roy approx F num Df den Df Pr(>F)
## (Intercept) 1 1300.52 7152.9 2 11 < 2.2e-16 ***
## trt 2 34.76 208.5 2 12 4.784e-10 ***
## Residuals 12
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
# Hotelling-Lawley trace (Wald).
sum(diag(solve(E_full, E_extr)))
## [1] 35.12691
sum(lambdas)
## [1] 35.12691
anova(m_full, test = "Hotelling-Lawley")
## Analysis of Variance Table
##
## Df Hotelling-Lawley approx F num Df den Df Pr(>F)
## (Intercept) 1 1300.52 7152.9 2 11 < 2.2e-16 ***
## trt 2 35.13 87.8 4 20 2.153e-12 ***
## Residuals 12
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
# Pillai's trace.
sum(diag(E_extr %*% solve(E_extr + E_full)))
## [1] 1.24157
sum(lambdas/(1 + lambdas))
## [1] 1.24157
anova(m_full, test = "Pillai")
## Analysis of Variance Table
##
## Df Pillai approx F num Df den Df Pr(>F)
## (Intercept) 1 0.99923 7152.9 2 11 < 2.2e-16 ***
## trt 2 1.24157 9.8 4 24 7.472e-05 ***
## Residuals 12
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
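# For reference, a minimal sketch of the F approximations behind the
# tables above, written out from the classical moment-based formulas
# (an assumption: p = number of responses, q = hypothesis df, v = error
# df, s = min(p, q)); each value should reproduce the corresponding
# "approx F" printed above.
p <- 2; q <- 2; v <- 12
s <- min(p, q)
mm <- (abs(p - q) - 1)/2
nn <- (v - p - 1)/2
# Wilks: Rao's F approximation (approx F = 33.0 on 4 and 22 df).
W <- 1/prod(1 + lambdas)
tw <- sqrt((p^2 * q^2 - 4)/(p^2 + q^2 - 5))
ww <- v + q - (p + q + 1)/2
((1 - W^(1/tw))/W^(1/tw)) * (ww * tw - (p * q - 2)/2)/(p * q)
# Hotelling-Lawley (approx F = 87.8 on 4 and 20 df).
df1 <- s * (2 * mm + s + 1)
df2 <- 2 * (s * nn + 1)
df2 * sum(lambdas)/(s * df1)
# Pillai (approx F = 9.8 on 4 and 24 df).
V <- sum(lambdas/(1 + lambdas))
((2 * nn + s + 1)/(2 * mm + s + 1)) * V/(s - V)
# Roy: an upper bound, not an exact F (approx F = 208.5 on 2 and 12 df).
max(lambdas) * (v - max(p, q) + q)/max(p, q)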
#-----------------------------------------------------------------------
# Calculating using matrix algebra.
# Matrix of responses.
Y <- as.matrix(da[, c("y1", "y2")])
# Total SSP = Y'Y.
t(Y) %*% Y
## y1 y2
## y1 402.4591 77.8038
## y2 77.8038 15.1729
# Design matrix (full model).
X <- model.matrix(~trt, data = da)
dim(X)
## [1] 15 3
# Estimation of the model parameters.
XlX <- crossprod(X)
XlY <- crossprod(X, Y)
B <- solve(XlX, XlY)
B
## y1 y2
## (Intercept) 4.678 1.004
## trtTurFer 1.434 0.092
## trtTurNat -0.078 -0.106
# Just to confirm.
coef(m_full)
## y1 y2
## (Intercept) 4.678 1.004
## trtTurFer 1.434 0.092
## trtTurNat -0.078 -0.106
# To create projection matrices.
proj <- function(X) {
tX <- t(X)
X %*% solve(tX %*% X) %*% tX
}
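# Aside: proj() assumes X has full column rank, since it inverts X'X;
# for rank-deficient or multi-term models a QR decomposition (see the
# note at the end of this section) is the numerically stabler route.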
# Projection (hat) matrix for the full model.
Hfull <- proj(X)
# Projection (hat) matrix for the null model.
Hnull <- proj(X[, 1])
# Identity.
I <- diag(nrow(X))
# Error SSP of the full model.
E_full <- t(Y) %*% (I - Hfull) %*% Y
E_full
## y1 y2
## y1 0.45796 0.17248
## y2 0.17248 0.09472
# Error SSP of the null model.
E_null <- t(Y) %*% (I - Hnull) %*% Y
E_null
## y1 y2
## y1 7.7056 0.9051000
## y2 0.9051 0.1928933
# Extra error SSP.
E_extr <- t(Y) %*% (Hfull - Hnull) %*% Y
E_extr
## y1 y2
## y1 7.24764 0.73262000
## y2 0.73262 0.09817333
# Eigenvalues of R^{-1} H (here, solve(E_full) %*% E_extr).
lambdas <- eigen(solve(E_full, E_extr))$values
# Wilks' lambda.
det(E_full)/det(E_full + E_extr)
## [1] 0.02042803
1/prod(1 + lambdas)
## [1] 0.02042803
anova(m_full, test = "Wilks")
## Analysis of Variance Table
##
## Df Wilks approx F num Df den Df Pr(>F)
## (Intercept) 1 0.0007683 7152.9 2 11 < 2.2e-16 ***
## trt 2 0.0204280 33.0 4 22 5.302e-09 ***
## Residuals 12
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
# Roy's largest root.
lambdas[1]
## [1] 34.75792
anova(m_full, test = "Roy")
## Analysis of Variance Table
##
## Df Roy approx F num Df den Df Pr(>F)
## (Intercept) 1 1300.52 7152.9 2 11 < 2.2e-16 ***
## trt 2 34.76 208.5 2 12 4.784e-10 ***
## Residuals 12
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
# Hotelling-Lawley trace (Wald).
sum(diag(solve(E_full, E_extr)))
## [1] 35.12691
sum(lambdas)
## [1] 35.12691
anova(m_full, test = "Hotelling-Lawley")
## Analysis of Variance Table
##
## Df Hotelling-Lawley approx F num Df den Df Pr(>F)
## (Intercept) 1 1300.52 7152.9 2 11 < 2.2e-16 ***
## trt 2 35.13 87.8 4 20 2.153e-12 ***
## Residuals 12
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
# Pillai's trace.
sum(diag(E_extr %*% solve(E_extr + E_full)))
## [1] 1.24157
sum(lambdas/(1 + lambdas))
## [1] 1.24157
anova(m_full, test = "Pillai")
## Analysis of Variance Table
##
## Df Pillai approx F num Df den Df Pr(>F)
## (Intercept) 1 0.99923 7152.9 2 11 < 2.2e-16 ***
## trt 2 1.24157 9.8 4 24 7.472e-05 ***
## Residuals 12
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
# ATTENTION: for models with more than one term, a QR decomposition of
# the design matrix X can be used.
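# A minimal sketch of the QR route (using the same full-rank X and Y
# as above): qr.fitted() returns the projections H %*% Y without ever
# forming the hat matrix explicitly (should be TRUE).
qrX <- qr(X)
all.equal(qr.fitted(qrX, Y), Hfull %*% Y, check.attributes = FALSE)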
#-----------------------------------------------------------------------
#-----------------------------------------------------------------------
# Packages.
rm(list = ls())
library(mvtnorm)
#-----------------------------------------------------------------------
# Prototyping: creating a function to run the simulation study.
# Simulation settings.
m <- 3 # Number of responses.
k <- 4 # Number of groups.
rept <- 4 # Repetition of each group.
# Returns samples of the (asymptotically chi-square) statistic of each test.
simul_chi <- function(m, k, rept, N = 1000) {
trt <- gl(k, rept)
n <- length(trt)
# Means and covariances.
mu <- rep(0, m)
Sigma <- 0.5 * (diag(m) + matrix(1, m, m))
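    # (Compound-symmetric covariance: unit variances, correlation 0.5.)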
# Paired samples of the statistics.
smp <- replicate(N, {
Y <- rmvnorm(n = length(trt),
mean = mu,
sigma = Sigma)
# Sigma of the null model.
m0 <- lm(Y ~ 1)
S0 <- var(residuals(m0)) * (n - 1)/n
# Sigma of the full model.
m1 <- lm(Y ~ trt)
S1 <- var(residuals(m1)) * (n - 1)/n
#------------------------------------
# Wilks' statistic.
logdet <- log(det(S1)/det(S0))
# Has an asymptotic chi-square approximation.
wil <- -n * logdet
# ATTENTION: this has a better chi-square approximation.
# wil <- -(n - k - 1/2 * (m - (k - 1) + 1)) * logdet
#------------------------------------
# Hotelling-Lawley's statistic.
hol <- n * sum(diag(solve(S1, S0 - S1)))
#------------------------------------
# Pillai's statistic.
pil <- n * sum(diag(solve(S0, S0 - S1)))
return(c(wil = wil, hol = hol, pil = pil))
})
return(smp)
}
# A paired sample of the three statistics (Wilks, Hotelling-Lawley, Pillai).
smp <- simul_chi(m = m, rept = rept, k = k)
# A helper to display the legend.
leg <- function(...) {
legend(...,
legend = c("Wilks", "Hotelling-Lawley", "Pillai"),
col = c(2, 4, 3),
lty = 1,
bty = "n")
}
# Empirical chi-square distribution vs theoretical.
plot(ecdf(smp[1, ]), col = 2,
xlab = expression(X^{2} ~ "statistic"),
ylab = "Empirical CDF",
main = NULL)
lines(ecdf(smp[2, ]), col = 4)
lines(ecdf(smp[3, ]), col = 3)
curve(pchisq(x, df = m * (k - 1)), add = TRUE, col = 1)
leg("right")
# Get the p-values. NOTE: pchisq() returns the lower tail; the usual
# p-value is the upper tail (lower.tail = FALSE). Under H_0 both are
# uniformly distributed, so the comparisons here and below are
# unaffected.
pval <- apply(smp,
MARGIN = 1,
FUN = pchisq,
df = m * (k - 1))
str(pval)
## num [1:1000, 1:3] 0.171 0.95 0.39 0.345 0.83 ...
## - attr(*, "dimnames")=List of 2
## ..$ : NULL
## ..$ : chr [1:3] "wil" "hol" "pil"
plot(ecdf(pval[, 1]),
xlab = expression("p-values for the" ~ X^{2} ~ "statistic"),
ylab = "Empirical CDF",
col = 2,
pch = NA,
main = NULL)
lines(ecdf(pval[, 2]), col = 4, cex = NA)
lines(ecdf(pval[, 3]), col = 3, cex = NA)
segments(0, 0, 1, 1, col = 1)
leg("left")
#-----------------------------------------------------------------------
# Design settings for a small simulation study.
# k: number of treatment levels (fixed);
# N: number of simulations (fixed);
# m: number of responses (varying);
# rept: repetition per treatment level (varying).
k <- 4
N <- 1000
ds <- expand.grid(m = c(2, 5, 8),
rept = c(5, 10, 20))
# Simulation for each design setting case.
res <- by(data = ds,
INDICES = seq_len(nrow(ds)),
FUN = function(case) {
smp <- with(case,
simul_chi(m = m,
rept = rept,
k = k,
N = N))
test <- rownames(smp)
smp <- c(t(smp))
data.frame(m = case$m,
rept = case$rept,
test = rep(test, each = N),
smp = smp,
pval = pchisq(smp, case$m * (k - 1)))
})
# Stack into one data frame.
res <- do.call(rbind, res)
res$n <- res$rept * k
res$test <- factor(res$test,
levels = c("wil", "hol", "pil"),
labels = c("Wilks", "Hotelling-Lawley", "Pillai"))
res_chi <- res
rm(res)
useOuterStrips(
ecdfplot(~pval | factor(m) + factor(n),
as.table = TRUE,
groups = test,
auto.key = list(columns = 2,
title = "Test",
cex.title = 1.1),
xlab = expression("p-values for the" ~
X^{2} ~ "statistic"),
ylab = "Empirical CDF",
data = res_chi),
strip = strip.custom(
var.name = "Responses",
strip.names = TRUE,
sep = " = "),
strip.left = strip.custom(
var.name = "Subjects",
strip.names = TRUE,
sep = " = ")) +
layer({
panel.segments(0, 0, 1, 1, lty = 2, col = 1)
})
#-----------------------------------------------------------------------
# The F approximation of each multivariate test.
# Each test.
tt <- c("Wilks", "Hotelling-Lawley", "Pillai", "Roy")
# tt <- tt[-4]
# Simulation of data under H_0 and computation of the F-test p-values.
simul_F <- function(m, k, rept, N = 1000) {
trt <- gl(k, rept)
mu <- rep(0, m)
n <- length(trt)
Sigma <- 0.5 * (diag(m) + matrix(1, m, m))
# Sample of the statistic.
smp <- replicate(N, {
Y <- rmvnorm(n = length(trt),
mean = mu,
sigma = Sigma)
m0 <- lm(Y ~ trt)
pvals <- sapply(tt,
FUN = function(test) {
# Extract the trt p-value (row 2, column "Pr(>F)").
anova(m0, test = test)[2, 6]
})
return(pvals)
})
return(smp)
}
# Testing.
simul_F(m = 3, k = 4, rept = 10, N = 5)
## [,1] [,2] [,3] [,4] [,5]
## Wilks 0.6740731 0.196948597 0.16423446 0.9127527 0.20609552
## Hotelling-Lawley 0.6911254 0.170490524 0.18129761 0.9206495 0.20692271
## Pillai 0.6577941 0.227943856 0.14904046 0.9047159 0.20683589
## Roy 0.1748006 0.008478051 0.05087502 0.3685056 0.02589552
# Simulation for each design setting case.
res <- by(data = ds,
INDICES = seq_len(nrow(ds)),
FUN = function(case) {
smp <- with(case,
simul_F(m = m,
rept = rept,
k = k,
N = N))
smp <- stack(as.data.frame(t(smp)))
cbind(data.frame(m = case$m,
rept = case$rept),
smp)
})
# Stack into one data frame.
res <- do.call(rbind, res)
res$n <- res$rept * k
names(res)[3:4] <- c("pval", "test")
res$test <- factor(res$test, levels = tt)
res_F <- res
rm(res)
useOuterStrips(
ecdfplot(~pval | factor(m) + factor(n),
groups = test,
auto.key = list(columns = 2,
title = "Test",
cex.title = 1.1),
as.table = TRUE,
xlab = expression("p-values for the" ~
italic(F) ~ "statistic"),
ylab = "Empirical CDF",
subset = !is.na(pval),
data = res_F),
strip = strip.custom(
var.name = "Responses",
strip.names = TRUE,
sep = " = "),
strip.left = strip.custom(
var.name = "Subjects",
strip.names = TRUE,
sep = " = ")) +
layer({
panel.segments(0, 0, 1, 1, lty = 2, col = 1)
})
#-----------------------------------------------------------------------
rm(list = ls())
# The iris dataset.
m0 <- lm(cbind(Petal.Length,
Petal.Width,
Sepal.Length,
Sepal.Width) ~ Species,
data = iris)
# The model matrix and the X'X.
X <- model.matrix(m0)
XlX <- crossprod(X)
B <- coef(m0) # Matrix of parameters.
b <- cbind(c(B)) # vec(B): vectorized form.
n <- nrow(X) # Number of sample units.
k <- nrow(B) # Number of rows in B or columns in X.
r <- ncol(B) # Number of responses.
# To test H_0: the Species means are equal for all responses.
A <- rbind(c(0, 1, 0),
c(0, 0, 1))
h <- nrow(A) # Number of linear functions.
# All responses.
M <- cbind(diag(r))
# Only the first response.
# M <- cbind(diag(r)[, 1])
v <- ncol(M) # Number of linear combinations of the responses.
# Breads, ham and the Sandwich (H = hypothesis SSP).
ABM <- A %*% B %*% M
AXlXiA <- A %*% solve(XlX) %*% t(A)
H <- t(ABM) %*% solve(AXlXiA) %*% ABM
H
## [,1] [,2] [,3] [,4]
## [1,] 437.1028 186.77400 165.24840 -57.23960
## [2,] 186.7740 80.41333 71.27933 -22.93267
## [3,] 165.2484 71.27933 63.21213 -19.95267
## [4,] -57.2396 -22.93267 -19.95267 11.34493
library(car)
# The multivariate tests for the ABM = 0 hypothesis.
linearHypothesis(m0,
hypothesis.matrix = A,
P = M)
##
## Response transformation matrix:
## [,1] [,2] [,3] [,4]
## Petal.Length 1 0 0 0
## Petal.Width 0 1 0 0
## Sepal.Length 0 0 1 0
## Sepal.Width 0 0 0 1
##
## Sum of squares and products for the hypothesis:
## [,1] [,2] [,3] [,4]
## [1,] 437.1028 186.77400 165.24840 -57.23960
## [2,] 186.7740 80.41333 71.27933 -22.93267
## [3,] 165.2484 71.27933 63.21213 -19.95267
## [4,] -57.2396 -22.93267 -19.95267 11.34493
##
## Sum of squares and products for error:
## [,1] [,2] [,3] [,4]
## [1,] 27.2226 6.2718 24.6246 8.1208
## [2,] 6.2718 6.1566 5.6450 4.8084
## [3,] 24.6246 5.6450 38.9562 13.6300
## [4,] 8.1208 4.8084 13.6300 16.9620
##
## Multivariate Tests:
## Df test stat approx F num Df den Df Pr(>F)
## Pillai 2 1.19190 53.4665 8 290 < 2.22e-16 ***
## Wilks 2 0.02344 199.1453 8 288 < 2.22e-16 ***
## Hotelling-Lawley 2 32.47732 580.5321 8 286 < 2.22e-16 ***
## Roy 2 32.19193 1166.9574 4 145 < 2.22e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
# The residual SSP of the full model.
R <- crossprod(residuals(m0))
# Model under the constraint of H_0.
m00 <- update(m0, . ~ 1)
# The hypothesis SSP as a difference of residual SSPs.
H <- crossprod(residuals(m00)) - R
H
## Petal.Length Petal.Width Sepal.Length Sepal.Width
## Petal.Length 437.1028 186.77400 165.24840 -57.23960
## Petal.Width 186.7740 80.41333 71.27933 -22.93267
## Sepal.Length 165.2484 71.27933 63.21213 -19.95267
## Sepal.Width -57.2396 -22.93267 -19.95267 11.34493
# Function that returns the trace of a matrix.
tr <- function(x) sum(diag(x))
tr(solve(R) %*% H)
## [1] 32.47732
tr(H %*% solve(R))
## [1] 32.47732
# The trace is the Hotelling-Lawley criterion.
anova(m0, test = "Hotelling")
## Analysis of Variance Table
##
## Df Hotelling-Lawley approx F num Df den Df Pr(>F)
## (Intercept) 1 144.552 5203.9 4 144 < 2.2e-16 ***
## Species 2 32.477 580.5 8 286 < 2.2e-16 ***
## Residuals 147
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
# Testing the Kronecker operation in R.
# kronecker(diag(1:4), matrix(1, 2, 2))
# Testing the same hypothesis but with the vectorized components.
L <- kronecker(t(M), A)
b
## [,1]
## [1,] 1.462
## [2,] 2.798
## [3,] 4.090
## [4,] 0.246
## [5,] 1.080
## [6,] 1.780
## [7,] 5.006
## [8,] 0.930
## [9,] 1.582
## [10,] 3.428
## [11,] -0.658
## [12,] -0.454
Lb <- L %*% b
Lb
## [,1]
## [1,] 2.798
## [2,] 4.090
## [3,] 1.080
## [4,] 1.780
## [5,] 0.930
## [6,] 1.582
## [7,] -0.658
## [8,] -0.454
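# A quick check of the vec identity used to build L:
# vec(A B M) = (M' \otimes A) vec(B)  (should be TRUE).
all.equal(c(A %*% B %*% M), c(Lb))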
# Sigma <- var(residuals(m0))
Sigma <- R/n
# MSM <- t(M) %*% Sigma %*% M
# MSM.XlXi <- kronecker(MSM, solve(XlX))
# (1/n) * t(Lb) %*% solve(L %*% MSM.XlXi %*% t(L)) %*% Lb
# Wald statistic.
t(Lb) %*%
solve(L %*%
kronecker(t(M) %*% R %*% M,
solve(XlX)) %*%
t(L)) %*%
Lb
## [,1]
## [1,] 32.47732
# H-L trace.
tr(H %*% solve(R))
## [1] 32.47732
# So, Wald is the Hotelling-Lawley trace criterion.
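# A check of the algebra behind the equivalence: with C = A B M,
# vec(C)' [(M'RM) \otimes A(X'X)^{-1}A']^{-1} vec(C)
#   = tr(C' [A(X'X)^{-1}A']^{-1} C [M'RM]^{-1}),
# which reduces to tr(H %*% solve(R)) because M is the identity here
# (should reproduce 32.47732).
tr(t(ABM) %*% solve(AXlXiA) %*% ABM %*% solve(t(M) %*% R %*% M))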
Pimentel Gomes, F., Garcia, C. H., 2002. Estatística aplicada a experimentos agronômicos e florestais: Exposição com exemplos e orientações para uso de aplicativos. FEALQ, Piracicaba.
# devtools::session_info()
Sys.time()
## [1] "2017-07-26 19:24:27 BRT"
cbind(Sys.info())
## [,1]
## sysname "Linux"
## release "4.4.0-87-generic"
## version "#110-Ubuntu SMP Tue Jul 18 12:55:35 UTC 2017"
## nodename "class"
## machine "x86_64"
## login "unknown"
## user "walmes"
## effective_user "walmes"
sessionInfo()
## R version 3.4.1 (2017-06-30)
## Platform: x86_64-pc-linux-gnu (64-bit)
## Running under: Ubuntu 16.04.2 LTS
##
## Matrix products: default
## BLAS: /usr/lib/libblas/libblas.so.3.6.0
## LAPACK: /usr/lib/lapack/liblapack.so.3.6.0
##
## locale:
## [1] LC_CTYPE=pt_BR.UTF-8 LC_NUMERIC=C
## [3] LC_TIME=pt_BR.UTF-8 LC_COLLATE=en_US.UTF-8
## [5] LC_MONETARY=pt_BR.UTF-8 LC_MESSAGES=en_US.UTF-8
## [7] LC_PAPER=pt_BR.UTF-8 LC_NAME=C
## [9] LC_ADDRESS=C LC_TELEPHONE=C
## [11] LC_MEASUREMENT=pt_BR.UTF-8 LC_IDENTIFICATION=C
##
## attached base packages:
## [1] stats graphics grDevices utils datasets base
##
## other attached packages:
## [1] car_2.1-4 mvtnorm_1.0-5 latticeExtra_0.6-28
## [4] RColorBrewer_1.1-2 lattice_0.20-35 knitr_1.15.1
## [7] rmarkdown_1.3
##
## loaded via a namespace (and not attached):
## [1] Rcpp_0.12.9 magrittr_1.5 splines_3.4.1
## [4] MASS_7.3-47 minqa_1.2.4 stringr_1.2.0
## [7] tools_3.4.1 parallel_3.4.1 nnet_7.3-12
## [10] pbkrtest_0.4-6 grid_3.4.1 nlme_3.1-131
## [13] mgcv_1.8-17 quantreg_5.29 MatrixModels_0.4-1
## [16] htmltools_0.3.5 yaml_2.1.14 lme4_1.1-12
## [19] rprojroot_1.2 digest_0.6.12 Matrix_1.2-10
## [22] nloptr_1.0.4 evaluate_0.10 stringi_1.1.2
## [25] compiler_3.4.1 methods_3.4.1 backports_1.0.5
## [28] SparseM_1.74