March 29, 2015
OPEN/DOWNLOAD THE SUPPLEMENTARY NOTE

Notes on various functions that are not easy to grasp at first sight. Note also that all the referred formulae relate to the ML case, but the code implements restricted maximum likelihood (REML), which means that every occurrence of the ML expression must be mentally replaced with its REML counterpart, as described in the supplementary note. However, in the formulae below, $S$ has a different meaning: it is the diagonal matrix in the eigendecomposition of $K$.

computeAKA

Computes $A^T (K + \delta I)^{-1} A$; see the explanations for computeAKB below.

computeAKB

Computes $A^T (K + \delta I)^{-1} B$. It is not obvious what is going on inside the function, so let's examine it in detail.

If the UUA argument is not provided, the result is $A^T U (S + \delta I)^{-1} U^T B$.

If the UUA argument is provided, the result is $A^T U_1 (S_1 + \delta I)^{-1} U_1^T B + \frac{1}{\delta} A^T (I - U_1 U_1^T) B$.

What are these cases for?

If the matrix $K = WW^T = USU^T$ is full-rank, then $UU^T$ is the identity, so the returned value is that of $A^T (K + \delta I)^{-1} B$. If $K$ is low-rank, we use the economy eigendecomposition $K = U_1 S_1 U_1^T$ instead. Let's check that the returned value again gives $A^T (K + \delta I)^{-1} B$. In the low-rank case the following equality holds:

$$(K + \delta I)^{-1} = U_1 (S_1 + \delta I)^{-1} U_1^T + \frac{1}{\delta} (I - U_1 U_1^T).$$

One can see that

$$(U_1 S_1 U_1^T + \delta I)\,\frac{1}{\delta} (I - U_1 U_1^T) = I - U_1 U_1^T$$

(the zeros are due to $U_1^T U_1 U_1^T = U_1^T$). The other identity, $(U_1 S_1 U_1^T + \delta I)\, U_1 (S_1 + \delta I)^{-1} U_1^T = U_1 U_1^T$, is checked similarly, and the two right-hand sides sum to the identity matrix, which proves the equality.

nLLcore

The most important and math-heavy function. In order to understand it, Supplementary Note 2 is absolutely required.

Fast computation of the log-determinant

The formula for computing $\log\left|K + \delta I - \tilde{W} \tilde{W}^T\right|$ is given at the end of section 2.2 of the supplementary note (formula 2.5):

$$\log\left|K + \delta I - \tilde{W} \tilde{W}^T\right| = \sum_i \log(S_{1,ii} + \delta) + (N - k) \log\delta + \log\left|I - \tilde{W}^T (K + \delta I)^{-1} \tilde{W}\right|,$$

where $k$ is the rank of $K$. The first two summands are computed in the first lines of the function body, and the last summand is computed later by taking the eigendecomposition of $I - \tilde{W}^T (K + \delta I)^{-1} \tilde{W}$ and summing the logarithms of its eigenvalues.

Auxiliary matrices
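The two facts used above can be checked numerically with plain NumPy. This is a sketch under my own naming, not FaST-LMM code; `Wt` stands in for the excluded SNPs $\tilde{W}$, and the dimensions are made up:

```python
# Sanity check of two identities for a low-rank GSM K = W W^T with
# economy eigendecomposition K = U1 S1 U1^T:
#   1) (K + dI)^{-1} = U1 (S1 + dI)^{-1} U1^T + (1/d)(I - U1 U1^T)
#   2) log|K + dI - Wt Wt^T| = sum_i log(s_i + d) + (N - k) log d
#                              + log|I - Wt^T (K + dI)^{-1} Wt|
import numpy as np

rng = np.random.default_rng(0)
N, k, d = 8, 3, 0.7

W = rng.standard_normal((N, k))
K = W @ W.T                                   # rank-k genetic similarity matrix
S1, U1 = np.linalg.eigh(K)
U1, S1 = U1[:, -k:], S1[-k:]                  # keep the k nonzero eigenpairs

# Identity 1: low-rank expression for the inverse
inv_direct = np.linalg.inv(K + d * np.eye(N))
inv_lowrank = U1 @ np.diag(1.0 / (S1 + d)) @ U1.T + (np.eye(N) - U1 @ U1.T) / d
assert np.allclose(inv_direct, inv_lowrank)

# Identity 2: three-summand formula for the log-determinant
Wt = rng.standard_normal((N, 2)) * 0.1        # small "proximal" SNPs
logdet_direct = np.linalg.slogdet(K + d * np.eye(N) - Wt @ Wt.T)[1]
logdet_fast = (np.sum(np.log(S1 + d)) + (N - k) * np.log(d)
               + np.linalg.slogdet(np.eye(2) - Wt.T @ inv_direct @ Wt)[1])
assert np.allclose(logdet_direct, logdet_fast)
```

The second check is just the matrix determinant lemma applied to $K + \delta I$ minus a low-rank term, with $\log|K + \delta I|$ expanded over the spectrum.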
A pattern

The following pattern is found twice in the source code: for a matrix $A$, the matrix

$$A^T (K + \delta I)^{-1} A + A^T (K + \delta I)^{-1} \tilde{W} \left(I - \tilde{W}^T (K + \delta I)^{-1} \tilde{W}\right)^{-1} \tilde{W}^T (K + \delta I)^{-1} A$$

is computed. Recall that, by the Woodbury identity, this expression equals $A^T \left(K + \delta I - \tilde{W} \tilde{W}^T\right)^{-1} A$.

Low-rank updates
After this update, the computed quantity is

$$Y^T (K + \delta I)^{-1} Y + Y^T (K + \delta I)^{-1} \tilde{W} \left(I - \tilde{W}^T (K + \delta I)^{-1} \tilde{W}\right)^{-1} \tilde{W}^T (K + \delta I)^{-1} Y.$$

If we ignore $Y^T$ and $Y$ at the ends of the expression, the remaining part is exactly the inverse of the updated GSM, $\left(K + \delta I - \tilde{W} \tilde{W}^T\right)^{-1}$, given in section 2.3 of the supplementary note. Bingo!
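This Woodbury-style update can be verified numerically. Again a sketch with invented names (`Wt` for $\tilde{W}$, `Y` for the phenotype), not the actual `lmm_cov.py` code:

```python
# Check that the low-rank update of Y^T (K + dI)^{-1} Y reproduces the
# quadratic form with the inverse of the updated GSM, K + dI - Wt Wt^T.
import numpy as np

rng = np.random.default_rng(1)
N, k, d = 8, 3, 0.5
W = rng.standard_normal((N, k))
K = W @ W.T                                   # low-rank GSM
Wt = rng.standard_normal((N, 2)) * 0.1        # SNPs removed from the GSM
Y = rng.standard_normal((N, 1))               # phenotype

Kd_inv = np.linalg.inv(K + d * np.eye(N))
M = np.eye(2) - Wt.T @ Kd_inv @ Wt            # small 2x2 inner system
updated = (Y.T @ Kd_inv @ Y
           + Y.T @ Kd_inv @ Wt @ np.linalg.solve(M, Wt.T @ Kd_inv @ Y))
direct = Y.T @ np.linalg.inv(K + d * np.eye(N) - Wt @ Wt.T) @ Y
assert np.allclose(updated, direct)
```

Note the plus sign in the correction term: subtracting $\tilde{W}\tilde{W}^T$ from the covariance adds a positive low-rank correction to the inverse, and only a tiny system (the size of the number of excluded SNPs) has to be solved.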
Similarly, the quadratic forms involving the covariates, $X^T (K + \delta I)^{-1} X$ and $X^T (K + \delta I)^{-1} Y$, are updated in the same way.

Estimating beta (coefficients) and variance

The hard part is over, and all that is left is the calculation of the estimates from the already computed matrices.
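As a sketch of that final step, here are the standard GLS estimates computed from exactly the kinds of matrices discussed above. This is illustrative code under my own names (`V` for the full covariance, `XVX`/`XVy` for the "XKX"/"XKy" products), not the actual `lmm_cov.py` implementation:

```python
# Final estimation step via the standard GLS formulas:
#   beta   = (X^T V^{-1} X)^{-1} X^T V^{-1} y
#   sigma2 = (y - X beta)^T V^{-1} (y - X beta) / N   (ML; REML divides by N - D)
import numpy as np

rng = np.random.default_rng(2)
N, D, d = 30, 2, 0.5
W = rng.standard_normal((N, 5))
V = W @ W.T + d * np.eye(N)                   # plays the role of K + delta*I
X = np.column_stack([np.ones(N), rng.standard_normal(N)])  # covariates
y = X @ np.array([1.0, 2.0]) + rng.multivariate_normal(np.zeros(N), V)

Vinv = np.linalg.inv(V)
XVX = X.T @ Vinv @ X                          # the "XKX" matrix
XVy = X.T @ Vinv @ y                          # the "XKy" vector
beta = np.linalg.solve(XVX, XVy)              # coefficient estimates
resid = y - X @ beta
sigma2_ml = resid @ Vinv @ resid / N          # ML variance estimate
sigma2_reml = resid @ Vinv @ resid / (N - D)  # REML variance estimate
```

The REML variant divides by $N - D$ rather than $N$, which is one concrete instance of the ML-to-REML substitution mentioned at the top of this post.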