[FaST-LMM] fastlmm/inference/lmm_cov.py, part 2


Open/download the supplementary note.

Notes on various functions that are not easy to grasp at first sight.

Also note that all the referred formulae relate to the ML case, but the code implements restricted maximum likelihood (REML), which means that every occurrence of $Y$ must be mentally replaced with $SY$, where $S = I - X(X^TX)^{-1}X^T$ and $X$ is the matrix of covariates with $D$ columns (accordingly, the $N$ degrees of freedom become $N - D$). However, in the formulae below, $S$ has a different meaning: it is the diagonal matrix in the eigendecomposition $K = USU^T$.
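A minimal sketch of that projection (this reading of the REML trick is my interpretation of the code; the variable names are illustrative, not the library's):

import numpy as np

# Sketch: S projects onto the orthogonal complement of the covariates X,
# so SY has the covariates regressed out; the projector has rank N - D,
# which matches the N = self.Y.shape[0] - self.linreg.D line quoted below.
rng = np.random.RandomState(0)
N, D = 10, 2
X = rng.randn(N, D)                      # covariate matrix
S = np.eye(N) - X.dot(np.linalg.solve(X.T.dot(X), X.T))

assert np.allclose(S.dot(X), 0)          # covariates are projected out
assert np.isclose(np.trace(S), N - D)    # projector rank is N - D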

computeAKA

Computes $A^T (K + \delta I)^{-1} A$; see the explanation of computeAKB below.

computeAKB

Computes $ A^T (K + \delta I)^{-1} B $

It’s not obvious what’s going on inside the function, so let’s examine it in detail.

if the UUA argument is not provided

  • UAS is set to $ (S + \delta I)^{-1} U^T A $
  • AKB is set to $ A^T U (S + \delta I)^{-1} U^T B $

The result is $ A^T U(S + \delta I)^{-1}U^T B $

if the UUA argument is provided

In this case the result is $ A^T U_1(S_1 + \delta I)^{-1} U_1^T B + \frac{1}{\delta}A^T(I-U_1U_1^T) B $

What are these cases for?

If the matrix $ K = WW^T = USU^T$ is full-rank, then $ UU^T $ is the identity, so the returned value equals $ A^T (K+\delta I)^{-1}B $.

If it is low-rank, we use the economy eigendecomposition instead, given by $K = U_1 S_1 U_1^T$. Let’s check that the returned value also gives $ A^T (K+\delta I)^{-1} B $.

In the low-rank case the following equality holds:

$$(K + \delta I)^{-1} = U_1(S_1 + \delta I)^{-1}U_1^T + \frac{1}{\delta}(I - U_1U_1^T)$$

One can see that

$$(U_1S_1U_1^T + \delta I) \cdot \frac{1}{\delta}(I - U_1U_1^T) = I - U_1U_1^T$$

(zeros are due to $U_1^TU_1U_1^T = U_1^T$)

The other identity, $(U_1S_1U_1^T + \delta I)\,U_1(S_1 + \delta I)^{-1}U_1^T = U_1U_1^T$, is checked similarly; the two products sum to $I$, which proves the equality.
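As a quick sanity check, here is a small numpy sketch (random data and my own names, not library code) verifying the low-rank inversion formula:

import numpy as np

# Check that U1 (S1 + delta I)^{-1} U1^T + (1/delta)(I - U1 U1^T)
# is the inverse of K + delta I when K = W W^T has rank k < n.
rng = np.random.RandomState(0)
n, k, delta = 10, 3, 0.5
W = rng.randn(n, k)
K = W.dot(W.T)                          # rank-k kinship matrix

S1, U1 = np.linalg.eigh(K)
S1, U1 = S1[-k:], U1[:, -k:]            # economy eigendecomposition: K = U1 S1 U1^T

lhs = np.linalg.inv(K + delta * np.eye(n))
rhs = U1.dot(np.diag(1.0 / (S1 + delta))).dot(U1.T) + (np.eye(n) - U1.dot(U1.T)) / delta
assert np.allclose(lhs, rhs)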

nLLcore

The most important and math-heavy function.

In order to understand it, Supplementary Note 2 is absolutely required.

fast computation of the log-determinant

The formula for computing

$$\log\det(K + \delta I - \tilde{W}\tilde{W}^T)$$

is given at the end of section 2.2 of the supplementary note (formula 2.5):

$$\log\det(K + \delta I - \tilde{W}\tilde{W}^T) = \sum_{i=1}^{s_c}\log(\lambda_i + \delta) + (N - s_c)\log\delta + \log\det(I - \tilde{W}^T(K + \delta I)^{-1}\tilde{W})$$

where $\lambda_i$ are the nonzero eigenvalues of $K$. In the first lines of the function body the first two summands are computed, and the YKY variable is initialized to $Y^T(K+\delta I)^{-1}Y$. The k variable corresponds to $s_c$ in the formula.

N = self.Y.shape[0] - self.linreg.D        # REML: degrees of freedom reduced by the number of covariates

S,U = self.getSU()
k = S.shape[0]                             # corresponds to s_c in formula (2.5)

UY,UUY = self.getUY(idx_pheno = idx_pheno)
P = UY.shape[1]                            # number of phenotypes used
YKY = computeAKA(Sd=Sd, denom=denom, UA=UY, UUA=UUY)   # Y^T (K + delta I)^{-1} Y
logdetK = np.log(Sd).sum()                 # first summand of (2.5)

if (UUY is not None):                      # low rank part
    logdetK += (N - k) * np.log(denom)     # second summand: (N - s_c) log(delta)

The last summand is computed later, by taking the eigendecomposition of $I - \tilde{W}^T(K+\delta I)^{-1}\tilde{W}$ and summing the logarithms of its eigenvalues.
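Here is a small numpy sketch (random data; the variable names are mine, not the library's) confirming this decomposition of the log-determinant:

import numpy as np

# logdet(K + delta I - W W^T) = sum_i log(lambda_i + delta)
#                               + (N - s_c) log(delta)
#                               + logdet(I - W^T (K + delta I)^{-1} W)
rng = np.random.RandomState(1)
N, sc, m, delta = 15, 4, 2, 0.5
G = rng.randn(N, sc)
K = G.dot(G.T)                          # rank-s_c GSM
W = rng.randn(N, m) * 0.1               # SNPs to exclude

lam = np.linalg.eigvalsh(K)[-sc:]       # nonzero eigenvalues of K
Kdinv = np.linalg.inv(K + delta * np.eye(N))
lhs = np.linalg.slogdet(K + delta * np.eye(N) - W.dot(W.T))[1]
rhs = (np.log(lam + delta).sum()
       + (N - sc) * np.log(delta)
       + np.linalg.slogdet(np.eye(m) - W.T.dot(Kdinv).dot(W))[1])
assert np.allclose(lhs, rhs)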

Auxiliary matrices

  • snps is $ X $ in math notation; in this module it is treated as a single column vector, i.e. the module is suitable for a single SNP only
  • snpsKsnps corresponds to $ X^T(K + \delta I)^{-1}X $, and it’s a number
  • snpsKY corresponds to $ X^T(K + \delta I)^{-1}Y$ where $Y$ is the matrix, each column of which holds values of a phenotype
  • num_exclude is the number of surrounding SNPs to exclude, i.e. $k \leq k_{\mathrm{up}}$ (it can be less because of zero weights)
  • WW is, in the simplest case when the weights are set to -1, equal to $ \tilde{W}^T(K + \delta I)^{-1}\tilde{W} - I $
  • S_WW is the vector of eigenvalues of WW; denote the corresponding diagonal matrix by $S_{ww}$
  • the columns of U_WW are the eigenvectors of WW; let’s denote this matrix in math notation by $U_{ww}$

A pattern

The following pattern is found twice in the source code:

WY = computeAKB(Sd=Sd, denom=denom, UA=UW, UUA=UUW, UB=UY, UUB=UUY)
UWY = U_WW.T.dot(WY)
WY = UWY / np.lib.stride_tricks.as_strided(S_WW, (S_WW.size,UWY.shape[1]), (S_WW.itemsize,0))
Wsnps = computeAKB(Sd=Sd, denom=denom, UA=UW, UUA=UUW, UB=Usnps, UUB=UUsnps)
UWsnps = U_WW.T.dot(Wsnps)
Wsnps = UWsnps / np.lib.stride_tricks.as_strided(S_WW, (S_WW.size,UWsnps.shape[1]), (S_WW.itemsize,0))

What happens here is that for a matrix $ A $ (first $Y$, then $X$) the matrix

$$S_{ww}^{-1}U_{ww}^T\tilde{W}^T(K + \delta I)^{-1}A$$

is computed.
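The as_strided call is just a zero-copy broadcast of S_WW across columns; the following sketch (hypothetical shapes) shows it is equivalent to an ordinary broadcast division, i.e. to multiplying by $S_{ww}^{-1}$ on the left:

import numpy as np

rng = np.random.RandomState(0)
S_WW = rng.rand(4) + 1.0                # eigenvalues of WW
UWY = rng.randn(4, 3)                   # stands for U_ww^T W^T (K + delta I)^{-1} Y

# view of S_WW repeated across columns without copying (column stride 0)
broadcast = np.lib.stride_tricks.as_strided(
    S_WW, (S_WW.size, UWY.shape[1]), (S_WW.itemsize, 0))
assert np.allclose(UWY / broadcast, UWY / S_WW[:, np.newaxis])
assert np.allclose(UWY / broadcast, np.diag(1.0 / S_WW).dot(UWY))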

Recall that the expression

$$I - \tilde{W}^T(K + \delta I)^{-1}\tilde{W},$$

also known as WW in the code (but with the negative sign), is under the determinant sign in formula (2.5); this is the matrix whose determinant we have to compute.

For convenience, let’s denote the WW matrix by $\Omega$ in the formulae. Recall that

$$\Omega = \tilde{W}^T(K + \delta I)^{-1}\tilde{W} - I = U_{ww}S_{ww}U_{ww}^T, \qquad \Omega^{-1} = U_{ww}S_{ww}^{-1}U_{ww}^T$$

Low-rank updates

  • Low-rank update to YKY:
YKY -= (UWY * WY).sum(0)

After this update, YKY equals

$$Y^T\left[(K + \delta I)^{-1} - (K + \delta I)^{-1}\tilde{W}\Omega^{-1}\tilde{W}^T(K + \delta I)^{-1}\right]Y$$

If we ignore $Y^T$ and $Y$ at the ends of the expression, the remaining part is exactly the inverse of the updated GSM given in section 2.3 of the supplementary note (it follows from the Woodbury identity). Bingo!

  • Low-rank updates to snpsKY and snpsKsnps

Similarly, for updating snpsKY we subtract from it

$$X^T(K + \delta I)^{-1}\tilde{W}\Omega^{-1}\tilde{W}^T(K + \delta I)^{-1}Y$$

The expression subtracted from snpsKsnps is obtained by replacing $Y$ with $X$ in the above; both updates are verified numerically in the sketch below.
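Both updates follow from the Woodbury identity; here is a sketch with random data (local names; $\Omega^{-1}$ is applied directly instead of via its eigendecomposition) verifying the YKY update:

import numpy as np

# Subtracting Y^T Kd^{-1} W Omega^{-1} W^T Kd^{-1} Y from YKY must give
# Y^T (K + delta I - W W^T)^{-1} Y, the updated-GSM quadratic form.
rng = np.random.RandomState(2)
n, m, p, delta = 12, 3, 2, 0.7
G = rng.randn(n, 30)
K = G.dot(G.T) / 30                     # GSM
W = rng.randn(n, m) * 0.1               # excluded SNPs (weights -1)
Y = rng.randn(n, p)

Kdinv = np.linalg.inv(K + delta * np.eye(n))
Omega = W.T.dot(Kdinv).dot(W) - np.eye(m)            # WW in the code
B = W.T.dot(Kdinv).dot(Y)                            # W^T Kd^{-1} Y

YKY = np.einsum('ij,jk,ki->i', Y.T, Kdinv, Y)        # diag of Y^T Kd^{-1} Y
YKY -= (B * np.linalg.solve(Omega, B)).sum(0)        # the low-rank update

direct = np.linalg.inv(K + delta * np.eye(n) - W.dot(W.T))
assert np.allclose(YKY, np.einsum('ij,jk,ki->i', Y.T, direct, Y))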

Estimating beta (coefficients) and variance

The hard part is over, and all that is left is calculating the estimates from the computed matrices.

  • Using formula (2.3), we set $\hat{\beta}$ to the vector snpsKY divided by snpsKsnps, which is assumed to be a number in the code (i.e. a single SNP)
  • The variable r2 is calculated according to formula (2.4). Well, not exactly so, but one can quickly recognize that with $\hat{\beta}$ given by formula (2.3),

$$(Y - X\hat{\beta})^T(K + \delta I)^{-1}(Y - X\hat{\beta}) = Y^T(K + \delta I)^{-1}Y - \hat{\beta}\,X^T(K + \delta I)^{-1}Y,$$

i.e. r2 is simply YKY minus beta times snpsKY.
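Finally, a small sketch (random data; the names mirror the walkthrough, not the exact library code) putting these two bullets together:

import numpy as np

rng = np.random.RandomState(3)
n, delta = 20, 0.5
G = rng.randn(n, 10)
Kdinv = np.linalg.inv(G.dot(G.T) / 10 + delta * np.eye(n))
X = rng.randn(n, 1)                     # a single SNP
Y = rng.randn(n, 3)                     # three phenotypes

snpsKsnps = X.T.dot(Kdinv).dot(X)       # 1x1, i.e. a number
snpsKY = X.T.dot(Kdinv).dot(Y)          # 1 x P
YKY = np.einsum('ij,jk,ki->i', Y.T, Kdinv, Y)

beta = snpsKY / snpsKsnps               # formula (2.3), one beta per phenotype
r2 = YKY - (beta * snpsKY).ravel()      # the shortcut used for formula (2.4)

# the shortcut indeed equals (Y - X beta)^T Kd^{-1} (Y - X beta)
resid = Y - X.dot(beta)
assert np.allclose(r2, np.einsum('ij,jk,ki->i', resid.T, Kdinv, resid))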