Commit

change usage of woodbury update to more precise schur complement for plackett's correction
david-cortes committed Sep 2, 2022
1 parent 10a193f commit d6a17d7
Showing 7 changed files with 72 additions and 371 deletions.
8 changes: 2 additions & 6 deletions R/approxcdf.R
@@ -31,9 +31,8 @@ NULL
#' matrix is too low, it will instead switch to Plackett's original method. Bhat's method as implemented
#' here again differs from the original in the same ways as TVBS, and Plackett's method as implemented
#' here also differs from the original in that (a) it applies regularization to correlation matrices with
-#' low determinant, (b) when matrices have a large-enough determinant, it uses the Woodbury matrix identity
-#' for computation of gradient corrections, which is faster but less precise, (c) it uses more
-#' Gaussian-Legendre points for evaluation (author's number was meant for a paper-and-pen calculation).
+#' low determinant, (c) it uses more Gaussian-Legendre points for evaluation (author's number was meant
+#' for a paper-and-pen calculation).
#' \item For \eqn{n = 3}, it will use Plackett-Drezner's method. The implementation here differs from the
#' author's original in that it uses more Gauss-Legendre points.
#' \item For \eqn{n = 2}, it will use Drezner's method, again with more Gaussian-Legendre points.
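[Editor's note: the commit title says a Woodbury-identity update was replaced by a direct Schur-complement computation. As background only (this is not code from the repository), a minimal numpy sketch of the two equivalent identities on a 4x4 correlation matrix partitioned into 2x2 blocks:]

```python
import numpy as np

# Build a random 4x4 correlation matrix R = [[A, B], [B^T, D]]
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
S = M @ M.T                                  # symmetric positive definite
d = np.sqrt(np.diag(S))
R = S / np.outer(d, d)                       # unit diagonal -> correlation matrix
A, B, D = R[:2, :2], R[:2, 2:], R[2:, 2:]

# Schur-complement route: the lower-right 2x2 block of R^{-1}
# is the inverse of the Schur complement D - B^T A^{-1} B.
schur = np.linalg.inv(D - B.T @ np.linalg.solve(A, B))

# Woodbury route: the same block written as a low-rank correction to D^{-1}.
Dinv = np.linalg.inv(D)
woodbury = Dinv + Dinv @ B.T @ np.linalg.inv(A - B @ Dinv @ B.T) @ B @ Dinv

# Both agree with the corresponding block of the full inverse.
assert np.allclose(schur, np.linalg.inv(R)[2:, 2:])
assert np.allclose(schur, woodbury)
```

The two routes are algebraically identical, but when the correlation matrix is near-singular the Woodbury form accumulates more rounding error, which is consistent with this commit trading the faster update for the more precise direct computation.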
@@ -151,9 +150,6 @@ pmvn <- function(q, Cov, mean = NULL, is_standardized = FALSE, log.p = FALSE) {
#' in a recursive fashion, zeroing out one correlation coefficient at a time, instead of making
#' corrections for all correlations in aggregate. From some experiments, this turned out to result
#' in slower but more accurate calculations when correcting for multiple correlations.
-#' \item If the determinant of a given correlation matrix is high enough, it will use the
-#' Woodbury matrix identity when calculating gradients for corrections, which is faster but
-#' less accurate.
#' \item The number of Gaussian-Legendre points used here is higher than in Plackett's, but
#' lower than in Gassmann's.
#' }
8 changes: 2 additions & 6 deletions approxcdf/__init__.py
@@ -38,9 +38,8 @@ def mvn_cdf(b: np.array, Cov: np.array, mean: Optional[np.array] = None, is_stan
matrix is too low, it will instead switch to Plackett's original method. Bhat's method as implemented
here again differs from the original in the same ways as TVBS, and Plackett's method as implemented
here also differs from the original in that (a) it applies regularization to correlation matrices with
-low determinant, (b) when matrices have a large-enough determinant, it uses the Woodbury matrix identity
-for computation of gradient corrections, which is faster but less precise, (c) it uses more
-Gaussian-Legendre points for evaluation (author's number was meant for a paper-and-pen calculation).
+low determinant, (c) it uses more Gaussian-Legendre points for evaluation (author's number was meant
+for a paper-and-pen calculation).
- For :math:`n = 3`, it will use Plackett-Drezner's method. The implementation here differs from the
author's original in that it uses more Gauss-Legendre points.
- For :math:`n = 2`, it will use Drezner's method, again with more Gaussian-Legendre points.
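[Editor's note: the docstring above mentions regularizing correlation matrices whose determinant is too low, but this diff does not show the scheme used. One standard approach — purely illustrative, `shrink_correlation` and `lam` are not names from this package — is shrinkage toward the identity, which keeps the unit diagonal and raises the determinant:]

```python
import numpy as np

def shrink_correlation(R, lam):
    """Shrink a correlation matrix toward the identity.

    The result keeps a unit diagonal and remains positive
    semi-definite, and its determinant increases because every
    eigenvalue mu is mapped to (1 - lam) * mu + lam.
    """
    return (1.0 - lam) * R + lam * np.eye(R.shape[0])

# Near-singular correlation matrix: all off-diagonals close to 1
R = np.array([[1.00, 0.99, 0.99],
              [0.99, 1.00, 0.99],
              [0.99, 0.99, 1.00]])

R_reg = shrink_correlation(R, 0.05)
assert np.allclose(np.diag(R_reg), 1.0)              # still a correlation matrix
assert np.linalg.det(R_reg) > np.linalg.det(R)       # determinant raised
```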
@@ -193,9 +192,6 @@ def qvn_cdf(b: np.array, Rho: np.array, prefer_original: bool = False):
in a recursive fashion, zeroing out one correlation coefficient at a time, instead of making
corrections for all correlations in aggregate. From some experiments, this turned out to result
in slower but more accurate calculations when correcting for multiple correlations.
-- If the determinant of a given correlation matrix is high enough, it will use the
-Woodbury matrix identity when calculating gradients for corrections, which is faster but
-less accurate.
- The number of Gaussian-Legendre points used here is higher than in Plackett's, but
lower than in Gassmann's.
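[Editor's note: the methods discussed in these docstrings rest on Plackett's identity — the derivative of the multivariate normal CDF with respect to a correlation coefficient is a normal density — with the resulting one-dimensional integral evaluated by Gauss-Legendre quadrature. A minimal sketch for the bivariate case, illustrative only; `bvn_cdf_plackett` is not this package's API:]

```python
import numpy as np
from scipy.stats import norm

def bvn_cdf_plackett(b1, b2, rho, n_points=32):
    """Bivariate standard-normal CDF P(X <= b1, Y <= b2).

    By Plackett's identity, d/dr Phi2(b1, b2; r) is the bivariate
    normal density, and at r = 0 the CDF factorizes into
    Phi(b1) * Phi(b2).  Integrating the density from 0 to rho with
    Gauss-Legendre quadrature recovers the CDF.
    """
    # Gauss-Legendre nodes/weights on [-1, 1], rescaled to [0, rho]
    x, w = np.polynomial.legendre.leggauss(n_points)
    r = 0.5 * rho * (x + 1.0)
    w = 0.5 * rho * w
    # Bivariate standard-normal density phi2(b1, b2; r) at each node
    dens = np.exp(-(b1**2 - 2.0 * r * b1 * b2 + b2**2) / (2.0 * (1.0 - r**2)))
    dens /= 2.0 * np.pi * np.sqrt(1.0 - r**2)
    return norm.cdf(b1) * norm.cdf(b2) + np.sum(w * dens)
```

With a smooth integrand like this, a few dozen quadrature points already give near machine-precision results, which is why the implementations here can afford more points than Plackett's original hand calculation used.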
5 changes: 2 additions & 3 deletions man/pmvn.Rd


3 changes: 0 additions & 3 deletions man/pqvn.Rd


17 changes: 0 additions & 17 deletions src/Rwrapper.cpp
@@ -161,22 +161,6 @@ SEXP R_determinant_4by4(SEXP x_)
return Rf_ScalarReal(determinant4by4tri(xpass));
}

-SEXP R_inverse_4by4_loweronly(SEXP x_)
-{
-    double *x = REAL(x_);
-    const double xpass[] = {
-        x[1], x[2], x[3],
-        x[6], x[7], x[11]
-    };
-    double lower_inv[3];
-    inv4by4tri_loweronly(xpass, lower_inv);
-    SEXP inv22_ = PROTECT(Rf_allocMatrix(REALSXP, 2, 2));
-    double *inv22 = REAL(inv22_);
-    inv22[0] = lower_inv[0]; inv22[1] = lower_inv[2];
-    inv22[2] = lower_inv[2]; inv22[3] = lower_inv[1];
-    UNPROTECT(1);
-    return inv22_;
-}
#endif /* ifdef RUN_TESTS */

static const R_CallMethodDef callMethods [] = {
@@ -196,7 +180,6 @@ static const R_CallMethodDef callMethods [] = {
{"R_norm_cdf_4d_bhat", (DL_FUNC) &R_norm_cdf_4d_bhat, 2},
{"R_factorize_ldl_2by2blocks", (DL_FUNC) &R_factorize_ldl_2by2blocks, 1},
{"R_determinant_4by4", (DL_FUNC) &R_determinant_4by4, 1},
-    {"R_inverse_4by4_loweronly", (DL_FUNC) &R_inverse_4by4_loweronly, 1},
#endif
{NULL, NULL, 0}
};

