Refactor tesseroids and enable parallelization #244

Merged: 13 commits merged into master from refactor-tesseroids on Oct 18, 2021

Conversation

@santisoler (Member) commented on Sep 6, 2021

Refactor some pieces of the tesseroids implementation. Lower the number of
arguments of tesseroid_gravity by removing some parameters and using them as
global variables instead: GLQ_DEGREES, STACK_SIZE, MAX_DISCRETIZATIONS,
DISTANCE_SIZE_RATII. These parameters are not intended to be changed by the
user in most use cases. Enable parallelization of the forward model through
Numba's parallel mode. Allow disabling it through a new parallel argument, set
to True by default. The outer loop that iterates over computation points is
the one being parallelized. New stack and small_tesseroids arrays are
allocated on each outer-loop iteration. Improve some outdated docstrings.
Refactor tests using pytest parametrizations.
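
Below is a minimal sketch of what the resulting interface could look like: module-level constants replacing the removed keyword arguments, and a `parallel` flag on the public function. The constant names follow the PR description, while their values and the simplified function body are assumptions for illustration only.

```python
# Sketch of the "module-level constants instead of keyword arguments" layout.
# Constant names follow the PR description; values and the simplified body
# are assumptions, not copied from the PR diff.
GLQ_DEGREES = (2, 2, 2)        # assumed GLQ degrees (longitude, latitude, radius)
STACK_SIZE = 100               # assumed size of the adaptive-discretization stack
MAX_DISCRETIZATIONS = 100_000  # assumed cap on the number of subdivided tesseroids
DISTANCE_SIZE_RATII = {"potential": 1.0, "g_z": 2.5}  # assumed ratios per field


def tesseroid_gravity(coordinates, tesseroids, density, field="g_z", parallel=True):
    """
    Public entry point: glq_degrees, stack_size and max_discretizations are no
    longer keyword arguments, they are read from the module constants above.
    """
    distance_size_ratio = DISTANCE_SIZE_RATII[field]
    # ... the private jitted forward function would be called here, receiving
    # GLQ_DEGREES, STACK_SIZE, MAX_DISCRETIZATIONS and distance_size_ratio.
    return distance_size_ratio
```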

- Remove the glq_degrees, stack_size and max_discretizations arguments and
  set them as global variables. These parameters are not usually changed by
  users, so it's better to hide them and declutter the `tesseroid_gravity`
  function. The distance-size ratii are now passed through the global variable
  DISTANCE_SIZE_RATII.
- Remove the test function for the invalid `distance_size_ratii` argument.
- Split the original tesseroid_gravity function into a new private one so the
  code gets simpler.
- Unsilence the pylint too-many-locals warning on tesseroid_gravity.
- Use a consistent name for the `radial_adaptive_discretization` argument
  across functions.
- Enable parallelization through Numba prange on the computation points loop.
  Both the stack and the small_tesseroids arrays are allocated inside that
  loop to prevent multiple threads from overwriting the same memory (see the
  sketch after this list).
- Add a new parallel argument to tesseroid_gravity and a new dispatcher
  function that returns either the parallel or the serial jitted function.
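
A hypothetical sketch of the dispatcher and the parallelized outer loop, assuming the pattern described in the items above: the function names, array shapes, and the stand-in per-point computation are illustrative, not copied from the PR diff.

```python
import numpy as np
from numba import jit, prange

STACK_SIZE = 100            # assumed module-level stack size
MAX_DISCRETIZATIONS = 1000  # assumed cap on the number of subdivided tesseroids


def _forward_kernel(coordinates, tesseroids, result):
    # Outer loop over computation points: prange is parallelized when the
    # kernel is compiled with parallel=True and behaves like range otherwise.
    for i in prange(coordinates.shape[0]):
        # Per-iteration scratch arrays so threads never write to shared memory.
        stack = np.empty((STACK_SIZE, 6))
        small_tesseroids = np.empty((MAX_DISCRETIZATIONS, 6))
        # Stand-in computation: the real kernel would adaptively discretize
        # each tesseroid onto stack/small_tesseroids and run the GLQ sum.
        result[i] = coordinates[i, 0] + tesseroids[:, 0].sum()


# Compile a serial and a parallel variant of the same kernel once.
_forward_serial = jit(nopython=True)(_forward_kernel)
_forward_parallel = jit(nopython=True, parallel=True)(_forward_kernel)


def _dispatcher(parallel):
    """Return either the parallel or the serial jitted kernel."""
    return _forward_parallel if parallel else _forward_serial


def forward_model(coordinates, tesseroids, parallel=True):
    """Illustrative public wrapper: pick the kernel and fill the result array."""
    result = np.zeros(coordinates.shape[0])
    _dispatcher(parallel)(coordinates, tesseroids, result)
    return result


# Tiny usage example with dummy arrays:
coords = np.linspace(0, 1, 15).reshape(5, 3)
tess = np.ones((2, 6))
print(forward_model(coords, tess, parallel=False))
```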
@santisoler added this to the v0.3 milestone on Oct 18, 2021
@santisoler changed the title from "WIP Refactor tesseroids" to "Refactor tesseroids and enable parallelization" on Oct 18, 2021
@santisoler merged commit 1563c7a into master on Oct 18, 2021
@santisoler deleted the refactor-tesseroids branch on Oct 18, 2021
@santisoler added the enhancement label on Oct 20, 2021