
Benchmark tweaks #62

Merged: 22 commits merged into master on Dec 13, 2023
Conversation

paquiteau (Member)

This PR adds all the tweaks made while working on the benchmark. A lot happened, and despite my best efforts it is hard to split it up into several chunks.

@paquiteau (Member Author) commented Nov 27, 2023

Alright, a lot happened, but now:

  • I removed buggy/flaky tests (e.g. on density compensation)
  • The CI is faster (and will be even faster once we cache the cufinufft installation)
  • Example scripts also serve as tests for the trajectories
  • All tests are passing 🎉; we get 68% branch coverage

Moving forward

  • We still need to publish the coverage, etc.
  • The GPU-based NUFFTs would also need tests with on-device data (e.g. CuPy arrays); see the sketch after this list
  • BART tests are still very flaky

I think these could be addressed in later PRs.
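
As a rough illustration of the on-device point, here is a hedged sketch of such a test. The `operator` fixture name is hypothetical; only the `op` method (which does appear in this PR's diff) is relied on.

import numpy as np
import pytest

cp = pytest.importorskip("cupy")  # skip cleanly when CuPy is not installed


def test_op_accepts_cupy_arrays(operator):
    # `operator` is a hypothetical fixture providing a cufinufft-backed
    # NUFFT operator with the `op` method used throughout this PR.
    image_cpu = (np.random.randn(*operator.shape)
                 + 1j * np.random.randn(*operator.shape)).astype(np.complex64)
    image_gpu = cp.asarray(image_cpu)

    ksp_from_cpu = operator.op(image_cpu)
    ksp_from_gpu = operator.op(image_gpu)

    # Results should match whether the input lives on host or device.
    np.testing.assert_allclose(
        cp.asnumpy(cp.asarray(ksp_from_cpu)),
        cp.asnumpy(cp.asarray(ksp_from_gpu)),
        rtol=1e-5,
    )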

@paquiteau paquiteau mentioned this pull request Nov 29, 2023
@chaithyagr chaithyagr self-requested a review December 7, 2023 08:56
@chaithyagr (Member) left a comment

A bunch of changes

backend: [finufft, pynfft, pynufft, bart, sigpy]
exclude:
  - backend: bart
  - backend: pynfft
Comment (Member):

Maybe open an issue noting that this should be fixed, and point to that issue here.

@@ -0,0 +1,52 @@
"""TEST CONFIGURATION.
Comment (Member):

Why is this file in examples?

Reply (Member Author):

This allows the examples to be run as tests 😉; the examples are the best way to test trajectory generation and display, IMO.

See https://stackoverflow.com/questions/56807698/how-to-run-script-as-pytest-test for more info.
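
For reference, here is a minimal sketch of that approach: a test module that globs the example scripts and runs each one with runpy. The file name, glob pattern, and matplotlib patching are illustrative assumptions, not the PR's actual test configuration.

# test_examples.py (sketch): the directory layout and glob pattern are assumptions.
import runpy
from pathlib import Path

import pytest

EXAMPLES_DIR = Path(__file__).parent  # assumes this file sits next to the examples


@pytest.mark.parametrize(
    "script",
    sorted(EXAMPLES_DIR.glob("example_*.py")),
    ids=lambda p: p.name,
)
def test_example_runs(script, monkeypatch):
    # Prevent examples from blocking on interactive figures.
    monkeypatch.setattr("matplotlib.pyplot.show", lambda *args, **kwargs: None)
    # Run the example as a script; any exception fails the test.
    runpy.run_path(str(script), run_name="__main__")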

# Convert to float array and take only the real part
traj = np.ascontiguousarray(traj_raw.T.view("(2,)float32")[..., 0])
if np.all(traj[..., -1] == 0):
    warnings.warn("2D Trajectory Detected")
Comment (Member):

Why do we need a warning? Is it when we expect a 3D trajectory?

Reply (Member Author):

Because this might also be a valid 3D trajectory (sampling the central slice of the k-space of a 3D object), but here only a 2D NUFFT will be done by BART.
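
As a toy illustration (not code from this PR), a trajectory that samples only the kz = 0 plane trips that check while still being a deliberate 3D acquisition:

import numpy as np

# A 3D trajectory that samples only the kz = 0 plane (central slice):
# a valid 3D acquisition, yet BART will only apply a 2D NUFFT to it,
# hence the warning instead of a hard error.
n_samples = 256
kx = np.linspace(-0.5, 0.5, n_samples)
ky = np.linspace(-0.5, 0.5, n_samples)
kz = np.zeros(n_samples)
traj = np.stack([kx, ky, kz], axis=-1).astype(np.float32)

assert np.all(traj[..., -1] == 0)  # this is what triggers the warning above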

@@ -342,25 +335,29 @@ def op(self, data, ksp_d=None):
        else:
            op_func = self._op_calibless_host
        ret = op_func(data, ksp_d)
        if not self.persist_plan:
Comment (Member):

I don't know why the persist_plan option is removed?

Reply (Member Author):

This was too much hassle to maintain internally, and it had little to no usage in practice. If one wants "volatile" plans, they can simply delete and create new NUFFT operators.
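
A minimal sketch of that suggestion; the `get_operator` factory import and the constructor signature are assumptions about the package API, not code from this PR.

# Build a throwaway operator, use it once, and let it (and its internal plan)
# be garbage-collected afterwards, instead of relying on a persist_plan flag.
from mrinufft import get_operator


def nufft_once(traj, shape, image, backend="finufft"):
    nufft = get_operator(backend)(traj, shape)  # fresh operator, fresh plan
    ksp = nufft.op(image)
    del nufft  # plan resources released with the operator
    return ksp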

        return self._safe_squeeze(ret)

    def _data_consistency_sense(self, image_data, obs_data):
Comment (Member):

I prefer the old name somehow. grad_sense, while it seems cool, could be confusing in the future when we want to make the flow compatible with autodiff.

@@ -584,7 +575,7 @@ def __adj_op(self, coeffs_d, image_d):
        ret = self.raw_op.type1(coeffs_d, image_d)
        return ret

-    def data_consistency(self, image_data, obs_data):
+    def get_grad(self, image_data, obs_data):
Comment (Member):

Same here, naming.

"kspace_traj, shape",
cases=[CasesTrajectories.case_radial3D],
)
@parametrize("backend", ["cufinufft", "tensorflow"])
Comment (Member):

Why was this removed? I understand that it must be tested upstream, but this won't take much time and can be very useful in the future to see if gpuNUFFT breaks.

Reply (Member Author):

The Pipe method is not converging. For lack of a better solution, I removed it.

        return np.sqrt(2 ** len(self.shape))


def _readcfl(cfl_file, hdr_file=None):
Comment (Member):

It might as well be helpful to have this exposed (even the writer). We have collaborators who use CFL files for storing k-space data.

Reply (Member Author):

Sure, this could go in a proper io module (I guess we will tackle that in another PR); see #65.
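
For context, a standalone sketch of what an exposed CFL reader might look like in such an io module, based on BART's documented .hdr/.cfl layout; this is not the PR's `_readcfl`.

import numpy as np


def readcfl(basename):
    # Read a BART .hdr/.cfl pair into a complex64 array.
    with open(basename + ".hdr") as header:
        header.readline()                      # skip the "# Dimensions" comment line
        dims = [int(d) for d in header.readline().split()]
    data = np.fromfile(basename + ".cfl", dtype=np.complex64)
    # BART stores data in column-major (Fortran) order.
    return data.reshape(dims, order="F")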

@paquiteau (Member Author):

Regarding the renaming of the data_consistency method to get_grad: this was done to stick closer to the ModOpt API. Maybe data_grad is a good compromise (it should be used in ModOpt as well).
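
For clarity, whichever name wins, the method computes the gradient of the least-squares data-fidelity term, i.e. adj_op(op(x) - obs). A minimal sketch, assuming only the op/adj_op pair on the operator:

def data_grad(nufft_op, image_data, obs_data):
    # Gradient of 0.5 * ||F(x) - y||^2 with respect to the image x.
    # Illustrative only; `nufft_op` just needs the `op`/`adj_op` pair.
    residual = nufft_op.op(image_data) - obs_data  # F(x) - y
    return nufft_op.adj_op(residual)               # F^H (F(x) - y)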

@paquiteau paquiteau merged commit a34ba10 into master Dec 13, 2023
8 checks passed
@paquiteau paquiteau deleted the benchmark-tweaks branch December 15, 2023 09:57