How to parallelize? #4
I have the same observation, but I think the challenges would be: (2) the tissue masking - essentially, Vahadane performs dictionary learning on the tissue pixels of each image, so the dimensionality of the actual dictionary input varies across images, depending on the tissue region. I would recommend simply caching the stain matrices of all images that may be reused, to avoid recomputation, and/or using a faster approach to obtain the stain concentrations from the OD values and stain matrices (e.g., a least-squares solver such as torch.linalg.lstsq) if you have specific time-efficiency needs. An example of using least squares to solve for concentrations is attached here, derived from @cwlkr 's code.
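To make the least-squares idea above concrete, here is a minimal NumPy sketch of solving per-pixel stain concentrations from OD values and a fixed stain matrix; torch.linalg.lstsq is the drop-in GPU analogue mentioned above. The stain vectors in the usage example are illustrative H&E-like values, not taken from the repository:

```python
import numpy as np

def stain_concentrations(od, stain_matrix):
    """Solve for per-pixel stain concentrations by least squares.

    od           : (N, 3) optical-density values of the tissue pixels
    stain_matrix : (S, 3) rows are the stain vectors in OD space
                   (S = 2 for H&E)
    returns      : (N, S) concentration of each stain at each pixel
    """
    # Beer-Lambert model: od ≈ concentrations @ stain_matrix.
    # Solve stain_matrix.T @ C.T = od.T in the least-squares sense.
    sol, *_ = np.linalg.lstsq(stain_matrix.T, od.T, rcond=None)
    return sol.T

# Usage with illustrative (hypothetical) H&E stain vectors:
M = np.array([[0.65, 0.70, 0.29],   # hematoxylin
              [0.07, 0.99, 0.11]])  # eosin
C_true = np.array([[1.0, 0.5],
                   [0.2, 0.8]])
od = C_true @ M                      # synthetic OD for two pixels
C = stain_concentrations(od, M)      # recovers C_true
```

Because the stain matrix is fixed per image, this replaces the per-pixel sparse coding step with one batched linear solve, which is where most of the speed-up comes from.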
I'm sorry to bother you, but is it possible to use a graphics card to process multiple slides in parallel? I use a for loop and it takes too long. An unknown error occurred when I tried `from multiprocessing import Pool`, so I don't have any other options. Thank you!
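For the multiprocessing error: a common cause is passing unpicklable objects (open slide handles, fitted normalizer objects, CUDA contexts) to worker processes. A minimal sketch that avoids this by creating everything inside the worker; `process_slide` is a hypothetical placeholder for the real per-slide work, not a function from this repository:

```python
import multiprocessing as mp

def process_slide(path):
    # Placeholder for the real work: open the image, fit or look up
    # the cached stain matrix, normalize, save the result.
    # Everything the worker needs must be created *inside* this
    # function -- open file handles, GPU contexts, and large
    # unpicklable objects passed from the parent process are a
    # frequent cause of opaque multiprocessing errors.
    return f"done:{path}"

def run_all(paths, workers=4):
    with mp.Pool(processes=workers) as pool:
        return pool.map(process_slide, paths)

if __name__ == "__main__":
    # The __main__ guard is required on Windows/macOS, where the
    # default "spawn" start method re-imports this module in workers.
    results = run_all(["slide_a.svs", "slide_b.svs"], workers=2)
```

Note that CPU multiprocessing and GPU batching are different remedies: Pool parallelizes independent slides across cores, while a torch-based solver processes one slide's pixels in a single batched GPU call; they can be combined, but one GPU shared by many workers needs care.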