
config_3D.yml #178

Open
stonedada opened this issue Nov 3, 2022 · 10 comments
Labels: question (Further information is requested)

@stonedada

Can you share the config_3D.yml for the preprocess, train, and inference scripts?

@Christianfoley
Contributor

Hello. To run the pipeline with a 3D model, you would use the 3D inference config, 2.5D preprocessing, and slightly modify one of the training configs to fit your use case (changing the model class name and the depths to suit your specifications).
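
For example, the training-config edits would look roughly like this (a minimal sketch only; the key names and values are illustrative and should be checked against the training config you start from and your data):

# Hypothetical training-config edits for a 3D model (illustrative values only)
train_config["network"]["class"] = "UNet3D"      # switch the model class from the 2.5D variant
train_config["network"]["depth"] = 64            # input z-depth; a 3D model needs far more than the 5 used for 2.5D
train_config["network"]["num_filters_per_block"] = [16, 32, 64, 128, 256]

The 3D inference config and the 2.5D preprocessing config can then be reused largely as-is.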

We have shown that the 2.5D architecture is both more efficient and more effective than the 3D architecture for virtual staining tasks. Could you elaborate on your planned use case?

@stonedada
Author

Thank you very much for your reply! I just want to reproduce the results that appear in the paper.

@stonedada
Author

> Hello. To run the pipeline with a 3D model, you would use the 3D inference config, 2.5D preprocessing, and slightly modify one of the training configs to fit your use case (changing the model class name and the depths to suit your specifications).
>
> We have shown that the 2.5D architecture is both more efficient and more effective than the 3D architecture for virtual staining tasks. Could you elaborate on your planned use case?

I want to run the 3D model, so I use the 2.5D preprocessing config and change the tile ["depths"] output channel depth to 5 (i.e. depths: [5, 5]), and add a "mask_depth": 5 attribute. Then I use the 2.5D train config and only change the model class name to "UNet3D", but I get the error "AssertionError: network depth is incompatible with input depth" in unet3d.py. I don't understand the line "feature_depth_at_last_block = depth // (2 ** self.num_down_blocks)" in unet3d.py. Does it mean I should set the depth to a number much larger than 5?

@mattersoflight
Member

mattersoflight commented Nov 7, 2022

Hi @stonedada, the 3D U-Net in unet3d.py requires different config parameters. However, we have stopped using 3D U-Nets in favor of 2.5D U-Net as @Christianfoley mentions.

Which specific result are you trying to reproduce from our paper (https://elifesciences.org/articles/55502)? Our reasons for using the 2.5D U-Net are summarized in the section "2.5D U-Net allows efficient prediction of fluorescent structures from multi-channel label-free images".

If you are new to the 2.5D U-Net and the microDL repository, you should read and try the DL-MBL notebook from release 1.0.0.

@stonedada
Author

I want to reproduce the results of Figure 3 in your paper, which includes the 3D predicted F-actin.

@stonedada
Author

And I want to know why "Depth must be uneven" in def adjust_slice_margins(slice_ids, depth) of aux_utils.py.

@Christianfoley
Contributor

> I don't understand the line "feature_depth_at_last_block = depth // (2 ** self.num_down_blocks)" in unet3d.py. Does it mean I should set the depth to a number much larger than 5?

The 3D Unet downsamples in 3 dimensions, meaning that in a depth-5 network (5 convolutional blocks + downsamples in the encoding path), the bottleneck feature map will have a z-depth of 1/(2**5) of the input depth. This means that to use a 3D Unet, your input data must be very large in Z. Previously we accomplished this by upsampling/resizing our data (see resize.py), but one of the benefits of a 2.5D Unet is that this is not necessary.
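
As a rough illustration of the arithmetic (not the exact microDL code, and assuming 5 down blocks):

# Why a z-depth of 5 fails the check but a much larger depth passes
num_down_blocks = 5
for input_depth in [5, 64]:
    feature_depth_at_last_block = input_depth // (2 ** num_down_blocks)
    print(input_depth, "->", feature_depth_at_last_block)
# 5  -> 0   (no z-extent left at the bottleneck, hence the AssertionError)
# 64 -> 2   (the bottleneck still has a usable z-extent)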

> Does it mean I should set the depth to a number much larger than 5?

The depth parameter in the preprocessing config should be set much larger than 5 for a 3D Unet.
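
In config-dict form, that would look something like this (a rough sketch; the value is only illustrative, and any mask depth you set should stay consistent with the tile depth):

# Hypothetical preprocessing edit for a 3D Unet (illustrative value only)
preproc_config["tile"]["depths"] = [64, 64]   # per-channel tile z-depth, much larger than the 5 used for 2.5D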

> And I want to know why "Depth must be uneven" in def adjust_slice_margins(slice_ids, depth) of aux_utils.py.

When translating 3D label-free volumes to 2D fluorescent predictions, we take a z-stack of label-free slices and use them to predict the fluorescent target corresponding to the center slice of the stack. This "center slice" can only be the center of a stack with an uneven (odd) stack depth.
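
A small illustration (not the microDL code) of why only an odd depth has a well-defined center slice:

# With an odd depth, the stack around a target slice has a unique center
target_slice = 10
depth = 5                                       # odd, as required
assert depth % 2 == 1, "Depth must be uneven"
margin = depth // 2
stack = list(range(target_slice - margin, target_slice + margin + 1))
print(stack)                                    # [8, 9, 10, 11, 12]
print(stack[len(stack) // 2])                   # 10 -> the target slice sits exactly in the middle
# With an even depth (e.g. 4) there is no single middle slice, hence the check.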

@yingmuzhi

Hello, I read your slides and paper, and I also want to run the 3D model. I chose retardance and nuclei images, positions 150-153, slices 0-44. I also made some changes in the config.yml files below:

# preprocess.yml
preproc_config['channel_ids'] = [0, 1] # 0 -> nuclei, 1 -> retardance
preproc_config['normalize']['normalize_channels'] = [True, True]
preproc_config['tile']['depths'] = [1, 45] # depths
preproc_config['pos_ids'] = [150, 151, 152, 153] # position
preproc_config["slice_ids"] = list(range(45)) # slice
# Set the channels used for generating masks
preproc_config['masks']['channels'] = 0
preproc_config['masks']['mask_type'] = "otsu"
# train.yml
train_config['dataset']['input_channels'] = [1]
train_config['dataset']['target_channels'] = [0]
train_config["dataset"]["mask_channels"] = [2]
train_config["dataset"]["split_ratio"] = {"test": 0.25, "train":0.50, "val": 0.25}
train_config["network"]["class"] = "UNet3D"
train_config["network"]["depth"] = 45
train_config["network"]['num_filters_per_block'] = [16, 32, 64, 128, 256]
train_config["trainer"]["metrics"] = "pearson_corr"
train_config['trainer']['loss'] = "mae_loss" # TODO 3: your choice of loss function here.
...

However, it raises an error:

raise ValueError(str(e))
ValueError: Dimension 0 in both shapes must be equal, but are 22 and 23. Shapes are [22,128,128] and [23,128,128]. for 'down_block_2/lambda_2/concat' (op: 'ConcatV2') with input shapes: [?,?,23,128,128], [?,16,22,128,128], [?,?,23,128,128], [] and with computed input tensors: input[3] = <1>.

Could you please help me solve this problem? Thanks.

@Christianfoley
Contributor

Hi @yingmuzhi . Could you please post the entire error traceback?

@mattersoflight added the "question" label (Further information is requested) on Dec 8, 2022
@yingmuzhi

yingmuzhi commented Dec 26, 2022

Hi @Christianfoley, thanks for your reply, and I am sorry for my late response since my final examinations are around the corner. I tried to run the 3D Unet preprocessing according to config_preprocess_resize.yml. However, it raises an error:

Traceback (most recent call last):
  File "/root/anaconda3/envs/env_cp36_ymz/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/root/anaconda3/envs/env_cp36_ymz/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/root/.vscode-server/extensions/ms-python.python-2022.6.0/pythonFiles/lib/python/debugpy/__main__.py", line 45, in <module>
    cli.main()
  File "/root/.vscode-server/extensions/ms-python.python-2022.6.0/pythonFiles/lib/python/debugpy/../debugpy/server/cli.py", line 444, in main
    run()
  File "/root/.vscode-server/extensions/ms-python.python-2022.6.0/pythonFiles/lib/python/debugpy/../debugpy/server/cli.py", line 285, in run_file
    runpy.run_path(target_as_str, run_name=compat.force_str("__main__"))
  File "/root/anaconda3/envs/env_cp36_ymz/lib/python3.6/runpy.py", line 263, in run_path
    pkg_name=pkg_name, script_name=fname)
  File "/root/anaconda3/envs/env_cp36_ymz/lib/python3.6/runpy.py", line 96, in _run_module_code
    mod_name, mod_spec, pkg_name, script_name)
  File "/root/anaconda3/envs/env_cp36_ymz/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/microDL/micro_dl/cli/preprocess_script.py", line 428, in <module>
    pp_config, runtime = pre_process(pp_config, base_config)
  File "/home/microDL/micro_dl/cli/preprocess_script.py", line 327, in pre_process
    mask_ext)
  File "/home/microDL/micro_dl/cli/preprocess_script.py", line 155, in generate_masks
    mask_ext=mask_ext
  File "/home/microDL/micro_dl/preprocessing/generate_masks.py", line 73, in __init__
    uniform_structure=uniform_struct
  File "/home/microDL/micro_dl/utils/aux_utils.py", line 236, in validate_metadata_indices
    'Indices for {} not available'.format(col_name)
AssertionError: Indices for slice_idx not available

And I guess the problem is that resize_image.py returns slice_ids of [2, 11], but in resized_images/frames_meta.csv the slice_ids are [2, 10]; they are not the same.
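
A quick check along these lines (assuming pandas is installed and the relative path to the resized frames_meta.csv) shows which slice indices were actually written during resizing:

import pandas as pd

# List the slice indices present after resizing, to compare against the
# slice_ids the preprocessing config expects (column name taken from the traceback above).
frames_meta = pd.read_csv("resized_images/frames_meta.csv")
print(sorted(frames_meta["slice_idx"].unique()))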
I am looking forward to your reply, thanks.
