
Question: If I have multiple GPUs, can I specify which GPU to run colabfold on? #200

Closed
JasonIsaac-Lofty opened this issue Dec 13, 2023 · 5 comments

Comments

@JasonIsaac-Lofty

I have multiple GPUs on our server. Can I run colabfold on a separate GPU, like alphafold2, just by using the flag --gpu_devices?

@ChrisLou-bioinfo

yes?

@sean-workman

sean-workman commented Feb 8, 2024

This is a really unhelpful answer. There is no --gpu_devices flag in the Flags section of the README or in the usage message from --help.

Edit:

Editing to add that --gpu_devices is indeed not a recognized flag.

@YoshitakaMo
Owner

YoshitakaMo commented Feb 10, 2024

How about setting the CUDA_VISIBLE_DEVICES environment variable manually? For example,

CUDA_VISIBLE_DEVICES=0 colabfold_batch --amber --templates <input> <outputdir>
CUDA_VISIBLE_DEVICES=1 colabfold_batch --amber --templates <input> <outputdir>
...
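
If you want to launch several inputs at once from a single script, a minimal bash sketch could look like the following (the GPU indices 0 and 1 and the input/output directory names are only placeholders for illustration):

#!/bin/bash
# Launch one colabfold_batch job per GPU and run them in parallel.
CUDA_VISIBLE_DEVICES=0 colabfold_batch --amber --templates input_gpu0/ results_gpu0/ &
CUDA_VISIBLE_DEVICES=1 colabfold_batch --amber --templates input_gpu1/ results_gpu1/ &
wait  # return once both jobs have finished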

@sean-workman

Hi @YoshitakaMo, I did indeed end up setting specific devices as visible in the bash scripts I wrote. Thanks for maintaining this great resource!

@JasonIsaac-Lofty
Author

> How about setting the CUDA_VISIBLE_DEVICES environment variable manually? For example,
>
> CUDA_VISIBLE_DEVICES=0 colabfold_batch --amber --templates <input> <outputdir>
> CUDA_VISIBLE_DEVICES=1 colabfold_batch --amber --templates <input> <outputdir>
> ...

Thank you very much! That was really helpful!
