
Use GPU when invoking torchani #326

Closed
kexul opened this issue Nov 2, 2021 · 1 comment


kexul commented Nov 2, 2021

Dear MolSSI developers:
I found that my workstation is not using its GPUs when running computations with torchani. Are there any environment variables I need to set?

Here is the code I've used:

import qcengine as qcng
import qcelemental as qcel

mol = qcel.models.Molecule.from_data("""
O  0.0  0.000  -0.129
H  0.0 -1.494  1.027
H  0.0  1.494  1.027
""")
# The snippet above built `mol` but then passed an undefined `inp`;
# construct the missing AtomicInput here (ANI2x is an illustrative model choice).
inp = qcel.models.AtomicInput(
    molecule=mol,
    driver="energy",
    model={"method": "ANI2x"},
)
ret = qcng.compute(inp, "torchani")
ret

Here is the output of ret.provenance:

Provenance(creator='torchani', version='unknown', routine='torchani.builtin.aev_computer', wall_time=0.008759737014770508, cpu='Intel(R) Xeon(R) Platinum 8255C CPU @ 2.50GHz', username='root', hostname='VM-121-111-centos', qcengine_version='v0.20.1')
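For anyone hitting this later, a first sanity check (a sketch, not part of the original report) is to confirm that the PyTorch build underneath torchani can see a CUDA device at all; if it cannot, no environment variable will help. The import is guarded so the check degrades gracefully on machines without torch installed:

```python
import os

# Hypothetical diagnostic sketch: does the PyTorch that torchani relies on
# actually see a GPU? A CPU-only torch build reports no CUDA devices.
try:
    import torch
    has_cuda = torch.cuda.is_available()
    device_count = torch.cuda.device_count() if has_cuda else 0
except ImportError:
    has_cuda, device_count = False, 0

print(f"CUDA available: {has_cuda} (devices: {device_count})")
print(f"CUDA_VISIBLE_DEVICES = {os.environ.get('CUDA_VISIBLE_DEVICES', '<unset>')}")
```

If `has_cuda` is False on a machine that physically has GPUs, the likely cause is a CPU-only PyTorch install rather than anything in QCEngine itself.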
@kexul
Copy link
Contributor Author

kexul commented Nov 2, 2021

Context:
I'm using the https://github.com/openforcefield/bespoke-fit package to refit some force field parameters. When using torchani as the QC backend, it consumes a lot of memory and runs quite slowly, so I'm looking for a way to move the model to the GPU to accelerate the computation.
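One workaround worth sketching (my assumption, not something stated in the issue) is to call torchani directly on the GPU, bypassing QCEngine. `ANI2x` and `species_to_tensor` are part of torchani's public API; the import guard lets the sketch degrade on machines without torch or torchani installed:

```python
# Hypothetical sketch: run the ANI2x model on the GPU directly via torchani.
try:
    import torch
    import torchani
except ImportError:
    torch = torchani = None  # deps missing: sketch only, nothing to run

energies = None
if torchani is not None:
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model = torchani.models.ANI2x().to(device)

    # Same water geometry as in the report (Angstrom), with a batch dimension.
    coordinates = torch.tensor(
        [[[0.0, 0.000, -0.129],
          [0.0, -1.494, 1.027],
          [0.0, 1.494, 1.027]]],
        device=device,
    )
    species = model.species_to_tensor(["O", "H", "H"]).unsqueeze(0).to(device)
    energies = model((species, coordinates)).energies  # Hartree
```

This sidesteps the QCEngine harness entirely, so it is only useful if bespoke-fit can be pointed at a custom backend; otherwise the fix has to happen inside the torchani harness itself.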
