Global Optimizer for Clusters, Interfaces, and Adsorbates

GOCIA is a global optimization toolkit and a set of Python modules specialized for sampling supported clusters, restructured interfaces, and adsorbate configurations.
Copyright © 2020 Zisheng Zhang
Please CITE THIS PAPER if you use any part of this repo:
Zhang, Z.; Wei, Z.; Sautet, P.; Alexandrova, A. N., Hydrogen-induced Restructuring of a Cu(100) Electrode in Electroreduction Conditions. J. Am. Chem. Soc., 2022, doi:10.1021/jacs.2c06188.
[TOC]
Dependencies:
- Python 3.6 or later
- ASE and its dependencies
- natsort and LaTeX (for PDF report generation)
First, install your own Python environment, since HPCs usually do not give regular users write permission to the system Python path. To save disk space, it is recommended to install Miniconda:
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
sh Miniconda3-latest-Linux-x86_64.sh
Answer yes all the way through, and then source ~/.bashrc to activate the conda environment. You can run python in the terminal to check which Python version you are using.
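The short check below goes one step further: it verifies that the interpreter meets the version requirement and that the dependencies listed above are importable. This is only a convenience sketch, not part of GOCIA:

```python
# check_env.py -- quick sanity check of the Python environment (illustrative sketch)
import sys

# GOCIA requires Python 3.6 or later
assert sys.version_info >= (3, 6), "Python 3.6+ required, found " + sys.version.split()[0]

# The main dependencies should be importable
import ase
import natsort

print("Python :", sys.version.split()[0])
print("ASE    :", ase.__version__)
print("natsort:", natsort.__version__)
```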
If your machine has Git installed, simply clone the repo into your local directory by:
git clone https://github.com/zishengz/gocia.git
Alternatively, you can download and unzip the source code (note that the GitHub archive unpacks into a gocia-main directory):
wget https://github.com/zishengz/gocia/archive/refs/heads/main.zip
unzip main.zip
rm main.zip
mv gocia-main gocia
After fetching the gocia repo, add it to your PYTHONPATH by:
export PYTHONPATH=$PYTHONPATH:`pwd`/gocia
Remember to add this export line to your ~/.bashrc or your submission scripts, so that the GOCIA package is accessible by Python. Use the absolute path for this purpose (you can check it by running pwd in the Bash shell).
After these steps, run the following line to test the installation:
python -c 'import gocia'
If no error occurs, GOCIA is accessible from your Python environment!
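If you keep several copies or conda environments around, it can also help to print which copy of the package is actually picked up; this snippet is just a convenience, not part of GOCIA:

```python
# Confirm which gocia copy is found through PYTHONPATH
import gocia
print(gocia.__file__)
```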
If you installed via Git, update by pulling from the main branch inside the gocia directory:
cd xxx/gocia
git pull
Otherwise, you need to manually remove the old gocia directory and then download and unzip the source again.
The following sections assume the use of VASP for the local optimizations unless otherwise specified.
HPC in the file names below stands for the job scheduler on the cluster you use:
- slurm: CORI
- sge: Hoffman2
For the 3-step local optimization, the needed files include:
- INCAR-1, INCAR-2, INCAR-3: for low-, mid-, and high-precision DFT calculations, respectively.
- KPOINTS
- init-worker.py: The worker job that runs the 3-step local optimization and checks for unreasonable connectivity (a sketch of such a check is shown after the procedure below).
- HPC-vasp-init.sh: The shell script for submitting worker jobs. REMEMBER to replace the .bashrc path with yours.
- input.py: A data file that contains the pseudo-potential path and the VASP command.
Procedure:
- Replace the .bashrc path in HPC-vasp-init.sh with yours.
- Put the path to your pseudo-potentials and the VASP command into input.py.
- Give executable permission to the submission script: chmod +x HPC-vasp-init.sh
- Submit the job: ./HPC-vasp-init.sh xxx.vasp
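For orientation, a check for unreasonable connectivity can be implemented with ASE's neighbor lists. The sketch below only illustrates the idea and is not the actual logic in init-worker.py; the function name, cutoff scaling, and file name are assumptions:

```python
# connectivity_check.py -- illustrative sketch, not the actual init-worker.py code
from ase.io import read
from ase.neighborlist import NeighborList, natural_cutoffs

def has_isolated_atoms(atoms, cutoff_scale=1.2):
    """Return True if any atom has no neighbor within the scaled covalent cutoffs."""
    cutoffs = [cutoff_scale * c for c in natural_cutoffs(atoms)]
    nl = NeighborList(cutoffs, self_interaction=False, bothways=True)
    nl.update(atoms)
    for i in range(len(atoms)):
        neighbors, _ = nl.get_neighbors(i)
        if len(neighbors) == 0:
            return True  # atom i is detached from the rest of the structure
    return False

if __name__ == "__main__":
    atoms = read("CONTCAR")  # relaxed structure from the last optimization step
    if has_isolated_atoms(atoms):
        print("Unreasonable connectivity: at least one isolated atom found.")
```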
For the structural sampling that generates the initial population, the needed files include:
- substrate.vasp: The VASP-format structure file containing the substrate slab, with constraints on the atoms to be kept fixed (see the sketch after this section for one way to prepare it with ASE).
- xxxSample.py: Choose the structural sampling script that suits your system best.
Run the sampling by:
python xxxSample.py substrate.vasp
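As an illustration of how such a constrained substrate file can be prepared with ASE (the slab composition, cell size, and number of frozen layers below are arbitrary placeholders, not a GOCIA requirement):

```python
# make_substrate.py -- illustrative sketch for preparing substrate.vasp with ASE
from ase.build import fcc100
from ase.constraints import FixAtoms
from ase.io import write

# Example: a 4-layer Cu(100) slab with vacuum; adjust to your own system
slab = fcc100("Cu", size=(3, 3, 4), vacuum=10.0)

# Freeze the bottom two layers so only the surface region relaxes during sampling
layer_z = sorted(set(round(z, 3) for z in slab.positions[:, 2]))
frozen = [round(z, 3) in layer_z[:2] for z in slab.positions[:, 2]]
slab.set_constraint(FixAtoms(mask=frozen))

# The constraints are written out as Selective dynamics flags in the VASP file
write("substrate.vasp", slab, format="vasp", direct=True)
```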
For the local optimization of the initial population, the needed files include:
- INCAR-1, INCAR-2, INCAR-3
- KPOINTS
- init-worker.py
- HPC-vasp-init.sh
- input.py
- db2vasp.py: Script for converting ASE database files into systematically named VASP-format files.
- collectVASP.py: Script for collecting the VASP results into an ASE database file and filtering out duplicates (a simplified sketch of both conversions is shown after the procedure below).
Procedure (steps 1-3 are the same as in the 3-step optimization section):
- Replace the .bashrc path in HPC-vasp-init.sh with yours.
- Put the path to your pseudo-potentials and the VASP command into input.py.
- Give executable permission to the submission script: chmod +x HPC-vasp-init.sh
- Convert the .db file into VASP-format files: python db2vasp.py xxx.db
- Submit in batch by:
for i in s0*vasp; do ./HPC-vasp-init.sh $i; done
- After all jobs finish, collect the results: python collectVASP.py
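For orientation, the two helper scripts do roughly the following; this is a simplified sketch with ASE, and the file naming scheme, directory layout, and duplicate criterion are assumptions rather than the actual GOCIA implementation:

```python
# db_vasp_roundtrip.py -- simplified sketch of what db2vasp.py / collectVASP.py do
from ase.db import connect
from ase.io import read, write

def db_to_vasp(db_name="xxx.db"):
    """Write each structure in the database to a systematically named VASP file."""
    db = connect(db_name)
    for i, row in enumerate(db.select()):
        write("s%06d.vasp" % i, row.toatoms(), format="vasp", direct=True)

def collect_results(run_dirs, out_name="collected.db"):
    """Collect relaxed structures and energies into a database, skipping near-duplicates."""
    out_db = connect(out_name)
    seen_energies = []
    for d in run_dirs:
        atoms = read(d + "/OUTCAR")   # last ionic step, with energy and forces
        energy = atoms.get_potential_energy()
        # crude duplicate filter: skip structures within 1 meV of an existing entry
        if any(abs(energy - e) < 1e-3 for e in seen_energies):
            continue
        seen_energies.append(energy)
        out_db.write(atoms)
```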
For the GCGA sampling, the needed files include:
- substrate.vasp
- INCAR-1, INCAR-2, INCAR-3
- KPOINTS
- ga-HPC.py: The master job that runs locally (on the login node, if permitted) and controls the job submissions.
- ga-worker.py: The worker job that runs the local optimizations on the compute nodes and updates the population.
- HPC-vasp.sh: The shell script for submitting GCGA worker jobs.
- input.py: A data file that contains the information needed for the GCGA sampling (the role of the chemical potentials it holds is illustrated after the procedure below).
- gcga.db: The ASE database file containing the initial population, obtained from the previous step.
Procedure:
- Replace the .bashrc path in HPC-vasp.sh with yours.
- Put the path to your pseudo-potentials, the VASP command, the chemical potentials, and other GCGA parameters into input.py.
- Give executable permission to the submission script: chmod +x HPC-vasp.sh
- Copy the database file obtained from the initial-population local optimization into gcga.db
- Run the GCGA master on the login node by:
nohup python -u ga-HPC.py &
- To stop the GCGA run: touch STOP
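To give a sense of what the chemical potentials in input.py are used for: in a grand canonical GA, candidates with different numbers of atoms are compared through a grand-canonical formation energy rather than the raw DFT energy. The sketch below only illustrates this general idea; the function name, variable names, and numbers are placeholders and do not reflect GOCIA's internal API:

```python
# grand_canonical_energy.py -- illustrative sketch of a grand-canonical formation energy
# Omega = E(slab + adsorbates) - E(clean slab) - sum_i n_i * mu_i

def grand_canonical_energy(e_total, e_slab, composition, chem_pots):
    """composition, e.g. {'H': 4}; chem_pots, e.g. {'H': -3.4} (eV per atom, illustrative)."""
    return e_total - e_slab - sum(n * chem_pots[el] for el, n in composition.items())

# Example with arbitrary numbers:
omega = grand_canonical_energy(
    e_total=-350.2,          # DFT energy of the slab with adsorbates
    e_slab=-330.0,           # DFT energy of the clean substrate
    composition={"H": 4},    # four extra H atoms relative to the clean slab
    chem_pots={"H": -3.4},   # H chemical potential under the chosen conditions
)
print("Grand-canonical formation energy: %.2f eV" % omega)
```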
Other parts are under construction...