Hogan Lab. Minnesota Supercomputer Institute (MSI) usage


SSH connection

Open a terminal (Linux) or command prompt (Windows) and connect to MSI by typing an ssh command like the following:
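A minimal sketch, assuming login.msi.umn.edu as MSI's general login host:

ssh username@login.msi.umn.edu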

where username is your UMN Internet ID. The system requires two-factor authentication (your UMN password and Duo). After authentication, you will see this login screen:

Success. Logging you in...
Last failed login: Tue Sep 20 09:14:30 CDT 2022 from me-u-me-pcl-18.me.umn.edu on ssh:notty
There were 2 failed login attempts since the last successful login.
Last login: Thu Sep 15 08:43:38 2022 from 10.100.0.179
-------------------------------------------------------------------------------
             University of Minnesota Supercomputing Institute
                                 Mesabi
                         HP Haswell Linux Cluster
-------------------------------------------------------------------------------
For assistance please contact us at https://www.msi.umn.edu/support/help.html
help@msi.umn.edu, or (612)626-0802.
-------------------------------------------------------------------------------
Home directories are snapshot protected. If you accidentally delete a file in
your home directory, type "cd .snapshot" then "ls -lt" to list the snapshots
available in order from most recent to oldest.

January 6, 2021: Slurm is now the scheduler for all nodes.
-------------------------------------------------------------------------------
tamad005@ln0004 [~] %



Job submission

Submission script

Do NOT run programs directly on the terminal; instead, create a job script and submit it. You can see more details about submission scripts here. A simple example is shown below:

#!/bin/bash
## Lines starting with #SBATCH request resources
#SBATCH --time=20:00:00 # set maximum calculation (wall-clock) time
#SBATCH --ntasks=5      # set number of cores (processors)
#SBATCH --mem=2gb       # set limit of memory (RAM) usage

module load intel       # load Intel compiler module
module load ompi        # load OpenMPI module

icpc -O3 -o run.out src/*cpp -std=c++11   # compile src/*cpp and create run.out
mpirun -n 5 ./run.out                     # run ./run.out with 5 cores (parallel)

The first few lines, starting with #SBATCH, set the resources used by this job. The second block (module load *) loads the modules the job needs. The third block contains the main commands (what the job actually does).

Job submission & related commands

- Submit a job script:

sbatch filename

- Check job status:

squeue --me

You will see output like this:

tamad005@ln0006 [~/TiO2/0.6nm1.6nm_5] % squeue --me
             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
         150273877   agsmall   run.sh tamad005  R   21:45:16      1 acn120
         150273874   agsmall   run.sh tamad005  R   21:45:47      1 acn97
         150273875   agsmall   run.sh tamad005  R   21:45:47      1 acn38
         150273876   agsmall   run.sh tamad005  R   21:45:47      1 acn38
         150272959   agsmall   run.sh tamad005  R   21:58:45      1 acn84
         150272960   agsmall   run.sh tamad005  R   21:58:45      1 acn172
         150272961   agsmall   run.sh tamad005  R   21:58:45      1 acn172
         150272962   agsmall   run.sh tamad005  R   21:58:45      1 acn172
         150272954   agsmall   run.sh tamad005  R   21:58:46      1 acn08
         150272955   agsmall   run.sh tamad005  R   21:58:46      1 acn08
         150272956   agsmall   run.sh tamad005  R   21:58:46      1 acn08
         150272957   agsmall   run.sh tamad005  R   21:58:46      1 acn83
         150272958   agsmall   run.sh tamad005  R   21:58:46      1 acn84
         150272780   agsmall   run.sh tamad005  R   22:06:15      1 acn109
         150272781   agsmall   run.sh tamad005  R   22:06:15      1 acn21
         150272782   agsmall   run.sh tamad005  R   22:06:15      1 acn21
         150272783   agsmall   run.sh tamad005  R   22:06:15      1 acn21
         150272784   agsmall   run.sh tamad005  R   22:06:15      1 acn08
         150272774   agsmall   run.sh tamad005  R   22:06:16      1 acn130
         150272775   agsmall   run.sh tamad005  R   22:06:16      1 acn130
         150272777   agsmall   run.sh tamad005  R   22:06:16      1 acn43
         150272778   agsmall   run.sh tamad005  R   22:06:16      1 acn43
         150272779   agsmall   run.sh tamad005  R   22:06:16      1 acn43
         150272727   agsmall   run.sh tamad005  R   22:09:25      1 acn149
         ...
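- Cancel a job

An addition to the original list: a queued or running job can be cancelled with the standard Slurm command scancel, where the job ID comes from the JOBID column of the squeue output above:

scancel 150273877   # job ID taken from the JOBID column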

- Check storage usage

You can check your storage usage by:

groupquota -u

The -u option means your own usage; if you remove it, the total group storage usage is displayed. Below is an example where the user consumes 391.79 GB of storage (11.3% of the hogancj group quota):

Quota for user 'tamad005' in group 'hogancj'
------------------------
BYTES        |          
Usage        | 391.79 GB
Quota        | 3.48 TB  
Percent used | 11.3 %   
------------------------
FILES        |          
Usage        | 233,589  
Quota        | 5,000,000
Percent used | 4.7 %    



File transfer

1. WinSCP (Windows)

You can find instructions here.

2. FileZilla (Linux)

You can find instructions here.

3. SCP

Use the scp command just like the UNIX cp command:

scp username@login.msi.umn.edu:address1 address2
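For example (a sketch: the hostname login.msi.umn.edu and the file names are assumptions):

scp username@login.msi.umn.edu:~/results/output.dat .   # copy a remote file to the current local directory
scp input.data username@login.msi.umn.edu:~/            # copy a local file to your MSI home directory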



LAMMPS

1. Load module

module load lammps

2-1. Run with the MSI-provided executable

Serial run:

lmp_intel_cpu_intelmpi -in inputFileName

Parallel run (substitute nCPU with your number of CPUs):

mpirun -n nCPU lmp_intel_cpu_intelmpi -in inputFileName
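In practice, these commands go into a submission script. A minimal sketch, assuming 8 cores and an input file named in.melt (both are placeholders):

#!/bin/bash
#SBATCH --time=20:00:00
#SBATCH --ntasks=8
#SBATCH --mem=4gb

module load lammps                              # load the LAMMPS module
mpirun -n 8 lmp_intel_cpu_intelmpi -in in.melt  # parallel run on 8 cores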

2-2. Build source code & run

Transfer the LAMMPS src directory, load the lammps module, and build it, as sketched below.
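A minimal sketch, assuming the source tree was transferred to ~/lammps and the traditional make build is used (the path and the make target are assumptions):

module load lammps      # load the module for the build environment
cd ~/lammps/src
make mpi -j 4           # build the MPI executable lmp_mpi with 4 parallel jobs
mpirun -n 4 ./lmp_mpi -in inputFileName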




OVITO

1. Make a virtual Anaconda environment

You may need to create your own Anaconda environment (named env_ovito in this example) to install the ovito module:

module load conda
conda create --name env_ovito --clone base

2. Switch the environment and install the ovito module

Switch from base to env_ovito and install the ovito module:

conda activate env_ovito
conda install --strict-channel-priority -c https://conda.ovito.org -c conda-forge ovito

Since the installation process takes about 1 hr, it should be performed through a submission script such as OVITO_building/step2.sh (sketched below).
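A minimal sketch of what such a script could contain (the resource values are assumptions; the repository's step2.sh is the authoritative version):

#!/bin/bash
#SBATCH --time=2:00:00
#SBATCH --ntasks=1
#SBATCH --mem=4gb

module load conda
eval "$(conda shell.bash hook)"   # make conda activate available in a batch script
conda activate env_ovito
conda install -y --strict-channel-priority -c https://conda.ovito.org -c conda-forge ovito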

3. Check the ovito module

Try importing the ovito module and printing its version:

(base) tamad005@ln0005 [~/vaporUptake] % conda activate env_ovito
(env_ovito) tamad005@ln0005 [~/vaporUptake] % python
Python 3.10.6 | packaged by conda-forge | (main, Aug 22 2022, 20:35:26) [GCC 10.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import ovito
>>> ovito.version
(3, 7, 10)
>>>

If it is installed properly, it returns the ovito version (3.7.10 in this example).




OpenFOAM

You can find MSI's instructions here, but they are not very useful, and some calculations cannot be run that way because the installed OpenFOAM version is old. We recommend building your own copy from source on the MSI computer and using it, following the instructions below.

Build source code

Step 1: Download OpenFOAM

  • You can obtain openfoam-OpenFOAM-v2012.tar.gz from the Hogan Lab Google Drive or from this link.
  • A newer version (OpenFOAM-v2206) could not be compiled, possibly due to an MSI compiler issue (checked 08/19/2022). Versions between v2012 and v2206, as well as the OpenFOAM Foundation releases, may work (not checked).

Step 2: Transfer downloaded file to MSI

  • Create an OpenFOAM directory in your MSI home directory: mkdir ~/OpenFOAM.
  • Transfer the downloaded file to the created directory (~/OpenFOAM/) in its compressed form (.tar.gz).
  • Check that you have the file ~/OpenFOAM/openfoam-OpenFOAM-v2012.tar.gz.

Step 3: Extract file

  • Transfer the OpenFOAM_MSIscript/Extract.sh file to ~/OpenFOAM/.
  • Submit that script: sbatch Extract.sh
  • This script extracts the archive via tar -xvf openfoam-OpenFOAM-v2012.tar.gz (a sketch of the script is shown after this list).
  • Check that you have the directory ~/OpenFOAM/openfoam-OpenFOAM-v2012.
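A minimal sketch of Extract.sh (the resource values are assumptions; the repository's script is the authoritative version):

#!/bin/bash
#SBATCH --time=1:00:00
#SBATCH --ntasks=1
#SBATCH --mem=2gb

cd ~/OpenFOAM
tar -xvf openfoam-OpenFOAM-v2012.tar.gz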

Step 4: Build

  • Transfer OpenFOAM_MSIscript/build.sh to ~/OpenFOAM/openfoam-OpenFOAM-v2012/.
  • Submit the script: sbatch build.sh
  • This script builds OpenFOAM (10 cores in parallel) via:
module load ompi
module load flex
source ~/OpenFOAM/openfoam-OpenFOAM-v2012/etc/bashrc
./Allwmake -j 10
  • It may take a while (>1 hr). A sketch of the full script is shown below.
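A minimal sketch of build.sh wrapping those commands (the resource values are assumptions; the repository's script is the authoritative version):

#!/bin/bash
#SBATCH --time=8:00:00
#SBATCH --ntasks=10
#SBATCH --mem=8gb

module load ompi
module load flex
source ~/OpenFOAM/openfoam-OpenFOAM-v2012/etc/bashrc
./Allwmake -j 10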

Step 5: Edit bashrc (Optional)

  • OpenFOAM simulations run even if this step is skipped, but this step makes the simulation commands simpler, reducing the chance of submission mistakes.
  • On your MSI terminal, add the two lines module load ompi and source ~/OpenFOAM/openfoam-OpenFOAM-v2012/etc/bashrc to your ~/.bashrc file:
echo "module load ompi" >> ~/.bashrc
echo "source ~/OpenFOAM/openfoam-OpenFOAM-v2012/etc/bashrc" >> ~/.bashrc
  • Check that the path is correct by typing which simpleFoam. It should return ~/OpenFOAM/openfoam-OpenFOAM-v2012/platforms/linux64GccDPInt32Opt/bin/simpleFoam when everything is set correctly.

Run simulation

If you did Step 5

All solvers (e.g., icoFoam, simpleFoam, rhoSimpleFoam, etc.) can be used via the same commands as on your local PC. This is a test case, the cavity flow from the tutorials:

#!/bin/bash
#SBATCH --time=20:00:00
#SBATCH --ntasks=1
#SBATCH --mem=2gb

cp -r ~/OpenFOAM/openfoam-OpenFOAM-v2012/tutorials/incompressible/icoFoam/cavity/cavity ./
cd cavity
blockMesh
icoFoam

If you skipped Step 5

You need to load the ompi module and source the OpenFOAM bashrc file in the submission script (you can do this on the terminal before submission, but it is safer to put it in the script, just in case). This is the same cavity-flow test case (just two lines added to the script above):

#!/bin/bash
#SBATCH --time=20:00:00
#SBATCH --ntasks=1
#SBATCH --mem=2gb

module load ompi
source ~/OpenFOAM/openfoam-OpenFOAM-v2012/etc/bashrc

cp -r ~/OpenFOAM/openfoam-OpenFOAM-v2012/tutorials/incompressible/icoFoam/cavity/cavity ./
cd cavity
blockMesh
icoFoam
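Parallel runs should also work through the standard OpenFOAM workflow (an addition beyond the original steps; it assumes a system/decomposeParDict with 4 subdomains already exists in the case directory):

decomposePar                    # split the case into 4 subdomains
mpirun -n 4 icoFoam -parallel   # run the solver on 4 cores (set --ntasks=4 in the script)
reconstructPar                  # merge the decomposed results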



Your own code

Only you know how to use it.

Author

tamadate
