To browse to the same file in WinSCP, use the top right field (currently displaying `nconexs##`).

**You can edit files, create new files and directories directly through WinSCP.**
===== Launching a CP2K calculation =====
The `cp2k.sh` file contains a set of (machine) instructions that allows you to run a job in parallel on the HPC cluster. You can open and edit this file directly in the PuTTY terminal or through WinSCP.
<code bash cp2k.sh>
#!/bin/bash

#SBATCH --ntasks=6
#SBATCH --tasks-per-node=6
#SBATCH --job-name=test

myaccount=tpcss
export SLURM_ACCOUNT=$myaccount
export SBATCH_ACCOUNT=$myaccount
export SALLOC_ACCOUNT=$myaccount
myres=1
export SLURM_RESERVATION=${myaccount}_${myres}
export SBATCH_RESERVATION=${myaccount}_${myres}
export SALLOC_RESERVATION=${myaccount}_${myres}

module load CP2K/
export OMP_NUM_THREADS=1

mpirun -np $SLURM_NTASKS --bind-to core cp2k.popt /
</code>

The first line of this file marks it as a bash script. The next three lines (starting with `#SBATCH`) indicate how many CPUs will be used for this calculation and set its name. If you were to run on 8 cores, you would need to change `ntasks` and `tasks-per-node` accordingly.
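
As a sketch, the same header adjusted for an 8-core run on a single node would read (the job name is illustrative):

<code bash>
#SBATCH --ntasks=8
#SBATCH --tasks-per-node=8
#SBATCH --job-name=test
</code>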

The next block sets a number of account and reservation variables for the queue system; you do not need to understand these and should not modify them.
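
The reservation name is simply the account name and the reservation number joined by an underscore; you can check this expansion in any shell (a small sketch, reusing the values from the script):

<code bash>
# Build the reservation name the same way cp2k.sh does
myaccount=tpcss
myres=1
reservation=${myaccount}_${myres}
echo "$reservation"   # prints tpcss_1
</code>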

The `module load` line indicates which (available) version of CP2K to use, and the final line is the actual launch command.

Look at the structure of that last line: `mpirun` runs the `cp2k.popt` executable on `$SLURM_NTASKS` cores, with the path to the input file as its argument.

To launch the calculation, submit the script to the queue with the `sbatch` command:

<code>
sbatch cp2k.sh
</code>

Note that this specific example will fail, as you do not have access to the `nconexs03` directory.
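
If you want to script your submissions, `sbatch --parsable` makes `sbatch` print only the job id, which you can store for later use (for example to cancel the job). A minimal sketch, with a guard so it also runs on machines without SLURM:

<code bash>
# Submit the job and capture its id; fall back to a placeholder id when
# sbatch is not available, so this sketch runs anywhere
if command -v sbatch >/dev/null 2>&1; then
    jobid=$(sbatch --parsable cp2k.sh)
else
    jobid=0   # placeholder for illustration only
fi
echo "Submitted job $jobid"
</code>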

==== Keeping track of your job ====

There are a few very useful (terminal) commands that you can use to keep track of your calculations. The first of these is `squeue`:

<code>
squeue -u nconexs##
</code>

which informs you about the state of your calculation (whether it is in the queue or currently running) and reports its job id.

If you want to kill your job, you can do so using the `scancel` command:

<code>
scancel jobid
</code>

where `jobid` is the id reported by `squeue`.

Finally, you can also follow the state of your calculation by checking the output file as it is written, using the `tail` command:

<code>
tail -f path_to_your_output_file
</code>

This command can be exited by pressing `Ctrl+c`.
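
The behaviour of `tail` is easy to try locally; a small sketch using a dummy file (`dummy.out` is a made-up name standing in for a real CP2K output file):

<code bash>
# Write a dummy "output file" and show only its last three lines
printf 'line %s\n' 1 2 3 4 5 6 7 > dummy.out
tail -n 3 dummy.out   # prints line 5, line 6 and line 7
</code>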
exercises/2019_conexs_newcastle/ex0.txt · Last modified: 2020/08/21 10:15 by 127.0.0.1