# Open Source Molecular Dynamics

# Connecting to the HPC cluster

This is a short tutorial on how to connect to the HPC installations of Newcastle University from the available Windows machines.

### Using PuTTY

PuTTY is an SSH client for Windows that allows you to establish a secure connection to remote UNIX machines, such as the HPC cluster here at Newcastle University. Look for it in the search bar and open the app.

In the Host Name (or IP address) field, write the following:

```
nconexs##@rocket.hpc.ncl.ac.uk
```

where ## stands for your CONEXS id number. Make sure the SSH box is ticked and click Open. A black window (bash terminal) will open and your password will be required to complete the connection.
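If you are connecting from your own machine rather than the lab PCs, the same connection can be made with a standard OpenSSH client; a minimal sketch (as above, ## stands for your CONEXS id number):

```shell
# Open an SSH session to the Rocket cluster from any machine with OpenSSH.
# Replace ## with your CONEXS id number; you will be prompted for your password.
ssh nconexs##@rocket.hpc.ncl.ac.uk
```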

If you are familiar with UNIX/Linux style terminals, using PuTTY will be enough. If not, you might want a more graphical representation of your file system as well. This is possible using the WinSCP software.

### Using WinSCP

WinSCP is a program that allows you to browse the file system of a remote machine graphically. It is already installed on the available Windows machines. Look for it in the search bar and open the app.

Keep the default SFTP file protocol and type rocket.hpc.ncl.ac.uk in the Host name field, nconexs## in the User name field and your password in the corresponding field. Hit Login and browse!

## Going to the proper directory

In your PuTTY terminal, type the following:

```
cd /nobackup/nconexs##/
```

cd is the bash command that allows you to move from one directory to another. If you then type the ls command, the files contained in this directory will be listed.
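These two commands can be tried anywhere; a minimal sketch using a throwaway directory (the path /tmp/demo_dir is just an example, not part of the cluster setup):

```shell
# Create a scratch directory containing one file (illustrative paths).
mkdir -p /tmp/demo_dir
echo "data" > /tmp/demo_dir/results.txt

cd /tmp/demo_dir   # move into the directory
ls                 # list its contents: results.txt
```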

To browse to the same directory in WinSCP, use the top right field (currently displaying nconexs##) and first go to / <root>, then nobackup and finally your nconexs## directory. You should see the same files as in the PuTTY terminal.

You can edit files, create new files and directories, etc. either directly in the PuTTY terminal or using WinSCP.

## Launching a CP2K calculation

The cp2k.sh file contains a set of (machine) instructions that allows you to run a job in parallel on the HPC cluster. You can open and edit this file directly in the PuTTY terminal or through WinSCP.

cp2k.sh

```shell
#!/bin/bash

#SBATCH --job-name=test

myaccount=tpcss
export SLURM_ACCOUNT=$myaccount
export SBATCH_ACCOUNT=$myaccount
export SALLOC_ACCOUNT=$myaccount

myres=1
export SLURM_RESERVATION=${myaccount}_${myres}
export SBATCH_RESERVATION=${myaccount}_${myres}
export SALLOC_RESERVATION=${myaccount}_${myres}

module load CP2K/6.1-foss-2017b
export OMP_NUM_THREADS=1
mpirun -np $SLURM_NTASKS --bind-to core cp2k.popt /nobackup/nconexs03/test.inp > /nobackup/nconexs03/test.out
```



The first line of this file marks it as a bash script. The lines starting with #SBATCH indicate the name of the job and how many CPUs will be used for this calculation. If you were to run on 8 cores, you would need to change the ntasks and tasks-per-node values accordingly.
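As a sketch, a header requesting 8 cores on a single node might read as follows (the option names follow the text above; the values shown are illustrative, not taken from the original script):

```shell
#!/bin/bash
#SBATCH --job-name=test
#SBATCH --ntasks=8
#SBATCH --tasks-per-node=8
```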

The next block sets account and reservation variables for the queueing system; you do not need to understand these and should not modify them.

The module load line indicates which (available) version of CP2K to use and the final line is the actual launch command.

Look at the structure of that last line. /nobackup/nconexs03/test.inp is the path to your CP2K input file and /nobackup/nconexs03/test.out that of your output file. You will have to edit those paths for your own calculation. Note that in principle, you can use either absolute paths or relative paths.
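The difference between the two path styles can be tried with any command; a small sketch using cat and a throwaway file in /tmp (the file name is illustrative):

```shell
# Create a file, then read it back via an absolute and a relative path.
cd /tmp
echo "hello" > demo.txt
cat /tmp/demo.txt   # absolute path: valid from any working directory
cat demo.txt        # relative path: valid only from within /tmp
```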

To launch the calculation, you have to run the cp2k.sh file. This is done via the sbatch command (to be typed in the PuTTY terminal):

```
sbatch cp2k.sh
```

Note that this specific example will fail, as you do not have access to the nconexs03 directory.

### Keeping track of your job

There are a few very useful (terminal) commands that you can use to keep track of your calculations. The first of these is

```
squeue -u nconexs##
```

which informs you about the state of your calculation (whether it is in the queue, or if it is currently running, and also the job id).

If you want to kill your job, you can do so using the scancel command:

```
scancel <jobid>
```

where the job id is obtained with squeue.
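A typical sequence is to look up the job id with squeue and then cancel it; a sketch (the id 123456 is hypothetical):

```shell
squeue -u nconexs##   # note the number in the JOBID column
scancel 123456        # replace 123456 with your actual job id
```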

Finally, you can also follow the state of your calculation by checking the output file as it is being written, using the tail command:

```
tail -f path_to_your_output_file
```

This command can be exited by pressing Ctrl+C.
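The behaviour of tail can be previewed on any growing file; a small sketch that builds a mock output file and prints its last lines (tail -f would show the same lines but keep following the file):

```shell
# Build a mock output file line by line, then inspect its tail.
rm -f /tmp/run.out
for i in 1 2 3; do echo "MD step $i" >> /tmp/run.out; done
tail -n 2 /tmp/run.out   # prints the last two lines of the file
```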