High performance computing (HPC) cluster (sw.pharma.hr) acquired through the FarmInova project (#KK.01.1.1.02.0021). It consists of one worker node (r2d2) with both CPU and GPU cores and one management node (C-3po). Intended uses are quantum chemistry, molecular dynamics, machine learning, ligand docking, and biostatistical modelling.
Current administrator: Davor Sakic
For current usage: Ganglia
For current job status: phpqstat
r2d2
Boston SM2029GP-TR, 2U
2x Intel Xeon 6230N, 20core 2.3 GHz
384 GB ECC DDR4 2666 MHz
3x nVidia A100 40 GB GPU
2x 1.92 TB SATA SSD Enterprise
4x 2 TB SATA HDD Enterprise SFF
C-3po
HP ProLiant DL360e Gen8, 1U
Home directories, job scheduler, management
1x Intel Xeon E5-2430, 6core 2.2 GHz
48 GB ECC DDR3
3x 3 TB HDD MB3000GBKAC
xtb, CC BY-SA 4.0 licence
orca 5.0.2, Academic licence
Gaussian 16, University of Zagreb licence
Amber20, University of Zagreb licence
q-chem, single research group licence (Sakic)
BrianqC, single research group licence (Sakic)
The high performance computing cluster (sw.pharma.hr) is a common computing resource for researchers at the University of Zagreb Faculty of Pharmacy and Biochemistry and their partners. It is located at the University of Zagreb, Faculty of Pharmacy and Biochemistry, Ante Kovacica 1, 10000 Zagreb, Croatia. The official web address is sw.pharma.hr. This resource was acquired through the ESF-ERDF funded FarmInova project.
A registered user account is needed in order to use sw.pharma.hr. To apply for a user account, send an e-mail to the Administrator. If the registration is approved, a user account will be created and all relevant access details will be provided through direct e-mail communication within one working week.
Registration of a research project is also possible, as well as hosting the project website. For additional details, send an e-mail to the Administrator.
Installation of new software for all users is handled individually. Contact the Administrator for more information.
The address of the cluster access node is sw.pharma.hr, and it is accessed via the SSH protocol.
Note that to access sw.pharma.hr, you must connect from within the CARNET network. To connect from outside the CARNET network, use either a VPN or the keybella.srce.hr service.
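Assuming a registered account (the username below is a placeholder), a typical connection from a terminal looks like:

```shell
# Connect to the sw.pharma.hr access node over SSH
# (replace "username" with the account name provided by the Administrator)
ssh username@sw.pharma.hr
```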
If calculations made using sw.pharma.hr computing resource are published, cite using the following template: "This research was performed using the resources of computer cluster sw.pharma.hr, acquired through ESF-ERDF financed FarmInova project and based in University of Zagreb, Faculty of Pharmacy and Biochemistry."
After the article is published, it is mandatory to report it to the Administrator within 30 days, with either a DOI or a CROSBI article identification number. A short description will be published on this website in the Publications section.
Cluster sw.pharma.hr is meant to be used for RESEARCH purposes ONLY. Users are advised to use this resource with consideration for other users, without unnecessarily hoarding or wasting resources. Users are obligated to use the software installed on sw.pharma.hr in accordance with its respective licences.
This cluster uses the Sun Grid Engine (SGE) scheduler. User applications (hereafter: jobs that run using the SGE system) must be described by a start shell script (e.g. sh, bash). Within the start script, in addition to normal commands, SGE parameters are specified. The same parameters can also be specified outside the start script, when submitting a job.
Starting jobs is accomplished with the qsub command:
qsub SGE_parameters script_name
The start shell script has the following structure:
#!/bin/bash
#$ -SGE_parameter1 value
#$ -SGE_parameter2 value
command1
command2
The most common SGE_parameters include:
-N job_name # unique identifier of the job
-cwd # job is run in current directory (place of start shell script)
-o job_name.out # name of standard output file
-e job_name.err # name of standard error file
-j y|n # join standard output and error into one file, default is n
-pe parallel_environment NUMBER # number of processors
mpi # parallel environment for CPU jobs (up to 32 cores)
gpu # parallel environment for GPU jobs (1-3 GPUs available)
-q queue_name # name of the queue
all.q # queue for all CPU only jobs
gpu.0 # queue for the first graphic card
gpu.1 # queue for the second graphic card
gpu.2 # queue for the third graphic card
-l resource=value
cores=NUMBER # number of processor cores (must match the number requested with -pe)
memory=NUMBER # RAM in GB per processor core
scratch=NUMBER # scratch size in GB per processor core
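Putting these parameters together, a minimal start script could look like the sketch below (the job name, queue, and resource values are illustrative, not prescribed defaults):

```shell
#!/bin/bash
#$ -N example.job            # unique job name (illustrative)
#$ -cwd                      # run in the submission directory
#$ -o example.job.out        # standard output file
#$ -e example.job.err        # standard error file
#$ -pe mpi 4                 # request 4 CPU slots
#$ -q all.q                  # CPU-only queue
#$ -l memory=2               # 2 GB of RAM per core

echo "Job $JOB_NAME running on $(hostname) with $NSLOTS slots"
```

Because the #$ lines are shell comments, bash ignores them while SGE reads them as parameters; the script is then submitted with qsub.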
Some additional SGE_variables include:
$USER # name of the user
$TMPDIR # name of directory for temporary files
$JOB_ID # SGE job identification number (unique)
$SGE_O_WORKDIR # name of directory from which job was started
$JOB_NAME # name of the job
$QUEUE # name of the queue
$NSLOTS # number of processor cores
Setting of some SGE_variables is done through export:
export TMPDIR="/scratch-ssd/$TMPDIR"
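A common pattern is to stage files to the per-job scratch directory and copy the results back when the calculation finishes. The sketch below assumes the SSD scratch mount from the export line above; SCRATCH_BASE, input.file, and the calculation step are placeholders, not cluster-defined names:

```shell
#!/bin/bash
#$ -N scratch.demo
#$ -cwd
# SCRATCH_BASE stands in for the cluster's SSD scratch mount point
SCRATCH_BASE=${SCRATCH_BASE:-/scratch-ssd}
export TMPDIR="$SCRATCH_BASE/$TMPDIR"       # per-job scratch directory
mkdir -p "$TMPDIR"
cp "$SGE_O_WORKDIR/input.file" "$TMPDIR/"   # stage input to fast storage
cd "$TMPDIR"
# ... run the actual calculation on input.file here ...
cp -r "$TMPDIR"/* "$SGE_O_WORKDIR/"         # copy results back
rm -rf "$TMPDIR"                            # clean up scratch space
```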
After job submission using the qsub command, a message will appear: Your job JobID ("job_name") has been submitted. JobID is the unique identification number of the job. To monitor jobs use the qstat command:
qstat options
-s [r|p|s|h] # filter jobs by status: r - running, p - pending, s - suspended, h - on hold
-j [job_ID] # detailed information about the job
-f # information on the load on the nodes and the queue on the node
-F # detailed information of the nodes
-u user # information about jobs of this user (use * for all users)
For a quick look at the hosts and their current load use the qhost command. To put a job on hold use the qhold command:
qhold [job_ID]
To resume the job use:
qrls [job_ID]
For deleting jobs use the qdel command:
qdel [job_ID] # delete job using [job_ID]
qdel [job_name] # delete job using [job_name]
qdel -u user # delete all jobs of user
qdel -f [job_ID] # force delete job if this job is stuck
The qacct command is used to retrieve information about completed jobs:
qacct -o user # all finished jobs of this user
qacct -j [job_ID] # information about specific job
More information can be found in the Grid Engine documentation. Additionally, use the man qsub command to see the complete manual for submitting jobs.
test.xtb.xyz
3
O 0.00000 0.00000 0.11779
H 0.00000 0.75545 -0.47116
H 0.00000 -0.75545 -0.47116
test.xtb.script
#!/bin/bash
#$ -N test.xtb
#$ -l memory=8
#$ -cwd
#$ -pe mpi 4
#$ -o test.xtb.out
#$ -e test.xtb.err
module load xtb/6.4.1
export OMP_STACKSIZE=8G
export OMP_NUM_THREADS=$NSLOTS,4
export OMP_MAX_ACTIVE_LEVELS=1
xtb test.xtb.xyz --ohess
qsub test.xtb.script
test.orca.inp
!opt freq b3lyp 6-31g(d)
%pal nprocs 8 end
%maxcore 16000
* XYZ 0 1
O 0.00000 0.00000 0.11779
H 0.00000 0.75545 -0.47116
H 0.00000 -0.75545 -0.47116
*
test.orca.script
#!/bin/sh
#$ -N test.orca
#$ -o test.orca.err
#$ -j y
#$ -l memory=16
#$ -pe mpi 8
#$ -cwd
module load orca/5.0.3
run-orca-isabella.sh test.orca.inp > test.orca.out
qsub test.orca.script
test.g16.com
%nproc=8
%mem=16gb
%chk=test.g16.chk
# opt freq b3lyp/6-31g(d)
test file title
0 1
O 0.00000 0.00000 0.11779
H 0.00000 0.75545 -0.47116
H 0.00000 -0.75545 -0.47116
test.g16.script
#!/bin/bash
#$ -cwd
#$ -l cores=8
#$ -l memory=2
export PATH=/apps/nbo6/bin:\
/usr/lib64/mvapich2/bin:\
/opt/sge/bin:/opt/sge/bin/lx-amd64:\
/usr/local/cuda-5.5/bin:/usr/local/bin:\
/bin:/usr/bin:/usr/local/sbin:\
/usr/sbin:/sbin:/opt/puppetlabs/bin:
dog16 test.g16
qsub test.g16.script
test.qchem.inp
$molecule
0 1
O 0.00000 0.00000 0.11779
H 0.00000 0.75545 -0.47116
H 0.00000 -0.75545 -0.47116
$end
$rem
JOBTYPE opt
METHOD b3lyp
BASIS 6-31G*
$end
@@@
$molecule
read
$end
$rem
JOBTYPE freq
METHOD b3lyp
BASIS 6-31G*
$end
test.qchem.script
#!/bin/bash
#$ -cwd
#$ -l cores=8
#$ -l memory=4
#$ -pe gpu 1
#$ -N test.qchem
source /usr/local/qchem/qcenv.sh
cuda-wrapper.sh qchem -nt 16 -gpu test.qchem.inp test.qchem.out
qsub test.qchem.script