Contact:

Dr. Achim Gelessus
agelessus@constructor.university
Phone: +49-421-200-4623

 

 

Computational Laboratory for Analysis, Modeling, and Visualization (CLAMV)

 


HPC Queuing System


We use the Slurm queuing system. The most important Slurm commands for users are listed here (a short usage sketch follows the list):
  • sinfo
    Show Slurm partitions.
  • sinfo -N
    Show the computing nodes.
  • sbatch xxx
    Submit job file xxx.
  • squeue
    Show queued and running jobs.
  • scancel JobID
    Remove the job with the given JobID from the queuing system.
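A minimal sketch of a typical session, assuming one of the example job files described below has been copied to the working directory; the JobID shown is a placeholder:

  sinfo                # list partitions and their state
  sbatch job.8core     # submit the 8-core example job file
  squeue -u $USER      # show only your own queued and running jobs
  scancel 12345        # cancel the job with JobID 12345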



Example Slurm job files can be found on the login nodes under /usr/local/etc/HPC/Slurm; a minimal sketch of one such file follows the table.

  • job.1core
    Job using 1 processor core (serial job) and no GPU. Use the HPC computing nodes for serial jobs only if you have a large number of concurrent serial jobs; for a single serial job or a small number of them, use alternatives such as the CLAMV Teaching Lab computer pool or local workstations.
  • job.8core
    Job using 8 processor cores and no GPU. Can be used for any OpenMP-parallelized program.
  • job.1gpu8cores
    Gromacs job requesting 8 processor cores and one GPU. Can be used for any GPU program.
  • job.2gpu8cores
    Gromacs job requesting 8 processor cores and two GPUs. Can be used for any GPU program.
  • job.mpi
    Parallel HelloWorld program requesting 16 tasks.
  • job.array1core
    Generates an array of 12 serial jobs and assigns a job parameter to each program instance.
  • job.orca
    Parallel Orca job requesting 16 processor cores.
  • job.qe
    Parallel Quantum Espresso job requesting 16 processor cores, with the scratch directory on the local disk of the computing node.
  • job.localscratch
    Example of how to use the local scratch disk of the computing nodes to reduce network traffic during job run time.
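For illustration, here is a minimal sketch of an 8-core OpenMP job file in the style of job.8core. The job name, time limit, and program name are placeholders; the maintained examples in /usr/local/etc/HPC/Slurm remain the authoritative reference:

  #!/bin/bash
  #SBATCH --job-name=omp8          # job name shown by squeue (placeholder)
  #SBATCH --ntasks=1               # one task (one process)
  #SBATCH --cpus-per-task=8        # 8 cores for the OpenMP threads
  #SBATCH --time=01:00:00          # placeholder wall-time limit

  export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK   # match thread count to the allocation
  ./my_openmp_program                           # placeholder program name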


Configuration of the Slurm installation is still in progress, and the submission files are regularly adapted to new settings. Please check the example scripts regularly and always choose settings according to the latest version.
Due to the processor architecture, all parallel applications should request a multiple of 8 cores, i.e. requests of 8, 16, 24, 32, ... cores are recommended (see the MPI sketch below).
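A minimal sketch of an MPI job requesting 16 tasks (two groups of 8 cores), in the style of job.mpi; the job name, time limit, and program name are again placeholders:

  #!/bin/bash
  #SBATCH --job-name=mpi16         # job name shown by squeue (placeholder)
  #SBATCH --ntasks=16              # 16 MPI ranks, a multiple of 8
  #SBATCH --time=01:00:00          # placeholder wall-time limit

  srun ./my_mpi_program            # srun starts one process per task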
For the submission of Gaussian16 jobs it is recommended to use the command StartGauss16, which becomes available after loading the Gauss16 Environment Module. You must be a member of the group gaussian to run Gaussian16 jobs.
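A minimal sketch of that workflow; the exact module name and the arguments StartGauss16 expects are assumptions here, so check "module avail" and the local documentation on the login nodes:

  module load Gauss16       # load the Gauss16 Environment Module (module name assumed)
  StartGauss16 input.com    # submit the job; the input-file argument is a guess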




HPC Introduction
Download the PDF file


