
How to work interactively on Ingrid Cluster

Introduction

The main purpose of the Ingrid cluster is to serve as half of the Belgian Tier-2 computing facility for the CMS/LHC experiment located at CERN, Switzerland. Ingrid is principally a "grid cluster" connected to the WLCG grid infrastructure. We also provide interactive access to the cluster for physicists working in Belgian high-energy physics labs involved in the CMS experiment. Within the collaboration with UCL/CISM, we also provide access to Ingrid for CISM users. Below you will find some information and good practices for using Ingrid.

Composition

The details about the composition of the cluster can be found here.

All the boxes are connected by a 1 Gb/s network. Note that this cluster is dedicated to sequential jobs. If you have to run heavy parallel jobs (for example using an MPI library), you should consider using the CÉCI clusters, which are optimized for such use cases.

Access

Shell access to Ingrid is done using an SSH client. Every good Linux/Unix distribution includes one by default. For MS Windows users, we recommend PuTTY. The information needed for the connection is:

  • hostname: ingrid-ui2.cism.ucl.ac.be
  • port: 22
  • protocol: ssh v2
  • your CECI key

Example of connection from a Linux CP3 workstation:

ssh -i </path/to/your/CECI/private/key> <your-CECI-username>@ingrid-ui2.cism.ucl.ac.be

As ingrid-ui2.cism.ucl.ac.be is inaccessible from outside the university network, you will need to use gwceci.cism.ucl.ac.be as an SSH gateway. More information about gwceci.cism.ucl.ac.be can be found at this link. Going through an SSH gateway can be entirely transparent when using an SSH agent as explained at this link.
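For example, here is a minimal ~/.ssh/config sketch for going through the gateway (the Host aliases are illustrative, and ProxyJump requires OpenSSH 7.3 or newer; older clients can use ProxyCommand instead):

Host gwceci
    HostName gwceci.cism.ucl.ac.be
    User <your-CECI-username>
    IdentityFile </path/to/your/CECI/private/key>

Host ingrid
    HostName ingrid-ui2.cism.ucl.ac.be
    User <your-CECI-username>
    IdentityFile </path/to/your/CECI/private/key>
    ProxyJump gwceci

With this in place, a plain "ssh ingrid" connects through the gateway transparently.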

Batch Scheduler

We use SLURM as the batch scheduler. For more information on how to use it, see this page.
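As an illustration only, a simple sequential job could be submitted with a script along these lines (the resource values are placeholders; any partition or account settings specific to Ingrid are documented on the page above):

#!/bin/bash
#SBATCH --job-name=myjob
#SBATCH --output=myjob_%j.out
#SBATCH --time=01:00:00
#SBATCH --mem=2G

./run_my_analysis.sh

Submit it with "sbatch myjob.sh" and follow its progress with "squeue -u <your-CECI-username>".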

File Systems

When you use Ingrid, you have access to several file systems of different sizes:

  • On each worker node and from the UI, you have access to your home directory, with a 50 GB quota.
  • On each worker node and from the UI, you have access to your scratch directory (/nfs/scratch/fynu/username) (28 TB shared between users).
  • On each worker node and from the UI, you have access to your user directory (/nfs/user/username) (85 TB shared between users).

You can check your home quota with the command quota -s. If you need more storage space, ask the administrator team. If the request is justified and space is available, we can enlarge your quota.
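For example, to check your home quota and see how much space you currently use on the scratch and user volumes (paths taken from the list above; du may take a while on large directory trees):

quota -s
du -sh /nfs/scratch/fynu/<username>
du -sh /nfs/user/<username>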

Warnings

  • We are not a storage center: at the end of your computations, move your results to dedicated storage facilities. Data on scratch is not safe.
  • The only storage volume that is backed up is the home user directories; the scratch volume and the large storage are not and will never be backed up.
  • The code you develop should be versioned (see the CMS git repository or use our GIT repository with your cp3 credentials).

Software Installation

If you need specific software (compiler, library, tool, ...), first check that it is not already available. A large number of software packages, in various versions, are located in:

/nfs/soft

For instance:

  • matlab
  • mathematica
  • octave
  • python (all versions above 2.5)
  • geant
  • fireworks
  • root
  • na62 software

Most of these can be used by loading the appropriate module:

> module avail
-------------------------------------------------------------------------------------------- /nfs/soft/modules ---------------------------------------------------------------------------------------------
boost/1.57_sl6_gcc49      crab/crab2                gcc/gcc-4.9.1-sl6_amd64   grid/grid_environment_sl6 parallel/parallel         python/python27_sl6       root/6.02.05-sl6_gcc49
cmake/cmake-3.4.1         crab/crab3                geant/geant_sl5           lhapdf/6.1                pheno/pheno_gcc49         python/python27_sl6_gcc49 root/6.04.00-sl6_gcc49
cms/brilcalc              cw/colorwrapper           geant/geant_sl6           mathematica/mathematica   python/beanstalk-client   root/5.34.05-sl5_gcc41    root/6.06.02-sl6_gcc49
cms/cmssw                 gcc/gcc-4.6.4-sl6_amd64   grid/grid_environment_sl5 matlab/matlab             python/python27_sl5       root/5.34.09-sl6_gcc44    slurm/slurm_utils

Then load (or unload) the module you need:

module load root/5.34.09-sl6_gcc44
module unload root

If there's no module for the software you need, you can add the executable path to your $PATH variable by adding the following to your .bashrc config file:

export PATH=$PATH:/nfs/soft/..../bin/

Do not add the new path at the beginning of $PATH, as it can slow down your whole session. If you want to replace an existing command, please do it with an alias instead, as in:

alias python='python2.6'

If the desired software is not available, you can either:

  • Install the software in your home directory.
  • Ask the administrator team.

For the second option, as with storage, the request must be justified and technically possible! For CMS users, CMSSW and the classic CERN tools are already installed. See below for usage.

Working with CMSSW on ingrid

  • Make sure your environment is ready:
    module load cms/cmssw
    
  • List all available CMSSW releases:
    scram list CMSSW
    
  • Check what the recommended release is here. In case of doubt, double-check with other users.
  • Get a working directory:
    scram project CMSSW_X_Y_Z_patchW
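A typical next step, standard CMSSW practice rather than anything specific to Ingrid, is to enter the src directory of the new release area and set up its runtime environment:

cd CMSSW_X_Y_Z_patchW/src
cmsenv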