


Running MadGraph / MadEvent in parallel mode on your own cluster

If you have a central data disk

The current MadGraph / MadEvent version assumes all the scripts are run from a central data disk mounted on all cluster nodes (e.g. a home directory). If you have access to such a central disk, read the following; if not, please refer to the last section. Some scripts may assume the existence of a specific queue called {{{ madgraph }}}. If you have difficulties with these, simply create this queue on your cluster (this can also help to limit the number of CPUs used, for example), or remove the {{{ -q madgraph }}} options in the {{{ run_XXX }}} scripts located in {{{ bin }}}.
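As an illustration, on a Torque/OpenPBS cluster the queue can be created with {{{ qmgr }}} (as the batch system administrator), or the option can be stripped from the scripts with {{{ sed }}}; the {{{ max_running }}} value below is only an assumed example limit, not something required by MadGraph / MadEvent.

{{{
# Create a dedicated "madgraph" queue on a Torque/OpenPBS server
# (run as the batch system administrator); max_running is an example limit.
qmgr -c "create queue madgraph queue_type=execution"
qmgr -c "set queue madgraph enabled = true"
qmgr -c "set queue madgraph started = true"
qmgr -c "set queue madgraph max_running = 32"

# Alternatively, remove the -q madgraph option from the run_XXX scripts in bin
sed -i 's/ -q madgraph//g' bin/run_*
}}}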

If you have a PBS-compatible batch management system (PBSPro, OpenPBS, Torque, ...)

This is the easiest case, since the default configuration should work out of the box using the {{{ qsub }}} command (plus {{{ qstat }}} and/or {{{ qdel }}} if the whole web interface is present). There is nothing special to do: just run the {{{ generate_events }}} script as usual and select parallel mode.
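For reference, the generated job scripts are submitted with plain PBS commands of the following kind; the job script name {{{ ajob1 }}} and the job id are only illustrative placeholders.

{{{
# Hypothetical example of the PBS commands the scripts rely on
qsub -q madgraph ajob1   # submit one job script, prints a job id such as 1234.server
qstat -u $USER           # monitor your jobs
qdel 1234                # remove a job if needed
}}}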

If you use the Condor batch management system

A "translation" script exists (see attachments of this page) to emulate the {{{ qsub }}} command using a Condor syntax. This script should be tuned to fit your Condor installation and put in a directory of the =$PATH= variable.

If you use another batch management system

This is the most complicated case; you can either:

  • Modify manually the {{{ survey }}}, {{{ refine }}} and {{{ run_XXX }}} scripts located in the {{{ bin }}} directory to force them to use your submission system.

  • Write a "translation" script like the one available for Condor (see attachment) to emulate the {{{ qsub }}} command, as in the sketch after this list. If you manage to do this and want to share your script to help other MadGraph / MadEvent users, please feel free to edit this page.
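For the second option, an analogous wrapper for another scheduler only has to drop the PBS options and hand the job script to the native submission command. A minimal sketch for a SLURM cluster (assuming plain {{{ sbatch }}} submission is sufficient) could be:

{{{
#!/bin/bash
# Minimal sketch of a qsub emulation for SLURM, in the same spirit as the
# Condor wrapper above: drop the PBS-style options and submit with sbatch.
while getopts ":q:N:o:e:" opt; do :; done
shift $((OPTIND - 1))
sbatch "$1"
}}}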

If you do not have a central data disk

We are aware that the central disk solution may be inefficient or even impossible to set up on large clusters, and we are working on a permanent solution. In the meantime, an intermediate solution using temporary directories and the {{{ scp }}} command exists. Please contact Pavel Demin directly (pavel.demin_AT_uclouvain.be) for more information.

-- Main.MichelHerquet - 02 Mar 2009

