= General-Scripts =
=== If you have a ''central'' data disk ===
The current Software.MadGraph / Software.MadEvent version assumes all the scripts are run from a central data disk mounted on all cluster nodes (e.g. a home directory). If you have access to such a central disk, read the following; if not, please refer to the last section. Some scripts may assume the existence of a specific queue called {{{madgraph}}}. If you have difficulties with these, simply create this queue on your cluster (this can help to limit the number of CPUs used, for example), or remove the =-q madgraph= options in the {{{run_XXX}}} scripts located in {{{bin}}}. A possible way of creating such a queue on a Torque/!OpenPBS server is sketched at the end of this section.

==== If you have a PBS-compatible batch managing system (PBSPro, !OpenPBS, Torque, ...) ====
This is the easiest case, since the default configuration should work out of the box using the {{{qsub}}} command (and {{{qstat}}} and/or {{{qdel}}} if the whole web interface is present). There is nothing special to do: just run the {{{generate_events}}} script as usual and select the parallel mode.

==== If you use the Condor batch managing system ====
A "translation" script exists (see the attachments of this page) to emulate the {{{qsub}}} command using Condor syntax. This script should be tuned to fit your Condor installation and put in a directory listed in the =$PATH= variable.

==== If you use another batch managing system ====
This is the most complicated case; you can either:
 * Manually modify the {{{survey}}}, {{{refine}}} and {{{run_XXX}}} scripts located in the {{{bin}}} directory to force them to use your submission system.
 * Write a "translation" script like the one available for Condor (see the attachment) to emulate the {{{qsub}}} command; a minimal sketch is given below. If you manage to do this and want to share your script to help other Software.MadGraph / Software.MadEvent users, please feel free to edit this page.
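As a starting point, here is a minimal sketch of such a translation script, taking Condor as an example. This is ''not'' the script attached to this page: the option handling (only =-q= is recognised and dropped, everything else is ignored) and the submit file settings are assumptions, so adapt them to your installation before relying on it.
{{{
#!/bin/sh
# Minimal "qsub" emulation sketch for Condor. The job script is
# expected as the last argument; PBS-style options are discarded.
# Tune this to your Condor installation before use.

# Drop the options MadEvent may pass (e.g. -q madgraph).
while [ $# -gt 1 ]; do
    case "$1" in
        -q) shift 2 ;;   # skip the queue name as well
        *)  shift ;;     # ignore anything else
    esac
done
JOB="$1"

# Write a Condor submit description file and submit it.
SUBMIT=$(mktemp /tmp/qsub_condor.XXXXXX)
cat > "$SUBMIT" <<EOF
universe   = vanilla
executable = $JOB
output     = $JOB.out
error      = $JOB.err
log        = $JOB.log
queue
EOF
condor_submit "$SUBMIT"
rm -f "$SUBMIT"
}}}
Once named {{{qsub}}}, made executable and placed in a directory listed in the =$PATH= variable, a wrapper like this should be picked up transparently by the scripts.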
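Coming back to the {{{madgraph}}} queue mentioned at the beginning of this section: on a Torque/!OpenPBS server, it could be created along the following lines. This is only an illustration to be run by the batch system administrator; attribute names vary between PBS flavours and the limit shown is a placeholder, so check your local documentation.
{{{
#!/bin/sh
# Hypothetical example: create and enable a "madgraph" execution
# queue on a Torque/OpenPBS server.
qmgr -c "create queue madgraph queue_type = execution"
qmgr -c "set queue madgraph enabled = true"
qmgr -c "set queue madgraph started = true"
# Optionally cap the number of concurrently running jobs:
qmgr -c "set queue madgraph max_running = 8"
}}}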
=== If you do not have a ''central'' data disk ===
We are aware that the central disk solution may be inefficient, or even impossible to set up, on large clusters. We are thus working on a permanent solution. In the meantime, an intermediate solution using temporary directories and the {{{scp}}} command exists (see the sketch below). Please contact Pavel Demin (pavel.demin_AT_uclouvain.be) directly for more information on this.

-- Main.MichelHerquet - 02 Mar 2009
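For orientation, the idea behind this intermediate solution is roughly to run each job in a node-local temporary directory and to send the results back with {{{scp}}}. The sketch below only illustrates that idea; every path, hostname and file name in it is made up, and the actual scripts should be obtained from the contact above.
{{{
#!/bin/sh
# Rough illustration only: run in a node-local scratch directory,
# then copy the results back to the front-end over scp. The paths,
# hostname and file names below are placeholders.
SCRATCH=$(mktemp -d /tmp/madevent.XXXXXX)
cd "$SCRATCH" || exit 1

# ... run the MadEvent job here, writing its output locally ...

# Ship the results back to the (hypothetical) front-end machine.
scp -r "$SCRATCH" frontend.example.org:/home/user/madevent_results/
rm -rf "$SCRATCH"
}}}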