Changes between Version 2 and Version 3 of General-Scripts
- Timestamp: Apr 12, 2012, 11:03:35 AM
General-Scripts
== Running MadGraph / MadEvent in the parallel mode on your own cluster ==

=== If you have a ''central'' data disk ===
The current MadGraph / MadEvent version assumes all the scripts are run from a central data disk mounted on all cluster nodes (e.g. a home directory). If you have access to such a central disk, read the following; if not, please refer to the last section. Some scripts may assume the existence of a specific queue called {{{madgraph}}}. If you have difficulties with these, simply create this queue on your cluster (this can help to limit the number of CPUs used, for example), or remove the {{{-q madgraph}}} options in the {{{run_XXX}}} scripts located in {{{bin}}}.

==== If you have a PBS-compatible batch managing system (PBSPro, !OpenPBS, Torque, ...) ====
This is the easiest case, since the default configuration should work out of the box using the {{{qsub}}} command (and {{{qstat}}} and/or {{{qdel}}} if the whole web interface is present).
There is nothing special to do: just run the {{{generate_events}}} script as usual and select the parallel mode.

==== If you use the Condor batch managing system ====
A "translation" script exists (see the attachments of this page) to emulate the {{{qsub}}} command using Condor syntax. This script should be tuned to fit your Condor installation and put in a directory listed in your {{{$PATH}}} variable.

==== If you use another batch managing system ====
This is the most complicated case. You can either:
 * Modify manually the {{{survey}}}, {{{refine}}} and {{{run_XXX}}} scripts located in the {{{bin}}} directory to force them to use your submission system.
 * Write a "translation" script like the one available for Condor (see attachment) to emulate the {{{qsub}}} command. If you manage to do this and want to share your script to help other MadGraph / MadEvent users, please feel free to edit this page.
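To illustrate the "translation" approach, here is a minimal sketch of what such a {{{qsub}}} wrapper could look like for Condor. It is not the attached script: the option handling and the submit-description fields below are assumptions, and a real wrapper would be adapted to your site. It maps a PBS-style call such as {{{qsub -q madgraph run_XXX}}} onto a Condor submit description printed on stdout.

```shell
#!/bin/sh
# Hypothetical sketch of a qsub "translation" wrapper for Condor (the real
# attached script may differ). MadEvent calls it as: qsub -q madgraph run_XXX

translate_qsub() {
    OPTIND=1
    queue="madgraph"            # default queue name assumed by the scripts
    while getopts "q:" opt; do
        case "$opt" in
            q) queue="$OPTARG" ;;  # PBS queue name; Condor has no direct equivalent
        esac
    done
    shift $((OPTIND - 1))
    job="$1"                    # e.g. survey, refine or a run_XXX script

    # Emit a minimal Condor submit description for this job.
    cat <<EOF
universe   = vanilla
executable = $job
output     = $job.out
error      = $job.err
log        = $job.log
queue
EOF
}

# Example call, as the MadEvent scripts would issue it:
translate_qsub -q madgraph run_01
```

In a real wrapper, the description would be written to a file and handed to {{{condor_submit}}}; the PBS queue name is simply dropped here, but it could instead be mapped to a machine requirement or ClassAd attribute of your choice.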
=== If you do not have a ''central'' data disk ===
We are aware that the central disk solution may be inefficient or even impossible to set up on large clusters, and we are working on a permanent solution. In the meantime, an intermediate solution using temporary directories and the {{{scp}}} command exists. Please contact Pavel Demin (pavel.demin_AT_uclouvain.be) directly for more information on this.

-- Main.MichelHerquet - 02 Mar 2009