Changes between Version 1 and Version 2 of General-Scripts


Timestamp: 04/06/12 16:33:02
Author: trac
=== If you have a ''central'' data disk ===
The current Software.MadGraph / Software.MadEvent version assumes all the scripts are run from a central data disk mounted on all cluster nodes (e.g. a home directory). If you have access to such a central disk, read the following. If not, please refer to the last section. Some scripts may assume the existence of a specific queue called {{{madgraph}}}. If you have difficulties with these, simply create this queue on your cluster (this can help to limit the number of CPUs used, for example), or remove the =-q madgraph= options in the {{{run_XXX}}} scripts located in {{{bin}}}.
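The second option above can be scripted. The sketch below strips the =-q madgraph= option from a set of {{{run_XXX}}} scripts with {{{sed}}}; it uses a throw-away demo directory, since the exact {{{qsub}}} invocation inside your scripts may differ. On a real installation you would run the loop inside the MadEvent {{{bin}}} directory instead.

```shell
#!/bin/sh
# Demo setup: a fake bin/ directory with one run_XXX script.
# On a real installation, point the loop at the MadEvent bin/ directory.
mkdir -p demo_bin
printf 'qsub -q madgraph survey_job.sh\n' > demo_bin/run_survey

# Remove every occurrence of the "-q madgraph" option, keeping a .bak backup
# of each script before modifying it in place.
for script in demo_bin/run_*; do
    sed -i.bak 's/ -q madgraph//g' "$script"
done

cat demo_bin/run_survey   # -> qsub survey_job.sh
```

Inspect one modified script before deleting the backups, in case your scripts pass the queue option in a different form.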
==== If you have a PBS-compatible batch managing system (PBSPro, !OpenPBS, Torque, ...) ====
This is the easiest case, since the default configuration should work out of the box using the {{{qsub}}} command (and {{{qstat}}} and/or {{{qdel}}} if the whole web interface is present). There is nothing special to do: just run the {{{generate_events}}} script as usual and select parallel mode.
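For reference, a minimal PBS job script of the kind submitted to the {{{madgraph}}} queue could look like the sketch below. The job name and payload script are illustrative placeholders, not the actual MadEvent internals; only the {{{#PBS}}} directives and the {{{$PBS_O_WORKDIR}}} variable are standard PBS features.

```shell
#!/bin/sh
# Write an illustrative PBS job script (names are placeholders).
cat > example_job.sh <<'EOF'
#!/bin/sh
#PBS -N madevent_run
#PBS -q madgraph
cd "$PBS_O_WORKDIR"    # PBS starts jobs in $HOME; return to the submit dir
./madevent_payload     # placeholder for the actual MadEvent job step
EOF

# On a real cluster you would then submit, monitor and (if needed) cancel it:
#   qsub example_job.sh
#   qstat
#   qdel <job_id>
cat example_job.sh
```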
==== If you use the Condor batch managing system ====
A "translation" script exists (see the attachments of this page) to emulate the {{{qsub}}} command using Condor syntax. This script should be tuned to fit your Condor installation and placed in a directory listed in the =$PATH= variable.
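The general shape of such a translation is sketched below: turn the job script given as first argument into a Condor submit description. This is not the attached script; a real wrapper would also map {{{qsub}}} options, and the final {{{condor_submit}}} call is left commented out so the sketch runs without a Condor pool.

```shell
#!/bin/sh
# Minimal qsub-style wrapper sketch for Condor (illustrative only).
job_script="${1:-job.sh}"

# Build a Condor submit description for the job script.
cat > "${job_script}.submit" <<EOF
universe   = vanilla
executable = ${job_script}
output     = ${job_script}.out
error      = ${job_script}.err
log        = ${job_script}.log
queue
EOF

# condor_submit "${job_script}.submit"   # uncomment on a real Condor cluster
cat "${job_script}.submit"
```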
==== If you use another batch managing system ====
This is the most complicated case. You can either:
   * Manually modify the {{{survey}}}, {{{refine}}} and {{{run_XXX}}} scripts located in the {{{bin}}} directory to force them to use your submission system.
   * Write a "translation" script like the one available for Condor (see attachment) to emulate the {{{qsub}}} command. If you manage to do this and want to share your script to help other Software.MadGraph / Software.MadEvent users, please feel free to edit this page.
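As one concrete instance of the second option, the sketch below generates a minimal {{{qsub}}} shim for a SLURM cluster (SLURM is just one example of "another" system; adapt the option mapping to yours). It maps the =-q queue= option to SLURM's real =-p partition= option and hands everything else to {{{sbatch}}} unchanged. Save the generated file as {{{qsub}}} in a directory listed in =$PATH= so the unmodified MadEvent scripts pick it up.

```shell
#!/bin/sh
# Generate a minimal qsub -> sbatch translation shim (sketch, SLURM example).
cat > qsub <<'EOF'
#!/bin/sh
# Translate "qsub [-q queue] script" into "sbatch [-p partition] script".
if [ "$1" = "-q" ]; then
    partition="$2"
    shift 2
    exec sbatch -p "$partition" "$@"
fi
exec sbatch "$@"
EOF
chmod +x qsub
```

A real shim would also translate output/error-file options and print the job id in the format the web interface expects from {{{qsub}}}.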
=== If you do not have a ''central'' data disk ===
We are aware that the central disk solution may be inefficient or even impossible to set up on large clusters. We are thus working on a permanent solution. In the meantime, an intermediate solution using a temporary directory and the {{{scp}}} command exists. Please contact Pavel Demin (pavel.demin_AT_uclouvain.be) directly for more information on this.
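The workaround follows a common pattern: run the job in a node-local scratch directory and copy the results back afterwards. The sketch below is not the actual intermediate solution, only an illustration of that pattern; on a real cluster =COPY= would be {{{scp}}} and =DEST= something like {{{user@frontend:Events/}}}, while plain {{{cp}}} and a local directory are used here so the sketch runs stand-alone.

```shell
#!/bin/sh
# Pattern sketch: work in node-local scratch space, ship results back.
# COPY/DEST default to local equivalents so this runs without ssh access;
# all file and directory names are placeholders.
COPY="${COPY:-cp}"
DEST="${DEST:-$PWD/central_disk}"
mkdir -p "$DEST"

workdir=$(mktemp -d)                        # node-local scratch directory
( cd "$workdir"
  echo "<LesHouchesEvents/>" > events.lhe   # stand-in for real MadEvent output
  $COPY events.lhe "$DEST/" )               # copy results back, e.g. via scp
rm -rf "$workdir"                           # clean up the scratch space
```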
-- Main.MichelHerquet - 02 Mar 2009
     53