Version 5 (modified by omatt, 6 years ago) (diff)


Running MadGraph / MadEvent in the parallel mode on your own cluster

In MadGraph5, a number of cluster types are supported internally. The file =input/mg5_configuration.txt= lists the available cluster types and the options to configure them.

You can add or edit these cluster types to fit your needs. More information is available at this link:

MadGraph4 information
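For reference, the cluster-related options in =input/mg5_configuration.txt= look roughly like the excerpt below. This is a sketch only: check your own copy, since the exact option names and defaults can vary between versions.

```
# cluster-related excerpt, in the spirit of input/mg5_configuration.txt
run_mode = 1              # 0: single machine, 1: cluster, 2: multicore
cluster_type = pbs        # e.g. pbs, condor, lsf, sge, ...
cluster_queue = madgraph  # name of the queue to submit jobs to
```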

If you have a central data disk

The current MadGraph / MadEvent version assumes all the scripts are run from a central data disk mounted on all cluster nodes (e.g. a home directory). If you have access to such a central disk, read the following. If not, please refer to the last section. Some scripts may assume the existence of a specific queue called =madgraph=. If you have difficulties with this, simply create the queue on your cluster (this can also help to limit the number of CPUs used, for example), or remove the =-q madgraph= options in the submission scripts.
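If your cluster runs a Torque/PBS-style server, the =madgraph= queue can typically be created with =qmgr=. The commands below are a sketch only: attribute names may differ on your installation, and =max_running= (used here to cap the number of concurrently running jobs) is one assumed way of limiting CPU usage.

```
qmgr -c "create queue madgraph queue_type=execution"
qmgr -c "set queue madgraph enabled=true"
qmgr -c "set queue madgraph started=true"
qmgr -c "set queue madgraph max_running=50"
```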
If you have a PBS-compatible batch managing system (PBSPro, OpenPBS, Torque, ...)

This is the easiest case since the default configuration should work out of the box using the =qsub= command (and =qstat= if the whole web interface is present). There is nothing special to do: just run the =generate_events= script as usual and select the parallel mode.

If you use the Condor batch managing system

A "translation" script exists (see the attachments of this page) to emulate the =qsub= command using Condor syntax. This script should be tuned to fit your Condor installation and placed in a directory listed in the =$PATH= variable.
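As an illustration of what such a translation script can look like, here is a minimal Python sketch that maps a few common =qsub= options onto a Condor submit description and pipes it to =condor_submit=. Everything in it (the supported option subset, the vanilla universe, the =+JobName= attribute) is an assumption; the actual attachment on this page should be your starting point.

```python
#!/usr/bin/env python
"""Hypothetical sketch of a qsub -> condor_submit translation wrapper.
Only a few common qsub flags (-q, -N, -o, -e) are covered."""
import argparse
import subprocess
import sys


def build_submit_description(script, queue=None, name=None,
                             stdout="job.out", stderr="job.err"):
    """Return a Condor submit description equivalent to a simple qsub call."""
    lines = [
        "universe = vanilla",
        "executable = %s" % script,
        "output = %s" % stdout,
        "error = %s" % stderr,
        "getenv = true",  # propagate the environment, like qsub -V
    ]
    if name:
        # custom ClassAd attribute; Condor accepts user attributes with a '+'
        lines.append('+JobName = "%s"' % name)
    lines.append("queue")
    return "\n".join(lines) + "\n"


def main(argv):
    parser = argparse.ArgumentParser(prog="qsub")
    parser.add_argument("-q", dest="queue")  # accepted but unused: Condor pools have no named queues
    parser.add_argument("-N", dest="name")
    parser.add_argument("-o", dest="stdout", default="job.out")
    parser.add_argument("-e", dest="stderr", default="job.err")
    parser.add_argument("script")
    args = parser.parse_args(argv)
    desc = build_submit_description(args.script, args.queue, args.name,
                                    args.stdout, args.stderr)
    # Feed the description to condor_submit on its standard input.
    subprocess.run(["condor_submit"], input=desc.encode(), check=True)

# In the real wrapper, main(sys.argv[1:]) would be called here so the script
# can be dropped into $PATH under the name "qsub".
```

Installed under the name =qsub= somewhere in =$PATH=, such a wrapper lets the MadEvent scripts submit to Condor transparently.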

If you use another batch managing system

This is the most complicated case. You can either:

  • Modify manually the submission scripts to force them to use your submission system.

  • Write a "translation" script like the one available for Condor (see attachment) to emulate the =qsub= command. If you manage to do this and want to share your script with other MadGraph / MadEvent users, please feel free to edit this page.

If you do not have a central data disk

We are aware that the central disk solution may be inefficient, or even impossible to set up, on large clusters. We are thus working on a permanent solution. In the meantime, an intermediate solution using temporary directories on the worker nodes exists.
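The temporary-directory workaround can be sketched as follows. This is a hypothetical illustration, not the official solution: the directory names and the staging logic are assumptions, and the actual MadEvent command is left as a comment.

```shell
#!/bin/sh
# Hypothetical sketch: run a job from node-local scratch space when no
# shared disk is available, then stage the results back.
set -e
START=$PWD
SHARED=${SHARED:-"$START/madevent_run"}  # run directory visible from the submit host
mkdir -p "$SHARED"
WORKDIR=$(mktemp -d)                     # node-local scratch directory
cp -R "$SHARED/." "$WORKDIR/"            # stage the inputs in
cd "$WORKDIR"
# ./bin/generate_events ...              # the real job would run here
touch done.flag                          # stand-in for the produced results
cp -R "$WORKDIR/." "$SHARED/"            # stage the results back out
cd "$START"
rm -rf "$WORKDIR"                        # clean up the scratch space
```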

-- Main.MichelHerquet - 02 Mar 2009

Attachments (1)
