Version 4 (modified by md987, 8 years ago)


What is needed to run MadWeight (software)?

  1. MadWeight is based on MadGraph, so you need all the software required to run MadGraph on a single machine (bash, csh, f77, g77, ...). Root is not needed.
  2. MadWeight uses Python scripts, so Python must be installed on your computer. The program was created/tested with Python 2.4.
  3. MadWeight runs on different types of clusters (single machine, Condor, SGE, bash). An implementation for any other cluster ('pbs', for example) can be done (in one or two days) on demand.
  4. MadWeight can automatically create the likelihood plots; for this the code needs gnuplot.

How to start

  1. First you need to have the full MadGraph package.
  2. Duplicate the Template directory under a name of your choice: $> cp -r Template My_proc
  3. Enter your process directory: $> cd My_proc
  4. Switch to MadWeight mode: $> ./bin/PassToMadWeight
  5. Parametrize your proc_card.dat. The parametrization of the proc_card requires your process in the DECAY CHAIN format, i.e. W production should be written pp > (W+ > e+ ve). The MadWeight package also requires the choice of a transfer function (but this information can be changed after the generation).
  6. Run MadGraph: $> ./bin/newprocess

You are ready now!

What is needed to run MadWeight (input files-parametrisation)?

  1. First you need to define your transfer function, if it is not already well defined. Follow the TransferFunction instructions to create a new one. To use another transfer function, you can run the script $> ./bin/
  2. Secondly, you need a sample of events in the LHCO format. It must be placed in the directory "Events" under the name "input.lhco".
  3. Finally, you have to fill in the following cards:
    • run_card.dat -> imposes cuts on your phase-space integration (normally you should remove all cuts for MadWeight, unless you have divergences)
    • transfer_card.dat -> see TransferFunction
    • MadWeight_card.dat -> see below
    • param_card.dat -> initial values for the theoretical parameters (some parameters can be varied at run time, depending on what is asked in MadWeight_card.dat -> see run options)
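As an aside, the LHCO input is a plain whitespace-separated text format: in the usual convention each event starts with a header line whose first column is 0, followed by one line per reconstructed object with the object type in the second column (1 = electron, 2 = muon, 4 = jet, 6 = missing ET). A minimal Python sketch (illustration only, not part of MadWeight) that counts the events and jets in such a file:

```python
# Minimal sketch: count events and jets in an LHCO file.
# Assumes the usual LHCO convention: an event-header line has 0 in its
# first column, and object lines carry the object type in their second
# column (1 = electron, 2 = muon, 4 = jet, 6 = missing ET).
def count_lhco(lines):
    n_events = 0
    n_jets = 0
    for line in lines:
        fields = line.split()
        if not fields or fields[0].startswith("#"):
            continue                 # skip blank and comment lines
        if fields[0] == "0":
            n_events += 1            # event header line
        elif fields[1] == "4":
            n_jets += 1              # jet object line
    return n_events, n_jets

sample = """\
#  typ  eta   phi  pt    jmass  ntrk  btag  had/em
0  1    10
1  4    -1.2  0.5  85.0  10.2   12    0     1.5
2  6    0.0   2.1  40.0  0.0    0     0     0.0
""".splitlines()
print(count_lhco(sample))  # -> (1, 1)
```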

How to run and output information

Once everything is configured, you can simply launch the main scheduler: $> ./bin/

All the results will be put in the directory Events/MY_NAME, where MY_NAME is the run name set in the run_card.dat (by default it is fermi). Depending on your run options, some output files may be absent:

  • MY_NAME_banner.txt -> all card information (Les Houches accord)
  • MY_NAME_cross_weights.out -> cross section for the full process, one line per card: "card_number" "value" "uncertainty" (the cross section is given in GeV^-2)
  • MY_NAME_norm_weights.out -> normalized weights (by the cross section), one line per event: "card_number"."event_number" "value" "uncertainty"
  • MY_NAME_weights.out -> unnormalized weights, one line per event: "card_number"."event_number" "value" "uncertainty"
  • input.lhco -> input events for your analysis (LHCO format)
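The weight files above are plain text and easy to post-process. As an illustration (the sample line and numbers below are hypothetical, not actual MadWeight output), a short Python sketch that parses the "card_number"."event_number" "value" "uncertainty" format and converts a cross section from GeV^-2 to pb with the standard conversion 1 GeV^-2 = 3.894e8 pb:

```python
# Sketch for post-processing MadWeight weight files.
# The sample line below is hypothetical; the format is
# "card_number"."event_number" "value" "uncertainty".
GEV_MINUS2_TO_PB = 3.894e8  # (hbar*c)^2 = 0.3894 GeV^2 mb, and 1 mb = 1e9 pb

def parse_weight_line(line):
    tag, value, error = line.split()
    card, event = (int(x) for x in tag.split("."))
    return card, event, float(value), float(error)

line = "1.12  5.3e-18  2.1e-19"  # hypothetical: card 1, event 12
print(parse_weight_line(line))   # -> (1, 12, 5.3e-18, 2.1e-19)

sigma = 2.0e-9                   # hypothetical cross section in GeV^-2
print(sigma * GEV_MINUS2_TO_PB)  # the same cross section in pb
```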

If you want the weights for specific subprocesses, some of these outputs are available in ./SubProcesses/Process_dir/MY_NAME/: the cross sections are given in the P_... directories and the weights in the MW_P_... directories.

In order to parametrize your local SGE cluster (if you run on this type of cluster), you can edit the file ./Source/MadWeight_File/Tools/sge_schedular

How to run the code partially

There are 6 possibilities:

  • $> ./bin/ -ijk : launches only steps i, j, k (all three are integers)
  • $> ./bin/ -i+ : launches all steps after (and including) step i
  • $> ./bin/ -i- : launches all steps before (and including) step i
  • $> ./bin/ A B C : launches only steps A, B, C (all three are step names)
  • $> ./bin/ A+ : launches all steps after (and including) step A
  • $> ./bin/ A- : launches all steps before (and including) step A

The different steps are the following:
step value (i)  step name (A)  program            description
1               param          Card creation      Creates all the param_card.dat files if asked in the MadWeight_card
2               analyzer       MadWeightAnalyzer  Analyzes the Feynman diagrams and the transfer function, and creates the Fortran code for the integration
3               compilation    Compilation        Final compilation of all the SubProcesses
4               event          Verif_event        Verification of the LHCO file; selects the events containing the exact number of jets/electrons/muons
5               dir            Create_dir         Creates the directory for each parallel run (one run per param_card and per event)
6               launch         launch_job         Launches the computation of the weights on the cluster
7               control        control_job        Launches the control of the status of the run
8               collect        collect_data       Collects all the data
9               plot           plot               Launches the plots of the likelihood and of some distributions
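The selection semantics of these options can be summarized in a small sketch (illustration only; select_steps is made up here and is not MadWeight's actual implementation):

```python
# Illustration of the partial-run option semantics (not MadWeight code).
STEPS = ["param", "analyzer", "compilation", "event", "dir",
         "launch", "control", "collect", "plot"]

def select_steps(option):
    """Return the step names selected by '-ijk', '-i+', '-i-', 'A', 'A+' or 'A-'."""
    if option.startswith("-"):
        digits = option[1:]
        if digits.endswith("+"):
            return STEPS[int(digits[:-1]) - 1:]     # from step i to the end
        if digits.endswith("-"):
            return STEPS[:int(digits[:-1])]         # from the start up to step i
        return [STEPS[int(d) - 1] for d in digits]  # exactly steps i, j, k
    if option.endswith("+"):
        return STEPS[STEPS.index(option[:-1]):]
    if option.endswith("-"):
        return STEPS[:STEPS.index(option[:-1]) + 1]
    return [option]

print(select_steps("-45"))       # -> ['event', 'dir']
print(select_steps("collect-"))  # all steps up to and including 'collect'
```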

More options

You can use the following additional options:

  • -help : provides some help
  • -version : provides the version number
  • relaunch : available after step 8 (collect). If some jobs crashed (or returned a zero result), you can relaunch them with this option.
  • clean[=NAME] : when step 8 (collect) has fully succeeded and the run is finished, you can suppress the event-by-event log/input/output files. This saves a lot of disk space. The optional parameter NAME allows you to clean an old run.
  • refine=VALUE : relaunches all the integrations whose precision is worse than VALUE (included since version 2.1.6)
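To make the refine option concrete, here is a sketch (hypothetical data and function name; MadWeight's own criterion may differ in detail) of which integrations a cut refine=VALUE would relaunch, reading "precision" as the relative uncertainty err/weight:

```python
# Illustration only: pick the integrations that 'refine=VALUE' would
# relaunch, interpreting the precision as the relative uncertainty.
def to_refine(results, value):
    """results maps 'card.event' tags to (weight, uncertainty) pairs."""
    return [tag for tag, (weight, err) in results.items()
            if weight == 0 or err / weight > value]

results = {                     # hypothetical weights and uncertainties
    "1.1": (1.0e-18, 5.0e-20),  # relative precision 0.05
    "1.2": (2.0e-18, 4.0e-19),  # relative precision 0.20
}
print(to_refine(results, 0.1))  # -> ['1.2']
```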

Examples of the code

In the following, we give you all the information needed to reproduce the examples presented in articles/proceedings/...

  1. WMassMeasurmentExample: determination of the W mass.
  2. SpinMeasurmentExample: discrimination between two spin hypotheses.

-- Main.OlivierMattelaer - 17 Nov 2008