[[PageOutline]]

= Description of the method =

The method consists in taking a sample of events (weighted or unweighted) and associating to those events a new weight corresponding to a new theoretical hypothesis. It is a multi-dimensional version of the one-dimensional re-weighting method commonly used by experiments. Once computed, this weight can be propagated through the whole simulation chain, which avoids having to run the full simulation for a large number of samples. This method works only if the original hypothesis and the new one both have sizeable contributions in the same region of phase-space.

We propose three types of reweighting: one for Leading Order samples and two for Next-to-Leading Order samples (called Kamikaze Reweighting and NLO Reweighting).

'''Leading Order'''[[BR]][[BR]]
At Leading Order, the new weight is given by
$$W_{new} = \frac{|M^{new}_h|^2}{|M^{old}_h|^2} \, W_{old} $$
where $h$ is the helicity associated to the event and $|M^{new/old}_h|^2$ is the squared matrix element for that helicity. If the event is not associated to a specific helicity, the sum over helicities is used instead. This method is fully LO accurate and does not present any bias. Note that the statistical fluctuations of the original sample can be enhanced by the reweighting. To get an idea of such propagation, one can use the naive error-propagation formula:
$$\Delta\mathcal{O}_{new} = \bar R\cdot \Delta\mathcal{O}_{old} + \Delta R \cdot \mathcal{O}_{old} $$
where $\bar R$ is the average of the matrix-element ratio and $\Delta R$ its spread over the sample. $\mathcal{O}_{old/new}$ is the value of the observable under consideration for the associated hypothesis and $\Delta\mathcal{O}_{old/new}$ the associated statistical uncertainty.
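The following is a minimal Python sketch of the two formulas above. The event container and the matrix-element callables are illustrative placeholders (in the actual module the matrix elements are evaluated by generated Fortran code); it only shows how the weight ratio and the naive error propagation are applied.
{{{
import math

def reweight_lo(events, me2_old, me2_new):
    """Return the new weights W_new = |M_new|^2 / |M_old|^2 * W_old.

    events  : iterable of (weight, helicity, momenta) tuples (illustrative format)
    me2_old : callable returning |M_old|^2 for given momenta and helicity
    me2_new : callable returning |M_new|^2 for given momenta and helicity
    For events without a definite helicity, pass matrix elements that are
    already summed over helicities.
    """
    return [me2_new(p, hel) / me2_old(p, hel) * w_old
            for (w_old, hel, p) in events]

def propagated_error(obs_old, err_old, ratios):
    """Naive propagation of the statistical uncertainty on an observable:
    dO_new = Rbar * dO_old + dR * O_old, with Rbar the average matrix-element
    ratio over the sample and dR its spread."""
    n = len(ratios)
    r_bar = sum(ratios) / n
    d_r = math.sqrt(sum((r - r_bar) ** 2 for r in ratios) / n)
    return r_bar * err_old + d_r * obs_old
}}}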
'''Kamikaze Reweighting'''[[BR]][[BR]]
This corresponds to a Leading-Order type of reweighting. Both the soft and hard events are reweighted with the tree-level matrix element that matches the number of particles in their final state, i.e.
$$W^S_{new} = \frac{|M^{new}_{born}|^2}{|M^{old}_{born}|^2} \, W^S_{old} $$
$$W^H_{new} = \frac{|M^{new}_{real}|^2}{|M^{old}_{real}|^2} \, W^H_{old} $$
For obvious reasons, this method is, in general, '''not NLO accurate'''. It is available since version 2.3.2.

'''NLO Reweighting'''[[BR]][[BR]]
We use the basis introduced in http://arxiv.org/pdf/1110.4738v1.pdf to decompose the cross section into matrix-element components ($\mathcal{W}$) that are independent of the scale and PDF variations:
$$d\sigma^{H} = d\sigma^E - d\sigma^{MC} $$
$$ d\sigma^{S} = d\sigma^{MC} + \sum_{\alpha=S,C,SC} d\sigma^\alpha $$
Each of the $d\sigma^\alpha$ can be written as
$$ d\sigma^\alpha=f_1(x_1,\mu_F)f_2(x_2,\mu_F) \left[\mathcal{W}^\alpha_0 + \mathcal{W}^\alpha_F \log\left(\mu_F/Q\right)^2 + \mathcal{W}^\alpha_R \log\left(\mu_R/Q\right)^2 \right] d\chi$$
Additionally, we keep track of which parts of the $\mathcal{W}$ are proportional to the Born ($\mathcal{W}_B$), to the finite piece of the virtual ($\mathcal{W}_V$) and to the real emission ($\mathcal{W}_R$). The equations are available in the appendix of http://arxiv.org/pdf/1110.4738v1.pdf.

In principle, the reweighting should be performed on each sub-part of the $\mathcal{W}$ according to the following formulas:
$$\mathcal{W}_B^{new} = \frac{B^{new}}{B^{old}} \, \mathcal{W}_B^{old} $$
$$\mathcal{W}_V^{new} = \frac{V^{new}}{V^{old}} \, \mathcal{W}_V^{old} $$
$$\mathcal{W}_R^{new} = \frac{R^{new}}{R^{old}} \, \mathcal{W}_R^{old} $$
The final weight is then computed by recombining the rescaled components according to the decomposition above.

However, MadGraph5_aMC@NLO uses the virt-tricks method, which avoids computing the virtual for some of the phase-space points. This speed optimisation forbids the simple reweighting above, since the generation can have $\mathcal{W}_V^{old}=0$ even if $V_{old} \neq 0$. To avoid this problem, $\mathcal{W}_B$ is split into two pieces, $\mathcal{W}_{BC}$ and $\mathcal{W}_{BB}$: the part proportional to the Born coming from the counter-terms, and the part coming directly from the Born or from the approximate virtual. The reweighting is then done as
$$\mathcal{W}_{BB}^{new} = \frac{B^{new}+V^{new}}{B^{old}+V^{old}} \, \mathcal{W}_{BB}^{old} $$
$$\mathcal{W}_V^{new} = \frac{B^{new}+V^{new}}{B^{old}+V^{old}} \, \mathcal{W}_V^{old} $$
$$\mathcal{W}_{BC}^{new} = \frac{B^{new}}{B^{old}} \, \mathcal{W}_{BC}^{old} $$
$$\mathcal{W}_R^{new} = \frac{R^{new}}{R^{old}} \, \mathcal{W}_R^{old} $$
Such a reweighting is fully NLO accurate. As in the LO case, the statistical uncertainty can be enhanced by the reweighting, and the splitting required to support the virt-tricks adds a further contribution to it. This method will be released in a future version of MadGraph5_aMC@NLO and can currently be provided on request. Since this reweighting is based on a dedicated basis, the NLO sample must be generated in a specific way so that the additional information is written in the Les Houches event file.
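As an illustration, the component-wise rescaling above can be sketched as follows. All names are illustrative placeholders: the decomposed weights and the Born, finite-virtual and real matrix elements are assumed to be read from the extended event information mentioned above, and the final recombination with the PDF and $\log(\mu/Q)$ factors of the decomposition formula is not shown.
{{{
def rescale_nlo_components(w, me_old, me_new):
    """Rescale the stored weight components for a new theoretical hypothesis.

    w      : dict with keys 'BB', 'BC', 'V', 'R' (decomposed weights)
    me_old : dict with keys 'B', 'V', 'R' (matrix elements, old hypothesis)
    me_new : dict with keys 'B', 'V', 'R' (matrix elements, new hypothesis)
    The rescaled components are afterwards recombined with the PDF and
    log(mu/Q) factors of the decomposition formula above (not shown here).
    """
    bv_ratio = (me_new['B'] + me_new['V']) / (me_old['B'] + me_old['V'])
    return {
        'BB': bv_ratio * w['BB'],                   # Born + approximate-virtual piece
        'V': bv_ratio * w['V'],                     # finite virtual piece (virt-tricks)
        'BC': me_new['B'] / me_old['B'] * w['BC'],  # Born piece from the counter-terms
        'R': me_new['R'] / me_old['R'] * w['R'],    # real-emission piece
    }
}}}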
= Technical details =

== Limitation ==

 1. We do not perform any PDF and/or cut reweighting.
 2. Changing the functional form of alpha_S is not allowed.
 3. In the presence of a decay chain, the order of the particles in the event file is important. Keep this in mind if you want to use this tool with LHE events not produced by MadGraph5_aMC@NLO.

== Installation ==

This module is built into MadGraph5_aMC@NLO 2.3.2. Since MadGraph5_aMC@NLO 2.3.2, this module relies on '''f2py''' being installed. The easiest way to install f2py is to install numpy (if not already done).

== How to run the code ==

How to use the code on the fly (the same procedure applies at NLO). Here are some examples. When running the generation of events (./bin/generate_events from the local directory or "launch" via the mg5 interface) you will get two questions. The first one is:
{{{
The following switches determine which programs are run:
 1 Run the pythia shower/hadronization:                 pythia=NOT INSTALLED
 2 Run PGS as detector simulator:                       pgs=NOT INSTALLED
 3 Run Delphes as detector simulator:                   delphes=NOT INSTALLED
 4 Decay particles with the MadSpin module:             madspin=OFF
 5 Add weight to events based on coupling parameters:   reweight=OFF
Either type the switch number (1 to 5) to change its default setting,
or set any switch explicitly (e.g. type 'madspin=ON' at the prompt)
Type '0', 'auto', 'done' or just press enter when you are done.
 [0, 4, 5, auto, done, madspin=ON, madspin=OFF, madspin, reweight=ON, ... ][60s to answer]
>
}}}
As you can see, the reweight option is OFF by default. To switch it on, you can either type "5" (not advised if you script) or "reweight=ON". Then you should have:
{{{
The following switches determine which programs are run:
 1 Run the pythia shower/hadronization:                 pythia=NOT INSTALLED
 2 Run PGS as detector simulator:                       pgs=NOT INSTALLED
 3 Run Delphes as detector simulator:                   delphes=NOT INSTALLED
 4 Decay particles with the MadSpin module:             madspin=OFF
 5 Add weight to events based on coupling parameters:   reweight=ON
Either type the switch number (1 to 5) to change its default setting,
or set any switch explicitly (e.g. type 'madspin=ON' at the prompt)
Type '0', 'auto', 'done' or just press enter when you are done.
 [0, 4, 5, auto, done, madspin=ON, madspin=OFF, madspin, reweight=ON, ... ][60s to answer]
}}}
Then the second question is (once you are done with the cards, just press enter):
{{{
Do you want to edit a card (press enter to bypass editing)?
  1 / param    : param_card.dat
  2 / run      : run_card.dat
  3 / reweight : reweight_card.dat
 you can also
  - enter the path to a valid card or banner.
  - use the 'set' command to modify a parameter directly.
    The set option works only for param_card and run_card.
    Type 'help set' for more information on this command.
 [0, done, 1, param, 2, run, 3, reweight, enter path][60s to answer]
}}}
You '''HAVE TO define/edit''' the reweight_card.dat; the syntax is explained in the file. The default file is basically empty and will make the re-weighting crash, since both theoretical hypotheses would be identical.

== How to use the code after the generation of events has been completed ==

You can also use the madevent interface (the same applies at NLO), as explained below:
 1. Go to the process directory.
 2. Launch the '''./bin/madevent''' script.
 3. Type '''reweight RUN_NAME'''.
 4. You will then see the following question:
{{{
Do you want to edit one cards (press enter to bypass editing)?
  1 / reweight : reweight_card.dat
 you can also
  - enter the path to a valid card.
 [0, done, 1, reweight, enter path][60s to answer]
}}}
    If you did not define the content of reweight_card.dat already, you need to do it now. The syntax is explained inside the file, and you can see examples below (validation section). The '''important''' point is that the first line should be '''launch''', after which you specify which parameters you want to modify. This is exactly the same syntax as for scripting a scan over parameters; a small helper that writes such a scan card is sketched at the end of this section.
 5. Exit the file and you are done (the script will run).
'''If the file Cards/reweight_card.dat is already defined''', you can launch the script with
{{{
./bin/madevent reweight RUN_NAME -f
}}}
The '''-f''' option prevents the questions from being asked.
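As mentioned above, the reweight_card simply lists one "launch" block per new hypothesis. Below is a minimal Python sketch that writes such a card for a scan over one coupling; the "Dim6" block name and parameter index are taken from the EWDIM6 example further down and are to be adapted to your model, and the script is assumed to be run from the process directory.
{{{
# Write Cards/reweight_card.dat for a one-parameter scan, using the
# "launch" + "set <block> <id> <value>" syntax described above.
values = [0.01, 0.1, 1.0, 10.0, 100.0]

with open('Cards/reweight_card.dat', 'w') as card:
    for value in values:
        card.write('launch\n')
        card.write('set Dim6 1 %s\n' % value)
}}}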
== Options of the code ==

Note that all the following options are available in MadGraph5_aMC@NLO since version 2.3.2. They have to be included in the reweight_card before the first launch command.
 1. "change model ": performs the reweighting within a new model (you then need to provide a full param_card and not only the differences).
 2. "change process ": changes the process definition.
 3. "change process --add": adds one process definition to the new list of processes.
 4. "change output ": three options: 'default' (i.e. lhef version 3 format), '2.0' (i.e. lhef version 2 format, the main weight is replaced), 'unweight' (a new unweighting is applied to the event sample).
 5. "change helicity ": perform the reweighting for the given helicity (True --default--) or sum over helicities.
 6. "change rwgt_dir ": change the directory where the computation is performed. This can be used to avoid recreating/recompiling the fortran executables by pointing to a previously existing directory.
 7. "change mode LO": for an NLO sample, this flag forces the use of the kamikaze reweighting (Leading Order type reweighting).

= Input/Output format =

 1. The output format follows the Les Houches agreement version 3 [link arxiv]. As an example, the header looks like:
{{{
set param_card dim6 1 100.0
set param_card dim6 2 100.0
set param_card dim6 3 100.0
}}}
 and one associated event:
{{{
 8      0 +7.9887000e-06 1.24664300e+02 7.95774700e-02 1.23856500e-01
        1 -1    0    0  501    0 +0.0000000e+00 +0.0000000e+00 +1.3023196e+03 1.30231957e+03 0.00000000e+00 0.0000e+00 -1.0000e+00
       -2 -1    0    0    0  501 +0.0000000e+00 +0.0000000e+00 -1.4499581e+02 1.44995814e+02 0.00000000e+00 0.0000e+00 1.0000e+00
      -24  2    1    2    0    0 -1.2793809e+01 -8.3954553e+01 -1.1792566e+02 1.65987064e+02 8.02071978e+01 0.0000e+00 0.0000e+00
       23  2    1    2    0    0 +1.2793809e+01 +8.3954553e+01 +1.2752494e+03 1.28132832e+03 9.12640692e+01 0.0000e+00 0.0000e+00
       11  1    3    3    0    0 -1.2462673e+01 +1.3647422e+01 -2.6083861e+01 3.19677669e+01 0.00000000e+00 0.0000e+00 -1.0000e+00
      -12  1    3    3    0    0 -3.3113586e-01 -9.7601975e+01 -9.1841804e+01 1.34019297e+02 0.00000000e+00 0.0000e+00 1.0000e+00
        4  1    4    4  502    0 -1.8321803e+01 +9.0929609e+01 +9.3905973e+02 9.43629724e+02 0.00000000e+00 0.0000e+00 -1.0000e+00
       -4  1    4    4    0  502 +3.1115612e+01 -6.9750557e+00 +3.3618969e+02 3.37698598e+02 0.00000000e+00 0.0000e+00 1.0000e+00
4.55278761371e-06 2.65941887458e-06 8.68203803896e-06
}}}
 2. The reweight_card in that case was:
{{{
launch
set Dim6 1 100
set Dim6 2 0
set Dim6 3 0
set Dim6 4 0
set Dim6 5 0
launch
set Dim6 1 0
set Dim6 2 100
set Dim6 3 0
set Dim6 4 0
set Dim6 5 0
launch
set Dim6 1 0
set Dim6 2 0
set Dim6 3 100
set Dim6 4 0
set Dim6 5 0
}}}
 Note:
   a. The production events were in this case generated with the SM, where all those coefficients are in fact zero. Therefore all the lines setting them to zero were superfluous, which is why they do not appear in the banner.
   b. You can also specify a path to a param_card in the reweight_card. The content of the header is then computed automatically from the difference of the two param_cards.
 3. The cross-section of the original file and those associated with the new hypotheses are printed at the end of the script:
{{{
INFO: Original cross-section: 0.80086112072 +- 0.0025669959099 pb
INFO: Computed cross-section:
INFO:      119 : 5.0238030968
INFO:      120 : 4.46724081967
INFO:      121 : 0.790019392142
}}}
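As a cross-check, the "Computed cross-section" numbers can be reproduced from the event weights themselves. A minimal sketch, assuming `weights_old` and `weights_new` hold the original per-event weights and the new per-event weights for one hypothesis (presumably the trailing numbers in the event shown above, one per launch command):
{{{
def computed_cross_section(sigma_old, weights_old, weights_new):
    """New cross-section estimated from the ratio of summed event weights.

    This is independent of the weight normalisation convention, as long as
    the original and new weights use the same one.
    """
    return sigma_old * sum(weights_new) / sum(weights_old)
}}}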
= Validation =

 1. The comparisons for the full cross-section are done like this:
{{{
./bin/madevent ./Cards/reweight_card.dat
}}}

== p p > e+ e- cross-section ==

 1. The reweight_card is the following:
{{{
launch
set aewm1 100
launch
set aewm1 200
launch
set aewm1 300
}}}
 2. The associated cross-sections are
   1. 1135.25 pb
   2. 1095.28 pb
   3. 1329.52 pb
 3. The cross-sections computed with MadEvent are
   1. 1130 ± 2.815 pb
   2. 1098 ± 2.478 pb
   3. 1336 ± 2.777 pb

== EWDIM6 Validation ==

=== input ===

 1. The model used for this validation is EWDIM6 (see: http://arxiv.org/abs/arXiv:1205.4231). 10k events were generated with the Standard Model (cross-section: 0.8008 ± 0.0026 pb).
 2. The reweight_card was:
{{{
launch
set Dim6 1 100
set Dim6 2 0
set Dim6 3 0
set Dim6 4 0
set Dim6 5 0
launch
set Dim6 1 10
set Dim6 2 0
set Dim6 3 0
set Dim6 4 0
set Dim6 5 0
launch
set Dim6 1 1
set Dim6 2 0
set Dim6 3 0
set Dim6 4 0
set Dim6 5 0
launch
set Dim6 1 0.1
set Dim6 2 0
set Dim6 3 0
set Dim6 4 0
set Dim6 5 0
launch
set Dim6 1 0.01
set Dim6 2 0
set Dim6 3 0
set Dim6 4 0
set Dim6 5 0
}}}
 The same scan was done for the first three couplings (CWWW, CW, CB).

=== result ===

 1. For CWWW
|| Coupling value ($TeV^{-2}$) || Reweight cross-section (pb) || MadEvent cross-section (pb) || Status ||
|| 0.01 || 0.800810008029 || 0.7973 ± 0.0023 || OK ||
|| 0.1 || 0.800903791291 || 0.799 ± 0.0026 || OK ||
|| 1 || 0.802209013071 || 0.7987 ± 0.0025 || OK ||
|| 10 || 0.85200014698 || 0.8584 ± 0.00092 || OK ||
|| 100 || 5.0238030968 || 6.09 ± 0.0082 || '''FAIL''' ||
|| 100 || 5.04763 || 6.09 ± 0.0082 || '''FAIL''' (made with a sample of 100k events) ||
 The last entry fails because, for such a value of the coupling, the expected distributions are too different from those of the Standard Model. Such discrepancies are expected in this case. One hint is that the cross-section is almost an order of magnitude higher than the original one (looking at the distributions confirms this). The inverse reweighting (i.e. starting from the CWWW=100 sample of events and reweighting back to the SM) works properly: it returned 0.803341120226 pb. The various distributions for those generations are attached to this web page. The dashed blue curve is the one produced by reweighting, while the solid black one is the curve generated by MadEvent. All samples used for the comparison of distributions contain 100k events.
 2. For CW
|| Coupling value ($TeV^{-2}$) || Reweight cross-section (pb) || MadEvent cross-section (pb) || Status ||
|| 0.01 || 0.800798262059 || 0.7953 ± 0.002497 || OK ||
|| 0.1 || 0.801379445746 || 0.7988 ± 0.0023 || OK ||
|| 1 || 0.806872565125 || 0.8065 ± 0.0023 || OK ||
|| 10 || 0.889336417677 || 0.8832 ± 0.003 || OK ||
|| 100 || 4.46724081967 || 4.519 ± 0.015 || '''FAIL''' ||
|| 100 || 4.44273 || 4.519 ± 0.015 || '''FAIL''' (made with a sample of 100k events) ||
 Same comment as for the previous coupling.
 3. For CB
|| Coupling value ($TeV^{-2}$) || Reweight cross-section (pb) || MadEvent cross-section (pb) || Status ||
|| 0.01 || 0.800798262059 || 0.7977 ± 0.0027 || OK ||
|| 0.1 || 0.800782626532 || 0.7985 ± 0.0024 || OK ||
|| 1 || 0.800626859275 || 0.7981 ± 0.002365 || OK ||
|| 10 || 0.799127987884 || 0.7971 ± 0.0024 || OK ||
|| 100 || 0.790019392142 || 0.7852 ± 0.0026 || OK ||
|| 100 || 0.786698206995 || 0.7852 ± 0.0026 || OK (made with a sample of 100k events) ||
 This operator has less impact on the cross-section/distributions, and therefore even a large value of the coupling still works fine.

Note:
 1. The cross-section obtained for the 100k event sample is 0.7989 ± 0.00087 pb.
 2. The statistical fluctuations of the original sample are reflected in the reweighted cross-sections (as expected).
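As a rough way to read the tables above, one can express the difference between the reweighted and the directly generated cross-sections in units of the quoted MadEvent uncertainty (the reweighted numbers carry their own, unquoted, statistical uncertainty, so this only gives an upper bound on the tension). A minimal sketch using the last CB entry:
{{{
def pull(sigma_reweight, sigma_madevent, err_madevent):
    """Difference in units of the MadEvent statistical uncertainty."""
    return (sigma_reweight - sigma_madevent) / err_madevent

# CB at 100 TeV^-2, 100k-event sample (last line of the third table):
print(pull(0.786698206995, 0.7852, 0.0026))   # ~ 0.6
}}}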