Changes between Initial Version and Version 1 of LowMassSMHiggs


-- Main.PierreArtoisenet - 2011-06-27
     4
=== I. Organization of the work ===

We want to investigate the following signatures:

WH -> ell+- + 2b
ZH -> ell+ ell- + 2b
ttH -> ell+ ell- + 4b

We are now focusing on ttH.

=== II. Goal ===

The project is to investigate the significance that can be achieved in the search for a Higgs boson produced in association with a top-quark pair, considering the '''dileptonic channel'''. The main background is %$ t \bar t $% + 2 jets. The significance can be studied in different scenarios; for example, one can start by filling the following table:

|| Statistical significance || "thin" TF || "broad" TF ||
|| events with ISR || S_1 || S_2 ||
|| events with no ISR || S_3 || S_4 ||

By estimating S_1, S_2, ... we will provide a reasonable estimate of the maximum significance that can be reached at the LHC, and will also show which factors are most important in controlling this maximum significance.
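Each entry S_i can be estimated from the expected signal and background yields with the standard Asimov approximation; a minimal sketch (the yields below are placeholders, not results of this study):

```python
import math

def asimov_significance(s, b):
    """Median expected discovery significance for s signal events on top of
    b background events (Asimov approximation)."""
    return math.sqrt(2.0 * ((s + b) * math.log(1.0 + s / b) - s))

# Placeholder yields, NOT results of this study; for s << b this
# approaches the naive s/sqrt(b) estimate.
print(asimov_significance(10.0, 100.0))
```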

The strategy is to organize the work into a validation procedure (first step) and a pheno study (second step).

=== III. Validation procedure ===

Idea: apply the MEM to reconstruct the mass of the Higgs, or to reconstruct the fraction of signal events, '''UNDER CONTROLLED CONDITIONS'''.

"Under controlled conditions" means that the events in the prepared samples follow EXACTLY the probability distribution function that is used to evaluate the matrix element weights.

In this way, one can set up the calculation of the weights and check that NO BIAS is observed in the final result, and hence validate the whole procedure in the absence of systematic errors. The procedure for the calculation of the weights should be the same as the one used later on for the second step (i.e. one should consider a finite resolution on jet energies, correct for ISR if necessary, ...).

Only the samples of events are prepared in an artificial way, so that we control exactly how the events are distributed in phase space. In particular:
 - the energies of the final-state partons are smeared exactly according to the shape of the transfer functions
 - the effect of ISR (if taken into account) is to boost the events in the transverse plane, according to a known distribution in pT
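The artificial event preparation described above can be sketched as follows; this is a toy illustration only (a single Gaussian stands in for the real TF shape, and both function names are hypothetical):

```python
import math
import random

def smear_energy(e_parton, resolution=0.5):
    """Smear a parton energy according to the assumed TF shape.
    Toy model: single Gaussian with sigma = resolution * sqrt(E)."""
    return random.gauss(e_parton, resolution * math.sqrt(e_parton))

def transverse_kick(px, py, kick_pt, phi):
    """Toy stand-in for the ISR effect: give the event system a known
    transverse-momentum kick drawn from an assumed pT distribution."""
    return px + kick_pt * math.cos(phi), py + kick_pt * math.sin(phi)
```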

The idea is that once this procedure is validated, it can be used in a reliable way for the pheno study.

The subsection below gives the '''work plan''' for the validation procedure.

===== A. Reconstruction of m_H (signal events only) =====
====== 1. Parton level + infinite resolution (DONE) ======

Reconstruction of the mass of the Higgs using a pure sample of parton-level signal events, and considering a narrow transfer function for the jet energies.

This is done (Priscilla): there is no bias in the reconstructed mass of the Higgs -> OK.

====== 2. Parton level + finite resolution (DONE) ======

Generation of a parton-level event sample (no showering), smearing of the parton energies according to a "broad" transfer function, and reconstruction of the mass of the Higgs with the same TF that was used to smear the parton energies.

Things to keep in mind:
   * when we smear the energy of the partons, we are forced to apply some cuts on the jet energies -> one should include the acceptance term in the likelihood.
   * to save some time, we are currently considering only the gluon-gluon initiated process (for this sanity-check phase, that is OK). But then we should also consider '''only''' the gluon-gluon initiated process when generating the parton-level events; otherwise we may introduce a bias.
   * one should also check the convergence of the evaluation of the matrix element weights in the regime where the resolution on b-jet energies is much worse than the width of the Higgs. This is a delicate point, because the Higgs decay process %$ H \rightarrow b \bar b $% is overconstrained. By default, MadWeight considers 2 integration channels:
   1. in channel 1, the invariant mass of the Higgs is mapped onto one variable of integration,
   1. in channel 2, the energies of the b-quarks originating from the Higgs are mapped onto 2 variables of integration, but the invariant mass of the Higgs is not.
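The acceptance term mentioned in the first bullet can be estimated by Monte Carlo: generate parton-level events, smear them, and count the fraction passing the cuts; that fraction must normalize the per-event likelihood. A minimal sketch with hypothetical callables:

```python
import random

def acceptance_mc(draw_event, smear, passes_cuts, n=20000):
    """Monte Carlo estimate of the acceptance that must divide the
    per-event likelihood once cuts are applied after smearing.
    draw_event, smear and passes_cuts are user-supplied callables
    (hypothetical interfaces, for illustration only)."""
    n_pass = sum(passes_cuts(smear(draw_event())) for _ in range(n))
    return n_pass / n

# Toy check: uniform "events" in [0, 1), no smearing, cut at 0.5
random.seed(0)
acc = acceptance_mc(random.random, lambda e: e, lambda e: e > 0.5)
```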

When the width of the Higgs is orders of magnitude smaller than the resolution in jet energies, we expect channel 1 to be the most appropriate. This is indeed the case. But I also observed that running with '''only''' channel 1 makes a big difference: when I compare the values of the weights calculated with the one-channel integrator and with the two-channel integrator, the weights are systematically underestimated in the second case. The difference is quite sizable when we look at the likelihood: it is roughly 4 units of %$ \log(L) $%. So I would suggest running MadWeight with only one channel of integration (the first one). This can be done by copying the files main_code_one_channel.f and data_one_channel.inc (available in the DropBox) into MW_P1_gg_bbxbmu+vmbxmu-vmx.

UPDATE (06/10/2011, PA): I put a report in the DropBox ("Validation_A2") with a description of the results. I think these are good enough to move to the next step.

===== B. Testing S+B hypothesis against B-only hypothesis [or reconstructing S/(S+B)] =====

The idea is the same, i.e. generate a sample of B+S events for which we know the probability distribution exactly, then use MadWeight to reconstruct the fraction of signal events and check that there is no bias.
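Once MadWeight provides per-event signal and background weights P_S(x_i) and P_B(x_i), the signal fraction f can be reconstructed by maximizing the mixture likelihood L(f) = prod_i [f P_S(x_i) + (1-f) P_B(x_i)]. A minimal sketch (a grid scan stands in for a proper minimizer):

```python
import math

def neg_log_likelihood(f, w_sig, w_bkg):
    """-log L(f) for the mixture model f*P_S + (1-f)*P_B."""
    return -sum(math.log(f * ws + (1.0 - f) * wb)
                for ws, wb in zip(w_sig, w_bkg))

def fit_signal_fraction(w_sig, w_bkg, steps=1000):
    """Scan f on a grid; a real analysis would use a proper minimizer
    and also extract the uncertainty from the likelihood curve."""
    grid = [i / steps for i in range(1, steps)]
    return min(grid, key=lambda f: neg_log_likelihood(f, w_sig, w_bkg))
```

Checking that the fitted f is unbiased over many pseudo-experiments is exactly the validation test described above.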

Inputs:

   * for the transfer function, we consider the TF described in A.2.

   * for the cuts, we consider the cuts given in V.A. below.
     ATTENTION: since we will smear the energy of the final-state partons,
     we need to apply a milder pT cut on the partons (pT > 15 GeV).
     The final cut pT(jet) > 30 GeV is applied after the smearing procedure.

   * for the background processes: as a first check, only one background: gg > tt~ + bb~

=== IV. Pheno study ===

 - Redo the analysis, but with samples of events that are as realistic as possible.
 - Evaluate all the systematic uncertainties.

=== V. Inputs of the analysis ===

We will do the analysis for the LHC at '''14 TeV'''. There are several input parameters that need to be fixed right now.

Even during the validation procedure, it will be very useful if we consider realistic values for the parameters associated with the final-state cuts, the reconstruction efficiencies, the b-tagging and the energy resolution. In this way, the significance that we obtain at the end of the validation procedure will not be completely unrealistic, and it will give us some insight for the second part (e.g. if we find that the significance is extremely low for a given signature even under ideal conditions, it may not be worth pushing the analysis further for this signature).

For the theoretical parameters, we can stick to the default param_card.dat file on the web.

===== A. Cuts =====

We need to agree on a set of cuts to be applied to the jets and to the leptons. I think a reasonable set of cuts is (see http://arxiv.org/pdf/1106.0902.pdf):

pT(jet) > 30 GeV, |eta(jet)| < 2.4, delta R(p_i, p_j) > 0.3 with p_i, p_j = jet or lepton

pT(e) > 20 GeV, |eta(e)| < 2.5, pT(mu) > 30 GeV, |eta(mu)| < 2.5

''Parton-level cuts vs reconstructed-level cuts'':

In the validation procedure, parton-level cuts are different from reconstructed-level cuts because:
 - parton-level events are boosted in the transverse plane (if ISR is taken into account)
 - final-state parton energies are smeared according to the shape of the transfer function

So one needs to apply looser cuts at the parton level, and then apply the correct set of cuts at the "reconstructed level".
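The reconstructed-level jet cuts listed above can be applied with a small helper; a sketch (the dict-based object representation is an assumption of this illustration, and the lepton pT/eta cuts are omitted for brevity):

```python
import math

def delta_r(o1, o2):
    """Separation in the (eta, phi) plane, with dphi wrapped to (-pi, pi]."""
    dphi = (o1["phi"] - o2["phi"] + math.pi) % (2.0 * math.pi) - math.pi
    return math.hypot(o1["eta"] - o2["eta"], dphi)

def passes_reco_cuts(jets, leptons):
    """pT(jet) > 30 GeV, |eta(jet)| < 2.4, and delta R > 0.3 between any
    pair of jets/leptons, as listed above."""
    if any(j["pt"] <= 30.0 or abs(j["eta"]) >= 2.4 for j in jets):
        return False
    objs = jets + leptons
    return all(delta_r(objs[i], objs[k]) > 0.3
               for i in range(len(objs)) for k in range(i + 1, len(objs)))
```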
===== B. Transfer function =====

For the parametrization of the transfer functions, we can stick to the usual assumptions: a superposition of two Gaussian distributions for the energy of the jets, and a delta function for all other visible quantities.

The parametrization of the TF for jet energies is given by

%$ W(\delta) = \frac{1}{\sqrt{2\pi}\,(p_2 + p_3 p_5)} \left[ \exp\left(-\frac{(\delta - p_1)^2}{2 p_2^2}\right) + p_3 \exp\left(-\frac{(\delta - p_4)^2}{2 p_5^2}\right) \right] $%

with %$ \delta = E_p - E_j $% (parton-level energy minus reconstructed energy). The parameters %$ p_i $% can be assumed to depend linearly on the '''parton-level''' energy (%$ p_i = a_i + b_i E_p $%).

|| || '''a_i''' || '''b_i''' ||
|| p_1 || XXXX || XXXX ||
|| p_2 || XXXX || XXXX ||
|| p_3 || XXXX || XXXX ||
|| p_4 || XXXX || XXXX ||
|| p_5 || XXXX || XXXX ||

It would be good to choose values for the parameters %$ a_i, b_i $% in the TF that capture the typical resolution of the CMS detector. Olivier, do you think you could get these values?

ANSWER from Olivier:

In fact we can't use the CMS TFs since they are not public yet. If we use them, we will create trouble for Vincent/Arnaud (even more if they sign the paper). So the best we can do is to use the TFs computed for Delphes. Arnaud computed TFs which are very close to the CMS resolution, so this should be OK.
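Assuming the double-Gaussian parametrization with p_i = a_i + b_i E_p discussed above, the TF can be evaluated as below; the coefficient lists passed in are placeholders to be replaced by the Delphes-derived values:

```python
import math

def tf_jet_energy(e_parton, e_jet, a, b):
    """Double-Gaussian transfer function in delta = E_parton - E_jet,
    with energy-dependent parameters p_i = a_i + b_i * E_parton.
    'a' and 'b' are length-5 coefficient lists (placeholders here)."""
    delta = e_parton - e_jet
    p1, p2, p3, p4, p5 = (ai + bi * e_parton for ai, bi in zip(a, b))
    norm = math.sqrt(2.0 * math.pi) * (p2 + p3 * p5)
    g1 = math.exp(-((delta - p1) ** 2) / (2.0 * p2 ** 2))
    g2 = p3 * math.exp(-((delta - p4) ** 2) / (2.0 * p5 ** 2))
    return (g1 + g2) / norm
```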

===== C. Efficiencies =====

At some point, we will also need to know the typical reconstruction efficiencies for each channel (taking into account the b-tagging). If we evaluate the matrix element weights for each channel and under each assumption separately, one can incorporate the relative efficiencies for each channel when the likelihood is evaluated. So we don't need to address this problem right now.