Contact
Name
Pavel Demin
Position
Research scientist. Funding: UCL.

Email
pavel.demiclouvain.be
Address
Centre for Cosmology, Particle Physics and Phenomenology - CP3
Université catholique de Louvain
2, Chemin du Cyclotron - Box L7.01.05
B-1348 Louvain-la-Neuve
Belgium
Phone
+32 10 47 3165
Office
E.355
Projects
I am involved in the following research directions:

a C++ software package to compute Matrix Element weights: MoMEMta

MoMEMta is a C++ software package to compute Matrix Element weights. Designed in a modular way, it covers the needs of experimental analysis workflows at the LHC. MoMEMta provides working examples for the most common final states ($t\bar{t}$, $WW$, ...). Expert users are free to configure their MEM computation at every level.
MoMEMta is based on:

- C++, ROOT, Lua scripting language
- Cuba (Monte-Carlo integration library)
- External PDFs (LHAPDF by default)
- External Matrix Elements (currently provided by our MadGraph C++ exporter plugin)
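Schematically, the Matrix Element weight evaluated by such a tool for an event with reconstructed observables $x$ under hypothesis $\alpha$ is (generic MEM notation, not necessarily MoMEMta's exact conventions):

```latex
P(x \mid \alpha) = \frac{1}{\sigma_\alpha}
\int \mathrm{d}\Phi(y)\, \mathrm{d}x_1\, \mathrm{d}x_2\;
f(x_1)\, f(x_2)\, \bigl|\mathcal{M}_\alpha(y)\bigr|^2\, W(x \mid y)
```

where $y$ are the parton-level momenta, $x_1, x_2$ the momentum fractions of the initial partons, $f$ the parton distribution functions, $\mathcal{M}_\alpha$ the matrix element for hypothesis $\alpha$, $W(x \mid y)$ the transfer function relating parton-level to reconstructed quantities, and $\sigma_\alpha$ a normalisation cross section. Cuba carries out the multi-dimensional integration.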

Development of a framework for fast simulation of a generic collider experiment: Delphes

The observability of new phenomenological models in high-energy experiments is delicate to evaluate, owing to the complexity of the detectors, the DAQ chain and the associated software. Delphes is a framework for the fast simulation of a general-purpose collider experiment. The simulation includes a tracking system, a magnetic field, calorimetry, a muon system, and possibly very forward detectors arranged along the beamline. The framework is interfaced to standard event-generator file formats and outputs observable analysis data objects. The simulation takes into account the detector resolutions, the usual reconstruction algorithms for complex objects (FastJet) and a simplified trigger emulation. The detection of very forward scattered particles relies on transport through the beamlines with the Hector software.
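The core idea of fast simulation, applying parametrised resolutions rather than simulating the full detector response, can be sketched as follows (a toy illustration, not Delphes code; the 5% relative resolution is an arbitrary assumption):

```python
import random

def smear_pt(pt_true, resolution=0.05, rng=random.Random(42)):
    """Apply a Gaussian relative resolution to a true transverse momentum,
    mimicking what a fast simulation does for each reconstructed object."""
    return max(0.0, pt_true * rng.gauss(1.0, resolution))

# Smear a few generator-level pT values (GeV) with a 5% tracker-like resolution.
smeared = [smear_pt(pt) for pt in (20.0, 50.0, 120.0)]
print(smeared)
```

A full fast simulation applies such parametrisations per subdetector and per particle type, on top of efficiency maps and reconstruction algorithms.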

NA62 computing

NA62 will look for rare kaon decays at the SPS accelerator at CERN. A total of about $10^{12}$ kaon decays will be produced in two to three years of data taking. Even though the topology of the events is relatively simple and the amount of information per event is small, the volume of data to be stored per year will be of the order of 1000 TB. A further 500 TB/year is expected from simulation.
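These volumes can be cross-checked with simple arithmetic (the per-event size below is inferred from the quoted numbers, not an official figure):

```python
# Rough consistency check of the quoted NA62 data volumes.
n_decays = 1e12           # total kaon decays over the full run
years = 2.5               # "two to three years" of data taking
raw_per_year_tb = 1000.0  # quoted raw-data volume per year
sim_per_year_tb = 500.0   # quoted simulation volume per year

# Implied average storage per recorded decay (1 TB = 1e9 kB).
kb_per_event = raw_per_year_tb * years * 1e9 / n_decays
total_tb = (raw_per_year_tb + sim_per_year_tb) * years
print(kb_per_event, total_tb)  # ~2.5 kB/event, 3750 TB overall
```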

Profiting from the synergy within CP3 in sharing computing resources, our group is participating in the definition of the NA62 computing scheme. CP3 will also be one of the grid virtual organizations of the experiment.

External collaborators: INFN (Rome I), University of Birmingham, University of Glasgow.

The CMS silicon strip tracker upgrade

Development of the "Phase II" upgrade of the CMS silicon strip tracker.

More precisely, we are involved in the development of the uTCA-based DAQ system and in the testing and validation of the first prototype modules. We take an active part in the various test-beam campaigns (CERN, DESY, ...).

This activity will potentially make use of the UCL cyclotron, the probe stations and the SYCOC setup (SYstème de mesure de COllection de Charge, a charge-collection measurement system) to test the response to laser light, radioactive sources and beams.

The final goal is to take a leading role in the construction of part of the CMS Phase-II tracker.

External collaborators: CRC and CMS collaboration.

Worldwide LHC Computing Grid: the Belgian Tier2 project

The Worldwide LHC Computing Grid (WLCG) is a globally distributed computing infrastructure, coordinated by software middleware, that allows seamless use of shared storage and computing resources.

About 10 PB of data are produced every year by the experiments running at the LHC. These data must be processed, through iterative and progressively refined calibration and analysis passes, by a large scientific community that is widely distributed geographically.

Instead of concentrating all the necessary computing resources in a single location, the LHC experiments have decided to set up a network of computing centres distributed all over the world.

The overall WLCG computing resources needed by the CMS experiment alone in 2016 amount to about 1500 kHS06 (HEP-SPEC06) of computing power, 90 PB of disk storage and 150 PB of tape storage. Working in the context of the WLCG translates into seamless access to shared computing and storage resources: end users do not need to know where their applications run. The choice is made by the underlying WLCG software on the basis of the availability of resources, the demands of the user application (CPU, input and output data, ...) and the privileges held by the user.

Back in 2005, UCL proposed the WLCG Belgian Tier2 project, bringing together the six Belgian universities involved in CMS. The Tier2 project consists of contributing to the WLCG by building two computing centres, one at UCL and one at the IIHE (ULB/VUB).

The UCL site of the WLCG Belgian Tier2 is deployed in a dedicated room close to the cyclotron control room of the IRMP Institute and is currently a fully functional component of the WLCG.

The UCL Belgian Tier2 project also aims to integrate other scientific computing projects, bring them onto the grid, and share resources with them. The projects currently integrated in the UCL computing cluster are MadGraph/MadEvent, NA62 and Cosmology.

External collaborators: CISM (UCL), Pascal Vanlaer (Belgium, ULB), Lyon computing centre, CERN computing centre.


Publications in CP3
All my publications on Inspire

2013

DELPHES 3, A modular framework for fast simulation of a generic collider experiment
de Favereau, J. and others
Published in JHEP. Refereed paper, 25th July.

2010

CMS Tracking Performance Results from early LHC Operation
CMS collaboration
Published in Eur. Phys. J. C 70 (2010) 1165-1192. Refereed paper, 21st December.

2009

Alignment of the CMS Silicon Tracker during Commissioning with Cosmic Rays
CMS Collaboration
CMS PAPER CFT-09-003, published in JINST. Refereed paper, 26th December.

Commissioning and Performance of the CMS Pixel Tracker with Cosmic Ray Muons
CMS Collaboration
CMS-CFT-09-001, published in JINST. Refereed paper, 26th December.

CMS Data Processing Workflows during an Extended Cosmic Ray Run
CMS Collaboration
Published in JINST. Refereed paper, 21st December.

