Systematic uncertainties due to the limited size of calibration samples


The LHC collider

To understand the LHC’s primary physics goal we have to step back in time to the Large Electron-Positron Collider (LEP) era of the 1990s. By then UA1 and UA2 at the Super Proton Synchrotron (SPS) proton-antiproton collider had discovered the W (2, 3) and Z (4, 5) bosons, while the LEP detectors were running precision studies of the electroweak bosons (6, 7). When the LHC was approved (8), two pieces of the Standard Model particle zoo were missing: the top quark and the Higgs boson. The CKM matrix predicted a third generation of quarks, and the E288 experiment had detected the bottom quark (9), so the discovery of the top quark was only a matter of time (and collider energy). Indeed, about two months after the LHC’s approval, CDF (10) and D0 (11) announced the discovery of the heaviest SM quark. The final piece sought by the LHC (12), the Higgs boson, was at that point still a hypothetical particle invoked to explain spontaneous symmetry breaking in the electroweak sector; its discovery would conclusively establish the Higgs mechanism.
At the time of its proposal the LHC was focused on the origin of mass (the discovery of the Higgs boson), but it was designed as an exploratory machine, with the objective of being sensitive to mass scales up to 1 TeV. Because protons are composite particles, only a fraction of the beam energy enters each parton collision, so this required an accelerator with a significantly higher centre-of-mass energy and, equally important, a much higher sustained instantaneous luminosity than previous generations of colliders. The LHC design goals were set at a centre-of-mass energy of 14 TeV and a peak instantaneous luminosity of L = 10^34 cm^-2 s^-1.
The former is achieved via the injection chain shown in Fig. 3.1. The latter constrained the LHC to be a proton-proton collider, since the highly inefficient production of antiprotons ruled out another proton-antiproton collider like the Tevatron, which was still operating when the LHC was proposed.

bb̄ production at the LHCb interaction point

The precision of a particle physics measurement is limited by two general sources of uncertainty, systematic and statistical. Reducing the former depends on detector design and analysis technique, but the latter is within the control of the collider, which can produce as many interesting physics events as possible. The production rate of a physics process in a collider is

N = L σ, (3.1)

where L is the instantaneous luminosity, σ the cross-section of the physics process and N the resulting production rate. Naturally, particle physics experiments want as much luminosity delivered as possible, albeit at a stable rate that does not saturate detector occupancy or overwhelm event reconstruction.
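As a numerical sketch of Eq. (3.1) — with illustrative numbers that are assumptions, not values quoted in this chapter — a levelled LHCb luminosity of 4×10^32 cm^-2 s^-1 and a bb̄ cross-section within acceptance of order 100 µb give a bb̄ rate of tens of kHz:

```python
# Production rate N = L * sigma, with illustrative (assumed) numbers:
# a levelled LHCb luminosity of ~4e32 cm^-2 s^-1 and a bb cross-section
# within acceptance of order 100 microbarn.
MICROBARN_TO_CM2 = 1e-30  # 1 barn = 1e-24 cm^2

lumi = 4e32          # instantaneous luminosity [cm^-2 s^-1] (assumed)
sigma_bb = 100.0     # bb cross-section in acceptance [microbarn] (assumed)

rate = lumi * sigma_bb * MICROBARN_TO_CM2  # events per second
print(f"bb production rate ~ {rate:.0f} Hz")  # ~40000 Hz
```

The scale of this number is exactly why the trigger and event reconstruction must cope with a sustained, not just peak, event rate.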

Instantaneous Luminosity at the LHCb

In the LHC, the instantaneous luminosity for head-on proton-proton collisions at an interaction region is

L = (N_b² n_b f_rev γ_r) / (4π ε_n β*) · F, (3.2)

where N_b is the number of particles per bunch, n_b the number of bunches per beam, f_rev the revolution frequency, γ_r the relativistic gamma factor, ε_n the normalised transverse beam emittance, β* the beta function at the collision point and F the geometric luminosity reduction factor due to the crossing angle at the interaction point:

F = [1 + (θ_c σ_z / (2σ*))²]^{-1/2},

with θ_c the full crossing angle, σ_z the RMS bunch length and σ* the transverse RMS beam size at the interaction point.
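Plugging the nominal LHC design parameters into Eq. (3.2) recovers the design luminosity. The parameter values below are assumptions taken from the LHC design report, not from this chapter:

```python
import math

# Sketch of Eq. (3.2) with nominal LHC design parameters (assumed values
# from the LHC design report, not quoted in this chapter).
N_b   = 1.15e11    # protons per bunch
n_b   = 2808       # bunches per beam
f_rev = 11245.0    # revolution frequency [Hz]
gamma = 7461.0     # relativistic gamma at 7 TeV beam energy
eps_n = 3.75e-6    # normalised transverse emittance [m]
beta  = 0.55       # beta* at the interaction point [m]

# Geometric reduction factor F from the crossing angle
theta_c = 285e-6   # full crossing angle [rad]
sigma_z = 7.55e-2  # RMS bunch length [m]
sigma_t = 16.7e-6  # transverse RMS beam size at the IP [m]
F = 1.0 / math.sqrt(1.0 + (theta_c * sigma_z / (2.0 * sigma_t)) ** 2)

L = N_b**2 * n_b * f_rev * gamma / (4.0 * math.pi * eps_n * beta) * F  # [m^-2 s^-1]
L_cm = L * 1e-4  # convert to cm^-2 s^-1
print(f"F = {F:.2f}, L = {L_cm:.2e} cm^-2 s^-1")  # close to the design 1e34
```

Note that the crossing angle alone costs roughly 15% of the head-on luminosity through F.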

bb̄ cross-section at the LHC

In addition to high luminosity, the LHC pp collisions have a high bb̄ production cross-section, which makes the LHC a suitable collider for B-physics. Fig. 3.4 plots the production cross-sections for various processes at hadron colliders, showing that bb̄ production is orders of magnitude larger than the corresponding electroweak or Higgs production at the LHC. Note also that the bb̄ production cross-section increases with collider energy, approximately doubling between the 7 TeV of Run 1 and the 13 TeV of Run 2.

LHCb Tracking Performance

LHCb has published dedicated papers on tracking performance, such as Ref. (32). Nevertheless, we present some important tracking performance metrics here. The LHCb tracking performance in J/ψ → μ+μ− decays from b hadrons was extensively studied in Ref. (33). This study looked at the long-track reconstruction efficiency and the long-track momentum resolution, shown in Fig. 3.14, and how they affect LHCb’s mass resolution, shown in Fig. 3.15. In these plots, the LHCb long-track reconstruction achieves a 0.5% momentum resolution for momenta below 20 GeV, slowly degrading to about 1% for particles around 150 GeV. The long-track reconstruction efficiency exceeds 95% except for tracks in the p < 5 GeV momentum region. These figures translate into a mass resolution of about 12.5 MeV in J/ψ → μ+μ− decays.
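A naive error-propagation sketch shows that these numbers are mutually consistent. Assuming a two-body decay with effectively massless daughters, m² ≈ 2 p₁ p₂ (1 − cos θ), and neglecting the angular resolution term, uncorrelated momentum errors give σ_m/m ≈ (σ_p/p)/√2:

```python
import math

# Naive check of how the ~0.5% momentum resolution propagates to the
# J/psi -> mu+ mu- mass resolution. For m^2 ~ 2 p1 p2 (1 - cos theta),
# uncorrelated momentum errors on the two muons give
# sigma_m / m ~ (sigma_p / p) / sqrt(2), ignoring the angular term.
m_jpsi = 3096.9    # J/psi mass [MeV]
dp_over_p = 0.005  # long-track momentum resolution at low momentum

sigma_m = m_jpsi * dp_over_p / math.sqrt(2.0)
print(f"sigma_m ~ {sigma_m:.1f} MeV")  # ~11 MeV, same ballpark as 12.5 MeV
```

The small remaining gap to the measured 12.5 MeV is plausibly the neglected angular and vertexing contributions.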

Software-based Particle Identification

The PID reconstruction strategy depends on the purpose and the available computing resources. The earliest, fast hardware triggers rely on requirements within single subdetectors, and they will be the main topic of discussion in the L0 trigger (Sec. 3.3.3). The software triggers and offline physics analyses have access to two more computationally demanding algorithms: a log-likelihood variable and a multivariate (MVA) classifier probability. The log-likelihood approach sums the individual log-likelihoods from each subdetector linearly into a global likelihood. A drawback of this approach is that correlations between subdetectors are not accounted for. In contrast, the MVA classifiers infer a single, global probability based on information from all particle identification subdetectors, which accounts for correlations among subdetectors. Interestingly, some physics analyses report better performance with neural network probabilities than with global likelihoods. This is the case, for example, in Σ+ → pμ+μ− decays, as shown in Fig. 3.24.
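The linear combination described above can be sketched in a few lines. The per-subdetector values below are invented for illustration; only the plain summation, which by construction ignores inter-detector correlations, reflects the method:

```python
# Minimal sketch of the global log-likelihood combination. Each subdetector
# contributes a delta-log-likelihood for the kaon vs pion hypothesis
# (values here are hypothetical), and the combined DLL is their plain sum,
# which ignores correlations between subdetectors.
dll_K_pi = {
    "RICH": 4.2,   # assumed per-subdetector DLL(K - pi) values
    "CALO": 0.3,
    "MUON": -0.1,
}

combined_dll = sum(dll_K_pi.values())
is_kaon_like = combined_dll > 0.0
print(f"combined DLL(K-pi) = {combined_dll:.1f}, kaon-like: {is_kaon_like}")
```

An MVA classifier would instead take these same per-subdetector quantities as input features and learn their correlations, which is where its performance edge comes from.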
Despite their edge, the MVA algorithms still rely on the individual subdetector likelihoods as input variables. Among them, the most important likelihoods are provided by the RICH reconstruction. The RICH reconstruction begins with the charged tracks found by the tracking algorithms and assumes a mass hypothesis for each track. Given the mass hypothesis, the RICH reconstruction then computes the expected photon yield at each HPD cell, taking photocathode efficiencies and electronics noise into account. The expected HPD photon yield is then compared to the detected signature to assign a likelihood, assuming Poisson statistics. For each track, the RICH reconstruction iterates over five possible particle species: electrons, muons, kaons, pions and protons. The mass hypothesis with the highest likelihood is assigned to the track.
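The hypothesis scan described above amounts to maximising a Poisson likelihood over five species. A toy sketch with invented photon yields (the cell counts and expected yields are hypothetical, not real RICH numbers):

```python
import math

# Toy sketch of the RICH likelihood step: for each mass hypothesis we have
# an expected photon yield per HPD cell (invented numbers), compare it with
# the observed hit counts under Poisson statistics, and keep the hypothesis
# with the largest log-likelihood.
observed = [3, 0, 2, 1]  # detected photons in four cells (toy data)

expected = {  # expected yields per cell for each hypothesis (toy values)
    "electron": [0.5, 0.4, 0.5, 0.3],
    "muon":     [1.0, 0.2, 0.8, 0.5],
    "pion":     [2.8, 0.1, 1.9, 0.9],
    "kaon":     [1.5, 0.3, 1.0, 0.6],
    "proton":   [0.7, 0.5, 0.6, 0.4],
}

def poisson_loglik(n_obs, mu_exp):
    # log P(n | mu) summed over cells, dropping the constant log(n!) term
    return sum(n * math.log(mu) - mu for n, mu in zip(n_obs, mu_exp))

best = max(expected, key=lambda h: poisson_loglik(observed, expected[h]))
print(f"assigned hypothesis: {best}")  # -> pion
```

In the real reconstruction the expected yields come from the Cherenkov-ring prediction per track, so tracks share photons and the maximisation is done globally rather than cell-by-cell as in this toy.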


Table of contents :

1 Introduction 
2 The World as We Know It : Theory 
2.1 Standard Model of Particle Physics
2.2 Limitations of the Standard Model
2.3 Test of Lepton Universality with b → s ℓ+ℓ− Decays
2.4 Measurements of b → s ℓ+ℓ− Lepton Universality Ratios
2.4.1 B-factories
2.4.2 LHCb
2.4.3 Experimental Results
2.5 New Physics Interpretation
2.5.1 Model Independent Analysis of Wilson Coefficients
2.5.2 New Physics Explanation
2.5.3 Common Explanation with RD and RD* Anomalies
2.6 Conclusion
2.7 Bibliography
3 Beauty and the Machine: LHCb detector at LHC 
3.1 The LHC collider
3.2 bb̄ production at the LHCb interaction point
3.2.1 Instantaneous Luminosity at the LHCb
3.2.2 bb̄ cross-section at the LHC
3.3 The LHCb Detector
3.3.1 Tracking System
3.3.2 Particle Identification System
3.3.3 LHCb Trigger
3.4 Conclusion
3.5 Bibliography
4 Sensing an Imbalance in the Force : Test of Lepton Flavour Universality with RK and RK*
4.1 Analysis Strategy
4.2 Data Sample
4.2.1 Real Data Samples
4.2.2 Triggers
4.2.3 Stripping
4.2.4 Simulation Sample
4.3 Selections
4.3.1 Generic Selections
4.3.2 Exclusive Background Selections
4.3.3 Multivariate Classifiers
4.3.4 HOP cut optimisation
4.4 Corrections and Efficiencies
4.4.1 Corrections to Simulation
4.4.2 Efficiencies
4.5 Mass fits
4.5.1 Mass Ranges
4.5.2 Simulated Shapes
4.5.3 Rare and resonant mode signal PDFs
4.5.4 Background PDFs in B0 modes
4.5.5 Background PDFs in B+ modes
4.5.6 Background Normalisation Constraints
4.5.7 Configurations of the Rare Mode Fits
4.5.8 Fits to Real Data
4.5.9 Fitter validation with pseudoexperiments
4.6 Systematic Uncertainty
4.6.1 Systematic uncertainties due to the limited size of calibration samples
4.6.2 Systematic uncertainties due to simulation corrections and the eciency measurement
4.6.3 Systematic uncertainty due to the mass fit
4.6.4 Residual Non-flatness of rJ/ψ
4.6.5 Summary of Systematic Uncertainties
4.7 Cross checks
4.7.1 Integrated rJ/ψ
4.7.2 Flatness of rJ/ψ
4.7.3 Double ratio Rψ(2S)
4.8 Results and Conclusion
4.9 Bibliography
5 The Fault in Our Fitter: Binned Fits with RooFit 
5.1 RooFit and Likelihood Maximisation
5.1.1 A Historical Introduction to RooFit
5.1.2 Likelihoods
5.1.3 Binned vs Unbinned Time Complexity
5.1.4 RooFit’s Binned Likelihood
5.2 The Independent Study
5.2.1 Strategy
5.2.2 Results
5.2.3 TF1 Cross check
5.3 Numerical Integration in RooFit’s Binned Likelihood
5.3.1 Choice of Integral Algorithms
5.3.2 Results
5.4 Conclusion
5.5 Bibliography
Appendices
A HLT1 Lines Efficiencies
B B Momentum Fraction in Rare and Control modes 
C Stripping Cuts 
D Corrections 
D.1 HLT TIS Lines
D.2 Invariant mass fits to m(ee)
D.3 Measurements of and s
E Mass Fits 
E.1 Shapes of Simulated Signal
E.2 Shapes of Simulated Backgrounds
E.2.1 B0 Mode Backgrounds
E.2.2 B+ Mode Backgrounds
E.3 Rare Mode Fits to Data
F Cross Checks 
F.1 Integrated rJ/ψ
F.2 Differential rJ/ψ
F.3 Rψ(2S)
G List of figures
