Suppose I generate 50k events and divide them into two subprocesses, each with 50% probability. Naively each subprocess would get 25000 +/- sqrt(25000) = 25000 +/- 158 events. However, the total number of events is fixed, so the uncertainties of the two subprocesses are anticorrelated. For a 50/50 split this reduces the uncertainty by a factor sqrt(2):

25000 +/- 158/Sqrt[2] = 25000 +/- 112 events per subprocess.

You can easily convince yourself that the uncertainty has to be smaller by taking the limit of a single subprocess: it always contains all 50k events, '''without any spread'''.

Wikipedia actually gives a formula for the more general case. If the total number of events is fixed at 50000 and divided into two sets, the distribution over the two sets is binomial (see the second bullet point of http://en.wikipedia.org/wiki/Poisson_distribution#Related_distributions ), and the uncertainties (square roots of the binomial variances, http://en.wikipedia.org/wiki/Binomial_distribution#Mean_and_variance ) are

sigma_1 = Sqrt[ 50000 * p_1 * ( 1 - p_1 ) ]
sigma_2 = Sqrt[ 50000 * p_2 * ( 1 - p_2 ) ],

where

p_1 = lambda_1/(lambda_1 + lambda_2)
p_2 = lambda_2/(lambda_1 + lambda_2)

and lambda_j is the expected number of events in subprocess j (for a Poisson process this equals its variance). Since p_2 = 1 - p_1, we get sigma_1 = sigma_2: the uncertainties in the two sets are necessarily equal. As expected, this indeed gives 112 for sigma_1 and sigma_2 in the example above.
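The binomial formula above is straightforward to compute. A minimal sketch in Python (the function name `split_uncertainties` is my own, not from the text; it takes the fixed total and the expected counts lambda_j of each subprocess):

```python
import math

def split_uncertainties(n_total, lambdas):
    """Binomial uncertainties on the per-subprocess event counts when a
    fixed total of n_total events is divided among subprocesses whose
    expected counts are given by lambdas.

    sigma_j = sqrt( n_total * p_j * (1 - p_j) ),  p_j = lambda_j / sum(lambdas)
    """
    total = sum(lambdas)
    return [math.sqrt(n_total * (lam / total) * (1.0 - lam / total))
            for lam in lambdas]

# 50k events split 50/50: each subprocess gets 25000 +/- ~112 events,
# not the naive 25000 +/- sqrt(25000) ~ 158.
sigma_1, sigma_2 = split_uncertainties(50000, [1.0, 1.0])
print(sigma_1, sigma_2)
```

For a 50/50 split this reproduces sqrt(25000)/sqrt(2) ≈ 112; for unequal splits it gives the general binomial result, with sigma_1 = sigma_2 since p_2 = 1 - p_1.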