Statistical inference
=====================

The procedure of drawing conclusions about model parameters from randomly
distributed data is known as *statistical inference*. This section explains
the mathematical theory behind statistical inference. If you are already
familiar with this subject feel free to skip to the next section.

General case
------------

For a statistician a model is simply a *probability density function* (PDF)
:math:`f`, which describes the (joint) distribution of a set of
*observables* :math:`\vec x=(x_1,\ldots,x_n)` in the sample space
:math:`\mathcal X=\R^n` and depends on some *parameters*
:math:`\vec\omega\in\Omega`. The *parameter space* :math:`\Omega` can be any
(discrete or continuous) set and :math:`f` can be any function of
:math:`\vec x` and :math:`\vec\omega` with

.. math::
   :label: fnorm

   \int d^n\vec x\,f(\vec x,\vec\omega) = 1

for all :math:`\vec\omega\in\Omega`. Statistical inference can help us
answer the following questions:

1. Which choice of the parameters is most compatible with some observed
   data :math:`\vec x_0\in\mathcal X`?

2. Based on the observed data :math:`\vec x_0`, which parts of the
   parameter space :math:`\Omega` can be ruled out (and at which confidence
   level)?

Let us begin with the first question. What we need is a function
:math:`\hat{\vec\omega}:\R^n\to\Omega` which assigns to each observable
vector :math:`\vec x` an estimate of the model parameters. Such a function
is called an *estimator*. There are different ways to define an estimator,
but the most popular choice (and the one that myFitter supports) is the
*maximum likelihood estimator* defined by

.. math::

   \hat{\vec\omega}(\vec x)
   = \argmax_{\vec\omega\in\Omega}f(\vec x,\vec\omega) \eqsep.

In other words, for a given observation :math:`\vec x` the maximum
likelihood estimate of the parameters is the parameter vector which
maximises :math:`f(\vec x,\cdot)`. When we keep the observables fixed and
regard :math:`f` as a function of the parameters we call it the *likelihood
function*. Finding the maximum of the likelihood function is commonly
referred to as *fitting* the model :math:`f` to the data :math:`\vec x`. It
usually requires numerical optimisation techniques.

The second question is answered (in frequentist statistics) by performing a
*hypothesis test*. Let's say we want to know whether the model is realised
with parameters from some subset :math:`\Omega_0\subset\Omega`. We call
this the *null hypothesis*. To test this hypothesis we have to define a
function :math:`T(\vec x)`, called the *test statistic*, which quantifies
the disagreement of the observation :math:`\vec x` with the null
hypothesis. The phrase "quantifying the disagreement" is meant in the
rather vague sense that :math:`T(\vec x)` should be large when
:math:`\vec x` looks like it was not drawn from a distribution with
parameters in :math:`\Omega_0`. In principle any function on
:math:`\mathcal{X}` can be used as a test statistic, but of course some are
less useful than others. The most popular choice (and the one myFitter is
designed for) is

.. math::

   T(\vec x) = -2\ln\frac
      {\max_{\vec\omega_0\in\Omega_0}f(\vec x,\vec\omega_0)}
      {\max_{\vec\omega\in\Omega}f(\vec x,\vec\omega)} \eqsep.

For this choice of :math:`T` the test is called a *likelihood ratio test*.
With a slight abuse of notation, we define

.. math::
   :label: chisq

   \chi^2(\vec x,\vec\omega) \equiv -2\ln f(\vec x,\vec\omega) \eqsep.

Then we may write

.. math::
   :label: dchisq

   T(\vec x)
   = \min_{\vec\omega_0\in\Omega_0}\chi^2(\vec x,\vec\omega_0)
   - \min_{\vec\omega\in\Omega}\chi^2(\vec x,\vec\omega)
   \equiv \Delta\chi^2(\vec x) \eqsep.
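
As an illustration of these definitions, here is a minimal sketch in Python
(not myFitter code) of how :math:`\Delta\chi^2` can be computed with two
numerical fits. The model, the observed data and the null hypothesis (which
fixes the second parameter) are all made up for this example.

.. code-block:: python

   import numpy as np
   from scipy.optimize import minimize

   # Made-up observed data and (diagonal) covariance matrix.
   x_obs = np.array([1.2, 0.9, 2.1])
   Sigma_inv = np.linalg.inv(np.diag([0.1**2, 0.1**2, 0.2**2]))

   def mu(omega):
       """Means of the observables (a made-up non-linear model)."""
       a, b = omega
       return np.array([a, a * b, b**2])

   def chisq(omega, x):
       """chi^2(x, omega) = -2 ln f(x, omega), up to a constant."""
       r = x - mu(omega)
       return r @ Sigma_inv @ r

   # Unconstrained fit: minimise chi^2 over the full space Omega = R^2.
   fit = minimize(chisq, x0=[1.0, 1.0], args=(x_obs,))

   # Constrained fit: the null hypothesis Omega_0 fixes b = 1.
   fit0 = minimize(lambda omega: chisq([omega[0], 1.0], x_obs), x0=[1.0])

   # Likelihood ratio test statistic Delta chi^2(x_obs); it is
   # non-negative because Omega_0 is a subset of Omega.
   delta_chisq = fit0.fun - fit.fun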

.. _p-value:
.. index::
   single: p-value

Clearly large values of :math:`\Delta\chi^2` indicate that the null
hypothesis is unlikely, but at which value should we reject it? To
determine that we have to compute another quantity called the *p-value*:

.. math::
   :label: pvalue

   p \equiv p(\Omega_0,\vec x_0)
   = \max_{\vec\omega_0\in\Omega_0} \int d^n\vec y\,
     f(\vec y,\vec\omega_0)\,
     \theta(\Delta\chi^2(\vec y)-\Delta\chi^2(\vec x_0)) \eqsep,

where :math:`\theta` is the Heaviside step function. Note that the integral
is simply the probability that for some random vector :math:`\vec y` of
*toy observables* drawn from the distribution :math:`f(\cdot,\vec\omega_0)`
the test statistic :math:`\Delta\chi^2(\vec y)` is larger than the observed
value :math:`\Delta\chi^2(\vec x_0)`.

The meaning of the p-value is the following: if the null hypothesis is true
and we reject it for values of :math:`\Delta\chi^2` larger than the
observed value :math:`\Delta\chi^2(\vec x_0)`, the probability of wrongly
rejecting the null hypothesis is at most :math:`p(\Omega_0,\vec x_0)`.
Thus, the smaller the p-value, the more confident we can be in rejecting
the null hypothesis. The complementary probability
:math:`1-p(\Omega_0,\vec x_0)` is therefore called the *confidence level*
at which we may reject the null hypothesis :math:`\Omega_0`, based on the
observation :math:`\vec x_0`.

.. _Z-value:
.. index::
   single: Z-value

Instead of specifying the p-value directly, people often give the *Z-value*
(or number of '*standard deviations*' or '*sigmas*'), which is related to
the p-value by

.. math::

   Z = \sqrt 2\,\Erf^{-1}(1-p) \eqsep,

where ':math:`\Erf`' denotes the error function. The relation is chosen in
such a way that the integral of a standard normal distribution from
:math:`-Z` to :math:`+Z` is the confidence level :math:`1-p`. So, if
someone tells you that they have discovered something at :math:`5\,\sigma`,
they have (usually) done a likelihood ratio test and rejected the "no
signal" hypothesis at a confidence level corresponding to :math:`Z=5`.
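
As a quick numerical illustration (plain scipy, nothing myFitter-specific),
the conversion between p-values and Z-values reads:

.. code-block:: python

   import numpy as np
   from scipy.special import erfinv
   from scipy.stats import norm

   p = 5.7e-7                  # roughly the p-value of a "5 sigma" effect
   Z = np.sqrt(2.0) * erfinv(1.0 - p)
   print(Z)                    # ~5.0

   # Equivalent form in terms of the standard normal distribution:
   print(norm.isf(p / 2.0))    # same number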

.. index::
   pair: linear; regression model

Linear regression models and Wilks' theorem
-------------------------------------------

Evaluating the right-hand side of :eq:`pvalue` exactly for a realistic
model can be extremely challenging. However, for a specific class of models
called *linear regression models* we can evaluate :eq:`pvalue`
analytically. The good news is that most models can be approximated by
linear regression models and that this approximation is often sufficient
for the purpose of estimating the p-value. A concrete formulation of this
statement is given by a theorem by Wilks, which we shall discuss shortly.
The myFitter package was originally written to handle problems where Wilks'
theorem is not applicable, but of course the framework can also be used in
cases where Wilks' theorem applies.

.. _linear-regression-model:

In a linear regression model the parameter space :math:`\Omega=\R^k` is a
:math:`k`-dimensional real vector space and the PDF :math:`f` is a
(multi-dimensional) normal distribution with a fixed
(i.e. parameter-independent) covariance matrix :math:`\Sigma`, centered on
an observable vector :math:`\vec\mu(\vec\omega)` which is an affine-linear
function of the parameters :math:`\vec\omega`:

.. math::
   :label: regressionModel

   f(\vec x,\vec\omega) &= \mathcal{N}(\vec\mu(\vec\omega), \Sigma)(\vec x)
   \propto \exp[-\tfrac12(\vec x-\vec\mu(\vec\omega))^\trans \Sigma^{-1}
   (\vec x-\vec\mu(\vec\omega))] \eqsep,\\
   \vec\mu(\vec\omega) &= A\vec\omega + \vec b \eqsep,

where :math:`\Sigma` is a symmetric, positive definite matrix, :math:`A` is
an (:math:`n\times k`)-matrix of full rank :math:`k` and :math:`\vec b` is
an :math:`n`-dimensional real vector.

Now assume that the parameter space :math:`\Omega_0` under the null
hypothesis is a :math:`k_0`-dimensional affine sub-space of :math:`\Omega`
(i.e. a linear sub-space with a constant shift). In this case the formula
:eq:`pvalue` for the p-value becomes

.. _asymptotic-p-value:

.. math::
   :label: wilks

   p(\Omega_0,\vec x_0) = Q_{(k-k_0)/2}(\Delta\chi^2(\vec x_0)/2) \eqsep,

where :math:`Q` is the *normalised upper incomplete Gamma function*, which
can be found in any decent special functions library. Note that, for a
linear regression model, the p-value only depends on the dimension of the
affine sub-space :math:`\Omega_0` but not on its position within the larger
space :math:`\Omega`.
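
In scipy, for instance, the normalised upper incomplete Gamma function
:math:`Q_a(x)` is available as ``gammaincc(a, x)``, so evaluating
:eq:`wilks` is a one-liner (the numbers below are made up):

.. code-block:: python

   from scipy.special import gammaincc

   k, k0 = 5, 3        # dimensions of Omega and Omega_0 (made-up values)
   dchisq_obs = 7.8    # observed Delta chi^2(x_0) (made-up value)

   p = gammaincc((k - k0) / 2.0, dchisq_obs / 2.0)
   print(p)            # ~0.020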

.. _non-linear-regression-model:
.. index::
   pair: non-linear; regression model

The models we encounter in global fits are usually not linear regression
models. They typically belong to a class of models known as *non-linear
regression models*. The PDF of a non-linear regression model still has the
form :eq:`regressionModel` with a fixed covariance matrix :math:`\Sigma`,
but the means :math:`\vec\mu` may depend on the parameters
:math:`\vec\omega` in an arbitrary way. Furthermore, the parameter spaces
:math:`\Omega` and :math:`\Omega_0` don't have to be vector spaces but can
be arbitrary smooth manifolds. The means :math:`\vec\mu` of a non-linear
regression model are usually the *theory predictions* for the observables
and have a known functional dependence on the parameters
:math:`\vec\omega` of the theory. The information about experimental
uncertainties (and correlations) is encoded in the covariance matrix
:math:`\Sigma`.

.. _wilks-theorem:
.. _asymptotic-limit:
.. index::
   single: Wilks' theorem
   single: asymptotic limit

Since any smooth non-linear function can be locally approximated by a
linear function, it seems plausible that :eq:`wilks` can still hold
approximately even for non-linear regression models. This is essentially
the content of Wilks' theorem. It states that, for a general model (not
only non-linear regression models) which satisfies certain regularity
conditions, :eq:`wilks` holds *asymptotically* in the limit of an infinite
number of independent observations. The parameter spaces :math:`\Omega` and
:math:`\Omega_0` in the general case can be smooth *manifolds* with
dimensions :math:`k` and :math:`k_0`, respectively. Note, however, that for
Wilks' theorem to hold :math:`\Omega_0` must still be a subset of
:math:`\Omega`.

.. index::
   single: plug-in p-value

Plug-in p-values and the myFitter method
----------------------------------------

In practice we cannot make an arbitrarily large number of independent
observations. Thus :eq:`wilks` is only an approximation whose quality
depends on the amount of collected data and the model under consideration.
To test the quality of the approximation :eq:`wilks` for a specific model
one must evaluate or at least approximate :eq:`pvalue` by other means,
e.g. numerically. Such computations are called *toy simulations*, and their
computational cost can be immense. Note that the evaluation of
:math:`\Delta\chi^2` (and thus of the integrand in :eq:`pvalue`) requires a
numerical optimisation, and then the integral in :eq:`pvalue` still needs
to be maximised over :math:`\vec\omega_0`.

.. _plug-in-p-value:

In most cases the integral is reasonably close to its maximum value when
:math:`\vec\omega_0` is set to its maximum likelihood estimate for the
observed data :math:`\vec x_0` and the parameter space :math:`\Omega_0`:

.. math::

   \hat{\vec\omega}_0(\vec x_0) \equiv
   \argmax_{\vec\omega\in\Omega_0}f(\vec x_0,\vec\omega) \eqsep.

Thus, the *plug-in* p-value

.. math::

   \hat p(\Omega_0,\vec x_0)
   = \int d^n\vec y\,f(\vec y,\hat{\vec\omega}_0(\vec x_0))\,
     \theta(\Delta\chi^2(\vec y)-\Delta\chi^2(\vec x_0))

is often a good approximation to the real p-value. The standard numerical
approach to computing the remaining integral is to generate a large number
of *toy observations* :math:`\vec y_1,\ldots,\vec y_N` distributed
according to :math:`f(\cdot,\hat{\vec\omega}_0(\vec x_0))` and then
determine the fraction of toy observations where
:math:`\Delta\chi^2(\vec y_i)>\Delta\chi^2(\vec x_0)`. If the p-value is
small this method can become very inefficient. The efficiency can be
significantly improved by *importance sampling* methods, i.e. by drawing
the toy observations from some other (suitably constructed) sampling
density which avoids the region with
:math:`\Delta\chi^2(\vec y)<\Delta\chi^2(\vec x_0)`. This method lies at
the heart of the myFitter package.
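
Schematically, the brute-force approach described above looks as follows.
This is only a sketch of the plain Monte Carlo method, not of the
importance sampling algorithm implemented in myFitter. The names
``chisq_min`` and ``chisq_min0`` stand for hypothetical functions
performing the unconstrained and constrained fits for a given toy
observation, and ``mu_hat0`` and ``Sigma`` for the means and covariance
matrix of the plug-in distribution
:math:`f(\cdot,\hat{\vec\omega}_0(\vec x_0))` of a non-linear regression
model.

.. code-block:: python

   import numpy as np

   def delta_chisq(y):
       # Hypothetical helpers: constrained minus unconstrained minimum
       # of chi^2(y, .), each obtained by numerical optimisation.
       return chisq_min0(y) - chisq_min(y)

   rng = np.random.default_rng(seed=0)
   N = 100_000

   # Toy observations y_1, ..., y_N drawn from f(., omega_hat_0(x_obs)).
   toys = rng.multivariate_normal(mu_hat0, Sigma, size=N)

   # Plug-in p-value: fraction of toys with a larger test statistic
   # than the observed one.
   dchisq_obs = delta_chisq(x_obs)
   p_hat = np.mean([delta_chisq(y) > dchisq_obs for y in toys])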

.. _systematic-errors:

Gaussian and systematic errors
------------------------------

.. index::
   single: correlation matrix
   single: correlation coefficient

As mentioned above, the most common type of model we deal with in global
fits are :ref:`non-linear regression models <non-linear-regression-model>`
where the means :math:`\vec\mu(\vec\omega)` represent theoretical
predictions for the observables and the information about experimental
uncertainties is encoded in the covariance matrix :math:`\Sigma`. If the
value of an observable is given as :math:`7.1\pm 0.5` in a scientific text
this usually implies that the observable has a Gaussian distribution with
mean 7.1 and standard deviation 0.5. Measurements from different
experiments are usually uncorrelated. Thus the covariance matrix for
:math:`n` independent observables is

.. math::

   \Sigma = \diag(\sigma_1^2,\ldots,\sigma_n^2) \eqsep,

where the :math:`\sigma_i` are the errors of the individual measurements.

If several quantities are measured in the same experiment they can be
correlated. Information about the correlations is usually presented in the
form of a *correlation matrix* :math:`\rho`. The correlation matrix is
always symmetric and its diagonal entries are 1. (Thus it is common to only
show the lower or upper triangle of :math:`\rho` in a publication.) The
elements :math:`\rho_{ij}` of :math:`\rho` are called *correlation
coefficients*. For :math:`n` observables with errors :math:`\sigma_i` and
correlation coefficients :math:`\rho_{ij}` the covariance matrix
:math:`\Sigma` is given by

.. math::

   \Sigma_{ij} = \sigma_i\sigma_j\rho_{ij} \eqsep.
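
In numpy, for example, this covariance matrix can be built in one line from
the errors and the correlation matrix (all numbers made up):

.. code-block:: python

   import numpy as np

   sigma = np.array([0.5, 0.3, 0.8])      # errors sigma_i
   rho = np.array([[1.0,  0.2,  0.0],     # correlation matrix rho_ij
                   [0.2,  1.0, -0.1],
                   [0.0, -0.1,  1.0]])

   # Sigma_ij = sigma_i * sigma_j * rho_ij
   Sigma = np.outer(sigma, sigma) * rho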

In some cases the error of an observable is broken up into a *statistical*
and a *systematic* component. Typical notations are
:math:`7.1\pm 0.3(\text{stat})\pm 0.2(\text{syst})` or simply
:math:`7.1\pm 0.3\pm 0.2` with an indication in the text of which component
is the systematic one. Sometimes the systematic error is asymmetric and
denoted as :math:`7.1\pm 0.3^{+0.1}_{-0.2}`.

.. _parametric-uncertainty:
.. _nuisance-parameter:
.. index::
   pair: systematic; uncertainty
   pair: parametric; uncertainty
   single: nuisance parameter

What are we supposed to do with this extra information? This depends
largely on the context, i.e. on the nature of the systematic uncertainties
being quoted. For example, the theoretical prediction of an observable
:math:`x` (or its extraction from raw experimental data) might require
knowledge of some other quantity :math:`\lambda` which has been measured
elsewhere with some Gaussian uncertainty. In this case one also speaks of a
*parametric uncertainty*. If you know the dependence of :math:`x` on
:math:`\lambda` it is best to treat :math:`\lambda` as a parameter of your
model, add an observable which represents the measurement of
:math:`\lambda` (i.e. with a Gaussian distribution centered on
:math:`\lambda` and an appropriate standard deviation), and use only the
statistical error to model the distribution of :math:`x`. In particular,
this is the only (correct) way in which you can combine :math:`x` with
other observables that depend on the same parameter :math:`\lambda`. In
this case the parameter :math:`\lambda` is called a *nuisance parameter*:
we are not interested in its value but need it to extract values for the
parameters we are interested in.

If you don't know the dependence of :math:`x` on :math:`\lambda` but you
are sure that the systematic uncertainty for :math:`x` given in a paper is
(mainly) parametric due to :math:`\lambda`, and that :math:`\lambda` does
not affect any other observables in your fit, you can combine the
statistical and systematic errors in quadrature. This means you assume a
Gaussian distribution for :math:`x` with standard deviation
:math:`(\sigma^2_\text{stat} + \sigma^2_\text{syst})^{1/2}`, where
:math:`\sigma_\text{stat}` is the statistical and
:math:`\sigma_\text{syst}` the systematic error (which should be symmetric
in this case). This procedure is also correct if the systematic error is
the combined parametric uncertainty due to several parameters, as long as
none of these parameters affect any other observables in your fit.

So far in our discussion we have assumed that the :ref:`nuisance
parameter(s) <nuisance-parameter>` :math:`\lambda` can be measured and have
a Gaussian distribution. This is not always the case. If, for example, the
theoretical prediction for an observable is an approximation, there will be
a constant offset between the theory prediction and the measured value.
This offset does not average out when the experiment is repeated many
times, and it cannot be measured separately either. At best we can find
some sort of upper bound on the size of this offset. The extraction of an
observable from the raw data may also depend on parameters which can only
be bounded but not measured. In this case the offset between the theory
prediction and the mean of the measured quantity should be treated as an
additional model parameter whose values are restricted to a finite range.

Assume that the systematic error of :math:`x` is given as
:math:`{}^{+\delta_+}_{-\delta_-}` with :math:`\delta_+,\delta_->0`. Let
:math:`\mu_\text{th}(\vec\omega)` be the theory prediction for :math:`x`.
The mean :math:`\mu` of the observable :math:`x` then depends on an
additional nuisance parameter :math:`\delta` which takes values in the
interval :math:`[-\delta_-,+\delta_+]`:

.. math::

   \mu(\vec\omega,\delta) = \mu_\text{th}(\vec\omega) - \delta
   \eqsep\text{with}\eqsep \delta\in[-\delta_-,+\delta_+] \eqsep.

Note that the formula above holds for systematic uncertainties associated
with the *measurement* of :math:`x`. If the *theory prediction* for an
observable is quoted with a systematic error
:math:`{}^{+\delta_+}_{-\delta_-}`, the correct range for :math:`\delta` is
:math:`[-\delta_+,+\delta_-]`.
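
To make this concrete, here is a minimal sketch (plain scipy, not the
myFitter interface) of a one-observable fit in which the offset
:math:`\delta` enters as an additional bounded parameter, for the
measurement case discussed above. The theory prediction and all numbers are
invented.

.. code-block:: python

   import numpy as np
   from scipy.optimize import minimize

   x_obs, sigma_stat = 7.1, 0.3         # measurement with statistical error
   delta_minus, delta_plus = 0.2, 0.1   # systematic error +0.1 / -0.2

   def mu_th(omega):
       """Made-up theory prediction for the observable x."""
       return omega[0]**2

   def chisq(params):
       omega, delta = params[:-1], params[-1]
       mu = mu_th(omega) - delta        # shifted mean, as in the formula
       return ((x_obs - mu) / sigma_stat)**2

   # delta is restricted to [-delta_minus, +delta_plus] via bounds; the
   # model parameter omega_1 is unconstrained.
   result = minimize(chisq, x0=[1.0, 0.0],
                     bounds=[(None, None), (-delta_minus, delta_plus)])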