Scoring rule

In decision theory, a score function, or scoring rule, measures the accuracy of probabilistic predictions. It is applicable to tasks in which predictions must assign probabilities to a set of mutually exclusive discrete outcomes. The set of possible outcomes can be either binary or categorical in nature, and the probabilities assigned to this set of outcomes must sum to one (where each individual probability is in the range of 0 to 1). A score can be thought of as either a measure of the "calibration" of a set of probabilistic predictions, or as a "cost function" or "loss function".

If a cost is levied in proportion to a proper scoring rule, the minimal expected cost corresponds to reporting the true set of probabilities. Proper scoring rules are used in meteorology, finance, and pattern classification where a forecaster or algorithm will attempt to minimize the average score to yield refined, calibrated probabilities (i.e. accurate probabilities). Various scoring rules have also been used to assess the predictive accuracy of forecast models for association football.[1]

Definition

Suppose $X$ and $Y$ are two random variables defined on a sample space $\Omega$, with $f_X$ and $f_Y$ as their corresponding density (mass) functions, in which $X$ is the forecast target variable and $Y$ is the random variable generated by a forecast scheme. Also, assume that $x \in \Omega$ is the realized value of $X$. A scoring rule is a function $S(f_Y, x)$ (i.e., $S : \mathcal{F} \times \Omega \to \mathbb{R}$, where $\mathcal{F}$ is the set of candidate forecast distributions) which measures the distance between the forecast $f_Y$ and the realized value $x$.

Orientation

A scoring rule $S$ is positively oriented if, for two different probabilistic forecasts $f_{Y_1}$ and $f_{Y_2}$, $S(f_{Y_1}, x) > S(f_{Y_2}, x)$ means that $f_{Y_1}$ is a better probabilistic forecast than $f_{Y_2}$.

Expected Score

Expected score is the expected value of the scoring rule over all possible values of the target variable. For example, for a continuous random variable we have

$$\mathbb{E}\left[S(f_Y, X)\right] = \int_{\Omega} S(f_Y, x)\, f_X(x)\, dx.$$

Expected Loss

The expected score loss is the difference between the expected score for the target variable and for the forecast:

$$\Delta(f_X, f_Y) = \mathbb{E}\left[S(f_X, X)\right] - \mathbb{E}\left[S(f_Y, X)\right].$$
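As a minimal sketch (the three-outcome distributions and function names below are illustrative assumptions, not from the source), the following Python snippet computes the expected logarithmic score of a forecast and the resulting expected score loss for a discrete target variable:

```python
import numpy as np

# Hypothetical three-outcome example: p_x is the target (true) distribution,
# p_y is the forecast distribution.
p_x = np.array([0.5, 0.3, 0.2])
p_y = np.array([0.4, 0.4, 0.2])

def log_score(q, i):
    """Positively oriented logarithmic score for reporting q when outcome i occurs."""
    return np.log(q[i])

def expected_score(q, p_true):
    """Expected score of reporting q, averaged over the true distribution p_true."""
    return sum(p_true[i] * log_score(q, i) for i in range(len(p_true)))

# Expected score loss: ideal forecast's expected score minus that of p_y.
loss = expected_score(p_x, p_x) - expected_score(p_y, p_x)
print(loss)  # non-negative, and zero only when p_y equals p_x
```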

Propriety

Assuming positive orientation, a scoring rule is considered strictly proper if the expected score loss is positive for every forecast that differs from the target distribution, and zero only when the forecast coincides with it. In other words, under a strictly proper scoring rule, a forecasting scheme scores best if it reports the distribution of the target variable itself, and if it scores best, the reported forecast must be that distribution.[2]

Non-probabilistic Forecast Accuracy Measures

Although scoring rules are introduced in the probabilistic forecasting literature, the definition is general enough to treat non-probabilistic measures such as the mean absolute error or the mean square error as specific scoring rules. The main characteristic of such scoring rules is that $S(f_Y, x)$ is just a function of the expected value of the forecast distribution (i.e., of $\mathbb{E}[Y]$).
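A short sketch of this point (the numeric values are hypothetical): the squared-error score below depends on the forecast distribution only through its mean, discarding all other distributional information.

```python
import numpy as np

# Hypothetical forecast distribution over numeric outcomes.
values = np.array([1.0, 2.0, 3.0])
p_y = np.array([0.2, 0.5, 0.3])       # forecast probabilities
forecast_mean = np.dot(values, p_y)   # only this summary enters the score

def squared_error_score(mean, x):
    """Non-probabilistic score: depends on the forecast only through its expected value."""
    return (mean - x) ** 2

print(squared_error_score(forecast_mean, x=2.5))
```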

Example application of scoring rules

The logarithmic rule

An example of probabilistic forecasting is in meteorology where a weather forecaster may give the probability of rain on the next day. One could note the number of times that a 25% probability was quoted, over a long period, and compare this with the actual proportion of times that rain fell. If the actual percentage was substantially different from the stated probability we say that the forecaster is poorly calibrated. A poorly calibrated forecaster might be encouraged to do better by a bonus system. A bonus system designed around a proper scoring rule will incentivize the forecaster to report probabilities equal to his personal beliefs.[3]

In addition to the simple case of a binary decision, such as assigning probabilities to 'rain' or 'no rain', scoring rules may be used for multiple classes, such as 'rain', 'snow', or 'clear'.

The image to the right shows an example of a scoring rule, the logarithmic scoring rule, as a function of the probability reported for the event that actually occurred. One way to use this rule would be as a cost based on the probability that a forecaster or algorithm assigns, then checking to see which event actually occurs.

Proper scoring rules

Figure: Expected value of the logarithmic scoring rule when Event 1 is expected to occur with probability 0.8.

A probabilistic forecaster or algorithm will return a probability vector $\mathbf{r}$ with a probability $r_i$ for each outcome $i$. One usage of a scoring function could be to give a reward of $S(\mathbf{r}, i)$ if the $i$th event occurs. If a proper scoring rule is used, then the highest expected reward is obtained by reporting the true probability distribution. The use of a proper scoring rule thus encourages the forecaster to be honest in order to maximize the expected reward.

A scoring rule is strictly proper if it is uniquely optimized by the true probabilities. Optimization here corresponds to maximization for the quadratic, spherical, and logarithmic rules but to minimization for the Brier score. This can be seen in the image at right for the logarithmic rule. Here, Event 1 is expected to occur with probability 0.8, and the expected score (or reward) is shown as a function of the reported probability. The way to maximize the expected reward is to report the actual probability of 0.8, as all other reported probabilities yield a lower expected score. This property holds because the logarithmic score is proper.
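The following sketch (assumed for illustration, not taken from the source) reproduces this behavior numerically: with a true probability of 0.8 for Event 1, the expected logarithmic reward is maximized by reporting 0.8.

```python
import numpy as np

true_p = 0.8                              # true probability of Event 1
reported = np.linspace(0.01, 0.99, 99)    # candidate reported probabilities

# Expected logarithmic score of reporting r when the event occurs with probability true_p.
expected_reward = true_p * np.log(reported) + (1 - true_p) * np.log(1 - reported)

best = reported[np.argmax(expected_reward)]
print(best)  # ~0.8: the expected reward peaks at the true probability
```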

Examples of proper scoring rules

There are an infinite number of scoring rules, including entire parameterized families of proper scoring rules. The ones shown below are simply popular examples.

Logarithmic scoring rule

The logarithmic scoring rule is a local strictly proper scoring rule,

$$L(\mathbf{r}, i) = \ln(r_i).$$

This is also the negative of surprisal, which is commonly used as a scoring criterion in Bayesian inference; the goal is to minimize expected surprise. This scoring rule has strong foundations in information theory.

Here, the score is calculated as the logarithm of the probability estimate for the actual outcome. A prediction of 80% that proved true would receive a score of ln(0.8) ≈ −0.22. The same prediction assigns a 20% probability to the opposite outcome, so if the prediction proves false, it receives a score of ln(0.2) ≈ −1.61. The forecaster's goal is to make the score as large as possible, and −0.22 is indeed larger than −1.61.

If one treats the truth or falsity of the prediction as a variable $x$ with value 1 or 0 respectively, and the expressed probability as $p$, then one can write the logarithmic scoring rule as $x \ln(p) + (1-x)\ln(1-p)$. Note that any logarithmic base may be used, since strictly proper scoring rules remain strictly proper under positive linear transformation. That is,

$x \log_b(p) + (1-x)\log_b(1-p)$ is strictly proper for every base $b > 1$.
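A minimal sketch of the binary logarithmic score written above (the function name is illustrative), reproducing the worked numbers from the text:

```python
import math

def binary_log_score(p, x):
    """Logarithmic score x*ln(p) + (1-x)*ln(1-p) for reported probability p and outcome x in {0, 1}."""
    return x * math.log(p) + (1 - x) * math.log(1 - p)

print(binary_log_score(0.8, 1))  # approx -0.22: the 80% prediction proved true
print(binary_log_score(0.8, 0))  # approx -1.61: the 80% prediction proved false
```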

Brier/quadratic scoring rule

The quadratic scoring rule is a strictly proper scoring rule

$$Q(\mathbf{r}, i) = 2 r_i - \mathbf{r}\cdot\mathbf{r} = 2 r_i - \sum_{j=1}^{C} r_j^2,$$

where $r_i$ is the probability assigned to the correct answer and $C$ is the number of classes.

The Brier score, originally proposed by Glenn W. Brier in 1950,[4] can be obtained by an affine transform from the quadratic scoring rule:

$$B(\mathbf{r}, i) = \sum_{j=1}^{C} (y_j - r_j)^2,$$

where $y_j = 1$ when the $j$th event is correct and $y_j = 0$ otherwise, and $C$ is the number of classes.

An important difference between these two rules is that a forecaster should strive to maximize the quadratic score yet minimize the Brier score. This is due to a negative sign in the affine transformation between them.
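A short sketch (with a hypothetical probability vector) of both rules, illustrating the affine relationship $B = 1 - Q$ for a single forecast:

```python
import numpy as np

r = np.array([0.7, 0.2, 0.1])  # hypothetical probability vector over C = 3 classes
i = 0                          # index of the class that actually occurred

quadratic = 2 * r[i] - np.sum(r ** 2)   # Q(r, i): to be maximized
y = np.eye(len(r))[i]                   # one-hot indicator of the correct class
brier = np.sum((y - r) ** 2)            # B(r, i): to be minimized

print(quadratic, brier, 1 - quadratic)  # brier equals 1 - quadratic
```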

Spherical scoring rule

The spherical scoring rule is also a strictly proper scoring rule:

$$S(\mathbf{r}, i) = \frac{r_i}{\lVert \mathbf{r} \rVert} = \frac{r_i}{\sqrt{\sum_{j=1}^{C} r_j^2}}.$$
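A minimal sketch of the spherical score for the same kind of probability vector (hypothetical values):

```python
import numpy as np

def spherical_score(r, i):
    """Spherical score: probability of the realized class divided by the Euclidean norm of r."""
    return r[i] / np.linalg.norm(r)

print(spherical_score(np.array([0.7, 0.2, 0.1]), 0))  # approx 0.95
```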

Comparison of proper scoring rules

Shown below on the left is a graphical comparison of the Logarithmic, Quadratic, and Spherical scoring rules for a binary classification problem. The x-axis indicates the reported probability for the event that actually occurred.

It is important to note that each of the scores has a different magnitude and location. The magnitude differences are not relevant, however, as scores remain proper under affine transformation. Therefore, to compare different scores it is necessary to move them to a common scale. A reasonable choice of normalization is shown in the figure on the right, where all scores intersect the points (0.5, 0) and (1, 1). This ensures that they yield 0 for a uniform distribution (two probabilities of 0.5 each), reflecting no cost or reward for reporting what is often the baseline distribution. All normalized scores below also yield 1 when the true class is assigned a probability of 1.
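As a sketch of the normalization just described (the affine constants are derived here for illustration, not taken from the source), each binary score of the true class is rescaled so that its curve passes through (0.5, 0) and (1, 1):

```python
import numpy as np

# Binary scores of the true class, as a function of its reported probability p.
scores = {
    "logarithmic": lambda p: np.log(p),
    "quadratic":   lambda p: 2 * p - (p ** 2 + (1 - p) ** 2),
    "spherical":   lambda p: p / np.sqrt(p ** 2 + (1 - p) ** 2),
}

def normalize(score):
    """Affine rescaling so the normalized score is 0 at p = 0.5 and 1 at p = 1."""
    s_half, s_one = score(0.5), score(1.0)
    return lambda p: (score(p) - s_half) / (s_one - s_half)

for name, score in scores.items():
    s = normalize(score)
    print(name, round(s(0.5), 3), round(s(1.0), 3))  # 0.0 and 1.0 for every rule
```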

Figure: Score of a binary classification for the true class, showing logarithmic (blue), spherical (green), and quadratic (red).
Figure: Normalized score of a binary classification for the true class, showing logarithmic (blue), spherical (green), and quadratic (red).

Characteristics

Positive-affine transformation

A strictly proper scoring rule, whether binary or multiclass, after a positive-affine transformation remains a strictly proper scoring rule.[3] That is, if $S(\mathbf{r}, i)$ is a strictly proper scoring rule, then $a + b\,S(\mathbf{r}, i)$ with $b > 0$ is also a strictly proper scoring rule.

Locality

A proper scoring rule is said to be local if its value depends only on the probability $r_i$ assigned to the event that actually occurred. All binary scores are local because the probability assigned to the event that did not occur is directly producible as $1 - r_i$.

Affine functions of the logarithmic scoring rule are the only strictly proper local scoring rules on a finite set that is not binary.

Decomposition

The expectation value of a proper scoring rule can be decomposed into the sum of three components, called uncertainty, reliability, and resolution,[5][6] which characterize different attributes of probabilistic forecasts:

$$\mathbb{E}[S] = \mathrm{UNC} + \mathrm{REL} - \mathrm{RES}.$$

If a score is proper and negatively oriented (such as the Brier score), all three terms are non-negative. The uncertainty component is equal to the expected score of the forecast which constantly predicts the average event frequency. The reliability component penalizes poorly calibrated forecasts, in which the predicted probabilities do not coincide with the event frequencies. Resolution rewards probabilities that are close to one whenever the event happens, and close to zero when it does not.

The equations for the individual components depend on the particular scoring rule. For the Brier score, they are given by

$$\mathrm{REL} = \mathbb{E}_f\!\left[(f - \bar{x}_f)^2\right], \qquad \mathrm{RES} = \mathbb{E}_f\!\left[(\bar{x}_f - \bar{x})^2\right], \qquad \mathrm{UNC} = \bar{x}(1 - \bar{x}),$$

where $\bar{x}$ is the average probability of occurrence of the binary event $x$, and $\bar{x}_f$ is the conditional event probability given the forecast $f$, i.e. $\bar{x}_f = P(x = 1 \mid f)$.
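A sketch of this decomposition on hypothetical data (forecasts taking a few discrete values), verifying that REL − RES + UNC recovers the mean Brier score:

```python
import numpy as np

# Hypothetical forecasts f and binary outcomes x, with f taking a few discrete values.
f = np.array([0.2, 0.2, 0.8, 0.8, 0.8, 0.5])
x = np.array([0,   1,   1,   1,   0,   1  ])

brier = np.mean((f - x) ** 2)            # mean Brier score
x_bar = x.mean()                         # climatological event frequency

rel = res = 0.0
for level in np.unique(f):
    mask = f == level
    x_bar_f = x[mask].mean()             # conditional event frequency given this forecast
    w = mask.mean()                      # fraction of cases with this forecast value
    rel += w * (level - x_bar_f) ** 2    # reliability: penalty for miscalibration
    res += w * (x_bar_f - x_bar) ** 2    # resolution: reward for discriminating forecasts

unc = x_bar * (1 - x_bar)                # uncertainty of the event itself
print(np.isclose(brier, rel - res + unc))  # True: REL - RES + UNC = mean Brier score
```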

References

  1. Constantinou, Anthony; Fenton, N. (2012). "Solving the Problem of Inadequate Scoring Rules for Assessing Probabilistic Football Forecast Models.". Journal of Quantitative Analysis in Sports. 8 (1): Article 1. doi:10.1515/1559-0410.1418. Retrieved 25 March 2014.
  2. Mojab, Ramin (2016-08-04). "Probabilistic Forecasting with Stationary VAR Models". Rochester, NY: Social Science Research Network. doi:10.2139/ssrn.2818213.
  3. Bickel, E.J. (2007). "Some Comparisons among Quadratic, Spherical, and Logarithmic Scoring Rules" (PDF). Decision Analysis. 4 (2): 49–65. doi:10.1287/deca.1070.0089.
  4. Brier, G.W. (1950). "Verification of forecasts expressed in terms of probability" (PDF). Monthly Weather Review. 78: 1–3. doi:10.1175/1520-0493(1950)078<0001:VOFEIT>2.0.CO;2.
  5. Murphy, A.H. (1973). "A new vector partition of the probability score". Journal of Applied Meteorology. 12: 595–600. doi:10.1175/1520-0450(1973)012<0595:ANVPOT>2.0.CO;2.
  6. Bröcker, J. (2009). "Reliability, sufficiency, and the decomposition of proper scores" (PDF). Quarterly Journal of the Royal Meteorological Society. 135 (643): 1512–1519. doi:10.1002/qj.456.
