1. Introduction
The experience-based calculation of the risk premium for an insurance account is affected by several sources of uncertainty, the most obvious—and perhaps the best understood—of which is the limited size of the client's historical database of losses.
To make up for such uncertainty the analyst may use average or other relevant information from the market (the market risk premium) to replace or complement the client risk premium. The problem with this is that the market experience may not be fully relevant to a particular client. This is usually captured by the spread, or heterogeneity, of the client risk premiums around the standard market rate. As an added complication, although the market rate is typically computed from a larger data set than that of a client, it too is based on a loss database of limited size and is therefore affected by the same type of uncertainty.
The standard way to combine client and market information is credibility. The credibility risk premium is a convex combination of the client risk premium and the market risk premium:
Credibility risk premium = Z × Client risk premium + (1 − Z) × Market risk premium
where Z is a real number between 0 and 1, reflecting the relative weight that we give to the client’s experience.
The idea of this paper is to use the standard deviation of the client risk premium estimator (σc) as a measure of (lack of) credibility, weighting this against the market heterogeneity (σh) and the standard deviation of the market risk premium estimator (σm). Furthermore, since the risk premium of the market is calculated based on data from the whole market, including in general the client itself, the two estimators for the market and the client are correlated (ρm,c). The resulting formula for the credibility factor is
$$Z = \frac{\sigma_h^2 + \sigma_m^2 - \rho_{m,c}\,\sigma_m \sigma_c}{\sigma_h^2 + \sigma_m^2 + \sigma_c^2 - 2\rho_{m,c}\,\sigma_m \sigma_c}$$
1.1. Research context and objective
The modern approach to credibility, which stems from the work of Bühlmann and Straub (Bühlmann 1967; Bühlmann and Straub 1970; Bühlmann and Gisler 2005), does not explicitly take the uncertainty of the market price into account in the formula for the credibility factor (see, e.g., Theorem 3.7 in Bühlmann and Gisler (2005), which gives results for both inhomogeneous and homogeneous credibility).
On the other hand, Boor (1992) displays a credibility factor that contains an extra term for market uncertainty. Boor’s paper, however, focuses on a two-sample model (client vs. rest of the market) and attempts no analysis of the overall market heterogeneity/spread.
This paper argues that by using uncertainty as the main driver for credibility, one is able to produce an intuitive and general method to calculate the credibility premium, which can be used both in insurance and in reinsurance.
The results have natural applications to excess-of-loss reinsurance, where client experience in the higher layers is obviously scant, but even the market experience is limited, and the uncertainty on the parameters of market curves is therefore significant. The methodology described in this paper was initially used in the context of U.K. motor reinsurance (Parodi and Bonche 2008).
1.2. Outline
Section 2 introduces a measure of uncertainty. Section 3 illustrates the methodology of uncertainty-based credibility in a general context, proving the basic result (Proposition 1) that gives the optimal value for the credibility factor. It also illustrates how to apply the methodology to a simple example. The limitations of the methodology are given in Section 4. Section 5 draws the conclusions.
2. The risk premium and its uncertainty
2.1. Risk premium—definition and calculation
The risk premium[1] φ is given by φ = E(S)/w where E(S) is the expected aggregate loss in a given period and w is the expected exposure in that same period.
Using the collective risk model assumption, the losses to an insurer in a given period can be modeled as a stochastic process $S = \sum_{i=1}^{N} X_i$, where N represents the number of claims in the period and X1, . . . , XN represent their amounts. Both the number of claims and the claim amounts are random variables. The claim amounts X1, . . . , XN are independent and identically distributed (i.i.d.) and also independent of N.
Using the collective risk model, E(S) can be written as E(S) = E(N)E(X), where E(N) is the expected number of claims and E(X) is the expected claim amount. To derive E(N) and E(X), we need to know the underlying frequency and severity distributions with their exact parameter values (e.g., if N follows a Poisson distribution, N ~ Poi(λw), and X an exponential distribution with mean μ, X ~ Exp(μ), then E(S) = λμw and φ = λμ).
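As a quick illustration of this identity (a sketch, not part of the paper's method; all parameter values and function names are made up), a Monte Carlo simulation of the compound Poisson model should reproduce E(S) = λμw:

```python
import math
import random

# Sketch: simulate S = X_1 + ... + X_N with N ~ Poi(lambda_ * w) and
# X ~ Exp(mean mu), and check that the empirical mean of S approaches
# E(S) = lambda_ * mu * w. All parameter values are illustrative.
def poisson(rng, lam):
    # Knuth's multiplication method for Poisson sampling
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def simulate_aggregate_loss(lambda_, mu, w, n_sims=20000, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_sims):
        n = poisson(rng, lambda_ * w)                              # claim count
        total += sum(rng.expovariate(1.0 / mu) for _ in range(n))  # claim amounts
    return total / n_sims

lambda_, mu, w = 0.1, 5000.0, 100.0
est = simulate_aggregate_loss(lambda_, mu, w)
# est should be close to lambda_ * mu * w = 50,000 (so phi = lambda_ * mu = 500)
```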
However, reality is usually not so straightforward, since it is not always possible to express E(S) in a simple analytical form. This may be due to policy modifications (excesses, limits, reinstatements, etc.) and to the effect of settlement delay and discounting. Therefore, E(S) will usually be estimated by a stochastic simulation or by an approximate formula.
2.2. Risk premium—sources and measures of uncertainty
In practice, we will only have an estimate of E(S) and therefore of the risk premium φ = E(S)/w. This estimate will be affected by several sources of uncertainty: the models for frequency and severity will not replicate reality perfectly (model uncertainty); the values of the model parameters will only be known approximately (parameter uncertainty); the claims data themselves are often reserve estimates rather than known quantities (data uncertainty).
Parameter uncertainty depends on the fact that we only have a limited sample from which to estimate the parameters of the model. This will be the main focus of the paper. Data uncertainty has the effect of increasing parameter uncertainty. Model uncertainty is difficult to quantify and usually will be dealt with in a low-profile fashion, by making sure that our models pass appropriate goodness-of-fit tests.
We will use the standard deviation of an estimator as a measure of the estimator’s uncertainty. Following standard use, we will denote that as “standard error.” In general, the standard error of the risk premium will depend on the process by which the risk premium is estimated. Notice that the standard deviation of the risk premium estimator should not be confused with the standard deviation of S/w, the aggregate loss per unit of exposure!
Section 3.3.2 will give examples of how the standard deviation of the risk premium estimator can be calculated in practice.
3. Uncertainty-based credibility
Let φc be the “true” risk premium of the client. This is simply given by φc = E(Sc)/wc where Sc is the aggregate loss in a year and wc is the exposure in the same year. According to the collective model, E(Sc) can be written as E(Sc) = E(Nc)E(Xc) where Nc is the number of claims and Xc is the claim amount. However, we will only have an estimate of E(Sc). The accuracy of this estimate will be affected by data uncertainty, parameter uncertainty, and model uncertainty.
Let φ̂c be the estimated risk premium of the client. This will typically be obtained by
—applying a simple burning cost approach to the aggregate losses;
—estimating the average frequency and severity and calculating their product;
—estimating the parameters of the frequency and severity distributions and calculating the average frequency and severity from those estimates; or
—a hybrid of these approaches.
We can also define φm (true risk premium) and φ̂m (estimated risk premium) for the market. The estimated risk premium φ̂m will be obtained in a similar fashion to φ̂c, but it will use data from all participating clients, including the data used to calculate φ̂c.
Credibility is a standard technique by which the estimated risk premium of the client, φ̂c, and the estimated risk premium for the market, φ̂m, are combined via a convex combination into another estimate φ̂ of the client's risk premium φc, called the credibility estimate:
$$\hat{\varphi} = Z\,\hat{\varphi}_c + (1 - Z)\,\hat{\varphi}_m$$
where Z ∈ [0, 1] is called the credibility factor.
In this section, we provide a means to calculate the credibility factor Z based on the uncertainty of the estimates φ̂c, φ̂m and on the heterogeneity of the market. To do this, we need an uncertainty model, i.e., a set of assumptions on how uncertainty affects the estimates.
3.1. The uncertainty model—assumptions
1. The estimated risk premium of the market is described by a random variable φ̂m with expected value φm (the true risk premium for the overall market) and variance σ²m. For readability, we write this as
$$\hat{\varphi}_m = \varphi_m + \sigma_m \varepsilon_m$$
where εm is a random variable with zero mean and unit variance: E(εm) = 0, E(ε²m) = 1. Notice that φm is not viewed as a random variable here. Despite the terminology above, which resembles that used for Gaussian random noise, no other assumption is needed on the shape of the distribution of εm.
2. The true risk premium φc of the client is described by a random variable with mean E(φc) = φm (the true market risk premium) and variance Var(φc) = σ²h. In other terms,
$$\varphi_c = \varphi_m + \sigma_h \varepsilon_h$$
where σh measures the spread (or heterogeneity) of the different clients around the mean market value, and E(εh) = 0, E(ε²h) = 1.
3. The estimated risk premium of the client, φ̂c, given the true risk premium, φc, is described by a random variable with mean E(φ̂c | φc) = φc and variance Var(φ̂c | φc) = σ²c. In other words,
$$\hat{\varphi}_c \mid \varphi_c = \varphi_c + \sigma_c \varepsilon_c \qquad \big(\hat{\varphi}_c = \varphi_m + \sigma_h \varepsilon_h + \sigma_c \varepsilon_c\big)$$
where εc is another random variable with zero mean and unit variance: E(εc) = 0, E(ε²c) = 1. Again, no other assumption is made on the distribution of εc. Notice that in this case both φ̂c and φc are random variables.
4. The random variable εh is uncorrelated with both εm and εc: E(εmεh) = 0, E(εcεh) = 0.
We are now in a position to prove the following result.
Proposition 1. Given Assumptions 1–4 above, the value of Z that minimizes the mean squared error $E_{m,c,h}\big((\hat{\varphi} - \varphi_c)^2\big) = E_{m,c,h}\big((Z\,\hat{\varphi}_c + (1 - Z)\,\hat{\varphi}_m - \varphi_c)^2\big)$, where the expected value is taken over the joint distribution of εm, εc, εh, is given by
$$Z = \frac{\sigma_h^2 + \sigma_m^2 - \rho_{m,c}\,\sigma_m \sigma_c}{\sigma_h^2 + \sigma_m^2 + \sigma_c^2 - 2\rho_{m,c}\,\sigma_m \sigma_c} \tag{3.5}$$
where ρm,c is the correlation between εm and εc.
Proof. The result is straightforward once we express φ̂ − φc in terms of εm, εc, εh only: from the assumptions, $\hat{\varphi} - \varphi_c = (Z-1)(\sigma_h \varepsilon_h - \sigma_m \varepsilon_m) + Z \sigma_c \varepsilon_c$. The mean squared error is therefore given by
$$E_{m,c,h}\big((Z\,\hat{\varphi}_c + (1-Z)\,\hat{\varphi}_m - \varphi_c)^2\big) = E_{m,c,h}\Big(\big((Z-1)(\sigma_h \varepsilon_h - \sigma_m \varepsilon_m) + Z \sigma_c \varepsilon_c\big)^2\Big) = (Z-1)^2(\sigma_h^2 + \sigma_m^2) + Z^2 \sigma_c^2 - 2Z(Z-1)\rho_{m,c}\,\sigma_m \sigma_c$$
where ρm,c = E(εmεc). By minimizing the mean squared error with respect to Z, one obtains Equation (3.5).
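As a numerical sanity check on Proposition 1 (a sketch; the function names and parameter values are ours, not the paper's), the closed-form Z can be compared with a brute-force minimization of the quadratic mean squared error from the proof:

```python
# Credibility factor of Proposition 1 and the mean squared error it minimizes.
def credibility_factor(s_h, s_m, s_c, rho_mc):
    num = s_h**2 + s_m**2 - rho_mc * s_m * s_c
    den = s_h**2 + s_m**2 + s_c**2 - 2.0 * rho_mc * s_m * s_c
    return num / den

def mse(z, s_h, s_m, s_c, rho_mc):
    # (Z-1)^2 (sigma_h^2 + sigma_m^2) + Z^2 sigma_c^2 - 2 Z (Z-1) rho sigma_m sigma_c
    return ((z - 1.0)**2 * (s_h**2 + s_m**2)
            + z**2 * s_c**2
            - 2.0 * z * (z - 1.0) * rho_mc * s_m * s_c)

s_h, s_m, s_c, rho = 0.30, 0.10, 0.40, 0.25   # illustrative values
z_closed = credibility_factor(s_h, s_m, s_c, rho)           # 0.375 here
z_grid = min((i / 10000.0 for i in range(10001)),
             key=lambda z: mse(z, s_h, s_m, s_c, rho))
# z_closed and z_grid agree to within the grid resolution
```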
The following sections will go into more detail as to the meaning of the assumptions and of this result.
3.1.1. Explaining the assumptions
Assumption 2 tries to capture market heterogeneity: different clients will have different risk premiums, reflecting the different risk profile (e.g., age profiles, location, etc.) of the accounts. We do not need to know what the prior distribution of the risk premiums is, as long as we know its variance. In practice, this will be determined empirically.
Assumptions 1 and 3 try to capture the uncertainty inherent in the process of estimating the risk premium. The quantities σm and σc should not be confused with the standard deviations of the underlying aggregate loss distributions for the market and the client.
The random variable εh gives the prior distribution of the client price around the market value, whereas εm and εc represent the estimation uncertainty on the market and the client risk premiums, respectively. Therefore, Assumption 4 (E(εmεh) = 0, E(εcεh) = 0) is quite sound. The correlation between εm and εc, however, cannot be ignored. The reason for this is that the estimated risk premium of the market is based on data collected from different clients, including client c.[2]
3.2. Is φ̂ an unbiased estimator for φc?
It is important to notice that the expected value $E_{m,c,h}\big((\hat{\varphi} - \varphi_c)^2\big)$ is also taken over the distribution of εh. As a consequence, the mean squared error is not necessarily minimized for each individual client, but only over all possible clients.
For a given client c, φ̂ is in general a biased estimator of φc. The bias is given by $\operatorname{bias}(\hat{\varphi} \mid \varphi_c) = E_{m,c}(\hat{\varphi} \mid \varphi_c) - \varphi_c = (1 - Z)(\varphi_m - \varphi_c) = -(1 - Z)\,\sigma_h \varepsilon_h$. The expected value is in this case taken over the joint distribution of εm and εc. Averaging over εh, the bias disappears: $E_h\big(\operatorname{bias}(\hat{\varphi} \mid \varphi_c)\big) = 0$.
Notice how the quest for an estimate φ̂ of φc that is collectively unbiased is a common feature of credibility theory [see, e.g., Bühlmann’s approach as described by Klugman, Panjer, and Willmot (2004)].
The meaning of the formula for the bias, bias(φ̂ | φc) = −(1 − Z)σhεh, is that when credibility is close to 1, the credibility estimate for the risk premium will be close to the client estimated price, φ̂c, and the bias will be close to zero. On the other hand, if the credibility is close to 0, the credibility estimate of the risk premium will be close to φ̂m, and the bias will be about −σhεh: i.e., the expected value of the credibility estimate will be distributed randomly around the market risk premium with a standard deviation equal to σh, which is exactly what we expect to happen.
3.3. Credibility calculation in practice—a simple example
In practice, the standard deviations σh, σm, σc, and ρm,c are not known and they must be estimated from the data. Therefore the credibility factor can also be written as
$$Z \approx \frac{s_h^2 + s_m^2 - r_{m,c}\,s_m s_c}{s_h^2 + s_m^2 + s_c^2 - 2 r_{m,c}\,s_m s_c} \tag{3.7}$$
where sh is the estimated market heterogeneity, rm,c is the estimated correlation between the market and the client, and sm and sc are the estimated standard deviations of the estimators for the market and client risk premiums.
In this section we will show how the credibility factor can be calculated in practice using a simple example in which the risk premium is calculated by a simple burning cost method, based on several years of experience. Also, we assume that:
- The usual tenets of the collective model (Klugman, Panjer, and Willmot 2004) hold: i.e., losses for each client are independent and identically distributed and do not depend on the number of claims.
- The claims originate from a compound Poisson process. This is not a critical assumption but simplifies some of the algebra.
- There is neither IBNR (incurred but not reported claims) nor IBNER (incurred but not enough reserved claims), so that the number of claims and the loss amount for each claim are known at the moment of the analysis. Also, underwriting and other environmental conditions have not changed over the period during which the data were collected, or the data have been adjusted to incorporate the changes. Notice that these assumptions have been made here for the sake of simplicity but are not critical (see Section 3.3.6 for more comments on this).
- All claims are already revalued to current terms, or more accurately to the mid-period of exposure.
- The claim count and the loss amount for each claim are known for each of n clients, including the client under consideration.
- Conditional on the frequency and severity parameters for each client, the losses are independent. As a consequence, the losses of a client are independent of the losses of the rest of the market as a whole.
We will show how to calculate
- the client's risk premium estimator and its standard error,
- the market's risk premium estimator and its standard error,
- the correlation between the client's and the market's risk premium estimator, and
- the market heterogeneity.
3.3.1. Estimating the client’s risk premium and its standard error
If λc is the mean frequency per unit of exposure and μc is the mean severity, the theoretical risk premium is given by φc = λcμc.
As we have assumed that losses are already revalued to current terms, and that no other adjustments are needed (e.g., IBNR, IBNER), the risk premium can be estimated as
$$\hat{\varphi}_c = \frac{\hat{S}_c}{w_c} = \frac{\sum_{i=1}^{n_c} X_i^{(c)}}{w_c} \tag{3.8}$$
where
- $\hat{S}_c = \sum_{i=1}^{n_c} X_i^{(c)}$ is the cumulative loss over the k-year period,
- $w_c = \sum_{j=1}^{k} w_{c,j}$ is the cumulative exposure over the k-year period ($w_{c,j}$ being the exposure for year j) for client c,
- $n_c = \sum_{j=1}^{k} n_{c,j}$ is the cumulative number of claims over k years ($n_{c,j}$ being the number of claims in year j) for client c, and
- $X_i^{(c)}$ is the amount of the ith loss for client c. Note that $X_i^{(c)}$ represents an individual loss amount, not an aggregate loss.
The standard error is the square root of the variance of the estimator (3.8), which in turn can be calculated using standard results for the collective model (Klugman, Panjer, and Willmot 2004):
$$s_c^2 = \operatorname{Var}(\hat{\varphi}_c \mid \varphi_c) = \frac{E(N_c)\operatorname{Var}(X_c) + \operatorname{Var}(N_c)\big(E(X_c)\big)^2}{w_c^2} = \frac{E(N_c)\,E(X_c^2)}{w_c^2} \approx \frac{n_c \times \overline{X_c^2}}{w_c^2} \tag{3.9}$$
The first equality is the general result for the collective model, which applies when the losses are i.i.d. variables. The second holds for a compound Poisson process. The third simply replaces the expectations of the number of claims and of the squared loss with their empirical estimates; in the case of the number of claims, the empirical estimate is the cumulative number of claims itself.
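The two estimates can be sketched in code as follows (the loss amounts and exposure are made-up numbers, and the function names are ours):

```python
# Sketch of Equations (3.8) and (3.9) under the compound Poisson assumption.
def client_risk_premium(losses, exposure):
    # Burning cost: phi_c = (sum of revalued losses) / (cumulative exposure)
    return sum(losses) / exposure

def client_standard_error(losses, exposure):
    # s_c = sqrt(n_c * mean(X^2)) / w_c, i.e. sqrt(sum of squared losses) / w_c
    n_c = len(losses)
    mean_sq = sum(x * x for x in losses) / n_c
    return (n_c * mean_sq) ** 0.5 / exposure

losses = [1200.0, 800.0, 4500.0, 300.0, 2100.0]  # claims over the k-year period
w_c = 50.0                                       # cumulative exposure
phi_c_hat = client_risk_premium(losses, w_c)     # 8900 / 50 = 178.0
s_c = client_standard_error(losses, w_c)
```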
3.3.2. Estimating the market’s risk premium and its standard error
The market risk premium can be calculated in a number of different ways, each with its own justification. It can be a weighted or an unweighted average of the risk premiums of individual clients. Alternatively, it can be calculated as the result of a market analysis, in which the losses of each client are collected and put into a single database. In this latter case, it can be calculated nonparametrically (e.g., the empirical mean of all market losses) or parametrically (e.g., the mean of the modeled distribution for the whole market).
In our simple example, the market risk premium is calculated exactly as the client risk premium by a burning cost approach:
$$\hat{\varphi}_m = \frac{\hat{S}_m}{w_m} = \frac{\sum_{\text{all } c} \hat{S}_c}{w_m}$$
where $w_m = \sum_{\text{all } c} w_c$.
To calculate the variance of the estimator we cannot use Formula (3.9), which applies to i.i.d. variables. Since the aggregate losses of different clients are independent (see more on this in Section 3.3.3), we can, however, write
$$s_m^2 = \operatorname{Var}(\hat{\varphi}_m) = \frac{\sum_{\text{all } c} \operatorname{Var}(\hat{S}_c)}{w_m^2} = \frac{\sum_{\text{all } c} w_c^2 \operatorname{Var}(\hat{\varphi}_c \mid \varphi_c)}{w_m^2} \tag{3.10}$$
and use Formula (3.9) to calculate the variance of the estimator for each client. Note that if the variance of all clients is the same, and so is the exposure, the formula above suggests (unsurprisingly) that the variance of the market is equal to the variance of the client divided by the number of clients.
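In code (illustrative numbers and naming), including a check of the unsurprising special case just mentioned:

```python
# Sketch of Equation (3.10): market variance as the exposure-weighted
# combination of the per-client estimator variances.
def market_standard_error(exposures, client_ses):
    w_m = sum(exposures)
    var_m = sum((w * s) ** 2 for w, s in zip(exposures, client_ses)) / w_m**2
    return var_m ** 0.5

exposures  = [50.0, 80.0, 120.0, 40.0, 60.0]   # w_c for each client
client_ses = [100.0, 70.0, 55.0, 130.0, 90.0]  # s_c for each client
s_m = market_standard_error(exposures, client_ses)

# Special case: equal exposures and equal client variances, n = 4 clients:
# Var(market) = Var(client) / n, so s_m = 2 / sqrt(4) = 1.0
equal_case = market_standard_error([1.0] * 4, [2.0] * 4)   # 1.0
```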
3.3.3. Estimating the correlation
First of all, notice that the empirical aggregate losses of two different clients c, c′ are independent and therefore Cov(Ŝc, Ŝc′) = 0. This is because under our assumptions Ŝc, Ŝc′ are realizations of two separate random processes.
This might appear counterintuitive at first, as a number of common factors are at play (e.g., the judicial environment, the weather) affecting the losses of two different insurers. However, these factors will be reflected in the theoretical risk premiums φc, φc′, while the departures from the theoretical risk premiums for c and c′ will be uncorrelated, much in the same way as the empirical means of two distinct samples drawn from the same underlying distribution are uncorrelated.
By writing the aggregate losses for the market as Ŝm = Ŝc + Ŝm−c, where Ŝm−c are the aggregate losses excluding those from client c, we can now estimate the correlation as
$$r_{m,c} = \frac{\operatorname{Cov}(\hat{\varphi}_m, \hat{\varphi}_c)}{\sqrt{\operatorname{Var}(\hat{\varphi}_m)\operatorname{Var}(\hat{\varphi}_c)}} = \frac{\operatorname{Cov}(\hat{S}_m, \hat{S}_c)}{\sqrt{\operatorname{Var}(\hat{S}_m)\operatorname{Var}(\hat{S}_c)}} = \frac{\operatorname{Cov}(\hat{S}_{m-c}, \hat{S}_c) + \operatorname{Cov}(\hat{S}_c, \hat{S}_c)}{\sqrt{\operatorname{Var}(\hat{S}_m)\operatorname{Var}(\hat{S}_c)}} = \frac{\operatorname{Var}(\hat{S}_c)}{\sqrt{\operatorname{Var}(\hat{S}_m)\operatorname{Var}(\hat{S}_c)}} = \sqrt{\frac{\operatorname{Var}(\hat{S}_c)}{\operatorname{Var}(\hat{S}_m)}} = \frac{w_c s_c}{w_m s_m} \tag{3.11}$$
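This reduction can be sketched as follows (the function name is ours):

```python
# Sketch of Equation (3.11): with independent clients, the correlation between
# the client and market risk premium estimators is r_mc = (w_c * s_c) / (w_m * s_m).
def client_market_correlation(w_c, s_c, w_m, s_m):
    return (w_c * s_c) / (w_m * s_m)

# Sanity check: a "market" consisting of the client alone is perfectly
# correlated with the client's own estimator.
r_single = client_market_correlation(50.0, 100.0, 50.0, 100.0)   # 1.0
```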
3.3.4. Estimating market heterogeneity
Market heterogeneity can be estimated as the empirical variance of the risk premium for all available clients. Depending on the pricing process and the analyst’s choices, the details of the calculation may vary. Specifically, a weighted or unweighted version of the variance may be used. There is no strict prescription on which version to use, but consistency with the way the market premium is calculated should be sought. If the market risk premium is calculated by collecting all data from all clients, larger clients will inevitably get more weight, and the weighted version of the variance is preferable.
In our example, we use the weighted version. The unweighted version can be obtained simply by replacing all weights wc with 1 and wm = ∑c wc with the number of clients.
$$s_h^2 = \frac{\sum_c w_c (\hat{\varphi}_c - \hat{\varphi}_m)^2}{w_m} - \left(\frac{\sum_c w_c s_c^2}{w_m} + s_m^2 - \frac{2 s_m \sum_c w_c s_c r_{m,c}}{w_m}\right) = \frac{\sum_c w_c (\hat{\varphi}_c - \hat{\varphi}_m)^2 - \sum_c \left(1 - \frac{w_c}{w_m}\right) w_c s_c^2}{w_m} \tag{3.12}$$
The unusual second term in the first expression of Equation (3.12) is the bias-correction term relevant to our model. It can be derived by expanding the expression
$$E\left(\sum_c w_c (\hat{\varphi}_c - \hat{\varphi}_m)^2\right) = E\left(\sum_c w_c (\sigma_c \varepsilon_c + \sigma_h \varepsilon_h - \sigma_m \varepsilon_m)^2\right)$$
and using the estimated values sc, sh, sm, rm,c instead of the theoretical values σc, σh, σm, ρm,c. The more compact second expression in Equation (3.12) is obtained by substituting the expressions for sm and rm,c derived in Equations (3.10) and (3.11), respectively. Note that, owing to the bias-correction term, the estimated market heterogeneity can occasionally become negative. This phenomenon also appears in Bühlmann's credibility theory (Bühlmann and Gisler 2005). When this happens, one can follow the recommendation in Bühlmann and Gisler (2005) and set Z = 0.
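The compact form of the estimator, together with the Z = 0 fallback, can be sketched as follows (names and numbers are illustrative):

```python
# Sketch of the compact form of Equation (3.12): weighted spread of the client
# risk premiums around the market rate, minus the bias-correction term.
def market_heterogeneity_sq(exposures, phis, phi_m, client_vars):
    w_m = sum(exposures)
    spread = sum(w * (p - phi_m) ** 2 for w, p in zip(exposures, phis))
    bias = sum((1.0 - w / w_m) * w * v for w, v in zip(exposures, client_vars))
    return (spread - bias) / w_m   # may be negative; then set Z = 0

# With no estimation error on the clients (client_vars all zero), this is just
# the weighted variance of the client risk premiums around the market rate:
s_h_sq = market_heterogeneity_sq([1.0, 1.0], [100.0, 300.0], 200.0, [0.0, 0.0])
# s_h_sq = (100^2 + 100^2) / 2 = 10000.0
```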
By collating all the results in Sections 3.3.1 through 3.3.4, we now have all the ingredients to calculate the credibility factor Z and therefore the credibility estimate.
3.3.5. Numerical illustration
To give a more concrete idea of the calculations involved, we have performed an experiment on artificially generated data based on the simple example above. Losses have been generated by using a compound Poisson process with an exponential severity model.
The simulation uses five clients with different exposures, Poisson rates, and exponential means. The "true" values of these parameters are shown in Table 1. In practice, we do not know these values; we only see a single realization of a random process. Table 2 shows
- the theoretical risk premiums and standard errors for all the clients and the market (obtained by collating all clients), based on the true values, and
- the risk premiums and standard errors based on five different realizations of the stochastic process, calculated as in Sections 3.3.1 and 3.3.2.
Table 3 shows the theoretical and empirical correlation between each client and the market. The theoretical correlation between the client and the market is calculated using Formula (3.11) and the theoretical standard errors; the correlation based on the five simulation runs is calculated using (3.11) with the estimated standard errors.
Finally, Table 4 shows the weighted market heterogeneity, calculated as in Formula (3.12). The table also shows the values of the credibility factors, calculated as in Formula (3.7). Notice that when the market heterogeneity—which is the most unstable variable in this exercise—appears to be higher, the credibility of the client’s risk premiums also increases significantly.
3.3.6. Practical issues
In more general cases, several complications will arise. The list below is not meant to be exhaustive; rather, it illustrates some of the typical issues that arise and how they should be addressed, and conveys the idea that there is no single recipe for calculating the error on the risk premium: the error will depend crucially on the process by which the risk premium is calculated.
Most of the issues listed below have actually arisen in the real-world application to reinsurance pricing for which this methodology was originally devised (Parodi and Bonche 2008).
- Separate frequency/severity analysis. Rather than by the simple burning cost approach described above, the risk premium will often be calculated by a separate frequency and severity analysis. This does not in itself bring much added complication.
- Market severity model. The market severity distribution—which is in general a mixture of the severity curves of different clients—will usually be approximated by a single parametric distribution. Typically, the parameters of the distribution will be obtained by maximum likelihood estimation (MLE), and the standard error on the parameters from the inverse of the Fisher information. As long as the fit is good, this is a useful approximation.
- Using projected estimates for claim count/claim amounts. In the simple example described above, it was assumed that the number of losses and the loss amounts over the analysis period were known with certainty. In many cases, only projections are available, and the error on the projected amounts will have to be incorporated in the overall standard error.
- Changes in the risk profile. When the risk is not uniform over the analysis period due to changes in the portfolio, the business mix, and the legal environment, corrections will need to be made to the losses for each period to bring them to a uniform basis. The uncertainty on these corrections should be incorporated in the calculation of the standard error: this is formally simple, the real difficulty being quantifying this uncertainty! This problem is common to all credibility approaches and to all experience rating.
- Difficulties in error propagation. If the distributions used to model frequency and severity are not of a simple type, calculating Var(φ̂) may require drawing at random from the distribution of the parameters. When the parameters are obtained through MLE, this distribution is approximately a multivariate normal distribution with a given covariance matrix.
- Availability of an analytical formula for the risk premium. When an analytical formula for φ̂ is not available, φ̂ itself may have to be estimated by stochastic simulation. As a consequence, the estimation of Var(φ̂) will have a larger computational complexity. Where possible, an analytical approximation should be used [see Parodi and Bonche (2008) for an example of this].
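For the error-propagation point above, a minimal sketch (all numbers, names, and the covariance matrix are assumed for illustration) of drawing frequency/severity parameters from the approximating bivariate normal and propagating them to the risk premium φ = λμ:

```python
import math
import random

# Sketch: draw (lambda, mu) from a bivariate normal centered on the point
# estimates with an assumed covariance matrix (in practice, the inverse of the
# Fisher information), and propagate to phi = lambda * mu.
def sample_risk_premiums(lam, mu, cov, n=10000, seed=7):
    rng = random.Random(seed)
    # Cholesky factor of the 2x2 covariance matrix
    a = math.sqrt(cov[0][0])
    b = cov[1][0] / a
    c = math.sqrt(cov[1][1] - b * b)
    phis = []
    for _ in range(n):
        z1, z2 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
        lam_i = lam + a * z1
        mu_i = mu + b * z1 + c * z2
        phis.append(lam_i * mu_i)
    return phis

lam_hat, mu_hat = 0.1, 5000.0                  # point estimates (assumed)
cov = [[0.0004, 0.0], [0.0, 40000.0]]          # assumed parameter covariance
phis = sample_risk_premiums(lam_hat, mu_hat, cov)
mean_phi = sum(phis) / len(phis)               # close to 0.1 * 5000 = 500
var_phi = sum((p - mean_phi) ** 2 for p in phis) / (len(phis) - 1)
# var_phi estimates the parameter-uncertainty contribution to Var(phi_hat)
```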
4. Limitations and future research
We now look into the limitations of this work and areas for improvement.
- The credibility estimate relies on second-order statistics only. This may not always be appropriate: when the errors on the parameters are large, the standard deviation may not in itself characterize the distortions on the risk premium in a sufficiently accurate way. More general estimates can be obtained by replacing the mean-squared-error minimization criterion used in Proposition 1 with more sophisticated criteria, perhaps based on the quantiles or the higher moments of the aggregate loss distribution. Further research is needed to explore these different criteria.
- In order to get sound results for the credibility factor, a good knowledge of the pricing process and its uncertainties is required. Consider, however, that it is in any case part of the actuary's job to acquire a sufficiently thorough knowledge of the uncertainties of the pricing process. If this knowledge is available, the credibility estimate is simply a byproduct.
- For the method to work, it is critical that the process by which the uncertainties are computed be fully automated and that its computational complexity be kept at bay, by identifying the variables that have real financial significance. This is especially important if an analytical formula for the price is not available.
- Where adequate market experience is not available, the method will not give sensible results. A possible way of dealing with this issue is to write the optimal price with a "nested credibility" formula such as
$$\text{Price} = Z \times \text{Client} + (1 - Z) \times \big(W \times \text{Market} + (1 - W) \times \text{Risk}\big)$$
where Risk is some pure price of risk and Market is the market risk premium, as suggested by Mildenhall (2008). A three-pronged approach like this might explain the minimum rates on line observed in real-world reinsurance contracts: the credibility-weighted "Market" rate (W × Market) would become negligible for the higher layers, whereas the credibility-weighted "Risk" rate ((1 − W) × Risk) would remain significant. This makes sense, as the top layers are affected by an uncertainty that is difficult to quantify. More research is needed on this topic, which might have to extend beyond the "risk premium" paradigm.
5. Conclusions
This paper has presented a novel approach to calculating the credibility premium, called uncertainty-based credibility because it uses the standard deviation of the estimator of the risk premium (for both the client and the market) as the key to calculating the credibility factors.
This approach is especially useful for pricing excess-of-loss reinsurance, where the balance of client uncertainty, market uncertainty, and market heterogeneity is different for each layer of reinsurance. It has been used for pricing motor reinsurance in the U.K. market (Parodi and Bonche 2008).
The methodology is in itself quite general and can be applied to many different problems, essentially to all situations where it is possible to compute the uncertainties of the pricing process and the heterogeneity of the market. Other examples include experience rating in direct insurance (possibly with different excesses) and combining exposure rating (as calculated by using exposure curves) and experience rating in property and liability reinsurance.
Acknowledgments
This work has been done as part of the research and development activities of Aon Benfield, which is part of Aon Ltd.
We are grateful to Dr. Mary Lunn of St. Hugh’s College, University of Oxford, for a very helpful discussion on the proof of Proposition 1. Jane C. Weiss has proposed and subsequently supervised the project. Jun Lin has helped us with many useful suggestions during the real-world implementation of the methodology. Stephen Mildenhall has reviewed the paper and given us crucial advice on how to restructure it. Warren Dresner, Tomasz Dudek, Matthew Eagle, Liza Gonzalez, Di Kuang, David Maneval, Sophia Mealy, Mélodie Pollet-Villard, Jonathan Richardson, and Jim Riley have given us helpful suggestions during the project, tested the software implementation of the methodology, and reviewed the paper. We would also like to thank Paul Weaver for his support during the implementation of the project and for providing valuable commercial feedback.
[1] Usually denoted as "pure premium" in the United States.
[2] There might be other drivers for correlation between market and client depending on how the client risk premium is determined, but in this paper we will usually be assuming that the client estimate is based on the client experience alone.