1. Introduction
In a recent paper, Morel (2013) discussed the use of power curves and midpoints of the reinsurance layers to price catastrophe excess of loss contracts. He pinpointed some flaws inherent in using power curves and in using the arithmetic mean or the geometric mean as the midpoint of the reinsurance layers, and to solve these issues he advocated replacing the power curve with spline functions. The present paper highlights other important flaws in the power curve method and suggests a simpler procedure using power curves whose natural midpoint is the generalized logarithmic mean. The paper is organized as follows. Section 2 introduces the problem and the numerical example that will be worked throughout the paper. Section 3 sketches how to deal with paid reinstatements. Section 4 introduces the European Pareto distribution. Section 5 shows that the power curve method implicitly assumes that prices behave according to a European Pareto distribution. Section 6 introduces some possible midpoints and the related issues. Section 7 explains why we do not follow the spline function route introduced by Morel (2013). Section 8 shows that the natural midpoint when using power curves is the generalized logarithmic mean. Section 9 introduces an alternative method based on the negative exponential distribution and its associated midpoint. The numerical example is further analyzed in Section 10. Section 11 concludes.
2. The rate on line method
Property catastrophe reinsurance offers insurance companies protection against losses due to natural catastrophes. This type of reinsurance is purchased by all insurance companies writing property business because it provides cost-effective capital relief. In other words, the margin ceded to the reinsurers is smaller than the cost of holding capital, and therefore it makes sense to purchase that type of reinsurance. We will use a European catastrophe excess of loss program (Table 2.1) throughout the paper.
The rate on line (ROL) is simply the up-front premium divided by the limit of the layer. In practice, the various layers of the program have a limited number of reinstatements (most of the time one), and furthermore, the reinstatements are usually payable at 100%. We will briefly discuss in Section 3 how to deal with this feature.
Reinsurers tend to use commercial models to price natural catastrophe perils. Further, they all have their own pricing models in order to factor their administration and capital costs into the commercial premium. Often reinsurance brokers and underwriters try to “fit” observed ROLs in order to extrapolate premiums to other layers and/or to predict premiums based on the evolution of the exposure and the brokers’ anticipation of pricing trends.
The aim of this paper is to justify a method commonly used by reinsurance actuaries that is based on power curves. We will show that when the midpoint for the reinsurance layers is well chosen, the method delivers consistent results.
3. Dealing with paid reinstatements
Most of the time, the reinsurance layers will have their yearly liability limited to two or more times the limit (denoted by C) of the layer. In principle, the cedent will purchase a sufficient number of limits, so this feature has only a marginal influence on the price, which can then be treated as theoretically valid for an unlimited number of reinstatements, even though reinsurers would not provide unlimited yearly capacity for property catastrophe business.
Furthermore, reinstating the limits generally is not free. An additional premium called the reinstatement premium must be paid. We will assume the most general case, in which the reinstatement premium is equal to 100% of the initial premium multiplied by the reinsured loss divided by the limit, as shown in Formula (3.1). We say that the reinstatements are payable at 100% pro rata capita. Reinstating the limit is not optional: a loss to the layer automatically triggers a reinstatement premium (provided some limit remains to be reinstated). Paid reinstatements in a layer imply that the up-front cost of the layer is smaller than it would be with free reinstatement(s); working with paid rather than free reinstatements therefore leads to a rebate on the initial reinsurance premium.
Let us define S as the stochastic loss in the layer and SLOL = S/C as the stochastic loss on line (LOL). If ROL is the up-front ROL, Walhin (2001) shows that the expected additional ROL due to the paid reinstatements is given by
\[
\text{ROL} \times \sum_{i=1}^{k} \frac{c_i}{C}\, E\big[\min\big(C, \max(0, S-(i-1)C)\big)\big], \tag{3.1}
\]
where k is the number of reinstatements and ci is the price of the ith reinstatement.
Here we assume that all reinstatements are paid at 100% and that the number of reinstatements is large enough that we can approximate k → ∞. Thus, Formula (3.1) simplifies into
\[
\text{ROL} \times \frac{E[S]}{C} = \text{ROL} \times \text{LOL},
\]
where the LOL is equal to the expected value of the stochastic LOL.
Therefore the rebate that can be given for paid reinstatements against free reinstatements is equal to LOL. In other words, if the layer has one (or more) reinstatements at 100%, then the equivalent ROL with free reinstatements (denoted FROL) is approximated by ROL × (1 + LOL). The LOL is not readily available, but if one makes an assumption about the loading charged by reinsurers, it is possible to deduce LOL and, thereby, the corresponding up-front ROL when reinstatements are free (FROL). Let us assume that FROL is obtained by adding to the LOL 5% of the standard deviation of the stochastic LOL, which is approximated by √(LOL × (1 − LOL)), and loading by 100/90. These parameters denote very soft market conditions. The equation to solve numerically is
\[
\frac{\text{LOL} + 5\%\sqrt{\text{LOL}\times(1-\text{LOL})}}{0.9} = \text{ROL}\times(1+\text{LOL}) = \text{FROL}.
\]
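To make this concrete, here is a minimal Python sketch (ours, not the paper's) that solves the equation above by bisection for a given up-front ROL; the function name and the example ROL of 20% are purely illustrative.

```python
from math import sqrt

def lol_from_rol(rol, tol=1e-12):
    """Solve (LOL + 5% * sqrt(LOL*(1-LOL))) / 0.9 = ROL * (1 + LOL) for LOL by bisection."""
    def gap(lol):
        return (lol + 0.05 * sqrt(lol * (1.0 - lol))) / 0.9 - rol * (1.0 + lol)
    lo, hi = 0.0, 1.0                     # a loss on line lies in [0, 1]
    for _ in range(200):                  # plain bisection is ample for this smooth equation
        mid = 0.5 * (lo + hi)
        if gap(lo) * gap(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    lol = 0.5 * (lo + hi)
    frol = rol * (1.0 + lol)              # equivalent rate on line with free reinstatements
    return lol, frol

print(lol_from_rol(0.20))                 # hypothetical ROL, not a figure from the program
```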
Table 3.1 provides LOL and FROL for our numerical example.
Given the data in Table 3.1, the user has the choice between fitting

- FROL—that is, the equivalent rate on line with free reinstatements;
- LOL—that is, the loss on line; or
- ROL—that is, the up-front rate on line (when reinstatements are paid). In principle, this would not be recommended because this method ignores the fact that the rebate due to the paid reinstatements is embedded in the value of ROL. See Table 10.6 and related comments in Section 10 for more details.
Let us emphasize that the above calculations are oversimplified. In practice, underwriters would use their modeled stochastic LOL to compute LOL and the expected additional reinstatement premiums precisely. Furthermore, they would apply the profitability model of the reinsurer to price the layers and arrive at FROL or ROL. In this paper, we proceed as if we do not have these pieces of information at our disposal. Instead, we want to make quick calculations to compare the pricing of various layers. That is what the ROL method aims to do.
4. The European Pareto distribution and the pure reinsurance premium
The European Pareto distribution dates back to Pareto (1895), who studied the distribution of the revenues in a given population. Hagstroem (1925) advocated its use in reinsurance. We will say that X follows a Pareto distribution (X ∼ Pa(A,α)) if the cumulative distribution function of X is given by
\[
F(x) = P[X \le x] = 1 - \left(\frac{x}{A}\right)^{-\alpha}, \qquad x > A.
\]
The Pareto distribution is used by reinsurance actuaries because of its many nice mathematical properties (see Philbrick 1985 or Walhin 2003 for a discussion). Among others, the Pareto distribution is a particular case of the generalized Pareto distribution (GPD), introduced by Pickands (1975). The GPD can be shown to be the limiting case for the distribution of excesses above large thresholds, which is exactly the problem reinsurance actuaries try to solve.
Let us assume a layer C xs P. We will use the notation c = C/P. Let N_A be the number of losses in excess of A, with A ≤ P. Let X_1, X_2, . . . be the large losses. We will assume that they are mutually independent and identically distributed according to a Pareto distribution. The yearly liability of the reinsurer (with an unlimited number of reinstatements) can be written as
\[
S = \min\big(cP, \max(0, X_1 - P)\big) + \cdots + \min\big(cP, \max(0, X_{N_A} - P)\big).
\]
The pure reinsurance premium (PRP(P, C)) for a layer C xs P is given by
\[
\text{PRP}(P,C) = E[N_A]\, E\big[\min(C, \max(0, X-P))\big] =
\begin{cases}
E[N_A]\, \dfrac{P^{1-\alpha} A^{\alpha}}{1-\alpha}\big((1+c)^{1-\alpha} - 1\big) & \text{if } \alpha \ne 1 \\[1.5ex]
E[N_A]\, A \ln(1+c) & \text{if } \alpha = 1.
\end{cases}
\]
The LOL (LOL(P, C)) is therefore
\[
\text{LOL}(P,C) = \frac{\text{PRP}(P,C)}{C} =
\begin{cases}
E[N_A]\, \dfrac{A^{\alpha}}{P^{\alpha}}\, \dfrac{(1+c)^{1-\alpha} - 1}{(1-\alpha)c} & \text{if } \alpha \ne 1 \\[1.5ex]
E[N_A]\, \dfrac{A}{P}\, \dfrac{\ln(1+c)}{c} & \text{if } \alpha = 1.
\end{cases}
\]
It is also worth noting that the pure reinsurance premium of an unlimited layer ∞ xs P is given by
\[
\text{PRP}(P,\infty) = E[N_A]\, \frac{A^{\alpha}}{\alpha-1}\, P^{1-\alpha} \qquad \text{if } \alpha > 1, \tag{4.1}
\]
and exists only if α > 1.
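As an illustration of the closed-form expressions in this section, the following Python sketch evaluates PRP(P, C), LOL(P, C), and PRP(P, ∞); the function names and the numerical inputs are hypothetical and only show how the formulas would be coded.

```python
from math import log

def prp_pareto(en_a, A, alpha, P, C):
    """Pure reinsurance premium of the layer C xs P for Pareto(A, alpha) severities,
    with en_a the expected number of losses above A and c = C / P."""
    c = C / P
    if abs(alpha - 1.0) < 1e-12:
        return en_a * A * log(1.0 + c)
    return en_a * P ** (1.0 - alpha) * A ** alpha / (1.0 - alpha) * ((1.0 + c) ** (1.0 - alpha) - 1.0)

def lol_pareto(en_a, A, alpha, P, C):
    """Loss on line: the pure reinsurance premium divided by the limit C."""
    return prp_pareto(en_a, A, alpha, P, C) / C

def prp_unlimited(en_a, A, alpha, P):
    """Pure reinsurance premium of the unlimited layer infinity xs P (requires alpha > 1)."""
    return en_a * A ** alpha / (alpha - 1.0) * P ** (1.0 - alpha)

# Hypothetical figures, not those of the paper's program:
print(prp_pareto(0.5, 50e6, 1.25, 100e6, 50e6))
print(lol_pareto(0.5, 50e6, 1.25, 100e6, 50e6))
print(prp_unlimited(0.5, 50e6, 1.25, 100e6))
```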
5. The midpoint method for fitting ROLs
As Morel (2013) explained, a possible solution to the problem introduced in Section 2 is to fit a power curve through midpoints of the original program layers. Morel (2013) claimed that there is no literature on the subject. However, Verlaak, Beirlant, and Hürlimann (2009) had already justified the use of power curves in a Pareto framework.
Let us assume that ROL is based on a frequency distribution with mean λ and a severity distribution X, with survival function P[X > x] = S(x), x ≥ 0. We then have that
\[
\text{ROL}(P,C) = \frac{\lambda}{C} \int_P^{P+C} S(x)\, dx.
\]
Because S(x) is a decreasing function, we immediately find that
\[
\lambda S(P+C) \le \text{ROL}(P,C) \le \lambda S(P).
\]
Let us also note that λ_x = λS(x) denotes the expected number of losses in excess of x. We then have
\[
\lambda_{P+C} \le \text{ROL}(P,C) \le \lambda_P.
\]
Therefore there exists a point x = MP(P, C) such that
\[
\lambda_{P+C} \le \lambda_{\text{MP}(P,C)} \approx \text{ROL}(P,C) \le \lambda_P.
\]
If λ_MP(P,C) can be calculated easily, then an approximation for ROL(P, C) is provided based on a certain midpoint MP(P, C) of the layer C xs P.

Fitting a power curve of the type
\[
\text{ROL}(P_i,C_i) \approx \lambda_{\text{MP}(P_i,C_i)} = a\,[\text{MP}(P_i,C_i)]^{-b}
\]
seems natural and will lead to estimating the parameters by linear regression. In fact, assuming a power curve corresponds to the case in which the severity of the process is Pareto distributed. Indeed, we have
\[
\lambda_{\text{MP}(P,C)} = \lambda_A \left(\frac{\text{MP}(P,C)}{A}\right)^{-\alpha},
\]
where the midpoints are normalized by the parameter A and the λ parameter depends on the chosen value of A.
A can be arbitrarily chosen but must be less than the smallest attachment point of the program (otherwise not all the losses hitting the program would be modeled). Further, if we assume B < A, we have
\[
\lambda_{\text{MP}(P,C)} = \lambda_A \left(\frac{x}{A}\right)^{-\alpha} = \lambda_B \left(\frac{A}{B}\right)^{-\alpha} \left(\frac{x}{A}\right)^{-\alpha} = \lambda_B \left(\frac{x}{B}\right)^{-\alpha},
\]
showing that opting for any threshold lower than the lowest attachment point of the reinsurance program will lead to the same α.
As briefly explained by Verlaak, Beirlant, and Hürlimann (2009), Formula (5.1) justifies the use of a power function to fit the observed ROLs. Its parameters are easily adjusted by linear regression after log transformation. Assume that we have observed n layers (Pi, Ci) with ROLi, i = 1, . . . , n. We have
\[
\text{ROL}_i = \text{ROL}(P_i,C_i) = \lambda_A\, y_i^{-\alpha} \quad \text{with} \quad y_i = \frac{\text{MP}(P_i,C_i)}{A}, \qquad i = 1,\ldots,n.
\]
Taking the natural logarithm on both sides of the equality yields
\[
\ln(\text{ROL}_i) = \ln(\lambda_A) - \alpha \ln(y_i), \qquad i = 1,\ldots,n,
\]
and the parameters λ_A and α can easily be estimated by linear regression. This method therefore does not require any numerical procedure, making it convenient for reinsurance brokers and underwriters to use for quick calculations.

It remains to choose the midpoint, which we will do in Section 6.
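The fit itself is a small least-squares problem. The sketch below shows how it could be coded with NumPy, including optional weights (premiums are a natural choice, as discussed later); the helper name and the sample figures are ours, not the paper's.

```python
import numpy as np

def fit_power_curve(rols, midpoints, A, weights=None):
    """Fit ln(ROL_i) = ln(lambda_A) - alpha * ln(y_i) with y_i = MP_i / A by (weighted) least squares."""
    y = np.asarray(midpoints, dtype=float) / A
    X = np.column_stack([np.ones_like(y), -np.log(y)])   # regressors: intercept and -ln(y_i)
    z = np.log(np.asarray(rols, dtype=float))            # response: ln(ROL_i)
    if weights is None:
        weights = np.ones_like(z)
    w = np.sqrt(np.asarray(weights, dtype=float))        # row scaling for weighted least squares
    coef, *_ = np.linalg.lstsq(X * w[:, None], z * w, rcond=None)
    lambda_A, alpha = np.exp(coef[0]), coef[1]
    return lambda_A, alpha

# Hypothetical observed rates on line and midpoints (not the paper's Table 2.1 figures):
rols = [0.30, 0.18, 0.10, 0.06]
mids = [70e6, 110e6, 170e6, 260e6]
print(fit_power_curve(rols, mids, A=50e6))
```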
6. Candidates for the midpoint and various issues
Morel (2013) used two midpoints: the arithmetic mean (ARI) and the geometric mean (GEO). The geometric mean has the nice feature of corresponding exactly to the case α = 2 in the Pareto setting. A third interesting case could be the logarithmic mean (LOG), corresponding exactly to the case α = 1 in the Pareto setting. We have
\[
\begin{aligned}
\text{ARI}(P,C) &= \frac{P + (P+C)}{2} = P\,\frac{2+c}{2}, \\
\text{GEO}(P,C) &= \sqrt{P(P+C)} = P\sqrt{1+c}, \\
\text{LOG}(P,C) &= \frac{(P+C)-P}{\ln(P+C)-\ln(P)} = \frac{Pc}{\ln(1+c)}.
\end{aligned}
\]
With the above midpoints, the approximated ROL (here we use ROL, but the reasoning is also valid for LOL and FROL) becomes
\[
\begin{aligned}
\text{ROL}_{\text{ARI}}(P,C) &= \lambda_A \frac{P^{-\alpha}}{A^{-\alpha}} \left(1+\frac{c}{2}\right)^{-\alpha}, \\
\text{ROL}_{\text{GEO}}(P,C) &= \lambda_A \frac{P^{-\alpha}}{A^{-\alpha}} (1+c)^{-\alpha/2}, \\
\text{ROL}_{\text{LOG}}(P,C) &= \lambda_A \frac{P^{-\alpha}}{A^{-\alpha}} \left(\frac{\ln(1+c)}{c}\right)^{\alpha},
\end{aligned}
\]
and we immediately obtain the reinsurance premium as
\[
\begin{aligned}
\text{RP}_{\text{ARI}}(P,C) &= \lambda_A \frac{P^{1-\alpha}}{A^{-\alpha}}\, c \left(1+\frac{c}{2}\right)^{-\alpha}, \\
\text{RP}_{\text{GEO}}(P,C) &= \lambda_A \frac{P^{1-\alpha}}{A^{-\alpha}}\, c\, (1+c)^{-\alpha/2}, \\
\text{RP}_{\text{LOG}}(P,C) &= \lambda_A \frac{P^{1-\alpha}}{A^{-\alpha}}\, c \left(\frac{\ln(1+c)}{c}\right)^{\alpha}.
\end{aligned}
\]
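For completeness, here is a short sketch of the three classical midpoints and the resulting approximated ROL and reinsurance premium; the parameter values are hypothetical and serve only to show how the formulas combine.

```python
from math import sqrt, log

def ari(P, C): return P + C / 2.0                # arithmetic mean of P and P + C
def geo(P, C): return sqrt(P * (P + C))          # geometric mean
def log_mean(P, C): return C / log((P + C) / P)  # logarithmic mean

def rol_from_midpoint(lambda_A, alpha, A, midpoint):
    """Approximated rate on line lambda_A * (MP / A)^(-alpha)."""
    return lambda_A * (midpoint / A) ** (-alpha)

def rp_from_midpoint(lambda_A, alpha, A, C, midpoint):
    """Approximated reinsurance premium: ROL times the limit C."""
    return C * rol_from_midpoint(lambda_A, alpha, A, midpoint)

# Hypothetical fitted parameters and layer, for illustration only:
lam, alpha, A, P, C = 0.9, 1.25, 50e6, 100e6, 50e6
for mp in (ari(P, C), geo(P, C), log_mean(P, C)):
    print(mp, rol_from_midpoint(lam, alpha, A, mp), rp_from_midpoint(lam, alpha, A, C, mp))
```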
The power curves and associated fits for these three midpoints are given in Tables 6.1 and 6.2, respectively. A has been chosen as equal to 50,000,000.
We observe that the fits based on the three proposed midpoints are of relatively good quality. The best fit is obtained with the logarithmic mean, which is not surprising because the fitted α is around 1.25—that is, not far from 1 (which corresponds to the LOG case). Let us now discuss various issues linked to the arbitrary choice of one of these midpoints.
Issue 1. Morel (2013) claimed that the quality of the fit is not excellent and in particular that the total premium is not matched. We believe that this is unavoidable when using parsimonious mathematical models. One could argue that the fit could be enhanced by using weights when fitting the parameters of the linear regression. Natural weights are the premiums. Tables 6.3 and 6.4 provide the fits through weighted linear regression.
There remains a difference between the observed total premium and the fitted total premium. The user could, for example, correct the λ parameter to force the total approximated ROL to match the total observed ROL.
Issue 2. Morel (2013) also claimed that different layers may have the same ROL. We do not believe that this is an issue. The theory perfectly allows for various layers to have the same ROL. See Table 10.4 in the numerical illustration of Section 10.
Issue 3. Morel (2013) also claimed that due to the unboundedness of the power curve, a layer attaching at an infinitely small level would have an infinite premium. That is in fact not true in all cases. In particular, we have
\[
\begin{aligned}
\lim_{P\to 0} \text{RP}_{\text{ARI}}(P,C) &= \lambda_A (2A)^{\alpha} C^{1-\alpha}, \\
\lim_{P\to 0} \text{RP}_{\text{GEO}}(P,C) &= \infty, \\
\lim_{P\to 0} \text{RP}_{\text{LOG}}(P,C) &= \infty.
\end{aligned}
\]
So we observe that the limit does exist if the midpoint is the arithmetic mean. Anyway, we believe that extrapolating to layers with infinitely small deductibles does not make sense and does not need to be captured by the model. It is indeed most likely that the exposure at such low levels cannot be extrapolated from the exposure at higher levels. Attritional losses will require different modeling than large losses.
Issue 4. We also have that the price for adjacent layers is not additive, which is obviously nonsense:
\[
\begin{aligned}
\text{RP}_{\text{ARI}}(P_i,C_i) + \text{RP}_{\text{ARI}}(P_i+C_i,C_j) &\ne \text{RP}_{\text{ARI}}(P_i,C_i+C_j), \\
\text{RP}_{\text{GEO}}(P_i,C_i) + \text{RP}_{\text{GEO}}(P_i+C_i,C_j) &\ne \text{RP}_{\text{GEO}}(P_i,C_i+C_j), \quad \alpha \ne 2, \\
\text{RP}_{\text{LOG}}(P_i,C_i) + \text{RP}_{\text{LOG}}(P_i+C_i,C_j) &\ne \text{RP}_{\text{LOG}}(P_i,C_i+C_j), \quad \alpha \ne 1.
\end{aligned}
\]
Issue 5. Other issues not mentioned by Morel (2013) are
\[
\lim_{C\to\infty} \text{RP}_{\text{GEO}}(P,C) = \infty, \quad \alpha < 2, \tag{6.1}
\]
\[
\lim_{C\to\infty} \text{RP}_{\text{ARI}}(P,C) = \infty, \quad \alpha < 1, \tag{6.2}
\]
\[
\lim_{C\to\infty} \text{RP}_{\text{LOG}}(P,C) = \infty, \quad \alpha \le 1, \tag{6.3}
\]
\[
\lim_{C\to\infty} \text{RP}_{\text{GEO}}(P,C) = 0, \quad \alpha > 2, \tag{6.4}
\]
\[
\lim_{C\to\infty} \text{RP}_{\text{ARI}}(P,C) = 0, \quad \alpha > 1, \tag{6.5}
\]
\[
\lim_{C\to\infty} \text{RP}_{\text{LOG}}(P,C) = 0, \quad \alpha > 1, \tag{6.6}
\]
\[
\lim_{C\to\infty} \text{RP}_{\text{GEO}}(P,C) = \lambda_A \frac{A^2}{P}, \quad \alpha = 2, \text{ and} \tag{6.7}
\]
\[
\lim_{C\to\infty} \text{RP}_{\text{ARI}}(P,C) = 2\lambda_A A, \quad \alpha = 1. \tag{6.8}
\]
Limit (6.1) makes no sense for the cases 1 < α < 2. Indeed, premiums for unlimited reinsurance layers remain finite when (loaded) claims are Pareto distributed with α > 1 (see Formula [4.1]). Limits (6.2) and (6.3) make sense because the α parameter is smaller than or equal to 1 and the expectation of a Pareto random variable does not exist in this case. Limits (6.4), (6.5), and (6.6) make no sense: unlimited reinsurance layers must lead to nonzero premiums. Finite and nonzero limits are obtained only in the very particular cases (6.7) and (6.8), confirming again the danger of the method when dealing with layers having a very large limit.
7. The spline solution
Morel (2013) suggested overcoming most of the issues by integrating spline functions over the various layers instead of working with power curves and midpoints. In fact, Morel (2013) implicitly used the formula
\[
\text{RP}(P,C) = \lambda_A \int_P^{P+C} S(x)\, dx,
\]
where S(x) is the survival function of the underlying claims or premium process. Morel (2013) suggested fitting spline functions and thereby overcame issues 1 through 5. However, that solution comes at the cost of introducing heavy assumptions about the maximum ROL at the bottom of the program as well as the minimum ROL at the top of the program; the user can easily deal with these concepts outside the model by using expert judgment. More important, the survival function is decreasing whereas the fitted spline functions are not everywhere decreasing, which can lead to negative prices for certain layers. Finally, the whole approach rests on a heavily overparameterized model: Morel (2013) used 19 parameters to fit five layers.
We propose in the next section a method that will overcome most issues and remain parsimonious in terms of number of parameters.
8. The generalized logarithmic mean of order r
A two-variable continuous function f: R₊² → R₊ is called a mean on R₊ if min(x, y) ≤ f(x, y) ≤ max(x, y) holds for all x, y ∈ R₊.
A way to build a mean is to resort to the Cauchy mean value theorem (Cauchy 1882). Let the functions f(z) and g(z) be continuous on an interval [x, y], differentiable on (x, y), and with g′(z) ≠ 0 for all z ∈ (x, y). Then there exists a point z = ξ such that
\[
\frac{f(x)-f(y)}{g(x)-g(y)} = \frac{f'(\xi)}{g'(\xi)}.
\]
Let us choose f(x) = xr and g(x) = x. We obtain
\[
\frac{x^r - y^r}{x-y} = r\,\xi^{r-1}.
\]
Solving in ξ, we find
\[
\xi = \left[\frac{x^r - y^r}{r(x-y)}\right]^{\frac{1}{r-1}}.
\]
ξ is called the generalized logarithmic mean of x and y.
Because we are interested in midpoints of layers [P, P + C], which we denote by MP(P, C), we will adopt the notation MP(x, y − x) with y ≥ x. In this context, we can write the generalized logarithmic mean more precisely as
\[
L_r(x, y-x) =
\begin{cases}
\left[\dfrac{x^r - y^r}{r(x-y)}\right]^{\frac{1}{r-1}}, & r \ne 0,\; r \ne 1,\; x \ne y \\[1.5ex]
\text{LOG}(x, y-x) = \dfrac{x-y}{\ln(x) - \ln(y)}, & r = 0,\; x \ne y \\[1.5ex]
\text{IDENTRIC}(x, y-x) = e^{-1}\left(\dfrac{x^x}{y^y}\right)^{\frac{1}{x-y}}, & r = 1,\; x \ne y \\[1.5ex]
x, & x = y.
\end{cases}
\]
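A small Python sketch of this definition follows, with the limiting cases r = 0 and r = 1 handled explicitly (the identric case is evaluated in log space to avoid overflow for layer-sized arguments); the function name is ours.

```python
from math import exp, log

def gen_log_mean(P, C, r):
    """Generalized logarithmic mean L_r(P, C) of the layer endpoints x = P and y = P + C."""
    x, y = float(P), float(P + C)
    if x == y:
        return x
    if r == 0.0:                     # logarithmic mean
        return (x - y) / (log(x) - log(y))
    if r == 1.0:                     # identric mean, computed in log space
        return exp((x * log(x) - y * log(y)) / (x - y) - 1.0)
    return ((x ** r - y ** r) / (r * (x - y))) ** (1.0 / (r - 1.0))

# Quick check of two of the particular cases listed below:
P, C = 100e6, 50e6
print(gen_log_mean(P, C, -1.0), (P * (P + C)) ** 0.5)   # L_{-1} equals the geometric mean
print(gen_log_mean(P, C, 2.0), P + C / 2.0)             # L_2 equals the arithmetic mean
```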
The generalized logarithmic mean of order r was introduced by Galvani (1927). It is sometimes called the extended logarithmic mean and is often presented as a particular case of the two-parameter Stolarsky mean introduced by Stolarsky (1975).
Stolarsky (1975) showed that when x ≠ y, Lr(x, y − x) is strictly increasing with r. We have the following particular cases:
\[
\begin{aligned}
\lim_{r\to -\infty} L_r(P,C) &= P, \\
L_{-2}(P,C) &= \left(\frac{\text{GEO}(P,C)^4}{\text{ARI}(P,C)}\right)^{1/3}, \\
L_{-1}(P,C) &= \text{GEO}(P,C), \\
L_{0}(P,C) &= \text{LOG}(P,C), \\
L_{1/2}(P,C) &= \text{QUAD}(P,C) = \tfrac{1}{2}\big(\text{ARI}(P,C) + \text{GEO}(P,C)\big), \\
L_{1}(P,C) &= \text{IDENTRIC}(P,C), \\
L_{2}(P,C) &= \text{ARI}(P,C), \\
\lim_{r\to \infty} L_r(P,C) &= P + C.
\end{aligned}
\]
The generalized logarithmic mean has been studied extensively by mathematicians in connection with various inequalities. See, for example, Wang, Wang, and Chu (2012) and Qiu, Wang, and Chu (2011). It also has applications in convex function theory, economics, and physics. See, for example, Guo and Qi (2001), Pittenger (1985), Kahlig and Matkowski (1996), and Pólya and Szegő (1951). In this paper we will make a link with excess of loss layers priced with a European Pareto distribution.
Now let us make the following change of variable α = 1 − r. The generalized logarithmic mean of order 1 − α is the midpoint that provides an exact formula for the ROL when α > 0:
\[
\text{ROL}(P,C) = \lambda_A \left(\frac{L_{1-\alpha}(P,C)}{A}\right)^{-\alpha} =
\begin{cases}
\lambda_A \dfrac{P^{-\alpha}}{A^{-\alpha}}\, \dfrac{(1+c)^{1-\alpha}-1}{c(1-\alpha)} & \text{if } \alpha \ne 1 \\[1.5ex]
\lambda_A \dfrac{A}{P}\, \dfrac{\ln(1+c)}{c} & \text{if } \alpha = 1.
\end{cases}
\]
It therefore becomes logical to make the fit with midpoints being calculated according to the generalized logarithmic mean of order 1 − α with the estimated α parameter. A very limited number of iterations will be required to obtain the fit based on the generalized logarithmic mean, as exemplified in Table 8.1.
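The iteration can be sketched as follows, reusing the hypothetical fit_power_curve and gen_log_mean helpers from the earlier sketches; the layer data are illustrative, not those of the worked example.

```python
def fit_with_gen_log_mean(rols, layers, A, alpha0=2.0, n_iter=20, tol=1e-8, weights=None):
    """Iteratively refit the power curve with midpoints L_{1-alpha}(P_i, C_i)."""
    alpha = alpha0                      # alpha0 = 2 starts from the geometric-mean midpoint
    lambda_A = None
    for _ in range(n_iter):
        mids = [gen_log_mean(P, C, 1.0 - alpha) for (P, C) in layers]
        lambda_A, new_alpha = fit_power_curve(rols, mids, A, weights)
        if abs(new_alpha - alpha) < tol:
            alpha = new_alpha
            break
        alpha = new_alpha
    return lambda_A, alpha

# Illustrative layers (attachment point, limit) and observed rates on line:
layers = [(60e6, 40e6), (100e6, 50e6), (160e6, 60e6), (220e6, 90e6)]
rols = [0.30, 0.18, 0.10, 0.06]
print(fit_with_gen_log_mean(rols, layers, A=50e6))
```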
Table 8.2 provides the adjusted ROL with the generalized logarithmic mean as midpoint.
When comparing these results with the ones in Table 6.3, we cannot claim that the fit is visually better than with the other midpoints. But that is not the goal of using the generalized logarithmic mean. We will see in Section 10 that the issues encountered in Section 6 disappear when using the generalized logarithmic mean, which is why we advocate the generalized logarithmic mean in a parsimonious mathematical model.
It is also worth noting that Bobtcheff (2003) used the generalized logarithmic mean to fit property catastrophe market curves by using nonlinear regression. In her master’s thesis, Bobtcheff (2003) used the rather intuitive terminology Pareto layer mean to define the midpoint of the layer.
9. Negative exponential setting
In Section 5, we assumed that the ROLs behave according to a power curve. Another straightforward parametric assumption would be to assume a negative exponential behavior:
\[
\text{ROL}_i = a \exp\big(-b\, \text{MP}(P_i,C_i)\big).
\]
This case exactly corresponds to a severity distributed according to a negative exponential distribution with survival function
\[
S(x) = \exp(-x/\theta), \qquad x > 0.
\]
Let us now find the midpoint that matches the exact value of ROL in the negative exponential setting. The exact ROL is given by
\[
\text{ROL}(P,C) = \frac{\lambda}{C}\int_P^{P+C} \exp(-x/\theta)\, dx = \frac{\lambda\theta}{C}\big[\exp(-P/\theta) - \exp(-(P+C)/\theta)\big].
\]
The approximated value of ROL is given by
\[
\text{ROL}(P,C) \approx \lambda S\big(\text{MP}(P,C)\big) = \lambda \exp\big(-\text{MP}(P,C)/\theta\big).
\]
The midpoint (let us call it EXP(P,C)) matching the exact value of the ROL will be the solution of the equation
\[
\frac{\lambda\theta}{C}\big[\exp(-P/\theta) - \exp(-(P+C)/\theta)\big] = \lambda \exp\!\left(-\frac{\text{MP}(P,C)}{\theta}\right),
\]
and we find
\[
\text{EXP}(P,C) = P - \theta \ln\!\left[\frac{\theta}{C}\big(1 - \exp(-C/\theta)\big)\right].
\]
This midpoint corresponds to the case f(x) = exp(− x/θ) and g(x) = x in the Cauchy mean value theorem.
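A short sketch of this midpoint, together with a numerical check that it reproduces the exact ROL by construction; the λ and θ values are hypothetical.

```python
from math import exp, log

def exp_midpoint(P, C, theta):
    """Midpoint matching the exact ROL when severities are exponential with scale theta."""
    return P - theta * log(theta / C * (1.0 - exp(-C / theta)))

# Consistency check against the exact layer ROL (hypothetical figures):
lam, theta, P, C = 0.9, 40e6, 100e6, 50e6
exact = lam * theta / C * (exp(-P / theta) - exp(-(P + C) / theta))
approx = lam * exp(-exp_midpoint(P, C, theta) / theta)
print(exact, approx)   # the two values coincide by construction
```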
Table 9.1 shows the iterations to find the parameters.
Table 9.2 provides the adjusted ROL with the EXP midpoint.
The fit is visibly of lower quality than the power curve fit.
10. Numerical application continued
In this section we will obtain the adjustment for FROL because we believe it is more appropriate to work on fixed premiums. Tables 10.1 and 10.2 provide the adjusted FROLs with the generalized logarithmic, arithmetic, and geometric means. A has been taken to be equal to 50,000,000.
Assume that we have the following information for the next reinsurance renewal:

- The tariff will drop by 5%.
- The exposure will drop by 10%.
We will now extrapolate the price for various layers at the next renewal. We know that the exposure will drop by 10%. In order to take this into account, we will reduce the parameter A by 10%. This is easily justified by the fact that the parameter A is a scale parameter for the Pareto distribution. Results are shown in Table 10.3.
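The extrapolation could be coded along the following lines, reusing the hypothetical gen_log_mean helper from the Section 8 sketch; holding λ_A fixed while scaling A by the exposure factor is our reading of the scale-parameter argument above, and all figures are illustrative.

```python
def frol_next_renewal(lambda_A, alpha, A, P, C, exposure_factor=0.9):
    """Extrapolated FROL for layer C xs P after scaling the Pareto threshold A (assumption: lambda_A unchanged)."""
    A_new = exposure_factor * A
    mid = gen_log_mean(P, C, 1.0 - alpha)          # generalized logarithmic mean midpoint
    return lambda_A * (mid / A_new) ** (-alpha)

# Hypothetical fitted parameters and layer:
print(frol_next_renewal(lambda_A=0.9, alpha=1.25, A=50e6, P=100e6, C=50e6))
```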
We can make the following observations:
- For the new program in four layers, the sum of the prices of the four layers is equal to the price of the equivalent program in one layer with the generalized logarithmic mean. However, that is not the case with the arithmetic and geometric means, as shown in layers 4 and 6.
- The price of layer 6 is inconsistent when using the arithmetic mean: it is lower than the price of a layer with a smaller limit and the same attachment point.
- Layer 7 illustrates Formulas (6.1) and (6.5). For the geometric mean, the premium tends to infinity, which is unexpected with an α parameter larger than 1. On the other hand, the premium for the arithmetic mean case converges to 0, which is nonsense.
The above exercise demonstrates again that we should work with the generalized logarithmic mean. We will concentrate on this case for the rest of the numerical application.
Let us also show (Table 10.4) that two different layers may have the same midpoint and the same ROL (here FROL).
Thus, one can obviously construct as many layers as desired with the same midpoint and ROL.
We still have to find the ROL for the various layers, which requires accounting for the paid reinstatements as well as the known 5% price reduction. This is easily done by reducing FROL by 5% and using the adapted equation to compute LOL and ROL:
\[
\frac{\big(\text{LOL} + 5\%\sqrt{\text{LOL}\times(1-\text{LOL})}\big)\times 0.95}{0.9} = \text{ROL}\times(1+\text{LOL}) = \text{FROL}.
\]
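This inversion can be sketched as follows: given a layer's reduced FROL, recover LOL by bisection and then the up-front ROL; the function name and the example value are ours.

```python
from math import sqrt

def lol_and_rol_from_frol(frol, tol=1e-12):
    """Solve (LOL + 5% * sqrt(LOL*(1-LOL))) * 0.95 / 0.9 = FROL for LOL, then ROL = FROL / (1 + LOL)."""
    def gap(lol):
        return (lol + 0.05 * sqrt(lol * (1.0 - lol))) * 0.95 / 0.9 - frol
    lo, hi = 0.0, 1.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if gap(lo) * gap(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    lol = 0.5 * (lo + hi)
    rol = frol / (1.0 + lol)          # since ROL * (1 + LOL) = FROL
    return lol, rol

print(lol_and_rol_from_frol(0.20))    # hypothetical FROL, not a Table 10.5 value
```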
Table 10.5 provides FROL, LOL, and ROL.
We will finish the numerical application by showing the final results when FROL is adjusted (which we did above) and also when LOL or ROL is adjusted. Table 10.6 provides the results.
We observe that the results with the FROL and LOL adjustments are rather similar. However, there are material deviations when compared with the results obtained with the ROL adjustment. The latter method ignores the paid reinstatements and introduces errors, in particular for layers with a high ROL and therefore a larger impact from the paid reinstatements; layers 1 and 6 are good examples. Thus, although the ROL method has the advantage that no assumption is needed to deduce LOL and/or FROL, we advocate first deducing LOL and/or FROL and then performing the adjustment on these variables, because they are not impacted by the paid reinstatements.
11. Conclusion
This paper has shown how to perform a quick analysis of the pricing of property catastrophe excess of loss layers. The method is not too complex and can easily be implemented in a spreadsheet. If a power curve is chosen to fit the ROL as a function of a midpoint of the layers, we have argued that it is worth using its natural midpoint, which is the generalized logarithmic mean. Calculations are marginally more complex than with the commonly used arithmetic, geometric, or logarithmic means but will lead to consistent results. We have also shown how to take paid reinstatements into account in a simple way. The method can easily be used to compare the pricing of layers from one year to another, but also to build benchmark curves when using data from various insurance companies.
Acknowledgments
I would like to thank the editor and two anonymous reviewers, whose comments helped improve and clarify the paper.