1. Introduction
Off-balance and test correction factors are pervasive in class ratemaking and other circumstances where an overall change must be allocated to subsets of a book of business, but some of the subsets are not fully credible. For example, the off-balance is specifically discussed in Modlin and Werner (2016) and the test correction factor is mentioned in Daley (2009). The two types of factors work similarly, but test correction factors are special off-balance factors that also compensate for rate capping. In some situations, the correction factor is minor, but in other cases it is a substantial part of the pricing. When the off-balance is significant, using the best correction algorithms available will help compute the most effective rates.
As part of fine-tuning the off-balance algorithm, this paper presents formulas that eliminate some undesirable aspects of the current algorithm. Additionally, these new correction algorithms are based on the theories underlying each of the credibility formulas commonly used in ratemaking, so they are individually attuned to the mathematics underlying the corresponding approaches to credibility. Therefore, one would expect them to generate more accurate class (and similar) indications.
Of course, those statements must be supported in the body of this paper. As a starting point, the properties of the current algorithm are relevant. Usually, when individual class rates are made for a large group of classes, the resulting exposure-weighted average of the post-credibility rates does not match the raw overall average rate in the loss and exposure data. So, the rates (or loss costs) must be adjusted so that they do average to the (presumably fully credible or “almost”[1] fully credible) overall average rate. Actuaries currently rebalance the rates resulting from the credibility process by multiplying all those post-credibility class rates by either a common off-balance factor or a common test correction factor to match the overall average rate. Of note, when the data elements underlying some of the individual rates are also fully credible (or almost fully credible), this will result in rates that are clearly inconsistent with the underlying data. In effect, because those classes are so credible by themselves, one may be fairly certain that, after altering the rates with the off-balance factor, the resulting rates will be either too high or too low. So, the challenge is to develop methods that simultaneously avoid this unreasonable behavior and best fit the rationale for each credibility method.
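As a minimal sketch of the current practice described above, the following Python fragment applies a single off-balance factor to hypothetical post-credibility rates so that they rebalance to the overall mean; the figures and names are illustrative, not taken from the paper's tables.

```python
import numpy as np

# Hypothetical class data: exposures, raw loss rates L_i, and credibilities Z_i.
exposures = np.array([1000.0, 5000.0, 20000.0])
raw_rates = np.array([250.0, 210.0, 190.0])      # L_i = l_i / e_i
credibility = np.array([0.30, 0.70, 1.00])       # Z_i

overall_mean = np.average(raw_rates, weights=exposures)                  # M
post_cred = credibility * raw_rates + (1 - credibility) * overall_mean   # Z_i L_i + (1 - Z_i) M

# Current practice: one common off-balance factor applied to every post-credibility rate
# so that the exposure-weighted average is pushed back to the overall mean M.
off_balance_factor = overall_mean / np.average(post_cred, weights=exposures)
final_rates = post_cred * off_balance_factor
```

Note that the factor is applied even to the fully credible class (the third one above), which is exactly the behavior questioned in the examples that follow.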
To illustrate how significant this issue can be, the next section will present two examples that could occur in common actuarial practice where the current algorithm creates some very significant distortions in some of the rates. Following some background to support the notation and certain formulas, a corrected off-balance algorithm (“Method 1”) for limited fluctuation credibility is presented in the article. Next, a similar algorithm (“Method 2”) for best estimate credibility is presented. Note that the methods involve exactly the same formula. Therefore, given the same final credibilities (an unlikely event given the differences in their basic goals) and the same data, the results of the two methods are identical. Then, the special concerns and relevance of the methods in a test correction situation are discussed. In each section, examples are provided. Lastly, further support and extensions of Method 2 are provided in appendices.
2. The Current Off-Balance Factor Algorithm in Typical Class and Subline Ratemaking Schemes
Two examples of how the current off-balance factor works in practice are provided in this section. This should lend some perspective to issues present in the current algorithm. Each begins with some sample data and shows all the resulting calculations and final indications. Reviewing those, one may evaluate the effectiveness of the current algorithm.
The first case, in Table 1, involves the subsets of a small personal lines book of business (although the volatility might also match a medium-sized commercial book), where the complement of credibility is assigned to no rate change. The subsets are listed as coverages, but they could also be states, producers, major classes, or any other relevant split of the book of business into subsets. The credibility of the individual coverages ranges from modest to full credibility, presumably per a limited fluctuation credibility approach. Although this is not the type of workers compensation or general liability class ratemaking problem where one would see a correction factor in the actuarial literature, this situation often arises in practice. One may see that the credibility-adjusted changes for the individual coverages combine to produce a 6.9% increase. Consequently, they do not generate a sufficient overall change to match the 14.4% increase that the all-classes-combined dataset indicates is necessary. Therefore, standard ratemaking technique would introduce an additional loading factor, the off-balance factor, which would be multiplied by all the indicated "change ratios" (1.0 + the rate change) to obtain the final rate changes by coverage. As one may see, the fully credible +16.9% for Coverage E becomes +25.1% after off-balance correction. That would appear to be excessive compared to the fully credible +16.9% indication. The other rate indications appear to be somewhat plausible, but given the limited credibility it is difficult to assess them critically. Considering Coverage E, this approach appears to distort the rates in some classes (in this case, a coverage) considerably.
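For reference, the +25.1% figure can be traced directly from the aggregate changes quoted above: the implied off-balance factor is approximately

$$\frac{1 + 0.144}{1 + 0.069} \approx 1.070, \qquad \text{and} \qquad 1.169 \times 1.070 \approx 1.251.$$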
Next, calculations using the typical class rate ratemaking algorithm with an off-balance factor are shown below in Table 2. This tableau is an abstract version of what one might see in an individual company’s internal rate analysis or rate filing, or in rating/advisory organization ratemaking. So, in this case, the complement of credibility is assigned to the overall average rate.
A few aspects of the calculations should be disclosed, though. First, in this context, expense loadings create unnecessary calculations. Therefore, all the examples in this paper (except Tables 1 and 3) use "rate" to denote what are actually "loss costs". Second, generalized exposures are used, but the analysis could just as easily have used current-level earned premium and loss ratios rather than rates. Third, the full credibility standard should be mentioned. Of course, given freedom to choose any confidence level and accuracy level, the full credibility standard may take any value. But, in order to use the limited fluctuation credibility process, some standard must be chosen. Since it is already present in the literature, a full credibility standard of 683 expected claims was used to generate the credibilities.
The general setup of the table is as follows. The top portion of the table ("Main Calculations for Class Rates") has the main analysis. The bottom portion ("Reference Values") has computations of various overall values that are needed for the main analysis. The table is annotated with the calculation formulas that describe each column. In order to compute the credibility values, the average severity is needed. The severity value used in the table, $200, is an additional input beyond each class's data. Of course, actuaries compute average severities fairly often, so computing this should not create a burden.
Note that with this type of data and complement of credibility, a very large off-balance may be required. In this example it is 1.184. Consequently, the off-balance appears to generate clearly excessive rates for nearly fully credible classes in addition to fully credible classes. The schemes in Tables 1 and 2 are similar, and as the examples suggest, both generate high corrections. Corrections this substantial may not arise in all ratemaking situations, but the examples illustrate how they can come about. So, there are situations where the off-balance factor is both very significant and very prone to distort fully credible rates.
A few things may be said about when such a large off-balance may arise. It appears that when the statistic receiving the complement of credibility is very similar to the pre-credibility indications, the off-balance will be smaller (in either direction). Also, when the credibility of each class or other rating element is high, the off-balance is smaller. Further, the situation is amplified when the smaller classes generally have lower indications than the larger classes. So, the amount of concern for the outsized impact of the off-balance on the high-credibility classes varies from ratemaking situation to ratemaking situation. However, as the examples in Tables 1 and 2 illustrate, the impact can be quite significant.
3. Background: Notation and an Optimization Technique
Later sections deal with alternate views of the off-balance. To make the discussion in those sections easier to follow, this section contains some notation for various quantities used in rate calculations, as well as a reminder about the mathematics associated with finding values that simultaneously provide a best estimate and fulfill certain limitations (a “constraint”).
The notation is:
$n$ = the number of classes (more generally, subsets, etc.) included in the line of business or program;
$i$ = index running through the individual classes;
$e_i$ = exposures for class $i$;
$l_i$ = historical losses for class $i$;
$L_i = l_i/e_i$ = unadjusted, pre-credibility loss rate for class $i$;
$Z_i$ = credibility for class $i$;
$M = \sum_{i=1}^n l_i \big/ \sum_{i=1}^n e_i$ = overall loss rate (grand mean), so that $Z_iL_i+(1-Z_i)M$ is the post-credibility loss rate for class $i$;
$T_i$ = off-balance add-in for class $i$;
$r_i = Z_iL_i+(1-Z_i)M+T_i$ = final off-balance-corrected loss rate for class $i$;
$\mu_i$ = true (but unknown) underlying loss rate for class $i$ (the target of the ratemaking process); and
$C$ = multiplier, constant across all classes, to be used in the off-balance and, later, test correction.
Note that the off-balance or test correction is expressed as an add-in, rather than its common form as a factor. This will facilitate later computations (which, in all the cases in this paper, do eventually result in factors).
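As a brief illustration of the notation, the sketch below computes the post-credibility rates and applies a vector of add-ins $T_i$; the function and variable names are illustrative, not the paper's.

```python
import numpy as np

def post_credibility_rates(losses, exposures, credibility):
    """Return L_i, M, and the post-credibility rates Z_i L_i + (1 - Z_i) M."""
    raw = losses / exposures                       # L_i = l_i / e_i
    grand_mean = losses.sum() / exposures.sum()    # M
    post_cred = credibility * raw + (1 - credibility) * grand_mean
    return raw, grand_mean, post_cred

def corrected_rates(losses, exposures, credibility, add_ins):
    """r_i = Z_i L_i + (1 - Z_i) M + T_i, with the correction expressed as an add-in."""
    _, _, post_cred = post_credibility_rates(losses, exposures, credibility)
    return post_cred + add_ins
```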
Notation associated with a specific method, the "Lagrange multiplier" method from calculus, is also needed. One method for calculating the off-balance add-ins requires the computation of the best estimate when the $T_i$'s are limited to values which fulfill a constraint. For example, a method may target that the $r_i$'s generated by the $T_i$'s minimize error in some way. But all off-balance approaches also require that the constraint (that the average rate resulting from the process matches the average rate in the data) be satisfied. The notation in this case uses an expression, $S(T_1,\dots,T_n)$, whose minimum sets the criterion (for example, "least squared error") for the best estimate of the true underlying means (the $\mu_i$'s). However, that minimum must only consider the values that fulfill the constraint. So, the calculations also require a Lagrange multiplier, $\lambda$, times a constraint term requiring that the off-balance-corrected rates weight to the overall mean $M$. In effect, the combined function, defined here as "$Q$", would look like

$$Q(T_1,\dots,T_n) = S(T_1,\dots,T_n) - \lambda\left[\frac{\sum_{i=1}^n e_i\left(Z_iL_i+(1-Z_i)M+T_i\right)}{\sum_{i=1}^n e_i}-M\right].$$
Then, one must compute the corresponding partial derivatives with respect to each of the $T_i$'s (and $\lambda$ as well). The mathematics of Lagrange multipliers then dictates that the values of the $T_i$'s and $\lambda$ where all those derivatives are simultaneously zero will yield the minimum possible value of $S$ subject to the constraint.

4. Method 1: Leave the Credible Data Alone—Spreading the Off-Balance Correction Across the Complement of Credibility
The first step in establishing an off-balance algorithm is to carefully articulate the goal of the algorithm. Among all possible ways to allocate the overall off-balance, this establishes a way to determine which one is best. As may be seen in the remainder of the paper, different off-balance approaches result from different definitions of why an off-balance approach is “best”. Once each such target is set, the problem may be expressed in mathematical terms and the optimum algorithm may be calculated.
This case begins with the theory of "square root" or "limited fluctuation" credibility (and similar credibility calculations). In that methodology, the experience data is the focus of the calculation, and the statistic receiving the complement of credibility is essentially ballast. So, from that viewpoint, it makes sense to decide that the $Z_iL_i$'s should be unaffected by the correction algorithm. Rather, the aggregate off-balance

$$M\sum_{i=1}^n e_i-\sum_{i=1}^n e_i\left[Z_iL_i+(1-Z_i)M\right]$$

should be pro-rated among the $(1-Z_i)M$'s[2]. This is still a multiplier approach. But here the multiplicative factor is applied to a different "basis" for the correction. In this case the factor is multiplied by the complement of credibility-based portion of each rate, or $(1-Z_i)M$. This approach preserves the basic credibility process. Further, since the large, fully credible or almost fully credible classes have very small complements of credibility, the correction algorithm will barely affect them. As a result, this off-balance correction algorithm eliminates one of the main problems with the current approach.

Considering the details underlying limited fluctuation credibility, one may make a stronger statement about this method. Specifically, this method creates the most compliance with the underlying theory of limited fluctuation credibility. Therefore, it is optimal. The reasons for that begin with the basic theory of limited fluctuation credibility. The original theory held that one was limiting the probability that randomness in the loss data would cause a spurious increase or decrease beyond some chosen threshold (limiting, say, the probability of a spurious 15% increase to under 5%). The current usage seems to focus instead on spurious differences from some benchmark (the "$M$" in Table 2). However, the current off-balance factor, when it is above unity, magnifies the size of all the rates, including any rates that are close to the threshold. Thus, it enhances the probability that the criteria underlying the credibility will not be fulfilled. Since this "Method 1" adds to the benchmark or complement term and not to the random loss rate, it could still swing a rate beyond the threshold. However, it does not disproportionately affect the rates with larger changes that are somewhat more prone to exceed the threshold.

Other issues must be considered as well. Just like most of the insurance premium, considerations of equity would require that the off-balance be allocated using expected losses. Since the only available unbiased estimators of $\mu_i$ are $L_i$ and $M$, any allocation of the off-balance that tracks evenly across expected losses must use the same linear combination of the $L_i$'s and $M$ for each of the $T_i$'s. Since the $L_i$'s are more volatile, assigning all of the off-balance to $M$ would clearly be optimal. Thus, it is an optimal method. Further, given that the only unbiased estimators for each class rate are the raw data rate and the overall average rate, and that the credibility is to be maintained (eliminating the possibility of varying the mix of the two unbiased estimators from class to class[3]), this method generates the most plausible rates given the data and credibility values.
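The allocation just described can be written down in a few lines. The following is a minimal sketch, assuming the common multiplier on the $(1-Z_i)M$ basis is solved in closed form from the balance condition; names are illustrative.

```python
import numpy as np

def method1_rates(losses, exposures, credibility):
    """Method 1 sketch: leave Z_i L_i untouched and spread the entire off-balance
    across the complement-of-credibility terms (1 - Z_i) M via a single multiplier."""
    raw = losses / exposures                         # L_i
    grand_mean = losses.sum() / exposures.sum()      # M
    data_part = credibility * raw                    # Z_i L_i
    basis = (1 - credibility) * grand_mean           # (1 - Z_i) M

    # Choose the common multiplier so the exposure-weighted average rebalances to M:
    #   sum_i e_i * (Z_i L_i + multiplier * (1 - Z_i) M) = M * sum_i e_i
    multiplier = (grand_mean * exposures.sum() - (exposures * data_part).sum()) / (
        exposures * basis).sum()
    return data_part + multiplier * basis
```

Because the basis is zero for a fully credible class, such a class keeps its raw rate $L_i$ no matter how large the multiplier becomes, which is the behavior Method 1 is designed to produce.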
For illustration, the first two examples in section 2 may be reworked with the off-balance spread across the complement of credibility rather than the entire rate. Both the final indications resulting from applying the off-balance to the complement of credibility and the values from Table 1 in Section 2 that they replace are shown in Table 3. So the final results may be readily compared.
As one may see, the rate for the fully credible coverage E is now proper, and the smaller classes bear most of the weight of the off-balance they are primarily responsible for. The largest effects of the credibility process are on coverages A and B, with their pre-credibility 200% and 100% loss ratios and lower credibility. However, the resulting indications for those coverages are still very reasonable in comparison to the raw data. The combined credibility and off-balance computations in Table 3 did not affect the indications for those classes as much as the process in Table 1 reduced them. However, spreading the off-balance without using Coverage E meant that Coverages C and D were more affected than in Table 1. From a standpoint of overall fairness, though, this appears to be much more equitable than the present approach.
Sample calculations using this method with the by-class data from Table 2 are contained in Table 4. The table is annotated with the calculation formulas as well as the mathematical formulas that describe each column. As with Table 3, the results of Table 2 are shown in column (12) for comparison.
As one might expect, this generates much improved rates for classes 14 and 15. The class 13 rate is also much improved. By chance, there is little difference in the class 12 rate. Lastly, the rates for the other classes are generally increased, in keeping with their loss experience generally being higher than the complement of credibility. So, once again, applying the correction to the complement of credibility term appears to generate a better result than the present system.
Thus, the strengths of this approach are its mitigation of off-balance effects on fully credible and nearly fully credible classes, its focus of the correction on the classes generating most of the off-balance, and the breadth of circumstances in which it may be used. When the off-balance matters, this can improve the quality of the rates.
5. Method 2: An Approach for Best Estimate Credibility: Minimizing the Expected Squared Error After Off-Balance
As discussed in Section 4, best estimates are driven by the mathematical definition chosen for "best"[4]. This section relates to the $e/(e+K)$ ($e_i/(e_i+K)$ in the notation of this paper) best estimate credibility developed in Bailey (1945), where $K$ is the ratio of the expected process variance of the losses in a single unit of exposure divided by the variance of the hypothetical (or possible) values of the mean loss rate. Most actuaries would say that the formula and formulas derived from it are the most commonly used best estimate credibility formulas. Hence, the assumptions and approach underlying that formula are the foundation of the off-balance approach in this section.
Therefore, the assumptions and definition of the best estimate in this section will match those of that article. The first assumption is that the overall mean rate in the data, $M$, contains so little process or parameter variance that it can be functionally treated as an exact measure of the overall average expected loss rate. The remainder of the Bailey (1945) assumptions are: that each true but unknown class mean $\mu_i$ is an independent sample from a common distribution with mean equal to the overall mean $M$ and parameter variance $\sigma^2$; and that all individual exposures have a common process variance of $s^2$ per exposure, independent from everything else and from all other exposures. Thus, each raw loss rate $L_i$ is affected by an "observation" error with process variance $s^2/e_i$ that interferes with predicting the underlying but unknown true loss rates (the $\mu_i$'s). Then, to estimate each $\mu_i$ using $L_i$ and $M$, the Bailey formula chooses each credibility to be $Z_i = e_i/(e_i+K)$, where $K = s^2/\sigma^2$, to minimize[5] each expected squared error term

$$E\left[\left\{Z_iL_i+(1-Z_i)M-\mu_i\right\}^2 \,\middle|\, \sigma^2, s^2\right].$$
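As a brief sketch (with illustrative names), the credibility formula just described can be computed directly once the variance structure has been estimated:

```python
import numpy as np

def best_estimate_credibility(exposures, process_var_per_exposure, parameter_var):
    """Z_i = e_i / (e_i + K), with K = s^2 / sigma^2 as described above."""
    k = process_var_per_exposure / parameter_var     # K
    exposures = np.asarray(exposures, dtype=float)   # e_i
    return exposures / (exposures + k)
```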
The $T_i$'s should fit within that framework. Of course, the left-hand side inside the brackets is only an estimate of $\mu_i$. Since the post-credibility rate is the optimum estimate of the rate for each class in isolation (given that $M$ is the true underlying overall mean), the estimates are point-by-point optimal when $T_i = 0$. However, in that scenario the rates will not average to the overall mean. Thus, if the rates are to match the overall mean, one must initially rephrase the expression to

$$E\left[\left\{Z_iL_i+(1-Z_i)M+T_i-\mu_i\right\}^2\right]=\text{minimum (for each } i),$$
given the overall constraint, or

$$\sum_{i=1}^n E\left[\left\{Z_iL_i+(1-Z_i)M+T_i-\mu_i\right\}^2\right]=\text{minimum},$$

subject of course to the rates averaging to the overall mean,

$$\frac{\sum_{i=1}^n e_i\left(Z_iL_i+(1-Z_i)M+T_i\right)}{\sum_{i=1}^n e_i}=M.$$
Under those assumptions and criteria, Appendix A also shows that all of the $T_i$'s should be in proportion to the "same" (similar formula, but with different underlying credibilities) $(1-Z_i)M$'s as in Method 1. Thus, up to differences in the underlying credibilities, Method 1 and Method 2 are the same. That illustrates how robust the formula is.

As with the Method 1 analysis, this should be reviewed, in this case conceptually, to see whether it generates more reasonable results than the current method. As expected, this method eliminates the problems with the current off-balance factor method. Just as with Method 1, as the credibility of a class approaches unity, use of $(1-Z_i)M$[6] as a basis will prevent the large changes to the nearly fully credible post-credibility rates. Further, it also assigns more of the off-balance to the classes that generate more off-balance, without bypassing the credibility process. Considering the improvements in both accuracy and reasonableness this approach offers, this can generate significant improvement in the accuracy of the final rates.

Importantly, this method is "scalable". Any needed amount of correction can be obtained by adjusting the multiplier $C$ without changing the basis for off-balance correction. If more correction than that needed to balance to $M$ is required (for example, to compensate for capping), then one need only increase the multiplier $C$. It is, however, relevant to ask whether or not increasing the multiplier in Method 2 additionally to offset capping (or other processes) will still generate optimal results. As discussed in Appendix A, the basis $(1-Z_i)M$, without any changes, is still an optimal basis when the amount of correction changes. So, it provides a scalable formula for the optimum correction values. Thus, when extra correction is needed, all the off-balance formulas are flexible enough to generate appropriate results.

Table 5 illustrates the algorithm flow from raw data to final rates in a step-by-step fashion. The table is annotated with the calculation formulas as well as the mathematical formulas that describe each column. Except as noted below, all the calculations from raw data to conclusion are shown. As one may see, the calculations are not overly difficult. The calculations are split among those that determine the underlying parameters needed for the credibility process in the main calculations (the "First Step", with values denoted by Roman numerals); those that comprise the main part of the analysis (the "Main Step", with columns denoted by numbers); and those in a "Reference Values" table containing the overall totals and other overall quantities (denoted by letters) needed for the computations in the Main Step.
Various formulas for estimating the variance structure $\sigma^2$ and $s^2$ have been published. The example uses the nonparametric variance structure estimation method for Bühlmann-Straub data documented in Dean (2005). The calculations begin with the $l_i$'s and $e_i$'s described in the paper[7]. Due to the number of required computations, the underlying variance estimate is presumed to have been computed outside the table; its value is consistent with the data in the main table, but the computations are not shown. Details of the calculation formulas for the variance structure are presented in Appendix B. However, reviewing those formulas, $\sigma^2$, $s^2$, and $K$ should not be overly difficult to compute with modern tools.

Since a comparison of final rates using Method 1 to those obtained under the current algorithm was already included in Section 4, this section does not include a similar comparison. Further, the example of the current method only used limited fluctuation credibility, so no comparison is readily available within this paper[8]. One may note, though, that since there is no full credibility under best estimate credibility, the values for the almost fully credible classes may experience some minor adjustments from the off-balance.
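For readers who want to reproduce a variance structure of this kind, the following is a hedged sketch of the familiar nonparametric Bühlmann-Straub estimators. It assumes several years of loss rates and exposures per class (data not shown in the paper's tables), and the names are illustrative rather than the paper's.

```python
import numpy as np

def buhlmann_straub_variances(loss_rates, exposures):
    """Nonparametric estimates of s^2 (process variance per exposure) and
    sigma^2 (variance of the hypothetical class means).

    loss_rates, exposures: 2-D arrays, one row per class, one column per year.
    """
    loss_rates = np.asarray(loss_rates, dtype=float)
    exposures = np.asarray(exposures, dtype=float)
    n_classes, n_years = loss_rates.shape

    class_exposure = exposures.sum(axis=1)                              # e_i
    total_exposure = class_exposure.sum()
    class_mean = (exposures * loss_rates).sum(axis=1) / class_exposure  # exposure-weighted L_i
    grand_mean = (class_exposure * class_mean).sum() / total_exposure   # M

    # Within-class (process) variance per unit of exposure.
    s2 = (exposures * (loss_rates - class_mean[:, None]) ** 2).sum() / (
        n_classes * (n_years - 1))

    # Between-class (parameter) variance, adjusted for the process noise.
    sigma2 = ((class_exposure * (class_mean - grand_mean) ** 2).sum()
              - (n_classes - 1) * s2) / (
        total_exposure - (class_exposure ** 2).sum() / total_exposure)
    return s2, sigma2
```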
As a whole, this section explains why the $(1-Z_i)M$'s make the best basis and shows complete calculations to create rates using best estimate credibility and this best estimate off-balance algorithm. In summary, this off-balance approach, derived from best estimate credibility, provides a very effective off-balance correction method.

6. Enhancements to the Best Estimate Formula
Additional results and information are provided in other appendices. Appendix C shows that the formula above still works when the process error variance per exposure varies from class to class. It also provides a similar, but not quite identical, formula for the basis when the parameter variance differs from class to class. One may conclude that, as long as the best estimates are defined in terms of minimum expected squared error, and also when something other than a best estimate underlies the rates, the complement of credibility term[9] usually forms the best basis for distributing the off-balance. This very general result, involving quantities that are already in the rate computation, should simplify any conversion to distributing the off-balance across the complement of credibility terms.

Two other appendices with relevant topics are included as well. Appendix D explains how, in the absence of capping, replacing $M$ with the Bühlmann credibility-weighted mean can eliminate the need for off-balance correction in the best estimate scenario. Considering the Central Limit Theorem, Appendix E shows that when the various probability distributions are approximately normal, the formula of Section 5 approximates the maximum likelihood estimate. Considering the amount of losses in many ratemaking scenarios, that would suggest that one may often expect an approximate maximum likelihood estimate when this approach is used with best estimate credibility.

7. Comparison of the Two Methods
One cannot help but notice that the formula for distributing the off-balance is exactly the same in Method 1 and Method 2. Of course, the credibilities used in that formula could be expected to vary considerably between limited fluctuation credibility and best estimate credibility. However, the algorithm employed following the calculation of the initial post-credibility class rates is exactly the same.
Also, while scalability was discussed for Method 2, it is apparent that Method 1 is scalable as well. Its basis is the set of complement of credibility portions of the rates, and clearly the multiplier may be adjusted as needed. Even the current algorithm, where the basis is the set of post-credibility rates, allows the basis to be multiplied by whatever off-balance factor is needed to achieve the required overall rate level.
Overall, Method 1 and Method 2 are very similar, suggesting that the algorithm in this article may be used in a broad variety of situations.
8. Testing the Test Correction Factor: Off-Balance Corrections for Credibility and Capping Combined
When capping rates for individual classes, the off-balance process becomes a test correction process. For example, one might begin with a class ratemaking system that generates a group of rates, but then specify that no rate receive more than a 25 percent increase or a 20 percent decrease. If more rates[10] are capped from above than from below, then the average rate after the capping will be lower than before the capping, even when the initial off-balance before capping is allocated using the $(1-Z_i)M$'s. Present practice is to successively increase or decrease the multiplier until the needed overall average rate is achieved.
So, in this test correction algorithm, the first step is to recompute the aggregate off-balance for test correction. The new aggregate off-balance is equal to the total rate dollars in the overall mean, less the aggregate rate dollars in the uncapped classes, less again the aggregate rate dollars in the capped classes:

$$M\sum_{i=1}^n e_i-\sum_{\text{uncapped classes }j}\left\{e_j\times\left[Z_jL_j+(1-Z_j)M\right]\right\}-\sum_{\text{capped classes }k}\left\{e_k\times(\text{capped rate for class }k)\right\}.\tag{8.1}$$
Since the algorithm is scalable under both the Method 1 and Method 2 assumptions, that balance would be pro-rated according to the $(1-Z_i)M$'s above. That would adjust the common multiplier $C$, which would then be applied to the pre-capping rates of both sets of classes. Then the caps would be applied to the new rates. It is likely that the changes in rates this generates will place some rates formerly outside the caps inside them (or vice versa). So, a different group of classes may be capped after this step. When that occurs, the value in equation (8.1) will change, and the needed overall test correction cannot be achieved without changing the $C$ in

$$M\sum_{i=1}^n e_i-\sum_{\text{uncapped classes }j}\left\{e_j\times\left[Z_jL_j+C(1-Z_j)M\right]\right\}-\sum_{\text{capped classes }k}\left\{e_k\times(\text{capped rate for class }k)\right\}.$$
Consequently, additional iterations of the test correction algorithm are often needed. Since each test correction has the potential to result in a need to cap additional rates, or to move previously capped class rates away from the capping limits, the process often flows through a series of iterations before producing a final result. So, one effectively spreads the remaining aggregate off-balance correction among the classes whose rates are not capped, possibly with partial effects on the capped rates. In limited fluctuation and best estimate credibility alike, the basis for correction remains $(1-Z_i)M$ through all the iterations.

Since one must increase or decrease the multiplier applied to the basis values (again, the "$C$" in equation (A.11)) in order to accommodate capping, this requires flexibility that the Bühlmann best estimate complement of credibility does not have. Thus, the off-balance approach of this paper is more robust.

Tables 6 and 7 illustrate two steps of test correction using the best estimate data from Table 5 and caps of ±15%. The internal process within each iteration proceeds as follows. In the part of the chart titled "First Step", the loss cost rates resulting from the last test correction are compared against the caps, and the presently capped rates are identified. In the "Last Step", the initial values are adjusted by applying additional test correction to the pre-capping rates from the previous iteration (column 10 in Table 5 is input as the previous iteration for Table 6; then column 16 of Table 6 is the previous iteration used in Table 7). Key reference values applicable to all classes (including a new test correction factor) are computed and shown at the bottom of each table[11]. Following through the calculations, Tables 6 and 7 fully illustrate the test correction process.
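The iteration just described can be prototyped compactly. The following is a minimal sketch of the loop, assuming caps are applied to the change from a vector of current rates and that the common multiplier on the $(1-Z_i)M$ basis is nudged until the capped rates rebalance to $M$; it is not the exact procedure behind Tables 6 and 7, and all names are illustrative.

```python
import numpy as np

def test_correction(losses, exposures, credibility, current_rates,
                    cap_up=0.25, cap_down=0.20, max_iter=50, tol=1e-10):
    """Iterative test correction: cap the rates, then adjust the common multiplier C
    on the (1 - Z_i) M basis until the exposure-weighted average matches M."""
    raw = losses / exposures                        # L_i
    grand_mean = losses.sum() / exposures.sum()     # M
    data_part = credibility * raw                   # Z_i L_i
    basis = (1 - credibility) * grand_mean          # (1 - Z_i) M
    lower = current_rates * (1 - cap_down)          # maximum allowed decrease
    upper = current_rates * (1 + cap_up)            # maximum allowed increase

    multiplier = 1.0                                # C = 1 means no correction yet
    for _ in range(max_iter):
        rates = np.clip(data_part + multiplier * basis, lower, upper)
        shortfall = grand_mean * exposures.sum() - (exposures * rates).sum()
        if abs(shortfall) < tol:
            break
        free = (rates > lower) & (rates < upper)    # classes not pinned at a cap
        free_weight = (exposures[free] * basis[free]).sum()
        if free_weight == 0:                        # caps leave no room to rebalance
            break
        multiplier += shortfall / free_weight
    return np.clip(data_part + multiplier * basis, lower, upper)
```

Each pass re-identifies which classes sit at a cap before nudging the multiplier again, mirroring the re-capping step described above.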
9. Summary
A choice of how off-balances are split among classes establishes an approach to off-balance correction. However, it is important that the off-balance approach be coordinated with the credibility process that generated the off-balance. Two different goals for off-balance correction were presented. The first approach, leaving the credible data alone when using limited fluctuation credibility, was seen to create effective results. Moreover, that approach was also robust enough to use in virtually any multi-class ratemaking scenario involving a credibility-induced off-balance. The second approach, the optimum off-balance arising from minimizing the expected squared error with respect to best estimate credibility, is by definition the best companion approach for best estimate credibility. As this paper shows, ultimately the limited fluctuation and best estimate credibility approaches use exactly the same allocation formula (multiplying a constant $C$ by the complement of credibility terms $(1-Z_i)M$). That formula should significantly improve the off-balance correction algorithm.

This formula eliminates the disparity associated with correcting classes that are already fully (or almost fully) credible. Further, it tends to spread the off-balance more heavily among the classes that generate the off-balance. Since different sets of assumptions resulted in the same algorithm, it also has broad applicability for best estimate credibility situations. Last, the paper showed that the algorithm extends easily to accommodate the capping of class rates under both sets of assumptions. So, the use of the $(1-Z_i)M$'s, or a closely related basis, for distributing the off-balance appears to work in a very wide variety of situations.

It is hoped that this method will lead to much more rational and optimal class rates. When the off-balance correction is material, absorbing the off-balance in the complement of credibility term appears to offer significant opportunities to improve both the accuracy and reasonableness of the resulting indications.