There are many interesting historical facts in this paper. I would like to add the following. The concept of cumulants, which plays a prominent role in the paper, was introduced in 1899 by Thorvald Nicolai Thiele (1838-1910) under the name half-invariants, several decades before the rediscovery by the eminent statistician R. A. Fisher. Thiele was a Danish actuary, astronomer, mathematician, and statistician. He was a cofounder of the Danish insurance company Hafnia and was its Mathematical Director (actuary). Also, he was the founding president of the Danish Actuarial Society, a corresponding member of the British Institute of Actuaries, and a member of the Board of the Permanent Committee of the International Congresses of Actuaries. The net premium reserve differential equation,

$$\frac{d}{dt}\,{}_tV = P + {}_tV\,\delta - (b - {}_tV)\,\mu_{x+t},$$

in the theory of life contingencies is due to him, although it was not published until after his death. More information about Thiele can be found in the *Encyclopedia of Actuarial Science* article by Norberg (2004) or on the *MacTutor History of Mathematics Archive* website (Robertson and O’Connor, n.d.).
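As a small illustration of Thiele's equation, the reserve can be integrated numerically with an Euler scheme. Everything in the sketch below is an assumption of mine for illustration only: the Gompertz force of mortality, the force of interest, the premium rate, and the benefit are made-up values, not taken from the paper or from Thiele.

```python
import math

# Illustrative (assumed) parameters: force of interest delta, continuous
# premium rate P, death benefit b, issue age x.
delta, P, b, x = 0.04, 0.02, 1.0, 40.0

def mu(t):
    """Assumed Gompertz force of mortality at attained age x + t."""
    return 0.0001 * math.exp(0.09 * (x + t))

# Euler integration of Thiele's equation
#   d/dt V_t = P + V_t * delta - (b - V_t) * mu(x + t),
# starting from V_0 = 0.
V, h, years = 0.0, 1.0 / 1200.0, 40
for step in range(years * 1200):
    V += h * (P + V * delta - (b - V) * mu(step * h))
```

With these made-up parameters the reserve builds up from zero as premiums accumulate and is later eroded by mortality; a real implementation would use a proper ODE solver and calibrated mortality.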

# Appendix B

Let me present a relatively simple way to derive the formulas in Appendix B. The cumulant-generating function of a random variable $X$ is

$$\Psi_X(t) := \ln[M_X(t)].$$

Because the *n*th cumulant $\kappa_n$ of $X$ is the *n*th derivative of $\Psi_X$ at $0$, $\kappa_n$ is the coefficient of $t^n/n!$ in the Maclaurin series of $\Psi_X(t)$. It turns out that it is not necessary to do “tedious” differentiations to determine this Maclaurin series, because we have the logarithm series

$$\ln(1+z) = z - \frac{z^2}{2} + \frac{z^3}{3} - \frac{z^4}{4} \pm \cdots.$$

Observe that, with $\mu := \mathrm{E}[X]$,

$$M_X(t) = \mathrm{E}[e^{tX}] = e^{t\mu}\,\mathrm{E}[e^{t(X-\mu)}],$$

and that, with $\mu_j := \mathrm{E}[(X-\mu)^j]$ denoting the $j$th central moment of $X$,

$$\mathrm{E}[e^{t(X-\mu)}] = \mathrm{E}\!\left[\sum_{j\ge 0} \frac{[t(X-\mu)]^j}{j!}\right] = \sum_{j\ge 0} \mathrm{E}\!\left[\frac{[t(X-\mu)]^j}{j!}\right] = 1 + 0 + \sum_{j\ge 2} \mu_j \frac{t^j}{j!}.$$

Hence, by using the logarithm series and some school algebra,

$$\begin{aligned}
\Psi_X(t) &= \mu t + \ln\!\left(1 + \sum_{j\ge 2} \mu_j \frac{t^j}{j!}\right)\\
&= \mu t + \sum_{j\ge 2} \mu_j \frac{t^j}{j!} - \frac{1}{2}\left(\sum_{j\ge 2} \mu_j \frac{t^j}{j!}\right)^{\!2} \pm \cdots\\
&= \mu t + \mu_2 \frac{t^2}{2!} + \mu_3 \frac{t^3}{3!} + \left(\mu_4 - 3\mu_2^2\right)\frac{t^4}{4!} + \left(\mu_5 - 10\mu_2\mu_3\right)\frac{t^5}{5!} + \cdots.
\end{aligned}$$
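The “school algebra” can also be checked mechanically. The following sketch (my own, using the sympy library; the paper itself involves no computing) expands $\mu t + \ln(1 + \sum_{j\ge 2}\mu_j t^j/j!)$ and recovers the stated coefficients of $t^4/4!$ and $t^5/5!$:

```python
import sympy as sp

t, mu, mu2, mu3, mu4, mu5 = sp.symbols('t mu mu2 mu3 mu4 mu5')

# Central-moment part of the mgf expansion: 1 + sum_{j>=2} mu_j t^j / j!
S = mu2*t**2/2 + mu3*t**3/6 + mu4*t**4/24 + mu5*t**5/120

# Cumulant-generating function Psi_X(t) = mu*t + ln(1 + S), expanded to order t^5
psi = sp.expand(sp.series(mu*t + sp.log(1 + S), t, 0, 6).removeO())

# Cumulants are coefficients of t^j/j! in the Maclaurin series
kappa4 = sp.expand(psi.coeff(t, 4) * sp.factorial(4))
kappa5 = sp.expand(psi.coeff(t, 5) * sp.factorial(5))

assert sp.simplify(kappa4 - (mu4 - 3*mu2**2)) == 0
assert sp.simplify(kappa5 - (mu5 - 10*mu2*mu3)) == 0
```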

# Section 2

Let me derive the cumulant formulas in the right column of page 175 in a more general context. From the left column of page 175, one can see that the probability density function (pdf) of the log-gamma random variable $Y$ has the form

$$f_Y(u) = e^{\alpha u + g(u) + h(\alpha)},$$

which means that $Y$ belongs to a *linear exponential family*, a concept that can be found in the actuarial textbook by Klugman, Panjer, and Willmot (2019). Because a pdf integrates to 1, we have

$$\int_{-\infty}^{\infty} e^{\alpha u + g(u)}\,du = e^{-h(\alpha)}.$$

Thus, the moment-generating function of $Y$ is

$$M_Y(t) = \int_{-\infty}^{\infty} e^{(t+\alpha)u + g(u) + h(\alpha)}\,du = e^{-h(t+\alpha) + h(\alpha)},$$

which implies that the cumulant-generating function is

$$\Psi_Y(t) = h(\alpha) - h(t+\alpha) = h(\alpha) - \sum_{j\ge 0} h^{(j)}(\alpha)\frac{t^j}{j!} = -\sum_{j\ge 1} h^{(j)}(\alpha)\frac{t^j}{j!}.$$

Hence, for the linear exponential family, we have, for $j \ge 1$,

$$\kappa_j = -h^{(j)}(\alpha);$$

also,

$$\kappa_{j+1} = \frac{\partial}{\partial\alpha}\,\kappa_j.$$

The actuarial textbook by Kaas et al. (2008, 300) calls $-h$ the *cumulant function*. For the random variable $Y$ in Section 2,

$$-h(\alpha) = \ln\Gamma(\alpha) + \alpha\ln\theta.$$
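These formulas are easy to check numerically. The sketch below (my own, using numpy and scipy; the shape and scale values are arbitrary) differentiates $\ln\Gamma(\alpha) + \alpha\ln\theta$ to get $\kappa_1 = \psi(\alpha) + \ln\theta$ and $\kappa_2 = \psi'(\alpha)$, and compares them with sample moments of $\ln X$ for a gamma random variable $X$:

```python
import numpy as np
from scipy.special import polygamma

alpha, theta = 2.7, 1.6      # arbitrary shape and scale for the check

# kappa_j = d^j/d alpha^j [ln Gamma(alpha) + alpha ln theta]:
k1 = polygamma(0, alpha) + np.log(theta)   # psi(alpha) + ln(theta)
k2 = polygamma(1, alpha)                   # psi'(alpha)

# Monte Carlo check: cumulants of ln X for X ~ Gamma(shape=alpha, scale=theta)
rng = np.random.default_rng(0)
y = np.log(rng.gamma(shape=alpha, scale=theta, size=1_000_000))
assert abs(y.mean() - k1) < 0.01   # first cumulant = mean
assert abs(y.var() - k2) < 0.01    # second cumulant = variance
```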

# Section 4

There seems to be no need for the moment-generating-function calculation in the right column of page 177. Here,

$$Y = \gamma\ln X_1 - \gamma\ln X_2 + C.$$

With $X_1$ and $X_2$ being independent random variables,

$$\Psi_Y(t) = \Psi_{\gamma\ln X_1}(t) + \Psi_{-\gamma\ln X_2}(t) + Ct = \Psi_{\ln X_1}(\gamma t) + \Psi_{\ln X_2}(-\gamma t) + Ct.$$

Because the cumulants of a random variable can be obtained from the coefficients of the Maclaurin series of the cumulant-generating function, it follows from the last equation that

$$\kappa_1(Y) = \gamma\,\kappa_1(\ln X_1) - \gamma\,\kappa_1(\ln X_2) + C, \qquad \kappa_j(Y) = \gamma^j\,\kappa_j(\ln X_1) + (-\gamma)^j\,\kappa_j(\ln X_2), \quad j = 2, 3, 4, \ldots.$$

In Section 4, $X_1$ is assumed to be a gamma random variable with shape parameter $\alpha$, and $X_2$ a gamma random variable with shape parameter $\beta$. Hence,

$$\kappa_j(\ln X_1) = \frac{d^j}{d\alpha^j}\ln\Gamma(\alpha)$$

and

$$\kappa_j(\ln X_2) = \frac{d^j}{d\beta^j}\ln\Gamma(\beta).$$
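Again a quick numerical check is possible. In the sketch below (my own; γ, C, and the shape parameters are arbitrary, and both gamma variables are taken with unit scale), the first two cumulants $\kappa_1(Y) = \gamma\psi(\alpha) - \gamma\psi(\beta) + C$ and $\kappa_2(Y) = \gamma^2[\psi'(\alpha) + \psi'(\beta)]$ are compared with sample moments:

```python
import numpy as np
from scipy.special import polygamma

alpha, beta, gam, C = 2.0, 3.5, 0.8, 1.0   # arbitrary illustrative values
n = 1_000_000
rng = np.random.default_rng(1)

# Y = gam*ln(X1) - gam*ln(X2) + C, X1 ~ Gamma(alpha), X2 ~ Gamma(beta), unit scale
y = gam * np.log(rng.gamma(alpha, size=n)) - gam * np.log(rng.gamma(beta, size=n)) + C

k1 = gam * polygamma(0, alpha) - gam * polygamma(0, beta) + C
k2 = gam**2 * (polygamma(1, alpha) + polygamma(1, beta))
assert abs(y.mean() - k1) < 0.01   # kappa_1(Y)
assert abs(y.var() - k2) < 0.02    # kappa_2(Y)
```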

# Section 7

It is stated in the Conclusion paragraph of the paper that “loss distributions … are non-negative, positively skewed, and right-tailed.” Then it is claimed that “most loss distributions are transformations of the gamma distribution,” which I find confusing. It may be useful to state the following result, whose proof can be found in a short discussion by Ko and Ng (2007): the distribution of any positive random variable can be arbitrarily closely approximated by a weighted average of Erlang distributions, where the weights are positive. (An Erlang distribution is a gamma distribution whose shape parameter α is a positive integer.)

Also, Ko and Ng (2007) show that the distribution of any positive random variable can be arbitrarily closely approximated by a weighted average of exponential distributions, but some of the weights may be negative. There are other approximation results given in this two-page discussion by Ko and Ng (2007).
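To make the Erlang-mixture result concrete, here is a sketch of one standard construction, with weights taken as cdf increments over a grid of Erlang means; this is my own illustration (approximating a Weibull distribution, chosen arbitrarily) and not necessarily the construction used by Ko and Ng (2007):

```python
import numpy as np
from scipy.stats import erlang, weibull_min

target = weibull_min(1.5)   # an arbitrary positive target distribution
theta = 0.01                # common Erlang scale; smaller theta -> closer fit
K = 1000                    # Erlang(k, theta) means k*theta then cover (0, 10]

# Positive weights: w_k = F(k*theta) - F((k-1)*theta), summing to nearly 1
grid = theta * np.arange(K + 1)
w = np.diff(target.cdf(grid))

def mixture_cdf(x):
    """Cdf of the weighted average of Erlang(k, theta) distributions."""
    k = np.arange(1, K + 1)
    return float(np.sum(w * erlang.cdf(x, k, scale=theta)))

# The mixture cdf tracks the target cdf closely
for x in (0.5, 1.0, 2.0):
    assert abs(mixture_cdf(x) - target.cdf(x)) < 0.02
```

Shrinking theta (and enlarging K accordingly) makes the approximation as close as desired, in line with the result stated above.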