Variance
Actuarial
Vol. 8, Issue 1, 2014. January 01, 2014 EDT

The Discrete Fourier Transform and Cyclical Overflow

Leigh J. Halliwell

Keywords: collective-risk model, discrete Fourier transform, characteristic function, roots of unity
Halliwell, Leigh J. 2014. “The Discrete Fourier Transform and Cyclical Overflow.” Variance 8 (1): 73–79.

Abstract

More casualty actuaries would employ the discrete Fourier transform (DFT) if they understood it better. In addition to the many fine papers on the DFT, this paper might be regarded as just one more introduction. However, the topic uniquely explained herein is how the DFT treats the probability of amounts that overflow its upper bound, a topic that others either have not noticed or have deemed of little importance. The cyclical overflow originates in the modular arithmetic whereby the DFT evaluates characteristic functions. To understand this is to attain a deeper understanding of the DFT, which may lead to its wider use.

Introduction

A common tool of casualty actuaries is the collective-risk model S = X1 + . . . + XN, according to which aggregate loss S is composed of a random number N of independent, identically distributed claims X. Although there are several techniques for deriving probability distributions for S, the discrete Fourier transform (DFT) is arguably the most elegant and powerful when severity X is both discrete and finite. However, it suffers from two drawbacks. First, its use of complex numbers may daunt some actuaries; certainly it does not easily program into a spreadsheet.[1] And, second, the finite support of S raises the issue of overflow, especially when the support of claim count N is unlimited. Since one can set an arbitrarily large common support for X and S, this issue is not a practical problem;[2] however, how the true distribution of S overflows the DFT-derived distribution is an interesting topic. Moreover, understanding the overflow issue may enable and encourage others to employ the DFT. It is to this end that we will start in Section 2 with the necessary complex algebra. From there, in Section 3 we will define abstractly the DFT as an invertible transformation (a finite matrix of complex numbers). Bringing in probability, we will in Section 4 derive the DFT of S = X1 + . . . + XN in terms of N and X. Finally, in Section 5 we will show what happens to overflow in the DFT, relating it back to the complex algebra of Section 2.

The nth roots of unity

Euler’s formula e^{iθ} = cos θ + i sin θ forms the basis for the nth roots of unity.[3] The magnitude of z = e^{iθ} is |z| = √(cos²θ + sin²θ) = 1. Hence, e^{iθ} is a point on the complex unit circle, and the function z(θ) = e^{iθ} can be imagined as a point moving counterclockwise around this circle as θ increases. Plugging θ = 2π into Euler’s formula, we have e^{i·2π} = cos 2π + i sin 2π = 1. Since the circumference of a circle measures 2π radians, the exponential function is periodic along the imaginary axis.

Now consider the equation zn = 1, where n is a positive integer. For any integer k,

\left(e^{i \frac{2 \pi k}{n}}\right)^{n}=e^{i \frac{2 \pi k}{n} n}=e^{i \cdot 2 \pi k}=\left(e^{i \cdot 2 \pi}\right)^{k}=(1)^{k}=1

So numbers of the form \omega_k=e^{i \frac{2 \pi k}{n}} are nth roots of unity. Furthermore, since \omega_n=e^{i \frac{2 \pi n}{n}}=e^{i \cdot 2 \pi}=1, within this formally infinite set only n elements are distinct, viz., \omega_k=e^{i \frac{2 \pi k}{n}} for k ∈ {0, 1, . . . , n − 1}. Multiplication of roots of unity involves modular arithmetic,[4] since

\omega_{k} \omega_{l}=e^{i \frac{2 \pi k}{n}} e^{i \frac{2 \pi l}{n}}=e^{i \frac{2 \pi(k+l)}{n}}=\omega_{k+l}=\omega_{\bmod (k+l, n)} \Rightarrow k+l \equiv \bmod (k+l, n) .

Because an nth degree polynomial has at most n distinct roots,[5] there can be no other nth roots of unity.
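These facts are easy to verify numerically. The sketch below uses NumPy (our choice of tool, not the paper's) to build the nth roots of unity and check the modular multiplication rule:

```python
import numpy as np

# The n-th roots of unity: omega_k = exp(i * 2*pi*k / n) for k = 0, ..., n-1.
n = 8
omega = np.exp(1j * 2 * np.pi * np.arange(n) / n)

# Each omega_k solves z^n = 1 ...
assert np.allclose(omega ** n, 1.0)

# ... and multiplication is modular: omega_k * omega_l = omega_{mod(k+l, n)}.
for k in range(n):
    for l in range(n):
        assert np.isclose(omega[k] * omega[l], omega[(k + l) % n])
```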

The operation of multiplication on G, the set of the nth roots of unity, constitutes an algebraic group (Clark 1984, 17f). In addition to closure (if a, b ∈ G, then a · b ∈ G), multiplication on G has the three defining properties:

  1. ∀a, b, c ∈ G (a · b) · c = a · (b · c)

  2. ∃e ∈ G : ∀a ∈ G a · e = e · a = a

  3. ∀a ∈ G ∃b ∈ G : a · b = b · a = e

The properties are (1) associativity, (2) identity existence, and (3) inverse existence. That multiplication is associative hardly needs to be mentioned. The identity element is ω0 = 1. Abstractly, the identity element must be unique, for if e1 and e2 satisfy the identity property, then e1 = e1 · e2 = e2. Similarly, the inverse of an element must be unique, for, if both a · b1 = b1 · a = e and a · b2 = b2 · a = e, then

b_{1}=b_{1} \cdot e=b_{1} \cdot\left(a \cdot b_{2}\right)=\left(b_{1} \cdot a\right) \cdot b_{2}=e \cdot b_{2}=b_{2} .

For group G, the inverse of \omega_k is \omega_k^{-1}=\omega_{n-k}= \omega_{\bmod (n-k,\, n)}; the third part of the equation applies to the identity element: \omega_0^{-1}=\omega_{\bmod(n-0,\,n)}=\omega_0. But, again from Euler’s formula:

\begin{aligned} \omega_{k}^{-1}=e^{-i \frac{2 \pi k}{n}} & =\cos \left(-\frac{2 \pi k}{n}\right)+i \sin \left(-\frac{2 \pi k}{n}\right) \\ & =\cos \left(\frac{2 \pi k}{n}\right)-i \sin \left(\frac{2 \pi k}{n}\right)=\bar{\omega}_{k} . \end{aligned}

This means that the inverse of a root of unity is its complex conjugate.

Of all the n^{\text{th}} roots of unity, \omega_1=e^{i\frac{2 \pi}{n}} is the principal root, from which the others can be derived by exponentiation: \omega_k=\omega_1^k. We will often drop the subscript from the principal root and designate all the roots as \omega_k=\omega_1^k=\omega^k. The value of n will be understood; however, were it forgotten, it could always be reclaimed as the smallest positive integer n for which \omega^n=1.

Finally, as to the sum of the nth roots of unity:

(\omega-1) \sum_{k=0}^{n-1} \omega^{k}=\sum_{k=1}^{n} \omega^{k}-\sum_{k=0}^{n-1} \omega^{k}=\omega^{n}-\omega^{0}=1-1=0 .

Either ω = 1 or \sum_{k=0}^{n-1} \omega^{k}=0. But the principal root is unity if and only if n = 1. Hence

\sum_{k=0}^{n-1} \omega^{k}=\left\{\begin{array}{ll} 1 & \text { if } n=1 \\ 0 & \text { if } n>1 \end{array} .\right.

Intuitively, the formula for n > 1 is obvious from the equal dispersion of the roots about the circle; the origin is the center of their mass.[6]
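Both facts of this section, the conjugate inverse and the zero sum, can be confirmed numerically; again a NumPy sketch of ours, not part of the paper:

```python
import numpy as np

n = 5
omega = np.exp(1j * 2 * np.pi * np.arange(n) / n)

# The inverse of each root of unity is its complex conjugate.
assert np.allclose(1.0 / omega, np.conj(omega))

# For n > 1 the roots sum to zero: the origin is their center of mass.
assert np.isclose(omega.sum(), 0.0)
```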

The discrete Fourier transform

The DFT is simply an n × n matrix Ω involving the nth roots of unity ωk. The jkth element of Ω is ωj • k. As in the previous section, ω is the principal nth root of unity, and the indexing starts at zero. Define the two n × 1 vectors:

\mathbf{n}=\left[\begin{array}{c} 0 \\ \vdots \\ n-1 \end{array}\right] \text { and } \mathbf{p}=\left[\begin{array}{c} p_{0} \\ \vdots \\ p_{n-1} \end{array}\right]

The DFT of \mathbf{p} is \boldsymbol{\Omega} \mathbf{p}. Understanding the exponentiation as elementwise, we can express the matrix as \boldsymbol{\Omega}=\omega^{\mathbf{nn}^{\prime}}, which is symmetric. But it is unnecessary to perform the exponentiation on \omega. Due to the cyclicality of the multiplication, the elements of \mathbf{nn}^{\prime} can be reduced modulo n to the set \{0,1, \ldots, n-1\}, at which point they are just as much n^{\text{th}}-root subscripts as they are exponents. In sum, \boldsymbol{\Omega}=\omega^{\mathbf{nn}^{\prime}}=\omega^{\bmod\left(\mathbf{nn}^{\prime},\, n\right)}.

To reclaim vector \mathbf{p}, one needs the inverse DFT, or \boldsymbol{\Omega}^{-1}, whose existence we will now show. Define the conjugate of matrix \boldsymbol{\Omega}, or \overline{\boldsymbol{\Omega}}, as the matrix of the conjugated elements of \boldsymbol{\Omega}. According to Section 2 , the conjugate of a root of unity is the inverse of the root. So the j k^{\text {th }} element of \overline{\boldsymbol{\Omega}} is \omega^{-j k}. Since \boldsymbol{\Omega} is symmetric, so too is \overline{\boldsymbol{\Omega}}. Now the j k^{\text {th }} element of \overline{\boldsymbol{\Omega}} \boldsymbol{\Omega} is the dot product of the j^{\text {th }} row of \bar{\Omega} and the k^{\text {th }} column of \boldsymbol{\Omega}. Hence:

[\overline{\boldsymbol{\Omega}} \boldsymbol{\Omega}]_{j k}=\sum_{l=0}^{n-1} \bar{\omega}_{j l} \omega_{l k}=\sum_{l=0}^{n-1} \omega_{j l}^{-1} \omega_{l k}=\sum_{l=0}^{n-1} \omega^{-j l} \omega^{l k}=\sum_{l=0}^{n-1}\left(\omega^{k-j}\right)^{l} .

The formula for the geometric series gives us

\begin{aligned} {[\overline{\mathbf{\Omega}} \boldsymbol{\Omega}]_{j k} } & =\sum_{l=0}^{n-1}\left(\omega^{k-j}\right)^{l} \\ & =\left\{\begin{array}{ll} n & \text { if } \omega^{k-j}=1 \\ \frac{\left(\omega^{k-j}\right)^{n}-1}{\left(\omega^{k-j}\right)-1} & \text { if } \omega^{k-j} \neq 1 \end{array}=\left\{\begin{array}{ll} n & \text { if } \omega^{k-j}=1 \\ 0 & \text { if } \omega^{k-j} \neq 1 \end{array} .\right.\right. \end{aligned}

But \omega^{k-j}=1 if and only if \bmod (k-j, n)=0. Since j, k \in\{0,1, \ldots, n-1\}, \bmod (k-j, n)=0 if and only if j=k. So, finally,

[\overline{\boldsymbol{\Omega}} \boldsymbol{\Omega}]_{j k}=\left\{\begin{array}{ll} n & \text { if } j=k \\ 0 & \text { if } j \neq k \end{array} \Rightarrow \overline{\boldsymbol{\Omega}} \boldsymbol{\Omega}=n \mathbf{I}_{n} .\right.

And since both \boldsymbol{\Omega} and \overline{\boldsymbol{\Omega}} are symmetric, \boldsymbol{\Omega} \overline{\boldsymbol{\Omega}}= \boldsymbol{\Omega}^{\prime} \overline{\boldsymbol{\Omega}}^{\prime}=(\overline{\boldsymbol{\Omega}} \boldsymbol{\Omega})^{\prime}=\left(n \mathbf{I}_n\right)^{\prime}=n \mathbf{I}_n. Therefore, \boldsymbol{\Omega} is invertible: \boldsymbol{\Omega}^{-1}=\overline{\boldsymbol{\Omega}}/n.[7]
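A quick numerical check of \overline{\boldsymbol{\Omega}} \boldsymbol{\Omega}=n \mathbf{I}_n, sketched with NumPy (our assumption; the paper supplies no code):

```python
import numpy as np

# The DFT matrix: jk-th element omega^(j*k), omega the principal n-th root.
n = 6
idx = np.arange(n)
Omega = np.exp(1j * 2 * np.pi * np.outer(idx, idx) / n)

assert np.allclose(Omega, Omega.T)                         # Omega is symmetric
assert np.allclose(np.conj(Omega) @ Omega, n * np.eye(n))  # conj(Omega) Omega = n I
assert np.allclose(np.linalg.inv(Omega), np.conj(Omega) / n)
```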

The discrete Fourier transform as a characteristic function

The relevance of the previous two sections will become clear in connection with the characteristic function, which is a moment generating function (mgf) with an imaginary argument. The characteristic function of any real-valued random variable X, or \varphi_X, is \varphi_X(t)=E\left[e^{itX}\right]. Because \left|E\left[e^{itX}\right]\right|=\left|\int_{x=-\infty}^{\infty} e^{itx} d F_{X}(x)\right| \leq \int_{x=-\infty}^{\infty}\left|e^{itx}\right| d F_{X}(x)=\int_{x=-\infty}^{\infty} 1 \cdot d F_{X}(x)=1, the characteristic function, unlike the mgf, always converges. However, since here we are concerned with discrete and finite random variables, whose mgfs do converge, this is of no advantage. The characteristic function's advantage here lies in the simplicity and accuracy of its inversion.

First, we give a probability interpretation to vector \mathbf{p} of Section 3. The k^{\text{th}} element of \mathbf{p}_X will be the probability for random variable X to equal k: \operatorname{Prob}[X=k]=p_k. So now the elements of \mathbf{p}_X must be non-negative real numbers whose sum is unity. Second, we define t_k=\frac{2 \pi k}{n} for k \in\{0,1, \ldots, n-1\}. These n values of t are the radian measures on the unit circle of the n^{\text{th}} roots of unity, so \omega_k=e^{i\frac{2\pi k}{n}}=e^{it_k}. Accordingly,

\varphi_{X}\left(t_{j}\right)=E\left[e^{i t_{j} X}\right]=\sum_{k=0}^{n-1} e^{i t_{j} k} p_{k}=\sum_{k=0}^{n-1}\left(\omega_{j}\right)^{k} p_{k}=\sum_{k=0}^{n-1} \omega^{j k} p_{k} .

The last expression is the j^{\text{th}} element of the matrix product \boldsymbol{\Omega} \mathbf{p}_X. In fact, defining \mathbf{t} as the n \times 1 vector whose j^{\text{th}} element is t_j, and applying \varphi_X elementwise, we have \varphi_X(\mathbf{t})=\boldsymbol{\Omega} \mathbf{p}_X. Therefore, the DFT is a discrete form of the characteristic function, from which one can reclaim the probabilities as \mathbf{p}_X=\boldsymbol{\Omega}^{-1} \varphi_X(\mathbf{t})= \frac{\overline{\boldsymbol{\Omega}}}{n} \varphi_X(\mathbf{t}).
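In NumPy terms (our sketch; the severity vector is an arbitrary example), the round trip from probabilities to characteristic function and back looks like this:

```python
import numpy as np

# A discrete severity on {0, 1, 2, 3}: Prob[X = k] = p_x[k].
p_x = np.array([0.0, 0.5, 0.4, 0.1])
n = len(p_x)
idx = np.arange(n)
Omega = np.exp(1j * 2 * np.pi * np.outer(idx, idx) / n)

# Omega @ p_x samples the characteristic function at t_j = 2*pi*j/n ...
phi = Omega @ p_x
t = 2 * np.pi * idx / n
direct = np.array([np.sum(np.exp(1j * tj * idx) * p_x) for tj in t])
assert np.allclose(phi, direct)

# ... and the inverse DFT reclaims the probabilities exactly.
assert np.allclose((np.conj(Omega) @ phi / n).real, p_x)
```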

Even so, it seems that we are just playing games, jumping from probabilities to the characteristic function and back again. Must not one know the probabilities before knowing the characteristic function? The answer is “Not necessarily.” The collective-risk model is a case in point, for if S = X1 + . . . + XN:

\varphi_{S}(t)=E\left[e^{i t S}\right]=E\left[e^{i t\left(X_{1}+\cdots+X_{N}\right)}\right]=E\left[\prod_{k=1}^{N} e^{i t X_{k}}\right] .

Separate N from the X variables by conditional expectation:

E\left[\prod_{k=1}^{N} e^{i t X_{k}}\right]=\underset{N}{E}\left[E\left[\prod_{k=1}^{N} e^{i t X_{k}} \mid N\right]\right]=\underset{N}{E}\left[\prod_{k=1}^{N} E\left[e^{i t X_{k}}\right] \mid N\right] .

Finally, simplify due to the identical distribution of the X variables:

\begin{aligned} \underset{N}{E}\left[\prod_{k=1}^{N} E\left[e^{i t X_{k}}\right] \mid N\right] & =\underset{N}{E}\left[\prod_{k=1}^{N} \varphi_{X}(t) \mid N\right] \\ & =\underset{N}{E}\left[\varphi_{X}(t)^{N} \mid N\right]=E\left[\varphi_{X}(t)^{N}\right] \end{aligned}

Therefore, \varphi_S(t)=E\left[\varphi_X(t)^N\right], in agreement with Heckman and Meyers (1983, 34), Klugman, Panjer, and Willmot (1998, 316), and Wang (1998, 864). The point is made: one can obtain the characteristic function of S without knowledge of its probabilities, and by DFT inversion (i.e., \mathbf{p}_S=\frac{\overline{\boldsymbol{\Omega}}}{n} \varphi_S) work back to the probabilities.

If N is a constant, or \operatorname{Prob}[N=k]=1, then S=X_1+\cdots+X_k and \varphi_S(t)=\varphi_X(t)^k. Therefore, \mathbf{p}_S=\frac{\overline{\boldsymbol{\Omega}}}{n} \varphi_S=\frac{\overline{\boldsymbol{\Omega}}}{n}\left(\boldsymbol{\Omega} \mathbf{p}_X\right)^k. This is the formula for the k^{\text{th}} convolution of X, which holds for any non-negative integer k.[8] We will check it for k=0. In this case \left(\boldsymbol{\Omega} \mathbf{p}_X\right)^k=\mathbf{1}_n, an n \times 1 vector of ones, and \mathbf{p}_S=\frac{\overline{\boldsymbol{\Omega}}}{n} \mathbf{1}_n, which contains the row averages of \overline{\boldsymbol{\Omega}}. Its j^{\text{th}} element is

\begin{aligned} {\left[\mathbf{p}_{s}\right]_{j} } & =\frac{1}{n} \sum_{l=0}^{n-1} \bar{\omega}_{j l}=\frac{1}{n} \sum_{l=0}^{n-1} \omega^{-j l}=\frac{1}{n} \sum_{l=0}^{n-1}\left(\omega^{-j}\right)^{l} \\ & =\frac{1}{n}\left\{\begin{array}{ll} n & \text { if } j=0 \\ \frac{\left(\omega^{-j}\right)^{n}-1}{\omega^{-j}-1} & \text { if } j>0 \end{array}\right. \\ & =\left\{\begin{array}{ll} 1 & \text { if } j=0 \\ 0 & \text { if } j>0 \end{array}\right. \end{aligned}

So Prob[S = 0] =1, as it must.

Again, if \operatorname{Prob}[X=1]=1, then S=1+\cdots+1=N. In this case \varphi_X(t)=E\left[e^{itX}\right]=e^{it} and

\varphi_{S}(t)=E\left[\varphi_{X}(t)^{N}\right]=E\left[\left(e^{i t}\right)^{N}\right]=E\left[e^{i t N}\right]=\varphi_{N}(t) .

This means that S is distributed as N; in the realm of the DFT, \mathbf{p}_S=\mathbf{p}_N.

In general, E\left[z^N\right] is called the probability generating function (Klugman, Panjer, and Willmot 1998, 201) and denoted as P_N(z). Therefore, \varphi_S(t)=E\left[\varphi_X(t)^N\right]= P_N\left(\varphi_X(t)\right). Appendix B of Klugman, Panjer, and Willmot contains the probability generating functions of many claim-count distributions, and our purpose here does not require us to duplicate that work. So we turn at last to the overflow problem.
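For example, with Poisson claim counts the pgf is P_N(z) = e^{λ(z−1)}, so the aggregate distribution comes from a single elementwise exponential of the severity's DFT. The sketch below (NumPy; the support size n = 64 is our arbitrary choice, large enough that overflow is negligible) checks the DFT answer against a brute-force mixture over N:

```python
import numpy as np

# S = X_1 + ... + X_N with N ~ Poisson(lam); phi_S = P_N(phi_X), P_N(z) = exp(lam*(z-1)).
lam = 3.0
n = 64                                      # common support {0, ..., n-1}
p_x = np.zeros(n)
p_x[1:4] = [0.5, 0.4, 0.1]

idx = np.arange(n)
Omega = np.exp(1j * 2 * np.pi * np.outer(idx, idx) / n)

phi_x = Omega @ p_x
phi_s = np.exp(lam * (phi_x - 1.0))         # Poisson pgf applied elementwise
p_s = (np.conj(Omega) @ phi_s / n).real     # invert the DFT

assert np.isclose(p_s.sum(), 1.0)           # the DFT preserves all probability

# Brute force: p_S = sum_k Prob[N = k] * (k-th convolution of p_X).
check = np.zeros(n)
conv = np.zeros(n)
conv[0] = 1.0                               # 0th convolution: point mass at 0
weight = np.exp(-lam)
for k in range(50):
    check += weight * conv
    conv = np.convolve(conv, p_x)[:n]
    weight *= lam / (k + 1)
assert np.allclose(p_s, check, atol=1e-9)
```

Here Prob[S = 0] equals Prob[N = 0] = e^{−3}, since the severity puts no mass at zero.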

The discrete Fourier transform and overflow

Example 4.11 of Klugman, Panjer, and Willmot (1998, 319f) demonstrates the use of the DFT to obtain the distribution of S=X_1+\cdots+X_N, where N is Poisson-distributed with a mean of 3, and \mathbf{p}_X=\left[\begin{array}{llllll}0 & 0.5 & 0.4 & 0.1 & 0 & \ldots\end{array}\right]^{\prime}. The demonstration is performed twice, once over the support \left[\begin{array}{llll}0 & 1 & \ldots & 7\end{array}\right]^{\prime} and once over the support \left[\begin{array}{llll}0 & 1 & \ldots & 4{,}095\end{array}\right]^{\prime}. For the first support, n=8; for the second, n=4{,}096. The purpose of the twofold demonstration is to show the importance of choosing a value of n large enough for the overflow to be negligible. Having replicated the calculations, we present the results in Table 1.

Table 1. Example 4.11 of Klugman, Panjer, and Willmot, Loss Models (1998)

    s   n = 8        s   n = 4,096    s   Cont’d
    0   0.11227      0   0.04979     13   0.00666
    1   0.11821      1   0.07468     14   0.00381
    2   0.14470      2   0.11575     15   0.00212
    3   0.15100      3   0.13256     16   0.00115
    4   0.14727      4   0.13597     17   0.00060
    5   0.13194      5   0.12525     18   0.00031
    6   0.10941      6   0.10558     19   0.00015
    7   0.08518      7   0.08305     20   0.00008
                     8   0.06134     21   0.00004
Total   1.00000      9   0.04293     22   0.00002
                    10   0.02863     23   0.00001
                    11   0.01829     24   0.00000
                    12   0.01123      ⋮
                              Total       1.00000

Klugman remarks:

. . . all the entries for n = 4,096 are accurate to the five decimal places presented. On the other hand, with n = 8, the FFT gives values that are clearly distorted. If any generalization can be made it is that more of the extra probability has been added to the smaller values of S. [p. 319]

In other words, the values for n = 8 are distorted by the overflow, and the lower the value of s, the greater the error. This is true enough, but a more precise generalization can be made.

The probability for S to exceed 7, or to overflow the procedure for n = 8, is about 17.7 percent. The remarkable point is that the DFT preserves all the probability, regardless of n. Why does it not simply cut off accurately at s = 7? Instead, it compresses all the probability into the eight values. The explanation will appear from the following example.

Starting with a Bernoulli(0.5)-distributed X, we derive its next four convolutions and display them in Table 2. The probabilities of C(k) are binomial(k, 0.5).

Table 2. Exact Probabilities of Convolutions of X
k X C(2) C(3) C(4) C(5)
0 0.5 0.250 0.125 0.0625 0.03125
1 0.5 0.500 0.375 0.2500 0.15625
2 0.250 0.375 0.3750 0.31250
3 0.125 0.2500 0.31250
4 0.0625 0.15625
5 0.03125

Next, we calculated those probabilities according to the DFT with n = 4. Accordingly,

\begin{array}{c} \mathbf{n}=\left[\begin{array}{l} 0 \\ 1 \\ 2 \\ 3 \end{array}\right] \mathbf{p}_{x}=\left[\begin{array}{l} 0.5 \\ 0.5 \\ 0.0 \\ 0.0 \end{array}\right] \\ \boldsymbol{\Omega}=\left[\begin{array}{rrrr} 1 & 1 & 1 & 1 \\ 1 & i & -1 & -i \\ 1 & -1 & 1 & -1 \\ 1 & -i & -1 & i \end{array}\right] \quad \mathbf{\Omega}^{-1}=\frac{1}{4}\left[\begin{array}{rrrr} 1 & 1 & 1 & 1 \\ 1 & -i & -1 & i \\ 1 & -1 & 1 & -1 \\ 1 & i & -1 & -i \end{array}\right] \end{array}

Hence, \mathbf{p}_{C(k)}=\boldsymbol{\Omega}^{-1}\left(\boldsymbol{\Omega} \mathbf{p}_X\right)^k. Table 3 shows the DFT result.

Table 3. DFT Probabilities with n = 4
k X C(2) C(3) C(4) C(5)
0 0.5 0.250 0.125 0.1250 0.18750
1 0.5 0.500 0.375 0.2500 0.18750
2 0.250 0.375 0.3750 0.31250
3 0.125 0.2500 0.31250
4
5

As expected, C(2) and C(3) agree. But overflow begins with C(4), and the form of the overflow is cyclical: what should have been in the k = 4 row was added to the k = 0 row, and what should have been in the k = 5 row was added to the k = 1 row.
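The wraparound of Tables 2 and 3 can be reproduced directly. In the NumPy sketch below (our construction, following the n = 4 setup above), the DFT convolutions of the Bernoulli severity match the exact binomial masses folded modulo 4:

```python
import numpy as np

# DFT convolutions of a Bernoulli(0.5) severity on a support of size n = 4.
n = 4
idx = np.arange(n)
Omega = np.exp(1j * 2 * np.pi * np.outer(idx, idx) / n)
p_x = np.array([0.5, 0.5, 0.0, 0.0])

def dft_convolution(k):
    return (np.conj(Omega) @ (Omega @ p_x) ** k / n).real

# C(2) and C(3) fit within the support and agree with the exact binomials.
assert np.allclose(dft_convolution(2), [0.25, 0.50, 0.25, 0.0])
assert np.allclose(dft_convolution(3), [0.125, 0.375, 0.375, 0.125])

# C(5) overflows: the binomial(5, 0.5) masses at k = 4 and k = 5
# wrap around to k = 0 and k = 1, as in Table 3.
exact = np.array([1, 5, 10, 10, 5, 1]) / 32.0
folded = np.array([exact[0] + exact[4], exact[1] + exact[5], exact[2], exact[3]])
assert np.allclose(dft_convolution(5), folded)
```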

The DFT cannot ignore overflow; rather, it must recycle it. Instead of calculating probabilities for S = X1 + . . . + XN, it calculates them for mod(S, n). Its arithmetic is as modular as that of the group G of Section 2:

\omega_{k} \omega_{l}=\omega_{k+l}=\omega_{\bmod (k+l, n)} \Rightarrow k+l \equiv \bmod (k+l, n) .

Alternatively, one may imagine the support to be infinite, over the set of natural numbers \{0,1,2, \ldots\}. However, there is no least root of unity; one must settle for some finite n and its principal root. So to match the roots of unity with the natural numbers, one must cycle endlessly through \omega^0 \quad \omega^1 \quad \ldots \quad \omega^{n-1}. The roots of unity are like ID variables according to which the probabilities may be summarized, as in the summation:

\sum_{k=0}^{\infty} \omega^{k} p_{k}=\sum_{k=0}^{\infty} \omega^{\bmod (k, n)} p_{k}=\sum_{k=0}^{n-1}\left(\omega^{k} \sum_{\bmod (l, n)=k} p_{l}\right)=\sum_{k=0}^{n-1}\left(\omega^{k} p_{k}^{*}\right) .

Summarizing the n = 4,096 probabilities of Table 1 according to mod(s, 8) will produce the n = 8 probabilities.
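This mod(s, 8) summarization is easy to confirm. The sketch below reruns Example 4.11 at both supports using numpy.fft (our choice; numpy's fft uses the conjugate of this paper's \boldsymbol{\Omega}, but for real probability vectors the fft/ifft pair recovers the same \mathbf{p}_S):

```python
import numpy as np

lam = 3.0

def compound_poisson(n):
    # DFT of p_X, Poisson pgf applied elementwise, then inverse DFT.
    p_x = np.zeros(n)
    p_x[1:4] = [0.5, 0.4, 0.1]
    return np.fft.ifft(np.exp(lam * (np.fft.fft(p_x) - 1.0))).real

small = compound_poisson(8)
big = compound_poisson(4096)

# Folding the n = 4096 distribution modulo 8 reproduces the n = 8 run:
# index 8*r + c contributes to c = mod(s, 8).
folded = big.reshape(-1, 8).sum(axis=0)
assert np.allclose(folded, small)

# Both runs preserve all the probability, regardless of n.
assert np.isclose(small.sum(), 1.0) and np.isclose(big.sum(), 1.0)
```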

Conclusion

Many actuaries have neither the temperament nor the fortitude to penetrate the thicket of Σ operators found in many treatments of the DFT. Hopefully, they will find this treatment gentler, starting as it does with the nth roots of unity. The addition of a dash of group theory, about which much more could be said,[9] is suggestive of the modular arithmetic underlying the DFT and of the behavior of any overflow. The magic[10] of the DFT is not dispelled by its being opened to examination. But to those who understand the preceding sections of this paper, any impediments to working DFT magic will pertain only to computer hardware and software.[11]


  1. The Analysis ToolPack of Excel allows one to work with complex numbers, as well as to perform a Fourier analysis through the menu “Tools/Data Analysis.” But a matrix language with built-in complex arithmetic is far preferable.

  2. It becomes a practical problem if it involves limitations to computing speed, memory, and accuracy, which are limitations we will ignore here. The fast Fourier transform (FFT) is a remarkable algorithm for reducing, but not eliminating, them.

  3. See in References the Wikipedia articles on “Euler’s formula” and “roots of unity.” No complex analysis will be used in this paper, but we presume the reader to be proficient in complex algebra.

  4. More accurately, such multiplication is the basis of modular arithmetic. It has enriched the concept in classical number theory of congruence modulo n, i.e., p ≡ q mod n, and made complex analysis, culminating in the Riemann zeta function, essential to modern number theory.

  5. See Wikipedia, “Fundamental theorem of algebra,” for a history of the attempts to prove this. The article disputes the opinion of most mathematical historians that Gauss in his 1799 doctoral dissertation produced the first rigorous proof. Appendix D of Havil [2003], “Complex Function Theory,” a readable introduction to complex analysis, contains an elegant proof of one version of the theorem, viz., that every finite polynomial with complex coefficients has a complex root (pp. 241f). Induction completes the proof: If r is a root, factoring z − r out of the polynomial leaves a polynomial for solution whose degree is one less.

  6. Alternatively, according to the theory of equations, the sum of the roots of z^n + a_1 z^{n-1} + . . . + a_n = 0 equals −a_1. Since the \omega_k are the roots of the equation z^n − 1 = 0, they sum either to one (if n = 1) or to zero (if n > 1).

  7. One could define the DFT as \boldsymbol{\Omega}=\omega^{\mathbf{n n}^{\prime}} / \sqrt{n},in which case Ω−1 would simply be Ω. But we have kept the standard definition, on account of which one must be careful to include the 1/n factor.

  8. Even fractional convolutions make sense. The half convolution of X is a distribution of a random variable whose second convolution is the distribution of X. If X can be formed from a second convolution, the DFT will yield the correct result. For example, if \mathbf{p}_X = [0.25\ 0.50\ 0.25]^{\prime}, then \mathbf{p}_{S}=\frac{\overline{\boldsymbol{\Omega}}}{3}\left(\boldsymbol{\Omega} \mathbf{p}_{X}\right)^{0.5}=\left[\begin{array}{lll} 0.5 & 0.5 & 0 \end{array}\right]^{\prime}. If X cannot be so formed and n is large enough (\mathbf{p}_X right-padded with zeros), non-zero probabilities will appear beyond max(X), even though all the probabilities will sum to unity. Moreover, when the convolution is impossible, negative probabilities may arise.

  9. The properties of an algebraic group ensure that for any element a of group G, a ⋅ G and G ⋅ a are in one-to-one correspondence with G. Because of this, group theory is all about shifts, cycles, or rotations, especially for finite groups. Moreover, a finite group with a generating element (i.e., an element by repeated operations on which all the elements of G can be generated, such as a primitive root of unity) has a counting mechanism, from which a modular arithmetic can be developed. One who understands this can apply the DFT to abstractions of {0, 1, . . . , n − 1}.

  10. A friend of the author and former colleague, James C. Sandor, FCAS, MAAA, who introduced the Fourier transform into his company’s pricing models, never tires of saying, “The Fourier transform works like magic.”

  11. After this paper was reviewed and revised, the author was made aware of the paper, “Panjer Recursion versus FFT for Compound Distributions,” by Paul Embrechts and Marco Frei (2007). On page 8, the authors wrote, “Compound mass which lies at M and beyond will be ‘wrapped around’ and erroneously appears in the range 0, . . . , M − 1. For severities with considerable tail mass, the truncation and wrap-around error (the so-called aliasing error) can become quite an issue.” Although they did not explicitly claim the wrapping around to be modulo M, it is likely that they so understood it. Also, they called DFT overflow “aliasing” by analogy with a problem in signal processing (cf. Wikipedia, “Aliasing”).

References

Clark, A. 1984. Elements of Abstract Algebra. New York: Dover.
Havil, J. 2003. Gamma: Exploring Euler’s Constant. Princeton: Princeton University Press.
Heckman, P. E., and G. G. Meyers. 1983. “The Calculation of Aggregate Loss Distributions from Claim Severity and Claim Count Distributions.” Proceedings of the Casualty Actuarial Society 70:22–61. http:/​/​www.casact.org/​pubs/​proceed/​proceed83/​83022.pdf.
Klugman, S. A., H. H. Panjer, and G. E. Willmot. 1998. Loss Models: From Data to Decisions. New York: Wiley.
Wang, S. S. 1998. “Aggregation of Correlated Risk Portfolios: Models and Algorithms.” Proceedings of the Casualty Actuarial Society 85:848–939. http:/​/​www.casact.org/​pubs/​proceed/​proceed98/​980848.pdf.
Wikipedia contributors. n.d.-a. “Euler’s Formula.” Wikipedia, The Free Encyclopedia. Accessed July 2013. http:/​/​en.wikipedia.org/​wiki/​Euler%27s_formula.
———. n.d.-b. “Fundamental Theorem of Algebra.” Wikipedia, The Free Encyclopedia. Accessed July 2013. http:/​/​en.wikipedia.org/​wiki/​Fundamental_theorem_of_algebra.
———. n.d.-c. “Root of Unity.” Wikipedia, The Free Encyclopedia. Accessed July 2013. http:/​/​en.wikipedia.org/​wiki/​Root_of_unity.
