
Print ISSN: 2152-5080
Online ISSN: 2152-5099

Open Access

# International Journal for Uncertainty Quantification

DOI: 10.1615/Int.J.UncertaintyQuantification.v1.i1.50
pages 77-98

## ON A POLYNOMIAL CHAOS METHOD FOR DIFFERENTIAL EQUATIONS WITH SINGULAR SOURCES

Abstract

Singular source terms in the differential equation represented by the Dirac δ-function play a crucial role in determining the global solution. Due to the singular feature of the δ-function, physical parameters associated with the δ-function are highly sensitive to random and measurement errors, which makes the uncertainty analysis necessary. In this paper we use the generalized polynomial chaos method to derive the general solution of the differential equation under uncertainties associated with the δ-function. For simplicity, we assume the uniform distribution of the random variable and use the Legendre polynomials to expand the solution in the random space. A simple differential equation with the singular source term is considered. The polynomial chaos solution is derived. The Gibbs phenomenon and the convergence of high order moments are discussed. We also consider a direct collocation method which can avoid the Gibbs oscillations on the collocation points and enhance the accuracy accordingly.

KEYWORDS: generalized polynomial chaos, stochastic Galerkin method, singular source, Dirac δ-function, Gibbs phenomenon

## 1. Introduction

Differential equations with singular source terms are commonly found in various areas of application [1-5]. Singular source terms are defined in a highly localized region and play a crucial role in determining the global solution of the given differential equations. It is important to properly capture such small-scale phenomena induced by the local singular source terms and to understand the interaction between the small- and large-scale solution dynamics. Singular source terms are mathematically represented by the Dirac δ-function, δ(x - c), and its derivative(s), defined in the distribution sense with a function f(x), which is defined at x = c, such that

∫-∞∞ f(x)δ(x - c)dx = f(c)  (1)

The derivatives of the δ-function are also defined in a similar way for a function f(x), whose derivatives are defined at x = c such that

∫-∞∞ f(x)δ'(x - c)dx = -f'(c)  (2)

where the superscript ' denotes the derivative with respect to x.

Although singular sources are defined in a compact form mathematically, various uncertainties are easily involved to define them physically. For example, it is not easy to pinpoint the location of the singular source term. The detection of the singular object is based on the physical measurement, and such measurement has errors due to the locality of the singularity. Thus the realistic model of the singular source term should include the uncertainty of the location of the singular source, which can be introduced by a new random variable ξ in the physical space where the δ-function is defined such as

δ(x - ξ),

where ξ is a random variable replacing c in δ(x - c). Another type of uncertainty can be introduced for the amplitude, for which one can rewrite the δ-function in the following form

ηδ(x - c),

where η is a random variable. If η → 0, then the singular source term vanishes. One can consider other types of uncertainties for the δ-function besides the location and the amplitude. In this paper we consider the case where the uncertainty exists in the location.

In many cases solutions of differential equations with singular sources are nonsmooth, singular, or discontinuous. For example, in nonlinear optics, a defect in the optical media is modeled by the singular source, and such a singular source term plays a role as a potential around which the input signals yield nonlinear reflection and scattering phenomena. These nonlinear phenomena have been investigated using nonlinear partial differential equations (PDEs) including the sine-Gordon equation

 (3)

and the nonlinear Schrödinger equation

 (4)

Previous research shows that the global solutions of PDEs as given above are sensitive to the singular potential term and that the mathematical structure of the solution dynamics is rich and complex [2, 3]. The sensitivity of the global solution to the singular source term is amplified if uncertainties are involved, which makes the global solution dynamics more complex. No significant research has been conducted for the uncertainty analysis for the solution of such singularly perturbed differential equations. In this paper, as a preliminary research, we use the polynomial chaos method to analyze the solution of differential equations with the singular source term.

The polynomial chaos method was introduced by Wiener [6] and was recently much further developed by Xiu and co-workers [7-14]. The polynomial chaos method with the spectral method approach has gained great popularity these days [15-17] (see Xiu’s recent book and references therein [17]). The polynomial chaos method seeks the solution in a higher dimensional polynomial space by introducing a random variable associated with the uncertainty. Then the method expands the solution as a polynomial using the orthogonal polynomials [16, 17]. The orthogonal polynomials are determined by the distribution of the random variables considered. Different distributions and the corresponding orthogonal polynomials are given in Table 1 [11, 17]. In this paper we consider the uniform distribution and use the Legendre polynomials for simplicity.

TABLE 1: Continuous probability density functions and the associated orthogonal polynomials [11, 17].

| Distribution (PDF) | Orthogonal polynomials | Support |
|---|---|---|
| (1/2)χ[-1,1](x) | Ll(x), Legendre polynomials | [-1,1] |
| (1/√(2π))exp(-x2/2) | Hl(x), Hermite polynomials | (-∞,∞) |
| xk-1exp(-x/θ)/[Γ(k)θk] | Ll(x), Laguerre polynomials | [0,∞) |
| {[Γ(α + β)]/[Γ(α)Γ(β)]}xα-1(1 - x)β-1 | Pl(α,β)(x), Jacobi polynomials | [-1,1] |
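The pairings in Table 1 can be spot-checked numerically. The sketch below (Python with NumPy, an illustrative choice not used in the paper) verifies Legendre orthogonality under the uniform weight and probabilists' Hermite orthogonality under the Gaussian weight, using Gauss quadrature rules matched to each weight.

```python
import numpy as np
from numpy.polynomial import legendre as leg
from numpy.polynomial import hermite_e as herm

# Uniform on [-1,1] <-> Legendre: Gauss-Legendre quadrature has weight 1.
x, w = leg.leggauss(40)
L2 = leg.legval(x, [0, 0, 1])           # L_2(x)
L3 = leg.legval(x, [0, 0, 0, 1])        # L_3(x)
assert abs(np.sum(w * L2 * L3)) < 1e-12               # orthogonal
assert abs(np.sum(w * L3 * L3) - 2.0 / 7.0) < 1e-12   # norm 2/(2l+1), l = 3

# Gaussian <-> (probabilists') Hermite: Gauss-Hermite-E has weight exp(-x^2/2).
xh, wh = herm.hermegauss(40)
He2 = herm.hermeval(xh, [0, 0, 1])      # He_2(x)
He3 = herm.hermeval(xh, [0, 0, 0, 1])   # He_3(x)
print(np.sum(wh * He2 * He3))           # ~0: orthogonal under Gaussian weight
```

The same pattern extends to the Laguerre and Jacobi rows of the table via `numpy.polynomial.laguerre` and a Jacobi implementation (e.g., from SciPy).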

This paper is composed of the following sections. In Section 2 we consider the simple differential equation with a singular source term. The uncertainty is in the location of the singular source term. The random variable has a uniform distribution. We derive the solution using the Galerkin method and provide some convergence results. We also consider the case that the uncertainty is confined in a local regime. This assumption yields the domain decomposition method. In Section 3 we discuss the Gibbs phenomenon which exists in the solutions obtained in Section 2. In Section 4 discussions on high-order moments are given. In Section 5 we consider the simple linear advection equation with the singular source term. A similar solution is obtained for the time-dependent problem. In Section 6 we consider the collocation method to solve the same time-dependent problem considered in Section 5. The singular source term is directly projected to the collocation space. As a result, the direct projection method removes the Gibbs phenomenon in the solution. In Section 7 we provide a brief summary and remark on future work.

## 2. A first-order differential equation with uniform distribution, ξ ∈ [-1,1]

First we consider the following simple differential equation for the real-valued function u(x),

du/dx = δ(x), x ∈ [-1,1], u(-1) = 0  (5)

The exact solution is simply given by the Heaviside function H(x) which is an integral of the right-hand side of Eq. (5), δ(x). The singularity is located at the origin for Eq. (5), but we assume that there is an uncertainty in the location of the δ-function and use ξ as the random variable to denote the uncertainty of the location. Then the given differential equation becomes

du/dx = δ(x - ξ), x ∈ [-1,1], u(-1,ξ) = 0  (6)

The solution u is now a function of both x and ξ. We also assume that ξ has the uniform distribution and is defined in the same interval of x, i.e., ξ ∈ [-1,1], with the probability density function (PDF) given by (1/2)χ[-1,1](ξ) where χ(ξ) is the characteristic function. The assumption of the uniform distribution yields that the associated orthogonal polynomials for ξ are the Legendre polynomials, as given in Table 1. The Legendre polynomials are defined by the solution to the Sturm-Liouville problem {(1 - x2)[Ll(x)]'}' + l(l + 1)Ll(x) = 0 with x ∈ [-1,1] with the orthogonality condition ∫-11Ll(x)Ll'(x)dx = [2/(2l + 1)]δll', where the superscript ' denotes the derivative with respect to x and δ is the Kronecker delta.
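As a quick sanity check of the Sturm-Liouville characterization above, one can verify in the Legendre coefficient basis that {(1 - x2)[Ll(x)]'}' + l(l + 1)Ll(x) vanishes identically. The following NumPy sketch is an illustration rather than part of the paper's computation; it uses the fact that 1 - x2 = (2/3)L0(x) - (2/3)L2(x).

```python
import numpy as np
from numpy.polynomial import legendre as leg

def sturm_liouville_residual(l):
    """Residual of ((1 - x^2) L_l')' + l(l+1) L_l, in the Legendre basis."""
    Ll = np.zeros(l + 1); Ll[l] = 1.0            # coefficients of L_l
    one_minus_x2 = np.array([2/3, 0.0, -2/3])    # 1 - x^2 in the Legendre basis
    inner = leg.legmul(one_minus_x2, leg.legder(Ll))
    return leg.legadd(leg.legder(inner), l * (l + 1) * Ll)

for l in range(1, 8):
    assert np.max(np.abs(sturm_liouville_residual(l))) < 1e-10
print("L_l satisfies the Sturm-Liouville equation for l = 1..7")
```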

The solution of Eq. (6) is obvious,

u(x,ξ) = H(x - ξ)  (7)

Let E(u) denote the expectation value of u, and V ar(u) the variance of u. These two quantities, E(u) and V ar(u), are all functions of x only. Let f(x) = E(u) and g(x) = V ar(u). Then we have

f(x) = E(u) = ∫-11 H(x - ξ)(1/2)dξ = ∫-1x (1/2)dξ = (1 + x)/2  (8)

Similarly,

g(x) = Var(u) = E(u2) - [E(u)]2 = (1 + x)/2 - [(1 + x)/2]2 = (1 - x2)/4  (9)
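The exact mean (1 + x)/2 and variance (1 - x2)/4 can be checked by direct Monte Carlo sampling of ξ; the sketch below is illustrative only (sample size and seed are arbitrary choices).

```python
import numpy as np

rng = np.random.default_rng(0)
xi = rng.uniform(-1.0, 1.0, size=200_000)   # xi ~ U[-1, 1]

def check_moments(x):
    u = (xi < x).astype(float)               # u(x, xi) = H(x - xi)
    mean_mc, var_mc = u.mean(), u.var()
    assert abs(mean_mc - (1 + x) / 2) < 5e-3
    assert abs(var_mc - (1 - x**2) / 4) < 5e-3

for x in (-0.5, 0.0, 0.7):
    check_moments(x)
print("Monte Carlo agrees with E(u) = (1+x)/2 and Var(u) = (1-x^2)/4")
```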

Definition: Let u(1)(x), u(2)(x), and u(x,ξ) be the Galerkin solution of Eq. (5) in the Legendre polynomials, the Galerkin projection of the exact solution H(x) of Eq. (5), and the polynomial chaos solution of Eq. (6), respectively. The superscripts (1) and (2) denote that the associated quantity corresponds to u(1)(x) and u(2)(x). For example, û(1) and û(2) are the expansion coefficients of u(1)(x) and u(2)(x), respectively.

First we consider the function u(2)(x) = ∑l=0∞ ûl(2)Ll(x), which is the direct projection of the exact solution of Eq. (5), that is, ûl(2) are given by the following equation:

H(x) = ∑l=0∞ ûl(2)Ll(x)  (10)

Lemma 1. The expansion coefficients ûl(2) in Eq. (10) are given by

û0(2) = 1/2, ûl(2) = (1/2)[Ll-1(0) - Ll+1(0)] for l = odd, and ûl(2) = 0 for l = even, l ≥ 2  (11)

Proof. By multiplying each side of Eq. (10) by Ll(x) and using the orthogonality of the Legendre polynomials, the expansion coefficients are given by

ûl(2) = [(2l + 1)/2]∫-11 H(x)Ll(x)dx = [(2l + 1)/2]∫01 Ll(x)dx  (12)

If l = 0, it is obvious that û0(2) = 1/2. For l ≠ 0, we use the following property of the Legendre polynomials [18]:

(2l + 1)Ll(x) = L'l+1(x) - L'l-1(x)  (13)

and

∫Ll(x)dx = [Ll+1(x) - Ll-1(x)]/(2l + 1)  (14)

Since ∫01 Ll(x)dx = ∫-10 Ll(x)dx for l = even and ∫01 Ll(x)dx = -∫-10 Ll(x)dx for l = odd, the above relations yield

ûl(2) = (1/2)[Ll-1(0) - Ll+1(0)]  (15)

Since Ll(0) = 0 if l = odd, we obtain Eq. (11).
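The coefficients of Lemma 1 can be cross-checked by quadrature: compute ûl(2) = [(2l + 1)/2]∫01 Ll(x)dx numerically and compare with the closed form (1/2)[Ll-1(0) - Ll+1(0)] quoted above (as reconstructed here). A Python/NumPy sketch, illustrative only:

```python
import numpy as np
from numpy.polynomial import legendre as leg

# Gauss-Legendre quadrature on [0, 1], mapped from [-1, 1].
t, w = leg.leggauss(60)
x01 = 0.5 * (t + 1.0)

def L(l, x):
    c = np.zeros(l + 1); c[l] = 1.0
    return leg.legval(x, c)

def u_hat(l):
    """(2l+1)/2 * int_{-1}^{1} H(x) L_l(x) dx = (2l+1)/2 * int_0^1 L_l(x) dx."""
    return (2 * l + 1) / 2 * np.sum(0.5 * w * L(l, x01))

assert abs(u_hat(0) - 0.5) < 1e-12
for l in range(1, 12):
    closed = 0.5 * (L(l - 1, 0.0) - L(l + 1, 0.0))   # closed form above
    assert abs(u_hat(l) - closed) < 1e-12
    if l % 2 == 0:
        assert abs(u_hat(l)) < 1e-12                  # even modes (l >= 2) vanish
print("quadrature coefficients match the closed form")
```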

Next we consider the function u(1)(x) = ∑l=0∞ ûl(1)Ll(x), which is the Galerkin solution of the differential equation, Eq. (5). By plugging u(1)(x) into the differential equation and the boundary condition, we have

∑l=0∞ ûl(1)L'l(x) = δ(x)  (16)

∑l=0∞ ûl(1)Ll(-1) = 0  (17)

Lemma 2. The coefficients ûl(1), satisfying Eqs. (16) and (17), are given by

û0(1) = 1/2, ûl(1) = (1/2)[Ll-1(0) - Ll+1(0)] for l = odd, and ûl(1) = 0 for l = even, l ≥ 2  (18)

Proof. From Eq. (13) we have

 (19)

Adding both sides of Eq. (19) all together, we have

 (20)

for n = even. Similarly, we have

 (21)

for n = odd. From Eqs. (20) and (21), we know that Ln'(x) is a linear combination of all the previous odd (if n is odd) or even (if n is even) terms with coefficients (2k + 1) for Lk(x). By plugging Eqs. (20) and (21) into Eq. (16), we have

 (22)

In Eq. (22), ... means 3L1(x) + 7L3(x) + 11L5(x) + ... + (2l - 3)Ll-2 for even l or 1 + 5L2(x) + 9L4(x) + 13L6(x) + ... + (2l - 3)Ll-2 for odd l. Multiplying each side of Eq. (22) by Lk(x) yields

 (23)

We then integrate the above equation over x and switch the left and right sides to obtain

 (24)

where we used the orthogonality condition of the Legendre polynomials. Eq. (24) also reads

 (25)

Subtracting Eq. (24) from Eq. (25) yields

 (26)

Now consider the boundary condition. Since ûk+1(1) vanishes if k is odd, the boundary condition becomes

 (27)

This completes the proof.

Finally we consider u(x,ξ), which is the solution of the stochastic differential equation, Eq. (6),

u(x,ξ) = ∑l=0∞ ûl(x)Ll(ξ)  (28)

where the expansion coefficients ûl are functions of x. Plugging u(x,ξ) into the differential equation yields

∑l=0∞ û'l(x)Ll(ξ) = δ(x - ξ)  (29)

where the superscript ' denotes the derivative with respect to x. To consider u(x,ξ), let us first consider the general case where ξ is defined in the subinterval of x.

Domain decomposition: ξ ∈ (-ϵ,ϵ). Assume that the location of the δ-function is confined in a small region ξ ∈ (-ϵ,ϵ), 0 < ϵ << 1. The solution for Eq. (6) is then given by

u(x,ξ) = H(x - ξ)  (30)

where ξ ∈ (-ϵ,ϵ). In the interval x ∈ [-ϵ,ϵ], the expectation value is

E(u) = ∫-ϵϵ H(x - ξ)[1/(2ϵ)]dξ = (x + ϵ)/(2ϵ)  (31)

Similarly, the variance is

Var(u) = (x + ϵ)/(2ϵ) - [(x + ϵ)/(2ϵ)]2 = (ϵ2 - x2)/(4ϵ2)  (32)

Thus, for a fixed ϵ, the expectation value and variance are given by

 (33)

and

 (34)

For any ϵ, we have

 (35)

 (36)

which shows the expectation value and variance are bounded although the PDF [(1/(2ϵ))χ[-ϵ,ϵ](ξ)] diverges as ϵ → 0. We know that ξ → 0 as ϵ → 0, and we obtain the expectation value and variance at x = 0 by letting ϵ → 0. Also by Eqs. (35) and (36) we have

E(u)|x=0 = 1/2, Var(u)|x=0 = 1/4  (37)

These values are the same as those for ξ ∈ [-1,1]. Thus, we know that if x → 0, the expectation value and the variance are the same for any value of ξ ∈ (-ϵ,ϵ).

The assumption that ξ ∈ (-ϵ,ϵ) breaks the original differential equation into three equations in three regions, (1) xI = [-1,-ϵ], (2) xII = (-ϵ,ϵ], and (3) xIII = (ϵ,1].

Interval I, x ∈ [-1,-ϵ]: In this interval, the δ-function is absent and the differential equation and the boundary condition are given by

du/dx = 0, u(-1,ξ) = 0,

and the solution is simply

u(x,ξ) = 0  (38)

Interval II, x ∈ (-ϵ,ϵ): In this interval, the δ-function exists and the equation is given by

du/dx = δ(x - η),

where η ∈ (-ϵ,ϵ). Then we seek a solution u(x,η) as

u(x,η) = ∑l=0∞ ûl(x)Ll(ξ)  (39)

where ξ = (η/ϵ) and ξ ∈ [-1,1].

Lemma 3. The expansion coefficients ûl(x) in Eq. (39) are given by

û0(x) = (x + ϵ)/(2ϵ), ûl(x) = (1/2)[Ll+1(x/ϵ) - Ll-1(x/ϵ)], l ≥ 1  (40)

Furthermore, the boundary value u(x = ϵ,η) is unity for any value of η, i.e.,

u(x = ϵ,η) = ∑l=0∞ ûl(ϵ)Ll(ξ) = 1  (41)

Proof. By plugging Eq. (39) into the differential equation and using the orthogonality of Ll(ξ) we obtain

 (42)

The boundary condition at x = -ϵ is obtained by the solution at x = -ϵ in interval I,

 (43)

Thus, ûl(-ϵ) = 0 for all l = 0,1,2,.... Using this boundary condition, we obtain

 (44)

if k = 0. If k ≠ 0, we have

 (45)

where we used Eq. (14). The boundary value of u(x,η) at x = ϵ is

 (46)

From lemma 3, we know that the mean value of u(x,η) in this interval is given by

 (47)

which is û0(x). If x → 0, we confirm that

Interval III, x ∈ (ϵ,1]: Since there is no δ-function in this interval, using the boundary value u(ϵ,η) = 1, the solution in x ∈ (ϵ,1] is given by u(x) = 1. It is easy to show that if ϵ → 0, then we have

 (48)

Using lemma 3, we have the following corollary for ξ ∈ [-1,1].

Corollary 4. The expansion coefficients ûl(x) are given by

û0(x) = (1/2)(x + 1), ûl(x) = (1/2)[Ll+1(x) - Ll-1(x)], l ≥ 1  (49)

and

 (50)

Proof. From lemma 3, for ϵ → 1, we have Eqs. (49) and (50). Furthermore, by plugging ξ = 0 and the equations in Eq. (49) into Eq. (28), we obtain

 (51)

It is a simple exercise to show that Eq. (51) becomes

Here note that the first coefficient, û0(x) = (1/2)(x + 1), is the same as the expectation value of u(x,ξ) in Eq. (8).

Remark: Equations (50) and (51) are equivalent. They may, however, become different if they are truncated with the finite N in their given forms. For Eq. (51), since Ll+1(x) = Ll-1(x) at x = ±1 if l is even, we know that at x = ±1,

which are the boundary values and they are determined regardless of how many terms are used in the series. For Eq. (50), since u(x,0) = 1 at x = 1, and u(x,0) = 0 at x = -1, the following should be

which is only true if N →∞. Thus we use Eq. (51) for the computation in the following sections.

Figure 1 shows the expansion coefficients ûl(x), Eq. (49). The top figure shows ûl(x) for l = 0,...,9 and the bottom figure for l = 10,...,40.
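The coefficients ûl(x) = (1/2)[Ll+1(x) - Ll-1(x)] for l ≥ 1 can be verified against the defining integral [(2l + 1)/2]∫-1x Ll(ξ)dξ. A short NumPy sketch (illustrative, not from the paper):

```python
import numpy as np
from numpy.polynomial import legendre as leg

def L(l, x):
    c = np.zeros(l + 1); c[l] = 1.0
    return leg.legval(x, c)

t, w = leg.leggauss(80)   # Gauss-Legendre nodes/weights on [-1, 1]

def u_hat(l, x):
    """(2l+1)/2 * int_{-1}^{x} L_l(xi) d(xi), via quadrature mapped to [-1, x]."""
    xi = 0.5 * (x + 1) * t + 0.5 * (x - 1)      # map [-1,1] -> [-1, x]
    return (2 * l + 1) / 2 * 0.5 * (x + 1) * np.sum(w * L(l, xi))

for x in (-0.6, 0.1, 0.8):
    for l in range(1, 10):
        closed = 0.5 * (L(l + 1, x) - L(l - 1, x))   # Eq. (49), l >= 1
        assert abs(u_hat(l, x) - closed) < 1e-12
print("u_hat_l(x) = (1/2)[L_{l+1}(x) - L_{l-1}(x)] verified")
```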

Theorem 5. The three solutions coincide, i.e., u(1)(x) = u(2)(x) = u(x,ξ = 0).

Furthermore, uN(x,ξ) converges to u(x,ξ) at ξ = 0, i.e.,

limN→∞ uN(x,ξ = 0) = u(x,ξ = 0)  (52)

Proof. From lemmas 1 and 2 and corollary 4, we know that all the coefficients of u(1)(x), u(2)(x), and u(x,ξ = 0) are the same.

Using the recurrence relation (n + 1)Ln+1(x) = (2n + 1)xLn(x) - nLn-1(x), we have

 (53)

which yields

 (54)

Then we let k(x) be defined as

 (55)

Using the Stirling formula n! ~ √(2πn)(n/e)n and (2n)! ~ √(4πn)(2n/e)2n, we have

 (56)

Thus, the following series converges

 (57)

and we have

 (58)

From theorem 5, we know that the stochastic solution u(x,ξ) matches the deterministic solution well, particularly if the singularity is located at x = 0.

Remark: Theorem 5 can be extended to the more general case that

du/dx = δ(x - c), x ∈ [-1,1], u(-1) = 0,

where c is a real constant, c ∈ [-1,1]. Then the solutions u(1)(x;c) and u(2)(x;c) are the same as u(x,ξ) for ξ = c.

This can be easily shown using the properties of the Legendre polynomials. First, for H(x - c) = ∑l=0N ûl(2)Ll(x), the coefficients are given by

 (59)

By the Galerkin projection, we get similar results as lemma 2,

 (60)

for any l > 0. To prove Eqs. (59) and (60) are equal, we use the identity formula (14). Setting x = 1 and x = c in Eq. (14), respectively, we have

 (61)

 (62)

Subtracting Eq. (62) from Eq. (61) yields the equation implying that Eqs. (59) and (60) are equal. Also, the boundary condition ∑l=0N ûl(1)Ll(-1) = 0 is obtained by plugging Eq. (60) into this formula. The coefficients from the polynomial chaos method for any c are obtained in a similar way as in corollary 4 by just replacing 0 with c in Eq. (50). After some simple algebraic calculations, we can show that the coefficients by the polynomial chaos method are equal to those by the previous two methods.

Now we consider the convergence of u(x,ξ) for any ξ. That is, we want to show

limN→∞ uN(x,ξ) = u(x,ξ)  (63)

Using corollary 4 we have

|u(x,ξ) - uN(x,ξ)| ≤ (1/2)∑l=N+1∞ |Ll+1(x) - Ll-1(x)|  (64)

where we used |Ll(ξ)| ≤ 1. Here we do not provide the convergence analytically, but instead we show the numerical result. Define R1(n),

and the remainder

For the numerical calculation of R1(n) and R2(n,N), we use N = 6000. Figure 2 shows the decay of R1(n) (blue solid line) and R2(n,N) (black solid line) with n in logarithmic scale. The figure shows that R1(n) decays at a rate of about ~ n-4.95. The red line in the figure is a reference line which decays ~ n-4.95. With this decay rate, we know that the series ∑l=1∞ |Ll+1(x) - Ll-1(x)| will converge. Thus, the remainder R2(n,N) will decay as n →∞ and N →∞ for n < N, i.e.,

The black solid line shows the decay of R2(n,N) with N = 6000. The figure implies that due to the decay property of R1(n), the remainder R2(n,∞) will also decay to zero as n →∞, but the decay rate is only algebraic. That is, we know that uN(x,ξ) converges to u(x,ξ), but convergence is slow because of the existence of the discontinuity at x = ξ.

## 3. Gibbs phenomenon

The solutions obtained in the previous section yield the Gibbs phenomenon. The Gibbs phenomenon is commonly found in high-order approximations of discontinuous functions with the spectral method [19, 20]. The exact solution of Eq. (5) is the Heaviside function with H(0) = 1 and limx→0-H(x) = 0. As we already saw, all solutions obtained in the previous section have the expectation value of 1/2 at x = 0. Thus all solutions converge to H(x) at every point x except x = 0. This appears as the Gibbs oscillations in the partial sum of each solution near x = 0.

Figure 3 shows the partial sum solution of u(x,ξ = 0) (left) and u(x,ξ) (right) for N = 40. The left figure shows the solution when ξ = 0. As shown in the figure, the solution is oscillatory near the discontinuity x = 0. The right figure shows the collection of solutions for every ξ and x. As shown in the figure, u(x,ξ) are oscillatory near x = ξ. Figure 4 shows the variance and the mean of u(x,ξ). The top figure shows the computed variance of u(x,ξ) with 501 points of x and ξ for N = 10 (blue solid line), N = 20 (green), N = 40 (purple), and the theoretical variance of u(x,ξ),(1 - x2)/4 (red). As the figure shows, the variance approaches the exact variance as N increases, but the convergence is slow. The slow convergence is due to the fact that the variance is computed using every term in the series of the solution. As the series converges slowly, the variance also converges slowly. The bottom figure shows the error between the computed mean of u(x,ξ) and the exact mean (1 + x)/2 in logarithmic scale using 5001 uniform points for N = 4,6,10. For the numerical integration, we used the Simpson’s rule. The figure shows that the pointwise errors of the mean value are close to machine accuracy for the small value of N. This is because the first mode is the mean and the rest of the terms are canceled out. As N increases, the pointwise errors increase, which results from the incomplete numerical cancellations of high modes due to round-off errors.
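The Gibbs oscillations described above are easy to reproduce: truncate the Legendre expansion of H(x) and observe the persistent overshoot near x = 0, together with the exact value 1/2 at x = 0. A minimal NumPy sketch (an illustration under the coefficient formula quoted in Section 2):

```python
import numpy as np
from numpy.polynomial import legendre as leg

def heaviside_coeffs(N):
    """Legendre coefficients of H(x): 1/2, then (1/2)[L_{l-1}(0) - L_{l+1}(0)]."""
    c = np.zeros(N + 1); c[0] = 0.5
    for l in range(1, N + 1):
        e = np.zeros(l + 2)
        e[l - 1], e[l + 1] = 1.0, -1.0
        c[l] = 0.5 * leg.legval(0.0, e)   # (1/2)[L_{l-1}(0) - L_{l+1}(0)]
    return c

x = np.linspace(-1, 1, 2001)
for N in (20, 40, 80):
    uN = leg.legval(x, heaviside_coeffs(N))
    print(N, uN.max())   # the overshoot above 1 persists as N grows (Gibbs)
```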

## 4. High-order moments of u(x,ξ)

With the uniform distribution, it is easy to show that the variance is given by

g(x) = ∑l=1∞ ûl2(x)/(2l + 1)  (65)

where one should note that the index l runs from 1. In general, all the terms of ûl(x) are involved for the computation of the variance, as shown in the above equation and Fig. 4. It is, however, interesting to observe that the variance in our case is simply given by the second mode of ûl(x),

g(x) = -(1/3)û1(x)  (66)

That is, as the mean of u(x,ξ), the variance can be determined exactly once û1(x) is found. This implies that the slow convergence of the variance found in Fig. 4 can be resolved as the variance is obtained instantly. To understand this interesting aspect, we need to show the following:

∑l=1∞ ul2(x)/(2l + 1) = -(1/3)u1(x)  (67)

where ul(x) = (1/2)[Ll+1(x) -Ll-1(x)]. To prove Eq. (67), first we plug Eq. (14) into the left-hand side (LHS) of Eq. (67). Then the LHS becomes

For the proof we use the well-known property that the Legendre polynomials are complete, that is, {√[(2l + 1)/2]Ll(x)}l=0 are complete and orthonormal [21]. The completeness condition yields

Thus,

Using L0(x) = 1 we obtain [21]

This completes the proof. This special result is due to the following relation:

E[un(x,ξ)] = E[u(x,ξ)], n = 1,2,...  (68)

Since the exact solution is H(x - ξ), it is simple to show that

The fact that the mean of any power of u(x,ξ) is the same as the mean of u(x,ξ) yields the following property. Let ū = E[u(x,ξ)] and v = u - ū. Then for n = 0,1,..., we have

E(vn) = ū(1 - ū)n + (-1)n(1 - ū)ūn  (69)

It is easy to show Eq. (69),

 (70)

where we used Eq. (68) and v = u - ū. Equation (69) yields interesting results about the high-order moments of u. For example,

E(v2) = ū(1 - ū) = (1 - x2)/4, E(v3) = ū(1 - ū)(1 - 2ū)  (71)

The second moment is the variance and the third moment is related to the skewness. Thus, we know that the first three moments (the mean, variance, and skewness) are obtained exactly by the first three modes of ul(x) for our case.
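Since u(x,ξ) takes only the values 0 and 1, it is a Bernoulli variable with p = (1 + x)/2, and its central moments have an elementary closed form. The sketch below (illustrative Python/NumPy) checks this against Monte Carlo sampling and against the relation Var(u) = -(1/3)û1(x) with û1(x) = (3/4)(x2 - 1), which follows from the coefficients of Section 2 (the exact form of the relation is our reconstruction).

```python
import numpy as np

rng = np.random.default_rng(1)
xi = rng.uniform(-1, 1, 400_000)

def central_moment_mc(x, n):
    u = (xi < x).astype(float)          # u = H(x - xi)
    return np.mean((u - u.mean()) ** n)

def central_moment_exact(x, n):
    # E[(u - ubar)^n] for Bernoulli u with p = P(u = 1) = (1 + x)/2
    p = (1 + x) / 2
    return p * (1 - p) ** n + (1 - p) * (-p) ** n

x = 0.4
for n in (2, 3, 4):
    assert abs(central_moment_mc(x, n) - central_moment_exact(x, n)) < 2e-3

# variance from the second expansion coefficient: Var = -(1/3) u_hat_1(x)
u_hat_1 = (3 / 4) * (x**2 - 1)
assert abs(central_moment_exact(x, 2) - (-(1 / 3) * u_hat_1)) < 1e-12
print("central moments match; Var(u) = -(1/3) u_hat_1(x)")
```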

Figure 5 shows E[(u - ū)n] with different n = 1,...,50. The left figure shows E[(u - ū)n] for n = 1,...,20 and the right for n = 21,...,50. If n = 1, E[(u - ū)n] = 0. As n increases, the maximum value of E[(u - ū)n] decreases in the figures. Note the different scale in the left and right figures.

## 5. Time-dependent linear advection equation with uncertainty

We consider the time-dependent problem with a singular source term

ut + ux = δ(x), x ∈ [-1,1], t > 0  (72)

If the boundary condition is homogeneous, i.e., u(-1,t) = 0, the solution u(x,t) goes to the steady-state solution which is the Heaviside function, H(x). We consider the case that the location of the singular source has an uncertainty ξ as in the previous sections, that is,

ut + ux = δ(x - ξ)  (73)

where u = u(x,t,ξ). We assume that ξ ∈ (-1,1) with the uniform PDF and

u(x,t,ξ) = ∑l=0∞ ûl(x,t)Ll(ξ)  (74)

By plugging Eq. (74) into Eq. (73) and using the orthogonality condition of the Legendre polynomials, we obtain

∂ûl/∂t + ∂ûl/∂x = [(2l + 1)/2]Ll(x)  (75)

If l = 0, we have

∂û0/∂t + ∂û0/∂x = 1/2  (76)

Then the solution of Eq. (76) is given by

 (77)

where C is the integration constant and x0 = x - t. To determine the integration constant C, we use the given boundary and initial conditions. From the boundary condition u(-1,t,ξ) = h(t), we have

 (78)

Similarly, using the given initial condition

we obtain

 (79)

Using Eqs. (78) and (79), we obtain

 (80)

If l ≠ 0, by using the orthogonality condition we obtain

 (81)

where we used Eqs. (18) and (79). Thus, the general solution of the stochastic equation (73) is given by

 (82)

If g(x) = 0 = h(t), then we obtain

This is the Legendre expansion of H(x), and we know that u(x,t) → H(x) as t →∞.

To consider the numerical approximation of the solution, we use the Legendre polynomials both in x and ξ,

 (83)

We seek the truncated sum of Eq. (83) for the numerical solution

 (84)

For simplicity, we assume that N = M. Multiplying each side of Eq. (75) by Ll'(x), l' = 0,...,N and using the integration by parts, we obtain

 (85)

where we use Ll'(1) = 1, ∀l'. We then plug the following relation into Eq. (85),

For the given l, using the boundary condition and the properties of the Legendre polynomials, we obtain

 (86)

Define the column vectors êl and 1, whose ith elements are δil and (-1)i, respectively, and define the matrix b2l, whose ith column has the element δil for i = 0,...,N. Also, define the matrices A, B, and C, whose ij elements are Aij = (2i + 1)/2, Bij = [2/(2i + 1)]δij, and Cij = ∫-11 L'i(x)Lj(x)dx, for i,j = 0,...,N, respectively. Then for the given l, Eq. (86) becomes

 (87)

Equation (87) is solved numerically using the initial condition

 (88)

For the numerical experiment, we use the following initial and boundary conditions:

With these conditions, the mean f(x,t) and the variance g(x,t) of the exact solution u(x,t,ξ) are given by

 (89)

The variance is the same as the variance of Eq. (9), which is because the homogeneous solution is independent of the random variable ξ. For the time integration we use the third-order Runge-Kutta total variation diminishing (TVD) scheme [22]. The mean and the variance at t are computed by

Figure 6 shows the solution for ξ = 0 (left figure) and the variance (right) at t = 10. As shown in the right figure, convergence of variance is slow due to the Gibbs phenomenon.
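The third-order TVD Runge-Kutta scheme of Shu [22] used for the time integration can be sketched in a few lines; the version below uses the standard Shu-Osher convex-combination form and is validated on the scalar test problem ut = -u (an illustrative test, not the paper's PDE).

```python
import numpy as np

def rk3_tvd_step(u, dt, L):
    """One step of Shu's third-order TVD Runge-Kutta scheme for u_t = L(u)."""
    u1 = u + dt * L(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * L(u2))

# sanity check on u_t = -u, whose exact solution is e^{-t}
u, dt = 1.0, 0.01
for _ in range(100):
    u = rk3_tvd_step(u, dt, lambda v: -v)
assert abs(u - np.exp(-1.0)) < 1e-6   # third-order accurate
print(u)                               # ~ 0.36788
```

In the paper's setting, `L(u)` would be the spatial operator of Eq. (87) acting on the coefficient vectors.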

## 6. Direct projection collocation methods

In the previous sections, we used the Galerkin approach to obtain the solution of the differential equations with the random variable ξ. The Galerkin approach yields the Gibbs phenomenon, as shown in the previous sections. In this section, we solve the same equations using the collocation method based on the direct projection approach for the singular source term [23]. The direct projection approach uses the direct derivative of the Heaviside function for the singular source term on the collocation points. The direct collocation method was applied to several applications [23-25]. The main idea of the direct projection approach is to project the Heaviside function H(x) to the collocation points using the spectral derivative matrix DN, that is,

δN(x) = DN HN(x),

where δN(x) is the spectral approximation of the δ-function on the collocation points with DN the derivative matrix and HN(x) the Heaviside function on the collocation points. Several spectral derivative matrices related to the orthogonal polynomials can be found in [19].
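A minimal sketch of this idea: build a spectral differentiation matrix on Chebyshev Gauss-Lobatto points (the formula below is the standard one with the negative-sum trick for the diagonal; variable names are ours) and apply it to the sampled Heaviside function to obtain δN = DN HN. The convention H = 1 at the jump node is an arbitrary choice here.

```python
import numpy as np

def cheb_diff_matrix(M):
    """Chebyshev Gauss-Lobatto differentiation matrix on x_i = -cos(pi*i/M)."""
    x = -np.cos(np.pi * np.arange(M + 1) / M)
    c = np.ones(M + 1); c[0] = c[-1] = 2.0
    i = np.arange(M + 1)
    C = (c[:, None] / c[None, :]) * (-1.0) ** (i[:, None] + i[None, :])
    D = C / (x[:, None] - x[None, :] + np.eye(M + 1))
    D -= np.diag(D.sum(axis=1))    # negative-sum trick for the diagonal
    return D, x

M = 16
D, x = cheb_diff_matrix(M)
assert np.allclose(D @ x**3, 3 * x**2)   # exact for polynomials of degree <= M

# Direct projection of the singular source: delta_N = D @ H_N, where H_N is
# the Heaviside function sampled on the collocation points (jump at xi = 0).
H = (x >= 0).astype(float)
delta_N = D @ H                           # spectral approximation of delta(x)
print(delta_N)
```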

### 6.1 A Simple First-Order Differential Equation

Consider the following differential equation with the random variable ξ,

du/dx = δ(x - ξ), x ∈ [-1,1], u(-1,ξ) = 0.

Let UN be the approximation of u on N + 1 collocation points for ξ, {ξl}l=0N. The collocation method yields the approximation UN(x,ξ) in the Legendre polynomials as in the previous sections,

 (90)

Here we assume that we also seek UN(x,ξ) on the collocation points for x, {xl}l=0M. That is, the spectral method is applied for both x and ξ directions, and the solution UN(x,ξ) is defined on the two-dimensional grid. By plugging UN(x,ξ) into the differential equation, we obtain

 (91)

where DM is the spectral derivative matrix for the variable x on M + 1 collocation points associated with some orthogonal polynomials such as Chebyshev or Legendre polynomials. For the singular source term in the right-hand side (RHS) of the above equation, the direct projection method uses

δM(x - ξ) = DM HM(x - ξ),

where HM is the Heaviside function on the collocation points which has the jump at x = ξ. Then Eq. (91) becomes

 (92)

To solve the differential equation, we first use the boundary condition, which is

UN(-1,ξ) = 0  (93)

From Eqs. (92) and (93) we obtain

where M is the submatrix of DM, which is obtained by subtracting the boundary column and row from DM. The RHS 0 denotes a null vector, and UN(x,ξ) in the LHS is a solution vector for a certain ξ. Since M is nonsingular [19], we obtain

UN(x,ξ) = HM(x - ξ)  (94)

which is the same as the exact solution, and we know that such solution is Gibbs-free on the collocation points.

Remark: We note that the interpolation based on the solution at the collocation points yields the Gibbs oscillations, but the solution is Gibbs-free on the collocation points.

### 6.2 A Simple Time-Dependent Problem

Now we consider the time-dependent problem with the collocation method

 (95)

where U = U(x,t,ξ) and Dx denotes the derivative operator with respect to x. U is defined in the same way,

 (96)

For the steady-state problem, using the following boundary condition,

 (97)

the given differential equation becomes, as t →∞,

 (98)

This steady-state solution becomes U(x,t,ξ) → H(x - ξ), as shown in the previous section.

For the numerical experiment we use the Chebyshev polynomials for x and the Legendre polynomials for ξ. As in the previous section, we use the third-order Runge-Kutta TVD scheme for the time integration [22]. For the initial and boundary conditions we use the following:

 (99)

where xi,i = 0,...,M are the Chebyshev Gauss-Lobatto collocation points, xi = -cos(πi/M), i = 0,...,M. With these initial and boundary conditions, the exact solution u(x,ξ) is given by

 (100)

where the first term is the homogeneous solution and the second term is the particular solution due to the singular source term.

Figure 7 shows the collocation solution for u(x,ξ). Figure 7a shows the solution when ξ = 0, and Figure 7b shows the collection of solutions with various ξ. Figure 7a shows that the solution is free of the Gibbs phenomenon, without any oscillations on the collocation points. Figure 7b shows that along the line x = ξ, the jump of each solution is sharp, without any Gibbs oscillations. For these figures, we use M = N + 1 and N = 81. Figure 7c shows the variance with N = 41 and M = 21. The variance from the numerical solution is the blue line with the symbol. As shown in the figure, the variance is computed more accurately than the result in Fig. 6. The numerical results, however, show that the degree of accuracy is similar to that of the numerical simulation with the Galerkin approach, although the Gibbs oscillations are not seen on the collocation points. This result is somewhat different from what the authors expected, partly because the collocation approach has an ambiguity in defining the location of the δ-function and the Heaviside function. If the δ-function is located at a certain collocation point, the error does not decay at that point because the actual location of the δ-function with our collocation method exists off the collocation points. This issue will be further investigated in our future work.

## 7. Conclusion

In this paper we considered simple differential equations with a singular source term. For the singular source term, we used the Dirac δ-function. Due to the uncertainty of the location of the singular source term, we introduced a random variable and used the generalized polynomial chaos method to find the general solution of the differential equation under the uncertainty. For simplicity, we used the assumption that the uncertainty is associated with the uniform distribution. Based on this assumption, we derived the general solution of the differential equation in the Legendre polynomials using the Galerkin method, as well as the expectation value and variance of the solution. For this particular case, we showed that the second- and third-order moments as well as the mean can be computed exactly using the first three expansion coefficients. The same technique was applied to the simple time-dependent problem. We showed that the Gibbs phenomenon appears in the polynomial chaos solution and consequently convergence is slow. As a preliminary work dealing with the Gibbs phenomenon in the solution, we considered the direct collocation method for the polynomial chaos solution. We showed that the direct collocation method can avoid the Gibbs phenomenon for the simple differential equations considered in this paper. Although the Gibbs oscillations are much reduced, the convergence of the variance is of about the same order as with the Galerkin approach, which will be further investigated in our future work. The assumption of uniform distribution yields relatively easy analysis. In our future work we will consider more realistic cases with different distributions for more general types of differential equations with the singular source term.
Thus, our future work will include the polynomial chaos method for more types of uncertainty variables associated with the singular source term and will further investigate the collocation method for the polynomial chaos solution and the Gibbs phenomenon with the singular source term.

## Acknowledgments

J.H.J. is supported by the National Science Foundation under grant no. DMS-0608844. The authors thank Dongbin Xiu for his help on the initial setup of the problem and his valuable feedback on our manuscript.

## References

1. Engquist, B., Tornberg, A.-K., and Tsai, R., Discretization of Dirac delta function in level set methods, J. Comput. Phys., 207:28-51, 2005.

2. Fei, Z., Kivshar, Y. S., and Vázquez, L., Resonant kink-impurity interactions in the sine-Gordon model, Phys. Rev. A, 45(8):6019-6030, 1992.

3. Goodman, R. H. and Haberman, R., Chaotic scattering and the n-bounce resonance in solitary-wave interactions, Phys. Rev. Lett., 98(10):104103-1-104103-4, 2007.

4. Jacobs, G. and Don, W.-S., A high-order WENO-Z finite difference based particle-source-in-cell method for computation of particle-laden flows with shocks, J. Comput. Phys., 228:1365-1379, 2008.

5. Lousto, C. O. and Price, R. H., Head-on collisions of black holes: The particle limit, Phys. Rev. D, 55:2124-2138, 1997.

6. Wiener, N., The homogeneous chaos, Am. J. Math., 60:897-936, 1938.

7. Gottlieb, D. and Xiu, D., Galerkin method for wave equations with uncertain coefficients, Comm. Comput. Phys., 3:505-518, 2008.

8. Xiu, D., Fast numerical methods for stochastic computations: A review, Commun. Comput. Phys., 5(2-4):242-272, 2009.

9. Xiu, D. and Hesthaven, J., High-order collocation methods for differential equations with random inputs, SIAM J. Sci. Comput., 27(3):1118-1139, 2005.

10. Xiu, D. and Karniadakis, G., Modeling uncertainty in steady state diffusion problems via generalized polynomial chaos, Comput. Meth. Appl. Mech. Eng., 191:4927-4948, 2002.

11. Xiu, D. and Karniadakis, G., The Wiener-Askey polynomial chaos for stochastic differential equations, SIAM J. Sci. Comput., 24:619-644, 2002.

12. Xiu, D. and Karniadakis, G., Modeling uncertainty in flow simulations via generalized polynomial chaos, J. Comput. Phys., 187:137-167, 2003.

13. Xiu, D. and Karniadakis, G., Supersensitivity due to uncertain boundary conditions, Int. J. Numer. Meth. Eng., 61(12):2114-2138, 2004.

14. Xiu, D., Lucor, D., Su, C., and Karniadakis, G., Stochastic modeling of flow structure interactions using generalized polynomial chaos, J. Fluids Eng., 124:51-59, 2002.

15. Ghanem, R. G. and Spanos, P., Stochastic Finite Elements: A Spectral Approach, Springer, New York, 1991.

16. Le Maître, O. P. and Knio, O. M., Spectral Methods for Uncertainty Quantification, Springer, New York, 2010.

17. Xiu, D., Numerical Methods for Stochastic Computations: A Spectral Method Approach, Princeton UP, Princeton, 2010.

18. Bateman, H., Higher Transcendental Functions, Vol. 2 (edited by A. Erdélyi, W. Magnus, F. Oberhettinger, and F. G. Tricomi) McGraw-Hill, New York, 1953.

19. Hesthaven, J. S., Gottlieb, S., and Gottlieb, D., Spectral Methods for Time-Dependent Problems, Cambridge UP, Cambridge, 2007.

20. Gottlieb, D. and Orszag, S. A., Numerical Analysis of Spectral Methods: Theory and Applications, SIAM, Philadelphia, 1977.

21. Sansone, G., Orthogonal Functions, A. H. Diamond (Trans.), Robert E. Krieger Publishing Co., Huntington, NY, 1997.

22. Shu, C.-W., Total variation diminishing Runge-Kutta schemes, SIAM J. Sci. Stat. Comp., 9:1079-1084, 1988.

23. Jung, J.-H., A note on the spectral collocation approximation of differential equations with singular sources in one dimension, J. Sci. Comput., 39(1):49-66, 2009.

24. Jung, J.-H. and Don, W.-S., Collocation methods for hyperbolic partial differential equations with singular sources, Adv. Appl. Math. Mech., 1(6):769-780, 2009.

25. Jung, J.-H., Khanna, G., and Nagle, I., A spectral collocation approximation for the radial-infall of a compact object into a Schwarzschild black hole, Int. J. Mod. Phys. C, 20(11):1827-1848, 2009.