ISSN (Print): 2152-5080
Open Access
International Journal for Uncertainty Quantification
DOI: 10.1615/Int.J.UncertaintyQuantification.v1.i1.50
pages 77-98

ON A POLYNOMIAL CHAOS METHOD FOR DIFFERENTIAL EQUATIONS WITH SINGULAR SOURCES

Abstract: Singular source terms in the differential equation represented by the Dirac δ-function play a crucial role in determining the global solution. Due to the singular feature of the δ-function, physical parameters associated with the δ-function are highly sensitive to random and measurement errors, which makes the uncertainty analysis necessary. In this paper we use the generalized polynomial chaos method to derive the general solution of the differential equation under uncertainties associated with the δ-function. For simplicity, we assume a uniform distribution of the random variable and use the Legendre polynomials to expand the solution in the random space. A simple differential equation with a singular source term is considered, and the polynomial chaos solution is derived. The Gibbs phenomenon and the convergence of high-order moments are discussed. We also consider a direct collocation method which can avoid the Gibbs oscillations on the collocation points and accordingly enhance the accuracy.

KEYWORDS: generalized polynomial chaos, stochastic Galerkin method, singular source, Dirac δ-function, Gibbs phenomenon
1. Introduction

Differential equations with singular source terms are commonly found in various areas of application [1-5]. Singular source terms are defined in a highly localized regime and play a crucial role in determining the global solution of the given differential equations. It is important to capture properly such small-scale phenomena induced by the local singular source terms and to understand the interaction between the small- and large-scale solution dynamics. Singular source terms are mathematically represented by the Dirac δ-function, δ(x − c), and its derivative(s), defined in the distribution sense against a function f(x) that is defined at x = c such that

∫_{−∞}^{∞} f(x) δ(x − c) dx = f(c).
The derivatives of the δ-function are defined in a similar way for a function f(x) whose derivatives are defined at x = c, such that

∫_{−∞}^{∞} f(x) δ′(x − c) dx = −f′(c),
where the superscript ′ denotes the derivative with respect to x. Although singular sources are defined in a compact mathematical form, various uncertainties are easily involved when defining them physically. For example, it is not easy to pinpoint the location of the singular source term. The detection of the singular object is based on physical measurement, and such measurement has errors due to the locality of the singularity. Thus a realistic model of the singular source term should include the uncertainty of the location of the singular source, which can be introduced through a new random variable ξ in the physical space where the δ-function is defined, such as δ(x − ξ), where ξ is a random variable replacing c in δ(x − c). Another type of uncertainty can be introduced for the amplitude, for which one can rewrite the δ-function in the form ηδ(x − c), where η is a random variable. If η → 0, then the singular source term vanishes. One can consider other types of uncertainties for the δ-function besides the location and the amplitude. In this paper we consider the case where the uncertainty exists in the location. In many cases solutions of differential equations with singular sources are nonsmooth, singular, or discontinuous. For example, in nonlinear optics a defect in the optical medium is modeled by a singular source, and such a singular source term acts as a potential around which the input signals yield nonlinear reflection and scattering phenomena. These nonlinear phenomena have been investigated using nonlinear partial differential equations (PDEs), including the sine-Gordon equation
and the nonlinear Schrödinger equation
Previous research shows that the global solutions of PDEs such as those above are sensitive to the singular potential term and that the mathematical structure of the solution dynamics is rich and complex [2, 3]. The sensitivity of the global solution to the singular source term is amplified if uncertainties are involved, which makes the global solution dynamics more complex. No significant research has been conducted on the uncertainty analysis of solutions of such singularly perturbed differential equations. In this paper, as preliminary research, we use the polynomial chaos method to analyze the solution of differential equations with a singular source term. The polynomial chaos method was introduced by Wiener [6] and has recently been developed much further by Xiu and coworkers [7-14]. The polynomial chaos method with the spectral method approach has gained great popularity [15-17] (see Xiu's recent book and references therein [17]). The polynomial chaos method seeks the solution in a higher-dimensional polynomial space by introducing a random variable associated with the uncertainty. The method then expands the solution as a polynomial using orthogonal polynomials [16, 17]. The orthogonal polynomials are determined by the distribution of the random variables considered. Different distributions and the corresponding orthogonal polynomials are given in Table 1 [11, 17]. In this paper we consider the uniform distribution and use the Legendre polynomials for simplicity.

TABLE 1: Continuous probability density functions and the associated orthogonal polynomials [11, 17].
Gaussian — Hermite polynomials, support (−∞, ∞)
Gamma — Laguerre polynomials, support [0, ∞)
Beta — Jacobi polynomials, support [a, b]
Uniform — Legendre polynomials, support [a, b]
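The uniform-Legendre pairing in Table 1 can be illustrated with a short numerical check (a sketch of ours, not from the paper; it uses numpy and Gauss-Legendre quadrature, and the degree cutoffs are arbitrary): the Legendre polynomials are orthogonal with respect to the uniform density 1/2 on [−1, 1], with E[L_l(ξ)L_m(ξ)] = δ_lm/(2l + 1).

```python
import numpy as np
from numpy.polynomial.legendre import Legendre, leggauss

# Gauss-Legendre quadrature is exact for polynomials of degree <= 2n - 1,
# so n = 16 nodes integrate products L_l * L_m up to l, m = 7 exactly.
nodes, weights = leggauss(16)

def L(l, x):
    """Legendre polynomial L_l evaluated at x."""
    return Legendre.basis(l)(x)

# E[L_l(xi) L_m(xi)] under the uniform PDF 1/2 on [-1, 1]
# should equal delta_{lm} / (2l + 1).
for l in range(6):
    for m in range(6):
        val = 0.5 * np.sum(weights * L(l, nodes) * L(m, nodes))
        target = 1.0 / (2 * l + 1) if l == m else 0.0
        assert abs(val - target) < 1e-12
```

The factor 1/2 in front of the quadrature sum is the uniform probability density; this is why the normalization in the text reads 2/(2l + 1) without it.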
This paper is composed of the following sections. In Section 2 we consider a simple differential equation with a singular source term; the uncertainty is in the location of the singular source term, and the random variable has a uniform distribution. We derive the solution using the Galerkin method and provide some convergence results. We also consider the case where the uncertainty is confined to a local regime; this assumption yields a domain decomposition method. In Section 3 we discuss the Gibbs phenomenon, which exists in the solutions obtained in Section 2. In Section 4 discussions of high-order moments are given. In Section 5 we consider a simple linear advection equation with the singular source term; a similar solution is obtained for the time-dependent problem. In Section 6 we consider a collocation method to solve the same time-dependent problem considered in Section 5. The singular source term is directly projected to the collocation space, and as a result the direct projection method removes the Gibbs phenomenon in the solution. In Section 7 we provide a brief summary and remarks on future work.

2. A first-order differential equation with uniform distribution, ξ ∈ [−1,1]

First we consider the following simple differential equation for the real-valued function u(x),
The exact solution is simply given by the Heaviside function H(x), which is the integral of the right-hand side of Eq. (5), δ(x). The singularity is located at the origin for Eq. (5), but we assume that there is an uncertainty in the location of the δ-function and use ξ as the random variable denoting the uncertainty of the location. Then the given differential equation becomes
The solution u is now a function of both x and ξ. We also assume that ξ has the uniform distribution and is defined on the same interval as x, i.e., ξ ∈ [−1,1], with the probability density function (PDF) given by (1/2)χ_{[−1,1]}(ξ), where χ(ξ) is the characteristic function. The assumption of the uniform distribution implies that the associated orthogonal polynomials for ξ are the Legendre polynomials, as given in Table 1. The Legendre polynomials are defined by the solutions of the Sturm-Liouville problem {(1 − x^{2})[L_{l}(x)]′}′ + l(l + 1)L_{l}(x) = 0, x ∈ [−1,1], with the orthogonality condition ∫_{−1}^{1}L_{l}(x)L_{l′}(x)dx = [2/(2l + 1)]δ_{ll′}, where the superscript ′ denotes the derivative with respect to x and δ_{ll′} is the Kronecker delta. The solution of Eq. (6) is obvious,

u(x,ξ) = H(x − ξ).   (7)
Let E(u) denote the expectation value of u and Var(u) the variance of u. These two quantities, E(u) and Var(u), are functions of x only. Let f(x) = E(u) and g(x) = Var(u). Then we have

f(x) = ∫_{−1}^{1} H(x − ξ)(1/2) dξ = (1 + x)/2.   (8)
Similarly,

g(x) = E(u^{2}) − [E(u)]^{2} = (1 + x)/2 − [(1 + x)/2]^{2} = (1 − x^{2})/4.   (9)
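As a numerical sanity check of the mean (1 + x)/2 and variance (1 − x²)/4 (a sketch of ours, assuming the exact solution u(x,ξ) = H(x − ξ) and approximating the ξ-average by a midpoint rule; the grid size is arbitrary):

```python
import numpy as np

# Midpoint rule over the random variable xi, uniform on [-1, 1] with PDF 1/2.
M = 200000
xi = -1.0 + (np.arange(M) + 0.5) * (2.0 / M)

def mean_var(x):
    """Mean and variance of u(x, xi) = H(x - xi) over uniform xi."""
    u = (xi < x).astype(float)        # H(x - xi) = 1 where xi < x
    m = u.mean()                      # approximates E(u) = (1 + x)/2
    return m, ((u - m) ** 2).mean()   # approximates Var(u) = (1 - x^2)/4

for x in (-0.5, 0.0, 0.7):
    m, v = mean_var(x)
    assert abs(m - (1 + x) / 2) < 1e-4
    assert abs(v - (1 - x * x) / 4) < 1e-4
```

The midpoint rule is used instead of Monte Carlo sampling so that the check is deterministic; the quadrature error is bounded by the cell width 2/M.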
Definition: Let u^{(1)}(x), u^{(2)}(x), and u(x,ξ) denote the Galerkin solution of Eq. (5) in the Legendre polynomials, the Galerkin projection of the exact solution H(x) of Eq. (5), and the polynomial chaos solution of Eq. (6), respectively. The superscripts (1) and (2) indicate that the associated quantity corresponds to u^{(1)}(x) or u^{(2)}(x); for example, û^{(1)} and û^{(2)} are the expansion coefficients of u^{(1)}(x) and u^{(2)}(x), respectively. First we consider the function u^{(2)}(x) = ∑_{l=0}^{∞}û_{l}^{(2)}L_{l}(x), which is the direct projection of the exact solution of Eq. (5); that is, û_{l}^{(2)} are given by the following equation:
Lemma 1. The expansion coefficients û_{l}^{(2)} in Eq. (10) are given by
Proof. By multiplying each side of Eq. (10) by L_{l}(x) and using the orthogonality of the Legendre polynomials, the expansion coefficients are given by
If l = 0, it is obvious that û_{0}^{(2)} = 1/2. For l ≠ 0, we use the following property of the Legendre polynomials [18]:
and
Since ∫_{0}^{1}L_{l}(x)dx = ∫_{−1}^{0}L_{l}(x)dx for even l and ∫_{0}^{1}L_{l}(x)dx = −∫_{−1}^{0}L_{l}(x)dx for odd l, the above relations yield
Since L_{l}(0) = 0 for odd l, we obtain Eq. (11). Next we consider the function u^{(1)}(x) = ∑_{l=0}^{∞}û_{l}^{(1)}L_{l}(x), which is the Galerkin solution of the differential equation, Eq. (5). By plugging u^{(1)}(x) into the differential equation and the boundary condition, we have
Lemma 2. The coefficients û_{l}^{(1)} satisfying Eqs. (16) and (17) are given by
Proof. From Eq. (13) we have
Adding both sides of Eq. (19) all together, we have
for n = even. Similarly, we have
for n = odd. From Eqs. (20) and (21), we know that L_{n}^{'}(x) is a linear combination of all the previous odd (if n is odd) or even (if n is even) terms with coefficients (2k + 1) for L_{k}(x). By plugging Eqs. (20) and (21) into Eq. (16), we have
In Eq. (22), ... means 3L_{1}(x) + 7L_{3}(x) + 11L_{5}(x) + ⋯ + (2l − 3)L_{l−2}(x) for even l, or 1 + 5L_{2}(x) + 9L_{4}(x) + 13L_{6}(x) + ⋯ + (2l − 3)L_{l−2}(x) for odd l. Multiplying each side of Eq. (22) by L_{k}(x) yields
We then integrate the above equation over x and switch the left and right sides to obtain
where we used the orthogonality condition of the Legendre polynomials. Eq. (24) also reads
Subtracting Eq. (24) from Eq. (25) yields
Now consider the boundary condition. Since û_{k+1}^{(1)} vanishes if k is odd, the boundary condition becomes
This completes the proof. Finally we consider u(x,ξ), which is the solution of the stochastic differential equation, Eq. (6),
where the expansion coefficients û_{l} are functions of x. Plugging u(x,ξ) into the differential equation yields
where the superscript ′ denotes the derivative with respect to x. To consider u(x,ξ), let us first consider the general case where ξ is defined on a subinterval of x. Domain decomposition: ξ ∈ (−ϵ,ϵ). Assume that the location of the δ-function is confined to a small region, ξ ∈ (−ϵ,ϵ), 0 < ϵ ≪ 1. The solution of Eq. (6) is then given by
where ξ ∈ (−ϵ,ϵ). In the interval x ∈ [−ϵ,ϵ], the expectation value is
Similarly, the variance is
Thus, for a fixed ϵ, the expectation value and variance are given by
and
For any ϵ, we have
which shows that the expectation value and variance are bounded although the PDF, (1/(2ϵ))χ_{[−ϵ,ϵ]}(ξ), diverges as ϵ → 0. We obtain the expectation value and variance at x = 0 by letting ϵ → 0. Also, by Eqs. (35) and (36) we have
These values are the same as those for ξ ∈ [−1,1]. Thus we know that if x → 0, the expectation value and the variance are the same for any value of ξ ∈ (−ϵ,ϵ). The assumption that ξ ∈ (−ϵ,ϵ) breaks the original differential equation into three equations in three regions: (1) x ∈ I = [−1,−ϵ], (2) x ∈ II = (−ϵ,ϵ], and (3) x ∈ III = (ϵ,1]. Interval I, x ∈ [−1,−ϵ]: In this interval the δ-function is absent; the differential equation and the boundary condition are given by du/dx = 0 with u(−1) = 0, and the solution is simply u(x) = 0.
Interval II, x ∈ (−ϵ,ϵ): In this interval the δ-function exists and the equation is given by du/dx = δ(x − η), where η ∈ (−ϵ,ϵ). Then we seek a solution u(x,η) as
where ξ = η/ϵ and ξ ∈ [−1,1]. Lemma 3. The expansion coefficients û_{l}(x) in Eq. (39) are given by
Furthermore, the boundary value u(x = ϵ,η) is unity for any value of η, i.e.,
Proof. By plugging Eq. (39) into the differential equation and using the orthogonality of L_{l}(ξ) we obtain
The boundary condition at x = −ϵ is obtained from the solution at x = −ϵ in interval I,
Thus, û_{l}(−ϵ) = 0 for all l = 0,1,2,.... Using this boundary condition, we obtain
if k = 0. If k ≠ 0, we have
where we used Eq. (14). The boundary value of u(x,η) at x = ϵ is
From lemma 2, we know that the mean value of u(x,η) in this interval is given by
which is û_{0}(x). If x → 0, we confirm that the mean value approaches 1/2. Interval III, x ∈ (ϵ,1]: Since there is no δ-function in this interval, using the boundary value u(ϵ,η) = 1, the solution for x ∈ (ϵ,1] is u(x) = 1. It is easy to show that if ϵ → 0, then we have
Using lemma 3, we have the following corollary for ξ ∈ [−1,1]. Corollary 4. The expansion coefficients û_{l}(x) are given by
and
Proof. From lemma 3, with ϵ → 1, we have Eqs. (49) and (50). Furthermore, by plugging ξ = 0 and the expressions in Eq. (49) into Eq. (28), we obtain
It is a simple exercise to simplify Eq. (51). Here note that the first coefficient, û_{0}(x) = (1/2)(x + 1), is the same as the expectation value of u(x,ξ) in Eq. (8). Remark: Equations (50) and (51) are equivalent. They may, however, differ if they are truncated at finite N in their given forms. For Eq. (51), since L_{l+1}(x) = L_{l−1}(x) at x = ±1 if l is even, the values at x = ±1, which are the boundary values, are determined regardless of how many terms are used in the series. For Eq. (50), since u(x,0) = 1 at x = 1 and u(x,0) = 0 at x = −1, the corresponding identity holds only as N → ∞. Thus we use Eq. (51) for the computation in the following sections. Figure 1 shows the expansion coefficients û_{l}(x), Eq. (49). The top panel shows û_{l}(x) for l = 0,...,9 and the bottom panel for l = 10,...,40. FIG. 1: Expansion coefficients û_{l}(x). Top: û_{l}(x) for l = 0,...,9. Bottom: û_{l}(x) for l = 10,...,40. Theorem 5. Furthermore, u_{N}(x,ξ) converges to u(x,ξ) at ξ = 0, i.e.,
Proof. From lemmas 1 and 2 and corollary 4, we know that all the coefficients of u^{(1)}(x), u^{(2)}(x), and u(x,ξ = 0) are the same. Using the recurrence relation (n + 1)L_{n+1}(x) = (2n + 1)xL_{n}(x) − nL_{n−1}(x), we have
which yields
Then we let _{k}(x) be defined as
Using the Stirling formula n! ∼ √(2πn)(n/e)^{n} and (2n)! ∼ √(4πn)(2n/e)^{2n}, we have
Thus, the following series converges
and we have
From theorem 5, we know that the stochastic solution u(x,ξ) matches the deterministic solution well, particularly when the singularity is located at x = 0. Remark: Theorem 5 can be extended to the more general case du/dx = δ(x − c), where c is a real constant, c ∈ [−1,1]. Then the solutions u^{(1)}(x;c) and u^{(2)}(x;c) are the same as u(x,ξ) for ξ = c. This can easily be shown using the properties of the Legendre polynomials. First, for H(x − c) = ∑_{l=0}^{N}û_{l}^{(2)}L_{l}(x), the coefficients are given by
By the Galerkin projection, we get results similar to lemma 2,
for any l > 0. To prove Eqs. (59) and (60) are equal, we use the identity formula (14). Setting x = 1 and x = c in Eq. (14), respectively, we have
Subtracting Eq. (62) from Eq. (61) yields an equation implying that Eqs. (59) and (60) are equal. Also, the boundary condition ∑_{l=0}^{N}û_{l}^{(1)}L_{l}(−1) = 0 is obtained by plugging Eq. (60) into this formula. The coefficients from the polynomial chaos method for any c are obtained in a similar way as in corollary 4, by replacing 0 with c in Eq. (50). After some simple algebraic calculations, we can show that the coefficients from the polynomial chaos method are equal to those from the previous two methods. Now we consider the convergence of u(x,ξ) for any ξ; that is, we want to show
Using corollary 4 we have
where we used |L_{l}(ξ)| ≤ 1. Here we do not prove the convergence analytically; instead we show the numerical result. Define R_{1}(n) and the remainder R_{2}(n,N_{∞}). For the numerical calculation of R_{1}(n) and R_{2}(n,N_{∞}), we use N_{∞} = 6000. Figure 2 shows the decay of R_{1}(n) (blue solid line) and R_{2}(n,N_{∞}) (black solid line) with n in logarithmic scale. The figure shows that R_{1}(n) decays at a rate of about n^{−4.95}. The red line in the figure is a reference line which decays as n^{−4.95}. With this decay rate, we know that the series ∑_{n=1}^{∞}‖L_{n+1}(x) − L_{n−1}(x)‖_{∞} converges. Thus, the remainder R_{2}(n,N_{∞}) will decay as n → ∞ and N_{∞} → ∞ for n < N_{∞}. The black solid line shows the decay of R_{2}(n,N_{∞}) with N_{∞} = 6000. The figure implies that, due to the decay property of R_{1}(n), the remainder R_{2}(n,∞) will also decay to zero as n → ∞, but the decay rate is only algebraic. That is, u_{N}(x,ξ) converges to u(x,ξ), but the convergence is slow because of the discontinuity at x = ξ. FIG. 2: Decay of R_{1}(n) and R_{2}(n,N_{∞}) with N_{∞} = 6000. The red solid line is the reference line which decays as n^{−4.95}.

3. Gibbs phenomenon

The solutions obtained in the previous section exhibit the Gibbs phenomenon, which is commonly found in high-order approximations of discontinuous functions with the spectral method [19, 20]. The exact solution of Eq. (5) is the Heaviside function with H(0) = 1 and lim_{x→0^{−}}H(x) = 0. As we already saw, all solutions obtained in the previous section have the value 1/2 at x = 0. Thus all solutions converge to H(x) at every point x except x = 0. This appears as Gibbs oscillations in the partial sum of each solution near x = 0. Figure 3 shows the partial-sum solution of u(x,ξ = 0) (left) and u(x,ξ) (right) for N = 40. The left figure shows the solution for ξ = 0. As shown in the figure, the solution is oscillatory near the discontinuity x = 0.
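The oscillations can be reproduced directly from the Legendre projection of H(x). In the sketch below (ours, not from the paper; the coefficients are computed by Gauss-Legendre quadrature on [0, 1] rather than from the closed form of Eq. (11), and N = 40 matches Fig. 3), the partial sum equals 1/2 at the jump and overshoots the exact value 1 nearby:

```python
import numpy as np
from numpy.polynomial.legendre import Legendre, leggauss

def L(l, x):
    """Legendre polynomial L_l evaluated at x."""
    return Legendre.basis(l)(x)

# Projection coefficients of H(x): hat u_l = (2l+1)/2 * int_0^1 L_l(x) dx,
# computed exactly by Gauss-Legendre quadrature mapped from [-1,1] to [0,1].
g_nodes, g_weights = leggauss(64)
t = 0.5 * (g_nodes + 1.0)            # map quadrature nodes to [0, 1]

def coeff(l):
    return (2 * l + 1) / 2 * 0.5 * np.sum(g_weights * L(l, t))

N = 40
x = np.linspace(-1, 1, 4001)
uN = sum(coeff(l) * L(l, x) for l in range(N + 1))

# The partial sum takes the average value 1/2 at the jump ...
u0 = sum(coeff(l) * L(l, 0.0) for l in range(N + 1))
assert abs(u0 - 0.5) < 1e-10
# ... and overshoots H(x) = 1 near x = 0: the Gibbs phenomenon.
assert uN.max() > 1.0
```

Increasing N sharpens the transition but does not remove the overshoot, which is the slow-convergence mechanism discussed for the variance in Fig. 4.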
The right figure shows the collection of solutions for every ξ and x. As shown in the figure, u(x,ξ) is oscillatory near x = ξ. Figure 4 shows the variance and the mean of u(x,ξ). The top figure shows the computed variance of u(x,ξ) with 501 points in x and ξ for N = 10 (blue solid line), N = 20 (green), and N = 40 (purple), together with the theoretical variance of u(x,ξ), (1 − x^{2})/4 (red). As the figure shows, the variance approaches the exact variance as N increases, but the convergence is slow. The slow convergence is due to the fact that the variance is computed using every term in the series of the solution; as the series converges slowly, the variance also converges slowly. The bottom figure shows the error between the computed mean of u(x,ξ) and the exact mean, (1 + x)/2, in logarithmic scale using 5001 uniform points for N = 4, 6, 10. For the numerical integration we used Simpson's rule. The figure shows that the pointwise errors of the mean value are close to machine accuracy for small values of N. This is because the first mode is the mean and the remaining terms cancel. As N increases, the pointwise errors increase, which results from incomplete numerical cancellation of high modes due to roundoff errors. FIG. 3: Left: u(x,ξ) for ξ = 0. Right: u(x,ξ). For these figures, N = 40 is used. FIG. 4: Top: Variance of u(x,ξ): the exact variance (1 − x^{2})/4 (red) and variances for N = 10 (blue), N = 20 (green), and N = 40 (purple). Bottom: Pointwise errors between the exact mean (x + 1)/2 and the computed mean for N = 4 (green), 6 (red), 10 (blue).

4. High-order moments of u(x,ξ)

With the uniform distribution, it is easy to show that the variance is given by
where one should note that the index l runs from 1. In general, all the terms of û_{l}(x) are involved for the computation of the variance, as shown in the above equation and Fig. 4. It is, however, interesting to observe that the variance in our case is simply given by the second mode of û_{l}(x),
That is, like the mean of u(x,ξ), the variance is determined exactly once û_{1}(x) is found. This implies that the slow convergence of the variance observed in Fig. 4 can be avoided, since the variance is obtained instantly. To understand this interesting aspect, we need to show the following:
where û_{l}(x) = (1/2)[L_{l+1}(x) − L_{l−1}(x)]. To prove Eq. (67), we first plug Eq. (14) into the left-hand side (LHS) of Eq. (67). For the proof we use the well-known property that the Legendre polynomials are complete, that is, {√[(2l + 1)/2] L_{l}(x)}_{l=0}^{∞} is a complete orthonormal set [21]. Applying the completeness condition and using L_{0}(x) = 1, we obtain the result [21]. This completes the proof. This special result is due to the following relation:
Since the exact solution is H(x − ξ) and H^{n} = H for any n ≥ 1, it is simple to show that E[u^{n}(x,ξ)] = E[u(x,ξ)], n = 1,2,.... The fact that the mean of any power of u(x,ξ) is the same as the mean of u(x,ξ) yields the following property. Let ū = E[u(x,ξ)] and v = u − ū. Then for n = 0,1,..., we have
It is easy to show Eq. (69),
where we used Eq. (68) and v = u − ū. Equation (69) yields interesting results about the high-order moments of u. For example,
The second central moment is the variance and the third is related to the skewness. Thus we know that the first three moments (the mean, variance, and skewness) are obtained exactly from the first three modes û_{l}(x) in our case. Figure 5 shows E[(u − ū)^{n}] for n = 1,...,50. The first panel shows E[(u − ū)^{n}] for n = 1,...,20 and the second for n = 21,...,50. If n = 1, E[(u − ū)^{n}] = 0. As n increases, the maximum value of E[(u − ū)^{n}] decreases. Note the different scales in the two panels. FIG. 5: Moments E[(u − ū)^{n}]. Top: n = 1,...,20. Bottom: n = 21,...,50.

5. Time-dependent linear advection equation with uncertainty

We consider the time-dependent problem with a singular source term
If the boundary condition is homogeneous, i.e., u(−1,t) = 0, the solution u(x,t) approaches the steady-state solution, which is the Heaviside function H(x). We consider the case where the location of the singular source has an uncertainty ξ, as in the previous sections, that is,
where u = u(x,t,ξ). We assume that ξ ∈ (−1,1) with the uniform PDF and
By plugging Eq. (74) and using the orthogonality condition of the Legendre polynomials, we obtain
If l = 0, we have
Then the solution of Eq. (76) is given by
where C is the integration constant and x_{0} = x − t. To determine the integration constant C, we use the given boundary and initial conditions. From the boundary condition u(−1,t,ξ) = h(t), we have
Similarly, using the given initial condition we obtain
Using Eqs. (78) and (79), we obtain
If l ≠ 0, by using the orthogonality condition we obtain
where we used Eqs. (18) and (79). Thus, the general solution of the stochastic equation (73) is given by
If g(x) = 0 = h(t), then we obtain the Legendre expansion of H(x), and we know that u(x,t) → H(x) as t → ∞. To consider the numerical approximation of the solution, we use the Legendre polynomials in both x and ξ,
We seek the truncated sum of Eq. (83) for the numerical solution
For simplicity, we assume that N = M. Multiplying each side of Eq. (75) by L_{l′}(x), l′ = 0,...,N, and using integration by parts, we obtain
where we use L_{l'}(1) = 1, ∀l'. We then plug the following relation into Eq. (85), For the given l, using the boundary condition and the properties of the Legendre polynomials, we obtain
Define the column vectors b̂^{l} and b_{1}, whose ith elements are b̂_{i}^{l} and (−1)^{i}, and define the matrix b_{2}^{l}, whose ith column has the element δ_{il}, for i = 0,...,N. Also define the matrices A, B, and C, whose ij elements are A_{ij} = (2i + 1)/2, B_{ij} = [2/(2i + 1)]δ_{ij}, and C_{ij} = ∫_{−1}^{1}L′_{i}(x)L_{j}(x)dx, for i,j = 0,...,N, respectively. Then for given l, Eq. (86) becomes
Equation (87) is solved numerically using the initial condition
For the numerical experiment, we use the following initial and boundary conditions: With these conditions, the mean f(x,t) and the variance g(x,t) of the exact solution u(x,t,ξ) are given by
The variance is the same as the variance in Eq. (9), because the homogeneous solution is independent of the random variable ξ. For the time integration we use the third-order Runge-Kutta total variation diminishing (TVD) scheme [22]. The mean and the variance at time t are computed from the expansion coefficients. Figure 6 shows the solution for ξ = 0 (left figure) and the variance (right) at t = 10. As shown in the right figure, the convergence of the variance is slow due to the Gibbs phenomenon. FIG. 6: Left: The numerical solution (oscillatory, blue) and the exact solution (red). Right: Variance. ξ = 0 at t = 10 with N = 41.

6. Direct projection collocation methods

In the previous sections we used the Galerkin approach to obtain the solution of the differential equations with the random variable ξ. The Galerkin approach yields the Gibbs phenomenon, as shown in the previous sections. In this section we solve the same equations using the collocation method based on the direct projection approach for the singular source term [23]. The direct projection approach uses the direct derivative of the Heaviside function for the singular source term on the collocation points. The direct collocation method has been applied in several applications [23-25]. The main idea of the direct projection approach is to project the Heaviside function H(x) onto the collocation points using the spectral derivative matrix D_{N}, that is, δ_{N}(x) = D_{N}H_{N}(x), where δ_{N}(x) is the spectral approximation of the δ-function on the collocation points, D_{N} is the derivative matrix, and H_{N}(x) is the Heaviside function on the collocation points. Several spectral derivative matrices related to the orthogonal polynomials can be found in [19].

6.1 A Simple First-Order Differential Equation

Consider the following differential equation with the random variable ξ, du/dx = δ(x − ξ), u(−1,ξ) = 0. Let U_{N} be the approximation of u on N + 1 collocation points for ξ, {ξ_{l}}_{l=0}^{N}. The collocation method yields the approximation U_{N}(x,ξ) in the Legendre polynomials as in the previous sections,
Here we assume that we also seek U_{N}(x,ξ) on the collocation points for x, {x_{l}}_{l=0}^{M}. That is, the spectral method is applied in both the x and ξ directions, and the solution U_{N}(x,ξ) is defined on the two-dimensional grid. By plugging U_{N}(x,ξ) into the differential equation, we obtain
where D_{M} is the spectral derivative matrix for the variable x on M + 1 collocation points associated with some orthogonal polynomials, such as the Chebyshev or Legendre polynomials. For the singular source term on the right-hand side (RHS) of the above equation, the direct projection method uses δ_{M}(x − ξ) = D_{M}H_{M}(x − ξ), where H_{M} is the Heaviside function on the collocation points with the jump at x = ξ. Then Eq. (91) becomes
To solve the differential equation, we first use the boundary condition, which is U_{N}(−1,ξ) = 0.
From Eqs. (92) and (93) we obtain D̃_{M}[U_{N} − H_{M}] = 0, where D̃_{M} is the submatrix of D_{M} obtained by deleting the boundary column and row from D_{M}. The RHS 0 denotes a null vector, and U_{N}(x,ξ) on the LHS is the solution vector for a given ξ. Since D̃_{M} is nonsingular [19], we obtain U_{N}(x,ξ) = H_{M}(x − ξ) on the collocation points,
which is the same as the exact solution, and we know that the solution is Gibbs-free on the collocation points. Remark: We note that interpolation based on the solution at the collocation points yields the Gibbs oscillations, but the solution is Gibbs-free on the collocation points.

6.2 A Simple Time-Dependent Problem

Now we consider the time-dependent problem with the collocation method
where U = U(x,t,ξ) and D_{x} denotes the derivative operator with respect to x. U is defined in the same way,
For the steadystate problem, using the following boundary condition,
and the given differential equation becomes, as t → ∞,
This steady-state solution becomes U(x,t,ξ) → H(x − ξ), as shown in the previous section. For the numerical experiment we use the Chebyshev polynomials for x and the Legendre polynomials for ξ. As in the previous section, we use the third-order Runge-Kutta TVD scheme for the time integration [22]. For the initial and boundary conditions we use the following:
where x_{i} = cos(πi/M), i = 0,...,M, are the Chebyshev Gauss-Lobatto collocation points. With these initial and boundary conditions, the exact solution u(x,t,ξ) is given by
where the first term is the homogeneous solution and the second term is the particular solution due to the singular source term. Figure 7 shows the collocation solution for u(x,ξ). Figure 7a shows the solution for ξ = 0, and Fig. 7b shows the collection of solutions for various ξ. Figure 7a shows that the solution is not affected by the Gibbs phenomenon, without any oscillations on the collocation points. Figure 7b shows that along the line x = ξ the jump of each solution is sharp, without any Gibbs oscillations. For these figures we use M = N + 1 and N = 81. Figure 7c shows the variance with N = 41 and M = 21. The variance from the numerical solution is the blue line with the symbol. As shown in the figure, the variance is computed more accurately than the result in Fig. 6. The numerical results, however, show that the degree of accuracy is similar to that of the numerical simulation with the Galerkin approach, although the Gibbs oscillations are not seen on the collocation points. This result is somewhat different from what the authors expected, partly because the collocation approach has an ambiguity in defining the locations of the δ-function and the Heaviside function. If the δ-function is located at a certain collocation point, the error does not decay at that point, because the actual location of the δ-function with our collocation method lies off the collocation points. This issue will be further investigated in our future work. FIG. 7: (a) The solution at ξ = 0. (b) Polynomial chaos solutions for every ξ. The total number of grid points for x is N = 81. (c) The computed variance (blue line with square) and the exact variance (red line) with M = 21 and N = 41.

7. Conclusion

In this paper we considered simple differential equations with a singular source term. For the singular source term, we used the Dirac δ-function.
Due to the uncertainty in the location of the singular source term, we introduced a random variable and used the generalized polynomial chaos method to find the general solution of the differential equation under the uncertainty. For simplicity, we assumed that the uncertainty is associated with the uniform distribution. Based on this assumption, we derived the general solution of the differential equation in the Legendre polynomials using the Galerkin method, as well as the expectation value and variance of the solution. For this particular case, we showed that the second- and third-order moments, as well as the mean, can be computed exactly using the first three expansion coefficients. The same technique was applied to the simple time-dependent problem. We showed that the Gibbs phenomenon appears in the polynomial chaos solution and that convergence is consequently slow. As preliminary work on dealing with the Gibbs phenomenon in the solution, we considered the direct collocation method for the polynomial chaos solution, and we showed that it can avoid the Gibbs phenomenon for the simple differential equations considered in this paper. Although the Gibbs oscillations are much reduced, the convergence of the variance is of about the same order as with the Galerkin approach, which will be further investigated in our future work. The assumption of a uniform distribution yields relatively easy analysis. Our future work will therefore consider more realistic cases with different distributions for more general types of differential equations with singular source terms, will include the polynomial chaos method for more types of uncertainty variables associated with the singular source term, and will further investigate the collocation method for the polynomial chaos solution and the Gibbs phenomenon with the singular source term.

Acknowledgments

J.H.J. is supported by the National Science Foundation under grant no. DMS-0608844.
The authors thank Dongbin Xiu for his help with the initial setup of the problem and his valuable feedback on our manuscript.

References

1. Engquist, B., Tornberg, A.-K., and Tsai, R., Discretization of Dirac delta functions in level set methods, J. Comput. Phys., 207:28-51, 2005.
2. Fei, Z., Kivshar, Y. S., and Vázquez, L., Resonant kink-impurity interactions in the sine-Gordon model, Phys. Rev. A, 45(8):6019-6030, 1992.
3. Goodman, R. H. and Haberman, R., Chaotic scattering and the n-bounce resonance in solitary-wave interactions, Phys. Rev. Lett., 98(10):104103, 2007.
4. Jacobs, G. and Don, W.-S., A high-order WENO-Z finite difference based particle-source-in-cell method for computation of particle-laden flows with shocks, J. Comput. Phys., 228:1365-1379, 2008.
5. Lousto, C. O. and Price, R. H., Head-on collisions of black holes: The particle limit, Phys. Rev. D, 55:2124-2138, 1997.
6. Wiener, N., The homogeneous chaos, Am. J. Math., 60:897-936, 1938.
7. Gottlieb, D. and Xiu, D., Galerkin method for wave equations with uncertain coefficients, Commun. Comput. Phys., 3:505-518, 2008.
8. Xiu, D., Fast numerical methods for stochastic computations: A review, Commun. Comput. Phys., 5(2-4):242-272, 2009.
9. Xiu, D. and Hesthaven, J., High-order collocation methods for differential equations with random inputs, SIAM J. Sci. Comput., 27(3):1118-1139, 2005.
10. Xiu, D. and Karniadakis, G., Modeling uncertainty in steady state diffusion problems via generalized polynomial chaos, Comput. Meth. Appl. Mech. Eng., 191:4927-4948, 2002.
11. Xiu, D. and Karniadakis, G., The Wiener-Askey polynomial chaos for stochastic differential equations, SIAM J. Sci. Comput., 24:619-644, 2002.
12. Xiu, D. and Karniadakis, G., Modeling uncertainty in flow simulations via generalized polynomial chaos, J. Comput. Phys., 187:137-167, 2003.
13. Xiu, D. and Karniadakis, G., Supersensitivity due to uncertain boundary conditions, Int. J. Numer. Meth. Eng., 61(12):2114-2138, 2004.
14.
Xiu, D., Lucor, D., Su, C., and Karniadakis, G., Stochastic modeling of flow-structure interactions using generalized polynomial chaos, J. Fluids Eng., 124:51-59, 2002.
15. Ghanem, R. G. and Spanos, P., Stochastic Finite Elements: A Spectral Approach, Springer, New York, 1991.
16. Le Maître, O. P. and Knio, O. M., Spectral Methods for Uncertainty Quantification, Springer, New York, 2010.
17. Xiu, D., Numerical Methods for Stochastic Computations: A Spectral Method Approach, Princeton University Press, Princeton, 2010.
18. Bateman, H., Higher Transcendental Functions, Vol. 2 (edited by A. Erdélyi, W. Magnus, F. Oberhettinger, and F. G. Tricomi), McGraw-Hill, New York, 1953.
19. Hesthaven, J. S., Gottlieb, S., and Gottlieb, D., Spectral Methods for Time-Dependent Problems, Cambridge University Press, Cambridge, 2007.
20. Gottlieb, D. and Orszag, S. A., Numerical Analysis of Spectral Methods: Theory and Applications, SIAM, Philadelphia, 1977.
21. Sansone, G., Orthogonal Functions, A. H. Diamond (trans.), Robert E. Krieger Publishing Co., Huntington, NY, 1997.
22. Shu, C.-W., Total variation diminishing Runge-Kutta schemes, SIAM J. Sci. Stat. Comput., 9:1079-1084, 1988.
23. Jung, J.-H., A note on the spectral collocation approximation of differential equations with singular sources in one dimension, J. Sci. Comput., 39(1):49-66, 2009.
24. Jung, J.-H. and Don, W.-S., Collocation methods for hyperbolic partial differential equations with singular sources, Adv. Appl. Math. Mech., 1(6):769-780, 2009.
25. Jung, J.-H., Khanna, G., and Nagle, I., A spectral collocation approximation for the radial-infall of a compact object into a Schwarzschild black hole, Int. J. Mod. Phys. C, 20(11):1827-1848, 2009.