International Journal for Uncertainty Quantification

Published 6 issues per year

ISSN Print: 2152-5080

ISSN Online: 2152-5099



EFFICIENT NUMERICAL METHODS FOR STOCHASTIC PARTIAL DIFFERENTIAL EQUATIONS THROUGH TRANSFORMATION TO EQUATIONS DRIVEN BY CORRELATED NOISE

Ju Ming

Department of Scientific Computing, Florida State University, Tallahassee, Florida 32306-4120, USA

Max Gunzburger

Department of Scientific Computing, Florida State University, Tallahassee, Florida 32306-4120, USA

Abstract

A procedure is provided for the efficient approximation of solutions of a broad class of stochastic partial differential equations (SPDEs), that is, partial differential equations driven by additive white noise. The first step is to transform the given SPDE into an equivalent SPDE driven by a correlated random process, specifically, the Ornstein-Uhlenbeck process. This allows for the use of truncated Karhunen-Loève expansions and sparse-grid methods for the efficient and accurate approximation of the input stochastic process in terms of few random variables. Details of the procedure are given and its efficacy is demonstrated through computational experiments involving the stochastic heat equation and the stochastic Navier-Stokes equations.

KEYWORDS: finite element methods, Monte Carlo method, Karhunen-Loève expansion, Smolyak quadrature rule


1. INTRODUCTION

In this paper, we attempt to improve the efficiency and accuracy of computations for evolution-type stochastic partial differential equations (SPDEs) having the form

(1)

where [0, T] denotes a time interval and D ⊂ ℝd is a spatial domain. In (1), 𝒜 denotes a positive semidefinite, self-adjoint linear operator; 𝒩 denotes a nonlinear operator acting on the stochastic process u(t, ·); g(t, ·), h(t, ·), and u0(·) denote deterministic data functions; σ denotes a constant; and ξ(t, ·) is a space-time random field representing a stochastic perturbation. Corresponding to the operator 𝒜 with a zero Dirichlet boundary condition, we have the eigenpairs {λj, Φj}∞j=1 such that 0 < λ1 ≤ λ2 ≤ ⋯ and limj→∞ λj = ∞; the set {Φj(·)}∞j=1 forms a complete orthonormal basis of L2(D).

Various stochastic perturbations ξ(t, ·) may be used to define a stochastic dynamical system; see, e.g., [1–5]. Our interest is in noise having the form

(2)

i.e., the noise is defined by an expansion in terms of the orthonormal eigenfunctions of the linear operator 𝒜; here, {ςj(t)}∞j=1 denotes a set of independent and identically distributed Brownian motions. Thus, if the coefficients {wj}∞j=1 satisfy some restrictions, (2) is a white noise process in time and, in general, has some spatial regularity whose character, e.g., its spatial smoothness, is determined by the properties of the coefficients wj. Note that one can view (2) as a spectral expansion in that wj ς̇j(t) = (ξ(t, ·), Φj(·)) for j = 1, 2, …, where (⋅, ⋅) denotes the L2(D) inner product.

A specific example of a process of the form (2) is the Q-Wiener process [6]. Suppose Q is a symmetric non-negative operator and, for the sake of convenience, assume that Q and 𝒜 in (1) have the same eigenfunctions {Φj(·)}∞j=1 and eigenvalues {λj}∞j=1. In this case, the random process represented by the expansion

(3)

is a Wiener process with covariance operator Q. A number of papers have been devoted to SPDEs involving dW; see, e.g., [7–12]; the definition of the notation dW can be found in [6].

In this paper, the process (2) is approximated in terms of a finite number of independent Brownian motions ςj(t), j = 1, …, N, through truncation, i.e.,

(4)

Note that ξ(t, ·) = limN→∞ ξN(t, ·); see, e.g., [13]. Processes of the form (2) and their approximations (4) are used in spectral Galerkin approximations of SPDEs such as (1), in which the approximate solution of the SPDE is also expressed in terms of a finite sum of the orthonormal eigenfunctions {Φj(·)}, i.e., one has u(t, ·) ≈ uN(t, ·) = ∑Nj=1 uj(t)Φj(·); see, e.g., [14–16].

In practice, however, there is a general lack of knowledge about stochastic dynamical systems, including the nature of the stochastic inputs. For example, the coefficient functions wj(t) in the stochastic input (2) are not always known. It is then natural to investigate specific simple stochastic processes that preserve the essential aspects of more general processes. Thus, we consider specific, time-independent choices for wj(t).

Crucial to our approach to finding approximate solutions of (1) is knowledge of an auxiliary stochastic process that satisfies an SPDE related to (1) but whose correlation function can be written explicitly. In particular, we consider the Ornstein-Uhlenbeck (OU) process η(t, ·) that formally solves the stochastic parabolic equation

(5)

where a > 0 is a constant. The OU process is a random field that is correlated in time. As such, and as we discuss in this paper, it can be approximated more accurately and efficiently than the white noise process ξ(t, ·).

It is not difficult to show that the integral solution of (5) is given by

(6)

Using the definition (2) of ξ(t, ·), we have

(7)

From this explicit solution, the space-time mean and covariance function of η(t, ·) can be deduced explicitly; see Section 2.2. Consequently, the OU process can be expressed in terms of a Karhunen-Loève expansion whose truncation provides an efficient means for determining approximations of the OU process; see Section 2.2.1.
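Under the spectral form (7), each coefficient ηj(t) = (η(t, ·), Φj(·)) solves the scalar OU equation dηj = −(λj + a)ηj dt + σwj dςj with ηj(0) = 0, whose Gaussian transition can be sampled exactly. A minimal sketch (the values θ = λj + a and s = σwj below are illustrative, not the paper's):

```python
import numpy as np

def simulate_ou_mode(theta, s, T, M, n_paths, rng):
    """Exactly sample paths of dX = -theta*X dt + s dW, X(0) = 0,
    using the Gaussian transition density of the OU process."""
    dt = T / M
    decay = np.exp(-theta * dt)
    # standard deviation of one exact transition step
    step_sd = s * np.sqrt((1.0 - np.exp(-2.0 * theta * dt)) / (2.0 * theta))
    X = np.zeros((n_paths, M + 1))
    for k in range(M):
        X[:, k + 1] = decay * X[:, k] + step_sd * rng.standard_normal(n_paths)
    return X

rng = np.random.default_rng(0)
theta, s = 2.0, 0.5          # theta = lambda_j + a, s = sigma * w_j (illustrative)
X = simulate_ou_mode(theta, s, T=1.0, M=100, n_paths=20000, rng=rng)
# for the exact process, Var[X(T)] = s^2 (1 - exp(-2 theta T)) / (2 theta)
exact_var = s**2 * (1.0 - np.exp(-2.0 * theta * 1.0)) / (2.0 * theta)
print(abs(X[:, -1].var() - exact_var) / exact_var)   # small relative error
```

Because the transition is exact, there is no time-discretization bias; the only error in the printed comparison is Monte Carlo sampling error.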

By setting u = v + η in (1), we obtain the equivalent SPDE

(8)

for the random field v(t, ·). Note that (1) is driven by the white (in time) stochastic process ξ(t, ·), whereas (8) is driven by the colored (in time) stochastic process η(t, ·). Of course, the former appears in (1) simply as an additive forcing function, whereas the latter appears in (8) within the possibly nonlinear term 𝒩(⋅) as well.
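Although the displayed equations are not reproduced in this version of the text, the substitution can be sketched. Assuming (1) and (5) take the forms suggested by the surrounding description (a consistent reading, not the paper's verbatim displays), subtracting eliminates the white noise term:

```latex
% Assumed forms, consistent with the surrounding text:
%   (1):  u_t + \mathcal{A}u + \mathcal{N}(u) = g + \sigma\xi
%   (5):  \eta_t + \mathcal{A}\eta + a\eta   = \sigma\xi, \qquad \eta(0,\cdot) = 0
% Subtracting and setting v = u - \eta:
v_t + \mathcal{A}v + \mathcal{N}(v + \eta) = g + a\eta, \qquad v(0,\cdot) = u_0(\cdot),
```

so the noise enters the transformed problem only through the correlated process η, inside 𝒩 and through the term aη.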

The advantage of solving (8) instead of (1) is that, because η(t, ·) is a correlated (in time) random field, one can use one of several effective numerical tools for its discretization that are not appropriate for discretizing white noise random fields. For example, one can use stochastic collocation methods based on sparse quadrature rules [17–24] or polynomial chaos methods [25–29]. Such tools can thus be used to determine approximations of the stochastic process v(t, ·) and, eventually, u(t, ·), avoiding direct discretization of the white noise random field ξ(t, ·).

The effectiveness of numerical methods for approximating an SPDE usually depends on the dimensionality of the involved probability space, i.e., the computational complexity is dominated by the number of degrees of freedom needed to adequately represent the random inputs. Therefore, in practice, one of the basic means for reducing the computational cost is to develop a good low-dimensional approximation of the probability space without engendering much loss of accuracy. After converting the system perturbed by the white noise ξ into an equivalent one perturbed by the colored noise η, we have a means for determining such "low"-dimensional approximations so that a reduction in computational cost is effected.

The remainder of the paper provides details about our approach as well as results of computational experiments that illustrate the efficiency gains. In Section 2, we first discuss direct approximations of white noise random fields such as ξ(t, ·), provide additional details about the Ornstein-Uhlenbeck process η(t, ·) derived from ξ(t, ·), and discuss how the OU process is approximated using truncated Karhunen-Loève expansions. In Sections 3 and 4, we illustrate our approach by applying it to a stochastic heat equation and a stochastic Navier-Stokes system, respectively. Concluding remarks are given in Section 5.

2. FROM WHITE TO APPROXIMATE COLORED NOISE RANDOM FIELDS

In this section, we provide details about how we "convert" a white noise random field into a correlated random field, and how both types of fields are approximated.

2.1 White Noise

We first discuss the white noise stochastic process defined by a spectral (or Fourier) expansion in terms of a basis for L2(D), where D ⊂ ℝd denotes a bounded spatial domain. An example of such a process is given by (2).

Let {φj(·)}∞j=1 denote an orthonormal basis for L2(D) or for a subspace thereof. Let {ςj(t)}∞j=1 denote a set of i.i.d. one-dimensional Brownian motions or Wiener processes with respect to time. We consider the infinite-dimensional random process given by

(9)

where the coefficients {wj(t)}∞j=1 are such that the convergence of the series is guaranteed.

Clearly, the noise is white in time if the coefficients {wj}∞j=1 are constants and, by the definition of Brownian motion, μ(t, ·) satisfies

where each δ denotes the Dirac δ-function and Ε[⋅] denotes the expected value. Depending on the choice of basis {φj}∞j=1 and the decay rates of the coefficients {wj(t)}∞j=1, the noise μ(t, ·) may exhibit very different spatial behavior. We illustrate this through the two examples given below. To simplify our problem, we assume that the elements of the set {wj}∞j=1 are independent of time. We note that the behavior of the particular noise processes we report here resembles the results given in [4], except for a larger magnitude of oscillations.

Before proceeding to the examples, we first introduce a direct approach for approximating the infinite-dimensional space-time random field μ(t, ·). Note that we have to consider both the infinite dimensionality of the sum in (9) and that of each of the one-dimensional random processes {ςj(t)}∞j=1. As in (4), we truncate the sum in (9) to the first N terms to obtain the approximation

(10)

To approximate the white noise random fields {ς̇j(t)}Nj=1 appearing in (4), we consider a direct, grid-based discretization approach described as follows. We partition the interval [0, T] by means of the points {tk = kΔt}Mk=0 with Δt = T/M. Then, each of the time-dependent white noise random fields ς̇j(t), j = 1, 2, …, N, is approximated by the expansion

(11)

where αj,k ∼ 𝒩(0, 1) denote i.i.d. standard Gaussian random variables and {χk(t)}Mk=1 denotes the set of characteristic functions given by

Thus, each infinite-dimensional random field ς̇j(t) is approximated in terms of M random variables {αj,k}Mk=1, and the space-time random field μ(t, ·) is approximated by

(12)

A realization of this approximation requires the sampling of the MN i.i.d. random variables αj,k ∼ 𝒩(0, 1), j = 1, …, N, k = 1, …, M.
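The grid-based discretization (11)-(12) amounts to sampling one standard Gaussian per mode and per time step and scaling by 1/√Δt, so that the time integral of the approximate noise over a step has the statistics of a Brownian increment. A sketch under that reading, with an illustrative trigonometric basis and decaying coefficients:

```python
import numpy as np

def approximate_white_noise(w, phi_vals, T, M, rng):
    """Grid-based approximation (12) of mu(t, x): on each subinterval
    (t_{k-1}, t_k], mu is the piecewise-constant field
    sum_j w_j * (alpha_{j,k} / sqrt(dt)) * phi_j(x)."""
    N = len(w)
    dt = T / M
    alpha = rng.standard_normal((N, M))            # alpha_{j,k} ~ N(0,1), i.i.d.
    coeffs = (w[:, None] * alpha) / np.sqrt(dt)    # shape (N, M)
    return coeffs.T @ phi_vals                     # shape (M, n_x)

rng = np.random.default_rng(1)
N, M, T = 20, 200, 1.0
x = np.linspace(0.0, 1.0, 101)
j = np.arange(1, N + 1)
phi = np.sqrt(2.0) * np.sin(np.outer(j, np.pi * x))   # phi_j(x) = sqrt(2) sin(j pi x)
w = 1.0 / 2.0**j                                      # one of the decays used below
mu = approximate_white_noise(w, phi, T, M, rng)       # one realization on the grid
print(mu.shape)  # (200, 101)
```

One realization thus requires exactly the MN Gaussian samples described above.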

Example 1.

Let {φj}∞j=1 denote the trigonometric basis φj(x) = √2 sin(jπx), j = 1, 2, …, of L2(0, 1), and let wj = 1/2^j or wj = 1/j^{3/2}. Realizations of the approximate noise (12) are given in Fig. 1 for T = 1 and M = 200 for each of the two choices of wj and for four choices of N. Plots are provided for μN,M(1, x) and μN,M(t, 1/2). Clearly, the approximate noise is highly uncorrelated in time, as expected given that we are approximating noise that is white in time, whereas it is highly regularized in space. Moreover, we see that there is a higher degree of smoothness for wj = 1/2^j than for wj = 1/j^{3/2}, which is to be expected because of the faster decay, as j increases, of the first choice relative to the second.

FIG. 1: Realizations of the approximate space-time noise μN,M(t, ·) given in (12) for φj(x) = √2 sin(jπx) and for wj = 1/2^j (left two columns) and wj = 1/j^{3/2} (right two columns).

Example 2.

We consider a special case of the random field ξ(t, ·) introduced in Section 1. Specifically, we let 𝒜 = −νΔ with a zero Dirichlet boundary condition, where ν > 0 is a constant and Δ denotes the Laplace operator. As in Section 1, we let {λj, Φj(·)}∞j=1 denote the eigenpairs of the operator 𝒜, i.e., for j = 1, 2, …, we have

Because −Δ with a zero Dirichlet condition is self-adjoint and positive definite, we have 0 < λ1 ≤ λ2 ≤ λ3 ≤ ⋯ and limj→∞ λj = ∞. Furthermore, {Φj(·)}∞j=1 constitutes a complete orthonormal basis for L2(D). Hence, we can define a random process having the form (2), or equivalently, the form (9), and approximate it using (12).

In particular, let D be the unit square. Then, the eigenpairs are known to be, for n, m = 1, 2, …,

(13)

with x1 and x2 denoting the components of the spatial point.
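Since the display (13) is not reproduced here, we note that the standard eigenpairs of −νΔ on the unit square with zero Dirichlet data are λ_{n,m} = νπ²(n² + m²) and Φ_{n,m}(x1, x2) = 2 sin(nπx1) sin(mπx2); the sketch below states them explicitly and checks L2(D) orthonormality numerically:

```python
import numpy as np

nu = 0.001

def eig_pair(n, m):
    """Eigenpair of A = -nu*Laplacian on the unit square, zero Dirichlet data:
    lambda_{n,m} = nu*pi^2*(n^2 + m^2), Phi_{n,m} = 2 sin(n pi x1) sin(m pi x2)."""
    lam = nu * np.pi**2 * (n**2 + m**2)
    phi = lambda x1, x2: 2.0 * np.sin(n * np.pi * x1) * np.sin(m * np.pi * x2)
    return lam, phi

# numerical check of orthonormality for the first few eigenfunctions
x = np.linspace(0.0, 1.0, 401)
dx = x[1] - x[0]
X1, X2 = np.meshgrid(x, x, indexing="ij")
pairs = [(1, 1), (1, 2), (2, 1), (2, 2)]
G = np.empty((4, 4))                     # Gram matrix of L2(D) inner products
for a, (n, m) in enumerate(pairs):
    for b, (p, q) in enumerate(pairs):
        f = eig_pair(n, m)[1](X1, X2) * eig_pair(p, q)[1](X1, X2)
        G[a, b] = f.sum() * dx * dx      # grid quadrature (boundary values vanish)
print(np.round(G, 2))                    # ~ 4x4 identity matrix
```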

We now sort the eigenpairs {λn,m, Φn,m} via a two-dimensional multi-index l = (l1, l2) ∈ ℕ² with the norm

(14)

The resulting noise is then given by

(15)

and is approximated by

(16)

Therefore, with the appropriate relations between the components of the multi-index l and j and between Ñ and N, (15) and (16) are of the form (9) and (12), respectively.

For the computational illustration, we set ν = 0.001, σ = 0.1, Ñ = 100, M = 200, and wl = 1/|l|^{3/2}_{ℓ2} or wl = 1/|l|^{2}_{ℓ2}. Figure 2 illustrates realizations of the approximate noise σξÑ,M(t, x1, x2), where ξÑ,M(t, x1, x2) is given in (16); specifically, we show σξÑ,M(1, x1, x2) and σξÑ,M(t, 1/2, 1/2). Again, the noise is (discretely) white in time and regularized in space, with the smoothness being more prominent in the plots in the right column, which again corresponds to the choice of wl having the faster decay.

FIG. 2: Realizations of the approximate space-time noise σξÑ,M(t, x, y) given in (16) for t = 1 (top row) and at the point (1/2, 1/2) (bottom row) for wl = 1/|l|^{3/2}_{ℓ2} (left column) and wl = 1/|l|^{2}_{ℓ2} (right column). The faster the coefficients wl decay, the smoother the noise is in space.

2.2 The Ornstein-Uhlenbeck Process and Its Approximation

We now consider the Ornstein-Uhlenbeck process η(t, ) defined by (5). We first formalize results given in Section 1, all of which can be verified by direct substitution.

Let 𝒜 denote the operator −νΔ with a zero Dirichlet boundary condition. Then, by the orthonormality of the basis {Φj} and Itô's formula, we have, for any non-negative integer ℓ,

Thus, if

(17)

then

Thus, we have proved the following convergence result about the OU process η(t, ).

Theorem 1. If (17) holds, then Ε[‖𝒜^ℓ η‖²L2(D)] < ∞.

Setting ℓ = 0, we have the following result.

Corollary 2. If

(18)

then Ε[‖η‖²L2(D)] < ∞.

Theorem 3. [6] If (17) holds, the solution η(t, ·) of (5) is given by (6). Moreover, Ε[η] = 0. In particular, if ξ(t, ·) = ∑∞j=1 wj Φj(·) ς̇j(t), where −νΔΦj = λjΦj with Φj|∂D = 0, then the OU process η(t, ·) is given by (7).

From the definition of the covariance function Cov(⋅, ⋅), we have that the covariance of the OU process η(t, ·) at each point of D is given by

so that

(19)

Setting t = s in (19), we see that the variance of the OU process η(t, ·) at any point of D is given by

(20)

where Var[⋅] denotes the variance.

For a normally distributed random variable X ∼ 𝒩(0, σ²), one has

(21)

Note that the OU process η is a Gaussian process. If D is a bounded region, then, based on Hölder's inequality and (21), we observe that

Therefore, if p = 2 and Φj ∈ L4(D) for all j, we obtain

In particular, for D the unit square, we have from (13) that Φj ∈ L4(D), so that η is well defined in the space {η : Ε[‖η‖L4(D)] < ∞}.

2.2.1 Karhunen-Loève Representation of the OU Process

We specialize, to the case of the OU process, results that hold for general random fields having continuous covariance functions that are symmetric and positive definite. The key step in deriving these results is the application, at each point of D, of Mercer's theorem with respect to the variables t and s; see, e.g., [30] for details.

Consider the problem of determining pairs {τi(·), ψi(t, ·)}∞i=1 that satisfy

(22)

From (19), we have that Cov(η(t, ·), η(s, ·)) is positive definite and symmetric with respect to t and s. It can then be shown that such a set {τi(·), ψi(t, ·)}∞i=1 exists and satisfies

and

It easily follows that Cov(η(t, ·), η(s, ·)) has the eigendecomposition

from which one has

It also follows that the OU process η(t, ·) has the Karhunen-Loève expansion

(23)

where {ζi}∞i=1 are uncorrelated standard Gaussian variables; (23) is simply the decomposition of η(t, ·) in terms of the eigenpairs {τi(·), ψi(t, ·)}∞i=1 of its covariance function. Thus, the KL expansion (23) provides a second way, in addition to (7), of expressing the OU process.

2.2.2 Approximation of the Ornstein-Uhlenbeck Process

A computable approximation of the OU process can be defined by truncating its KL expansion (23). Thus, one determines the dominant NKL eigenpairs {τi(·), ψi(t, ·)}NKLi=1 of the covariance function (19) and then defines the truncated KL approximation

(24)

Of course, the KL eigenpairs can themselves only be determined approximately; one can do so by discretizing (22) using a finite element method for the spatial discretization and a quadrature rule for approximating the integral. We note that truncated KL expansions are in common use for reduced-order modeling (ROM) of deterministic systems [31, 32]. Most of the properties of KL approximations for deterministic ROM also hold in the current stochastic setting. For example, the Galerkin projection of η(t, ·) onto the space spanned by the basis functions {ψi(t, ·)}NKLi=1 is, in the Fourier sense, a best approximation [30].
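At a fixed spatial point, the eigenproblem (22) in time can be discretized by the Nyström method: evaluate the covariance on a time grid, weight by a quadrature rule, and solve the resulting symmetric matrix eigenproblem. A sketch using the covariance of a single scalar OU mode started at zero (the parameters theta = λj + a and sig are illustrative; the paper's covariance (19) superposes such terms over the modes):

```python
import numpy as np

def ou_cov(t, s, theta, sig):
    """Covariance of dX = -theta X dt + sig dW with X(0) = 0:
    sig^2/(2 theta) * (exp(-theta|t-s|) - exp(-theta(t+s)))."""
    return sig**2 / (2.0 * theta) * (np.exp(-theta * np.abs(t - s))
                                     - np.exp(-theta * (t + s)))

def kl_eigenpairs(T, n, theta, sig):
    """Nystrom discretization of the eigenproblem (22) in time,
    int_0^T Cov(t,s) psi(s) ds = tau psi(t), via midpoint quadrature."""
    t = (np.arange(n) + 0.5) * T / n       # midpoint nodes
    w = T / n                              # equal midpoint weights
    C = ou_cov(t[:, None], t[None, :], theta, sig)
    tau, V = np.linalg.eigh(w * C)         # w*C stays symmetric (equal weights)
    idx = np.argsort(tau)[::-1]            # sort eigenvalues descending
    # V is orthonormal in the discrete l2 sense; dividing by sqrt(w) gives
    # eigenfunctions normalized so that int_0^T psi_i^2 dt ~ 1
    return tau[idx], V[:, idx] / np.sqrt(w), t

tau, psi, t = kl_eigenpairs(T=3.0, n=400, theta=2.0, sig=1.0)
print(tau[:4])      # rapid decay of the KL eigenvalues
print(tau.sum())    # equals int_0^T Var[X(t)] dt up to quadrature error
```

The trace identity printed last (sum of eigenvalues equals the integrated variance) is a convenient sanity check on the discretization.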

The effectiveness of (24) as an approximation of the OU process η(t, ·) depends, of course, on the rate of decay of the KL eigenvalues τi(·). Figure 3 depicts the L2(D) norm of the first 25 KL eigenvalues {τi(·)}25i=1 corresponding to the OU covariance function (19) for σ = 1. That figure also depicts, for NKL = 1, …, 25, the corresponding energy ratio

(25)

that is used to measure the amount of information or "energy" of η "captured" by the first NKL orthonormal basis functions, i.e., by {ψi}NKLi=1. The denominator in (25) is approximated by summing over 1000 ≫ 25 terms. Clearly, ‖τi(·)‖L2(D) is nearly zero for i > 10, which implies that ∑∞i=1 ‖τi‖L2(D) could be well approximated by summing only the leading 10 terms of {‖τi(·)‖L2(D)}∞i=1, not to mention 1000 terms.

FIG. 3: The L2(D) norm of the first 25 KL eigenvalues and the energy ratio (25) for a = 0 and wl = 1/|l|^{2}_{ℓ2} (circles) and for a = 1 and wl = 1/|l|^{3/2}_{ℓ2} (asterisks).

One can use (25) to select the value of NKL in the KL approximation (24): one simply selects a desired value ê for the energy ratio and then chooses the smallest integer NKL such that eNKL given by (25) is greater than ê. Our simulations result in e5 > 0.96 for a = 0 and wl = 1/|l|^{2}_{ℓ2} and e10 > 0.97 for a = 1 and wl = 1/|l|^{3/2}_{ℓ2}. Clearly, most of the energy of the OU process η(t, ·) can be captured by a very limited number of eigenfunctions, demonstrating the feasibility of the KL approximation (24) with relatively small values of NKL. The importance of this observation is that the infinite-dimensional OU process η(t, ·) can be accurately approximated by the NKL-dimensional process ηNKL(t, ·), i.e., by a process involving only the NKL uncorrelated variables {ζi}NKLi=1. As a result, the computational cost of simulating a stochastic dynamical system involving the random input η(t, ·), e.g., the system (8), can be substantially reduced by using the relatively low-dimensional approximation that (24) provides.
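The truncation rule based on (25) is simple to implement: given the (norms of the) KL eigenvalues, pick the smallest NKL whose cumulative share exceeds a target ê. A sketch with an illustrative algebraically decaying spectrum (a stand-in, not the paper's computed τi):

```python
import numpy as np

def choose_n_kl(tau_norms, e_hat):
    """Smallest N_KL whose energy ratio (25),
    e_N = sum_{i<=N} tau_i / sum_i tau_i, exceeds e_hat."""
    ratios = np.cumsum(tau_norms) / np.sum(tau_norms)
    return int(np.searchsorted(ratios, e_hat) + 1), ratios

# illustrative spectrum with 1/i^2-type decay
tau = 1.0 / np.arange(1, 1001) ** 2
n_kl, ratios = choose_n_kl(tau, 0.96)
print(n_kl, ratios[n_kl - 1])   # first truncation level exceeding 96% energy
```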

To obtain a realization of the approximate OU process (24), one merely has to sample values of the random variables {ζi}NKLi=1. In Sections 3 and 4, we use such realizations as inputs for discretized versions of the modified equation (8) so as to obtain approximate realizations of the random field v(t, ·). In this section, we examine the accuracy of statistical information, specifically statistical moments, obtained using (24) as an approximation of the OU process. The rth statistical moment is approximated by evaluating an NKL-dimensional integral, i.e., we have

(26)

where dρ refers to the Gaussian measure. One has to further approximate (26) by discretizing the integral. For example, approximations of the statistical moments of the OU process may be determined using an NKL-dimensional quadrature rule to approximate the integral appearing in (26). Let {ωq}Qq=1 denote the quadrature weights corresponding to the quadrature points {ζq}Qq=1, where Q denotes the number of quadrature points and ζq ∈ ℝNKL. Then, we obtain the approximation

(27)

for the rth statistical moment of ηNKL(t, ·), where {ζi,q}NKLi=1 denote the components of ζq.

The accuracy of the approximation defined in (27) depends not only on how many terms are kept, i.e., on NKL, but also on the quadrature rule used. For moderate values of NKL (certainly for NKL ≤ 10), using an appropriate NKL-dimensional Smolyak quadrature rule is a good option; in our setting, a Smolyak rule based on the one-dimensional Gauss-Hermite quadrature rule is appropriate. See, e.g., [18, 33] for details about the Smolyak quadrature points and weights. In this case, we refer to the approximation (27) as the KL-Smolyak, or more succinctly, the KL-S approximation of the OU process. In the sequel, QkNKL denotes the Smolyak quadrature rule with NKL random inputs and level-k accuracy.
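The moments (27) are Gaussian-weighted integrals, so the one-dimensional building block of the Smolyak rule is Gauss-Hermite quadrature. A sketch checking that the probabilists' rule reproduces the moments Ε[ζ^r] of a standard Gaussian (assembly of the full sparse rule QkNKL is omitted; see [18, 33]):

```python
import numpy as np

def gaussian_moment(r, n_pts):
    """Approximate E[zeta^r], zeta ~ N(0,1), with an n_pts-point Gauss-Hermite
    rule for the probabilists' weight exp(-x^2/2)."""
    x, w = np.polynomial.hermite_e.hermegauss(n_pts)
    w = w / np.sqrt(2.0 * np.pi)     # normalize so the weights sum to 1
    return float(np.sum(w * x**r))

# an n-point Gauss rule is exact for polynomials of degree up to 2n - 1,
# so 5 points recover E[zeta] = 0, E[zeta^2] = 1, E[zeta^3] = 0, E[zeta^4] = 3
print([gaussian_moment(r, 5) for r in (1, 2, 3, 4)])
```

The exactness of the rule for low-degree polynomials is what makes the odd moments vanish to machine precision in the experiments reported below.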

To examine the accuracy and efficiency of the KL-S approximation, we compare it with a direct approximation of the defining system (5) for the OU process. Note that the latter requires approximating the white noise random input field ξ(t, ·), e.g., by an MC approximation of the type (12). Of course, we would not use such an approximation of the OU process to solve (8) because, if one is willing to discretize ξ(t, ·), one can discretize (1) directly with no need to introduce the OU process; in fact, this is exactly what we want to avoid. So, we solve (5) directly using a white noise discretization only in order to evaluate the efficacy of the KL-S approximation.

Determining the KL-S approximation of the OU process requires the "off-line" determination of the NKL dominant KL eigenpairs {τi(·), ψi(t, ·)}NKLi=1. After that, obtaining statistical information about the approximate OU process simply requires the evaluation of (27). Note that because one can determine the covariance function of the OU process exactly, one completely avoids having to solve the PDE system (5) when determining the statistics of the KL-S approximation of the OU process. This should be contrasted with a direct discretization of (5) by an MC method, for which one has to solve that system of PDEs to obtain realizations of the approximate OU process.

We introduce the following error measures for the rth statistical moments of the OU process η(t, ):

(28)

and

(29)

where (⋅)MC refers to the direct solution of (5) by a Monte Carlo method, whereas (⋅)KL refers to our KL-S approach. Note that Ε[η^r] can be determined exactly because we have an explicit solution for the OU process η(t, ·).

Figures 4 and 5 provide plots of the errors ET,rMC and ET,rKL for r = 1, 2, 3, 4. For the KL error, the level 3 Gauss-Hermite Smolyak rule Q3NKL is used with NKL = 1, …, 11. The errors are plotted against the total number of points sampled, i.e., the total number of realizations of the approximate OU processes used to estimate the errors. The two figures correspond to the two choices a = 0 and wl = 1/|l|^{2}_{ℓ2} (Fig. 4) and a = 1 and wl = 1/|l|^{3/2}_{ℓ2} (Fig. 5). The results indicate that, as with the use of KL expansions in deterministic settings, the efficiency of the KL expansion is determined by the decay rate of the eigenvalues τi(·); also, the accuracy of the moments improves with the dimensionality of the random inputs. For the first and third statistical moments, whose exact values are identically zero, the KL expansion provides an excellent approximation (errors < 10^{−15}) by the properties of the quadrature rule; the dominant errors are then the round-off errors at the quadrature points, which explains the behavior of ET,rKL for r = 1, 3.

FIG. 4: Comparison of the KL-S (left) and MC (right) approximations of the OU process η(t, ·) with a = 0 at T = 3 for wl = 1/|l|^{2}_{ℓ2}.

FIG. 5: Comparison of the KL-S (left) and MC (right) approximations of the OU process η(t, ·) with a = 1 at T = 3 for wl = 1/|l|^{3/2}_{ℓ2}.

Note also the stability of the KL approximation. Taking the fourth moment as an example, in Fig. 4 the error ET,4MC computed for sample size 400 is smaller than the ones obtained with 500 or even 700 samples, whereas ET,4KL decreases monotonically as the number of samples increases.

Having demonstrated the high accuracy of the KL-S approximation of the OU process, we proceed in the next two sections to use that approximation to determine approximate solutions of the given system (1) by instead solving the modified equation (8).

3. STOCHASTIC HEAT EQUATIONS

In this section, we use the OU process to transform a stochastic heat equation driven by white noise into one driven by the OU process. As a result, to determine approximate solutions of the stochastic heat equation, we can use the KL approximation (24) of the OU process to effect a discretization of the noise instead of relying on a discretization of white noise.

Consider the stochastic heat equation for the random field u(t, ) given by

(30)

where ξ(t, ·) is the white noise (in time) random field given in (2). Of course, (30) bears a strong resemblance to the defining equation (5) for the OU process, except for the nonzero deterministic data functions g, ƒ, and u0 in (30) and the term −aη in (5). Thus, we will use the OU process, for which the data functions vanish and for which one can explicitly write the correlation function (see Section 2.2), so that one can define KL approximations.

Let u(t, ·) = v(t, ·) + η(t, ·), where η(t, ·) denotes the OU process satisfying (5). Then, (30) is equivalent to

(31)

Unlike (30), which is driven by the white noise random field ξ(t, ·), (31) is driven by the correlated OU process η(t, ·). We then discretize the noise η(t, ·) using its truncated KL expansion ηNKL(t, ·) given in (24).

The simple example we use has the unit square as the spatial domain D, with T = 1, ν = 0.001, ƒ(t, ·) = 2(1 + νπ²)e^{2t} sin(πx1) sin(πx2), g(t, ·) = 0, u0(·) = sin(πx1) sin(πx2), and noise parameters wl = 1/|l|^{3/2}_{ℓ2} and σ = 0.5. The corresponding exact expected value of the solution u of (30) is clearly Ε[u] = e^{2t} sin(πx1) sin(πx2).

Temporal discretization is effected using the Crank-Nicolson scheme with Δt = 5 × 10−3, whereas a continuous piecewise-quadratic finite element method based on a uniform triangulation of D is used for spatial discretization; the grid size h ≤ 0.007.
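The paper uses piecewise-quadratic finite elements on the unit square; the sketch below instead uses a 1D finite difference analogue with the corresponding manufactured solution, purely to illustrate the Crank-Nicolson time stepping (the grid and the equation u_t − νu_xx = f are illustrative):

```python
import numpy as np

def cn_heat_1d(nu, T, M, n, f, u0):
    """Crank-Nicolson for u_t - nu*u_xx = f on (0,1) with zero Dirichlet data.
    Second-order finite differences in space on n interior nodes."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)                 # interior nodes
    dt = T / M
    A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) * nu / h**2
    I = np.eye(n)
    Lhs = I + 0.5 * dt * A
    Rhs = I - 0.5 * dt * A
    u = u0(x)
    for k in range(M):
        t_mid = (k + 0.5) * dt                     # midpoint forcing, 2nd order
        u = np.linalg.solve(Lhs, Rhs @ u + dt * f(t_mid, x))
    return x, u

nu = 0.001
f = lambda t, x: (2.0 + nu * np.pi**2) * np.exp(2.0 * t) * np.sin(np.pi * x)
u0 = lambda x: np.sin(np.pi * x)
x, u = cn_heat_1d(nu, T=1.0, M=200, n=63, f=f, u0=u0)
exact = np.exp(2.0) * np.sin(np.pi * x)            # manufactured: e^{2t} sin(pi x)
print(np.max(np.abs(u - exact)))                   # small discretization error
```

The same scheme applied in two dimensions with a finite element stiffness matrix gives the time discretization described above.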

We use the MC method to approximately solve the given stochastic heat equation (30) with a sample size of 500. Specifically, in addition to the temporal and spatial discretization, we discretize the white noise (in time) random field ξ(t, ·) as in (12) and then perform 500 realizations of the resulting discrete system. Each of the 500 realizations requires the sampling of the MN standard Gaussian parameters αj,k. This provides a direct approach toward approximating the solution of (30).

We also use the indirect approach of solving for an approximation of the solution v of (31) and then adding to it the KL approximation of the OU process to obtain an approximation of the solution u of (30). To obtain the approximation of v, we use the Q35 Smolyak rule, for which the resulting sample size is 61.

Results for the two approaches are presented in Fig. 6. Specifically, for the comparison, we compute the error measures ErMC(t) and ErKL(t) for the random field u(t, ·) defined in (28) with r = 1. The results suggest that, compared to the MC method, the KL expansion combined with a Smolyak rule can approximate the expected value of the solution of (30) effectively by solving a relatively small, low-dimensional system.

FIG. 6: Comparison of the error measurement for the MC (left) and KL-S (right) methods for determining approximation of solutions of the stochastic heat equation; 500 MC samples are taken for the left plot and 5 KL modes and a level 3 accurate Smolyak rule (resulting in 61 samples) is used for the right plot.

We also compare the rth statistical moments, r = 1, 2, 3, 4, obtained by the MC method and by the KL-S expansion by defining the difference measure

(32)

where Ε[(uMC)r] and Ε[(uKL)r] denote the rth statistical moments obtained by the MC method and the KL-S expansion, respectively. In Fig. 7, plots of eTr are given with the KL-S approximation fixed to be Q35 and for different MC sample sizes. For our example problem, an examination of Fig. 7 shows the reduction of the difference eTr as the sample size grows. The MC solutions tend toward the exact stochastic solution as the sample size increases, and Fig. 7 shows that the MC approximation also approaches the KL-based approximation as the MC sample size increases. Thus, the effectiveness of the low-dimensional approximation based on the KL expansion is verified for the higher-order statistical moments as well.

FIG. 7: The difference eTr given in (32) for r = 1, 2, 3, 4 (left to right and top to bottom). The MC approximations uMC of solutions of (30) use sample sizes N = 50, 100, 150, …, 500; the KL-S approximation is obtained using 5 KL modes and a level 3 accurate Smolyak rule.

4. STOCHASTIC NAVIER-STOKES PROBLEM

In this section, we use the OU process to transform a two-dimensional stochastic Navier-Stokes (SNS) system [34] driven by white noise into one driven by the OU process. Because of the continuity equation and the nonlinear convection terms in the Navier-Stokes equation, the OU process appears in the modified equations in ways other than as a simple additive forcing term.

Letting u = (u, v), P, and θ denote velocity, pressure, and temperature fields, respectively, we consider the nondimensionalized system holding in a bounded domain D ⊂ ℝ2 over the time interval [0, T]

(33)

along with boundary conditions. In (33), ξ = (ξ1, ξ2) is a white noise (in time) vector with independent components defined by (15), σ = diag(σ1, σ2) accounts for possibly different variances in different directions, and κ and ν represent the temperature diffusivity and the fluid viscosity, respectively. For simplicity, we consider only one-way temperature-momentum coupling.

In our computational example, we again assume that the domain is a square and that the velocity field u vanishes on the boundary. The initial condition for u is given indirectly via the vorticity ω = vx − uy, which is initially set to

(34)

where γ > 0 is a constant, I() = {1 + ε[cos(4πx) − 1]}, and C is a constant such that ∫D ω(x, y)dxdy = 0. Then, the initial condition for u = (u1, u2) is determined by solving

A plot of the initial velocity is given in Fig. 8. A horizontal shear layer is centered at y = 0.5, with a width determined by the parameters δ and ε. According to [34], as δ tends to zero, the initial vorticity ω becomes a flat vortex sheet with perturbations whose size is determined by ε. In our computational examples, we set γ = 0.005, δ = 0.025, and ε = 0.3.

FIG. 8: Initial velocity (left) and temperature (right). The sharp transition between temperature layers is due to the small value of δ.
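Recovering a divergence-free velocity from a prescribed vorticity can be sketched via a stream function. The example below is a simplified stand-in for the solve described above: it assumes periodic boundary conditions and an FFT-based Poisson solve, and uses an illustrative vorticity field rather than (34); the paper's no-slip setting would instead call for, e.g., a finite element solve.

```python
import numpy as np

# Minimal sketch of vorticity -> velocity recovery via a stream function,
# assuming PERIODIC boundary conditions (unlike the no-slip setting of the
# paper) so that an FFT Poisson solve applies.
n = 64
x = np.linspace(0.0, 1.0, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")

# An illustrative smooth, zero-mean vorticity field (NOT Eq. (34)).
omega = np.sin(2 * np.pi * X) * np.cos(2 * np.pi * Y)

# Solve -Laplace(psi) = omega in Fourier space, then set
# u = d(psi)/dy, v = -d(psi)/dx, so that v_x - u_y = -Laplace(psi) = omega.
k = 2 * np.pi * np.fft.fftfreq(n, d=1.0 / n)
KX, KY = np.meshgrid(k, k, indexing="ij")
k2 = KX**2 + KY**2
k2[0, 0] = 1.0                      # avoid division by zero (mean mode)
psi_hat = np.fft.fft2(omega) / k2
psi_hat[0, 0] = 0.0                 # zero-mean stream function

u = np.real(np.fft.ifft2(1j * KY * psi_hat))
v = np.real(np.fft.ifft2(-1j * KX * psi_hat))

# Check: the recovered field is divergence-free and reproduces omega.
div = np.real(np.fft.ifft2(1j * KX * np.fft.fft2(u) + 1j * KY * np.fft.fft2(v)))
curl = np.real(np.fft.ifft2(1j * KX * np.fft.fft2(v) - 1j * KY * np.fft.fft2(u)))
print(np.abs(div).max(), np.abs(curl - omega).max())
```

By construction u = ψ_y, v = −ψ_x is exactly divergence-free, so the only approximation is in the Poisson solve itself.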

The initial condition for temperature is given as

where Hδ(x) is the mollified Heaviside function

Figure 8 shows that θ0 has four smoothly connected layers symmetric with respect to y = 0.5. The thickness of the interface between any two adjacent layers is δ.
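A mollified Heaviside function can be realized in several ways; the sketch below uses a tanh profile as one common choice (the paper's exact formula is not reproduced in this excerpt) and illustrates how δ controls the interface thickness:

```python
import numpy as np

# One common mollified Heaviside (an illustrative choice, not necessarily
# the paper's H_delta): a tanh profile whose transition width scales
# with delta.
def heaviside_mollified(x, delta):
    return 0.5 * (1.0 + np.tanh(x / delta))

delta = 0.025                        # the value used in the paper's tests
x = np.linspace(-0.5, 0.5, 1001)
h = heaviside_mollified(x, delta)    # ~0 for x << -delta, ~1 for x >> delta
```

Layered initial data such as θ0 are then built by adding and subtracting shifted copies of Hδ, which keeps the interfaces smooth on the scale δ.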

Instead of directly discretizing the SNS system (33), which would necessitate approximating the white noise random vector field ξ(t, ·), we set u = v + η and instead discretize the modified system

∂v/∂t + [(v + η) · ∇](v + η) + ∇P = νΔ(v + η) + aη
∇ · (v + η) = 0                                          (35)
∂θ/∂t + (v + η) · ∇θ = κΔθ

where each component of η = (η1, η2) satisfies the OU system (5). In addition, v vanishes on the boundary.

In our computational examples, we choose ν = κ = 0.001, σ1 = σ2 = 0.05, and T = 1.3. The system (35) is solved with a = 1, approximating the components of η by the KL-S expansion using the Q34 Smolyak rule, which results in a sample size of 41. A finite element discretization in space combined with Crank-Nicolson time stepping is used. The mean and the variance of both u and θ are plotted in Figs. 9 and 10, respectively. From these figures, we see that, through convection by the velocity u, the sharp interfaces in the temperature field are smeared and a clockwise rotating vortex forms. The structure of the variance of the temperature is similar to that of its mean. Our results also agree with those of [34] in that the random perturbation ξ acts like an extra random diffusion in the equation for the mean, and the random processes u and θ are clearly not Gaussian.

FIG. 9: Quiver plot of the expected value of the velocity field (left) and the contour plot of the expected value of the temperature (right) at t = 0.5, 1, 1.5.

FIG. 10: Quiver plot of the second moment of the velocity field (left) and the contour plot of the second moment of the temperature (right) at t = 0.5, 1, 1.5.
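The KL-S machinery used above can be sketched for a scalar OU process by diagonalizing its covariance on a time grid (a Nyström-type approximation). The stationary covariance C(s, t) = σ²/(2a) e^{−a|t−s|} is assumed for illustration:

```python
import numpy as np

# Minimal sketch of a truncated Karhunen-Loeve (KL) expansion of a scalar
# OU process, obtained by diagonalizing its covariance matrix on a time
# grid (a Nystrom-type approximation).  For illustration we assume the
# stationary covariance C(s, t) = sigma^2/(2a) * exp(-a|t - s|).
a, sigma, T, n = 1.0, 0.05, 1.3, 400
t = np.linspace(0.0, T, n)
dt = t[1] - t[0]
C = sigma**2 / (2 * a) * np.exp(-a * np.abs(t[:, None] - t[None, :]))

# Discrete eigenproblem for the covariance operator (simple Riemann
# weight dt, for brevity).
lam, phi = np.linalg.eigh(C * dt)
lam, phi = lam[::-1], phi[:, ::-1] / np.sqrt(dt)   # descending, L2-normalized

n_kl = 5                                  # number of retained KL modes
rng = np.random.default_rng(1)
zeta = rng.standard_normal(n_kl)          # independent N(0, 1) variables
eta_kl = phi[:, :n_kl] @ (np.sqrt(lam[:n_kl]) * zeta)   # one KL realization

# Fraction of the total variance captured by the first n_kl modes.
captured = lam[:n_kl].sum() / lam.sum()
print(captured)
```

Sampling then reduces to choosing the few variables ζ, which is what makes a sparse-grid (Smolyak) rule in ζ-space applicable in place of plain Monte Carlo.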

To verify the accuracy of the KL solutions, analogous to (32), we introduce the following difference measurement:

The MC expectations are obtained using 200 realizations. From Fig. 11 one sees that to reach an accuracy similar to that obtained with 41 KL-S realizations, the direct MC approach needs hundreds of realizations. Thus, for the two-dimensional Navier-Stokes/temperature system considered here, it is clear that, at least for short time integrations, the KL-S approximation is more efficient than the MC method.

FIG. 11: The difference eTm for m = 1, 2, 3, 4.
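The slow N^{−1/2} decay of the Monte Carlo error, which is what makes the direct MC approach expensive here, can be illustrated on a scalar OU process whose stationary variance σ²/(2a) is known exactly:

```python
import numpy as np

# Illustration of slow Monte Carlo convergence: estimating the stationary
# variance sigma^2/(2a) of a scalar OU process from N sampled endpoints,
# for increasing sample sizes N.
a, sigma, dt, n_steps = 1.0, 0.05, 1e-3, 5000
decay = np.exp(-a * dt)
noise_std = sigma * np.sqrt((1 - np.exp(-2 * a * dt)) / (2 * a))
rng = np.random.default_rng(2)

def ou_endpoints(n_paths):
    """Sample n_paths independent OU endpoints at t = n_steps * dt."""
    eta = np.zeros(n_paths)
    for _ in range(n_steps):
        eta = decay * eta + noise_std * rng.standard_normal(n_paths)
    return eta

exact = sigma**2 / (2 * a)
errors = {N: abs(ou_endpoints(N).var() - exact) for N in (50, 500, 5000)}
print(errors)   # the error roughly shrinks like N^{-1/2}
```

A hundredfold increase in samples buys only one extra digit of accuracy, which is the cost the KL-S quadrature in few ζ variables avoids.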

5. CONCLUSION

Obtaining precise statistics about solutions of nonlinear SPDEs driven by white noise in the form of (2) is, in general, very costly due to the large number of degrees of freedom needed to represent the random inputs. In this paper, via the OU process (5), we show that the white noise can be "regularized" into a Gaussian colored noise so that a low-dimensional stochastic system (8) can be determined by using the truncated KL-S expansion (24). We consider the stochastic heat equation and the stochastic Navier-Stokes Boussinesq system as our linear and nonlinear examples, respectively, and numerical tests are given. The convergence of the Monte Carlo solutions to the KL-S solutions in both examples suggests the accuracy and effectiveness of our algorithm.

For SPDEs (1) posed on arbitrary spatial domains, the eigenfunctions of 𝒜 must be determined numerically. However, given the low-dimensional approximation of the transformed equation (8) that the truncated KL-S expansion provides, our algorithm remains, in most cases, more efficient than approximating the SPDE (1) directly.

We also note that our approximation is based on the KL expansion at every point in the spatial region, i.e., we approximate η(t, ·) by ηKL(t, ·) at every point in the domain. However, according to the definition of the KL expansion, the derivatives of ηKL(t, ·) may not provide good approximations to the random processes ηx(t, ·) or ηy(t, ·). To avoid this difficulty, in our stochastic Navier-Stokes Boussinesq test, for example, we construct KL approximations of ηx(t, ·) and ηy(t, ·) directly instead of differentiating ηKL(t, ·). This can be understood as constructing a probability space spanned by the random variables {ζ1, ζ2, …, ζNKL} such that the random processes η(t, ·), ηx(t, ·), and ηy(t, ·) can all be well approximated by the KL expansion (24).

We note that much of our approach can be extended to a variety of similar SPDEs, e.g., the Cahn-Hilliard equation and reaction-diffusion equations. Thus, future investigations along these lines are called for.

REFERENCES

1. Allen, E. J., Novosel, S. J., and Zhang, Z., Finite element and difference approximation of some linear stochastic partial differential equations, Stoch. Stoch. Rep., 64:117–142, 1998.

2. Alós, E. and Bonaccorsi, S., Stochastic partial differential equations with Dirichlet white-noise boundary conditions, Ann. Inst. Henri Poincaré, 38(2):125–154, 2002.

3. Coutin, L. and Qian, Z., Stochastic analysis, rough path analysis and fractional Brownian motions, Probab. Theory Relat. Fields, 122:108–140, 2002.

4. Du, Q. and Zhang, T., Numerical approximation of some linear stochastic partial differential equations driven by special additive noises, SIAM J. Numer. Anal., 40(4):1421–1445, 2002.

5. Hausenblas, E. and Marchis, I., A numerical approximation of parabolic stochastic partial differential equations driven by a Poisson random measure, BIT Numer. Math., 46(4):773–811, 2006.

6. Da Prato, G. and Zabczyk, J., Stochastic Equations in Infinite Dimensions, Cambridge University Press, Cambridge, 1992.

7. Flandoli, F., An introduction to 3D stochastic fluid dynamics, in SPDE in Hydrodynamic: Recent Progress and Prospects, Springer, Berlin, 2005.

8. Lord, G. J. and Shardlow, T., Post processing for stochastic parabolic partial differential equations, SIAM J. Numer. Anal., 45(2):870–889, 2007.

9. Jentzen, A. and Kloeden, P. E., Overcoming the order barrier in the numerical approximation of stochastic partial differential equations with additive space-time noise, Proc. R. Soc. London, Ser. A Math. Phys. Eng. Sci., 465:649–667, 2008.

10. Shardlow, T., Weak convergence of a numerical method for a stochastic heat equation, BIT, 43(1):179–193, 2003.

11. Kloeden, P. E., Lord, G. J., Neuenkirch, A., and Shardlow, T., The exponential integrator scheme for stochastic partial differential equations: Pathwise error bounds, J. Comput. Appl. Math., 235(5):1245–1260, 2011.

12. Yan, Y., Galerkin finite element methods for stochastic parabolic partial differential equations, SIAM J. Numer. Anal., 43:1363–1384, 2005.

13. Lord, G. J. and Tambue, A., A modified semi-implicit Euler-Maruyama scheme for finite element discretization of SPDEs, arXiv:1004.1998v2, 2011.

14. Blömker, D. and Jentzen, A., Galerkin approximations for the stochastic Burgers equation, preprint, Institute for Mathematics, Univ. Augsburg, available at http://opus.bibliothek.uni-augsburg.de/volltexte/2009/1444/.

15. Grecksch, W. and Kloeden, P. E., Time-discretised Galerkin approximations of parabolic stochastic PDEs, Bull. Austral. Math. Soc., 54(1):79–85, 1996.

16. Liu, D., Convergence of the spectral method for stochastic Ginzburg-Landau equation driven by space-time white noise, Commun. Math. Sci., 1(2):361–375, 2003.

17. Babuška, I., Nobile, F., and Tempone, R., A stochastic collocation method for elliptic partial differential equations with random input data, SIAM J. Numer. Anal., 45(3):1005–1034, 2007.

18. Bungartz, H. J. and Griebel, M., Sparse grids, Acta Numer., 13:1–123, 2004.

19. Ganapathysubramanian, B. and Zabaras, N., Sparse grid collocation schemes for stochastic natural convection problems, J. Comput. Phys., 225:652–685, 2007.

20. Xiu, D., Efficient collocational approach for parametric uncertainty analysis, Commun. Comput. Phys., 2(2):293–309, 2007.

21. Xiu, D., Numerical integration formulas of degree two, Appl. Numer. Math., 58:1515–1520, 2008.

22. Narayanan, V. and Zabaras, N., Variational multiscale stabilized FEM formulations for transport equations: Stochastic advection-diffusion and incompressible stochastic Navier-Stokes equations, J. Comput. Phys., 202:4–133, 2005.

23. Nobile, F., Tempone, R., and Webster, C. G., An anisotropic sparse grid stochastic collocation method for partial differential equations with random input data, SIAM J. Numer. Anal., 46(5):2411–2442, 2008.

24. Smolyak, S. A., Quadrature and interpolation formulas for tensor products of certain classes of functions, Sov. Math. Dokl., 4:240–243, 1963.

25. Asokan, B. and Zabaras, N., Using stochastic analysis to capture unstable equilibrium in natural convection, J. Comput. Phys., 208:134–153, 2005.

26. Cameron, R. H. and Martin, W. T., The orthogonal development of nonlinear functionals in series of Fourier-Hermite functionals, Ann. Math., 48(2):385–392, 1947.

27. Le Maitre, O. P., Reagan, M. T., Najm, H. N., Ghanem, R. G., and Knio, O. M., A stochastic projection method for fluid flow II.: Random process, J. Comput. Phys., 181:9–44, 2002.

28. Le Maitre, O. P., Ghanem, R., Knio, O., and Najm, H., Uncertainty propagation using Wiener-Harr expansions, J. Comput. Phys., 197:28–57, 2004.

29. Le Maitre, O. P., Reagan, M., Debusschere, B., Najm, H., Ghanem, R., and Knio, O., Natural convection in a closed cavity under stochastic, non-Boussinesq conditions, SIAM J. Sci. Comput., 26:375–394, 2004.

30. Ghanem, R. and Spanos, P. D., Stochastic Finite Elements: A Spectral Approach, Revised Ed., Dover Publications, New York, 2003.

31. Holmes, P., Lumley, J. L., and Berkooz, G., Turbulence, Coherent Structures, Dynamical Systems and Symmetry, 1st ed., Cambridge University Press, New York, 1996.

32. Sirovich, L., Knight, B. W., and Rodriguez, J. D., Turbulence and the dynamics of coherent structures, Parts I–III, Quart. Appl. Math., 45(3):561–590, 1987.

33. Heiss, F. and Winschel, V., Likelihood approximation by numerical integration on sparse grids, J. Econometrics, 144(1):62–80, 2008.

34. Hou, T. Y., Luo, W., Rozovskii, B., and Zhou, H., Wiener Chaos expansions and numerical solutions of randomly forced equations of fluid mechanics, J. Comput. Phys., 216:687–706, 2006.
