
Spectral methods for weakly singular Volterra integral equations with
smooth solutions
Yanping Chen a,∗, Tao Tang b
a School of Mathematical Sciences, South China Normal University, Guangzhou 510631, China
b Department of Mathematics, Hong Kong Baptist University, Kowloon Tong, Hong Kong & Faculty of Science, Beijing University of Aeronautics and Astronautics,
Beijing, China

ARTICLE INFO

Article history:
Received 23 July 2008
Received in revised form 19 May 2009

MSC:
35Q99
35R35
65M12
65M70

Keywords:
Volterra integral equations
Weakly singular kernels
Smooth solutions
Spectral methods
Error analysis

ABSTRACT

We propose and analyze a spectral Jacobi-collocation approximation for the linear Volterra
integral equations (VIEs) of the second kind with weakly singular kernels. In this work,
we consider the case when the underlying solutions of the VIEs are sufficiently smooth.
In this case, we provide a rigorous error analysis for the proposed method, which shows
that the numerical errors decay exponentially in the infinity norm and weighted Sobolev
space norms. Numerical results are presented to confirm the theoretical prediction of the
exponential rate of convergence.

Crown Copyright © 2009 Published by Elsevier B.V. All rights reserved.

1. Introduction

We consider the linear Volterra integral equations (VIEs) of the second kind with weakly singular kernels

y(t) = g(t) + ∫_0^t (t − s)^{−µ} K(t, s) y(s) ds,  t ∈ I,  (1.1)

where I = [0, T], the function g ∈ C(I), y(t) is the unknown function, µ ∈ (0, 1), and K ∈ C(I × I) with K(t, t) ≠ 0 for
t ∈ I. Several numerical methods have been proposed for (1.1) (see, e.g., [1–11]).
The numerical treatment of the VIEs (1.1) is not simple, mainly due to the fact that the solutions of (1.1) usually have a

weak singularity at t = 0, even when the inhomogeneous term g(t) is regular. As discussed in [4], the first derivative of
the solution y(t) behaves like y′(t) ∼ t^{−µ}. In [12], a Jacobi-collocation spectral method is developed for (1.1). To handle the
non-smoothness of the underlying solutions, both function transformation and variable transformation are used to change
the equation into a new Volterra integral equation defined on the standard interval [−1, 1], so that the solution of the new
equation possesses better regularity and the Jacobi orthogonal polynomial theory can be applied conveniently. However,
the function transformation (see also [9]) generally makes the resulting equations and approximations more complicated.
We point out that for (1.1) without the singular kernel (i.e., µ = 0), spectral methods and the corresponding error analysis

∗ Corresponding author.
E-mail addresses: yanpingchen@scnu.edu.cn (Y. Chen), ttang@math.hkbu.edu.hk (T. Tang).

doi:10.1016/j.cam.2009.08.057


have been provided recently [13,14]; see also [15,16] for spectral methods to pantograph-type delay differential equations.
In both cases, the underlying solutions are smooth.
In this work, we will consider a special case, namely, the exact solutions of (1.1) are smooth. This case may occur when

the source function g in (1.1) is non-smooth; see, e.g., Theorem 6.1.11 in [4]. In this case, the Jacobi-collocation spectral
method can be applied directly, and the main purpose of this work is to carry out an error analysis for the spectral method.
Many systems of weakly singular VIEs arising in applications are of large dimension, and there have also been recent
developments for solving such high-dimensional systems; see, e.g., [5–8]. Although problem (1.1) under consideration is
scalar, the proposed methods can be applied to systems of large dimension in a quite straightforward way. We will
demonstrate this for a two-dimensional case in Section 2.
This paper is organized as follows. In Section 2, we introduce the spectral approaches for the Volterra integral equations

with weakly singular kernels. We present some lemmas useful for establishing the convergence results in Section 3. The
convergence analysis is provided in Section 4. Numerical experiments are carried out in Section 5, which will be used to
verify the theoretical results obtained in Section 4.

2. Jacobi-collocation methods

Throughout the paper, C will denote a generic positive constant that is independent of N but may depend on the length T
of the interval I = [0, T ], on bounds for the given functions f and K̃ (defined in (2.5)), and on the index µ.
Let ω^{α,β}(x) = (1 − x)^α (1 + x)^β be a weight function in the usual sense, for α, β > −1. As defined in [17–20], the set of
Jacobi polynomials {J_n^{α,β}(x)}_{n=0}^∞ forms a complete L^2_{ω^{α,β}}(−1, 1)-orthogonal system, where L^2_{ω^{α,β}}(−1, 1) is the weighted space

L^2_{ω^{α,β}}(−1, 1) = { v : v is measurable and ‖v‖_{ω^{α,β}} < ∞ },

equipped with the norm

‖v‖_{ω^{α,β}} = ( ∫_{−1}^1 |v(x)|^2 ω^{α,β}(x) dx )^{1/2},

and the inner product

(u, v)_{ω^{α,β}} = ∫_{−1}^1 u(x) v(x) ω^{α,β}(x) dx,  ∀ u, v ∈ L^2_{ω^{α,β}}(−1, 1).

For a given positive integer N, we denote by {x_i}_{i=0}^N the collocation points, i.e., the set of (N + 1) Jacobi–Gauss, Jacobi–Gauss–Radau, or Jacobi–Gauss–Lobatto points, and by {w_i}_{i=0}^N the corresponding weights. Let P_N denote the space of all polynomials of degree not exceeding N. For any v ∈ C[−1, 1], we define the Lagrange interpolating polynomial I_N^{α,β} v ∈ P_N satisfying

I_N^{α,β} v(x_i) = v(x_i),  0 ≤ i ≤ N;  (2.1)

see, e.g., [17,18,20]. The Lagrange interpolating polynomial can be written in the form

I_N^{α,β} v(x) = Σ_{i=0}^N v(x_i) F_i(x),

where F_i(x) is the Lagrange interpolation basis function associated with {x_i}_{i=0}^N. In this paper, we deal with the special case α = −µ, β = 0, i.e., ω^{−µ,0}(x) = (1 − x)^{−µ}.

2.1. Numerical scheme in one dimension

To apply the theory of orthogonal polynomials, we use the change of variable t = T(1 + x)/2, x = 2t/T − 1, to rewrite the weakly singular VIE (1.1) as

u(x) = f(x) + ∫_0^{T(1+x)/2} ( T(1 + x)/2 − s )^{−µ} K( T(1 + x)/2, s ) y(s) ds,  (2.2)

where x ∈ [−1, 1], and

u(x) = y( T(1 + x)/2 ),  f(x) = g( T(1 + x)/2 ).  (2.3)

Furthermore, to transfer the integration interval [0, T(1 + x)/2] to the interval [−1, x], we make the linear transformation s = T(1 + τ)/2, τ ∈ [−1, x]. Then Eq. (2.2) becomes

u(x) = f(x) + ∫_{−1}^x (x − τ)^{−µ} K̃(x, τ) u(τ) dτ,  x ∈ [−1, 1],  (2.4)

where

K̃(x, τ) = (T/2)^{1−µ} K( T(1 + x)/2, T(1 + τ)/2 ).  (2.5)

Firstly, Eq. (2.4) holds at the collocation points {x_i}_{i=0}^N on [−1, 1]:

u(x_i) = f(x_i) + ∫_{−1}^{x_i} (x_i − τ)^{−µ} K̃(x_i, τ) u(τ) dτ,  0 ≤ i ≤ N.  (2.6)

In order to obtain high-order accuracy, the main difficulty is to compute the integral term in (2.6). In particular, for small values of x_i, there is little information available for u(τ). To overcome this difficulty, we first transfer the integration interval [−1, x_i] to the fixed interval [−1, 1],

∫_{−1}^{x_i} (x_i − τ)^{−µ} K̃(x_i, τ) u(τ) dτ = ( (1 + x_i)/2 )^{1−µ} ∫_{−1}^1 (1 − θ)^{−µ} K̃(x_i, τ_i(θ)) u(τ_i(θ)) dθ,  (2.7)

by using the change of variable

τ = τ_i(θ) = (1 + x_i)/2 · θ + (x_i − 1)/2,  θ ∈ [−1, 1].  (2.8)

Next, using an (N + 1)-point Gauss quadrature formula relative to the Jacobi weight, with weights {w_k}_{k=0}^N, the integral in (2.7) can be approximated by

∫_{−1}^1 (1 − θ)^{−µ} K̃(x_i, τ_i(θ)) u(τ_i(θ)) dθ ≈ Σ_{k=0}^N K̃(x_i, τ_i(θ_k)) u(τ_i(θ_k)) w_k,  (2.9)

where the set {θ_k}_{k=0}^N coincides with the collocation points {x_i}_{i=0}^N on [−1, 1]. We use u_i, 0 ≤ i ≤ N, to approximate the function values u(x_i), 0 ≤ i ≤ N, and use

u^N(x) = Σ_{j=0}^N u_j F_j(x)  (2.10)

to approximate the function u(x), namely,

u(x_i) ≈ u_i,  u(x) ≈ u^N(x),  u(τ_i(θ_k)) ≈ Σ_{j=0}^N u_j F_j(τ_i(θ_k)).  (2.11)

Then the Jacobi-collocation method is to seek u^N(x) such that {u_i}_{i=0}^N satisfies the collocation equations

u_i = f(x_i) + ( (1 + x_i)/2 )^{1−µ} Σ_{j=0}^N u_j ( Σ_{k=0}^N K̃(x_i, τ_i(θ_k)) F_j(τ_i(θ_k)) w_k ),  0 ≤ i ≤ N.  (2.12)

It follows from (2.3) that the exact solution of the VIE (1.1) can be written as

y(t) = y( T(1 + x)/2 ) = u(x),  t ∈ [0, T] and x ∈ [−1, 1].  (2.13)

Thus, we define

y^N(t) = y^N( T(1 + x)/2 ) = u^N(x),  t ∈ [0, T] and x ∈ [−1, 1],  (2.14)

as the approximate solution of the VIE (1.1). It is clear that

(y − y^N)(t) = (u − u^N)(x) := e(x),  t ∈ [0, T] and x ∈ [−1, 1].  (2.15)
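The collocation equations (2.12) form an (N + 1) × (N + 1) linear algebraic system for the nodal values {u_i}. As an illustration only (a sketch under our own naming, not the authors' code), the system can be assembled and solved in Python with the Gauss–Jacobi nodes and weights provided by scipy.special.roots_jacobi; the helper names lagrange_matrix and solve_vie are hypothetical, and K̃ is assumed to be vectorized in its second argument.

```python
import numpy as np
from scipy.special import roots_jacobi

def lagrange_matrix(nodes, pts):
    """Evaluate the Lagrange basis built on `nodes` at `pts`:
    returns L with L[p, j] = F_j(pts[p]), via barycentric weights."""
    x, z = np.asarray(nodes), np.asarray(pts)
    diff = x[:, None] - x[None, :]
    np.fill_diagonal(diff, 1.0)
    bw = 1.0 / diff.prod(axis=1)              # barycentric weights
    L = np.empty((len(z), len(x)))
    for p, zp in enumerate(z):
        d = zp - x
        if np.any(d == 0.0):                  # zp coincides with a node
            L[p] = (d == 0.0).astype(float)
        else:
            t = bw / d
            L[p] = t / t.sum()
    return L

def solve_vie(f, Ktil, mu, N):
    """Sketch of the collocation scheme (2.12) for
    u(x) = f(x) + int_{-1}^{x} (x - t)^(-mu) Ktil(x, t) u(t) dt."""
    x, w = roots_jacobi(N + 1, -mu, 0.0)      # Jacobi-Gauss points/weights for (1 - x)^(-mu)
    A = np.zeros((N + 1, N + 1))
    for i in range(N + 1):
        tau = 0.5 * (1 + x[i]) * x + 0.5 * (x[i] - 1)   # tau_i(theta_k), cf. (2.8)
        F = lagrange_matrix(x, tau)                     # F_j(tau_i(theta_k))
        A[i] = (0.5 * (1 + x[i]))**(1 - mu) * (w * Ktil(x[i], tau)) @ F
    U = np.linalg.solve(np.eye(N + 1) - A, f(x))        # (I - A) U_N = F_N, cf. (2.12)
    return x, U
```

For a problem posed on [0, T], one first maps it to [−1, 1] via (2.2)–(2.5) and passes the transformed data f and K̃ to this routine; a concrete instance is given for Example 5.1 below.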
2.2. Two-dimensional extension

Problem (1.1) is scalar; however, many applications involve systems of weakly singular VIEs of high dimension. The spectral methods proposed in the last subsection generalize to large systems of VIEs and to higher dimensions. To demonstrate this, we briefly outline how to solve the second-kind VIE in two dimensions:

y(s, t) = g(s, t) + ∫_0^s ∫_0^t (s − σ)^{−α} (t − τ)^{−β} K(s, t, σ, τ) y(σ, τ) dσ dτ,  (2.16)

where (s, t) ∈ [0, T]^2. By using linear transformations as in Section 2.1, Eq. (2.16) becomes

u(x, y) = f(x, y) + ∫_{−1}^x ∫_{−1}^y (x − ξ)^{−α} (y − η)^{−β} K̃(x, y, ξ, η) u(ξ, η) dξ dη,  (2.17)

where (x, y) ∈ [−1, 1]^2 and

K̃(x, y, ξ, η) = (T/2)^{2−α−β} K( T(1 + x)/2, T(1 + y)/2, T(1 + ξ)/2, T(1 + η)/2 ).

For the weight function ω^{−α,0}(x), we denote by {x_i}_{i=0}^N the collocation points, i.e., the set of (N + 1) Jacobi–Gauss, Jacobi–Gauss–Radau, or Jacobi–Gauss–Lobatto points, and by {w_i^{(1)}}_{i=0}^N the corresponding weights. Similarly, for the weight function ω^{−β,0}(y), we denote by {y_j}_{j=0}^N the collocation points and by {w_j^{(2)}}_{j=0}^N the corresponding weights. Assume that Eq. (2.17) holds at the Jacobi-collocation point pairs (x_i, y_j). Using the linear transformations and techniques of the one-dimensional case yields

u_{i,j} = f(x_i, y_j) + ( (1 + x_i)/2 )^{1−α} ( (1 + y_j)/2 )^{1−β} Σ_{k=0}^N Σ_{l=0}^N u_{k,l} a_{k,l},  (2.18)

where

a_{k,l} = Σ_{m=0}^N Σ_{n=0}^N K̃(x_i, y_j, ξ_i(θ_m), η_j(θ_n)) F_k(ξ_i(θ_m)) F_l(η_j(θ_n)) w_m^{(1)} w_n^{(2)},

ξ_i(θ_m) = (1 + x_i)/2 · θ_m + (x_i − 1)/2,  η_j(θ_n) = (1 + y_j)/2 · θ_n + (y_j − 1)/2,

for any 0 ≤ i, j ≤ N. A tensor-product assembly of the resulting linear system is sketched below.
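The two-dimensional scheme (2.18) couples all (N + 1)^2 unknowns u_{k,l}. The sketch below is ours, not taken from the paper or from [32]; it assembles the corresponding dense linear system with the same ingredients as in one dimension, reuses the hypothetical lagrange_matrix helper above, and assumes K̃ accepts NumPy arrays (broadcasting).

```python
import numpy as np
from scipy.special import roots_jacobi
# reuses lagrange_matrix from the one-dimensional sketch above

def solve_vie_2d(f, Ktil, alpha, beta, N):
    """Sketch of the 2D collocation scheme (2.18): solve (I - B) U = F."""
    x, w1 = roots_jacobi(N + 1, -alpha, 0.0)   # nodes/weights for the weight omega^{-alpha,0}
    y, w2 = roots_jacobi(N + 1, -beta, 0.0)    # nodes/weights for the weight omega^{-beta,0}
    M = N + 1
    B = np.zeros((M * M, M * M))
    F = np.array([[f(xi, yj) for yj in y] for xi in x]).reshape(M * M)
    for i in range(M):
        xi_ = 0.5 * (1 + x[i]) * x + 0.5 * (x[i] - 1)        # xi_i(theta_m)
        Fx = lagrange_matrix(x, xi_)                         # F_k(xi_i(theta_m))
        for j in range(M):
            eta = 0.5 * (1 + y[j]) * y + 0.5 * (y[j] - 1)    # eta_j(theta_n)
            Fy = lagrange_matrix(y, eta)                     # F_l(eta_j(theta_n))
            Kmn = Ktil(x[i], y[j], xi_[:, None], eta[None, :])   # K~ on the quadrature grid
            c = (0.5 * (1 + x[i]))**(1 - alpha) * (0.5 * (1 + y[j]))**(1 - beta)
            A = c * Fx.T @ (w1[:, None] * Kmn * w2[None, :]) @ Fy   # a_{k,l} for this (i, j)
            B[i * M + j] = A.reshape(M * M)
    U = np.linalg.solve(np.eye(M * M) - B, F)
    return x, y, U.reshape(M, M)
```

For moderate N this dense assembly is affordable; for genuinely large systems one would exploit any structure in K̃ or switch to an iterative solver, which is beyond the scope of this sketch.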
3. Some useful lemmas

We first introduce some weighted Hilbert spaces. For simplicity, denote ∂_x v(x) = (∂/∂x) v(x), etc. For a non-negative integer m, define

H^m_{ω^{α,β}}(−1, 1) := { v : ∂_x^k v ∈ L^2_{ω^{α,β}}(−1, 1), 0 ≤ k ≤ m },

with the semi-norm and the norm

|v|_{m,ω^{α,β}} = ‖∂_x^m v‖_{ω^{α,β}},  ‖v‖_{m,ω^{α,β}} = ( Σ_{k=0}^m |v|^2_{k,ω^{α,β}} )^{1/2},

respectively. Let ω(x) = ω^{−1/2,−1/2}(x) denote the Chebyshev weight function. In bounding some approximation errors of Chebyshev polynomials, only some of the L^2-norms appearing on the right-hand side of the above norm enter into play. Thus, it is convenient to introduce the semi-norms

|v|_{H^{m;N}_ω(−1,1)} = ( Σ_{k=min(m,N+1)}^m ‖∂_x^k v‖^2_{L^2_ω(−1,1)} )^{1/2}.

For bounding some approximation errors of Jacobi polynomials, we need the following non-uniformly weighted Sobolev spaces:

H^m_{ω^{α,β},∗}(−1, 1) := { v : ∂_x^k v ∈ L^2_{ω^{α+k,β+k}}(−1, 1), 0 ≤ k ≤ m },

equipped with the inner product and the norm

(u, v)_{m,ω^{α,β},∗} = Σ_{k=0}^m ( ∂_x^k u, ∂_x^k v )_{ω^{α+k,β+k}},  ‖v‖_{m,ω^{α,β},∗} = √( (v, v)_{m,ω^{α,β},∗} ).

Furthermore, we introduce the orthogonal projection π_{N,ω^{α,β}} : L^2_{ω^{α,β}}(−1, 1) → P_N, which is a mapping such that for any v ∈ L^2_{ω^{α,β}}(−1, 1),

(v − π_{N,ω^{α,β}} v, φ)_{ω^{α,β}} = 0,  ∀ φ ∈ P_N.

It follows from Theorem 1.8.1 in [20] and (3.18) in [18] that the following holds.

Lemma 3.1. Let α, β > −1. Then for any function v ∈ H^m_{ω^{α,β},∗}(−1, 1) and any non-negative integer m, we have

‖∂_x^k ( v − π_{N,ω^{α,β}} v )‖_{ω^{α+k,β+k}} ≤ C N^{k−m} ‖∂_x^m v‖_{ω^{α+m,β+m}},  0 ≤ k ≤ m.  (3.1)

In particular,

‖v − π_{N,ω^{α,β}} v‖_{ω^{α,β}} ≤ C N^{−1} |v|_{1,ω^{α+1,β+1}}.  (3.2)

Applying Theorem 1.8.4 in [20] and Theorems 4.3, 4.7, and 4.10 in [21], we have the following optimal error estimates for the
interpolation polynomials based on the Jacobi–Gauss, Jacobi–Gauss–Radau, and Jacobi–Gauss–Lobatto points.

Lemma 3.2. For any function v satisfying ∂_x v ∈ H^m_{ω^{α,β},∗}(−1, 1), we have, for 0 ≤ k ≤ m,

‖∂_x^k ( v − I_N^{α,β} v )‖_{ω^{α+k,β+k}} ≤ C N^{k−m} ‖∂_x^m v‖_{ω^{α+m,β+m}},  (3.3)

|v − I_N^{α,β} v|_{1,ω^{α,β}} ≤ C ( N(N + α + β) )^{1−m/2} ‖∂_x^m v‖_{ω^{α+m,β+m}}.  (3.4)

Let us define a discrete inner product: for any u, v ∈ C[−1, 1],

(u, v)_N = Σ_{j=0}^N u(x_j) v(x_j) w_j.  (3.5)

Due to (5.3.4) in [17] and Lemmas 3.1 and 3.2, we have the following integration error estimates for the Jacobi–Gauss-type quadrature.

Lemma 3.3. Let v be any continuous function on [−1, 1] and φ any polynomial in P_N. For the Jacobi–Gauss and Jacobi–Gauss–Radau integration, we have

|(v, φ)_{ω^{α,β}} − (v, φ)_N| ≤ ‖v − I_N^{α,β} v‖_{ω^{α,β}} ‖φ‖_{ω^{α,β}} ≤ C N^{−m} ‖∂_x^m v‖_{ω^{α+m,β+m}} ‖φ‖_{ω^{α,β}}.  (3.6)

For the Jacobi–Gauss–Lobatto integration, we have

|(v, φ)_{ω^{α,β}} − (v, φ)_N| ≤ C ( ‖v − π_{N−1,ω^{α,β}} v‖_{ω^{α,β}} + ‖v − I_N^{α,β} v‖_{ω^{α,β}} ) ‖φ‖_{ω^{α,β}} ≤ C N^{−m} ‖∂_x^m v‖_{ω^{α+m,β+m}} ‖φ‖_{ω^{α,β}}.  (3.7)

From [22], we have the following result on the Lebesgue constant for the Lagrange interpolation polynomials associated
with the zeros of the Jacobi polynomials.

Lemma 3.4. Assume α = −µ, β = 0, and let F_j(x), 0 ≤ j ≤ N, be the Lagrange interpolation basis functions associated with the Gauss, Gauss–Radau, or Gauss–Lobatto points of the Jacobi polynomials. Then

‖I_N^{−µ,0}‖_∞ := max_{x∈(−1,1)} Σ_{j=0}^N |F_j(x)| = O(√N).  (3.8)
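The O(√N) growth in (3.8) can be checked numerically. The short experiment below is ours (it reuses the hypothetical lagrange_matrix helper from the sketch in Section 2.1); it approximates the Lebesgue constant on a dense grid and monitors its ratio to √N, which should remain roughly bounded.

```python
import numpy as np
from scipy.special import roots_jacobi
# reuses lagrange_matrix from the sketch in Section 2.1

mu = 0.35
z = np.linspace(-1.0, 1.0, 2001)                 # dense evaluation grid
for N in (8, 16, 32, 64, 128):
    x, _ = roots_jacobi(N + 1, -mu, 0.0)         # Jacobi-Gauss points for omega^{-mu,0}
    leb = np.abs(lagrange_matrix(x, z)).sum(axis=1).max()
    print(N, leb, leb / np.sqrt(N))              # ratio stays roughly bounded if (3.8) holds
```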

The following generalization of Gronwall’s Lemma for singular kernels, whose proof can be found, e.g. in [23] (Lemma
7.1.1), will be essential for establishing our main results.


Lemma 3.5. Suppose L ≥ 0, 0 < µ < 1, and v(t) is a non-negative, locally integrable function defined on [0, T] satisfying

u(t) ≤ v(t) + L ∫_0^t (t − s)^{−µ} u(s) ds.  (3.9)

Then there exists a constant C = C(µ) such that

u(t) ≤ v(t) + C L ∫_0^t (t − s)^{−µ} v(s) ds  for 0 ≤ t < T.  (3.10)

From now on, for r ≥ 0 and κ ∈ [0, 1], C^{r,κ}([−1, 1]) will denote the space of functions whose rth derivatives are Hölder continuous with exponent κ, endowed with the usual norm

‖v‖_{r,κ} = max_{0≤k≤r} max_{x∈[−1,1]} |∂_x^k v(x)| + max_{0≤k≤r} sup_{x,y∈[−1,1], x≠y} |∂_x^k v(x) − ∂_x^k v(y)| / |x − y|^κ.

When κ = 0, C^{r,0}([−1, 1]) denotes the space of functions with r continuous derivatives on [−1, 1], which is also commonly denoted by C^r([−1, 1]), with norm ‖·‖_r.
We shall make use of a result of Ragozin [24,25] (see also [26]), which states that, for a non-negative integer r and κ ∈ (0, 1), there exists a constant C_{r,κ} > 0 such that for any function v ∈ C^{r,κ}([−1, 1]) there exists a polynomial T_N v ∈ P_N such that

‖v − T_N v‖_∞ ≤ C_{r,κ} N^{−(r+κ)} ‖v‖_{r,κ}.  (3.11)

Actually, as stated in [24,25], T_N is a linear operator from C^{r,κ}([−1, 1]) into P_N.
We further define the linear, weakly singular integral operator M by

(Mv)(x) = ∫_{−1}^x (x − τ)^{−µ} K̃(x, τ) v(τ) dτ.  (3.12)

Below we will show that M is compact as an operator from C([−1, 1]) to C^{0,κ}([−1, 1]) provided that the index κ satisfies 0 < κ < 1 − µ. A similar result can be found in Theorem 3.4 of [27]. The proof of the following lemma can be found in [12].

Lemma 3.6. Let κ ∈ (0, 1) and let M be defined by (3.12) under the assumption that 0 < κ < 1 − µ. Then, for any function v ∈ C([−1, 1]), there exists a positive constant C, depending on ‖K̃‖_{0,κ}, such that

|Mv(x′) − Mv(x′′)| / |x′ − x′′|^κ ≤ C max_{x∈[−1,1]} |v(x)|,  (3.13)

for any x′, x′′ ∈ [−1, 1] with x′ ≠ x′′. This implies that

‖Mv‖_{0,κ} ≤ C ‖v‖_∞,  (3.14)

where ‖·‖_∞ is the standard norm in C([−1, 1]).

4. Convergence analysis

4.1. Error estimate in L^∞

Theorem 4.1. Let u be the exact solution of the Volterra integral equation (2.4), which is assumed to be sufficiently smooth. Let the approximate solution u^N be obtained by the spectral collocation scheme (2.12) together with the polynomial interpolation (2.10). If µ associated with the weakly singular kernel satisfies 0 < µ < 1/2 and u ∈ H^m_ω(−1, 1) ∩ H^m_{ω^{−µ,0},∗}(−1, 1), then

‖u − u^N‖_∞ ≤ C N^{1−m} |u|_{H^{m;N}_ω(−1,1)} + C N^{1/2−m} K^* ‖u‖_∞,  (4.1)

for N sufficiently large, where τ_i(θ) is defined by (2.8) and

K^* := max_{0≤i≤N} ‖∂_θ^m K̃(x_i, τ_i(·))‖_{ω^{m−µ,m}}.

Proof. First, we use the weighted inner product to rewrite (2.6) as

u(x_i) = f(x_i) + ( (1 + x_i)/2 )^{1−µ} ( K̃(x_i, τ_i(·)), u(τ_i(·)) )_{ω^{−µ,0}},  0 ≤ i ≤ N.  (4.2)

Using the discrete inner product (3.5), we set

( K̃(x_i, τ_i(·)), φ(τ_i(·)) )_N = Σ_{k=0}^N K̃(x_i, τ_i(θ_k)) φ(τ_i(θ_k)) w_k.

Then, the numerical scheme (2.12) can be written as

u_i = f(x_i) + ( (1 + x_i)/2 )^{1−µ} ( K̃(x_i, τ_i(·)), u^N(τ_i(·)) )_N,  0 ≤ i ≤ N,  (4.3)

where u^N is defined by (2.10). Subtracting (4.3) from (4.2) gives the error equation

u(x_i) − u_i = ( (1 + x_i)/2 )^{1−µ} ( K̃(x_i, τ_i(·)), e(τ_i(·)) )_{ω^{−µ,0}} + ( (1 + x_i)/2 )^{1−µ} I_{i,2}
            = ∫_{−1}^{x_i} (x_i − τ)^{−µ} K̃(x_i, τ) e(τ) dτ + ( (1 + x_i)/2 )^{1−µ} I_{i,2},  (4.4)

for 0 ≤ i ≤ N, where e(x) = u(x) − u^N(x) is the error function,

I_{i,2} = ( K̃(x_i, τ_i(·)), u^N(τ_i(·)) )_{ω^{−µ,0}} − ( K̃(x_i, τ_i(·)), u^N(τ_i(·)) )_N,

and the integral transformation (2.7) was used here. Using the quadrature error estimate of Lemma 3.3, we have

| ( (1 + x_i)/2 )^{1−µ} I_{i,2} | ≤ C N^{−m} ‖∂_θ^m K̃(x_i, τ_i(·))‖_{ω^{m−µ,m}} ‖u^N(τ_i(·))‖_{ω^{−µ,0}}.  (4.5)

Multiplying F_i(x) on both sides of the error equation (4.4) and summing from i = 0 to i = N yields

I_N^{−µ,0} u − u^N = I_N^{−µ,0} ( ∫_{−1}^x (x − τ)^{−µ} K̃(x, τ) e(τ) dτ ) + Σ_{i=0}^N ( (1 + x_i)/2 )^{1−µ} I_{i,2} F_i(x).  (4.6)

Consequently,

e(x) = ∫_{−1}^x (x − τ)^{−µ} K̃(x, τ) e(τ) dτ + I_1 + I_2 + I_3,  (4.7)

where

I_1 = u − I_N^{−µ,0} u,  I_2 = Σ_{i=0}^N ( (1 + x_i)/2 )^{1−µ} I_{i,2} F_i(x),  (4.8a)

I_3 = I_N^{−µ,0} ( ∫_{−1}^x (x − τ)^{−µ} K̃(x, τ) e(τ) dτ ) − ∫_{−1}^x (x − τ)^{−µ} K̃(x, τ) e(τ) dτ.  (4.8b)

It follows from the Gronwall inequality (Lemma 3.5) that

‖e‖_∞ ≤ C ( ‖I_1‖_∞ + ‖I_2‖_∞ + ‖I_3‖_∞ ).  (4.9)

Let I_N^c u ∈ P_N denote the interpolant of u at any of the three families of Chebyshev–Gauss points. From (5.5.28) in [17], the interpolation error estimate in the maximum norm is given by

‖u − I_N^c u‖_∞ ≤ C N^{1/2−m} |u|_{H^{m;N}_ω(−1,1)}.  (4.10)

Note that

I_N^{−µ,0} p(x) = p(x),  i.e.,  (I_N^{−µ,0} − I) p(x) = 0,  ∀ p(x) ∈ P_N.  (4.11)

By using (4.11), Lemma 3.4 and (4.10), we obtain

‖I_1‖_∞ = ‖u − I_N^{−µ,0} u‖_∞ = ‖u − I_N^c u + I_N^{−µ,0}(I_N^c u) − I_N^{−µ,0} u‖_∞
        ≤ ‖u − I_N^c u‖_∞ + ‖I_N^{−µ,0}(I_N^c u − u)‖_∞
        ≤ ( 1 + ‖I_N^{−µ,0}‖_∞ ) ‖u − I_N^c u‖_∞
        ≤ ( 1 + √N ) · C N^{1/2−m} |u|_{H^{m;N}_ω(−1,1)}
        ≤ C N^{1−m} |u|_{H^{m;N}_ω(−1,1)}.  (4.12)
Next, using the estimate (4.5) and Lemma 3.4, we have

‖I_2‖_∞ ≤ C N^{1/2−m} max_{0≤i≤N} ‖∂_θ^m K̃(x_i, τ_i(·))‖_{ω^{m−µ,m}} · max_{0≤i≤N} ‖u^N(τ_i(·))‖_{ω^{−µ,0}}
        ≤ C N^{1/2−m} max_{0≤i≤N} ‖∂_θ^m K̃(x_i, τ_i(·))‖_{ω^{m−µ,m}} · ( ‖e‖_∞ + ‖u‖_∞ )
        ≤ (1/3) ‖e‖_∞ + C N^{1/2−m} max_{0≤i≤N} ‖∂_θ^m K̃(x_i, τ_i(·))‖_{ω^{m−µ,m}} ‖u‖_∞,  (4.13)

provided that N is sufficiently large. We now estimate the third term I_3. It follows from (3.11) and (4.11), and Lemma 3.4, that

‖I_3‖_∞ = ‖(I_N^{−µ,0} − I) Me‖_∞ = ‖(I_N^{−µ,0} − I)(Me − T_N Me)‖_∞
        ≤ ( 1 + ‖I_N^{−µ,0}‖_∞ ) ‖Me − T_N Me‖_∞
        ≤ C √N ‖Me − T_N Me‖_∞
        ≤ C N^{1/2−κ} ‖Me‖_{0,κ} ≤ C N^{1/2−κ} ‖e‖_∞,  (4.14)

where in the last step we have used Lemma 3.6 under the assumption 0 < κ < 1 − µ. It is clear that if κ > 1/2 (which is possible since µ < 1/2), then

‖I_3‖_∞ ≤ (1/3) ‖e‖_∞,  (4.15)

provided that N is sufficiently large. Combining (4.9), (4.12), (4.13) and (4.15) gives the desired estimate (4.1). □

4.2. Error estimate in the weighted L^2 norm

To prove the error estimate in the weighted L^2 norm, we need the generalized Hardy inequality with weights (see, e.g., [28–30]).

Lemma 4.1. For all measurable functions f ≥ 0, the generalized Hardy inequality

( ∫_a^b |(Tf)(x)|^q u(x) dx )^{1/q} ≤ C ( ∫_a^b |f(x)|^p v(x) dx )^{1/p}

holds if and only if

sup_{a<x<b} ( ∫_x^b u(t) dt )^{1/q} ( ∫_a^x [v(t)]^{1−p′} dt )^{1/p′} < ∞,  p′ = p/(p − 1),

for 1 < p ≤ q < ∞, where T is an integral operator of the form (Tf)(x) = ∫_a^x k(x, t) f(t) dt.

In the corresponding weighted L^2 estimate, the term I_3 is again split into two parts associated with a parameter δ > 0. Then, we can see that

‖I_3‖_{ω^{−µ,0}} ≤ ‖I_{3,1}‖_{ω^{−µ,0}} + ‖I_{3,2}‖_{ω^{−µ,0}},  (4.23)

where

I_{3,1} = ( I_N^{−µ,0} − I ) ∫_{−1}^{x−δ} (x − τ)^{−µ} K̃(x, τ) e(τ) dτ,

I_{3,2} = ( I_N^{−µ,0} − I ) ∫_{x−δ}^x (x − τ)^{−µ} K̃(x, τ) e(τ) dτ.

Here I denotes the identity operator. It follows from Lemma 3.2 that

‖I_{3,1}‖_{ω^{−µ,0}} ≤ C N^{−1} ‖ ∂_x ( ∫_{−1}^{x−δ} (x − τ)^{−µ} K̃(x, τ) e(τ) dτ ) ‖_{ω^{1−µ,1}} = C N^{−1} ‖ I_{3,1}^{(1)} + I_{3,1}^{(2)} ‖_{ω^{1−µ,1}},  (4.24)


where I_{3,1}^{(1)} := δ^{−µ} K̃(x, x − δ) e(x − δ) and

I_{3,1}^{(2)} := ∫_{−1}^{x−δ} [ −µ (x − τ)^{−µ−1} K̃(x, τ) + (x − τ)^{−µ} K̃_x(x, τ) ] e(τ) dτ.  (4.25)

Extending e by zero for x < −1 (i.e., for t ≤ 0), we can easily obtain

‖I_{3,1}^{(1)}‖_{ω^{1−µ,1}} ≤ δ^{−µ} ‖K̃(·, ·)‖_{0,∞} ‖e(· − δ)‖_{ω^{1−µ,1}} ≤ C δ^{−µ} ‖K̃(·, ·)‖_{0,∞} ‖e‖_{ω^{−µ,0}}.  (4.26)

It follows from (4.25) that

‖I_{3,1}^{(2)}‖_{ω^{1−µ,1}} ≤ C δ^{−µ} ‖K̃(·, ·)‖_{0,∞} ‖e‖_{ω^{−µ,0}} + C ‖K̃(·, ·)‖_{1,∞} ‖e‖_{ω^{−µ,0}}.  (4.27)

Combining the above two estimates leads to

‖I_{3,1}‖_{ω^{−µ,0}} ≤ C ( N^{−1} δ^{−µ} + N^{−1} ) ‖e‖_{ω^{−µ,0}}.  (4.28)

We choose δ = N^{−2}, so that N^{−1} δ^{−µ} = N^{−(1−2µ)}, which can be made sufficiently small due to the assumption that 0 < µ < 1/2. Therefore, for sufficiently large N, we have

‖I_{3,1}‖_{ω^{−µ,0}} ≤ (1/4) ‖e‖_{ω^{−µ,0}}.  (4.29)

On the other hand, we have

‖I_{3,2}‖_{ω^{−µ,0}} ≤ ‖I_{3,2}^{(1)}‖_{ω^{−µ,0}} + ‖I_{3,2}^{(2)}‖_{ω^{−µ,0}},  (4.30)

where

I_{3,2}^{(1)} := I_N^{−µ,0} ∫_{x−δ}^x (x − τ)^{−µ} K̃(x, τ) e(τ) dτ,  I_{3,2}^{(2)} := ∫_{x−δ}^x (x − τ)^{−µ} K̃(x, τ) e(τ) dτ.

It follows from (2.25) of [18] that

‖I_{3,2}^{(1)}‖^2_{ω^{−µ,0}} = Σ_{i=0}^N | ∫_{x_i−δ}^{x_i} (x_i − τ)^{−µ} K̃(x_i, τ) e(τ) dτ |^2 · w_i,  (4.31)

where {x_i}_{i=0}^N is the set of (N + 1) Jacobi–Gauss, Jacobi–Gauss–Radau, or Jacobi–Gauss–Lobatto points and {w_i}_{i=0}^N are the corresponding weights. From (2.13) of [18], and noting that µ < 1/2, we have

‖I_{3,2}^{(1)}‖^2_{ω^{−µ,0}} ≤ C N^{−1} Σ_{i=0}^N | ∫_{x_i−δ}^{x_i} (x_i − τ)^{−µ} K̃(x_i, τ) e(τ) dτ |^2 (1 − x_i)^{−µ+1/2} (1 + x_i)^{1/2}
                           ≤ C N^{−1} Σ_{i=0}^N | ∫_{x_i−δ}^{x_i} (x_i − τ)^{−µ} K̃(x_i, τ) e(τ) dτ |^2.

From the Cauchy inequality, and using the fact that K̃ is bounded, we obtain

‖I_{3,2}^{(1)}‖^2_{ω^{−µ,0}} ≤ C N^{−1} Σ_{i=0}^N ∫_{x_i−δ}^{x_i} (1 − τ)^µ (x_i − τ)^{−2µ} dτ · ∫_{x_i−δ}^{x_i} (1 − τ)^{−µ} e^2(τ) dτ ≤ C N^{−1} ‖e‖^2_{ω^{−µ,0}},  (4.32)

where we have used the assumptions δ ≤ N^{−2} ≤ 1 and 1 − 2µ > 0. Again, using the Cauchy inequality and the boundedness of K̃ gives

|I_{3,2}^{(2)}|^2 ≤ C ∫_{x−δ}^x (1 − τ)^µ (x − τ)^{−2µ} dτ · ∫_{x−δ}^x (1 − τ)^{−µ} e^2(τ) dτ ≤ C δ^{1−2µ} ‖e‖^2_{ω^{−µ,0}}.

Consequently,

‖I_{3,2}^{(2)}‖^2_{ω^{−µ,0}} ≤ C N^{−2(1−2µ)} ‖e‖^2_{ω^{−µ,0}},  (4.33)


where we also used δ ≤ N^{−2} and µ < 1/2. It follows from (4.32) and (4.33) that

‖I_{3,2}‖_{ω^{−µ,0}} ≤ ‖I_{3,2}^{(1)}‖_{ω^{−µ,0}} + ‖I_{3,2}^{(2)}‖_{ω^{−µ,0}} ≤ (1/4) ‖e‖_{ω^{−µ,0}},  (4.34)

provided that N is sufficiently large. Hence, by (4.23), (4.29) and (4.34), we have

‖I_3‖_{ω^{−µ,0}} ≤ (1/2) ‖e‖_{ω^{−µ,0}},  (4.35)

for sufficiently large N. The desired estimate (4.17) is obtained by combining (4.18), (4.19), (4.22) and (4.35). □

5. Algorithm implementation and numerical experiments

Denoting U_N = [u_0, u_1, . . . , u_N]^T and F_N = [f(x_0), f(x_1), . . . , f(x_N)]^T, we obtain an equation in matrix form:

U_N = F_N + A U_N,  (5.1)

where the entries of the matrix A are given by

a_{ij} = ( (1 + x_i)/2 )^{1−µ} Σ_{k=0}^N K̃(x_i, τ_i(θ_k)) F_j(τ_i(θ_k)) w_k.

Here, we briefly describe the computation of the Gauss–Jacobi quadrature nodes and weights (the detailed algorithm and related codes can be found in [32]). The Gauss–Jacobi quadrature formula is used to approximate the integral

∫_{−1}^1 (1 − x)^α (1 + x)^β f(x) dx,  α, β > −1,

for functions f defined on [−1, 1], via

∫_{−1}^1 (1 − x)^α (1 + x)^β f(x) dx ≈ Σ_{i=0}^N w_i f(x_i).

With the help of a change of variables (which changes both the weights w_i and the nodes x_i), the rule can be carried over to an arbitrary interval [a, b].
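As an illustration (not the routine of [32]), the Gauss–Jacobi nodes and weights on [−1, 1] are available in Python from scipy.special.roots_jacobi, and the affine change of variables to [a, b] only rescales them; the factor ((b − a)/2)^{α+β+1} below accounts for the Jacobian and the transformed weight function.

```python
import numpy as np
from scipy.special import roots_jacobi, beta as beta_fn

def gauss_jacobi_ab(n, alpha, beta, a, b):
    """Nodes/weights such that sum(w * f(t)) approximates
    int_a^b (b - t)^alpha (t - a)^beta f(t) dt."""
    x, w = roots_jacobi(n, alpha, beta)          # rule for (1 - x)^alpha (1 + x)^beta on [-1, 1]
    t = a + 0.5 * (b - a) * (1.0 + x)            # affine map [-1, 1] -> [a, b]
    w = (0.5 * (b - a))**(alpha + beta + 1) * w  # Jacobian and weight-function scaling
    return t, w

# sanity check: int_0^1 (1 - t)^(-0.35) t^2 dt = B(3, 0.65)
t, w = gauss_jacobi_ab(8, -0.35, 0.0, 0.0, 1.0)
print(np.sum(w * t**2), beta_fn(3.0, 0.65))      # the two values should agree
```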

Example 5.1. We consider the following linear Volterra integral equation of the second kind with a weakly singular kernel:

y(t) = b(t) − ∫_0^t (t − s)^{−α} y(s) ds,  0 ≤ t ≤ T,  (5.2)

with

b(t) = t^{n+β} + t^{n+1+β−α} B(n + 1 + β, 1 − α),

where 0 ≤ β ≤ 1 and B(·, ·) is the Beta function defined by

B(x, y) = ∫_0^1 t^{x−1} (1 − t)^{y−1} dt.

This problem has a unique solution y(t) = t^{n+β}. Obviously, y(t) belongs to H^{n+1}_ω(I). It follows from the theoretical results obtained in this work that the numerical errors will decay at a rate of O(N^{−n−1}). The weight function ω is chosen as ω = (1 − x)^{−α} with α = 0.35. The solution interval is t ∈ [0, 6], and the exact solution is y(t) = t^{3.6}, corresponding to n = 3 and β = 0.6. In Fig. 1, numerical errors are plotted for 2 ≤ N ≤ 16 in both the L^∞- and L^2_ω-norms. We also present in Table 1 the corresponding numerical errors. As expected, the errors decay algebraically, since the exact solution for this example is not sufficiently smooth. A minimal script applying the collocation sketch of Section 2.1 to this example is given below.
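The following sketch reuses the hypothetical solve_vie routine from Section 2.1; here K ≡ −1 in the notation of (1.1), so K̃(x, τ) = −(T/2)^{1−α} by (2.5), and f(x) = b(T(1 + x)/2) by (2.3).

```python
import numpy as np
from scipy.special import beta
# assumes solve_vie (and lagrange_matrix) from the sketch in Section 2.1

T, alpha, n, beta_exp = 6.0, 0.35, 3, 0.6

def b(t):
    return t**(n + beta_exp) + t**(n + 1 + beta_exp - alpha) * beta(n + 1 + beta_exp, 1 - alpha)

f    = lambda x: b(0.5 * T * (1.0 + x))                             # f(x) = g(T(1 + x)/2)
Ktil = lambda x, tau: -(0.5 * T)**(1 - alpha) * np.ones_like(tau)   # K = -1 in (5.2)

xN, U = solve_vie(f, Ktil, alpha, 16)
err = np.max(np.abs(U - (0.5 * T * (1.0 + xN))**(n + beta_exp)))    # max nodal error vs t^3.6
```

The quantity err measures the maximum error at the collocation points.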

Example 5.2. Consider the following nonlinear Volterra integral equation of the second kind with a weakly singular kernel:

y(t) = b(t) + ∫_0^t (t − s)^{−1/3} y^2(s) ds,  0 ≤ t ≤ T,  (5.3)

where

b(t) = (t + 2)^{3/2} − (243/440) t^{11/3} − (81/20) t^{8/3} − (54/5) t^{5/3} − 12 t^{2/3}.


Fig. 1. Left: numerical and exact solutions of y(t) = t^{n+β} with n = 3 and β = 0.6, for 0 ≤ t ≤ 6. Right: the L^∞ and L^2_ω errors versus N, 2 ≤ N ≤ 16.

Table 1
Example 5.1: L^∞ and L^2_ω errors for the solution interval t ∈ [0, 6].

N            2            4            6            8
L^∞ error    6.7887e+01   2.4594e−01   1.3307e−02   1.9500e−03
L^2_ω error  2.3692e+01   6.5956e−02   1.7042e−03   1.4331e−04

N            10           12           14           16
L^∞ error    4.4826e−04   1.3478e−04   4.8583e−05   1.9980e−05
L^2_ω error  2.2145e−05   7.8788e−06   6.6148e−06   6.5174e−06

Table 2
Example 5.2: L^∞ and L^2_ω errors for the solution interval t ∈ [0, 10].

N            6            8            10           12
L^∞ error    1.0571e−03   1.1271e−04   1.3630e−05   1.7825e−06
L^2_ω error  3.8912e−04   3.6562e−05   3.9691e−06   4.7263e−07

N            14           16           18           20
L^∞ error    2.4594e−07   3.5287e−08   5.2114e−09   7.9513e−10
L^2_ω error  6.0029e−08   7.9972e−09   1.1046e−09   1.5821e−10

This example has a smooth solution y(t) = (2 + t)^{3/2}. As a result, we can expect an exponential rate of convergence.
In Fig. 2, numerical errors are plotted for 2 ≤ N ≤ 24 in both the L^∞- and L^2_ω-norms. The weight function ω is chosen as
ω = (1 − x)^{−α} with α = 1/3. The solution interval is t ∈ [0, 10]. We also present in Table 2 the corresponding numerical
errors. As expected, the errors decay exponentially, which confirms our theoretical predictions. A sketch of one way to handle the nonlinearity at the discrete level is given below.
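Since (5.3) is nonlinear, the analogue of the collocation equations (2.12) is a nonlinear algebraic system in the nodal values; the paper does not detail how this system is solved. One simple possibility, sketched below purely as an illustration (it reuses the hypothetical lagrange_matrix helper from Section 2.1), is a fixed-point iteration on the discrete equations with the quadratic nonlinearity of (5.3).

```python
import numpy as np
from scipy.special import roots_jacobi
# reuses lagrange_matrix from the sketch in Section 2.1

def solve_vie_nonlinear(f, Ktil, mu, N, iters=200, tol=1e-12):
    """Fixed-point iteration for the collocation equations of
    u(x) = f(x) + int_{-1}^{x} (x - t)^(-mu) Ktil(x, t) u(t)**2 dt."""
    x, w = roots_jacobi(N + 1, -mu, 0.0)
    data = []
    for i in range(N + 1):
        tau = 0.5 * (1 + x[i]) * x + 0.5 * (x[i] - 1)            # tau_i(theta_k)
        F = lagrange_matrix(x, tau)                              # F_j(tau_i(theta_k))
        q = (0.5 * (1 + x[i]))**(1 - mu) * w * Ktil(x[i], tau)   # quadrature row for node x_i
        data.append((q, F))
    U = f(x).copy()
    for _ in range(iters):
        U_new = f(x) + np.array([q @ (F @ U)**2 for q, F in data])
        if np.max(np.abs(U_new - U)) < tol:
            break
        U = U_new
    return x, U
```

For Example 5.2 one would take µ = 1/3, K̃(x, τ) = (T/2)^{2/3} (since K ≡ 1), and f(x) = b(T(1 + x)/2); if the fixed-point iteration stalls, a damped update or Newton's method is the usual remedy.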

Acknowledgments

The authors are grateful to Mr. Xiang Xu of Fudan University for his assistance in the numerical work. The first author
is supported by Guangdo