Weighted Residual Methods
There is a significant difference among various spectral methods, which lies in the “error distribution” principle they employ. Specifically, this refers to how each method minimizes the residual function

$$R(x; a_0, a_1, \dots, a_N) = \mathcal{L}u_N(x) - f(x),$$

with

$$u_N(x) = \sum_{n=0}^{N} a_n \phi_n(x),$$

where $\mathcal{L}$ is the operator of the differential equation $\mathcal{L}u = f$ and $\{\phi_n\}$ are the chosen basis functions.

The “Method of Mean Weighted Residuals (MWR)” refers to a class of methods that require the residual to be zero in some sense. If the chosen basis functions form a complete set, the exact solution corresponds to $R \equiv 0$; a numerical method instead imposes the $N+1$ conditions

$$(w_i, R) = 0, \qquad i = 0, 1, \dots, N,$$

where the $w_i(x)$ are the test (weight) functions and $(f, g) = \int f(x)\, g(x)\, \omega(x)\, dx$ is a suitable inner product.
Different choices of test functions $w_i$ define different methods:

- Pseudospectral Methods: $w_i(x) = \delta(x - x_i)$, where the $x_i$ are the collocation points;
- Method of Moments: $w_i(x) = x^i$;
- Least Squares: $w_i(x) = \partial R / \partial a_i$;
- Galerkin Methods: $w_i(x) = \phi_i(x)$.
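As a toy illustration of the weighted-residual idea (a sketch of my own, using SymPy), consider applying the method of moments to $u'' = x$ on $[-1,1]$ with $u(\pm 1) = 0$. The trial function below satisfies the boundary conditions by construction, and the trial space happens to contain the exact solution $u(x) = (x^3 - x)/6$, so the moment conditions recover it exactly:

```python
import sympy as sp

x, a0, a1 = sp.symbols('x a0 a1')
u = (1 - x**2) * (a0 + a1 * x)      # trial function: u(+-1) = 0 automatically
R = sp.diff(u, x, 2) - x            # residual of u'' = x

# method of moments: (x^i, R) = 0 for i = 0, 1
eqs = [sp.integrate(R * x**i, (x, -1, 1)) for i in range(2)]
sol = sp.solve(eqs, [a0, a1])
print(sol)                          # recovers a0 = 0, a1 = -1/6
```

Because the exact solution lies in the trial space, any choice of test functions (moments, collocation, Galerkin) would yield the same answer here; the choices differ only when the residual cannot be driven identically to zero.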
These methods each have distinct characteristics, with the most important and commonly used being the pseudospectral methods and Galerkin methods. The following sections provide a brief overview of these approaches. This article restricts the discussion to the one-dimensional case, but the ideas can be easily extended to higher dimensions.
Pseudospectral Methods
Pseudospectral methods, also known as collocation spectral methods or the method of selected points, choose the Dirac delta functions as test functions,

$$w_i(x) = \delta(x - x_i),$$

where the $x_i$ are the collocation points. The conditions $(w_i, R) = 0$ then reduce to

$$R(x_i; a_0, a_1, \dots, a_N) = 0, \qquad i = 0, 1, \dots, N.$$

This is the reason why the pseudospectral methods are also called the method of selected points: they force the approximation to satisfy the differential equation exactly at the collocation points. In particular, if the basis functions are the cardinal functions $C_j(x)$ satisfying $C_j(x_i) = \delta_{ij}$, then the unknown coefficients are simply the grid values $u_N(x_j)$.
The most important idea behind the pseudospectral methods is that the series coefficients $\{a_n\}$ and the grid values $\{u_N(x_i)\}$ are equivalent representations of the same approximation: either set determines the other through evaluation and interpolation.
Because of this equivalence, pseudospectral methods may use either the series coefficients or the grid values as the unknowns. For example, consider the differential equation

$$\mathcal{L}u(x) = f(x)$$

with some boundary conditions; there are two ways to solve the problem:

- Construct the value equation $L\mathbf{u} = \mathbf{f}$, where $\mathbf{u}$ and $\mathbf{f}$ are the vectors of the values of $u$ and $f$ at the nodes $x_i$, and $L$ is the matrix discretizing $\mathcal{L}$ on the grid.
- Construct the coefficient equation $A\mathbf{a} = \hat{\mathbf{f}}$, where $\mathbf{a}$ are the coefficients of the approximating series, and $\hat{\mathbf{f}}$ are the coefficients of $f$ expanded by the same basis functions $\phi_n$, that is, $f(x) \approx \sum_{n=0}^{N} \hat{f}_n \phi_n(x)$.
The first approach is usually called the value approach, in which the basis functions are chosen to be the Lagrange basis functions, implemented via the barycentric Lagrange interpolation introduced in the following section. Cardinal functions in their explicit form can also be used in this approach; they are not as numerically stable as the barycentric form, but are useful for theoretical analysis. The second approach is called the coefficient approach, in which the basis functions are chosen to be Chebyshev or Legendre polynomials, or more generally ultraspherical [4] and Jacobi polynomials (see this note for more details).
Barycentric Lagrange Interpolation
The barycentric Lagrange interpolation of [3] serves as a powerful tool for implementing pseudospectral methods (in the value approach), especially for constructing the differentiation matrices.
Brief Derivation
Firstly, recall that given a set of $n+1$ distinct interpolation nodes $x_0, x_1, \dots, x_n$ and corresponding values $f_0, f_1, \dots, f_n$, the Lagrange interpolation of $f$ is

$$p(x) = \sum_{j=0}^{n} f_j \ell_j(x), \qquad \ell_j(x) = \prod_{\substack{k=0 \\ k \neq j}}^{n} \frac{x - x_k}{x_j - x_k},$$

where the $\ell_j$ are the Lagrange basis polynomials. Let $\ell(x) = \prod_{k=0}^{n} (x - x_k)$. If we define the barycentric weights by

$$w_j = \frac{1}{\prod_{k \neq j} (x_j - x_k)}, \qquad j = 0, 1, \dots, n,$$

that is, $w_j = 1/\ell'(x_j)$, then the Lagrange basis polynomials can be written as

$$\ell_j(x) = \ell(x)\, \frac{w_j}{x - x_j}.$$

Therefore the Lagrange interpolation becomes

$$p(x) = \ell(x) \sum_{j=0}^{n} \frac{w_j}{x - x_j}\, f_j.$$

If we interpolate the constant function $f \equiv 1$, which is reproduced exactly, we find

$$1 = \ell(x) \sum_{j=0}^{n} \frac{w_j}{x - x_j},$$

which implies

$$\ell(x) = \frac{1}{\displaystyle\sum_{j=0}^{n} \frac{w_j}{x - x_j}},$$

and thus we obtain

$$p(x) = \frac{\displaystyle\sum_{j=0}^{n} \frac{w_j}{x - x_j}\, f_j}{\displaystyle\sum_{j=0}^{n} \frac{w_j}{x - x_j}},$$

which is the barycentric Lagrange interpolation of $f$ (the “second” or true barycentric formula).
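The barycentric formula is straightforward to implement. Here is a minimal NumPy sketch (function and variable names are my own): the weights are computed directly from their definition, and the second barycentric formula is evaluated with a guard for the case where the evaluation point coincides with a node:

```python
import numpy as np

def bary_weights(xn):
    """Barycentric weights w_j = 1 / prod_{k != j} (x_j - x_k)."""
    n = len(xn)
    w = np.ones(n)
    for j in range(n):
        for k in range(n):
            if k != j:
                w[j] /= (xn[j] - xn[k])
    return w

def bary_eval(xn, fn, w, x):
    """Evaluate the second barycentric formula at a scalar point x."""
    diff = x - xn
    hit = np.isclose(diff, 0.0)
    if hit.any():                     # x coincides with a node: return the data value
        return fn[np.argmax(hit)]
    c = w / diff
    return c @ fn / c.sum()

# interpolate exp(x) at 16 Chebyshev points of the second kind
n = 15
xn = np.cos(np.pi * np.arange(n + 1) / n)
fn = np.exp(xn)
w = bary_weights(xn)
err = abs(bary_eval(xn, fn, w, 0.3) - np.exp(0.3))
print(err)
```

The direct $O(n^2)$ product for the weights is fine for moderate $n$; for classical point families (Chebyshev, Legendre) explicit weight formulas from [3] should be used instead.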
Differentiation of Polynomial Interpolants
Suppose $p$ is the polynomial of degree at most $n$ interpolating $f$ at the nodes $x_0, \dots, x_n$; then the first and second derivatives of $p$ at the nodes define the differentiation matrices $D^{(1)}$ and $D^{(2)}$ via

$$p'(x_i) = \sum_{j=0}^{n} D^{(1)}_{ij} f_j, \qquad p''(x_i) = \sum_{j=0}^{n} D^{(2)}_{ij} f_j.$$

Since the barycentric representation of the Lagrange basis polynomials is $\ell_j(x) = \ell(x)\, w_j/(x - x_j)$, differentiating gives

$$\ell_j'(x) = \ell'(x)\frac{w_j}{x - x_j} - \ell(x)\frac{w_j}{(x - x_j)^2},$$

and evaluating at a node $x_i$ with $i \neq j$, using $\ell(x_i) = 0$ and $\ell'(x_i) = 1/w_i$, from which it follows that

$$D^{(1)}_{ij} = \ell_j'(x_i) = \frac{w_j / w_i}{x_i - x_j}, \qquad i \neq j,$$

and, since each row of $D^{(1)}$ must annihilate constants (the derivative of the constant interpolant is zero),

$$D^{(1)}_{ii} = -\sum_{j \neq i} D^{(1)}_{ij}.$$

At second order, a similar computation (Welfert’s recursion) yields, for $i \neq j$,

$$D^{(2)}_{ij} = 2\, D^{(1)}_{ij}\left( D^{(1)}_{ii} - \frac{1}{x_i - x_j} \right), \qquad D^{(2)}_{ii} = -\sum_{j \neq i} D^{(2)}_{ij}.$$

Now the first and second differentiation matrices map grid values of $f$ to grid values of the derivatives of its interpolant; thus we have

$$\mathbf{p}' = D^{(1)} \mathbf{f}, \qquad \mathbf{p}'' = D^{(2)} \mathbf{f} = \left(D^{(1)}\right)^2 \mathbf{f},$$

where $\mathbf{f} = (f_0, \dots, f_n)^T$, and the last equality holds in exact arithmetic because differentiating the degree-$n$ interpolant twice is the same as applying the interpolation–differentiation process twice (although Welfert’s formula is preferable numerically).
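The differentiation matrices can be built directly from the barycentric weights. The following NumPy sketch (names are my own) uses the off-diagonal formula $D^{(1)}_{ij} = (w_j/w_i)/(x_i - x_j)$, the negative-sum trick for the diagonals, and Welfert’s formula for $D^{(2)}$:

```python
import numpy as np

def diff_matrices(xn):
    """First and second differentiation matrices on distinct nodes xn."""
    n = len(xn)
    w = np.ones(n)                        # barycentric weights
    for j in range(n):
        for k in range(n):
            if k != j:
                w[j] /= (xn[j] - xn[k])
    X = xn[:, None] - xn[None, :]
    np.fill_diagonal(X, 1.0)              # placeholder; diagonals fixed below
    D1 = (w[None, :] / w[:, None]) / X    # (w_j / w_i) / (x_i - x_j) for i != j
    np.fill_diagonal(D1, 0.0)
    np.fill_diagonal(D1, -D1.sum(axis=1))             # negative sum trick
    D2 = 2.0 * D1 * (np.diag(D1)[:, None] - 1.0 / X)  # Welfert's formula, i != j
    np.fill_diagonal(D2, 0.0)
    np.fill_diagonal(D2, -D2.sum(axis=1))
    return D1, D2

# sanity check on 21 Chebyshev points: differentiate sin(x)
xn = np.cos(np.pi * np.arange(21) / 20)
D1, D2 = diff_matrices(xn)
e1 = np.max(np.abs(D1 @ np.sin(xn) - np.cos(xn)))
e2 = np.max(np.abs(D2 @ np.sin(xn) + np.sin(xn)))
print(e1, e2)
```

Both errors are spectrally small for smooth functions; the negative-sum trick for the diagonal entries also improves the numerical stability of the matrices, as discussed in [3].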
Application to Differential Equations
Differentiation of interpolants is the basis of one of the most important applications of polynomial interpolation: spectral collocation methods for the numerical solution of differential equations. Suppose, for example, that we want to solve the two-point boundary value problem

$$u''(x) = f(x), \qquad x \in (-1, 1), \qquad u(\pm 1) = 0.$$

One way to do this is to set up a Chebyshev grid $x_i = \cos(i\pi/N)$, $i = 0, 1, \dots, N$, and consider the linear system

$$D^{(2)} \mathbf{u} = \mathbf{f},$$

where $D^{(2)}$ is the second differentiation matrix on the grid, $\mathbf{u}$ is the vector of unknown grid values, and the rows corresponding to the boundary nodes are replaced (or deleted) to enforce the boundary conditions.
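As a concrete illustration (adapted from the classic example $u'' = e^{4x}$, $u(\pm 1) = 0$ in Trefethen’s Spectral Methods in MATLAB; the NumPy translation and accuracy check are mine):

```python
import numpy as np

N = 16
j = np.arange(N + 1)
x = np.cos(np.pi * j / N)              # Chebyshev points of the second kind
w = (-1.0) ** j                        # barycentric weights for this grid
w[0] /= 2.0
w[-1] /= 2.0
X = x[:, None] - x[None, :]
np.fill_diagonal(X, 1.0)
D = (w[None, :] / w[:, None]) / X      # first differentiation matrix, off-diagonal
np.fill_diagonal(D, 0.0)
np.fill_diagonal(D, -D.sum(axis=1))    # negative sum trick for the diagonal
D2 = D @ D                             # second derivative = applying D twice

# boundary bordering: impose u(+-1) = 0 by dropping the boundary rows/columns
A = D2[1:-1, 1:-1]
f = np.exp(4 * x[1:-1])
u = np.zeros(N + 1)
u[1:-1] = np.linalg.solve(A, f)

exact = (np.exp(4 * x) - x * np.sinh(4) - np.cosh(4)) / 16
err = np.max(np.abs(u - exact))
print(err)
```

With only $N = 16$ grid points the error is already far below what low-order finite differences could achieve, which is the hallmark of spectral accuracy.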
Boundary Conditions
To solve a boundary value problem, it’s necessary to enforce that the numerical solution satisfies the boundary conditions. In general, boundary conditions can be classified into two types: behavioral and numerical. Behavioral boundary conditions are those that do not impose any specific numerical value upon the solution or its derivatives at the boundary; examples include periodicity, or mere boundedness at a singular endpoint. Numerical boundary conditions, by contrast, prescribe specific values, such as $u(-1) = 0$.
The significance of these categories is that it’s almost always necessary to explicitly enforce numerical boundary conditions, while behavioral boundary conditions may be satisfied implicitly by choosing basis functions such that each has the required property or behavior.
For numerical boundary conditions which must be explicitly imposed, there are two common strategies:
- Boundary bordering: reducing the number of collocation conditions on the residual of the differential equation, and using the freed rows of the pseudospectral matrix to explicitly enforce the boundary constraints;
- Basis recombination: modifying the problem (if need be) so that the boundary conditions of the modified problem are homogeneous, and then altering the basis functions individually so that each satisfies the boundary conditions.
Galerkin Methods
Galerkin methods are a class of methods that choose the basis functions themselves as the test functions, i.e., $w_i = \phi_i$. This choice of test function is successful because any well-chosen set of basis functions has the properties desired of a spectral approximation: completeness, orthogonality, and rapid convergence for smooth functions. Note that the residual function $R$ can itself be expanded in the basis,

$$R(x) = \sum_{n=0}^{\infty} r_n \phi_n(x),$$

where the coefficients are given by the usual inner product in the case of orthogonal basis functions,

$$r_n = \frac{(R, \phi_n)}{(\phi_n, \phi_n)}.$$

Hence in such case, the Galerkin method, as defined above, enforces that the residual has zero projection onto the first $N+1$ basis functions; that is, the leading $N+1$ coefficients of the residual expansion vanish.
For simplicity, we assume the operator $\mathcal{L}$ is linear and the basis functions individually satisfy the boundary conditions. Substituting $u_N = \sum_n a_n \phi_n$ into the Galerkin conditions $(\phi_i, R) = 0$ gives the linear system

$$\sum_{n=0}^{N} (\phi_i, \mathcal{L}\phi_n)\, a_n = (\phi_i, f), \qquad i = 0, 1, \dots, N,$$

where the matrix entries are $L_{in} = (\phi_i, \mathcal{L}\phi_n)$ and the right-hand side entries are $f_i = (\phi_i, f)$.

If the basis functions do not individually satisfy the boundary conditions, then it’s necessary to replace some rows of the previous linear system by equations that express the boundary conditions. In the end, we need to construct and then solve the final linear system to calculate the coefficients $a_n$, so for an efficient method it is desirable that:

- the right-hand side $(\phi_i, f)$ can be computed efficiently;
- the linear system $L\mathbf{a} = \mathbf{f}$ can be solved efficiently.
To achieve these goals, the common idea is to use compact combinations of orthogonal polynomials or orthonormal functions to construct the basis functions.
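As a minimal illustration of such a compact combination (a sketch of my own, using the Legendre basis $\phi_k = P_k - P_{k+2}$ popularized by Shen and described in [2]): for $-u'' = f$ on $(-1,1)$ with $u(\pm 1) = 0$, each $\phi_k$ vanishes at $\pm 1$, and integration by parts gives the stiffness entries $(\phi_i', \phi_k') = (4k+6)\,\delta_{ik}$, so the Galerkin system is diagonal:

```python
import numpy as np
from numpy.polynomial import legendre as L

N = 24                                   # number of recombined basis functions

def phi_coeffs(k):
    """Legendre coefficient vector of phi_k = P_k - P_{k+2} (vanishes at x = +-1)."""
    c = np.zeros(k + 3)
    c[k] = 1.0
    c[k + 2] = -1.0
    return c

# stiffness matrix (phi_i', phi_k') is diagonal with entries 4k + 6
diag = 4.0 * np.arange(N) + 6.0

# right-hand side (f, phi_i) by Gauss-Legendre quadrature; f is chosen so the
# exact solution of -u'' = f, u(+-1) = 0 is u(x) = sin(pi x)
xq, wq = L.leggauss(64)
f = np.pi**2 * np.sin(np.pi * xq)
b = np.array([wq @ (f * L.legval(xq, phi_coeffs(k))) for k in range(N)])

u_hat = b / diag                         # the Galerkin system is diagonal
xs = np.linspace(-1.0, 1.0, 101)
uN = sum(u_hat[k] * L.legval(xs, phi_coeffs(k)) for k in range(N))
err = np.max(np.abs(uN - np.sin(np.pi * xs)))
print(err)
```

Here both goals are met at once: the right-hand side is computed by Gaussian quadrature, and the linear system is diagonal, so solving it costs $O(N)$ operations.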
Petrov-Galerkin
The Petrov-Galerkin method is a generalization of the Galerkin method in which the test functions differ from the basis functions (sometimes called the trial functions). The boundary conditions determine the trial basis, and the boundary conditions associated with the dual differential equation can determine the test basis.
For example, now we consider the following linear differential equation

$$\mathcal{L}u(x) = f(x), \qquad x \in (-1, 1),$$

with homogeneous linear boundary conditions (if not, the lifting technique can be used to convert the problem to a homogeneous one by subtracting from the solution a low-degree polynomial that satisfies the boundary conditions). Let

$$u_N(x) = \sum_{n=0}^{N} a_n \phi_n(x),$$

where $\{\phi_n\}$ is the trial basis, each member of which satisfies the boundary conditions, and let $\{\psi_i\}$ be the test basis, which may differ from the trial basis; then the Petrov-Galerkin method leads to the following linear system

$$\sum_{n=0}^{N} (\psi_i, \mathcal{L}\phi_n)\, a_n = (\psi_i, f), \qquad i = 0, 1, \dots, N,$$

where $A_{in} = (\psi_i, \mathcal{L}\phi_n)$ and $f_i = (\psi_i, f)$. With a careful pairing of trial and test bases, the matrix $A$ can be made banded or even diagonal.
For more information about efficient Petrov-Galerkin methods, see [5] as well as this note.
Tau-method
Closely related to the Galerkin method is the tau method, in which the basis functions (typically orthogonal polynomials) are not required to satisfy the boundary conditions individually; instead, the last few equations of the Galerkin system are replaced by equations that enforce the boundary conditions on the expansion coefficients.
Method of Moments
Generally, the term “method of moments” refers to the choice of test functions $w_i(x) = x^i$, so that the MWR conditions $(x^i, R) = 0$, $i = 0, 1, \dots, N$, force the first $N+1$ moments of the residual to vanish. This method is relatively easy to implement and exhibits good numerical stability when $N$ is small. The method of moments works well only if (1) $N$ is small, or (2) the calculations are performed in exact arithmetic: the powers $x^i$ become nearly linearly dependent as $i$ grows, so for large $N$ the resulting linear systems are catastrophically ill-conditioned.
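The conditioning of the monomial test functions is easy to check numerically: the Gram matrix of $x^i$ on $[0,1]$ is the Hilbert matrix $H_{ij} = 1/(i+j+1)$, whose condition number grows roughly exponentially with $N$ (a quick sketch, assuming double-precision arithmetic):

```python
import numpy as np

# Gram matrix of the monomial test functions on [0, 1]:
# H_ij = integral of x^i * x^j dx = 1/(i + j + 1), the Hilbert matrix
for N in (4, 8, 12):
    i = np.arange(N)
    H = 1.0 / (i[:, None] + i[None, :] + 1)
    print(N, np.linalg.cond(H))
```

Already at $N = 12$ the condition number is near the reciprocal of machine epsilon, so double-precision computations with this basis lose essentially all accuracy.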
Least Squares
The least squares method chooses the test functions

$$w_i(x) = \frac{\partial R}{\partial a_i}.$$

The reason for the name is that one can show that this choice minimizes the norm of the residual, where the norm is defined by

$$\|R\| = (R, R)^{1/2}.$$

In other words, the least squares method determines the coefficients $a_0, a_1, \dots, a_N$ that minimize $(R, R)$: setting $\partial (R, R)/\partial a_i = 2\,(R, \partial R/\partial a_i) = 0$ for each $i$ recovers exactly the MWR conditions with the test functions above. If the basis functions have the property of completeness, i.e., if we can approximate the exact solution to arbitrarily high accuracy by using sufficiently many basis functions, then the least squares method is guaranteed to converge as $N \to \infty$.
Remark. One great advantage of the least squares method is that it always generates symmetric matrices even if the operator $\mathcal{L}$ is not self-adjoint. This property is valuable for three reasons:
- It greatly simplifies the theoretical numerical analysis;
- Symmetric matrices can be solved by a variety of special tricks which are faster and need less storage than the algorithms for non-symmetric matrices;
- Symmetric matrices are more stable in time marching schemes.
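A small numerical sketch of this symmetry (my own discrete least-squares formulation): for the non-self-adjoint first-order problem $u' - u = 0$, $u(0) = 1$ on $[0,1]$, minimizing a discrete residual norm over a polynomial trial space leads to the normal equations $A^{\mathsf T}A\,\mathbf{a} = A^{\mathsf T}\mathbf{b}$, whose matrix is symmetric by construction:

```python
import numpy as np

# u' - u = 0, u(0) = 1 on [0, 1]; trial u_N(x) = 1 + sum_{n=1}^N a_n x^n,
# which enforces u(0) = 1 by construction. Minimize the discrete residual norm.
N, M = 8, 40
x = np.linspace(0.0, 1.0, M)
# column n: residual contribution of x^n, i.e. n x^(n-1) - x^n
A = np.stack([n * x**(n - 1) - x**n for n in range(1, N + 1)], axis=1)
b = np.ones(M)                  # the constant part contributes residual -1
G = A.T @ A                     # normal-equation matrix: symmetric by construction
a = np.linalg.solve(G, A.T @ b)
uN = 1 + sum(a[n - 1] * x**n for n in range(1, N + 1))
err = np.max(np.abs(uN - np.exp(x)))
print(np.allclose(G, G.T), err)
```

The operator $d/dx - 1$ is not self-adjoint, yet $G = A^{\mathsf T}A$ is symmetric (indeed positive semi-definite), and the computed polynomial closely tracks the exact solution $e^x$.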
For nonlinear problems, it’s possible to generalize the least squares method by again defining

$$w_i(x) = \frac{\partial R}{\partial a_i},$$

where the residual $R$ now depends nonlinearly on the coefficients $a_i$; this definition includes the linear case as a special case.
Despite its many virtues, the least squares method is not as popular as the Galerkin method in practice for many specific problems:
- For self-adjoint problems, the Galerkin method is simpler and has the same virtues: minimum norm of the residual and a symmetric matrix.
- For nonlinear problems, collocation/pseudospectral methods are preferable choices, since the least squares method gives a very unwieldy set of algebraic equations.
However, the least squares method is still the safest way of generating approximations that depend nonlinearly on the unknowns.
References
- John P. Boyd, Chebyshev and Fourier Spectral Methods, 2nd Edition, Dover Publications, 2001.
- Jie Shen, Tao Tang, and Li-Lian Wang, Spectral Methods - Algorithms, Analysis and Applications, Springer, 2011.
- Jean-Paul Berrut and Lloyd N. Trefethen, Barycentric Lagrange Interpolation, SIAM Rev., 46(3), 501-517, 2004.
- Sheehan Olver and Alex Townsend, A Fast and Well-Conditioned Spectral Method, SIAM Rev., 55(3), 462-489, 2013.
- Mikael Mortensen, A Generic and Strict Banded Spectral Petrov-Galerkin Method for Differential Equations with Polynomial Coefficients, SIAM J. Sci. Comput., 45(3), A1741-A1766, 2023.
- Title: Weighted Residual Methods
- Author: Gypsophila
- Created at : 2024-11-21 20:04:14
- Updated at : 2025-01-17 22:26:30
- Link: https://chenx.space/2024/11/21/WeightResidual/
- License: This work is licensed under CC BY-NC-SA 4.0.