A Posteriori Error Estimation
- 1. Motivations & Preliminaries
- 2. Bound theorems
- 2.1. Questions
- 2.2. A Posteriori Error Estimation: Requirements
- 2.3. uN(μ) : Error equation and residual dual norm
- 2.4. Dual norm of the residual
- 2.5. uN(μ) : Definitions of energy error bounds and effectivity
- 2.6. sN(μ) : output error bounds
- 2.7. Relative output error bounds
- 2.8. X-norm error bounds
- 2.9. Remarks on error bounds
- 3. Offline-Online decomposition
- 4. Sampling strategy: a Greedy algorithm
1. Motivations & Preliminaries
1.1. « Truth » Problem statement
Let \mu \in \mathcal{D}^\mu, evaluate s(\mu) = \ell(u(\mu)) where u(\mu) \in X satisfies a(u(\mu),v;\mu) = f(v), \forall v \in X.
Assumptions:
- linearity, coercivity, continuity;
- affine parameter dependence; and
- compliance: \ell = f, a symmetric.
1.2. Reduced Basis Sample and Space
Parameter samples: S_N = \{\mu_1 \in \mathcal{D}^\mu, \ldots, \mu_N \in \mathcal{D}^\mu\}, 1 \leq N \leq N_{\mathrm{max}}, with S_1 \subset S_2 \subset \ldots \subset S_{N_{\mathrm{max}}-1} \subset S_{N_{\mathrm{max}}} \subset \mathcal{D}^\mu.
Lagrangian hierarchical space: W_N = \mathrm{span}\{\xi_n \equiv u(\mu_n)\ (= u^{\mathcal{N}}(\mu_n), the truth solution),\ n = 1,\ldots,N\}, with W_1 \subset W_2 \subset \ldots \subset W_{N_{\mathrm{max}}} \subset X^{\mathcal{N}} \subset X.
1.3. Reduced basis approximation
Given \mu \in \mathcal{D}^\mu, evaluate s_N(\mu) = f(u_N(\mu);\mu) where u_N(\mu) \in X_N satisfies a(u_N(\mu),v;\mu) = f(v;\mu), \forall v \in X_N.
Recall:
- RB space: X_N = «Gram-Schmidt»(W_N);
- u_N(\mu) is unique (coercivity, continuity, linear independence of the basis).
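To make the Galerkin projection concrete, here is a minimal sketch (Python/NumPy; all names are illustrative assumptions) that forms and solves the reduced problem given a truth matrix and a basis matrix Z whose columns span X_N:

```python
import numpy as np

def rb_solve(A_mu, f_vec, Z):
    """Galerkin projection of the truth problem onto X_N = range(Z).

    A_mu  : truth matrix, (A_mu)_ij = a(phi_j, phi_i; mu)
    f_vec : truth load vector, (f_vec)_i = f(phi_i; mu)
    Z     : basis matrix whose columns are the orthonormalized xi_n
    """
    A_N = Z.T @ (A_mu @ Z)           # N x N reduced stiffness
    f_N = Z.T @ f_vec                # reduced load
    u_N = np.linalg.solve(A_N, f_N)  # small dense solve
    s_N = f_N @ u_N                  # compliant output s_N(mu) = f(u_N(mu); mu)
    return u_N, s_N
```

The dense solve involves only an N \times N system; the affine decomposition of Section 1.4 is what makes forming A_N and f_N online independent of the truth dimension \mathcal{N}.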
1.4. Coercivity and Continuity Constants
We assume that a is coercive and continuous. Recall:
- the coercivity constant 0 < \alpha(\mu) \equiv \displaystyle\inf_{v \in X} \frac{a(v,v;\mu)}{\|v\|^2_X};
- the continuity constant \gamma(\mu) \equiv \displaystyle\sup_{w \in X} \sup_{v \in X} \frac{a(w,v;\mu)}{\|w\|_X\,\|v\|_X} < \infty.
Affine dependence and parametric coercivity. We assume that a : X \times X \times \mathcal{D} \rightarrow \mathbb{R} is
- affine: a(u,v;\mu) = \displaystyle\sum_{q=1}^{Q_a} \Theta^q_a(\mu)\ a^q(u,v), \forall u,v \in X;
- parametrically coercive: \Theta^q_a(\mu) > 0 \quad \forall \mu \in \mathcal{D},\ 1 \leq q \leq Q_a, and a^q(v,v) \geq 0 \quad \forall v \in X,\ 1 \leq q \leq Q_a.
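As an illustration, a minimal sketch of affine assembly (Python/NumPy; the two-term decomposition is a hypothetical example, not from the source):

```python
import numpy as np

# Hypothetical affine decomposition with Q_a = 2, e.g. diffusion split over
# two subdomains: a(u, v; mu) = a^1(u, v) + mu a^2(u, v).
def theta_a(mu):
    return [1.0, mu]  # Theta^1_a(mu), Theta^2_a(mu); both > 0 for mu > 0

def assemble_A(mu, Aq_mats):
    """A(mu) = sum_q Theta^q_a(mu) A^q, with parameter-independent A^q."""
    return sum(t * A for t, A in zip(theta_a(mu), Aq_mats))
```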
1.5. Inner Products and Norms
- energy inner product ((w,v))_\mu \equiv a(w,v;\mu) and associated norm \|v\|_\mu \equiv \sqrt{a(v,v;\mu)} (parameter dependent);
- X-inner product (w,v)_X (e.g. (w,v)_X = ((w,v))_{\bar\mu} for a reference parameter \bar\mu) and associated norm \|v\|_X (parameter independent).
2. Bound theorems
2.1. Questions
- What is the accuracy of u_N(\mu) and s_N(\mu)? (Online)
- What is the best value for N? (Offline/Online)
  - N too large \Rightarrow computational inefficiency
  - N too small \Rightarrow unacceptable error
- How should we build S_N? Is there an optimal construction? (Offline)
  - good approximation of the manifold \mathcal{M} by the RB space, but
  - need for well-conditioned RB matrices
2.2. A Posteriori Error Estimation: Requirements
We shall develop error bounds \Delta_N(\mu) and \Delta^s_N(\mu) with the following properties:
- rigorous: \|u(\mu) - u_N(\mu)\| \leq \Delta_N(\mu) and |s(\mu) - s_N(\mu)| \leq \Delta^s_N(\mu), for all 1 \leq N \leq N_{\mathrm{max}};
- reasonably sharp, for all 1 \leq N \leq N_{\mathrm{max}};
- efficient: the online cost depends only on Q and N, not on \mathcal{N}.
2.3. u_N(\mu) : Error equation and residual dual norm
Given our RB approximation u_N(\mu), the error e(\mu) \equiv u(\mu) - u_N(\mu) satisfies a(e(\mu), v; \mu) = r(u_N(\mu), v; \mu), \forall v \in X, where r(u_N(\mu), v; \mu) = f(v) - a(u_N(\mu), v; \mu) is the residual. From coercivity and the definitions above, we then have \|e(\mu)\|_X \leq \dfrac{\|r(u_N(\mu),\cdot\,;\mu)\|_{X'}}{\alpha(\mu)}.
2.4. Dual norm of the residual
Given \mu \in \mathcal{D}^\mu, the dual norm of r(u_N(\mu),\cdot;\mu) is defined as \|r(u_N(\mu),\cdot;\mu)\|_{X'} \equiv \displaystyle\sup_{v \in X} \frac{r(u_N(\mu),v;\mu)}{\|v\|_X} = \|\hat{e}(\mu)\|_X, where \hat{e}(\mu) \in X is the Riesz representer satisfying (\hat{e}(\mu),v)_X = r(u_N(\mu),v;\mu), \forall v \in X.
The error residual equation can then be rewritten as a(e(\mu), v; \mu) = (\hat{e}(\mu),v)_X, \quad \forall v \in X.
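In a truth FE basis, computing \hat{e}(\mu) amounts to one linear solve with the parameter-independent X-inner-product matrix. A minimal sketch (Python/SciPy; the matrix and vector names are assumptions for illustration):

```python
import numpy as np
from scipy.sparse.linalg import spsolve

def residual_dual_norm(X_mat, r_vec):
    """Dual norm of the residual via its Riesz representer e_hat.

    X_mat : sparse X-inner-product matrix, (X_mat)_ij = (phi_j, phi_i)_X
    r_vec : residual vector, (r_vec)_i = r(u_N(mu), phi_i; mu)
    """
    e_hat = spsolve(X_mat.tocsc(), r_vec)  # (e_hat, v)_X = r(u_N(mu), v; mu)
    return np.sqrt(e_hat @ r_vec)          # ||e_hat||_X^2 = e_hat^T X e_hat = e_hat^T r_vec
```

Note that this is an O(\mathcal{N}) computation; Section 3 shows how to move it offline.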
2.5. u_N(\mu) : Definitions of energy error bounds and effectivity
Denote \varepsilon_N(\mu) = \|\hat{e}(\mu)\|_X = \|r(u_N(\mu),\cdot;\mu)\|_{X'}. Given \alpha_{\mathrm{LB}}(\mu), a nonnegative lower bound of \alpha(\mu), we define the energy error bound and its effectivity
\Delta^{\mathrm{en}}_N(\mu) \equiv \dfrac{\varepsilon_N(\mu)}{\sqrt{\alpha_{\mathrm{LB}}(\mu)}}, \qquad \eta^{\mathrm{en}}_N(\mu) \equiv \dfrac{\Delta^{\mathrm{en}}_N(\mu)}{\|e(\mu)\|_\mu},
and we have 1 \leq \eta^{\mathrm{en}}_N(\mu) \leq \sqrt{\gamma(\mu)/\alpha_{\mathrm{LB}}(\mu)}.
Remarks:
- Rigorous: the left inequality ensures a rigorous upper bound, i.e. \|e(\mu)\|_\mu \leq \Delta^{\mathrm{en}}_N(\mu), \forall \mu \in \mathcal{D}^\mu.
- Sharp: the right inequality states that \Delta^{\mathrm{en}}_N(\mu) overestimates the « true » error by at most \sqrt{\gamma(\mu)/\alpha_{\mathrm{LB}}(\mu)}.
- For a parametrically coercive and symmetric a, a positive lower bound \alpha_{\mathrm{LB}}(\mu) is cheaply computable, e.g. via the min-\Theta approach sketched below.
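A minimal sketch of such a lower bound and of the resulting bound evaluation (Python; the min-\Theta formula relies on the parametric coercivity of Section 1.4, and all names are illustrative):

```python
def alpha_LB_min_theta(theta_mu, theta_mu_bar, alpha_mu_bar):
    """min-Theta lower bound: alpha(mu) >= min_q [Theta^q(mu)/Theta^q(mu_bar)] * alpha(mu_bar),
    valid when every Theta^q > 0 and every a^q is positive semidefinite."""
    return alpha_mu_bar * min(t / tb for t, tb in zip(theta_mu, theta_mu_bar))

def energy_error_bound(eps_N, alpha_lb):
    """Energy error bound Delta_N^en(mu) = eps_N(mu) / sqrt(alpha_LB(mu))."""
    return eps_N / alpha_lb ** 0.5
```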
2.6. s_N(\mu) : output error bounds
The output error satisfies 0 \leq s(\mu) - s_N(\mu) \leq \Delta^s_N(\mu), where \Delta^s_N(\mu) = \Delta_N^{\mathrm{en}}(\mu)^2 and \eta^s_N(\mu) \equiv \dfrac{\Delta^s_N(\mu)}{s(\mu)-s_N(\mu)}.
Rapid convergence of the error in the output: note that the error in the output vanishes quadratically in the error on u, as the derivation below shows.
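Indeed, in the compliant case (\ell = f, a symmetric), Galerkin orthogonality a(e(\mu),v;\mu) = 0, \forall v \in X_N, gives

s(\mu) - s_N(\mu) = f(e(\mu)) = a(u(\mu), e(\mu); \mu) = a(e(\mu), e(\mu); \mu) + \underbrace{a(u_N(\mu), e(\mu); \mu)}_{=\,0} = \|e(\mu)\|^2_\mu \leq \Delta^{\mathrm{en}}_N(\mu)^2 = \Delta^s_N(\mu),

where a(u_N(\mu), e(\mu); \mu) = 0 by symmetry and Galerkin orthogonality, since u_N(\mu) \in X_N.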
2.7. Relative output error bounds
We define
- the relative output error bound \Delta^{s,\mathrm{rel}}_N(\mu) \equiv \dfrac{\|\hat{e}(\mu)\|^2_X}{\alpha_{\mathrm{LB}}(\mu)\, s_N(\mu)} = \dfrac{\Delta_N^{\mathrm{en}}(\mu)^2}{s_N(\mu)},
- the relative output effectivity \eta^{s,\mathrm{rel}}_N(\mu) \equiv \dfrac{\Delta^{s,\mathrm{rel}}_N(\mu)}{(s(\mu)-s_N(\mu))/s(\mu)},
valid for \Delta^{s,\mathrm{rel}}_N(\mu) \leq 1.
2.8. X-norm error bounds
We define
- the X-norm error bound \Delta_N(\mu) \equiv \dfrac{\|\hat{e}(\mu)\|_X}{\alpha_{\mathrm{LB}}(\mu)},
- the X-norm effectivity \eta_N(\mu) \equiv \dfrac{\Delta_N(\mu)}{\|e(\mu)\|_X}.
2.9. Remarks on error bounds
- The error bounds are rigorous upper bounds for the reduced basis error for any N = 1,\ldots,N_{\mathrm{max}} and for all \mu \in \mathcal{D}.
- The upper bounds for the effectivities are
  - independent of N, and
  - independent of \mathcal{N} if \alpha_{\mathrm{LB}}(\mu) depends only on \mu,
  and are thus stable with respect to RB and FEM refinement.
- The results for the energy-norm (and X-norm) bounds extend directly to noncompliant (and nonsymmetric) problems if we choose an appropriate definition for the energy (and X) norm.
3. Offline-Online decomposition
Denote by \hat{e}(\mu) \in X, with \|\hat{e}(\mu)\|_X = \varepsilon_N(\mu) = \|r(u_N(\mu),\cdot;\mu)\|_{X'}, the Riesz representer such that (\hat{e}(\mu),v)_X = r(u_N(\mu),v;\mu), \quad \forall v \in X.
Recall that, by the affine assumption and u_N(\mu) = \sum_{n=1}^N {u_N}_n(\mu)\, \zeta_n, we have r(u_N(\mu),v;\mu) = f(v) - \displaystyle\sum_{q=1}^Q \sum_{n=1}^N\ \Theta^q(\mu)\ {u_N}_n(\mu)\ a^q(\zeta_n,v), \quad \forall v \in X.
It follows that \hat{e}(\mu) \in X satisfies (\hat{e}(\mu),v)_X = f(v) - \displaystyle\sum_{q=1}^Q \sum_{n=1}^N\ \Theta^q(\mu)\ {u_N}_n(\mu)\ a^q(\zeta_n,v), \quad \forall v \in X.
Observe that the right-hand side is a sum of products of parameter-dependent functions and parameter-independent linear functionals; thus, invoking linear superposition, \hat{e}(\mu) = \mathcal{C} + \displaystyle\sum_{q=1}^Q \sum_{n=1}^N\ \Theta^q(\mu)\ {u_N}_n(\mu)\ \mathcal{L}^q_n, where
- \mathcal{C} \in X satisfies (\mathcal{C},v)_X = f(v), \forall v \in X,
- \mathcal{L}^q_n \in X satisfies (\mathcal{L}^q_n,v)_X = -a^q(\zeta_n,v), \forall v \in X, \, 1 \leq n \leq N, \, 1 \leq q \leq Q,
which are parameter-independent problems.
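Offline, \mathcal{C} and the \mathcal{L}^q_n are obtained from QN + 1 linear solves with the same X-inner-product matrix, so a single factorization can be reused. A minimal sketch (Python/SciPy; the names are assumptions):

```python
import numpy as np
from scipy.sparse.linalg import splu

def offline_riesz_pieces(X_mat, f_vec, Aq_mats, Z):
    """Solve the parameter-independent Riesz problems for C and the L^q_n.

    X_mat   : sparse X-inner-product matrix
    f_vec   : load vector, (f_vec)_i = f(phi_i)
    Aq_mats : list of Q truth matrices of the a^q
    Z       : RB basis matrix, columns zeta_n
    """
    lu = splu(X_mat.tocsc())                 # factorize once, reuse for all right-hand sides
    C = lu.solve(f_vec)                      # (C, v)_X = f(v)
    L = [[lu.solve(-(Aq @ Z[:, n])) for n in range(Z.shape[1])]
         for Aq in Aq_mats]                  # (L^q_n, v)_X = -a^q(zeta_n, v)
    return C, L
```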
3.1. Error bounds
From the previous equation, we get the (Error bound decomposition)
\|\hat{e}(\mu)\|^2_X = (\mathcal{C},\mathcal{C})_X + \displaystyle\sum_{q=1}^Q \sum_{n=1}^N \Theta^q(\mu)\, {u_N}_n(\mu) \left\{ 2\,(\mathcal{C},\mathcal{L}^q_n)_X + \sum_{q'=1}^Q \sum_{n'=1}^N \Theta^{q'}(\mu)\, {u_N}_{n'}(\mu)\, (\mathcal{L}^q_n,\mathcal{L}^{q'}_{n'})_X \right\}.
Remark: in the (Error bound decomposition), \|\hat{e}(\mu)\|_X^2 is a sum of products of
- parameter-dependent (simple, known) functions, and
- parameter-independent inner products;
the offline-online decomposition for the error bounds is now clear.
3.2. Steps and complexity
Offline:
- Solve for \mathcal{C} and \mathcal{L}^q_n,\ 1 \leq n \leq N,\ 1 \leq q \leq Q.
- Form and save (\mathcal{C},\mathcal{C})_X, (\mathcal{C},\mathcal{L}^q_n)_X and (\mathcal{L}^{q}_{n},\mathcal{L}^{q'}_{n'})_X, 1 \leq n,n' \leq N,\ 1 \leq q,q' \leq Q.
Online:
- Given a new \mu \in \mathcal{D}^\mu,
- evaluate the sum \|\hat{e}(\mu)\|_X^2 (Error bound decomposition) in terms of \Theta^q(\mu) and {u_N}_n(\mu).
- Complexity: O(Q^2 N^2), independent of \mathcal{N}.
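A minimal sketch of this online evaluation (Python/NumPy; array names and shapes are assumptions matching the quantities saved offline):

```python
import numpy as np

def error_bound_online(theta, uN, CC, CL, LL, alpha_lb):
    """Online evaluation of ||e_hat(mu)||_X^2 via the (Error bound decomposition),
    at O(Q^2 N^2) cost, independent of the truth dimension.

    theta    : Theta^q(mu), shape (Q,)
    uN       : RB coefficients u_N,n(mu), shape (N,)
    CC       : (C, C)_X, scalar
    CL       : (C, L^q_n)_X, shape (Q, N)
    LL       : (L^q_n, L^q'_n')_X, shape (Q, N, Q, N)
    alpha_lb : coercivity lower bound alpha_LB(mu)
    """
    w = np.outer(theta, uN)                  # w[q, n] = Theta^q(mu) u_N,n(mu)
    e2 = CC + 2.0 * np.sum(w * CL) + np.einsum('qn,qnrm,rm->', w, LL, w)
    e2 = max(e2, 0.0)                        # guard against round-off
    return np.sqrt(e2) / alpha_lb            # X-norm bound Delta_N(mu), cf. 2.8
```

The cost is dominated by the final contraction over the precomputed inner products, hence O(Q^2 N^2) operations.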
4. Sampling strategy: a Greedy algorithm
4.1. Offline-Online Scenarios
Offline : Given a tolerance \tau, build S_N and W_N such that \forall \ \mu\ \in \mathcal{P} \equiv \mathcal{D}^{\mu} \ , \ \Delta_N(\mu) < \tau
Online: given \mu and a tolerance \tau, find N^* (the smallest N such that \Delta_N(\mu) < \tau) and thus s_{N^*}(\mu); or, given \mu and a maximum execution time T, find N^* minimizing \Delta_N(\mu) subject to an execution time < T, and thus s_{N^*}(\mu).
4.2. S_N and W_N Generation Strategies
Offline Generation
- Input: a tolerance \epsilon; set N = 0 and S_0 = \emptyset
- While \Delta_N^{\mathrm{max}} > \epsilon
  - N = N + 1
  - If N = 1, pick \mu_1 \in \mathcal{D}^\mu ((log-)randomly)
  - Build S_N := \{\mu_N\} \cup S_{N-1}
  - Build W_N := \{\xi_N = u(\mu_N)\} \cup W_{N-1}
  - Compute \Delta_N^{\mathrm{max}} := \max_{\mu \in \mathcal{D}^\mu}\, \Delta_N(\mu)
  - \mu_{N+1} := \operatorname{arg\,max}_{\mu \in \mathcal{D}^\mu} \Delta_N(\mu)
- End While
Condition number: recall that the \zeta_n are orthonormalized; this ensures that the condition number of the RB matrices stays bounded by \gamma(\mu)/\alpha(\mu).
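A minimal sketch of the greedy loop above (Python/NumPy; the training set Xi_train is a finite surrogate for \mathcal{D}^\mu, and truth_solve / error_bound are assumed callables):

```python
import numpy as np

def greedy(Xi_train, eps, truth_solve, error_bound, Nmax=50, seed=0):
    """Greedy construction of S_N and W_N over a finite training set.

    truth_solve(mu)    -> truth solution u(mu)        (offline, expensive)
    error_bound(mu, W) -> Delta_N(mu) given basis W   (online, cheap)
    """
    rng = np.random.default_rng(seed)
    S, W = [], []
    mu = Xi_train[rng.integers(len(Xi_train))]   # N = 1: (log-)random initial pick
    while len(W) < Nmax:
        S.append(mu)
        W.append(truth_solve(mu))                # xi_N = u(mu_N); orthonormalize in practice
        deltas = [error_bound(m, W) for m in Xi_train]
        k = int(np.argmax(deltas))               # Delta_N^max and its arg max
        if deltas[k] <= eps:
            break
        mu = Xi_train[k]                         # mu_{N+1} := arg max Delta_N(mu)
    return S, W
```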
4.3. Online Algorithm I
- Input: \mu \in \mathcal{D}^\mu and a tolerance \tau
- Goal: compute (s_{N^*}(\mu), \Delta_{N^*}(\mu)) such that \Delta_{N^*}(\mu) < \tau
- N = 2
- While \Delta_N(\mu) > \tau
  - Compute (s_N(\mu), \Delta_N(\mu)) using (S_N, W_N)
  - N = 2N (use the (very) fast convergence properties of RB)
- End While
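A minimal sketch (Python; rb_eval is an assumed callable returning (s_N(\mu), \Delta_N(\mu)) from the stored (S_N, W_N)):

```python
def online_algorithm_I(mu, tau, rb_eval, Nmax):
    """Doubling search for N* with Delta_{N*}(mu) < tau."""
    N = 2
    s, delta = rb_eval(mu, N)
    while delta > tau and N < Nmax:
        N = min(2 * N, Nmax)       # exploit the (very) fast RB convergence
        s, delta = rb_eval(mu, N)
    return N, s, delta
```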
4.4. Online Algorithm II
- While i \leq I_{\mathrm{max}} \gg 1
  - Pick log-randomly \mu \in \mathcal{D}^\mu
  - Store \Delta_N(\mu) in a table \mathcal{T} if it is the worst case so far, for N = 1,\ldots,N_{\mathrm{max}}
  - i = i + 1
- End While
- Given \mu \in \mathcal{D}^\mu, compute (s_{N^*}(\mu), \Delta_{N^*}(\mu)) such that \Delta_{N^*}(\mu) < \tau:
  - N^* := the smallest N in \mathcal{T} with \Delta_N < \tau
  - Use W_{N^*} to compute (s_{N^*}(\mu), \Delta_{N^*}(\mu))
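A minimal sketch of both phases (Python/NumPy; sample_mu, error_bound and rb_eval are assumed callables, and the table is indexed by N):

```python
import numpy as np

def build_worst_case_table(Imax, Nmax, sample_mu, error_bound):
    """Offline: tabulate the worst-case Delta_N over Imax (log-)random parameters."""
    T = np.zeros(Nmax + 1)
    for _ in range(Imax):
        mu = sample_mu()                          # log-random draw from D^mu
        for N in range(1, Nmax + 1):
            T[N] = max(T[N], error_bound(mu, N))  # keep the worst case
    return T

def online_algorithm_II(mu, tau, T, rb_eval):
    """Online: smallest tabulated N with worst-case Delta_N < tau, then one RB solve."""
    candidates = np.nonzero(T[1:] < tau)[0]
    N_star = int(candidates[0]) + 1 if candidates.size else len(T) - 1
    s, delta = rb_eval(mu, N_star)
    return N_star, s, delta
```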