Solving f(u) = 0 by Time Stepping

To solve a system f(u)=(f_1(u),f_2(u),...,f_N(u))=0 of N equations f_i(u)=0, i=1,...,N, in N unknowns u=(u_1,u_2,...,u_N), with thus f:R^N\rightarrow R^N, it is natural to connect to the solution of the following initial value problem (IVP): Find u(t) such that

  • \dot u(t)+f(u(t))=0\quad\mbox{for }t>0,\quad u(0)=u^0,

with some given initial value u^0. If it turns out that as t increases, the function u(t) tends to some value \hat u, then \dot u(t) could be expected to become small, and if so, we would have

  • f(u(t))\approx 0,

and we would be led to set \hat u =u(t) for some large t and consider \hat u to be an approximate solution of f(u)=0 with small residual f(\hat u).
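As a minimal sketch of this idea (not part of the original text), one can integrate the IVP to a large final time and then check the residual; here the scalar example f(u)=u^3-u-1 and the use of scipy.integrate.solve_ivp are assumptions made only for illustration:

    import numpy as np
    from scipy.integrate import solve_ivp

    # Example equation f(u) = 0 (assumed for illustration): u^3 - u - 1 = 0
    def f(u):
        return u**3 - u - 1.0

    # Right-hand side of the IVP:  du/dt = -f(u)
    def rhs(t, u):
        return -f(u)

    u0 = [1.0]                             # some given initial value u^0
    sol = solve_ivp(rhs, (0.0, 50.0), u0)  # integrate up to a "large" time t = 50

    u_hat = sol.y[0, -1]                   # take u(t) at the final time as approximate root
    print("u_hat =", u_hat, "residual f(u_hat) =", f(u_hat))

If u(t) has settled by the final time, the printed residual f(\hat u) is small, and \hat u can be accepted as an approximate root.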

If f(u)=0 has several different solutions, which is often the case, then we could expect to capture different solutions by choosing different initial values u^0, as illustrated in the sketch following (1) below.

Computing u(t) by Forward Euler with time step dt=1, we would have

  • u^{n+1}=u^n-f(u^n)\quad\mbox{for }n=0,1,2,...,    (1)

If \vert u^{n+1}-u^n\vert becomes small as n increases, then f(u^n)=u^n-u^{n+1} becomes small, and thus u^n would be an approximate solution of f(u)=0 with small residual f(u^n).
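A rough sketch of iteration (1) follows; the choice f(u)=sin(u), whose roots are the multiples of \pi, is an assumption made only to illustrate how different initial values u^0 lead to different roots:

    import math

    # Example with several roots (assumed for illustration): f(u) = sin(u)
    def f(u):
        return math.sin(u)

    def time_step_iteration(u, tol=1e-12, max_steps=100):
        # Forward Euler with time step dt = 1:  u^{n+1} = u^n - f(u^n)
        for n in range(max_steps):
            u_next = u - f(u)
            if abs(u_next - u) < tol:   # small increment => small residual f(u^n)
                return u_next
            u = u_next
        return u

    print(time_step_iteration(2.0))   # converges to the root 0
    print(time_step_iteration(4.0))   # converges to the root 2*pi ~ 6.2832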

Solving u=g(u) by Fixed Point Iteration

With f(u)=u-g(u), we are thus led to study the convergence of the iteration, referred to as Fixed Point Iteration:

  • u^{n+1}=g(u^n),\quad n=0,1,2,...,   (2)

where

  • g(u)=u-f(u).

To this end we take the difference of (2) for two consecutive steps to get

  • e^{n+1}\equiv u^{n+1}-u^n=g(u^n)-g(u^{n-1}).

If g:R^N\rightarrow R^N is Lipschitz continuous with Lipschitz constant L, then

  • \vert e^{n+1}\vert =\vert g(u^n)-g(u^{n-1})\vert\le L\vert e^n\vert\le L^2\vert e^{n-1}\vert\le\cdots\le L^{n}\vert e^1\vert = L^{n}\vert u^1-u^0\vert.

We see that if L<1, then \vert e^n\vert becomes vanishingly small as n increases, which by (1) (since e^{n+1}=u^{n+1}-u^n=-f(u^n)) means that f(u^n) becomes vanishingly small, and thus u^n may be viewed as an approximate solution of f(u)=0 in the sense that the residual f(u^n) is small. In the next chapter we will consider the error in the approximate root u^n compared to an exact root \bar u with vanishingly small residual f(\bar u).

We see that if L\ll 1 then the convergence is fast, and if L\approx 1 then the convergence is slow. If L=\frac{1}{2} then the residual \vert g(u^{n})-u^n\vert =\vert u^{n+1}-u^n\vert is reduced by a factor 2 in each iteration step, that is, by one binary digit per step.
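To make the rate concrete, here is a small sketch of Fixed Point Iteration (2); the choice g(u)=1+u/2, which has Lipschitz constant exactly L=1/2 and fixed point u=2, is an assumption made only for illustration:

    # Example contraction (assumed for illustration): g(u) = 1 + u/2 with L = 1/2,
    # whose fixed point u = 2 solves f(u) = u - g(u) = u/2 - 1 = 0.
    def g(u):
        return 1.0 + 0.5 * u

    u = 0.0                                # initial value u^0
    for n in range(10):
        u_next = g(u)                      # Fixed Point Iteration (2): u^{n+1} = g(u^n)
        print(n, u_next, abs(u_next - u))  # the increment is halved in each step
        u = u_next

The printed increments \vert u^{n+1}-u^n\vert are halved in each step, gaining one binary digit per iteration, as stated above.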

 

Next: Solving f(u)=0 by Newton’s Method   Previous: Rules of Integration 
