Theory of Finite Difference Models
with Applications in Diffusion.

Objective: Learn about the theory of finite difference as it applies to diffusion. Because finite difference numerical techniques can be unstable, a basic knowledge of finite difference theory is required to obtain stable solutions.

by Mukund Patel. Modified by R.D. Kriz (5/14/95) and (5/16/04).

Introduction

During the course of study in materials science and engineering, the student is exposed to the topic of diffusion a number of times. These discussions usually lead to the derivation of the flux equation:

\[ J = -D \frac{\partial C}{\partial x} \tag{1} \]

known as Fick's first law, for steady state diffusion. For nonsteady state diffusion, Fick's second law is derived:

\[ \frac{\partial C}{\partial t} = D \frac{\partial^2 C}{\partial x^2} \tag{2} \]

These discussions are not a very accurate depiction of diffusion, because they are useful only if diffusion occurs in one dimension. In real applications, diffusion occurs in all three dimensions. For diffusion in two dimensions, Fick's second law becomes

\[ \frac{\partial C}{\partial t} = D \left( \frac{\partial^2 C}{\partial x^2} + \frac{\partial^2 C}{\partial y^2} \right) \tag{3} \]

where this equation considers the change in concentration, C, with time, t, in both the x and the y directions. The goal of this section is to explain the mathematics involved in solving the two dimensional diffusion problem. This section provides only a brief introduction to analytic models of diffusion; if you are not already familiar with the theory of diffusion, you are encouraged to first read the section on analytic models for diffusion.

The two dimensional diffusion equation is a nonhomogeneous form of the Laplace equation for nonsteady state conditions. To solve this problem, the Crank-Nicolson method is used [1]. When the weighting factor theta introduced below equals one half, this method is second order accurate in both time and space: here a change in t and a change in x will be referred to as delta-t and delta-x, respectively. With a second order accurate solution we can take larger time steps, Ref. [2]. Using this method and performing a Taylor series expansion with respect to a change in time, delta-t, equation (3) becomes:

\[ \frac{C_{i,j}^{\,n+1} - C_{i,j}^{\,n}}{\Delta t} = D \left[ \theta \left( \delta_x^2 C_{i,j}^{\,n+1} + \delta_y^2 C_{i,j}^{\,n+1} \right) + (1 - \theta) \left( \delta_x^2 C_{i,j}^{\,n} + \delta_y^2 C_{i,j}^{\,n} \right) \right] \tag{4} \]

The operator notation, delta squared acting on the concentration, C, is a short-hand notation for the second order difference approximation to the second derivative of C with respect to space, where the subscript x indicates the difference is resolved with respect to the x-direction and y the y-direction, and the subscripts i and j locate the node in the mesh.
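Written out, the difference operators in eqn. (4) have the standard central difference form:

\[ \delta_x^2 C_{i,j} = \frac{C_{i+1,j} - 2C_{i,j} + C_{i-1,j}}{(\Delta x)^2}, \qquad \delta_y^2 C_{i,j} = \frac{C_{i,j+1} - 2C_{i,j} + C_{i,j-1}}{(\Delta y)^2} \]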

Stability Criteria:

Solution to this equation is: 1. conditionally stable when theta is greater than or equal to zero and less than one half, where the solution is stable if and only if D*(delta-t)/(delta-x)^2 is less than or equal to one half times the inverse of one minus 2*theta, and 2. unconditionally stable when theta is greater than or equal to one half and less than or equal to one.

Stability denotes a property of the particular finite difference equation used as the time increment is made vanishingly small. It means that there is an upper limit to the extent to which any piece of information, whether it is present in the initial conditions, brought in via the boundary conditions, or arising from any sort of error in the calculations, can be amplified in the computation. The weighted average, or Crank-Nicolson, method was used here to derive the stability criteria. More information on stability criteria is available in reference [2].
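As a concrete reading of the criterion above, setting theta to zero (the fully explicit scheme) gives

\[ \frac{D \, \Delta t}{(\Delta x)^2} \leq \frac{1}{2}, \]

so halving the mesh spacing forces the time step down by a factor of four, while for theta greater than or equal to one half no such restriction applies.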

Creating Difference Equations from Partial Derivatives

With Fig. 1 we can now explain how the numerical solution is accomplished. The solution starts at node point number 9 and continues to other nodes, where there are always north (N), south (S), east (E), and west (W) points surrounding the center node (O). With this mesh we can now expand on the terms in eqn. (4). For example, on the left side of eqn. (4) we expanded the differential term in eqn. (3) as a change in concentration, delta-C, divided by a change in time, delta-t, at node 9. Because the difference is in time, the superscript on the concentration, C, goes from the present time, n, to the next time step, n+1. Transforming the partial derivative of concentration with respect to time into a difference equation is the simplest type of difference approximation, which is referred to as a first order approximation. This simple relationship can also be derived from the first two terms of a Taylor series expansion.
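In symbols, keeping the first two terms of the Taylor expansion in time gives the first order approximation used on the left side of eqn. (4):

\[ C_{i,j}^{\,n+1} = C_{i,j}^{\,n} + \Delta t \, \frac{\partial C}{\partial t} + O\!\left[(\Delta t)^2\right] \quad \Longrightarrow \quad \frac{\partial C}{\partial t} \approx \frac{C_{i,j}^{\,n+1} - C_{i,j}^{\,n}}{\Delta t}. \]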

Figure 1: Finite difference mesh and solution averaging technique.

We now extend this concept to transform the second order terms found on the right hand side of eqn. (4). A first order approximation in the x-direction would now be a change in concentration, delta-C, divided by a change in length along the x-direction, delta-x. Now the superscript "n" on the concentration is fixed, where time is held constant, but the subscripts on the concentration indicate changes in concentration as we move in the east (E) and west (W) x-directions. It is not difficult to show that the difference equations in eqn. (5) for the second order partial derivatives in eqn. (3) can be obtained by taking the first three terms of the Taylor series, as sketched below eqn. (5). Similarly we can derive a difference equation for changes in concentration in the north (N) and south (S) y-direction. Substituting these difference equations into the right hand side of eqn. (4) gives us our final difference equation.

\[ \frac{C_O^{\,n+1} - C_O^{\,n}}{\Delta t} = \frac{D}{2} \left[ \frac{C_E^{\,n+1} - 2C_O^{\,n+1} + C_W^{\,n+1} + C_E^{\,n} - 2C_O^{\,n} + C_W^{\,n}}{(\Delta x)^2} + \frac{C_N^{\,n+1} - 2C_O^{\,n+1} + C_S^{\,n+1} + C_N^{\,n} - 2C_O^{\,n} + C_S^{\,n}}{(\Delta y)^2} \right] \tag{5} \]
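The Taylor argument mentioned above: expanding the east and west concentrations about the center node and adding cancels the odd order terms,

\[ C_E^{\,n} = C_O^{\,n} + \Delta x \frac{\partial C}{\partial x} + \frac{(\Delta x)^2}{2} \frac{\partial^2 C}{\partial x^2} + O\!\left[(\Delta x)^3\right], \qquad C_W^{\,n} = C_O^{\,n} - \Delta x \frac{\partial C}{\partial x} + \frac{(\Delta x)^2}{2} \frac{\partial^2 C}{\partial x^2} - O\!\left[(\Delta x)^3\right], \]

\[ \Longrightarrow \quad \frac{\partial^2 C}{\partial x^2} \approx \frac{C_E^{\,n} - 2C_O^{\,n} + C_W^{\,n}}{(\Delta x)^2}, \]

and likewise in the y-direction with the north (N) and south (S) points.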

Equation (5) shows how the concentration at the center node "O" at the next time step depends on the concentrations of the points at "N", "W", "E", and "S" at both the previous time step, n, and the new time step, n+1. These locations represent points directly north, west, east, and south of the center node "O". Because this calculation is carried out independently at every node of the grid, it is well suited to parallel computers, where each CPU in a cluster updates its share of the concentrations to the next time step independently. Hence there can be a substantial reduction in computation time on parallel computers.
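To make this parallel structure concrete, the sketch below advances every interior node of an nx by ny grid one time step using the explicit (theta = 0) limit of eqn. (5), which is the simplest form with this node-independent structure. It is not taken from the example FORTRAN77/FORTRAN90 programs; the subroutine name, array names, and the OpenMP directive are illustrative assumptions. Here r = D*(delta-t)/(delta-x)^2 with delta-x = delta-y.

! Minimal sketch (not from the example programs): one explicit
! (theta = 0) time step of the two dimensional diffusion equation.
! r = D*dt/dx**2 must satisfy the stability criterion (r <= 1/4 for
! the two dimensional explicit scheme).
subroutine step(cold, cnew, nx, ny, r)
   implicit none
   integer, intent(in) :: nx, ny
   real, intent(in)    :: cold(nx,ny), r
   real, intent(out)   :: cnew(nx,ny)
   integer :: i, j
   cnew = cold                       ! boundary values are held fixed
   !$omp parallel do private(i, j)
   do j = 2, ny - 1
      do i = 2, nx - 1
         ! Node O is updated from its E, W, N, and S neighbors at
         ! time n only, independently of every other node, so each
         ! CPU can update its own block of nodes.
         cnew(i,j) = (1.0 - 4.0*r)*cold(i,j)              &
                   + r*( cold(i+1,j) + cold(i-1,j)        &
                       + cold(i,j+1) + cold(i,j-1) )
      end do
   end do
   !$omp end parallel do
end subroutine step

The implicit Crank-Nicolson update actually used in this section couples the unknowns at time n+1 and is instead solved iteratively with SOR, as described below; each SOR sweep has the same per-node structure.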

Truncation Error, T.E.

The truncation error is second order in delta-t, delta-x, and delta-y when theta is equal to one half. Note: here we use a phonetic representation of Greek letters in the html text because embedding Greek letters in html text is not reproducible across all Web browsers.

\[ \mathrm{T.E.} = O\!\left[(\Delta t)^2\right] + O\!\left[(\Delta x)^2\right] + O\!\left[(\Delta y)^2\right], \qquad \theta = \tfrac{1}{2} \tag{6} \]

The truncation error refers to discretization errors resulting from truncating the higher order terms in the Taylor's series when creating the difference equations. The Crank-Nicolson method reduces the truncation error on the time increment from O(delta-t) to O(delta-t squared).
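A standard way to see this cancellation, which the argument above leaves implicit: the leading term of the time truncation error of the weighted scheme in eqn. (4) is proportional to (1/2 - theta),

\[ \mathrm{T.E.} = \left( \tfrac{1}{2} - \theta \right) \Delta t \, \frac{\partial^2 C}{\partial t^2} + O\!\left[(\Delta t)^2\right] + O\!\left[(\Delta x)^2\right] + O\!\left[(\Delta y)^2\right], \]

so it vanishes exactly for the Crank-Nicolson choice theta = 1/2.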

Final Difference Equation and Successive Over-Relaxation (SOR)

Equation (5) can be further simplified if we let

\[ \lambda = \frac{D \, \Delta t}{2 (\Delta x)^2}, \qquad \Delta x = \Delta y \tag{7} \]

The final difference equation (5) reduces to

\[ (1 + 4\lambda)\, C_O^{\,n+1} - \lambda \left( C_N^{\,n+1} + C_S^{\,n+1} + C_E^{\,n+1} + C_W^{\,n+1} \right) = (1 - 4\lambda)\, C_O^{\,n} + \lambda \left( C_N^{\,n} + C_S^{\,n} + C_E^{\,n} + C_W^{\,n} \right) \tag{8} \]

This difference form of the nonhomogeneous Laplace equation can be solved implicitly for a grid with any given boundary conditions, and with initial conditions or guesses of values within the grid. Implicit and explicit finite difference forms of the Laplace equation are both convergent (this example uses an implicit scheme). This means that the exact solution of the finite-difference problem approaches the solution of the PDE as the grid spacings in time and distance go to zero. For the grid shown in Fig. 2 below, we apply eqn. (8) to each of the nine nodes and obtain a set of nine simultaneous equations, written below in matrix form, where the right-most column vector [V] defines known values prescribed at the boundary. Since the boundary values [V] do not change with time, there are no "n" or "n+1" superscripts on these variables.
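Fig. 2 shows the full equation; schematically, numbering the nine interior nodes row by row from 1 to 9 (an ordering assumed here for illustration), the left hand side of eqn. (8) produces the pentadiagonal system

\[
\begin{bmatrix}
1{+}4\lambda & -\lambda & & -\lambda & & & & & \\
-\lambda & 1{+}4\lambda & -\lambda & & -\lambda & & & & \\
 & -\lambda & 1{+}4\lambda & & & -\lambda & & & \\
-\lambda & & & 1{+}4\lambda & -\lambda & & -\lambda & & \\
 & -\lambda & & -\lambda & 1{+}4\lambda & -\lambda & & -\lambda & \\
 & & -\lambda & & -\lambda & 1{+}4\lambda & & & -\lambda \\
 & & & -\lambda & & & 1{+}4\lambda & -\lambda & \\
 & & & & -\lambda & & -\lambda & 1{+}4\lambda & -\lambda \\
 & & & & & -\lambda & & -\lambda & 1{+}4\lambda
\end{bmatrix}
\begin{bmatrix}
C_1^{\,n+1} \\ C_2^{\,n+1} \\ C_3^{\,n+1} \\ C_4^{\,n+1} \\ C_5^{\,n+1} \\ C_6^{\,n+1} \\ C_7^{\,n+1} \\ C_8^{\,n+1} \\ C_9^{\,n+1}
\end{bmatrix}
=
\begin{bmatrix}
d_1 \\ d_2 \\ d_3 \\ d_4 \\ d_5 \\ d_6 \\ d_7 \\ d_8 \\ d_9
\end{bmatrix}
\]

where blank entries are zero and each d_k collects the known time-n terms from the right hand side of eqn. (8) together with lambda times any boundary values [V] adjacent to node k.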

Figure 2. Pentadiagonal matrix with successive over-relaxation.

Here we show in matrix form what happens when we apply eqn. (8) to the five by five mesh shown in Fig. 2. The resulting pentadiagonal matrix simplifies the problem to the extent that the method of Successive Over-Relaxation (SOR) can now be used iteratively to converge to the solution at each time step using eqn. (9), see reference [1], pg. 508, eqn. (7.88).

\[ C_O^{\,n+1} = C_O^{\,n} + \frac{\omega}{c_O} \left[ f_O - \left( a\, C_S^{\,n} + b\, C_W^{\,n} + c_O\, C_O^{\,n} + d\, C_E^{\,n} + e\, C_N^{\,n} \right) \right] \tag{9} \]

where a, b, c_O, d, and e are the pentadiagonal coefficients of the matrix row for node "O", and f_O is the known right hand side for that row.

Successive Over-Relaxation (SOR) is an iterative technique where an improved estimate of the concentration is computed by applying eqn. (9) at every interior grid point. The left hand side of eqn. (9) is the concentration at the next time step, "n+1", and the concentrations on the right hand side of eqn. (9) are concentrations at the previous time step, "n". Once the improved estimate is calculated, it replaces the previous estimate of concentration. This calculation is repeated for each point until there is no significant difference in concentrations between two consecutive iterations.

Similarly, eqn. (8) can be manipulated into the same form as eqn. (9), where all of the concentrations on the right hand side of eqn. (8) at time "n" are known and all of the concentrations at positions N, E, W, and S, including the center node "O", on the left hand side of eqn. (8) at time "n+1" are unknown, but approximated by substituting the concentrations at time "n". If the correct values for the concentrations at time "n" and time "n+1" were used in the equation shown in Fig. 2, the term in square brackets would be zero. However, this term is not zero when an approximation is made for the concentrations at time "n+1". This idea is implemented in the SOR subroutine named SLEEP of the example FORTRAN77/FORTRAN90 computer programs. In subroutine SLEEP this non-zero term is added to the current concentration, and the adjusted concentration is used again to calculate a more accurate concentration. A code fragment from subroutine SLEEP, shown below, demonstrates this adjustment.

Code fragment from subroutine SLEEP:

      ...
C     Known right hand side of the matrix equation at node K:
      F(K)=G(K)*U(K)+H(K)*(U(K-IC)+U(K-1)+U(K+1)+U(K+IC))
C     SOR update: add the weighted non-zero (residual) term to the
C     current estimate U(K):
      T=U(K)+(W/C(K))*(F(K)-(A(K)*U(K-IC)+B(K)*U(K-1)+C(K)*U(K)
     &     +D(K)*U(K+1)+E(K)*U(K+IC)))
      ...
C     Test for convergence:
      IF ( ABS( T-U(K) ) .LE. ERRSOR ) RETURN
      ...

This adjustment becomes smaller with successive iterations at each point throughout the grid until a convergence criterion is satisfied and the correct concentration at time "n+1" is approached, much the same as an "over-damped" solution in dynamics, hence the name Successive Over-Relaxation (SOR). The equation shown in Fig. 2 and eqn. (9) are identical except for a constant factor. Hence the pentadiagonal system of matrix equations shown in Fig. 2 can be solved with the SOR technique.

In eqn. (9) omega is the convergence factor, where in the range 1 < omega < 2 the convergence is very rapid. The value of the optimum convergence factor, omega, is shown to be the smaller root of eqn. (10), below, see Ref. [1].

\[ \left( 2 \cos\frac{\pi}{n} \right)^{2} \omega^{2} - 16\,\omega + 16 = 0 \tag{10} \]

where n is the total number of increments into which the side of the square is divided. The number of iterations required for a given degree of convergence falls very rapidly as the parameter approaches its optimum value, and it is generally better to overestimate the optimum convergence factor than to underestimate it.
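Taking the quadratic form of eqn. (10) at face value for the five by five mesh of Fig. 2, n = 4 increments per side, so the coefficient (2*cos(pi/4))^2 equals 2 and eqn. (10) becomes

\[ 2\,\omega^{2} - 16\,\omega + 16 = 0 \quad \Longrightarrow \quad \omega_{\mathrm{opt}} = 4 - 2\sqrt{2} \approx 1.17, \]

taking the smaller of the two roots.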

References:

1. Carnahan, B., Luther, H. A., and Wilkes, J. O., Applied Numerical Methods, John Wiley & Sons, Inc., New York, 1969.

2. Morton, K. W. and Mayers, D. F., Numerical Solution of Partial Differential Equations, Cambridge University Press, Cambridge, 1994.

End of information on Theory of Finite Difference Models with Applications in Diffusion