

Guest post by Kevin Kilty

Introduction

I had contemplated a contribution involving feedback diagrams, systems of equations, differential equations, and propagation of error ever since Nick Stokes's original contribution about a system of differential equations involving feedback, way back in June of this year. A couple of days ago the user Bartemis posted a pair of differential equations in a post of Roy Spencer's, which provided some inspiration for tying all these thoughts together in one posting. Finally, an added inspiration came from the controversy about Pat Frank's recent contribution. Without taking a stand on the numerical values Pat calculated, or his approach, I hope to demonstrate why he and his critics, Nick for example, are really discussing different things, and why Pat's take on this matter deserves very careful consideration. Here goes.

Let’s consider the following system of equations:

(1) *Ṫ* = *bC* − *aT*^{4}

(2) *Ċ* = *cT* − *dC*

We can view this as a model of water vapor feedback where *T* is a surface temperature, *C* is a concentration of water vapor, and *a*, *b*, *c*, *d* are constants. The set of equations is a system of first order differential equations, but non-linear. We can also obtain a single second order equation by differentiating the first equation and substituting the second into it. For instance, in the example of temperature (*T*):

(3) *T̈* + (*d* + 4*aT*^{3}) · *Ṫ* + *adT*^{4} − *bcT* = 0

This, too, is non-linear but now a second order differential equation.

Thus, we can look at the temperature problem as part of a first order system, or as a second order differential equation. It doesn't matter, except that the first order system is easier to deal with mathematically.
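As a numerical check on this equivalence, here is a minimal sketch. It assumes an illustrative nonlinear pair of the form *Ṫ* = *bC* − *aT*^{4}, *Ċ* = *cT* − *dC*, chosen to be consistent with the steady solutions discussed in the next section (the post's exact equations may differ), and integrates both the first order system and its second order counterpart from matching initial conditions:

```python
# Sketch: the first-order pair and the single second-order equation trace the
# same trajectory. The right-hand sides are an assumed illustrative form
# (dT/dt = b*C - a*T^4, dC/dt = c*T - d*C); all constants are sample values.
a = b = c = d = 1.0
dt, steps = 1e-3, 2000

# First-order system, forward Euler (simultaneous update via the tuple)
T1, C = 1.1, 1.0
for _ in range(steps):
    T1, C = T1 + dt * (b * C - a * T1**4), C + dt * (c * T1 - d * C)

# Equivalent second-order form: T'' + (d + 4aT^3) T' + a d T^4 - b c T = 0,
# integrated as (T, V = T') with matching initial velocity V(0) = b*C(0) - a*T(0)^4
T2, V = 1.1, b * 1.0 - a * 1.1**4
for _ in range(steps):
    T2, V = (T2 + dt * V,
             V + dt * (b * c * T2 - d * V - a * d * T2**4 - 4 * a * T2**3 * V))

diff = abs(T1 - T2)   # the two descriptions agree up to discretization error
```

Both integrations relax toward the same steady temperature; the small residual difference is just the Euler truncation error of the two discretizations.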

1. Making a linear approximation

The system of equations has two steady solutions, which a person can verify by simple substitution. These are (*T*_{0} = 0, *C*_{0} = 0) or (*T*_{0} = (*bc/da*)^{1/3}, *C*_{0} = *c/d* · *T*_{0}). The first is a trivial solution of no interest. The second becomes the point around which we will make a linear approximation to the original equation set. The algebra is a little tedious and has little bearing on the issues at hand. But if we call *ζ* a small deviation from the steady solution in water vapor, and *θ* a small deviation in temperature, the final form of the set of equations is:

(4) *θ̇* = −*hθ* + *bζ*, where *h* ≡ 4*aT*_{0}^{3}

(5) *ζ̇* = *cθ* − *dζ*

These are valid near the stationary solution (*T*_{0} = (*bc/da*)^{1/3}, *C*_{0} = *c/d* · *T*_{0}). Doing as we did to produce the second order Equation 3), we arrive at the following linear second order approximation:

(6) *θ̈* + (*h* + *d*) · *θ̇* + (*hd* − *bc*) · *θ* = 0
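The steady solutions can also be verified numerically. This sketch assumes the same illustrative nonlinear pair *Ṫ* = *bC* − *aT*^{4}, *Ċ* = *cT* − *dC*, which reproduces the stated fixed point; the constants are arbitrary sample values:

```python
# Sketch: verify by substitution that (T0, C0) is a steady solution.
# The right-hand sides are an assumed illustrative form of Eqs. (1)-(2),
# consistent with the stated fixed point; the constants are sample values.
a, b, c, d = 2.0, 3.0, 1.0, 0.5

def dT_dt(T, C):
    return b * C - a * T**4   # assumed form of Eq. (1)

def dC_dt(T, C):
    return c * T - d * C      # assumed form of Eq. (2)

T0 = (b * c / (d * a)) ** (1.0 / 3.0)   # stated steady temperature
C0 = (c / d) * T0                       # stated steady water vapor

# Both time derivatives vanish at the nontrivial steady point:
print(dT_dt(T0, C0), dC_dt(T0, C0))
```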

2. Feedback Block Diagrams

Before continuing toward my main purpose, I'd like to show the relationship of the differential equations above to the feedback diagrams over which people have spilled electrons galore, although only green ones, on these pages. Figures 1a and 1b show two possible block models of the second order differential equation in *θ*. The block models and the differential equations are just different descriptions of the same thing; one has no mojo that the other doesn't have. They do have different utility. For example, engineers must turn a system into electronic or mechanical hardware that realizes it, and the block diagram is useful for making this transformation.

Figure 1. Figures a, and b show alternative block models of the second order differential equation in *θ*. In a) the model consists of two integrators, which turn *θ*^{¨ }into *θ*, and the feedback loops contain gain blocks. In b) there is a filter that functions as a leaky integrator. Block representations are not unique.
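The two-integrator loop of Figure 1a can be sketched in discrete time: a summing junction forms *θ̈* from feedback of *θ̇* and *θ*, and two integrators recover *θ*. The gains below assume the linearized second order form *θ̈* = −(*h* + *d*)*θ̇* − (*hd* − *bc*)*θ*, with illustrative parameter values:

```python
# Sketch of the two-integrator block diagram of Figure 1a, in discrete time.
# Gains follow an assumed linearized form theta'' = -(h+d) theta' - (hd-bc) theta;
# the parameter values are illustrative, chosen so that h*d > b*c (stable).
h, b, c, d = 2.0, 1.0, 1.0, 1.0
dt, steps = 1e-3, 10000

theta, theta_dot = 1.0, 0.0          # small initial disturbance, at rest
for _ in range(steps):
    theta_ddot = -(h + d) * theta_dot - (h * d - b * c) * theta  # summing junction
    theta_dot += dt * theta_ddot     # first integrator:  theta'' -> theta'
    theta += dt * theta_dot          # second integrator: theta'  -> theta

# The disturbance decays, as the negative eigenvalues predict.
```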

3. Stability of the Linear System

We are in a position now to apply Nick's stability analysis. Finding the eigenvalues of a two by two matrix is relatively easy; it just makes use of the quadratic formula. The interested reader might consult Wolfram MathWorld online (mathworld.wolfram.com/Eigenvalue.html), which shows the eigenvalues in this instance explicitly.

(7) *λ*_{1,2} = [−(*h* + *d*) ± ((*h* − *d*)^{2} + 4*bc*)^{1/2}] / 2

There are two negative eigenvalues for some combinations of (*h*, *b*, *c*, *d*), meaning the linearized system is stable, and the original non-linear system is then stable against infinitesimal disturbances. As Nick Stokes indicated, initial errors in this system will damp out. This, however, is not the full story. In my opinion it is the correct answer to the wrong question. The question of stability is not just a matter of the behavior of the matrix A, where:

A = | −*h*  *b* |
    | *c*  −*d* |

but also of what occurs with a particular input to the system. In a stable system like A we can state that a bounded input produces a bounded output. That is nice to know, but boundedness does not answer the more complex question of whether the output is useful for a specific purpose. This is the domain that propagation of error addresses. It is, I sense, the position Pat Frank was taking in his essay, and while I don't speak for him, one cannot dismiss the importance of what he was trying to do.
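A minimal sketch of the eigenvalue computation, comparing the quadratic formula against a library routine. The matrix entries follow the linearized form above; the numbers (h, b, c, d) are arbitrary illustrative values:

```python
import numpy as np

# Sketch: eigenvalues of the 2x2 linearization via the quadratic formula,
# cross-checked with numpy. (h, b, c, d) are illustrative sample values.
h, b, c, d = 2.0, 1.0, 1.0, 1.0
A = np.array([[-h, b],
              [c, -d]])                   # assumed linearized coefficient matrix

tr, det = -(h + d), h * d - b * c         # trace and determinant of A
disc = np.sqrt(tr**2 - 4 * det)           # = sqrt((h - d)^2 + 4*b*c)
lam_quad = np.sort(np.array([(tr - disc) / 2, (tr + disc) / 2]))

lam_np = np.sort(np.linalg.eigvals(A).real)
stable = bool(np.all(lam_np < 0))         # both eigenvalues negative here
```

With these sample values both eigenvalues are real and negative, so an initial error damps out, just as described.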

4. The Real Concern of Error Propagation

Let's return to the linearized system (Eqs. 4 and 5). The system really doesn't do much of anything interesting because there is nothing to drive it. We must have a vector of driving terms, one involving the driver of temperature, and possibly one to drive water vapor. Without this, all the solution ever does is decay back to the steady solution, i.e., its errors vanish. But this misses several important considerations. In the design of a control system, the engineer has to deal with inputs and disturbances to the system. Thus, the solar constant varies slightly and pushes the solution away from the equilibrium one.[1] We would put this in the vector **U** in Equation 8). There are undoubtedly disturbances driving the amount of water vapor as well. The next level of difficulty is having a model that is not fully specified. For instance, El Nino is not part of the state vector, but it does supply a disturbance to temperature and humidity. Thus it belongs in the vector **e** in Equation 8). Perhaps these missing parameters provide random influences which then appear to have come from the solar constant or from water vapor. Finally, being only a model, we cannot possibly know the true values of the matrix elements of A; we estimate them as best we can, but they are uncertain.

A more realistic model of what we are dealing with looks like the following state space model, involving the vector of two state variables, temperature and humidity (**X**), and the drivers and random errors of input (**U** + **e**). Just to be complete, I have noted that what we observe is not necessarily the state variables, but rather some function of them that may have passed through instruments first. What we observe is then some other vector, **Y**, which might have its own added errors, **w**, completely independent of errors added to the drivers.

(8) **Ẋ** = A · **X** + B · (**U** + **e**)

(9) **Y** = C · **X** + D · (**U** + **w**)

Even though it is a more realistic model, it still doesn't describe propagation of error, but its solution is the machinery needed to get at propagated error. What we would like to estimate, using a solution to these equations, is the expectation of the difference between what we would observe with a best model, which we can't possibly know exactly, and what the model equations above produce with uncertainties considered.
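To see why a stable system does not imply vanishing uncertainty, here is a sketch of Equation 8) driven by a random disturbance **e**. The matrices and noise scale are illustrative assumptions, not fitted values; the point is only that the driven ensemble keeps a nonzero spread while an undriven initial error decays away:

```python
import numpy as np

# Sketch: a stable state-space system X' = A X + B (U + e), with U = 0 and a
# random disturbance e injected each step. A, B, and the noise scale are
# illustrative assumptions. An ensemble of driven runs retains a nonzero
# variance in the first state variable; the undriven error simply damps out.
rng = np.random.default_rng(0)
A = np.array([[-2.0, 1.0],
              [1.0, -1.0]])                  # stable: both eigenvalues negative
B = np.eye(2)
dt, steps, runs = 0.01, 2000, 200

X = np.tile(np.array([1.0, 0.0]), (runs, 1))  # ensemble of identical initial states
for _ in range(steps):
    e = rng.normal(0.0, 1.0, size=(runs, 2))  # random input disturbance
    X = X + dt * (X @ A.T + e @ B.T)          # forward-Euler step

driven_var = X[:, 0].var()                    # spread persists under the drive

# Undriven comparison: the same initial error decays toward zero.
x = np.array([1.0, 0.0])
for _ in range(steps):
    x = x + dt * (A @ x)
undriven = abs(x[0])
```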

Here is a steady solution for our state variables: **X** = G · (**U** + **e**). The matrix G comes from a combination of the matrices A and B, the details of which do not matter here.[2] As a matrix, G looks like this:

G = | *g*_{11}  *g*_{12} |
    | *g*_{21}  *g*_{22} |

5. An Estimate of Uncertainty

There is no purpose in becoming bogged down in excessive mathematics. So, let's focus attention on only the equation for temperature, *θ*. It is apparent that we do not have a complete or accurate model of temperature. Thus, let whatever Equations 8) and 9) produce as a solution be called an estimate of temperature. We use the symbol *θ̂* for this; the caret is what statisticians generally use for an estimator. Then let the true temperature we would have found from a perfect model be *θ*. Even though we don't actually know the true *θ*, we figure our model is not too bad, and so *θ̂* is nearby despite uncertainties.

Generally people use a Taylor series to build a linear approximation to *θ̂*, and calculate error propagation from it. This Taylor series is written in terms of partial derivatives with respect to all uncertain independent variables and parameters, the *p*_{i}:

(10) (*θ̂* − *θ*) = Σ_{i} (∂*θ̂*/∂*p*_{i}) · δ*p*_{i}

But in our specific case the terms of G, the *g*_{ij}, are the coefficients of the linear approximation, with the drivers, the elements of the vector (**U** + **e**), as inputs. Thus our first order Taylor series looks like:

(11) (*θ̂* − *θ*) = *g*_{11} · (*u*_{1} + *e*_{1}) + *g*_{12} · (*u*_{2} + *e*_{2})

As an estimate of propagated error what we require is the variance of (*θ̂* − *θ*) because, now having random input, this difference has become a random variable itself. Variance is defined in general as *E*((*Z* − *E*(*Z*))^{2}), where *E*(…) means the *expectation value* of the random variable within the parentheses.[3] Therefore we square Equation 11) and take the expectation value, so that what resides on the left hand side is the variance we seek; call it *S*_{θ}^{2}. The right hand side of Equation 11) produces as many terms as there are uncertain parameters, plus cross products.

(12) *S*_{θ}^{2} = *g*_{11}^{2} · *E*[(*u*_{1} + *e*_{1})^{2}] + *g*_{12}^{2} · *E*[(*u*_{2} + *e*_{2})^{2}] + 2 · *g*_{11} *g*_{12} · *E*[(*u*_{1} + *e*_{1}) · (*u*_{2} + *e*_{2})]

The terms within square brackets are variances or covariances. Even if the expectation values of the random inputs are zero, their variances are not. Thus, despite having a stable system of differential equations, the variance of the state variables probably will not tend to zero as time progresses.
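The variance formula of Equation 12), including the covariance cross term, can be checked against a Monte Carlo sample. The values of *g*_{11}, *g*_{12} and the input covariance matrix below are illustrative numbers only, not values from any climate model:

```python
import numpy as np

# Sketch: propagated variance through a linear map, analytic formula versus
# Monte Carlo. g11, g12 and the input covariance are illustrative numbers.
rng = np.random.default_rng(1)
g11, g12 = 0.8, -0.5
cov = np.array([[1.0, 0.3],
                [0.3, 0.5]])   # Var(u1+e1), Var(u2+e2), and their covariance

# Analytic propagated variance: g11^2 V11 + g12^2 V22 + 2 g11 g12 V12
S2 = g11**2 * cov[0, 0] + g12**2 * cov[1, 1] + 2 * g11 * g12 * cov[0, 1]

# Monte Carlo: sample the inputs, push them through the linear map,
# and take the sample variance of the result.
samples = rng.multivariate_normal([0.0, 0.0], cov, size=200_000)
mc = (g11 * samples[:, 0] + g12 * samples[:, 1]).var()
```

Note that the (negative) covariance term matters: dropping it here would overstate the propagated variance.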

There is a further point to discuss. The matrix A is not known exactly; each element has some uncertainty which Equations 8) and 9) do not indicate explicitly. One way to include this is to place an uncertainty matrix in series with A, which then becomes A + Z. Z is a matrix of random variables which we assume to have expectation values of zero for all elements but, once again, not zero variances. This matrix produces uncertainty in G through its relationship to A. A complete propagation of error takes some thought and care.
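A sketch of that idea: draw the perturbation matrix Z at random, recompute the steady-state gain each time, and observe the spread this induces in *g*_{11}. The matrix A, the choice B = I, the steady-state convention G = −(A + Z)^{−1}B, and the size of the Z entries are all illustrative assumptions:

```python
import numpy as np

# Sketch: uncertainty in the elements of A, modeled as A + Z with zero-mean
# random Z, flows into the steady-state gain G. A, B = I, the convention
# G = -(A + Z)^{-1} B, and the Z scale are illustrative assumptions.
rng = np.random.default_rng(2)
A = np.array([[-2.0, 1.0],
              [1.0, -1.0]])
B = np.eye(2)

g11_samples = []
for _ in range(1000):
    Z = rng.normal(0.0, 0.05, size=(2, 2))   # zero-mean uncertainty in A
    G = -np.linalg.inv(A + Z) @ B            # steady state: X = G (U + e)
    g11_samples.append(G[0, 0])

g11_samples = np.array(g11_samples)
g11_nominal = (-np.linalg.inv(A) @ B)[0, 0]  # gain with no uncertainty in A
spread = g11_samples.std()                   # nonzero: G inherits the uncertainty
```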

6. Conclusion

The value of *S*_{θ}^{2} which results from a complete analysis of the contributors to uncertainty, when compared to the precision needed, is what really determines whether or not model results are fit for purpose. As I wrote in a comment at one point in Pat's original posting, the topic of propagation of error is complex; and I was told that it is indeed not complex. I think this discussion shows that it is more complex than many people suppose, and I hope it helps reinforce Pat Frank's points.

7. Notes:

(1) Mototaka Nakamura, in his recent monograph, available for download at Amazon.com, alludes to small variations in the solar constant. This would go into the vector **U** as an example.

(2) G = (A − *λ*·I)^{−1}B. See, for instance, Ogata, System Dynamics, 2004, Pearson-Prentice Hall.

(3) This is explained well in Bevington, Data Reduction and Error Analysis for the Physical Sciences, McGraw-Hill, 1969. Bevington illustrates a number of simple cases. Neither his, nor any other reference I know, tackles propagated error through a system of linear equations.
