Random Differential Equations - 2012-07-21
Reality is usually better described in terms of probabilities.
Last summer I read the book “Random Differential Equations in Science and Engineering” by T.T. Soong (1973). It covers a line of research that seems to have stagnated; today only the so-called Stochastic Differential Equations (SDEs) receive much attention. Apparently, Random Differential Equations (RDEs) are a superset of SDEs and contain a set of interesting cases:
- Differential equations with random initial conditions.
- Differential equations with stochastic input signals.
- Differential equations with random parameters.
For a simple differential equation $$\frac{dx(t)}{dt} = -ax(t) + u(t)$$ the three cases above can be illustrated. With random initial conditions (case 1) we have an arbitrary distribution for $$x(t_0)$$. When $$u(t)$$ is a stochastic process we have an SDE (case 2), and in case 3 the parameter $$a$$ is described by a statistical distribution.
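As a concrete illustration (my own sketch, not from the book, with arbitrarily chosen distributions), a single realization with all three sources of randomness active could be generated with a forward Euler scheme. Note that a rigorous treatment of case 2 would scale the noise increment with $$\sqrt{dt}$$, which this sketch glosses over:

import numpy as np

rng = np.random.RandomState(0)
dt, n_steps = 0.01, 1000

x = rng.normal(0.0, 1.0)        # case 1: random initial condition x(t0)
a = rng.uniform(0.1, 0.3)       # case 3: random parameter a
trajectory = [x]
for _ in range(n_steps):
    u = rng.normal(0.0, 0.5)    # case 2: stochastic input signal u(t)
    x = x + (-a * x + u) * dt   # forward Euler step of dx/dt = -a*x + u
    trajectory.append(x)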
I contacted T.T. Soong by email to check if anything significant had happened in the field since 1973. Unfortunately, he had not been active in the field of RDEs for a long time and could not direct me further. He told me that in some cases RDEs may be transformed into SDEs by whitening the distributions, but it does not seem as if any significant research has been done.
RDEs can be solved in at least two ways. One is Monte Carlo simulation: draw a parameter value from the distribution, realize one trajectory, and repeat until the result has converged. Another is to formulate the problem as a time-dependent partial differential equation (PDE) in which the whole covariance structure is defined for the entire problem. It is easy to see that when fluid dynamics computations already struggle with computation time in three dimensions, an RDE with many states quickly leads to many more dimensions (each state needs a statistical distribution). Solving such PDEs requires an equivalent of mass balance in the form of probability flux balance: we are not allowed to produce or consume probability, since the integral over the entire parameter space must equal one.
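A minimal Monte Carlo sketch of my own for the linear equation, with arbitrarily chosen distributions for the initial condition (case 1) and the parameter (case 3); since the equation is linear, each draw can use the closed-form solution, and convergence can be judged from the running mean of the final state:

import numpy as np

rng = np.random.RandomState(1)
n_draws, T = 100000, 10.0
x0 = rng.normal(5.0, 1.0, n_draws)   # random initial condition (case 1)
a = rng.uniform(0.1, 0.3, n_draws)   # random parameter (case 3)
xT = x0 * np.exp(-a * T)             # exact solution of dx/dt = -a*x at t = T
# Running mean of x(T); convergence is judged by how quickly it settles.
running_mean = np.cumsum(xT) / np.arange(1.0, n_draws + 1.0)
print(running_mean[-1])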
All three cases above may be used to model recurring test procedures on a manufacturing line. The initial conditions may differ between tests, since preconditioning of the test object can be difficult. Samples from a production line are subject to manufacturing tolerances, leading to case 3. If the test procedure has to be performed by a human operator, there may be variations in the input signals, hence case 2. By modeling all three cases simultaneously, it becomes possible to estimate the robustness of the test procedure given all sources of variation. Of course, this is a difficult task to embark on.
Last summer I solved case 1 for a one-dimensional linear differential equation:
$$\frac{dx(t)}{dt} = -0.2x(t)$$
The solution for $$x(t)$$ given $$x(t_0)$$ is:
$$x(t) = h(x(t_0), t)$$
If we invert the solution we get
$$x(t_0) = h^{-1}(x(t), t)$$
and according to the book the joint density function for a set of states is (in general):
$$f(x(t),t) = f\!\left(x(t_0), t_0\right)\big|_{x(t_0) \,=\, h^{-1}(x(t),t)}\, |J|$$
where $$J$$ is the determinant of the Jacobian of the inverse map:
$$J = \det\left[ \frac{\partial x(t_0)}{\partial x(t)} \right]$$
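In the scalar case this formula can be implemented directly. Below is a small helper of my own; f0, h_inv, and dh_inv_dx are hypothetical names for the initial density, the inverse solution map, and its derivative:

import numpy as np

def transform_density(f0, h_inv, dh_inv_dx, x, t):
    # Change of variables in one dimension:
    # f(x, t) = f0(h^{-1}(x, t)) * |d h^{-1}(x, t) / dx|
    return f0(h_inv(x, t)) * np.abs(dh_inv_dx(x, t))

# Example for dx/dt = -0.2*x, where h^{-1}(x, t) = exp(0.2*t) * x:
f0 = lambda x0: np.exp(-x0 ** 2) / np.sqrt(np.pi)  # assumed Gaussian initial density
h_inv = lambda x, t: np.exp(0.2 * t) * x
dh_inv_dx = lambda x, t: np.exp(0.2 * t)
x_grid = np.linspace(-10.0, 10.0, 500)
f_t = transform_density(f0, h_inv, dh_inv_dx, x_grid, 5.0)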
So, for the particular ODE shown earlier we have $$x(t) = e^{-0.2t} x(t_0)$$ (recall that for a linear differential equation $$\frac{dx(t)}{dt} = Ax$$ the general solution is $$x(t) = e^{At} x(t_0)$$). Solving for $$x(t_0)$$ gives
$$x(t_0) = e^{0.2t} x(t)$$
and the Jacobian becomes
$$\frac{\partial x(t_0)}{\partial x(t)} = e^{0.2t}$$
which gives the distribution:
$$f(x(t),t) = e^{0.2t}\, f\!\left(e^{0.2t} x(t), t_0\right)$$
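As a sanity check, substituting $$u = e^{0.2t} x(t)$$ shows that the transformed density still integrates to one, so no probability is produced or consumed:
$$\int_{-\infty}^{\infty} e^{0.2t}\, f\!\left(e^{0.2t} x(t), t_0\right) dx(t) = \int_{-\infty}^{\infty} f(u, t_0)\, du = 1$$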
As shown in the figure below:
What is happening is that the initial distribution is compressed and its amplitude increased to keep the integral at one. Somehow the density of the distribution is coupled to the density of all trajectories of the differential equation at a given time. I wonder if there is an established measure of “trajectory density”?
The Python code to reproduce this plot is given below:
#!/usr/bin/env python2.7
# Test of stochastic initial conditions for an ordinary linear
# differential equation.
#
# Stefan Larsson - 2011-08-15
# lastsys@gmail.com
from pylab import *
from mpl_toolkits.mplot3d import Axes3D

N = 500   # Resolution for density function
Nt = 20   # Number of time steps

x0 = linspace(-10, 10, N)  # Range for density function

# Initial density function: two peaks at -5 and 5.
f_x0 = exp(-(x0 - 5) ** 2.0) + exp(-(x0 + 5) ** 2.0)
# Normalize so that the density integrates to one.
f_x0 = f_x0 / trapz(f_x0, x0)

t = linspace(0, 10, Nt)  # Time vector
a = -0.2                 # Time constant (equation is dx/dt = a*x).

f_x = zeros((N, Nt))  # Storage of resulting density functions
for i_time in xrange(Nt):
    if i_time == 0:
        f_x[:, i_time] = f_x0
        continue
    # Inverse transition matrix for dx/dt = a*x.
    Ainv = exp(-a * t[i_time])
    # Transform the initial density: evaluate it at the inverse-mapped
    # state and scale by the Jacobian to keep the integral at one.
    f_x[:, i_time] = interp(Ainv * x0, x0, f_x0) * Ainv

# Plot each density function as one curve in a 3D axis.
fig = figure()
ax = fig.add_subplot(111, projection='3d')
for i in xrange(Nt):
    ax.plot(x0, tile(t[i], x0.shape), f_x[:, i], 'b')
ax.set_xlabel('$x(t)$ [state]')
ax.set_ylabel('$t$ [time]')
ax.set_zlabel('$f(x(t))$ [density function]')
show()
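As a cross-check of the density propagation (my own addition, reusing the variables from the script above), one can draw samples from the initial density by inverse-CDF sampling, push them through the exact solution, and compare a normalized histogram at the final time against the last column of f_x:

# Append to the script above. Inverse-CDF sampling from f_x0:
cdf = cumsum(f_x0)
cdf = cdf / cdf[-1]
samples_x0 = x0[searchsorted(cdf, rand(100000))]
# Push the samples through the exact solution x(t) = exp(a*t) * x(0):
samples_xT = exp(a * t[-1]) * samples_x0
# A normalized histogram of samples_xT should approximate f_x[:, -1]
# over the corresponding range of x:
counts, edges = histogram(samples_xT, bins=100, density=True)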