# examples of growth of perturbations in chemical organizations

We will examine several simple examples of chemical systems where we start with one species of molecules (or a closed subset of species), then introduce a small perturbation and evolve the system using mass action dynamics. We want to know whether this perturbation will grow and, if so, at what rate. Ultimately, we would like to link the behavior to some feature of the reaction system, perhaps related to Rosen’s theory of M-R systems.

To get started, consider a trivial case, $A+B\to B+B$. Writing $x$ for the concentration of A and $y$ for the concentration of B, the system of equations which describes this reaction under mass action kinetics is:

$$\frac{dx}{dt}=-kxy,\qquad\frac{dy}{dt}=kxy.$$
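These dynamics are easy to check numerically. The sketch below integrates the pair of equations with a hand-rolled fourth-order Runge–Kutta stepper; the rate constant, initial concentrations, and step size are arbitrary choices, not values from the text:

```python
import math

def rk4_step(f, state, h):
    """One classical fourth-order Runge-Kutta step for d(state)/dt = f(state)."""
    k1 = f(state)
    k2 = f([s + 0.5 * h * k for s, k in zip(state, k1)])
    k3 = f([s + 0.5 * h * k for s, k in zip(state, k2)])
    k4 = f([s + h * k for s, k in zip(state, k3)])
    return [s + h * (a + 2 * b + 2 * c + d) / 6
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

k = 1.0             # rate constant (arbitrary)
x0, y0 = 1.0, 0.01  # initial concentrations of A and B (arbitrary)

def rhs(state):
    x, y = state
    return [-k * x * y, k * x * y]  # dx/dt = -kxy, dy/dt = kxy

state, h = [x0, y0], 0.01
for _ in range(5000):  # integrate out to t = 50
    state = rk4_step(rhs, state, h)

x, y = state
print(x + y)  # the total x + y is conserved
print(y)      # y has grown toward x0 + y0
```

The conservation law used in the next step of the derivation shows up directly: the two right-hand sides sum to zero, so the integrator preserves $x+y$ up to rounding error.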

It is easy enough to solve this system. We begin by noting that $\frac{d}{dt}(x+y)=0$, hence $x+y=x_{0}+y_{0}$. Substituting this back into the second equation, we conclude that

$$\frac{dy}{dt}=k(x_{0}+y_{0}-y)y.$$

This equation can readily be solved to yield the implicit solution

$$kt=\frac{1}{x_{0}+y_{0}}\log\left(\frac{y}{x_{0}+y_{0}-y}\cdot\frac{x_{0}}{y_{0}}\right)$$

which can be solved to produce the explicit solution

$$y=x_{0}+y_{0}-\frac{x_{0}+y_{0}}{1+\frac{y_{0}}{x_{0}}\exp(k(x_{0}+y_{0})t)}.$$

Looking at the solution, we see that it starts out at $y=y_{0}$ and grows towards $y=x_{0}+y_{0}$ as $t\to\infty$. This is as we expect — as time goes on, whatever A’s there are left react with B’s to turn into B’s until we are left with nothing but B’s.
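Both limiting behaviors can be confirmed with a short computation against the explicit solution (a sketch; the parameter values are arbitrary samples):

```python
import math

k, x0, y0 = 2.0, 1.0, 0.05  # arbitrary sample parameters
s = x0 + y0                 # the conserved total x + y

def y(t):
    """Explicit solution y(t) = s - s / (1 + (y0/x0) * exp(k*s*t))."""
    return s - s / (1.0 + (y0 / x0) * math.exp(k * s * t))

print(y(0.0))   # starts at y0
print(y(20.0))  # approaches x0 + y0: eventually only B's remain
```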

If we suppose that, at the initial time $t=0$, there is only a tiny proportion of B’s, i.e. $y_{0}\ll x_{0}$, then we may make an expansion of the fraction and conclude that $y$ grows exponentially for small values of $t$:

$$\frac{1}{1+\frac{y_{0}}{x_{0}}\exp(k(x_{0}+y_{0})t)}\approx 1-\frac{y_{0}}{x_{0}}\exp(k(x_{0}+y_{0})t)$$

$$y\approx(x_{0}+y_{0})\frac{y_{0}}{x_{0}}\exp(k(x_{0}+y_{0})t)\approx y_{0}\exp(k(x_{0}+y_{0})t)$$
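The quality of this small-$t$ approximation can be checked directly. In the sketch below (arbitrary parameters satisfying $y_{0}\ll x_{0}$), the relative error stays small as long as $y$ remains far below $x_{0}+y_{0}$:

```python
import math

k, x0, y0 = 1.0, 1.0, 1e-4  # arbitrary values with y0 << x0
s = x0 + y0

def y_exact(t):
    # explicit solution from above
    return s - s / (1.0 + (y0 / x0) * math.exp(k * s * t))

def y_approx(t):
    # leading-order exponential growth of the perturbation
    return y0 * math.exp(k * s * t)

for t in (0.5, 1.0, 2.0):
    rel_err = abs(y_exact(t) - y_approx(t)) / y_exact(t)
    print(t, rel_err)  # small while y(t) << x0 + y0
```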

We can also come to this conclusion by bounding $y$ without solving the equation first. For a simple equation like this, which is readily solved, this is hardly needed but, for larger, more complicated systems, it becomes important, and this simple example can serve as an illustration of the general technique.

###### Theorem 1.

Let $C$ be a real number such that $0<C<1$ and let $x_{0}$ and $y_{0}$ be strictly positive real numbers such that $0<y_{0}<C(x_{0}+y_{0})$. Set $t_{1}=\frac{1}{k(x_{0}+y_{0})}\log\frac{C(x_{0}+y_{0})}{y_{0}}$. Then there exists a function $f\colon[0,t_{1})\to[0,C(x_{0}+y_{0}))$ with $f(0)=y_{0}$ such that $f$ satisfies the differential equation

$$\frac{df(t)}{dt}=k(x_{0}+y_{0}-f(t))f(t)$$

and, for all $t\in[0,t_{1})$,

$$y_{0}\exp((1-C)k(x_{0}+y_{0})t)\leq f(t)\leq y_{0}\exp(k(x_{0}+y_{0})t).$$

###### Proof.

By the existence theorem, there exists a positive real number $t_{0}$ and a function $f\colon[0,t_{0})\to\mathbb{R}$ such that $f(0)=y_{0}$ and $f$ satisfies the differential equation. Since $f(0)=y_{0}<C(x_{0}+y_{0})$, by continuity there exists a positive real number $t_{2}\leq t_{0}$ such that the restriction $f\colon[0,t_{2})\to[0,C(x_{0}+y_{0}))$ satisfies the same differential equation with the same initial condition. Furthermore, we assume that $t_{2}$ is maximal.

Writing $y=f(t)$ and starting with the condition $0<y<C(x_{0}+y_{0})$, a little algebra shows that

$$(1-C)k(x_{0}+y_{0})\leq\frac{1}{y}\frac{dy}{dt}\leq k(x_{0}+y_{0}).$$

Now, $\frac{1}{y}\frac{dy}{dt}=\frac{d}{dt}(\log y)$ so, by the mean value theorem, we conclude that

$$(1-C)k(x_{0}+y_{0})t\leq\log\frac{y}{y_{0}}\leq k(x_{0}+y_{0})t.$$

Exponentiating,

$$y_{0}\exp((1-C)k(x_{0}+y_{0})t)\leq y\leq y_{0}\exp(k(x_{0}+y_{0})t).$$

We can ensure that the condition $y<C(x_{0}+y_{0})$ remains satisfied as long as $y_{0}\exp(k(x_{0}+y_{0})t)\leq C(x_{0}+y_{0})$, which amounts to demanding that $0\leq t\leq t_{1}$ where $t_{1}=\frac{1}{k(x_{0}+y_{0})}\log\frac{C(x_{0}+y_{0})}{y_{0}}$.

∎
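The theorem's two-sided bound can be verified against the explicit solution (a numerical sketch; $C$, $k$, $x_{0}$, $y_{0}$ are arbitrary values meeting the hypotheses):

```python
import math

k, x0, y0, C = 1.0, 1.0, 0.01, 0.5  # arbitrary, with 0 < y0 < C*(x0 + y0)
s = x0 + y0
t1 = math.log(C * s / y0) / (k * s)  # time up to which the bound is claimed

def y(t):
    # explicit solution of dy/dt = k*(s - y)*y with y(0) = y0
    return s - s / (1.0 + (y0 / x0) * math.exp(k * s * t))

tol = 1e-12  # tolerance for rounding error at the t = 0 endpoint
ok = True
for i in range(100):
    t = t1 * i / 100.0  # sample the interval [0, t1)
    lower = y0 * math.exp((1.0 - C) * k * s * t)
    upper = y0 * math.exp(k * s * t)
    ok = ok and (lower - tol <= y(t) <= upper + tol)
print(ok)  # both bounds hold on [0, t1)
```

At $t=0$ the two bounds and the solution all equal $y_{0}$, which is why a small floating-point tolerance is sensible at that endpoint.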

