Delayed logistic equation as a limit of long memory Markov chains

Eldon Barros, Dirk Erhard, Tertuliano Franco, Milton Jara

Abstract

We introduce and analyze a long-memory continuous-time Markov chain on $\mathbb{R}_{+}$ whose jump mechanism depends explicitly on a state in the past. From the present state $x_0$, the process jumps to $x_0\left(1+\frac{1}{N}\right)$ or $x_0\left(1-\frac{x_{-\lfloor \tau N \rfloor}}{N^2}\right)$, each at rate $\tfrac{1}{2}$, where $x_{-\lfloor \tau N \rfloor}$ denotes the state located $\lfloor \tau N \rfloor$ jumps backward in time. Here the delay $\tau > 0$ is fixed and $N$ is the scaling parameter. The initial condition is prescribed by a vector of length $\lfloor \tau N \rfloor + 1$, all of whose entries are equal to $\mu N$. Using a genuine space-time replacement lemma, we prove that, as $N \to \infty$, the rescaled process converges to a deterministic limit governed by the Delayed Logistic Equation (also known as the Hutchinson equation) with delay $\tau$ and initial condition $\rho(t) \equiv \mu$ for $t \in [-\tau, 0]$.
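To make the jump mechanism concrete, here is a minimal simulation sketch (not taken from the paper). The function name, the growing history list, and the exponential holding times at total rate $1$ (the two rate-$\tfrac12$ clocks combined) are illustrative choices; only the jump sizes, rates, and the constant initial profile come from the abstract.

```python
import random

def simulate_chain(N, tau, mu, T):
    """Sketch of the long-memory chain: from the present state x0 the process
    jumps to x0*(1 + 1/N) or x0*(1 - x_delay/N**2), each at rate 1/2, where
    x_delay is the state floor(tau*N) jumps backward in time."""
    delay = int(tau * N)                         # number of jumps looked back
    history = [mu * N] * (delay + 1)             # prescribed constant initial profile
    t, path = 0.0, [(0.0, mu * N)]
    while t < T:
        t += random.expovariate(1.0)             # two rate-1/2 clocks -> total rate 1
        x0 = history[-1]                         # present state
        x_delay = history[-1 - delay]            # state floor(tau*N) jumps back
        if random.random() < 0.5:
            x_new = x0 * (1.0 + 1.0 / N)         # upward jump
        else:
            x_new = x0 * (1.0 - x_delay / N**2)  # downward jump with delayed feedback
        history.append(x_new)
        path.append((t, x_new))
    return path
```

Under the paper's scaling, one macroscopic time unit presumably corresponds to roughly $N$ jumps, so to see the delayed-logistic behaviour one would simulate over a horizon $T$ of order $N$ and plot the states divided by $N$.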

Paper Structure

This paper contains 8 sections, 11 theorems, 78 equations, 6 figures.

Key Result

Theorem 2.1

Fix $\tau > 0$ and $\mu \in (0,1)$. For each $N \in \mathbb N$, let $X^N$ be the continuous-time Markov chain defined above, and let $Y^N$ be its space-time rescaling. Then, for every $T > 0$, the sequence of processes $\{Y^N(t):t\in[-\tau,T]\}$ converges in probability, in the Skorohod topology of $D([-\tau,T], {\mathbb R})$, to the unique continuous solution $u:[-\tau,T]\to {\mathbb R}$ of the Delayed Logistic Equation with delay $\tau$ and initial condition $u(t) \equiv \mu$ for $t \in [-\tau, 0]$.
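For reference, the Delayed Logistic (Hutchinson) Equation with delay $\tau$ has the general form below; the growth-rate constant $r>0$ is a normalization, and the paper fixes its own normalization through the scaling (the exact constant is not reproduced in this summary):

$$\dot u(t) \;=\; r\, u(t)\bigl(1 - u(t-\tau)\bigr), \quad t \in (0,T], \qquad u(t) \equiv \mu \ \text{ for } t \in [-\tau, 0].$$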

Figures (6)

  • Figure 1: Jump rates for the long-memory Markov chain considered here. Note that the origin is an absorbing state. Above, the real value $x_0$ denotes the present state, while $x_{-\lfloor \tau N\rfloor}$ denotes the state $\lfloor \tau N\rfloor$ jumps backward in time.
  • Figure 2: This simulation starts from the invariant profile $u\equiv 1$, so one expects the output of the simulation of the Markov chain to be approximately constant. This is precisely what we see in the picture, noting the drawing's scale: the oscillation along the $y$-axis is less than $0.05$ around the invariant value $1$, hence less than $5\%$ with respect to the invariant value $u=1$.
  • Figure 3: This simulation has delay $\tau = 1$ and starts at $u=2$. Since $\tau < \pi$, the critical value of the Hopf bifurcation of the Delayed Logistic Equation, the invariant profile $u\equiv 1$ is attractive. Moreover, in contrast to the classical logistic differential equation, whose solutions never go below one once started from $u>1$, the delayed logistic equation does, as can be seen in the picture (see also the integration sketch after this list).
  • Figure 4: This simulation has delay $\tau = 2.5<\pi$, so it lies in the subcritical regime, and starts at $u=1.1$. The proximity of the starting point to the invariant profile (constant equal to one) creates some noise in the simulation.
  • Figure 5: Another picture in the subcritical regime, with delay $\tau = 2.5< \pi$ and starting at $u=2$. The simulation is smoother than in the previous picture; we believe this is due to the larger distance between the starting point and the invariant profile equal to one.
  • ...and 1 more figure
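The qualitative behaviour described in these captions can be reproduced with a forward-Euler integration of the limiting equation. The sketch below is not from the paper: the step size, the horizon, and the normalization $r=\tfrac12$ are assumptions (chosen because the Hopf bifurcation of $\dot u = r\,u(1-u(t-\tau))$ occurs at $\tau = \pi/(2r)$, which equals the critical value $\pi$ quoted above when $r=\tfrac12$).

```python
def delayed_logistic(tau, mu, r=0.5, T=40.0, dt=1e-3):
    """Forward-Euler sketch of u'(t) = r*u(t)*(1 - u(t - tau)),
    with constant history u(t) = mu on [-tau, 0].
    The normalization r = 1/2 is an assumption; see the lead-in above."""
    lag = int(round(tau / dt))          # delay measured in grid steps
    u = [mu] * (lag + 1)                # constant history on [-tau, 0]
    for _ in range(int(T / dt)):
        u_now, u_lag = u[-1], u[-1 - lag]
        u.append(u_now + dt * r * u_now * (1.0 - u_lag))
    return u                            # grid values on [-tau, T]

# e.g. delayed_logistic(tau=1.0, mu=2.0) dips below 1 and then relaxes toward
# the invariant profile u = 1, consistent with the behaviour described for Figure 3.
```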

Theorems & Definitions (22)

  • Theorem 2.1
  • Remark 2.2
  • Lemma 3.1
  • Proof
  • Lemma 3.2
  • Proof
  • Lemma 3.3
  • Proof
  • Proposition 3.4
  • Proof
  • ...and 12 more