Toward Efficient Deployment and Synchronization in Digital Twins-Empowered Networks

Hossam Farag, Cedomir Stefanovic

Abstract

Digital twins (DTs) are envisioned as a key enabler of the cyber-physical continuum in future wireless networks. However, efficient deployment and synchronization of DTs in dynamic multi-access edge computing (MEC) environments remain challenging due to time-varying communication and computational resources. This paper investigates the joint optimization of DT deployment and synchronization in dynamic MEC environments. A deep reinforcement learning (DRL) framework is proposed for adaptive DT placement and association to minimize interaction latency between physical and digital entities. To ensure semantic freshness, an update scheduling policy is further designed to minimize the long-term weighted sum of the Age of Changed Information (AoCI) and the update cost. A relative policy iteration algorithm with a threshold-based structure is developed to derive the optimal policy. Simulation results show that the proposed methods achieve lower latency, enhanced information freshness, and reduced system cost compared with benchmark schemes.

Paper Structure

This paper contains 10 sections, 1 theorem, 8 equations, 5 figures, 1 table, 1 algorithm.

Key Result

Theorem 1

Given $s=(\Delta,\delta)$, if $p_r(1)-p_r(\delta+1)\le\frac{\delta}{V_j(\Delta+1,1)-V_j(1,1)}$ for any $\Delta, j \in \mathbb{Z}_{\ge 0}$, then the optimal updating policy $\Psi^\ast(s)$ exhibits a $\Delta$-threshold structure for each fixed $\delta$. Specifically, the optimal action is
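The practical consequence of Theorem 1 is that the optimal scheduling decision reduces to a simple lookup: for each fixed $\delta$ there is a threshold on the AoCI $\Delta$ beyond which updating is optimal. The sketch below illustrates this structure; the function name, the `thresholds` table, and its values are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of the Delta-threshold update policy implied by
# Theorem 1: for each fixed delta, update (action 1) iff the AoCI Delta
# meets or exceeds a per-delta threshold; otherwise stay idle (action 0).
# The threshold values used here are arbitrary placeholders.

def threshold_policy(aoci: int, delta: int, thresholds: dict) -> int:
    """Return 1 (send an update) if the AoCI `aoci` reaches the threshold
    associated with semantic state `delta`, else 0 (remain idle)."""
    return 1 if aoci >= thresholds.get(delta, float("inf")) else 0

# Illustrative thresholds: fresher semantics (small delta) tolerate more age.
thresholds = {1: 3, 2: 2, 3: 1}
print(threshold_policy(4, 1, thresholds))  # AoCI 4 >= threshold 3 -> 1
print(threshold_policy(1, 1, thresholds))  # AoCI 1 <  threshold 3 -> 0
```

In the paper, the thresholds themselves would be derived from the relative value function $V_j(\cdot,\cdot)$ via the relative policy iteration algorithm; the table here merely stands in for that output.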

Figures (5)

  • Figure 1: Illustration of DT-empowered MEC network.
  • Figure 2: Evaluation of the DT deployment cost with varying $N$ and $B$.
  • Figure 3: Performance comparison against baseline methods.
  • Figure 4: Performance comparison against baseline methods with $C=12$ and $\omega=1$.
  • Figure 5: Performance comparison against baseline deployment methods with $C=12$ and $\omega=1$.

Theorems & Definitions (1)

  • Theorem 1