Splitwise: Collaborative Edge-Cloud Inference for LLMs via Lyapunov-Assisted DRL
Abstract
Deploying large language models (LLMs) on edge devices is challenging because of the devices' limited memory and power budgets. Cloud-only inference reduces the device burden but introduces high latency and cost. Static edge-cloud partitions optimize a single metric and struggle when bandwidth fluctuates. We propose Splitwise, a Lyapunov-assisted deep reinforcement learning (DRL) framework for fine-grained, adaptive partitioning of LLMs across edge and cloud environments. Splitwise decomposes transformer layers into attention heads and feed-forward sub-blocks, exposing more partition choices than layer-wise schemes. A hierarchical DRL policy, guided by Lyapunov optimization, jointly minimizes latency, energy consumption, and accuracy degradation while guaranteeing queue stability under stochastic workloads and variable network bandwidth. Splitwise is also robust to communication failures, recovering via partition checkpoints with exponential backoff. Experiments on Jetson Orin NX, Galaxy S23, and Raspberry Pi 5 with GPT-2 (1.5B), LLaMA-7B, and LLaMA-13B show that Splitwise reduces end-to-end latency by 1.4x-2.8x and cuts energy consumption by up to 41% compared with existing partitioners. It lowers 95th-percentile latency by 53-61% relative to cloud-only execution while maintaining accuracy and keeping memory requirements modest.
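Background note (ours, not taken from the paper): Lyapunov-assisted control of this kind typically follows the standard drift-plus-penalty framework. As a minimal sketch under assumed notation (queue backlogs $Q_i(t)$ collected in a state $\Theta(t)$, a per-slot cost $p(t)$, and a tradeoff weight $V \ge 0$), define the quadratic Lyapunov function $L(\Theta(t)) = \tfrac{1}{2}\sum_i Q_i(t)^2$ and the one-slot conditional drift $\Delta(\Theta(t)) = \mathbb{E}\left[L(\Theta(t+1)) - L(\Theta(t)) \mid \Theta(t)\right]$. Each decision slot, a drift-plus-penalty controller picks the action (here, a partition choice) minimizing $\Delta(\Theta(t)) + V\,\mathbb{E}\left[p(t) \mid \Theta(t)\right]$, which keeps the queues stable while driving the time-average cost (latency, energy, accuracy loss) toward its optimum as $V$ grows. How Splitwise instantiates $p(t)$ and the action space is specified in the paper itself.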
Links & Resources
arXiv: https://arxiv.org/abs/2512.23310
Authors
Abolfazl Younesi, Abbas Shabrang Maryan, Elyas Oustad, Zahra Najafabadi Samani, Mohsen Ansari, and Thomas Fahringer
Cite This Paper
Younesi, A., Maryan, A. S., Oustad, E., Samani, Z. N., Ansari, M., & Fahringer, T. (2025). Splitwise: Collaborative Edge-Cloud Inference for LLMs via Lyapunov-Assisted DRL. arXiv preprint arXiv:2512.23310.
Abolfazl Younesi, Abbas Shabrang Maryan, Elyas Oustad, Zahra Najafabadi Samani, Mohsen Ansari, and Thomas Fahringer. "Splitwise: Collaborative Edge-Cloud Inference for LLMs via Lyapunov-Assisted DRL." arXiv preprint arXiv:2512.23310 (2025).