
Audio-Visual Cross-Modal Compression for Generative Face Video Coding

Dec 17, 2025 · 10:56
eess.IV

Abstract

Generative face video coding (GFVC) is vital for modern applications like video conferencing, yet existing methods primarily focus on video motion while neglecting the significant bitrate contribution of audio. Despite the well-established correlation between audio and lip movements, this cross-modal coherence has not been systematically exploited for compression. To address this, we propose an Audio-Visual Cross-Modal Compression (AVCC) framework that jointly compresses audio and video streams. Our framework extracts motion information from video and tokenizes audio features, then aligns them through a unified audio-video diffusion process. This allows synchronized reconstruction of both modalities from a shared representation. In extremely low-rate scenarios, AVCC can even reconstruct one modality from the other. Experiments show that AVCC significantly outperforms the Versatile Video Coding (VVC) standard and state-of-the-art GFVC schemes in rate-distortion performance, paving the way for more efficient multimodal communication systems.

Cite This Paper

Year: 2025
Category: eess.IV
APA

Xu, Y., Guo, M., Zhao, S., Li, W., Li, J., Zhang, L., & Zhang, J. (2025). Audio-visual cross-modal compression for generative face video coding. arXiv preprint arXiv:2512.15262.

MLA

Xu, Youmin, Mengxi Guo, Shijie Zhao, Weiqi Li, Junlin Li, Li Zhang, and Jian Zhang. "Audio-Visual Cross-Modal Compression for Generative Face Video Coding." arXiv preprint arXiv:2512.15262 (2025).