
First Demonstration of Second-order Training of Deep Neural Networks with In-memory Analog Matrix Computing

Dec 5, 2025 · 9:30
cs.ET · Neural and Evolutionary Computing

Abstract

Second-order optimization methods, which leverage curvature information, offer faster and more stable convergence than first-order methods such as stochastic gradient descent (SGD) and Adam. However, their practical adoption is hindered by the prohibitively high cost of inverting the second-order information matrix, particularly in large-scale neural network training. Here, we present the first demonstration of a second-order optimizer powered by in-memory analog matrix computing (AMC) using resistive random-access memory (RRAM), which performs matrix inversion (INV) in a single step. We validate the optimizer by training a two-layer convolutional neural network (CNN) for handwritten letter classification, achieving 26% and 61% fewer training epochs than SGD with momentum and Adam, respectively. On a larger task using the same second-order method, our system delivers a 5.88x improvement in throughput and a 6.9x gain in energy efficiency compared to state-of-the-art digital processors. These results demonstrate the feasibility and effectiveness of AMC circuits for second-order neural network training, opening a new path toward energy-efficient AI acceleration.
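
For readers who want the intuition behind the update rule the abstract describes, here is a minimal sketch of a damped Newton-style step, in which the gradient is preconditioned by the inverse of a curvature matrix. This is a generic NumPy illustration, not the authors' AMC/RRAM method: the toy least-squares problem, the `damping` value, and the use of `np.linalg.solve` in place of the paper's single-step analog matrix inversion are all assumptions made for the example.

```python
import numpy as np

# Toy least-squares problem: recover w_true from y = X @ w_true + noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.01 * rng.normal(size=100)

def grad_and_curvature(w):
    """Gradient and Gauss-Newton curvature matrix of the mean-squared error."""
    residual = X @ w - y
    g = X.T @ residual / len(y)      # first-order information (gradient)
    H = X.T @ X / len(y)             # second-order information (curvature)
    return g, H

w = np.zeros(5)
damping = 1e-3  # small damping term for numerical stability (assumed value)
for step in range(20):
    g, H = grad_and_curvature(w)
    # Second-order update: precondition the gradient with the inverse curvature.
    # The paper performs this matrix inversion in a single step with analog RRAM
    # circuits; here the linear system is simply solved digitally.
    w -= np.linalg.solve(H + damping * np.eye(5), g)

print("parameter error:", np.linalg.norm(w - w_true))
```

The curvature-preconditioned step is what lets second-order methods converge in fewer epochs than SGD or Adam; the paper's contribution is making the inversion itself cheap by computing it in memory.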

Links & Resources

arXiv: https://arxiv.org/abs/2512.05342

Authors

Saitao Zhang, Yubiao Luo, Shiqing Wang, Pushen Zuo, Yongxiang Li, Lunshuai Pan, Zheng Miao, Zhong Sun

Cite This Paper

Year: 2025
Category: cs.ET
APA

Zhang, S., Luo, Y., Wang, S., Zuo, P., Li, Y., Pan, L., Miao, Z., & Sun, Z. (2025). First Demonstration of Second-order Training of Deep Neural Networks with In-memory Analog Matrix Computing. arXiv preprint arXiv:2512.05342.

MLA

Zhang, Saitao, Yubiao Luo, Shiqing Wang, Pushen Zuo, Yongxiang Li, Lunshuai Pan, Zheng Miao, and Zhong Sun. "First Demonstration of Second-order Training of Deep Neural Networks with In-memory Analog Matrix Computing." arXiv preprint arXiv:2512.05342 (2025).