
LLM-Guided Exemplar Selection for Few-Shot Wearable-Sensor Human Activity Recognition

Dec 26, 2025 · 7:59
Computation and Language · Artificial Intelligence · Computer Vision and Pattern Recognition

Abstract

In this paper, we propose an LLM-Guided Exemplar Selection framework to address a key limitation in state-of-the-art Human Activity Recognition (HAR) methods: their reliance on large labeled datasets and purely geometric exemplar selection, which often fail to distinguish similar wearable-sensor activities such as walking, walking upstairs, and walking downstairs. Our method incorporates semantic reasoning via an LLM-generated knowledge prior that captures feature importance, inter-class confusability, and exemplar budget multipliers, and uses it to guide exemplar scoring and selection. These priors are combined with margin-based validation cues, PageRank centrality, hubness penalization, and facility-location optimization to obtain a compact and informative set of exemplars. Evaluated on the UCI-HAR dataset under strict few-shot conditions, the framework achieves a macro F1-score of 88.78%, outperforming classical approaches such as random sampling, herding, and $k$-center. The results show that LLM-derived semantic priors, when integrated with structural and geometric cues, provide a stronger foundation for selecting representative sensor exemplars in few-shot wearable-sensor HAR.
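To make the selection recipe above concrete, the following is a minimal Python sketch of one way the pieces could fit together: an LLM-derived prior supplies per-feature importance weights and per-class budget multipliers, and a greedy facility-location pass then picks a compact, well-covering exemplar set within each class. The prior format (feature_weights, budget_multiplier), the Gaussian similarity, and the function name select_exemplars are illustrative assumptions rather than the authors' implementation, and the paper's margin-based validation cues, PageRank centrality, and hubness penalty are omitted for brevity.

import numpy as np

def select_exemplars(X, y, llm_prior, base_budget=5):
    """Greedy, prior-weighted facility-location selection of exemplars.

    X           : (n, d) array of wearable-sensor feature windows
    y           : (n,) array of integer activity labels
    llm_prior   : dict with assumed keys
                  'feature_weights'   : (d,) LLM-derived feature importance
                  'budget_multiplier' : {class_id: float}, larger for confusable classes
    base_budget : nominal number of exemplars per class
    """
    w = np.asarray(llm_prior["feature_weights"], dtype=float)
    Xw = X * w  # re-weight features by the LLM importance prior

    selected = []
    for c in np.unique(y):
        idx = np.flatnonzero(y == c)
        Xc = Xw[idx]

        # Budget multiplier: confusable classes (e.g. the walking variants) get more slots.
        k = int(round(base_budget * llm_prior["budget_multiplier"].get(int(c), 1.0)))
        k = max(1, min(k, len(idx)))

        # Within-class similarity (Gaussian kernel on the re-weighted features).
        d2 = ((Xc[:, None, :] - Xc[None, :, :]) ** 2).sum(-1)
        sim = np.exp(-d2 / (d2.mean() + 1e-12))

        # Greedy facility-location: each pick maximizes the gain in total coverage.
        chosen, covered = [], np.zeros(len(idx))
        for _ in range(k):
            gains = np.maximum(sim, covered[None, :]).sum(axis=1) - covered.sum()
            gains[chosen] = -np.inf  # do not re-pick already chosen exemplars
            j = int(np.argmax(gains))
            chosen.append(j)
            covered = np.maximum(covered, sim[j])
        selected.extend(idx[chosen].tolist())
    return np.asarray(selected)

# Toy usage with synthetic data and an illustrative prior.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 8))
y = np.repeat(np.arange(3), 40)
prior = {"feature_weights": np.ones(8),
         "budget_multiplier": {0: 1.0, 1: 2.0, 2: 2.0}}
print(select_exemplars(X, y, prior, base_budget=4))

Greedy maximization of a facility-location objective is a standard approximation for this kind of coverage problem, and the budget multiplier is where the LLM's inter-class confusability judgment would translate into extra exemplars for easily confused activities.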

Links & Resources

Authors

Elsen Ronando and Sozo Inoue

Cite This Paper

Year: 2025
Category: cs.CL
APA

Ronando, E., & Inoue, S. (2025). LLM-Guided Exemplar Selection for Few-Shot Wearable-Sensor Human Activity Recognition. arXiv preprint arXiv:2512.22385.

MLA

Ronando, Elsen, and Sozo Inoue. "LLM-Guided Exemplar Selection for Few-Shot Wearable-Sensor Human Activity Recognition." arXiv preprint arXiv:2512.22385 (2025).