Episode

PocketX: Preference Alignment for Protein Pockets Design through Group Relative Policy Optimization

Dec 28, 2025 · 10:39
Bioinformatics

Abstract

Designing protein pockets that target specific ligands is crucial for drug discovery and enzyme engineering. Although deep generative models show promise in proposing high-quality pockets, they are usually trained purely to match the data distribution and therefore overlook key biophysical properties, such as binding affinity, expression, and solubility, that ultimately determine developability and success. We introduce PocketX, an online reinforcement learning framework that explicitly aligns a generative model with desired biophysical properties. The framework first trains a base model that co-designs pocket structures and sequences conditioned on a target ligand, and then fine-tunes this model with Group Relative Policy Optimization (GRPO) to reward the desired attributes. Because GRPO employs group-relative rewards, it produces lower-variance policy updates, resulting in more stable and efficient learning than competing alignment strategies. Evaluated on the CrossDocked2020 benchmark, PocketX surpasses existing methods in metrics such as binding energy and evolutionary plausibility. Ablation studies further show that GRPO outperforms alternative alignment strategies, including Direct Preference Optimization (DPO), confirming GRPO's effectiveness for biophysical property alignment.
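To make the group-relative reward idea concrete, the sketch below shows how per-sample advantages could be computed for a group of candidate pockets generated for the same target ligand, so that each sample is scored against its own group rather than a learned value baseline. This is a minimal illustration only; the function name, reward values, and normalization constant are assumptions for the example, not code or numbers from the paper.

```python
import numpy as np

def grpo_advantages(rewards, eps=1e-8):
    """Group-relative advantages in the GRPO style: normalize each sample's
    reward against the mean and spread of its own group, so no separate
    value network is needed as a baseline."""
    rewards = np.asarray(rewards, dtype=float)
    return (rewards - rewards.mean()) / (rewards.std() + eps)

# Hypothetical rewards (e.g., negated binding energy) for a group of
# pockets sampled for one ligand.
group_rewards = [2.1, 0.4, 1.3, -0.5]
advantages = grpo_advantages(group_rewards)

# Positive advantages up-weight a sample in the policy update, negative
# ones down-weight it; centering on the group mean keeps the update
# variance low relative to using raw rewards.
print(advantages)
```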

Cite This Paper

Year: 2025
Category: Bioinformatics
APA

Fan, Y., He, Z., Li, B., He, B., Zhang, M., Zhang, J., & Zhang, H. (2025). PocketX: Preference Alignment for Protein Pockets Design through Group Relative Policy Optimization. arXiv preprint arXiv:10.64898/2025.12.28.696754.

MLA

Fan, Y., He, Z., Li, B., He, B., Zhang, M., Zhang, J., and Zhang, H. "PocketX: Preference Alignment for Protein Pockets Design through Group Relative Policy Optimization." arXiv preprint arXiv:10.64898/2025.12.28.696754 (2025).