
Generative Modeling in Protein Design: Neural Representations, Conditional Generation, and Evaluation Standards

Senura Hansaja Wanasekara, Minh-Duong Nguyen, Xiaochen Liu, Nguyen H. Tran, Ken-Tye Yong

Abstract

Generative modeling has become a central paradigm in protein research, extending machine learning beyond structure prediction toward sequence design, backbone generation, inverse folding, and biomolecular interaction modeling. However, the literature remains fragmented across representations, model classes, and task formulations, making it difficult to compare methods or identify appropriate evaluation standards. This survey provides a systematic synthesis of generative AI in protein research, organized around (i) foundational representations spanning sequence, geometric, and multimodal encodings; (ii) generative architectures including $\mathrm{SE}(3)$-equivariant diffusion, flow matching, and hybrid predictor-generator systems; and (iii) task settings from structure prediction and de novo design to protein-ligand and protein-protein interactions. Beyond cataloging methods, we compare assumptions, conditioning mechanisms, and controllability, and we synthesize evaluation best practices that emphasize leakage-aware splits, physical validity checks, and function-oriented benchmarks. We conclude with critical open challenges: modeling conformational dynamics and intrinsically disordered regions, scaling to large assemblies while maintaining efficiency, and developing robust safety frameworks for dual-use biosecurity risks. By unifying architectural advances with practical evaluation standards and responsible development considerations, this survey aims to accelerate the transition from predictive modeling to reliable, function-driven protein engineering.

Paper Structure

This paper contains 86 sections, 10 equations, 4 figures, and 7 tables.

Figures (4)

  • Figure 1: Illustration of the hierarchical organization of protein structure, progressing from the (a) linear amino acid sequence (primary structure) through (b) local folding patterns (secondary), (c) overall 3D conformation (tertiary), (d) to multi-subunit complexes (quaternary).
  • Figure 2: Overview of the ESM-IF inverse folding architecture. The model transforms a protein’s 3D structure into its corresponding amino acid sequence by processing vector and scalar features through a structure encoder, followed by sequence decoding using a transformer-based architecture.
  • Figure 3: Overview of the DiffDock model architecture. The system integrates a 3D protein structure and a ligand pose as inputs, which are processed through a score model and a confidence model to predict docking outcomes. The output includes top ligand poses with associated 3D coordinates (translation, rotation, torsion angles) and a confidence score, enabling accurate structure-based drug docking predictions.
  • Figure 4: DLM-DTI employs a dual-encoder framework for drug-target interaction prediction. The Target Encoder processes protein sequences using a Teacher Model (ProtBERT) and a Student Model, while the Drug Encoder utilizes ChemBERTa to encode chemical structures. The outputs from both encoders are concatenated and passed through a feed-forward neural network to predict binding probability.
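The late-fusion step described in the Figure 4 caption (concatenating the two encoder outputs and passing them through a feed-forward head to produce a binding probability) can be sketched in a few lines. This is a minimal illustration with NumPy, not the DLM-DTI implementation: the embedding dimensions, weight initializations, and the `feed_forward` helper below are hypothetical stand-ins for the real ProtBERT/ChemBERTa encoder outputs and the trained prediction head.

```python
import numpy as np

rng = np.random.default_rng(0)

def feed_forward(x, w1, b1, w2, b2):
    """Two-layer feed-forward head: ReLU hidden layer, sigmoid output."""
    h = np.maximum(0.0, x @ w1 + b1)
    logit = h @ w2 + b2
    return 1.0 / (1.0 + np.exp(-logit))  # binding probability in (0, 1)

# Hypothetical embedding sizes; the actual encoder dimensions differ.
d_protein, d_drug, d_hidden = 16, 8, 32

protein_emb = rng.normal(size=d_protein)  # stand-in for the Target Encoder output
drug_emb = rng.normal(size=d_drug)        # stand-in for the Drug Encoder output

# Late fusion by concatenation, as in the Figure 4 caption.
fused = np.concatenate([protein_emb, drug_emb])

# Randomly initialized (untrained) head weights, for illustration only.
w1 = rng.normal(scale=0.1, size=(d_protein + d_drug, d_hidden))
b1 = np.zeros(d_hidden)
w2 = rng.normal(scale=0.1, size=d_hidden)
b2 = 0.0

p = feed_forward(fused, w1, b1, w2, b2)
print(f"predicted binding probability: {p:.3f}")
```

The design choice being illustrated is late fusion: each modality is encoded independently, and interaction between protein and drug features is modeled only in the joint feed-forward head after concatenation.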