Comparative Analysis of LSTM, Transformer, and TCN in Predicting Peptide-MHC Class II Binding Affinity
DOI: https://doi.org/10.3126/jacem.v11i1.84250

Keywords: Artificial Intelligence, Deep Learning, IEDB, LSTM, Machine Learning, TCN

Abstract
Accurate prediction of Major Histocompatibility Complex (MHC) class II-peptide binding affinity remains a critical challenge in immunotherapy and vaccine development due to the complexity of molecular interactions and allele-specific binding preferences. While existing computational approaches have shown promise, identifying the optimal deep learning architecture for this task remains an open question. This study addresses that gap by systematically comparing three state-of-the-art deep learning architectures: Long Short-Term Memory networks (LSTM), Transformers, and Temporal Convolutional Networks (TCN). Using the IEDB2016 dataset of 134,281 MHC class II-peptide binding records, our evaluation employed five-fold cross-validation and ensemble modeling to assess both regression and classification performance. Results demonstrate that TCN consistently outperforms the competing architectures, achieving superior regression metrics with an R² of 0.6208 (vs. LSTM: 0.5923; Transformer: 0.5706) and superior classification performance with an AUROC of 0.8766 (vs. LSTM: 0.8736; Transformer: 0.8707). Cross-validation confirmed TCN's robustness (average R²: 0.6229), and ensemble methods further validated its superiority (R²: 0.6805). These findings establish TCN as the most effective of the three architectures for peptide-MHC binding prediction, with significant implications for computational immunology and therapeutic design.
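The evaluation protocol named in the abstract (five-fold cross-validation scored with regression R² and, after binarizing affinity, classification AUROC) can be sketched as follows. This is an illustrative sketch only: the Ridge regressor, synthetic features, and median-threshold binarization are placeholder assumptions, not the paper's LSTM/Transformer/TCN models or the IEDB2016 preprocessing.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score, roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-in for peptide-MHC features and continuous binding
# affinities (the real study uses IEDB2016 peptide encodings instead).
X = rng.normal(size=(500, 20))
y = X @ rng.normal(size=20) + rng.normal(scale=0.5, size=500)

kf = KFold(n_splits=5, shuffle=True, random_state=0)
fold_r2 = []
oof_pred = np.zeros_like(y)  # out-of-fold predictions

for train_idx, test_idx in kf.split(X):
    model = Ridge().fit(X[train_idx], y[train_idx])  # placeholder model
    pred = model.predict(X[test_idx])
    oof_pred[test_idx] = pred
    fold_r2.append(r2_score(y[test_idx], pred))

# Regression view: average R² across the five folds.
avg_r2 = float(np.mean(fold_r2))

# Classification view: binarize affinity at its median (an assumed
# threshold) and score the out-of-fold predictions with AUROC.
auroc = float(roc_auc_score(y > np.median(y), oof_pred))
print(f"avg R2 = {avg_r2:.3f}, AUROC = {auroc:.3f}")
```

A simple ensemble, as in the paper's final comparison, would average the per-fold models' predictions on held-out data before computing the same metrics.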