Efficient Transformer Models for Large Scale NLP: An Empirical Statistical Research

Authors

  • Abubakri O. Murainah
  • Fadilulahi O. Popoola
  • Abdulazeez Murainah

Keywords

Transformer Models, Natural Language Processing, BERT, XLNet, DistilBERT, ALBERT

Abstract

The advent of Transformer models has substantially improved performance on Natural Language Processing (NLP) tasks such as machine translation and text summarization. However, their resource-intensive nature poses challenges for large-scale applications. This study evaluates the efficiency of four Transformer-based models (BERT, XLNet, DistilBERT, and ALBERT), focusing on key metrics: accuracy, training time, memory usage, and inference speed. Evaluations were conducted on the Wikipedia dump corpus using an NVIDIA Tesla V100 GPU and the PyTorch library, with a fixed batch size and a learning rate of 3e-5. The findings show that XLNet and BERT attain the highest accuracy, at 94.8% and 92.8%, respectively, but are resource-intensive because of their high parameter counts (340 million for XLNet and 345 million for BERT). DistilBERT, with 91.3% accuracy and only 66 million parameters, balances performance and resource efficiency, making it a strong contender for settings with limited computational power. ALBERT, known for its memory efficiency, delivers acceptable performance with 90.0% accuracy and just 18 million parameters, thanks to its parameter-sharing technique. This study highlights the trade-offs among accuracy, computational efficiency, and memory usage when selecting Transformer models for large-scale NLP tasks. Among its recommendations, the study concludes that BERT and XLNet are best suited to applications where maximum accuracy is required and resources are plentiful, while DistilBERT and ALBERT are viable choices for resource-constrained situations, ensuring effective deployment in practical circumstances.
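To make the evaluation setup concrete, below is a minimal sketch (not the authors' code) of how the four models could be loaded and profiled on PyTorch with the HuggingFace transformers library. The specific checkpoint names and the batch size of 32 are assumptions; the abstract only states a fixed batch size, a learning rate of 3e-5, and the metrics measured (parameter count, inference speed, etc.).

```python
# Sketch of loading and profiling the four models; checkpoints and batch size are assumed.
import time
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODELS = {
    "BERT": "bert-large-uncased",          # assumed checkpoint
    "XLNet": "xlnet-large-cased",          # assumed checkpoint
    "DistilBERT": "distilbert-base-uncased",
    "ALBERT": "albert-base-v2",
}

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

for name, checkpoint in MODELS.items():
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    model = AutoModelForSequenceClassification.from_pretrained(checkpoint).to(device).eval()

    # Parameter count in millions, one of the efficiency metrics reported in the paper.
    n_params = sum(p.numel() for p in model.parameters()) / 1e6

    # Rough inference-speed measurement on a dummy batch (batch size of 32 assumed).
    batch = tokenizer(["sample text"] * 32, return_tensors="pt",
                      padding=True, truncation=True).to(device)
    with torch.no_grad():
        start = time.perf_counter()
        model(**batch)
        latency = time.perf_counter() - start

    print(f"{name}: {n_params:.0f}M parameters, {latency * 1000:.1f} ms per batch")

    # Fine-tuning would use the learning rate reported in the abstract (3e-5), e.g.:
    # optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
```

Measuring memory usage and training time would follow the same pattern, e.g. via torch.cuda.max_memory_allocated() around a training loop; the exact procedure used in the study is not described in the abstract.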

Published

2024-10-09

How to Cite

Abubakri O. Murainah, Fadilulahi O. Popoola, & Abdulazeez Murainah. (2024). Efficient Transformer Models for Large Scale NLP: An Empirical Statistical Research. American Journal of Engineering, Mechanics and Architecture (2993-2637), 2(10), 55–65. Retrieved from https://grnjournal.us/index.php/AJEMA/article/view/5913