Top Accurate Models for Handling Complex Arabic Linguistic Structures

Authors

  • Murteza Hanoon Tuama Department of Computer Techniques Engineering, Imam Al-Kadhum University College, Baghdad, Iraq
  • Wahhab Muslim mashloosh Department of Computer Techniques Engineering, Imam Al-Kadhum University College, Baghdad, Iraq
  • Yasir Mahmood Younus Department of Computer Techniques Engineering, Imam Al-Kadhum University College, Baghdad, Iraq

Keywords:

Natural Language Processing, Arabic language, Sentiment Analysis

Abstract

Arabic, a language rich in morphology but comparatively under-resourced and syntactically under-explored relative to English, poses major hurdles for Arabic Natural Language Processing (NLP) applications such as Question Answering (QA), Named Entity Recognition (NER), and Sentiment Analysis (SA). However, recent advances in transformer-based models have demonstrated that language-specific BERT models, when pre-trained on large corpora, excel at Arabic comprehension. These models have set new benchmarks and produced outstanding outcomes across a wide range of NLP tasks. In this study, we present AraBERT, a BERT model built exclusively for Arabic, with the goal of replicating BERT's success in English. We compare AraBERT to Google's multilingual BERT and other cutting-edge techniques. The results reveal that the newly designed AraBERT outperforms prior approaches on most Arabic NLP tasks.

Published

2024-09-28

How to Cite

Tuama, M. H., mashloosh, W. M., & Younus, Y. M. (2024). Top Accurate Models for Handling Complex Arabic Linguistic Structures. American Journal of Engineering, Mechanics and Architecture (2993-2637), 2(9), 113–122. Retrieved from http://grnjournal.us/index.php/AJEMA/article/view/5806