%0 Journal Article
%T State Space Models Based Efficient Long Documents Classification
%A Bo Song
%A Yuanhao Xu
%A Penghao Liang
%A Yichao Wu
%J Journal of Intelligent Learning Systems and Applications
%P 143-154
%@ 2150-8410
%D 2024
%I Scientific Research Publishing
%R 10.4236/jilsa.2024.163009
%X Large language models like the Generative Pretrained Transformer (GPT) have significantly advanced natural language processing (NLP) in recent years. They have excelled in tasks such as language translation, question answering, and text generation. However, their effectiveness is limited by the quadratic training complexity of Transformer models, O(L²), which makes it challenging to handle demanding tasks such as classifying long documents. To overcome this challenge, researchers have explored architectures and techniques such as sparse attention mechanisms, hierarchical processing, and efficient attention modules. A recent innovation called Mamba, based on a state space model approach, offers fast inference and scalability in sequence length owing to its unique selection mechanism. By incorporating this selection mechanism, Mamba enables contextual reasoning and targeted focus on particular inputs, thereby reducing computational costs and enhancing performance. Despite these advantages, the application of Mamba to long document classification has not been thoroughly investigated. This study aims to fill this gap by developing a Mamba-based model for long document classification and assessing its efficacy on four datasets: Hyperpartisan, 20 Newsgroups, EURLEX, and CMU Book Summary. Our study reveals that the Mamba model surpasses NLP models such as BERT and Longformer, showcasing exceptional performance and highlighting Mamba's efficiency in handling long document classification tasks. These results hold implications for NLP applications, empowering advanced language models to address challenging tasks with extended sequences more effectively. This study opens doors for further exploration of Mamba's abilities and its potential utilization across diverse NLP domains.
%K Mamba
%K Transformer
%K NLP
%U http://www.scirp.org/journal/PaperInformation.aspx?PaperID=133869