%0 Journal Article
%T GUARDIAN: A Multi-Tiered Defense Architecture for Thwarting Prompt Injection Attacks on LLMs
%A Parijat Rai
%A Saumil Sood
%A Vijay K. Madisetti
%A Arshdeep Bahga
%J Journal of Software Engineering and Applications
%P 43-68
%@ 1945-3124
%D 2024
%I Scientific Research Publishing
%R 10.4236/jsea.2024.171003
%X This paper introduces a novel multi-tiered defense architecture to protect language models from adversarial prompt attacks. We construct adversarial prompts using strategies such as role emulation and manipulative assistance to simulate real threats. We introduce a comprehensive, multi-tiered defense framework named GUARDIAN (Guardrails for Upholding Ethics in Language Models), comprising a system prompt filter, a pre-processing filter leveraging a toxic classifier and an ethical prompt generator, and a pre-display filter that uses the model itself for output screening. Extensive testing on Meta's Llama-2 model demonstrates the capability to block 100% of attack prompts. The approach also auto-suggests safer prompt alternatives, thereby bolstering language model security. Quantitatively evaluated defense layers and an ethical substitution mechanism represent key innovations to counter sophisticated attacks. The integrated methodology not only fortifies smaller LLMs against emerging cyber threats but also guides the broader application of LLMs in a secure and ethical manner.
%K Large Language Models (LLMs)
%K Adversarial Attack
%K Prompt Injection
%K Filter Defense
%K Artificial Intelligence
%K Machine Learning
%K Cybersecurity
%U http://www.scirp.org/journal/PaperInformation.aspx?PaperID=130663