%0 Journal Article
%T Security Vulnerability Analyses of Large Language Models (LLMs) through Extension of the Common Vulnerability Scoring System (CVSS) Framework
%A Alicia Biju
%A Vishnupriya Ramesh
%A Vijay K. Madisetti
%J Journal of Software Engineering and Applications
%P 340-358
%@ 1945-3124
%D 2024
%I Scientific Research Publishing
%R 10.4236/jsea.2024.175019
%X Large Language Models (LLMs) have revolutionized Generative Artificial Intelligence (GenAI) tasks, becoming integral to applications across society, including text generation, translation, and summarization. Their widespread use, however, underscores the critical need to strengthen their security posture, so as to ensure the integrity and reliability of their outputs and to minimize harmful effects. Prompt injections and training data poisoning are two of the most prominent vulnerabilities in LLMs; they can lead to unpredictable and undesirable behaviors such as biased outputs, misinformation propagation, and even malicious content generation. The Common Vulnerability Scoring System (CVSS) framework provides a standardized approach to capturing the principal characteristics of vulnerabilities, giving the security and AI communities a deeper understanding of their severity. By extending the current CVSS framework, we generate scores for these vulnerabilities so that organizations can prioritize mitigation efforts, allocate resources effectively, and implement targeted security measures to defend against potential risks.
%K Common Vulnerability Scoring System (CVSS)
%K Large Language Models (LLMs)
%K DALL-E
%K Prompt Injections
%K Training Data Poisoning
%K CVSS Metrics
%U http://www.scirp.org/journal/PaperInformation.aspx?PaperID=133503