%0 Journal Article
%T Evaluating Privacy Leakage and Memorization Attacks on Large Language Models (LLMs) in Generative AI Applications
%A Harshvardhan Aditya
%A Siddansh Chawla
%A Gunika Dhingra
%A Parijat Rai
%A Saumil Sood
%A Tanmay Singh
%A Zeba Mohsin Wase
%A Arshdeep Bahga
%A Vijay K. Madisetti
%J Journal of Software Engineering and Applications
%P 421-447
%@ 1945-3124
%D 2024
%I Scientific Research Publishing
%R 10.4236/jsea.2024.175023
%X The recent interest in the deployment of Generative AI applications that use large language models (LLMs) has brought to the forefront significant privacy concerns, notably the leakage of Personally Identifiable Information (PII) and other confidential or protected information that may have been memorized during training, specifically during a fine-tuning or customization process. We describe different black-box attacks from potential adversaries and study their impact on the amount and type of information that may be recovered from commonly used and deployed LLMs. Our research investigates the relationship between PII leakage, memorization, and factors such as model size, architecture, and the nature of attacks employed. The study utilizes two broad categories of attacks: PII leakage-focused attacks (auto-completion and extraction attacks) and memorization-focused attacks (various membership inference attacks). The findings from these investigations are quantified using an array of evaluative metrics, providing a detailed understanding of LLM vulnerabilities and the effectiveness of different attacks.
%K Large Language Models
%K PII Leakage
%K Privacy
%K Memorization
%K Overfitting
%K Membership Inference Attack (MIA)
%U http://www.scirp.org/journal/PaperInformation.aspx?PaperID=133625