Large language models (LLMs) provide cybercriminals and social engineers with a powerful new method for efficiently creating phishing emails and other deceptive content (e.g., fake company websites). In this thesis project, you will conduct a literature review on the risks LLMs pose to cyber security, focusing on social engineering attacks such as phishing. You will then design and conduct an empirical experiment to assess how persuasive and effective AI-generated phishing emails are compared to human-authored phishing emails. Ideally, you will also present recommendations on how to mitigate the identified risks.
If you are interested in this topic, please send an email to Hanna Schraffenberger at hanna.schraffenberger@ru.nl.