How does the mathematical theory of adversarial AI hold up when it meets the messy reality of physical objects?
This internship is inspired by the seminal work Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition by Sharif et al.
They demonstrated how eyeglass frames can serve as a medium for evasion and impersonation attacks against face recognition systems.
The goal of this project, however, is to move beyond the specific scope of that study: we want to investigate the broader feasibility of adversarial attacks in practice, applying evasion or backdoor attacks to physical objects and evaluating how well they perform against live systems in real time.
We invite students to put these attacks to the test in the real world.
Whether you want to manipulate glasses to fool face recognition, craft adversarial physical textures to break image classifiers, or explore adversarial triggers outside the vision domain, the direction of the research is open to your creativity and initiative.
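To give a flavour of a possible first experiment, here is a minimal sketch of a purely digital evasion attack, the fast gradient sign method (FGSM), against an off-the-shelf image classifier in PyTorch. The model, epsilon value, and input are placeholders chosen for illustration; a physical attack would additionally have to survive printing, lighting, and camera capture, which is exactly the gap this project explores.

    import torch
    import torch.nn.functional as F
    import torchvision.models as models

    # Placeholder classifier: any differentiable image model would do.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.eval()

    def fgsm_evasion(image, true_label, epsilon=0.03):
        # image: (1, 3, H, W) tensor with pixel values in [0, 1]
        # Perturb the image so the classifier no longer predicts true_label.
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), true_label)
        loss.backward()
        # Step in the direction that increases the loss, then clamp to valid pixels.
        adversarial = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0)
        return adversarial.detach()

    # Stand-in input; in practice this would be a photo of the physical object.
    x = torch.rand(1, 3, 224, 224)
    y = torch.tensor([207])  # placeholder class index
    x_adv = fgsm_evasion(x, y)
    print("prediction before:", model(x).argmax(dim=1).item())
    print("prediction after: ", model(x_adv).argmax(dim=1).item())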
Applicants should have a strong interest in AI security and proficiency in Python.
To apply or to request further details, please email Tom Janssen-Groesbeek at tom.janssen-groesbeek@ru.nl.