Security of AI/ML
Investigating the unique vulnerabilities of artificial intelligence and developing defensive frameworks to ensure model integrity and privacy.
Focus Areas
- Adversarial Robustness: Analyzing how small, targeted perturbations can deceive neural networks and designing robust training protocols to mitigate them.
- Model Integrity & Poisoning: Defending against data poisoning attacks that corrupt a model's decision-making during training.
- Privacy-Preserving AI: Applying Differential Privacy and Federated Learning to build intelligent systems that protect the sensitivity of the underlying training data.
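To illustrate the adversarial-robustness idea above, here is a minimal sketch of the Fast Gradient Sign Method (FGSM) applied to a logistic-regression "network". All weights and inputs are toy values chosen for the example, not from any of the work described here: each feature is nudged by epsilon in the direction that increases the loss, flipping a confidently correct prediction.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm_perturb(x, w, b, y, epsilon):
    """FGSM: shift each feature by epsilon in the sign of the
    loss gradient with respect to the input."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    # For logistic regression with cross-entropy loss:
    # d(loss)/dx_i = (p - y) * w_i
    grad = [(p - y) * wi for wi in w]
    return [xi + epsilon * (1.0 if g > 0 else -1.0)
            for xi, g in zip(x, grad)]

def predict(x, w, b):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) > 0.5

# Toy model and input: the clean prediction is correct (class 1),
# but the perturbed input crosses the decision boundary.
w, b = [2.0, -3.0], 0.5
x, y = [1.0, -1.0], 1.0
x_adv = fgsm_perturb(x, w, b, y, epsilon=1.2)
print(predict(x, w, b), predict(x_adv, w, b))  # clean vs. adversarial
```

Robust training protocols such as adversarial training work by folding perturbed examples like `x_adv` back into the training set.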
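The poisoning threat above can be made concrete with a small sketch: a handful of attacker-injected outliers drag a mean-based class centroid, while a robust aggregate such as the median barely moves. The feature values are hypothetical, chosen only to show the effect.

```python
import statistics

# Hypothetical 1-D feature values for one class; the last two entries
# are poisoned points injected to drag the learned centroid.
clean = [1.0, 1.1, 0.9, 1.05, 0.95]
poisoned = clean + [10.0, 12.0]

mean_clean = statistics.mean(clean)            # centroid on clean data
mean_poisoned = statistics.mean(poisoned)      # badly skewed by the attack
median_poisoned = statistics.median(poisoned)  # robust aggregate stays put

print(mean_clean, round(mean_poisoned, 2), median_poisoned)
```

Robust statistics like the median (or trimmed means) are one standard building block for training procedures that tolerate a bounded fraction of poisoned data.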
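For the privacy-preserving direction, a minimal sketch of the Laplace mechanism, the basic primitive of differential privacy, is shown below. The function name and parameters are illustrative, not from a specific library: a query answer is released with noise scaled to `sensitivity / epsilon`, so smaller epsilon (stronger privacy) means more noise.

```python
import math
import random

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release true_value with epsilon-differential privacy by adding
    Laplace(sensitivity / epsilon) noise."""
    rng = rng or random.Random()
    scale = sensitivity / epsilon
    u = rng.random() - 0.5  # uniform in [-0.5, 0.5)
    # Inverse-CDF sampling of the Laplace distribution
    noise = -scale * (1.0 if u >= 0 else -1.0) * math.log(1 - 2 * abs(u))
    return true_value + noise

# Example: a count query (sensitivity 1, since one person changes the
# count by at most 1) released with a modest privacy budget.
true_count = 42
noisy = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5,
                          rng=random.Random(0))
print(round(noisy, 2))
```

In federated settings the same idea is applied to model updates rather than raw counts, so no participant's data leaves their device unperturbed.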
Selected Publications
- IEEE SVCC 2025: RTOS Security Risks in Telecommunications Hardware
- ASME 2016: Detecting Malicious Defects in 3D Printing Process Using Machine Learning and Image Classification
- Foundational Research: Game Theory based Cyber-Insurance to Cover Potential Loss from Mobile Malware Exploitation
- Springer Nature 2025: Beyond Normality: Rethinking Behavioral Biometric Data