International Journal of Innovative Research in Engineering & Multidisciplinary Physical Sciences
E-ISSN: 2349-7300 | Impact Factor: 9.907

A Widely Indexed Open Access Peer Reviewed Online Scholarly International Journal


An Integrated Approach for Detecting and Addressing Security Vulnerabilities in Machine Learning Models

Authors: Rahul Roy Devarakonda

Country: India



Abstract: The extensive use of machine learning (ML) models across various industries poses significant security risks as these models continue to evolve. Adversarial attacks, data poisoning, and model inversion are methods by which attackers exploit flaws in machine learning models, which can lead to degraded performance and potential data breaches. These dynamic threats are challenging for traditional security systems to handle; therefore, an integrated approach to vulnerability detection and mitigation is required. To enhance the security of ML models, this study proposes a comprehensive framework that combines anomaly detection, adversarial robustness techniques, and secure data management. The proposed strategy employs automatic de-identification techniques to safeguard private data and prevent unauthorized data extraction. Furthermore, we incorporate intrusion detection technologies powered by deep learning to spot unusual activity in real time, guaranteeing proactive threat prevention. Through the use of reinforcement learning and hybrid program analysis, our system improves resistance to evolving attack vectors. Additionally, to ensure adherence to security best practices, we implement an audit-driven security assessment that tracks vulnerabilities from model training to deployment. According to experimental results, our method preserves model performance and interpretability while drastically lowering attack success rates. To enhance defenses against emerging cyber threats, this study highlights the importance of integrating AI-driven security measures into machine learning (ML) workflows.
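To make the adversarial-attack threat model concrete, the sketch below illustrates a generic FGSM-style (fast gradient sign method) perturbation against a simple logistic-regression model, together with a naive anomaly score on the perturbed input. This is an illustrative assumption, not the paper's actual framework; all function names and parameters here are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, epsilon):
    """Shift x in the direction that increases the log-loss (sign of the gradient)."""
    p = sigmoid(w @ x + b)            # model's predicted probability
    grad_x = (p - y_true) * w         # d(log-loss)/dx for logistic regression
    return x + epsilon * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=4)                # toy model weights
b = 0.1
x = rng.normal(size=4)                # clean input
y = 1.0                               # true label

x_adv = fgsm_perturb(x, w, b, y, epsilon=0.5)

# A naive anomaly score: L-infinity distance from the original input.
# FGSM moves every feature by exactly epsilon, so a detector thresholding
# on this distance would flag the perturbed sample.
anomaly_score = float(np.max(np.abs(x_adv - x)))
print(round(anomaly_score, 2))  # prints 0.5 (each feature shifted by epsilon)
```

A defense such as adversarial training would feed `x_adv` back into the training loop; the anomaly score shows why input-distance monitoring is one of the simplest detection signals.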

Keywords: Machine Learning Security, Adversarial Attacks, Data Poisoning, Privacy-Preserving ML, Secure Model Deployment, Anomaly Detection, Explainable AI (XAI)


Paper Id: 232294

Published On: 2014-01-04

Published In: Volume 2, Issue 1, January-February 2014
