Threat Detection Strategies in Artificial Intelligence Algorithms: A Modern Approach to Smart Security
Abstract
The rapid adoption of AI in mission-critical systems has raised the specter of cybersecurity threats specific to the inner workings of AI algorithms. This paper explores this emerging threat vector and assesses strategies for detecting attacks against AI systems and establishing trust in them. We argue that a fusion of machine learning with real-time monitoring will improve the effectiveness of attack detection. The research is guided by the following question: which detection strategies should be developed to counter emerging AI-specific cyber reconnaissance and attacks? To that end, this paper presents a review of the state-of-the-art literature on AI-specific cybersecurity threats such as data poisoning, adversarial attacks, and model inference vulnerabilities. We provide a critical review of state-of-the-art detection methods, focusing on hybrid models that combine rule-based systems with machine learning techniques, including reinforcement learning variants. As a supplement to the review, we develop a technical case study that solves a concrete threat detection problem with an end-to-end solution built around a new detection engine (using TensorFlow and the Elastic Stack) for real-time threat detection and response. Experimental results show that our framework reaches an accuracy of 98.7% under realistic testing conditions, outperforming naive baselines. The work also describes the main remaining challenges, such as limiting false positives and keeping pace with dynamic attack vectors. With these constraints in mind, we advocate for multi-layered security solutions, coupled with cross-disciplinary research, to strengthen AI systems. This work contributes both theoretical integration and empirical evidence to the field of AI cybersecurity.
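To make the hybrid detection approach described above concrete, the following is a minimal sketch of how a rule-based layer and an ML scoring layer could be combined and wired into the Elastic Stack for real-time alerting. The feature layout, rule predicates, model file, thresholds, and index name are illustrative assumptions for exposition, not the paper's actual engine or configuration.

```python
# Minimal sketch of a hybrid rule-based + ML detection step.
# Assumptions (not from the paper): a pre-trained Keras binary classifier
# saved as "detector.keras", a local Elasticsearch node, and example rules.
import numpy as np
import tensorflow as tf
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")           # assumed Elastic Stack endpoint
model = tf.keras.models.load_model("detector.keras")  # assumed pre-trained anomaly classifier

RULES = [
    # (description, predicate over the raw event dict) -- illustrative rules only
    ("excessive failed logins", lambda e: e.get("failed_logins", 0) > 10),
    ("oversized payload",       lambda e: e.get("payload_bytes", 0) > 1_000_000),
]

def detect(event: dict, features: np.ndarray, threshold: float = 0.5) -> dict:
    """Combine rule hits with the model's anomaly score for a single event."""
    rule_hits = [name for name, pred in RULES if pred(event)]
    score = float(model.predict(features[None, :], verbose=0)[0, 0])
    verdict = {
        "event": event,
        "rule_hits": rule_hits,
        "ml_score": score,
        "is_threat": bool(rule_hits) or score >= threshold,
    }
    # Index the verdict so Kibana dashboards and alerting rules can react in near real time.
    es.index(index="ai-threat-alerts", document=verdict)
    return verdict
```

In this layered design, the rule layer catches known, high-confidence patterns cheaply, while the ML score flags novel or subtle deviations; either signal alone is enough to raise an alert.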
License

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.