7 Ways AI and ML Are Helping and Hurting Cybersecurity
In the right hands, artificial intelligence and machine learning can enrich our cyber defenses. In the wrong hands, they can create significant harm.
Artificial intelligence (AI) and machine learning (ML) are now part of our everyday lives, and this includes cybersecurity. In the right hands, AI/ML can identify vulnerabilities and reduce incident response time. But in cybercriminals’ hands, they can create significant harm.
Here are seven positive and seven negative ways AI/ML is impacting cybersecurity.
7 Positive Impacts of AI/ML in Cybersecurity
- Fraud and Anomaly Detection: This is the most common way AI tools come to the rescue in cybersecurity. Composite AI fraud-detection engines show outstanding results in recognizing complicated scam patterns, and their advanced analytics dashboards provide comprehensive incident details. Fraud detection is an extremely important area within the broader field of anomaly detection.
- Email Spam Filters: Spam filters apply defensive rules and learned patterns to flag messages containing suspect words, protecting email users and reducing the time they spend sorting through unwanted correspondence.
- Botnet Detection: Supervised and unsupervised ML algorithms not only facilitate detection but also help prevent sophisticated bot attacks. They also identify user behavior patterns to surface previously undetected attacks with a low false-positive rate.
- Vulnerability Management: It can be difficult to manage vulnerabilities (manually or with technology tools), but AI systems make it easier. AI tools look for potential vulnerabilities by analyzing baseline user behavior, endpoints, servers, and even discussions on the Dark Web to identify code vulnerabilities and predict attacks.
- Anti-malware: AI helps antivirus software distinguish benign from malicious files, making it possible to identify new forms of malware even if they have never been seen before. Completely replacing traditional techniques with AI-based ones can speed detection but also increases false positives; combining traditional methods with AI yields markedly better detection rates than either approach alone.
- Data-Leak Prevention: AI helps identify specific data types in text and non-text documents. Trainable classifiers can be taught to detect different sensitive information types. These AI approaches can search data in images, voice records, or video using appropriate recognition algorithms.
- SIEM and SOAR: ML enhances security information and event management (SIEM) and security orchestration, automation, and response (SOAR) tools by automating data collection and intelligence gathering, detecting suspicious behavior patterns, and triggering the appropriate response based on the input.
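The spam-filtering idea above — scoring a message by the words it contains — can be sketched with a tiny naive Bayes classifier. Everything here (the toy corpus, the word probabilities) is invented for illustration; a real filter trains on millions of labeled messages and many more features than bare words.

```python
import math
from collections import Counter

def train(messages):
    """messages: list of (text, is_spam). Returns per-class word counts and message totals."""
    counts = {True: Counter(), False: Counter()}
    totals = {True: 0, False: 0}
    for text, is_spam in messages:
        for word in text.lower().split():
            counts[is_spam][word] += 1
        totals[is_spam] += 1
    return counts, totals

def spam_probability(text, counts, totals):
    """Naive Bayes with Laplace smoothing: estimate P(spam | words in text)."""
    vocab = set(counts[True]) | set(counts[False])
    n_spam = sum(counts[True].values())
    n_ham = sum(counts[False].values())
    # Start from the (smoothed) class priors, then add per-word evidence.
    log_odds = math.log((totals[True] + 1) / (totals[False] + 1))
    for word in set(text.lower().split()):
        p_word_spam = (counts[True][word] + 1) / (n_spam + len(vocab))
        p_word_ham = (counts[False][word] + 1) / (n_ham + len(vocab))
        log_odds += math.log(p_word_spam / p_word_ham)
    return 1 / (1 + math.exp(-log_odds))

corpus = [
    ("free prize claim now", True),
    ("win free money now", True),
    ("meeting agenda for monday", False),
    ("project status report attached", False),
]
counts, totals = train(corpus)
print(spam_probability("claim your free prize", counts, totals))   # well above 0.5
print(spam_probability("monday status meeting", counts, totals))   # well below 0.5
```

The point of the sketch: the filter learns which words shift the odds toward spam rather than relying on a hand-written blocklist, which is what lets ML-based filters adapt to new spam campaigns.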
AI/ML is used in network traffic analysis, intrusion detection systems, intrusion prevention systems, secure access service edge, user and entity behavior analytics, and most technology domains described in Gartner’s Impact Radar for Security. In fact, it’s hard to imagine a modern security tool without some kind of AI/ML magic in it.
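The anomaly-detection principle behind many of these tools — flag observations that sit far from a learned baseline — can be shown in a few lines. This is a deliberately crude statistical stand-in (the traffic figures and the three-sigma threshold are assumptions, not anyone's production settings); real systems model many signals at once.

```python
import statistics

def flag_anomalies(baseline, observations, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the baseline mean."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return [x for x in observations if abs(x - mean) / stdev > threshold]

# Baseline: requests per minute during normal operation (illustrative numbers).
normal_traffic = [102, 98, 110, 95, 105, 99, 101, 97]
# New observations: one value is a suspected traffic spike.
print(flag_anomalies(normal_traffic, [103, 96, 640, 108]))  # flags only 640
```

The same pattern — learn "normal," then score deviations — underlies fraud engines, UEBA, and network traffic analysis, just with far richer models than a mean and a standard deviation.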
7 Negative Impacts of AI/ML in Cybersecurity
- Data Gathering: Through social engineering and other techniques, ML is used for better victim profiling, and cybercriminals leverage this information to accelerate attacks. For example, in 2018, WordPress websites experienced massive ML-based botnet infections that granted hackers access to users’ personal information.
- Ransomware: Ransomware is experiencing an unfortunate renaissance. Examples of criminal success stories are numerous; one of the nastiest incidents led to Colonial Pipeline’s six-day shutdown and $4.4 million ransom payment.
- Spam, Phishing, and Spear-Phishing: ML algorithms can create fake messages that look like real ones and aim to steal user credentials. In a Black Hat presentation, John Seymour and Philip Tully detailed how an ML algorithm produced viral tweets with fake phishing links that proved four times more effective than human-created phishing messages.
- Deepfakes: In voice phishing, scammers use ML-generated deepfake audio to create more successful attacks. Modern algorithms such as Baidu’s “Deep Voice” require only a few seconds of someone’s voice to reproduce their speech, accent, and tone.
- Malware: ML can conceal malware by monitoring node and endpoint behavior and building patterns that mimic legitimate network traffic on a victim’s network. Malware can also incorporate a self-destruct mechanism that amplifies an attack’s speed. Because the algorithms are trained to extract data faster than a human could, such attacks are much harder to prevent.
- Passwords and CAPTCHAs: Neural-network-powered software can break CAPTCHA human-verification systems with ease. ML also enables cybercriminals to analyze vast password data sets and generate better-targeted password guesses. PassGAN, for example, uses an ML algorithm to guess passwords more accurately than popular password-cracking tools built on traditional techniques.
- Attacking AI/ML Itself: Abusing algorithms that work at the core of healthcare, military, and other high-value sectors could lead to disaster. Berryville Institute of Machine Learning’s Architectural Risk Analysis of Machine Learning Systems helps analyze taxonomies of known attacks on ML and performs an architectural risk analysis of ML algorithms. Security engineers must learn how to secure ML algorithms at every stage of their life cycle.
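The password-guessing advantage described above comes from trying *likely* candidates first. A crude stand-in for what a learned model like PassGAN does far more generally — ordering guesses by how common they are in leaked data — can be sketched as follows (the "leak" here is invented for illustration):

```python
from collections import Counter

def guesses_by_frequency(leaked_passwords):
    """Order candidate guesses by observed frequency in leaked data,
    approximating (very loosely) what a learned password model produces."""
    freq = Counter(leaked_passwords)
    return [pw for pw, _ in freq.most_common()]

def attempts_to_crack(target, ordered_guesses):
    """Number of guesses made before the target is found (None if never found)."""
    for i, guess in enumerate(ordered_guesses, start=1):
        if guess == target:
            return i
    return None

leaks = ["123456", "password", "123456", "qwerty", "123456", "password", "letmein"]
ordered = guesses_by_frequency(leaks)
print(ordered)                            # most common passwords first
print(attempts_to_crack("qwerty", ordered))
```

A frequency-ordered attack finds common passwords in a handful of attempts, where brute force over the full character space would need billions — which is why long, random, unique passwords (and multifactor authentication) remain the countermeasure.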
It is easy to understand why AI/ML is gaining so much attention. The only way to battle devious, AI-assisted cyberattacks is to harness AI’s potential for defense. The corporate world should take note of how powerful ML can be at detecting anomalies (for example, in traffic patterns or human errors). With proper countermeasures, possible damage can be prevented or drastically reduced.
Overall, AI/ML has huge value for protecting against cyber threats. Some governments and companies are using or discussing using AI/ML to fight cybercriminals. While the privacy and ethical concerns around AI/ML are legitimate, governments must ensure that AI/ML regulations won’t prevent businesses from using AI/ML for protection. Because, as we all know, cybercriminals do not follow regulations.
DataArt’s Vadim Chakryan, Information Security Officer, and Eugene Kolker, Executive Vice President, Global Enterprise Services & Co-Director, AI/ML Center of Excellence, also contributed to this article.
Andrey joined DataArt in 2016 as Chief Compliance Officer. He has more than 25 years of experience in the IT industry. He began his career as a software developer and has played many roles. He has experience in managing projects, managing programs in the medical device …