Artificial intelligence has been called a “general-purpose technology”. This means that AI, like electricity, computers and the internet before it, is expected to find applications in all aspects of society. Unfortunately for companies looking to protect their IT, this includes cybercrime.
In 2020, a study by European law enforcement agency Europol and security provider Trend Micro identified how cybercriminals are already using AI to make their attacks more effective, and how AI will drive cybercrime in multiple ways in the future.
“Cybercriminals have always been early adopters of the latest technology, and AI is no different,” said Martin Roesler, Trend Micro’s head of forward-looking threat research, when the report was published. “It’s already being used for password guessing, CAPTCHA cracking, and voice cloning, with many more malicious innovations in the works.”
Just as technology leaders need to understand how AI can help their organizations achieve their own goals, understanding how AI supports the sophistication and scale of criminal cyberattacks is critical so they can prepare for them.
How AI is being used for cybercrime today
Cybercriminals are already using AI to improve the effectiveness of traditional cyberattacks. Many applications focus on bypassing the automated defenses that protect IT systems.
One example identified in the Europol report is the use of AI to create malicious emails that can bypass spam filters. In 2015, researchers discovered a system that used “generative grammar” to create a large dataset of email text. These texts are then used to fuzz the anti-spam system, adapting the text until it identifies content that spam filters no longer recognize, the report warns.
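The fuzzing idea described above can be sketched in a few lines. This is a minimal illustration, not any real tool: the filter, its trigger words, and the substitution rules are all hypothetical stand-ins, and a real attacker would query an actual filter as a black box.

```python
import random

# Hypothetical stand-in for a spam filter under test: it flags mail
# containing obvious trigger words.
TRIGGER_WORDS = {"free", "winner", "prize"}

def spam_filter(text: str) -> bool:
    """Return True if the text is classified as spam."""
    return any(word in text.lower().split() for word in TRIGGER_WORDS)

# Simple substitution rules a generative system might explore.
SUBSTITUTIONS = {"free": ["fr3e", "f-r-e-e"], "winner": ["w1nner"], "prize": ["pr1ze"]}

def fuzz_variants(message: str, attempts: int = 20) -> list[str]:
    """Mutate the message and keep variants the filter no longer flags."""
    survivors = []
    for _ in range(attempts):
        words = [random.choice(SUBSTITUTIONS.get(w, [w])) for w in message.lower().split()]
        candidate = " ".join(words)
        if not spam_filter(candidate):
            survivors.append(candidate)
    return survivors

evasive = fuzz_variants("you are a winner claim your free prize")
```

The loop is the essence of the technique: generate variants, test them against the defense, and keep only the ones that slip through.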
Researchers have also demonstrated malware that uses a similar approach against antivirus software, deploying an AI agent to find weaknesses in the software’s malware-detection algorithm.
AI can also support other hacking techniques, such as password guessing. Some tools use AI to analyze large datasets of passwords recovered from public leaks and breaches of major websites and services. This reveals how people change their passwords over time – for example, adding numbers at the end or replacing “a” with “@”.
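A toy sketch of this pattern-learning idea follows. The dataset of (old, new) password pairs is invented for illustration, and real tools use far richer models; the point is simply that common transformation rules can be counted and then replayed against known passwords.

```python
from collections import Counter

# Hypothetical (old, new) password pairs from a leaked dataset.
LEAK = [
    ("sunshine", "sunshine1"),
    ("dragon", "dr@gon"),
    ("monkey", "monkey1"),
    ("letmein", "letmein1"),
]

def classify_change(old: str, new: str) -> str:
    """Label the transformation a user applied when updating a password."""
    if new == old + "1":
        return "append_digit"
    if new == old.replace("a", "@"):
        return "leetspeak_a"
    return "other"

rule_counts = Counter(classify_change(o, n) for o, n in LEAK)
most_common_rule, _ = rule_counts.most_common(1)[0]

def guess_update(password: str) -> str:
    """Apply the most frequently observed rule to a known old password."""
    if most_common_rule == "append_digit":
        return password + "1"
    return password.replace("a", "@")

print(guess_update("hunter"))  # → hunter1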
Work is also underway to use machine learning to break the CAPTCHA tests found on most websites, which verify that a user is human; Europol uncovered evidence of active development on criminal forums in 2020. It is not clear how far this work has progressed, but given sufficient computing power, AI will eventually be able to crack CAPTCHAs, Europol predicts.
AI and social engineering
Other cybercrime applications of AI focus on social engineering – enticing human users to click on malicious links or share sensitive information.
First, cybercriminals use AI to gather information about their targets. This includes identifying all the social media profiles of a specific person, including matching their user photos across platforms.
Once they identify a target, cybercriminals use AI to trick them more effectively. This includes creating fake images, audio, and even videos to trick their targets into believing they are interacting with someone they trust.
A tool identified by Europol performs real-time voice cloning. From just a five-second recording of someone’s voice, hackers can clone it and use it to gain access to services or fool other people. In 2019, the chief executive of a UK-based energy company was tricked into paying £200,000 by scammers using an audio deepfake.
Brazen cybercriminals are even using video deepfakes, which superimpose someone else’s face over their own, in remote IT job interviews to gain access to sensitive IT systems, the FBI warned last month.
In addition to these individual methods, cybercriminals are using AI to automate and streamline their operations, says Bill Conner, CEO of cybersecurity provider SonicWall. Modern cybercriminal campaigns involve a cocktail of malware, cloud-delivered ransomware-as-a-service, and AI-powered targeting.
These sophisticated attacks require AI for testing, automation, and quality assurance, Conner explains. “Without AI, this would not be possible on this scale.”
The future of AI-powered cybercrime
Cybercriminals’ use of AI is expected to increase as the technology becomes more widely available. Experts assume that this will allow them to launch cyber attacks on a much larger scale than is currently possible. For example, criminals will be able to use AI to analyze more information to identify targets and vulnerabilities, and target more victims at once, Europol predicts.
They will also be able to generate more content that they can use to fool people. Large language models such as OpenAI’s GPT-3, which can be used to generate realistic text and other output, have a number of potential uses for cybercriminals. These could include mimicking someone’s writing style or creating chatbots that victims mistake for real people.
AI-assisted software development, which companies are beginning to use, could also be used by hackers. Europol warns that AI-based “no-code” tools that convert natural language into code could lead to a new generation of “script kiddies” with little technical knowledge but ideas and motivation for cybercrime.
Malware itself will become smarter once AI is embedded in it, Europol warns. Future malware could scan the documents on a computer looking for specific information, such as employee data or protected intellectual property.
Ransomware attacks are also expected to be strengthened by AI. AI will not only help ransomware groups find new vulnerabilities and victims, but also help them evade detection for longer by “listening” for the measures companies use to detect intruders in their IT systems.
As AI’s ability to mimic human behavior develops, so will its ability to breach certain biometric security systems, for example those that identify a user by the way they type. It could also simulate realistic user behavior – such as being active during specific hours – so that stolen accounts are not flagged by behavior-based security systems.
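A minimal sketch shows why mimicked behavior defeats such checks. The typing-rhythm profile and thresholds below are invented for illustration; real behavioral biometrics use far more features than average inter-keystroke timing.

```python
import statistics

# Hypothetical profile: the user's typical inter-keystroke intervals (ms).
PROFILE_INTERVALS = [110, 95, 120, 105, 130, 100, 115]
MEAN = statistics.mean(PROFILE_INTERVALS)
STDEV = statistics.stdev(PROFILE_INTERVALS)

def is_flagged(session_intervals: list[float]) -> bool:
    """Flag sessions whose average typing rhythm deviates from the profile."""
    return abs(statistics.mean(session_intervals) - MEAN) > 2 * STDEV

bot_paste = [5, 6, 5, 7, 5]          # scripted input: inhumanly fast
mimicked = [108, 118, 102, 125, 97]  # AI-generated human-like timing

flagged_bot = is_flagged(bot_paste)
flagged_mimic = is_flagged(mimicked)
```

A crude script trips the check, while timing generated to match the learned profile passes it, which is exactly the evasion the report anticipates.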
Eventually, AI will enable cybercriminals to make better use of compromised IoT devices, predicts Todd Wade, an interim CISO and author of the BCS book on cybercrime. These devices, already used to power botnets, will become even more dangerous when coordinated by AI.
How to prepare for AI cybercrime
Protecting against AI-powered cybercrime requires responses at the individual, organizational, and societal levels.
Employees need to be trained to spot new threats like deepfakes, Wade says. “People are used to attacks happening a certain way,” he says. “They’re not used to the one-off, maybe something that just happens to appear in a Zoom call or a WhatsApp message, and are therefore unprepared for when it happens.”
In addition to standard cybersecurity best practices, organizations must adopt AI tools themselves to meet the scale and complexity of future threats. “You’re going to need AI tools just to keep up with the attacks, and if you’re not using those tools to combat this, there’s no way you’re going to be able to keep up,” says Wade.
But the way AI is developed and commercialized also needs to be managed to ensure it cannot be hijacked by cybercriminals. In its report, Europol called on governments to ensure that AI systems comply with “security-by-design” principles and to develop specific data protection frameworks for AI.
Today, many of the AI capabilities discussed above are too expensive or technically too complex for the typical cybercriminal. But that will change as technology advances. Now is the time to prepare for widespread AI-powered cybercrime.