This article looks at AI vulnerabilities through the eyes of Mikko Hyppönen, a cybersecurity expert known for hunting hackers. He warns that AI could “fan the flames of deception” in 2024, from deepfakes to zero-day security exploits.
Who is the hacker hunter?
Mikko Hyppönen, a leading cybersecurity expert from Finland, believes that “AI is changing everything.” He anticipates that the AI revolution will have a greater impact than the internet revolution.
Aged 54, Hyppönen has spent decades on the front line fighting malware and the people who spread it. He previously worked at F-Secure, tracking down the hackers behind some of the internet’s most notorious computer viruses.
Today he is Chief Research Officer at WithSecure – the largest cybersecurity company in Northern Europe – and curates the Malware Museum, an online archive of historic malicious code. He has highlighted the most significant threats AI could pose this year.
Deepfakes – The First AI Vulnerability
In 2023, AI-powered synthetic tools made fake images and videos widespread. According to a study by Onfido, a London-based identity-verification company, fraud attempts using deepfake technology surged by 3,000% over the past year. Financial fraud exploiting deepfakes is not yet widespread, but it is predicted to escalate sharply in the near future. “Things haven’t yet happened on a large scale but will erupt shortly,” Hyppönen stated.
To mitigate the risk, users should build simple verification habits. For instance, when a relative or friend asks for a money transfer or sensitive documents, stay on a video call long enough to confirm the person is real. “It might seem funny, but it holds value,” Hyppönen remarked. “Establishing safety protocols for yourself is cheap yet effective.”
Deepscam – A Broader Threat
Deepscams are related to deepfakes but represent a broader form of deception. Here “deep” refers to the sheer scale of the fraud, which spans everything from investment scams to emotional manipulation. Thanks to AI, the scale and speed of deception will grow far beyond anything seen before.
“With AI’s assistance, malefactors can entice 10,000 victims simultaneously rather than just three or four,” Hyppönen shared.
He gave the example of Airbnb listings. A scammer could use stolen images and convincing AI-written descriptions to lure travelers into booking. Traditionally, producing that material took time; with generative AI, that barrier disappears. A cunning scam built on this AI flaw, run at sufficient scale, could cause severe damage to the global economy.
With tools like Stable Diffusion, DALL-E, and Midjourney, anyone can create an unlimited stream of plausible, highly convincing images of rental properties for Airbnb, making the fakes extremely difficult for others to detect.
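To see why the barrier is so low, consider a minimal sketch, purely illustrative and not anything Hyppönen presented. It assumes the open-source diffusers library, a CUDA GPU, and an illustrative Stable Diffusion checkpoint; a short loop is enough to mass-produce distinct, plausible interior photos from a single prompt.

```python
# Minimal sketch of the scale problem: a short loop can produce hundreds of
# distinct, plausible "rental property" photos. Assumes the Hugging Face
# diffusers library, a CUDA GPU, and an illustrative Stable Diffusion
# checkpoint; shown only to illustrate how low the barrier is.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",  # illustrative model ID
    torch_dtype=torch.float16,
).to("cuda")

prompt = "bright scandinavian apartment living room, wide angle, photo"
for i in range(100):  # each iteration yields a new, unique-looking image
    image = pipe(prompt).images[0]
    image.save(f"listing_{i:03d}.png")
```

Because each run samples fresh noise, no two outputs are identical, which is exactly what makes reverse-image-search-based detection so hard.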
The Hacker Hunter on Malware
Hyppönen said his research team has already discovered several AI programs capable of autonomously writing malicious code, uploaded to open platforms like GitHub. Using OpenAI’s GPT API, for instance, a hacker can generate malicious code from just a few lines of description.
Companies like OpenAI typically blacklist accounts that ask ChatGPT or the GPT APIs to write malicious code. But because many large language models (LLMs) are open source, an entire model can be downloaded and used for any purpose. “You could download an entire LLM and run it locally on a private server. At that point, nobody can blacklist you. This is another AI vulnerability, a flaw of open-source programs,” Hyppönen explained.
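To illustrate how little friction is involved, here is a minimal sketch, assuming the Hugging Face transformers library and an illustrative small open-weight checkpoint. Once the weights are cached locally, no hosted API is involved, which is precisely why there is no account left to blacklist.

```python
# Minimal sketch: running an open-weight LLM entirely on local hardware.
# Assumes the Hugging Face transformers library; the model name is an
# illustrative small open-weight checkpoint. After the first download the
# model runs offline, so there is no provider account to blacklist.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # illustrative checkpoint
)

out = generator(
    "Explain in one sentence what a zero-day vulnerability is.",
    max_new_tokens=60,
    do_sample=False,
)
print(out[0]["generated_text"])
```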
Exploiting Zero-Day Vulnerabilities
Zero-day vulnerabilities (flaws that have not yet been discovered or patched) are still regularly found across platforms. AI, however, can dramatically accelerate the search for these system flaws.
“It’s fantastic when you can use AI to discover a system flaw early. But it’s also terrifying if hackers can do it faster than you. For now everything is under control, but that threat could become reality in a short time,” the Finnish hacker hunter commented.
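The defensive side of that race can be sketched in a few lines. The example below is purely illustrative, not WithSecure’s method: it asks a small, locally run model (the same kind of setup as in the previous sketch) to review a deliberately flawed function. Real vulnerability research pairs such reviews with fuzzing, static analysis, and manual triage, and a model this small will miss a great deal.

```python
# Toy sketch of the defensive use described above: asking a locally run
# language model to review source code for likely flaws. Purely illustrative;
# the model name is an assumed small open-weight checkpoint.
from transformers import pipeline

reviewer = pipeline(
    "text-generation",
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # illustrative checkpoint
)

SNIPPET = '''
def read_user_file(base_dir, filename):
    # filename is not sanitized, so "../../etc/passwd" escapes base_dir
    path = base_dir + "/" + filename
    return open(path).read()
'''

prompt = (
    "You are a security code reviewer. Identify any vulnerabilities in this "
    "Python function and suggest a fix:\n" + SNIPPET
)
review = reviewer(prompt, max_new_tokens=200, do_sample=False)
print(review[0]["generated_text"])
```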
He cited a thesis by a WithSecure intern that demonstrates the threat. With a few lines of code on a Windows 11 computer and an AI tool, the student fully automated the process of scanning for vulnerabilities, escalating privileges to local administrator, and then taking control of other computers on the same network, all without the approval of an admin account. Hyppönen declined to disclose the specifics of the thesis, however, as the research is still under way.
The Perilous Path to AGI
In addition to the four AI vulnerabilities above, Hyppönen also worries about cybersecurity once artificial general intelligence (AGI) arrives. AGI describes machines or systems capable of thinking like humans; the term was introduced by American physicist Mark Gubrud in 1997 during discussions on automation.
The hacker hunter recalls a rule about IoT security he first stated in 2016, now known as Hyppönen’s Law: whenever a device is described as “smart,” it is vulnerable to attack. In other words, every smart technology humans use can be hacked. Applied to super-intelligent machines, that law could spell serious trouble for humanity.
“I think we will be only the second smartest on the planet. AGI won’t appear in 2024, but at some point in my lifetime it will happen,” he predicted.
Other experts have previously made similar predictions about the pace of AGI development. “I used to think it would take 20 to 50 years for us to achieve AGI, but now things are changing so quickly. Our problem is finding ways to control them,” Geoffrey Hinton, winner of the Turing Award, the highest honor in computer science, told CBS News in March 2023.
According to Hyppönen, humans should keep AI models under control by tightly aligning their goals with our needs. “The things we are building must have an understanding of people and share long-term benefits with people,” he said. “AI brings greater advantages than anything before it, but so do the disadvantages.”
Hyppönen has laid out the main threats posed by advances in artificial intelligence and how they could be used to cause harm. Deepfakes and deepscams are both forms of AI-generated fakery, from images to videos, that can be used to defraud and harm.
A more serious AI vulnerability is software that can generate malicious code on its own. AI has opened the door to automating malware creation, making detection and prevention more difficult. Combined with zero-day exploitation, AI’s ability to find system flaws faster creates even graver cybersecurity risks.
These threats demand special attention from the cybersecurity community and researchers, who must develop new security measures to counter the negative effects of AI vulnerabilities and the rapid development of artificial intelligence in the field of security and risk management.
Conclusion
Intelligence and caution in the use of artificial intelligence will be the key to steering it in the right direction, so that we can take full advantage of the potential it brings. Are you interested in cybersecurity and artificial intelligence? Do you want to understand the AI landscape? The hacker hunter’s lessons are clear: in a world increasingly dependent on technology, protecting information and preventing damaging AI vulnerabilities is extremely important.
Tinasoft not only provides modern security solutions but also partners with businesses to protect their data and network systems. With solutions ranging from security monitoring to multi-layered defense, we are committed to bringing safety and reliability to your business environment.