Artificial intelligence (AI) has been transforming multiple sectors in the past few years, and cybersecurity is no exception. As AI continues to evolve, it brings both significant advancements and growing concerns. A recent report from Gartner predicts that in the next two years, AI agents will drastically reduce the time it takes for cybercriminals to hijack exposed accounts, increasing the risk of Account Takeovers (ATOs). But there’s more to this story—AI’s role in social engineering attacks, like deepfakes, is expected to rise, forcing businesses to adapt quickly to new threats. In this article, we dive into Gartner’s insights and the broader implications of AI’s impact on cybersecurity.
Accelerating Account Takeovers with AI
Gartner’s analysis highlights a sobering new reality for cybersecurity: AI agents will cut the time it takes threat actors to hijack exposed accounts by a staggering 50% over the next two years. AI’s ability to automate complex attack steps, such as deepfake-driven social engineering and credential compromise, will make these breaches faster to execute and harder to detect.
The technology fueling this shift is known as Agentic AI, which is being hailed as the next frontier after generative AI (GenAI). Unlike its predecessor, which focuses on creating content like text and images, Agentic AI can make decisions and adapt to dynamic environments without human intervention. For cybercriminals, this means that automated attacks can evolve rapidly, allowing them to exploit exposed accounts with greater precision.
A Growing Concern: Account Takeovers (ATOs)
ATOs are already one of the most pressing issues in cybersecurity. As the use of malicious bots and infostealers rises, ATOs have become a major headache for both corporations and their customers. In fact, last year, a report from Abnormal Security found that ATOs had surpassed ransomware as the top security concern for businesses. Around 83% of organizations reported at least one ATO incident within the previous year.
The consequences of such breaches can be devastating. Large-scale fraud, financial theft, and corporate data compromises are all potential outcomes, making it critical for businesses to find ways to mitigate these threats. Gartner suggests that businesses should focus on transitioning to passwordless, phishing-resistant multi-factor authentication (MFA) to combat the increasing sophistication of these attacks.
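To make Gartner’s recommendation concrete, here is a minimal browser-side sketch of what phishing-resistant, passwordless registration with a passkey (WebAuthn) can look like. The endpoints and helper names are hypothetical placeholders, not part of any specific product; in a real deployment the challenge must be generated server-side and the attestation verified there.

```typescript
// Minimal sketch of passkey (WebAuthn) registration in the browser.
// The /webauthn/* endpoints are hypothetical; error handling and
// server-side attestation verification are omitted for brevity.

async function registerPasskey(username: string): Promise<void> {
  // The challenge must come from the server, never be generated client-side.
  const res = await fetch("/webauthn/register/options"); // hypothetical endpoint
  const { challenge, userId } = await res.json();        // assumed base64-encoded

  const credential = await navigator.credentials.create({
    publicKey: {
      challenge: Uint8Array.from(atob(challenge), (c) => c.charCodeAt(0)),
      rp: { name: "Example Corp" }, // relying party: your site
      user: {
        id: Uint8Array.from(atob(userId), (c) => c.charCodeAt(0)),
        name: username,
        displayName: username,
      },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
      authenticatorSelection: {
        residentKey: "required",      // discoverable, multi-device passkey
        userVerification: "required", // biometric or PIN check on the device
      },
    },
  });

  // Send the attestation back for verification and storage; real code
  // would base64url-encode the binary fields before posting.
  await fetch("/webauthn/register/verify", { // hypothetical endpoint
    method: "POST",
    body: JSON.stringify(credential),
  });
}
```

The key property is that the private key never leaves the user’s authenticator, so there is no shared secret for an AI-driven phishing kit to harvest.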
Deepfakes: The New Frontier in Social Engineering
One of the most concerning predictions from Gartner involves the rise of deepfake technology in social engineering attacks. By 2028, Gartner predicts that 40% of social engineering attacks will target not just employees, but executives as well, using deepfake audio and video to deceive people on voice and video calls. This adds a new layer of difficulty for organizations to defend against, as deepfakes can manipulate visual and auditory cues to impersonate trusted individuals.
In response, businesses will need to constantly evolve their cybersecurity procedures. Staying aware of the rapidly developing market for deepfake technology and implementing adaptive workflows will be crucial. Gartner also emphasizes the importance of educating employees about the dangers of deepfake-driven social engineering attacks and offering specialized training to help them identify such scams.
What Undercode Says:
The entry of Agentic AI into the world of cybercrime is a game-changer, and it’s clear that AI will both aid and challenge security efforts in the years to come. On one hand, the use of AI by cybercriminals will make account takeovers faster, more automated, and more difficult to detect. This will require organizations to rethink their defense strategies, especially regarding traditional login methods like passwords, which are increasingly vulnerable.
Passwordless authentication, such as multi-device passkeys, is the obvious solution, but it will require widespread user education and incentives for adoption. The technology itself must be made more robust to withstand AI-powered attacks, and businesses need to ensure that employees are well-versed in the risks posed by AI-driven attacks like deepfakes.
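The sign-in side of that passkey flow is what actually delivers the phishing resistance: the browser binds the credential to the site’s origin, so a look-alike domain simply cannot request it. A companion sketch under the same hypothetical endpoints:

```typescript
// Companion sketch: authenticating with an existing passkey.
// Endpoints are hypothetical; the server verifies the returned
// signature against the public key stored at registration.

async function loginWithPasskey(): Promise<boolean> {
  const res = await fetch("/webauthn/login/options"); // hypothetical endpoint
  const { challenge } = await res.json();             // assumed base64-encoded

  const assertion = await navigator.credentials.get({
    publicKey: {
      challenge: Uint8Array.from(atob(challenge), (c) => c.charCodeAt(0)),
      userVerification: "required", // re-check biometrics/PIN at sign-in
    },
  });

  const verify = await fetch("/webauthn/login/verify", { // hypothetical endpoint
    method: "POST",
    body: JSON.stringify(assertion), // real code would base64url-encode binary fields
  });
  return verify.ok;
}
```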
Moreover, it’s not just the tools but also the response times that will matter. AI can speed up the identification of true threats, as seen with ReliaQuest’s claim that AI can process security alerts 20 times faster than traditional methods. This means security teams will be able to act much faster, although they will also need to be prepared for an increasing volume of threats driven by AI.
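As a toy illustration of that triage speed-up, the sketch below ranks alerts by a model-assigned threat score before human review. Everything here is hypothetical, including the scoring service; it reflects the general pattern, not ReliaQuest’s implementation.

```typescript
// Illustrative only: rank alerts by a model score so analysts see the
// likeliest real threats first. scoreAlert is a stand-in for whatever
// ML service you actually run (the URL below is hypothetical).

interface Alert {
  id: string;
  source: string;
  raw: string;
}

// Hypothetical model call returning a 0..1 "likely real threat" score.
async function scoreAlert(alert: Alert): Promise<number> {
  const res = await fetch("https://ml.example.internal/score", {
    method: "POST",
    body: JSON.stringify(alert),
  });
  const { score } = await res.json();
  return score;
}

async function triage(alerts: Alert[], threshold = 0.8): Promise<Alert[]> {
  const scored = await Promise.all(
    alerts.map(async (a) => ({ alert: a, score: await scoreAlert(a) }))
  );
  // Surface only high-confidence threats, highest score first.
  return scored
    .filter((s) => s.score >= threshold)
    .sort((a, b) => b.score - a.score)
    .map((s) => s.alert);
}
```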
On the flip side, organizations must stay ahead of the curve when it comes to deepfakes and other sophisticated social engineering tactics. With AI making it easier to produce convincing fake identities and communications, employees must be thoroughly trained in spotting signs of deepfake attacks. This will require a dynamic and continuous learning approach to cybersecurity training, as these attacks will evolve rapidly.
Ultimately, businesses must be proactive rather than reactive in dealing with the looming threats posed by AI-powered attackers. Security frameworks that can evolve alongside AI technologies will be essential in mitigating the growing risks associated with these emerging threats.
Fact Checker Results:
- The claim that AI agents will accelerate ATOs by 50% within two years is based on credible industry projections, though specific data on this timeline is speculative.
- The push for passwordless, phishing-resistant MFA is aligned with current industry best practices for improving cybersecurity.
References:
Reported By: https://www.infosecurity-magazine.com/news/gartner-agentic-ai-accelerate/