2025-02-28
Cybersecurity researchers from Datadog Security Labs have unveiled a new attack technique named “whoAMI,” which exploits a naming confusion vulnerability within Amazon Web Services (AWS). This technique allows threat actors to execute arbitrary code within an AWS account by publishing a malicious Amazon Machine Image (AMI) with a specific name. The attack could affect thousands of AWS users globally, with about 1% of organizations estimated to be vulnerable. Here’s a breakdown of how the attack works, its potential consequences, and how AWS has responded.
The whoAMI Attack
The “whoAMI” attack takes advantage of a vulnerability within AWS’s AMI catalog, specifically targeting the Community AMI section. AMIs are virtual machine images used to launch Elastic Compute Cloud (EC2) instances, and they can be searched by ID or name through AWS’s API. However, without properly filtering for trusted owners, an attacker can publish a malicious AMI with a name that resembles a legitimate one. When users query AMIs without specifying an owner filter, the malicious AMI can appear as the top result due to its recent creation date, thus allowing the attacker to execute arbitrary code within the victim’s AWS account.
The core of the attack is simple: by naming an AMI similarly to a legitimate one and making it public or sharing it privately, attackers can deceive victims into selecting it. For instance, an attacker can create an AMI named something like “ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-whoAMI,” and when users search for AMIs without the proper owner filter, the malicious one shows up as the most recent option.
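The selection logic behind the attack can be sketched in a few lines of Python. The catalog entries, image IDs, and owner account numbers below are made up for illustration; the point is only that a “newest matching name” lookup with no owner filter will happily return an attacker’s look-alike image.

```python
from datetime import date

# Illustrative AMI catalog entries; image IDs and owner account
# numbers are placeholders, not real AWS data.
images = [
    {"ImageId": "ami-aaaa1111", "OwnerId": "111111111111",  # legitimate publisher
     "Name": "ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20240101",
     "CreationDate": date(2024, 1, 1)},
    {"ImageId": "ami-bbbb2222", "OwnerId": "999999999999",  # attacker's account
     "Name": "ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-whoAMI",
     "CreationDate": date(2025, 2, 1)},
]

def most_recent_match(images, name_pattern, owners=None):
    """Mimic a 'most recent AMI matching a name' lookup.

    With owners=None (no owner filter), the newest matching image wins,
    regardless of who published it -- the flaw whoAMI exploits.
    """
    candidates = [
        img for img in images
        if name_pattern in img["Name"]
        and (owners is None or img["OwnerId"] in owners)
    ]
    return max(candidates, key=lambda img: img["CreationDate"], default=None)

# Without an owner filter, the attacker's newer look-alike is selected.
print(most_recent_match(images, "ubuntu-focal-20.04")["ImageId"])  # ami-bbbb2222
```

Passing an explicit owner list (e.g. `owners={"111111111111"}`) excludes the attacker’s image, which is exactly the filtering the real AWS API supports.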
What Undercode Says: Analyzing the Implications
The whoAMI attack shines a light on a serious flaw in the way AWS handles AMI searches and naming conventions. While the problem originates from a user-side configuration issue, its impact can be severe due to the widespread use of AWS for cloud computing, where thousands of organizations may unknowingly be at risk.
This vulnerability exploits a fundamental assumption made by many AWS users, namely that the AMIs they search for will come from legitimate, trusted sources. If users omit owner filters when searching for AMIs, they may unknowingly select an attacker’s malicious image, thereby compromising their environments.
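A minimal sketch of the safe pattern follows: build the AMI search so it always pins the publisher account. The helper name and the owner ID are illustrative; `describe_images` with `Owners` and `Filters` is the real EC2 API call exposed by boto3.

```python
def safe_ami_query(name_pattern, trusted_owners):
    """Build DescribeImages parameters that pin the publisher account.

    `trusted_owners` must be explicit AWS account IDs (or aliases such
    as "amazon"); omitting it is what lets public look-alikes match.
    """
    return {
        "Owners": list(trusted_owners),
        "Filters": [{"Name": "name", "Values": [name_pattern]}],
    }

# Usage with boto3 (requires AWS credentials; the owner ID below is a
# placeholder for the publisher you actually trust):
#
# import boto3
# ec2 = boto3.client("ec2")
# images = ec2.describe_images(**safe_ami_query(
#     "ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*",
#     ["111111111111"],
# ))["Images"]
```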
Given that AMIs are central to launching EC2 instances, this type of attack provides attackers with a potent vector for gaining access to critical systems. Once the malicious AMI is deployed, the attacker could execute arbitrary code on the victim’s machine, creating significant security risks.
Furthermore, while the responsibility for preventing such an attack lies with AWS customers (as part of the shared responsibility model), AWS has responded swiftly. It has introduced a new feature called Allowed AMIs, which lets account administrators restrict which AMIs can be used to launch instances. This is a key improvement, as it directly addresses the root cause of the attack by adding another layer of control for users. Additionally, the Terraform AWS provider now issues a warning when an AMI search is conducted with “most_recent=true” without an owner filter, which is a great step forward in preventing accidental exploitation.
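On the Terraform side, the provider warning is avoided by pinning the publisher in the `aws_ami` data source. This is a hedged sketch of the standard data-source syntax; the owner account ID and name pattern are placeholders, not a recommendation of a specific publisher.

```hcl
data "aws_ami" "ubuntu" {
  most_recent = true

  # Pinning owners prevents a look-alike from another account from
  # matching; the account ID below is a placeholder for the publisher
  # you actually trust.
  owners = ["111111111111"]

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"]
  }
}
```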
However, these safeguards are only effective if customers adopt them: existing templates and scripts that omit owner filters remain exposed until they are updated.
Fact Checker Results
- Accuracy of the Attack Details: The whoAMI attack’s mechanics are accurate, and the proposed vulnerability is a real risk for users of AWS’s Community AMI catalog.
- AWS Response: AWS has indeed implemented the “Allowed AMIs” feature to mitigate the attack, which has been confirmed by the researchers.
- HashiCorp Update: The Terraform AWS provider update, including the warning for “most_recent=true” searches without an owner filter, was confirmed to be part of version 5.77, with further changes coming in version 6.0.
This article provides an insightful look into a newly discovered attack vector within AWS, shedding light on a vulnerability that can easily be exploited if users are not vigilant in filtering AMI searches.
References:
Reported By: https://securityaffairs.com/174283/breaking-news/whoami-attack-rce-within-aws-account.html