How a Roblox cheat script started a chain reaction that ended inside one of the most critical pieces of developer infrastructure on the web, and what it reveals about the hidden architecture of trust holding the modern software ecosystem together.
There is a detail buried in the Vercel breach that most coverage missed: the attack that compromised infrastructure trusted by millions of developers did not begin with a hacker. It began with a video game cheat.
Sometime in February 2026, an employee at a small AI company called Context AI downloaded a Roblox "auto-farm" script onto what was probably their work laptop. The script ran. A piece of malware called Lumma Stealer executed silently in the background, harvested everything it could find on the machine, and uploaded the results to a command-and-control server. The employee probably noticed nothing.
Ten weeks later, that single infection had crossed an OAuth boundary, impersonated a Vercel employee's Google Workspace account, and landed inside Vercel's internal systems — exposing environment variables, npm tokens, GitHub tokens, and according to BreachForums, partial source code and 580 employee records.
Vercel confirmed the breach on April 19, 2026. The bulletin is still being updated as I write this. The full scope is still being determined.
This article is my attempt to explain exactly what happened — technically, layer by layer — and what it reveals about the way the developer ecosystem thinks (or doesn't think) about trust.
Why Vercel Is Worth Caring About
Quick context before we go deep.
Vercel is not just a hosting provider. It is infrastructure for a significant fraction of the modern web: deployment platform for applications built by millions of developers, primary maintainer of Next.js (one of the most downloaded JavaScript frameworks on the planet), and trusted custodian of production secrets for a large number of companies including, famously, OpenAI, Cursor, Pinterest, and Bose.
When you use Vercel, you are handing it your database credentials, your API keys, your signing secrets, your webhook tokens — everything that makes your production application run. The npm packages Vercel maintains ship to hundreds of millions of installs per week. If those packages were compromised, the cascading damage would be enormous.
The good news: Vercel confirmed the npm supply chain is clean, validated in collaboration with GitHub, Microsoft, npm, and Socket. No tampering with Next.js, Turbopack, or any other Vercel-maintained package.
The bad news: everything else about this story is a lesson in how trust works — and how catastrophically it can fail — in a software ecosystem built on chains of third-party integrations.
Understanding Supply Chain Attacks
Before we walk through what happened, it helps to understand the attack category.
A supply chain attack does not target the victim directly. It targets something the victim trusts. The logic is elementary: large companies invest heavily in defending their own perimeter. They have security teams, intrusion detection, monitoring, hardened infrastructure. Attacking them head-on is expensive and often fails.
But every company also trusts vendors, tools, services, and third-party integrations. Those third parties have their own perimeters — often far less defended. If you can compromise a trusted third party, you inherit whatever access that third party has been granted by its customers. You do not need to break through the wall if you have a key to a side door.
This is why supply chain attacks have become one of the dominant breach categories in recent years. The most famous example is SolarWinds (2020), where attackers compromised a software update system to reach thousands of organizations at once. The pattern has accelerated since then.
The Vercel breach is a textbook illustration — but with a modern twist. The supply chain is not a software build system. It is an OAuth integration between AI productivity tools. And the key to the side door was a token, not a password.
Act I — Lumma Stealer, Explained
To understand how a Roblox script started all of this, you need to understand what Lumma Stealer actually is.
The Business Model of Malware
Lumma Stealer (also known as LummaC2) is not written by a lone hacker in a basement. It is a commercial product: a Malware-as-a-Service (MaaS) operation developed and maintained by a Russian-speaking threat actor who goes by the alias "Shamel" and is tracked by Microsoft's Threat Intelligence division under the designation Storm-2477.
Lumma was sold through subscription tiers ranging from $250 to $1,000 per month, with source code access offered at $20,000. Lower tiers include basic log filtering and download options. Higher tiers add custom data collection configurations, enhanced evasion techniques, and early access to new features. The highest tier — source code access — lets customers build their own derivative and resell it.
In an interview with cybersecurity researcher "g0njxa" in November 2023, Shamel shared that he had "about 400 active clients." He built Lumma into a brand, marketing it with a distinctive bird logo that he called a symbol of "peace, lightness, and tranquility," alongside the slogan "making money with us is just as easy."
This is a business. With customers, support tiers, marketing, and a changelog. The barrier to using it is deliberately minimal — the goal is volume, not sophistication.
What Lumma Does on Your Machine
When Lumma executes on a Windows machine, the first thing it does is fingerprint the environment: OS version, hardware ID, CPU, RAM, screen resolution, system language. This is not curiosity — it is filtering. The malware checks whether it is running inside a sandbox or analysis environment. If it detects a sandbox, it exits without doing anything. This evasion is built in at the core.
Assuming it determines the machine is real, it executes its collection routines. Here is what it actually harvests:
From browsers (Chrome, Edge, Firefox, Brave, Opera, and others):
The core browser data lives in a few key locations. Chrome stores passwords in an SQLite database called Login Data and cookies in another SQLite database called Cookies. The values are encrypted — but here is the catch.
Historically, Chrome used Windows DPAPI (Data Protection API) to encrypt this data. DPAPI encrypts data using a key derived from the user's Windows login credentials. The protection it provides is between users — not between processes. Any process running as the same logged-in Windows user can call CryptUnprotectData() and decrypt the data. This means Lumma, running as the current user (because the current user downloaded and ran it), can decrypt Chrome's entire password and cookie store trivially.
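The weakness is easy to demonstrate. Below is a minimal Python sketch that pulls the encrypted credential blobs out of a copy of Chrome's Login Data file. The table and column names match Chrome's actual schema; the DPAPI decryption step is shown as a comment because it only runs on Windows, and on a DPAPI-era profile it is the only step an attacker needs.

```python
import sqlite3

def extract_login_blobs(login_data_path):
    """Read Chrome's 'Login Data' SQLite file and return the stored
    credentials with their encrypted password blobs. No privilege
    escalation is involved: any process running as the same user can
    read this file (after copying it, since Chrome locks the original
    while running)."""
    conn = sqlite3.connect(login_data_path)
    try:
        rows = conn.execute(
            "SELECT origin_url, username_value, password_value FROM logins"
        ).fetchall()
    finally:
        conn.close()
    return rows

# On a DPAPI-era profile, each password_value blob decrypts with a
# single same-user call -- no password prompt, no second factor:
#   import win32crypt  # pywin32, Windows only
#   plaintext = win32crypt.CryptUnprotectData(blob, None, None, None, 0)[1]
```

The point of the sketch is how short it is: the entire "attack" is a file copy and one SQL query, which is why infostealers can be sold as cheap commodity subscriptions.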
Google introduced App-Bound Encryption in Chrome 127 (July 2024) specifically to address this. The new system routes decryption through a privileged Windows service running as SYSTEM, which verifies that the requesting process is actually Chrome before returning the key. This theoretically prevents user-level malware from decrypting cookies.
App-Bound Encryption shipped on July 30, 2024. Security researchers observed evidence of bypass capabilities as early as September 12, 2024, less than 45 days later.
Beyond browsers:
Lumma Stealer also searches for cryptocurrency wallets and extensions — wallet files, browser extensions, and local keys associated with wallets like MetaMask, Electrum, and Exodus. The target list is not hardcoded: a configuration retrieved from the active C2 server specifies targets across crypto wallets, browsers, VPN configurations, email clients, FTP clients, Telegram session data, SSH keys, .env files, password manager data, and user documents.
The output of a Lumma infection is a structured archive — called a "log" in the infostealer ecosystem — uploaded in real-time to the attacker's C2 infrastructure. A single log can contain hundreds of credentials, dozens of session cookies, and OAuth tokens for every web service the infected user was logged into.
How Lumma Gets Distributed
Lumma Stealer exemplifies the shift toward multi-vector delivery strategies. Its operators are practiced impersonators: they continually rotate malicious domains, exploit ad networks, and leverage legitimate cloud services to evade detection.
In the Context AI employee's case, the delivery vector was a Roblox exploit tool — a game cheat. This is one of the oldest delivery methods in the book, and one of the most effective. People downloading game cheats have typically already decided to ignore security warnings, have already disabled their security tools, and are mentally in "take the risk" mode. The user provides their own justification for bypassing every defense.
The Disruption and the Comeback
Between March 16 and May 16, 2025, Microsoft identified over 394,000 Windows computers globally infected by Lumma. Working with law enforcement and industry partners — including Europol, the FBI, ESET, BitSight, Cloudflare, and others — Microsoft's Digital Crimes Unit filed civil action and seized approximately 2,300 malicious domains that formed the backbone of Lumma's infrastructure.
The disruption was significant. But it did not kill the operation.
Activity began recovering within weeks, and by late 2025 and early 2026, campaigns were again increasing globally. The Context AI employee's machine was infected in February 2026 — roughly nine months after the major takedown. Lumma had rebuilt.
This is the nature of MaaS: the infrastructure can be seized, but the developer, the code, and the affiliate base remain.
Act II — The OAuth Bridge
The Lumma infection gave the attacker a credential dump from the Context AI employee's machine. Among the harvested data were Google Workspace credentials for the employee's work account, credentials for support@context.ai — identified by Hudson Rock's analysis as a core team account — and API keys for Supabase, Datadog, and Authkit.
OAuth: The Protocol That Replaced Passwords (And Created New Problems)
OAuth was designed to solve a real problem: how do you let Application B access your data stored in Service A without giving Application B your Service A password?
The OAuth 2.0 flow issues two tokens after the initial authorization:
An access token: short-lived (typically 60 minutes), used directly in API calls
A refresh token: long-lived, used to get new access tokens without re-prompting the user
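Concretely, the refresh step is a single HTTP POST with no user involvement at all. A minimal sketch of the form body an application (or an attacker) sends to Google's token endpoint; the client credentials and token values here are placeholders:

```python
from urllib.parse import urlencode

GOOGLE_TOKEN_ENDPOINT = "https://oauth2.googleapis.com/token"

def build_refresh_request(client_id, client_secret, refresh_token):
    """Build the form body for the OAuth 2.0 refresh_token grant.
    Note what is absent: no user password, no MFA challenge, no
    device identity. The refresh token alone is the credential."""
    return {
        "grant_type": "refresh_token",
        "refresh_token": refresh_token,
        "client_id": client_id,
        "client_secret": client_secret,
    }

body = urlencode(build_refresh_request("app-id", "app-secret", "1//placeholder-token"))
# POSTing `body` to GOOGLE_TOKEN_ENDPOINT returns a fresh access token
# for as long as the refresh token remains unrevoked.
```

Everything the server checks is inside that body, which is exactly why possession of the token is the whole game.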
Why Refresh Tokens Are the Real Attack Surface
Refresh tokens are bearer credentials — whoever possesses the token can use it with no additional identity verification. After the initial OAuth consent flow completes with MFA, the resulting refresh token enables ongoing access without re-authentication.
"Bearer credential" means exactly what it sounds like: possession equals authorization. There is no second factor, no device binding, no IP verification. If you have the token, you are the authorized party as far as the resource server is concerned.
This has a critical implication: MFA does not protect against refresh token theft. The MFA happened when the user originally authorized the app. The resulting token represents that past authorization. An attacker who steals the token does not need to re-authenticate, because the authentication already happened — and the proof of it is the token itself.
Most SaaS apps do not automatically invalidate existing refresh tokens when passwords are reset or MFA settings change. Attackers who have stolen refresh tokens retain access until those specific tokens are explicitly revoked, even after standard incident response actions.
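"Explicitly revoked" is a concrete, separate operation, not a side effect of a password reset. For Google-issued tokens it is one POST to the revocation endpoint, in the style of RFC 7009; the token value below is a placeholder:

```python
import urllib.request
from urllib.parse import urlencode

GOOGLE_REVOKE_ENDPOINT = "https://oauth2.googleapis.com/revoke"

def build_revoke_request(token):
    """RFC 7009-style token revocation: POST the token itself.
    Revoking a refresh token invalidates it and the access tokens
    derived from it. Nothing else in a standard incident response --
    not a password reset, not an MFA change -- reliably does this."""
    return urllib.request.Request(
        GOOGLE_REVOKE_ENDPOINT,
        data=urlencode({"token": token}).encode(),
        headers={"Content-Type": "application/x-www-form-urlencoded"},
    )

req = build_revoke_request("1//compromised-refresh-token")
# urllib.request.urlopen(req) would perform the actual revocation.
```

Incident response playbooks that stop at "reset passwords, re-enroll MFA" skip this call, which is why stolen tokens often outlive the cleanup.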
Context AI's OAuth Position
Context AI is an AI Office Suite — it connects to your Google Workspace, reads your documents, emails, and calendar, and surfaces AI-powered assistance. To do this, it stores refresh tokens for every user who authorizes it. Those tokens live on Context AI's servers.
The attacker, now holding credentials for the support@context.ai account, accessed Context AI's systems. Inside those systems were refresh tokens for every connected Google Workspace account.
Vercel is not a Context customer, but it appears at least one Vercel employee signed up for the AI Office Suite using their Vercel enterprise account and granted "Allow All" permissions. The critical phrase: "Allow All." This is the maximum scope — an OAuth token with this permission level gives the holder essentially complete access to the connected Google account.
The attacker found this token in the Context AI database. They used it. Google's systems saw a valid, legitimately issued refresh token. No alarm fired.
Act III — Inside Vercel
From the Vercel employee's Google Workspace account, the attacker had a foothold into corporate Google infrastructure. Depending on how single sign-on was configured, this could extend to any internal tool that authenticates via Google.
Vercel assessed the attacker as highly sophisticated based on their operational velocity and detailed understanding of Vercel's systems. They moved quickly and precisely. They knew what to look for.
What they found: environment variables.
The Sensitive vs. Non-Sensitive Problem
Vercel provides two storage modes for environment variables:
Sensitive variables are encrypted at rest using a mechanism that prevents the value from being read after storage — not by Vercel's systems, not by users with dashboard access, not by an attacker with internal access.
Non-sensitive variables are stored in a readable form. Systems with appropriate internal access can retrieve the plaintext value. This enables features like showing you the value of a variable in the dashboard — and it also means an attacker with internal access can read them.
The attacker accessed non-sensitive variables. According to the ITECS analysis, the compromised data included environment variables, npm tokens, GitHub tokens, and 580 employee records.
Immediately after the incident, Vercel flipped the default: new environment variables now default to sensitive: on. This is the right call, and it should have been the default from the beginning.
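You can triage your own projects the same way an attacker would enumerate them: list each project's variables and flag anything not stored as sensitive. A sketch against Vercel's REST API; the response shape and `type` values assumed here follow the `/v9/projects/{id}/env` endpoint, so verify field names against current Vercel documentation before relying on this:

```python
import json
import urllib.request

def fetch_project_env(token, project_id):
    """Call Vercel's env-listing endpoint for one project.
    Endpoint path and auth scheme per Vercel's REST API."""
    req = urllib.request.Request(
        f"https://api.vercel.com/v9/projects/{project_id}/env",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def flag_readable(env_response):
    """Return keys of variables NOT stored as 'sensitive' -- i.e.
    values that systems with internal access (and therefore an
    attacker with internal access) can read back in plaintext."""
    return [
        e["key"]
        for e in env_response.get("envs", [])
        if e.get("type") != "sensitive"
    ]
```

Anything `flag_readable` returns that would cause real damage if leaked is a candidate for rotation, not just re-saving.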
The Full Kill Chain, Assembled
February 2026
├── Context AI employee downloads Roblox exploit tool on work/personal laptop
└── Lumma Stealer (MaaS, Storm-2477, ~$250-$1,000/month rental) executes
Lumma harvests:
├── Browser cookies and saved passwords (DPAPI/App-Bound Encryption bypass)
├── Google Workspace credentials (work email, `support@context.ai` account)
├── API keys: Supabase, Datadog, Authkit
└── OAuth tokens stored by desktop/browser applications
Attacker receives credential log. Pattern-matching reveals:
└── `support@context.ai` = elevated access inside Context AI's Vercel team
Attacker uses support@context.ai account:
├── Accesses Context AI's AWS environment (March 2026 per Context AI's disclosure)
└── Extracts OAuth refresh tokens for connected Google Workspace users
Inside the token harvest:
└── Refresh token for Vercel employee's Google account ("Allow All" scope)
Attacker presents stolen refresh token to Google's API:
├── Google issues fresh access token (no MFA required — token IS the proof of auth)
└── Attacker has full access to Vercel employee's Google Workspace
From employee's Google Workspace:
├── SSO pivot into Vercel internal tools
├── Access to environments and non-sensitive environment variables
└── Exfiltration of credentials, npm tokens, GitHub tokens, employee records
April 19, 2026: Vercel discloses incident
April 20, 2026: BreachForums listing appears ($2M asking price, ShinyHunters persona — group denied involvement)
April 21, 2026: This article
Four hops. No zero-days. No targeted phishing campaign aimed at Vercel. No sophisticated custom tooling. Just commodity malware, an AI tool's OAuth integrations, and a token that bypassed every authentication control because it already represented completed authentication.
The Part About AI Tools Specifically
Here is the uncomfortable structural truth that this breach exposes.
Every AI productivity tool you connect to your Google Workspace asks for broad OAuth scopes because broad access is the product. An AI email assistant needs to read your email. An AI meeting summarizer needs your calendar. An AI document tool needs your Drive. You cannot build the product with narrow scopes. The scopes are the product.
This creates a structural problem. The more useful an AI tool is — the deeper it integrates with your work context — the broader the access it requires. And the broader the access, the more catastrophic the blast radius when the tool's OAuth tokens are compromised.
There is no clean solution to this. You can minimize it (scope auditing, access revocation, purpose-specific accounts for AI tools) but you cannot eliminate it without sacrificing the integration value that makes the tool useful.
What you can do is understand the trust relationship explicitly. When you click "Allow" on a Google permission screen for an AI tool, you are not just granting access to your documents. You are granting access to whatever attacker gains control of that tool's server. That is the accurate framing.
The Salesloft-Drift Precedent Nobody Talked About
The Vercel breach did not emerge in a vacuum. There is a direct precedent from 2025 that the developer community largely missed.
In August 2025, threat actor UNC6395 used stolen OAuth tokens from Drift's Salesforce integration to access customer environments across more than 700 organizations. The attacker needed no exploit and no phishing. They accessed Salesloft's GitHub in March 2025, then exploited Drift integration OAuth tokens to access Salesforce instances across hundreds of customer organizations. The full exfiltration chain: Compromised GitHub account → Drift's AWS environment → Extracted OAuth tokens → Custom Python scripts queried customer Salesforce instances → Exfiltrated contacts, opportunities, AWS keys, Snowflake tokens.
One integration. Seven hundred organizations breached. The pattern: compromise a SaaS tool that has been granted broad access by many customers, extract the stored OAuth tokens, use each one to access the corresponding customer environment.
The Vercel breach is this pattern applied to the developer tooling layer. Context AI is to Vercel what Drift was to those 700 Salesforce organizations.
What You Should Actually Do
Enough analysis. Here is the actionable part, ordered by priority.
If You Have Projects on Vercel (Do This Today)
1. Open your environment variables overview
Go to vercel.com/all-env-vars. This shows you all variables across all projects in one place, including their sensitivity status.
2. Identify everything that is not marked sensitive
Database URLs, API keys, auth secrets, webhook tokens, signing keys — anything that provides access to external systems. If the value would cause real damage if it leaked, it should be marked sensitive.
3. Rotate exposed values — not just re-save them
Rotating means generating a new credential at the source, not just changing where you store the existing value. If the old value was potentially exposed, it is compromised. A new env var with the old value provides zero protection.
4. Mark everything sensitive going forward
The new Vercel default does this automatically for new variables. For existing variables: click the variable in the dashboard, enable the sensitive flag, re-enter the value.
5. Enable MFA on your Vercel account
Go to Settings → Authentication → configure an authenticator app or passkey.
6. Check the IOC published by Vercel
If you are a Google Workspace administrator, check your organization's authorized apps for the Context AI OAuth client identified in Vercel's bulletin:
Filter by "Authorized" to see everything that has been granted access across your org
Look for apps with broad scopes on core team accounts
GitHub audit:
Settings → Applications → Authorized OAuth Apps
Revoke anything inactive or unfamiliar
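For the Google Workspace side of that audit, the Admin SDK Directory API exposes per-user token grants (`tokens.list`), each with a list of granted scopes. Below is a sketch of the flagging logic over that response shape. The scope URLs are real Google scopes, but treat the "broad" list as a starting point for your own policy, not a complete one:

```python
# Scopes that grant read access to essentially everything the attack
# in this article needed: full Gmail, full Drive, and directory admin.
BROAD_SCOPES = {
    "https://mail.google.com/",
    "https://www.googleapis.com/auth/drive",
    "https://www.googleapis.com/auth/gmail.readonly",
    "https://www.googleapis.com/auth/admin.directory.user",
}

def flag_broad_grants(token_grants):
    """token_grants: list of dicts shaped like Admin SDK
    directory.tokens.list items (clientId, displayText, scopes).
    Returns (app name, offending scopes) pairs worth human review."""
    flagged = []
    for grant in token_grants:
        hits = sorted(BROAD_SCOPES & set(grant.get("scopes", [])))
        if hits:
            name = grant.get("displayText", grant.get("clientId", "?"))
            flagged.append((name, hits))
    return flagged
```

Run this against every core team account, not just admins: the Vercel chain ran through an ordinary employee's "Allow All" grant.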
The Scope Minimization Principle
When evaluating new AI tools: before clicking "Allow," examine the scopes being requested. Does this tool actually need what it is asking for? Broad scopes create broad blast radius.
Where possible, connect AI productivity tools to purpose-specific Google accounts (not your main corporate identity). Yes, this adds friction. It also means a compromised token for that account does not give access to your corporate email, sensitive documents, and admin systems.
Token Lifecycle Management
OAuth tokens should not be permanent. Best practices from RFC 9700 (published January 2025):
Refresh token rotation: each time a refresh token is used to get a new access token, it should be invalidated and replaced with a new one.
Appropriate token lifetimes: for sensitive APIs, refresh tokens should expire within 7-30 days.
Token inventory: maintain a register of what OAuth apps your organization has authorized. Include this in your offboarding checklist.
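If you issue OAuth tokens yourself, the rotation-plus-reuse-detection pattern above is small enough to sketch end to end. This is an in-memory illustration of the behavior RFC 9700 recommends, not a production implementation (a real server persists state and handles concurrent rotations):

```python
import secrets
import time

class RefreshTokenStore:
    """Minimal sketch of refresh token rotation with reuse detection."""

    def __init__(self, lifetime=14 * 24 * 3600):  # mid-range of 7-30 days
        self.lifetime = lifetime
        self.tokens = {}            # token -> {"family", "expires", "used"}
        self.revoked_families = set()

    def issue(self, family):
        token = secrets.token_urlsafe(32)
        self.tokens[token] = {
            "family": family,
            "expires": time.time() + self.lifetime,
            "used": False,
        }
        return token

    def rotate(self, presented):
        """Exchange a refresh token for its replacement. Each token is
        single-use: presenting one twice means two parties hold it, so
        the entire grant family is revoked."""
        rec = self.tokens.get(presented)
        if rec is None:
            raise PermissionError("unknown token")
        if rec["used"] or rec["family"] in self.revoked_families:
            self.revoked_families.add(rec["family"])
            raise PermissionError("reuse detected; token family revoked")
        if time.time() > rec["expires"]:
            raise PermissionError("token expired")
        rec["used"] = True
        return self.issue(rec["family"])
```

Under this scheme, a stolen token stops working the moment either party (thief or legitimate client) refreshes after the other, and the replay itself burns the whole grant. A long-lived, never-rotated bearer token, like the one sitting in Context AI's database, offers no such tripwire.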
The Deeper Lesson
Here is what I keep returning to as I read through the technical details.
Every link in the attack chain — the Lumma infection, the OAuth handshake, the token storage, the Google Workspace pivot — was technically working as designed. Lumma did what infostealers do. OAuth issued tokens the way OAuth issues tokens. Google honored a valid refresh token. Vercel stored environment variables the way it always had.
Every link in that chain was technically working as designed. That is exactly why it worked.
There was no bug to patch that would have stopped this. The attack exploited the gap between what the technology does and what users assume it does. Most people assume OAuth is safe because they used MFA when they set it up. Most people assume their credentials are protected because they use a password manager. Most people assume that if someone does not know their password, they cannot access their account.
None of those assumptions hold against a token that is already in someone else's database.
The question is not "is this tool trustworthy?" — it is "what is the blast radius if this tool's infrastructure is compromised tomorrow?"
That is the question most developers never ask.
Status as of April 21, 2026
The investigation is active. Vercel is working with Mandiant, additional cybersecurity firms, and law enforcement. Context AI is cooperating. The scope of what was accessed and exfiltrated is still being determined.
Vercel's CEO Guillermo Rauch posted on X confirming the supply chain — Next.js, Turbopack, and open source packages — is verified safe. Product updates shipped: environment variable creation now defaults to sensitive, improved team-wide variable management, better activity log tooling, clearer team invite flows.
If you use Vercel, the action items from the checklist above are not optional. Do them today.
If you are thinking "this will not happen to my company, we are too small" — the company that got compromised here is Context AI, a small AI startup. The attacker did not choose Context AI because of its size or strategic importance. They found a Lumma log with useful credentials and followed the chain.
The threat model is not targeted. It is opportunistic and systematic.