Over the last couple of years, Vercel has quietly become the default front-end cloud for a big chunk of the Fortune 1000. PayPal, Nike, Target, Walmart, AT&T, Hulu, and Under Armour all run production workloads on its infrastructure, and since Vercel is also the team behind Next.js and the AI SDK, it already sits underneath a meaningful slice of the modern web. Adoption has accelerated with v0, Vercel's vibe-coding platform, which lets marketers, PMs, and analysts inside these enterprises ship real production apps straight from natural-language prompts while IT keeps SSO, deployment protection, and access controls in place. That mix of speed and governance is the pitch that won over risk-conscious enterprises that wouldn't go near Replit or Lovable, and it is exactly why the breach disclosed last weekend resonated so strongly with the security community.
A Vercel employee signed up for a consumer AI productivity tool using their corporate Google Workspace account and granted it broad OAuth scopes. The vendor behind that tool, Context.ai, was breached. The stored OAuth grant became the attacker's path back into the employee's corporate identity and from there into Vercel's internal environment.
The same consent-and-token-storage pattern is a standard design choice across the current generation of agentic AI tools, including tools that read from and write to the systems enterprises rely on to defend everything else: SIEM, IAM, EDR, CMDB, firewalls. When the vendor behind one of those tools is compromised, the resulting incident is more than data exfiltration; it is a control-plane compromise.
This post summarizes the public record, identifies the operational implications, and describes the controls that apply.
Timeline
In February 2026, a Context.ai employee with elevated access downloaded Roblox "auto-farm" scripts, a recognized Lumma Stealer delivery vector. Per Hudson Rock's infostealer telemetry, the resulting infection drained saved credentials from the endpoint, including Google Workspace, Supabase, Datadog, Authkit, and the employee's platform login to Context.ai's Vercel tenant. ShinyHunters later obtained the log via an infostealer marketplace.
Separately and earlier, at least one Vercel employee signed up for Context.ai's AI Office Suite using their corporate Google Workspace account. Per Context.ai's security update, that employee "granted 'Allow All' permissions," and Vercel's OAuth configuration permitted the broad grant against the enterprise tenant. Vercel was not a Context.ai customer; the relationship was an individual consumer sign-up with a corporate identity. Context.ai retained the resulting OAuth refresh token in its backend.
Figure 1. Vercel Compromise Chain
In March 2026, using credentials from the infostealer log, the attacker accessed Context.ai's AWS environment and the company's own Vercel customer tenant, including, per OX Security, the context-inc/valinor project's environment variables and logs. Context.ai detected the AWS intrusion, shut down the AI Office Suite infrastructure, and engaged CrowdStrike for forensics. The Context AI Chrome extension was removed from the Chrome Web Store on March 27.
Context.ai's own account of the subsequent step is hedged: the unauthorized actor "appears to have used a compromised OAuth token to access Vercel's Google Workspace." Vercel CEO Guillermo Rauch described the escalation from Workspace to internal environment as "a series of maneuvers" without specifying the mechanism. Inside Vercel's environment, the attacker enumerated environment variables marked as non-sensitive, a class Vercel permits to be read through authenticated API and UI paths. Per BreachForums posts captured by OX Security, the exfiltrated material included a Vercel database access key and portions of source code, listed for sale at $2 million USD. Vercel disclosed the incident on April 19, 2026 and engaged Google Mandiant for forensics.
The only customers to publicly acknowledge exposure so far are all crypto frontends running on Vercel. Orca, the Solana DEX, rotated every deployment credential and secret that could have been exposed, though it says its on-chain protocol and user funds were not affected. Jupiter, the Solana DeFi aggregator, reviewed its logs, reported no signs of suspicious activity, and is rotating keys preemptively. Cork Protocol went a step further: its CTO publicly urged users to stop interacting with any DeFi application hosted on Vercel until the dust settled.
The AI security implication
Context.ai's AI Office Suite held Drive-wide read access. This is the minimum permission level most AI productivity tools request and the level that consent screens present routinely enough that users approve it without detailed review.
The current generation of agentic AI tools requests broader permissions by design. A tool that files pull requests, provisions cloud resources, runs SOC triage, or writes firewall rules requires action scopes: write, execute, approve, deprovision. The consent and token-storage pattern has not changed with that shift. Every agentic tool that runs continuously on behalf of an enterprise user requires a long-lived credential stored in the vendor's backend. In the Context.ai case the credential was a refresh token. In other deployments the equivalent is a GCP service account key, an AWS access key, or an Azure client secret. The risk profile is the same.
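Why a stored refresh token amounts to standing access: anyone holding it, vendor or attacker, can mint fresh access tokens on demand, with every scope of the original grant and no new consent screen. The sketch below constructs the request Google's token endpoint expects without sending it; the client credentials and token value are hypothetical placeholders.

```python
# Google's real token endpoint; the request body below is the
# standard OAuth 2.0 refresh_token grant.
GOOGLE_TOKEN_ENDPOINT = "https://oauth2.googleapis.com/token"

def build_refresh_request(client_id: str, client_secret: str,
                          refresh_token: str) -> dict:
    """Return the form body for a refresh-token exchange.
    POSTing this yields a fresh short-lived access token carrying
    every scope of the original grant -- no user interaction --
    for as long as the grant remains unrevoked."""
    return {
        "grant_type": "refresh_token",
        "client_id": client_id,
        "client_secret": client_secret,
        "refresh_token": refresh_token,
    }

# Placeholder values standing in for what an attacker lifts
# from a compromised vendor backend:
body = build_refresh_request("app.vendor.invalid", "s3cret", "1//stolen-token")
```

Revoking the grant or the OAuth client invalidates the refresh token, which is why grant inventory and revocation matter operationally. The same standing-access property holds for a vendor-held service account key or client secret; only the rotation mechanics differ.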
This configuration is present across AI tools with read or write scope against SIEM, IAM, EDR, ITSM and CMDB, network and firewall management, and CI/CD. When the vendor is compromised through the same vectors that produced the Context.ai infection, the attacker acquires the ability to invoke the tool's scopes against the customer's infrastructure. Where those scopes include write access to security or infrastructure systems, the attacker can modify detections, grant standing IAM access, disable EDR responses, push firewall exceptions, or rewrite change records. The affected surface is the control plane.
Operational controls
The chain above fails at specific, controllable points, and the corresponding controls are not exotic:
- Constrain OAuth consent. Google Workspace app access control can block unreviewed third-party apps from obtaining broad scopes such as Drive-wide read, so a consumer sign-up with a corporate identity cannot quietly become a tenant-wide grant.
- Inventory and revoke standing grants. Enumerate third-party OAuth tokens across the tenant, flag broad scopes, and revoke grants to unapproved vendors; revocation invalidates the stored refresh token.
- Treat vendor-held credentials as standing access. Any long-lived credential a vendor stores (refresh token, service account key, client secret) inherits that vendor's endpoint and credential hygiene. Assess that hygiene before granting action scopes, and prefer short-lived, narrowly scoped credentials where the integration supports them.
- Reclassify "non-sensitive" secrets. The attacker read environment variables classed as non-sensitive. Anything that grants access, database keys included, belongs in the sensitive class, unreadable through authenticated API and UI paths.
- Monitor for infostealer exposure. The initial access here was a commodity infostealer log; telemetry services surface exposed corporate credentials before they are resold.
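The grant-inventory control can be automated. The sketch below triages a tenant's third-party OAuth grants and flags any that carry tenant-wide scopes; the input shape loosely mirrors Google Workspace Admin SDK token listings (simplified), and the set of "broad" scopes is our own illustrative classification, not an official Google one.

```python
# Illustrative set of scopes where a vendor breach becomes
# tenant-wide exposure. Extend to match your approved-app policy.
BROAD_SCOPES = {
    "https://www.googleapis.com/auth/drive",           # Drive-wide read/write
    "https://www.googleapis.com/auth/drive.readonly",  # Drive-wide read
    "https://mail.google.com/",                        # full Gmail access
    "https://www.googleapis.com/auth/admin.directory.user",
}

def triage_grants(grants: list[dict]) -> list[dict]:
    """Return the grants that include any broad scope -- the ones
    to review, restrict, or revoke first."""
    flagged = []
    for g in grants:
        broad = sorted(set(g["scopes"]) & BROAD_SCOPES)
        if broad:
            flagged.append({"app": g["app"], "user": g["user"],
                            "broad_scopes": broad})
    return flagged

# Hypothetical grant inventory for demonstration:
inventory = [
    {"app": "AI Office Suite", "user": "employee@corp.example",
     "scopes": ["https://www.googleapis.com/auth/drive.readonly",
                "https://www.googleapis.com/auth/userinfo.email"]},
    {"app": "Calendar helper", "user": "employee@corp.example",
     "scopes": ["https://www.googleapis.com/auth/calendar.events.readonly"]},
]

for hit in triage_grants(inventory):
    print(hit["app"], "->", hit["broad_scopes"])
```

Run on a schedule against real token listings, a report like this turns the invisible consent problem into a reviewable queue.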
Let’s Talk!
Consortium's AI Security Center of Excellence helps clients accelerate AI adoption by making it safe and secure. The work spans sustainable AI governance, agent control architecture patterns, vendor capability assessments, and the operational controls described above. Contact your Consortium account team to engage.
About Consortium
Consortium is the industry’s first cybersecurity and networking value-added reseller, combining strategic advisory, vendor-agnostic procurement, and concierge-level support into a single, client-centric model. Through its NextGen VAR approach, Consortium unifies holistic security strategy, proactive risk management, and simplified vendor oversight to deliver measurable business outcomes — resetting the standard for how organizations protect and enable their business. Leveraging its proprietary Metrics that Matter® (MTM®) platform, Consortium translates technical security data into business-ready insights, empowering executives and boards to make informed, financially grounded decisions while continuously improving security posture. Learn more at www.consortium.net.