TeamPCP supply chain attack compromising LiteLLM, Trivy, and five package ecosystems targeting AI API credentials

The Supply Chain Attack Nobody Is Talking About Enough

Everyone is watching Optimus videos this week. I get it. The robot stuff is genuinely impressive. But I keep coming back to something far more unsettling, something that directly affects every team shipping AI products right now.

TeamPCP just ran a multi-stage supply chain attack across five package ecosystems in two weeks. And the only reason your company might still have its credentials is that the attacker wrote bad code.

That’s not a reassuring near-miss. That’s a nightmare.

The Attack Chain

Here’s how it unfolded. TeamPCP compromised Trivy first, on March 19. Trivy is a security scanning tool. LiteLLM used Trivy in its own CI pipeline, so credentials stolen from the security product were then used to hijack the AI product. From there, they hit GitHub Actions. Then Docker Hub. Then npm. Then Open VSX. Each breach unlocked the next one. The trust chain didn’t just fail; it became the attack surface.

LiteLLM pulls in roughly 97 million downloads a month. NASA, Netflix, Stripe, and NVIDIA all run it. Its entire job is to act as a proxy for every AI API your organization has configured. OpenAI keys, Anthropic keys, Google keys, Amazon keys. One place. One breach. Everything exposed simultaneously.

The poisoned version was published straight to PyPI with no corresponding code on GitHub, no release tag, no review process of any kind. It contained a file Python runs automatically on startup. You didn’t need to import it. You didn’t need to call it. It executed the moment the package existed on your machine.
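One mechanism that behaves exactly this way is CPython’s handling of `.pth` files: any line in a site-packages `.pth` file that begins with `import` is executed at every interpreter startup, with no import of the package required. Whether or not that was the precise vector here (the full payload has not been published), it is worth auditing for. A minimal defensive sketch that scans site-packages for `.pth` files containing executable lines:

```python
import site
from pathlib import Path

def suspicious_pth_lines(site_dirs=None):
    """Scan site-packages for .pth files with lines starting with
    'import' -- CPython executes such lines at every interpreter
    startup, which is the hook this class of attack can abuse."""
    if site_dirs is None:
        site_dirs = site.getsitepackages() + [site.getusersitepackages()]
    findings = []
    for d in site_dirs:
        directory = Path(d)
        if not directory.is_dir():
            continue
        for pth in directory.glob("*.pth"):
            try:
                lines = pth.read_text().splitlines()
            except OSError:
                continue  # unreadable file; skip rather than crash
            for lineno, line in enumerate(lines, 1):
                if line.startswith("import "):
                    findings.append((str(pth), lineno, line.strip()))
    return findings

if __name__ == "__main__":
    for path, lineno, line in suspicious_pth_lines():
        print(f"{path}:{lineno}: {line}")
```

Legitimate tools (editable installs, coverage hooks) also use executable `.pth` lines, so the output is a review list, not a verdict; the point is knowing what runs before your first line of code does.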

Why It Almost Worked Perfectly

The payload had three stages. First, harvest every SSH key, cloud token, Kubernetes secret, crypto wallet, and .env file on the machine. Second, deploy privileged containers across every node in the cluster. Third, install a persistent backdoor waiting for new instructions.

The only reason this didn’t silently drain thousands of production environments is that the malware was written sloppily. It consumed so much RAM it crashed a developer’s computer. That developer investigated. They found LiteLLM had been pulled in through a Cursor MCP plugin they didn’t even know they had installed.

Andrej Karpathy called it “software horror” on X, and that’s the right framing. If the attacker had been competent, nobody would have noticed for weeks. Maybe months. The crash was the tell. Competence would have been catastrophic.

The Dependency Problem Nobody Wants to Admit

That developer did not choose to install LiteLLM. It came in as a dependency of a dependency of a plugin. This is the part that should be keeping engineering leaders awake. The AI tooling ecosystem has exploded in 18 months. MCP plugins, agent frameworks, LLM proxies, vector database clients. Every one of these pulls in its own dependency tree, and almost nobody audits that tree in production.
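That blind spot is at least partially inspectable with nothing beyond the standard library. A minimal sketch, using `importlib.metadata` (Python 3.8+), that walks the declared dependency tree of an installed package so you can see everything it pulls in transitively:

```python
import re
from importlib.metadata import PackageNotFoundError, requires

def transitive_deps(package, seen=None):
    """Recursively collect the declared (transitive) dependencies of
    an installed package from its distribution metadata."""
    if seen is None:
        seen = set()
    try:
        reqs = requires(package) or []
    except PackageNotFoundError:
        return seen  # not installed locally; nothing to walk
    for req in reqs:
        if "extra ==" in req:
            continue  # skip optional extras
        # Requirement strings look like "httpx<1,>=0.23 ; python_version..."
        # so keep only the leading project name.
        name = re.split(r"[ ;<>=!~(\[]", req, maxsplit=1)[0]
        if name and name.lower() not in seen:
            seen.add(name.lower())
            transitive_deps(name, seen)
    return seen
```

Running `sorted(transitive_deps("litellm"))` in an environment that has it installed would show the full set of names that arrive alongside it; the name-parsing regex here is a simplification of the real requirement grammar, so treat this as an inventory starting point, not a complete SBOM.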

TeamPCP posted on Telegram after the attack: “Many of your favourite security tools and open-source projects will be targeted in the months to come, stay tuned.” That’s not a boast, it’s a roadmap. They’ve proven the method works. Security tooling is the obvious first target precisely because security tools have elevated trust in CI pipelines. Compromise the scanner, get the keys to everything the scanner touches.

What This Means for AI Teams Specifically

The companies deploying AI the fastest right now genuinely have the least visibility into what’s underneath it. That’s not an opinion, it’s a structural reality. Speed of deployment and depth of supply chain auditing move in opposite directions.

LiteLLM is one package. But every AI agent, internal copilot, and RAG pipeline your team shipped this year runs on hundreds of packages with similar levels of transitive trust. Most teams couldn’t tell you, right now, what would execute automatically if one of those packages was poisoned tonight.

The fix isn’t “stop using open source.” The fix is treating your AI dependency tree the same way you’d treat a credential vault, with explicit inventory, version pinning, provenance checks, and alerts on any package that publishes without a matching source tag. PyPI package with no GitHub release? That should be a hard stop in your CI pipeline, not a manual review item.
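One cheap, network-free slice of that hard stop is failing CI on any requirement that is not pinned to an exact version. A sketch (the regex and script layout are illustrative assumptions, not a standard tool); a real provenance gate would additionally query the PyPI JSON API and compare each release against the repository’s tags:

```python
import re
import sys

# Matches "name==exact.version"; anything else is unpinned.
PIN_RE = re.compile(r"^\s*([A-Za-z0-9._-]+)\s*==\s*[\w.!+]+")

def unpinned(requirements_text):
    """Return requirement lines that are not pinned to an exact
    version -- candidates for a hard CI failure."""
    bad = []
    for line in requirements_text.splitlines():
        stripped = line.strip()
        if not stripped or stripped.startswith(("#", "-")):
            continue  # comments and pip options (-r, --hash, ...)
        if not PIN_RE.match(stripped):
            bad.append(stripped)
    return bad

if __name__ == "__main__" and len(sys.argv) > 1:
    offenders = unpinned(open(sys.argv[1]).read())
    for line in offenders:
        print(f"UNPINNED: {line}")
    sys.exit(1 if offenders else 0)
```

Wired into CI as `python check_pins.py requirements.txt`, the nonzero exit code makes an unpinned dependency a build failure rather than a review comment. Pinning alone doesn’t stop a poisoned release of the pinned version, which is why the source-tag check still matters on top.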

The near-miss here should be treated as a dry run, not a close call that resolves itself. The next version of this attack will be quieter.

#supplychain #cybersecurity #AI #MLOps #opensource #PyPI #LiteLLM
