You Just Don’t Know It Yet.
The Evolution That Already Happened
May 2026 · SPR{K}3 Security Research
In 2012, a company called FairFight shipped an anti-cheat system for online games that had no client-side component at all. No memory scanning. No binary signatures. No kernel driver. It watched server telemetry and asked a single question: could this behavior legitimately exist?
A player snaps to a headshot through a wall they can’t see through. Movement speed exceeds the physics engine’s maximum. A shot lands 4 milliseconds after a target becomes visible — faster than human reaction time allows.
FairFight didn’t know which cheat was running. It didn’t need to. It knew the behavior was impossible.
Twenty-five years of gaming anti-cheat evolution produced that insight, and it’s the single most important idea in adversarial runtime defense: stop asking “is this known malware?” and start asking “could this state legitimately exist?”
AI infrastructure security is about to learn the same lesson. The hard way.
The Evolution That Already Happened
Gaming anti-cheat went through five distinct phases over 25 years. Each phase maps — imperfectly, partially, but usefully — to where AI infrastructure security sits today.
Phase 1: Signatures (1996–2004)
The first commercial anti-cheat, PunkBuster, launched in 2000. Valve Anti-Cheat followed in 2002. Both worked the same way: maintain a database of known cheat binaries, scan for matches, ban on detection.
Attackers adapted trivially. Rename the DLL. Recompile with different flags. Obfuscate the binary. The signature database became a treadmill — always running, never arriving.
This is where most AI security tooling operates right now. Static scanners matching known patterns: pickle.loads() on untrusted input, torch.load() without weights_only=True, eval() on user-controlled strings. These tools catch known vulnerability patterns. They are blind to novel attack vectors, multi-step exploitation chains, and anything that doesn’t match a rule someone already wrote.
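That rule-matching layer can be sketched in a few lines with Python's standard ast module. The rule set below is purely illustrative, not any shipping scanner's corpus; a real tool carries thousands of patterns and far more context about data flow.

```python
import ast

# Illustrative rules only -- a real scanner's corpus is far larger.
UNSAFE_CALLS = {
    ("pickle", "loads"),   # deserializing untrusted bytes
    ("pickle", "load"),
}

def scan_source(source: str) -> list[str]:
    """Flag calls that match known-dangerous static patterns."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if not (isinstance(node, ast.Call) and isinstance(node.func, ast.Attribute)):
            continue
        func = node.func
        if not isinstance(func.value, ast.Name):
            continue
        key = (func.value.id, func.attr)
        if key == ("torch", "load"):
            # torch.load is only flagged when weights_only=True is absent
            safe = any(kw.arg == "weights_only"
                       and isinstance(kw.value, ast.Constant)
                       and kw.value.value is True
                       for kw in node.keywords)
            if not safe:
                findings.append(f"line {node.lineno}: torch.load without weights_only=True")
        elif key in UNSAFE_CALLS:
            findings.append(f"line {node.lineno}: {key[0]}.{key[1]} on untrusted input?")
    return findings

print(scan_source("import pickle\npickle.loads(blob)\n"))
# → ['line 2: pickle.loads on untrusted input?']
```

The limitation is visible in the code itself: rename the call, route it through an alias, or build the payload dynamically, and the rule never fires. That is the Phase 1 treadmill.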
Signature-based detection in gaming achieved roughly 40–60% effectiveness against actively evolving threats. The AI security industry hasn’t published comparable numbers, which should concern anyone relying on static scanning alone.
Phase 2: The “We’ll Handle It Ourselves” Era (2004–2014)
BattlEye launched in 2004 with kernel-mode memory scanning. Easy Anti-Cheat followed with hybrid client-server analysis. Anti-cheat shifted from “scan for known files” to “watch runtime behavior.”
But the more important development was organizational, not technical. Game studios resisted outsourcing security. “We know our game better than any third party. We’ll build it in-house.” This resistance persisted until cheating-driven player churn became measurable against revenue.
The numbers that eventually broke the resistance: surveys indicate over 60% of multiplayer players have dropped at least one game because of cheating. Over half have reduced or stopped their in-game spending. For a game with ten million players and ten dollars average revenue per user, a one percent increase in churn from cheating represents a million dollars in lost revenue.
When the cost of cheating exceeded the cost of outsourcing detection, studios adopted third-party solutions. Many smaller anti-cheat companies failed or were acquired during the resulting consolidation. The market compressed to three or four dominant providers.
AI infrastructure vendors are in this exact phase right now. The responses we’ve received to confirmed vulnerability disclosures read like a script: “trusted network environment.” “By design.” “Defense in depth.” “Filters plus human-in-the-loop sufficient.”
These are technically accurate statements about intended deployment contexts. They are also the precise equivalent of a game studio saying “our server architecture handles it” in 2006. The statement is true in theory and catastrophically incomplete in practice, because it assumes the deployment environment is controlled — and increasingly, it isn’t.
We can’t predict when the AI infrastructure equivalent of measurable churn will arrive. But the vulnerability surface that would produce it is documented and growing. When a publicly disclosed model poisoning or agent compromise causes measurable financial harm to an enterprise, the “we’ll handle it ourselves” era will end the same way it ended in gaming.
Phase 3: Impossible States (2014–2020)
This is where gaming anti-cheat got interesting.
The winning systems stopped trying to enumerate every possible cheat. Instead, they modeled the physics of the game — valid movement speeds, line-of-sight constraints, human reaction time floors, inventory rules, causality — and detected violations.
A player can’t move faster than the engine allows. A player can’t shoot what they can’t see. A player can’t react in 2 milliseconds. These aren’t signatures. They’re physics constraints. Violating them is impossible without external manipulation, regardless of which specific tool produced the violation.
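Part of the power of this approach is how simple the server-side checks are once the constraints are named. A minimal sketch, with placeholder constants rather than any real engine's values:

```python
# Illustrative server-side "impossible state" checks.
# Both constants are placeholders, not any engine's real values.
MAX_SPEED_UNITS_PER_SEC = 7.5    # engine movement cap
HUMAN_REACTION_FLOOR_MS = 100    # fastest plausible human reaction

def violates_physics(distance_moved: float, dt_sec: float) -> bool:
    """Movement faster than the engine allows cannot be legitimate."""
    return distance_moved / dt_sec > MAX_SPEED_UNITS_PER_SEC

def violates_reaction_floor(visible_at_ms: int, shot_at_ms: int) -> bool:
    """A hit registered faster than human reaction time implies automation."""
    return 0 <= shot_at_ms - visible_at_ms < HUMAN_REACTION_FLOOR_MS

# A shot landing 4 ms after the target became visible: impossible for a human.
print(violates_reaction_floor(visible_at_ms=1000, shot_at_ms=1004))  # True
```

Notice that neither function knows or cares which cheat produced the violation. The constraint, not the tool, is what gets detected.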
Valve’s VACNet, introduced in 2017–2018, took this further: a machine learning system that analyzed gameplay recordings to detect aimbots and wallhacks from behavioral patterns invisible to rule-based systems.
The parallel to AI infrastructure is direct but bounded. The “physics” of a healthy AI system include:
- Valid trust flow. Who trusts whom, and is that trust earned through verified provenance rather than assumed from network position?
- Valid execution ordering. Did events happen in a causally possible sequence, or does the timeline contain impossible gaps?
- Valid capability acquisition. Did this process acquire capabilities through a legitimate path, or did privileges appear without a corresponding grant?
These map cleanly to gaming’s behavioral categories. But two additional categories extend beyond what gaming needed to solve: valid provenance (where did this artifact come from, and is the chain intact?) and valid economic behavior (are resource consumption patterns consistent with legitimate operation, or does this workload’s cost profile indicate something running that shouldn’t be?). Gaming didn’t need artifact genealogy or resource economics because games are closed systems with known physics. AI infrastructure is an open system where the “physics” must be partially inferred.
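The capability-acquisition invariant above can be sketched as a tiny event monitor. The event schema and capability names here are hypothetical, chosen only to show the shape of the check: a privilege exercised without a corresponding grant event is an impossible state, regardless of which tool produced it.

```python
from dataclasses import dataclass, field

@dataclass
class TrustMonitor:
    """Sketch: flag capabilities that appear without a legitimate grant."""
    granted: set = field(default_factory=set)
    alerts: list = field(default_factory=list)

    def observe(self, event: dict) -> None:
        kind, cap = event["kind"], event["capability"]
        if kind == "grant":
            self.granted.add(cap)
        elif kind == "use" and cap not in self.granted:
            # Capability exercised with no acquisition path: impossible state.
            self.alerts.append(f"impossible state: '{cap}' used without grant")

mon = TrustMonitor()
mon.observe({"kind": "grant", "capability": "read:dataset"})
mon.observe({"kind": "use", "capability": "read:dataset"})   # fine
mon.observe({"kind": "use", "capability": "write:weights"})  # flagged
print(mon.alerts)
```

A production system would need ordering guarantees, revocation, and a far richer event model; the point of the sketch is only that the check is an invariant over state, not a signature over payloads.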
This is where the analogy earns its keep and where it hits its limits, simultaneously.
Phase 4: Shared Intelligence (2018–Present)
Epic Games acquired Easy Anti-Cheat in October 2018, integrated it into Epic Online Services, and made it free for all developers on the platform. This created a network effect: every game using EAC feeds detection intelligence back into the shared system. A cheat discovered in one game is immediately detectable across all protected games.
BattlEye built a global ban infrastructure that correlates device fingerprints and hardware identifiers across every game it protects. Get banned in one game, and the ban propagates.
The anti-cheat services market reached approximately $1.3 billion in 2025. The top five providers hold about 58% of revenues. The market consolidated around providers with cross-customer intelligence — not because they had the best detection on any single title, but because network effects made their detection improve with every new customer.
This is the lesson that matters most for AI infrastructure security, and the one we’re most honest about not having solved yet. Cross-customer intelligence — shared pattern registries, artifact genealogy, exploit-chain propagation tracking — is what turns a detection product into a defensible platform. It’s also the hardest thing to build while maintaining the privacy guarantees that enterprise customers require.
The gaming industry’s answer was centralized telemetry: the anti-cheat provider sees everything. That won’t work for AI infrastructure, where customers won’t ship model weights or training data to a third party. The architectural challenge is building network effects from metadata alone — pattern frequencies, alert signatures, temporal distributions — without requiring access to the underlying data.
We think this is solvable. We haven’t proven it yet.
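The metadata-only approach can be illustrated with a trivial sketch. Everything here is assumed for illustration, including the field names: the idea is that the coordination layer receives a stable fingerprint of what kind of anomaly fired, never the payload that triggered it.

```python
import hashlib

def alert_signature(pattern_id: str, category: str, severity_bucket: str) -> str:
    """Collapse an alert into a shareable fingerprint with no customer data."""
    material = f"{pattern_id}|{category}|{severity_bucket}"
    return hashlib.sha256(material.encode()).hexdigest()[:16]

# Two customers hitting the same exploit chain produce the same signature,
# so frequencies can be correlated centrally without seeing either payload.
a = alert_signature("PAT-0042", "trust-flow", "high")
b = alert_signature("PAT-0042", "trust-flow", "high")
print(a == b)  # True
```

The open question is whether signatures this coarse carry enough information to produce real network effects, which is exactly the part that remains unproven.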
Phase 5: The Privacy Correction (2025–Present)
In July 2024, a faulty CrowdStrike content update for its kernel-mode driver crashed roughly 8.5 million Windows machines worldwide. The incident convinced Microsoft — and much of the security industry — that kernel residency should be a last resort.
Riot Games, whose Vanguard anti-cheat had been the most aggressive kernel-mode system in gaming since its 2020 launch with Valorant, became the first major publisher to commit to moving detection to user-mode AI. The new architecture: GPU-accelerated machine learning ensembles in user space, behavioral biometrics, minimal kernel footprint.
This matters because it validates an architectural choice we made independently: run detection intelligence locally on the customer’s infrastructure, keep the agent lightweight, send only metadata to the coordination layer. We made this choice for privacy reasons, not because we predicted CrowdStrike. The alignment is fortunate, not prescient. But it puts us on the right side of a major industry-wide architectural correction.
Where the Analogy Breaks (And Why That Matters)
A document that presents an analogy without naming its limits is a sales pitch, not an analysis. Here’s where the gaming precedent stops applying.
The adversaries are different. Gaming cheaters are mostly individuals or small operations selling subscriptions for twenty to fifty dollars a month. AI infrastructure attackers include nation-states, organized crime, and corporate espionage operations with budgets that are orders of magnitude larger. The detection challenge is harder and the adversaries are more resourceful. This also means the market need may be more urgent — but urgency and difficulty don’t cancel out.
The detection surface is narrower than it appears. Gaming anti-cheat validates the runtime behavioral detection component of AI security. It does not validate static vulnerability scanning, supply chain provenance tracking, or the detection of slow-burn attacks like model poisoning that may not produce observable anomalies until long after the compromise succeeds. The gaming analogy covers perhaps a third of the total AI security threat surface. It covers the third we think is most underserved — but a third is not the whole.
Ground truth is harder. In gaming, the server knows the physics. A player can’t move faster than the engine allows — period. That’s deterministic ground truth. In AI systems, behavior is probabilistic by design. An LLM producing unexpected output might be poisoned, or it might be exhibiting normal stochastic variation. Defining “impossible” in a probabilistic system requires statistical modeling, not physics constraints. The false positive calculus is fundamentally different.
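In a probabilistic system, "impossible" becomes "statistically indefensible": flag only observations far outside a learned baseline. A minimal sketch using a standard score; the metric, baseline, and threshold below are illustrative, and real detection needs far richer distributional modeling than a z-score.

```python
import statistics

def anomaly_score(baseline: list[float], observed: float) -> float:
    """Standard-score distance of an observation from the baseline."""
    mu = statistics.fmean(baseline)
    sigma = statistics.stdev(baseline)
    return abs(observed - mu) / sigma

# Hypothetical per-request latency baseline for some inference workload.
baseline_latency_ms = [98, 102, 101, 99, 100, 103, 97, 100]

print(anomaly_score(baseline_latency_ms, 100.5) < 3)   # normal variation
print(anomaly_score(baseline_latency_ms, 260.0) > 3)   # flag for review
```

Where a game server can reject a violation outright, a system like this can only raise suspicion, which is why the false positive calculus is so different.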
Most market entrants failed. The gaming anti-cheat market consolidated to three or four dominant players. The clean five-phase timeline above smooths over the companies that didn’t survive the consolidation. PunkBuster, which pioneered the category, is no longer the market leader. Being early and being right about the architecture are necessary but not sufficient.
Twenty-five years is a long time. Gaming took a quarter century to evolve from signatures to behavioral AI detection. The AI security market will likely compress this timeline because the threat economics are more severe — but “likely” is not “certainly,” and we don’t know the compression factor. Presenting the gaming trajectory as a predictive model rather than a structural analogy would be dishonest.
What Gaming Proved That Transfers
Strip away the inflation and four things survive scrutiny:
Runtime behavioral detection is a commercially viable product category. Gaming proved this with $1.3 billion in annual services revenue, growing at roughly 12% annually. The product category exists because adversarial adaptation makes static detection permanently insufficient. The arms race doesn’t end — it sustains the market.
“We’ll handle it ourselves” resistance collapses under breach economics. Every game studio said this. Today, only one in eight studios relying on anti-cheat uses a purely in-house solution. The rest outsource to specialists or use hybrid approaches. The forcing function was measurable revenue loss from cheating. The equivalent forcing function for AI infrastructure hasn’t arrived yet, but the vulnerability surface is documented.
Network effects determine market winners more than detection quality. EAC became dominant partly because Epic made it free for developers on their platform, not because it was the best detector on any individual game. The shared intelligence network — where every customer makes every other customer safer — is the moat. Detection quality gets you in the door. Network effects keep you in the market.
The industry converges on “security as physics.” This is the deepest insight. Anti-cheat stopped asking “is this known malware?” and started asking “could this state legitimately exist?” That question is domain-independent. It applies anywhere an adversary manipulates a system that has definable constraints — whether those constraints are game physics, trust boundaries, causal ordering, or economic behavior.
What We Built
SPR{K}3 is a security research operation that has identified and disclosed vulnerabilities across NVIDIA, Microsoft, Meta, Amazon, Google, HuggingFace, Intel, Weights & Biases, and ClearML infrastructure — with confirmed CVEs published across multiple vendor security bulletins.
That research produced a pattern corpus of over 700 vulnerability signatures across AI and ML infrastructure. That corpus is the foundation for two products:
Ora scans AI infrastructure for known and novel vulnerability patterns. It operates at the static analysis layer — the “Phase 1” equivalent from the gaming timeline, but with a pattern corpus built from real vulnerability research rather than theoretical threat models.
Defend is the runtime layer — the “Phase 3” equivalent. It monitors AI infrastructure for impossible states: behavioral violations, trust boundary failures, temporal anomalies, and economic irregularities that no static scanner can detect. It runs locally on customer infrastructure and sends only alert metadata to the coordination layer.
Together, they implement the architectural lesson gaming took 25 years to learn: static detection finds what you already know about; behavioral detection finds what you don’t.
We prepare for what might happen by blocking the impossible.
We’re at defend.sprk3.com.
SPR{K}3 Security Research · support@sprk3.com
