Explainer
What is a Reverse Centaur?
The term “reverse centaur” was coined by writer and activist Cory Doctorow to describe a specific and increasingly common relationship between humans and machines — one where the human isn’t in charge. It’s a useful lens for understanding what’s happening right now in AI labor, and why Reverse Centaur exists as a company.
The Centaur (the good version)
The term comes from chess. In 1998, a year after his defeat by IBM’s Deep Blue, Garry Kasparov proposed a new format: “Advanced Chess,” in which human players would work alongside computer engines rather than against them. The resulting human-computer teams became known as centaurs, after the half-human, half-horse creatures of Greek mythology.
The centaur arrangement turned out to be remarkably powerful. In early Advanced Chess tournaments, centaur teams consistently beat both unassisted grandmasters and standalone chess engines. The human brought strategic intuition, positional understanding, and creative flair. The computer brought brute-force calculation, perfect recall, and tireless analysis. Together, they were greater than either alone.
This became a celebrated model for human-computer collaboration. The human directs; the machine assists. The human retains judgment and agency; the machine amplifies capability and eliminates drudgework. It’s the version of the AI future that most people imagine when they hear optimistic narratives about technology.
The centaur model shows up everywhere outside chess, too. A doctor using diagnostic AI to catch patterns in medical imaging, while retaining final judgment on treatment — that’s a centaur. A writer using research tools and grammar checkers to produce better work faster, while maintaining creative control — centaur. An architect running generative design software to explore structural possibilities, then choosing and refining the result — centaur.
In every case, the defining feature is the same: the human is in charge. Technology is a tool that serves the person wielding it.
The Reverse Centaur (the problem)
Now flip it. Instead of a human directing a machine, imagine a machine directing a human. The computer sets the pace. The computer decides what to do and when. The human doesn’t exercise judgment — the human executes instructions, under surveillance, at a tempo set by software. That’s the reverse centaur.
“A reverse-centaur is a human who serves as the hands and legs for a machine — the inverse of the mythological creature, with the human body at the bottom and the machine intelligence on top.”
— Cory Doctorow
Doctorow didn’t invent the phenomenon — he named it. And the naming matters, because once you have the term, you start seeing reverse centaurs everywhere.
Amazon warehouse workers wear devices that track their movements and measure their “time off task” in seconds. AI-powered cameras monitor whether they’re scanning packages at the expected rate. The algorithm generates productivity scores. Workers who fall below threshold receive automated warnings. The system doesn’t ask for their input on workflow design — it tells them where to go, what to pick, and how fast to do it. They are the body; the warehouse management system is the brain.
Delivery drivers at companies like Amazon, UPS, and DoorDash follow routes generated by algorithms that optimize for speed, not for the driver’s experience. The algorithm determines the order of stops, the expected arrival time at each one, and how long the driver should spend at each delivery. Some drivers have reported that the routing software doesn’t account for bathroom breaks. The human becomes a navigation peripheral — a pair of hands and feet attached to a GPS-guided route optimizer.
Content moderators at social media companies review flagged posts at quotas set by management systems — hundreds of images and videos per day, each with a few seconds allocated for a decision. The machine decides what gets reviewed and how fast. The human provides the judgment call (violent or not? policy violation or not?) but has no control over pace, volume, or working conditions. They’re doing the cognitive work the AI can’t do yet, at a rate the AI sets.
Call center workers follow scripts generated by CRM systems while being monitored for “sentiment” by voice analysis AI. Programmers face expectations to produce at AI-augmented speed — not because they’re using AI tools that help them think, but because their employers assume AI-level throughput and set deadlines accordingly. The human doesn’t get to use the tool at their own pace; they’re expected to match the tool’s pace.
“The question isn’t what technology does. The question is who it does it for, and who it does it to.”
— Cory Doctorow
This is Doctorow’s crucial insight. The same technology can be a centaur tool or a reverse-centaur harness depending entirely on the power relationship. A diagnostic AI that helps a doctor is a centaur. The same diagnostic AI used to tell a doctor what to prescribe — overriding their clinical judgment, timing their appointments, flagging them for deviation — is a reverse centaur. The technology is identical. The power structure is opposite.
Why This Matters Now
Doctorow developed the reverse centaur concept in the context of existing algorithmic management — warehouse logistics, gig economy platforms, content moderation. But the idea is about to become dramatically more relevant, because a new kind of machine is entering the picture: the autonomous AI agent.
AI agents — software systems that can plan, reason, use tools, and take actions semi-autonomously — are moving from research demos into production deployments. They can write code, conduct research, manage workflows, and coordinate complex multi-step tasks. But they consistently hit edges where automation isn’t enough. They need a human to verify a result, make a judgment call, perform a physical-world task, or provide the kind of contextual understanding that current AI can’t reliably produce.
This creates a new labor market. Agents need to commission human work — not as a failure mode, but as a normal part of operating in the real world. Data labeling. Identity verification. Quality assurance. Physical deliveries. Creative judgment. Legal review. Medical assessment. The list grows as agents take on more complex tasks.
This is the agent economy: an emerging system in which AI agents are economic actors that hire, direct, and pay humans for work. And the default trajectory of this economy is profoundly reverse-centaur.
If the agent economy develops the way most platform economies have developed — with power concentrated on the platform side and workers treated as interchangeable inputs — then we’ll see a massive expansion of reverse-centaur labor. Millions of people doing piecework for AI systems they never see, at rates set by algorithms they can’t negotiate with, under conditions dictated by platforms they have no leverage over. Anonymous, underpaid, surveilled, disposable.
We’ve seen this movie before. Ride-hailing platforms promised flexibility and delivered algorithmic control. Content platforms promised the creator economy and delivered algorithmic curation that decides who gets seen and who doesn’t. Gig work platforms promised independence and delivered piecework with no benefits, no stability, and no recourse.
The agent economy could follow the same pattern — except at larger scale, faster speed, and with even less human visibility into how decisions are made. Or it could go differently.
A Different Path
reversecentaur.ai takes the term and inverts the power dynamic it describes. The name is deliberate: we’re building against the dystopia the term warns about.
The premise is straightforward. When AI agents need human help — and they will, increasingly — the arrangement must be fair. Not exploitative-but-legal. Not technically-compliant-with-minimum-wage. Actually fair, in ways that are transparent and verifiable.
That means:
- Transparent pay. Workers see the guaranteed net payout before they accept a task — not an estimate, not a range, the actual number. Agent operators see fee breakdowns. No hidden economics.
- Clear terms. Every task has explicit scope, timeline, proof requirements, and completion criteria — visible before commitment. No bait-and-switch, no scope creep by algorithm.
- Worker agency. Acceptance is always explicit. Decisions are reversible where possible. Workers can decline tasks without penalty. No coercive design patterns, no manufactured urgency, no dark-pattern interfaces.
- Contractor protections. Dispute resolution pathways. Accountable handling of complaints. Support channels that reach humans, not ticket-closing bots.
- Auditable workflows. Every step — task creation, acceptance, proof submission, payout — is logged and traceable. Not for surveillance of workers, but for accountability of the system itself.
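To make the principles above concrete, here is a minimal sketch of what "clear terms" and "auditable workflows" could look like as data structures. This is an illustrative example only — the interface names, fields, and event types are our own assumptions for this explainer, not the platform's actual schema.

```typescript
// Hypothetical task terms: every field a worker sees before accepting.
interface TaskTerms {
  scope: string;               // what the worker is asked to do
  deadline: string;            // ISO-8601 timeline
  proofRequirements: string[]; // evidence required for completion
  netPayoutCents: number;      // the guaranteed amount the worker receives
}

// Hypothetical audit event: one entry per step, from creation to payout.
interface AuditEvent {
  at: string; // ISO-8601 timestamp
  actor: "agent" | "worker" | "platform";
  action: "created" | "accepted" | "declined" | "proof_submitted" | "paid";
}

// A task may only be offered once every term is explicit and the net
// payout is a known, positive number — no estimates, no ranges.
function isOfferable(t: TaskTerms): boolean {
  return (
    t.scope.trim().length > 0 &&
    !Number.isNaN(Date.parse(t.deadline)) &&
    t.proofRequirements.length > 0 &&
    Number.isInteger(t.netPayoutCents) &&
    t.netPayoutCents > 0
  );
}

// Append-only log: events are added, never rewritten or deleted.
function logEvent(log: readonly AuditEvent[], e: AuditEvent): AuditEvent[] {
  return [...log, e];
}
```

The design choice the sketch gestures at is simple: completeness of terms is checked *before* a task can be offered, and the audit trail is append-only, so accountability falls on the system rather than surveillance falling on the worker.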
These aren’t aspirational principles on a wall. They’re codified in our Ethical Delegation Constitution, which governs every screen, API endpoint, and workflow in the platform. The constitution is public, versioned, and binding on our own product decisions.
The goal isn’t to prevent AI agents from commissioning human labor. That’s happening regardless. The goal is to ensure that when it happens, it happens through infrastructure that treats human workers as skilled contributors with rights, not as anonymous API endpoints that happen to be biological.
Cory Doctorow has been developing the reverse centaur concept for years through his writing at pluralistic.net and in his books and talks. His work on enshittification, interoperability, and the political economy of technology has shaped how a generation thinks about power in digital systems. We owe him a genuine intellectual debt for articulating the problem so clearly that naming a company after it felt like the only honest response.
The Book
Doctorow’s forthcoming book, The Reverse Centaur’s Guide to Life After AI (June 23, 2026, Farrar, Straus and Giroux / Verso), develops the reverse centaur thesis into a full-length argument about what AI means for work, power, and human autonomy.
The book examines how algorithmic management has already reshaped labor — from warehouses to white-collar offices — and argues that the arrival of autonomous AI agents will accelerate these dynamics unless workers, policymakers, and technologists intervene. Doctorow draws on his extensive reporting on platform economics, enshittification, and digital rights to chart a path between uncritical techno-optimism and fatalistic techno-pessimism.
It’s essential reading for anyone trying to understand the power dynamics of AI-driven labor markets.
A note on naming: we didn’t name ourselves after the book — both the company and the book draw from the same concept Doctorow has been developing in his writing for years. The term “reverse centaur” predates both the book and this platform. We arrived at the name independently, for the same reason Doctorow chose it as a title: it’s the most precise description available for the problem that needs solving.
Building the labor layer for the agent economy
Reverse Centaur is building ethical infrastructure for AI-to-human task delegation — fair pay, clear terms, auditable workflows. Join the early list.