CFJ Op-Ed: AI Term Sheets, Big Players, and the Market Process of AI Governance
- Jeffrey Depp
- 4 days ago
- 3 min read
In a new piece published at Truth on the Market, “Too Much Order, Too Soon: The Case Against AI Term Sheets,” Jeffrey Depp takes aim at one of the latest ideas gaining traction in Washington: the push for non-binding “term sheets” to guide the development of artificial intelligence. Proponents present these frameworks as modest, pragmatic steps toward coordination—ways to align expectations, reduce uncertainty, and lay the groundwork for future regulation.
But that framing is misleading. Even preliminary political agreements can shape investment, redirect resources, and influence how firms design and deploy new technologies. In a fast-moving space like AI, that kind of early coordination is not harmless—it is consequential. And when it is driven by political actors rather than market signals, it risks distorting the very process that makes innovation possible.
The Problem with Political “Coordination”
The appeal of AI term sheets rests on the idea that government can help “organize” the market at an early stage, creating a shared baseline for safety and responsibility. But as the article explains, this assumes a level of knowledge and foresight that policymakers simply do not possess.
AI is not a settled industry. Its capabilities, risks, and best practices are still being discovered in real time. Efforts to define “acceptable” approaches to safety today risk freezing the market at its current level of understanding. Worse, they can redirect entrepreneurial energy away from serving users and toward anticipating regulatory preferences—encouraging conformity rather than discovery.
History shows that this kind of top-down coordination often leads to exactly the wrong outcomes: herding behavior, misallocated investment, and regulatory frameworks that entrench incumbents while crowding out better alternatives that have yet to emerge.
How the Market Is Already Governing AI
What makes the push for early political intervention especially unnecessary is that AI is already being disciplined—by the market.
Firms developing AI systems face intense and immediate pressure from users, enterprise customers, and the broader public. Reputational risks are high, switching costs are often low, and demand for reliable, safe, and trustworthy systems is growing rapidly. These forces create strong incentives for companies to build safeguards, adjust model behavior, and respond quickly to failures.
Recent decisions by leading AI firms to impose constraints on their own models before release are not the result of government mandates—they are the result of market discipline. Companies understand that releasing powerful but unreliable or unsafe systems would undermine trust, damage adoption, and threaten long-term viability.
This is how governance works in a dynamic market: not through static rules imposed in advance, but through continuous feedback, adaptation, and competition.
Why “Noisy” Markets Still Outperform Regulators
Critics often respond that market signals are imperfect—that users may not fully understand risks or that firms may not internalize all potential harms. That is true. But it misses the central point.
Markets do not need to be perfect to outperform centralized regulation—especially in environments defined by uncertainty. What matters is that they are adaptive. Market signals, even when “noisy,” create incentives for entrepreneurs to identify problems and develop solutions. Firms that fail to respond lose customers and market share; those that succeed are rewarded and scaled.
By contrast, political frameworks like AI term sheets are inherently static. They attempt to impose order based on incomplete information and then struggle to adjust as conditions change. The result is often not greater safety or efficiency, but rigidity—and a tendency to lock in early mistakes.
Let Discovery, Not Politics, Lead
The core problem with AI term sheets is not that they do nothing. It is that they do the wrong thing at the wrong time.
They create the appearance of coordination while distorting the underlying process of discovery. They encourage firms to align with political expectations rather than user needs. And they risk embedding today’s limited understanding of AI into tomorrow’s regulatory framework.
If we are serious about fostering safe, innovative AI, we should resist the urge to impose “order” before the relevant knowledge exists. The better approach is to allow markets—through competition, feedback, and experimentation—to continue doing what they already do best: discovering what works.