OpenAI Aims to Address the Chaos It Is Creating

Sam Altman proposes raising capital gains taxes and shortening the workweek to mitigate AI's impact. The irony? The creator of the chaos designs the solution.

Ignacio Silva · April 7, 2026 · 7 min read

On April 6, 2026, OpenAI published a 13-page document titled Industrial Policy for the Intelligence Age: Ideas to Keep People First. In it, the company leading the charge toward superintelligence proposes increasing taxes on capital gains, taxing income generated by automated labor, piloting a four-day workweek without salary cuts, and creating a public wealth fund to ensure all American citizens benefit from economic growth linked to AI. Such proposals are significant, especially considering they come not from a progressive senator or a labor think tank, but from the company that, by its own admission, is developing systems capable of surpassing even the smartest humans—especially when those humans are aided by AI.

The paradox here is structural, not accidental. From a corporate portfolio design perspective, it reveals something more uncomfortable than a political contradiction.

The Unexpected Document from Silicon Valley

What stands out about this document is not merely its content but its author. OpenAI is neither a non-governmental organization nor an academic lab. It is a company that competes directly to capture the very economic value its public policy proposals would seek to redistribute. CEO Sam Altman acknowledges in the document that he spoke with a senior Republican senator who pointed out something rarely heard in that political spectrum: capitalism has always relied on a balance between labor and capital, and AI is quickly and irreversibly disrupting that balance.

That statement isn't just rhetorical; it serves as an operational diagnosis of what happens when a technology displaces the primary income source for the majority while concentrating profitability among those who own the infrastructure that operates it. OpenAI proposes a reconfiguration of the tax system: reduced reliance on payroll taxes—vulnerable to job displacement—and increased burden on corporate profits, capital gains in the higher brackets, and sustained returns driven by automation. Additionally, it suggests incentives for companies to retain and retrain workers, enhance health and retirement benefits, and pilot the four-day workweek linked to productivity gains.

This is no minor proposal. JPMorgan Chase CEO Jamie Dimon has arrived at similar conclusions independently, predicting that AI will reduce the workweek to three and a half days and calling for a system of public and private incentives for worker retraining and early retirement. When leaders from technology and finance—the two most influential sectors in modern capitalism—converge on the same diagnosis, it is worth seriously considering the mechanics behind such proposals.

The Portfolio Strategy Behind Political Philanthropy

Seen from the outside, this may appear as corporate altruism. However, from an organizational design perspective, it seems more calculated: a risk management maneuver for the long-term business portfolio.

OpenAI understands that its current revenue model relies on widespread adoption of its tools by companies and individuals. Yet, that widespread adoption hits a political limit: if job displacement triggers uncontrollable legislative backlash, the outcome could be punitive regulation, tariffs on AI services, or usage restrictions that no company in the sector desires. By proposing redistribution terms, OpenAI seeks to position itself as the reasonable actor defining the parameters of the conversation before others can.

This has a very clear portfolio logic. OpenAI's core business—with its commercial models, enterprise licenses, and APIs—is currently the cash engine that funds the race toward superintelligence. Protecting that engine means avoiding the political context that could choke it. The public policy proposal serves, in that sense, as a regulatory shield for the core income streams: if Altman is already calling for taxes on companies like his, it becomes much harder to accuse him of evading social responsibility.

The issue, however, is that proposing a capital gains tax is easy when your company has yet to generate the profits that such a tax would target. OpenAI remains a company in the midst of massive investment, not a stable cash flow generator. The proposal carries almost no current political cost for its existing shareholders and offers a significant immediate narrative benefit. While this may not necessarily render it hypocritical, it does present an incomplete fiscal architecture.

The Four-Day Workweek as a Portfolio Experiment, not a Labor Concession

One element of the document warrants separate analysis: the pilot proposal for a four-day workweek without salary cuts, linked to the productivity increases generated by AI. On paper, it sounds like a generous labor concession. From a corporate incentive design perspective, it appears entirely different.

If a company adopts AI and its employees manage to produce the equivalent of five days' work in four, then the extra day off costs the employer nothing in terms of output. The cost in fixed salary structure arises only if that productivity doesn't materialize. This is why the document doesn't recommend a universal four-day workweek but proposes it as a pilot conditioned on productivity metrics. It's a validation experiment, not a concession. The company retains the worker, minimizes political friction, and maintains or increases output. If the pilot fails, it discards it; if it succeeds, it scales it up.
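The break-even arithmetic behind that conditional pilot is simple to make explicit. The sketch below is illustrative only; the document specifies no formulas or thresholds, and the function names and figures are hypothetical.

```python
# Back-of-the-envelope check for a productivity-conditioned four-day-week pilot.
# Hypothetical model: weekly output = working days x output per day.

def breakeven_productivity_gain(days_before: int = 5, days_after: int = 4) -> float:
    """Fractional per-day productivity gain needed to keep weekly output flat
    when the workweek shrinks from days_before to days_after."""
    return days_before / days_after - 1.0

def weekly_output(days: int, daily_output: float) -> float:
    """Total output for a week of `days` working days."""
    return days * daily_output

gain = breakeven_productivity_gain()            # 0.25 -> a 25% per-day gain
before = weekly_output(5, 1.0)                  # baseline five-day output
after = weekly_output(4, 1.0 * (1 + gain))      # four days at the higher rate
print(gain, before, after)                      # output is preserved: 5.0 == 5.0
```

If AI tooling delivers at least that 25% per-day gain, the extra day off costs the employer nothing in output; below that threshold, the fixed salary structure becomes a real cost, which is exactly why the document frames the measure as a metrics-gated pilot rather than a universal mandate.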

This is precisely how well-designed internal innovation should be executed: with limited autonomy, its own learning metrics, and without extending operational evaluation criteria to the experiment. The problem is that most companies likely to adopt these recommendations—if they ever become policy—lack the AI infrastructure or analytical capacity to measure that trade-off accurately. For them, the pilot may turn into a cost without measurable returns.

OpenAI's document implicitly assumes that all companies will capture value from AI at the rate OpenAI anticipates. This is an unvalidated assumption at the portfolio level; the market has yet to confirm it.

OpenAI’s Portfolio Has a Legitimacy Problem, Not an Ideas Problem

The proposals in the document are not technically outrageous. Taxing capital instead of labor when labor is being automated makes coherent fiscal sense. Creating public wealth funds from AI returns is an idea various economists have explored for years. Mass retraining of workers is an operational necessity, not merely an ethical one.

However, there is a governance issue in the design of all this. OpenAI is simultaneously the developer of disruptive technology, the author of the diagnosis of the damage it causes, and the proposer of the regulatory remedy. This concentration of roles in a single actor—without independent institutional checks validating the analysis—is precisely the kind of organizational bottleneck that undermines any public policy proposal, regardless of its technical merits.

The Trump administration signed an executive order in December 2025 to reduce state regulation on AI. OpenAI operates within this deregulatory context while publishing a blueprint to regulate itself and its competitors. The bipartisan framing of the document—citing both Republicans and building consensus with financial establishment figures like Dimon—suggests a sophisticated political reading of the moment. Yet no narrative skill resolves the underlying issue: a company cannot simultaneously be the primary beneficiary of a process and the most reliable arbiter of its consequences.

The long-term viability of OpenAI's portfolio depends less on its fiscal proposals and more on whether the market and regulators accept that dual role. So far, there are no clear signs that they will.
