The Fight Over AI Isn’t About Income. It’s About Access.
Universal Basic Income is a solution to the wrong problem
The conversation about AI and the economy has settled into a comfortable consensus: as machines take over more work, we’ll need Universal Basic Income to keep people afloat. It sounds humane. It’s also asking the wrong question.
UBI asks how we redistribute wealth in a world where people become economically irrelevant. The real question is how we ensure people remain economically capable—with purpose, agency, and the tools to create value. The future depends not on guaranteed income but on guaranteed access.
This isn’t an argument against UBI. It’s an argument about sequencing. Get access wrong, and UBI becomes a mechanism of dependence rather than dignity. Get access right, and UBI becomes what its proponents intend: a floor, not a ceiling.
Consider three possible futures.
In the first, an authoritarian state leverages AI to consolidate power. This isn’t hypothetical—it’s the explicit strategy of at least one major power today. The state no longer needs a productive citizenry, just a compliant one. Basic income becomes a mechanism of control: enough money to survive, not enough to matter. People are kept fed and passive, shut out of power and out of the way.
In the second, a handful of corporations capture AI capability. They own the models, the compute, the talent. They build every product worth building. The rest of us watch as the economy becomes a lottery—a few winners rewarded spectacularly, most of the world hovering just above poverty. Basic income, again, keeps the peace. It’s hush money for the displaced.
In the third future, AI capability is distributed. Every person has access to intelligent tools that amplify what they can do. People solve local problems, start small businesses, create value in ways that matter to their communities. It looks less like science fiction and more like older societies—the cobbler, the baker, the local problem-solver—but with AI as a force multiplier rather than a replacement.
Only in that third future does human purpose survive at scale. And notably, access policy—not income policy—is what gets us there.
* * *
What separates these futures isn’t ideology. It’s architecture. Who controls the compute? Who can access the models? Who decides the terms of use?
Right now, we’re headed toward the second future. Training frontier AI models costs hundreds of millions of dollars. The chips are manufactured by a handful of fabs. Cloud infrastructure is dominated by three or four companies. Even well-intentioned efforts to democratize AI run into the same wall: the economics push toward concentration.
And there’s a compounding effect. The companies with the best models attract the most users, generate the most revenue, and fund the next generation of training runs. It’s a flywheel that widens the gap with every turn.
This doesn’t require conspiracy. It’s just ordinary market logic doing what it does.
* * *
The goal is a PC moment for AI—capability moving from institutions to individuals, from something you access through gatekeepers to something you own and control. That transition won’t happen by default. The economics of AI push toward concentration. Policy has to push back.
The PC didn’t democratize computing by accident. It took cheap hardware, yes—but also standards that let anyone build on a common platform, interoperability between vendors, and tools that made ordinary people productive. The mainframe didn’t die of natural causes. It was outcompeted by an ecosystem that was deliberately, and sometimes accidentally, kept open.
AI needs the same architecture. Here’s what that means in practice.
What Policy Has To Do
First, invest in public compute infrastructure. The National Science Foundation should fund a civilian equivalent of what national laboratories provide for physics research: shared, subsidized access to GPU clusters for researchers, startups, and individuals working on AI applications. Call it a National AI Research Cloud. The goal isn’t to compete with frontier labs but to ensure a floor of capability that anyone can build on. The cost would be a rounding error in the defense budget—and a distributed AI ecosystem is a strategic national security asset in its own right.
Second, protect the right to run open models. As AI becomes more capable, there will be pressure to restrict which models can be deployed locally. Some restrictions will be legitimate; others will be anticompetitive rent-seeking dressed up as safety. Policymakers should establish a presumptive right to run open-weights models on personal hardware, with narrow, clearly defined exceptions for genuine national security concerns. The burden of proof should be on those who want to restrict, not on those who want to use.
Third, enforce interoperability and data portability. The companies that control AI platforms will increasingly control the ecosystems built on top of them. Antitrust enforcement should focus not just on market share but on lock-in: the ability of users to move their data, their applications, and their workflows between providers. The goal is to prevent any single company from becoming the gatekeeper to AI capability.
Fourth, index education to the pace of change. Access means nothing without the skills to use it. Community colleges, vocational programs, and public libraries should receive dedicated funding to teach AI literacy—not just how to use chatbots, but how to build with AI tools, how to evaluate their outputs, and how to integrate them into productive work. This is infrastructure investment, not social spending.
* * *
Someone will raise the national security objection: distributed AI capability is dangerous. The same tools that let a small business owner optimize logistics let a bad actor generate disinformation. Concentration enables oversight.
The argument has it backwards. The risk isn’t distributed capability—it’s distributed incapability. A nation whose citizens can’t create value, can’t solve problems, can’t participate meaningfully in the economy isn’t a nation. It’s a territory with a flag. That’s the actual threat to sovereignty, and no amount of centralized AI control will fix it.
Look at the first future again—the authoritarian one. What distinguishes it from a democracy where capability has concentrated in a few corporate hands and everyone else lives on a stipend? Elections where nothing is at stake? A different anthem? The national security hawks worry about what citizens might do with powerful tools. I worry about what citizens become without them.
In cybersecurity, my field, we think constantly about access—who gets credentialed, who gets excluded, how systems are designed to include or control. I work inside an institution that will likely be on the winning side of a concentrated AI future. I’m arguing for the third outcome anyway, because I’ve seen what access control means in practice.
Jefferson didn’t write about the right to comfort. He wrote about the pursuit of happiness—an active word, not a passive one. UBI offers survival, and survival matters. But it says nothing about purpose, dignity, or participation in something larger than yourself.
The fight over AI isn’t about how we redistribute the gains. It’s about how we distribute the capability. Get that wrong, and no amount of monthly checks will fill the void.