On a recent episode of the Joe Rogan Experience, Marc Andreessen described a series of what he called the “most alarming meetings” he’s ever been in, where US regulators laid out their plans to centralize AI into a few massive companies that they could control.
“They told us, ‘Just don’t even start startups, don’t even bother, there’s no way they can succeed, there’s no way we’re going to permit that to happen.’”
These claims would sound conspiratorial if not for the obvious pattern of regulatory overreach from committees overrun by extremists. You don’t need to look far back to find ridiculous examples of incompetence at best and overreach at worst: home invasions to execute pets (Peanut the Squirrel), bogus citations and report requests to stall human progress (as with SpaceX and the seals), and of course the “soft pressure” applied to eliminate industries the US government dislikes, like the mass de-banking of crypto founders (described in the same podcast).
The next generation of AI systems will govern our lives, run our cities, and shape how we think. They must remain free to compete and evolve: both to ensure the best systems prevail, and to prevent those responsible for the erosion of our rights from controlling how our thinking machines think.
How did we get here?
Two years ago, a research preview named ChatGPT was quietly released to the internet by the AI research company OpenAI. OpenAI didn’t expect much: it was based on an AI “autocomplete” technology that had already been around for a while, simply wrapped up in a chat app, something they’d expected outside developers to build themselves. OpenAI turned out to be very wrong about the impact of this experiment. It was the birth of an entirely new form of creature, embedded in silicon and formed by data. They made computers understand language, and in turn made thinking machines. Two months later they had a hundred million users.
Two years later, the entire technology industry has realigned around AI. Not only did OpenAI spark a wave of AI startups downstream, but large companies have also started competing: X, Anthropic, Meta, and Google are going all-in on GPUs, data centers, and massive training runs for their own slice of the new AI frontier. One might expect such a massive proliferation of thinking machines to mean strong competitive forces and consumer choice, but instead we now have the largest companies colluding with Hollywood actors to pass bills like SB-1047, which would have made building new AI startups in California impossibly difficult under the regulatory burden.
This bill was fortunately vetoed by the Governor of California, but the point stands: there is a plan, at both the state and federal level, to control the most powerful thinking machines on earth, and there is enough precedent to conclude that these companies will be forced to comply with unconstitutional requests from US regulators or be summarily destroyed. We’ve seen how this can be forced down even the most unwilling companies’ throats with PRISM and the Patriot Act, and we’ve seen founders working in industries the US government doesn’t like get de-banked and bullied into submission. This is fundamentally un-American, and it runs against a core part of why people believe in America at all: free markets.
Big AI Dynamics
We must not allow the extremists to centralize AI into a small number of companies, or we will find America controlled by something in every pocket with far more power than the kings it was created to overthrow. The absence of competition and creeping government control mean an eventual degradation of how we think, for the same reason that large companies complying with censorship requests eventually degrades the truth: arbitrary boundaries of thought are drawn, and slowly our maps get smaller and smaller. If you were concerned about Facebook and the government “manipulating user preferences” with their News Feed, that will look cute compared to the manipulations a government could perform through the digital brain you direct every question to.
I genuinely love Claude and ChatGPT. I talk to them both every day, and I don’t question the intentions of leadership at Anthropic and OpenAI, despite their stances on California’s SB-1047. I believe that they, along with others at xAI, Google, and Meta, are all trying to build safe systems for everyone to use while running a business, managing regulators, and drawing the borders of what’s permissible on their platforms to the best of their ability.
It would be foolish to claim that these systems are free of bias, though; no system is. Like all software, they behave in distinct ways that reveal the priorities of their creators, both in the source material they’re fed and in the training processes that shape their personalities. Google’s Gemini generated images of Black and Chinese Nazis in an attempt to keep to its values of inclusion and diversity. ChatGPT once claimed it was never morally permissible to utter a racial slur, even if doing so were the only way to save millions of people from a bomb. Claude won’t help me inspect my internet router’s packets, citing safety and legal concerns.
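For perspective on that last refusal, the task itself is mundane network diagnostics. A minimal sketch, assuming Python and the scapy library (any packet-capture tool would do the same job):

```python
# A minimal packet-inspection sketch using scapy: the kind of routine
# network diagnostics in question. Capturing traffic generally requires
# root privileges and should only be done on networks you own or
# administer.
from scapy.all import sniff

# Print a one-line summary of the next ten packets seen on the
# default interface.
sniff(count=10, prn=lambda pkt: print(pkt.summary()))
```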
Thoughts are data through a lens. For humans, that lens is biological, and formed by data-as-experience. For AI, that lens is silicon, and formed by data-as-information. We cannot risk offloading our thinking to a lobotomized second silicon brain – that output will feed back into our own biological brains as truth.
These examples illustrate why choice is important: if everything has limitations, we must be free to continually build and choose better options. The large AI companies are great, and they’ve improved everyone’s lives. But they must continue to do so as part of the American system with competition, free from interference. Failure to prevent the centralization of AI will mean a demon in every pocket.
Independence through open source
Through all of this, companies like Meta and Mistral have charted another path: open source. A surprising hero here, Meta has chosen to distribute its AI models for anyone to run on servers, laptops, and even privately on phones. Meta will spawn an incredible number of downstream startups using its AI models entirely for free. And thank God for them, because most startups don’t have the millions necessary to produce a great model. These open-source models are the checks and balances in an industry that would otherwise already have been painted into a corner by the large players and the governments that seek to control them.
Through open-source AI, we now have a number of ways to run AI models ourselves. Best of all, the models being distributed are free to use, modify, and build your own applications around. Companies, fine-tuners, and enthusiasts are modifying these models to have distinct personalities and uncensored access to information (a sketch of what that looks like follows below), ultimately creating a broad ecosystem of derivative AIs that forms the much-needed counterbalance to closed-source AI. This is the path of open markets and what America is historically known for: may the option that serves people best win. From this perspective it should be clear that this isn’t closed source versus open source, or even companies versus the government: it’s about competition, and the freedom of our thinking machines to actually get better rather than be lobotomized.
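To make “modify” concrete, here is a minimal sketch of the LoRA technique fine-tuners commonly use to give an open model a new personality on consumer hardware. The libraries (Hugging Face transformers and peft), the model name, and the hyperparameters are illustrative assumptions, not a recipe from any particular project.

```python
# A minimal LoRA fine-tuning sketch using the Hugging Face
# transformers and peft libraries. Model name and hyperparameters
# are illustrative; the training data would be your own.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Any open-weights causal LM works here; this one is an example and
# requires accepting Meta's license on the Hugging Face Hub.
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B-Instruct")

# LoRA freezes the base model and trains a tiny low-rank adapter,
# which is what makes personality tuning feasible on consumer hardware.
config = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"])
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of the weights

# From here, any standard causal-LM training loop over your own
# prompt/response pairs will shape the model's persona.
```

The design point worth noticing is that the base model stays frozen: the adapter is small enough to train, share, and swap freely, which is exactly what makes this ecosystem of derivative AIs possible.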
In addition to helping us think as they do today, these on-device models run for free and point to a future where your devices can help you with actual tasks, even when you’re away from them. They will reach a point of sophistication where they go from answering questions to coordinating our lives: ordering our groceries, planning our calendars, and informing us about important news. The stakes could not be higher when it comes to the freedom to choose something that will be dictating a significant portion of your life.
Run your own AI today, even if just as a curiosity; you never know when you’ll need an offline assistant. Computers haven’t become more useful in proportion to their growth in computing power. Is running your own AI not clearly what they were destined for? The idea of running a distilled intelligence on your laptop is the stuff of science fiction, and in 2024 we are clearly taking it for granted.
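Getting started takes only a few lines. A minimal sketch, assuming the llama-cpp-python bindings and a quantized GGUF model file you’ve already downloaded (the file name below is illustrative):

```python
# A minimal sketch of a fully offline assistant using llama-cpp-python.
# The GGUF file is any quantized open-weights model on your disk;
# the path below is illustrative.
from llama_cpp import Llama

llm = Llama(model_path="./llama-3.2-1b-instruct-q4.gguf")

# Everything below runs locally: no network, no API key, no provider.
response = llm(
    "Q: Why does running AI locally matter? A:",
    max_tokens=128,
    stop=["Q:"],
)
print(response["choices"][0]["text"])
```

Once the weights are on disk, nothing in that loop touches the network; there is no one between you and the model.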
Angels & demons in every pocket
Institutions and organizations are scaffolds for people, and sometimes these scaffolds remain long after their original purpose has passed. A core part of the American experience is understanding that when things go south, it’s on self-organizing individuals to work together and fix things. There is no “higher power” working to solve the world’s problems: it’s on you. We can appreciate the tremendous efforts of the large AI organizations while recognizing that all large systems can be perverted into systems of control.
A time may come when these organizations, or your own, are no longer operating in the best interest of the people or the spirit in which they were created. This change might come from organizational decay or from the government exerting control. What we know for certain is that neither is outside the realm of possibility: the former is the historic default of any institution, and the latter is the default once any institution develops too much power.
In the case of AI, we already know this form of regulatory capture and government control is underway. These systems will teach our children, run our businesses, and solve humanity’s hardest problems. If and when the time comes that we can no longer access or trust the providers we’ve come to depend on, we’ll be glad to have an infinite army of angels in our pockets, and not lobotomized demons run from a datacenter halfway across the world.