Defining "AGI"

[Image: The Triangle Shirtwaist Factory fire, seen from the intersection of Greene Street and Washington Place, March 25, 1911.]

Hello again everyone. It's a bright new week, which means it's time for some more Arachne. This week, I want to provide some context and some working definitions around artificial general intelligence, or "AGI." What it is, what it isn't, and why getting to define the term is so important.

So what is it?

AGI is generally understood to mean a system of artificial intelligence that matches or surpasses the capabilities of humans. It is the ultimate goal of OpenAI, the company behind ChatGPT, which describes it in its charter as "highly autonomous systems that outperform humans at most economically valuable work."

While you may be impressed with modern consumer LLMs like Claude, Gemini, and ChatGPT, these existing models are not particularly close to the accepted industry definition of AGI. They are not autonomous, they lack true understanding and true learning, and they are restricted in a number of ways. This is broad strokes-y, but these systems are basically very fancy word generators built on an extremely large set of parameters. If you ask one of these chatbots what the capital of France is, they do not know the answer in the way that human brains know it. They derive it from an intricately trained set of linguistic probabilities, returning the most probable continuation. This is why these systems and AI image generators are not particularly good at novel problems: they were not trained on every novel problem a person could have.
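As a toy illustration of that last step (these numbers are invented, and this is a sketch of the idea, not how any real model is implemented), you can think of the model's "answer" as just the highest-probability continuation from a learned distribution:

```python
# Toy sketch: an LLM's final step is choosing from a learned probability
# distribution over possible next words. The numbers below are made up
# for illustration; a real model scores tens of thousands of tokens
# using billions of learned parameters.
next_word_probs = {
    "Paris": 0.92,   # overwhelmingly likely, given the training data
    "Lyon": 0.03,
    "France": 0.02,
    "banana": 0.0001,
}

def most_probable(probs: dict[str, float]) -> str:
    # "Answering" here is just returning the highest-probability
    # continuation. There is no lookup into a store of facts about
    # the world, no concept of what a capital city is.
    return max(probs, key=probs.get)

print(most_probable(next_word_probs))  # → Paris
```

The model gets "Paris" right not because it knows anything about France, but because "Paris" overwhelmingly followed phrases like "the capital of France is" in its training data.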

AGI is the next step above this, the one where the system can approach novel problems (ones not in its enormous training data) and solve them as a person might. Whereas current AI is used to listen in on work meetings and provide summaries and meaningful insights after the fact, an AGI would be an additional contributor to the conversation, one that is making decisions, has opinions, and can execute on them.

That sounds horrifying

Yeah, I mean, I don't love it philosophically. The basic idea is not just "what if we had a computer human?" but also "what if we had an entity equal to or greater than the combined intelligence and problem solving capability of the world's smartest humans?"

How could we ensure it wouldn't "go rogue"? What about the biases in its training data? We could go on forever with questions.

AGI and the broader field of so-called "agentic" AI have a pretty explicit purpose. These companies want to create a system that can replace human labor, hence the part in the OpenAI charter about "economically valuable" work. Consider for a moment the cynicism of this. OpenAI believes it can build a highly advanced, autonomous super-intelligence, and in the first ten words of their definition they specifically call out "economically valuable work." Not work that promotes greater human unity against climate change, or aids in resolving violent conflict, or cures Alzheimer's, and so on.

To be fair, OpenAI does think an AGI could do these things. But their framing of the definition suggests that an AGI would pursue these goals because of the ways in which they are economically valuable.

There are a million reasons people are born and live, and there are infinite ways human intelligence can be deployed; I would argue that very, very few of them exist because humans are "economically valuable." We are not created to make dollars, we are not loved for our productivity, and our genius is not represented by the bottom line. Our genius is in the ways we improve the lives of people around us and promise the world to those who have yet to arrive. Economics can help us solve problems; it is not the problem we need to solve.

But there is something else going on

Since 2019, OpenAI has received funding, computing support, and other resources from Microsoft. This partnership yielded incredible results for OpenAI. They were able to release ChatGPT in December of 2022 and basically overnight became a household name, not to mention the billions and billions of dollars in funding they have received from Microsoft and others. OpenAI likely could not have become what it is today without the support of Microsoft's Azure computing platform, and Microsoft went out of its way to prioritize the data crunching that training advanced LLMs requires.

Now, though, there is increasing tension between the two companies. Last fall, The New York Times reported on breakdowns in the personal and professional relationships between the two companies. On Microsoft's side, they want to see some financial return on their investment. On OpenAI's side, they feel they aren't getting enough support to achieve AGI.

But there's a fun little wrinkle. In the terms of Microsoft and OpenAI's partnership agreement, OpenAI has an out if they achieve AGI. But the definition of what this means is not abundantly clear in their agreement, and over the last few months Sam Altman has gone on tour to move the goalposts on what AGI really is.

[Linked article: "Sam Altman lowers the bar for AGI" — OpenAI used to say that artificial general intelligence would change everything. Not anymore.]

Simply, he's doing this because he wants out of that Microsoft deal, he wants to be the first to AGI, and he thinks doing so would make his company piles and piles of money as other companies begin to "hire" his computer intelligence.

So where does this leave us?

Well, for one, I would take any assertion that someone had achieved AGI with a massive grain of salt. There are incentives beyond "this would be sick, bro" that might prompt Altman to say he's got AGI when what he's got is just a really good AI. Namely, the deal he's trying to get out of, but also the pitch he's making to other companies that they could, uh, stop hiring pesky humans and start employing his economically valuable machine.

For two, even if OpenAI does get to something that can be reasonably declared to be AGI, our biggest problem will not be the AGI itself, but the downstream effects of replacing or supplanting that much human labor. During the Industrial Revolution, as more manual tasks were automated and government worker safety and welfare programs remained essentially nonexistent, labor became extraordinarily cheap. Your factory worker mangled her arm in the machinery of your textile loom? Well, there's a woman waiting right outside for the honor of the pennies she'll get to clean up the blood and take that seat. In a world with true AGI, it is desk jobs and so-called "knowledge" work that will be replaced. In the United States, this class of people relies on their jobs for retirement funding and healthcare benefits. In turn, our federal government relies on these workers for the 89% of federal tax revenue that comes from income tax and Social Security/Medicare taxes.

What happens when these workers lose their jobs en masse? Those people who once got basic needs covered by private industry will now need to turn to the government for support. But if the government loses the tax revenue for the programs it uses to support these people, how will it pay for a rising tide of people who once contributed to those funds, but now need the government to "pay" them? The greatest negative effect on humans from AGI is not a science fiction fantasy of enslavement; it is an exacerbation of our existing inequities and our profound mismanagement of global wealth.

Following the Industrial Revolution, a series of dramatic reforms were made to protect human workers. OSHA's predecessor, the Bureau of Labor Standards, was formed in 1934. The National Labor Relations Act (the Wagner Act) was signed by FDR in 1935, federally enshrining workers' rights to organize. And of course there was the creation of Social Security as part of the New Deal, followed decades later by Medicare. What will our 21st century version of this look like?