Is ChatGPT Artificial General Intelligence?
- Why do we care whether ChatGPT is AGI or not?
- Is ChatGPT an AGI? The counterarguments
- Is ChatGPT an AGI? The arguments in favor
- 1. Is self-consciousness an important criterion for AGI?
- 2. Autonomy
- 3. Ability to accomplish general tasks
- 4. Ability to perfectly mimic human intelligence
- 5. Ability to perfectly mimic various aspects of human cognition (intelligence but also emotion, creativity, subjective viewpoint, etc)
- 6. Causing a direct existential threat to humanity if misaligned
- About me
Is ChatGPT a form of “Artificial General Intelligence” (AGI)? I see many around me answering with a definitive “No”. I argue that yes, it is. Keep reading.
In the following, I use “ChatGPT” and “LLM” interchangeably, because ChatGPT has become a household name for Large Language Models (LLMs).
Why do we care whether ChatGPT is AGI or not?
This is not mere semantics or hair-splitting. If ChatGPT is NOT an AGI, then we still have some years before we have to deal with the consequences.
If ChatGPT is indeed an AGI, it implies that we already co-exist with an AGI, so waiting for a hypothetical day when it would arise is pleonastic. “When is the singularity happening?” would be a misplaced question, because it would have happened already, without a bang: the emergence of AGI would have taken place in a series of incremental steps over a few years. If so, we should acknowledge the fact that some early forms of AGI have emerged, and act accordingly: how do we humans, as individuals, organizations and societies, coexist with an artificial intelligence which is “human-like”?
So the question is indeed not trivial.
Is ChatGPT an AGI? The counterarguments
The consensus is that AGI will arise in years, decades, or never. “Today” is not an option:
[Figure: survey of expert predictions for when AGI will arrive; source: emerji.com]
Considering that AGI might already exist is deemed naive. Several counterarguments come to mind:
- large language models (LLMs) are “merely spitting out text from powerful statistical associations”.
- LLMs are not autonomous; they merely answer prompts entered by a human.
- they are not self-conscious.
- proof of their non-AGI character lies in the mistakes they make: hallucinations, hands with 8 fingers, lack of factuality, etc.
Is ChatGPT an AGI? The arguments in favor
What is an AGI? The definition is muddy. Drawing from many different sources, I’d list these criteria:
- self-conscious
- autonomous
- able to accomplish general tasks
- able to perfectly mimic human intelligence
- able to perfectly mimic various aspects of human cognition (intelligence, emotion, creativity, subjective viewpoint, etc)
- causing a direct existential threat to humanity if misaligned
I would argue that:
- Large Language Models meet criteria 3, 4 and 5.
- Large Language Models do not yet meet criterion 2, but the current state of technology allows for it.
- Criterion 1 is not relevant.
- Criterion 6 is a possible consequence, not part of the definition of an AGI.
Let’s review these claims in turn:
1. Is self-consciousness an important criterion for AGI?
Should we wait for a clear sign that LLMs are self-conscious before we declare an AI to be AGI? I would say, absolutely not. There are many reasons; the one I prefer is that since Freud, we have accepted that humans are themselves far from being fully self-conscious. Cognitive psychology has added piles and piles of evidence that we humans are conscious of ourselves only in some capacity, not in an absolute sense. In my view, it follows that requiring an AI to be self-conscious before declaring it “AGI” is nonsensical: we don’t have a yardstick to measure self-consciousness against. It merely needs to mimic self-consciousness as a human would experience it. And as we see from current LLMs, they are pretty impressive at mimicking a human’s thought process.
2. Autonomy
Granted, the current versions of LLMs are not autonomous. They only function when prompted by a human request, and they stop functioning once this request has been answered. But… automating an infinite loop of prompts and answers is pretty trivial to do. In fact, tens of thousands of software developers are currently working, in public, on creating autonomous forms of LLMs.
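To make this concrete, here is a minimal sketch of such a loop, in which the model’s own answer is fed back as the next prompt so that no human sits in the cycle. It assumes the OpenAI Python client and an API key in the environment; the goal text, model name and five-step cutoff are illustrative choices of mine, not a prescribed recipe.

```python
# A minimal sketch of an "autonomous" LLM: the model's answer is fed back
# as the next prompt, so it keeps working without a human in the loop.
# Assumes: openai Python client >= 1.0, OPENAI_API_KEY set in the environment.
# The goal text, model name and 5-step cutoff are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

goal = "Draft, critique and refine a one-week study plan for learning Spanish."
messages = [
    {"role": "system",
     "content": "You are an autonomous agent. At each step, critique your previous output and improve it."},
    {"role": "user", "content": goal},
]

for step in range(5):  # a real agent would run until a stopping condition is met
    reply = client.chat.completions.create(model="gpt-4", messages=messages)
    answer = reply.choices[0].message.content
    print(f"--- step {step} ---\n{answer}\n")
    # Close the loop: the model's own output becomes part of the next prompt,
    # which is exactly what removes the human from the prompt-answer cycle.
    messages.append({"role": "assistant", "content": answer})
    messages.append({"role": "user",
                     "content": "Critique your plan above and produce an improved version."})
```

Open-source projects such as AutoGPT and BabyAGI are, at their core, more elaborate versions of this loop, adding memory, tools and task queues.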
So “autonomy” is probably not a feature of an AGI that would appear at the flip of a switch (Skynet style), but rather the product of an entire community pushing with its everyday tools to achieve it.
Oh, and these developers are well aware that there are risks attached to it. Not existential risks (see point 6 below), but risks nonetheless.
3. Ability to accomplish general tasks
I’ll pass quickly on this one. LLMs are good at accomplishing general tasks; there is a consensus on that, and every day brings more developments on the topic.
The latest objection I heard is “general tasks, yes, but just on text”. This is not a solid objection, as LLMs can handle images as well, and have been multimodal since September 25, 2023:
ChatGPT can now see, hear, and speak. Rolling out over next two weeks, Plus users will be able to have voice conversations with ChatGPT (iOS & Android) and to include images in conversations (all platforms). https://t.co/uNZjgbR5Bm pic.twitter.com/paG0hMshXb
— OpenAI (@OpenAI) September 25, 2023
It is still software and screens, not actual operations and actions in the “real world”. True, the development of non-digital types of operations is always slower (robots…), but I would count this as a limitation in “reach”, not as a sign of a lack of intelligence.
4. Ability to perfectly mimic human intelligence
This one can also be addressed quickly. Do you remember the Turing test? It consists of a human interrogator conversing with both a human and a computer, with the two hidden from the interrogator by a curtain. After a long conversation, if the human interrogator cannot guess which of the two dialogues was with the computer, then the computer has passed the Turing test.
This test was, as far as I can remember, presented as the true and super-hard criterion for an AI to (ever) meet.
Guess what: ChatGPT broke the Turing test (and anyone who has used ChatGPT is probably not surprised). The funny thing is that the Wikipedia page defining AGI still lists the Turing test as the kind of criterion an AGI should pass, all while maintaining that AGI is still a hypothetical, non-existing scenario.
5. Ability to perfectly mimic various aspects of human cognition (intelligence but also emotion, creativity, subjective viewpoint, etc)
- Humans have ChatGPT-powered lovers.
- AI-generated answers to patients are rated as more empathetic than those written by “real” human responders.
- Images generated by AI win photography contests (and photographs by humans can get disqualified because they are mistaken for AI-generated products! 🤦).
- ChatGPT launches boom in AI-written e-books on Amazon
6. Causing a direct existential threat to humanity if misaligned
The discussion of this point is important. In my view, points 2 to 5 are the most valid ones for defining what is and what is not an AGI. I hope I have convinced you that current LLMs (as of 2023) meet these criteria.
Point 6, about the existential threat, is one of the possible consequences of an AGI, yet it tends to be taken as part of its definition. I would be tempted to blame the Terminator movies for that: they imprinted in us the idea of a “switch” that would “awaken” an AGI, immediately putting the issue of an existential threat to humanity at the forefront.
Instead, what points 2 to 5 above indicate is that AGI has already emerged, by all reasonable definitions. LLMs are generalists, mimic human intelligence, mimic human subjectivity, can be made autonomous, and whether they are self-aware is a moot issue. What comes to the fore is not an existential threat to humanity, but more pressing issues such as:
What does it look like to:
- work with an AGI
- learn and teach with an AGI
- find emotional support with an AGI
- love and make friends with AGIs
- develop policies for society with an AGI
- do sports, warfare, literature, leisure, … with AGIs
These questions should not be deferred to a day when AGIs might emerge. They should be opened now.
About me
I am a professor at emlyon business school, where I conduct research in Natural Language Processing and network analysis applied to the social sciences and humanities. I teach about the impact of digital technologies on business and society. I build nocode functions 🔎, a point-and-click web app to explore texts and networks. It is fully open source. Try it and give me some feedback, I would appreciate it!
- my email: analysis@exploreyourdata.com 📧
- or on Twitter: @seinecle 📱
- you can also read the other articles of this blog 👓, where I write about the process of developing the app.