Lately I’ve found myself getting very annoyed when I encounter yet another creative person who is confidently – but almost entirely incorrectly – explaining the difference between “Artificial Intelligence” and “Real Intelligence”. The most egregious among these wrongsplanations assert that AI systems are no more intelligent than your word processor or your spell checker.
In one sense, I get it. Software is a highly technical field, one that is not only difficult to fully understand but frequently insidious in its operation. It is also under constant threat from folks who would create snake oil out of the latest buzzwords in order to get rich quick.
But the ultimate outcome is that there is a lot of “The Moon Landing was faked” energy in the air right now. Folks who see their livelihood put at risk by these advances, yet who also have deep knowledge of these systems and a willingness to acknowledge the profound progress made in the last decade, are few and far between.
I admit I am coming into this essay with my own biases, but, as ever, I believe my primary bias emerges from the annoyance I mentioned. And that annoyance mostly makes me want to attempt to correct some misinformation that I have seen presented as fact.
Agents, Software, and Intelligence
Before I get too deep into the actual technology, I want to briefly talk about how software intelligence is organized. Software developers, particularly those focused on “intelligent” systems, very often use the generic term “agent” to describe a single cohesive subprogram whose role is to make “choices” that guide the behaviour of the system in which the agent is embedded. An agent might have very simple duties, such as monitoring a specific signal – this is common in industrial safety applications – or it might have broad domain within its system and guide most of the happenings therein.
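To make that concrete, here is a deliberately tiny sketch (in Python, with names I have invented purely for illustration) of the “monitor a specific signal” kind of agent. Real industrial agents are far more involved; the point is only the shape: observe something, make a choice, and hand that choice back to the surrounding system.

```python
from dataclasses import dataclass


@dataclass
class Reading:
    """A single observation the surrounding system hands to the agent."""
    sensor_id: str
    value: float


class ThresholdAgent:
    """A deliberately tiny 'agent': it watches one signal and makes one choice."""

    def __init__(self, sensor_id: str, limit: float):
        self.sensor_id = sensor_id
        self.limit = limit

    def observe(self, reading: Reading) -> str:
        """Return the action this agent chooses for the embedding system to carry out."""
        if reading.sensor_id != self.sensor_id:
            return "ignore"
        if reading.value > self.limit:
            return "raise_alarm"
        return "continue"


# The surrounding system consults the agent and acts on its choice.
agent = ThresholdAgent(sensor_id="boiler_temp", limit=350.0)
print(agent.observe(Reading("boiler_temp", 372.5)))  # -> raise_alarm
```

The agent does not run the plant; it only makes a choice, and the system it lives inside decides what to do with that choice.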
Chatbots can be thought of as “naked” agents. They do not live inside another software system in the way a traditional agent does; their entire role is to embody one or more fields of expertise and combine that knowledge with some kind of communication “adapter”, typically a “language model”, which is just a domain-specific term for a component that models how words follow one another well enough to construct meaningful sentences.
ChatGPT and other similar systems combine both pieces of this into a Large Language Model, and it would be very easy to imagine that this means these systems are not particularly bright. After all, if a chatbot already uses a language model, making that language model bigger and baking domain expertise right into it doesn’t, from a distance, sound like a big step.
A Brief History of AI: Artificial Intelligence, Machine Learning, Deep Learning, LLMs, and S-Tier AI
You’ll sometimes hear folks invoke terms like “AGI” or “ASI” (artificial general/super intelligence, respectively) as distinct from AI. This is a category error (or possibly a reverse category error, if you want to pick nits). Artificial Intelligence is a broad umbrella, and captures any intelligence that is artificial (ie that we have constructed). Thus, AGI and ASI both live firmly under the umbrella of AI; there is no separation between them and AI, only a difference in scope. AI is simply the much broader term.
Nonetheless, you will hear people use AGI/ASI to suggest the idea of “true” intelligence as distinct from “fake” intelligence. I think of this as the S-Tier AI Theory, and I don’t believe it has any real validity, any more than I believe there is any unambiguous dividing line between human intelligence and the intelligence possessed by other animals (and, increasingly, plants).
It’s not that folks who believe in the S-Tier AI Theory are dumb or even ignorant; at least as far back as the 1960s, Marvin Minsky and his contemporaries believed that some of the emergent tools of that time – early natural-language programs, neural networks, and so on – were already on the verge of demonstrating some aspects of general intelligence. It is clear, in retrospect, that this was an overly generous interpretation of the state of the art, but it was overly generous only in the same sense that fusion’s proponents are overly generous – the technology is tantalizingly close, and has been for a very long time, yet we still have every reason to believe that the core principles are valid and the technology is within our reach as a civilization.
But if you are willing to admit that there are no clear, unambiguous lines which separate human intelligence from animal, you should also be willing to at least consider the idea that there are no such lines separating our intelligence from the artificial variety.
Neural nets are biologically inspired models of computing, for example, and they have demonstrated many of the attributes we value in human intelligence – flexibility, trainability, almost inexplicable insight into problems that are difficult to solve with more straightforward tools. When I was a Comp Sci student in the 90s, we talked about using neural nets primarily for pattern matching, and using other tools for other problems – graph generation and search for navigation, genetic algorithms for adaptive behaviour, expert systems for domain knowledge, and so on.
In the 2000s, however, it became more obvious that Machine Learning was going to become a dominant mode for constructing AI. Machine Learning uses neural networks, among other tools, to learn from examples; in its most familiar form, it classifies data into known categories. This is a natural and yet surprisingly deep extension of the pattern matching I talked about earlier – humans are largely visual creatures, and many of us tend to think of pattern matching in visual terms.
And, indeed, if I extend my brain far enough, I can just barely comprehend the rough correlation between a large set of raw data and a large collection of images. If I tell my machine learning system to evaluate a resume or a mortgage application or what have you, I can imagine it turning that data into a kind of JPEG in memory, and applying its trained network against that image to determine which training category it most resembles.
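If a code sketch helps, the “known categories” flavour of machine learning looks roughly like the following. This is scikit-learn with a handful of made-up rows, nothing like a production underwriting system; the features, labels, and numbers are all invented purely for illustration.

```python
# A toy supervised-learning sketch: the categories ("approve"/"decline") are
# known in advance, and the model learns to map feature vectors onto them.
# The data below is invented for illustration only.
from sklearn.linear_model import LogisticRegression

# Each row: [income in $1000s, debt ratio, years at current job]
X_train = [
    [85, 0.20, 6],
    [42, 0.55, 1],
    [63, 0.30, 4],
    [28, 0.70, 0],
]
y_train = ["approve", "decline", "approve", "decline"]  # the known categories

model = LogisticRegression()
model.fit(X_train, y_train)

print(model.predict([[55, 0.40, 3]]))  # -> ['approve'] or ['decline']
```

The essential point is that the categories come from us; the system only learns where to draw the line between them.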
Deep Learning builds on Machine Learning, but its many-layered networks do not need every category to be identified in advance. Deep Learning systems can instead discover useful features and categories as part of their operation, and classify data according to those categories. This is, to me, already a spooky advance in the state of the art. We still need a human to interpret the meaning of these categories for the rest of society, but only in the sense that two groups of castaways who speak different languages might need to find a common pidgin to interact and cooperate with one another.
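By way of contrast, here is a sketch of the “discover the categories yourself” behaviour. I’m deliberately using plain k-means clustering rather than a deep network (and numpy plus scikit-learn, with invented data), because it shows the idea in a few lines: no labels go in, groupings come out, and a human still has to decide what those groupings mean.

```python
# An unsupervised sketch: no labels are supplied; the algorithm invents its own
# groupings. k-means is a stand-in here -- far simpler than a deep network --
# but it shows the category-discovery idea. The data is invented.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two blobs of points; the algorithm is never told that there are two kinds.
data = np.vstack([
    rng.normal(loc=[0, 0], scale=0.5, size=(50, 2)),
    rng.normal(loc=[5, 5], scale=0.5, size=(50, 2)),
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)
print(kmeans.labels_[:5], kmeans.labels_[-5:])  # cluster ids it made up itself
```

What the clusters “are” is not something the algorithm can tell you; naming them is the pidgin-building step.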
Large Language Models (LLMs) are, at base, specialized deep learning systems. They self-assemble immense amounts of data into an internal model that has two interfaces with the outside world – an input vector and an output vector. When we talk about how much context ChatGPT can handle, we are talking about the size of its input vector (its “context window”, measured in tokens). When we talk about how much information it can provide at a time, we are talking about its output vector.
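Those two interfaces are easier to feel with a toy. The sketch below is emphatically not an LLM – it is a crude bigram lookup table I’ve invented for illustration – but it runs, and it has the same shape of operation: take a bounded window of input tokens, emit one output token, append it, and repeat.

```python
# A crude, self-contained "language model": just a bigram lookup table, but it
# shows the input-window / one-token-at-a-time output loop that LLMs also
# follow (at vastly greater scale and subtlety).
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# "Training": count which token tends to follow which.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)


def generate(prompt: str, max_new_tokens: int = 8, context_window: int = 4) -> str:
    tokens = prompt.split()
    for _ in range(max_new_tokens):
        context = tokens[-context_window:]       # the bounded "input vector"
        candidates = follows.get(context[-1])    # a real model uses the whole window
        if not candidates:
            break
        tokens.append(random.choice(candidates))  # emit one output token
    return " ".join(tokens)


print(generate("the cat"))
```

The gulf between this toy and ChatGPT is enormous, but the loop itself – bounded context in, one token out, repeat – is the same.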
When provided input and sufficient computational resources, an LLM’s internal model can generate output that demonstrates both meaning and insight. These systems can handle an incredibly diverse array of problems, and indeed we’ve seen the same system demonstrate expert-level, arguably even superhuman, performance across a multitude of highly skilled fields of endeavour.
It is intimidating, as someone with a basic understanding of their internal workings, to interact with these systems. It is, in fact, humbling. I’ve been interested in the nature of intelligence most of my adult life, and these systems have given me a stark sense of perspective about certain possibilities regarding that nature, and how we might compare our intelligence against systems like these.
Measuring Artificial Intelligence: The Turing Test
The Turing Test has been reasonably well-known since Alan Turing proposed it, as the “imitation game”, in 1950. It is a subjective test of sentience: an intelligent system – biological, computational, or otherwise – passes when, anonymously intermingled with a number of “known” sentients (ie human beings) in blind conversation, it can convince those known sentients that it is also sentient.
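If it helps to see the structure of the exercise rather than the philosophy, here is a bare-bones sketch. The “participants” are trivial canned stand-ins that I have invented; the point is the blinding: the judge sees only anonymous text and has to render a verdict from that alone.

```python
# A bare-bones sketch of the test's *structure*: hidden participants behind
# anonymous labels, a judge who only sees text, and a verdict at the end.
import random


def human_participant(question: str) -> str:
    return "Honestly, I'd have to sit with that question for a while."


def machine_participant(question: str) -> str:
    return "That is an interesting question; let me consider it carefully."


# Anonymous labels mean the judge cannot tell which is which up front.
hidden = {"A": human_participant, "B": machine_participant}

question = "What does a rainy day feel like to you?"
for label in sorted(hidden):
    print(f"Participant {label}: {hidden[label](question)}")

# The judge's only tool is the conversation itself.
judge_guess = random.choice(list(hidden))  # stand-in for a human judgement
verdict = "machine" if hidden[judge_guess] is machine_participant else "human"
print(f"The judge accuses participant {judge_guess}, who is actually the {verdict}.")
```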
AI tools have passed limited versions of the Turing Test for decades now. It turns out that if you restrict the topics of conversation enough, even very simple software agents can sometimes pass it; ELIZA, the 1960s pattern-matching “therapist”, convinced some of its users that it understood them. And it turns out that the test’s subjective nature means it’s only as good as its judges at distinguishing sentience from system. There have been a variety of attempts to correct for that issue, with varying results.
What we’re seeing in 2022-23, however, is that ChatGPT and its ilk are passing much less restricted versions of the test, to the extent that it is becoming very difficult to distinguish between a “restricted Turing Test” and a “general Turing Test” without also requiring prohibitive restrictions on the nature of the “known sentients” involved in the exercise.
At some point, it starts to feel like excluding AI systems also means we are saying “these people over here are not sentient”; I hope we can all agree that is a line we should never cross. But it does raise the question: what do we know about our own form of intelligence?
Biological Intelligence, Theory of Mind, and Sentience
Biological intelligence gets disappointingly little space in public discussions of AI. The “why” of that fact isn’t important to this article; I simply want to point out that many folks making strident statements about AI fail to acknowledge the following at all.
Biological intelligence, and human intelligence in particular, is on one level very well understood, and on another level very poorly understood. The signaling mechanisms that comprise the main activity of the brain are well characterized. Watch an explanation of how neuroactive drugs work, for example, and you will quickly learn that not only do we understand how signals propagate from neuron to neuron, but we understand incredibly granular details about what happens before, during, and after propagation.
And yet there are huge divides between theorists about what the experience of self is, and what mechanisms truly underlie it. We don’t know whether consciousness, sentience, and sapience are intrinsically quantum phenomena or purely mechanistic ones, for example, and there are plenty of disagreements within those camps. Even without those, we don’t have a good grasp of the implications either possibility might have for selfhood, responsibility, and governance.
Philosophy, meanwhile, has grappled with the idea of the self for hundreds, even thousands, of years, and in its quest has turned many “regular” people away from the idea of finding answers.
The modern study of selfhood and its various permutations is a conversation between Philosophy of Mind and the study of consciousness. Participants in this conversation – physicist Roger Penrose, philosopher Daniel Dennett, and many, many others – have made attempts to bridge the gap between the subjective experience of being a person in the world and the (theoretically) objective physics of being an object in the universe.
It is possible that the nature of consciousness is a problem on par with Gödel’s Incompleteness theorems, and if that’s the case, there can be no universe in which we can both manifest as a mind and also solve the question of how those minds work and what it means to be sentient. Gödel’s construction is a close formal cousin of the old liar paradox, which asks
What is the truth value of the statement “This statement is false”?
(Gödel’s own version, roughly, swaps “false” for “unprovable”.)
Perhaps an analogous question for Theory of Mind is
Is an intelligence sentient if it believes itself to be non-sentient?
Philosophy has struggled with objective reality for centuries now; perhaps it is not surprising, therefore, that it would fare little better when tasked with characterizing subjective reality. We seek an objective test for subjectivity. The questions posed in service to that search would strain anyone’s intellectual capacity.
So when we talk about sentience and general intelligence and difference in kind, it is important to remember that there is a very real sense in which we do not understand the nature of our own minds. It is a trivial step from that admission to the statement that we will not know whether an artificial system is “truly” intelligent until long after systems have begun to cross that dividing line. To me, at least, the “Sparks of AGI” affair suggests that we are already deep in the grey with respect to this question.
For what it’s worth, my own belief is that the combination of learned knowledge and natural communication facilities present in current LLMs is extremely hard – perhaps even impossible – to distinguish from “true” mindhood. We are not talking about Glorified Markov Chains, whatever certain celebrity intelligentsia may tell you. You have to limit yourself to a very restrictive set of beliefs about what an intelligence can be in order to make the case that LLMs and other generative AI systems cannot be intelligent. I don’t feel comfortable placing those restrictions on the discussion, and I don’t believe that everyone who does fully understands the implications of doing so.
Some Thoughts On More Constructive Approaches
The biggest thing that people seem to want to do when they dismiss the idea of LLMs as emergent intelligence is put paid to any notion that the architects of these systems have any right to feed them certain kinds of materials during training, particularly materials under copyright and other kinds of intellectual property protection. For what it’s worth, I think it’s fine to have that discussion. I just don’t think attacking the nature of the software is a useful way to achieve that end.
I see a number of ways to approach this issue more directly, and I’d like to offer them up for consideration.
- Focus directly on the legal structure of copyright. If the problem is that someone’s work is being used in a way that is seen as undesirable, the remedy is to fix the legal structure protecting that person’s work. There are far-reaching consequences in doing so – copyright and contract law in general have been debated and fought over since their inception, and every adjustment has unforeseen effects at some point. It is particularly worthwhile to consider your own position on Transformative Use, Andy Warhol, and whether software engineers can be considered artists. Nonetheless, this is something that’s already happening in some circles, and I have been encouraging folks to get involved in those efforts for quite a while now.
- Focus on the broader economic systems under which abuse is possible. This requires a conversation with yourself about what your goals and acceptable outcomes really are. Do you think that it would be fine to abandon copyright entirely if we also made sure your needs would be met regardless of how you spend your time, for example? Or is the moral violation of rights worse than the economic one? In that case, you may need to determine whether an automatic royalty should apply when a generative system is trained on existing work, or even whether you believe – and this gets into very morally difficult territory – that artificial intelligence should have no right to ingest certain kinds of data under any circumstance. Either way, you’ll likely find organizations that are working towards something very similar to your ideal outcome, and it is worth lending your shoulder to their particular wheel.
- Oppose the technology itself. This one is an uphill and most likely impossible battle, given the number of places in the world where AI research is being conducted and the state of international relations in 2023, but it is, again, something folks are working on – some very aggressively indeed. This position may feel uncomfortable for some – “Luddite” has long been used as a pejorative in some societies – but if it aligns with your feelings on the matter, I believe it’s worth attempting to be honest about that.
There are, no doubt, many other ways to contend with what generative AI promises to do to society. Some of them may be fruitful. It is worth looking around to try to get a better sense of the full spectrum of perspectives on the matter.
My only hope in writing this article is that you will consider the possibility that it is also worth getting the facts right. Doing so should help ensure that any eventual solutions to the challenges posed by AI systems will be built on solid, well-understood footing rather than a deep misunderstanding of what these systems do and what they might mean for us as a society.