Artificial General Intelligence: A Solution in Search of a Problem?

Pt. 1: Introduction

Made with DALL-E

“Successfully transitioning to a world with superintelligence is perhaps the most important—and hopeful, and scary—project in human history”

OpenAI, “Planning for AGI and beyond”

At this point you, your coworkers, your grandparents, and your dentist probably know a good deal about artificial intelligence, and have some opinions about it. The ones I’ve heard tend to skew a little apocalyptic, and more than a few of them remind me of Bitcoin/NFTs’ moments in the spotlight a few years back. I truly have no new wisdom to bring to these discussions – even my recommendations of others’ views are probably out of date by now.

And, honestly, the only stake I have in the public discourse around A.I. is in its contact with the arts. All its other applications – including the ones most often lauded by proponents and feared by doomsayers – baffle me. 

I have nothing to add except my overpowering sense that I’m missing something profound.

In the popular imagination at least, A.I./machine learning/large language models share a “cart before the horse,” “build it and they will come” quality that confuses and irritates me. Industries that appear to have no prior knowledge of, or use for, A.I. are flinging open their doors to welcome it into their core structures. Some A.I.-specialist services and think tanks are hawking these technologies to sectors that, as far as I’m aware, never really asked for them and don’t know what to do with them.

More absurdly, the think tanks promoting the development of A.G.I. (Artificial General Intelligence) are the very ones – often the only ones – waxing hysterical about its civilization-destroying possibilities. Let’s hear from ChatGPT itself:

(I had to cut him off somewhere.)

If we’re going to spend the next decade being lectured by the developers of A.G.I. about the “existential risk” of inviting A.G.I. into our culture, then we need a reasonable level of clarity – or even just a better elevator pitch – on what it’s for. The justifications I’ve heard seem nebulous, defeatist, or just plain wrong. If the technology really is as dangerous as its most enthusiastic champions claim, then it needs a better purpose before it’s fully developed.

By the way, as far as these existential risks go: my contrarian side has grown to consider “the singularity”¹ a boogeyman used to scare tech illiterates like myself. I’m just not convinced that the recursive self-improvement of machines could lead to “runaway intelligence” the way some evangelists claim. The logic seems a little grade-school to me: if machines can learn to solve problems, they can program themselves to solve the problem of solving problems better, and so on, and a few steps later (which are never really explained), they’ll rocket exponentially toward intellectual self-perfection. “What if machines were so smart they could make themselves infinity smart?”

Everything we know about human intelligence in the real world contradicts this – there is always an upper bound beyond which “progress” is attainable only asymptotically. And besides, when we talk about intelligence, we’re putting a name on a cluster of traits found in biological reality that can’t be multiplied as though they were a mathematical variable. We can roughly imagine a poet with an IQ of 100; there is no conceivable analogue in existence for a poet with an IQ of 10,000². To claim that a superintelligent machine would just be smart enough to know how to breach these material barriers and keep going is fine, but it rings about as true as the mind-enhancing pill that Bradley Cooper’s hero takes in Limitless.
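To make my hand-waving a little more concrete, here’s a toy model I sketched myself – every number and update rule in it is my own illustrative assumption, not anyone’s actual forecast. It just contrasts the evangelists’ story, where each generation of machines compounds freely on the last, with mine, where the gains shrink as you approach a hard material ceiling:

```python
# A layman's toy model (all assumptions mine) of two stories about
# recursive self-improvement. "Intelligence" is reduced to a single
# number here -- which is exactly the simplification I'm complaining about.

CAP = 1_000.0   # assumed hard ceiling set by physics, data, energy, etc.
RATE = 0.5      # assumed per-generation improvement rate

def naive_takeoff(i: float) -> float:
    """The evangelists' story: every generation compounds freely."""
    return i * (1 + RATE)

def bounded_takeoff(i: float) -> float:
    """My story: gains shrink near the ceiling (a logistic update)."""
    return i + RATE * i * (1 - i / CAP)

naive = bounded = 1.0
for gen in range(1, 31):
    naive, bounded = naive_takeoff(naive), bounded_takeoff(bounded)
    if gen % 5 == 0:
        print(f"gen {gen:2d}: naive = {naive:>12,.1f}   bounded = {bounded:>8,.1f}")
```

Run it and the naive curve blows past any number you care to name within a few dozen generations, while the bounded curve creeps asymptotically toward the cap and politely stops. The whole disagreement comes down to which update rule you believe describes reality – and nothing in the “solve the problem of solving problems” pitch tells you why it should be the first one.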

People much smarter than I am, who spend careers studying machine learning, appear convinced that there is a whole lot to worry about with A.G.I. My experience is barely surface-deep; right now, I’m more than willing to take their word for it. We should be extremely careful with disruptive technology. Amen.

One way to do this is to make the clearest possible case for A.G.I.’s benefits to a public full of laymen like me. We know very little about A.I.’s honest-to-God purpose in civilization. We hear it applauded for automating work that no one wants to do. Then we see it displace comfortable, high-skilled jobs that were doing just fine. We hear it will free up human creativity. Then we see services like Midjourney and DALL-E 2 hit the market looking very much like they want to displace human artists. We hear things like this…

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

“Statement on AI Risk” – Center for AI Safety

…from the only people who are actually developing A.I. at a societal scale (seriously, look at these signatories).

This series will be written from a layman’s perspective, and with a lot of layman’s snark. I’m concerned about the future of civilization and want to know where these things fit into it – especially if, as we’re constantly reminded, they could conceivably be the very things that end us all, or at least make life substantially more annoying. I’m genuinely curious to hear the best possible justification for pursuing this technology: please comment below with any thoughts, leads, opinions, suggestions, or critiques. I’d love to hear from you.

In the next piece, I’ll address some of the most commonly cited cultural applications for A.I., and give my amateur’s perspective. My colleague ChatGPT has kindly given us a preview of what’s to come:

  1. “The singularity” is a hypothetical point of no return at which artificial intelligence becomes proficient enough to improve itself exponentially, leading to a superhuman or even God-like level of intelligence. Needless to say, I’m extremely skeptical that such a thing is physically possible. I could be wrong.
  2. The 10,000 figure is lifted, and paraphrased, from a Sam Harris podcast episode (Making Sense, “Episode 8: Ask Me Anything 1”). Harris worries a lot about “recursive self-improvement” in intelligent machines, predicting they “could make tens of thousands of years of human intellectual progress in days, or even minutes.” Again, it depends on how you class the intelligence involved, but I’m still extremely skeptical that this is physically possible. The dangers of A.I., in my view, are far more near-term than these science-fiction-esque hypotheses.
