"Intelligence" is not a technical term. Hardness is.

by Keith Adams

AGI and ASI and Paperclips and Safety and ...

I first started crossing paths with the AI eschatology community in 2013, when starting FAIR. Their core idea, once obscure but widely known since ChatGPT's entry into public awareness in 2023, is that AI poses a totally unique threat to human flourishing, inscrutable and implacable. Hydrogen bombs and weaponized pandemic pathogens are mere playthings compared to an artificial superintelligence, which presumably can wield those terrors of previous generations, and new ones of its own devising, as instrumental tools.

For strong advocates of this framing, the menace of AI merits an immediate halt to further work in the field. The most committed among them advocate a transnational, trans-legal Leviathan, mandated to thwart progress in AI by any means necessary. While that necessarily involves some despotism, there are greater evils than despotism; recall, we’re talking about survival of the species here. We would presumably need to keep this supergovernment around long enough for the AI safety research community to complete a yet-to-be-enumerated set of breakthroughs in computing, mathematics, psychology, philosophy, and probably several disciplines that have yet to be founded. Until we’ve hammered all of that out, all we can be truly sure of is the need to generously fund further AI safety research.

So as not to bury the lede: I believe that worries about “strong AI safety” are at best not falsifiable enough to justify extreme measures. At worst they are the product of cynical memetic engineering. Most individuals worried about AI ending human life, or the concept of employment, are propagating a divide-by-zero error through the spreadsheet of their mind. p(doom) or whatever is a type error, not a number. The vast majority of them strike me as sincere. At the same time, enough money, prestige, and attention have been at stake for long enough that it is impossible for the field of AI safety not to have attracted some charlatans. I have no foolproof way to separate the bootleggers from the Baptists, but the sheer number of preacher-hustlers running versions of the centuries-old apocalypse scam is another reason I feel compelled to speak out; the “strong AI safety” ideology harms people, to the benefit of a smaller group of at least partly insincere people. We would do well to close that part of the Overton Window that looks out on it.

Strong AI Safety

Strong AI Safety is a big tent; its adherents include some of the field’s greatest luminaries, Geoff Hinton among them.

Geoff Hinton’s ideas have shaped the future in which we now live, and I oppose his views here with significant trepidation. Nevertheless, I have watched the “strong AI safety” worldview lead smart, well-meaning people to extreme conclusions, and sometimes harmful life decisions, enough times over the last 12 years to merit laying out a clear (and in a few places somewhat harsh) case against it.

The confusion, whether organic or engineered, is downstream of the word “intelligence”, which has no specific technical meaning that would allow the kinds of arguments and thought experiments we should want to have. Let me emphasize this, because it is often lost in the sauce, even for technologists who know it is true upon reflection: “Intelligence” is not a technical term. There is no solid concept in information theory, or mathematics, or computer science, corresponding to “intelligence” that we can use to prove theorems about it, compute bounds on its capabilities, or test for its presence or absence in a system. When we use derived abbreviations like ASI and AGI, we are still riffing on our intuitions about what “smart” means, even when Geoff Hinton does it. We should take it more seriously when Geoff Hinton does it, of course, because he is a lot smarter and more prescient than almost all of us. But not because he knows a hard technical truth about intelligence that we cannot know.

The technical concept we do have in CS that can shed some light on intelligence and its limits is the concept of hardness. While the word “hard” sounds informal, in the context of computational complexity theory it has a concrete meaning: a “hard” problem is one that we have extremely robust theoretical grounds to doubt that any computational entity, no matter how smart, can efficiently solve. I go into greater detail below. If we are serious about erecting obstacles around AI, we should build these obstacles out of such hard problems. This is essentially the engineering approach that cryptography has taken: a sound cryptosystem requires attackers to solve a hard problem to read a ciphertext without access to key material. Serious people are (mostly) not worried that strong AI will obviate cryptography, because computational complexity theory is an incredibly strong mathematical edifice.
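
To make the pattern concrete, here is a minimal sketch of Diffie–Hellman-style key agreement, the classic example of boxing an attacker in with a (believed) hard problem. It assumes nothing beyond the Python standard library, and the prime, the base, and the names are illustrative toys, far too small and too simple for real use:

    # Toy sketch: a "box" built from the discrete logarithm problem.
    # The defenders' work is a few cheap modular exponentiations; the
    # eavesdropper, who sees everything on the wire, faces a problem
    # with no known efficient solution. Toy parameters only.
    import secrets

    p = 2**127 - 1   # a Mersenne prime; real systems use far larger, carefully chosen groups
    g = 3            # illustrative base

    def keypair():
        x = secrets.randbelow(p - 2) + 2   # private exponent
        return x, pow(g, x, p)             # public value g^x mod p (cheap)

    alice_priv, alice_pub = keypair()
    bob_priv, bob_pub = keypair()

    # Each side combines its own private key with the other's public value.
    alice_secret = pow(bob_pub, alice_priv, p)   # (g^y)^x mod p
    bob_secret = pow(alice_pub, bob_priv, p)     # (g^x)^y mod p
    assert alice_secret == bob_secret

    # The eavesdropper holds p, g, g^x, and g^y. Recovering x or y is
    # the discrete log problem; the box does not care how smart the
    # entity on the wiretap is.

The particular construction matters less than the shape of the argument: the defender’s advantage rests on a complexity-theoretic asymmetry, not on out-thinking the attacker.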

Even if we fail to follow cryptography’s example and “box” AIs in with hard problems, hard problems occur often enough in organic settings that AIs will probably find themselves boxed in one way or another, just as we do. The intractability results of computational complexity theory are depressing when we view them as obstacles to humans achieving godlike things with computers; but when we invert the lens and see them as the raw material out of which we can build boxes that no intelligence can escape, they become invigorating AI safety results. Just as humanity is surrounded on all sides by natural problems that are intractable, so are these pesky, nefarious would-be-godlike ASIs. Thankfully, the intractability results don’t care how smart they are.

What do we mean by “intelligence”? Or AI?

If we’re lacking a technical definition of “intelligence”, maybe we should just try to make one? We would not be the first to have the thought. Unfortunately, this effort has a history of failure. When we attempt to assign a fixed technical meaning to “intelligence,” it seems to resist our efforts. The word itself, as used by English speakers, may name a god-of-the-gaps: the cognitive processes we are able to systematize and measure cease to be what we mean by “intelligence” once we have a fixed understanding of them.


Writing in 2025, when almost everything that involves a computing device in some way strains to be seen as “AI”, it can be hard to remember that the opposite was once true.

When I was an undergraduate CS student in the mid-1990s, civilization was about 50 years into its investment in “artificial intelligence” without a single machine behavior that a layperson would call “intelligent.” A psychologist, who (unlike a computer scientist) might have had a working definition of “intelligence” at various points in the century, would have found no need for the word in describing a computer.

So, by 1998, the term “artificial intelligence” was teetering on the edge of academic respectability. Much that was in hindsight undeniably AI went to great lengths to avoid the label. PAC learning was “computational learning theory” before it was “ML”; Q-learning was “optimal control theory” when it was afraid to be “reinforcement learning”; the scalability+stats revolution in search was about document-partitioning and ranking and storage and crawling the web and so on. These were all both downstream and upstream of the larger AI project, but at the time they were careful to shelter themselves in narrower, less ambitious nooks. Even straight-up neural networks sometimes hid behind euphemisms; Hochreiter and Schmidhuber’s original LSTM paper from 1997 only uses the string “neural” to describe other, competing systems that their “novel, efficient, gradient-based method” outperforms.

Nevertheless there was a continuity of unapologetic AI research, and its practitioners were quick to point out that “intelligence” had always been a moving target: in the history of the field, topics like compilers and chess-playing programs counted as “AI” right up until they were well understood.

The opinions of psychologists and laypeople really do deserve some consideration here. While “artificial intelligence” is a technical topic, “intelligence” is not a technical term. When CS people use the word “intelligence”, they are vibing, no matter how precise and mathematically well-defined they are in the rest of their professional lives. Much miscommunication in AI discourse stems from this confusion.

Modifying “intelligence” into TLAs like “ASI” and “AGI” makes them sound much more technical, because we are used to TLAs referring to concrete technical topics. But “intelligence” is not a well-defined mathematical object, so its derivatives, abbreviated or not, are also vibes-based.

A Real Technical Term: Hardness

“Intelligence” sounds like a technical term, but isn’t. “Hard problem,” meanwhile, sounds like a non-technical term, but means something quite specific. And the specific thing it means places limitations on all possible computer programs, intelligent or not.

We get the formal notion of hardness in general (and specifically NP-hardness) from computational complexity theory, a branch of theoretical computer science that tells us how much memory and how many computational steps are required to solve a given problem (as a function of the size of the problem instance). Complexity theory achieves shocking generality through two hugely successful leaps of abstraction:

  1. Computational complexity theory is wholly agnostic about the particulars of the physical machine used to solve the problem; as long as the machine obeys the principles of the computers we currently know how to design, the results of complexity theory hold. (The short numerical sketch after this list gives a feel for why hardware speedups don’t change the picture.)
  2. Computational complexity theory focuses on problems rather than solutions. It classifies a problem according to the best possible algorithm for solving it, no matter how clever the natural or artificial mind that devises the algorithm.
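
On the first point, here is a quick numerical sketch in Python (the step counts are illustrative, assuming one unit of work per step): faster hardware rescales the cost of a computation by a constant, but an exponential running time outruns any constant.

    # Machine-agnosticism in practice: a vastly faster machine rescues
    # a cubic-time algorithm at large n, and an exponential-time
    # algorithm not at all. Step counts are illustrative.
    for n in (10, 20, 40, 80):
        cubic = n**3          # steps for a polynomial (cubic) algorithm
        exponential = 2**n    # steps for an exponential algorithm
        print(f"n={n:>2}  n^3={cubic:>7,}  2^n={exponential:,}")

    # At n=80, 2^n is about 1.2e24 steps. A machine doing 1e12 steps
    # per second would need roughly 38,000 years; no constant-factor
    # speedup changes the shape of that curve.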

There is more than a lifetime of study in computational complexity theory, if you are so inclined. The field was discovered naturally, by early computer people trying to solve problems that seemed straightforward enough, but were unusually resistant to efficient solutions. For example:

  1. Visit every city in an itinerary and return to the starting city using a minimum of vehicle miles.
  2. Determine if some assignment of variables can make a boolean expression true.
  3. Given a collection of items, each with a weight and a value, and a bag that can hold only a fixed total weight, select a set of items of maximum value whose total weight fits in the bag.

These problems are all NP-complete, which means roughly that an answer can be checked efficiently, but that we have no known way to efficiently compute one. That “known” qualification is an angstrom-diameter pinhole through which a glimmer of possibility can shine: discovering an efficient algorithm for an NP-hard problem would settle the P vs. NP question. While that question is open, in the sense that we have no proof that P!=NP yet, all serious people studying it believe that P!=NP. A shocking result to the contrary would, for example, break all existing practical cryptography, and generally destroy the world in which we live.
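
The asymmetry is easy to see concretely in problem 2 above (boolean satisfiability). Here is a minimal sketch in Python, with an illustrative encoding of clauses as lists of signed variable indices: checking a proposed assignment takes one pass over the formula, while the only general method we know for finding one is, in the worst case, a search over all 2^n assignments.

    from itertools import product

    # Conjunctive normal form: each clause is a list of literals, where
    # literal k means "variable |k| is True" if k > 0, and "variable |k|
    # is False" if k < 0. (Encoding is illustrative.)
    formula = [[1, -2], [2, 3], [-1, -3], [-2, -3]]

    def check(assignment, cnf):
        """Verification: one pass over the formula (polynomial time)."""
        return all(
            any((lit > 0) == assignment[abs(lit)] for lit in clause)
            for clause in cnf
        )

    def brute_force(cnf, n_vars):
        """Search: up to 2^n candidate assignments in the worst case."""
        for bits in product([False, True], repeat=n_vars):
            assignment = dict(enumerate(bits, start=1))
            if check(assignment, cnf):
                return assignment
        return None

    print(brute_force(formula, 3))   # {1: False, 2: False, 3: True}

Every known general SAT algorithm, however sophisticated, shares brute_force’s exponential worst case; NP-completeness tells us this is not a failure of imagination on our part but (assuming P!=NP) a property of the problem itself.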

We’ve gotten into the weeds here a bit! My apologies, but then again #sorrynotsorry, as the kids were briefly saying a few years back. Getting into the weeds this way is what happens when we’re having a technical discussion about a technical possibility. The only weeds we can get into after you counter my argument with “yeah but what about artificial superduperduperintelligence??” are the kind you used to have to know a guy to buy.

Hardness and Limits of AI

Arguments about the limits of a much-smarter-than-human machine usually grant it godlike computational powers up front, and skip ahead to the bits-to-atoms gap: can it deceive humans into serving it; can it seize the power grid; can it coerce a grad student into mixing the right reagents; &c. Computational complexity reminds us that there are no gods, even in the pure land of bits. No entity, however clever, manipulates information with the agility these discussions assume. Our natural computational objectives are boxed in on all sides by hard problems; the result reads as depressing from the human side of the wall, and as a quiet gift from the ASI side of it.

I don't claim a closed-form answer that rotates cryptography around a hyperplane into a one-to-one correspondence with AI safety. I do want to drag the standard of discourse closer to "hardness" (a technical term; settled theorems; centered on problems) and farther from "intelligence" (vibes-based; luminaries disagree on what it entails; centered on thought experiments about hypothetical solvers). If we are entertaining extreme measures (nationalizing a vibrant industry; despotic prohibitions on whole branches of computing research; an open-ended and socially expensive hunt for "safety" with no test we'd know how to apply to a candidate solution), we should demand a load-bearing formalism at least as stringent as computational complexity, rather than a war of smart people's vibes.
