The war on curiosity
Behind the drive to AI lies a profound anxiety about our big messy world.
Here’s a version of something you’ll find all over LinkedIn/Twitter these days:
Reading books is now a waste of time. AI reasoning models can distil insights and tell you exactly how to implement them based on everything they know about you.
(Random person on Twitter)1
What is a book and why do we read them? To this chap a book is a collection of facts, unhelpfully buried in prose. The purpose of reading is to extract those facts, to mine them from all that worthless text and assimilate them into your brain so you can use them. The more efficiently you can do this, the better. Reading and writing are necessary evils in this process; the words themselves are merely spoil.
It’s a pretty reductive world view, but if you’ve read a lot of non-fiction you might be sympathetic, especially the genre where a crusty academic repeats the same argument with slightly different examples for sixty thousand words. Or Substack, where it’s apparently illegal to write fewer than 2,000 words on a topic2. Still, I think this misses something important about the purpose of reading and writing.
Facts don’t live in a void. They have context - where they came from, what they relate to, the arguments built around them. Taking the facts out of a book is like inspecting the skeleton of some extinct beast - without the connective tissue and muscle you miss the whole character of the animal; your understanding is incomplete, fragile.3
Writing is thinking: the process of writing is the process of organising and reviewing thoughts and knowledge, structuring them to tell a story or make a point or convey a mood. Generative AI can be a fantastic tool to get past a blank page or to stimulate new ideas, but it can’t replicate thought, or the value of figuring things out for yourself.
And thinking is a social process. When we write something, we put that thought into a community and challenge its members to interrogate it4: to agree or disagree and ultimately improve on it. In doing so we help each other to think about things in new ways, or make new connections we wouldn’t have made otherwise. That process of discovery, of serendipity, is the whole point. It’s how we understand, in a way we can’t from a list of cold factoids.
Why are we so keen to let machines think for us? It reminds me of a demo I saw a few years ago, in which an AI looked at the Domino’s pizza menu and recommended which pizza you should choose (the obvious answer - not Domino’s - was apparently beyond it). With the advent of generative AI, this new kind of recommender system is being extended to every part of life - where should I go on holiday? Which restaurant should I visit? What dish should I order? Who should I date? Just ask Glorp!
Scratch the surface, and the problem being solved for here is anxiety. The people who design and use these systems are terrified that they might do something wrong, that they might miss out on a better choice, fail to learn the right facts. That anxiety, that FOMO5, is so profound that even ordering a pizza becomes a stressful experience. What if I order the wrong thing? What if I don’t like the topping that arrives? Previously, these people would just choose the same order over and over again. Now they have a multi-billion dollar language model to do this for them.
But to never risk a bad holiday or a crap meal is to experience a kind of living death. It’s to exist like Victoria Ratliff in the new season of The White Lotus, lost in a Lorazepam-induced haze, drifting along in a liminal state of consciousness where nothing can touch her, so riddled with anxiety that even the thought of an impending massage prompts her to pop more pills. Without risk there is no learning, no growth, no surprise, no delight, no adventure, no anecdote for the dinner table, just the bland homogeneous tyranny of ‘good’.
More than that: you can’t truly enjoy life until you embrace and explore risk. If you treat everything as an optimisation project, where every moment has to be perfect and every experience the best it can be, then you’re building an incredibly fragile cage for yourself, a small, fraught, anxious world where you’re never more than one cancelled reservation, missed train or mixed-up order away from disaster.
Underpinning this anxiety is a core rationalist belief about the nature and purpose of intelligence: that in any situation there is a best possible option, so if you have a machine programmed with all the facts available, and you apply Bayesian probability to divine the best course of action, you can always make the best choice. Curiosity - to act without knowing - is a weakness to be eliminated. It’s a profoundly fragile world-view that misunderstands both human intelligence and, well, life.
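Spelled out, the creed is roughly the textbook expected-utility rule: given your beliefs about the possible states of the world, pick the action that maximises expected payoff,

$$a^* = \arg\max_{a \in A} \sum_{s \in S} P(s \mid \text{evidence}) \, U(a, s)$$

which is fine as far as it goes. The catch is that for holidays, pizzas and people you never actually have the probabilities $P(s \mid \text{evidence})$ or a tidy utility function $U$, so the ‘best possible option’ is a mirage.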
The intelligence of the individual is a fixation of the AI and rationalist community. Many of the ‘grand challenges’ driving research in artificial general intelligence have focused on matching the achievements of a human being in some contest. Can a computer beat a person at chess? Can a computer beat a person at Go? What about Jeopardy? Starcraft?
The world of AI is filled with nerdy men, and it is blindingly obvious that the selection of these challenges - and therefore the definition of intelligence being tested - is biased toward the kinds of things nerdy men value. A player who buys into the mythology of Go as a test of strategic brilliance will take pride in their skill at the game, and be impressed when a computer can match it.
More than that, these challenges assert the supremacy of the individual: that intelligence is a property best tested against not humanity, but a single person who just happens to embody the things young rationalist men tend to value in themselves. They validate, in some unconscious way, the testers’ own place in the order of things, somewhere between the earthbound man and the heavenly machine. In this model, the human race is only as intelligent as the smartest individual.
This assumption is roughly equivalent to the ‘Great Man’ view of history: the idea that the spark of individual intelligence powers the engine that drives humanity forward. If you think of human progress as a slow exploration of the space of all possible advances, it is the singular genius of exceptional people that moves us from one point to the next, expanding the scope of human possibility.
Except that’s largely horseshit. Early humans were an order of magnitude smarter than apes, but spent thousands of years doing sod all, teetering on the brink of extinction. The first Neanderthals were possibly smarter than the first humans but were out-bonked or out-eaten to extinction. It’s not immediately clear that a race of super-intelligent augmented humans would fare much better.
Human intelligence is fundamentally social, and built on our shared curiosity. Millions of meat sacks bimble around the world trying stuff out, to differing degrees of success. Our strength is our ability to record and communicate the consequences of our curiosity, and to build gigantic social structures far larger and cleverer than ourselves. It takes a village to be smart, and it takes that village hundreds or thousands of years to develop the culture, processes, frameworks, supply chains and knowledge required for each increment of progress. Stick Da Vinci on his own in an ancient savannah and he’s not going to invent much; hell, he probably won’t last the week.
Curiosity and serendipity are at least as important to human progress as raw intelligence, yet these are the exact things that the tech industry seems determined to optimise out of existence, first through algorithms and latterly through dubious applications of generative AI. The result: a species of explorers all flocking to the same place to take the same picture of the same plate of food in the same restaurant at the same tourist destination.
When Elon Musk walked into Twitter, kitchen sink in hand, one of his first concerns was the matter of ‘ghost employees’. Musk had convinced himself, to nervous laughter from executives, that the company payroll was full of employees who didn’t exist. There was no real evidence for this, no reason for him to assume that it was the case, but anxious brains have a habit of finding things to be anxious about. And so the company wasted time, money and people looking for employees who didn’t exist… who, er, didn’t exist.
The same happened again this year, with the federal workforce. Once again, Musk became fixated on the idea that thousands of government employees were made up or dead. That fear soon metastasised to the social security budget, where extraordinary claims were made that tens of millions of dead people might be receiving benefits. In a bureaucracy the size of the U.S. government there is bound to be some level of fraud - that’s just a fact of life at scale - but once again there was no evidence to back up these claims, just paranoia.
There’s a pattern of behaviour here that’s very familiar if you’ve argued with, say, 9/11 conspiracy theorists with strong engineering mindsets6. The real world is messy, with fuzzy edges, and hard to pin down. A certain kind of fragile, hyper-rational brain really struggles to deal with that, demanding certainty where it isn’t possible. They obsess about the tiny inconsistencies that exist in any human story, to the point where it breaks their brain.
In this case, the way it breaks is by feeding every federal contract or funding proposal into Grok to see what it makes of it. Which isn’t necessarily a bad idea - generative AI, backed with guardrails, can be a good way to make sense of a large number of documents, guiding people toward relevant source material. It can be useful during mergers and acquisitions for example, or legal cases where teams have to wade through thousands of pages of documents to find buried nuggets of relevant information.
There’s a powerful role for AI in our future. At its best, it has the potential to break at least some of the tyranny of algorithms. A key limitation of generative AI - its stochastic, unpredictable nature - is also a strength: anything that expands the scope of discovery, that introduces the serendipity that’s sadly lacking in tools like Google, is a technology worth pursuing.
But what it can’t do is replace thought and understanding, or make value judgements on our behalf. At that point we’re no longer working with an assistant but building a fragile god in the machine, feeding our data to it and asking it to pass judgement like some divine oracle. It’s irrational, incurious, silly even, but for people defined by anxiety and the need for mathematical certainty it provides a simple get-out, a way to defer to a higher power. A comforting religion, with Grok as its god.
Many thanks for your time - please follow me on Bluesky. And do please take the time to subscribe - it really means a lot.
1. I’m not linking to the source because I don’t want to single some poor kid out for a pile-on and because you can find a thousand versions of this all over LinkedIn and Twitter.
2. 1,993 words, so there.
3. And you’ll draw hilarious bad reconstructions of dinosaurs.
4. Which you can do in the comments... I’m really enjoying the comments on this Substack.
5. FOMO is probably the most powerful force acting on the tech industry at any given moment.
6. Although honestly, I wouldn’t bother.