Vivek Haldar

AI doomers will doom us

(The following is a slightly edited script of my video on the topic)

The AI doomers are everywhere, and they want to start blowing up stuff, and I want to talk about it.

When Yudkowsky blithely advocates airstriking datacenters that could possibly be training an AGI, even at the risk of setting off a nuclear exchange, that is the point at which the podcast hosts should’ve gone – woah woah woah, slow down, that’s kind of batshit crazy. You’re willing to actually doom humanity for real before it’s theoretically doomed at some point in the future by AGI. I know you’re constructing some pretty sophisticated arguments that you’re totally convinced of, but I’ll withhold my unabashed support for just a little bit, if that’s ok with you.

There’s obviously been a rigorous debate about the merits of the AI doomer argument, but I want to highlight some considerations that have not even entered into the debate, much to my surprise.

Long-termism

But a bit of background first. The moral-philosophical underpinnings of this line of thought make the stance consistent within that worldview.

There is a passage in Bostrom’s Superintelligence, the bible of the AI doomer movement, that explains the basis of the philosophy of long-termism (who could even argue with a name like that?).

The total “endowment” – the thing we have the highest moral duty to protect – is the total number of potential human minds, both real (in the flesh?) and virtual, that could live on a substantial fraction of the habitable planets (and even non-planetary structures in space) within the region reachable from Earth at the speed of light, up to the heat death of the universe. That’s estimated to be 10^58 minds.

The AI doomer camp was birthed out of the philosophy of long-termism. Bostrom was one of the founding philosophers. Yes, the same Bostrom who wrote Superintelligence. Many others have crafted excellent critiques of that philosophy, so go read those.

But the gist of it is that it is being used to justify some pretty horrendous suffering wrought upon the real human beings who are alive today, in order to make sure that the 10^58 humans in our potential future have a chance at existing. What’s 10^9 sacrifices compared to saving 10^58? Not even significant.

And that is why the Bostrom-Yudkowsky camp is so comfortable proposing to inflict massive suffering today to reduce the probability of existential wipeout by even a tiny bit. It is guilt-laundering from the present into a future so vast that it is absolved by dilution.
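To spell out the expected-value arithmetic behind that shrug (a back-of-the-envelope sketch of the longtermist logic, not a calculation Bostrom or Yudkowsky lays out in these terms): if some intervention reduces the probability of existential wipeout by a tiny ε, it “saves” ε × 10^58 expected future minds. For that to outweigh the 10^9 present-day sacrifices, all you need is

$$ \varepsilon \cdot 10^{58} \;>\; 10^{9} \quad\Longleftrightarrow\quad \varepsilon \;>\; 10^{-49} $$

In other words, on this accounting any intervention that shaves more than one part in 10^49 off the extinction probability is “worth” a billion present-day lives. The sheer size of the denominator does all the moral work.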

Police state panopticon

OK, let’s even set aside the “it’s better to provoke nuclear war than get to AGI” argument. Let’s look at the kind of moratorium they are calling for, and the enforcement it would require. Think about what kind of precedent this would set. Forget for a minute the specific field of AI that the AI doomers are talking about.

What they’re arguing for is a global government enforcement mechanism that would subject both private corporations and citizens to an all-seeing police state. Because that’s what it would take to peek into every data center everywhere. That’s what it would take to ensure that GPU manufacturers build tamper-proof GPUs that call home and report malicious training loads, for some definition of malicious that the doomer cabal deems acceptable. And of course, monitoring GPUs alone won’t be enough, because one could train an AGI on CPUs. So we’re already at a place where we’d need to police all computation, whether it happens on GPUs or CPUs. No more encryption. Every computational load anywhere will be subject to random, or ongoing, surveillance to ensure that the humanity-destroying AGI isn’t being trained on it.

Keep going down the compute chain. True enforcement would require monitoring and watchdogs at every semiconductor fab. And to prevent the possibility of someone spinning up their own little fab hidden from the government panopticon, you would give this police state even deeper powers to monitor every aspect of life, industry, and consumption. After all, you’d want to detect large draws of electricity in case they were powering an AGI training run.

And this politburo, once established, will of course want to save humanity from other new technologies that pose existential risks. So we’ve gone and created a body outside democratic and judicial guardrails that has the power to squint real hard, declare some class of human endeavor dangerous enough to pose an existential risk, and put a stop to it.

That is the path the AI doomers want to send us down. And this question of individual liberties and preserving free societies never comes up. Not once in the Superintelligence book, and not once in the blitzkrieg of podcast appearances that Yudkowsky has gone on recently. None of the hosts thought to ask about it either. I thought that was a big enough omission to warrant making this rant of a video.

History

History is not devoid of examples of humanity successfully dealing with what were, at the time, thought to be existential risks.

Example #1 is nuclear weapons. We made atomic bombs, and then the entire world got together to make sure we didn’t wipe ourselves out, while preserving the ability to go to war with each other. So, yay, score for humanity!

Another example is the Asilomar Conference. Back in 1975, the prominent biotechnologists of the day got together to discuss how to safely perform research involving recombinant DNA. They were worried that splicing DNA around would create a killer virus or bacterium or cancer cell that would spread out into the world. They came up with restraints on themselves, and principles for how to safely carry out research while moving the field forward. And all this without dragging government and regulators into it.

So scientists can self-organize to deal with risks, they can self-regulate, they can find ways to push the boundary of knowledge further without shutting down the whole endeavor, and certainly without advocating for airstriking labs.

Closing

The AI doomers have all the attention right now. Perhaps because we’re all worried that AI will take our jobs and strip us of our humanity. That voice saying “AI will kill us all! Shut it down now!” sounds pretty appealing. But it deserves a bit more pushback.