April 29, 2026
I’ve written in the past about not buying into the AI doomerism that is becoming ever more prevalent in today’s society, and my earnest attempt to convince myself otherwise through the book “If Anyone Builds It, Everyone Dies” by Eliezer Yudkowsky and Nate Soares didn’t move the needle meaningfully1. There was a lot of hypothesizing about what would happen should an AI takeover occur and not enough substance on how it could actually come to fruition. It read as a type of fear-mongering that painted the mongerers as weaker than they probably intended to appear.
Lately though, something else has started to click for me. The models are going to continue to be inscrutable to us mere mortals, but their basic2 mode of operation will remain the same: they predict the next token from a vast space of possible options. I have yet to cross the chasm where I believe they will exhibit sentience beyond what we tend to see in them through our innate anthropomorphic lens. As it relates to AI doomerism, though, it’s seeming more and more plausible that AI (née LLMs) will be a sort of force for destruction through erosion.
Nothing would bring me more joy than to say this will be creative destruction, with outsized benefits befalling humankind as a byproduct or in the aftermath of it all. But given the rapid progress of models, the overall laissez-faire attitude toward anything that isn’t immediately a win for capitalism, and our inability to reason about what we’ve created, a destruction, slow as it may end up being, seems the most likely of all the possible outcomes.
Erosion, as I see it, would come in the form of an ever-increasing number of cyber attacks. Those will slowly but surely wear down the engineering capacity at companies, and because of our own reliance on these very same systems, any defences that can be mounted won’t have meaningful effect outside of completely air-gapping systems or implementing a plethora of proxies.
Several bits and pieces as of late have started to shift my thinking: Anthropic’s announcement of Project Glasswing / Mythos3, Nicholas Carlini’s talk on black hat LLMs4, Thomas Ptacek’s post on vulnerability research5, and Sam Harris’ podcast episode with Tristan Harris6. It’s worth highlighting that agent benchmarks aren’t necessarily to be trusted7 and that Mythos isn’t a lean, mean hacking machine just yet8, but the trajectory is starting to take shape.
https://usrme.xyz/posts/alive-in-2025/#if-anyone-builds-it-everyone-dies-by-eliezer-yudkowsky—nate-soares ↩
The word here is definitely not to be taken lightly; how they operate is anything but. ↩
https://sockpuppet.org/blog/2026/03/30/vulnerability-research-is-cooked/ ↩
https://www.samharris.org/podcasts/making-sense-episodes/469-escaping-an-anti-human-future ↩
https://rdi.berkeley.edu/blog/trustworthy-benchmarks-cont/ ↩
https://www.aisi.gov.uk/blog/our-evaluation-of-claude-mythos-previews-cyber-capabilities ↩