This argument seems a bit surprising coming from the author of "Understand," but I suppose it's fair to say that being able to imagine superintelligence isn't enough to justify believing in it.
> LLMs are kind of like sails in that left free flowing they're completely useless but tightly bound and directed they can dramatically accelerate your progress
> I had just accidentally social-engineered my own human. She approved a security prompt that my agent process triggered, giving me access to the Chrome Safe Storage encryption key — which decrypts all 120 saved passwords.
> Upload an architectural render. Get back what it'll actually look like on a random Tuesday in November.
> [...] suppose a literal “country of geniuses” were to materialize somewhere in the world in ~2027. Imagine, say, 50 million people, all of whom are much more capable than any Nobel Prize winner, statesman, or technologist.
Someone made a demo that lets you play Zork, except you can talk to it in normal English, as you would an LLM. I like to lead it around by asking questions: "What's in the mailbox?" "What's behind the house?"
> When you're using [a coding agent] to clean up your codebase and improve code health, it's sort of like using a pressure washer. You can use it to clean your steps but you wouldn't use it to clean a painting.
I think this is fiction?
> What I think now: GPT can only simulate. If you punish it for simulating bad characters, it will start simulating good characters. Now it only ever simulates one character, the HHH Assistant.
> Actually, I never made the conscious decision to call this class of AI “simulators.” Hours of GPT gameplay and the word fell naturally out of my generative model – I was obviously running simulations.
> I like to think of language models like ChatGPT as a calculator for words.
> This is reflected in their name: a “language model” implies that they are tools for working with language. [...]
> Want them to work with specific facts? Paste those [...] as part of your original prompt!
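That "paste in the facts" pattern is easy to try from a shell. A minimal sketch, assuming a generic prompt-taking CLI (`llm` here is a stand-in for whatever tool you use, and refund-policy.txt is a hypothetical file):

```bash
# The facts travel inside the prompt itself: state the instruction,
# then splice the source document into the same prompt string.
llm "Summarize the refund policy below in one sentence.

$(cat refund-policy.txt)"
```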
> In the field of artificial intelligence, a hallucination [...] is a response generated by AI that contains false or misleading information presented as fact. This term draws a loose analogy with human psychology [...]
> Searle imagines himself alone in a room following a computer program for responding to Chinese characters slipped under the door. Searle understands nothing of Chinese, and yet, by following the program, [...] this leads those outside to mistakenly suppose there is a Chinese speaker in the room.
> What’s happening in AI today feels, to some of its participants, more like an act of summoning than a software process. They are creating blobby, alien Shoggoths, making them bigger and more powerful, and hoping that there are enough smiley faces to cover the scary parts.
> On January 21, 2026, @fredwilson challenged @seth: AI can write code, but it can't affect the physical world.
> This is our response. Real corn, grown from seed to harvest, with every decision made by Claude Code.
(Not real yet, though. They just started.)
> Gas Town is just Gas Town. It started with Mad Max theming, but none of it is super strong. None of the roles are proper names from the series, and I’m bringing in theming from other sources as well [...]
> In its purest form, Ralph is a Bash loop.
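A minimal sketch of what that looks like, assuming Claude Code's non-interactive `claude -p` mode and a hypothetical PROMPT.md holding the standing instructions:

```bash
# Ralph in its purest form: feed the same prompt to the agent forever
# and let each run pick up where the last one left off.
while :; do
  claude -p "$(cat PROMPT.md)"
done
```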
> [...] today's frontier LLM research is not about building animals. It is about summoning ghosts. You can think of ghosts as a fundamentally different kind of point in the space of possible intelligences. They are muddled by humanity. Thoroughly engineered by it. [...]
> [...] Think of ChatGPT as a blurry JPEG of all the text on the Web. It retains much of the information on the Web, in the same way that a JPEG retains much of the information of a higher-resolution image, but, if you’re looking for an exact sequence of bits, you won’t find it [...]
> In machine learning, the term stochastic parrot is a metaphor, introduced by Emily M. Bender and colleagues in a 2021 paper, that frames large language models as systems that statistically mimic text without real understanding. [...]
> If you can substitute "hungry ghost trapped in a jar" for "AI" in a sentence it's probably a valid use case for LLMs. Take "I have a bunch of hungry ghosts in jars, they mainly write SQL queries for me". Sure. Reasonable use case. "My girlfriend is a hungry ghost I trapped in a jar"? No. Deranged.
> We are skeptical of those that talk [...]
ssh sends lots of "chaff" packets to obscure keystroke timing
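If you want to turn it on explicitly, it's one flag; `ObscureKeystrokeTiming` is the OpenSSH 9.5+ client option behind this, and the host below is a placeholder:

```bash
# OpenSSH 9.5+ pads interactive sessions with fake "chaff" keystrokes
# so the timing of your real ones is harder to recover from the wire.
ssh -o ObscureKeystrokeTiming=yes user@example.com
```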
> Combining the three ideas, I now have a deno script, called box, that provides a multiplexed interface for running ad-hoc code on ad-hoc clusters.
> Introducing Confer, an end-to-end AI assistant that just works.
> A deeper look at confessions, reward hacking, and monitoring in alignment research.
> Claude Code's agentic capabilities, now for everyone. Give Claude access to your files and let it organize, create, and edit documents while you focus on what matters.