> I like to think of language models like ChatGPT as a calculator for words.
> This is reflected in their name: a “language model” implies that they are tools for working with language. [...]
> Want them to work with specific facts? Paste those [...] as part of your original prompt!
> In the field of artificial intelligence, a hallucination [...] is a response generated by AI that contains false or misleading information presented as fact. This term draws a loose analogy with human psychology [...]
> Searle imagines himself alone in a room following a computer program for responding to Chinese characters slipped under the door. Searle understands nothing of Chinese, and yet, by following the program, [...] this leads those outside to mistakenly suppose there is a Chinese speaker in the room.
> What’s happening in AI today feels, to some of its participants, more like an act of summoning than a software process. They are creating blobby, alien Shoggoths, making them bigger and more powerful, and hoping that there are enough smiley faces to cover the scary parts.
> On January 21, 2026, @fredwilson challenged @seth: AI can write code, but it can't affect the physical world.
> This is our response. Real corn, grown from seed to harvest, with every decision made by Claude Code.
(Not real yet, though. They just started.)
> Gas Town is just Gas Town. It started with Mad Max theming, but none of it is super strong. None of the roles are proper names from the series, and I’m bringing in theming from other sources as well [...]
> In its purest form, Ralph is a Bash loop.
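For the curious, here's roughly what that looks like, assuming Claude Code as the agent and a PROMPT.md file holding the standing instructions (both details are my assumptions, not from the post):

```bash
# The Ralph pattern in miniature: feed the same prompt to a fresh
# agent session, forever. PROMPT.md and the agent CLI are assumptions.
while :; do
  cat PROMPT.md | claude -p
done
```

Each pass starts a fresh session, so the only state that carries over is whatever the agent wrote to disk.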
> [...] today's frontier LLM research is not about building animals. It is about summoning ghosts. You can think of ghosts as a fundamentally different kind of point in the space of possible intelligences. They are muddled by humanity. Thoroughly engineered by it. [...]
> [...] Think of ChatGPT as a blurry JPEG of all the text on the Web. It retains much of the information on the Web, in the same way that a JPEG retains much of the information of a higher-resolution image, but, if you’re looking for an exact sequence of bits, you won’t find it [...]
> In machine learning, the term stochastic parrot is a metaphor, introduced by Emily M. Bender and colleagues in a 2021 paper, that frames large language models as systems that statistically mimic text without real understanding. [...]
> If you can substitute "hungry ghost trapped in a jar" for "AI" in a sentence it's probably a valid use case for LLMs. Take "I have a bunch of hungry ghosts in jars, they mainly write SQL queries for me". Sure. Reasonable use case. "My girlfriend is a hungry ghost I trapped in a jar"? No. Deranged.
> We are skeptical of those that talk [...]
> The repeat-test library is my attempt to come up with a nicer API for writing property tests in TypeScript.
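For anyone new to property testing, the idea in miniature looks like this (a generic sketch, not repeat-test's actual API; the helper below stands in for a library's generators):

```typescript
// Generate a random integer array; a stand-in for a library generator.
function randomArray(maxLen: number): number[] {
  const len = Math.floor(Math.random() * (maxLen + 1));
  return Array.from({ length: len }, () => Math.floor(Math.random() * 201) - 100);
}

// Property: reversing an array twice yields the original array.
for (let trial = 0; trial < 1000; trial++) {
  const input = randomArray(20);
  const roundTripped = [...input].reverse().reverse();
  if (JSON.stringify(roundTripped) !== JSON.stringify(input)) {
    throw new Error(`property failed for ${JSON.stringify(input)}`);
  }
}
console.log("property held for 1,000 random arrays");
```

A real property-testing library improves on this skeleton with composable generators and by shrinking failing inputs down to minimal reproducible cases.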
> Cross-platform shell tools for Deno and Node.js inspired by zx.
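That description matches dsherret's dax, so here's a usage sketch in the zx style these tools follow (the import path and the .text() helper are my assumptions, not confirmed API):

```typescript
// zx-style shell scripting: a tagged template runs the command.
// Import path and helper names are assumptions.
import $ from "jsr:@david/dax";

// Run a command and capture its stdout as trimmed text.
const branch = await $`git branch --show-current`.text();
console.log(`on branch: ${branch}`);
```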
ssh sends lots of "chaff" packets to disguise keystroke timing.

It's a neat presentation because the 3D models are all on the web page.
> Combining the three ideas, I now have a deno script, called box, that provides a multiplexed interface for running ad-hoc code on ad-hoc clusters.
> A Go game with cute faces.
> A Rust port of Steve Yegge's beads, frozen at the "classic" SQLite + JSONL architecture [...]
A PDS (AT Protocol Personal Data Server) browser, pointed at my own PDS.
> There is such a thing as a free lunch
> In differential spec analysis, you write multiple implementations from a spec and then compare their behavior. It is most powerful when paired with fuzzing, but as we’ll see, it even works without fuzzing.
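As a toy illustration (my own example, not from the post): implement a tiny spec twice, independently, then fuzz the pair against each other:

```typescript
// Differential testing in miniature: two independent implementations
// of the same spec ("clamp x into [lo, hi]"), compared on random inputs.

// Implementation A: explicit conditionals.
function clampA(x: number, lo: number, hi: number): number {
  if (x < lo) return lo;
  if (x > hi) return hi;
  return x;
}

// Implementation B: Math primitives.
function clampB(x: number, lo: number, hi: number): number {
  return Math.min(Math.max(x, lo), hi);
}

// Fuzz loop: any disagreement means at least one implementation
// (or the spec) has a problem.
for (let i = 0; i < 10_000; i++) {
  const x = Math.random() * 200 - 100;
  const lo = Math.random() * 100 - 50;
  const hi = lo + Math.random() * 100;
  const a = clampA(x, lo, hi);
  const b = clampB(x, lo, hi);
  if (a !== b) {
    throw new Error(`disagreement at (${x}, ${lo}, ${hi}): ${a} vs ${b}`);
  }
}
console.log("no disagreements in 10,000 trials");
```

A mismatch doesn't say which implementation is wrong, only that the spec was read two different ways, and that's exactly the signal the technique is after.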
> Beans is an issue tracker for you, your team, and your coding agents. Instead of tracking tasks in a separate application, Beans stores them right alongside your code. You can use the beans CLI to interact with your tasks, but more importantly, so can your favorite coding agent!
> Sixty-eight years of highly defective people
> Introducing Confer, an end-to-end AI assistant that just works.
> A deeper look at confessions, reward hacking, and monitoring in alignment research.