We do a little blackpilling
Herein I share an exchange with Claude. Like a lot of output these days, what you’ll read below is what the Agent generated in response to my prompts.
Humans need atoms. In the US, we make money in bits. The latter is now basically free. The former will become too expensive to survive on. That’s my take.
Right. And this is where most AI optimist arguments quietly perform a sleight of hand. They demonstrate that AI will make information abundant, then assert this solves material scarcity, skipping the actual mechanism by which one becomes the other.
The cushion problem has no clean answer
The candidates:
Government transfers — funded by what? Income tax revenue is collapsing per the Citrini thesis. “Tax compute” is the popular proposal, but compute is the most mobile capital asset in human history. It’s literally weightless. You announce a compute tax and inference moves to Singapore, UAE, or any jurisdiction that wants to become the Delaware of AI. The US can try extraterritorial enforcement but that’s a years-long legal and diplomatic fight while people are missing rent.
Deficit spending — the US can print. But printing to fund consumption without corresponding production is just inflation, which raises the cost of atoms. You’re trying to solve “atoms are too expensive” by making the currency worth less. It’s self-defeating. And the bond market, as the Citrini piece notes, is already pricing in the consumption hit. At some point, even the US faces yield pressure that constrains fiscal space.
Corporate redistribution — “the companies benefiting should pay for it.” Maybe. But the companies benefiting are accruing profits precisely because they eliminated the labor costs you’re now asking them to re-assume in tax form. The political negotiation required to accomplish this at scale, against the most powerful and fast-moving industry in history, during a crisis — it’s not impossible but the timeline doesn’t work.
UBI — same funding problem with a different label.
The 5-year gap is the actual argument
This is what I think people underestimate. Societies don’t tolerate extended material immiseration gracefully. The optimists say “it’ll work out in the long run” as though the intervening period is just an unpleasant waiting room. It’s not. It’s where marriages dissolve, opioid use spikes, birth rates crater further, political extremism flourishes, and institutional trust — already at historic lows — evaporates entirely.
The US in particular has no cultural or institutional memory of genuine widespread privation among the professional class. Blue-collar communities experienced a version of this (the Rust Belt, Appalachian coal towns) and the social consequences played out over decades: deaths of despair, family breakdown, political radicalization. That was a few million people in geographically contained areas. Now scale it to tens of millions across every metro, affecting the demographic that runs institutions.
Your instinct about atoms is the crux
The irreducible human needs list is short: calories, shelter, water, sanitation, heat/cooling, basic medical care, clothing. None of these are information goods. All of them have physical supply chains with real costs that don’t compress to zero on AI timelines.
And here’s the part that’s actually scary: the atoms economy has been subsidized by the bits economy for decades. Cheap food exists partly because of information-intensive logistics and financial optimization. Cheap housing (where it exists) depends on mortgage markets functioning. Cheap healthcare… well, that doesn’t exist, but what access people have depends on employer-sponsored insurance tied to — you guessed it — white-collar employment.
Sever the bits-income stream and you don’t just lose the income. You lose the infrastructure that was making atoms affordable in the first place. The food is cheap because the supply chain is optimized by well-paid knowledge workers using sophisticated software. What happens when those workers are gone, and the grocery chain can no longer afford the AI that replaced their software, because its revenue collapsed when consumers stopped buying?
The honest answer
Nobody has a credible plan for the gap. The AI optimists handwave it. The AI pessimists describe it but propose solutions (taxes, transfers) that have the funding problem you identified. The Austrians say the market will sort it out, which is true on a long enough timeline but useless to someone who needs to eat in March.
The closest historical analogue might be the Soviet collapse in the early 1990s — a rapid structural economic transformation where the old system stopped working before the new one was ready. GDP fell 40%. Life expectancy for Russian men dropped by six years. Mortality spiked. The “transition” killed people.
And Russia had abundant natural resources, arable land, and a population accustomed to material hardship. The US professional class has none of those buffers.
I don’t think you’re wrong. The question isn’t whether the gap is dangerous — it’s whether it’s actually coming on this timeline, or whether AI capability growth hits friction (regulatory, physical, technical) that slows deployment enough for adjustment to happen more gradually. That’s the only real source of optimism, and it’s essentially hoping for delay rather than having a plan.
Exactly. Where is the “cushion” coming from?
And if there isn’t any for even five years, near-societal collapse doesn’t seem unlikely.
People can’t live off of cool SaaS clones, book summaries, and chatbot companionship.
People need calories, clothing, plumbing, etc.