Big Iron and Legacy Systems: Why Old Computing Never Dies
The popular narrative of technology treats obsolescence as the natural endpoint for older systems — replaced by the new, archived, and eventually forgotten. The reality of enterprise computing is almost the opposite: mainframe architectures dating to the 1960s process the majority of the world’s financial transactions, COBOL programs written decades ago underpin the global banking system, and the economics of replacing critical infrastructure make “good enough” effectively permanent. These five episodes examine why old computing endures.
The Case for Mainframes
- Big Iron: Why Mainframes Still Run the Global Economy made the strongest possible version of the case for mainframes in 2026. The hosts examined what IBM Z-series mainframes actually offer that cloud alternatives struggle to match: transaction throughput at scale (a single mainframe can process hundreds of thousands of transactions per second with sub-millisecond latency), reliability architectures that achieve five-nines uptime through redundancy that runs at the hardware level rather than the software level, and security models designed from the beginning for financial-grade data isolation. The episode examined the total cost of ownership honestly — mainframes are expensive — and why the major banks that have run “mainframe exit” programs often quietly re-centralize workloads after encountering the hidden complexity of distributing systems that were designed to be monolithic.
The episode also addressed the COBOL question directly. The global banking system runs on hundreds of billions of lines of COBOL — a language from 1959 that is still being actively maintained and extended. The shortage of COBOL programmers is real, and the risk profile of replacing working systems is genuinely high. The hosts examined what responsible modernization looks like versus the risks of “big bang” rewrites that have produced spectacular failures at tax authorities, health insurance systems, and other institutions that attempted them.
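The “five nines” figure mentioned above is easy to make concrete with a little arithmetic. A minimal sketch, assuming a 365-day year, of the downtime budget each availability target implies:

```python
# Downtime budget implied by an availability target, per 365-day year.
SECONDS_PER_YEAR = 365 * 24 * 60 * 60  # 31,536,000 seconds

def downtime_seconds(nines: int) -> float:
    """Allowed downtime per year at an availability of e.g. 99.999% (nines=5)."""
    availability = 1 - 10 ** -nines
    return SECONDS_PER_YEAR * (1 - availability)

for n in range(2, 6):
    print(f"{n} nines: {downtime_seconds(n):,.0f} s/year")
# Five nines allows roughly 315 seconds — about 5.3 minutes — of downtime per year,
# which is why redundancy has to live in hardware rather than in failover scripts.
```

The jump from three nines (about 8.8 hours of annual downtime) to five nines (about 5 minutes) is the gap the episode’s hardware-level redundancy argument is really about.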
The Supercomputing Frontier
- The Power of Quintillions: Inside Supercomputing examined the opposite end of the enterprise computing spectrum — the TOP500 list of the world’s most powerful supercomputers, and what the exascale milestone (one quintillion floating-point operations per second) represents. The episode explained the benchmarks used to rank supercomputers, why they’re controversial (LINPACK performance doesn’t necessarily predict performance on real applications), and the architectural diversity of modern supercomputers — some built on commodity GPU clusters, some on custom silicon, some on hybrid designs. The hosts also examined the hobbyist supercomputing movement: what it actually takes to build a cluster competitive with historical TOP500 entries, and how dramatically the cost of serious compute has fallen.
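To give the quintillion figure some scale, a back-of-the-envelope sketch comparing an exascale machine to an ordinary laptop — the laptop throughput of roughly 100 GFLOP/s is an assumed round number for illustration, not a measured value:

```python
# How long a fixed amount of floating-point work takes at different machine scales.
EXAFLOP = 10 ** 18   # 1 exaFLOP/s: one quintillion floating-point operations per second
LAPTOP = 10 ** 11    # assumed ~100 GFLOP/s for a modern laptop (rough illustration)

work = 10 ** 21      # a hypothetical workload of 10^21 floating-point operations

print(f"Exascale machine: {work / EXAFLOP:,.0f} s")  # 1,000 s — about 17 minutes
print(f"Laptop:           {work / LAPTOP:,.0f} s")   # 10,000,000,000 s — about 317 years
```

The seven-orders-of-magnitude gap is why the TOP500 rankings matter for workloads like climate modeling that simply cannot run anywhere else.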
The Pattern of Technology Persistence
- The Arc of Deprecation: Why Old Tech Still Rules the World provided the theoretical framework for understanding why critical systems run old technology. The episode identified several distinct survival mechanisms: network effects (everyone uses SWIFT because everyone uses SWIFT); reliability reputation (older systems have been stress-tested in ways newer systems haven’t); switching costs (the true cost of migration, including risk, downtime, and retraining, exceeds the value of modernization); and regulatory friction (financial and aviation systems face certification requirements that make software changes expensive regardless of technical merit). The hosts examined specific examples across aviation, finance, healthcare, and government.
- The Hidden Copper Graveyard: Our Legacy of Dead Cables extended this analysis to physical infrastructure. Legacy copper telephone cable, coaxial TV plant, and obsolete fiber runs persist in urban infrastructure because removal is expensive, disruptive, and often legally complicated. The environmental contamination risk from lead-sheathed cable in older urban deployments has emerged as an unexpected liability issue decades after the cables stopped carrying traffic. This pattern — infrastructure that outlives its usefulness by decades because the cost of removal exceeds the cost of simply leaving it in place — appears across computing and telecommunications.
The Road Ahead
- 2026 AI Roadmap: From Invisible Agents to Physical Robots connected the legacy computing discussion to the near-term future. As AI agents begin to interact with enterprise systems, they encounter the mainframe and legacy software reality directly: the systems that run the global economy have APIs that look nothing like modern REST interfaces, data formats that predate Unicode, and operational characteristics that make them difficult to integrate with agentic AI workflows. The episode examined how the AI agent ecosystem is developing adapters and translation layers for legacy enterprise integration, and whether this represents a genuine path to modernization or just another layer of abstraction on top of systems that will never actually be replaced.
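The “data formats that predate Unicode” point is concrete: mainframe data is typically stored in EBCDIC, and any agent-facing adapter has to translate it. A minimal sketch of such a translation layer using Python’s built-in IBM code page 037 codec — the record layout and field names here are hypothetical, invented for illustration (real layouts come from COBOL copybooks and vary widely):

```python
# Decoding an EBCDIC (IBM code page 037) fixed-width record into a dict.
# RECORD_LAYOUT is a hypothetical (field, start, width) layout for illustration.
RECORD_LAYOUT = [("account", 0, 8), ("name", 8, 20), ("balance", 28, 9)]

def parse_record(raw: bytes) -> dict:
    text = raw.decode("cp037")  # EBCDIC -> str via Python's built-in codec
    return {field: text[start:start + width].strip()
            for field, start, width in RECORD_LAYOUT}

# Build a sample EBCDIC record by encoding it the way a mainframe would store it.
sample = "00012345JANE Q CUSTOMER     000104250".encode("cp037")
print(parse_record(sample))
# → {'account': '00012345', 'name': 'JANE Q CUSTOMER', 'balance': '000104250'}
```

Adapters like this are the “translation layer” the episode describes: they make legacy data legible to modern tooling without touching the system of record — which is exactly why they may entrench the underlying system rather than modernize it.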
Understanding why old computing persists is essential context for anyone building technology strategies for large organizations. The answer is rarely inertia or incompetence — it’s usually a rational risk calculation made in the shadow of genuine migration disasters. These episodes provide the background to make that assessment accurately.
Episodes Referenced