Does the Star Trek Computer Run on COBOL?

We like to imagine the future as sleek, seamless, and intelligently designed — a world where a single voice query to a starship's LCARS interface dispatches 575 trillion calculations per nanosecond through the bio-neural gel packs (essentially biological FPGAs), running a massively parallel computation of the optimal molecular dynamics for the replicator to materialize the perfect cup of Earl Grey, hot: an instantaneous, almost magical manifestation of desire. But what if the reality is less Star Trek and more like our current software landscape, just with centuries more layers of legacy code piled on top?

Imagine a starship doing relativistic frame calculations for spaceflight, but at the root of the system is a timer counting seconds since the Unix epoch in 1970. A little time_t integer overflow, and suddenly your carefully plotted course to Proxima Centauri b is aimed directly into a black hole, all because someone, somewhere, 400 years earlier, didn't think anyone would still be using that particular library. And when you try to fire the engines to correct course, they fail to ignite because the engine control system was compiled against glibc 2.35 while the navigation computer is running 2.34.
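
To see how prosaic that failure mode is, here is a minimal sketch in Python of the classic "year 2038 problem" (my own illustration, not anyone's actual flight software): a Unix timestamp stored in a signed 32-bit integer, the historical width of time_t on many systems, silently wraps to 1901 one second past its limit.

```python
# Toy illustration of the 2038 problem: a Unix timestamp stored in a
# signed 32-bit integer (the historical width of time_t) wraps around.
import ctypes
from datetime import datetime, timedelta, timezone

EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)
INT32_MAX = 2**31 - 1  # last second a signed 32-bit time_t can represent

for seconds in (INT32_MAX, INT32_MAX + 1):
    wrapped = ctypes.c_int32(seconds).value   # what a 32-bit time_t would store
    print(EPOCH + timedelta(seconds=wrapped))

# 2038-01-19 03:14:07+00:00
# 1901-12-13 20:45:52+00:00   <- one tick later, the clock says 1901
```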

Vernor Vinge's novel A Deepness in the Sky paints a wonderfully terrifying picture of this:

So behind all the top-level interfaces was layer under layer of support. Some of that software had been designed for wildly different situations. Every so often, the inconsistencies caused fatal accidents. Despite the romance of spaceflight, the most common accidents were simply caused by ancient, misused programs finally getting their revenge.
“We should rewrite it all,” said Pham.
“It’s been done,” said Sura, not looking up. She was preparing to go off-Watch, and had spent the last four days trying to root a problem out of the coldsleep automation.
“It’s been tried,” corrected Bret, just back from the freezers. “But even the top levels of fleet system code are enormous. You and a thousand of your friends would have to work for a century or so to reproduce it.” Trinli grinned evilly. “And guess what—even if you did, by the time you finished, you’d have your own set of inconsistencies. And you still wouldn’t be consistent with all the applications that might be needed now and then.”
Sura gave up on her debugging for the moment. “The word for all this is ‘mature programming environment.’ Basically, when hardware performance has been pushed to its final limit, and programmers have had several centuries to code, you reach a point where there is far more significant code than can be rationalized. The best you can do is understand the overall layering, and know how to search for the oddball tool that may come in handy.”

Vinge's depiction of a "mature programming environment," where layers of ancient, often misunderstood code form the bedrock of advanced civilizations, is incredibly compelling in its cynicism (or perhaps its hard science fiction realism). It's merely an extrapolation of trends already deeply embedded in our own society's approach to software development. The romantic notion of constantly rewriting and perfecting our digital infrastructure often collides with the gritty, pragmatic reality: the overwhelming cost-to-benefit ratio of replacing systems that, for all their quirks, simply work.

Consider the vast edifice of scientific and engineering software. Even today, a significant portion of high-performance numerical computing—the kind that powers climate models, astrophysical simulations, and complex engineering designs—still relies on libraries and kernels written in FORTRAN. Not necessarily FORTRAN 77 in its rawest form, but codebases whose lineage and core algorithms trace back decades. Every time you pip install scipy for a mundane optimization routine, there's a vast swath of Fortran at play, and for good reason. These aren't trivial pieces of code; they are often the product of person-centuries of development, validation, and refinement. The prospect of rewriting them in a "modern" language, ensuring bit-for-bit identical (and correct) output, and then re-validating the entire new stack against decades of experimental data and theoretical work, is a task so monumental and fraught with risk that it's rarely undertaken. Why would it be, when the existing code, running on increasingly powerful hardware or within sophisticated modern wrappers, continues to deliver reliable results?
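
To make that concrete, here is a minimal, hedged sketch (my own example, not from SciPy's documentation): a call that looks like ordinary modern Python but, in typical SciPy builds, has historically handed the actual work to MINPACK's Levenberg-Marquardt routines, Fortran code first published around 1980.

```python
# Fitting a curve with SciPy: the Python layer is new, the numerical core is not.
# curve_fit's default least-squares path has traditionally dispatched to
# MINPACK (lmdif/lmder), Fortran routines dating to roughly 1980.
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b):
    return a * np.exp(-b * x)

rng = np.random.default_rng(0)
x = np.linspace(0, 4, 50)
y = model(x, 2.5, 1.3) + 0.05 * rng.normal(size=x.size)

params, _ = curve_fit(model, x, y)   # the heavy lifting is decades-old code
print(params)                        # roughly [2.5, 1.3]
```

The Python layer is a thin, convenient veneer; the numerical core predates the language wrapped around it.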

One could argue this isn't a failure of progress, but a pragmatic consequence of it. The stability and proven correctness of these foundational blocks allow scientists and engineers to build higher, to ask more complex questions, rather than perpetually rebuilding the basement. The same principle applies, perhaps even more acutely, to the invisible infrastructure of global finance. Core banking systems, transaction processing engines, and stock market clearinghouses often run on software written in COBOL, on mainframe architectures that seem anachronistic to a generation raised on cloud computing. The interconnectedness is staggering; countless other systems, from national payment networks to individual corporate accounting software, are built to interface with these legacy cores. A "rewrite" isn't just a coding project; it's a global logistical, economic, and political challenge of the highest order.

Yet, these systems process trillions of dollars in transactions daily with incredible reliability. The idea of halting, even for a week, the machinery that underpins global capitalism to attempt a wholesale replacement is unthinkable. Because capitalism is inevitable and there is no alternative, right? Or maybe best not to think about such questions.

But as these systems become more deeply embedded and more critical, a peculiar thing happens: they become taken for granted. The intricacies of their internal workings fade from common institutional knowledge. Successive generations of engineers learn to interface with them, to build around them, to coax new functionality out of them by adding new layers, but the core itself becomes an opaque black box. It works, so why delve too deeply into the "how," especially when the original architects are long retired or deceased, and the documentation is sparse, outdated, or both?

Since I love science fiction literature, I've been thinking about this a lot. Here are some of the more interesting outcomes I've come up with. Each one could probably be a humorous Greg Egan-style novella.

  1. The Ritualistic Maintainers: Imagine a future where critical infrastructure—say, atmospheric processing for a terraformed Mars, or the global food production AI—runs on a core operating system whose deepest layers were written in the late 21st century. No living human fully understands its architecture. Maintenance becomes a highly specialized, almost priestly role, involving the careful application of historically successful patches and workarounds, interpreting cryptic log files whose error codes reference hardware that hasn't existed for centuries, all while hoping not to disturb the delicate, poorly understood equilibrium of the ancient code. New features are bolted on through increasingly elaborate APIs that treat the core as an unchangeable, god-given constant. Really not much different from the current state of software development, to be honest.

  2. Cascading Obsolescence as Systemic Risk: A seemingly minor, ubiquitous standard from our near future—perhaps a data compression algorithm or a cryptographic protocol from the 2040s—becomes deeply embedded in countless automated systems. Decades later, a subtle flaw is discovered, or the hardware to efficiently process it becomes scarce. But it's so deeply woven into the fabric of, for example, interplanetary logistics or medical diagnostic equipment, that phasing it out without causing massive disruption is a multi-generational project. The "Y2K bug" might seem trivial compared to the "2275 GCC Undefined Behavior Crisis."

  3. The Rise of Software Archaeologists: When a truly novel problem arises, or a deep-seated bug in the mature programming environment finally surfaces with critical consequences, society might rely on specialists who aren't innovators in any traditional sense, but rather "software archaeologists." Their expertise lies in reverse-engineering and understanding these digital antiquities, using AI-assisted tools to analyze compiled code from forgotten languages, sifting through petabytes of historical data archives for fragments of documentation or ancient online forum discussions that might offer a clue. Maybe future historians will debate interpretations of APL code like we do with Minoan Linear A scripts.

  4. Progress as Increasingly Complex Abstraction: True innovation continues, but often at the periphery. We might develop incredibly sophisticated AI that can diagnose rare diseases, but the AI's learning algorithms might still be outputting results into a data format originally designed for 21st-century hospital record systems, because that's what the national health database, with its petabytes of historical (and legally mandated) data, still requires at its core. The AI doesn't replace the old system; it learns to expertly navigate its limitations and idiosyncrasies. Just as we carefully preserve pre-atomic low-background steel for sensitive scientific instruments, vast archives of Reddit threads and Twitter posts are maintained in specialized data centers, their antiquated social media discourse essential for bootstrapping new language models that need to understand the linguistic roots of modern communication.

Is this a form of stagnation, a build-up of civilizational technical debt that will eventually come due? Perhaps. But it can also be seen as a strange, emergent form of progress. Each "solved" problem, encapsulated in a piece of software that becomes part of the unquestioned infrastructure, allows subsequent generations to focus their intellectual energy on new challenges, standing on the shoulders of digital giants whose names and methods they may no longer even know.

Alternatively, a more advanced (and arguably more enlightened) society might adopt a philosophy akin to that of Japan's Ise Grand Shrine, which is completely demolished and meticulously rebuilt on an adjacent site every 20 years, a tradition known as Shikinen Sengu that has been practiced for over 1,300 years. This isn't renovation; it's a complete rebuild using fresh materials and ancient techniques, rooted in beliefs about impermanence, purity, and the transfer of knowledge to the next generation. Not unlike how many 50-year-old Unix utilities are currently being rewritten in memory-safe languages.

However, such an approach would likely require a societal structure far more advanced than, and economically quite different from, our current one. It presupposes a society where resources and skilled human-time are sufficiently abundant to undertake these monumental rebuilds not out of immediate crisis, but as a matter of principle and long-term health. It would also imply a culture where software development is treated less like the current dog-eat-dog, hype-driven lunatic fest focused on rapid iteration, enclosure, and market capture, and more like a deeply valued craft, akin to the meticulous artisanship of traditions meant to persist across generations. And anyone who works in software knows that is far, far removed from what we do today.

The future of software may not be one of constant, radical reinvention from the ground up, but rather a story of accretion, layering, and the increasingly sophisticated management of an ever-growing, ever-aging technological heritage. The most advanced systems of tomorrow might be running, at their very core, on the digital ghosts of today. Not sure if that's a scary thing, but it does imbue the craft of software development with a certain kind of reverence and weight worth considering.