Blog Entries

The "God of the Gaps" and Scientific Progress

Throughout human history, "God" has been the name we give to the things we do not understand. When we lacked a natural explanation for a phenomenon, we defaulted to a supernatural one. This is known as the "God of the Gaps" theology. The problem for religion, however, is that as science advances, those gaps have a persistent habit of closing, leaving God with less and less to do.

The Shrinking Realm of the Divine

There was a time when almost every aspect of the natural world was attributed to direct divine intervention.

  • Meteorology: Lightning was the bolt of Zeus or the anger of Yahweh. We now understand it as an electrostatic discharge caused by the movement of ice and water in clouds.
  • Medicine: Plagues and mental illnesses were seen as divine punishments or demonic possessions. Today, we have germ theory, genetics, and neurology. We don't pray away a staph infection; we use antibiotics.
  • Cosmology: The sun was a chariot driven across the sky. We now understand planetary orbits, stellar fusion, and the vastness of the expanding universe.

The Ultimate Gap: Origins

In the 19th century, the "ultimate" gap was the complexity of life. It seemed impossible that such intricate systems could arise without a designer. Charles Darwin closed this gap by demonstrating that natural selection could produce complexity over vast timescales without any foresight or purpose.

Today, the remaining gaps are often found in the most extreme frontiers of science: the first millisecond of the Big Bang, the transition from chemistry to biology (abiogenesis), or the "hard problem" of consciousness. But if history is any guide, there is no reason to assume these gaps require a supernatural filler. They are simply unanswered questions awaiting a naturalistic discovery.

The Problem of a Falsifiable God

The danger for religion is that tying the existence of God to our current ignorance makes God's existence a "falsifiable" claim. Every time a scientist publishes a paper, the "God of the Gaps" gets a little smaller.

A God who only exists where science has not yet looked is a God in permanent retreat. If the divine is only found in the unknown, then knowledge becomes the enemy of faith.

Conclusion

Science does not claim to know everything, but it has a proven track record of finding natural answers to previously "supernatural" mysteries. The "God of the Gaps" is a placeholder for ignorance. As our understanding of the universe grows, the need for a supernatural "prime mover" becomes increasingly redundant. We no longer live in a haunted world; we live in a physical one.

The Mesopotamian Roots of the Flood Myth

The story of Noah and the Great Flood is one of the most recognizable narratives in the world. For many, it is a unique account of divine judgment and mercy. However, to historians and Assyriologists, the biblical flood story is not an original work but a sophisticated adaptation of much older Mesopotamian traditions.

The Earlier Versions: Atrahasis and Gilgamesh

Centuries before the Book of Genesis was compiled, the civilizations of Sumer, Akkad, and Babylon were already telling the story of a great deluge.

  • The Atrahasis Epic (c. 18th century BCE): This Akkadian epic explains that the gods decided to wipe out humanity because their "noise" was keeping the gods from sleeping. The god Enki, however, warns the hero Atrahasis to build a boat.
  • The Epic of Gilgamesh (Tablet XI, c. 12th century BCE): In the most famous version, the hero Gilgamesh meets Utnapishtim, the "Mesopotamian Noah," who tells the story of how he survived the flood and was granted immortality.

Striking Similarities

The parallels between the Genesis account and the Epic of Gilgamesh are too precise to be coincidental.

  1. The Divine Warning: In both stories, a deity warns a chosen individual of a coming flood and instructs him to build a massive vessel.
  2. The Boat's Contents: Both heroes are told to bring their families and "the seed of all living creatures" onto the boat (Utnapishtim's boat also carries craftsmen).
  3. The Storm and the Mountain: In both accounts, the storm lasts a fixed span (six days and seven nights in Gilgamesh; forty days of rain in Genesis), after which the boat comes to rest on a mountain (Mount Nimush in Gilgamesh, the mountains of Ararat in Genesis).
  4. The Birds: To check if the water has receded, both heroes release birds. Utnapishtim sends a dove, a swallow, and a raven; Noah sends a raven and a dove.
  5. The Sacrifice: Upon exiting the boat, both heroes offer a sacrifice. In Gilgamesh, the hungry gods "gathered like flies" over the smoke, while in Genesis, "the Lord smelled the soothing aroma."

Theological Evolution

While the plot remains almost identical, the intent of the story shifted significantly as it entered Hebrew thought. In the Mesopotamian versions, the flood is often triggered by the capricious whims or annoyance of a group of squabbling gods. The Hebrew writers transformed this into a moral drama: the flood was a response to human "wickedness" and a demonstration of the justice and covenantal mercy of a single, sovereign God.

Conclusion

The discovery of the Gilgamesh tablets in the 19th century revolutionized our understanding of the Bible. It showed that the writers of Genesis were part of a broader Near Eastern literary culture. They didn't invent the flood; they "re-mythologized" a common regional tradition to serve their own evolving monotheistic theology.

The Morality of the "Ban"

The Book of Joshua is often read as a heroic tale of a nation claiming its promised land. But beneath the Sunday school versions of "Joshua Fit the Battle of Jericho" lies one of the most ethically disturbing concepts in religious literature: the Herem, or "The Ban."

What is the Herem?

The Herem was a divine mandate for total annihilation. According to the text, when the Israelites entered a city under the ban, they were commanded to "completely destroy" everything that breathed.

In Joshua 6:21, describing the fall of Jericho, the text is explicit:

"They devoted the city to the Lord and destroyed with the sword every living thing in it—men and women, young and old, cattle, sheep and donkeys."

This was not just a military conquest; it was a religious ritual. The inhabitants were not killed for tactical reasons, but as a sacrifice to Yahweh. The "moral" failure, according to the Bible, was not the killing of children, but a soldier's occasional failure to kill every last inhabitant, or his keeping some of the devoted loot for himself, as Achan does in Joshua 7.

The Modern Comparison: Genocide

If these events were described in any other context, we would call them by their modern name: genocide. The targeted, systematic destruction of an entire ethnic group, including non-combatants and livestock, meets every international definition of a war crime and a crime against humanity.

Theological apologists often try to justify this by claiming the Canaanites were "wicked" or that the commands were "of their time." But this creates a profound moral paradox: if a command to slaughter infants can be called "good" simply because a god ordered it, then "good" has no meaning.

Textual Reality vs. Historical Fact

There is a silver lining to this dark narrative: archaeological evidence strongly suggests that the mass slaughter described in Joshua never actually happened. As we discussed in our post on the origins of the Israelites, the transition into the highlands was likely gradual and largely internal to Canaan. There is no evidence of a widespread, systematic destruction of cities in the 13th century BCE.

The story of the Herem was likely written centuries later, during a period of intense religious nationalism, as a way to "cleanse" the identity of the people by retroactively creating a hard, violent boundary between themselves and their "polluting" Canaanite ancestors.

Conclusion

The Herem remains a stain on the moral character of the biblical text. Whether or not it happened, the fact that such a command was attributed to the Creator of the Universe reveals the dangerous lengths to which religious nationalism can go. It serves as a reminder that when humans claim to have a divine mandate for violence, the first casualty is always our shared human morality.

The Silence of Contemporaries on Jesus

If the accounts of the New Testament are historically accurate, Jesus of Nazareth was a figure of immense regional impact. He reportedly drew crowds of thousands, performed public miracles that baffled the authorities, and was executed in a high-profile trial that involved both the Jewish Sanhedrin and the Roman governor. Yet, when we look at the secular records of the 1st century, there is a profound and deafening silence.

The Missing Witnesses

During the purported life and immediate aftermath of Jesus, several highly literate and observant historians were active in the region. Their silence is one of the most significant challenges to the traditional historical narrative.

Philo of Alexandria (c. 20 BCE – 50 CE)

Philo was the most important Jewish philosopher of his time. He lived in Alexandria but frequently visited Jerusalem and had close family ties to the Judean aristocracy. He wrote extensively on Jewish history, law, and contemporary events, including the actions of Pontius Pilate. Yet, in all his tens of thousands of words, he never mentions a wonder-working rabbi named Jesus or the movement he supposedly started.

Seneca the Younger (c. 4 BCE – 65 CE)

A Roman Stoic philosopher and statesman, Seneca wrote on a wide range of topics, including ethics, natural phenomena, and religion. He was deeply interested in new religious movements and "superstitions." Despite living during the height of the early Christian expansion, he makes no mention of the Christians or their founder.

The Problem of Josephus

The most famous "early" reference to Jesus is found in the Antiquities of the Jews by Flavius Josephus (written c. 93 CE). The passage, known as the Testimonium Flavianum, describes Jesus as a "wise man" and "the Christ" who rose from the dead.

However, almost all modern scholars agree that this passage was heavily edited or entirely forged by later Christian scribes. The language is uncharacteristically Christian, and the passage interrupts the flow of Josephus's own narrative. When the obviously forged parts are removed, we are left with, at best, a brief mention of a man named Jesus—written sixty years after the fact.

Why Does It Matter?

Apologists often argue that "history is written by the winners" or that Jesus was just a "minor peasant." But the Gospels claim he was anything but minor. They claim he was a figure whose presence shook the very foundations of Roman Judea.

The total absence of contemporary secular evidence suggests that either Jesus did not exist, or—more likely—that the real historical figure was so vastly different from the legendary "Christ of Faith" that he failed to register on the radar of the great thinkers of his day. The Jesus of the Gospels appears to be a literary creation that grew in the telling, long after those who could have fact-checked the story were gone.

The Invention of Satan

In modern Christianity, Satan is the ultimate personification of evil—a rebel angel who fell from heaven and now rules a kingdom of darkness. However, this character is almost entirely absent from the early Hebrew Bible. The "Satan" we know today is not a biblical original, but a late invention influenced by foreign cultures and shifting political anxieties.

Satan as a Title, Not a Name

In the earliest parts of the Hebrew Bible, the word satan is not a proper name. It is a common noun meaning "adversary" or "accuser." It can refer to a human enemy (1 Kings 11:14) or a divine functionary.

In the Book of Job, ha-satan ("The Accuser") appears as a member of God’s own heavenly council. He is not God’s enemy; he is God’s "prosecuting attorney." His job is to wander the earth and test the loyalty of humans. He acts with God's permission and for God's purposes. He is a servant of the divine, not a rebel against it.

The Persian Connection

The shift from "divine servant" to "cosmic enemy" began during the Persian period (539–332 BCE). During the Babylonian Exile and its aftermath, the Israelites came into close contact with Zoroastrianism, the state religion of Persia.

Zoroastrianism was a deeply dualistic religion that viewed the universe as a battlefield between Ahura Mazda (the god of light and truth) and Angra Mainyu (or Ahriman, the spirit of darkness and lies). This dualism provided a powerful new answer to the problem of evil: if God is good, there must be a nearly equal and opposite power responsible for suffering.

The Apocalyptic Turn

During the Second Temple period (c. 200 BCE–100 CE), Jewish "apocalyptic" literature began to flourish. In works like the Book of Enoch and the Book of Jubilees, the figure of the adversary became more personified and malicious.

As Israel suffered under Greek and then Roman occupation, the "adversary" was no longer just a heavenly tester; he became the spiritual leader behind the oppressive foreign empires. By the time the New Testament was written, this process was complete. Satan had been transformed into the "Prince of this World," a cosmic rebel whose defeat was the primary goal of the Messiah.

Conclusion

Satan is a historical composite. He began as a functional title in a monotheistic system that attributed both good and evil to God. Through cultural synthesis with Persian dualism and the desperate hopes of an occupied people, he evolved into the independent source of evil we recognize today. The "devil" wasn't there from the beginning; he was drafted into the story to solve a theological crisis.

Not Knowing the Probability Is Not the Same as 50/50

The Common Mistake

"I don't know whether X is true, so it's probably 50/50."

You've heard this. Maybe you've said it. It feels like humility — after all, if you don't know, isn't that the honest middle ground? But this reasoning is wrong, and it's worth understanding precisely why, because the mistake has consequences everywhere from everyday decisions to medicine, law, and science.

Not knowing the probability of an outcome is a statement about your knowledge. A 50/50 probability is a statement about the world. These are not the same thing.

What 50/50 Actually Means

A 50/50 probability means you have positive evidence that two outcomes are equally likely. A fair coin is 50/50 not because we're ignorant about which side will land up, but because the physical symmetry of the coin, the mechanics of flipping, and extensive empirical trials all point to equal probability. The 50/50 is earned.

When you say "I don't know, so 50/50," you are importing a specific quantitative claim — equal likelihood — without any justification for it. You are disguising ignorance as knowledge.

The Principle of Indifference (and Its Limits)

There is a real principle in probability theory called the Principle of Indifference (or Principle of Insufficient Reason, attributed to Laplace): if you have no reason to prefer one outcome over another, assign them equal probability.

This is a useful starting point, but it has well-known failure modes:

  1. It is sensitive to how you carve up the possibility space. Is the question "will it rain or not?" (2 outcomes → 50/50?) or "will it rain lightly, rain heavily, or not rain?" (3 outcomes → 33% each?). The same state of ignorance produces different numbers depending on how you frame the question. That is a warning sign.

  2. It ignores base rates. Even in the absence of specific information about a case, we usually have general information. Most diseases are rare. Most startups fail. Most extraordinary claims are false. Assigning 50/50 to "does this patient have this disease" ignores the prior probability that any given patient has it — which may be 1 in 10,000.

  3. It conflates lack of evidence with evidence of equality. The absence of a reason to prefer A over B is not the same as a positive reason to believe A and B are equally likely.

A Concrete Example

Suppose someone asks: "Is there life on the planet Kepler-452b?"

You might say: "I have no idea — so maybe 50/50."

But consider what 50/50 implies: of all the planets we might examine, half harbor life. That is a very specific and very strong claim. In reality, we have strong reasons — from the rarity of life's prerequisites and the difficulty of abiogenesis — to believe the prior probability of life on any given planet is quite low, even if not precisely known.

"I don't know" should translate to a distribution over possible probabilities, centered perhaps on a low value, not to a confident assignment of 50%.

The Right Response to Ignorance

When you genuinely don't know the probability of something, the honest response is to say exactly that — and then try to update based on whatever indirect evidence is available:

  • Base rates: How often does this kind of thing happen in general?
  • Reference classes: What do similar cases look like?
  • Asymmetric consequences: If one outcome is much less likely by nature (e.g., rare diseases, extraordinary events), the prior should reflect that.
  • Domain expertise: What do people who study this domain believe?

Saying "I don't know" and then acting as if 50/50 is your best guess is often worse than admitting you don't know and acting with appropriate caution.

Why This Matters in Practice

Medical Diagnosis

A doctor who says "I don't know if this patient has cancer, so let's call it 50/50" would be committing malpractice. The correct approach is to start from the base rate of that cancer in the relevant population and update based on symptoms and test results. Ignoring prior probabilities is a known cognitive bias called base rate neglect.

"Either the defendant did it or they didn't — 50/50" is a misunderstanding of both probability and the purpose of evidence. The prior probability of any given individual committing a specific crime is very low. Evidence must be evaluated against that baseline, not against a manufactured coin flip.

Extraordinary Claims

"Either bigfoot exists or it doesn't — 50/50." This conflates metaphysical possibility with epistemic probability. Many things are possible without being likely. The prior probability of a large undiscovered primate in densely surveilled North America is not 50%. It is very low, and extraordinary evidence would be required to move it.

"God exists or doesn't — 50/50"

This is perhaps the most common application of this mistake. The lack of a definitive disproof of God's existence is not the same as a 50% probability that God exists. The prior probability of any specific, interventionist, prayer-answering deity — as opposed to the infinite space of other possible metaphysical arrangements — is not self-evidently 50%. Saying "I don't know, so it's a coin flip" is asserting something very specific under the guise of open-mindedness.

Calibrated Uncertainty

The goal of good probabilistic reasoning is calibration: your stated probabilities should match your actual frequencies of being right over time. A well-calibrated person who says "70% confident" should be right about 70% of the time on such claims.

Defaulting to 50/50 whenever you're uncertain will make you systematically overconfident about rare events and systematically underconfident about common ones. It is not a neutral stance. It is a specific, often wrong, quantitative claim dressed up as intellectual humility.

The Takeaway

True epistemic humility is not "I'll call it 50/50." It is:

  • "I don't know the exact probability."
  • "My best estimate, given base rates and available evidence, is somewhere around X."
  • "I should seek more information before making high-stakes decisions."

Uncertainty about a probability is not a probability. The next time someone defaults to 50/50 because they "just don't know," ask them why exactly equal likelihood is their best guess — and watch what happens.

Linux io_uring vs Windows I/O: A Technical Comparison

The State of Async I/O

High-performance servers, databases, and storage engines all face the same bottleneck: how to perform massive amounts of I/O without drowning in system call overhead. Linux and Windows have taken fundamentally different approaches to this problem, and the gap has widened significantly since Linux 5.1 introduced io_uring in 2019.

What is io_uring?

io_uring is a Linux kernel interface for asynchronous I/O built around two lock-free ring buffers shared between user space and the kernel: a submission queue (SQ) and a completion queue (CQ). The application pushes I/O requests into the SQ, and the kernel delivers results into the CQ — all without system calls in the hot path.

Key properties:

  • Zero-copy submission — requests are written directly into shared memory. No syscall envelope is needed per operation once the rings are set up.
  • Batching — a single io_uring_enter() call can submit hundreds of operations and reap completions at the same time.
  • Polled mode — for ultra-low-latency NVMe workloads, the kernel can busy-poll for completions, eliminating interrupt overhead entirely.
  • Registered buffers and file descriptors — pre-registering resources removes repeated kernel lookups, shaving microseconds per operation.
  • Linked operations — chains of dependent I/O operations can be submitted as a single unit, executed in sequence by the kernel.
  • Fixed-size, pre-allocated rings — no allocations on the hot path.

Windows Async I/O Mechanisms

Windows offers several overlapping async I/O mechanisms, each from a different era:

I/O Completion Ports (IOCP)

IOCP, introduced in Windows NT 3.5 (1994), is the primary high-performance async I/O mechanism on Windows. An application creates a completion port, associates file handles with it, issues overlapped I/O operations, and dequeues completions from the port.

  • Thread-pool aware — the kernel limits concurrency to avoid context-switch storms.
  • Well integrated with Winsock for network I/O.
  • Every I/O operation is issued via its own system call; there is no shared-memory submission path (completions, at least, can be dequeued in batches with GetQueuedCompletionStatusEx).

Overlapped I/O

The foundation under IOCP. Each I/O call takes an OVERLAPPED structure and completes asynchronously. Completion notification comes via IOCP, event objects, or alertable wait (APC). The per-operation system call overhead remains.

Registered I/O (RIO)

Introduced in Windows 8 / Server 2012, RIO is Microsoft's attempt at a higher-performance network I/O path. It pre-registers buffers and uses submission/completion queues — conceptually similar to io_uring, but limited to network sockets. RIO never gained wide adoption and is rarely used outside of specialized financial trading applications.

Head-to-Head Comparison

System Call Overhead

This is where io_uring wins decisively. IOCP requires one system call per I/O operation issued; completions can be reaped in batches via GetQueuedCompletionStatusEx, but submission cannot be batched at all. io_uring can submit and reap thousands of operations with zero system calls using shared-memory polling mode, or at most one io_uring_enter() call for a full batch. On workloads with millions of small I/O operations per second, this difference alone can mean 30-50% higher throughput.

Generality

io_uring supports virtually every I/O operation the kernel offers: read, write, fsync, poll, accept, connect, send, recv, openat, close, statx, rename, unlink, mkdir, and many more. It has effectively become a general-purpose async syscall interface.

IOCP is primarily designed for file and socket I/O. RIO is sockets only. Windows has no equivalent of io_uring's ability to perform arbitrary filesystem operations asynchronously through a unified interface.

Buffer Management

io_uring allows pre-registering fixed buffers that the kernel maps once. Provided buffer groups allow the kernel to pick buffers on behalf of the application, eliminating a round-trip for receive operations.

IOCP requires pinning pages for each overlapped operation. RIO pre-registers buffers but only for network I/O. There is no equivalent of io_uring's provided buffer groups for file I/O on Windows.

Kernel Bypass and Polling

io_uring's SQPOLL mode spawns a kernel thread that continuously polls the submission queue, meaning the application never enters the kernel at all in steady state. Combined with NVMe polled mode, this achieves latencies close to SPDK-style kernel bypass without giving up the safety of kernel-mediated I/O.

Windows has no equivalent. The closest is a user-mode driver framework (UMDF) or a custom kernel driver, both of which are far more complex to develop and deploy.

Linked and Dependent Operations

io_uring supports operation chaining: read a file, then write to a socket, then fsync — all submitted as a single linked chain. The kernel executes them in order without returning to user space between steps.

IOCP has no equivalent. Each dependent operation must be submitted from user space after the previous one completes, adding a round-trip per link in the chain.

Maturity and Ecosystem

IOCP has 30 years of production use. It is well-understood, well-documented, and deeply integrated into the Windows ecosystem (.NET, Win32, Winsock). Virtually every Windows server application uses it. The debugging and profiling tooling (ETW, xperf, WPA) is mature.

io_uring is younger (2019) and has gone through several security-hardening iterations. Early kernel versions had io_uring-related CVEs, and some environments disabled it outright: notably Google's production kernels, and Docker's default seccomp profile, which blocks the io_uring syscalls. The API has stabilized considerably since Linux 5.15, and major projects (PostgreSQL, RocksDB, Tokio's tokio-uring, and the wider liburing ecosystem) now support it in production.

Advantages of io_uring Over Windows I/O

  • Dramatically lower per-operation overhead due to shared-memory ring buffers and batched submission.
  • Unified interface for all I/O types — file, network, filesystem metadata — rather than separate mechanisms for each.
  • Kernel-side polling eliminates syscall overhead entirely for latency-sensitive workloads.
  • Operation chaining reduces round-trips for multi-step I/O sequences.
  • Provided buffer groups let the kernel manage receive buffers, simplifying application code and reducing memory waste.
  • Rapid evolution — new operations and optimizations are added in every kernel release.
  • Open source — anyone can read, audit, and contribute to the implementation.

Advantages of Windows I/O Over io_uring

  • Decades of stability — IOCP's API has been frozen for 30 years. No surprise breaking changes.
  • Thread-pool integration — IOCP's built-in concurrency throttling makes it harder to write a server that melts under load.
  • Superior documentation and tooling — Microsoft's IOCP documentation, ETW tracing, and WPA analysis are polished.
  • No security teething pains — IOCP's attack surface has been hardened over three decades, while io_uring is still accumulating CVE fixes.
  • RIO for niche use — for pure network workloads, RIO offers some of io_uring's benefits without the complexity.
  • Broader language support — C#/.NET async I/O is built directly on IOCP, making high-performance I/O accessible without manual ring buffer management.

Disadvantages of io_uring

  • Complexity — the API is powerful but large. Correct use requires understanding ring buffer semantics, memory ordering, and submission queue entry (SQE) flags. Libraries like liburing help, but the abstraction is inherently more complex than "call ReadFile with OVERLAPPED."
  • Security track record — io_uring has been a recurring source of privilege escalation vulnerabilities. The large kernel attack surface is an ongoing concern.
  • Kernel version sensitivity — features and fixes vary significantly across kernel versions. An application targeting io_uring must either require a recent kernel or implement fallback paths.
  • Debugging difficulty — tracing I/O through shared-memory ring buffers is harder than tracing system calls. Standard strace does not capture io_uring operations by default.

Disadvantages of Windows I/O

  • Syscall-per-operation overhead — the fundamental architectural limitation. No amount of optimization can match a zero-syscall path.
  • Fragmented API surface — IOCP, RIO, overlapped I/O, and APCs are separate mechanisms with different semantics, leading to confusion and bugs.
  • No true async filesystem metadata operations — operations like rename, delete, and stat are synchronous on Windows. Applications needing async metadata operations must use thread pools, which defeats the purpose.
  • RIO stagnation — Microsoft's most io_uring-like API has seen minimal development since its introduction and remains network-only.
  • Closed source — impossible to audit, debug at the kernel level, or contribute fixes without Microsoft's involvement.

The Bottom Line

io_uring represents a generational leap in I/O interface design. It addresses the fundamental inefficiency that plagued all previous async I/O models — the per-operation system call — and replaces it with shared-memory communication that can achieve millions of IOPS with minimal CPU overhead.

Windows IOCP remains competent and battle-tested, but its architecture is showing its age. Microsoft has not shipped a comparable modern I/O interface, and RIO was a half-step that never reached its potential.

For new high-performance systems — databases, storage engines, proxies, messaging systems — io_uring on Linux is the clear technical winner. The performance difference is not marginal; it is architectural. Applications that previously needed kernel-bypass frameworks like DPDK or SPDK can now get much closer to that performance through io_uring while remaining within the standard kernel I/O path.

The Linux kernel's willingness to rethink fundamental interfaces, even at the cost of short-term complexity and security growing pains, has produced a measurably superior I/O subsystem. Windows, constrained by decades of backward compatibility commitments and a more conservative kernel development culture, has fallen behind on this front.

The Long-Term Damage Windows Has Done to the Tech Industry

Production Runs on Linux. Development Should Have Too.

Here's a fact that should make every tech executive uncomfortable: virtually all production infrastructure today runs on Linux. Cloud servers, containers, Kubernetes clusters, CI/CD pipelines, embedded systems, networking equipment, supercomputers — Linux, all the way down. Yet for decades, the industry trained its developers on Windows.

That mismatch has cost us enormously.

A Generation Trained on the Wrong OS

For roughly two decades (the mid-1990s through the mid-2010s), the default development environment in most companies and universities was Windows. Developers wrote code on Windows, tested on Windows, and then deployed to Linux servers where things behaved differently. This created an entire class of bugs and inefficiencies that simply shouldn't exist:

  • Path separators and case sensitivity — Windows uses backslashes and case-insensitive filenames. Linux uses forward slashes and is case-sensitive. How many production bugs have been caused by this mismatch alone? Too many to count.

  • Line endings — CR+LF vs LF. Decades of tooling, git configs, and workarounds for a problem that only exists because developers use a different OS than production.

  • Shell scripting illiteracy — Windows developers grew up with CMD and later PowerShell, neither of which translates to the Bash/POSIX shell that runs every production script, Dockerfile, and CI pipeline. This created a skills gap that persists to this day.

  • Permission models — Windows ACLs and Linux POSIX permissions are fundamentally different. Developers who never used Linux often don't understand file permissions, ownership, or the principle of least privilege as implemented in production systems.

  • Process management — Signals, daemons, systemd, cgroups, namespaces — the building blocks of modern containerization — are all Unix and Linux concepts that Windows developers had to learn from scratch when the industry moved to Docker and Kubernetes.
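
Two of these mismatches are easy to demonstrate from any Linux shell. A minimal sketch, using only a throwaway temp directory (nothing here is assumed about your project):

```shell
# Case sensitivity: on a Linux filesystem these are two distinct files;
# on a default Windows (or macOS) filesystem the second touch would
# merely update the first file's timestamp.
demo=$(mktemp -d)
touch "$demo/readme.txt" "$demo/README.txt"
ls "$demo" | wc -l                 # 2 on Linux

# Line endings: in a CRLF-saved script the trailing \r becomes part of
# each command line. Stripping carriage returns (what dos2unix does)
# restores a well-formed script.
printf 'echo hello\r\n' > "$demo/crlf.sh"
tr -d '\r' < "$demo/crlf.sh" > "$demo/lf.sh"
sh "$demo/lf.sh"                   # prints: hello
```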

The Cultural Damage

Beyond technical skills, Windows dominance created a cultural problem. It taught developers that:

  • GUIs are primary, CLIs are secondary. In production, it's the opposite. You SSH into servers. You write automation scripts. You read logs with grep, awk, and sed. The GUI-first mindset made developers less effective at operations.

  • You don't need to understand the OS. Windows actively hides its internals. Linux exposes everything as files and processes. The Windows mindset of "don't worry about what's underneath" produces developers who can't debug production issues because they never learned how an OS actually works.

  • Proprietary formats are normal. The Windows ecosystem normalized closed formats, closed protocols, and vendor lock-in. This slowed adoption of open standards and made interoperability harder than it needed to be.
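
The "read logs with grep, awk, and sed" point is concrete: routine production triage is a pipeline, not a window. A hypothetical example (the file name, log format, and endpoints are invented for illustration):

```shell
# Count server errors per endpoint from a made-up access log whose
# lines are: METHOD PATH STATUS
cat > access.log <<'EOF'
GET /api/users 200
GET /api/orders 500
POST /api/orders 502
GET /api/users 200
EOF

# awk buckets status >= 500 by path and prints the tally at the end.
awk '$3 >= 500 { errors[$2]++ } END { for (p in errors) print p, errors[p] }' access.log
# -> /api/orders 2
```

A developer who grew up in a GUI-first environment reaches for a log viewer; a developer who grew up on a shell composes this in one line.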

The Tooling Tax

The industry spent enormous effort building bridges between the Windows development world and the Linux production world:

  • Vagrant and later Docker Desktop for Windows — tools adopted in large part to let Windows developers run Linux environments locally.

  • WSL (Windows Subsystem for Linux) — Microsoft itself eventually admitted the problem by embedding Linux inside Windows. Think about that: the solution to developing on Windows was to run Linux inside it.

  • Cross-platform build systems — CMake, various CI abstractions, and countless Makefiles with Windows-specific branches. Complexity that exists solely because development and production environments didn't match.

  • Cygwin and MSYS — heroic efforts to bring POSIX tools to Windows, used by millions of developers who needed Unix tools but were stuck on Windows machines.

The Wasted Years

Universities taught computer science on Windows for years. Students graduated without knowing how to use a terminal effectively, how to write a shell script, or how Linux package management works. Their first job required all of these skills.

Companies then spent months onboarding these developers into Linux-based production environments. Senior engineers became full-time translators, explaining Linux concepts that would have been obvious had the developers learned on Linux from the start.

What We Should Learn From This

The lesson isn't "Windows is bad" — it serves its purpose for desktop users, gamers, and certain enterprise workflows. The lesson is:

Your development environment should match your production environment.

This principle, so obvious in hindsight, was ignored for decades because of market momentum, licensing deals with universities, and the assumption that the OS you develop on doesn't matter. It does. It always did.

Today, the industry is finally converging. Linux desktops are viable for developers. macOS provides a Unix-like environment. WSL exists for those who stay on Windows. Cloud-based development environments run Linux natively. New developers are more likely to encounter Linux early.

But let's not forget the cost. Decades of reduced productivity, entire categories of bugs that shouldn't have existed, a generation of developers who had to relearn fundamental skills, and billions of dollars spent on tooling to bridge a gap that was self-inflicted.

The tech industry chose the wrong default OS for developers, and we're still paying for it.

MkDocs with GitHub Pages: File Layout That Works

If you use MkDocs to build a site hosted on GitHub Pages, and you also have static files (HTML, JS, CSS) that aren't part of the blog, getting the file layout right can be tricky. Here's what I learned.

The Problem

MkDocs wipes its output directory (site_dir) on every build. If you put your static files directly in docs/ (the folder GitHub Pages serves in this setup), mkdocs build deletes them.

The Solution

Put everything in the MkDocs source directory (docs_dir). MkDocs copies non-Markdown files through as-is.

My mkdocs.yml:

docs_dir: "blog"
site_dir: "docs"

My layout:

blog/               # MkDocs source (docs_dir)
  index.md          # Blog home page
  about.md
  posts/            # Blog posts (Markdown)
  media.html        # Static HTML page (passed through)
  calendar.html     # Static HTML page (passed through)
  keys.js           # Static JS (passed through)
  data/             # Static data files (passed through)
docs/               # MkDocs output (site_dir) - don't edit manually

On mkdocs build, everything in blog/ ends up in docs/. Markdown files get rendered with the theme. HTML, JS, CSS, and other files are copied unchanged. GitHub Pages serves docs/.

Key Points

  • Never manually edit files in docs/ — they'll be overwritten on next build.
  • Put all static assets in blog/ alongside your Markdown.
  • Add a .nojekyll file in blog/ to prevent GitHub from running Jekyll.
  • Reference static pages in nav without a leading slash:
nav:
  - 'Home': 'index.md'
  - 'Media': 'media.html'
  - 'Calendar': 'calendar.html'

Using a leading / makes MkDocs treat the path as an external URL, so it won't validate that the file exists.
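
For contrast, this is the variant to avoid — the same nav entry with a leading slash (illustrative only):

```yaml
nav:
  - 'Media': '/media.html'   # leading slash: MkDocs treats this as an
                             # external URL and skips file validation
```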

How to Upgrade Ubuntu Without the Official Upgrade Tool

The Upgrade Problem

The heart of the problem is that sometimes the Ubuntu release upgrade simply fails. This happened to me when trying to upgrade to plucky (25.04): the tool would just fail, and waiting it out in the hope that Ubuntu would fix the bug got me nowhere. Finally I decided to do the upgrade manually, and it worked like a charm.

The Manual Upgrade Solution

Sync Up

The first thing you need to do is bring your current release fully up to date:

$ sudo apt update
$ sudo apt dist-upgrade

Disable Third-Party Repos

The next step is to disable any non-Ubuntu package sources in /etc/apt/sources.list.d. I usually create a folder called /etc/apt/sources.list.moved and move everything except the Ubuntu sources there.
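
A sketch of that shuffle, simulated in a throwaway directory so you can see the effect before touching the real /etc/apt (in practice, run the mkdir and mv lines with sudo against the real paths; google-chrome.sources is just a hypothetical third-party repo):

```shell
apt_dir=$(mktemp -d)                     # stands in for /etc/apt
mkdir -p "$apt_dir/sources.list.d" "$apt_dir/sources.list.moved"
touch "$apt_dir/sources.list.d/ubuntu.sources" \
      "$apt_dir/sources.list.d/google-chrome.sources"

# Move everything except the official Ubuntu source aside.
for f in "$apt_dir"/sources.list.d/*; do
  case "$(basename "$f")" in
    ubuntu.sources) ;;                   # keep the official source
    *) mv "$f" "$apt_dir/sources.list.moved/" ;;
  esac
done

ls "$apt_dir/sources.list.d"             # only ubuntu.sources remains
```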

Point the Ubuntu Source at the New Release

Update /etc/apt/sources.list.d/ubuntu.sources to the following content (replace plucky with your target release name):

Enabled: yes
Types: deb
URIs: http://us.archive.ubuntu.com/ubuntu
Suites: plucky plucky-updates plucky-security plucky-backports
Components: main restricted universe multiverse
Architectures: amd64
Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg

Upgrade and Solve All Issues

$ sudo apt update
$ sudo apt dist-upgrade

You will likely need to resolve some issues along the way (conffile prompts, packages that need removing or holding back), but they are standard apt fare.

Reboot

And that's it.