Blog Entries

Welcome! This page lists all my technical articles, notes, and findings.

Subscribe via RSS

Here are the latest entries:

Why Known Liars Making a Claim Actually Reduces Its Probability: A Bayesian Explanation

The Intuition

Most people think that if someone makes a claim, it should at least slightly increase our belief that the claim is true - after all, why would they say it if it weren't true? But Bayes' theorem shows us something counterintuitive: if the person making the claim is a known liar, their assertion can actually make the claim less likely to be true than it was before they opened their mouth.

A Quick Refresher on Bayes' Theorem

Bayes' theorem tells us how to update our beliefs when we receive new evidence:

P(A|B) = P(B|A) * P(A) / P(B)

Where:

  • P(A|B) is the probability of A being true, given that we observed B
  • P(B|A) is the probability of observing B if A were true
  • P(A) is our prior probability of A being true
  • P(B) is the overall probability of observing B

Setting Up the Problem

Let's say there's a claim C, and a known liar L asserts that C is true.

We need to figure out:

  • P(C) - our prior belief that C is true before the liar speaks. Let's say 50% (we have no idea).
  • P(L says C | C is true) - the probability the liar would assert C if C were actually true.
  • P(L says C | C is false) - the probability the liar would assert C if C were actually false.

Here's the key insight: a known liar is someone who is more likely to say things that are false than things that are true. So:

  • P(L says C | C is true) = 0.2 (a liar rarely tells the truth)
  • P(L says C | C is false) = 0.8 (a liar usually lies)

Running the Numbers

We want P(C is true | L says C).

First, compute P(L says C):

P(L says C) = P(L says C | C is true) * P(C) + P(L says C | C is false) * P(not C)
            = 0.2 * 0.5 + 0.8 * 0.5
            = 0.1 + 0.4
            = 0.5

Now apply Bayes' theorem:

P(C is true | L says C) = P(L says C | C is true) * P(C) / P(L says C)
                         = 0.2 * 0.5 / 0.5
                         = 0.2

We started with a 50% belief that C was true. After the known liar asserted C, our belief dropped to 20%. The liar's endorsement is actually evidence against the claim.
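
These numbers are easy to verify in a few lines of Python - a direct transcription of the formulas above, nothing more:

```python
# Prior belief that claim C is true, before the liar speaks.
p_c = 0.5
# Likelihoods: how a known liar behaves.
p_says_given_true = 0.2   # a liar rarely tells the truth
p_says_given_false = 0.8  # a liar usually lies

# Total probability of the liar asserting C (law of total probability).
p_says = p_says_given_true * p_c + p_says_given_false * (1 - p_c)

# Bayes' theorem: posterior probability that C is true.
posterior = p_says_given_true * p_c / p_says
print(posterior)  # 0.2
```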

Why This Matters

This result has profound real-world implications:

Propaganda and Disinformation

When a source with an established track record of lying makes a claim, rational observers should treat that claim with more skepticism than they had before, not less. The claim is tainted by its source. This is not an ad hominem fallacy - it is correct probabilistic reasoning.

The Inverse is Equally Useful

If a known liar denies something, that denial is actually evidence for the thing being true. If an authoritarian regime denies committing atrocities, and that regime has a strong track record of lying, the denial should increase your belief that the atrocities occurred.

Stacking Liars Does Not Help

If multiple known liars independently assert the same claim, each additional liar's assertion further reduces the probability. Ten liars all saying the same thing is not reinforcement - it is ten pieces of evidence pointing away from the claim. This assumes their assertions are independent; if they're coordinating, it's essentially one assertion from one source.
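
Under the independence assumption, each liar's assertion multiplies the prior odds by the same likelihood ratio (0.2 / 0.8 = 0.25 with the numbers above). A quick sketch, using the illustrative probabilities from earlier:

```python
def posterior_after_liars(prior, n_liars, p_true=0.2, p_false=0.8):
    """Posterior P(C | n independent known liars assert C).

    Each independent assertion multiplies the prior odds by the
    likelihood ratio p_true / p_false.
    """
    odds = prior / (1 - prior)
    odds *= (p_true / p_false) ** n_liars
    return odds / (1 + odds)

for n in range(4):
    print(n, round(posterior_after_liars(0.5, n), 4))
# 0 liars -> 0.5, 1 -> 0.2, 2 -> 0.0588, 3 -> 0.0154
```

Each additional liar drives the posterior further toward zero, exactly as argued above.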

Religious Texts and Religious Claims

This reasoning applies directly to religious texts and claims made by religious figures. Religious texts such as the Bible, the Quran, and others contain numerous claims that have been demonstrated to be false: the age of the earth, the global flood, the creation narrative, the sun standing still, and many more. These texts have, by any empirical standard, an extremely poor track record of making true claims about the physical world.

Now consider what happens when these same texts make unfalsifiable claims - the existence of God, an afterlife, a soul, divine purpose, or miracles that conveniently left no trace. A naive observer might say "well, we can't disprove those claims." But Bayes' theorem tells us something stronger: the very fact that a source with such a poor track record is the one making these claims is evidence against them. If the Bible gets geology, biology, cosmology, and history wrong repeatedly, its assertions about metaphysics deserve less credence, not a free pass simply because they are unfalsifiable.

The same applies to religious authorities. A priest, rabbi, or imam who makes verifiably false claims - about history, science, or even the contents of their own texts - establishes themselves as an unreliable source. When that same person then asserts the existence of God or the truth of their theology, Bayes' theorem tells us their assertion should lower our posterior probability, not raise it. The more such unreliable sources pile on to the same claim, the worse it gets - as we saw with stacking liars above.

This is not a proof that God does not exist. It is a mathematical observation that the primary sources making the claim have disqualified themselves as evidence for that claim. If your best witnesses are known liars, calling them to the stand hurts your case.

Trust is Information-Theoretic

This analysis shows that trust is not just a social nicety - it has rigorous mathematical consequences. A source's reliability directly determines whether their statements function as evidence for or against their claims. A perfectly reliable source's assertions would push your belief toward 100%; a perfectly unreliable source's would push it toward 0%. And a source that is right exactly half the time? Their statements carry zero information - you can ignore them entirely.
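
With a 50% prior, the posterior works out to exactly the source's reliability, which makes the three regimes easy to see. (This tidy equality holds for the 50% prior specifically; other priors shift the numbers but not the direction.)

```python
def posterior(prior, reliability):
    """P(C | source asserts C), where `reliability` is the chance the
    source asserts C when C is true, and (1 - reliability) is the
    chance it asserts C when C is false."""
    p_says = reliability * prior + (1 - reliability) * (1 - prior)
    return reliability * prior / p_says

for r in (1.0, 0.8, 0.5, 0.2, 0.0):
    print(f"reliability {r:.1f} -> posterior {posterior(0.5, r):.2f}")
```

A reliability of 0.5 leaves the prior untouched: zero information. Above 0.5 the assertion is evidence for; below 0.5, evidence against.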

The Takeaway

Bayes' theorem formalizes what many people intuitively sense but struggle to articulate: the credibility of a source matters just as much as the content of their claim. Known liars asserting something is true is, mathematically speaking, evidence that it is false. The next time someone with a track record of dishonesty makes a bold claim, remember: their very act of claiming it has made it less likely to be true.

Not Knowing the Probability Is Not the Same as 50/50

The Common Mistake

"I don't know whether X is true, so it's probably 50/50."

You've heard this. Maybe you've said it. It feels like humility — after all, if you don't know, isn't that the honest middle ground? But this reasoning is wrong, and it's worth understanding precisely why, because the mistake has consequences everywhere from everyday decisions to medicine, law, and science.

Not knowing the probability of an outcome is a statement about your knowledge. A 50/50 probability is a statement about the world. These are not the same thing.

What 50/50 Actually Means

A 50/50 probability means you have positive evidence that two outcomes are equally likely. A fair coin is 50/50 not because we're ignorant about which side will land up — it's because the physical symmetry of the coin, the mechanics of flipping, and extensive empirical trials all point to equal probability. The 50/50 is earned.

When you say "I don't know, so 50/50," you are importing a specific quantitative claim — equal likelihood — without any justification for it. You are disguising ignorance as knowledge.

The Principle of Indifference (and Its Limits)

There is a real principle in probability theory called the Principle of Indifference (or Principle of Insufficient Reason, attributed to Laplace): if you have no reason to prefer one outcome over another, assign them equal probability.

This is a useful starting point, but it has well-known failure modes:

  1. It is sensitive to how you carve up the possibility space. Is the question "will it rain or not?" (2 outcomes → 50/50?) or "will it rain lightly, rain heavily, or not rain?" (3 outcomes → 33% each?). The same state of ignorance produces different numbers depending on how you frame the question. That is a warning sign.

  2. It ignores base rates. Even in the absence of specific information about a case, we usually have general information. Most diseases are rare. Most startups fail. Most extraordinary claims are false. Assigning 50/50 to "does this patient have this disease" ignores the prior probability that any given patient has it — which may be 1 in 10,000.

  3. It conflates lack of evidence with evidence of equality. The absence of a reason to prefer A over B is not the same as a positive reason to believe A and B are equally likely.
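
The partition sensitivity in point 1 is easy to make concrete. The same state of ignorance yields different numbers depending on how the outcome space is carved up:

```python
# Two-way partition: {rain, no rain}, equal weight each.
p_rain_two_way = 1 / 2

# Three-way partition: {light rain, heavy rain, no rain}.
# "Rain" now covers two of three equally weighted outcomes.
p_rain_three_way = 2 / 3

print(p_rain_two_way, p_rain_three_way)  # 0.5 vs 0.666...
```

Nothing about the world changed between the two lines - only the framing did. A principle that produces different probabilities from identical ignorance is not delivering knowledge.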

A Concrete Example

Suppose someone asks: "Is there life on the planet Kepler-452b?"

You might say: "I have no idea — so maybe 50/50."

But consider what 50/50 implies: of all the comparable planets we might examine, half harbor life. That is a very specific and very strong claim. In reality, we have strong reasons — from the rarity of life's prerequisites and the apparent difficulty of abiogenesis — to believe the prior probability of life on any given planet is quite low, even if not precisely known.

"I don't know" should translate to a distribution over possible probabilities, centered perhaps on a low value, not to a confident assignment of 50%.
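
One standard way to formalize "a distribution over possible probabilities" is a Beta distribution. A minimal sketch with made-up parameters (not a claim about actual astrobiology), just to show that uncertainty about a probability need not center on 50%:

```python
# A Beta(1, 99) prior encodes: "I don't know the exact probability,
# but background knowledge suggests it's low."
# The mean of Beta(a, b) is a / (a + b).
alpha, beta = 1, 99   # illustrative parameters only
mean = alpha / (alpha + beta)
print(mean)  # 0.01 - honest ignorance, centered low, not at 0.5
```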

The Right Response to Ignorance

When you genuinely don't know the probability of something, the honest response is to say exactly that — and then try to update based on whatever indirect evidence is available:

  • Base rates: How often does this kind of thing happen in general?
  • Reference classes: What do similar cases look like?
  • Asymmetric consequences: If one outcome is much less likely by nature (e.g., rare diseases, extraordinary events), the prior should reflect that.
  • Domain expertise: What do people who study this domain believe?

Saying "I don't know" and then acting as if 50/50 is your best guess is often worse than admitting you don't know and acting with appropriate caution.

Why This Matters in Practice

Medical Diagnosis

A doctor who says "I don't know if this patient has cancer, so let's call it 50/50" would be committing malpractice. The correct approach is to start from the base rate of that cancer in the relevant population and update based on symptoms and test results. Ignoring prior probabilities is a known cognitive bias called base rate neglect.
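
The base-rate effect is easy to quantify. With an illustrative prevalence of 1 in 10,000 and a test that is 99% sensitive with a 5% false-positive rate (made-up but plausible numbers), a positive result is still nowhere near 50/50:

```python
prevalence = 1 / 10_000   # P(cancer) in the population
sensitivity = 0.99        # P(positive test | cancer)
false_positive = 0.05     # P(positive test | no cancer)

# Total probability of a positive test, then Bayes' theorem.
p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)
p_cancer_given_positive = sensitivity * prevalence / p_positive
print(f"{p_cancer_given_positive:.4f}")  # ~0.002: far from 50/50
```

Even after a positive test, the posterior is a fraction of a percent, because the prior started so low. Treating the question as a coin flip would be off by more than two orders of magnitude.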

Criminal Trials

"Either the defendant did it or they didn't — 50/50" is a misunderstanding of both probability and the purpose of evidence. The prior probability of any given individual committing a specific crime is very low. Evidence must be evaluated against that baseline, not against a manufactured coin flip.

Extraordinary Claims

"Either Bigfoot exists or it doesn't — 50/50." This conflates metaphysical possibility with epistemic probability. Many things are possible without being likely. The prior probability of a large undiscovered primate in densely surveilled North America is not 50%. It is very low, and extraordinary evidence would be required to move it.

"God exists or doesn't — 50/50"

This is perhaps the most common application of this mistake. The lack of a definitive disproof of God's existence is not the same as a 50% probability that God exists. The prior probability of any specific, interventionist, prayer-answering deity — as opposed to the infinite space of other possible metaphysical arrangements — is not self-evidently 50%. Saying "I don't know, so it's a coin flip" is asserting something very specific under the guise of open-mindedness.

Calibrated Uncertainty

The goal of good probabilistic reasoning is calibration: your stated probabilities should match your actual frequencies of being right over time. A well-calibrated person who says "70% confident" should be right about 70% of the time on such claims.

Defaulting to 50/50 whenever you're uncertain will make you systematically overconfident about rare events and systematically underconfident about common ones. It is not a neutral stance. It is a specific, often wrong, quantitative claim dressed up as intellectual humility.
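
That overconfidence is measurable. The sketch below scores the "always say 50/50" strategy against the base-rate strategy on simulated rare events, using the Brier score (mean squared error; lower is better). The 1% event rate is arbitrary:

```python
import random

random.seed(0)
base_rate = 0.01  # rare event; arbitrary illustrative rate
outcomes = [1 if random.random() < base_rate else 0 for _ in range(100_000)]

def brier(forecast, outcomes):
    """Mean squared error between a constant forecast and 0/1 outcomes."""
    return sum((forecast - y) ** 2 for y in outcomes) / len(outcomes)

print("50/50 default:", round(brier(0.5, outcomes), 4))
print("base rate:    ", round(brier(base_rate, outcomes), 4))
# The base-rate forecast scores far better than the 50/50 default.
```

The 50/50 forecaster pays the maximum possible penalty on every event that doesn't happen - which, for rare events, is almost all of them.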

The Takeaway

True epistemic humility is not "I'll call it 50/50." It is:

  • "I don't know the exact probability."
  • "My best estimate, given base rates and available evidence, is somewhere around X."
  • "I should seek more information before making high-stakes decisions."

Uncertainty about a probability is not a probability. The next time someone defaults to 50/50 because they "just don't know," ask them why exactly equal likelihood is their best guess — and watch what happens.

Linux io_uring vs Windows I/O: A Technical Comparison

The State of Async I/O

High-performance servers, databases, and storage engines all face the same bottleneck: how to perform massive amounts of I/O without drowning in system call overhead. Linux and Windows have taken fundamentally different approaches to this problem, and the gap has widened significantly since Linux 5.1 introduced io_uring in 2019.

What is io_uring?

io_uring is a Linux kernel interface for asynchronous I/O built around two lock-free ring buffers shared between user space and the kernel: a submission queue (SQ) and a completion queue (CQ). The application pushes I/O requests into the SQ, and the kernel delivers results into the CQ — all without system calls in the hot path.

Key properties:

  • Zero-copy submission — requests are written directly into shared memory. No syscall envelope is needed per operation once the rings are set up.
  • Batching — a single io_uring_enter() call can submit hundreds of operations and reap completions at the same time.
  • Polled mode — for ultra-low-latency NVMe workloads, the kernel can busy-poll for completions, eliminating interrupt overhead entirely.
  • Registered buffers and file descriptors — pre-registering resources removes repeated kernel lookups, shaving microseconds per operation.
  • Linked operations — chains of dependent I/O operations can be submitted as a single unit, executed in sequence by the kernel.
  • Fixed-size, pre-allocated rings — no allocations on the hot path.

Windows Async I/O Mechanisms

Windows offers several overlapping async I/O mechanisms, each from a different era:

I/O Completion Ports (IOCP)

IOCP, introduced in Windows NT 3.5 (1994), is the primary high-performance async I/O mechanism on Windows. An application creates a completion port, associates file handles with it, issues overlapped I/O operations, and dequeues completions from the port.

  • Thread-pool aware — the kernel limits concurrency to avoid context-switch storms.
  • Well integrated with Winsock for network I/O.
  • Every I/O operation is a system call. There is no batching or shared-memory shortcut.

Overlapped I/O

The foundation under IOCP. Each I/O call takes an OVERLAPPED structure and completes asynchronously. Completion notification comes via IOCP, event objects, or alertable wait (APC). The per-operation system call overhead remains.

Registered I/O (RIO)

Introduced in Windows 8 / Server 2012, RIO is Microsoft's attempt at a higher-performance network I/O path. It pre-registers buffers and uses submission/completion queues — conceptually similar to io_uring but limited to network sockets only. RIO never gained wide adoption and is rarely used outside of specialized financial trading applications.

Head-to-Head Comparison

System Call Overhead

This is where io_uring wins decisively. IOCP requires one system call per I/O operation issued and one per completion dequeued. io_uring can submit and reap thousands of operations with zero system calls using shared-memory polling mode, or at most one io_uring_enter() call for a full batch. On workloads with millions of small I/O operations per second, this difference alone can mean 30-50% higher throughput.

Generality

io_uring supports virtually every I/O operation the kernel offers: read, write, fsync, poll, accept, connect, send, recv, openat, close, statx, rename, unlink, mkdir, and many more. It has effectively become a general-purpose async syscall interface.

IOCP is primarily designed for file and socket I/O. RIO is sockets only. Windows has no equivalent of io_uring's ability to perform arbitrary filesystem operations asynchronously through a unified interface.

Buffer Management

io_uring allows pre-registering fixed buffers that the kernel maps once. Provided buffer groups allow the kernel to pick buffers on behalf of the application, eliminating a round-trip for receive operations.

IOCP requires pinning pages for each overlapped operation. RIO pre-registers buffers but only for network I/O. There is no equivalent of io_uring's provided buffer groups for file I/O on Windows.

Kernel Bypass and Polling

io_uring's SQPOLL mode spawns a kernel thread that continuously polls the submission queue, meaning the application never enters the kernel at all in steady state. Combined with NVMe polled mode, this achieves latencies close to SPDK-style kernel bypass without giving up the safety of kernel-mediated I/O.

Windows has no equivalent. The closest is a user-mode driver framework (UMDF) or a custom kernel driver, both of which are far more complex to develop and deploy.

Linked and Dependent Operations

io_uring supports operation chaining: read a file, then write to a socket, then fsync — all submitted as a single linked chain. The kernel executes them in order without returning to user space between steps.

IOCP has no equivalent. Each dependent operation must be submitted from user space after the previous one completes, adding a round-trip per link in the chain.

Maturity and Ecosystem

IOCP has 30 years of production use. It is well-understood, well-documented, and deeply integrated into the Windows ecosystem (.NET, Win32, Winsock). Virtually every Windows server application uses it. The debugging and profiling tooling (ETW, xperf, WPA) is mature.

io_uring is younger (2019) and has gone through several security hardening iterations. Early kernel versions had io_uring-related CVEs, and some distributions (notably Google's production kernels and earlier versions of Docker's seccomp profiles) disabled it entirely for a period. The API has stabilized considerably since Linux 5.15+, and major projects (PostgreSQL, RocksDB, NGINX, Tokio, liburing) now use it in production.

Advantages of io_uring Over Windows I/O

  • Dramatically lower per-operation overhead due to shared-memory ring buffers and batched submission.
  • Unified interface for all I/O types — file, network, filesystem metadata — rather than separate mechanisms for each.
  • Kernel-side polling eliminates syscall overhead entirely for latency-sensitive workloads.
  • Operation chaining reduces round-trips for multi-step I/O sequences.
  • Provided buffer groups let the kernel manage receive buffers, simplifying application code and reducing memory waste.
  • Rapid evolution — new operations and optimizations are added in every kernel release.
  • Open source — anyone can read, audit, and contribute to the implementation.

Advantages of Windows I/O Over io_uring

  • Decades of stability — IOCP's API has been frozen for 30 years. No surprise breaking changes.
  • Thread-pool integration — IOCP's built-in concurrency throttling makes it harder to write a server that melts under load.
  • Superior documentation and tooling — Microsoft's IOCP documentation, ETW tracing, and WPA analysis are polished.
  • No security teething pains — IOCP's attack surface has been hardened over three decades, while io_uring is still accumulating CVE fixes.
  • RIO for niche use — for pure network workloads, RIO offers some of io_uring's benefits without the complexity.
  • Broader language support — C#/.NET async I/O is built directly on IOCP, making high-performance I/O accessible without manual ring buffer management.

Disadvantages of io_uring

  • Complexity — the API is powerful but large. Correct use requires understanding ring buffer semantics, memory ordering, and submission queue entry (SQE) flags. Libraries like liburing help, but the abstraction is inherently more complex than "call ReadFile with OVERLAPPED."
  • Security track record — io_uring has been a recurring source of privilege escalation vulnerabilities. The large kernel attack surface is an ongoing concern.
  • Kernel version sensitivity — features and fixes vary significantly across kernel versions. An application targeting io_uring must either require a recent kernel or implement fallback paths.
  • Debugging difficulty — tracing I/O through shared-memory ring buffers is harder than tracing system calls. Standard strace does not capture io_uring operations by default.

Disadvantages of Windows I/O

  • Syscall-per-operation overhead — the fundamental architectural limitation. No amount of optimization can match a zero-syscall path.
  • Fragmented API surface — IOCP, RIO, overlapped I/O, and APCs are separate mechanisms with different semantics, leading to confusion and bugs.
  • No true async filesystem metadata operations — operations like rename, delete, and stat are synchronous on Windows. Applications needing async metadata operations must use thread pools, which defeats the purpose.
  • RIO stagnation — Microsoft's most io_uring-like API has seen minimal development since its introduction and remains network-only.
  • Closed source — impossible to audit, debug at the kernel level, or contribute fixes without Microsoft's involvement.

The Bottom Line

io_uring represents a generational leap in I/O interface design. It addresses the fundamental inefficiency that plagued all previous async I/O models — the per-operation system call — and replaces it with shared-memory communication that can achieve millions of IOPS with minimal CPU overhead.

Windows IOCP remains competent and battle-tested, but its architecture is showing its age. Microsoft has not shipped a comparable modern I/O interface, and RIO was a half-step that never reached its potential.

For new high-performance systems — databases, storage engines, proxies, messaging systems — io_uring on Linux is the clear technical winner. The performance difference is not marginal; it is architectural. Applications that previously needed kernel bypass frameworks like DPDK or SPDK can now achieve comparable performance through io_uring while remaining within the standard kernel I/O path.

The Linux kernel's willingness to rethink fundamental interfaces, even at the cost of short-term complexity and security growing pains, has produced a measurably superior I/O subsystem. Windows, constrained by decades of backward compatibility commitments and a more conservative kernel development culture, has fallen behind on this front.

The Long-Term Damage Windows Has Done to the Tech Industry

Production Runs on Linux. Development Should Have Too.

Here's a fact that should make every tech executive uncomfortable: virtually all production infrastructure today runs on Linux. Cloud servers, containers, Kubernetes clusters, CI/CD pipelines, embedded systems, networking equipment, supercomputers — Linux, all the way down. Yet for decades, the industry trained its developers on Windows.

That mismatch has cost us enormously.

A Generation Trained on the Wrong OS

For roughly 25 years (mid-1990s through the mid-2010s), the default development environment in most companies and universities was Windows. Developers wrote code on Windows, tested on Windows, and then deployed to Linux servers where things behaved differently. This created an entire class of bugs and inefficiencies that simply shouldn't exist:

  • Path separators and case sensitivity — Windows uses backslashes and case-insensitive filenames. Linux uses forward slashes and is case-sensitive. How many production bugs have been caused by this mismatch alone? Too many to count.

  • Line endings — CR+LF vs LF. Decades of tooling, git configs, and workarounds for a problem that only exists because developers use a different OS than production.

  • Shell scripting illiteracy — Windows developers grew up with CMD and later PowerShell, neither of which translates to the Bash/POSIX shell that runs every production script, Dockerfile, and CI pipeline. This created a skills gap that persists to this day.

  • Permission models — Windows ACLs and Linux POSIX permissions are fundamentally different. Developers who never used Linux often don't understand file permissions, ownership, or the principle of least privilege as implemented in production systems.

  • Process management — Signals, daemons, systemd, cgroups, namespaces — the building blocks of modern containerization — are all Linux concepts that Windows developers had to learn from scratch when the industry moved to Docker and Kubernetes.

The Cultural Damage

Beyond technical skills, Windows dominance created a cultural problem. It taught developers that:

  • GUIs are primary, CLIs are secondary. In production, it's the opposite. You SSH into servers. You write automation scripts. You read logs with grep, awk, and sed. The GUI-first mindset made developers less effective at operations.

  • You don't need to understand the OS. Windows actively hides its internals. Linux exposes everything as files and processes. The Windows mindset of "don't worry about what's underneath" produces developers who can't debug production issues because they never learned how an OS actually works.

  • Proprietary formats are normal. The Windows ecosystem normalized closed formats, closed protocols, and vendor lock-in. This slowed adoption of open standards and made interoperability harder than it needed to be.

The Tooling Tax

The industry spent enormous effort building bridges between the Windows development world and the Linux production world:

  • Vagrant and later Docker Desktop for Windows — entire projects that exist primarily to let Windows developers run Linux environments locally.

  • WSL (Windows Subsystem for Linux) — Microsoft itself eventually admitted the problem by embedding Linux inside Windows. Think about that: the solution to developing on Windows was to run Linux inside it.

  • Cross-platform build systems — CMake, various CI abstractions, and countless Makefiles with Windows-specific branches. Complexity that exists solely because development and production environments didn't match.

  • Cygwin and MSYS — heroic efforts to bring POSIX tools to Windows, used by millions of developers who needed Unix tools but were stuck on Windows machines.

The Wasted Years

Universities taught computer science on Windows for years. Students graduated without knowing how to use a terminal effectively, how to write a shell script, or how Linux package management works. Their first job required all of these skills.

Companies then spent months onboarding these developers into Linux-based production environments. Senior engineers became full-time translators, explaining Linux concepts that would have been obvious had the developers learned on Linux from the start.

What We Should Learn From This

The lesson isn't "Windows is bad" — it serves its purpose for desktop users, gamers, and certain enterprise workflows. The lesson is:

Your development environment should match your production environment.

This principle, so obvious in hindsight, was ignored for decades because of market momentum, licensing deals with universities, and the assumption that the OS you develop on doesn't matter. It does. It always did.

Today, the industry is finally converging. Linux desktops are viable for developers. macOS provides a Unix-like environment. WSL exists for those who stay on Windows. Cloud-based development environments run Linux natively. New developers are more likely to encounter Linux early.

But let's not forget the cost. Decades of reduced productivity, entire categories of bugs that shouldn't have existed, a generation of developers who had to relearn fundamental skills, and billions of dollars spent on tooling to bridge a gap that was self-inflicted.

The tech industry chose the wrong default OS for developers, and we're still paying for it.

MkDocs with GitHub Pages: File Layout That Works

If you use MkDocs to build a site hosted on GitHub Pages, and you also have static files (HTML, JS, CSS) that aren't part of the blog, getting the file layout right can be tricky. Here's what I learned.

The Problem

MkDocs wipes its output directory (site_dir) on every build. If you put your static files directly in docs/ (the default GitHub Pages root), mkdocs build deletes them.

The Solution

Put everything in the MkDocs source directory (docs_dir). MkDocs copies non-Markdown files through as-is.

My mkdocs.yml:

docs_dir: "blog"
site_dir: "docs"

My layout:

blog/               # MkDocs source (docs_dir)
  index.md          # Blog home page
  about.md
  posts/            # Blog posts (Markdown)
  media.html        # Static HTML page (passed through)
  calendar.html     # Static HTML page (passed through)
  keys.js           # Static JS (passed through)
  data/             # Static data files (passed through)
docs/               # MkDocs output (site_dir) - don't edit manually

On mkdocs build, everything in blog/ ends up in docs/. Markdown files get rendered with the theme. HTML, JS, CSS, and other files are copied unchanged. GitHub Pages serves docs/.

Key Points

  • Never manually edit files in docs/ — they'll be overwritten on next build.
  • Put all static assets in blog/ alongside your Markdown.
  • Add a .nojekyll file in blog/ to prevent GitHub from running Jekyll.
  • Reference static pages in nav without a leading slash:
nav:
  - 'Home': 'index.md'
  - 'Media': 'media.html'
  - 'Calendar': 'calendar.html'

Using a leading / makes MkDocs treat the path as an external link, so it won't validate that the file exists.

How to upgrade Ubuntu without their upgrade tool

The upgrade problem

The heart of the problem is that sometimes the Ubuntu release upgrade simply fails. This happened to me when trying to upgrade to plucky (25.04). The upgrade tool would just fail, and I tried waiting it out, hoping Ubuntu would fix the bug. No such luck. Finally I decided to do the upgrade manually, and it worked like a charm.

The manual upgrade solution

Sync up

The first thing you need to do is sync up with the previous release:

$ sudo apt update
$ sudo apt dist-upgrade

Disable third party repos

The next step is to disable any non-Ubuntu package sources under /etc/apt/sources.list.d. I usually just create a folder called /etc/apt/sources.list.moved and move everything except the Ubuntu sources there.

Setup the ubuntu source to the new distribution

Update /etc/apt/sources.list.d/ubuntu.sources to the following content (replace plucky with your target release's codename):

Enabled: yes
Types: deb
URIs: http://us.archive.ubuntu.com/ubuntu
Suites: plucky plucky-updates plucky-security plucky-backports
Components: main restricted universe multiverse
Architectures: amd64
Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg

Upgrade and solve all issues

$ sudo apt update
$ sudo apt dist-upgrade

You may need to resolve conflicts along the way, but they are standard package-management issues.

Reboot

And that's it.

Move from Google-Chrome to Firefox on Linux

The problems of Google-Chrome

There are several issues with Google-Chrome, some specific to Linux, some not:

  • Google-Chrome spies on you, sending far too much information to Google and to advertisers.
  • Google-Chrome uses far too much CPU on Linux, and its responsiveness is much worse than Firefox's. I've seen the difference firsthand on touch-typing practice sites.
  • Google-Chrome writes to disk excessively on Linux, wearing out your drive. This is a known issue: https://unix.stackexchange.com/questions/438456/google-chrome-high-i-o-writes

The result of all of this is that I recommend Firefox on Linux rather than Google-Chrome.

I wrote a script called browser_move_to_firefox.sh where you can see all the configs that need to be changed when moving to a different browser.

Problems with Netflix web and Netflix webos clients

I've had some issues with the Netflix service recently.

Here is my grievance list:

  • The UI is too intrusive: it starts previews of videos/shows while you are just browsing, and this behaviour cannot be turned off. Videos also play from the start even though the UI clearly shows they are mid-viewing. Very annoying, since I have to find the right position again.
  • Items disappear from “My List” with no heads-up warning. Very annoying. Sometimes this is because Netflix removes shows from the platform, which is also annoying, and what's more, they don't clearly state inside the app what is going to go away and when. I have to go online and find out for myself.
  • Things disappear from the “Continue watching as…” list with no heads-up. Very annoying, and it forces me to maintain my own list of things I'm in the middle of watching. Sometimes this happens because a show is leaving the platform (again, no heads-up) and sometimes for no apparent reason at all.
  • The UI does not let me keep more than one list. I need one for things I've seen, one for things I want to see, and one for things I'm in the middle of seeing (see above for why Netflix's support for stuff you are mid-way through is terrible).
  • The site doesn't provide an API for getting your data out of Netflix. This may be a problem shared by only a small minority of programming-inclined users, but it is important to me.

These problems are endemic to Netflix in general, not just to a certain Netflix app or its website, so they cannot be solved at the application level. Netflix really needs to fix core issues to make progress on any of them.

As a result of all this I decided to leave Netflix. Bye bye.

Open heart surgery on a Fatar StudioLogic SL880

This one is for all of you who have a Fatar StudioLogic SL880 or a similar keyboard. If one of your keys stops working and slumps down, an internal plastic part may have broken, in which case you either send the keyboard to the shop or do surgery on it yourself. This guide is for the brave of heart who want to take the surgery road. Why do it? Because you are brave, and because you don't want to haul a heavy keyboard to an expensive lab that will fix it for lots of money. The idea is to take the plastic part from an unused key (I used the lowest notes) and fit it in place of the broken one. One piece of advice: no fear, and read the entire guide before starting! Photos were taken with my iPhone; you can click on them to get a more detailed image.

Here are the stages:

First gut out the keyboard. You'll have to open 6 deep screws (hidden in trenches), 3 on either side at the bottom of the case. It's hard, but doable. I also released 6 more screws at the bottom and gutted the keyboard completely. You really don't have to do that, but I wanted to clean the inside while I was at it.

The SL-880 case

The keyboard gutted — case with PCB exposed

Now find the key(s) that cause(d) the problem. Use a small flat screwdriver to free a key: insert the screwdriver into the back of the key and press the small plastic tab. Once it's pushed, the key can be pulled upwards and released. You will now see the problem.

The broken green plastic piece — and a good one for comparison

Close-up of the broken plastic

In order to fix the problem you will have to release all keys! Yes, I know this hurts, but there is a long steel rod that runs through all of them. As long as the keys are clicked into place they apply pressure on the rod, and you will not be able to pull it out or, if you do manage to pull it out, to get it back in again. So release all the keys with the screwdriver as before. You can either put them to the side or keep them in place; I started with the former and ended up with the latter, which is better. Since you will be releasing all the keys anyway, this is your chance to clean them as well.

Keys released — the internal mechanism exposed

Another angle of the exposed mechanism

Close-up of the key mechanism

During the whole process watch out for the small springs. Each key has one and the spring is not held by anything once you release the keys...

Now you will get to a state where the steel rod no longer runs through the key you want to work on...

Keys with the steel rod visible

Close-up showing the key labels

Get the bad plastic piece out and put in a good one taken from an unused key. I used the bottom-most notes.

Keys close-up — the green plastic holders

Another view of the key mechanism

Some keys on the side. I pulled a couple out only to realize that it is better to keep them in place, to avoid having to reconstruct exactly where each key goes. In any case, if you do pull them out it is not a big deal, since the keys are all numbered: white keys are labelled "A B C D E F G" and black ones "1 2 3 4 5", standing for C#, D#, F#, Ab, Bb. The black keys look interchangeable, so their numbers matter less than those of the white keys. The ends of the keyboard have special keys; keep an eye on those.

Removed keys and screwdriver on a table

Another view of the removed keys

Keys leaning with the steel rod pulled out

Another angle of keys with rod

If you do decide to gut out the keyboard completely by removing the extra set of 6 screws at the bottom, you will also be able to clean the case itself. If you go that route, remember to release the keyboard only after you disconnect the 4 data cables (two fat, two thin) that connect the keyboard to the case. Here is an image of the case after the cleanup...

The cleaned case with PCB

The SL-880 label on the back

The whole procedure took me a bit over three hours. Well worth it.

More links about Fatar fixes: bad sounds electronics (original link dead), hardware issues (original link dead)

The official owners guide (from my site): fatar-sl880.pdf

Reviews of the Fatar SL-880: Harmony Central (original link dead)

Java runtime environment control

There are four ways to control the Java environment at runtime:

  • The _JAVA_OPTIONS environment variable.
  • The command line when launching the Java virtual machine.
  • Java source code. In this case you must set the option before it is picked up by whatever subsystem it is intended for.
  • In Java Web Start you can also use the JNLP file to control the environment passed to the executing JVM.

Examples of each, in the same order:

  • export _JAVA_OPTIONS='-Dawt.useSystemAAFontSettings=lcd'
  • java -Dawt.useSystemAAFontSettings=lcd [arguments...]
  • System.setProperty("awt.useSystemAAFontSettings","lcd");
  • <property name="awt.useSystemAAFontSettings" value="lcd"/> (under the resources element)

Each of these methods naturally has its own advantages and disadvantages. In Java Web Start you have little control over environment variables or the command line, but two options (the JNLP file and the source code itself) remain open to you.

Some defaults, like the anti-aliasing one, are notoriously bad, and setting the property (as shown above) gives a much better look and feel.
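As a sketch of the source-code route (the class name is mine, and `swing.aatext` is an additional flag that older Swing releases read), the key point is to set the property before any AWT or Swing class initializes:

```java
// Minimal sketch: set the font-smoothing properties before any AWT/Swing
// class loads, otherwise the toolkit may already have cached the defaults.
public class FontSmoothingDemo {
    public static void main(String[] args) {
        System.setProperty("awt.useSystemAAFontSettings", "lcd");
        System.setProperty("swing.aatext", "true"); // older releases read this flag
        // Only now is it safe to build the UI, e.g. via
        // javax.swing.SwingUtilities.invokeLater(...).
        System.out.println(System.getProperty("awt.useSystemAAFontSettings"));
    }
}
```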

The values of the awt.useSystemAAFontSettings key are as follows:

  • false corresponds to disabling font smoothing on the desktop.

  • on corresponds to Gnome Best shapes/Best contrast (no equivalent Windows setting).

  • gasp corresponds to Windows Standard font smoothing (no equivalent Gnome desktop setting).

  • lcd corresponds to Gnome's subpixel smoothing and Windows ClearType.

What is the best option to choose? Well - I really don't know. On my laptop lcd looks best. Let me know about your own experience...