
Blog Entries

Welcome! This page lists all my technical articles, notes, and findings.

Here are the latest entries:

MkDocs with GitHub Pages: File Layout That Works

If you use MkDocs to build a site hosted on GitHub Pages, and you also have static files (HTML, JS, CSS) that aren't part of the blog, getting the file layout right can be tricky. Here's what I learned.

The Problem

MkDocs wipes its output directory (site_dir) on every build. If you put your static files directly in docs/ (the default GitHub Pages root), mkdocs build deletes them.

The Solution

Put everything in the MkDocs source directory (docs_dir). MkDocs copies non-Markdown files through as-is.

My mkdocs.yml:

docs_dir: "blog"
site_dir: "docs"

My layout:

blog/               # MkDocs source (docs_dir)
  index.md          # Blog home page
  about.md
  posts/            # Blog posts (Markdown)
  media.html        # Static HTML page (passed through)
  calendar.html     # Static HTML page (passed through)
  keys.js           # Static JS (passed through)
  data/             # Static data files (passed through)
docs/               # MkDocs output (site_dir) - don't edit manually

On mkdocs build, everything in blog/ ends up in docs/. Markdown files get rendered with the theme. HTML, JS, CSS, and other files are copied unchanged. GitHub Pages serves docs/.

Key Points

  • Never manually edit files in docs/ — they'll be overwritten on next build.
  • Put all static assets in blog/ alongside your Markdown.
  • Add a .nojekyll file in blog/ to prevent GitHub from running Jekyll.
  • Reference static pages in nav without a leading slash:
nav:
  - 'Home': 'index.md'
  - 'Media': 'media.html'
  - 'Calendar': 'calendar.html'

Using a leading / makes MkDocs treat the path as an external URL and it won't validate the file exists.
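As a hedged sketch, here is how the source layout above can be bootstrapped in a throwaway sandbox (the file names are the ones from this post; mkdocs itself is assumed to be installed separately and is not run here):

```shell
# Sketch: recreate the post's source layout in a scratch directory
site=$(mktemp -d)
mkdir -p "$site/blog/posts" "$site/blog/data"
touch "$site/blog/index.md" "$site/blog/about.md"
touch "$site/blog/media.html" "$site/blog/calendar.html" "$site/blog/keys.js"
touch "$site/blog/.nojekyll"   # stops GitHub Pages from running Jekyll
ls -A "$site/blog"
# With docs_dir: "blog" and site_dir: "docs" in mkdocs.yml, running
# `mkdocs build` renders the .md files and copies the rest through unchanged.
```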

How to upgrade Ubuntu without their upgrade tool

The upgrade problem

The heart of the problem is that sometimes the Ubuntu upgrade tool simply fails. This happened to me when upgrading to plucky (25.04): the tool would just fail, and waiting it out in the hope that Ubuntu would fix the bug got me nowhere. Finally I decided to do the upgrade manually, and it worked like a charm.

The manual upgrade solution

Sync up

The first thing you need to do is sync up with the previous release:

$ sudo apt update
$ sudo apt dist-upgrade

Disable third party repos

The next thing is to manually disable any non-Ubuntu package sources in /etc/apt/sources.list.d. I usually just create a folder called /etc/apt/sources.list.moved and move everything except the Ubuntu files there.
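A minimal sketch of that move, shown in a throwaway sandbox so nothing real is touched (on an actual system the two directories are /etc/apt/sources.list.d and /etc/apt/sources.list.moved, the commands need sudo, and the third-party file names below are made up):

```shell
# Sandbox stand-ins for /etc/apt/sources.list.d and /etc/apt/sources.list.moved
src=$(mktemp -d)
dst=$(mktemp -d)
touch "$src/ubuntu.sources" "$src/google-chrome.list" "$src/vscode.list"

# Park everything except the official Ubuntu source
for f in "$src"/*; do
  case "$(basename "$f")" in
    ubuntu.sources) ;;        # keep the official Ubuntu source
    *) mv "$f" "$dst"/ ;;     # move third-party lists aside
  esac
done
ls "$src"    # only ubuntu.sources remains
```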

Setup the ubuntu source to the new distribution

Update /etc/apt/sources.list.d/ubuntu.sources to the following content (replace plucky with your target release's codename):

Enabled: yes
Types: deb
URIs: http://us.archive.ubuntu.com/ubuntu
Suites: plucky plucky-updates plucky-security plucky-backports
Components: main restricted universe multiverse
Architectures: amd64
Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg
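If your existing ubuntu.sources already has this shape, a one-line sed can swap the release codename. This is demonstrated on a temp copy; on a real system you would target /etc/apt/sources.list.d/ubuntu.sources with sudo, and oracular/plucky stand in for your actual old and new codenames:

```shell
# Work on a temp file standing in for /etc/apt/sources.list.d/ubuntu.sources
f=$(mktemp)
printf 'Suites: oracular oracular-updates oracular-security oracular-backports\n' > "$f"
sed -i 's/oracular/plucky/g' "$f"    # old codename -> new codename
cat "$f"
```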

Upgrade and solve all issues

$ sudo apt update
$ sudo apt dist-upgrade

You will typically need to resolve some issues along the way (held packages, configuration-file prompts and the like), but they are standard things.

Reboot

And that's it.

Move from Google-Chrome to Firefox on Linux

The problems of Google-Chrome

There are several issues with Google-Chrome, some specific to Linux, some not:

  • Google-Chrome spies on you and sends far too much information to Google and advertisers.
  • Google-Chrome uses far too much CPU on Linux, and its responsiveness is much worse than Firefox's. I've seen this first-hand on touch-typing sites.
  • Google-Chrome writes to disk excessively on Linux and wears out your drive. This is a known issue: https://unix.stackexchange.com/questions/438456/google-chrome-high-i-o-writes

The result of all of this is that I recommend Firefox on Linux rather than Google-Chrome.

I wrote a script called browser_move_to_firefox.sh where you can see all the configs that need to be changed when moving to a different browser.

Problems with Netflix web and Netflix webos clients

I've had some issues with the Netflix service recently.

Here is my grievance list:

  • The UI is too intrusive, starts preview of videos/shows when you are just browsing. Cannot turn this behaviour off. Video plays from the start even though it clearly shows that the video is in mid viewing. Very annoying since I have to find the right position again.
  • Items disappear from “My List” with no heads-up warning. Very annoying. Sometimes this is because Netflix removes shows from the platform, which is also annoying, and what’s more – they don’t clearly state, inside the app, what is going away and when. I have to go online and find out for myself.
  • Things disappear from the “Continue watching as…” list with no heads up. Very annoying and forces me to maintain my own list of stuff I’m in the middle of watching. Sometimes this happens because a show is going off the platform (again, no heads up) and sometimes for no apparent reason at all.
  • The UI does not allow me to keep more than one list. I need one for things I’ve seen, one for things I want to see, and one for things I’m in the middle of watching (see above why Netflix support for stuff you are in the middle of watching is terrible).
  • The site doesn’t provide an API for getting your data from Netflix. This may be a problem shared by a small minority of programming inclined users but it is important to me.

These problems are endemic to Netflix in general, not to a particular Netflix app or its website, so they cannot be solved at the application level. Netflix needs to fix core issues to make progress on any of them.

As a result of all this I decided to leave Netflix. Bye bye.

Open heart surgery on a Fatar StudioLogic SL880

This one is for all of you who have a Fatar keyboard of the StudioLogic SL880 or a similar model. If one of your keys stops working and slumps down, it may be that an inner plastic piece has broken, in which case you will need to either send the keyboard to the shop or do surgery on it yourself. This post is for the brave of heart who want to take the surgery road. Why should you do it? Because you are brave, and because you don't want to haul the heavy keyboard to an expensive lab that will fix it for lots of money. The idea is to take a plastic piece from one of the unused keys (I used the lowest notes) and put it in place of the broken one. One piece of advice: no fear - and read the entire guide before starting! Photos were taken with my iPhone; you can click on them for a more detailed image.

Here are the stages:

First, gut the keyboard. You'll have to open 6 deep screws (hidden in trenches), 3 on either side at the bottom of the case. It's hard but doable. I also removed 6 more screws at the bottom and gutted the keyboard completely. You really don't have to do that, but I wanted to clean the inside while I was at it.

The SL-880 case

The keyboard gutted — case with PCB exposed

Now find the key(s) causing the problem. Use a small flat screwdriver to free the keys: insert it into the back of the key and press on the small plastic catch. Once the catch is pushed, the key can be pulled upwards and released. You will now see the problem.

The broken green plastic piece — and a good one for comparison

Close-up of the broken plastic

In order to fix the problem you will have to release all the keys! Yes - I know this hurts, but there is a long steel rod that runs through all of them. As long as the keys are clicked into place they apply pressure on the rod, and you will not be able to pull it out or, if you do pull it out, to get it back in again. So release all the keys with the screwdriver as before. You can either put them to the side or keep them in place; I started with the former and ended up with the latter, since it is better. Since you will be releasing all the keys anyway, this is your chance to clean them as well.

Keys released — the internal mechanism exposed

Another angle of the exposed mechanism

Close-up of the key mechanism

During the whole process watch out for the small springs. Each key has one and the spring is not held by anything once you release the keys...

Now work the steel rod out until it no longer passes through the key you want to fix...

Keys with the steel rod visible

Close-up showing the key labels

Get the bad plastic piece out and put in a good one taken from an unused key. I used the bottommost notes.

Keys close-up — the green plastic holders

Another view of the key mechanism

Some keys set to the side. I pulled out a couple only to realize that it is better to keep them in place, to avoid having to reconstruct exactly where each key goes. In any case, if you do pull them out it is not a big deal, since the keys are all numbered. White keys are labeled "A B C D E F G" and black ones are numbered "1 2 3 4 5", standing for C#, D#, F#/Gb, Ab, Bb. It looks like the black keys are interchangeable, so their numbers are not as important as those of the white keys. The ends of the keyboard have special keys - keep an eye on those.

Removed keys and screwdriver on a table

Another view of the removed keys

Keys leaning with the steel rod pulled out

Another angle of keys with rod

If you do decide to gut the keyboard completely by removing the extra set of 6 screws at the bottom, you will be able to clean the case itself. If you go this route, remember to release the keyboard only after you disengage the 4 data cables (two fat, two thin) that connect it to the case. Here is an image of the case after the cleanup...

The cleaned case with PCB

The SL-880 label on the back

The whole procedure took me a bit over 3 hours. Well worth it.

More links about Fatar fixes: bad sounds electronics (original link dead), hardware issues (original link dead)

The official owners guide (from my site): fatar-sl880.pdf

Reviews of the Fatar SL-880: Harmony Central (original link dead)

Java runtime environment control

There are four ways to control the Java runtime environment:

  • The _JAVA_OPTIONS environment variable.
  • The command line when running the Java virtual machine.
  • Java source code. In this case you must make sure to set the option before it is picked up by whatever subsystem it is intended for.
  • In Java Web Start you can also use the JNLP file to control the environment passed to the executing JVM.

Examples of each:

  • export _JAVA_OPTIONS='-Dawt.useSystemAAFontSettings=lcd'
  • java -Dawt.useSystemAAFontSettings=lcd [arguments...]
  • System.setProperty("awt.useSystemAAFontSettings","lcd");
  • <property name="awt.useSystemAAFontSettings" value="lcd"/> (under the resources element of the JNLP file)

Each of these methods naturally has its own advantages and disadvantages. In Java Web Start you have a hard time controlling environment variables or the command line, but two options (the JNLP file and the source code itself) are still open to you.

Some properties, like the anti-aliasing option, have notoriously bad defaults, and setting them (as shown above) will give you a much better look and feel.

The values of the awt.useSystemAAFontSettings key are as follows:

  • false corresponds to disabling font smoothing on the desktop.

  • on corresponds to Gnome Best shapes/Best contrast (no equivalent Windows setting).

  • gasp corresponds to Windows Standard font smoothing (no equivalent Gnome desktop setting).

  • lcd corresponds to Gnome's subpixel smoothing and Windows ClearType.

What is the best option to choose? Well - I really don't know. On my laptop lcd looks best. Let me know about your own experience...

Debugging shared library problems

A tip: sometimes you install software from source, and the library search order makes it a mess to figure out which library you are actually using. A useful tool is ldconfig -p, which prints the dynamic linker's cache and lets you see which libraries will actually be picked up.
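For example, to see which copy of a library the linker will resolve (libc here as a stand-in; grep for whatever library you installed from source):

```shell
# Dump the dynamic linker's cache, then look up one library as an example
libs=$(ldconfig -p)
printf '%s\n' "$libs" | grep 'libc\.so' | head -3
```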

Using gpg-agent to write authenticating scripts

Sometimes you want to write a shell (or other) script that has to run under sudo. Under such conditions, if the script does anything that requires authentication it will not act as expected: in plain terms, the regular authentication popup will not appear. The tool may be written in a way that deals with the problem and falls back on other authentication methods, but it may not. What you really want is for your own authentication agent (the little program called gpg-agent, which runs on almost every Linux distribution from the time you log in until the time you log out) to do the authenticating. This saves you a lot of typing. Imagine the script has to do something requiring authentication X times: without an agent it cannot cache the pass-phrases, so you will have to retype the pass-phrase several times. It may also be that your agent already has your pass-phrase in its cache, saving you from typing it yet another time.

Ok. So how do you do it? In your original environment you have a variable called GPG_AGENT_INFO, which holds the details of how to connect to your authenticating agent. Regular scripts inherit this environment variable automatically, but scripts run via ssh or sudo do not, so you have to make the variable available to them. One way is to pass it over the command line and export it as an environment variable as soon as the script starts. Obviously, the users those scripts run as must have permission to talk to your gpg-agent.
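Here is a sketch of the mechanism. The socket path below is made up for illustration; your login session sets the real value (and note that modern GnuPG, 2.1 and later, finds the agent via a standard socket, so GPG_AGENT_INFO may not exist at all on your system). With sudo, the same hand-off would be sudo env GPG_AGENT_INFO="$GPG_AGENT_INFO" ./my-script.sh:

```shell
# Made-up socket value standing in for what your session would provide
GPG_AGENT_INFO="${GPG_AGENT_INFO:-/run/user/1000/gnupg/S.gpg-agent:12345:1}"
export GPG_AGENT_INFO

# env carries the variable into the child process, exactly as it would via
#   sudo env GPG_AGENT_INFO="$GPG_AGENT_INFO" ./my-script.sh
out=$(env GPG_AGENT_INFO="$GPG_AGENT_INFO" sh -c 'echo "$GPG_AGENT_INFO"')
echo "$out"
```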

Producing MySQL dates from Perl

Ever written the occasional Perl script and wanted to insert the current date and time into a MySQL database? Here is the function to do it. This works for a column of type 'datetime'.

# function to return the current time in mysql format
sub mysql_now() {
    my ($sec, $min, $hour, $mday, $mon, $year, $wday, $yday, $isdst) = localtime(time);
    my $result = sprintf("%04d-%02d-%02d %02d:%02d:%02d",
                         $year + 1900, $mon + 1, $mday, $hour, $min, $sec);
    return $result;
}

Test procedures for new memory installations

When you buy a new computer, or get one and are unsure of the quality of its memory, or when you buy, upgrade, or add memory, you should test it before simply going on to use it. The reason is quite intricate. In all probability the memory will either work or it won't, and if the machine works that is a good indication that the memory is fine. But in a few cases your machine may exhibit very strange behavior indeed: various programs crashing, machine freezes, kernel crashes and the like. In that case, which may happen some time after the upgrade, you may fail to connect the symptoms with hardware memory issues and attribute them to other factors like OS upgrades, driver installations, or peripheral failures. This may lead you, as it has led me, on wild goose chases after non-issues, which will certainly drive you insane or into writing blog posts at 4 AM. So what do I suggest? A simple and short 2-step procedure to execute whenever you put new memory to use, to be sure that it is functional and well configured. This can also save you money, since in my experience the probability of buying faulty memory is quite high (at least 15% by my statistics).

The first phase is to run the ubiquitous memtest86+, available from the boot menu of most current Linux distros. This test runs for some time, and long years of using it have led me to a solid statistic: if memtest does not find a problem with your memory in the first 30 seconds, it will not find any in the next 30 hours. But then again, this is just a statistic - feel free to run it for as long as you wish. If memtest fails, return the chips to the manufacturer and get new ones (if you feel it is the chips' fault - see the note below). If it succeeds, go on to the second phase of configuring the memory properly.

Once the memory is installed, open your BIOS configuration and see how it is configured. How are its parameters (speed and 4 more numbers) set? Is it on automatic or manual? Do you have heterogeneous memory banks? If so, what is the speed of each, and what is the overall speed of the memory subsystem? Why should you know all of this, you rightly ask. Well, in a perfect world you would just buy memory, plug it in, and the BIOS would configure and use it properly. Alas, this is not the world we live in. In reality you usually buy the motherboard at date X and the upgraded or new memory at date Y, a couple of years later. This means the memory you are buying may be too fast for your motherboard. Shouldn't your BIOS be able to handle this? Well, yes and no. In many cases it does, but in some it doesn't, and believe me, you don't want to get stuck in the latter.

In my case I installed DDR2 800 MHz memory on a standard Intel board, which did not complain; the BIOS ran that memory at the auto-selected "optimal" speed of 800 MHz. There seemed to be no problem, and memtest ran smoothly. It's just that when the 2 cores accessed the memory together at high speed, they put more pressure on it than memtest did, and memory faults started happening.

The second phase is to see whether the memory works properly under multi-core load. This phase can also be used to "overclock" your RAM and make sure you will not experience any weird side effects from the overclocking. Here we test the memory in practice using all N cores. I found that the best way to achieve this is to compile the Linux kernel on the machine using make -j N, where N is the number of cores. Whenever I had memory problems, this compilation would crash in some spectacular way, in random places, and so served as a clear indication of RAM issues.
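A sketch of that phase (the kernel tree location is hypothetical; nproc comes with GNU coreutils):

```shell
# -j should match the core count; nproc reports it
N=$(nproc)
echo "will build with $N parallel jobs"
# Real run, in an unpacked kernel tree (hypothetical path):
#   cd ~/src/linux && make defconfig && make -j "$N"
# Random, non-repeatable build crashes here are a strong hint of bad RAM.
```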

If you want to learn more about memtest and dual core, check out this and this on the memtest86+ discussion board. It seems that memtest86 (as opposed to memtest86+) does have multi-core support. Cool. The problem is that on Linux systems, memtest86+ is usually the only one installed...

If you want to know how to compile a Linux kernel learn more at this URL.

memtester: There is a Linux package called memtester that tests memory from user space. In Ubuntu the package is simply called memtester; it is developed here. I have tried it out and it is a fine piece of code, but it does not do multi-threaded testing with CPU affinity - you have to do that yourself at the command line by running two instances of memtester and assigning them to different CPUs via taskset. Another problem with memtester is that you have to tell it how much RAM to test, which is hard to get right since you want to test as much as possible. The size to test is roughly total_ram_size - (size_of_os + size_of_all_currently_running_programs), which is hard to calculate, and if you miscalculate the program may fail, since it locks the memory it gets using mlock (which you need permission to perform). It may also push other programs you are running into swap, since they are not locked into memory.

The kernel compilation mentioned above is better in my opinion, for the following reasons: it uses all of your CPUs, and it uses every last bit of RAM you have, since the kernel is big and the compilation will fill all of your Linux cache - meaning all of your spare memory.

Note: as mentioned in the memtester documentation, if you do find any problems with your memory it may not be the fault of your memory chips at all. It may be the fault of your motherboard not supplying enough power for the chips or the CPU, it may be an overheating CPU, a mis-configured BIOS or other reasons.

Please leave comments if you think that I am wrong in any of the above and I promise to improve the post if you convince me that I could do better...