SysAdmin Blog

Debian bullseye / Devuan chimaera openssl minimum TLS version

Alexander Bochmann Saturday 23 of April, 2022
I recently spent way too much time trying to find out why my mail server wasn't able to send mail to a system that apparently only supported TLSv1. None of the TLS options in the sendmail configuration made any difference.

Things started to click only after I noticed that connecting to the system in question via openssl s_client produced the same error message:

> openssl s_client -connect mail.some.domain:25 -starttls smtp
139770261177664:error:1425F102:SSL routines:ssl_choose_client_version:unsupported protocol

As it turns out, /etc/ssl/openssl.cnf in current Debian / Devuan has the following global configuration setting:

MinProtocol = TLSv1.2

So yeah, anything using openssl that doesn't explicitly override that configuration will not be able to make TLS connections to systems that don't support TLSv1.2...

Changing the setting to MinProtocol = TLSv1 made it possible to deliver my mail.
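
For context, the relevant part of the Debian /etc/ssl/openssl.cnf looks roughly like this (section names as shipped in bullseye; the CipherString line may differ between releases):

```ini
openssl_conf = openssl_init

[openssl_init]
ssl_conf = ssl_sect

[ssl_sect]
system_default = system_default_sect

[system_default_sect]
# system-wide floor for every program linked against openssl;
# lowering this re-enables TLSv1/TLSv1.1 for legacy peers
MinProtocol = TLSv1.2
CipherString = DEFAULT@SECLEVEL=2
```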

network interfaces renamed following Proxmox 7 upgrade

Alexander Bochmann Wednesday 24 of November, 2021
After upgrading my standalone Proxmox host from PVE 6 to 7, the interface names were suddenly changed back from "predictable" to the old ethX names. The setup is Proxmox on Debian, so when I initially set up the system, I manually installed Debian 10 first and then added the Proxmox 6 repositories and packages.

After some debugging it turned out there was an old systemd network configuration file that prevented systemd-udevd from starting up correctly:

systemd-udevd[xxxx]: /etc/systemd/network/99-default.link: No valid settings found in the [Match] section, ignoring file. To match all interfaces, add OriginalName=* in the [Match] section.

I currently have no idea where the file /etc/systemd/network/99-default.link originated from (it doesn't have a package owner after the upgrade), but apparently its syntax is invalid for the systemd-udevd in Debian Bullseye. Removing the file solved the problem, and I'm now back to the interface names in the ifupdown2 configuration used by Proxmox (I rebooted the system to make sure it comes up correctly now).
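
A quick way to check whether a file under /etc belongs to a package is dpkg -S; a stray file like this one produces no match (the path here is the one from the log above):

```shell
# prints the owning package, or falls back to a note when nothing claims the file
dpkg -S /etc/systemd/network/99-default.link 2>/dev/null \
  || echo "no package owns this file"
```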

WireGuard on the OpenPandora

Alexander Bochmann Sunday 02 of May, 2021


WireGuard is a VPN system built on modern cryptography that provides for a comparatively simple setup and uses UDP as a transport, with moderate overhead. It "just works" for road warrior setups where one end doesn't have a stable address.

The OpenPandora (cache) is an ARM Linux pocket computer, first released around 2010, that uses an ancient OpenEmbedded Ångström as base OS, with a Linux 3.2 kernel that has quite a few device-specific modules that were never upstreamed.

A couple of weeks ago, I decided to try to combine the two, provided it wouldn't turn out to be too much of an effort. With that in mind, I looked at the wireguard-go userspace implementation instead of attempting to make the WireGuard linux-compat kernel module build against the outdated OpenPandora kernel.

Setting up a tunnel requires two WireGuard components:

  1. a WireGuard protocol implementation (like the kernel module or wireguard-go)
  2. a version of wireguard-tools that is used to provide a configuration to WireGuard

As for wireguard-go, I made a short attempt at building golang on the Pandora itself, but hit the "too much effort" barrier pretty quickly. Fortunately, golang now provides for cross-compiling to supported platforms - but the Pandora is not one of those: The Pandora OS (SuperZaxxon) is built with the outdated "softfp" ARM binary ABI, which is backwards-compatible with ARM CPUs that don't have floating-point hardware, but is actually capable of using VFP and NEON in the backend, if supported by the compiler. The workaround here is to cross-compile with ARMv5 as the target architecture, which produces a pure software floating-point executable (which by design also works on softfp).

cross-building wireguard-go

I built wireguard-go on a Debian Buster host, and since buster-backports only provides go1.14, I couldn't use the most recent version (which currently requires go1.16), so I went with wireguard-go 0.0.20210212 instead.

After checking out or unpacking the sources, building a binary is a simple matter of running make with the appropriate environment parameters:

env GOOS=linux GOARCH=arm GOARM=5 make

Just copy the resulting wireguard-go over to /usr/local/bin on your Pandora and make it executable.
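
Before copying it over, it's worth checking that the cross-compile actually produced an ARM binary; assuming the GNU file utility is available on the build host:

```shell
# a GOARM=5 build should report something like:
# "ELF 32-bit LSB executable, ARM, EABI5 version 1 (SYSV), ..."
file wireguard-go 2>/dev/null || echo "file utility not available"
```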

compiling wireguard-tools

wireguard-tools has only a small set of build dependencies, the most important of which unfortunately isn't even mentioned: On Linux, you need a copy of the kernel headers that roughly matches the destination kernel.

Turns out that SuperZaxxon only ships the include files for the initial kernel (2.6), but not those for the last available kernel build. Also Linux 2.6 apparently doesn't provide some required functions, so my first attempt failed.

I ended up downloading the latest 3.2 kernel sources from the OpenPandora git.

When I compile software on the Pandora, I usually first try to use the cdevtools PND - it has an older gcc, but is generally more lightweight than the other option (Code::Blocks). So I start cdevtools, make a src/wireguard directory, and then download and unpack both wireguard-tools and the Pandora kernel sources in there.

In the wireguard-tools directory, go to src/ and run something like this:
env CFLAGS="-I`pwd`/../../pandora-kernel-pandora-3.2-c4c68a4/include -Os -mtune=cortex-a8 -mcpu=cortex-a8 -mfpu=neon -mfloat-abi=softfp -pipe" make

...and then, to install the resulting programs below /usr/local:
sudo env PREFIX=/usr/local WITH_WGQUICK=yes WITH_SYSTEMDUNITS=no make install

Pandora caveats

  • SuperZaxxon does not autoload the tun module, so /dev/net/tun doesn't exist. (Ironically, it would be loaded if /dev/net/tun did exist and then something tried to access the device...)
  • wg-quick uses some fancy bash i/o redirection which requires /dev/fd. Which is not there on the Pandora either, but it's easy to create, since it's just a symlink to /proc/self/fd.
  • Do not use a VPN interface name that starts with "w" (like the default of wg0)! It triggers bugs in other scripts on the OpenPandora, for example loading of the WiFi firmware will fail after a resume from sleep.
  • Add /usr/local/bin to the PATH of root so the binaries are found in their directory.
  • A couple of the advanced wg-quick functions fail, mostly due to missing or outdated tools. One that I encountered was changing nameservers, but I assume anything that makes changes to the firewall configuration will be broken too. I did not try calling external commands from the wg-quick config file yet (which might serve as a workaround for some uses).
  • Basic setup of a v4 tunnel with several routes has been tested successfully.
  • IPv6 is completely untested.

I wrote a small wrapper script, included as /usr/local/bin/wg-pandora in the tar file below, that creates a suitable environment for invoking wg-quick:

#!/bin/sh

if [ `id -u` -ne 0 ]; then
  echo "[!] script needs to be run as root, use su or sudo"
  exit 1
fi

if [ "$1" = "" ]; then
  echo "[!] please use the VPN interface name as parameter"
  echo "NOTE: do not use any device names starting with \"w...\" -"
  echo "      it will prevent Wifi reconfiguration on SuperZaxxon."
  exit 1
fi

if [ ! -f "/etc/wireguard/$1.conf" ]; then
  echo "[!] please create /etc/wireguard/$1.conf with a valid wg-quick configuration"
  exit 1
fi

if [ ! -e /dev/net/tun ]; then
  echo "[+] load tun kernel module"
  modprobe tun
fi

if [ ! -e /dev/fd ]; then
  echo "[+] create missing /dev/fd symlink"
  ln -s /proc/self/fd /dev/fd
fi

echo "[+] launching wg-quick"
/usr/local/bin/wg-quick up "$1"

exit 0


  • Download wireguard-pandora-20210502.tar.gz and unpack to the root directory:
    tar -C/ -xpf wireguard-pandora-20210502.tar.gz
  • Create a wg-quick (cache) configuration in /etc/wireguard (man pages are included in the download, but man is not installed on the Pandora by default).
  • Run /usr/local/bin/wg-pandora <if-name>. (Remember the note about interface names.)
  • You will need an existing WireGuard endpoint to connect to ;)
  • Manual setup using wg (see WireGuard quickstart) is also possible, as soon as the tun module has been loaded and wireguard-go is running.
  • There's a discussion thread over on the OpenPandora forums.
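
For reference, a minimal wg-quick configuration (e.g. /etc/wireguard/tun0.conf) might look like this - all keys, addresses, and the endpoint below are placeholders:

```ini
[Interface]
# generate a key pair with: wg genkey | tee privatekey | wg pubkey > publickey
PrivateKey = <private key of this host>
Address = 10.0.0.2/24

[Peer]
PublicKey = <public key of the remote endpoint>
Endpoint = vpn.example.com:51820
AllowedIPs = 10.0.0.0/24
# keeps NAT mappings alive for road-warrior setups
PersistentKeepalive = 25
```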

Apache httpd, reverse proxy, and caching

Alexander Bochmann Tuesday 24 of November, 2020
There's tons of guides out there on either how to set up Apache httpd as a reverse proxy, or how to enable (disk) caching for content being served.

The web has surprisingly little information on how to combine the two and have Apache cache content that's being retrieved from a proxied backend.

Just using the default configuration and then dropping something like a CacheEnable disk into the <Location ...> that holds your proxy rules will not work: Nothing ever is written to the cache directory.

With debug logging you see either nothing at all, or maybe a quick succession of AH00750: Adding CACHE_SAVE filter .. and AH00751: Adding CACHE_REMOVE_URL filter ... messages in the error log.

So what's up? Likely your configuration is entirely correct, but you're missing one statement:

CacheQuickHandler off

It seems that with the default of CacheQuickHandler being enabled, proxied content never hits the quick handler phase that allows it to be processed for caching.

When CacheQuickHandler is disabled, everything just drops into place, though some fine tuning might be required.

The current configuration for my use case of caching media for my Mastodon instance that's being retrieved from a horribly sluggish Minio backend looks like this:

<IfModule mod_cache_disk.c>
        CacheQuickHandler off
        CacheRoot /var/cache/apache2/mod_cache_disk
        CacheMaxFileSize 10000000
        CacheDirLevels 2
        CacheDirLength 1
        CacheLock off
        CacheIgnoreCacheControl On
        CacheIgnoreQueryString On
        CacheStoreNoStore On
        CacheIgnoreHeaders Set-Cookie X-Amz-Request-Id
</IfModule>

...and then:

<Location "/">
        Require all granted
        ProxyPass http://<backend-address>:9000/
        ProxyPassReverse http://<backend-address>:9000/
        <IfModule mod_cache_disk.c>
               CacheEnable disk
        </IfModule>
</Location>
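
To see whether requests are actually served from the cache, mod_cache can tag responses with diagnostic headers (these directives exist since Apache 2.4.3); they go into the same cache configuration block:

```apache
CacheHeader on        # adds an "X-Cache: HIT/MISS/REVALIDATE from <host>" header
CacheDetailHeader on  # adds X-Cache-Detail with the reason for the caching decision
```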

25Gbit ethernet is complicated...

Alexander Bochmann Monday 10 of February, 2020
We just spent about a week trying to put a bunch of systems into production that had been ordered with 25Gbit fiber interfaces. We had planned to collect those on two of our Arista 7050CX3, using 100GBit QSFP28 in 4 * 25GBit mode and MPO breakout cables to 4 * LC for the 25Gbit SFP28 end. So we cable everything up, configure our LACP channels on both ends, and ... nothing. All of the links stay down.

They do show a signal on the transceiver though (at least on the switch side, where we can look at the optics information). A show interfaces et10/1-4 status says "notconnect" for all four subinterfaces, and show interfaces et10/1-4 phy displays "errDisabled" on the PHY layer. We are stumped.

Over the course of the next few days, we try several changes, to no avail. Directly connecting two Arista switches works though, as does a direct connection between two end hosts. We even swap everything down to 40G on the Arista side and 10G SFP+ in the end hosts, which turns out perfectly fine (so at least our cabling is correct).

At this point, support for the appliances we're trying to connect gives us credentials for shell access. It's a non-root user on what turns out to be a normal Linux system, but at least I can see that it comes with QLogic Corp. FastLinQ QL45000 Series 25GbE controllers (for a short moment we had suspected we had the wrong controllers), and I can get some information by using ethtool. One of those is that ethtool reports the host interfaces as "25GBASE-KR", which tells me nothing. Someone on IRC mentions that "-KR" denotes an "electrical backplane" connection. Armed with those two small bits of information, I hit the search engines, and find a useful table in a document on the Marvell web site, accompanied by the following text:

The –S short reach interfaces aim to support high-quality cables without
Forward Error Correction (FEC) to minimize latency. Full reach interfaces
aim to support the lowest possible cable or backplane cost and the longest
possible reach, which do require the use of FEC. FEC options include
BASE-R FEC (also referred to as Fire Code) and RS-FEC (also referred to
as Reed-Solomon).

There are two different, incompatible error-correction mechanisms on the bitstream layer of 25Gbit interfaces!? I didn't know that.

Since the default on Arista switches seems to be Reed-Solomon, and I don't have any way to configure a detail like that on the end host, we change the configuration on the Arista side:

interface et10/1-4
error-correction encoding fire-code

That's all. We do the same for three other interface groups, and all links work just as expected (except for one that apparently has a bad transceiver in the end host). I call off the screen-sharing session with Arista support planned for five minutes later.
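
For what it's worth, on Linux hosts with a sufficiently new ethtool and a driver that supports it, the FEC mode can at least be queried (and sometimes set) from the host side; eth0 is a placeholder here:

```shell
# shows the configured and active FEC encoding, if the driver supports it
ethtool --show-fec eth0 2>/dev/null || echo "FEC query not supported here"
```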

backing up lxc container snapshots, Amanda style

Alexander Bochmann Sunday 10 of November, 2019
I'm probably about the only person in the world using that kind of setup, but here we go:

  • I have an active Amanda (cache) installation that I use to back up various UNIX systems (to disk, with a weekly flush out to a tape rotation)
  • I run a system with lxc containers, using btrfs as storage backend

On btrfs, lxc containers are just subvolumes mounted into the host filesystem, and container snapshots are btrfs snapshots attached to the snapshots/ subdirectory of the container host volume.

So I'm running a simple script on the lxc host each night that cycles through all the containers and creates a snapshot named "amanda" for each of them - deleting the previous version if present. The main loop of the bash script looks more or less like this:

if [ -d /${lxdpool}/snapshots/${container}/amanda ]; then
  lxc delete ${container}/amanda
  sleep 2
fi
lxc snapshot ${container} amanda

Amanda can do incremental backups using GNU tar (in addition to a host of other options). One of the less obvious stumbling blocks with this is that GNU tar takes the device ID into account when calculating incrementals - and as each btrfs snapshot is a new device, the default configuration will back up all of the files in the snapshot every day, even if the file metadata is unchanged. So to make this setup work, Amanda needs a new dumptype with a tar configuration that ignores the device ID (tar option --no-check-device). The amanda.conf on my backup server now defines this in addition to the pre-existing defaults:

define application-tool app_amgtar_snap {
    comment "amgtar for btrfs snapshots"
    plugin "amgtar"
    property "ONE-FILE-SYSTEM" "yes"  # use '--one-file-system' option
    property "ATIME-PRESERVE" "yes"   # use '--atime-preserve=system' option
    property "CHECK-DEVICE" "no"      # use '--no-check-device' if set to "no"
    property "IGNORE" ": socket ignored$"  # remove some log clutter
    property append "IGNORE" "directory is on a different filesystem"
}

define dumptype dt_amgtar_snap {
    comment "new dump type that uses the above application definition"
    program "APPLICATION"
    application "app_amgtar_snap"
}

define dumptype comp-user-ssh-tar-lxd-snap {
    global-ssh        # use global ssh transport configuration
    client_username "backup"
    program "GNUTAR"
    dt_amgtar_snap    # that's my new dumptype
    comment "partitions dumped with tar as lxd snapshot, using gnutar --no-check-device option"
    priority low
    compress client fast
    exclude list "./rootfs/.amandaexclude"  # each container can have individual exclude lists in /.amandaexclude
}

All that's left now is to add entries to the Amanda disklist that are using my new dump type:

host.example.com        /lxdpool/snapshots/container1/amanda     comp-user-ssh-tar-lxd-snap
host.example.com        /lxdpool/snapshots/container2/amanda     comp-user-ssh-tar-lxd-snap

adding a current FreeMiNT release into an existing EasyMiNT install on the Atari TT

Alexander Bochmann Sunday 03 of November, 2019
I spent this weekend installing EasyMiNT (cache) on my Atari TT, and then making it work with the Lightning VME USB board (cache).

Some more on the journey in getting there can be seen in these Fediverse threads:

  • I haven't had much luck in installing EasyMiNT to anything other than the C: drive
  • The MiNT kernel provided with EasyMiNT is too old to be able to load the Lightning drivers, but since I had successfully installed the EasyMiNT distribution already, I wanted to upgrade it with a current FreeMiNT (cache) release.
  • Booting the current MiNT kernel (as of 1-19-73f) hangs after the "Installing BIOS keyboard" message. This thread (cache) on atari-forum.com recommends removing BIGDOS.PRG from the AUTO folder. Apparently BIGDOS is not required when using recent MiNT kernels anyway (I also got rid of WDIALOG.PRG while I was at it).

EasyMiNT installed on C: boots the kernel from C:\MINT\1-19-CUR. I didn't want to touch that working part of the setup, so I downloaded a full snapshot from https://bintray.com/freemint/freemint/snapshots/ that uses the snapshot version as MiNT SYSDIR (C:\MINT\1-19-73f for my build). Changing from the EasyMiNT kernel to the current MINT030.PRG in C:\AUTO\ then implicitly executes everything else from the corresponding SYSDIR.

As it turns out, the USB drivers included with the current FreeMiNT distribution are incompatible with those from the Lightning VME driver disk. The easiest way is to rename $SYSDIR\USB to something else and replace the directory with the files from the TT\MINT directory in the Lightning distribution - and then add a missing file (ETH.UDD) attached to this forum post (cache). Using the ETH.UDD provided with FreeMiNT does not work and leads to an "API Mismatch" message.

To keep using most of the EasyMiNT setup, I adapted the boot sequence and MINT.CNF (some hints on the general boot layout can be found in MiNTBootSequence) by replacing some of the sln links. The relevant sections of my current configuration look like this (E: is my EasyMiNT ext2 filesystem):

# add some binaries provided by FreeMiNT, later referenced in PATH
sln c:/mint/1-19-73f/sys-root/bin              u:/sysbin
# GEM programs included in the FreeMiNT distribution
sln c:/mint/1-19-73f/sys-root/opt              u:/opt
sln c:/mint/1-19-73f/sys-root/share            u:/share
# EasyMINT links
sln e:/etc     u:/etc
sln e:/bin     u:/bin
sln e:/sbin    u:/sbin
sln e:/home    u:/home
sln e:/usr     u:/usr
sln e:/mnt     u:/mnt
sln e:/root    u:/root
sln e:/tmp     u:/tmp
# this line only works after removing the /usr/bin/xaaes symlink in EasyMiNT!
# with this, the EasyMiNT/SpareMiNT init script keeps starting XaAES without any further changes
sln c:/mint/1-19-73f/xaaes/xaloader.prg    u:/usr/bin/xaaes

# I've found that using TOS paths in MINT.CNF works better?
setenv PATH u:\sysbin,u:\bin,u:\usr\bin,u:\usr\sbin,u:\sbin,u:\c\mint\1-19-73f\xaaes

setenv TMPDIR u:\tmp

# provided by EasyMiNT, only works when the appropriate directories on E: are linked in
exec u:\c\mint\bin\sh u:\c\mint\bin\fscheck.sh

setenv TZ 'Europe/Berlin'
exec u:\sbin\tzinit -l

# load Lightning USB drivers
exec u:\c\mint\1-19-73f\usb\loader.prg

# use SpareMiNT init system, as installed by EasyMiNT

Linking in XALOADER.PRG via an sln link makes it easy to adapt the configuration to new releases. Most of the rest of the sln link tree comes from the MINT.CNF created by the EasyMiNT installer.

Apache 2.4 as a reverse proxy for Mastodon

Alexander Bochmann Friday 31 of May, 2019
The standard setup for Mastodon is to use nginx as a reverse proxy. After running into one missing feature too many, I recently switched my installation over to good old Apache.

There's an example Apache config (cache) in the unmaintained old documentation archive for Mastodon, and since I assume it's useless to try to update that, I'll quickly dump my current config here. There's no guarantee for correctness, but it currently seems to work for me. Note that this configuration does not do any caching for requests to static content retrieved through the reverse proxy.

The following Apache modules are used:

  • proxy
  • proxy_http
  • http2
  • proxy_http2
  • proxy_wstunnel
  • headers
  • socache_shmcb
  • ssl

General SSL configuration (personal preference, CipherSuite selection is probably going to age badly). TLS v1.3 is disabled since Ubuntu bionic ships an Apache version that's too old for that:

<IfModule mod_ssl.c>

        SSLCertificateFile     <path to combined public key / certificate chain file>
        SSLCertificateKeyFile  <path to private key>
        #   the referenced file can be the same as SSLCertificateFile
        #   when the CA certificates are directly appended to the server
        #   certificate for convenience.
        SSLCertificateChainFile <path to combined public key / certificate chain file>

        # SSLProtocol -all +TLSv1.2 +TLSv1.3
        SSLProtocol -all +TLSv1.2 +TLSv1.1
        SSLHonorCipherOrder on
        SSLCompression off
        SSLSessionTickets off
        SSLSessionCache "shmcb:logs/session-cache(512000)"
        SSLStaplingResponderTimeout 5
        SSLStaplingReturnResponderErrors off
        SSLUseStapling on
        SSLStaplingCache "shmcb:logs/stapling-cache(150000)"

        # needs to be generated first, see https://weakdh.org/sysadmin.html
        SSLOpenSSLConfCmd DHParameters /etc/ssl/dhparam.pem
</IfModule>


Mastodon vhost configuration:

<VirtualHost *:443>
        ServerAdmin webmaster@example.com
        ServerName mastodon.example.com

        SSLEngine on

        Protocols h2 http/1.1

        # fetch static files directly from local file system (adapt to installation path)
        DocumentRoot /home/mastodon/live/public

        Header always set Strict-Transport-Security "max-age=31536000"

        <LocationMatch "^/(assets|avatars|emoji|headers|packs|sounds|system)">
                Header always set Cache-Control "public, max-age=31536000, immutable"
                Require all granted
        </LocationMatch>

        <Location "/">
                Require all granted
        </Location>

        ProxyPreserveHost On
        RequestHeader set X-Forwarded-Proto "https"
        ProxyAddHeaders On

        # these files / paths don't get proxied and are retrieved from DocumentRoot
        ProxyPass /500.html !
        ProxyPass /sw.js !
        ProxyPass /robots.txt !
        ProxyPass /manifest.json !
        ProxyPass /browserconfig.xml !
        ProxyPass /mask-icon.svg !
        ProxyPassMatch ^(/.*\.(png|ico)$) !
        ProxyPassMatch ^/(assets|avatars|emoji|headers|packs|sounds|system) !
        # everything else is either going to the streaming API or the web workers
        ProxyPass /api/v1/streaming ws://localhost:4000
        ProxyPassReverse /api/v1/streaming ws://localhost:4000
        ProxyPass / http://localhost:3000/
        ProxyPassReverse / http://localhost:3000/

        ErrorDocument 500 /500.html
        ErrorDocument 501 /500.html
        ErrorDocument 502 /500.html
        ErrorDocument 503 /500.html
        ErrorDocument 504 /500.html
</VirtualHost>


The trailing slash on the websocket ProxyPass directive is omitted by design (the old example config has it): some API requests seen in the wild do not match /api/v1/streaming/ and would get lost otherwise.