SysAdmin Blog

Apache httpd, reverse proxy, and caching

Alexander Bochmann Tuesday 24 of November, 2020
There are tons of guides out there on how to set up Apache httpd as a reverse proxy, or on how to enable (disk) caching for content being served.

The web has surprisingly little information on how to combine the two in a working manner and have Apache cache content that is retrieved from a proxied backend.

Just using the default configuration and then dropping something like a CacheEnable disk into the <Location ...> that holds your proxy rules will not work: nothing is ever written to the cache directory.

With debug logging enabled, you see either nothing at all, or maybe a quick succession of AH00750: Adding CACHE_SAVE filter ... and AH00751: Adding CACHE_REMOVE_URL filter ... messages in the error log.

So what's up? Likely your configuration is entirely correct, but you're missing one statement:

CacheQuickHandler off

It seems that with the default of CacheQuickHandler being enabled, proxied content never hits the quick handler phase that allows it to be processed for caching.

When CacheQuickHandler is disabled, everything just drops into place, though some fine tuning might be required.

The current configuration for my use case of caching media for my Mastodon instance that's being retrieved from a horribly sluggish Minio backend looks like this:

<IfModule mod_cache_disk.c>
        CacheQuickHandler off
        CacheRoot /var/cache/apache2/mod_cache_disk
        CacheMaxFileSize 10000000
        CacheDirLevels 2
        CacheDirLength 1
        CacheLock off
        CacheIgnoreCacheControl On
        CacheIgnoreQueryString On
        CacheStoreNoStore On
        CacheIgnoreHeaders Set-Cookie X-Amz-Request-Id
</IfModule>

...and then:

<Location "/">
        Require all granted
        ProxyPass http://<backend-address>:9000/
        ProxyPassReverse http://<backend-address>:9000/
        <IfModule mod_cache_disk.c>
                CacheEnable disk
        </IfModule>
</Location>
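While fine tuning, it helps to see cache hits and misses directly in the responses. mod_cache can announce them in an X-Cache header (these directives exist since Apache 2.4.3; they are purely a debugging aid, not required for caching to work):

```apache
# adds "X-Cache: HIT/MISS/REVALIDATE" to responses passing through mod_cache
CacheHeader on
# adds an X-Cache-Detail header with the reason for the caching decision
CacheDetailHeader on
```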

25Gbit Ethernet is complicated...

Alexander Bochmann Monday 10 of February, 2020
We just spent about a week trying to put a bunch of systems into production that had been ordered with 25Gbit fiber interfaces. We had planned to collect those on two of our Arista 7050CX3, using 100GBit QSFP28 in 4 * 25GBit mode and MPO breakout cables to 4 * LC for the 25Gbit SFP28 end. So we cable everything up, configure our LACP channels on both ends, and ... nothing. All of the links stay down.

They do show a signal on the transceiver though (at least on the switch side, where we can look at optics information). A show interfaces et10/1-4 status says "notconnect" for all four subinterfaces, and show interfaces et10/1-4 phy displays "errDisabled" on the phy layer. We are stumped.

Over the course of the next few days, we try several changes, to no avail. Directly connecting two Arista switches works though, as does a direct connection between two end hosts. We even swap everything down to 40G on the Arista side and 10G SFP+ in the end hosts, which turns out perfectly fine (so at least our cabling is correct).

At this point, support for the appliances we're trying to connect gives us credentials for shell access. It's a non-root user on what turns out to be a normal Linux system, but at least I can see that it comes with QLogic Corp. FastLinQ QL45000 Series 25GbE controllers (for a short moment we had suspected we had the wrong controllers), and I can get some information by using ethtool. Among other things, ethtool reports the host interfaces as "25GBASE-KR", which tells me nothing. Someone on IRC mentions that "-KR" denotes an "electrical backplane" connection. Armed with those two small bits of information, I hit the search engines, and find a useful table of 25GbE interface types in a document on the Marvell web site, accompanied by the following text:

The –S short reach interfaces aim to support high-quality cables without
Forward Error Correction (FEC) to minimize latency. Full reach interfaces
aim to support the lowest possible cable or backplane cost and the longest
possible reach, which do require the use of FEC. FEC options include
BASE-R FEC (also referred to as Fire Code) and RS-FEC (also referred to
as Reed-Solomon).

There are two different, incompatible error correction mechanisms on the bitstream layer of 25Gbit interfaces!? I didn't know that.

Since the default on Arista switches seems to be Reed-Solomon, and I don't have any way to configure a detail like that on the end host, we change the configuration on the Arista side:

interface et10/1-4
error-correction encoding fire-code

That's all. We do the same for three other interface groups, and all links work just as expected (except for one that apparently has a bad transceiver in the end host). I call off the screen-sharing session with Arista support planned for five minutes later.

backing up lxc container snapshots, Amanda style

Alexander Bochmann Sunday 10 of November, 2019
I'm probably about the only person in the world using that kind of setup, but here we go:

  • I have an active Amanda installation that I use to back up various UNIX systems (to disk, with a weekly flush out to a tape rotation)
  • I run a system with lxc containers, using btrfs as storage backend

On btrfs, lxc containers are just subvolumes mounted into the host filesystem, and container snapshots are btrfs snapshots attached to the snapshots/ subdirectory of the container host volume.

So I'm running a simple script on the lxc host each night that cycles through all the containers and creates a snapshot named "amanda" for each of them - deleting the previous version if present. The main loop of the bash script looks more or less like this:

if [ -d /${lxdpool}/snapshots/${container}/amanda ]; then
    lxc delete ${container}/amanda
    sleep 2
fi
lxc snapshot ${container} amanda

Amanda can do incremental backups using GNU tar (in addition to a host of other options). One of the less obvious stumbling blocks with this is that GNU tar takes the device ID into account when calculating incrementals - and as each btrfs snapshot is a new device, the default configuration will back up all of the files in the snapshot every day, even if the file metadata is unchanged. So to make this setup work, Amanda needs a new dumptype with a tar configuration that ignores the device ID (tar option --no-check-device). The amanda.conf on my backup server now defines this in addition to the pre-existing defaults:

define application-tool app_amgtar_snap {
    comment "amgtar for btrfs snapshots"
    plugin "amgtar"
    property "ONE-FILE-SYSTEM" "yes"  # use '--one-file-system' option
    property "ATIME-PRESERVE" "yes"   # use '--atime-preserve=system' option
    property "CHECK-DEVICE" "no"      # use '--no-check-device' if set to "no"
    property "IGNORE" ": socket ignored$"  # remove some log clutter
    property append "IGNORE" "directory is on a different filesystem"
}

define dumptype dt_amgtar_snap {
    comment "new dump type that uses the above application definition"
    program "APPLICATION"
    application "app_amgtar_snap"
}

define dumptype comp-user-ssh-tar-lxd-snap {
    global-ssh   # use global ssh transport configuration
    client_username "backup"
    program "GNUTAR"
    dt_amgtar_snap    # that's my new dumptype
    comment "partitions dumped with tar as lxd snapshot, using gnutar --no-check-device option"
    priority low
    compress client fast
    exclude list "./rootfs/.amandaexclude"  # each container can have individual exclude lists in /.amandaexclude
}
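The incremental bookkeeping behind CHECK-DEVICE can be reproduced with GNU tar alone. A small sketch using a throwaway directory (it can't change device IDs the way a re-created btrfs snapshot does, but it shows where the flag goes and how listed-incremental dumps behave):

```shell
# Create a tiny tree and take a level-0 listed-incremental dump of it
demo=$(mktemp -d)
mkdir "$demo/data" && echo hello > "$demo/data/file"
tar --create --file="$demo/level0.tar" \
    --listed-incremental="$demo/state.snar" \
    --no-check-device -C "$demo" data

# A level-1 run against the unchanged tree stores only the directory
# entry, not the file itself (the state file is copied first so that
# the level-0 state stays usable for the next cycle)
cp "$demo/state.snar" "$demo/state1.snar"
tar --create --file="$demo/level1.tar" \
    --listed-incremental="$demo/state1.snar" \
    --no-check-device -C "$demo" data
```

Without --no-check-device, a changed device ID in the state file would make the level-1 run dump every file again, which is exactly what happens with fresh snapshots.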

All that's left now is to add entries to the Amanda disklist that are using my new dump type:

host.example.com        /lxdpool/snapshots/container1/amanda     comp-user-ssh-tar-lxd-snap
host.example.com        /lxdpool/snapshots/container2/amanda     comp-user-ssh-tar-lxd-snap

adding a current FreeMiNT release into an existing EasyMiNT install on the Atari TT

Alexander Bochmann Sunday 03 of November, 2019
I spent this weekend installing EasyMiNT on my Atari TT, and then making it work with the Lightning VME USB board.

Some more on the journey in getting there can be seen in these Fediverse threads:

  • I haven't had much luck in installing EasyMiNT to anything other than the C: drive
  • The MiNT kernel provided with EasyMiNT is too old to be able to load the Lightning drivers, but since I had successfully installed the EasyMiNT distribution already, I wanted to upgrade it with a current FreeMiNT release.
  • Booting the current MiNT kernel (as of 1-19-73f) hangs after the "Installing BIOS keyboard" message. This thread on atari-forum.com recommends removing BIGDOS.PRG from the AUTO folder. Apparently BIGDOS is not required when using recent MiNT kernels anyway (I also got rid of WDIALOG.PRG while I was at it).

EasyMiNT installed on C: boots the kernel from C:\MINT\1-19-CUR. I didn't want to touch that working part of the setup, so I downloaded a full snapshot from https://bintray.com/freemint/freemint/snapshots/ that uses the snapshot version as MiNT SYSDIR (C:\MINT\1-19-73f for my build). Changing from the EasyMiNT kernel to the current MINT030.PRG in C:\AUTO\ then implicitly executes everything else from the corresponding SYSDIR.

As it turns out, the USB drivers included with the current FreeMiNT distribution are incompatible with those from the Lightning VME driver disk. The easiest fix is to rename $SYSDIR\USB to something else and replace the directory with the files from the TT\MINT directory in the Lightning distribution - and then add a missing file (ETH.UDD) attached to this forum post. Using the ETH.UDD provided with FreeMiNT does not work and leads to an "API Mismatch" message.

To keep using most of the EasyMiNT setup, I adapted the boot sequence and MINT.CNF (some hints on the general boot layout can be found in MiNTBootSequence) by replacing some of the sln links. The relevant sections of my current configuration look like this (E: is my EasyMiNT ext2 filesystem):

# add some binaries provided by FreeMiNT, later referenced in PATH
sln c:/mint/1-19-73f/sys-root/bin              u:/sysbin
# GEM programs included in the FreeMiNT distribution
sln c:/mint/1-19-73f/sys-root/opt              u:/opt
sln c:/mint/1-19-73f/sys-root/share            u:/share
# EasyMINT links
sln e:/etc     u:/etc
sln e:/bin     u:/bin
sln e:/sbin    u:/sbin
sln e:/home    u:/home
sln e:/usr     u:/usr
sln e:/mnt     u:/mnt
sln e:/root    u:/root
sln e:/tmp     u:/tmp
# this line only works after removing the /usr/bin/xaaes symlink in EasyMiNT!
# with this, the EasyMiNT/SpareMiNT init script keeps starting XaAES without any further changes
sln c:/mint/1-19-73f/xaaes/xaloader.prg    u:/usr/bin/xaaes

# I've found that using TOS paths in MINT.CNF works better?
setenv PATH u:\sysbin,u:\bin,u:\usr\bin,u:\usr\sbin,u:\sbin,u:\c\mint\1-19-73f\xaaes

setenv TMPDIR u:\tmp

# provided by EasyMiNT, only works when the appropriate directories on E: are linked in
exec u:\c\mint\bin\sh u:\c\mint\bin\fscheck.sh

setenv TZ 'Europe/Berlin'
exec u:\sbin\tzinit -l

# load Lightning USB drivers
exec u:\c\mint\1-19-73f\usb\loader.prg

# use SpareMiNT init system, as installed by EasyMiNT

Linking in XALOADER.PRG via an sln link makes it easy to adapt the configuration to new releases. Most of the rest of the sln link tree comes from the MINT.CNF created by the EasyMiNT installer.

Apache 2.4 as a reverse proxy for Mastodon

Alexander Bochmann Friday 31 of May, 2019
The standard setup for Mastodon is to use nginx as a reverse proxy. After running into one missing feature too many, I recently switched my installation over to good old Apache.

There's an example Apache config in the unmaintained old documentation archive for Mastodon, and since I assume it's pointless to try to update that, I'll quickly dump my current config here. There's no guarantee of correctness, but it currently seems to work for me. Note that this configuration does not do any caching for requests to static content retrieved through the reverse proxy.

The following Apache modules are used:

  • proxy
  • proxy_http
  • http2
  • proxy_http2
  • proxy_wstunnel
  • headers
  • socache_shmcb
  • ssl

General SSL configuration (personal preference, CipherSuite selection is probably going to age badly). TLS v1.3 is disabled since Ubuntu bionic ships an Apache version that's too old for that:

<IfModule mod_ssl.c>

        SSLCertificateFile     <path to combined public key / certificate chain file>
        SSLCertificateKeyFile  <path to private key>
        #   the referenced file can be the same as SSLCertificateFile
        #   when the CA certificates are directly appended to the server
        #   certificate for convenience.
        SSLCertificateChainFile <path to combined public key / certificate chain file>

        # SSLProtocol -all +TLSv1.2 +TLSv1.3
        SSLProtocol -all +TLSv1.2 +TLSv1.1
        SSLHonorCipherOrder on
        SSLCompression off
        SSLSessionTickets off
        SSLSessionCache "shmcb:logs/session-cache(512000)"
        SSLStaplingResponderTimeout 5
        SSLStaplingReturnResponderErrors off
        SSLUseStapling on
        SSLStaplingCache "shmcb:logs/stapling-cache(150000)"

        # needs to be generated first, see https://weakdh.org/sysadmin.html
        SSLOpenSSLConfCmd DHParameters /etc/ssl/dhparam.pem
</IfModule>


Mastodon vhost configuration:

<VirtualHost *:443>
        ServerAdmin webmaster@example.com
        ServerName mastodon.example.com

        SSLEngine on

        Protocols h2 http/1.1

        # fetch static files directly from local file system (adapt to installation path)
        DocumentRoot /home/mastodon/live/public

        Header always set Strict-Transport-Security "max-age=31536000"

        <LocationMatch "^/(assets|avatars|emoji|headers|packs|sounds|system)">
                Header always set Cache-Control "public, max-age=31536000, immutable"
                Require all granted
        </LocationMatch>

        <Location "/">
                Require all granted
        </Location>

        ProxyPreserveHost On
        RequestHeader set X-Forwarded-Proto "https"
        ProxyAddHeaders On

        # these files / paths don't get proxied and are retrieved from DocumentRoot
        ProxyPass /500.html !
        ProxyPass /sw.js !
        ProxyPass /robots.txt !
        ProxyPass /manifest.json !
        ProxyPass /browserconfig.xml !
        ProxyPass /mask-icon.svg !
        ProxyPassMatch ^(/.*\.(png|ico)$) !
        ProxyPassMatch ^/(assets|avatars|emoji|headers|packs|sounds|system) !
        # everything else is either going to the streaming API or the web workers
        ProxyPass /api/v1/streaming ws://localhost:4000
        ProxyPassReverse /api/v1/streaming ws://localhost:4000
        ProxyPass / http://localhost:3000/
        ProxyPassReverse / http://localhost:3000/

        ErrorDocument 500 /500.html
        ErrorDocument 501 /500.html
        ErrorDocument 502 /500.html
        ErrorDocument 503 /500.html
        ErrorDocument 504 /500.html
</VirtualHost>


The trailing / on the websocket ProxyPass directive is left out by design (it's there in the old example config): some API requests seen in the wild do not match /api/v1/streaming/ and would get lost.
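The split between locally served and proxied paths can be sanity-checked outside of Apache. Here's a quick sketch running the static-prefix pattern from the ProxyPassMatch rule over a few made-up request paths:

```shell
# Paths matching the prefix pattern are served from DocumentRoot;
# everything else falls through to the proxy rules.
pattern='^/(assets|avatars|emoji|headers|packs|sounds|system)'
for p in /assets/app.js /system/media/1.png /api/v1/streaming/health /about; do
    if echo "$p" | grep -qE "$pattern"; then
        echo "$p -> local"
    else
        echo "$p -> proxied"
    fi
done
```

This prints "local" for the first two paths and "proxied" for the last two, matching the intended routing.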

creating an iPXE boot floppy

Alexander Bochmann Sunday 01 of July, 2018
The iPXE open source boot firmware project provides a CD image that boots the iPXE binary using isolinux.

Over on the Fediverse, the topic of bootstrapping a system from a floppy disk came up, and with the iPXE binary being a mere 330KB, there's really no reason why it shouldn't be possible to boot it from a floppy disk. And it actually does work, with a few simple steps (on a Debian-ish Linux):

  • format floppy disk and create FAT filesystem
    fdformat /dev/fd0
    mkfs -t fat /dev/fd0
  • get syslinux and install to floppy
    apt install syslinux syslinux-utils
    syslinux --install /dev/fd0
  • get iPXE ISO
    curl -O http://boot.ipxe.org/ipxe.iso
  • mount both iPXE ISO and floppy, copy over required files, rename isolinux.cfg to syslinux.cfg
    mkdir fd iso
    mount /dev/fd0 fd
    mount -o ro ipxe.iso iso
    cp iso/ipxe.krn fd/
    cp iso/boot.cat fd/
    cp iso/isolinux.cfg fd/syslinux.cfg
    umount fd
    umount iso
    rmdir fd iso

That's all! Take your floppy and boot a system with it.

Once iPXE has been started, hit Ctrl-B to call the shell. If you have a DHCP server on your network and a web server with a bootable ISO image, it's just two iPXE commands:

dhcp
sanboot http://<webserver>/<filename>.iso

SolidFire FDVA software repository downgrade

Alexander Bochmann Thursday 21 of December, 2017
We've been playing with a SolidFire flash storage cluster for some time, and recently wanted to update the nodes to the current ElementOS 10.1 release.

Unfortunately, our FDVA management node installation was borked, so we decided to just roll a new one from the current VM appliance template - easy.
As it turns out though, the FDVA appliance only ships with the latest software release files, and the individual SolidFire nodes check back for a repository with their current version before starting the update, which consequently fails (it's all very Ubuntu-ish):

admin@SF-7323:~$ sudo sfinstall -u admin -p password -l
2017-12-20 17:27:52: sfinstall Release Version: Revision:  Build date: 2017-11-23 01:27
2017-12-20 17:27:52: Checking connectivity to MVIP
2017-12-20 17:27:52: Successfully connected to cluster MVIP
2017-12-20 17:27:53: PrintRepositoryPackages failed: SolidFireApiError server=[] method=[AptUpdate], params=[{'quiet': 2}] - error name=[xCheckFailure], 
message=[cmdResult={ rc=255 stdout="W: Failed to fetch  404  Not Found
W: Failed to fetch  404  Not Found

The SolidFire docs don't really mention what to do from there, so we tinkered around for some time and found this:

Any older version of the repository can be fetched by running the update-fdva tool with the currently used ElementOS release version as a command line argument (the version number can be seen in the cluster web UI, or by asking the cluster nodes for their mnode repository using sfinstall). In our case, the active version was -

admin@SF-7323:~$ sudo update-fdva
Get: 1 http://localhost precise Release.gpg [490 B]
Get: 2 http://localhost precise-updates Release.gpg [490 B]

This will fetch that version of the SolidFire repository, but will also downgrade to the matching (old) versions of solidfire-fdva-tools and solidfire-python-framework...

admin@SF-7323:~$ dpkg -l | grep fdva
ii  solidfire-fdva-tools-fluorine-patch2-                               SolidFire FDVA Tools 9 [fluorine-patch2]

...so we immediately reinstalled the current versions, using update-fdva again, this time with the current release version number:

admin@SF-7323:~$ sudo update-fdva

With all that in place, we could just run the update routine using the usual sfinstall command.

find obsolete packages on a Debian system

Alexander Bochmann Saturday 08 of July, 2017
After dist-upgrading a Debian system recently, I wondered which packages might have been left over from previous releases (the system in question has been through several dist-upgrades over its lifetime), even after running apt-get autoremove and deborphan. After dropping that question on Mastodon, I got an answer pointing to apt-show-versions, which I didn't know about until now.

This totally does what I've been looking for. From the man page:

       apt-show-versions - Lists available package versions with distribution

       apt-show-versions parses the dpkg status file and the APT lists for the installed and available package
       versions and distribution and shows upgrade options within the specific distribution of the selected package.

       This is really useful if you have a mixed stable/testing environment and want to list all packages which are
       from testing and can be upgraded in testing.

Since I didn't have a package cache for apt-show-versions from the older release, all old packages are currently just shown with a No available version in archive comment. But since current packages are tagged with the release, I can just exclude those with a simple grep:

# apt-show-versions | egrep -vc jessie
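The grep-based filtering can be sanity-checked against a few fabricated apt-show-versions output lines (package names and versions here are made up): lines carrying the release name are current, everything else is a leftover candidate.

```shell
# Simulated apt-show-versions output; only libfoo1 lacks the release tag
cat > /tmp/asv-sample.txt <<'EOF'
bash:amd64/jessie 4.3-11+deb8u1 uptodate
libfoo1:amd64 1.2-3 No available version in archive
perl:amd64/jessie 5.20.2-3+deb8u9 uptodate
EOF
# Count the lines that do not mention the current release
grep -vc jessie /tmp/asv-sample.txt   # → 1
```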