<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en-us">
  <title type="text">SysAdmin Blog</title>
  <subtitle type="text">SysAdmin Blog</subtitle>
  <updated>2026-05-08T08:59:21+00:00</updated>
  <generator uri="https://getlaminas.org" version="2">Laminas_Feed_Writer</generator>
  <link rel="alternate" type="text/html" href="https://web.gxis.de/tiki/"/>
  <link rel="self" type="application/atom+xml" href="https://web.gxis.de/tiki/tiki-blog_rss.php?blogId=1"/>
  <id>https://web.gxis.de/tiki/</id>
  <author>
    <name>Alexander Bochmann</name>
    <email>ab+atom@st.gxis.de</email>
  </author>
  <entry>
    <title type="html"><![CDATA[vSphere host profiles, "A specified parameter was not correct: portgroupName"]]></title>
    <summary type="html"><![CDATA[It's not often that you run into a VMware error message with almost no search engine hits, but we managed to do just that recently...<br />
<br />
When trying to apply an existing host profile to a new host added to a cluster, vSphere errored out:<br />
<br />
<tt> A general system error occurred: Batch host remediation failed.</tt><br />
<tt> A specified parameter was not correct: portgroupName</tt><br />
<br />
We later learned that the same error would appear when trying to reapply the host profile to a machine that had been in the cluster for quite some time...<br />
<br />
I'll spare you the details, but as it turns out this happened ... because we were configuring vmkernel ports on dvSwitch portgroups in the host profile, and I had changed one of those port group names.<br />
<br />
Knowing that, the error message suddenly makes sense 🙄<br />
<br />
So instead of changing back the names, we went forward and updated all our host profiles with the new designations, which also meant reacknowledging host profile customizations that were referencing these port groups.<br />]]></summary>
    <published>2024-08-30T19:48:00+00:00</published>
    <updated>2024-08-30T19:48:00+00:00</updated>
    <link rel="alternate" type="text/html" href="https://web.gxis.de/tiki/tiki-view_blog_post.php%3FpostId%3D347"/>
    <id>https://web.gxis.de/tiki/tiki-view_blog_post.php%3FpostId%3D347</id>
    <author>
      <name>Alexander Bochmann</name>
      <email>ab+wiki@st.gxis.de</email>
    </author>
  </entry>
  <entry>
    <title type="html"><![CDATA[force serial console on an HP apollo 715/50 workstation]]></title>
    <summary type="html"><![CDATA[I made the error of setting the console path to graphics in the BOOT_ADMIN console of an old HP apollo 715/50 workstation with no monitor connected (or at least none that is able to detect the system's VGA signal).<br />
<br />
On more recent HP 9000 hardware, it seems to be possible to reset the console path to serial by pressing the TOC button after powering on with no keyboard and monitor connected, but as the <a class="wiki external"  title="External link" href="https://wiki.netbsd.org/ports/hppafaq/#index1h1" rel="external nofollow">NetBSD/hppa FAQ</a> <a class="wikicache" target="_blank" href="tiki-view_cache.php?url=https%3A%2F%2Fwiki.netbsd.org%2Fports%2Fhppafaq%2F%23index1h1">(cache)</a> says, this has no effect on a 715.<br />
<br />
As it turns out, there is another way though, and I haven't seen it documented anywhere: The 715/50 has a monitor selection switch for its onboard graphics adapter. It has one setting (both switches on SW1 down) that is labeled as <em>15" Color (Model 715/33 only)</em> in the service manual.<br />
<br />
With this setting, the system comes up with serial A as default with 9600/8/n/1, and it's possible to interrupt the boot process with &lt;ESC&gt;, select &lt;a&gt; to get into boot administration mode, and then change the console path back to serial from the BOOT_ADMIN&gt; prompt:<br />
<br />
<tt> PATH console rs232_a.9600.8.none</tt><br />
<tt> RESET</tt><br />
<br />
	<a href="tiki-download_file.php?fileId=16&display" class="internal"  data-box="box">		<img src="tiki-download_file.php?fileId=16&display"  width="300px" height="818" alt="Screenshot from the HP 715 service manual showing options for the SW1 DIP switch" class="regImage pluginImg16 img-fluid" />	</a><br />]]></summary>
    <published>2024-05-05T13:22:00+00:00</published>
    <updated>2024-05-05T13:22:00+00:00</updated>
    <link rel="alternate" type="text/html" href="https://web.gxis.de/tiki/tiki-view_blog_post.php%3FpostId%3D345"/>
    <id>https://web.gxis.de/tiki/tiki-view_blog_post.php%3FpostId%3D345</id>
    <author>
      <name>Alexander Bochmann</name>
      <email>ab+wiki@st.gxis.de</email>
    </author>
  </entry>
  <entry>
    <title type="html"><![CDATA[Windows 10 and WSL: Thousands of "HNS Container Networking" firewall rules]]></title>
    <summary type="html"><![CDATA[My main Windows 10 PC, originally installed in 2018, has recently been having strange networking problems after powering on. For example, WSL would not start for minutes, and WireGuard took ages to activate.<br />
<br />
I happened to find <a class="wiki external"  title="External link" href="https://learn.microsoft.com/en-us/windows/wsl/troubleshooting" rel="external nofollow">this general WSL troubleshooting article</a> <a class="wikicache" target="_blank" href="tiki-view_cache.php?url=https%3A%2F%2Flearn.microsoft.com%2Fen-us%2Fwindows%2Fwsl%2Ftroubleshooting">(cache)</a> on the Microsoft knowledgebase, which, about half way down, mentions possible problems with "HNS Firewall rules" and has a Powershell oneliner to remove some of those rules.<br />
<br />
No idea why this was the first thing I tried out of the many options on that page, but as it turns out, my system had over 12,000 HNS Container Networking rules:<br />
<br />
<tt> PS C:\Users\bochmann&gt; Get-NetFirewallRule -name "HNS Container Networking*" | measure | select Count</tt><br />
<tt> Count</tt><br />
<tt> -----</tt><br />
<tt> 12580</tt><br />
<br />
This seemed like a problem, since there are only about 300 other firewall rules, not to mention the command took quite some time to complete.<br />
<br />
After testing on my notebook, which has a much more recent Windows install, it turns out that each reboot adds six of these rules, but only when I shut down the system with <code>shutdown /s /t 0</code> instead of using the Windows menu (which I usually do to force a "real" shutdown and thwart fast startup)...<br />
<br />
On the notebook, I just nuked all HNS firewall rules (not just those for UDP/53), to no apparent ill effect (needs to be run as Administrator):<br />
<br />
<tt> wsl --shutdown</tt><br />
<tt> Get-NetFirewallRule -Name "HNS Container Networking*" | Remove-NetFirewallRule</tt><br />
<tt> hnsdiag delete all</tt><br />
<tt> Restart-Service -Force hns</tt><br />
<br />
...on the other PC, Powershell tells me that the command will be running for another four hours.<br />
<br />
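In case the builtin cmdlets stay that slow, the Windows Firewall COM interface is reported to delete rules considerably faster than <code>Remove-NetFirewallRule</code>. A sketch, not something I have timed myself (run as Administrator; it bypasses the PowerShell firewall cmdlets entirely):<br />
<br />
<pre>
# sketch: remove HNS rules via the HNetCfg.FwPolicy2 COM object
$fw = New-Object -ComObject HNetCfg.FwPolicy2
$fw.Rules | Where-Object { $_.Name -like "HNS Container Networking*" } |
    ForEach-Object { $fw.Rules.Remove($_.Name) }
</pre><br />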
Now I only need to find out why this happens in the first place.<br />]]></summary>
    <published>2024-05-02T12:40:00+00:00</published>
    <updated>2024-05-02T12:40:00+00:00</updated>
    <link rel="alternate" type="text/html" href="https://web.gxis.de/tiki/tiki-view_blog_post.php%3FpostId%3D344"/>
    <id>https://web.gxis.de/tiki/tiki-view_blog_post.php%3FpostId%3D344</id>
    <author>
      <name>Alexander Bochmann</name>
      <email>ab+wiki@st.gxis.de</email>
    </author>
  </entry>
  <entry>
    <title type="html"><![CDATA[manually applying patches from GitHub]]></title>
    <summary type="html"><![CDATA[I wasn't previously aware that you can take any commit ID on the GitHub web interface and just add <code>.diff</code> to the URL to get a plain context diff that can then be applied to code existing elsewhere with good old <code>patch</code>.<br />
<br />
So it's not required to fiddle with git repos and forks and whatever to quickly apply a patch out of band (and then return to the upstream state later on with something like a <code>git checkout --force ...</code> that squashes all the local changes).<br />
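<br />
As a concrete sketch (repository name and commit ID are made up for illustration), fetching and applying such a diff looks like this:<br />
<br />
<pre>
# fetch the plain diff for a commit and apply it to a local source tree
curl -L https://github.com/someorg/someproject/commit/0123abc.diff | patch -p1
# later, drop the local changes and return to the checked-out state
git checkout --force .
</pre>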
<br />
Case in point: It was not initially clear when the recent Mastodon patches would be applied to the Hometown fork, but .diffs from relevant commits on the Mastodon repo applied to the code on my disk with minimal fuzz. So it was possible to quickly get into a state where my version had the most important patches without breaking the connection to Hometown upstream, and after the security fixes had landed there, I just checked that version out over my local changes.<br />]]></summary>
    <published>2023-07-09T21:35:00+00:00</published>
    <updated>2023-07-09T21:35:00+00:00</updated>
    <link rel="alternate" type="text/html" href="https://web.gxis.de/tiki/tiki-view_blog_post.php%3FpostId%3D342"/>
    <id>https://web.gxis.de/tiki/tiki-view_blog_post.php%3FpostId%3D342</id>
    <author>
      <name>Alexander Bochmann</name>
      <email>ab+wiki@st.gxis.de</email>
    </author>
  </entry>
  <entry>
    <title type="html"><![CDATA[quick notes from installing OS/2 Warp 4 way too often]]></title>
    <summary type="html"><![CDATA[I own an old Via EPIA board with a C3 CPU, and for some reason I thought casually installing OS/2 would be a good idea.<br />
<br />
<ul><li> I used install media copies from <a class="wiki external"  title="External link" href="https://winworldpc.com/product/os-2-warp-4/os-2-warp-40" rel="external nofollow">WinWorld</a>
<ul><li> I have German language install media in original packaging, but turning up all the patch sets in German was too much effort, and a mixed-language OS is annoying
</li><li> for some reason, the updated partitioning tool from the OS/2 Warp 4.52 installer failed on the IDE-to-SDcard adapter I was using
</li><li> (after lots of tries I ditched that storage solution and used an actual IDE disk - somehow boot sector and partition table kept getting lost when using the SD adapter?)
</li><li> fdisk from OS/2 Warp 4 worked without problems?
</li></ul></li><li> in BIOS setup, configure "LBA" addressing scheme for the HDD
</li><li> OS/2 Warp 4 install CD is not bootable, you need install floppies (and a floppy drive)
<ul><li> the installer has no USB support, and USB floppy drives are not an option (even though elstel.org, linked below, claims that booting from a USB floppy should work)
</li><li> (maybe some Via BIOS bug or something?)
</li><li> downloaded patched install disks from <a class="wiki external"  title="External link" href="https://www.elstel.org/OS2Warp/InstallUpdate.html#Introduction" rel="external nofollow">elstel.org</a> <a class="wikicache" target="_blank" href="tiki-view_cache.php?url=https%3A%2F%2Fwww.elstel.org%2FOS2Warp%2FInstallUpdate.html%23Introduction">(cache)</a> (those with Dani's IDE driver, last option in the list)
</li><li> note these are the install disks, you also need a boot disk (I, uh, don't remember which one I used?)
</li><li> also note that elstel.org links to patched bootable OS/2 Warp 4 install CDs (didn't try those)
</li></ul></li><li> since the CDROM isn't bootable, I ended up using an SCSI drive behind a LSI/NCR/Symbios Logic 53C810 PCI SCSI card
<ul><li> there are many releases of the 53C810 driver, but <a class="wiki external"  title="External link" href="https://www.os2site.com/sw/drivers/scsi/index.html" rel="external nofollow">symbios406.zip from os2site.com</a> was the newest one that worked for me with the Warp 4 install disks (versions newer than 4.0.x will hang, older versions may report unknown firmware)
</li><li> the 53C810 driver doesn't fit on the first install disk
</li><li> do not delete unneeded driver files from the install disk, instead truncate them (also mentioned on elstel.org)
</li><li> copying additional drivers from the install disks will fail when files are missing (will updating snoop.lst help?)
</li></ul></li><li> do not use quick install, it will create a FAT partition (instead of HPFS)
</li><li> 2GB HPFS install partition is fine
</li><li> the EPIA C3 board has a 10/100 Via Rhine II, <a class="wiki external"  title="External link" href="https://www.os2site.com/sw/drivers/network/via/index.html" rel="external nofollow">drivers on os2site</a>, copy to an empty disk to install when enabling the TCP/IP stack
</li><li> Via Soundblaster emulation (when enabled in the BIOS) is a Soundblaster Pro
</li><li> after installation, I used <a class="wiki external"  title="External link" href="https://archive.org/details/warp-4-fixpacks" rel="external nofollow">this patchset from archive.org</a> (note installation order mentioned in the TEXT file that's an additional download)
<ul><li> has FP17, TCPIP 4.3 and the MPTS updates, Java runtime (not JDK), Netscape Navigator, Scitech SNAP with the "free" code
</li></ul></li></ul><br />]]></summary>
    <published>2023-06-18T19:27:00+00:00</published>
    <updated>2023-06-18T19:27:00+00:00</updated>
    <link rel="alternate" type="text/html" href="https://web.gxis.de/tiki/tiki-view_blog_post.php%3FpostId%3D341"/>
    <id>https://web.gxis.de/tiki/tiki-view_blog_post.php%3FpostId%3D341</id>
    <author>
      <name>Alexander Bochmann</name>
      <email>ab+wiki@st.gxis.de</email>
    </author>
  </entry>
  <entry>
    <title type="html"><![CDATA[Debian bullseye / Devuan chimaera openssl minimum TLS version]]></title>
    <summary type="html"><![CDATA[I recently spent way too much time trying to find out why my mail server wasn't able to send mail to a system that apparently only supported TLSv1. None of the TLS options in the sendmail configuration made any difference.<br />
<br />
Things started to click only after I noticed that connecting to the system in question via openssl s_client produced the same error message:<br />
<br />
<pre>
&gt; openssl s_client -connect mail.some.domain:25 -starttls smtp
CONNECTED(00000003)
139770261177664:error:1425F102:SSL routines:ssl_choose_client_version:unsupported 
protocol:../ssl/statem/statem_lib.c:1957:
</pre><br />
As it turns out, <code>/etc/ssl/openssl.cnf</code> in current Debian / Devuan has the following global configuration settings:<br />
<br />
<pre>
[system_default_sect]
MinProtocol = TLSv1.2
CipherString = DEFAULT@SECLEVEL=2
</pre><br />
So yeah, anything using openssl that doesn't explicitly override that configuration will not be able to make TLS connections to systems that don't support TLSv1.2...<br />
<br />
Changing the settings to <code>MinProtocol = TLSv1</code> made it possible to deliver my mail.<br />
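<br />
For reference, the relevant section of <code>/etc/ssl/openssl.cnf</code> then looks like this (note that lowering the minimum protocol version weakens the TLS defaults system-wide, so it's a deliberate trade-off):<br />
<br />
<pre>
[system_default_sect]
MinProtocol = TLSv1
CipherString = DEFAULT@SECLEVEL=2
</pre>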
<br />]]></summary>
    <published>2022-04-23T13:29:00+00:00</published>
    <updated>2022-04-23T13:29:00+00:00</updated>
    <link rel="alternate" type="text/html" href="https://web.gxis.de/tiki/tiki-view_blog_post.php%3FpostId%3D338"/>
    <id>https://web.gxis.de/tiki/tiki-view_blog_post.php%3FpostId%3D338</id>
    <author>
      <name>Alexander Bochmann</name>
      <email>ab+wiki@st.gxis.de</email>
    </author>
  </entry>
  <entry>
    <title type="html"><![CDATA[network interfaces renamed following Proxmox 7 upgrade]]></title>
    <summary type="html"><![CDATA[After upgrading my standalone Proxmox host from PVE 6 to 7, the interface names were suddenly changed back from "predictable" to the old ethX names. The setup is Proxmox on Debian, so when I initially set up the system, I manually installed Debian 10 first and then added the Proxmox 6 repositories and packages.<br />
<br />
After some debugging it turned out there was an old systemd network configuration file that prevented systemd-udevd from starting up correctly:<br />
<br />
<pre>
systemd-udevd[xxxx]: /etc/systemd/network/99-default.link: No valid settings found in the [Match] section, ignoring file. To match all interfaces, add OriginalName=* in the [Match] section.
</pre><br />
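The check-and-fix steps, as a sketch (on my system the file had no package owner, so removing it was safe; verify before deleting on yours):<br />
<br />
<pre>
# does any package own the offending file? (prints "no path found" if not)
dpkg -S /etc/systemd/network/99-default.link
# if unowned, remove it and restart udev (or simply reboot)
rm /etc/systemd/network/99-default.link
systemctl restart systemd-udevd
</pre><br />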
I currently have no idea where the file <em>/etc/systemd/network/99-default.link</em> originated from (it doesn't have a package owner after the upgrade), but apparently it contains an invalid syntax for the systemd-udevd in Debian Bullseye. Removing the file solved the problem, and I'm now back to the interface names in the ifupdown2 configuration used by Proxmox (I rebooted the system to be sure it comes up in the right way now).<br />]]></summary>
    <published>2021-11-24T21:32:00+00:00</published>
    <updated>2021-11-24T21:32:00+00:00</updated>
    <link rel="alternate" type="text/html" href="https://web.gxis.de/tiki/tiki-view_blog_post.php%3FpostId%3D337"/>
    <id>https://web.gxis.de/tiki/tiki-view_blog_post.php%3FpostId%3D337</id>
    <author>
      <name>Alexander Bochmann</name>
      <email>ab+wiki@st.gxis.de</email>
    </author>
  </entry>
  <entry>
    <title type="html"><![CDATA[WireGuard on the OpenPandora]]></title>
    <summary type="html"><![CDATA[<h3 class="showhide_heading" id="introduction"> introduction</h3>
<a class="wiki external"  title="External link" href="https://www.wireguard.com" rel="external nofollow">WireGuard</a> is a VPN system built on modern cryptography that provides for a comparatively simple setup and uses UDP as a transport, with moderate overhead. It "just works" for road warrior setups where one end doesn't have a stable address.<br />
<br />
The <a class="wiki external"  title="External link" href="https://www.openpandora.org" rel="external nofollow">OpenPandora</a> <a class="wikicache" target="_blank" href="tiki-view_cache.php?url=https%3A%2F%2Fwww.openpandora.org">(cache)</a> is an ARM Linux pocket computer, first released around 2010, that uses an ancient <a class="wiki external"  title="External link" href="https://en.wikipedia.org/wiki/%C3%85ngstr%C3%B6m_distribution" rel="external nofollow">OpenEmbedded Ångström</a> as base OS, with a Linux 3.2 kernel that has quite a few device-specific modules that never were upstreamed.<br />
<br />
A couple of weeks ago, I decided to try to combine the two, provided it wouldn't turn out to be too much of an effort. With that in mind, I looked at the <a class="wiki external"  title="External link" href="https://git.zx2c4.com/wireguard-go/about/" rel="external nofollow">wireguard-go</a> userspace implementation instead of attempting to make the WireGuard linux-compat kernel module build against the outdated OpenPandora kernel.<br />
<br />
Setting up a tunnel requires two WireGuard components:<br />
<br />
<ol><li> a WireGuard protocol implementation (like the kernel module or wireguard-go)
</li><li> a version of <a class="wiki external"  title="External link" href="https://git.zx2c4.com/wireguard-tools/about/" rel="external nofollow">wireguard-tools</a> that is used to provide a configuration to WireGuard

</li></ol><br />As for wireguard-go, I made a short attempt at trying to build golang on the Pandora itself, but hit the "too much effort" barrier pretty quickly. Fortunately, golang now provides for cross-compiling to supported platforms - but the Pandora is not one of those: The Pandora OS (SuperZaxxon) is built with the outdated "softfp" ARM binary ABI, which is backwards-compatible with ARM CPUs that don't have floating-point hardware, but actually is capable of using VFP and NEON in the backend, if supported by the compiler. The workaround here is to cross-compile with ARMv5 as the target architecture, which produces a pure software floating-point executable (that also works on softfp by design).<br />
<br />
<h3 class="showhide_heading" id="cross-building_wireguard-go"> cross-building wireguard-go</h3>
I built wireguard-go on a Debian Buster host, and since buster-backports only provides go1.14, I couldn't use the most recent version (which currently requires go1.16): Went with <a class="wiki external"  title="External link" href="https://git.zx2c4.com/wireguard-go/tag/?h=0.0.20210212" rel="external nofollow">wireguard-go 0.0.20210212</a> instead.<br />
<br />
After checking out or unpacking the sources, building a binary is a simple matter of running make with the appropriate environment parameters:<br />
<br />
<pre>env GOOS=linux GOARCH=arm GOARM=5 make</pre><br />
Just copy the resulting <em>wireguard-go</em> over to <em>/usr/local/bin</em> on your Pandora and make it executable.<br />
<br />
<h3 class="showhide_heading" id="compiling_wireguard-tools"> compiling wireguard-tools</h3>
wireguard-tools has only a small set of build dependencies, the most important of which unfortunately isn't even mentioned: On Linux, you need a copy of the kernel headers that roughly matches the destination kernel.<br />
<br />
Turns out that SuperZaxxon only ships the include files for the initial kernel (2.6), but not those for the last available kernel build. The Linux 2.6 headers also apparently don't provide some required definitions, so my first attempt failed.<br />
<br />
I ended up downloading the <a class="wiki external"  title="External link" href="http://git.openpandora.org/cgi-bin/gitweb.cgi?p=pandora-kernel.git;a=tree;h=refs/heads/pandora-3.2;hb=refs/heads/pandora-3.2" rel="external nofollow">latest 3.2 kernel sources</a> from the OpenPandora git.<br />
<br />
When I compile software on the Pandora, I usually first try to use the <a class="wiki external"  title="External link" href="http://repo.openpandora.org/?page=detail&amp;app=cdevtools.freamon.40n8e" rel="external nofollow">cdevtools PND</a> - it has an older gcc, but is generally more lightweight than the other option (<a class="wiki external"  title="External link" href="http://repo.openpandora.org/?page=detail&amp;app=codeblocks6022" rel="external nofollow">Code::Blocks</a>). So I start cdevtools, make a <em>src/wireguard</em> directory, and then download and unpack both wireguard-tools and the Pandora kernel sources in there.<br />
<br />
In the wireguard-tools directory, go to <em>src/</em> and run something like this:<br />
<pre>env CFLAGS="-I`pwd`/../../pandora-kernel-pandora-3.2-c4c68a4/include -Os -mtune=cortex-a8 -mcpu=cortex-a8 -mfpu=neon -mfloat-abi=softfp -pipe" make</pre><br />
...and then, to install the resulting programs below /usr/local:<br />
<pre>sudo env PREFIX=/usr/local WITH_WGQUICK=yes WITH_SYSTEMDUNITS=no make install</pre><br />
<h3 class="showhide_heading" id="Pandora_caveats"> Pandora caveats</h3>
<ul><li> SuperZaxxon does not autoload the <em>tun</em> module, so <em>/dev/net/tun</em> doesn't exist. (Ironically, it would be loaded if /dev/net/tun did exist and then something tried to access the device...)
</li><li> wg-quick uses some fancy bash i/o redirection which requires <em>/dev/fd</em>. Which is not there on the Pandora either, but it's easy to create, since it's just a symlink to <em>/proc/self/fd</em>.
</li><li> <strong>Do not use a VPN interface name that starts with "w"</strong> (like the default of wg0)! It triggers bugs in other scripts on the OpenPandora, for example loading of the WiFi firmware will fail after a resume from sleep.
</li><li> Add <em>/usr/local/bin</em> to the PATH of root so the binaries are found in their directory.
</li><li> A couple of the advanced wg-quick functions fail, mostly due to missing or outdated tools. One that I encountered was changing nameservers, but I assume anything that makes changes to the firewall configuration will be broken too. I did not try calling external commands from the wg-quick config file yet (which might serve as a workaround for some uses).
</li><li> Basic setup of a v4 tunnel with several routes has been tested successfully.
</li><li> IPv6 is completely untested.

</li></ul><br />I wrote a small wrapper script that creates a suitable environment for wg-quick invocation that's included as <em>/usr/local/bin/wg-pandora</em> in the tar file below:<br />
<pre>
#!/bin/sh

if [ `id -u` -ne "0" ]; then
  echo "[!] script needs to be run as root, use su or sudo"
  exit 1
fi

if [ "$1" = "" ]; then
  echo "[!] please use the VPN interface name as parameter"
  echo "NOTE: do not use any device names starting with \"w...\" -"
  echo "      it will prevent Wifi reconfiguration on SuperZaxxon."
  exit 1
fi

if [ ! -f /etc/wireguard/$1.conf ]; then
  echo "[!] please create /etc/wireguard/$1.conf with a valid wg-quick configuration"
  exit 1
fi

if [ ! -e /dev/net/tun ]; then
  echo "[+] load tun kernel module"
  modprobe tun
fi

if [ ! -e /dev/fd ]; then
  echo "[+] create missing /dev/fd symlink"
  ln -s /proc/self/fd /dev/fd
fi

echo "[+] launching wg-quick"
/usr/local/bin/wg-quick up "$1"

exit 0
</pre><br />
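For reference, a minimal wg-quick configuration sketch (all keys, addresses, and the endpoint are placeholders; remember the interface-name caveat above, so call it e.g. <em>/etc/wireguard/vpn0.conf</em>):<br />
<pre>
[Interface]
PrivateKey = &lt;client-private-key&gt;
Address = 10.8.0.2/24

[Peer]
PublicKey = &lt;server-public-key&gt;
Endpoint = vpn.example.org:51820
AllowedIPs = 10.8.0.0/24
PersistentKeepalive = 25
</pre><br />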
<h3 class="showhide_heading" id="installation"> installation</h3>
<ul><li> Download <a class="wiki external"  title="External link" href="https://web.gxis.de/files/wireguard-pandora-20210502.tar.gz" rel="external nofollow">wireguard-pandora-20210502.tar.gz</a> and unpack to the root directory: <pre>tar -C/ -xpf wireguard-pandora-20210502.tar.gz</pre>
</li><li> Create a <a class="wiki external"  title="External link" href="https://git.zx2c4.com/wireguard-tools/about/src/man/wg-quick.8" rel="external nofollow">wg-quick</a> <a class="wikicache" target="_blank" href="tiki-view_cache.php?url=https%3A%2F%2Fgit.zx2c4.com%2Fwireguard-tools%2Fabout%2Fsrc%2Fman%2Fwg-quick.8">(cache)</a> configuration in <em>/etc/wireguard</em> (man pages are included in the download, but <em>man</em> is not installed on the Pandora by default).
</li><li> Run <em>/usr/local/bin/wg-pandora &lt;if-name&gt;</em>. (Remember the note about interface names.)
</li><li> You will need an existing WireGuard endpoint to connect to ;)
</li><li> Manual setup using <em>wg</em> (see <a class="wiki external"  title="External link" href="https://www.wireguard.com/quickstart/" rel="external nofollow">WireGuard quickstart</a>) is also possible, as soon as the <em>tun</em> module has been loaded and wireguard-go is running.
</li><li> There's a <a class="wiki external"  title="External link" href="https://pyra-handheld.com/boards/threads/wireguard-vpn-wip.99422/" rel="external nofollow">discussion thread over on the OpenPandora forums</a>.

</li></ul><br />]]></summary>
    <published>2021-05-02T19:30:00+00:00</published>
    <updated>2021-05-02T19:30:00+00:00</updated>
    <link rel="alternate" type="text/html" href="https://web.gxis.de/tiki/tiki-view_blog_post.php%3FpostId%3D335"/>
    <id>https://web.gxis.de/tiki/tiki-view_blog_post.php%3FpostId%3D335</id>
    <author>
      <name>Alexander Bochmann</name>
      <email>ab+wiki@st.gxis.de</email>
    </author>
  </entry>
  <entry>
    <title type="html"><![CDATA[Apache httpd, reverse proxy, and caching]]></title>
    <summary type="html"><![CDATA[There's tons of guides out there on either how to set up Apache httpd as a <a class="wiki external"  title="External link" href="https://httpd.apache.org/docs/2.4/mod/mod_proxy.html" rel="external nofollow">reverse proxy</a>, or how to enable <a class="wiki external"  title="External link" href="https://httpd.apache.org/docs/2.4/mod/mod_cache.html" rel="external nofollow">(disk) caching</a> for content being served.<br />
<br />
The web has surprisingly little information on how to combine the two in a working manner and have Apache cache content that's being retrieved from a proxied backend.<br />
<br />
Just using the default configuration and then dropping something like a <code>CacheEnable disk</code> into the <code>&lt;Location ...&gt;</code> that holds your proxy rules will not work: Nothing ever is written to the cache directory.<br />
<br />
With debug logging, you see either nothing at all or maybe a quick succession of <code> AH00750: Adding CACHE_SAVE filter ..</code> and <code>AH00751: Adding CACHE_REMOVE_URL filter ...</code> messages in the error.log.<br />
<br />
So what's up? Likely your configuration is entirely correct, but you're missing one statement:<br />
<br />
<pre>CacheQuickHandler off</pre><br />
It seems that with the default of <a class="wiki external"  title="External link" href="https://httpd.apache.org/docs/2.4/mod/mod_cache.html#cachequickhandler" rel="external nofollow">CacheQuickHandler</a> being enabled, proxied content never hits the <em>quick handler phase</em> that allows it to be processed for caching.<br />
<br />
When CacheQuickHandler is disabled, everything just drops into place, though some fine tuning might be required.<br />
<br />
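To verify that the cache actually serves hits, mod_cache's <code>CacheHeader on</code> directive adds an <em>X-Cache</em> response header that can be checked from the command line (a quick sketch, hostname made up):<br />
<br />
<pre>
# with "CacheHeader on" set in the cache config, check the X-Cache
# response header on repeated requests (MISS on first, HIT once cached)
curl -sI https://media.example.org/some/file | grep -i x-cache
</pre><br />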
The current configuration for my use case of caching media for my Mastodon instance that's being retrieved from a horribly sluggish Minio backend looks like this:<br />
<br />
<pre>&lt;IfModule mod_cache_disk.c&gt;
        CacheQuickHandler off
        CacheRoot /var/cache/apache2/mod_cache_disk
        CacheMaxFileSize 10000000
        CacheDirLevels 2
        CacheDirLength 1
        CacheLock off
        CacheIgnoreCacheControl On
        CacheIgnoreQueryString On
        CacheStoreNoStore On
        CacheIgnoreHeaders Set-Cookie X-Amz-Request-Id
&lt;/IfModule&gt;</pre><br />
...and then:<br />
<br />
<pre>&lt;Location "/"&gt;
        Require all granted
        ProxyPass http://&lt;backend-address&gt;:9000/
        ProxyPassReverse http://&lt;backend-address&gt;:9000/
        &lt;IfModule mod_cache_disk.c&gt;
               CacheEnable disk
        &lt;/IfModule&gt;
&lt;/Location&gt;</pre><br />
<br />]]></summary>
    <published>2020-11-24T21:24:00+00:00</published>
    <updated>2020-11-24T21:24:00+00:00</updated>
    <link rel="alternate" type="text/html" href="https://web.gxis.de/tiki/tiki-view_blog_post.php%3FpostId%3D332"/>
    <id>https://web.gxis.de/tiki/tiki-view_blog_post.php%3FpostId%3D332</id>
    <author>
      <name>Alexander Bochmann</name>
      <email>ab+wiki@st.gxis.de</email>
    </author>
  </entry>
  <entry>
    <title type="html"><![CDATA[25Gbit ethernet is complicated...]]></title>
    <summary type="html"><![CDATA[We just spent about a week trying to put a bunch of systems into production that had been ordered with 25Gbit fiber interfaces. We had planned to collect those on two of our Arista 7050CX3, using 100GBit QSFP28 in 4 * 25GBit mode and MPO breakout cables to 4 * LC for the 25Gbit SFP28 end. So we cable everything up, configure our LACP channels on both ends, and ... nothing. All of the links stay down.<br />
<br />
They do show a signal on the transceiver though (at least on the switch side, where we can look at optics information). A <em>show interfaces et10/1-4 status</em> says "notconnect" for all four subinterfaces. A <em>show interfaces et10/1-4 phy</em> shows "errDisabled" on the PHY layer. We are stumped.<br />
<br />
Over the course of the next few days, we try several changes, to no avail. Directly connecting two Arista switches works though, as does a direct connection between two end hosts. We even swap everything down to 40G on the Arista side and 10G SFP+ in the end hosts, which turns out perfectly fine (so at least our cabling is correct).<br />
<br />
At this point, support for the appliances we're trying to connect gives us credentials for shell access. It's a non-root user on what turns out to be a normal Linux system, but at least I can see that it comes with QLogic Corp. FastLinQ QL45000 Series 25GbE controllers (for a short moment we had suspected we had the wrong controllers), and I can get some information by using <em>ethtool</em>. One of those is that ethtool reports the host interfaces as "25GBASE-KR", which tells me nothing. Someone on IRC mentions that "-KR" denotes an "electrical backplane" connection. Armed with those two small bits of information, I hit the search engines and find this useful table in a <a class="wiki external"  title="External link" href="https://www.marvell.com/documents/quqedaawpjlt0en3e5zn/" rel="external nofollow">document on the Marvell web site</a>:<br />
<img src="tiki-download_file.php?fileId=13&display"  width="457" height="207" alt="D4cbaae36bf038ba" class="regImage pluginImg13 img-fluid" /><br />
It's accompanied by the following text:<br />
<br />
<div class="card bg-light"><div class="card-body">The –S short reach interfaces aim to support high-quality cables without Forward Error Correction (FEC) to minimize latency. Full reach interfaces aim to support the lowest possible cable or backplane cost and the longest possible reach, which do require the use of FEC. FEC options include BASE-R FEC (also referred to as Fire Code) and RS-FEC (also referred to as Reed-Solomon).</div></div><br />
<strong>There are two different, incompatible error correction mechanisms on the bitstream layer of 25Gbit interfaces!?</strong> I didn't know that.<br />
<br />
Since the default on Arista switches seems to be Reed-Solomon, and I don't have any way to configure a detail like that on the end host, we change the configuration on the Arista side:<br />
<br />
<tt> interface et10/1-4</tt><br />
<tt>  error-correction encoding fire-code</tt><br />
<br />
That's all. We do the same for three other interface groups, and all links come up just as expected (except for one that apparently has a bad transceiver in the end host). I call off the screen-sharing session with Arista support that had been planned for five minutes later.<br />
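For the record: on hosts where the NIC driver supports it, newer versions of <em>ethtool</em> can show and force the FEC mode from the Linux side as well (the interface name is an example, and I haven't verified this on the QL45000 driver):<br />
<br />
<pre># show supported and currently active FEC modes
ethtool --show-fec eth0
# force Fire Code (BASE-R) FEC to match the switch side
ethtool --set-fec eth0 encoding baser</pre><br />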
<br />]]></summary>
    <published>2020-02-10T22:47:00+00:00</published>
    <updated>2020-02-10T22:47:00+00:00</updated>
    <link rel="alternate" type="text/html" href="https://web.gxis.de/tiki/tiki-view_blog_post.php%3FpostId%3D329"/>
    <id>https://web.gxis.de/tiki/tiki-view_blog_post.php%3FpostId%3D329</id>
    <author>
      <name>Alexander Bochmann</name>
      <email>ab+wiki@st.gxis.de</email>
    </author>
  </entry>
  <entry>
    <title type="html"><![CDATA[backing up lxc container snapshots, Amanda style]]></title>
    <summary type="html"><![CDATA[I'm probably about the only person in the world using that kind of setup, but here we go:<br />
<br />
<ul><li> I have an active <a class="wiki external"  title="External link" href="http://amanda.org/" rel="external nofollow">Amanda</a> <a class="wikicache" target="_blank" href="tiki-view_cache.php?url=http%3A%2F%2Famanda.org%2F">(cache)</a> installation that I use to back up various UNIX systems (to disk, with a weekly flush out to a tape rotation)
</li><li> I run a system with lxc containers, using <a class="wiki external"  title="External link" href="https://btrfs.wiki.kernel.org/" rel="external nofollow">btrfs</a> as storage backend

</li></ul><br />On btrfs, lxc containers are just subvolumes mounted into the host filesystem, and container snapshots are btrfs snapshots attached to the <em>snapshots/</em> subdirectory of the container host volume.<br />
<br />
So I'm running a simple script on the lxc host each night that cycles through all the containers and creates a snapshot named "amanda" for each of them - deleting the previous version if present. The main loop of the bash script looks more or less like this:<br />
<br />
<pre>if [ -d /${lxdpool}/snapshots/${container}/amanda ]; then
 lxc delete ${container}/amanda
 sleep 2
fi
lxc snapshot ${container} amanda
</pre><br />
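Expanded with the surrounding loop, the whole nightly job might look something like this (the pool name and the way the container list is built are assumptions, not my exact script):<br />
<br />
<pre>#!/bin/bash
lxdpool=lxdpool
for container in $(lxc list -c n --format csv); do
 if [ -d /${lxdpool}/snapshots/${container}/amanda ]; then
  lxc delete ${container}/amanda
  sleep 2
 fi
 lxc snapshot ${container} amanda
done</pre><br />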
Amanda can do incremental backups using GNU tar (in addition to a host of other options). One of the less obvious stumbling blocks with this is that GNU tar takes the <em>device ID</em> into account when calculating incrementals - and as each btrfs snapshot is a new device, the default configuration will back up all of the files in the snapshot every day, even if the file metadata is unchanged. So to make this setup work, Amanda needs a new dumptype with a tar configuration that ignores the device ID (tar option <em>--no-check-device</em>). The <strong>amanda.conf</strong> on my backup server now defines this in addition to the pre-existing defaults:<br />
<br />
<pre>
 define application-tool app_amgtar_snap { #
    comment "amgtar for btrfs snapshots"
    plugin "amgtar"
    property "ONE-FILE-SYSTEM" "yes"  #use '--one-file-system' option
    property "ATIME-PRESERVE" "yes"   #use '--atime-preserve=system' option
    property "CHECK-DEVICE" "no"      #use '--no-check-device' if set to "no"
    property "IGNORE" ": socket ignored$"  # remove some log clutter
    property append "IGNORE" "directory is on a different filesystem"
}

define dumptype dt_amgtar_snap { #
    comment "new dump type that uses the above application definition"
    program "APPLICATION"
    application "app_amgtar_snap"
}

 define dumptype comp-user-ssh-tar-lxd-snap { #
    global-ssh   # use global ssh transport configuration
    client_username "backup"
    program "GNUTAR"
    dt_amgtar_snap    # that's my new dumptype
    comment "partitions dumped with tar as lxd snapshot, using gnutar --no-check-device option"
    index
    priority low
    compress client fast
    exclude list "./rootfs/.amandaexclude"  # each container can have individual exclude lists in /.amandaexclude
}
</pre><br />
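The effect of the CHECK-DEVICE property can be reproduced with plain GNU tar, outside of Amanda (a scratch example; without <em>--no-check-device</em>, a changed device ID would force a full dump even for unchanged files):<br />
<br />
<pre># level 0 dump, creating the incremental snapshot file
tar -cf level0.tar --listed-incremental=data.snar --no-check-device data/
# level 1 dump against a copy of the snapshot file only
# picks up files that actually changed
cp data.snar data.snar.1
tar -cf level1.tar --listed-incremental=data.snar.1 --no-check-device data/</pre><br />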
All that's left now is to add entries to the Amanda <strong>disklist</strong> that are using my new dump type:<br />
<br />
<pre>
host.example.com        /lxdpool/snapshots/container1/amanda     comp-user-ssh-tar-lxd-snap
host.example.com        /lxdpool/snapshots/container2/amanda     comp-user-ssh-tar-lxd-snap
</pre><br />
<br />
<br />]]></summary>
    <published>2019-11-10T15:29:00+00:00</published>
    <updated>2019-11-10T15:29:00+00:00</updated>
    <link rel="alternate" type="text/html" href="https://web.gxis.de/tiki/tiki-view_blog_post.php%3FpostId%3D328"/>
    <id>https://web.gxis.de/tiki/tiki-view_blog_post.php%3FpostId%3D328</id>
    <author>
      <name>Alexander Bochmann</name>
      <email>ab+wiki@st.gxis.de</email>
    </author>
  </entry>
  <entry>
    <title type="html"><![CDATA[things that can go wrong when installing Smokeping]]></title>
    <summary type="html"><![CDATA[Yeah, haven't done that in a long time:<br />
<br />
<iframe src="https://mastodon.infra.de/@galaxis/103104946365972858/embed" class="mastodon-embed" style="max-width: 100%; border: 1" width="400" height="580"></iframe><script src="https://mastodon.infra.de/embed.js" async="async"></script><br />
(from <a class="wiki external"  title="External link" href="https://mastodon.infra.de/@galaxis/103104946365972858" rel="external nofollow">https://mastodon.infra.de/@galaxis/103104946365972858</a> <a class="wikicache" target="_blank" href="tiki-view_cache.php?url=https%3A%2F%2Fmastodon.infra.de%2F%40galaxis%2F103104946365972858">(cache)</a>)<br />]]></summary>
    <published>2019-11-08T23:46:00+00:00</published>
    <updated>2019-11-08T23:46:00+00:00</updated>
    <link rel="alternate" type="text/html" href="https://web.gxis.de/tiki/tiki-view_blog_post.php%3FpostId%3D327"/>
    <id>https://web.gxis.de/tiki/tiki-view_blog_post.php%3FpostId%3D327</id>
    <author>
      <name>Alexander Bochmann</name>
      <email>ab+wiki@st.gxis.de</email>
    </author>
  </entry>
  <entry>
    <title type="html"><![CDATA[adding a current FreeMiNT release into an existing EasyMiNT install on the Atari TT]]></title>
    <summary type="html"><![CDATA[I spent this weekend installing <a class="wiki external"  title="External link" href="https://atari.grossmaggul.de/home.php?lang=ge&amp;headline=EasyMiNT&amp;texte=easymint" rel="external nofollow">EasyMiNT</a> <a class="wikicache" target="_blank" href="tiki-view_cache.php?url=https%3A%2F%2Fatari.grossmaggul.de%2Fhome.php%3Flang%3Dge%26amp%3Bheadline%3DEasyMiNT%26amp%3Btexte%3Deasymint">(cache)</a> on my Atari TT, and then making it work with the <a class="wiki external"  title="External link" href="http://wiki.newtosworld.de/index.php?title=Lightning_VME_En" rel="external nofollow">Lightning VME USB board</a> <a class="wikicache" target="_blank" href="tiki-view_cache.php?url=http%3A%2F%2Fwiki.newtosworld.de%2Findex.php%3Ftitle%3DLightning_VME_En">(cache)</a>.<br />
<br />
Some more on the journey in getting there can be seen in these Fediverse threads:<br />
<ul><li> <a class="wiki external"  title="External link" href="https://mastodon.infra.de/@galaxis/103064551234920394" rel="external nofollow">https://mastodon.infra.de/@galaxis/103064551234920394</a>
</li><li> <a class="wiki external"  title="External link" href="https://mastodon.infra.de/@galaxis/103068427533675815" rel="external nofollow">https://mastodon.infra.de/@galaxis/103068427533675815</a>
</li><li> <a class="wiki external"  title="External link" href="https://mastodon.infra.de/@galaxis/103070354567250488" rel="external nofollow">https://mastodon.infra.de/@galaxis/103070354567250488</a>

</li></ul><br />Notes:<br />
<ul><li> I haven't had much luck in installing EasyMiNT to anything other than the <strong>C:</strong> drive
</li><li> The MiNT kernel provided with EasyMiNT is too old to be able to load the Lightning drivers, but since I had successfully installed the EasyMiNT distribution already, I wanted to upgrade it with a current <a class="wiki external"  title="External link" href="https://freemint.github.io/" rel="external nofollow">FreeMiNT</a> <a class="wikicache" target="_blank" href="tiki-view_cache.php?url=https%3A%2F%2Ffreemint.github.io%2F">(cache)</a> release.
</li><li> Booting the current MiNT kernel (as of 1-19-73f)  hangs after the "Installing BIOS keyboard" message. <a class="wiki external"  title="External link" href="http://atari-forum.com/viewtopic.php?t=34224#p377824" rel="external nofollow">This thread</a> <a class="wikicache" target="_blank" href="tiki-view_cache.php?url=http%3A%2F%2Fatari-forum.com%2Fviewtopic.php%3Ft%3D34224%23p377824">(cache)</a> on atari-forum.com recommends removing BIGDOS.PRG from the AUTO folder. Apparently BIGDOS is not required when using recent MiNT kernels anyway (I also got rid of WDIALOG.PRG while I was at it).

</li></ul><br />EasyMiNT installed on C: boots the kernel from <strong>C:\MINT\1-19-CUR</strong>. I didn't want to touch that working part of the setup, so I downloaded a full snapshot from <a class="wiki external"  title="External link" href="https://bintray.com/freemint/freemint/snapshots/" rel="external nofollow">https://bintray.com/freemint/freemint/snapshots/</a> that uses the snapshot version as MiNT SYSDIR (<strong>C:\MINT\1-19-73f</strong> for my build). Changing from the EasyMiNT kernel to the current <strong>MINT030.PRG</strong> in <strong>C:\AUTO\</strong> then implicitly executes everything else from the corresponding SYSDIR.<br />
<br />
As it turns out, the USB drivers included with the current FreeMiNT distribution are incompatible with those from the <a class="wiki external"  title="External link" href="https://www.newtosworld.de/viewforum.php?f=6&amp;sid=d47d0e2fd49e7d9426c7dc6919f26d65" rel="external nofollow">Lightning VME driver disk</a>. The easiest way is to rename <strong>$SYSDIR\USB</strong> to something else and replace the directory with the files from the <strong>TT\MINT</strong> directory in the Lightning distribution - and then add a missing file (<strong>ETH.UDD</strong>) <a class="wiki external"  title="External link" href="https://forum.atari-home.de/index.php/topic,14000.540.html" rel="external nofollow">attached to this forum post</a> <a class="wikicache" target="_blank" href="tiki-view_cache.php?url=https%3A%2F%2Fforum.atari-home.de%2Findex.php%2Ftopic%2C14000.540.html">(cache)</a>. Using the <strong>ETH.UDD</strong> provided with FreeMiNT does not work and leads to an "API Mismatch" message.<br />
<br />
To keep using most of the EasyMiNT setup, I adapted the boot sequence and MINT.CNF (some hints on the general boot layout can be found in <a href="tiki-index.php?page=MiNTBootSequence" title="MiNTBootSequence" class="wiki wiki_page">MiNTBootSequence</a>) by replacing some of the <strong>sln</strong> links. The relevant sections of my current configuration look like this (<strong>E:</strong> is my EasyMiNT ext2 filesystem):<br />
<br />
<pre># add some binaries provided by FreeMiNT, later referenced in PATH
sln c:/mint/1-19-73f/sys-root/bin              u:/sysbin
# GEM programs included in the FreeMiNT distribution
sln c:/mint/1-19-73f/sys-root/opt              u:/opt
sln c:/mint/1-19-73f/sys-root/share            u:/share
# EasyMINT links
sln e:/etc     u:/etc
sln e:/bin     u:/bin
sln e:/sbin    u:/sbin
sln e:/home    u:/home
sln e:/usr     u:/usr
sln e:/mnt     u:/mnt
sln e:/root    u:/root
sln e:/tmp     u:/tmp
# this line only works after removing the /usr/bin/xaaes symlink in EasyMiNT!
# with this, the EasyMiNT/SpareMiNT init script keeps starting XaAES without any further changes
sln c:/mint/1-19-73f/xaaes/xaloader.prg    u:/usr/bin/xaaes

# I've found that using TOS paths in MINT.CNF works better?
setenv PATH u:\sysbin,u:\bin,u:\usr\bin,u:\usr\sbin,u:\sbin,u:\c\mint\1-19-73f\xaaes

setenv TMPDIR u:\tmp

# provided by EasyMiNT, only works when the appropriate directories on E: are linked in
exec u:\c\mint\bin\sh u:\c\mint\bin\fscheck.sh

setenv TZ 'Europe/Berlin'
exec u:\sbin\tzinit -l

# load Lightning USB drivers
exec u:\c\mint\1-19-73f\usb\loader.prg

# use SpareMiNT init system, as installed by EasyMiNT
INIT=u:\sbin\init
</pre><br />
Linking in XALOADER.PRG via an sln link makes it easy to adapt the configuration to new releases. Most of the rest of the sln link tree comes from the MINT.CNF created by the EasyMiNT installer.<br />
<br />]]></summary>
    <published>2019-11-03T12:02:00+00:00</published>
    <updated>2019-11-03T12:02:00+00:00</updated>
    <link rel="alternate" type="text/html" href="https://web.gxis.de/tiki/tiki-view_blog_post.php%3FpostId%3D326"/>
    <id>https://web.gxis.de/tiki/tiki-view_blog_post.php%3FpostId%3D326</id>
    <author>
      <name>Alexander Bochmann</name>
      <email>ab+wiki@st.gxis.de</email>
    </author>
  </entry>
  <entry>
    <title type="html"><![CDATA[deleting stale VMware NSX controller instances]]></title>
    <summary type="html"><![CDATA[When the installation of a VMware NSX controller fails and it's locked in the UI, you can just delete it through the API:<br />
<br />
<iframe src="https://mastodon.infra.de/@galaxis/102376746230112269/embed" class="mastodon-embed" style="max-width: 100%; border: 1" width="400" height="400"></iframe><script src="https://mastodon.infra.de/embed.js" async="async"></script><br />
<br />
(from <a class="wiki external"  title="External link" href="https://mastodon.infra.de/@galaxis/102376746230112269" rel="external nofollow">https://mastodon.infra.de/@galaxis/102376746230112269</a> <a class="wikicache" target="_blank" href="tiki-view_cache.php?url=https%3A%2F%2Fmastodon.infra.de%2F%40galaxis%2F102376746230112269">(cache)</a>)<br />
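If I remember the NSX-V API correctly, the call boils down to something like this (manager address and controller ID are placeholders):<br />
<br />
<pre>curl -k -u admin -X DELETE \
 "https://nsx-manager.example.com/api/2.0/vdn/controller/controller-2?forceRemoval=true"</pre><br />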
<br />]]></summary>
    <published>2019-07-06T17:07:00+00:00</published>
    <updated>2019-07-06T17:07:00+00:00</updated>
    <link rel="alternate" type="text/html" href="https://web.gxis.de/tiki/tiki-view_blog_post.php%3FpostId%3D325"/>
    <id>https://web.gxis.de/tiki/tiki-view_blog_post.php%3FpostId%3D325</id>
    <author>
      <name>Alexander Bochmann</name>
      <email>ab+wiki@st.gxis.de</email>
    </author>
  </entry>
  <entry>
    <title type="html"><![CDATA[Apache 2.4 as a reverse proxy for Mastodon]]></title>
    <summary type="html"><![CDATA[The standard setup for <a class="wiki external"  title="External link" href="https://joinmastodon.org/" rel="external nofollow">Mastodon</a> is to use nginx as a reverse proxy. After running into one missing feature too many, I recently switched my installation over to good old Apache.<br />
<br />
There's <a class="wiki external"  title="External link" href="https://github.com/tootsuite/documentation/blob/master/Running-Mastodon/Alternatives.md#apache" rel="external nofollow">an example Apache config</a> <a class="wikicache" target="_blank" href="tiki-view_cache.php?url=https%3A%2F%2Fgithub.com%2Ftootsuite%2Fdocumentation%2Fblob%2Fmaster%2FRunning-Mastodon%2FAlternatives.md%23apache">(cache)</a> in the unmaintained old documentation archive for Mastodon, and since I assume it's useless to try to update that, I'll quickly dump my current config here. There's no guarantee of correctness, but it currently seems to work for me. Note that this configuration does not do any caching for requests to static content retrieved through the reverse proxy.<br />
<br />
The following Apache modules are used:<br />
<br />
<ul><li> proxy
</li><li> proxy_http
</li><li> http2
</li><li> proxy_http2
</li><li> proxy_wstunnel
</li><li> headers
</li><li> socache_shmcb
</li><li> ssl

</li></ul><br />General SSL configuration (personal preference, CipherSuite selection is probably going to age badly). TLS v1.3 is disabled since Ubuntu bionic ships an Apache version that's too old for that:<br />
<br />
<pre>&lt;IfModule mod_ssl.c&gt;

        SSLCertificateFile     &lt;path to combined public key / certificate chain file&gt;
        SSLCertificateKeyFile  &lt;path to private key&gt;
        #   the referenced file can be the same as SSLCertificateFile
        #   when the CA certificates are directly appended to the server
        #   certificate for convenience.
        SSLCertificateChainFile &lt;path to combined public key / certificate chain file&gt;

        # SSLProtocol -all +TLSv1.2 +TLSv1.3
        SSLProtocol -all +TLSv1.2 +TLSv1.1
        SSLHonorCipherOrder on
        SSLCipherSuite ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:EECDH+AESGCM:AES256+EECDH:AES128+EECDH
        SSLCompression off
        SSLSessionTickets off
        SSLSessionCache "shmcb:logs/session-cache(512000)"
        SSLStaplingResponderTimeout 5
        SSLStaplingReturnResponderErrors off
        SSLUseStapling on
        SSLStaplingCache "shmcb:logs/stapling-cache(150000)"

        # needs to be generated first, see https://weakdh.org/sysadmin.html
        SSLOpenSSLConfCmd DHParameters /etc/ssl/dhparam.pem

&lt;/IfModule&gt;
</pre><br />
<br />
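On a Debian-based system, the modules listed above can be enabled in one go (assuming the stock module names):<br />
<br />
<pre>a2enmod proxy proxy_http http2 proxy_http2 proxy_wstunnel headers socache_shmcb ssl
systemctl restart apache2</pre><br />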
Mastodon vhost configuration:<br />
<br />
<pre>
&lt;VirtualHost *:443&gt;
        ServerAdmin webmaster@example.com
        ServerName mastodon.example.com

        SSLEngine on

        Protocols h2 http/1.1

        # fetch static files directly from local file system (adapt to installation path)
        DocumentRoot /home/mastodon/live/public

        Header always set Strict-Transport-Security "max-age=31536000"

        &lt;LocationMatch "^/(assets|avatars|emoji|headers|packs|sounds|system)"&gt;
                Header always set Cache-Control "public, max-age=31536000, immutable"
                Require all granted
        &lt;/LocationMatch&gt;

        &lt;Location "/"&gt;
                Require all granted
        &lt;/Location&gt;

        ProxyPreserveHost On
        RequestHeader set X-Forwarded-Proto "https"
        ProxyAddHeaders On

        # these files / paths don't get proxied and are retrieved from DocumentRoot
        ProxyPass /500.html !
        ProxyPass /sw.js !
        ProxyPass /robots.txt !
        ProxyPass /manifest.json !
        ProxyPass /browserconfig.xml !
        ProxyPass /mask-icon.svg !
        ProxyPassMatch ^(/.*\.(png|ico)$) !
        ProxyPassMatch ^/(assets|avatars|emoji|headers|packs|sounds|system) !
        # everything else is either going to the streaming API or the web workers
        ProxyPass /api/v1/streaming ws://localhost:4000
        ProxyPassReverse /api/v1/streaming ws://localhost:4000
        ProxyPass / http://localhost:3000/
        ProxyPassReverse / http://localhost:3000/

        ErrorDocument 500 /500.html
        ErrorDocument 501 /500.html
        ErrorDocument 502 /500.html
        ErrorDocument 503 /500.html
        ErrorDocument 504 /500.html

&lt;/VirtualHost&gt;
</pre><br />
The trailing slash on the websocket ProxyPass directive is omitted by design (the old example config has it): some API requests seen in the wild do not match <em>/api/v1/streaming/</em> and would get lost.<br />]]></summary>
    <published>2019-05-31T19:45:00+00:00</published>
    <updated>2019-05-31T19:45:00+00:00</updated>
    <link rel="alternate" type="text/html" href="https://web.gxis.de/tiki/tiki-view_blog_post.php%3FpostId%3D323"/>
    <id>https://web.gxis.de/tiki/tiki-view_blog_post.php%3FpostId%3D323</id>
    <author>
      <name>Alexander Bochmann</name>
      <email>ab+wiki@st.gxis.de</email>
    </author>
  </entry>
  <entry>
    <title type="html"><![CDATA[creating an iPXE boot floppy]]></title>
    <summary type="html"><![CDATA[The <a class="wiki external"  title="External link" href="http://ipxe.org/" rel="external nofollow">iPXE open source boot firmware project</a> provides a <a class="wiki external"  title="External link" href="http://ipxe.org/download" rel="external nofollow">CD image</a> that boots the iPXE binary using isolinux.<br />
<br />
Over on the Fediverse, the topic of bootstrapping a system from a floppy disk came up, and with the iPXE binary being a mere 330KB, there's really no reason why it shouldn't be possible to boot it from a floppy disk. And it actually does work, with a few simple steps (on a Debian-ish Linux):<br />
<br />
<ul><li> format floppy disk and create FAT filesystem <pre>fdformat /dev/fd0
mkfs -t fat /dev/fd0</pre>
</li><li> get syslinux and install to floppy <pre>apt install syslinux syslinux-utils
syslinux --install /dev/fd0</pre>
</li><li> get iPXE ISO <pre>curl -O http://boot.ipxe.org/ipxe.iso</pre>
</li><li> mount both iPXE ISO and floppy, copy over required files, rename isolinux.cfg to syslinux.cfg <pre>mkdir fd iso
mount /dev/fd0 fd
mount -o ro ipxe.iso iso
cp iso/ipxe.krn fd/
cp iso/boot.cat fd/
cp iso/isolinux.cfg fd/syslinux.cfg
umount fd
umount iso
rmdir fd iso
</pre>

</li></ul><br />That's all! Take your floppy and boot a system.<br />
<br />
Once iPXE has been started, hit <em>Ctrl-B</em> to call the shell. If you have a DHCP server on your network and a web server with a bootable ISO image, it's just two iPXE commands:<br />
<br />
<pre>dhcp
sanboot http://&lt;webserver&gt;/&lt;filename&gt;.iso</pre><br />
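The same two steps can also go into an iPXE script that's chainloaded from a web server, instead of typing them at the shell each time (URL is an example):<br />
<br />
<pre>#!ipxe
dhcp
sanboot http://192.0.2.10/install.iso</pre><br />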
<br />]]></summary>
    <published>2018-06-30T22:20:00+00:00</published>
    <updated>2018-06-30T22:20:00+00:00</updated>
    <link rel="alternate" type="text/html" href="https://web.gxis.de/tiki/tiki-view_blog_post.php%3FpostId%3D315"/>
    <id>https://web.gxis.de/tiki/tiki-view_blog_post.php%3FpostId%3D315</id>
    <author>
      <name>Alexander Bochmann</name>
      <email>ab+wiki@st.gxis.de</email>
    </author>
  </entry>
  <entry>
    <title type="html"><![CDATA[SolidFire FDVA software repository downgrade]]></title>
    <summary type="html"><![CDATA[We've been playing with a <a class="wiki external"  title="External link" href="http://www.netapp.com/us/products/storage-systems/all-flash-array/solidfire-web-scale.aspx" rel="external nofollow">SolidFire flash storage cluster</a> for some time, and recently wanted to update the nodes to the current ElementOS 10.1 release.<br />
<br />
Unfortunately, our FDVA management node installation was borked, so we decided to just roll a new one from the current VM appliance template - easy.<br />
As it turns out though, the FDVA appliance only ships with the latest software release files, and the individual SolidFire nodes check back for a repository with their current version before starting the update, which consequently fails (it's all very Ubuntu-ish):<br />
<br />
<pre>
admin@SF-7323:~$ sudo sfinstall 192.168.10.21 -u admin -p password -l
2017-12-20 17:27:52: sfinstall Release Version: 10.1.0.83 Revision:  Build date: 2017-11-23 01:27
2017-12-20 17:27:52: Checking connectivity to MVIP 192.168.10.21
2017-12-20 17:27:52: Successfully connected to cluster MVIP
2017-12-20 17:27:53: PrintRepositoryPackages failed: SolidFireApiError server=[192.168.10.10] method=[AptUpdate], params=[{'quiet': 2}] - error name=[xCheckFailure], 
message=[cmdResult={ rc=255 stdout="W: Failed to fetch http://192.168.10.10/fluorine-updates/ubuntu/dists/precise/main/binary-amd64/Packages  404  Not Found
[..]
W: Failed to fetch http://192.168.10.10/fluorine-updates/security-ubuntu/dists/precise-security/universe/binary-amd64/Packages  404  Not Found
</pre><br />
The SolidFire docs don't really mention what to do from there, so we tinkered around for some time and found this:<br />
<br />
Any older version of the repository can be fetched using the <em>update-fdva</em> tool with the currently used ElementOS release version as command line argument (the version number can be seen on the cluster web UI, or by asking the cluster nodes for their mnode repository using sfinstall). In our case, the active version was 9.2.0.43:<br />
<br />
<pre>
admin@SF-7323:~$ sudo update-fdva 9.2.0.43
Get: 1 http://localhost precise Release.gpg [490 B]
Get: 2 http://localhost precise-updates Release.gpg [490 B]
[..]
</pre><br />
This will fetch the 9.2.0.43 version of the SolidFire repository, but will also downgrade to the matching (old) versions of solidfire-fdva-tools and solidfire-python-framework...<br />
<br />
<pre>
admin@SF-7323:~$ dpkg -l | grep fdva
ii  solidfire-fdva-tools-fluorine-patch2-9.2.0.43               9.2.0.43                          SolidFire FDVA Tools 9 [fluorine-patch2]
</pre><br />
...so we immediately reinstalled the current versions, using <em>update-fdva</em> again, this time with the current release version number:<br />
<br />
<pre>
admin@SF-7323:~$ sudo update-fdva 10.1.0.83
</pre><br />
With all that in place, we could just run the update routine using the usual <em>sfinstall</em> command.<br />
<br />]]></summary>
    <published>2017-12-21T15:11:00+00:00</published>
    <updated>2017-12-21T15:11:00+00:00</updated>
    <link rel="alternate" type="text/html" href="https://web.gxis.de/tiki/tiki-view_blog_post.php%3FpostId%3D311"/>
    <id>https://web.gxis.de/tiki/tiki-view_blog_post.php%3FpostId%3D311</id>
    <author>
      <name>Alexander Bochmann</name>
      <email>ab+wiki@st.gxis.de</email>
    </author>
  </entry>
  <entry>
    <title type="html"><![CDATA[find obsolete packages on a Debian system]]></title>
    <summary type="html"><![CDATA[After dist-upgrading a Debian system recently, I wondered which packages might have been left over from previous releases (the system in question has been through several dist-upgrades over its lifetime), even after running <code>apt-get autoremove</code> and <code>deborphan</code>. After <a class="wiki external"  title="External link" href="https://mastodon.infra.de/@galaxis/95461" rel="external nofollow">dropping that question on Mastodon</a> <a class="wikicache" target="_blank" href="tiki-view_cache.php?url=https%3A%2F%2Fmastodon.infra.de%2F%40galaxis%2F95461">(cache)</a>, I got an answer pointing to <code>apt-show-versions</code>, which I hadn't known about until now.<br />
<br />
This totally does what I've been looking for. From the man page:<br />
<br />
<pre>NAME
       apt-show-versions - Lists available package versions with distribution

DESCRIPTION
       apt-show-versions parses the dpkg status file and the APT lists for the installed and available package
       versions and distribution and shows upgrade options within the specific distribution of the selected package.

       This is really useful if you have a mixed stable/testing environment and want to list all packages which are
       from testing and can be upgraded in testing.
</pre><br />
<br />
Since I didn't have a package cache for apt-show-versions from the older release, all old packages are currently just shown with a <em>No available version in archive</em> comment. But since current packages are tagged with the release, I can exclude those with a simple grep:<br />
<br />
<tt> # apt-show-versions | egrep -vc jessie</tt><br />
<tt> 58</tt><br />
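Conversely, the leftover packages themselves can be listed by matching on that comment (assuming the output format stays stable):<br />
<br />
<tt> # apt-show-versions | grep 'No available version in archive'</tt><br />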
<br />
<br />]]></summary>
    <published>2017-07-08T08:40:00+00:00</published>
    <updated>2017-07-08T08:40:00+00:00</updated>
    <link rel="alternate" type="text/html" href="https://web.gxis.de/tiki/tiki-view_blog_post.php%3FpostId%3D300"/>
    <id>https://web.gxis.de/tiki/tiki-view_blog_post.php%3FpostId%3D300</id>
    <author>
      <name>Alexander Bochmann</name>
      <email>ab+wiki@st.gxis.de</email>
    </author>
  </entry>
  <entry>
    <title type="html"><![CDATA[Intel D945GCLF2, ACPI Exception: AE_AML_INFINITE_LOOP]]></title>
    <summary type="html"><![CDATA[Another of those "just so I find my own post the next time I'm looking for this" things...<br />
<br />
After replacing the CPU fan on an old Intel D945GCLF2 Atom board, I "optimized" the BIOS settings by enabling automatic fan control (instead of having the fan run at a fixed speed).<br />
<br />
Currently, I have lots of messages like this in my kernel log, and a kworker thread using 100% CPU:<br />
<br />
<pre>ACPI Error: Method parse/execution failed [\_SB_.PCI0.LPC_.SMBR] (Node ffff88007ec3f900), AE_AML_INFINITE_LOOP (20140424/psparse-536)
ACPI Error: Method parse/execution failed [\_SB_.PCI0.LPC_.INIT] (Node ffff88007ec3f928), AE_AML_INFINITE_LOOP (20140424/psparse-536)
ACPI Error: Method parse/execution failed [\_GPE._L00] (Node ffff88007ec35bd0), AE_AML_INFINITE_LOOP (20140424/psparse-536)
ACPI Exception: AE_AML_INFINITE_LOOP, while evaluating GPE method [_L00] (20140424/evgpe-580)</pre><br />
So, an hour or so of searching later, I finally hit <a class="wiki external"  title="External link" href="https://bugzilla.novell.com/show_bug.cgi?id=689848#c17" rel="external nofollow">this comment on the Novell bugzilla</a> <a class="wikicache" target="_blank" href="tiki-view_cache.php?url=https%3A%2F%2Fbugzilla.novell.com%2Fshow_bug.cgi%3Fid%3D689848%23c17">(cache)</a> - and then I promptly remembered that I had known about this problem before, and that it was exactly the reason why the fan had been set to a fixed speed:<br />
<br />
<div class='quote'>
    <div class='quoteheader'>
                    <i class="fas fa-quote-left" aria-hidden="true"></i>
            </div>
    <div class='quotebody'>
        In BIOS I have DISABLED auto fan speed. It is now set at 90%. It seems to have fixed it<br />
            </div>
</div>
<br />]]></summary>
    <published>2017-05-10T21:41:00+00:00</published>
    <updated>2017-05-10T21:41:00+00:00</updated>
    <link rel="alternate" type="text/html" href="https://web.gxis.de/tiki/tiki-view_blog_post.php?postId=295"/>
    <id>https://web.gxis.de/tiki/tiki-view_blog_post.php?postId=295</id>
    <author>
      <name>Alexander Bochmann</name>
      <email>ab+wiki@st.gxis.de</email>
    </author>
  </entry>
  <entry>
    <title type="html"><![CDATA[sendmail MIME conversion vs. DMARC+DKIM]]></title>
    <summary type="html"><![CDATA[I've recently tried to reconfigure a mailinglist that I run on one of my systems so that it causes fewer problems for recipients that use DMARC.<br />
<br />
To that end, I wanted to implement the first option mentioned in the corresponding <a class="wiki external"  title="External link" href="https://dmarc.org/wiki/FAQ#I_operate_a_mailing_list_and_I_want_to_interoperate_with_DMARC.2C_what_should_I_do.3F" rel="external nofollow">DMARC FAQ for mailinglist administrators</a> <a class="wikicache" target="_blank" href="tiki-view_cache.php?url=https%3A%2F%2Fdmarc.org%2Fwiki%2FFAQ%23I_operate_a_mailing_list_and_I_want_to_interoperate_with_DMARC.2C_what_should_I_do.3F">(cache)</a>: Don't make any changes to the message body or to headers potentially covered by a DKIM signature.<br />
<br />
On the mailinglist configuration side, this means not adding a list tag to the message subject, and not changing the body with an additional header or footer. (Adding the usual RFC2369/RFC2919 list headers is no problem.)<br />
<br />
After doing that, it turned out that my setup, using <a class="wiki external"  title="External link" href="http://www.sendmail.org/~ca/" rel="external nofollow">sendmail</a>, still changed the body of some messages, due to the MIME conversions that sendmail performs automatically.<br />
<br />
Getting rid of that actually required some digging into sendmail configuration and documentation:<br />
<br />
Message delivery to local programs is handled by the aptly named "prog" mailer. Unfortunately, as far as the m4 configuration statements are concerned, all the variables for this mailer are called "SHELL"... The default flags for the prog/shell mailer are hardcoded to "[eu9]", according to the <a class="wiki external"  title="External link" href="http://www.sendmail.org/~ca/email/doc8.10/cf.README" rel="external nofollow">README</a> (though these defaults are changed in some of the OSTYPE definitions in the m4 macro collection):<br />
<br />
<div class='quote'>
    <div class='quoteheader'>
                    <cite>sendmail cf.README</cite> wrote:
            </div>
    <div class='quotebody'>
        LOCAL_SHELL_FLAGS	[eu9] The flags used by the shell mailer.  The flags lsDFM are always included.<br />
            </div>
</div>
<br />
The meaning of the individual flags is documented in the <a class="wiki external"  title="External link" href="http://www.sendmail.org/~ca/email/doc8.12/op-sh-5.html#sh-5.4" rel="external nofollow">OP manual</a>:<br />
<br />
<div class='quote'>
    <div class='quoteheader'>
                    <cite>sendmail OP manual</cite> wrote:
            </div>
    <div class='quotebody'>
        7<br />
&nbsp;&nbsp;&nbsp;Strip all output to seven bits. This is the default if the L flag is set. Note that clearing this option is not sufficient to get full eight bit data passed through sendmail. If the 7 option is set, this is essentially always set, since the eighth bit was stripped on input. Note that this option will only impact messages that didn't have 8-&gt;7 bit MIME conversions performed.<br />
8<br />
&nbsp;&nbsp;&nbsp;If set, it is acceptable to send eight bit data to this mailer; the usual attempt to do 8-&gt;7 bit MIME conversions will be bypassed.<br />
9<br />
&nbsp;&nbsp;&nbsp;If set, do limited 7-&gt;8 bit MIME conversions. These conversions are limited to text/plain data.<br />
            </div>
</div>
<br />
<br />
So it seems I want to get rid of the <em>9</em> in the LOCAL_SHELL_FLAGS, and replace it with an <em>8</em>...<br />
<br />
Adding the following two statements to my sendmail m4 configuration source does exactly that:<br />
<br />
<div class="codecaption">sendmail.mc</div><div class="codelisting_container"><div class="icon_copy_code far fa-clipboard" tabindex="0"  data-clipboard-target="#codebox5" ><span class="copy_code_tooltiptext">Copy to clipboard</span></div><pre class="codelisting"  data-theme="off"  data-wrap="1"  dir="ltr"  style="white-space:pre-wrap; overflow-wrap: break-word; word-wrap: break-word;" id="codebox5" ><div class="code">dnl # disable MIME-Autoconversion for prog mailer
MODIFY_MAILER_FLAGS(`SHELL&#039;, `-9&#039;)
MODIFY_MAILER_FLAGS(`SHELL&#039;, `+8&#039;)</div></pre></div><br />]]></summary>
    <published>2017-04-15T13:38:00+00:00</published>
    <updated>2017-04-15T13:38:00+00:00</updated>
    <link rel="alternate" type="text/html" href="https://web.gxis.de/tiki/tiki-view_blog_post.php?postId=286"/>
    <id>https://web.gxis.de/tiki/tiki-view_blog_post.php?postId=286</id>
    <author>
      <name>Alexander Bochmann</name>
      <email>ab+wiki@st.gxis.de</email>
    </author>
  </entry>
  <entry>
    <title type="html"><![CDATA[raspbian jessie - rsyslogd-2007: action 'action 17' suspended, next retry ...]]></title>
    <summary type="html"><![CDATA[On a headless Raspberry Pi running raspbian/jessie, the <em>/var/log/messages</em> file is filling up with entries like these:<br />
<br />
<pre> rsyslogd-2007: action 'action 17' suspended, next retry is [..date..] [ try http://www.rsyslog.com/e/2007 ] </pre><br />
It seems this message is generated when rsyslogd isn't able to deliver syslog messages to one of the destinations in rsyslog.conf.<br />
<br />
In the case of raspbian, it's obviously the entry at the end of the config that tries to pipe messages to <em>|/dev/xconsole</em> - which doesn't exist on a system that doesn't run X11...<br />
<br />
The messages disappear after commenting out or deleting the corresponding lines:<br />
<br />
<div class="codecaption">/etc/rsyslog.conf</div><div class="codelisting_container"><div class="icon_copy_code far fa-clipboard" tabindex="0"  data-clipboard-target="#codebox6" ><span class="copy_code_tooltiptext">Copy to clipboard</span></div><pre class="codelisting"  data-theme="off"  data-wrap="1"  dir="ltr"  style="white-space:pre-wrap; overflow-wrap: break-word; word-wrap: break-word;" id="codebox6" ><div class="code">#daemon.*;mail.*;\
#       news.err;\
#       *.=debug;*.=info;\
#       *.=notice;*.=warn       |/dev/xconsole</div></pre></div><br />
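To make rsyslogd pick up the edited configuration, it has to be restarted - on a systemd-based raspbian/jessie presumably something like:<br />
<br />
<pre> sudo systemctl restart rsyslog</pre><br />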
I really should file a bug report for this...<br />]]></summary>
    <published>2017-04-10T19:45:00+00:00</published>
    <updated>2017-04-10T19:45:00+00:00</updated>
    <link rel="alternate" type="text/html" href="https://web.gxis.de/tiki/tiki-view_blog_post.php?postId=283"/>
    <id>https://web.gxis.de/tiki/tiki-view_blog_post.php?postId=283</id>
    <author>
      <name>Alexander Bochmann</name>
      <email>ab+wiki@st.gxis.de</email>
    </author>
  </entry>
  <entry>
    <title type="html"><![CDATA[Splunk eval vs. variable names with dashes]]></title>
    <summary type="html"><![CDATA[I'm pretty certain I used to know this - but for the next time I'm putting this into a search engine and don't find it in the Splunk docs:<br />
<br />
One of our data sources writes structured data into our <a class="wiki external"  title="External link" href="https://www.splunk.com/en_us/products/splunk-enterprise.html" rel="external nofollow">Splunk</a> installation that contains variable names with dashes - in this particular case, <em>access-time</em>.<br />
<br />
It's no problem using such a variable in a lot of Splunk operations, but it fails in an <a class="wiki external"  title="External link" href="http://docs.splunk.com/Documentation/Splunk/latest/SearchReference/eval" rel="external nofollow">eval</a>, as it will be interpreted as a mathematical operation (<em>access</em> minus <em>time</em>).<br />
<br />
There are two options to work around that:<br />
<br />
<ol><li> the one mentioned in the Splunk documentation: Put the variable name in single quotes, i.e. <em>| eval newtime='access-time' - constant</em>
</li><li> the other one is to simply rename the variable before working on it: <em>| rename access-time AS accesstime | eval newtime=accesstime - constant</em></li></ol><br />]]></summary>
    <published>2017-04-05T20:54:00+00:00</published>
    <updated>2017-04-05T20:54:00+00:00</updated>
    <link rel="alternate" type="text/html" href="https://web.gxis.de/tiki/tiki-view_blog_post.php?postId=276"/>
    <id>https://web.gxis.de/tiki/tiki-view_blog_post.php?postId=276</id>
    <author>
      <name>Alexander Bochmann</name>
      <email>ab+wiki@st.gxis.de</email>
    </author>
  </entry>
  <entry>
    <title type="html"><![CDATA[downgrading Android apps using data from TWRP backups]]></title>
    <summary type="html"><![CDATA[Mostly as a reminder to myself when I'm looking to solve this kind of problem the next time: Since the March 22, 2017 version of the <a class="wiki external"  title="External link" href="https://play.google.com/store/apps/details?id=com.fortinet.forticlient_vpn" rel="external nofollow">FortiClient VPN Android app</a> kept crashing on my mobile (still running the last Cyanogenmod 13 snapshot) as soon as I tried to switch away to the launcher, I wanted to downgrade the app.<br />
<br />
Unfortunately, there's no copy on apkmirror.com or F-Droid, and I don't know of any other reasonably trustworthy sources. I had also already removed and reinstalled the app, so recovering the old version from the phone didn't seem to be an option either.<br />
<br />
Fortunately, I take <a class="wiki external"  title="External link" href="https://twrp.me/" rel="external nofollow">TWRP</a> backups now and then, so I tried looking at one of those. For once, having unencrypted backups turned out to be really convenient: A TWRP <em>data.ext4.win</em> file is just a tar.gz, so I was able to recover the <em>app/com.fortinet.forticlient_vpn-1/base.apk</em> file (using 7Zip on Windows) and copy it over to my phone. After uninstalling the current version of the FortiClient app, I just reinstalled the program with the CM file manager, using the restored <em>base.apk</em> as the source. Done.<br />]]></summary>
    <published>2017-03-27T22:41:00+00:00</published>
    <updated>2017-03-27T22:41:00+00:00</updated>
    <link rel="alternate" type="text/html" href="https://web.gxis.de/tiki/tiki-view_blog_post.php?postId=270"/>
    <id>https://web.gxis.de/tiki/tiki-view_blog_post.php?postId=270</id>
    <author>
      <name>Alexander Bochmann</name>
      <email>ab+wiki@st.gxis.de</email>
    </author>
  </entry>
  <entry>
    <title type="html"><![CDATA[Cisco ASA logging: Disable hiding of usernames in failed admin logins]]></title>
    <summary type="html"><![CDATA[By default, Cisco ASA firewalls don't log the username used in a failed administrator login. Instead, the username is masked with "*" characters:<br />
<br />
<tt> %ASA-6-113005: AAA user authentication Rejected : reason = AAA failure : server = 10.1.1.1 : user = ***** : user IP = 192.168.0.10</tt><br />
<br />
The rationale is that users sometimes enter their password instead of the username, and the password will then end up in logs. As we're using two-factor authentication for admin logins, that doesn't apply to us.<br />
<br />
That behaviour was actually <a class="wiki external"  title="External link" href="https://quickview.cloudapps.cisco.com/quickview/bug/CSCur17006" rel="external nofollow">tracked as a bug in Cisco's bug database</a> <a class="wikicache" target="_blank" href="tiki-view_cache.php?url=https%3A%2F%2Fquickview.cloudapps.cisco.com%2Fquickview%2Fbug%2FCSCur17006">(cache)</a>, and while the bug report notes that a command was introduced to change this behaviour, the command itself isn't given.<br />
<br />
After some fiddling on the ASA command line I found this statement:<br />
<br />
<tt> no logging hide username</tt><br />
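<br />
(As usual, that only changes the running configuration - if I remember the ASA CLI correctly, entering and persisting it looks roughly like this:)<br />
<br />
<pre> asa# configure terminal
 asa(config)# no logging hide username
 asa(config)# end
 asa# write memory</pre><br />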
<br />
The corresponding button in the ASDM GUI is in <em>Device Management -&gt; Logging -&gt; Syslog Setup: "Hide username if its validity cannot be determined"</em><br />]]></summary>
    <published>2017-03-23T15:28:00+00:00</published>
    <updated>2017-03-23T15:28:00+00:00</updated>
    <link rel="alternate" type="text/html" href="https://web.gxis.de/tiki/tiki-view_blog_post.php?postId=264"/>
    <id>https://web.gxis.de/tiki/tiki-view_blog_post.php?postId=264</id>
    <author>
      <name>Alexander Bochmann</name>
      <email>ab+wiki@st.gxis.de</email>
    </author>
  </entry>
  <entry>
    <title type="html"><![CDATA[so I didn't notice that my OpenBSD vserver had broken IPv6 for quite some time...]]></title>
    <summary type="html"><![CDATA[...until I had a look at the DNS server log, which showed errors contacting other servers via IPv6.<br />
<br />
The hosting provider I'm using has a somewhat strange IPv6 setup where you get a /64 for your system, but the default gateway is just fe80::1 - when I originally set up the system, I put that into /etc/mygate without thinking much about it.<br />
<br />
This worked for quite some time, but it seems the default route vanished at some point. (In retrospect I don't quite understand why the setup ever worked at all, as the lo0 loopback interface has fe80::1 auto-assigned too...)<br />
<br />
Then I remembered that link-local addresses need an interface scope identifier, since fe80:: addresses exist on every IPv6-enabled interface, and the OS needs some way to decide <em>which</em> fe80:: address it is dealing with right now.<br />
<br />
Edited <strong>/etc/mygate</strong> accordingly, and things are back to normal (vio is OpenBSD's VirtIO network device driver, so my virtual ethernet device is vio0):<br />
<br />
<tt> fe80::1%vio0</tt><br />
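<br />
To apply the change without a reboot and to verify the result, something along these lines should work (from memory, untested):<br />
<br />
<pre> route add -inet6 default fe80::1%vio0
 route -n show -inet6 | grep default</pre><br />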
<br />
<br />]]></summary>
    <published>2017-02-19T13:37:00+00:00</published>
    <updated>2017-02-19T13:37:00+00:00</updated>
    <link rel="alternate" type="text/html" href="https://web.gxis.de/tiki/tiki-view_blog_post.php?postId=201"/>
    <id>https://web.gxis.de/tiki/tiki-view_blog_post.php?postId=201</id>
    <author>
      <name>Alexander Bochmann</name>
      <email>ab+wiki@st.gxis.de</email>
    </author>
  </entry>
</feed>
