SysAdmin Blog

Cisco ASA logging: Disable hiding of usernames in failed admin logins

Alexander Bochmann Thursday 23 of March, 2017
By default, Cisco ASA firewalls don't log the username used in a failed administrator login. Instead, the username is masked with "*" characters:

%ASA-6-113005: AAA user authentication Rejected : reason = AAA failure : server = : user = ***** : user IP =

The rationale is that users sometimes enter their password instead of the username, and the password will then end up in logs. As we're using two-factor authentication for admin logins, that doesn't apply to us.

That behaviour was actually tracked as a bug in Cisco's bug database (cache), and while the bug notes that a command was introduced to change this behaviour, it doesn't say which one.

After some fiddling on the ASA command line I found this statement:

no logging hide username

The corresponding button in the ASDM GUI is in Device Management -> Logging -> Syslog Setup: "Hide username if its validity cannot be determined"
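For reference, applied from the CLI the whole change is a one-liner in configuration mode (a sketch - the hostname prompt and the save step are illustrative):

```
asa# configure terminal
asa(config)# no logging hide username
asa(config)# write memory
```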

so I didn't notice that my OpenBSD vserver had broken IPv6 for quite some time...

Alexander Bochmann Sunday 19 of February, 2017
...until I had a look at the DNS server log, which showed errors contacting other servers via IPv6.

The hoster I'm using has a somewhat strange IPv6 setup where you get a /64 for your system, but the default gateway is just fe80::1 - when I originally set up the system, I put that into /etc/mygate without thinking much about it.

This was fine for quite some time, but it seems the default route vanished at some point. (In retrospect I don't quite understand why the setup ever worked at all, as the lo0 loopback interface has fe80::1 auto-assigned too...)

Then I remembered that fe80:: addresses need an interface scope identifier, since every IPv6-enabled interface has a link-local address, and the OS needs some way to decide which interface a given fe80:: address refers to.

Edited /etc/mygate accordingly, and things are back to normal (vio is OpenBSD's VirtIO network device driver, so my virtual ethernet device is vio0):
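A sketch of the relevant /etc/mygate line under that assumption (any IPv4 gateway line stays unchanged; the %vio0 scope suffix is the part that matters):

```
fe80::1%vio0
```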


Linux ATA bus errors with ASMedia ASM1062 PCIe card

Alexander Bochmann Saturday 22 of October, 2016
I recently added a cheap ASM1062 2-port SATA card to my Linux box at home, since its Asus C8HM70-I board has only two SATA ports, and I wanted to use an additional small SSD as boot device.

With my disks hooked up to the new card, I started to get SATA errors when there was moderate write load:

kernel log
ata5.00: exception Emask 0x10 SAct 0x7c000000 SErr 0x400000 action 0x6 frozen
ata5.00: irq_stat 0x08000000, interface fatal error
ata5: SError: { Handshk }
ata5.00: failed command: WRITE FPDMA QUEUED
ata5.00: cmd 61/00:d0:00:2b:6f/0a:00:ac:00:00/40 tag 26 ncq 1310720 out
         res 40/00:f4:00:53:6f/00:00:ac:00:00/40 Emask 0x10 (ATA bus error)
ata5.00: status: { DRDY }
ata5.00: failed command: WRITE FPDMA QUEUED
ata5.00: cmd 61/00:d8:00:35:6f/0a:00:ac:00:00/40 tag 27 ncq 1310720 out
         res 40/00:f4:00:53:6f/00:00:ac:00:00/40 Emask 0x10 (ATA bus error)
ata5: hard resetting link
ata5: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata5.00: configured for UDMA/133
ata5: EH complete

I'm not ready to blame the card itself yet, since I remembered that I recycled a pair of rather old SATA cables to connect the drives, and the card supports SATA 6G... The mainboard itself has just one SATA 6G connector, and there I used a different cable that clips into the port, but the clip mechanism doesn't work with the connectors on the ASMedia card.

For now, I turned the SATA link speed down to 3G by adding a libata.force parameter to the kernel command line:
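A reconstruction of what that parameter looks like, based on the libata.force syntax documented in the kernel's kernel-parameters.txt (the 3.0Gbps speed token and per-port prefixes are from that documentation, not from my original notes):

```
libata.force=5:3.0Gbps,6:3.0Gbps
```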


(5 and 6 correspond to ata5 and ata6 from the libata kernel messages.)

This seems to work as a stopgap measure - the bus errors haven't reappeared since.


Without libata.force:

ata5: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata6: SATA link up 6.0 Gbps (SStatus 133 SControl 300)

With libata.force:

ata5: SATA link up 3.0 Gbps (SStatus 123 SControl 320)
ata6: SATA link up 3.0 Gbps (SStatus 123 SControl 320)

syslog-ng and RcvbufErrors on Linux

Alexander Bochmann Tuesday 10 of May, 2016
We're running a syslog-ng installation to collect syslog data from quite a lot of systems (and then selectively feed them into our Splunk installation). Almost all of these send syslog via UDP.

Recently, when adding a couple more machines, I noticed that the syslog server is dropping UDP datagrams:

udp RcvbufErrors
# netstat -su | grep -A6 "^Udp:"
Udp:
    518026364 packets received
    36078 packets to unknown port received.
    23164168 packet receive errors
    1248583 packets sent
    RcvbufErrors: 23164167


This is mentioned in the syslog-ng OSE docs, but it seems no one here ever got to that section, including myself.

So, in that context I learned about the so-rcvbuf() parameter to the udp() source in syslog-ng, and the Linux kernel net.core.rmem_max sysctl...

Kernel configuration
# sysctl -w net.core.rmem_max=16777216

(add the same parameter to /etc/sysctl.conf)

source s_net {
        udp(port(514) so-rcvbuf(8388608));
};

(There's no reason why so-rcvbuf() couldn't be the same as rmem_max, and neither needs to be a multiple of 1024 - both just bad habits of mine...)

Don't increase net.core.rmem_default, as that would make the Linux kernel use a bigger buffer for every UDP socket being created on the system.

The RcvbufErrors counter hasn't been increasing since that change, but I'll add monitoring for that, so drops won't go unnoticed in the future.
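A minimal sketch of such a check, assuming Linux's /proc/net/snmp layout (the Udp: section appears twice, a header line with field names followed by a line of values; RcvbufErrors is the fifth value):

```shell
#!/bin/sh
# Print the kernel's UDP RcvbufErrors counter.
# /proc/net/snmp contains two "Udp:" lines: a header naming the fields
# and a data line; RcvbufErrors is the fifth numeric field ($6 in awk,
# counting the "Udp:" tag as $1).
awk '$1 == "Udp:" { n++; if (n == 2) print $6 }' /proc/net/snmp
```

Feeding that number into whatever monitoring system is at hand and alerting when it increases is enough to keep drops from going unnoticed.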

killing your network with Cisco ASA 9.x identity NAT and proxy arp

Alexander Bochmann Sunday 17 of April, 2016
I was about to prepare a longer blog post on one of the pitfalls when migrating the NAT ruleset of an older Cisco ASA to a 9.x release - but as it turns out, the problem is already documented pretty well by Cisco, if you know what to look for...

With "Twice NAT", as implemented in 9.x software versions, an ASA firewall in routed mode automatically does proxy ARP for all addresses covered by a NAT rule, to attract traffic for them. This is usually the intended effect - unless you're configuring Identity NAT rules (used to inhibit address translation for certain source/destination pairs) that cover address space locally connected to the firewall. This was not a problem with NAT exempt rules on older ASA software, but if such a rule is now used without the no-proxy-arp parameter, the ASA will act as a blackhole for traffic on the local network segment, sending proxy-ARP replies for addresses it doesn't own.
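To illustrate, a sketch of an identity NAT rule with the parameter set (object names and addresses are placeholders, not taken from any real ruleset):

```
object network LOCAL-NET
 subnet 192.0.2.0 255.255.255.0
object network VPN-NET
 subnet 198.51.100.0 255.255.255.0
nat (inside,outside) source static LOCAL-NET LOCAL-NET destination static VPN-NET VPN-NET no-proxy-arp
```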

In Proxy ARP Problems with Identity NAT (cache), Cisco illustrates the problem with this diagram:

image copied from vendor documentation, (c) Cisco

Yeah, don't do that. Always consider whether no-proxy-arp is required for a NAT rule before it's being deployed.

(Also see ASA FAQ: Why does the ASA reply to ARP requests for other IP addresses in the subnet? (cache).)

Cyanogenmod 12.1 device encryption fails after wiping filesystems with TWRP

Alexander Bochmann Wednesday 25 of November, 2015
I recently bought a 2nd hand Android mobile (Samsung) to install Cyanogenmod on. The process is quite straightforward from the documentation on the CM website. I installed TWRP using Heimdall and wiped the system partitions from the recovery before installing CM 12.1.

Once running Cyanogenmod, I wasn't able to activate device encryption though. Unsuccessfully tried several of the tips out there, like disabling Selinux before starting the encryption process. After retrying with an active adb logcat, I found this message in the log:

E/Cryptfs (  183): Orig filesystem overlaps crypto footer region.  Cannot encrypt in place.

...which in turn led me to this thread on the Cyanogenmod forums (cache). The hint to resize the data partition is correct, but it's not actually required to reformat the filesystem, as Android comes with resize2fs. So I booted into TWRP recovery and connected to the system via adb shell. Turns out that /data is mounted on /dev/block/mmcblk0p24:

# df
Filesystem            1K-blocks      Used Available Use% Mounted on
/dev/block/mmcblk0p24   5584700    931020   4653680  17% /data
/dev/block/mmcblk0p24   5584700    931020   4653680  17% /sdcard

After unmounting /data and /sdcard, I had a quick look at the partition with tune2fs:

# tune2fs -l /dev/block/mmcblk0p24
tune2fs 1.42.9 (28-Dec-2013)
Filesystem volume name:   
Last mounted on:          /data
Filesystem UUID:          17e3f4bc-acf2-631e-af53-921ea0c9e21a
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode filetype extent sparse_super large_file uninit_bg
Filesystem flags:         unsigned_directory_hash
Default mount options:    (none)
Filesystem state:         clean
Errors behavior:          Remount read-only
Filesystem OS type:       Linux
Inode count:              355520
Block count:              1421307
Reserved block count:     0
Free blocks:              1163420
Free inodes:              353516
First block:              0
Block size:               4096
Fragment size:            4096

So, 1421307 blocks of 4096 bytes. Since the forum thread was not quite clear on how much space is required to facilitate encryption, I decided to shrink the filesystem by 8 blocks (32k):

# e2fsck -fy /dev/block/mmcblk0p24 
# resize2fs /dev/block/mmcblk0p24 1421299

...rebooted into CM, and successfully activated system encryption without further problems.

Checkpoint vpn debug - Cannot signal vpnd: No such process

Alexander Bochmann Wednesday 05 of August, 2015
Jotting this down as I've found no useful reference to the above error message on the net:

Trying to enable IKE debugging on a Checkpoint FW1 using vpn debug ikeon results in the error message Cannot signal vpnd: No such process

This happens when the PID in /opt/CPsuite-Rxx/fw1/tmp/vpnd.pid has been overwritten. I've seen various people (including myself) run into this because they had typed vpnd debug instead of vpn debug at some point...

Solution: Overwrite vpnd.pid with the correct PID

pgrep vpnd > $FWDIR/tmp/vpnd.pid

FortiOS 5.2 upgrade problems on Fortigate 80C

Alexander Bochmann Sunday 05 of July, 2015
Recently I tried upgrading my Fortigate 80C firewall to a current FortiOS (5.2.3) following the - supposedly - supported upgrade path, from 5.0.10.

Unfortunately I ran into the ehci_hcd 5035: fatal error that's been mentioned on the Fortinet forums in various places (here, for example) - system doesn't boot. Good thing it's possible to easily fall back to the previous release by booting from the backup partition. When you're connected to the console port, that is.

Today I found out FortiOS 5.2.3 can be installed after wiping the internal flash from the bootloader, using a serial console. My Fortigate had originally been installed with some FortiOS 4 release - I assume the boot disk layout has changed somewhere between releases, and the new image just doesn't fit.

Before starting, you'll need:

  • a tftp server configured for an address in the network on interface 1 of your Fortigate, holding the new firmware image (mine wasn't, and I had to quickly shuffle some things around to recover from that)...
  • a USB stick with the current configuration to import after the upgrade has finished (or just put it on the tftp server, too)

First, select

[F]: Format boot device.

from the bootloader menu. As soon as that is finished, use

[G]: Get firmware image from TFTP server.

to fetch the new firmware image via tftp. The system will reboot with a default configuration. Log in with the admin account (no password) and restore your configuration from the USB stick:

config global
execute restore config usb <filename>


ATEN support - a positive surprise

Alexander Bochmann Friday 17 of October, 2014
In my previous post - quite some time ago - I was bitching about the scarcely documented RADIUS authorization function on one of our ATEN SN0116 serial console servers.

Some time later, I found out that RADIUS authentication doesn't quite work - only one user could log on when using RADIUS, and subsequent login attempts were denied. At the same time, there was no such problem when using local user accounts on the system.

Initially, I didn't bother opening a support request with ATEN for this issue, because I didn't expect anything to happen (no support contracts and all). Nevertheless, just before giving up on the RADIUS functionality, I registered one of our systems and described the problem.

After a bit of back and forth with the support representative on the ticket, I quickly got through both the "reporting this to engineering" and "engineering acknowledged the problem" stages, and received updated software only a couple of days later.

It's not yet up on the ATEN firmware download page, but I assume the upcoming version (the next after v3.1.303) will have the relevant bug fix.

At our company, we've been quite exhausted by our support experiences with the likes of Cisco and Juniper in the past months, so this was really a pleasant experience for a change. Thank you, ATEN.