Aunt Anna’s Graham Breakfast Buns! :-)

No, this doesn’t have anything to do with technology. Or wait, maybe it does, a little bit. A couple of years ago I added a recipe for my favorite breakfast buns to Allrecipes.com, and then submitted the recipe for “Kitchen Approval” so that it would be available in Allrecipes’ general index and search results. The recipe is still in “pending” status, which suggests there is a vast number of recipes people have submitted that are not searchable because Allrecipes hasn’t “approved” them. Why not make all submitted recipes available, but mark the “unverified” recipes as such? The default search could then skip the unapproved recipes, but with the tick of a checkbox all those unverified recipes could be searched as well.

In any case, since Google indexes this blog, if I post the direct link to the recipe here, at least it will be in Google’s search results soon! :-)

So, without further ado, presenting: Aunt Anna’s Graham Breakfast Buns!

Old-school link lists, the purpose of the web, and Google’s quest against “Unnatural links” (R.I.P. LinkResource.com)

For many years in the 2000s I maintained an online link list, LinkResource.com, which I originally started so that I would have easy access to many of my resources from any web-connected location. Over the years the significance of such a resource waned, but I kept it online since it still received a decent number of hits (the last update to the list was posted in 2009). Within the last year I’ve received a number of emails from people whose sites I had featured on the list, begging me to remove the links because Google penalizes their sites for “unnatural links”. Granted, LinkResource was an outright list of links, but wasn’t that a significant part of what the web was built for: to be able to easily access cross-referenced resources? Wikipedia articles, for instance, are rife with links to other Wikipedia articles and external resources.

Of course, this action by Google was brought on by abuse of cross-linking in the form of link-exchange networks. But in the end, innocuous links that site owners often have no control over (such as the links on my now-discontinued LinkResource.com) can create an SEO disaster, lowering a site’s search ranking and hence reducing or even destroying business. Fortunately, in the fall of 2012 Google announced the Disavow Tool for webmasters, which lets them exclude links Google deems harmful from their site’s indexing score.

LinkResource.com 2000-2013

Encrypted Vault in Ubuntu for Your Valuable Data

Recently I set up Bitnami Cloud Tools for AWS to facilitate AWS configuration and use from the command line. After creating an administrative IAM user (so as not to use the main AWS login), and creating and uploading/associating the necessary X.509 credentials for that IAM login, I realized that anyone who gained access to the local dev server would also gain full access to several AWS Virtual Private Cloud configurations. Not a terribly likely occurrence, but would I like to risk it? Say I have the cloud tools configured on Ubuntu on my laptop: someone could conceivably steal the laptop and, with a little technical expertise, gain access to the Ubuntu instance (running in a VM), and hence to the AWS VPCs.

At least in this case, keeping the IAM credentials and the X.509 keys on a USB drive would be impractical (and would probably increase the likelihood that the keys would get misplaced and end up in the wrong hands). On Windows it’s a simple task to set up an encrypted vault using one of the many utilities available for the purpose. But how to do that on Linux? After some digging I came across a Wiki entry, Ubuntu: Make a secure vault. It worked fine, but via cut-and-paste it was rather cumbersome for daily operations. So I set out to write a couple of scripts to make things easier.

First, you need to have the cryptsetup package installed. Then you can make use of the setup-crypt script below. These are quick utility scripts without a separate configuration file, so you may want to edit some of the variables at the top of the script, namely “CRYPT_HOME” (depending on where you want to place your encrypted vault file), “CRYPT_MOUNTPOINT” (depending on where you want to mount it), and “CRYPT_DISK_SIZE” (the capacity of the encrypted vault in megabytes).
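
In essence, setup-crypt does something along these lines (a minimal sketch; the vault file name vault.img is hypothetical, and the interactive prompting and rollback handling of the full script are simplified away):

#!/bin/bash
# setup-crypt -- minimal sketch: create a LUKS-encrypted loopback vault (run as root).
# The rollback/error handling of the full script is omitted for brevity.

CRYPT_HOME="/root/crypto"              # where the vault file lives
CRYPT_MOUNTPOINT="/mnt/crypto"         # where the opened vault gets mounted
CRYPT_DISK_SIZE=64                     # vault capacity in megabytes
CRYPT_FILE="${CRYPT_HOME}/vault.img"   # hypothetical vault file name

set -e

if [ -e "$CRYPT_FILE" ]; then
  echo "Vault file ${CRYPT_FILE} already exists; aborting." >&2
  exit 1
fi

mkdir -p "$CRYPT_HOME" "$CRYPT_MOUNTPOINT"

# Reserve the vault file and fill it with random data
dd if=/dev/urandom of="$CRYPT_FILE" bs=1M count="$CRYPT_DISK_SIZE"

# Attach the file to a free loopback device
LOOP_DEV=$(losetup -f)
losetup "$LOOP_DEV" "$CRYPT_FILE"

# Format the loop device as a LUKS container (prompts for the vault password),
# then open it, create a filesystem inside, and close everything again
cryptsetup luksFormat "$LOOP_DEV"
cryptsetup luksOpen "$LOOP_DEV" crypto-vault
mkfs.ext4 /dev/mapper/crypto-vault
cryptsetup luksClose crypto-vault
losetup -d "$LOOP_DEV"

echo "Encrypted vault created at ${CRYPT_FILE}."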

After you save the above script to a file, and make the file executable (chmod 500 filename), you’re good to go. If you don’t want the encrypted vault file located at /root/crypto/, or want a vault of a different size than the rather small default of 64MB (I’m just saving a handful of AWS keys, so I didn’t need a larger vault file), edit the variables on top of the script before running it. Once started, follow the prompts and the encrypted vault file is created for you. If an error occurs during the vault creation process, if the vault file already exists, or if you cancel the script, any changes made up to that point are rolled back.

To mount and access the vault, save the following two scripts for mounting and unmounting the vault respectively:
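
Minimal sketches of the two, reusing the variable values from setup-crypt (the vault file name is again the hypothetical vault.img):

#!/bin/bash
# mount-crypt -- minimal sketch: open and mount the encrypted vault (run as root)

CRYPT_MOUNTPOINT="/mnt/crypto"
CRYPT_FILE="/root/crypto/vault.img"   # hypothetical vault file name

set -e
LOOP_DEV=$(losetup -f)
losetup "$LOOP_DEV" "$CRYPT_FILE"
cryptsetup luksOpen "$LOOP_DEV" crypto-vault   # prompts for the vault password
mount /dev/mapper/crypto-vault "$CRYPT_MOUNTPOINT"
echo "Vault mounted at ${CRYPT_MOUNTPOINT}."

#!/bin/bash
# umount-crypt -- minimal sketch: unmount and lock the encrypted vault (run as root).
# Safe to run even when the vault is not mounted (used at logout/shutdown below).

CRYPT_MOUNTPOINT="/mnt/crypto"
CRYPT_FILE="/root/crypto/vault.img"   # hypothetical vault file name

mountpoint -q "$CRYPT_MOUNTPOINT" && umount "$CRYPT_MOUNTPOINT"
[ -e /dev/mapper/crypto-vault ] && cryptsetup luksClose crypto-vault

# Detach the loopback device associated with the vault file, if any
LOOP_DEV=$(losetup -j "$CRYPT_FILE" | cut -d: -f1)
[ -n "$LOOP_DEV" ] && losetup -d "$LOOP_DEV"
exit 0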

Similarly, make these scripts executable before running them. If you modified the encrypted vault location/name or the mount point location during the creation process, you’ll want to make corresponding changes to the variables atop these scripts.

You can place these utility scripts in /usr/local/bin or other location on your path (or symlink from a location on your path) to avoid having to type the full path every time.
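
For example, assuming the scripts were saved under /root/bin (a hypothetical location):

ln -s /root/bin/setup-crypt /usr/local/bin/setup-crypt
ln -s /root/bin/mount-crypt /usr/local/bin/mount-crypt
ln -s /root/bin/umount-crypt /usr/local/bin/umount-crypt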

With the encrypted vault created using setup-crypt, you can then mount the vault using mount-crypt and access its contents at /mnt/crypto, and finally unmount the vault with umount-crypt. Since the vault is protected by a single password, be sure to set an appropriately strong password to match the required security level.

To further improve security, you probably want to unmount the vault whenever you’re not logged in; the contents of a vault such as this are most likely intended for interactive use. You can always unmount and hence “lock” the vault with the umount-crypt command, but it is a good idea to run umount-crypt automatically at logout. Depending on your shell, create/edit .zlogout (zsh), .bash_logout (bash), or .logout (tcsh/csh) in the user’s home directory (likely /root, since opening/closing loopback handles can only be done by root), and place the following code in it:
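
A minimal version, assuming umount-crypt was placed in /usr/local/bin:

# Lock the encrypted vault when the (root) login session ends
if mountpoint -q /mnt/crypto; then
  /usr/local/bin/umount-crypt
fi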

I also close the vault at system shutdown/reboot by symlinking the following from /etc/rc6.d/S40umount-crypto:
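
A minimal sketch, assuming the script itself lives at /etc/init.d/umount-crypto and umount-crypt is in /usr/local/bin:

#!/bin/sh
# /etc/init.d/umount-crypto -- close the encrypted vault at shutdown/reboot
/usr/local/bin/umount-crypt
exit 0

…and the symlink itself:

ln -s /etc/init.d/umount-crypto /etc/rc6.d/S40umount-crypto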

And that’s all there is to it! With your files safely inside a locked, encrypted vault, only you and the NSA have access to them! ;)

P.S.
To utilize the vault with Bitnami Cloud Tools, I have created a folder for each AWS account I want to access under /mnt/crypto/, e.g. /mnt/crypto/aws_account_a, /mnt/crypto/aws_account_b, etc. Each folder contains similarly named files (as found in the bitnami-awstools-x.x-x/config folder), like so:

aws-config.txt
aws-credentials.txt
ec2.crt
ec2.key

To switch from one account to another, I (re-)symlink the contents of the desired account into bitnami-awstools-x.x-x/config/, for example:

ln -sf /mnt/crypto/aws_account_b/* /opt/bitnami-awstools-x.x-x/config/

This way, once the vault is locked, access to any and all of the AWS accounts via the cloud tools goes away. Switching between the accounts could, of course, be scripted easily as well.
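
For instance, a small hypothetical helper script (here called aws-switch; the account folder naming and the Bitnami path are the ones used above) could handle the re-symlinking:

#!/bin/bash
# aws-switch -- hypothetical helper: point Bitnami Cloud Tools at a given AWS account.
# Usage: aws-switch account_a   (the vault must be mounted first with mount-crypt)

ACCOUNT_DIR="/mnt/crypto/aws_${1}"
CONFIG_DIR="/opt/bitnami-awstools-x.x-x/config"   # adjust to your actual version/path

if [ -z "$1" ] || [ ! -d "$ACCOUNT_DIR" ]; then
  echo "Usage: aws-switch <account_a|account_b|...> (is the vault mounted?)" >&2
  exit 1
fi

ln -sf "$ACCOUNT_DIR"/* "$CONFIG_DIR"/
echo "Cloud tools now use the credentials from ${ACCOUNT_DIR}."

With that in place, switching is just “aws-switch account_b” while the vault is mounted.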

OpenVPN with FreeRADIUS: How To Use the CN from the User Cert as the Login Name (i.e. the reverse of “username-as-common-name”)

I recently set up a handful of OpenVPN servers to provide access to various LAN and AWS VPC resources. Initially I had just certificate validation configured, but I felt slightly uneasy about not having a password. Especially in environments where multiple people need access to a resource, in the event one of them should no longer have access – such as when leaving an organization – the only way to block such a user would be to add their cert to the CRL. While that should be done anyway when a user’s privileges need to be revoked, a password provides a more immediate and easy way to make such changes.

The next step was to install FreeRADIUS, which proved to be a very easy task. I’m initially running it with just the text-based back-end and will later add MySQL, perhaps with the daloRADIUS GUI, to make user administration even easier. On Ubuntu/Debian there is a package, “openvpn-auth-radius”, which makes it possible to add FreeRADIUS authentication to an OpenVPN server with one simple line:
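
On Debian/Ubuntu that line typically looks like the following (the plugin and config paths may vary with the package version; radiusplugin.cnf is edited to point at your FreeRADIUS server and shared secret):

plugin /usr/lib/openvpn/radiusplugin.so /etc/openvpn/radiusplugin.cnf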

(Of course, the client side also needs the auth-user-pass statement in their OpenVPN client configuration.)

But there is a problem: the user cert can be Bob’s while the login username/password is Alice’s, and the login would still be valid. Apparently I’m not the only one who has thought about this. While I didn’t want to hack the PAM auth plugin, the post had enough clues to let me write a simple bash script (below) that sets the username based on the common name from the validated user’s certificate:
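
A sketch along these lines (it uses radtest from the freeradius-utils package; edit the RADIUS server address and the shared secret to match your setup):

#!/bin/bash
# endpoint_server_radius_auth.sh -- sketch of an OpenVPN auth-user-pass-verify script
# (via-file mode): validates the password against FreeRADIUS using the CN of the
# already-verified client certificate as the login name. The username the client
# typed is ignored. Requires 'radtest' (freeradius-utils package).

RADIUS_SERVER="127.0.0.1"       # your FreeRADIUS server
SHARED_SECRET="changeme"        # shared secret from /etc/freeradius/clients.conf

# OpenVPN passes a temp file as $1: line 1 = username, line 2 = password.
# The 'common_name' environment variable holds the CN of the validated user cert.
CREDS_FILE="$1"
PASSWORD=$(sed -n '2p' "$CREDS_FILE")

if [ -z "$common_name" ] || [ -z "$PASSWORD" ]; then
  echo "radius auth: missing common_name or password" >&2
  exit 1
fi

# Authenticate common_name/password against RADIUS; accept only on Access-Accept.
if radtest "$common_name" "$PASSWORD" "$RADIUS_SERVER" 0 "$SHARED_SECRET" 2>/dev/null \
   | grep -q "Access-Accept"; then
  exit 0
fi

exit 1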

To use this script, simply save it to /etc/openvpn/endpoint_server_radius_auth.sh, make it executable, and edit it to add the shared secret for the RADIUS server from /etc/freeradius/clients.conf. Finally, add the following lines to your OpenVPN server configuration that already authenticates the users by their certificates:

tmp-dir /dev/shm
auth-user-pass-verify /etc/openvpn/endpoint_server_radius_auth.sh via-file

Now the login name for RADIUS authentication is taken from the CommonName (CN) of the user’s certificate; in fact, the username the user enters at the auth-user-pass prompt is ignored, and only the password is significant.

The bottom line of this script: it utilizes RADIUS to provide server-side password validation for the certificate’s CN. A user can always remove the password protection from their private key, so this approach functions as an extra layer of security while making it easier to quickly revoke a user’s access to a resource.

Note: For this to work, the CommonName set in user certificates obviously must be a valid RADIUS login name. A user can’t modify the CN in their certificate (unless they’re the NSA, since they apparently have access to RSA keys, too :( ), so they’re locked to that specific username.

Also note that I wrote this script on Ubuntu and did not necessarily observe portability, so you may need to modify it somewhat for other platforms. It is primarily intended as an example (although it does work), as finding something like this would have saved me a few hours of work.

Replacing a Firewall/Gateway and Purging the Upstream ARP Cache with arping in Ubuntu

Over the years I have had to replace various firewall devices at co-location racks, and have equally many times been annoyed by the time it has taken to clear the upstream (co-lo) router/gateway of stale ARP entries that point to the MAC of the retiring device. Since the external IP normally stays the same, the upstream router/gateway becomes confused, and it takes some time, say, half an hour, until the upstream device’s cache expiration is reached and the traffic starts to flow normally again.

Facing such a replacement once again, this time I had to figure it out, because the traffic of this particular installation could not be interrupted for 30 minutes (or however long it would take for the upstream cache to clear). I then came across Brian O’Neill’s 2012 article Changing of the Guard – Replacing a firewall and gratuitous ARP, which introduced a solution for situations where there is no administrative access to the upstream devices (so an immediate purge of the ARP cache cannot be triggered). Exactly what I was looking for!

In the article, Brian temporarily uses a Linux server with a spoofed MAC address of the new firewall appliance to trigger the ARP cache flush with the help of the arping command. In my case I was installing Shorewall on Ubuntu 12.04, so I could use arping from the firewall server itself. I went ahead and installed arping (apt-get install arping), but it turned out the default arping package on Ubuntu does not include the required “-U” switch (‘unsolicited’, i.e. gratuitous ARP). Fortunately an alternative package, “iputils-arping”, implements the unsolicited switch. With iputils-arping installed the command is still “arping”, so the command Brian offered works as-is:

arping -U -c 5 -I eth1 192.168.1.1

Where “-c” indicates how many times the information is broadcast, “-I” obviously defines the interface connected to the upstream router/gateway, and the IP is the external IP of your firewall/gateway device.

Introducing duplicity-nfs-backup, or How to Use duplicity-backup Safely with NFS/CIFS Shares

After completing the nfs_automount script a bit over a week ago, I soon realized that rdiff-backup, which I had planned to use with the now-nearly-guaranteed-to-be-online NFS shares, would not work. I then turned to my other favorite *NIX server backup solution, duplicity, with the duplicity-backup.sh wrapper script. It utilizes gzip-based archives, which work much better with NFS/CIFS shares. Besides avoiding the other odd problems rdiff-backup has with NFS, this resolves the more obvious issue of conflicting users/permissions between the client and the NFS share host, since duplicity doesn’t maintain a direct mirrored copy of the files being backed up.

The only problem was that duplicity creates incrementals, and while I generally like to keep backups around for several months, the incrementals are really never needed beyond a couple of weeks. Past that point, in my applications, the day-by-day backups are overkill and should be pruned. Duplicity provides an option to do so (“remove-all-inc-of-but-n-full”), but duplicity-backup.sh hadn’t implemented it, so I first contributed a patch to zertrin’s project. Then I proceeded to write a wrapper for the wrapper to add the extra pre-backup checks, and duplicity-nfs-backup was born.
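
For reference, the pruning this option performs corresponds roughly to running duplicity directly like so (keeping the incrementals of only the three most recent full backup chains; the target URL is just an example):

duplicity remove-all-inc-of-but-n-full 3 --force file:///mnt/backups/myserver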

So what is duplicity-nfs-backup? It is a wrapper script designed to ensure that an NFS/CIFS-mounted target directory is indeed accessible before commencing with backup. While duplicity-backup.sh can be used to back up to a variety of mediums (ftp, rsync, sftp, local file…), duplicity-nfs-backup is specifically intended to be used with NFS/CIFS shares as backup targets.

The script that was the impetus for writing duplicity-nfs-backup, nfs_automount, attempts to keep the NFS shares online at all times, but the client system can’t always help with such situations. What if the target system becomes unreachable due to a network problem? Or what if a disk or a filesystem mount fails on the target while the share is still available? In any of these cases duplicity-backup/duplicity would back up into an empty mountpoint. duplicity-nfs-backup adds the necessary checks to ensure that this won’t happen, and it also issues log/syslog warnings when a backup fails due to a share that has gone M.I.A.

I mentioned earlier that duplicity-nfs-backup is “a wrapper for the wrapper.” Paraphrasing zertrin, it is important to note that duplicity-nfs-backup IS NEITHER duplicity, NOR is it duplicity-backup! It is only a wrapper script for duplicity-backup, also written in bash.

This means that you will need to install and configure duplicity and duplicity-backup.sh before you can utilize duplicity-nfs-backup. I also recommend making use of nfs_automount, as it significantly improves the chances that the NFS target share will be online when duplicity-nfs-backup attempts to access it.

This script is intended to be run from crontab. duplicity-nfs-backup takes no arguments; simply set the configuration parameters in duplicity-nfs-backup.conf and you’re done!
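
For example, a nightly run from root’s crontab might look like this (the install path is just an example):

# m h dom mon dow  command
30 2 * * *  /usr/local/bin/duplicity-nfs-backup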

Like nfs_automount, duplicity-nfs-backup is also distributed under the MIT license.

Clone or download duplicity-nfs-backup from my GitHub repository, and let me know if you come across any problems (or also if it works fantastically and saves the day! :)). Pull requests are always welcome.

NFS Automount, The Fourth Iteration (the complete rewrite)

** Note: This post was significantly altered on 18 July 2013 from the original, posted a few days earlier.

A few days ago I released the fourth iteration of the NFS Automount script, with some minor changes to the previous version from December 2011. The earlier versions were released in May 2011 (the first CentOS Linux version) and July 2010 (originally written for FreeBSD).

Upon releasing the fourth version I realized the script was becoming brittle, the logic was, well, somewhat illogical, and minor refactoring would not help. Hence this complete rewrite of the script, now called “nfs_automount”, was born. It is conceptually based on the older versions, and I also borrowed some ideas from the AutoNFS script on Ubuntu’s Community Wiki.

Like the earlier version, the goal of this script is to provide static (i.e. /etc/fstab-like) NFS mounts, while at the same time supporting cross-mounts between servers.

The other non-fstab alternative is to lazy-mount NFS shares with autofs (where available), but with it the NFS shares are not continually maintained. When a remote share is accessed, it takes a few moments to become accessible while autofs mounts it on demand. And although autofs times out a mounted share after some period of inactivity, it does not unmount the share before that timeout has lapsed if the remote server becomes inaccessible. On-demand mounting may save some bandwidth, but it is not suitable for all applications. Furthermore, when a system has one or more actively mounted shares from a server that goes offline, unexpected behavior is often observed on the client until the now-defunct NFS shares are unmounted or the remote server becomes available once again.

nfs_automount offers a solution:

  • The NFS shares are not statically defined in /etc/fstab so that the system startup is not delayed even when the remote server is not available. As soon as the shares become available they’re automatically mounted. If multiple servers cross-mount NFS shares from each other, and the servers are turned on at the same time, nfs_automount ensures that all mounts are established as soon as the shares become available.
  • The shares are monitored at a frequency you define, for example every 60 seconds. If a share has become dismounted or stale, or its exporting server has become inaccessible, nfs_automount takes action to correct the situation: it attempts to remount dismounted and stale shares (stale shares are first immediately unmounted), and it unmounts shares whose remote NFS service has disappeared to prevent impact on the client system’s stability. Once a remote NFS service comes back online, or the definition of a previously stale share is reinstated, any shares that were unmounted as a result of those conditions are remounted.
  • The script is intended to run as a daemon (an upstart job script is provided for Ubuntu), and it reads its configuration from /etc/nfs-automount.conf, where you can conveniently define the shares to be mounted and monitored along with some other options (see the illustrative sketch after this list). You can also set the ‘RUNTYPE’ option to ‘cron’ and run the script from crontab if you so choose.
  • You can define the shares to be mounted either as Read/Write, or Read Only. Of course, a share will be Read Only regardless of this setting if it has been exported as Read Only on the remote server.
  • An option to define a remote check file is provided. If set in the configuration for a share, the file’s unreachability can alert you to a problem on the exporting server, such as a failed filesystem mount, even when the NFS share is otherwise working correctly. You can easily expand this feature to add additional functionality.
  • Provides clear logging: alerts by default, and more informative detail if you set the ‘DEBUGLOG’ setting to ‘true’.
  • Written in bash with modular and clear syntax.
  • Tested on Ubuntu 12.x (should also work on Debian) and CentOS 6.x (should also work on RedHat). The service installation instructions (available on GitHub) have been written for Ubuntu, so if you’re installing the script for CentOS/RedHat, you will need to alter the installation steps somewhat. FreeBSD is no longer explicitly supported, but I believe it should work with minor modifications. I have not tested with Solaris or other *NIX environments. If you try, please post comments here!
  • Can be easily run as a service (upstart script is provided), or from crontab; the script works with crontab with just a single configuration switch change.
  • Distributed under the MIT license.
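
To give an idea of the configuration, here is a purely hypothetical illustration of /etc/nfs-automount.conf. Only ‘RUNTYPE’ and ‘DEBUGLOG’ are mentioned above; the other directive names and the share-definition syntax are invented for illustration, and the actual format is documented in the GitHub repository:

# /etc/nfs-automount.conf -- hypothetical illustration only

RUNTYPE='service'      # 'service' (daemon via upstart) or 'cron'
DEBUGLOG='false'       # 'true' for more informative detail in the logs

# Hypothetical: check interval in seconds and the monitored shares
# (remote server : export : local mountpoint : rw/ro : optional remote check file)
INTERVAL=60
MOUNTS=(
  'nfs-server-1:/exports/data:/mnt/data:rw:/exports/data/.online'
  'nfs-server-2:/exports/media:/mnt/media:ro'
)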

Rather than posting the code (now 400+ lines) here, I have created a GitHub repository from which it is easy to download or clone.

Enjoy! :)