HOWTO: get Docker Containers under CentOS 5 with Xen

CentOS 5 is getting long in the tooth, but then again, many of my servers are antiques that would find native CentOS 6 to be problematic.

A recent adventure in disaster recovery led me to upgrade several of my Xen DomUs from CentOS 5 to CentOS 6, but I was distressed to discover that the minimum RAM you can get by with for CentOS 6 is nearly 400MB. I wanted to host several CentOS 6 VMs, but the thought of getting dinged to the tune of half a gigabyte of RAM plus several gigs of disk image per VM didn’t sit well for lightweight systems.

The “in” thing for this kind of stuff is containers, which neatly fit into the space between a full VM and something less capable, such as a chroot jail. The question was: could I get CentOS 6 containers to work on a CentOS 5 Dom0?

As a matter of fact, yes, and it was considerably less painful than expected!

I cheated and did the real dirty work using my desktop machine, which is running Fedora 20, hence is better supported for all the bleeding-edge tools. Actually, Ubuntu would probably be even better, but I’m at home with what I’ve got and besides, the idea is to make it as little work as possible given my particular working environment.

Step 1: Vagrant.

Vagrant is one of those products that everyone (including themselves) says is wonderful, but it was hard to tell what it was good for. As it turns out, what it’s good for is disposable VMs.

Specifically, Vagrant allows the creation of VM “boxes” and the management of repositories of boxes. A “box” is a VM image plus the meta-data needed for Vagrant to deploy and run the VM.

So I yum-installed Vagrant on my Fedora x86_64 system.

My selected victim was a basic CentOS 6 box built for the VirtualBox VM environment.

vagrant box add centos65-x86_64-20131205 https://github.com/2creatives/vagrant-centos/releases/download/v6.5.1/centos65-x86_64-20131205.box
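
Bringing the box up and logging into it is then just the standard Vagrant routine; roughly this (the working directory is whatever you like):

$ mkdir centos6-docker && cd centos6-docker
$ vagrant init centos65-x86_64-20131205
$ vagrant up
$ vagrant ssh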

Step 2. Docker

It would have been more convenient to get a ready-made CentOS 6 Docker box, but most Docker-ready boxes in the public repo are for Ubuntu. So I did a “vagrant up” to cause the box image to download and launch, connected to the CentOS 6 guest, and Docker-ized it using this handy resource: http://docs.docker.io/installation/rhel/

An alternative set of instructions:

http://cloudcounselor.com/2013/12/05/docker-0-7-redhat-centos-6-5/

The process is rather simple as long as you’re using the latest CentOS 6.5 release. Older kernels don’t have the necessary extensions, requiring a kernel upgrade first.
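
For the record, the EPEL-based install inside the guest boiled down to something like the following (docker-io was the EPEL package name at the time; if that has changed, follow the links above instead):

$ sudo yum install docker-io
$ sudo service docker start
$ sudo chkconfig docker on
$ sudo docker run -i -t centos /bin/echo "hello from a container"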

Step 3. Porting to Xen

Once docker was working, the challenge of getting the VM from VirtualBox to Xen presented itself. I was expecting a nightmare of fiddling with the disk image and generating a new initrd, but there was a pleasant surprise. All I had to do was convert the VM image from the “vmdk” format to a raw disk image, transfer the raw image to the Xen box, hack up a xen config and it booted right up!

The details:

On the Fedora desktop:

$ qemu-img convert -f vmdk virtualbox-centos6-containers.vmdk -O raw centos6-containers.img
$ rsync -avz --progress centos6-containers.img root@vmserver:/srv/xen/images/centos6-container/

File and directory names vary depending on your needs and preferences.

Now it’s time to connect to “vmserver”, which is my CentOS5 physical host.

I stole an existing Xen DomU pygrub-booting definition from another VM and changed the network and virtual disk definitions to provide a suitable environment. The disk definition itself looks like this:

disk = [ "tap:aio:/srv/xen/images/centos6-container/centos6-containers.img,xvda,w"]

xvda, incidentally, is a standard CentOS VM disk image, with a swap partition and an LVM partition inside.
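
The rest of the config was nothing special – something along these lines, with the name, memory size, and bridge adjusted to suit my environment:

name = "centos6-container"
memory = 1024
vcpus = 1
bootloader = "/usr/bin/pygrub"
vif = [ "bridge=xenbr0" ]
disk = [ "tap:aio:/srv/xen/images/centos6-container/centos6-containers.img,xvda,w" ]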

I launched the VM and behold! A CentOS 6 Docker-container DomU on a CentOS 5 Dom0 base.

Everything should be this easy.

The curse of the mad Puppet

I have been working with various things designed to allow me to control the mousetech.com domain assets in a more centralized way. One of them was trying out Puppet to provision machines. Puppet is a fairly nice tool, but there are some unexpected pitfalls.

There are several ways to get Puppet on a CentOS 5 server. If you’re a glutton for punishment, you can always pull down the source and build from scratch, but I don’t recommend that when just getting started. You can also pull it via YUM from the EPEL repository. Or, you can import the Puppet Labs repo and pull from there.

I already have EPEL in my stock set of YUM repositories, so that’s what I went for first. In the beginning all was fun and games. Then I got more ambitious and started defining modules. This didn’t work. Worse, the sample documentation used commands that didn’t work. It was getting very frustrating.

It became obvious fairly early that some significant changes had been made to Puppet and that what I had wasn’t the Latest and Greatest. That would have been OK, except that attempts to read the online documentation for the older stuff kept leaking back into docs for the newer stuff (a not uncommon problem, best handled I think by archiving the old docs as self-contained PDFs). On top of that, the version of Puppet that I was running was sufficiently antique that much of the documentation had fallen off the website (see previous).

To add to the confusion, I wasn’t really sure WHICH version of Puppet I was running, since their enterprise product doesn’t keep quite the same set of version numbers as their community version, and I suspected that the version (2.6.18) in the EPEL RPM wasn’t indicative, either. I finally came to the conclusion that 2.6.18 – the version the EPEL RPM pulled in – actually correlates to the community 2.5 version, which is something like 2.3 in Enterprise versioning.

At this point, I went to the source: Puppet Labs and found out about their repo. Unfortunately a network-based RPM install failed for obscure reasons (I’m not sure whether I have lingering LAN issues or it’s them). Fortunately, I was able to wget and install locally. After which I was able to install a Version 3 Puppet, the documentation now matches the commands and module processing works the way they said it did.
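
For anyone following along, the local install went more or less like this (the release RPM name and URL drift over time, so check yum.puppetlabs.com for the current one):

$ wget http://yum.puppetlabs.com/puppetlabs-release-el-5.noarch.rpm
$ rpm -ivh puppetlabs-release-el-5.noarch.rpm
$ yum install puppet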

One last fly in the ointment, though. It seems that nodes and classes share the same namespace. I was using the same name for my guinea-pig machine’s node and one of the classes it was trying out, and while both the node and the class parsed, actual execution was only done against the node – the class was silently ignored. I fixed this by changing the node name to its fully-qualified domain name.

 

[SOLVED] mail loops back to me (MX problem?) for virtual machine

Sometimes they just gang up on you.

I was migrating my sendmail server from a NAT address to a bridge address when it all started.

Xen has this really nasty habit of zapping your hardware MAC address if you don’t get the NAT routing configured just right. There’s obviously some way to get it to revert, because occasionally, for no obvious reason, the real MAC address will come back – but don’t try searching the web for an answer; all you’ll get is fruitless inquiries and flame responses (“you shouldn’t be changing your MAC address, idiot!”). Please. There are very good reasons why it’s useful to be able to set a custom MAC address. One place I worked coded their hardware asset IDs into the MAC to assist their DHCP server, for instance.

On the mousetech domain, I’d be happier if it didn’t happen. As it is, the MAC addresses of the primary and secondary NICs got swapped and I didn’t find out until I’d gotten most of the way through fixing things. So the former eth0 became eth1 and vice versa.

Shortly thereafter, outbound mail started bouncing with the infamous “mail loops back to me” message. Since I’d just done a major relocation on the mail VM, I wasted a LOT of time messing around with sendmail options to no avail. Normally this message can be cured by putting a valid MX record in DNS and/or adding all the mailserver alias names to the sendmail local-host-names file (the Cw/class-w list). Not this time.

I was fairly sure the problem had something to do with the fact that the physical host had been set up to forward all port 25 (SMTP) requests to the mail VM and that somehow the wrong IP address was getting mixed in, but I couldn’t see the actual routing, since it was all internal and specific to port 25 to boot.

Turns out I’d been sloppy when I fixed up the iptables forwarding. The correct version (after the NIC mixup) looks like this:

-A PREROUTING -i eth0 -p tcp --dport 25 -j DNAT --to-destination 10.0.1.8:25

Where I went wrong was in being lazy and omitting the NIC ID (eth0) when I repaired the damage that Xen did. As a result, BOTH NICs were being re-routed – the actual internet-facing NIC (which should be routed) and the internal DMZ bridge-facing NIC (which should not). So traffic on port 25 for eth1 was being routed back on itself and sendmail complained, as shown below.
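
To make the mistake concrete, here is the lazy rule I’d left behind – with no interface match, it grabs port 25 traffic on BOTH NICs:

-A PREROUTING -p tcp --dport 25 -j DNAT --to-destination 10.0.1.8:25

And the corrected rule, limited to the internet-facing NIC:

-A PREROUTING -i eth0 -p tcp --dport 25 -j DNAT --to-destination 10.0.1.8:25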

The Underappreciated Raspberry

The Raspberry Pi B version is one of the most popular hacker toys of the day and with good reason. Although it’s not the first sub-miniature single-board computer, it’s the first one whose price, performance, features and power make it an acceptable substitute for a “real” desktop computer.

But there’s another Raspberry Pi as well. Ironically, the “A” model came out after the “B” model. Unlike the “B”, it lacks a few things. Only one USB port and no on-board ethernet, for example.

But before you dismiss it as not worth the $10 lower price, here are some things to consider:

1. The “A” model pulls less power, since it doesn’t have to support those missing components. This is especially important if you want to run the unit off batteries or solar cells.

2. The ethernet controller on the B is actually just a third USB port with an on-board USB-ethernet adapter. If I need to add ethernet to an “A” board, I have a USB ethernet dongle sitting on the shelf right next to me. I originally bought it to network-enable an old laptop.

3. Some people prefer NOT to have critical machines on the Internet. Why pay for hardware you don’t want to use? Plus these days, a lot of people skip the wiring and use WiFi adapters. There are a number of WiFi dongles that will plug into the USB port. Why pay extra for a jack you don’t need? Or for power to support it?

4. Enough about networking, though; consider USB. For some people, 2 USB ports are more than they need, so the single port on the “A” system may be sufficient or even surplus. Conversely, for others, 2 ports are not enough, and/or they need powered ports with more power than the Raspberry Pi itself supplies. USB expansion hubs, both powered and un-powered, have been around a long time and are easy to come by.

So we have 2 choices. A good general-purpose platform and a cheaper basic platform. Both expandable, both economical in their own ways.

DBUnit and CSV reference data

The CSV capabilities of dbUnit are under-documented. Here are the results of some pain, suffering and debug tracing:

The CSV files are expected to be one per table. The table name is part of the filename, thus: “TABLE1.csv”. The format is the usual, with the first row containing the column names and subsequent rows containing data. It is possible to customize the delimiters and separators, but the defaults work with bog-standard CSV. One possible reason to override is to allow use with pipe-separated format files.

Here’s some code to snapshot an SQL query out to CSV for use as a later test reference (or whatever).

Connection con = DataSourceUtils.getConnection(dataSource);
IDatabaseConnection dbUnitCon = new DatabaseConnection(con, "MYDB");

ITable actualTable = dbUnitCon.createQueryTable(
        "SNAPTABLE",
        "SELECT * FROM QTABLE"
        + " WHERE INK1='GLORP'"
        + " AND INK_ID in('ABEND001', 'AAA', '07CSI')");

// Take reference snapshot:
IDataSet ds1 = new DefaultDataSet(actualTable);
CsvDataSetWriter.write(ds1, new File("/home/timh/csvdir"));

dbUnit will create csvdir, if needed, and output two files: the SNAPTABLE.csv file and a file named “table-ordering.txt”.


To load SNAPTABLE for use in validating the results of a test:

// Load expected data from the CSV dataset
CsvDataFileLoader ldr = new CsvDataFileLoader();
// NOTE: terminal "/" on the URL is MANDATORY!!!
IDataSet expectedDataSet = ldr.load("/testdata/snapshots/");
ITable expectedTable = expectedDataSet.getTable("SNAPTABLE");

// Assert that the actual database table matches the expected table
Assertion.assertEquals(expectedTable, actualTable);

Note that while you can specify case-insensitivity on table names (it’s the default), the case of the SNAPTABLE.csv file and the SNAPTABLE entry in table-ordering.txt must match – at least on case-sensitive OS’s. And it’s good practice regardless. Also note that table-ordering.txt can contain multiple table names, one tablename per line.
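
For the single-table snapshot above, table-ordering.txt ends up containing just the one line:

SNAPTABLE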

Finally, note that the error exception for the comparison assert counts line numbers off by 2. It doesn’t take the column-name row into account, and it starts counting from 0, instead of the more usual case (for databases) of counting starting at 1.

Important! The CSV reader is not very flexible when it comes to reading NULL values. Null fields MUST be given a value of “null”, lower case, WITHOUT surrounding quotes. Like so:

"Fred Smith","127.0",null,null,"Jabber"

Gourmet Recipe Manager problems under recent Fedora releases

The Gourmet Recipe Manager program is a very useful way to store and find recipes, but it has been essentially useless since about Fedora 11 (give or take).

The problem is that this app uses an SQLite database to keep its recipes and accesses the database using SQLAlchemy. SQLAlchemy version 0.7 is seriously broken relative to Gourmet.

The cure, once you know it, is relatively easy – at least as long as you don’t have any other apps depending on SQLAlchemy (and I didn’t). First, check to see if you do:

rpm -q --whatrequires python-sqlalchemy

If gourmet is your only dependent (or you don’t care/feel brave), remove the 0.7 sqlalchemy package by brute force:

rpm --erase --nodeps python-sqlalchemy

Download one of the 0.6 versions of sqlalchemy. If you have a 64-bit system, be sure to get the 64-bit RPM, which includes the 32-bit version.

Install the downloaded sqlalchemy RPM

rpm -ivh python-sqlalchemy-0.6.xxxxxx.rpm

Fire up gourmet. One of the easiest ways to see if it works is just to type a search. If it returns search results, you’re OK!

A Utility Program for Cataloging Ebooks

My books are among my greatest assets. So much so that finding room to store them all has long been a problem. So the emergence of ebook readers has been a big help to me.

However, to be of true value, certain aspects of the “dead tree” format must carry over and one of the most important ones is that when I buy a book, I own the book. I don’t “buy a license” for a book. Especially when you consider that ebooks typically sell for about the same price as their physical counterparts.

In order for a book to be truly “buyable”, however, certain principles must apply. On my side, that means that I treat it just like I would a real book and don’t go giving away freebie copies to my friends. Ideally, I could sell “used” ebooks or give them away, but that’s an idea whose time has yet to come.

On the seller’s side, that means a minimum of intrusiveness. I have to be able to read the book without being in direct contact with its “Big Brother” server out on the Internet. I will not tolerate sellers who yank back or alter books without my express permission. No “1984”s down on the Animal Farm. And my library shouldn’t “burn down to the ground” if the vendor goes under a la Borders Books.

The Barnes & Noble system generally meets those criteria. There’s DRM, but it’s (presently) “locks for honest people”, which I can live with. If for any reason B&N shuts down their servers and exits the business, I have local backups and the means to keep reading them. My means of obtaining local backups has been tampered with on the Nook Tablet, which “hides” the purchased books from the USB storage device. That’s a feature I hope they’ll reconsider, since I’ll be keeping it in mind as I make future purchases, but for now I can deal with it, if not appreciate it.

However, once I’ve made my local backups, I needed a way to browse the local library. The B&N ebook filenames are based on their ISBNs, which makes them unique, but hard to locate. So I wrote this little app to scan epub-format books and list their names and authors in a format that’s convenient for spreadsheets and databases to take in.

It also determines what encryption algorithm provides DRM. If I scan a library and see that someone’s using DRM that I can’t decode, that’s a red flag and WILL affect both my present-day opinion of the publisher (as in what I recommend to others) AND future purchases.
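
Incidentally, if you just want to eyeball a single book without the app, an epub is only a zip archive, so something along these lines shows much the same information (the OPF path varies by publisher – META-INF/container.xml tells you where it lives):

$ unzip -p book.epub META-INF/container.xml
$ unzip -p book.epub OEBPS/content.opf | grep -E 'dc:(title|creator)'
$ unzip -l book.epub | grep encryption.xml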

My sincere thanks to publishers such as O’Reilly and Associates, Baen Books, and others who have been brave enough to forgo DRM altogether. You have placed your trust in my honesty and I will uphold it. Late update: TOR has joined the no-DRM club according to an announcement I just heard today. I’ve helped them kill a few trees over the years!

Catalog application downloads

My ePub catalog utility is a Java application located at http://www.mousetech.com/ebooks-catalog-1.0.jar. To use it:


java -jar ebooks-catalog-1.0.jar f1.epub [f2.epub f3.epub ...]
The source is available at http://www.mousetech.com/ebooks-catalog.zip.

Both source and executable may be freely redistributed as long as you don't charge for it or take away credit for my efforts.

JSF2 Annotations Ignored

I couldn’t figure out why, when I tried to retro-fit an old JSF project for JSF2 annotations, the annotations weren’t being processed and the managed beans weren’t being instantiated.

Duh. The JSF2 annotation processor defers to faces-config.xml. My faces-config still said JSF 1.2 in the faces-config element and related xmlns attributes.
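
For reference, a minimal JSF 2.0 faces-config declaration looks something like this (namespaces per the Java EE 6 schema):

<?xml version="1.0" encoding="UTF-8"?>
<faces-config xmlns="http://java.sun.com/xml/ns/javaee"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
        http://java.sun.com/xml/ns/javaee/web-facesconfig_2_0.xsd"
    version="2.0">
</faces-config>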

On a related note, annotations are only processed by default in the WAR’s WEB-INF/classes directory. If you include JARs in WEB-INF/lib, each JAR containing managed beans needs its own META-INF/faces-config.xml. It doesn’t have to have anything in it, but if it isn’t there, annotations won’t be processed.

The Horrors of OpenOffice Macro Programming

I love OpenOffice as a user. I have absolutely no envy for (ahem) That Other Product. In truth, just about every new MS-Office feature added since 1997 has been completely meaningless to me.

I have been manipulating OpenOffice/LibreOffice/StarOffice documents in various creative ways for some years now, but mostly by running XML processes against the ODF files. While XSLT is a royal pain, it beats the alternatives, and the ODT format is straightforward enough.

Recently, however, I’ve had a need to do some heavy reformatting of spreadsheets. They get generated as CSV files, imported, formatted, and various formulas plugged into them. I got tired of doing this by hand, so I decided it was time to automate the process.

¡Ay, Caramba! What a nightmare!

On the one hand, there are all sorts of different ways to script OpenOffice. Java, Python, beanshell, and Basic. Two different kinds of Basic.

If you use the macro recorder, it captures scripts using one programming interface. However, when you write scripts by hand, you would normally use a completely different, less awkward interface. Even if you wanted to use the capture API, figuring out what the various functions and features are is a rough proposition, since almost everything boils down to setting up an array of parameters and passing it to an execute function.

The simpler, hand-coded BASIC API, however, isn’t much better. Unlike the capture API, which maps onto the document model objects directly, the BASIC API attempts to provide convenience methods and properties. Unfortunately, some of them are pretty darned inconvenient, and as awful as the access documentation is, at least the low-level stuff is documented. Lots of luck with the high-level stuff.

I’ve heard the excuse made that the reason MS has so much better Office programming documentation is that they have the force of a Fortune 50 company behind them, but that rings hollow. OpenOffice after all was owned by Sun, then Oracle, and the community of OOffice users is pretty large. There really isn’t an excuse. About the best I’ve seen for how to program OpenOffice is a book that’s out of print.

So, while I myself don’t have the resources to remedy this situation, I’m putting together something that will at least work as a “cheat sheet” for OOCalc macro programming. Here it is: OpenOffice Calc Macro CheatSheet

The Dark Side of the Nook

I knew that people were unhappy with the Nook Tablet because it reserved a lot of memory for itself, but I hadn’t realized how far the rot had truly run.

One of the reasons why this unit appealed to me was that I expected it to continue the open-ness of the Nook Color, its predecessor. Sadly, this is not so. The only way to root a Nook Tablet is to wipe it back to factory settings. While on the whole, I like the tablet’s native OS, the B&N app store is pretty thin as far as some of my favorite Android apps go, and I’m displeased that I have been blocked from doing anything about that.

More serious, however, is what they’ve done to the books themselves. Side-loaded books are second-class citizens. B&N purchases are completely hidden from side-loading access. And therefore, the only backup mechanism is to re-retrieve them from the B&N servers.

When I buy a book, I expect that I’ve bought the book. An attractive feature of the Nook was that if B&N ever bailed out of the business or a superior reader came along, I could decrypt and read my books anytime I wanted to on other platforms and not have them simply evaporate the way some of the music services did. With the Nook Tablet, this isn’t true anymore.

If I wanted a platform where other people controlled everything I did, I’d get an iPad.
