Building the ACM Cluster, Part 11g: OpenAFS RPM Build

OpenAFS is the open source version of AFS, a distributed file system developed at Carnegie Mellon University.  AFS has a global, DNS-based namespace.  It also has a ton of nice features around letting users create and control their own groups, along with much more granular permissions.  All in all, it seems to be a good way to get data into a cluster and to let users store and manage documents reliably.

Building OpenAFS

I’ve saved building OpenAFS for last because it is somewhat more complicated than the other RPM builds we’ve done so far, primarily due to some messiness with kernel versions.  Specifically, the kernel interfaces that OpenAFS 1.6.1 (the current Linux release) depends on changed between the 2.6 and 3.x branches.  OpenAFS has sources patched for this, but hasn’t released them yet.

Getting Source

So, to get properly patched source, we’re going to have to pull it from git.

$ git clone git://git.openafs.org/openafs.git

We then want to …
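(The excerpt trails off before naming a branch; as a sketch, something like the following works — the branch name below is my guess, so check what’s actually there:)

$ cd openafs
$ git branch -r                        # see which remote branches exist
$ git checkout openafs-stable-1_6_x    # hypothetical branch name; pick whichever carries the 3.x fixes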

Continue reading

Building the ACM Cluster, part 11f: ZFS on Linux

ZFS is one of those pieces of software that is almost frighteningly good at what it does.  It has a whole slew of features that make it, for many uses, the perfect filesystem.  These include deduplication, compression, data integrity guarantees (ZFS can detect and repair silent data corruption), a copy-on-write architecture, and a built-in concept of RAID.  The only problem is that the source is under a license incompatible with the Linux kernel, so it will never be merged into the mainline kernel.  There is, however, the ZFS on Linux project, which makes it easy to bring ZFS to several Linux distributions.

Building ZFS

Download the SPL and ZFS packages from the ZFS on Linux homepage.

Building SPL

The main ZFS package requires that parts of SPL (the Solaris Porting Layer) be installed before ZFS can be installed.  So let’s start by untarring SPL.  At the time of writing, the most recent version is 0.6.0-rc12:

$ tar xf spl-0.6.0-rc12.tar.gz

Now, let’s build the RPMs:

$ cd spl-0.6.0-rc12
$ ./configure
$ …
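(The excerpt cuts off here.  For ZFS on Linux releases of this vintage, my recollection is that the remainder was roughly the following — treat it as a sketch and check the INSTALL notes that ship in the tarball:)

$ make rpm                    # builds the spl and spl-modules RPMs in the build directory
# yum localinstall spl*.rpm   # install them as root so the ZFS build can find SPL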
Continue reading

Building the ACM Cluster, Part 11e: Building Ceph

Ceph is a distributed storage engine.  It can be used in a number of different ways - for example, as a block device or an object store.  The current version is codenamed argonaut, hence the header image.

In the ACM cluster, we’re using it as the storage engine for VMs.  This makes a lot of sense in our case: the VMs are going to want to move from machine to machine, and shared storage saves them from having to copy the disk image around.  Ceph also has the advantage of being kernel-mainline, meaning that all the required bits are already built into the kernel and building it does not require patching the kernel at all.

Building RPMs

Building the Ceph RPMs is very similar to the other RPM builds we’ve already done.  Ceph is extraordinarily kind and provides its own (working!) spec file.  So first off, download the Ceph tar.bz2 from here.  Assuming you’re using the same ceph 0.48-argonaut version I am, you’ll then want to run

$ tar xvf ceph-0.48argonaut.tar.bz2 …
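(A sketch of where things go from there, assuming the rpmbuild tree from part 11a and that ceph.spec sits at the top of the extracted tree:)

$ cp ceph-0.48argonaut.tar.bz2 ~/rpmbuild/SOURCES/
$ cp ceph-0.48argonaut/ceph.spec ~/rpmbuild/SPECS/
$ rpmbuild -ba ~/rpmbuild/SPECS/ceph.spec    # produces binary RPMs under ~/rpmbuild/RPMS/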
Continue reading

Building the ACM Cluster, Part 11d: Building Myricom fiber RPMs

Welcome back to my ongoing series on building the JHUACM VM cluster.  In this part, I’m going to focus on building an RPM of the driver for the Myricom fiber cards that were given to us with the cluster.  Unfortunately, the drivers are closed source.  However, through my connections in physics, I was able to get source code to build from.  In short, if you’re here looking for drivers, you’re out of luck - go talk to Myricom.

The specific hardware we have is driven by the mx2g driver, so that is what I’ll be working on.

Down the Rabbit Hole

The first thing I did when I got the tarball of the driver source was try to build the RPM.  The normal RPM build process is to copy the source tarball and a spec file into the rpmbuild tree.  Here, instead, there was a convoluted, undocumented, and inflexible process: run “make rpm”, copy a magic folder somewhere magical, and then run rpmbuild against the spec file.  This also deliberately …

Continue reading

Building the ACM Cluster, Part 11c: Xen RPMs

Next up, let’s build RPMs of Xen, a hypervisor.  Xen was chosen because on machines which do not have hardware virtualization support (like the cluster I’m building), Xen can fall back to paravirtualization, which is still somewhat quick.  Xen also has the concept of clustering and shifting VMs between instances - an important feature in a VM cluster!

Xen Spec Files

CentOS 6 no longer has support for Xen.  CentOS decided to put their weight behind QEMU/KVM as the virtualization solution and thus stopped distributing and supporting Xen.  There are a few third-party sites out there hosting packages, but, frankly, I am sufficiently paranoid to want to build them myself.  I also could not find a freely available spec file, so I wrote my own for Xen.  Hopefully the fact that I simply use the official Xen source and a spec file that is very simple and easy to examine will satisfy those who are as paranoid as I am.

On to Building the RPM

  1. Download a copy of my spec file.  As before, put it …
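(From there, the build follows the same rpmbuild pattern as the other parts; as a sketch, where both file names are assumptions rather than the actual ones:)

$ cp xen.spec ~/rpmbuild/SPECS/              # hypothetical spec file name
$ cp xen-4.1.3.tar.gz ~/rpmbuild/SOURCES/    # whichever official Xen tarball you downloaded
$ rpmbuild -ba ~/rpmbuild/SPECS/xen.spec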
Continue reading

Building the ACM Cluster, Part 11b: Kernel RPM Build

If you haven’t already done so, read and execute Building the ACM Cluster, Part 11a: Setting up rpmbuild environment.

This article will cover building a newer kernel for CentOS and injecting it into xCat’s local package repository.  I am covering this because later on we’ll need a more recent kernel than CentOS ships by default - Xen specifically requires a later kernel.  And when we build kernel modules, we’ll want to be building against the same kernel version we’re running.

Kernel Spec

I’ve built a kernel spec based on the one at ElRepo (downloadable at kernel/el6/SPECS/ from any of these mirrors).  That spec builds a kernel package called kernel-ml so that it can coexist with the official CentOS kernel.  However, I have a different goal - I want to replace the kernel.  I’ve therefore created my own branch on my GitHub.  The only difference between it and the official spec file is that I’ve removed every …
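(Mechanically, this is the same rpmbuild dance as before; a sketch, assuming a spec name of kernel-ml.spec - the real file in my branch may be named differently:)

$ cp kernel-ml.spec ~/rpmbuild/SPECS/            # spec file name is an assumption
$ rpmbuild -ba ~/rpmbuild/SPECS/kernel-ml.spec   # expects the kernel tarball and configs it lists as Sources in ~/rpmbuild/SOURCES/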

Continue reading

Building the ACM Cluster, Part 11a: Setting up rpmbuild environment

Up to this point, we haven’t built any custom software for the cluster.  I’ve tried very hard to use mostly off-the-shelf software.  However, this has to change.  Several of the major components we’re going to use (Xen, the fiber card driver, Ceph) are not available in the CentOS repositories (or the packaged versions are too old).  So we’re going to build them ourselves.

However, rather than build them on a node-by-node basis (which would make a single install take hours…), we’re going to build packages.  CentOS uses the RPM package format to distribute prebuilt software, so we’ll be building RPMs.

Where do I build RPMs?

Many resources recommend not building RPMs as root.  This is quite sensible if you’re doing it on a machine that can’t easily be rebuilt - that way you can’t accidentally overwrite an important file.  However, since you …

Installing rpmbuild

The primary tool used to build RPMs is called, obviously enough, rpmbuild.  We also need a …
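(For the impatient, the short version on CentOS 6 looks something like this - to my knowledge rpmdevtools lives in EPEL, so you may need that repo enabled first:)

# yum install rpm-build rpmdevtools    # as root
$ rpmdev-setuptree                     # as your unprivileged build user; creates ~/rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS}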

Continue reading

Building the ACM VM Cluster, Part 10: Operating system image build

Now that we’re done with network configuration, let’s actually build an operating system to use on the nodes!

Let’s go ISO Huntin'!

The first step in building operating system install images is to get the full operating system images - not netboot, but a fully installable version.  For CentOS, the mirrors page is a good place to start your hunt.  Personally, I downloaded both DVD images (CentOS-6.3-x86_64-bin-DVD1.iso and CentOS-6.3-x86_64-bin-DVD2.iso), though I suspect the minimal image alone would cover it.

Import Install Media

Next, we have to import the install media into xCat’s NFS filesystem.  To do this, we’ll use the copycds command, which simply takes as arguments the ISOs you want to import:

# copycds CentOS-6.3-x86_64-bin-DVD1.iso CentOS-6.3-x86_64-bin-DVD2.iso

Sometimes copycds will tell you

Error: copycds could not identify the ISO supplied, you may wish to try -n <osver>

In which case your command will look more like

# copycds -n …
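(For example - the osver and arch strings here are illustrative guesses; match them to your ISO:)

# copycds -n centos6.3 -a x86_64 CentOS-6.3-x86_64-bin-DVD1.iso CentOS-6.3-x86_64-bin-DVD2.iso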
Continue reading

Building the ACM Cluster, Part 9: Setting up masquerade with iptables

Alright! Let’s get this started again.  There is one last thing we need to do in order to have functional networking on the cluster.  Right now, the nodes inside the cluster can’t speak to the outside world.  While we set up the head node to be able to speak on every interface, we haven’t yet told it how to move traffic from one interface to another.

Making the Gateway

In normal clusters, there are three types of nodes: workers, gateways, and head nodes.  Workers do whatever task the cluster is intended for.  Head nodes manage the workers.  And finally, gateways allow the worker nodes to communicate with things outside the cluster.

Gateways are needed because clusters often use IP addresses which are not publicly routable.  The gateway allows the entire cluster to sit behind one IP address and is in charge of routing traffic properly.  This process is called Network Address Translation (NAT).  In many ways, this makes the gateway like your home router.
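(As a preview of where this is going, the heart of the setup is only a few commands.  The interface names are assumptions - substitute your own outside and cluster-facing interfaces:)

# echo 1 > /proc/sys/net/ipv4/ip_forward                 # let the kernel forward packets between interfaces
# iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE   # eth0 = outside interface; rewrite outgoing source addresses
# iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT          # eth1 = cluster-side; allow outbound traffic
# iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT   # allow replies back in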

Anyway, …

Continue reading

Building the ACM Cluster, Part 8: Adventures in Routing: Source Based (Multi-homed) Routing

(This post is part of the ACM cluster build series.  However, it is really generic systems stuff, more closely tied to quirks of JHU networking than to the actual cluster build.)

The Problem

JHU has two distinct networks - firewalled and firewall-free.  (In truth there are more, and there are gradations, but these are the two the JHUACM has IP allocations on.)  Some services cannot be run from inside the firewalled network, so for these the ACM has a small firewall-free allocation.  Because the cluster will be hosting VMs inside both networks, it needs to be capable of routing traffic for both.  This means doing something called source-based routing, or multihomed routing - the name refers to the fact that this machine will have two connections to the internet.  This is a rare setup for a single box - multihoming is usually done at the ISP or datacenter level, rather than on an individual machine.

The Solution

The solution is to convert Linux to …
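(To sketch the punchline with iproute2: you add a second routing table and select it by source address.  The table name, addresses, and interface below are all placeholders:)

# echo "200 firewallfree" >> /etc/iproute2/rt_tables                  # register a second routing table
# ip route add default via 203.0.113.1 dev eth1 table firewallfree   # default route for the firewall-free network
# ip rule add from 203.0.113.10 table firewallfree                   # traffic sourced from this address uses that table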

Continue reading