Department of Redundancy Department

RAID 1 is useful. If you didn’t understand that sentence, stop reading here; this posting is not for you. My new desktop PC uses a 40GB SSD for the operating system, and a pair of 1TB drives in a RAID 1 array for the home directories and other dynamic content. Frustratingly, Ubuntu 10.04 desktop edition doesn’t have an option to install with RAID (unlike the server edition), so I had to do it by hand. Here’s the solution, because it’s not terribly difficult, and because it might be useful for someone.

On my system, I created partitions on the SSD for the root, /boot, /usr and swap, and intended to divide my 1TB RAID volume into partitions for /home and /var. Hopefully what follows is clear enough that you can easily adapt it to your own partitioning preferences.

Boot from the Ubuntu live CD and choose to “Try without installing”. Let it boot into the desktop environment.

Create your partitions. You can either use fdisk if you’re old-fashioned, or GParted from the System/Administration menu on the desktop. The partitions that you’re going to use as components of the RAID array must be exactly the same size on all drives. Assign file system types to your non-RAID partitions (if you have any) but not to the RAID partitions; we’ll do that later.
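The easiest way I know to guarantee that the RAID partitions really are the same size on both drives is to dump the partition table from the first drive and replay it onto the second with sfdisk. A sketch, assuming the two drives are /dev/sdb and /dev/sdc as on my machine; since the real commands need root and will clobber the target disk’s partition table, this version only prints what it would run:

```shell
# Dry-run sketch: prints the sfdisk commands rather than running them,
# since they need root and will overwrite the target disk's layout.
# /dev/sdb and /dev/sdc are the two mirror drives from my setup.
clone_partition_table() {
    src="$1"; dst="$2"
    echo "sfdisk -d $src > table.dump"   # dump the source layout
    echo "sfdisk $dst < table.dump"      # replay it onto the target
}

clone_partition_table /dev/sdb /dev/sdc
```

Drop the `echo`s (or pipe the output into a root shell) when you’re sure the device names are right.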

Become root:

$ sudo su -

Install the RAID tools:

# apt-get update
# apt-get install mdadm

(For some reason, the mdadm package pulls in postfix. Just tell the postfix wizard that you’re local only, and it won’t matter.)

Now for the important bit: creating the RAID arrays. I had to do this twice: once for the /home partition and once for the /var partition:

# mdadm --create /dev/md1 --verbose --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
# mdadm --create /dev/md2 --verbose --level=1 --raid-devices=2 /dev/sdb2 /dev/sdc2

(This should be pretty self-explanatory: --level=1 says I want a RAID 1 array, --raid-devices=2 says there are two drives in the array, and /dev/md1 is the name of the RAID pseudo-device that’s created out of /dev/sdb1 and /dev/sdc1.)

Now we can format each array with a filing system:

# mkfs.ext4 /dev/md1
# mkfs.ext4 /dev/md2

If you’re planning on using a different filing system (caution, ReiserFS may cause you to murder unsatisfactory mail-order brides) then just use a different mkfs instruction at this point.

Now to install Ubuntu. Run the installer (it’s right there on the desktop) and run through the usual question-and-answer process until you get to “Prepare disk space”. Choose to specify the partitions manually, and set the RAID array md devices where appropriate. (By way of example, I had /dev/sda1 for /, /dev/sda2 for /usr, /dev/sda3 for /boot, /dev/md1 for /home and /dev/md2 for /var.) Continue with the installation Q&A and then let the OS install itself.

Once it’s all installed, don’t go rebooting just yet, because if you do, you’ll regret it. The problem is that Ubuntu desktop doesn’t include the RAID array manager, mdadm, in the default installation. Reboot now and your new OS won’t know how to assemble the RAID volumes. So before you reboot, you need to install mdadm into your new OS.

You need to mount your new installation and then chroot into it, so that the OS treats that mount-point as the root. If you’ve got everything in one partition, that’s pretty straightforward; I had to do several mount operations to get all my partitions in place. If your partitions are simpler, adjust the mount operations accordingly, but you must do the /dev, /proc and /sys mounts:

# mount /dev/sda1 /mnt/
# mount /dev/sda2 /mnt/usr
# mount /dev/sda3 /mnt/boot
# mount /dev/md1 /mnt/home
# mount /dev/md2 /mnt/var
# mount --bind /proc /mnt/proc
# mount --bind /sys /mnt/sys
# mount --bind /dev /mnt/dev
# chroot /mnt

Install mdadm just like before (even the postfix), only this time it’s permanent:

# apt-get update
# apt-get install mdadm
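On my system the freshly installed mdadm found the arrays by itself, but if the new OS ever fails to assemble them at boot, the usual fix is to record the arrays in /etc/mdadm/mdadm.conf and rebuild the initramfs. A sketch to run inside the chroot; it’s wrapped in a function so that pasting this file doesn’t run anything by accident:

```shell
# Sketch: record the arrays so the boot scripts can assemble them.
# Only call this inside the chroot, as root; defined as a function
# so nothing destructive happens just by sourcing this file.
persist_arrays() {
    # --detail --scan emits one "ARRAY /dev/mdX ..." line per array
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf
    update-initramfs -u    # bake the new config into the initramfs
}
```

Call `persist_arrays` by hand once you’ve checked that mdadm.conf doesn’t already contain the ARRAY lines.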

For me, this was the end of the process. If your root volume is not part of a RAID array (mine wasn’t), pop the CD out, reboot now and enjoy the fruits of your labours! If your root volume is part of a RAID array, however, you’ll also have to fix the GRUB boot-loader, which will only have been installed on one of the disks in the array. For example, if your root volume is on /dev/md1 and that device comprises /dev/sda1 and /dev/sdb1, then to be on the safe side you’ll need to do this:

# grub-install /dev/sda
# grub-install /dev/sdb

Reboot, and cat /proc/mdstat or run the Ubuntu Disk Utility (on the System/Administration menu) to check that your RAID devices are syncing.
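If you’d rather check the sync progress from a script than by eyeball, the resync percentage can be picked out of /proc/mdstat with a little awk. The sample text below mimics the output for a freshly created mirror (it’s illustrative, not captured from my machine); on a live system you’d pipe in `cat /proc/mdstat` instead:

```shell
# Illustrative /proc/mdstat-style output for a mirror mid-resync.
sample='md1 : active raid1 sdc1[1] sdb1[0]
      976630336 blocks [2/2] [UU]
      [==>..................]  resync = 12.6% (123456789/976630336) finish=107.3min speed=132504K/sec'

# Scan the resync line for the field that ends in "%".
progress=$(printf '%s\n' "$sample" |
    awk '/resync/ { for (i = 1; i <= NF; i++) if ($i ~ /%$/) print $i }')
echo "$progress"   # 12.6%
```

Once the resync line disappears and the array shows [UU], both halves of the mirror are in sync.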

Plenty more phish in the sea

Why are phishing gangs so dumb?

I received some phishing spam today, targeting HSBC Hong Kong customers. They did one thing right – they targeted .hk e-mail addresses. Pity I use a .com e-mail for my banking, really. But that’s not the dumbness, that’s just bad luck on the phishers’ part.

The dumbness takes two forms. Firstly, the English. Yes, I know they’re Russian. Or Chinese. Or at least, that English is not their native language. But how much would it cost them to employ an actual English speaker to proof-read their spam? Unemployment is over 2 million in the UK, so it wouldn’t be hard for them to find one! I mean, just take a look:

Subject: HSBC Hong Kong All advanced forum features are available;
Date: Tue, 17 Feb 2009 04:34:16 +0200
From: HSBC Hong Kong <>
To: me
HSBC Hong Kong:

New user interface features and a new user interface for the HSBC Hong
Kong Users, were designed in order to reduce the high cognitive and
physical load that users experience when controlling the HSBC Hong Kong.

These interface features, and the new interface, were evaluated for
their  performance. The following results were obtained.

Proceed to view details:

[URL removed for sanity]

Sincerely, Freida Porter. Customer Service Department.
Copyright, inc. All rights reserved.

Not very convincing, eh? Also, there’s no attempt to sell me a loan. I can’t remember the last time HSBC communicated with me and didn’t try to sell me a loan on the side.

So, a little investment in a suitably fluent proof-reader who can imitate real HSBC e-mails would have paid off here.

Dumbness number two is just plain incompetence. In order to circumvent spam filters, the spammers fire out multiple copies of almost the same message… with different subject lines and different, randomly-generated, names at the bottom (none of which were even vaguely Chinese, incidentally). I received fifteen or so in the space of an hour (all flagged as spam, so they failed there too). The sheer quantity should make even the most gullible pause and think: that’s weird.

Interestingly, the phishing web site linked from the e-mail was very well done, looked convincing, and had a trojanised “You need the latest Adobe Flash Player” download. It’s odd that the criminals should put so much effort into their web site and then waste it with a half-arsed e-mail campaign. (Strictly, this means it’s not phishing, which is an attempt to get a user name and password. HSBC’s synchronous authentication defeats keylogging, so the trojan is probably a man-in-the-middle attack that waits for the user to authenticate with HSBC and then patches the criminals into the session. I didn’t get time to download it to a virtual machine and try it out.)

It goes without saying that they’re only after a tiny number of victims, that they expect 99.9% of their e-mails to be ignored. And of course, their incompetence and laziness is a boon to those grey-area types out there who just might fall for a well-constructed phishing scam. But still, it pains me to see shoddy work, even when it’s crime!

Portrait of the blogger as a young man

I remember, many years ago, there was a sudden hysteria about CDs. Perhaps it was put about by marketing droids on behalf of the vinyl and tape-cassette manufacturers, for – it was said – CDs were going to rot away. You’d have crystal clarity now, but next year… glitches and frustration. Like all good scare-stories, nothing came of it. My first ever CD (a Mozart piano concerto) is approaching its 20th birthday and still works perfectly well (although it hasn’t seen much service since being ripped to FLAC files).

The same was said, with rather more cause for concern, about early CD-Rs. When I started experimenting with writing CDs in 1997, I was using a Yamaha CDR-100 hanging off an Adaptec AHA2940 SCSI card on a Linux 2.0.4 box running… a MicroChannel-patched kernel, for it was, indeed, old IBM PS/2 hardware! It was necessary to make the ISO image manually before writing, and the burn process took forever. The computer had to be disconnected from the network and left untouched while writing because there was no write buffer: any interference at all and the burn failed. Even in ideal conditions the burn failed half the time anyway. Back then, blank CD-Rs cost several pounds each, and I was churning out the most expensive coffee mats in the office.

My managers, poor troubled souls, wanted to know why (considering this expense) we were archiving to CD-Rs at all. Weren’t they all going to be blank and useless within months? That’s what the trade press said, anyway. (Perhaps, this time, marketing fluff from the DDS-3 manufacturers.)

Well, I can officially set their minds at rest today. Rummaging through old boxes of things, tucked inside a logbook full of scrawl, I found a CD-R that I burned back in 1999 (on a Yamaha CDR-400), and to my delight it still works! I was able to retrieve 10 year old e-mails and review conversations and ideas long since forgotten. I found my old appraisals from 1998 (“… has a problem with authority…”; “… must learn that technical skills are not an excuse for rudeness…”). There was the source code for my unfinished squid redirector, and for my port of apache to CX-SX. Best of all, there were pictures!

Me and my CB500 at a bike show somewhere

Here, for example, is one of the few pictures of my favourite motorbike – my CB500 – which must have been taken in 1999 at a bike show, back when I was on the committee of the Honda Owners Club (GB). I loved that bike – comfortable for long journeys, fast, and with Dunlop ArrowMax tyres on it, it’d handle as well as a CBR600 in most conditions. I can’t remember anything about the next two bikes, but the bike on the far left is the world’s only immaculate CX500 “maggot”, belonging to a chap called Lennie.

I’m still rummaging through the depths of the files I copied from the antique CD-R. There are a lot of .tar.gz and ZIP files on there that have yet to be explored. I’m feeling really quite nostalgic about the whole thing.

It is a far, far beta thing…

I’m playing with the beta release of Windows 7 right now, preparing to assess its strategic impact on enterprise security when (as it surely will) it replaces XP on the desktop (the mundane Vista having been quite ignored). Getting the beta running properly on VirtualBox OSE was not entirely straightforward, so here are a few tips for anyone else trying it. This has been tested on Ubuntu 8.10.

  1. Download the ISO from Microsoft and get your installation key. You have two weeks left from the date of this post, according to Microsoft, after which no more beta downloads will be allowed.
  2. Set up a new VM on VirtualBox OSE, with a suitable name. Tell it to use Windows Vista as the OS type. Give it 1GB of RAM, 128MB of video memory, and at least 20GB of hard disc. Under CD/DVD-ROM, choose “mount host drive”, select the “ISO image” option, and browse to the Windows 7 Beta ISO that you downloaded. Your VM is now configured and ready to run.
  3. Start the VM. The Windows installation process is pretty normal and self-explanatory. There will be some periods of black screens and total inactivity, but this appears to be standard. Just be patient!
  4. Once you get the desktop up and running, have a fiddle. You’ll notice that there’s no audio, no network, and the mouse gets trapped in the virtual OS window. All of these things can be fixed!
  5. First, we install the Guest Additions. At the time of writing, there are no Additions for Windows 7, but I found that the old XP additions seem to work perfectly well, with only a minor tweak. First, mount the Guest Additions ISO using the “Devices” menu in the VirtualBox OS window and selecting “Mount CD/DVD-ROM”. This makes the files available to Windows 7. You’ll see the autoplay window; choose to open folder and view files.
  6. Right-click “VBoxWindowsAdditions” and select “Properties”. Choose the “Compatibility” tab and enable “Run this program in compatibility mode”, choosing “Windows XP (Service Pack 2)”. OK everything and run VBoxWindowsAdditions. Agree to EULAs etc where necessary and then reboot when prompted. The mouse movement between guest and host OS should now be seamless.
  7. Now the network: click on the Windows button in the bottom-left, right-click “Computer” and select “Properties”. Choose “Device Manager”. In the device manager, you’ll see that “Ethernet Controller” has a yellow warning mark. Right-click it and choose “Update driver software” and then “Browse my computer for driver software”. Click “Browse” and choose the CD with the VirtualBox Guest Additions on it (it should still be mounted). Just selecting the root folder of the CD is sufficient. Click OK and then “Next” and the driver will install. Let Windows 7 get itself onto the network and surf a bit to check it’s working.
  8. Audio drivers are easy once the networking is running. Right-click on the yellow-flagged “Multimedia Audio Controller” in your device manager window and choose “Update driver software”. This time, choose to fetch the driver from the Internet, and Windows 7 will do the rest.
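If you end up rebuilding the VM more than once, the GUI work in step 2 can also be scripted with the VBoxManage CLI. Option spellings differ between VirtualBox releases (check `VBoxManage --help` on yours), and the VM name, disc filename and ISO path below are illustrative, so this sketch only prints the commands rather than running them:

```shell
# Dry-run sketch of step 2 via the VBoxManage CLI. Option names vary
# between VirtualBox versions, and the names/paths are illustrative,
# so this just prints the commands it would run.
vm_setup() {
    echo 'VBoxManage createvm --name Win7Beta --register'
    echo 'VBoxManage modifyvm Win7Beta --ostype WindowsVista --memory 1024 --vram 128'
    echo 'VBoxManage createhd --filename Win7Beta.vdi --size 20480'   # 20GB disc
    echo 'VBoxManage storagectl Win7Beta --name IDE --add ide'
    echo 'VBoxManage storageattach Win7Beta --storagectl IDE --port 0 --device 0 --type hdd --medium Win7Beta.vdi'
    echo 'VBoxManage storageattach Win7Beta --storagectl IDE --port 1 --device 0 --type dvddrive --medium win7-beta.iso'
}

vm_setup
```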

Following these steps, I have a reliable and speedy Windows 7 installation. There’s still lots of evaluation to be done, but at first glance it does seem nicer than Vista.


Not economy to business class, sadly, just software. I’ve upgraded the back-end of my blog to the latest version of WordPress (2.7). This was not straightforward. For anyone else attempting an upgrade from 2.0.x to 2.3 or later, here’s a tip: don’t. The database upgrade tool is completely broken and simply will not work.

Instead, upgrade as far as 2.2.3, and then use the “Export” function (which is not present in the 2.0.x tree) to save the whole kit’n’caboodle as an XML file. Then make a fresh install of 2.7 with a new database, get it running, and “Import” the XML. You will lose your blogroll/links, so make a separate copy of those. And of course, you’ll have to manually copy over your wp-content/uploads directory and any themes and plugins, and re-create user accounts.
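The manual copying amounts to a safety dump of the old database plus a couple of cp commands. The database name, user and paths below are made up for illustration (not my real setup), and the sketch only prints what it would do:

```shell
# Dry-run sketch of the manual parts of the 2.0.x -> 2.7 move.
# Database name, user and paths are illustrative, not my real setup.
migrate_files() {
    echo 'mysqldump -u wpuser -p old_wp > old_wp_backup.sql'   # safety net first
    echo 'cp -a /var/www/oldblog/wp-content/uploads /var/www/newblog/wp-content/'
    echo 'cp -a /var/www/oldblog/wp-content/themes/mytheme /var/www/newblog/wp-content/themes/'
}

migrate_files
```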

It’s well worth it though. WordPress 2.7 is very slick.


You know that feeling when some business (usually one that is struggling) launches a new product or service and your instinctive reaction is, “Boy, are you late to the party!”? Well, that’s how I feel about Yahoo! today.

I’ve been using Yahoo! web mail for years. It’s quick, it’s simple, and it lets me use folders (unlike Gmail). I’ve never migrated to the new AJAX-y version of the web mail (mostly because it enables IM without my consent); I like the Classic version. But suddenly, whenever I log in, it’s nagging me to “make connections” with people in my address book. I’m supposed to “invite” them to connect to me. Well, I’ve got news for you, Yahoo! – I’ve already got a Facebook account. If I want social networking, I’ll go there. I don’t want to “connect” with people, but there doesn’t seem to be a way to dismiss this silliness and return to pure e-mail.

In addition, the “connection” mechanism tells outright lies. It’s been nagging me to connect to addresses I barely recognise, so I clicked on “How were these suggestions made?”


Rubbish! Most of the addresses it has been giving me are people I’ve e-mailed once, or haven’t e-mailed for literally years. I mean, take a look at this:


I have never e-mailed that address, and I know it.

Nice try, Yahoo!, but this is really very poor indeed. You’re asking me to send “invitations” that are unknown quantities to swathes of out-of-date and obscure e-mail addresses to make “connections” for no reason that you’ve bothered to explain to me. When I click on “Why connect?” it gives me two crappy reasons and “Much more coming soon!”. Ooh. I can’t wait (he said in a voice dripping with sarcasm).


Last night I flew from Washington DC to New York, which involved some hours of sitting around idle in Washington Dulles airport. It’s the first time I’ve used DC’s main airport, because on all previous occasions I’ve flown into or out of Reagan, which is right there in the city and terribly convenient.

Now, you’d expect the newest terminal of the main airport of the capital city of the most powerful (allegedly) country in the world to be first-class. Actually it was more third-world.

Many airports that I use provide free wifi. Hong Kong does, of course, and so does Singapore, JFK (in parts), Prague, Cancun, and even Jakarta. Some airports provide wifi with cross-billing arrangements to PCCW (e.g. Munich, which lets me bill straight to my CSL account). All other airports that I use allow wireless through reliable paid hotspots like Boingo or T-Mobile (Heathrow, JFK, Stockholm, Geneva…  oh gosh, loads).

As for Dulles? Although T-Mobile and Boingo were nominally present, they didn’t work: I couldn’t get a DHCP lease from either of them anywhere in the terminal. AA’s Admirals Club was accessible, but that doesn’t support one-off billing. The only other APs about were the usual “Free Public WiFi” sillinesses.

Wifi provision for airports shouldn’t be seen as a luxury any more, it should be a basic courtesy to the passengers who are forced to turn up earlier and earlier and wait longer and longer for flights that are often more and more extensively delayed. If I can’t work in airports then I’m losing a great deal of potentially productive time.

Mind you, Dulles Airport is like a time-warp in so many other ways, from the amazingly dated check-in concourse to the peculiar mobile-lounge shuttles between terminals.

On a positive note I got to fly on an Embraer E190 for the first time and thought it was a splendid little aircraft.

There’s snow predicted for New York during the night, which is going to make things interesting for tomorrow… I have an Amtrak journey up to Boston.

Dodging the FLAC

My computer is my preferred music player. To this end, I have been copying my CD collection to my hard disc drive (away, you demons of the RIAA – they’re for my personal use only, and all the CDs were legitimately purchased). My preferred file format is FLAC (the Free Lossless Audio Codec), which makes perfect copies of the tracks at a fraction of their original size. This is an improvement over MP3, for example, which necessitates some loss of quality.

This is ideal for listening to music on my PC at home. But if I want to load up my mobile phone with sounds and take them with me into the streets then I am undone. Damn thing doesn’t play FLACs. In fact, almost no personal music players will handle FLAC. So what to do? I could transcode all my FLACs into MP3s, but then I’d have an untidy music collection and a criminal waste of disk space.

Then I found this baby: mp3fs. It lets you mount your FLAC collection as a “virtual” filing system that looks just like a collection of MP3s. And when you want to copy the files onto a flash memory card to stick into your music player, it transcodes them on-the-fly. Beautiful, elegant and exactly what I need.

For Ubuntu users, it’s straightforward to use. This chap has prepared some .deb files for Ubuntu which you can download and install using Synaptic.

If you want to manually mount the MP3 filing system, you can do it like this:

sudo mp3fs /my/flac/directory,128 /my/mp3/directory -o allow_other,ro

Where, of course, /my/flac/directory is the directory containing all your ripped FLAC files, 128 is the bitrate in kbit/s (128 is quite good enough for portable music players), and /my/mp3/directory is the mount point to which your virtual filing system will be attached.

Even better, you can add the following line to your /etc/fstab file and have the MP3s available from boot. (You’ll need to be root, or use sudo, to edit /etc/fstab.)

mp3fs#/my/flac/directory,128 /my/mp3/directory fuse ro,allow_other 0 0

This is all dependent on FUSE, but that’s installed by default on Ubuntu and most other recent GUI-based Linuxes, so it shouldn’t be an issue.

Not fare!

In the light of yet another resurgence of newspaper articles about the weakness in the London Oyster card system, nearly all of which claim that our own beloved Octopus is also vulnerable, let me set the record straight.

It bloody isn’t.

Oyster uses the Dutch MiFare Classic chip. The designers committed the cardinal sin of security when they put that together: they invented their own encryption technique. Moreover, they relied on the uniqueness and obscurity of that encryption technique to protect the card and prevent the thousand natural hacks that currency-cards are heir to, and they were so confident that this would work that they made the encryption key a fixed value. Of course, it didn’t take very long for a bunch of bright young things to reverse-engineer the encryption (which wasn’t very good, really), after which the card was fatally holed.

Of course, Octopus would be just as badly affected if it used MiFare. Only it doesn’t. It uses the Sony FeliCa chip. The FeliCa does not rely on a half-arsed but vaguely obscure encryption algorithm; it uses standard and widely trusted encryption, and to preserve the confidentiality and integrity of the transactions, it changes the encryption key every single time it is used. In other words, it doesn’t matter that you know how the data is encrypted: by the time you’ve cracked it, the key’s already changed.

So, no need to be concerned about your Octo. It’s just the Oyster that’s shucked.