I was interested in migrating some servers from the older generation Amazon EC2 m1.small instance type to the newer (and more cost-effective) t2.micro, but I had some difficulties in making this jump. Here is what I figured out…
My servers were PV Debian instances with magnetic root drives and magnetic RAID-10 arrays; the array drives were assigned to the /dev/xvdf1, /dev/xvdf2, etc. device nodes.
My first problem was getting a root drive on an HVM AMI to boot with my data on it. I was attempting the commonly prescribed approach: boot into an HVM instance, attach your old PV root drive as a secondary drive, and transfer the contents of the HVM instance's /boot over to it. Perhaps part of the problem was that my AMI booted via extlinux while the instructions I found were written for grub2, but I wasn't able to get this working and gave up.
Instead, I copied everything but /boot over to the HVM root drive. I ran into a few snafus here too, namely clobbering the HVM root drive's /etc/fstab, and locking myself out because I don't permit SSH logins with passwords (and my SSH keys were stored on the not-yet-available RAID array), but I was soon able to get a working root drive with all of my stuff on it.
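The copy step can be sketched with rsync. The mount point, the helper function name, and the exclude list here are my own choices, not something prescribed by AWS; the point is to skip /boot, the pseudo-filesystems, and the files the HVM root should keep its own copies of:

```shell
# copy_root SRC DST: copy a root filesystem tree from SRC to DST,
# skipping /boot, pseudo-filesystems, and the config files the
# destination (HVM) root must keep its own copies of.
# Add -AX to also preserve ACLs and extended attributes on the real copy.
copy_root() {
    rsync -a "$1/" "$2/" \
        --exclude=/boot \
        --exclude=/etc/fstab \
        --exclude=/etc/resolv.conf \
        --exclude=/etc/hosts \
        --exclude=/proc --exclude=/sys --exclude=/dev --exclude=/run
}

# On the instance this would be run as root against the live root,
# with the old PV root snapshot mounted at a scratch mount point:
#   mount /dev/xvdf1 /mnt/oldroot
#   copy_root /mnt/oldroot /
```

The leading-slash excludes are anchored at the transfer root, so only the top-level /boot, /etc/fstab, and so on are skipped, not any nested paths that happen to share the name.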
The next problem I faced was bringing up my RAID-10 array. I was never able to reassemble the existing array, and I think part of the problem was that EBS volumes on HVM instances cannot be attached as numbered /dev/xvdf1-style device nodes, and instead need to be attached as whole devices: /dev/xvdf, /dev/xvdg, etc. There may have been a way to reassemble the old array somehow, but I gave up on it, and instead elected to recreate the RAID from scratch and simply rsync my data over.
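Recreating the array from scratch looked roughly like this. This is a sketch run as root, and the specifics are assumptions on my part: four new volumes attached as /dev/xvdf through /dev/xvdi, an ext4 filesystem, /mnt/raid as the mount point, and `oldhost` standing in for the old instance:

```shell
# Build a fresh RAID-10 array from the four whole-device volumes
# (HVM instances expose EBS volumes as /dev/xvdf, /dev/xvdg, ...,
# not as numbered partition nodes)
sudo mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/xvdf /dev/xvdg /dev/xvdh /dev/xvdi

# Put a filesystem on it and mount it
sudo mkfs.ext4 /dev/md0
sudo mkdir -p /mnt/raid
sudo mount /dev/md0 /mnt/raid

# Record the array so it reassembles on boot (Debian path shown)
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u

# Then pull the data across from the old array
sudo rsync -a oldhost:/mnt/raid/ /mnt/raid/
```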
While all of this can be a PITA (if your experience is anything like mine), it is also an opportunity to take advantage of the SSD-backed storage these newer generation instance types support (while perhaps saving money in the process). The /boot-transplant approach described above would keep your old magnetic root volume in place, which perhaps makes it more difficult to migrate from a magnetic to an SSD OS/root drive; copying your data onto a fresh HVM root, and likewise rebuilding your RAID array, lets you move both onto SSDs.
Hopefully this insight will be useful to those who are looking for a relatively pain-free approach to making these adjustments to their environments. Here is the workflow I settled on:
- Create a new instance from a community HVM AMI, which in my case used an 8 GB SSD root volume
- Assign your existing security group to this instance
- Create a volume from a snapshot of your old root drive, attach it as /dev/sdf, and mount it
- Copy over everything but /boot, making sure to keep the HVM root drive's existing /etc/fstab, /etc/resolv.conf, /etc/hosts, etc. In my case there was no need to install grub2 or change how the instance boots
- Create a new RAID array from volumes attached as /dev/sdf, /dev/sdg, etc.
- Copy data to your new RAID array
- Create swap space, as needed/desired
- Reassign your existing IP address to your new instance
- Set up CloudWatch, as necessary/desired
- Do one final rsync of your data from your old RAID array to your new one, if necessary
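For the swap step, t2.micro instances come with only 1 GiB of RAM and no instance storage, so one option is a swap file on the root volume. A sketch, run as root, with the 1 GB size and the /swapfile path being my own choices:

```shell
# Create a 1 GB swap file, lock down its permissions, and enable it
sudo dd if=/dev/zero of=/swapfile bs=1M count=1024
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

# Persist the swap file across reboots
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
```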