SAN-SAN Migration Using Windows Host-Based Mirroring

We’ve been using IBM XIV for SAN storage, and we’ve hit the end of our lease. We’re migrating to EMC’s VNX platform, and there’s no good, inexpensive way to do SAN-to-SAN mirroring between the two arrays.

At the same time, we’re replacing our IBM BladeCenters with Cisco UCS chassis. So we’re moving compute and storage in one fell swoop, and we need a migration plan the business will be pleased with.

The plan we came up with was pretty elegant, and the business only suffered ~30 min of downtime (per server moved).

Here’s the (simplified) environment we’re dealing with:

[Diagram: the simplified environment]

We weren’t able to connect the XIV to the VNX directly (well, we could have, but direct SAN-to-SAN migration wasn’t an option in this case). We were, however, able to connect the XIV to the Cisco MDS fabric switches that handle our zoning, which let us present storage from both the XIV and the VNX to the UCS servers.

Most of the servers in the BladeCenter were virtual machines running on ESX hosts. That part of the migration was done via vMotion and wasn’t a big deal at all. Physical Windows servers, on the other hand, needed some extra hand-holding. BladeCenter servers all have internal storage for boot, and UCS doesn’t; it’s all boot-from-SAN. I’ll be documenting installing Windows Server 2008 R2 in a boot-from-SAN environment in a later post.

Pre-Downtime

– If the source server is a simple server (like a file server), where it’s just Windows and a few roles and features on the C: drive, just build the new server in UCS and copy settings from the old to the new (keeping the hostname and IP different from the old one’s).
– If the source has complex applications, use Acronis or another imaging tool that can restore an image to dissimilar hardware, and apply the image to the UCS server (once again, keeping the hostname and IP different).
– Create all necessary LUNs on the VNX, roughly replicating the volumes on the old server (expanding sizes as necessary).
– Bring the VNX volumes online on the new server and convert them to dynamic disks, but do not create volumes on them. Match the old server’s partition style: if the old server uses MBR disks, initialize the new disks as MBR; if GPT, make them GPT (see the diskpart sketch after this list).
– In MDS, create zones from the XIV to the new server.
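
For the disk-staging step above, here’s a minimal diskpart sketch (the Disk Management GUI does the same thing). The disk number is a placeholder for whatever “list disk” shows for the new VNX LUN, and the script assumes a GPT source server; swap convert gpt for convert mbr to match an MBR source. Save it to a file and run it with diskpart /s stage-vnx.txt:

    rem stage-vnx.txt -- stage an empty VNX disk for host-based mirroring
    rem "select disk 2" is a placeholder; confirm the number with "list disk" first.
    rescan
    select disk 2
    attributes disk clear readonly
    online disk
    convert gpt
    convert dynamic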

Downtime

– Shut down the old server.
– Map the XIV volumes to the new server and rescan disks in Disk Management (or via diskpart; see the sketch after this list).
– Once the disks are mapped, you may have to set up multipathing using the Windows Server MPIO feature (it depends on the SAN and the drivers). This will require a reboot (see the MPIO sketch below).
– When the volumes appear in Disk Management, convert them to dynamic disks if they aren’t already (you can do this live, but it’s better to do it when applications aren’t reading or writing to them, so you may as well do it now).
– You may need to change drive letters back to what they were on the old server.
– Change the hostname and IP address on the new server to those of the old server (sketched below).
– The server is now live, albeit still connected to XIV storage.
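
The rescan, dynamic-disk conversion, and drive-letter steps can all be done from Disk Management, but here’s the equivalent diskpart sketch (run it after any MPIO reboot). Disk 3, volume 4, and the letter E are placeholders; check “list disk” and “list volume” for the real numbers:

    rem map-xiv.txt -- run with: diskpart /s map-xiv.txt
    rem Disk 3, volume 4, and letter E are placeholders for your environment.
    rescan
    select disk 3
    convert dynamic
    select volume 4
    assign letter=E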
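
For the MPIO step, this is roughly what the in-box route looks like on Windows Server 2008 R2. Treat it as a sketch: many SANs want their own multipathing (PowerPath, the XIV Host Attachment Kit, etc.) instead of the Microsoft DSM, so check your vendor’s documentation first:

    rem Enable the built-in MPIO feature:
    dism /online /enable-feature /featurename:MultipathIo

    rem Claim all MPIO-capable devices with the Microsoft DSM and reboot:
    mpclaim -r -i -a ""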
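
And the rename/re-IP step, scripted. The adapter name, the addresses, and OLDSRV01 are placeholders for your environment; if the machine is domain-joined, netdom will also want /userd and /passwordd credentials:

    rem Take over the old server's IP address:
    netsh interface ipv4 set address name="Local Area Connection" static 10.0.0.50 255.255.255.0 10.0.0.1

    rem Take over the old hostname and reboot:
    netdom renamecomputer %COMPUTERNAME% /newname:OLDSRV01 /force /reboot:10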

Post-Downtime

– You should now have live XIV volumes and empty VNX disks in Disk Management. All you need to do now is right-click each XIV volume, choose “Add Mirror…”, and select the corresponding VNX disk (a diskpart equivalent is sketched after this list). The mirroring is done in the background; speed depends on OS activity and the bandwidth from one SAN to the other. The disks will show “Resynching” while this process completes.
– When complete, you’ll have mirrored sets of disks. All you have to do is right-click each mirrored volume, choose “Remove Mirror…”, and select the XIV disk; the data will now live exclusively on the VNX.
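
If there are a lot of volumes, the same add/remove-mirror dance can be scripted with diskpart. A sketch, assuming E: is the live XIV-backed volume, disk 2 is the empty VNX dynamic disk, and disk 3 is the XIV disk; verify on a test volume that “break” drops the plex you expect before aiming it at production data:

    rem Add a VNX plex to the XIV-backed volume E:
    select volume E
    add disk=2

    rem Wait for the resynch to finish ("list volume" shows the status), then
    rem drop the XIV plex; "nokeep" deletes the plex on the specified disk.
    select volume E
    break disk=3 nokeep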

It may not be the simplest process, but it doesn’t require a lot of downtime, and it kills two birds with one stone.

Of course, this is Windows we’re dealing with, so be prepared for the unexpected. There is, however, an easy failback plan: simply turn the old server back on and map the XIV volumes back to it. Of course, it’s never really that simple, but it’s a good line to give management when they ask about a failback plan.
