A customer came to us wanting to move everything running in their data centre to a new data centre located about 30km away in the city.
Their rack held servers providing 160 websites, 200 DNS zones, 30 dedicated virtual servers, and email for 100 domains, spread across 250 IP addresses, 13 physical servers, 3 clusters, and 2 SANs. A fairly comprehensive setup.
During the planning stage it was decided that we would upgrade a lot of the shared hosting servers, so we would be starting with a greenfield build; the IP addresses could not come with us, as they were owned by the DC. The plan as executed involved several stages:
- New virtual environment setup
- DNS zone duplication and TTL lowering to 60s
- Website migration staging and WAF configuration
- Virtual machine migration
- Decommissioning of old hardware
There was enough spare hardware capacity to take two servers and a SAN to the new DC, where existing switches and routers were already in place. All drives were formatted and firmware updated as part of the staging process. A /24 IPv4 block was purchased from APNIC, and Hyper-V, SCCM, and VMM were set up and configured from scratch. Redundancy and failover were tested before the environment was put into production.
New virtual machines for DNS, web hosting, and database servers were commissioned, and zone transfers were set up to create NS3 and NS4; the TTL for all records was also reduced to 60s. The WAF was configured, and web server content was seeded over FTP and tested. The new web environment was now ready for cutover.

During an out-of-hours scheduled outage, the websites were taken offline for 20 minutes while a database dump, import, and final content sync were completed. The BIND DNS records were updated with some Python that replaced the old IP addresses with the new ones across all 160 zones (see the sketch below), and the updated zones were distributed to all of the name servers. The old WAF was configured to act as a reverse proxy, forwarding any traffic still arriving at the old IPs on to the new ones.
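The original script isn't shown here, so this is a minimal sketch of that rewrite step, assuming one zone file per domain under a hypothetical /var/named/zones directory and an illustrative old-to-new address map:

```python
#!/usr/bin/env python3
"""Rewrite old IPs to new IPs across BIND zone files (illustrative sketch)."""
import re
from pathlib import Path

# Hypothetical old -> new address map; the real cutover covered a full /24.
IP_MAP = {
    "203.0.113.10": "198.51.100.10",
    "203.0.113.11": "198.51.100.11",
}

ZONE_DIR = Path("/var/named/zones")  # assumed zone file location

# Anchor on word boundaries so 203.0.113.1 never matches inside 203.0.113.11.
PATTERN = re.compile(r"\b(" + "|".join(re.escape(ip) for ip in IP_MAP) + r")\b")

for zone_file in sorted(ZONE_DIR.glob("*.db")):
    text = zone_file.read_text()
    updated = PATTERN.sub(lambda m: IP_MAP[m.group(1)], text)
    if updated != text:
        zone_file.write_text(updated)
        print(f"rewrote {zone_file.name}")
```

In practice each modified zone also needs its SOA serial bumped, or NS3 and NS4 will never pick up the change via zone transfer.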
All of the VMs were migrated by taking a snapshot of the original VM and transferring the data seed via SFTP (sketched below). The DNS records for each service were updated to the new IP address, the old VM was shut down, and the differencing disk was transferred across. One hosting client had a no-downtime requirement; this was challenging, but through the use of site-to-site VPNs, DFS, and MySQL clustering their site was moved with zero downtime over a longer period. As VMs were moved from one DC to the other, the old servers were powered down, formatted, updated, and added to the new Hyper-V cluster to increase capacity.
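As a rough illustration of that two-phase copy, here is a minimal Python sketch assuming paramiko, with placeholder hosts, paths, and credentials:

```python
#!/usr/bin/env python3
"""Seed-then-diff VM transfer over SFTP (illustrative sketch)."""
import os
import paramiko

NEW_DC_HOST = "hv01.newdc.example.net"  # hypothetical target host

def push(local_path: str, remote_path: str) -> None:
    """Copy a single file to the new DC over SFTP."""
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(
        NEW_DC_HOST,
        username="migrate",
        key_filename=os.path.expanduser("~/.ssh/id_ed25519"),
    )
    try:
        sftp = client.open_sftp()
        sftp.put(local_path, remote_path)
        sftp.close()
    finally:
        client.close()

# Phase 1: while the VM is still running, ship the large base disk
# taken from the snapshot.
push("D:/VMs/web01/base.vhdx", "/vm-seed/web01/base.vhdx")

# Phase 2: after the VM is shut down, ship only the small
# differencing disk accumulated since the snapshot.
push("D:/VMs/web01/diff.avhdx", "/vm-seed/web01/diff.avhdx")
```

Shipping the big base disk while the VM is still live keeps the outage window down to roughly the time it takes to copy the much smaller differencing disk.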
Now that the hard work was done it was time for the fun stuff. The DNS records for NS1 and NS2 were changed in sequence, 76 hours apart, to ensure no loss of service. All other VMs were by now shut down, and once no traffic was being served by NS1 and NS2 they too were shut down. Traffic to all of the old IPs was then disabled, and a 48-hour no-traffic period was used to confirm there was no loss of service. Once the 48 hours had passed, all remaining hardware was shut down and moved where needed, and the migration was complete.
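A cheap way to back up that confirmation is checking that no zone still resolves into the old range. A minimal sketch, assuming dnspython, with a placeholder zone list and old prefix:

```python
#!/usr/bin/env python3
"""Confirm zones no longer resolve to the old address range (sketch)."""
import ipaddress
import dns.resolver

OLD_NET = ipaddress.ip_network("203.0.113.0/24")  # hypothetical old range
ZONES = ["example.com", "example.org"]            # would be all 160 zones

for zone in ZONES:
    try:
        answers = dns.resolver.resolve(zone, "A")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        print(f"{zone}: no A records")
        continue
    stale = [r.address for r in answers
             if ipaddress.ip_address(r.address) in OLD_NET]
    if stale:
        print(f"{zone}: still pointing at old range: {stale}")
    else:
        print(f"{zone}: OK")
```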