by Ray Davis, CTA
This is a quick blog to show the power of a protection domain in an AHV setup. First, I wanted to explain that my lab is not complex yet, meaning I don't have firewall rules, different VLANs, or any kind of segmentation. It's a flat network with two Nutanix AHV hosts. When I started writing a blog on how to set up a Nutanix protection domain, I was doing this on a production and DR site at a previous company. It worked a treat, and I was able to move AHV virtual machines from different clusters to a DR cluster and back with no problem.
But since then, I have moved on to another company that doesn't have Nutanix :(, and I don't have access to that environment anymore. My lab is 100% AHV. Why? Because I got into Nutanix two or more years ago, and I love what they offer: the hypervisor's ease of use and the other software they provide. I am using the Nutanix CE version. However, CE is behind what is released now. I noticed in an AOS 6.0 setup that some of the options are in different areas compared to the CE version. CE is based on 5.18, from what I can tell, and I am not sure when they will release a newer CE build either at this time.
This has allowed me to maintain a base skill set in the Nutanix realm. I know it's not a 100% solution compared to what many are running in a prod setup, but the steps are the same in a production environment. I am now rewriting this blog around my lab setup instead of what I originally had written up. Since I need to get my virtual machines off the other AHV host, I decided to write a little blog on protection domain VM replication. This blog shows you what it can really do, and it is very straightforward. The biggest issue I had back when I was doing this in a production setup was firewalls. It is critical to make sure the firewall rules are right; if not, you will struggle. Ask me how I know :). We all have to go through the firewall setups with Information Security or whoever manages your firewalls. Just make sure it's set up correctly.
As I was saying above, my lab is simple. It is just for me to test things and keep up to date with my CVADS/CVAD journey.
In my lab, I will be referring to two clusters: NTX-Cluster-01 (the source) and NTX-Cluster-03 (the remote/DR site).
I created two single-node clusters so I have the option to do failovers and mirror a source and destination AHV cluster.
I was in a situation where I needed another lab server. After searching online, I found a great site called TechMikeNY (Refurbished/Used Dell & HP Servers, Hard Disk Drives).
I ended up ordering the following. This is all I needed for my second host at this time.
| Item | SKU | Qty | Price |
|---|---|---|---|
| Dell PowerEdge R630 8-Bay 2.5″ 1U Server | DELL_PE_R630_8B | 1 | $364.00 |
| Dell 0C34X6 2TB SSD SATA 2.5″ 6Gbps Solid State Drive | 2TB_SSD_SATA_SFF_6G | 2 | $408.50 |
| Intel Xeon E5-2698 v3 2.30GHz 16-Core LGA 2011 / Socket R-3 Processor SR1XE | 2-30Ghz_E5-2698_V3_16C | 2 | $245.60 |
| 32GB PC4-2133P ECC-Registered Server Memory RAM | 32GB_PC4-2133P | 8 | $683.20 |
| Dell 1100W 80+ Platinum Power Supply | DELL_1100W_80-PLUS | 2 | $153.80 |
| Dell iDRAC8 Enterprise Remote Access License | iDRAC8_Enterprise | 1 | $67.20 |
| Dell HBA330 12Gbps SAS HBA Controller (NON-RAID) MiniCard | HBA330_Mini | 1 | $62.70 |
| 200GB SSD SATA 2.5″ 6Gbps Solid State Drive | 200GB_SSD_SATA_SFF_6G | 2 | $73.30 |
| Dell 0R1XFC I350 Quad-Port 1GBe Daughter Card | DELL_0R1XFC | 1 | $21.00 |
| 64GB SATA Disk-On-Module SATADOM SATA III 6Gbps Drive | 64GB_SATADOM_6G | 1 | $15.50 |
| Dell 2.5in R-Series Caddy | R-Series_SFF_Caddies | 4 | $19.00 |
Here are the firewall ports:
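Asynchronous DR replication between clusters needs a few TCP ports open between the CVMs: 2009 (Stargate) and 2020 (Cerebro) for replication traffic, plus 9440 for Prism. Verify these against the current Nutanix port reference for your AOS version. Here is a minimal sketch (the cluster addresses are placeholders) you can run from a machine that can reach both clusters to confirm the ports aren't blocked:

```python
import socket

# Ports used by Nutanix async DR replication (2009 = Stargate,
# 2020 = Cerebro) plus 9440 for Prism. Confirm against the official
# Nutanix port reference for your AOS version.
REPLICATION_PORTS = [2009, 2020, 9440]

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def check_cluster(host: str) -> dict:
    """Check every replication port against one cluster address."""
    return {port: port_open(host, port) for port in REPLICATION_PORTS}

# Example with placeholder cluster VIPs from my lab:
# for host in ["10.0.0.50", "10.0.0.60"]:
#     print(host, check_cluster(host))
```

If any port shows closed between the CVM subnets, that is the firewall conversation to have before you go any further.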
You need to create a protection domain first. This is what tells Nutanix what to replicate and how often.
Let’s start by setting up a protection domain on NTX-Cluster-01
From the drop-down menu, select Data Protection.
Click + Protection Domain.
Give it a name, something that makes sense for what you are working with.
Now it will ask you for the VM names.
You can select whatever schedule fits you best. I set this up to run every day as an example; the 10-minute interval in the screenshot is just to show you the schedule options.
Now you need to add the remote site (NTX-Cluster-03) that you want to replicate to. You do this on the source cluster by adding the remote site there.
NTX-Cluster-01 >>>>>>> NTX-Cluster-03
Once both sites are set up to talk to each other, we will need to come back here.
Go to the remote site cluster and log in.
Go to the Data Protection section.
Then create the new remote site connection.
NTX-Cluster-03 >>>>>>> NTX-Cluster-01 (this is for reverse sync, basically to replicate it back if needed). In my case it's not needed, but in a prod setup you would want to send it back once your DR failover activities are completed.
Again, once we get the sites talking, I will come back here to update the mappings.
Now to check the connections from NTX-Cluster-01 >>>>>>> NTX-Cluster-03
Now, as you can see, 03 is talking to 01.
Then you can see that 01 is talking to 03.
NTX-Cluster-03 >>>>>>> NTX-Cluster-01
Now let’s set the mappings. We can start on NTX-Cluster-01.
Go back to the remote site and click Update to edit the remote site settings.
Click on Settings, and scroll down.
Add the network mappings and vStore name mappings. This just sets up the source and destination network and storage.
So AHV: Data-Cluster01 will send to AHV: Data-Cluster-03
My network name is the same on both test sites. This would be different in your environment, based on the network name you used when creating your base VLAN. I used the default storage container on both clusters as well.
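Conceptually, the network and vStore mappings are just source-to-destination lookup tables, and the remote site's mappings should be the exact mirror of the source's. A small sketch (the names are from my lab; yours will differ) that sanity-checks the reverse mapping before you save it:

```python
# Source-side mappings configured on NTX-Cluster-01 (lab names;
# substitute your own network and container names).
forward_network_map = {"AHV: Data-Cluster01": "AHV: Data-Cluster-03"}
forward_vstore_map = {"default-container": "default-container"}

def invert(mapping: dict) -> dict:
    """Flip a source->destination mapping to destination->source."""
    return {dst: src for src, dst in mapping.items()}

def mappings_mirror(forward: dict, reverse: dict) -> bool:
    """True if the remote site's mapping is the inverse of the source's."""
    return invert(forward) == reverse

# The mapping entered on NTX-Cluster-03 should pass this check:
reverse_network_map = {"AHV: Data-Cluster-03": "AHV: Data-Cluster01"}
# mappings_mirror(forward_network_map, reverse_network_map) -> True
```

It sounds trivial, but a mapping that doesn't mirror cleanly is exactly the kind of thing that bites you during a real failover.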
Go back to the Remote Site (NTX-Cluster-03) and do the same but in reverse.
So AHV: Data-Cluster03 will send to AHV: Data-Cluster-01
My network name is the same as the test sites.
NTX-Cluster-03 updated with the remote site info.
Replication has started
On the Remote tab, you can see the data completed along with start times, and it shows as outgoing.
Now, if I log into my “remote site” NTX-Cluster-03
I should see incoming replication and some stats. You can see it listing NTX-Cluster-01 as the remote site because we are logged into NTX-Cluster-03.
Now it’s done:
I would like to migrate the server from NTX-Cluster-01 to NTX-Cluster-03.
Go to the source Prism Element, click on the Async DR tab, then click the Migrate option.
Now click on Migrate.
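If you ever need to script this instead of clicking through Prism, the same migrate action is exposed over the Prism REST API on port 9440. A hedged sketch, building only the request URL: the v2.0 endpoint path below is my assumption, so confirm it in Prism's built-in REST API Explorer before relying on it.

```python
# Hedged sketch: kicking off a protection domain migrate via the Prism
# REST API rather than the UI. The endpoint path is an assumption --
# verify it against the API Explorer for your AOS version.
PRISM_HOST = "ntx-cluster-01.lab.local"  # placeholder Prism address

def migrate_url(pd_name: str, host: str = PRISM_HOST) -> str:
    """Build the (assumed) v2.0 migrate endpoint for a protection domain."""
    return (f"https://{host}:9440/PrismGateway/services/rest/v2.0"
            f"/protection_domains/{pd_name}/migrate")

# You would then POST to it with your Prism credentials, e.g.:
# requests.post(migrate_url("PD-Lab-VMs"), auth=("admin", "<password>"),
#               verify=False)
```

The UI route in this blog does the same thing; the API is just handy if you are migrating many protection domains during a planned cutover.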
As you can see, it's gone from the prod location.
Check the Remote location or DR NTX-Cluster-03, and you will see it now.
Now let's power it on.
Up and online
Now, in this case, I am going to migrate it back.
Now it will snapshot the VM and send it back to NTX-Cluster-01.
Now let’s check NTX-Cluster-01 and it’s back.
In the next example, I need to move most of my lab VMs from NTX-Cluster-01 to NTX-Cluster-03 to free up resources on a host.
I selected more this time.
This time around, I will let the schedule do what it needs to do. This will give you an idea of how it works.
The schedule starts at 5 PM, and it's now 4:53 PM.
At 5 PM, we will see a local snapshot start, and then it will start replicating to NTX-Cluster-03.
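To make the timing concrete, here is a small sketch of how a daily schedule resolves to its next run, mirroring the 4:53 PM example above (the schedule logic here is my illustration, not Nutanix's internal scheduler):

```python
from datetime import datetime, timedelta

def next_run(now: datetime, hour: int = 17, minute: int = 0) -> datetime:
    """Next occurrence of a daily schedule firing at hour:minute."""
    candidate = now.replace(hour=hour, minute=minute,
                            second=0, microsecond=0)
    if candidate <= now:
        # Today's slot has already passed; fire tomorrow instead.
        candidate += timedelta(days=1)
    return candidate

# At 4:53 PM, the next daily 5 PM snapshot is seven minutes away.
now = datetime(2022, 1, 10, 16, 53)
assert next_run(now) == datetime(2022, 1, 10, 17, 0)
```

The snapshot is taken locally first, and only then does the delta ship to the remote site, which is exactly what you see in the Prism replication stats.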
Now to fail them over.
Click on Async DR, then the protection domain name, then Entities.
They’re gone and now on NTX-Cluster-03.
I am powering them up now.
They are all online.
I hope you find this helpful if you want to play around with protection domains. As I stated above, this is in my lab, but I have used this in a production environment to move workloads around. I mainly used it to move resources from one cluster to another in preparation for a data center migration involving a CVAD setup. It worked like a charm and saved me big time where it was needed.