NutanixCE – Getting Started

Nutanix CE has been out there for some time now, and I've decided it's time to give it a try. It took me a while to go through the documentation and some blog posts before I was ready to deploy it in my lab. Understanding how Nutanix works might be confusing to some extent, but I've found that it's actually very well documented, and some reading won't harm. A good starting point is the official Nutanix CE documentation.

To obtain the installation files you will need a Nutanix CE registration. Before moving on, I wanted to highlight a few components of the Community Edition.

  • Prism – the monitoring and management console; it manages a single cluster
  • Prism Central – manages multiple clusters
  • Nutanix on ESXi – at this point, the Community Edition can use only Acropolis as the underlying hypervisor. If you want to utilize your ESXi hosts you will need Nutanix Foundation, which is currently available only to Partners, Customers, and Nutanix employees.

Get your environment ready

That being said, two steps are required to install Nutanix CE in your vSphere environment and get adequate performance for testing. If you went through the documentation, you already know that Nutanix leverages auto-tiering and uses three tiers of storage:

  • Hot – In Memory
  • Warm – Solid State Drives
  • Cold – Spinning Drives

Well, in this case flash storage will need to be either physically present, or you will have to “make” some; of course, marking a device as SSD works only for block storage or DAS (Direct-Attached Storage). KB2013188 describes in great detail how to enable the SSD option on a disk/LUN. Follow the instructions and make all the necessary changes; just bear in mind that this operation requires an ESXi reboot.
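The procedure from KB2013188 boils down to adding a SATP claim rule that tags the device as SSD. A minimal sketch, run from the ESXi shell (the device identifier `naa.xxxxxxxx` is a placeholder for your own disk):

```
# Add a PSA claim rule that tags the local device as SSD
# (naa.xxxxxxxx is a placeholder -- substitute your own device identifier)
esxcli storage nmp satp rule add --satp=VMW_SATP_LOCAL \
    --device=naa.xxxxxxxx --option="enable_ssd"

# Reclaim the device so the rule takes effect (a host reboot also works)
esxcli storage core claiming reclaim -d naa.xxxxxxxx

# Verify -- the device should now report "Is SSD: true"
esxcli storage core device list -d naa.xxxxxxxx | grep -i ssd
```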

The next step is to install the ESXi Mac Learning dvFilter. This package helps improve network and CPU performance by providing a MAC-learning mechanism for your nested environment.

….applications like running nested ESX, i.e. ESX as a guest-VM on ESX, the situation is different. As an ESX VM may emit packets for a multitude of different MAC addresses, it currently requires the vswitch port to be put in “promiscuous mode”. That however will lead to too many packets delivered into the ESX VM, as it leads to all packets on the vswitch being seen by all ESX VMs. When running several ESX VMs, this can lead to very significant CPU overhead and noticeable degradation in network throughput.
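Installing the fling comes down to a single esxcli command per host. A sketch, assuming you have uploaded the VIB to the host (the path and file name are assumptions; use the bundle you downloaded from the fling page):

```
# Install the MAC-learning dvFilter VIB
# (file name is an assumption -- use the one from the fling download)
esxcli software vib install -v /tmp/esx-dvfilter-maclearn-1.00.vib --no-sig-check

# Verify that the filter module is loaded
/sbin/summarize-dvfilter | grep -i maclearn
```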

Once the VIB is installed on your ESXi hosts, we can start with

Creating your VMs

Although a cluster with a single node can be created, I personally prefer to stick to at least the minimum configuration. To create a Nutanix cluster that can tolerate a single node failure, a minimum of three nodes is needed; five nodes are required to tolerate two failures, as far as I remember. In our case we will create a three-node cluster with the following configuration:

| VM Attribute                | Value                   | Comment                       |
|-----------------------------|-------------------------|-------------------------------|
| OS Type                     | CentOS 4/5/6/7 (64-bit) |                               |
| Storage Controller          | PVSCSI                  |                               |
| Disk (0:0)                  | Boot image              | Image downloaded from Nutanix |
| Disk (0:1)                  | 300 GB                  | SSD                           |
| Disk (0:2)                  | 600 GB                  |                               |
| Network Controller 0        | Intel E1000             |                               |
| Network Controller 1        | Intel E1000             |                               |
| ethernet0.filter4.name      | dvfilter-maclearn       | Advanced VM Options           |
| ethernet0.filter4.onFailure | failOpen                | Advanced VM Options           |
| ethernet1.filter4.name      | dvfilter-maclearn       | Advanced VM Options           |
| ethernet1.filter4.onFailure | failOpen                | Advanced VM Options           |
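The four dvfilter entries can be set via Advanced VM Options in the vSphere client, or added directly to the VM's .vmx file while the VM is powered off. A sketch of the resulting lines:

```
ethernet0.filter4.name = "dvfilter-maclearn"
ethernet0.filter4.onFailure = "failOpen"
ethernet1.filter4.name = "dvfilter-maclearn"
ethernet1.filter4.onFailure = "failOpen"
```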

When you have one VM ready you can simply clone it. There is no need to repeat this operation for each VM.

As a final word, I'd say the overall preparation is more time consuming than the actual deployment. However, in the next article I'll go through the installation and basic configuration of a Nutanix cluster.

