VMware vSAN is a hyper-converged, software-defined storage solution with a vSphere-native, high-performance architecture. Software-defined storage has opened up opportunities for new skills. With vSAN we can use local disks to create highly available, scalable, high-performance datastores for the vSphere infrastructure. The benefit of using local disks is their low latency, and vSAN replicates the data across the disks of other hosts.
Below are the system requirements for VMware vSAN:
• 1 Gb NIC; 10 Gb NIC recommended
• SATA/SAS HBA or RAID controller
• At least one flash caching device and one persistent storage disk (flash or HDD) for each capacity-contributing node
• Min. 2 hosts – Max. 64 hosts
VMware HCL guide
• Certified vSAN Ready server listed on the VMware Hardware Compatibility List
• VMware vSphere 6.5 EP02 (any edition)
• VMware vSphere with Operations Management 6.1 (any edition)
• VMware vCloud Suite 6.0 (any edition updated with 6.5)
• VMware vCenter Server 6.5
I have tested vSAN on VMware Workstation; you can follow my earlier article to emulate an HDD as an SSD flash disk on ESXi and VMware Workstation.
I am using the default options to build vSAN for this demo. A 2- or 3-host cluster configuration has a limitation: it can tolerate only one host failure, with the cluster setting "host failures to tolerate" set to 1. vSAN keeps each of the two required copies of VM data on distinct ESXi hosts, and the witness object on a third host. Because there is such a low number of ESXi servers in the cluster, you might see the following limitations. When an ESXi host goes down, vSAN cannot rebuild virtual machine data on another ESXi server to safeguard against another failure. Likewise, if an ESXi host is put into maintenance mode, vSAN cannot reprotect the evacuated data, so the data is exposed to a potential failure while the host is in maintenance mode. For production, make sure you configure proper redundancy and choose appropriate vSAN options.
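The host-count rule above can be expressed as simple arithmetic. This is my own illustrative sketch (not a VMware API): with RAID-1 mirroring, an object needs FTT+1 full data replicas plus FTT witness components, each on a distinct host.

```python
# Illustration only, not VMware tooling: minimum hosts for a vSAN
# RAID-1 mirrored object at a given "failures to tolerate" (FTT).
def min_hosts_for_ftt(ftt: int) -> int:
    replicas = ftt + 1           # full copies of the VM data
    witnesses = ftt              # witness objects act as tie-breakers
    return replicas + witnesses  # each component on a distinct host

print(min_hosts_for_ftt(1))  # 3 hosts: 2 data copies + 1 witness
```

This is why a 2- or 3-host cluster can only ever run with FTT=1, and why there is no spare host left to rebuild onto after a failure.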
In my environment I am running vCenter Server 6.5 and three ESXi servers, version 6.5. Each ESXi host has 2 x 10 GB disks for vSAN: one is an SSD for the cache tier and the other is an HDD for the capacity tier. For production, make sure you use hardware or ESXi hosts listed on the VMware HCL; there are also certified vSAN Ready Nodes available. Before starting the configuration, one prerequisite is to disable vSphere HA (High Availability). Select the cluster, and on the Configure tab on the right, expand Services, select vSphere Availability, and click Edit. On the vSphere Availability page, uncheck the Turn on vSphere HA box.
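As a rough sanity check for this lab layout, here is a back-of-the-envelope usable-capacity estimate (my own illustration, assuming RAID-1 mirroring with FTT=1 and ignoring dedup/compression and metadata overhead, which reduce the real number):

```python
# Assumption: 3 hosts, one 10 GB capacity disk each, FTT=1 mirroring.
# Cache-tier SSDs do not add to datastore capacity.
hosts = 3
capacity_disk_gb = 10
ftt = 1

raw_gb = hosts * capacity_disk_gb     # capacity tier pooled together
usable_gb = raw_gb / (ftt + 1)        # every object is stored ftt+1 times

print(raw_gb, usable_gb)  # 30 GB raw, ~15 GB effectively usable
```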
Another prerequisite: I have configured a new VMkernel adapter port with the vSAN service enabled on the virtual switch. Make sure you are using at least a 10 Gb network adapter for vSAN network traffic.
Below Services there is a vSAN option; expand it, and on the General view you will find vSAN is Turned OFF. Press the Configure button.
This opens the Configure vSAN wizard. Under vSAN capabilities there are several services and options; select them according to how you want your vSAN cluster to work.
Enabling or disabling deduplication and compression requires a rolling reformat of all disks in the vSAN cluster. Depending on the amount of data stored, this might take a long time. vSAN alters the disk layout on each disk group of the cluster: to apply the change, vSAN evacuates data from the disk group, removes the disk group, and recreates it with a new layout format that supports deduplication and compression.
vSAN encryption requires an external Key Management Server (KMS), the vCenter Server, and the ESXi hosts. vCenter Server requests encryption keys from the external KMS. The KMS generates and stores the keys, and vCenter Server obtains the key IDs from the KMS and distributes them to the ESXi hosts.
A fault domain typically refers to a group of hardware devices that would be impacted together by an outage. I am using a normal cluster without fault domains or a stretched cluster. Click Next to proceed.
On the Network validation page, check the existing vSAN network settings on all the hosts in the cluster. Make sure you have created and configured one VMkernel adapter port group on each ESXi host with the vSAN option enabled. I have already configured this; the wizard validates the settings, and everything is green here.
On the Claim disks page, select the disks to contribute to the vSAN datastore: choose which disks should be claimed for cache and which for capacity in the vSAN cluster. The disks are grouped by model and size or by host. The recommended selection has been made based on the available devices in your environment. The number of capacity disks must be greater than or equal to the number of cache disks claimed per host.
Here you can make a disk appear as flash or HDD, as shown by the icon. Next, select the cache tier and capacity tier from the drop-down boxes.
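The wizard's claim rule can be captured in a tiny check. This is a hypothetical helper of my own (the wizard enforces this itself), just to make the per-host rule explicit:

```python
# Hypothetical illustration of the wizard's per-host claim rule:
# at least one cache device, and capacity disks >= cache disks.
def claim_is_valid(cache_disks: int, capacity_disks: int) -> bool:
    return capacity_disks >= cache_disks >= 1

print(claim_is_valid(1, 1))  # True: one SSD for cache, one HDD for capacity
print(claim_is_valid(2, 1))  # False: more cache than capacity disks
```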
How many disks can a single ESXi server add to vSAN?
Maximum of 5 disk groups
Each disk group needs at least 1 SSD and 1 HDD, and at most 7 HDDs
Maximum HDD count per ESXi host = 5 x 7 = 35
Maximum SSD count per ESXi host = 5 x 1 = 5
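The per-host maximums above, written out as a quick calculation (illustration only):

```python
# Per-host vSAN disk maximums from the limits stated above.
disk_groups_max = 5
capacity_disks_per_group_max = 7   # HDD/capacity devices per disk group
cache_disks_per_group = 1          # exactly one cache SSD per disk group

hdd_max = disk_groups_max * capacity_disks_per_group_max
ssd_max = disk_groups_max * cache_disks_per_group

print(hdd_max, ssd_max)  # 35 capacity disks, 5 cache disks
```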
On the Ready to complete page, review your settings before finishing the wizard. All the configured settings are listed here; click Finish.
Next, check your Recent Tasks: the Update vSAN configuration task starts based on the provided information.
Check the configuration on Disk Management under vSAN. Each ESXi server is listed, and the disks are mounted under disk groups.
Verify the datastores: there is a single vsanDatastore listed. It is ready to receive VMs.
Verify Configuration Assist under vSAN. For me it shows errors because I am not using certified hardware, so warnings and errors are expected.
Emulate HDD as SSD flash disk on Esxi and VMware workstation
PART 1 : INSTALLING ESXI ON VMWARE WORKSTATION HOME LAB
PART 2 : CONFIGURING ESXI ON VMWARE WORKSTATION HOME LAB
PART 1 : BUILDING AND BUYING GUIDE IDEAS FOR VMWARE LAB
POWERCLI - CREATE DATACENTER AND ADD ESXI HOST IN VCENTER
This post first appeared on Tales From Real IT System Administrators World And Non-production Environment; please read the original post here