Configuring SAP HANA Scale Out on VMware with vVols

Virtual Volumes (vVols) have changed the way organizations think about how virtual machine storage operates. With vVols, a layer of complexity is removed, allowing the virtual infrastructure administrator to deliver more precise storage service levels and exercise more direct control over how storage is consumed by the virtual infrastructure. Cody Hosterman blogged about the introduction of vSphere Virtual Volumes when support was initially released for FlashArray, highlighting the benefits and how it all works.

SAP HANA scale up storage deployments are pretty simple on VMware vSphere. Simply put, all that is needed are separate data and log volume disks attached to separate paravirtual SCSI adapters, formatted with a filesystem and mounted. Scale out deployments require a bit more work and can be done on both VMFS and vVols. The intent of this post is to show you exactly how to do that.

SAP Note 2652670 (SAP Support Login required) details the support for SAP HANA on VMware vSphere.  Also, the SAP Community Wiki provides a good overview of SAP HANA on VMware vSphere. 

Pre-requisites 

  • Virtual Volumes are to be implemented following this guide 
  • Virtual machines (4 for the 3+1 example used in this post) with SUSE Linux Enterprise Server for SAP Applications or Red Hat Enterprise Linux installed, running the correct configuration for an SAP HANA scale out installation.
    • Ensure device-mapper-multipath is installed, enabled and running.
  • An NFS server with an NFS export that is available and mounted on each of the virtual machines’ operating systems (an example fstab entry is shown below).
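For illustration, a hypothetical /etc/fstab entry for the shared NFS mount; the server name, export path and mount point /hana/shared are assumptions for this example:

# Shared HANA filesystem served over NFS (hostname and export are hypothetical)
nfsserver:/hana_shared   /hana/shared   nfs   defaults   0 0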

Step 1. Add 2 additional VMware Paravirtual SCSI adapters to each virtual machine. Ensure that “SCSI Bus Sharing” is set to “Physical” or “Virtual”. If set to “Virtual”, the virtual disks can be shared by virtual machines on the same ESXi host; “Physical” allows the disks to be shared by virtual machines on any host, as long as all hosts are connected to the same datastore/storage provider. The resulting controller entries are sketched in the .vmx excerpt below.
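For reference, a minimal sketch of what the two added controllers might look like in a virtual machine’s .vmx file, assuming they take bus numbers 1 and 2 (the actual numbers depend on how many controllers the VM already has). Use "virtual" instead of "physical" for sharedBus if same-host sharing is sufficient:

scsi1.present = "TRUE"
scsi1.virtualDev = "pvscsi"
scsi1.sharedBus = "physical"
scsi2.present = "TRUE"
scsi2.virtualDev = "pvscsi"
scsi2.sharedBus = "physical"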

Step 2. Add the required virtual disks to any one of the virtual machines to be used in the scale out cluster (all of the virtual machines must be powered down). Ensure that the virtual disks’ properties are set as follows:

  • Sharing: Multi-Writer
  • Disk Mode: Independent - Persistent
  • Virtual Device Node: Use any one of the SCSI controllers added in Step 1, as long as the data and log volume(s) are attached to different SCSI controllers.
    • The first SCSI controller should have all of the data volumes and the second controller should have all of the volumes intended to be used for logging. For example, in a 3+1 scale out implementation, there should be 6 disks in total (3 data and 3 log) accessed by 4 virtual machines.
  • If the VM has used Changed Block Tracking (CBT) before, a configuration option is set for it which will block Multi-Writer disks from being used. To disable CBT, follow this guide. The per-disk settings, including the CBT flag, are sketched in the .vmx excerpt below.
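As a hedged illustration, the corresponding per-disk .vmx entries might look like the following for the first data and log disks. The controller and unit numbers and the .vmdk file names are assumptions for this example; the last line per disk is the setting that disables CBT for that disk:

scsi1:0.present = "TRUE"
scsi1:0.fileName = "hana-vm1_1.vmdk"
scsi1:0.sharing = "multi-writer"
scsi1:0.mode = "independent-persistent"
scsi1:0.ctkEnabled = "FALSE"
scsi2:0.present = "TRUE"
scsi2:0.fileName = "hana-vm1_2.vmdk"
scsi2:0.sharing = "multi-writer"
scsi2:0.mode = "independent-persistent"
scsi2:0.ctkEnabled = "FALSE"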

Step 3. Add the existing disks created in Step 2 to each subsequent virtual machine (while powered off) by selecting “Add New Device”  and adding an “Existing Hard Disk”.

The hard disks to be added to each virtual machine in the 3+1 configuration scenario will likely be named “vm_name_from_step_2_number.vmdk”. These disks do not need to be stored with the virtual machine; however, it is recommended that a single virtual machine be nominated as the “owner” for traceability purposes. Ensure that when adding an existing disk, all of the relevant properties are set as laid out in Step 2.
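On the other virtual machines, the equivalent .vmx entries then reference the owner VM’s .vmdk files by path. A minimal sketch, assuming a hypothetical vVol datastore name and owner VM folder:

scsi1:0.present = "TRUE"
scsi1:0.fileName = "/vmfs/volumes/vVol-Datastore/hana-vm1/hana-vm1_1.vmdk"
scsi1:0.sharing = "multi-writer"
scsi1:0.mode = "independent-persistent"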

Step 4. Once all of the disks have been added to each virtual machine, power the virtual machines back on. After the power-on process is complete, the disks will be visible to each guest operating system.

Step 5. By default, device-mapper-multipath (multipathd) will ignore any virtual disks with the VMware vendor string, so the relevant disks must be explicitly configured to force the operating system to consider them for multipathing. The following steps are to be followed:

Using lsblk, identify the device names of the volumes that have been added.

  • For each device to be used in the SAP HANA installation, run:

udevadm info --query=all --name=/dev/<device> | grep ID_SERIAL

 

  • Record the value for “ID_SERIAL” and then create the multipath.conf file, ensuring that each device to be used has its value entered as a “wwid” in the multipaths section (a scripted sketch of these steps is shown below, after the configuration):
defaults {
	user_friendly_names no
}
blacklist {
}
multipaths {
  multipath {
	wwid  36000c29dbe4185a0bbdedc9b922747c4
  }
  multipath {
	wwid  36000c2911f4ad967aa58336edcca4445
  }
  multipath {
	wwid  36000c296e9e0bfd3231ec5a7210f7a84
  }
  multipath {
	wwid  36000c29d6460812668557f320071f09d
  }
  multipath {
	wwid  36000c2983398315d62c06fa480b3fa36
  }
  multipath {
	wwid  36000c29bc7582f9acdabaa7e0e6141cd
  }
}
  • Start and enable the multipath daemon using systemctl enable multipathd && systemctl start multipathd
    • The devices should now show up in the multipath listing (multipath -ll).

Copy the multipath.conf file to each virtual machine to be used in the SAP HANA scale out installation, then start and enable the multipath daemon on each of them.
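As a hedged sketch of this step, the following shell snippet prints the ID_SERIAL value of each candidate device (the device names sdb through sdg are assumptions; adjust to the lsblk output), then enables multipathd, verifies the result and distributes the finished configuration to the other nodes (hostnames are hypothetical):

# Print the ID_SERIAL (wwid) of each candidate device
for dev in sdb sdc sdd sde sdf sdg; do
    echo -n "$dev: "
    udevadm info --query=all --name=/dev/$dev | grep '^E: ID_SERIAL=' | cut -d= -f2
done

# Enable and start the multipath daemon, then confirm the devices appear in the listing
systemctl enable multipathd && systemctl start multipathd
multipath -ll

# Copy the finished configuration to the other scale out nodes
for host in hana-vm2 hana-vm3 hana-vm4; do
    scp /etc/multipath.conf $host:/etc/multipath.conf
done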

Step 6. Format the multipath devices with the XFS filesystem: mkfs.xfs /dev/mapper/<device_wwid>
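For example, a short loop that formats all six devices using the wwids from the multipath.conf above (substitute your own values):

# Format each multipath device with XFS; with user_friendly_names set to no,
# the /dev/mapper names are the wwids themselves
for wwid in 36000c29dbe4185a0bbdedc9b922747c4 36000c2911f4ad967aa58336edcca4445 \
            36000c296e9e0bfd3231ec5a7210f7a84 36000c29d6460812668557f320071f09d \
            36000c2983398315d62c06fa480b3fa36 36000c29bc7582f9acdabaa7e0e6141cd; do
    mkfs.xfs /dev/mapper/$wwid
done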

Step 7. Place the values in the global.ini file to be used during installation, using the ha_provider hdb_ha.fcClient with a persistent reservation type of 5:

[communication]
listeninterface=.global
[persistence]
basepath_datavolumes=/hana/data/RH1
basepath_logvolumes=/hana/log/RH1
use_mountpoints=yes
basepath_shared=yes
[storage]
ha_provider=hdb_ha.fcClient
partition_*_*__prType=5
partition_1_data__wwid=36000c29dbe4185a0bbdedc9b922747c4
partition_1_log__wwid=36000c29d6460812668557f320071f09d
partition_2_data__wwid=36000c2911f4ad967aa58336edcca4445
partition_2_log__wwid=36000c2983398315d62c06fa480b3fa36
partition_3_data__wwid=36000c296e9e0bfd3231ec5a7210f7a84
partition_3_log__wwid=36000c29bc7582f9acdabaa7e0e6141cd
[trace]
ha_fcclient=info

At this point, the Purity GUI shows that all of the virtual machines on the vVol datastore have their own volume group.

Looking at the volume group of the virtual machine on which the virtual disks were originally created, each of those disks is now shown. It’s important to note that even though the disks are attached to other virtual machines, they will only show up in the volume group of the virtual machine they were created on.

After all of these steps have been completed, the SAP HANA installation can be performed using the normal process with HDBLCM, HDBLCMGUI, Software Provisioning Manager, or HDBINST.
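As a rough illustration only, a scale out installation with hdblcm might be started along the lines below. The hostnames, media path, instance number and the --addhosts/--storage_cfg parameters are assumptions to be verified against the hdblcm documentation for your HANA revision; the SID RH1 matches the global.ini above:

# Hypothetical hdblcm invocation for a 3+1 scale out installation
cd /hana/shared/install/DATA_UNITS/HDB_LCM_LINUX_X86_64
./hdblcm \
    --sid=RH1 \
    --number=00 \
    --addhosts=hana-vm2:role=worker,hana-vm3:role=worker,hana-vm4:role=standby \
    --storage_cfg=/hana/shared/storage_cfg

Here --storage_cfg would point at a directory containing the global.ini shown in Step 7.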

I found the process very easy, and it also allowed me to utilise vSphere technologies such as DRS and vSphere HA, giving a second layer of availability and resource management should it be needed.

Further Information

Further information on the SAP HANA Storage requirements for TDI can be found here.

Further information on the Multi-Writer Flag for VMware Virtual machines can be found here.

Best practices for virtualized SAP HANA on Intel Skylake-based server host systems