Hitachi HDLM Multipathing for vSphere Latest Docs

HDLM

HDLM stands for Hitachi Dynamic Link Manager, our multipathing software for operating systems such as Windows and Linux, and also for vSphere. On vSphere you install it as a VIB and use it to optimise storage performance in Hitachi storage environments.

I came across these documents yesterday as part of something I was working on. I thought I’d share them here, as people sometimes tell me it can be hard to find material related to HDS.

For those customers and partners using HDLM, below is the latest HDLM for vSphere documentation from October 2014. First, the User Guide (Revision 9).

[ddownload id=”1427″]

 

Then the Release Notes, also from October 2014 (Revision 10).

[ddownload id=”1430″]

 

Regarding the default PSPs for different array types:

  • Active-Active: Fixed (although Round Robin can be used in many cases)
  • Active-Passive: MRU
  • ALUA: MRU (although some arrays need to use Fixed)

For more info start at this KB article:

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1011340
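As a quick illustrative sketch of those defaults (this is just the mapping above written out in Python, not VMware or Hitachi tooling; always check the support matrices for your specific array model and firmware):

```python
# Sketch of the default NMP PSP per array type described above.
# Illustration only, not a support matrix.

DEFAULT_PSP = {
    "active-active": "VMW_PSP_FIXED",   # Round Robin is often acceptable too
    "active-passive": "VMW_PSP_MRU",
    "alua": "VMW_PSP_MRU",              # some ALUA arrays need Fixed instead
}

def default_psp(array_type: str) -> str:
    """Return the typical default Path Selection Policy for an array type."""
    return DEFAULT_PSP[array_type.lower()]

if __name__ == "__main__":
    for t in ("Active-Active", "Active-Passive", "ALUA"):
        print(f"{t:<15} -> {default_psp(t)}")
```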

If you want more balanced performance and better awareness of path failures/failovers for Hitachi arrays, you can use HDLM. Please consult the user guide, but to summarise, the main benefits are:

  • Load balancing (not based on thresholds such as hitting 1000 IOPS)
  • Path failure detection and enhanced path failover/failback
  • Enhanced path selection policies
    • Extended Round Robin
    • Extended Least I/Os
    • Extended Least Blocks

Extended Least I/Os is the default.
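I’m not reproducing HDLM’s internal algorithm here (the user guide is the authority on that), but the general idea behind a least-I/Os style policy can be sketched roughly as follows: each new I/O goes down the active path that currently has the fewest outstanding I/Os. The path names below are purely illustrative.

```python
# Rough sketch of a "least I/Os" style path selection policy: send each new
# I/O down the active path that currently has the fewest outstanding I/Os.
# This illustrates the general idea only, not HDLM's actual implementation.

from dataclasses import dataclass

@dataclass
class Path:
    name: str
    outstanding_ios: int = 0   # I/Os issued on this path but not yet completed

class LeastIOsSelector:
    def __init__(self, paths):
        self.paths = list(paths)

    def issue_io(self) -> Path:
        # Pick the path with the fewest outstanding I/Os right now.
        path = min(self.paths, key=lambda p: p.outstanding_ios)
        path.outstanding_ios += 1
        return path

    def complete_io(self, path: Path) -> None:
        path.outstanding_ios -= 1

if __name__ == "__main__":
    selector = LeastIOsSelector([Path("vmhba1:C0:T0:L10"), Path("vmhba2:C0:T0:L10")])
    for _ in range(6):
        p = selector.issue_io()
        print(f"I/O issued on {p.name} (outstanding now {p.outstanding_ios})")
```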

Why would you use it: a quick recap

If you are using Hitachi storage and are entitled to use HDLM, then you should probably use it to get optimal performance and multipathing between host and target. I would typically suggest that for Production at the very least; you can make a call on whether to use it for Dev/Test and so on.

The main benefit is more optimal, simultaneous use of the available paths. I’d equate it to the way DRS smooths out performance imbalances across a cluster: HDLM does the same thing with the IO directed at a datastore/LUN, in real time, based on its enhanced path selection policies.

If you don’t use HDLM (or EMC PowerPath), you can use the native ESXi multipathing (NMP), and in most cases it will be more than sufficient for your needs. There is a common misunderstanding about the native NMP Round Robin (RR) Path Selection Policy (PSP).

To clarify:

  • For a given datastore, only a single path will be used at any one point in time.
    • This remains the case until a given amount of data has been transferred. Consider this the threshold.
  • It is a bit like using LACP on a vSphere host for NFS: you cannot split an Ethernet frame across two uplinks and reassemble it at the other end, so a single frame is never transmitted across multiple uplinks simultaneously. The same principle applies here for storage.
  • The switching threshold can be based on either IOPS or bytes. The default is 1000 IOPS.
    • I suggest leaving this alone. Read on.
  • When the threshold set for the path is reached, the host switches to the next path.
  • The host will not use the previous path for that datastore again until it comes back around in the rotation.

From experience, it is entirely acceptable for a particular datastore to be accessed using one path at a time.
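To make the switching behaviour described above concrete, here is a small model of an IOPS-based Round Robin threshold: one path at a time, with a move to the next path only after the configured number of I/Os (1000 by default) has gone down the current one. This is a simplification for illustration, not the NMP source.

```python
# Simplified model of the NMP Round Robin PSP with an IOPS-based switching
# threshold: a datastore uses one path at a time and only moves to the next
# path after 'iops_limit' I/Os have been sent down the current one.
# Illustration only; the real PSP also supports a bytes-based threshold.

class RoundRobinPSP:
    def __init__(self, paths, iops_limit=1000):
        self.paths = list(paths)
        self.iops_limit = iops_limit
        self.current = 0          # index of the path in use right now
        self.ios_on_current = 0   # I/Os issued on that path since switching to it

    def next_io_path(self) -> str:
        if self.ios_on_current >= self.iops_limit:
            # Threshold reached: switch to the next path and reset the counter.
            self.current = (self.current + 1) % len(self.paths)
            self.ios_on_current = 0
        self.ios_on_current += 1
        return self.paths[self.current]

if __name__ == "__main__":
    psp = RoundRobinPSP(["vmhba1:C0:T0:L1", "vmhba2:C0:T0:L1"], iops_limit=1000)
    counts = {}
    for _ in range(5000):
        path = psp.next_io_path()
        counts[path] = counts.get(path, 0) + 1
    # Paths are used in alternating blocks of 1000 I/Os, never simultaneously.
    print(counts)
```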

I have seen extremely high IO driven from a single VM with a very large block size, where the infamous IOPS=1 setting is not enough to get the IO to the required level. At that point you need to look at queue depth and the number of outstanding IOs (DSNRO).

Once you start playing with these parameters the impact can be system- or cluster-wide, which is why it’s probably not a good idea.
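As a rough illustration of why those knobs interact: the number of I/Os a device can have outstanding is bounded by the HBA/device queue depth and, once more than one VM is actively driving IO to the datastore, by DSNRO as well. The numbers below are hypothetical examples, not recommendations.

```python
# Rough illustration of how device queue depth and DSNRO interact.
# Simplification: with a single busy VM the device queue depth is the ceiling;
# with several busy VMs on the same datastore, DSNRO also applies, so the
# effective ceiling is the smaller of the two values.

def effective_outstanding_ios(device_queue_depth: int,
                              dsnro: int,
                              active_vms_on_datastore: int) -> int:
    """Approximate ceiling on outstanding I/Os for a single device."""
    if active_vms_on_datastore <= 1:
        return device_queue_depth
    return min(device_queue_depth, dsnro)

if __name__ == "__main__":
    # Hypothetical values only; check your HBA driver defaults before changing anything.
    print(effective_outstanding_ios(device_queue_depth=64, dsnro=32,
                                    active_vms_on_datastore=1))   # 64
    print(effective_outstanding_ios(device_queue_depth=64, dsnro=32,
                                    active_vms_on_datastore=4))   # 32
```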

Queuing in vSphere

Also consider the different queues involved in the IO path:

[Diagram: the queues involved in the vSphere storage IO path]
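That picture is usually drawn as a stack of queues. As a very rough sketch of the idea (the layers listed below are my assumption of what such a diagram shows, and the depth values are purely hypothetical examples):

```python
# Very rough sketch of the layers of queues an I/O passes through on its way
# from a VM down to the array. The depth values are hypothetical examples;
# real values depend on the guest OS, HBA driver, ESXi settings and the array.

io_path_queues = {
    "guest OS / virtual SCSI adapter queue": 32,
    "VMkernel per-device queue (DSNRO)": 32,
    "HBA / device driver queue depth": 64,
    "array front-end port queue": 2048,
}

# An I/O is only in flight if every layer it crosses has a free slot, so the
# tightest layer tends to set the ceiling for a single device.
bottleneck = min(io_path_queues, key=io_path_queues.get)
print(f"Tightest layer in this example: {bottleneck} "
      f"({io_path_queues[bottleneck]} slots)")
```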

Consider the job the hypervisor is doing for a second. It is co-ordinating IO from multiple VMs, living on different datastores, across multiple hosts. Think of the number of moving parts here.

Imagine a cluster with

  • 20 hosts
  • 2 HBAs per host
  • 180 datastores
    • That’s a total of 360 active paths (2 HBAs × 180 datastores) that each host is managing, with multiple queues on top and underneath.

That’s not trivial, so before you let the handbrake off, take the time to consider the decision you’re making to ensure it will not have an unforeseen impact somewhere down the line.
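The back-of-the-envelope numbers in that example can be checked quickly (assuming every datastore is presented to every host, which the example implies):

```python
# Back-of-the-envelope check of the example cluster above. The host, HBA and
# datastore counts come from the post; the rest is simple arithmetic, and it
# assumes every datastore is presented to every host.

hosts = 20
hbas_per_host = 2
datastores = 180

paths_per_host = hbas_per_host * datastores      # 360
paths_in_cluster = hosts * paths_per_host        # 7200

print(f"Active paths managed per host:   {paths_per_host}")
print(f"Active paths across the cluster: {paths_in_cluster}")
```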
