All the blogs in the VVol Series
Part 1: Introduction
Part 2: With and Without VVols
Part 3: New Storage Constructs
Part 4: How are VVols instantiated
Part 5: Storage Containers & Capability Profiles
Part 6: The Mythical Protocol Endpoint
It’s been waaaaaaayyyyyyyyy too long since my last VVol post – it was September 2015! Much has happened since then in the world of VVols.
DISCLAIMER AND GUIDANCE:
Just a reminder that this series is not a Koolaid exercise about the HDS VVol implementation. I try to describe (in some cases) how HDS implements something and where it might be different (and maybe better), but in general this is about sharing knowledge with you, the reader, to help you plan for VVols. Other vendors (like HP) use a controller-based VASA provider implementation, for example. I see benefits in that, but I also see benefits in our implementation, and hey, I work for HDS, so go figure!
Storage Containers (SC)
Previously, I told you that in the next post I would describe Storage Containers and Capability Profiles in more detail. Really, where we want to get to is the consumption process, i.e. how VMs get the right storage for their needs.
In the second post, With and Without VVols, I included this diagram and talked a bit about SCs and capability profiles.
Storage Containers shift the administrative unit of capacity on the array from a LUN to a pool of capacity.
Straight away this provides some benefits:
- The only capacity boundary that matters from a monitoring perspective is at the storage container level.
  - This is partly true 🙂 The HDS instantiation provides another level of granularity by supporting multiple pools inside an SC; VVol objects live within these pools.
  - If you have a single pool inside a single storage container, then the statement above holds – monitor the storage container for space, and that’s all you need to worry about.
- To increase capacity you add parity groups / RAID groups to pools. Capacity is automatically surfaced to vCenter via the VASA provider.
- Within the HDS implementation, the operational problem of snapshots filling up datastores is removed by setting a snapshot pool at the storage container level. Size your assigned snapshot pool large enough and all snapshots will reside there.
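To make the layout above concrete, here is a toy model of a Storage Container holding multiple DP (thin) pools plus a dedicated snapshot pool. This is purely illustrative – the class and pool names are hypothetical, not an HDS API – but it shows why the container is the capacity boundary you monitor:

```python
# Toy model of the Storage Container layout described above.
# All class names and pool names are hypothetical, not an HDS API.

class Pool:
    def __init__(self, name, capacity_gb, used_gb=0):
        self.name = name
        self.capacity_gb = capacity_gb
        self.used_gb = used_gb

    @property
    def free_gb(self):
        return self.capacity_gb - self.used_gb


class StorageContainer:
    """One SC may hold several DP (thin) pools plus a snapshot pool."""
    def __init__(self, name, pools, snapshot_pool):
        self.name = name
        self.pools = pools
        self.snapshot_pool = snapshot_pool

    @property
    def free_gb(self):
        # The capacity boundary you monitor is the container, i.e. the
        # aggregate of its pools; snapshots are isolated in their own pool,
        # so they cannot fill up the VVol datastore.
        return sum(p.free_gb for p in self.pools)


sc = StorageContainer(
    "SC-01",
    pools=[Pool("DP-Gold", 10_000, 4_000), Pool("DP-Silver", 20_000, 5_000)],
    snapshot_pool=Pool("ThinImage", 5_000, 1_000),
)
print(sc.free_gb)  # 21000
```

The point of the model: with a single pool per container, pool free space and container free space are the same number; with multiple pools, the container is still the one number worth alerting on.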
Note: Let’s be clear that due to the importance of Hitachi Command Suite (HCS) and the VASA provider in all power-on, power-off, disk-expansion and other operations, these are now critical infrastructure components. As such, their availability and recoverability requirements have increased.
I will cover design considerations in an upcoming post.
High-level Storage Container architecture
Here is a diagram which shows the communication path and separation of VVol objects.
One thing to note is that you can have VMFS and VVol objects on the same storage, presented to the same hosts and clusters with some constraints:
- A Virtual Machine must be all-VVol or all-VMFS
- VMFS objects cannot use the same Hitachi disk pool as VVols
- A discrete VVol resource group is used (host ports, host group and disk pools)
- It’s stating the obvious, but VVols are only supported on vSphere 6
- The limit on arrays is still tied to the maximum number of logical objects
  - On Hitachi G1000 arrays you can have 64,000 VVols and 1 million snapshots. Snapshot addresses are not taken from the 64K address space unless mounted.
- The Protocol Endpoint is created at the Storage Container level, so any host that has been presented with the 46 MB Protocol Endpoint Administrative Logical Unit (ALU) may use any of the pools and capabilities contained therein!
How would you consume VVols as an application owner?
The key point of VVols, in a way, is the separation of provisioning from consumption, which has always been a pain point for VMware administrators, i.e. having to wait for the storage team to do stuff…
You want to match VM storage profile rules and rulesets to the capabilities of the underlying storage.
I mentioned above how HDS is slightly different from other vendors.
When capabilities are added in the future, these will be inherited from the management software (HCS) and abstracted from the array level.
In other words, the array doesn’t know or care about capability profiles; it just stores data and objects. Because the array is not aware of metadata such as capability profiles, the data associated with a VVol and its capability profiles are independent. As you can imagine, this provides flexibility in the future, with potentially separate release cadences. Check with your vendor to see how they will manage it.
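A small sketch of that separation, with entirely hypothetical names (this is not a real HCS or VASA API): the array only stores objects, while capability-profile metadata lives in the management layer and can be changed without touching the array.

```python
# Sketch of the separation described above: the array stores objects,
# while capability metadata lives in the management layer (HCS in the
# HDS case). All names here are illustrative, not a real API.

# The array's view: opaque objects, no capability metadata at all.
array_objects = {"vvol-001": b"...data...", "vvol-002": b"...data..."}

# Management-layer metadata, maintained independently of the array:
capability_profiles = {
    "DP-Gold":   {"Cost": 800, "Performance": "High",   "Availability": "99.999"},
    "DP-Silver": {"Cost": 300, "Performance": "Medium", "Availability": "99.9"},
}
vvol_placement = {"vvol-001": "DP-Gold", "vvol-002": "DP-Silver"}

def capabilities_of(vvol_id):
    """Resolve a VVol's capabilities without asking the array anything."""
    return capability_profiles[vvol_placement[vvol_id]]

print(capabilities_of("vvol-001")["Cost"])  # 800
```

Because the two dictionaries are independent, a new capability (say, a location tag) can be added to the management layer on its own release cycle, which is the flexibility the paragraph above is getting at.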
Here is an example of creation of a container in HCS:
As you can see there are three pools above, highlighted inside the blue box:
- 2 x Dynamic Provisioning (thin) or DP pools
- 1 x Thin Image pool (snapshot)
  - As described above, this is set at the storage container level
  - This is a space-efficient copy-after-write pool
I highlighted the option in a red box to “Define Profile” for each pool as part of the provisioning process. This is where we define the capabilities we can consume in vCenter:
Now you see the capabilities exposed in HCS. Note that there are auto-generated capabilities present by default, as well as other settings such as Cost, Performance and Availability.
You can choose right now whether to show or hide these capabilities from the storage provider in vCenter by ticking the boxes above on or off; hidden capabilities will not appear when creating rule sets.
These capabilities will increase, and in coming releases HDS will provide custom profiles so you can tag volumes with metadata such as location (e.g. London_DC and NewYork_DC), which you can then use in the consumption phase in vCenter or in a Cloud portal.
Publishing via VASA ?
This is automatic. Create the profiles and, once the storage provider (vCenter’s name for the VASA provider) is registered within vCenter, you can create VVol objects. Right now it’s based on a 15-minute refresh cycle, if memory serves correctly.
A Consumption Example
In this example, imagine you are ordering a VM from an internal private Cloud; it just happens that VVols are being used. Because we have the Cost category exposed automatically, this could be part of the ordering phase in vRA, matching attributes based on cost as well as other requirements.
We are now creating a VM Storage Profile in vCenter and setting some parameters for it. Let’s call it Oracle Production, with a defined cost category of 800.
We can add lots of other rules and rule sets using availability or performance, or parameters such as Encryption, Replication and vCenter-based VM hardware snapshots (using Hitachi Virtual Infrastructure Integrator, V2I).
Let’s say you move a VM’s disks from one VM Storage Policy to another. Right now, all that happens is the creation of a compliance alert. Imagine if automated remediation occurred (kind of like offloaded Storage vMotion) using the VASA API. Hmmmmm… even between arrays… Wouldn’t that be handy at month-end?
Note: when placing a VVol object against a policy with multiple rule sets, only a single rule set must be satisfied to create the VVol object.
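The matching semantics in that note can be sketched in a few lines: within one rule set every rule must match (AND), but across rule sets only one needs to be satisfied (OR). The capability names and values below are hypothetical, chosen to mirror the Oracle Production example:

```python
# Conceptual sketch of the placement logic in the note above:
# rules within a rule set are ANDed; rule sets themselves are ORed.
# Capability names/values are hypothetical illustrations.

def rule_set_matches(rule_set, capabilities):
    # Every rule in the set must be satisfied by the pool's capabilities.
    return all(capabilities.get(k) == v for k, v in rule_set.items())

def can_place(policy_rule_sets, pool_capabilities):
    # A pool is a valid placement target if ANY rule set is satisfied.
    return any(rule_set_matches(rs, pool_capabilities) for rs in policy_rule_sets)

oracle_prod = [
    {"Cost": 800, "Performance": "High"},     # rule set 1
    {"Cost": 800, "Replication": "Enabled"},  # rule set 2
]

gold_pool = {"Cost": 800, "Performance": "High", "Availability": "99.999"}
silver_pool = {"Cost": 300, "Performance": "Medium"}

print(can_place(oracle_prod, gold_pool))    # True (rule set 1 matches)
print(can_place(oracle_prod, silver_pool))  # False (no rule set matches)
```

So a pool only has to satisfy one complete rule set, not every rule in the policy – which is why adding an alternative rule set widens, rather than narrows, the set of valid placement targets.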
Below the user is ordering his VMs from vRealize Automation and we have exposed the Cost category as well to ensure we don’t just give away all the good stuff to everyone 😉
When you pick the appropriate options, rule matching ensures the VM and its associated VVols are automatically created in the right place on some storage system somewhere, unbeknownst to the happy user 😉
A word on Snapshots and VVol
You must have a license for hardware-based snapshots on your array to use VVol snapshots. I’ll say it again: you cannot snapshot a VM with VVols without underlying hardware snapshot capabilities in place. You can, however, clone a VM without them.
In the next post I’ll probably talk more about the I/O path, VVol performance and queue-depth considerations.