Part 1: Introduction
Part 2: With and Without VVols
Part 3: New Storage Constructs
Part 4: How are VVols instantiated
Part 5: Storage Containers & Capability Profiles
Part 6: The Mythical Protocol Endpoint
Before we can architect VM Storage Profiles, which are the point of consumption for VMs and applications, we need to understand how the north-of-hypervisor view maps downwards. I don’t want to harp on about the storage itself; this post is more about introducing some new concepts to aid understanding of the entire architecture.
So let’s introduce three (new-ish) concepts and then we can expand on them later:
▪ Protocol Endpoint
▪ Storage Container
▪ VASA Provider
The Protocol Endpoint
Think of a Protocol Endpoint (PE) like a NAT gateway for VVols: a single externally addressable object through which I/O reaches the many individual VVols behind it. Thanks to a colleague (VH) for the NAT analogy.
So the only “logical” connection between an ESXi host and the storage is to the PE; more on how that is configured later. Once that connection is established and some storage is configured, you can start creating VVols. “Protocol Endpoint” is perhaps a poor name, as any supported protocol can be used across it.
So there is still an object addressed by the ESXi host, called the Protocol Endpoint. It looks like a LUN – in Hitachi block systems it is 46 MB and is presented to the ESXi host at LUN ID 256 – but that’s where the similarity ends. As I mentioned before, LUNs are history! More on PE configuration later.
A PE is the portal (sci-fi reference) behind which VVols exist – and, some would say, hide. That’s not an entirely facetious statement.
Secondary LUN ID addressing is an inherent feature of VVols: once the logical connection between an ESXi host and the PE is established, we are into pure object-creation paradise. No more zoning, masking, or filesystem creation. The VVol objects end up in the correct place thanks to the translation between VM Storage Profiles and Capability Profiles on the storage. In HDS’ implementation these are configured at pool level.
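To make the NAT analogy concrete, here is a minimal Python sketch of the idea – a single PE fronting many VVols via secondary LUN IDs. All class and field names here are my own invention for illustration, not any real VMware or HDS API:

```python
# Hypothetical sketch of PE-style sub-LUN addressing (not a real API).
# The ESXi host addresses one object (the PE); each VVol behind it is
# reached via a secondary LUN ID, much like NAT port translation.

class ProtocolEndpoint:
    def __init__(self, lun_id=256):       # the one LUN-like object the host sees
        self.lun_id = lun_id
        self._bindings = {}                # secondary LUN ID -> VVol name
        self._next_secondary = 1

    def bind(self, vvol_name):
        """Bind a VVol behind the PE and return its secondary LUN ID."""
        sec_id = self._next_secondary
        self._next_secondary += 1
        self._bindings[sec_id] = vvol_name
        return sec_id

    def resolve(self, secondary_id):
        """Translate (PE LUN ID, secondary LUN ID) -> target VVol."""
        return self._bindings[secondary_id]

pe = ProtocolEndpoint()
data_id = pe.bind("vm1-data.vvol")
swap_id = pe.bind("vm1-swap.vvol")
# I/O addressed to (LUN 256, secondary ID) is steered to the right object:
print(pe.resolve(data_id))   # vm1-data.vvol
```

The point of the sketch is simply that the host-visible address space stays tiny (one PE) while the number of objects behind it can grow without any zoning or masking changes.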
I will show screenshots and examples of all of this in subsequent blogs, but it’s important to build the picture up layer by layer.
The Storage Container
A Storage Container is a logical grouping of capacity. It is mapped to a “virtual datastore” called a VVol datastore. The VVol datastore is not real or persistent in the sense that there is no filesystem; it helps direct the I/O to the appropriate place, but there is no unit of administration or associated overhead.
In HDS’ implementation, you can use multiple disk pools (thin pools or tiered pools) in a storage container, and each storage container can be the size of the entire array if required. We have already stated publicly elsewhere that containers will ultimately likely span arrays. The choice of single versus multiple pools is an interesting corner case I’ll discuss later.
At pool level, you configure capabilities such as performance, availability and replication. The HDS design allows a container to hold multiple pools, meeting the requirements of applications via a Capability Profile per pool, with little overhead. Management therefore remains very simple, but when it comes to VVol placement, that’s where the pools come into play.
So an example would be three pools in a storage container:
- Pool 1 = Database data files on a tiered pool (HDT)
- Pool 2 = Database log files on a thin pool (HDP)
- Pool 3 = General system files and disks on a thin pool (HDP)
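As an illustration of how placement along these lines could work, here is a hedged Python sketch that matches a VM Storage Profile’s requested capabilities against per-pool Capability Profiles in a container. The pool names, capability keys and matching logic are all mine for illustration, not HDS’ actual implementation:

```python
# Hypothetical sketch of VM Storage Profile -> Capability Profile matching
# (illustrative only; names and logic are not from any real product).

# Per-pool Capability Profiles inside one storage container, loosely
# modelled on the three-pool example above.
container = {
    "pool1": {"tiering": True,  "type": "HDT"},   # database data files
    "pool2": {"tiering": False, "type": "HDP"},   # database log files
    "pool3": {"tiering": False, "type": "HDP"},   # general system files
}

def place_vvol(requirements, pools):
    """Return the first pool whose Capability Profile satisfies every
    key/value pair requested by the VM Storage Profile."""
    for name, caps in pools.items():
        if all(caps.get(k) == v for k, v in requirements.items()):
            return name
    return None  # no compliant pool in this container

# A VM Storage Profile asking for tiered storage lands on the HDT pool:
print(place_vvol({"tiering": True}, container))   # pool1
print(place_vvol({"type": "HDP"}, container))     # pool2
```

The takeaway is that the administrator never places a VVol by hand; the profile match does it, which is why per-pool capability configuration carries so little day-to-day overhead.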
Are there any reasons to have multiple storage containers?
One use case is deploying VVols in the cloud. Thanks to the partitioning technology in HDS arrays, you could segregate each vDC (virtual datacenter) onto a separate array partition (a separate resource group), each with a single storage container and one or more Protocol Endpoints.
That is an interesting use case which I will cover later.
The VASA Provider
The VASA Provider is not new. VASA stands for vStorage APIs for Storage Awareness. With V2.0 of the APIs, the hypervisor gains the ability to orchestrate VVol objects directly on the storage, as well as to have the storage publish what it can do.
HDS VASA Provider 3.1 has recently been released. In the not-too-distant future, VP 3.1f will follow: a single vehicle for the VASA Provider for both file and block, presenting a consolidated user experience for both types of system in the same management plane.
In the next post I will talk about how VVols are realised. Until then!