Disclaimer: I work for Hitachi Data Systems. This post is not authorised or approved by my company and is based on my personal opinions. I hope you read on.
A couple of weeks back Pure Storage announced their converged stack, FlashStack. So now Hitachi has UCP Director, VCE has Vblock, NetApp has FlexPod, and Pure is in the club.
A Personal Experience of a converged deployment
Last year I worked on two converged system implementations as an independent consultant. This post is written from that perspective and is based on frustrations about over-engineered solutions for the customers in question. I was a sub-contractor, only brought in to build vSphere environments once the solutions had been scoped/sold/delivered.
Both instances involved CCIEs working for the system integrator on the network/fabric/compute design and build. Core network management was the customer’s responsibility. So this was a Layer-2 design only, yet it still required that level of expertise on the networking side.
In one of these projects, there were 100+ days of professional services time on the storage system for snapshot integration, and (just) 10 days to design and deploy SRM.
I had a hunch that the customer didn’t have the skill set to manage this environment and would ultimately have to outsource the management, which is what happened. This was for an environment of 50-100 VMs (I know!).
When I started this post I was going to talk about how Hitachi converged the management stack using UCP Director inside vCenter and Hyper-V. That sounds like FUD, which I don’t want to get involved in, so instead I decided to raise the question of whether a converged system is actually better, and what “converged” even means.
Question: What is a converged system?
Answer: It is a pre-qualified combination of hardware, with a reference architecture and a single point of support (in some cases).
Question: So does that make manageability easier?
Answer: In many cases you still manage the hypervisor and server image deployment, as well as the storage and network, separately. So you still need to provision LUNs, zone FC switches, drop images on blades, and so on. And each of these activities requires a separate element management system (to clarify: this is not the case with HDS).
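To make that concrete, here is a sketch of what just the FC zoning step looks like in its own element manager. The syntax is Cisco MDS NX-OS; the zone name, VSAN number and WWPNs are all hypothetical, and your stack may use a different fabric switch entirely:

```
! Cisco MDS NX-OS — hypothetical names and WWPNs
conf t
zone name esx01_to_array vsan 10
  member pwwn 20:00:00:25:b5:aa:00:01   ! ESXi host HBA
  member pwwn 50:06:0e:80:10:20:30:01   ! array target port
zoneset name prod_fabric_a vsan 10
  member esx01_to_array
zoneset activate name prod_fabric_a vsan 10
```

LUN provisioning then happens in the array’s own management tool, and server image deployment in yet another, so bringing up a single new host can mean touching three different interfaces.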
Question: So how is this better?
Answer: If you look at some of the features of the blade systems in question, they are definitely an improvement; for example, the ability of blades to inherit a “persona”. The approach of oversubscribing normally under-utilised uplinks for storage and network traffic is also a good idea. However, you have to ask yourself at what scale customers actually require this functionality: how many customers deploy new blades every day of the week? And with modern hardware, server failures are relatively uncommon, so will they really benefit from the more modern architecture?
Question: So wouldn’t it maybe be simpler to just use rack servers?
Answer: It depends on your requirements, what you’re comfortable with, and in most cases whether the vendor you are most comfortable with has a solution that suits your needs. It can also make sense to spread a fault domain across multiple racks, which is a challenge with a single blade chassis; that was the case for the customers I worked with.
Question: So when should you deploy a converged platform?
Answer: A good use case could be consolidating from many legacy server models onto a single infrastructure reference architecture/combination, which can reduce support overhead via standardisation.
I don’t know about other vendors, but Hitachi just released a much smaller converged solution, the 4000E, which comes in a form factor of 2-16 blades. So at the very least make sure you have a system that is not too big for your needs, or over-engineered.
A bridge too far
FCoE has not taken off, has it?
I think this is one of the great failures of the whole converged story: the undoubted benefits of converging storage and Ethernet traffic onto a single medium have, for the most part, not been realised. It’s widely accepted that Cisco championed FCoE as part of its Data Center 3.0 initiative.
In relation to the concept of FCoE, I’m suggesting most customers run native FC on the array and only “convert” to Ethernet inside the converged stack by encapsulating the FC traffic in Ethernet. That removes one of the main value propositions of the whole concept, i.e. eliminating the separate Fibre Channel fabric and reducing HBA, cabling and FC switch costs. I’d welcome feedback on that point.
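For reference, the “converted” leg of that design is the FCoE configuration inside the stack. This is a hedged sketch in Cisco NX-OS syntax, assuming a Nexus switch acting as the FCoE forwarder; the VLAN/VSAN numbers and interface names are hypothetical:

```
! Cisco Nexus NX-OS — hypothetical VLAN/VSAN and interface numbers
feature fcoe
vlan 100
  fcoe vsan 100
vsan database
  vsan 100
interface Ethernet1/1
  switchport mode trunk
  switchport trunk allowed vlan 100
interface vfc11
  bind interface Ethernet1/1
  switchport trunk allowed vsan 100
  no shutdown
```

The point is that even where this exists, it typically terminates at the top of the stack, while native FC still runs from there to the array, so the separate fabric never actually goes away.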
Only go down this route if your requirements and scale justify it, and if you have the skills to understand the technology underpinning such a solution.
As has been said many times … KISS … Keep It Simple, Stupid. In my humble opinion, converged is sometimes not the simplest route to go. That said, some hyper-converged solutions such as VMware EVO:RAIL have simplified management, which makes them a great fit for many customers.
What do you think?
Comment or ping me on Twitter @paulpmeehan