Hitachi Blades …. Have your cake and eat it. Part 1: The introduction

At the Hitachi Data Systems community website you will find some posts related to my role at HDS. Several people have asked me about our blades. I plan to write the second in a series of articles shortly, but wanted to link across here to give people an introduction. The original article is posted here:

 

https://community.hds.com/people/pmeehan/blog/2014/11/14/hitachi-blades-have-your-cake-and-eat-it-part-1-the-introduction

I have also posted the article on my personal blog here to ensure no login issues which some people have had on the community website.

Disclaimer: Take note that while this article contains my own personal thoughts, it does not express official HDS opinions. It also has views that are specific to our technology, so bear that in mind and don’t be surprised or complain when you get to the bottom of the article 🙂

……………………

Hitachi Blades …. Have your cake and eat it. Part 1: The introduction

When I joined Hitachi I didn’t understand the power of our blades or the use cases they could address. I admit I assumed they were just like any other blades. Furthermore, I have always had a sneaking disregard for blades; when all things were equal my preference was always rack servers. I think people either love ’em or hate ’em. I was never quite sure blades were being properly utilised, and I frequently saw half-empty chassis wasting power.

I knew Hitachi had our fully converged platform (UCP Pro) with fully integrated management of the entire stack, orchestrated by UCP Director software. That was the first differentiator, and one that nobody else can match. Check out the demo here, and if you know of similar solutions I’d love to hear about them:

Hitachi Unified Compute Platform Director —  Full Length Demonstration – YouTube

I was pretty impressed by UCP Director as it does resolve a common problem: having to separately provision network (VLANs), storage, hosts and so on. It even lets you drop images onto bare-metal servers that are not part of the virtual infrastructure, as well as deploy ESXi images from within vCenter. It just makes things easier. I have a different opinion to some companies who argue that managing storage is incredibly complex. I believe the overall complexity of virtual infrastructures is what has made things more complex; it’s not the fault of the storage. You could surely make the same statement about the network, particularly in a virtual datacenter. Just think about planning a virtual network design for NAS/NFS. It’s the network that introduces most of the complexity. Right?

So reducing management complexity and offloading it to an orchestration layer is what it’s all about. Just look at VAAI and what it has meant for template deployment, Storage vMotion and other tasks. In my view, intelligent orchestration software like UCP Director is an equally valid approach to starting from scratch with a blank sheet of paper.
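
To make the “offload it to an orchestration layer” point concrete, here is a minimal Python sketch. The class and method names are my own invention for illustration only (they are not the UCP Director API); the point is simply that one orchestrated call replaces the separate network, storage and image steps an admin would otherwise perform by hand.

# Purely illustrative sketch: the names below are hypothetical and are NOT
# the real UCP Director API. They only show the idea of collapsing separate
# provisioning steps into one orchestrated operation.

from dataclasses import dataclass

@dataclass
class HostRequest:
    name: str
    vlan_id: int          # network to trunk to the blade
    datastore_gb: int     # capacity to carve from the array
    image: str            # e.g. an ESXi build to push to bare metal

class Orchestrator:
    """Stand-in for an orchestration layer that owns network, storage and compute."""

    def provision_host(self, req: HostRequest) -> None:
        # One call fans out to the steps an admin would otherwise do by hand,
        # in the right order and with consistent naming.
        self._configure_vlan(req.vlan_id)
        self._allocate_storage(req.name, req.datastore_gb)
        self._deploy_image(req.name, req.image)

    def _configure_vlan(self, vlan_id: int) -> None:
        print(f"trunking VLAN {vlan_id} to the chassis uplinks")

    def _allocate_storage(self, host: str, size_gb: int) -> None:
        print(f"carving {size_gb} GB and presenting it to {host}")

    def _deploy_image(self, host: str, image: str) -> None:
        print(f"pushing image '{image}' to bare-metal blade {host}")

if __name__ == "__main__":
    Orchestrator().provision_host(
        HostRequest("esx-prod-01", vlan_id=120, datastore_gb=2048, image="ESXi 5.5"))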

Multi-Hypervisor

Of course you can get UCP Director for VMware and Hyper-V with the same orchestration layer for both vCenter and System Center. So this is about choice for our customers.

LPARs

When I first heard of LPARs (logical partitions) I didn’t get it. As a dyed-in-the-wool virtualisation-first advocate, I couldn’t see what the use case could be now that vSphere 5.5 makes monster-sized VMs possible. By way of introduction, LPARs came from Hitachi mainframe and UNIX platforms and have been around for years.


Now I get it. If you’re a customer who wants to benefit from a consolidated converged platform from Hitachi with UCP Director, you may be concerned about software licensing, or worried about an audit from Oracle, Microsoft, IBM or others for software running on a vSphere or Hyper-V cluster.

Now you can have your cake and eat it!

I heard of a customer the other day with 5,500 servers that had cores “switched off” for licensing purposes; they averaged just 1.3 active cores per server. Isn’t it sad that customers have to do this because of punitive licensing policies?

Let’s say you want to use vSphere (or Hyper-V) on a UCP Pro converged platform, but you also need to consolidate a bunch of physical servers that must stay on single CPUs for licensing reasons. You can run up to 30 LPARs on a single blade, which enables significant consolidation, and you can mix and match within the same chassis. Note that in the case of Oracle you still need to license all the cores on a blade, but that shouldn’t be too much of an issue if you consolidate a number of instances onto a single blade or a few blades, and then use Hitachi hardware protection to automate identity failover if a blade fails.
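
As a back-of-the-envelope illustration of why this matters, here is a trivial Python sketch. The core counts, cluster size and cost units are assumptions I have made up for the example, not real pricing or official sizing guidance; they simply contrast licensing every core a workload could be scheduled on in a cluster with licensing only the one blade the instances are pinned to.

# Back-of-the-envelope licensing arithmetic. All figures below are my own
# illustrative assumptions, not vendor list prices or official sizing guidance.

CORES_PER_BLADE = 20            # assumption: cores in one two-socket blade
BLADES_IN_CLUSTER = 8           # assumption: size of a vSphere/Hyper-V cluster
LICENCE_COST_PER_CORE = 1.0     # assumption: cost per core, in arbitrary units

# Worst case sometimes argued by vendors: license every core the workload
# could ever be scheduled on, i.e. the whole cluster.
whole_cluster_cores = CORES_PER_BLADE * BLADES_IN_CLUSTER

# LPAR approach: consolidate the instances onto one blade and license only
# that blade (Oracle still counts every core on the blade).
single_blade_cores = CORES_PER_BLADE

print(f"license the whole cluster: {whole_cluster_cores} cores "
      f"({whole_cluster_cores * LICENCE_COST_PER_CORE:.0f} units)")
print(f"license a single blade:    {single_blade_cores} cores "
      f"({single_blade_cores * LICENCE_COST_PER_CORE:.0f} units)")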

SMP

What if you want to run a super-sized server that needs more than the typical two-socket footprint common today? You can do that too, with up to 80 cores in a single instance. Within our customer base these use cases are not uncommon.

What else can we do ?

N+M

N+M allows you to do two things:

  • Protect up to four blades with a matching set of four standby blades. If the hardware dies, N+M kicks in and a substitute blade takes the place of the first-team player. This is called hot standby N+M. The only potential limitation is the requirement for dedicated standby nodes. A use case could be a large SAP HANA instance (I saw one in a bank two weeks ago), the 80-core example I mentioned already.
  • Protection for a group of up to 25 blades. This is called cold standby. In this case you could have 22 blades running production with SAP, Oracle, VMware and other apps in separate clusters, plus 3 standby blades. If a blade fails, one of the standby blades replaces it, mirroring it from a hardware perspective (see the simple sketch after this list for the idea).
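
To make the substitution idea concrete, here is a toy Python model of an N+M pool. It is my own simplified sketch, not Hitachi’s actual failover logic, and the field and method names are invented for illustration. The point is simply that a standby blade assumes the failed blade’s hardware identity (WWNs, MACs, boot settings) so the OS or hypervisor comes back as if nothing had changed.

# Toy model of N+M blade failover. This is my own simplified illustration,
# not Hitachi's actual implementation; names and fields are invented.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Blade:
    slot: int
    identity: Optional[str] = None   # WWNs/MACs/boot profile the blade presents
    healthy: bool = True

@dataclass
class NPlusMPool:
    production: List[Blade]
    standby: List[Blade] = field(default_factory=list)

    def fail(self, slot: int) -> None:
        """Mark a production blade as failed and substitute a standby blade."""
        failed = next(b for b in self.production if b.slot == slot)
        failed.healthy = False
        spare = next((b for b in self.standby if b.healthy and b.identity is None), None)
        if spare is None:
            raise RuntimeError("no standby blade available")
        # The spare assumes the failed blade's identity, so from the OS or
        # hypervisor's point of view the same "server" simply reboots.
        spare.identity = failed.identity
        print(f"slot {failed.slot} failed; standby in slot {spare.slot} "
              f"took over identity '{spare.identity}'")

if __name__ == "__main__":
    pool = NPlusMPool(
        production=[Blade(slot=i, identity=f"profile-{i}") for i in range(1, 23)],  # 22 production blades
        standby=[Blade(slot=i) for i in range(23, 26)],                             # 3 standby blades
    )
    pool.fail(7)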

Nesting

What if you want to do something like Hu Yoshida described in his blog here, in terms of the performance benefits of the new Haswell-class processors and VMCS functionality:

HDS Blogs: Hitachi LPARs Provide Safe Multi-Tenant Cloud With Intel E5v3

This resolves the performance impact of multiple VMMs running on the same hardware.

Conclusion

Why should you care?

This is significant because it reduces lock-in for customers and allows them to do more with less, satisfying many use cases with a single, uniform form factor and solution type. As a virtualisation architect this is always what I like to see: options! Options are what allow you to make smart design decisions. And as we all know, use cases should drive all decisions, and there are some genuine use cases here.

This is particularly true in what I perceive as the final battleground in the quest to achieve 100% virtualisation for many enterprise customers: software licensing.

My future posts will cover some real use cases with real examples. Until then … enjoy your cake.
