Virtualisation Poll Result 1: CPU and Memory Utilisation

Recently I wrote a post looking for assistance with some questions regarding real-world virtualisation deployments. It was partly out of curiosity and for personal reasons, and partly for work-related reasons.

I’m wishing now that I had expanded it and made it more extensive. I plan to run the same thing later in the year and try to make it an annual exercise, a kind of barometer if you like, to get a real view of what’s happening in the world of virtualisation, as opposed to relying on analysts’ reports or other sources of information.

That post was HERE and the polls are now closed.

Note: don’t confuse this with the vBlog voting Eric Siebert is running.

The first question covered CPU utilisation. The results were as follows:
[Poll results chart: CPU utilisation]
Before analysis, let’s take a look at the next question which relates to memory consumption:

Here are the results of that question:

[Poll results chart: memory utilisation]

Analysis

There is always a question as to whether workloads are CPU-bound or memory-bound. I’ve worked at sites where CPU and processing power was what it was all about; alternatively, when running a cloud platform, it is likely that memory oversubscription will become the bottleneck.
For me, it’s kind of surprising that the disparity was so great: over 87% of respondents are running above 40% memory utilisation, versus 38.4% running above 40% CPU utilisation.
So taken as a whole:
  • CPU utilisation is less than 40% in over 60% of responses.
  • Memory utilisation is less than 40% in only about 12% of responses.

So we can clearly see the disparity. There are a couple of other poll questions which, for me, are linked to this one. I will publish posts on those over the coming days, but you might ask yourself what effect these results have on CPU and memory reservations, as well as on vCPU-to-pCPU oversubscription ratios.
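Purely as an illustration of what I mean by that ratio, here is a minimal Python sketch; the host and VM figures are invented for the example (in practice you would pull the inventory from vCenter or your hypervisor of choice):

# Illustrative only: vCPU-to-pCPU and memory overcommit ratios for a
# hypothetical cluster inventory. All numbers are made up for the example.

hosts = [
    {"name": "esx01", "pcores": 40, "mem_gb": 512},
    {"name": "esx02", "pcores": 40, "mem_gb": 512},
    {"name": "esx03", "pcores": 40, "mem_gb": 512},
]

vms = [{"name": f"vm{i:03d}", "vcpus": 4, "mem_gb": 16} for i in range(1, 201)]

total_pcores = sum(h["pcores"] for h in hosts)
total_host_mem = sum(h["mem_gb"] for h in hosts)
total_vcpus = sum(v["vcpus"] for v in vms)
total_vm_mem = sum(v["mem_gb"] for v in vms)

print(f"vCPU:pCPU ratio   : {total_vcpus / total_pcores:.1f}:1")
print(f"Memory overcommit : {total_vm_mem / total_host_mem:.2f}x")

On those made-up numbers you get roughly a 6.7:1 vCPU:pCPU ratio and about 2x memory overcommit, which is the kind of arithmetic sitting behind the questions above.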

Clearly we could potentially oversubscribe CPU cores further, but eventually we reach a position where availability considerations and failure domains become real concerns. If we push CPU consolidation ratios too far, we could end up with 100-200 VMs falling over when a single host dies. That is not a good place to be, but it is where we are at.

My personal view has always been that if you make vSphere or Hyper-V the “mainframe” on which your entire workload runs, then it is penny-wise and pound-foolish to under-provision hosts for the sake of marginal cost savings when weighed up against the downsides. That will always be my starting position, but it obviously depends on whether it’s a production, QA, or test/dev environment.

Is this a problem?

So let’s extrapolate forward slightly. If CPUs become ever more powerful and even more cores are exposed per physical server, what might the impact be, given the availability considerations of a single host?
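To put some rough numbers on that, here is a hypothetical sketch; the host sizes, average VM size and 10:1 ratio below are assumptions for illustration, not figures from the poll:

# Illustrative blast-radius arithmetic: roughly how many VMs sit on a single
# host at a fixed vCPU:pCPU consolidation ratio as core counts grow.
# All figures are hypothetical, not taken from the poll results.

avg_vcpus_per_vm = 4          # assumed average VM size
consolidation_ratio = 10      # assumed 10:1 vCPU:pCPU

for pcores in (16, 32, 64, 128):   # physical cores per host, growing over time
    vms_per_host = (pcores * consolidation_ratio) // avg_vcpus_per_vm
    print(f"{pcores:>3} cores @ {consolidation_ratio}:1 -> "
          f"~{vms_per_host} VMs affected if that host fails")

Even holding the ratio steady at 10:1, the number of VMs exposed to a single host failure quadruples as the host grows from 16 to 64 cores, which is exactly the availability trade-off I mean.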

For me, this is where application architecture must change. Right now, many applications cannot be deployed in scale-out (“cattle”) architectures and are not well placed to leverage further, inevitable increases in CPU core counts and per-core performance. The reason I say that is that most existing applications have no inherent resilience to withstand individual node failures.

This is going to cause a big problem for enterprises: balancing the line-of-business resilience implications of packing too many VMs onto each host against the quest to drive better consolidation ratios from ever more powerful hardware.

That’s just a personal view, but it’s one reinforced regularly as I see customers and partners trying to achieve vCPU-to-pCPU consolidation ratios of 10:1, 20:1 or even 30:1, which in my opinion is a dangerous place to be.
