VCAP-DCA Lab Talk Part 2: The Kit (and Kaboodle)

Recap: 

In the last post I covered some of the reasons you might want to use a lab, as well as the basic options. If you haven’t seen it, check it out HERE.

Remember, I’m recounting my experience and a certain level of general confusion, which I’m trying to ensure you don’t replicate. We don’t want to make it tooooooo easy, but making a plan and figuring out what direction to go is what it’s all about. Then you can work out the nuts and bolts of your “site-specific” configuration, to use industry parlance.

Site=Your House/Your Rules.

I had planned to focus on the home network in this one. Then I started rabbiting on and couldn’t stop… So that’s gonna be the next part, as putting readers to sleep with such a young blog is frankly unforgivable, in my view.

So, in this post we’re talking about nested configurations, specifically Option 1 (Nested with VMware Workstation or Fusion) from my last blog, which you can find at Lab Talk Part 1: Use Cases and Options. In my case I now have one of each, as you’ll find out later.

That’s because I want to replicate a dual data center logical design. In my head it’s like a 10km distance between DCs in the same city.

My House

About my house – it’s big-ish (by Irish standards), about 300m² / 3,200 sq ft. By US and Canadian standards, it’s probably average.

It covers a large area and all the internal walls are breeze block, no plasterboard. That makes it hard to get wireless coverage into every corner of the house. I had to take this into account, as some devices need to communicate with the wireless network and with the other data center. I’ll cover that in part 3.

Before we continue, let’s remember: my lab equipment is also my work PC. That’s one of the differences I mentioned last time out. It normally stays on 24×7 and is intended to be used as a lab, but it HAS to work for BAU activities, especially in terms of staying connected to the internet.

Furthermore, I’ve recently moved upstairs to a spare bedroom, away from the broadband router. So for optimal internet access I needed a slightly different solution. That’s where the wireless piece also mattered.

The clients 

I have growing children and a wife who use the Internet voraciously (Twitter/Email/Rolypolyland/Moshimonsters etc.). So, why not cover how I solved that problem, ‘cos it kind of also describes how I now run my two labs, which are typically joined up.

Over many years I’ve tried various unsuccessful ways to extend my network throughout the house. I used different Powerline (networking over the electrical wiring) solutions, repeaters and wireless access points, all of which were partly but not wholly successful.

I have now got a good working solution leveraging WiFi and Powerline as well as a small Gig switch. More on that in Blog part 3.

You can check out Powerline here: http://en.wikipedia.org/wiki/Power-line_communication

I never settled on a working combo until now.

So now to the first nested system, which is a Workstation box.

Existing Machine Spec

My machine is a serious gaming system, designed for overclocking, with a big PSU and so on, purchased from custompc.ie.

That lets me do a lot of stuff, and it typically stays in one place, which is just as well given the form factor.

It’s a superb motherboard and spec in terms of processor and expansion – I think there are 10 USB ports in total, 7 PCI slots and room for 12 hard disks. And, as described, memory is the ceiling I’ve hit with it.

It does show that these boxes are not exactly balanced in terms of performance.

Inside VMware Workstation I have the following nested setup, which I hope explains the memory requirement (see the config sketch after this list):

  • Local vCenter with SRM, Update Manager, vCops, SQL 2008 R2 all installed in a single VM.
  • vCloud Director (with vCAC coming soon)
  • VMware View
  • Soon to be VSAN after a bit of an upgrade
  • Single two-node ESXi cluster (9GB RAM each – I’ll explain why later)
  • FreeNAS for storage (I’ve tried a few storage solutions). This is my current one – that’s tied into the other explanation 😉
    • presenting a 200GB SSD
    • presenting 2 × 500GB SATA disks
  • 1TB “ISOs and Images” disk/dumping ground for myriads and myriads of downloads.
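For anyone wondering how ESXi runs as a guest at all, the trick is exposing hardware virtualisation to the VM. Here’s a minimal sketch of the relevant .vmx entries for one of the 9GB hosts above. The exact values (guest OS type, vCPU count, NIC model) are illustrative assumptions rather than a dump of my actual config, and recent Workstation/Fusion versions expose the same options in the VM settings UI:

    # Tell Workstation/Fusion the guest is ESXi 5.x
    guestOS = "vmkernel5"
    # Pass Intel VT-x/EPT through so the guest can run its own VMs
    vhv.enable = "TRUE"
    # 9GB of RAM, matching the cluster nodes above
    memsize = "9216"
    numvcpus = "2"
    # e1000 is a safe virtual NIC choice for an ESXi guest
    ethernet0.virtualDev = "e1000"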

From a Virtual Datacenter Design perspective, this is called the East Data Center.

It’s got fast spinning fans with lots of blue lights. Geek heaven!!

I think it’s also very important to ensure your boot drive, which gets hit hard, is a decent SSD. Remember, a bottleneck is the point of weakness (or bad design) within the system. So be aware of that.


The new “Datacenter”

So when it came to a new purchase, I didn’t wanna throw away my existing system. Even now I’m still thinking physical, and that’s mainly down to memory…

However, I think leaving even a physical server un-nested (no ESXi inside ESXi) is crazy; IMHO it’s a waste of hardware as far as a lab is concerned. So that will ultimately be my next plan, but for now I’m happy…

The main criteria for the new machine were standard:

  • Lightweight for bringing with me wherever
  • Decent component specs. Particularly from a Flash disk perspective.
  • As much memory as possible
  • I want it to act as a DR datacenter to my primary “Gamer”. BC/DR is a major part of any VCDX-level design, and having both sites on the same system just seems wrong to me.
  • Not necessarily Apple. In fact I tried to stick with Wintel x86:
    • After considering price, among other factors, the MacBook Pro seemed to be the best option. Weighing up the cost against the guaranteed Apple “quality” factor is what pushed it over the line.
So Datacenter 2 is called the West Data Center.

The specs are pretty straightforward….

  • MacBook Pro Retina Display 13” Core i5
  • 16GB RAM (1600MHz)
  • 512GB Flash
  • Dual Thunderbolt ports (I intend to use these for Gigabit Ethernet)

I can confirm this is the most gorgeous laptop I’ve ever had. It takes getting used to, but it’s just so slick…


So I have my East and West Data Centers, which now map onto two vCenter datacenters of the same name. It didn’t take me long to get SRM up and running across them. I’m using vSphere Replication (vSphere’s native replication) with SRM, which I really like.
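If you want to sanity-check both sites from one place, here’s a minimal pyVmomi sketch. To be clear, this isn’t part of my build: the hostnames and credentials are hypothetical placeholders, and the relaxed SSL handling is for lab self-signed certs only.

    # Minimal sketch: confirm both lab vCenters respond (pip install pyvmomi).
    import ssl
    from pyVim.connect import SmartConnect, Disconnect

    # Lab only: accept the self-signed certificates vCenter ships with.
    ctx = ssl._create_unverified_context()

    # Hypothetical hostnames and credentials for the East and West sites.
    for host in ("vc-east.lab.local", "vc-west.lab.local"):
        si = SmartConnect(host=host, user="administrator@vsphere.local",
                          pwd="VMware1!", sslContext=ctx)
        print(host, "->", si.content.about.fullName)
        Disconnect(si)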

The Dual-Site Approach

I’m not sure who else is doing it, but I think this might be a good solution for lots of people. You could run two 8GB systems side by side and get similar results.

Regardless of the home network, plug a Gig switch between the two nested setups and you can simulate a Metro (MAN) link. This is perfect for simulating SRM or any other replication toolset. And as it turns out, it works well.

My Gig switch is pretty dumb: it doesn’t support VLANs and can’t be “managed”, but to get up and running I don’t mind. I’ll get my ROI on it within a week…
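One quick way to prove the “metro link” is passing traffic between the sites is simply to ping across it and eyeball the round-trip times. A trivial Python sketch (the management addresses are made-up placeholders, and the “-c” flag assumes a Unix-like ping; Windows uses “-n”):

    import subprocess

    # Hypothetical management addresses for the East and West lab networks.
    SITES = {"east": "192.168.1.10", "west": "192.168.2.10"}

    for name, ip in SITES.items():
        # Send four echo requests and print the summary, including RTTs.
        result = subprocess.run(["ping", "-c", "4", ip],
                                capture_output=True, text=True)
        print(f"--- {name} ({ip}) ---")
        print(result.stdout)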

So, for now, that’s it. I’ll be hitting the network with minimum delay… We’ll be talking Powerline, bridged networking, network segments, in-machine switching and many other topics.

Until then, as we say in Ireland, Slán (pronounced “Slawn”), which means until the next time.
