Out With the Old

It’s been a long time since this website contained anything meaningful. Between work, car racing, my house, studying for certs, and life in general, it just got put on the back burner.

My home lab suffered quite a bit too. I didn’t spend as much time on it as I wanted to and, despite my best intentions, I started having to test products by putting them straight into production. That is far from ideal and something I personally hate doing.

That was then and this is now.

The site is up. A friend and I are working on graphics and layouts, and lots of changes are coming in the future. To start things off, here is an update on my home lab.

For the last three years my lab consisted primarily of one physical ESXi host running on a Supermicro barebones server: 1U, a 12-core AMD processor, and 32GB of RAM. It had a few local SATA hard drives and a SATA SSD for host caching. It worked. I ran some small VMs and had some fun. About 18 months ago, I added two HP DL360 G5 hosts for the ESXi 6 beta. I built an Openfiler SAN on their local SAS disks and had a little more fun, but it was not a great setup: no real SDS, no real IOPS, and not much capacity.

So, about three months ago, I was talking with a friend of mine and he explained what he was doing with his lab. That got the wheels turning and I started doing some real research. I decided that if I was going to claim to be a top-tier VMware and data center architect, I needed to get my hands dirty and really know what I was talking about.

Fast forward two months: I started buying gear off eBay and assembling it. This week, I burned everything in. I hit a few failures in the first 48 hours and rectified most of the issues, at least enough to actually start building.

Here is the result:

Oh yes he did…

[Photo: the c3000 lab]

So what did I do now? I went from small and manageable to completely insane. Here is the hardware selection:

  • HPE BladeSystem c3000 chassis with 6 PSUs and 6 fans
  • 3 BL460c G7 blades, each with 2 quad-core Xeons and 96GB RAM
  • 3 SB40c storage blades
  • 2 Virtual Connect Flex-10 modules
  • 18 600GB 10K SAS drives
  • 6 SanDisk 240GB SSDs

I loaded ESXi 6, added the blades to a Windows-based vCenter, and started the migration. This is a great time to plug the VMUG Advantage program: $200 a year gets me access to most of the software I need (vSphere, Horizon, vRealize, vCloud, and more). Everything is legit and going strong.
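
If you’d rather script that step than click through the client, here is a minimal sketch using pyVmomi, the Python vSphere SDK. The hostnames, credentials, and the cluster name "Lab" are placeholders for illustration, not my actual environment:

```python
# Minimal sketch: join ESXi hosts to a vCenter cluster with pyVmomi.
# All names and credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only: skip certificate checks
si = SmartConnect(host="vcenter.lab.local",
                  user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()

# Walk the inventory for the target cluster (assumed to be named "Lab").
cluster = None
for dc in content.rootFolder.childEntity:
    if not isinstance(dc, vim.Datacenter):
        continue
    for entity in dc.hostFolder.childEntity:
        if isinstance(entity, vim.ClusterComputeResource) and entity.name == "Lab":
            cluster = entity

# Join each blade to the cluster. In practice you may also need to set
# spec.sslThumbprint, since vCenter rejects unknown host certificates.
for name in ("esx1.lab.local", "esx2.lab.local", "esx3.lab.local"):
    spec = vim.host.ConnectSpec(hostName=name, userName="root",
                                password="changeme", force=True)
    WaitForTask(cluster.AddHost_Task(spec=spec, asConnected=True))

Disconnect(si)
```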

I weighed the storage options and decided to go with VSAN. I’m a VMware guy above all else, so it was the natural choice. I’m not getting as much capacity as I wanted, but I am getting what I need and it isn’t costing me any more.
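
Enabling VSAN itself is a single cluster reconfigure against the vSphere 6.0-era API. Here is a minimal sketch continuing from the snippet above (same `cluster` object); the auto-claim setting is an assumption that fits my disk layout, letting each host’s SSDs become the cache tier and the 10K SAS drives become capacity:

```python
# Minimal sketch: enable VSAN on the cluster with automatic disk claiming.
from pyVim.task import WaitForTask
from pyVmomi import vim

vsan_cfg = vim.vsan.cluster.ConfigInfo(
    enabled=True,
    defaultConfig=vim.vsan.cluster.ConfigInfo.HostDefaultInfo(
        autoClaimStorage=True))  # let VSAN claim eligible local disks
spec = vim.cluster.ConfigSpecEx(vsanConfig=vsan_cfg)
WaitForTask(cluster.ReconfigureComputeResource_Task(spec=spec, modify=True))
```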

Why did I do this? Honestly, good question. I want to advance my career, and so far I have bet it entirely on virtualization with VMware products, so I have to keep moving forward. I have talked for years about tackling the VCAP exams and beyond. This infrastructure gives me a VCAP/VCIX/VCDX lab, my home network, and a work lab for testing client designs. It sips power as well: about 700 watts with everything up and running. My two G5 DL360s consumed close to that with less than half the RAM and less than 10% of the storage. Plus, I get 10GbE…

More to come. New layouts, graphics, links, etc.

–Doug
