Home Lab Rebuild 2018

About two years ago I decided that my pieced-together home lab wasn't working any longer. I had VMs sitting on multiple pieces of storage, some on RAID 5 arrays being presented back to the same hosts via FreeNAS, Openfiler, or the HP VSA.

To get a complete solution that was scalable, resilient to failure, had 10Gb networking, and was built from enterprise hardware, I chose the HPE C3000 BladeSystem. I picked up two 10Gb VirtualConnect Ethernet modules, three blades with Westmere CPUs plus 96GB of RAM, storage blades, SAS HDDs, and some SSDs. I decided to go with vSAN on vSphere 6.0 to provide fault-tolerant shared storage to all hosts.

Gone baby gone

The system worked well. Eventually I swapped the consumer SSDs for some enterprise SAS versions, but as soon as vSphere 6 Update 2 released, I knew I was in trouble. It seemed that the Westmere L5630 CPUs were not long for this world and would be dropped from support within the next few releases. Deduplication and compression were also introduced for all-flash configurations in that release. 6.5 introduced even more new features that weren't going to be possible on my hardware. I was also looking to do some GPU pass-through work, which wasn't possible on the blades while maintaining the current configuration. I knew they would have to be replaced.

I began my search by outlining the must-have features:

  1. Sandy Bridge or newer CPUs
  2. vSAN VCG supported hardware end to end
  3. DDR3 RAM so it would still be cheap
  4. Two PCIe slots at a minimum
  5. On-board 10Gb SFP+ to avoid wasting a PCIe slot
  6. Less than 200W power consumption without a GPU

It seemed like my choices were an HPE DL360p Gen8 or a Dell R620. After comparing costs, I picked the DL360p as it came out cheaper.

My configuration was as follows:

  1. Dual E5-2670 8-Core CPUs
  2. 192GB of RAM after recovering some memory from my blades
  3. Dual Port SFP+ on board
  4. 1RU form factor
  5. Dual 450W Platinum PSUs

Next in line was the actual storage. I reached out to someone I had bought enterprise SSDs from in the past and purchased six 1.6TB SanDisk Optimus Ascend drives, brand new off the shelf. These are great because they are enterprise grade and supported as all-flash capacity for vSAN 6.6! For the cache tier, I found a few eBay sellers with SanDisk Fusion-io ioDrive2 365GB PCIe cards with 99%+ life left!
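As a back-of-the-envelope check on what those six capacity drives actually yield, here is my rough arithmetic (my assumptions, not vSAN's exact accounting: FTT=1 with RAID-1 mirroring doubles each object's footprint, and roughly 30% is held back for slack space and overhead):

```python
# Rough usable-capacity estimate for an all-flash vSAN capacity tier.
# Assumptions (mine, not official vSAN math): FTT=1 uses RAID-1 mirroring,
# and ~30% of mirrored capacity is reserved for slack space, checksums,
# and other overhead.

def vsan_usable_tb(drives, drive_tb, ftt=1, overhead=0.30):
    raw = drives * drive_tb
    mirrored = raw / (ftt + 1)      # RAID-1: each FTT level adds a full replica
    return mirrored * (1 - overhead)

raw_tb = 6 * 1.6
usable_tb = vsan_usable_tb(6, 1.6)
print(f"raw: {raw_tb:.1f} TB, usable at FTT=1: {usable_tb:.2f} TB")
```

So 9.6TB raw works out to somewhere around 3.4TB of comfortably usable space, which is why the capacity numbers never look as good as the drive labels.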

A Quanta LB6M switch from UnixPlus provides the 10Gb networking for my new core. A handful of SFP+ LC modules and some OM3 LC fibre cables rounded out the parts list.

With all of that together, I now have a pretty sweet lab. I should be satisfied for at least 3 or 4 weeks.

Stay tuned for a follow up on the build and performance testing.

VSAN 6.2, Horizon 7 and much more….

The last few weeks have been exciting. VMware released vSphere 6.0 U2, which includes VSAN 6.2! Last week VMware released Horizon 7, which adds a plethora of new and much-sought-after features! On Monday VMware also released a new Fling, an HTML5 vSphere Web Client, which is likely to become a fully supported client in the next major release of vSphere.

The first order of business was to get 6.0 U2 installed. The vCenter upgrade went well. The host upgrades went well. Everything was great until I started the VSAN disk format upgrades. It's well documented now, but I ran into the following error:

Failed to realign following Virtual SAN objects: be9aa152-5bae-c9b2-d859-0017a4770001, c50dc656-25e3-56ec-f252-0017a4770008, 8870a152-c758-fff610a9-0017a4770001, due to being locked or lack of vmdk descriptor file, which requires manual fix.

This, as it turns out, is related to a CBT (Changed Block Tracking) bug. Find the VMware KB article here.

I resolved it by identifying the VMs with issues and shutting them down during the upgrade. With that all done, I was now on vCenter 6.0 U2, ESXi 6.0 U2 and VSAN 6.2!
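Identifying the affected VMs means mapping the object UUIDs in that error back to VMs (for instance with RVC's object-info commands). Here is a sketch of the first step I scripted: pulling the UUIDs out of the error text. The error string below is an excerpt of the one above, and the regex is just my loose guess at the shape of vSAN's hyphenated hex UUIDs:

```python
import re

# Excerpt of the vSAN disk-format upgrade error quoted above.
ERROR = ("Failed to realign following Virtual SAN objects: "
         "be9aa152-5bae-c9b2-d859-0017a4770001, "
         "c50dc656-25e3-56ec-f252-0017a4770008, "
         "due to being locked or lack of vmdk descriptor file, "
         "which requires manual fix.")

def extract_uuids(text):
    # Loose pattern: 8 hex chars, then one or more hyphenated hex groups.
    return re.findall(r"[0-9a-f]{8}(?:-[0-9a-f]{4,12})+", text)

print(extract_uuids(ERROR))
```

From there each UUID can be looked up to find the owning VM, which is how I built the list of VMs to shut down during the upgrade.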

The next few days involved some performance tuning, playing with vRealize, and generally getting everything happy.

On to the latest announcement: VMware released Horizon 7 last week. So I downloaded it and started setting everything up. I'm preparing for the VCAP6-DTM and didn't want to interrupt my Horizon 6 environment, so I built a separate set of servers for 7. The Composer server, Composer DB, and Connection Servers all deployed nicely. I cloned my master images from my Horizon 6 environment so that I didn't have to rebuild anything there.

The Horizon 7 feature I am most interested in is Instant Clones. I'm also interested in the Blast Extreme changeover, but that is nothing too new. The new firewall changes are here.

Back to Instant Clones. These took me a while to get up and running. I wanted to see how much I could stress my storage and still have everything working. I ended up modifying the Horizon 7 storage policies for VSAN the same way I had to for 6. By default, VMware Horizon deploys VSAN storage policies setting the FTT, stripe width, cache, etc. I changed the stripe width from 1 to 3, which took my clone times down from 64 minutes to 33 minutes.
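For context on those numbers, my quick arithmetic on the improvement (note the scaling is sub-linear: tripling the stripe width did not triple the speed):

```python
# Pool deployment times from my tests: stripe width 1 vs stripe width 3.
before_min, after_min = 64, 33

speedup = before_min / after_min                  # how much faster overall
reduction_pct = (1 - after_min / before_min) * 100  # % of time saved

print(f"speedup: {speedup:.2f}x, time saved: {reduction_pct:.0f}%")
```

Roughly a 1.9x speedup for 3x the stripes, so the extra components help, but other bottlenecks clearly remain.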

How well do instant clones work? Not great yet. There is a limit of 2 monitors, no Persona Management, no 3D, etc. Here is the list of things that aren't supported yet:

In Horizon 7.0, instant clones have certain restrictions:

  • Single-user desktops only. RDS hosts are not supported.
  • Floating user assignment only. Users are assigned random desktops from the pool.
  • Instant-clone desktops cannot have persistent disks. Users can use VMware App Volumes to store persistent data.
  • Virtual Volumes and VAAI (vStorage APIs for Array Integration) native NFS snapshots are not supported.
  • Sysprep is not available for desktop customization.
  • Windows 7 and Windows 10 are supported but not Windows 8 or Windows 8.1.
  • PowerCLI is not supported.
  • Local datastores are not supported.
  • IPv6 is not supported.
  • Instant clones cannot reuse pre-existing computer accounts in Active Directory.
  • Persona Management is not available.
  • 3D rendering is not available.
  • You cannot specify a minimum number of ready (provisioned) machines during instant clone maintenance operations. This feature is not needed because the high speed of creating instant clones means that some machines are always available even during maintenance operations.

It’s a long list. Persona management is a big killer for me. Local datastores are next on my “big deal” list.

They also don't seem to perform as well: noticeably worse than my linked clones with the same configuration.

This is why we test and this is why we don’t deploy brand new stuff in production.

VMware UEM 9 was released as well. It's not on my list right now, but it probably will be by summer.

More updates to come.

VDI Test Lab

One of the best things about my new home lab is the increased capacity for testing. After reconfiguring all of my production VMs for VSAN and getting them moved over with distributed switches, vRealize, etc., I finally get to add something new. So tonight I started configuring my VDI test lab. I don't really want to deploy a bunch of bloated Windows desktops, so I decided this would be a great opportunity to do some Linux desktops. I have over six years of experience with View, and this will be my first attempt at Ubuntu linked clones. I'll follow up with a nice and simple how-to when I'm done.

Out With the Old

It’s been a considerably long time since this website really contained anything meaningful. Between work, car racing, my house, studying for certs and life in general, it just got put on the back burner.

My home lab suffered quite a bit too. I didn’t spend as much time working on it as I wanted to and, despite my best intentions, I started actually having to test products by placing them in production. This is far from an ideal scenario and something that I personally hate to do.

That was then and this is now.

The site is up. A friend and I are working on graphics and layouts. Lots of changes to be coming here in the future. In order to start things off, here is an update on my home lab.

For the last three years my lab consisted primarily of one physical ESXi host running on a Supermicro barebones server: 1RU with a 12-core AMD processor and 32GB of RAM. It had a few local SATA hard drives and a SATA SSD for host caching. It worked. I ran some small VMs and had some fun. About 18 months ago, I added two HP DL360 G5 hosts for the ESXi 6 beta. I built an Openfiler SAN on their local SAS disks and had a little more fun, but it was not a great setup: no real SDS, no real IOPS, not a lot of capacity.

So, about three months ago, I was talking with a friend of mine and he explained what he was doing with his lab. This got the wheels turning and I started doing some real research. I decided that if I was going to claim to be a top tier VMware and data center architect, I needed to know what I was talking about by getting my hands dirty.

Fast forward two months: I started buying gear off eBay and assembling it. This week, I burned everything in. I had some first-48-hour failures and rectified most of the issues, at least enough to actually start building.

Here is the result:

Oh yes he did....
C3000 Lab

So what did I do now? I went from small and manageable to completely insane. Here is the hardware selection:

  • HPE BLC3000 BladeSystem Chassis with 6 PSU and 6 fans
  • 3 BL460c G7 blades with 2 quad-core Xeons & 96GB RAM
  • 3 SB40c Storage Blades
  • 2 VirtualConnect Flex-10 modules
  • 18 600GB 10k SAS drives
  • 6 Sandisk 240GB SSD drives

I loaded ESXi 6, added the hosts to a Windows vCenter, and started the migration. This is a great time to plug the VMUG Advantage program: $200 a year gets me access to most of the software I need. vSphere, Horizon, vRealize, vCloud, etc. Everything is legit and going strong.

I weighed the options on storage and decided to go VSAN. I'm a VMware guy above all else, so that became the right choice. I'm not getting as much capacity as I wanted, but I am getting what I need and it isn't costing me any more.

Why did I do this? Good question, honestly. I want to advance my career, and so far I have bet it entirely on virtualization with VMware products, so I have to keep moving forward. I have talked for years about moving on to the VCAP exams and beyond. This infrastructure will let me run my VCAP/VCIX/VCDX lab, my home network, and a work lab for testing client designs. It sips power as well: 700 watts when everything is up and running. My two G5 DL360s consumed close to that with less than half the RAM and less than 10% of the storage. Plus, I get 10GbE…
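To put that 700 watts in perspective, here is a quick annual-cost sketch (the $0.12/kWh electricity rate is my assumption; substitute your own):

```python
# Annual energy use and cost at a steady power draw.
# The $/kWh rate is an assumed example value, not a quoted utility rate.
def annual_cost(watts, usd_per_kwh=0.12):
    kwh_per_year = watts * 24 * 365 / 1000
    return kwh_per_year, kwh_per_year * usd_per_kwh

kwh, usd = annual_cost(700)
print(f"{kwh:.0f} kWh/yr, about ${usd:.0f}/yr at $0.12/kWh")
```

A steady 700W works out to a bit over 6,100 kWh a year, so the efficiency gain over the old G5s is real money, not just a spec-sheet win.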

More to come. New layouts, graphics, links, etc.