VMware Home Lab – Part #1


7th September 2018 · ESXi, Homelab

In this post I go through my initial thoughts and decisions on the choice of hardware.

As this is my first blog article, I'm spreading the topic over several parts. This multipart series charts my journey of architecting, purchasing and building my VMware home lab environment.

I'm the kind of person who loves to understand and appreciate how 'things' work and, where possible, to improve on them… relatively speaking. I also like to tinker (break?) things… I'm sure there's a happy medium somewhere?!

It wasn't that long ago (depending on your own perception of the passing of time) that I was building and maintaining a small on-premises data center, including the networking, security and support for remote workers. There were six rack servers, each running Windows Server and a specific business application. The lack of air conditioning made the server room hot and unpleasant, whilst being as noisy as standing next to an airplane engine. But it was the perfect 'hands-on' learning experience.

Thankfully, nowadays, off-premises (internet-based) cloud resources can help with data center demands and provide numerous business and technical benefits. It's through these benefits that we see a strong Test/Dev and Pre-Production draw, but also an increasing demand for hosting live environments.

My reason for a VMware lab is to get closer 'hands-on' experience with the VMware product stack, rather than just running a very limited nested environment (vSphere, vCenter, vROps) on my poor old Dell laptop. My technical requirement for the lab was relatively simple: to stand up the VMware vCloud Suite and the additional components the vRealize Suite brings. I always like to aim high before realization kicks in and normalizes my thoughts on what is possible.

Through conversations with numerous colleagues, I had initially focused on small form factor hardware. This appealed because the units could sit nicely on my home office desk. I thought three servers, making a three-node cluster, should allow me to perform most functions.

For the average person looking to have their own VMware 'home lab' (data center), without the ongoing expense of paying for internet-based cloud resources, but with the flexibility to tear down, rebuild and maintain the environment, the home lab is king of the hill.

For those lucky enough to have access to VMware products or trial keys: awesome, that's half the job done. That left me needing to choose the hardware for my VMware home lab, but first I needed to ask myself (in no particular order):

  • What do I want to do with the lab?
  • Where am I going to locate the hardware?
  • What architecture will best serve me for flexibility?
  • Do I want the hardware to be modular?
  • Can I just use budget laptops or my old desktop?
  • Would a small form factor, power-efficient server be a better fit than budget-friendly rack servers?

Plus a dozen or so other questions :-). However, I suspect if you're reading this you will have your own set of questions and ideas.

So, after these questions, I had a large selection of choices to whittle down. Starting with the barebones small form factor option, I narrowed my search to the following three choices:

  • Supermicro Micro Server E200-8D
  • Gigabyte BRIX GB-BRi5-8250
  • Intel NUC NUC7i5BNH i5

At this point it's worth mentioning that my budget was, ideally, no more than £1,500. Computer/server hardware looks to be more expensive in the UK than in countries like the US or Canada, and £1,500 is a lot of money to come out of a personal budget. I needed to maximize my 'bang per buck', so to speak.

If money were no object, it would be 3x the Supermicro server, as each one can house 128GB RAM, which no other small form factor bare-metal computer offers. Unfortunately, three of these units, fully populated with memory but not otherwise fully spec'd, come to £2k. Plus, not many places in the UK sell them. That ruled these out.

At this point the NUC and BRIX still appealed, as they were substantially cheaper and still modular. However, the NUC was limited to 32GB of memory per host, which wouldn't be enough for a three-node cluster running the vCloud and vRealize Suite components. That ruled the NUC out.
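To illustrate the sort of back-of-the-envelope sizing behind that decision, here's a rough sketch. The appliance memory figures below are illustrative assumptions on my part, not official VMware sizing numbers; check the documentation for your product versions.

```python
# Rough cluster-sizing sketch: can a small cluster hold the suite?
# Appliance RAM figures are assumed for illustration only.
APPLIANCE_RAM_GB = {
    "vCenter Server Appliance": 12,
    "vRealize Operations": 16,
    "vRealize Automation": 18,
    "vRealize Log Insight": 8,
    "NSX Manager": 16,
}

def fits(nodes: int, ram_per_node_gb: int, ha_reserve_nodes: int = 1) -> bool:
    """Compare usable RAM (keeping one node's worth of failover
    headroom) against the total appliance demand."""
    usable = (nodes - ha_reserve_nodes) * ram_per_node_gb
    return usable >= sum(APPLIANCE_RAM_GB.values())

print(fits(3, 32))  # NUC-class hosts: 64 GB usable vs ~70 GB demand -> False
print(fits(3, 64))  # BRIX-class hosts: 128 GB usable -> True
```

Even with generous rounding, 32GB hosts leave no headroom once you reserve capacity for a host failure, which is what pushed the NUC off the list.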

The BRIX was better, with a memory maximum of 64GB DDR4 per host, plus the Intel i5-8250 quad-core processor. Admittedly not the best i5 processor (still cheaper than the i7, though), but it would provide enough flexibility and vCPUs for running my chosen VMware software.

Cross-checking these choices against the VMware HCL, I found that not all models are listed, but most are, and for anything you don't find on the HCL, you can almost certainly find someone online who has gotten it to work without too much effort. So any of these three models would work (spec limitations aside).

However, with each Gigabyte fully loaded (64GB RAM, a 120GB M.2 card and a 500GB SSD), a quantity of three would be over £2k… too expensive. I looked at running just two hosts, but I need that third node as a witness for vSAN. That ruled these out.

Stepping away from small form factor, another option was second-hand laptops and desktops/PC towers. I'll get straight to the point with both. The laptops weren't particularly modular, and upgrading them to a suitable spec wasn't cost-efficient. Desktops/PC towers were proving challenging, as most didn't seem to take more than 32GB of RAM, unless I wanted to pay big bucks for a pro gaming spec, which I didn't. That ruled those out.

This left the eBay option or reconditioned-server retailers for actual server hardware, either tower or rack based. The challenge here is age vs cost. From a rack perspective, I chose to focus on the Dell PowerEdge R series or HP ProLiant DL series, because they offer a dual-socket setup with PSU redundancy, large memory capacity, and modular drive bays with RAID. More importantly, they offer the bang per buck vs horsepower required. The downside is that the fan noise can be, shall we say, noticeable… from miles away 😊. I'm lucky enough to have a garage I can run them in, where the noise doesn't become an issue.

From a tower perspective, I decided to look at the tower-based cousins of the above rack servers: the Dell PowerEdge T610/T620 and the HP ProLiant ML350 G6 series. These are big (deep) tower servers, but thankfully the physical size isn't too much of an issue, as I plan to run them in the garage.

All of these are 'older' or 'refurbished' servers, but they are fully modular, so they can be upgraded or repaired, and they will certainly be good enough to last several years as a home lab.

One of the benefits of utilizing a proper physical server, as opposed to a small form factor PC, is that I only need two of them, not three (which was the original plan with the small form factor). It would then be possible to build a nested environment on both servers, providing three or more potential hosts.
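On the nested-host point: running ESXi as a guest VM relies on exposing hardware-assisted virtualization to the guest, which on vSphere is a per-VM setting. The .vmx fragment below is a sketch of the usual flags; treat the exact names and values as assumptions to verify against VMware's documentation for your version.

```ini
# Expose Intel VT-x/EPT to the guest so a nested ESXi host can run 64-bit VMs
vhv.enable = "TRUE"
# Guest OS type for a nested ESXi 6.5 host (illustrative value; match your version)
guestOS = "vmkernel65"
```

The same setting is reachable in the vSphere client as "Expose hardware assisted virtualization to the guest OS" under the VM's CPU settings.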

In conclusion, my final decision for hardware is rack servers. Second-hand, these are cheaper than towers, presumably because they are more challenging to house/mount/store at home. As it's a rack server, the absolute minimum spec I'm going with will be:

  • 2x Intel Xeon E5-2630L hex-core, 2.60 GHz
  • DDR3 128GB RAM
  • RAID Support
  • M.2 NVMe drive, 120/250GB, for vSAN
  • 1 TB HDD capacity per server
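For a sense of what that minimum spec adds up to across the two-server lab, here's the quick arithmetic (assuming Hyper-Threading is enabled on the E5-2630L, which is a hex-core part with two threads per core):

```python
# Aggregate capacity of the planned two-server lab at the minimum spec above.
servers = 2
sockets = 2              # dual-socket chassis
cores_per_socket = 6     # E5-2630L hex-core
threads_per_core = 2     # with Hyper-Threading enabled
ram_per_server_gb = 128

physical_cores = servers * sockets * cores_per_socket
logical_cpus = physical_cores * threads_per_core
total_ram_gb = servers * ram_per_server_gb

print(physical_cores, logical_cpus, total_ram_gb)  # 24 48 256
```

Twenty-four physical cores (48 logical CPUs) and 256GB of RAM across two hosts is comfortably more than the three small form factor nodes would have offered.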

From a price-point and support perspective, the Dell PowerEdge R620 is the choice to go with. There are many other excellent offerings out there, but it all comes down to budget. I already have a network NAS with a small NFS store if required, but I want everything close by.

Part #2 of this series will go through the config and design of the VMware software…