Project Home Lab: Existing Infrastructure

In this second post in my Project Home Lab series, I’m going to cover, fairly loosely, what I’ve already got in my environment at home. I need to take this into account to determine whether I can keep it all or whether I need to make more fundamental changes to my environment too.

This series will consist of the following posts. I will update the table of contents with the new page links in each post as I produce and publish the articles.

  1. Project Home Lab: Goals
  2. Project Home Lab: Existing Infrastructure
  3. Project Home Lab: Hardware Decisions
  4. Project Home Lab: Network Decisions
  5. Project Home Lab: Shopping List

Racking

I’m fortunate that my wife lets me have a server rack in the garage, which is what allows me to even chase the Project Home Lab ambition. Currently, this is a 12U rack I built myself from wooden panels and some 12U AV posts I got from eBay. It’s served me well, although it has its nuances.

  • Non-removable side panels make access tricky
  • No wheels or castors, making rear access non-existent as the rack is backed into a corner
  • No cooling aids such as top vents or air ducting

The rack is probably going to have to go, for three reasons. Firstly, there isn’t going to be enough U space in the rack for the new hardware I’m going to be looking at. Secondly, I need more access into the rack so that when I add cabling or investigate faults, I can get in and check it all without spending more time gaining access than doing the task in hand. The third reason is weight: the new equipment, such as new rack chassis and the like, will all add weight, and I don’t think the wooden panels will support the extra load.

Power

Currently, my rack gets its power from an APC 750VA 1U RM UPS. I’ve had it for about six years and it’s been faultless. I currently operate at about 20% load, which gives me a runtime of around 25 to 30 minutes on battery. With the addition of the new equipment, I can probably keep the UPS load within its capacity limits, but this is going to severely hamper my battery runtime. I’d like to keep a minimum of 15 minutes of battery to protect against short-term power outages, so the UPS may need to be replaced.
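
As a rough back-of-envelope, and assuming runtime scales inversely with load (optimistic, since real batteries discharge disproportionately faster under heavier loads), you can see the problem. The 40% figure below is a hypothetical new load, not a measurement:

```powershell
# Rough UPS runtime estimate. Assumes runtime scales inversely with
# load, which flatters the battery; real discharge curves are worse.
$currentLoadPct = 20     # ~20% load on the 750VA UPS today
$currentRuntime = 25     # minutes on battery at that load (lower bound)

# Hypothetical: the new lab kit roughly doubles the draw to 40% load
$newLoadPct = 40
$estimated = $currentRuntime * ($currentLoadPct / $newLoadPct)
"Estimated runtime at $newLoadPct% load: $estimated minutes"
# ~12.5 minutes - already under my 15 minute minimum, before even
# accounting for battery age
```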

A secondary issue with the UPS is connectivity. This model of UPS has four IEC C13 outlets, as do most small form factor UPS units, so I’m going to need to invest in a power distribution unit (PDU) or two to add extra outlets for the new devices. The reason for two rather than a single PDU is that I want to spread the power load over the physical outlets on the UPS so that I’m not driving all the power through a single outlet and potentially burning it out.

Network

My network core lives in the rack right now and this is where it will stay. I currently have a Cisco ASA 5520 firewall and a TP-Link TL-SG3424 gigabit 24 port switch. Both of these will certainly be kept as is.

The ASA is amazing. It’s running just shy of the latest Cisco ASA software release with a fully upgraded 2GB of RAM. It handles the Layer 3 inter-VLAN routing of my home VLANs and also acts as my edge router, receiving my 120Mbps Virgin Media cable connection, and it barely cracks 5% CPU and 512MB of memory usage. I’ve got no doubts about whether it can handle the new lab traffic, and when you look at the specification of the Cisco ASA 5520, is it any wonder?
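
For anyone wondering what “the firewall does the inter-VLAN routing” looks like in practice, here’s a minimal sketch of ASA sub-interfaces, one per VLAN, hanging off a single trunk port down to the switch. The interface names, VLAN IDs and addressing are illustrative, not my actual configuration:

```
interface GigabitEthernet0/1
 no shutdown
!
! One sub-interface per VLAN; the switch port facing this is a trunk
interface GigabitEthernet0/1.10
 vlan 10
 nameif home
 security-level 100
 ip address 192.168.10.1 255.255.255.0
!
interface GigabitEthernet0/1.20
 vlan 20
 nameif lab
 security-level 50
 ip address 192.168.20.1 255.255.255.0
```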

The TP-Link switch is a Layer 2 managed switch with 24 gigabit ports. I’m using two of the ports in a LAG up to my access switch in my home office, another two ports in a LAG to the ASA and a third pair of ports in a LAG to my home server. The remaining ports connect to devices in the main area of the house. For £125, this is a great switch: it supports all of the enterprise features you would expect from a name-brand Layer 2 managed switch from the likes of Cisco, HP or Dell, but at a fraction of the cost. Reliability and performance have never been an issue and I don’t foresee them becoming one. Lastly, it’s silent, as it is passively cooled, keeping both the noise and the BTU output of the rack down.
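
On the server side of that third LAG, the teaming is handled by Windows Server 2012 R2’s built-in NIC teaming. A minimal sketch, with placeholder team and adapter names (check yours with Get-NetAdapter), and assuming the switch’s LAG is configured for LACP to match:

```powershell
# Team two physical NICs into a single LACP team facing the switch LAG.
# "Ethernet" and "Ethernet 2" are placeholder adapter names.
New-NetLbfoTeam -Name "SwitchLAG" `
    -TeamMembers "Ethernet", "Ethernet 2" `
    -TeamingMode Lacp `
    -LoadBalancingAlgorithm Dynamic
```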

I have two issues with the current switch, however, relating to the new lab: one is port count and the other is performance impact. With the current port occupation on the switch, it is highly unlikely that I will be able to get everything connected to it, so I will likely be adding a leaf switch to connect the lab devices, with an uplink or two from the leaf into the core. The second issue is that I like how my home network performs right now; if I started throwing Hyper-V over SMB 3.0 File Server traffic across it all day long, I’m not sure how much my home production network would suffer. This adds credence to adding the leaf switch: with it, the only traffic that needs to leave the confines of the lab back into the core is packets destined for the internet or administrative connections from me into the lab via Remote Desktop Services or management consoles.
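
If I ever wanted belt and braces on top of that physical separation, SMB Multichannel can also be pinned to specific interfaces so that storage traffic can’t wander onto the wrong NIC. A hedged sketch; the server name and interface alias below are placeholders, not real hosts in my lab:

```powershell
# Constrain SMB connections to the named file server so they only use
# the lab-facing adapter. "LAB-FS01" and "Lab" are placeholders.
New-SmbMultichannelConstraint -ServerName "LAB-FS01" -InterfaceAlias "Lab"
```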

Cabling

All of my cabling at home is shielded category 6 cable wired into a category 6 patch panel, with homemade patch leads from the panel into the switch. I test all of my cabling with a Fluke tester to validate that I’m going to get good, clean transmission over the wire. I try to use wired connections in the house wherever possible as I like having that constant, reliable gigabit speed compared with the relative slowness of 300Mbps 802.11n wireless and its potential disruptors such as DECT cordless phones, Bluetooth and microwaves.

I’m going to continue using this cabling in the new lab. I won’t be using fibre or InfiniBand due to the complexity and cost; sticking to category 6 copper keeps my cable media uniform across all my devices.

Server

I’ve got one server right now, running Windows Server 2012 R2 Essentials. This acts as the core of everything in the house, offering Directory Services, DHCP and DNS, not to mention being a backup target and a media streaming server. It’s currently housed in an RM 400/10 4U rack enclosure from X-Case. I upgraded the case about two years ago with hot-swap drive caddies to let me add and remove drives in my Storage Spaces Storage Pool easily. Inside the case is an ASUS ATX desktop motherboard with an Intel Core i5 3470T low-power processor and 12GB of DDR3 RAM.
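
Those hot-swap caddies pair nicely with how Storage Spaces works: new drives just get absorbed into the pool. A minimal sketch of the PowerShell involved, with an illustrative pool name rather than my actual one:

```powershell
# Build a pool from every disk that is eligible for pooling.
# "HomePool" is an illustrative name, not my real pool.
New-StoragePool -FriendlyName "HomePool" `
    -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName `
    -PhysicalDisks (Get-PhysicalDisk -CanPool $true)

# Later, after hot-swapping a new drive in, absorb it into the pool
Add-PhysicalDisk -StoragePoolFriendlyName "HomePool" `
    -PhysicalDisks (Get-PhysicalDisk -CanPool $true)
```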

Although I’m really happy with the performance of this server right now, I’m a sucker for consistency and the aesthetics of things. If I can get parts at the right prices, I may well give my home server a little upgrade so that the parts inside match those of the new servers. Admittedly, this is a silly thing to do to cure a minor case of OCD, but in real terms it means that if I ever have a suspected failed part, I can swap parts between servers to test as needed.

What’s Next

To be honest with you from the start, I’m actually writing some of these articles after the fact: I started this project over a month ago and I already have quite a few of the hardware parts ready for use. In the next post, I’ll explain my thought process for selecting the hardware I’ve already bought, what I still need to purchase, and why I’ll be purchasing those parts.

I’ll do a summary of all of the prices too, for the budding lab builders among you to use as a reference.