X-Case

Project Home Lab: Open Server Surgery

So with my recent bout of activity on the home lab project front, evident from my previous posts Project Home Lab: Server Build and Project Home Lab: Servers Built, I've forged ahead and got some of the really challenging and blocking bits of the project done over Christmas. I alluded to what I needed to do next in the post Project Home Lab: Servers Built. All of this work paves the way for me to get the project completed in good order, hopefully all during January, at long last.

In this post I'm just going to gloss over some of the details about the home server move that I completed over the weekend. Many more hours of thinking, note taking and planning went into this than probably should have, but I don't like failure, so I like to make sure I have all the bases covered and all the boxes ticked. Most critically, I had to arrange an outage window and downtime with the wife for this to happen.

Out with the Old

The now previous incarnation of my Windows Server 2012 R2 Essentials home server lived in a 4U rack mount chassis. As it was the only server I possessed at the time, I never bothered with rack mount rails, so problem one was that the server was just resting atop my UPS in the bottom of the rack.

Problem two, and luckily something which has never bitten me previously but has long bothered me, is that the server ran on desktop parts inside a server chassis. As a result, it had no IPMI interface for out of band management, which would have let me remotely access the keyboard, video and mouse (a KVM no less) if something went wrong with the Windows install or a warning appeared in the BIOS. It had an Intel Core i5 3470T processor with an ASUS ATX motherboard and unbuffered desktop memory, with a desktop power supply, albeit a high quality Corsair one. All good quality hardware, but not optimal for a server build.

The biggest problem, however, was that the 4U chassis, a previous purchase from X-Case a couple of years ago, stood 4U tall but only had capacity for ten external disks. I had two 2.5″ SSDs for the operating system mounted internally in one of the 5.25″ bays in a dual 2.5″ drive adapter, in addition to the external drives. It all worked nicely but it wasn't ideal, as my storage needs are growing and I only had two free slots. Although not a problem as such, the hot swap drive bays, added to the chassis with an aftermarket upgrade from X-Case, didn't use SAS Multilane SFF-8087 cables but instead used SATA connections, which meant that from my LSI 9280-16i4e RAID Controller I had to use SAS to SATA Reverse Fanout cables, making the whole affair a bit untidy.

None of this is X-Case's fault, let us remember. The case did its job very well, but its capabilities no longer met my evolving and increasingly demanding needs.

Planning for the New

Because I like there to be order in the force, per my shopping list at Project Home Lab: Shopping List, I bought a new 3U X-Case chassis for my home server at the same time as buying up the lab components. Getting the home server set straight is priority one because the 4U chassis is a blocker to me getting any further work done, as the 3U and 2U lab servers need to fit in above it. In addition to moving chassis, I've given it an overhaul with a new motherboard and CPU to match the hardware in the lab environment. A smaller catalogue of parts means less knowledge required to maintain the environments and gives me an easy way of upgrading or retro-fitting in the future with the single design ethos.

As anyone knows, changing the motherboard, processor and all of the underlying system components in a Windows server is potentially a nightmare in the making, so I had to plan well for this.

I had meticulously noted all of the drive configurations from the RAID Controller down to the last detail, including which drives connected to which SATA port on which controller port, and I had a full backup of the system state to perform a bare metal recovery if I needed to. All of our user data is backed up to Azure so that I can restore it if needed, although in honesty I didn't expect any problems with the data drives; it was the operating system drives I was most concerned about.
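For anyone planning a similar move, the system state backup itself is straightforward from PowerShell. This is only a minimal sketch using the Windows Server Backup module rather than my actual backup job, and the E: target volume is a placeholder:

    # Minimal sketch: one-off system state backup with Windows Server Backup.
    # Assumes the Windows Server Backup feature is installed and E: is a spare
    # volume with enough free space (both assumptions for illustration only).
    Import-Module WindowsServerBackup

    $policy = New-WBPolicy                           # start with an empty backup policy
    Add-WBSystemState -Policy $policy                # include system state (AD, registry, boot files)
    $target = New-WBBackupTarget -VolumePath "E:"    # hypothetical target volume
    Add-WBBackupTarget -Policy $policy -Target $target
    Start-WBBackup -Policy $policy                   # run the backup once, synchronously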

In with the New

After getting approval for the service outage from the wife and shutting down the old home server, I got it all disconnected and removed from the rack. I began the painful process of unscrewing all eight of my drives from the old chassis drive caddies and the two internal drives and reinstalling them into the new caddies using the 2.5″ to 3.5″ adapters from the shopping list. I think I probably spent about 45 minutes carefully screwing and unscrewing drives while noting which slot I removed them from and which slot I installed them into.

With all the drives moved over, I moved over the RAID Controller and connected the SAS Multilane SFF-8087 cables to the controller, their tail ends already connected to the storage backplanes in the chassis.

Once finished, I connected up the power and the IPMI network port on the home server, which I had already configured with a static IP as the home server is my DHCP Server, so it wouldn't be able to get an automatic lease address. I got connected to the IPMI interface okay, powered the server on using it and quickly flipped over to the Remote Control mode which, I have to say, works really nicely even when you consider that it's Java based.
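If you're doing something similar, it's worth confirming the IPMI interface answers on its static address before the old hardware goes offline. A trivial check from another machine might look like this; the address is made up and the ports are just the usual web and IPMI-over-LAN defaults:

    # Hypothetical IPMI address - substitute your own static assignment.
    Test-NetConnection -ComputerName 192.168.1.50 -Port 443    # IPMI web interface / KVM over HTTPS
    Test-NetConnection -ComputerName 192.168.1.50 -Port 623    # IPMI-over-LAN (RMCP), if enabled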

Up with the New

While I was building the chassis for the home server, I had already done some of the pre-work to minimize the downtime. The BIOS was already upgraded to the latest version, along with the firmware for the on-board SAS2008 controller and the IPMI. I had also already configured all of the BIOS options for AHCI and a few other bits (I'll go through all of the technicalities of this in another post later).

First things first, the Drive Roaming feature on the LSI controller, which I blogged about previously in Moving Drives on an LSI MegaRAID Controller, worked perfectly. All 9 of the virtual drives on the controller were detected correctly, the RAID1 Mirror for the OS drives stayed intact and I knew that the first major hurdle was behind me. A problem here would have been the most significant to timely progress.

The boot drive was picked up okay by the LSI RAID Controller BIOS and the Windows Server 2012 R2 logo appeared, at least showing me that it was starting to do something. It hung here for a couple of minutes and then the words “Getting Devices Ready” appeared. The server hung there for at least another 10 minutes, at which point I was starting to get worried. Just when I was thinking about powering it off, moving all the drives back and reverting my changes, a percentage appeared after the words “Getting Devices Ready”, starting at 35%, and it quickly soared up to 100% and the server rebooted.

After this reboot, the server booted normally into Windows. It took me about another hour after this to clean up the server. First I had to reconfigure my network adapter team to include the two on-board Gigabit Ethernet adapters on the Supermicro motherboard as I am no longer using the Intel PCIe adapter from the old chassis. Then, using the set devmgr_show_nonpresent_devices=1 trick, I removed all of the references to, and uninstalled the drivers for, the old server hardware.
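For reference rather than a record of my exact commands, rebuilding the team over the two on-board ports from PowerShell looks something like this; the adapter names, team name and teaming mode here are assumptions, not my configuration:

    # Identify the two on-board Intel ports first.
    Get-NetAdapter

    # Create a new LBFO team from them (names and mode are placeholders).
    New-NetLbfoTeam -Name "HomeServerTeam" `
        -TeamMembers "Ethernet", "Ethernet 2" `
        -TeamingMode Lacp

    # The old-hardware clean-up was then done from an elevated command prompt:
    #   set devmgr_show_nonpresent_devices=1
    #   devmgmt.msc   (View > Show hidden devices, then uninstall the ghost entries)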

After another reboot or two to make sure everything was working properly, a thorough check of the event logs for any Active Directory, DNS or DHCP errors and a test from my 3G smartphone to make sure that my published website was running okay on the new server, I called it a success. One thing I noted of interest here was that Windows did not appear to require re-activation as I had suspected it would. A motherboard and CPU family change would normally be considered a major hardware update which requires re-activation of the license key, but even checking now, it reports activated.
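If you want to run a similar post-move health check from PowerShell, something along these lines works; the log names assume a domain controller that is also running DNS, and the licensing query is just a generic way of reading the activation state:

    # Errors and warnings from the last day in the AD, DNS and System logs.
    Get-WinEvent -FilterHashtable @{
        LogName   = 'Directory Service', 'DNS Server', 'System'
        Level     = 2, 3                     # 2 = Error, 3 = Warning
        StartTime = (Get-Date).AddDays(-1)
    } -ErrorAction SilentlyContinue |
        Format-Table TimeCreated, LogName, Id, Message -AutoSize

    # Activation state of the installed Windows SKU (LicenseStatus 1 = licensed).
    Get-CimInstance SoftwareLicensingProduct |
        Where-Object { $_.PartialProductKey } |
        Select-Object Name, LicenseStatus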

Here are some Mr. Blurrycam shots of the old 4U chassis after I removed it and the new 3U chassis in the rack.

WP_20141230_009

WP_20141230_006

As you can see from the second picture, the bottom 3U chassis is powered up and this is the home server. In disk slots 1 and 5 I have the two Intel 520 Series SSDs which make up the operating system RAID1 Mirror and in the remaining eight populated slots are all 3TB Western Digital Red drives.

Above the home server is the other 3U chassis which will be the Lab Storage Server once all is said and done and at the very bottom I have the APC 1500VA UPS which is quite happy at 20% load running the home server along with my switches, firewall and access points via PoE. I’ll post some proper pictures of the rack once everything is finished.

Behind the scenes, I had to do some cabling in the back of the rack to add a new cable for the home server IPMI interface, which I didn't have before, and the existing cables for the home server NIC Team were a bit too tight for my liking, caused by the 3U Lab Storage Server above being quite deep and pulling on them slightly. To fix this, I've patched up two new cables of longer length and routed them properly in the rack. I've got a lot of cables to make soon for the lab (14 no less) and I will be doing some better cable management at the same time as that job. One of the nice touches on the new X-Case RM 316 Pro chassis is the front indicators for the network ports, both of which light up and work with the Supermicro on-board Intel Gigabit Ethernet ports. The fanatic in me wishes the network lights were blue LEDs to match the power and drive lights, but that's not really important now, is it?

More Home Server Changes

The home server has now been running for two days without so much as a hiccup or a cough. I'm keeping an eye on the event logs in Windows and the IPMI alarms and sensor readings during the bedding-in period and it all looks really happy.

To say thank you to the home server for playing so nicely during its open server surgery, I've got three new Western Digital 5TB drives to feed it some extra storage. Two of the existing 3TB drives will be coming out to make up the bulk storage portion of the Lab Storage Server Storage Space and one drive will be an expansion, giving me a net uplift of 9TB capacity in the pool. I would be exchanging the 3TB drives in the home server for larger capacity drives one day in the future anyway, so I figured I may as well do two of them early and make good use of the old drives for the lab.
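The drive swap itself should be a fairly routine Storage Spaces exercise. As a rough sketch with hypothetical pool and disk names rather than my actual configuration:

    # Add a new 5TB drive to the existing pool (names are placeholders).
    $new = Get-PhysicalDisk -CanPool $true
    Add-PhysicalDisk -StoragePoolFriendlyName "HomeServerPool" -PhysicalDisks $new

    # Retire one of the 3TB drives so its data is rebuilt elsewhere in the pool,
    # then remove it once the repair jobs have completed.
    $old = Get-PhysicalDisk -FriendlyName "PhysicalDisk5"
    Set-PhysicalDisk -InputObject $old -Usage Retired
    Get-VirtualDisk | Repair-VirtualDisk
    Remove-PhysicalDisk -StoragePoolFriendlyName "HomeServerPool" -PhysicalDisks $old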

I'm also exploring the options of following the TechNet documentation for transitioning from Windows Server 2012 R2 Essentials to Windows Server 2012 R2 Standard. You still get all of the Essentials features but on a mainline SKU, which means fewer potential issues with software (like System Center Endpoint Protection, for example, which won't install on Essentials). On this point I'm waiting for some confirmation of the transition behaviour in a TechNet Forum question I raised at https://social.technet.microsoft.com/Forums/en-US/d888f37a-e9e9-4553-b24c-ebf0b845aaf1/office-365-features-after-windows-server-standard-transition?forum=winserveressentials&prof=required as the TechNet article at http://technet.microsoft.com/en-us/library/jj247582 leaves a little to be desired in terms of information.
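If the transition works the way I'm expecting from the documentation, it's driven by DISM's edition-servicing commands rather than a reinstall, something along these lines (the product key below is obviously a placeholder):

    # Check which editions the current install can move up to.
    dism /online /get-targeteditions

    # Switch the edition in place with a Standard product key.
    dism /online /set-edition:ServerStandard /accepteula /productkey:XXXXX-XXXXX-XXXXX-XXXXX-XXXXX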

I’m debating buying up some Lindy USB Port Blockers (http://www.lindy.co.uk/accessories-c9/security-c388/usb-rj-45-port-blockers-locks-c390/usb-port-blocker-pack-of-4-colour-code-blue-p2324) for the front access USB ports on all the servers so that it won’t be possible for anyone to insert anything in the ports without the unlocking tool to open up the port first. See if you can guess which colour I might buy?

Up Next

Next on my to-do list is the re-addressing of the network, breaking out my hacksaw, and cabling.

The re-addressing of the network is to make room for the new VLANs and associated addressing which I will be adding for the lab, and the new addressing schema will be much easier for me to manage in the long term. This is going to be a difficult job, much like the one I've just finished. I've got a bit of planning to finish before I can do it, so this probably won't happen now until after the new year.
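Part of that job will be adding DHCP scopes on the home server for the new lab VLANs. Purely as an illustration with made-up addressing, since the schema isn't final yet:

    # Hypothetical lab management scope - the real addressing plan is still in flux.
    Add-DhcpServerv4Scope -Name "Lab Management VLAN" `
        -StartRange 10.0.20.50 -EndRange 10.0.20.200 `
        -SubnetMask 255.255.255.0 -State Active

    # Default gateway and DNS options for the scope.
    Set-DhcpServerv4OptionValue -ScopeId 10.0.20.0 `
        -Router 10.0.20.1 -DnsServer 10.0.20.10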

The hacksaw, as drastic as that sounds, is for the 2U Hyper-V server which you may notice is not racked in the picture above. For some reason, the sliding rails for the 2U chassis are too long for my rack and with the server installed on the rails and pushed back, it sits about an inch and a half proud of the posts, which not only means I can't screw it down in place but also that I can't close the rack door. I'm going to be hacking about two inches off the end of the rails so that I can get the server to sit flush in the rack. It's only a small job but I need to measure twice and cut once, as my Dad taught me.

As I mentioned before, I've got some 14 cables I need to make and test for the lab, and this is something I can work on in parallel to the other activities, so I'm going to try to make a start on these soon so that once I have the 2U rails cut to size, I can cable up at the same time.

Project Home Lab: Shopping List

Up until now, I've talked at length about the various factors dictating what I will be buying and why. This post is meant to be a high level overview of all the previous posts: a shopping list of all of the components needed to make the build tick, so that if you want to embark on your own project and choose to go down the same route, you can get a head start.

This series will consist of the following posts. I will update the table of contents links in each post as I produce and publish the articles.

  1. Project Home Lab: Goals
  2. Project Home Lab: Existing Infrastructure
  3. Project Home Lab: Hardware Decisions
  4. Project Home Lab: Network Decisions
  5. Project Home Lab: Shopping List

Common Infrastructure

Storage Server

Disks for the server I've yet to purchase or confirm as these are pretty much a commodity item. I'll update this post when I do select them, but expect it to be a mixture of SSD and SATA disk.

With the SAS Multilane cables for connecting to the on-board SAS SFF-8087 ports, do make sure you get the cable with Sideband support, provided by an extra wire or two and an extra pin connection in the cable, otherwise you won't get the SGPIO disk failure and status indication through the disk backplane.

Hyper-V Server

Disks for the server I've yet to purchase or confirm as these are pretty much a commodity item. For the Hyper-V server, the disks need not be large or pretty as they will be used primarily just for getting the host operating system online. A pair of SSDs in a RAID1 Mirror will be the most likely suspect.

The same note about Sideband support on the SAS Multilane cables applies here too; without it you won't get the SGPIO disk failure and status indication through the disk backplane.

Next Up

With the shopping list crossed off and most of the hardware now ordered, and some of it already in my hands, it's time to get building. The next posts will show some of the builds; enjoy.

Project Home Lab: Hardware Decisions

In parts one and two of this series, I talked about what I want to achieve and what I have in place already. From here on in, it's all about the new stuff I want.

This series will consist of the following posts. I will update the table of contents links in each post as I produce and publish the articles.

  1. Project Home Lab: Goals
  2. Project Home Lab: Existing Infrastructure
  3. Project Home Lab: Hardware Decisions
  4. Project Home Lab: Network Decisions
  5. Project Home Lab: Shopping List

One Server vs. Multiple Servers

Early on, I had thought about building a single server with storage and hypervisor in bed together, but I quickly came to the conclusion that this would hinder me in the long run. Yes, I would get the fastest possible access to the disk storage for the VMs with an all-in-one, but it would leave me with nowhere to go: scaling up would be limited by the specification of the internal hardware in that single server, and scaling out would have big costs associated as I would need to buy the networking and Hyper-V servers to break it out.

I also decided that this wouldn't give me a playground which could simulate much of a customer environment: how many businesses do you know that run everything on a single host?

To this end, I decided that one server to act as a storage server and another for my hypervisor was the solution. This means that over time, if the need arises, I can add additional Hyper-V hypervisor servers to scale out my compute capacity and form a multi-node cluster. There may be upgrades required to the storage server to increase the capacity or IOPS, but those costs would be minimal and typical of business-as-usual storage growth.

Rack Mount vs. Standalone

For most people considering rack mount versus standalone, the choice would be based on whether or not their wife or partner will allow them to get away with having a server rack in the house somewhere. As I overcame that obstacle years ago, that makes the decision easier. Standalone has its advantages because the machines can be put into the corner of a room or garage with ease; however, standalone servers tend not to have the performance or scalability that I am looking for in a virtualization platform, which demands big memory to name but one facet. Standalone servers also tend not to be so readily available for purchase as used systems or parts on eBay, which makes it harder.

Server Rack Cabinet

Based on my points above regarding a rack, I stayed focused on rack mount; however, my problem lay in my current rack build. The cabinet I have currently stands roughly 22U tall but only has usable space for 12U of equipment, as I originally built it with the remainder of the space designed as storage compartments which I no longer actually use. The rack being wooden, it's very primitive and provides me only with front and rear access, and due to its place in the corner of the garage, I only actually have front access.

Because of this, I am going to need a new server rack to house everything. This rack will be a multi-tenant rack housing both my production home network and the lab environment, and I need to make sure that I buy a rack which will fit my existing space and give me some additional rack space for expansion should I need it in the future, such as additional Hyper-V servers or storage enclosures.

A UK based vendor called X-Case have recently started selling server racks and at £214 for a new 22U rack, it's perfect for me. It fits my space perfectly, it's on castors so I can move it around, and it has removable side panels with doors front and rear. I've bought from X-Case before when I built my home server, which lives in one of their 4U cases right now, and their products are great and really affordably priced compared to the big name brands.

The 22U cabinet gives me plenty of space to house my current 9U of devices and 13U for new purchases and I don’t plan on taking my lab that big (just yet).

Off the Shelf vs. Custom

I try where possible to buy desktops and laptops from dedicated builders like Dell, Lenovo or the like, firstly because they can build it better than I can, and because gone are the days where it is cheaper to build your own. For servers at home, however, I have a slightly different view. On eBay, you will find a myriad of used servers up for sale from the likes of Dell PowerEdge and HP ProLiant. Sifting through them to find a good example which fits all the requirements can be a challenge, and the problem I have with all of these is that none of them are energy conscious, because servers are designed for the datacenter and not the home. It is to this end that I decided to build my own.

Building my own gives me the flexibility to select certain parts bespoke and new and others used and reputably branded whilst keeping an eye on the power meter.

Rack Mount Cases

For rack mount cases, I wanted to buy new. The first reason is that pressed metal tins are generally quite affordable and secondly, I wanted to buy something that was going to perfectly fit my bill. I didn't want 1U because that limits me with power supply, expansion card and CPU cooler options. 1U also means that you aren't going to be fitting in many disks, which would impact my disk I/O performance options. Lastly, 1U chassis need cooling fans and 1U fans are small, which means they need to rotate fast to push the same cubic feet per minute (CFM) of air. Fast means noisy, and all of these factors immediately rule out 1U.

2U is a good height. 2U means I'm not limited by the power supply as almost all server supplies are 2U compatible, and most expansion cards are available in a half-height form factor suitable for installation into 2U. Fans in a 2U chassis are larger, which means slower spinning and quieter, and I also have more height to work with for CPU coolers. 2U gives me enough room to work with for my components but isn't wasteful of space either. 2U does have a limitation however, and that is physical space on the front of the enclosure for disks, so whilst 2U is a good fit for my Hyper-V hypervisor servers, it doesn't perfectly fit for the storage.

3U and 4U are ideal sizes for storage. You get all the benefits of 2U as above but more front surface area to jam in disk slots. I looked at what is out there and decided early on that by using a combination of SSD and SATA disk for storage, I wouldn't be needing that many disks for a single user environment and the gains of 4U over 3U weren't really worth it, so I focused my attention on 3U. 4U also has the problem that with the number of disks it can support, you typically can't find that number of SAS channels on a single controller, so I would need multiple SAS controllers, and if you could find one with enough ports, it would likely be upwards of £1,000 just for the card.

As I haven’t decided on my motherboard or CPU options at this point, but I know that I want this build to be flexible, I need to ensure whatever I buy can support ATX and Extended ATX motherboards so that I’m free to make the right decision.

Accessibility for me is important. I want whatever case I opt for to support sliding rails so that I can draw a server out to replace parts if it has a fault. I also want the disks in any of the servers to be hot-swappable so that I can see a failing disk and replace it without having to open up the chassis and start messing around with drive screws and cables. As I plan on using a mixture of SSD and SATA disk, I need it to support 3.5″ and 2.5″ disks. I'm not interested in dual redundant power supplies in my build as that adds power demand and cost. If I lose a power supply, I can take the hit of having the lab offline for a few days for a replacement.

As I’ve expressed earlier, I like X-Case. They are a UK firm so I feel like I’m doing my bit for the UK economy and their products are good. For my 2U Hyper-V servers, I have decided on the X-Case RM 208 Pro (http://www.xcase.co.uk/rackmount-cases/2u-rackmount-server-cases/x-case-rm-208-pro-8-hotswap-caddy-with-6gb-sata-sas-backplane-temperature-controlled-fans.html).

The RM 208 Pro is a 2U rack mount enclosure. It’s £194 for the case and £27 for the sliding rail kit for it. It supports 2U power supplies, Extended ATX motherboards, has 8 hot-swappable disk caddies on the front taking 3.5″ disks and the disks are connected via two SAS 6Gbps SFF-8087 Multilane connectors, common on RAID and HBA cards. The SAS backplane supports SGPIO which means I will get disk failure and early warning notification lights on the enclosure if my RAID or HBA cards support it. The internal fans are hot-swappable and are temperature controlled for speed via the motherboard pin headers.

For the storage server, I decided on the X-Case RM 316 Pro (http://www.xcase.co.uk/rackmount-cases/3u-rackmount-server-cases/x-case-rm-316-pro-16-x-6gb-hotswap-caddy-mini-sas-backplane-120mm-temperature-controlled-fans.html). This enclosure looks and feels the same as the RM 208 Pro except that at 3U, it has support for 16 3.5″ disks spread over four SAS 6Gbps SFF-8087 Multilane connectors. Everything else about this enclosure matches the RM 208 Pro that I will use for the Hyper-V server. The RM 316 Pro is more expensive at £370 for the chassis and another £33 for the sliding rails, but the extra 8 disk slots mean I won't be limited there.

Power Supply

For these servers, I want something fairly cheap yet reliable and from a known brand, as power is what makes the whole thing tick after all. X-Case resell Seasonic power supplies and after much research into them, it transpires that they are actually the OEM manufacturer for a number of high-street brand power supplies, including the Corsair Builder Series supply in my current home server which has been running for over two years without a hiccup. The Seasonic SS-600 H2U 600 Watt power supply (http://www.xcase.co.uk/power-supply/2u-rackmount-power-supply-s/saesonic-ss-600h2u-2u-80-psu.html#sthash.WMRWR8NM.dpbs) is 80 Plus efficient and seems just the ticket. At only £100 it's a good price too, considering the price of some ATX power supplies these days. I'll be using this unit in both the storage and Hyper-V servers.

Processor

In this decision process, processor comes before motherboard as, after all, the motherboard is just a life support system for the processor. I knew I wanted a server processor, not a desktop processor. I knew I needed a processor which supported Intel Virtualization Technology (Intel VT) or AMD-V, which cut down the options to pick from as not all CPUs, even new models released today, have Intel VT or AMD-V. I knew I also wanted a CPU with a low TDP to keep power consumption down and heat (BTU) output down, to reduce the cooling requirements and the noise of the fans.

Server processors are highly expensive new, so I also knew that this was going to be a used part. Intel processors are generally more readily available in used form, but I didn't want to omit AMD from the race as their Opteron processors have really high core counts, which is a great thing for a virtualization host. I also wanted to make sure that I used at least the same family of CPU between the storage and hypervisor servers to keep the builds consistent and simple for me to support.

After weighing up all of the options back and forth, I settled on the Intel Xeon L5630 processor (http://ark.intel.com/products/47927/Intel-Xeon-Processor-L5630-12M-Cache-2_13-GHz-5_86-GTs-Intel-QPI) and got them for £25 per processor on eBay. The L5630 is a quad core CPU with a TDP of 40W, which is really low for a server processor. The CPU launched in 2010, which means it's not that old even if the units I have were first off the line. The L5630 has a clock speed of 2.13GHz and 12MB of L3 cache. With four cores and Hyper-Threading, the hosts will see eight logical processors available, and with Turbo Boost support, the CPU can boost up to 2.4GHz. As I said previously, Intel Hyper-Threading and Turbo Boost are supported, as are Intel VT-x, Intel VT-d, SpeedStep and the latest AES encryption instructions, which makes this CPU very feature rich for its age.

Memory support is tri-channel DDR3 up to 288GB per processor and it can be used in a dual processor configuration thanks to its dual QuickPath Interconnect (QPI) links. DDR3 support is useful because higher capacity DIMMs such as 8GB or 16GB are rare in DDR2, and DDR2 is becoming harder and more expensive to buy as stocks dwindle, while DDR3 is readily available in all sorts of shapes, sizes and flavours on eBay.
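Once the processors are in and Windows is running, it's easy to sanity check that the virtualization features and Hyper-Threading are actually visible to the operating system. This is just a generic check, not output from my build:

    # Core counts and virtualization support as Windows sees them.
    Get-CimInstance Win32_Processor |
        Select-Object Name, NumberOfCores, NumberOfLogicalProcessors,
                      VirtualizationFirmwareEnabled, SecondLevelAddressTranslationExtensions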

Motherboard

With my processor decided upon, I now needed to select a motherboard to suit. The Intel Xeon L5630 uses the LGA 1366 socket running the Intel 5500 series Tylersburg chipset. My first port of call was Supermicro as they make amazing products and they frequently OEM their parts to other vendors, which shows a lot of faith in them. This, coupled with the fact that their parts range is wider than the Grand Canyon, meant I was sure to find what I wanted.

The requirements for the motherboard, in line with my goals, meant that I wanted something which gives me plenty of options for future expansion. I also want accessibility, which means I don't want to be running to the server with a keyboard and monitor in hand to troubleshoot a boot issue, so iLO, DRAC or IPMI is very important for me. The more feature rich the motherboard, the less I potentially need to spend on expansion cards, so that was also a factor.

Selecting the motherboard took the longest amount of time due to the options available but eventually I selected the Supermicro X8DTH-6F motherboard. I was able to find this for £250 including shipping and import taxes, new from a seller in the USA on eBay.

The X8DTH-6F (http://www.supermicro.com/products/motherboard/QPI/5500/X8DTH-6F.cfm) has everything. It's a dual socket Extended ATX motherboard taking the Intel 5500 and 5600 series processors, good for my L5630 Xeon. It can be run in uniprocessor mode with a second processor added later, which meets my expansion plans, allowing me to add a second CPU to the Hyper-V server for extra processing power at a later stage. Six DDR3 DIMM slots per processor gives me a total of 12 DIMM slots, with six usable now, supporting 1333MHz DDR3 in anything from conventional desktop UDIMM format all the way up to ECC Registered (buffered) modules. With the second CPU added later, this opens up the additional six DIMMs for use also.

On-board dual port gigabit Intel Ethernet and a dedicated IPMI port supporting remote media and KVM ticks another box. Being Extended ATX, the board has seven PCI Express slots giving me lots of options for expansion cards and the on-board Intel ICH10R and LSI SAS 2008 6Gbps SAS controllers handle all of my drive quandaries too, at least for the medium term.

£250 for the motherboard may seem a bit steep, however I consider these factors. Having the two on-board gigabit Ethernet ports saves me about £40 buying a used dual port Intel PCI Express network adapter from eBay to service the management traffic. The on-board LSI SAS controller saves me around £100 buying a used LSI SAS host bus adapter card. Having both of these on-board means two fewer PCI Express cards installed, theoretically improving the airflow in the case and likely reducing power consumption too. IPMI can be added to any machine with a PCI Express slot, however whether the add-in cards available online are as integrated and feature rich as a dedicated on-board implementation is questionable, and the cards I have seen online run for about £500 each, making the motherboard look positively cheap.

Memory

I want as much memory as possible in my Hyper-V servers. For the storage server I want a sensible amount but not to the extent of the Hyper-V servers. The more memory I have, the more I can give my virtual machines to help give them that production feel.

DDR3 support on the CPU and motherboard means I'm up to date with the current specification, although not for memory speed I should point out. I wanted to buy from the Supermicro validated memory support list and I wanted ECC Registered DIMMs, as that's what you use in servers for their error correction capabilities. Also, because the motherboard only supports 16GB of memory per processor if you use UDIMM desktop type memory, I really wanted to use ECC Registered DIMMs. I need to make sure I do everything possible to squeeze the maximum performance out of this lab and that means using memory in accordance with the tri-channel native operation mode of the CPU.

For the storage server, I decided on 12GB RAM by way of three 4GB DIMMs. For the Hyper-V server, I decided on 48GB as six 8GB DIMMs for the uniprocessor setup and if I add a second processor later, I can add an additional 48GB.

16GB DIMMs are available but they are just way too expensive right now for me to consider. I managed to get the 4GB DIMMs for about £20 each and the 8GB DIMMs for £35 each. All of the DIMMs are Samsung low voltage DIMMs running at 1333MHz; to translate, this means I am using PC3L-10600R designation DIMMs. These DIMMs will automatically clock down to the highest speed supported by the motherboard, and under-running the memory will help to keep the temperature of the DIMMs down during operation.

With a second processor installed later, this would give me 96GB of RAM in my Hyper-V host if I stay with 8GB DIMMs, and if I later upgrade to 16GB DIMMs, should their prices become sensible, I could go up to 192GB of memory.

Ancillaries

As always with a PC build, you need some odds and sods to finish it off.

The CPU needs a cooler so I opted for the Supermicro SNK-P0037P passive cooler. The cooler is recommended for the motherboard and made by the motherboard manufacturer, Supermicro. The cooler is rated for processors up to 90W TDP, which means it will more than easily handle my 40W L series CPU, and having no fan on the CPU will help to keep down the power consumption and noise as it's one less moving part to power.

To connect the SAS Multilane connections on the motherboard to the enclosure backplane, I need some SFF-8087 cables. For the Hyper-V server, I will be installing only a pair of SSDs to run the host Windows Server 2012 R2 operating system. To protect against a SAS channel or cable failure, I will be installing both SFF-8087 multilane cables with a single SSD per channel.

For the storage server, I am going to install both on-board channel cables, allowing me to run 8 disks. I will operate like this initially and once I need more than 8 disks to increase the performance, I will buy an 8 port LSI PCI Express SAS HBA to run the other two channels, buying two more cables. Genuine LSI SFF-8087 to SFF-8087 cables with Sideband support for the SGPIO disk information pass-through are £10 each, new on eBay.

The enclosures have 3.5″ drive bays to allow me to use big capacity SATA disk but as I will be using a combination of SSD and SATA, I need a way to mount the 2.5″ SSD disks. For about £10 each on eBay, you can pick up the HP 654540-001. This is a 2.5″ to 3.5″ disk carrier specifically designed for hot swap enclosures. You mount the disk into the carrier and it translates the power and data port positions to match that of a 3.5″ disk. It uses no intermediary disk controller so the disk will be seen exactly for what it is by the controller and the operating system and there is no performance penalty either.

Project Home Lab: Existing Infrastructure

In this second post in my Project Home Lab series, I'm going to cover fairly loosely what I've got in my environment at home already, as I need to take this into account to determine whether I can keep it all or whether I need to make more fundamental changes to my environment too.

This series will consist of the following posts. I will update the table of contents with the new page links in each post as I produce and publish the articles.

  1. Project Home Lab: Goals
  2. Project Home Lab: Existing Infrastructure
  3. Project Home Lab: Hardware Decisions
  4. Project Home Lab: Network Decisions
  5. Project Home Lab: Shopping List

Racking

I'm fortunate that my wife lets me have a server rack in the garage, which is what allows me to even chase the Project Home Lab ambition. Currently, this is a 12U rack I built myself with wooden panels and some 12U AV posts I got from eBay. It's served me well although it has its nuances.

  • Non-removable side panels make access tricky
  • No wheels or castors making rear access non-existent as the rack is backed into a corner
  • No cooling aids such as top vents or air ducting

The rack is probably going to have to go for three reasons. Firstly, there isn't going to be enough U space in the rack for me to add the new hardware I am going to be looking at. Secondly, I need more access into the rack so that when I need to add cabling or investigate faults, I can get in there and check it all without more time being spent on gaining access than on doing the task in hand. The third reason is weight: all of the new equipment such as the new rack chassis and the like will add weight, and I don't think the wooden panels will support all the extra.

Power

Currently, my rack gets its power from an APC 750VA 1U RM UPS. I’ve had it for about six years and it’s been faultless. I currently operate at about 20% load which gives me a runtime of around 25 to 30 minutes on battery. With the addition of new equipment, I think that I can probably get away with keeping the UPS load within capacity limits but this is going to severely hamper my battery runtime and I’d like to keep a minimum of 15 minutes battery to protect against short-term power outages so the UPS may need to be replaced.

A secondary issue with the UPS is connectivity. This model of UPS has four outlet IEC C13 ports as do most small form factor UPS units. I’m going to need to invest in a power distribution unit (PDU) or two to add extra power outlets for the new devices. The reason for two and not just a single PDU is that I want to spread the power load over the physical ports on the UPS so that I’m not driving all the power through a single outlet on the UPS and potentially burn it out.

Network

My network core lives in the rack right now and this is where it will stay. I currently have a Cisco ASA 5520 firewall and a TP-Link TL-SG3424 gigabit 24 port switch. Both of these will certainly be kept as is.

The ASA is amazing. It’s running just shy of the latest Cisco IOS release with fully upgraded 2GB RAM and it’s handling the Layer 3 inter-VLAN routing of my home VLANs right now and also acting as my edge router receiving my 120Mbps Virgin Media cable connection and it barely cracks 5% CPU usage and 512MB memory usage. I’ve got no questions whether this can handle the new device traffic but when you look at the specification of the Cisco ASA 5520 is it any wonder?

The TP-Link switch is a Layer 2 managed switch with 24 gigabit ports. I’m using 2 of the ports in a LAG up to my access switch in my home office, another two ports in a LAG to the ASA and a third pair of ports in a LAG to my home server. The remaining ports connect to devices in the main area of the house. For £125, this is a great switch. It supports all of the enterprise features you would expect from a named brand Layer 2 managed switch like Cisco, HP or Dell but at a fraction of the cost. Reliability and performance has never been an issue and I don’t foresee it being one. Lastly, it’s silent as it is passively cooled keeping the volume and BTU output of the rack down.

I have two issues with the current switch however, relating to the new lab. One is port count and the other is performance impact. With the current port occupation on the switch, it is highly unlikely that I will be able to get everything connected to it, so I will likely be adding a leaf switch for connecting the lab devices and then an uplink or two from the leaf into the core. The second reason is that I like how my home network performs right now. If I was to start throwing Hyper-V over SMB 3.0 File Server traffic across it all day long, I'm not sure how my home production network would suffer. This adds credence to adding the leaf switch. With the leaf switch, the only traffic that needs to leave the confines of the lab back into the core is packets destined for the internet or administrative connections from me into the lab via Remote Desktop Services or management consoles.

Cabling

All of my cabling at home is shielded category 6 cable wired into a category 6 patch panel with homemade patch leads from the panel into the switch. I test all of my cabling with a Fluke tester to validate it and make sure I'm going to get good clean transmissions over the wire. I try to use wired connections in the house wherever possible as I like having that constant, reliable gigabit speed compared with the relative slowness of 300Mbps N specification wireless and potential disruptors such as DECT cordless phones, Bluetooth and microwaves.

I'm going to be continuing to use this cabling in the new lab. I won't be using fibre or InfiniBand due to the complexity and cost. Sticking to category 6 copper cabling keeps my cable media uniform across all my devices.

Server

I've got one server right now which is running Windows Server 2012 R2 Essentials. This acts as the core to everything in the house, offering Directory Services, DHCP and DNS, not to mention being a backup target and a media streaming server. It's currently housed in an RM 400/10 4U rack enclosure from X-Case. I upgraded the case about two years ago with hot swap drive caddies to allow me to add and remove drives to my Storage Spaces Storage Pool easily. Inside the case is an ASUS ATX desktop motherboard with an Intel Core i5 3470T low power processor and 12GB of DDR3 RAM.

Although I'm really happy with the performance of this server right now, I am a sucker for consistency and the aesthetics of things. If I can get parts at the right prices, I may well give my home server a little upgrade so that the parts inside match those of the new servers. For me this is a silly thing to cure a minor case of OCD, but in real terms, it means that if I have any suspect failed parts, I can swap and move them between servers to test as needed.

What’s Next

To be honest with you from the start, I’m actually writing some of these articles after the fact: I started this project over a month ago and I already have quite a few of the hardware parts ready for use. In the next post, I will explain my thought processes for selecting the hardware I have bought already and what I still need to purchase and why I will be purchasing those parts.

I’ll do a summary of all of the prices too for budding lab builders among you to use as a reference.

I’m Not as Green as My Name Suggests

With my name being Richard Green, one could go some way to try and associate me with environmental tree-friendliness. Contrary to that, I am actually extremely energy inefficient. My biggest energy crux is my current Windows Home Server machine.

Running on a Dell PowerEdge SC1425 with two Intel 2.8GHz dual core Xeon processors and 6GB of DDR2, this thing is total overkill for Windows Home Server and isn't actually very good at its job either. Granted, it's got dual Gigabit Ethernet for teamed and reliable network connectivity and it's got SATA-II drives for high speed data movement, but at the same time, it's in a 1U chassis which means it only supports a maximum of two drives, and it's got a 450W power supply which, when faced with the two Intel Xeon processors, both designed for 90W power consumption, makes for an eye-watering electricity consumption report.

I did try to enhance the usage profile of the machine by using an add-in for Windows Home Server called LightsOut; however, the great feature of this software, which is to sleep and wake the server at pre-defined times during the day, remained useless on the PowerEdge. Being a server machine, its power supply doesn't support the S3 power state, which means it doesn't support sleep – only shutdown and restart – so as a result the server stays on 24×7.
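For anyone wanting to check this on their own hardware, Windows will report which sleep states the platform actually exposes. This is a general check rather than a capture from the SC1425:

    # List the sleep states the firmware and drivers make available.
    powercfg /a
    # A machine without S3 support lists "Standby (S3)" under the
    # "sleep states are not available on this system" section of the output.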

Granted, I could manually shut down the server each night and power it back up again during the day when needed, but that's not the design of a server. It's designed to be accessible when you need it. My view on energy efficiency and environmental impact kind of fits this mantra also. I'm quite happy to spend a little money on energy efficient products if it will benefit me and if my way of life isn't impacted as a result. This example of powering down the server manually has an impact because it's an additional action upon me to complete, it means the server is potentially unavailable during start-up periods when I want it, and it generally makes the appliance less useful.

I've been looking around at what other people have done with Windows Home Server machines and seen a growing trend in Atom powered machines with low power consumption, designed for always-on availability. My issue herein is that I have a 19” server rack in which all of my kit is mounted, so the device needs to comply with the form factor to make it suitable, which basically rules out all of the pre-built systems from people like HP and Asus, so I'm being hurtled back into the world I escaped a few years ago – self build.

The criteria for the project are quite tight:

  1. 19” Rack Mount Chassis – 1U, 2U, 3U or 4U is not really important.
  2. Support for at Least 4 SATA-II drives.
  3. Ideally support for a regular ATX PSU to reduce cost and improve efficiency over a server PSU.
  4. As near to silent operation as possible.
  5. Low power consumption.

After trawling the internet for quite some time on the subject now, I believe I have produced the ultimate solution using the following:

  • X-Case RM400/10 4U Rack Mountable Case
  • ASUS AT3IONT-I Intel Atom 330 and nVidia ION Motherboard
  • StarTech 4-Port PCI Express SATA-II Controller
  • Corsair Value Select Memory
  • Corsair CX400W Power Supply
  • Western Digital 1TB SATA-II Green Hard Disks


The case from X-Case at http://www.xcase.co.uk/product-p/case-x-case-400-fslash-10.htm?CartID=1 is the building block for this system. It allows me the flexibility to use my existing rack at home, while its 4U chassis gives enough room for 10x 3.5” hard disks and 1x 5.25” optical drive, although my machine will not have one installed as Windows Home Server can be installed via USB.


The ASUS Intel Atom and nVidia ION motherboard trick box from Novatech (http://www.novatech.co.uk/novatech/prods/components/motherboards/miniitxmotherboards/90-MIBCT0-G0EAY0GZ.html) gives me a dual core 1.6GHz processor which under full load only draws 8W of power and does not require active cooling, using only a passive heat sink, while the mini-ITX form factor of the motherboard keeps the remaining power draw to a minimum.


The motherboard hosts 4 SATA-II ports, so to increase that and come closer to the 10 drive support of the case, I will add a StarTech 4-Port PCI Express SATA-II Controller. The StarTech card was chosen because it appears to be the only card to combine a SATA-II and PCI Express interface, as many of the other cards, such as those powered by the Silicon Image 3114 controller, are PCI based. The StarTech card can be seen at http://www.leaf-computer.de/raid-controller-4-port-sata-ii-pcie-x1.html and can be purchased from Leaf Computers via Amazon Marketplace.


The Corsair CX400W power supply from Overclockers UK at http://www.novatech.co.uk/novatech/prods/components/powersupplies/corsair/cmpsu-400cxuk.html is of good efficiency and is also near silent, with a slow rotating 120mm fan to keep the air moving. This supply also has six SATA connectors for the hard drive power needs and four Molex connectors which can easily be converted to SATA once the need arises.


The Western Digital hard disks are of the Green variety. The demands of a Windows Home Server are not high speed disk access, unlike a RAID10 SQL Server. The needs are for high volumes of always available storage. The Green drives give SATA-II high speed access while providing a low thermal output because of the adaptive rotation speed controls and also the low power consumption.

Although only speculation based on figures collected from sources around the Internet, I believe that a Windows Home Server of this specification would consume a mere 32 Watts at idle and 38 Watts at full load when using two 1TB Green drives. The drives consume about 6 Watts each, so simply add this amount for each drive added. The other advantage is that by using a standard ATX power supply with a 12V 4-pin connector to power the motherboard, I will have support for the S3 power state, allowing the server to be put into sleep overnight. This will allow me to reduce the operational hours from 24×7 to 17×7 in my example.

Using an online power calculator, we can see that a server of this specification will consume only 16 kWh (kilowatt-hours) per month. I have an in-line power meter currently connected to my personal computer which I will be attaching to the Home Server in the next day or so, and then I will be able to see the real-world draw of the current PowerEdge SC1425 to compare the two and see the potential savings.
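For what it's worth, the rough arithmetic behind that monthly figure is easy to check, assuming the 32 Watt idle figure holds and the server sleeps overnight on the 17×7 schedule:

    # Back-of-the-envelope kWh calculation using the estimates above.
    $watts       = 32
    $hoursPerDay = 17                                   # 17x7 with overnight sleep
    $watts * $hoursPerDay * 30 / 1000                   # ~16.3 kWh per month

    # The same draw running 24x7 for comparison.
    $watts * 24 * 30 / 1000                             # ~23 kWh per month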

I will create a new post to show the comparison once the data is available.