Posts from 2014

Project Home Lab: Hardware Decisions

In parts one and two of this series, I talked about what I want to achieve and what I have in place already. From here on in, it’s all about the new stuff I want.

This series will consist of the following posts. I will update the table of contents links in each post as I produce and publish the articles.

  1. Project Home Lab: Goals
  2. Project Home Lab: Existing Infrastructure
  3. Project Home Lab: Hardware Decisions
  4. Project Home Lab: Network Decisions
  5. Project Home Lab: Shopping List

One Server vs. Multiple Servers

Early on, I had thought about building a single server with storage and hypervisor in bed together, but I quickly came to the conclusion that this would hinder me in the long run. Yes, an all-in-one would give me the fastest possible access to the disk storage for the VMs, but it would leave me with nowhere to go: scaling up would be limited by the specification of the internal hardware in that single server, and scaling out would carry big costs as I would need to buy the networking and Hyper-V servers to break it out.

I also decided that this wouldn’t give me a playground which could simulate much of a customer environment; how many businesses do you know that run everything on a single host?

To this end, I decided that one server to act as a storage server and another for my hypervisor was the solution. This means that over time, if the need arises, I can add additional Hyper-V hypervisor servers to scale out my compute capacity and form a multi-node cluster. There may be upgrades required to the storage server to increase the capacity or IOPS, but those costs would be minimal and typical of business-as-usual storage growth.

Rack Mount vs. Standalone

For most people considering rack mount versus standalone, the choice would be based on whether or not their wife or partner will let them get away with having a server rack in the house somewhere. As I overcame that obstacle years ago, the choice is easier for me. Standalone has its advantages because the machines can be put into the corner of a room or garage with ease; however, standalone servers tend not to have the performance or scalability that I am looking for in a virtualization platform, which demands big memory to name but one facet. Standalone servers and their parts also tend not to be so readily available used on eBay, which makes buying them harder.

Server Rack Cabinet

Based on my points above regarding a rack, I stayed focused on rack mount; however, my problem lay with my current rack build. The cabinet I have stands roughly 22U tall but only has usable space for 12U of equipment, as I originally built it with the remainder of the space designed as storage compartments which I no longer use. The rack is wooden and very primitive, providing only front and rear access, and due to its place in the corner of the garage, I actually only have front access.

Because of this, it looks like I need a new server rack to house everything. This rack will be a multi-tenant rack housing both my production home network and the lab environment, and I need to make sure that I buy a rack which will fit my existing space and give me some additional rack space for expansion should I need it in the future, such as for additional Hyper-V servers or storage enclosures.

A UK based vendor called X-Case have recently started selling server racks, and at £214 for a new 22U rack, it’s perfect for me. It fits my space, it’s on castors so I can move it around, and it has removable side panels with doors front and rear. I’ve bought from X-Case before when I built my home server, which lives in one of their 4U cases right now, and their products are great and really affordably priced compared to the big name brands.

The 22U cabinet gives me plenty of space to house my current 9U of devices, leaving 13U for new purchases, and I don’t plan on taking my lab that big (just yet).

Off the Shelf vs. Custom

I try where possible to buy desktops and laptops from dedicated builders like Dell, Lenovo or the like, firstly because they can build it better than I can, and secondly because gone are the days when it was cheaper to build your own. For servers at home, however, I take a slightly different view. On eBay, you will find a myriad of used servers up for sale from the likes of Dell PowerEdge and HP ProLiant. Sifting through them to find a good example which fits all the requirements can be a challenge. The problem I have with all of these is that none of them are energy conscious because servers are designed for the datacenter and not the home. It is to this end that I decided to build my own.

Building my own gives me the flexibility to select certain parts bespoke and new and others used and reputably branded whilst keeping an eye on the power meter.

Rack Mount Cases

For rack mount cases, I wanted to buy new. The first reason is that pressed metal tins are generally quite affordable, and the second is that I wanted to buy something that was going to perfectly fit my bill. I didn’t want 1U because that limits me with power supply, expansion card and CPU cooler options. 1U also means that you aren’t going to be fitting in many disks, which would limit my disk I/O performance options. Lastly, 1U chassis need cooling fans, and 1U fans are small, which means they need to rotate fast to push the same cubic feet per minute (CFM) of air. Fast means noisy, and all of these factors immediately rule out 1U.

2U is a good height. 2U means I’m not limited by the power supply, as almost all server supplies are 2U compatible, and most expansion cards are available in a half-height form factor suitable for installation into 2U. Fans in a 2U chassis are larger, which means slower spinning and quieter, and I also have more height to work with for CPU coolers. 2U gives me enough room to work with my components but isn’t wasteful of space either. 2U does have a limitation, however, and that is physical space on the front of the enclosure for disks, so whilst 2U is a good fit for my Hyper-V hypervisor servers, it doesn’t perfectly fit the storage.

3U and 4U are ideal sizes for storage. You get all the benefits of 2U as above but more front surface area to jam in disk slots. I looked at what is out there and decided early on that by using a combination of SSD and SATA disk for storage, I wouldn’t need that many disks for a single user environment, and the gains of 4U over 3U weren’t really worth it, so I focused my attention on 3U. 4U also has a problem for me: with the number of disks it can support, you typically can’t find that many SAS channels on a single controller, so I would need multiple SAS controllers, and if you could find one with enough ports, it would likely be upwards of £1,000 just for the card.

I haven’t decided on my motherboard or CPU options at this point, but I know that I want this build to be flexible, so I need to ensure whatever I buy can support ATX and Extended ATX motherboards so that I’m free to make the right decision.

Accessibility for me is important. I want whatever case I opt for to support sliding rails so that I can draw out a server if it has a fault to replace parts. I also want the disks in any of the servers to be hot-swappable so that I can see a failing disk and replace it without having to open up the chassis and start messing around with drive screws and cables. As I plan on using a mixture of SSD and SATA disk, I need support for both 3.5″ and 2.5″ disks. I’m not interested in dual redundant power supplies in my build as that adds power demand and cost. If I lose a power supply, I can take the hit of having the lab offline for a few days while I get a replacement.

As I’ve expressed earlier, I like X-Case. They are a UK firm so I feel like I’m doing my bit for the UK economy and their products are good. For my 2U Hyper-V servers, I have decided on the X-Case RM 208 Pro (http://www.xcase.co.uk/rackmount-cases/2u-rackmount-server-cases/x-case-rm-208-pro-8-hotswap-caddy-with-6gb-sata-sas-backplane-temperature-controlled-fans.html).

The RM 208 Pro is a 2U rack mount enclosure. It’s £194 for the case and £27 for the sliding rail kit for it. It supports 2U power supplies, Extended ATX motherboards, has 8 hot-swappable disk caddies on the front taking 3.5″ disks and the disks are connected via two SAS 6Gbps SFF-8087 Multilane connectors, common on RAID and HBA cards. The SAS backplane supports SGPIO which means I will get disk failure and early warning notification lights on the enclosure if my RAID or HBA cards support it. The internal fans are hot-swappable and are temperature controlled for speed via the motherboard pin headers.

For the storage server, I decided on the X-Case RM 316 Pro (http://www.xcase.co.uk/rackmount-cases/3u-rackmount-server-cases/x-case-rm-316-pro-16-x-6gb-hotswap-caddy-mini-sas-backplane-120mm-temperature-controlled-fans.html). This enclosure looks and feels the same as the RM 208 Pro except that at 3U, it has support for sixteen 3.5″ disks spread over four SAS 6Gbps SFF-8087 Multilane connectors. Everything else about this enclosure matches the RM 208 Pro that I will use for the Hyper-V server. The RM 316 Pro is more expensive at £370 for the chassis and another £33 for the sliding rails, but the extra eight disk slots mean I won’t be limited there.

Power Supply

For these servers, I want something fairly cheap yet reliable and from a known brand, as power is what makes the whole thing tick after all. X-Case resell Seasonic power supplies, and after much research into them, it transpires that they are actually the OEM manufacturer for a number of high-street brand power supplies, including the Corsair Builder Series supply in my current home server which has been running for over two years without a hiccup. The Seasonic SS-600 H2U 600 Watt power supply (http://www.xcase.co.uk/power-supply/2u-rackmount-power-supply-s/saesonic-ss-600h2u-2u-80-psu.html#sthash.WMRWR8NM.dpbs) is 80 Plus efficient and seems just the ticket. At only £100 it’s a good price too, considering the price of some ATX power supplies these days. I’ll be using this unit in both the storage and Hyper-V servers.

Processor

In this decision process, processor comes before motherboard as, after all, the motherboard is just a life support system for the processor. I knew I wanted a server processor, not a desktop processor. I knew I needed a processor which supported Intel Virtualization Technology (Intel VT) or AMD-V, which cut down the options to pick from, as not all CPUs, even new models released today, have Intel VT or AMD-V. I also knew I wanted a CPU with a low TDP to keep power consumption down and BTU heat output down, reducing the cooling requirements and the noise of the fans.

Server processors are very expensive new, so I also knew that this was going to be a used part. Intel processors are generally more readily available in used form, but I didn’t want to omit AMD from the race as their Opteron processors have really high core counts, which is a great thing for a virtualization host. I also wanted to use at least the same family of CPU between the storage and hypervisor servers so that I was using consistent parts, keeping the builds simple for me to support.

After weighing up all of the options back and forth, I settled on the Intel Xeon L5630 processor (http://ark.intel.com/products/47927/Intel-Xeon-Processor-L5630-12M-Cache-2_13-GHz-5_86-GTs-Intel-QPI) and got them for £25 per processor on eBay. The L5630 is a quad core CPU with a TDP of 40W, which is really low for a server processor. The CPU launched in 2010, which means it’s not that old even if the units I have were first off the line. The L5630 has a clock speed of 2.13GHz and 12MB of L3 cache. With four cores and Hyper-Threading, the hosts will see eight logical cores, and with Turbo Boost support, the CPU can boost up to 2.4GHz. Intel Hyper-Threading and Turbo Boost are supported, as are Intel VT-x, Intel VT-d, SpeedStep and the latest AES encryption instructions, which makes this CPU very feature rich for its age.

Memory support is tri-channel DDR3 up to 288GB per processor, and the CPU can be used in a dual processor configuration thanks to its dual Quick-Path Interconnect (QPI) links. DDR3 support is useful because higher capacity DIMMs such as 8GB or 16GB are rare in DDR2, and DDR2 is becoming harder and more expensive to buy as stocks dwindle, while DDR3 is readily available in all sorts of shapes, sizes and flavours on eBay.

Motherboard

With my processor decided upon, I now needed to select a motherboard to suit. The Intel Xeon L5630 uses the Socket 1366 motherboard socket running the Intel 5500 series Tylersburg chipset. My first port of call was Supermicro as they make amazing products and they frequently OEM their parts to other vendors which shows a lot of faith in them. This, coupled with the fact that their parts range is wider than the Grand Canyon meant I was sure to find what I wanted.

The requirements for the motherboard, in line with my goals meant that I wanted something which gives me plenty of options for future expansion. I also want accessibility which means I don’t want to be running to the server with a keyboard and monitor in hand to troubleshoot a boot issue so iLO, DRAC or IPMI are very important for me. The more feature rich the motherboard also means the less I potentially need to spend on expansion cards so that was also a factor.

Selecting the motherboard took the longest amount of time due to the options available but eventually I selected the Supermicro X8DTH-6F motherboard. I was able to find this for £250 including shipping and import taxes, new from a seller in the USA on eBay.

The X8DTH-6F (http://www.supermicro.com/products/motherboard/QPI/5500/X8DTH-6F.cfm) has everything. It’s a dual socket Extended ATX motherboard taking the Intel 5500 and 5600 series processors, good for my L5630 Xeon. It can be run in uniprocessor mode with a second processor added later, which meets my expansion plans, allowing me to add a second CPU to the Hyper-V server for extra processing power at a later stage. Six DDR3 DIMM slots per processor give me a total of 12 DIMM slots, with six usable now, supporting 1333MHz DDR3 in anything from conventional desktop UDIMM format up to ECC Registered DIMMs. With the second CPU added later, the additional six DIMM slots open up for use too.

On-board dual port gigabit Intel Ethernet and a dedicated IPMI port supporting remote media and KVM ticks another box. Being Extended ATX, the board has seven PCI Express slots giving me lots of options for expansion cards and the on-board Intel ICH10R and LSI SAS 2008 6Gbps SAS controllers handle all of my drive quandaries too, at least for the medium term.

£250 for the motherboard may seem a bit steep; however, consider these factors. Having the two on-board gigabit Ethernet ports saves me about £40 on a used dual port Intel PCI Express network adapter from eBay to service the management traffic. The on-board LSI SAS controller saves me around £100 on a used LSI SAS host bus adapter card. Having both of these on-board means two fewer PCI Express cards installed, theoretically improving the airflow in the case and likely reducing power consumption too. IPMI can be added to any machine with a PCI Express slot; however, whether the add-in cards available online are as integrated and feature rich as a dedicated on-board implementation is questionable, and the cards I have seen online run for about £500 each, making the motherboard look positively cheap.

Memory

I want as much memory as possible in my Hyper-V servers. For the storage server I want a sensible amount but not to the extent of the Hyper-V servers. The more memory I have, the more I can give my virtual machines to help give them that production feel.

DDR3 support on the CPU and motherboard means I’m up to date with the current specification, although not for memory speed, I should point out. I wanted to buy from the Supermicro validated memory support list, and I wanted ECC Registered DIMMs as that’s what you use in servers for their error correction capabilities. Also, the motherboard only supports 16GB of memory per processor if you use UDIMM desktop type memory, which is another reason I really wanted ECC Registered DIMMs. I need to do everything possible to squeeze the maximum performance out of this lab, and that means populating memory in accordance with the tri-channel native operation mode of the CPU.

For the storage server, I decided on 12GB RAM by way of three 4GB DIMMs. For the Hyper-V server, I decided on 48GB as six 8GB DIMMs for the uniprocessor setup and if I add a second processor later, I can add an additional 48GB.

16GB DIMMs are available, but they are just way too expensive right now for me to consider. I managed to get the 4GB DIMMs for about £20 each and the 8GB DIMMs for £35 each. All of the DIMMs are Samsung low voltage DIMMs running at 1333MHz; to translate, this means I am using DIMMs with the PC3L-10600R designation. These DIMMs will automatically run at the highest speed the motherboard supports, and under-running the memory will help to keep the temperature of the DIMMs down during operation.

With a second processor installed later, this would give me 96GB of RAM in my Hyper-V host if I stay with 8GB DIMMs, and if I later upgrade to 16GB DIMMs, should their prices become sensible, I could have up to 192GB of memory.

Ancillaries

As always with a PC build, you need some odds and sods to finish it off.

The CPU needs a cooler, so I opted for the Supermicro SNK-P0037P passive cooler. The cooler is recommended for the motherboard and made by the motherboard manufacturer, Supermicro. It is rated for processors up to 90W TDP, which means it will more than easily handle my 40W L series CPU, and having no fan on the CPU helps to keep down the power consumption and noise: one less moving part to power.

To connect the SAS Multilane connections on the motherboard to the enclosure backplane, I need some SFF-8087 cables. For the Hyper-V server, I will be installing only a pair of SSDs to run the host Windows Server 2012 R2 operating system. To protect against a SAS channel or cable failure, I will be installing both SFF-8087 multilane cables with a single SSD per channel.

For the storage server, I am going to install both channel cables allowing me to run 8 disks. I will operate like this initially and once I need more than 8 disks to increase the performance, I will buy an 8 Port LSI PCI Express SAS HBA to run the other two channels, buying two more cables. Genuine LSI SFF-8087 to SFF-8087 cables with Sideband support for the SGPIO disk information pass-through are £10 each, new on eBay.

The enclosures have 3.5″ drive bays to allow me to use big capacity SATA disk but as I will be using a combination of SSD and SATA, I need a way to mount the 2.5″ SSD disks. For about £10 each on eBay, you can pick up the HP 654540-001. This is a 2.5″ to 3.5″ disk carrier specifically designed for hot swap enclosures. You mount the disk into the carrier and it translates the power and data port positions to match that of a 3.5″ disk. It uses no intermediary disk controller so the disk will be seen exactly for what it is by the controller and the operating system and there is no performance penalty either.

Microsoft Azure Web Sites Hosting Plan Modes

Normally in Microsoft Azure (née Windows Azure), I run my blog in Shared compute mode; however, I occasionally have to scale up to Standard if I hit the compute limits for Shared in a given time period. It’s a bit naughty perhaps, but I’m not made of money so I need to look after the pounds where possible.

Today, I noticed that the site popped offline whilst I was working on something; what I was doing in the back-end of WordPress generated a big load which then tripped the Shared instance resources counter. I logged into the Microsoft Azure Management Portal, ready to increase the site level to Standard, only to notice that the Scale options for a Web Site have now changed, a new feature in Microsoft Azure Web Sites.

Microsoft Azure Web Sites Hosting Mode

Previously, we had three options for the Scale of a website in Azure: Free, Shared and Standard. Free was a great way to develop and test a site which didn’t need a custom domain name attached, didn’t need to be able to use HTTPS, or where you generally weren’t worried about the performance. Shared stepped it up a level, giving you support for custom domain names; however, HTTPS support and some of the high end features such as Endpoint Monitoring were still out of reach and reserved for Standard.

The change is the addition of a new Basic tier, sitting alongside Free, Shared and Standard. After some poking around, I haven’t yet been able to find out exactly what the pitch for Basic vs. Standard is, but looking through the settings in the Web Site settings panels in Microsoft Azure, I can see that SSL is available for Basic while Web Site Backups and Endpoint Monitoring are still reserved for Standard. I’ll see what else I can find out about what exactly is in and out between Basic and Standard and update the post.

It’s also interesting to note that the Microsoft Azure Pricing Calculator hasn’t yet been updated to reflect the addition of the new tier with the calculator still only offering up Free, Shared or Standard as the tier options.

Microsoft Azure Pricing Calculator Web Site Tiers

There are other new features in Microsoft Azure Web Sites that I want to talk about but I’ll save that for another post later.

Build 2014 Day 1 News

Before I get into the meat, I need to point out that I wasn’t at Build. This post is based on information from the live blogs, news and tweets taken from those at the event.

If you are a Microsoft fan, this was a really big week for you. The Build conference always gets all the new toys (as do the attendees to pay back their ticket prices).

Last week, Office for iPad was announced and released, which was amazing for the Apple community, but yesterday, Microsoft really rolled its sleeves up and delivered the goods for Windows and Microsoft users. The new features, updates and announcements are wide sweeping, and as the updates and products are released, more will no doubt be learnt.

Windows 8.1 and Windows Server 2012 R2 Update

Let’s get the biggest one out of the way first. The Windows 8.1 and Windows Server 2012 R2 Update 1 will officially launch on April 8th worldwide. I’ve been lucky enough to be running this update for about three weeks now since the .msu files accidentally leaked onto the Windows Update Catalogue, and my desktop and Surface are already running it. On the Surface, the impact is minimal, but on the desktop with a mouse, it makes a big difference and it feels much nicer.

If you are a TechNet or an MSDN subscriber, the good news is that you can already download the updates. They are available either as a standalone update to apply to an existing Windows installation or as complete Windows installation media with the update slipstreamed in. The update is, in essence, a service pack too, meaning that it includes all of the previously released updates for Windows 8.1 and Windows Server 2012 R2, including the optional updates most people never bother to install and even some which Microsoft didn’t release previously, those which fall under the bug fixes and performance improvements category.

Windows 8.1 Update MSDN

For those of you who don’t know already, the update is aimed at improving Windows 8.1 functionality for desktop users, with options to pin full screen immersive Apps to the taskbar and to minimize and close Apps with a fly-out title bar that appears when you hover at the top of an App. Additionally, there are now Power and Search buttons on the Start screen to save people who aren’t familiar with Windows 8.1 from trying to find the Charm bar.

The update also includes the new Enterprise Mode for Internet Explorer, which is aimed at improving compatibility between Internet Explorer 11 and existing Line of Business applications, most of which will have been designed around older versions of Internet Explorer like 6, 7 and 8. There is also an update to Active Directory in the server SKU for Office 365 users, allowing them to sign in to Remote Desktop Services sessions using their Office 365 email address.

Windows 8.1 and Server 2012 R2 Future Update Preview

Insight into a future update for Windows 8.1 and Windows Server 2012 R2 was shown yesterday at Build, including a demo of a hybrid Start Menu to further help desktop users. On face value, this hybrid looks like the classic Start Menu but has an additional column on the right allowing you to pin Live Tiles to it and have the tiles update like they do on the normal Start Screen in Windows 8.1.

Personally, I like the Start Screen, but I can see this is going to be a real winner for enterprise customers who are either still relying on Windows XP and looking to get out of the support retirement hole they are currently in, or on Windows 7 and looking to upgrade but not quite convinced by the interface of Windows 8.1 right now.

This future update demo also showed how, in the future, we will be able to have immersive Apps running in windowed mode, further making the look and feel more comfortable for enterprises to deploy.

Windows Phone 8.1

The Windows Phone 8.1 update has been much the talk of the blogosphere since early information about it started to leak. The main talking point is the Cortana digital voice assistant, which is Microsoft’s answer to Siri. Sadly, the demo didn’t go particularly well for Joe Belfiore on stage, but the premise is really there. In my current mindset, I can’t really see myself finding huge value in Cortana, but I will wait until I get my hands on it in two months, when the update is released, to tell for sure. Regardless of my thoughts, Cortana has a myriad of features allowing you to interact with and control not only native operating system functions but also third-party apps, something which Belfiore demonstrated on stage.

Aside from Cortana, there is now going to be support for VPN and S/MIME digitally signed email in Windows Phone 8.1. I will certainly be trying out the VPN capability back to my home, as I’m interested to see if I can use the VPN tunnel as the default gateway, which would then allow me to avail myself of my OpenDNS protections from home while out and about on mobile. Other improvements include the much asked for Action Center, which will be the notification hub for Windows Phone; the ability to switch mid-call between GSM voice and Skype to enable video calling, similar to FaceTime; and improved controls for enabling and disabling phone features such as WiFi, Bluetooth, Flight Mode and the volume controls. There is also a new developer API to allow apps to customise the lock screen in ways we haven’t been able to do previously.

With respect to the VPN and S/MIME support, I will be interested to see and hear if Windows Intune gets an update to allow administrators to deploy these features over the air (OTA) and then have the settings enforced on the device so that the user of the handset can’t override or disable the VPN or email signing.

I’m a huge Windows Phone fan and I’ve been using it since day dot. The evolution of the platform has been exciting to be a part of and I’m really looking forward to this Windows Phone 8.1 update.

New Lumias

Stephen Elop came out on stage to present some new Lumia handsets, some of which may be available to buy with Windows Phone 8.1 before the update is available to existing devices, which is interesting to note. The new Lumia 930 is the update to the phone I have right now, the Lumia 925.

The Lumia 930 looks amazing and is a GSM take on the Lumia Icon currently available on Verizon in the US. To say I’m pretty upset that I’ve got another 18 months on my mobile contract with Vodafone before I can look at another Lumia as a free handset upgrade is an understatement. I may have to sell one of my children so that I can get a Lumia 930 SIM free.

A couple of other Lumias were shown; however, these are low end devices aimed more at the developing markets than the hyper-consumer US and EU markets where the 930 sits.

Universal Apps

This one is absolutely massive, if the developer community pull together and work on it properly. The premise is simple. A single app which you can purchase from the store would be available across Windows Phone, Windows 8.1 supporting both Touch and Desktop modes and Xbox One.

Whether you need to pay for access to each platform separately is up to the application developer to decide, but the prospect that we could see the Apps we all use and love working in harmony across all of our devices is clearly what Microsoft have been working towards.

With the power of ‘the cloud’ the App developers can allow the synchronisation of content and settings between all of these devices so that the user experience is consistent. Tweaks in Visual Studio are going to allow developers to provide modified interfaces per device so that the experience suits the form factor of your device best too.

Universal Apps are something which iOS specifically has struggled with across iPad and iPhone, so if Microsoft and the developer community can make this work right, I think it is going to be a massive boost for the Microsoft eco-system, and hopefully we should see a lot more Apps written for the platforms because developers can get the biggest bang for their buck (exposure and revenue vs. time spent coding) by having an App available across a wide range of devices.

Office for Touch

Many people, including myself, took to Twitter to have a bit of a moan about the fact that Office for iPad was released last week and that it looks great. The problem, of course, is that we still don’t have a dedicated touch version of Office for Windows to really take advantage of devices like the Surface. Microsoft answered these complaints by demoing a preview version of Office for Touch which isn’t even at the beta stage yet. For a set of Apps which aren’t yet at beta, it looked impressive, so the finished product should hopefully blow us all away. The interfaces were clean and reminiscent of the interface shown last week with Office for iPad.

Judging by how good the preview version of the Apps looked, I’ve got my fingers crossed for an Autumn (Fall) release, but nothing was said or committed with regard to shipping of this product. Either way, it can’t come soon enough: although the Touch Mode in Office 2013 is okay, all it really does is space out the icons to make them easier to fat finger, and a fully touch oriented version of Office for Windows would make the experience on devices like the Surface a real leader.

Conclusion

There is a lot in the pipeline for Windows and Microsoft. With new products, company reorganisations and announcements, this is going to be an exciting year to be a fan of and a worker in the Microsoft space. All I can say on the subject is Prepare for Titan Fall.

Project Home Lab: Existing Infrastructure

In this second post in my Project Home Lab series, I’m going to cover fairly loosely what I’ve got in my environment at home already, as I need to take this into account to determine whether I can keep it all or whether I need to make more fundamental changes too.

This series will consist of the following posts. I will update the table of contents with the new page links in each post as I produce and publish the articles.

  1. Project Home Lab: Goals
  2. Project Home Lab: Existing Infrastructure
  3. Project Home Lab: Hardware Decisions
  4. Project Home Lab: Network Decisions
  5. Project Home Lab: Shopping List

Racking

I’m fortunate that my wife lets me have a server rack in the garage, which is what allows me to even chase the Project Home Lab ambition. Currently, this is a 12U rack I built myself with wooden panels and some 12U AV posts I got from eBay. It’s served me well although it has its nuances.

  • Non-removable side panels make access tricky
  • No wheels or castors making rear access non-existent as the rack is backed into a corner
  • No cooling aids such as top vents or air ducting

The rack is probably going to have to go for three reasons. Firstly, there isn’t going to be enough U space in the rack for me to add the new hardware I am going to be looking at. Secondly, I need more access into the rack so that when I need to add cabling or investigate faults, I can get in there and check it all without more time being spent on gaining access than doing the task in hand. The third reason is weight: all of the new equipment such as new rack chassis and the like will add weight, and I don’t think the wooden panels will support all the extra.

Power

Currently, my rack gets its power from an APC 750VA 1U RM UPS. I’ve had it for about six years and it’s been faultless. I currently operate at about 20% load which gives me a runtime of around 25 to 30 minutes on battery. With the addition of new equipment, I think that I can probably get away with keeping the UPS load within capacity limits but this is going to severely hamper my battery runtime and I’d like to keep a minimum of 15 minutes battery to protect against short-term power outages so the UPS may need to be replaced.

A secondary issue with the UPS is connectivity. This model of UPS has four IEC C13 outlet ports, as do most small form factor UPS units. I’m going to need to invest in a power distribution unit (PDU) or two to add extra power outlets for the new devices. The reason for two and not just a single PDU is that I want to spread the power load over the physical ports on the UPS so that I’m not driving all the power through a single outlet on the UPS and potentially burning it out.

Network

My network core lives in the rack right now and this is where it will stay. I currently have a Cisco ASA 5520 firewall and a TP-Link TL-SG3424 gigabit 24 port switch. Both of these will certainly be kept as is.

The ASA is amazing. It’s running just shy of the latest Cisco IOS release with fully upgraded 2GB RAM and it’s handling the Layer 3 inter-VLAN routing of my home VLANs right now and also acting as my edge router receiving my 120Mbps Virgin Media cable connection and it barely cracks 5% CPU usage and 512MB memory usage. I’ve got no questions whether this can handle the new device traffic but when you look at the specification of the Cisco ASA 5520 is it any wonder?

The TP-Link switch is a Layer 2 managed switch with 24 gigabit ports. I’m using two of the ports in a LAG up to my access switch in my home office, another two ports in a LAG to the ASA and a third pair of ports in a LAG to my home server. The remaining ports connect to devices in the main area of the house. For £125, this is a great switch. It supports all of the enterprise features you would expect from a named brand Layer 2 managed switch like Cisco, HP or Dell but at a fraction of the cost. Reliability and performance have never been an issue and I don’t foresee them becoming one. Lastly, it’s silent as it is passively cooled, keeping the volume and BTU output of the rack down.

I have two issues with the current switch, however, relating to the new lab. One is port count and the other is performance impact. With the current port occupation on the switch, it is highly unlikely that I will be able to get everything connected to it, so I will likely be adding a leaf switch for connecting the lab devices and then an uplink or two from the leaf into the core. The second issue is that I like how my home network performs right now. If I were to start throwing Hyper-V over SMB 3.0 file server traffic across it all day long, I’m not sure how my home production network would suffer, which adds credence to adding the leaf switch. With the leaf switch, the only traffic that needs to leave the confines of the lab back into the core is packets destined for the internet or administrative connections from me into the lab via Remote Desktop Services or management consoles.

Cabling

All of my cabling at home is shielded category 6 cable wired into a category 6 patch panel, with homemade patch leads from the panel into the switch. I test all of my cabling with a Fluke tester to validate it and make sure I’m going to get good clean transmissions over the wire. I try to use wired connections in the house wherever possible as I like having that constant, reliable gigabit speed compared with the relative slowness of 300Mbps N specification wireless and potential disruptors such as DECT cordless phones, Bluetooth and microwaves.

I’m going to continue using this cabling in the new lab. I won’t be using fibre or InfiniBand due to the complexity and cost. Sticking to category 6 copper cabling keeps my cable media uniform across all my devices.

Server

I’ve got one server right now which is running Windows Server 2012 R2 Essentials. This acts as the core to everything in the house offering Directory Services, DHCP, DNS not to mention being a backup target and a media streaming server. It’s currently housed in an RM 400/10 4U rack enclosure from X-Case. I upgraded the case about two years ago with hot swap drive caddies to allow me to add and remove drives to my Storage Spaces Storage Pool easily. Inside the case is an ASUS ATX desktop motherboard with an Intel Core i5 3470T low power processor and 12GB DDR3 RAM.

Although I’m really happy with the performance of this server right now, I am a sucker for consistency and the aesthetics of things. If I can get parts at the right prices, I may well give my home server a little upgrade so that the parts inside match those of the new servers. In part this is a silly thing to cure a minor case of OCD, but in real terms, it means that if I have any suspected failed parts, I can swap and move them between servers to test as needed.

What’s Next

To be honest with you from the start, I’m actually writing some of these articles after the fact: I started this project over a month ago and I already have quite a few of the hardware parts ready for use. In the next post, I will explain my thought processes for selecting the hardware I have bought already and what I still need to purchase and why I will be purchasing those parts.

I’ll do a summary of all of the prices too for budding lab builders among you to use as a reference.

Project Home Lab: Goals

Since I started working in consultancy, I have had a constant need to challenge myself and to spend more time working with the technologies that I promote. The only way to do this is to learn and practice, and the only way to learn and practice is to have equipment to do it on. I have embarked on a project to build myself a home lab, and in this Project Home Lab series of blog posts, I’ll go through all that I am doing to produce it.

This series will consist of the following posts. I will update the table of contents with the new page links in each post as I produce and publish the articles.

  1. Project Home Lab: Goals
  2. Project Home Lab: Existing Infrastructure
  3. Project Home Lab: Hardware Decisions
  4. Project Home Lab: Network Decisions
  5. Project Home Lab: Shopping List

Project Goals

In this first post, I will explain the goals of my home lab and what I want to be able to achieve with it.

The goal of the project is to allow me to work in an environment where I can break and fix, play, learn and explore the products I work with as a consultant, System Center primarily.

I need the project to provide me with a hardware platform which performs well; not to the degree that an enterprise customer would expect, but enough that I don’t want to hurt myself every time I do something in the lab due to a severe lack of performance. Functionally performant, to summarise. I need the hardware I use to be cheap but suitable for the task; cost effective, in other words. I also need it to be energy efficient where possible, as I don’t want to be paying the earth to run this environment. Coupled to the energy efficiency, it must not sound like a datacenter in the garage where I keep my kit, so there may be a need for some post-work to add sound deadening to the garage if things get too loud, as noise output can’t be completely eradicated unless I spend a fortune on water cooling for it all.

Although my plans for the environment are small right now, I don’t want to be hamstrung in the future, stuck at the end of a garden path without options to either scale up or scale out the project. Virtualization is a given in this project, and being that I work with the Microsoft technology stack, it is obviously going to be centred around Hyper-V. The primary goal is to run System Center in its entirety, which will include some SQL Servers for databases. If I decide in three months’ time to add the Windows Azure Pack, or in six months that adding Exchange or another enterprise application to the mix will help me understand the challenges that customers I work for have, then I want to be able to deploy that without having to re-invent the wheel, though minor upgrades are to be expected: memory uplifts or more disks for increased IOPS perhaps.

Project Budget

To be honest, there is no real budget limit for this. I’m going to spend what is appropriate to make it work but the sky is not the limit. I’ve got a wife and three kids to feed so I need to make it all happen as cost effectively as possible which will likely mean the cost is spread with purchasing parts over a number of months.

DPM Replica Recovery Point Run Out of Space

If you receive an alert in System Center Data Protection Manager (DPM) that a replica or recovery point volume has run out of space, you will probably find this is a result of your DPM Storage Pool being out of space and head off to talk to your storage administrator to get some additional disk presented. While this is obviously the correct thing to do, you also need to take into consideration the impact this may have on your Recovery Point and Replica volumes.

Once you have added a new disk to the server, you will add it to the DPM Storage Pool to extend the capacity of the pool. With the pool extended, it would be logical to assume that DPM will automatically extend the Replica and Recovery Point volumes, and whilst this is true in normal operation if you have enabled the auto-grow feature, if your DPM Storage Pool completely filled before the new disk was added, you will need to do this manually. When DPM has attempted to auto-grow the partition on a previous attempt but been unable to do so due to insufficient disk space, it puts the Protection Group into a state where this operation is not attempted again automatically.

Imagine a scenario whereby a DPM server has a single 2TB volume in use for the Storage Pool. DPM creates many dynamic partitions on the disk to store your Protection Group data. When the disk fills and DPM needs to start using a new disk that you added, it will convert the existing dynamic partition into a spanned partition to allow it to span the multiple Storage Pool disks. If this operation occurs during normal DPM operation, whereby there is sufficient free space to do so, then it will happen automatically and you have nothing to worry about. If, however, your DPM Storage Pool completely fills before DPM has a chance to convert the partition, DPM will stop trying to perform this operation, even once new capacity is available.

Luckily for us, the fix is pretty simple. Locate the Protection Group which is reporting that the Replica or Recovery Point volumes are out of space and extend the volumes manually using the DPM Central Console by any amount you like. It can be as little as 100MB if you need it to be. This manual extension will force DPM to re-read the data from the Windows Logical Disk Manager (LDM); it will see the new disk available and perform the span conversion operation.
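If you would rather script the extension than click through the console, the DPM Management Shell can do the same job. Below is a minimal sketch assuming a DPM 2012 R2 Management Shell; the server name DPMSERVER, the Protection Group name File Servers and the data source D:\ are all hypothetical, and it’s worth verifying the exact parameter and property names against Get-Help in your own shell before running it:

# Find the protection group and get an editable copy of it
$pg  = Get-ProtectionGroup -DPMServerName "DPMSERVER" | Where-Object { $_.FriendlyName -eq "File Servers" }
$mpg = Get-ModifiableProtectionGroup $pg

# Pick out the data source whose replica volume is out of space
$ds = Get-Datasource -ProtectionGroup $mpg | Where-Object { $_.Name -eq "D:\" }

# Inspect the current allocation, then grow the replica area by 100MB to force
# DPM to re-read the LDM data and perform the span conversion
Get-DatasourceDiskAllocation -Datasource $ds
Set-DatasourceDiskAllocation -Datasource $ds -ProtectionGroup $mpg -Manual -ReplicaArea ($ds.ReplicaSize + 100MB)

# Commit the change to the protection group
Set-ProtectionGroup $mpg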

If you can’t identify via the DPM Central Console which Protection Groups are faulting, another way is to look in the Disk Management console for partitions on the DPM Storage Pool disks which are not of type Spanned. Non-spanned partitions in this instance will be partitions which have not been pulled across both disks. This could be because that Protection Group hasn’t yet needed to be extended to make use of the new disk, or it could be because it’s out of space, but it’s a step in the right direction.
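If you’d rather check from a command line than from Disk Management, diskpart reports the volume type too. A minimal sketch from PowerShell, piping a command into diskpart; volumes that have crossed onto the new disk show as Spanned in the Type column:

# List all volumes; Storage Pool volumes that span both disks show Type = Spanned
"list volume" | diskpart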

If you are extending the existing DPM Storage Pool disk instead of adding a new disk, I’m not exactly sure what would happen. If I had to hazard a guess, I would say that DPM will know about this uplift in capacity and extend the Protection Groups automatically, as you are still working with the same disk and therefore no span conversion operation is required, however I could be wrong. This is something for me to test in my new lab once I get it built.

Logical Network Creation Error in VMM 2012 R2

If you are working with System Center Virtual Machine Manager (VMM) and trying to configure Logical Networks on a Hyper-V host, here is an issue you need to be aware of.

If the display name of any of the network adapters on your host contains the square bracket characters (e.g. [ or ]), then the creation of the Logical Network on the host will fail with a rather spurious error message. Check the display name of all of the adapters on the host and ensure that they do not contain square brackets before you go through any other troubleshooting. You could save yourself an hour or two.
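A quick way to audit a host for offending names is from PowerShell on the host itself. A minimal sketch assuming Windows Server 2012 or later, where the NetAdapter cmdlets are available; the adapter names here are just examples:

# List any network adapters whose display name contains a square bracket
Get-NetAdapter | Where-Object { $_.Name -match '\[|\]' }

# Rename an offending adapter to something VMM will be happy with
Rename-NetAdapter -Name "Ethernet [LAN]" -NewName "Ethernet LAN"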

Configuring a SQL Azure Sync Group TechNet Guide

Back in January, I drafted a blog post on how to configure a SQL Azure Sync Group to provide database high availability and geo-distribution. I decided that there was actually so much content that it would have been too long for a blog post, so I have published it as a .pdf document on the TechNet Gallery instead.

The great news is that the guide is now published and you can download it for yourself at http://gallery.technet.microsoft.com/Configuring-a-Windows-73847ad3.

Please let me know what you think of it, as this is my first publication in this format. If you have any questions, comments or a topic request for another guide, then please get in touch and we’ll see what I can do.

Automating SharePoint Online with System Center Orchestrator

Recently, I’ve been working with a customer who uses Office 365 SharePoint Online and was looking to automate the creation of new sub-sites in SharePoint Online with System Center Orchestrator. In addition to automating the creation of the sub-sites, the customer wanted this to be available as a self-service offering which they can make available to their users.

The customer asked me to put together a video on how we achieved this. This has been put up on YouTube as a four part video series.

You can see the series in the Automating SharePoint Online with System Center Orchestrator playlist at https://www.youtube.com/playlist?list=PLAKHPB7NYKVWBHi778g3LoQmtZ-cBMgsb or with the embedded video below.

The four parts are broken down as follows:

Part 1: Introduction and Prerequisites
Part 2: System Center Orchestrator Configuration
Part 3: System Center Service Manager Configuration
Part 4: Finished Product Demonstration

WMI Filter Features on Demand GPO

Last week, Yung Chou from Microsoft put up a post about using Group Policy to provide Features on Demand for Windows Server 2012 R2 and how this can help in restricted environments where servers don’t have access to Windows Update to retrieve on-demand features such as .NET Framework 3.5 or where you don’t want to be left manually providing UNC paths to operating system media.

This is certainly true, and even if you aren’t in a restricted environment, this is worth doing because it makes it much easier for administrators to add certain roles and features to Windows Server. However, the one point missing from the post is that you will probably want to WMI filter this Group Policy Object so that only Windows Server 2012 R2 operating systems will read and apply the policy setting.

I’m not going to walk through the process of creating a WMI Filter and applying it to a GPO as that’s pretty simple stuff but finding the right query to craft can sometimes be a challenge so here you go:

SELECT ProductType, Version FROM Win32_OperatingSystem WHERE (Version LIKE "6.3%") AND (ProductType = "2" OR ProductType = "3")

This query picks out Windows Server 2012 R2 with the Version LIKE "6.3%" clause; however, that alone would also match Windows 8.1 client machines, so the ProductType equals 2 or 3 condition ensures that only server types are matched.

This filter can be used for targeting any GPO that requires Windows Server 2012 R2 specifically. If you wanted to craft a WMI Filter which explicitly calls out Windows 8.1 instead then simply replace ProductType 2 or 3 with ProductType equals 1.
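Before linking the filter to a GPO, it’s worth sanity-checking the query on a test machine. A minimal sketch using Get-WmiObject, which evaluates the same WQL locally; a result back means the filter would apply to that machine, and an empty result means it wouldn’t:

# Run the WMI Filter query locally; returns the OS object on Windows Server 2012 R2 only
Get-WmiObject -Query 'SELECT ProductType, Version FROM Win32_OperatingSystem WHERE (Version LIKE "6.3%") AND (ProductType = "2" OR ProductType = "3")'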