Posts from February 2011

BitLocker Drive Encryption and Windows Home Server 2011

As a quick note-to-self kind of post, I am very keen to find out whether Microsoft are supporting the use of BitLocker encrypted volumes on the Home Server.

As of Power Pack 2 for Windows Home Server v1, Microsoft support backing up BitLocker encrypted volumes from the client, however I am interested in encrypting the volumes on the server.

If someone is able to steal Nicky’s laptop, for example, they only get her data and email, whereas if someone were to steal the Home Server, they have copies of everyone’s emails, backups and data, not to mention any shared data which is saved to the Home Server shared folders.

I am hoping in the coming days, with more information being released about Home Server 2011, that an answer will become clear on this, although I personally see no reason for not supporting it, as there is no Drive Extender (DE) for it to cause problems with, and Windows Server 2008 R2, the underlying operating system, has BitLocker in its core.

A Quick Hit on RAID Levels

These are written by my own fair hand with nothing copied from Wikipedia, so I accept any errors to be my own.

RAID-0

RAID-0, otherwise known as a Stripe without Parity, is a method of pooling two or more disks together into a single logical disk. Although consisting of multiple drives, the OS will only see one disk with the total capacity of all of the provisioned drives. Although attributed as a RAID version, RAID-0 provides no protection for the data and there are no parity bits or duplicate copies of the data on any of the disks – if a single disk fails, the entire volume is lost, because every file is striped across all of the disks.

This level gives the highest value for money as every disk you add grants the full amount of storage, but you get no protection for that money, so it should not be used by itself as a data protection solution.
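The round-robin striping described above can be sketched in a few lines (the block labels and sizes are illustrative, not real disk I/O):

```python
# A minimal sketch of RAID-0 striping.

def raid0_capacity(disk_sizes_tb):
    """Usable capacity is the straight sum of the member disks."""
    return sum(disk_sizes_tb)

def stripe(blocks, num_disks):
    """Write logical blocks round-robin across the member disks."""
    disks = [[] for _ in range(num_disks)]
    for i, block in enumerate(blocks):
        disks[i % num_disks].append(block)
    return disks

print(raid0_capacity([2, 2]))  # 4 (TB): every disk adds its full size
disks = stripe(["b0", "b1", "b2", "b3", "b4", "b5"], 2)
print(disks)  # [['b0', 'b2', 'b4'], ['b1', 'b3', 'b5']]
# Each disk holds alternate blocks of every file, so losing either disk
# destroys the whole volume, not just the files that "lived" on it.
```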

RAID-1

Otherwise known as Mirroring, RAID-1 provides the simplest level of data protection. Two disks are mirrored, so that they are exact copies of each other. In the event of a single drive failure, the other drive takes over as the sole serviceable disk and performs all of the I/O. Until the failed disk is replaced, however, the data is at risk, as should the remaining disk fail, you will have no more copies of the data.

RAID-5

The most commonly used RAID method in older business systems, RAID-5, otherwise known as stripe with parity, requires a minimum of three disks. The storage capacity of the array equals the capacity of all but one of the disks; the equivalent of one disk’s capacity is used to store the parity data, which is distributed across all of the disks. One of the three disks can fail in RAID-5, and the parity data on the remaining disks is able to recreate the missing data to its full healthy state. A second simultaneous failure, however, will lose the array.

Read performance on RAID-5 is good, however write performance is slow due to the overhead of calculating and writing the parity data, especially on the low performance on-board RAID controllers seen on modern day motherboards.
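The parity mechanism is just XOR, which is easy to demonstrate (the byte values here are illustrative, not real disk blocks):

```python
from functools import reduce

def parity(blocks):
    """XOR equal-sized blocks together. With the data blocks this yields
    the parity block; with the survivors plus parity it yields the lost block."""
    return bytes(reduce(lambda x, y: x ^ y, t) for t in zip(*blocks))

d0 = bytes([0b1010, 0b0001])   # data block on disk 1
d1 = bytes([0b0110, 0b1100])   # data block on disk 2
p = parity([d0, d1])           # parity block on disk 3

rebuilt = parity([d0, p])      # disk 2 fails: XOR the survivors
print(rebuilt == d1)           # True - a single failure is recoverable
# Lose two disks and only one block of the three remains; XOR cannot
# solve for two unknowns, so the array is gone.
```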

RAID-10

My personal favourite, RAID-10 is what is known as a hybrid RAID or nested RAID technology. It uses a recommended minimum of four disks, although some controllers can work with two. RAID-10 is more correctly noted as RAID-1+0, meaning RAID-1 combined with RAID-0. A stripe without parity (RAID-0) offers good read and write performance and maximum storage capacity, but it offers no protection. A RAID-1 Mirror offers good read performance and slightly slower writes, but provides the protection of a second copy of all of the data.

RAID-10 fixes this by backing each striped disk with a mirrored copy. In a four disk RAID-10 configuration, the usable capacity of the volume is half the raw total (e.g. 4x 2TB disks gives 8TB of raw capacity and a 4TB usable volume).

RAID-10 gives you the striping performance of RAID-0, plus the added read performance of having a second disk spindle able to service each I/O, plus the protection benefit of a second copy of all of the data should one drive fail. In RAID-10, two of the four drives can fail at any one time, provided one drive from each Mirror set remains available to complete the Stripe.
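The double-failure rule can be checked by brute force over the four disks (disk names here are illustrative):

```python
from itertools import combinations

# Two mirror sets, striped together into one volume.
mirror_sets = [{"A1", "A2"}, {"B1", "B2"}]

def survives(failed):
    """The stripe holds if every mirror set keeps at least one live disk."""
    return all(s - set(failed) for s in mirror_sets)

all_disks = sorted(set().union(*mirror_sets))
outcomes = {pair: survives(pair) for pair in combinations(all_disks, 2)}
print(sum(outcomes.values()), "of", len(outcomes), "double failures survive")
# 4 of 6: only losing both halves of the same mirror set kills the array.
```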

Windows Home Server 2011 (Vail): The Plan

Yesterday I received an email in my Inbox from Microsoft Connect announcing the public release of the Release Candidate of Windows Home Server Vail, which is to be officially named Windows Home Server 2011.

As many had feared, the key feature from Windows Home Server v1, Drive Extender, is missing. Although I shed a tear briefly, I’m not going to write offensive emails to Steve Ballmer or get on a high horse about it like many in the social networking and blogosphere scenes. This is largely because I am technical enough to understand technologies like RAID, and as a result I have an idea of the directions to take without DE.

I had previously been considering software RAID in the form of FlexRAID, because it offers data protection similar to that offered by DE. On further thought, however, I came to the conclusion that Microsoft dropped DE because of issues with software data redundancy, and that although potentially more expensive to set up initially, hardware RAID will offer a better level of protection. It will also help to improve IOPS, which is crucial as I will be using Green drives to reduce the energy footprint of the server.

Yes, Drive Extender was easy: it made the server friendly and let end-users without technical knowledge provision storage, whereas RAID was neither supported nor recommended. RAID makes the setup more complicated in that you need to understand the technical differences between RAID-0, 1, 5, 10 and so forth, the number of disks required for each, and how the different levels affect storage capacity, redundancy and performance.

Revisiting My Home Server Build

Back in December 2010, I posted with some rough specifications and power figures for my ideal Home Server build. On the surface it’s largely the same as before, however I currently have a few considerations:

  • Drive Extender is definitely gone now, so I am dependent on another storage technology, potentially increasing the number of drives required, which has a knock-on effect on the power usage.
  • Still with a lack of Windows Media Center, and now owning an Xbox 360, I am using VMware Server to virtualise a Windows 7 machine on the Home Server so that the Xbox has an always on and available Media Center connection. I am in the throes of deciding whether to stick with the Xbox for streaming or use the PS3. If I keep the VMware Server setup going, then my problem is that I will require a more powerful CPU, such as an Intel Pentium D, increasing the power requirements.

Since my posting in December, 3TB drives have become available, which has affected the price per gigabyte. As it stands today, for the Western Digital Green series of drive, the pricing per gigabyte for each drive model is as follows (pricing from Overclockers UK as of today):

  • Western Digital Green 1TB – £49.99 / £0.05 per Gigabyte
  • Western Digital Green 2TB – £79.99 / £0.04 per Gigabyte
  • Western Digital Green 3TB – £179.99 / £0.06 per Gigabyte
    *I’ve used base 10, or 1,000 Gigabytes per Terabyte, for this, as this is how drive manufacturers count, although I wish someone would teach them binary.

Based on these prices and taking into account RAID requirements, I’m changing my drives from 1TB Green drives to 2TB Green drives. This allows me to stick to only one PCI-Express RAID controller and to reduce the wattage of the box by reducing the number of disks.
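For what it’s worth, the sums above can be reproduced directly (prices as quoted, manufacturers’ base-10 gigabytes):

```python
# Price per gigabyte for the Western Digital Green range, using the
# Overclockers UK prices quoted above and 1 TB = 1,000 GB.
drives = {"WD Green 1TB": (49.99, 1000),
          "WD Green 2TB": (79.99, 2000),
          "WD Green 3TB": (179.99, 3000)}

for name, (price, gigabytes) in drives.items():
    print(f"{name}: £{price / gigabytes:.2f} per Gigabyte")
# The 2TB model comes out cheapest at roughly £0.04/GB, which is what
# drives the switch from 1TB to 2TB disks.
```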

How I Work with RAID

The following diagram is something I knocked up in Visio this evening. It shows how the disks will be physically and logically arranged:

WHS2011 Disk Design

So what does all this mean?

As per my original build plan, the motherboard I am using will feature an on-board SATA-II RAID controller supporting up to four drives and RAID modes 0, 1 and 5. I will also be installing a PCI-Express SATA-II RAID controller for two reasons. Firstly, my case supports up to ten drives and I need to make sure I have the capacity to make use of those slots. Secondly, the on-board RAID controller will be slow to perform any fancy setups like RAID-5, and it doesn’t support any nested RAID modes.

The controller I will be installing, the Leaf Computers 4-Port SATA-II RAID PCI-Express card is a 1x PCI-Express interface card which supports RAID modes 0, 1, 5 and most importantly of all 10.

In my design, I am going to be using both controllers to serve two different purposes in two different RAID modes.

Drives labelled OS indicate drives used for the OS. Drives labelled DG-A are drives which exist in the first RAID-10 Mirror set, while DG-B indicates drives in the second mirror set.

The Operating System

A functioning Operating System is critical to the machine for obvious reasons, so it needs to be protected, but at a limited cost. Once the server is booted and running, the demands on the underlying OS volume will be low, so the performance requirements for the OS disks are minimal, and because it’s a server that stays on for much of the day, I’m not concerned about boot times.

For this reason, I’ve decided to use two Western Digital Green 2TB drives in a RAID-1 Mirror attached to the on-board RAID controller.

The Windows Home Server 2011 installation automatically creates a 60GB partition for the operating system and allocates the remaining space to a D: partition which is by default used for your own personal data.

The Data

The Data in the case of the Windows Home Server environment is the critical piece: shared copies of pictures, music, documents and the digital video library are all stored on the server. These files need to be protected from drive failure to a good degree, and also need to be readily available for streaming and transfer to and from the server. For this reason, I will be using four drives in a RAID-10 configuration attached to the PCI-Express Leaf Computers controller.

This controller offers the ability to use RAID-10 unlike the on-board controller and will be a much higher performing controller which will reduce any bottleneck in the underlying RAID technology. With two disks serving each half of the stripe, the performance will be impressive, and should theoretically outperform the expensive and high power consuming Black Edition drives from Western Digital.

The mirroring of each drive in the Stripe provides the data protection.

Once built, the provided Silicon Image management software can be installed on the server and can help generate alerts to provide warnings when a drive has failed so that it can be replaced as soon as possible to ensure that the data is protected.

Off-site backups of the data will likely be taken to a USB attached SATA-II drive to offer a physical backup – RAID is not backup and I want to make this clear now.

Presenting the Logical Volumes

So I’ve got my 2TB RAID-1 Mirror for the OS, and I’ve got my 4TB volume on the RAID-10 array, but how will I actually use it? Without Drive Extender, Microsoft are proposing that people use separate disks for each shared folder and bump files between volumes when a volume becomes full.

All of this sounds horribly ineffective and effortful, so I am instead going to make the most of DE being gone and use NTFS Mount Points, otherwise known as Junctions.

The Windows Home Server 2011 installation will create a C: drive and D: drive. The C: drive will be 60GB for the operating system, leaving a 1.9TB D: partition based on the remainder of the disk. On this volume, I will create a folder called MNT and use an NTFS mount point in Server 2008 R2 so that, instead of being assigned a drive letter, the 4TB array volume becomes a logical extension of the D: drive.
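As a sketch of that step, assuming the array appears to Windows as a single volume, the mount can be done from an elevated command prompt with the built-in mountvol tool (the volume GUID below is a placeholder; running mountvol with no arguments lists the real volume names to copy from):

```
md D:\MNT\Storage
mountvol D:\MNT\Storage \\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\
```

The same result can also be achieved through the Disk Management GUI by choosing “Mount in the following empty NTFS folder” instead of assigning a drive letter.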

The advantages of this are:

  1. No need to manage partitions.
    The underlying disks provide the separation I need, so why complicate things by partitioning the 4TB volume into separate volumes for Photos, Videos, Music etc.? All this will lead to in the long run is under-provisioning of one of the partitions, and I will then spend time growing and shrinking the partitions to meet my requirements, causing massive disk fragmentation.
  2. No need to manage drive letters.
    Drive letters are a pain, especially if you are expected to have one for each type of shared folder and especially if you need to remember where you put something. I instead will have a super-massive D: drive containing everything. I’m sure the thought of this makes a lot of people cringe, but in reality, what does it matter? It is very unlikely that a problem will occur with the partition unless there is an underlying disk problem, and that in turn will affect all of the partitions.

The only flaw in my plan will be the Home Server 2011 installation process. If it detects my 2TB hard disks and decides to initialise them as MBR disks instead of GPT disks, then I will have to restart the installation, drop into WinPE and manually initialise and partition the disks.

I will require the 4TB array volume to be a GPT disk and not MBR, as its capacity exceeds the 2TB limit permitted by an MBR partition table.
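For reference, the 2TB ceiling of an MBR partition table comes straight from its 32-bit sector addressing (assuming the standard 512-byte sectors these drives report):

```python
# MBR stores partition sizes as 32-bit sector counts, so the largest
# addressable partition with 512-byte sectors is just under 2.2 TB (decimal).
sector_size = 512            # bytes per sector
max_sectors = 2**32 - 1      # 32-bit LBA field in the MBR
limit_tb = max_sectors * sector_size / 1000**4
print(f"MBR limit: {limit_tb:.2f} TB")  # a 4TB volume will not fit
```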

Expansion

Based on the current server motherboard specification (ASUS Intel Atom miniITX board), there is only provision for one PCI-Express card. As a result, all four drives will already be in use on the existing PCI-Express card. The only option for expanding the capacity of the server would be to attach two drives in a Mirror to the two remaining on-board ports, however performance on those drives would be lower, and they would act as a separate mirror rather than part of the RAID-10 configuration.

The alternative option is to replace the four 2TB drives attached to the PCI-Express card with 3TB drives once their price has fallen over time, however, although this would grant an additional 2TB of usable space, the cost would be high.

If I make the decision to use an Intel Pentium D processor instead, then a microATX motherboard would offer two PCI-Express slots, opening the possibility of adding another four disks in a second RAID-10 configuration and allowing anything up to 6TB to be added to the storage capacity of the server, depending on the drives installed.

In reality, on our current Home Server, we are storing a little over 1TB of data, so this is about giving us the capacity we need for the next 5 years and doing it right in one move, without having to make the ad-hoc and ineffective changes down the line that I am normally pressed into at home due to budget versus the cost of used hardware.

In the next 5 years, I will no doubt upgrade my Nikon D40 to a D90, which will result in larger photo file sizes, and our digital libraries will no doubt continue to grow at a significant rate as the kids get older and more into an ever widening range of media, not to mention the likelihood of them getting their own machines which I will need to backup.

4TB of RAID-10 storage gives us 3x the capacity we have currently, and it is more redundant, more performant and more energy efficient – much more, for that matter.