
Restoring Client Computer Backup Database in Windows Home Server 2011

Quite some time ago (probably about two months now), the primary drive in my Windows Home Server 2011 machine started giving me issues after I installed a new device driver. Nothing got me going with ease: Last Known Good Configuration, Safe Mode, nothing. The problem lay in the fact that the server wouldn’t acknowledge that the OS disk was a boot partition, and after leaving it to attempt to repair the boot files by itself (which, for the record, I don’t think I’ve ever seen work), I took to it manually.

Launching the Recovery Console command prompt from the installation media, I tried the good old commands that have served me well in the past on Windows Vista and Windows 7 machines I’ve had to repair, bootrec and bootsect, but nothing worked, so I was left with only one option: reinstall the OS. I wasn’t concerned about losing personal data, which is stored on a separate RAID volume, but I was concerned about my client backups, which were stored on the same volume.
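For anyone facing the same thing, these are roughly the repair commands available from that recovery command prompt (a sketch of the standard sequence rather than a record of my exact session; in my case none of them brought the boot partition back):

    bootrec /FixMbr
    bootrec /FixBoot
    bootrec /ScanOs
    bootrec /RebuildBcd
    rem bootsect rewrites the boot code for BOOTMGR-based systems (Vista/7/2008 R2)
    bootsect /nt60 SYS /mbr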

Using a USB-attached hard disk, I manually copied out the Client Computer Backups folder, then rebuilt the operating system. I don’t keep active backups of the Home Server operating system because the Windows Server Backup utility in Windows Server 2008 R2 isn’t that hot: it doesn’t support GPT partitions over 2TB, which obviously is an issue for my setup.

Once installed, Windows Home Server sets up the default shares and folders, including Client Computer Backups. The critical thing here is that no client must be allowed to start a backup to the server before you complete these steps. Once a client starts a backup, the server creates new databases and files, ruining the chances of importing the existing structure.

From the new OS installation, open the directory where the Client Computer Backups live. The default location is C:\ServerFolders\Client Computer Backups, but I had moved mine to D:\ServerFolders\Client Computer Backups. Once you’ve found the directory, copy across all of the files I had previously copied out from the broken install of Windows, overwriting any files when prompted.
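If you would rather script the copy than drag folders around in Explorer, robocopy will preserve timestamps and permissions and retry on any locked files. This is only a sketch: E: stands in for whatever drive letter your USB copy ended up on, and the destination should be wherever your Client Computer Backups folder actually lives.

    robocopy "E:\Client Computer Backups" "D:\ServerFolders\Client Computer Backups" /E /COPYALL /R:2 /W:5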

Once completed, restart the server. This restarts all of the Windows Home Server services responsible for running the Dashboard and the Client Computer Backups. Once the restart has completed, open the Dashboard and select the Computers tab, where you normally view the computer health states and backups. On first inspection it looks as though you have no clients and no backups, but look more closely and you will see a collapsed group called Archived Computers. Expand this group and you will see all of your clients listed, and all of their associated backups will be listed if you select the Restore Files option for a computer.

The thing to point out here is that these backups will remain disassociated from the clients. Once you re-add a client to the server and commence a backup, it will be listed as a normal computer, and the Archived Computer object for it will also remain listed. This is because the server generates GUIDs for the backup files based on a combination of the client identity and the server identity, and because the reinstallation of the operating system causes a new identity to be generated, the two no longer match. This isn’t a problem for me, but I’ve read a number of posts on the Microsoft TechNet forums where people have had trouble locating the Archived Computers group in the Dashboard and think that they’ve lost everything, which clearly isn’t the case.

Windows Home Server 2011 Cross Subnet Client Computer Backup

Windows Home Server 2011 is a great product for home use, but its design is centred around homes with very basic, single-subnet flat networks.

A lot of home networking devices shipping these days give you the ability to separate your wired and wireless networks into separate VLANs, such as Linksys products, which by default use 192.168.1.1 for the wired network and 192.168.2.1 for the wireless network when the feature is enabled. Then there are geeks like me who run their homes like a miniature enterprise with router-on-a-stick topologies or even vast OSI Layer 3 switched networks.

This causes problems for the Windows Home Server 2011 Connector installed on your client computers running Windows XP, 7 or 8, or on Macs: out of the box, they can’t communicate with the Home Server from another subnet and can’t complete their daily scheduled backup jobs, leaving you unprotected.

Fortunately, this is fixed very easily with a quick Remote Desktop session onto the server. It’s wise to point out now that Microsoft don’t support this modification; however, it’s such a small change that I would argue Microsoft would be crazy to deny support to an end-user based on it, and it would be very easy to change back to the default if they did complain.

  1. Start a Remote Desktop Services session to the server and logon as the Home Server Administrator account.
    (If you are unsure of how to do this, then you can find this elsewhere online. Anyone unsure of using Remote Desktop probably isn’t a great candidate for making firewall configuration changes either).
  2. From the remote session, open Windows Firewall with Advanced Security from Administrative Tools in Control Panel.
  3. Scroll through the list of rules until you find the block listing the following services:
    Windows Server Certificate Service
    Windows Server Client Computer Backup
    Windows Server Connect Computer Web Site
    Windows Server Discovery
    Windows Server Mac Web Service
    Windows Server Provider Framework
  4. For each of these services, open the properties, and select the Scope tab.
  5. If you are unsure of the address boundaries of your subnets, then the easiest thing is to change the Remote IP Address setting from Local subnet to Any IP address, although I don’t recommend this configuration.
  6. If you know the address boundaries of your subnets, then click the Add button and add either the slash notation for the subnet address in the top box, or select This IP Address Range and enter your starting and ending addresses.
    In my case, I added the slash notation of the subnet for my wireless network (e.g. 192.168.2.0/24).
  7. Once you have updated the scope, select the OK button to commit the change. No server restart, client computer restart or anything else is required to make it work; the server will simply start accepting connections from the addresses you specified. A command-line equivalent is sketched after these steps.
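If you would rather script the change than click through each rule, netsh can set the same scope from an elevated command prompt. This is a sketch only: use the exact rule names as they appear in the firewall console (the services listed in step 3), substitute your own subnet, and repeat the command for each rule.

    netsh advfirewall firewall set rule name="Windows Server Client Computer Backup" new remoteip=localsubnet,192.168.2.0/24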

It’s worth noting that this change will also now allow you to join clients to the Home Server from your wireless subnet. By default, I found you had to resort to a physical connection to get the Connector client installed, as it wasn’t able to detect the Home Server otherwise.

The Trials and Tribulations of Installing Windows Home Server 2011

As I sit here now in my study at home, I am blessed by the new soothing sound of my self-built Windows Home Server 2011 system. And why is the sound soothing? Because it’s silent. My rack is still making some noise, coming from the Cisco switch and router, which both probably need a good strip down and de-dust, but it is nothing compared with the noise of the old PowerEdge SC1425 that I had running.

Unfortunately, installing Windows Home Server 2011 for me wasn’t smooth sailing, and I hit quite a few bumps along the way, so here is the list of problems I faced to help others avoid the same time wasters.

Before even starting the installation, please make sure you do read the release notes. Ed Bott has gone through some of the crazy requirements in a post at ZDNet (http://www.zdnet.com/blog/bott/before-you-install-windows-home-server-2011-rtfm-seriously/3134). The biggest one to watch out for is the clock.

Due to some kind of bizarre issue with the RTM release of WHS 2011, you must change the time in your BIOS to PST (Pacific Standard Time), GMT minus 8 hours. You must then leave the BIOS clock, and consequently the Windows clock, set to that time, and during the installation, when prompted for the time zone, you must set it to Pacific Standard Time.

Once the installation is complete, you must then wait a further 24 hours before changing the time back. If you choose not to heed this advice, then the release notes state that you will not be able to join any client computers to the Home Server during this 24-hour period. Once your 24 hours are up, you can log into the server and change the time and the time zone accordingly.
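For what it’s worth, the time zone part of the switch back can be done from the command line on Server 2008 R2 with tzutil. The UK zone name below is just my own case; tzutil /l lists the identifiers for your region.

    tzutil /g
    rem confirm the current zone, then once the 24 hour period is up:
    tzutil /s "GMT Standard Time"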

The first problem hit at the first phase of the installation, Extracting Files, while it was at 0%. Reviewing the error log from the setup process, I saw that it had encountered Setup Error 31 with error code 80004005. A quick look on the Microsoft Social Forums led me to discover that WHS 2011 doesn’t support any kind of RAID or array-type disk being attached during the installation. For me, this meant disconnecting the RAID-10 controller and powering down the disks attached to it for the duration of the install. Once the install was complete, I simply reconnected the controller, installed the drivers, and everything worked perfectly as I expected.

The second problem occurred once the installation was complete and the server ran the WHS 2011 customisation process after first logon. It seems that WHS 2011 goes out to Windows Update and pulls down a couple of required updates, and as such needs a working network card. My motherboard uses a NIC which isn’t natively supported by WHS 2011, so I had to install the driver; however, to my shock, the initial lack of a NIC terminated the setup process and I was forced to restart.

As my existing home server and the new home server were to be using the same IP address, I had the new one disconnected from the network initially. This caused the next problem: after installing the NIC driver, I was given a prompt that there was no network connectivity and that I should connect a network cable. Once again, to my shock and disbelief, this required another restart.

At this point, I also realised that my Cisco switch still had port security turned on for the Home Server’s switch port, bound to the old server’s MAC address, so I had to disable that on the switch. And guess what? Reboot again.

My final problem lay with the network card on the motherboard itself. In the BIOS, I had enabled the maximum power saving mode setting. It turns out that, for the ASUS E35M1-M PRO motherboard, this prevents the network card operating in 1Gbps mode and drops it to 100Mbps. It took me a while to figure this one out, changing cables, switching between switch ports and so on, but I eventually discovered an option for the network card in Device Manager called Green Ethernet. Disabling this setting, which was previously set to Enabled, reset the network connection, and it then connected at 1Gbps.

After all of this, I have a fully working and perfect home server for me and the family. I’ll be writing some other posts to explain my setup in detail, but this post is purely about the installation process.

Corsair AX750 Professional Power Supply and Memory Installation

So, in the latest episode of part installations in my progressive Home Server 2011 build, I received the power supply and the memory in the post today.
The memory is the same as that originally specified in my home server design: 4GB of Corsair 1333MHz DDR3 as two 2GB sticks, so that I get the most from the dual-channel memory controller. This isn’t XMS or Dominator or any of the special Corsair models of memory, but the standard Corsair memory. The reason for this is that the home server isn’t going to be running a large number of memory-intensive processes, and especially not ones which require ultra-low CAS latency and timings or lots of paging in and out.
I’ve always used Corsair memory since I switched from unbranded sticks about 4-5 years ago due to lots of back-to-back memory-related issues. I’ve never had a single stick go bad, not even the three 2GB sticks of Registered DDR2 in my current Dell PowerEdge SC1425 home server build (which has been up and running every day for the last three years). If a stick ever did go bad, I know I have Corsair’s lifetime warranty to back me up too, which is nice.
The power supply has three key requirements in this build. One is to be silent, or as near to silent as possible. Two is to provide enough SATA connectors to support the six SATA-II disks that will be going in the server. Last but not least is to be as energy efficient as possible, even at low power draw levels.
Silence was a difficult one for me to find, because all of the silent power supplies I was able to find didn’t support more than four SATA-II disks, and I didn’t want to be using Molex to SATA converters in the build as that’s just another thing to go wrong or stop something working at 100% efficiency. As per the previous point, six-disk support was hard to find. It was achievable, but only on the higher-end power supplies capable of delivering silly amounts of power, up to 1200W in some cases. Whilst a power supply only uses the power it needs, it has an efficiency curve based on the demand: typically, the lower the draw relative to the PSU’s peak or recommended continuous load rating, the worse the efficiency, and hence the final requirement.
I was mainly after a supply rated at 80 PLUS Bronze, though anything better was a plus. When I found the Corsair AX750 Professional Series Modular PSU, I was in power heaven. With a peak load of 750W, sleek black good looks, and a modular design supporting up to twelve SATA-II disks, I would not only have enough SATA connectors to meet the needs of my current six-disk design, but also capacity to extend to the full ten-disk capacity of the case if I wanted to in the future. Thanks to the modular cabling, I am also able to maximise the airflow in the already airy case by only installing the cables I need to deliver power to the disks and motherboard.
The power supply is 80 PLUS Gold rated and delivers a massive 90% efficiency at 230V even when operating at below 20% load (I will probably be in the 6-7% region), which is something very rare for a power supply. The power supply does have a fan, rated at 35dB at the full 750W load, which is loud; however, when running below 20% load the fan is disabled due to the lack of heat generation, which means I meet the final criterion of silence.
The supply comes with a full seven-year warranty from Corsair, and because I will only be running the power supply at extremely low load levels, none of the components are likely ever to be taxed to a level that would cause them to fail. This supply ticks all the boxes I had as minimum requirements, goes an extra ten miles with tonnes of extra features and nice touches, and leaves me safe in the knowledge that it will outlast my projected storage utilisation for this server (and most likely the shelf life of Windows Home Server 2011 too).
Lastly, I decided that as I am not going to be using the rear case fans, I should remove them to give me a bit more through-flow for the air and also to get the hanging Molex connectors out of the way. There is a shot of the case without these fans now and the extra ventilation it will give me. The front case fans are still installed; once the build is complete I am going to review the temperatures and decide whether they warrant having some air movement and whether the dB level from the fans is acceptable.

Past the First Hurdle, but In a New Camp

Since my last post about the new Home Server project, I’ve received financial backing in the form of overtime at work to start the purchasing, and I’ve also received WAF (Wife Approval Factor) on the basis that the new server will be near silent and will give us much greater storage capacity while cutting the overall power consumption significantly.

Late last week, I ordered the case for the project and the motherboard with a twist.

Since my last post on all things Home Server back in March, I have discovered a new product, recently released by AMD, aimed squarely at the playing field of the Intel Atom. The processor is the AMD E-350 Zacate, based on the AMD Fusion platform. The platform is designed for high performance yet low power consumption, while integrating HD video capabilities and other top-end features into the chipset.

So, moving away from the Asus AT3IONT-I board with the Intel Atom 330 processor, I have instead gone for the Asus E35M1-M PRO motherboard, being an AMD fanboy in a previous life.

This motherboard is microATX, by contrast to the Intel board, which was miniITX. This shift in form factor gives me more flexibility due to the increased number of PCI Express slots and also means there is room for a more powerful chipset on the motherboard. The net result is a motherboard which, for a little over £100, gives you an 18W TDP processor which can be passively cooled (or actively cooled with the optional CPU fan included), support for up to 8GB of DDR3 memory, 2x PCI, 1x PCIe x1 and 1x PCIe x16 slots, five SATA 6Gbps ports supporting JBOD, RAID-0 and RAID-1, Gigabit LAN, USB 3.0, HDMI, DVI-D and VGA video output, along with S/PDIF optical out for audio.

The new motherboard sees the power consumption go up from 12W to 18W; however, based on the performance benchmarks and a recent review I picked up from TheWindowsBlog on Twitter, which you can read at the Windows Team Blog site, the extra 6W really seems worth it. The motherboard they reviewed is actually the miniITX version of the board, which lacks a couple of the features mine has, but the processor and chipset are identical.

Forum topics on sites like The Green Button and AVForums all suggest that this processor and video combination, in an HTPC scenario, is more than capable of handling two simultaneous HD streams, something the Atom can’t manage on its own.

With the motherboard in hand, I have to say it’s a really nice looking board, and the features still blow me away for such a small and tightly integrated package.

The case is another ball game. The pictures over on the X-Case website really don’t do it justice. The 4U chassis has large slots in its front to allow for decent airflow. Unlocking the front panel allows you to lower the flap to reveal the internal air filter and two 80mm fans, with the air filter mounted in front to stop the case inhaling dust. Inside the case, you have ample room for even a full ATX board, so my microATX board is going to be swamped inside, but at least it will have six of the ten 3.5” drive bays filled with 2TB Western Digital Green disks to keep it company.

I’ll be assembling the motherboard in the case later today, and will grab a picture. Next month, I will hopefully be ordering the memory and the power supply which will give me enough to get the machine powered on and to configure the EFI BIOS settings how I want them before ordering the RAID controller and the disks last.

In light of the additional PCI slots, I am currently thinking about adding an Intel Dual Port Server NIC to the machine so that I can set up a team to give me more throughput and redundancy on the network, as this is what I currently have set up in my existing Dell PowerEdge SC1425 box.

BitLocker Drive Encryption and Windows Home Server 2011

As a quick note-to-self kind of post, I am very keen to find out whether Microsoft are supporting the use of BitLocker encrypted volumes on the Home Server.

As of Power Pack 2 for Windows Home Server v1, Microsoft support backing up BitLocker encrypted volumes from the client, however I am interested in encrypting the volumes on the server.

If someone were able to steal Nicky’s laptop, for example, they would only have her data and email, whereas if someone were to steal the Home Server, they would have copies of everyone’s emails, backups and data, not to mention any shared data which is saved to the Home Server shared folders.

I am hoping that in the coming days, with more information being released about Home Server 2011, an answer will become clear on this. I personally see no reason for not supporting it, as there is no Drive Extender (DE) for it to cause problems with, and Windows Server 2008 R2, which is the underlying operating system, has BitLocker at its core.
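If it does turn out to be supported, the tooling is already in the underlying OS: the BitLocker feature can be added through Server Manager and managed with manage-bde from an elevated prompt. A minimal sketch, assuming D: is the data volume to be encrypted and that a numerical recovery password protector is acceptable:

    rem requires the BitLocker Drive Encryption feature to be added first (Server Manager, Add Features)
    manage-bde -status
    manage-bde -on D: -RecoveryPassword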

Windows Home Server 2011 (Vail): The Plan

Yesterday I received an email in my Inbox from Microsoft Connect announcing the public release of the Release Candidate of Windows Home Server Vail, which is to be officially named Windows Home Server 2011.

As many had feared, the key feature from Windows Home Server v1, Drive Extender, is missing. Although I shed a tear briefly, I’m not going to write offensive emails to Steve Ballmer or get on a high horse about it like many people in the social networking and blogosphere scenes. This is largely because I am technical enough to understand technologies like RAID, and as a result I have an idea of the direction to take in DE’s absence.

I had previously been considering software RAID in the form of FlexRAID, because it offers byte-level data protection like that offered by DE, but after further thinking sessions I came to the conclusion that Microsoft dropped DE because of issues with software data redundancy. Although potentially more expensive to set up initially, hardware RAID will offer a better level of protection and will also help to improve IOPS, which is crucial given I will be using Green drives to reduce the energy footprint of the server.

Yes, Drive Extender was easy: it made the server friendly and let end-users without technical knowledge provision storage, and under it RAID was neither supported nor recommended. RAID makes the setup more complicated in that you need the knowledge to understand the technical differences between RAID-0, 1, 5, 10 and so forth, the number of disks required for each, and how the different levels affect storage capacity, redundancy and performance.

Revisiting My Home Server Build

Back in December 2010, I posted with some rough specifications and power figures for my ideal Home Server build. On the surface it’s largely the same as before, however I currently have a few considerations:

  • Drive Extender is definitely gone now, so I am dependent on another storage technology, potentially increasing the number of drives required, which has a knock-on effect on the power usage.
  • Still lacking Windows Media Center, and now owning an Xbox 360, I am using VMware Server to virtualise a Windows 7 machine on the Home Server so that the Xbox has an always-on and available Media Center connection. I am in the throes of deciding whether to stick with the Xbox for streaming or use the PS3. If I keep the VMware Server arrangement going, then my problem is that I will require a more powerful CPU, such as an Intel Pentium D, increasing the power requirements.

Since my posting in December, 3TB drives have become available, which has affected the price per gigabyte. As it stands today, for the Western Digital Green series of drive, the pricing per gigabyte for each drive model is as follows (pricing from Overclockers UK as of today):

  • Western Digital Green 1TB – £49.99 / £0.05 per Gigabyte
  • Western Digital Green 2TB – £79.99 / £0.04 per Gigabyte
  • Western Digital Green 3TB – £179.99 / £0.06 per Gigabyte
    *I’ve used base 10, or 1,000 gigabytes per terabyte, for this, as this is how drive manufacturers count, although I wish someone would teach them binary.

Based on these prices, and taking into account RAID requirements, I’m changing my drives from 1TB Green drives to 2TB Green drives. This allows me to stick to only one PCI-Express RAID controller and to reduce the wattage of the box by reducing the number of disks.

How I Work with RAID

The following diagram is something I knocked up in Visio this evening. It shows how the disks will be physically and logically arranged:

WHS2011 Disk Design

So what does all this mean?

As per my original build plan, the motherboard I am using features an on-board SATA-II RAID controller supporting up to four drives and RAID modes 0, 1 and 5. I will also be installing a PCI-Express SATA-II RAID controller, for two reasons. Firstly, my case supports up to ten drives and I need to make sure I have the capacity to make use of those slots. Secondly, the on-board RAID controller will be slow to perform any fancy setups like RAID-5, and it doesn’t support any nested RAID modes.

The controller I will be installing, the Leaf Computers 4-port SATA-II RAID PCI-Express card, is a PCIe x1 interface card which supports RAID modes 0, 1, 5 and, most importantly of all, 10.

In my design, I am going to be using both controllers to serve two different purposes in two different RAID modes.

Drives labelled OS indicate drives used for the OS. Drives labelled DG-A are drives which exist in the first RAID-10 Mirror set, while DG-B indicates drives in the second mirror set.
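To put rough numbers on those groups, here is a quick cmd arithmetic sketch using the manufacturers’ 1000-based gigabytes rather than binary:

    rem OS: two 2TB drives in a RAID-1 mirror, so usable space is a single drive
    set /a OS_GB=2000
    rem Data: four 2TB drives in RAID-10, so half the raw 8000GB, as every stripe member is mirrored
    set /a DATA_GB=4*2000/2
    echo OS volume: %OS_GB% GB, data volume: %DATA_GB% GB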

The Operating System

A functioning operating system is critical to the machine for obvious reasons, so it needs to be protected, but at a limited cost. Once the server is booted and running, the demands placed on the underlying OS volume will be low, so the performance requirements for the OS disks are minimal, and because it’s a server that stays on for most of the day, I’m not concerned about boot times.

For this reason, I’ve decided to use two Western Digital Green 2TB drives in a RAID-1 Mirror attached to the on-board RAID controller.

The Windows Home Server 2011 installation automatically creates a 60GB partition for the operating system and allocates the remaining space to a D: partition which is by default used for your own personal data.

The Data

The data, in the case of the Windows Home Server environment, is the critical piece: shared copies of pictures, music, documents and the digital video library are all stored on the server. These files need to be protected from drive failure to a good degree, and also need to be readily available for streaming and transfer to and from the server. For this reason, I will be using four drives in a RAID-10 configuration attached to the PCI-Express Leaf Computers controller.

This controller offers the ability to use RAID-10, unlike the on-board controller, and will be a much higher-performing controller, which will reduce any bottleneck in the underlying RAID technology. With two disks serving each half of the stripe, the performance should be impressive, and should theoretically outperform the expensive and power-hungry Black edition drives from Western Digital.

The mirroring of each drive in the Stripe provides the data protection.

Once built, the provided Silicon Image management software can be installed on the server to generate alerts when a drive has failed, so that it can be replaced as soon as possible and the data remains protected.

Off-site backups of the data will likely be taken to a USB attached SATA-II drive to offer a physical backup – RAID is not backup and I want to make this clear now.
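A scheduled robocopy mirror is one simple way to take that copy. A sketch only, assuming E: is the USB-attached drive and E:\Offsite is just an illustrative destination; bear in mind that /MIR also deletes files from the destination that no longer exist in the source:

    robocopy D:\ServerFolders E:\Offsite /MIR /R:2 /W:5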

Presenting the Logical Volumes

So I’ve got my 2TB RAID-1 mirror for the OS, and I’ve got my 4TB GPT volume for the RAID-10 array, but how will I actually use it? Without Drive Extender, Microsoft are proposing people use separate disks for each shared folder and bump files between volumes when a volume becomes full.

All of this sounds horribly ineffective and effortful, so instead I am going to make the most of DE’s absence and use NTFS mount points, otherwise known as junctions.

The Windows Home Server 2011 installation will create a C: drive and a D: drive. The C: drive will be 60GB for the operating system, leaving a roughly 1.9TB partition based on the remainder of the disk. On this drive, I will create a folder called MNT and, using the junction support in Server 2008 R2, mount the 4TB GPT volume there instead of assigning it a drive letter, so it becomes a logical extension of the D: drive.
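The built-in mountvol command is one way to do this without a separate junction tool: create an empty folder on D:, then mount the array’s volume into it by its volume GUID so it never needs a drive letter. A sketch only; the GUID below is a placeholder, and running mountvol with no arguments lists the real volume names.

    mkdir D:\MNT\Data
    mountvol D:\MNT\Data \\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\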

The advantages of this are:

  1. No need to manage partitions.
    The underlying disks provide the separation I need, so why complicate things by partitioning the 4TB volume into separate volumes for Photos, Videos, Music and so on? All this will lead to in the long run is under-provisioning of one of the partitions, and I will then spend time growing and shrinking partitions to meet my requirements, causing massive disk fragmentation.
  2. No need to manage drive letters.
    Drive letters are a pain, especially if you are expected to have one for each type of shared folder and need to remember where you put something. Instead, I will have one super-massive D: drive containing everything. I’m sure the thought of this makes a lot of people cringe, but in reality, what does it matter? It is very unlikely that a problem will occur with the partition unless there is an underlying disk problem, and that in turn would affect all of the partitions anyway.

The only flaw in my plan will be the Home Server 2011 installation process. If it detects my 2TB hard disks and decides to initialise them as MBR disks instead of GPT disks, then I will have to restart the installation, drop into WinPE and manually initialise and partition the disks.

I will require my disks to be GPT (GUID Partition Table) and not MBR, as once the NTFS mount point brings the 4TB array under the D: drive, the capacities involved go well beyond the 2TB that MBR permits.
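If it comes to that, diskpart from the WinPE command prompt can do the initialisation. A sketch only, and destructive, so the disk number (1 here is just an example) must be checked against the output of list disk first:

    diskpart
    list disk
    select disk 1
    clean
    convert gpt
    create partition primary
    format fs=ntfs quick
    exit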

Expansion

Based on the current server motherboard specification (the ASUS Intel Atom miniITX board), there is only provision for one PCI-Express card. As a result, all four ports on the existing PCI-Express card will already be in use. The only option for expanding the capacity of the server would be to attach two drives in a mirror to the remaining two on-board ports; however, performance on those drives would be lower, and they would be acting as a separate mirror and not part of the RAID-10 configuration.

The alternative option is to replace the four 2TB drives attached to the PCI-Express card with 3TB drives once their price drops over time; however, although this would grant an additional 2TB of usable space, the cost would be high.

If I decide to use an Intel Pentium D processor instead, then a microATX motherboard would offer two PCI-Express slots and open up the possibility of adding another four disks in a second RAID-10 configuration, allowing anything up to 6TB to be added to the storage capacity of the server depending on the drives installed.

In reality, on our current Home Server we are storing a little over 1TB of data, so this is about giving us the capacity we need for the next five years and doing it right in one move, without having to make ad-hoc and ineffective changes down the line, which I am normally pressed into at home due to budget versus the cost of used hardware.

In the next five years, I will no doubt upgrade my Nikon D40 to a D90, which will result in larger photo file sizes, and our digital libraries will no doubt continue to grow at a significant rate as the kids get older and get into an ever-widening range of media, not to mention the likelihood of them getting their own machines which I will need to back up.

4TB of RAID-10 storage gives us three times the capacity we have currently, with more redundancy, better performance and better energy efficiency – much more so, for that matter.