Posts from 2012

Good Enough for a Network Engineer

In my home currently, I have three main areas of tech: the garage, which hosts my home-built rack with my firewall, switch and home server; the study, where my desktop and our Vonage phone gateway live; and lastly the living room, where the HTPC media center lives.

All of this is interconnected with two Cisco 2950T Layer 2 switches, which are 10/100 switches with a pair of gigabit ports for good measure, and a Cisco Aironet 1100 access point for wireless. Downstairs, I make use of the gigabit ports on the core switch for the home server, which is connected via a dual-port Intel server adapter in a static 2Gbps team to ensure there is sufficient bandwidth for multiple clients accessing media content, leaving everything else to run at 100Mbps.

I’ve long been toying with the idea of a gigabit upgrade for the home, including a new 802.11n access point to lift the wireless speeds from their current 802.11g 54Mbps. Being an enterprise-grade gear geek, I love having Cisco in my home. The performance meets and mostly exceeds home gear on a 100Mbps port-by-port basis and the reliability is amazing (prior to a planned power-down this week to install a module in my UPS, my core switch had over 300 days of uptime), but this all comes at a cost: a financial one and a feature one.

To get the gigabit ports I so crave at the core, I’m looking at either a Catalyst 2960 switch or a Catalyst 3560G switch. The 3560G is preferred partly because it gives me Layer 3 routing on the LAN side, as opposed to doing router-on-a-stick with the firewall to traverse my VLANs, but also because it’s an older model now replaced by the 3750 and 3750v2 switches, making it marginally cheaper (although the 3560 series, including the 3560G, still holds its price incredibly well, purely because it is one of the most commonly deployed enterprise switches). For the access switch upstairs, I’m looking at a Catalyst 2960 Express to let me downsize my access layer port count, as a 24 port switch for my study is crazy, although at the time it served the requirement for LACP port channelling and price. For the wireless, I’m looking at an Aironet 1140 Series.

When you price up the best of the used prices online for this gear, it’s frightening: £400-500 for the 3560G, £400 for the 2960 Express and £150-250 for the Aironet 1140 Series, totalling around £1,150. That is something I simply cannot afford or justify for a four or five user home network, even if feature-rich reliability and stability are critical to me.

After hearing my tales, a network engineer in our office introduced me to a company called TP-Link, whose kit he uses in his own home and rates as good. For a network admin who normally deals in the realms of Cisco, RSA and the other networking and security big boys, granting TP-Link the accolade of being good surely must mean they are worth a look?

TP-Link have a nice range of products, and they actually match, if not slightly exceed, Cisco on feature set when comparing like-for-like models, but best of all is their price. For a cool £300, I can get a brand new, Amazon retail priced TL-SG5428 24 port gigabit switch, a TL-WA801ND 300Mbps 802.11n wireless access point and a TL-SG3210 8 port gigabit desktop switch. For the most part, Amazon prices are actually cheaper than eBay prices for TP-Link kit.

So how do they actually stack up? I’ll start by comparing the switches. TP-Link switches are all fanless, which means the noise from the stack in my study will drop to nil, and the garage will probably be cut by two thirds, as the switch is currently the loudest item there at 41dB for the 2950T. Features I use and rely on, such as MAC port security, QoS mapping for voice and ACLs, all exist in TP-Link land; in fact, TP-Link offer Layer 2 through Layer 4 ACLs on their Layer 2 switches, compared to Cisco, who only give you Layer 2 MAC-based ACLs on their Layer 2 switches. Management options include an IOS-like CLI, web, SNMP and RADIUS, allowing me to manage the switches the same way I do currently. Network features like LACP, port trunking, port mirroring and more are all present on the TP-Link side of the fence too.

For the desktop switch there is actually no feature loss when compared to the rack mount 24 port model. All of the features listed across the two models compare equally which means I won’t suffer for taking a step down to a desktop switch from the current rack mount.

On the wireless front, my current Aironet 1100 access point supports PoE, which I’m using in the form of an inline injector; the TP-Link ships with one, whereas I had to buy my current Cisco injector separately. All the usual wireless access point features exist on the TP-Link access point too, such as multiple SSIDs, VLANs, detachable and replaceable antennas, 802.11d and 802.11i, plus all the management options such as the IOS-like CLI, web, SNMP and RADIUS again.

The feedback from our network engineer has been that the throughput of the switches and their reliability are both top notch, and he’s had no complaints since buying the switch many months ago, nullifying the concern I had there.

The conclusion, then, is that the age-old adage that nobody got fired for buying Cisco may stand true, but it looks as though you might not get fired for buying TP-Link either. Frankly, I was concerned over how you can even design and manufacture a 300Mbps N access point for £35 and a 24 port rack mount gigabit switch for £200, let alone sell them and turn a profit, but the fact that TP-Link can and do, and do it so well, means I’m clearly paying for a badge that my home network doesn’t demand. It also means that my home network can stop suffering the two-generations-old-only mantra that currently prevails. No longer weighing Cisco features against price, where I could only justify buying two or three generation old equipment, I can buy something bang up to date, giving me the gigabit I have so long wanted and needed.

Time will tell; I’m not going to replace everything overnight, instead staggering my upgrades throughout the 2013 calendar, but I’ve got strong optimism for the idea of the switch-over. The best part is that it will be largely free, as the resale value of my old Cisco kit on eBay will cover 99% of the cost of the new kit. Who said there is no such thing as a free lunch?

Storage Architecture for Windows Server 2012 Essentials

Two of the best features in my eyes in Windows Server 2012 Essentials over Windows Home Server 2011 are both related to disk.

RAID Support
Windows Server 2012 Essentials is a grown-up Windows Server, unlike Windows Home Server 2011 which, in an aim to simplify the server-in-the-home idea for consumers, removed the ability to use hardware RAID on the operating system volume. This was a horrible thing for Microsoft to do in my opinion.

Storage Spaces
In a nod to Drive Extender from Windows Home Server (v1), the Windows 6.2 kernel in both Windows 8 and Windows Server 2012 supports Storage Pools and Storage Spaces. These allow you to pool disks together and carve simple, mirrored or parity volumes from a single pool of disks. It’s like RAID on steroids, because you only spend disk on the volumes you want to protect, not all of them.

So, taking these two ideas into consideration, what am I going to do?

Step 1 is to get the operating system off the pair of 2TB disks I have, where one disk holds a 60GB partition for the OS plus a 1.8TB partition, and the second holds a 1.8TB partition mirrored from the first using Windows Disk Management mirroring.

Step 2 is to maximize the utilization of the capacity of my six 2TB disks.

To achieve step 1, I am investing in a pair of SSDs. For Windows Server 2012 Essentials to accept them, they have to be over 160GB, so I am looking at the Intel 520 Series 240GB disks, which are currently reduced on Amazon from £300 to £180. These will be connected to my SATA RAID controller in a RAID 1 mirror and installed in a Lian Li 5.25″ to dual 2.5″ adapter, letting me utilise one of the three 5.25″ bays in my case which I would otherwise never use, and opening up two slots for 3.5″ high capacity disks for future expansion. Needless to say, a pair of Intel 520 Series 240GB disks will give the operating system volume unbelievable IOPS and will allow the server to boot, reboot and access the OS extremely quickly. I’m also going to leave it as one super-sized 240GB partition so that I never have to worry about Windows Updates or software I install on the server forcing me to repartition in the future.

To achieve step 2, it’s simple: connect the six 2TB disks, now completely free to breathe, to any of the on-board or two remaining SATA RAID controller ports, configure them in Windows Server 2012 Essentials as a single six disk Storage Pool, and carve my volumes out of this 12TB raw disk pool using the protection levels I see fit for my needs.

Thanks to the ability to over-provision (or thin-provision, as Microsoft refer to it, incorrectly in my opinion) with Storage Spaces, I can create spaces larger than my current capacity, then add disks, or replace existing 2TB disks with 3TB or 4TB ones as they become available, to extend the live capacity.
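As a rough sketch of how over-provisioning plays out in practice (this is an illustrative model of my own, not the actual Storage Spaces implementation, which allocates in slabs behind the scenes):

```python
# Illustrative model of a thinly/over-provisioned pool. Names and logic
# are mine for illustration; real Storage Spaces tracks all of this itself.

class Pool:
    def __init__(self, disk_sizes_tb):
        self.physical_tb = sum(disk_sizes_tb)  # e.g. six 2TB disks = 12TB raw
        self.provisioned_tb = 0.0              # sum of space sizes as created
        self.used_tb = 0.0                     # data actually written

    def create_space(self, size_tb):
        # Thin provisioning: the size is a promise, not a physical
        # reservation, so it may exceed the pool's current raw capacity.
        self.provisioned_tb += size_tb

    def write(self, tb):
        # Physical capacity only matters once real data lands on disk.
        if self.used_tb + tb > self.physical_tb:
            raise IOError("pool exhausted: add or replace disks first")
        self.used_tb += tb

    def add_disk(self, size_tb):
        # Growing the pool later extends the live capacity; no rebuild needed.
        self.physical_tb += size_tb

pool = Pool([2] * 6)    # six 2TB disks in one pool
pool.create_space(20)   # a 20TB space on 12TB of raw disk is allowed
pool.write(5)           # fine: only 5TB physically consumed so far
pool.add_disk(4)        # adding a 4TB disk later raises the ceiling
```

The point the sketch makes is that the space's provisioned size and the pool's physical capacity are decoupled, and you only have to buy disk when actual usage approaches the physical limit.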

Over time, as I require more disk, there will be one ‘problem’ in that I will have depleted all of my SATA ports. Luckily, my SATA RAID controller supports Port Multipliers, and a cheap and potentially nasty Syba 5-to-1 SATA Port Multiplier for about £45 means I can extend my capability by an extra four ports, which at that point reaches the capacity of the chassis. Power also isn’t an issue, as my Corsair AX750 power supply was selected at the time specifically for its amazing ability to run at peak power efficiency at extremely low consumption levels, and to support up to 12 SATA disks with its modular cabling design.

So there we have it: my design for the Windows Server 2012 Essentials storage architecture. It’s by no means conventional, but then I don’t really think anything about my server build is, with its 4U rack mount configuration packing a build-out that consumes less power than your average light fixture.

I only designed and stood up the Windows Home Server 2011 setup a little over a year ago. I think we all secretly knew that Home Server as a product family was a dying breed and that Microsoft would either kill it off completely or fold it into another product family sooner rather than later to drop the support overheads. Thankfully, it happened sooner, I feel: yes, it means I have to rebuild my setup not that long after it was first built, but it also means I haven’t invested too heavily in customisation or further expansion of the current setup, leaving me playing the corner flag with a legacy product. Luckily now, with Windows Server 2012 Essentials being a core SKU in the Windows Server family, it will be three years until the next major release. Although a Windows Server 2012 R2 release may appear in the middle of the three year release cadence for server operating systems, at least being on the RTM release of the same product should make that migration a hell of a lot easier.

Hardware Compatibility for Windows Server 2012 Essentials

Following on from my spate of posts relating to Windows Server 2012 Essentials, I am working hard to test my configurations in a Hyper-V 3.0 VM on my desktop to ensure that I can migrate to Windows Server 2012 Essentials successfully without any hiccups.

Migrating my data on the current Windows Home Server 2011 is the biggest task, but not the biggest challenge. For me, ensuring that my hardware will work as I need is the biggest challenge because of my extremely bespoke build.

The first item on the agenda is the CPU. The system requirements from TechNet state that a 1.4GHz single core or a 1.3GHz dual core is required. Luckily, I have a 1.6GHz dual core AMD E-350 Hudson processor. I’m a long way from the recommended 3.1GHz multi-core processor, but my primary target is still energy efficiency, and the E-350 achieves exactly that with an 18W TDP. If I find over time that the CPU is my bottleneck, then I will need to consider spending slightly more watts and upgrading to something like a 35W TDP Intel Core i5 Mobile chip, but that would need a new motherboard too, so it would cost a load to upgrade.

Next up is the memory; I currently have 4GB of the stuff. The minimum is 2GB but the recommended is 8GB. I know from current usage that my Windows Home Server 2011 machine uses about 70% of its physical memory, and with Windows Server 2012 being of more modern gravy, designed around lower I/O and more memory (as memory is super cheap these days), I’ve decided to upgrade to 8GB, replacing my 2 x 2GB 1066MHz Corsair Value Select with 2 x 4GB 1600MHz Corsair XMS3. The new memory is faster than my current sticks because, at build time, Corsair didn’t sell Value Select memory above 1066MHz, and because XMS3 memory is designed for gamers and overclockers, features like variable voltage, improved CAS latency and built-in heat spreaders should all help improve overall system performance and stability.

Next up is the network, and this one could be interesting. I wrote a post back in August 2011, when I first built the new home server, about circumventing the fact that the Intel drivers wouldn’t install on Windows Home Server 2011 (based on Windows Server 2008 R2) because I am using one of the older generation PCI-X cards, which were discontinued. The driver physically works in Windows Server 2008 R2, shows as WHQL in Device Manager and all of the ANS features work too, but the .msi blocks it. I’m betting that with the updated Intel driver, designed for Windows and Windows Server 2012, the same hack will work. In Windows Server 2012, I won’t be using the Intel ANS teaming driver to create my 2Gbps SLA team though; instead I will be using the native NIC teaming in Windows Server 2012, which is one of its amazing new features. If that fails, then I will use the onboard Realtek 1Gbps NIC in the short term while I acquire a replacement, more modern PCI-E dual port Intel NIC, which runs for about £40-£60 on eBay these days.

The final and most pivotal part of the build, the one which could ruin it all, is the Leaf Computer JMicron JMB36x based SATA RAID controller. In Windows Server 2012 Essentials, I am re-modelling my storage architecture; this is the primary reason for my move to Windows Server 2012 Essentials, so that I can take advantage of Storage Pools and Storage Spaces. After some debate and discussion with @LupoLoopy at work surrounding SATA IOPS and protection levels for data, we both agree that my current setup of RAID 10 for the data volumes seriously wastes two of my 2TB disks, and I am arguably wasting another two on the OS volume. I will be posting in full later to discuss and expose my storage strategy.

Back to the controller though: using my Windows Server 2012 Essentials Hyper-V 3.0 VM, I installed the driver using the Install Legacy Hardware option in Device Manager, and the latest driver version from the JMicron site installed successfully, without warning, and still bears the WHQL mark even though it is a Windows Server 2008 R2 driver.

Am I happy? Very. With the exception of possibly the Intel NIC if my hack for the .msi restrictions doesn’t work and I need to buy a new one (although secretly, I would like to replace it with a PCIe one at some stage anyway), all of my hardware looks set and happy for Windows Server 2012 Essentials. So much more to do before I can start any work, but progress is progress after all.

Partners on Exchange in Windows Server 2012 Essentials

Reading some of the comments and views on Windows Server 2012 Essentials this evening, it appears that quite a number of partners aren’t very happy with the lack of Exchange as was previously found in Small Business Server (SBS).

I think this is short-sighted of the partners making these comments. If you are a partner, what makes you more money: new deployments or supporting existing ones? I would hazard a guess that it is new deployments. SBS made Exchange easy, really easy, which meant the amount of work to get Exchange configured and working was limited. The hardest part was migrating any existing mail systems into Exchange.

Windows Server 2012 Essentials is designed around feature integration with Office 365. This means that you can offer your customers not only Exchange, but also Lync and SharePoint (yes, I know SharePoint was in SBS too, but it wasn’t the greatest of configurations). What’s more, how available and accessible is a single SBS server versus Office 365? Yep, Office 365 is better. So by giving your customers Windows Server 2012 Essentials and Office 365, are they not getting a better product, with more functionality and most likely a better customer experience, translating into happier customers?

All this leaves you as a partner more time to focus on upsell, selling the customer more varied products or trying to break into new customers or verticals, and less time answering menial support incidents. And let’s not forget that moving to Office 365 isn’t a walk in the park by itself: if a customer is currently using SBS, then their existing messaging environment will likely need updating to support some kind of temporary co-existence while users are migrated, and all of this is professional services work, work that frequently carries a big price tag and high margins.

The moral of this story is that cloud is happening, and I think those partners who embrace it will succeed. Those who oppose it will likely find themselves losing work to people who do embrace it. And for me personally, which sounds better as a job title: Systems Implementation Engineer, or Cloud Solutions Integrator or Cloud Solutions Architect?

Azure Backup for Windows Server 2012 Essentials

Last night, I posted saying that I think Microsoft had missed a trick in not taking advantage of the Windows Azure Cloud Backup features in Windows Server 2012 Essentials, and today it looks like I must eat a slice of humble pie.

After some reading on the subject this evening, it appears that Microsoft are actually incorporating it, just not natively. To access the feature, you need to install a plugin. A blog post on the Small Business Server TechNet Blog details the installation steps to get the plugin installed and working.

Users of Windows Server 2012 Essentials can get a free six-month trial of the service; however, information on pricing is hard to find and understand. There is nothing on the trial signup page which offers an insight into what you will pay beyond the trial. Using the extremely complicated (and for good reason, given its capability and scale) Azure Pricing Calculator gives you a hint as to what you will pay, but I think Microsoft need to provide some confirmation around the storage options.

Storage is offered in two different flavours, Geo Redundant and Local Redundant, with the former seeing your data replicated throughout the Azure global infrastructure and the latter seeing your data replicated only within your geographic region. I can’t seem to find anything that states whether either option is valid for the backup service, or if you must use a particular one. To give it some context, Geo Redundant storage is £7.58 per month for 100GB, while Local Redundant is £5.64 per month for 100GB.

The two storage types may matter depending on your views on the United States and laws such as the Patriot Act. If you are precious about your data (you should be) and don’t want those authorities to be able to view it under law without your consent, which is essentially what the Patriot Act boils down to, then you may want to decide against the Geo Redundant option; after all, Local Redundant still gives you far more availability than your single on-site server. The region that your data is stored in is determined by the country you select during registration, so make sure you set it correctly.

Compare the above prices to those of one of the most popular Windows Home Server cloud backup solutions, Cloudberry, and Azure directly looks good. For the same 100GB of storage, you will pay $9.30 a month for Amazon S3 or $12 a month for Google Cloud Storage, plus a $29.99 licence cost for the Cloudberry product.

The thing to be conscious of is this small catch: retrieving the data. Azure provides free unlimited inbound (upload) traffic, so you pay nothing to upload your backups, but download is priced per gigabyte per billing cycle. If your server were to fail and you needed to pull your 100GB of data back down once it is recovered, then in a single billing period you would pay £6.55 for 95GB (the first 5GB is free); the key thing to remember is that this is a one-time cost, paid if and when the server fails. The price also varies with geography: the price I’ve shown is for US and European egress data; if you live in another location, the price is £10.37 instead, so bear this in mind.
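To put those figures together, here is a quick back-of-the-envelope calculation (the per-gigabyte rates are back-derived from the 2012 prices quoted above, so treat them as illustrative rather than current Azure pricing):

```python
# Back-of-the-envelope Azure backup costs, using the prices quoted above.
EGRESS_RATE_GBP = 6.55 / 95       # US/EU egress: £6.55 for 95 chargeable GB
FREE_EGRESS_GB = 5                # first 5GB of downloads per cycle is free
LOCAL_REDUNDANT_PER_100GB = 5.64  # monthly Local Redundant storage price

def restore_cost_gbp(gb):
    """One-off egress cost to pull a full backup back down from Azure."""
    return max(gb - FREE_EGRESS_GB, 0) * EGRESS_RATE_GBP

def yearly_storage_gbp(gb):
    """A year's storage at the Local Redundant rate."""
    return gb / 100 * LOCAL_REDUNDANT_PER_100GB * 12

print(round(restore_cost_gbp(100), 2))    # ~6.55 for a full 100GB restore
print(round(yearly_storage_gbp(100), 2))  # ~67.68 a year to keep 100GB safe
```

In other words, the recurring storage fee dwarfs the one-off restore fee over the life of the backup, which is the right way round for a disaster-recovery service.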

Looking at this as a home user and not an SMB, I think £5.64 a month is a very small price to pay for peace of mind that all of my family pictures and important documents can be protected to a much higher degree than I can achieve at home with a Mirror Storage Space and an external USB or eSATA disk backing up the server on-site. From the perspective of an SMB, your data is your business, so only you can value what your data is worth, but I would guess a lot. If you are an SMB without the luxury of a full time IT professional or a well managed support agreement with a Microsoft Partner, then I would guess this service could one day prove invaluable.

Windows Server 2012 Essentials Storage Spaces Vs. RAID

In Windows Server 2012 Essentials as with the whole Windows 6.2 kernel family, Storage Spaces and Storage Pools re-invent the concept of Drive Extender from Windows Home Server v1. With several options for resiliency in Storage Pools, I thought I would touch on what bang you will get for your buck with each protection level and compare it to physical RAID levels.

For all the examples, I will be using two 500GB disks, unless the example requires more (such as RAID 5 or RAID 10), in which case I will use the minimum number required to achieve the set. If you are using 1TB, 2TB or greater sized disks, then simply multiply the figures here to work out your gains.

RAID 0 (Stripe – No Resiliency in Disks, Two Disks Required)
1TB Raw / 1TB Usable

RAID 1 (Mirror – Single Disk Resiliency, Two Disks Required)
1TB Raw / 500GB Usable

RAID 5 (Stripe with Parity – Single Disk Resiliency, Three Disks Required)
1.5TB Raw / 1TB Usable

RAID 10 (Mirror of Stripes – One Disk in Either Stripe, or Both Disks in One Stripe, May Fail; Four Disks Required)
2TB Raw / 1TB Usable

Storage Space Simple (Equivalent to RAID 0 – No Resiliency in Disks, One Disk Required)
500GB Raw / 500GB Usable

Storage Space Two Way Mirror (Equivalent to RAID 1 – Single Disk Resiliency, Two Disks Required)
1TB Raw / 500GB Usable

Storage Space Three Way Mirror (Equivalent to RAID 1 with a 2nd Mirror – Two Disk Resiliency, Three Disks Required)
1.5TB Raw / 500GB Usable

Storage Space Parity (Equivalent to RAID 5 – One Disk Resiliency, Three Disks Required)
1.5TB Raw / 1TB Usable
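All of the raw and usable figures above fall out of a simple rule per layout. Here is a quick sketch of those rules (my own approximation, ignoring the small metadata overhead a real pool reserves):

```python
def usable_gb(layout, disks, disk_gb=500):
    """Approximate usable capacity for the layouts listed above."""
    raw = disks * disk_gb
    if layout == "simple":              # RAID 0 / Simple: every byte usable
        return raw
    if layout in ("mirror", "raid10"):  # two copies of every byte
        return raw // 2
    if layout == "three-way-mirror":    # three copies of every byte
        return raw // 3
    if layout == "parity":              # RAID 5: one disk's worth of parity
        return (disks - 1) * disk_gb
    raise ValueError("unknown layout: " + layout)

# Reproducing the list above with 500GB disks (figures in GB):
assert usable_gb("simple", 2) == 1000            # RAID 0 / Simple space
assert usable_gb("mirror", 2) == 500             # RAID 1 / two-way mirror
assert usable_gb("parity", 3) == 1000            # RAID 5 / Parity space
assert usable_gb("raid10", 4) == 1000            # RAID 10
assert usable_gb("three-way-mirror", 3) == 500   # three-way mirror
```

To scale to bigger disks, change `disk_gb`, exactly as the multiplication note above suggests.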

The thing to be clear on with Storage Pools and Storage Spaces versus traditional RAID is that RAID consumes the entire disk, obscures it from the operating system and limits you to the capacity of the underlying disk subsystem. This makes adding new disks to an existing RAID set and extending its capacity challenging, unless you are using RAID 5, whereby you can simply add a disk and extend the capacity. Storage Pools and Storage Spaces are different: the Pool amalgamates the capacity of the underlying disks, then Spaces overlay the pool to provide the availability. This allows you to do clever things like use three disks in a single Pool to provide both a Two Way Mirror protecting read/write files such as documents and a Parity space protecting read-only workloads such as video or music files, maximising the yield from your disk investment. With RAID, to achieve these separate protection levels, you would need five disks instead of three.

I think the only challenge with Storage Pools and Storage Spaces is going to be calculating the capacity requirements and optimising the use of the disks. In my scenario I have six 2TB disks, and deciding what levels to protect the different content types at, and whether to split each workload type into a dedicated Storage Space or share Spaces between workloads, is interesting: I want my content protected as effectively as I need it, but at the same time, as a consumer, I can’t afford to blow £150-£200 on new disks all the time, so I need to maximise my investments.

The core advantage of Storage Pools and Storage Spaces over RAID, for me, is that they allow fine-grained control of your disks, making the most out of them; thin provisioning (over-provisioning, as it should really be called) lets me design the disks for future expansion ahead of time; and they let me add disks and expand pools online without complicated RAID array configuration changes and scary thoughts of migrating RAID levels (if you even have a controller which supports such a thing).

I’ll be doing another post in the coming days on my options for Storage Pools and Storage Spaces, and where I am leaning and why.

Windows Server 2012 Essentials Initial Admin Thoughts

I spend my days working with Windows Server and, increasingly, Windows Server 2012. Whilst I may not know everything there is to know (and who does, after all), I like to think I know quite a bit on the subject and therefore that my understanding of what’s good and proper is generally sound. Once the installation of Windows Server 2012 Essentials completed, I drilled through some of the back-end interfaces to dig up how it works and is strung together, and these are my opinions, as an administrator, based on those views.

Active Directory Domain Services (ADDS)

As we know, Windows Server 2012 Essentials, unlike Windows Home Server 2011, creates a domain. It does this with the greatest of ease for the end-user driving the install, but with that ease you lose control, as is evident here.

The domain is created with a Windows Server 2012 Domain and Forest Functional Level, which is good; however, the Active Directory Recycle Bin feature, added in Windows Server 2008 R2 ADDS, is disabled. I think it should be enabled, to help out people who accidentally delete user or computer accounts.

The domain is created with a .local domain suffix, which for me is not nice, as .local suffixes can end up causing problems depending on what you are trying to do with the domain environment. If you read some of the literature for Office 365, they don’t support federation using ADFS with .local domains.

The case sensitivity of the installer has big implications for the domain names created. I personally like to see a lowercase DNS domain name (FQDN) with an uppercase pre-Windows 2000 domain name (NetBIOS), but the installer uses the same name for both. As per my previous post on installing Windows Server 2012 Essentials, whatever you type in the Internal Domain Name text field will be used for both, so be careful with that. You can change the pre-Windows 2000 domain name using Active Directory Users and Computers (ADUC) or the PowerShell cmdlets, but whether this will have implications for the Dashboard and other Essentials functionality is not clear without testing.

When new users and computers are added or connected to the domain using the Dashboard and the client computer connector software, the new objects are placed in the Users and Computers containers respectively. I tried using redirusr and redircmp to redirect new object creation to an OU, but this didn’t work and everything still lands in the containers. Manually moving the objects later seems to cause no issues, but I think it’s very bad that the installer doesn’t at least create initial OUs for these objects, as containers can’t have GPOs linked to them.

In Active Directory Sites and Services, no IP Subnets are configured to link to the site and the site is left with the standard name of Default-First-Site-Name. I don’t see any problems in renaming this and adding the subnets.

Domain Name System (DNS)

The DNS role is installed as a requirement for ADDS. The installation is basic, very basic. One Forward Lookup Zone is created for the DNS domain name specified in the installer, but no Reverse Lookup Zone. No Forwarders are configured, so all recursive lookups will hit the Root Hints servers unless you configure the Essentials server to use the ISP router as its DNS server, which brings me to the next point, linked to DHCP: clients will normally be receiving DHCP leases from a self-bought or ISP router, which will configure the clients with itself as their sole DNS server.

Unless the connector client does something very nasty like configuring a static DNS server on the NIC used to reach the Essentials server, how will a client be able to resolve DNS records on the server when it is relying on the records from the router?

Lastly on DNS, Scavenging is disabled, so if you do use DHCP and have your clients leasing addresses directly from the Essentials server (which I would recommend), stale records won’t get cleaned up.

Certificate Authority (CA)

The installer configures an Enterprise Root CA on the server, which in this instance is an online root and issuing CA. Anyone who knows PKI knows that an online root CA is bad news. I know it’s the only option, as you can’t expect people to deploy two servers, one to remain powered off for its whole life as an offline root CA, but that doesn’t stop it from being horrid.

The most annoying thing here is the name the CA is given: [DomainName]-[ServerName]-CA. This is totally unfriendly and looks ghastly in any of your certificates. The CA also isn’t configured to grant the account you specify as the administrator during the installer the KRA or DRA roles, so hope that nobody in your house or office tries to be clever and EFS-encrypts their documents before losing the private key needed to open them.

Network Access Protection (NAP)

This role is installed to assign policy for the VPN and Remote Web Access. The administrative console for it is not installed, keeping you blind to its configuration, but you can easily install it using Server Manager by adding the RSAT feature for NAP.

Remote Desktop Services (RDS) Gateway

This component is used for the Remote Web Access. As with NAP, the console is not installed, keeping you in the dark, but you can again install it using Server Manager by adding the RSAT feature for RDS Gateway Tools.


Other random bits and pieces I noticed whilst poking around were as follows:

  • Memory usage for the base install is 1.4GB, and CPU usage while idle was 4% on my Hyper-V 3.0 VM on my Core i3 desktop PC. It will be interesting to see how my physical AMD E-350 Zacate home server processor handles it, or how the processor in the HP MicroServer would fare.
  • No Group Policy Objects are configured aside from the two default domain policies. Do not rename either of the default policies, as options in the Dashboard update the configuration of these policies, and if the Dashboard looks them up by name rather than GUID, you will hit problems.
  • The Server Backup feature within the Dashboard relies on a dedicated, assigned local disk. There is no option for making use of Windows Azure Cloud Backup, which is now supported in the Windows Server 2012 iteration of Windows Server Backup. I think Microsoft are missing a trick here, as third parties such as Cloudberry are already cashing in on the cloud backup market a la Windows Home Server 2011.
  • Deleting any of the default server shares, such as Recorded TV or Company (if you aren’t a company and you aren’t using Media Center for live TV archiving to the Essentials server), causes warnings of missing folders in the Dashboard and Critical status alerts in the alert panel. There is a workaround for this courtesy of Philip Churchill at

Windows Server 2012 Essentials Installation Screenshots

I took an hour out today to do an installation of Windows Server 2012 Essentials inside a Hyper-V 3.0 VM so that I could familiarise myself with it a little before I consider porting my existing Windows Home Server 2011 install over. I’m not a Windows Server 2012 virgin, as I’ve been working with it for a while in my capacity at work, so I was primarily interested in how the experience of the Essentials edition compares with the Standard and Datacenter editions for enterprise.

Before you begin anything, it’s worth checking the system requirements at The biggest point here is a minimum 160GB disk for the operating system installation, which is partitioned into a 60GB operating system volume and a 100GB data volume. This is a bummer for anyone who has decided to run their OS on an SSD, as one of the most common and affordable SSD sizes around is 128GB. I think Microsoft should have lowered the disk requirement to cater for the 128GB SSD market, but that’s just my opinion, as the majority of people will likely be using 1TB or greater disks in their builds to get the storage capacity and density.

After being asked the usual language questions and whether you want to modify the disk partition layout, the installation completes pretty quickly, as is the way with new Windows releases, and the Essentials Setup Wizard commences.

After the updating and preparing your server phase, the server will reboot twice. One of these is almost certainly to bring the Active Directory Domain Services role online, but I’m not sure what causes the other. Quick observers will also notice that, very briefly, the server logs on automatically as the Administrator account and displays the Modern UI Start Menu before the Essentials Setup Wizard resumes. Once complete, you will see a final screen of the wizard, hopefully with a nice green tick stating that the installation is complete and the server is ready to be used. The URL for connecting clients and the usernames you specified are confirmed here too.

It’s worth pointing out that at the phase where you are asked to provide a username, you cannot use the username Administrator. It appears that Windows Server 2012 Essentials keeps this one up its sleeve for its own use, and you aren’t told the password for it at any stage. Once the installation completed, I took a quick dive through all of the screens in the Windows Server 2012 Essentials Dashboard to see which options are available and configured by default. These are all shown in the image gallery below.


Xbox Music Availability for Xbox Live Gold Users

Here’s an interesting something I found out today courtesy of a tweet from @WithinRafael.

If you are an Xbox Live Gold subscriber, then from a Windows 8 desktop, laptop or slate (or whatever your tipple) and from your Xbox, you can access free streaming from the Xbox Music service.

You don’t get access to the service from Windows Phone or the ability to download music for offline access for free; you will require an Xbox Music Pass for those features.

Microsoft don’t exactly seem to be ‘sharing’ this information, as the details at hardly scream and shout free stuff.

Get yourself a Nokia Lumia for free Nokia Mix Radio on Windows Phone, add the free streaming on your full-fat Windows device, and you’re set for free music, which I like very much, typing here on my Windows 8 laptop listening to some free David Guetta.

Remote Desktop Protocol 8

Just a real quick post to say that I noticed my desktop PC at home running Windows 7 had an optional update listed for Remote Desktop Protocol 8, the version included in Windows 8 and Windows Server 2012 natively.

The update is available for Windows 7 and Windows Server 2008 R2 with SP1.

The KB article for the changes and improvements in this protocol version is available at

The key features appear to be support for VoIP applications via RemoteFX, improved SSO for RDS Web Access, and RemoteApp reconnection. It’s also worth noting that shadowing (remote control) of RDS sessions and Aero Glass remoting are now deprecated, so if you are using shared session virtualisation as a VDI infrastructure, you might want to think about testing this update first if your users like their Desktop Experience RDS sessions.