Posts from April 2014

Microsoft Azure Billing Alert Service

Last week, I dropped a post about how to enable and utilize the New Service Tiers for Azure SQL Databases. In this post, I’d like to show you another preview feature available in Microsoft Azure, the Billing Alert Service. Whilst I see this service being of most interest to consumers and small enterprises consuming Microsoft Azure services, that isn’t to say that a cost-conscious large organisation couldn’t benefit from it too and, best of all, it’s totally free to enable and consume.

Where I see value in this feature is in tracking the cost of your subscriptions. Whilst it’s possible to activate a spending limit on a subscription, this isn’t without its own issues. Once a spending limit is imposed and reached, all services in your subscription are stopped and terminated. My blog, for example, went offline for 24 hours last month because I’d hit the spending limit I’d imposed. I’ve since removed this limit and instead use the Billing Alert Service to help me track my spending and curb excessive usage when I get near the amount I’m happy to spend.

It’s important to note that this is a preview feature. This doesn’t mean that the service doesn’t work but it does mean that Microsoft could make changes to the service or pull it entirely at some point so just bear this in mind.

Enabling the Billing Alert Service

Enabling the Billing Alert Service is easy from the Microsoft Azure Account Portal, which you can access at https://account.windowsazure.com by logging in with the account used to control your subscription. From here, select the Preview Features link in the top navigation to access a list of features which are available for you to try in preview.

Azure Portal Preview Features

Scrolling through, somewhere near the bottom of the list you will find the Billing Alert Service. Hit the Try It Now button to activate the feature.

Billing Alert Service

Once you’ve selected the button, you will be prompted to choose which subscription to activate the feature for. If you have more than one subscription, select the appropriate one and click the Tick button to complete the operation, which I found took some time.

Adding Billing Alert Preview Feature

Once activated, you will see the status of the feature reported below the Try It Now button as You Are Active. In the screenshot below, you can see I am currently active for both the Billing Alert Service and the New Service Tiers for SQL Databases which I covered in the previous post on Microsoft Azure preview features.

Billing Alert Service Active

With the feature now activated, we can use it to set up some billing alerts. Unlike most other preview features in Microsoft Azure, the Billing Alert Service is accessed through the Account Portal and not the Management Portal. Click the Subscriptions link in the top bar to access a list of your subscriptions.

Configuring the Billing Alert Service

Here on the Subscriptions Overview page, the statistics for the subscription, including billing amounts and usage consumption, are shown, and there is now a new Alerts Preview link just below the main title.

Azure Subscription Overview

Accessing the feature for the first time, we can see that we have no billing alerts configured and there is a link to add a new alert. The yellow information bar tells us that we can create up to five billing alerts, which is probably going to be sufficient for most people. I’m going to be creating two for my subscription: one for when I approach my preferred monthly spending limit so that I can calm things down a little, and one for when I hit that limit so that I can shut down anything that can wait or that I no longer need.

Billing Alert No Alerts Configured

Click the Add Alert button to get started creating a new alert.

Billing Alert Configure New Alert

On the new alert page, there aren’t actually many options or settings to configure. First, you need to set a title for your alert. This will appear in the email which is sent out, so make it factual so that you can tell quickly from the email what the alert is warning you about.

Next, we can set what type of alert to send and there are currently two types. The default, which is what I am using, is Billing Total, which tracks the amount of spend. The second type is Monetary Credits, which changes the context to the amount remaining. If you want to track what you are spending in money, use the first option. If you are using a subscription with free spending credits, such as MSDN or the like, you may wish to use the latter to track how much of your free entitlement you have left.

Once you’ve set the alert type, set the value to trigger on. In this example, I’m sending out an alert once I’ve spent £75 in a billing month.

Lastly, we configure the alert recipients. In my case, this is my single personal email address, but there is nothing to stop you adding a distribution list address here. You could, for example, configure a distribution list in Microsoft Exchange or Office 365 with everyone who has a vested interest in your Microsoft Azure subscription as members so that they all receive the alerts.

Once you’ve added the address or addresses for your alert recipients, select the Save button to save the alert definition.

I took the liberty of creating my second alert off-screen, but here is how the console looks with the two alerts added.

Billing Alert Two Alerts Configured

With alerts created, you will receive a welcome email confirming that the alert was set up, which allows you to verify the email or distribution list address you used. There is a delete button on the right of the interface allowing you to remove alerts as you wish at a later stage.

I hope that this has been helpful for those of you who want to keep tabs on your Microsoft Azure spend.

Microsoft Ordered to Release Dublin Server Data

An article went live on BBC News this morning (Microsoft ‘must release’ data held on Dublin server) which I hadn’t seen initially and was brought to my attention. The subject of the article is a US court case where a judge has ruled that Microsoft must hand over email records for a mailbox held on one of its servers in Dublin, otherwise known to Microsoft Azure fans as the North Europe region.

Data sovereignty has always been an issue plaguing people considering a move to consume public cloud services and this case looks set to throw the whole debate up into the air once more.

The US government argues that it should be allowed to access the data on terms similar to those of a subpoena, which grants the right to request documents held in any country by the person subpoenaed; however, Microsoft contests this and comments from the EU Commission agree with Microsoft. I’m in the camp of Microsoft, the EU and the consumers among us: if my data resides outside of US jurisdiction, the US shouldn’t be able to just walk in and take a copy. In all honesty, they probably already have a copy thanks to the NSA but unfortunately for them, that wouldn’t be admissible in court as evidence. Anybody else watch The Good Wife recently?

I really hope Microsoft battle this one through and that the EU member states back Microsoft in any appeals they make. The record needs to be set straight with the US on the subject of data sovereignty. This is definitely going to be a hot topic to watch out for.

New Service Tiers for Azure SQL Databases

Last night I received an email from the Microsoft Azure team with an announcement of a change to the functionality of Azure SQL Databases. At present, there are two service tiers available for Azure SQL Databases, Web and Business, each with its own size limits. As anyone who has read my guide on the TechNet Gallery entitled Configuring a SQL Azure Sync Group will know, I’m quite into these DBaaS offerings in Azure. Yesterday, Microsoft announced the preview release of three new service tiers for the Azure SQL Databases service.

What’s in the Announcement

The three new tiers announced are named Basic, Standard and Premium. In twelve months’ time, Microsoft will be retiring the current Web and Business tiers in favour of these new tiers currently in preview.

  • Basic: Designed for applications with a light transactional workload. Performance objectives for Basic provide a predictable, hourly transaction rate.
  • Standard: Standard is the go-to option for getting started with cloud-designed business applications. It offers mid-level performance and business continuity features. Performance objectives for Standard deliver predictable, per-minute transaction rates.
  • Premium: Designed for mission-critical databases, Premium offers the highest performance levels and access to advanced business continuity features. Performance objectives for Premium deliver predictable, per-second transaction rates.

In addition to the performance objectives, the new tiers bring revised scaling limits, an uptime SLA, and new backup, recovery and disaster recovery options.

Basic will have a 2GB limit, an increase from the 1GB limit in the current Web tier. Standard will have a 250GB limit whilst Premium will have a 500GB limit. Restore points for recovering the databases will be available for 24 hours on Basic, 7 days on Standard and 35 days on Premium. All the tiers come with a 99.95% uptime SLA.

New Tiers Pricing

The good news is that if you jump on the bandwagon early, you get reduced pricing during the preview. In an example scenario, using the North Europe (Dublin) datacentre and billing in Pounds (GBP) on a Pay as You Go tariff, a 100MB database on the Web and Business edition is £3.179 per month. A Basic database of the same size is £1.60 per month, Standard is up to £64 per month according to usage and Premium varies wildly between £296 and £2,368 according to usage. It’s interesting to note the high-end pricing on Premium which, dependent on use, can actually work out more expensive than running a SQL Server IaaS virtual machine in Microsoft Azure, but that’s the price you pay for the design simplicity of DBaaS over SQL Server IaaS.

Using my blog here at richardjgreen.net as an example, where I currently use Web databases, moving from Web to Basic under the new tiers would give me a monthly decrease in cost of about 50%.
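
To put a rough number on that, here is a quick back-of-the-envelope calculation in Python using the example prices quoted above; it’s only a sketch, as real bills vary with actual usage.

    # Rough saving from moving a small database from Web to Basic, using the
    # example Pay as You Go prices quoted above (GBP per month, 100MB database).
    web_per_month = 3.179
    basic_per_month = 1.60

    saving = web_per_month - basic_per_month
    saving_pct = 100 * saving / web_per_month

    print("Monthly saving: £{:.2f} ({:.0f}%)".format(saving, saving_pct))
    # Monthly saving: £1.58 (50%)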

What Will Happen to Web and Business

All we know at the moment is that these two legacy tiers will be phased out in twelve months’ time. There doesn’t seem to be any indication as to how databases will be transitioned from the existing Web and Business tiers over to the new tiers, but I would hazard a guess that Web databases will become Basic and Business databases will become Standard.

This of course assumes that there is compatibility between the current tiers and the new ones and that databases will be transitioned seamlessly. I think it would be a bad PR exercise for Microsoft if existing databases were dropped instead of transitioned over to the new tiers, as that would put extra work onto customers already consuming these services.

Accessing the Preview Tiers

In order to access the preview tiers, log in to your Microsoft Azure Account Portal, the production portal and not the new Preview Portal. You can access this part of the Azure portal at https://account.windowsazure.com if you haven’t accessed it before.

From here, click the Preview Features link in the top navigation.

Azure Portal Preview Features

From the Preview Features page, scroll down until you can see the New Service Tiers for SQL Databases option.

SQL New Tiers Try Now

Click the Try It Now button alongside the preview feature entry.

SQL New Tiers Add Feature

You will be presented with a dialog to select which subscription you wish to enable that feature for. I only have one subscription so I only have a single selection in the drop down. Click the tick button in the bottom right once you have the correct subscription selected. You will be taken back to the previous page once it’s done and you will be sent a welcome email for the preview.

SQL New Tiers Active

The preview features page in the portal will update to show a You Are Active caption under the New Service Tiers for SQL Databases button to confirm that you are participating in this preview service.

With this enabled, we can head over to the Management Portal using either the Portal link in the upper right or by navigating to https://manage.windowsazure.com to try out the feature.

SQL New Database Custom Create

From the Management Portal in Microsoft Azure, I have clicked into SQL Databases and selected the New Custom Create option. As you can see in the new database wizard, in addition to the current Web and Business tiers, we can now also select from the three preview tiers, Basic, Standard and Premium.
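
For anyone who prefers scripting to the portal wizard, the tier can also be specified at creation time with T-SQL against the logical server’s master database. Below is a minimal sketch using pyodbc from Python; the server name and credentials are placeholders and the EDITION, SERVICE_OBJECTIVE and MAXSIZE values are my assumptions based on the preview documentation, so verify them against your own subscription before relying on this.

    import pyodbc

    # Connect to the master database of the logical server (placeholder values).
    # CREATE DATABASE cannot run inside a transaction, hence autocommit=True.
    conn = pyodbc.connect(
        "DRIVER={SQL Server Native Client 11.0};"
        "SERVER=tcp:myserver.database.windows.net,1433;"
        "DATABASE=master;UID=myuser@myserver;PWD=mypassword;Encrypt=yes;",
        autocommit=True,
    )

    # The EDITION, SERVICE_OBJECTIVE and MAXSIZE options here are assumptions
    # taken from the preview documentation for the new Basic tier.
    conn.execute(
        "CREATE DATABASE [TestBasicDb] "
        "(EDITION = 'basic', SERVICE_OBJECTIVE = 'basic', MAXSIZE = 2 GB)"
    )
    conn.close()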

SQL Database Features Support

The current crop of SQL Databases support the Automatic Backup and Sync features. I haven’t had a chance to explore the support for these with the new tiers yet but I’ll be back soon with just that information. I will be interested to find this out for myself as transitioning from Web to Basic would save me on my monthly Azure bills but if Sync isn’t available in this tier then I’m probably going to be paying more to use Standard.

Configuring a Microsoft Azure CDN TechNet Guide

Last month, I published the first of a two-part guide on TechNet entitled Configuring a SQL Azure Sync Group which demonstrated the steps for configuring two SQL Azure databases to replicate using SQL Azure Sync. I’m still working away on the second part of the guide as promised; however, to keep you all as excited about Microsoft Azure as I am in the meantime, I’ve just published a guide entitled Configuring a Microsoft Azure CDN on the TechNet Gallery.

This guide walks through the steps to prepare a BLOB Storage Account for distribution to a CDN and how to activate and use the CDN feature in Microsoft Azure.
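
As a rough idea of what the storage preparation looks like in code, here is a minimal sketch using the Python azure SDK of the time to create a container with public blob access (which the CDN needs in order to serve the content) and upload a file into it. The account name, key, container name and even the exact method names are assumptions on my part, so treat the guide itself as the authoritative walkthrough.

    from azure.storage import BlobService  # legacy "azure" Python SDK

    # Placeholder storage account details; the CDN can only serve blobs which
    # are publicly readable, so the container is created with public blob access.
    blob_service = BlobService(account_name="mystorageaccount",
                               account_key="<storage-account-key>")

    blob_service.create_container("cdn-content", x_ms_blob_public_access="blob")
    blob_service.put_block_blob_from_path("cdn-content", "logo.png", "logo.png")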

The guide is available at http://gallery.technet.microsoft.com/Configuring-a-Azure-CDN-05c1f68a for download right now.

I hope you enjoy this latest release and if you have any questions or comments then please feel free to get in touch.

Microsoft Azure Portal Preview

Sometimes life can be too busy for its own good. The night that Microsoft unveiled the Microsoft Azure Portal Preview on Day 2 of Build, I grabbed a load of screenshots and took myself on a tour of the portal to share with you all, but I got sidetracked with Project Home Lab so haven’t been able to write it up and post them for you until now. So here it is.

Azure Preview Portal Home

The first impression when you log in to the portal at the new address of https://portal.azure.com (which, by the way, is loads easier to remember than https://manage.windowsazure.com) is that it takes a while to get going. You are initially shown an animation which looks like a slowed down version of a craft in Star Wars going to hyperdrive. Once this is done, the portal is very nice and responsive. I think we’re probably seeing some upfront caching or the like going on here to improve performance in the portal as a whole.

The home screen for the portal is really nice. You don’t initially see a list of all of your services in Azure as you do with the current version of the portal but that’s only two clicks away. What I do really like is the service status dashboard and the billing information against a spending limit visible directly on the home screen. How much you’re spending and whether the service is up or not are probably the two most important things to consider with cloud services so this is really great to see front and centre here.

Clicking the service status map brings up a list of all of the Microsoft Azure services and then drilling into one of them shows you information about current issues, outages and future planned maintenance.

Azure Preview Portal Service Alerts

To get back to the home screen from here, you can either click the Home button in the left navigation or you can close the individual panes which have been opened. It’s interesting that the portal is a sideways scrolling site, a break from the normal up and down scrolling. Microsoft refers to these side scrolling panes as journeys where you take a journey into a feature or Azure service.

From the home page once more, we can click the billing information tile to get more detailed information. From here, we can see the current status of your bill, the remaining balance on any spending limits that you may have in place and also a cost breakdown of where you are spending all of your money. For me, the major expense is the Azure Backup service which I use to back up my home Windows Server 2012 R2 Essentials server using the Microsoft Azure Backup Agent, but you can also see my costs for SQL, Storage and Web Sites associated with running this site.

Azure Preview Portal Billing

On the home page we also have a What’s New tile. This is really nice as it provides a central and authoritative source as to what is new in Microsoft Azure. There are constantly new features and services being introduced and, for enterprises especially, it’s good to know when these transition from Preview to fully supported; you can get all of this information from the What’s New journey.

Azure Preview Portal Whats New

With the journeys on the home page explored, it’s time to actually look at our services running in Microsoft Azure. Clicking the Browse link in the left navigation gives us the option of what we want to browse, and clicking the Everything link shows us the lot, which we can then use to drill into a particular area. Below, for example, I’ve drilled into the SQL databases service to view the two databases I have running in SQL Azure.

Azure Preview Portal Browse SQL

As the whole portal is a preview right now, there isn’t access to everything that you can get from the existing Microsoft Azure portal, so expect some bits of data or controls to be missing while they bring this portal up to full functionality and then, presumably, deprecate the old portal in favour of this one.

Exploring the Web Sites journey is interesting as you can really see some of the work Microsoft have been doing on Web Sites in this new portal. It is clear that Microsoft are trying to take Web Sites down an avenue which attracts conventional web hosters who would historically have gone to companies like GoDaddy or Namecheap, to name but two of the suppliers out there.

Azure Preview Portal Browse Web Sites

Web Sites looks as we would envision, a list of sites we have running and available in Microsoft Azure right now, but drilling into the Web Hosting Plans area shows us this new view which hints at the hoster-targeted approach.

Azure Preview Portal Browse Web Hosting Plans

First up, we see a list of available Web Hosting Plans. The plans shown will be those for which you have Web Sites operational right now; for me, I have one site in Free mode which is a development project and the blog which is running in Basic with a Small instance. Drilling into one of these Web Hosting Plans shows us more information about that tier.

Initially, we see summary data for the tier, such as the names of the sites running on it and an overview of the features available, but clicking the Pricing Tier tile shows us a complete breakdown of all of the features and compares it against all of the other tiers, making it really easy to decide what features you need access to and how that correlates to a tier.

Azure Preview Portal Browse Web Hosting Plans Tier

Similarly, clicking the Quotas tile shows us a detailed view of how we are consuming resources in that tier. This looks a little bit buggy to me right now, however, as for my Basic Small instance it is showing 1288% memory usage, yet in the table view below the chart I’m shown a maximum of 59% and an average of 56%.

Azure Preview Portal Browse Web Hosting Plans Metrics

That’s covered exploring our existing services for now, but what about creating new services? There is a green New button in the bottom left which launches the Microsoft Azure gallery where we can provision new services. Unlike the current Microsoft Azure portal, where the core Azure services are available from the New link and third-party solutions such as pre-packaged WordPress sites or registration for the Bing Maps API were under the Azure Gallery, everything is now under one roof.

Azure Preview Portal Gallery

An awful lot of things in this gallery right now have a little blue arrow in the corner which indicates that access to this feature or service is coming soon and, as in the screenshot above, you will see that even existing Microsoft Azure services such as a Windows Server virtual machine are coming soon. This is obviously there to allow Microsoft to get some more of the under-the-covers functionality in the new portal online, and I’d expect this gallery to change very frequently as the existing Microsoft Azure services become available for creation in the new preview portal.

If you want to create something which is marked as coming soon, you will need to flip back over to the existing portal; however, there is a good chance that you will still be able to manage that service element in the new portal should you choose.

Overall, I love the look of the new portal and I can’t wait for everything to come fully online so that we can use this portal for everything instead of having to move back and forth between new and old during the transition period. It’s a really nice looking and feeling site and I think it adds a fresh new air to Microsoft Azure which will hopefully bring in some more customers for Microsoft and make the platform even more successful than it already is.

Windows XP End of Support

Yesterday was crunch day for many people out there still running Windows XP as Microsoft support for the aged operating system ended. The day was also significant for being Patch Tuesday, the usual monthly release cycle of Windows Updates across the Microsoft operating system and product lines, but for Windows XP, this is supposedly the last.

Some customers have already signed multi-million pound deals to continue getting support for Windows XP beyond this date, such as the UK government which agreed a £5.5 million deal with Microsoft to continue to receive support (http://www.telegraph.co.uk/technology/microsoft/10741243/Government-pays-Microsoft-5.5m-to-extend-Windows-XP-support.html), but this only gives them an extra 12 months before the support ends once more. I think it’s a huge shame and a missed opportunity that people have left the Windows XP support issue so late in the day that it’s now costing them sums of money like this.

I work in IT and I’m a big evangelist for the latest and greatest from Microsoft so I’ve got a hugely biased view on the Windows XP support issue but this isn’t something that Microsoft have pulled out of the bag without notice. Microsoft have been warning people for quite some time that XP support would end and for an operating system first released in 2001, it’s had a fantastic run of 13 years but times have to move on as holding onto the past only hinders you long term.

You can see for yourself when Microsoft will be retiring support for applications and operating systems and the transition between phases of the support lifecycle at the Microsoft Support Lifecycle Index at http://support.microsoft.com/gp/lifeselectindex.

Windows 7

Windows 7 is a great mainstay operating system and 99% of applications currently running on Windows XP won’t have an issue, so moving to Windows 7 not only keeps you in support but will improve the effectiveness of your employees thanks to the improvements and usability gains in Windows 7 over XP, not to mention the ability to support a fuller and richer set of hardware features and capabilities: 64-bit anyone? Windows 7 has extended support available until January 2020 which gives you another 6 years before you need to worry about the problem. Windows 7 also has a pretty similar look and feel to Windows XP, which means the operating system isn’t a culture shock to users.

Windows 8

Windows 8 has improved a lot since its initial release with Windows 8.1 and most recently with the Windows 8.1 Update 1, not that I personally had a problem with it prior to these update releases, but we know that others did for certain. Sure, there are going to be application compatibility issues with applications coming forwards from Windows XP to Windows 8.1, but that’s to be expected when you try to make a 13 year technology jump in one hit; unless applications are making specific calls into hooks in the operating system, there still shouldn’t be any major issues aside from perhaps the browser.

The user interface and experience is going to be daunting for some people, sure, but Microsoft are aiming to quash this with more and more updates to Windows 8.1 to improve keyboard and mouse control for classic desktop users and, actually, the majority of people will love it once they become accustomed to it.

I moved my mum over to Windows 8 and later Windows 8.1 sometime last year. She works for a government sector group in the UK and is one of these stuck-on-Windows-XP-and-Office-2003 people by day. She took to Windows 8.1 like a duck to water and loves it, and that’s on a conventional laptop, not even a touch screen device to really get the most out of it.

Internet Explorer

One of the biggest hang ups for Windows XP that I see is Internet Explorer. As sad as I find it, both as an IT Pro and as someone who tries to write code for websites, people still use Internet Explorer 6, 7 or 8 because some enterprise applications were designed for the ways that they uniquely rendered pages, and moving up to Internet Explorer 11 seems like an insurmountable mountain. Old versions of Internet Explorer not only potentially harm the user experience because of limited or no support for modern Internet standards, but also harm security because the older browsers can be more susceptible to attacks through exploits which are often protected against either in more modern software or even at a hardware level thanks to improvements in technologies like hardware-enforced Data Execution Prevention (DEP).

I’m aware of one organisation who is deploying Google Chrome to allow them to use a new HTML5 web application instead of upgrading from Internet Explorer 8.

Enterprise Mode in Internet Explorer 11 with the Windows 8.1 Update 1 release is designed to try and deal with this by allowing Internet Explorer to render pages in a manner consistent with older versions of Internet Explorer and we can control all of these settings as an administrator with Group Policy.

Group Policy Enterprise Mode
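
If you want to verify from a script that the policy has actually landed on a machine, the sketch below reads the registry value which, as far as I understand it, the Enterprise Mode site list policy writes to; the key path is an assumption on my part, so check it against your own Group Policy results.

    import winreg

    # Assumed location of the Enterprise Mode site list policy value; verify
    # the path in your own environment before depending on this check.
    KEY = r"SOFTWARE\Policies\Microsoft\Internet Explorer\Main\EnterpriseMode"

    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
            site_list, _ = winreg.QueryValueEx(key, "SiteList")
            print("Enterprise Mode site list:", site_list)
    except FileNotFoundError:
        print("Enterprise Mode policy not configured on this machine")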

Office 2003

Yes, some people do still use it. There are so many features, improvements and optimizations in every version of Office since 2003 that people working with Office 2003 must feel like they have been put out to pasture. I think if I had to go back to working with Windows XP and Office 2003 that a part of me would actually die. It’s even just the little things that make all the difference, like Flash Fill in Excel 2013, one of my personal favourites.

If anyone has ever sent you an Office 2003 format document such as a .doc and you are using Office 2010 for example, open that file, save a copy of it as a .docx and check the file size difference. The XML file formats are so much smaller that if you were to convert all of a business’s existing documents to the XML formats, I’m pretty confident that you could reduce your storage growth expenditure for the forthcoming financial year, paying for a large part of your Windows operating system upgrade project.
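
If you want to gauge the potential saving before committing to a conversion project, a quick sweep of a documents share gives you a feel for the numbers. The sketch below simply totals the size of legacy binary and XML format Office files under a folder; the share path is a placeholder.

    import os

    # Placeholder path to a documents share; totals file sizes by extension so
    # the legacy binary formats can be compared against the XML-based ones.
    root = r"\\fileserver\documents"
    legacy_ext = {".doc", ".xls", ".ppt"}
    xml_ext = {".docx", ".xlsx", ".pptx"}

    totals = {"legacy": 0, "xml": 0}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            ext = os.path.splitext(name)[1].lower()
            path = os.path.join(dirpath, name)
            if ext in legacy_ext:
                totals["legacy"] += os.path.getsize(path)
            elif ext in xml_ext:
                totals["xml"] += os.path.getsize(path)

    for kind, size in totals.items():
        print("{}: {:.1f} MB".format(kind, size / (1024 * 1024)))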

Upgrade Easily

Moving to later versions of Windows need not be as hard as some people fear either. System Center Configuration Manager (SCCM), for example, can be used with the User State Migration Tool (USMT) to migrate a machine, applications and all of the user’s data and settings from a Windows XP machine to a Windows 7 machine using an automated task sequence process requiring no user input. You could even deliver it as a self-service offering for end users to upgrade when it’s convenient for them.

Moving off Windows XP could even be the driver you need to review your technology approach and spur you to start looking at other options like VDI or tablet devices.

Try It You Might Like It

I guess what I’m getting at is that I work in IT, I deal with enterprises all day long and I understand the challenge, but I still don’t quite understand how some people have managed to hang on to Windows XP for quite so long, especially with the rise of the millennial in the workplace. These new workers are becoming more demanding of enterprise IT to provide technology experiences not only with more synergy to the experiences they are used to at home but also with the adoption of BYOD. Yes, BYOD adoption rates are questionable in volume depending on who your source is and exactly how you define BYOD, but there is no denying it is happening to varying extents.

I believe that there are a lot of organisations out there who have a perceived Windows XP problem because that’s what they think is the case, through the fear, uncertainty and doubt (FUD) spread through the media about new versions of Windows, but I ask: have you actually tried Windows 7 or Windows 8.1? Have you actually built out a device with the operating system and tested all of your applications? What is the cost to replace one or two applications that don’t upgrade quite right, or the cost to revamp a web interface with a web developer for a couple of weeks, versus paying large sums of money for special Windows XP support arrangements with Microsoft, something which doesn’t actually help you solve the problem but only prolongs its effects upon you?

Project Home Lab: Shopping List

Up until now, I’ve talked at length about the various factors dictating what I will be buying and why. This post is meant to be a high-level overview of all the previous posts: a shopping list of all of the components needed to make the build tick, so that if you choose to go down the same route yourself you can get a head start.

This series will consist of the following posts. I will update the table of contents links in each post as I produce and publish the articles.

  1. Project Home Lab: Goals
  2. Project Home Lab: Existing Infrastructure
  3. Project Home Lab: Hardware Decisions
  4. Project Home Lab: Network Decisions
  5. Project Home Lab: Shopping List

Common Infrastructure

Storage Server

I’ve yet to purchase or confirm the disks for this server as they are pretty much a commodity item. I’ll update this post when I do select them, but expect it to be a mixture of SSD and SATA disk.

With the SAS Multilane cables for connecting to the on-board SAS SFF-8087 ports, do make sure you get cables with Sideband support, provided by an extra wire or two and an extra pin connection in the cable, otherwise you won’t get the SGPIO disk failure and status indication through the disk backplane.

Hyper-V Server

As with the storage server, I’ve yet to purchase or confirm the disks for this server. For the Hyper-V server, the disks need not be large or pretty as they will be used primarily just for getting the host operating system online. A pair of SSDs in a RAID 1 mirror will be the most likely suspect.

The same note about SAS Multilane cables applies here: make sure you get cables with Sideband support, otherwise you won’t get the SGPIO disk failure and status indication through the disk backplane.

Next Up

With the shopping list crossed off and most of the hardware now ordered and some of it already in my hands, it’s time to get building. The next posts will show some of the builds, enjoy.

Project Home Lab: Network Decisions

So far in the series, I’ve talked about the goals and what hardware I want to use. In this post, I’m going to talk about how I plan to connect it all together and how I’m going to get it talking to the outside world via my existing production home network.

This series will consist of the following posts. I will update the table of contents links in each post as I produce and publish the articles.

  1. Project Home Lab: Goals
  2. Project Home Lab: Existing Infrastructure
  3. Project Home Lab: Hardware Decisions
  4. Project Home Lab: Network Decisions
  5. Project Home Lab: Shopping List

Hyper-V to Storage

We know this much so far: I’ve got two new servers planned. The data will be on one server and the processing power on another, so I need a way to interconnect them. I also need to be conscious of ensuring that whatever I deploy for the interconnect can scale up with the other areas if I elect to add another host later. Most importantly, I need to be sensitive to my existing network. Me hammering away with data transfers in the lab should not, under any circumstances, impact my home network; getting in trouble with the wife by stopping her from being able to stream videos in Xbox Fitness or play the latest Facebook craze just isn’t worth it.

We know already that each of the two servers is going to need three Ethernet ports, two gigabit and one 100Mbps, for the on-board network ports which I have said previously I will use for management and IPMI access. This leaves the most important aspect: getting data between the two. 100Mbps is gone, never to be seen again for anything other than out-of-band type connections, so my options are 1GbE, 10GbE or Infiniband with RDMA.

As we know, this is a home project. Infiniband requires specialist knowledge, some of which I possess from work with Xsigo in a former role, and whilst yes, 40Gb/s or more between the machines would be nice, the Infiniband host channel adapters (HCAs) are expensive and the Infiniband switches even more so. 10GbE is more common; however, as it is still pretty much at the pinnacle of Ethernet-based networking, with enterprises only really taking it by the horns today, it too is very expensive, which leaves me with 1GbE.

Gigabit Ethernet has been around the block a few times, parts are common and reasonably affordable. Gigabit Ethernet can be run over standard Cat 5e or as I have, Cat 6 cable so I’m reusing my existing investment in cabling and tooling for producing cables. Gigabit Ethernet also means I’m working with a single connectivity medium throughout making the identification of faults and troubleshooting simpler.

I want to get good performance out of this lab so, after some discussion with @LupoLoopy, we came to the decision that I should use SMB Multi-Channel, an SMB 3.0 feature available in Windows Server 2012 R2. With four ports of Gigabit Ethernet I will get decent performance at a low price and it’s easy enough to add another card to the server to open up more ports later if I need them. A quad port Intel PCI Express adapter comes in at between £50 and £100 used on eBay. I got the cards for both the Hyper-V server and the storage server for £50, so make sure to keep your eye on the available items for a bargain.
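
As a sanity check on why four Gigabit Ethernet ports counts as decent performance, the theoretical arithmetic below shows the aggregate bandwidth SMB Multi-Channel has to play with; it ignores Ethernet, TCP and SMB overhead, so real-world throughput will be somewhat lower.

    # Theoretical aggregate bandwidth of four 1GbE links under SMB Multi-Channel,
    # ignoring protocol overhead so real throughput will be lower.
    ports = 4
    link_gbps = 1.0

    aggregate_gbps = ports * link_gbps
    aggregate_mbytes = aggregate_gbps * 1000 / 8  # gigabits/s to megabytes/s

    print("Aggregate: {:.0f} Gb/s, roughly {:.0f} MB/s".format(
        aggregate_gbps, aggregate_mbytes))
    # Aggregate: 4 Gb/s, roughly 500 MB/s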

I will run my Hyper-V virtual networking over these ports also and using Storage QoS in Hyper-V I can ensure that I get the right amount of storage throughput at all times.

Switching

With it now decided that I’m going to use four ports of Gigabit Ethernet for my SMB Multi-Channel storage traffic and three ports for management and IPMI, I need to provision seven Ethernet ports per server. With two servers right now, that’s 14 ports and, if I allow an additional seven ports for a possible future expansion, that’s 21 ports, nearly a full 24 port switch.

My current core switch, a 24 port TP-Link TL-SG3424, has about 12 ports free right now, so not enough for this project. Going back to my previous statements, I want to keep any of this traffic from harming my home network performance, so put two and two together and you can see I’ll need a new switch for this. I don’t want to have to replace my core switch as it works perfectly well, performs well and is silent. As I want to completely isolate this lab, I’m instead going to add a second switch to my network for the lab and I will trunk the lab up to the core for internet access. With this leaf switch design, the only traffic that needs to leave or enter the core switch to and from the lab is external access from myself or Internet access requests, containing the storage traffic and protecting my home interests.

I looked at all the options and came to the swift conclusion that I was best placed to get another TP-Link TL-SG3424, the same as I already have, for the leaf switch. 24 Gigabit Ethernet ports suit all my needs, I know it performs well, and it leaves me with enough ports free for an additional host in the future plus a few ports for uplinks into the core.

I wrote a review of the TP-Link TL-SG3210 I use as my access switch, which has the same features and interfaces as the TL-SG3424, just with 8 ports instead of 24.

Access

Access into the lab will primarily be over Remote Desktop Protocol from the home network. To do this, I’m going to be accessing the lab across uplink ports that I will configure between the core and the lab switch. The lab will be in a separate VLAN to protect the home network from any broadcasts or such like going on in the lab. As my TP-Link switches are Layer 2, the Cisco ASA will be acting as my Layer 3 router between the home network and the lab which will allow me to place IP restrictions on who can traverse from the home network into the lab.

Costs

The cost for the new TP-Link switch is about £120. I’ve already got all the tools and cable I need to wire up the networking so there are no new costs there, making this arguably the cheapest part of the project. Time is actually going to be the biggest cost factor with the networking because of the time it’s going to take me to configure all of the new VLANs for the management, VM traffic and SMB Multi-Channel traffic; that’s the sour side of using TP-Link over Cisco and not being able to use VLAN Trunking Protocol (VTP), a Cisco feature which I love dearly.

Thankfully, VLAN configuration is a one-time thing, so although I’ll lose a couple of hours to all the network configuration initially, the cost of buying the switches and the low power consumption of the passively cooled TP-Link devices is worth it long term.

Next up, I will do a summary post in the form of a shopping list to get down everything I’m going to be using for the project and then I’ll be heading into build.

Project Home Lab: Hardware Decisions

In part one and two of this series, I talked about what I want to achieve and what I have in place already. From now on in, it’s all about the new stuff I want.

This series will consist of the following posts. I will update the table of contents links in each post as I produce and publish the articles.

  1. Project Home Lab: Goals
  2. Project Home Lab: Existing Infrastructure
  3. Project Home Lab: Hardware Decisions
  4. Project Home Lab: Network Decisions
  5. Project Home Lab: Shopping List

One Server vs. Multiple Servers

Early on, I had thought about building a single server with storage and hypervisor in bed together, but I quickly came to the conclusion that this would hinder me in the long run. Yes, I would get the fastest possible access to the disk storage for the VMs with an all-in-one, but it would leave me with nowhere to go: scaling up would be limited by the specification of the internal hardware in that single server, and scaling out would have big costs associated with it as I would need to buy the networking and Hyper-V servers to break it out.

I also decided that this wouldn’t give me a playground which could simulate much of a customer environment; after all, how many businesses do you know that run everything on a single host?

To this end, I decided that one server to act as a storage server and another for my hypervisor was the solution. This means that over time, if the need arises, I can add additional Hyper-V hypervisor servers to scale out my compute capacity and form a multi-node cluster. There may be upgrades required to the storage server to increase the capacity or IOPS, but those costs would be minimal and typical of business-as-usual storage growth.

Rack Mount vs. Standalone

For most people considering rack mount versus standalone, the choice would be based on whether or not their wife or partner will allow them to get away with having a server rack in the house somewhere. As I overcame that obstacle years ago, the decision is easier for me. Standalone has its advantages because the machines can be put into the corner of a room or garage with ease; however, standalone servers tend not to have the performance or scalability that I am looking for in a virtualization platform, which demands big memory to name but one facet. Standalone servers also tend not to be so readily available for purchase as used systems on eBay, which makes it harder.

Server Rack Cabinet

Based on my points above regarding a rack, I stayed focused on rack mount; however, my problem lay in my current rack build. The cabinet I have currently stands roughly 22U tall but only has usable space for 12U of equipment, as I originally built it with the remainder of the space designed as storage compartments which I no longer use. With the rack currently being wooden, it’s very primitive and provides me only with front and rear access; however, due to its place in the corner of the garage, I only actually have front access.

Because of this, I am going to need a new server rack to house everything. This rack will be a multi-tenant rack housing both my production home network and the lab environment, and I need to make sure that I buy a rack which will fit my existing space and give me some additional rack space for expansion should I need it in the future, such as additional Hyper-V servers or storage enclosures.

A UK based vendor called X-Case have recently started selling server racks and, at £214 for a new 22U rack, it’s perfect for me. It fits my space perfectly, it’s on castors so I can move it around, and it has removable side panels with doors front and rear. I’ve bought from X-Case before when I built my home server, which lives in one of their 4U cases right now, and their products are great and really affordably priced compared to the big name brands.

The 22U cabinet gives me plenty of space to house my current 9U of devices, leaving 13U for new purchases, and I don’t plan on taking my lab that big (just yet).

Off the Shelf vs. Custom

I try where possible to buy desktops and laptops from dedicated builders like Dell, Lenovo or the like, firstly because they can build them better than I can and secondly because gone are the days when it was cheaper to build your own. For servers at home, however, I have a slightly different view. On eBay, you will find a myriad of used servers up for sale from the likes of Dell PowerEdge and HP ProLiant. Sifting through them to find a good example which fits all the requirements can be a challenge. The problem I have with all of these is that none of them are energy conscious, because servers are designed for the datacenter and not the home. It is to this end that I decided to build my own.

Building my own gives me the flexibility to select certain parts bespoke and new and others used and reputably branded whilst keeping an eye on the power meter.

Rack Mount Cases

For rack mount cases, I wanted to buy new. The first reason is that pressed metal tins are generally quite affordable and the second is that I wanted to buy something that was going to perfectly fit my bill. I didn’t want 1U because that limits me with power supply, expansion card and CPU cooler options. 1U also means that you aren’t going to be fitting in many disks, which would impact my disk I/O performance options. Lastly, 1U chassis need cooling fans and 1U fans are small, which means they need to rotate fast to push the same cubic feet per minute (CFM) of air. Fast means noisy, and all of these factors immediately rule out 1U.

2U is a good height. 2U means I’m not limited by the power supply as almost all server supplies are 2U compatible, and most expansion cards are available in a half-height form factor suitable for installation into 2U. Fans in a 2U chassis are larger, which means slower spinning and quieter, and I also have more height to work with for CPU coolers. 2U gives me enough room to work with my components but isn’t wasteful of space either. 2U does have a limitation, however, and that is physical space on the front of the enclosure for disks, so whilst 2U is a good fit for my Hyper-V hypervisor servers, it doesn’t perfectly fit the storage.

3U and 4U are ideal sizes for storage. You get all the benefits of 2U as above but more front surface area to jam in disk slots. I looked at what is out there and I decided early on that, by using a combination of SSD and SATA disk for storage, I wouldn’t be needing that many disks for a single user environment, and the gains of 4U over 3U weren’t really worth it, so I focused my attention on 3U. 4U also has the problem that, with the number of disks it can support, you typically can’t find that number of SAS channels on a single controller, so I would need multiple SAS controllers; if you could find one with enough ports, it would likely be upwards of £1,000 just for the card.

I haven’t decided on my motherboard or CPU options at this point, but I know that I want this build to be flexible, so I need to ensure whatever I buy can support ATX and Extended ATX motherboards so that I’m free to make the right decision.

Accessibility for me is important. I want whatever case I opt for to support sliding rails so that I can draw out a server to replace parts if it has a fault. I also want the disks in any of the servers to be hot-swappable so that I can spot a faulty disk and replace it without having to open up the chassis and start messing around with drive screws and cables. As I plan on using a mixture of SSD and SATA disk, I need it to support 3.5″ and 2.5″ disks. I’m not interested in dual redundant power supplies in my build as that adds power demand and cost. If I lose a power supply, I can take the hit of having the lab offline for a few days for a replacement.

As I’ve expressed earlier, I like X-Case. They are a UK firm so I feel like I’m doing my bit for the UK economy and their products are good. For my 2U Hyper-V servers, I have decided on the X-Case RM 208 Pro (http://www.xcase.co.uk/rackmount-cases/2u-rackmount-server-cases/x-case-rm-208-pro-8-hotswap-caddy-with-6gb-sata-sas-backplane-temperature-controlled-fans.html).

The RM 208 Pro is a 2U rack mount enclosure. It’s £194 for the case and £27 for the sliding rail kit for it. It supports 2U power supplies, Extended ATX motherboards, has 8 hot-swappable disk caddies on the front taking 3.5″ disks and the disks are connected via two SAS 6Gbps SFF-8087 Multilane connectors, common on RAID and HBA cards. The SAS backplane supports SGPIO which means I will get disk failure and early warning notification lights on the enclosure if my RAID or HBA cards support it. The internal fans are hot-swappable and are temperature controlled for speed via the motherboard pin headers.

For the storage server, I decided on the X-Case RM 316 Pro (http://www.xcase.co.uk/rackmount-cases/3u-rackmount-server-cases/x-case-rm-316-pro-16-x-6gb-hotswap-caddy-mini-sas-backplane-120mm-temperature-controlled-fans.html). This enclosure looks and feels the same as the RM 208 Pro except that, at 3U, it has support for 16 3.5″ disks spread over four SAS 6Gbps SFF-8087 Multilane connectors. Everything else about this enclosure matches the RM 208 Pro that I will use for the Hyper-V server. The RM 316 Pro is more expensive at £370 for the chassis and another £33 for the sliding rails, but with the extra 8 disk slots I won’t be limited there.

Power Supply

For these servers, I want something fairly cheap yet reliable and from a known brand, as power is what makes the whole thing tick after all. X-Case resell Seasonic power supplies and, after much research into them, it transpires that they are actually the OEM manufacturer of a number of high-street brand power supplies, including the Corsair Builder Series supply in my current home server which has been running for over two years without a hiccup. The Seasonic SS-600 H2U 600 Watt power supply (http://www.xcase.co.uk/power-supply/2u-rackmount-power-supply-s/saesonic-ss-600h2u-2u-80-psu.html#sthash.WMRWR8NM.dpbs) is 80 Plus efficient and seems like just the ticket. At only £100 it’s a good price too, considering the price of some ATX power supplies these days. I’ll be using this unit in both the storage and Hyper-V servers.

Processor

In this decision process, the processor comes before the motherboard as, after all, the motherboard is just a life support system for the processor. I knew I wanted a server processor, not a desktop processor. I knew I needed a processor which supported Intel Virtualization Technology (Intel VT) or AMD-V, which cut down the options to pick from as not all CPUs, even new models released today, have Intel VT or AMD-V. I also knew I wanted a CPU with a low TDP to keep power consumption and heat BTU output down, reducing the cooling requirements and noise of the fans.

Server processors are highly expensive new, so I also knew that this was going to be a used part. Intel processors are generally more readily available in used form, but I didn’t want to omit AMD from the race as their Opteron processors have really high core counts, which is a great thing for a virtualization host. I also wanted to make sure that I used at least the same family of CPU between the storage and hypervisor servers to keep the builds consistent and simple for me to support.

After weighing up all of the options back and forth, I settled on the Intel Xeon L5630 processor (http://ark.intel.com/products/47927/Intel-Xeon-Processor-L5630-12M-Cache-2_13-GHz-5_86-GTs-Intel-QPI) and got them for £25 per processor on eBay. The L5630 is a quad core CPU with a TDP of 40W, which is really low for a server processor. The CPU launched in 2010, which means it’s not that old even if the units I have were first off the line. The L5630 has a clock speed of 2.13GHz and 12MB of L3 cache. With four cores and Hyper-Threading, the hosts will see eight logical cores available and, with Turbo Boost support, the CPU can boost up to 2.4GHz. As I said previously, Intel Hyper-Threading and Turbo Boost are supported, as are Intel VT-x, Intel VT-d, SpeedStep and the latest AES encryption instructions, which makes this CPU very feature rich for its age.

Memory support is tri-channel DDR3 up to 288GB per processor and it can be used in a dual processor configuration thanks to its dual QuickPath Interconnect (QPI) links. DDR3 support is useful because higher capacity DIMMs such as 8GB or 16GB are rare in DDR2, and DDR2 is becoming harder and more expensive to buy as stocks dwindle, while DDR3 is readily available in all sorts of shapes, sizes and flavours on eBay.

Motherboard

With my processor decided upon, I now needed to select a motherboard to suit. The Intel Xeon L5630 uses the Socket 1366 motherboard socket running the Intel 5500 series Tylersburg chipset. My first port of call was Supermicro as they make amazing products and frequently OEM their parts to other vendors, which shows a lot of faith in them. This, coupled with the fact that their parts range is wider than the Grand Canyon, meant I was sure to find what I wanted.

The requirements for the motherboard, in line with my goals, meant that I wanted something which gives me plenty of options for future expansion. I also want accessibility, which means I don’t want to be running to the server with a keyboard and monitor in hand to troubleshoot a boot issue, so iLO, DRAC or IPMI are very important for me. The more feature-rich the motherboard, the less I potentially need to spend on expansion cards, so that was also a factor.

Selecting the motherboard took the longest amount of time due to the options available but eventually I selected the Supermicro X8DTH-6F motherboard. I was able to find this for £250 including shipping and import taxes, new from a seller in the USA on eBay.

The X8DTH-6F (http://www.supermicro.com/products/motherboard/QPI/5500/X8DTH-6F.cfm) has everything. It’s a dual socket Extended ATX motherboard taking the Intel 5500 and 5600 series processors, good for my L5630 Xeon. It can be run in uniprocessor mode with a second processor added later, which meets my expansion plans by allowing me to add a second CPU to the Hyper-V server for more processing power at a later stage. Six DDR3 DIMM slots per processor gives me a total of 12 DIMM slots, with six usable now, supporting 1333MHz DDR3 in anything from conventional desktop UDIMM format all the way up to ECC Registered and Buffered. With the second CPU added later, this opens up the additional six DIMM slots for use also.

On-board dual port gigabit Intel Ethernet and a dedicated IPMI port supporting remote media and KVM ticks another box. Being Extended ATX, the board has seven PCI Express slots giving me lots of options for expansion cards and the on-board Intel ICH10R and LSI SAS 2008 6Gbps SAS controllers handle all of my drive quandaries too, at least for the medium term.

£250 for the motherboard may seem a bit steep, however consider these factors. Having the two on-board gigabit Ethernet ports saves me about £40 buying a used dual port Intel PCI Express network adapter from eBay to service the management traffic. The on-board LSI SAS controller saves me around £100 buying a used LSI SAS host bus adapter card. Having both of these on-board means two fewer PCI Express cards installed, theoretically improving the airflow in the case and likely reducing power consumption too. IPMI can be added to any machine with a PCI Express slot; however, whether the add-in cards available online are as integrated and feature rich as a dedicated on-board implementation is questionable, and the cards I have seen online run for about £500 each, making the motherboard look positively cheap.

Memory

I want as much memory as possible in my Hyper-V servers. For the storage server I want a sensible amount but not to the extent of the Hyper-V servers. The more memory I have, the more I can give my virtual machines to help give them that production feel.

DDR3 support on the CPU and motherboard means I’m up to date with the current specification, although not for memory speed I should point out. I wanted to buy from the Supermicro validated memory support list and I wanted ECC Registered DIMMs, as that’s what you use in servers for their error correction capabilities. Also, because the motherboard only supports 16GB of memory per processor if you use UDIMM desktop type memory, I really wanted to use ECC Registered DIMMs. I need to make sure I do everything possible to squeeze the maximum performance out of this lab and that means using memory in accordance with the tri-channel native operation mode of the CPU.

For the storage server, I decided on 12GB RAM by way of three 4GB DIMMs. For the Hyper-V server, I decided on 48GB as six 8GB DIMMs for the uniprocessor setup and if I add a second processor later, I can add an additional 48GB.

16GB DIMMs are available but they are just way too expensive right now for me to consider. I managed to get the 4GB DIMMs for about £20 each and the 8GB DIMMs for £35 each. All of the DIMMs are Samsung low voltage DIMMs running at 1333MHz. To translate, this means I am using PC3L-10600R designation DIMMs. These DIMMs will automatically clock down to the highest speed supported on the motherboard, and under-running the memory will help to keep the temperature of the DIMMs down during operation.

With a second processor installed later, this would give me 96GB of RAM in my Hyper-V host if I stay with 8GB DIMMs, and if I later upgrade to 16GB DIMMs, should their prices become sensible, I could have up to 192GB of memory.
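
For anyone decoding the DIMM designation or sanity checking the capacity numbers, the arithmetic works out as in the sketch below; the bandwidth figure is just the standard DDR3-1333 calculation.

    # PC3L-10600R decoded: DDR3 at 1333 MT/s on a 64-bit (8 byte) bus gives
    # roughly 10,664 MB/s per channel, which rounds to the "10600" designation.
    transfers_per_second = 1333   # mega-transfers per second for DDR3-1333
    bytes_per_transfer = 8        # 64-bit memory bus
    print("Per-channel bandwidth: ~{} MB/s".format(
        transfers_per_second * bytes_per_transfer))

    # Capacity options for the Hyper-V host with six DIMM slots per processor.
    dimm_slots_per_cpu = 6
    for cpus, dimm_gb in [(1, 8), (2, 8), (2, 16)]:
        print("{} CPU(s) with {}GB DIMMs: {}GB".format(
            cpus, dimm_gb, cpus * dimm_slots_per_cpu * dimm_gb))
    # 48GB, 96GB and 192GB respectively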

Ancillaries

As always with a PC build, you need some odds and sods to finish it off.

The CPU needs a cooler so I opted for the Supermicro SNK-P0037P passive cooler. The cooler is recommended for the motherboard and made by the motherboard manufacturer, Supermicro. The cooler is rated for processors up to 90W TDP, which means it will more than easily handle my 40W L series CPU, and having no fan on the CPU will help to keep down the power consumption and noise, being one less moving part to power.

To connect the SAS Multilane connections on the motherboard to the enclosure backplane, I need some SFF-8087 cables. For the Hyper-V server, I will be installing only a pair of SSDs to run the host Windows Server 2012 R2 operating system. To protect against a SAS channel or cable failure, I will be installing both SFF-8087 multilane cables with a single SSD per channel.

For the storage server, I am going to install both channel cables allowing me to run 8 disks. I will operate like this initially and once I need more than 8 disks to increase the performance, I will buy an 8 Port LSI PCI Express SAS HBA to run the other two channels, buying two more cables. Genuine LSI SFF-8087 to SFF-8087 cables with Sideband support for the SGPIO disk information pass-through are £10 each, new on eBay.

The enclosures have 3.5″ drive bays to allow me to use big capacity SATA disk but as I will be using a combination of SSD and SATA, I need a way to mount the 2.5″ SSD disks. For about £10 each on eBay, you can pick up the HP 654540-001. This is a 2.5″ to 3.5″ disk carrier specifically designed for hot swap enclosures. You mount the disk into the carrier and it translates the power and data port positions to match that of a 3.5″ disk. It uses no intermediary disk controller so the disk will be seen exactly for what it is by the controller and the operating system and there is no performance penalty either.

Microsoft Azure Web Sites Hosting Plan Modes

Normally in Microsoft Azure (née Windows Azure), I run my blog in Shared compute mode; however, I occasionally have to scale up to Standard if I hit the compute limits for Shared in a given time period. It’s a bit naughty perhaps, but I’m not built of money so I need to look after the pounds where possible.

Today, I noticed that the site popped offline whilst I was working on something, the issue being that what I was doing in the back-end of WordPress generated a big load which then tripped the Shared instance resources counter. I logged into the Microsoft Azure Management Portal ready to increase the site level to Standard, only to notice that the Scale options for a Web Site have now changed, a new feature in Microsoft Azure Web Sites.

Microsoft Azure Web Sites Hosting Mode

Previously, we had three options for the Scale of a website in Azure: Free, Shared and Standard. Free was a great way to develop and test a site which didn’t need a custom domain name attached, didn’t need to be able to use HTTPS or where you generally weren’t worried about the performance. Shared stepped it up a level, giving you support for custom domain names; however, HTTPS support and some of the high end features such as Endpoint Monitoring were still out of reach and reserved for Standard.

After some poking around, I haven’t yet been able to find out exactly what the pitch for Basic vs. Standard is but looking through the settings in the Web Site settings panels in Microsoft Azure, I can see that SSL is available for Basic but Web Site Backups and Endpoint Monitoring are still reserved for Standard. I’ll see what else I can find out about this update and what exactly is in and out between Basic and Standard and update the post.

It’s also interesting to note that the Microsoft Azure Pricing Calculator hasn’t yet been updated to reflect the addition of the new tier with the calculator still only offering up Free, Shared or Standard as the tier options.

Microsoft Azure Pricing Calculator Web Site Tiers

There are other new features in Microsoft Azure Web Sites that I want to talk about but I’ll save that for another post later.