Richard works as a Cloud Consultant for Fordway Solutions, where his primary focus is to help customers understand, adopt and develop with Microsoft Azure, Office 365 and System Center.

Richard Green is an IT Pro with over 15 years of experience in all things Microsoft, including System Center and Office 365. He has previously worked as a System Center consultant and as an internal solutions architect across many verticals.

Outside of work, he loves motorbikes and is part of the orange army, marshalling for NGRRC, British Superbikes and MotoGP. He is also an Assistant Cub Scout Leader.

Windows Server 2003 End of Life Plan Spreadsheet

Last week, the folks over at Microsoft published another entry in their blog post series Best Practices for Windows Server 2003 End-of-Support Migration (http://blogs.technet.com/b/server-cloud/archive/2014/10/09/best-practices-for-windows-server-2003-end-of-support-migration-part-4.aspx?wc.mt_id=Social_WinServer_General_TTD&WT.mc_id=Social_TW_OutgoingPromotion_20141009_97469473_windowsserver&linkId=9944146). The post included a visually appealing spreadsheet template to help you keep track of and plan your Windows Server 2003 migrations but, to my shock, they didn’t provide the actual Excel file for that design (shame on them).

I’ve copied the design and made it into an Excel spreadsheet, which I’ve set up with Conditional Formatting in the relevant cells so that when you add your numeric values and X’s it will automatically colour the cells and keep it as pretty as intended. After all, we need a bit of colour and happiness to help us through Windows Server 2003 migrations, right?

Click the screenshot of the Excel file below to download it. As a note, make sure you use the Excel desktop application and not the Excel web app to view or use this file, as the web app appears to break some of the formatting and layout.

Server 2003 Migration Spreadsheet

UPDATE: If you want to read more about Windows Server 2003 End of Life, a post by me has been published on the Fordway blog at http://www.fordway.com/blog-fordway/windows-server-2003-end-of-life/.

Explaining NUMA Spanning in Hyper-V

When we work in virtualized worlds with Microsoft Hyper-V, there are many things we have to worry about when it comes to processors. Most of these things come with acronyms which people don’t really understand but know they need, and one of these is NUMA Spanning. I’m going to try and explain it here, convey why we want to avoid NUMA Spanning where possible, and do it all in fairly simple terms to keep the topic light. In reality, NUMA architectures may be more complex than this.

NUMA Spanning relates to Non-Uniform Memory Access (NUMA), a memory architecture introduced into processors and chipsets by Intel and AMD. Intel implemented it with QuickPath Interconnect (QPI) in 2007 and AMD implemented it with HyperTransport in 2003. NUMA uses a construct of nodes in its architecture. As the name suggests, NUMA refers to system memory (RAM) and how we use memory and, more specifically, how we determine which memory in the system to use.

Single NUMA Node

Single NUMA Node

In the simplest system, you have a single NUMA node. A single NUMA node is achieved either in a system with a single socket processor or by using a motherboard and processor combination which does not support the concept of NUMA. With a single NUMA node, all memory is treated as equal and a VM running on a hypervisor on a system with this configuration would use any memory available to it without preference.

Multiple NUMA Nodes

Two NUMA Nodes

In a typical system that we see today, with multiple processor sockets and a processor and motherboard configuration that supports NUMA, we have multiple NUMA nodes. NUMA nodes are determined by the arrangement of memory DIMMs in relation to the processor sockets on the motherboard. Take a hugely oversimplified sample system with two CPU sockets, each loaded with a single core processor, and six DIMM slots per socket, each populated with an 8GB DIMM (12 DIMMs total). In this configuration we have two NUMA nodes, and in each NUMA node we have one CPU socket and its directly connected 48GB of memory.

The reason for this relates to the memory controller within the processor and the interconnect paths on the motherboard. The Intel Xeon processor, for example, has an integrated memory controller. This memory controller is responsible for the address and resource management of the six DIMMs attached to the six DIMM slots on the motherboard linked to this processor socket. For this processor to access this memory, it takes the quickest possible path, directly between the processor and the memory, and this is referred to as Uniform Memory Access.

For this processor to access memory that is in a DIMM slot linked to our second processor socket, it has to cross the interconnect on the motherboard and go via the memory controller on the second CPU. All of this takes mere nanoseconds to perform, but it is additional latency that we want to avoid in order to achieve maximum system performance. We also need to remember that if we have a good virtual machine consolidation ratio on our physical host, this may be happening for multiple VMs all over the place, and that adds up to lots of nanoseconds all of the time. This is NUMA Spanning at work: the processor is breaking out of its own NUMA node to access Non-Uniform Memory in another NUMA node.

Considerations for NUMA Spanning and VM Sizing

NUMA Spanning has a bearing on how we should size the VMs we deploy to our Hyper-V hosts. In my sample server configuration above, I have 48GB of memory per NUMA node. To minimise the chances of VMs spanning these NUMA nodes, we therefore need to deploy our VMs with sizing considerations linked to this. If I deployed 23 VMs with 4GB of memory each, that equals 92GB. This would mean the 48GB of memory in the first NUMA node could be totally allocated to VM workloads and 44GB of memory allocated to VMs in the second NUMA node, leaving 4GB of memory for the parent partition of Hyper-V to operate in. None of these VMs would span NUMA nodes because 48GB/4GB is 12, which means 12 entire VMs fit per NUMA node.

If I deployed 20 VMs, but this time with 4.5GB of memory each, this would require 90GB of memory for virtual workloads and leave 6GB for hosting the parent partition of Hyper-V. The problem here is that 48GB/4.5GB doesn’t divide evenly; we have leftovers and uneven numbers. Ten of our VMs would fit entirely into the first NUMA node and nine would fit entirely within the second, but our twentieth VM would be in no man’s land and would be left with its memory split across both NUMA nodes.

In good design practice, we should try to size our VMs to match our NUMA architecture. Taking my sample server configuration of 48GB per NUMA node, we should use VMs with memory sizes of 2GB, 4GB, 6GB, 8GB, 12GB, 16GB, 24GB or 48GB. Anything other than this runs a real risk of being NUMA spanned.
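To make the arithmetic above concrete, here is a minimal Python sketch of the same sizing check. The 48GB node size and the candidate VM sizes are just the figures from my example, so swap in the values for your own hardware.

```python
# Minimal sketch of the NUMA sizing check described above.
# node_gb is the memory per NUMA node (48GB in my sample server).
def fits_numa_node(node_gb, vm_gb):
    """True if VMs of this size pack into a node with nothing left over."""
    return node_gb % vm_gb == 0

node_gb = 48
for vm_gb in (2, 4, 4.5, 6, 8, 12, 16, 24, 48):
    verdict = "fits cleanly" if fits_numa_node(node_gb, vm_gb) else "risks spanning"
    print(f"{vm_gb}GB VMs in a {node_gb}GB node: {int(node_gb // vm_gb)} whole VMs, {verdict}")
```

Running this reports that the 4.5GB size from my second example is the only one in the list that risks spanning.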

Considerations for Disabling NUMA Spanning

So now that we understand what NUMA Spanning is and the potential decrease in performance it can cause, we need to look at it through a virtualization lens, as this is where its effect is felt the most. The hypervisor understands the NUMA architecture of the host through detection of the hardware within it. When a VM tries to start and the hypervisor attempts to allocate memory for it, it will always try first to get memory within the NUMA node of the processor being used for the virtual workload, but sometimes that may not be possible because other workloads are occupying that memory.

For the most part, leaving NUMA Spanning enabled is totally fine, but if you are really trying to squeeze performance from a system, a virtual SQL Server perhaps, NUMA Spanning would be something we would like to have turned off. NUMA Spanning is enabled by default in both VMware and Hyper-V and it is enabled at the host level, but we can override this configuration at both a per-host and a per-VM level.

I am not for one minute going to recommend that you disable NUMA Spanning at the host level, as this might impact your ability to run your workloads. If NUMA Spanning is disabled for the host and the host is not able to accommodate the memory demand of the VM within a single NUMA node, the power-on request for the VM will fail and you will be unable to turn on the machine. However, if you have some VMs with NUMA Spanning disabled and others with it enabled, your host can work like a memory-based jigsaw puzzle, fitting things in where it can.
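To illustrate that memory jigsaw idea, here is a rough Python model of the power-on rule described above. This is not how Hyper-V actually allocates memory internally, just a simple way of showing that a VM with spanning disabled must fit entirely inside one NUMA node, while a spanning-enabled VM can draw on free memory across nodes.

```python
# Rough model (not Hyper-V's real allocator) of the power-on behaviour:
# spanning disabled -> the VM must fit wholly within a single NUMA node
# spanning enabled  -> the VM can use free memory from any combination of nodes
def can_power_on(free_per_node_gb, vm_gb, spanning_enabled):
    if spanning_enabled:
        return sum(free_per_node_gb) >= vm_gb
    return any(free >= vm_gb for free in free_per_node_gb)

free = [6, 10]  # GB free in node 0 and node 1 (hypothetical values)
print(can_power_on(free, 12, spanning_enabled=True))   # True  - memory is spanned
print(can_power_on(free, 12, spanning_enabled=False))  # False - the power-on request fails
```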

Having SQL Servers and other performance-sensitive VMs running with NUMA Spanning disabled would be advantageous to their performance, and having NUMA Spanning enabled on VMs which are not performance sensitive allows them to use whatever memory is available and cross NUMA nodes as required, giving you the best combination of maximum performance for your intensive workloads and the resources required to run those that are not.

Using VMM Hardware Profiles to Manage NUMA Spanning

VMM Hardware Profile NUMA Spanning

So, assuming we have a Hyper-V environment that is managed by Virtual Machine Manager (VMM), we can make this really easy to manage without having to bother our users or systems administrators with understanding NUMA Spanning. When we deploy VMs we can base them on Hardware Profiles. A VMM Hardware Profile has the NUMA Spanning option available to us, so we would simply create multiple Hardware Profiles for our workload types: some for general purpose servers with NUMA Spanning enabled, whilst other Hardware Profiles would be configured specifically for performance-sensitive workloads with the NUMA Spanning setting disabled in the profile.

The key thing to remember here is that if you have VMs already deployed in your environment, you will need to update their configuration. Hardware Profiles in VMM are not linked to the VMs that we deploy, so once a VM is deployed, any changes to the Hardware Profile it was deployed from do not filter down to the VM. The other thing to note is that the NUMA Spanning configuration is only applied at VM startup and during a Live or Quick Migration. If you want your VMs to pick up a changed NUMA Spanning setting, you will either need to stop and start the VM or migrate it to another host in your Hyper-V Failover Cluster.

Gartner Magic Quadrant Unified Communications

Well here’s one you wouldn’t have expected to see. Gartner have placed Microsoft and Lync ahead of Cisco in their Unified Communications Magic Quadrant.

Gartner have put Cisco and Microsoft level for Ability to Execute; however, Microsoft have been placed ahead for Vision. You can read the full article at http://www.gartner.com/technology/reprints.do?id=1-1YWQWK0&ct=140806&st=sb. Well done Microsoft. Now, if work can be done to address the cautions that Gartner have identified, the position will be even stronger.

System Center Service Manager 2012 R2 Data Warehouse Reports Unavailable

Late last week, I had the pleasure of deploying and configuring a System Center Service Manager 2012 R2 Data Warehouse. I was informed today that none of the reports were available in the Reporting tab in SCSM, so I had a look at what the problem might be.

With the SCSM Data Warehouse, the most important job during setup is one of the Data Warehouse Jobs, named MPSyncJob. The MPSyncJob deploys all of the management packs from SCSM into the report folders in SQL Server Reporting Services (SSRS).

When I looked at this job in the Data Warehouse Jobs tab under Data Warehouse in the SCSM Console, 175 of the 181 management packs had the status Associated but 6 were stuck with the status Pending Association, and these were all reporting management packs. Viewing the Management Packs tab under Data Warehouse in the SCSM Console, I could see that these same 6 management packs had a Deployment Status of Failed, which is obviously not good.

I logged on to the SCSM Data Warehouse server and poked into the Operations Manager log, which is where SCSM records all its events, and there were a number of critical alerts in the log with the Event Source Deployment. The message went along the lines of insufficient permissions to complete the requested operation, so I knew immediately there was a permissions issue with SSRS. I headed over to the SSRS Report Manager URL, which normally looks like https://SERVERNAME.domain.suffix/Reports_InstanceName, and logged in as myself.

Viewing the permissions on the System Center and Service Manager report folders, I could quickly see that the account I specified during the setup of the SCSM Data Warehouse was missing; the installer had not properly assigned the permissions to the account.

I manually added the permissions for the account and restarted the deployment of the management packs in a failed state, and the Operations Manager log has now reported that they have been successfully deployed. Happy days. Now I just need to wait for SCSM to complete all of the other jobs in the appropriate order to get the full functionality from our Data Warehouse.

 

Active Directory and DFS-R Auto-Recovery

I appreciate this is an old subject but it is one that I’ve come across a couple of times recently, so I wanted to share it and highlight its importance. This will be one of a few upcoming posts on slightly older topics, but nonetheless important ones that need to be addressed.

How Does DFS-R Affect Active Directory

In Windows Server 2008, Microsoft made a big change to Active Directory Domain Services (AD DS) by allowing us to use DFS-R as the underlying replication technology for the Active Directory SYSVOL, replacing the File Replication Service (FRS) that had been with us since the birth of Active Directory. DFS-R is a massive improvement on FRS and you can read about the changes DFS-R brings and the benefits at http://technet.microsoft.com/en-us/library/cc794837(v=WS.10).aspx. If you have upgraded your domains from Windows Server 2003 to Windows Server 2008 or Windows Server 2008 R2 and you haven’t completed the FRS to DFS-R migration, I’d really recommend you look at it. It’s easily overlooked because you have to complete this part of the migration manually, in addition to upgrading or replacing your domain controllers with Windows Server 2008 servers, and there are no prompts or reminders to do it. There is a guide available on TechNet at http://technet.microsoft.com/en-us/library/dd640019(v=WS.10).aspx to help you through the process.
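For reference, the SYSVOL piece of that migration is driven with the dfsrmig.exe tool, stepping the domain through its global states (Prepared, Redirected and then Eliminated) and checking that every domain controller has reached each state before moving on. The sketch below simply wraps those switches in Python for illustration; the state numbers and switch names are my reading of the TechNet guide linked above, so verify them there before running anything against a production domain.

```python
# Illustration only: stepping SYSVOL replication from FRS to DFS-R with dfsrmig.exe,
# per the TechNet migration guide linked above. Run on a domain controller,
# one state at a time, and confirm "dfsrmig /getmigrationstate" reports all
# domain controllers as consistent before moving to the next state.
import subprocess

GLOBAL_STATES = {1: "Prepared", 2: "Redirected", 3: "Eliminated"}

def set_global_state(state):
    print(f"Requesting global state {state} ({GLOBAL_STATES[state]})")
    subprocess.run(["dfsrmig", "/setglobalstate", str(state)], check=True)

def show_migration_state():
    subprocess.run(["dfsrmig", "/getmigrationstate"], check=True)

set_global_state(1)      # move to Prepared first
show_migration_state()   # then check progress across all DCs
```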

Back in January 2012, Microsoft released KB2663685, which changes the default behaviour of DFS-R replication, and it affects Active Directory. Prior to the hotfix, when a DFS-R replication group member performed a dirty shutdown, the member would perform an automatic recovery when it came back online; after the hotfix, this is no longer the case. This behaviour change results in a DFS-R replication group member halting replication after a dirty shutdown and awaiting manual intervention. Your intervention choices range from manually activating the recovery task to decommissioning the server and replacing it, all depending on the nature of the dirty shutdown. What we need to understand, however, is that a dirty shutdown can happen more often than you think, so it’s important to be aware of this.

Identifying Dirty DFS-R Shutdown Events

Dirty shutdown events are logged to the DFS Replication event log with the event ID 2213, as shown below in the screenshot, and the event advises you that replication has been halted. If you have virtual domain controllers and you shut down a domain controller using the Shut Down Guest Operating System options in vSphere or in Hyper-V, this will actually trigger a dirty shutdown state. Similarly, if you have an HA cluster of hypervisors and a host failure causes the VM to restart on another host, yep, you guessed it, that’s another dirty shutdown. The lesson here, first and foremost, is to always shut down domain controllers from within the guest operating system to ensure that it is done cleanly and not forcefully via a machine agent. Event ID 2213 is quite helpful in that it actually gives us the exact command to recover the replication, so a simple copy and paste into an elevated command prompt will recover the server. No need to edit to taste. Once you’ve entered the command, another event is logged with the event ID 2214 to indicate that replication has recovered, as shown in the second screenshot.

AD DS DFS-R Dirty Shutdown 2213  AD DS DFS-R Dirty Shutdown 2214

Changing DFS-R Auto-Recovery Behaviour

So now that we understand the behaviour change and the event IDs that let us track this issue, how can we get back to the previous behaviour so that DFS-R can automatically recover itself? Before you do this, you need to realise that there is a risk to this change: if you allow automatic recovery of DFS-R replication groups and the server that is coming back online is indeed dirty, it could have an impact on the integrity of your Active Directory Domain Services SYSVOL directory.

Unless you have a very large organisation, or unless you are making continuous changes to your Group Policy Objects or the files which are stored in SYSVOL, this shouldn’t really be a problem and I believe that the risk is outweighed by the advantages. If a domain controller restarts and you don’t pick up on the event ID 2213, you have a domain controller which is out of sync with the rest of the domain controllers. The risk here is that domain members and domain users will get out-of-date versions of Group Policy Objects if they use this domain controller, as it will still be actively servicing clients whilst the DFS-R replication group is in an unhealthy state.
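If you do decide to restore the pre-hotfix behaviour, the change is made with a registry value on each replication group member. The sketch below reflects my understanding of the setting described in KB2663685 (StopReplicationOnAutoRecovery under the DFSR Parameters key); double-check the path and value name against the KB article before touching a domain controller.

```python
# Hedged sketch: allow DFS-R to auto-recover after a dirty shutdown by setting
# the registry value introduced with KB2663685. Path and value name are my
# understanding of the KB article - verify against it before use.
import winreg

DFSR_PARAMETERS = r"SYSTEM\CurrentControlSet\Services\DFSR\Parameters"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, DFSR_PARAMETERS, 0,
                    winreg.KEY_SET_VALUE) as key:
    # 0 = recover automatically (pre-hotfix behaviour), 1 = halt and wait (new default)
    winreg.SetValueEx(key, "StopReplicationOnAutoRecovery", 0,
                      winreg.REG_DWORD, 0)

print("StopReplicationOnAutoRecovery set to 0 - automatic recovery allowed.")
```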

Effects Beyond Active Directory

DFS-R is a technology originally designed for replicating file server data. This change to DFS-R Auto-Recovery therefore impacts not only Active Directory, the scope of this post, but also file services. If you are using DFS-R to replicate your file servers then you may want to consider this change for those servers too. Whilst having an out-of-date SYSVOL can be an inconvenience, having an out-of-date file server can be a major problem, as users will be working with out-of-date copies of documents, or may not even be able to find a document if it is new and hasn’t been replicated to their target server.

My take on this, though, would be to carefully consider the change for a file server. Whilst a corrupt Group Policy can fairly easily be fixed, recovered from a GPO backup or re-created if the policy wasn’t too complex, asking a user to re-create their work because you allowed a corrupt copy of it to be brought into the environment might not go down quite so well.

SQL Server Maintenance Solution

Earlier this year, I posted about a tool from Brent Ozar called sp_Blitz and how it gives you amazing insight into configuration problems with your SQL Servers. Well, today I am here to tell you about another great SQL tool available for free online, and that is the SQL Server Maintenance Solution by Ola Hallengren, a Swedish database administrator who was awarded Microsoft MVP this year for the first time.

You can download his tool from https://ola.hallengren.com/ and on the site there is full documentation for all of the tool’s features, including the most common configuration examples, so you can get up and running with it really quickly.

The SQL Server Maintenance Solution is a .sql file that you download and run to install a series of Stored Procedures in your master database. The tool works by invoking these Stored Procedures as SQL Agent Jobs and by default it will create a number of these, unless you opt not to during the install by changing one of the lines in the .sql file.

I opted not to install the default jobs but to create my own, so I could configure how and what I wanted the scripts to do, but it really is so simple that no SQL administrator has any reason not to be performing good routine maintenance. I am using Ola’s scripts both to perform routine DBCC CHECKDB consistency checks and to perform index defragmentation on databases, which is where its real power lies.

The reason Ola’s scripts beat a SQL Maintenance Plan for index defragmentation, and the main reason I wanted to use them, is that Ola gives us the flexibility to perform different actions according to the level of fragmentation. For example, I could do nothing if fragmentation in an index is below 10%, reorganise an existing index if fragmentation is between 10% and 30%, and completely rebuild the index if it is over 30%. Compare this to a SQL Maintenance Plan, where your only option is reorganise or rebuild regardless of fragmentation level, and you can see the advantage.
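To show what those thresholds look like in practice, here is a rough Python (pyodbc) sketch that calls Ola’s IndexOptimize procedure with the 10% and 30% levels I described. The parameter names are as I recall them from the documentation on ola.hallengren.com, and the server name is made up, so check the current documentation and test against a non-production instance before using anything like this.

```python
# Rough sketch: invoke Ola Hallengren's IndexOptimize with the fragmentation
# thresholds discussed above (below 10%: nothing, 10-30%: reorganise, over 30%: rebuild).
# Parameter names per the ola.hallengren.com documentation - verify before use.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=MYSQLSERVER;DATABASE=master;"   # MYSQLSERVER is a hypothetical server name
    "Trusted_Connection=yes",
    autocommit=True)

conn.execute("""
    EXECUTE dbo.IndexOptimize
        @Databases = 'USER_DATABASES',
        @FragmentationLow = NULL,                  -- below 10%: do nothing
        @FragmentationMedium = 'INDEX_REORGANIZE', -- 10% to 30%: reorganise
        @FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
        @FragmentationLevel1 = 10,
        @FragmentationLevel2 = 30,
        @LogToTable = 'Y'
""")
conn.close()
```

In my environment this logic runs as a SQL Agent Job rather than from Python; the sketch is just a convenient way to show the thresholds in one place.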

So now, thanks to the community and to Brent and Ola, we can check the configuration of our SQL Servers to make sure they are happy and safe, as well as easily configure our daily and weekly checks and maintenance on databases to keep our servers and our databases happy, and we all know that happy databases mean happy software.

In another post coming up soon, I will show you how we can update the configuration of our SCOM Management Pack for SQL Server so that we can receive alerts for failed SQL Server Agent Jobs, allowing us to centralise our knowledge, reporting and alerting for SQL maintenance tasks.

HD Voice and O2 Out in the Cold

HD Voice is the name given to a feature which offers wideband call quality through your mobile. Mobile networks haven’t exactly been all over this, I suspect because of the steadily falling call volumes across their networks due to the increasing prevalence of smartphones and apps like Skype and WhatsApp, but does that mean they should stop trying?

A couple of weeks ago, Vodafone announced that they have been rolling out HD Voice on their network, leaving O2 as the only network in the UK not to offer it.

The reason networks had been slow on the uptake of wideband call quality was previously the lack of handset support, but those days are gone, so there really is no excuse for the networks now. So what’s going on with O2?

Well Engadget spoke to O2 (http://www.engadget.com/2014/09/11/vodafone-enables-hd-voice/) and they have said that they have no plans at all to implement HD Voice.

To me, this is like a kick in the teeth to anyone who actually cares about making phone calls. I know that actually picking up the phone to someone gets rarer and rarer as more people use mobile apps to call each other or, more commonly, message or chat by other means, but O2’s statement really tells you that they no longer care about calls. O2 seem only to be interested in pushing packets these days, so if you like making calls and talking to people, find another network.

Whilst I’ve just slated O2 above, I should point out that no network is perfect. O2, after all, have just recently added support for Windows Phone Visual Voicemail, the first network in the UK to do so; however, the problem I observed with this is that it only works on phones with an O2 ROM image, and my old Lumia 820 from O2 shipped with a carrier-unbranded ROM so doesn’t get the feature.

The one thing that I haven’t been able to find out is whether HD Voice works between networks as well as within them. Back in 2012, Orange claimed the first HD Voice call between two countries (http://www.pcadvisor.co.uk/news/network-wifi/3406390/orange-claims-first-hd-voice-call-between-two-countries/), but from the wording of the article it would seem that this was entirely within their own network. I suspect that it does work inter-network as well as intra-network, otherwise it’s really a bit pointless.

If you want to hear the difference between narrowband and wideband HD Voice audio, check out this BBC News article which has a little audio clip comparing the difference at http://www.bbc.co.uk/blogs/legacy/thereporters/rorycellanjones/2010/09/hd_voice_-_can_you_hear_me_now.html/.

A Swathe of Microsoft Azure Updates

I’ve been a bit lazy over the last couple of weeks when it comes to blogging: a) because I’ve been on the road quite a bit with work and I haven’t fancied sitting in front of my PC when I got home in the evening, and b) because I’ve been too hooked on watching Ray Donovan on TV to think about picking up the laptop.

The problem with not blogging for a while is that I have a lot of pent-up desire to post things that I’ve been thinking about and doing over the last couple of weeks, but not enough time to do it, nor the willpower to type it all out.

As we all know, Azure is fairly close to my heart these days and there’s been a lot of activity in Azure across a whole host of offerings.

The biggest changes are covered in full in the blog post by Scott Guthrie over at http://weblogs.asp.net/scottgu/azure-sql-databases-api-management-media-services-websites-role-based-access-control-and-more.

Azure SQL Service Tiers

For me and my obsession with running WordPress on Azure, the biggest change here is the General Availability of the Azure SQL Database service tiers. These are the tiers which have been in preview since early this year and are due to replace the legacy tiers next year. The good news is that Microsoft appear to have made a change during the course of the year which means you don’t actually need to migrate your data; you can simply switch between the tiers, so there’s no excuse now.

Azure Websites

Another big change is to Azure Websites. Azure Websites have previously not been able to integrate with a Virtual Network, which would allow you to easily consume on-premise resources as part of a website. You could get around this to an extent using a BizTalk Hybrid Connection; however, the setup of this required agents to be deployed across the servers you wanted to connect to, which meant extra configuration and complexity. We can now reach on-premise resources via our Virtual Network, whether it be a SQL Server, a back-end application server or whatever else your website needs.

As part of the website changes, there is a new gallery template available for Websites named Scalable WordPress. This is a WordPress deployment on Azure Websites designed for Azure, which includes pre-configuration to use Azure BLOB Storage and easy configuration for Azure CDN. This new template potentially consigns all my work to hone WordPress for Azure to the waste heap. As a WordPress user and fan, I’m going to be deploying one of these sites in the next few days (maybe longer) to see how Microsoft have built the site template. My money is on them either having used plugins to achieve it in the same way I do, or having customised the code base to make it work. Either way, I’ll be interested to see.

Azure RBAC

Finally, at last, the feature that we’ve all been wanting, needing and waiting for. No more is a subscription the boundary for security and access control in Azure: with the release of Role Based Access Control (RBAC), we can now control access to resources within our Azure subscriptions. I’m really looking forward to having a poke around with this feature as I see it being one of the biggest features ever for Azure.

Azure Active Directory (AAD) Sync

In a separate article over at http://blogs.technet.com/b/ad/archive/2014/04/21/new-sync-capabilities-in-preview-password-write-back-new-aad-sync-and-multi-forest-support.aspx it was announced that the latest version of the AAD Sync tool has come out of Preview and is now in General Availability.

This new version supports Self-Service Password Reset write-back to Active Directory Domain Services (AD DS) with DirSync and Multi-Forest sync for complex domain and Exchange Server topologies.

Password write-back for organisations using AAD could be a really good thing. Just bear in mind, before you get too excited about the reduction in service desk calls you can achieve through self-service password reset, that you need to meet the prerequisites for the write-back agent, which are pretty simple, and you also need to be paying for Azure Active Directory Premium.

All in all, this has been a great month for Azure and I’m looking forward to trying to get my teeth into some of these new features.

SCOM Hyper-V Management Pack Extensions

If you’ve ever been responsible for the management or monitoring of a Hyper-V virtualization platform, you’ve no doubt wanted and needed to monitor it for performance and capacity. The go-to choice for monitoring Hyper-V is System Center Operations Manager (SCOM) and, if you are using Virtual Machine Manager (VMM) to manage your Hyper-V environment, then you could have and should have configured the PRO Tips integration between SCOM and VMM.

With all of this said, both the default SCOM Hyper-V Management Pack and the monitoring improvements that come with the VMM Management Packs and integration are still pretty lacklustre and don’t give you all the information and intelligence you would really like to have.

Luckily for us all, Codeplex comes to the rescue with the Hyper-V Management Pack Extensions. Available for SCOM 2012 and 2012 R2, the Management Pack provides the following (taken from the Codeplex project page):

New features on release 1.0.1.282
Support for Windows Server 2012 R2 Hyper-V
Hyper-V Extended Replica Monitoring and Dashboard
Minor code optimizations

Features on release 1.0.1.206
VMs Integration Services Version monitor
Hyper-V Replica Health Monitoring Dashboard and States
SMB Shares I/O latency monitor
VMs Snapshots monitoring
Management Pack Performance improvements

Included features from previous release
Hyper-V Hypervisor Logical processor monitoring
Hyper-V Hypervisor Virtual processor monitoring
Hyper-V Dynamic Memory monitoring
Hyper-V Virtual Networks monitoring
NUMA remote pages monitoring
SLAT enabled processor detection
Hyper-V VHDs monitoring
Physical and Logical Disk monitoring
Host Available Memory monitoring
Stopped and Failed VMs monitoring
Failed Live Migrations monitoring

The requirements to get the Management Pack installed are low, which makes implementation really easy. If you keep your core packs updated there is a good chance you’ve already got the three required packs installed: Windows OS 6.0.7061.0, Windows Server Hyper-V 6.2.6641.0 and Windows Server Cluster 6.0.7063.0.

The project suggests there is documentation but it seems to be absent, so what you will want to know is what the behaviour is going to be upon installation. If you have a development Management Group for SCOM then install it there first to test and verify, as you should always be doing. The Management Pack is largely disabled by default, which is ideal, but there are a couple of rules enabled by default to watch out for, so check the rules and change the default state for the two enabled rules to disabled if you desire.

As is the norm with disabled rules in SCOM, create a group which either explicitly or dynamically targets your Hyper-V hosts and override the rules for the group to enable them. The rules are broken down into Windows Server 2012 and Windows Server 2012 R2 sets so you can opt to enable one, the other or both according to the OS version you are using for your Hyper-V deployments.

If you do have the VMM integration with SCOM configured and you are using Hyper-V Dynamic Memory, you will notice very quickly after enabling all the rules in the Hyper-V Management Pack Extensions that you start receiving duplicate alerts for memory pressure. Make a decision about where you want to get your memory pressure alerts from, be it the VMM Management Pack or the Hyper-V Extensions Management Pack, and override and disable alert generation for the one you don’t want.

There is still one metric missing even from this very thorough Hyper-V Extensions Management Pack and that is the collection of the CPU Wait Time Per Dispatch performance counter, the Hyper-V equivalent of the VMware vSphere CPU Ready counter. I’ll cover this one in a later post with a custom Performance Collection Rule.

You can download the Management Pack from Codeplex at http://hypervmpe2012.codeplex.com/. I hope it serves you well; enjoy your newly found Hyper-V monitoring intelligence.

Welcome to Fordway

So, a couple of weeks ago, I said that there were some exciting times coming up for me and I figure it’s time to spill the beans, if you haven’t already seen the news through other sources like Twitter or LinkedIn.

Last week, I started my new role as a Consultant for Fordway of Godalming and it’s been a busy week and a day already. Working for Fordway is an exciting role for me because I get to combine two worlds of IT that I enjoy in one place. This is made possible by the fact that Fordway offer traditional IT consultancy services to customers as well as operating a managed services cloud environment in which customers can get IaaS (among other) services in a way which is supported under G-Cloud, the UK government framework for cloud adoption.

To me, the idea of being able to help support and deliver both in-house IT and customer IT services is really exciting and I’m looking forward to working in both of these areas and continuing to do so with products that I know and love like Hyper-V, System Center and more. Whilst I’m on the subject of employers and products, I just want to reiterate the fact that this is my own personal blog and that everything posted here reflects my own views and opinions and not those of my employers, past, present or future.

I look forward to working on new projects for new people and being able to share some of what I get up to with you all as always.