KB2992611 Winshock Update and the Broken Cipher Suites

Last week, Microsoft released an update under KB2992611 in response to security bulletin MS14-066, to address a flaw in SChannel that had been reported to Microsoft. As part of KB2992611, Microsoft not only patched the flaw in SChannel but also added four new cipher suites. The suites added were as follows:

TLS_DHE_RSA_WITH_AES_256_GCM_SHA384
TLS_DHE_RSA_WITH_AES_128_GCM_SHA256
TLS_RSA_WITH_AES_256_GCM_SHA384
TLS_RSA_WITH_AES_128_GCM_SHA256

Although it was a nice gesture to add some new cipher suites to Windows, installing KB2992611 had a knock-on effect. Google Chrome for one, and possibly more browsers depending on the version you have, does not accept these ciphers, and their addition caused browsers to fail to connect to websites and TLS sessions to be dropped. There are also other, less widely reported issues where installing KB2992611 causes SQL and ODBC-based data connections within applications to drop dramatically in performance.

To address the problem, Microsoft have re-released KB2992611 alongside KB3018238, a secondary update which changes the default state of these new ciphers to disabled. It’s important to note that disabling the new ciphers does not remove the fix for the vulnerability in SChannel which is addressed by the original hotfix. Some people are suggesting uninstalling KB2992611 to work around the issue, but doing so will reopen the SChannel vulnerability. Judging by conversations I heard about these updates today, there is much confusion about the situation. Microsoft have not pulled KB2992611 and replaced it with KB3018238; they have instead added KB3018238 as a secondary update. This is in contrast to replacing the update with a version 2 release, which is commonplace when there are issues with updates.

If you have already installed KB2992611, you will be offered KB3018238 via Windows Update. Installing KB3018238 disables the four new cipher suites by default to restore compatibility; however, you will have the option to re-enable them if you wish via the normal means for editing and selecting cipher suites, and the fix for SChannel will remain in place. If you have not yet installed KB2992611, Windows Update will advertise KB2992611 as an update for installation, but upon installation, both KB2992611 and KB3018238 will be installed and both will be listed in the View Installed Updates pane in Control Panel. In this case, you will have the new cipher suites disabled and the SChannel vulnerability patched.
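If you do want the new suites back, the normal means is the SSL Cipher Suite Order Group Policy setting (Computer Configuration > Administrative Templates > Network > SSL Configuration Settings). As a hedged sketch of the registry value that policy writes to (test it before relying on it; this appends the four new suites to an existing policy-defined order):

```powershell
# A minimal, hedged sketch: the SSL Cipher Suite Order policy is stored as a
# single comma-separated REG_SZ value named 'Functions' under this key
$key = 'HKLM:\SOFTWARE\Policies\Microsoft\Cryptography\Configuration\SSL\00010002'
$newSuites = 'TLS_DHE_RSA_WITH_AES_256_GCM_SHA384,TLS_DHE_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256'

$current = (Get-ItemProperty -Path $key -Name Functions -ErrorAction SilentlyContinue).Functions
if ($current) {
    # Append the four new suites to the existing policy-defined order
    Set-ItemProperty -Path $key -Name Functions -Value "$current,$newSuites"
} else {
    # With no existing order policy, setting only these four suites would
    # restrict SChannel to them alone, so build the complete list instead
    Write-Warning 'No cipher suite order policy set; supply the full desired suite list, not just these four.'
}
# A reboot is required before SChannel picks up the change
```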

If you are having issues with SQL Server or ODBC connection based applications, there is currently no fix for this problem, and community opinion is that the solution is to remove the previously installed KB2992611, which appears to restore order to the force. Hopefully Microsoft will address whatever the underlying issue is between SQL Server, ODBC and this fix to SChannel in a future update.
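If you do decide to go down that route, the update can be removed from an elevated prompt as sketched below, bearing in mind that this reopens the MS14-066 vulnerability until a corrected update is installed:

```powershell
# Removes KB2992611 (run elevated); note this reopens the MS14-066
# SChannel vulnerability until a corrected update is installed
wusa.exe /uninstall /kb:2992611
```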

In addition to KB3018238 to fix the issues introduced alongside the SChannel update, Microsoft yesterday released two other updates. KB3011780 has been released to address a flaw in Kerberos which affects the Key Distribution Center (KDC). This is a service which runs on Domain Controllers, so this update is considered critical. Another update, KB3000850, has been released as a November 2014 Rollup Update for Windows 8.1 and Windows Server 2012 R2. This rollup includes all previously released updates for these operating systems, including KB2992611, but it is not clear whether it includes the original release of KB2992611 or KB2992611 with the secondary KB3018238 update.

To download KB2992611 with the secondary update KB3018238, visit http://support.microsoft.com/kb/2992611. For the Kerberos update KB3011780, visit http://support.microsoft.com/kb/3011780 and lastly, for the November 2014 Rollup Update, visit http://support.microsoft.com/kb/3000850.

Friends in the 21st Century

I’m known for liking a good old moan about things and I’m also known for being a bit old-fashioned in my ways and values despite my age. I don’t normally get involved in talking about that part of me on my blog as I like to keep it technical here, but when something overlaps into technology, it’s hard not to get it out there.

When I was growing up, we had friends, and friends were people you went out with and socialised with, people you’d call on the phone to see how they were doing or how their life was going. Now, in the year 2014, what on earth has happened to the concept of friends? Did the old definition get completely rewritten and nobody told me? I checked the Oxford dictionary and the definition for the noun friend reads as follows:

A person with whom one has a bond of mutual affection, typically one exclusive of sexual or family relations

The synonyms show you that a friend is someone close to you, with words like intimate, confidante, soul mate and brother or sister given, and Oxford tells us that the word friend is Germanic in origin, from a root meaning ‘to love’.

Just this weekend, I met someone at a party and spent no more than fifteen minutes of total time talking to said individual. I didn’t dislike him at all, so there’s no problem there, but does meeting a stranger at a party and spending a net fifteen minutes with them really constitute a friendship these days, and how does that affect the things we should be holding closest and dearest to us?

On Facebook right now, I have 60 friends. All of these people are either family or friends who I am in some way interested in hearing from, or whose posts I actually care to read (although I do wish sometimes that I could unfriend some people for the amount of ‘share this’ and ‘look at this’ rubbish they post).

I did a straw poll on Twitter earlier today and granted, my follower base isn’t particularly large and those who do follow me are going to be biased towards me in a like-minded sense, but both of the people who responded said the same thing: they only friend people on Facebook who they actually know. So why are a lot of people out there so willing to throw friend invitations on Facebook around like sweets and confetti? Surely a friendship on Facebook should be something reserved for the people you actually hold in that esteem? Not only does having a mammoth collection of friends clutter your News Feed with information and status updates that you are largely going to ignore and not care about, but you are also exposing yourself to people you don’t really know. Not that I am trying to victimise her in this post, but my wife currently has 320 friends on Facebook, and whilst she definitely has a wider circle of friends and interacts with more people than me, is it really five times greater than mine, or is she collecting friends for the sake of it (bearing in mind that she accepted the friend request from the same person I received an invitation from at the weekend)?

Facebook Contact Privacy Settings

I took a couple of screenshots of my Contact Info page from my Facebook profile earlier today and overlaid them on top of each other so that I can show the whole scene in one picture. As you can see from the picture, my contact information is shared with friends, and this includes my mobile and home phone numbers, my home address and, although not shown (as it’s further down the page beyond the fold), my email address.

I know that the more astute amongst you will say that you can customise this and change who can see your information, but that brings its own questions. Firstly, who actually thinks about what that person will be able to see before accepting the friend request? For most people, the decision to accept or decline has probably become a reflex action. Secondly, what are the privacy options if you wanted to limit that person’s access to your information? I took a look at the privacy options for my phone number and the choices are Friends of Friends, Friends, Only Me or Specific People.

Friends of Friends is just utter lunacy. Why would I want to share my phone number with the friends of my friends when I have no control over who they friend in turn? Friends is a logical option, and Only Me defeats the purpose of adding the information to your profile in the first place. Specific People is the ideal option if you are a bit of a friend collector or very privacy conscious, but who is really going to remember, after accepting that friend request, to go and edit the list of people who are allowed or denied to see your information? What’s more, I highly suspect that this isn’t a setting you can edit from the mobile applications, which makes it hard to administer too.

Contact information and information about where you live, your email address and other personal data nuggets are important pieces of personal information, Personally Identifiable Information (PII) as the world has come to know it, and this information should be protected at all costs, not made available to somebody at the acceptance of a friend request. If the Facebook account of somebody in your friends list was hacked, your information could become part of the next wave of phishing scams or nuisance telephone calls.

Aside from the PII though, there is the day-to-day aspect: do you actually want to see what said individual posts status messages about, or what they liked and shared? The answer is most likely not, especially if you are already dealing with a high volume of News Feed clutter. This side of the issue is more personal and the response will vary from person to person according to how much of their lives they want to publicise, but do I want people I only know in the loosest of senses to know what I am doing, and do I want my status updates appearing in their News Feed? If I post a message that I’m having a great day out with my kids because I want to share the fact that I’m having a great time, enjoying a day with my family, how do I know that the person I only met for fifteen minutes isn’t a professional crook who, now armed with the knowledge that I am out for the day with my kids along with my home address, is going to come and burgle my house for all my prized, hard-earned possessions? The blunt answer is that you don’t know these things, because you probably don’t know enough about the people you friend on Facebook to make that judgement call.

For all my rambling in this post, the crux of the issue for me is that the definition of friends seems to have negatively evolved as social media has made people far more accessible to one another. I think that this is a good thing in many respects as it allows us to connect with the people we care most about in ways that we couldn’t have done previously, and people in this category are truly the real friends in life. On the other side though, I also think that there is a high degree of over-sharing going on: people are making their lives too publicly accessible for the consumption of those they barely know at all, and they aren’t considering the implications of clicking that little blue accept button before they do it. Not only does this mean that each time you look at Facebook you have to wade through an endless scrolling page of tripe to reach the good stuff, consequently wasting your own time, you are also exposing yourself and your information to people. If I wouldn’t give somebody I met at a party my phone number, why would I connect with them as a friend on Facebook, when the two are tantamount to the same thing?

Two Weeks of Dell, VMware and TechEd

It’s been a while since I’ve worked with VMware in any serious way, but for the last two weeks I’ve been working with a customer to deploy vSphere 5.5 on a new Dell VRTX chassis. I’ve seen the Dell VRTX on display at VMUG conferences gone by and it sure is an interesting proposition, but this is the first time I’ve had a chance to work with it in the real world.

All in all, the Dell VRTX is a really nice system; everything seems to be well planned and thought out. The web interface for managing the chassis works, but it is slow at times to open pages and refresh information; bearable. The remote KVM console to the blades themselves is Java based, so results may vary as to whether it works or not; I really dislike Java-based systems and wish more vendors would start to use HTML5 for their interfaces. There is an apparent lack of information on the Dell website about the VRTX system: there is a wealth of configuration guides and best practice documents for the VRTX, but all of these seem to be pitched at such a high level that they lack actual technical detail. Another issue is that the Dell parts catalogue doesn’t really acknowledge the existence of the VRTX system; I was talking to someone about extending the system with some Fibre Channel HBAs for FC storage connectivity, but of all the FC HBAs for sale on the Dell website, only a single-port 4Gbps HBA is listed as supported, which I can’t believe for one minute given that the PCIe slots in the VRTX are, well, PCIe slots. Disk performance on the Shared PERC controller is pretty impressive, but networking needs to be treated with caution. If you are using the PowerEdge M620 half-height blade, it only exposes two 1GbE Ethernet interfaces to the internal switch plane on the chassis, whereas the full-height PowerEdge M520 blade exposes four 1GbE Ethernet interfaces. I would have really liked to have seen all four interfaces on the half-height blade, especially when building virtualization solutions with VMware vSphere or Microsoft Windows Server Hyper-V.

I haven’t really worked with VMware much since vSphere 5.0, and working with vSphere 5.5, not an awful lot has changed. After talking with the customer in question, we opted to deploy the vCenter Server Appliance (vCSA). The vCSA in previous releases of vSphere was a bit lacklustre in its configuration maximums, but in 5.5 this has been addressed and it can now be used as a serious alternative to a Windows Server running vCenter. The OVA virtual appliance is 1.8GB on disk, deploys really quickly, and the setup is fast and simple. vSphere Update Manager (VUM) isn’t supported under Linux or on the vCSA, so you do still need to run a Windows Server for VUM, but as not everyone opts to deploy VUM, that’s not a big deal really. What I would say about the vCSA, though, is that if you plan to use local authentication rather than the VMware SSO service with Active Directory integration, I would still consider the Windows Server. The reason is that with the vCSA, you cannot provision and manage new users and passwords via the vSphere Web Client; instead, you have to SSH onto the appliance and manage the users from the CLI. With a Windows Server, we can obviously do this with the Users and Groups MMC console, which is much easier if you are of the Microsoft persuasion. If you are using the VMware SSO service with Active Directory integration, this will not be a problem for you.

Keeping it on the VMware train, I’m looking forward to a day out at the VMware UK User Group Conference (VMUG) in Coventry in two weeks. I’ve been for the last three years and have had a really good and informative day every time.

Being so busy on the customer project with my head buried in VMware, I’ve been really slow on the uptake of TechEd Europe news, which bothers me. But fear not: thanks to Channel 9, I’ve got a nice list of sessions to watch and enjoy from the comfort of my sofa, although with there being so many sessions I’m interested in, it’s going to take me a fair old chunk of time to plough through them.

Thoughts on Windows Server 2003 End of Life

A post by me has just been published over on the Fordway blog at http://www.fordway.com/blog-fordway/windows-server-2003-end-of-life/.

This was written in parallel to my earlier post Windows Server 2003 End of Life Spreadsheet, which reproduced the spreadsheet for documenting your Windows Server 2003 environment originally posted by Microsoft. In this new post on the Fordway blog, I talk about some of the areas where we need to focus our attention and offer up some food for thought. If you have any questions, please feel free to get in touch either with myself or someone at Fordway who will be happy to help you.

Monitoring SQL Server Agent Jobs with SCOM Guide

Late last night, I published a TechNet Guide that I have been working on recently entitled “Monitoring SQL Server Agent Jobs with SCOM”. Here’s the introduction from the document.

All good database administrators (DBAs) create jobs, plans and tasks to keep their SQL servers in tip-top shape, but a lot of the time, insight into the status of these jobs is either left unturned like an age-old stone, or is gained by configuring SQL Database Mail on your SQL servers so that email alerts are generated, which means additional configuration on every server and yet another thing to manage.

In this guide, I am going to walk you through configuring a System Center Operations Manager 2012 R2 environment to extend the monitoring of your SQL Servers to include the health state of your SQL Server Agent Jobs, allowing you to keep an eye on not just the SQL Server platform but also the jobs that run to keep the platform healthy.

You can download the guide from the TechNet Gallery at https://gallery.technet.microsoft.com/SQL-Server-Agent-Jobs-with-f2b7d5ce. Please rate the guide to let me know whether you liked it or not using the star system on TechNet. I welcome your feedback in the Q&A.

Windows Server 2003 End of Life Plan Spreadsheet

Last week, the folks over at Microsoft published another entry in their blog post series Best Practices for Windows Server 2003 End-of-Support Migration (http://blogs.technet.com/b/server-cloud/archive/2014/10/09/best-practices-for-windows-server-2003-end-of-support-migration-part-4.aspx?wc.mt_id=Social_WinServer_General_TTD&WT.mc_id=Social_TW_OutgoingPromotion_20141009_97469473_windowsserver&linkId=9944146), which included a visually appealing spreadsheet template to help you keep track of and plan your Windows Server 2003 migrations. To my shock, though, they didn’t provide the actual Excel file for that design (shame on them).

I’ve copied the design and made it into an Excel spreadsheet, which I’ve set up with Conditional Formatting in the relevant cells so that when you add your numeric values and X’s it automatically colours the cells, helping you keep it as pretty as intended. After all, we need a bit of colour and happiness to help us with Windows Server 2003 migrations, right?

Click the screenshot of the Excel file below for the download. As a note, make sure you use the Excel desktop application and not the Excel web app to view or use this file, as the web app appears to break some of the formatting and layout.

Server 2003 Migration Spreadsheet

UPDATE: If you want to read more about Windows Server 2003 End of Life, a post by me has been published on the Fordway blog at http://www.fordway.com/blog-fordway/windows-server-2003-end-of-life/.

Explaining NUMA Spanning in Hyper-V

When we work in virtualized worlds with Microsoft Hyper-V, there are many things we have to worry about when it comes to processors. Most of these things come with acronyms which people don’t really understand but know they need, and one of these is NUMA Spanning, which I’m going to try and explain here, conveying why we want to avoid NUMA Spanning where possible. I’m going to do it all in fairly simple terms to keep the topic light; in reality, NUMA architectures may be more complex than this.

NUMA, or Non-Uniform Memory Access, is an architecture implemented in processors and chipsets by both Intel and AMD: Intel implemented it with the QuickPath Interconnect (QPI) in 2007 and AMD implemented it with HyperTransport in 2003. NUMA uses a construct of nodes in its architecture. As the name suggests, NUMA refers to system memory (RAM) and how we use memory, and more specifically, how we determine which memory in the system to use.

Single NUMA Node


In the simplest system, you have a single NUMA node. A single NUMA node is achieved either in a system with a single processor socket or by using a motherboard and processor combination which does not support the concept of NUMA. With a single NUMA node, all memory is treated as equal, and a VM running on a hypervisor on such a system would use any memory available to it without preference.

Multiple NUMA Nodes

Two NUMA Nodes

In a typical system that we see today, with multiple processor sockets and a processor and motherboard configuration that supports NUMA, we have multiple NUMA nodes. NUMA nodes are determined by the arrangement of memory DIMMs in relation to the processor sockets on the motherboard. Take a hugely oversimplified sample system with two CPU sockets, each loaded with a single-core processor and six DIMM slots per socket, each slot populated with an 8GB DIMM (12 DIMMs total). In this configuration we have two NUMA nodes, and in each NUMA node we have one CPU socket and its directly connected 48GB of memory.

The reason for this relates to the memory controller within the processor and the interconnect paths on the motherboard. The Intel Xeon processor, for example, has an integrated memory controller. This memory controller is responsible for the address and resource management of the six DIMMs attached to the six DIMM slots on the motherboard linked to this processor socket. For this processor to access this memory, it takes the quickest possible path, directly between the processor and the memory; this is referred to as Uniform Memory Access.

For this processor to access memory in a DIMM slot linked to the second processor socket, it has to cross the interconnect on the motherboard and go via the memory controller on the second CPU. All of this takes mere nanoseconds to perform, but it is additional latency that we want to avoid in order to achieve maximum system performance. We also need to remember that if we have a good virtual machine consolidation ratio on our physical host, this may be happening for multiple VMs all over the place, and that adds up to lots of nanoseconds all of the time. This is NUMA Spanning at work: the processor is breaking out of its own NUMA node to access non-uniform memory in another NUMA node.
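If you want to see what your own Hyper-V host’s NUMA topology looks like, the Hyper-V PowerShell module can show you. A minimal sketch (Windows Server 2012 onwards; the exact property names may vary slightly between versions):

```powershell
# List each NUMA node on the host along with its processors and memory
Get-VMHostNumaNode | Format-Table NodeId, ProcessorsAvailability, MemoryAvailable, MemoryTotal -AutoSize
```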

Considerations for NUMA Spanning and VM Sizing

NUMA Spanning has a bearing on how we should size the VMs that we deploy to our Hyper-V hosts. In my sample server configuration above, I have 48GB of memory per NUMA node. To minimize the chances of VMs spanning these NUMA nodes, we need to deploy our VMs with sizing considerations linked to this. If I deployed 23 VMs with 4GB of memory each, that equals 92GB. This would mean the 48GB of memory in the first NUMA node could be totally allocated for VM workloads and 44GB of memory allocated to VMs in the second NUMA node, leaving 4GB of memory for the parent partition of Hyper-V to operate in. None of these VMs would span NUMA nodes, because 48GB divided by 4GB is 12, which means 12 entire VMs can fit per NUMA node.

If I deployed 20 VMs, but this time with 4.5GB of memory each, this would require 90GB of memory for virtual workloads and leave 6GB for hosting the parent partition of Hyper-V. The problem here is that 4.5GB doesn’t divide evenly into 48GB; we have leftovers and uneven numbers. Ten of our VMs would fit entirely into the first NUMA node and nine of our VMs would fit entirely within the second NUMA node, but our twentieth VM would be in no man’s land and would be left with half its memory in each of the two NUMA nodes.

In good design practice, we should try to size our VMs to match our NUMA architecture. Taking my sample server configuration of 48GB per NUMA node, we should use VMs with memory sizes of 2GB, 4GB, 6GB, 8GB, 12GB, 24GB or 48GB. Anything other than this runs a real risk of being NUMA spanned.
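To make the arithmetic concrete, here is a small sketch that checks whether a proposed VM memory size packs evenly into a NUMA node; the 48GB node size and the candidate sizes are taken from the sample system above:

```powershell
# Check which proposed VM memory sizes pack evenly into a 48GB NUMA node
$nodeMemoryGB = 48
$vmSizesGB = 2, 4, 4.5, 6, 8, 12, 24, 48
foreach ($size in $vmSizesGB) {
    if (($nodeMemoryGB % $size) -eq 0) {
        "{0,4} GB VM packs evenly into a node ({1} per node)" -f $size, ($nodeMemoryGB / $size)
    } else {
        "{0,4} GB VM risks spanning NUMA nodes" -f $size
    }
}
```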

Considerations for Disabling NUMA Spanning

So now that we understand what NUMA Spanning is and the potential decrease in performance it can cause, we need to look at it through a virtualization lens, as this is where its effects really add up. The hypervisor understands the NUMA architecture of the host through detection of the hardware within. When a VM starts and the hypervisor attempts to allocate memory for it, it will always first try to get memory within the NUMA node of the processor being used for the virtual workload, but sometimes that may not be possible because other workloads are occupying that memory.

For the most part, leaving NUMA Spanning enabled is totally fine, but if you are really trying to squeeze performance from a system, a virtual SQL Server perhaps, NUMA Spanning is something we would like to have turned off. NUMA Spanning is enabled by default in both VMware and Hyper-V at the host level, but we can override this configuration both per hypervisor host and per VM.

I am not for one minute going to recommend that you disable NUMA Spanning at the host level, as this might impact your ability to run your workloads: if NUMA Spanning is disabled for the host and the host is not able to accommodate the memory demand of a VM within a single NUMA node, the power-on request for the VM will fail and you will be unable to turn on the machine. However, if you have some VMs with NUMA Spanning disabled and others with it enabled, you can have your host work like a memory-based jigsaw puzzle, fitting things in where it can.
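For completeness, this is what the host-level switch looks like in the Hyper-V PowerShell module; a sketch only, given the warning above:

```powershell
# Host-level switch; while spanning is disabled, VMs that cannot fit within
# a single NUMA node will fail to power on, so use with care
Set-VMHost -NumaSpanningEnabled $false

# Check the current state
(Get-VMHost).NumaSpanningEnabled
```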

Having SQL Servers and other performance-sensitive VMs running with NUMA Spanning disabled would be advantageous to their performance, while leaving NUMA Spanning enabled on VMs which are not performance sensitive allows them to use whatever memory is available and cross NUMA nodes as required, giving you the best combination of maximum performance for your intensive workloads and the resources required to run those that are not.

Using VMM Hardware Profiles to Manage NUMA Spanning

VMM Hardware Profile NUMA Spanning

So, assuming we have a Hyper-V environment that is managed by Virtual Machine Manager (VMM), we can make this really easy to manage without having to bother our users or systems administrators with understanding NUMA Spanning. When we deploy VMs, we can base them on Hardware Profiles. A VMM Hardware Profile has the NUMA Spanning option available to us, so we would simply create multiple Hardware Profiles for our workload types: some for general purpose servers with NUMA Spanning enabled, and others configured specifically for performance-sensitive workloads with the NUMA Spanning setting disabled in the profile.
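As a hedged sketch of what that might look like from the VMM PowerShell module (the profile names and sizes here are purely illustrative, and -NumaIsolationRequired is, as I understand it, the parameter behind the NUMA Spanning checkbox in the hardware profile):

```powershell
# Illustrative profile names; -NumaIsolationRequired $true keeps VMs deployed
# from the profile within a single NUMA node (no spanning)
Import-Module virtualmachinemanager

# General purpose workloads: allow spanning for maximum packing flexibility
New-SCHardwareProfile -Name 'HWP-GeneralPurpose' -CPUCount 2 -MemoryMB 4096 `
    -NumaIsolationRequired $false

# Performance-sensitive workloads such as SQL Server: no spanning
New-SCHardwareProfile -Name 'HWP-PerfSensitive' -CPUCount 4 -MemoryMB 8192 `
    -NumaIsolationRequired $true
```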

The key thing to remember here is that if you have VMs already deployed in your environment, you will need to update their configuration, as sketched below. Hardware Profiles in VMM are not linked to the VMs we deploy, so once a VM is deployed, any changes to the Hardware Profile it was deployed from do not filter down to the VM. The other thing to note is that the NUMA Spanning configuration is only applied at VM startup and during Live or Quick Migration, so if you want your VMs to pick up a changed NUMA Spanning setting, you will need to either stop and start the VM or migrate it to another host in your Hyper-V Failover Cluster.
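Again as a hedged sketch, assuming a deployed VM named SQL01 (the VM name and the parameter on Set-SCVirtualMachine are assumptions for illustration), updating an existing VM might look like this:

```powershell
# 'SQL01' is an assumed VM name; the change only takes effect at the next
# VM startup or Live/Quick Migration, as noted above
Get-SCVirtualMachine -Name 'SQL01' | Set-SCVirtualMachine -NumaIsolationRequired $true
```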

Gartner Magic Quadrant Unified Communications

Well, here’s one you wouldn’t have expected to see. Gartner have placed Microsoft, with Lync, ahead of Cisco in their Unified Communications Magic Quadrant.

Gartner have put Cisco and Microsoft level for Ability to Execute; however, Microsoft have been placed ahead for Completeness of Vision. You can read the full article at http://www.gartner.com/technology/reprints.do?id=1-1YWQWK0&ct=140806&st=sb. Well done, Microsoft. Now, if work can be done to address the cautions that Gartner have identified, the position will be even stronger.
