Posts from April 2015

Update Rollup 6 for System Center Service Manager

On the 28th April 2015, Microsoft are going to make available Update Rollup 6 for System Center Service Manager 2012 R2. Microsoft have provided the details of the update in a blog post at http://blogs.technet.com/b/servicemanager/archive/2015/04/22/it-39-s-time-for-ur6.aspx.

This update heavily focuses on performance enhancements, including improvements to the AD and SCCM connectors as well as to the MPSyncJob, one of the many Data Warehouse jobs which, in my experience, causes no end of problems. For non-US customers, this update also includes the previously released hotfix to address SQL nvarchar errors that I blogged about at http://richardjgreen.net/nvarchar-data-type-error-with-scsm-2012-r2-update-rollup-5/.

It seems from the post that the Service Manager team are also starting to put a lot of focus on addressing the speed and performance problems that people experience once Service Manager is actually loaded up with data, connectors and ITIL related incidents, requests and changes. I’m looking forward to seeing what comes out of this team over the coming updates and how they can improve the usability of Service Manager, as it’s a key piece in the System Center puzzle that does indeed need a bit of work in that area.

Managing the Skype for Business User Experience

Yesterday, Microsoft rolled out the April 2015 update for Lync 2013 which replaces Lync 2013 with the Skype for Business user experience. I tried out Skype for Business with the Office 2016 Technical Preview a few weeks ago and although it’s early doors, I’m liking the coming together of the two product families thus far.

In this post, I am going to cover off the prerequisites for client and server as well as the configuration settings for managing the end-user experience because, already, there seems to be a wave of confusion online about it.

Client Prerequisites

In order for your clients to receive the new Skype for Business user experience, there are some prerequisites that apply. Firstly, you must be running Office 2013 with Service Pack 1 (KB2817430). If you don’t have Service Pack 1, you can download it from here for 32-bit and here for 64-bit installations.

With Service Pack 1 applied, you must then install the March 2014 Update for Lync 2013 (KB2863908), which you can obtain from here for 32-bit and here for 64-bit installations. There are many post-SP1 updates for Office 2013 which apply not only to Lync but to the whole suite, so I would recommend updating the other products too; for the purposes of this post, though, this is the update that is critical.

With both Office 2013 Service Pack 1 and the March 2014 update for Lync in place, you are ready to install the Skype for Business update. This is the April 2015 Update for Skype for Business (KB2889853) and you can download the 32-bit version here or the 64-bit version from here.
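If you want to confirm which build a client is actually running before and after patching, you can read the file version of the Lync executable with PowerShell. This is a minimal sketch; the path assumes a default 32-bit Office 2013 (Office15) installation on 64-bit Windows, so adjust it for your environment and compare the output against the version listed in the relevant KB article.

# Read the file version of the Lync 2013 / Skype for Business client
# Path assumes a default 32-bit Office 2013 (Office15) install on 64-bit Windows - adjust as required
$lyncPath = Join-Path ${env:ProgramFiles(x86)} 'Microsoft Office\Office15\lync.exe'
(Get-Item $lyncPath).VersionInfo.FileVersion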

Update for Skype for Business

Once you have installed Skype for Business from KB2889853 above, you will want to get another update, KB2889923, a post-April 2015 update for Skype for Business which addresses known issues with the original release. Hard to believe that such an update already exists, but it sure does. You can download KB2889923 for 32-bit here and for 64-bit here. Don’t be alarmed that the download page for this update still reports Lync 2013 as the affected product; this is a known thing.

Client Experience

Once you have the updates above installed, you will be running Skype for Business; however, for many users, you will be prompted at first login that your administrator doesn’t want you to run this version of Skype for Business and that you need to revert to Lync.

Restart Skype for Business Dialog

This is caused by server-side settings, and whether your environment is on-premise Lync Server or Office 365 will affect how you resolve it. If you want to control this behaviour manually for testing purposes, you can edit the registry value which governs the client experience at HKCU\SOFTWARE\Microsoft\Office\Lync, setting the EnableSkypeUI binary value accordingly. 00 00 00 00 denotes that the classic Lync user interface is used and 00 00 00 01 denotes that the Skype for Business UI is used.

EnableSkypeUI Registry
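If you prefer to flip the value from PowerShell rather than Registry Editor, something like the following will do it for the current user. This is a minimal sketch and assumes the EnableSkypeUI value already exists as a binary value under the Lync key; sign out of the client and back in for the change to take effect.

# Switch the current user to the Skype for Business UI (00 00 00 01)
Set-ItemProperty -Path 'HKCU:\SOFTWARE\Microsoft\Office\Lync' -Name 'EnableSkypeUI' -Value ([byte[]](0x00,0x00,0x00,0x01))

# Switch back to the classic Lync UI (00 00 00 00)
Set-ItemProperty -Path 'HKCU:\SOFTWARE\Microsoft\Office\Lync' -Name 'EnableSkypeUI' -Value ([byte[]](0x00,0x00,0x00,0x00))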

Managing Office 365 Client Experience

If you are using Office 365 then one of the benefits of the service is that Microsoft keep your platform up to date for you so you can go right ahead and configure the server-side policy.

In order to connect to Lync Online via PowerShell, you need to have the Microsoft Online Services Sign-In Assistant installed, which you can obtain from http://www.microsoft.com/en-us/download/details.aspx?id=28177, and you will need the updated version of the Lync Online Connector Module in order to access the Skype for Business parameters. You can download the Lync Online Connector Module from http://www.microsoft.com/en-gb/download/details.aspx?id=39366. If you have managed your Lync Online tenant from PowerShell before, you will already have the Sign-In Assistant so just grab the updated Lync module.

With the two installed, you can download the SwitchSkypeUI.zip file from Microsoft at http://www.microsoft.com/en-us/download/details.aspx?id=46404. This .zip file includes three PowerShell scripts.
DisableSkypeUIGlobal.ps1 will disable the Skype for Business UI for all of your users and force them to use the Lync UI.
EnableSkypeUIGlobal.ps1 will enable the Skype for Business UI for all users; anyone who has the relevant updates installed will be forced to use the Skype UI.
EnableSkypeUIForUsers.ps1 will enable the Skype UI for a specific set of users. The script accepts pipeline input to the $users variable for your users.

If you run any of these scripts, you will be prompted to enter your Office 365 Global Administrator credentials to perform the operation. If you run the selective users script, you will need to provide the users in UPN format such as lyncuser@richardjgreen.net, as in the sketch below.
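As an illustration of the selective script, here is a rough sketch of feeding a list of UPNs to EnableSkypeUIForUsers.ps1. The exact parameter handling is down to the Microsoft script itself, so treat the invocation as an assumption and check the script before using it; the UPNs are examples only.

# Load the Lync Online Connector Module (requires the Sign-In Assistant)
Import-Module LyncOnlineConnector

# The script prompts for Office 365 Global Administrator credentials itself;
# here we simply pipe in the users we want switched to the Skype for Business UI
$users = 'lyncuser@richardjgreen.net', 'anotheruser@richardjgreen.net'
$users | .\EnableSkypeUIForUsers.ps1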

Managing Lync Server On-Premise Client Experience

If you are using Lync Server in an on-premise or hosted environment then the work may potentially be a little more involved. In order to access the Skype for Business parameters in the Lync PowerShell Module, you must be running at least the December 2014 Cumulative Update for Lync Server 2013. You can obtain this update from https://support.microsoft.com/en-us/kb/3018162/ and this update carries a version number of 5.0.8308.857 if you want to check your current versions.

If you don’t have this update installed then you are going to first need to plan the deployment of it throughout your Lync topology. If you are in a hosted environment, check with your service provider whether the update has been applied.

With the update applied, a new parameter is exposed on the CsClientPolicy Cmdlets in PowerShell to configure the Skype for Business user experience.

Either from a Lync Server or from a client with the Lync PowerShell Module installed, you can use the following commands to configure the client experience.

To disable the Skype for Business experience for all users, enter the Cmdlet Set-CsClientPolicy -Identity Global -EnableSkypeUI $False. If you want to enable the experience for everyone then you can use the Cmdlet Set-CsClientPolicy -Identity Global -EnableSkypeUI $True.

If you want to configure the experience to be enabled only for a subset of users such as a test group then you can apply the parameter to a specific Client Policy such as Set-CsClientPolicy -Identity CustomPolicyName -EnableSkypeUI $True.
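To take the test-group idea a step further, here is a minimal sketch of creating a dedicated pilot policy and granting it to a user. The policy name SkypeUIPilot and the user identity are just examples of my own, not anything prescribed by Microsoft.

# Create a pilot client policy with the Skype for Business UI enabled
New-CsClientPolicy -Identity SkypeUIPilot -EnableSkypeUI $True

# Grant the pilot policy to a test user (repeat or pipe users in as required)
Grant-CsClientPolicy -Identity "lyncuser@richardjgreen.net" -PolicyName SkypeUIPilot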

Changing SQL Server Instance Collation

Working in my home lab over the last couple of evenings, I have been installing some additional SQL Server instances ready for me to install System Center Service Manager. As anyone who has worked with System Center 2012 or 2012 R2 knows, getting your SQL instance collation right is critical. To compound matters, when you think you’ve got an instance set up right, you could end up finding that although one product has the correct collation, another does not.

In my case with Service Manager, making sure you use the correct collation not only affects Service Manager but also potentially your ability to integrate it with other parts of the suite such as Operations Manager. There is a really helpful blog post at http://blogs.technet.com/b/servicemanager/archive/2012/05/24/clarification-on-sql-server-collation-requirements-for-system-center-2012.aspx which not only talks through the SQL collations for System Center but additionally offers up a table of interoperable collations.

Needless to say, I got the collation wrong when installing Service Manager in my lab and I really didn’t want to have to go to the trouble of uninstalling and re-installing it. Not only is driving a SQL installation time consuming but, because I have two instances, one for the Management database and another for the Data Warehouse database, I would have had to do it twice.

Luckily for me, I found that it is possible to change the collation of a SQL Server instance after installation. I want to point out that although this is possible to do, I’m not sure I would recommend it in a production environment and I would definitely tell you to back up anything relating to that instance first. Doing this not only drops any user databases but, because it causes the master database to be rebuilt, it will also lose any customisations or setting changes you have made to that instance since install.

With everything backed up and ready, use the following command to change the collation of the instance.

Z:\setup.exe /QUIET /ACTION=REBUILDDATABASE /INSTANCENAME=SCSM /SQLSYSADMINACCOUNTS=RJGLAB\Administrator /SQLCOLLATION=Latin1_General_CI_AS

To break the command down, Z:\setup.exe is the path to the SQL Server setup executable on my server. The INSTANCENAME parameter is where you specify the instance whose collation you want to modify. SQLSYSADMINACCOUNTS is where you specify who will be made a sysadmin on the instance after the rebuild (remember, our master database is going to be reset) and SQLCOLLATION is where you specify the new collation to use.

If your instance is running in Mixed Mode Authentication, you can also provide the SAPWD parameter to specify the password that will be used for the sa account; however, my instance is in Windows Authentication mode so I don’t need to set or use the sa account.
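Once the rebuild has finished, it is worth confirming that the instance is now reporting the collation you asked for. Below is a quick sketch using Invoke-Sqlcmd; it assumes the SQLPS module is available on the server and uses my SCSM instance name, so swap in your own.

# Query the instance collation after the rebuild (instance name is an example)
Import-Module SQLPS -DisableNameChecking
Invoke-Sqlcmd -ServerInstance ".\SCSM" -Query "SELECT SERVERPROPERTY('Collation') AS ServerCollation"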

jQuery and WordPress No Conflict

Last night, I was doing some work on my blog. Ever since I wrote the custom theme I use (and since updated it), I have neglected my mobile visitors; the mobile views not only looked awful but in some cases, depending on the device you were using, made the content totally invisible due to the black wallpaper background and the black body text appearing on top of each other.

I decided I wanted to add a fixed top header that fades away as you scroll down the page and reappears as you scroll back up. I’ve seen this on a number of sites before and the effect is both aesthetically pleasing and maximises the real estate on devices with small screens such as smartphones. I found a site with a good reference on how to implement this using jQuery and I added the script to the site along with the relevant CSS selectors, but it wasn’t working.

Using the Developer Tools in Internet Explorer, I could see that my script file containing the jQuery was generating an error on line 1 with the error $ is not defined. Being that I’m just about good enough to write and tweak the jQuery for my needs, I had to resort to searching online to find the solution, so I thought I would post it here in the hope that it helps some other WordPress administrator out there struggling with the same issue.

The problem arises not because of a problem with jQuery but a nuance in how WordPress implements it. With a normal site under normal conditions, you load the jQuery library to allow you to invoke jQuery on the site and, in doing so, jQuery assigns itself the $ symbol as a shorthand variable for invoking jQuery. jQuery includes an optional mode called noConflict which helps to prevent conflicts with other libraries and extensions that may also use the dollar symbol. WordPress loads jQuery in no conflict mode and, as a result, the dollar symbol is not available; instead, we have to invoke jQuery using the jQuery named variable.

When writing jQuery scripts for use on a WordPress site, we need to replace all instances of the dollar symbol with the jQuery notation. Below is an example from the script I first wrote for the disappearing menu bar, referencing jQuery using the dollar symbol, which causes the $ is not defined error.

$(window).scroll(function(event){
    didScroll = true;
});

To make the script function properly on the WordPress site, I had to modify this section of code to replace $ with jQuery as shown below.

jQuery(window).scroll(function(event){
    didScroll = true;
});

After changing this and all the remaining references in the script and saving it back to the site, the Developer Tools ceased to report the error and the script started to function as expected.

One Year Left for SQL Server 2005 Support

Many enterprises are still dealing with the challenges of completing Windows XP and Windows Server 2003 migrations. Whether you are moving to Windows 7 or Windows 8.1, or perhaps running the gauntlet on Windows XP and hedging your bets for Windows 10 later this year on your clients, all the while evaluating and testing your line of business applications and servers on Windows Server 2012 R2, there is a lot to deal with.

There’s nothing like a little added pressure to throw into the mix and that is why, as of 12th April 2015, there is one year left on the extended support status of SQL Server 2005. This notice affects all editions of SQL Server 2005, both 32-bit and 64-bit versions, remembering of course that later versions of the Microsoft database engine are 64-bit only.

With databases and their associated servers being critical to the underpinning of your applications, making the right choices to move these databases is a big decision. If, for example, your current database server is a 32-bit server then not only will you have to move to a 64-bit version of SQL Server but also a 64-bit operating system, and that may mean new hardware if the server only has a 32-bit processor to work with. There is also the question of virtualization; back in 2005, many people wouldn’t have dreamed of virtualizing a database server but today it’s commonly done. We even have Database as a Service solutions available in the public and private cloud such as SQL Database in Microsoft Azure.

Once you’ve decided on a target architecture platform, the talk may move on to questions such as storage types, SSDs and flash cache devices such as the Fusion-io ioDrive, as things have certainly moved on in storage since your SQL Server 2005 system was first deployed. Once you’ve had those conversations, you can think about high availability options such as failover clustering, mirroring or AlwaysOn High Availability, the latter being new to existing SQL Server 2005 users and offering a fantastic high availability solution.

I think the SQL Server 2005 issue is going to be quite a widespread one as, in my travels to customer sites and on projects, I still see a lot of SQL Server 2005 in the field running production systems. Some of these systems may themselves no longer be within support, so contacting the vendors for information about support for later versions of SQL may make for interesting work. If the vendor has ceased trading then finding out whether the application will support SQL Server 2012 or SQL Server 2014 will be down to you and a test environment.

Mail, Calendar and People Apps in Windows 10 Build 10049

In previous builds of Windows 10, there was a known issue with the default Mail, Calendar and People apps which caused them to become corrupted; you had to use PowerShell to resolve the issue by removing the old app instances and re-installing them from the Windows 8.1 Store. My PC downloaded Build 10049 overnight and this build seems to have the same issue, however the catch appears to be that the old instructions don’t work off the bat and you have to repeat the process, as suggested on the thread at http://blogs.windows.com/bloggingwindows/2015/03/30/windows-10-technical-preview-build-10049-now-available/.

First, open a PowerShell prompt with the Run As Administrator option. Once launched, enter the following code.

Get-AppxProvisionedPackage -Online | Where-Object {$_.PackageName -Like "*windowscommunicationsapps*"} | Remove-AppxProvisionedPackage -Online

Once you have completed this, restart the PC. With the restart complete, open the administrative PowerShell prompt and re-enter the same command. I had to do this twice in the end, so just hit the up arrow to re-use the command and press Enter to run it again.
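If you want to check that the removal has actually taken before heading to the Store, the following sketch lists any remaining traces of the package both in the image and for the current user. The package name filter is based on my understanding of the communications apps bundle identity, so verify it against the output of Get-AppxPackage on your own build.

# Check whether the provisioned (image-level) package is still present
Get-AppxProvisionedPackage -Online | Where-Object {$_.PackageName -Like "*windowscommunicationsapps*"}

# Check whether the package is still installed for the current user
Get-AppxPackage -Name "*windowscommunicationsapps*"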

Once you have done this, open the Windows 8.1 Store using the green tiled Store app, not the Windows 10 Store with the grey tiled Store (Beta) app. In the store, search for Mail and install the Mail, Calendar and People app collection.

If the installation fails, try restarting and re-running the PowerShell command above as it will work eventually.

Once the apps are installed, you may see them still appear corrupted on your Start Menu, showing odd looking app names. If this is the case, unpin them from your Start Menu by right-clicking the tiles and selecting the Unpin from Start option, then re-pin them by right-clicking the odd looking app name in the All Apps list and selecting the Pin to Start option. Once you re-pin the apps, they should show the correct app name and launching them should now work.

Microsoft report that they are working on fixing the issue to prevent the corruption of these apps in future builds but, for the time being, it looks like removing the apps just once, as worked in previous builds, isn’t enough.

Automatically Assign DVD Drive Letter in a VMM Private Cloud

When you are running a private cloud, automation is the key to success and it is important that everything runs in a repeatable fashion every time.

When using Virtual Machine Manager to deploy VMs into a Hyper-V (or VMware) environment, it is fairly common for the VMs we deploy to have multiple drive letters such as C: for the operating system and D: for the data directories and server application installations. One of the problems with this setup is the virtual DVD Drive interfering with your drive lettering.

Like many administrators, I like to move the DVD Drive to the Z: drive so that it is still there to allow me to mount .iso files from the VMM Library Share on the VM, while all of my data drives are kept together. Unfortunately, Windows Server will automatically assign the DVD Drive the D: letter, which means a manual task is required to move it to another letter. I have a nice little solution that will move it to the Z: drive, or any letter you desire, using the VMM GUI Run Once commands.

To make this work, we need to perform two distinct activities. One is to add some files to support this change to our VM Template and the second is to configure VMM to do the work.

Adding Files to the VM Template

I’m assuming in this post that you have a working VM Template configured in VMM. If you don’t, you should get that sorted first as deploying VMs using VMM revolves around good quality templates rather than deploying VMs with blank hard disks and installing the OS from the .iso image.

On either your VMM Library Share server or on another server, using Computer Management, attach the VHD file for the VM Template so that you can get access to the disk.

On the disk, I created a folder called FirstRun at the root of the drive. Inside this folder, we are going to add two script files: one is a simple command script and the other is a PowerShell script. Through my testing, it appears VMM isn’t quite so impressed with launching PowerShell scripts from the GUI Run Once commands and there is also the matter of PowerShell Execution Policy to factor in, both of which we can get around by using a command script to bootstrap the PowerShell script.

The first script is called FirstRun.cmd and contains the following.

:: First Run Configuration Script
:: v1.0 7th April 2015 by Richard J Green

:: Assigns DVD Drive to Z: Drive and Perform Clean-Up

@echo off
title First Run Configuration Script

:: Launch PowerShell and Set the DVD Drive Letter to Z
echo Set the DVD Drive Letter to Z
PowerShell.exe -NoLogo -Sta -NoProfile -ExecutionPolicy Unrestricted -File %SystemDrive%\FirstRun\Set-DriveLetter.ps1

:: Clean-Up Script Files from VM
echo Clean-Up Script Files
cd\
rd /S /Q FirstRun

This script simply launches a PowerShell session and forces PowerShell into single threaded mode to avoid any multi-threading issues; it does not attempt to load a PowerShell profile, which speeds things up, and it sets the inline Execution Policy to Unrestricted. This setting only applies to this instance of PowerShell and not the system as a whole, which is how we get around the PowerShell Execution Policy defaulting to Restricted (we can’t be sure Group Policy will have been processed by this point, so any GPO setting to lower the policy to RemoteSigned, for example, may not be ready). Lastly, we call in a PowerShell script file which does our real work.

At the end of this script, it runs a quick rd command which deletes our FirstRun directory, meaning that your resulting VM deployment isn’t left with the first run deployment scripts and files on it; we are cleaning up after ourselves.

The second script is the real worker script, the PowerShell which is going to configure your drive letter.

# Set-DriveLetter.ps1
# v1.0 7th April 2015 by Richard J Green

# Sets the Drive Letter for the DVD Drive to Z
(GWMI Win32_CDROMDrive).Drive | %{$a = mountvol $_ /l;mountvol $_ /d;$a = $a.Trim();mountvol Z: $a}

This script is even shorter and simply locates any CD/DVD drives via WMI and then remounts the drive as the Z: drive. If you want to use a different drive letter, simply change it at the end of the line. The PowerShell for this is courtesy of Derek Seaman at http://www.derekseaman.com/2010/04/change-volume-drive-letter-with.html.
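For readability, here is an expanded equivalent of the one-liner, broken out into separate steps with comments. It does the same job on the assumption that the VM has a single optical drive; the original loop handles multiple drives.

# Find the current drive letter of the DVD drive via WMI (e.g. "D:")
$dvd = (Get-WmiObject -Class Win32_CDROMDrive).Drive

# Capture the underlying volume name for that drive letter
$volume = (mountvol $dvd /L).Trim()

# Remove the existing drive letter and remount the volume as Z:
mountvol $dvd /D
mountvol Z: $volume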

Although I am using this for merely changing the DVD Drive letter right now, I can see me expanding these First Run scripts over time to do more work for me on VM deployments.

Once you have added the two script files into the FirstRun directory in the VM .vhd template, use Computer Management on the server to detach the .vhd file so that the changes are saved back into the template.

Configuring VMM GUI Run Once Commands

With the template now configured, open your Virtual Machine Manager console and head to the Library pane. Depending on how you use VMM, you will need to configure this either directly on your VM Template or on your Guest OS Profile. I use a Guest OS Profile with all of my VM deployments as I keep my VM Template configuration as minimal as possible to allow for maximum re-use, so I will be showing you how to configure this on a Guest OS Profile.

Edit the Properties of the Guest OS Profile that requires these scripts to run, select the Guest OS Profile tab and then the [GUIRunOnce] Commands option from the bottom of the configuration options list. In the right of the Properties window, in the Command to Add field, enter the path to your FirstRun.cmd script stored in your VM Template.

VMM Guest OS Profile GUIRunOnce
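If you would rather script this than use the console, the VMM PowerShell module can set the same commands on the profile. Treat the following as a rough sketch only: I’m assuming Set-SCGuestOSProfile accepts a -GUIRunOnceCommands parameter matching what the console exposes, and the profile name is just an example, so check Get-Help Set-SCGuestOSProfile in your VMM version before relying on it.

# Assumed sketch: set the GUI Run Once command on a Guest OS Profile via the VMM module
Import-Module VirtualMachineManager
$osProfile = Get-SCGuestOSProfile -Name "Windows Server 2012 R2 Profile"
Set-SCGuestOSProfile -GuestOSProfile $osProfile -GUIRunOnceCommands "C:\FirstRun\FirstRun.cmd"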

Testing the First Run Script

After configuring the commands on the Guest OS Profile in VMM, I deployed a VM into my environment based on the VM Template with the files embedded, using the Guest OS Profile for customisation of the template during deployment. Once the deployment was complete, I logged on to the VM using my normal server administration credentials and was greeted by the sight of the FirstRun.cmd command prompt script running; as the title bar in the screenshot below shows, it is running a Windows PowerShell application, which means that the called in PowerShell .ps1 script is running.

Once the logon was complete, I opened Computer and was greeted with the sight of the DVD Drive on the Z: letter as desired. Browsing the contents of the C: drive, the FirstRun directory had been removed and there was no trace of the scripts or directory having ever been there.

VM First Logon Running Script  VM DVD Drive on Z

It is important to remember that this script will run when a user first logs on to the server and not automatically as part of provisioning. This is how GUI Run Once commands in VMM are designed to work and the expected behaviour.

If you wanted this to be completely seamless, you could use the AutoLogonCredential parameter in VMM on your VM Template to configure VMM to automatically log on as the local administrator account at the end of deployment. This would trigger the GUI Run Once script and perform any first run activities; the final step of your FirstRun.cmd script could then be either to restart the VM to complete any configuration or to simply log off the server with the logoff command. I may well try this for myself and update the post when I get a chance to let you know how this works for real.

Nvarchar Data Type Error with SCSM 2012 R2 Update Rollup 5

If you are running System Center 2012 R2 in your environment and you have installed Update Rollup 5 but you are based outside of the USA then this post may well be for you.

Update Rollup 5 is the latest of the regular maintenance updates for Service Manager 2012 R2 and includes a wave of updates, but it also comes with a nasty bug up its sleeve.

I was working with a customer this week trying to get to the bottom of an issue whereby the Data Warehouse jobs were failing. The MPSyncJob was completing successfully but the next jobs in the Data Warehouse job order, the Extract jobs, were failing, and reviewing the event log on the Data Warehouse server revealed the error message “The conversion of a nvarchar data type to a datetime data type resulted in an out-of-range value”.

The error message itself isn’t particularly helpful unless you happen to know a bit about SQL and that nvarchar and datetime are both SQL data types for storing data. I looked back through the logs and found that the jobs started failing the day after we installed an updated version of a custom management pack I had written for them, so we uninstalled the MP and I re-ran all of the warehouse jobs, which this time succeeded, so we knew the custom pack was the trigger.

I reviewed my code in Visual Studio and was happy that everything was as it should be, so I turned to the TechNet forums to see what others had to say and, sure enough, there were quite a few people on there complaining that after installing Update Rollup 5, they started to see these same Data Warehouse job failures.

It transpires that there is a bug in Update Rollup 5 which only affects systems using a System Locale that changes the date and time format. US systems store their date and time in the MM/DD/YY format; however, here in the UK and many other countries, we store the date as DD/MM/YY. The bug in Update Rollup 5 means that SCSM isn’t able to understand international date formatting with the days and months transposed, so a day value greater than 12 gets read as an impossible month.

Microsoft have released a hotfix for Service Manager 2012 R2 Update Rollup 5 which updates the Microsoft.EnterpriseManagement.Orchestration.dll file and fixes the issue.

You can obtain the update from http://www.microsoft.com/en-gb/download/details.aspx?id=46368. Once downloaded, apply the update to your Service Manager Management Servers and your Data Warehouse Management Servers. Although not noted as a requirement in the update release notes, I chose to restart the servers just to be certain. After installing the update, Microsoft.EnterpriseManagement.Orchestration.dll will be updated from 7.5.3079.315, the UR5 version, to 7.5.3079.344 to reflect the hotfix installation.
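If you want to confirm the hotfix has taken, you can read the file version of the DLL on each management server with PowerShell. The path below assumes a default Service Manager 2012 R2 installation directory, so adjust it to suit your deployment; you are looking for 7.5.3079.344 or later.

# Check the version of the Orchestration DLL (path assumes a default SCSM 2012 R2 install)
$dll = 'C:\Program Files\Microsoft System Center 2012 R2\Service Manager\Microsoft.EnterpriseManagement.Orchestration.dll'
(Get-Item $dll).VersionInfo.FileVersion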

After applying the hotfix, I re-imported the management pack I had written, re-imported the data for the management pack using a CSV Import and manually triggered the MPSyncJob and the Extract jobs; they all ran without issue and the Data Warehouse is now functional again.

One important note regarding this update is that it states that your Data Warehouse must have completed at least one successful synchronisation before installation. If you are using an existing deployment of SCSM 2012 R2 then this shouldn’t be a problem; however, if you are working with a new installation, you should pair the Management Group and the Data Warehouse Management Group and complete a sync before you start installing third-party management packs that could trigger the issue. Once the jobs have completed overnight at least once, install the hotfix and then proceed with installing your custom management packs.