System Center

All posts concerning the System Center suite including but not limited to Configuration Manager, Operations Manager, Virtual Machine Manager, Service Manager, Data Protection Manager and Orchestrator. Quite often, posts relating to Hyper-V will crop up here due to the link with Virtual Machine Manager.

Configuring Global Service Monitor for SCOM

As System Center people, we all know that SCOM is a very powerful and capable monitoring product, but unless you deploy Management Servers or Gateway Servers into a public cloud environment like Azure, all of your monitoring comes from the perspective of inside your environment. If you are hosting web services that are externally accessible, one important aspect to consider is outside-in monitoring, otherwise known as monitoring your externally facing services from outside of your organisation.

Licensing and Registering

Global Service Monitor (GSM) for SCOM has been around for quite some time now, since 2013, and I still see people running SCOM who are entitled to GSM not using it. To be eligible for GSM, you most importantly need to be running System Center Operations Manager 2012 SP1 or higher and you need a properly licensed SCOM deployment. GSM is a Software Assurance benefit, so you need SA on your System Center licenses if you want to use the service permanently; alternatively, if you don’t have SA on your licenses, you can sign up for a free 90 day trial of GSM to try the service out, as I did.

To activate your SA benefit for GSM or to register for a 90 day trial, you first need to visit the Microsoft Commerce Portal at http://go.microsoft.com/fwlink/?LinkId=275502. You need an Organisational Account to sign in here, which means, in a sly way, that you need to be using Office 365, Azure or Intune as well, or at least have a working Azure Active Directory deployment ready for you to consume one of these services in the future.

Preparing the Management Servers

Once you are either signed up or activated, depending on whether you are going trial or permanent, you need to download the GSM Management Pack. You can obtain this from the Microsoft Download Center at http://www.microsoft.com/en-us/download/details.aspx?id=36422. The download is a .msi file which you need to install to extract the Management Pack Bundle files.

With the files extracted, but before we can install the Management Packs, you need to check that you have the relevant Windows features installed. GSM requires the Windows Identity Foundation 3.5 feature to be enabled on the Management Servers which will participate in the monitoring, so make sure you install this on all the relevant Management Servers and not just the one you perform the installation on.

To avoid posting a screenshot of clicking through Server Manager and the Add Roles and Features Wizard, the PowerShell commands for installing this feature are below.

Import-Module ServerManager
Install-WindowsFeature Windows-Identity-Foundation

Once that is out of the way, you can import the Management Packs into SCOM.

Import GSM Management Packs

Configuring Global Service Monitor Settings

Once you have the Management Packs imported, a new view will be added to the Administration pane of the Operations Manager console for Global Service Monitor and you can start the configuration wizard. You will be asked to sign-in using your Organisational Account as part of the process and from this, your GSM Subscription ID will be discovered.

Configure GSM Resource Pool

GSM uses Resource Pools to determine which Management Servers will communicate with the service. You can use the All Management Servers Resource Pool; however, this is not recommended, so I have created a new resource pool as recommended. You also have the option here to configure a proxy server to use to access the GSM service.
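If you prefer to create the dedicated pool with PowerShell rather than the console, something along these lines should do it. This is a minimal sketch run from the Operations Manager Shell; the pool name and the member filter are placeholders for your own values.

Import-Module OperationsManager

# Placeholder filter - select the Management Servers that should talk to GSM
$Members = Get-SCOMManagementServer | Where-Object { $_.Name -like "*MGMT*" }

# Create a dedicated pool rather than using All Management Servers
New-SCOMResourcePool -DisplayName "GSM Resource Pool" -Member $Members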

Creating Web Application Availability Monitors

Once you have completed the wizard above and GSM is configured, you can start to configure monitors using the service. I already had an existing Web Application Availability Monitor configured for my blog so I have modified this to use GSM. It is important to note that GSM only works with Web Application Availability Monitors and not with Web Application Transaction Monitors so you will need to make sure that you are using the appropriate type. There is a good article on System Center Central that compares the two types of monitor and what each can do at http://www.systemcentercentral.com/which-is-the-best-synthetic-web-transaction-to-use-in-operations-manager-for-my-requirements-scom-sysctr/ if you need to understand the difference.

Web Application Availability Monitor Locations

As you can see above, I have my existing Web Application Availability Monitor with one internal location configured (my resource pool); however, we have an empty field above called External Locations. Select the Add button to add a new external location.

Web Application Availability Monitor Set External Locations

Selecting this option now presents us with a list of the available GSM monitoring locations. Those familiar with the Azure datacentre locations will note that they are the same as the GSM locations. I selected a few choice locations, but which ones you use, and how many, is entirely up to you. If the service you are trying to outside-in monitor is truly global, you may want to use them all, but if you are only interested in the availability of your service within a particular geographic region, then just use those relevant to you.

Once you apply the changes, it takes a little while for the request to be sent up to Global Service Monitor and for the monitoring data to start coming back down. After a short wait, about fifteen minutes in my case, I started to see the health state for the various monitoring sites in the Monitoring view.

Web Application Availability Monitor Health

In my lab, I am using SquaredUp to provide rich HTML5 visualisations of my SCOM environment, so I decided to take this a step further. I am also using the Azure SQL Database Management Pack to monitor the Azure SQL databases that host my WordPress database, so I built a Distributed Application for the whole service and presented it via SquaredUp as shown below.

Web Application Distributed Application  Web Application Monitor via SquaredUp

And there we have it: a working outside-in monitoring solution for web services using SCOM, taking advantage of SA licensing benefits. One of the best things about this is that each monitored location retrieves the counters you specify whilst configuring your Web Application Availability Monitor, so you get the response time, DNS resolution time and other counters for each region and you can see really clearly how latency plays a part in your application’s performance.

I hope you found this useful and it helps you to monitor your own solutions with GSM.

Automatically Label the OS Drive on New VMs

In my quest for private cloud (and public) nirvana, I’m always looking for ways to automate parts of the first run user experience so that, as IT Pros, we can build and deliver services to users which fit the bill right out of the gate. In a previous post from earlier this year entitled Automatically Assign DVD Drive Letter VMM Private Cloud, I walked you through the process of using a PowerShell script, run as a GUI Run Once script as part of a VMM initiated virtual machine deployment, to set the DVD Drive letter.

Since I posted this article, I’ve made a couple of improvements to the environment that I wanted to share with you all, and in this first post, I will cover how to automatically label and name the OS drive on our newly deployed virtual machines. This process involves applying registry keys. As with my first post, you could achieve the same results with Group Policy; however, I like all of my modifications to be applied to the local machine so that if the machine is deployed as a non-domain joined server into a DMZ, or if there is an issue the first time Group Policy gets processed, these settings still get applied, but I will cover both methods here. This would also work in a multi-tenant or hosting environment where VMs may not be landing in your own domain or environment.

Add the Script to the VM Template

If you followed my previous post, you will be familiar with mounting the .vhd file for the VM Template on another server to modify the local file system. If you are unsure of this, please refer back to my original article Automatically Assign DVD Drive Letter VMM Private Cloud for guidance.

With the .vhd file mounted, we are going to add a new PowerShell script to the FirstRun folder named Set-OSDriveLabel.ps1 and it will contain the following.

# Set-OSDriveLabel.ps1
# v1.0 2nd June 2015 by Richard J Green

# Sets the OS Install Volume Label to the Value in the DriveLabel Variable
$DriveLabel = "OS"
$OSDrive = $env:SystemDrive
$OSDrive = $OSDrive.Substring(0,$OSDrive.Length-1)

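# Create the DriveIcons key for the OS drive letter and set its DefaultLabel default value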
New-Item -Path HKLM:\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Explorer\DriveIcons -Name $OSDrive -Force
New-Item -Path HKLM:\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Explorer\DriveIcons\$OSDrive -Name DefaultLabel -Force
Set-Item -Path HKLM:\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Explorer\DriveIcons\$OSDrive\DefaultLabel -Value $DriveLabel -Force

Short and sweet, this script will detect the Windows installation drive from the PowerShell SystemDrive environment variable and set this drive letter to use the label OS as defined in the DriveLabel variable.

One important note here is that this setting is applied to the Wow6432Node on a 64-bit server. If you were applying this to a client OS that was 32-bit, then you would need to remove the Wow6432Node portion of the registry key location. I find this a peculiar one given that this change affects Windows Explorer, which is a 64-bit process.
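If you wanted the script to handle both architectures, a small tweak along these lines should work. This is an untested sketch, assuming PowerShell 3.0 or later for the Is64BitOperatingSystem check.

# Pick the DriveIcons path based on the OS architecture
if ([Environment]::Is64BitOperatingSystem) {
    $IconPath = "HKLM:\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Explorer\DriveIcons"
} else {
    $IconPath = "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\DriveIcons"
}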

With the PowerShell script saved in the FirstRun folder, we need to update the FirstRun.cmd wrapper script that invokes the contained PowerShell scripts in the appropriate escalated manner. Simply add the following lines to the script before the clean-up section at the end.

:: Launch PowerShell and Label the OS Drive to OS
echo Set OS Drive Label to OS
PowerShell.exe -NoLogo -Sta -NoProfile -ExecutionPolicy Unrestricted -File %SystemDrive%\FirstRun\Set-OSDriveLabel.ps1

I hope this takes away another manual step from your VM build processes and brings you one step closer to nirvana. In another post coming soon, I will have instructions on how to hide some of the folders from the This PC or “My Computer” folder which don’t really belong on a server and another post to clarify the steps on creating Network Locations for the This PC folder.

Cireson Announce Partner Only Channel Sales Model

This week, Cireson, the company best known for their System Center Service Manager extensions such as the Self-Service Portal have made an exciting announcement regarding their sales channels.

Historically, Cireson have offered customers both a direct and a partner channel for purchasing their products; however, this week it has been announced that Cireson are moving to a partner only model to align themselves with the Microsoft global partner ecosystem.

For some, this may come as a bit of a disappointment if you have previously purchased products directly; however, working for a Cireson partner organisation as I do, I see this as a great thing for our customers as it allows us to really help customers ensure that they are making the most of their Cireson product purchases. If you are based in the UK or Ireland, you currently have four partners available to assist you with Cireson products and services, with Fordway Solutions, the company I work at as a Microsoft Consultant, being one of them.

If you are interested in Cireson products for System Center Service Manager such as the Self-Service Portal, Knowledge Base, Dashboards and more then please get in touch with us at Fordway and we’ll be sure to help you with your service management needs.

You can read the Cireson announcement at http://www.prweb.com/releases/2015/4/prweb12690930.htm and you can get in touch with Fordway Solutions regarding Cireson, System Center and more at http://fordway.com/contact-us.

Update Rollup 6 for System Center Service Manager

On the 28th April 2015, Microsoft are going to make available Update Rollup 6 for System Center Service Manager 2012 R2. Microsoft have provided the details of the update in a blog post at http://blogs.technet.com/b/servicemanager/archive/2015/04/22/it-39-s-time-for-ur6.aspx.

This update heavily focuses on performance enhancements including improvements to the AD and SCCM connectors as well as improvements to the MPSyncJob, one of the many Data Warehouse jobs which causes no end of problems in my experience. For non-US customers, this update also includes the previously released hotfix to address SQL Nvarchar errors that I blogged about at http://richardjgreen.net/nvarchar-data-type-error-with-scsm-2012-r2-update-rollup-5/.

It seems from the post that the Service Manager team are also starting to put a lot of focus on performance, addressing the speed and performance problems that people experience using Service Manager once it is actually loaded up with data, connectors and ITIL related incidents, requests and changes. I’m looking forward to seeing what comes out of this team over the coming updates and how they can improve the usability of Service Manager, as it’s a key piece in the System Center puzzle that does indeed need a bit of work to make it more usable.

Changing SQL Server Instance Collation

Working in my home lab over the last couple of evenings, I have been installing some additional SQL Server instances ready for me to install System Center Service Manager. As anyone who has worked with System Center 2012 or 2012 R2 knows, getting your SQL instance collation right is critical. To compound matters, when you think you’ve got an instance set up right, you could end up finding that although one product has the correct collation, another does not.

In my case with Service Manager, making sure you use the correct collation not only affects Service Manager but also potentially your ability to integrate it with other parts of the suite such as Operations Manager. There is a really helpful blog post at http://blogs.technet.com/b/servicemanager/archive/2012/05/24/clarification-on-sql-server-collation-requirements-for-system-center-2012.aspx which not only talks through the SQL collations for System Center but additionally offers up a table of interoperable collations.

Needless to say, I got the collation wrong when installing Service Manager in my lab and I really didn’t want to have to go to the trouble of uninstalling and re-installing it. Not only is driving a SQL installation time consuming, but because I have two instances, one for the Management database and another for the Data Warehouse database, I would have had to do it twice.

Luckily for me, I found that it is possible to change the collation of a SQL Server instance after installation. I want to point out that although this is possible to do, I’m not sure I would recommend it for someone in a production environment and I would definitely tell you to back anything relating to that instance up first. Doing this not only drops any user databases but because it causes the master database to be rebuilt, it will lose any customisations or setting changes you have made to that instance since install.

With everything backed up and ready, use the following command to change the collation of the instance.

Z:\setup.exe /QUIET /ACTION=REBUILDDATABASE /INSTANCENAME=SCSM /SQLSYSADMINACCOUNTS=RJGLAB\Administrator /SQLCOLLATION=Latin1_General_CI_AS

To break the command down, Z:\setup.exe is the path to the SQL Server setup executable on my server. The INSTANCENAME parameter is where you specify the instance whose collation you want to modify. SQLSYSADMINACCOUNTS is where you specify who will be made a sysadmin on the instance after the rebuild (remember, our master database is going to be reset) and SQLCOLLATION is where you specify the new collation to use.

If your instance is running in Mixed Mode Authentication, you can also provide the SAPWD parameter to specify the password that will be used for the sa account; however, my instance is in Windows Authentication mode, so I don’t need to set or use the sa account.
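If you want to confirm the new collation once the rebuild completes, you can query it from PowerShell. This is a quick sketch assuming the SQL Server PowerShell module is available for Invoke-Sqlcmd; the server name RJGSQL1 is a placeholder for your own.

# Check the instance collation after the rebuild (server name is a placeholder)
Invoke-Sqlcmd -ServerInstance "RJGSQL1\SCSM" -Query "SELECT SERVERPROPERTY('Collation') AS ServerCollation"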

Automatically Assign DVD Drive Letter VMM Private Cloud

When you are running a private cloud, automation is the key to success and having everything automated and running in a repeatable fashion every time is important.

When using Virtual Machine Manager to deploy VMs into a Hyper-V (or VMware) environment, it is fairly common for the VMs we deploy to have multiple drive letters such as C: for the operating system and D: for the data directories and server application installations. One of the problems with this setup is the virtual DVD Drive interfering with your drive lettering.

Like many administrators, I like to move my DVD Drive to the Z: drive so that it is still there to allow me to mount stored .iso files from the VMM Library Share on the VM, but in such a way that I know all my data drives are kept together. Unfortunately, Windows Server will automatically assign the DVD Drive the D: letter, which means a manual task is required to move it to another letter. However, I have a nice little solution that will move it to the Z: drive, or any letter you desire, using the VMM GUI Run Once commands.

To make this work, we need to perform two distinct activities. One is to add some files to support this change to our VM Template and the second is to configure VMM to do the work.

Adding Files to the VM Template

I’m assuming in this post that you have a working VM Template configured in VMM. If you don’t, then you should get that sorted first, as deploying VMs using VMM revolves around good quality templates, not deploying VMs with blank hard disks and installing the OS from the .iso image.

On either your VMM Library Share server or on another server, using Computer Management, attach the VHD file for the VM Template so that you can get access to the disk.

On the disk, I created a folder called FirstRun at the root of the drive. Inside this folder, we are going to add two script files: one is a simple command script and the other is a PowerShell script. Through my testing, it appears VMM isn’t quite so impressed with launching PowerShell scripts from the GUI Run Once commands, and there is also the matter of PowerShell Execution Policy to factor in, both of which we can get around by using a command script to bootstrap the PowerShell script.

The first script is called FirstRun.cmd and contains the following.

:: First Run Configuration Script
:: v1.0 7th April 2015 by Richard J Green

:: Assigns DVD Drive to Z: Drive and Perform Clean-Up

@echo off
title First Run Configuration Script

:: Launch PowerShell and Set the DVD Drive Letter to Z
echo Set the DVD Drive Letter to Z
PowerShell.exe -NoLogo -Sta -NoProfile -ExecutionPolicy Unrestricted -File %SystemDrive%\FirstRun\Set-DriveLetter.ps1

:: Clean-Up Script Files from VM
echo Clean-Up Script Files
cd\
rd /S /Q FirstRun

This script simply launches a PowerShell session, forcing PowerShell into single threaded mode to avoid any multi-threading issues. It does not attempt to load a PowerShell profile, which speeds things up, and it sets the inline Execution Policy to Unrestricted. This only applies to this instance of PowerShell and not the system as a whole, which is how we get around PowerShell Execution Policy defaulting to Restricted (we can’t be sure Group Policy will have been processed by this point, so any GPO setting to lower the policy to RemoteSigned, for example, may not be ready). Lastly, it calls in a PowerShell script file which does our real work.

At the end of this script, it runs a quick rd command which deletes our FirstRun directory, meaning that your resulting VM deployment isn’t left with the first run deployment scripts and files on it, so we are cleaning up after ourselves.

The second script is the real worker script, the PowerShell script which is going to configure your drive letter.

# Set-DriveLetter.ps1
# v1.0 7th April 2015 by Richard J Green

# Sets the Drive Letter for the DVD Drive to Z
(GWMI Win32_CDROMDrive).Drive | %{$a = mountvol $_ /l;mountvol $_ /d;$a = $a.Trim();mountvol Z: $a}

This script is even shorter and simply locates any optical drives via WMI and then remounts the drive on the Z: letter. If you want to use a different drive letter, simply change it at the end of the line. The PowerShell for this is courtesy of Derek Seaman at http://www.derekseaman.com/2010/04/change-volume-drive-letter-with.html.
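If the one-liner is a little dense for your taste, here is the same logic unrolled with comments. It is functionally equivalent, just easier to read.

# Find the DVD drive letter(s) reported by WMI, for example "D:"
$DvdDrives = (Get-WmiObject Win32_CDROMDrive).Drive
foreach ($Drive in $DvdDrives) {
    $Volume = (mountvol $Drive /l).Trim()   # Get the volume name behind the drive letter
    mountvol $Drive /d                      # Remove the existing drive letter mapping
    mountvol Z: $Volume                     # Remount the volume as Z:
}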

Although I am using this for merely changing the DVD Drive letter right now, I can see me expanding these First Run scripts over time to do more work for me on VM deployments.

Once you have added the two script files to the FirstRun directory in the VM template .vhd, use Computer Management on the server to unmount the .vhd file so that the changes are saved back into the template.

Configuring VMM GUI Run Once Commands

With the template now configured, open your Virtual Machine Manager console and head to the Library pane. Depending on how you use VMM, you will need to configure this directly on your VM Template or on your Guest OS Profile. I use a Guest OS Profile with all of my VM deployments as I keep my VM Template configuration as low as possible to allow for maximum re-use so I will be showing you how to configure this on a Guest OS Profile.

Edit the Properties of the Guest OS Profile that requires these scripts to run, select the Guest OS Profile tab and then the [GUIRunOnce] Commands option from the bottom of the configuration options list. In the right of the Properties window, in the Command to Add field, enter the path to the FirstRun.cmd script stored in your VM Template.

VMM Guest OS Profile GUIRunOnce

Testing the First Run Script

After configuring the commands on the Guest OS Profile in VMM, I deployed a VM into my environment based on the VM Template with the files embedded, using the Guest OS Profile for customisation of the template during deployment. Once the deployment was complete, I logged on to the VM using my normal server administration credentials and was greeted by the sight of the FirstRun.cmd command prompt script running. As the title bar in the screenshot below shows, it is running a Windows PowerShell application, which means the called-in PowerShell .ps1 script is running.

Once the logon is complete, I opened Computer and was greeted with the sight of the DVD Drive on the Z: letter as desired. Browsing the contents of the C: drive, the FirstRun directory has been removed and there is no trace of the scripts or directory having ever been there.

VM First Logon Running Script  VM DVD Drive on Z

It is important to remember that this script will run when a user first logs on to the server and not automatically as part of provisioning. This is how GUI Run Once commands in VMM are designed to work and the expected behaviour.

If you wanted this to be completely seamless, you could use the AutoLogonCredential parameter in VMM on your VM Template to configure VMM to automatically log on as the local administrator account at the end of deployment. This would trigger the GUI Run Once script and perform any first run activities, and the final step of your FirstRun.cmd script could then be either to restart the VM to complete any configuration or to simply log off the server with the logoff command. I may well try this for myself and update the post when I get a chance to let you know how this works for real.
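For reference, this is roughly how I would expect that to look using the VMM PowerShell module. It is an untested sketch; the template and Run As account names are placeholders, so check the parameters against your own VMM version.

Import-Module virtualmachinemanager

# Placeholders - substitute your own template and Run As account names
$Template = Get-SCVMTemplate -Name "Server 2012 R2 Template"
$RunAs = Get-SCRunAsAccount -Name "Local Administrator"

# Log on automatically once after deployment to trigger the GUI Run Once commands
Set-SCVMTemplate -VMTemplate $Template -AutoLogonCredential $RunAs -AutoLogonCount 1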

Nvarchar Data Type Error with SCSM 2012 R2 Update Rollup 5

If you are running System Center 2012 R2 in your environment and you have installed Update Rollup 5 but you are based outside of the USA then this post may well be for you.

Update Rollup 5 is the latest of the regular maintenance updates for Service Manager 2012 R2 and includes a wave of updates, but it also comes with a nasty bug up its sleeve.

I was working with a customer this week trying to get to the bottom of an issue whereby the Data Warehouse jobs were failing. The MPSyncJob was completing successfully, but the next jobs in the Data Warehouse job order, the Extract jobs, were failing, and reviewing the event log on the Data Warehouse server revealed the error message “The conversion of a nvarchar data type to a datetime data type resulted in an out-of-range value”.

The error message itself isn’t particularly helpful unless you happen to know a bit about SQL and that nvarchar and datetime are both SQL data types for storing row data. I looked back through the logs and found that the jobs started failing the day after we installed an updated version of a custom management pack I had written for them, so we uninstalled the MP and I re-ran all of the warehouse jobs, which this time succeeded, so we knew it was the custom pack at fault.

I reviewed my code in Visual Studio and was happy that everything was as it should be, so I turned to the TechNet forums to see what others had to say and, sure enough, there were quite a few people on there complaining that after installing Update Rollup 5, they started to see these same Data Warehouse job failures.

It transpires that there is a bug in Update Rollup 5 which only affects systems using a System Locale that results in a change to the date and time format. US systems store their date and time in the MM/DD/YY format; however, here in the UK and in many other countries, we store the date as DD/MM/YY. The bug in Update Rollup 5 meant that SCSM reads the day value as the month, so any date beyond the 12th of the month produces an impossible month value greater than 12, as it isn’t able to handle international date formatting with the days and months transposed.

Microsoft have released a hotfix for Service Manager 2012 R2 Update Rollup 5 which updates the Microsoft.EnterpriseManagement.Orchestration.dll file and fixes the issue.

You can obtain the update from http://www.microsoft.com/en-gb/download/details.aspx?id=46368. Once downloaded, apply the update to your Service Manager Management Servers and your Data Warehouse Management Servers. Although not noted as a requirement in the update release notes, I chose to restart the servers just to be certain. After installing the update, Microsoft.EnterpriseManagement.Orchestration.dll will be updated from 7.5.3079.315, the UR5 version, to 7.5.3079.344 to reflect the hotfix installation.
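You can check the file version quickly with PowerShell. The install path below is an assumption based on a default installation, so adjust it to suit your environment.

# Default SCSM 2012 R2 install path - an assumption, adjust for your environment
$Dll = "C:\Program Files\Microsoft System Center 2012 R2\Service Manager\Microsoft.EnterpriseManagement.Orchestration.dll"
(Get-Item $Dll).VersionInfo.FileVersion   # Expect 7.5.3079.344 once the hotfix is installed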

After applying the hotfix, I re-imported the management pack I had written, re-imported the data for the management pack using a CSV Import, and manually triggered the MPSyncJob and the Extract jobs. They all ran without issue and the Data Warehouse is now functional again.

One important note regarding this update is that it states that your Data Warehouse must have completed at least one successful synchronisation before installation. If you are using an existing deployment of SCSM 2012 R2 then this shouldn’t be a problem however if you are working with a new installation then you should pair the Management Group and the Data Warehouse Management Group and complete a sync before you start installing third-party management packs that could trigger the issue. Once the jobs have completed overnight at least once, then install the hotfix and proceed with installing your custom management packs.

SCCM OSD Part 2: Consolidating the Captured Images

In part one of this series, SCCM OSD Part 1: Building Reference Images, we set up task sequences to capture reference images for all of the required operating systems. Further on in this series, we will be using the Microsoft Deployment Toolkit to create a User-Driven Installation (UDI) with the Configuration Manager integration, and in order for this to work, we need to consolidate our images into a single master .wim file.

This post will focus on this area. There are no screenshots to offer here as this is a purely command driven exercise. In order to complete the steps in this post, you will need to know the path to where you captured the reference image task sequence .wim files. For the purpose of this, I will assume in this post that all of your captured images are stored in the D:\Images\Captured path on your server. To keep this post consistent with my lab environment, I will provide the commands for capturing Windows 7 and Windows 8.1 images for both 32-bit and 64-bit architectures into the consolidated image.

We start the process by capturing the first image with the following command:

Dism /Export-Image /SourceImageFile:"D:\Images\Captured\Windows 7 x86.wim" /SourceIndex:2 /DestinationImageFile:"D:\Images\Consolidated.wim" /DestinationName:"Windows 7 x86"

Dism is a complicated beast and has a lot of switches that we need to throw into our commands to make it work as we want. To make it worse, Dism is a command line tool, not a PowerShell tool, so we don’t have the luxury of tabbing our commands to completion. When you enter this command for yourself, make sure you include the quote marks around any names or file paths with spaces. If you have no spaces in your names or paths, then you can omit the quote marks.

To break down this command, the Export-Image opening parameter tells Dism that we want to export the contents of one .wim file into another. The SourceImageFile parameter tells Dism where our source file is located and the SourceIndex tells it which image within the .wim file we want to export. The reason we need to target index 2 is that when a machine is captured using the Build and Capture Task Sequence, two partitions will be created on the disk and captured. The first will be a 300MB System Reserved partition used for Boot Files, BitLocker and the Windows Recovery Environment if any of these features are configured. The second partition is used to install the actual Windows operating system. DestinationImageFile is obvious in that we are telling Dism where we want the image from the original file to be saved. In essence, we are telling Dism to create a new file with the index from an existing image. The DestinationName parameter is not required, but it makes our lives a lot easier down the line. With Destination Name, we provide a friendly name for the index within the .wim file so that when we are using SCCM or MDT to work with the image, we are shown not only the index number but a friendly name to help us understand what each index in the image file does.

The command will execute fairly quickly and once complete, we will have a new file called Consolidated.wim with the contents of the original .wim file for Windows 7 x86. Now, we repeat the command for Windows 7 x64.

Dism /Export-Image /SourceImageFile:"D:\Images\Captured\Windows 7 x64.wim" /SourceIndex:2 /DestinationImageFile:"D:\Images\Consolidated.wim" /DestinationName:"Windows 7 x64"

You will notice the two differences here. Firstly, we specify a different Source Image File for the 64-bit version of Windows 7. The second difference is the Destination Name. When we run this command, Dism sees that the Consolidated.wim file already exists and does not overwrite it; instead, it applies our Export Image command as a second index in the Consolidated.wim file, and hence you see how we build a consolidated image.

Repeat the command twice more to add the Windows 8.1 images:

Dism /Export-Image /SourceImageFile:"D:\Images\Captured\Windows 8.1 x86.wim" /SourceIndex:2 /DestinationImageFile:"D:\Images\Consolidated.wim" /DestinationName:"Windows 8.1 x86"
Dism /Export-Image /SourceImageFile:"D:\Images\Captured\Windows 8.1 x64.wim" /SourceIndex:2 /DestinationImageFile:"D:\Images\Consolidated.wim" /DestinationName:"Windows 8.1 x64"

Once these two commands have completed, it’s time to review our work. Use the following command with Dism once more to list the contents of the new Consolidated.wim file and make sure everything is as we expect it.

Dism /Get-WimInfo /WimFile:"D:\Images\Consolidated.wim"

The result will be that Dism outputs to the command line the name, index number and size of all of the indexes within the image. If you are following my steps above to the letter then you will have a resulting Consolidated.wim file with four indexes, each with a friendly name to match the operating system within that given index.
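As an aside, if you are consolidating a lot of images, the four Dism commands lend themselves to a simple PowerShell loop. This is a sketch which assumes your captured file names match the friendly names used throughout this post.

# Assumes each captured .wim file is named to match its friendly name
$Images = "Windows 7 x86", "Windows 7 x64", "Windows 8.1 x86", "Windows 8.1 x64"
foreach ($Name in $Images) {
    Dism /Export-Image /SourceImageFile:"D:\Images\Captured\$Name.wim" /SourceIndex:2 /DestinationImageFile:"D:\Images\Consolidated.wim" /DestinationName:"$Name"
}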

SCCM OSD Part 1: Building Reference Images

This is the first in what will become a multi-part series of posts on configuring Operating System Deployment in Configuration Manager 2012 R2. The end goal will be to use Configuration Manager with MDT integration to provide a rich end-user experience for deploying operating systems.

In this first part, we will lay the foundation for what will become the core of the deployment – the Windows Operating System images. In this part, we will create task sequences to build and capture the reference images and update them as needed.

Import OEM Media OS Images

We start with our source Windows media. Copy the contents of the Windows .iso file you plan to use for your installations to a suitable directory in your SCCM source structure and import the Operating System Images as shown above. Repeat this for as many Operating System versions and architectures as you need to support. If you are supporting many operating systems, I would highly recommend creating a folder structure to aid locating the images.

Create Task Sequence Wizard Build and Capture

Once you have imported your base Operating System Images, we need to create a new Task Sequence. In the Task Sequence Wizard, select the Build and capture a reference operating system option.

Specify Task Sequence Name and Boot Image

Next, we need to give our Task Sequence a name and specify the boot image to use. You should always use the 32-bit (x86) boot image because this one image can support both 32-bit and 64-bit operating system images, whereas the 64-bit boot image is only able to support 64-bit operating system images.

Specify OS Image to Use as Reference

Next, we need to specify our source operating system. In this demonstration, I am using Windows 8.1 Enterprise with Update (x64). The install.wim file in the source Windows media only contains a single image, so Image 1 is automatically selected from the .wim file. If you are using a Windows image that provides multiple images such as Home Basic, Home Premium and Professional, then you need to make sure you specify the correct image from the list.

Specify Join a Workgroup

Next, we need to specify our machine to join a workgroup and not a domain. We don’t want our reference machine to join the domain as joining the domain will cause Group Policy Objects to be applied to the image which could in turn install software, none of which we want included in the base image. Specify any workgroup name you like but I stick to WORKGROUP just for simplicity.

Set ConfigMgr Client Package Properties

On the step shown above, we need to configure the Configuration Manager Client Package that will be used to install the Configuration Manager Client. Configuration Manager will automatically select the package from the site; however, we need to customise the parameters that get used for the installation. Parameters are automatically detected from the site Client Push Installation parameters and, in my case, this added the Fallback Status Point (FSP) record automatically. We need to add to this the SMSMP parameter. The SMSMP parameter tells the Configuration Manager Client the name of the Configuration Manager Management Point. A domain client would find this automatically via Active Directory publishing of Configuration Manager, but as we are in a workgroup, we need to add it. Without this parameter, our Install Software Updates steps will fail to find any updates. Add the parameter as SMSMP=RJGCMSITE1.rjglab.local where RJGCMSITE1.rjglab.local is the FQDN of your Configuration Manager Management Point.
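For illustration, the finished installation properties field in my lab would read something like the line below. The FSP entry was populated automatically from the Client Push Installation settings, so yours will reflect your own site.

FSP=RJGCMSITE1.rjglab.local SMSMP=RJGCMSITE1.rjglab.local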

Specify Install All Software Updates

After setting our SMSMP parameter, we need to tell the task sequence wizard that we want to install All Software Updates. This will install any updates which are either Required or Available to the client from any deployments that are visible to the client.

Specify Capture Path and Network Access Account

On the final step, we need to specify the capture path and a network access account. Specify the UNC path to the location where you want the captured reference image to be uploaded. This captured file is not automatically added to Configuration Manager once the capture process is complete. The network access account does not use the account configured in the Site Properties and requires us to re-enter the username and password. This is because we may be saving the captured image to a location or server to which the normal network access account does not have access.

Once you reach this point, the reference image task sequence will be created with all the default steps and can be used like this if you wish; however, I like to add a few more steps manually.

Add Install Software Steps to the Reference Image

As you can see from the image above, I have added an Install Software step to the task sequence to install .NET Framework 4.5.1 so that all of my reference machines include this newer version of .NET Framework. Other things you might want to consider including in your reference images are Windows Features such as .NET Framework 3.5.1 or software such as Visual C++ packages that will be required by your end-user applications later on down the road. This is down to personal preference and individual requirements so do as you will here. Use an Install Software step to perform this and reference the package and program as required to do so.

Add Software Update Scan Step

Next, I like to make some changes to the Install Software Updates phase of the sequence. Firstly, I have found, as have others in the community, that sometimes the task sequence just fails to find any updates. We can fix this with two steps added to the task sequence. The first step shown above calls the Configuration Manager Client and forces it to perform a Software Update Scan Cycle. To add this yourself, use the following, added as a Run Command Line action in the task sequence.

WMIC /namespace:\\root\ccm path sms_client CALL TriggerSchedule "{00000000-0000-0000-0000-000000000113}" /NOINTERACTIVE
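If you prefer, the same schedule can be triggered from PowerShell rather than WMIC using the built-in Invoke-WmiMethod cmdlet; the call below is equivalent.

# Trigger the Software Update Scan Cycle (same schedule ID as the WMIC command above)
Invoke-WmiMethod -Namespace root\ccm -Class sms_client -Name TriggerSchedule -ArgumentList "{00000000-0000-0000-0000-000000000113}"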

Add Software Update Wait Step

In the step following our forced Software Update Scan Cycle, add a wait timer to the task sequence. This is to give the Software Update Scan Cycle enough time to run, complete and evaluate the update requirements. Some people will want to use a VBScript to initiate this, but doing so requires a package to be downloaded by the client. The easiest way is to use PowerShell and the Start-Sleep command. Add the following to the task sequence as a Run Command Line action to add a wait timer.

PowerShell.exe -Command Start-Sleep 45

You can change the timer from 45 to any number of seconds that you require but I found that 45 seconds works okay for my requirements.

As you will also see from the two screenshots above, I have added multiple Install Software Updates sections with a Restart Computer step following each wave. As we all know, some Windows Updates require dependencies to be installed or require a restart to complete their installation. Having three iterations (waves) of Install Software Updates in the task sequence does add a chunk of time to the end of the capture process, but it is worth it, especially given you won’t be running these too often, if at all, after the one time. Having three passes of the Install Software Updates step will pretty much ensure that your reference images have 100% of all available updates installed and will be fully up to date.

Once you’ve reached this point, your task sequence for building and capturing a reference image is done. If, like me, you are supporting multiple operating systems and architectures, then you can now copy the task sequence to create duplicates of it. For each duplicate you create, edit the Apply Operating System and the Capture the Reference Machine steps to change both the operating system image that gets applied to the reference machine and the path to which the image is captured.

Once you have created all of the required task sequences, advertise (deploy) them to a collection and run them on your client. At the end of the process, you will have captured a .wim file for each operating system variant you support as a fully patched reference image, and we are ready to move on to the next step, image consolidation, which I will be posting in the coming day or so.

Extending SCUP with the Patch My PC Catalog

If you read my two previous posts, Preparing Certificates and GPOs for System Center Update Publisher and Setting Up System Center Update Publisher, you will already have a working SCUP installation integrated with Configuration Manager, and you will have the certificates and Group Policy Object settings in place for your clients to trust the updates distributed by SCUP. The downfall of the work done with SCUP up to now is that the out of the box catalogs that Microsoft give you access to are limited to what the software vendors provide to Microsoft. Adobe, Dell, Fujitsu and HP all provide catalogs; however, none of these are complete or cover the vendor’s entire product line, but the gesture is most welcome none-the-less.

Where SCUP becomes really powerful is when we look beyond these out of the box catalogs and start to patch other third-party software that doesn’t normally get delivered through Windows Update, and the primary reason is security.

Third-party applications, as much as we need them, can be the bane of an administrator’s life given the need to keep them up to date, especially when you look at heavily updated applications like Adobe Flash Player or Google Chrome. We need to keep pace with these updates to make sure that the fixes for the vulnerabilities and CVEs addressed by the updated versions get into the hands of our users, but it is a balance between time, effort and cost, as are all things in business. Depending on the sector or organisation you work for, you might have a requirement to keep pace too. UK bodies that use the Public Services Network (or PSN) or organisations accepting credit card payments that are required to comply with PCI DSS all have compliance requirements to maintain applications within a certain number of versions of the latest available release.

Another reason for considering SCUP for these third-party updates is consistency and efficiency. Google Chrome and Adobe Flash Player, for example, both have automatic update engines built into them designed to keep the products up to date; however, these systems aren’t designed with the enterprise in mind. As a result, we can not only find ourselves in a scenario where divergent versions of software appear across the estate, but we also find a large amount of internet connection bandwidth being consumed by downloading these software updates for each and every client. Yes, there are workarounds such as caching the updates on a proxy server, but that doesn’t really resolve the root issue.

Home Brew Updates and Detectoids

The brave amongst you may be looking around the SCUP console and have realised that you can import your own updates from a Local Update Source and that you can write your own detectoid rules to locate installed software at specific versions, but that is time consuming work, requires a lot of testing and is prone to error: I tried to write custom detectoids for patching Oracle Java in a previous life and it didn’t go so well, even though I followed instructions somebody else claimed had worked.

If we look back to the statement I made about balancing time, effort and cost, creating custom updates in SCUP uses all three of those, although the cost is borne out of man-hours spent on the endeavour rather than a real cost like buying something. Therefore, this isn’t an effective solution, so we need to find something else.

Patch My PC SCUP Catalog

As we already know, SCUP provides some out of the box catalogs for getting third-party updates, but the list of products and vendors is extremely limited. To my mind, the worst offenders like Oracle with Java and Google with Chrome should be doing more to help enterprises with services like SCUP catalogs, but sadly they don’t. Luckily for us though, the market answers our needs, and here is where I introduce a company called Patch My PC who have a product simply named SCUP Catalog.

What Patch My PC provide is a subscription based catalog that we can import into our SCUP console, and they do all the hard work for you of creating the detectoids, pulling together the update files and, crucially, the testing. Unlike most enterprise software that costs the earth, Patch My PC is priced simply and fairly: $1 per managed client per year. There is a minimum order of 250 managed clients, so even if you have only 100 devices you still need to license 250, but at $1 per client, per year, I fail to see how any organisation could manage the patching of third-party applications more cheaply.

Before I get any further into the details of this post, I just want to make one thing clear. As with all of my posts on this blog, nobody is paying me to write a favourable review for a product or to say anything nice about their company in exchange for favours. I approached Patch My PC to request an NFR license for my lab so that I could blog about it and show you all the value of the software, not because I’m making revenue from advertising their product for them. There are other products on the market which can perform a similar job to the Patch My PC SCUP Catalog, but none of them are able to do it with the simplicity we see here today, nor do any of them come even remotely close on value for money and price. As we all know, enterprise IT budgets are squeezed year-on-year, so if we can achieve something more effectively and more cost consciously then it is a good thing.

Add Patch My PC SCUP Catalog

After registration and payment, you will be emailed a URL to a .cab file. You don’t need to download this file, as it is updated frequently by the team at Patch My PC with the latest updates. In the SCUP Console, on the Catalogs page, select the Add Catalog link in the Ribbon. In the wizard, enter the URL given to you for your unique catalog and enter the details for Patch My PC into the various form fields as shown.

Import Patch My PC SCUP Catalog

Once you have added the catalog, you need to import it. Still on the Catalogs page in the console, select the Import button and select the Patch My PC catalog to import it. Unlike the out of the box catalogs I showed in my previous posts, this will take a lot longer to import as there is a lot more here, but it shouldn’t take more than a minute or two.

Publish Patch My PC Updates to WSUS

With the catalog imported, head over to the Updates page and take a look at the list of products and updates that the catalog has added to SCUP. There are too many products for me to mention directly here, but you can look at the list they maintain at https://patchmypc.net/supported-products-scup-catalog. To deploy an update to clients, we need to publish it to WSUS. Select the update(s) you want to deploy and select the Publish option from the Ribbon.

Once you have published the updates they will be inserted into WSUS and we now need to make a quick change in Configuration Manager for the remainder of the process to work.

Add Products to SCCM SUP Point

In your Configuration Manager Administration Console, navigate to the Administration page and expand the Site Configuration folder followed by Sites. In the main area, right-click your Configuration Manager site and select the Configure Site Components menu item followed by Software Update Point. In the SUP settings, select the Products tab and check the boxes for all of the products you just published into WSUS as they will currently not be enabled.

SCCM Software Updates with Patch My PC

Once you have done this, the next time your Software Update Point WSUS server performs a synchronisation, either automatically on the schedule or because you force one, the updates for the recently added products will appear in the All Software Updates view of the console and will be available for you to deploy to your clients following your normal software update process.

As you can see, with Patch My PC, we can use SCUP to get third-party software updates published into WSUS and made available to Configuration Manager extremely quickly and easily, without having to create our own custom updates or detection rules. Furthermore, we no longer need to manually create Software Packages in Configuration Manager for the updated products, or Device Collections to locate machines on the network with particular software versions installed, in order to target the deployment of these updates.

In my lab, the whole process took no more than 30 minutes to get set up, with a working Update Publisher deployment already in place, and now that it is done, it would take less than ten minutes each month to add approvals for the products I am interested in and get them into Configuration Manager to the point that I would be ready to roll them out to clients. To be able to achieve this level of simplicity in third-party patch management for $1 per device per year is frankly amazing.