Posts from March 2015

SCCM OSD Part 2: Consolidating the Captured Images

In part one of this series, SCCM OSD Part 1: Building Reference Images, we set up task sequences to capture reference images for all of the required operating systems. Later in this series, we will be using the Microsoft Deployment Toolkit to create a User-Driven Installation (UDI) with the Configuration Manager integration, and for this to work, we need to consolidate our images into a single master .wim file.

This post will focus on that area. There are no screenshots to offer here as this is a purely command-driven exercise. To complete the steps in this post, you will need to know the path where your reference image task sequences saved their captured .wim files. For the purpose of this post, I will assume that all of your captured images are stored in the D:\Images\Captured path on your server. To keep this post consistent with my lab environment, I will provide the commands for consolidating Windows 7 and Windows 8.1 images for both 32-bit and 64-bit architectures into the consolidated image.

We start the process by capturing the first image with the following command:

Dism /Export-Image /SourceImageFile:"D:\Images\Captured\Windows 7 x86.wim" /SourceIndex:2 /DestinationImageFile:"D:\Images\Consolidated.wim" /DestinationName:"Windows 7 x86"

Dism is a complicated beast with a lot of switches that we need to include in our commands to make it behave as we want. To make it worse, Dism is a command line tool rather than a PowerShell cmdlet, so we don't have the luxury of tabbing our commands to completion. When you enter this command yourself, make sure you include the quote marks around any names or file paths containing spaces. If there are no spaces in your names or paths then you can omit the quotes.

To break down this command, the Export-Image parameter tells Dism that we want to export the contents of one .wim file into another. The SourceImageFile parameter tells Dism where our source file is located and SourceIndex tells it which image within the .wim file we want to export. The reason we need to target index 2 is that when a machine is captured using the Build and Capture task sequence, two partitions are created on the disk and captured. The first is a 300MB System Reserved partition used for boot files, BitLocker and the Windows Recovery Environment if any of these features are configured. The second partition is where the Windows operating system itself is installed. DestinationImageFile is obvious in that we are telling Dism where we want the exported image to be saved; in essence, we are telling Dism to create a new file from an index in an existing image. The DestinationName parameter is not required but it makes our lives a lot easier down the line. With DestinationName, we provide a friendly name for the index within the .wim file so that when we are using SCCM or MDT to work with the image, we are shown not only the index number but also a friendly name to help us understand what each index in the image file contains.
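If you want to confirm which index in a captured .wim file holds the operating system before exporting it, you can point the same Get-WimInfo switch we use later in this post at the source file; the path below assumes the file names used in this post, so adjust it for your environment.

Dism /Get-WimInfo /WimFile:"D:\Images\Captured\Windows 7 x86.wim"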

The Export-Image command will execute fairly quickly and once complete, we will have a new file called Consolidated.wim with the contents of the original .wim file for Windows 7 x86. Now, we repeat the command for Windows 7 x64.

Dism /Export-Image /SourceImageFile:"D:\Images\Captured\Windows 7 x64.wim" /SourceIndex:2 /DestinationImageFile:"D:\Images\Consolidated.wim" /DestinationName:"Windows 7 x64"

You will notice two differences here. Firstly, we specify a different SourceImageFile pointing to the 64-bit version of Windows 7. The second difference is the DestinationName. When we run this command, Dism sees that the Consolidated.wim file already exists and does not overwrite it; instead, it adds the exported image as a second index in the Consolidated.wim file, and this is how we build up the consolidated image.

Repeat the command twice more to add the Windows 8.1 images:

Dism /Export-Image /SourceImageFile:"D:\Images\Captured\Windows 8.1 x86.wim" /SourceIndex:2 /DestinationImageFile:"D:\Images\Consolidated.wim" /DestinationName:"Windows 8.1 x86"
Dism /Export-Image /SourceImageFile:"D:\Images\Captured\Windows 8.1 x64.wim" /SourceIndex:2 /DestinationImageFile:"D:\Images\Consolidated.wim" /DestinationName:"Windows 8.1 x64"

Once these two commands have completed, it's time to review our work. Use Dism once more with the following command to list the contents of the new Consolidated.wim file and make sure everything is as we expect.

Dism /Get-WimInfo /WimFile:"D:\Images\Consolidated.wim"

The result is that Dism outputs to the command line the name, index number and size of each index within the image. If you have followed my steps above to the letter, you will have a resulting Consolidated.wim file with four indexes, each with a friendly name matching the operating system within that index.
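If you are consolidating a larger number of captured images, the exports can also be scripted. The following is a minimal PowerShell sketch, not something you have to use; it assumes your captured .wim files are named to match the friendly names you want in the consolidated file, as in the examples above, so adjust the paths and the list to suit your environment.

# Captured images to consolidate; each name is assumed to match a .wim file in D:\Images\Captured
$images = "Windows 7 x86", "Windows 7 x64", "Windows 8.1 x86", "Windows 8.1 x64"

foreach ($name in $images) {
    # Export index 2 (the operating system partition) from each captured file into the consolidated image
    Dism /Export-Image /SourceImageFile:"D:\Images\Captured\$name.wim" /SourceIndex:2 /DestinationImageFile:"D:\Images\Consolidated.wim" /DestinationName:"$name"
}

# List the indexes in the consolidated file to confirm the result
Dism /Get-WimInfo /WimFile:"D:\Images\Consolidated.wim"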

SCCM OSD Part 1: Building Reference Images

This is the first in what will become a multi-part series of posts on configuring Operating System Deployment in Configuration Manager 2012 R2. The end goal will be to use Configuration Manager with MDT integration to provide a rich end-user experience for deploying operating systems.

In this first part, we will lay the foundation for what will become the core of the deployment – the Windows Operating System images. In this part, we will create task sequences to build and capture the reference images and update them as needed.

Import OEM Media OS Images

We start with our source Windows media. Copy the contents of the Windows .iso file you plan to use for your installations to a suitable directory in your SCCM source structure and import the Operating System Images as shown above. Repeat this for as many Operating System versions and architectures as you need to support. If you are supporting many operating systems, I would highly recommend creating a folder structure to aid locating the images.

Create Task Sequence Wizard Build and Capture

Once you have imported your base Operating System Images, we need to create a new Task Sequence. In the Task Sequence Wizard, select the Build and capture a reference operating system option.

Specify Task Sequence Name and Boot Image

Next, we need to give our Task Sequence a name and specify the boot image to use. You should always use the 32-bit (x86) boot image because this one image can deploy both 32-bit and 64-bit operating system images, whereas the 64-bit boot image can only deploy 64-bit operating system images.

Specify OS Image to Use as Reference

Next, we need to specify our source operating system. In this demonstration, I am using Windows 8.1 Enterprise with Update (x64). The install.wim file in the source Windows media only contains a single image, so Image 1 is automatically selected from the .wim file. If you are using a Windows image that provides multiple images, such as Home Basic, Home Premium and Professional, then you need to make sure you specify the correct image from the list.

Specify Join a Workgroup

Next, we need to specify that our machine joins a workgroup and not a domain. We don't want our reference machine to join the domain, as doing so would cause Group Policy Objects to be applied to the image, which could in turn install software, none of which we want included in the base image. Specify any workgroup name you like; I stick to WORKGROUP just for simplicity.

Set ConfigMgr Client Package Properties

On the step shown above, we need to configure the Configuration Manager Client Package that will be used to install the Configuration Manager Client. Configuration Manager will automatically select the package from the site, however we need to customise the parameters that get used for the installation. Parameters are automatically detected from the site Client Push Installation settings and in my case, this added the Fallback Status Point (FSP) entry automatically. We need to add to this the SMSMP parameter. The SMSMP parameter tells the Configuration Manager Client the name of the Configuration Manager Management Point. A domain client would find this automatically via Active Directory publishing of Configuration Manager, but as we are in a workgroup, we need to add it. Without this parameter, our Install Software Updates steps will fail to find any updates. Add the parameter as SMSMP=RJGCMSITE1.rjglab.local where RJGCMSITE1.rjglab.local is the FQDN of your Configuration Manager Management Point.
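For illustration only, a completed installation properties field in a single-server lab like mine might look like the line below; the FSP entry came from the Client Push Installation settings and the SMSMP entry is the one added by hand, so substitute the FQDNs of your own site systems.

FSP=RJGCMSITE1.rjglab.local SMSMP=RJGCMSITE1.rjglab.local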

Specify Install All Software Updates

After setting our SMSMP parameter, we need to tell the task sequence wizard that we want to install All Software Updates. This will install any updates which are either Required or Available to the client from any deployments that are visible to the client.

Specify Capture Path and Network Access Account

 

On the final step, we need to specify the capture path and a network access account. Specify the UNC path to the location where you want the captured reference image to be uploaded. This captured file is not automatically added to Configuration Manager once the capture process is complete. The network access account does not use the account configured in the Site Properties and requires us to re-enter the username and password. This is because we may be saving the captured image to a location or a server that the normal network access account does not have access to.

Once you reach this point, the reference image task sequence will be created with all the default steps and can be used as-is if you wish; however, I like to add a few more steps manually.

Add Install Software Steps to the Reference Image

As you can see from the image above, I have added an Install Software step to the task sequence to install .NET Framework 4.5.1 so that all of my reference machines include this newer version of .NET Framework. Other things you might want to consider including in your reference images are Windows Features such as .NET Framework 3.5.1 or software such as Visual C++ packages that will be required by your end-user applications later on down the road. This is down to personal preference and individual requirements so do as you will here. Use an Install Software step to perform this and reference the package and program as required to do so.

Add Software Update Scan Step

Next, I like to make some changes to the Install Software Updates phase of the sequence. Firstly, I have found, as have others in the community, that sometimes the task sequence simply fails to find any updates. We can fix this with two steps added to the task sequence. The first step, shown above, calls the Configuration Manager Client and forces it to perform a Software Update Scan Cycle. To add this yourself, use the following, added as a Run Command Line action in the task sequence.

WMIC /namespace:\\root\ccm path sms_client CALL TriggerSchedule "{00000000-0000-0000-0000-000000000113}" /NOINTERACTIVE
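If you would rather keep the whole step in PowerShell, the equivalent WMI call below should trigger the same Software Update Scan Cycle schedule; it is offered as an alternative sketch rather than the command used in my task sequence.

PowerShell.exe -Command "Invoke-WmiMethod -Namespace root\ccm -Class SMS_Client -Name TriggerSchedule -ArgumentList '{00000000-0000-0000-0000-000000000113}'"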

Add Software Update Wait Step

In the step following our forced Software Update Scan Cycle, add a wait timer to the task sequence. This gives the Software Update Scan Cycle enough time to run, complete and evaluate the update requirements. Some people will want to use a VBScript to do this, but doing so requires a package to be downloaded by the client. The easiest way is to use PowerShell and the Start-Sleep cmdlet. Add the following to the task sequence as a Run Command Line action to add the wait timer.

PowerShell.exe -Command Start-Sleep 45

You can change the timer from 45 to any number of seconds that you require but I found that 45 seconds works okay for my requirements.

As you will also see from the two screenshots above, I have added multiple Install Software Updates sections with a Restart Computer step following each wave. As we all know, some Windows updates require dependencies to be installed or require a restart to complete their installation. Having three iterations (waves) of Install Software Updates in the task sequence does add a chunk of time to the end of the capture process, but it is worth it, especially given you won't be running these often, if at all, after the first time. Having three passes of the Install Software Updates step will pretty much ensure that your reference images have all available updates installed and are fully up to date.

Once you've reached this point, your task sequence for building and capturing a reference image is done. If, like me, you are supporting multiple operating systems and architectures, you can now copy the task sequence to create duplicates of it. For each duplicate you create, edit the Apply Operating System and Capture the Reference Machine steps to change both the operating system image that gets applied to the reference machine and the path to which the image is captured.

Once you have created all of the required task sequences, advertise (deploy) them to a collection and run them on your clients. At the end of the process, you will have a captured .wim file for each operating system variant you support as a fully patched reference image, and we are ready to move on to the next step, image consolidation, which I will be posting about in the next day or so.

Extending SCUP with the Patch My PC Catalog

If you read my two previous posts, Preparing Certificates and GPOs for System Center Update Publisher and Setting Up System Center Update Publisher, you will already have a working SCUP installation integrated with Configuration Manager, and you will have the certificates and Group Policy Object settings in place for your clients to trust the updates distributed by SCUP. The downside of the work done with SCUP so far is that the out of the box catalogs that Microsoft give you access to are limited to what the software vendors provide to Microsoft. Adobe, Dell, Fujitsu and HP all provide catalogs, however none of these are complete or cover the entire product line, welcome as the gesture is.

Where SCUP becomes really powerful is when we look beyond these out of the box catalogs and start patching other third-party software that doesn't normally get delivered through Windows Update, and the primary reason is security.

Third-party applications, as much as we need them, can be the bane of an administrator's life, and keeping them up to date is a challenge, especially with heavily updated applications like Adobe Flash Player or Google Chrome. We need to keep pace with these updates to make sure that the fixes for the vulnerabilities and CVEs addressed by the updated versions get into the hands of our users, but it is a balance between time, effort and cost, as are all things in business. Depending on the sector or organisation you work for, you might have a compliance requirement to keep pace too. UK bodies that use the Public Services Network (PSN) or organisations accepting credit card payments that must comply with PCI DSS all have requirements to maintain applications within a certain number of versions of the latest available release.

Another reason for considering SCUP for these third-party updates is consistency and efficiency. Google Chrome and Adobe Flash Player, for example, both have automatic update engines built in that are designed to keep the products up to date, however these systems aren't designed with the enterprise in mind. As a result, we not only find divergent versions of software across the estate, but we also find a large amount of internet bandwidth being consumed by downloading these updates on each and every client. Yes, there are workarounds such as caching the updates on a proxy server, but that doesn't really resolve the root issue.

Home Brew Updates and Detectoids

The brave amongst you may have looked around the SCUP console and realised that you can import your own updates from a Local Update Source and that you can write your own detectoid rules to locate installed software at specific versions, but that is time-consuming work, requires a lot of testing and is prone to error: I tried to write custom detectoids for patching Oracle Java in a previous life and it didn't go so well, even though I followed instructions that somebody else claimed had worked.

If we look back to the statement I made about balancing time, effort and cost, creating custom updates in SCUP consumes all three, although the cost is borne out of man-hours spent on the endeavour rather than a purchase cost. This isn't an effective solution, so we need to find something else.

Patch My PC SCUP Catalog

As we already know, SCUP provides some out of the box catalogs for getting third-party updates, but the list of products and vendors is extremely limited. To my mind, the worst offenders like Oracle with Java and Google with Chrome should be doing more to help enterprises with services like SCUP catalogs, but sadly they don't. Luckily for us, the market answers our needs, and here is where I introduce a company called Patch My PC, who have a product simply named SCUP Catalog.

What Patch My PC provide is a subscription-based catalog that we can import into our SCUP console; they do all the hard work of creating the detectoids, pulling together the update files and, crucially, the testing. Unlike most enterprise software that costs the earth, Patch My PC is priced simply and fairly: $1 per managed client per year. There is a minimum order of 250 managed clients, so even if you have only 100 devices you still need to license 250, but at $1 per client per year, I fail to see how any organisation could manage the patching of third-party applications more cheaply.

Before I get any further into this post, I just want to make one thing clear. As with all of my posts on this blog, nobody is paying me to write a favourable review or say anything nice about their company in exchange for favours. I approached Patch My PC to request an NFR license for my lab so that I could blog about it and show you the value of the software, not because I'm making revenue from advertising their product for them. There are other products on the market which can perform a similar job to the Patch My PC SCUP Catalog, but none of them do it with the simplicity shown here, nor do any of them come even remotely close on value for money. As we all know, enterprise IT budgets are squeezed year-on-year, so if we can achieve something more effectively and more cost-consciously then it is a good thing.

Add Patch My PC SCUP Catalog

After registration and payment, you will be emailed a URL to a .cab file. You don't need to download this file as it is updated frequently by the team at Patch My PC with the latest updates. In the SCUP console, on the Catalogs page, select the Add Catalog link in the Ribbon. In the wizard, enter the URL given to you for your unique catalog and enter the details for Patch My PC as shown into the various form fields.

Import Patch My PC SCUP Catalog

Once you have added the catalog, you need to import it. Still on the Catalogs page in the console, select the Import button and select the Patch My PC catalog to import it. Unlike the out of the box catalogs I showed in my previous posts, this will take longer to import as there is a lot more content, but it shouldn't take more than a minute or two.

Publish Patch My PC Updates to WSUS

With the catalog imported, head over to the Updates page and take a look at the list of products and updates that the catalog has added to SCUP. There are too many products for me to mention directly here, but you can see the list they maintain at https://patchmypc.net/supported-products-scup-catalog. To deploy an update to clients, we need to publish it to WSUS. Select the update(s) you want to deploy and select the Publish option from the Ribbon.

Once you have published the updates they will be inserted into WSUS and we now need to make a quick change in Configuration Manager for the remainder of the process to work.

Add Products to SCCM SUP Point

In your Configuration Manager Administration Console, navigate to the Administration page and expand the Site Configuration folder followed by Sites. In the main area, right-click your Configuration Manager site and select the Configure Site Components menu item followed by Software Update Point. In the SUP settings, select the Products tab and check the boxes for all of the products you just published into WSUS as they will currently not be enabled.

SCCM Software Updates with Patch My PC

Once you have done this, the next time your Software Update Point WSUS server performs a synchronisation either automatically on the schedule or if you force one, the updates for the recently added products will appear in the All Software Updates view of the console and will be available for you to deploy to your clients following your normal software update process.

As you can see, with Patch My PC we can use SCUP to get third-party software updates published into WSUS and made available to Configuration Manager for deployment to clients extremely quickly and easily, without having to create our own custom updates or detection rules. Furthermore, we no longer need to manually create Software Packages in Configuration Manager for the updated products, or Device Collections to locate machines on the network with particular software versions installed, just to target the deployment of these updates.

The whole process took me no more than 30 minutes in my lab to get set up, with a working Update Publisher deployment already in place. Now that it is done, it would take less than ten minutes each month to add approvals for the products I am interested in and get them into Configuration Manager ready to roll out to clients. Being able to achieve this level of simplicity in third-party patch management for $1 per device per year is frankly amazing.

SCCM OSD Failed Setup Could Not Configure One or More Components

Last week I got asked to look at an issue where a new model of laptop was failing to deploy using a Configuration Manager Operating System Deployment Task Sequence. We knew that the environment was good as other machines were able to complete the task sequence without any issues and the first thought was that it could be a driver issue.

Initially I was sceptical of it being a driver issue, because when we see problems with machines completing operating system deployment, driver problems normally fall into one of two categories: the silent fail, where the driver is missing altogether and we end up with a yellow exclamation mark in Device Manager, or the hard fail, where the missing or problematic driver relates to network or storage and blocks the task sequence from completing.

In this instance, however, we knew that the problem was specific to this model. Given that we were failing in the Windows Setup portion of the task sequence, the usual smsts.log file is of no help because the Configuration Manager task sequence has not yet been re-initialized after the reboot at the Setup Windows and ConfigMgr step. In this instance, we need to refer to the setuperr.log and the setupact.log files in the Panther directory, which you will find in the Windows installation directory. This is where errors and actions relating to Windows Setup live, as opposed to the normal smsts.log file.

We rebooted the machine back into WinPE so that we could open the log file with Notepad and began reading the file. Sure enough, we hit an error and the code given was 0x80004005. Looking at the activity either side of this, we could see that the machine was busy with the CBS (Component Based Servicing) engine, initializing and terminating driver operations, so we knew that something had happened to a driver to cause this problem.

At this point, we had nothing more to go on. Two weeks ago, I had a similar issue with another customer where the problem was clearly logged in the setuperr.log file: an update we had added to the image with Offline Servicing required .NET Framework 4.5 to be present on the machine, but Dism didn't know to check for that, so we simply removed the update. Here, we had no such helpful fault information.

Given that this was a new machine and that we were deploying Windows 7, I had a thought: what if the drivers being applied require the User-Mode or Kernel-Mode Driver Framework 1.11 updates that were released for Windows 7 some time ago?

This theory was easy to check. I mounted the Windows 7 .wim file on our SCCM server and then used the Get-Packages switch for Dism to list the installed updates in the image. Sure enough, User-Mode Driver Framework 1.11 (KB2685813) and Kernel-Mode Driver Framework 1.11 (KB2685811) were both absent from the list. I downloaded the updates from the Microsoft Download Center, offline serviced the Windows 7 image with the updates and committed the changes back into the .wim file.
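If you want to repeat the check and fix, the Dism commands below sketch the process described above; the image path, mount directory and .msu file names are illustrative, so substitute your own paths and the update packages you download. The /Index:2 here assumes a captured reference image where the operating system sits in the second index, as discussed in the OSD posts; for an OEM install.wim it would typically be index 1.

Dism /Mount-Wim /WimFile:"D:\Images\Windows 7 x64.wim" /Index:2 /MountDir:"D:\Mount"
Dism /Image:"D:\Mount" /Get-Packages
Dism /Image:"D:\Mount" /Add-Package /PackagePath:"D:\Updates\Windows6.1-KB2685813-x64.msu"
Dism /Image:"D:\Mount" /Add-Package /PackagePath:"D:\Updates\Windows6.1-KB2685811-x64.msu"
Dism /Unmount-Wim /MountDir:"D:\Mount" /Commit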

After reloading the image in the Configuration Manager Administration Console and updating the .wim file package on the Distribution Points, we re-ran the task sequence and, by Jove, the machine completed the task sequence with no dramas.

For background reading, the User-Mode and Kernel-Mode Driver Framework 1.11 updates are required to install any driver which was written using the Windows 8 or Windows 8.1 Driver Kit. What I have yet to determine is whether there is a way of checking a driver .inf file to identify the version of the Driver Framework it requires. If there were, Configuration Manager administrators around the world would rejoice a little, so if you do know of a way to check this, please let me know as I would be interested to hear. This would not have been an issue had the reference image been patched with the latest (or at least some) Windows Updates, but in this case I was not so lucky.

Setting Up System Center Update Publisher

In my earlier post Preparing Certificates and GPOs for System Center Update Publisher, I showed you how you can prepare your environment with the appropriate certificate and Group Policy Object to support a System Center Update Publisher installation. With all of this installed and configured, the time is upon us to now install and configure System Center Update Publisher.

I am not going to go through the installation process for SCUP here because it is literally a Next, Next, Finish installation. What I will tell you though is that the latest version of SCUP is 2011 and you can download it from http://www.microsoft.com/en-gb/download/details.aspx?id=11940. The steps in this post can be applied to Configuration Manager 2007 or Configuration Manager 2012 and 2012 R2 but all of my screenshots for the Configuration Manager side of things will be in SCCM 2012 R2.

Configure SCUP Options

Once you have SCUP installed, open the console, ensuring that you use the Run As Administrator option. If you don't elevate the console when you launch it, you will be prevented from changing a number of the options and settings. Once open, click the blue icon in the first position on the Ribbon and select Options to get to the settings.

SCUP Configure WSUS Server

First, we want to configure the WSUS Server settings tab. On this tab, you can either specify the hostname for a remote WSUS server or if you are running the SCUP console locally on your WSUS server you can select the option for Connect to a Local Update Server. An important note here is that if you are connecting to a remote WSUS server, the connection must be over SSL on either Port 443 or Port 8531 in order to be able to configure the Signing Certificate settings.

Once you have specified the server, select the Browse button in the Signing Certificate area and locate the .pfx file that has the exported Code Signing certificate including the private key that was exported in the Preparing Certificates and GPOs for System Center Update Publisher post. Once you have located the file and the path is shown in the field, select the Create button and this will publish the certificate into WSUS. You will be prompted to enter the password for the .pfx file at this point.

SCUP Configure SCCM Server

With the WSUS settings configured, we now need to head to the ConfigMgr Server tab. Here, select the option to connect to a local Configuration Manager server if you are running the SCUP console on your Primary Site Server; otherwise, enter the remote server name.

In the fields in the lower part of the screen, you can specify the behaviour of SCUP for transitioning updates between Metadata only and Full Content publishing status according to the required client count. In a nutshell, you can have SCUP publish only the metadata for an update into SCCM to allow you to determine if clients require the update. Once a defined number of clients report the update as required, SCUP will change the status of the update to Full Content and will download the files such as .msp or .exe files for the update.

Adding Catalogs to SCUP

SCUP works by using catalogs, which are lists of updates published by manufacturers. Included in these catalogs are the update definitions, called detectoids, which determine whether a client meets the requirements for an update and define the URL from which SCUP can download the update.

In the SCUP console, select the Catalogs button from the left navigation and then hit the Add Catalogs button from the Ribbon.

SCUP Add Catalogs

After clicking the Add Catalogs button, you will be presented with the list of partner catalogs supported by SCUP. These are out of the box and are at no cost to use. To my mind, the Adobe Reader and Adobe Flash Player are the most important. As you can see from the screenshot, I have added these two catalogs from the list of partner catalogs to include in my SCUP catalogs to be used.

Once you have added catalogs to SCUP, we aren't quite finished: that only adds them to the list of catalogs that can be used and does not automatically start pulling update information. Now, we need to import the catalogs. Select the Import button from the Ribbon to access the Import Software Updates Catalog Wizard and here, select one, some or all of the catalogs you just added. This may take a few moments and you might receive a security warning asking you to accept some certificates in the process, so go ahead and allow this.

Publishing Updates from SCUP

With the update catalogs added to SCUP and the updates in those catalogs imported, it is now time to look at some actual updates. Head over to the Updates view in the console using the button in the lower-left corner and expand one of the folders to view a subset of the updates.

SCUP Updates List

Here we can see the name of each update, any relevant article IDs or CVEs that it addresses, the date the update was released and whether or not it is expired. As you can see for Adobe Flash Player, many of the updates are expired because they have been superseded by later updates. Highlight an update that has not been superseded and select the Publish button in the Ribbon. Click through the wizard to download the update files if required and the update will be published into WSUS ready for SCCM to use.

Configuring SCCM Software Update Point Products

With the updates now published into WSUS for Configuration Manager to use, we need to make sure that Configuration Manager will be able to detect them. As part of installing and configuring Configuration Manager, you will have set up the products and classifications for which you want to download updates, and we need to add to these the products that we just published with SCUP.

In the Configuration Manager Administration Console, navigate to the Administration page and then expand the Site Configuration followed by the Sites view. Right-click on your site and then select the Configure Site Components menu item followed by Software Update Point.

SCCM SUP Products

As you can see in the screenshot above, after publishing the Adobe updates into WSUS, there are now some additional products listed for Adobe Systems Inc, including Flash Player and Reader. There is also a new product called Local Publisher, which is the product SCUP uses for any updates you create manually. Check all of the new products you want to be able to deploy to clients and then save the changes to the Software Update Point role.

Viewing the SCUP Updates in SCCM

SCCM Adobe Updates Available

With the updates now published to WSUS for Configuration Manager and with Configuration Manager's SUP role configured to accept updates for these products, we're all set. You can either wait for the WSUS server to perform a scheduled synchronisation or you can force one from the Software Updates area of the Software Library page in the console. Once a synchronisation has occurred, Configuration Manager will list the new updates for the new products.

As you can see in the screenshot above, I used criteria to filter the search results for Bulletin ID contains APSB, which is the prefix Adobe uses for all of their security updates, much like Microsoft use KB to prefix their updates. I can now follow the normal process of downloading the updates into Deployment Packages and approving the updates for distribution to collections.

 

Preparing Certificates and GPOs for System Center Update Publisher

If you are using Configuration Manager to manage and patch your client estate, then you already know that it's great to have your Software Updates in the same console as your Application Delivery, and the way in which Configuration Manager 2012 R2 manages Software Updates is a big leap in usability over Configuration Manager 2007. The missing piece of the puzzle for many, however, is managing non-Microsoft updates, and for that we need to enlist the help of a free product from Microsoft called System Center Update Publisher.

Before we start anything with Configuration Manager, WSUS or SCUP, however, we have the small matter of prerequisites to cover off, and in this case that means a certificate and a Group Policy setting or two. The certificate we are interested in is a Code Signing certificate which, unless you are familiar with signing PowerShell scripts that you author, you may not have come across previously, and your internal CA may not be set up to issue. You can buy a Code Signing certificate from an external third-party CA if you wish, but it is easiest and best done internally; after all, the code you are going to be signing is updates for your internal clients.

Creating the Code Signing Certificate Template

On your Certificate Authority, we need to configure a Code Signing certificate to be issued. You can either use the native Code Signing template or you can create a custom template just for SCUP so that you can limit the scope of the certificate template to selected users or a group of users. If you want to create a new template, duplicate the existing Code Signing template for the purpose.

Once you have decided on the template to use, configure the CA to issue the certificate. In my lab, my template is called SCUP Code Signing and the security on the template limits Enroll permission to users in an Active Directory group called SCUP Code Signing Users, which prevents other users, malicious or otherwise, from requesting the certificate.

SCUP Code Signing CA Template

Request the Code Signing Certificate

Once you have configured everything on the CA, you need to request a certificate based on this template. Using the Certificates MMC snap-in for your user account, you can request to enrol the certificate from your Active Directory Enrollment Policy.

SCUP Code Signing Certificate Request

If you based your new certificate template on the Code Signing template or you used the Code Signing template directly, you don't need to enter additional information and the request will be built from Active Directory user attributes. Once you have been issued the certificate, you need to export it twice. For the first export, export the certificate only in .cer format and do not export the private key; this portion of the certificate will be used in the Group Policy Object shortly. The second export needs to be in .pfx format and include the private key; this is used to configure SCUP once it is installed.
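If you prefer to script the two exports rather than use the Certificates MMC, the PowerShell below is a minimal sketch using the Export-Certificate and Export-PfxCertificate cmdlets available on Windows 8 / Server 2012 and later; the filter, output paths and password handling are assumptions, so adjust them to match your certificate and environment.

# Find the issued code signing certificate in the current user's personal store (adjust the filter to match your certificate)
$cert = Get-ChildItem Cert:\CurrentUser\My | Where-Object { $_.EnhancedKeyUsageList.FriendlyName -contains "Code Signing" } | Select-Object -First 1

# Make sure the output folder exists
New-Item -ItemType Directory -Path C:\Certs -Force | Out-Null

# Export the public key only (.cer) for use in the Trusted Publishers GPO
Export-Certificate -Cert $cert -FilePath C:\Certs\SCUPCodeSigning.cer

# Export the certificate with its private key (.pfx) for use in the SCUP console
$password = Read-Host -AsSecureString -Prompt "PFX password"
Export-PfxCertificate -Cert $cert -FilePath C:\Certs\SCUPCodeSigning.pfx -Password $password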

Configure the Trusted Publishers Group Policy Setting

Once you have issued the certificate and exported it twice, once as a .cer file and once as a .pfx file, we need to configure the Group Policy for Trusted Publishers. Put simply, in order for your client PCs to install updates that are not signed by Microsoft, the clients need to trust the updates. In order for the updates to be trusted, they need to be signed with a certificate that the clients trust, and having a certificate from your internal CA isn't enough on its own. A client will trust the certificate because it comes from a Trusted Root Certification Authority, but it will not be trusted for code signing unless it is added to the appropriate certificate store.

Using either a new Group Policy Object or an existing object which contains your other Certificate Services related settings, we need to add the .cer certificate exported earlier to the policy.

Trusted Publishers GPO Setting

Within the Group Policy Object, expand the Computer Configuration folder and then drill into Security Settings followed by Public Key Policies. Within the Public Key Policies folder, open the Trusted Publishers folder. In here, you need to import the Code Signing certificate .cer file that was previously exported. Doing this allows your clients to trust updates signed with this certificate for the publishing of software and applications.

Make sure you use the .cer export and not the .pfx export here, as we only want the clients to have and trust the public key portion of the certificate. Distributing the .pfx would also give these clients the private key, which is not something you want pushed throughout the entire environment to every machine linked to the GPO.

Next, we need to change one setting relating to the Windows Update Agent on the clients. In the same GPO, or in another GPO if you have one dedicated to Windows Update related settings, navigate to Computer Configuration, Administrative Templates, Windows Components, Windows Update. Here, change the Allow signed updates from an intranet Microsoft update service location setting from Not Configured to Enabled. This setting allows the Windows Update Agent to detect and download updates from your WSUS and SCCM environment when they are not signed by Microsoft, and it is paired with the Trusted Publishers certificate above to make non-Microsoft updates trusted on the client.
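On a client that has applied the GPO, a quick way to sanity-check the Windows Update Agent setting is to read the policy value it writes to the registry; to the best of my knowledge this policy maps to the AcceptTrustedPublisherCerts value shown below, so treat this as an assumption to verify rather than gospel.

# A value of 1 indicates that signed updates from an intranet Microsoft update service location are allowed
Get-ItemProperty -Path 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate' -Name AcceptTrustedPublisherCerts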

With all of the above completed, you are now set and ready to deploy System Center Update Publisher, and a follow-up post I will be publishing soon will cover the SCUP installation and setup.

Delta CRLs are Not Accessible via HTTP When Hosted on IIS

If you are running a Microsoft PKI in your environment, then chances are you will have (or at least you should have) configured at least one HTTP-based distribution point (CDP) for your Certificate Revocation Lists. If you are only publishing full CRLs then you will have no problems; however, if you are publishing Delta CRLs, the smaller, faster to process kind which list only certificates revoked since the last full publish, then you may encounter an issue if you are using an IIS website to publish them.

The problem lies in the filename used for the CRLs. In my lab, for example, my Certificate Authority issues a CRL file named rjglab-CA.crl and the Delta CRL is named the same as the full CRL but appended with the plus character, making the file name rjglab-CA+.crl. In its default configuration, IIS does not permit the use of the plus character because that character falls foul of IIS request filtering and the request handler.

HTTP Error Downloading Delta CRL

We can see in the screenshot above the error code and message given by IIS when we try to download the Delta CRL in the default configuration.

For an IIS web server hosting your CRL and Delta CRL, we need to change the behaviour of IIS to permit this plus character, which luckily is easily done. First, open IIS Manager on the server which is hosting your Delta CRL file for clients. From the server home in IIS, open the Request Filtering page and, from this page, select the Edit Feature Settings button in the Actions bar.

Request Filtering Settings in IIS

On the Edit Request Filtering Settings page under the General section, by default, Allow Double Escaping is disabled. Enable this option and then press OK.
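If you would rather make the change from the command line, the same switch can be flipped with the WebAdministration PowerShell module; the sketch below assumes the CRL files are served from the Default Web Site, so point the -PSPath at whichever site or virtual directory actually hosts your CDP.

Import-Module WebAdministration

# Allow the + character in request URLs by enabling double escaping on the site hosting the CRLs
Set-WebConfigurationProperty -PSPath 'IIS:\Sites\Default Web Site' -Filter 'system.webServer/security/requestFiltering' -Name allowDoubleEscaping -Value $true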

Once you have made the change, try to download the Delta CRL file and you should find that the file is available and you can successfully download it.

Delta CRL Downloaded OK

Extended Validation (EV) with an Internal Certificate Authority

As IT pros, we know that Extended Validation (EV) on web server certificates doesn't actually add a security layer or harden our web servers in any way, but it does give users the warm, fuzzy feeling that the website they are using is definitely trustworthy. Given that we want our users to believe everything we do internally in IT is trustworthy, it would be great to have our internal web services use Extended Validation certificates for user-facing websites.

If you are using a Windows Active Directory Certificate Services (ADCS) certificate authority to issue your certificates, then the great news is that we can do this, and it can be made to work in an existing environment. You don't need to build a new Root CA or set up new servers for it to work; we just need to create a new Certificate Template and a Group Policy Object in the domain.

Configure the Certificate Authority

The first step is to create the Certificate Template. On your ADCS server where you issue your Web Server certificates, open the Certificate Authority MMC console. From the console, right-click on the Certificate Templates folder and select Manage.

Manage Certificate Templates

Once you have clicked this, another window will open with the list of Certificate Templates configured in the environment. Find the Web Server certificate, right-click it and select the Duplicate Template option.

New Template Properties

At the Properties for New Template dialog, enter a display name that is appropriate such as “Web Server with EV” or “Web Server Extended Validation”. From here, click the Extensions tab.

New Template Properties Extensions

On the Extensions tab, highlight the Issuance Policies list item and select Edit. At the window which appears, select the New button to add a new Issuance Policy.

EV Issuance Policy

Give your new issuance policy a name such as "EV Issuance Policy" and, if you have one (which you should for production), enter your Certification Practice Statement URI. If you don't know what a Certification Practice Statement (CPS) is, then I would suggest the TechNet documentation on Certificate Policies and Certification Practice Statements as a first primer; in a nutshell, it's a webpage which gives people information about how the certificates can be used.

Before you hit OK on the New Issuance Policy dialog, note the final field, OID. Copy this OID to your clipboard and keep it there for the time being or, better yet, save it to a text document in a safe place, as we need it for the steps later.

Once you have this, hit OK on the dialog and change any other settings on the template you may need to such as the validity period, the key length or whether you want to allow the private key to be exported. Once you have created the new template, we need to configure the CA to be able to issue it.

CA Certificate Template to Issue

As shown above, back in the Certificate Authority console, right-click on the Certificate Templates folder and this time select New followed by the Certificate Template to Issue option. From the list of templates, select the new template you just created for Web Server with Extended Validation.

After this, the Certificate Authority is configured with a new template that can be used for Extended Validation and the CA is configured to issue certificates based on that template. However, it's no good having the certificates if the clients do not know to trust them to the extent required to display the green address bar.

Configure Group Policy in Active Directory

With the CA configured, we need to configure clients to trust this certificate for Extended Validation, and the best method for this is Group Policy. If you have an existing Group Policy Object for certificate related settings then use that policy; otherwise, create a new one and link it either at the root of your domain to apply it to all computers, or to a particular OU if you only want it to apply to a sub-set of clients. Just for clarity, I would not recommend putting certificate related settings, or indeed any new settings, into the Default Domain Policy. The Default Domain Policy and the Default Domain Controllers Policy should be left untouched and new policy objects should be created for any settings you want to apply.

In your Group Policy Object, expand the view in Computer Configuration followed by Security Settings, Public Key Policies and finally Trusted Root Certification Authorities. If you are using an existing policy, you should have here a valid copy of the public key portion of the certificate for your Root CA. If you are creating this as a new policy, you will need to import the public key portion of your Root CA certificate.

GPO Trusted Root Certification Authorities GPO Trusted Root CA Extended Validation Properties

Once your certificate is added, right-click it and select Properties. From the properties, select the Extended Validation tab. On this tab, add the OID that you copied or saved to a text document earlier. Any OIDs in this list are considered trusted for Extended Validation when a certificate contains an Issuance Policy matching that OID and the certificate is issued by a CA that is part of the issuing or subordinate chain below the specified Root CA.

Once you have applied the GPO to your clients, you can issue a new certificate for a website using the Web Server Extended Validation template and, when browsing to that site from a client computer which both trusts your Root CA and understands the OID applied to the Issuance Policy, you will get the green address bar.

Website with EV Certificate

Automatically Select a Configuration Manager Task Sequence

With Configuration Manager, we have a great amount of power and control over how we deploy computers using Task Sequences. Many people go to great lengths to make their operating system deployment experience a great one be it for end-users with User Driven Installation or for their support technicians driving the builds instead. One way which we can streamline the process is to use a consolidated task sequence which handles all of our various operating systems, languages, drivers and applications in a single task sequence with intelligence.

If you have gone to these great lengths in your environment, you may be left with a sinking feeling that every time you PXE boot or USB media boot a client to deploy, you still have to select a task sequence to run, even though you only have one, as shown in the sample below.

Select TS Task Sequence Wizard

Luckily, there is a solution that allows us to skip the Task Sequence Selection wizard and automatically enter our one task sequence to rule them all, and it is done using Prestart Commands on our Boot Images in Configuration Manager. You don't need MDT or any other fancy software integration with Configuration Manager to do this, as it works with the default boot images.

To start, we need to know the Deployment ID for the Task Sequence. This is not to be confused with the Package ID. The Deployment ID is the unique ID assigned to a single instance of the task sequence deployed to a collection.

We can get the Deployment ID for the Task Sequence by navigating to the Software Library portion of the Configuration Manager Administration Console and then expanding Operating Systems followed by Task Sequences and then locate your sequence from any folder structure you may have created. With the Task Sequence selected, the lower half of the screen will show the summary properties for it. Click the Deployments tab at the bottom here to see where your Task Sequence is deployed, in my example, to the All Systems collection.

In this area, right-click on the column titles and add the Deployment ID column to the view. This is the value we need.

Task Sequence Deployment ID

With this value in hand, we now need to create a very simple VBScript file. I store this file in a directory called OSD Prestart Files on my Primary Site Server but where you store it is up to you. Create a VBScript with the following contents.

' Connect to the task sequence environment
Set DefaultOSDTS = CreateObject("Microsoft.SMS.TSEnvironment")
' Pre-select the deployment so the Task Sequence Selection wizard is skipped
DefaultOSDTS("SMSPreferredAdvertID") = "RJG20008"

In your case, you will need to change the value assigned to SMSPreferredAdvertID to the Deployment ID shown in your console. My Deployment ID was RJG20008, as set above.

Once you have created and saved this file, head over to the Boot Images section of the Operating System portion in the Software Library and view the Properties for your boot image.

Boot Image Prestart Command Setting

As you can see from the image above, we need to check the box for the Enable Prestart Command option on the Customisation tab of the properties. Once this is checked, we can add the command to call our VBScript file, which in my case is cscript AutoStartOSD.vbs. Once you have entered this, check the box for Include files for the prestart command and specify the path to the script file you created so that it is integrated into the Boot Image file.

When you save the changes, you will be prompted to update your Boot Image files and the image will be rebuilt and updated on your Distribution Points. Once complete, if you are using PXE to boot your clients you need do nothing more and you can start enjoying your automatically starting task sequence. If you are using USB or ISO boot media to start your task sequence process, you will need to re-create your media as the Boot Image that it is based on has been updated.

Access Office 365 with Azure ExpressRoute

Azure ExpressRoute, when it launched back in 2014, was for me one of the most exciting propositions in Azure: the ability to rapidly provision, scale and consume PaaS and IaaS resources in the Microsoft Cloud. It lacked one thing, however, and that was Office 365. Whilst many, many customers are adopting Office 365, having that traffic routed out over your internet connection is seen by some as a security concern, and for others it's a bandwidth problem they just don't want to deal with.

Earlier this week, the Office team posted a blog at http://blogs.office.com/2015/03/17/announcing-azure-expressroute-connectivity-to-office-365/ announcing that Office 365 over Azure ExpressRoute is on the way, although sadly not until Q3 2015.

The wait aside, this is great news, both for customers seeking the maximum performance for their Office 365 deployments and their on-premises users, and because it is another string in the public cloud productivity suite's bow. I look forward to seeing it reach the mainstream and seeing it in action.