
SCCM OSD Part 2: Consolidating the Captured Images

In part one of this series, SCCM OSD Part 1: Building Reference Images, we set up task sequences to capture reference images for all of the required operating systems. Further on in this series, we will be using the Microsoft Deployment Toolkit to create a User-Driven Installation (UDI) with the Configuration Manager integration, and for this to work, we need to consolidate our images into a single master .wim file.

This post focuses on that step. There are no screenshots to offer here as this is a purely command-driven exercise. To complete the steps in this post, you will need to know the path to the .wim files captured by your reference image task sequences. For the purposes of this post, I will assume that all of your captured images are stored in the D:\Images\Captured path on your server. To keep this post consistent with my lab environment, I will provide the commands for consolidating the Windows 7 and Windows 8.1 images, in both 32-bit and 64-bit architectures, into the consolidated image.

We start the process by exporting the first image into the consolidated file with the following command:

Dism /Export-Image /SourceImageFile:"D:\Images\Captured\Windows 7 x86.wim" /SourceIndex:2 /DestinationImageFile:"D:\Images\Consolidated.wim" /DestinationName:"Windows 7 x86"

Dism is a complicated beast and has a lot of switches that we need to throw into our commands to make it work as we want. To make it worse, Dism is a command-line tool, not a PowerShell cmdlet, so we don't have the luxury of tabbing our commands to completion. When you enter this command for yourself, make sure you include the quote marks around any names or file paths with spaces. If you have no spaces in your names or paths then you can omit the quotes.

To break down this command, the Export-Image opening parameter tells Dism that we want to export the contents of one .wim file into another. The SourceImageFile parameter tells Dism where our source file is located and the SourceIndex tells it which image within the .wim file we want to export. The reason we need to target index 2 is that when a machine is captured using the Build and Capture task sequence, two partitions are created on the disk and captured. The first is a 300MB System Reserved partition used for boot files, BitLocker and the Windows Recovery Environment if any of these features are configured. The second partition is used to install the actual Windows operating system. DestinationImageFile is obvious in that we are telling Dism where we want the image from the original file to be saved. In essence, we are telling Dism to create a new file from the index of an existing image. The DestinationName parameter is not required but it makes our lives a lot easier down the line. With DestinationName, we provide a friendly name for the index within the .wim file so that when we are using SCCM or MDT to work with the image, we are shown not only the index number but a friendly name to help us understand what each index in the image file contains.
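
If you want to double-check which index holds the operating system before exporting anything, you can list the contents of the captured file first using the same /Get-WimInfo switch that we use later in this post to review the consolidated file. For example, against the lab path assumed above:

Dism /Get-WimInfo /WimFile:"D:\Images\Captured\Windows 7 x86.wim"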

The command will execute fairly quickly and once complete, we will have a new file called Consolidated.wim with the contents of the original .wim file for Windows 7 x86. Now, we repeat the command for Windows 7 x64.

Dism /Export-Image /SourceImageFile:"D:\Images\Captured\Windows 7 x64.wim" /SourceIndex:2 /DestinationImageFile:"D:\Images\Consolidated.wim" /DestinationName:"Windows 7 x64"

You will notice the two differences here. Firstly, we specify a different SourceImageFile, pointing to the 64-bit version of Windows 7. The second difference is the DestinationName. When we run this command, Dism sees that the Consolidated.wim file already exists and does not overwrite it; instead, it applies our Export-Image command as a second index in the Consolidated.wim file, and hence you see how we build a consolidated image.

Repeat the command twice more to add the Windows 8.1 images:

Dism /Export-Image /SourceImageFile:"D:\Images\Captured\Windows 8.1 x86.wim" /SourceIndex:2 /DestinationImageFile:"D:\Images\Consolidated.wim" /DestinationName:"Windows 8.1 x86"
Dism /Export-Image /SourceImageFile:"D:\Images\Captured\Windows 8.1 x64.wim" /SourceIndex:2 /DestinationImageFile:"D:\Images\Consolidated.wim" /DestinationName:"Windows 8.1 x64"

Once these two commands have completed, it’s time to review our work. Use the following command with Dism once more to list the contents of the new Consolidated.wim file and make sure everything is as we expect it.

Dism /Get-WimInfo /WimFile:"D:\Images\Consolidated.wim"

The result will be that Dism outputs to the command line the name, index number and size of all of the indexes within the image. If you are following my steps above to the letter then you will have a resulting Consolidated.wim file with four indexes, each with a friendly name to match the operating system within that given index.

SCCM OSD Failed Setup Could Not Configure One or More Components

Last week I got asked to look at an issue where a new model of laptop was failing to deploy using a Configuration Manager Operating System Deployment Task Sequence. We knew that the environment was good as other machines were able to complete the task sequence without any issues and the first thought was that it could be a driver issue.

Initially I was sceptical of it being a driver issue. When we see problems with machines completing an operating system deployment, driver problems normally fall into one of two categories: the silent failure, where the driver is missing altogether and we end up with a yellow exclamation mark in Device Manager, or the hard failure, where the missing or problematic driver relates to network or storage and blocks the task sequence from completing.

In this instance however, we knew that the problem was specific to this model. Given that we were failing in the Windows Setup portion of the task sequence, the usual smsts.log file is of no help because the Configuration Manager task sequence has not yet been re-initialized after the reboot at the Setup Windows and ConfigMgr step. In this instance, we need to refer to setuperr.log and setupact.log in the Panther directory, which you will find in the Windows installation directory. This is where errors and actions relating to Windows Setup are recorded, as opposed to the normal smsts.log file.

We rebooted the machine back into WinPE to allow us to open the log file with visual Notepad and began reading the file. Sure enough, we hit an error and the code given was 0x80004005. Looking at the activity either side of this, we can see that the machine is busy at work with the CBS (Component Based Servicing) engine and is initializing and terminating driver operations so we know that something has happened to a driver to cause this problem.
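
For illustration, the log can be opened from the WinPE Command Prompt with something along these lines; the drive letter is an assumption, as the deployed Windows volume may not appear as C: when viewed from WinPE:

notepad C:\Windows\Panther\setuperr.log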

At this point, we had nothing more to go on. Two weeks ago, I had a similar issue with another customer where the problem was clearly logged in the setuperr.log file: an update we had added to the image with Offline Servicing required .NET Framework 4.5 to be present on the machine, but Dism didn't know to check for that, so we simply removed the update. Here, we had no such helpful fault information.

Given that this was a new machine and given that we were deploying Windows 7, I had a thought: what if the drivers being applied require the User-Mode or Kernel-Mode Driver Framework 1.11 updates that were released for Windows 7 some time ago?

This theory was easy to check. I mounted the Windows 7 .wim file on our SCCM server and then used the /Get-Packages switch for Dism to list the installed updates in the image. Sure enough, User-Mode Driver Framework 1.11 (KB2685813) and Kernel-Mode Driver Framework 1.11 (KB2685811) were both absent from the list. I downloaded the updates from the Microsoft Download Center, offline serviced the Windows 7 image with the updates and committed the changes back into the .wim file.
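
As a rough sketch of the offline servicing steps involved, assuming the image sits at D:\Images\Captured\Windows 7 x64.wim with the operating system in index 2, C:\Mount is an empty folder and the downloaded .msu files live in C:\Updates (all of these paths and file names are assumptions for illustration):

Dism /Mount-Wim /WimFile:"D:\Images\Captured\Windows 7 x64.wim" /Index:2 /MountDir:C:\Mount
Dism /Image:C:\Mount /Get-Packages
Dism /Image:C:\Mount /Add-Package /PackagePath:C:\Updates\kmdf-1.11-Win-6.1-x64.msu
Dism /Image:C:\Mount /Add-Package /PackagePath:C:\Updates\umdf-1.11-Win-6.1-x64.msu
Dism /Unmount-Wim /MountDir:C:\Mount /Commit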

After reloading the image in the Configuration Manager Administration Console and updating the .wim file package on the Distribution Points, we re-ran the task sequence and, by Jove, the machine completed the task sequence with no dramas.

For background reading, the User-Mode and Kernel-Mode Driver Framework 1.11 updates are required to install any driver which was written using the Windows 8 or Windows 8.1 Driver Kit. What I have yet to be able to determine is whether there is a way of checking a driver .inf file to work out which version of the Driver Framework it requires. If there is a way to determine this, Configuration Manager administrators around the world may rejoice a little, so if you do know of a way to check it, please let me know as I would be interested to hear. This would not have been an issue had the reference images been patched with the latest (or at least some) Windows updates, however in this case, I was not so lucky.

Invalid License Key Error When Performing Windows Edition Upgrade

Last week, I decided to perform the in-place edition upgrade from Windows Server 2012 R2 Essentials to Windows Server 2012 R2 Standard on my home server as part of a multitude of things I'm working on at home right now. Following the TechNet article at http://technet.microsoft.com/en-us/library/jj247582, which covers the command to run and the impact and implications of doing the edition upgrade, I ran the command as instructed in the article, but I kept getting an error stating that my license key was not valid.
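
For reference, the edition upgrade command described in that article takes roughly the following form; the edition ID shown and the elided product key are placeholders:

Dism /Online /Set-Edition:ServerStandard /ProductKey:XXXXX-XXXXX-XXXXX-XXXXX-XXXXX /AcceptEula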

As my server was originally licensed under a TechNet key, I wondered if the problem could be down to different licensing channels preventing me from installing the key. On the server, I ran the command cscript slmgr.vbs /dlv to display the detailed license information and the channel was reported as Retail, as I expected for a TechNet key. The key I was trying to use is an MSDN key, which should also be reported as part of the Retail channel, but to verify that, I downloaded the Ultimate PID Checker from http://janek2012.eu/ultimate-pid-checker/ and, sure enough, my Windows Server 2012 R2 Standard license key is good, valid and, just as importantly, from the Retail channel.

So my existing and new keys are from the same licensing channel and the new key checks out as being valid, so what is the problem? Well, it turns out PowerShell was the problem.

Typically I launch a PowerShell prompt and then enter cmd.exe if I need to run something which explicitly requires a Command Prompt. This makes it easy for me to jump back and forth between PowerShell and the Command Prompt within a single window, hence the reason for doing it. I decided to try it differently, so I opened a new administrative Command Prompt standalone, without using PowerShell as my entry point, and the key was accepted and everything worked as planned.

The lesson here is this: if you are entering a command into a PowerShell prompt and it's not working, try it natively within a Command Prompt as that may just be your problem.
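
If you would rather stay inside the PowerShell window, one option, untested in this particular scenario so treat it as an assumption, is to hand the whole command string to cmd.exe so that PowerShell never gets a chance to parse the arguments itself:

cmd /c "Dism /Online /Set-Edition:ServerStandard /ProductKey:XXXXX-XXXXX-XXXXX-XXXXX-XXXXX /AcceptEula"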

Inaccessible Boot Device after Windows Server 2012 R2 KB2919355

Earlier on this week, I finally got around to spending a bit of time towards building my home lab. I know it’s late because I started this project back in February but you know how it is.

On the servers, I am installing Windows Server 2012 R2 with Update which, for the uninitiated, is KB2919355 for Windows Server 2012 R2 and Windows 8.1. This is essentially a service pack sized update for Windows and includes a whole host of updates. I am using the installation media with the update integrated to save me some time with the updates but also because it's cleaner to start with the update pre-installed.

The Inaccessible Boot Device Problem

After installing Windows Server 2012 R2, the machine starts to boot and, at the point where I would expect to see a message along the lines of Configuring Devices, the machine hits a Blue Screen of Death with the message Stop 0x7B INACCESSIBLE_BOOT_DEVICE and restarts. This happens a few times before it hangs on a black screen warning that the computer has failed to start after multiple attempts. I assumed it was a BIOS problem so I went hunting in the BIOS in case I had enabled a setting not supported by my CPU, or maybe I'd set the wrong AHCI or IDE mode options, but everything looked good. I decided to try the Optimized Defaults and Failsafe Defaults options in the BIOS, both of which required an OS re-install due to the AHCI changes, but neither worked.

After this, I was worried there was either something wrong with my hardware or a compatibility issue with the hardware make-up and that I was going to be snookered. However, after a while of searching online, I found the solution.

KB2919355 includes a new version of the storage controller driver Storport. It transpires that this new version of Storport in KB2919355 has an issue with certain SCSI and SAS controllers whereby, if the controller device driver is initialized in a memory space beyond 4GB, the physical boot devices become inaccessible. This problem hit people who installed the KB2919355 update on previously built servers at the time of release, as well as people like me, building new servers with the update slipstreamed. My assumption is that it is caused by the SCSI or SAS controller not being able to address 64-bit memory addresses, hence the 4GB limitation.

The problem mainly hits LSI-based SCSI and SAS controllers built on the 2000 series chipset, including but by no means limited to the LSI SAS 2004, LSI SAS 2008, LSI MegaRAID 9211, Supermicro SMC 2008, Dell PERC H200 and IBM X240 controllers. In my case, my Supermicro X8DTH-6F motherboards have the Supermicro SMC 2008 8-port SAS controller onboard, which is a Supermicro-branded edition of the LSI SAS 2008 IR controller.

The workaround at the time was to disable various BIOS features such as Intel VT, Hyper-Threading and more to reduce the number of system base drivers that needed to load, allowing the controller driver to fit under the 4GB memory space. Eventually the issue was confirmed and a hotfix released, however installing the hotfix is quite problematic when the system refuses to boot. Luckily, we can use the Windows installation media to fix the issue.

Microsoft later released guidance on a workaround using BCDEdit from the Windows Recovery Environment (WinRE) to change the maximum memory.
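
I did not need to use it myself, but as I understand that guidance, the workaround amounts to capping the physical memory Windows will address so that the controller driver loads below the 4GB boundary, along these lines (the boot entry identifier and the value, 0x100000000 bytes being 4GB, are shown purely for illustration):

bcdedit /set {default} truncatememory 0x100000000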

Resolving the Issue with KB2966870

Workarounds aside, we want to fix the issue, not gloss over or around it. First off, download hotfix KB2966870, which is a hotfix by request, so you need to enter your email address and have the download link emailed to you. You can get the update from https://support.microsoft.com/kb/2966870. Once you have the update, you need to make it available to your server.

If your Windows Server 2012 R2 installation media is a bootable USB stick or drive then copy the file onto it. If your installation medium is CD or DVD then burn the file to a disc.

Boot the server using the Windows Server 2012 R2 media but don't press the Install button. From the welcome screen, press Shift + F10, which will open an administrative Command Prompt. Because the Windows installation files are decompressed to a RAM disk, your hard disk will likely have been mounted as D: instead of C:, but verify this first by doing a dir to check for the normal file structure such as Program Files, Users and Windows. Also, locate the drive letter of your installation media, which will be the drive with your .msu update file on it.

Once you have found your hard disk drive letter and your boot media letter, we will use the following DISM command to install the update using Offline Servicing:

Dism /Image:[Hard Disk]:\ /Add-Package /PackagePath:[Install Media]:\Windows8.1-KB2966870-x64.msu
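
As a hypothetical example, if the Windows installation ended up on D: and the USB stick holding the update came up as E:, the command would look like this:

Dism /Image:D:\ /Add-Package /PackagePath:E:\Windows8.1-KB2966870-x64.msu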

Once the command completes, exit the Command Prompt and exit the Windows installation interface to restart the computer. In my case, I had to restart the computer twice for the update to actually apply and take effect, but once the update had been taken on board, the machine boots without issues first time, every time. You can verify that the update has been installed with the View Installed Updates view in the Windows Update Control Panel applet.

Deduplication in Windows Server 2012 Essentials

Yesterday, I posted with a quasi-rant about Windows Server 2012 Essentials Storage Pools and the inability to remove a disk in a sensible, non-destructive manner. At the end of that post, I alluded to the lack of the Primary Data Deduplication feature in Windows Server 2012 Essentials, which got me thinking about it more, so I went off on an internet duck hunt to find a solution.

Firstly, I found this thread (http://social.technet.microsoft.com/Forums/en-US/winserveressentials/thread/4288f259-cf87-4bd6-bf9f-babfe26b5a69) on the TechNet forums in which an MVP highlights a bug which was filed on Microsoft Connect during the beta stages over the lack of deduplication. The bug was closed by Microsoft with a status of ‘Postponed’ and a message that it was a business decision to remove the feature.

Sad, but true, when the people being targeted with Essentials are the people potentially wanting and needing it most, but I guess the reason probably lies in the realms of supportability and a degree of knowledge gap in the home and small business sectors when it comes to understanding the feature.

Luckily for me, in another search, I found this article (http://forums.mydigitallife.info/archive/index.php/t-34417.html) at My Digital Life where some nefarious user has managed to extract from a Windows Server 2012 Standard installation the .cab files required to allow DISM to install the feature. While the post is targeted at Windows 8 64-bit users wanting to use dedup on their desktop machines, the process works equally well for Windows Server 2012 Essentials, if not better, as you can also use the GUI to drive the configuration.

I don't want to be the one in breach of copyright or of the terms of service with Microsoft, so I'm not going to link to the .7z file provided on My Digital Life; download it from them, sorry.

Download the file and extract it to a location on the server. Once extracted, open an elevated command prompt, change the working directory of the prompt to your extracted .7z folder and enter the following command:
dism /Online /Add-Package /PackagePath:Microsoft-Windows-VdsInterop-Package~31bf3856ad364e35~amd64~~6.2.9200.16384.cab /PackagePath:Microsoft-Windows-VdsInterop-Package~31bf3856ad364e35~amd64~en-US~6.2.9200.16384.cab /PackagePath:Microsoft-Windows-FileServer-Package~31bf3856ad364e35~amd64~~6.2.9200.16384.cab /PackagePath:Microsoft-Windows-FileServer-Package~31bf3856ad364e35~amd64~en-US~6.2.9200.16384.cab /PackagePath:Microsoft-Windows-Dedup-Package~31bf3856ad364e35~amd64~~6.2.9200.16384.cab /PackagePath:Microsoft-Windows-Dedup-Package~31bf3856ad364e35~amd64~en-US~6.2.9200.16384.cab

If DISM fails or gives you any errors, then the most likely cause is that you didn’t use an elevated command prompt. The next likely cause is that you aren’t in the correct working directory so check that too.
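
If you want to confirm that the packages registered before moving on, one way is to list the installed packages and filter for the components you just added:

dism /Online /Get-Packages | findstr /i "Dedup FileServer VdsInterop"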

Once all of the packages are imported okay, enter the second command:
dism /Online /Enable-Feature /FeatureName:Dedup-Core /All

No restart is required for the import of the packages or the enabling of the feature, so everything can be done online.
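
To check that the feature now shows as enabled, you can query it with DISM as well:

dism /Online /Get-FeatureInfo /FeatureName:Dedup-Core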

Once the feature is enabled, head over to Server Manager to get things started. Server Manager isn't pinned to the Start Screen on the server by default, so from the Start Screen type Server Manager and it will appear in the in-line search results.

From Server Manager, select File and Storage Services from the left pane, and then select Volumes from the sub-options.

As you will see in the screenshot, I've already enabled dedup on the volume on this test Windows Server 2012 Essentials VM of mine and I've saved space by virtue of the fact that I've created two data folders with identical data in each folder.

To configure your volumes, right-click the volume you want to set up and select the Configure Data Deduplication option. On the options screen, first tick the box to enable the feature. Once selected, you have options for the age of files to include in deduplication and the types of file to exclude. For my usage at home, I am setting the age to 0 days, which includes all files regardless of age, and I am choosing not to exclude any file types as I want maximum savings.

The final step is at the bottom of the dialog, Set Deduplication Schedule. This allows you to configure when optimization tasks occur and whether to use background optimization during idle periods of disk access. I chose to enable both of these and I have left the default time of 0145hrs in place.

Once you click OK, and then OK again on the initial dialog, you have enabled dedup on that volume. Repeat the process for any volumes you are interested in and the job is done. After this, the server has the hard task of calculating all the savings, creating the metadata links to physical blocks on the disk and marking the space occupied by duplicate blocks as free space. This process is very CPU and memory heavy and, depending on the size of your dataset, can and will take a long time to run.
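
If you prefer to drive this from PowerShell rather than Server Manager, the Data Deduplication cmdlets can do the same job. A minimal sketch, assuming E: is the volume being configured (the drive letter is an assumption, and the age setting mirrors the 0 days I chose above):

Enable-DedupVolume -Volume E:
Set-DedupVolume -Volume E: -MinimumFileAgeDays 0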

I am just about to kick off a manual task on my live Essentials server at home, so once the results are in, I will be posting here to report my savings and also the time taken, but I’m not expecting this to come in anytime within the next day or so.
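
For anyone wanting to do the same, kicking off a manual optimization job and keeping an eye on it can also be done from PowerShell; again, the E: drive letter here is just an example:

Start-DedupJob -Volume E: -Type Optimization
Get-DedupJob
Get-DedupStatus -Volume E: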