Working Hard on Web Security

As anyone who visits my site on a regular basis may have noticed, I’ve been working hard on securing this blog to bring it more in line with current web security best practices, given it’s been quite a while since I’ve touched that side of the site. There are numerous things I have implemented, so I thought I would give a little run-down of them.

HTTPS Everywhere

The first and most obvious addition is HTTPS everywhere. I have for some time been using HTTPS to protect the WordPress admin portion of the site where I do my bidding, but I didn’t have it applied to the public side of the site. That changed first: I have updated all of the content in the database to reference HTTPS paths, updated WordPress to use HTTPS as the Site URL and added an IIS re-write rule in my web.config to redirect all traffic from HTTP to HTTPS. If you have existing links to the site, they will continue to work; you will just be redirected to the HTTPS variant.

HTTP Strict Transport Security (HSTS)

HSTS is a complementary technology for HTTPS. When enabled, it instructs the browser that the site never wishes to be downgraded from HTTPS to HTTP. This is applied to the entire site using an IIS outbound re-write rule in the web.config file. I have also submitted the site to the well-known HSTS Preload List, which means that modern browsers which support this will know before you even visit the site that it should only be transmitted over HTTPS.

Remove Server Headers

I make no secret of the fact that this site runs on Azure as a Web App, but that doesn’t mean my site needs to shout about it in its response headers. As WordPress is a PHP application, there are two headers to deal with: one for the IIS web server and another for PHP, the server-side language. The IIS header is dealt with by removing the custom headers using a declaration in the web.config file, whilst the PHP header is dealt with using a php.ini overrides file on the Azure Web App. You can read about how to apply custom PHP settings to an Azure Web App at https://docs.microsoft.com/en-us/azure/app-service-web/web-sites-php-configure.
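For reference, the PHP side of this boils down to a single directive. A minimal sketch, assuming the PHP_INI_SCAN_DIR overrides approach described in the linked article (the folder and file name below are examples, not necessarily what this site uses):

; Example: d:\home\site\ini\settings.ini, with the PHP_INI_SCAN_DIR app setting pointing at d:\home\site\ini
expose_php = Off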

Whilst this doesn’t actually protect you as a visitor per se, what it does do is obscure the identity of the server software from scripts and bulk scanning attacks, making it harder for someone to identify the server type or code language and then execute attacks specific to that combination. Indirectly, this makes the site less susceptible to attack and therefore safer for you to visit.

Cross-Site Scripting (XSS) and Framing

Framing attacks are where someone embeds my site within their own page and overlays hidden buttons or links on top of it. The result is that, as a user, you think you are clicking a link on the site but in reality you are clicking a hidden link overlaying it, which takes you somewhere far nastier than the blog posts I write. This is dealt with simply using custom headers in the web.config file which instruct your browser never to allow the site to be framed by another domain, and to explicitly enable the Cross-Site Scripting protections within the browser rather than relying on the browser to enable them itself.

Content Security Policy (CSP)

The CSP is a set of rules by which the site will abide. For example, the CSP states that I will only load images, scripts or CSS stylesheets from certain URLs. This protects you because when the browser loads the page, it will automatically prevent any content which violates the CSP from loading. If an attacker were somehow to inject some unsafe JavaScript into the site, your browser would see this as violating the policy and not load it.

This one is a bit of a bore to configure as it requires some thought about where your content originates; luckily, it can be configured in report-only test mode first to verify all is good before going live and breaking the house.
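To illustrate that test mode, here is a hedged example of what a starter policy might look like when delivered as a report-only custom header in web.config; the sources and reporting endpoint are placeholders rather than the policy this site actually runs:

<httpProtocol>
  <customHeaders>
    <!-- Example only: report violations without blocking anything yet -->
    <add name="Content-Security-Policy-Report-Only" value="default-src 'self'; script-src 'self'; style-src 'self'; img-src 'self' https:; report-uri /csp-reports" />
  </customHeaders>
</httpProtocol>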

HTTP Public Key Pinning

This is possibly the most violent and aggressive of the security techniques and as such is often omitted by other sites, but I decided that as I was going through the hurt elsewhere I would do it here too. Public Key Pinning works by explicitly stating the public key hashes of the certificates the site will present when connected to via HTTPS. If the site presents a certificate other than one of those explicitly stated, the browser will assume the certificate is compromised and block access to the site entirely.

Because the browser will completely block access to the site, certain compromises are made here to ensure I don’t lock myself out forever and render the domain trash. Firstly, the policy has a defined lifespan which I have currently set to 30 days. Unlike HSTS, this isn’t part of the preload list in modern browsers, so your browser doesn’t know these keys before it first visits, but for anyone who has visited the site before, or for me (as I visit here often, you know), if this policy fails, I’ve got a problem.

Public Key Pinning provides some fallback in that, in addition to the active key pins for the certificates currently in use, we can apply backup pins for inactive certificates. This is done by generating offline CSRs and pinning the public key of each CSR. The net result is that if my current certificate is lost or revoked, I can issue a new certificate from the CSR for a pinned public key and, when brought online, it will pass validation. This just means a little more work for me to get the site up and running again if there is an issue.
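For context, the resulting header looks something like the following, with placeholder values standing in for the real pins and 2592000 seconds being the 30 day lifespan mentioned above; the second pin-sha256 entry is the backup pin generated from an offline CSR:

Public-Key-Pins: pin-sha256="<base64 hash of active key>"; pin-sha256="<base64 hash of backup key>"; max-age=2592000; includeSubDomains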

Advice and Background

Through all of this, I have used the extremely helpful security blog of Scott Helme at https://scotthelme.co.uk/. He also operates a site called Security Headers which has various tools for generating, testing and validating all of these settings and options at https://securityheaders.io/.

To do the final validation, I use Scott Helme’s Security Headers site scan, which grades the site based on the features enabled and the configuration of each. Using that test, this blog currently scores an A+ security rating, which you can validate using the short-link https://schd.io/A24. I also use the Qualys SSL Labs test tool, on which I currently score an A rating, which you can validate yourself at https://www.ssllabs.com/ssltest/analyze.html?d=richardjgreen.net.

The reason for being capped at an A and not an A+ on Qualys SSL Labs is that, as this site runs on an Azure Web App which is based on IIS, there is no way for me to control the SSL protocols or ciphers. There is currently no fix for IIS for the TLS_FALLBACK_SCSV issue, and the mitigation for IIS requires disabling TLS 1.0 and TLS 1.1 so that connections cannot be forced to fall back from TLS 1.2. When a fix is available and applied to Azure Web Apps, this score should rise from A to A+ accordingly.

How Much Work Is It?

For me, on this blog, not a huge amount. Implementing many of the options such as the HTTPS redirect, the HSTS re-write rule and the custom HTTP headers is easily done in the IIS web.config file in a short amount of time and can also easily be tested. Get used to using the debugging console in your browser and the network activity recorder to monitor the HTTP headers and verify these settings for yourself.
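If you prefer the command line to the browser tools, a couple of lines of PowerShell will show you the response headers too (the URL is just this site as an example):

$Response = Invoke-WebRequest -Uri "https://richardjgreen.net" -UseBasicParsing
$Response.Headers["Strict-Transport-Security"]
$Response.Headers["X-Frame-Options"]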

For the Public Key Pins and Content Security Policy, there can be some work involved.

I found that the best option is to start with the CSP: use the Security Headers generator tool to create a basic policy with the Report Only option enabled. Once this is applied, your browser will report violations in the debugging console and you can append the necessary sources to the policy. Scripts and stylesheets will be the biggest issue, along with deciding whether to add hashes for them, move them to external .js or .css files, or enable the unsafe-inline options. Be warned: using the unsafe-inline option for scripts will cap your grade at A on the Security Headers scan. For newly built sites this should be easily done, but for existing sites with a lot of CSS or JavaScript in use, you could be spending quite some time debugging this stuff.

For Public Key Pinning, if you aren’t overly familiar with OpenSSL, get familiar, as you will be using it to generate your private keys, CSRs and the resulting public key hashes for the backup pins. Failure to create backup pins will result in the policy not being applied by browsers once added to the site configuration. Make sure also not to set your max-age to anything too long: if you have a disaster, that’s how long you could be waiting to regain access to the site, and if you set it longer than the life of your SSL certificate, you are really looking for trouble.
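As a rough sketch of that OpenSSL workflow (the key size, file names and subject are illustrative), the following generates an offline key and CSR and then prints the base64 SHA-256 hash you would use as the backup pin:

openssl genrsa -out backup.key 2048
openssl req -new -key backup.key -out backup.csr -subj "/CN=richardjgreen.net"
openssl req -in backup.csr -pubkey -noout | openssl pkey -pubin -outform der | openssl dgst -sha256 -binary | openssl enc -base64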

Kickstart

If you are using IIS as your web server, here are some snippets you can apply to your web.config file as a kick start. These settings will not touch the Public Key Pins or CSP as I will leave these to each individual but adding these lines to an existing web.config will remove the server header and enable the XSS and Framing protection. I’ve also extracted my HTTP to HTTPS redirect rule and HSTS outbound re-write rule for you to use.

<httpProtocol>
  <customHeaders>
    <clear />
    <remove name="X-Powered-By" />
    <add name="X-Frame-Options" value="SAMEORIGIN" />
    <add name="X-Xss-Protection" value="1" />
    <add name="X-Content-Type-Options" value="nosniff" />
  </customHeaders>
</httpProtocol>
<rewrite>
  <rules>
    <rule name="HTTP to HTTPS Redirect" stopProcessing="true">
      <match url="(.*)" />
      <conditions>
        <add input="{HTTPS}" pattern="OFF" />
      </conditions>
      <action type="Redirect" url="https://{HTTP_HOST}/{R:0}" redirectType="Permanent" />
    </rule>
  </rules>
  <outboundRules rewriteBeforeCache="true">
    <rule name="Add HSTS Strict Transport Security Header" stopProcessing="true">
      <match serverVariable="RESPONSE_Strict_Transport_Security" pattern=".*" />
      <conditions>
        <add input="{HTTPS}" pattern="on" ignoreCase="true" />
      </conditions>
      <action type="Rewrite" value="max-age=15552000; includeSubDomains; preload" />
    </rule>
  </outboundRules>
</rewrite>

Hunting and Decrypting EFS Encrypted Files

At home last week, I started doing some preparation for upgrading my home server from Windows Server 2012 R2 to Windows Server 2016. This server was originally installed using Windows Server 2012 R2 Essentials and I have since performed an edition upgrade to Standard, which means the host has ADDS, ADCS, NPS and some other roles installed as part of the original Essentials installation. We all know that unbinding ADDS and ADCS can be a bit of a bore, which is why nobody in the age of virtualization should be installing ADDS and ADCS together on a single server, but that’s by the by.

When I started looking at decommissioning the ADCS role, I noticed that an EFS certificate had been issued to my domain user account. I’ve never knowingly used EFS, but the presence of a certificate for that purpose led me to believe there might be some files out there, so I started looking.

EFS is a technology that appeared back in Windows 2000 to allow users to encrypt files before BitLocker was a thing. It was a nice idea but it was troubled and flawed in that it was enabled by default, so users could self-encrypt files without IT having implemented the proper tools to allow the files to be recovered when disaster struck.

Disable EFS via Group Policy

First and foremost, we want to prevent any new EFS encrypted files from appearing. This is easily done with Group Policy and is a setting that could, and possibly even should, be included in a baseline GPO for clients and servers (it is a Computer Configuration setting). Don’t put it in the Default Domain Policy though, as modifying that with additional settings isn’t best practice.

GPO PKI Node

In the GPMC, open the GPO that you plan to include the setting in and navigate to Computer Configuration > Policies > Windows Settings > Security Settings > Public Key Policies. Once in the Public Key Policies node, right-click the Encrypting File System folder in the navigation area on the left.

GPO EFS Do Not allow

In the EFS Properties, set the File Encryption using Encrypting File System (EFS) option to Don’t allow. This will disable the ability for users to use EFS.
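For standalone machines that don’t receive Group Policy, the same result can be achieved locally with fsutil (a restart is required for the change to take effect):

fsutil behavior set disableencryption 1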

Finding EFS Encrypted Files

This is the part I was primarily concerned about: where would the files be, exactly, given I’ve never knowingly used EFS? Luckily, there are a few switches in the cipher utility that can do all the heavy lifting for us. If you are an admin of a file server, you can run this centrally, or if users have local files, you could configure it as a logon script and output the results to a network share as .txt files which you can review later.

cipher.exe /u /n /h <SearchPath> > <ResultsFile>

This nice simple command, once modified with your path to search and the output file name, will generate a list of all of the EFS encrypted files in that path. The search is recursive so it will include sub-directories too; you can simply point it at your drive root and let it find everything. Adding the /n switch prevents cipher from triggering an update on the files, which means it will not try to use or renew the EFS certificate on them. This is ideal for running the command centrally as an admin where you may only have permission to traverse the directories rather than access the files themselves.

Decrypting EFS Encrypted Files

This is the more complicated part. To be able to decrypt the files, you need to be in possession of a valid certificate matching the one used to encrypt them. If your PKI was correctly configured for EFS and you have a Data Recovery Agent certificate, then this is the master key that will allow you to unlock any EFS encrypted files. If you don’t have this, the only option is to have the files decrypted from the user side. If the users don’t have their certificates any more either, then I’m sorry to say there’s nothing that can be done except hope you have a backup of the files from before they were encrypted. The Data Recovery Agent (DRA) key is embedded into each file at the time it is encrypted, so retrospectively adding a DRA isn’t an option for existing files.

Assuming you have the relevant certificates needed, the following command will attempt to decrypt all files in the specified path. Again, you could run this as a logon script for the user and set it up to traverse their home drives or redirected folders to decrypt all of their own files.

cipher.exe /d /s:<FilePath>

 

Hyper-V Replication Firewall Rules on Nano Server

Nano Server is the newest edition in the Windows Server family and, because of its ultra-low footprint and reduced patching requirement, it makes an ideal Hyper-V host for running your private cloud infrastructure.

One of the resiliency features in Hyper-V, Hyper-V Replica, allows you to replicate a VM on a timed interval as low as 30 seconds. This isn’t a new feature but it is a great one nonetheless, and it is ideally suited to organisations with multiple data centres wanting to protect their VMs across two or more sites without the need for expensive SAN replication technologies.

Nano Server ships by default with the Windows Firewall enabled and there are two rules for Hyper-V Replica, both disabled by default. If you want to use Hyper-V Replica, even once you’ve configured everything else you need via the Hyper-V Manager console or PowerShell, such as virtual networks and enabling the Hyper-V Replica feature, you will still need to enable the appropriate rule.
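If you want to check the state of those rules first, you can do so from the same kind of remote PowerShell session used in the snippets below; the display names may differ slightly between builds:

Get-NetFirewallRule -DisplayName "Hyper-V Replica*" | Select-Object Name, DisplayName, Enabled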

Nano Server and Group Policy Settings

It is important to note that Nano Server does not process Group Policy like a Server Core or GUI-based Windows Server, therefore you cannot configure this using an Advanced Windows Firewall policy in a Group Policy Object. If you want to apply Group Policy derived settings to a Nano Server host, then you should refer to this TechNet post at https://blogs.msdn.microsoft.com/powershell/2016/05/09/new-security-cmdlets-in-nano-server/. Nano Server, and Windows Server 2016 in general, includes new PowerShell Cmdlets that allow you to take an export from utilities such as SecEdit or AuditPol and import the resulting files into Nano Server.

The following snippets of PowerShell code are for enabling the specific rules whether you use HTTP or HTTPS for replication. Bear in mind that you could include this in a Nano host build script to automate the configuration of your hosts.

Enable Hyper-V Replication over HTTP

If you are using Hyper-V Replica over HTTP with Kerberos authentication then you will need to enable the firewall rule for this using the following PowerShell snippet.

$Cred = Get-Credential
Enter-PSSession -ComputerName <NanoHostFQDN> -Credential $Cred

Set-NetFirewallRule -Name VIRT-HVRHTTPL-In-Tcp-NoScope -Enabled True -Scope Any

Enable Hyper-V Replication over HTTPS

If you are using Hyper-V Replica over HTTPS with certificate-based authentication then you will need to enable the firewall rule for this using the following PowerShell snippet. Bear in mind that as Nano Server does not process Group Policy, any certificate auto-enrollment policies you have configured in the domain will not apply, so you will need to manually request and issue the certificates to the hosts, or automate this via another means.

$Cred = Get-Credential
Enter-PSSession -ComputerName <NanoHostFQDN> -Credential $Cred

Set-NetFirewallRule -Name VIRT-HVRHTTPSL-In-Tcp-NoScope -Enabled True -Scope Any

 

Modifying the Nano Server Pagefile

This weekend, I’ve been working on a little pet project using an ultra-small form factor PC that I’ve got set up running Nano Server and booting from VHD.

The setup is great and ideal for my use case, however there is a problem when using boot from VHD: the operating system you are booting cannot host a pagefile inside the VHD file. When you boot a PC using a native boot VHD file, the pagefile is automatically created on the physical partition with the most available free space and set to System Managed, which means it will swell and shrink according to demand and perhaps not live on the disk or partition you want it on.

I started the journey trying to modify the pagefile configuration, however I quickly discovered that even the PowerShell Cmdlets recommended by many others online for Server Core don’t work because they rely on WMI to modify the parameters; if you try these, you’ll very quickly find that Nano Server only accepts an extremely small subset of the WMI PowerShell Cmdlets, presumably down to the compressed WMI database in Nano.

Luckily, I found one set of Cmdlets that do work on Nano Server and allow you to configure your pagefile as you desire.

# Disable automatic pagefile management so we can define our own
Get-CimInstance -ClassName Win32_ComputerSystem | Set-CimInstance -Property @{AutomaticManagedPagefile = $False}

# Remove any existing pagefile settings
$PageFile = Get-CimInstance -ClassName Win32_PageFileSetting
$PageFile | Remove-CimInstance

# Create a new pagefile on the P: volume and give it a fixed 4096MB size
New-CimInstance -ClassName Win32_PageFileSetting -Property @{Name = "P:\pagefile.sys"}
Get-CimInstance -ClassName Win32_PageFileSetting | Set-CimInstance -Property @{InitialSize = 4096; MaximumSize = 4096}

As you’ll see, I’m using P: as my pagefile volume and I’m setting the initial and maximum sizes to 4096MB. Simply change these to suit your needs and job’s a good one.
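After a reboot, a quick sanity check that the pagefile has landed where you expect can be done with another CIM query:

Get-CimInstance -ClassName Win32_PageFileUsage | Select-Object Name, AllocatedBaseSize, CurrentUsage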

Setting PowerShell as the Default Shell in Server Core

As part of a little weekend project I’ve embarked on, I’ve built myself a pair of new Domain Controllers for my home AD environment running on Server Core. Not only does using Server Core for Domain Controllers make great sense because they use fewer resources (CPU, memory and storage), but they also need less patching, which means we can keep them up more often. Sure, it would be nice to be able to use Nano Server for Domain Controllers but, at least in Technical Preview 5 at the time of writing, this isn’t a role that’s available. DNS is but AD isn’t; hopefully it will come.

Living in the present though, with Windows Server 2012 R2 and Server Core being the best we can do for Active Directory, there is a problem that most people will notice when they start using Server Core: it uses Command Prompt as its default shell. This means that if you want to use any PowerShell Cmdlets, you need to step up into PowerShell first. I know this doesn’t seem like a hardship, but if you do it enough, it gets tiresome, especially when you consider that the Active Directory Cmdlets all live in PowerShell.

Luckily, we can fix this and make PowerShell the default shell in Server Core. If you’ve only got one server to do this against, then the easiest thing to do is do it manually, but if you’ve got a larger estate of Server Core machines, you can do it with Group Policy Preferences too.

Setting PowerShell as the Default Shell Manually

If you’ve only got one server, a couple of servers, or your Server Core machines are workgroup members so you can’t use Group Policy, the manual method is for you. It’s a simple PowerShell one-liner:

Set-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon' -Name Shell -Value 'PowerShell.exe -NoExit'

Setting PowerShell as the Default Shell via Group Policy

As I mentioned, we can use a Group Policy Object to ensure that all of our Server Core machines get PowerShell as their default shell.

The first step is to set up a WMI Filter in Active Directory to detect Server Core machines and the second is to create and link the GPO itself. To create the WMI Filter, use the Group Policy Management Console and name it whatever you choose; I called mine Windows Server 2012 R2 Server Core Only. For the query itself, use the following WMI query:

SELECT InstallState FROM Win32_OptionalFeature WHERE (Name = "Server-Gui-Shell") AND (InstallState = "2")

To break it down, this queries the Win32_OptionalFeature class in WMI and grabs the InstallState property, then checks whether InstallState is equal to 2 for the Server-Gui-Shell feature. In Windows Server 2008 and 2008 R2 this was a little easier, as Server with a GUI and Server Core identified themselves as different SKUs of the operating system; however, because Windows Server 2012 R2 allows us to install and uninstall the GUI as a feature, there isn’t a difference in the SKU, so the way to tell the two apart is the installation state of the Server-Gui-Shell feature. On a server with a GUI this will equal 1 and on a server without the GUI it will equal 2.
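If you want to see what the filter will evaluate on a particular server before linking the GPO, you can query the same class locally with PowerShell:

Get-CimInstance -ClassName Win32_OptionalFeature -Filter 'Name = "Server-Gui-Shell"' | Select-Object Name, InstallState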

With the WMI Filter now created, we can create the GPO itself. Create a new GPO and configure it to use the WMI Filter we just created. Once created and filtered, open up the GPO Editor so that we can add our setting.

In the GPO Editor, expand Computer Configuration > Preferences > Windows Settings > Registry. Right-click the Registry node on the left, select New Registry Item and configure the registry item as follows:

Action: Update
Hive: HKEY_LOCAL_MACHINE
Key Path: SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon
Value Name: Shell
Value Type: REG_SZ
Value Data: PowerShell.exe -NoExit

Once you set this, hit OK and you’re done. Link the GPO to an OU in your Active Directory hierarchy that contains your servers and once it has applied, you’ll start to get PowerShell as your default prompt when you logon. Because the WMI Filter only applies to Server Core machines, it’s safe to link this GPO to a root OU that contains all of your servers so that when any Server Core machines get dropped in, they will automatically pick this GPO up.

Cleaning Up Active Directory and Cluster Computer Accounts

Recently at work, I’ve been looking at doing a clean-up of our Active Directory domain, namely removing stale user and computer accounts. To do this, I wrote a short but sweet PowerShell script which gets all of the computer objects from the domain and includes the LastLogonTimestamp and pwdLastSet attributes to show when each computer account was last active; however, I came across an interesting problem with cluster computer objects.

Import-Module ActiveDirectory
Get-ADComputer -Filter * -SearchBase "DC=domain,DC=com" -Properties Name, LastLogonTimestamp, pwdLastSet -ResultPageSize 0 | Select Name, @{n='LastLogonTimestamp';e={[DateTime]::FromFileTime($_.LastLogonTimestamp)}}, @{n='pwdLastSet';e={[DateTime]::FromFileTime($_.pwdLastSet)}}, DistinguishedName

When reviewing the results, it seemed as though Network Names for Cluster Resource Groups weren’t updating their LastLogonTimestamp or pwdLastSet attributes even though those Network Names are still in use.

After a bit of a search online, I found a TechNet Blog post at http://blogs.technet.com/b/askds/archive/2011/08/23/cluster-and-stale-computer-accounts.aspx which describes exactly that situation. The LastLogonTimestamp attribute is only updated when the Network Name is brought online, so if you’ve got a rock solid environment and your clusters don’t fail over or come crashing down too often, the object will appear as though it’s stale.

To save you reading the article, I’ve produced two updated versions of the script. This first amendment simply adds the servicePrincipalName column to the result set so that you can verify them for yourself.

Import-Module ActiveDirectory
Get-ADComputer -Filter * -SearchBase "DC=domain,DC=com" -Properties Name, LastLogonTimestamp, pwdLastSet, servicePrincipalName -ResultPageSize 0 | Select Name, @{n='LastLogonTimestamp';e={[DateTime]::FromFileTime($_.LastLogonTimestamp)}}, @{n='pwdLastSet';e={[DateTime]::FromFileTime($_.pwdLastSet)}}, servicePrincipalName, DistinguishedName

This second amended version uses the -Filter parameter of the Get-ADComputer Cmdlet to filter out any results whose servicePrincipalName includes MSClusterVirtualServer, which designates the account as a cluster computer object.

Import-Module ActiveDirectory
Get-ADComputer -Filter 'servicePrincipalName -NotLike "*MSClusterVirtualServer*"' -SearchBase "DC=domain,DC=com" -Properties Name, LastLogonTimestamp, pwdLastSet, servicePrincipalName -ResultPageSize 0 | Select Name, @{n='LastLogonTimestamp';e={[DateTime]::FromFileTime($_.LastLogonTimestamp)}}, @{n='pwdLastSet';e={[DateTime]::FromFileTime($_.pwdLastSet)}}, DistinguishedName

The result set generated by this second amendment of the script produces exactly the same output as the original script, with the notable exception that the cluster objects are automatically filtered out of the results. This just leaves you to ensure that when you retire clusters from your environment, you perform the relevant clean-up afterwards and delete the accounts. Alternatively, you could use a clever automation tool like Orchestrator to manage the decommissioning of your clusters and include this as an action for you.

Set a Registry Value Using PowerShell Containing a Forward Slash

I don’t normally blog about PowerShell as it’s just a day-to-day thing that we all do and use (you do all use PowerShell, right?), but I came across a problem today that I thought I would share as I had to scour the net to find the solution for myself.

A co-worker came to me today asking for help with some PowerShell code for a script he is writing. The script applies some registry settings to machines for a piece of security hardening work, which includes disabling some of the less secure SSL and TLS cipher suites. All was going well until he got to the line of the script that tries to disable the DES 56/56 cipher suite and PowerShell threw it back at him. The reason for it is that PowerShell treats that forward slash character as a key path separator.

Here is the line of code that you would run normally to create the registry key for DES 56/56:

New-Item -Path "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\DES 56/56"

When this runs, PowerShell creates a registry key for DES 56 and then creates a sub-key for the second 56, as the slash is seen as a separator, which obviously isn’t what we want. I tried all sorts to get around it, such as swapping the double quotes for single quotes and placing the path into a variable first and passing in the variable, but it just would not have it.

I managed to eventually find a way around this, but it means that we can’t use the PowerShell Cmdlet New-Item; instead, we have to do things the .NET way. Here’s the code sample to make it work:

$Writable = $True
$Key = (Get-Item HKLM:\).OpenSubKey("SYSTEM", $Writable).CreateSubKey("CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\DES 56/56")
$Key.SetValue("Enabled", "0", [Microsoft.Win32.RegistryValueKind]::DWORD)

 

MDOP and EMET for Windows 10

It’s been a while since I’ve posted anything here now which is in part down to me being busy at home and in part due to work being full-on at the moment trying to juggle a handful of internal systems projects as well as dropping in customer engagements but you won’t hear me complaining as it’s all great work.

In the time since I last wrote anything, Windows 10 has got into full swing and we are already looking at Threshold 2 (or the November 2015 Update) for Windows 10 shipping, which will see the Skype messaging experience rolled out to the public as well as the Cortana text messaging and missed call notifications on the desktop, both of which have been available to people running the Windows 10 Insider Preview builds for a few weeks now.

With people looking more closely at Windows 10, there’s good news for those who rely on the slew of Microsoft tools in the enterprise, as many of them have either already been updated to support Windows 10 or are on their way there. MDOP 2015 was released back in August 2015 and included updated service packs for Application Virtualization (App-V) 5.0 SP3, User Experience Virtualization (UE-V) 2.1 SP1 and Microsoft BitLocker Administration and Monitoring (MBAM) 2.5 SP1 to add support for Windows 10. App-V and MBAM are simply service packs to add support, whilst UE-V not only gains support for Windows 10 but also gets native support for Office 2013, which means you no longer need to manually import the Office 2013 .xml templates into your Template Store.

Sadly, UE-V 2.1 SP1 shipped before the release of Office 2016, which means there is no native support for it. This seems to be a common theme for UE-V: the product ships ready for a new Windows version but misses the matching Office version. If you want to use UE-V with Office 2016, you can head over to the TechNet Gallery and download the official Microsoft .xml templates from https://gallery.technet.microsoft.com/Authored-Office-2016-32-0dc05cd8.

Aside from MDOP, Microsoft EMET is being updated to version 5.5 which includes support for Windows 10 along with claiming to include much improved Group Policy based management of the clients. I haven’t tried this for myself yet as the product is still in beta but I will be giving it a try soon and I will be sure to post anything I find that can help improve the management position of it.

As a throw-in note, if you are using System Center Endpoint Protection for anti-virus then you might want to have a read of this post by System Center Dudes at http://www.systemcenterdudes.com/sccm-2012-windows-10-endpoint-protection/, which explains the behaviour of Endpoint Protection in Windows 10.

Enterprise Windows 10 Migration Article

Recently, via my work at Fordway, I was asked to write an article for the website ITProPortal on Windows 10 migration from an enterprise perspective.

The article got published on October 30th and judging by the social share buttons on the site, it has received quite a warm reception. You can read the article, entitled Migrating to Windows 10: It’s all about the preparation at http://www.itproportal.com/2015/10/30/migrating-to-windows-10-all-about-preparation/.

Unattended Installation of Office 2016

With the release of Office 2016, Visio 2016 and Project 2016, many will want to start thinking about their upgrade. Office 2016 at present is only available in the Click-to-Run format but, if the Office 365 community is to be believed, there will be an .msi-based installation coming for volume license customers on October 1st.

As it happens, the Click-to-Run experience in Office 2016 is actually quite nice compared to previous incarnations, and while I’ve been running the preview builds of Office 2016 I certainly haven’t seen any performance issues, so I see no reason not to use Click-to-Run now, especially given that if you ever decide to remove Office from the machine, it will leave you with a cleaner slate.

This post is going to cover how to build an offline source and perform an unattended installation of Office 2016. This will work for Configuration Manager customers as well as customers using a manual installation process. To perform an offline installation of Office 2016, you are going to need two things: the Office 2016 Deployment Tool and an offline source for Office 2016. If you don’t have the latter already, you can generate it using the tool, but I was able to get the offline source from the MSDN .iso download.

Download the Deployment Tool

First things first, go to http://www.microsoft.com/en-us/download/details.aspx?id=49117 and get the Office 2016 Deployment Tool. The installer for this doesn’t actually install an application but merely unpacks a setup.exe file and a sample configuration.xml file. I unpacked the setup.exe file to a folder on the root of my drive for easy access.

Within this folder, create sub-folders for each of the Office products you want to configure. In my case, I am doing all three: Office, Project and Visio and once you have created these folders, copy the setup.exe file to each sub-folder.

Create the Configuration Files

Once you’ve got the sample configuration.xml file, you can use it, along with the reference at https://technet.microsoft.com/en-us/library/jj219426.aspx, to generate your custom configuration files. I have created three: one for Office, one for Project and another for Visio, all of which I have included below to save you some time.

You will notice that in the Product section of these files, I have a value called PIDKEY. This PIDKEY value is where you provide your product key if you are using one. If you are using per-user licensing then you need to remove the entire PIDKEY value.

I have also opted to exclude Access, InfoPath and Publisher from my installation as I don’t have a need for these applications. A full list of applications you can exclude is available on the TechNet reference page. Another option which you may find useful is Display Level, which can be set to Full or None. I have opted for None to make this a silent installation, but you could opt for Full, which presents the user with the UI for the installation without prompting them to answer any questions. This allows the user to track the progress of the installation if you are trying to perform a passive install rather than a silent one.

Save each product’s configuration file in its relevant directory. It is worth noting that you are not obliged to name the configuration file configuration.xml; you can save it as whatever you want to call it, which allows you to maintain multiple configurations for different sets of users who require access to different Office applications.

Office 2016 Pro Plus Configuration File

<Configuration>

  <Add OfficeClientEdition="32">
    <Product ID="O365ProPlusRetail" PIDKEY="XXXXX-XXXXX-XXXXX-XXXXX-XXXXX">
      <Language ID="en-US" />
      <ExcludeApp ID="Access" />
      <ExcludeApp ID="InfoPath" />
      <ExcludeApp ID="Publisher" />
      <ExcludeApp ID="SharePointDesigner" />
    </Product>
  </Add>

  <Updates Enabled="True" />

  <Display Level="None" AcceptEULA="True" />

  <Property Name="AutoActivate" Value="1" />
  <Property Name="ForceAppShutdown" Value="True" />

</Configuration>

Project 2016 Professional Configuration File

<Configuration>

  <Add OfficeClientEdition="32">
    <Product ID="ProjectProRetail" PIDKEY="XXXXX-XXXXX-XXXXX-XXXXX-XXXXX">
      <Language ID="en-US" />
    </Product>
  </Add>

  <Updates Enabled="True" />

  <Display Level="None" AcceptEULA="True" />

  <Property Name="AutoActivate" Value="1" />
  <Property Name="ForceAppShutdown" Value="True" />

</Configuration>

Visio 2016 Professional Configuration File

<Configuration>

  <Add OfficeClientEdition="32">
    <Product ID="VisioProRetail" PIDKEY="XXXXX-XXXXX-XXXXX-XXXXX-XXXXX">
      <Language ID="en-US" />
    </Product>
  </Add>

  <Updates Enabled="True" />

  <Display Level="None" AcceptEULA="True" />

  <Property Name="AutoActivate" Value="1" />
  <Property Name="ForceAppShutdown" Value="True" />

</Configuration>

Create an Offline Source

With your configuration files created and saved in your product-specific sub-folders, we can proceed with creating the offline source.

If you have the .iso media from MSDN or elsewhere, mount the .iso file and locate the office folder on it. Copy this office folder into the sub-folder for your specific product and then repeat this with the media for the remaining products. You should end up with three folders, one each for Office, Visio and Project, and inside each of these folders you will have a folder named office, the .xml configuration file and the setup.exe file.

If you don’t have the media, we now need to download the content for offline use. Open an elevated command prompt and change the working directory to the directory where your setup.exe is located. From here, type the command setup.exe /download configuration.xml. This will start the download of the Click-to-Run components for offline use. Once it has completed, repeat the process for any remaining Office products you are using.
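As an example, assuming the folder layout described above lives under C:\Office2016 (the path is purely illustrative), the downloads would look like this:

cd /d C:\Office2016\Office
setup.exe /download configuration.xml
cd /d C:\Office2016\Project
setup.exe /download configuration.xml
cd /d C:\Office2016\Visio
setup.exe /download configuration.xml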

Perform an Unattended Installation

With the configuration files and offline source ready, you can now perform an unattended installation. To do this, you simply use the command setup.exe /configure configuration.xml from the working directory containing the files. You don’t need to specify the path to the configuration file as you have put it in the same directory as the setup.exe file and you don’t need to specify the path to the offline source because it will automatically look for this in the office folder from where you launched the setup.exe file.

If you are deploying Office using Configuration Manager, then you simply copy the folders for each of your products to your package source path and create applications for them within Configuration Manager. Clients will download the package source to their local cache, as they do for any normal application, prior to performing the installation.

Update on Product IDs

After publishing this post, I noticed that my test machine wasn’t accepting the licence key I included in the .xml file. This turned out to be because the media I used from MSDN contained not the O365ProPlusRetail Product ID but instead ProPlusRetail. My recommendation here would be to perform a test installation on a test machine first to check the Product ID which gets installed from your media (if you are using any) so that you can make sure you are targeting the correct Product ID.

After updating my .xml file to use the correct Product ID, the installation started automatically entering the product key and automatically activating the products.