App-V Client Management via GPO

Deploying the App-V Client to end-user machines can be a headache. Microsoft provides ADM files for managing the configuration of the App-V Client via Group Policy in AD DS; however, if you are trying to deploy the client yourself, you will soon discover that the Microsoft ADM files don’t allow you to configure an App-V Publishing Server. The only options the ADM files give you are overrides for the sequenced application package and icon source roots.

Using this method, your install string for a silent installation will look something like this:
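For illustration, a hardcoded string might look like the following sketch, using the App-V 4.5 client setup publishing server properties; the display name, server name and port here are placeholders:

setup.exe /s /v"/qn SWIPUBSVRDISPLAY=\"AppV Publishing\" SWIPUBSVRTYPE=\"RTSP\" SWIPUBSVRHOST=\"appv01.example.com\" SWIPUBSVRPORT=\"554\" SWIPUBSVRPATH=\"/\" SWIPUBSVRREFRESH=\"ON\""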


As you can see, this isn’t exactly elegant, and if you are using SCCM to deploy the App-V Client as I am, you will soon discover that SCCM has a character limit on the installer command line, which may force you to wrap the installation in a batch file and call that file from the SCCM Program instead.

The other problem is that you are then hardcoded to the server name and port specified in the install. Yes, you could use a DNS CNAME to direct your clients to the App-V servers, and sure, you could use a GPO to edit the registry keys on the end-user machines after the fact, but none of this is as elegant as properly managing the deployment.

Enter Login Consultants, a Netherlands-based virtualization specialist. The company provides a third-party ADM file which you can import into AD DS to extend the App-V management options beyond those in the Microsoft ADM file, and best of all, you can register and download the ADM file for free from their website.

Using the Microsoft ADM file and the Login Consultants ADM file in conjunction, your install string turns into this:

setup.exe /s /v"/qn"

Much cleaner and easier to set up in Configuration Manager, and it gives you the ability to manage all of your App-V server configuration via Group Policy, including the server name, ports, protocol, the SFT_SOFTGRIDSERVER environment variable and all the other settings you need.

For centralising and streamlining management this is a huge boon, as it means you have a one-size-fits-all deployment of the App-V Client, leaving you to manage everything else from either AD DS or the App-V Management Server.

Certificate Store Permissions and Windows Live Block App-V RTSPS Protocol

Last week, while converting our internal ICT dogfood trial of App-V into a highly available, production-capable App-V solution, we decided to use the RTSPS (Real Time Streaming Protocol Secure) protocol for streaming our applications.

Using my own and a colleague’s laptops to test the RTSPS protocol, we ran into an issue whereby the client received the following error:

The specified Application Virtualization Server has shut down the connection. Try again in a few minutes. If the problem persists, report the following error code to your System Administrator.

Error Code: xxxxxx-xxxxxx0A-10000009

We initially discovered from an App-V blog article that this issue occurs when the NETWORK SERVICE account lacks permission to access the machine keys in the server’s certificate store.

Following the article’s advice for Windows Server 2008 R2 systems, this was quickly resolved by using the Certificates Microsoft Management Console snap-in to grant the NETWORK SERVICE account Read permission on the certificate used to secure the RTSPS protocol in App-V.
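If you prefer the command line, the equivalent permission can be granted on the certificate’s underlying machine key file with icacls; the key container file name below is a placeholder, and this assumes the key lives in the default MachineKeys directory:

icacls "C:\ProgramData\Microsoft\Crypto\RSA\MachineKeys\<key-container-file>" /grant "NETWORK SERVICE":R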

Thinking the issue was resolved, we initiated a Refresh on the App-V client and tried to stream an application that we had previously sequenced; however, we now received a new error:

The Application Virtualization Client could not update publishing information from the server App-V Server. The server will not allow a connection without valid NTLM credentials. Report the following error code to your System Administrator.

Error code: 4615186-1690900A-00002002

This left us puzzled. Unable to find a solution initially, we turned to Bing for some assistance, unearthing an interesting but niche blog post.

According to that post, machines with components from the Windows Live Essentials suite of applications cannot use the RTSPS protocol, due to an entry added to the LSA Security Packages registry value.
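You can check whether the offending entry is present before editing anything; the value is a REG_MULTI_SZ under the LSA key:

reg query "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v "Security Packages"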


After removing the livessp entry from the multi-string value in the registry and restarting the system, we were able to refresh from the server and stream the applications successfully.

Error Opening Excel 2007 and 2010 Documents in SharePoint 2010

Last night we completed a SharePoint 2010 upgrade at work, and after all the testing we deemed it a success; however, coming into the office this morning, we received reports from some users that they were unable to open some of their Excel spreadsheets stored in various Document Libraries.

After some diagnosis, it turned out that the problem only affected Office 2007 and Office 2010 XML-format documents; original-format Excel documents from Office 2003, and documents saved in the 2003 format, were unaffected.

Initially suspecting the problem was linked to the new Excel Services Application in SharePoint 2010, I completed the configuration of the Excel Services Application, which we had previously left unconfigured as it was not currently required; however, the problem persisted.

Whilst searching TechNet for the error code we were receiving, I encountered a page entitled “Configure the Default Behaviour for Browser-Enabled Documents”, which details how to manage SharePoint’s behaviour when launching web-compatible documents.

SharePoint 2010 features various web-enabled services and can be configured to use Office Web Apps, which is a hosted version of the applications available via Office Live WebApps. The default behaviour for SharePoint 2010 is to attempt to launch web-compatible formats using the web-based application, and because this was not configured in our environment, the error appeared.

The resolution to the problem was simply enabling the Site Collection Feature Open Documents in Client Applications by Default. Once enabled on the Site Collection, applying the setting to all child sites, SharePoint began prompting the users to open files with their client-side installations of Excel, as per the SharePoint 2007 behaviour.
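If you have several site collections to fix, the same feature can be toggled from the SharePoint 2010 Management Shell; this assumes the feature’s internal name OpenInClient, and the site collection URL below is a placeholder:

Enable-SPFeature -Identity "OpenInClient" -Url "http://sharepoint.example.com/sites/teamsite"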

Heatmiser Touchscreen Thermostat Review

As part of one of the many DIY projects at home at the moment, me and the wife are preparing to get our hallways, stairs and landings plastered; the final piece of our house decoration project, no less than five years after buying our first house together.

The house was originally equipped with warm-air circulation heating; however, about two years before we moved in, the previous owners had central heating fitted. As with any standard central heating install, the thermostat and timer system was basic and left a lot to be desired, with the timer controls located upstairs, tucked away with the boiler and wiring centre, and the thermostat being a standard twist-dial model where engaging the heating was a process of trial and error, twisting the dial until you heard the vague click of engagement.

Being a technophile and wanting something a little more suited to our lifestyle and love of technology, I discovered a company called Heatmiser, which makes a range of amazing-looking and well-functioning slimline and touchscreen thermostats. I purchased their PRT-TS touchscreen model, which you can see on their website.

The unit is flush mounted, which meant I had to spend some time channelling out the wall to recess a 35mm pattress back box, but this was a good thing: it gave me the chance to remove the plastic cable trunking which the previous owners had used to ‘hide’ the wiring for the old surface-mounted model, and to re-position the unit away from the kitchen door, which the previous owners had mounted it directly against, which just looked unsightly.

In our wiring configuration there is a small gotcha, which I originally misread in the wiring diagram: you may require a short piece of Live-coloured wire (if you’re doing it properly) to bridge the Live and A1 terminals. The A2 terminal connects to the yellow call-for-heat wire; however, the switch to engage the call for heat is across the A1 and A2 terminals. Bridging the Live and A1 terminals allows current to flow through the call-for-heat switch, and hence allows the heating to be engaged. In my initial wiring the unit was functional, but heat wasn’t being called for this reason. Bear in mind that depending on your wiring configuration, this bridge may not be required.

The advantages of this setup are amazing. The new thermostat actually controls the entire heating timings and temperature, so to have it function correctly you configure the timer unit in the wiring centre for permanent-on mode and let the thermostat do the rest, making it friendlier to control as it’s in the main body of the house. The new unit is Energy Saving Trust approved and claims to be able to save up to 10% on your heating bills thanks to two key features. One is accuracy, as this unit is accurate to ±1°C versus around 2–3°C for a standard unit; the second is Optimum Prestart.

Unlike conventional thermostats, where you have to incorporate an element of warm-up time into your programming so that the house is warm when you wake up, this unit calculates the exact amount of time needed to warm the house to the required temperature and engages the heating automatically so that you reach that temperature by the time you set. This feature is disabled by default, but entering the Feature configuration mode on the touchscreen LCD allows you to enable it and specify how many hours the unit may use for its Optimum Start functions.

The finish of the product is really nice. I opted for the silver bezel, and with its blue-backlit LCD, which only illuminates when you touch the screen, it looks really modern and 21st century; it’s also available in white and brass finishes too.

The unit has another feature called Frost Protection Mode, enabled by default, which allows you to configure a temperature which, when breached, will automatically engage the heating outside of your normal ‘comfort levels’, as Heatmiser calls what you would normally call timer settings. This level can be set low to prevent accidental heating engagement, but is valuable as it helps maintain a safe temperature in the house and helps prevent pipes freezing in deep winter cold. It is also another way the unit helps reduce your bills: firing up the heating for short five-minute bursts during normal daytime hours to maintain a core temperature means you actually need the heating engaged for less time during your comfort times, because the house is already closer to that comfort temperature.

Although I’m yet to see the real effectiveness of the unit as it’s currently summer, I’m sure it’s going to be great. The lock function for the LCD means that the kids can’t change any of the settings without unlocking it, which requires a 10-second key press to disengage. The Hold function allows you to boost the temperature if you are feeling a bit cold one evening, and lets you specify a hold time so that you don’t forget to turn the thermostat down again afterwards.

The timer programming is simple yet concise. I’ve set our unit to 7-day mode to suit our lifestyle, which gives you four settings for each of the seven days, and for each event (Wake, Leave, Return and Sleep) you can specify a temperature, so you no longer have to run to the thermostat in the evening to turn it up because you want it warmer in the evening than you do in the mornings before work.

The Heatmiser line actually includes many other products, some of which really interest me. One is a unit identical in looks to ours which also allows control of the hot water timings, completely removing the need for a timer in the wiring centre; however, the setup of our current heating system doesn’t permit this model. Our unit is a 230V model due to our current system, but they have a range of 12V units for more modern low-voltage heating, and they also have a range of network thermostats which, when connected to a Network Wiring Centre, allow you to link multiple thermostats for split-zone heating, control all of the units from a single unit, or even control the heating remotely via a web application or SMS message. I hope that in our next house, years down the line, I get the opportunity to use some of these other products. I would love to be able to use the Heatmiser web application as part of a Media Center interface via a plugin, so that you can adjust the heating from your 10ft view.

Come the winter I’ll post another review of how the unit actually performs at managing the heating bill and temperature maintenance, but so far, the outlook is good.

Outlook 2010 Social Connector ProgID for Facebook

Today, I was investigating the management and control of the Outlook Social Connector via Group Policy, using the Office 2010 ADM/ADMX files from Microsoft.

Two of the settings of interest for the Outlook Social Connector are the ability to control which social connectors are displayed, and which are automatically loaded without user interaction. While searching online, a Microsoft forum thread appeared in my results with the ProgIDs for some of the available connectors; however, it was missing a big one – Facebook.

Looking in the HKEY_CLASSES_ROOT registry hive on my machine, where I have the Facebook connector installed, I found it, so here is a list of the currently available Outlook Social Connector ProgIDs which can be used (semicolon-separated) in the Group Policy Management Console to configure the behaviour.

SharePoint – OscAddin.SharePointProvider
SharePoint –
LinkedIn –
MySpace –
Windows Live Messenger – OscAddin.WindowsLiveProvider
Facebook – OscAddin.FacebookProvider
Facebook – OscAddin.FacebookProvider.1
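As a hypothetical example, a policy value listing the SharePoint and Facebook providers would be entered as a single semicolon-separated string:

OscAddin.SharePointProvider;OscAddin.FacebookProvider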

I hope this helps you all.

Circumventing Intel’s Discontinued Driver Support for Intel PRO 1000/MT Network Adapters in Server 2008 R2

In a previous life, my Dell PowerEdge SC1425 home server had an on-board Intel PRO 1000/MT Dual Port adapter, which introduced me to the world of adapter teaming. At the time I used the adapters in Adapter Fault Tolerance mode, because it was the simplest to configure and gave me redundancy in the event that a cable, server port or switch port failed.

My current home server had been running since its inception on the on-board adapter, a Realtek Gigabit adapter, which worked; however, it kept dropping packets and causing the orange light of death on my Catalyst 2950 switch.

Not being happy with its performance, I decided to invest £20 in a used PCI-X version of the Intel PRO 1000/MT Dual Port adapter for the server. Although it’s a PCI-X card, it is compatible with all PCI interfaces too, which means it plays nice with my ASUS AMD E-350 motherboard. What I didn’t realise is that Intel doesn’t play nice with Server 2008 R2 and Windows 7.

When trying to download the drivers from the Intel site, after selecting either Server 2008 R2 or Windows 7 64-bit, you get a message that they don’t support this operating system for this version of the network card, which I can kind of understand given the age of this family of cards; however, it posed me an issue. Windows Server 2008 R2 on the home server automatically detected the NICs and installed Microsoft drivers, but that left me without the Advanced Network features needed to enable the team.

I set off by downloading the Vista 64-bit driver for the adapter and extracting the contents of the package using WinRAR. After extraction, I tried to install the driver, and sure enough the MSI reported that no adapters were detected, presumably because of the differences in the driver models between the two OSes. After this defeat, I launched Device Manager and attempted to install the drivers manually using the Update Device Driver method. After specifying the Intel directory as the source, sure enough, Windows installed the Intel versions of the drivers, digitally signed, without any complaints.

With the proper Intel driver installed, I was still left with one problem: the teaming. Inside the package was a folder called APPS with a sub-directory called PROSETDX. Anyone who has previously used Intel NIC drivers will recognise PROSET as the name of the Intel management software, so I decided to look inside, and sure enough, there is an MSI file called PROSETDX.msi. I launched it, and to my immediate horror, it started the same installer that the autorun starts.

Not wanting to give up hope, I ran through the wizard and completed it, expecting it to again say that no adapters were found; however, it proceeded with the installation and soon completed.

This part may differ for some of you: somewhere between version 8.0 and version 15.0 of the Intel PROSet driver, Intel made the bold move of relocating the configuration features from a standalone executable to an extension in the Device Manager tabs for the network card. I opened the device properties, and to my surprise, all of the Intel Advanced Features were installed and available.


I promptly began to configure my team, and it set up without any problems, creating the virtual adapter without any issues too, including installing the new driver for it and the new protocols on the existing network adapters.

With this new server, I decided to do things properly, and I’ve configured the team using Static Link Aggregation. I initially tried IEEE 802.3ad Dynamic Link Aggregation, however the server was bouncing up and down like a yo-yo, so I set it back to Static. In the information for the Static Link Aggregation mode is a note about Cisco switches:

This team type is supported on Cisco switches with channelling mode set to "ON", Intel switches capable of Link Aggregation, and other switches capable of static 802.3ad.

Following this advice, I switched back to my SSH prompt (which was already open after trying to get LACP working for the IEEE 802.3ad team). On each interface, a single channel-group command completes the config, placing the port in the EtherChannel with the mode forced to on (static) rather than negotiating via PAgP or LACP:

interface GigabitEthernet0/1
 description Windows Home Server Team Primary
 switchport mode access
 speed 1000
 duplex full
 channel-group 1 mode on
 spanning-tree portfast
 spanning-tree bpduguard enable
!
interface GigabitEthernet0/2
 description Windows Home Server Team Secondary
 switchport mode access
 speed 1000
 duplex full
 channel-group 1 mode on
 spanning-tree portfast
 spanning-tree bpduguard enable

The finishing touch was to check the Link Status and Speed in the Network Connection Properties: 2.0Gbps displayed for the two bonded 1.0Gbps interfaces. Thank you, Intel.
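On the switch side, a quick check confirms the channel has bundled correctly; with the mode set to on, the port channel should show both member ports in use:

show etherchannel 1 summary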


Package Fails to Distribute in SCCM When an autorun.inf File is Present

At work this week, I was working with an Intel HD Graphics driver package which, in SCCM terms, you would call a bad driver. We call it a bad driver because it doesn’t install correctly using the Apply Device Drivers OSD step, and instead requires a full application installer to be executed.

After creating the package in SCCM, I proceeded to distribute the package to our distribution points on the network so that the operating system deployment process would be able to access the files required to deploy the application.

After waiting a short while for the package to distribute, I checked the Package Status view in the ConfigMgr console and saw that the status was Install Retrying. Looking at the status log for the distribution point, I saw that it had already gone into a retrying state several times, and it had received the following error:

SMS Distribution Manager failed to copy package "SITE0011C" from "\\SERVER\PATH\Intel\HD Graphics Display Driver\x64" to "MSWNET:["SMS_SITE=SITE"]\\SERVER\SMSPKGD$\SITE0011C".

Possible cause: SMS Distribution Manager does not have sufficient rights to read from the package source directory or to write to the destination directory on the distribution point.

Solution: In the SMS Administrator console, verify that the site system connection accounts have sufficient privileges to the source and destination directories.

Possible cause: The distribution point might not be accessible to SMS Distribution Manager running on the site server.

Solution: If users are currently accessing the package files on the distribution point, disconnect the users first. If the package distribution point is located on a Windows NT computer, you can force users to disconnect by clicking on the "Disconnect users from distribution points" box in the Data Access tab of the Package Properties dialog box.

Possible cause: The distribution point does not have enough free disk space to store the package.

Solution: Verify that there is enough free disk space.

Possible cause: The package source directory contains files with long file names and the total length of the path exceeds the maximum length supported by the operating system.

Solution: Reduce the number of folders defined for the package, shorten the filename, or consider bundling the files using a compression utility.

I logged into the affected distribution point and verified that the file shares used by SCCM were still active and that there was sufficient disk space on the server, which there was.

I have encountered issues with package distribution before, when a Windows 7 64-bit image was refusing to distribute. I couldn’t find a cause then, and in that instance re-creating the package resolved the issue, so this was my first port of call. In the sources directory, I made a new folder and copied the source files fresh from my workstation to the server, in case there had been a problem with the previous file transfer.

On this occasion, whilst copying the files, I got an error, and it was generated specifically on the autorun.inf file included in the download from the Intel site. I thought this was weird, but knowing how invasive our McAfee enterprise policies can be at times, I wondered if the autorun.inf file was causing an issue. I deleted the autorun.inf file from the original package sources directory on the server and watched while SCCM happily distributed the package to the distribution points.

After a quick bit of investigation, I soon discovered a setting in McAfee VirusScan called Prevent Remote Creation of autorun.inf Files, which was enabled. Because SCCM uses SMB to transfer packages from the source directory to the distribution points, this triggered the McAfee rule and blocked the entire package from being copied.

As a rule of thumb, there is no reason to have autorun.inf files inside your SCCM packages and their source directories, so in this instance I simply omitted the file. If you needed to keep the file, you could instead disable this protection rule for your SCCM Site and Distribution Point servers and the server which holds your package source files (perhaps a file server). Although I have mentioned McAfee as the culprit in this scenario, I’m pretty sure that other anti-virus applications feature a similar rule which could cause you similar headaches.
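Before distributing, it’s worth sweeping your package source tree for stray autorun.inf files; the share path below is a placeholder for your own package sources location:

dir /s /b "\\SERVER\Sources\autorun.inf"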

So Long, So Busy

It’s been quite a while since I’ve posted anything, and for that I am disappointed; however, I do have just cause.

Since my last post, me and the wife Nicky have been working through the Cambridge Weight Plan, and in one week I lost 10lb, which is amazing. We’ve got Nicky’s dad over this week helping us with some loose ends of DIY in the house too, so my evenings are packed with DIY work.

Back in the land of tech, I’ve been busy on a MIMEsweeper for SMTP training course, a Websense training course, and working feverishly hard on System Center Configuration Manager and Exchange. I’m hoping I’ll be coming out with a couple of posts soon on these subjects, along with some bits on Windows Home Server 2011.

The Trials and Tribulations of Installing Windows Home Server 2011

As I sit here now in my study at home, I am blessed by the new soothing sound of my self-built Windows Home Server 2011 system. And why is the sound soothing? Because it’s silent. My rack is still making some noise, coming from the Cisco switch and router, which both probably need a good strip-down and de-dust, but it is nothing compared with the noise of the old PowerEdge SC1425 that I had running.

Unfortunately, installing Windows Home Server 2011 for me wasn’t smooth sailing, and I hit quite a few bumps along the way, so here is the list of problems I faced to help others avoid the same time wasters.

Before even starting the installation, please make sure you read the release notes. Ed Bott has gone through some of the crazy requirements in a post at ZDNet. The biggest one to watch out for is the clock.

Due to some kind of bizarre issue with the RTM release of WHS 2011, you must change the time in your BIOS to the time for PST (Pacific Standard Time), or GMT-8hrs. You must then leave the BIOS, and consequently the Windows clock, at that time, and during the installation, when prompted for a Time Zone, you must set it to Pacific Standard Time.

Once the installation is complete, you must then wait a further 24hrs before changing the time back. If you choose not to heed this advice, the release notes state that you will not be able to join any client computers to the Home Server during this 24hr period. Once your 24hr period is up, you can log into the server and change the time and time zone accordingly.

The first problem hit at the first phase of the installation, Extracting Files, while it was at 0%. Reviewing the error log from the setup process, I saw that it had encountered Setup Error 31 (Trackback: 80004005). A quick look on the Microsoft Social Forums led me to discover that WHS 2011 doesn’t support having any kind of RAID or array-type disk attached during the installation. For me, this meant disconnecting the RAID-10 controller and powering down the disks attached to it for the duration of the install. Once the install was completed, I simply reconnected the controller, installed the drivers, and everything worked perfectly as expected.

The second problem occurred once the installation was complete and the WHS 2011 customisation process ran after first logon. It seems that WHS 2011 goes out to Windows Update and pulls a couple of required updates, and as such needs a working network card. My motherboard uses a NIC which isn’t natively supported by WHS 2011, so I had to install the driver; to my shock, the initial lack of a NIC terminated the setup process and I was forced to restart.

As my existing home server and the new home server were to use the same IP address, I had the new one disconnected initially. This caused the next problem: after installing the NIC driver, I was prompted that there was no network connectivity and that I should connect a network cable. Once again, to my shock and disbelief, this required another restart.

At this point, I also realised that my Cisco switch still had port security turned on for the home server’s port, bound to the old server’s MAC address, so I had to disable it on the switch. And guess what? Reboot again.
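For reference, clearing that configuration on the switchport (assuming IOS and the relevant interface) looks something like this:

interface GigabitEthernet0/1
 no switchport port-security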

My final problem lay with the network card on the motherboard itself. In the BIOS, I had enabled the maximum power-saving mode setting; it turns out that on the ASUS E35M1-M PRO motherboard, this prevents the network card operating in 1Gbps mode and drops it to 100Mbps. It took me a while to figure this one out, changing cables, switching between switch ports and so on, but I eventually discovered an option under the network card in Device Manager called Green Ethernet. Disabling this setting, which was previously set to Enabled, reset the network connection, and it then connected at 1Gbps.

After all of this, I have a fully working and perfect home server for me and the family. I’ll be writing some other posts to explain my setup in detail, but this post is purely about the installation process.

WordPress Development – functions.php

As I travel down the road of WordPress theme development, I have discovered many things.

A problem that had been hurting me for at least the last week as I developed the new theme was an error I would occasionally receive, reading Cannot modify header information – headers already sent. As a non-programmer, this didn’t really mean an awful lot to me, and trawling the WordPress support forum didn’t help hugely, as I didn’t understand some of the lingo being used.

I had a starting point: my functions.php file. This filename was referenced in the errors along with a line number; however, upon inspection of that line, I couldn’t see a fault, so I looked elsewhere.

This evening, I compared my functions.php file to that of the TwentyTen theme which ships with WordPress 3.1, and I noticed something interesting. My functions.php file used multiple PHP statements opened and closed as needed, however the TwentyTen functions.php file only had a single set of PHP tags, opening at the start of the file and closing at the end, with each of the functions contained within it.

When I looked back at my file, I saw that the line indicating the error was in fact a closing PHP tag.
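To make the fix concrete, here is a minimal sketch of a safe functions.php layout; the function name and hook are placeholders, not from my actual theme:

<?php
// functions.php — a single PHP block from start to finish. Any character
// outside the PHP tags (even a space or newline after a closing tag) is sent
// to the browser as output, which breaks later header() calls and causes the
// "headers already sent" error.
function my_theme_setup() {
    // theme setup code goes here
}
add_action( 'after_setup_theme', 'my_theme_setup' );

// No closing PHP tag: in a pure-PHP file it is optional, and omitting it
// guards against stray trailing whitespace being sent as output.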

This post is more to serve as a reference for other newbies out there trying to develop their first WordPress theme. Make sure that your functions.php file is a single PHP statement from start to finish, with no leading or trailing line breaks or spaces. For me, this problem caused PHP errors when trying to modify Widgets in the admin interface, configure Plugins and manage the Theme settings, and it also stopped RSS and XML-RPC from working, so it’s a pretty big issue.