Active Directory

Active Directory Fine-Grained Password Policies

This post isn’t going to set the world on fire with its revelations and new features; instead, I am going to talk about a feature that has been around since Windows Server 2008 called Fine-Grained Password Policies.

Active Directory password policies are, even in 2018, still misunderstood. In the consulting engagements I do, I still encounter customer environments where admins have tried to configure multiple Group Policy Objects to control password policy at various levels within their OU structure. A typical example of this behaviour is setting the Default Domain Policy to a standard password complexity and then linking a GPO with a more stringent policy to an OU containing the Domain Admins accounts.
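
Fine-Grained Password Policies are applied directly to users or groups rather than through Group Policy. As a minimal sketch only (the policy name, group and values below are illustrative, not taken from the post), creating and assigning one with the ActiveDirectory PowerShell module looks something like this:

Import-Module ActiveDirectory

# Create a stricter policy for administrative accounts (all values are examples only)
New-ADFineGrainedPasswordPolicy -Name "Admin Accounts PSO" -Precedence 10 `
    -MinPasswordLength 20 -ComplexityEnabled $true -PasswordHistoryCount 24 `
    -MaxPasswordAge (New-TimeSpan -Days 90) -MinPasswordAge (New-TimeSpan -Days 1) `
    -LockoutThreshold 5 -LockoutDuration (New-TimeSpan -Minutes 30) `
    -LockoutObservationWindow (New-TimeSpan -Minutes 30)

# Apply it to the Domain Admins group; where several policies apply, the lowest Precedence wins
Add-ADFineGrainedPasswordPolicySubject -Identity "Admin Accounts PSO" -Subjects "Domain Admins"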

Read more…

Active Directory 2016 Time-Based Group Membership

Group membership control and management is one of the cornerstones of Active Directory Domain Services. In Windows Server 2016, Microsoft introduced a new feature to Active Directory that forms part of the Microsoft Privileged Access Management (PAM) strategy.

When used in conjunction with automation, this can be used to provide Just-In-Time (JIT) access to protected and administratively sensitive services. When used in an environment that is synchronised with Azure Active Directory using Azure AD Connect, this can be used to provide JIT for hybrid solutions in Microsoft Azure (when RBAC has been applied to Azure Resource Manager objects).

In this post, I will briefly explain the process of implementing time-based group membership in Active Directory.
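
As a rough illustration of the end result (the group, user and timespan below are examples, and the Privileged Access Management optional feature must already be enabled in a Windows Server 2016 forest, which is a one-way change), a time-bound membership can be added like this:

Import-Module ActiveDirectory

# One-time, irreversible prerequisite: enable the PAM optional feature in the forest
Enable-ADOptionalFeature -Identity 'Privileged Access Management Feature' -Scope ForestOrConfigurationSet -Target 'domain.com'

# Add a member whose membership expires automatically after 60 minutes
Add-ADGroupMember -Identity 'Domain Admins' -Members 'jbloggs' -MemberTimeToLive (New-TimeSpan -Minutes 60)

# Review the remaining TTL on memberships
Get-ADGroup 'Domain Admins' -Properties member -ShowMemberTimeToLive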

Read more…

Cleaning Up Active Directory and Cluster Computer Accounts

Recently at work, I’ve been looking at doing a clean up of our Active Directory domain, namely removing stale user and computer accounts. To do this, I wrote a short but sweet PowerShell script which gets all of the computer objects from the domain and includes the LastLogonTimestamp and pwdLastSet attributes to show when the computer account was last active; however, I came across an interesting problem with cluster computer objects.

Import-Module ActiveDirectory
Get-ADComputer -Filter * -SearchBase "DC=domain,DC=com" -Properties Name, LastLogonTimestamp, pwdLastSet -ResultPageSize 0 | Select Name, @{n='LastLogonTimestamp';e={[DateTime]::FromFileTime($_.LastLogonTimestamp)}}, @{n='pwdLastSet';e={[DateTime]::FromFileTime($_.pwdLastSet)}}, DistinguishedName

When reviewing the results, it seemed as though Network Names for Cluster Resource Groups weren’t updating their LastLogonTimestamp or pwdLastSet attributes even though those Network Names are still in use.

After a bit of a search online, I found a TechNet Blog post at http://blogs.technet.com/b/askds/archive/2011/08/23/cluster-and-stale-computer-accounts.aspx which describes exactly that situation. The LastLogonTimestamp attribute is only updated when the Network Name is brought online, so if you’ve got a rock-solid environment and your clusters don’t fail over or come crashing down too often, the object will appear as though it’s stale.

To save you reading the article, I’ve produced two updated versions of the script. This first amendment simply adds the servicePrincipalName column to the result set so that you can verify them for yourself.

Import-Module ActiveDirectory
Get-ADComputer -Filter * -SearchBase "DC=domain,DC=com" -Properties Name, LastLogonTimestamp, pwdLastSet, servicePrincipalName -ResultPageSize 0 | Select Name, @{n='LastLogonTimestamp';e={[DateTime]::FromFileTime($_.LastLogonTimestamp)}}, @{n='pwdLastSet';e={[DateTime]::FromFileTime($_.pwdLastSet)}}, servicePrincipalName, DistinguishedName

This second amended version uses the -Filter parameter of the Get-ADComputer Cmdlet to filter out any results whose servicePrincipalName includes MSClusterVirtualServer, which designates the account as a cluster computer object.

Import-Module ActiveDirectory
Get-ADComputer -Filter 'servicePrincipalName -NotLike "*MSClusterVirtualServer*"' -SearchBase "DC=domain,DC=com" -Properties Name, LastLogonTimestamp, pwdLastSet, servicePrincipalName -ResultPageSize 0 | Select Name, @{n='LastLogonTimestamp';e={[DateTime]::FromFileTime($_.LastLogonTimestamp)}}, @{n='pwdLastSet';e={[DateTime]::FromFileTime($_.pwdLastSet)}}, DistinguishedName

The result set generated by this second amendment of the script produces exactly the same output as the original, with the notable exception that the cluster objects are automatically filtered out of the results. That just leaves you to ensure that when you retire clusters from your environment you perform the relevant clean up afterwards and delete the accounts. Alternatively, you could use an automation tool such as System Center Orchestrator to manage the decommissioning of your clusters and include this as an action for you.
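
If you then want to act on the stale accounts themselves, a hedged sketch along the following lines can help you review candidates before deleting anything (the 90-day threshold and the -WhatIf safety switch are my own assumptions, not from the original post):

Import-Module ActiveDirectory
$Threshold = (Get-Date).AddDays(-90)
Get-ADComputer -Filter 'servicePrincipalName -NotLike "*MSClusterVirtualServer*"' -SearchBase "DC=domain,DC=com" -Properties LastLogonTimestamp |
    Where-Object { [DateTime]::FromFileTime($_.LastLogonTimestamp) -lt $Threshold } |
    Remove-ADComputer -WhatIf   # drop -WhatIf only once you are happy with the list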

Understanding Office 365 and AAD Federated Identity Types

Recently, I’ve undertaken a number of customer chalk and talk sessions on Office 365 to discuss with them some of the benefits they can expect to see from moving from on-premise services to Office 365 hybrid and cloud services. Amongst the myriad of topics that get covered in these sessions, one of the biggest areas for discussion and contention is identity federation from the on-premise environment to Office 365, which uses Azure Active Directory (AAD) as its identity management system.

I thought I would take this opportunity to cover off some of the high-level trade-offs and differences between the ways of achieving identity federation with Office 365 and Azure Active Directory. Please remember that this isn’t an exhaustive list of things to consider but a good taster.

In some future posts, I will be covering deployment scenarios for the two methods of identity federation and also the software we need to configure and deploy in order to make it work.

What is Identity Federation

Simply put, identity federation is the means of allowing your users to log on to both on-premise services and Office 365 and Azure Active Directory authenticated services with a single identity: the identity they know and love that currently resides in your on-premise Active Directory, which to most people is simply the username and password they use to log on to your internal PCs and other systems.

Without identity federation, users have split personas: one logon for your on-premise services and another for their cloud services in Office 365 and Azure Active Directory. If you work in a highly secure, military-grade environment, perhaps this is actually what you want because you don’t want to expose your internal identities to the cloud, but for 99% of people looking at Office 365, you want the simplified experience for your end-users of having just one credential to rule them all.

Single Sign-On vs. Same Sign-On

Single Sign-On and Same Sign-On are the same yet different. Single Sign-On refers to the ability to log on once, such as to a domain-joined Windows desktop PC, and then not have to re-enter credentials for any of the services you consume during that session. This is the holy grail of integration scenarios, where the user experience is totally seamless and the user doesn’t need to think about anything other than which app or product they want to work with next.

Same Sign-On refers to using a single identity, provided by our identity federation solution; however, instead of being seamlessly logged into these systems, the user may be prompted at various stages to re-enter their credentials in order to authenticate to a web service or an application such as the Lync client or a SharePoint team site.

Both of these scenarios are achievable with Office 365 and Azure Active Directory, but choosing one over the other changes the amount of work and effort required upfront. I will explain the technical differences in more detail further on, but for now, know that Same Sign-On is the easiest to implement and requires nothing more than one server to run the synchronisation software. Single Sign-On requires more servers to be deployed and some firewall reconfiguration in order to make the services properly accessible.

Deciding Which Identity Type is Right for You

In this section, I will talk about some of the determining factors for deciding between same and single sign-on.

User Experience

The end-user experience is obviously high on the agenda when it comes to deciding between the two identity federation modes. Same sign-on means users can log in to services with the same username and password, whilst single sign-on allows users to seamlessly move between applications and services without the need to re-authenticate.

The winner here is clearly single sign-on; however, this comes with the caveat that Outlook does not behave in a truly singular fashion like the other applications do.

Due to the way that Office 365 uses the RPC over HTTPS protocol with Basic Authentication, Outlook users will still be prompted to enter their credentials even when single sign-on is deployed and properly configured. This is a common misconception and people see it as a problem with their single sign-on configuration or deployment but sadly it is just the way that Outlook behaves. The workaround to this issue is for users to select the Remember My Credentials option and their password will be cached in the Windows Credential Manager until such a time that the user changes their password.

Password Security

Password security is in my opinion, the number one reason that people consider the single sign-on deployment over same sign-on.

With a same sign-on deployment, the authentication to the Office 365 and Azure Active Directory services is performed within AAD. The synchronisation software run within your organisation syncs both the users and their passwords (in hashed form) to the cloud AAD directory. When a user requests access to a service, the password entered by the user is sent to AAD for verification and authorization. All this means that there are copies of your passwords stored in AAD.

With single sign-on, your passwords are not stored in the cloud AAD directory but instead, only the usernames and a few other attributes of the users. When a user requests a login to a service, the authentication request is forwarded to servers within your organisation known as proxies which then forward the request to your internal Active Directory domain controllers. Once authorized, a token is sent back to Office 365 and Azure Active Directory to approve the login. In a nutshell, the passwords never leave your environment, only tokens approving or denying the connection.

The winner for password security is likely to be single sign-on. The idea of having passwords stored in the cloud is too much for some organisations, whereas for others the idea of authentication tokens flying back and forth is equally bad, coupled with the exposure that a single sign-on solution potentially adds to your internal directory environment.

Password Changes

Users need to be able to change their passwords in accordance with your organisation’s security policy. Typically, in order for a user to make that change, they need to be either on-premise or connected to the on-premise environment via a VPN or similar if they are working remotely. Windows Server DirectAccess helps with password changes amongst many other scenarios where users need to be connected to the corporate network.

If you opt for a same sign-on implementation then, by enabling the Password Write-Back feature in the sync application, we can enable users to change their passwords without requiring a connection back to the corporate environment. This password change can be performed via one of the Office 365 application websites such as Outlook Web App. Once the password is changed by the user, it is written to the cloud Azure Active Directory and is synchronised back to the corporate directory when the next synchronisation occurs.

If you deploy a single sign-on model then you do not have the flexibility of enabling users to perform password changes via the portals, because there is no cloud storage of the password and the cloud environment is not aware of the user’s password, only that they were authenticated from your environment.

If you are looking for ways to reduce user reliance on VPN technologies and enable them to do more of their work remotely using cloud-based applications, then same sign-on could provide an added benefit here. Companies which use single sign-on will need to maintain these VPN technologies even if simply to allow users to change their passwords as required. As I have mentioned previously, DirectAccess, if not already deployed, could be a real answer here as it provides always-on connectivity back to the corporate environment and does not require the user to interact with a VPN client, improving the user experience.

Access Revocation

In a scenario where you need to revoke somebody’s access very quickly, either due to a confidentiality issue or because an employee has gone rogue, single sign-on is the clear winner. Same sign-on works by synchronising the state of user objects periodically based on a scheduled job from on-premise to Azure Active Directory. If a user account is disabled on-premise then it could take some time for that change to make its way to the AAD directory, and further damage could be done in the meantime as a result.

If you are using single sign-on, a user account will no longer be able to authenticate to Office 365 and AAD federated applications as soon as the account is disabled, because the authentication request is sent directly into your environment and those services will no longer be able to authorize that user.

Availability and Reliability

This point is closely linked to the password security requirement, as is configuration complexity. In order for users to be able to sign-in to Office 365 and AAD linked services, there needs to be an authentication service available to process the request. For same sign-on where the passwords are stored in the cloud directory, the availability and reliability is provided by Microsoft Azure.

As we understand from the Microsoft Azure SLA page at http://azure.microsoft.com/en-gb/support/legal/sla/, Azure Active Directory Free provides no guarantee of uptime or SLA, although through my personal experience and use of it, I have never seen a problem with it being available and working. Azure Active Directory Basic and Premium both offer a 99.9% enterprise SLA along with a slew of other features, but at a cost.

For deployments using single sign-on, because the authentication requests are redirected to servers which are maintained by you as the customer, the availability and reliability of the authentication service is dependent on you and is borne of a number of factors: there are authentication proxies, authentication servers and domain controllers, all of which need to be available for the solution to work, not to mention any firewalls, load balancers, networks, internet connections, service providers and power sources consumed by these servers.

I’ll be covering some deployment scenarios for both same and single sign-on in a future post; for now, we’ll assume all of this resides in your on-premise datacenters. If you have reliable on-premise servers and infrastructure services and you can provide a highly available solution for single sign-on then you will have no problems; however, if any of the components in the single sign-on server chain fail, users will be unable to authenticate to Office 365 or AAD federated applications, which will cause both a user experience and an IT support issue for you.

Configuration Complexity

Taking what we learned from the reliability and availability information above, it is fairly apparent that there are more moving parts and complexities to the single sign-on implementation. If you as an organisation are looking to reduce your configuration complexity because you want to free up time for your internal IT resources, or you are looking to improve your uptime to provide a higher level of user satisfaction, then you should consider whether the complexity of a single sign-on implementation is right for you. In theory, once the solution is deployed it should be no hassle at all, but we all know that servers go bad from time to time, and even when things are working right there are still things to contend with like software updates, security patching, backups and so forth.

The same sign-on implementation requires only one server, requires no firewall changes to allow inbound traffic to your network (all communication is based on an outbound connection from the server) and, if you wanted to, you could even omit the backups because the installation of the software is very simple and can be recovered from scratch in little time at all.

Same sign-on is definitely the winner in the complexity category, so if you have a small or over-worked IT department then this could be a help in the war against time spent on support issues; however, single sign-on clearly provides a richer user experience, so it needs to be a balanced debate about the benefits of avoiding the configuration complexity.

Software Implementation

In this section I am going to quickly cover off the software we use for these scenarios, but I am not going to go into their configuration or the deployment scenarios as I want to save that for another post, at the risk of this one dragging on too long. The other reason I want to cover these here is that throughout this post I have talked of same sign-on and single sign-on, and I want to translate that into software terms for later reference.

Same Sign-On using DirSync, AADSync or FIM

Same sign-on is implemented using the Directory Synchronisation (DirSync) tool, the Azure Active Directory Sync (AADSync) tool, or Forefront Identity Manager with the Azure AD Connector.

The DirSync tool has been around for quite some time now and has been improved with new features from one iteration to the next. For example, initial versions of DirSync didn’t support Password Sync to the cloud environment, which meant users had different passwords for on-premise and cloud and was a reason a lot of early Office 365 adopters didn’t adopt DirSync. With more recent additions to DirSync to support Password Sync and Write-Back from the AAD cloud environment, DirSync has become a lot more popular, and I find that the majority of customers seeking a fast-track adoption of Office 365 go for DirSync. DirSync has a limitation of only working for environments with a single Active Directory forest, which makes it a non-starter for customers with more complex internal environments.

Forefront Identity Manager (FIM) with the Azure AD Connector is DirSync on steroids, or rather, DirSync is a watered-down version of FIM: a pre-packaged and pre-configured build of it. Using the full FIM application instead of DirSync enables support for multi-forest environments and also for environments where identity sources include non-Active Directory systems such as LDAP or SQL-based authentication sources. For customers wanting a same sign-on experience but with a complex identity solution internally, FIM was the only option. If you have FIM deployed in your environment already then you may want to take this route in order to help you sweat the asset.

AADSync is a relatively new tool and only came out of beta in September 2014. The AADSync tool is designed to replace DirSync as it provides additional features and functionality; for example, AADSync has support for multi-forest environments (although still only Active Directory based), making it much more viable for larger, complex customer environments. It also allows customers to control which attributes and user properties are synchronised to the cloud environment, making it better for the more security-conscious amongst us.

Single Sign-On using ADFS

Single sign-on is implemented using Active Directory Federation Services (ADFS). ADFS is deployed in many different configurations according to the requirements of the organisation, but in its simplest form it requires an ADFS Proxy, a server residing in your DMZ responsible for accepting the incoming authorization requests from Office 365 and AAD to your environment. These requests are passed to an ADFS server which resides on the internal network and communicates with the Active Directory environment to perform the actual authorization.

There is an additional component in the mix: a DirSync or AADSync server is still required. This server is deployed to push the user objects into AAD, but there is no password sync or write-back of attributes occurring here; it exists purely so that AAD knows what users you have and can tell whether a valid username has been entered before passing the request down to your environment for authorization.

Because of the public nature of ADFS, it requires you to have public IP addresses available, certificates for the URLs used by ADFS, and a DMZ segment exposed to the internet.
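
Once the ADFS farm and proxies are in place, the federation trust for a domain is typically established with the Azure Active Directory Module for Windows PowerShell (MSOnline). As a hedged sketch only (the server and domain names are placeholders, and your deployment steps may differ), the shape of it is:

Import-Module MSOnline
Connect-MsolService                                    # sign in with an Office 365 global administrator

Set-MsolADFSContext -Computer adfs01.corp.example.com  # point the cmdlets at your internal ADFS server
Convert-MsolDomainToFederated -DomainName example.com  # convert the verified domain from managed to federated

Get-MsolFederationProperty -DomainName example.com     # check the federation settings match on both sides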

Deployment Scenarios

In a future post, which I hope won’t take me too long to make available, I will talk about some of the deployment scenarios and options for deploying both same sign-on and single sign-on. I will also cover how the release of AADSync affects existing deployments of DirSync or FIM.

Active Directory and DFS-R Auto-Recovery

I appreciate this is an old subject, but it is one that I’ve come across a couple of times recently, so I wanted to share it and highlight its importance. This will be one of a few posts I have upcoming on slightly older, but nonetheless important, topics that need to be addressed.

How Does DFS-R Affect Active Directory

In Windows Server 2008, Microsoft made a big change to Active Directory Domain Services (AD DS) by allowing us to use DFS-R as the underlying replication technology for the Active Directory SYSVOL, replacing the File Replication Service (FRS) that has been with us since the birth of Active Directory. DFS-R is a massive improvement on FRS, and you can read about the changes it brings at http://technet.microsoft.com/en-us/library/cc794837(v=WS.10).aspx. If you have upgraded your domains from Windows Server 2003 to Windows Server 2008 or Windows Server 2008 R2 and you haven’t completed the FRS to DFS-R migration, I’d really recommend you look at it. It’s easily overlooked because you have to complete this part of the migration manually, in addition to upgrading or replacing your domain controllers with Windows Server 2008 servers, and there are no prompts or reminders to do it. There is a guide available on TechNet at http://technet.microsoft.com/en-us/library/dd640019(v=WS.10).aspx to help you through the process.
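
The migration itself is driven by the dfsrmig tool on a domain controller, stepping the domain through its migration states. As a rough outline only (check the TechNet guide above before running anything, and verify each state has reached all domain controllers before moving on):

# Check the current SYSVOL migration state of the domain
dfsrmig /getglobalstate

# Step through the states one at a time: 1 = Prepared, 2 = Redirected, 3 = Eliminated
dfsrmig /setglobalstate 1
dfsrmig /getmigrationstate   # wait until all DCs report they have reached the current state

dfsrmig /setglobalstate 2
dfsrmig /getmigrationstate

dfsrmig /setglobalstate 3    # irreversible: FRS is retired for SYSVOL at this point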

Back in January 2012, Microsoft released KB2663685, which changes the default behaviour of DFS-R replication, and it affects Active Directory. Prior to the hotfix, when a DFS-R replication group member experienced a dirty shutdown, the member would perform an automatic recovery when it came back online; after the hotfix, this is no longer the case. This behaviour change results in a DFS-R replication group member halting replication after a dirty shutdown and awaiting manual intervention. Your intervention choices range from manually triggering the recovery to decommissioning the server and replacing it, all depending on the nature of the dirty shutdown. What we need to understand, however, is that a dirty shutdown can happen more often than you think, so it’s important to be aware of this.

Identifying Dirty DFS-R Shutdown Events

Dirty shutdown events are logged to the DFS Replication event log with event ID 2213, as shown below in the screenshot, and the event advises you that replication has been halted. If you have virtual domain controllers and you shut down a domain controller using the Shut Down Guest Operating System option in vSphere or in Hyper-V, this will actually trigger a dirty shutdown state. Similarly, if you have an HA cluster of hypervisors and a host failure causes the VM to restart on another host, yep, you guessed it, that’s another dirty shutdown. The lesson here, first and foremost, is to always shut down domain controllers from within the guest operating system to ensure that it is done cleanly and not forcefully via a machine agent. Event ID 2213 is quite helpful in that it actually gives us the exact command to recover the replication, so a simple copy and paste into an elevated command prompt will recover the server; no need to edit to taste. Once you’ve entered the command, another event is logged with event ID 2214 to indicate that replication has recovered, shown in the second screenshot.
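
For reference, the command embedded in the 2213 event resumes replication through the DfsrVolumeConfig WMI class. A hedged PowerShell equivalent looks roughly like this (the volume GUID is a placeholder; the event itself gives you the exact value, so copying the command it supplies is the easier route):

# Run elevated on the affected domain controller
Get-WmiObject -Namespace 'root\MicrosoftDFS' -Class DfsrVolumeConfig |
    Where-Object { $_.VolumeGuid -eq 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx' } |
    ForEach-Object { $_.ResumeReplication() }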

[Screenshots: DFS Replication event ID 2213 (dirty shutdown detected) and event ID 2214 (replication recovered)]

Changing DFS-R Auto-Recovery Behaviour

So now that we understand the behaviour change and the event IDs that let us track this issue, how can we get back to the previous behaviour so that DFS-R can automatically recover itself? Before you do this, you need to realise that there is a risk to this change: if you allow automatic recovery of DFS-R replication groups and the server that is coming back online is indeed dirty, it could have an impact on the sanctity of your Active Directory Domain Services SYSVOL directory.

Unless you have a very large organisation, or unless you are making continuous changes to your Group Policy Objects or to files which are stored in SYSVOL, this shouldn’t really be a problem, and I believe that the risk is outweighed by the advantages. If a domain controller restarts and you don’t pick up on the event ID 2213, you have a domain controller which is out of sync with the rest of the domain controllers. The consequence is that domain members and domain users will be getting out-of-date versions of Group Policy Objects if they use this domain controller, because it will still be actively servicing clients whilst the DFS-R replication group is in an unhealthy state.
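
The pre-hotfix behaviour is controlled by a registry value on each DFS-R member; setting it back to 0 re-enables automatic recovery. The sketch below is my own summary of the KB2663685 guidance rather than text from the original post, so check the KB article for your environment before applying it:

# Re-enable DFS-R automatic recovery after a dirty shutdown (0 = auto-recover, 1 = halt and wait)
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\DFSR\Parameters' `
    -Name 'StopReplicationOnAutoRecovery' -Value 0 -Type DWord

# A restart of the DFS Replication service may be needed for the change to take effect
Restart-Service DFSR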

Effects Beyond Active Directory

DFS-R is a technology originally designed for replicating file server data. This change to DFS-R Auto-Recovery impacts not only Active Directory, the scope of this post, but also file services. If you are using DFS-R to replicate your file servers then you may want to consider this change for those servers too. Whilst having an out-of-date SYSVOL can be an inconvenience, having an out-of-date file server can be a major problem, as users will be working with out-of-date copies of documents, or may not even be able to find a document if it is new and hasn’t been replicated to their target server.

My take on this though would be to carefully consider the change for a file server. Whilst having a corrupt Group Policy can fairly easily be fixed or recovered from a GPO backup or re-created if the policy wasn’t too complex, asking a user to re-create their work because you allowed a corrupt copy of it to be brought into the environment might not go down quite so well.

A Swathe of Microsoft Azure Updates

I’ve been a bit lazy over the last couple of weeks when it comes to blogging, a) because I’ve been on the road quite a bit with work and haven’t fancied sitting in front of my PC when I get home in the evening, and b) because I’ve been too hooked on watching Ray Donovan on TV to think about picking up the laptop.

The problem with not blogging for a while is that I have a lot of pent up desire to post things that I’ve been thinking about and doing over the last couple of weeks, not enough time to do it, nor the will power to type it all out.

As we all know, Azure is fairly close to my heart these days, and there’s been a lot of activity in Azure across a whole host of offerings.

The biggest changes are covered in full in the blog post by Scott Guthrie over at http://weblogs.asp.net/scottgu/azure-sql-databases-api-management-media-services-websites-role-based-access-control-and-more.

Azure SQL Service Tiers

For me, with my ongoing obsession with running WordPress on Azure, the biggest change here is the General Availability of the Azure SQL Database service tiers. These are the tiers which have been in preview since early this year and are due to replace the legacy tiers next year. The good news here is that Microsoft appear to have made a change during the course of the year which means you don’t need to actually migrate your data; you can simply switch between the tiers, so there’s no excuse now.

Azure Websites

Another big change is to Azure Websites. Azure Websites have previously not been able to integrate with a Virtual Network to allow you to easily consume on-premise resources as part of a website. You could get around this to an extent using a BizTalk Hybrid Connection; however, the setup of this required agents to be deployed across the servers you wanted to connect to, which meant extra configuration and complexity. We can now reach on-premise resources via our Virtual Network, whether that’s a SQL Server, a back-end application server or whatever else your website needs.

As part of the website changes, there is a new gallery template available for Websites named Scalable WordPress. This is a WordPress deployment on Azure Websites designed for Azure, which includes pre-configuration to use Azure BLOB Storage and easy configuration for Azure CDN. This new template potentially consigns all my work to hone WordPress for Azure to the waste heap. As a WordPress user and fan, I’m going to be deploying one of these sites in the next few days (maybe longer) to see how Microsoft have built the site template. My money is on them either having used plugins to achieve it in the same way I do, or having customized the code base to make it work. Either way, I’ll be interested to see.

Azure RBAC

Finally, at last, the feature that we’ve all been wanting, needing and waiting for. No longer is a subscription the boundary for security and access control in Azure: with the release of Role Based Access Control (RBAC), we can now control access to resources within our Azure subscriptions. I’m really looking forward to having a poke around with this feature as I see it being one of the biggest features ever for Azure.

Azure Active Directory (AAD) Sync

In a separate article over at http://blogs.technet.com/b/ad/archive/2014/04/21/new-sync-capabilities-in-preview-password-write-back-new-aad-sync-and-multi-forest-support.aspx it was announced that the latest version of the AAD Sync tool has come out of Preview and is now in General Availability.

This new version supports Self-Service Password Reset write-back to Active Directory Domain Services (AD DS) with DirSync and Multi-Forest sync for complex domain and Exchange Server topologies.

Password Write-Back for organisations using AAD could be a really good thing; just bear in mind, before you get too excited about the reduction in service desk calls you can achieve through self-service password reset, that you need to meet the prerequisites for the write-back agent (which are pretty simple) and you also need to be paying for Azure Active Directory Premium.

All in all, this has been a great month for Azure and I’m looking forward to trying to get my teeth into some of these new features.

Deploying Windows Server 2012 Primary Computer Setting

For companies (or homes) using roaming profiles and folder redirection, Microsoft gave you a great new feature in Windows Server 2012 called Primary Computer. This feature hasn’t been talked about that much, although it really should have been. The Primary Computer feature allows you to define the primary computer for a user on the user object in Active Directory. Once applied to a user account, it prevents the distribution of their roaming profile to non-primary devices and, for folder redirection, disables the ability to sync the folders with Offline Files on non-primary devices.

So What is the Benefit

This is ideal for several reasons. Firstly, it helps to reduce profile corruption for roaming profile users when roaming between machines which may be running different versions of Windows or different architectures. Also for roaming profile users, it greatly improves logon and logoff times on non-primary devices. If a user is logging on to a kiosk computer, for example, they don’t need their roaming profile; they probably just want to access a service or application quickly, so why wait for it? For users of folder redirection, this means that the user is able to access their files when the computer is on the network and can reach the file share which hosts those redirected folders, but the files are not cached using Offline Files. For the business, this is a great security benefit as it means that somebody logging on to a temporary machine isn’t going to be caching all of those files, files which they could potentially leave on the train or in an aeroplane overhead locker. For laptops, which typically have small hard disk capacities, this is useful for both roaming profile and folder redirection scenarios as it means that you aren’t pulling down potentially gigabytes of data to the local machine, clogging up the disk.

Implementing Primary Devices Using Active Directory Administration Center

First, launch the Active Directory Administrative Center and navigate your OU structure to find the computer object for the computer that you want to make primary for a given user, or if you already know the machine name, use the search feature to locate it.

Primary Computer Finding Distinguished Name

From the computer account object, scroll down to the bottom of the view and select the Attribute Editor tab. Scroll through the list of attributes to find the distinguishedName attribute and select the View button to show the full DN.

Primary Computer Copy Distinguished Name

On the String Attribute Editor, right click the pre-highlighted text and select the Copy option from the context menu. Cancel out of the Attribute Editor and cancel out of the computer object view.

With the DN of the computer now in the clipboard, find the user that you want to make this the primary computer for either by searching or again, navigating your OU structure.

Primary Computer Set User msDS-PrimaryComputer

On the user account, do as we did with the computer account a moment ago: scroll down and select the Attribute Editor tab. Scroll through the list of attributes until you locate the msDS-PrimaryComputer attribute, then click the Edit button. Right-click in the Value to Add box and select Paste from the context menu to paste in the DN of the computer, then select the Add button.

Click OK to close the Multi-Valued String Editor dialog then click OK to exit out of the user account properties. Your work here is done.

Implementing Primary Devices Using PowerShell

Out of the box, there is actually no neat way of implementing Primary Devices using PowerShell; to do it, we have to plug a few Cmdlets together. Firstly, get the computer object and store it in a variable (where Computer1 is the name of the computer):

$Computer = Get-ADComputer Computer1

Next, map the computer that we just stored in the $Computer variable to the user (where User1 is the name of the user):

Set-ADUser User1 -Add @{'msDS-PrimaryComputer' = "$Computer"}

With those two Cmdlets out of the way, the partnership between the user and the computer should now be done, but we can verify this with the following Cmdlet:

Get-ADUser User1 -Properties msDS-PrimaryComputer

Configuring Folder Redirection and Roaming Profiles

Now that we’ve setup Primary Computer attributes for some users, it would probably be a good idea if our Group Policy settings for Roaming Profile and Folder Redirection actually honoured these settings and only transferred out the data to the users’ primary computers. The setting for Folder Redirection is available as both a User Setting and a Computer Setting in Group Policy whereas the Roaming Profile setting is only available as a Computer Setting. Because of the fact you can’t apply both of these policy settings from a single policy if you decide to use user targeting, my advice is to apply this as a computer policy. It makes good sense to keep these two settings together as it means you can see that you are applying the Primary Computer setting to both roaming profiles and folder redirection in one view and it means you can give your Group Policy Object a meaningful name like Primary Computer Roaming Settings or the like.

From the Group Policy Management Console, navigate to Computer Configuration > Administrative Templates > System. Under the System node, you will find the Folder Redirection and User Profiles nodes.

Inside the Folder Redirection node, enable the Redirect folders on primary computers only policy setting. Inside the User Profiles node, enable the Download roaming profiles on primary computers only setting.

Active Directory and the Case of the Failed BitLocker Recovery Key Archive

This is an issue I came across this evening at home (yes, just to reiterate, home), however the issue applies equally to my workplace as we encounter the same issue there.

One of the laptops in my house incorporates a TPM module which I take advantage of to BitLocker-encrypt the hard disk using the TPM and a PIN. This gives me peace of mind as it’s the laptop used by my wife who, although she doesn’t currently, will likely start to take her device out on the road when studying at university.

Historically, I have used the Save to File method of storing the recovery key, storing it both on our home server and in my SkyDrive account for protection, but with our new Windows Server 2012 Essentials environment, I wanted to take advantage of Active Directory and configure the clients to automatically archive the keys there.

The key to beginning this process is to download an .exe file from Microsoft (http://www.microsoft.com/en-us/download/details.aspx?id=13432). I’m not going to explain here how to extend the AD Schema or modify the domain ACL for this all to work as that is all explained in the Microsoft document.

Following the instructions, I created a GPO which applied both the Trusted Platform Module Services Computer Configuration setting Turn on TPM Backup to Active Directory Domain Services and the BitLocker Drive Encryption setting Store BitLocker Recovery Information in Active Directory Domain Services.

After allowing the machine to pick up the GPO, and a restart to be sure, I enabled BitLocker and realised after checking in AD that nothing was being backed up. Strange, I thought, as this matched a problem in the office at work; however, we had attributed that problem to a potential issue with our AD security ACEs, whereas at home this was a brand new Windows Server 2012 domain with previously untouched ACEs straight out of the OOBE.

After scratching my head a little and a bit more poking around in Group Policy, I clocked it. The settings defined in the documentation are for Windows Vista. Windows 7 and Windows 8 clients rely on a different set of Group Policy Computer Configuration settings.

These new settings give you far more granular control of BitLocker than the Windows Vista settings did, so much so, that Microsoft elected that the Windows Vista settings would simply not apply to Windows 7 or 8 and that the new settings needed to be used.

You can find the new settings in Computer Configuration > Administrative Templates > Windows Components > BitLocker Drive Encryption. The settings in the root of this node are the existing Vista settings. The new Windows 7 and Windows 8 settings live in the three child folders: Fixed Data Drives, Operating System Drives and Removable Data Drives.

Each area gives you specific, granular control over how BitLocker affects these volumes, including whether to store the key in AD DS, whether to allow a user to configure a PIN or just to use the TPM and, probably the best option second to enabling AD DS archival in my opinion, whether to allow the user to choose, or to mandate, that the entire drive or only the used space is encrypted. The Operating System Drives section gives you the most options and will likely be the one people want to configure most, as it ultimately determines the behaviour when booting your computer.
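
One gotcha worth noting: the GPO only archives recovery information generated after it is in place, so for a volume that was already encrypted you may need to push the existing recovery password into AD DS yourself. A hedged example using manage-bde (the drive letter and protector ID below are placeholders, not values from this post) looks like this:

# List the key protectors on the volume and note the Numerical Password protector's ID
manage-bde -protectors -get C:

# Back up that recovery password to Active Directory Domain Services
manage-bde -protectors -adbackup C: -id "{DFB478E6-8B3F-4DCA-9576-C1905B49C71E}"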

I’m sure you’ll agree that there are a lot of new settings here compared to Vista and that this gives you much greater flexibility and control, but with great power comes great responsibility. Make sure you read the effects and impact of each setting carefully, test your configuration and, if possible, back up any data on the machines you are testing BitLocker GPOs against, in case the key isn’t archived to AD DS and you end up in a situation where you need, but don’t have, that recovery key.

Building Active Directory Based Outlook Signatures

One thing that many companies strive for is a consistent brand identity. There are many reasons for wanting this, not least to present a professional, unified front to your customers. With email being one of the most prevalent forms of communication in industry today, it is one of the best places to enforce that brand identity.

Active Directory Domain Services, being the centralised gatekeeper of corporate information in a Microsoft environment and the service which feeds Exchange, SharePoint, Lync and many other services with user identity data, is the ideal place to get the information needed to generate these signatures.

The key to making this work, however, is dynamic automation. Any company can have their Marketing or HR department send a mail shot to the entire company asking users to update their own signatures in Outlook, but there is always room for ‘creativity’ in this scenario: users making small or subtle changes to the intended design, affecting the corporate image.

Luckily, Outlook, or more specifically Word, has a good Visual Basic for Applications (VBA) interface for programmatically generating documents, and Active Directory Domain Services is one of the most easily and commonly accessed systems via VBScript.

The following script, broken down into chunks and explained so that you can adapt it for your own needs, does exactly this.

Option Explicit
On Error Resume Next

Dim objSysInfo, strUser, objUser, strName, strJobTitle, strDepartment, _
    strCompany, strExtension, strPhoneLocal, strPhoneIntl, strMobileLocal, _
    strMobileIntl, strEmail, strCountry, strWebAddress, strPhonePrefix, _
    objWord, objDoc, objSelection, objEmailOptions, objSignatureEntries, _
    objSignatureObject

The opening section quite simply tells the Windows Script Host to only accept variables which are defined (Option Explicit). Many people omit this option from scripts for simplicity of coding, however I think that it’s lazy. Defining your variables with a Dim statement means you know that no rogue variables exist and it helps to prevent typos down the line.

The next line (On Error Resume Next) tells the Script Host to continue running the script even if an error occurs. This is needed to prevent the script from generating popup alerts on client computers where the script is running, potentially confusing users. So long as you thoroughly test the script before deploying it, you can be safe in the knowledge that errors won’t happen, but better safe than sorry.

Set objSysInfo = CreateObject("ADSystemInfo")

strUser = objSysInfo.UserName
Set objUser = GetObject("LDAP://" & strUser)

Here the connection to Active Directory Domain Services is made. The connection is made in the context of the logged-on user, and the user object is then placed into a variable.

If Err.Number <> 0 Then
    WScript.Quit
End If

This section is vitally important in environments with laptop users. If a connection to the domain is not available and this section isn’t included, then the script will continue to run and the user will end up with a very nasty looking signature. If a domain connection cannot be established at this point then the script will exit before anything is modified in the signature, so any existing signature will continue to take effect.

strName = objUser.fullName
strJobTitle = objUser.title
strDepartment = objUser.department
strCompany = objUser.company
strExtension = objUser.telephoneNumber
strPhoneLocal = objUser.otherTelephone
strMobileLocal = objUser.mobile
strEmail = objUser.mail
strCountry = objUser.co

This section maps the user object attributes to the script variables. Depending on how you use the various attributes in Active Directory, you may need to tweak this, for example if you want to pull more information such as a building or office address. The format for each attribute is objUser.attributeName. Using a tool such as ADSI Edit will allow you to view all of the attributes in the schema for the user object and their LDAP names.

Select Case strCountry
Case "United Kingdom"
    strWebAddress = "http://www.testcorp.co.uk"
    strPhonePrefix = "+44 "
Case "Ireland"
    strWebAddress = "http://www.testcorp.ie"
    strPhonePrefix = "+353 "
Case Else
    strWebAddress = "http://www.testcorp.co.uk"
    strPhonePrefix = ""
End Select

For some people, this section might not be needed, so you could instead simply define the strWebAddress and strPhonePrefix variables directly. My test lab environment emulates a multi-national company, and as such I want each user’s signature to reflect their region. The Case statements evaluate the value of the Country attribute in Active Directory and, based on it, set the country dialling code and the regionalised web address. Make sure you define a Case Else statement to catch any users who don’t have a Country defined.

If strPhoneLocal = "" Then
Else
    If Left(strPhoneLocal, 1) = "+" Then
        strPhoneIntl = strPhoneLocal
    Else
        strPhoneIntl = strPhonePrefix + Right(strPhoneLocal, Len(strPhoneLocal)-1)
    End If
End If

If strMobileLocal = "" Then
Else
    If Left(strMobileLocal, 1) = "+" Then
        strMobileIntl = strMobileLocal
    Else
        strMobileIntl = strPhonePrefix + Right(strMobileLocal, Len(strMobileLocal)-1)
    End If
End If

Here, the phone numbers retrieved from Active Directory are evaluated and if needed converted to international dialling format. The statements take the first character from the direct dial and mobile phone numbers and if it begins with a plus symbol then no conversion is done and the number is taken literally. If the first character is not a plus symbol, then the first number is removed and replaced with the country dialling code determined in the previous code block.

Set objWord = CreateObject("Word.Application")

Set objDoc = objWord.Documents.Add()
Set objSelection = objWord.Selection

Set objEmailOptions = objWord.EmailOptions
Set objSignatureObject = objEmailOptions.EmailSignature

Set objSignatureEntries = objSignatureObject.EmailSignatureEntries

Const wdParagraph = 4
Const wdExtend = 1
Const wdCollapseEnd = 0

objSelection.Font.Color = RGB(0,133,200)
objSelection.Font.Bold = True
objSelection.TypeText strName
objSelection.TypeText Chr(11)

objSelection.Font.Color = RGB(128,128,128)
objSelection.Font.Size = 10
objSelection.Font.Bold = False
objSelection.TypeText strJobTitle & ", " & strDepartment
objSelection.TypeText Chr(11)
objSelection.TypeText strCompany
objSelection.TypeParagraph()

objSelection.TypeText "Internal: " & strExtension

If strPhoneIntl = "" Then
Else
    objSelection.TypeText " | " & "External: " & strPhoneIntl
End If

If strMobileIntl = "" Then
Else
    objSelection.TypeText " | " & "Mobile: " & strMobileIntl
End If

objSelection.TypeText Chr(11)

objSelection.TypeText "Email: "
objDoc.Hyperlinks.Add objSelection.Range, "mailto:" & strEmail,,,strEmail
objSelection.TypeText " | " & "Web: "
objDoc.Hyperlinks.Add objSelection.Range, strWebAddress,,,strWebAddress

Here is the visual bit. The signature is built using a Word application object, and the script runs through the creation of the signature line by line. The phone number section is dynamic: if one or more of the phone number fields were empty when the user information was retrieved from Active Directory, then the label for that number type and the number itself are omitted from the block.

Colours are all defined using RGB values. If you want to change these for your own use, simply use Word to find the colour you need, then select the Custom tab to view the RGB codes for it.

objSelection.StartOf wdParagraph, wdExtend
objSelection.Font.Color = RGB(128,128,128)
objSelection.Font.Size = 10
objSelection.Collapse wdCollapseEnd

Set objSelection = objDoc.Range()

The final, and perhaps most complicated, thing going on here is the styling of the hyperlinks just generated. By default, the hyperlinks adopt the standard hyperlink style of blue, size 11 text with an underline. Changes to this section should be heavily tested, because at this point Word begins moving the caret through the document to select the hyperlinks which have been created and alter their style. Incorrectly placing the caret can result in items being deleted or strangely laid out in the finished signature.

objSignatureEntries.Add "Test Corp Default", objSelection
objSignatureObject.NewMessageSignature = "Test Corp Default"
objSignatureObject.ReplyMessageSignature = "Test Corp Default"

objDoc.Saved = True
objWord.Quit

Last but not least, everything that has been done so far is saved into the document and configured in the default Outlook profile as the signature to be used for new messages and also reply messages.

If you wanted no signature to be added to replies then you could change the following line:

objSignatureObject.ReplyMessageSignature = ""

It would actually be possible to define a different signature for the reply messages if you so wished. To do this, you would need to save the new message signature and close the current Word object, then open a new Word object, define the signature and then save it to the reply message signature.