In March 2017, I published an article, Restricting Azure Resource Deployment by Region, which provided some insight into Azure Resource Policies. In that post, I provided a link to my GitHub repository, azure-resource-policy-templates. Today, I am pleased to announce that I have updated the repository with the following new templates:
- Force Mandatory Azure Resource Manager Tags to Resources
- Force Mandatory Storage Service Encryption (SSE) on Storage Accounts
- Force Azure Virtual Machine Naming Convention
- Restrict Azure Virtual Machine Sizes Available
- Restrict Storage Account Types Available
Unlike the previous template, which was designed to be applied individually to restrict the region in which resources could be deployed, these templates can be layered to provide a complete resource management strategy. In this post, I will show how you can additively apply the Restrict Storage Account Types and the Force Mandatory SSE policies to Storage Accounts, and how you can apply the Restrict Azure Virtual Machine Sizes and Force Azure Virtual Machine Naming Convention policies to VMs.
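Layering works because each policy assignment is evaluated independently at the same scope, so a resource must satisfy every assigned rule to deploy. As a trimmed illustration, the sketch below shows roughly what the Restrict Storage Account Types rule looks like in Azure Policy JSON; the field alias and SKU list here are illustrative only, and the full, authoritative definitions live in the GitHub repository:

```json
{
  "if": {
    "allOf": [
      { "field": "type", "equals": "Microsoft.Storage/storageAccounts" },
      {
        "not": {
          "field": "Microsoft.Storage/storageAccounts/sku.name",
          "in": [ "Standard_LRS", "Standard_GRS" ]
        }
      }
    ]
  },
  "then": { "effect": "deny" }
}
```

Assign this alongside the Force Mandatory SSE policy at the same subscription or resource group and a new storage account will be denied unless it uses an allowed SKU and has encryption enabled.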
I won’t rehash how to import the policies in this post, as that was covered in my previous article. I will jump straight into showing you how they work in the real world.
In what feels like a long time ago, Microsoft released Office 2016 which includes the Outlook client. In the 2016 release of Outlook, Microsoft introduced a new feature called Mentions.
For anyone who uses Twitter, Facebook or other social media platforms, the notion of a mention will not be new. For those who are not familiar with these platforms, a mention is the act of name-dropping somebody within a message. The objective of a mention is to draw somebody's attention to something. An example could be introducing a third party into an email exchange between two other parties, perhaps to ask the third party to respond to a specific question.
One of the reasons I really like mentions is the widespread misuse of the To and CC fields in email today. In an idyllic world, messages sent to you (To) require your action or consultation, while messages on which you are copied (CC) are sent for informational purposes only. In theory, you should be able to delete every message you have ever been copied on and nothing would be lost. The CC field takes its name from traditional carbon copy paper, where writing on one sheet would press through multiple layers; these were very useful for sales order paperwork or contracts where multiple parties needed a copy of one document.
Office 365 Groups is a feature of Office 365 designed to provide a modern alternative to Distribution Groups in Microsoft Exchange. Distribution Groups still exist, but Office 365 Groups offer a lot more. Features lit up by Office 365 Groups include group-based calendars, task lists, a team mailbox and more. One could argue they behave more like a shared mailbox than a traditional Distribution Group.
When Office 365 Groups were first introduced, an email sent to the group would be delivered both to the group mailbox and to the group members. This duality was welcome for existing distribution list users who wanted to maintain the legacy behaviour, but confusing for modernists who wanted to keep bulk email out of their inbox and have a focused area for each group's communication. Back in April 2017, Microsoft changed the behaviour of Office 365 Groups to disable the legacy delivery and, in doing so, introduced an option to allow administrators to control the behaviour.
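That administrative control is exposed through Exchange Online PowerShell. A minimal sketch, assuming an existing Exchange Online session and a group called "Project Falcon" (a hypothetical name for illustration):

```powershell
# Stop newly added members being auto-subscribed, so group mail stays in
# the group mailbox rather than also landing in each member's inbox.
Set-UnifiedGroup -Identity "Project Falcon" -AutoSubscribeNewMembers:$false

# Or restore the legacy distribution-list feel for a given group by
# auto-subscribing every new member to group conversations.
Set-UnifiedGroup -Identity "Project Falcon" -AutoSubscribeNewMembers:$true
```

Note this setting only affects members added after the change; existing members keep their current subscription state.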
Let me start by painting a picture. You are using Azure. You have an App Service configured with a Web App that is hosting a website; this website, for example. The website could be single-instanced, or it could be multi-instanced using Azure Load Balancer, Azure Traffic Manager, Azure Application Gateway, or any number of other load balancing and traffic distribution technologies. One day, your web application fails to respond and you get a dreaded HTTP 500 or another error code. As a dedicated Azure consumer, you use Azure Application Insights to monitor your website. Application Insights not only gives you user metrics akin to Google Analytics but also gives you performance and availability metrics.
The picture I painted just then explains my scenario. I use Azure App Service with an Azure Web App to host this blog. I use Azure Application Insights to provide me with all of the metrics and data I need to understand the site. The availability monitoring feature is quite excellent. It allows me to monitor the website availability from up to five locations around the world with performance data for each region so I can see how the site performs for each geography. If the site goes down for any reason, I get an email notification to warn me.
In a post I will be publishing shortly after this one, I describe an Azure Automation runbook I wrote to automatically restart an Azure Web App when Azure Application Insights reports the site as being offline. The solution is not foolproof, but it offers a good first line of defence against issues that bring the site down. I originally wrote the runbook some time ago; however, with pressure elsewhere, it has been a while since I was able to revisit and complete it.
Whilst testing the workflow this morning, I found that it was generating an error at the Login-AzureRmAccount stage; the stage where the workflow should be logging into Azure using a service principal to obtain permissions on the relevant Resource Groups. A screenshot of the error log from the automation job is shown below.
The error puzzled me, as I knew this had previously worked and I had made no changes to the Azure credential or to the runbook since. A quick Google of the error message brought me to the answer at https://social.msdn.microsoft.com/Forums/en-US/c38e01df-dac8-4095-9658-7b1d981fe8e6/azure-automation-error-run-loginazurermaccount-to-login?forum=azureautomation. The problem lay in the fact that my Azure Automation account was referencing old versions of the Azure PowerShell modules, and the outdated module caused the Login-AzureRmAccount command to fail.
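For context, the login stage of the runbook follows the standard Automation Run As pattern; a sketch, assuming the default "AzureRunAsConnection" connection asset exists in the Automation account:

```powershell
# Retrieve the Run As connection asset stored in the Automation account.
$conn = Get-AutomationConnection -Name "AzureRunAsConnection"

# Log in with the service principal; this is the call that failed
# under the outdated AzureRM modules.
Login-AzureRmAccount -ServicePrincipal `
    -TenantId $conn.TenantId `
    -ApplicationId $conn.ApplicationId `
    -CertificateThumbprint $conn.CertificateThumbprint
```

Nothing here changed between the working and failing runs, which is what pointed me at the module versions rather than the code.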
Updating the Azure PowerShell Module in Azure Automation is painless and can be performed from the Modules blade in the Azure Automation account.
After a short wait, the modules are updated to the latest version. Re-running my workflow in Azure Automation completed successfully, confirming that the issue was an out-of-date module version.
An interesting point is that there is currently a banner message in Azure Automation warning that Azure PowerShell modules will be automatically updated after 17th July 2017. The screenshot below illustrates the message in Azure Automation. I think this is a very good move by Microsoft. As an automation author, I should not have my workflows and runbooks beholden to a particular module version; if a new module is required for my code to continue to function, do the update automatically. If features are being deprecated in the Azure PowerShell modules, I hope that Microsoft will notify us in advance, giving us all time to revise our code around any deprecated commands.
Office 365 co-existence with volume licensed products has been a bone of contention for many Office 365 users. Traditionally, in an enterprise, we installed Office 2016 ProPlus using a Windows Installer package. The license for this would have come from your Enterprise Agreement (EA) and would typically be activated using a KMS host. When you move to Office 365, this model changes, and the changes can have a major impact on the Project and Visio applications for some customers.
To find out what the changes are and how we can work with them, read on below the fold.
An email landed in my inbox this morning from Microsoft Azure regarding HTTP Public Key Pinning, a subject I have posted about at some length recently. If you don’t know what HPKP is or how it is used, refer back to some of my previous posts on the subject.
A normal HPKP implementation would see you configure your website to pin your own public certificate. It would be entirely possible to pin the Microsoft Azure Websites certificates to your site using HPKP, although I would advise against it because you have no ownership or control over those certificates. The email from Microsoft this morning was an advisory that Microsoft is changing the certificate it uses.
If you are using HPKP and think there is a chance you may have pinned the Microsoft certificates, I would strongly advise you to read the Microsoft article at https://blogs.technet.microsoft.com/kv/2017/04/20/azure-tls-certificates-changes/?WT.mc_id=azurebg_email_Trans_33716_1407_SSL_Intermediate_Cert_Change for more information.
If you are unsure whether you are using HPKP, or which public keys you have pinned, I would suggest you use the Qualys SSL Test site, as it will report whether HPKP is enabled and which certificates are pinned.
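As a reminder of what is at stake, an HPKP policy is just a response header; a sketch of its shape is below (sent as a single header line, wrapped here for readability, with placeholder values rather than real hashes):

```
Public-Key-Pins: pin-sha256="primary+key+hash+placeholder=";
                 pin-sha256="backup+key+hash+placeholder=";
                 max-age=5184000; includeSubDomains
```

RFC 7469 requires at least two pins, including a backup not in the current chain. The danger in this advisory scenario is that if you pinned a Microsoft intermediate and Microsoft rotates it, browsers that cached your pins will hard-fail connections to your site until max-age expires.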
Deflate and GZip compression have been with us on the web for many years. They do a decent job, but as times move on, so do compression algorithms. I have talked before about using services like TinyPNG to squeeze the spare bytes out of your images and reduce page load times, but that obviously applies only to images.
Brotli is a Google project to build a newer, more modern compression algorithm for the web. According to Google, using Brotli over GZip not only compresses content further, reducing page size, but also reduces CPU usage in the decompression process. With the ever-expanding usage of mobile devices, both of these are great things to have.
If you are interested in reducing your page size to improve load times and reduce your site's outbound bandwidth, then read on to learn how. I will cover the requirements and fallback compatibility, how to get Brotli for Linux and Windows and, the main point, how to enable it for an Azure Web App.
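On the fallback point: content negotiation is driven by the client's Accept-Encoding request header, and a server offering Brotli should only send it when the client advertises `br`, falling back to GZip or Deflate otherwise. A minimal sketch of that selection logic in PowerShell (a hypothetical helper ignoring q-values, not part of any Azure feature):

```powershell
function Get-PreferredEncoding {
    param([string]$AcceptEncoding)

    # Tokens the client advertises, e.g. "gzip, deflate, br".
    $offered = $AcceptEncoding -split ',' |
        ForEach-Object { ($_ -split ';')[0].Trim().ToLower() }

    # Prefer Brotli, then fall back to the older algorithms.
    foreach ($encoding in 'br', 'gzip', 'deflate') {
        if ($offered -contains $encoding) { return $encoding }
    }
    return 'identity'
}
```

This is why enabling Brotli is safe for older browsers: clients that never say `br` simply keep receiving GZip as before.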
Today is the day that ClearDB users rejoice. Today is the day that viable platform-as-a-service offerings for both MySQL and PostgreSQL exist in Microsoft Azure. Announced last night, Microsoft has now launched its own platform-as-a-service offerings for the two database engines.
For years, ClearDB has offered a PaaS solution for MySQL. I had the misfortune of trying it out first-hand recently on a web project, and I can tell you that the performance was shocking. So bad was the performance that we actually deployed a Linux VM in Azure to run the MySQL service as IaaS and take the management hit of IaaS vs. PaaS. Even the support was terrible, blaming the performance on Azure itself when there were no issues with the Azure platform globally at the time.
The announcement puts these new services in preview, which means the services aren't ready for your production workloads, nor are all of the features available right now. For example, I deployed an Azure Database for MySQL server last night to try it out, and the Basic pricing tier is the only tier available at present. That said, the ability to force all connections to be secure and to define firewall rules for access is important, and it is good to see both there from day one.
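As a quick example of that day-one manageability, a firewall rule can be created from the Azure CLI; a sketch, assuming a hypothetical resource group `myrg` and server `myserver` (check `az mysql server firewall-rule create --help` for the exact flag names in your CLI version):

```
az mysql server firewall-rule create \
    --resource-group myrg \
    --server-name myserver \
    --name AllowHomeIP \
    --start-ip-address 203.0.113.10 \
    --end-ip-address 203.0.113.10
```

By default no client IPs are allowed through, so a rule like this is needed before you can connect at all.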
All in all, it looks like a good first release. I have been using the In App MySQL database for Azure Web Apps to run this site's MySQL database for some time now (since preview, in fact), and I have been debating whether to step back to IaaS for MySQL because In App MySQL limits my ability to use features like Azure Load Balancer or Azure Traffic Manager with multiple site instances. This new service is definitely something I can see myself using for real in the near term.
You can check out the documentation, pricing and scaling details for yourself at https://docs.microsoft.com/en-gb/azure/mysql/concepts-servers.
This is going to be a really quick post, but one I thought might be worth sharing. Imagine that you are working in the Azure Portal and you are trying to update a Virtual Machine configuration to detach an existing data disk on the VM. You’ve done everything right following the steps at https://docs.microsoft.com/en-us/azure/virtual-machines/windows/detach-disk by stopping the VM and waiting for it to fully stop.
For normal users this wouldn't be an issue; however, if, like me, you care for your eyes and have switched to the dark theme in the Azure Portal, you are in for a problem. When you select Edit on the disk configuration of the VM, you will notice that the Detach button the Microsoft article refers to is missing, as shown below.
The Detach button should be visible just to the right of the Host Caching drop-down menu but as you can see, it is not.
It turns out this is a bug in the Azure Portal when using the dark theme, and I have reported it already. If you switch to one of the other theme colours, the button magically appears.
The problem is that the buttons are meant to change when you select the dark theme. If you look at the Save and Discard buttons at the top of the screenshot, you can see that in the dark theme these two buttons are white to contrast with the dark background, while in the white theme they are black to contrast with the background. The Detach button, at the moment, doesn't appear to be switching properly between white and black to suit the background colour in use.