Error Configuring the Service Manager Exchange Connector

Here’s a quick one to answer a problem that I had recently and one that you may bump into if you are trying to set up System Center Service Manager 2012 R2 with the Exchange Connector 3.0.

From the installation instructions for the Exchange Connector 3.0, you must copy the Microsoft.SystemCenter.ExchangeConnector.Resources.dll and the Microsoft.SystemCenter.ExchangeConnector.dll files from the extracted file location into your Service Manager installation location.

Once you have copied these two files, you import the ServiceManager.ExchangeConnector.mpb Management Pack Bundle into Service Manager. Once this is done, you need to copy the Microsoft.Exchange.WebServices.dll file into the Service Manager installation directory. The instructions provided with the management pack aren’t very clear on this, but you can obtain this file from an installation of the Microsoft Exchange Web Services Managed API.

Once you have done all of this, you can finally create your Exchange Connector. When testing the connection to Exchange to create the connector, you may receive the following error message:

SCSM Exchange Connector Error

“The connection to the server was unsuccessful. Please check the server name and/or credentials entered.
Additional Information: The source was not found, but some or all event logs could not be searched. Inaccessible logs: Security.”

If you receive this error, you need to read the Exchange Connector 3.0 documentation a little more carefully before heading to the Microsoft Download Center to download the Microsoft Exchange Web Services Managed API. You must be using version 1.2 of the API .dll file for Service Manager to work correctly. If you downloaded and used the later 2.0 version of the API, you will receive this error. This applies to all versions of Exchange, including Office 365 and Exchange Online.

Simply install the correct version of the API and replace the Microsoft.Exchange.WebServices.dll file in your Service Manager installation directory. You will need to have all instances of the Service Manager console closed in order to replace this file, as an open console will put a lock on the file.

If you are unsure which version of the file you have, look in your Service Manager installation directory for the Microsoft.Exchange.WebServices.dll file. The API version 1.2 file has a file version of 14.3.32.0 and the API version 2.0 file has a file version of 15.0.516.14.
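As a quick sanity check, the version mapping can be expressed in a few lines. This is just an illustrative sketch (the helper names are mine, not part of any Microsoft tooling), using the two file versions quoted above:

```python
# Known file versions of Microsoft.Exchange.WebServices.dll and the
# EWS Managed API release each one ships with (the versions above).
KNOWN_VERSIONS = {
    "14.3.32.0": "1.2",    # works with the Exchange Connector 3.0
    "15.0.516.14": "2.0",  # triggers the connection error above
}

def ews_api_version(file_version):
    """Return the EWS Managed API release for a given DLL file version."""
    return KNOWN_VERSIONS.get(file_version, "unknown")

def is_supported_by_connector(file_version):
    """Only the API 1.2 DLL works with the Exchange Connector 3.0."""
    return ews_api_version(file_version) == "1.2"

print(is_supported_by_connector("14.3.32.0"))   # True
print(is_supported_by_connector("15.0.516.14")) # False
```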

Project Home Lab: Open Server Surgery

So with my recent bout of activity on the home lab project front, evident from my previous posts Project Home Lab: Server Build and Project Home Lab: Servers Built, I’ve forged ahead and got some of the really challenging and blocking bits of the project done over Christmas. I alluded to what I needed to do next in the post Project Home Lab: Servers Built. All of this work paves the way for me to get the project completed in good order, hopefully all during January, at long last.

In this post I’m just going to gloss over some of the details of the home server move that I completed over the weekend. Lots of hours of thinking, note taking and planning went into this, most likely more than should have, but I don’t like failure, so I like to make sure I have all the bases covered and all the boxes ticked. Most critically, I had to arrange an outage window and downtime with the wife for this to happen.

Out with the Old

The now previous incarnation of my Windows Server 2012 R2 Essentials home server lived in a 4U rack mount chassis. As it was the only server I possessed at the time, I never bothered with rack mount rails, so problem one was that the server was just resting atop my UPS in the bottom of the rack.

Problem two, luckily something which had never bitten me previously but had long bothered me, was that the server ran on desktop parts inside a server chassis. As a result, it had no IPMI interface for out of band management, meaning that if something should go wrong with the Windows install or a warning fire in the BIOS, I couldn’t remotely access the keyboard, video and mouse (a KVM no less). It had an Intel Core i5 3740T processor with an ASUS ATX motherboard and unregistered, unbuffered memory, with a desktop power supply, albeit a high quality Corsair one. All good quality hardware but not optimal for a server build.

The biggest problem, however, was that the 4U chassis, a purchase from X-Case a couple of years ago, stood 4U tall but only had capacity for ten external disks. I had two 2.5″ SSDs for the operating system mounted internally in one of the 5.25″ bays in a dual 2.5″ drive adapter, in addition to the external drives. It all worked nicely but it wasn’t ideal, as my storage needs are growing and I only had two free slots. Although not a problem as such, the hot swap drive bays, added to the chassis with an aftermarket upgrade from X-Case, didn’t use SAS Multilane SFF-8087 cables but instead used SATA connections, which meant that from my LSI 9280-16i4e RAID Controller I had to use SAS to SATA Reverse Fanout cables, making the whole affair a bit untidy.

None of this is X-Case’s fault, let us remember. The case did its job very well, but the capabilities of the case no longer met my evolving and increasingly demanding needs.

Planning for the New

Because I like there to be order in the force, per my shopping list at Project Home Lab: Shopping List, I bought a new 3U X-Case chassis for my home server at the same time as buying up the lab components. Getting the home server set straight is priority one because the 4U chassis is a blocker to any further work: the 3U and 2U lab servers need to fit in above it. In addition to moving chassis, I’ve given it an overhaul with a new motherboard and CPU to match the hardware in the lab environment. A smaller catalogue of parts means less knowledge required to maintain the environments and gives me an easy way of upgrading or retro-fitting in the future with the single design ethos.

As anyone knows, changing the motherboard, processor and all of the underlying system components in a Windows server is potentially a nightmare in the making, so I had to plan well for this.

I had meticulously noted all of the drive configurations from the RAID Controller down to the last detail, including which drives connected to which SATA port on which controller port, and I had a full backup of the system state to perform a bare metal recovery if needed. All of our user data is backed up to Azure so that I can restore it if needed, although in honesty I didn’t expect any problems with the data drives; it was the operating system drives I was most concerned about.

In with the New

After getting approval for the service outage from the wife and shutting down the old home server, I got it all disconnected and removed from the rack. I began the painful process of unscrewing all eight of my drives from the old chassis drive caddies, plus the two internal drives, and reinstalling them into the new caddies using the 2.5″ to 3.5″ adapters from the shopping list. I think I probably spent about 45 minutes carefully screwing and unscrewing drives while noting which slot I removed each from and which slot I installed it into.

With all the drives transferred, I moved over the RAID Controller and connected up the SAS Multilane SFF-8087 cables to its connectors, with the tail ends already connected to the storage backplanes in the chassis.

Once finished, I connected up the power and the IPMI network port on the home server, which I had already configured with a static IP, as the home server is my DHCP Server so the interface wouldn’t be able to get an automatic lease address. I got connected to the IPMI interface okay, powered the server on using it and quickly flipped over to the Remote Control mode which, I have to say, works really nicely even when you consider that it’s Java based.

Up with the New

While I was building the chassis for the home server, I had already done some of the pre-work to minimize the downtime. The BIOS was already upgraded to the latest version along with the on-board SAS2008 controller and the IPMI firmware. I had also already configured all of the BIOS options for AHCI and a few other bits (I’ll give out all of the technicalities of this in another post later).

First things first: the Drive Roaming feature on the LSI controller, which I blogged about previously in Moving Drives on an LSI MegaRAID Controller, worked perfectly. All 9 of the virtual drives on the controller were detected correctly, the RAID1 Mirror for the OS drives stayed intact and I knew that the first major hurdle was behind me. A problem here would have been the most significant to timely progress.

The boot drive was picked up okay by the LSI RAID Controller BIOS and the Windows Server 2012 R2 logo appeared, at least showing me that it was starting to do something. It hung here for a couple of minutes and then the words “Getting Devices Ready” appeared. The server hung here for at least another 10 minutes, at which point I was starting to get worried. Just when I was thinking about powering it off, moving all the drives back and reverting my changes, a percentage appeared after the words “Getting Devices Ready”, starting at 35%, and it quickly soared up to 100% and the server rebooted.

After this reboot, the server booted normally into Windows. It took me about another hour after this to clean up the server. First I had to reconfigure my network adapter team to include the two on-board Gigabit Ethernet adapters on the Supermicro motherboard, as I am no longer using the Intel PCIe adapter from the old chassis. Then, using the set devmgr_show_nonpresent_devices=1 trick, I removed all of the references to, and uninstalled the drivers for, the old server hardware.
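For anyone who hasn’t used that trick before, it goes roughly like this from an elevated Command Prompt on the server (Windows-only, shown from memory, so double-check before relying on it):

```
set devmgr_show_nonpresent_devices=1
start devmgmt.msc
```

With Device Manager launched from that same prompt, enable View > Show hidden devices and the greyed-out entries for the old, no longer present hardware appear, ready to be uninstalled one by one.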

After another reboot or two to make sure everything was working properly, a thorough check of the event logs for any Active Directory, DNS or DHCP errors and a test from my 3G smartphone to make sure that my published website was running okay on the new server, I called it a success. One thing I noted of interest here was that Windows appeared not to require re-activation as I had suspected it would. A motherboard and CPU family change would be considered a major hardware update, which normally requires re-activation of the license key, but even checking now, it reports activated.

Here’s some Mr. Blurrycam shots of the old 4U chassis after I removed it and the new 3U chassis in the rack.

WP_20141230_009

WP_20141230_006

As you can see from the second picture, the bottom 3U chassis is powered up and this is the home server. In disk slots 1 and 5 I have the two Intel 520 Series SSDs which make up the operating system RAID1 Mirror and in the remaining eight populated slots are all 3TB Western Digital Red drives.

Above the home server is the other 3U chassis, which will be the Lab Storage Server once all is said and done, and at the very bottom I have the APC 1500VA UPS, which is quite happy at 20% load running the home server along with my switches, firewall and access points via PoE. I’ll post some proper pictures of the rack once everything is finished.

Behind the scenes, I had to do some cabling in the back of the rack to add a new cable for the home server IPMI interface, which I didn’t have before. The existing cables for the home server NIC Team were also a bit too tight for my liking, caused by the 3U Lab Storage Server above being quite deep and pulling on them slightly. To fix this, I’ve patched up two new cables of longer length and routed them properly in the rack. I’ve got a lot of cables to make soon for the lab (14 no less) and I will be doing some better cable management at the same time as that job. One of the nice touches on the new X-Case RM316 Pro chassis is the front indicators for the network ports, both of which light up and work with the Supermicro on-board Intel Gigabit Ethernet ports. The fanatic in me wishes they were blue LEDs for the network lights to match the power and drive lights, but that’s not really important now, is it?

More Home Server Changes

The home server has now been running for two days without so much as a hiccup or a cough. I’m keeping an eye on the event logs in Windows and the IPMI alarms and sensor readings during the bedding in period and it all looks really happy.

To say thank you to the home server for playing so nicely during its open server surgery, I’ve got three new Western Digital 5TB drives to feed it some extra storage. Two of the existing 3TB drives will be coming out to make up the bulk storage portion of the Lab Storage Server Storage Space and one drive will be an expansion, giving me a net uplift of 9TB capacity in the pool. I would be exchanging the 3TB drives in the home server for larger capacity drives one day in the future anyway, so I figured I may as well do two of them early and make good use of the old drives for the lab.
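The arithmetic behind that uplift, as a throwaway sketch (marketing terabytes, not TiB):

```python
# Three new 5TB drives go in; two 3TB drives come out for the lab.
added = 3 * 5      # 15TB of new capacity
removed = 2 * 3    # 6TB leaving for the Lab Storage Server
net_uplift_tb = added - removed
print(net_uplift_tb)  # 9
```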

I’m also exploring the option of following the TechNet documentation for transitioning from Windows Server 2012 R2 Essentials to Windows Server 2012 R2 Standard. You still get all of the Essentials features but on a mainline SKU, which means fewer potential issues with software (like System Center Endpoint Protection, for example, which won’t install on Essentials). On this point I’m waiting for some confirmation of the transition behaviour in a TechNet Forum question I raised at https://social.technet.microsoft.com/Forums/en-US/d888f37a-e9e9-4553-b24c-ebf0b845aaf1/office-365-features-after-windows-server-standard-transition?forum=winserveressentials&prof=required as the TechNet article at http://technet.microsoft.com/en-us/library/jj247582 leaves a little to be desired in terms of information.

I’m debating buying up some Lindy USB Port Blockers (http://www.lindy.co.uk/accessories-c9/security-c388/usb-rj-45-port-blockers-locks-c390/usb-port-blocker-pack-of-4-colour-code-blue-p2324) for the front access USB ports on all the servers so that it won’t be possible for anyone to insert anything in the ports without the unlocking tool to open up the port first. See if you can guess which colour I might buy?

Up Next

Next on my to do list is the re-addressing of the network, breaking out my hacksaw and cabling.

The re-addressing of the network is to make room for the new VLANs and associated addressing which I will be adding for the lab, and the new addressing schema makes things much easier for me to manage longer term. This is going to be a difficult job, much like the one I’ve just finished. I’ve got a bit of planning to finish before I can do it, so this probably won’t happen until after the new year.

The hacksaw, as drastic as that sounds, is for the 2U Hyper-V server which you may notice is not racked in the picture above. For some reason, the sliding rails for the 2U chassis are too long for my rack, and with the server installed on the rails and pushed back, it sits about an inch and a half proud of the posts, which not only means I can’t screw it down in place but also that I can’t close the rack door. I’m going to be hacking about two inches off the end of the rails so that I can get the server to sit flush in the rack. It’s only a small job but I need to measure twice and cut once, as my Dad taught me.

As I mentioned before, I’ve got some 14 cables to make and test for the lab, and this is something I can work on in parallel to the other activities, so I’m going to try to make a start on these soon so that once I have the 2U rails cut to size, I can cable up at the same time.

Project Home Lab: Servers Built

So in the last post (which I wrote in April but only posted a few minutes ago), I talked about some of the elements of the build I had done thus far. Well, this past weekend, I finished the builds bar a few small items and I’m glad to see the back of it, to be honest. Here are the pictures to show the pretty stuff first, then I’ll talk effort and problems.

Server Build in Pictures

WP_20141222_001

The image above is a top down view of the 3U Storage Server and you can see it in all its finished glory. It looks quite barren inside the case and that’s totally the look I was aiming for, maximizing the available resources to give me oodles of options to expand it in the future should I need to. The braided cables, which after much, much effort I’m not quite 100% happy with (but 95% there), really clean it all up.

WP_20141222_002

This is a close-up of the front edge of the motherboard where the on-board LSI SAS2008 ports live, which I spoke about being problematic in my previous post. After the first chassis was built, I knew what was needed and it all went in fairly painlessly, but luckily these SAS SFF-8087 multilane cables are quite flexible. The black braid on the LSI cables matches the braiding I used on the ATX cabling, which makes the consistency monster in me happy.

In the top of the image, you can see a bundle of cables zip tied together running from left to right; these are the chassis connectors for the power button and LED, NIC activity lights and so forth. These run off to the left of the shot and connect to the pin headers on the motherboard. This is one part of the build I’m really happy with because the cables fitted nicely in the gap between the chassis and the motherboard, so they are well kept.

WP_20141222_005

Nothing super exciting here, but this is the Intel PRO/1000PT Low Profile Quad Port network adapter that features in both the 2U Hyper-V Server and the 3U Storage Server with the difference being that the 2U server uses a half-height back plate and the 3U server uses a full-height back plate. No butchery required as I managed to get used versions of both with the correct plate from eBay.

You can also see here the white cables going from left to right. These are the front-access USB port connectors which plug into a pin header just behind the network adapter. I’ve installed the network adapter in the left-most-but-one PCIe slot. This keeps it as far away from the CPU as possible to avoid heat exchange between the two whilst giving a bit of room for the adapter to breathe, as its passive heat sink is on the left side.

WP_20141222_004

This last shot shows where all the effort has gone in the build for me personally and what has taken me so long to get it to completion. The original ATX looms with the case were over 70cm long and finding somewhere to hide that much cable excess in a tight chassis wasn’t going to be easy or efficient. There are three looms all told: one for the 24-pin ATX connector, one for the dual 8-pin EPS connectors and the chassis fans, and the third and final for the drive enclosures.

The reason I am only 95% happy with these is that, in hindsight, I would have considered putting half the drives on the EPS channel and the other half on the same channel as the chassis fans. What I have got does mean that the drives get an entire 12v rail to themselves, which is good in one respect. Wiring the 24-pin ATX connector was by far the hardest part, and trying to crimp 24 pins onto cables and then squeeze it all inside the paracord before heat shrinking the ends was a challenge for sure. In hindsight here, I should have found a local electrical company capable of such wiring work and paid them to do it. Even if it cost £20 or £30 per chassis, it would have been worth it for the time and effort on my part.

Outstanding Items

So the only items outstanding are some disks. I didn’t talk about disks in the shopping list as I was kind of undecided about that part, but the answers are written now and I just need to finalize some bits.

I was considering the option of using the on-board USB 3.0 port to install Windows Server 2012 R2 on the servers to give me maximum disk slots for data, but I didn’t like the fact I only had a single USB 3.0 port on-board, so there was no option to RAID the USB. A dual port SD Card controller would have been excellent here but they are only really seen on super high-end motherboards shipping today. Secondly, whilst USB boot for Hyper-V Server is supported, it appears that it’s not supported for Windows Server, and as I wanted to keep the design and configuration as production capable as possible, that meant this was out of the window too.

The final decision has led me to using a pair of Intel 520 Series 240GB SSD drives in a RAID1 Mirror for the OS in both the Storage Server and the Hyper-V Server, with all the drives connected to the on-board LSI SAS2008 controller running in IR mode (Integrated RAID), but more on this in the configuration post.

For the Hyper-V Server, these two disks are the only disks installed as no VM data will reside on the server itself. For the Storage Server, I have another four Intel 520 Series 240GB SSD drives and two 3TB Western Digital Red drives which will make up the six disk Tiered Storage Space. I have two of the SSDs installed now and the other two are going back to Intel tomorrow.

The two SSDs going back to Intel appear to be fried and will not even get detected by the system BIOS or the LSI SAS BIOS. The two Western Digital 3TB Red drives are currently in my Home Server. I have two 5TB Red drives waiting to be installed in the Home Server in exchange for the 3TB drives, which will move out of the Home Server into the Storage Server.

The log jam right now is the Home Server. It currently lives in an older generation X-Case 4U chassis and, as part of Project Home Lab, is moving house into one of the 3U chassis to match the Storage Server. I’ve got a lot of data on the Home Server, so taking backups of everything and finding the right moment to offline it all and move it is tough with a demanding wife and kids trying to access the content on it.

Up Next

In the next post, I will talk about some of the things I’ve found and done in the initial configuration of the hardware such as the BIOS and the IPMI.

 

Project Home Lab: Server Build

In case you haven’t gathered, progress on the home lab build has been frankly awful, and it’s entirely my fault for putting other things first, like sitting and watching TV. Those of you who follow me on Twitter will have seen that back in April I tweeted a picture of the build starting on the 2U Hyper-V server, and all of the components I’ve had delivered thus far have now been installed. For those of you who don’t, here’s the picture I tweeted.

RM208 Pro Build

Motherboard Installation

Installing the motherboard was a complete pain. The Supermicro X8DTH-6F motherboard has its SAS connectors on the very front edge, and although this case is designed for extended ATX motherboard installation, it is only just. To get the motherboard in, I had to pre-attach the SAS cables to the ports on the motherboard and get the board in at some interesting angles to make it fit. Once installed though, it all looks good. Fortunately for me, the case fans are at the front and the fan guard is at the rear of the fan module, otherwise I’d have had to come up with an alternative solution for the SAS cabling due to the risk of the cables meeting the fans. A top tip from me is to install the power supply after the motherboard so that you have the maximum room available to get your board in.

Aside from the issue with SAS connector placement, which is frankly a Supermicro issue, not an X-Case one, the case is brilliant. It’s really solid, sturdy and well built. There are cable guides and pathways in all the right places to route power and SAS cables to the disk backplane. The top lid is secured with a single cross head screw and a locking clip, which makes access really easy. The drive caddies slide in and out with ease and look the part too.

This is obviously not finished, as you’ll see that none of the power supply cabling is installed, the network card isn’t installed and the cabling that is there looks a bit untidy. The SAS SFF-8087 cables are a bit longer than I would have liked, but I wasn’t able to find cables shorter than 0.5m from a quality vendor like LSI or Adaptec, so I had to go with those, as I didn’t want to chance cheap eBay cables in this build melting or the like.

Power Supply Cabling

The power supply cabling supplied with the Seasonic unit is, as always, long enough to cable up any possible combination of case, which I think is wrong given that this is a rack mount power supply and the configuration would never look too dissimilar from mine right now. I mocked up the installation of the cables and there is so much spare cable to lose in the void between the disk backplane and the fans that I would lose half of my cooling due to blockages. There are also a lot of SATA and PCI-E connectors on the looms which I’m not going to be using, adding to the mess.

Seasonic Power Supply ATX Cables

The only connectors I need for this build are the motherboard 24-pin, two 8-pin EPS and four Molex to drive the case fans and the disk backplane, so there is a lot of excess for my needs. Because I don’t want to lose that much cooling efficiency, and I don’t want to have to lose all of that spare cable in the case and have it looking a mess, I’ve ordered up some heat shrink wrap, cable braiding and replacement ATX connector pins. Once all of these arrive, I’m going to be modding all of the power supply cabling to the correct length and dropping the connectors I don’t need. This is going to make the internals run cooler with the improved airflow and it’s going to look a whole load neater when finished. Yes, it’s going to add some time to the build, twofold as there are two servers, but it will be worth it in the long run, if not just for my own perfectionist requirements.

These extras have cost me about £10 per server, which is pocket change compared to the price of the rest of the build, so shying away from spending this would just be compromising. It should all be here in a couple of days and it will probably take me a couple of days to get it all right and how I want, but I’ll be sure to post a picture or two once finished. Needless to say, there is going to be more cable left on the cutting room floor than there is going to be installed inside the case. For the 3U storage server there is a slightly different requirement in that I’m going to need six Molex connectors, an additional two over the Hyper-V server due to the additional disk capacity, but that’s easy enough to sort out.

Changes to the Shopping List

Since the original shopping list, I have made some changes to the builds for technical and budgetary reasons.

All of the memory DIMMs are now PC3-10600R instead of the planned PC3L-10600R. Although these DIMMs lack the L denotation for low power, the difference in power draw and heat output is frankly minimal, and I couldn’t justify the extra cost of the L type DIMMs. I’ve also increased the memory amount in the Storage Server from 12GB to 24GB so that I can cache more of the hot blocks in memory once I get it all running.

Since I took the picture at the top of the post, I made the decision to build the 2U Hyper-V server with both of its CPU sockets populated too. This means that I’ve got three DIMMs populated per CPU to match the channels, and I’ve now got a system with two Quad Core Intel L5630 Xeon CPUs installed. I will likely in the future install the additional 6 DIMMs to take me up from 48GB to 96GB of memory.

Lastly, the UPS, which I stated may well be the APC 1500VA 2U rack mount UPS, has indeed been purchased as the APC 1500VA. I had my eyes on a 2000VA but I managed to get the 1500VA for a steal.

Up Next

In the next post, I’ll post the completed builds including all of the cabling that took me so many months to do.

Project Home Lab Hyper-V Server

This is a really quick post but something exciting I wanted to share. Last night, I did a bit of work to help get the home lab up and running and after finishing some bits and pieces, I’ve now got the Hyper-V server up and running with the Windows Server 2012 R2 installation. Here’s a screenshot of Task Manager showing the memory and CPU sockets and cores available on the machine.

Lab Hyper-V Server Task Manager

As you can see, there are two CPU sockets installed with four cores per socket, giving me 8 physical cores and 16 logical cores with Hyper-Threading. There is 24GB of RAM per CPU socket currently installed, giving me 48GB of memory, and I am using 6 out of 12 available slots, so when the time comes that I need more memory, I can double that number to 96GB, or more should I swap out my current 8GB DIMMs for 16GB DIMMs.
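Those numbers fall straight out of the socket and DIMM maths; a quick sketch of the working:

```python
# Two Intel Xeon L5630 CPUs, quad core, with Hyper-Threading.
sockets = 2
cores_per_socket = 4
physical_cores = sockets * cores_per_socket
logical_cores = physical_cores * 2  # Hyper-Threading doubles the logical count

# Six 8GB DIMMs fitted out of twelve slots.
dimm_size_gb, dimms_used, dimm_slots = 8, 6, 12
installed_gb = dimm_size_gb * dimms_used
max_with_8gb_dimms = dimm_size_gb * dimm_slots

print(physical_cores, logical_cores)     # 8 16
print(installed_gb, max_with_8gb_dimms)  # 48 96
```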

I should have some more posts coming up soon as I’m actually (after far too long) reaching the point of putting all of this together and building out some System Center and Azure Pack goodness at home, including finishing off the series introduction where I actually explain all the hardware pieces I’m using.

Making SCSM Custom Lists Visible in the Console

This week, I have been working on a custom management pack for System Center Service Manager to add new classes for Mobile Phone and SIM Card Configuration Items. One of the requirements for this was to include some lists which can be updated via the console and used to store values about these two CIs.

Creating the properties in each of the new CIs was no problem, and setting their Enumeration Type to a list was no problem either, but getting the lists to actually display in the Service Manager Console I found rather challenging. I was able to do it using the Service Manager Authoring Tool okay, but the Authoring Tool seems to make a horrible mess of generating IDs for objects and it uses Aliases for no reason everywhere. I made the switch from the Authoring Tool to the Visual Studio Authoring Extensions for System Center 2012, but Visual Studio doesn’t automatically create the code required to make the lists visible.

To fuel the frustration, I was only able to find a helpful solution after many failed online searches, clearly using the wrong keywords. I found the answer in the end by creating a list in Service Manager in an unsealed management pack, exporting the management pack and viewing the XML to reverse engineer it. From the XML I was able to find the proper name for the code value, which then turned up a helpful article on TechNet Blogs.

Using Service Manager Authoring Console

If you are attempting to complete this using the Service Manager Authoring Console then you’re on easy street and you don’t need to do anything in the following sections. Simply create your Enumeration List in the custom Configuration Item Class and the list will automagically be made visible for you. If you saw what I saw, which is that the Authoring Console makes a right old mess of your management pack, and you decide to use Visual Studio with the Authoring Extensions to create your management packs, then read on.

Adding the References

In Visual Studio with the Authoring Extensions (VSAE), add two new references to your solution. The references we need to add are Microsoft.EnterpriseManagement.ServiceManager.UI.Authoring and Microsoft.EnterpriseManagement.ServiceManager.UI.Console. You can find the SCSM 2012 R2 RTM versions of these system management packs in the Service Manager Authoring Console installation directory at C:\Program Files (x86)\Microsoft System Center 2012\Service Manager Authoring\Library. By default these references have Aliases of !MUSEA and !MUSEC respectively, but in my project I have changed these to !Authoring and !Console to make them more intuitive for anyone reading the code.

Making the Lists Visible

With our references added, we need to add the code to make the lists visible in the console. You can either add these lines to the Management Pack Fragment which contains your list definitions (which I have done) or you may wish to have a separate Management Pack Fragment for elements which you are publishing into the UI. Either way, they will be included in the compiled project; it’s just your choice about how you structure your project and the code for development.

<Categories>
   <Category ID="Class.List" Target="Class.EnumerationTarget" Value="Authoring!Microsoft.EnterpriseManagement.ServiceManager.UI.Authoring.EnumerationViewTasks" />
   <Category ID="Class.List.Visible" Target="Class.EnumerationTarget" Value="System!VisibleToUser" />
</Categories>

As you can see from the code sample above, we add a Categories section to the fragment and, inside that section, two Category elements, each with a unique ID. The first line makes the Enumeration List that was declared in the custom Configuration Item class accessible for authoring tasks, and the second, as you can probably guess from the code, makes the list visible to end-users in the console.

Unlike most things in Service Manager management pack development, these two Category IDs appear not to require Language Pack Display Strings to be declared so we’re done here. Save your changes, build the project and import the management pack.

Adding List Items to Sealed Management Packs

If you are developing this management pack for a production system then you should be sealing your management pack for import. If you are providing end-users with an empty list to which they can add their own custom list items, you will need an unsealed management pack in which the list entries can be stored when the first item is added. Alternatively, if you want to provide a set of default options, you can include these in the sealed management pack using EnumerationValue elements as part of your EnumerationTypes. These default options will then ship in the sealed management pack, and any new entries added later will be stored in the unsealed management pack.
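As a sketch of what that looks like, the fragment below declares the list root with two default values baked into the sealed pack. The IDs follow the Class.EnumerationTarget naming used in the Categories sample earlier; the child value IDs and Ordinal numbers are my own illustrative choices, not anything prescribed.

```xml
<EnumerationTypes>
  <!-- The list root declared for the custom Configuration Item class -->
  <EnumerationValue ID="Class.EnumerationTarget" Accessibility="Public" />
  <!-- Two default options shipped inside the sealed management pack -->
  <EnumerationValue ID="Class.EnumerationTarget.Option1" Accessibility="Public"
                    Parent="Class.EnumerationTarget" Ordinal="1" />
  <EnumerationValue ID="Class.EnumerationTarget.Option2" Accessibility="Public"
                    Parent="Class.EnumerationTarget" Ordinal="2" />
</EnumerationTypes>
```

Note that, unlike the Category elements, these EnumerationValue entries will want Display Strings declared in your Language Pack so the options show friendly names in the console.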

Azure Backup Maximum Retention

This is a very short and quick post but something I wanted to share nonetheless.

I got a call from somebody today looking at the potential for using Azure as a long-term solution for storing infrequently accessed data. A StorSimple appliance is one obvious answer to the problem, but that was out of consideration in this instance, so we talked about using Azure Backup instead, as this data doesn't actually need to be accessible online and an offline recovery to access the data would be viable.

When I started using Azure Backup with the Windows Server 2012 R2 Essentials integration a number of years ago, it was limited to 30 days of retention, but I knew this had been increased of late. Using the Microsoft Azure Backup client on my server, I checked the maximum value I could set for backup job retention, and the number that came out was 3360 days, which on a more sensible scale is a little over nine years.

That's quite a lot of retention, but sadly it still wasn't enough for this requirement, so back to the drawing board. My problem aside, it's good to see that Azure Backup now supports long-term data retention for backup, and over nine years is long enough to meet most organisations' retention requirements, including those in the financial sector.

Office 365 Management Pack for SCOM

Yesterday I got a chance to play with the Office 365 Management Pack for SCOM. The usual rules apply: read the release notes, import the Management Pack and then configure it, the same as for any Management Pack you import into SCOM.

Installation was simple: download the .msi file from the Microsoft Download page at http://www.microsoft.com/en-us/download/details.aspx?id=43708. However, given that this is a Microsoft Management Pack for a Microsoft product, I would have expected it to be published to the Management Pack Catalog in SCOM rather than offered as a separate .msi download, as that would certainly have streamlined the installation process a little.

Once installed, configuration of the Management Pack is really simple, as an Office 365 configuration link is added to the Administration view. It gets added to the very bottom of the list, so if you think you don't have it visible, make sure you've scrolled all the way down. From the configuration wizard, you simply feed it a friendly name for your tenant and the email address for a user in Office 365 or one configured through your Azure Active Directory.

The reason for this post, other than to explain how simple the Management Pack is to deploy, is to have a little gripe. The user which you create in Office 365 needs to be configured as a Global Administrator on your tenant. To compare things to on-premises, that's like using an account which is a member of Enterprise Admins to monitor Exchange on-premises: a bit of a sledgehammer to crack a nut. I personally like things to be least privilege, so the idea of having a Global Administrator account for this purpose is an annoyance. Given that the Management Pack is testing the health of services within your tenant, I don't see any reason this account couldn't be a Service Administrator, to still give it some administrative powers but lessen them, or failing that, a standard user. I suspect the need for an administrator comes from the Management Pack querying a service API which is only available to accounts authenticated with administrative rights.

The upside to my gripe about the account being a Global Administrator, however, is that you do not need to assign any Office 365 service licenses to the account, so you don't need to shell out £20 a month per user for an E3 license in order to monitor Office 365 from SCOM.