Windows

Anything concerning Windows, be it the Windows client operating system, Windows Server, Windows Mobile, Windows Phone and more.

App-V Hidden Drive Letter ADM File

In our environment, our users love their drive letters, and they do so to the Nth degree. As part of a change control process, a colleague and I have scheduled the deployment of the App-V Client across our business estate to allow us to begin providing users with user-centric, real-time streamed applications to meet their business needs.

Today we discovered the true extent of our Nth degree network drive letter problem: after some review, it became apparent that not a single letter (beyond the usual C, D and E for local disks) was free for company-wide use, which caused us pain on the inside. We came to the conclusion that people in our business very rarely use floppy disk drives anymore, and even fewer people (zero, at my best guess) use a second floppy disk drive, which means that the B: drive would be available across the estate.

Using the Microsoft App-V ADM file for Group Policy (available for download from http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=25070), I re-configured our GPO to force the clients to use the B: drive instead of the App-V default Q: drive. I tested the configuration change on my own machine (ICT dogfooding for everyone), streamed a couple of applications to verify the drive letter change didn’t cause any issues, and then came to an idea. If the App-V virtual file system is inaccessible to the user because of the ACLs that App-V applies to it, and the user has no reason to be meddling in the App-V virtual file system drive, why display it to them at all?

I took a look at the Windows Explorer Hide these specified drives in My Computer policy in the User Configuration portion of Group Policy; however, for reasons beyond me, Microsoft only gives you a very limited set of options in this policy (Hide A; Hide A and B; Hide A, B and C; or Hide All Drives). This policy was probably useful in the legacy days when you only wanted to restrict use of local floppy disk drives, but it’s not very useful in the 21st century.

The way around this is to build your own custom ADM file to change the options for disabling the drive letters.

This evening I created a custom ADM file for exactly this purpose. In my example, the file is crafted to allow you to hide either the B drive or no drives, but you can add as many options to the file as you like.

How you configure the file to restrict particular drives is based on a binary value using a reverse alphabet table: each drive letter is assigned a bit, and the sum of the bits for the drives you want to hide gives you the value to set. Details for calculating this can be found in the Microsoft Support article Using Group Policy to Hide Specific Drives (http://support.microsoft.com/kb/231289). If you aren’t comfortable trying to do this in your head, you can simply copy and paste the table out of the article into Notepad and do your working there.
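To give an idea of what the file looks like, here is a minimal sketch of such an ADM file. The category, policy and item names are my own illustration; the NoDrives value of 2 is the B-only bitmask calculated per the article above, and 0 restricts nothing:

CLASS USER

CATEGORY "Custom Drive Restrictions"
  POLICY "Hide specified drives in My Computer (Custom)"
    KEYNAME "Software\Microsoft\Windows\CurrentVersion\Policies\Explorer"
    PART "Pick one of the following combinations" DROPDOWNLIST NOSORT REQUIRED
      VALUENAME "NoDrives"
      ITEMLIST
        NAME "Restrict B drive only" VALUE NUMERIC 2
        NAME "Do not restrict drives" VALUE NUMERIC 0
      END ITEMLIST
    END PART
  END POLICY
END CATEGORY

Each extra NAME/VALUE pair you add to the ITEMLIST becomes another choice in the policy drop-down.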

Simply add the ADM file to an existing GPO and link it to an OU which contains users in AD, and you’re all set.

If you only want to restrict a single letter, you can simply edit my file by modifying the label and the binary value for the BOnly item. The file is shared and free for you to download from my Windows Live SkyDrive account. I’m also happy to take comments or answer emails with questions about how to modify the file.

The Tiny and The Behemoth

Last week I was having a discussion with a colleague (@LupoLoopy) regarding Group Policy processing times and the age-old question: do you create a small handful of behemoth GPOs, or do you create lots of small, targeted GPOs for specific purposes?

In this iteration of the debate, I was on the side of small and targeted, and my colleague was on the side of the behemoth.

After the discussion, I did a bit of online digging and turned up a post in TechNet Magazine by Darren Mar-Elia, a Group Policy MVP. The conclusion of the article is that, in his opinion, and based on research using User Environment timers to monitor the processing of Group Policy objects, small and targeted seems to be the best strategy.

When a GPO is updated with a change by an administrator, the client has to process all of the settings within the GPO to determine which have changed and which it needs to apply. With small, tightly targeted GPOs there are far fewer settings per GPO, which means that even in a high-churn Active Directory environment, fewer client-side settings need to be re-evaluated.

In largely static environments with a very low rate of churn, it could be entirely suitable to use fewer, larger GPOs to apply larger configuration sets in bulk; however, this will depend on the environment, and following the advice in the link below will allow you to determine the best approach for yours.
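If you want to take your own measurements before deciding, verbose Userenv logging can be switched on with a single registry value on XP and Server 2003-era clients (a sketch based on the documented UserEnvDebugLevel setting; Vista and later use the Group Policy operational event log instead):

reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v UserEnvDebugLevel /t REG_DWORD /d 0x00030002 /f

The timestamped processing entries are then written to %windir%\Debug\UserMode\Userenv.log, which lets you work out how long each GPO takes to process.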

For anyone interested in reading the full article, you can see it at http://technet.microsoft.com/en-us/magazine/2008.01.gpperf.aspx.

Redirecting Windows Home Server 2011 Remote Web Access for Internal Clients

Windows Home Server 2011 features an impressive remote access site allowing you access to your digital media as well as remote access to your home computers. One of the components which allows all of this functionality to work is the Client Connector. This software element, installed on the client computers (which can be PCs or Macs, for the record), enables the Home Server to back up your systems, along with enabling the features required for the RemoteApp Remote Desktop Services connections to remote onto your PC from anywhere online.

In the Home Server Launchpad, the main user facing element of the Client Connector, there is a link for Remote Web Access which directly launches a browser session to the Windows Home Server 2011 Remote Web Access site, after you have configured your free homeserver.com domain with Microsoft and GoDaddy (this is configured using the Windows Home Server 2011 Dashboard).

In a normal home scenario with a router from your ISP, or one that you purchased elsewhere, clicking the Remote Web Access link will launch the Home Server Remote Web Access site using the homeserver.com domain you registered as the URL. In my not-so-normal home network, the fact that I use a Cisco PIX firewall as my edge device means I have a problem.

Unlike a router, the PIX cannot route packets back through the same interface where the packet was initially received.

This sentence from the Cisco PIX Frequently Asked Questions explains the problem in one. Clicking the Remote Web Access link launches the browser session to the correct URL; however, because that URL resolves to the Internet IP associated with the outside interface on the PIX, the traffic flow is not permitted back through the firewall.

Being a Windows Systems Administrator, I like things on Windows, which means I prefer to run my infrastructure services like DNS and DHCP on the Home Server instead of letting the router do it. The DNS role in Windows Server 2008 R2 (the foundation for Windows Home Server 2011), and in any Windows Server operating system for that matter, allows you to create multiple zones for multiple domains for which the server will answer DNS queries, and this is where the fix derives from.

The fix, or trick as the case may be, is to use DNS to reroute the client computer by resolving the homeserver.com domain name to the internal IP address of the Home Server, and away from the Internet side of the network, which ultimately will improve the performance of the Remote Web Access interface too.

On the Home Server, launch the DNS Manager console from Administrative Tools.

In the console, right-click on Forward Lookup Zones, and select New Zone.

In the New Zone Wizard, on the Zone Type panel, select the Primary Zone option.

On the Zone Name panel, enter the full domain name that you specified in the Domain Name Setup Wizard from the Home Server Dashboard (in this example, I’m using server.homeserver.com).

On the Zone File panel, you can leave the default option to Create a New DNS Zone File.

On the Dynamic Updates panel, leave the option set to Do not allow dynamic updates. This will help to prevent any rogue clients on the network from poisoning the DNS zone and directing your clients to the wrong IP address.

On the Completing the New Zone Wizard panel, verify that you have specified the homeserver.com domain correctly, and then select Finish to complete the wizard.

Back in the DNS Console, your new zone will be visible. In the new zone, right-click and select New Host (A or AAAA).

In the New Host dialog, leave the Name field blank and in the IP Address field, specify the IP Address of your Home Server. This IP Address should either be statically assigned to the Home Server, or it should be configured as a DHCP Reservation on whatever device is running your DHCP Server on the network (although if the Home Server is your DHCP Server, then this should obviously be static).
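If you prefer the command line to the wizard, the same zone and blank-name host record can be created with dnscmd, which is available once the DNS role is installed (a sketch using the example names and address from this post):

dnscmd /ZoneAdd server.homeserver.com /Primary /file server.homeserver.com.dns
dnscmd /Config server.homeserver.com /AllowUpdate 0
dnscmd /RecordAdd server.homeserver.com @ A 192.168.1.100

The @ stands in for the blank host name, so the A record answers for the zone root itself, exactly as the wizard steps above produce.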

Congratulations. Your internal clients will now be able to access the Home Server Remote Web Access site, using the Client Connector user interface as Microsoft intended, without a single packet touching the outside interface of your firewall.

If in your home network you are using the router to perform DNS queries on your behalf, but your router prevents connections back through the same interface on which the connection was initiated, as the PIX does, you could also implement this trick using the DNS HOSTS file; however, this would need to be performed on a per-client basis by editing the HOSTS file on each machine. Using this example, the HOSTS file line would be configured as follows:

192.168.1.100   server.homeserver.com   # Windows Home Server

Remember to flush the DNS cache on the clients using ipconfig /flushdns before testing your work, regardless of whether you used the DNS or the HOSTS file method to implement it.

App-V Client Management via GPO

Deploying the App-V Client to end-user machines can be a headache. Microsoft provide ADM files for managing the configuration of the App-V Client via Group Policy in AD DS; however, if you are trying to deploy the client yourself, you will soon discover that the Microsoft ADM files don’t allow you to configure an App-V Publishing Server. The only options the ADM files give you are to override the sequenced application package and icon source roots.

Using this method, your install string for a silent installation will look something like this:

setup.exe /s /v" /qn SWIPUBSVRDISPLAY="App-V Server" SWIPUBSVRTYPE="RTSP /secure" SWIPUBSVRHOST="SERVERNAME" SWIPUBSVRPORT="322" SWIPUBSVRREFRESH="on" SWIFSDRIVE="Q""

As anyone can see, this isn’t exactly elegant, and if you are using SCCM to deploy the App-V Client as I am, you will soon discover that SCCM has a character limit for the installer command line, which means you may have to resort to building a batch file to execute the installation and then calling that file from the SCCM Program.
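To give an idea of the batch file workaround, a minimal wrapper might look something like this (a sketch only; the nested quoting inside the /v switch is notoriously finicky, so test it interactively before handing it to SCCM):

@echo off
rem Wrapper to keep the long install string out of the SCCM program line.
rem Run from the package source directory so setup.exe resolves correctly.
pushd "%~dp0"
setup.exe /s /v" /qn SWIPUBSVRDISPLAY="App-V Server" SWIPUBSVRTYPE="RTSP /secure" SWIPUBSVRHOST="SERVERNAME" SWIPUBSVRPORT="322" SWIPUBSVRREFRESH="on" SWIFSDRIVE="Q""
popd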

The other problem is that you are then hardcoded to the server name and port specified at install time. Yes, you could use a DNS CNAME to direct your clients to the App-V servers, and sure, you could use a GPO to edit the registry keys on the end-user machines after the fact, but none of this is as elegant as properly managing the deployment.

Introducing Login Consultants, a Netherlands-based virtualization specialist company. They provide a third-party ADM file for you to import into AD DS, extending the App-V management options beyond the Microsoft ADM file, and best of all, you can register and download the ADM file for free from http://www.loginconsultants.com/index.php?option=com_docman&task=cat_view&gid=20&Itemid=149.

Using the Microsoft ADM file and the Login Consultants ADM file in conjunction, your install string turns into this:

setup.exe /s /v" /qn"

Much cleaner, easier to set up in Configuration Manager, and it gives you the ability to manage all of your App-V server configuration, including server name, ports, protocol, the SFT_SOFTGRIDSERVER environment variable and all the other settings you need, via Group Policy.
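The SFT_SOFTGRIDSERVER variable is the piece that makes this portable: sequenced packages typically reference the streaming server through the variable rather than a hard-coded host name, so an OSD file contains something along these lines (a hypothetical fragment for illustration, not taken from a real package):

CODEBASE HREF="rtsps://%SFT_SOFTGRIDSERVER%:322/MyApp/MyApp.sft"

With the variable delivered by Group Policy, repointing every client at a different streaming server becomes a single policy change rather than a re-install.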

For centralising and streamlining management this is a huge boon, as it means you have a one-size-fits-all deployment of the App-V Client while managing everything else from either AD DS or the App-V Management Server.

Certificate Store Permissions and Windows Live Block App-V RTSPS Protocol

Last week, when converting our existing internal ICT dogfood trial of App-V into a highly available, production-capable App-V solution, we made the decision to utilize the RTSPS (Real Time Streaming Protocol Secure) protocol for streaming our applications.

Using my own and a colleague’s laptops to test the RTSPS protocol, we ran into an issue whereby the client received the following error:

The specified Application Virtualization Server has shut down the connection. Try again in a few minutes. If the problem persists, report the following error code to your System Administrator.

Error Code: xxxxxx-xxxxxx0A-10000009

We initially discovered from an App-V blog article (http://blogs.technet.com/b/appv/archive/2010/03/09/troubleshooting-common-rtsps-issues-with-app-v.aspx) that this issue occurs when the NETWORK SERVICE account on the server lacks permissions to access the certificate store machine keys.

Following the advice in the article for Windows Server 2008 R2 systems, this was quickly resolved by using a Certificates Microsoft Management Console snap-in to grant Read permission to the NETWORK SERVICE account on the certificate being used to secure the RTSPS protocol in App-V.
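If you would rather script the permission grant than click through the MMC, the machine keys live as files on disk, so something like the following should achieve the same result once you have identified which key container file belongs to your certificate (an untested sketch with a placeholder file name; the MMC route described in the article is the documented method):

rem Substitute the key container file name that belongs to your certificate
icacls "C:\ProgramData\Microsoft\Crypto\RSA\MachineKeys\<container file name>" /grant "NETWORK SERVICE":R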

Thinking the issue was resolved, we proceeded to initiate a Refresh on the App-V client and tried to stream an application that we had previously sequenced, however we now received a new error:

The Application Virtualization Client could not update publishing information from the server App-V Server. The server will not allow a connection without valid NTLM credentials. Report the following error code to your System Administrator.

Error code: 4615186-1690900A-00002002

This left us puzzled. Unable to find a solution initially, we turned to Bing for some assistance, unearthing an interesting but niche blog post.

According to the source of our findings (http://blogs.ethz.ch/jlaville/2011/08/25/app-v-error-00002002/), machines with components from the Windows Live Essentials suite of applications cannot use the RTSPS protocol due to a value added to the LSA Security Packages registry key.

[Screenshot: Registry Editor showing the LSA Security Packages value with the livessp entry removed]

After removing the livessp value from the multi-string value in the registry and restarting the system, we were able to successfully refresh from the server and stream the applications.
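For anyone needing to apply the same fix across several machines, the value lives under the LSA key and can be inspected and rewritten with reg.exe (a sketch; the package list in the second command is illustrative only, so substitute whatever your own query returns, minus livessp):

rem Check the current contents of the multi-string value first
reg query "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v "Security Packages"

rem Rewrite the value without the livessp entry (\0 separates the entries)
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v "Security Packages" /t REG_MULTI_SZ /d "kerberos\0msv1_0\0schannel\0wdigest\0tspkg\0pku2u" /f

A restart is still required for the change to take effect.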

Circumventing Intel’s Discontinued Driver Support for Intel PRO 1000/MT Network Adapters in Server 2008 R2

In a previous life, my Dell PowerEdge SC1425 home server had an on-board Intel PRO 1000/MT Dual Port adapter, which introduced me to the world of adapter teaming. At the time I used the adapters in Adapter Fault Tolerance mode because it was the simplest to configure and gave me redundancy in the event that a cable, server port or switch port failed.

My current home server has been running since its inception on the on-board adapter, a Realtek Gigabit adapter, which worked; however, it kept dropping packets and causing the orange light of death on my Catalyst 2950 switch.

Not being happy with its performance, I decided to invest £20 in a used PCI-X version of the Intel PRO 1000/MT Dual Port adapter for the server. Although it’s a PCI-X card, it is compatible with standard PCI slots too, which means it plays nicely with my ASUS AMD E-350 motherboard. What I didn’t realise is that Intel doesn’t play so nicely with Server 2008 R2 and Windows 7.

When trying to download the drivers for it from the Intel site, after selecting either Server 2008 R2 or Windows 7 64-bit, you get a message that Intel doesn’t support this operating system for this version of network card, which I can somewhat understand given the age of this family of cards; however, it posed me an issue. Windows Server 2008 R2 running on the Home Server automatically detected the NICs and installed Microsoft drivers, but that left me without the Advanced Network features needed to enable the team.

I set off by downloading the Vista 64-bit driver for the adapter and extracting the contents of the package using WinRAR. After extraction, I tried to install the driver, and sure enough the MSI reported that no adapters were detected, presumably because of the differences in the driver models between the two operating systems. After this defeat, I launched Device Manager and attempted to install the drivers manually using the Update Device Driver method. After specifying the Intel directory as the source directory, sure enough, Windows installed the Intel versions of the drivers, digitally signed, without any complaints.

With the proper Intel driver installed, I was left with one remaining problem: the teaming. Inside the package was a folder called APPS with a sub-directory called PROSETDX. Anyone who has previously used Intel NIC drivers will recognise PROSET as the name of the Intel management software, so I decided to look inside, and sure enough, there is an MSI file called PROSETDX.msi. I launched it, and to my immediate horror, it brought up the same installer that the autorun starts.

Not wanting to give up hope, I ran through the installer and completed the wizard, expecting it to again say that no adapters were found; however, it proceeded with the installation, and soon enough completed.

This part may differ for some of you: somewhere between version 8.0 and version 15.0 of the Intel PROSet driver, Intel made a bold move and relocated the configuration features from a standalone executable to an extension in the Device Manager tabs for the network card. I opened up the device properties, and to my surprise, all of the Intel Advanced Features were installed and available.

I promptly began to configure my team, and it was set up without any problems. It created the virtual adapter without any issues too, including installing the new driver for it and the new protocols on the existing network adapters.

With this new server, I decided to do things properly, and I’ve configured the team using Static Link Aggregation. I initially tried IEEE 802.3ad Dynamic Link Aggregation, but the link was bouncing up and down like a yo-yo, so I set it back to Static. In the information for the Static Link Aggregation mode is a note about Cisco switches:

This team type is supported on Cisco switches with channelling mode set to "ON", Intel switches capable of Link Aggregation, and other switches capable of static 802.3ad.

Following this advice, I switched back to my SSH prompt (which was already open from trying to get LACP working for the IEEE 802.3ad team). A single channel-group command on each of the two interfaces completes the configuration, creating the EtherChannel with the mode forced to on (static, rather than PAgP or LACP negotiation):

interface GigabitEthernet0/1
description Windows Home Server Team Primary
switchport mode access
speed 1000
duplex full
channel-group 1 mode on
spanning-tree portfast
spanning-tree bpduguard enable
!
interface GigabitEthernet0/2
description Windows Home Server Team Secondary
switchport mode access
speed 1000
duplex full
channel-group 1 mode on
spanning-tree portfast
spanning-tree bpduguard enable
!
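
Back on the switch, you can confirm that both ports have actually bundled with a quick status check (assuming channel group 1 as configured above); both interfaces should be flagged P, bundled in port-channel Po1:

show etherchannel summary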

The finishing touch is to check the Link Status and Speed in the Network Connection Properties: 2.0Gbps displayed for the two bonded 1.0Gbps interfaces. Thank you, Intel.

The Trials and Tribulations of Installing Windows Home Server 2011

As I sit here now in my study at home, I am blessed by the new soothing sound of my self-built Windows Home Server 2011 system. And why is the sound soothing? Because it’s silent. My rack is still making some noise, coming from the Cisco switch and router, which both probably need a good strip-down and de-dust, but it is nothing compared with the noise of the old PowerEdge SC1425 that I had running.

Unfortunately, installing Windows Home Server 2011 for me wasn’t smooth sailing, and I hit quite a few bumps along the way, so here is the list of problems I faced to help others avoid the same time wasters.

Before even starting the installation, please make sure you do read the release notes. Ed Bott has gone through some of the crazy requirements in a post at ZDNet (http://www.zdnet.com/blog/bott/before-you-install-windows-home-server-2011-rtfm-seriously/3134). The biggest one to watch out for is the clock.

Due to some kind of bizarre issue with the RTM release of WHS 2011, you must change the time in your BIOS to PST (Pacific Standard Time), or GMT-8. You must then leave the BIOS clock, and consequently the Windows clock, at that time, and during the installation, when prompted for the Time Zone, you must set it to Pacific Standard Time.

Once the installation is complete, you must then wait a further 24 hours before changing the time back. If you choose not to heed this advice, the release notes state that you will not be able to join any client computers to the Home Server during this 24-hour period. Once the 24 hours are up, you can log into the server and change the time and the time zone accordingly.

The first problem hit at the first phase of the installation, Extracting Files, while it was at 0%. Reviewing the error log from the setup process, I saw that it had encountered Setup Error 31 (Trackback:80004005). A quick look on the Microsoft Social forums led me to discover that WHS 2011 doesn’t support having any kind of RAID or array-type disk attached during the installation. For me, this meant disconnecting the RAID-10 controller and powering down the disks attached to it for the duration of the install. Once the installation was complete, I simply reconnected the controller and installed the drivers, and everything has worked perfectly as I expected.

The second problem occurred once the installation was complete and the WHS 2011 customisation process ran after first logon. It seems that WHS 2011 goes out to Windows Update and pulls down a couple of required updates, and as such needs a working network card. My motherboard uses a NIC which isn’t natively supported by WHS 2011, so I had to install the driver; to my shock, however, the initial lack of a NIC terminated the setup process and I was forced to restart.

As my existing home server and the new home server were to be using the same IP address, I had the new one disconnected from the network initially. This caused the next problem, because after installing the NIC driver, I was given a prompt that there was no network connectivity and that I should connect a network cable. Once again, to my shock and disbelief, this required another restart.

At this point, I also realised that my Cisco switch still had port security turned on for the Home Server port, bound to the old server’s MAC address, so I had to disable that on the switch. And guess what? Another reboot.

My final problem lay with the network card on the motherboard itself. In the BIOS, I had enabled the maximum power saving mode setting. It turns out that, on the ASUS E35M1-M PRO motherboard, this prevents the network card operating in 1Gbps mode and drops it to 100Mbps. It took me a while to figure this one out, changing cables, switching between switch ports and so on, but I eventually discovered an option under the network card in Device Manager called Green Ethernet. Disabling this setting, which was previously set to Enabled, reset the network connection, and it then connected at 1Gbps.

After all of this, I have a fully working and perfect home server for me and the family. I’ll be writing some other posts to explain my setup in detail, but this post is purely about the installation process.

Corsair AX750 Professional Power Supply and Memory Installation

So, in the latest episode of part installation in my progressive Home Server 2011 build, the power supply and the memory arrived in the post today.

The memory is the same as that originally specified in my home server design: 4GB of Corsair 1333MHz DDR3 as two 2GB sticks, so that I get the most from the dual channel memory controller. This isn’t XMS or Dominator or any of the special Corsair models of memory, but the standard Corsair range. The reason for this is that the home server isn’t going to be running a large number of memory-intensive processes, and especially not ones which require ultra-low CAS latency and timings or lots of paging in and out.

I’ve always used Corsair memory since I switched from unbranded sticks about 4-5 years ago after lots of back-to-back memory related issues. I’ve never had a single stick go bad, even the three 2GB sticks of Registered DDR2 in my current Dell PowerEdge SC1425 home server build (which has been up and running every day for the last three years). If a stick ever did go bad, I know I have Corsair’s lifetime warranty to back me up too, which is nice.

The power supply has three key requirements in this build. One is to be silent, or as near to silent as possible. Two is to provide enough SATA connectors to support the six SATA-II disks that will be going in the server. Last but not least is to be as energy efficient as possible, even at low power draw levels.

Silence was a difficult one for me to find, because all of the silent power supplies I was able to find didn’t support more than four SATA-II disks, and I didn’t want to be using Molex to SATA converters in the build as that’s just another thing to go wrong or stop something working at 100% efficiency. As per the previous point, six disk support was hard to find. It was achievable, but only on the higher end power supplies capable of delivering silly amounts of power, up to 1200W in some cases. Whilst a power supply only uses the power it needs, supplies have efficiency curves based on the demand: typically, the lower the draw relative to the PSU’s peak or recommended continuous load rating, the worse the efficiency, which leads to the final requirement.

I was mainly after a supply rated at 80 PLUS Bronze, though anything better was a bonus. When I found the Corsair AX750 Professional Series Modular PSU, I was in power heaven. With a peak load of 750W, sleek black good looks, and a modular design supporting up to twelve SATA-II disks, I would not only have enough SATA connectors to meet the needs of my current six disk design, but also capacity to extend to the full ten disk capacity of the case if I wanted to in the future. Thanks to the modular cabling, I am also able to maximize the air flow in the already airy case by only installing the cables I need to deliver power to the disks and motherboard.

The power supply is in fact 80 PLUS Gold rated, delivering a massive 90% efficiency at 230V even when operating below 20% load (I will probably be in the 6-7% region), which is something very rare for a power supply. It does have a fan, rated at 35dB at the full 750W load, which is loud; however, when running below 20% load, the fan is disabled due to the lack of heat generation, which means I meet the final criterion of silence.

The supply comes with a full seven year warranty from Corsair, and because I will only be running it at extremely low load levels, none of the components are likely to ever be taxed to a level that would cause them to fail. This supply ticks all the boxes I had as minimum requirements, goes the extra ten miles with tonnes of extra features and nice touches, and leaves me safe in the knowledge that it will outlast my projected storage utilization for this server (and most likely the shelf life of Windows Home Server 2011 too).

Lastly, I decided that as I am not going to be using the rear case fans, I should remove them to give me a bit more through-flow for the air and also to get the hanging Molex connectors out of the way. There is a shot of the case without these fans now, showing the extra ventilation it will give me. The front case fans are still installed; once the build is complete, I am going to review the temperatures and decide whether they warrant having some air movement, and whether the dB level from them is acceptable.

X-Case RM400 and Asus E35M1-M PRO Motherboard Installation

So I’ve just finished installing the motherboard in the case, after a while spent moving the stand-offs around to accommodate the microATX motherboard. Here are the shots of the motherboard in the case, and the rear panel for the motherboard.

As you can see, the motherboard takes up barely any room in the case. That might be a downside, because I’m potentially going to run into issues with SATA cable length, but it’s also excellent, because it gives the passively cooled processor far more room for the air to circulate, keeping the internal case temperature lower.

Past the First Hurdle, but In a New Camp

Since my last post about the new Home Server project, I’ve received financial backing in the form of overtime at work to start the purchasing, and I’ve also received WAF (Wife Approval Factor), in that the new server will be near silent, give us much greater storage capacity and cut the overall power consumption significantly.

Late last week, I ordered the case for the project and the motherboard with a twist.

Since my last post on all things Home Server back in March, I have discovered a new product, recently released by AMD, aimed squarely at the playing field of the Intel Atom. The processor is the AMD E-350 Zacate, based on the AMD Fusion platform. The platform is designed for high performance yet low power consumption, while integrating HD video capabilities and other top end features into the chipset.

So, moving away from the Asus AT3IONT-I board with the Intel Atom 330 processor, I have instead gone for the Asus E35M1-M PRO motherboard, being an AMD fanboy in a previous life.

This motherboard is microATX, in contrast to the Intel board, which was miniITX. This shift in form factor gives me more flexibility due to the increased number of PCI Express slots, and also means there is room for a more powerful chipset on the motherboard. The net result is a motherboard which, for a little over £100, gives you an 18W TDP processor which can be passively cooled, or actively cooled with the optional CPU fan included, support for up to 8GB of DDR3 memory, 2x PCI, 1x PCIe x1 and 1x PCIe x16 slots, five SATA 6Gbps ports supporting JBOD, RAID0 and RAID1, Gigabit LAN, USB 3.0, HDMI, DVI-D and VGA video output, along with S/PDIF optical out for audio.

The new motherboard sees the power consumption go up from 12W to 18W; however, based on the performance benchmarks and a recent review highlighted by TheWindowsBlog on Twitter (which you can read at the Windows Team Blog site), this extra 6W really seems worth it. The motherboard they reviewed is actually the miniITX version of the board, which lacks a couple of the features mine has, but the processor and chipset are identical.

Forum topics on sites like The Green Button and AVForums all suggest that this processor and video combination, in an HTPC scenario, is more than capable of handling two simultaneous HD streams, something the Atom can’t manage on its own.

With the motherboard in hand, I have to say it’s a really nice looking board, and the features still blow me away for such a small and tightly integrated package.

The case is another ball game. The pictures over on the X-Case website really don’t do it justice. The 4U chassis has large slots in its front to allow for decent airflow. Unlocking the front panel allows you to lower the flap to reveal the internal air filter and two 80mm fans, with the air filter mounted in front to stop the case inhaling dust. Inside the case, you have ample room for even a full ATX board, so my microATX board is going to be swamped inside, but at least it will have six of the ten 3.5” drive bays filled with 2TB Western Digital Green disks to keep it company.

I’ll be assembling the motherboard in the case later today, and will grab a picture. Next month, I will hopefully be ordering the memory and the power supply which will give me enough to get the machine powered on and to configure the EFI BIOS settings how I want them before ordering the RAID controller and the disks last.

In light of the additional PCI slots, I am currently thinking about adding an Intel dual port server NIC to the machine so that I can set up a team to give me more throughput and redundancy on the network, as this is what I currently have set up in my existing Dell PowerEdge SC1425 box.