Intel

Meltdown and Spectre CPU Flaws on Windows Systems

Over the course of the last few days, much has been said online about a security flaw affecting the x86 CPU architecture and, more specifically, Intel CPUs*. The issue has been known since earlier in 2017 but has only recently started doing the rounds. It was uncovered by Google (https://security.googleblog.com/2018/01/todays-cpu-vulnerability-what-you-need.html?m=1) and was not scheduled to be made public just yet; however, growing speculation and leaks online led Google to release the details early. The issue has been logged under three CVEs: CVE-2017-5715, CVE-2017-5753, and CVE-2017-5754. Microsoft also has its own guidance at https://support.microsoft.com/en-us/help/4073119/windows-client-guidance-for-it-pros-to-protect-against-speculative-exe.

The early release by Google pulled forward a number of things that were already in play. Microsoft released its Windows hotfix immediately, and the planned Microsoft Azure VM maintenance that was scheduled for 10th January was brought forward to happen almost immediately.
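
If you want to check whether the mitigations are actually in place once the hotfix is installed, Microsoft's guidance points to a PowerShell module for exactly this. A minimal sketch, assuming the SpeculationControl module from the PowerShell Gallery and PowerShell 5.0 or later:

# Install the Microsoft SpeculationControl module from the PowerShell Gallery
Install-Module SpeculationControl -Scope CurrentUser

# Report whether the branch target injection (Spectre) and rogue data cache
# load (Meltdown) protections are present and enabled on this machine
Get-SpeculationControlSettings

The output reports hardware (firmware/microcode) support and Windows support separately, which is a quick way to see whether the hotfix alone has left you fully covered.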

* There are numerous reports, including the original publication from Google, that the issues also affect ARM and AMD CPUs. I do not wish to get embroiled in a debate about whether or not AMD and ARM are affected as there are arguments coming from both sides. For the purposes of this article, I will focus on Intel as we know for certain that their processors are affected. Intel is keen to point out that, while they are still affected, CPUs based on newer platforms like Skylake and Kaby Lake will experience a smaller performance drop-off.


Intel HD Graphics Update for Windows 10 Technical Preview

Today is a good news day for Windows 10 Technical Preview users. I’ve been using the Technical Preview on my Dell Latitude E7440 laptop since its release, and since upgrading to build 9926, I’ve been having a lot of problems with blue screens of death on startup. So much so that from a cold boot it normally takes me four BSODs to get logged in and working, which is why the laptop only ever goes to sleep these days to avoid cold boots.

The problem is caused by the Intel HD Graphics driver, which I’ve confirmed for myself by using WinDbg to analyze the crash dumps from many of these failures. Today, it looks like my luck is in.

Windows 10 Technical Preview Intel HD Graphics Update

Delivered via Windows Update, I’ve got two new drivers waiting for me: one for the Realtek audio and another for the Intel HD Graphics. I’m installing them as you read this post and, fingers crossed, they are going to resolve the issues I’ve had running the Windows 8.1 driver under Windows 10.
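
If you want to confirm exactly which driver version ends up installed once Windows Update has done its thing, a quick PowerShell query does the job; a small sketch using the standard WMI display adapter class:

# List the installed display adapters with their driver version and date
Get-CimInstance Win32_VideoController | Select-Object Name, DriverVersion, DriverDate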

Project Home Lab: Planning for Recovery

In my last post, Server Surgery Aftermath, I talked about the issues I was having with my home server. Whilst continuing to try and identify the issues after that post, I ran into some more BSODs and managed to collect useful crash dumps for a number of them. Reviewing the crash dumps with WinDbg from the Windows Debugging Tools, I was able to see that in every instance of the BSOD, the faulting module was network related, with the blame shared equally between Ndis.sys and NdisImPlatform.sys, which meant that my previous suspicion of the LSI MegaRAID controller was out of the window.

Included in the trace was the name of another application which is running on the server. I’m not going to name the application in this instance, but let’s just say that it is able to burst ingress traffic as fast as my internet connection can handle. I decided to intentionally try and make the server crash by starting up the application and generating traffic with it, and sure enough, within a couple of minutes the server experienced a BSOD and restarted. This now started to make sense because the Windows service for this application is configured for Automatic (Delayed Start), which is why in one instance after a BSOD, the server had another BSOD about 45 seconds later.

For the interim, I have disabled the services for this application and, with that information in hand, I started looking more closely at the networking arrangements. I knew that as part of the server relocation I had switched from my dual port PCIe Intel PRO 1000/PT adapter to the on-board Intel 82576 adapters, and that both of these adapter ports are configured in a single Windows Server native LBFO team using the Static teaming mode, connected to a static LAG on my switch.
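
For reference, recreating that team with the in-box tooling on Windows Server 2012 R2 is a single PowerShell command. A sketch, with a hypothetical team name and assuming the two on-board ports show up with the names below (check Get-NetAdapter for the real names on your server):

# Create a switch-dependent static LBFO team from the two on-board 82576 ports;
# the teaming mode must match the static LAG configured on the switch
New-NetLbfoTeam -Name "LAN-Team" -TeamMembers "Ethernet 1","Ethernet 2" -TeamingMode Static -Confirm:$false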

To keep this story reasonably short, it turns out that the network driver Windows Update provides for my Intel adapters is quite old, yet driver set 19.5, which Intel advertise as the latest available for my adapters, doesn’t support Windows Server 2012 R2 and will only install on Windows Server 2012. Even booting the server with Driver Signature Enforcement disabled didn’t allow the drivers to install. I quickly found that many other people have had similar issues with Intel drivers because Intel blocks them on selected operating systems for no good reason.

I found a post at http://foxdeploy.com/2013/09/12/hacking-an-intel-network-card-to-work-on-server-2012-r2/ which really helped me understand the Intel driver and how to hack it to remove the Windows Server 2012 R2 restrictions to allow it to be installed. The changes I had to make differed slightly due to me having a different adapter model but the process remained the same.
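
For anyone following the same route, the install itself is simple once the .inf has been edited; the catch is that the edited package no longer matches its signed catalog, so you will likely need driver signature enforcement disabled, or the package re-signed with a test certificate, for the installation to be accepted. A sketch from an elevated prompt, with a placeholder path standing in for wherever you extracted and edited the 19.5 package:

# Stage and install the edited Intel driver package
# (the path below is a placeholder, not the real package layout)
pnputil -i -a "C:\Drivers\Intel-19.5-Modified\edited-driver.inf"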

Because my home server is considered production in my house, I can’t just go right ahead and test things like hacked drivers on it, so luckily my single hardware architecture vision came out on top: I’ve installed the hacked and updated Intel driver on the Lab Storage Server and the Hyper-V server with no ill effects. I’ve even tested putting load between the two of them over the network and there have been no issues there either, so this weekend I will be taking the home server’s life in my hands and replacing the drivers, and hopefully that will be the fix.

If you want to read the full story behind my troubleshooting of the Intel issue, there is a thread I started on the Intel Communities (with no replies, I may add) with all the background detail at https://communities.intel.com/thread/58921?sr=stream.

Project Home Lab: Shopping List

Up until now, I’ve talked at length about the various factors dictating what I will be buying and why. This post is meant as a high-level summary of all the previous posts: a shopping list of all of the components needed to make the build tick, so that if you want to embark on your own project and choose to go down the same route, you can get a head start.

This series will consist of the following posts. I will update the table of contents links in each post as I produce and publish the articles.

  1. Project Home Lab: Goals
  2. Project Home Lab: Existing Infrastructure
  3. Project Home Lab: Hardware Decisions
  4. Project Home Lab: Network Decisions
  5. Project Home Lab: Shopping List

Common Infrastructure

Storage Server

I’ve yet to purchase or confirm the disks for this server as they are pretty much a commodity item. I’ll update this post when I do select them, but expect it to be a mixture of SSD and SATA disk.

With the SAS Multilane cables for connecting to the on-board SAS SFF-8087 ports, do make sure you get cables with Sideband support, provided by an extra wire or two and an extra pin connection in the cable; otherwise you won’t get the SGPIO disk failure and status indication through the disk backplane.

Hyper-V Server

I’ve yet to purchase or confirm the disks for this server either as they are pretty much a commodity item. For the Hyper-V server, the disks need not be large or pretty as they will be used primarily just for getting the host operating system online. A pair of SSDs in a RAID1 mirror is the most likely suspect.

The same note about SAS Multilane cables applies here: make sure the cables have Sideband support, otherwise you won’t get the SGPIO disk failure and status indication through the disk backplane.

Next Up

With the shopping list crossed off, most of the hardware now ordered and some of it already in my hands, it’s time to get building. The next posts will show some of the builds. Enjoy.

Project Home Lab: Hardware Decisions

In parts one and two of this series, I talked about what I want to achieve and what I have in place already. From here on in, it’s all about the new stuff I want.

This series will consist of the following posts. I will update the table of contents links in each post as I produce and publish the articles.

  1. Project Home Lab: Goals
  2. Project Home Lab: Existing Infrastructure
  3. Project Home Lab: Hardware Decisions
  4. Project Home Lab: Network Decisions
  5. Project Home Lab: Shopping List

One Server vs. Multiple Servers

Early on, I had thought about building a single server with storage and hypervisor in bed together, but I quickly came to the conclusion that this would hinder me in the long run. Yes, I would get the fastest possible access to the disk storage for the VMs with an all-in-one, but it would leave me with nowhere to go: scaling up would be limited by the specification of the internal hardware in that single server, and scaling out would have big costs associated with it as I would need to buy the networking and Hyper-V servers to break it out.

I also decided that this wouldn’t give me a playground which could simulate much of a customer environment; after all, how many businesses do you know that run everything on a single host?

To this end, I decided that one server to act as a storage server and another for my hypervisor was the solution. This means that over time, if the need arises, I can add additional Hyper-V hypervisor servers to scale out my compute capacity and form a multi-node cluster. There may be upgrades required to the storage server to increase the capacity or IOPS, but those costs would be minimal, business-as-usual storage growth costs.

Rack Mount vs. Standalone

For most people considering rack mount versus standalone, the choice would be based on whether or not their wife or partner will allow them to get away with having a server rack in the house somewhere. As I overcame that obstacle years ago, the decision is easier for me. Standalone has its advantages because the machines can be put into the corner of a room or garage with ease; however, standalone servers tend not to have the performance or scalability that I am looking for in a virtualization platform, which demands big memory to name but one facet. Standalone servers also tend not to be so readily available as used systems on eBay, which makes sourcing them harder.

Server Rack Cabinet

Based on my points above regarding a rack, I stayed focused on rack mount; however, my problem lay with my current rack. The cabinet I have currently stands roughly 22U tall but only has usable space for 12U of equipment, as I built it originally with the remainder of the space designed as storage compartments which I no longer use. The rack is currently wooden, so it’s very primitive and provides me only with front and rear access, and due to its place in the corner of the garage, I only actually have front access.

Because of this, I am going to need a new server rack to house everything. This rack will be a multi-tenant rack housing both my production home network and the lab environment, and I need to make sure that I buy a rack which will fit my existing space and give me some additional rack space for expansion should I need it in the future, such as additional Hyper-V servers or storage enclosures.

A UK-based vendor called X-Case has recently started selling server racks and, at £214 for a new 22U rack, it is perfect for me. It fits my space, it’s on castors so I can move it around, and it has removable side panels with doors front and rear. I’ve bought from X-Case before when I built my home server, which lives in one of their 4U cases right now, and their products are great and really affordably priced compared to the big name brands.

The 22U cabinet gives me plenty of space to house my current 9U of devices and leaves 13U for new purchases, and I don’t plan on taking my lab that big (just yet).

Off the Shelf vs. Custom

I try where possible to buy desktops and laptops from dedicated builders like Dell or Lenovo, firstly because they can build it better than I can, and because gone are the days when it was cheaper to build your own. For servers at home, however, I have a slightly different view. On eBay you will find a myriad of used servers up for sale, the likes of Dell PowerEdge and HP ProLiant, and sifting through them to find a good example which fits all the requirements can be a challenge. The problem I have with all of these is that none of them are energy conscious, because servers are designed for the datacenter and not the home. It is to this end that I decided to build my own.

Building my own gives me the flexibility to select certain parts bespoke and new and others used and reputably branded whilst keeping an eye on the power meter.

Rack Mount Cases

For rack mount cases, I wanted to buy new. The first reason is that pressed metal tins are generally quite affordable, and the second is that I wanted to buy something that was going to perfectly fit my bill. I didn’t want 1U because that limits me on power supply, expansion card and CPU cooler options. 1U also means that you aren’t going to be fitting in many disks, which would limit my disk I/O performance options. Lastly, 1U chassis need cooling fans, and 1U fans are small, which means they need to rotate fast to push the same cubic feet per minute (CFM) of air. Fast means noisy, and all of these factors immediately rule out 1U.

2U is a good height. 2U means I’m not limited by the power supply, as almost all server supplies are 2U compatible, and most expansion cards are available in a half-height form factor suitable for installation into 2U. Fans in a 2U chassis are larger, which means slower spinning and quieter, and I also have more height to work with for CPU coolers. 2U gives me enough room to work with my components but isn’t wasteful of space either. 2U does have a limitation, however, and that is physical space on the front of the enclosure for disks, so whilst 2U is a good fit for my Hyper-V hypervisor servers, it doesn’t fit perfectly for the storage.

3U and 4U are ideal sizes for storage. You get all the benefits of 2U as above but more front surface area to jam in disk slots. I looked at what is out there and decided early on that by using a combination of SSD and SATA disk for storage, I wouldn’t be needing that many disks for a single-user environment, and the gains of 4U over 3U weren’t really worth it, so I focused my attention on 3U. 4U also has the problem that, with the number of disks it can support, you typically can’t find that many SAS channels on a single controller, so I would need multiple SAS controllers; if you could find one with enough ports, it would likely be upwards of £1,000 just for the card.

I haven’t decided on my motherboard or CPU options at this point, but I know that I want this build to be flexible, so I need to ensure whatever I buy can support ATX and Extended ATX motherboards, leaving me free to make the right decision later.

Accessibility for me is important. I want whatever case I opt for to support sliding rails so that I can draw out a server if it has a fault to replace parts. I also want the disks in any of the servers to be hot-swappable so that I can see a faulting disk and replace it without having to open up the chassis and start messing around with drive screws and cables. As I plan on using a mixture of SSD and SATA disk, I need it to support 3.5″ and 2.5″ disks. I’m not interested in dual redundant power supplies in my build as that adds power demand and cost. If I lose a power supply, I can take the hit of having the lab offline for a few days for a replacement.

As I’ve expressed earlier, I like X-Case. They are a UK firm so I feel like I’m doing my bit for the UK economy and their products are good. For my 2U Hyper-V servers, I have decided on the X-Case RM 208 Pro (http://www.xcase.co.uk/rackmount-cases/2u-rackmount-server-cases/x-case-rm-208-pro-8-hotswap-caddy-with-6gb-sata-sas-backplane-temperature-controlled-fans.html).

The RM 208 Pro is a 2U rack mount enclosure. It’s £194 for the case and £27 for the sliding rail kit for it. It supports 2U power supplies, Extended ATX motherboards, has 8 hot-swappable disk caddies on the front taking 3.5″ disks and the disks are connected via two SAS 6Gbps SFF-8087 Multilane connectors, common on RAID and HBA cards. The SAS backplane supports SGPIO which means I will get disk failure and early warning notification lights on the enclosure if my RAID or HBA cards support it. The internal fans are hot-swappable and are temperature controlled for speed via the motherboard pin headers.

For the storage server, I decided on the X-Case RM 316 Pro (http://www.xcase.co.uk/rackmount-cases/3u-rackmount-server-cases/x-case-rm-316-pro-16-x-6gb-hotswap-caddy-mini-sas-backplane-120mm-temperature-controlled-fans.html). This enclosure looks and feels the same as the RM 208 Pro except that, at 3U, it has support for 16 3.5″ disks spread over four SAS 6Gbps SFF-8087 Multilane connectors. Everything else about this enclosure matches the RM 208 Pro that I will use for the Hyper-V server. The RM 316 Pro is more expensive at £370 for the chassis and another £33 for the sliding rails, but the extra eight disk slots mean I won’t be limited on disk capacity.

Power Supply

For these servers, I want something fairly cheap yet reliable and from a known brand, as power is what makes the whole thing tick after all. X-Case resell Seasonic power supplies and, after much research into them, it transpires that they are actually the OEM manufacturer for a number of high-street brand power supplies, including the Corsair Builder Series supply in my current home server which has been running for over two years without a hiccup. The Seasonic SS-600 H2U 600 Watt power supply (http://www.xcase.co.uk/power-supply/2u-rackmount-power-supply-s/saesonic-ss-600h2u-2u-80-psu.html#sthash.WMRWR8NM.dpbs) is 80 Plus efficient and seems just the ticket. At only £100 it’s a good price too, considering the price of some ATX power supplies these days. I’ll be using this unit in both the storage and Hyper-V servers.

Processor

In this decision process, the processor comes before the motherboard because, after all, the motherboard is just a life support system for the processor. I knew I wanted a server processor, not a desktop processor. I knew I needed a processor which supported Intel Virtualization Technology (Intel VT) or AMD-V, which cut down the options to pick from, as not all CPUs, even new models released today, have Intel VT or AMD-V. I also wanted a CPU with a low TDP to keep power consumption down and heat (BTU) output down, reducing the cooling requirements and the noise of the fans.
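
As an aside, if you want to confirm that a processor you already have exposes the virtualization features (and that they are switched on in the BIOS), Windows 8 and Server 2012 onwards report it through WMI; a quick sketch:

# Check virtualization and SLAT support on the local machine
Get-CimInstance Win32_Processor | Select-Object Name, VirtualizationFirmwareEnabled, SecondLevelAddressTranslationExtensions, VMMonitorModeExtensions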

Server processors are highly expensive new so I also knew that this was going to be a used part. Intel processors are generally more readily available in used form but I didn’t want to omit AMD from the race as their Opteron processors have really high core counts which is a great thing for a virtualization host. I also wanted to make sure that I used at least the same family of CPU between the storage and hypervisor servers so that I was using consistent parts to keep the builds consistent and simple for me to support.

After weighing up all of the options back and forth, I settled on the Intel Xeon L5630 processor (http://ark.intel.com/products/47927/Intel-Xeon-Processor-L5630-12M-Cache-2_13-GHz-5_86-GTs-Intel-QPI) and got them for £25 per processor on eBay. The L5630 is a quad-core CPU with a TDP of 40W, which is really low for a server processor. The CPU launched in 2010, which means it’s not that old even if the units I have were first off the line. The L5630 has a clock speed of 2.13GHz and 12MB of L3 cache. With quad core and Hyper-Threading, the hosts will see eight logical cores available, and with Turbo Boost support, the CPU can boost up to 2.4GHz. As I said previously, Intel Hyper-Threading and Turbo Boost are supported, as are Intel VT-x, Intel VT-d, SpeedStep and the latest AES encryption instructions, which makes this CPU very feature rich for its age.

Memory support is tri-channel DDR3 up to 288GB per processor, and it can be used in a dual processor configuration thanks to its dual QuickPath Interconnect (QPI) links. DDR3 support is useful because higher capacity DIMMs such as 8GB or 16GB are rare in DDR2, and DDR2 is becoming harder and more expensive to buy as stocks dwindle, while DDR3 is readily available in all sorts of shapes, sizes and flavours on eBay.

Motherboard

With my processor decided upon, I now needed to select a motherboard to suit. The Intel Xeon L5630 uses the LGA 1366 socket with motherboards running the Intel 5500 series Tylersburg chipset. My first port of call was Supermicro as they make amazing products and frequently OEM their parts to other vendors, which shows a lot of faith in them. This, coupled with the fact that their parts range is wider than the Grand Canyon, meant I was sure to find what I wanted.

The requirements for the motherboard, in line with my goals, meant that I wanted something which gives me plenty of options for future expansion. I also want accessibility, which means I don’t want to be running to the server with a keyboard and monitor in hand to troubleshoot a boot issue, so iLO, DRAC or IPMI is very important for me. The more feature rich the motherboard, the less I potentially need to spend on expansion cards, so that was also a factor.

Selecting the motherboard took the longest amount of time due to the options available but eventually I selected the Supermicro X8DTH-6F motherboard. I was able to find this for £250 including shipping and import taxes, new from a seller in the USA on eBay.

The X8DTH-6F (http://www.supermicro.com/products/motherboard/QPI/5500/X8DTH-6F.cfm) has everything. It’s a dual socket Extended ATX motherboard taking the Intel 5500 and 5600 series processors, good for my L5630 Xeon. It can be run in uniprocessor mode with a second processor added later, which meets my expansion plans, allowing me to add a second CPU to the Hyper-V server for more processing power at a later stage. Six DDR3 DIMM slots per processor gives me a total of 12 DIMM slots, with six usable now, supporting 1333MHz DDR3 in anything from conventional desktop UDIMMs all the way up to ECC Registered and Buffered DIMMs. With the second CPU added later, this opens up the additional six DIMM slots for use as well.

On-board dual port gigabit Intel Ethernet and a dedicated IPMI port supporting remote media and KVM ticks another box. Being Extended ATX, the board has seven PCI Express slots giving me lots of options for expansion cards and the on-board Intel ICH10R and LSI SAS 2008 6Gbps SAS controllers handle all of my drive quandaries too, at least for the medium term.

£250 for the motherboard may seem a bit steep; however, I consider these factors. Having the two on-board gigabit Ethernet ports saves me about £40 on a used dual port Intel PCI Express network adapter from eBay to service the management traffic. The on-board LSI SAS controller saves me around £100 on a used LSI SAS host bus adapter card. Having both of these on-board means two fewer PCI Express cards installed, theoretically improving the airflow in the case and likely reducing power consumption too. IPMI can be added to any machine with a PCI Express slot; however, whether the add-in cards available online are as integrated and feature rich as a dedicated on-board implementation is questionable, and the cards I have seen online run for about £500 each, making the motherboard look positively cheap.

Memory

I want as much memory as possible in my Hyper-V servers. For the storage server I want a sensible amount but not to the extent of the Hyper-V servers. The more memory I have, the more I can give my virtual machines to help give them that production feel.

DDR3 support on the CPU and motherboard means I’m up to date with the current specification, although not for memory speed I should point out. I wanted to buy from the Supermicro validated memory support list, and I wanted ECC Registered DIMMs, as that’s what you use in servers for their error correction capabilities. The motherboard also only supports 16GB of memory per processor if you use UDIMM desktop-type memory, which is another reason to use ECC Registered DIMMs. I need to make sure I do everything possible to squeeze the maximum performance out of this lab, and that means populating memory in accordance with the tri-channel native operation mode of the CPU.

For the storage server, I decided on 12GB RAM by way of three 4GB DIMMs. For the Hyper-V server, I decided on 48GB as six 8GB DIMMs for the uniprocessor setup and if I add a second processor later, I can add an additional 48GB.

16GB DIMMs are available, but they are just way too expensive right now for me to consider. I managed to get the 4GB DIMMs for about £20 each and the 8GB DIMMs for £35 each. All of the DIMMs are Samsung low voltage DIMMs rated at 1333MHz. To translate, this means I am using PC3L-10600R designation DIMMs. These DIMMs will automatically clock down to the highest speed supported on the motherboard, and under-running the memory will help to keep the temperature of the DIMMs down during operation.

With a second processor installed later, this would give me 96GB of RAM in my Hyper-V host if I stay with 8GB DIMMs, and if I later upgrade to 16GB DIMMs, should their prices become sensible, I could have up to 192GB of memory.

Ancillaries

As always with a PC build, you need some odds and sods to finish it off.

The CPU needs a cooler, so I opted for the Supermicro SNK-P0037P passive cooler. The cooler is recommended for the motherboard and made by the motherboard manufacturer, Supermicro. It is rated for processors up to 90W TDP, which means it will more than easily handle my 40W L-series CPU, and having no fan on the CPU helps to keep down the power consumption and noise as it is one less moving part to power.

To connect the SAS Multilane connections on the motherboard to the enclosure backplane, I need some SFF-8087 cables. For the Hyper-V server, I will be installing only a pair of SSDs to run the host Windows Server 2012 R2 operating system. To protect against a SAS channel or cable failure, I will be installing both SFF-8087 multilane cables with a single SSD per channel.

For the storage server, I am going to install both channel cables allowing me to run 8 disks. I will operate like this initially and once I need more than 8 disks to increase the performance, I will buy an 8 Port LSI PCI Express SAS HBA to run the other two channels, buying two more cables. Genuine LSI SFF-8087 to SFF-8087 cables with Sideband support for the SGPIO disk information pass-through are £10 each, new on eBay.

The enclosures have 3.5″ drive bays to allow me to use big capacity SATA disk but as I will be using a combination of SSD and SATA, I need a way to mount the 2.5″ SSD disks. For about £10 each on eBay, you can pick up the HP 654540-001. This is a 2.5″ to 3.5″ disk carrier specifically designed for hot swap enclosures. You mount the disk into the carrier and it translates the power and data port positions to match that of a 3.5″ disk. It uses no intermediary disk controller so the disk will be seen exactly for what it is by the controller and the operating system and there is no performance penalty either.

Deploying Server Core 2008 R2 for Hyper-V: Network Teaming

In our deployment, we are using servers with Intel network adapters, so the first thing is to install the manufacturer driver package because this enables the ANS (Advanced Network Services) functionality such as Teaming.

The new version of the Intel driver for Server 2008 R2 includes a command line utility for managing networks in Server Core known as ProsetCL, which operates with a syntax not too dissimilar from PowerShell.

The commands from ProsetCL I will be using in this post are:

  • ProSetCL Adapter_Enumerate
  • ProSetCL Team_Create

The full Intel documentation for ProsetCL can be found at http://download.intel.com/support/network/sb/prosetcl1.txt.

With all of the adapters nicely named from the previous post Deploying Server Core 2008 R2 for Hyper-V: Network Naming, this part is actually pretty easy.

The first step is to run an export of the current network adapters to a text file with ipconfig /all > C:\Adapters.txt. Once you have this, open the file with Notepad.exe C:\Adapters.txt.

With the text file open, in the command line window, navigate to the directory C:\Program Files\Intel\DMIX\CL, which is where the ProsetCL utility is installed. You could register the directory into the PATH environment variable if it makes your life easier, but I didn’t do this personally.

Execute the command ProsetCL Adapter_Enumerate. This will output a list of the network adapters on the server into the command line. Sadly, the Intel utility and Windows order the network adapters differently which is why the text file is needed to marry the two up.

Once you have figured out which adapters need to be teamed together to form your various Client Access, Management, Heartbeat, CSV and Live Migration networks, you are ready to proceed.

You need to know at this point what type of teams you want to create also. The Intel adapters and utility support the following team types:

  • Switch Fault Tolerance (ProsetCL shorthand: SFT): Two adapters are connected to independent switches, with only one adapter active at any one time. In the event of a switch failure, the standby link becomes active, allowing communication to continue.
  • Static Link Aggregation (SLA): Two or more adapters are teamed in an always-active manner. This mode allows you to achieve a theoretical speed equal to the sum of the speeds of all the adapters in the team. To be used when LACP (Link Aggregation Control Protocol) is not available on your switch infrastructure.
  • LACP, Link Aggregation Control Protocol (802.3AD): Similar to Static Link Aggregation; however, the network adapter and the switch to which it is connected negotiate the aggregation using the LACP protocol.
  • Adapter Load Balancing (ALB): Two or more adapters are teamed together, whereby the utility forces traffic to be routed out of each port in turn, equally sharing the load across the ports.
  • Adapter Fault Tolerance (AFT): Two or more ports are connected in a team whereby the ports may have differing connection speeds (e.g. 1Gbps for the primary active adapter and 100Mbps for the failover adapter).

For a more detailed explanation of each teaming mode, refer to the Intel ANS page at http://www.intel.com/support/network/sb/cs-009747.htm.

Now that you know which adapters are to be teamed together and which teaming mode you want to use for each, it is time to create the teams.

Enter the command as follows:

ProsetCL Team_Create 1,2 MAN_Team SFT

In this example, a team will be created using ports number one and two (the numbers as referenced by the previous Adapter_Enumerate command) with a team name of MAN_Team for the Management network using the Switch Fault Tolerance mode.

Following the command, you should receive a prompt that the team was successfully created. A new network adapter, named (sadly) just Local Area Connection, will now be present if you execute the ipconfig /all command.

Assuming you renamed all your adapters from their default names using the previous post, the team adapter will always be called Local Area Connection with no trailing numbers. If you run the netsh interface set interface command after creating each team, it makes it much easier to name the teams as you go rather than doing them in a batch at the end.
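
For example, to rename the team adapter straight after creating the MAN_Team team above:

netsh interface set interface name="Local Area Connection" newname="MAN_Team"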

In the next post, I will describe configuring the network binding order to ensure the correct cluster communication occurs out of the correct adapter.

Circumventing Intel’s Discontinued Driver Support for Intel PRO 1000/MT Network Adapters in Server 2008 R2

In a previous life, my Dell PowerEdge SC1425 home server had an on-board Intel PRO 1000/MT Dual Port adapter, which introduced me to the world of adapter teaming. At the time I used the adapters in Adapter Fault Tolerance mode because it was the simplest to configure and gave me redundancy in the event that a cable, server port or switch port failed.

In my current home server, I have been running since its conception with the on-board adapter, a Realtek Gigabit adapter. It worked, but it kept dropping packets and causing the orange light of death on my Catalyst 2950 switch.

Not being happy with its performance, I decided to invest £20 in a used PCI-X version of the Intel PRO 1000/MT Dual Port adapter for the server. Although it’s a PCI-X card, it is compatible with plain PCI interfaces too, which means it plays nice with my ASUS AMD E-350 motherboard. What I didn’t realise is that Intel doesn’t play nice with Server 2008 R2 and Windows 7.

When trying to download the drivers for it from the Intel site, after selecting either Server 2008 R2 or Windows 7 64-bit, you get a message that they don’t support this operating system for this version of network card. I can kind of understand that due to the age of this family of cards, but it posed me an issue. Windows Server 2008 R2 running on the home server detected the NICs and automatically installed Microsoft drivers, but that left me without the Advanced Network Services features needed to enable the team.

I set off by downloading the Vista 64-bit driver for the adapter and extracting the contents of the package using WinRAR. After extraction, I tried to install the driver and sure enough the MSI reported that no adapters were detected, presumably because of the differences in the driver models between the two OSes. After this defeat, I launched Device Manager and attempted to manually install the drivers using the Update Device Driver method. After specifying the extracted Intel directory as the source, sure enough, Windows installed the Intel versions of the drivers, digitally signed and without any complaints.

With the proper Intel driver installed, I was still left with one problem: the teaming. Inside the package was a folder called APPS with a sub-directory called PROSETDX. Anyone who has previously used Intel NIC drivers will know that PROSET is the name of the Intel management software, so I decided to look inside, and sure enough there is an MSI file called PROSETDX.msi. I launched it and, to my initial horror, it was the same installer that the autorun starts.

Not wanting to give up hope, I ran through the installer and completed the wizard, expecting it to again say that no adapters were found; however, it proceeded with the installation and soon enough completed.

This part may change for some of you: somewhere between version 8.0 and version 15.0 of the Intel PROSet driver, Intel made a bold move and shifted the configuration features from a standalone executable to an extension in the Device Manager tabs for the network card. I opened up the device properties and, to my surprise, all of the Intel Advanced Features were installed and available.


I promptly began to configure my team and it set up without any problems, creating the virtual adapter without any issues too, including installing the new driver for it and the new protocols on the existing network adapters.

With this new server, I decided to do things properly and configured the team using Static Link Aggregation. I initially tried IEEE 802.3ad Dynamic Link Aggregation, however the server was bouncing up and down like a yo-yo, so I set it back to Static. In the information for the Static Link Aggregation mode is a note about Cisco switches:

This team type is supported on Cisco switches with channelling mode set to "ON", Intel switches capable of Link Aggregation, and other switches capable of static 802.3ad.

Following this advice, I switched back to my SSH prompt (which was already open after trying to get LACP working for the IEEE 802.3ad team). Two commands complete the config, one on each switch port: adding the port to the EtherChannel with the channel mode set to on (static, rather than negotiating with LACP or PAgP), as the note above advises:

interface GigabitEthernet0/1
description Windows Home Server Team Primary
switchport mode access
speed 1000
duplex full
channel-group 1 mode on
spanning-tree portfast
spanning-tree bpduguard enable
!
interface GigabitEthernet0/2
description Windows Home Server Team Secondary
switchport mode access
speed 1000
duplex full
channel-group 1 mode on
spanning-tree portfast
spanning-tree bpduguard enable
!

The finishing touch is to check the Link Status and Speed in the Network Connection Properties: a displayed speed of 2.0Gbps for the two bonded 1.0Gbps interfaces. Thank you, Intel.


I’m Not as Green as My Name Suggests

With my name being Richard Green, one could go some way to try and associate me with environmental tree-friendliness. Contrary to that, I am actually extremely energy inefficient, and my biggest energy crux is my current Windows Home Server machine.

Running on a Dell PowerEdge SC1425 with two 2.8GHz dual-core Intel Xeon processors and 6GB of DDR2, this thing is total overkill for Windows Home Server and isn’t actually very good at its job either. Granted, it’s got dual Gigabit Ethernet for teamed and reliable network connectivity and SATA-II drives for high speed data movement, but at the same time it’s in a 1U chassis, which means it only supports a maximum of two drives, and it’s got a 450W power supply which, when faced with two Xeon processors each designed for 90W power consumption, makes for an eye-watering electricity consumption report.

I did try to enhance the usage profile of the machine by using an add-in for Windows Home Server called LightsOut; however, the great feature of this software, which is to sleep and wake the server at pre-defined times during the day, remained useless on the PowerEdge. Being a server machine, its power supply doesn’t support the S3 power state, which means it doesn’t support sleep, only shutdown and restart, so the server stays on 24×7.

Granted, I could manually shut down the server each night and power it back up again during the day when needed, but that’s not the design of a server. It’s designed to be accessible when you need it. My view on energy efficiency and environmental impact fits this mantra too: I’m quite happy to spend a little money on energy efficient products if it will benefit me and if my way of life isn’t impacted as a result. Powering down the server manually has an impact because it’s an additional action for me to complete, it means the server is potentially unavailable during start-up periods when I want it, and it generally makes the appliance less useful.

I’ve been looking around at what other people have done with Windows Home Server machines and have seen a growing trend in Atom-powered machines with low power consumption, designed for always-on availability. My issue is that I have a 19” server rack in which all of my kit is mounted, so the device needs to conform to that form factor, which basically rules out all of the pre-built systems from people like HP and Asus. So I’m being hurtled back into the world I escaped a few years ago: self-build.

The criteria for the project are quite tight:

  1. 19” Rack Mount Chassis – 1U, 2U, 3U or 4U is not really important.
  2. Support for at Least 4 SATA-II drives.
  3. Ideally support for a regular ATX PSU to reduce cost and improve efficiency over a server PSU.
  4. As near to silent operation as possible.
  5. Low power consumption.

After trawling the internet for quite some time on the subject now, I believe I have produced the ultimate solution using the following:

  • X-Case RM400/10 4U Rack Mountable Case
  • ASUS AT3IONT-I Intel Atom 330 and nVidia ION Motherboard
  • StarTech 4-Port PCI Express SATA-II Controller
  • Corsair Value Select Memory
  • Corsair CX400W Power Supply
  • Western Digital 1TB SATA-II Green Hard Disks


The case from X-Case at http://www.xcase.co.uk/product-p/case-x-case-400-fslash-10.htm?CartID=1 is the building block for this system. It allows me the flexibility to use my existing rack at home, while the 4U chassis gives enough room for 10x 3.5” hard disks and 1x 5.25” optical drive, although my machine will not have one installed as Windows Home Server can be installed via USB.


The ASUS Intel Atom and nVidia ION motherboard trick box from Novatech (http://www.novatech.co.uk/novatech/prods/components/motherboards/miniitxmotherboards/90-MIBCT0-G0EAY0GZ.html) gives me a dual-core 1.6GHz processor which under full load only draws 8W of power and does not require active cooling, using only a passive heat sink, while the mini-ITX form factor of the motherboard keeps the remaining power draw to a minimum.


The motherboard hosts 4 SATA-II ports, so to get closer to the 10-drive support of the case, I will add a StarTech 4-Port PCI Express SATA-II Controller. The StarTech card was chosen because it appears to be the only card to combine SATA-II with a PCI Express interface, as many of the other cards, such as those powered by the Silicon Image 3114 controller, are PCI based. The StarTech card can be seen at http://www.leaf-computer.de/raid-controller-4-port-sata-ii-pcie-x1.html and can be purchased from Leaf Computers via Amazon Marketplace.


The Corsair CX400W power supply from Overclockers UK at http://www.novatech.co.uk/novatech/prods/components/powersupplies/corsair/cmpsu-400cxuk.html is of good efficiency and is also near silent, with a slow-rotating 120mm fan to keep the air moving. This supply also has six SATA connectors for the hard drive power needs and four Molex connectors which can easily be converted to SATA once the need arises.


The Western Digital hard disks are of the Green variety. The demands of a Windows Home Server are not high speed disk access, unlike a RAID10 SQL Server. The needs are for high volumes of always available storage. The Green drives give SATA-II high speed access while providing a low thermal output because of the adaptive rotation speed controls and also the low power consumption.

Although only speculation based on figures collected from sources around the Internet, I believe that a Windows Home Server of this specification would consume a mere 32 watts at idle and 38 watts at full load when using two 1TB Green drives. The drives consume about 6 watts each, so simply add this amount for each drive added. The other advantage is that by using a standard ATX power supply with a 12V 4-pin connector to power the motherboard, I will have support for the S3 power state, allowing the server to be put to sleep overnight. This will allow me to reduce the operational hours from 24×7 to 17×7 in my example.

Using an online power calculator, we can see that a server of this specification will consume only around 16 kWh (kilowatt-hours) per month. I have an in-line power meter currently connected to my personal computer which I will be attaching to the Home Server in the next day or so, and then I will be able to see the real-world draw of the current PowerEdge SC1425 to compare the two and see the potential savings.
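
For what it’s worth, the arithmetic behind that estimate is simple enough. A rough sketch, assuming an average draw close to the 32 watt idle figure and the reduced 17-hour day:

# Rough monthly consumption: ~32W average draw for 17 hours a day over 30 days
$watts = 32
$hoursPerDay = 17
($watts / 1000) * $hoursPerDay * 30   # approximately 16.3 kWh per month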

I will create a new post to show the comparison once the data is available.

Is Atom Atomic or a White Dwarf?

Atom is the latest breakthrough to be announced by Intel, and it sounds very promising if you’re in the market for small and portable.

Atom is a totally new microprocessor design from Intel aimed at the mobile and ultra-portable device market, such as PDAs, handhelds and ultra-portable notebooks. Unlike most of the processors from Intel of late, this one has been designed from the ground up as a new technology, and is not an alteration of a previous-generation design.
