Posts from January 2013

Breaking the Duck

It’s been over 18 months since I last sat an IT Pro exam of some description, and frankly that is far, far too long. I should really have taken my TOGAF 9 exam last year at a minimum, as the Architecting the Enterprise course I attended in London in May included vouchers for the combined TOGAF exam, but it just never happened.

Today though, I finally broke the duck on my exam sitting: I took my VMware Certified Professional 5 Datacenter Virtualization (VCP5-DV) exam and passed it. The maximum score for the exam is 500 and the minimum passing score is 300. I scored 380, which works out at 76%. I wasn’t thrilled with the result, but I was happy to pass it first time round.

I got lots of questions on VMware FT, which is probably my weakest area of the product after I spent a lot of time researching iSCSI and NFS to square up my existing Fibre Channel knowledge and cover all the storage topics. Although I’ve now passed the exam, I’m going to continue my research to brush up further on Fault Tolerance.

Next up? Well, my Cisco CCENT qualification expires in April this year, so I’ve got three months to pass my ICND2 exam to gain my CCNA, or I lose the earlier CCENT and have to sit both exams again. Luckily, my networking knowledge has grown a lot since I first sat ICND2 and failed it about two and a half years ago, so I’m confident that with some new research and study into serial connections, IPv6 and a few other bits, I will be able to pass that exam.

Onwards and upwards…

Controlling Configuration Manager Clients Across WAN Links

At work this week, we encountered an issue when a package I created for Adobe Reader 10 went mandatory in Configuration Manager. We service retail stores connected via slow WAN links back to our head offices. When I first joined the company, every month when new Windows Updates were released into the wild, our network team would come down upon our team, fire wielding, whilst we pillaged the lines to stores.

Configuration Manager gives you the power to create BITS (Background Intelligent Transfer Service) policies to throttle the bandwidth consumed by SCCM client transfers for packages and patches. The problem with Configuration Manager, however, is that its policy is not granular, it is singular, which means that applying the 32KB/s policy we needed for the stores would also limit the speed of head office clients connected to 100 Megabit or 1 Gigabit high speed LAN connections.

Group Policy also gives you the ability to configure BITS throttling policies and, in actual fact, gives you more options to control the granularity. Add to that the fact that you can link Group Policies to OUs, and not just entire domains or sites, and it allows us to control the speeds in a more appropriate way.

In a Group Policy Editor window from the Group Policy Management Console (GPMC), navigate to Computer Configuration, Administrative Templates, Network, Background Intelligent Transfer Service (BITS). From here, enable the Limit the Maximum Network Bandwidth for BITS Background Transfers setting and configure the speeds and times as you need. You can also configure an out of hours policy, which we make use of: limiting the store clients to 32KB/s between 8am and 6pm daily, but allowing them to expand to 256KB/s overnight when the store is closed and is not making VoIP calls or trying to transact credit cards.
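
Under the hood, that policy just writes a handful of registry values on each client under the BITS policy key. Here is a minimal Python sketch of what I believe the equivalent direct change looks like; the value names and units (Kbps) are my reading of the policy, so verify them against a test client with the GPO applied before trusting them:

    import winreg

    # BITS policy key the GPO writes to on each client (run elevated)
    KEY = r"SOFTWARE\Policies\Microsoft\Windows\BITS"

    # Assumed value names for 'Limit the Maximum Network Bandwidth for
    # BITS Background Transfers' - verify against a test client first.
    # The GPO works in kilobits/sec: 32KB/s = 256Kbps, 256KB/s = 2048Kbps.
    values = {
        "EnableBITSMaxBandwidth": 1,         # switch the limit on
        "MaxTransferRateOnSchedule": 256,    # daytime limit in Kbps (8am-6pm)
        "MaxBandwidthValidFrom": 8,          # window start hour
        "MaxBandwidthValidTo": 18,           # window end hour
        "UseSystemMaximum": 0,               # hard limit out of hours too
        "MaxTransferRateOffSchedule": 2048,  # overnight limit in Kbps
    }

    with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY, 0,
                            winreg.KEY_SET_VALUE) as key:
        for name, data in values.items():
            winreg.SetValueEx(key, name, 0, winreg.REG_DWORD, data)

In practice you would let the GPO itself do the writing, of course; the sketch is just to show how little is actually behind the setting.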

This worked great and the next time we deployed patches we had no complaints from the stores; instead, we had a problem at the head office. The problem hadn’t manifested itself previously as we had had to delay patch deployments before the packages reached everyone, but the issue we experienced now was that, due to the length of time a Configuration Manager client stayed connected to the Distribution Point downloading packages, we were seeing prolonged connections to the IIS site on the Distribution Point, and lots of 64KB/s connections make lots of bandwidth. We were now actually consuming all of the bandwidth at the head office site, causing inter-site applications between our two head offices to crawl.

We found a solution to this problem in IIS. The solution probably isn’t recommended by Microsoft or any System Center consultants out in the wild, but it works for us and has caused no side-effects that we’ve witnessed in the year or so it has been in place, so it has withstood the test of time.

Using IIS Manager on the Distribution Point server, expand the Default Web Site. From the Actions pane on the right hand side of the interface, select Advanced Settings. From the Advanced Settings dialog, expand the Connection Limits group under the heading Behaviour. By default, IIS accepts a number of connections so large it may as well be infinite. We calculated that, based on the free capacity on our link at the head office and taking into account the traffic required for LoB applications, we could handle about 20Mbps of client traffic. We divided that capacity by the 64KB/s BITS setting, which gave us a figure of 320. Setting the IIS connection limit to 320 and restarting the site in IIS, we saw an instant reduction in Distribution Point server network activity and also the drop we needed on the site link.
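
For what it’s worth, the sizing sum is trivial; the only trap is keeping the capacity figure and the per-client BITS rate in the same units (we worked in kilobytes per second). A quick sketch:

    # Back-of-an-envelope sizing for the IIS connection limit.
    # Both figures must be in the same units - KB/s here. Mixing the
    # link's megabits with BITS's kilobytes will skew the result badly.
    spare_capacity_kb_s = 20 * 1024   # spare link headroom, ~20MB/s worth
    per_client_kb_s = 64              # BITS limit per head office client

    max_connections = spare_capacity_kb_s // per_client_kb_s
    print(max_connections)            # 320 - the connection limit we set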

As I mentioned above, this was done over a year ago. In this time, we’ve not had a single complaint from stores, head office users or our network team about network contention issues due to SCCM traffic, nor have we seen any apparent SCCM client issues, such as clients never properly reporting due to connection retry limits being reached. This isn’t to say that this fix is for everyone: you would need to factor in things like how many total clients you have in an SCCM site, whether a client at the back of the IIS queue would be waiting for a connection so long that it would time out, and whether you need to be able to rapidly deploy high threat updates regardless of the impact to other LoB applications.

Since implementing these changes, we’ve had two Microsoft SCCM RAP assessments and neither has produced red or amber health status problems due to the changes to BITS and IIS, so I think we found ourselves a winner.

Restoring Client Computer Backup Database in Windows Home Server 2011

Quite some time ago (probably about two months now), the primary drive in my Windows Home Server 2011 was giving me issues due to a new device driver I installed. Nothing got me going with ease: Last Known Good Configuration, Safe Mode, nothing. The problem lay in the fact that the server wouldn’t acknowledge that the OS disk was a boot partition, and after leaving it to attempt to repair the boot files by itself, which, for the record, I don’t think I’ve ever seen work, I took to it manually.

Launching the Recovery Console command prompt from the installation media, I tried the good old commands that have served me well in the past on Windows Vista and Windows 7 machines I’ve had to repair, bootrec and bootsect, but nothing worked, so I was left with only one option: to re-install the OS. I wasn’t concerned about losing personal data, as that is stored on a separate RAID volume, but I was concerned about my client backups, which were stored on the same volume.

Using a USB attached hard disk, I manually copied out the Client Computer Backups folder, then rebuilt the operating system. I don’t keep active backups of the Home Server operating system because the Windows Server Backup utility in Windows Server 2008 R2 isn’t that hot: it doesn’t support GPT partitions over 2TB, which obviously is an issue.

Once installed, Windows Home Server sets up the default shares and folders, including Client Computer Backups. The critical thing here is that no client can be allowed to start a backup to the server before you complete these steps. Once a client starts a backup to the server, it creates new databases and files on the server, ruining the chances of importing the existing structure.

From the new OS installation, open the directory where the Client Computer Backups live. The default location is C:\ServerFolders\Client Computer Backups, but I had moved mine to D:\ServerFolders\Client Computer Backups. Once you’ve found the directory, copy across all of the files you previously copied out of the burnt install of Windows, overwriting any files when prompted.
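
If you would rather script the copy than trudge through Explorer, a minimal Python sketch like the following would do it; the source path is hypothetical and just stands in for wherever the USB disk with your saved copy mounts:

    import shutil

    # Example paths - adjust to wherever your USB copy lives and to the
    # Client Computer Backups folder of the fresh install.
    source = r"E:\Client Computer Backups"
    destination = r"D:\ServerFolders\Client Computer Backups"

    # dirs_exist_ok=True (Python 3.8+) merges into the existing folder,
    # overwriting the freshly created files with your saved database.
    shutil.copytree(source, destination, dirs_exist_ok=True)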

Once completed, restart the server. This will restart all of the Windows Home Server services responsible for running the Dashboard and the Client Computer Backups. Once the restart has completed, open the Dashboard and select the Computers tab, where you normally view the computer health states and backups. On first inspection it looks as though you have no clients and no backups, but look more closely and you will see a collapsed group called Archived Computers. Expand this group and you will see all of your clients listed, and all of their associated backups will be listed if you select the Restore Files option for a computer.

The thing to point out here is that these backups will remain disassociated from the clients. Once you re-add a client to the server and commence a backup, it will be listed as a normal computer and the Archived Computer object for it will also remain listed. This is because the server generates GUIDs for the backup files based on a combination of the client identity and the server identity and because the reinstallation of the operating system will cause a new GUID to be generated, they are different. This isn’t a problem for me, but I’ve read a number of posts on the TechNet forums at Microsoft where people have had trouble locating the Archived Computers group in the Dashboard interface and think that they’ve lost everything which clearly isn’t the case.

The Things You Don’t Normally Hear

In a somewhat random post from me, I’m going to make a comment on my Sennheiser HD215 headphones.

I bought these recently to replace my failed Creative I-Trigue 2.1 speakers that I use at home on my desktop PC. More and more of late I have been turning to headphones over speakers, largely due to wanting to be able to listen to movies, YouTube or good old fashioned music at a sensible volume, and with my study being fairly close to the kids’ bedroom, the speakers weren’t the best option for volume in the evenings while the kids are asleep.

I’ve been using a pair of Sennheiser HD201 headphones at work in the office for around the last year. I like them for the £30 price tag and they are more than good enough for the office. Being in a shared office with moderate background noise, and given that I am at work and can’t rationally expect to pump out 100dB of music without disrupting others and possibly my own productivity, I’ve never really had the greatest of chances to explore them fully. That is coupled with the fact that I find them uncomfortable after more than about an hour of listening, although I rarely get the chance to listen for that length of time in a solid block, so it’s a non-issue: the pad and can size means that they sit on the ear, not around it, and the padding isn’t that thick, so the plastic construction of the cans slowly presses into your outer ear, giving you that warm ear discomfort sensation.

Giving the new HD215 cans a try at home this evening, I instantly felt the difference: due to the size of the cans, they sit around the ear, resting on your head instead and leaving the ear free to move. Listening to a range of tracks from Dance and Dubstep to Vocal and Acoustic, it’s amazing the tones and notes you detect with decent headphones at decent volumes that you otherwise just don’t. My case example is the album Radio 1 Established 1967, a 2 CD album of tracks taken from the Radio 1 Live Lounge to celebrate one of Radio 1’s anniversaries, and the songs on it sound completely different. I’ve listened to the album at work before on the HD201 headphones and I remember it sounding a decent amount better than how I had previously heard it, but that is because I had only ever previously used my Sennheiser CX-300 II in ear headphones; these HD215s seem to take it up another level.

Let’s be straight. I’m no music expert, nor am I an audiophile with an exceptional ear for quality in headphones or music or the ability to detect the difference between a 20,000Hz tone and a 22,000Hz tone; I just like music. I’m sure that someone with deeper pockets than me could easily counter this and say that their £500 super-duper headphones with their all singing, all dancing digitally optimized listening environment and equipment will sound factors greater than these, and perhaps they would be correct, but for £55, these sound incredible.

My only criticism of them is that they are supplied with a coiled lead and not a straight lead. I’m not a fan of coiled leads, as you just end up pulling against the cable trying to reach the length you want rather than the length it inherently wants to be, and I think that adds a level of unnecessary discomfort. Luckily, QED have the answer in the form of a Jack-to-Jack 3.5mm lead, available in 1, 2 or 3 metre lengths, that I can replace the lead with, as Sennheiser are at least nice enough to make this model of headphones with a totally detachable lead using standard 3.5mm jacks at either end.

Windows Server 2012 Essentials and the Failed Migration

Last week, I took a day out of the office as annual leave to migrate my home setup from Windows Home Server 2011 to Windows Server 2012 Essentials, taking in all of the blog posts I have written over the previous months about how I intend to use some of its new features.

Suffice to say, it wasn’t a success, but I have completed the lessons learnt exercise and I am now preparing for a second attempt.

The main protagonist in the failure was the recently acquired 3ware 9590SE-12ML multilane SAS/SATA RAID controller. After installing the card about a month ago to verify its functionality, I saw the message “3ware BIOS not initialized” and the 3ware site left me comforted in the fact that this was because I had no drives connected to it. When I connected my two new Intel 520 Series SSD drives to it to create a RAID1 mirror for my new OS drive, I still saw the same message even though the drives were detected okay. I installed the 3DM2 software in Windows Home Server 2011 and I was able to manage the card via the web interface (which is really nice, by the way), however after creating the volume unit, the controller began to initialize the disks and the system froze instantly. I left it a minute or two just in case, but no joy. A hard power off and restart then left the controller completely missing from the POST and startup, with even the BIOS not showing it as connected. After trying a few different things, I was able to intermittently get the card to be detected, but not without causing major stability issues, and it still wouldn’t properly initialize its BIOS during POST. A colleague lent me an Adaptec card for a day to test; this card was detected okay, allowed me to create a volume, and the volume was detected within Windows okay, so I had it down to a compatibility issue between the motherboard and the 3ware card.

I decided that the motherboard compatibility issue could be related to the fact that it is a Micro ATX motherboard with the AMD Brazos chipset and the AMD E-350 ultra-low power processor, and that the card could perhaps not be able to draw sufficient power from the PCI Express 16x (4x mode) slot, so I began looking at some other options. The processor has actually been one of the things I wish I had done differently of late. When the server was first built and put online it was great, but as I began to use the Home Server for more backend centric tasks, I began to notice the 1.4GHz Dual Core processor struggling, and some tasks would time out if their timing happened to collide with other simultaneous tasks.

With the Ivy Bridge 3rd Generation Intel Core family CPUs, Intel released a line of CPUs appended with the letter T. This family of CPUs is low power compared to their letter-less or K counterparts, with the Core i5-3470T being the most efficient, pipping even the Core i3 T variant to the peak TDP and performance titles. Compared to the 18W peak TDP of my AMD E-350 chip, the Intel Core i5-3470T consumes a peak TDP of 35W; however, in exchange it gives 2.9GHz Dual Core processing with Hyper-Threading, allowing Windows to see two additional virtual cores, and because it is an i5 chip and not the lower specification i3, it features Turbo Boost, which allows the CPU to boost up to 3.6GHz under high load. Using data from cpubenchmark.net, the AMD E-350 produces a score of 774, whilst the Intel Core i5-3470T produces a score of 4,640.
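
Doing the rough maths on those published figures shows just how lopsided the comparison is; a throwaway sketch:

    # Rough performance-per-watt comparison using the figures quoted
    # above (cpubenchmark.net scores and peak TDPs).
    e350_score, e350_tdp = 774, 18
    i5_score, i5_tdp = 4640, 35

    print(e350_score / e350_tdp)   # ~43 points per watt (AMD E-350)
    print(i5_score / i5_tdp)       # ~133 points per watt (Core i5-3470T)
    print(i5_score / e350_score)   # ~6x the raw benchmark score

Roughly three times the work per watt and six times the raw score, for less than double the peak power draw.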

Investing in Ivy Bridge is more expensive than investing in the 2nd Generation Sandy Bridge, which also offers some T branded chips for energy efficiency. However, the CPU benchmark for Sandy Bridge vs. Ivy Bridge speaks for itself, not to mention the fact that Ivy Bridge reduces the TDP by 7W; the extra few pounds between the chips is worth the money.

To support the Ivy Bridge Socket 1155 Core i5 processor, I was going to need a new motherboard. I like ASUS, as they are the market leader in motherboards in my view, and I decided upon the ASUS P8Z77-V LX board for several reasons. It’s a step up from the Micro ATX board I had previously been using, up to a standard ATX board.

The benefits of this are that it gives me four memory module slots in a dual channel configuration, whereas I previously had only two slots on a single channel. The slot count isn’t an issue, as I upgraded about six months ago from my originally purchased Corsair Value Select 2x2GB DIMMs to 2x4GB Corsair XMS3 DIMMs. The new DIMMs allowed me to make use of the higher DDR3 PC3-12800 1600MHz speeds, doubled my memory ceiling (due to running SQL Express on the backend for the MyMovies database, I was getting very close to 4GB daily) and gave me a theoretically more stable system, as the XMS3 memory is designed for overclocking and high performance cooling with its heat spreaders, so running it at a standard clock should make it super stable. The other benefit is the increased PCI Express slot count. The new board gives me 3x PCI, 2x PCIe x1 and 2x PCIe x16, one of which is a true 16x PCIe 3.0 slot and the other a PCIe 2.0 slot with 4x bandwidth.

The other reason for selecting it was the Z77 chipset. The Z77 affords me the widest range of slots and interfaces and is also the best bang for buck, having the best power consumption of all the full feature chipsets (ignoring the Q77 chipset, as although it adds Intel vPro, you lose a lot of slots through it).

All told, with the pair of new SSD drives for the OS mirror, the new Core i5 processor and the new ASUS motherboard, my overall power consumption will increase by what equates to £10-15 a year. When you consider the performance uplift I am going to see from this (the hint is: worlds apart), it’s £10-15 a year very well spent.

The T variant of the Ivy Bridge supports passive cooling, which aligns with my previous mantra of keeping it quiet, but I have come to the conclusion over the last year that this is unnecessary: I have a Cisco 2950T switch and a Cisco PIX firewall making way more noise than a server would, and it is all racked in my garage, out of earshot of the rest of the house for the one to two hours a month I may spend in there, so it’s just not worth the thermal thought process of trying to engineer it quiet and cool. I have also been getting concerned lately about the drive temperatures of the Western Digital Green drives stacked up inside the 4U case, so I’m switching to active cooling. I selected the Akasa AK-CCE-7101CP. It supports all nature of Intel sockets, including Socket 1155 for Ivy Bridge, and has variable fan speed and decibel output. It’s rated up to 95W TDP for the quad core i5 and i7 family chips, so running it on the 35W T variant of the i5, I’m hoping it will run at the quiet end of its spectrum, putting it at 11.7dB, which is silent to the passing ear anyway.

To assist with my drive cooling problem, and also an on-going concern about what I would do to deal with a drive failure or upgrade in a hurry (currently it’s: shut down the server, drag a keyboard, mouse and monitor to the rack from my study to access the console session, open the case and connect the new drive cables, etc.), I decided to invest in the X-Case 3-to-5 hot swap caddies. These caddies replace the internal cold swap drive bays, which require manual cabling and drive screwing, with an exterior access, hot swap caddy system. All the drives in a block of five are powered via two Molex connectors, reducing the number of power connectors I need from my modular PSU, and the five SATA data ports on the rear of the cage are pre-connected inside the case, allowing me to hot add and remove disks without powering down the server or even having to open the case. Each caddy also features a drive status and a drive access indicator so that if a drive fails I can readily tell which drive is the one in question, making fault resolution much easier. This is all the more important and useful with Windows Server 2012 Essentials. The cage also incorporates an 80mm fan which draws air out of the drive cage to keep the disk temperatures down.

To summarize then, I’m doing the following:

  1. Upgrading the ASUS AMD Brazos Motherboard to an ASUS P8Z77-V LX Motherboard
  2. Upgrading the AMD E-350 Dual Core 1.4GHz CPU (774 Score) to an Intel Core i5-3470T 2.9GHz Dual Core CPU (4,640 Score)
  3. Gaining an Extra Memory Channel for my Corsair XMS3 2x4GB DIMMs
  4. Adding X-Case Hot Swap Drive Caddies
  5. Gaining a Bit of Active Cooling

I’m still waiting for a few of the parts to arrive, but once they do, it’s going to feel like the Home Server is getting its 18 month birthday present in the form of several serious performance, ease of use and management upgrades. I’m really looking forward to it and, in a sad kind of way, I’m glad that the upgrade didn’t work out the first time, otherwise I wouldn’t have invested in these parts, which I know I’m not going to regret buying.

Once I’ve got everything installed, I’ll run another post to show the images of it, and I will hotlink to my old pictures to do a little before and after comparison; then it’ll be a hot trot into Windows Server 2012 Essentials, I hope.


TP-Link TL-WA801ND Wireless Access Point Review

In my continuing quest to upgrade our home network to 802.11n wireless and gigabit throughout, I purchased the TP-Link TL-WA801ND wireless access point.

My reason for selecting this device was threefold:

  1. Easily affordable and I could write off the price of it if it turned out to be a turkey.
  2. Single manufacturer of networking infrastructure in my home once all the upgrades are complete, making interoperability more likely.

The third reason requires a little more explanation. TP-Link sell two models of AP that I was interested in: the TL-WA801ND and the TL-WA901ND. Upon first inspection the difference is clear, in that the 901 has three antennas for greater wireless client antenna diversity; however, upon reviewing the specifications, you can see that the extra £9 for the 901 isn’t worth it. Both devices feature a 100Mbps LAN connection RJ-45 port. This means that even if your wireless device is connected using a 40MHz channel width at 300Mbps, the most the AP can push out onto the wired network is 100Mbps, so why should I be concerned about antenna diversity? I’m quite happy if the wireless speed drops to 130Mbps because I enforce a 20MHz channel width, as that is still faster than the wired interface. Had the 901 featured a gigabit Ethernet port, then the choice would obviously have been the 901. An oversight by TP-Link’s device design team in my opinion, but that’s just me of course.
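
To put numbers on that bottleneck argument, a trivial sketch (nominal link rates, ignoring wireless overhead):

    # The wired port caps effective throughput on both models, so the
    # extra antenna on the 901 buys nothing in practice.
    lan_port_mbps = 100      # RJ-45 port on both the 801 and the 901
    wifi_40mhz_mbps = 300    # 40MHz channel width link rate
    wifi_20mhz_mbps = 130    # 20MHz channel width link rate

    # Throughput is bounded by the slowest hop in the chain.
    print(min(wifi_40mhz_mbps, lan_port_mbps))  # 100 - the 300Mbps is wasted
    print(min(wifi_20mhz_mbps, lan_port_mbps))  # 100 - 130Mbps still saturates it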

The first thing I will say about this device is that I was sceptical. The access point, brand new and boxed from Dabs Online via eBay, was only £33. I personally couldn’t understand how someone could make a 300Mbps N rated access point for this price, so quite frankly I was expecting a Meccano set to arrive, minus the tools required: a DIY access point. Oh how wrong I was.

First impressions are that the device looks a bit cheap and plasticky and doesn’t look as solid and robust as some other products available, but I figure that for £33 it’s almost disposable. It’s supplied with a passive PoE (Power over Ethernet) adapter, allowing you to use the AP somewhere in your house without a nearby power socket, up to 30 metres away from the source of the power injection. This is a nice touch, as Cisco, for example, will charge you extra for a separate line item to include a power injector for PoE. The AP is wall mountable by means of two slot on, slot off screw positions on the underside, and the wireless antennas are the screw on type, allowing you to fit different antenna types such as uni-directional or outdoor if you require. The supplied antennas can be rotated and angled in any direction you like for optimal positioning if you wall or ceiling mount it.

Configuration is simple using the web interface and, once I had resolved my issue, performance is also good. Transferring a file from a 300Mbps wireless client to my Home Server was done at 10MB/s (Megabytes), effectively maxing out the 100Mbps LAN connection. Some of the features include support for multiple AP modes (AP, Client, Multi-SSID and WDS Bridge). I am using it in Multi-SSID mode, connected to a trunk port on the wired side, and it works great. There is also support to use the AP as a DHCP server, configure firewall rules up to Layer 4, and a built-in traffic analyser to allow you to monitor throughput and performance of the access point.

I did have one issue, which TP-Link support helped me to resolve, but other than that, the experience has been perfect. My issue was that when transferring files or streaming media content, it would drop the transfer speed to about 10 bytes/sec and would struggle to exceed 2MB/s. This turned out to be because the access point has a problem with LAN switch ports hard set to a specific speed and duplex configuration. My Cisco 2950, which it was connected to at the time, was set to 100/Full. Setting the switch port back to Auto/Auto caused the port to stop generating FCS input errors and allowed the AP to negotiate its own speed (100/Full as it happens, but never mind), and the performance instantly went through the roof.

Conclusion?

Great product for a great price. I may be looking to buy another in the future to extend my range/signal at the top level of my multi-storey town house.

TP-Link TL-SG3210 Switch Review

Following on from my post Good Enough for a Network Engineer, I thought I would take the time to review my TP-Link TL-SG3210 8 Port Gigabit switch that I purchased about three weeks ago.

The switch is actively in use in my home network, replacing my Cisco 2950T access layer switch and I have to say it’s fantastic with a few caveats.

The switch lives in my study as my access switch, serving my desktop PC and a pair of ports into the bedroom for the Sky box, Xbox and anything else that I may want networked in there. Additionally, it serves as the access for our Vonage VoIP phone gateway, as the internal phone wiring master socket is also in the study, which makes it easier to connect to downstream phones from here.

The first thing you notice about the TL-SG3210 is its size. For an eight port switch it’s pretty big, measuring just shy of 12 inches wide. It’s for this reason that TP-Link actually supply it with 19″ rack adapters for people who may wish to use it in a rack mount scenario. For your £80, you get an IEC C13 kettle plug type power input on the rear and, on the front, one RJ-45 console port along with eight 1000Mbps Gigabit RJ-45 ports and two SFP slots which should accept all industry standard GBIC modules. TP-Link sell their own range of GBIC modules, however one omission from their range is 1000Mbps RJ-45 GBICs, so you would have to try using Cisco, HP or another brand if you wanted to use the two SFPs as your trunk ports to upstream switches.

The second thing you notice is the volume. None, nada. The switch is totally silent, being passively cooled, which is fantastic for my study-cum-home office. My previous Cisco 2950T switch quotes 47dB on the Cisco product specification; add a decibel or two for dust and the age of the fans.

Start-up and restart of the switch takes about two to three seconds, which is really fast if ever you need to do it. Configuration is simple thanks to the web admin, although TP-Link provide console access and Telnet and SSH access too, via a Cisco-alike CLI. The commands in the CLI are fairly akin to Cisco syntax, with differences subtle enough to keep them out of patent infringement but close enough that, with the Tab key, most users who know Cisco IOS could tab their way through completing the commands.

The web interface is good and easy to navigate. My only problem with it was that configuring VLANs and assigning them to ports wasn’t as obvious as I would have liked. Creating port channel groups (LAGs) is easily achieved, although one item to note is that I like to hard set my LAG ports to the required interface speed, and changing a port from a standard port to a LAG port sets the port speed and duplex back to Auto, leaving you to force it back again.

My only real problem with the switch relates to firmware updating. After configuring a few bits and pieces on the switch, I noticed the option for a firmware update and checked the TP-Link website to find an update available. I downloaded and installed the update, only to lose access to the switch afterwards. It appears that updating the firmware causes the switch to reset to factory defaults, forcing me to re-configure my machine with a static IP in the 192.168.0.0/24 range to access it and configure it again.

Performance wise, I connected two machines, a desktop and a laptop, to the switch. One of the machines has an SSD, the other conventional SATA HDDs. I performed a file copy from the SSD machine to the HDD machine and the transfer speed was sustained at 74MB/s (Megabytes), which to me looks to be the limitation of the disk and disk subsystem and not the switch. With two machines SSD to SSD, it wouldn’t surprise me if I could max out the gigabit link at 100MB/s (Megabytes).

I haven’t fully explored all the features as they are beyond my needs, but some of them include DSCP and QoS configuration, port security, 802.1x authentication, Layer 2 to 4 firewall, switch clustering and more.

Conclusion?

For general home use, this switch is totally over the top and I would actually suggest the TL-SG1008D, which is an unmanaged 8 port gigabit switch without the SFP slots. For IT pros at home and power users, this switch is fantastic. For £80 you can’t beat the fact that you are getting (including the SFPs) ten ports of gigabit Ethernet without wasting any watts on noise and cooling. It supports so many features that it quite frankly makes Cisco and other high end brands look woefully overpriced and under specified; the Cisco 2960 Express, which is an analogous form factor and targets the same sort of market, is over £500 and only allows you to configure firewall policies up to Layer 2. Based on just these comments, I couldn’t recommend this switch highly enough.

For small businesses, on the other hand, I would not recommend this switch, on the basis that updating the firmware causes it to totally factory reset its configuration, which could leave the uneducated types stuck wondering what is wrong and why they have no access to any network resources. With that said, that only applies if you are using VLANs and your native VLAN isn’t the switch’s default VLAN of 1. If you aren’t using VLANs, or you are but your native VLAN for access devices is VLAN 1, then by all means, purchase away.