Desktop PCs vs. Servers

Besides servers costing 2 to 10 times more, why not use a desktop PC as a server?

You CAN use a desktop PC as a server if you don’t mind losing your corporate IT job when it fails. It may work fine in a home environment, but in a corporate environment, where reliability is important and dozens or hundreds of users are hitting the server at any one time, you won’t get away with it for long. Major hardware differences include RAID arrays as essential equipment for corporate servers, hot-swappable components, much larger cases, and superior cooling, not to mention multiple CPUs in many corporate Windows-compatible servers. I agree that they do seem expensive, but there are reasons why they are completely necessary in a high-stress environment where downtime is simply not allowed.

Shagnasty already covered many of the critical points - no server today is acceptable without hot-swappable power supplies and disk drives (and even PCI cards, though hardly anyone uses that feature). You also have to include in the price the incredible amount of testing that goes into “servers”. Sure, the basic compatibility should be there since the components are so similar, but when you buy a server you are also buying a lot of guarantees: that it will work with certain OSes, applications, and add-in components, and that it will be serviced in short order if it malfunctions (lucky bastid!).

The server is also tested with many external storage and backup solutions (software and hardware), and is tested for scalability with a host of apps. In short, you are buying much more than just a piece of hardware, and even the hardware itself has quite a bit more to it than the desktop variety.

All that being said, I know many people who run their Linux/Apache web servers on 386 PCs.

shagnasty and tradesilicon have already covered most of it. I will add, however, that servers often have a variety of built-in monitoring tools, in both hardware and software form. Basically, the hardware in a server is generally ‘smarter’ than the hardware in a typical home PC: it can gather data about its own operation and provide that data to software that is looking for it (or it can drive indicator lights on the chassis, or both).

You usually don’t get that level of smarts in a PC, as it costs money and home users usually don’t care enough to spend the extra $$$.

[sub]In this case I am thinking specifically of Compaq Insight Manager but other manufacturers provide their own proprietary software.[/sub]
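
For the curious, here is a minimal sketch of what polling that kind of health data can look like, assuming a box with a BMC and the ipmitool utility installed (the parsing is naive, and the exact output format varies by vendor):

[code]
# A minimal sketch of polling server hardware health from the OS side,
# assuming the box has a BMC and the ipmitool utility is installed.
# The parsing is naive; the exact output format varies by vendor.
import subprocess

def read_sensors():
    """Run `ipmitool sdr` and yield (sensor, reading, status) tuples."""
    out = subprocess.run(
        ["ipmitool", "sdr"], capture_output=True, text=True, check=True
    ).stdout
    for line in out.splitlines():
        parts = [p.strip() for p in line.split("|")]
        if len(parts) == 3:
            yield tuple(parts)

if __name__ == "__main__":
    for name, reading, status in read_sensors():
        if status != "ok":   # the BMC marks degraded sensors itself
            print(f"ALERT: {name} reads {reading} (status: {status})")
[/code]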

Chubbs,

It is Shagnasty here again. I am an IT developer at a $1,000,000,000+ a year company. To put it simply, there are very different requirements when buying a home PC and buying a server for enterprise use. Corporations today are at the mercy of their information systems. Being available all of the time is not just nice; it is necessary for survival. When one of our servers goes down it costs approximately $675 per hour (9 employees and consultants x $75 per hour) in direct costs. My company owns approximately 18 servers. If one is down for even an hour, it can cost thousands of dollars in lost revenue plus the cost of paying the idle employees. It simply does not pay (to say the least) to be cheap when buying a server. The difference between a $1,000 discount PC and a $20,000 server is simply trivial to a large corporation when the business depends on it for its very operation.
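
To put numbers on that, here’s a quick back-of-the-envelope sketch. The annual downtime figures are illustrative assumptions, not measurements from our shop:

[code]
# Back-of-the-envelope downtime cost, using the figures from the post above.
# The annual-downtime numbers are illustrative assumptions.
idle_staff = 9
rate_per_hour = 75
direct_cost_per_hour = idle_staff * rate_per_hour   # = $675/hour, as above

pc_price, server_price = 1_000, 20_000
pc_downtime_hours = 40      # assumed downtime per year for a desktop "server"
server_downtime_hours = 4   # assumed downtime per year for real server kit

extra_cost = (pc_downtime_hours - server_downtime_hours) * direct_cost_per_hour
print(f"Direct downtime cost: ${direct_cost_per_hour}/hour")
print(f"Extra annual downtime cost of the cheap box: ${extra_cost:,}")
print(f"Hardware premium for the server: ${server_price - pc_price:,}")
# Under these assumptions the premium pays for itself within a year,
# before even counting lost revenue.
[/code]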

Shagnasty, you should have told that to my first university. The server would go out for days and weeks at a time, since they were using a desktop PC as the server after they had their £££££ one stolen and neglected to replace it.

What everyone has said so far is correct, but I think that perhaps it overstates the case for servers a little.

I work for a large international company which is very reliant on IT infrastructure, and we do use desktop PCs as servers, in some cases.

We have literally hundreds of servers, and for various reasons (cost, availability, etc.) many of them (maybe 40-50%) are desktop PCs.

PCs can be fine if they are used where appropriate. Sometimes servers are overkill.

You can start out with a desktop workstation and beef up the elements that you need to be of server quality. The needs of a file server are not necessarily the same as those of a print server, a web server, or a server running a specific service such as FileMaker Server or Citrix Terminal Server.

I’ll speak of FileMaker Server, because it’s what I know best. It doesn’t need a blisteringly fast CPU, nor does it need an ocean of RAM. It does, however, need a blazingly fast (and reliable) hard drive and an uncompromisingly kick-ass NIC. You could start off with a workstation, put in an Ultra SCSI hard drive, ditch the ATA drive, hook its 100BaseT NIC to its own node, rip out all the unnecessary graphics cards (give them to appreciative folks down the hall), install the Fm-compatible server-quality OS of your choice (and/or one appropriate for the hardware), and you’ve got a nice FileMaker Server box. Not the tool for the job if you want to run Terminal Server, though.

In most cases I’d rather have an accentuated & improved-upon workstation that can be dedicated to the one task than an expensive server that the IT department tries to use for everything. (This is particularly true for FmServer, which really, really wants the whole box to itself. Outlook can also be a resource hog if you have a lot of users, and should have its own box. But you can usually use your print server as your FTP server and file server and get away with it.)

For small workgroup servers, or for non-essential tasks, it’s quite common to use a standard PC as a server.

For anything bigger than that, or of a mission-critical nature, it would be very ill-advised, for the various reliability reasons stated above.

Sorry MrWhy - I think this is a mistake.

First, servers sell for as little as $800 and still include some of the features that make them servers. Monitoring tools such as the ones Shag has described are precisely what you want if you have many servers scattered all about, and they make life much easier when you can diagnose problems remotely. In fact, this is a major reason for IT organizations to invest the extra money - they don’t want to have an engineer in each location (much more expensive than a well-managed remote server).

Second, if 40-50% of your server infrastructure is PCs, you are not likely to have very good uptime. (I’m speculating, of course, but I do have some experience with the matter.) Servers, even at the lower price ranges, offer the kind of recovery features that PCs don’t have, unless you spend a bunch of time and money beefing them up, in which case you would have been better off with a server in the first place.
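
To illustrate the redundancy point with some rough math (treating failures as independent, and assuming a 99.9% single-PSU availability purely for illustration):

[code]
# Rough sketch of why redundant, hot-swappable parts matter.
# The 99.9% single-PSU availability figure is an assumption, not vendor data.
HOURS_PER_YEAR = 24 * 365

psu_availability = 0.999

# A desktop has one PSU: the machine is down whenever it is.
single = psu_availability

# A server with two hot-swap PSUs stays up unless BOTH fail at once
# (treating the failures as independent).
redundant = 1 - (1 - psu_availability) ** 2

for label, a in [("single PSU", single), ("redundant PSUs", redundant)]:
    downtime = (1 - a) * HOURS_PER_YEAR
    print(f"{label}: {a:.6f} available, ~{downtime:.2f} hours/year down")
[/code]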

Again, I know quite a few people (and companies) do this, but the arguments for it are weak.

[Moderator Hat ON]

I think this is better suited to General Questions (unless servers v. PCs is some rancorous opinion-ridden debate I am unaware of, a la Mac v. PC). Moving to GQ.

[Moderator Hat OFF]

Wait, wait…

Hot-swappable PSUs?

How do they work?

I understand hot-swapped drives, since you’re probably running a few of those in a RAID config, but PSUs?

Are there multiple PSUs in servers?