“That box” costs a bit more than tens of thousands of dollars, is not a home PC, and isn’t configured to do what I want it to do.
Ars Technica’s God Box is going to be the best you can get for a desktop PC. Every time I build a system I read all their various builds and pick and choose from their options.
Where do you draw the line? PC components go obsolete so fast that anything “top of the line” today will be surpassed tomorrow. There’s always another motherboard that can handle more CPUs and RAM, a faster GPU, larger and faster storage media. If it’s not out already, it will be soon, which will make all the work of putting ‘the best’ rig together moot. I put together a possible rig on Newegg for my mom back in February for $500, just the guts of a new machine. Those same parts are now worth $344.
More importantly, from the parts list it looks like they’re air cooling. What kind of low-rent ghetto system are they building?
Add a few hundred dollars for water-cooling (and maybe a couple hundred more to get someone to set up the water cooling rig for you). I mean, the lack of fan noise will at least make a noticeable difference to the user, unlike the last 50% of CPU speed or hard drive capacity or all the other hardware upgrades one could do.
I knew you were really asking about building a PC when you said “computer,” but that’s the box I would buy if I won Powerball.
So by “home computer” you mean “PC Gaming Rig.” Well, then the Ars Technica god box is about as good as it gets. There are a few places, like the video card upgrade and system memory, which could be tweaked, but at that point you’re just “winning more.” 96GB of system memory won’t matter much to a video game, which will be using the memory on the video card. It might even slow the system down, because it takes cycle time to manage that much memory. Then you get into things like multiple video cards, and technologies like ATI’s Crossfire and nVidia’s SLI work great for some games/graphics engines and are terrible for others. This isn’t an answerable question unless you buy into the marketing hype that “newest = best,” “more expensive = better,” and “more = better.”
It all starts with the chipset, and it’s the factor most people put the least thought into. If you’re not familiar with computer architecture, you may want to open that article and look at the diagram on the right, because it will help in understanding the rest of this post. In general, the slowest thing a system does is I/O. So when building a system I look for multiple I/O busses (in this day and age that means SATA and USB controllers), bus size for the graphics card (PCIe aperture size), southbridge speed/width (the channel between the CPU and everything except graphics and main RAM), and then northbridge speed (between the CPU and main RAM/graphics cards; it also takes a feed from the southbridge). Some apps do well with L2 cache, and a processor with a larger on-die cache will make a large difference for them. Some will benefit from a faster front-side bus (between the CPU and the northbridge). It really is too application-dependent to proclaim one particular configuration “the ultimate” for all applications. If you build a box without a thought to what app you’re going to run on it, and I look at the app and its bottlenecks and then build a box, I’ll outperform you at a lower price point 90%+ of the time.
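If you want to see the “the slowest thing a system does is I/O” point for yourself, here’s a rough sketch in plain Python that times a pure in-memory copy against writing and reading the same bytes on disk. The buffer size and scratch file name are arbitrary choices of mine, and the absolute numbers will depend entirely on your hardware; the gap between the two is the point.

```python
# Rough sketch: memory bandwidth vs. disk I/O for the same amount of data.
import os
import time

SIZE = 64 * 1024 * 1024           # 64 MB test buffer (arbitrary)
data = os.urandom(SIZE)

t0 = time.perf_counter()
in_memory_copy = bytearray(data)  # forces a real copy in RAM
t1 = time.perf_counter()

tmp_path = "io_test.bin"          # hypothetical scratch file in the working dir
with open(tmp_path, "wb") as f:
    f.write(data)
    f.flush()
    os.fsync(f.fileno())          # make sure it actually hits the disk
with open(tmp_path, "rb") as f:
    _ = f.read()
t2 = time.perf_counter()
os.remove(tmp_path)

print(f"memory copy:     {t1 - t0:.4f} s")
print(f"disk write+read: {t2 - t1:.4f} s")
```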
How do I know this? Well, it’s what I do for a living. I’m a software performance engineer, and my job is to configure servers so they optimally handle the load a particular app puts on them when used in a particular way. I can tell you now that there’s no way I’d just go out and buy the newest thing on the market. I’d go to my benchmark tests and find out how much memory the app was using as shared (system) memory, how much was graphics memory, how much was coming from I/O to disk, how much was coming in via other input channels, what the network usage profile was, look at CPU benchmarks and cache performance, etc. THEN I’d suggest a configuration. Even then we’re nowhere near done. That’s when I really earn my money, because every single hardware configuration has thousands of possible software configurations, and those typically have a larger impact on performance than the theoretical specs: kernel parameters, thread priorities, memory management, drivers, etc.
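To make that concrete, here’s a minimal sketch of the kind of profile I’m talking about, using Python and the third-party psutil package: sample one process’s CPU, memory, and disk I/O counters while it runs a representative workload. The PID, interval, and sample count here are placeholders, not from any real setup, and io_counters() isn’t available on every platform (e.g. macOS).

```python
import psutil

PID = 1234          # hypothetical: PID of the app under test
INTERVAL = 1.0      # seconds between samples
SAMPLES = 60

proc = psutil.Process(PID)
for _ in range(SAMPLES):
    cpu = proc.cpu_percent(interval=INTERVAL)   # % of one core over the interval
    mem = proc.memory_info().rss / 2**20        # resident memory, in MB
    io = proc.io_counters()                     # cumulative read/write bytes
    print(f"cpu={cpu:5.1f}%  rss={mem:7.1f} MB  "
          f"read={io.read_bytes}  written={io.write_bytes}")
```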
There is no “ultimate computer” for all purposes, even in the narrow range of home PCs spec’ed out for gaming.
Enjoy,
Steven
Which is why I put this in IMHO instead of General Questions. You are right about 96 gigs actually slowing the system down, but what about 16, or even 24? Any thoughts on the watercooling suggestion to allow a bit of overclocking?
I dunno. My water cooling system sprang a leak and coated the motherboard with some pink liquid. Since then, I have been a fan of keeping-it-simple fans.
I wouldn’t go above 16 (I have 12 in my rig). Going higher puts you into the realm of “We gave it a cursory test because the documentation says we support it, but we didn’t really develop the memory management software/firmware/algorithms for such large address spaces because, really, that’s ridiculous. So we just ran a simple program which fills the entire address space and then deletes it all one address block at a time and it didn’t crash. Now get those marketing assholes off my back.” The further you take yourself out of the mainstream the less likely anyone has given any thought towards optimizing for your system. The high end of the mainstream is probably the sweet spot. Two driver releases after product launch for a high end video card is where I like to be.
I’m a fan of water cooling because it gives better hotspot control and is quieter than air cooling, plus you don’t have the continual dust problems you have with air cooling sucking your nasty, dusty, pet-hairy air into your expensive components. That having been said, water cooling is not a do-it-yourself thing. Get a well-reviewed professional to set it up for you.
Overclocking is a waste of time. 95% of CPU cycles are null operations; they aren’t doing anything. Why would you shorten the service life of your components by ratcheting up the power levels and the heat they have to dissipate just to process null commands? Invest in a better chipset with a better bus architecture to get data into the processor so it isn’t spinning at idle all the time. People who crow about their benchmark performance after they overclock are missing the point. Benchmarks aren’t real applications. For most of the actual stuff they use their PC for, they’ve just moved from 95% null operations to 97% null operations by increasing the operations it runs per second. Screw that. Get more data to the processor with a faster or wider bus, show me your screenshots running an actual app where there aren’t blocked procedure calls (threads waiting on I/O or memory) or a high percentage of idle threads, and then I’ll be impressed.
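If you want a quick sanity check on whether your CPU is actually starved for work before reaching for the overclock, here’s a rough sketch using Python and the third-party psutil package: sample how much time the CPU spends idle (and, on Linux, waiting on I/O) while your real application is running. The interval and sample count are arbitrary, and the ‘iowait’ field only exists on Linux.

```python
import psutil

for _ in range(10):
    t = psutil.cpu_times_percent(interval=1.0)
    iowait = getattr(t, "iowait", 0.0)   # not reported on Windows/macOS
    print(f"idle={t.idle:5.1f}%  iowait={iowait:5.1f}%  "
          f"user={t.user:5.1f}%  system={t.system:5.1f}%")
```

If idle stays high while your app runs, a higher clock just makes the processor wait faster.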
Enjoy,
Steven
So you can improve on the “god box” by adding 4 gigs of memory and water cooling, but you would avoid the overclocking. Anything else on their list you might tweak just a bit?
I would swap out the Obsidian 800D case for the Thermaltake Armor+ VH600LBWS, which has air cooling and built-in water cooling.
Thank you for giving me a huge PC-builder hard-on.
Is there a realistically priced application that will tell you what your blocked procedure calls and idle-thread levels are?
On Windows? Not so many. I’m a big fan of Process Explorer, formerly from Sysinternals, now bought out by Microsoft. It has pretty much anything you would ever want to know about a system RIGHT NOW, but it’s not so great at logging or keeping trends for later analysis. That having been said, as a diagnostic tool it’s pretty much the best I’ve seen for Windows at a system level. Some of the debuggers for specific technologies are better in their specialty, but nothing has the broad applicability of Process Explorer. Plus it’s free and TINY, and I’m a fan of efficiently written software. These big bloated packages which install all these agents and end up on the graphs as the biggest memory/CPU pigs just piss me off. Sorry, a little professional frustration leaked in there.
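For the trend-keeping piece Process Explorer doesn’t really do, you can roll your own logger in a few lines. This is just a sketch with Python and the third-party psutil package; the output file name and sampling interval are arbitrary choices of mine, not from any particular tool.

```python
import csv
import time
import psutil

LOGFILE = "system_trend.csv"   # hypothetical output file
INTERVAL = 5.0                 # seconds between samples

with open(LOGFILE, "a", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "cpu_percent", "mem_percent"])
    while True:                # run for as long as you want the trend
        writer.writerow([time.time(),
                         psutil.cpu_percent(interval=INTERVAL),
                         psutil.virtual_memory().percent])
        f.flush()
```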
Enjoy,
Steven
Define “improve.” It’s likely that a few extra gigs of memory would just sit there idle 99% of the time, especially in gaming situations where it’s not swapping anything into system memory but keeping it all in graphics memory and cache. It might make big operations like defrags, deep virus scans, and full backups run faster (depending on what software you use for those operations), but for the most part an extra four gigs of memory in the “god box” would just sit unallocated while still drawing power, generating heat, and depreciating in monetary value. Is that an improvement? The GPS navigation unit in a new car may be great for long trips and the occasional campout, but it will be idle 90% of the time and will add to the amount your vehicle depreciates as soon as you drive it off the lot. Is that an improvement over a vehicle without a built-in GPS? Memory over a certain level is only marginally beneficial, and over a further level is simply detrimental.
Most of what I’d do with the “God Box” is actually “downgrade” it. I’d go with a memory setup which has been thoroughly benchmarked and video cards with updated, Windows-certified drivers. That’s not possible with the newest/shiniest stuff on the shelf. I’d build last year’s “God Box” instead of this one. Careful selection of a chipset/motherboard and CPU combo will give a system longevity. Make sure it has a good bus setup, and when that lovely top-of-the-line video card drops in price by 75% you can snatch up two and Crossfire those bad boys about two years into your system’s life and get another two years of ass-kicking gaming out of it.
Enjoy,
Steven
Since the topic of this thread isn’t “Is maxing out your computer good in the long run?” I think I’ll just move past this roadblock, geek flag flying high.