What's the hot shit in PC hardware these days?

Almost certainly.

It’s reasonable to deduce that the processor cores are waiting for something. You can easily test for cache misses by running the application on a single core.

So…I have new questions :slight_smile:

I probably won’t buy my new computer until April because winter means parking at the end of our very long driveway and it makes deliveries harder, but that just means I have time to inform myself better, right?

Question one: Would using a data transfer cable mirror my current hard drive onto a new one? I’d like to buy a computer with two hard drives, and it’d be nice to have an exact duplicate of my current hard drive, which would mean I could theoretically run programs from it without having to install everything all over again. If the transfer doesn’t work that way…I guess I’d get a new computer with a single hard drive and make this current one the secondary instead. I’d rather not, though, because I’d like to donate this computer (after restoring the HD to factory settings) to the library and not having a hard drive or operating system wouldn’t make that feasible.

Question two: I’d like to buy one of the Elite series computers from HP, and customize it to my liking when I order it, because while I could add things in myself later, it’s simply easier to have it come with that second hard drive from Q1, a second DVD drive, a better-than-cheapest graphics card, etc. Now, if I do have them make the changes, especially to the graphics card, is it reasonable to expect that they’ll include a power supply adequate to handle the graphics card? When I shopped for graphics cards a couple of years ago, not knowing (then) how to install a power supply limited my options, since some cards I was interested in demanded more than the computer’s 250 W power supply could deliver. They think of these things as they customize your computer…don’t they? If they do change it from the standard power supply, I don’t expect the manual to reflect that, so is there a way to tell what wattage the power supply is without opening the case?

Question three: are two graphics cards significantly better than one? I’ve heard that they need to be the same card (is this true?), but what sort of boost do you get from having multiple cards? Is it that two 512 MB cards act like a 1 GB card / two 1 GB cards like 2 GB, or something else?

Q3: Significantly better, yes. The benchmarks I’ve seen show a 40 to 100% increase in framerate, with a 100% increase being the exception rather than the rule; it’s usually around 50%. They do need to be the same card, and you need a motherboard that can SLI or Crossfire (aka XF aka CF). Overall, you might get the same performance out of 2x512 MB as you do from 1x1024 MB, but it’s unlikely.
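For the arithmetic-minded, those scaling figures translate directly into expected framerates. A minimal sketch, using the rough 40–100% range from above (the function name and numbers are illustrative, not benchmark data):

```python
def sli_framerate(single_card_fps, scaling=0.50):
    """Estimate dual-card framerate from a single-card baseline.

    scaling is the fractional uplift from the second card:
    roughly 0.40-1.00 per the benchmarks discussed above,
    with ~0.50 being typical.
    """
    return single_card_fps * (1.0 + scaling)

# A card that manages 60 fps alone lands around 90 fps in SLI
# at typical ~50% scaling, and 120 fps only in the best case.
print(sli_framerate(60))                # 90.0
print(sli_framerate(60, scaling=1.0))   # 120.0
```

In other words, at the typical ~50% scaling, two cards behave more like one-and-a-half cards than two.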

Q2: I just ordered a PC from NCIX.com and this is how I did it: You pick a basic design then you have the option of customizing it. They give you a smallish range of options (it would be very surprising if the options they gave you didn’t fit with the rest). You can also use advanced settings which lets you search for any part they have to customize it (though you no longer have much assurance that it’ll fit with the rest of the system unless you know your stuff).

You can choose the power supply you want. In the $1150 PC (taxes, shipping and everything included) I ordered, I chose a 750W power supply, which is more than enough for anything non-crazy. You can check out benchmarks of power consumption online and choose the PS accordingly.
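Back-of-the-envelope PSU sizing really is just adding up the component draws from those benchmarks and leaving headroom. A hypothetical sketch — the wattage figures below are made-up ballpark numbers, not from any datasheet:

```python
# Illustrative component draws in watts (invented ballpark figures --
# look up real power-consumption benchmarks for your actual parts).
components = {
    "cpu": 125,
    "gpu": 250,
    "motherboard_ram": 60,
    "drives_fans_etc": 50,
}

def recommended_psu(draws, headroom=0.4):
    """Sum peak draws and add headroom so the PSU runs in its
    efficient mid-load range instead of flat out at its limit."""
    total = sum(draws.values())
    return total, total * (1 + headroom)

peak, recommended = recommended_psu(components)
print(f"peak draw ~{peak} W, buy at least ~{recommended:.0f} W")
```

With these invented numbers the sketch lands in the same neighborhood as the 750 W unit mentioned above.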

Maybe it’s a crappy site and I’ll be cursing it but I liked that it gave me both guidance and customizability.

Checking wattage: You might keep your PC invoice, the basic info is usually written on there. Opening the case is not that big a deal unless you’re dealing with a laptop/notebook. You won’t break it just by opening it.

The situation that he is describing while online is temporary internet file caching. It’s downloading and saving the page to the hard drive before displaying it. I do not believe this is a behavior that can be sidestepped, as it is not part of the RAM caching; it is a software behavior of most browsers. There are ways that static elements of a page can be reloaded from the hard drive rather than re-pulled from the internet. This allows pages to load/reload quicker on systems with limited internet speed, as well as saving the site some bandwidth.
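That static-asset caching is easy to approximate: key the cache by URL, serve from disk when present, fetch and store otherwise. A toy sketch — hashing the URL for the filename is my own choice here, not how any particular browser actually organizes its cache:

```python
import hashlib
import os

CACHE_DIR = "browser_cache"

def cache_path(url):
    # Hash the URL so it maps to a safe, fixed-length filename.
    return os.path.join(CACHE_DIR, hashlib.sha256(url.encode()).hexdigest())

def fetch(url, download):
    """Return the asset for `url`, reading from the disk cache if it was
    fetched before; `download` is whatever actually hits the network."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    path = cache_path(url)
    if os.path.exists(path):
        with open(path, "rb") as f:   # cache hit: no network pull
            return f.read()
    data = download(url)              # cache miss: pull and save to disk
    with open(path, "wb") as f:
        f.write(data)
    return data
```

The second request for the same URL is served from disk, which is exactly the bandwidth-saving reload behavior described above.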

I don’t know what a data transfer cable is, but you could mirror a drive by using any of a number of hard drive imaging tools while they’re both hooked up. However, I would not recommend this - your Windows is configured for your current set of hardware. Suddenly switching everything while using the same copy of Windows can lead to problems. It’s generally best to look at assembling a new computer as a fresh start - get rid of some of the clutter and spyware and general bloat. Maybe finally get around to upgrading the OS (Windows 7 is pretty great if you aren’t already using that).
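For what it’s worth, what those imaging tools do at the block level is conceptually just a chunked raw copy. A sketch over ordinary files, purely to illustrate the mechanism — real imagers operate on raw devices, need elevated privileges, and must be pointed at the right drive with great care:

```python
def image_drive(src_path, dst_path, chunk=1024 * 1024):
    """Copy src to dst in fixed-size chunks, the way a disk imager
    copies blocks. Demonstrated on ordinary files; raw devices work
    the same way but require privileges and extreme caution."""
    copied = 0
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while True:
            block = src.read(chunk)
            if not block:          # end of source reached
                break
            dst.write(block)
            copied += len(block)
    return copied
```

The point stands, though: a bit-for-bit copy also clones the old hardware-specific Windows configuration, which is why a fresh install is usually the better plan.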

I have no idea about HP’s building practices, but doesn’t customizing your system allow you to choose what PSU to put in there?

This is never a good idea for normal people. You get better performance out of buying one better video card for the same/less price than two worse ones. The performance doesn’t double - it’s more like 30-60% depending on the game - so the cost/performance ratios drop way below just getting a better card.

The only time you want to consider SLI/CF is if you are building a very high-end rig and you want multiple units of the top graphics card available at the time, or if you buy one card now and then in a year buy an equivalent (now much cheaper) card to cheaply add some performance to your current system.

Outside of that, it’s never a good idea to choose 2 video cards over 1 when building a system.

With regard to multiple cards, I believe - but am not sure - that if you have two non-identical Nvidia cards, you can run graphics on one and physics on the other.

No, picking the power supply is not one of the options.

SLI needs to be identical. Crossfire doesn’t let you match any two cards, but they don’t need to be identical. Every time I say SLI below though, I mean SLI or XF.

Dual graphics cards can be a better value than a single card. For example, SLI Ti 560 GTXs stomp a 580 GTX, but still cost less. Dual 460 GTXs are supposed to be near a single 580 GTX for much less.

Performance really does nearly double with SLI, but you don’t see it in a lot of benchmarks 'cause the CPU performance doesn’t allow the frame rate to double. If you look at like 2560 x 1600 @ 16x AA numbers, SLI scaling is crazy. Of course, the performance difference between a single 580 GTX and SLI’d 580 GTXs at 1680 x 1050 @ 2x AA is almost nothing, 'cause it’s limited by other factors. You’ve got to figure out what you’re going to play and how the rest of your hardware can handle it. Dual 5970s paired with an X4 and a 15" display is a waste of money, but so is a massively OC’d 2600k and an Apple Cinema display with a 5770. That’s why computers get exponentially more expensive: everything needs to be at the same level to get the most out of any of it.

SLI has a problem with something called micro stutter. It’s basically rapid fluctuations in your frame rate caused by the extra time it takes to inject the frame from the second card into the buffer. Some people can see it. Some people can’t. I can’t. If your minimum frame rate is above your refresh rate and you use vsync, then you’ll never see it.
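Micro stutter shows up in frame-time data rather than in average fps: alternating short and long frame times at the same mean rate. A sketch of how you might quantify it from logged frame times — the example numbers are illustrative, not measurements:

```python
def frame_stats(frame_times_ms):
    """Mean frame time plus mean absolute frame-to-frame swing.
    A large swing relative to the mean is what reads as micro
    stutter, even though the average fps looks perfectly fine."""
    mean = sum(frame_times_ms) / len(frame_times_ms)
    swings = [abs(b - a) for a, b in zip(frame_times_ms, frame_times_ms[1:])]
    return mean, sum(swings) / len(swings)

smooth  = [16.7] * 8        # steady ~60 fps
stutter = [8.0, 25.4] * 4   # same average rate, big alternating swings

print(frame_stats(smooth))   # mean ~16.7 ms, swing 0.0
print(frame_stats(stutter))  # same mean, swing ~17.4 ms
```

Both traces average ~60 fps; only the swing column reveals the stutter, which is why it hides in ordinary average-fps benchmarks.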

You may be right and, having never looked at source for browsers, I’ll take anyone who has at their word if they declare it so. However, as a programmer, it just strikes me as a colossally dumb way of ordering fundamental graphical operations for a browser.

Well, having a ton of graphics lying around in RAM just in case is probably a lot less appealing from a cost/performance standpoint. 500 MB of hard drive space is no biggie; 500 MB of RAM is a big chunk.

I thought about my phrasing at some point after I posted it – in case it wasn’t clear, it’s not caching that’s “colossally dumb”; that’s a local optimization that can pay huge dividends under the right circumstances. So, let’s recall the context – way back when, I said that SSDs would significantly improve bootup and shutdown, then acknowledged the effect on applications like large databases and games (those that used a large amount of textures, repeated images, etc.). The vast amount of web browsing is not like that.

The issue you raised – of disk caching as it relates to general web browsing – comes down to caching files either before or after putting them in RAM for display. (Correct me if I misunderstand.) What would be “colossally dumb” would be to disk-cache files received over the network prior to routing them to RAM/display. I’m sure you see why, but I’ll spell it out for anyone else.

Putting the data received over the network into RAM/graphics memory and displaying it is unavoidable, as that’s the point of a browser. The programming question is simply whether to cache before or after display. Caching after means that display is unaffected by the speed of disk access – it doesn’t matter how fast the disk is, the data has already been displayed. Now, caching before would be “colossally dumb” exactly because it would affect the display.
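That cache-after-display ordering can be made concrete: hand the bytes to the renderer first, then write the cache entry on a background thread, so disk speed never sits between the network and the screen. A toy sketch — the function names are illustrative, not any browser’s actual API:

```python
import threading

def handle_response(data, render, write_cache):
    """Display first; persist to the disk cache afterwards, off the
    critical path, so a slow disk cannot delay what the user sees."""
    render(data)                                    # 1. display from RAM
    t = threading.Thread(target=write_cache, args=(data,))
    t.start()                                       # 2. cache in background
    return t

# Caching *before* rendering would serialize display behind disk I/O --
# the "colossally dumb" ordering described above.
```

With this ordering, disk latency only affects how soon the cache entry lands, never how soon the page appears.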

I still believe that the improvements for web browsing (and other general computer usage) gotten by using a SSD versus a regular spinning platter disk is close to nil, and would be practically indistinguishable to the user. I still see no (good) reason that’s incorrect.

Well, you’re both wrong. What resides in RAM and what is swapped to disk is (ETA: generally) controlled by the Operating System. (Anyone that’s had all their RAM sucked up by Firefox over a 3 day browsing session, with 75 tabs up, would realize it NEVER gives up space. :wink: )

Windows 7 is designed to keep data in RAM as long as possible, and pre-cache code it THINKS you’ll use, in an attempt to make the system run as quickly as possible.

RAM is something on the order of 100x faster than disk (with the modern exception of SSD, but SSD STILL goes over a slower bridge controller than RAM).

Sure. By the way I read drachillix’s post, he was ruling out the OS and limiting our exchange to browsers.

When I upgraded to a new Mac laptop at the office, I took the (then) crazy step of upgrading it to 8 GB of RAM…a crazy expensive thing at the time. Mostly because I wanted to run two or more VMs, and my then-current 4 GB laptop was constantly swapping to disk.

I was rather dismayed to see Firefox, all by itself, consuming 80% of the available RAM at times on that system. Granted, it’s Inactive Memory, and eligible for freeing at any time, but I’ve found when it consumes most all remaining memory (‘Free’ RAM drops to near zero) the system becomes a pig.

Further, it’s REALLY easy to set up a non-obvious usage pattern. Recent example: a 512 MB VM running Windows 7, on a 4 GB host box, was consuming ALL available host memory, AND was a pig. Giving it more RAM, then limiting the in-VM swap to a specific size (rather than ‘grow on demand’) solved the problem. A combination of Windows 7 not having enough RAM, wanting to swap to disk, and the VM allocating disk space on-demand brought the host system to a crawl.

So, you consider such things “general computer usage”? :slight_smile:

Nope, merely speaking to RAM in the tangent conversation. Ignoring the OS isn’t a valid assumption as modern OS design WANTS the developer to ignore resource constraints…that’s what cache and swap and overcommitment of resources are all about.

Right. If we can’t have an actual Turing machine (infinite tape, etc.), we devise lots of tricks to make it appear as if we do. :slight_smile:

The only reason I’m still responding is that you said I was wrong. And weeks ago, someone said my claims were “rubbish”. IIRC, back then, I talked about OS-level design and the effects of SSDs. Now, it was specifically browsers.

As I’ve said from the very beginning, there are various areas where SSDs will show marked system performance improvements. But, as I also just reiterated, for general computer usage (might as well throw in “for the average computer user”), an SSD won’t show overwhelming improvement. Yeah, there’ll be some…I mean, there is a similar reason that I’ve bought 7200 rpm drives over 5400 for years now. But I don’t think it’d be the dramatic eye-popper that some claim.

It’s weird, the improvements SSDs will exhibit. For one, boot-up will go MUCH more quickly, as the system is making a lot of requests that are randomized across the disk…but how often does someone reboot? They would see improvements in things like iPhoto (lots of photos) or a music playback program (list a lot of files in a folder, then open each one and read its metadata).

But most people wouldn’t notice a 30% improvement in something that takes less than a second anyway.
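The music-library example above (list a folder, then read each file’s metadata) is exactly the many-small-scattered-reads workload where SSDs shine. A minimal sketch of that access pattern, using file size as a stand-in for real tag metadata:

```python
import os

def scan_metadata(root):
    """Walk a directory tree and stat every file -- lots of small,
    scattered reads, the access pattern that favors SSDs over
    spinning platters (which pay a seek penalty per file)."""
    sizes = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            sizes[path] = os.stat(path).st_size
    return sizes
```

Run over a few thousand files, each `os.stat` is a separate small request, so per-request latency, not sequential throughput, dominates the total time.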

It does let Apple get away with using slower (and hence cooler and less power-hungry) processors in their Air line of laptops, but the perceptual improvement would be there in bigger laptops with SSDs too.