How Do You Rate A PC Video Card?

I am not much of a gamer and I don’t play advanced games, but I was wondering something.

How do you “read” or “rate” your video card when it comes to a PC game? In other words, how do you tell whether you can play it or not?

For instance, if I wanted to play Bully, it gives me a list of specs. Fine, but it’ll say it requires an XXXX video card or better.

This is what I want to know: how do you read the card’s specs to know whether it’s better or not?

I have gone to one site, “Can You Run It,” which lists games; you pick the one you want and it tells you whether you can run that game on that particular computer.

But I was wondering how to read the video card specs if I wasn’t on the computer I plan to use.

Thanks.

You kind of can’t, without some fairly serious research. In all seriousness, this is one of the places where PC gaming goes completely to ****. The only clear metric for when one video card is better than another is how much RAM it has on it, and that’s only a valid comparison within the same line of cards; most cards in a range have similar amounts anyway, and RAM is far from the only, or even the most important, determiner of a card’s power.

Video cards are a disastrous mess of alphabet-soup model numbers that are largely impenetrable to the layman, or in many cases even the casual hobbyist. You might try looking up your video card on the Anandtech benchmarks here, which will give you an idea of how your card stacks up IF it appears in their list. If it doesn’t, you can check the 2011 listings. If you still can’t find it, you’re pretty much SOL, because even if it “sounds” like one of the cards in the list, all it takes is a couple of letters to denote either a souped-up or stripped-down model, and it’s seldom clear which.

I’m sure some full-time hobbyists here are going to come in and tell me I’m wrong, but it’s impossible for me to think of video cards without thinking of this: http://www.shamusyoung.com/twentysidedtale/?p=2133

It’s not like it’s peculiar to video cards. You pretty much need to look up benchmarks for any computer component nowadays. Sure, some can still be compared by the numbers, like RAM and hard drives, but good luck finding the relevant numbers (i.e., speed and longevity, not just size) anywhere they’re being sold.

If you can’t find it elsewhere, look it up on passmark.com, which uses real benchmarks submitted by users. Some people disagree with the methodology, but it’s good for broad strokes, at least.

Yes, the difference between, say, a 250GTX and a 250GTS can be considerable, and a 480GTX and a 490GTX are light years apart as well. I would strongly recommend Nvidia over ATI, for many reasons; look over the list of cards on the Anandtech ranking as suggested, and look over your best candidates on the Nvidia GeForce site.

Pretty much ANY GTX card in the 200 series or higher is likely to make you happy. The GT and GTS cards can have some serious corners cut.

I’ve asked this question before and have never gotten a clear answer or even been directed to a useful site. There are websites that have relative rankings of video cards, but because of the sheer number of cards they’ll rank just 40-60 specific types, and yours is likely to be one of the 500 they don’t. And as has been pointed out, cards that sound the same often aren’t.

Why the video card manufacturers have elected to make this so hard, I really don’t understand.

Well, the main issue is one of history. It’s easier if you are, say, in the market for a new card: a quick visit to Tom’s Hardware’s “best GPU for the buck” article for the current month, and you’re set. Moreover, you only really need to concern yourself with a few models, and two manufacturers (Nvidia and AMD).

If you’re trying to compare specs on your machine, and it’s a recent machine, again, that is easier. If it’s an older card, it’s a bit harder, but it’s not brain surgery.

You Google your card model with “review” or maybe “benchmarks” after it, and that should get you started.

For modern cards, the two manufacturers have a similar system. The first number stands for the generation: a 7000-series AMD or a 500-series Nvidia card is a last-generation GPU, while the 8000 and 600 cards are current gen. The next number or two stand for the family, which is usually the main designation of power/performance.

AMD has three main tiers, the 700s, 800s and 900s, while Nvidia has the 60s, 70s and 80s. So an AMD 8800-series card is current gen, second family. It’s a mid-tier card which will run any game on the market at good settings at 1080p. Nvidia’s equivalent would be something like a GTX 670.

AMD further delineates their models by another number which is a relative measure of power within that family. A 7850 is a bit less capable than a 7870, for example.
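
To make that concrete, here’s a rough Python sketch of the decoding described above. It only encodes the heuristic from this thread (first digit = generation, second digit = family, remaining digits = the within-family step), and the card names are just examples; real product lines have plenty of exceptions.

```python
import re

def decode(model: str):
    """Crude decoder for the digit scheme described above.

    Returns (generation, family, refinement): generation is the first digit,
    family is the second, and refinement is whatever digits follow (the
    within-family step for AMD, usually just a trailing zero for Nvidia).
    """
    match = re.search(r"\d{3,4}", model)
    if match is None:
        raise ValueError(f"no model number found in {model!r}")
    digits = match.group()
    return int(digits[0]), int(digits[1]), digits[2:]

print(decode("Radeon HD 7850"))   # (7, 8, '50') -> generation 7, second family tier
print(decode("GeForce GTX 670"))  # (6, 7, '0')  -> generation 6, upper tier
```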

The difference in generations is what confuses most folks. A GTX 480 is more powerful than a GTX 650, for example, as the latter is an entry-level GPU, while the former is a high-end powerhouse, albeit one from a few generations ago.

So what’s better, a 480 or a 570? What if the 570 has 512MB of memory but the 480 has 1024MB? What about a 480 and a 560? Does a 650 trump a 480? No? Then does a 660? What if it’s a 480 SE? What is the actual performance difference between a 470 and a 480? And how do you compare Nvidia cards to ATI cards?

To the average consumer this stuff is just incomprehensible. I’ve played PC games as long as it’s been a going concern and I’m just completely stymied by the whole thing. Obviously the $500 card is probably a lot better than the $69 card, but for slicing the difference between relatively close-in-price cards there’s no clean answer. It also leaves you at a loss when a PC game has a minimum requirement. For instance, my XCom box says “ATI Radeon 2600 XT or greater.” Okay, so is a 3500 good enough? A 4470? Does the first number trump, or is a 4470 worse than a 2600? What if it doesn’t say XT? It says a GeForce 8600 is fine; what’s a GTX 650? 650 is a way lower number than 8600.

Kinthalis has the easiest method. If the second digit of your video card is the same or higher than that of the system requirements, you can run it.

It isn’t a perfect system, since you get wacky stuff like AMD’s 5970 vs 6950 or 8600GT vs GTX 660 Ti, but it’s easy and good enough.
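
Here’s a minimal sketch of that rule of thumb in Python, with the caveat above about wacky cross-generation and cross-vendor cases still applying; the card pairings are only examples.

```python
import re

def family_digit(model: str) -> int:
    """Second digit of the model number, i.e. the family/performance tier."""
    return int(re.search(r"\d{3,4}", model).group()[1])

def probably_runs(my_card: str, required_card: str) -> bool:
    """Rule of thumb: your card's second digit should be the same or higher
    than the second digit of the card in the system requirements."""
    return family_digit(my_card) >= family_digit(required_card)

print(probably_runs("Radeon HD 7870", "Radeon HD 5770"))   # True: 8 >= 7
print(probably_runs("GeForce GTX 650", "GeForce GTX 480"))  # False: 5 < 8
```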

It is an alphabet soup, but Tom’s Hardware publishes a graphics card chart at the end of every “best graphics card for the money” article.
Here is January: GPU Benchmarks Hierarchy 2023 - Graphics Card Rankings | Tom's Hardware

Find the card you are looking for; everything on the same line is going to be close, and you move up the chart for better cards. RAM is all but worthless for comparing graphics cards as long as you have enough. Right now the minimum I would get on a new card is 1GB, but they sell GeForce GT 620s with 1GB of RAM that are slower than the integrated graphics in an i3-3225 (HD 4000), and Radeon 7850 1GB cards that are probably the best budget option available and ten times the speed of the HD 4000.
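
If you want to see the chart idea in mechanical terms, here’s a toy sketch. The groupings below are an illustrative, made-up excerpt rather than the real chart; consult the current Tom’s Hardware article for the actual lines.

```python
# Hypothetical excerpt of a hierarchy chart: each inner list is one "line" of
# roughly equivalent cards, ordered from fastest tier to slowest.
HIERARCHY = [
    ["GeForce GTX 680", "Radeon HD 7970"],
    ["GeForce GTX 670", "Radeon HD 7950"],
    ["GeForce GTX 660 Ti", "Radeon HD 7870"],
    ["GeForce GTX 650 Ti", "Radeon HD 7770"],
]

def tier(card: str) -> int:
    """Index of the chart line a card sits on (0 = fastest)."""
    for i, line in enumerate(HIERARCHY):
        if card in line:
            return i
    raise LookupError(f"{card} is not in this excerpt of the chart")

def compare(a: str, b: str) -> str:
    ta, tb = tier(a), tier(b)
    if ta == tb:
        return f"{a} and {b} are on the same line -- roughly equivalent"
    better, worse = (a, b) if ta < tb else (b, a)
    return f"{better} sits {abs(ta - tb)} line(s) above {worse}"

print(compare("GeForce GTX 670", "Radeon HD 7870"))
```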

In today’s new-card market, you generally get better performance from AMD for the same money, up until about the GeForce 660 Ti level. The best current cards, from most powerful to least powerful, are Nvidia’s GeForce GTX 690, 680, 670, 660 Ti, 660, 650 Ti, 650, and AMD’s Radeon 7970, 7950, 7870, 7850, 7770, 7750. Anything with numbers lower than that shouldn’t be considered a graphics card suitable for playing current games, at least in the current generation of cards. Radeon 8XXX series cards are NOT an upgrade; they are rebadges of the current 7XXX series (AMD’s Annual GPU Rebadge: Radeon HD 8000 Series for OEMs)

I think there may also be some question of standards support that isn’t as easily reviewed using benchmarks. For example, some games may require that the card support DirectX version 10 or higher. A card may have the GPU power for a game, but not the standards support.

Once you’ve found a card with the power you want, it’s easy enough to check the technical specs for any standards support a game may require.
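
As a rough illustration of keeping that check separate from the benchmark lookup (the DirectX values here are from memory and only cover these two example cards; check the manufacturer’s spec sheet for your own card):

```python
# Illustrative spec table: a card can have plenty of raw power and still miss
# a game's API requirement, so check the feature support separately.
CARD_SPECS = {
    "GeForce 8600 GT": {"directx": 10.0},
    "Radeon HD 7850": {"directx": 11.1},
}

def meets_requirement(card: str, required_directx: float) -> bool:
    return CARD_SPECS[card]["directx"] >= required_directx

print(meets_requirement("GeForce 8600 GT", 11.0))  # False: it's a DX10 part
print(meets_requirement("Radeon HD 7850", 11.0))   # True
```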

So yeah. Buying a new card isn’t hard, because you basically say “I want to spend $200 on a video card” and then go read rankings on sites like Anandtech and Tom’s hardware, but trying to evaluate an older card to see what it’s suitable for can be ridiculous.

Yeah, I pretty much rely on Tom’s Hardware’s tier list. It’s the only quick and effective way I’ve found to compare video cards.

Re: video card numbers - the problem is that they’re not intuitive, but they can be helpful.

  • The first digit simply refers to the generation release. So a Radeon 6XXX is newer than a Radeon 5XXX, but newer doesn’t necessarily mean better. Sure, they make improvements over time to the technology, but it’s not a sure thing.
  • The second digit is the important one, and is generally tied to overall performance, with higher being better. So an X8XX is going to be better than an X3XX.
  • Third digit is also a performance descriptor, if there is one.
  • Last digit is usually just a zero.
  • Extra letters are specific to the manufacturer to describe certain features, etc.

Thus, you can look at this and realize that the Radeon 6350 is going to be much worse than a 5870, even though the 6350 is newer - the “performance” numbers are much worse (35 to 87).

But even THIS doesn’t really work. How would a 4870 stack up against a hypothetical 7350? Sooner or later, generation changes are going to outstrip the “performance” numbers, especially when you’re dealing with less extreme ranges of the latter. 6500 vs 5800? (All numbers invented by me)

AFAIK, the only advantage to more than 512MB of RAM on a video card is to allow high-end games to load that much more texture and map information, (usually) resulting in a framerate increase. Concentrate on the processing speeds first and then adjust the amount of RAM to fit your budget and gameplay needs. You’re better off with faster processing and less RAM than vice versa, and more RAM on the same processing speeds won’t gain you very much in most cases.

Keep in mind that Nvidia video cards can be used as active graphics coprocessors by applications like Adobe Premiere - the right card can increase things like Premiere’s real-time rendering speed by a huge amount. That’s not just rendering to screen… it’s actual graphics subprocessing going on under the hood.

Just a little nitpick, but essentially everything that is composited into the frame needs to be loaded into the GPU’s RAM. If you run out of RAM performance goes down to a couple of frames per second (or your game crashes).

So your texture resolution, your shadow map resolution, your screen resolution, and some form of AA will all be limited by the amount of VRAM you have. For 1080p, with a modern game that features high resolution textures (higher than consoles at least), I would recommend 1 Gig as the minimum.

Well, of course, in the sense that the RAM holds one to several successive frame buffers (its primary purpose). Even an HD frame is only about 8MB of data, though, so assuming the system is fast enough to keep ten frames pre-rendered, 80MB is all that’s needed. Some of the rest is for texel and vertex calculations - but not as much as you’d think - and the remainder is for pre-buffering textures, bump maps and the like.
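
For anyone who wants to check that figure, here’s the back-of-the-envelope arithmetic, assuming a 32-bit frame buffer and uncompressed 32-bit textures with mipmaps:

```python
# 1920x1080 at 4 bytes per pixel -> roughly 8MB per frame.
width, height, bytes_per_pixel = 1920, 1080, 4
frame_bytes = width * height * bytes_per_pixel

print(frame_bytes / 2**20)        # ~7.9 MB for one frame
print(10 * frame_bytes / 2**20)   # ~79 MB for ten buffered frames

# Compare that with a single uncompressed 2048x2048 texture plus mipmaps
# (the extra third accounts for the mip chain): ~21 MB per texture, which is
# why texture data, not the frame buffer, is what fills up VRAM.
print(2048 * 2048 * bytes_per_pixel * 4 / 3 / 2**20)
```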

All things being equal, so would I - but for a moderate gamer looking to control the budget, more processing power and less RAM is a better tradeoff than a lesser GPU and 1GB.

This recommendation is wrong, of course, but particularly ironic given the subject of this thread. ATI is far better about laying out a consistent naming scheme for their products, NVIDIA is a big mess.

Anyone who tells you to buy one video card designer over another is steering you the wrong way. Both companies make good products, and each has led the market at different times, both in terms of absolute performance and price/performance. So you should go with whatever is best at any given time, and “always buy Nvidia/ATI” advice runs contrary to that. FWIW, AMD has led both technologically and in terms of price/performance for about 75-80% of the time in the last 3 years.

Anyway, it’s pretty much been covered. The second number is actually the more important number in terms of raw horsepower. The first number is the generation. I wish they’d make this system more clear, maybe by having letters for generations instead of numbers, so you’d have a B70 vs A80 or whatever, and it would make you think about the difference instead of just thinking “well 7200 is more than 6800, so this should work”

But the easiest way is just to use one of those giant charts that has every video card made in the last few years and see where it is on the results.

Edit: Also, RAM isn’t a big factor here. The card you get should have enough RAM for your purposes (1GB is generally fine unless you run at very high resolutions), but low-end cards often have plenty of RAM because it’s cheap, so you should never say “well, it has a gig of RAM, so it’s a good card.”

The name of a video card is hardly its most important feature.

As for recommending Nvidia, I do it based on a long, long history of one company trying to create broad standards and support a variety of end uses vs. the other using every fragile trick in the book to beat benchmarks and claim market share among gamers. I’ve had 10X the trouble with ATI over incompatibilities, early failure, driver hassles, and general weirdness. I’ve also learned to hate their Catalyst management tool with a deep passion.

OTOH, I’ve had systems that went through three or four Nvidia cards without a glitch or having to laboriously strip out old driver tech for new.

Nvidia supports extremely powerful GPU coprocessing for purposes other than getting your Skyrim frame rate up another notch. See Adobe (better yet, use Adobe) for the differences between the two technologies.

The only place ATI has ever been superior is in handling video input, such as with their lines of TV-something cards.

You mean like trying to force proprietary CUDA on the industry instead of using the open standard OpenCL like AMD? What about trying to promote proprietary PhysX middleware?

Was the last time you used AMD circa 2004? AMD used to have some hassles, long, long ago - I’ve owned AMD cards for 3 years now with no problems ever.

Your advice is outdated. ATI drivers install the same way as Nvidia drivers, and are actually probably easier, because Steam reminds me of new ATI driver updates and installs them for me.

Again, ATI is trying to use industry-standard, non-proprietary open formats to do this; NVIDIA tries to force companies to go with its proprietary solutions.

What exactly does Adobe do with nvidia that can’t be done with ATI?

Your advice is, at best, outdated.