The nerfing of high-end GPUs. PC vs console sales numbers.

A twofer thread. The topics are only loosely related, but I figured it would be easy to keep the two conversations apart.

  1. I get the impression that publishers and big developers make their games primarily for consoles because most of their sales come from consoles rather than PC.

How do the sales numbers of a game that comes out on PC compare to console sales? Is the ratio different for high budget games versus indie titles?

For every PC copy that is bought, how many people pirate the game without buying it?

Do games that offer a playable demo tend to be less pirated?

It seems like it would be easier to make games for the PC first and then scale them down for later release on consoles, since feature integration comes before optimization, and PC players are more experimental and less price-sensitive.

  2. It seems that high-end GPUs like the Titan and Fury X are nerfed. Is this so? If so, why?

The current top Titan has 12GB of VRAM, a memory bandwidth of 337GB/s and 3072 cores going at about 1100MHz.

As a point of comparison, the better-balanced 980 Ti has 6GB of VRAM, a memory bandwidth of 337GB/s and 2816 cores going at 1100MHz.

The Titan X and 980 Ti have the same memory bandwidth and about the same processing power, but the Titan X has twice the VRAM. Why give the Titan more VRAM than it knows what to do with?
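Quick arithmetic on that “about the same processing power” bit, using the standard peak-FP32 formula (cores × 2 FMA ops per clock × clock speed):

```python
# Peak FP32 throughput = cores * 2 ops/clock (FMA) * clock speed.
def peak_tflops(cores: int, mhz: int) -> float:
    return cores * 2 * mhz * 1e6 / 1e12

print(peak_tflops(3072, 1100))  # Titan X: ~6.8 TFLOPS
print(peak_tflops(2816, 1100))  # 980 Ti:  ~6.2 TFLOPS, within ~10%
```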

Yes, having 12GB of VRAM is useful for content creation like video editing. Nvidia already offers workstation GPUs that can do that and it must be more eager to sell those than Titans.

Yes, you can pair up two Titan Xs and play at 4K at 60fps. But why pay for 12GB of VRAM twice? A single Titan with 12GB of VRAM and twice the number of cores would save that duplicated cost.
As for the Fury X, I understand that HBM is in short supply. If HBM is that scarce and sought after, AMD should have made the Fury X a proper premium GPU by giving it enough processing power to take full advantage of HBM.

It has even more memory bandwidth than the Titan X and 980 Ti, at 512GB/s. Yet it usually comes in somewhat under the 980 Ti and Titan X in benchmarks: it has too few processing units, and even though it comes with a water cooler, it only runs at 1050MHz.

Since it usually runs much cooler than either the Titan X or 980 Ti*, it could have been given more processing units and a higher clock speed and become the top GPU. Beating the competition outright is one of the main purposes of a flagship product. Flagship buyers are also unlikely to be price-sensitive, and AMD already has plenty of options for price-sensitive buyers who will not give the Fury X a second look anyway.

So, why does it seem like Nvidia and AMD are pulling their punches and making their flagship GPUs inefficient?

*http://cdn.wccftech.com/wp-content/uploads/2015/06/AMD-FuryX-Temp-load.jpg

Stacking on extra VRAM is a marketing gimmick. You see the same thing on mid-range cards all the time: 4GB editions of 2GB reference cards, never mind that the GPU could only push 20 fps at the resolutions that would actually need that much VRAM.

It runs cooler only because water has a much higher thermal capacity than air. If you were to put an air cooler on that baby it would probably be almost exactly in line with comparable cards from the green team given the power usage. It overclocks terribly too, even with the water cooling, so it really looks like they squeezed about as much out of it as they could already.

The really general answer is this: GPU manufacturers are subject to all kinds of practical considerations that dictate which models they actually ship. If some of the models don’t seem to fit well in the lineup, it’s likely that they hit some constraint on clock frequencies, part availability, yields, the competitive landscape, or the like.

Take the bandwidth on the Titan X for instance. This is a direct function of the fact that the fastest GDDR5 memory out there runs at about 7 gigabit/s per pin. That’s the fastest it goes without HBM or a wider bus. So why not a wider bus? Because that would require a completely different chip, and a low-volume, high-end SKU doesn’t justify a whole new chip. OK, so why wasn’t the 980 Ti crippled in comparison to provide more differentiation? Because it has to compete with AMD’s products. Then why was the Titan X given 12 GB at all? Because it’s a high-end card and you have to give the users something, and at least some of them will find it worthwhile.
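As a back-of-the-envelope check (a sketch using that 7 Gb/s per-pin figure and the Titan X’s published 384-bit bus):

```python
# Peak memory bandwidth = bus width (bits) * per-pin rate (Gb/s) / 8 bits per byte.
bus_width_bits = 384   # Titan X / 980 Ti memory bus
gbps_per_pin = 7.0     # roughly the fastest GDDR5 available at the time
peak_gbps = bus_width_bits * gbps_per_pin / 8
print(peak_gbps)       # 336.0 -> the ~337 GB/s quoted above
```

A wider bus or faster memory raises that number; nothing else does.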

I don’t know much about HBM, but I suspect there’s a similar kind of story behind it; they shipped what they had, even though it’s in an awkward position.

To answer your second question first, the Titan X is most certainly not nerfed. Some games do use that 12 GB of VRAM, and at 4K+ resolutions cards with only 4 GB of VRAM can be VRAM-limited at higher quality settings. Some games also use the extra VRAM as a super-cache to load more of the level at one time, or more textures (e.g. Shadow of Mordor, Skyrim mods). And you can’t have a Titan X with twice the number of cores right now: that’s for next year or the year after.

I have two Titan X cards, BTW, and often see VRAM usage exceed 4 GB.

With regards to PC piracy, I have no figures, but I’m under the impression that services like Steam have largely knocked the problem on the head: piracy still happens, but it’s nowhere near as bad as it used to be.

Is memory bandwidth chiefly a question of bus width and frequency?

Do I understand correctly that if the VRAM can provide no more than 7Gb/s, there’s little point in having more than 3000 CUDA cores at 1GHz?

So, it comes back to what Palooka said above about marketing gimmicks?

I understand why they only had 4GB of HBM to include in each GPU. What I don’t get is the combination of 1) stream processors are likely not in short supply and 2) it seems doubtful that 4096 stream processors at 1GHz are maxing out the potential of 4GB of HBM.

There must be something I’m not getting about the relationships between different parts. Would including more than 4GB of HBM have allowed a higher bandwidth than 512GB/s?

Which games exceed 6GB of VRAM at 4K? There’s GTA V and what else?

I presume you play Fallout 4. How much VRAM does that take at 4K?

Can data be pulled from VRAM faster than it can be read off the SSD?

I admit that preloading textures & such can be a good use of extra VRAM.
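For a rough sense of why (ballpark 2015-era figures, assumed rather than measured):

```python
# Approximate peak bandwidths, in GB/s, for each tier of the memory hierarchy.
# Figures are rough, vendor-dependent assumptions for 2015-era hardware.
bandwidth_gbps = {
    "GDDR5 VRAM (Titan X class)": 336.0,
    "DDR3-1866 system RAM (dual channel)": 29.9,
    "SATA III SSD": 0.55,
}
for tier, bw in sorted(bandwidth_gbps.items(), key=lambda kv: -kv[1]):
    print(f"{tier:38s} ~{bw:6.1f} GB/s")
```

Anything already resident in VRAM is orders of magnitude faster to reach than anything that still has to come off the SSD, which is exactly why the super-cache trick pays off.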

Getting a game on Steam brings extras like automatic updates and the Workshop. Is that all or has Steam reduced piracy for additional reasons?

Yes. I’d say it’s chiefly a function of bus width and frequency. There’s also memory-controller efficiency, but both NVIDIA and AMD already have efficient controllers, so there isn’t much left to gain there.
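To put numbers on the Fury X side of that (a sketch using the published HBM1 layout: four 1 GB stacks, each on a 1024-bit bus at 1 Gb/s per pin):

```python
# HBM1 sketch: bandwidth and capacity both scale with the number of stacks.
stacks = 4                 # Fury X ships four HBM1 stacks
bus_bits_per_stack = 1024  # each stack has a 1024-bit interface
gbps_per_pin = 1.0         # HBM1 per-pin data rate
gb_per_stack = 1           # HBM1 stacks are 1 GB each

bandwidth = stacks * bus_bits_per_stack * gbps_per_pin / 8  # GB/s
capacity = stacks * gb_per_stack                            # GB
print(bandwidth, capacity)  # 512.0 GB/s, 4 GB
```

So more stacks would in principle have meant more bandwidth as well as more capacity; presumably the catch is fitting more of them on the interposer.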

There are always some problems that require more math. Having an unbalanced system doesn’t mean that things don’t get faster at all; it just means they don’t get faster at the rate you’d otherwise expect. If half of the frame is math-limited and you make just the math 20% faster, then your frame gets 10% faster. It’s still a win, but the bang-per-buck goes down.
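That’s just Amdahl’s law applied to a frame; a quick sketch (reading “20% faster” as “20% less time”):

```python
# Amdahl-style frame-time arithmetic for the example above.
math_fraction = 0.5   # half the frame time is math-limited
time_saved = 0.20     # the math portion now takes 20% less time

frame_after = (1 - math_fraction) + math_fraction * (1 - time_saved)
print(frame_after)    # 0.9 -> frame time drops 10%, i.e. ~10% faster
```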

Only for some people. It’s useful for others, especially with SLI in the picture.

You need more than 4 GB at the very highest settings. The bandwidth is fine; it’s the capacity that’s limited. No one wants a super-fast card that can’t do 4K, even if it can do 1080p at zillions of fps.
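For a sense of scale (illustrative arithmetic, not a measurement of any particular game):

```python
# A single 4K render target is small; textures and other assets are what
# actually fill multiple gigabytes of VRAM at high settings.
width, height = 3840, 2160
bytes_per_pixel = 4   # RGBA8
target_mb = width * height * bytes_per_pixel / 2**20
print(f"one 4K target: ~{target_mb:.0f} MB")      # ~32 MB
print(f"six targets:   ~{6 * target_mb:.0f} MB")  # still < 200 MB
```

Even a deferred renderer juggling half a dozen such targets is using well under a gigabyte on them; the rest of that 4+ GB is textures, geometry, and shadow maps.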

Also, premise 1 isn’t correct. The Fury X is at the reticle limit. It has as many SPs as they can fit at the current process node.

In terms of sales, it’s all over the place. It depends on the game, the marketing, and the feature sets of both single-player and multiplayer.

Big-budget games with large marketing campaigns typically sell a lot better on consoles than they do on PC. But there are exceptions. The Witcher 3 sold a full third of its copies on PC, beating out the Xbox One. Games like Portal 2 and Skyrim did better on PC as well (the latter selling a full half of its copies on PC).

But gargantuan AAA titles like, say, Call of Duty or GTA V sold incredible numbers on consoles, especially when you take into account last-gen console sales.

At this point in time most publishers realize there’s a lot of money to be made on PC, and many port their games to the platform, usually with some extra graphics goodies thrown in, or performance improvements.

A few will actually take even more advantage of the PC with exclusive multiplayer features, advanced graphics improvements, mod support etc.

But ultimately, unless a game is truly geared for a PC-only release, it needs to run on consoles, and that will always place a limit on what can be done without adding huge costs and development time. (Remember, there is no Sony or Microsoft on PC to buy games out from developers and lock them in as exclusives, so most PC exclusives come from genres that do much better on PC or that require mouse-and-keyboard input.)

The elephant in the room is VR. I predict the coming year will be the one in which the gaming PC takes center stage again, because VR really is revolutionary, and the current hardware requirements to make it work well are beyond the current generation of consoles.

2016 will see VR hardware arrive with early adopters, but the general public will be right behind them as more people experience what it’s like and are instantly sold.

2017 will be the big year in which VR capability becomes a must-have for new computers, huge new killer games and apps will appear, and old-style console gaming will start to look passé.

The minimum spec for VR that doesn’t make you motion sick and has decent resolution will require the equivalent of a GTX 980. And when the next gen goggles show up with twice the resolution, graphics power will be the limiting factor. There’s no way the consoles will be able to keep up, and VR on them will always be a second-rate experience.

I’m not convinced that VR is going to be the next big thing. Even with all the expensive hardware, there’s a huge gap between what exists today and something most gamers actually want to play. VR systems as they stand also can’t deliver good input interfaces, and that’s a huge hurdle to overcome. The first team to figure this out might create the first “killer app” for VR, but gaming has been trending the other way, away from hard-core experiences and toward more casual and open play.

Sony is working on a VR rig for the PS4 (imaginatively named PlayStation VR) and it will likely be released before the consumer version of the Oculus Rift. I haven’t tried it myself yet, but the general consensus is that it’s an experience that’s roughly on par with the Rift.

That said, I agree with smiling bandit, VR is probably a dead end when it comes to the mass market.

Only in relative terms. PC GPU sales are still quite strong. Ten million grandmas playing Candy Crush on their iPads don’t affect the hardcore market at all.