5 years? I wish. The gap between the last console generation and this one was 11 years. 11 years of being locked in place in an age when amazing advances are constantly happening.
And no, they definitely aren’t as good or better than PC at the start. The CPUs in the current consoles would not beat mainstream PCs from 2008. They are perplexingly pathetic.
Tablets and possibly cell phones will eclipse console hardware within this generation. I mean, they already have, if you include high-end stuff like the Surface Pro. But I mean even the mainstream ARM stuff.
This is ridiculous and it makes me wonder if the people who spew this nonsense have ever had a PC. Every 6 months? No one upgrades on that sort of cycle except high end hobbyists with a lot of money, certainly no one needs to. Let me guess, you think the PC costs $5000 in the first place, too, right?
It doesn’t even make sense to say this in a comparison to consoles. If a PC is more powerful than a console, it will continue to be more powerful the whole time. If you need to upgrade it because technology became more advanced and games do more shit, then you’re benefitting from that change in technology and getting new features and abilities. But if you choose not to, you still get that same better-than-console experience you had when the PC was brand new. It’s not as if consoles somehow become more powerful over time or PCs weaker.
Eight Years… Xbox 360 = 2005. Xbox One/PS4 = 2013.
Also, it's not just about the CPU. As dedicated machines, consoles can do stuff that a similarly powered PC just can't. You know this, Beef. I know you know this. There's no doubt that there's a gap between PC and consoles, but it is not as wide as you make it sound.
You have to bear in mind that people who say things like that are going off the very real PC vs console question in the late 90s/early 2000s. Spending $2000-$3000 on a gaming machine and having to update it every year to get even decent performance on the same games available for consoles was a very real thing. And all the “PC Master Race” talk isn’t helping convince the world that this is no longer true.
I may not know what common traits console games have, but I can tell you all about common traits that threads about console games have here on the SDMB.
Yeah, the cycle has definitely slowed, but it’s still there. And while it’s a lot less fiddly than it once was, Windows is still fiddly compared to a console. You don’t have to go set the settings for your video card or fiddle with your memory timings, but it behooves you to do so if getting maximum performance out of your PC is important. You can’t do that on a console, even if you wanted to.
Another thing to consider is that console games tend to be optimized more than PC games, if only because the hardware is basically static, while a PC game has to work within a broader framework of using OpenGL or DirectX or whatever, rather than taking direct advantage of the hardware. This gives a bit of performance back to the consoles that an equivalent PC might not have.
I'm not saying I'm a console guy at all; I have both: a fairly capable 2013-era PC (FX-6300, 16 GB RAM, a 2012-era NVIDIA GeForce 600-series GPU, and a few terabytes of disk space). I also have an Xbox One.
Both are good for what they are, and both have their advantages. PCs generally look better and can run faster, but the other thing that consoles (at least the Xboxes) do very well is have all the chat and party utilities integrated and working. No using Ventrilo or Skype to talk, or using your phones to text each other which game you’re in, etc… Xbox Live is pretty handy for all that.
As for the games themselves, the biggest things as far as I can tell revolve around the better graphics that tend to be available on PCs, the better party utilities on consoles, and the controller differences. For some games, the controller is better, and for others, the mouse/keyboard is better. Really depends on the individual game, IMO.
But the games on PC change over that time period. If I buy an Xbox One today, every game that comes out for it in the future will run on it - for say the next 8 years until they retire it. If I buy a high end PC and graphics card today, will the PC games coming out 8 years from now run on it? Or does my choice become to buy nothing but older games, or upgrade? Honest question.
If you spent, say, 1000 dollars today for a legitimately high-end gaming machine, you’ll get a couple years of being able to run everything and its mother on ultra settings. You’ll get a couple more years of running everything new on really nice settings. 4 or 5 years down the road, you’ll be down to medium or even low depending on the games.
In this scenario, your original PC is already very nice. That means that your motherboard, RAM, and power supply will probably not need upgrading. Since your original system is already high end, the only thing you'll really need to upgrade on a semi-regular basis is your graphics card. A really powerful graphics card will stay good for a long time, but eventually it stops getting driver updates and can no longer support newer versions of DirectX or whatever. 200 bucks every 2-3 years will probably keep your machine "high end."
After 4-5 years, you should expect to need a power supply replacement sooner rather than later. That’s maybe a hundred bucks - they’re not expensive. Same goes for your hard drive.
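To put rough numbers on that (these are just the ballpark prices from this thread plugged into a toy calculation, not real pricing):

```python
# Toy back-of-envelope using the ballpark prices above; not real quotes.
initial_build = 1000              # high-end gaming PC today
gpu_cost, gpu_interval = 200, 3   # ~$200 graphics card every 2-3 years
psu_replacement = 100             # power supply, once mid-cycle
drive_replacement = 100           # hard drive, once mid-cycle

years = 8                                    # roughly one console generation
gpu_upgrades = (years - 1) // gpu_interval   # upgrades after the initial build
total = (initial_build + gpu_upgrades * gpu_cost
         + psu_replacement + drive_replacement)

print(f"{gpu_upgrades} GPU upgrades over {years} years, ~${total} total")
# -> 2 GPU upgrades over 8 years, ~$1600 total
```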
To add onto Johnny Bravo’s post, a high-end graphics card today should last you the life cycle of a console, barring a new version of DirectX that isn’t backwards-compatible with current architecture or some similar but unlikely scenario, and games that only support newer versions (which would have to be PC only as the consoles wouldn’t be able to upgrade mid-cycle if the PC couldn’t).
There are few PC-only games that push graphics to the extreme. Any game that is cross-platform should support older hardware, because it is optimized for older hardware. The developers will often make shinier PC versions to take advantage of the PC advantages, but it (usually) still supports the specs one would find on the console. It's only when the XBTwo or PS5 leapfrogs your current setup that you might start having issues.
Johnny’s right about components potentially giving up the ghost mid-cycle, but it is just as likely to happen in a console. Fans and power supplies can die in consoles too.
I suspect that they're just as likely to fail at a similar number of total operating hours, but consoles are generally going to accumulate far fewer of those hours than PCs.
Anecdotally, my gaming rig is a 2008 model. The only original parts in the case are the optical drive, the motherboard, and the fans. The RAM was upgraded almost as soon as I bought it. The original hard drive failed (and was replaced) last summer. The power supply and graphics card have both been replaced twice and, until I get a third graphics card, the machine is only nominally a “gaming PC.”
This is all well within my expected parameters for owning a PC. The parts fail because the machine gets used a lot, even when it’s not being used for gaming.
On the other side of the equation, I've had a PSX, PS2*, PS3, and now a PS4. None of them have ever suffered any sort of hardware failure. This is also expected. The PS3 and PS4 have gotten a lot of use as media centers, but that's relatively low-impact usage compared to gaming, I would assume.
The nice thing about a console is that you pay for it once and it operates as expected until you don’t want it anymore. Graphics quality generally improves over the lifespan of the system, relative to other console titles. What you can always count on, though, is that a PS3 game released in 2006 will perform as well as a PS3 game released in 2015.
PCs are all about potential. They have the potential to always run the newest games at the best possible quality, but if my gaming rig had the same parts as it did in 2008, the idea of playing anything new on it would be laughable.
*I gave my PS2 away four years ago and I’d bet decent money that it’s still chugging along somewhere; that thing is a legendary workhorse.
Graphics-wise, a big difference is that console games are often locked at 30 or 60 FPS, whereas a computer can go far above that.
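If it helps to see what the "lock" actually is, here's a minimal sketch of a frame cap; render_frame is a hypothetical stand-in for whatever per-frame work the game does:

```python
import time

TARGET_FPS = 60
FRAME_BUDGET = 1.0 / TARGET_FPS   # ~16.7 ms per frame at 60 FPS

def run_capped(render_frame):
    """Render a frame, then sleep away the rest of the frame budget."""
    while True:
        start = time.monotonic()
        render_frame()                          # hypothetical per-frame work
        elapsed = time.monotonic() - start
        if elapsed < FRAME_BUDGET:
            time.sleep(FRAME_BUDGET - elapsed)  # the "lock" at 60 FPS
        # an uncapped PC game would skip the sleep and loop again immediately
```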
One thing I always thought consoles had going for them over PCs was same-screen multiplayer. With PCs, everyone has to have their own machine; not so with consoles. Unfortunately, consoles have largely been moving away from this in certain genres - for example, Halo 5 no longer has the split-screen multiplayer that made the series so popular in the first place. Real shame.
Aren’t there multiple versions of the Nintendo 3DS? I was under the impression that at least some of the newer games might not run on the older models, or will certainly run a lot worse.
That’s a handheld, which is arguably different from a console, but lines get a little blurred all over.
Also, I saw some news articles referencing an interview with a Sony VP who was asked whether, since the PS4 is pretty much just an x86 machine at this point, they might eventually release some sort of PS4 2.0 with upgraded hardware, and he said it was possible. Now, simply not denying the possibility is hardly a solid indicator of future plans, of course, but we could eventually see consoles later in the cycle with better processors/graphics/etc. than the early ones.
You're right, I always think of the 360/PS3 coming out in like 2003 instead of 2005 for some reason.
The CPU is crippling. It was an extremely poor choice. We're going to be limited by that for a long time to come. It's like, but not as bad as, the ridiculously small memory of the 360/PS3 generation (512MB total, and only 256MB of main RAM on the PS3) that held a lot back. It won't be as restrictive, but it's the puzzlingly bad choice this time around.
As to consoles being able to do more with their hardware, people fundamentally misunderstand what this means. They'll say "consoles are more efficient because they're single-purpose devices; they don't have to run an OS and all that stuff in the background," and that's not really a big factor. Operating systems are designed to get out of your way and focus on the game you're running - the amount of CPU/GPU/memory bandwidth* the OS uses while you're gaming is trivial. This generation of consoles is also much less streamlined in terms of having a light OS - they're integrating all sorts of other functionality into the OS besides strictly running the game.
*Not memory usage, but that's irrelevant if you have enough to run the game.
There are two advantages in this regard that consoles have. One has been fading for a long time, the other is very expensive in terms of development work.
The former is simply that, knowing the exact target hardware architecture, they can code closer to the metal - less need for an API to make the code generic across different hardware architectures. This (if used - it's optional, and lots of games probably just code generically for the API) can improve efficiency somewhat. But PC hardware is much more standardized and API-compliant than it was 10 or 15 years ago - the APIs themselves are much less abstracting and much closer to the metal, and with Mantle/Vulkan/DX12, we've moved further in that direction than we've ever been. The advantage there is becoming tiny.
The latter is that developers can test games on their system and manually eliminate bottlenecks. For different types of games/scenes/modes, the bottleneck is not always going to be the same thing. Sometimes it may be the number of polygons being rendered, other times the physics on the CPU, etc. Sometimes the particle effects of one scene are the cause of the problem, other times an overloaded collision test. On consoles, what's bottlenecking the machine is going to be consistent across every unit. So you can have your coding/art/testing teams work together and eliminate the bottlenecks one by one, cutting things from your game to maintain that minimum performance.
By comparison, different PCs have different bottlenecks. One machine may have a faster GPU than another but a slower CPU, leading to different causes of slowdown. So you can't go in and fine-tune the content around a single bottleneck in the same way.
But this method is expensive in terms of game development resources. You’re spending time which could be used creating new assets/mechanics/whatever instead trying to cut what you already have so it can run smoothly on crippled hardware. It’s a design process that ultimately takes away from games, limits them, and increases their developmental cost.
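To make that concrete, here's a toy version of the triage I'm describing. The scene names and millisecond timings are invented for the example - real teams get these numbers from profilers:

```python
# Invented per-scene frame timings (milliseconds), for illustration only.
scenes = {
    "particle-heavy cutscene": {"cpu": 12.0, "gpu": 41.0},
    "crowded physics battle":  {"cpu": 38.0, "gpu": 17.0},
    "quiet interior":          {"cpu": 6.0,  "gpu": 9.0},
}
FRAME_BUDGET_MS = 1000.0 / 30.0   # 30 FPS target => ~33.3 ms per frame

for name, timings in scenes.items():
    worst = max(timings, key=timings.get)     # which side eats the most time
    if timings[worst] > FRAME_BUDGET_MS:
        print(f"{name}: over budget, {worst}-bound -> cut content here")
    else:
        print(f"{name}: fits the budget")

# On a console, every player hits the same numbers, so each cut sticks.
# On PCs the cpu/gpu split differs per machine, so this tuning can't work
# the same way.
```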
This also reminds me of the myth that developing for PC is a pain in the ass because you have to make it work for thousands/millions of hardware configurations. That's simply not true. Everything is very standardized through APIs. People seem to think programmers are saying "Okay, today I'll make it work with an i5-2500K CPU and a 750 Ti GPU, you take an i5-3570K with an R9 280," but that's not how it works. CPUs are all x86/x64 and generic. GPUs comply strictly with their API.
You’re right in that things were different in the late 90s. But even if what you say is true, then it’s still ignorant and misleading to talk as if nothing has changed in 15 years.
But it wasn’t true then. I certainly never had $3000 + $500 every 6 months when I was a teenager, and yet I always had a capable gaming PC.
With your other statement - “to get even decent performance on the same games” is not only not true, but highlights exactly why the situation was different then. We didn’t have to suffer through having the same games then. Console gaming in the late 90s / early 2000s was great, because they weren’t just shitty little PCs. They were their own thing. With their own games. They did not hold gaming technology back.
That time period you reference was a time of incredible advancement. Games would come out several times a year that would set a new bar and show us things we'd never seen. The technology was rapidly advancing and gaming exploited that. Consoles were off in their own little world with their own games. Multiplatform development was rare. The lowest common denominator was not holding gaming back.
If you upgraded your video card every year, it’s because technology was progressing amazingly fast, and it was a privilege, not an obligation. You should be happy that you COULD upgrade - that you could get something that was hugely better than what you already have and could enhance the experience you received, and that gaming was staying on the cutting edge so you’d experience a new wonder on a regular basis.
Instead what we have is a model of subsidized hardware supported by overly expensive software, where console makers have an incentive to make the hardware as cheap as possible and make it last as long as possible, because it’s a loss leader for them to make money back on software royalties. This is exactly the opposite of what gaming should be - it incentivizes the gatekeepers to actively retard technological advancement, and to keep software prices high.
PC is the exact opposite - hardware advances are highly incentivized because the companies making them profit on the hardware. Software prices can be lower because they aren’t paying a company for the privilege of developing games for their systems.
So in one, we get deliberately stifled technological advancement combined with high software prices, and in the other we get rapid technological advances combined with low software prices.
Which would be great, the natural order of things, except that consoles have copied PCs so hard that we have to suffer through the idea that PCs are just another platform like the Xbone and PS4. We're limited by the lowest common denominator. So all of gaming is held back to console cycles.
That puzzles me. I get why there are few PC-only games that push graphics to the extreme in a way that is labor-intensive to produce: if you're going to hire 100 asset creators for 3 years to create all those 3D models and maps, you'll need to appeal to a wide audience to make your money back.
But that isn’t the only way in which a game can push graphics to the extreme. High-end PCs might be able to have high-end and unique graphics, physics and possibly gameplay without having to create lots of assets by utilizing the greater power of the GPU to make it look interesting or even generate most of the graphics and physical interactions at runtime.
I mean, look at this sort of thing (scroll down a bit for videos):
This is still in beta but it’s a pretty one:
Does anyone have any ballpark figures for the number of people who play somewhat demanding games on PC? By “somewhat demanding” I mean games that require a dedicated GPU.
Also, any idea how many GTX 900 series have been sold? I only have a 1 million units figure from way back in January.
When games do have more complex keybinds (the biggest one for me was World of Warcraft, where I had dozens of custom keybinds for different abilities), you want stuff that you can easily reach with your left hand, which is almost always resting on WASD. All the keys that aren’t in easy reach of your WASD hand are pointless, or only good for abilities you are using out of combat / when there’s no time pressure on you. If you have to move your hand away from that area for any length of time otherwise, you’re screwed.
So yeah, when I played WoW many of my keybinds used Shift and Ctrl modifiers after I ran out of the easy-to-reach keys (which are basically Q, E, R, F, Z, X, C, V, 1, 2, 3, 4, and maybe 5). So even Ctrl-Shift-3 is a far easier keybind than, say, P, which you'd have to completely take your hand off the WASD area for, and possibly even look down at your keyboard to hit (since your hands aren't in the default typing position - your right hand is on the mouse).
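The arithmetic is why those modifiers matter so much. A quick sketch, using just the easy-reach keys I listed:

```python
# The "easy reach" keys listed above, multiplied out by modifier states.
easy_keys = ["Q", "E", "R", "F", "Z", "X", "C", "V",
             "1", "2", "3", "4", "5"]
modifiers = ["", "Shift+", "Ctrl+", "Ctrl+Shift+"]

binds = [mod + key for mod in modifiers for key in easy_keys]
print(len(binds))   # 13 keys x 4 modifier states = 52 binds, no hand movement
print(binds[:4])    # ['Q', 'E', 'R', 'F']
```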
Carry on - none of this really matters much to the OP but it was just a very strange comment so I felt compelled to respond to it.
Holy cow, Nvidia did good with the GTX 970. It’s the most popular GPU after Intel’s HD 4000!
Just taking Steam active accounts into, eh, account, that’s almost 7 million of those things in the wild, at least. I’d wager it’s probably closer to 9 million overall.
So, JUST taking Nvidia cards into account, and just Steam active accounts, that's over 25 million PCs with graphics capabilities that match or surpass a current-gen console. It's probably closer to 30 million if we toss in AMD cards, at least. The actual number is probably larger than that, given Steam isn't all of PC gaming.
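For what it's worth, here's the back-of-envelope math behind those figures. The account count and survey shares are my own assumed round numbers for illustration, not actual Steam data:

```python
# Assumed round numbers for illustration only; not actual Steam survey data.
active_steam_accounts = 125_000_000  # assumed ballpark of active accounts
gtx_970_share = 0.055                # assumed ~5.5% hardware survey share
console_class_share = 0.20           # assumed share of console-class Nvidia cards

print(f"GTX 970s: ~{active_steam_accounts * gtx_970_share / 1e6:.1f} million")
print(f"Console-class Nvidia cards: "
      f"~{active_steam_accounts * console_class_share / 1e6:.0f} million")
# -> ~6.9 million and ~25 million, in line with the estimates above
```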
That's jumped up several million since I last checked, about 6 months ago.
And that's the weird thing. With the new consoles hitting their stride, I figured the new PC gamers who came in at the end of the last generation would go back… but it seems PC gaming continues to grow. Steam just hit 13.5 million concurrent users a week ago, a new high!
As Beef said (using a whole lot of words), PC gaming is finally cheap enough and easy enough to deliver an experience that is comparable to console gaming. “It just works” is finally an assumption you can make with a gaming PC and that’s a good thing for everybody.
Seconding the user interface. Generally console games don’t use mouse or mouse-equivalent controls, which is great.
Except for FPS games, I find mouse controls very sluggish. I would like to have PC games playable solely on the keyboard, as I can type much faster than I can move my mouse across different screens.
For instance, RPGs typically have very tight controls on consoles, like Final Fantasy III on SNES with 8 buttons and a control pad, all easily reachable or Final Fantasy on NES, with 4 buttons and a control pad. (There are also really bad design schemes as well, so it’s not a universal improvement over PC)
But generally, not having to move your hands between keyboard and mouse, or waste time moving the pointer across the screen, is a big efficiency improvement. I feel like keyboard hotkeys in most PC games are an afterthought, which is why you end up with lots of hotkeys spread all over the keyboard that are impossible to remember or reach easily while using a mouse simultaneously.
FWIW, I also find PC graphics concerns annoying, so even though graphics are underutilized on console games, I find it an improvement. (Don’t get me wrong, I still play a lot of PC games).
You think opening up menus and submenus and sub-submenus and scrolling through lists with an analog stick, and holding down one button to open up a radial menu and then trying to line up the other analog stick to select something…
You think these things are harder to do with a mouse and keyboard interface? No… no.
I’ve largely only done PC gaming. I haven’t owned a console since the original Playstation.
Your inability to use a keyboard without moving your hand and misplaced snark aside, 104+ keys is certainly a hell of a lot more options than your console controller offers, meaning that, once again, you have a lot more options for single-press keybinds than going through radial or drop-down menus.
Of course, I used to raid as a bard in Everquest but your World of Warcraft experience is cute as well