PS4 to be announced Feb 20th - Your predictions

Meh. Not biting. They said the exact same stuff with the last generation.

What counts is the execution… what surprises me is the hardware selection. That alone makes me think they’re trying to cater to what game developers are already good at… x86 code. If Microsoft makes a similar decision, then the only things separating your consoles from a PC are the API and the expansion slots.

The API wouldn’t be a separating factor; everything would be DX11.1.

The separating factors would then be power, pace of technological advancement, control over your games, community content creation, etc.

Is Sony going DX? Because DirectX is a Microsoft technology. I thought Sony went for a modified OpenGL.

ETA: And there’s more to an API than graphics, I/O, and sound. They could add a lot of things to make stuff easier, like an AI API or something.

Oh, right, I jumped the gun, forgot about that. I don’t know. I would hope so, for development cost/capability/etc reasons.

DirectX 11.x is roughly equal to the OpenGL 4.x specification. There are some minor differences in the way you set up your pipeline between them (mostly having to do with shaders), but swapping between OpenGL and DX isn’t a big deal if your rendering system is competently written. Hell, libraries like SDL already exist that abstract away the differences and let you switch between OpenGL and DirectX with a simple flag (again, except for shaders).
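For what it’s worth, here’s roughly what that “simple flag” looks like with SDL2’s built-in renderer (a minimal sketch, not production code: window size, colors, and the delay are placeholders, error checks are omitted, and, as noted, shaders are where the real per-API work lives):

```cpp
#include <SDL.h>

int main(int argc, char *argv[]) {
    // The one "flag": ask SDL to back its renderer with OpenGL or Direct3D.
    SDL_SetHint(SDL_HINT_RENDER_DRIVER, "opengl");   // or "direct3d" on Windows

    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window *win = SDL_CreateWindow("demo",
                                       SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
                                       640, 480, 0);
    // The same drawing calls below run through whichever driver was hinted.
    SDL_Renderer *ren = SDL_CreateRenderer(win, -1, SDL_RENDERER_ACCELERATED);

    SDL_SetRenderDrawColor(ren, 32, 32, 64, 255);
    SDL_RenderClear(ren);
    SDL_RenderPresent(ren);
    SDL_Delay(2000);

    SDL_DestroyRenderer(ren);
    SDL_DestroyWindow(win);
    SDL_Quit();
    return 0;
}
```

That only covers the 2D render path, of course; a full 3D engine still needs its own abstraction layer, which is exactly the part a competently written rendering system already has.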

And I should add they weren’t wrong. The new technology we get with every generation really does increase the creative freedom available to developers. The whole snobby anti-technology attitude, the idea that more advanced technology in gaming is somehow a bad thing, comes entirely from people handwaving away the downsides of long console cycles that lock us into one level of technology for several technological generations, and/or from people wanting to tell us how much cooler they are than us because they aren’t dazzled by such plebeian things as more immersive worlds, better graphics, and so on.

To be logically consistent, they’d have to bah humbug all this demonstration stuff.

I generally favor the open formats in this sort of competition, but DX has been so widespread and standardized that I think it’d be easier, and we’d squeeze the most out of it, just by sticking with it. But I don’t know whether you have to license DX, or how that would work for Sony.

I can’t tell if these last 2 demos have run at extremely low frame rates, or if the quality of the stream has gone down a lot.

I’ll say it: I think Microsoft would benefit from dropping DirectX for OpenGL. DirectX is a nice API and its shading language has some real benefits, but its main benefit from Microsoft’s perspective is that it keeps people “loyal” to Windows/Xbox.

However, there seems to be an OpenGL push going on. More and more games are becoming cross-platform or cross-OS, especially evidenced by Valve’s sudden push and support for Linux (and the Piston). If the new Xbox only supports DX, it could actually have the reverse of the intended effect and end up with fewer cross-platform games because it’s the odd man out.

I know I just said switching between them is easy, but it’s not trivial, and smaller companies may not want to take the time to port all the graphics code when it can run on PC, Mac, Linux, PS4, (Wii U?), but not the new Xbox.

I really don’t understand your leaps of logic. Snobby anti-technology attitude? From the same guy that said multi-threading was hard, so they’re not going to do it?

“Folks are cooler than you because they’re not dazzled by plebeian…” Do you really talk like that? Who’s the snob?

You need to wait and see if the numbers coincide with ‘modern mid-tier PC hardware’. I suspect they might.

I predict they will NEVER use DirectX. They’re too proud, and they want to maintain SOME kind of proprietary control over the codebase.

As far as demos are concerned, I merely need to point to Malice, the darling of the early Xbox presentations. By the time it finally came out, it was crap.

Also, the last non-Microsoft console to implement DX was the Dreamcast, and I don’t think Sony wants to be the next Dreamcast. :stuck_out_tongue:

What? Being against slow clock speeds isn’t somehow anti-technology. If they feel they can make up for slow clock speeds with more cores, it’s a bad design decision: fewer, faster cores give better results for the vast majority of processing applications. There’s no reasonable way you can interpret that as an anti-technology attitude.

The people who imply that the only ones who want technical advancement in video gaming are simpletons who require more explosions and glitz. All of that “I prefer gameplay over graphics” nonsense, used as an argument for stopping the advancement of technology, is snobbishness.

I’ve been saying “according to rumors” or “it looks like”; I’ve been careful to make clear I’m only working with rumors.

Did they ever show what the machine will look like?

Have you never watched a console reveal before? This is the same speech with Dreamcast/GameCube/PS2/PS3/Xbox 360 crossed out and PS4 written in.

Doesn’t look like it.

Aside: I never said they’d have a slow clock rate. I said they have economic constraints PCs don’t have. I DID say it’d be a multicore solution, which was pooh-poohed here as being ‘too hard to code for’.

Course, coding to a CPU and GPU IS coding for multiple cores.

Now, having a separate processor for background downloading, while a little odd, does hopefully take care of my single biggest gripe with the PS3.

Still, no matter WHAT they pull out of their hats, I suspect this is the generation I don’t bother to get.

Nope. No look at the console. No price. No release date beyond “Holiday 2013.”

And which of them were wrong? Which console was not progress in terms of allowing more flexibility and creativity in game design?

My comments were in regard to the rumor that said the next Xbox would be 8-core with a 1.6 GHz clock speed. So who cares what you said? You’re responding to a comment of mine that was made in reference to that information.

And I don’t think you understand what the actual limitations of multithreaded programming are. That’s fine; most people don’t. But then most people don’t go around claiming that the reason programs aren’t more multithreaded than they currently are is anti-technology Luddites and developers who aren’t willing to try hard enough.

There are technical hurdles that keep certain sorts of work from becoming something you can actually multithread. Where parallelizing over data works, say, graphics processing, we’ve parallelized the processing very efficiently. It’s not quite apples to apples, but there are graphics processors with over 2,000 cores working at very high efficiency. This isn’t because graphics programmers try harder, but because shader and pixel-pushing code is suited to heavy parallelization, whereas many of the things a CPU does are not.
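To make the contrast concrete, here’s a rough sketch (illustrative C++ only, names made up) of why per-pixel work parallelizes so cleanly: every element is independent, so you can carve the data into chunks and hand each core, or each of those thousands of GPU lanes, its own slice.

```cpp
#include <algorithm>
#include <cstdint>
#include <thread>
#include <vector>

// Brighten an image: each pixel depends only on itself, so the loop can be
// split across any number of workers with no coordination between them.
void brighten(std::vector<uint8_t> &pixels, unsigned workers) {
    std::vector<std::thread> pool;
    const size_t chunk = pixels.size() / workers;   // assumes workers > 0
    for (unsigned w = 0; w < workers; ++w) {
        size_t begin = w * chunk;
        size_t end = (w + 1 == workers) ? pixels.size() : begin + chunk;
        pool.emplace_back([&pixels, begin, end] {
            for (size_t i = begin; i < end; ++i)
                pixels[i] = static_cast<uint8_t>(std::min(255, pixels[i] + 20));
        });
    }
    for (auto &t : pool) t.join();
}
```

Game logic is the opposite case: step N usually needs the results of step N-1 (physics resolving collisions, AI reacting to what just happened), and that dependency chain is precisely what you can’t slice up this way.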

You don’t know what you’re talking about on this subject. The statement you just made is effectively meaningless in regards to the current discussion.

Sure. Let’s talk deadlocks, race conditions, and SIMD instructions. While we’re at it, we can debate the differences between CUDA and OpenCL.
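To pick the first of those, here’s a race condition in miniature, plus the standard fix (a throwaway C++ sketch, nothing more):

```cpp
#include <atomic>
#include <iostream>
#include <thread>

int unsafe_counter = 0;            // incremented by two threads with no synchronization
std::atomic<int> safe_counter{0};  // atomic read-modify-write avoids the race

void work() {
    for (int i = 0; i < 100000; ++i) {
        ++unsafe_counter;          // non-atomic increments can interleave and get lost
        ++safe_counter;            // every atomic increment lands
    }
}

int main() {
    std::thread a(work), b(work);
    a.join();
    b.join();
    // unsafe_counter usually comes out below 200000; safe_counter is always exactly 200000.
    std::cout << unsafe_counter << " vs " << safe_counter << '\n';
    return 0;
}
```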

Don’t presume I don’t know what I’m talking about.

Alright, first why don’t you flesh out an explanation of why GPUs, which are massively parallelized with many cores, get coded to very high efficiency, whereas CPUs, which have been rocking multiple cores for longer than GPUs have had general-purpose shader units, are utilized far less efficiently in terms of multithreading?

Obviously, CPUs aren’t unable to function in that manner, because on the few tasks that are suitable for a high degree of parallelization, like video encoding, you do see multiple cores used efficiently.

So then why, for the vast majority of applications, is the main program thread still heavily bottlenecked by a single core?
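Back-of-the-envelope, that’s Amdahl’s law: speedup = 1 / (S + (1 − S)/N), where S is the fraction of the work that has to stay serial and N is the core count. If, say, 60% of a frame’s work is inherently sequential, then with S = 0.6 and N = 8 you get 1 / (0.6 + 0.4/8) ≈ 1.5×, no matter how hard anyone tries. (The 60% is just an illustrative number, not a measurement.)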

Specify ‘vast majority’. You think the big-boy game shops don’t have the technical skill to spin work off onto other execution lines? Really? In a 20+ BILLION dollar industry?

Sony, Microsoft, Blizzard (mentioned tonight), Bethesda, 2k.

Think these guys can’t farm off some extra cycles to handle AI? I don’t HAVE to flesh out my explanations. You’re the one calling other people’s creds into question.
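And sure, the bare mechanics of farming AI off to another core are only a few lines of C++ (a toy sketch with made-up names; real engines use job systems rather than one-off threads):

```cpp
#include <future>
#include <vector>

struct Enemy { float x = 0, y = 0; };

// Hypothetical per-frame AI pass: plan a move for every enemy.
std::vector<Enemy> plan_ai(std::vector<Enemy> enemies) {
    for (auto &e : enemies) e.x += 1.0f;   // placeholder "thinking"
    return enemies;
}

int main() {
    std::vector<Enemy> enemies(1000);
    for (int frame = 0; frame < 3; ++frame) {
        // Kick the AI pass onto another core while the main thread keeps rendering.
        auto ai = std::async(std::launch::async, plan_ai, enemies);

        // ... main thread: render and handle input using last frame's state ...

        enemies = ai.get();   // sync point: collect the AI results for the next frame
    }
    return 0;
}
```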

Shall we talk about parallelizing Snort using PF_ring in hardware accelerated NICs? How about massively parallelized brute force password guessing using OCLhashcat?

I know my shit. ‘rocking multiple cores’? Please. Take it to the Pit.