This is not an argument.
My argument is this: many slow cores is a strategy that can work in applications whose data is suited to highly parallelized execution.
My evidence is that we’ve seen this in the real world. In video encoding and rendering, two workloads that are highly parallelizable, we see genuinely efficient use of multicore processing, especially on GPUs, where core counts run into the thousands.
And yet, in an environment where multicore CPU processing has been available for just as long, we see the main thread of games bottlenecked on a single core.
If the main game thread in a video game were as parallelizable as rendering or video encoding, we would’ve seen multithreaded apps and games that properly take advantage of this. For the most part, we have not. The explanation is that not all data can be parallelized in the same way. If you throw a non-parallelizable problem at a processor with many slow cores, you just end up dragging along at the speed of the slowest thread while most cores sit idle.
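This is essentially Amdahl’s law: the serial portion of a workload caps the benefit of adding cores, no matter how many you throw at it. A minimal sketch, with illustrative (not measured) parallel fractions:

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Theoretical speedup when `parallel_fraction` of the work
    can be spread evenly across `cores` cores; the rest is serial."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Rendering/encoding-like workload: ~99% parallel.
print(amdahl_speedup(0.99, 1000))  # ~91x on 1000 cores

# Game-main-thread-like workload: ~30% parallel.
print(amdahl_speedup(0.30, 1000))  # ~1.4x on those same 1000 cores
```

Even with a thousand cores, the mostly serial workload barely moves, which is exactly the “dragging along at the speed of the slowest thread” effect.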
It’s not a lack of developer commitment to the idea of parallelizing data: the fact that we render and encode at near-theoretical limits of efficiency with these designs proves that. It’s the unsuitability of the data.
Just saying “oh, the industry is big, they’ll find a way” misses the point. It’s not the best way to tackle this problem. Even if they manage to work around the limitations to some degree, why design those limitations in to begin with? A large number of slow cores is poor design for a general-purpose CPU.
You actually do need to flesh out your explanation. See, I’ve made an argument. It’s logically sound. It explains the current situation. You, on the other hand, may have entered some of those terms into Google and then name-dropped the resulting jargon to scare me off. In any case, you certainly have not offered an explanation for the current status quo with regard to multithreaded execution.
Uh, no? What do these have to do with the current discussion? Even if they were somehow relevant, they’re just another example, alongside video encoding, where the data is suitable for parallelized execution.
Doth protest too much. You open by saying “Sure. Lets talk deadlocks, race conditions, and SIMD instructions. While we’re at it, we can debate the differences between CUDA and OpenCL.” and then finish by implicitly saying you’re not actually going to discuss any of those things, and that if I dare question your knowledge I should take it somewhere else.