and not single, common (and fast) memory for both of them, just like most consoles?
Not only would it be more efficient (no syncing issues, which is especially important in GPGPU), it would probably also be easier and more flexible to program for…
and not single, common (and fast) memory for both of them, just like most consoles?
Many computers do in fact use common system memory for both. Most computers come stock with integrated graphics as part of the motherboard or CPU and just use the system memory. Only the separate graphics cards use their own RAM. As to why I’ll have to leave that to the more technically inclined on the board.
Graphics cards care more about speed than reliability, so they generally use a different type of RAM, one that focuses on raw bandwidth and skips error correction. It also helps to have dedicated memory that you don’t have to worry about losing to other system processes; that reduces overhead.
Yeah, but they do that for economic reasons: cheap chip, cheap memory, and sadly very poor performance. What I don’t understand is why they’re not making something similar for high-performance systems (only using fast memory chips like GDDR5 or something like that; yeah, that would be expensive, but gamers usually don’t care that much).
Using the same memory for both CPU and GPU would slow everything down. Having separate data buses increases your memory bandwidth tremendously. Also, it’s a very rare case that the CPU cares what the video memory is doing.
It is becoming more common to use the GPU to perform general-purpose calculations that were once only done on the CPU, since a GPU is far faster than any CPU at parallel work (hundreds to thousands of processing units, although not necessarily able to do everything a CPU can do). In this case, shared video memory would be advantageous; CPU manufacturers are even starting to make combined CPU/GPU units. The advantage of GPU computing is enormous; the fastest GPUs can exceed a *teraflop* - something once associated with football-field-sized supercomputers.
(note “as of 2010”, which is very outdated in the computing world)
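To make the “thousands of processing units” point concrete, here’s a toy sketch in plain Python (made-up function names, illustrative only - real GPU code would be written in GLSL, CUDA, or the like). Each output pixel depends only on its own input, so a GPU can hand every pixel to a different core.

```python
# Toy "pixel shader": each pixel is computed independently of all
# the others, which is exactly what lets a GPU run thousands of
# these in parallel.

def shade(pixel):
    """Brighten one RGB pixel by 20%, clamping each channel to 255."""
    return tuple(min(255, round(c * 1.2)) for c in pixel)

def render(pixels):
    # On a CPU this loop runs serially; on a GPU every iteration
    # could execute at once, because no iteration reads another's
    # result.
    return [shade(p) for p in pixels]
```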
Yeah, I know.
But, the types of problems that GPUs can solve are not very prevalent in the universe of PCs. Only a tiny fraction of PC users need to render photo-realistic images, for example.
In fact, that’s a great underestimation of what a GPU can do. Any algorithm that can be parallelized will potentially benefit from use of a GPU, and CG-related algorithms are not the only ones…
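As a non-graphics example of the kind of parallelizable algorithm meant here, a quick Python sketch (illustrative only): Monte Carlo estimation of pi, where every sample is independent, so the work splits cleanly across any number of workers.

```python
import random

def estimate_pi(samples, seed=0):
    """Monte Carlo estimate of pi - nothing to do with graphics.

    Every sample is independent of every other sample, so the work
    divides perfectly across any number of workers: exactly the shape
    of problem a GPU's thousands of cores are built for.
    """
    rng = random.Random(seed)  # seeded for reproducibility
    inside = sum(
        1
        for _ in range(samples)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4 * inside / samples
```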
Physics research, for one, has benefited greatly from calculations done on GPUs. Though I’m not sure that really invalidates the point that most PCs aren’t used for tasks like that. What does the typical user ever do that could benefit from massive parallelization?
I never said they were.
But, there simply aren’t that many applications in everyday computing that really cry out for GPU processing. If you think there is such a crying need, name 5 problems that are currently bottlenecked by lack of processing power (and I don’t mean problems that require a supercomputer, like weather prediction or nuclear weapons modeling) - problems that your average PC user is facing.
I’m using one right now. The advantage is that I get fairly good graphics performance (AMD Radeon HD 6310) at a pretty cheap price.
The disadvantage is that, for reasons noted above, it’s just fairly good - I could never play Skyrim, for example.
I use Bryce 7 in my work.
It takes a LOT of time to render a scene since I am only using an E2200 CPU.
In fact, if I do an intensive render, I have to release one of the cores just so I can surf the web while I wait the 15-45 minutes that a high-res picture takes to complete.
I can’t imagine why a GPU wouldn’t be an order of magnitude better in terms of speed.
I suppose there are several companies working on this, because relatively cheap desktop animation is right around the corner or already here.
That would genuinely change my workflow and would justify the purchase of a new computer.
Game consoles can share memory between the CPU and the GPU because they’re designed as a discrete unit. In a desktop, though, the GPU and the CPU are separate. In theory you could still share memory, but that would require a custom bus to give the GPU fast access to memory. PCIe isn’t designed for that.
Having separate CPU and GPU memory has the following advantages for manufacturers of GPUs:
- They can use the standard PCIe bus, which is used for other high-bandwidth I/O applications like networking or solid state storage. This is cheaper and easier than designing a custom bus.
- Network effects are also in play here: GPU makers design PCIe cards because motherboards have PCIe slots, motherboard designers put PCIe slots on their motherboards because people want to put PCIe cards (including GPUs) in their machines. A GPU maker who wanted to go with a shared memory design would have to design their card for a custom bus, and would likely fail in the market because very few motherboards (if any) would actually support the bus. This is a big reason why new versions of the PCIe standard are backwards compatible with previous revisions – to prevent network effects from killing interest in the new version.
- By putting their own memory on the graphics card, the designer can guarantee the amount of memory that they have available and the performance level of the memory. I imagine that this makes it significantly easier to write the drivers and software that uses the card.
- Also, one way that a GPU maker could segment their market is by having different cards with different amounts of memory, or different memory speeds.
45 minutes! I’m a (very amateur) user of Luxrender on a quad-core Athlon 2400. I render some scenes for several hours! Then of course you realise you set the specularity on the nose different to the rest of the face…
Generate a dxdiag.txt for your PC. Most likely you will see something like this (obviously depending on hardware and Windows version):
--------------- Display Devices ---------------
Card name: ATI Mobility Radeon HD 5800 Series
Manufacturer: Advanced Micro Devices, Inc.
Chip type: ATI display adapter (0x68A1)
DAC type: Internal DAC(400MHz)
Device Key: Enum\PCI\VEN_1002&DEV_68A1&SUBSYS_10611462&REV_00
Display Memory: 2797 MB
Dedicated Memory: 1014 MB
Shared Memory: 1783 MB
So you are almost certainly already using “common memory for both”.
No, for several reasons.
Graphics cards have RAM chips soldered onto their boards, with the traces for the DRAM bus(es) carefully laid out so that the high data rates they use can, you know, work. CPUs generally have to deal with memory expansion slots, and at the frequencies we’re talking about, you simply could not have DIMMs which work reliably at the data rates of graphics RAM.
Consoles may have “shared” memory, but it’s only “shared” inasmuch as they’re using the same DRAM chips. The RAM used by the CPU and that used by the GPU are partitioned off. And since the GPU is still addressed using memory-mapped I/O, it can be no other way right now.
General-purpose floating-point calculations. Consumer stuff is not really floating-point-intensive aside from 3D games, and in those cases the GPU is better used for graphics.
Though they’re on the same piece of silicon, in terms of bus topology they still connect to each other as though they were discrete parts. The GPU is still a PCIe device and still sends and receives data using memory-mapped I/O.
Eh, so what? The Core i5 I’m using has the kind of performance once associated with gigantic supercomputers.
We’re getting close to a decade into multi-core CPUs, and the problems that can be solved via parallelism have largely been found already. GPGPU applications excel at “embarrassingly parallel” situations like HPC - they work great for stuff like SIMD and single-precision floating-point - but they’re lousy at the kinds of things the average person does with their PC.
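A rough illustration of that split, as toy functions in plain Python (made up for illustration, not real GPU code):

```python
# Two workload shapes, sketched side by side.

def running_balance(transactions):
    # Typical desktop-app shape: each step depends on the previous
    # result, so a thousand extra cores can't speed it up.
    balance, history = 0, []
    for amount in transactions:
        balance += amount
        history.append(balance)
    return history

def scale_samples(samples, gain):
    # "Embarrassingly parallel" / SIMD-friendly shape: every element
    # is independent, so a GPU could process them all simultaneously.
    return [s * gain for s in samples]
```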
Could GPGPU work on… say, Excel?
I would buy a monster video card just for speeding up Excel if Microsoft did that.
(I regularly push the limits of what should be done in Excel)
Doubtful, unless you are using the same formula in each of your cells…
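A quick Python sketch of why that caveat matters (hypothetical spreadsheet-ish functions, not anything Excel actually exposes): a column where every cell applies the same formula to its own row is trivially parallel, while a chain of cells each referencing the one above forces serial evaluation.

```python
# Hypothetical spreadsheet shapes (illustrative only).

def uniform_column(prices):
    # "Same formula in every cell", e.g. =A2*1.08 filled down a column:
    # each row is independent, so a GPU could compute all rows at once.
    return [p * 1.08 for p in prices]

def chained_cells(start, rates):
    # =B1*(1+A2) style chain: each cell reads the cell above it, which
    # forces serial evaluation - GPU offload buys nothing here.
    value = start
    for r in rates:
        value *= 1 + r
    return value
```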
A common memory would require communication via the main system bus. That runs slower than the dedicated memory channel on a graphics card.
In addition, the display and GPU must address the memory quite often. If they had to do so on the same bus that the CPU is using to read memory, get keyboard/mouse input, send/receive info from the Internet, and fetch/store data from the disks, that bus would be seriously congested and the system would frequently be waiting for a free moment on it. A separate graphics memory with a dedicated I/O channel frees up the main system bus a lot.