Actually, they do. Not in comparison with the CPU or graphics card, but they can easily become too hot to touch, especially if you have several in a confined space. And that’s all heat that needs to be cooled.
OS X has four categories of RAM: Free, Wired, Active and Inactive. Wired is RAM that cannot be swapped to disk, Active is information currently in memory and recently used, and Inactive is information still in memory but not actively used.
Inactive memory is what’s freed up when you kill a long-running app that consumes a lot of memory. Normally there’s no performance hit to the system until Free memory drops below a certain threshold, then the system slows down as it’s writing to cache, flushing memory and waiting while the OS makes memory available to the processes that are requesting additional space.
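If you want to watch those four numbers yourself, OS X’s vm_stat command reports them as page counts; here’s a rough sketch of converting them to megabytes (assuming the usual vm_stat output format):

```
import re
import subprocess

# Rough sketch: run OS X's vm_stat and print the main memory
# categories in MB.  Assumes the usual output format, i.e. a
# "page size of N bytes" header and lines like "Pages free:  12345."
out = subprocess.run(["vm_stat"], capture_output=True, text=True).stdout

page_size = int(re.search(r"page size of (\d+) bytes", out).group(1))
pages = dict(re.findall(r"Pages ([a-z ]+):\s+(\d+)\.", out))

for label in ("free", "active", "inactive", "wired down"):
    if label in pages:
        mb = int(pages[label]) * page_size / (1024 * 1024)
        print(f"{label:>10s}: {mb:8.0f} MB")
```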
When I was noticing issues, the LARGEST CONSUMER of Inactive memory was typically a browser. Think Firefox with five windows, each with 5 to 12 tabs. That’s a LOT of memory consumed when you consider half of those may be running Flash, and they all keep a history of previously opened tabs.
This was based on actual data from running benchmarking utilities on Ubuntu. It may be down to the choices made in that app, and it represented a range (from around 200 ms for the slower drive down to 50 ms for the faster one). The takeaway was that there was a big range in speed, depending on the drive selected. The slow one was a WD Green series drive (where green = slow, I guess); the other was the default drive Dell selects for their Alienware Aurora desktops.
Your experiences and mine differ greatly. Small 2.5" laptop drives don’t generate a LOT of heat; 3.5" 160 GB drives in aluminum enclosures with no cooling most certainly DO.
Bump:
I knew there was a thread in the past couple of months that addressed this…
So what is the answer? Which is supposed to be the better processor, i3 or i7? I can’t even tell whether they’re numbered up or down in order of purported quality…
I suppose it doesn’t matter a lot, but when I purchase a new computer for the first time in four years, it’d be nice to have a clue which processor is supposed to be better when it comes down to evaluating computers in the same price range.
i3 < i5 < i7
Budget < Mainstream < Enthusiast
Office < Gamer < Hard-core Gamer or modelling number-cruncher
Thanks. No i3s, then.
Not exactly. Within the same architecture, yes, but since we are straddling the boundary right now, that doesn’t really hold.
For example, an i5 2500K is far superior to an i7 920.
This weekend we’re trying a CFD calculation on the new shared PC across the hall. I tried it last weekend on my laptop and it ran about 36 hours of CPU time (modeling 60 seconds of physical time) before stopping with an insufficient-memory error; my laptop runs Win XP and so can’t use even 4 GB of RAM. The new PC runs Win 7 and has 12 GB, and the processor is supposed to be more powerful too. Will report how far it gets.
About the processors, though - when I got this laptop late last fall, I tried looking up what its processor was all about. It’s an i7. Apparently this does not mean much, as Intel is putting this brand on various processors over a wide range of designs and performance for marketing reasons. In other words, it’s advertising, not a design or a performance spec. At least 386 actually meant a specific physical design.
The new i-series chips like the i7-2600 and i7-2600K are out. I haven’t built my 2600K yet, but by all accounts it’s more efficient than the i7-9xx models. I know the TDP on mine is 95W vs 125W - not that most people will ever operate their chips at anything close to the thermal design power, but it gives you an idea of the potential difference in heat dissipation (as measured in watts).
I haven’t paid much attention to the i3s and i5s since I do distributed computing and care mainly about maximizing the load on the CPU - which is what simultaneous multithreading (Hyper-Threading) on the i7’s (both 9xx and 2600) does.
Here is the wiki entry for all of the i7’s - List of Intel Core i7 processors - Wikipedia
The new ones are also produced on a 32nm lithography process (Sandy Bridge) rather than the 45nm used for the Bloomfield chips. This is a big part of the lower power consumption.
There is even a low-power 65W TDP model but I haven’t heard anything about it yet. Been a little behind lately.
I picked up a Kill A Watt the other day at Lowes. My i7-920’s power utilization was kinda interesting… (our power rate is $0.119 per kWh)
At full tilt, transcoding movies and ripping DVDs, it draws 280 watts; if it could run like that 7x24x365, it would cost about $250 a year in power.
At idle, but powered up, it would cost about $110 a year - 105 watts or so.
On sleep, it goes down to $10 a year, pulling 5-8 watts.
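For anyone wanting to sanity-check numbers like that, the arithmetic is just watts, to kilowatt-hours per year, to dollars; a quick sketch at the $0.119/kWh rate above (the idle case works out to roughly the $110 quoted):

```
# Annual electricity cost from a steady power draw.
RATE = 0.119              # dollars per kWh, from the post above
HOURS_PER_YEAR = 24 * 365

def annual_cost(watts):
    kwh_per_year = watts / 1000 * HOURS_PER_YEAR
    return kwh_per_year * RATE

print(f"${annual_cost(105):.2f}/year at 105 W idle")   # ~ $109
```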
Unfortunately, somewhere along the line, both Ubuntu and Windows 7 have forgotten how to dependably make it sleep.
FWIW, the home entertainment center at full tilt draws 400 watts and with everything as off as it can get, still draws 95 watts.
Well, crap, it ran 90 hours and just covered 12 physical seconds. What DID I change in there, anyway? Looks like there’s more work for us making the problem solve more efficiently.
I’ve seen this sort of problem before. Are you multi-instancing or multi-threading your big app? If so, you may be having a cache overflow problem. Try cutting it to a single thread or instance.
Speaking as an awfully uneducated individual in this particular niche, is it parallelizable? You might find improvements using CUDA or OpenCL to utilize the GPUs on the video card(s).
I’ve also heard good things about buying cloud time to solve big problems quickly.
Oh, heavens, I don’t even know how to find out, Quartz and Unintentionally Blank. I’m starting a single instance of the application, and they say it is multithreaded, and it gives the option of using 1, 2, 3, or 4 cores. It is set to use 4 cores, and the performance monitor says all 4 are bouncing around 60 to 90 percent or so. I don’t know how to look for cache overflow or how to use GPUs (do CUDA or OpenCL work outside of an existing application?). Can you give me any hints?
I could easily imagine that some unfamiliar or unexplored setting inside the application is making things much more difficult than they need to be. I mean, there must be a hundred different kinds of settings.
I’d assumed it was your own software (like in a university setting). CUDA/OpenCL is only used if the developer specifically codes to it.
For example, if I’m hacking passwords (for my job. Honest.) using the i7 processor, I can guess about 45 million hashes a second (not too shabby); if I use OpenCL and wrangle the two video cards into the mix, that number jumps to 770 million hashes a second. But these are very simple, parallelizable operations. A video card may have 250 processing units, and I can enlist each one of them (rough sketch after the list below) so that:
Unit 1 password guesses: cookie, Cookie, Cookie1, Cookie!..
Unit 2 password guesses: cootie, Cootie, Cootie1, Cootie!..
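A very hand-wavy, CPU-only sketch of that splitting idea, using Python’s multiprocessing in place of OpenCL and MD5 purely as a stand-in hash (the real tools drive the GPU directly):

```
import hashlib
from multiprocessing import Pool

# Toy illustration of handing each worker its own slice of the
# guess list, the way an OpenCL kernel hands each compute unit its
# own slice of the keyspace.  MD5 and the guesses are stand-ins.
TARGET = hashlib.md5(b"Cookie1").hexdigest()

def crack(chunk):
    for guess in chunk:
        if hashlib.md5(guess.encode()).hexdigest() == TARGET:
            return guess
    return None

if __name__ == "__main__":
    candidates = ["cookie", "Cookie", "Cookie1", "Cookie!",
                  "cootie", "Cootie", "Cootie1", "Cootie!"]
    chunks = [candidates[:4], candidates[4:]]   # unit 1, unit 2
    with Pool(len(chunks)) as pool:
        hits = [g for g in pool.map(crack, chunks) if g]
    print("found:", hits)
```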
Process Explorer may be able to help, but the easiest way is to simply run it with a single core. If that works, increase it to two cores.
Quartz, do all the cores share a cache? Is the idea here that I am getting too many cache misses and spending too much time swapping memory into the cache, sort of like thrashing with RAM and a disk drive? If this is the case, stopping some cores would reduce what was required of the cache, and it would hit so much more often that the loss in processors would be more than compensated - is that it? I can certainly select 1 to 4 cores, and will try that.
Correct. There are three layers of cache: each core has its own L1 and L2 caches, then there’s the shared processor (L3) cache. And with the i7, that’s a dynamic cache.
Link - look in the tech specifications.
Note that I’m not saying that this is your problem, just that I’ve seen what you’re experiencing, and thrashing the cache (as you put it) turned out to be the problem.
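One crude way to test for that is to time the same (small) case at 1, 2, 3 and 4 cores and see whether the speedup flattens out or even reverses. If your solver can be launched from the command line, something along these lines would do; run_solver and its --cores flag here are made-up placeholders for however you actually start a batch run:

```
import subprocess
import time

# Hypothetical scaling test.  "run_solver" and "--cores" are
# placeholders -- substitute whatever launches your case in batch mode.
def run_case(cores):
    start = time.time()
    subprocess.run(["run_solver", "--cores", str(cores), "case.dat"], check=True)
    return time.time() - start

baseline = run_case(1)
for cores in (2, 3, 4):
    elapsed = run_case(cores)
    print(f"{cores} cores: {elapsed:8.1f} s, speedup {baseline / elapsed:.2f}x")
# Speedup well below the core count (or getting worse) suggests the
# run is waiting on memory/cache rather than on the CPUs.
```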
BTW you have turned off hyper-threading, right?
Of course. What kind of an idiot do you think I am? But, as a courtesy to the others who are following the thread, could you possibly elaborate on what hyperthreading is, and how and why one would turn it off?
Hyper-threading is where you have one core pretending to be two. Wiki article here.
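If you’d rather check it from inside the OS than in the BIOS, comparing logical versus physical core counts does it. A small sketch using the third-party psutil package (an assumption on my part; it’s not in the standard library):

```
import psutil  # third-party: pip install psutil

logical = psutil.cpu_count(logical=True)    # what Task Manager shows
physical = psutil.cpu_count(logical=False)  # actual cores

print(f"physical cores: {physical}, logical processors: {logical}")
if logical and physical and logical > physical:
    print("Hyper-threading (or SMT) appears to be enabled.")
else:
    print("Hyper-threading appears to be off or unsupported.")
```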
Thanks, Quartz. I see four processors in task manager, not eight, and it’s an i7 processor. This means hyper-threading is off, right?
Poking around a little on the web for postings about the particular application I’m using and hyper-threading, I see screenshots of CPU activity in task manager, and when the app is doing something big, all the processors quickly ramp up to the 100% line and stay there. On my own machine, however, I see them bouncing around, creating a jagged line mostly in the top third of the graph area. That is, I think I see them spending some time idle. Can I reasonably guess this is because they’re waiting for memory retrievals, i.e. cache misses?