Is there a way to emulate a deficient (mobile) CPU and memory on the desktop?

If I am developing a Java or Flash app to run on a mobile device, I may be interested in seeing just how fast it would run given the slow CPU and limited memory available on that device. What are the state-of-the-art approaches to this issue?

Do emulators, like the iPhone emulator, somehow reflect the speed of execution on the target device?

In the case of environments like Java and Flash that ought to function similarly on desktop and mobile, could their virtual machines be made to “throttle” to realistically emulate the mobile device?

I’ve never developed apps for an iPhone or anything like that, but I do have a lot of experience with emulators in general. My experience has been that they very accurately emulate the amount of memory and other resources, as well as the functionality of the device (in other words, if X works on a PC but doesn’t work on the real device, it won’t work on the emulator either). However, they rarely, if ever, accurately emulate the processing speed of the device.

Some emulators do allow you to control exactly how much CPU time they can take from their host PC, and you can do some throttling based on this, but other things won’t scale directly. For example, the floating-point processor in some mobile devices is very weak compared to the one in a PC. You may find that integer calculations scale very well with what you see in the emulator, while the floating-point speed of your emulator far exceeds that of the real device.
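To give a rough idea of what I mean by throttling based on CPU time, here’s a crude duty-cycle version of the idea, applied inside a Java app rather than by the emulator itself. It’s only a sketch: the 4x slowdown factor is invented, and as I said above, a single factor won’t capture things like a weak floating-point unit.

[code]
// Rough duty-cycle throttle: do work for WORK_MS, then sleep long enough that
// only TARGET_FRACTION of wall-clock time is spent working.
// TARGET_FRACTION and the workload are placeholders -- in practice you'd have
// to calibrate them against the real device.
public class CpuThrottleSketch {
    static final double TARGET_FRACTION = 0.25; // pretend the device is ~4x slower (assumed)
    static final long WORK_MS = 10;

    public static void main(String[] args) throws InterruptedException {
        double sink = 0;
        long sliceStart = System.currentTimeMillis();
        for (int i = 0; i < 10_000_000; i++) {
            sink += Math.sqrt(i);            // stand-in for the app's real work
            if ((i & 0xFFF) == 0) {          // check the clock every so often
                long elapsed = System.currentTimeMillis() - sliceStart;
                if (elapsed >= WORK_MS) {
                    // Sleep so that work time / total time ~= TARGET_FRACTION
                    long sleepMs = (long) (elapsed * (1.0 / TARGET_FRACTION - 1.0));
                    Thread.sleep(sleepMs);
                    sliceStart = System.currentTimeMillis();
                }
            }
        }
        System.out.println(sink); // keep the JIT from optimizing the loop away
    }
}
[/code]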

Really, at some point you need to try it out on the target device anyway (emulators are never perfect), so you can tweak your app for speed at that point if you have to.

I did a little .NET dev for WinMobile 5 & 6. My limited experience with the emulator provided as part of the MSFT SDK matches e_c_g’s report.

Two things the emulator did especially poorly were mimicking battery life and the response time of peripheral components in the phone, such as the GPS receiver.

This was 3-5 years ago (time flies!), so the state of the art may have improved.

Is this just poor emulator design, or is there underlying complexity in figuring out how to throttle it properly even if we wanted to (if only for a PhD thesis or whatnot)? Naively speaking, why can’t we collect timing stats about how fast floating-point operations happen on the device and then throttle the emulator either at the “assembly level” or maybe even at the “source-code level,” if we have the source code for all the graphics-related APIs involved? (I am guessing that it is the graphics manipulation that does the FP computations, right?)
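To make the naive version concrete, I’m picturing a calibration step along these lines: time a burst of integer work and a burst of floating-point work on the desktop, run an equivalent loop on the device, and use the two desktop/device ratios as separate throttle factors. Just a sketch; the op mixes are arbitrary, and a real calibration would need much more care (JIT warm-up, memory effects, the actual graphics calls, etc.).

[code]
// Time a burst of integer ops and a burst of floating-point ops separately,
// so the two categories can get different slowdown factors.
public class OpTimingSketch {
    static final int N = 50_000_000;

    static double timeIntOps() {
        long t0 = System.nanoTime();
        long acc = 1;
        for (int i = 1; i <= N; i++) acc = acc * 31 + i;   // integer add/multiply mix
        long t1 = System.nanoTime();
        if (acc == 42) System.out.println(acc);             // defeat dead-code elimination
        return N / ((t1 - t0) / 1e9);                       // ops per second
    }

    static double timeFpOps() {
        long t0 = System.nanoTime();
        double acc = 1.0;
        for (int i = 1; i <= N; i++) acc = acc * 1.0000001 + i * 0.5; // FP multiply/add mix
        long t1 = System.nanoTime();
        if (acc == 42.0) System.out.println(acc);
        return N / ((t1 - t0) / 1e9);
    }

    public static void main(String[] args) {
        System.out.printf("int ops/sec: %.3g%n", timeIntOps());
        System.out.printf("fp  ops/sec: %.3g%n", timeFpOps());
        // Run something equivalent on the target device; the desktop/device
        // ratio for each category is the slowdown the emulator (or your own
        // throttling code) would need to apply to that kind of work.
    }
}
[/code]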

ETA: re the battery issue mentioned by LSLGuy, similarly, is this deficient design or fundamental complexity? Can we just count the number and type of assembly instructions being executed and the amount of data being loaded from the network, and from there arrive at how much energy it takes to do it?
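Writing out the “just count things” model I have in mind (the per-unit costs below are pure placeholders, not measured values):

[code]
// Naive linear energy model: instructions executed plus bytes moved over the
// network, each multiplied by a hypothetical per-unit cost. The constants are
// invented for illustration only.
public class NaiveEnergyModel {
    static final double JOULES_PER_INSTRUCTION  = 1e-9; // placeholder, not measured
    static final double JOULES_PER_NETWORK_BYTE = 1e-6; // placeholder, not measured

    static double estimateJoules(long instructionsExecuted, long bytesOverNetwork) {
        return instructionsExecuted * JOULES_PER_INSTRUCTION
             + bytesOverNetwork * JOULES_PER_NETWORK_BYTE;
    }

    public static void main(String[] args) {
        // Example: 2 billion instructions and a 5 MB download (made-up numbers).
        double joules = estimateJoules(2_000_000_000L, 5L * 1024 * 1024);
        System.out.printf("estimated energy: %.2f J%n", joules);
    }
}
[/code]

Presumably the catch is that per-instruction energy varies with things like cache behavior, and the radio has wake-up and idle costs of its own, so a linear count like this would only be a starting point.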

For the most part, people aren’t interested in perfect real-time simulation in emulators. Emulators are used to ensure that the thing works and for basic debugging. Timing and optimization problems can be worked out on the actual device.

[QUOTE=code_grey;13951790]Do emulators, like the iPhone emulator, somehow reflect the speed of execution on the target device?[/QUOTE]

In the specific instance of the iPhone/iPad emulators: No. Apple says as much in the help, indicating that you should always check speed on the target device(s). In general, this is less of an issue for iOS development, since the range of device variation is very small (2-3 “current” models of the iPhone, 2 models of iPod Touch, 2 models of iPad – all with similar processors).