Is this a safe enough way to measure time in C++?

I’m writing an update loop that needs to know, with decent precision, how long it has been since the last update. Ideally it would be relatively platform independent, though I know that ultimately I’m bound by the resolution of the hardware’s clock.

I have a class called Timer that essentially stores the time returned by gettimeofday; every time it’s called it subtracts the old value from the new value, stores the new value, and returns the difference (in suseconds_t). I understand that the specification only requires suseconds_t to be an “arithmetic compatible type”, so it’s not necessarily safe to treat it as an int or long or float, even if that works on my machine.
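Stripped down, the class does roughly this (a simplified sketch; names other than getUpdateTime are placeholders):

#include <sys/time.h>   // gettimeofday, timeval, suseconds_t

class Timer {
    timeval last;
public:
    Timer() { gettimeofday(&last, NULL); }

    // Microseconds since the previous call; remembers "now" for next time.
    suseconds_t getUpdateTime() {
        timeval now;
        gettimeofday(&now, NULL);
        // Note: the seconds term can exceed the range suseconds_t is
        // guaranteed to hold if more than a second passes between calls.
        suseconds_t elapsed = (now.tv_sec - last.tv_sec) * 1000000
                            + (now.tv_usec - last.tv_usec);
        last = now;
        return elapsed;
    }
};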

So the update loop essentially looks like



void mainUpdateLoop() {
    while (!timeToExit) {
        suseconds_t elapsed = timer.getUpdateTime();
        for (int i = 0; i < numOfThingsToUpdate; i++) {   // that's not the actual var name
            thingsToUpdate[i].update(elapsed);             // neither is that
        }
    }
}


It seems okay, but I’ve never really done much timing before. Is there some safer or better way I should be doing this? At least for now, this is all in one thread, so any potential problems in parallel environments can be ignored (this isn’t a scheduler, is basically what I’m saying).

I am going to be using the computed value to do math. To use a random example, if I were doing physics, the velocity might be in meters per second, and I would need the value to work out what fraction of that velocity to add to the object’s position. Again, it’s theoretically an “arithmetic compatible type”; I just want to make sure there are no glaring problems that I won’t notice until it’s too late.
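To make that concrete, here’s the kind of thing an update would do with the value (Thing, position, and velocity are just made-up illustrative names):

#include <sys/types.h>   // suseconds_t

struct Thing {
    double position;   // meters
    double velocity;   // meters per second

    void update(suseconds_t elapsedUs) {
        double dt = elapsedUs / 1000000.0;   // convert microseconds to seconds
        position += velocity * dt;           // apply that fraction of the velocity
    }
};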

I’m normally relatively confident in programming, but when I start making calls like asking the system for its time all the potential platform dependency and hardware issues start to make me nervous.

You seem to be assuming that the time will change between calls, so the behaviour will depend upon the amount of work you do in the updates. It is quite possible you will buzz around the loop more than once before the system clock updates the time, thus calling update with a zero time change. Most kernels are allowed to drop timer interrupts and then try to catch up the system time when they can, so you don’t even have an ironclad guarantee that the system time always updates evenly or on every possible clock tick.

So your algorithm design needs to be resilient to these issues.

If your update function takes very little time to complete, you may end up with a loop buzzing around soaking up lots of CPU and getting zero time delta updates. Use of a system timed wait (i.e. sleep()) might be something to think about, depending upon what you are doing.
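At minimum, the cheap defence is to skip the update pass when the delta is zero; something along these lines, reusing your names from above:

while (!timeToExit) {
    suseconds_t elapsed = timer.getUpdateTime();
    if (elapsed == 0) {
        continue;   // nothing to do yet; a short sleep here would also stop the CPU spin
    }
    for (int i = 0; i < numOfThingsToUpdate; i++) {
        thingsToUpdate[i].update(elapsed);
    }
}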

Given you are reproducing a Java function to do the work, I have to ask why you have chosen C++ to write this in? Unless you are forced, for some dreadful reason, I would never start a new project in C++.

Generally a decent amount of work will be done; enough that I was already considering exactly how I want to deal with updates that take over one second. From previous experience with systems like this in other languages that had timed updates implemented for me, I estimate the average update time to be pretty high in terms of microseconds but small in terms of seconds: probably in the thirties of milliseconds in the simplest cases, possibly getting up to one or two hundred milliseconds in the worst (non-lagging) cases.

That said, I should probably deal with that issue, just in case. Is there a better function than sleep I can use? sleep is in whole seconds, and preferably I’d like to wait as little time as possible. Right now my idea is to check if the delta is 0, and if so, wait a tiny bit and check again. Every time I try to search for ways to make programs wait, I get people talking about things like cin.get() or other things that require user input.

I mean, I could just do

while (elapsed == 0) {
    elapsed = timer.getUpdateTime();
}

but that might make the system angry with me and seems like terrible design either way.
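What I’m picturing is more like the following, assuming there’s some sub-second sleep call I can lean on (nanosleep looks like one, if that’s the right tool):

#include <time.h>   // nanosleep, timespec

timespec pollInterval;
pollInterval.tv_sec  = 0;
pollInterval.tv_nsec = 1000000;   // wait roughly 1 millisecond between checks

suseconds_t elapsed = timer.getUpdateTime();
while (elapsed == 0) {
    nanosleep(&pollInterval, NULL);   // yield the CPU instead of spinning
    elapsed = timer.getUpdateTime();
}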

Yeah, my hands are sort of tied here on this one. To be specific, I’m contributing to a research project at my University with an existing code base all in C and C++, not to mention it involves graphics and vision so C++ and such are… well, not necessary, but certainly more widely used for these purposes. Regardless of any objections, I have to use C++ either way by mandate of project leaders anyway. Sadly, we don’t actually have any timer written yet, so I have to do it from scratch.

[useless off-topic trivia] The IBM 370 guarantees that its Store-Time-of-Day instruction give distinct values. (The Model 135, whose hardware clock ticked only after 16 microsecs., had a delay loop in the STCK microcode just to ensure this.) I’ve always thought that enforcing this was a good idea, since the distinction is useful for software which wants unique labels or distinct random-generator seeds.

setitimer() should work along with pause().
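Roughly along these lines (an untested sketch; the 10 ms period is just an example): arm a repeating interval timer, install a do-nothing SIGALRM handler, and pause() until each tick arrives:

#include <signal.h>
#include <sys/time.h>
#include <unistd.h>

void onAlarm(int) { /* nothing needed; the signal just wakes pause() */ }

int main() {
    struct sigaction sa = {};
    sa.sa_handler = onAlarm;
    sigaction(SIGALRM, &sa, NULL);

    struct itimerval tv = {};
    tv.it_value.tv_usec    = 10000;   // first tick after 10 ms
    tv.it_interval.tv_usec = 10000;   // then every 10 ms
    setitimer(ITIMER_REAL, &tv, NULL);

    for (;;) {
        pause();   // sleep until the next SIGALRM
        // run one update pass here
    }
}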

Do not use gettimeofday to measure elapsed time. The time returned from gettimeofday is wall-clock time, which can jump in ways you don’t expect if the system clock is changed or NTP adjusts the current time.

On a POSIX system you should instead use clock_gettime(CLOCK_MONOTONIC, …). As a bonus, this will probably give you the current time with more precision, should you care to use the extra precision.
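Dropping that into a Timer like the one described above would look roughly like this (a sketch, not tested; on older Linux systems you may need to link with -lrt for clock_gettime):

#include <time.h>     // clock_gettime, CLOCK_MONOTONIC, timespec
#include <stdint.h>

class MonotonicTimer {
    timespec last;
public:
    MonotonicTimer() { clock_gettime(CLOCK_MONOTONIC, &last); }

    // Microseconds since the previous call; immune to wall-clock adjustments.
    int64_t getUpdateTime() {
        timespec now;
        clock_gettime(CLOCK_MONOTONIC, &now);
        int64_t elapsed = (int64_t)(now.tv_sec - last.tv_sec) * 1000000
                        + (now.tv_nsec - last.tv_nsec) / 1000;
        last = now;
        return elapsed;
    }
};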