Use of the millisecond in programming... Why?

Why is it that in programming languages like C, Java, Basic, and others (?) time is defined in milliseconds? I understand the desire to have one time unit, as opposed to the hour:minute:second jumble we have now, but why pick thousandths of a second? Was it so programmers would never need to use decimal time values? (I don’t even know if any of the compilers/interpreters for the languages I mentioned would understand something like ‘sleep .001’.) If that was the case, well, nanoseconds are wide open… :smiley: (I don’t know why I find that so funny.)

I design, construct and program automated test fixtures for electronic circuit boards. It is fairly common for my programs to have timing routines that either measure signals or generate signals with a time period less than a second. What types? Keypad presses, enable signals, debounce between switch closures, etc.

Even if I were generating a pulse of, say, a half second (500 milliseconds), the system has to support a resolution of at least a tenth of that for suitable accuracy. The same holds true for measuring a time period: your resolution has to be at least an order of magnitude better than the time interval you are measuring.

Granted, using dedicated hardware for timing is better than using software timing routines. But sometimes (generally in non-critical apps), the software approach is more practical and easier to implement.

The millisecond time base is hardly universal.

The time function in ANSI C reports the elapsed time in seconds from January 1, 1970.
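A minimal sketch of what that looks like with the standard library call:

```c
#include <stdio.h>
#include <time.h>

int main(void)
{
    /* time() returns the current calendar time as a time_t,
       conventionally a count of whole seconds since the epoch. */
    time_t now = time(NULL);
    printf("seconds since the epoch: %ld\n", (long)now);
    return 0;
}
```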

The old DOS timer tick was 18 ticks per second, if I recall correctly. If you needed more resolution you had to take control of the system timer yourself. Windows (and most other multitasking operating systems) use the timer tick to run the OS kernel, which means it's not free for you to muck with, so this technique has kind of vanished from desktop PCs.

Millisecond timers are the most versatile, since it is fairly easy for the programmer to convert seconds or any other desired time base to milliseconds. In computers, seconds are very, very long time periods (most likely several hundred MILLION clock cycles on your PC). Milliseconds are small enough to let you control lots of electronic devices, yet not so small that a programmer has a great deal of difficulty working in human time spans ("wait 5 seconds" is simply wait(5000), for example).
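As a sketch of that convenience (the wait() name above is hypothetical; this version leans on POSIX nanosleep, which is not part of ANSI C):

```c
#include <time.h>   /* nanosleep() is POSIX, not ANSI C */

/* Hypothetical millisecond delay helper in the spirit of wait(5000) above. */
static void delay_ms(unsigned long ms)
{
    struct timespec ts;
    ts.tv_sec  = ms / 1000;                /* whole seconds */
    ts.tv_nsec = (ms % 1000) * 1000000L;   /* leftover milliseconds as nanoseconds */
    nanosleep(&ts, NULL);
}

int main(void)
{
    delay_ms(5000);   /* "wait 5 seconds" stays in human-sized numbers */
    return 0;
}
```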

And yes, one important key to this is that whatever time base you choose, you will likely specify it with an integer. The time function, for example, will not return tenths of a second; you only get seconds. 16-bit integers range from 0 to 65,535, so choose a time scale that gives you the most flexibility with that number. If you choose milliseconds, one integer gives you 0 to 65,535 milliseconds (about 65.5 seconds), which you can easily scrunch over so that one integer counts milliseconds within the current minute (0 to 60 seconds) and another counts whole minutes (0 to 65,535 minutes, about 45 days).
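Worked out as a quick sketch of that arithmetic (the two-counter split is just one way to do the "scrunching"):

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* One 16-bit counter of milliseconds: 0..65535 ms, roughly 65.5 seconds. */
    uint16_t single = UINT16_MAX;
    printf("single counter: %.3f seconds\n", single / 1000.0);

    /* Split across two 16-bit counters -- milliseconds within the current
       minute (0..59999) plus a whole-minute count (0..65535) -- and the
       range stretches to about 45 days while keeping 1 ms resolution. */
    uint16_t minutes = UINT16_MAX;
    printf("two counters: %.1f days\n", minutes / (60.0 * 24.0));
    return 0;
}
```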

Sorry, but I’m in a nitpicking mode.

Close, but not quite. The ANSI C time function makes the naive assumption that all minutes are 60 seconds and completely ignores leap seconds! Sure, it makes for easier coding, but it is not the same as "the number of seconds since [whenever]". Also, ANSI C does not specify the point in time to start from; however, all implementations that I know of count from 1 January 1970.

Close, but if I recall correctly, it was 18.2 ticks per second (4.77 MHz / 256K).

Apart from that, I wholly agree with engineer_comp_geek.
It’s worth noting that we’re in for something much, much worse than the Y2K problem once the number of seconds since 1970 exceeds the range of a signed 32-bit integer (on 19 January 2038). But hopefully there won’t be many 32-bit systems left by then.
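A hedged illustration of that rollover moment (this assumes a platform where time_t is wider than 32 bits, so the date at the limit can still be formatted):

```c
#include <stdint.h>
#include <stdio.h>
#include <time.h>

int main(void)
{
    /* The last second a signed 32-bit count of seconds-since-1970 can hold. */
    time_t last = (time_t)INT32_MAX;   /* 2147483647 */

    /* ctime() prints it in local time; in UTC this is
       19 January 2038, 03:14:07. One second later a 32-bit
       counter wraps negative, i.e. back to 1901 in naive code. */
    printf("32-bit time_t runs out at: %s", ctime(&last));
    return 0;
}
```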

I’ve been hearing this get talked about every so often on Unix- or Linux-related sites. I understand the concept, but I don’t know how serious a problem it is considered to be. Can we even make meaningful predictions that far ahead in computing? We went from the Commodore 64 to the Pentium 3 in about the length of time between now and January 19, 2038, after all.

I used to program PLCs and other industrial computers.

We used milliseconds for all timing. Some types of industrial processes demand exact timing and seconds are just too long.

Speaking of nitpicking, as a long-time quartz crystal person, I believe the actual clock frequency was either 4.768542 or 4.768545 MHz.

I like this nitpicking!:slight_smile:

According to this page (which I cannot access, but I found it cached on Google).

(Which also, incidentally, almost agrees with what I said earlier: if you take the 14.31818 MHz and divide it by 12 (to get 1.19… MHz) and then divide that by 64K, you end up with 18.2065 Hz.)
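Spelled out as a quick check of that division chain:

```c
#include <stdio.h>

int main(void)
{
    /* The PC's 14.31818 MHz reference crystal, divided as described above. */
    double crystal = 14318180.0;
    double pit_in  = crystal / 12.0;     /* ~1.19318 MHz fed to the timer chip */
    double ticks   = pit_in / 65536.0;   /* default divisor of 64K */

    printf("timer input: %.5f MHz\n", pit_in / 1e6);   /* ~1.19318 */
    printf("tick rate:   %.4f Hz\n", ticks);           /* ~18.2065 */
    return 0;
}
```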

Me too.

The ANSI C time function makes no assumption whatsoever about minutes, including whether they even exist. It just works with seconds. Period.

The ANSI C ctime function knows about minutes. And it clearly states that the tm_sec element of the tm structure has a range of [0, 61], because ANSI C assumes that a minute can have as many as 62 seconds. This allows for up to two leap seconds, should they ever be needed.
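A small sketch of where that range shows up (using gmtime here, which exposes the struct tm fields that ctime formats):

```c
#include <stdio.h>
#include <time.h>

int main(void)
{
    time_t now = time(NULL);
    struct tm *cal = gmtime(&now);   /* break the second count into calendar fields */

    /* tm_sec is specified with the range [0, 61] in ANSI C (C89/C90),
       leaving room for up to two leap seconds in one minute. */
    printf("%02d:%02d:%02d\n", cal->tm_hour, cal->tm_min, cal->tm_sec);
    return 0;
}
```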

But I think the OP was asking why programming timer functions always use the millisecond as the basic unit, so that you say “Delay(10)”, instead of having the second as the basic unit, so you would say “Delay(0.010)”.

I believe the answer is that with milliseconds you can pretty much do anything you need to do with an integer argument, instead of forcing your program to deal with floating-point numbers.
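A toy illustration of the two calling styles (Delay and DelaySec here are hypothetical stubs, not any particular library's API):

```c
#include <stdio.h>

/* Hypothetical stubs, just to contrast the two conventions discussed above. */
static void Delay(unsigned int ms)   { printf("delaying %u ms\n", ms); }
static void DelaySec(double seconds) { printf("delaying %.3f s\n", seconds); }

int main(void)
{
    Delay(10);        /* millisecond base: a plain integer argument */
    DelaySec(0.010);  /* second base: the same delay needs floating point */
    return 0;
}
```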