When programming an Arduino it is sometimes useful or necessary to measure the time that elapses between two specific points of the program’s execution. The most straightforward way to do that is to save a timestamp at each point; subtracting the two timestamps then yields the time interval.
If precision is not a big issue, one can use the millis() function, which returns the number of milliseconds that have elapsed since the Arduino was powered up. For example:
```cpp
uint32_t ts1 = millis();
// ...TASK TO BE MEASURED GOES HERE
uint32_t ts2 = millis();
// print the time interval in milliseconds
Serial.println(ts2 - ts1);
```
Otherwise, if greater precision is required, one can use the micros() function, which likewise returns the number of microseconds since the Arduino was powered up (with a resolution of 4 or 8 μs, depending on the clock speed of your Arduino). Our example code now becomes:
```cpp
uint32_t ts1 = micros();
// ...TASK TO BE MEASURED GOES HERE
uint32_t ts2 = micros();
// print the time interval in microseconds
Serial.println(ts2 - ts1);
```
However, the examples above, although simple and straightforward, work most of the time but not always. The reason is that the counters of milliseconds and microseconds since the Arduino’s boot-up have a limited capacity of 32 bits. That means that when they reach their highest possible value (0xffffffff, or equally 4294967295 in decimal), they will overflow and start all over from zero.
If the overflow occurs anywhere outside the two calls to micros(), the code above will work fine, because ts2 will be greater than ts1 and the result of the subtraction will be correct.
On the other hand, if the overflow occurs between the first and the second call to micros(), then there is a problem: ts2 will not be greater than ts1 anymore, and simply subtracting ts1 from ts2 will yield a wrong result.
The milliseconds counter overflows once every 49.7 days, but the microseconds counter overflows much more often: every 71.6 minutes! A wrong calculation may not always be tolerable, even if it occurs seldom. Fortunately, this can be worked around!
Let’s call the first timestamp t1 and the second one t2 (corresponding to ts1 and ts2 above), and assume the counter overflowed between them. We can still calculate the time interval between those two timestamps as the sum of two sub-intervals: dt1, which is the interval between t1 and the overflow point (timestamp 0xffffffff, after which the counter wraps to 0), and dt2, which is the interval between 0 and t2. The total interval dt would then be:
```cpp
uint32_t dt = dt1 + dt2;
```
dt2 is fairly easy: it’s just the distance between 0 and t2, i.e. t2 - 0, or simply:

```cpp
uint32_t dt2 = t2;
```
dt1 is a bit more complicated: it requires inverting all the bits of t1 and then adding 1. This is required because if we just invert the bits we get the interval between t1 and 0xffffffff; we need to add 1 to also cover the final step where the counter wraps from 0xffffffff to 0:

```cpp
uint32_t dt1 = 1 + ~t1;
```
Wrapping everything together looks like this:

```cpp
uint32_t dt1 = 1 + ~t1;
uint32_t dt2 = t2;
uint32_t dt  = dt1 + dt2;
```
Which can be simplified to:

```cpp
uint32_t dt = 1 + t2 + ~t1;
```
Now all we have to do is discriminate between the two cases:

```cpp
uint32_t dt = t1 > t2 ? 1 + t2 + ~t1 : t2 - t1;
```
The statement above checks whether t1 is greater than t2, which can only be true if the counter overflowed between the two readings, and then performs the appropriate calculation. I hope this was useful!