When programming an Arduino it is sometimes useful or necessary to measure the time that elapses between two points of the program’s execution. The most straightforward way to do that is to save a timestamp at each point; subtracting the two timestamps then yields the time interval.

If precision is not a big issue, one can use the `millis()` function, which returns the number of milliseconds that have elapsed since the Arduino was powered up. For example:

```cpp
uint32_t ts1 = millis();
// ...TASK TO BE MEASURED GOES HERE
uint32_t ts2 = millis();
// print the time interval in milliseconds
Serial.println(ts2 - ts1);
```

Otherwise, if greater precision is required, one can use the `micros()` function, which likewise returns the number of microseconds since the Arduino was powered up (with a resolution of 4 or 8 μs, depending on the clock speed of your Arduino). Our example code now becomes:

```cpp
uint32_t ts1 = micros();
// ...TASK TO BE MEASURED GOES HERE
uint32_t ts2 = micros();
// print the time interval in microseconds
Serial.println(ts2 - ts1);
```

However, the examples above, although simple and straightforward, work most of the time but not always. The reason is that the counters that keep track of the milliseconds and microseconds since the Arduino booted up have a limited capacity of 32 bits. When they reach their highest possible value (`0xffffffff`, or equally `4294967295`), they overflow and start all over from zero.

If the overflow occurs anywhere outside the two calls to `millis()` or `micros()`, the code above will work fine, because `ts2` will be greater than `ts1` and the result of the subtraction will be correct.

On the other hand, if the overflow occurs between the first and the second call to `millis()` or `micros()`, then there is a small problem: `ts2` will no longer be greater than `ts1`, and simply subtracting `ts2` − `ts1` will yield a wrong result.

The milliseconds counter overflows only once every 49.7 days, but the microseconds counter overflows much more often: every 71.5 minutes! In any case, it may not always be possible to tolerate a wrong calculation, even if it occurs seldom. However, this can be worked around!

We can still calculate the time interval between those two timestamps as the sum of two sub-intervals: `dt1`, the interval between `t1` and the overflow point (where the timestamp wraps to `0`), and `dt2`, the interval between `0` and `t2`. The total interval `dt` would then be:

```cpp
uint32_t dt = dt1 + dt2;
```

Calculating `dt2` is fairly easy: it’s just `t2` − `0`, or simply `t2`:

```cpp
uint32_t dt2 = t2;
```

Calculating `dt1` is a bit more complicated, and requires inverting all the bits of `t1` and then adding `1`. The addition is required because inverting the bits alone gives only the interval between `t1` and `0xffffffff`; adding `1` yields the actual interval between `t1` and the overflow point:

```cpp
uint32_t dt1 = 1 + ~t1;
```

Wrapping everything together looks like this:

```cpp
uint32_t dt1 = 1 + ~t1;
uint32_t dt2 = t2;
uint32_t dt = dt1 + dt2;
```

Which can be simplified to:

```cpp
uint32_t dt = 1 + t2 + ~t1;
```

Now all we have to do is distinguish between the two cases:

```cpp
uint32_t dt = t1 > t2 ? 1 + t2 + ~t1 : t2 - t1;
```

The statement above checks whether `t1` is greater than `t2` (i.e., whether an overflow occurred between the two readings) and performs the appropriate calculation. I hope this was useful!

Hi,

Sorry, but this is much too complicated! One of the great things about unsigned arithmetic is that it is immune to overflows! Suppose our overflow occurred at 100, so valid values would be 0-99. Now set t1 = 90 and t2 = 110. Due to the overflow, t2 would be stored as 10 (110 - 100). Now we calculate t2 - t1 = 10 - 90 = -80. Another overflow occurs here, so -80 becomes 20 (-80 + 100). This is exactly what we expected (110 - 90 = 20)!

The only problem with overflows happens when an overflow occurs twice, i.e. if you’re measuring a time interval longer than 100 ticks (in this example).

Sometimes two problems just cancel out

Thanks for that post, I needed this to properly sample data. Well explained, not too long or short

Niklas

How do you write code to detect overflow in order to perform the second calculation when an overflow occurred?

Overflow can be detected by checking whether the “after” time value is smaller than the “before” time value. When that’s the case, a correction calculation is needed. The formula described above is valid under the assumption that the actual elapsed time is not greater than the amount of time that can be measured by the register.

You should use ~t1 instead of ~t2.

Example. Let:

t1 = 255
t2 = 258
delta = t2 - t1

For simplicity let’s use:

t1 = t1 & 255 = 255
t2 = t2 & 255 = 2
delta = t2 + 1 + ~t1 = 2 + 1 + 0 = 3

When using your formula:

1 + t1 + ~t2 = (1 + 255 + 253) & 255 = 253