ACE_High_Res_Timer runs too slow – loss of precision

This post is based on an ACE PRF, so it is formatted like it. Sorry :)



HOST MACHINE and OPERATING SYSTEM:
x86 architecture (32-bit), Windows XP Service Pack 3, Winsock 2.2 or newer (WS2_32.DLL version is 5.1.2600.5512)


TARGET MACHINE and OPERATING SYSTEM, if different from HOST:
Windows 2000/XP/Vista/7

THE $ACE_ROOT/ace/config.h FILE [if you use a link to a platform-
specific file, simply state which one]:


THE $ACE_ROOT/include/makeinclude/platform_macros.GNU FILE [if you
use a link to a platform-specific file, simply state which one
(unless this isn’t used in this case, e.g., with Microsoft Visual
C++)]:

Not used; building with MSVC8 and MSVC9.

CONTENTS OF $ACE_ROOT/bin/MakeProjectCreator/config/default.features
(used by MPC when you generate your own makefiles):

Not used.


AREA/CLASS/EXAMPLE AFFECTED:
Any code that relies on both ACE_High_Res_Timer for timing precision and
the system clock for standard timestamping and comparisons.


SYNOPSIS:
ACE_High_Res_Timer runs too slow when compared to the standard system
time function, ACE_OS::gettimeofday().

About every 4 hours, my high-resolution timer loses 1 second relative to
the standard system time. This can be clearly seen in timer printouts made
every 15 seconds.

Unfortunately, this means that my code – which uses an
ACE_Timer_Queue_Thread_Adapter object – gets notified of events too early,
and since my code expects to run *after* the event time, it doesn’t work
correctly.


DESCRIPTION:
ACE_High_Res_Timer gets its global scale factor from the host operating system.
This scale factor, as explained in the documentation, is used to convert
high-resolution timer ticks to seconds. The factor is stored in an ACE_UINT32
member of the timer class, called global_scale_factor_.

Under Windows (and generally in most OSs), the number of high-resolution timer
ticks per second is returned as a 64-bit value – in my case, 2341160000. ACE
divides it by ACE_HR_SCALE_CONVERSION, which is defined as
ACE_ONE_SECOND_IN_USECS, i.e. 1000000, and stores the truncated result in
global_scale_factor_.

Note that 2341160000 / 1000000 = 2341.16, so 2341 is stored as the
global_scale_factor_. Thus we lose 16 ticks of precision every 234116 ticks.

16 / 234116 = 0.0000683421893, which means the high-resolution timer drifts by
0.98412752592 seconds every 4 hours. After 10 days of operation, the
high-resolution clock is a whole minute away from the system clock.


REPEAT BY:
Use an ACE_Timer_Queue_Thread_Adapter, and set it to use ACE_High_Res_Timer
instead of the standard system time. Schedule a timer event every 15 seconds,
for at least 4 hours. Use ACE_DEBUG((LM_DEBUG, "Timestamp: %T\n")) to print the
current system time each time the timer event is triggered.

Sample output:

After 0 hours: Standard time: 18:19:41.093000, HiRes time: 04:56:08.082000
After 4 hours: Standard time: 22:19:39.437000, HiRes time: 08:56:08.082000
After 8 hours: Standard time: 02:19:38.406000, HiRes time: 12:56:08.082000


SAMPLE FIX/WORKAROUND:
A workaround would be to add the accumulated difference (about a second every
4 hours) manually, via a timer event that recalculates it periodically. The
value itself is quite trivial to calculate, but adjusting the actual timers
based on it can be a hassle.

Another workaround would be to simply use the system time as the basis for the
ACE_Timer_Queue_Thread_Adapter, at the cost of losing precision – but at least
the timer events would execute *after* the scheduled system time, not *before* it.

A better fix would be to add an ACE_USE_64BIT_HIGH_RES_TIMER_CALCULATIONS
definition that, if set, would let the timers use 64-bit division and would not
require dividing global_scale_factor_ by ACE_HR_SCALE_CONVERSION up front.
Instead, the division would be done using 64-bit integers, and the conversion by
ACE_HR_SCALE_CONVERSION would be applied later, to the final result.

Instead of calculating:

QueryPerformanceCounter() /
(QueryPerformanceFrequency() / ACE_HR_SCALE_CONVERSION)

we could calculate:

QueryPerformanceCounter() / QueryPerformanceFrequency()

using full 64-bit arithmetic, applying ACE_HR_SCALE_CONVERSION only to the
final result. There’d be far less loss of precision.

Does anyone have any other suggestions on how I should deal with this?