Jarekczek's blog

By Jarekczek, 11 years ago, In English

I thought it would be interesting to optimize my solution, trying to beat the record and make it run the fastest. Unfortunately, the machine tester has a time resolution of about 15 ms, so it's not possible to optimize a solution to run in less time than that, unless I could reduce it to 0. For example, among the problem 293B submissions it's apparent that all the fastest solutions have the same running time: 15 ms. Other solutions show times close to multiples of 15 ms (k·15 ms).

If it were possible to increase the time resolution, it might be interesting. As LukasP described in his guide, trying to reach the shortest execution time is a good exercise.

UPD: Since different approaches are discussed here, it may be worthwhile to see what problems others experience with execution time measurement, for example in Bug Race (requires TopCoder login).


» 11 years ago, # |

Windows' timer resolution is 15.6 ms; that is the default time quantum. You can read more about this here. So I think it's impossible to improve the precision. Yes, you can use multimedia timers, but then you'll be unable to measure real CPU time, which I think is absolutely unacceptable for olympiads.
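For illustration, a minimal sketch (assuming a Windows/MSVC environment) of reading a thread's CPU time with the GetThreadTimes API: the FILETIME results are in 100 ns units, but they advance only once per scheduler tick, so measured values come out quantized to roughly 15.6 ms.

    // Sketch: reading a thread's CPU time with GetThreadTimes (Windows).
    // FILETIME is in 100 ns units, but it advances once per scheduler tick,
    // so the result is quantized to ~15.6 ms.
    #include <windows.h>
    #include <cstdio>

    int main() {
        volatile long long sink = 0;
        for (long long i = 0; i < 100000000; i++) sink += i; // burn some CPU

        FILETIME creation, exit_, kernel, user;
        if (GetThreadTimes(GetCurrentThread(), &creation, &exit_, &kernel, &user)) {
            ULARGE_INTEGER u;
            u.LowPart = user.dwLowDateTime;
            u.HighPart = user.dwHighDateTime;
            printf("user CPU time: %.1f ms\n", u.QuadPart / 10000.0); // 100 ns -> ms
        }
        return 0;
    }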

  • » » 11 years ago, # ^ |

    So being on Windows limits us to 15 ms? How about TopCoder, then? It allows C#, but measures time with 1 ms accuracy, at least for C++ submissions, as I ran system tests on C++ code.

    • » » » 11 years ago, # ^ |

      No idea. Maybe they measure real (wall-clock) time rather than CPU time. It can be done in many ways, for example with the clock() function.
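      For example, a minimal sketch with clock() (note that MSVC's clock() returns wall-clock time since process start, while on POSIX systems it returns CPU time):

          // Sketch: measuring elapsed time with clock().
          #include <ctime>
          #include <cstdio>

          int main() {
              std::clock_t start = std::clock();
              volatile long long sink = 0;
              for (long long i = 0; i < 100000000; i++) sink += i; // work to measure
              double ms = 1000.0 * (std::clock() - start) / CLOCKS_PER_SEC;
              printf("elapsed: %.1f ms\n", ms);
              return 0;
          }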

  • » » 11 years ago, # ^ |

    There is a QueryThreadCycleTime function in recent Windows systems (Windows Server 2008, Windows 7, Windows 8). It could measure running times with much higher precision.
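    A minimal sketch of how it could be used (the result is a raw cycle count, not a time value):

        // Sketch: counting a thread's CPU cycles with QueryThreadCycleTime
        // (available since Windows Vista / Server 2008).
        #include <windows.h>
        #include <cstdio>

        int main() {
            ULONG64 before = 0, after = 0;
            QueryThreadCycleTime(GetCurrentThread(), &before);

            volatile long long sink = 0;
            for (long long i = 0; i < 100000000; i++) sink += i; // measured work

            QueryThreadCycleTime(GetCurrentThread(), &after);
            printf("cycles consumed: %llu\n", after - before);
            return 0;
        }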

    • » » » 11 years ago, # ^ |

      Let me quote MSDN: "Do not attempt to convert the CPU clock cycles returned by QueryThreadCycleTime to elapsed time". Looks like it's the same as rdtsc, but it allows measuring the CPU ticks of an individual thread.

      • » » » » 11 years ago, # ^ |

        The reason listed for that remark seems to be throttling, which should not occur in a programming competition environment.

» 11 years ago, # |

You can use rdtsc.

Sample: http://pastebin.com/nRwtxuAy. Requirements: Windows, VC++ 2010+, run with administrator rights. On my CPU it outputs values from 300k to 15M CPU ticks (0.1 to 6.2 ms; Boost uses the GetSystemTimeAsFileTime API, which can store timestamps with 100 ns precision but is really accurate only to about 1 ms).
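In case the link goes stale, a rough sketch along the same lines (the actual sample may differ) could look like this:

    // Sketch: timing with the rdtsc instruction under real-time priority.
    // Requires VC++ (<intrin.h>) and administrator rights for
    // REALTIME_PRIORITY_CLASS to take effect.
    #include <windows.h>
    #include <intrin.h>
    #include <cstdio>

    int main() {
        // Pin to one core and raise priority to minimize context switches,
        // since rdtsc counts wall-clock ticks, not per-thread CPU time.
        SetThreadAffinityMask(GetCurrentThread(), 1);
        SetPriorityClass(GetCurrentProcess(), REALTIME_PRIORITY_CLASS);
        SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_TIME_CRITICAL);

        unsigned __int64 start = __rdtsc();
        volatile long long sink = 0;
        for (long long i = 0; i < 1000000; i++) sink += i; // measured work
        unsigned __int64 ticks = __rdtsc() - start;

        printf("CPU ticks: %llu\n", ticks);
        return 0;
    }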

  • » » 11 years ago, # ^ |

    It measures the total amount of CPU ticks (including the kernel and other applications), doesn't it?

    • » » » 11 years ago, # ^ |

      Theoretically, real-time process and thread priorities should make Windows almost never switch contexts. Unfortunately, there is no way to achieve rdtsc precision separately for user and kernel time on Windows, because the KTHREAD structure's time fields are updated in KeMaximumIncrement steps at context switches (usually 10 ms).