codemastercpp's blog

By codemastercpp, 3 years ago, In English

We know that, setting aside the extra memory usage, long long is faster than int on x64.

It’s possible to access data items that are narrower than the width of a machine’s data bus, but it’s typically more costly than accessing items whose widths match that of the data bus. For example, when reading a 16-bit value on a 64-bit machine, a full 64 bits worth of data must still be read from memory. The desired 16-bit field then has to be masked off and possibly shifted into place within the destination register.

But if we are also relying on vectorization to speed up execution, ints perform better.

I say this because the vectorization width is higher with 32-bit integers.

For example, take the following loop:

    for (int i = 0; i < n; i++)
        c[i] = a[i] + b[i];

With #define int long long

main.cpp:114:5: remark: vectorized loop (vectorization width: 2, interleaved count: 2) [-Rpass=loop-vectorize]
    for (int i = 0; i < n; i++)

Without #define int long long

main.cpp:114:5: remark: vectorized loop (vectorization width: 4, interleaved count: 2) [-Rpass=loop-vectorize]
    for (int i = 0; i < n; i++)

So normal (scalar) access speeds up but vectorization slows down with 64-bit integers. What would the final outcome be: is a 64-bit integer better, or a 32-bit one?
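
For anyone who wants to reproduce the remarks above, here is a minimal self-contained sketch (the file name, function name, and exact flags are my own choices, not from the original post). Compiling with clang++ -O3 -Rpass=loop-vectorize prints the width the vectorizer picked; switching the element type to long long should lower it:

    // vec_remark.cpp -- sketch for reproducing the vectorization remark
    // Build: clang++ -O3 -Rpass=loop-vectorize -c vec_remark.cpp
    void add_arrays(const int *a, const int *b, int *c, int n) {
        for (int i = 0; i < n; i++)
            c[i] = a[i] + b[i];   // the remark above is reported for this loop
    }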


»
3 years ago, # |

For essentially all these constant-factor tradeoffs, the answer is "it depends on your workload". The golden rule of performance optimization is that you have to benchmark your actual workload/code, and tradeoffs like this are why.
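
For example, a rough benchmarking sketch of exactly this kind of loop might look like the following (the harness, problem size, and flags are my assumptions, not something from the original comment):

    // bench_add.cpp -- rough workload benchmark sketch. Build: clang++ -O3 bench_add.cpp
    #include <chrono>
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    template <typename T>
    double bench_add(int n) {
        std::vector<T> a(n, 1), b(n, 2), c(n);
        auto start = std::chrono::steady_clock::now();
        for (int i = 0; i < n; i++)
            c[i] = a[i] + b[i];
        auto end = std::chrono::steady_clock::now();
        volatile T sink = c[n / 2];   // keep the result alive so the loop is not removed
        (void)sink;
        return std::chrono::duration<double, std::milli>(end - start).count();
    }

    int main() {
        const int n = 10'000'000;
        std::printf("int32_t: %.2f ms\n", bench_add<std::int32_t>(n));
        std::printf("int64_t: %.2f ms\n", bench_add<std::int64_t>(n));
    }

A single run like this is noisy, so repeating each measurement several times and taking the median gives more trustworthy numbers.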

  • »
    »
    3 years ago, # ^ |

    I see, that's a pain

    • »
      »
      »
      3 years ago, # ^ |

      Oh I forgot to mention: there are nontrivial microarchitectural differences between different processors too, and they're typically impossible to fully comprehend from first principles, so you have to benchmark your actual code on the actual judge too!

      • »
        »
        »
        »
        3 years ago, # ^ |

        So it's a must to benchmark on custom invocation if I am depending on these types of optimizations. Thanks.

        Btw, after submitting to Codeforces for a long time, have you gotten a general idea of what kind of stuff generally performs better on their system, or is it really just "benchmark, don't assume"?

        • »
          »
          »
          »
          »
          3 years ago, # ^ |

          Well, competitive programming typically does not require heavy performance optimization; any reasonable implementation should pass (as long as setters pick good time limits). I have a little performance intuition, but I'm pretty often wrong about more complex things.

»
3 years ago, # |

int32_t is generally better for arrays because of vectorization and its smaller memory footprint, which helps with caching (a 64-byte cache line fits 16 int32_t values but only 8 int64_t values).

I think int64_t might be faster for a single variable, but I'm not sure.

  • »
    »
    3 years ago, # ^ |

    Did some benchmarking. Turns out int64_t is not faster than int32_t on x86-64, at least in terms of incrementing a variable.
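
    For reference, a scalar increment micro-benchmark along those lines could look like the sketch below; the harness, the iteration count, and the GCC/Clang inline-asm barrier are my assumptions, not the benchmark actually used:

        // inc_bench.cpp -- rough sketch of a scalar increment micro-benchmark
        // Build: clang++ -O2 inc_bench.cpp
        #include <chrono>
        #include <cstdint>
        #include <cstdio>

        template <typename T>
        double bench_inc(long long iters) {
            T x = 0;
            auto start = std::chrono::steady_clock::now();
            for (long long i = 0; i < iters; i++) {
                x = x + 1;
                asm volatile("" : "+r"(x));   // opaque to the optimizer: keeps x in a
                                              // register and stops the loop from being
                                              // folded into a single addition
            }
            auto end = std::chrono::steady_clock::now();
            return std::chrono::duration<double, std::milli>(end - start).count();
        }

        int main() {
            const long long iters = 1'000'000'000;   // fits in int32_t, so no overflow
            std::printf("int32_t: %.2f ms\n", bench_inc<std::int32_t>(iters));
            std::printf("int64_t: %.2f ms\n", bench_inc<std::int64_t>(iters));
        }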

    • »
      »
      »
      3 years ago, # ^ |

      Probably that test isn't that valuable, because incrementing a variable doesn't move data around (i.e., it doesn't use the data bus).

»
3 years ago, # |

It's hard for me to imagine any real case in which int64 calculations would be faster than int32.