venti's blog

By venti, 3 years ago

While solving this problem from CSES (Hamming Distance), I was surprised to see that my O(N^2) solution passed, where N = 2e4. The solution wasn't pruned, so I'm sure it must have taken about N^2 operations.

What exactly is a good estimate of the number of operations allowed within the time limit? I believed 3e7 operations was a safe bound, but clearly that isn't the case, so what is the actual threshold?
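For reference, a sketch of the kind of solution I mean (assuming k is small enough that each string fits in a single int, so every pair costs just one XOR and one popcount):

#include <bits/stdc++.h>
using namespace std;

int main() {
    int n, k;
    cin >> n >> k;
    vector<unsigned> a(n);
    for (auto &x : a) {
        string s;
        cin >> s;
        x = stoul(s, nullptr, 2); // pack the bit string into an int (needs k <= 32)
    }
    int best = k;
    // ~N^2/2 pairs; for N = 2e4 that's about 2e8 iterations of very cheap work
    for (int i = 0; i < n; ++i)
        for (int j = i + 1; j < n; ++j)
            best = min(best, __builtin_popcount(a[i] ^ a[j]));
    cout << best << "\n";
}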

»
3 years ago

About 10^9 simple operations in 1 second. But it also depends on what you are doing in the code: recursion tends to be slower than iterative solutions, and in Java, using Collections tends to make the code slow.
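As a rough illustration of the recursion overhead (a sketch; exact numbers vary by machine and compiler), here are an iterative and a recursive version of the same sum, where the recursive one pays a function call per element:

#include <bits/stdc++.h>
using namespace std;

// iterative: a tight loop that the compiler can unroll and vectorize
long long sumIter(const vector<int> &a) {
    long long s = 0;
    for (int x : a) s += x;
    return s;
}

// recursive: one call frame per element, plus stack-overflow risk for large n
long long sumRec(const vector<int> &a, size_t i = 0) {
    if (i == a.size()) return 0;
    return a[i] + sumRec(a, i + 1);
}

On typical hardware the iterative version is several times faster, and for n in the millions the recursive one may crash outright from stack overflow.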

  • »
    »
    3 years ago

    I see. I knew there'd be other contributing factors too, like implementation details and compiler optimizations, but it's interesting to see that up to 1e9 operations can pass. Thanks!

    • »
      »
      »
      3 years ago

      I once passed an n^3 solution where n = 1000. It was possible because only cheap operations like addition, subtraction, and assignment were used.
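
      Floyd–Warshall is the classic example of this: with n = 1000 it does 10^9 inner iterations, yet it routinely passes, since each iteration is just a comparison and an addition over sequentially scanned rows (a sketch; reading in the distance matrix is omitted):

      #include <bits/stdc++.h>
      using namespace std;

      const int N = 1000;
      long long d[N][N]; // distance matrix, assumed already filled

      int main() {
          // n^3 = 1e9 iterations, each just a compare and an add;
          // d[i][j] and d[k][j] are scanned sequentially, so it is cache-friendly
          for (int k = 0; k < N; ++k)
              for (int i = 0; i < N; ++i)
                  for (int j = 0; j < N; ++j)
                      d[i][j] = min(d[i][j], d[i][k] + d[k][j]);
          return 0;
      }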

    • »
      »
      »
      3 years ago

      Like sf14t said, you can do around 1e9 basic operations in 2.5 seconds. In practice, as long as you take advantage of the cache and avoid heavy operations like mods and divisions, you will be fine.
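
      If you want a rough feel for your own machine, a micro-benchmark sketch (numbers are machine-dependent; volatile stops the compiler from deleting the loop):

      #include <bits/stdc++.h>
      using namespace std;

      int main() {
          const long long N = 1000000000; // 1e9 iterations
          auto start = chrono::steady_clock::now();
          volatile long long s = 0;
          for (long long i = 0; i < N; ++i)
              s = s + i; // volatile forces a real load and store each iteration
          auto ms = chrono::duration_cast<chrono::milliseconds>(
                        chrono::steady_clock::now() - start).count();
          cout << s << " computed in " << ms << " ms\n";
      }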

  • »
    »
    3 years ago

    The number of operations per second also heavily depends on the memory access patterns. The best case is arithmetic on a small number of variables; in this case, $$$10^9$$$ operations may fit into a second (or even into 0.6 seconds). Another good case is sequential access to memory (for example, a simple loop over an array), where the accesses fit well into the CPU cache. When the accesses become more random, many cache misses occur and things get dramatically slower.

    For example, consider this simple program:

    volatile int a[1000 * 1000 * 1000]; // ~4 GB static array
    
    const int STEP = 1;
    
    int main() {
        // Each element is written exactly once regardless of STEP;
        // STEP only controls the stride, i.e. how cache-friendly the writes are.
        for (int i = 0; i < STEP; ++i) {
            for (int j = i; j < 1000 * 1000 * 1000; j += STEP) {
                a[j] = j;
            }
        }
        return 0;
    }
    

    (Minor note: volatile is used to prevent the compiler from optimizing away the writes to the array.)

    It's easy to see that the program fills each element of the array exactly once, regardless of the value of STEP. But the total running time changes dramatically with STEP: with STEP = 1 it runs in about 0.8 seconds on my laptop, while with STEP = 1000 it takes about 9 seconds. (Note also that this is probably not even the worst random access pattern; things can become slower still.)

    In contests, I usually assume $$$10^8$$$ to $$$10^9$$$ operations per second when writing code with mostly sequential access (DPs, for example). For data-structure problems with binary searches and segment trees, $$$10^7$$$ to $$$10^8$$$ is more suitable. Though most of the time, the reasoning is just: "well, it seems it may fit into the time limit; let's submit it and optimize if I get TL".
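    To make that concrete: a segment tree problem with $$$n = q = 2 \cdot 10^5$$$ does about $$$q \log_2 n \approx 2 \cdot 10^5 \cdot 18 \approx 3.6 \cdot 10^6$$$ node visits, but each visit is a scattered memory access with a large constant factor, which is why the effective budget there is closer to $$$10^7$$$ than to $$$10^9$$$.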

    Also, keep in mind that integer division and modulo are much slower than addition, subtraction, and multiplication.
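
    For example, when every intermediate value stays below $$$2 \cdot \mathrm{MOD}$$$, you can replace % in a hot loop with a conditional subtraction (a sketch):

    const int MOD = 998244353;
    
    // assumes 0 <= x < 2 * MOD, e.g. x = a + b with a and b already reduced
    inline int reduce(int x) {
        return x >= MOD ? x - MOD : x; // one compare and subtract, no division
    }

    In a tight loop, reduce(a + b) is often noticeably faster than writing (a + b) % MOD directly.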

»
3 years ago

It's around or almost 4*10^8; my friend told me that (coincidentally, it's the same as in the blog). I always assume this while solving problems.

»
3 years ago

Roughly around 2*10^8 operations for C/C++; for an O(n^2) solution that comes out to n ≈ 1.4e4. Maybe you are good at optimizing and only basic operations were used.