Suleyman.A's blog

By Suleyman.A, 4 years ago, In English

Hi, I want to know the time complexity of this code:

#include <bits/stdc++.h>

#define N 100010

using namespace std;

int T[N*4];

int main()
{
	memset(T, 0x3f3f3f3f, sizeof(T)); // memset fills byte-by-byte using only the low byte (0x3f), so every int in T becomes 0x3f3f3f3f
}

Many people say that memset's time complexity is O(logN), but in my opinion it is O(N).

Is O(logN) right? If not, is there any way to do that operation in O(logN)?

Please share your opinions.

»
4 years ago, # |

Haven't you asked those people who say O(logN) to explain how? It's O(N) in my opinion.

»
4 years ago, # |

It's $$$O(n)$$$, but faster.

  • »
    »
    4 years ago, # ^ |

    What do you mean by O(n), but faster? Do you mean 'with smaller constant factor'?

    • »
      »
      »
      4 years ago, # ^ |

      Yes. memset has a smaller constant factor because compilers treat it as a builtin and standard libraries implement it with heavily optimised, vectorised writes.
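
      A rough way to see the constant-factor difference is a timing sketch like the one below (a minimal benchmark, not from this thread; exact numbers depend heavily on compiler, flags and hardware, and at -O2 the compiler may turn the naive loop into a memset call itself, which the volatile pointer tries to discourage):

      #include <chrono>
      #include <cstdio>
      #include <cstring>
      #include <vector>

      using Clock = std::chrono::steady_clock;

      int main() {
          std::vector<int> a(1 << 24); // 64 MB of ints

          auto t0 = Clock::now();
          std::memset(a.data(), 0, a.size() * sizeof(int)); // optimised library routine
          auto t1 = Clock::now();

          volatile int *p = a.data(); // volatile keeps this a plain element-by-element loop
          for (size_t i = 0; i < a.size(); ++i) p[i] = 1;
          auto t2 = Clock::now();

          auto us = [](Clock::duration d) {
              return (long long)std::chrono::duration_cast<std::chrono::microseconds>(d).count();
          };
          std::printf("memset: %lld us, naive loop: %lld us\n", us(t1 - t0), us(t2 - t1));
      }

      Both versions are O(n); only the constant differs.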

»
3 years ago, # |

I was solving https://codeforces.com/contest/1549/problem/D and kept getting TLE. I kept changing the code; then I removed memset and got AC.

  • »
    »
    3 years ago, # ^ |

    That's because you're using memset on the whole array each time you solve a test case, which leads to a huge number of operations. You should prefer a std::vector with exactly the number of elements you need, unless the time limit is very strict. memset also has quite a few demerits: filling happens byte-by-byte, so you can only fill values that are periodic in terms of bytes, and it's easy to forget to multiply by the size of the data type in bytes. So if you really want to fill a range with something, consider using std::fill instead.
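
    For instance, a minimal illustration of the byte-by-byte pitfall (array name made up):

    #include <algorithm>
    #include <cstring>

    int a[8];

    int main() {
        std::memset(a, 1, sizeof(a)); // sets every *byte* to 0x01: each int becomes 0x01010101, not 1
        std::fill(a, a + 8, 1);       // each int really becomes 1
        std::fill(a, a + 8, -2);      // works for any value, not just byte-periodic ones like 0 or -1
    }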

  • »
    »
    3 years ago, # ^ |

    This is often the case when the number of test cases is large and the maximum $$$N$$$ is large as well, e.g. $$$T, \max{N} \leq 2 \cdot 10^5$$$. I used to make this mistake of using memset on the complete array for every test, which costs $$$O(T \cdot \max{N})$$$ just to reset the array. But if you reset only up to the inputted $$$N$$$, then since $$$\sum{N} \leq 2 \cdot 10^5$$$ the problem disappears: the total reset cost becomes $$$O(\sum{N})$$$. So there's a reason you often see the line "It is guaranteed that the sum of $$$N$$$ over all test cases is less than $$$2 \cdot 10^5$$$".

    EDIT: Seems like nor was a bit quicker than me on this one.
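
    A hypothetical sketch of the pattern (names and I/O made up):

    #include <algorithm>
    #include <cstdio>

    const int MAXN = 200005;
    int a[MAXN];

    int main() {
        int t;
        scanf("%d", &t);
        while (t--) {
            int n;
            scanf("%d", &n);
            std::fill(a + 1, a + n + 1, 0); // reset only the prefix this test uses: O(sum N) in total
            // ... read the test and solve it using a[1..n] ...
        }
    }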

  • »
    »
    3 years ago, # ^ |

    I have these two macros in my template.

    #define init(arr, val) memset(arr, val, sizeof(arr))
    #define bckt(arr, val, sz) memset(arr, val, sizeof(arr[0]) * (sz+5))
    

    It's perhaps a scuffed way of going about it. But if I call

    init(arr, 0);
    

    I make the entire array $$$0$$$, and if I call

    bckt(arr, 0, n);
    

    I initialize the first $$$n+5$$$ values to $$$0$$$. I also set my array size constants to the maximum $$$+10$$$, so the extra $$$5$$$ elements never run out of bounds.
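
    A std::fill-based variant of the second macro (hypothetical naming) would avoid memset's byte restriction:

    #include <algorithm>

    // fills the first sz+5 elements with val; unlike memset, this works for
    // any value (e.g. 1 or -2), not just byte-periodic ones
    #define bckt_fill(arr, val, sz) std::fill((arr), (arr) + (sz) + 5, (val))

    Then bckt_fill(arr, 1, n); really sets the first $$$n+5$$$ values to $$$1$$$, which plain memset cannot do.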

»
14 months ago, # |

...

  • »
    »
    14 months ago, # ^ |

    No. You set O(N) bytes of memory to a certain value; you can't do that faster than O(N).

  • »
    »
    14 months ago, # ^ |

    It seems that you don't understand what big $$$O$$$ notation means. If some operation has $$$O(n)$$$ time complexity, it means that the time it takes scales linearly as the size of the input ($$$n$$$) grows. For example, an $$$O(n)$$$ operation takes approximately twice as much time on a list twice as long.

    If you double the length of the array, memset takes approximately twice as long to execute (regardless of the original array length). Thus, it is $$$O(n)$$$.

»
14 months ago, # |

Theoretically, you can achieve $$$O(\log n)$$$ if you have a stupid amount of threads ($$$O(n)$$$): [link](https://en.wikipedia.org/wiki/Broadcast_(parallel_pattern)#/media/File:Binomial_Tree_Broadcast.gif)
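
A toy sketch of that idea (assuming, unrealistically, free thread creation and lockstep execution): each round copies the already-filled prefix onto the next chunk of equal size, one thread per element, so the filled prefix doubles every round and only $$$O(\log n)$$$ rounds are needed. On real hardware this is of course far slower than memset.

#include <algorithm>
#include <thread>
#include <vector>

void parallel_fill(std::vector<int>& a, int value) {
    if (a.empty()) return;
    a[0] = value;                                  // round 0: one element set
    for (size_t len = 1; len < a.size(); len *= 2) {
        size_t chunk = std::min(len, a.size() - len);
        std::vector<std::thread> workers;
        for (size_t i = 0; i < chunk; ++i)         // one thread per copied element
            workers.emplace_back([&a, len, i] { a[len + i] = a[i]; });
        for (auto& t : workers) t.join();          // barrier between rounds
    }
}

int main() {
    std::vector<int> a(10);
    parallel_fill(a, 7);                           // a == {7,7,7,7,7,7,7,7,7,7}
}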