naitik10001's blog

By naitik10001, 14 months ago, In English

190829557 This is my submission, which gives me TLE. I think using map is preferable to unordered_map, since the worst-case time complexity of unordered_map is O(n) per operation (O(n^2) over n operations), compared to O(log n) for map!


»
14 months ago, # |

Rehash your hash table. m.rehash(n); sets the number of buckets to at least n, so inserting up to roughly n elements triggers no further rehashing; m.reserve(n) expresses the same intent in terms of elements (like vector::reserve(n)).
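
A minimal sketch of that suggestion, with 200000 standing in for however many insertions the solution performs:

```cpp
#include <unordered_map>
using namespace std;

int main() {
    unordered_map<long long, int> m;
    m.rehash(200000);     // at least 200000 buckets up front
    // m.reserve(200000); // equivalent intent, phrased in elements rather than buckets
    for (long long i = 0; i < 200000; i++)
        m[i * i] = 1;     // no growth rehashes occur during these inserts
}
```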

  • »
    »
    14 months ago, # ^ |

    This still doesn't pass the time limit in the mentioned problem (191032622). The fix is to use a custom hash function so that unordered_map runs in expected $$$O(1)$$$ even on adversarial inputs (191032878). Take a look at this blog post by neal.
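
    For reference, a sketch along the lines of what that blog post describes: run keys through splitmix64 and offset them by a per-run random constant, so the mapping cannot be precomputed by someone preparing a hack:

    ```cpp
    #include <chrono>
    #include <cstdint>
    #include <unordered_map>
    using namespace std;

    struct custom_hash {
        // splitmix64: a strong 64-bit mixing function
        static uint64_t splitmix64(uint64_t x) {
            x += 0x9e3779b97f4a7c15ULL;
            x = (x ^ (x >> 30)) * 0xbf58476d1ce4e5b9ULL;
            x = (x ^ (x >> 27)) * 0x94d049bb133111ebULL;
            return x ^ (x >> 31);
        }
        size_t operator()(uint64_t x) const {
            // per-run random offset: the hash values change on every execution
            static const uint64_t FIXED_RANDOM =
                chrono::steady_clock::now().time_since_epoch().count();
            return splitmix64(x + FIXED_RANDOM);
        }
    };

    unordered_map<long long, int, custom_hash> safe_map;
    ```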

»
14 months ago, # |

It's not quite that simple.

A map is a self-balancing binary search tree (typically a red-black tree). A binary search tree is a binary tree where every node in a node's left subtree is less than it and every node in its right subtree is greater. Lookups and updates are O(log n) because a well-implemented (balanced) BST discards half of the remaining tree at each level of depth, akin to binary search; once the position is found, attaching the new node is constant time (just a new node and edge).
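
For intuition, a minimal sketch of (unbalanced) BST insertion, showing where the logarithmic cost comes from; real map implementations layer rebalancing on top of this:

```cpp
struct Node {
    int key;
    Node *left = nullptr, *right = nullptr;
    Node(int k) : key(k) {}
};

// Each comparison discards one whole subtree, so on a balanced tree the
// walk from the root visits O(log n) nodes; attaching the leaf is O(1).
Node* insert(Node* root, int key) {
    if (root == nullptr) return new Node(key);
    if (key < root->key)
        root->left = insert(root->left, key);
    else if (key > root->key)
        root->right = insert(root->right, key);
    return root;  // duplicate keys are ignored, as in map
}
```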

An unordered_map is a hash table. Hash tables can't be iterated in any meaningful order. They work by running what is called a hash function on each inserted key to produce a number, which indexes into an array of buckets where the value is stored. On well-behaved input, insertion and lookup take O(1) time, so building an unordered_map of n elements is not actually O(n^2) but O(n).

The reason O(n^2) is theoretically possible is hash collisions: two different keys can land in the same bucket, and colliding elements are chained together inside it. If the input is crafted so that every key falls into one bucket, each operation has to scan a chain of up to n elements, and n insertions take O(n^2) total. (Growing the table does trigger an O(n) rehash, but the bucket count grows geometrically, so that cost spreads out to O(1) per insertion.) In practice this is basically impossible unless the dataset was specifically designed to mess up the hash function, which is why we call the overall cost O(n) amortized.
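
To make "specifically designed to mess up the hash function" concrete, here is a sketch of such an attack; it assumes libstdc++ (GCC), where std::hash of an integer is the identity function, so keys that are all multiples of the bucket count collide into bucket 0:

```cpp
#include <cstdio>
#include <unordered_map>
using namespace std;

int main() {
    unordered_map<long long, int> m;
    m.reserve(50000);              // pin the bucket count: no growth rehash below
    long long b = m.bucket_count();

    // With identity hashing, bucket(key) = key % bucket_count, so every
    // multiple of b lands in bucket 0 and each insert rescans one long chain.
    for (long long i = 1; i <= 50000; i++)
        m[i * b] = 1;              // deliberately quadratic: noticeably slow

    printf("bucket 0 holds %zu of %zu elements\n", m.bucket_size(0), m.size());
}
```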

Note: map never needs to rebuild the whole tree to stay fast. A red-black tree restores balance after each insertion or deletion with local rotations and recolorings along the search path, which costs O(log n) in the worst case, so map's O(log n) bound holds per operation, not just amortized.

  • »
    »
    14 months ago, # ^ |

    Unordered_map is O(1) on average, not amortized.

    Map, on the other hand, is O(log n) in the worst case.

    The distinction matters: a worst-case (or amortized) bound is guaranteed for every input, while an average-case bound only holds in expectation. So map is guaranteed to take O(log n) per operation no matter what you insert, but unordered_map's O(1) depends on the keys hashing evenly, which is exactly why O(n^2) blowups exist.