## TL;DR

The policy hash table is 3-6x faster for insertion/deletion and 4-10x faster for reads/writes than unordered_map in my benchmarks. As far as I can tell, there are no downsides: the policy hash table (specifically the open-addressing version, gp_hash_table) beats unordered_map in all my benchmarks.

## Background

I've often been irritated by how slow unordered_map is in C++. Too often, I have something that runs fast enough in terms of complexity, but the constant factor from unordered_map slows down the solution too much. Yesterday though, after using the useful order statistics tree from https://codeforces.com/blog/entry/11080, I was curious if there were any other useful data structures hiding in the Policy STL. And lo and behold, I found a hash table.

## Benchmarks

Well, enough backstory, let's look at some numbers. All benchmarks below are compiled with C++14 -O2.

```
unordered_map linear insertion: 0.689846
cc_hash_table linear insertion: 0.408233
gp_hash_table linear insertion: 0.256131
unordered_map linear read/write: 1.69783
cc_hash_table linear read/write: 0.202474
gp_hash_table linear read/write: 0.26842
unordered_map random insertion: 2.90184
cc_hash_table random insertion: 3.15129
gp_hash_table random insertion: 0.56553
unordered_map random read/write: 2.02336
cc_hash_table random read/write: 0.333415
gp_hash_table random read/write: 0.403486
```

While the policy hash tables give only modest improvements for linear insertion, they blow unordered_map out of the water when it comes to reads/writes. These are order-of-magnitude improvements that make hash tables usable where they previously weren't. "Official" benchmarks done by the DS authors can also be found here.

## Example

Benchmarks, of course, don't always reflect the real world. So here's an example where it allows a solution to be accepted that TLEs with unordered_map.

Example problem (5000 ms time limit)

Solution with unordered_map: (TLE on test case 8)

Solution with policy hash table directly substituted in: (TLE on test case 26)

Solution with unordered_map, rewritten to not use clears: (TLE on test case 26)

Solution with policy hash table and rewritten to not use clears: (AC with max time of 3180 ms)

## Usage

To use this data structure:

```
#include <ext/pb_ds/assoc_container.hpp>
using namespace __gnu_pbds;
gp_hash_table<int, int> table;
```

From there, the API seems almost exactly the same as unordered_map's.

## Defeating Anti-Hash tests

One weakness of hash tables is that mean people can find hash collisions offline and blow up the complexity of your hashmap. In my opinion, the easiest way of solving this is below. There's no need to define your own custom hash function.

```
const int RANDOM = (int) chrono::high_resolution_clock::now().time_since_epoch().count();
struct chash {
    size_t operator()(int x) const { return hash<int>{}(x ^ RANDOM); }
};
gp_hash_table<int, int, chash> table;
```

## Why is it so much faster?

See my comment here

## A better hash function

There is one issue with using policy hash tables as-is. The default hash function for integers in C++ is just the identity. This is especially problematic when using hash tables for something like a Fenwick tree, since the default bucket structure for policy hash tables is based on powers of 2 rather than primes. If you're using policy hash tables for Fenwick trees, you have two options:

1. Use prime table sizes, as done here.
2. Use a better hash function for integers, as done here.

Thanks to adamant for his post that revealed to me the existence of policy data structures, and thanks to ed1d1a8d for the discussion.

PS: In other posts for unordered_map, I've seen people claim that reserve and max_load_factor could increase performance drastically. They didn't seem to do much for me. However, if you want to do something similar for these hash tables, check out the example here

Code for the benchmarks can be found here

EDIT: I realized that gp_hash_table is the way to go, not cc_hash_table. gp_hash_table sacrifices ~10% reading/writing speed to gain 3-6x in insertion/deletion/clearing. I updated the post to reflect the new numbers.

Thank you for sharing this!

How can I utilize a pair as a key?

Unfortunately, C++ doesn't provide a hash function for pairs by default, so you need to define your own. I've typically done it like this.

For unordered_map, simply defining the operator in the std namespace seems to work. However, for policy hash table, since it's in a different namespace, it seems like we need to pass it in manually.

gp_hash_table continues to perform great with pairs.

EDIT: Updated with gp_hash_table results. EDIT2: Updated with proper way of defining custom hashes.

Thank you for the response.

How did you create such a hash function? Is there a useful tutorial about hash functions? I would appreciate learning how to create a hash function when I use a general struct as a key.

Somebody else suggested that one at some point, and I found it worked pretty well, so I've stuck with it.

All a hash function needs to do is return something that's fairly random. Usually I just multiply each element by a prime and combine them, and that seems to work decently.

Ok, I got it. Thank you!

If someone has a link explaining better hash functions, I would still appreciate it.

Don't do this. It seems to work, but technically it doesn't: it is undefined behavior to add a specialization of the `std::hash` template class to the namespace `std` for `std::pair<int, int>`. You can achieve the same result without invoking undefined behavior if you define a custom class with an appropriate `operator()` and pass it as a template argument to either an `std::unordered_map` or another hash table.

I see, thanks! I'll update my post to reflect this.

As an alternative solution you can use nested hash tables, for example:

`gp_hash_table<first, gp_hash_table<second, value>> table;`

The outer key is the first element of the pair; its value is an inner hash table keyed by the second element of the pair, which maps to the final value.

It is much easier and faster to implement it, and you don't need to think about the pair hashing function.

But it actually will be interesting how this approach will affect time and memory efficiency, comparing to custom hashing function.

I don't think defining a custom hash function is too hard. You just need to make a struct with a custom operator.

As for benchmarks, that solution is going to be, as you might imagine, a ton slower.

I think defining a custom operator should be something that C++ competitive programmers know how to do. It's useful for a lot of things.

What's the difference between cc_hash_table and gp_hash_table? Noticed that they were both in the gnu pbds.

cc_hash_table is a chained hash table, while gp_hash_table is an open-addressing hash table. I have some benchmarks in the post.

It seems like in practice, gp_hash_table has far superior performance for insertions/deletions, while cc_hash_table has marginally better performance for reads/writes.

Thank you for an interesting post. I had thought that open addressing was an obsolete technique that should not be used.

But why is the standard unordered_map implementation so slow? Could there still be some downsides in the policy based structures that can't be seen in your experiments?

TL;DR: The C++ standard requires certain unnecessary things that make unordered_map implementations very slow.

So I was pretty curious about this too. Reading around, it seems like it's because the C++ standard requires some bizarre things from unordered_map, namely, that you can iterate over buckets, as well as being able to iterate through elements in a bucket.

I found this comment on HN: "A bigger issue IMO is that the existence of some methods in various standard classes/templates really murder performance, no matter how good the implementation is. One particularly bad example is the bucket() method, local_iterator types, etc. in c++11's unordered_map (probably the rest of the unordered family too). These force the table to be implemented in such a way that every insertion requires a heap allocation, and iteration requires much more pointer chasing than would otherwise be necessary (e.g. with a probed implementation), which is... unfortunate for the cache."

Also, there's this talk which cites the same point. I linked to the relevant portion here

Due to the C++ standard, unordered_map's elements must be stored in what is basically a linked list. I don't think there's any real use in accessing the elements of a bucket, nor in iterating over the buckets themselves, so the policy hash map seems strictly superior to me.

Perhaps the author's design of linked-list buckets is necessary for simple collision management, despite hurting performance, and local_iterator is just an auxiliary feature.

How easy is it to hack a solution that uses a policy hash table in a contest? The hacker needs to maximize the number of collisions, as dacin21 wrote in his post about rolling hashes. Is there a way to randomize the standard hash function?

UPD. Let n be the number of buckets in the hash table. I think the worst test is 0, n, 2·n, 3·n, ..., k·n, ...: all of these integers will be placed in one bucket. We need to modify the hash function, e.g. by seeding it with a randomly generated value (rdseed).

In my opinion, this is the easiest way to avoid hacks. Why is it necessary to xor with a random value?

Xor with a random address in memory from the heap manager; on every testing system this value will be different for each user's program. It's not an accident that the memory is called random-access memory (RAM). high_resolution_clock() may tick in microseconds (on Codeforces, MinGW), but if we XOR with a pointer from the heap manager, this number will be almost perfectly random.

I see. I remember seeing somebody hack a solution by generating anti-hash cases for random seeds across an entire time range, and then checking custom invocation for the exact time to submit it.

Is there any way to use max_load_factor and reserve with gp_hash_table? I believe doing so would make it much faster in many cases. I looked online but couldn't find anything besides unreadable GCC jargon. In addition, I wrote a custom hash table out of curiosity. Interestingly enough, the benchmarks (all gcc with -O2) vary a lot based on how the program is compiled and where it's run. The benchmark consists of adding and then erasing 4 million random integers.

My machine (Intel i5-6300HQ) with Windows and MinGW 4.9.2:

My machine with Windows Subsystem for Linux and g++ 7.3.0:

My machine with Ubuntu on Virtualbox with g++ 7.3.0:

Ideone with C++14:

Benchmarking code

I wrote the typedef to do so in the PS (the important thing is the last two trues). It didn't seem to do much for me. After using that typedef, you can call table.resize() and table.set_loads(); set_loads takes a pair (min_load_factor, max_load_factor).

The benchmarks don't seem to vary that much to me (except for the performance of cc_hash_table on your machine). gp_hash_table outperforms unordered_map by 2-5x fairly consistently, and is fairly close to your custom implementation for insertion/deletion.

I tried calling resize and set_loads after using the typedef, but I get a long compiler error :(

So, a couple more test results. Your custom hash table seems to outperform gp_hash_table by 1-2x for insertion/erasing, but seems to be outperformed by gp_hash_table by 1-2x for reads.

As for the template, it seems that gp_hash_table requires an extra parameter. C++ template errors are awful to debug, aren't they?

This is the correct one:

It didn't seem to do much for me. And here's the benchmark, where there's only 1e5 total elements, but they're reinserted 100 times. It also shows an example for the typedefs needed as well as how to use them.

https://ideone.com/N0Wha6

One more thing. If you're interested in playing around with the settings to see if there's a faster version, check out their benchmarks here: https://gcc.gnu.org/onlinedocs/libstdc++/ext/pb_ds/assoc_performance_tests.html

How does this work internally? I.e., how is it implemented? Are there any explanations as to why it is faster?

See my comment here: http://codeforces.com/blog/entry/60737?#comment-446357

Thank you so much~

```
gp_hash_table<int, int> m;
gp_hash_table<int, int>::iterator it;
it = m.lower_bound(23);
```

This gives a compilation error. How do I do lower_bound in gp_hash_table?

You can't lower_bound in a hash table. You may want to use map.

How do you `rehash` (set the number of buckets) in `gp_hash_table`?

I don't think you can.

Please tell that to my Clang compiler.

I solved this problem using `gp_hash_table` and got accepted. Using `unordered_map` or `map` causes TLE.