### neal's blog

By neal, 10 days ago

C++ has always had the convenient data structures std::set and std::map, which are tree data structures whose operations take O(log n) time. With C++11, we finally received a hash set and a hash map in std::unordered_set and std::unordered_map. Unfortunately, I've seen a lot of people on Codeforces get hacked or fail system tests when using these. In this post I'll explain how it's possible to break these data structures and what you can do in order to continue using your favorite hash maps without worrying about being hacked.

So how are they hackable? We always assume hash maps are O(1) per operation (insert, erase, access, etc.). But this depends on a key assumption, which is that each item only runs into O(1) collisions on average. If our input data is completely random, this is a reasonable assumption. But this is no longer a safe bet when the input isn't random, especially so if someone is adversarially designing inputs to our code (a.k.a. hacking phase). In particular, if they know our hash function, they can easily generate a large number of different inputs that all collide, thus causing an O(n^2) blow-up.

We'll prove that now by blowing up unordered_map. In order to do that, we first have to determine exactly how it's implemented. For this we can dig into gcc's implementation on GitHub: https://github.com/gcc-mirror/gcc.

After some searching around we run into unordered_map.h. Inside the file we can quickly see that unordered_map makes use of __detail::_Mod_range_hashing and __detail::_Prime_rehash_policy. From this we can guess that the map first hashes the input value and then mods by a prime number, and the result is used as the appropriate position in the hash table.

Some further searching for _Prime_rehash_policy leads us to hashtable_c++0x.cc. Here we can see that there is an array called __prime_list, and the hash table has a policy to resize itself when it gets too large. So we just need to find this list of primes.

The one include on this file leads us to hashtable-aux.cc. Aha, here is the list we're looking for.

One more thing: we need to know the hash function unordered_map uses before modding by these primes. It turns out to be quite simple: the map uses std::hash, which for integers is simply the identity function. Armed with this knowledge, we can insert lots of multiples of one of these primes to the map in order to get n^2 blow-up. Not all of the primes work though, due to the resizing policy of the map; in order for a prime to work, we need the map to actually resize to this prime at some point in its set of operations. It turns out the right prime depends on the compiler version: for gcc 6 or earlier, 126271 does the job, and for gcc 7 or later, 107897 will work. Run the code below in Custom Invocation and see what output you get.

#include <cstdio>
#include <ctime>
#include <unordered_map>
using namespace std;

const int N = 2e5;

void insert_numbers(long long x) {
    clock_t begin = clock();
    unordered_map<long long, int> numbers;

    for (int i = 1; i <= N; i++)
        numbers[i * x] = i;

    long long sum = 0;

    for (auto &entry : numbers)
        sum += (entry.first / x) * entry.second;

    printf("x = %lld: %.3lf seconds, sum = %lld\n", x, (double) (clock() - begin) / CLOCKS_PER_SEC, sum);
}

int main() {
    insert_numbers(107897);
    insert_numbers(126271);
}


Depending on which compiler version you are using, one of these two numbers will take much longer than the other. There are several other primes that also work; try some more for yourself!

Note that for other hash tables like cc_hash_table or gp_hash_table (see Chilli's helpful post), it's even easier to hack them. These hash tables use a modulo power of two policy, so in order to make a lot of collisions occur we can simply insert a lot of numbers that are equivalent, say, modulo 2^16.

### Don't want to be hacked?

Let's look at how to safeguard these hash maps from collision attacks. To do this we can write our own custom hash function which we give to the unordered_map (or gp_hash_table, etc.). The standard hash function looks something like this:

struct custom_hash {
    size_t operator()(uint64_t x) const {
        return x;
    }
};


However as we mentioned, any predictable / deterministic hash function can be reverse-engineered to produce a large number of collisions, so the first thing we should do is add some non-determinism (via high-precision clock) to make it more difficult to hack:

struct custom_hash {
    size_t operator()(uint64_t x) const {
        static const uint64_t FIXED_RANDOM = chrono::steady_clock::now().time_since_epoch().count();
        return x + FIXED_RANDOM;
    }
};


See my post on making randomized solutions unhackable for more details. Awesome, so our hash is perfectly safe now, right? Not so fast. All we've done is add the same fixed number to every input to the function. But if two numbers a and b satisfy a ≡ b (mod m), then a + x ≡ b + x (mod m) for every x as well. Similar problems occur for other very simple hash functions: multiplying by a random large odd number (and overflowing mod 2^64) is likely fine when the table size is a prime p, but will be problematic for gp_hash_table's power-of-two policy; the same situation occurs for xor-ing with a random number.

A slightly better hash function like the following may look enticing:

struct custom_hash {
    size_t operator()(uint64_t x) const {
        static const uint64_t FIXED_RANDOM = chrono::steady_clock::now().time_since_epoch().count();
        x ^= FIXED_RANDOM;
        return x ^ (x >> 16);
    }
};


However, if you are using a gp_hash_table this actually still leaves you susceptible to hacks from a strong enough adversary. In particular, after inserting the numbers (1 << 16) + 1, (2 << 16) + 2, (3 << 16) + 3, ..., into this hash table, all of the outputs will be equivalent modulo 2^16. (Do you see why?)

So we want a better hash function, ideally one where changing any input bit results in a 50-50 chance to change any output bit. Note for example that in the hash function x + FIXED_RANDOM, this property is not satisfied at all; for example, changing a higher bit in x results in a 0% chance of changing a lower bit of the output.

Personally, I like to use splitmix64, which is extremely high-quality and fast; credit goes to Sebastiano Vigna for designing it. Adding all this together, we have our safe custom hash function:

struct custom_hash {
    static uint64_t splitmix64(uint64_t x) {
        // http://xorshift.di.unimi.it/splitmix64.c
        x += 0x9e3779b97f4a7c15;
        x = (x ^ (x >> 30)) * 0xbf58476d1ce4e5b9;
        x = (x ^ (x >> 27)) * 0x94d049bb133111eb;
        return x ^ (x >> 31);
    }

    size_t operator()(uint64_t x) const {
        static const uint64_t FIXED_RANDOM = chrono::steady_clock::now().time_since_epoch().count();
        return splitmix64(x + FIXED_RANDOM);
    }
};


Now we can simply define our unordered_map or our gp_hash_table as follows:

unordered_map<long long, int, custom_hash> safe_map;
gp_hash_table<long long, int, custom_hash> safe_hash_table;


Once we use these in our program above, it runs very quickly:

x = 107897: 0.035 seconds, sum = 2666686666700000
x = 126271: 0.031 seconds, sum = 2666686666700000

=====
Used: 109 ms, 9204 KB



 » 10 days ago, # | ← Rev. 2 →   +1 Added to my snippets ;). Amazing Work.
 » 10 days ago, # |   0 In C++17, when a set has more than 8 elements with the same key, it will use an RBT (red-black tree) to store the data.
•  » » 9 days ago, # ^ | ← Rev. 2 →   +8 Are you saying unordered_set transitions to using red-black tree when it encounters 8 collisions in the same location? This isn't true. In the code snippet I posted above, insert_numbers(107897) in G++17 takes about as long as insert_numbers(126271) in G++14.
•  » » » 4 days ago, # ^ |   +3 Oh, I am wrong, it was implemented in JDK 1.8.
•  » » » » 4 days ago, # ^ |   0 Do you have a link to where it says this? I'm interested in reading the documentation for it.
•  » » » » » 4 days ago, # ^ |   +3
 » 10 days ago, # | ← Rev. 2 →   +3 very useful blog ✋✋ 
•  » » 5 days ago, # ^ |   0 Yes very useful blog
 » 10 days ago, # |   0 Really very useful for me
 » 10 days ago, # |   +5 How convenient! Thanks a lot!
 » 9 days ago, # |   0 very cool. keep it up!
 » 9 days ago, # |   0 Cool! I'm curious how many people actually do anti-hashing hacks in contest. Could you put the standard unordered_map runtimes on the inputs to use as comparisons to the benchmarks you put at the end? Or does it simply take way too much time to even record?
•  » » 9 days ago, # ^ |   +10 The thing about this specific hack is that if anyone successfully makes this hack on anyone else in the contest, their test will be added to system tests, which will leave you in trouble. A few examples of recent problems where you can fail for using unprotected unordered_map include 1027F - Session in BSU and 1039C - Network Safety. As far as runtime, it gets a bit slower with the custom hash but not too much. Pure unordered_map gives anywhere between 0.00s and 0.04s on non-adversarial cases when running with Custom Invocation, vs. 0.03s with custom hash. If you're concerned with speed then gp_hash_table with the custom hash is the way to go, since it uses power of two modding and linear probing rather than prime modding and collision chaining.
•  » » » 9 days ago, # ^ |   0 Oh, I wasn't that concerned about the speed of your custom hash. I was curious about the speed of std::unordered_map on the adversarial case that you've created. However, reading it more closely, you have N = 2·10^5, so if it really is causing an O(n^2) blowup on std::unordered_map, then it's probably too slow to bother recording the time.
•  » » » » 9 days ago, # ^ |   +5 Ah. Run the code from the post in Custom Invocation :)
 » 9 days ago, # | ← Rev. 2 →   +21 I like (uintptr_t)main. :) This pointer should be random on every run because of ASLR, an OS security feature.
•  » » 9 days ago, # ^ | ← Rev. 2 →   +3 Interesting idea. Unfortunately when I tried it on Codeforces just now, it gave the same result every time. :(
•  » » » 9 days ago, # ^ |   +10 Really!? That's too sad. T_T It worked for most other OJ platforms...
•  » » » » 8 days ago, # ^ |   0 You may try new char
 » 9 days ago, # | ← Rev. 2 →   +16 Nicely written, as always. Thanks! Whenever someone talks about hacking hashmaps, I think of this problem: https://ipsc.ksp.sk/2014/real/problems/h.html
 » 9 days ago, # | ← Rev. 2 →   -13 Thanks for this helpful blog. For completeness, it should be noted that the last definition gp_hash_table<long long, int, custom_hash> safe_hash_table; should be preceded by #include <ext/pb_ds/assoc_container.hpp> and using namespace __gnu_pbds; The following is a slight update to your test program. Update
 » 7 days ago, # |   +5 How does it compare with altering the max_load_factor of the hash table? Because it runs slower as compared to this trick (Arpa's blog): unordered_set<long long> numbers; numbers.max_load_factor(0.25); numbers.reserve(1 << 18); this gives output of: x = 107897: 0.018 seconds, sum = 2666686666700000 x = 126271: 0.031 seconds, sum = 2666686666700000 ===== Used: 61 ms, 10600 KB
•  » » 7 days ago, # ^ |   0 This doesn't make it unhackable, it just changes the prime number that breaks it. Try calling insert_numbers(1056323); instead: x = 107897: 0.043 seconds, sum = 2666686666700000 x = 126271: 0.015 seconds, sum = 2666686666700000 x = 1056323: 1.149 seconds, sum = 2666686666700000 ===== Used: 1201 ms, 10456 KB 
•  » » » 7 days ago, # ^ |   0 I am not sure I understand how it "only" changes the prime number, because according to the code, you are inserting numbers with the same modulo wrt the prime. Running on equal-modulo numbers with: unordered_set<long long> numbers; numbers.max_load_factor(0.25); numbers.reserve(1 << 20); // NOTICE THE CHANGE which gives output of: x = 107897: 0.010 seconds, sum = 2666686666700000 x = 126271: 0.015 seconds, sum = 2666686666700000 x = 1056323: 0.016 seconds, sum = 2666686666700000 ===== Used: 78 ms, 14864 KB Also, reserve must change according to the number of elements to be inserted (upper-bounded by a power of two). Maybe it's because of the rehash scheme when max_load_factor is reached in the bucket under consideration.
•  » » » » 7 days ago, # ^ |   0 When you call .reserve() you are changing the internal capacity of the map, which means you are effectively changing the internal prime number modulo it uses out of this list. So yes, if you change the capacity again, it will work well on the previous prime number I gave you, but there will be a new number in the list that is problematic.
 » 5 days ago, # |   0 like splitmix64 is there a good hash function for pairs too?
•  » » 4 days ago, # ^ |   0 If you have a pair of integers you'd like to hash, you can use the custom hash function above on each of them to get two values a and b. Then combine them in any way you like, e.g., a + b.The one issue with a + b is that swapping the two elements of the pair will lead to the same hash value. This isn't a problem from a theory point of view since "O(1) collisions on average" is still valid, but to avoid this situation you can switch to a non-symmetric function such as 3 * a + b or a ^ (b >> 1).