Errichto's blog

By Errichto, 4 years ago, In English

Part 1 (link) introduces basic bitwise operations. This is part 2, and it's mainly about the (in)famous bitsets and example problems. Also, see the links to very useful advanced stuff at the bottom. EDIT: here's the video version of this blog (on my YouTube channel).

Built-in functions

In C++, __builtin_popcount(x) returns the popcount of a number, i.e. the number of ones in the binary representation of $$$x$$$. Use __builtin_popcountll(x) for long longs.

There are also __builtin_clz and __builtin_ctz (and their long long versions) for counting the number of leading or trailing zeros in a positive number. Read more here. Now, try to solve these two simple tasks in $$$O(1)$$$, then open the spoiler to check the solution:

Compute the biggest power of 2 that is a divisor of x. Example: f(12) = 4
Compute the smallest power of 2 that is not smaller than x. Example: f(12) = 16

While popcount is often needed, I rarely use the other two functions. During a contest, I would solve the two tasks above in $$$O(\log x)$$$ with simple while-loops, because it's easier and more intuitive for me. Just be aware that these can be done in $$$O(1)$$$, and use clz or ctz if you need to speed up your solution.
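
For reference, here is one possible $$$O(1)$$$ implementation of both tasks with the built-ins. This is only a sketch: the function names are mine, and it assumes $$$x > 0$$$ fits in a long long.

#include <cstdio>

// biggest power of 2 that divides x: isolate the lowest set bit
long long biggest_power_divisor(long long x) {
    return x & -x;                               // or: 1LL << __builtin_ctzll(x)
}

// smallest power of 2 that is not smaller than x
long long smallest_power_not_below(long long x) {
    if (x == 1) return 1;                        // clz is undefined for 0, so handle x = 1 separately
    return 1LL << (64 - __builtin_clzll(x - 1));
}

int main() {
    printf("%lld %lld\n", biggest_power_divisor(12), smallest_power_not_below(12));   // prints 4 16
}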

Motivation behind bitsets

Consider this problem: There are $$$N \leq 5000$$$ workers. Each worker is available during some days of this month (which has 30 days). For each worker, you are given a set of numbers, each from interval $$$[1, 30]$$$, representing his/her availability. You need to assign an important project to two workers but they will be able to work on the project only when they are both available. Find two workers that are best for the job — maximize the number of days when both these workers are available.

You can compute the intersection of two workers (two sets) in $$$O(30)$$$ by using e.g. two pointers for two sorted sequences. Doing that for every pair of workers is $$$O(N^2 \cdot 30)$$$, slightly too slow.

We can think about the availability of a worker as a binary string of length $$$30$$$, which can be stored in a single int. With this representation, we can count the intersection size in $$$O(1)$$$ by using __builtin_popcount(x[i] & x[j]). The complexity becomes $$$O(N^2)$$$, fast enough.
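
In code, the whole pairwise comparison could look roughly like this. It's a sketch only: the input format, x, n and the variable names are my assumptions, not taken from the blog.

#include <bits/stdc++.h>
using namespace std;

int main() {
    int n;
    scanf("%d", &n);
    vector<int> x(n, 0);   // x[i] has bit (d-1) set if worker i is available on day d
    for (int i = 0; i < n; i++) {
        int k;
        scanf("%d", &k);   // assumed format: count of days, then the days themselves
        while (k--) {
            int day;
            scanf("%d", &day);
            x[i] |= 1 << (day - 1);
        }
    }
    int best = -1, worker1 = -1, worker2 = -1;
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++) {
            int common = __builtin_popcount(x[i] & x[j]);   // days when both are available
            if (common > best) { best = common; worker1 = i; worker2 = j; }
        }
    printf("%d %d (common days: %d)\n", worker1, worker2, best);
}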

What if we are given the availability for the whole year or in general for $$$D$$$ days? We can handle $$$D \leq 64$$$ in a single unsigned long long, what about bigger $$$D$$$?

We can split $$$D$$$ days into convenient parts of size $$$64$$$ and store the availability of a single worker in an array of $$$\frac{D}{64}$$$ unsigned long longs. Then, the intersection can be computed in $$$O(\frac{D}{64})$$$ and the whole complexity is $$$O(N^2 \cdot \frac{D}{64})$$$.
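
The collapsed spoiler below holds the author's implementation; as an independent sketch (the helper name and the uint64_t container are my choice), the chunked intersection might look like this:

#include <cstdint>
#include <vector>

// a[k] holds days 64*k .. 64*k+63 of one worker's availability
int intersection_size(const std::vector<uint64_t>& a, const std::vector<uint64_t>& b) {
    int total = 0;
    for (int k = 0; k < (int)a.size(); k++)
        total += __builtin_popcountll(a[k] & b[k]);   // O(D/64) popcounts overall
    return total;
}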

code

So, we can simulate a long binary number with multiple unsigned long longs. The implementation isn't that bad, but doing binary shifts would be quite ugly. It turns out all of this can be done easily with bitsets.

Bitsets

bitset<365> is a binary number with $$$365$$$ bits available, and it supports most binary operations. The code above becomes simply:

code

Some functions differ, e.g. x.count() instead of __builtin_popcount(x), but that only makes them more convenient. You can read and print binary numbers, and construct a bitset from an int or a string: bitset<100> a(17); bitset<100> b("1010");. You can even access particular bits with b[i]. Read more in the C++ reference: https://en.cppreference.com/w/cpp/utility/bitset.
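
A few of these operations in action (a toy example, not taken from the blog):

#include <bits/stdc++.h>
using namespace std;

int main() {
    bitset<100> a(17);             // from an int: bits 0 and 4 are set (10001)
    bitset<100> b("1010");         // from a string: bits 1 and 3 are set
    bitset<100> c = (a | b) << 2;  // &, |, ^, <<, >> work just like for integers
    cout << c.count() << "\n";     // 4, the analogue of __builtin_popcount
    cout << a[4] << "\n";          // 1, access a particular bit
    cout << b << "\n";             // prints all 100 bits
}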

Note that the size of the bitset must be a constant number. You can't read $$$n$$$ and then declare bitset<n> john;. If $$$n$$$ is up to $$$100$$$, just create bitset<100>.

The complexity of bitwise operations is $$$O(\frac{size}{32})$$$ or $$$O(\frac{size}{64})$$$, depending on the architecture of your computer.

Problems

P1. Different numbers — You are given a sequence of $$$N \leq 10^7$$$ numbers, each from interval $$$[0, 10^9]$$$. How many different values appear in the sequence? Don't use set or unordered_set because they are quite slow.

solution
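
The author's solution is collapsed above. A sketch of one standard approach is to mark every value in a huge bitset, kept global so that its roughly 125 MB don't land on the stack (see the comments below this post):

#include <bits/stdc++.h>
using namespace std;

bitset<1000000001> seen;   // one bit per possible value 0..10^9

int main() {
    int n;
    scanf("%d", &n);
    int distinct = 0;
    for (int i = 0; i < n; i++) {
        int x;
        scanf("%d", &x);
        if (!seen[x]) {
            seen[x] = 1;
            distinct++;
        }
    }
    printf("%d\n", distinct);
}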

P2. Knapsack — You are given $$$N \leq 1000$$$ items, each with some weight $$$w_i$$$. Is there a subset of items with total weight exactly $$$W \leq 10^6$$$?

solution
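
Again, the author's solution is hidden above; the textbook bitset trick here is to keep a bitset of reachable sums and shift it by each weight. A sketch (input format assumed):

#include <bits/stdc++.h>
using namespace std;

int main() {
    int n, W;
    scanf("%d%d", &n, &W);
    bitset<1000001> reachable;   // reachable[s] = 1 if some subset has total weight s
    reachable[0] = 1;
    for (int i = 0; i < n; i++) {
        int w;
        scanf("%d", &w);
        reachable |= reachable << w;   // add item i on top of any previous sum, O(W/64) per item
    }
    puts(reachable[W] ? "YES" : "NO");
}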

P3. Triangles in a graph — Given a graph with $$$n \leq 2000$$$ vertices and $$$m \leq n \cdot (n - 1) / 2$$$ edges, count triples of vertices $$$a, b, c$$$ such that there are edges $$$a-b$$$, $$$a-c$$$ and $$$b-c$$$.

hint
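
One standard bitset approach (my sketch, not necessarily what the hidden hint says): store each row of the adjacency matrix as a bitset, and for every edge a-b count the common neighbours with a single AND, which gives $$$O(\frac{nm}{64})$$$ overall.

#include <bits/stdc++.h>
using namespace std;

const int MAXN = 2000;
bitset<MAXN> adj[MAXN];   // adj[v] = neighbours of v as a bitmask

int main() {
    int n, m;
    scanf("%d%d", &n, &m);
    vector<pair<int,int>> edges(m);
    for (auto& [a, b] : edges) {
        scanf("%d%d", &a, &b);
        a--, b--;
        adj[a][b] = adj[b][a] = 1;
    }
    long long triangles = 0;
    for (auto [a, b] : edges)
        triangles += (adj[a] & adj[b]).count();   // vertices c adjacent to both a and b
    printf("%lld\n", triangles / 3);   // every triangle is counted once per each of its 3 edges
}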

P4. Chef and Queries — https://www.codechef.com/problems/CHEFQUE (easy)

P5. Odd Topic — https://www.codechef.com/AABH2020/problems/ODTPIC (medium), thanks to Not-Afraid for the suggestion

P6. Funny Gnomes — https://www.codechef.com/problems/SHAIKHGN (hard)

Bonuses

1) m & (m-1) turns off the lowest bit that was set to $$$1$$$ in a positive number $$$m$$$. For example, we get $$$24$$$ for $$$m = 26$$$, as $$$11010$$$ changes into $$$11000$$$. Explanation on quora
2) A quite similar trick allows us to iterate efficiently over all submasks of a mask; see the sketch after this list and the article on cp-algorithms / e-maxx. That article also explains why iterating over submasks of all masks is $$$O(3^n)$$$.
3) DP on broken profile (grid dp) — https://cp-algorithms.com/dynamic_programming/profile-dynamics.html
4) SOS dp (sum over subset) — https://codeforces.com/blog/entry/45223 & https://www.youtube.com/watch?v=Lpvsd8WpbWc&t=5m4s
5) _Find_next function and complexity notation for bitsets — https://codeforces.com/blog/entry/43718
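
A minimal sketch of the submask iteration from bonus 2 (the mask value is only an example):

#include <cstdio>

int main() {
    int mask = 0b11010;
    // visits every non-empty submask of mask exactly once; handle sub = 0 separately if you need it
    for (int sub = mask; sub > 0; sub = (sub - 1) & mask)
        printf("%d\n", sub);
}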

I will add links to some problems in online judges, feel free to suggest some in the comments. I think that bonuses 3 and 4 lack some explanation with drawings, maybe I will make some soon.


»
4 years ago, # |

This problem from CodeChef is a perfect example where bitsets come in handy.

»
4 years ago, # |

Would you solve a couple of dp_bitmask problems in your next post?

»
4 years ago, # |

amazing work

»
4 years ago, # |
Rev. 2

It's worth noting that after adding #pragma GCC target("popcnt"), __builtin_popcount() is replaced with the corresponding machine instruction (look at the difference). In my test this gave a 2x speedup.

bitset::count() uses __builtin_popcount() in its implementation, so it is also affected by this.
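
A minimal illustration of what the pragma does (whether it actually helps depends on the compiler and the judge):

#include <cstdio>
#pragma GCC target("popcnt")   // allow GCC to emit the hardware popcnt instruction below

int main() {
    unsigned long long x = 0xF0F0F0F0F0F0F0F0ull;
    printf("%d\n", __builtin_popcountll(x));   // prints 32; without popcnt support this calls a library routine
}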

  • »
    »
    4 years ago, # ^ |

    I would like to see your benchmarks. When I made my benchmarks, I noticed about a 6.2% speedup. Not as drastic as you said.

  • »
    »
    7 months ago, # ^ |

    I would like to see them too; can you show us your benchmark?

    • »
      »
      »
      6 months ago, # ^ |

      This solution got accepted after adding #pragma GCC target("popcnt"):

      #include <bits/stdc++.h>
      #pragma GCC target("popcnt")
      using namespace std;
      #define int long long
      int mod = 1e9 + 7;
      
      signed main(void) {
          int n;
          cin >> n;
          vector<vector<int>> grid(n, vector<int>(n, 0));
          for (int i = 0; i < n; i++) {
              string s;
              cin >> s;
              for (int j = 0; j < n; j++) {
                  grid[i][j] = s[j] - '0';
              }
          }
          // bit[j] = column j of the grid, stored as a bitset indexed by row
          vector<bitset<3000>> bit(n);
          for (int j = 0; j < n; j++) {
              for (int i = 0; i < n; i++) {
                  bit[j][i] = grid[i][j];
              }
          }
          int ans = 0;
          for (int i = 0; i < n; i++) {
              for (int j = i + 1; j < n; j++) {
                  int cnt = (bit[i] & bit[j]).count();   // rows where both columns i and j contain '1'
                  ans += cnt * (cnt - 1) / 2;
              }
          }
          cout << ans << endl;
          return 0;
      }
      
»
4 years ago, # |

Try this problem on CodeChef and look at the comments in the editorial; some people solved it using bitsets to speed up the brute-force solution.

»
4 years ago, # |

Another problem where bitsets come in handy.

»
4 years ago, # |

What is the difference between $$$O(1)$$$ and $$$O(30)$$$? How is using bitsets 'true' constant time?

  • »
    »
    4 years ago, # ^ |

    Good question. You can think of the number of bits in the architecture (usually $$$32$$$ or $$$64$$$) as a variable $$$b$$$, and then we're talking about $$$O(n)$$$ vs. $$$O(\frac{n}{b})$$$. You come up with an algorithm, and its speed will depend on the machine it is run on.

»
4 years ago, # |
Rev. 3

Thanks for the informative tutorial. The following is a C++14 demo program on using the 32/64-bit popcount/clz/ctz bit-counting built-in functions in C++. The templates included in this demo may help beginners to use these functions without worrying about memorizing their slightly inconvenient names.

demo
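
The actual demo is collapsed above; a tiny sketch of the idea (my code, not the commenter's) is to let a template pick the 32-bit or 64-bit built-in for you:

#include <cstdint>
#include <type_traits>

// dispatch to the right built-in based on the argument width
template <typename T>
int popcount_of(T x) {
    static_assert(std::is_unsigned<T>::value, "use an unsigned type");
    return sizeof(T) <= 4 ? __builtin_popcount((unsigned)x)
                          : __builtin_popcountll((unsigned long long)x);
}

int main() {
    return popcount_of(0xF0FULL) == 8 ? 0 : 1;   // 0xF0F has 8 set bits
}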

»
4 years ago, # |
Rev. 2

Hi Errichto, I had asked you during a recent stream whether it is possible to do an iterative version of going through all bitmasks of length $$$N$$$ in $$$O(2^N)$$$ time, instead of $$$O(N \cdot 2^N)$$$. We can do it with recursion, by passing what we need to keep track of as an argument of the recursive call. You had suggested that it is possible via some DP and bit tricks. So I thought a little, and the trick used in the Fenwick Tree came to mind. I have written some simple code that, for each mask, stores all the bits that are set in it. This is the link.

I had a question regarding this: we can get the value of the last set bit using &, but we need to find the position of that bit, i.e. if the value of the last set bit is $$$100$$$ then the position is $$$2$$$. To find the position, we still have to iterate over the length of the bitmask, right? Does the __builtin_ctz function take 1 operation, or a number of operations proportional to the number of bits? What about the bitwise & and $$$|$$$ operators? Of course, the numbers we usually deal with in CP have at most 64 bits, so you could say $$$O(64) = O(1)$$$, but I want to know the constants exactly.
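
As a small aside (my note, not part of the original question): __builtin_ctz already gives the position rather than the value of the lowest set bit, and on modern x86 it compiles to a single bsf/tzcnt instruction.

#include <cstdio>

int main() {
    unsigned x = 0b10100;
    unsigned value = x & -x;             // 4, the value of the lowest set bit
    int position = __builtin_ctz(x);     // 2, its position
    printf("%u %d\n", value, position);
}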

  • »
    »
    4 years ago, # ^ |

    You can use some magic like https://www.chessprogramming.org/BitScan#With_isolated_LS1B to get the position instead of the value. Don't ask me how it works, I don't know the details, but it works. You can also do it in $$$O(\log BITS)$$$ with binary search (also mentioned in the link I gave, under divide and conquer).

    Also, maybe more important to know: most of the time, optimizing away these $$$O(BITS)$$$ loops over bits is overkill and unnecessary, because you usually do some really light operations with no cache misses, so it's actually faster than you'd expect.

    • »
      »
      »
      4 years ago, # ^ |

      Wow, the divide and conquer method was mind-blowing. Another question I have always had: when you do $$$a \& b$$$, you must perform $$$\log(\max(a,b))$$$ operations, right? So does that make the first part, finding the value of the last bit, useless by itself? Also, does this mean a Fenwick Tree has an $$$O(\log^2 N)$$$ update instead of $$$O(\log N)$$$?

      • »
        »
        »
        »
        4 years ago, # ^ |

        No, a&b is done in a few cycles and should actually be cheaper than a+b, because the ALU in a CPU certainly has a bitwise AND unit, and it doesn't need to propagate a carry bit, so all bits are processed in parallel.

        • »
          »
          »
          »
          »
          4 years ago, # ^ |

          Oh yeah, makes sense. (Le me: recalls the Computer Architecture course, wonders what use university is.)

          • »
            »
            »
            »
            »
            »
            3 years ago, # ^ |

            That makes sense to me, because computers probably have built-in circuits in the chips for bitwise operations, where you plug in 2 numbers from one end, some magic happens in constant time, and you get the output.

»
4 years ago, # |

Update :)

I made two YouTube videos (part 1 and part 2) which cover the same content as my blogs.

And I added bonus (5) with a function that finds the next set bit in a bitset, _Find_next(i).

»
4 years ago, # |
Rev. 2

Edit: This should be deleted, I was just toasted and put 1LL << 31 instead of 1LL << 32.

... ... ...

Hey people, I need some help.

For Chef and Queries, I have the current $$$s_i$$$ updating like this after each iteration:

s = ((a*s+b)% MAX_W);

where MAX_W is 1LL << 31;

the type of a, s and b is long long.

I don't get why this messes up the algorithm. If I change their types to unsigned int and remove the % operation in the update, the algorithm works fine. However, I think the first approach should also work, as that is how the problem says we need to calculate $$$s_i$$$.

Can someone please guide me through the reasoning behind the issue?

Thanks!

  • »
    »
    3 years ago, # ^ |

    What was the size of your bitset? I am using bitset<1000000001> and it shows a stack overflow, while bitset<100000> on smaller inputs works fine.

    • »
      »
      »
      3 years ago, # ^ |

      bitset<1000000001> takes about 125 MB of memory. You are (generally) not expected to allocate so much on the stack. Use the heap or a global variable instead.

      • »
        »
        »
        »
        3 years ago, # ^ |

        Can you please elaborate more, with code? I mean P4 of the above post. In the bitset video, Errichto used a bitset of that size.

»
4 years ago, # |
Rev. 2

Please check these 2 codes:

Problem:- https://codeforces.com/problemset/problem/550/B WA:- https://ideone.com/0eCeTX AC:- https://codeforces.com/contest/550/submission/11417471

The only difference is that the AC code uses a 0-based array, and mine uses a 1-based array. Is it wrong to use a 1-based array for bitmasking?

  • »
    »
    4 years ago, # ^ |

    Then you need a bitmask with bits on positions 1 through N, so an even number up to 2^(n+1). I don't think you understand what your code does.

»
4 years ago, # |
Rev. 2

Errichto, what is the complexity of __builtin_popcount(N)? (Assuming $$$N$$$ can be arbitrarily large.)

  • »
    »
    4 years ago, # ^ |
    Rev. 2

    It will take O(32)

    • »
      »
      »
      4 years ago, # ^ |

      So in other words it's just $$$\mathcal{O}(\text{number of bits})$$$? I saw this comment claiming that it's $$$\mathcal{O}(\log{(\text{number of bits})})$$$.

      • »
        »
        »
        »
        4 years ago, # ^ |
        Rev. 2

        If that is so, I don't know why, or what exactly happens at the architecture or compiler level. If anyone replies to your comment, please let me know, as I also want to know how it can be done in $$$\log(bits)$$$ time.

        • »
          »
          »
          »
          »
          4 years ago, # ^ |
          Rev. 2

          Here is how to do it in $$$O(\log bits)$$$.

          Technically speaking, it is only considered $$$O(\log bits)$$$ because it assumes that adding and shifting numbers can be done in constant time, so it performs $$$O(\log bits)$$$ such operations. This is a reasonable assumption in this context, since we are dealing only with 32- and 64-bit numbers.
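
          For completeness, a classic $$$O(\log bits)$$$ "SWAR" popcount for 32-bit numbers looks like this (a well-known bit-twiddling routine; the link above may show a different variant):

          #include <cstdint>
          #include <cstdio>

          int popcount32(uint32_t x) {
              x = x - ((x >> 1) & 0x55555555u);                  // sums of adjacent bit pairs
              x = (x & 0x33333333u) + ((x >> 2) & 0x33333333u);  // sums of 4-bit groups
              x = (x + (x >> 4)) & 0x0F0F0F0Fu;                  // sums of bytes
              return (x * 0x01010101u) >> 24;                    // add the four byte sums together
          }

          int main() {
              printf("%d\n", popcount32(0xF0F0F0F0u));   // prints 16
          }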

  • »
    »
    4 years ago, # ^ |
    Rev. 2

    Treat int popcount as an $$$O(1)$$$ operation but it's just slightly slower than simpler operations like xor. The complexity becomes $$$O(L/32)$$$ for bitsets of length $$$L$$$.

»
12 months ago, # |

https://www.codechef.com/problems/FUNARR

This problem can be used for bitset practice.

»
12 months ago, # |
Rev. 2

I recommend a problem from THUPC 2021: P7606 [THUPC2021] 混乱邪恶 — 洛谷 | 计算机科学教育新生态. It's a problem which uses std::bitset or unsigned long long with bit shifts to represent and calculate DP states. Unfortunately, the statement is only available in Chinese.

»
10 months ago, # |

Regarding "Compute the smallest power of 2 that is not smaller than x. Example: f(12) = 16": if I use 1 << (32 - __builtin_clz(x)), doesn't it produce the same result? Can anyone answer, please?

»
8 months ago, # |

bitset<1000000001> bs; shows a segmentation fault when I declare it inside a function, but it is fine if I declare it globally. Can anyone explain why this happens, and what other restrictions there are on bitset declarations?

  • »
    »
    8 months ago, # ^ |

    Local variables are placed on the stack. This bitset takes about 125 MB, which is a lot bigger than your local stack size (I don't remember exactly what the default stack size is). If you try submitting it to Codeforces, you won't get any errors.

    Increase the stack size using compiler flags.

    • »
      »
      »
      8 months ago, # ^ |
      Rev. 2

      Thanks! And if I declare vector<bool> v(10000000001); of the same size locally, it works fine; what is the reason for that? Also, if you could share the way to customise the stack size using compiler flags, that would be helpful.

»
7 months ago, # |
Rev. 6

I was struggling to figure out the first problem posted in this blog. Even after watching the video endless times, I was still missing something and could not code this problem. Hence, I would like some suggestions on whether the code below can be improved. Please assist.

There are $$$N \leq 5000$$$ workers. Each worker is available during some days of this month (which has $$$30$$$ days). For each worker, you are given a set of numbers, each from interval $$$[1,30]$$$ , representing his/her availability. You need to assign an important project to two workers but they will be able to work on the project only when they are both available. Find two workers that are best for the job — maximize the number of days when both these workers are available.

Test Case:

Input: $$$\text{workers} = [[2,3,5,6,8],[2,4,5,8],[1,2,10,12,14,16]];$$$

Output: 3, pairs: 0 and 1


My Code

Using bitset

My Code

»
5 months ago, # |

What is the complexity of the third problem (counting triangles)? I think that finding all the pairs would take $$$\frac{1}{2}N^2$$$, which is $$$O(N^2)$$$, and then we need to iterate over all $$$n$$$ vertices again to find the third vertex of the triangle if it exists, so it's $$$O(\frac{n^3}{32})$$$. Am I right?