bitset's blog

By bitset, history, 10 days ago, In English

Hello sirs, you might know that Boost has a dynamically sized bitset class, but did you know that GCC also has one? Turns out it's included in the tr2 folder. You can simply include the <tr2/dynamic_bitset> header, and use it as std::tr2::dynamic_bitset. Yes, it works on Codeforces.

Here's a code example to count triangles in a graph (abc258_g):

Code

In some problems we might not be able to use a constant-sized bitset, for example in 1856E2 - PermuTree (hard version). Here's a submission where we replaced neal's custom_bitset template with using custom_bitset = tr2::dynamic_bitset<>; and got accepted with little performance difference (260853192, original 217308610).

The implementation of the bitset seems identical to a version of Boost's dynamic bitset, so you can read the docs here. You can view the source here.

Compared to std::bitset, here are some notable things I saw:

  • They include more set operations, such as set difference, and checking if a bitset is a subset of another bitset.
  • They include lexicographical comparisons between bitsets.
  • You can append single bits, as well as whole blocks (i.e. integers).
  • You can also resize it like a vector. (For some reason there's no pop_back though...)
  • Find first and find next are also supported, without weird function names like in std::bitset. Unfortunately, there's still no find previous :(
  • You can specify the word type and allocator as template arguments.

Of course, it also has all the normal std::bitset functionality. However, I'm not sure how fast it is compared to std::bitset; you can let me know in the comments. Interestingly, it seems to use 64-bit integers as blocks on Codeforces.

If you enjoyed this blog, please make sure to like and subscribe for more bitset content.

Thanks qmk for helping with the blog.

  • Vote: +480

»
10 days ago, # |
  Vote: +45

Note: they added more features to dynamic_bitset in a newer Boost version, but they haven't been added to GCC yet. So that's why I linked an older version of Boost's docs.

»
10 days ago, # |
Rev. 2   Vote: +81

Fun fact: you can also write dynamic_bitset<__uint128_t>, which is technically $$$O(\frac{n}{128})$$$! 260859954

  • »
    »
    10 days ago, # ^ |
      Vote: +69

    I doubt it will be faster, because __uint128_t is not a native type (the CPU can only handle integers up to 64 bits). Basically, the compiler represents a __uint128_t as two uint64_ts, and operations on __uint128_ts are translated into multiple instructions operating on uint64_ts.

    Experiment
    • »
      »
      »
      10 days ago, # ^ |
      Rev. 2   Vote: +17

      128-bit (and higher) integers are in fact natively supported, though not in basic x86_64 and not with full generality: they're supported by particular operations. There are two ways CPUs handle higher-precision operations to achieve faster performance:

      • one or more operands (possibly including the output) are represented by two registers, as you describe, doubling the precision. An example is 32-bit ARM's SMULL, which multiplies two 32-bit integers into a 64-bit result held in two registers using a single instruction.
      • SIMD instructions that use their own registers. This is the relevant case here: a __uint128_t variable can occupy a register such as %xmm0 under the SSE instruction set. Bitwise operations are typically available even in the earliest SIMD instruction sets because they're so simple, and the overhead of loading into SIMD registers isn't a big deal, since memory still has to be fetched one cache line at a time.

      The performance difference depends on hardware and software, which is part of why $$$O(n/128)$$$ isn't valid $$$O$$$-notation.

      • »
        »
        »
        »
        10 days ago, # ^ |
        Rev. 3   Vote: +8

        I don't think there are SIMD instructions that can be used to directly compute 128-bit values, at least on x86-64. For example, there's no SIMD instruction that computes "add with carry", which is necessary for 128-bit additions.

        As I've shown in my experiment, GCC actually generates two add instructions for a 128-bit addition (on x86-64). I tried various compile options, but the compiler did not use SIMD at all. Similarly, a 128-bit multiplication expands to three multiply and two add instructions, and a 128-bit division even becomes a library function call. It might be possible to use SIMD for bitwise operations, but GCC doesn't use SIMD for them, either. See this assembly output in Compiler Explorer.

        • »
          »
          »
          »
          »
          10 days ago, # ^ |
            Vote: +8

          True, I was thinking specifically of bitwise operations, where the fact that SIMD operates on packed multiple data doesn't matter. There's no such thing as bitset addition anyway.

          Note that even some seemingly "elementary" bitset operations must be split into multiple assembly instructions. A typical example is shifts, a massive pain in the ass to implement correctly.

»
10 days ago, # |
  Vote: +3

I wonder if using this can help pass some problems in $$$O(n^{2})$$$. Is this the case?

»
9 days ago, # |
  Vote: +40

Username checks out