shejibri's blog

By shejibri, 3 years ago, In English

Why can't you do path compression in a DSU with rollbacks? Why is the time $$$O(\log n)$$$ instead of $$$O(\alpha(n))$$$? I wrote a DSU with path compression and rollbacks, but it was slower than a DSU without path compression.


»
3 years ago, # |

You can, no one will stop you. The problem is that the $$$O(n\alpha(n))$$$ complexity is only a bound on the total work: a single call with path compression can still take $$$\Theta(\log n)$$$ time. So suppose you have a union operation which causes that; you have to spend $$$\Theta(\log n)$$$ time. Then you need to revert it, again in $$$\Theta(\log n)$$$ time. Then you have to do it again, and so on. The complexity will be $$$O(n\log n)$$$ with or without path compression, but with path compression you do a lot of unnecessary operations, causing a larger constant factor.
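
To make the adversarial pattern concrete (this is just one way to force it): union by size alone can build a tree of depth $$$\Theta(\log n)$$$, for example by repeatedly uniting two trees of equal size. A find on its deepest leaf then costs $$$\Theta(\log n)$$$ whether or not you compress, a rollback undoes that compression in another $$$\Theta(\log n)$$$ steps, and the next find pays the full price again, so $$$m$$$ such find/rollback pairs cost $$$\Theta(m \log n)$$$ in total; the $$$O(\alpha(n))$$$ amortized accounting assumes compressions are never undone.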

  • »
    »
    3 years ago, # ^ |

    Can you describe how to do path compression in DSU with rollback? I've heard many people say that we can't do path compression if we want to roll back.

    • »
      »
      »
      3 years ago, # ^ |

      Without rollback, your DSU keeps merging nodes and queries for the root only after path compression has been applied. Even if a long chain of nodes is built up, compressing it once pays off over later queries, which is exactly how the cost drops from $$$O(\log n)$$$ to $$$O(\alpha(n))$$$ amortized.

      With rollback, however, a long chain of nodes can be pulled apart again and again, so a single operation can cost up to $$$O(\log n)$$$.

      (edited the wrong complexity)

      • »
        »
        »
        »
        3 years ago, # ^ |

        That is if you only use path compression. But if you want $$$\alpha(n)$$$ amortised complexity without rollback, you are using both compression and join-by-size. Then what Maksim1744 explained happens.

        As for implementing general rollback, for a data structure with a guarantee on the worst-case complexity of operations, you can just store in a vector which fields changed and which values they had before the change, for every operation done.
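
        A change-logged array along those lines could look like this (a rough sketch, with naming of my own):

```cpp
// Sketch of "log the old value before every write" rollback for a plain
// array; any structure built on top of it (e.g. the parent/size arrays of
// a DSU) becomes rollbackable for free.
#include <bits/stdc++.h>
using namespace std;

template <class T>
struct RollbackArray {
    vector<T> a;
    vector<pair<int, T>> history;          // (index, value before the change)

    RollbackArray(int n, T init = T{}) : a(n, init) {}

    const T& get(int i) const { return a[i]; }

    void set(int i, const T& value) {      // the only way to modify a[i]
        history.push_back({i, a[i]});
        a[i] = value;
    }

    size_t snapshot() const { return history.size(); }

    void rollback(size_t snap) {           // undo every set() done after snapshot()
        while (history.size() > snap) {
            a[history.back().first] = history.back().second;
            history.pop_back();
        }
    }
};
```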

    • »
      »
      »
      3 years ago, # ^ |

      As mango_lassi already mentioned, you can do rollbacks for any data structure by storing all values before the change. For DSU with path compression it would look like this:

      Code
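
      For concreteness, a minimal sketch of what such history-based code could look like (the struct name and the (type, index, old value) encoding are illustrative, not necessarily what the hidden code uses): every write to parent[] or rank[] first pushes the old value onto a history vector, and rollback() replays the history backwards.

```cpp
// Sketch: DSU with path compression where every modified cell is logged,
// so any suffix of the changes can be undone exactly.
#include <bits/stdc++.h>
using namespace std;

struct DsuWithRollback {
    vector<int> parent, rnk;               // rnk is the rank[] array (stores subtree sizes)
    // (what changed: 0 = parent[i], 1 = rnk[i]; i; value before the change)
    vector<array<int, 3>> history;

    explicit DsuWithRollback(int n) : parent(n), rnk(n, 1) {
        iota(parent.begin(), parent.end(), 0);
    }

    int find(int v) {
        if (parent[v] == v) return v;
        int root = find(parent[v]);
        if (parent[v] != root) {
            history.push_back({0, v, parent[v]});  // log before compressing
            parent[v] = root;
        }
        return root;
    }

    bool unite(int u, int v) {
        u = find(u), v = find(v);
        if (u == v) return false;
        if (rnk[u] > rnk[v]) swap(u, v);           // attach smaller under larger
        history.push_back({0, u, parent[u]});
        history.push_back({1, v, rnk[v]});
        parent[u] = v;
        rnk[v] += rnk[u];
        return true;
    }

    size_t snapshot() const { return history.size(); }

    void rollback(size_t snap) {                   // undo everything done after snapshot()
        while (history.size() > snap) {
            auto [type, i, old] = history.back();
            history.pop_back();
            if (type == 0) parent[i] = old;
            else rnk[i] = old;
        }
    }
};
```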

      That's not optimal in terms of history.size(), but this is just an example. For instance, you can save only $$$(2, u, v)$$$ in unite and then restore parent[u] = u, rank[v] -= rank[u]
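
      On top of the sketch above, that optimization would amount to replacing unite and rollback with something like the following (again just an illustration, as drop-in replacements for the two methods):

```cpp
// unite logs a single (2, u, v) record, and rollback recomputes the old
// values from it instead of storing them.
bool unite(int u, int v) {
    u = find(u), v = find(v);
    if (u == v) return false;
    if (rnk[u] > rnk[v]) swap(u, v);
    history.push_back({2, u, v});          // "u was attached under v"
    parent[u] = v;
    rnk[v] += rnk[u];
    return true;
}

void rollback(size_t snap) {
    while (history.size() > snap) {
        auto [type, a, b] = history.back();
        history.pop_back();
        if (type == 0) parent[a] = b;                   // undo a path-compression write
        else { parent[a] = a; rnk[b] -= rnk[a]; }       // undo a unite
    }
}
```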

      Also, I said that you could, I didn't say that you should. Having path compression in a DSU with rollbacks doesn't change the complexity, it's still $$$O(\log n)$$$ per operation.

»
3 years ago, # |

As I remember, most amortized complexity bounds are broken by rollback operations:

  • Amortization is roughly: "look, this potential can only decrease so much, and every expensive operation has to be paid for by many cheap ones before it, so the total work over the whole sequence is close to linear and the cost per operation averages out to something smaller."

  • Rollback: you may have to redo work you have already done, or even undo and redo the same work over and over. By going back like this, the average cost per operation can grow again.

  • Take vector<> as an example: each time you extend it with push_back(), emplace_back() or resize() and the buffer of capacity $$$2^k$$$ is full, the old data is copied into a bigger buffer of capacity $$$2^{k+1}$$$. Over $$$n$$$ insertions this copying costs about $$$O(2^0 + 2^1 + \dots + 2^{\lfloor \log_2 n \rfloor}) = O(2^{\lfloor \log_2 n \rfloor + 1} - 1) = O(n)$$$ in total. But once you add a rollback operation that also shrinks the buffer back through $$$2^{k-1}, 2^{k-2}, \dots$$$, repeatedly extending and rolling back around one boundary can cost about $$$O(2^k \cdot n) = O(n^2)$$$, so the amortized bound breaks (see the sketch below).

Hence it is the same in this case: the DSU is amortized $$$O(\alpha(n))$$$ on average, but the bound breaks under rollback operations. You can still achieve $$$O(\alpha(n))$$$ if you don't do any rollbacks, which is faster; with rollbacks, however, path compression is unlikely to make it faster.
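
A toy sketch of that vector scenario (note: this is NOT how std::vector actually behaves, since std::vector never shrinks its capacity on pop_back): a buffer that doubles when full and halves as soon as it is half empty reallocates, and copies $$$\Theta(n)$$$ elements, on every single operation if you keep pushing and popping right at a power-of-two boundary.

```cpp
// Illustration of an amortized O(1) structure whose bound breaks once
// growth can be "rolled back" by shrinking the capacity again.
#include <bits/stdc++.h>
using namespace std;

struct BadBuffer {
    int *data = nullptr;
    size_t size = 0, capacity = 0;
    long long reallocations = 0;

    void reallocate(size_t new_cap) {
        int *fresh = new int[new_cap];
        copy(data, data + size, fresh);              // Θ(size) copy every time
        delete[] data;
        data = fresh;
        capacity = new_cap;
        ++reallocations;
    }
    void push_back(int x) {
        if (size == capacity) reallocate(capacity ? 2 * capacity : 1);
        data[size++] = x;
    }
    void pop_back() {                                // the "rollback"
        --size;
        if (capacity > 1 && size <= capacity / 2) reallocate(capacity / 2);
    }
    ~BadBuffer() { delete[] data; }
};

int main() {
    BadBuffer b;
    const int n = 1 << 16;
    for (int i = 0; i < n; ++i) b.push_back(i);      // fill exactly to a boundary
    long long before = b.reallocations;
    for (int i = 0; i < 1000; ++i) { b.push_back(i); b.pop_back(); }  // thrash at the boundary
    // Every push/pop pair at the boundary triggers two reallocations of ~n
    // elements each, i.e. Θ(n) work per operation instead of amortized O(1).
    printf("reallocations during thrashing: %lld\n", b.reallocations - before);
}
```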