Hello!

Recently I have been trying to learn link cut trees (and splay trees, for that matter). I have tried to polish an implementation that is short (for our ICPC notebooks), and also easy to use and extend. I might be extending this blog post towards a tutorial on how link cut trees are used and, more so, how this template can be used.

The implementation is here:

**Implementation**

It is around 100 lines long (which is not ideal), but it has some neat features. For example, calling `lc.Path(a, b)` returns the root of a splay tree that contains the whole path from `a` to `b`. Also, the function `access(v)` makes sure that `v`'s splay tree contains the whole path from the current root to `v`. It also supports subtree queries via the (amazing) trick of virtual subtrees (https://codeforces.com/blog/entry/67637).
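For readers who have not seen `access` before, here is a minimal, generic link-cut tree sketch in the same spirit (this is *not* the template from the post; the `LinkCut` name, the array-based layout, and the `makeRoot`/`findRoot` helpers are my own), showing how `access(x)` stitches the preferred path together and leaves `x` splayed at the top:

```cpp
#include <vector>
using namespace std;

// A minimal, generic link-cut tree sketch (NOT the template from the post);
// names and layout are mine. Node 0 is a sentinel meaning "null".
struct LinkCut {
    struct Node { int p = 0, ch[2] = {0, 0}; bool rev = false; };
    vector<Node> t;
    LinkCut(int n) : t(n + 1) {}

    // x is the root of its splay tree iff its parent pointer is a path-parent.
    bool isRoot(int x) { int p = t[x].p; return t[p].ch[0] != x && t[p].ch[1] != x; }

    void push(int x) {  // lazy propagation of the path-reversal flag
        if (t[x].rev) {
            swap(t[x].ch[0], t[x].ch[1]);
            for (int c : t[x].ch) if (c) t[c].rev ^= 1;
            t[x].rev = false;
        }
    }
    void rotate(int x) {
        int p = t[x].p, g = t[p].p, d = (t[p].ch[1] == x);
        if (!isRoot(p)) t[g].ch[t[g].ch[1] == p] = x;
        t[x].p = g;
        t[p].ch[d] = t[x].ch[!d];
        if (t[x].ch[!d]) t[t[x].ch[!d]].p = p;
        t[x].ch[!d] = p;
        t[p].p = x;
    }
    void splay(int x) {
        vector<int> stk;  // push() from the splay root down to x first
        for (int y = x;; y = t[y].p) { stk.push_back(y); if (isRoot(y)) break; }
        while (!stk.empty()) { push(stk.back()); stk.pop_back(); }
        while (!isRoot(x)) {
            int p = t[x].p, g = t[p].p;
            if (!isRoot(p)) rotate((t[g].ch[0] == p) == (t[p].ch[0] == x) ? p : x);
            rotate(x);
        }
    }
    // After access(x), x's splay tree holds exactly the path root..x.
    void access(int x) {
        int last = 0;
        for (int y = x; y; y = t[y].p) { splay(y); t[y].ch[1] = last; last = y; }
        splay(x);
    }
    void makeRoot(int x) { access(x); t[x].rev ^= 1; push(x); }
    int findRoot(int x) {
        access(x);
        while (push(x), t[x].ch[0]) x = t[x].ch[0];  // propagate while descending
        splay(x);
        return x;
    }
    void link(int u, int v) { makeRoot(u); t[u].p = v; }  // assumes u, v disconnected
    void cut(int u, int v) {  // assumes the edge (u, v) exists
        makeRoot(u); access(v);
        t[t[v].ch[0]].p = 0; t[v].ch[0] = 0;
    }
    bool connected(int u, int v) { return findRoot(u) == findRoot(v); }
};
```

The reversal flag handled in `push` is what `makeRoot` relies on, and `findRoot` propagates it while walking down to the leftmost node.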

I feel like there is no implementation out there with a very clean way of modifying it (for people who don't already know LCT), so I decided to adapt one myself (also, I don't like using raw pointers -- if you're like me, you might enjoy this implementation). Another thing that I like about this implementation is that the splay tree code is independent of the fact that it's used inside a link-cut tree (it's just plain splay tree code, which can be factored out).

Another very useful property, in my opinion, is that the splay tree can be used completely separately from the link-cut tree (it is not embedded in its implementation), so the two could go into separate snippets in your ICPC notebooks.

In order to adapt the implementation to path aggregates (updates/queries of any type), you would have to write a function that updates/queries the splay tree node returned by `GetPath(u, v)`, and potentially modify the `push` and `pull` methods (`push` does lazy propagation, whereas `pull` recomputes a node's aggregate from its children).
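As a concrete illustration of that `push`/`pull` contract, here is a hypothetical pair for "add on path / query path sum" aggregates, shown on bare nodes with no rotation code (the names `applyAdd`, `push`, `pull` are placeholders, not the template's exact interface):

```cpp
// Hypothetical push/pull pair for "add on path, query path sum" aggregates,
// the way it would sit inside a splay tree node. A splay subtree corresponds
// to a contiguous piece of the preferred path.
struct Node {
    long long val = 0;   // value stored in this vertex
    long long sum = 0;   // sum over this splay subtree
    long long lazy = 0;  // pending "+lazy to every vertex in this subtree"
    int cnt = 1;         // number of vertices in this splay subtree
    Node *l = nullptr, *r = nullptr;
};

// Apply a pending range-add to a whole subtree in O(1).
void applyAdd(Node* x, long long d) {
    if (!x) return;
    x->val += d;
    x->sum += d * x->cnt;
    x->lazy += d;
}

// push = lazy propagation: move the pending update down to the children.
void push(Node* x) {
    if (x->lazy) {
        applyAdd(x->l, x->lazy);
        applyAdd(x->r, x->lazy);
        x->lazy = 0;
    }
}

// pull = recompute this node's aggregates from its (already correct) children.
void pull(Node* x) {
    x->cnt = 1;
    x->sum = x->val;
    if (x->l) { x->cnt += x->l->cnt; x->sum += x->l->sum; }
    if (x->r) { x->cnt += x->r->cnt; x->sum += x->r->sum; }
}
```

A path update then becomes `applyAdd(GetPath(u, v), delta)`, and a path query reads `sum` off the returned root; the splay rotations only ever need to call `push` before descending and `pull` after relinking.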

#### Also, here is a pastebin of some stress-test code, in case you want to test the implementation.

Any thoughts on how to improve the implementation? What do you think about it overall?

There is in fact a way to implement this without having to store a path-parent pointer, and that makes the implementation of `Access(v)` a bit cleaner. This is my implementation. You can query the sum of the values of the vertices on a path and add a value to all vertices on a path, and I think it is easy enough to change for other types of queries/updates.

That's very interesting! While implementing, I realized that one of the parent and path-parent pointers would always be zero. I'm not sure if I will include it like that, as I think it might make debugging a little painful for little gain (also, there might be more 'if' guards if I decide to do so).

I can see that (apart from the shared parent pointer) you have the Access method coded so that the vertex in the argument is splayed at the end. However, this is dependent on the fact that parent pointers are shared, if I’m not mistaken, right?

I think your implementation of `find_root` might be buggy. Shouldn't you propagate before going to the left child?

I don't really understand your first question; you can always splay the vertex in the argument at the end of Access.

About having to propagate in `find_root`, I wasn't really sure when I read your question, but I've done some stress testing and it seems like you don't need to propagate there for some reason.

I guess you're right about splaying at the end; it was just that my Access code was uglier than I had wanted it to be.

I’ve tried your approach for Access, but I found it to be a bit slower (I think it’s because you splay twice in each iteration, while this implementation splays only once; the final splay(v) is a no-op).

How did you stress test find_root? I assume the cases that would expose this bug would have to be quite specific. I would try generating long rooted paths and checking whether calls to rootify(), access() and findroot() yield the proper results (I would be really surprised if they all did).

Anyway, I’ll try to think of some example, but intuition tells me that it might be fishy. Then again, it’s a function that is usually not used that much, so it’s not that big of a deal.

I don't see why it should be slower, one of those splays is actually just a rotation.

I stress tested by generating random pairs of vertices and doing the operations (link, cut, rootify, print root) and comparing the results of the code with and without propagating. Just in case, I have fixed my implementation, thanks for pointing it out!

Your code is buggy, you should correct it!

Also, I’m wondering why link cut trees have such good complexity guarantees. More specifically, heavy path decomposition guarantees that the path depth of any node is at most $$$O(\log N)$$$. Why does this ‘random’ path decomposition work as well? Is there an easy-to-understand or intuitive proof that this is the case?

I would suggest looking up the MIT OCW 6.851 lecture series. I think link cut trees are discussed in the dynamic graph lecture, and it is well explained there why the complexity is what it is.

(hint: consider the number of heavy and light edges affected in a preferred path change)
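For reference, here is my rough paraphrase of the counting argument from that lecture (constants are loose, and this is only a sketch):

```latex
% access(v) performs one splay per preferred-child change, so it suffices
% to bound the total number of preferred-child changes.
%
% Light edges: a root-to-v path contains at most $\log_2 N$ light edges,
% because crossing a light edge $(p, c)$ means
% $\mathrm{size}(c) \le \mathrm{size}(p)/2$. Hence each access makes at
% most $O(\log N)$ light preferred-child changes.
%
% Heavy edges: a heavy edge can stop being preferred only when a light edge
% becomes preferred on the same path, so over $m$ accesses:
\[
  \#\{\text{preferred-child changes}\} \;=\; O(m \log N + N).
\]
% Combining this with the splay trees' amortized cost gives $O(\log^2 N)$
% amortized per operation, which a more careful potential argument
% sharpens to $O(\log N)$.
```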

I’ve watched the analysis, and it seems reasonable (and kind of cool, to be fair); however, in order to fully close the loop, we would also have to prove that re-rooting the represented tree doesn’t change the heavy-light invariant considerably (which is probably true, as the only path where light edges might become heavy is on the reversed path, and there’s probably few of them, but I don’t know for sure). It’s also not clear to me how this impacts splay trees’ performance.

Not the cleanest implementation out there; nonetheless, I wrote it with readability in mind.

It also doesn't use a separate path parent pointer: link

Does link-cut work with a treap instead of a splay tree?

Also, another motivation on top of Tutis's is that splay trees are actually just as easy to code as treaps with parent pointers, in my opinion.

Thanks a lot :D

Hello! Does anybody know how to implement Dinic's algorithm using the optimization provided by link-cut trees?

Sorry for necroposting, but I've recently updated the implementation and it is even cooler. Amongst the useful things that I've done:

The implementation length overall hasn't changed a lot, but the new link cut tree is a lot more powerful.

I just tested the code and ran into a problem.

Suppose we have three nodes and no edges at first.

We then perform the following operations.

The output is

While the first and second query groups are correct, the third query group is not. According to the linking sequence, node 3 should be the root, but after we cut the edge (1,2), node 2 becomes the root instead, leading to incorrect subtree queries.

Is this a caveat of the implementation, or is it unavoidable?

When you use "Cut(1, 2)", you reroot node 1. That means node 2 becomes node 3's parent. Then you cut the edge between node 2 and its parent, so 2 becomes the root instead.

I think that to avoid this problem, you can call "Subtree(2, 3)" instead of "Subtree(2, 0)", which means you reroot at the correct node (node 3) before the query.

But what should I do when there are a lot of subtree queries? How should I reroot for each query?

A "Subtree" query needs an exact root. In my experience, problems like this will give you an exact root.

Or can you explain to me why node 3 should be the root? I think the root should be 2.

Maybe I see it now. You mean that "Link(u, v)" sets node u's parent to v. I have another way to avoid this: when you call Cut(u, v), check which node is the parent of the other, and then cut the edge from the child to the parent.

Yes, this convention follows the author's implementation.

In the example above, the edge is cut in the correct order, but the result is still incorrect. Changing the order makes no difference, either.
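A sketch of that parent-aware Cut idea (again generic code, not the author's template; all names here are mine): find the parent as the predecessor of the accessed vertex, and always detach the child, so the represented root never changes. Since there is no rerooting, no reversal flag is needed at all:

```cpp
#include <vector>
using namespace std;

// A rooted link-cut tree sketch (NOT the author's template; names are mine).
// There is no makeRoot/reversal here, so cutting an edge never changes
// which vertex is the root of the represented tree.
struct RootedLCT {
    struct Node { int p = 0, ch[2] = {0, 0}; };
    vector<Node> t;
    RootedLCT(int n) : t(n + 1) {}
    bool isRoot(int x) { int p = t[x].p; return t[p].ch[0] != x && t[p].ch[1] != x; }
    void rotate(int x) {
        int p = t[x].p, g = t[p].p, d = (t[p].ch[1] == x);
        if (!isRoot(p)) t[g].ch[t[g].ch[1] == p] = x;
        t[x].p = g;
        t[p].ch[d] = t[x].ch[!d];
        if (t[x].ch[!d]) t[t[x].ch[!d]].p = p;
        t[x].ch[!d] = p;
        t[p].p = x;
    }
    void splay(int x) {  // no lazy flags, so no push is needed
        while (!isRoot(x)) {
            int p = t[x].p, g = t[p].p;
            if (!isRoot(p)) rotate((t[g].ch[0] == p) == (t[p].ch[0] == x) ? p : x);
            rotate(x);
        }
    }
    void access(int x) {
        int last = 0;
        for (int y = x; y; y = t[y].p) { splay(y); t[y].ch[1] = last; last = y; }
        splay(x);
    }
    // Link(u, v): set u's parent to v; u must be the root of its own component.
    void link(int u, int v) { access(u); t[u].p = v; }
    // Parent of v in the represented tree = predecessor of v on the accessed path.
    int parent(int v) {
        access(v);
        int x = t[v].ch[0];
        while (x && t[x].ch[1]) x = t[x].ch[1];
        if (x) splay(x);
        return x;  // 0 if v is the root
    }
    void cutFromParent(int v) {  // detach v from its parent, keep the old root
        access(v);
        if (int u = t[v].ch[0]) { t[u].p = 0; t[v].ch[0] = 0; }
    }
    // Cut(u, v): figure out which endpoint is the child, cut child -> parent.
    void cutEdge(int u, int v) {
        if (parent(u) == v) cutFromParent(u);
        else if (parent(v) == u) cutFromParent(v);
    }
    int findRoot(int x) {
        access(x);
        while (t[x].ch[0]) x = t[x].ch[0];
        splay(x);
        return x;
    }
};
```

With `link(1, 2)` and `link(2, 3)`, calling `cutEdge(1, 2)` leaves node 3 as the root of the `{2, 3}` component, instead of promoting node 2 the way a reroot-based Cut does.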