Recently I've been looking for a good heavy-light decomposition code I can just copy-paste and continue to live happily. To my greatest disappointment, I didn't find one and had to write my own.

First, there's this blog explaining what HLD is http://blog.anudeep2011.com/heavy-light-decomposition/, but it doesn't contain full code. Also it uses LCA for queries, which is absolutely unnecessary and complicates things too much.

Second, there's this implementation https://sites.google.com/site/indy256/algo/heavy_light. (It has since been updated, so this comment refers to the old version.) OK, complete working code, and it doesn't use LCA. But it has too many arrays. Way too many! It's also mixed with the segment tree code, and I'd rather separate the two.

Third, there's this post http://apps.topcoder.com/forums/?module=Thread&threadID=796128&start=0&mc=8. The best-looking code so far, but it only has an LCA method and no path-query handling, which is the whole point of HLD. So I refined the code and added the missing parts.

The code is probably simple enough to need no explanation, but it is **not well tested yet**. The graph can be represented, for example, as a `vector<vector<int>>`. I use a single segment tree for all the chains. Creating separate trees makes the code harder to read and also slower (at least in my experiments). For a good segment tree implementation, see http://codeforces.com/blog/entry/18051.

```
template <class T, int V>
class HeavyLight {
    int parent[V], heavy[V], depth[V];
    int root[V], treePos[V];
    SegmentTree<T> tree;

    template <class G>
    int dfs(const G& graph, int v) {
        int size = 1, maxSubtree = 0;
        for (int u : graph[v]) if (u != parent[v]) {
            parent[u] = v;
            depth[u] = depth[v] + 1;
            int subtree = dfs(graph, u);
            if (subtree > maxSubtree) heavy[v] = u, maxSubtree = subtree;
            size += subtree;
        }
        return size;
    }

    template <class BinaryOperation>
    void processPath(int u, int v, BinaryOperation op) {
        for (; root[u] != root[v]; v = parent[root[v]]) {
            if (depth[root[u]] > depth[root[v]]) swap(u, v);
            op(treePos[root[v]], treePos[v] + 1);
        }
        if (depth[u] > depth[v]) swap(u, v);
        op(treePos[u], treePos[v] + 1);
    }

public:
    template <class G>
    void init(const G& graph) {
        int n = graph.size();
        fill_n(heavy, n, -1);
        parent[0] = -1;
        depth[0] = 0;
        dfs(graph, 0);
        for (int i = 0, currentPos = 0; i < n; ++i)
            if (parent[i] == -1 || heavy[parent[i]] != i)
                for (int j = i; j != -1; j = heavy[j]) {
                    root[j] = i;
                    treePos[j] = currentPos++;
                }
        tree.init(n);
    }

    void set(int v, const T& value) {
        tree.set(treePos[v], value);
    }

    void modifyPath(int u, int v, const T& value) {
        processPath(u, v, [this, &value](int l, int r) { tree.modify(l, r, value); });
    }

    T queryPath(int u, int v) {
        T res = T();
        processPath(u, v, [this, &res](int l, int r) { res.add(tree.query(l, r)); });
        return res;
    }
};
```

Hi, first of all, awesome implementation.

I have a doubt about this part of the code

Let's assume the path (u, v) lies within a single chain. If we query the range [treePos[u], treePos[v] + 1), then we will be counting whatever cost the node u has to its parent (and we shouldn't).

Thinking about that, I came up with this:

I am not an expert in HLD, so I may be wrong :)

Good point. I forgot to mention that I consider values to be stored in vertices. I believe the other case (values on edges) can be handled similarly to indy256's approach by changing the line to

I tried this, but this didn't work for the USACO problem grassplant. It also didn't work when I used the original implementation for Timus 1553. I'm pretty sure it's not my segment tree that's wrong since I've verified it on many problems.

As said before: awesome implementation! I was looking for a good code for HLD too, but I couldn't understand all the details only with the blog's explanation... Thank you!

Now, I have a question... If I understood correctly (maybe I didn't), this decomposition helps if you only need to query information about paths, without changing the underlying tree, right? That's just what I need. However... I once asked how we can solve the dynamic MST problem (I know, with a link-cut tree), and this answer said we can adapt the LCA DP to answer queries about paths, like HLD. My question is: is this LCA DP adaptation as powerful as HLD? Do you know this technique? Can you recommend a good tutorial on this LCA DP for path queries? Searching for "lca for path queries", I found your (this) post. Thanks in advance!

HLD supports updating the tree unlike the LCA sparse array approach.

Updating weights or something like that, right? But the topology must remain the same, right?

Yes. The tree structure is fixed.

Could you please explain this:

How does the program modify a path and get the answer?

These are C++11 lambda functions I used to avoid code duplication. Basically, the code in braces substitutes for the calls to `BinaryOperation op`. This code may depend on the segment tree implementation and the result type, but I hope it gives the general idea.

I understand why the `value` and `res` captures are necessary, but what does `this` do and why is it necessary? (Sorry for the necropost.)

I have rewritten my Java implementation https://sites.google.com/site/indy256/algo/heavy_light using your neat code (if you are not against). Old version is here.

Sure, thank you for the useful library. Though it would be nice to have a link back, like the Sleator implementation has: https://sites.google.com/site/indy256/algo/link-cut-tree-connectivity

Why do you use one SegmentTree for all operations? Isn't that going to slow down the code in most cases? You are not using the cache (the tree is too big), and even the log factor is bigger.

I've never seen code without LCA before, and it's really cool. But still, in most problems you just need the LCA anyway for computing some weird functions.

I do because it's easier!

Also, it's faster for single-element updates and path queries when using a non-recursive segment tree. Multiple trees may be faster when lazy propagation is needed, but I don't think more than 10% faster, if at all. Time limits for problems requiring HLD aren't usually strict anyway. And non-recursive vs. recursive segment tree is a much more significant improvement.

I don't know how you are going to use the cache when you need to access O(log N) different trees per query.

This code implicitly finds the LCA — it's the node `u` at the end of the `processPath` method. You can just return it if you need it.

Are you sure this is right? I implemented it for Timus 1553 (http://acm.timus.ru/problem.aspx?space=1&num=1553&locale=en), but for some reason got WA on test case 4. I modified it a bit, because when I tested your original to find the sum of the number of times each vertex was traversed, I got an incorrect answer. Can someone post a passing solution to Timus 1553 using this implementation?

My original test case. These are the edges in the tree: 0-1, 1-3, 3-4, 3-5, 0-2, 2-6, 2-7.

Actions the program performed:

Add 1 to each value on the path from 4 to 5, then query the sum on the path from 4 to 6.

Using your implementation I got an answer of 5, which is wrong.

where the code at

My apologies. https://ideone.com/p8ntsW

The Segment Tree is from Benq's github.

Extra care must be taken in `processPath` if the binary operation isn't commutative: you'll need two segment trees, one for downward sums and one for upward sums.

Does it also handle the case when u == v? I think it does not. Can anyone help?

If the tree is a straight line (1 -> 2 -> 3 -> 4) and all queries are like (1, n), i.e. from top to bottom, can HLD also compress the path to handle these queries?

Nice implementation. I like the part where you combine finding the LCA and processing the path at the same time, using the root array you constructed.