neal's blog

By neal, 4 years ago, In English

Inspired by the discussion at Almost-LCA in Tree, I wanted to pose a similar question:

Is there a simple solution to

  1. preprocess a tree with weighted edges in $$$O(n)$$$ or $$$O(n \log n)$$$ and
  2. query the maximum edge weight along the path from $$$u$$$ to $$$v$$$ in $$$O(1)$$$?

Once again, jump pointers can be used to solve each query in $$$O(\log n)$$$ with a bad constant factor. Are there faster solutions?

  • Vote: I like it
  • +61
  • Vote: I do not like it

»
4 years ago, # |
  Vote: I like it +26 Vote: I do not like it

I think the method I described here can be extended to answer this.

»
4 years ago, # |
Rev. 2   Vote: I like it +155 Vote: I do not like it

There is the following nice technique called a "line tree". I do not know the exact origin of this trick: I learned it from the editorial of a recent contest by 300iq.

Formally, a line tree $$$L_T$$$ of a tree $$$T$$$ with weighted edges is such a tree with weighted edges, that:

  1. $$$L_T$$$ and $$$T$$$ have the same set of vertices.
  2. The maximum weight on the path between vertices $$$v$$$ and $$$u$$$ is the same for trees $$$T$$$ and $$$L_T$$$.
  3. $$$L_T$$$ is, well, a line (so we can choose the order of the vertices on the line and the weights of edges between neighbouring vertices, but nothing else).

If we already have a line tree, answering "maximum weight on a path" queries for $$$T$$$ boils down to asking maximum on a subsegment of an array, which can be done with a sparse table.
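
To make the query side concrete, here is a minimal sketch (names are mine), assuming we already have each vertex's position `pos[v]` on the line and an array `W` where `W[i]` is the weight between line positions `i` and `i + 1`:

```python
def build_sparse_table(W):
    # W[i] = weight of the line edge between positions i and i + 1
    n = len(W)
    table = [W[:]]
    j = 1
    while (1 << j) <= n:
        prev = table[-1]
        half = 1 << (j - 1)
        table.append([max(prev[i], prev[i + half])
                      for i in range(n - (1 << j) + 1)])
        j += 1
    return table

def line_query_max(table, pu, pv):
    # max edge weight between line positions pu and pv (pu != pv):
    # the path covers the line edges l .. r - 1
    l, r = min(pu, pv), max(pu, pv)
    k = (r - l).bit_length() - 1
    return max(table[k][l], table[k][r - (1 << k)])
```

A path query is then `line_query_max(table, pos[u], pos[v])`, a single $$$O(1)$$$ range maximum.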

How to construct a line tree efficiently (even its existence is not obvious)? We can do it recursively: pick an edge in $$$T$$$ with the largest weight $$$w$$$, remove it, build line trees of two remaining connected components and connect them with an edge with weight $$$w$$$. This can be done in $$$O(n \log n)$$$ with some small-to-large tricks and even in $$$O(n)$$$ after sorting with linked list merging. The second property is easy to prove by induction over such construction.

  • »
    »
    4 years ago, # ^ |
      Vote: I like it +42 Vote: I do not like it

    Yeah, I came up with this technique in May 2018, but I heard it was well known long before that, and in China it goes by some name similar to "Kruskal Tree" or something.

  • »
    »
    4 years ago, # ^ |
      Vote: I like it +8 Vote: I do not like it

    Hi Kaban-5, may I ask which problem that was, the one you learned the trick from? Thanks in advance :)

    • »
      »
      »
      4 years ago, # ^ |
      Rev. 4   Vote: I like it 0 Vote: I do not like it

      Unfortunately, there may be some "legal" problems with sharing the statement and editorial of this contest :(. All the links from the first revision of this comment are now dead.

      • »
        »
        »
        »
        4 years ago, # ^ |
          Vote: I like it +8 Vote: I do not like it

        Oh, then Kaban-5, are you allowed to describe just that problem, rather than sharing the whole editorial? If not, could someone else give some other problems that use this trick, or at least describe some common scenarios in which it can be applied? Thanks in advance, and sorry for the trouble.

  • »
    »
    3 years ago, # ^ |
    Rev. 3   Vote: I like it +10 Vote: I do not like it

    Hello. I would like to ask how you can build the line tree in $$$O(n)$$$ time after sorting the edges.

    The following Python code builds a line tree with the small-to-large merging technique in $$$O(n \log n)$$$ time (after sorting) in a bottom-up fashion (in the order of increasing edge weights):

    pp = list(range(N))              # pp[v] = representative of v's current component
    arr = [[i] for i in range(N)]    # arr[r] = vertices of r's line, in order
    Ws = [[] for _ in range(N)]      # Ws[r] = weights between consecutive vertices

    eds.sort(key=lambda e: e[2])     # process edges in increasing weight order
    for a, b, w in eds:
        a, b = pp[a], pp[b]

        # small-to-large: append the smaller line to the larger one
        if len(arr[a]) < len(arr[b]):
            a, b = b, a

        # w is the largest weight so far, so it can safely join the two lines
        Ws[a].append(w)
        Ws[a].extend(Ws[b])

        for i in arr[b]:
            pp[i] = a
            arr[a].append(i)

    Here $$$arr[i]$$$ stores the line of nodes in the component whose current representative is $$$i$$$, and $$$Ws[i]$$$ stores the weights on the edges between consecutive nodes in that line.

    I see how you can maintain the $$$arr$$$ and $$$Ws$$$ arrays in $$$O(n)$$$ time by using linked lists, but maintaining $$$pp$$$ will still take $$$O(n \log n)$$$ time.

    The following example explains why I think I need to maintain the $$$pp$$$ array: Let's say we have created two (sub-)line trees: $$$[x, a, y]$$$ and $$$[u, b, v]$$$ and the next edge to process is $$$(a, b, w)$$$. Thus we need to connect the two lines by either connecting $$$y$$$ to $$$u$$$ or $$$v$$$ to $$$x$$$. To do this we need to be able to jump to the beginning (and end) of a line in $$$O(1)$$$ time, given a node in a line.

    What is the "trick" that we can use to construct the tree in $$$O(n)$$$ time after sorting the edges? Do we use a top-down approach instead?

    • »
      »
      »
      3 years ago, # ^ |
        Vote: I like it +10 Vote: I do not like it

      Aha. If we use linked lists and actual Union-Find with path compression, the complexity becomes $$$O(n \,\alpha(n))$$$ after sorting, which is still a bit off from $$$O(n)$$$.
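
For what it's worth, here is a sketch of that $$$O(n \, \alpha(n))$$$ variant (names and structure are my own guess at it): the line of each component lives in a `nxt` linked list, and a path-compressed, union-by-size DSU finds the component representative, where the line's head and tail are stored:

```python
class DSU:
    # union-find with path halving and union by size
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        a, b = self.find(a), self.find(b)
        if self.size[a] < self.size[b]:
            a, b = b, a
        self.parent[b] = a
        self.size[a] += self.size[b]
        return a  # the surviving root

def build_line_tree(n, edges):
    # edges: (a, b, w) triples of a tree; returns (order, weights) where
    # weights[i] is the line-edge weight between order[i] and order[i + 1]
    dsu = DSU(n)
    nxt = [-1] * n          # linked list: successor on the line
    wout = [0] * n          # weight of the line edge leaving each vertex
    head = list(range(n))   # head of the line, valid at the DSU root
    tail = list(range(n))   # tail of the line, valid at the DSU root
    for a, b, w in sorted(edges, key=lambda e: e[2]):
        ra, rb = dsu.find(a), dsu.find(b)
        # w is the largest weight so far, so appending one line after the
        # other with a w-edge preserves all path maxima
        nxt[tail[ra]] = head[rb]
        wout[tail[ra]] = w
        new_head, new_tail = head[ra], tail[rb]
        root = dsu.union(ra, rb)
        head[root], tail[root] = new_head, new_tail
    # walk the single remaining line
    order, weights = [], []
    v = head[dsu.find(0)]
    while v != -1:
        order.append(v)
        if nxt[v] != -1:
            weights.append(wout[v])
        v = nxt[v]
    return order, weights
```

After sorting, the loop performs $$$O(n)$$$ DSU operations, so everything past the sort is $$$O(n \, \alpha(n))$$$.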

»
4 years ago, # |
  Vote: I like it +41 Vote: I do not like it

Let's run Boruvka's algorithm on the given tree (it will work in $$$O(n)$$$ because the input is a tree).

With it we can build a tree of $$$O(\log n)$$$ depth with the same maximums on all paths: in each Boruvka iteration, create a new vertex for every resulting connected component and connect it to the vertices representing the components at the start of that iteration (not the initial vertices), with the weights of the edges added to the MST in that iteration. Now we can use jump pointers on this tree to answer queries in $$$O(\log \log n)$$$. With the Four Russians method you can even make it $$$O(1)$$$; this technique is used in the linear MST algorithm.
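
To illustrate, here is a rough sketch of this construction (all names are mine, and this naive version rescans every edge each round, so it runs in $$$O(n \log n)$$$ rather than linear time): every round, each component selects its minimum incident edge, a new vertex is created per merged group, and each old component's vertex hangs off it weighted by its selected edge. Since every component gains a parent in every round, all leaves end up at the same depth, and a query can walk the two leaves up in lockstep:

```python
def boruvka_tree(n, edges):
    # Leaves are the original vertices 0..n-1; the max edge weight on the
    # path between u and v in the input tree equals the max parent-edge
    # weight on the leaf-to-leaf path in the returned tree.
    comp = list(range(n))          # current component label of each vertex
    cv = {i: i for i in range(n)}  # component label -> its tree vertex
    parent, pweight = [-1] * n, [0] * n
    while len(cv) > 1:
        # each component selects its minimum-weight incident edge
        best = {}
        for a, b, w in edges:
            ca, cb = comp[a], comp[b]
            if ca == cb:
                continue
            for c in (ca, cb):
                if c not in best or w < best[c][0]:
                    best[c] = (w, ca, cb)
        # merge components along the selected edges
        link = {c: c for c in cv}
        def find(c):
            while link[c] != c:
                link[c] = link[link[c]]
                c = link[c]
            return c
        for w, ca, cb in best.values():
            ra, rb = find(ca), find(cb)
            if ra != rb:
                link[ra] = rb
        # one new tree vertex per merged group; every old component hangs
        # off it with the weight of the edge it selected
        new_cv = {}
        for c in cv:
            r = find(c)
            if r not in new_cv:
                new_cv[r] = len(parent)
                parent.append(-1)
                pweight.append(0)
            parent[cv[c]] = new_cv[r]
            pweight[cv[c]] = best[c][0]
        comp = [find(c) for c in comp]
        cv = new_cv
    return parent, pweight

def boruvka_path_max(parent, pweight, u, v):
    # all leaves sit at equal depth, so walk both up in lockstep
    best = 0
    while u != v:
        best = max(best, pweight[u], pweight[v])
        u, v = parent[u], parent[v]
    return best
```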

»
4 years ago, # |
Rev. 2   Vote: I like it +28 Vote: I do not like it

In $$$O(n \log n)$$$ you can build a centroid decomposition of this tree. Then, to answer a query, you find the LCA of $$$u$$$ and $$$v$$$ in the tree of centroids (which can be done in $$$O(1)$$$, just as LCA for a usual tree) and take precalculated maximums on the paths from $$$u$$$ and from $$$v$$$ to this centroid.
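
A naive sketch of this idea (names mine; instead of an $$$O(1)$$$ LCA on the centroid tree, the query below minimizes over all common centroid ancestors, which is $$$O(\log n)$$$ per query but shows the precomputation): for every vertex we store the maximum edge weight to each of its $$$O(\log n)$$$ centroid-tree ancestors. The centroid-tree LCA lies on the path between $$$u$$$ and $$$v$$$ and attains the true maximum, while other common ancestors only overestimate:

```python
import sys
sys.setrecursionlimit(1 << 20)

def centroid_paths(n, edges):
    # anc[v] lists (c, mx) for every centroid-tree ancestor c of v, where
    # mx is the max edge weight on the original-tree path from v to c
    adj = [[] for _ in range(n)]
    for a, b, w in edges:
        adj[a].append((b, w))
        adj[b].append((a, w))
    removed = [False] * n
    anc = [[] for _ in range(n)]

    def sizes(root):
        sz = {}
        def dfs(v, p):
            sz[v] = 1
            for t, _ in adj[v]:
                if t != p and not removed[t]:
                    dfs(t, v)
                    sz[v] += sz[t]
        dfs(root, -1)
        return sz

    def mark(v, p, c, mx):
        anc[v].append((c, mx))
        for t, w in adj[v]:
            if t != p and not removed[t]:
                mark(t, v, c, max(mx, w))

    def decompose(entry):
        sz = sizes(entry)
        total = sz[entry]
        c, p = entry, -1
        while True:  # walk toward the heavy subtree to find the centroid
            heavy = next((t for t, _ in adj[c]
                          if t != p and not removed[t] and 2 * sz[t] > total),
                         None)
            if heavy is None:
                break
            c, p = heavy, c
        mark(c, -1, c, 0)
        removed[c] = True
        for t, _ in adj[c]:
            if not removed[t]:
                decompose(t)

    decompose(0)
    return anc

def centroid_path_max(anc, u, v):
    # the centroid-tree LCA lies on the path and gives the true maximum;
    # every other common centroid ancestor overestimates, so take the min
    mx_u = dict(anc[u])
    return min(max(mx, mx_u[c]) for c, mx in anc[v] if c in mx_u)
```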

»
4 years ago, # |
Rev. 2   Vote: I like it +47 Vote: I do not like it

Use Kruskal's algorithm: each time you add an edge $$$u \leftrightarrow v$$$, create a new vertex $$$k$$$ whose left son is (the root of) the disjoint set of $$$u$$$ and whose right son is (the root of) the disjoint set of $$$v$$$.

Then the answer to the query $$$(u,\,v)$$$ is the weight stored at the lowest common ancestor of $$$u$$$ and $$$v$$$ in this new tree. It can be done in linear time.
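
If I read this right, it is the Kruskal reconstruction tree. A small sketch under that reading (names mine; the naive LCA walk at the end should be replaced by an $$$O(1)$$$ LCA scheme to actually get constant-time queries):

```python
def kruskal_tree(n, edges):
    # Leaves 0..n-1 are the original vertices; each edge, processed in
    # increasing weight order, creates one internal node storing its
    # weight, whose children are the two components the edge merges.
    total = 2 * n - 1
    p = list(range(total))   # DSU parent; a DSU root is a current tree root
    def find(x):
        while p[x] != x:
            p[x] = p[p[x]]
            x = p[x]
        return x
    parent = [-1] * total
    weight = [0] * total
    nxt = n                  # id of the next internal node
    for a, b, w in sorted(edges, key=lambda e: e[2]):
        ra, rb = find(a), find(b)
        parent[ra] = parent[rb] = nxt
        weight[nxt] = w
        p[ra] = p[rb] = nxt
        nxt += 1
    return parent, weight

def kruskal_path_max(parent, weight, u, v):
    # naive LCA walk for illustration; swap in an O(1) LCA for real use
    seen = set()
    while u != -1:
        seen.add(u)
        u = parent[u]
    while v not in seen:
        v = parent[v]
    return weight[v]
```

Since weights only increase up the tree, the weight at the LCA is exactly the path maximum.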

Sorry for my poor English.

»
4 years ago, # |
  Vote: I like it -26 Vote: I do not like it

Assuming there are no updates to the tree, I think we can just use RMQ-based LCA with two sparse tables: one for getting the LCA, and one for tracking the maximum edge weight. We can do $$$O(1)$$$ sparse table queries using the most significant bit. I think there is a resource about LCA using RMQ in the Competitive Programming 3 book by Steven Halim.

Let's say we have to query path$$$(u, v)$$$: we get the LCA using RMQ in $$$O(1)$$$ (denote it $$$anc$$$), we know how much we need to lift from $$$u$$$ and $$$v$$$, and then we can just look up the sparse table that tracks the maximum value, querying from $$$u$$$ to $$$anc$$$ and from $$$v$$$ to $$$anc$$$ and taking the maximum of the two.

Assuming I'm understanding your post correctly, I think that should work, right?

  • »
    »
    4 years ago, # ^ |
      Vote: I like it +10 Vote: I do not like it

    How do you get the maximum on the path from $$$u$$$ to $$$lca(u, v)$$$ faster than $$$O(\log n)$$$?

    • »
      »
      »
      4 years ago, # ^ |
        Vote: I like it +15 Vote: I do not like it

      My bad, I didn't think the maximum query through; I thought it would be possible alongside the sparse table used to get $$$lca(u, v)$$$, and yes, its complexity is still $$$O(\log n)$$$ at best.