Hi everyone!

Perhaps you have heard about the GitHub project on translating e-maxx. The thing is, that project is actually about more than just translation. You see, there are a bunch of algorithms and approaches which either do not have proper write-ups, or have them only in some weird uncommon languages like, you know, Russian or Chinese. And there are some sites which just don't fit this purpose for various reasons.

Years ago, when I started doing competitive programming, e-maxx.ru was the main resource to learn things from. Things have changed a bit since then. E-maxx is Russian-only and hasn't been updated for years. Now I hope that e-maxx-eng will manage to fill this gap and become a common resource for everyone to learn new things and keep up to date on recent competitive programming tricks and new algorithms.

So I encourage everyone to collaborate in making e-maxx-eng a comprehensive guide to competitive programming, and not only during hacktoberfests :). And to begin with, I would like to share with you my article on the convex hull trick and the Li Chao tree that I wrote for this resource. Enjoy!

**If you were too lazy to read it thoroughly:** there is a link to the CHT and Li Chao tree article just above this sentence!

Oh, I totally forgot: if there are any topics you would like to see an article on, you can write them here :)

Do you have any link in which the Ji Driver tree is mentioned?

Maybe this could help. Or some Chinese participant might know better material.

Oh, Chinese links, nice! I just wanted to add some in the entry :)

Gomory Hu Tree: https://www.youtube.com/watch?v=MneGZIXwnS4

However, I was unable to implement it.

Segment Tree Beats, a use of the segment tree (mentioned here).

Also, I think your article should mention that the Li Chao tree can easily be extended to parabolic functions (useful for problem KILLER).

Well, I mentioned that it only assumes that each pair of functions intersects at most once.

(O(n), O(1)) RMQ algorithm by Fischer and Heun (link).

WPS Binary Search

(or I guess it is also called Lagrange optimization in section 5 of this article)

On behalf of all newbies like me, thank you! Looking forward to more articles!

Here's a straightforward dynamic CHT/Li Chao tree problem if anyone wants to test their template: https://csacademy.com/contest/archive/task/squared-ends

It's my first time hearing about the Li Chao tree. It is so awesome and easy to code ^_^

However, if the numbers are very large, we can use ranking (coordinate compression) instead of a dynamic segment tree. It saves memory and is easier to code: the code stays the same, and the only difference is that instead of calling f(x) we call f(rev_rnk[x]). However, this works only for offline queries, since we need to rank both the input and the numbers in the queries.
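A minimal sketch of what that might look like, assuming all query x-coordinates are collected up front (the `LiChao` struct and the member names here are illustrative, not from any particular template):

```cpp
#include <bits/stdc++.h>
using namespace std;

const long long INF = 4e18;
struct Line {
    long long k = 0, b = INF;  // y = k*x + b; b == INF marks "no line yet"
    long long operator()(long long x) const { return b == INF ? INF : k * x + b; }
};

// Li Chao tree over ranks 0..m-1 of the distinct query x-values.
// xs plays the role of rev_rnk: it maps a rank back to the real coordinate,
// so every comparison evaluates lines at xs[...] instead of at the rank.
struct LiChao {
    vector<long long> xs;
    vector<Line> t;
    LiChao(vector<long long> q) : xs(move(q)) {
        sort(xs.begin(), xs.end());
        xs.erase(unique(xs.begin(), xs.end()), xs.end());
        t.assign(4 * xs.size(), Line());
    }
    void add(Line nw, int v, int l, int r) {
        int m = (l + r) / 2;
        // keep at this node the line that is lower at the midpoint coordinate
        if (nw(xs[m]) < t[v](xs[m])) swap(nw, t[v]);
        if (l == r || nw.b == INF) return;
        if (nw(xs[l]) < t[v](xs[l])) add(nw, 2 * v, l, m);
        else add(nw, 2 * v + 1, m + 1, r);
    }
    void add(long long k, long long b) { add({k, b}, 1, 0, (int)xs.size() - 1); }
    long long query(long long x) {  // x must be one of the pre-declared queries
        int i = lower_bound(xs.begin(), xs.end(), x) - xs.begin();
        long long res = INF;
        int v = 1, l = 0, r = (int)xs.size() - 1;
        while (true) {
            res = min(res, t[v](x));
            if (l == r) return res;
            int m = (l + r) / 2;
            if (i <= m) { v = 2 * v; r = m; }
            else { v = 2 * v + 1; l = m + 1; }
        }
    }
};
```

Here the tree has one leaf per distinct query point, so memory is O(m) for m offline queries regardless of how large the coordinates are.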

Thank you for the nice article :)

Thanks for the great article adamant! Just to note, the images/figures in the article are broken.

adamant I'll look into it. Looks like all images are broken on e-maxx-eng.

Are exercise problems sorted in increasing order of difficulty?

Is it working now? I can't enter the site. (Or is it a problem with my VPN?)

Site works for me

While solving 1179D - Fedor Runs for President with a Li Chao tree, I came up with an idea to improve its memory usage.

If queries are integers from $$$1$$$ to $$$n$$$, then you only need $$$n$$$ nodes.

Your implementation uses $$$4n$$$ nodes because it's a segment tree with binary-heap indexing.

Segment tree can use only $$$2n-1$$$ nodes with proper indexing.

We can further improve it to $$$n$$$ by replacing the segment tree with a binary search tree.

Let $$$t[l,r]$$$ denote the node of range $$$[l,r]$$$ and $$$m=\lfloor \frac{l+r}{2} \rfloor$$$. In the Li Chao tree algorithm, $$$t[l,r]$$$ stores a line that has the smallest $$$y$$$-value at $$$x=m$$$ in its subtree. Therefore, when you are querying $$$x=m$$$, and you reach the node $$$t[l,r]$$$, you can just return instead of going down the subtree.

In the original algorithm, $$$leftchild(t[l,r]) = t[l,m], rightchild(t[l,r]) = t[m,r]$$$.

We can modify the algorithm and make $$$leftchild(t[l,r]) = t[l,m-1], rightchild(t[l,r]) = t[m+1,r]$$$.

So we have a bijection between nodes and integers from $$$1$$$ to $$$n$$$, and thus we only need $$$n$$$ nodes.

If you use $$$m$$$ as the index of the node $$$t[l,r]$$$ and draw the tree with index on each node, then you can see it's a binary search tree of integers from $$$1$$$ to $$$n$$$.
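Here is a minimal sketch of that idea as I understand it (the `LiChaoBST` name is mine, and this is not the linked submission): node $$$m$$$ stores the line that is lowest at $$$x=m$$$, children cover $$$[l,m-1]$$$ and $$$[m+1,r]$$$, and a query for $$$x=m$$$ stops at node $$$m$$$.

```cpp
#include <bits/stdc++.h>
using namespace std;

const long long INF = 4e18;
struct Line {
    long long k = 0, b = INF;  // y = k*x + b; b == INF marks "no line yet"
    long long operator()(long long x) const { return b == INF ? INF : k * x + b; }
};

// Li Chao tree over integer queries 1..n using exactly n nodes:
// the node of range [l, r] is stored at index m = (l + r) / 2, and its
// children cover [l, m-1] and [m+1, r] -- a binary search tree over 1..n.
struct LiChaoBST {
    int n;
    vector<Line> t;
    LiChaoBST(int n) : n(n), t(n + 1) {}

    void add(Line nw, int l, int r) {
        if (l > r) return;
        int m = (l + r) / 2;
        // keep at node m the line that is lower at x = m
        if (nw(m) < t[m](m)) swap(nw, t[m]);
        if (nw.b == INF) return;
        // the two lines cross at most once, so the loser can only win
        // on one side of m; descend into that side only
        if (nw(l) < t[m](l)) add(nw, l, m - 1);
        else add(nw, m + 1, r);
    }
    void add(long long k, long long b) { add({k, b}, 1, n); }

    long long query(int x, int l, int r) {
        int m = (l + r) / 2;
        long long res = t[m](x);
        if (x == m) return res;  // node m already holds the best line at x = m
        if (x < m) return min(res, query(x, l, m - 1));
        return min(res, query(x, m + 1, r));
    }
    long long query(int x) { return query(x, 1, n); }
};
```

Since every integer in $$$[1,n]$$$ is the midpoint of exactly one range, the array `t` of size $$$n$$$ indexed by the midpoint is all the storage needed.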

This is my implementation: 55968228

Will there be any operation that only the Li Chao segment tree could handle?

Just as you describe, we can use binary search trees (e.g. implicit treaps) to replace any segment tree with only n nodes. But with their high constant factor they are unlikely to be any faster or use less memory than a segment tree. (Also, at least for me, a segment tree is much faster and easier to code.)

o_To_T_oT If the functions are not lines but segments, the Li Chao segment tree can solve it in $$$O(\log^2 n)$$$ per update and $$$O(\log n)$$$ per query. I don't think the Li Chao BST works in this case.

rahulDugar In this case the constant decreases. The only difference between the original algorithm and mine is that my left child's and right child's ranges are shorter.

You can also see it by modifying adamant's code.

Just add

`if (r < l) return;`

and replace every `l, m` with `l, m-1`, and every `m, r` with `m+1, r`. After that it becomes my algorithm.

Ok, I understood it. It will be a nice constant optimization for the Li Chao tree. Thanks!

some weird uncommon languages :(

There is something misleading in the article about DSU. It is said there that if you link a to b randomly, the complexity will not be worse. In fact, if you use this, it is easy to construct a test where the expected tree height is n / 2, where n is the number of sets. In the attached article there is another way of random linking: we assign all the priorities at random and link the node with the bigger priority to the node with the smaller priority, but do not update the priorities after that, which can be a bit easier to implement.
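A minimal sketch of that random-priorities variant, in case it helps (the `RandomDSU` name and the details are my own, not code from the paper):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Randomized linking via static priorities: every node gets a random
// priority once, and the root with the bigger priority is always linked
// under the root with the smaller one. Priorities are never updated.
struct RandomDSU {
    vector<int> p;
    vector<unsigned> pri;
    RandomDSU(int n) : p(n), pri(n) {
        iota(p.begin(), p.end(), 0);
        mt19937 rng(54321);  // any seed; fixed here for reproducibility
        for (auto &x : pri) x = rng();
    }
    int find(int x) { return p[x] == x ? x : p[x] = find(p[x]); }  // path compression
    void unite(int a, int b) {
        a = find(a), b = find(b);
        if (a == b) return;
        if (pri[a] > pri[b]) swap(a, b);  // now pri[a] < pri[b]
        p[b] = a;                         // bigger priority goes under smaller
    }
};
```

Note the difference from coin-flip linking: the direction of each link is decided by the fixed priorities, not by a fresh random bit per union.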

By the way, adamant, do you know how to construct a test where DSU with only path compression and without rollbacks works in O(log n) on average? I heard that it is not that trivial, and I could not find it myself.

Another thing that I believe is worth mentioning is that you can link each node to its grandparent, which has the same complexity but is a bit faster in practice, as said here: link.
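For reference, a minimal sketch of that grandparent trick (usually called path halving); the `DSU` structure here is illustrative:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Path halving: during find, every visited node is re-linked to its
// grandparent, so each traversed path roughly halves in length. This keeps
// the same amortized complexity as full path compression, without recursion.
struct DSU {
    vector<int> p;
    DSU(int n) : p(n) { iota(p.begin(), p.end(), 0); }
    int find(int x) {
        while (p[x] != x) {
            p[x] = p[p[x]];  // link x to its grandparent
            x = p[x];
        }
        return x;
    }
    void unite(int a, int b) { p[find(a)] = find(b); }
};
```

The single `p[x] = p[p[x]]` write per step is what makes it a bit faster in practice than the two-pass compression loop.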

I actually can't see the reason for this exact comment being downvoted. I can see the reasoning behind downvoting most of the comments from this account, as most of them are rude or unpleasant in some way. But here I make three points, each of which seems OK:

I mentioned a mistake in an article and asked to extend it. You want an exact test case where random linking performs badly? It's easy-peasy: link 1 to 2, then 2 to 3, ..., then n-1 to n. The expected tree height is then n / 2, which is easy to prove by induction. I also mentioned how the priorities are chosen in the original paper, which corrects the original article on cp-algorithms. This seems more than OK to me.
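If anyone wants to check the claim empirically, here is a small simulation of exactly that test (the function name, seed, and constants are mine):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Simulates the chain of unions 1-2, 2-3, ..., (n-1)-n where the two roots
// are linked by a fair coin flip (no ranks, no path compression). Each union
// either puts the current root under the new node (height + 1) or the new
// node under the root (height unchanged), so the final height is a sum of
// ~n fair coin flips and its expectation is about n / 2.
int chainHeight(int n, unsigned seed) {
    vector<int> p(n + 1);
    iota(p.begin(), p.end(), 0);
    mt19937 rng(seed);
    auto find = [&](int x) { while (p[x] != x) x = p[x]; return x; };
    for (int i = 1; i < n; i++) {
        int a = find(i), b = find(i + 1);
        if (rng() & 1) p[a] = b; else p[b] = a;  // fair-coin linking
    }
    int height = 0;
    for (int v = 1; v <= n; v++) {
        int d = 0;
        for (int x = v; p[x] != x; x = p[x]) d++;
        height = max(height, d);
    }
    return height;
}
```

For n = 2000 the measured height lands near 1000, i.e. linear in n rather than logarithmic.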

I asked a cool coder whether he knows a test where the upper bound for DSU with path compression only is achieved. I googled for information on this topic but failed, so it is at least not too easy to find, and it is good to ask this question in public in case other people are interested in this topic.

Finally, I mentioned a technique which may make your life a bit easier and your DSU a little faster, and provided a link to an article.

I do not see why this comment was so heavily downvoted. I pointed to a mistake in an article and proposed some ways to improve it. Even if it won't be fixed, I am strongly convinced that the comment above has educational value in itself and is useful for the CF community.

It is probably because your comment directly contradicts the linked paper. I'm more inclined to trust the paper. You basically just said "consider a line" and concluded that the algorithm somehow has worse time complexity, which isn't true at all. Plus, union by size would do the exact same thing on your line, so it would still give the same resulting complexity.

Not "somehow concluded". Each time you with equal probability link a tree to one node and increase depth by 1, link a node to an existing tree and it does not change your depth (it may increase it by one, but may not). The expected value is n / 2.

Maybe I misunderstand the case. Is it not a line?

The article on cp-algorithms is wrong, as I showed in my test case. If you read the original article at http://www.cis.upenn.edu/~sanjeev/papers/soda14_disjoint_set_union.pdf you will find that they call this thing early linking: they just assign the initial ranks at random.

Sorry about all the downvotes. I'm guessing the reason for them is that this discussion about DSU doesn't really belong in this blog post. adamant wrote this blog post mostly to promote his own article about the convex hull trick, and to motivate new people to write articles. In fact, adamant has nothing to do with the DSU article.

(Also being unrated doesn't help much for the upvote/downvote situation.)

Discussion about mistakes in articles is usually done on GitHub, where we develop/manage the articles.

I've created an issue for your concerns: Issue 437. I will fix the article tomorrow after work. Btw, you can also fix the article yourself by creating a pull request on GitHub.

Yes, this is one of the most common mistakes with randomized linking. Here is a 3-year-old discussion about this that's worth reading. To summarize the discussion:

- There is no proof that shows $$$O(n\ \alpha (n))$$$ runtime for randomized linking by flipping a fair coin, i.e. by `if(rand()%2)`. In fact, a benchmark from the linked page suggests otherwise, namely that the runtime is close to $$$\Theta(n \log n)$$$.
- There is such a proof for a biased coin with weights equal to the sizes of the individual trees, which makes it closely related to union by size.

To answer your question on worst-case inputs for path compression: I haven't thought too much about this, but if we allow later queries to depend on the answers to earlier find queries (i.e. something like an interactive union-find problem), then the following might show an $$$\Omega\left(n \frac{\log n}{\log \log n}\right)$$$ bound for randomized linking by flipping a fair coin:

Let me know if you have questions regarding the proof or if you find a mistake in it. There are probably some typos in the LaTeX parts.