By Radewoosh, 2 years ago

Hello Codeforces!

As all of you know, there are many well-known algorithms named after the people who invented them — from the easiest ones, like Dijkstra's, to the harder ones, like the Berlekamp–Massey algorithm. These algorithms were innovative when they were invented, so of course it's good that they are named after their inventors.

But today I've seen a blog about the solution to the problem "compute the LCS of two strings in time $O((n + k) \cdot \log(k))$, where $n$ is the sum of the lengths of the strings, and $k$ is the number of pairs of matching positions in them". To be honest, it pissed me off a bit that even this algorithm is named after its creators. Is it OK to name the solution to every possible problem after its author? I won't judge. Anyway, I want my very own Radecki algorithm, so let me give it a try. If anyone has ever heard about it — it's cool. Let me know, and it'll just be another helpful blog on Codeforces.

Let's imagine the following problem: there is a directed graph on $n$ vertices with no edges, and a list of $m$ directed edges which we want to add to this graph one by one. After adding each edge, we want full information about the strongly connected components: the number of strongly connected components, the ability to tell whether two vertices are in the same component, or even a persistent version of this information. The critical condition: the list of edges is given in advance.

I've known how to solve this problem for a few years now, but people told me that it's rather boring and not worth writing a paper about. Anyway, let me give it a try.

Let's use the divide and conquer technique on the list of edges. We can also think about it as divide and conquer over the timeline. So, let's assume that we have some function $rec(a, b, edges)$, where $a$ is the start of the time interval, $b$ is the end (we can assume that both are inclusive; it doesn't matter, I'm still a bit pissed) and $edges$ is the list of edges together with the times at which they are added. We start the process with $rec(0, m-1, everything)$.

Each time, we look at the middle of the time interval, so let's compute $s=\lfloor \frac{a+b}{2} \rfloor$. Let's use the standard offline algorithm to calculate strongly connected components in the graph consisting only of those edges from $edges$ which were added at time $s$ or earlier. We want to split recursively somehow, but how do we do it while keeping a good complexity? It turns out that at all moments after $s$, the SCCs that we've just calculated stay strongly connected. So, we can contract whole components into single vertices and pass to the right only the edges connecting vertices from different strongly connected components. Also, the edges that now connect different strongly connected components were useless in the past, which means that we can pass the rest of the edges to the left. In this way, each edge occurs once at every level of the recursion, there are $O(\log(m))$ levels, and each level takes linear time overall, thus we get $O(m \cdot \log(m))$ complexity (if we implement it carefully enough to have no extra logs).
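To make the splitting concrete, here is a minimal Python sketch of the recursion. The function names, the plain Kosaraju SCC routine, and the choice of original vertex labels as component representatives are my own; this version rebuilds hash maps in every call, so it does not achieve the careful linear-work-per-level implementation, but the left/right classification is the one described above.

```python
from collections import defaultdict

def scc_components(nodes, arcs):
    # Kosaraju's algorithm; returns a dict: vertex -> component representative.
    adj, radj = defaultdict(list), defaultdict(list)
    for u, v in arcs:
        adj[u].append(v)
        radj[v].append(u)
    order, seen = [], set()
    for start in nodes:                      # first pass: record finish order
        if start in seen:
            continue
        seen.add(start)
        stack = [(start, iter(adj[start]))]
        while stack:
            x, it = stack[-1]
            for y in it:
                if y not in seen:
                    seen.add(y)
                    stack.append((y, iter(adj[y])))
                    break
            else:
                order.append(x)
                stack.pop()
    comp = {}
    for u in reversed(order):                # second pass: reversed graph
        if u in comp:
            continue
        comp[u] = u
        stack = [u]
        while stack:
            x = stack.pop()
            for y in radj[x]:
                if y not in comp:
                    comp[y] = u
                    stack.append(y)
    return comp

def offline_incremental_scc(m, timed_edges):
    # timed_edges: list of (t, u, v) -- the directed edge u->v added at time t.
    # Returns triples (t, u, v): "u and v are in the same SCC from time t on".
    merges = []

    def rec(a, b, edges):
        if not edges:
            return
        s = (a + b) // 2
        nodes = {u for _, u, v in edges} | {v for _, u, v in edges}
        comp = scc_components(nodes, [(u, v) for t, u, v in edges if t <= s])
        intra = [(t, u, v) for t, u, v in edges if comp[u] == comp[v]]
        # Contract each SCC to its representative before going right.
        inter = [(t, comp[u], comp[v]) for t, u, v in edges if comp[u] != comp[v]]
        if a == b:
            merges.extend((a, u, v) for t, u, v in intra)
            return
        rec(a, s, intra)        # edge inside an SCC: its merge moment is <= s
        rec(s + 1, b, inter)    # edge between SCCs: it was useless before s

    rec(0, m - 1, timed_edges)
    return merges
```

For example, with the edges $0 \to 1, 1 \to 0, 1 \to 2, 2 \to 0$ added at times $0..3$, the recorded triples say that vertices $0$ and $1$ share an SCC from time $1$, and all three vertices from time $3$.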

What should we take out of the recursion? The simplest idea: at each call of the $rec$ function, push some information about each edge to some global data structure. My idea is: for each edge that connects vertices from the same strongly connected component, push the information "these two vertices are in the same strongly connected component at this time". This gives us something which we can interpret as a graph of undirected weighted edges. It's easy to see that we are only interested in its minimum spanning tree (the graph has $n$ vertices and $O(m \cdot \log m)$ edges with weights from $0$ to $m-1$, so we can calculate its MST in $O(m \cdot \log(m))$ time).
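The MST step is then one Kruskal pass over the recorded triples. A short sketch, with my own `DSU` class and function name; since the weights are times in $0..m-1$, a counting sort would keep the whole thing within $O(m \cdot \log(m))$, but plain sorting is used here for brevity:

```python
class DSU:
    # Disjoint-set union with path halving.
    def __init__(self, n):
        self.p = list(range(n))
    def find(self, x):
        while self.p[x] != x:
            self.p[x] = self.p[self.p[x]]
            x = self.p[x]
        return x
    def union(self, a, b):
        a, b = self.find(a), self.find(b)
        if a == b:
            return False
        self.p[a] = b
        return True

def mst_of_merges(n, merges):
    # Kruskal over the recorded (time, u, v) triples, in order of time:
    # keep an edge iff it joins two components not yet connected.
    tree = []
    dsu = DSU(n)
    for t, u, v in sorted(merges):
        if dsu.union(u, v):
            tree.append((t, u, v))
    return tree
```

On the triples from the earlier example this keeps exactly two edges, one per merge moment.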

What can we do now with this information? We can answer queries like "what's the first moment when vertices $a$ and $b$ start belonging to the same strongly connected component?". Yep, that's the maximum weight on the path between these two vertices in the tree.
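For a sketch, a single such query can be answered by a plain DFS over the tree in linear time. The helper below is hypothetical; with many queries you would precompute something like binary lifting over the tree instead.

```python
from collections import defaultdict

def first_same_scc_time(tree, u, v):
    # Maximum edge weight on the u-v path in the MST (which may be a forest).
    # Returns None if u and v never end up in the same SCC.
    adj = defaultdict(list)
    for t, a, b in tree:
        adj[a].append((b, t))
        adj[b].append((a, t))
    stack = [(u, -1, 0)]            # (vertex, parent, max weight so far)
    while stack:
        x, par, best = stack.pop()
        if x == v:
            return best
        for y, t in adj[x]:
            if y != par:
                stack.append((y, x, max(best, t)))
    return None
```

`None` simply means the two vertices lie in different trees of the forest, i.e. they never share a strongly connected component.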

Clear? Clear.

Cool? As f**k.

Easy to implement? So-so; easy to copy-paste and use. My implementation (without computing the MST, just producing the list of undirected edges) has 72 lines.

New? That's a tough question. I know that it was on the Prime New Year Contest 2018, which took place at the turn of 2018 and 2019, which suggests this problem was created in 2018. Even if the intended solution is the same as mine, here comes a twist: link. This is a problem authored by me in July 2017. I know, just a pdf is worth nothing, but it's possible to dig deeper, and believe me: the solution implemented there is the same as described here.

Last thoughts: the resulting tree is really similar to the Gomory–Hu tree (because of the minima/maxima on the paths).

Comments (38)
 » Five minutes (not enough time to read it) after posting, this blog has a score of -4. Is it a very well-known algorithm that I just happened never to hear about, or what? The blog is cool and educative anyway imo, so what's going on?
•  » » People were waiting for me to upvote first, here you go! (joking) But the blog deserves an upvote, since this is indeed educational for me.
•  » » I looked at your post for a whole 5 seconds and couldn't immediately understand the core idea of the algorithm. There are also no pictures and very few fancy $\LaTeX$ formulas. Bad post.
•  » » Don't bother: the people downvoting are either doing it for fun, or they have 100000 rating on some random website, read the blog, and disproved your algorithm in five minutes.
 » I won't be surprised if people are again fighting over whether to name the solution to a problem after its authors, instead of talking about other aspects of the blog.
 » Excuse the stupid question, but what does LSC stand for?
•  » » Longest common subsequence
•  » » » *Longest subsequence common
•  » » » » Ouch, got it
•  » » » » *longest subsequence comeonbruh
 » Waiting for someone to comment "This is a well known algorithm in China since 2011."
 » So, I've learned that the mentioned problem from the Prime New Year Contest was authored by Yuhao Du. amiya: was your solution the same, or did you really want people to implement the solution from the mentioned paper?
•  » » This was also an Open Cup problem in the Grand Prix of Zhejiang in September 2018, and based on this comment, the intended solution was the same as yours.
•  » » Same as yours. I came up with this problem independently. Actually, I didn't even read the paper I mentioned.
 » So, next step is an article on arxiv.org.
 » The recursive part of the algo strongly resembles the $O(n \log n)$ offline algorithm for the dynamic connectivity problem, by the way.
 » It seems as though something very similar was mentioned in this paper from 2016 here.
 » Somewhere, stitched implicitly into this blog, there is: "If I solved a problem, and my solution was completely new to me, and I had no prior knowledge of any idea I used in the solution, then this solution should be named after me."
•  » » Isn’t the LCS case just like it (+ way more boring)?
•  » » » » The difference is time. If you discovered something easy, it had better be when the topic was new, to get your name on it. It seems that the LCS algorithm was published in the 70s.
•  » » » » Mr. Kadane smiling in the corner.
•  » » » Recently, I dipped into computer graphics and was surprised to learn that prefix sums are known there under confusing names like "SAT" and "integral images", and are sometimes even assumed to have been discovered by Franklin Crow. This gave me two things to reflect on: the emergence of catchy names is out of anyone's control, and the presumed "inventor" almost surely did something different. In Crow's case, it was about the unexpected discovery of a fast and easy method for texture filtering, not about prefix sums. (Not sure about the LCS algorithm you mentioned, though.)
 » I'd expect something like: "This is my algorithm for offline incremental strongly connected components in $O(m \cdot \log(m))$. There are many others like it, but this one is mine..."
•  » » What do you mean? Do you know any other sources?
•  » » » just a reference to Full Metal Jacket :)
 » Here is an article from 2008 which works online in amortized $O(\sqrt{m})$ time per edge addition. The only reason I know about it is that the following problem was in Hello Barcelona 2018, Day 3 (the link-cut contest): statement. Our team (and probably many others; maybe it was even the official solution lol, I don't remember) implemented something like what you write here (if anyone is interested for some reason: code). If I'm not mistaken, ifsmirnov (last active 6 months ago) prepared this contest. It's probably worth asking him about the source of this problem and the history of the solution in general.
•  » » That’s the one from the Prime New Year Contest. I also know about the paper, but I don’t know this algorithm, so I don’t know whether it’s usable in contests (also, my complexity is better).
•  » » » I'm implementing Tarjan08 now, and my impression is that the methods used to eliminate the log factors are monstrous. I think the "soft-threshold search" will be slower in practice than a heap. Though I'm not done implementing it, and I only work on it in my free time, so I'm not in a very good position to talk about it.
•  » » » » implementing in your free time instead of on stream orz
 » Are you drunk? What's up with all that "I'm so cool, look at me" bullshit?
•  » » YES SLAY UM_NIK — CANCEL RADEWOOSH