### mango_lassi's blog

By mango_lassi, 5 months ago

## Boring backstory of this blog

In 300IQ contest 3, there was a problem where you had to count the number of permutations with two disjoint longest increasing subsequences. We VCd this contest while practising for ICPC, and didn't solve this problem during the contest, so I was very interested to see how it could be solved. Turns out the solution uses something called Young diagrams, and unless you already know what they are, the editorial is impossible to understand.

I asked if anyone knew about Young diagrams on the competitive programming Discord, and got linked a paper from the Chinese IOI selection camp, written by yfzcsc. If you can read Chinese, you should just read the paper; it covers a lot more than I do here. With the help of my teammate YaoBIG, I believe I understand enough to write an English-language blog on this topic for the rest of us :)

## Definitions

We call a sequence $\lambda$ of positive integers a partition of $n$ if $\lambda_i \geq \lambda_{i + 1}$ for all $i < |\lambda|$ and $\sum_{i = 1}^{|\lambda|} \lambda_i = n$. We denote by $P_n$ the set of partitions of $n$.

For a partition $\lambda$, we define the Young diagram $Y(\lambda)$ as the set of pairs $(i, j)$ of positive integers such that $i \leq |\lambda|$ and $j \leq \lambda_i$. We call the pairs $s = (i, j) \in Y(\lambda)$ the squares of the diagram.

A standard Young tableau $P$ of size $n$ is a pair $(\lambda, p)$ of a partition $\lambda$ of $n$ and a permutation $p$ of $\{1, \dots, n\}$ indexed by $Y(\lambda)$, such that $p_{i, j} < \min(p_{i + 1, j}, p_{i, j + 1})$ whenever those cells exist.

We use the natural definitions of transposes: we denote by $\lambda^T$ the transpose partition, that is, $\lambda^T_j = |\{i : \lambda_i \geq j\}|$, let $Y^T(\lambda) = Y(\lambda^T)$, and define the transpose $P^T = (\lambda^T, p^T)$ of a standard Young tableau by $p_{i, j} = p^T_{j, i}$.
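As a quick example of the definition: for $\lambda = (3, 2)$ we get $\lambda^T = (2, 2, 1)$. A one-line Python sketch of the transpose (naming is mine):

```python
def transpose(lam):
    """Compute the transpose partition: lam^T[j] = |{i : lam[i] >= j}|."""
    return [sum(1 for x in lam if x >= j) for j in range(1, lam[0] + 1)]
```

Note that transposing twice gives back the original partition.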

## What is a Young tableau and why is it interesting

Why is it interesting to define Young tableaus? It turns out that there is a one-to-one mapping $f$ from permutations $\pi$ of length $n$ to pairs $(P, Q) = ((\lambda, p), (\lambda, q))$ of standard Young tableaus of size $n$ that share the same partition. The mapping has the following properties:

1. $\sum_{i \leq m} \lambda_i$ is the maximum total size of $m$ disjoint increasing subsequences of the permutation
2. $\sum_{j \leq m} \lambda_j^T$ is the maximum total size of $m$ disjoint decreasing subsequences of the permutation
3. If $\pi^{-1}$ is the inverse permutation, then $f(\pi^{-1}) = (Q, P)$
4. If $rev(\pi)$ is the reversed permutation, then $f(rev(\pi)) = (P^T, ??)$ (actually, the second component is the normalised result of applying the Schutzenberger algorithm to $Q$, but we don't care about that)
5. The mapping can be computed in $\mathcal{O}(n \sqrt{n} \log n)$ time. The inverse can be computed in $\mathcal{O}(n^2)$ time

If $c(s)$ is the number of valid $p$ for the partition $s$, the existence of the mapping shows that $n! = \sum_{s \in P_n} c(s)^2$. By property 3, the number of permutations $\pi$ such that $\pi = \pi^{-1}$ is $\sum_{s \in P_n} c(s)$. The value we want to compute in the problem presented at the start of the blog is $\sum_{s \in P_n : s_1 = s_2} c(s)^2$.

The number of partitions of $75$ is small, so we can loop over all of them. It remains to compute $c(s)$ fast. For this, we can use the hook-length formula, which we can compute in $\mathcal{O}(n)$ time.

## The mapping: Robinson–Schensted–Knuth correspondence

The name is fancy, but the mapping isn't hard (though proofs of its properties are).

Let's first describe how the first tableau $P$ is computed. We loop over the rows from first to last, and in each row, find the first element that is larger than the value we are inserting. If we found such a value, we swap it with the value we are inserting and proceed to the next row. Otherwise, we add the value we are inserting to the end of the row and are done.

In every step, the length of exactly one row of the tableau $P$ increases by $1$. In step $i$, we insert $i$ into tableau $Q$ at the end of the row whose size increased in the first tableau. This way, both tableaus always have the same shape. By a simple induction, the values in the cells of every row and column of both tableaus are strictly increasing. Thus, the mapping is an injection.

The naive way to perform the insertion takes $\mathcal{O}(n)$ time, thus we can compute the mapping in $\mathcal{O}(n^2)$ time.
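As an illustration, here is a Python sketch of the quadratic insertion (all naming is mine; the actual contest-ready code is in the collapsed section below):

```python
import bisect

def rsk(perm):
    """Map a permutation of 1..n to the tableau pair (P, Q) by row insertion."""
    P, Q = [], []
    for step, x in enumerate(perm, start=1):
        placed = False
        for row_p, row_q in zip(P, Q):
            if row_p[-1] < x:
                row_p.append(x)       # x fits at the end of this row
                row_q.append(step)    # record in Q which row grew
                placed = True
                break
            j = bisect.bisect_left(row_p, x)  # first element larger than x
            row_p[j], x = x, row_p[j]         # bump it down to the next row
        if not placed:
            P.append([x])             # x opens a new row
            Q.append([step])
    return P, Q
```

For example, $[2, 1, 4, 3]$ maps to $P = Q = [[1, 3], [2, 4]]$, and $\lambda_1 = 2$ is indeed the length of its longest increasing subsequence.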

To compute the inverse, we undo the insertions one by one, starting from the last. Let $i$ be the row such that $q_{i, s_i} = n$, where $s_i$ denotes the current length of row $i$. This is the row whose size increased by $1$ in the insertion of the last value. Let $v \leftarrow p_{i, s_i}$, then delete cell $(i, s_i)$ from both tableaus.

Now, if $i = 1$, we have $\pi_n = v$ and can recurse to find the rest of $\pi$. Otherwise, the reason $v$ entered row $i$ is that some smaller value replaced it on row $i - 1$; that value is the largest value on row $i - 1$ smaller than $v$. Swap that value with $v$ and subtract $1$ from $i$.
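A Python sketch of the inverse (again, naming is mine):

```python
import bisect

def rsk_inverse(P, Q):
    """Recover the permutation from a tableau pair (P, Q) of the same shape."""
    P = [row[:] for row in P]
    Q = [row[:] for row in Q]
    n = sum(len(row) for row in P)
    out = []
    for step in range(n, 0, -1):
        # the row whose size grew when value number `step` was inserted
        i = next(r for r in range(len(Q)) if Q[r] and Q[r][-1] == step)
        Q[i].pop()
        v = P[i].pop()
        if not P[i]:                  # only the last row can become empty
            P.pop(i)
            Q.pop(i)
        for r in range(i - 1, -1, -1):
            j = bisect.bisect_left(P[r], v) - 1   # largest value smaller than v
            P[r][j], v = v, P[r][j]               # undo the bump
        out.append(v)
    return out[::-1]
```

For instance, applying it to the pair $P = Q = [[1, 3], [2, 4]]$ (the image of $[2, 1, 4, 3]$ under the mapping) returns $[2, 1, 4, 3]$.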

O(n^2) code for the mapping and its inverse

For the faster algorithm for computing the mapping, note that computing just the first $\lceil \sqrt{n} \rceil$ rows of $P$ can be done in $\mathcal{O}(n \sqrt{n} \log n)$ time. Using property $4$, we can compute the first $\lceil \sqrt{n} \rceil$ columns of $P$ in the same time. Since there cannot be more than $\lceil \sqrt{n} \rceil$ rows with at least $\lceil \sqrt{n} \rceil$ cells, this way we find the entire tableau in $\mathcal{O}(n \sqrt{n} \log n)$ time. To find $Q$, we use property $3$ and then repeat the above.

I am not aware of any way to calculate the inverse mapping in $\mathcal{O}(n \sqrt{n} \log n)$ time.

O(n sqrt(n) log(n)) code for the mapping

Unfortunately, the proofs for properties $1$ to $4$ are long, involved and very hard to understand (at least for my counting-challenged brain), so I won't put them here. The proof for properties $1$ and $2$ for $m > 1$ is due to Greene. Properties $3$ and $4$ appear as theorems 3.2.1 and 4.1.1 here.

## Hook-Length Formula

For $\lambda \in P_n$ and $(i, j) \in Y(\lambda)$, we define $h_{\lambda}(i, j)$ as the number of squares in the diagram directly below or directly to the right of square $(i, j)$, plus one for the square itself. Formally written, $h_{\lambda}(i, j) = (\lambda^T_j - i) + (\lambda_i - j) + 1$. The hook-length theorem states that $t(\lambda) = c(\lambda)$ for

\begin{equation} t(\lambda) = \frac{n!}{\prod_{(i, j) \in Y(\lambda)} h_\lambda(i, j)} \end{equation}

Computing $t(\lambda)$ in $\mathcal{O}(n)$ time is indeed easy, and unlike many other results stated in this blog, the hook-length formula's proof is not hard and is very beautiful. The proof below is from here.

Code for the hook-length formula
Proof of the hook-length formula

The above proof for the hook-length formula also gives an unbiased sampler for $p$ given $\lambda$, though I doubt that will ever be useful in competitive programming.
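To make the formula concrete, here is a Python sketch (naming is mine) that computes $t(\lambda)$ and checks the identity $n! = \sum_{s \in P_n} c(s)^2$ from earlier for a small $n$:

```python
from math import factorial

def hook_count(lam):
    """t(lambda) = n! / (product of hook lengths); by the theorem this is c(lambda)."""
    n = sum(lam)
    lam_t = [sum(1 for x in lam if x >= j) for j in range(1, lam[0] + 1)]
    prod = 1
    for i, row in enumerate(lam, 1):
        for j in range(1, row + 1):
            prod *= (lam_t[j - 1] - i) + (lam[i - 1] - j) + 1
    return factorial(n) // prod

def partitions(n, max_part=None):
    """Enumerate the partitions of n as non-increasing lists."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield []
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield [first] + rest
```

For example, summing $t(\lambda)^2$ over the five partitions of $4$ gives $24 = 4!$, and summing $t(\lambda)$ gives $10$, the number of involutions of length $4$.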

## Problems

300IQ contest 3 problem D

GP of southeastern europe 2021 problem D

If you know any more problems where this technique can be applied, please share a link with me. In particular, there probably exists a problem where you have to find for every $m$ the maximum total size of $m$ disjoint increasing subsequences.
By mango_lassi, history, 11 months ago

A participant who scored strictly higher than 50% of the contestants on at least one day, but does not receive a medal, will be awarded an honourable mention.

The rules change will be effective starting 2022.

So, if you would be awarded a bronze medal based on the results of at least one day, but do not receive a medal, you will receive an HM.
By mango_lassi, 11 months ago

The first ever European Girls' Olympiad in Informatics (EGOI) is now over. You can view the scoreboard of the official contest on the EGOI 2021 webpage. Congratulations to the medalists, and in particular to Alisa Gladchenko AliceG and Ekaterina Shilyaeva AlFlen for the top scores!

We are also organising a virtual contest with the problems from the olympiad. The two contest days will be held on Codeforces. There is a 28-hour period during which you can take each 5-hour contest. The first contest will open in a few hours and last until midnight on Saturday, and the second will last from the start of Sunday to early Monday morning. All times are UTC+2!

Day 1: Friday 18.6. 20:00 — Saturday 19.6. 23:59 (EGOI 2021 Day 1)

Day 2: Sunday 20.6. 00:00 — Monday 21.6. 04:00 (EGOI 2021 Day 2)

Feel free to discuss the tasks here after the virtual contest frame ends.

A big shout out to the organisers and volunteers.

EDIT: Congratulations to ko_osaga for being the only one to fully solve Lanterns, and to Radewoosh for fully solving Double Move. I am sorry for messing up the settings of the virtual contest (which made it display only the first 5 hours for day 1, and no results at all for day 2). Unfortunately I do not know how to fix that :(

By mango_lassi, history, 12 months ago

There was a blog on this topic that I was about to comment on, but it seems that it got deleted :(

It was asking for a solution in $\mathcal{O}(n)$ preprocessing and $\mathcal{O}(1)$ query time. There are no updates. We make the standard assumptions that input integers have $\mathcal{O}(\log n)$ bits and operations on $\mathcal{O}(\log n)$-bit integers take $\mathcal{O}(1)$ time.

The obvious solutions to range AND and range OR are to

• Count the number of times every bit appears in every prefix ($\mathcal{O}(n \log n)$ preprocessing and $\mathcal{O}(\log n)$ query time)
• Use an interval AND / OR segment tree ($\mathcal{O}(n)$ preprocessing and $\mathcal{O}(\log n)$ query time)
• Use a sparse table ($\mathcal{O}(n \log n)$ preprocessing and $\mathcal{O}(1)$ query time).
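As a side note, the sparse-table bullet relies on AND being idempotent, so any query can be covered by two overlapping power-of-two windows. A minimal Python sketch (naming is mine):

```python
def build_sparse_and(a):
    """Sparse table for range AND: level j stores ANDs of windows of length 2^j."""
    table = [list(a)]
    j = 1
    while (1 << j) <= len(a):
        prev = table[-1]
        half = 1 << (j - 1)
        table.append([prev[i] & prev[i + half]
                      for i in range(len(a) - (1 << j) + 1)])
        j += 1
    return table

def query_and(table, l, r):
    """AND of a[l..r] inclusive; overlapping windows are fine since AND is idempotent."""
    j = (r - l + 1).bit_length() - 1
    return table[j][l] & table[j][r - (1 << j) + 1]
```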

Four Russians is a standard trick to achieve $\mathcal{O}(n)$ preprocessing and $\mathcal{O}(1)$ query time. However, I don't see how it can be used here to achieve that: while we can solve all queries of length $\Omega(\log n)$ in $\mathcal{O}(1)$ time with $\mathcal{O}(n)$ preprocessing, it is not clear how to solve queries that lie entirely inside a single block. This is because each block contains $\mathcal{O}(\log^2 n)$ bits that matter, not $\mathcal{O}(\log n)$ as is the case for the Cartesian tree of the block when computing interval minimum.

So the best complexity I could come up with is based on iterating the Four Russians trick, which achieves either

• $\mathcal{O}(n \log^* n)$ precalc and $\mathcal{O}(1)$ time per query (where $\log^* n$ is the iterated logarithm)
• $\mathcal{O}(n)$ precalc and $\mathcal{O}(\log \log \dots \log n)$ time per query (taking logarithm any fixed number of times)
Solution with iterated Four Russians

This is pretty good (theoretically, of course this is pointless to do in practice), but is there some way to get $\mathcal{O}(n)$ preprocessing and $\mathcal{O}(1)$ query time? Is there even a solution that uses $\mathcal{O}(n)$ space and has $\mathcal{O}(1)$ queries?

By mango_lassi, history, 12 months ago

I saw a blog (thanks to Bugman for finding it!) earlier today asking how to solve the following problem:

Given a linear function $ax + b$, find the minimum $x \geq 0$ such that $ax + b \in [L, R]\ (\text{mod } M)$.

To solve that problem, we can make the following reduction: if $\gcd(a, M) > 1$, we divide everything by the GCD. The $x$ for which $ax + b = L\ (\text{mod } M)$ is $(L - b) a^{-1}\ (\text{mod } M)$. Denote this value by $b_0$. Then, the minimum $x$ to get to $L + y$ is $a^{-1} y + b_0\ (\text{mod } M)$. This gives us a reduced problem:

Find the minimum value of $ay + b \text{ mod } M$ over $y \leq k$.
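To pin down the reduced problem, here is a naive $\mathcal{O}(k)$ reference implementation (Python, naming is mine); any faster algorithm should agree with it:

```python
def min_linear_mod_naive(a, b, M, k):
    """O(k) reference: minimum of (a*y + b) mod M over 0 <= y <= k."""
    return min((a * y + b) % M for y in range(k + 1))
```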

This seemed pretty hard, but surprisingly I figured out how to do it in $\mathcal{O}(\log M)$ time! The algorithm is as follows:

In every step, we reduce the modulo from $M$ to $\min(a, M - a) \leq M / 2$. This guarantees we do at most $\mathcal{O}(\log M)$ steps.

To reduce it to $a$, we consider the first value $s$ in $[0, a)$ attained by $ay + b$. If $b < a$, it is $b$. Otherwise, it is $(b - M) \bmod a$. We check in $\mathcal{O}(1)$ whether we reach $s$ for some $y \leq k$. If we do, set $M$ to $a$, $a$ to $(-M) \bmod a$ and $b$ to $s$. Otherwise, output $b$: the values in $[0, a)$ are exactly the values attained immediately after a wrap-around, so if we never attain one, the sequence never decreases and the first value $b$ is the minimum.

To reduce it to $M - a$, we consider the first value $s = b \bmod (M - a)$ in $[0, M - a)$ attained by $ay + b$. If we reach $s$ for some $y \leq k$, set $M$ to $M - a$, $a$ to $a \bmod (M - a)$ and $b$ to $s$. Otherwise, we can calculate the number of steps needed to go from $v$ to $v - (M - a)$ for $v \geq M - a$, and from this the smallest value we can reach with $y \leq k$.

code

What I wanted to ask is, is this a known problem, and is there a simpler or perhaps even faster solution to it? The problem seems fairly simple, so I doubt nobody has thought about it before.

By mango_lassi, history, 2 years ago

1. Is there any way to view top rated users (with inactive users included)? (The "Rating (All)" tab would logically be this, but the list is just the same as "Rating". Further, it seems you need to login to even see the "Rating (All)" tab. What even is it?)
2. Is there a list of users sorted by their max. rating?
3. Is there some convenient way to show only European contestants or do any filtering other than by country, city or organisation?

I've tried to google for these but haven't found anything. If any of these lists are hidden somewhere on the site, or any external sites have these lists, please inform me.

By mango_lassi, history, 3 years ago

The 2019 NWERC (Northwestern Europe Regional Contest) is tomorrow. You should see the scoreboard here once the contest starts.

There will be an online mirror here.

Let's discuss the problems after the contest!

By mango_lassi, history, 3 years ago

This problem

I have a solution that I think is correct but can't get it to pass, and apparently even my n^2 code (the standard game-theory solution iterating through all states), which should obviously work, gets WA. They both give the same answers for random small inputs.

I can't find editorials or test data for the contest. Seeing that so many teams have +1 or more on this problem: is there some tricky corner case I might have missed?

My code if it is of any interest:

code
n^2 code

Submissions: 54559534 54559186
By mango_lassi, history, 3 years ago

I wrote this pretty short implementation of Heavy-Light Decomposition today:

code

It uses no recursion, which is nice, and takes data in the index-of-parent format, which can be a small plus or a large minus (I needed HLD over Aho-Corasick automata, so that input format is useful there). The intervals returned by get are in decreasing order, not in the order they are in the path, so something like adding an arithmetic sequence to a path is not possible.

The par-array just stores the parent of every node, ind gives the index of the node in the array HLD maps the nodes to, and pp gives the highest ancestor of the node that is mapped to the same continuous segment.

get returns a vector of at most $\mathcal{O}(\log n)$ intervals, such that the hld-index of every node on the path between $a$ and $b$ is in exactly one of the returned intervals. Specifically, it returns all nonempty intersections of heavy paths with the path between $a$ and $b$, in decreasing order (the first interval has the highest startpoint and endpoint). Clearly there are at most $\mathcal{O}(\log n)$ such intervals, since every path contains at most $\mathcal{O}(\log n)$ light edges, and thus at most $\mathcal{O}(\log n)$ heavy intervals.

get works because the path from $b$ to $pp[b]$ corresponds to the contiguous interval $[ind[pp[b]], ind[b]]$ in the array HLD maps the nodes to. Therefore, if $ind[pp[b]] \leq ind[a] \leq ind[b]$, $a$ is on the path between $pp[b]$ and $b$, and if $ind[a] < ind[pp[b]]$, then no node we go over is an ancestor of $a$, since the HLD-index of a node's ancestor is always less than the ind of the node itself.
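To make the description concrete, here is a Python sketch of the same construction (the original is C++ and collapsed above; all naming here is mine, and I assume the index-of-parent format satisfies par[0] == 0 and par[i] < i):

```python
def build_hld(par):
    """Heavy-Light Decomposition from a parent array; returns (ind, pp):
    ind maps nodes to array positions, pp gives the top of each heavy path."""
    n = len(par)
    sz = [1] * n
    for v in range(n - 1, 0, -1):      # par[i] < i, so children are processed first
        sz[par[v]] += sz[v]
    heavy = [-1] * n                   # child with the largest subtree
    for v in range(1, n):
        p = par[v]
        if heavy[p] == -1 or sz[v] > sz[heavy[p]]:
            heavy[p] = v
    children = [[] for _ in range(n)]
    for v in range(1, n):
        children[par[v]].append(v)
    ind, pp = [0] * n, [0] * n
    timer = 0
    stack = [0]
    while stack:                       # DFS without recursion, heavy child first
        v = stack.pop()
        ind[v] = timer
        timer += 1
        pp[v] = pp[par[v]] if v != 0 and v == heavy[par[v]] else v
        for c in sorted(children[v], key=lambda c: c == heavy[v]):
            stack.append(c)            # heavy child pushed last => popped first
    return ind, pp

def path_intervals(ind, pp, par, a, b):
    """Intervals of hld-indices covering the a-b path, in decreasing order."""
    res = []
    while pp[a] != pp[b]:
        if ind[pp[a]] < ind[pp[b]]:
            a, b = b, a
        res.append((ind[pp[a]], ind[a]))
        a = par[pp[a]]
    res.append((min(ind[a], ind[b]), max(ind[a], ind[b])))
    return res
```

As in the description, the intervals come out in decreasing order, and every node of the path is covered by exactly one interval.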

Since get also finds the LCA, this also gives a nice LCA implementation:

code

Here's Caves and Tunnels solved with this HLD-implementation. Changing the tree format is pretty nasty, so the code isn't that clean :/

code