-is-this-fft-'s blog

By -is-this-fft-, history, 4 months ago, In English

Today, I want to talk a bit about the relationship between two well-known methods for calculating the $$$n$$$-th term of a linear recurrence: generating functions and matrix exponentiation. More specifically, I want to explore how these two things are related to one another or how they can be thought of as variants of one another.

This blog shouldn't be treated as a tutorial. If you know these algorithms, I doubt that I can teach you to solve some new problem in this blog. Rather, I'm kind of thinking out loud. I think it is a good idea to try to reflect and find connections between different things we have learned. This kind of thinking tends to strengthen the understanding of both methods and makes it easier to improve upon them to solve problems.

And I'm sure that what I'm about to present is already quite well-known to some people ;). But when I realized this it was cool to me, so I want to share it.

Problem. You have two sequences $$$b_1, b_2, \ldots, b_m$$$ (where $$$b_m \ne 0$$$) and $$$c_0, c_1, \ldots, c_{m - 1}$$$. The infinite sequence $$$a_0, a_1, a_2, \ldots$$$ is defined as

$$$ a_i = \begin{cases} c_i & \text{if } i < m \\ \sum_{k = 1}^m a_{i - k} b_k & \text{otherwise.} \end{cases} $$$

Calculate $$$a_n$$$.

As a simple example, consider the following special case: the Fibonacci sequence is defined by $$$F_0 = 0$$$, $$$F_1 = 1$$$ and $$$F_i = F_{i - 1} + F_{i - 2}$$$ for any $$$i \ge 2$$$. Calculate $$$F_n$$$.

Matrix exponentiation method

Fibonacci case. Notice that for any $$$i \ge 2$$$, we have

$$$ \begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} F_{i - 1} \\ F_{i - 2} \end{bmatrix} = \begin{bmatrix} F_{i - 1} + F_{i - 2} \\ F_{i - 1} \end{bmatrix} = \begin{bmatrix} F_i \\ F_{i - 1} \end{bmatrix}. $$$

Thus, we can calculate the $$$n$$$-th term by calculating

$$$ \begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix}^n \begin{bmatrix} 1 \\ 0 \end{bmatrix} $$$

using binary exponentiation and reading the answer from the bottom entry of the resulting vector.

General case. Define the matrix $$$B$$$ as

$$$ B = \begin{bmatrix} b_1 & b_2 & b_3 & \cdots & b_{m - 1} & b_m \\ 1 & 0 & 0 & \cdots & 0 & 0 \\ 0 & 1 & 0 & \cdots & 0 & 0 \\ 0 & 0 & 1 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & 1 & 0 \end{bmatrix} $$$

Notice that for any $$$i \ge m$$$,

$$$ B \begin{bmatrix} a_{i - 1} \\ a_{i - 2} \\ \vdots \\ a_{i - m + 1} \\ a_{i - m} \end{bmatrix} = \begin{bmatrix} \sum_{k = 1}^m b_k a_{i - k} \\ a_{i - 1} \\ \vdots \\ a_{i - m + 2} \\ a_{i - m + 1} \end{bmatrix} = \begin{bmatrix} a_i \\ a_{i - 1} \\ \vdots \\ a_{i - m + 2} \\ a_{i - m + 1} \end{bmatrix}. $$$

Thus we can calculate

$$$ B^n \begin{bmatrix} c_{m - 1} \\ c_{m - 2} \\ \vdots \\ c_1 \\ c_0 \end{bmatrix} $$$

using binary exponentiation and read the answer from the bottom entry in the resulting vector.
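To make the above concrete, here is a minimal sketch of the general method in code. It is illustrative rather than optimized: the arithmetic is done modulo an assumed prime, the helper name `nth_term` is made up, and the naive $$$O(m^3)$$$ matrix product gives $$$O(m^3 \log n)$$$ overall.

```cpp
#include <cassert>
#include <vector>
using namespace std;

typedef long long ll;
const ll MOD = 998244353; // assumed prime modulus; inputs in [0, MOD)

typedef vector<vector<ll>> Mat;

// Naive O(m^3) product of two m x m matrices modulo MOD.
Mat mul(const Mat &X, const Mat &Y) {
    int m = X.size();
    Mat Z(m, vector<ll>(m, 0));
    for (int i = 0; i < m; i++)
        for (int k = 0; k < m; k++)
            for (int j = 0; j < m; j++)
                Z[i][j] = (Z[i][j] + X[i][k] * Y[k][j]) % MOD;
    return Z;
}

// a_n for a_i = sum_{k=1}^m b_k a_{i-k}, with a_i = c_i for i < m.
// b[j] holds b_{j+1}, c[i] holds c_i.
ll nth_term(const vector<ll> &b, const vector<ll> &c, ll n) {
    int m = b.size();
    if (n < m) return c[n];
    // The companion matrix B from the text.
    Mat B(m, vector<ll>(m, 0));
    for (int j = 0; j < m; j++) B[0][j] = b[j];
    for (int i = 1; i < m; i++) B[i][i - 1] = 1;
    // Binary exponentiation: R = B^n.
    Mat R(m, vector<ll>(m, 0));
    for (int i = 0; i < m; i++) R[i][i] = 1;
    for (ll e = n; e > 0; e >>= 1) {
        if (e & 1) R = mul(R, B);
        B = mul(B, B);
    }
    // Apply B^n to (c_{m-1}, ..., c_0)^T and read the bottom entry.
    ll ans = 0;
    for (int j = 0; j < m; j++)
        ans = (ans + R[m - 1][j] * c[m - 1 - j]) % MOD;
    return ans;
}
```

For the Fibonacci example, `nth_term({1, 1}, {0, 1}, n)` computes $$$F_n$$$.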

Generating function method, case without multiple roots

Fibonacci case. First, recall that the generating function of the Fibonacci sequence, $$$\mathcal{F}(x) = \sum_{i = 0}^\infty F_i x^i$$$, can be expressed using the closed form formula $$$\mathcal{F}(x) = \frac{-x}{x^2 + x - 1}$$$.

Derivation

The polynomial $$$x^2 + x - 1$$$ can be factored as $$$x^2 + x - 1 = (x - \phi)(x - \psi)$$$; you can calculate $$$\phi$$$ and $$$\psi$$$ using the quadratic formula, they are $$$\phi = \frac{-1 + \sqrt{5}}{2}$$$ and $$$\psi = \frac{-1 - \sqrt{5}}{2}$$$.

Note about the square roots when working modulo $P$

We want to represent $$$\mathcal{F}(x)$$$ as $$$\frac{A}{\phi - x} + \frac{B}{\psi - x}$$$, where $$$A$$$ and $$$B$$$ are numbers. To find $$$A$$$ and $$$B$$$, we note that

$$$A (\psi - x) + B (\phi - x) = -x \qquad \text{i.e.} \qquad A \psi + B \phi - (A + B)x = -x$$$

must hold. Thus we must have $$$A+ B = 1$$$ and $$$A \psi + B \phi = 0$$$. This is a system of linear equations, the solutions are $$$A = \frac{\phi}{\phi - \psi}$$$ and $$$B = \frac{\psi}{\psi - \phi}$$$.

We know that $$$\frac{1}{a - x} = \frac{1}{a} + \frac{x}{a^2} + \frac{x^2}{a^3} + \cdots$$$ for any $$$a \ne 0$$$. Thus

$$$ \mathcal{F}(x) = A\sum_{i = 0}^\infty \frac{x^i}{\phi^{i + 1}} + B\sum_{i = 0}^\infty \frac{x^i}{\psi^{i + 1}}. $$$

So, to get the $$$n$$$-th term of the Fibonacci sequence, all you have to do is calculate $$$\frac{A}{\phi^{n + 1}} + \frac{B}{\psi^{n + 1}}$$$.
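As a quick sanity check of this formula, here is a floating-point evaluation (purely illustrative; a contest solution would instead work in a field where $$$\sqrt{5}$$$ exists, e.g. modulo a suitable prime):

```cpp
#include <cassert>
#include <cmath>

// F_n = A / phi^{n+1} + B / psi^{n+1}, where phi and psi are the roots
// of x^2 + x - 1 and A, B come from the partial fraction decomposition.
double fib_closed(int n) {
    double phi = (-1.0 + sqrt(5.0)) / 2.0;
    double psi = (-1.0 - sqrt(5.0)) / 2.0;
    double A = phi / (phi - psi);
    double B = psi / (psi - phi);
    return A / pow(phi, n + 1) + B / pow(psi, n + 1);
}
```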

General case. Let's turn our sequences into generating functions: define $$$\mathcal{A}(x) = \sum_{i = 0}^\infty a_i x^i$$$, $$$\mathcal{B}(x) = \sum_{i = 1}^m b_i x^i$$$ and $$$\mathcal{C}(x) = \sum_{i = 0}^{m - 1} c_i x^i$$$. The problem statement basically says that $$$\mathcal{A}(x) = \mathcal{A}(x) \mathcal{B}(x) + \mathcal{C}(x)$$$. (Strictly speaking, for $$$i < m$$$ the coefficient of $$$x^i$$$ in $$$\mathcal{A}(x) \mathcal{B}(x)$$$ is $$$\sum_{k = 1}^{i} b_k a_{i - k}$$$, which need not vanish; to make the identity exact, replace $$$c_i$$$ by $$$a_i - \sum_{k = 1}^{i} b_k a_{i - k}$$$. Either way, $$$\mathcal{C}(x)$$$ is a polynomial of degree less than $$$m$$$, which is all we use below.) Rearranging this, we get

$$$ \mathcal{A}(x) = \frac{-\mathcal{C}(x)}{\mathcal{B}(x) - 1}. $$$

We must devise a fast way to get the $$$n$$$-th coefficient of this infinite series. I'll also introduce the notation $$$\mathcal{B}^-(x) = \mathcal{B}(x) - 1$$$.

We now proceed as before. The polynomial $$$\mathcal{B}^-(x)$$$ can be factored as

$$$ \mathcal{B}^-(x) = b_m (x - \lambda_1)(x - \lambda_2)\cdots(x - \lambda_m). $$$

Again, I assume we are working in a field where the $$$\lambda_i$$$ exist and if not, we can always move to a field extension. However, in this blog, we assume that $$$\lambda_i$$$ are all distinct.

As before, we want to find the partial fraction decomposition of $$$\frac{\mathcal{C}(x)}{\mathcal{B}^-(x)}$$$: to find $$$k_1, k_2, \ldots, k_m$$$ such that

$$$ \frac{\mathcal{C}(x)}{b_m (x - \lambda_1)(x - \lambda_2)\cdots(x - \lambda_m)} = \frac{k_1}{x - \lambda_1} + \frac{k_2}{x - \lambda_2} + \cdots + \frac{k_m}{x - \lambda_m}. $$$

To do that, multiply out the polynomials $$$b_m k_1 (x - \lambda_2) \cdots (x - \lambda_m)$$$, $$$b_m k_2 (x - \lambda_1)(x - \lambda_3) \cdots (x - \lambda_m)$$$ and so on, and add them all together. You need to find values of $$$k_i$$$ such that the resulting polynomial equals $$$\mathcal{C}(x)$$$. For each $$$i = 0, 1, \ldots, m - 1$$$, comparing the coefficients of $$$x^i$$$ gives a linear equation of the form $$$\alpha_1 k_1 + \alpha_2 k_2 + \cdots + \alpha_m k_m = c_i$$$. In other words, you have a system of $$$m$$$ linear equations in $$$m$$$ variables, which can be solved for the $$$k_i$$$ by Gaussian elimination. It can be proven that this system always has a unique solution (this can be derived from Bézout's lemma for polynomials).

Now we have shown that

$$$ \mathcal{A}(x) = -\frac{k_1}{x - \lambda_1} - \cdots - \frac{k_m}{x - \lambda_m} = k_1 \sum_{i = 0}^\infty \frac{x^i}{\lambda_1^{i + 1}} + \cdots + k_m \sum_{i = 0}^\infty \frac{x^i}{\lambda_m^{i + 1}}. $$$

Thus, we can calculate $$$a_n$$$ with the formula

$$$ a_n = \sum_{i = 1}^m \frac{k_i}{\lambda_i^{n + 1}}. $$$

The polynomial $$$\mathcal{B}^-(x)$$$ has a free term of $$$-1$$$, so we know that $$$\lambda_i \ne 0$$$ for all $$$i$$$ — thus there will be no annoying issues with division by zero here.
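To see the whole pipeline on a hand-computable example, take the recurrence $$$a_i = 5a_{i - 1} - 6a_{i - 2}$$$ with $$$a_0 = 0$$$, $$$a_1 = 1$$$ (so $$$m = 2$$$, and the initial terms are chosen so that $$$\mathcal{A} = \mathcal{A}\mathcal{B} + \mathcal{C}$$$ holds with $$$\mathcal{C}(x) = x$$$). Here $$$\mathcal{B}^-(x) = -6x^2 + 5x - 1$$$ has roots $$$\lambda_1 = 1/2$$$ and $$$\lambda_2 = 1/3$$$. The sketch below (doubles only, made-up helper names) checks the closed form against direct iteration:

```cpp
#include <cassert>
#include <cmath>

// a_n = k_1 / lambda_1^{n+1} + k_2 / lambda_2^{n+1}.
// The 2x2 system from matching coefficients of
//   C(x) = b_2 * (k_1 (x - lambda_2) + k_2 (x - lambda_1))
// is solved by hand here: k_1 = -1/2, k_2 = 1/3.
double nth_by_partial_fractions(int n) {
    double l1 = 0.5, l2 = 1.0 / 3.0;
    double k1 = -0.5, k2 = 1.0 / 3.0;
    return k1 / pow(l1, n + 1) + k2 / pow(l2, n + 1);
}

// Direct iteration of a_i = 5 a_{i-1} - 6 a_{i-2}, for comparison.
double nth_direct(int n) {
    double a0 = 0, a1 = 1;
    if (n == 0) return a0;
    for (int i = 2; i <= n; i++) {
        double a2 = 5 * a1 - 6 * a0;
        a0 = a1;
        a1 = a2;
    }
    return a1;
}
```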

The cool bit I want to show you, i.e. their relationship

The reason for writing all of the above was basically to make sure we are all on the same page about what the methods exactly are, to agree on notation etc. The point of the blog is this next bit.

Generating functions are intriguing. It seems completely outlandish that you can do anything useful at all by converting such sequences into series — objects that, on the surface, seem analytic. And we really aren't even converting them to functions. We are converting them to formal power series, which don't make sense as functions since they may not converge. So, it's all just a really strange notation and we have been working with sequences all along. But the really weird part is that this strange notation actually gave us so much insight. When I see such solutions, I wonder whether it is really useful at all or whether we are just overcomplicating things with this notation. If I try to think about the same solution without working with generating functions, the part where we move to the partial fraction decomposition is roughly the point where I can't intuitively understand what's going on with the sequence anymore.

Well, but let's try to somehow describe what we did anyway. We calculated $$$k_1, \ldots, k_m$$$ by solving

$$$ T \begin{bmatrix} k_1 \\ k_2 \\ \vdots \\ k_m \end{bmatrix} = \begin{bmatrix} c_{m - 1} \\ c_{m - 2} \\ \vdots \\ c_0 \end{bmatrix} $$$

for some matrix $$$T$$$. Because the equation has exactly one solution, $$$T$$$ must be invertible and we can write

$$$ \begin{bmatrix} k_1 \\ k_2 \\ \vdots \\ k_m \end{bmatrix} = T^{-1} \begin{bmatrix} c_{m - 1} \\ c_{m - 2} \\ \vdots \\ c_0 \end{bmatrix}. $$$

After that, we multiplied each $$$k_i$$$ by $$$\lambda_i^{-n - 1}$$$. In matrix form, you could say we calculated $$$\Lambda^{n + 1} T^{-1} c$$$, where

$$$ \Lambda = \begin{bmatrix} \lambda_1^{-1} & 0 & \cdots & 0 \\ 0 & \lambda_2^{-1} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_m^{-1} \end{bmatrix}, c = \begin{bmatrix} c_{m - 1} \\ c_{m - 2} \\ \vdots \\ c_0 \end{bmatrix}. $$$

Do you see it? Here, let me clean that up a bit. If we write $$$U = \Lambda T^{-1}$$$, then the vector we effectively calculated was $$$\Lambda^n U c$$$. After that, we took the sum of all the elements of the vector.

Now, let's make the simple concept of summing up elements of a vector more complicated. Consider the matrix $$$U^{-1} = (\Lambda T^{-1})^{-1} = T \Lambda^{-1}$$$. From the construction of $$$T$$$ we can see that the last row of $$$T$$$ is of the form

$$$ \begin{bmatrix} \frac{(-1)^m b_m \prod_{i = 1}^m \lambda_i}{-\lambda_1} & \frac{(-1)^m b_m \prod_{i = 1}^m \lambda_i}{-\lambda_2} & \cdots & \frac{(-1)^m b_m \prod_{i = 1}^m \lambda_i}{-\lambda_m} \end{bmatrix}. $$$

This is because the $$$i$$$-th column of the matrix $$$T$$$ holds the coefficients of the polynomial $$$\frac{b_m (x - \lambda_1)(x - \lambda_2) \cdots (x - \lambda_m)}{x - \lambda_i}$$$, with the last row corresponding to the free term.

After multiplying $$$T$$$ with $$$\Lambda^{-1}$$$ from the right, the $$$i$$$-th column is multiplied by $$$\lambda_i$$$, giving the last row the form

$$$ \begin{bmatrix} (-1)^{m + 1} b_m \prod_{i = 1}^m \lambda_i & (-1)^{m + 1} b_m \prod_{i = 1}^m \lambda_i & \cdots & (-1)^{m + 1} b_m \prod_{i = 1}^m \lambda_i \end{bmatrix}. $$$

From Viète's formula and the fact that $$$\lambda_i$$$ are the roots of $$$\mathcal{B}^-(x)$$$, we know that $$$(-1)^m b_m \lambda_1 \lambda_2 \cdots \lambda_m$$$ is the free term of the polynomial $$$\mathcal{B}^-(x)$$$, that is, $$$-1$$$. Putting it together, we see that the last row of $$$U^{-1}$$$ is in fact

$$$ \begin{bmatrix} 1 & 1 & \cdots & 1 \end{bmatrix}. $$$

In other words, the generating function method calculates $$$U^{-1} \Lambda^n U c$$$ and returns the bottom entry of the resulting vector. The actual computation in the generating function method pretty closely mirrors this matrix expression, with the caveat that instead of calculating $$$Uc$$$ we solved a system of linear equations, and instead of multiplying with $$$U^{-1}$$$, we only multiplied with the last row because that was all we needed.

The expression $$$U^{-1} \Lambda^n U$$$ is strongly reminiscent of similar matrices and diagonalization. Recall that matrices $$$X$$$ and $$$Y$$$ are said to be similar if there exists some invertible matrix $$$Z$$$ such that $$$X = Z^{-1} Y Z$$$. Also, recall that if all eigenvalues of a matrix are distinct, then it is similar to a diagonal matrix whose diagonal elements are its eigenvalues. Returning to the matrix exponentiation method, what happens if we try to transform the matrix $$$B$$$ this way? To start off, what are its eigenvalues? Well, we usually calculate eigenvalues by factoring the characteristic polynomial. It turns out that the characteristic polynomial of $$$B$$$ is (up to sign) $$$\mathcal{B}^R(x) = x^m - b_1 x^{m - 1} - b_2 x^{m - 2} - \cdots - b_m$$$.

Derivation

Not exactly $$$\mathcal{B}^-(x)$$$, but very similar — up to sign, it is $$$\mathcal{B}^-(x)$$$ with the order of its coefficients reversed. What are its roots? You might have a hunch already; let's test it. Suppose that $$$\mathcal{B}^-(\lambda) = 0$$$. Then

$$$ \begin{align*} \mathcal{B}^R(\lambda^{-1}) &= \lambda^{-m} - b_1 \lambda^{-m + 1} - b_2 \lambda^{-m + 2} - \cdots - b_m \\ \lambda^{m} \mathcal{B}^R(\lambda^{-1}) &= 1 - b_1 \lambda - b_2 \lambda^2 - \cdots - b_m \lambda^m = -\mathcal{B}^-(\lambda) = 0. \end{align*} $$$

Because all roots of $$$\mathcal{B}^-(x)$$$ are nonzero, we can divide the last equation by $$$\lambda^m$$$ and conclude that the eigenvalues of $$$B$$$ are precisely the reciprocals of the roots of $$$\mathcal{B}^-(x)$$$. That also means that they are all different. Hence, there exists an invertible matrix $$$V$$$ such that $$$B = V^{-1} \Lambda V$$$. Here $$$\Lambda$$$ has the same meaning as above.

In the matrix exponentiation method we calculated $$$B^n c$$$ and read the bottom entry of the resulting vector. This is equivalent to calculating $$$(V^{-1} \Lambda V)^n c = V^{-1} \Lambda^n V c$$$ and reading the bottom entry of the resulting vector. Now the relationship between the generating function and matrix exponentiation methods is hopefully clear. The generating function method calculates the same thing, but it optimizes the exponentiation part by diagonalizing $$$B$$$ and applying a few "ad hoc optimizations". In particular, the partial fraction decomposition corresponds to diagonalization.
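To see this correspondence in numbers, here is a small check on the recurrence $$$a_i = 5a_{i - 1} - 6a_{i - 2}$$$ with $$$a_0 = 0$$$, $$$a_1 = 1$$$: the roots of $$$\mathcal{B}^-(x) = -6x^2 + 5x - 1$$$ are $$$1/2$$$ and $$$1/3$$$, so $$$\Lambda = \operatorname{diag}(2, 3)$$$, and $$$T$$$ is hard-coded from the construction above. The bottom entry of $$$U^{-1} \Lambda^n U c$$$ should match the bottom entry of $$$B^n c$$$ (doubles and made-up helper names, purely for illustration):

```cpp
#include <array>
#include <cassert>
#include <cmath>

// 2x2 real matrices, enough to check the claim numerically.
using Mat2 = std::array<std::array<double, 2>, 2>;
using Vec2 = std::array<double, 2>;

Mat2 mul(const Mat2 &X, const Mat2 &Y) {
    Mat2 Z{};
    for (int i = 0; i < 2; i++)
        for (int k = 0; k < 2; k++)
            for (int j = 0; j < 2; j++)
                Z[i][j] += X[i][k] * Y[k][j];
    return Z;
}

Vec2 apply(const Mat2 &X, const Vec2 &v) {
    return {X[0][0] * v[0] + X[0][1] * v[1],
            X[1][0] * v[0] + X[1][1] * v[1]};
}

Mat2 inv(const Mat2 &X) {
    double d = X[0][0] * X[1][1] - X[0][1] * X[1][0];
    return {{{X[1][1] / d, -X[0][1] / d}, {-X[1][0] / d, X[0][0] / d}}};
}

// Bottom entry of U^{-1} Λ^n U c, where U = Λ T^{-1}.
double via_diagonalization(int n) {
    // Λ holds the reciprocals of the roots 1/2 and 1/3.
    Mat2 L = {{{2, 0}, {0, 3}}};
    // Columns of T: coefficients of b_m (x - λ_2) and b_m (x - λ_1);
    // top row = x-coefficient, bottom row = free term.
    Mat2 T = {{{-6, -6}, {2, 3}}};
    Mat2 U = mul(L, inv(T));
    Mat2 Ln = {{{std::pow(2.0, n), 0}, {0, std::pow(3.0, n)}}};
    Vec2 c = {1, 0}; // (c_1, c_0)
    return apply(inv(U), apply(Ln, apply(U, c)))[1];
}

// The same thing via plain matrix powers of the companion matrix B.
double via_companion(int n) {
    Mat2 B = {{{5, -6}, {1, 0}}};
    Mat2 P = {{{1, 0}, {0, 1}}};
    for (int i = 0; i < n; i++) P = mul(P, B);
    return apply(P, Vec2{1, 0})[1];
}
```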


By -is-this-fft-, history, 4 months ago, In English

Preface

When the problem came out, someone asked for an explanation of how people come up with such solutions. I wanted to write this blog even before that, but I've just been busy. Anyway now, two months later, I finally had the time to put this blog together.

But the reason I really wanted to write this blog is that the solution to 1503C - Travelling Salesman Problem has a very non-obvious greedy solution. From reading beginners' comments, it seems that a lot of people have the implicit belief that greedy algorithms are invented by guessing: for example, guessing an order in which to process things, guessing how to process things and then maybe proving correctness if you have time (but really skipping it, because who cares, right?). I think some contest sites even perpetuate this idea by presenting a greedy algorithm in the editorial without any kind of proof or explanation of where it comes from. Even here, you may find tutorials that say "here's the algorithm" and after that "here's the proof". While it is convenient (and completely fine!) to present the solution in this form (algorithm and then proof), it's not necessarily how the solution was invented.

While you can solve problems this way (guessing and then proving (or not proving)) and I have solved problems this way, it is quite unusual for me. If someone were to think (perhaps implicitly) that greedy algorithms are invented like that, they'd be shocked when they read the solution to 1503C - Travelling Salesman Problem: how could anyone guess that, how could anyone come up with ordering the cities like this and then doing things like that?

And the answer is that you really can't. It would be very hard to guess this solution unless you already have some insights into the problem. That is roughly how I feel I solve problems: try to really understand the problem and gather insights about it. At some point, you have found enough observations that you can piece together a solution. The solution isn't proved afterwards — the proof develops alongside the solution. It isn't always like this, but very often it is. (Of course, there is some guesswork involved, but it is more "I guess I should think about this thing", not "I guess the entire algorithm".)

In the following, I'll try to show you what I mean.

The problem

Given $$$n$$$ cities and two arrays $$$a$$$ and $$$c$$$ of length $$$n$$$, you want to visit every city once, starting from city $$$1$$$ and returning to city $$$1$$$. The cost of going from $$$i$$$ to $$$j$$$ is

$$$\max(c_i, a_j - a_i),$$$

and you must minimize the total cost. We have $$$2 \le n \le 10^5$$$ and $$$0 \le a_i, c_i \le 10^9$$$.

The path to the solution

I tried to keep the following as true to my thought process as possible, so this is my path to my solution and not necessarily the shortest path to the cleanest solution. And maybe even not the clearest explanation because I really did try to mirror what was going on in my head.

Hmm.

We have to pay $$$\max(c_i, a_j - a_i)$$$ every time. If you ignore the $$$c_i$$$ terms, then the $$$a_j - a_i$$$ terms cancel out. So, let's think about the extra $$$c_i$$$ that we may have to pay sometimes.

Actually, let's do it the other way around. We always pay at least $$$c_i$$$ for each town, right? So it would make sense to think of $$$\sum_i c_i$$$ as the baseline. And to think only about the extra $$$\max(0, a_j - a_i - c_i)$$$ that we have to pay sometimes.

Actually, $$$\max(0, a_j - a_i - c_i) = \max(0, a_j - (a_i + c_i))$$$. So from city $$$i$$$, flying in the direction of decreasing beauty is always free. Flying in the direction of increasing beauty is free to a point, and after that we are hit with a tariff of 1 dollar per beauty. Visually, I imagine it something like this:

Some time passes without many ideas

The idea of cities as points on a line seemed useful, so let's do more of that. Draw an arc from $$$a_i$$$ to $$$a_i + c_i$$$ to denote the area of free travel from city $$$i$$$. If I do this for every city, "connected components" (sorta) arise:

Experience tells me that within a component, I can move from anywhere to anywhere, free of charge. (See, for example, the DFS-tree blog, observations 5 and 6, for a proof that can be adapted and for the source of this intuition.)

Of course, that's just normal shortest paths on graphs, without all this stuff about visiting every vertex once and only once. But I'm hopeful. You see these red bits in this image:

Those are the bits that I absolutely have to pay for. Do I have to pay for anything else? If I enter each component through the leftmost vertex, find some Hamiltonian path within the component from the leftmost vertex to the "most powerful one" (i.e. the one with highest $$$a_i + c_i$$$) and exit through the most powerful vertex, then I will only need to pay for the red bits. Can I always find such a path in a component?

Oh, I think I can't. For example, here:

I have to enter the component through the vertex $$$u$$$, and exit through $$$u$$$ too. There's no opportunity for me to visit vertices $$$v$$$, $$$w$$$ and $$$x$$$.

A bit of time passes

But wait, I don't actually need to visit all the vertices of the component. It is enough for me to find some free path from the leftmost vertex of the component to the most powerful one, not visiting any vertex twice. Then, I'll just visit all the missed vertices on my way back from right to left. Can I always do that? Yeah, it looks like I can: if I am at the most powerful vertex of the component, there will always be some vertex to the left that can reach this vertex. And there will always be some vertex to the left of that vertex that can reach that vertex, and so on, until we reach the beginning of the component.

Ok! Now I have constructed a Hamiltonian circuit that takes me through each city, paying for the "red bits" and nothing else. It's also clear that this is the best solution, because we absolutely have to pay for the red bits no matter what. So all that's left to do is implement code that calculates the total length of the red segments. It looks like this:

sort(arr, arr + n); // sort cities by a (arr[i] has fields a and c; ll = long long)

ll ans = 0;
ll mxac = 0; // rightmost point reachable for free so far, i.e. max of a + c seen
for (int i = 0; i < n; i++) {
  if (i != 0 && mxac < arr[i].a) {
    ans += arr[i].a - mxac; // a "red bit": a gap not covered by any arc
  }
  mxac = max(mxac, arr[i].a + arr[i].c);
}

Add to ans the sum of $$$c_i$$$ and output that. That's it!
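A nice property of a greedy like this is that it is easy to validate against brute force on tiny inputs. Here is a sketch of such a check (made-up function names; `greedy_cost` includes the $$$\sum_i c_i$$$ baseline, and `brute_cost` tries every tour with city $$$0$$$ fixed as the start):

```cpp
#include <algorithm>
#include <cassert>
#include <numeric>
#include <vector>
using namespace std;

typedef long long ll;
typedef pair<ll, ll> City; // (a_i, c_i)

// The greedy from the text: sum of c_i plus the uncovered "red" gaps.
ll greedy_cost(vector<City> cities) {
    sort(cities.begin(), cities.end()); // by a
    ll ans = 0, mxac = 0;
    for (size_t i = 0; i < cities.size(); i++) {
        if (i != 0 && mxac < cities[i].first)
            ans += cities[i].first - mxac; // a "red bit"
        mxac = max(mxac, cities[i].first + cities[i].second);
        ans += cities[i].second; // the baseline sum of c_i
    }
    return ans;
}

// Exhaustive minimum over all tours; only feasible for very small n.
ll brute_cost(const vector<City> &cities) {
    int n = cities.size();
    vector<int> perm(n);
    iota(perm.begin(), perm.end(), 0);
    ll best = -1;
    do {
        ll cost = 0;
        for (int i = 0; i < n; i++) {
            ll ai = cities[perm[i]].first, ci = cities[perm[i]].second;
            ll aj = cities[perm[(i + 1) % n]].first;
            cost += max(ci, aj - ai);
        }
        if (best < 0 || cost < best) best = cost;
    } while (next_permutation(perm.begin() + 1, perm.end()));
    return best;
}
```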

Final remarks

This was a bit of an experimental blog. I don't think I'll be doing many of them in the future, because it is quite rare that I can put my thought process into words so clearly. Also, I wanted to write this as a demonstration that solving a greedy problem doesn't mean guessing the entire solution at once, but I don't think such blogs are very useful as a general resource.

In any case, thank you for reading, and thanks to 1-gon for creating this nice problem!


By -is-this-fft-, history, 5 months ago, In English

A week ago, I made this blog post. Earlier, I already wrote a small update to the blog, but I wanted to make a separate post to properly announce what I did, as well as to share my code.

Thanks to vjudge's suggestion, there is now a way to participate in CodeChef Cook-Off contests virtually on vjudge.

How to participate

  • Go to the list.
  • Open a contest by clicking on its name.
  • Click the "clone" button to create your own private copy of the contest.
  • Set the "begin time" to the time you'd like to start the contest, somewhere in the future.

How it works

The contests are so-called replays: they are generated from the final ranklists (complete with "time of AC" and penalty for every problem and contestant) of the contests. The contest data is downloaded from CodeChef and exported to vjudge format (an Excel worksheet). This means that the ACs in the standings are correct at any point in time, but information about "who has attempted but not yet solved this problem" is lost.

Vjudge imposes a limitation on the number of participants in the scoreboard: 2000. This means that in some Div. 2 and Div. 3 contests, the ranklist is truncated to the top 2000 participants. I also thought about picking a random 2000-person subset from the list of participants, but this seemed better: it's more important to know which hard problems are starting to be solved than to know whether an easy problem has 4000 or 5000 solvers currently.

Since the Lunchtime ranklists don't contain data about "time of AC", it's unfortunately not possible to create virtual contests from them so easily. Thus, we are limited to Cook-Offs only.

My code

As mentioned above, the contests are created by downloading ranklists from CodeChef and exporting them to vjudge format. The code to do all that is in this GitHub repository. The project is a .NET Core console application; you should be able to run it on any platform. It supports both downloading the contest ranklist from the CodeChef site and importing the ranklist from a JSON file (more or less the representation used to generate the ranklist page on CodeChef). Similarly, you can export to both JSON and vjudge's format. The repo also contains a data folder with the JSON representations of all Cook-Off and Lunchtime ranklists to date.

I'm sharing my code with you on the grounds that you will use it responsibly. In particular, I really don't want to bother CodeChef with unnecessary, excessive requests. Therefore, if you want to use CodeChef ranklists for any project (statistics, an alternative virtual contest platform, etc.), please use the JSON ranklists provided in the data folder instead of re-downloading them from CodeChef.

On the progress of uploading

Once contests are exported to vjudge format, they need to be uploaded. This part is done manually.

I have currently uploaded all Cook-Offs since January 2018. It is unlikely that I will upload earlier contests before I participate in all of them.

However, if you want to help upload them, that's great! To upload a contest, go to the contest page on vjudge, click "Create Contest" and choose Replay. When a contest is exported to vjudge format, an auxiliary info.txt file is created along with the Excel ranklist itself; this file will help you fill in the "new contest" form. There are a couple of things to keep in mind:

  • As mentioned above, please use the data folder in the repo instead of redownloading the ranklists.
  • Vjudge expects the contest start time in your local timezone, so I use my local timezone (UTC +3) in info.txt. Keep this in mind to ensure that the contest start time remains correct.
  • Problems need to be entered in the correct order (specified in info.txt) — this is the only way vjudge can match the columns of the ranklist to problems.
  • I have seen some contests where the contest length or start time are questionable: use your common sense in this case. Mostly they are off by a couple of seconds, but there was also at least one contest where the length was falsely reported as "1 day and 18 hours".
  • Pay close attention to the penalty time. It's 1200 seconds for older contests and 600 for newer.
  • Respect the rules of vjudge; don't create messy replays or duplicates. Fill in the form in a dignified way, using proper capitalization in the title and so on.

Huge thanks to virtual judge for making this possible and to anyone who gave advice on the subject!


By -is-this-fft-, history, 5 months ago, In English

Hi!

I like doing virtual contests. As it stands, I am quickly running out of contests (especially non-ancient ones) on Codeforces, AtCoder and CS Academy. So, CodeChef would be the next logical place.

However, it seems that there is no option to do that on CodeChef. Since there are always new blogs about tools for competitive programming, I was wondering if someone had already made this or if someone knows of a service like this.

Some things I have looked into:

  • vjudge doesn't really fit the bill; it seems to be intended more as a "create cross-platform mashups" type of thing. My issues:
    • You need to manually create the contest unless someone else has already done it.
    • Can't look at "the number of people who solved the problem up to this point in time" — this part is important to me.
    • Can't look at "scoreboard at this point in time" — this part is less important.
  • This project seems to be exactly what I'm looking for. However, 149.129.139.145 is down. I'm thinking about hosting this myself (probably not publicly), but I'm not sure I can. They use OAuth to authenticate (and use the bearer token in all subsequent requests to the CodeChef API), and I don't have valid credentials to do that. They do actually have an OAuth client secret pushed to the repo (:O), but when I try using that, the client id is invalid.
  • Writing one myself — it probably wouldn't be very hard to do, but I kind of run into the same problem as above: I can't access the CodeChef API. Web-scraping is also an option, but it isn't exactly pleasant, especially on CodeChef.

To summarize, I have two questions:

  • Does anyone know of such a service?
  • Does anyone have experience with getting access to CodeChef API? Do they just give access to anyone who asks? Do they reply to emails if I write?

Update

Thanks to vjudge's recommendation, I have found a solution: fetch ranklists from CodeChef and upload them as replays on vjudge. As proof of concept, I have uploaded replays of the last contest.

I tried participating in Div. 1 this morning and was satisfied.

The Div. 2 and Div. 3 ranklists are truncated, featuring only the top 2000 participants. This is a vjudge limitation. Maybe it is better to randomly sample the participants, to get a better feel of "number of solvers"? Let me know what you think.

To participate in them virtually, you need to press the clone button on vjudge: you can then set the start date of the clone to the future and participate. You will be able to see the ranklist as it was at that point, any time during your participation, just like you can on Codeforces gyms and virtuals.

I will try to upload all Cook-Offs to vjudge in the coming days.


By -is-this-fft-, 9 months ago, In English

I wanted to suggest some improvements for the blog system. Some of these things have probably been said before. Some of these things should be pretty easy to implement, others are harder and might require a complete rework. I tried to put the easy things first.

Let's do something about necroposting

This is probably my biggest pet peeve about Codeforces blogs. Most of the time, Recent Actions is filled with very old blogs, some of which have been brought up for pretty much no reason at all. Currently, we have a useful C++ library from 2013 in Recent Actions, where the recent comment answers an old question which has already been answered better. Similarly, there is How to add friends?, which was brought up to make a joke that has already been done.

The thing is though, a lot of the time these blogs are not brought up by trolls or anything like that. They are brought up by helpful people who simply did not notice that the blog is old and the question has already been answered. Sometimes people reply to those guys with angry comments, but that doesn't do much because people don't make that mistake more than once — we just have a constant stream of new users doing that. And it isn't good to chastise people for trying to be useful. My suggestion is to just prevent people from accidentally doing that. I propose that if you are replying to an old blog with no activity in the last 6 months, you get a popup saying something like

The most recent activity in this blog was X years ago. Are you sure you want to write a reply? It's not recommended to reply to old blogs without a good reason.

There are a few types of necroposting that I find annoying but can be argued to have a good reason:

  • replying to an old tutorial with "thank you" (because these tutorials can be useful now and if they are in Recent Actions, they are seen by younger people);
  • replying to an old editorial with "please debug my code" (because posting a new blog is pretty much equivalent).

In my ideal world, these replies wouldn't be necessary, but more about that later.

Don't allow users to delete blogs

The issue here is that if you delete a blog, then you delete all the discussion that happened in the comments. In my opinion, once you have posted a blog, it's not really yours to control anymore. It "belongs" to the community. Sometimes there is important discussion in the comments that just gets deleted on the author's whim. Sometimes I put hard work and effort into writing a comment, only to discover the next day that the author of the blog has undone all that with the click of a button. We already cannot delete comments. Why should we be allowed to delete blogs?

There can be some reasonable reasons to delete blogs, but they can be handled in other ways.

  • if a blog is against the rules, it will be moved to drafts by CF moderation (this is already being done).
  • if a blog is just spam, it will get downvoted and will disappear from Recent Actions (this is also already being done).
  • if you are ashamed of the blog or something along these lines, you can edit the blog and disallow viewing history, but you won't delete other people's thoughts by doing that.
  • if you have a really serious reason for deleting the blog, you can reach out to CF administration.

I also suggested that in a comment yesterday, but the blog got... deleted.

Have a separate place where beginners can ask questions

There have been attempts to create some megathreads for that, but they have always been unofficial and they have always eventually just disappeared, and beginners have gone back to making new blogs or posting under old editorials. For this to work, it needs to be official and there needs to be an official access point.

Also, it needs to have some sticky posts. There are a few recurring points that would solve a large part of the problems beginners have. For example, we have Codeforces Round #FF(255) Editorial and Codeforces Round #134 in Recent Actions currently, and both of them are there because a beginner asked a question of the type "my solution prints different output on Codeforces". Almost always, the solution to that is "You have undefined behavior; use tools like -fsanitize=address to find out where it is." There are two other typical problems:

  • "I have WA and I can't find a reason" — the solution is to write a very simple and naive solution, a program that generates small random tests and a program that compares the output of the two solutions.
  • "I have TLE" — the solution is to learn what the words "time complexity" mean.

Beginners need to see these things before asking a question. And the only way to ensure that is to have some kind of sticky or readme before you ask a question.

Have a separate place for educational resources

This was once suggested here, but it was never implemented. Again, there have been some unofficial attempts to create "lists of tutorials", but they get similarly buried and updating them depends on the author. Again, for something like this to work, it needs to be official and it needs to have an official access point. Otherwise, it will get buried and obsolete. EDU is somewhat like that, but it's a bit different — EDU is more like an online course platform, while tutorials can be all kinds of community-created resources that need not follow the EDU format.

Make the blog system into a forum system

This is the ultimate one and it would solve the two previous problems automatically. The real problem is that Codeforces has a blog system, but it has grown into a stage where the blog system is practically used as a forum system, except it doesn't have typical forum features. Currently Recent Actions is the only access point to the blog system, and it is very inflexible.

If we had a forum system, we could have sub-forums "Contest announcements", "Contest editorials", "Beginner questions", "Tutorials", "General discussion" etc. In each of them, you could sort the threads by most recent action, allowing you the same experience as Recent Actions currently has, but with none of the annoying blogs that you don't want to see. Sub-forums could have sub-forums of their own, allowing us to create a separate discussion thread for every problem in every contest (something that has been requested several times). Or to create different sub-forums for different difficulty levels of tutorials, allowing "ODE and differential technique" to not be buried in a sea of "DP for beginners, part 7". You could create sticky threads, solving the problem of beginners asking questions that have a standard solution. And so on.


By -is-this-fft-, history, 12 months ago, In English

There was a blog today with a similar title, and although it was (I think, not really sure as I didn't get a good look) some kind of satire/joke blog, I wanted to post some serious advice about things I (don't) like to see when people are PMing me. However I decided that this wouldn't be in the spirit of the blog and decided to make a separate blog. And now that blog is deleted anyway.

While it's not a bad thing to write messages to more experienced users, most people who write to me do things that make them very difficult to deal with. These are my opinions, but somehow I feel that many reds will agree. Some things here feel silly to write because they are so obvious. But I'm only writing them because people regularly mess them up.

Use punctuation and spelling.

There are some very simple rules that make your messages infinitely more readable. I find it especially strange when many of the people messaging me call me "sir", which is supposed to show respect, but then write like this. Is this how you write to someone you respect? Do you write to your teachers or boss like that?

Don't be pushy.

The worst case was when someone contacted me by email, and 20 minutes later sent another message "please reply ??". Think about it like this: you are writing to someone you don't know, with whom you have no previous agreement and whose job is not to reply to you. I think you aren't automatically entitled to a reply at all. And although I try to reply to most messages, I think it's very rude to expect a reply in less than a day or even two. Replying to you is not anyone's top priority.

Don't overstay your welcome.

A big reason why I'm wary of responding to PMs is that many people treat a response as an invitation to ask 1000 more questions. Of course it's normal to have some followup questions and maybe write again another time, but there comes a point where you just begin to annoy. You should recognize when that moment comes and stop before it.

Don't ask people to debug your code.

I will send the following template message if you get WA/RE:

Have you tried the following:

  • Write a very naive and simple solution to the problem (maybe exponential complexity) (if you can, you may also just copy someone else's solution)
  • Write a program to generate thousands of small (!) test cases
  • Using those two, find a test case where your program gives the wrong answer
  • Use print statements or a debugger to see where exactly your program does the wrong thing.

98% of WAs and REs can be resolved this way. I don't have time to delve into every piece of code I'm sent, it's much harder to debug somebody else's code than your own, and being able to debug your own code is a valuable skill. It is also a very routine process that can be learned much faster than problem solving and algorithms.

And if you get TLE, make sure before writing that:

  • you know what complexity is and
  • you read/print input efficiently (sync_with_stdio and friends).

On that note, by the way: cin.tie(NULL) is useful but cout.tie(NULL) doesn't do anything.
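For reference, here is a minimal sketch of that setup (the function name `setup_fast_io` is mine, just for illustration; normally these two lines simply go at the top of `main`):

```cpp
#include <iostream>

// A sketch of the typical fast-IO setup for competitive programming.
void setup_fast_io() {
    // Stop synchronizing C++ streams with C stdio. After this, don't mix
    // scanf/printf with cin/cout in the same program.
    std::ios_base::sync_with_stdio(false);
    // Untie cin from cout, so cout is no longer flushed before every read.
    std::cin.tie(nullptr);
    // Note: cout.tie(nullptr) would clear the stream tied to cout, but
    // nothing is tied to cout by default, so it changes nothing.
}
```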

Don't go in over your head

This section feels somewhat controversial to write, but I have some bad experience from the past with people who have asked for help understanding some advanced tutorials. I accepted, but every time it eventually became evident that the person did not yet have the capacity to understand the advanced algorithm or data structure in question.

There is some amount of mathematical/algorithmic maturity needed to understand some complex things. (I don't think that you must have a high rating to understand; after all, many top computer scientists haven't even heard of CP, but real understanding of simpler things is required.) Moreover, you can not expect complex tutorials to hold a beginner's hand and explain everything in such basic terms that anyone will understand. Mathematics (and the kind of CS we do here is basically mathematics) is cumulative and builds upon itself. Abstract stuff requires experience with concrete stuff, and complex algorithms use many simple algorithms as subroutines. And you should be able to read some quite dry or terse mathematical text without someone showing you "what this intuitively means" at every little step.

About asking for help in a specific problem

If you ask for solutions or hints for some specific problem, be prepared to send a link. There are two reasons for this.

  1. I don't want to help you cheat. Asking for solutions to an ongoing contest is cheating. And some of you may not realize it, but so is all other discussion: asking for hints, asking for debugging help and asking for "general ways to approach problems of this type" (whatever that means). By sending the link, you prove that you're not trying to do any of these things.
  2. I'm being blunt here, but a lot of you suck at retelling problems. For example, many beginners haven't developed a sense of what information is important. There are countless examples of CF blogs where some green asked for help in a seemingly impossible problem until finally it turns out that the author had left out a crucial detail. See below for more.

Of course, it is not always possible to send a problem link. If that is the case, I will probably wait a bit before answering — if the contest is short, there's a good chance that it will be over when I reply. And if you do retell the problem, keep in mind the following:

  • Try to state the problem as closely to the original as possible. If you are inexperienced, don't try to "simplify" the problem, because there is a good chance you will fail. It's probably okay to remove the non-mathematical part of the problem like "Vanya found an array in the left pocket", but leave the mathematical part of the problem as it is.
  • Constraints are very important and a part of the problem. For example, there might be a problem where $$$n \le 10^3$$$ allows for a simple DP while $$$n \le 10^{18}$$$ requires some complex algorithms and a lot of creative thinking. Before I start thinking, I want to know whether you are satisfied with the simple DP (that takes me a few minutes) or need the complex algorithm (that might take a few days and may not exist at all).
    • This also goes for "hidden" constraints. If the problem says that there is a grid of lowercase English letters, don't say "there is a grid of some values" because the fact that there are only 26 distinct values may be important.
    • Small constraints are especially important! If $$$n \le 20$$$, we can use all kinds of exponential solutions that would be impossible otherwise.
    • Of course, there is some value for asking what the "best complexity" is. But thinking about that is a bigger investment in time. And even then it's a good idea to outline what you have already come up with.
  • If there is some strange condition in the problem, definitely include it. Problemsetters tend to not write random things and if there is something like that in the problem, there is a good chance that it is important.
  • If the time limit is not close to 2 seconds, that's also important information.

Speak in a language the receiver will understand

It's surprising but I have received a number of messages in languages I don't understand. Now of course you can't really know, but use common sense here. For example, my profile will tell you where I live. If I don't live anywhere near Bangladesh and don't even have a South Asian name, what is the probability that I can understand your message in what I think is Bengali? Same goes for any other language.

Thoughts on some common questions

  • "Please give me some guidance" I have no idea what to answer. If you have some specific question, write.
  • "Should I sort by number of solves or by difficulty?" Please understand that there is no consensus or real science on "how CP should be practiced". Furthermore, such questions likely have very little effect on your practice.
  • "How many minutes before looking at the editorial" I will say "until you have solved it". I will also point out that many strong contestants have conflicting views and thus I think it's not the most important thing.
  • "Can you give me some resources" If you ask something specific, maybe I have something good. But 99% of the time I will just send links to train.usaco.org and cses.fi/book.
  • "The resources you sent are too hard" See the part about going in over your head. Also I don't know what you expected but some people expect resources where you don't need to think to understand. This is an unrealistic expectation.
  • "How do you approach problems like X" In high school algebra and calculus you often have well defined recipes that tell you how to solve every problem (of some specific kind) ever. I suspect some people think this applies to CP as well. It doesn't. Most of the time, there is no "standard first step" that you do, you just have to think. But if you have enough experience, many problems have solutions that become obvious immediately.
  • "Can you be my mentor" No.

In general I must say (this is about me though, not necessarily other reds) that I'm much more likely to be able to answer technical questions (specifics of some problem, solution, algorithm...) than "career" questions ("how to practice" and similar). If you have a question in the latter category, it may be a good idea to write to someone else.

Concluding words

Unfortunately I don't think this is going to have a big effect because many of these people who write PMs are beginners who aren't going to see this blog. It would be a really cool feature if I could set some kind of "status" that is visible at the "send message" panel.


By -is-this-fft-, history, 13 months ago, In English

See this and this for example.

It seems that sometimes after writing an equation on a separate line with Latex like [dollar][dollar][equation][dollar][dollar], the next equation written inline like [dollar][equation][dollar] doesn't work.

From some experiments it seems like Codeforces internally changes single dollar signs to triple dollar signs if there is a match, and uses those to delimit Latex. I suspect the double dollar signs screw up the parser and after an inline equation, the single dollar signs won't get converted.


By -is-this-fft-, history, 14 months ago, In English

There have been some questions about this comment (and its parents) and it seems different people understand the expression differently. So I wanted to create a poll.

If your answer is "depends on the context", vote based on the context of that comment.


By -is-this-fft-, history, 18 months ago, In English

I wanted to create this blog because there probably won't be much discussion of solutions in the award ceremony (currently in progress). Also, people who aren't finalists themselves can't see that anyway.

I'm interested to hear approaches of the higher-scoring finalists. (We, TÜ Randomized Algorithms did not do anything interesting :D)

A bit on the negative side: this did not seem like a 4-hour contest at all. The problem was kinda complex; I felt that I spent all the contest swamped in implementation, not being able to implement even 1/10 of the ideas I had. Well, it's also partially our fault for coming to troll in the finals with a 2-person team :P.

Also, did anyone manage to use the visualizer?


By -is-this-fft-, history, 18 months ago, In English

Introduction

Recently there have been several suggestions that there is significant rating inflation now due to COVID-19. The typical hypothesis seems to be along the lines of:

Because people stay at home more now, there is a lot of new inexperienced users who add a lot of rating to the system, pushing "old" users upwards without actual improvement.

A simplified and a bit more general hypothesis that I'm going to test:

Because of the virus I'm gaining more rating than usual without working more.

But most of these comments are based on anecdotal evidence, which is not worth much. I wanted to see if there is any truth to these claims. Specifically, this blog is an implementation (although I modified the approach a bit) of the following comment by geckods:

Methodology

Let's define the "test period" and "control period".

The test period refers to the period from 12 March 2020 to 12 Apr 2020. It was around 12 March when coronavirus became a big deal in most of the world, and for most of the Codeforces userbase. In that period, we had:

  • 2 Div. 1 contests
  • 5 Div. 2 contests
  • 2 Div. 3 contests
  • 2 Educational rounds
  • 1 Global round

The control period is another period that should be "reasonably similar" to the test period, in terms of both length and the contests held. It should be from before coronavirus was known, even to China. It also can't be from too long ago. I have chosen 26 Sep 2019 to 26 Oct 2019. It fits all the criteria; as a matter of fact the "contest summary" is the exact same, except for an additional unrated Technocup mirror. But I have provided the code, you can try other periods and other period lengths.

Consider the users who were active during these periods: at least 3 contests in both periods. If our hypothesis is true, then the distribution of rating changes of these users would be different: these users would have gained more rating during the test period, compared to the control period. It also might be that there is rating inflation in Div. 2, but it has not "propagated" to Div. 1 yet, so it's interesting to see what happens if we take the initial rating into account.

In short, we are asking "Given an user who had rating $$$x$$$ at the beginning of the test period, what was their rating at the end of the test period?" and comparing it to "Given an user who had rating $$$x$$$ at the beginning of the control period, what was their rating at the end of the control period?".

I have implemented it here.

Results

The graph on the left corresponds to the test period, the graph on the right corresponds to the control period. The color of the $$$i$$$-th row and $$$j$$$-th column corresponds to the probability that a participant with rating $$$i$$$ at the beginning of the period would have rating $$$j$$$ at the end of the period. Black corresponds to probability 1, white to probability 0.

I don't want to jump to conclusions from this data; I'll leave it up to you to interpret it. But just from these graphs, it seems that there is no serious inflation. Of course, one month is a short period to see the effects. And it does seem that the left graph shows a bit of inflation, but it's hard to tell. At least it seems to cast doubt on the "my rating has risen more without me working more!" claim. I think that if your rating has risen in the last month, you're just getting better even if you don't "feel better", so don't worry so much about inflation :)

Disclaimer: I'm not very experienced in statistics, I just decided to implement this because no one else seemed to and I was also curious about what the result would be.


By -is-this-fft-, history, 20 months ago, In English

Hi.

Sometimes I use Codeforces groups for private trainings. On occasion, this means importing (via Polygon) problems from some other contests (that have public test data and are free to be used like that).

But when importing test cases "from the files" or "from the archive", this happens a lot:
(By the way, I have no idea why it happens so frequently. My best guess is that OI-style contests sometimes have to repeat test cases in different subtasks. But I have also seen this with ICPC-style contests.)

Anyway, my question is:

  • Why does Polygon even care if there are multiple equal tests? It doesn't seem to care if there are two equal tests from generators.
  • Why does it only show one error at a time, and a seemingly random pair at that (it doesn't seem to be the "lexicographically smallest" pair)?
  • Can I somehow turn this validation off? There are some (probably more important) checks that you can turn off.

(I guess I should be smarter and write a local script to remove the duplicates).


By -is-this-fft-, history, 2 years ago, In English

The Open Competition is an annual week-long competition for Estonian high school students which serves as the first contest of the season. Just as last year, we are welcoming international participants as well. The contest started on 14 October, 08:00 EEST and lasts a week, ending at 21 October, 00:00 EEST.

The contest consists of 7 problems with scoring distribution 20-30-40-50-60-60-60. The problems have partial scoring. The first five problems are "classical" problems of increasing difficulty, while the last two slots are reserved for output-only, interactive, out of scope, approximation, optimization — just in general "weird" problems. The problems are mostly aimed at Div. 2 level participants.

Problem statements will be available in Estonian, English and Russian. However, to register, you will need to navigate through a bit of Estonian.

  • Registration (Google Translate works well; foreign participants should sign up in the "other" category and enter the country name in the "school/institution" field)
  • Ranking
  • The contest itself

Please note that you might not appear on the scoreboard immediately after registering.


By -is-this-fft-, history, 2 years ago, In English

(Huawei Marathon 2)


By -is-this-fft-, history, 2 years ago, In English

Inspired by people losing their mind over dp[mask][-1] here...

Motivation

Sometimes you need to implement an array with negative indices. For example you may want to have an array dp[i] which stores values for all $$$i$$$, $$$-10^5 \le i \le 10^5$$$. One way is to make dp an (unordered) map. But this is slow and can get TLE in certain problems. Another way is this:

const int MAX_N = 2e5 + 20;
const int ZERO = 1e5 + 5;
int dp [MAX_N];

and instead of writing dp[i] you write dp[ZERO + i]. If I want to access dp[-5], for example, I access dp[ZERO - 5]. But this is tedious and error-prone. It's very easy to make a mistake by forgetting to write ZERO somewhere.

The solution

Do the above, but do it like this!

const int MAX_N = 2e5 + 20;
const int ZERO = 1e5 + 5;
int dp_ [MAX_N];

int& dp (int idx) { // the '&' is very important!
  return dp_[ZERO + idx];
}

A very important part is that dp doesn't return an int; it returns an int& — a reference to the given position in the array. This means that it's very convenient to use. We can assign to dp(i), modify dp(i) etc.

int main () {
  dp(-300) = 7;
  dp(40) = 5;
  dp(-300) += 777; // All of those are fine!
  cout << dp(40) << " " << dp(-300) << endl;
  
  swap(dp(40), dp(-300)); // Even this works!
  cout << dp(40) << " " << dp(-300) << endl;
}


By -is-this-fft-, history, 2 years ago, In English

Introduction

This is a tutorial/exploration of problems that can be solved using the "DFS tree" of a graph.

For a way too long time, I didn't really understand how and why the classical algorithm for finding bridges works. It felt like many tutorials didn't really explain how it works, kind of just mentioned it in passing and quickly just moved on to implementation. The day someone explained what the DFS tree is, I finally understood it properly. Before, it took me ages to implement bridge-finding properly, and I always had to look up some detail. Now I can implement it at typing speed.

But more importantly, I began to see how the same techniques can be used to solve graph problems that have more or less nothing to do with bridges. The thing is, when you have a black box, you can only ever use it as a black box. But if you have a box that you understand well, you can take it apart and repurpose the pieces for completely different things, all without getting lost.

In my opinion, the DFS tree is one of the most useful techniques for solving structural problems about graphs that I know of. Also, sometimes there are questionable greedy algorithms whose correctness becomes obvious once you think about the DFS tree.

The DFS tree

Consider an undirected connected graph $$$G$$$. Let's run a depth-first traversal of the graph. It can be implemented by a recursive function, perhaps something like this:

1 function visit(u):
2     mark u as visited
3     for each vertex v among the neighbours of u:
4         if v is not visited:
5             mark the edge uv
6             call visit(v)

Here is an animation of what calling visit(1) looks like.

Let's look at the edges that were marked in line 5. They form a spanning tree of $$$G$$$, rooted at the vertex 1. We'll call these edges span-edges; all other edges are called back-edges.

This is the DFS tree of our graph:

Observation 1. The back-edges of the graph all connect a vertex with its descendant in the spanning tree. This is why the DFS tree is so useful.

Why?

For example in the graph above, vertices 4 and 8 couldn't possibly have a back-edge connecting them because neither of them is an ancestor of the other. If there was an edge between 4 and 8, the traversal would have gone to 8 from 4 instead of going back to 2.

This is the most important observation about the DFS tree. The DFS tree is so useful because it simplifies the structure of a graph. Instead of having to worry about all kinds of edges, we only need to care about a tree and some additional ancestor-descendant edges. This structure is much easier to think about and write algorithms for.

Finding bridges (or articulation points)

The DFS tree and observation 1 are the core of Tarjan's bridge-finding algorithm. Typical tutorials about finding bridges only mention the DFS tree in passing and start by defining weird arrays like $$$\mathrm{dfs}[u]$$$ and $$$\mathrm{low}[u]$$$: forget all that. These are implementation details, and in my opinion this isn't even the most intuitive way to implement finding bridges. In bridge-finding, $$$\mathrm{dfs}[u]$$$ is simply an obfuscated way to check whether one vertex is an ancestor of another in the DFS tree. Meanwhile it is even kind of hard to clearly explain what $$$\mathrm{low}[u]$$$ is supposed to represent.

Here's how to find bridges in an undirected connected graph $$$G$$$. Consider the DFS tree of $$$G$$$.

Observation 2. A span-edge $$$uv$$$ is a bridge if and only if there exists no back-edge that connects a descendant of $$$uv$$$ with an ancestor of $$$uv$$$. In other words, a span-edge $$$uv$$$ is a bridge if and only if there is no back-edge that "passes over" $$$uv$$$.

Why?

For example, in the DFS tree above, the edge between 6 and 2 isn't a bridge, because even if we remove it, the back-edge between 3 and 8 holds the graph together. On the other hand, the edge between 2 and 4 is a bridge, because there is no back-edge passing over it to hold the graph together if $$$2-4$$$ is removed.

Observation 3. A back-edge is never a bridge.

This gives rise to the classical bridge-finding algorithm. Given a graph $$$G$$$:

  1. find the DFS tree of the graph;
  2. for each span-edge $$$uv$$$, find out if there is a back-edge "passing over" $$$uv$$$, if there isn't, you have a bridge.

Because of the simple structure of the DFS tree, step 2 is easy to do. For example, you can use the typical $$$\mathrm{low}[u]$$$ method. Or, for example, you can use some kind of prefix sums, which is what I prefer. I define $$$\mathrm{dp}[u]$$$ as the number of back-edges passing over the edge between $$$u$$$ and its parent. Then,

$$$\mathrm{dp}[u] = (\text{# of back-edges going up from } u) - (\text{# of back-edges going down from } u) + \underset{v \text{ is a child of } u}{\sum \mathrm{dp}[v]}.$$$

The edge between $$$u$$$ and its parent is a bridge if and only if $$$\mathrm{dp}[u] = 0$$$.
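As a sketch, the dp approach could look like this in code (my illustration, not code from the blog; the name `find_bridges` and the 0-indexed interface are assumptions). Back-edges are detected by comparing DFS entry times, and the dp is accumulated by processing vertices in reverse DFS order, which handles every child before its parent:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Find all bridges of an undirected connected graph (vertices 0-indexed).
// dp[u] = number of back-edges passing over the span-edge (parent[u], u).
vector<pair<int,int>> find_bridges(int n, const vector<pair<int,int>>& edges) {
    vector<vector<pair<int,int>>> adj(n); // (neighbour, edge id)
    for (int i = 0; i < (int)edges.size(); i++) {
        auto [u, v] = edges[i];
        adj[u].push_back({v, i});
        adj[v].push_back({u, i});
    }
    vector<int> parent(n, -1), parentEdge(n, -1), enter(n, -1), dp(n, 0);
    vector<int> order; // vertices in DFS entry order
    int timer = 0;
    function<void(int)> visit = [&](int u) {
        enter[u] = timer++;
        order.push_back(u);
        for (auto [v, id] : adj[u]) {
            if (id == parentEdge[u]) continue;  // don't reuse the span-edge to the parent
            if (enter[v] == -1) {               // span-edge: recurse
                parent[v] = u;
                parentEdge[v] = id;
                visit(v);
            } else if (enter[v] < enter[u]) {   // back-edge going up from u...
                dp[u]++;
                dp[v]--;                        // ...and down from its ancestor v
            }
        }
    };
    visit(0);
    vector<pair<int,int>> bridges;
    // Reverse DFS order guarantees that all children of u have been added
    // into dp[u] before u itself is processed.
    for (int i = n - 1; i >= 1; i--) {
        int u = order[i];
        dp[parent[u]] += dp[u];
        if (dp[u] == 0) bridges.push_back({parent[u], u}); // nothing passes over it
    }
    return bridges;
}
```

For a triangle 0-1-2-0 with a pendant edge 2-3, this reports only the edge (2, 3) as a bridge.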

Directing edges to form a strongly connected graph

Consider the following problem. Unfortunately I don't know if you can submit solutions somewhere. This is 118E - Bertown roads.

Problem 1. You are given an undirected connected graph $$$G$$$. Direct all of its edges so that the resulting digraph is strongly connected, or declare that this is impossible.

I know of a solution without using the ideas of DFS tree in linear time, but it is quite annoying and I would never want to implement this. Meanwhile, a DFS tree solution is very easy to implement in only a few lines of code.

Observation 4. If $$$G$$$ contains bridges, this is impossible.

Why?

So from now on, suppose that the graph doesn't have bridges. Let's look at its DFS tree. Direct all span-edges downwards and all back-edges upwards.

Observation 5. There is a path from the root to each vertex.

Why?

Observation 6. There is a path from each vertex to the root.

Why?

Therefore, the graph is now strongly connected! This solution can be implemented without an explicit reference to the DFS tree, but it is a very good example of proving the correctness using DFS tree.
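A possible implementation sketch (my code, not from the blog; `orient_edges` is a made-up name): directing every edge in the direction it is first traversed by the DFS is exactly "span-edges down, back-edges up". The bridge check from the previous section is assumed to have been done beforehand.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Orient each edge of a bridgeless, connected, undirected graph so that the
// result is strongly connected. Vertices are 0-indexed. Returns each edge as
// a (from, to) pair. The absence of bridges is assumed, not checked.
vector<pair<int,int>> orient_edges(int n, const vector<pair<int,int>>& edges) {
    vector<vector<pair<int,int>>> adj(n); // (neighbour, edge id)
    for (int i = 0; i < (int)edges.size(); i++) {
        auto [u, v] = edges[i];
        adj[u].push_back({v, i});
        adj[v].push_back({u, i});
    }
    vector<bool> used(edges.size(), false);
    vector<bool> visited(n, false);
    vector<pair<int,int>> directed;
    function<void(int)> visit = [&](int u) {
        visited[u] = true;
        for (auto [v, id] : adj[u]) {
            if (used[id]) continue;        // each edge gets a direction only once
            used[id] = true;
            directed.push_back({u, v});    // first traversal: a span-edge goes down,
            if (!visited[v]) visit(v);     // a back-edge goes up
        }
    };
    visit(0);
    return directed;
}
```

On a triangle this produces the directed cycle 0 → 1 → 2 → 0.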

Implementing cacti

Sometimes the DFS tree is just a way to represent a graph that makes implementation of certain things convenient. Like an adjacency list but "next level". This section is purely an implementation trick.

A cactus is a graph where every edge (or sometimes, vertex) belongs to at most one simple cycle. The first case is called an edge cactus, the second case is a vertex cactus. Cacti have a simpler structure than general graphs, as such it is easier to solve problems on them than on general graphs. But only on paper: cacti and cactus algorithms can be very annoying to implement if you don't think about what you are doing.

In the DFS tree of a cactus, for any span-edge, at most one back-edge passes over it. This puts back-edges into a one-to-one correspondence with the simple cycles:

  • each back-edge forms a simple cycle together with the span-edges it passes over;
  • there are no other simple cycles.

This captures most properties of cacti. Often, it is easy to implement cactus algorithms using this representation.

For example, consider this problem:

Problem 2. You are given a connected vertex cactus with $$$N$$$ vertices. Answer queries of the form "how many distinct simple paths exist from vertex $$$p$$$ to vertex $$$q$$$?".

This is named appropriately: 231E - Cactus. The official tutorial has a solution like this:

  1. "squeeze" every cycle to a single vertex, and paint these vertices black;
  2. root the tree;
  3. for each vertex $$$u$$$, calculate the number of black vertices on the path from the root to $$$u$$$; denote this $$$\mathrm{cnt}[u]$$$.
  4. the answer to query $$$(p, q)$$$ is either $$$2^{\mathrm{cnt}[p] + \mathrm{cnt}[q] - 2 \mathrm{cnt}[\mathrm{lca}(p, q)]}$$$ or $$$2^{\mathrm{cnt}[p] + \mathrm{cnt}[q] - 2 \mathrm{cnt}[\mathrm{lca}(p, q)] + 1}$$$ depending on the color of $$$\mathrm{lca}(p, q)$$$.

It isn't very hard to understand why this works. The more interesting part is implementation.

  1. "squeeze" every cycle to a single vertex, and paint these vertices black;

Great. How?

After some time, most people will probably find some way to implement this. But here is a simple way using the DFS tree:

  1. give each back-edge a unique index starting from $$$N + 1$$$;
  2. for each vertex $$$u$$$, calculate the index of the back-edge $$$u$$$ is under; call that $$$\mathrm{cycleId}[u]$$$; if $$$u$$$ isn't in a cycle then $$$\mathrm{cycleId}[u] = u$$$;
  3. form a new adjacency list where for each $$$u$$$, each instance of $$$u$$$ is replaced by $$$\mathrm{cycleId}[u]$$$.

Step 2 can be done like this, for example:

1  function visit(u):
2      for each vertex v among the children of u:
3          visit(v)
4
5      if there is a back-edge going up from u:
6          cycleId[u] = the index of that back-edge
7      else:
8          cycleId[u] = u
9          for each vertex v among the children of u:
10             if cycleId[v] != v and there is no back-edge going down from v:
11                 cycleId[u] = cycleId[v]

Removing edges to form a bipartite graph

Problem 3. Consider an undirected graph. Find all the edges whose removal will produce a bipartite graph.

This is 19E - Fairy. There is no official tutorial, but an unofficial tutorial mentions using complicated data structures like Link/cut tree. Using DFS tree, we can solve the problem without any advanced data structures.

In the original problem, the graph need not be connected. However, observe that:

  • if the graph has no non-bipartite connected components, then removing any edge will produce a bipartite graph;
  • if the graph has multiple non-bipartite connected components, then it is not possible to make the graph bipartite.

Thus, the only interesting case is when we have exactly one non-bipartite connected component. Obviously the edge we remove must also come from that component, so we can just pretend that this is the only component. From now on, we assume that we have a non-bipartite, connected graph.

Let's consider the DFS tree of the graph. We can paint the vertices black and white so that each span-edge connects a black vertex and a white vertex. Some back-edges, however, might connect two vertices of the same color. We will call these edges contradictory.

Observation 7. A back-edge $$$uv$$$ is a solution if and only if $$$uv$$$ is the only contradictory edge.

Why?

Observation 8. A span-edge $$$uv$$$ is a solution if and only if the set of back-edges passing over $$$uv$$$ is the set of contradictory edges.

Why?

Thus, we can solve the problem as such:

  1. find the DFS tree of the graph and paint it in two colors;
  2. if there is exactly one contradictory back-edge, add it to the answer;
  3. use dynamic programming to calculate, for each span-edge, how many contradictory and how many non-contradictory back-edges pass over it;
  4. if a span-edge has all contradictory and no non-contradictory edges passing over it, add it to the answer.
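A possible implementation of these four steps is sketched below. The interface and names are my own, not from any official solution; I assume a connected, non-bipartite graph without self-loops, with 0-based vertices. A back-edge from $$$u$$$ up to $$$v$$$ is detected exactly once, at the deeper endpoint, using entry times; the counts for Observation 8 are accumulated with the usual +1/-1 subtree-sum trick.

```cpp
#include <algorithm>
#include <cassert>
#include <functional>
#include <vector>
using namespace std;

// Returns the indices of edges whose removal leaves the graph bipartite.
vector<int> removableEdges(int n, const vector<pair<int, int>>& edges) {
    int m = edges.size();
    vector<vector<pair<int, int>>> adj(n);       // (neighbour, edge id)
    for (int i = 0; i < m; i++) {
        adj[edges[i].first].push_back({edges[i].second, i});
        adj[edges[i].second].push_back({edges[i].first, i});
    }
    vector<int> color(n, -1), tin(n, -1), pedge(n, -1);
    vector<int> contra(n, 0), good(n, 0);        // +1/-1 marks, later subtree sums
    int timer = 0, totalContra = 0, lastContra = -1;

    // Step 1: build the DFS tree, 2-coloring along span-edges.
    function<void(int)> dfs = [&](int u) {
        tin[u] = timer++;
        for (auto [v, id] : adj[u]) {
            if (id == pedge[u]) continue;
            if (color[v] == -1) {                // span-edge
                color[v] = color[u] ^ 1;
                pedge[v] = id;
                dfs(v);
                contra[u] += contra[v];          // bubble subtree sums up
                good[u] += good[v];
            } else if (tin[v] < tin[u]) {        // back-edge, handled once
                // Steps 2-3: classify it and mark the span-edges it passes over.
                if (color[u] == color[v]) { totalContra++; lastContra = id; }
                vector<int>& cnt = (color[u] == color[v]) ? contra : good;
                cnt[u]++;                        // covers the tree path u -> v
                cnt[v]--;
            }
        }
    };
    color[0] = 0;
    dfs(0);

    vector<int> answer;
    if (totalContra == 1) answer.push_back(lastContra);   // Observation 7
    for (int v = 1; v < n; v++)                           // Observation 8, step 4
        if (contra[v] == totalContra && good[v] == 0)
            answer.push_back(pedge[v]);
    return answer;
}
```

On a triangle, every edge is an answer (removing any edge of an odd cycle makes it bipartite); attaching a pendant edge to the triangle does not add a new answer, since removing the pendant edge leaves the odd cycle intact.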

Directed variant

What happens if we find the DFS tree of a directed graph? Let's run our depth-first traversal again:

1 function visit(u):
2     mark u as visited
3     for each vertex v among the neighbours of u:
4         if v is not visited:
5             mark the edge u->v
6             call visit(v)

To clarify, on line 3 we only mean such vertices $$$v$$$ that there is an edge from $$$u$$$ to $$$v$$$.

It might be the case that this traversal never reaches some vertices. For simplicity, we assume that:

  • we started the traversal at vertex 1, i.e. called visit(1);
  • it is possible to reach every vertex from the vertex 1.

Let's call the edges marked at step 5 span-edges.

Observation 9. Span-edges form a rooted spanning tree, directed away from the root.

Other edges fall in two categories:

  • edges that connect an ancestor with a descendant: back-edges;
  • edges that don't: cross-edges.

It's possible to further divide the back-edges into up-edges and down-edges based on their direction.

Observation 10. Cross-edges are always directed from the vertex that was explored later, to the vertex that was explored earlier.

Why?
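One way to sketch this classification in code is with entry/exit times: a vertex $$$a$$$ is an ancestor of $$$b$$$ exactly when $$$tin[a] \le tin[b]$$$ and $$$tout[b] \le tout[a]$$$. The struct below is my own illustration (names are assumptions, not from any particular solution); `classify` is meant for non-span edges, since span-edges would also satisfy the "down" test.

```cpp
#include <cassert>
#include <string>
#include <vector>
using namespace std;

// Classify the edges of a directed graph whose vertices are all
// reachable from vertex 0, using DFS entry/exit times.
struct Classifier {
    vector<vector<int>> adj;
    vector<int> tin, tout;
    int timer = 0;

    Classifier(int n, const vector<pair<int, int>>& edges)
        : adj(n), tin(n, -1), tout(n, -1) {
        for (auto [u, v] : edges) adj[u].push_back(v);
        dfs(0);
    }
    void dfs(int u) {
        tin[u] = timer++;
        for (int v : adj[u])
            if (tin[v] == -1) dfs(v);
        tout[u] = timer++;
    }
    bool isAncestor(int a, int b) {  // is a an ancestor of b (or b itself)?
        return tin[a] <= tin[b] && tout[b] <= tout[a];
    }
    string classify(int u, int v) {  // edge u -> v, assumed not a span-edge
        if (isAncestor(v, u)) return "up";    // back-edge to an ancestor
        if (isAncestor(u, v)) return "down";  // edge into u's own subtree
        return "cross";
    }
};
```

In line with Observation 10, any edge classified as "cross" will satisfy `tin[u] > tin[v]`.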

The directed variant of DFS tree is used to construct the dominator tree of a directed graph, but that is a beast on a whole other level that warrants its own tutorial.

Problems for practice

Solved in this tutorial:

Others:

If you know any other tasks that can be solved like this, share in the comment section! I might add them here, too.


By -is-this-fft-, history, 2 years ago, In English

Sometimes we see problems where a seemingly naive algorithm — for example simple brute force — is actually the correct solution. Specifically, I mean problems where, due to some clever observations, the complexity of the brute force algorithm is greatly reduced.

For example, in a recent contest we had 1168B - Good Triple. You can notice that any string of length at least 9 contains a "good triple", which means a brute force is sufficient here and runs in $$$O(n)$$$.
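For illustration, that observation can even be verified exhaustively; the sketch below is my own, not the contest solution. A "good triple" here means indices $$$x$$$, $$$x + d$$$, $$$x + 2d$$$ holding equal characters, and there are only $$$2^9$$$ binary strings of length 9 to check.

```cpp
#include <cassert>
#include <string>
using namespace std;

// Checks by plain brute force whether a binary string contains a "good
// triple": indices x, x+d, x+2d with equal characters.
bool hasGoodTriple(const string& s) {
    int n = s.size();
    for (int x = 0; x < n; x++)
        for (int d = 1; x + 2 * d < n; d++)
            if (s[x] == s[x + d] && s[x] == s[x + 2 * d])
                return true;
    return false;
}

// The key observation: every binary string of length 9 contains a good
// triple. Verified exhaustively over all 2^9 such strings.
bool observationHolds() {
    for (int mask = 0; mask < (1 << 9); mask++) {
        string s(9, '0');
        for (int i = 0; i < 9; i++)
            if (mask >> i & 1) s[i] = '1';
        if (!hasGoodTriple(s)) return false;
    }
    return true;
}
```

Length 9 is also tight: for instance, "00110011" (length 8) contains no good triple.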

Or 1028F - Make Symmetrical where you can notice that on any given circle, there are not too many lattice points.

Randomized input is also a good source of these. In 896C - Willem, Chtholly and Seniorious you can observe that after a bit of time, most adjacent elements of the array are equal and write something seemingly naive based on that.

What are some other examples of problems where a stupid brute force is actually the correct solution?


By -is-this-fft-, history, 2 years ago, In English

The Baltic Olympiad in Informatics will be held in Tartu, Estonia from April 27 to May 2. Both days of the contest will feature an online mirror contest. We invite you to take part.

Both online mirrors will start one hour after the corresponding onsite contests. Both contests last 5 hours.

Solutions will be accepted in the following programming languages: C++, Java and Python. We have separate time limits for C++/Java and Python. Tasks have IOI-style batched partial scoring. Problem statements will be available in English and a multitude of languages spoken in the participating countries.

Some links:

Let's discuss problems here after the contests!


By -is-this-fft-, history, 3 years ago, In English

Can someone else staying at this hotel please tell me how to turn off that light:

This is urgent, I need some sleep before the finals.


By -is-this-fft-, history, 3 years ago, In English

The Open Competition is an annual week-long competition for Estonian high school students which serves as the first contest of the season. This year, we are welcoming international participants as well. The contest started on 15 October, 10:00 EEST and lasts a week, ending at 22 October, 00:00 EEST.

The contest consists of 7 problems with scoring distribution 20-30-40-50-60-60-60. The problems will have partial scoring. The first five problems are "classical" problems of increasing difficulty, while the last two slots are reserved for output-only, interactive, out of scope, approximation, optimization — just in general "weird" problems. The problemset should be interesting for most Div. 2 participants.

Problem statements will be available in Estonian, English and Russian. However, to register, you will need to navigate through a bit of Estonian.

Please note that you might not appear on the scoreboard immediately after registering.


By -is-this-fft-, history, 4 years ago, In English

#define rep(i, l, r) for ((i) = (l); (i) < (r); (i)++)

Things like that are pretty common. Why do so many of you need to do things like this? What is wrong with a good old for-loop? Is it that slow to write one explicitly?


By -is-this-fft-, history, 4 years ago, In English

I tried to add a problem prepared in Polygon to a mashup contest. Instead of the problem however, the statement shows this wonderful piece of art:

What is going on?


By -is-this-fft-, history, 4 years ago, In English

To my (untrained) eye, it seems that Codeforces servers are unable to handle the traffic that comes with a regular Codeforces round. If this is indeed so, a possible solution to alleviate the server load would be to simply limit the number of participants in any given contest: only allowing a fixed number of people to register for the contest — first come, first served. A limited-access contest is better than "no" contest at all.

Should Codeforces do this? Discuss.

(of course if the issues originate elsewhere then don't mind this)


By -is-this-fft-, history, 4 years ago, In English

Title says it all. What are some of your favourite problems in competitive programming?


By -is-this-fft-, history, 4 years ago, In English

I was hoping to see discussion on the problems, but the announcement seems to have disappeared.

EDIT: Back up!


By -is-this-fft-, history, 6 years ago, In English

Every now and then (not that often, but still often enough) I see things like this while hacking:

#define MIN(a,b) (a) < (b) ? (a) : (b)
#define MAX(a,b) (a) > (b) ? (a) : (b)
#define ABS(a) (a) > 0 ? (a) : -(a)

While these are not as common as other (dubious) preprocessor macros, I still see them fairly often. There are, in my opinion, several downsides to using these: for example, if an argument is a function call, one of the calls gets evaluated twice.
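To make the double-evaluation problem concrete, here is a small demonstration (the helper names are mine; note that I have also added the outer parentheses the quoted macros lack, which is a separate precedence hazard):

```cpp
#include <cassert>

// The macro pattern from above: each argument appears twice in the expansion.
#define MIN(a, b) ((a) < (b) ? (a) : (b))

// A helper with a visible side effect so we can count evaluations.
int evalCount = 0;
int counted(int x) { ++evalCount; return x; }

// MIN evaluates the winning argument twice: once in the comparison and
// once more to produce the result, so two arguments cost three calls.
int minViaMacro(int a, int b) {
    evalCount = 0;
    return MIN(counted(a), counted(b));
}
```

With `std::min(counted(a), counted(b))`, each argument would be evaluated exactly once.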

So I want to ask, is there any advantage to using these over std::min(a, b) and others?
