By adamant, history, 15 hours ago,

Hi everyone!

Let $R$ be a ring, $d_0, d_1, d_2, \dots \in R$ and $e_0, e_1, e_2, \dots \in R$ be linear recurrence sequences, such that

$\begin{gather} d_m = \sum\limits_{i=1}^k a_i d_{m-i}\text{ for }m \geq k, \\ e_m = \sum\limits_{i=1}^l b_i e_{m-i}\text{ for }m \geq l. \end{gather}$

In some applications, the following two sequences arise:

$\begin{align} f_m &= d_m e_m && \text{(Hadamard product)}, \\ f_m &= \sum\limits_{i+j=m} \binom{m}{i} d_i e_j && \text{(binomial convolution)}. \end{align}$

Today I'd like to write about a framework that allows one to prove that both sequences defined above are also linear recurrences. It also allows computing their characteristic polynomials in $O(kl \log kl)$, which is optimal, as their degrees are $O(kl)$ in both cases.

• +93

By adamant, history, 4 days ago,

Hi everyone!

You probably know that the primitive root modulo $m$ exists if and only if one of the following is true:

• $m=2$ or $m=4$;
• $m = p^k$ is a power of an odd prime number $p$;
• $m = 2p^k$ is twice a power of an odd prime number $p$.

Today I'd like to write about an interesting rationale for it through $p$-adic numbers.

Hopefully, this will allow us to develop a deeper understanding of the multiplicative group modulo $p^k$.

### Tl;dr.

For a prime number $p>2$ and $r \equiv 0 \pmod p$ one can uniquely define

$\exp r = \sum\limits_{k=0}^\infty \frac{r^k}{k!} \pmod{p^n}.$

With this notion, if $g$ is a primitive root modulo $p$, lifted to have order $p-1$ modulo $p^n$ as well, then $g \exp p$ is a primitive root modulo $p^n$.

Finally, for $p=2$ and $n>2$ the multiplicative group is generated by two numbers, namely $-1$ and $\exp 4$.
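To make the tl;dr concrete, here is a small Python sketch of my own (not from the post itself) that evaluates the truncated sum for $\exp r$ modulo $p^n$ and checks that $\exp p$ indeed has multiplicative order $p^{n-1}$:

```python
from math import factorial

def p_adic_exp(r, p, n):
    # exp(r) = sum_{k>=0} r^k / k!  (mod p^n), for an odd prime p with p | r.
    # Each term is a p-adic integer, since v_p(r^k) >= k > v_p(k!).
    assert p % 2 == 1 and r % p == 0
    mod = p ** n
    total = 0
    for k in range(2 * n + p):          # safely past the point where terms vanish mod p^n
        num, den = r ** k, factorial(k)
        while den % p == 0:             # strip common powers of p from r^k / k!
            num //= p
            den //= p
        if num % mod:
            total = (total + num % mod * pow(den, -1, mod)) % mod
    return total

# exp(5) mod 5^3: only k = 0, 1, 2 contribute, giving 1 + 5 + 75 = 81
e = p_adic_exp(5, 5, 3)
assert e == 81
# exp(p) has multiplicative order p^{n-1} modulo p^n
assert pow(e, 25, 125) == 1 and pow(e, 5, 125) != 1
```

The loop bound `2 * n + p` is a crude but safe cutoff: the $p$-adic valuation of $r^k/k!$ is at least $k - \frac{k}{p-1}$, which exceeds $n$ well before that.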

• +140

By adamant, history, 6 weeks ago,

Hi everyone!

Today I want to describe an efficient solution of the following problem:

Composition of Formal Power Series. Given $A(x)$ and $B(x)$ of degree $n-1$ such that $B(0) = 0$, compute $A(B(x)) \pmod{x^n}$.

The condition $B(0)=0$ doesn't decrease the generality of the problem, as $A(B(x)) = P(Q(x))$, where $P(x) = A(x+B(0))$ and $Q(x) = B(x) - B(0)$. Hence you could replace $A(x)$ and $B(x)$ with $P(x)$ and $Q(x)$ when the condition is not satisfied.
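As a sanity check of this reduction, here is a naive Horner-style composition of my own sketching (cubic time, nothing like the fast algorithm from the article):

```python
from math import comb

def compose(A, B, n):
    # A(B(x)) mod x^n by Horner's rule: (...(a_{n-1} B + a_{n-2}) B + ...) + a_0
    res = [0] * n
    for a in reversed(A):
        new = [0] * n
        for i in range(n):
            if res[i]:
                for j in range(min(n - i, len(B))):
                    new[i + j] += res[i] * B[j]
        new[0] += a
        res = new
    return res

def taylor_shift(A, c):
    # coefficients of A(x + c) via naive binomial expansion
    res = [0] * len(A)
    for i, a in enumerate(A):
        for j in range(i + 1):
            res[j] += a * comb(i, j) * c ** (i - j)
    return res

A, B, n = [1, 2, 3], [2, 1], 3        # A = 1 + 2x + 3x^2, B = 2 + x
P = taylor_shift(A, B[0])             # P(x) = A(x + B(0))
Q = [0] + B[1:]                       # Q(x) = B(x) - B(0), so Q(0) = 0
assert compose(A, B, n) == compose(P, Q, n) == [17, 14, 3]
```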

The solutions I'm going to describe were published in Joris van der Hoeven's article about operations on formal power series. The article also describes many other common algorithms on polynomials. It is worth noting that Joris van der Hoeven and David Harvey are the inventors of the breakthrough $O(n \log n)$ integer multiplication algorithm in the multitape Turing machine model.

• +113

By adamant, history, 6 weeks ago,

Hi everyone!

Today I'd like to write a bit about the amazing things you can get out of a power series by plugging roots of unity into its argument. It will be another short story without any particular application in competitive programming (at least I don't know of any yet, but there could be some). But I find the facts below amazing, so I wanted to share them.

You're expected to know some basic stuff about the discrete Fourier transform and a bit of linear algebra to understand the article.

• +72

By adamant, history, 6 weeks ago,

Hi everyone!

Today I'd like to finally talk about an algorithm to solve the following tasks in $O(n \log^2 n)$:

• Compute the greatest common divisor of two polynomials $P(x)$ and $Q(x)$;
• Given $f(x)$ and $h(x)$ find the multiplicative inverse of $f(x)$ modulo $h(x)$;
• Given $F_0,F_1, \dots, F_m$, recover the minimum linear recurrence $F_n = a_1 F_{n-1} + \dots + a_d F_{n-d}$;
• Given $P(x)$ and $Q(x)$, find $A(x)$ and $B(x)$ such that $P(x) A(x) + Q(x) B(x) = \gcd(P, Q)$;
• Given $P(x)=(x-\lambda_1)\dots(x-\lambda_n)$ and $Q(x)=(x-\mu_1)\dots(x-\mu_m)$ compute their resultant.

More specifically, this allows us to solve the following problems in $O(n \log^2 n)$:

Library Checker — Find Linear Recurrence. You're given $F_0, \dots, F_{m}$. Find $a_1, \dots, a_d$ with minimum $d$ such that

$F_n = \sum\limits_{k=1}^d a_k F_{n-k}.$

Library Checker — Inv of Polynomials. You're given $f(x)$ and $h(x)$. Compute $f^{-1}(x)$ modulo $h(x)$.

All tasks here are connected with the extended Euclidean algorithm, and the procedure we're going to talk about is a way to compute it quickly. I recommend reading the article on recovering the minimum linear recurrence first, as it introduces some useful results and concepts. It is also highly recommended to familiarize yourself with the concept of continued fractions.

• +179

By adamant, history, 7 weeks ago,

Hi everyone!

The task of finding the minimum linear recurrence for a given starting sequence is typically solved with the Berlekamp–Massey algorithm. In this article I would like to highlight another possible approach, using the extended Euclidean algorithm.

Great thanks to nor for the proofreading and all the useful comments to make the article more accessible and rigorous.

### Tl;dr.

The procedure below is essentially a formalization of the extended Euclidean algorithm done on $F(x)$ and $x^{m+1}$.

If you need to find the minimum linear recurrence for a given sequence $F_0, F_1, \dots, F_m$, do the following:

Let $F(x) = F_m + F_{m-1} x + \dots + F_0 x^m$ be the generating function of the reversed $F$.

Compute the sequence of remainders $r_{-2}, r_{-1}, r_0, \dots, r_k$ such that $r_{-2} = F(x)$, $r_{-1}=x^{m+1}$ and

$r_{k} = r_{k-2} \mod r_{k-1}.$

Let $a_k(x)$ be a polynomial such that $r_k = r_{k-2} - a_k r_{k-1}$.

Compute the auxiliary sequence $q_{-2}, q_{-1}, q_0, \dots, q_k$ such that $q_{-2} = 1$, $q_{-1} = 0$ and

$q_{k} = q_{k-2} + a_k q_{k-1}.$

Pick $k$ to be the first index such that $\deg r_k < \deg q_k$. Let $q_k(x) = a_0 x^d - \sum\limits_{i=1}^d a_i x^{d-i}$, then it also holds that

$F_n = \sum\limits_{i=1}^d \frac{a_i}{a_0}F_{n-i}$

for any $n \geq d$, and $d$ is the minimum possible. Thus, $q_k(x)$ divided by $a_0$ is the characteristic polynomial of the minimum linear recurrence for $F$.

More generally, one can say for such $k$ that

$F(x) \equiv \frac{(-1)^{k}r_k(x)}{q_k(x)} \pmod{x^{m+1}}.$
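The procedure above can be sketched directly in Python with exact rational arithmetic (my own illustration; a real implementation would use half-GCD and modular arithmetic, and this sketch assumes a nonzero input sequence):

```python
from fractions import Fraction

def deg(p):
    # degree of a coefficient list (lowest power first); -1 for the zero polynomial
    d = len(p) - 1
    while d >= 0 and p[d] == 0:
        d -= 1
    return d

def polymul(a, b):
    res = [Fraction(0)] * (len(a) + len(b) - 1 or 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            res[i + j] += x * y
    return res

def polyadd(a, b):
    res = [Fraction(0)] * max(len(a), len(b))
    for i, x in enumerate(a):
        res[i] += x
    for i, x in enumerate(b):
        res[i] += x
    return res

def polydivmod(a, b):
    a, db = a[:], deg(b)
    q = [Fraction(0)] * max(deg(a) - db + 1, 1)
    while deg(a) >= db:
        sh = deg(a) - db
        c = a[deg(a)] / b[db]
        q[sh] = c
        for i in range(db + 1):
            a[i + sh] -= c * b[i]
    return q, a

def min_recurrence(F):
    m = len(F) - 1
    r_pp = [Fraction(f) for f in reversed(F)]        # r_{-2} = F(x), reversed sequence
    r_p = [Fraction(0)] * (m + 1) + [Fraction(1)]    # r_{-1} = x^{m+1}
    q_pp, q_p = [Fraction(1)], [Fraction(0)]         # q_{-2} = 1, q_{-1} = 0
    while True:
        a, r = polydivmod(r_pp, r_p)                 # r_k = r_{k-2} - a_k r_{k-1}
        q = polyadd(q_pp, polymul(a, q_p))           # q_k = q_{k-2} + a_k q_{k-1}
        if deg(r) < deg(q):                          # first k with deg r_k < deg q_k
            d, lead = deg(q), q[deg(q)]
            return [-q[d - i] / lead for i in range(1, d + 1)]
        r_pp, r_p, q_pp, q_p = r_p, r, q_p, q

assert min_recurrence([0, 1, 1, 2, 3, 5, 8, 13]) == [1, 1]   # Fibonacci
assert min_recurrence([1, 2, 4, 8, 16, 32]) == [2]           # geometric
```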

• +175

By adamant, history, 7 weeks ago,

Hi everyone!

Today I'd like to write about Fibonacci numbers. Ever heard of them? The Fibonacci sequence is defined by $F_n = F_{n-1} + F_{n-2}$.

It got me interested: what would the recurrence look like if it had the form $F_n = \alpha F_{n-p} + \beta F_{n-q}$ for $p \neq q$?

Timus — Fibonacci Sequence. The sequence $F$ satisfies the condition $F_n = F_{n-1} + F_{n-2}$. You're given $F_i$ and $F_j$, compute $F_n$.

Using the functional $L(x^n) = F_n$, we can say that we essentially need to solve the following equation:

$1 \equiv \alpha x^{-p} + \beta x^{-q} \pmod{x^2-x-1}.$

To get the actual solution from it, we should first understand what exactly is the remainder of $x^n$ modulo $x^2-x-1$. The remainder of $P(x)$ modulo $(x-a)(x-b)$ is generally determined by $P(a)$ and $P(b)$:

$P(x) \equiv r \mod(x-a)(x-b) \iff \begin{cases}P(a) = r,\\ P(b)=r.\end{cases}$

Therefore, our equation above is equivalent to the following:

$\begin{cases} \alpha a^{-p} + \beta a^{-q} = 1,\\ \alpha b^{-p} + \beta b^{-q} = 1. \end{cases}$

The determinant of this system of equations is $a^{-p}b^{-q} - a^{-q}b^{-p}$. Solving the system, we get the solution

$\begin{matrix} \alpha = \dfrac{b^{-q}-a^{-q}}{a^{-p}b^{-q} - a^{-q}b^{-p}}, & \beta = \dfrac{a^{-p}-b^{-p}}{a^{-p}b^{-q} - a^{-q}b^{-p}}. \end{matrix}$

Multiplying numerators and denominators by $a^q b^q$ for $\alpha$ and $a^p b^p$ for $\beta$, we get a nicer form:

$\boxed{\begin{matrix} \alpha = \dfrac{a^q-b^q}{a^{q-p} - b^{q-p}}, & \beta = \dfrac{a^p-b^p}{a^{p-q} - b^{p-q}}. \end{matrix}}$

This is a solution for a second degree recurrence with the characteristic polynomial $(x-a)(x-b)$.

Note that for Fibonacci numbers in particular, due to Binet's formula, it holds that

$F_n = \frac{a^n-b^n}{a-b}.$

Substituting it back into $\alpha$ and $\beta$, we get

$\boxed{F_n = \frac{F_q}{F_{q-p}} F_{n-p} + \frac{F_p}{F_{p-q}} F_{n-q}}$

which is a neat symmetric formula.
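A quick numerical sanity check of this identity (my own sketch), using $F_{-n} = (-1)^{n+1} F_n$ to extend Fibonacci numbers to negative indices:

```python
from fractions import Fraction

def fib(n):
    # Fibonacci numbers, extended to negative n via F_{-n} = (-1)^{n+1} F_n
    if n < 0:
        return (-1) ** (-n + 1) * fib(-n)
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def holds(n, p, q):
    # F_n = (F_q / F_{q-p}) F_{n-p} + (F_p / F_{p-q}) F_{n-q}
    alpha = Fraction(fib(q), fib(q - p))
    beta = Fraction(fib(p), fib(p - q))
    return alpha * fib(n - p) + beta * fib(n - q) == fib(n)

# e.g. p = 2, q = 4 gives F_n = 3 F_{n-2} - F_{n-4}
assert all(holds(n, p, q)
           for n in range(-5, 15)
           for p in range(-4, 8)
           for q in range(-4, 8) if p != q)
```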

P. S. you can also derive it from Fibonacci matrix representation, but this way is much more fun, right?

UPD: I further simplified the explanation, should be much easier to follow it now.

Note that the generic solution only covers the case of $(x-a)(x-b)$ when $a \neq b$. When the characteristic polynomial is $(x-a)^2$, the remainder of $P(x)$ modulo $(x-a)^2$ is determined by $P(a)$ and $P'(a)$:

$P(x) \equiv r \mod{(x-a)^2} \iff \begin{cases}P(a)=r,\\P'(a)=0.\end{cases}$

Therefore, we have a system of equations

$\begin{cases} \alpha a^{-p} + \beta a^{-q} = 1,\\ \alpha p a^{-p-1} + \beta q a^{-q-1} = 0. \end{cases}$

For this system, the determinant is $\frac{q-p}{a^{p+q+1}}$ and the solution is

$\boxed{\begin{matrix} \alpha = \dfrac{qa^p}{q-p},&\beta = \dfrac{pa^q}{p-q} \end{matrix}}$

Another interesting way to get this solution is via L'Hôpital's rule:

$\lim\limits_{x \to 0}\frac{a^q-(a+x)^q}{a^{q-p}-(a+x)^{q-p}} = \lim\limits_{x \to 0}\frac{q(a+x)^{q-1}}{(q-p)(a+x)^{q-p-1}} = \frac{qa^p}{q-p}.$

Let's consider the more generic case of the characteristic polynomial $(x-\lambda_1)(x-\lambda_2)\dots (x-\lambda_k)$.

102129D - Basis Change. The sequence $F$ satisfies $F_n=\sum\limits_{i=1}^k a_i F_{n-i}$. Given $b_1,\dots,b_k$, find $c_1,\dots,c_k$ such that $F_n = \sum\limits_{i=1}^k c_i F_{n-b_i}$.

We need to find $\alpha_1, \dots, \alpha_k$ such that $F_n = \alpha_1 F_{n-c_1} + \dots + \alpha_k F_{n-c_k}$. It boils down to the system of equations

$\begin{cases} \alpha_1 \lambda_1^{-c_1}+\dots+\alpha_k \lambda_1^{-c_k} = 1,\\ \alpha_1 \lambda_2^{-c_1}+\dots+\alpha_k \lambda_2^{-c_k} = 1,\\ \dots\\ \alpha_1 \lambda_k^{-c_1}+\dots+\alpha_k \lambda_k^{-c_k} = 1.\\ \end{cases}$

This system of equations has the following matrix:

$A=\begin{bmatrix} \lambda_1^{-c_1} & \lambda_1^{-c_2} & \dots & \lambda_1^{-c_k} \\ \lambda_2^{-c_1} & \lambda_2^{-c_2} & \dots & \lambda_2^{-c_k} \\ \vdots & \vdots & \ddots & \vdots \\ \lambda_k^{-c_1} & \lambda_k^{-c_2} & \dots & \lambda_k^{-c_k} \end{bmatrix}$

Matrices of this kind are called alternant matrices. Let's denote its determinant as $D_{\lambda_1, \dots, \lambda_k}(c_1, \dots, c_k)$, then the solution is

$\alpha_i = \dfrac{D_{\lambda_1, \dots, \lambda_k}(c_1, \dots, c_{i-1}, \color{red}{0}, c_{i+1}, \dots, c_k)}{D_{\lambda_1, \dots, \lambda_k}(c_1, \dots, c_{i-1}, \color{blue}{c_i}, c_{i+1}, \dots, c_k)}.$

Unfortunately, in practice it makes more sense to find $\alpha_i$ with Gaussian elimination rather than with these direct formulas.
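For instance, for the Fibonacci roots one can solve the system numerically (a floating-point sketch of my own, not part of the original solution):

```python
import numpy as np

# roots of the Fibonacci characteristic polynomial x^2 - x - 1
lam = np.array([(1 + 5 ** 0.5) / 2, (1 - 5 ** 0.5) / 2])
c = [2, 4]                                # target shifts: F_n = α1 F_{n-2} + α2 F_{n-4}
A = np.array([[l ** -cj for cj in c] for l in lam])
alpha = np.linalg.solve(A, np.ones(len(c)))
# matches the closed form above: α1 = F_4/F_2 = 3, α2 = F_2/F_{-2} = -1
assert np.allclose(alpha, [3, -1])
```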

• +172

By adamant, history, 2 months ago,

Hi everyone!

It's been quite some time since I wrote the two previous articles in the series:

Part 1: Introduction
Part 2: Properties and interpretation
Part 3: In competitive programming

This time I finally decided to publish something on how one can actually use continued fractions in competitive programming problems.

A few months ago, I joined CP-Algorithms as a collaborator. The website also underwent a major design update recently, so I decided it would be great to use this opportunity and publish my new article there, so here it is:

CP-Algorithms — Continued fractions

It took me quite a while to write, and I made sure not only to describe common competitive programming challenges related to continued fractions, but also to describe the whole concept from scratch. That being said, the article is supposed to be self-contained.

Main covered topics:

1. Notion of continued fractions, convergents, semiconvergents, complete quotients.
2. Recurrence to compute convergents, notion of continuant.
3. Connection of continued fractions with the Stern-Brocot tree and the Calkin-Wilf tree.
4. Convergence rate with continued fractions.
5. Linear fractional transformations, quadratic irrationalities.
6. Geometric interpretation of continued fractions and convergents.

I really hope that I managed to simplify the general storytelling compared to the previous two articles.

Here are the major problems that are dealt with in the article:

• Given $a_1, \dots, a_n$, quickly compute $[a_l; a_{l+1}, \dots, a_r]$ in queries.
• Which of the numbers $A=[a_0; a_1, \dots, a_n]$ and $B=[b_0; b_1, \dots, b_m]$ is smaller? How to emulate $A-\varepsilon$ and $A+\varepsilon$?
• Given $A=[a_0; a_1, \dots, a_n]$ and $B=[b_0; b_1, \dots, b_m]$, compute the continued fraction representations of $A+B$ and $A \cdot B$.
• Given $\frac{0}{1} \leq \frac{p_0}{q_0} < \frac{p_1}{q_1} \leq \frac{1}{0}$, find $\frac{p}{q}$ such that $(q,p)$ is lexicographically smallest and $\frac{p_0}{q_0} < \frac{p}{q} < \frac{p_1}{q_1}$.
• Given $x$ and $k$, $x$ is not a perfect square. Let $\sqrt x = [a_0; a_1, \dots]$, find $\frac{p_k}{q_k}=[a_0; a_1, \dots, a_k]$ for $0 \leq k \leq 10^9$.
• Given $r$ and $m$, find the minimum value of $q r \pmod m$ on $1 \leq q \leq n$.
• Given $r$ and $m$, find $\frac{p}{q}$ such that $p, q \leq \sqrt{m}$ and $p q^{-1} \equiv r \pmod m$.
• Given $p$, $q$ and $b$, construct the convex hull of lattice points below the line $y = \frac{px+b}{q}$ on $0 \leq x \leq n$.
• Given $A$, $B$ and $C$, find the maximum value of $Ax+By$ on $x, y \geq 0$ and $Ax + By \leq C$.
• Given $p$, $q$ and $b$, compute the following sum:
$\sum\limits_{x=1}^n \lfloor \frac{px+b}{q} \rfloor.$

So far, here is the list of problems that are explained in the article:

And an additional list of practice problems where continued fractions could be useful:

There are likely much more problems where continued fractions are used, please mention them in the comments if you know any!

Finally, since CP-Algorithms is supposed to be a wiki-like project (that is, to grow and get better as time goes by), please feel free to comment on any issues that you might find while reading the article, ask questions or suggest any improvements. You can do so in the comments below or in the issues section of the CP-Algorithms GitHub repo. You can also suggest changes via pull request functionality.

• +159

By adamant, history, 3 months ago,

Hi everyone!

There are already dozens of blogs on linear recurrences, why not make another one? In this article, my main goal is to highlight the possible approaches to solving linear recurrence relations, their applications and implications. I will try to derive the results with different approaches independently from each other, but will also highlight similarities between them after they're derived.

### Definitions

Def. 1. An order $d$ homogeneous linear recurrence with constant coefficients (or linear recurrence) is an equation of the form

$F_n = \sum\limits_{k=1}^d a_k F_{n-k}.$

Def. 2. In the equation above, the coefficients $a_1, \dots, a_d \in R$ are called the recurrence parameters,

Def. 3. and a sequence $F_0, F_1, \dots \in R$ is called an order $d$ linear recurrence sequence.

The most common task with linear recurrences is, given the initial values $F_0, F_1, \dots, F_{d-1}$, to find the value of $F_n$.
Example 1. The famous Fibonacci sequence $F_n = F_{n-1} + F_{n-2}$ is an order 2 linear recurrence sequence.

Example 2. Let $F_n = n^2$. One can prove that $F_n = 3 F_{n-1} - 3 F_{n-2} + F_{n-3}$.

Example 3. Moreover, for $F_n = P(n)$, where $P(n)$ is a degree $d$ polynomial, it holds that

$F_n = \sum\limits_{k=1}^{d+1} (-1)^{k+1}\binom{d+1}{k} F_{n-k}.$

If this fact is not obvious to you, do not worry as it will be explained further below.
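The claim from Examples 2 and 3 is easy to check numerically; here's a small sketch of my own:

```python
from math import comb

def satisfies(P, d, N=30):
    # check F_n = sum_{k=1}^{d+1} (-1)^{k+1} C(d+1, k) F_{n-k} for F_n = P(n)
    F = [P(n) for n in range(N)]
    return all(F[n] == sum((-1) ** (k + 1) * comb(d + 1, k) * F[n - k]
                           for k in range(1, d + 2))
               for n in range(d + 1, N))

assert satisfies(lambda n: n ** 2, 2)             # F_n = 3F_{n-1} - 3F_{n-2} + F_{n-3}
assert satisfies(lambda n: n ** 3 - 5 * n + 1, 3)
```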

Finally, before proceeding to next sections, we'll need one more definition.
Def. 4. A polynomial
$A(x) = x^d - \sum\limits_{k=1}^d a_k x^{d-k}$

is called the characteristic polynomial of the linear recurrence defined by $a_1, \dots, a_d$.

Example 4. For Fibonacci sequence, the characteristic polynomial is $A(x) = x^2-x-1$.

• +266

By adamant, history, 3 months ago,

Hi everyone!

Today I'd like to write about some polynomials which are invariant under rotation and relabeling in Euclidean spaces. Model problems work with points in 3D space; however, both ideas, to some extent, might be generalized to a higher number of dimensions. They might be useful for solving some geometric problems under the right conditions. I used some ideas around them in two problems that I set earlier.

#### Congruence check in random points

You're given two sets of lines in 3D space. The second set of lines was obtained from the first one by rotation and relabeling. You're guaranteed that the first set of lines was generated uniformly at random on the sphere; find the corresponding label permutation.

Actual problem: 102354F - Cosmic Crossroads.

##### Solution

Let $P_4(x, y, z) = \sum\limits_{l=1}^n \left((x-x_l)^2+(y-y_l)^2 + (z-z_l)^2\right)^2$. It is a fourth degree polynomial, whose geometric meaning is the sum of distances from $(x, y, z)$ to all points in the set, each distance raised to the power $4$. Distance is preserved under rotation, hence this expression is invariant under the rotation transform. On the other hand, it may be rewritten as

$P_4(x, y, z) = \sum\limits_{i=0}^4 \sum\limits_{j=0}^4 \sum\limits_{k=0}^4 A_{ijk} x^i y^j z^k,$

where $A_{ijk}$ is obtained as the sum over all points $(x_l,y_l,z_l)$ from the initial set. To find the permutation, it is enough to calculate $P_4$ for all points in both sets and then match points with the same index after they were sorted by the corresponding $P_4$ value.

It is tempting to try the same trick with $P_2(x, y, z)$, but it is the same for all the points in the set for this specific problem:

\begin{align} P_2(x, y, z) =& \sum\limits_{l=1}^n [(x-x_l)^2+(y-y_l)^2+(z-z_l)^2]\\ =& n \cdot (x^2+y^2+z^2) - 2x \sum\limits_{l=1}^n x_l - 2y \sum\limits_{l=1}^n y_l - 2z \sum\limits_{l=1}^n z_l + \sum\limits_{l=1}^n [x_l^2+y_l^2+z_l^2] \\ =& n \left[\left(x-\bar x\right)^2 + \left(y-\bar y\right)^2 + \left(z-\bar z\right)^2\right] - n(\bar x^2+\bar y^2+\bar z^2) + \sum\limits_{l=1}^n (x_l^2 + y_l^2 + z_l^2), \end{align}

where $\bar x$, $\bar y$ and $\bar z$ are the mean values of $x_l$, $y_l$ and $z_l$ correspondingly. As you can see, non-constant part here is simply the squared distance from $(x, y, z)$ to the center of mass of the points in the set. Thus, $P_2(x, y, z)$ is the same for all points having the same distance from the center of mass, so it is of no use in 102354F - Cosmic Crossroads, as all the points have this distance equal to $1$ in the input.
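Here is a small numerical sketch of my own of the matching procedure, using random points rather than the exact setting of the problem:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
pts = rng.normal(size=(n, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)     # random points on the unit sphere

Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))          # random orthogonal matrix
if np.linalg.det(Q) < 0:
    Q[:, 0] = -Q[:, 0]                                # force a proper rotation
perm = rng.permutation(n)
rotated = pts[perm] @ Q.T                             # rotated and relabeled copy

def p4(point, cloud):
    # sum of 4th powers of distances from `point` to every point of the cloud
    return np.sum(np.sum((cloud - point) ** 2, axis=1) ** 2)

v1 = np.array([p4(p, pts) for p in pts])
v2 = np.array([p4(p, rotated) for p in rotated])
# P4 is rotation-invariant, so matching equal values recovers the permutation
recovered = np.array([int(np.argmin(np.abs(v1 - x))) for x in v2])
assert np.array_equal(recovered, perm)
```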

Burunduk1 taught me this trick after the Petrozavodsk camp round which featured the model problem.

#### Sum of squared distances to the axis passing through the origin

You're given a set of points $r_k=(x_k, y_k, z_k)$. The torque needed to rotate the system of points around the axis $r=(x, y, z)$ is proportional to the sum of squared distances to the axis across all points. You need to find the minimum number of points that have to be added to the set, so that the torque needed to rotate it around any axis passing through the origin is exactly the same.

Actual problem: Hackerrank — The Axis of Awesome

##### Solution

The squared distance from the point $r_k$ to the axis $r$ is expressed as

$\dfrac{|r_k \times r|^2}{r \cdot r} = \dfrac{(y_k z - z_k y)^2+(x_k z - z_k x)^2+(x_k y - y_k x)^2}{x^2+y^2+z^2}.$

The numerator here is a quadratic form, hence can be rewritten as

$|r_k \times r|^2 = \begin{pmatrix}x & y & z\end{pmatrix} \begin{pmatrix} y_k^2 + z_k^2 & -x_k y_k & -x_k z_k \\ -x_k y_k & x_k^2 + z_k^2 & -y_k z_k \\ -x_k z_k & -y_k z_k & x_k^2 + y_k^2 \end{pmatrix} \begin{pmatrix}x \\ y \\ z\end{pmatrix}.$

Correspondingly, the sum of squared distances for $k=1..n$ is defined by the quadratic form

$I = \sum\limits_{k=1}^n\begin{pmatrix} y_k^2 + z_k^2 & -x_k y_k & -x_k z_k \\ -x_k y_k & x_k^2 + z_k^2 & -y_k z_k \\ -x_k z_k & -y_k z_k & x_k^2 + y_k^2 \end{pmatrix},$

known in analytic mechanics as the inertia tensor. Like any other tensor, its coordinate form transforms predictably under rotation.

Inertia tensor is a positive semidefinite quadratic form, hence there is an orthonormal basis in which it is diagonal:

$I = \begin{pmatrix}I_1 & 0 & 0 \\ 0 & I_2 & 0 \\ 0 & 0 & I_3\end{pmatrix}.$

Here $I_1$, $I_2$ and $I_3$ are the eigenvalues of $I$, also called the principal moments of inertia (corresponding eigenvectors are called the principal axes of inertia). From this representation we deduce that the condition from the statement is held if and only if $I_1 = I_2 = I_3$.

Adding a single point on a principal axis would only increase principal moments on the other axes. For example, adding $(x, 0, 0)$ would increase $I_2$ and $I_3$ by $x^2$. Knowing this, one can prove that the answer to the problem is exactly $3-m$ where $m$ is the multiplicity of the smallest eigenvalue of $I$.
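The solution can be sketched as follows (my own illustration, with an arbitrary tolerance for detecting eigenvalue multiplicity):

```python
import numpy as np

def inertia_tensor(pts):
    # I = sum_k (|r_k|^2 * Id - r_k r_k^T), which expands to the matrix above
    I = np.zeros((3, 3))
    for r in pts:
        I += np.dot(r, r) * np.eye(3) - np.outer(r, r)
    return I

def points_to_add(pts, eps=1e-9):
    w = np.linalg.eigvalsh(inertia_tensor(pts))   # sorted eigenvalues I1 <= I2 <= I3
    mult = int(np.sum(w - w[0] < eps))            # multiplicity of the smallest
    return 3 - mult

cube = [(x, y, z) for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)]
assert points_to_add(cube) == 0                   # already isotropic: I = 16 * Id
assert points_to_add([(1.0, 0.0, 0.0)]) == 2      # single point on an axis: I = diag(0, 1, 1)
```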

##### Applying it to the first problem

Now, another interesting observation about inertia tensor is that both principal inertia moments and principal inertia axes would be preserved under rotation. It means that in the first problem, another possible way to find the corresponding rotation and the permutation of points is to find principal inertia axes for both sets of points and then find a rotation that matches corresponding principal inertia axes in the first and the second sets of points.

Unfortunately, this method still requires that the principal inertia moments are all distinct (which generally holds for random sets of points), as otherwise there would be an infinite number of eigendecompositions of $I$.

• +107

By adamant, history, 4 months ago,

Hi everyone!

Today I'd like to write another blog about polynomials. Consider the following problem:

You're given $P(x) = a_0 + a_1 x + \dots + a_{n-1} x^{n-1}$, you need to compute $P(x+a) = b_0 + b_1 x + \dots + b_{n-1} x^{n-1}$.

There is a well-known solution to this, which involves some direct manipulation of coefficients. However, I usually prefer an approach that is more similar to synthetic geometry: instead of low-level coordinate work, we work on a higher level of abstraction. Of course, we can't get rid of direct coefficient manipulation completely, as we still need to do e.g. polynomial multiplications.

But we can restrict direct manipulation with coefficients to some minimal number of black-boxed operations and then strive to only use these operations in our work. With this goal in mind, we will develop an appropriate framework for it.

Thanks to clyring for inspiring me to write about it with this comment. You can check it for another nice application of calculating $g(D) f(x)$ for a specific series $g(D)$ over the differentiation operator:

While this article mostly works with $e^{aD} f(x)$ to find $f(x+a)$, there you have to calculate

$\left(\frac{D}{1-e^{-D}}\right)p(x)$

to find a polynomial $f(x)$ such that $f(x) = p(0)+p(1)+\dots+p(x)$ for a given polynomial $p(x)$.

#### Key results

Let $[\cdot]$ and $\{ \cdot \}$ be linear operators in the space of formal power series such that $[x^k] = \frac{x^k}{k!}$ and $\{x^k\} = k! x^k$.

The transforms $[\cdot]$ and $\{\cdot \}$ are called the Borel transform and the Laplace transform correspondingly.

As we also work with negative coefficients here, we define $\frac{1}{k!}=0$ for $k < 0$, hence $[x^k]=0$ for such $k$.

In this notion,

$f(x+a) = e^{aD} f(x) = [e^{ax^{-1}}\{f(x)\}],$

where $D=\frac{d}{d x}$ is the differentiation operator. Thus, $\{f(x+a)\}$ is the part at non-negative powers of the cross-correlation of $e^{ax}$ and $\{f(x)\}$. More generally, for an arbitrary formal power series $g(D)$, it holds that

$g(D) f(x) = [g(x^{-1})\{f(x)\}],$

that is $\{g(D) f(x)\}$ is exactly the non-negative part of the cross-correlation of $g(x)$ and $\{f(x)\}$.
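To make this concrete, here's a small exact-arithmetic sketch of my own computing $f(x+a)$ as $[e^{ax^{-1}}\{f(x)\}]$:

```python
from fractions import Fraction
from math import factorial

def shift_via_borel(f, a):
    # f(x + a) = [ e^{a x^{-1}} { f(x) } ]:
    # apply the Laplace transform (multiply f_k by k!), cross-correlate with
    # the coefficients a^j / j! of e^{ax}, keep the part at non-negative powers,
    # then apply the Borel transform (divide by k!).
    n = len(f)
    lap = [Fraction(c) * factorial(k) for k, c in enumerate(f)]
    res = []
    for k in range(n):
        s = sum(lap[k + j] * Fraction(a) ** j / factorial(j) for j in range(n - k))
        res.append(s / factorial(k))
    return res

# f(x) = 1 + 2x + 3x^2, so f(x+2) = 17 + 14x + 3x^2
assert shift_via_borel([1, 2, 3], 2) == [17, 14, 3]
```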

Detailed explanation is below.

• +136

By adamant, history, 4 months ago,

Hi everyone!

Today I want to write about inversions in permutations. The blog is mostly inspired by problem C from Day 3 of the 2022 winter Petrozavodsk programming camp. I will also try to shed some light on the relation between inversions and $q$-analogs.

#### Key results

Let $F(x)=a_0+a_1x+a_2x^2+\dots$, then $F(e^x)$ is the exponential generating function of

$b_i = \sum\limits_{k=0}^\infty a_k k^i.$

In other words, it is a moment-generating function of the parameter by which $F(x)$ enumerates objects of class $F$.

Motivational example:

The generating function of permutations of size $n$, enumerated by the number of inversions is

$F_n(x) = \prod\limits_{k=1}^n \frac{1-x^k}{1-x}.$

The moment-generating function for the number of inversions in a permutation of size $n$ is

$G_n(x) = \prod\limits_{k=1}^n \frac{1-e^{kx}}{1-e^x}.$
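As a quick sanity check of the moment interpretation (my own sketch): the first moment $b_1 = \sum_k a_k k$ of the inversion statistic over all permutations of size $n$ should equal $n! \cdot \frac{n(n-1)}{4}$, since the expected number of inversions is $\frac{n(n-1)}{4}$:

```python
from itertools import combinations, permutations
from math import factorial

def inversions(p):
    return sum(1 for i, j in combinations(range(len(p)), 2) if p[i] > p[j])

n = 5
counts = {}                                  # a_k: permutations of size n with k inversions
for p in permutations(range(n)):
    k = inversions(p)
    counts[k] = counts.get(k, 0) + 1

b1 = sum(a * k for k, a in counts.items())   # b_1 = sum_k a_k * k, the first moment
assert b1 == factorial(n) * n * (n - 1) // 4
assert sum(counts.values()) == factorial(n)  # F_n(1) = n!
```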

• +156

By adamant, history, 5 months ago,

Hi everyone!

I'm currently trying to write an article about $\lambda$-optimization in dynamic programming, commonly known as the "aliens trick". While writing it, I stumbled upon a fact which, I believe, is somewhat common knowledge, but is rarely written out and proved explicitly. This fact is that we can sometimes use ternary search repeatedly, in a nested manner, when we need to optimize a multi-dimensional function.

Thanks to

• mango_lassi for a useful discussion on this and for a counter-example on integer ternary search!

• +218

By adamant, history, 5 months ago,

Hi everyone!

Today I'd like to write yet another blog about polynomials. Specifically, I will cover the relationship between polynomial interpolation and Chinese remainder theorem, and I will also highlight how it is useful when one needs an explicit meaningful solution for partial fraction decomposition.

• +171

By adamant, history, 5 months ago,

Hi everyone!

This time I'd like to write about what's widely known as "Aliens trick" (as it got popularized after 2016 IOI problem called Aliens). There are already some articles about it here and there, and I'd like to summarize them, while also adding insights into the connection between this trick and generic Lagrange multipliers and Lagrangian duality which often occurs in e.g. linear programming problems.

Familiarity with a previous blog about ternary search or, at the very least, definitions and propositions from it is expected.

Great thanks to mango_lassi and 300iq for useful discussions and some key insights on this.

Note that although explanation here might be quite verbose and hard to comprehend at first, the algorithm itself is stunningly simple.

Another point that I'd like to highlight for those already familiar with the "Aliens trick" is that typical solutions using it require a binary search on lambdas to reach the specified constraint, by minimizing the constraint's value for each specific $\lambda$. However, this part is actually unnecessary and you don't even have to calculate the value of the constraint function at all within your search.

It further simplifies the algorithm and extends the applicability of the aliens trick to the cases when it is hard to minimize the constraint function while simultaneously minimizing the target function for the given $\lambda$.

#### Tldr.

Problem. Let $f : X \to \mathbb R$ and $g : X \to \mathbb R^c$. You need to solve the constrained optimization problem

$\begin{gather}f(x) \to \min,\\ g(x) = 0.\end{gather}$

Auxiliary function. Let $t(\lambda) = \inf_x [f(x) - \lambda \cdot g(x)]$. Finding $t(\lambda)$ is an unconstrained problem and is usually much simpler.

Equivalently, $t(\lambda) = \inf_y [h(y) - \lambda \cdot y]$ where $h(y)$ is the minimum possible $f(x)$ subject to $g(x)=y$.

As a point-wise minimum of linear functions, $t(\lambda)$ is concave, therefore its maximum can be found with ternary search.

Key observation. By definition, $t(\lambda) \leq h(0)$ for any $\lambda$, thus $\max_\lambda t(\lambda)$ provides a lower bound for $h(0)$. When $h(y)$ is convex, the inequality turns into an equality, that is, $\max_\lambda t(\lambda) = h(0) = f(x^*)$ where $x^*$ is the solution to the minimization problem.

Solution. Assume that $t(\lambda)$ is computable for any $\lambda$ and $h(y)$ is convex. Then find $\max_\lambda t(\lambda)$ with the ternary search on $t(\lambda)$ over possible values of $\lambda$. This maximum is equal to the minimum $f(x)$ subject to $g(x)=0$.

If $g(x)$ and $f(x)$ are integer functions, $\lambda_i$ corresponds to $h(y_i) - h(y_i-1)$ and can be found among integers.
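Here's a toy illustration of the scheme, entirely my own: $h$ below is a made-up convex function standing in for "minimum $f(x)$ subject to $g(x)=y$", and the ternary search recovers $h(0)$ without ever touching the constraint directly:

```python
def h(y):
    # stand-in for the constrained optimum: min f(x) subject to g(x) = y
    return (y - 3) ** 2 + 7

def t(lmb):
    # t(λ) = min_y [h(y) - λ·y]: the unconstrained problem, easy to evaluate
    return min(h(y) - lmb * y for y in range(-100, 101))

# ternary search for max_λ t(λ); since h is convex, it equals h(0) = 16
lo, hi = -1000.0, 1000.0
for _ in range(300):
    m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
    if t(m1) < t(m2):
        lo = m1
    else:
        hi = m2
assert abs(t((lo + hi) / 2) - h(0)) < 1e-6
```

Note that $t(\lambda)$ here is piecewise linear and flat near its maximum, which is exactly the situation where the remark about not needing the constraint's value pays off.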

Boring and somewhat rigorous explanation is below, problem examples are belower.

• +286

By adamant, history, 11 months ago,

Hi everyone!

Some time ago Monogon wrote an article about Edmonds blossom algorithm to find the maximum matching in an arbitrary graph. Since the problem has a very nice algebraic approach, I wanted to share it as well. I'll start with something very short and elaborate later on.

tl;dr. The Tutte matrix of the graph $G=(V, E)$ is

$$T(x_{12}, \dots, x_{(n-1)n}) = \begin{pmatrix} 0 & x_{12} e_{12} & x_{13} e_{13} & \dots & x_{1n} e_{1n} \newline -x_{12} e_{12} & 0 & x_{23} e_{23} & \dots & x_{2n} e_{2n} \newline -x_{13} e_{13} & -x_{23} e_{23} & 0 & \dots & x_{3n} e_{3n} \newline \vdots & \vdots & \vdots & \ddots & \vdots \newline -x_{1n} e_{1n} & -x_{2n} e_{2n} & -x_{3n} e_{3n} & \dots & 0 \end{pmatrix}$$

Here $e_{ij}=1$ if $(i,j)\in E$ and $e_{ij}=0$ otherwise, $x_{ij}$ are formal variables. Key facts:

1. Graph has a perfect matching if and only if $\det T \neq 0$ when considered as a polynomial in $x_{ij}$.
2. Rank of $T$ is the number of vertices in the maximum matching.
3. A maximal linearly independent subset of rows corresponds to a subset of vertices on which there is a perfect matching.
4. If graph has a perfect matching, $(T^{-1})_{ij} \neq 0$ iff there exists a perfect matching which includes the edge $(i,j)$.
5. After such $(i,j)$ is found, to fix it in the matching one can eliminate $i$-th and $j$-th rows and columns of $T^{-1}$ and find next edge.

Randomization comes in when we substitute $x_{ij}$ with random values. It can be proven that the conditions above still hold with high probability. This provides us with an $O(n^3)$ algorithm to find the maximum matching in a general graph. For details, dive below.
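A numerical sketch of facts 1 and 2 (my own, using random real values instead of working modulo a prime as one would in practice):

```python
import numpy as np

def tutte_matrix(n, edges, rng):
    # substitute random reals for the formal variables x_ij
    T = np.zeros((n, n))
    for i, j in edges:
        x = rng.uniform(1, 2)
        T[i, j], T[j, i] = x, -x
    return T

rng = np.random.default_rng(0)

# path 0-1-2-3 has the perfect matching {01, 23}: det != 0
assert abs(np.linalg.det(tutte_matrix(4, [(0, 1), (1, 2), (2, 3)], rng))) > 1e-9
# star K_{1,3} has no perfect matching: det == 0 identically
assert abs(np.linalg.det(tutte_matrix(4, [(0, 1), (0, 2), (0, 3)], rng))) < 1e-9
# triangle plus an isolated vertex: maximum matching covers 2 vertices, rank == 2
assert np.linalg.matrix_rank(tutte_matrix(4, [(0, 1), (1, 2), (2, 0)], rng)) == 2
```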

• +223

By adamant, history, 11 months ago,

Hi everyone!

Recently aryanc403 brought up a topic of subset convolution and some operations related to it.

This inspired me to write this blog entry, as existing explanations of how it works seemed unintuitive to me. I believe that having viable interpretations of how things work is of extreme importance, as it greatly simplifies understanding and allows us to reproduce some results without just learning them by heart.

Also, this approach allows us to easily and intuitively generalize subset convolution to sums over $i \cup j = k$ and $|i \cap j|=l$, while in competitive programming we usually only do it for $|i \cap j|=0$. Enjoy the reading!

• +254

By adamant, history, 11 months ago,

Hi everyone!

Five days have passed since my previous post, which was generally well received, so I'll continue doing posts like this for the time being.

Abstract algebra is among my favorite subjects. One particular thing I find impressive is the associativity of binary operations. One of the harder problems from my contests revolves around this property, as you need to construct an operation with a given number of non-associative triples. This time I want to talk about how one can check this property for both groups and arbitrary magmas.

The first part of my post is about Light's associativity test and how it can be used to deterministically check whether an operation defines a group in $O(n^2 \log n)$. The second part is about Rajagopalan and Schulman's probabilistic identity testing, which allows one to test associativity in $O(n^2 \log n \log \delta^{-1})$, where $\delta$ is the error tolerance. Finally, the third part of my post is dedicated to the proof of the Rajagopalan–Schulman method and bears some insights into identity verification in general and higher-dimensional linear algebra.

For convenience, these three parts are separated by horizontal rule.

• +158

By adamant, history, 11 months ago,

Hi everyone!

Long time no see. 3 years ago I announced a Telegram channel. Unfortunately, for the last ~1.5 years I had a total lack of inspiration for new blog posts. Well, now I have a glimpse of it once again, so I want to continue writing about interesting stuff. Here's an example:

• +286

By adamant, history, 2 years ago,

Hi everyone!

Let's continue with learning continued fractions. We began by studying the case of finite continued fractions, and now it's time to work a bit with the infinite case. It turns out that while rational numbers have a unique representation as a finite continued fraction, any irrational number has a unique representation as an infinite continued fraction.

Part 1: Introduction
Part 2: Properties and interpretation

• +138

Hi everyone!

After writing this article, I've decided to write another one that would be a comprehensive introduction to continued fractions for competitive programmers. I'm not really familiar with the topic, so I hope writing this entry will be a good way to familiarize myself with it :)

Part 1: Introduction
Part 2: Properties and interpretation

• +201

Hi everyone!

It's been a while since I posted anything. Today I'd like to talk about problem I from Oleksandr Kulkov Contest 2. Well, about a similar problem. The problem goes as follows: there is a rational number $x=\frac{p}{q}$, and you know that $1 \leq p, q \leq C$. You want to recover $p$ and $q$, but you only know a number $r$ such that $r \equiv pq^{-1} \pmod{m}$, where $m > C^2$. In the original problem $m$ was not fixed; instead, you were allowed to query remainders $r_1,\dots,r_k$ of $x$ modulo several numbers $m_1,\dots,m_k$, which implied the Chinese remainder theorem.
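For the fixed-modulus version, the answer can be recovered from the intermediate values of the extended Euclidean algorithm on $(m, r)$; here's a short sketch of mine, relying on the fact that $m > C^2$ makes the answer unique:

```python
def rational_reconstruct(r, m, C):
    # find p, q with 1 <= p, q <= C and p ≡ q*r (mod m); unique when m > C^2
    a, b = m, r % m
    t0, t1 = 0, 1                    # invariant: b ≡ t1 * r (mod m)
    while b:
        if b <= C and 1 <= t1 <= C:
            return b, t1             # p = b, q = t1
        qq = a // b
        a, b = b, a - qq * b
        t0, t1 = t1, t0 - qq * t1
    return None

# e.g. recover 3/7 from r = 3 * 7^{-1} mod 101 (C = 10, C^2 < 101)
assert rational_reconstruct(3 * pow(7, -1, 101) % 101, 101, 10) == (3, 7)
assert rational_reconstruct(2 * pow(5, -1, 29) % 29, 29, 5) == (2, 5)
```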

• +182

By adamant, history, 2 years ago,

Hi everyone!

This summer I gave another contest at the summer Petrozavodsk programming camp and (although a bit late) I want to share it with the codeforces community by adding it to the codeforces gym: 2018-2019 Summer Petrozavodsk Camp, Oleksandr Kulkov Contest 2. To make it more fun, I scheduled it on Sunday, January 5, 12:00 (UTC+3). Feel free to participate during the scheduled time or, well, whenever you're up to it. Good luck and have fun :)

Problems might be discussed here afterwards; I may even write some editorials for particular problems (per request, as I don't have them prepared beforehand this time).

UPD: 17h reminder before the start of the contest

UPD2: It wasn't an easy task to do, but I managed to add ghost participants to the contest! Enjoy!

• +138

By adamant, history, 3 years ago,

Hi there!

During preparation of Oleksandr Kulkov Contest 1 I started writing a template for polynomial algebra (because 3 problems in the contest required some polynomial operations in one way or another). And with great pleasure I'd like to report that it resulted in this article on cp-algorithms.com (English translation of e-maxx) and this mini-library containing all mentioned operations and algorithms (except for the Half-GCD algorithm). I won't say the code is super-optimized, but at least it's public, provides some baseline, and is open for contribution if anyone would like to enhance it!

Article also provides some algorithms I didn't mention before. Namely:

• Interpolation: Now the described algorithm is and not as it was before.
• Resultant: Given polynomials $A(x)$ and $B(x)$, compute the product of $A(\mu_i)$ across all $\mu_i$ being roots of $B(x)$.
• Half-GCD: How to compute GCD and resultants in (just key ideas).

Feel free to read the article to know more and/or use provided code :)

tl;dr. article on operations with polynomials and implementation of mentioned algorithms.

• +185

By adamant, history, 3 years ago,

Okay, so compare these two submissions: 51053654 and 51053605

The only difference is that the first one was submitted via GNU C++17 and the second one via MS C++ 2017. The code is the same, but the first gets RE 16 and the second one gets AC.

WTF, GNU C++??