By adamant, history, 11 days ago

Hi everyone!

In my previous blog, I wrote about how generating functions can be used to enumerate labeled species. In this blog, I want to continue the topic by writing about how one can account for different kinds of symmetries when counting combinatorial structures. Ultimately, we will end up deriving, and hopefully understanding, the analogue of the Pólya enumeration theorem for species.

Difficulty: ★★★★☆

Prerequisites:

• Familiarity with combinatorial species (see my previous blog), OR
• Very good intuition in enumerative combinatorics and generating functions, plus the recap below.

I will try to use plain language rather than formulas as much as possible, as it seems to be the preferred format for readers.

## Recap

Below is a very brief recap of the most important things from the previous article. I tried to keep it as informal as possible.

Previously on combinatorial species

If $A(x)$ and $B(x)$ are the generating functions for species $A$ and $B$, then $A(B(x))$ is the generating function for their composition.
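
For instance, taking $A$ to be the species of sets (EGF $e^x$) and $B$ the species of non-empty sets (EGF $e^x-1$) composes into the species of set partitions, whose EGF $e^{e^x-1}$ enumerates the Bell numbers. A quick numeric sanity check of this composition (my own sketch using exact rationals, not part of the original blog):

```python
from fractions import Fraction
from math import factorial

def compose(A, B, n):
    """Compute A(B(x)) mod x^n by Horner's scheme; assumes B[0] == 0."""
    res = [Fraction(0)] * n
    for c in reversed(A):
        new = [Fraction(c)] + [Fraction(0)] * (n - 1)  # new = res * B + c
        for i, x in enumerate(res):
            if x:
                for j, y in enumerate(B):
                    if i + j < n:
                        new[i + j] += x * y
        res = new
    return res

n = 7
exp_x = [Fraction(1, factorial(k)) for k in range(n)]  # EGF of sets: e^x
nonempty = [Fraction(0)] + exp_x[1:]                   # EGF of non-empty sets: e^x - 1
partitions_egf = compose(exp_x, nonempty, n)
bell = [int(partitions_egf[k] * factorial(k)) for k in range(n)]
print(bell)  # Bell numbers: [1, 1, 2, 5, 15, 52, 203]
```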

It is a fundamental result for labeled species. Unfortunately, there is no simple analogue for unlabeled structures, and deriving the composition formula in some reasonable formulation for them is the ultimate goal of this article.

By adamant, history, 3 weeks ago

Hi everyone!

This is yet another blog that I had drafted for quite some time, but was reluctant to publish. I decided to dig it up and complete it to a more or less comprehensive state for the $300 contest. Essentially, the blog tells how to combine the CDQ technique for relaxed polynomial multiplication ("online FFT") with the linearization technique from Newton's method (a similar approach is used in the first example of the ODE blog post by amiya), so that functions that typically require Newton's method can be computed online as well. I will try to briefly cover the general idea of "online FFT" too and provide some examples, in case you're not very familiar with it. That being said...

By adamant, history, 4 weeks ago

Hi everyone! I had this blog drafted for quite a while, but the $300 contest motivated me to actually finish and publish it. I don't care about the prize that much, as I think many other blogs published recently are talking about something even cooler. But I like to make people suffer, so I will apply anyway to make peltorator read even more stuff >:)

Besides, the contest makes it harder to stay on contribution top, so I got to do something to stay afloat. That being said...

Today I'd like to write about an algorithm that solves the following problem:

You're given a set of permutations $G = \{g_1, \dots, g_m\}$. Find the size $|\langle g_1, \dots, g_m \rangle|$ of the group generated by $G$.

At first, I wanted to get into a lot of group-theoretic details with proofs and all, but then I felt it would make the article harder to comprehend, so I will try to keep the group-theoretic material to a minimum while telling more about the high-level idea.
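
To make the problem concrete before any theory, here is a brute-force closure computation (my own illustration, not the algorithm this blog is about — the whole point of the blog is to avoid enumerating all $|G|$ elements, which is only feasible for tiny groups):

```python
def group_size(gens):
    """Size of the permutation group generated by gens, via BFS closure.
    Permutations are tuples; composition is (g*h)(i) = g[h[i]].
    Runs in time proportional to |group| * |gens|, so tiny groups only."""
    n = len(gens[0])
    identity = tuple(range(n))
    seen = {identity}
    frontier = [identity]
    while frontier:
        nxt = []
        for g in frontier:
            for h in gens:
                p = tuple(g[h[i]] for i in range(n))
                if p not in seen:
                    seen.add(p)
                    nxt.append(p)
        frontier = nxt
    return len(seen)

# a rotation and a reflection of a square generate the dihedral group of order 8
print(group_size([(1, 2, 3, 0), (3, 2, 1, 0)]))  # 8
```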

## Prerequisites

It is strongly recommended to be familiar with some simpler basis-construction algorithms, e.g. Gaussian elimination or the XOR basis.
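
For reference, a minimal sketch of the XOR basis construction mentioned above (the algorithm in this blog generalizes the same "insert and reduce" pattern to permutation groups; note that a rank-$r$ XOR basis spans exactly $2^r$ values, the linear-algebra analogue of a group's size):

```python
BITS = 60

def insert(basis, x):
    """Try to insert x into the XOR basis; return True if x was independent.
    basis[i] holds the basis vector whose highest set bit is i (or 0)."""
    for i in reversed(range(BITS)):
        if not (x >> i) & 1:
            continue
        if basis[i] == 0:
            basis[i] = x
            return True
        x ^= basis[i]
    return False

basis = [0] * BITS
rank = sum(insert(basis, v) for v in [5, 10, 15, 7])
print(rank, 2 ** rank)  # rank 3 => the span contains 2^3 = 8 values
```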

Familiarity with basic group theory (e.g. definitions) will also be beneficial.

By adamant, history, 5 weeks ago

Hi everyone! I am happy to announce the Osijek Competitive Programming Camp that is scheduled to happen on 18-26 February 2023 in Osijek, Croatia. The camp is hosted by the Department of Mathematics at J. J. Strossmayer University of Osijek and is brought to you by the organizing team of adamant (that's me!), -is-this-fft-, mblazev, Tomx and antontrygubO_o.

The camp is inspired by various competitive programming camps that we attended during our active years in ICPC, and aims to help college students prepare for ICPC regional contests and finals. The camp will consist of 7 ICPC-style contests and 2 days off.

Up until now, most top competitive programming training camps have been organised in Russia, or by Russian entities. Due to the current situation in the world, attending such camps is not possible or desirable for a large number of participants, and we hope to offer a European alternative to Russian camps.

This is the first edition of the camp, and we all are very excited about it and hope that we may see you soon!

### Details

Participation fee for onsite participants is 150€ per person. It does not include regular meals, travel or accommodation.

If you want to participate, but are unable to come onsite, we offer a reduced fee of 100€ per person for online participation. It is also possible to reduce fees individually if you are unable to attend some of the contests (e.g. because of the overlap with SWERC).

We support and empathize with those affected by the ongoing war in Ukraine, therefore we offer a 100€ discount for affected individuals and teams affiliated with Ukrainian institutions. In other words, the fees would be 50€ and 0€ per person for onsite and online participation respectively.

The expected starting time for the contests is 10am CET. For online participants this is also the preferred starting time, but if it does not work for you, we will also offer an opportunity to compete virtually within 24 hours after the main window is over.

### Participants

If you are interested in participating, please fill in the form here.

We ask you to register before February 10 if you want to participate online and before February 4 if you want to participate onsite.

If you want to participate onsite, especially if you require visa support, we would urge you to register as soon as possible, as visa processing may take 30+ days.

Also, if your university has a lively ICPC community that may be interested in attending the camp, and you have contacts of the people in charge (e.g. coaches), we would highly appreciate it if you could fill in the form here, so that we can send an invitation. Thanks!

### Problemsetters

We'd like to praise the authors of the contests for the camp:

Last but not least, we would like to say special thanks to our sponsors, who make the camp possible.

If you are interested in sponsoring future editions of the camp or have any questions, please feel free to reach out at ocpc@mathos.hr.

Finally, we would also like to thank Codeforces for guidance and promoting this announcement to the main page, eolymp for providing us an online judge for the contests and the Department of Mathematics at J. J. Strossmayer University of Osijek for all their organizational support and providing us a physical location to conduct the camp.

By adamant, history, 3 months ago

Hi everyone!

Today I saw a discussion in the AC Discord server about how many problems some people made for different competitions. It was sparked by this CF entry. I hadn't kept track of this before, so it made me curious to go through everything that I have made and publish it as a comprehensive list. I'm doing it mostly out of curiosity, but maybe it might be interesting for someone else too :)

By adamant, history, 6 months ago

Hi everyone!

Today I participated in an AtCoder contest, which I quite rarely do. Of course, my performance was quite poor, but I stumbled upon quite a nice idea along the way. For those who haven't read it, the problem goes as follows:

AGC 058 — Yet Another ABC String. How many ternary strings are there such that they contain exactly $A$ characters a, exactly $B$ characters b and exactly $C$ characters c, and do not have any sub-string that is a cyclic shift of abc?

My approach is very different from the one in the editorial, and seems insightful to me, so I decided to share it in detail.

Besides, I can't reject the request by Um_nik to explain it further :D

By adamant, history, 6 months ago

Hi everyone!

Previously, I wrote a general introduction to linear programming duality. In this blog, I would like to write about several problems that can be solved with this technique. Familiarity with the first blog, or general knowledge of dual problems and how to construct them, is expected to navigate this one.

Thanks to brunovsky and Golovanov399 for problem suggestions!

And particularly special thanks to WeakestTopology for problem suggestions and all insightful discussions on the topic!

By adamant, history, 6 months ago

Hi everyone!

There is a well-known result about bipartite graphs which is formulated as follows:

In any bipartite graph, the number of edges in a maximum matching equals the number of vertices in a minimum vertex cover.

Knowing it, you can easily find the size of a minimum vertex cover in a given bipartite graph. But what about actually recovering such a cover? Well... There is an algorithm for it, but it's lengthy and not very motivated, so I constantly forget its details. However, there is a simple alternative, which is easily reproduced from scratch even if you forget the specific details.

Besides, there is another well-known result which is formulated as follows:

A bipartite graph has a perfect matching if and only if every subset $\alpha$ of one part has a neighborhood $\beta$ in the other part such that $|\beta| \geq |\alpha|$.

This result can be proven with a similar approach. It is likely known to those familiar with the topic, but still might be of interest to someone.

By adamant, history, 7 months ago

Hi everyone!

Previously I wrote about theoretical grounds of "aliens trick". Specifically, I introduced the concept of the Lagrangian duality and explained its connection with the trick. Today I want to elaborate a bit more on the concept of dual problems and their applications to linear programming as well as to common problems in competitive programming.

I initially wanted to also write about some applications in competitive programming problems, but given that the introduction is already quite lengthy, I decided to leave it for another blog, while using most common and well-known theoretical examples here, focusing more on how to construct and interpret dual problems to begin with, rather than how to use it in contests.

I think it is a crucial first step towards using duality in actual programming challenges.

#### Prerequisites

It is highly recommended to have some general understanding of basic mathematical optimization concepts (e.g. convexity, Lagrange multipliers), as well as a basic understanding of flow algorithms and the most well-known results around them, such as the cycle-path decomposition of flows, the maximum flow / minimum cut duality, and minimum cost flows (ideally, the algorithm that computes them using Dijkstra with potentials). The article should hopefully shed some further light on these topics.

By adamant, history, 7 months ago

Hi everyone!

Here's another collection of little tricks and general ideas that might make your life better (or maybe worse).

Most of them are somewhat well-known, but perhaps would still be useful to someone.

By adamant, history, 7 months ago

Hi everyone!

In today's blog I'd like to write about a theorem that allows us to efficiently solve the following problems:

• What is the number of partitions of a non-negative number $n$ for $n \leq 10^5$?
• What is the number of partitions of a non-negative number $n$ for $n \leq 10^5$, so that no summand repeats $d$ or more times?
• What is the number of partitions of a non-negative number $n$ for $n \leq 10^5$, so that no summand is divisible by $d$?
• HackerEarth — Perfect Permutations: What is the number of permutations of length $n$ with $k$ inversions for $k \leq n \leq 10^5$?
• 102354E - Decimal Expansion: Let $\phi = \frac{9}{10} \cdot \frac{99}{100} \cdot \frac{999}{1000} \cdot \dots$, what is the $k$-th digit of $\phi$ for $k \leq 10^{18}$?
• SNSS-2018 Round 5 — Investment Portfolio: You have $2n$ dollars and there are $n$ types of products in the market. There are $a_i$ instances of the $i$-th product, costing $i$ dollars each, such that $a_i > a_j$ when $i > j$. Besides that, there are also $m$ individual offers, $j$-th of them costing $b_j$. How many ways are there to spend all $2n$ dollars, while taking exactly one individual offer for $n, m \leq 10^5$?

The core fact that allows us to solve these problems efficiently is the pentagonal number theorem:

$\large \boxed{\prod\limits_{k=1}^\infty (1-x^k) = 1 + \sum\limits_{k=1}^\infty (-1)^k\left(x^{\frac{k(3k+1)}{2}} + x^{\frac{k(3k-1)}{2}}\right)}$
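
Jumping ahead a little: since the left-hand side is the reciprocal of the partition generating function $\sum_n p(n) x^n = \prod_k (1-x^k)^{-1}$, and the right-hand side is sparse (only $O(\sqrt n)$ terms up to $x^n$), the theorem immediately yields an $O(n \sqrt n)$ recurrence for the partition numbers. A sketch:

```python
def partition_numbers(n):
    """p[0..n] via the pentagonal number theorem, O(n sqrt n):
    p(m) = sum_{k>=1} (-1)^{k+1} (p(m - k(3k-1)/2) + p(m - k(3k+1)/2))."""
    p = [0] * (n + 1)
    p[0] = 1
    for m in range(1, n + 1):
        total, k, sign = 0, 1, 1
        while k * (3 * k - 1) // 2 <= m:
            total += sign * p[m - k * (3 * k - 1) // 2]
            if k * (3 * k + 1) // 2 <= m:
                total += sign * p[m - k * (3 * k + 1) // 2]
            sign, k = -sign, k + 1
        p[m] = total
    return p

print(partition_numbers(10))  # [1, 1, 2, 3, 5, 7, 11, 15, 22, 30, 42]
```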

## Prerequisites

• Familiarity with generating functions, formal power series and operations on them.
• Basic notion of combinatorics.
• Basic notion of number theory.

By adamant, history, 7 months ago

Hi everyone!

There is a concept that is sometimes mentioned here and there. The Lagrange inversion theorem.

Let $x = f(y)$. We want to solve this equation for $y$. In other words, we want to find a function $g$ such that $y = g(x)$.

The Lagrange inversion theorem gives an explicit formula for the coefficients of $g(x)$ as a formal power series over $\mathbb K$:

$\large k[x^k] g(x) = [x^{-1}] f^{-k}$

In a special case $y = x \phi(y)$, that is $f(y) = \frac{y}{\phi(y)}$, which is common in combinatorics, it can also be formulated as

$\large k[x^k] g(x) = [x^{k-1}] \phi^k$

Finally, to avoid division by $k$ one may use (see the comment by Elegia) the following formula:

$\large [x^k]g(x) = [x^{-2}] f' f^{-(k+1)},$

which may be formulated for $y = x \phi(y)$ as

$\large [x^k] g(x) = [x^{k-1}] (\phi - x \phi')\phi^{k-1}.$
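
As a sanity check of the $y = x\phi(y)$ form: take $\phi(y) = (1+y)^2$, so $k[x^k]g(x) = [y^{k-1}](1+y)^{2k} = \binom{2k}{k-1}$, and $[x^k]g(x)$ is the $k$-th Catalan number. A small numeric verification (my own sketch, solving the equation by fixed-point iteration on truncated series):

```python
from math import comb

N = 11
# solve y = x * (1 + y)^2 as a power series; each pass of the fixed-point
# iteration determines one more correct coefficient
y = [0] * N
for _ in range(N):
    one_plus_y = [1] + y[1:]          # y has no constant term
    phi = [0] * N                     # (1 + y)^2, truncated at x^N
    for i in range(N):
        for j in range(N - i):
            phi[i + j] += one_plus_y[i] * one_plus_y[j]
    y = [0] + phi[:N - 1]             # y <- x * phi(y)

for k in range(1, N):
    assert y[k] == comb(2 * k, k - 1) // k   # the k-th Catalan number
print(y[1:6])  # [1, 2, 5, 14, 42]
```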

## Prerequisites

Familiarity with the following:

• Polynomials, formal power series and generating functions;
• Basic notion of analytic functions (e.g. Taylor series);
• Basic concepts of graph theory (graphs, trees, etc);
• Basic concepts of set theory (describing graphs, trees, etc as sets, tuples, etc).

I mention the concept of fields, but you're not required to know them to understand the article. If you're not familiar with the notion of fields, assume that we're working in real numbers, that is, $\mathbb K = \mathbb R$.

It is recommended (but not required) to be familiar with combinatorial species, as they provide lots of intuition as to how the equations on generating functions below are constructed.

By adamant, history, 8 months ago

Hi everyone!

The usage of generating functions is quite common in competitive programming these days. But it so happens that they're mostly presented as a way to simplify formal manipulation with polynomial coefficients, rather than as something meaningful. But there actually is a meaning to all of this, and today I'm going to shed some light on it.

Alongside the blog post we'll uncover the combinatorial meaning of

• The addition $F(x)+G(x)$,
• The multiplication $F(x)G(x)$,
• The exponent $\exp F(x)$,
• The logarithm $\log F(x)$,
• The sum of the infinite geometric progression $\frac{1}{1-F(x)}=1+F(x)+F^2(x)+\dots$,
• The general composition $F(G(x))$

for the generating functions $F(x)$ and $G(x)$ and the underlying structures they represent.

### Prerequisites

• Basic notion of set theory (cardinality of sets, mappings, bijections, cartesian product, disjoint union, etc);
• Polynomials and formal power series (representation, convolution formula, power series definitions of $\exp$ and $\log$);
• Elementary combinatorics (combinations, partitions, etc).

Although combinatorial species are defined with category theory concepts, I will try to avoid them for simplicity. However, knowledge of some category theory concepts would greatly simplify the reading experience.

Some of the definitions use pretty dense mathematical notation; when that is the case, I will follow up with an informal explanation right away.

#### Commutative diagrams

I will also use commutative diagrams to illustrate complicated identities. You may use the explanation below as a reference.

Explanation

Every diagram is also provided with a more verbose text description, so don't be afraid if the concept seems complicated.

By adamant, history, 8 months ago

Hi everyone!

There was once a problem named "Easy When You Know How". Unfortunately, I can't remember the contest it originated from.

You're given an array $a_0, a_1, \dots, a_{2^n-1}$. Answer $q$ queries that are:

1. Given $m$, compute the sum over all $a_i$ such that $i$ is a submask of $m$;
2. Given $m$ and $c$, add $c$ to $a_m$.
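
Without updates, query 1 alone is the classic sum-over-subsets (SOS) zeta transform; here is a sketch of that static version for contrast (the actual problem, with updates mixed in, needs something cleverer):

```python
def submask_sums(a, n):
    """f[m] = sum of a[i] over all submasks i of m (SOS dp, O(n * 2^n))."""
    f = a[:]
    for bit in range(n):
        for mask in range(1 << n):
            if mask >> bit & 1:
                f[mask] += f[mask ^ (1 << bit)]
    return f

print(submask_sums([1, 2, 3, 4], 2))  # [1, 3, 4, 10]
```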

Constraints were along the lines of $2^n, q \leq 10^5$.

By adamant, history, 8 months ago

Hi everyone!

Today I'd like to write about the so-called Grundy numbers, or nimbers. I will start with a formal recap of the Sprague-Grundy theorem and then advance to a topic that is rarely covered in competitive programming resources: the nimber product and its meaning in game theory.

I was asked to add the following to the blog: THANK SIR MASTER Adam_GS FOR GREAT AND VALUABLE REVIEW SIR

## Prerequisites

Familiarity with the following concepts:

Familiarity with formal computational models (e.g. deterministic finite automata) is not required, but would be helpful.

Although the nimbers are typically constructed on top of ordinal numbers and transfinite nim, I will try to stick with natural numbers and classic nim. You may familiarize yourself with transfinite nim in e.g. this article by emorgan5289.

By adamant, history, 9 months ago

Hi everyone!

Let $R$ be a ring, $d_0, d_1, d_2, \dots \in R$ and $e_0, e_1, e_2, \dots \in R$ be linear recurrence sequences, such that

$\begin{gather} d_m = \sum\limits_{i=1}^k a_i d_{m-i}\text{ for }m \geq k, \\ e_m = \sum\limits_{i=1}^l b_i e_{m-i}\text{ for }m \geq l. \end{gather}$

In some applications, the following two sequences arise:

$\begin{array}{ll} f_k = d_k e_k & \text{(Hadamard product)}, \\ f_k = \sum\limits_{i+j=k} \binom{k}{i} d_i e_j & \text{(binomial convolution)}. \end{array}$
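
A quick numeric sanity check for the Hadamard product case before any theory: the squares of Fibonacci numbers satisfy an order-3 recurrence with characteristic polynomial $x^3 - 2x^2 - 2x + 1$, whose roots are the pairwise products of the roots of $x^2 - x - 1$ (my own sketch):

```python
# Fibonacci numbers and the Hadamard product of the sequence with itself
fib = [0, 1]
for _ in range(20):
    fib.append(fib[-1] + fib[-2])

f = [x * x for x in fib]
# claimed recurrence: f_m = 2 f_{m-1} + 2 f_{m-2} - f_{m-3}
for m in range(3, len(f)):
    assert f[m] == 2 * f[m - 1] + 2 * f[m - 2] - f[m - 3]
print("order-3 recurrence holds for squared Fibonacci numbers")
```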

Today I'd like to write about a framework that allows one to prove that both sequences defined above are also linear recurrence sequences. It also allows computing their characteristic polynomials in $O(kl \log kl)$, which is optimal, as their degrees are $O(kl)$ in both cases.

By adamant, history, 9 months ago

Hi everyone!

You probably know that the primitive root modulo $m$ exists if and only if one of the following is true:

• $m=2$ or $m=4$;
• $m = p^k$ is a power of an odd prime number $p$;
• $m = 2p^k$ is twice a power of an odd prime number $p$.

Today I'd like to write about an interesting rationale for it through $p$-adic numbers.

Hopefully, this will allow us to develop a deeper understanding of the multiplicative group modulo $p^k$.
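
The existence criterion itself is easy to verify by brute force for small moduli, which is a good sanity check before the $p$-adic machinery (my own sketch, not from the blog):

```python
from math import gcd

def has_primitive_root(m):
    """Brute force: does some unit mod m have multiplicative order phi(m)?"""
    units = [a for a in range(1, m) if gcd(a, m) == 1]
    phi = len(units)
    for g in units:
        x, order = g % m, 1
        while x != 1:
            x = x * g % m
            order += 1
        if order == phi:
            return True
    return False

def criterion(m):
    """m in {2, 4}, or m = p^k, or m = 2 p^k for an odd prime p."""
    if m in (2, 4):
        return True
    if m % 2 == 0:
        m //= 2
    if m % 2 == 0 or m == 1:
        return False
    p = next(d for d in range(3, m + 1) if m % d == 0)  # smallest odd prime factor
    while m % p == 0:
        m //= p
    return m == 1

assert all(has_primitive_root(m) == criterion(m) for m in range(2, 200))
print("criterion verified for 2 <= m < 200")
```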

### Tl;dr.

For a prime number $p>2$ and $r \equiv 0 \pmod p$ one can uniquely define

$\exp r = \sum\limits_{k=0}^\infty \frac{r^k}{k!} \pmod{p^n}.$

In this notion, if $g$ is a primitive root of remainders modulo $p$ lifted to have order $p-1$ modulo $p^n$ as well, then $g \exp p$ is a primitive root of remainders modulo $p^n$.

Finally, for $p=2$ and $n>2$ the multiplicative group is generated by two numbers, namely $-1$ and $\exp 4$.

By adamant, history, 10 months ago

Hi everyone!

Today I want to describe an efficient solution of the following problem:

Composition of Formal Power Series. Given $A(x)$ and $B(x)$ of degree $n-1$ such that $B(0) = 0$, compute $A(B(x)) \pmod{x^n}$.

The condition $B(0)=0$ doesn't decrease the generality of the problem, as $A(B(x)) = P(Q(x))$, where $P(x) = A(x+B(0))$ and $Q(x) = B(x) - B(0)$. Hence you could replace $A(x)$ and $B(x)$ with $P(x)$ and $Q(x)$ when the condition is not satisfied.
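
As a reference point, here is the naive Horner-scheme composition that the fast algorithms improve upon (my own sketch, not from the article; it performs $n$ truncated multiplications, i.e. $O(n^3)$ with schoolbook multiplication):

```python
def mul(a, b, n):
    """Product of polynomials a and b, truncated at x^n."""
    res = [0] * n
    for i, x in enumerate(a):
        if x:
            for j, y in enumerate(b):
                if i + j < n:
                    res[i + j] += x * y
    return res

def compose(A, B, n):
    """A(B(x)) mod x^n by Horner's scheme; requires B[0] == 0."""
    assert B[0] == 0
    res = []
    for c in reversed(A):
        res = mul(res, B, n)
        res += [0] * (n - len(res))
        res[0] += c
    return res

# 1/(1-x) composed with x + x^2 gives 1/(1-x-x^2): shifted Fibonacci numbers
n = 8
print(compose([1] * n, [0, 1, 1] + [0] * (n - 3), n))  # [1, 1, 2, 3, 5, 8, 13, 21]
```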

The solutions that I'm going to describe were published in Joris van der Hoeven's article about operations on formal power series. The article also describes a lot of other common algorithms on polynomials. It is worth noting that Joris van der Hoeven and David Harvey are the inventors of the breakthrough $O(n \log n)$ integer multiplication algorithm in the multitape Turing machine model.

By adamant, history, 10 months ago

Hi everyone!

Today I'd like to write a bit about the amazing things you can get out of a power series by plugging roots of unity into its argument. It will be another short story without any particular application in competitive programming (at least I don't know of any yet, but there could be). But I find the facts below amazing, hence I wanted to share them.

You're expected to know some basic stuff about the discrete Fourier transform and a bit of linear algebra to understand the article.

By adamant, history, 10 months ago

Hi everyone!

Today I'd like to finally talk about an algorithm to solve the following tasks in $O(n \log^2 n)$:

• Compute the greatest common divisor of two polynomials $P(x)$ and $Q(x)$;
• Given $f(x)$ and $h(x)$ find the multiplicative inverse of $f(x)$ modulo $h(x)$;
• Given $F_0,F_1, \dots, F_m$, recover the minimum linear recurrence $F_n = a_1 F_{n-1} + \dots + a_d F_{n-d}$;
• Given $P(x)$ and $Q(x)$, find $A(x)$ and $B(x)$ such that $P(x) A(x) + Q(x) B(x) = \gcd(P, Q)$;
• Given $P(x)=(x-\lambda_1)\dots(x-\lambda_n)$ and $Q(x)=(x-\mu_1)\dots(x-\mu_m)$ compute their resultant.

More specifically, this allows us to solve the following problems in $O(n \log^2 n)$:

Library Checker — Find Linear Recurrence. You're given $F_0, \dots, F_{m}$. Find $a_1, \dots, a_d$ with minimum $d$ such that

$F_n = \sum\limits_{k=1}^d a_k F_{n-k}.$

Library Checker — Inv of Polynomials. You're given $f(x)$ and $h(x)$. Compute $f^{-1}(x)$ modulo $h(x)$.

All the tasks here are connected with the extended Euclidean algorithm, and the procedure we're going to talk about is a way to compute it quickly. I recommend reading the article on recovering the minimum linear recurrence first, as it introduces some useful results and concepts. It is also highly recommended to familiarize yourself with the concept of continued fractions.

By adamant, history, 10 months ago

Hi everyone!

The task of finding the minimum linear recurrence for the given starting sequence is typically solved with the Berlekamp-Massey algorithm. In this article I would like to highlight another possible approach, with the use of the extended Euclidean algorithm.

Great thanks to nor for the proofreading and all the useful comments to make the article more accessible and rigorous.

### Tl;dr.

The procedure below is essentially a formalization of the extended Euclidean algorithm done on $F(x)$ and $x^{m+1}$.

If you need to find the minimum linear recurrence for a given sequence $F_0, F_1, \dots, F_m$, do the following:

Let $F(x) = F_m + F_{m-1} x + \dots + F_0 x^m$ be the generating function of the reversed $F$.

Compute the sequence of remainders $r_{-2}, r_{-1}, r_0, \dots, r_k$ such that $r_{-2} = F(x)$, $r_{-1}=x^{m+1}$ and

$r_{k} = r_{k-2} \mod r_{k-1}.$

Let $a_k(x)$ be a polynomial such that $r_k = r_{k-2} - a_k r_{k-1}$.

Compute the auxiliary sequence $q_{-2}, q_{-1}, q_0, \dots, q_k$ such that $q_{-2} = 1$, $q_{-1} = 0$ and

$q_{k} = q_{k-2} + a_k q_{k-1}.$

Pick $k$ to be the first index such that $\deg r_k < \deg q_k$. Let $q_k(x) = a_0 x^d - \sum\limits_{i=1}^d a_i x^{d-i}$, then it also holds that

$F_n = \sum\limits_{i=1}^d \frac{a_i}{a_0}F_{n-i}$

for any $n \geq d$, and $d$ is the minimum possible. Thus, $q_k(x)$ divided by $a_0$ is the characteristic polynomial of the minimum linear recurrence for $F$.
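
The steps above translate into code almost verbatim; here is a sketch over exact rationals (illustration only, without the fast half-GCD machinery):

```python
from fractions import Fraction

def deg(p):
    """Degree of a coefficient list (low to high); -1 for the zero polynomial."""
    for i in range(len(p) - 1, -1, -1):
        if p[i] != 0:
            return i
    return -1

def polymul(a, b):
    res = [Fraction(0)] * (len(a) + len(b) + 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            res[i + j] += x * y
    return res

def polyadd(a, b):
    res = [Fraction(0)] * max(len(a), len(b), 1)
    for i, x in enumerate(a):
        res[i] += x
    for i, x in enumerate(b):
        res[i] += x
    return res

def polydivmod(a, b):
    """Quotient and remainder of polynomial long division."""
    a, db = a[:], deg(b)
    q = [Fraction(0)] * max(deg(a) - db + 1, 1)
    for i in range(deg(a), db - 1, -1):
        c = a[i] / b[db]
        q[i - db] = c
        for j in range(db + 1):
            a[i - db + j] -= c * b[j]
    return q, a

def min_recurrence(F):
    """Coefficients a_1..a_d of the minimum recurrence F_n = sum a_i F_{n-i}."""
    m = len(F) - 1
    r2 = [Fraction(x) for x in reversed(F)]       # F_m + F_{m-1} x + ... + F_0 x^m
    r1 = [Fraction(0)] * (m + 1) + [Fraction(1)]  # x^{m+1}
    q2, q1 = [Fraction(1)], [Fraction(0)]
    while True:
        a, r = polydivmod(r2, r1)                 # r = r2 mod r1, a = quotient
        q = polyadd(q2, polymul(a, q1))
        if deg(r) < deg(q):                       # first index with deg r_k < deg q_k
            d, a0 = deg(q), q[deg(q)]
            return [-q[d - i] / a0 for i in range(1, d + 1)]
        r2, r1, q2, q1 = r1, r, q1, q

print(min_recurrence([0, 1, 1, 2, 3, 5, 8, 13]))  # a_1 = a_2 = 1, i.e. Fibonacci
```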

More generally, one can say for such $k$ that

$F(x) \equiv \frac{(-1)^{k}r_k(x)}{q_k(x)} \pmod{x^{m+1}}.$

By adamant, history, 10 months ago

Hi everyone!

Today I'd like to write about Fibonacci numbers. Ever heard of them? Fibonacci sequence is defined as $F_n = F_{n-1} + F_{n-2}$.

It got me interested, what would the recurrence be like if it looked like $F_n = \alpha F_{n-p} + \beta F_{n-q}$ for $p \neq q$?

Timus — Fibonacci Sequence. The sequence $F$ satisfies the condition $F_n = F_{n-1} + F_{n-2}$. You're given $F_i$ and $F_j$, compute $F_n$.

Using the functional $L(x^n) = F_n$, we can say that we essentially need to solve the following equation:

$1 \equiv \alpha x^{-p} + \beta x^{-q} \pmod{x^2-x-1}.$

To get the actual solution from it, we should first understand what exactly is the remainder of $x^n$ modulo $x^2-x-1$. The remainder of $P(x)$ modulo $(x-a)(x-b)$ is generally determined by $P(a)$ and $P(b)$:

$P(x) \equiv r \mod(x-a)(x-b) \iff \begin{cases}P(a) = r,\\ P(b)=r.\end{cases}$

Therefore, our equation above is equivalent to the following:

$\begin{cases} \alpha a^{-p} + \beta a^{-q} = 1,\\ \alpha b^{-p} + \beta b^{-q} = 1. \end{cases}$

The determinant of this system of equations is $a^{-p}b^{-q} - a^{-q}b^{-p}$. Solving the system, we get the solution

$\begin{matrix} \alpha = \dfrac{b^{-q}-a^{-q}}{a^{-p}b^{-q} - a^{-q}b^{-p}}, & \beta = \dfrac{a^{-p}-b^{-p}}{a^{-p}b^{-q} - a^{-q}b^{-p}}. \end{matrix}$

Multiplying numerators and denominators by $a^q b^q$ for $\alpha$ and $a^p b^p$ for $\beta$, we get a nicer form:

$\boxed{\begin{matrix} \alpha = \dfrac{a^q-b^q}{a^{q-p} - b^{q-p}}, & \beta = \dfrac{a^p-b^p}{a^{p-q} - b^{p-q}}. \end{matrix}}$

This is a solution for a second degree recurrence with the characteristic polynomial $(x-a)(x-b)$.

Note that for Fibonacci numbers in particular, due to Binet's formula, it holds that

$F_n = \frac{a^n-b^n}{a-b}.$

Substituting it back into $\alpha$ and $\beta$, we get

$\boxed{F_n = \frac{F_q}{F_{q-p}} F_{n-p} + \frac{F_p}{F_{p-q}} F_{n-q}}$

which is a neat symmetric formula.
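
The boxed identity is easy to test numerically, extending Fibonacci to negative indices via $F_{-n} = (-1)^{n+1}F_n$ (my own quick sketch):

```python
from fractions import Fraction

def fib(n):
    if n < 0:
        return (-1) ** (-n + 1) * fib(-n)   # F_{-n} = (-1)^{n+1} F_n
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

for p, q in [(1, 2), (2, 3), (2, 5), (3, 7)]:
    alpha = Fraction(fib(q), fib(q - p))
    beta = Fraction(fib(p), fib(p - q))
    for n in range(15):
        assert fib(n) == alpha * fib(n - p) + beta * fib(n - q)
print("identity verified for several (p, q)")
```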

P. S. you can also derive it from Fibonacci matrix representation, but this way is much more fun, right?

UPD: I further simplified the explanation, should be much easier to follow it now.

Note that the generic solution only covers the case of $(x-a)(x-b)$ when $a \neq b$. When the characteristic polynomial is $(x-a)^2$, the remainder of $P(x)$ modulo $(x-a)^2$ is determined by $P(a)$ and $P'(a)$:

$P(x) \equiv r \mod{(x-a)^2} \iff \begin{cases}P(a)=r,\\P'(a)=0.\end{cases}$

Therefore, we have a system of equations

$\begin{cases} \alpha a^{-p} &+& \beta a^{-q} &=& 1,\\ \alpha p a^{-p-1} &+& \beta q a^{-q-1} &=& 0. \end{cases}$

For this system, the determinant is $\frac{q-p}{a^{p+q+1}}$ and the solution is

$\boxed{\begin{matrix} \alpha = \dfrac{qa^p}{q-p},&\beta = \dfrac{pa^q}{p-q} \end{matrix}}$

Another interesting way to get this solution is via L'Hôpital's rule:

$\lim\limits_{x \to 0}\frac{a^q-(a+x)^q}{a^{q-p}-(a+x)^{q-p}} = \lim\limits_{x \to 0}\frac{q(a+x)^{q-1}}{(q-p)(a+x)^{q-p-1}} = \frac{qa^p}{q-p}.$

Let's consider the more generic case of the characteristic polynomial $(x-\lambda_1)(x-\lambda_2)\dots (x-\lambda_k)$.

102129D - Basis Change. The sequence $F$ satisfies $F_n=\sum\limits_{i=1}^k a_i F_{n-i}$. Find $c_1,\dots,c_k$ such that $F_n = \sum\limits_{i=1}^k c_i F_{n-b_i}$.

We need to find $\alpha_1, \dots, \alpha_k$ such that $F_n = \alpha_1 F_{n-c_1} + \dots + \alpha_k F_{n-c_k}$ for the given shifts $c_1, \dots, c_k$. It boils down to the system of equations

$\begin{cases} \alpha_1 \lambda_1^{-c_1}+\dots+\alpha_k \lambda_1^{-c_k} = 1,\\ \alpha_1 \lambda_2^{-c_1}+\dots+\alpha_k \lambda_2^{-c_k} = 1,\\ \dots\\ \alpha_1 \lambda_k^{-c_1}+\dots+\alpha_k \lambda_k^{-c_k} = 1.\\ \end{cases}$

This system of equations has the following matrix:

$A=\begin{bmatrix} \lambda_1^{-c_1} & \lambda_1^{-c_2} & \dots & \lambda_1^{-c_k} \\ \lambda_2^{-c_1} & \lambda_2^{-c_2} & \dots & \lambda_2^{-c_k} \\ \vdots & \vdots & \ddots & \vdots \\ \lambda_k^{-c_1} & \lambda_k^{-c_2} & \dots & \lambda_k^{-c_k} \end{bmatrix}$

Matrices of this kind are called alternant matrices. Let's denote its determinant as $D_{\lambda_1, \dots, \lambda_k}(c_1, \dots, c_k)$, then the solution is

$\alpha_i = \dfrac{D_{\lambda_1, \dots, \lambda_k}(c_1, \dots, c_{i-1}, \color{red}{0}, c_{i+1}, \dots, c_k)}{D_{\lambda_1, \dots, \lambda_k}(c_1, \dots, c_{i-1}, \color{blue}{c_i}, c_{i+1}, \dots, c_k)}.$

Unfortunately, in practice it makes more sense to find $\alpha_i$ with Gaussian elimination rather than with these direct formulas.
By adamant, history, 10 months ago

Hi everyone!

It's been quite some time since I wrote two previous articles in the cycle:

Part 1: Introduction
Part 2: Properties and interpretation
Part 3: In competitive programming

This time I finally decided to publish something on how one can actually use continued fractions in competitive programming problems.

A few months ago, I joined CP-Algorithms as a collaborator. The website also underwent a major design update recently, so I decided it would be great to use this opportunity and publish my new article there, so here it is:

CP-Algorithms — Continued fractions

It took me quite a while to write, and I made sure not only to describe common competitive programming challenges related to continued fractions, but also to describe the whole concept from scratch. That being said, the article is supposed to be self-contained.

Main covered topics:

1. Notion of continued fractions, convergents, semiconvergents, complete quotients.
2. Recurrence to compute convergents, notion of continuant.
3. Connection of continued fractions with the Stern-Brocot tree and the Calkin-Wilf tree.
4. Convergence rate with continued fractions.
5. Linear fractional transformations, quadratic irrationalities.
6. Geometric interpretation of continued fractions and convergents.

I really hope that I managed to simplify the general story-telling compared to the previous two articles.

Here are the major problems that are dealt with in the article:

• Given $a_1, \dots, a_n$, quickly compute $[a_l; a_{l+1}, \dots, a_r]$ in queries.
• Which number of $A=[a_0; a_1, \dots, a_n]$ and $B=[b_0; b_1, \dots, b_m]$ is smaller? How to emulate $A-\varepsilon$ and $A+\varepsilon$?
• Given $A=[a_0; a_1, \dots, a_n]$ and $B=[b_0; b_1, \dots, b_m]$, compute the continued fraction representations of $A+B$ and $A \cdot B$.
• Given $\frac{0}{1} \leq \frac{p_0}{q_0} < \frac{p_1}{q_1} \leq \frac{1}{0}$, find $\frac{p}{q}$ such that $(q,p)$ is lexicographically smallest and $\frac{p_0}{q_0} < \frac{p}{q} < \frac{p_1}{q_1}$.
• Given $x$ and $k$, $x$ is not a perfect square. Let $\sqrt x = [a_0; a_1, \dots]$, find $\frac{p_k}{q_k}=[a_0; a_1, \dots, a_k]$ for $0 \leq k \leq 10^9$.
• Given $r$ and $m$, find the minimum value of $q r \pmod m$ on $1 \leq q \leq n$.
• Given $r$ and $m$, find $\frac{p}{q}$ such that $p, q \leq \sqrt{m}$ and $p q^{-1} \equiv r \pmod m$.
• Given $p$, $q$ and $b$, construct the convex hull of lattice points below the line $y = \frac{px+b}{q}$ on $0 \leq x \leq n$.
• Given $A$, $B$ and $C$, find the maximum value of $Ax+By$ on $x, y \geq 0$ and $Ax + By \leq C$.
• Given $p$, $q$ and $b$, compute the following sum:
$\sum\limits_{x=1}^n \lfloor \frac{px+b}{q} \rfloor.$

So far, here is the list of problems that are explained in the article:

And an additional list of practice problems where continued fractions could be useful:

There are likely much more problems where continued fractions are used, please mention them in the comments if you know any!

Finally, since CP-Algorithms is supposed to be a wiki-like project (that is, to grow and get better as time goes by), please feel free to comment on any issues that you might find while reading the article, ask questions or suggest any improvements. You can do so in the comments below or in the issues section of the CP-Algorithms GitHub repo. You can also suggest changes via pull request functionality.

By adamant, history, 12 months ago

Hi everyone!

There are already dozens of blogs on linear recurrences, so why not make another one? In this article, my main goal is to highlight the possible approaches to solving linear recurrence relations, their applications and implications. I will try to derive the results with different approaches independently from each other, but will also highlight similarities between them after they're derived.

### Definitions

Def. 1. An order $d$ homogeneous linear recurrence with constant coefficients (or linear recurrence) is an equation of the form

$F_n = \sum\limits_{k=1}^d a_k F_{n-k}.$

Def. 2. In the equation above, the coefficients $a_1, \dots, a_d \in R$ are called the recurrence parameters.

Def. 3. A sequence $F_0, F_1, \dots \in R$ that satisfies the equation is called an order $d$ linear recurrence sequence.

The most common task with linear recurrences is, given the parameters $a_1, \dots, a_d$ and the initial values $F_0, F_1, \dots, F_{d-1}$, to find the value of $F_n$.

Example 1. The famous Fibonacci sequence $F_n = F_{n-1} + F_{n-2}$ is an order $2$ linear recurrence sequence.
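Computing $F_n$ straight from the definition takes $O(nd)$ time; a minimal sketch (the function name is mine), before we get to anything cleverer:

```python
def kth_term(a, f, n):
    """F_n for F_i = sum_{k=1}^{d} a[k-1] * F_{i-k}, given f = [F_0, ..., F_{d-1}]."""
    F = list(f)
    for i in range(len(f), n + 1):
        # apply the recurrence: a[0] multiplies F_{i-1}, a[1] multiplies F_{i-2}, ...
        F.append(sum(a[k] * F[i - 1 - k] for k in range(len(a))))
    return F[n]

print(kth_term([1, 1], [0, 1], 10))  # Fibonacci: 55
```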

Example 2. Let $F_n = n^2$. One can prove that $F_n = 3 F_{n-1} - 3 F_{n-2} + F_{n-3}$.

Example 3. Moreover, for $F_n = P(n)$, where $P(n)$ is a degree $d$ polynomial, it holds that

$F_n = \sum\limits_{k=1}^{d+1} (-1)^{k+1}\binom{d+1}{k} F_{n-k}.$

If this fact is not obvious to you, do not worry as it will be explained further below.
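If you want to convince yourself numerically in the meantime, here is a quick check of Examples 2 and 3 (the helper name is mine):

```python
from math import comb

def satisfies_poly_recurrence(P, d, upto=30):
    # checks F_n = sum_{k=1}^{d+1} (-1)^{k+1} * C(d+1, k) * F_{n-k} for F_n = P(n)
    return all(
        P(n) == sum((-1) ** (k + 1) * comb(d + 1, k) * P(n - k)
                    for k in range(1, d + 2))
        for n in range(d + 1, upto)
    )

print(satisfies_poly_recurrence(lambda n: n * n, 2))               # Example 2: True
print(satisfies_poly_recurrence(lambda n: n ** 3 - 4 * n + 7, 3))  # True
```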

Finally, before proceeding to the next sections, we'll need one more definition.

Def. 4. A polynomial

$A(x) = x^d - \sum\limits_{k=1}^d a_k x^{d-k}$

is called the characteristic polynomial of the linear recurrence defined by $a_1, \dots, a_d$.
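One reason the characteristic polynomial is central: if $x^n \equiv c_0 + c_1 x + \dots + c_{d-1} x^{d-1} \pmod{A(x)}$, then $F_n = c_0 F_0 + \dots + c_{d-1} F_{d-1}$, so $F_n$ can be found in $O(d^2 \log n)$ time by binary exponentiation modulo $A(x)$ (the Kitamasa algorithm). A minimal sketch, with names of my own choosing:

```python
def linear_rec_kth(a, f, n):
    """F_n for F_i = sum_{k=1}^{d} a[k-1] * F_{i-k}, given f = [F_0, ..., F_{d-1}]."""
    d = len(a)

    def mulmod(p, q):
        # multiply polynomials p, q, then reduce modulo A(x) using
        # x^i = sum_{k=1}^{d} a[k-1] * x^{i-k} for i >= d
        res = [0] * (len(p) + len(q) - 1)
        for i, pi in enumerate(p):
            for j, qj in enumerate(q):
                res[i + j] += pi * qj
        for i in range(len(res) - 1, d - 1, -1):
            c, res[i] = res[i], 0
            for k in range(1, d + 1):
                res[i - k] += c * a[k - 1]
        return res[:d] + [0] * (d - len(res))

    r, base = [1] + [0] * (d - 1), [0, 1]  # r = 1, base = x
    while n:
        if n & 1:
            r = mulmod(r, base)
        base = mulmod(base, base)
        n >>= 1
    return sum(c * fk for c, fk in zip(r, f))

print(linear_rec_kth([1, 1], [0, 1], 10))  # Fibonacci: 55
```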

Example 4. For the Fibonacci sequence, the characteristic polynomial is $A(x) = x^2-x-1$.

By adamant, history, 12 months ago, Hi everyone!

Today I'd like to write about some polynomials which are invariant under rotation and relabeling in Euclidean spaces. The model problems work with points in 3D space; however, both ideas might, to some extent, be generalized to higher dimensions. They can be useful for solving some geometric problems under the right conditions. I used some ideas around them in two problems that I set earlier.

#### Congruence check in random points

You're given two sets of points in 3D space. The second set was obtained from the first one by a rotation and relabeling. You're guaranteed that the first set was generated uniformly at random on the sphere; find the corresponding label permutation.

Actual problem: 102354F - Cosmic Crossroads.

##### Solution

Let $P_4(x, y, z) = \sum\limits_{l=1}^n \left((x-x_l)^2+(y-y_l)^2 + (z-z_l)^2\right)^2$. It is a fourth-degree polynomial whose geometric meaning is the sum of distances from $(x, y, z)$ to all points of the set, each distance raised to the power $4$. Distances are preserved under rotation, hence this expression is invariant under rotations. On the other hand, it may be rewritten as

$P_4(x, y, z) = \sum\limits_{i=0}^4 \sum\limits_{j=0}^4 \sum\limits_{k=0}^4 A_{ijk} x^i y^j z^k,$

where each coefficient $A_{ijk}$ is obtained as a sum over all points $(x_l,y_l,z_l)$ of the initial set. To find the permutation, it is enough to calculate $P_4$ at all points of both sets and then match the points that share the same index after being sorted by the corresponding $P_4$ value.
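The matching step can be sketched in a few lines of numpy (function names are mine): compute the $P_4$ signature of every point against its own set, then pair points by their rank in sorted order.

```python
import numpy as np

def p4_signature(pts):
    # For each point p of the set: sum_q |p - q|^4 over the same set.
    # It depends only on pairwise distances, so it survives rotation + relabeling.
    diff = pts[:, None, :] - pts[None, :, :]   # pairwise difference vectors
    d2 = (diff ** 2).sum(axis=2)               # squared pairwise distances
    return (d2 ** 2).sum(axis=1)               # sum of fourth powers of distances

def match(A, B):
    # perm[i] = index in B of the point that A[i] was mapped to
    sa, sb = p4_signature(A), p4_signature(B)
    return np.argsort(sb)[np.argsort(np.argsort(sa))]
```

In practice the coordinates are floating-point, so one would compare signatures with a tolerance; for random inputs the signatures are distinct with probability $1$.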

It is tempting to try the same trick with $P_2(x, y, z)$, but in this specific problem it turns out to be the same for all points of the set:

\begin{align} P_2(x, y, z) =& \sum\limits_{l=1}^n [(x-x_l)^2+(y-y_l)^2+(z-z_l)^2]\\ =& n \cdot (x^2+y^2+z^2) - 2x \sum\limits_{l=1}^n x_l - 2y \sum\limits_{l=1}^n y_l - 2z \sum\limits_{l=1}^n z_l + \sum\limits_{l=1}^n [x_l^2+y_l^2+z_l^2] \\ =& n \left[\left(x-\bar x\right)^2 + \left(y-\bar y\right)^2 + \left(z-\bar z\right)^2\right] - n(\bar x^2+\bar y^2+\bar z^2) + \sum\limits_{l=1}^n (x_l^2 + y_l^2 + z_l^2), \end{align}

where $\bar x$, $\bar y$ and $\bar z$ are the mean values of $x_l$, $y_l$ and $z_l$ correspondingly. As you can see, the non-constant part here is simply the squared distance from $(x, y, z)$ to the center of mass of the set. Thus, $P_2(x, y, z)$ is the same for all points at the same distance from the center of mass, so it is of no use in 102354F - Cosmic Crossroads, where all input points are at distance $1$ from it.
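A quick numeric sanity check of the rearrangement above (numpy sketch, names mine): $P_2$ evaluated directly matches the centroid form.

```python
import numpy as np

rng = np.random.default_rng(1)
pts = rng.normal(size=(7, 3))  # random point set (x_l, y_l, z_l)
r = rng.normal(size=3)         # evaluation point (x, y, z)
n = len(pts)

direct = ((r - pts) ** 2).sum()   # sum_l |r - r_l|^2, computed term by term
c = pts.mean(axis=0)              # center of mass (x̄, ȳ, z̄)
via_centroid = n * ((r - c) ** 2).sum() - n * (c ** 2).sum() + (pts ** 2).sum()

print(np.isclose(direct, via_centroid))  # True
```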

Burunduk1 taught me this trick after the Petrozavodsk camp round which featured the model problem.

#### Sum of squared distances to the axis passing through the origin

You're given a set of points $r_k=(x_k, y_k, z_k)$. The torque needed to rotate the system of points around an axis $r=(x, y, z)$ is proportional to the sum of squared distances to the axis over all points. You need to find the minimum number of points that have to be added to the set so that the torque needed to rotate it around any axis passing through the origin is exactly the same.

Actual problem: Hackerrank — The Axis of Awesome

##### Solution

The squared distance from the point $r_k$ to the axis $r$ is expressed as

$\dfrac{|r_k \times r|^2}{r \cdot r} = \dfrac{(y_k z - z_k y)^2+(x_k z - z_k x)^2+(x_k y - y_k x)^2}{x^2+y^2+z^2}.$

The numerator here is a quadratic form, hence can be rewritten as

$|r_k \times r|^2 = \begin{pmatrix}x & y & z\end{pmatrix} \begin{pmatrix} y_k^2 + z_k^2 & -x_k y_k & -x_k z_k \\ -x_k y_k & x_k^2 + z_k^2 & -y_k z_k \\ -x_k z_k & -y_k z_k & x_k^2 + y_k^2 \end{pmatrix} \begin{pmatrix}x \\ y \\ z\end{pmatrix}.$
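A one-off numeric check that the matrix above indeed represents $|r_k \times r|^2$ (numpy sketch, names mine):

```python
import numpy as np

def inertia_contrib(rk):
    # quadratic form matrix of a single point r_k, as in the formula above
    x, y, z = rk
    return np.array([
        [y * y + z * z, -x * y,        -x * z],
        [-x * y,        x * x + z * z, -y * z],
        [-x * z,        -y * z,        x * x + y * y],
    ])

rng = np.random.default_rng(2)
rk, r = rng.normal(size=3), rng.normal(size=3)
cross = np.cross(rk, r)
print(np.isclose(cross @ cross, r @ inertia_contrib(rk) @ r))  # True
```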

Correspondingly, the sum of squared distances over $k=1, \dots, n$ is defined by the quadratic form

$I = \sum\limits_{k=1}^n\begin{pmatrix} y_k^2 + z_k^2 & -x_k y_k & -x_k z_k \\ -x_k y_k & x_k^2 + z_k^2 & -y_k z_k \\ -x_k z_k & -y_k z_k & x_k^2 + y_k^2 \end{pmatrix},$

known in analytical mechanics as the inertia tensor. Under a rotation $R$, it transforms as $I \mapsto R I R^T$, so in particular its eigenvalues are invariant.

The inertia tensor is a positive semidefinite quadratic form, hence there is an orthonormal basis in which it is diagonal:

$I = \begin{pmatrix}I_1 & 0 & 0 \\ 0 & I_2 & 0 \\ 0 & 0 & I_3\end{pmatrix}.$

Here $I_1$, $I_2$ and $I_3$ are the eigenvalues of $I$, also called the principal moments of inertia (the corresponding eigenvectors are called the principal axes of inertia). From this representation we deduce that the condition from the statement holds if and only if $I_1 = I_2 = I_3$.

Adding a single point on a principal axis only increases the principal moments on the other two axes. For example, adding $(x, 0, 0)$ increases $I_2$ and $I_3$ by $x^2$. Knowing this, one can prove that the answer to the problem is exactly $3-m$, where $m$ is the multiplicity of the smallest eigenvalue of $I$.
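The whole solution then reduces to a few lines (numpy sketch, names mine): assemble $I$, take its eigenvalues, and count the multiplicity of the smallest one.

```python
import numpy as np

def min_points_to_balance(pts, tol=1e-9):
    """3 - (multiplicity of the smallest eigenvalue of the inertia tensor)."""
    pts = np.asarray(pts, dtype=float)
    # |r_k|^2 * Id - r_k r_k^T reproduces the matrix from the formula above
    I = sum(np.dot(p, p) * np.eye(3) - np.outer(p, p) for p in pts)
    ev = np.linalg.eigvalsh(I)   # ascending and real, since I is symmetric
    m = int(np.sum(np.abs(ev - ev[0]) < tol * max(1.0, abs(ev[-1]))))
    return 3 - m

print(min_points_to_balance([(1, 0, 0), (0, 1, 0), (0, 0, 1)]))  # 0
```

For three unit points on the coordinate axes the tensor is already a multiple of the identity, so no points need to be added.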

##### Applying it to the first problem

Now, another interesting observation about the inertia tensor is that both the principal inertia moments and the principal inertia axes are preserved under rotation. It means that in the first problem, another possible way to find the rotation and the permutation of points is to compute the principal inertia axes for both sets of points and then find a rotation that matches the corresponding axes of the first and the second set.

Unfortunately, this method requires the principal inertia moments to be all distinct (which generally holds for random sets of points); otherwise, there would be infinitely many eigendecompositions of $I$.