### Black_Fate's blog

By Black_Fate, history, 5 weeks ago,

Hello Codeforces!

Binary search is useful in many situations, for example interactive problems or minimize-the-maximum problems. However, for many binary search problems the monotonicity may be hard to spot, so participants may not realize that binary search applies.

You may think that binary search is simple, but after this blog, you may think more deeply about how binary search works in competitive programming.

### Part A — What is binary search?

Consider the following problem: you are going to guess a number between $0$ and $100$, and I'll tell you whether the number you guessed is smaller than the number I thought of. How do you minimize the number of guesses?

Well, you may guess $50$ first; if $50$ is too large, guess $25$, otherwise $75$, and so on. Each guess cuts off half of the possible numbers: if your guess is too large, every larger number is ruled out, and if it is too small, every smaller number is ruled out.

You will need at most $\log_2 100$ guesses. Replacing $100$ with $n$, the number of guesses is at most $\mathcal O(\log n)$.

Why does binary search apply here? Let's consider the following monotonic function $f(x)$:

And you query the middle position; if the value there is too small, the left half is discarded:

Otherwise, it's just the opposite.

This is the motivation behind binary search. Binary search can be performed only when monotonicity is known; otherwise, neither half can be discarded as useless.

### Part B — Implementation

Here is a standard implementation:

```cpp
bool chk(int x) {
    // something
}
signed main() {
    // ...
    l = 1, r = maxn;
    while (l <= r) {
        int mid = (l + r) / 2;
        if (chk(mid)) l = mid + 1;
        else r = mid - 1;
    }
    cout << l - 1 << endl;
    return 0;
}
```

The final $l-1$ is the largest value $x$ with $\text{chk}(x)$ true: the loop keeps the invariant that every value left of $l$ passes the check and every value right of $r$ fails it.

To help understanding, let's write out the number-guessing code.

```cpp
#include <bits/stdc++.h>
#define int long long
using namespace std;
bool chk(int x) {
    cout << ">= " << x << endl;
    string str; cin >> str;
    return str == "Yes";
}
const int maxn = 100;
signed main() {
    int l = 1, r = maxn;
    while (l <= r) {
        int mid = (l + r) / 2;
        if (chk(mid)) l = mid + 1;
        else r = mid - 1;
    }
    cout << "! " << l - 1 << endl;
    return 0;
}
```

For example, if we are guessing the number $88$, this is the process:

```
>= 50
Yes
>= 75
Yes
>= 88
Yes
>= 94
No
>= 91
No
>= 89
No
! 88
```

### Part C — Basic binary search situations

You've got $n$ trees, the $i$-th tree has the height of $h_i$ meters. You are going to cut them. You are going to determine a value $x$, which is the height of the saw. And then:

• For trees with $h_i\ge x$, do $h_i\gets x$, and you will get $(h_i-x)$ meters of wood.
• For trees with $h_i< x$, nothing changes.

You want to get at least $m$ meters of wood from the trees; to protect the environment, what's the largest $x$ you can choose?

• $1\leq n\leq 10^6$
• $1\leq h_i\leq 2\times 10^9$

It's obvious that as $x$ increases, the amount of wood you get decreases. This means the monotonicity of the problem is very clear, so let's do binary search:

```cpp
#include <bits/stdc++.h>
#define int long long
#define rep(i, l, r) for(int i = (l); i <= (r); i++)
using namespace std;
int A[1000005], n, m;
bool chk(int x) {
    int sum = 0;
    rep(i, 1, n) if (A[i] > x) sum += A[i] - x;
    return sum >= m;
}
signed main() {
    int maxn = 0;
    cin >> n >> m;
    rep(i, 1, n) {
        cin >> A[i];
        maxn = max(A[i], maxn);
    }
    int l = 1, r = maxn;
    while (l <= r) {
        int mid = (l + r) / 2;
        if (chk(mid)) l = mid + 1;
        else r = mid - 1;
    }
    cout << l - 1 << endl;
    return 0;
}
```

You've got an array $a_1,a_2,a_3,\dots,a_n$, and you are going to divide the array into $m$ contiguous segments. You want to minimize the maximum segment sum.

For example, $a=[4, 2, 4, 5, 1], m=3$, the answer is $6$ since $[4, 2][4][5, 1]$ is possible, and $\max(6, 4, 6) = 6$ which is the best.

• $1\leq m\leq n\leq 10^5$.
• $0\leq a_i\leq 10^8$.

We can see that the larger the allowed maximum is, the fewer segments are needed. This gives the monotonicity. So we binary search on the maximum: if the minimum number of segments needed is no more than $m$, that maximum can be achieved, otherwise it cannot.

So this is the check function:

```cpp
bool chk(int x) {
    int s = 0, ret = 0;
    for (int i = 1; i <= n; i++) {
        if (s + a[i] > x) ret++, s = 0;
        s += a[i];
    }
    return ret >= m; // ret + 1 segments are needed; true means x is too small
}
```

Complete code example:

```cpp
#include <bits/stdc++.h>
#define int long long
using namespace std;
int a[200005], n, m;
bool chk(int x) { /* See above */ }
signed main() {
    cin >> n >> m;
    int sum = 0, maxn = 0;
    for (int i = 1; i <= n; i++)
        cin >> a[i], sum += a[i], maxn = max(maxn, a[i]);
    int l = maxn, r = sum;
    while (l <= r) {
        int mid = (l + r) / 2;
        if (chk(mid)) l = mid + 1;
        else r = mid - 1;
    }
    cout << l << endl;
    return 0;
}
```

Task C3 — [ABC269E] Last Rook

This is an interactive problem.

First, let's think of this picture:

You may find that a rook can never be placed in the blue part, since each of those columns already contains a rook. So we have to place a rook in the red part, and all the blue columns can be considered useless.

In implementation, you can always ask about the left half of the chessboard. If the result equals the number of columns asked about, the left half is useless. Otherwise, the answer lies in the left half. Each query cuts the candidate part in half, so in total you spend $\log n$ queries to find the column without a rook.

In the same way, find the row without a rook.

The final complexity of the algorithm is $\mathcal O(\log n)$ queries.

```cpp
// Assumes "#define int long long" as in the snippets above, matching %lld.
int solve() {
    int n; cin >> n;
    int l = 1, r = n;
    while (l < r) {
        int mid = (l + r) >> 1;
        printf("? %lld %lld %d %lld\n", l, mid, 1, n); // ask columns [l, mid], all rows
        cout << flush;
        int x; cin >> x;
        if (x != (mid - l + 1)) r = mid; // fewer rooks than columns: the empty column is here
        else l = mid + 1;
    }
    int ansx = l;
    l = 1, r = n;
    while (l < r) {
        int mid = (l + r) >> 1;
        printf("? %d %lld %lld %lld\n", 1, n, l, mid); // ask rows [l, mid], all columns
        cout << flush;
        int x; cin >> x;
        if (x != (mid - l + 1)) r = mid;
        else l = mid + 1;
    }
    int ansy = l;
    printf("! %lld %lld\n", ansx, ansy);
    return 0;
}
```

Task C4

This is an interactive problem.

We have a hidden permutation $a$ of $1,2,3,\dots,n$, and another array $d$ which is the prefix maximum of $a$, i.e. $d_i=\max(a_1,\dots,a_i)$.

You can ask the interacter to do the following two operations:

• Tell how many different values there are in $d_{1\sim i}$.
• Make $a_i\gets 0$. Note that this may change the array $d$.

You need to guess out the initial permutation $a$ within $5500$ operations, $n\leq 500$.

First let's think about $n\leq 70$.

Let $\text{ask}(k)$ denote the answer to the first query with bound $k$. For the position $i$ of the current maximum, it's obvious that:

• $d_{i-1}\not=d_{i}$
• $d_i=d_{i+1}=d_{i+2}=\ldots=d_n$
• $\text{ask}(i-1)+1=\text{ask}(i)=\text{ask}(n)$

Since $\text{ask}$ is non-decreasing, we can find the first $i$ that satisfies $\text{ask}(i)=\text{ask}(n)$.

This finds the biggest number, or rather, the position of $n$. Then let's find $n-1$: to make $n-1$ the biggest remaining number, set the position of $n$ to $0$ (the second operation), then use the same method to locate $n-1$. Repeating this, all positions become known.

If we brute-force the first $i$ satisfying $\text{ask}(i)=\text{ask}(n)$, this takes $\mathcal O(n^2)$ queries, which solves $n=70$. But using the monotonicity again, we can binary search for that first $i$, optimizing it to $\mathcal O(n\log n)$ queries.

```cpp
// The beginning of this snippet was cut off; ask() below is reconstructed
// to match its uses (the query format "? 1 k" is an assumption).
int n, sum, ans[505];
int ask(int k) {
    printf("? 1 %d\n", k); fflush(stdout);
    int x; scanf("%d", &x);
    return x;
}
void chg(int k) {
    printf("? 2 %d\n", k); fflush(stdout);
}
void got(int n) {
    printf("! ");
    for (int i = 1; i <= n; i++) printf("%d ", ans[i]);
    puts(""); fflush(stdout);
}
void solve() {
    for (int i = n; i >= 1; i--) {
        sum = ask(n); // number of distinct prefix maxima of the current array
        int l = 1, r = n;
        while (l < r) {
            int mid = l + ((r - l) >> 1);
            if (ask(mid) < sum) l = mid + 1;
            else r = mid;
        }
        ans[l] = i;       // l is the position of the current maximum i
        if (i > 1) chg(l);
    }
    got(n);
}
```

### Part D — Harder Tasks

Specific solutions will be given in the next blog.

You can try to solve it by yourself before the next blog.

Task D1

To minimize the maximum, we can consider binary search, and we can implement the check function with a greedy.

Task D2 — 505E Mr. Kitayuta vs. Bamboos

To minimize the maximum, we can consider binary search. To implement the checker, consider working through the process in reverse with a heap.

Task D3 — 627D Preorder Test

To maximize the minimum, we can consider binary search. To implement the checker, consider a DP on the tree.

### Ending

This blog covered the basic binary search algorithm; we'll talk about ternary search and more binary search tasks in the next blog.

Thanks for reading! If you still have some problems, you can leave a friendly comment. I'll try my best to reply.


By Black_Fate, history, 2 months ago,

Hello Codeforces!

Data structures form such a huge subject that it's really hard for me to write it all down in a single blog, so let's discuss the simple DSU first.

DSU is really easy for beginners to understand, and this blog also covers more than the simplest DSU.

UPD 2023-1-19: Time complexity analysis updated, thanks Everule for help.

### Part 0 — How does DSU work?

Firstly, DSU is short for "Disjoint Set Union". Let's go to a simple task then.

There are $n$ elements, the $i$-th element being in the $i$-th set initially, and now you can do $m$ operations of the types below:

• M a b — merge the set that $a$ belongs to and the set that $b$ belongs to into one new set. If $a$ and $b$ are already in the same set, do nothing.
• Q a b — tell whether $a$ and $b$ are in the same set.

Let's go straight to the problem solution.

Solution:

Let's consider constructing a graph. Initially every vertex has a self-loop, which means there are edges $i\to i$, and $\text{root}(i)=i$.

The graph above is the initial state of the graph.

Then, for an M a b operation, we link $\text{root}(a)$ with $\text{root}(b)$ by making a new edge $\text{root}(a)\to \text{root}(b)$.

The graph above is the state of the graph at some point, if we do M 1 3, then since the root of $1$ is $2$, the root of $3$ is $5$, so:

The graph above is the state after M 1 3.

Note that the direction of the edges is not important.

Now we can write out a code to implement the algorithm above.

```cpp
const int N = 1000005;
int fa[N];
void init(int n) {
    for (int i = 1; i <= n; i++) fa[i] = i;
} // the initial situation
int find(int x) {
    if (fa[x] == x) return x;
    return find(fa[x]);
} // get root(x)
bool check(int x, int y) {
    return find(x) == find(y);
} // query
void merge(int x, int y) {
    if (check(x, y)) return;
    fa[find(x)] = find(y);
} // merge and make edge
```

Note that sometimes this algorithm is too slow for large $n$. In an extreme case, we do M 1 2, then M 2 3, then M 3 4, and so on; the tree then degenerates into a chain. If we repeatedly query the bottom of that chain, the total time complexity becomes $\mathcal O(n^2)$.

How can we do better? Let's shorten the paths traversed while finding the root: simply point $i$ directly at $\text{root}(i)$, so the next time we look for its root we find it in $\mathcal O(1)$ time. This is called path compression.

The better find function:

```cpp
int find(int x) {
    if (fa[x] == x) return x;
    return fa[x] = find(fa[x]);
} // get root(x), compressing the path
```

Pay attention to the statement fa[x] = find(fa[x]): it returns the value of find(fa[x]) and also makes fa[x] become find(fa[x]). So one call both travels to the root and records the root for each visited vertex, making later queries of the same vertex $\mathcal O(1)$. After that, the graph becomes:

Which looks like a flower:

Picture taken from the Chinese searching website baidu.com.

So we can call it "a flower graph". Querying the root of the vertex in a flower graph takes $\mathcal O(1)$ time.

The overall time complexity will be discussed in Part 2.

However, the time complexity still depends on how we merge: choosing fa[u] = v or fa[v] = u may affect the complexity. There are two ways to merge the sets:

1. Record the size of the tree each root represents; the root of the smaller tree becomes the child.

2. Record the depth (rank) of the tree each root represents; the root of the shallower tree becomes the child.

Here is an example implementation for the first way to merge the sets:

```cpp
void merge(int x, int y) {
    x = find(x), y = find(y);
    if (x == y) return;
    if (size[x] < size[y]) swap(x, y);
    fa[y] = x;
    size[x] += size[y];
}
```

Both versions are easy to implement and make the time complexity smaller, as discussed in Part 2. This technique is called heuristic merging (union by size / by rank).
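The depth-based version isn't written out above; a sketch under my own naming (`rnk` as the rank array) could look like this:

```cpp
#include <bits/stdc++.h>
using namespace std;

const int N = 1000005;
int fa[N], rnk[N]; // rnk[x]: an upper bound on the height of x's tree

void init(int n) {
    for (int i = 1; i <= n; i++) fa[i] = i, rnk[i] = 0;
}
int find(int x) {
    return fa[x] == x ? x : fa[x] = find(fa[x]);
}
void merge(int x, int y) {
    x = find(x), y = find(y);
    if (x == y) return;
    if (rnk[x] < rnk[y]) swap(x, y); // attach the shorter tree under the taller one
    fa[y] = x;
    if (rnk[x] == rnk[y]) rnk[x]++;  // equal heights: the merged tree grows by one
}
```

With path compression on top, `rnk` is only an upper bound on the real height, which is exactly why it is called a rank rather than a depth.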

### Part 1: Problems solved using dsu

#### ABC284C

For each edge x y, just do merge(x, y). Then find the roots of all vertices; the number of distinct roots is the answer.

Part of my code:

```cpp
int n, m, fa[100005];
int find(int x) { return fa[x] == x ? x : fa[x] = find(fa[x]); }
void merge(int x, int y) { fa[find(x)] = find(y); }
//----------------------------------------------------------------
int solve() {
    cin >> n >> m;
    for (int i = 1; i <= n; i++) fa[i] = i;
    for (int i = 1; i <= m; i++) {
        int u, v; cin >> u >> v; // reading the edge (omitted in the original snippet)
        merge(u, v);
    }
    set<int> s; // this is not really necessary
    for (int i = 1; i <= n; i++) s.insert(find(i));
    cout << s.size() << '\n';
    return 0;
}
```

Since $\text{root}(i)\in [1,n]$, there is no need to use std::set to count the roots; an array of counters is enough.

#### Extensions for ABC284C

If we also want the answer of ABC284C while constructing the graph, call the answer cnt; then we can maintain the variable cnt directly. When merging, if the two vertices are already in the same set, cnt remains unchanged; otherwise cnt decreases by $1$, since $2$ sets became $1$ set.

```cpp
void merge(int x, int y) {
    if (check(x, y)) return;
    fa[find(x)] = find(y), cnt--;
} // merge and make edge
```

Note that the initial value of cnt is the number of vertices, a.k.a. $n$.

#### USACO 2010 OCT Silver — Lake Counting

You are given a map of the farm. . stands for land, and W stands for water. Farmer John would like to figure out how many ponds have formed in his field. A pond is a connected set of squares with water in them, where a square is considered adjacent to all eight of its neighbors. Given a diagram of Farmer John's field, determine how many ponds he has.

For example, the map below:

```
10 12
W........WW.
.WWW.....WWW
....WW...WW.
.........WW.
.........W..
..W......W..
.W.W.....WW.
W.W.W.....W.
.W.W......W.
..W.......W.
```
Turns out that there are $3$ ponds.

Solution:

This task seems to have nothing to do with DSU, so consider how to construct the graph.

If two W squares are adjacent to each other, merge them. After all these merging operations are done, the answer of ABC284C (the number of components) is the answer we need. Note that you must not count the squares containing `.`.

You can label the squares with numbers, for example cell $(i,j)$ becomes $i\cdot m+j$.
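Since the spoiler code isn't shown here, a minimal self-contained sketch of this approach (the helper name `countPonds` and the 0-based labeling $i\cdot m+j$ are my own choices):

```cpp
#include <bits/stdc++.h>
using namespace std;

int fa[250005];
int find(int x) { return fa[x] == x ? x : fa[x] = find(fa[x]); }
void merge(int x, int y) { fa[find(x)] = find(y); }

// count ponds: connected groups of 'W' cells, adjacency in 8 directions
int countPonds(const vector<string>& g) {
    int n = g.size(), m = g[0].size();
    for (int i = 0; i < n * m; i++) fa[i] = i;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < m; j++) {
            if (g[i][j] != 'W') continue;
            for (int di = -1; di <= 1; di++)
                for (int dj = -1; dj <= 1; dj++) {
                    int ni = i + di, nj = j + dj;
                    if (ni < 0 || nj < 0 || ni >= n || nj >= m) continue;
                    if (g[ni][nj] == 'W') merge(i * m + j, ni * m + nj);
                }
        }
    set<int> roots; // distinct roots among 'W' cells only
    for (int i = 0; i < n; i++)
        for (int j = 0; j < m; j++)
            if (g[i][j] == 'W') roots.insert(find(i * m + j));
    return roots.size();
}
```

Running it on the sample map above yields $3$, matching the expected answer.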


Practice:

Try to turn the 2D problem into a 3D problem.


### Part 2 — Time complexity of dsu algorithm

If you only do path compression without heuristic merging, the worst-case amortized complexity per operation becomes $\mathcal O(\log n)$; the average complexity is the same as the version with heuristic merging.

If you are also doing heuristic merging, the time complexity is as follows:

We usually say the amortized time complexity per operation is $\mathcal O(\alpha (n))$ when there are $n$ elements to deal with.

What is $\alpha (n)$? We know that there is a famous function called the Ackermann function, which is:

$A_{k}(j)=\left\{\begin{array}{ll} j+1 & k=0 \\ A_{k-1}^{(j+1)}(j) & k \geq 1 \end{array}\right.$

$\alpha(n)$ is the inverse of the Ackermann function. The Ackermann function grows extremely fast, so $\alpha(n)$ grows extremely slowly.

However, we do not call $\mathcal O(n\alpha(n))$ linear; we just treat $\alpha(n)$ as a tiny constant.

The proof uses the potential method, which may be too hard to understand for beginners. If you are interested in this analysis, you can search for it on the Internet; we will skip it here.

However, path compression performs many modifications to the structure, which can hurt the complexity when you combine DSU with segment tree merging or make it persistent. In those cases we do not use path compression; heuristic merging alone is enough.

### Part 3 — Another example on dsu

Let's think about a question: DSU can handle many kinds of merging. What about splitting sets and deleting vertices? It turns out that DSU cannot handle splitting sets of vertices efficiently. Let's see a simple example below.

#### Provincial selection contest in China, 2008 — Vertex deletion

Given a graph on the vertices $1,2,3,\dots, n$. Then there are $q$ operations of the following kinds:

• Remove a vertex $x$ from the graph.
• Just after each vertex deletion, answer how many connected blocks there are in the graph.

Solution:

Note that removing a vertex is difficult. Let's process the queries offline and run the whole process in reverse: deletion then turns into addition, which DSU handles easily.
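The spoiler code is likewise omitted, so here is a sketch of the reversed process (0-indexed vertices; the struct and function names are my own, and the I/O wrapper is left out since the original input format isn't shown):

```cpp
#include <bits/stdc++.h>
using namespace std;

struct Dsu {
    vector<int> fa;
    Dsu(int n) : fa(n) { iota(fa.begin(), fa.end(), 0); }
    int find(int x) { return fa[x] == x ? x : fa[x] = find(fa[x]); }
    bool merge(int x, int y) {          // returns true if two blocks were joined
        x = find(x), y = find(y);
        if (x == y) return false;
        fa[x] = y; return true;
    }
};

// ans[i] = number of connected blocks just after the i-th deletion
vector<int> solveOffline(int n, const vector<pair<int,int>>& edges, const vector<int>& del) {
    vector<vector<int>> adj(n);
    for (auto [u, v] : edges) adj[u].push_back(v), adj[v].push_back(u);
    vector<bool> dead(n, false);
    for (int u : del) dead[u] = true;
    Dsu d(n);
    int comp = 0;
    for (int i = 0; i < n; i++) if (!dead[i]) comp++;   // vertices surviving all deletions
    for (auto [u, v] : edges)
        if (!dead[u] && !dead[v] && d.merge(u, v)) comp--;
    int k = del.size();
    vector<int> ans(k);
    for (int i = k - 1; i >= 0; i--) {                  // undo deletions in reverse order
        ans[i] = comp;
        int u = del[i];
        dead[u] = false; comp++;
        for (int v : adj[u])
            if (!dead[v] && d.merge(u, v)) comp--;
    }
    return ans;
}
```

For a path $0-1-2-3$ with vertices $1$ then $2$ deleted, the answers are $2$ and $2$: after each deletion the survivors split into two blocks.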


After solving the problem above, I think you can solve basic problems using the DSU method; now let's go to further discussion on DSU.

### Part 4 — Homework

#### CNOI2015 — Algorithm self-analysis

There are $m$ elements $e_1,e_2,\dots,e_m$ and $n$ constraints:

• 1 x y, which means that $e_x=e_y$.
• 2 x y, which means that $e_x\not=e_y$.

Tell whether there is a solution for $e$ under the constraints.

$1\leq m\leq 10^{12}, 1\leq n\leq 10^6$.

#### CNOI2002 — Legend of Galactic Heroes

There are $n$ queues $Q_1,Q_2,\dots,Q_n$; initially, queue $Q_i$ contains only the element $i$. Then you can do the operations below:

• M i j, which means to push all the elements of $Q_j$ into the queue $Q_i$, without changing the order of the elements of $Q_j$.
• Q i j, which means to query whether $i$ and $j$ are in the same queue, and if so, to give the distance between the two elements.

### Part 5: Ending

This blog post explained how to use the DSU algorithm, but that's not all DSU can do; it can be used in many more ways. In the next blog we'll discuss more problems solved with DSU, and more information about DSU merging.

Thanks for reading. If you have any questions, just leave me a comment and I'll check it out.


By Black_Fate, history, 3 months ago,

We've talked about generating functions in past blogs, but this time we are going to explain them in a better way, as I think last time's note was really messy.

First let's have a look at this problem: how many subsets $s$ of $S=\lbrace 1,2,3,4,5\rbrace$ are there which fit:

• The sum of all the elements in $s$ is $k$.

This may be a very traditional problem, let's see the solution below:

• Let's construct the function $f(x) = (1 + x) (1 + x^2) (1 + x^3) (1 + x^4) (1 + x^5)$ and expand it into $1 + x + x^2 + 2 x^3 + 2 x^4 + 3 x^5 + 3 x^6 + 3 x^7 + 3 x^8 + 3 x^9 + 3 x^{10} + 2 x^{11} + 2 x^{12} + x^{13} + x^{14} + x^{15}$. To get the answer for a given $k$, just look up the coefficient of $x^k$. If you don't believe this, let's just write it all out:

Denote by $\text{sum}[\cdot]$ the sum of an array.

• $\text{sum}[]=0$.
• $\text{sum}[1]=1$.
• $\text{sum}[2]=2$.
• $\text{sum}[1,2]=\text{sum}[3]=3$.
• $\text{sum}[1,3]=\text{sum}[4]=4$.
• $\text{sum}[1,4]=\text{sum}[2,3]=\text{sum}[5]=5$.
• $\text{sum}[1,5]=\text{sum}[2,4]=\text{sum}[1,2,3]=6$.
• $\text{sum}[1,2,4]=\text{sum}[2,5]=\text{sum}[3,4]=7$.
• $\text{sum}[3,5]=\text{sum}[1,2,5]=\text{sum}[1,3,4]=8$.
• $\text{sum}[2,3,4]=\text{sum}[1,3,5]=\text{sum}[4,5]=9$.
• $\text{sum}[1,2,3,4]=\text{sum}[2,3,5]=\text{sum}[1,4,5]=10$.
• $\text{sum}[1,2,3,5]=\text{sum}[2,4,5]=11$.
• $\text{sum}[1,2,4,5]=\text{sum}[3,4,5]=12$.
• $\text{sum}[1,3,4,5]=13$.
• $\text{sum}[2,3,4,5]=14$.
• $\text{sum}[1,2,3,4,5]=15$.
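Both computations can be checked mechanically. Here is a short sketch that multiplies the factors $(1+x^i)$ one at a time, which is just a subset-sum DP (the function name `expand` is mine):

```cpp
#include <bits/stdc++.h>
using namespace std;

// expand (1+x)(1+x^2)...(1+x^k) and return its coefficient array
vector<long long> expand(int k) {
    int total = k * (k + 1) / 2;           // largest possible exponent
    vector<long long> f(total + 1, 0);
    f[0] = 1;                              // empty product = 1
    for (int i = 1; i <= k; i++)           // multiply by (1 + x^i)
        for (int s = total; s >= i; s--)   // descending: each i is used at most once
            f[s] += f[s - i];
    return f;
}
```

`expand(5)` reproduces exactly the coefficients listed in the expansion above, e.g. coefficient $2$ at $x^3$ and $3$ at $x^8$.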

And you find that the two ways give exactly the same results. Why does this work? Well, you can understand it intuitively, just like the way we prove the binomial theorem.

But wait, doesn't the function look like it came out of the blue? So let's explain it.

First, note that if we were just counting the number of different subsets, the product would be $(1+1)(1+1)(1+1)(1+1)(1+1)=32$. It's really similar to the function we constructed, except that each second $1$ has been turned into $x^i$.

Just as we said last time, $x^i$ can be considered a formal symbol, without caring what $x$ actually represents.

We can concretize the function:

We can find that merging options with an $\text{and}$ corresponds to multiplying them together, by the multiplication principle. Likewise, by the addition principle, an $\text{or}$ corresponds to adding the options together.

But in this task it's different: choosing both leads to a sum of element values, not a product. So let's look for a way to turn multiplication into addition, and clever readers may see that exponents do exactly that, since $x^a\times x^b=x^{a+b}$.

To take the element $i$ into your subset, the exponent contributed should be $i$; otherwise it should be $0$. Since $x^0=1$ $(x\not=0)$, we can turn the concretized function into the mathematical function:

$f(x) = (x^0 + x) (x^0 + x^2) (x^0 + x^3) (x^0 + x^4) (x^0 + x^5)$

Or:

$f(x) = (1 + x) (1 + x^2) (1 + x^3) (1 + x^4) (1 + x^5)$

Here we go. You may be confused that there seem to be two different additions here: exponent addition, which is the addition from the problem, and addition between terms, which is the $\text{or}$ operation. If you are interested in more than one coefficient of the function, you can substitute more elaborate values $x_0$ for $x$ and compute $f(x_0)$, which may combine exactly the coefficients you are interested in.

Here is a simple example. If you are going to count the number of subsets $s$ of $S=\lbrace 1, 2, 3, \dots, 2000\rbrace$, which fit:

• The sum of all the elements in $s$ is $5t$, $t\in\mathbb {N}$.

Then we only need to care about the coefficients of the terms whose exponents are divisible by $5$. Just substitute $x=\zeta$, which satisfies:

$\zeta^5-1=0$

According to the fundamental theorem of algebra, the equation has $5$ distinct roots

$\zeta_1,\zeta_2,\zeta_3,\zeta_4,\zeta_5$

on the complex plane, with $\zeta_1=1$. Let's visualize the roots:

Actually, $\zeta_2^2=\zeta_3$, $\zeta_2^3=\zeta_4$, $\zeta_2^4=\zeta_5$, $\zeta_2^5=\zeta_1=1$. So, any one of $\zeta_2,\zeta_3,\zeta_4,\zeta_5$ can be $x$ in $f(x)$. Here we make $\zeta=\zeta_2$.

Then the value $f(\zeta)$ is:

$f(\zeta)=((1+\zeta)(1+\zeta^2)(1+\zeta^3)(1+\zeta^4)(1+\zeta^5))^{400}$

Then we just calculate the value of this expression; if you expand everything, you will find that only the 5-divisible exponents survive. Calculating the value is not our point today; if you are interested, you can watch 3Blue1Brown's video about this. I'll link it later.
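For reference, the extraction step being used here is the roots-of-unity filter: averaging $f$ over all fifth roots of unity keeps exactly the coefficients with 5-divisible exponents,

```latex
\sum_{5 \mid k} [x^k] f(x) \;=\; \frac{1}{5}\sum_{j=1}^{5} f(\zeta_j),
\qquad\text{because}\qquad
\sum_{j=1}^{5} \zeta_j^{\,k} =
\begin{cases} 5 & 5 \mid k,\\ 0 & \text{otherwise.} \end{cases}
```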

This is the first usage of this function.

Now let's consider another usage of this function: we talked about recurrence relations before. Now we're going to solve them by generating functions.

For example, consider the Fibonacci sequence $F_0=F_1=1$, $F_n=F_{n-1}+F_{n-2}$, with generating function $F(x)=\sum_{n\ge 0}F_nx^n$.

From the recurrence, $F(x)=1+xF(x)+x^2F(x)$, so $F(x)=\dfrac{1}{1-x-x^2}=\frac{1}{\sqrt{5}}\left(\frac{\phi}{1-\phi x}+\frac{1 / \phi}{1+x / \phi}\right)$, where $\phi=\frac{1+\sqrt 5}{2}$.
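For completeness, expanding the two fractions as geometric series term by term gives a Binet-style closed form (under the convention $F_0=F_1=1$, writing $\psi=-1/\phi$):

```latex
F_n \;=\; [x^n]\,F(x)
\;=\; \frac{1}{\sqrt 5}\left(\phi\cdot\phi^{\,n} + \frac{1}{\phi}\left(-\frac{1}{\phi}\right)^{\!n}\right)
\;=\; \frac{\phi^{\,n+1}-\psi^{\,n+1}}{\sqrt 5}.
```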

There are far more ways to use generating functions; this is only an example. If you are still interested, you can leave a friendly comment and I'll write more about it.

Also, writing this blog extends my own knowledge, which would otherwise stay limited. I hope it helps not only you but also me. Thanks for reading.


By Black_Fate, history, 3 months ago,

Hello Codeforces!

Today I'll be writing about what I have learnt about combinatorics, which has played, and in my opinion will keep playing, an important role in both Codeforces and CP (short for competitive programming).

However, combinatorics is such a huge subject that it's impossible for me to write it all in one blog. So this is just the fourth blog, which is friendly to beginners. If you are interested, please pay attention to this account and I'll keep posting the series for a long term.

If you have found some mistakes in the text, or if you have some questions about the topic, please, leave a comment, and I'll check it weekly and reply. Also, if you find some grammar mistakes, a kind comment will be also welcomed.

### Content

1. generating function
2. using generating function to solve problems
3. group theory
4. permutation group
5. Burnside's lemma
6. Polya's enumeration theorem
7. Polya's enumeration theorem under generating functions
8. using Polya's enumeration theorem to solve problems
9. solution to some problems left in the blog

The homework and its tutorial will be posted in another blog this time, or this one would be TOO long.

### Part 1 — Generating function

How do we represent an array $a_0,a_1,\dots, a_{n-1}$ with a function? Maybe we can use the Berlekamp-Massey algorithm to calculate the recurrence relation.

But that's too complicated. An easier way: we can generate a function $f(x)=a_0+a_1x+\dots+a_{n-1} x^{n-1}=\sum_{i=0}^{n-1}a_{i}x^i$. For example, $[1,2,3]$'s generating function is $3x^2+2x+1$.

Wtf? This is really naive, isn't it? You may also find that evaluating the function at any particular $x$ seems meaningless.

Don't worry. Here is a task for you:

There are many red and blue tickets. You can take at most $K$ red tickets, and you can take $M$ blue tickets only if $M$ is even. You want to get $N$ tickets in total. How many different ways are there?

We can construct a generating function $R(x)$ generated by $r=[\underbrace{1,1,1,\dots 1}_{K+1\text{ ones}}]$, where $r_i=1$ means you may take $i$ red tickets.

Also, we can construct a generating function $B(x)$ generated by $b=[1,0,1,0,\dots]$, where $b_i=1$ means you may take $i$ blue tickets.

Then you can form $F(x)=R(x)\times B(x)=\sum\limits_{i}f_ix^i$, and $f_N$ is the answer. The proof is immediate if you write out the convolution yourself; computed naively, it is the same as brute force with $\mathcal O(N^2)$ complexity.
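A brute-force sketch of exactly this convolution (the function name and the toy check values are mine):

```cpp
#include <bits/stdc++.h>
using namespace std;

// ways to pick N tickets: at most K red, plus an even number of blue
long long countWays(int K, int N) {
    vector<long long> R(N + 1, 0), B(N + 1, 0);
    for (int i = 0; i <= min(K, N); i++) R[i] = 1; // 0..K red tickets
    for (int i = 0; i <= N; i += 2) B[i] = 1;      // 0, 2, 4, ... blue tickets
    vector<long long> F(N + 1, 0);
    for (int i = 0; i <= N; i++)                   // F = R * B (convolution)
        for (int j = 0; i + j <= N; j++)
            F[i + j] += R[i] * B[j];
    return F[N];
}
```

For instance, with $K=2$, $N=4$ the valid splits are $(0\text{ red},4\text{ blue})$ and $(2,2)$, so the answer is $2$.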

With FFT, the time complexity becomes $\mathcal O(N\log N)$.

But this is not all generating functions can do. They have another important property: the concrete value of $x$ is irrelevant, so we may restrict $x$ to a convenient range.

What is the generating function of $[1,1,1,\dots]$?

Answer: $\frac{1}{1-x}$.

Why??? OK, I have to say $1+x+x^2+\dots$ is also correct. But why are these two the same? Are you kidding the readers? Actually, we can restrict $x$ to the range $(-1,1)$. Then, since $1+x+x^2+\dots +x^n=\frac{1-x^{n+1}}{1-x}$ and $x^{n+1}\to 0$ as $n\to \infty$, the sum tends to $\frac{1}{1-x}$.

Practice: what is the generating function of $[1,0,1,0,1,0,\dots]$? What about $[1,0,0,1,0,0,\dots]$, $[1,2,3,4,\dots]$ and $[1,1+2,1+2+3,1+2+3+4,\dots]$?

Try to use the generalized binomial theorem to explain some of them, and even more. For example, what generates $\frac{1}{(1-x)^k}$?

### Part 2 — using generating function to solve problems

If you want to implement them, you should learn the polynomial theory first.

#### Codeforces 632E — Thief in a Shop

This is a backpack-dp problem, but can you optimize it?

Construct a generating function $f(x)=\sum\limits_{i=1}^{n}x^{a_i}$, this is what you can get when you can choose one thing.

Then, to extend from choosing one item to choosing $k$ items, you only need to calculate $f^k(x)$. Use FFT to reach $\mathcal O(n\log n)$.

#### Codeforces 438E — The Child and Binary Tree

Catalan numbers were discussed before; now we construct the generating function for the weight of a single node, $W(x)=\sum_{i=1}^{n}x^{c_i}$.

Additionally, the generating function of the Catalan sequence satisfies $C=xC^2+1$; analogously, here we get $WF^2-F+1=0$. Solving it (and choosing the root with $F(0)=1$), we get $F=\dfrac{2}{1+\sqrt{1-4W}}$.

Use Newton's method (for the square root and the inversion of power series) to reach $\mathcal O(n\log n)$.

### Part 3 — Group Theory

What is a group? A group is composed of a set $\mathcal S$ and a binary operation $*$, and it needs to meet the four rules below:

• Closure: $\forall a,b\in\mathcal S,a*b\in\mathcal S$.
• Associativity: $\forall a,b,c\in\mathcal S,(a*b)*c=a*(b*c)$.
• Identity element exists: $\exists e\in \mathcal S$ such that $\forall a\in \mathcal S, a*e=e*a=a$.
• Inverse element exists: $\forall a\in\mathcal S, \exists b\in\mathcal S$ such that $a*b=b*a=e$.

Practice: prove that this forms a group when $\mathcal S=\lbrace 0,1,2,\dots,n-1\rbrace$ and $*$ is addition modulo $n$.

• Closure: since $a,b\in \mathcal S$, we have $a,b\in\mathbb N$, so $(a+b)\bmod n\in\mathcal S$.
• Associativity: trivial.
• Identity element: $0$.
• Inverse element: for each $x$, since $-x\equiv n-x\pmod n$, $(n-x)\bmod n$ is the inverse of $x$.
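For small $n$, the four axioms can even be verified by brute force; a quick sketch:

```cpp
#include <bits/stdc++.h>
using namespace std;

// check the group axioms for ({0,...,n-1}, + mod n) by enumeration
bool isGroup(int n) {
    auto op = [n](int a, int b) { return (a + b) % n; };
    for (int a = 0; a < n; a++)
        for (int b = 0; b < n; b++) {
            if (op(a, b) < 0 || op(a, b) >= n) return false;  // closure
            for (int c = 0; c < n; c++)                       // associativity
                if (op(op(a, b), c) != op(a, op(b, c))) return false;
        }
    for (int a = 0; a < n; a++)
        if (op(a, 0) != a || op(0, a) != a) return false;     // identity is 0
    for (int a = 0; a < n; a++)
        if (op(a, (n - a) % n) != 0) return false;            // inverse is (n-a) mod n
    return true;
}
```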

This is simple. You can try to make more examples and prove them.

### Part 4 — permutation group

What is a permutation here? It's similar to the usual one, but not the same.

We define $\begin{pmatrix}1 & 2 & 3 & \dots & n\\ p_1 & p_2 & p_3 & \dots & p_n\end{pmatrix}$ a permutation when $p$ is a permutation of $1,2,3\dots, n$. How does the permutation act? It replaces each value $i$ with $p_i$. For example, $(2,1,3,5,4)$ becomes $(1,3,4,5,2)$ under $\begin{pmatrix}1 & 2 & 3 & 4 & 5\\ 3 & 1 & 4 & 2 & 5\end{pmatrix}$.

But permutations can be multiplied too, and the product of two permutations is again a permutation.

$\begin{pmatrix}1 & 2 & 3 & 4 & 5\\ 3 & 1 & 4 & 2 & 5\end{pmatrix}\begin{pmatrix}1 & 2 & 3 & 4 & 5\\ 4 & 2 & 1 & 3 & 5\end{pmatrix}=\begin{pmatrix}1 & 2 & 3 & 4 & 5\\ 3 & 1 & 4 & 2 & 5\end{pmatrix}\begin{pmatrix}3 & 1 & 4 & 2 & 5\\ 1 & 4 & 3 & 2 & 5\end{pmatrix}=\begin{pmatrix}1 & 2 & 3 & 4 & 5\\ 1 & 4 & 3 & 2 & 5\end{pmatrix}$

What does it mean? Doing $p_1$ and then $p_2$ on the left is the same as doing the single permutation on the right.

Note that in general $p_1p_2\not=p_2p_1$, but associativity holds.
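With this convention (apply $p_1$ first, then $p_2$), composition can be sketched as follows (1-based, index $0$ unused):

```cpp
#include <bits/stdc++.h>
using namespace std;

// q = "do p1, then p2": value i first becomes p1[i], then p2[p1[i]]
vector<int> compose(const vector<int>& p1, const vector<int>& p2) {
    int n = p1.size() - 1;          // index 0 is unused padding
    vector<int> q(n + 1);
    for (int i = 1; i <= n; i++) q[i] = p2[p1[i]];
    return q;
}
```

Composing the two permutations from the example above indeed yields $(1,4,3,2,5)$.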

All permutations of length $n$ obviously make a group $\mathcal P_n$. This can be easily proved:

• Closure: any product of permutations is a permutation, which is in $\mathcal P_n$.
• Associativity: trivial.
• Identity element: $\begin{pmatrix}1 & 2 & 3 & \dots & n\\ 1 & 2 & 3 & \dots & n\end{pmatrix}$.
• Inverse element: $\begin{pmatrix}1 & 2 & 3 & \dots & n\\ p_1 & p_2 & p_3 & \dots & p_n\end{pmatrix}^{-1}=\begin{pmatrix}p_1 & p_2 & p_3 & \dots & p_n\\ 1 & 2 & 3 & \dots & n\end{pmatrix}$. Sort the columns by the first row and you'll get an ordinary permutation.

Note that we usually put $1\ 2\ 3\ \dots\ n$ on the first line, but if you really need to, it doesn't matter if you put another permutation on the first line: it then means that $a_i$ will be changed into $b_i$, which is the same thing after sorting the columns.

Now we define a new way to write out a permutation:

$(a_1a_2a_3\dots a_n)=\begin{pmatrix}a_1 & a_2 & a_3 & \dots & a_{n-1} & a_n\\ a_2 & a_3 & a_4 & \dots & a_n & a_1\end{pmatrix}$

What does it mean? Starting from the element $a_1$, applying the permutation repeatedly walks $a_1\to a_2\to\dots\to a_n\to a_1$, so after going around once each $a_i$ returns to itself. This is a cycle, and we write the cycle down. Note that a single cycle need not contain all the elements; its size $|C|$ can be anything in $[1,n]\cap \mathbb N$, and several cycles together make up a permutation.

$\begin{pmatrix}1 & 2 & 3 & 4 & 5\\ 5 & 2 & 3 & 1 & 4\end{pmatrix}=(1\ 5\ 4)\big(2\big)\big(3\big)$.

$(1\ 5\ 4)$ cannot be rewritten as $(1\ 4\ 5)$, but it can also be written $(4\ 1\ 5)$.
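Splitting a permutation into its cycles is a simple walk with a visited mark; a sketch (1-based):

```cpp
#include <bits/stdc++.h>
using namespace std;

// split permutation p (1-based, p[0] unused) into its cycles
vector<vector<int>> cycles(const vector<int>& p) {
    int n = p.size() - 1;
    vector<bool> vis(n + 1, false);
    vector<vector<int>> res;
    for (int i = 1; i <= n; i++) {
        if (vis[i]) continue;
        vector<int> cyc;
        for (int j = i; !vis[j]; j = p[j]) { // walk i -> p[i] -> p[p[i]] -> ...
            vis[j] = true;
            cyc.push_back(j);
        }
        res.push_back(cyc);
    }
    return res;
}
```

For $p=(5,2,3,1,4)$ this yields exactly the cycles $(1\ 5\ 4)(2)(3)$ from the example above.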

You may find it similar to constructing a graph with an edge $a_i\to b_i$ for each column of the two-line notation $\dbinom{a}{b}$.

What is this way of expressing permutation used for? Have a look at the problem below:

You have a position of the $15$-puzzle; is it reachable from the initial position?

For example:

$\begin{bmatrix}15 & 14 & 13 & 12\\ 11 & 10 & 9 & 8 \\ 7 & 6 & 5 & 4\\ 3 & 2 & 1 & 0 \end{bmatrix}$

$0$ is the empty square. Then all the swaps together compose to this permutation:

$\begin{pmatrix}0 &1 &2 &3 &4 &5 &6 &7 &8 &9 &10 &11 &12 &13 &14 &15\\ 0 & 15 &14 &13 &12 &11 &10 &9 &8 &7 &6 &5 &4 &3 &2 &1\end{pmatrix}$

And express it with the cycles: $\text{(0)(1 15)(2 14)(3 13)(4 12)(5 11)(6 10)(7 9)(8)}$

There are $9$ cycles on $16$ elements here, so the permutation is a product of $16-9=7$ transpositions, i.e. it is odd. But every move swaps the empty square with a tile, and since the empty square ends in the same corner where it started, the number of moves must be even, so every reachable position is an even permutation. (An even number of swaps can never multiply to an odd permutation.)

Therefore, we can conclude that there does not exist a way to reach this position.

If you expand $4$ to $n$, we can solve the task in $\mathcal O(n^2\,\alpha(n^2))$ time, where $\alpha$ is the inverse Ackermann function from the DSU used to count the cycles.
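As a sketch of that counting step (hypothetical helper names, not the author's code): union $i$ with $p_i$ in a DSU; every successful union merges two fragments of the same cycle, so the number of components left equals the number of cycles.

```cpp
#include <cassert>
#include <numeric>
#include <vector>
using namespace std;

struct DSU { // near-O(1) amortized per operation (inverse Ackermann)
    vector<int> f;
    DSU(int n) : f(n) { iota(f.begin(), f.end(), 0); }
    int find(int x) { return f[x] == x ? x : f[x] = find(f[x]); }
    bool unite(int a, int b) {
        a = find(a), b = find(b);
        if (a == b) return false;
        f[a] = b;
        return true;
    }
};

// Count cycles of a 0-indexed permutation: union i with p[i];
// each successful union reduces the number of components by one.
int countCycles(const vector<int>& p) {
    int n = (int)p.size(), comps = n;
    DSU d(n);
    for (int i = 0; i < n; i++)
        if (d.unite(i, p[i])) comps--;
    return comps;
}
```

For the permutation above ($0\mapsto 0$, $i\mapsto 16-i$ otherwise), this returns $9$.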

### Part 5 — Burnside's lemma

What's Burnside's lemma? Let's go to wikipedia.

In the following, let $G$ be a finite group that acts on a set $X$. For each $g$ in $G$, let $X^g$ denote the set of elements in $X$ that are fixed by $g$ (also said to be left invariant by $g$), i.e. $X^g = \lbrace x \in X \mid g.x = x \rbrace$. Burnside's lemma asserts the following formula for the number of orbits, denoted $|X/G|$:

$|X / G|=\frac{1}{|G|} \sum_{g \in G}\left|X^{g}\right|$

Formalized lemmas alone do not help with solving problems. Let's go further.

Warning: there are many more definitions below; if you've forgotten one, go back and look its definition up.

Note $G$ as the permutation group we are going to discuss.

In a permutation $g\in G$, there may exist fixed elements. For example $\begin{pmatrix}1 & 2 & 3\\ 3 & 2 & 1\end{pmatrix}$ has the fixed element $2$.

The count of fixed elements of $g$ is denoted $c_1(g)$.

For a specific integer $k$ in $[1,n]$, consider all $g\in G$ with $g(k)=k$ (the identity is one of them). These valid $g$-s make a set $Z_k$, a subset of $G$. $Z_k$ is a group too, easily proved; it is called the stabilizer of $k$.

For a specific integer $k$ in $[1,n]$, let all the permutations act on $k$. The set of values $g(k)$ obtained this way is denoted $E_k$ (the orbit of $k$).

Sublemma: $|G|=|Z_k|\cdot |E_k|$.

Proof: Let $Z_k=\lbrace z_1,z_2,\dots z_{|Z_k|}\rbrace$, $E_k=\lbrace k_1,k_2,\dots, k_{|E_k|}\rbrace$, $p=|Z_k|,q=|E_k|$.

For a specific $k_1\not= k$ in $E_k$: there is some $g\in G$ with $g(k)=k_1$, and clearly $g\not\in Z_k$.

Let this $g$ generate a new set $Z_{k,k_1}=\lbrace gz_1,gz_2,\dots,gz_p\rbrace$. Note that $z_ig$ may be different, since $z_ig=gz_i$ does not always hold. By the definition, $Z_{k,k_1}\cap Z_{k}=\varnothing$, and $|Z_{k,k_1}|=p$.

Good. If you are clever enough, you may guess that every element of $G$ which maps $k$ to $k_1$ is in $Z_{k,k_1}$. Indeed, suppose $\hat g\in G$ satisfies $\hat g(k)=k_1$; then $(g^{-1}\hat g)(k)=g^{-1}(k_1)=k$.

So $g^{-1}\hat g\in Z_k$. Let $g^{-1}\hat g=z_x$; therefore $\hat g=gz_x$, so $\hat g\in Z_{k,k_1}$.

Let $\mathbb Z_k=\bigcup\limits_{i = 1}^{q - 1} Z_{k,k_i}$, taken over the $q-1$ elements of $E_k$ other than $k$. Then $G=Z_k\cup \mathbb Z_k$, and since the parts are pairwise disjoint and each has size $p$, adding them up gives $|G|=pq=|E_k|\cdot|Z_k|$.

Burnside Lemma:

Let the number of different equivalence classes (orbits) be $l$.

$\displaystyle l=\dfrac{c_1(a_1)+c_1(a_2)+c_1(a_3)+\dots +c_1(a_g)}{|G|}$

Proof:

Let $\forall g\in G,x\in[1,n]\cap \mathbb N$, $S(g,x)=\begin{cases}1 & g(x)=x\\ 0 & g(x)\not= x\end{cases}$.

Then

$\displaystyle\sum\limits_{i=1}^{g}c_1(a_i)=\sum\limits_{i=1}^{g}\sum\limits_{x=1}^{n} S(a_i,x)=\sum\limits_{x=1}^{n}|Z_x|$

The elements split into $l$ equivalence classes $E_{x_1},E_{x_2},\dots,E_{x_l}$, so we have:

$\displaystyle\sum\limits_{x=1}^{n}|Z_x|=\sum_{i=1}^{l}\sum_{x\in E_{x_i}}|Z_x|$

According to the sublemma, we have:

$\displaystyle \sum_{i=1}^{l}\sum_{x\in E_{x_i}}|Z_x|=\sum_{i=1}^{l}|E_{x_i}||Z_{x_i}|=\sum_{i=1}^{l}|G|=l|G|$

Then:

$l|G|=c_1(a_1)+c_1(a_2)+c_1(a_3)+\dots +c_1(a_g)$

Finally, the lemma is proved.

### Part 6 — Polya's enumeration theorem

Let $\overline G$ be a permutation group on $n$ elements, and color the $n$ elements with $m$ colors. Let $c(a_k)$ be the number of cycles of the permutation $a_k$. Then the number of essentially different colorings is:

$\displaystyle l=\dfrac{m^{c(\overline a_1)}+m^{c(\overline a_2)}+m^{c(\overline a_3)}+\dots +m^{c(\overline a_g)}}{|\overline G|}$

$\overline a_i$ are elements of $\overline G$.

Proof:

Don't be afraid, based upon Burnside's lemma it's easy.

Instead of factoring out the permutations, consider all colorings directly: let this set be $S$; by the multiplication principle $|S|=m^n$.

Then note that every element of $\overline G$ is a permutation of the $n$ elements, and it also induces a permutation $\hat a_j$ of the colorings; all of the $\hat a$-s make a set $\hat G$. By this one-to-one correspondence, $|\overline G|=|\hat G|$. A coloring is fixed by $\hat a_k$ exactly when every cycle of $\overline a_k$ is monochromatic, so $c_1(\hat a_k)=m^{c(\overline a_k)}$. According to Burnside's lemma:

$\displaystyle l=\dfrac{1}{|\hat G|}\sum_{k=1}^{g} c_1(\hat a_k)=\dfrac{m^{c(\overline a_1)}+m^{c(\overline a_2)}+m^{c(\overline a_3)}+\dots +m^{c(\overline a_g)}}{|\overline G|}$

Similar to Burnside, isn't it? Sometimes we do not even separate the two.
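As a quick sanity check of the formula, consider the classic necklace problem: $n$ beads on a circle under rotation, colored with $m$ colors. The rotation by $k$ positions has $\gcd(n,k)$ cycles, so Polya gives $\frac{1}{n}\sum_{k=0}^{n-1} m^{\gcd(n,k)}$. A small sketch (plain integers, small $n$ and $m$ only; the helper name is ours):

```cpp
#include <cassert>
#include <numeric>
using namespace std;

// Number of ways to color n beads on a rotating necklace with m
// colors, by Polya: rotation by k splits the beads into gcd(n,k)
// cycles, each of which must be monochromatic.
long long necklaces(long long n, long long m) {
    long long sum = 0;
    for (long long k = 0; k < n; k++) {
        long long cyc = gcd(n, k); // gcd(n, 0) = n, the identity
        long long pw = 1;
        for (long long i = 0; i < cyc; i++) pw *= m;
        sum += pw;
    }
    return sum / n; // Burnside average; always an integer
}
```

For $n=4$, $m=2$ this gives $(16+2+4+2)/4=6$ binary necklaces.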

### Part 7 — Polya's enumeration theorem under generating functions

Polya's theorem is used to count the ways to color the elements. But it can do more than counting: with generating functions it can enumerate the states themselves.

Still from this formula:

$\displaystyle l=\dfrac{m^{c(\overline a_1)}+m^{c(\overline a_2)}+m^{c(\overline a_3)}+\dots +m^{c(\overline a_g)}}{|\overline G|}$

Now replace $m^{c(\overline a_i)}$ with $\prod_{p=1}^{n}\Big(\sum_{j=1}^{m}b_j^p\Big)^{c_p(\overline a_i)}$, where $c_p(\overline a_i)$ is the number of cycles of length $p$ in $\overline a_i$ and $b_j$ is a formal variable standing for color $j$.

Then define a polynomial $P(G)$:

$\displaystyle P(G)=\dfrac{1}{|G|}\sum_{i=1}^{g}\prod_{k=1}^{n} s_{k}^{c_k(a_i)}$

### Part 8 — using Polya's enumeration theorem to solve problems

#### ABC198F — Cube

The cube rotates. Rotate it and track how the six labeled faces move: each rotation is a permutation of the faces.

Since the cube has only $6$ faces to label, the rotation group ($24$ rotations) can be written out by hand. (If $6$ expanded to $N$ it would be terrible.) With digit DP and Polya's enumeration theorem, we can solve it.

#### ABC284Ex — Count Unlabeled Graphs

Some of this is taken from the official editorial.

Writing numbers $1\sim K$ on the vertices is the same as painting them in $K$ colors, so this is Polya's theorem for sure. Let $G$ be the set of all labeled graphs and $\overline G$ the set of unlabeled ones.

Unlabeled graphs are hard to count, so count labeled graphs. Let $g$ be a graph in $G$. The number of labeled graphs isomorphic to $g$, multiplied by the number of permutations $p$ such that $p(g)$ equals $g$ as a labeled graph, is $N!$. Write this as $X_g\cdot Y_g=N!$.

Then, by Burnside's lemma (counting the pairs $(p,g)$ with $p(g)=g$ in both orders), $N!\cdot |\overline G|=\sum\limits_g Y_g=\sum\limits_g\dfrac{N!}{X_g}$. Then:

$\displaystyle |\overline G|=\dfrac{1}{N!}\sum_{g} Y_g=\dfrac{1}{N!}\sum_{g}\dfrac{N!}{X_g}$

Finally we can do a DP just as in ABC226F, and solve it in polynomial time.

Code

Explanation: $\text{gcd}[n][m]$ is memoized because recalculating it may lead to TLE. (I haven't tried on AtCoder yet, but I've met a similar problem, mentioned in the homework, which TLE'd without memoizing the GCD like this.)

### Part 9 — solution to some problems left in the blog

Practice: what is the generating function of $[1,0,1,0,1,0,\dots]$? What about $[1,0,0,1,0,0,\dots]$, $[1,2,3,4,\dots]$ and $[1,1+2,1+2+3,1+2+3+4,\dots]$?

1. $[1,0,1,0,1,0,\dots]$

Not very difficult. The generating function is $1+x^2+x^4+x^6+\dots$, and what's this? Interesting! Replace $x$ with $x^2$ in $1+x+x^2+x^3+\dots$ and you will find the two are the same. Since $1+x+x^2+x^3+\dots=\dfrac{1}{1-x}$, we get $1+x^2+x^4+x^6+\dots=\dfrac{1}{1-x^2}$.

1. $[1,2,3,4\dots]$

A little bit more difficult. You can differentiate, but here we just give the solution: $\dfrac{1}{(1-x)^2}$. Why? Multiply two copies of $1+x+x^2+x^3+\dots$ together: the coefficient of $x^n$ is the number of ways to write $n=i+j$, which is $n+1$. So the product is the generating function of $[1,2,3,4,\dots]$, and it equals $\Big(\dfrac{1}{1-x}\Big)^2$.

1. $[1,3,6,10,\dots]$

The same: you could differentiate, but that's too complicated. Multiply $1+x+x^2+x^3+\dots$ by itself $3$ times. What do you get? You get $1+3x+6x^2+10x^3+\dots$ (try it by yourself), so the answer is $\dfrac{1}{(1-x)^3}$, and the rest is not difficult to prove.

1. What generates $\dfrac{1}{(1-x)^k}$?

With the two problems discussed above, it's not hard to see that $\sum_{i=0}^{\infty} \binom{i+k-1}{k-1} x^{i}$, or in other words $[\binom{k-1}{k-1},\binom{k}{k-1},\binom{k+1}{k-1},\dots]$, generates it. Just multiply $1+x+x^2+x^3+\dots$ by itself $k$ times: the coefficient of $x^i$ counts the ways to split $i$ among the $k$ factors, i.e. to place $k-1$ dividers among $i+k-1$ positions. So the answer is indeed $\sum_{i=0}^{\infty} \binom{i+k-1}{k-1} x^{i}$.

$\dfrac{1}{1-x^k}$ and $\dfrac{1}{(1-x)^k}$ are two important generating functions. Try to do some problems for homework.
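If you want to check the identity numerically, here is a small sketch (a brute-force convolution helper of our own) that multiplies $k$ truncated copies of $1+x+x^2+\dots$:

```cpp
#include <cassert>
#include <vector>
using namespace std;

// Multiply k copies of the series 1 + x + x^2 + ... truncated to
// degree `deg`, returning the coefficients of (1/(1-x))^k.
vector<long long> powerOfGeometric(int k, int deg) {
    vector<long long> res(deg + 1, 0), ones(deg + 1, 1);
    res[0] = 1; // the series "1"
    for (int t = 0; t < k; t++) {
        vector<long long> nxt(deg + 1, 0);
        for (int i = 0; i <= deg; i++)
            for (int j = 0; i + j <= deg; j++)
                nxt[i + j] += res[i] * ones[j];
        res = nxt;
    }
    return res;
}
```

For $k=3$ the first coefficients come out as $1,3,6,10$, matching $[\binom{2}{2},\binom{3}{2},\binom{4}{2},\dots]$.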


By Black_Fate, history, 3 months ago,

Hello Codeforces!

Today I'll be writing about what I have learnt about combinatorics, which played, and, in my opinion, will still be playing an important role in both Codeforces and CP (short for competitive programming).

However, combinatorics is such a great subject that made that impossible for me to write it all in one blog. So, this is just the third blog, which is friendly to beginners. If you are interested, please, pay attention to this account and I'll give posts as series for a long term.

If you have found some mistakes in the text, or if you have some questions about the topic, please, leave a comment, and I'll check it weekly and reply. Also, if you find some grammar mistakes, a kind comment will be also welcomed.

### Content

1. Homework Tutorial
2. Catalan Sequence
3. Stirling Number
4. Derangement
5. Linear constant coefficient homogeneous recurrence relation
6. homework

### Part 0 — Pre-knowledge

In order not to make this blog too long (it's already very long now xD), I won't explain these things anymore. You can find them on wikis or google them.

• What Combinatorics (1)(2) mentioned.
• CP basics.

### Part 1 — Homework Tutorial

The first task can be solved with two quick powers in $\mathcal O(\log{(n+k)})$ time.

The second task can be solved with the combination precomputation and the $inv[i]$ precomputation mentioned in Task C, in $\mathcal O(n)$ time.

Let's define $z$ as the number of $0$s in the array, $p$ as the number of positive elements, and $m$ as the number of negative elements.

When the product is $0$, we have to choose at least one $0$. Counting this directly is hard, so subtract the number of invalid subsets, those containing no $0$: there are $2^{n-z}$ of them. The total count is $2^n$, so the answer is $2^n-2^{n-z}$, computable in $\mathcal O(\log n)$ time.

When the product is positive, we have to choose $2i$ negative numbers. So we can enumerate the count of the negative numbers and calculate the answer. Notice that $2i\in[0,n]$. First, we can choose any subset of the set of positive numbers. So the answer is:

$\displaystyle\sum_{i=0}^{2i\leq n}\binom{m}{2i}\times 2^{p}$

When the product is negative, we have to choose $2i+1$ negative numbers. So we can enumerate the count of the negative numbers and calculate the answer. Notice that $2i+1\in[0,n]$. First, we can choose any subset of the set of positive numbers. So the answer is:

$\displaystyle\sum_{i=0}^{2i+1\leq n}\binom{m}{2i+1}\times 2^{p}$

All above can be solved in $\mathcal O(n)$ time.

Code

There are two ways.

The first way: After the prework of $fac[]$ and $ifac[]$, $inv[i]=fac[i-1]\times ifac[i]$. The proof is simple: $\frac{(i-1)!}{i!}=\frac{1}{i}$.

The second way: First show you the code:

void init() {
inv[0] = inv[1] = 1;
for(int i = 2; i < N; i++) inv[i] = (mod - mod / i) * inv[mod % i] % mod;
}

So:

$i^{-1}\equiv-\left\lfloor\frac{p}{i}\right\rfloor \cdot(p \bmod i)^{-1}\pmod p$

Proof:

Since $p\bmod i=p-\left\lfloor\frac{p}{i}\right\rfloor\times i$ and $p\equiv 0\pmod p$, we have:

$p\bmod i\equiv-\left\lfloor\frac{p}{i}\right\rfloor i\pmod p$

Multiplying both sides by $i^{-1}\cdot(p\bmod i)^{-1}$:

$i^{-1}\equiv-\left\lfloor\frac{p}{i}\right\rfloor\cdot(p\bmod i)^{-1}\pmod p$

and we've got it proved.

This takes $\mathcal O(n)$ time too, but less memory.

The last task is about the Catalan sequence, so let's go to the second part first and discuss it there.

### Part 2 — Catalan Sequence

First, Task D can be solved with dp. We define $f_{n}$ as the answer. Then:

$\displaystyle f_{n}=\sum_{i=1}^{n-1} f_{i}f_{n-i}$

This can be solved in $\mathcal O(n^2)$ time. But it's too slow, let's optimize it:

$\textbf{Lemma: }f_{n+2}=C_n=\frac{\binom{2n}{n}}{n+1}$, where $C_n$ is the $n$-th Catalan number (note the index shift: here $f_2=1$ plays the role of $C_0$).

Proof:

Since $f_2=1$, then:

$f_{n+1}-2f_n=f_3f_{n-1}+f_4f_{n-2}+\ldots+f_{n-1}f_3$
$\therefore (n-3)f_n=\dfrac{n}{2}(f_{n+1}-2f_n)$
$\therefore nf_{n+1}=(4n-6)f_n$

Then we let $f'_{n+1}=nf_{n+1}$

Then

$f'_{n+1}=\dfrac{(2n-2)(2n-3)}{(n-1)(n-1)}f'_{n}=F(n)f'_n$

So

$\dfrac{f'_{n+1}}{f'_{n}}=\dfrac{(2n-2)(2n-3)}{(n-1)(n-1)}$

Since $f'_2=f_2=1$

Then

$f'_{n+1}=\prod\limits^{n}_{i=2}F(i)$

Cancelling across the product, $f'_{n+1}=\dbinom{2n-2}{n-1}$

$\therefore \dbinom{2n-2}{n-1}=nf_{n+1}$

So

$f_{n+1}=\dfrac{1}{n}\dbinom{2n-2}{n-1}$

At last, with $C_n=f_{n+2}$ denoting the $n$-th Catalan number, $C_n=\dfrac{1}{n+1}\dbinom{2n}{n}$

This can be computed in $\mathcal O(2n)=\mathcal O(n)$ time, though with a slightly larger constant.

Also we have another formula, and can be initiated in $\mathcal O(n)$ time:

$f_{n}=\dfrac{f_{n-1}(4 n-2)}{n+1}$

Since the combination can be solved in $\mathcal O(n)$ time, and $\frac{1}{n+1}$ is exactly $inv[n+1]$ which can be initiated in $\mathcal O(n)$ time too.
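A sketch of the $\mathcal O(n)$ precomputation built on this recurrence (plain 64-bit integers, so small $n$ only; under a prime modulus the division by $n+1$ would become a multiplication by $inv[n+1]$):

```cpp
#include <cassert>
#include <vector>
using namespace std;

// First n+1 Catalan numbers via C_i = C_{i-1} * (4i - 2) / (i + 1).
// The product is always divisible by i + 1, so the division is exact.
vector<long long> catalan(int n) {
    vector<long long> c(n + 1);
    c[0] = 1;
    for (int i = 1; i <= n; i++)
        c[i] = c[i - 1] * (4 * i - 2) / (i + 1);
    return c;
}
```

This reproduces $1,1,2,5,14,42,\dots$ in the usual $0$-indexed convention.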

This sequence $f$ is known as the Catalan sequence, usually denoted $H_n$ or $C_n$.

There are many more problems which can be solved with the Catalan sequence; they'll be put in the homework part.

### Part 3 — Stirling Number

We denote by $S(n,k)$ the number of different ways to split $n$ different elements into $k$ indistinguishable non-empty subsets. For example, $S(5,3)$ counts:

$\begin{matrix}[1,2,3][4][5] & [1,2][3,4][5] & [1,2][3][4,5] & [1,2][3,5][4] & [1,3][2][4,5] \\ [1,3][2,4][5] & [1,3][2,5][4] & [1,4][2,3][5] & [1,4][2,5][3] & [1,4][2][3,5] \\ [1,5][2,3][4] & [1,5][2,4][3] & [1,5][3,4][2] & [1,2,4][3][5] & [1,2,5][3][4] \\ [1,3,4][2][5] & [1,3,5][2][4] & [1,4,5][2][3] & [1][2][3,4,5] & [1][2,3][4,5] \\ [1][2,4][3,5] & [1][3][2,4,5] & [1][4][2,3,5] & [1][5][2,3,4] & [1][2,5][3,4] \end{matrix}$

So $S(5,3)=25$. Why isn't $[5,3][1][4,2]$ counted? Because it's the same as $[1][2,4][3,5]$.

Then we can find it expressed by a dp way:

$S(n,k)=\begin{cases} S(n-1,k-1)+k\times S(n-1,k) & k\not= 0\\ [n=0] & k=0\end{cases}$

Here we give another way to express it without proof, since it needs binomial inversion:

$S(n,k)=\sum_{i=0}^{k} \frac{(-1)^{k-i} i^{n}}{i !(k-i) !}$

If you are interested in it, maybe you can start with the inclusion-exclusion principle.
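The dp recurrence above can be sketched directly (a helper of our own; plain integers, small $n$ only):

```cpp
#include <cassert>
#include <vector>
using namespace std;

// S(n, k) by the recurrence S(n,k) = S(n-1,k-1) + k * S(n-1,k),
// with S(0,0) = 1: the n-th element either opens a new subset or
// joins one of the k existing ones.
long long stirling2(int n, int k) {
    vector<vector<long long>> S(n + 1, vector<long long>(k + 1, 0));
    S[0][0] = 1;
    for (int i = 1; i <= n; i++)
        for (int j = 1; j <= k && j <= i; j++)
            S[i][j] = S[i - 1][j - 1] + (long long)j * S[i - 1][j];
    return S[n][k];
}
```

On the example above, `stirling2(5, 3)` returns $25$.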

### Part 4 — Derangement

How many permutations $p$ of $1,2,3,\dots,n$ satisfy $\forall i,\ p_{i}\not=i$?

For example, when $n=3$ the answer is $2$: $[2,3,1]$ and $[3,1,2]$ are valid, with no $p_i=i$.

This problem can also be solved with dp. We define $D_n$ as the answer.

Then there are two conditions:

• $p_{1...n-1}$ is already a derangement. If we simply set $p_n=n$, it's bad, so we swap $p_n=n$ with any $p_i$, $1\leq i\leq n-1$. Then $p$ is clearly a derangement. There are $(n-1)\times D_{n-1}$ ways.
• $p_{1...n-1}$ has exactly one $i$ with $p_i=i$; then swap $p_n=n$ with $p_i$. Since this $i$ can be anything between $1$ and $n-1$, there are $(n-1)\times D_{n-2}$ ways.

There is no other way to reach a derangement $p$ of length $n$ in exactly one such operation, so these two cases decide $D_n$.

So we have $D_n=(n-1)\times (D_{n-1}+D_{n-2})$.
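A minimal sketch of this recurrence (the helper name is ours; $D_0=1$ and $D_1=0$ are the standard base cases):

```cpp
#include <cassert>
#include <vector>
using namespace std;

// D_n = (n - 1) * (D_{n-1} + D_{n-2}), with D_0 = 1, D_1 = 0.
long long derangements(int n) {
    if (n == 0) return 1;
    vector<long long> D(n + 1);
    D[0] = 1;
    D[1] = 0;
    for (int i = 2; i <= n; i++)
        D[i] = (long long)(i - 1) * (D[i - 1] + D[i - 2]);
    return D[n];
}
```

This matches the example above: `derangements(3)` is $2$.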

### Part 5 — Linear constant coefficient homogeneous recurrence relation

Definition:

$\tag{*}{\begin{matrix}a_n+c_1a_{n-1}+c_{2}a_{n-2}+\dots +c_ka_{n-k}=0\\ a_0=d_0,a_1=d_1,\dots,a_{k-1}=d_{k-1}\end{matrix}}$

If $c_1,c_2,\dots,c_k,d_1,d_2,\dots,d_k$ are constants, we call $(*)$ a linear constant coefficient homogeneous recurrence relation.

And we can always solve it to get the general term formula. The proof is in the homework, and I'll prove it in the next blog (otherwise this blog would be too long). Let's go through some examples with $k=2$ ($k\leq 1$ is too easy to discuss).

For example:

The Fibonacci sequence $F_n$:

$F_n=F_{n-1}+F_{n-2}, F_1=F_2=1$

How to easilize it?

Characteristic equation: $x^2-x-1=0$, Solution: $\alpha = \dfrac{1+\sqrt5}{2},\beta = \dfrac{1-\sqrt5}{2}$

$G(x)=\dfrac{P(x)}{1-x-x^2}=\dfrac{P(x)}{\Big(1-\dfrac{1+\sqrt5}{2}x\Big)\Big(1-\dfrac{1-\sqrt5}{2}x\Big)}=\dfrac{A}{1-\alpha x}+\dfrac{B}{1-\beta x}$

$P(x)$ is a linear polynomial. Let it be $ax+b$.

$\displaystyle G(x)=\Big(A\sum_{i=0}\alpha^ix^i\Big)+\Big(B\sum_{i=0}\beta^ix^i\Big)$

We can plug in $2$ special values of the sequence.

$F_0=A+B=0,F_1=A\alpha+B\beta=1.$

So

$\begin{cases}A+B=0\\ A-B=\dfrac{2}{\sqrt5}\end{cases}$

Solution: $(A,B)=(\dfrac{1}{\sqrt5},-\dfrac{1}{\sqrt5})$.

So $F_n=\dfrac{\alpha^n-\beta^n}{\sqrt5}$.
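As a numerical sanity check of this closed form (double precision, so only reliable for small $n$ where rounding stays below $0.5$):

```cpp
#include <cassert>
#include <cmath>

// Binet's formula F_n = (alpha^n - beta^n) / sqrt(5), evaluated in
// doubles and rounded to the nearest integer.
long long fibBinet(int n) {
    double s = sqrt(5.0);
    double alpha = (1 + s) / 2, beta = (1 - s) / 2;
    return llround((pow(alpha, n) - pow(beta, n)) / s);
}
```

For example, `fibBinet(10)` returns $55$, matching $F_{10}$.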

What if the characteristic equation turns out to have a repeated root?

The array: $a_n-4a_{n-1}+4a_{n-2}=0,a_0=1,a_1=4$.

Characteristic equation: $(x-2)^2=0$, Solution $x_1=x_2=2$.

$G(x)=\dfrac{A}{1-2x}+\dfrac{B}{(1-2x)^2}$

So:

$\displaystyle G(x)=\Big(A\sum_{i=0}2^ix^i\Big)+\Big(B\sum_{i=0}(i+1)2^ix^i\Big)$

Taking the coefficient of $x^n$: $a_n=A\cdot 2^n+B(n+1)2^n=(C+Bn)2^n$, where $C=A+B$.

We can plug in $2$ special values of the sequence.

$a_0=C=1,a_1=2(1+B)=4.$

So $B=1$.

Solution: $a_n=(1+n)2^n$.

What if the characteristic equation turns out to have complex roots?

$a_n=a_{n-1}-a_{n-2},a_1=a_0=1$.

Since $a_n-a_{n-1}+a_{n-2}=0$, characteristic equation: $x^2-x+1=0$.

Solution: $\alpha=\dfrac 12+\dfrac{\sqrt3i}{2},\beta=\dfrac 12-\dfrac{\sqrt3i}{2}$.

$G(x)=\dfrac{P(x)}{(1-\alpha x)(1-\beta x)}=\dfrac{A}{1-\alpha x}+\dfrac{B}{1-\beta x}$
$a_n=A\alpha^n+B\beta^n$

We can plug in $2$ special values of the sequence.

$a_0=A+B=1,a_1=A\alpha+B\beta=1.$

So $\begin{cases}A+B=1\\ A-B=-\dfrac{i}{\sqrt3}\end{cases}$.

Then $(A,B)=(\dfrac{1-\frac{i}{\sqrt3}}{2},\dfrac{1+\frac{i}{\sqrt3}}{2})$

Or, more explicitly:

Since $e^{ix}=\cos x+i\sin x$, we have $\alpha=e^{i\pi/3}$ and $\beta=e^{-i\pi/3}$.

Then $a_n=\cos \dfrac{n\pi}{3}+\dfrac{1}{\sqrt 3}\sin \dfrac{n\pi}{3}$.

### Part 6 — Homework

Solve $a_{n}=233 a_{n-1}+666 a_{n-2}, a_{0}=0, a_{1}=1$ modulo $10^9+7$, with $T$ queries of $n$.

$1\leq n\leq10^9, 1\leq T\leq 5\cdot 10^7$.

Prove that we can always solve a linear constant coefficient homogeneous recurrence relation to get the general term formula.

Solve $a_n=2a_{n-1}+1, a_1=1$.

Solve $a_n-6a_{n-1}+8a_{n-2}=0, a_0=x,a_1=y$.

Solve $a_n=9a_{n-2}, a_0=x,a_1=y$.

Solve $a_n+14a_{n-1}+49a_{n-2}=0,a_0=x,a_1=y$.

$n$ men write letters to their wives, but the postman delivers them wrongly. Each wife receives exactly one letter, but only $k$ of them receive the right one from their husbands. How many possible versions of this partial derangement are there?

$1\leq k\leq n\leq 10^6$.

We have a regular $n$-sided polygon with vertices $1,2,3\dots,n$. You will divide it into $n-2$ triangles by drawing $n-3$ non-intersecting diagonals. How many different ways are there to complete this division? For example, when $n=4$, $ans=2$; when $n=5$, $ans=5$.

$1\leq n\leq 10^7$.

There are $2n$ dots regularly distributed on a circle. We divide the $2n$ dots into $n$ pairs and join the two dots of each pair with a segment, so that no two segments intersect inside the circle. How many different ways are there to divide the dots?

$1\leq n\leq 10^7$.

$2n$ people are going to watch a film; a ticket costs $5$ dollars. $n$ people carry a $5$-dollar note each, and the others carry a $10$-dollar note each. They queue at the ticket office, which has no change at first. How many different orders of the $2n$ people allow everyone to pay exactly $5$ dollars (receiving correct change where needed) and watch the film?

$1\leq n\leq 10^7$.

### At last

The third blog is completed; it discussed sequences and recurrence relations. Next blog we will discuss recurrence relations and generating functions further. If you are puzzled by something in the text, you can comment under the blog!

Thanks for reading! This series will be written on.

Last but not least: Happy new year!

Also, thanks to the book Combinatorics (Fifth version, Tsinghua University publishing house co., ltd).


By Black_Fate, history, 3 months ago,

Hello Codeforces!

Today I'll be writing about what I have learnt about combinatorics, which played, and, in my opinion, will still be playing an important role in both Codeforces and CP (short for competitive programming).

However, combinatorics is such a great subject that made that impossible for me to write it all in one blog. So, this is just the second blog, which is friendly to beginners. If you are interested, please, pay attention to this account and I'll give posts as series for a long term.

If you have found some mistakes in the text, or if you have some questions about the topic, please, leave a comment, and I'll check it weekly and reply. Also, if you find some grammar mistakes, a kind comment will be also welcomed.

### Blogs in the future

• Combinatorics (3)
• Probabilities

### Content

1. Quick power
2. Fermat's little theorem
3. extend-gcd
4. The multiplicative inverse of an integer
5. Prework optimize
6. homework

### Part 0 — Pre-knowledge

In order not to make this blog too long (it's already very long now xD), I won't explain these things anymore. You can find them on wikis or google them.

• What Combinatorics (1) mentioned.
• CP basics.

### Part 1 — Quick power

How to calculate the value of $a^b\bmod p$?

You may write out this code easily:

long long power(long long a, long long b, long long p) {
long long res = 1;
while(b--) res = res * a % p;
return res;
}

Seems easy, right? This code has $\mathcal O(b)$ time complexity. But what if $1\leq a,b\leq 10^9$? Can you solve it?

Here, we use divide and conquer to solve this:

$a^b=\begin{cases}a^{\left\lfloor \frac{b}{2} \right\rfloor} \times a^{\left\lfloor \frac{b}{2} \right\rfloor} & b \text{ is even} \\ a^{\left\lfloor \frac{b}{2} \right\rfloor} \times a^{\left\lfloor \frac{b}{2} \right\rfloor} \times a & b \text{ is odd}\end{cases}$

According to this, we can write out a code easily:

long long powpow(long long a, long long b, long long p) {
if(b == 0) return 1; // base case: without it the recursion never terminates
long long r = powpow(a, b / 2, p);
if(b % 2 == 1) return r * r % p * a % p;
else return r * r % p;
}

Now certainly the time complexity is $\mathcal O(\log b)$. But this code contains a recursion. Since the recursion is simple, we can get rid of recursion:

long long quickpow(long long a, long long b, long long p) {
long long res = 1, nowa = a % p;
while(b) {
if(b % 2 == 1) res = res * nowa % p;
nowa = nowa * nowa % p; // nowa holds a^(2^i), squared each step
b /= 2;
}
return res;
}

Now I'll give a fast version of it without explanation; you can use it in the later implementations.

long long quickpow(long long a, long long b, long long p) {
long long res = 1;
while(b) {
if(b & 1) res = res * a % p;
a = a * a % p;
b >>= 1;
}
return res;
}

The time complexity remains the same, but the constant is smaller now. If you still wonder why, you can wait for my bitmask post.

Exercise: Calculate $(a\times b) \bmod p$, $10^9 \leq p\leq 10^{18}$.

In C++, a long long variable is between $-2^{63}$ and $2^{63}-1$, which is about $10^{18}$. When you do a * b % p, the following happens:

• $a\times b$ is calculated first, it can overflow and lead to error.
• $a\times b\bmod p$ is calculated, the result is already wrong.

So multiplying with brute force is impossible. Can you think of an $\mathcal O(\log b)$ solution?

Solution
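One possible approach (a sketch of ours, not necessarily the hidden solution): multiply the same way we exponentiate, keeping a running sum and doubling $a$, reducing modulo $p$ at every step so nothing overflows $64$ bits for $p\leq 10^{18}$:

```cpp
#include <cassert>

// a * b mod p without overflow, by binary decomposition of b:
// every intermediate value stays below 2p < 2^64 for p <= 10^18.
// O(log b) additions.
unsigned long long mulmod(unsigned long long a, unsigned long long b,
                          unsigned long long p) {
    unsigned long long res = 0;
    a %= p;
    while (b) {
        if (b & 1) res = (res + a) % p;
        a = (a + a) % p;
        b >>= 1;
    }
    return res;
}
```

On 64-bit targets, `(unsigned __int128)a * b % p` is a common constant-time alternative.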

You may wonder, if we divide it into $2$ parts, why don't we divide it into $k$ parts?

Actually, the time complexity is $\mathcal O(k\times\log_{k} n)$.

Look at the image where we draw $y=k+k\times \log_{k}n$:

And we draw out $\color{green} {x=2}$ and $\color{red}{x=3}$ when $n$ is very large:

So we find that $k=2,3$ are optimal. But $k=3$ needs $3$ branches, which leads to a larger constant than expected, so we use $k=2$ instead.

### Part 2 — Fermat's little theorem

Here we start with a simple lemma, that is given here without a proof:

$\textbf{Lemma: } a^{p-1}\equiv 1\pmod p$, where $p$ is a prime and $p\nmid a$

Then we have $a^{p-2}\equiv \frac{1}{a}\pmod p$, so we can do modular arithmetic on all rational numbers, since $\frac{p}{q}=\frac 1q \times p$.

Then you may find that we can do division operation modulus a number in $\mathcal O(\log p)$ time with quick power!

long long div(long long d, long long p) {
return quickpow(d, p - 2, p);
}

This is very important, since many problems requires you to output the result modulus $998244353$ or $10^9+7$, and they are all primes.

Now you can try to calculate a single combination $\binom{n}{k}\bmod p$ in $\mathcal O(n+\log p)$ time.
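A sketch of that computation (the helper names are ours): accumulate the factorial products in $\mathcal O(n)$ and divide once with a Fermat inverse in $\mathcal O(\log p)$.

```cpp
#include <cassert>

// Standard binary exponentiation modulo p.
long long quickpow(long long a, long long b, long long p) {
    long long res = 1;
    a %= p;
    while (b) {
        if (b & 1) res = res * a % p;
        a = a * a % p;
        b >>= 1;
    }
    return res;
}

// n choose k modulo a prime p: n! divided by k!(n-k)!,
// the division done via Fermat's little theorem.
long long binomSingle(long long n, long long k, long long p) {
    if (k < 0 || k > n) return 0;
    long long num = 1, den = 1;
    for (long long i = 1; i <= n; i++) num = num * i % p;
    for (long long i = 1; i <= k; i++) den = den * i % p;
    for (long long i = 1; i <= n - k; i++) den = den * i % p;
    return num * quickpow(den, p - 2, p) % p;
}
```

For example, `binomSingle(5, 2, 998244353)` returns $10$.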

### Part 3 — Extend-gcd

First, let's review the Euclidean algorithm to calculate gcd:

int gcd(int a, int b) { if(a == 0) return b; return gcd(b % a, a); }

For example:

• $\gcd(66,26)$
• $66\div 26=2\cdots\cdots 14$
• $26\div14=1\cdots\cdots 12$
• $14\div12=1\cdots\cdots 2$
• $12\div2=6\cdots\cdots 0$
• $\therefore\gcd(66,26)=2$

Let's undo the process:

• $\gcd(66,26)=2$
• $14=66-26\times 2$
• $12=26-14\times 1$
• $2=14-12\times 1$
• $0=12-2\times 6$

So easily we get:

$2=14-12\times 1$
$\;\;\;=14-(26-14\times 1)\times 1$
$\;\;\;=(66-26\times 2)-(26-(66-26\times 2)\times 1)\times 1$
$\;\;\;=2\times 66+(-5)\times 26$

Then you find that we get a particular solution of the equation $ax+by=\gcd(a,b)$ when $a,b$ are given. This process is called the extended Euclidean algorithm, also extgcd. For example, when $a=66,b=26$, one possible solution is $(x,y)=(2,-5)$.

Ok, then we can write out the algorithm:

long long exgcd(long long a, long long b, long long &x, long long &y){
if(b == 0){
x = 1;
y = 0;
/*
* When b = 0, ax = gcd(a, 0) = a. So x = 1.
* The func returns the value of gcd, since a, b is not connected to x, y.
* But x, y is connected to a, b.
* Some also writes exgcd in the type of void,
* But in both versions,
* your variable x and y will be turned into one of the solutions of the equation.
*/
return a;
}
long long res = exgcd(b, a % b, y, x);
y -= (a / b * x);
return res;
}

But what is this used for? Let's return to the question from Part 2: solving the congruence $x\equiv \frac{1}{a}\pmod p$ when $p$ is a prime means solving $ax \equiv 1 \pmod p$, i.e. $ax + py = 1$. Because $p$ is a prime, $\gcd(p,a)=1$, so $ax+py=\gcd (a,p)$, and you only need to call exgcd(a, p, x, y).

Show you the code to do division operation in another way:

int div(int x, int p) {
int y;
exgcd(x, p, x, y);
return (x % p + p) % p;
}

Time complexity proof:

First the time complexity of Exgcd is the same as Euclidean algorithm, it's trivial.

Then we find two situations when we do Euclidean algorithm:

• $a<b$, and we have $\gcd(a,b)=\gcd(b,a)$.
• $a\ge b$, and we have $\gcd(a,b)=\gcd(b, a\bmod b)$; after the modulus, the new value $a\bmod b$ is at most $\left\lfloor\frac{a}{2}\right\rfloor$, so this halving can happen at most $\mathcal O(\log a)$ times.

The two situations take turns to appear, so the overall complexity is $\mathcal O(\log a)$. What's more, when $a,b$ are adjacent Fibonacci numbers, for example $8,13$, the bound is attained.

Note:

There is another way to implement exgcd (not very common), given without proof, which gets rid of the recursion.

int exgcd(int a, int b, int &x, int &y) {
int x1 = 1, y1 = 0, x2 = 0, y2 = 1;
while (b != 0) {
int c = a / b, t;
// update each pair with a temporary, so the old values are used
t = x1 - c * x2; x1 = x2; x2 = t;
t = y1 - c * y2; y1 = y2; y2 = t;
t = a - c * b; a = b; b = t;
}
x = x1, y = y1;
return a;
}

### Part 4 — The multiplicative inverse of an integer

The inverse $e$ of an integer $a$ under the meaning of operation $*$, is defined as:

• We note $o$ as the identity element.
• The identity element is defined as $\forall a, a*o=a$. When $*=+$, $o=0$, when $*=\times$, $o=1$, and so on.
• This does not stop here, but if we have to discuss it more formally, it will be related to the group theory, which is another tremendous branch of the combinatorics.
• $e*a=o$.

For example, the inverse of an integer $7$ under the meaning of operation addition is $-7$.

Also, the inverse of an integer $7$ under the meaning of operation multiplication is $\frac{1}{7}$.

So, the inverse of the integer $7$ under multiplication modulo $5$ is $7^{5-2}=343\equiv 3\pmod 5$.

According to the definition, finding the inverse of an integer $x$ under multiplication modulo a prime $p$ is exactly calculating the $a$ we've talked about in the last two parts: $a\equiv \frac{1}{x}\pmod p$.

To divide a number by $x$ under the meaning of operation multiplication modulus $p$, is to multiply the inverse of an integer $x$ under the meaning of operation multiplication modulus $p$.

Let's go straight to the summary:

const int mod = 998244353;
long long pw(long long a, long long b, long long p = mod) {
long long res = 1;
while(b){
if(b & 1) res = res * a % p;
a = a * a % p; // long long throughout, so a * a cannot overflow int
b >>= 1;
}
return res;
}
long long gcd(long long x, long long y){return !y ? x : gcd(y, x % y);}
long long exgcd(long long a, long long b, long long &x, long long &y) {
long long x1 = 1, y1 = 0, x2 = 0, y2 = 1;
while (b != 0) {
long long c = a / b, t;
t = x1 - c * x2; x1 = x2; x2 = t;
t = y1 - c * y2; y1 = y2; y2 = t;
t = a - c * b; a = b; b = t;
}
x = x1, y = y1;
return a;
}
long long inv(long long a, long long p = mod) {
long long x, y;
exgcd(a, p, x, y);
return (x % p + p) % p;
//or pw(a, p - 2, p);
}

### Part 5 — Prework optimize

Last post, I talked about the $\mathcal O(n^2)$ precomputation of combinations. Now we can do it in $\mathcal O(n+\log p)$ time.

Step 1:

We talked about $n!=(n-1)!\times n$ last time, so we can precompute factorials in $\mathcal O(n)$ time.

const int N = 10000005, mod = 998244353;
long long fac[N];
void init() {
fac[0] = 1;
for(int i = 1; i < N; i++) fac[i] = fac[i - 1] * i % mod;
}

Then call fac[i] and you will get the value of $i!$ modulo $998244353$.

Step 2:

Since $n!=(n-1)!\times n$, so $\frac{1}{n!}=\frac{1}{(n+1)!}\times (n+1)$. We define ifac[i] as the value of $\frac{1}{i!}$ modulus $998244353$.

Then we can dp it but in a reversing way:

const int N = 10000005, mod = 998244353;
long long fac[N], ifac[N];
void init() {
ifac[0] = fac[0] = 1;
for(int i = 1; i < N; i++) fac[i] = fac[i - 1] * i % mod;
ifac[N - 1] = inv(fac[N - 1]);
for(int i = N - 2; i; i--) ifac[i] = ifac[i + 1] * (i + 1) % mod;
}

Step 3:

$\binom{n}{k}=\frac{n!}{k!(n-k)!}$, so:

long long binom(int n, int m) {
return fac[n] * ifac[n - m] % mod * ifac[m] % mod;
}

But to prevent cases like $n<0$, $n-m<0$ or $m<0$, instead of checking at every call site, we check inside the function and return $0$; that's cleaner.

So, the final code:

const int N = 10000005, mod = 998244353;
long long fac[N], ifac[N];
void init() {
ifac[0] = fac[0] = 1;
for(int i = 1; i < N; i++) fac[i] = fac[i - 1] * i % mod;
ifac[N - 1] = pw(fac[N - 1], mod - 2);
for(int i = N - 2; i; i--) ifac[i] = ifac[i + 1] * (i + 1) % mod;
}
long long binom(int n, int m) {
if(n < 0 || n < m || m < 0) return 0;
return fac[n] * ifac[n - m] % mod * ifac[m] % mod;
}

Call binom(n, m) to get the value of $\binom{n}{m}$.

### Part 6 — Homework

Complete the implementation of Task A, B of last time's homework.

Solve Gym409982D in $\mathcal O(n)$ time.

Maybe you have encountered this one before.

Precompute the array inv[], where $inv[i]$ denotes the inverse of the integer $i$ under multiplication modulo $998244353$, in $\mathcal O(n)$ time.

We have a string $S$ with $\tt ($ and $\tt )$.

A valid string $S$ is:

• If $S$ is empty, $S$ is valid.
• If $T$ is valid, $S=\texttt{(}T\texttt{)}$ is valid.
• If $T$ and $U$ are valid, $S=TU$ is valid.

For example $\texttt{(()(())), ()()()(), ()}$ are all valid while $\texttt{)(, ()(()}$ aren't.

Count the number of valid strings when the length of $S$ is given.

#### At last

The second blog is completed; it discussed the implementation. In the next blog we will discuss maths again, so get ready. If you are puzzled by something in the text, you can comment under the blog!

Thanks for reading! This series will be continued.

Last but not least: Happy new year!


By Black_Fate, 3 months ago,

Hello Codeforces!

Today I'll be writing about what I have learnt about combinatorics, which has played, and, in my opinion, will keep playing, an important role in both Codeforces and CP (short for competitive programming).

However, combinatorics is such a broad subject that it's impossible for me to write it all in one blog. So this is just the very first blog, which is for beginners. If you are interested, please follow this account, and I'll keep posting this series over the long term.

If you have found mistakes in the text, or if you have questions about the topic, please leave a comment; I'll check weekly and reply. Also, if you find grammar mistakes, a kind comment is welcome too.

### Content

1. addition / multiplication principle
2. factorial, permutation and combination
3. special binomial theorem
4. principle of inclusion-exclusion
5. pigeonhole principle
6. homework

#### Part 0 — Pre-knowledge

In order not to make this blog too long (it's already very long now xD), I won't explain these things anymore. You can find them on wikis or google them.

• Set theory.
• CP basics.

#### Part 1 — addition / multiplication principle

What is the addition principle?

Stated simply, it is the intuitive idea that if we have $A$ ways of doing something and $B$ ways of doing another thing, and we cannot do both at the same time, then there are $A + B$ ways to choose one of the actions.

Now, more generally: if we have $n$ things to do and we must do strictly one of them, and for the $i$-th thing you have $a_i$ ways to do it, then there are $(a_1+a_2+a_3+\dots+a_n)$ (or in the lingo, $\displaystyle\sum^{n}_{i=1}a_i$) ways to choose one of the actions.

What is the multiplication principle?

Stated simply, it is the intuitive idea that if there are $a$ ways of doing something and $b$ ways of doing another thing, then there are $a\cdot b$ ways of performing both actions: you do the first thing first, then the second thing.

Now, more generally: if we have $n$ things to do, and for the $i$-th thing you have $a_i$ ways to do it, then there are $a_1\cdot a_2\cdot a_3\cdots a_n$ ($\displaystyle\prod^{n}_{i=1}a_i$) ways of performing all the actions. Here the $i$-th thing is done in the $i$-th step, strictly once, in a fixed order.

Here is a problem for you to solve:

Mike has $a$ hats, $b$ coats, $c$ pairs of trousers and $d$ pairs of shoes to choose from. He must choose strictly one coat, one pair of trousers and one pair of shoes. Since it's not very cold, Mike can choose whether to wear a hat, but he can wear at most $1$ hat. Please count the number of valid dressing ways.

Here comes the solution:

Solution 1
Solution 2

#### Part 2 — factorial, permutation and combination

What is factorial?

We define $n!=1\times 2\times 3\times \dots\times n$. For example: $3!=3\times 2\times 1=6$.

Additionally, we define $0!=1$.

You may wonder why we define this; one of the reasons is that, with $0!=1$, we can define $n!$ recursively: $n!=(n-1)!\times n\ (n\in \mathbb N^+)$

Here a question comes: is this related to what we are talking about today? Let's see the problem below:

Count the different versions of an array $a_1,a_2,\dots,a_n$ which satisfies:

• $1\leq a_i\leq n$.
• Every number in the array occurs only one time.

This is also called a permutation of $[1,2,3,\dots, n]$ or a permutation with its length $n$. For example $[1,3,2,5,4]$, $[1]$ and $[3,2,1]$ are permutations, while $[1,1,1]$, $[10,13,11,12]$ and $[1,2,4,5,6]$ are not.

Given $n$, give the answer.

Here comes the solution:

Solution

But what if we only need to build an array $a_1,a_2,\dots, a_k$, which still satisfies $1\leq a_i\leq n$ and every number in the array occurs only one time? This is called a permutation of $k$ elements out of $n$. The answer is $n\times (n-1)\times \dots\times (n-k+1)=\frac{n!}{(n-k)!}$, similar to the previous task, and we denote the answer $\frac{n!}{(n-k)!}=P_{n}^{k}$ or $A_{n}^{k}$. Specially, the $n=k$ case is exactly the previous problem, with $P_n^n=n!$.

A new question comes after the last problem: what if the order of the array we construct is no longer important, and the array $a$ and all permutations of $a$ are considered the same? How many different arrays are there now?

In set-theory language, we can state it more formally: how many sets $s$ with $s\subseteq S$ (that is, $s$ is a subset of $S$) are there, when $|s|=k,|S|=n$?

For each set we count, there are $k!$ arrays from the last problem that give that set (each a reordering of the others), so we have to divide the previous answer by $k!$. (If you cannot understand this, consider $S=[1,1,2,2,3,3]$: isn't the count of different numbers $\frac{6}{2}=3$? Here it's the same.) So the answer is $\frac{n!}{(n-k)!k!}$, and we denote it $\frac{n!}{(n-k)!k!}=\binom{n}{k}=C_n^k$. (On Wikipedia I found it written $C_k^n$; there are many versions of the notation.)

This is called combination.

Here are some interesting features of $C_n^k$:

1

Actually, $\binom{n}{k}$ didn't originally refer to the combination numbers. It refers to the coefficients of the polynomial $f(x)=(x+1)^n=\binom{n}{n}x^n+\binom{n}{n-1}x^{n-1}+\binom{n}{n-2}x^{n-2}+\dots+\binom{n}{1}x+\binom{n}{0}$. Writing $f(x)=(x+1)(x+1)(x+1)\dots(x+1)$, to form the term $x^k$ you choose $x$ from $k$ of the $n$ factors, and there are $C_n^k$ ways to do so, so the coefficient of $x^k$ is $C_n^k=\binom{n}{k}$. Since the values are exactly the same, we use the notation $\binom{n}{k}$ for combinations too.

2

$\binom{n}{k}=\binom{n-1}{k-1}+\binom{n-1}{k}$.

Proof
Another Proof

According to this, we can get the value of $\binom{n}{m}\bmod k$ in $\mathcal O(1)$ time after $\mathcal O(n^2)$ initialization work:

int lim = 2000;
// c[][] is a global 2D array; k is the chosen modulus
for(int i = 0; i <= lim; i++) c[i][0] = c[i][i] = 1;
for(int i = 0; i <= lim; i++) {
    for(int j = 1; j < i; j++) {
        c[i][j] = c[i - 1][j - 1] + c[i - 1][j];
        c[i][j] %= k; // We will mention this later.
    }
    for(int j = i + 1; j <= lim; j++) {
        c[i][j] = 0; // binom(i, j) = 0 when j > i
    }
}

By the way, $\binom{n}{m}$ is a huge number, so we usually take it modulo some number (usually a large prime; the reason will be discussed in the next post of the series).

#### Part 3 — special binomial theorem

Here we give the equation of the special binomial theorem:

$\sum_{i=0}^{n}\binom{n}{i}=2^n, n\in\mathbb N$

Here $\sum\limits_{i=0}^{n}\binom{n}{i}$ denotes $\binom{n}{0}+\binom{n}{1}+\dots+\binom{n}{n-1}+\binom{n}{n}$.

Proof:

How many subsets (empty allowed) of $S$ are there when $|S| = n$?

Solution 1: To form a subset $s$, for each $i$ you choose whether the $i$-th element of $S$ is in $s$ or not. This fits the multiplication principle, so the answer is $2\times 2\times \dots\times 2=2^n$.

Solution 2: For each size $|s|=0,1,2,\dots,n$, the count is $\binom{n}{|s|}$. According to the addition principle, the answer is $\binom{n}{0}+\binom{n}{1}+\dots+\binom{n}{n-1}+\binom{n}{n}=\sum\limits_{i=0}^{n}\binom{n}{i}$

Since the two solutions count the same thing, the two answers are equal, which gives the proof.

An easy trick:

If you want to enumerate all the subsets of $S=\lbrace S_1,S_2,\dots,S_n\rbrace$ when $|S|=n$, we can give out a binary string $s$ of size $n$ to represent a subset.

• $s_i=\texttt 1$, when $S_i\in s$, which denotes that you chose the $i$-th element.
• $s_i=\texttt 0$, when $S_i\not\in s$, which denotes that you didn't choose the $i$-th element.

But a binary string can be seen as an integer in the binary system, which lies in $[0,2^n)$. So you can just enumerate the integer. Isn't it faster and easier to implement than DFS?

#### Part 4 — principle of inclusion-exclusion

$10$ students like maths, $11$ students like dp, $12$ students like graphs, and each student likes at least one of these three things. How many students are there in the class? Is it $10+11+12=33$? No, because one student can like both maths and dp, or even all three subjects.

Here we denote the sets of students that like maths, dp and graphs as $A,B,C$ respectively. So there are $|A\cup B\cup C|$ students in the class. Moreover:

• $|A\cup B\cup C|=|A|+|B|+|C|-|A\cap B|-|B\cap C|-|C\cap A|+|A\cap B\cap C|$

View this image for understanding.

If we generalize $3$ to $n$, we get the principle of inclusion-exclusion formula.

$\displaystyle\Big|\bigcup^{n}_{i=1} A_i\Big|=\sum_{i=1}^{n} |A_i|-\sum_{1\leq i<j\leq n} |A_i\cap A_j|+\sum_{1\leq i<j<k\leq n} |A_i\cap A_j\cap A_k|-\dots+(-1)^{n-1}|A_1\cap A_2\cap A_3\cap \dots \cap A_n|$

There are no tasks here for now, I'm very sorry. Tasks will be given after the next post, because without the implementation it's hard to set them and to explain the time complexity.

Instead, I'll leave some problems in the homework.

#### Part 5 — pigeonhole principle

It is always used to prove existential problems or something in the worst condition.

Easy condition

There are $n+1$ pigeons and $n$ pigeonholes. Prove: there is at least one pigeonhole which has more than one pigeon in it.

Proof

More commonly

For $m$ pigeons in $n$ pigeonholes, there is at least one pigeonhole with no less than $\left\lceil\frac{m}{n}\right\rceil$ pigeons.

Proof: for homework, it's pretty easy.

#### Part 6 — homework

There are $k$ sets $S_1,S_2,\dots,S_k$, which satisfy:

• $\displaystyle\bigcup_{i=1}^{k} S_i = \lbrace 1,2,3,\dots,n\rbrace$

Give the number of different tuples $S_1,S_2,S_3,\dots, S_k$.

Sample: $n=2, k=2, ans=9$.

I'm standing on the point $(0,0)$ and I want to walk to the point $(n,m)$. For each walk, suppose I'm at $(i,j)$, I can go to $(i+1,j)$ or $(i,j+1)$. How many different routes are there for me to finish the walk?

Sample: $n=2, m=2, ans=6$.

We define $S(a,b,c,d)=(a-b)(a-c)(a-d)(b-c)(b-d)(c-d)$.

Calculate the value of

$\gcd\limits_{a,b,c,d\in\mathbb N^+} \{S(a,b,c,d)\}$

Garlic is a clever girl. Given $n,m,k$, she wonders how many pairs $(i,j)$ satisfy $k\mid\binom{i}{j}$, over all $0\leq i\leq n,0\leq j\leq \min \left ( i, m \right )$.

There are $q$ queries of $n,m$, while $k$ is constant.

Input

The first line contains two integers, $q,k$.

Each of the following $q$ lines contains two integers $n,m$.

Output

$q$ lines. Each line contains one number, the answer to the corresponding query.

Input Sample

2 5
4 5
6 7

Output Sample

0
7

Data range

$1\leq n,m\leq 5000, 1\leq k\leq 50, 1\leq q\leq 10^6$

#### At last

If you're unsure of the solution to the homework:

tutorial

This blog covers the easy material; maybe it's only a test for the series. Harder things will be mentioned later, such as initialization of combinations in $\mathcal O(n)$ time, Lucas' theorem, and so on.

Thanks for reading! This series will be continued.


By Black_Fate, history, 3 months ago,

Thank you MikeMirzayanov for the amazing platform.

Maybe I have to work harder to live up to my own handle, Black_Fate.


By Black_Fate, history, 7 months ago,

The rating changes were just rolled back. If a CM rolled back to Expert and participates in Div. 2, will the round be unrated for him/her, or will he/she be forced to participate in Div. 1?
