### KokiYmgch's blog

By KokiYmgch, history, 3 years ago,

I wrote about an easy implementation of centroid decomposition on a tree.

First of all, the implementation of centroid decomposition tends to be complicated, and you might have seen someone's code which has too many functions named 'dfs n' (n = 1, 2, 3, ...). I, for one, don't want to code something like that!

So, let me introduce my implementation of centroid decomposition. I hope you get something new from it.

Firstly, we need to find one of the centroids of the tree. Be careful not to forget that some vertices are going to die while the decomposition is repeated.

The function which returns the centroid is easily implemented in the following way:

int OneCentroid(int root, const vector<vector<int>> &g, const vector<bool> &dead) {
        static vector<int> sz(g.size());
        function<void (int, int)> get_sz = [&](int u, int prev) {
                sz[u] = 1;
                for (auto v : g[u]) if (v != prev && !dead[v]) {
                        get_sz(v, u);
                        sz[u] += sz[v];
                }
        };
        get_sz(root, -1);
        int n = sz[root];
        function<int (int, int)> dfs = [&](int u, int prev) {
                for (auto v : g[u]) if (v != prev && !dead[v]) {
                        if (sz[v] > n / 2) {
                                return dfs(v, u);
                        }
                }
                return u;
        };
        return dfs(root, -1);
}


Then, using this centroid, you can implement centroid decomposition like this.

void CentroidDecomposition(const vector<vector<int>> &g) {
        int n = (int) g.size();
        vector<bool> dead(n, false);
        function<void (int)> rec = [&](int start) {
                int c = OneCentroid(start, g, dead);           //2
                dead[c] = true;                                //2
                for (auto u : g[c]) if (!dead[u]) {
                        rec(u);                                //3
                }
                /*
                        compute something with the centroid    //4
                */
                dead[c] = false;                               //5
        };
        rec(0);                                                //1
}


This works in the following way:

1. Calculate on the entire tree. All the vertices are alive at first.

2. Find the centroid of the current tree, and mark it as dead.

3. Recurse into each subtree that doesn't include the centroid, going back to step 2 for each of them.

4. Calculate whatever is required that involves the centroid.

5. Mark the centroid as alive again, because this is a DFS.

Simply enough, when you use this, you only need to change part 4. All the other parts stay the same, which means you can reuse the template generally.
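As a sanity check of the template (this instantiation is mine, not from the post): if part 4 counts the paths passing through the current centroid, the totals over all centroids must add up to n(n - 1)/2, because every path passes through exactly one centroid of the decomposition.

```cpp
#include <functional>
#include <vector>
using namespace std;

// Find one centroid of the component containing `root`, ignoring dead vertices
// (same idea as OneCentroid in the post; sz is a plain local here for simplicity).
int OneCentroid(int root, const vector<vector<int>> &g, const vector<bool> &dead) {
        vector<int> sz(g.size());
        function<void (int, int)> get_sz = [&](int u, int prev) {
                sz[u] = 1;
                for (auto v : g[u]) if (v != prev && !dead[v]) {
                        get_sz(v, u);
                        sz[u] += sz[v];
                }
        };
        get_sz(root, -1);
        int n = sz[root];
        function<int (int, int)> dfs = [&](int u, int prev) {
                for (auto v : g[u]) if (v != prev && !dead[v]) {
                        if (sz[v] > n / 2) return dfs(v, u);
                }
                return u;
        };
        return dfs(root, -1);
}

// Part 4 counts the paths passing through the current centroid:
// paths ending at c, plus pairs of vertices lying in two different subtrees.
long long CountAllPaths(const vector<vector<int>> &g) {
        int n = (int) g.size();
        vector<bool> dead(n, false);
        long long total = 0;
        function<long long (int, int)> comp_sz = [&](int u, int prev) -> long long {
                long long s = 1;
                for (auto v : g[u]) if (v != prev && !dead[v]) s += comp_sz(v, u);
                return s;
        };
        function<void (int)> rec = [&](int start) {
                int c = OneCentroid(start, g, dead);
                dead[c] = true;
                long long others = 0, sq = 0; //vertices other than c, sum of squared subtree sizes
                for (auto u : g[c]) if (!dead[u]) {
                        long long s = comp_sz(u, c);
                        others += s;
                        sq += s * s;
                        rec(u);
                }
                total += others + (others * others - sq) / 2;
                dead[c] = false;
        };
        rec(0);
        return total;
}
```

On the path 0-1-2-3 this returns 6, and on a star with 4 leaves it returns 10, both equal to n(n - 1)/2.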

Let me show you an example.

(I guess this statement is available only in Japanese. Sorry for inconvenience!)

Summary: You are given a tree with N vertices. Answer the Q queries below.

Query v k: find the number of vertices whose distance from v is exactly k.

N, Q ≤ 10^5

The obvious solution to this problem is, for each query (v, k), to root the tree at v and count the vertices whose depth equals k. This solution, however, requires O(NQ) time.

When you want to count something on a tree, especially something related to paths, centroid decomposition is one of the good directions to head for.

First of all, place all the queries on the tree, and deal with them all at once. It's easy to see that each query actually asks for the number of paths whose length is k and whose endpoint is v.

If you decompose the tree as I mentioned above, you only need to count the paths which pass through the centroid.

More specifically, just record the distances from the centroid, form the paths whose length is exactly k, and count them. Again, almost nothing changed except part 4 of the implementation above.

#include <cstdio>
#include <vector>
#include <algorithm>
#include <functional>
#include <tuple>
#include <map>
#include <cassert>
#include <cmath>
using namespace std;

int OneCentroid(int root, const vector<vector<int>> &g, const vector<bool> &dead) {
        static vector<int> sz(g.size());
        function<void (int, int)> get_sz = [&](int u, int prev) {
                sz[u] = 1;
                for (auto v : g[u]) if (v != prev && !dead[v]) {
                        get_sz(v, u);
                        sz[u] += sz[v];
                }
        };
        get_sz(root, -1);
        int n = sz[root];
        function<int (int, int)> dfs = [&](int u, int prev) {
                for (auto v : g[u]) if (v != prev && !dead[v]) {
                        if (sz[v] > n / 2) {
                                return dfs(v, u);
                        }
                }
                return u;
        };
        return dfs(root, -1);
}

vector<int> CentroidDecomposition(const vector<vector<int>> &g, const vector<vector<pair<int, int>>> &l, int q) {
        int n = (int) g.size();
        vector<int> ans(q, 0);
        vector<bool> dead(n, false);
        function<void (int)> rec = [&](int start) {
                int c = OneCentroid(start, g, dead);
                dead[c] = true;
                for (auto u : g[c]) if (!dead[u]) {
                        rec(u);
                }

                /*
                        changed from here
                */
                map<int, int> cnt;
                function<void (int, int, int, bool)> add_cnt = [&](int u, int prev, int d, bool add) {
                        cnt[d] += (add ? 1 : -1);
                        for (auto v : g[u]) if (v != prev && !dead[v]) {
                                add_cnt(v, u, d + 1, add);
                        }
                };
                function<void (int, int, int)> calc = [&](int u, int prev, int d) {
                        for (auto it : l[u]) {
                                int dd, idx;
                                tie(dd, idx) = it;
                                if (dd - d >= 0 && cnt.count(dd - d)) {
                                        ans[idx] += cnt[dd - d];
                                }
                        }
                        for (auto v : g[u]) if (v != prev && !dead[v]) {
                                calc(v, u, d + 1);
                        }
                };
                add_cnt(c, -1, 0, true);                       //count the distances from the centroid
                for (auto u : g[c]) if (!dead[u]) {
                        add_cnt(u, c, 1, false);               //exclude the subtree of u
                        calc(u, c, 1);                         //answer the queries inside the subtree of u
                        add_cnt(u, c, 1, true);                //restore
                }
                for (auto it : l[c]) {                         //queries on the centroid itself
                        int dd, idx;
                        tie(dd, idx) = it;
                        if (cnt.count(dd)) ans[idx] += cnt[dd];
                }
                //to here

                dead[c] = false;
        };
        rec(0);
        return ans;
}

int main() {
        int n, q;
        scanf("%d %d", &n, &q);
        vector<vector<int>> g(n);
        for (int i = 0; i < n - 1; i ++) {
                int a, b;
                scanf("%d %d", &a, &b);
                a --, b --;
                g[a].push_back(b);
                g[b].push_back(a);
        }
        vector<vector<pair<int, int>>> l(n); //(dist, query idx) for each vertex
        for (int i = 0; i < q; i ++) {
                int v, k;
                scanf("%d %d", &v, &k);
                v --;
                l[v].emplace_back(k, i);
        }
        auto ans = CentroidDecomposition(g, l, q);
        for (int i = 0; i < q; i ++) {
                printf("%d\n", ans[i]);
        }
        return 0;
}


You can practice centroid decomposition on this problem too! Try it if you like!

http://codeforces.com/contest/914/problem/E

This problem also asks you to count specific paths.


By KokiYmgch, history, 3 years ago,

The centroid(s) of a tree is the vertex (or vertices) all of whose subtrees have size not more than n/2, where n is the size of the whole tree.

A tree may have one centroid or two centroids. If it has two centroids, they are always adjacent: each of the two components obtained by removing the edge between them contains exactly n/2 vertices, so n must be even.

You can find these vertices with a single DFS, checking the size of each subtree. When the size of a subtree is s, the size of the remaining part is n - s.

vector<int> Centroid(const vector<vector<int>> &g) {
        int n = g.size();
        vector<int> centroid;
        vector<int> sz(n);
        function<void (int, int)> dfs = [&](int u, int prev) {
                sz[u] = 1;
                bool is_centroid = true;
                for (auto v : g[u]) if (v != prev) {
                        dfs(v, u);
                        sz[u] += sz[v];
                        if (sz[v] > n / 2) is_centroid = false;
                }
                if (n - sz[u] > n / 2) is_centroid = false;
                if (is_centroid) centroid.push_back(u);
        };
        dfs(0, -1);
        return centroid;
}


Usage: given g, the adjacency list of a tree, get a vector with one or two centroids.

vector<vector<int>> g(n);
/*
        construct a tree
*/
auto centroids = Centroid(g);
if (centroids.size() == 1) {
        int c = centroids[0];
        //
} else if (centroids.size() == 2) {
        int c1 = centroids[0];
        int c2 = centroids[1];
        //
} else {
        assert(false);
}


You may sometimes want to find the centroids of a subtree obtained by cutting the original tree. When cutting a tree, you don't really 'cut' it. Instead, just mark a vertex as dead. By ignoring dead vertices, you can re-implement the Centroid function as follows.

vector<int> Centroid(int root, const vector<vector<int>> &g, const vector<bool> &dead) {
        static vector<int> sz(g.size());
        function<void (int, int)> get_sz = [&](int u, int prev) {
                sz[u] = 1;
                for (auto v : g[u]) if (v != prev && !dead[v]) {
                        get_sz(v, u);
                        sz[u] += sz[v];
                }
        };
        get_sz(root, -1);
        int n = sz[root];
        vector<int> centroid;
        function<void (int, int)> dfs = [&](int u, int prev) {
                bool is_centroid = true;
                for (auto v : g[u]) if (v != prev && !dead[v]) {
                        dfs(v, u);
                        if (sz[v] > n / 2) is_centroid = false;
                }
                if (n - sz[u] > n / 2) is_centroid = false;
                if (is_centroid) centroid.push_back(u);
        };
        dfs(root, -1);
        return centroid;
}


Don't forget to reuse sz (hence the static), or it's going to be slow because of repeated allocation (or, if you like, use a map).

By the way, when you do centroid decomposition, you don't need to know both centroids of the tree. Therefore, if you just want to find 'a' centroid of a dynamic tree, you can implement it in the following way:

int OneCentroid(int root, const vector<vector<int>> &g, const vector<bool> &dead) {
        static vector<int> sz(g.size());
        function<void (int, int)> get_sz = [&](int u, int prev) {
                sz[u] = 1;
                for (auto v : g[u]) if (v != prev && !dead[v]) {
                        get_sz(v, u);
                        sz[u] += sz[v];
                }
        };
        get_sz(root, -1);
        int n = sz[root];
        function<int (int, int)> dfs = [&](int u, int prev) {
                for (auto v : g[u]) if (v != prev && !dead[v]) {
                        if (sz[v] > n / 2) {
                                return dfs(v, u);
                        }
                }
                return u;
        };
        return dfs(root, -1);
}


This is very fast, because it returns as soon as it reaches a centroid.

Thank you for your reading! I'll write an article on centroid decomposition later!


By KokiYmgch, history, 3 years ago,

Atcoder Petrozavodsk Contest was held last Sunday, and I found 'problem I' really interesting. Let me share this problem and the way to get to the answer.

This is written in Japanese too: http://www.learning-algorithms.com/entry/2018/02/05/160601

Summary: There's an H × W grid, and n squares of the grid are colored black, while the others are white. Find the sum of the minimum distances over all pairs of white squares. The distance here is defined as the minimum number of steps needed, moving left, right, up, or down and never stepping on a black square. H, W ≤ 1000000, n ≤ 30

If you have, by any chance, read my article on Hirschberg's Algorithm, this problem is not super difficult for you. A closely related idea is used.

http://codeforces.com/blog/entry/57512

First of all, let me vertically divide the grid into two grids, and consider the minimum distance between two white squares, one in the left grid and the other in the right grid. If the two columns adjacent to the division are entirely white, any shortest path between such a pair crosses the division exactly once. This means that you can count, independently for each division, the contribution of all the paths crossing it, and then contract the two columns into one. The value to add is the product of the numbers of white squares in the two grids, since each such pair's shortest path spends exactly one step crossing the division.

You can make the grid smaller and smaller by repeating this process, and thanks to the restriction n ≤ 30, the grid becomes a very small O(n) × O(n) grid.

Now, you can do a simple BFS on this new grid, but wait: you need to account for the weight of each cell.

Suppose you contract the following grid:

Of course you need to check how many squares are actually hidden in each cell! Thus, the weight of each cell is going to be like this:

This is easily calculated by seeing columns and rows independently.

On the contracted grid, the minimum distance d between a white square A (weight W_A) and another white square B (weight W_B) equals the minimum distance, before the contraction, between any square included in A and any square included in B. Therefore, you need to sum d * W_A * W_B over all pairs in the contracted grid. This works efficiently enough, in O(n^4). If you counted every pair twice, be careful not to forget to halve the value. Finally, the sum of this value and the pre-calculated value is the answer to this problem.

#include <cstdio>
#include <vector>
#include <algorithm>
#include <functional>
#include <map>
#include <set>
#include <string>
#include <iostream>
#include <cassert>
#include <cmath>
#include <queue>
using namespace std;

const int MOD = 1e9 + 7;

struct state { int y, x, step; };
static const int dx[] = {1, 0, -1, 0}, dy[] = {0, 1, 0, -1};

int main() {
        long long h, w;
        scanf("%lld %lld", &h, &w);
        int n;
        scanf("%d", &n);
        vector<int> w_cnt(w, 0), h_cnt(h, 0);
        vector<bool> w_exist(w, false), h_exist(h, false);
        vector<pair<int, int>> black;
        for (int i = 0; i < n; i ++) {
                int y, x;
                scanf("%d %d", &y, &x);
                w_cnt[x] ++;
                h_cnt[y] ++;
                w_exist[x] = true;
                h_exist[y] = true;
                black.emplace_back(x, y);
        }
        for (int i = 1; i < w; i ++) w_cnt[i] += w_cnt[i - 1];
        for (int i = 1; i < h; i ++) h_cnt[i] += h_cnt[i - 1];
        long long ans = 0;
        //precalc: every all-white pair of adjacent columns (rows) contributes
        //(#white squares on one side) * (#white squares on the other)
        for (int i = 0; i < w - 1; i ++) {
                if (!w_exist[i] && !w_exist[i + 1]) {
                        long long left = ((long long) (i + 1) * h - w_cnt[i]) % MOD;
                        long long right = ((long long) (w - (i + 1)) * h - (n - w_cnt[i])) % MOD;
                        ans += left * right;
                        ans %= MOD;
                }
        }
        for (int i = 0; i < h - 1; i ++) {
                if (!h_exist[i] && !h_exist[i + 1]) {
                        long long left = ((long long) (i + 1) * w - h_cnt[i]) % MOD;
                        long long right = ((long long) (h - (i + 1)) * w - (n - h_cnt[i])) % MOD;
                        ans += left * right;
                        ans %= MOD;
                }
        }
        //compress: contract every maximal run of all-white columns (rows) into one
        map<int, int> newx, newy;
        vector<pair<long long, bool>> widths, heights; //(length, is_white)
        {
                int cnt = 0;
                for (int i = 0; i < w; i ++) {
                        if (!w_exist[i]) {
                                cnt ++;
                        } else {
                                if (cnt) {
                                        widths.emplace_back(cnt, true);
                                        cnt = 0;
                                }
                                newx[i] = (int) widths.size();
                                widths.emplace_back(1, false);
                        }
                }
                if (cnt) widths.emplace_back(cnt, true);
        }
        {
                int cnt = 0;
                for (int i = 0; i < h; i ++) {
                        if (!h_exist[i]) {
                                cnt ++;
                        } else {
                                if (cnt) {
                                        heights.emplace_back(cnt, true);
                                        cnt = 0;
                                }
                                newy[i] = (int) heights.size();
                                heights.emplace_back(1, false);
                        }
                }
                if (cnt) heights.emplace_back(cnt, true);
        }
        //re-write the grid
        int neww = (int) widths.size();
        int newh = (int) heights.size();
        vector<vector<long long>> s(newh, vector<long long> (neww, 1)); //-1 when it's black, weight when it's white
        for (auto b : black) {
                s[newy[b.second]][newx[b.first]] = -1;
        }
        for (int i = 0; i < newh; i ++) {
                for (int j = 0; j < neww; j ++) {
                        if (heights[i].second || widths[j].second) {
                                s[i][j] = heights[i].first * widths[j].first % MOD;
                        }
                }
        }
        //BFS from every white cell of the contracted grid
        long long sum = 0;
        for (int sy = 0; sy < newh; sy ++) {
                for (int sx = 0; sx < neww; sx ++) {
                        if (s[sy][sx] == -1) continue;
                        long long res = s[sy][sx];
                        vector<vector<bool>> used(newh, vector<bool>(neww, false));
                        queue<state> q;
                        q.push({sy, sx, 0});
                        used[sy][sx] = true;
                        while (!q.empty()) {
                                state p = q.front(); q.pop();
                                if (p.y != sy || p.x != sx) {
                                        assert(s[p.y][p.x] != -1);
                                        sum += (long long) p.step * res % MOD * s[p.y][p.x] % MOD;
                                        sum %= MOD;
                                }
                                for (int d = 0; d < 4; d ++) {
                                        int xx = p.x + dx[d], yy = p.y + dy[d];
                                        if (xx < 0 || xx >= neww || yy < 0 || yy >= newh) continue;
                                        if (used[yy][xx] || s[yy][xx] == -1) continue;
                                        used[yy][xx] = true;
                                        q.push({yy, xx, p.step + 1});
                                }
                        }
                }
        }
        sum *= (MOD + 1) / 2; //every pair was counted twice; multiply by the inverse of 2
        sum %= MOD;
        ans += sum;
        ans %= MOD;
        printf("%lld\n", ans);
        return 0;
}



By KokiYmgch, history, 3 years ago,

I wrote an article about Hirschberg's Algorithm.

Firstly, look at the problem below.

Summary: You are given an H × W grid which has a number in each cell. You want to go from the upper left cell to the lower right cell, and the score is the sum of the numbers of the cells you pass through. Minimize the score, and output one path which achieves it. H ≤ 10000, W ≤ 10000.

If you just need to calculate the minimum score, you can solve this problem by a simple DP with the time complexity O(HW).

dp(x, y) = grid(x, y) + min(dp(x - 1, y), dp(x, y - 1))

Let me show the grids on which the numbers and the DP values are written.

In this example, you found out that the minimum score is 29. Now, let's restore the path which achieves that score. On the DP grid that I showed above, you can restore the path from the last cell. When you are on a cell in the optimal path, the previous cell must be the one with the smaller DP value. For each cell, therefore, just write 1 if the number in the upper cell is smaller than that in the left cell, and 0 otherwise. In this way, you get a grid like the left of the image, and the optimal path can be restored.

What I wrote above is simply implemented. It's a piece of cake!

vector<vector<long long>> dp(h, vector<long long> (w, INFL));
dp[0][0] = grid[0][0];
for (int i = 0; i < h; i ++) {
        for (int j = 0; j < w; j ++) {
                if (i - 1 >= 0) dp[i][j] = min(dp[i][j], (long long) grid[i][j] + dp[i - 1][j]);
                if (j - 1 >= 0) dp[i][j] = min(dp[i][j], (long long) grid[i][j] + dp[i][j - 1]);
        }
}
vector<bitset<10010>> restore(10010);
for (int i = 0; i < h; i ++) {
        for (int j = 0; j < w; j ++) {
                if (i == 0) {
                        restore[i][j] = false;
                } else if (j == 0) {
                        restore[i][j] = true;
                } else {
                        restore[i][j] = dp[i - 1][j] < dp[i][j - 1];
                }
        }
}
int y = h - 1, x = w - 1;
string ans = "";
while (true) {
        if (x == 0 && y == 0) break;
        if (restore[y][x]) {
                ans += "D";
                y --;
        } else {
                ans += "R";
                x --;
        }
}
reverse(ans.begin(), ans.end());
cout << ans << endl;


This problem seemed quite easy for many coders, but if you look at the problem statement again, you'll notice that the memory limit is unusual (it's even emphasized!). This memory limit implies that you can't use O(HW) memory.

If you just want to know the minimum score, you can do the same DP reusing only two rows of the DP array, but then how do you restore the path?
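The rolling-row trick for the score alone can be sketched like this (my illustration, not from the original article; `grid` is an arbitrary input matrix):

```cpp
#include <algorithm>
#include <vector>
using namespace std;

// Minimum path score from (0,0) to (h-1,w-1), moving only right or down,
// keeping just two rows of the DP table: O(W) memory instead of O(HW).
long long MinScore(const vector<vector<int>> &grid) {
        int h = (int) grid.size(), w = (int) grid[0].size();
        const long long INF = 1LL << 60;
        vector<long long> prev(w, INF), cur(w, INF);
        for (int i = 0; i < h; i ++) {
                for (int j = 0; j < w; j ++) {
                        long long best = INF;
                        if (i == 0 && j == 0) best = 0;
                        if (i > 0) best = min(best, prev[j]);      //from above
                        if (j > 0) best = min(best, cur[j - 1]);   //from the left
                        cur[j] = best + grid[i][j];
                }
                swap(prev, cur); //the current row becomes the previous one
        }
        return prev[w - 1];      //after the final swap, prev holds the last row
}
```

To actually get O(min(H, W)) memory as claimed in the article, run the DP over the shorter of the two dimensions.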

One possible solution is to restore the path from the right side by reusing two restoration arrays. However, since you can't store all the DP values, every time you restore one step of the path you need to redo the DP from the left side, which unfortunately requires O(HW * min(H, W)) time. Not fast enough!

This problem can be solved by Hirschberg's Algorithm! This algorithm makes it possible to restore the DP path within O(min(H, W)) space and O(HW) time.

This algorithm is based on divide and conquer: divide the grid, find where the path crosses each division, and merge the results.

Let's see this algorithm, using the same example as above. First of all, divide the grid into two grids of (almost) the same size. According to the rule of the movement, there must be exactly one position where you pass from the left grid to the right grid.

In order to find the position, you just do the DP from the upper left to the lower right on the left grid, while you do the DP from the lower right to the upper left on the right grid.

Consequently, at each position on the division line, the score has been calculated both from the upper left and from the lower right; the position where the sum of the two is minimum must be used. In this example, these sums are 35, 33, 29, 36 from the top, which implies that the 3rd position from the top should be used.

Then, just repeat this process until all the movement is restored. The second time, you need to consider only the following part.

This means that you only need to traverse half the area you traversed previously. In the third step, only a quarter of the entire grid needs to be checked.

The space complexity is obviously O(min(H, W)), while the time complexity is O(HW).
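The time bound can be derived as a geometric series (filling in the formula; this assumes, as above, that the two subproblems at each level together cover half the area of their parent):

```latex
T(H, W) \le HW + \frac{HW}{2} + \frac{HW}{4} + \cdots \le 2HW = O(HW).
```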

I hope you enjoyed this algorithm!

#include <iostream>
#include <cstdio>
#include <algorithm>
#include <vector>
#include <functional>
#include <string>
using namespace std;

template<class T>
void amin(T &a, T b) { if (a > b) a = b; }

int main() {
        int h, w;
        scanf("%d%d", &h, &w);
        vector<int> u(h), v(w);
        for (int i = 0; i < h; i ++) scanf("%d", &u[i]);
        for (int i = 0; i < w; i ++) scanf("%d", &v[i]);
        function<long long (int, int)> get_val = [&](int i, int j) {
                return (long long) (u[i] + j + 1) ^ (long long) (v[j] + i + 1);
        };
        //Hirschberg
        vector<pair<int, int>> pos; //(column, row at which the path moves right from that column)
        function<void (int, int, int, int)> Hirschberg = [&](int li, int lj, int ri, int rj) {
                int mid = (lj + rj) / 2;
                int height = ri - li + 1;
                if (rj - lj < 1) return;
                if (height == 1) {
                        pos.emplace_back(mid, li);
                        Hirschberg(li, lj, li, mid);
                        Hirschberg(li, mid + 1, li, rj);
                        return;
                }
                //left half: DP from the upper left
                int w_left = mid - lj + 1;
                vector<vector<long long>> dp(2, vector<long long> (height));
                dp[0][0] = get_val(li, lj);
                for (int i = 1; i < height; i ++) {
                        dp[0][i] = dp[0][i - 1] + get_val(li + i, lj);
                }
                bool f = 1;
                for (int j = 1; j < w_left; j ++) {
                        for (int i = 0; i < height; i ++) {
                                dp[f][i] = 1LL << 60;
                        }
                        for (int i = 0; i < height; i ++) {
                                long long val = get_val(li + i, lj + j);
                                amin(dp[f][i], dp[!f][i] + val);
                                if (i - 1 >= 0) amin(dp[f][i], dp[f][i - 1] + val);
                        }
                        f = !f;
                }
                f = !f;
                vector<long long> m1(height);
                for (int i = 0; i < height; i ++) {
                        m1[i] = dp[f][i];
                }
                //right half: DP from the lower right
                int w_right = rj - mid;
                dp[0][0] = get_val(ri, rj);
                for (int i = 1; i < height; i ++) {
                        dp[0][i] = dp[0][i - 1] + get_val(ri - i, rj);
                }
                f = 1;
                for (int j = 1; j < w_right; j ++) {
                        for (int i = 0; i < height; i ++) {
                                dp[f][i] = 1LL << 60;
                        }
                        for (int i = 0; i < height; i ++) {
                                long long val = get_val(ri - i, rj - j);
                                amin(dp[f][i], dp[!f][i] + val);
                                if (i - 1 >= 0) amin(dp[f][i], dp[f][i - 1] + val);
                        }
                        f = !f;
                }
                f = !f;
                vector<long long> m2(height);
                for (int i = 0; i < height; i ++) {
                        m2[height - i - 1] = dp[f][i];
                }
                //pick the row minimizing (cost from upper left) + (cost from lower right)
                long long mi = 1LL << 60;
                int res = -1;
                for (int i = 0; i < height; i ++) {
                        long long sum = m1[i] + m2[i];
                        if (sum < mi) {
                                mi = sum;
                                res = i;
                        }
                }
                res += li;
                pos.emplace_back(mid, res);
                Hirschberg(li, lj, res, mid);
                Hirschberg(res, mid + 1, ri, rj);
        };
        Hirschberg(0, 0, h - 1, w - 1);
        //restore the path from the recorded crossing positions
        sort(pos.begin(), pos.end());
        int y = 0, x = 0;
        string ans = "";
        while (true) {
                if (x == w - 1) {
                        while (y != h - 1) {
                                ans += "D";
                                y ++;
                        }
                        break;
                }
                if (pos[x].second == y) {
                        x ++;
                        ans += "R";
                } else {
                        y ++;
                        ans += "D";
                }
        }
        cout << ans << endl;
        return 0;
}



By KokiYmgch, history, 3 years ago,

This is an article on how to compute the chromatic number of a graph efficiently.

The original article was written in Japanese here.

http://www.learning-algorithms.com/entry/2018/01/27/235959

#### What is "chromatic number?"

Chromatic number is the minimum number of colors to color all the vertices, so that no two adjacent vertices have the same color.

If you look at a tree, for instance, you can obviously color it in two colors, but not in one color, which means a tree has the chromatic number 2.

#### Chromatic polynomial

In order to discuss the chromatic number, I introduce the chromatic polynomial first.

The chromatic polynomial P(K) is the number of ways to properly color a graph using at most K colors. Let's take a tree with n ( ≥ 2) vertices as an example. Root the tree anywhere and color the root with any of the K colors. After that, color the remaining vertices in order from the root; each one only has to avoid the color of its already-colored parent, which leaves K - 1 choices. This gives the following chromatic polynomial for a tree.

P(K) = K(K - 1)^(n - 1) ... (1)
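To convince yourself of (1), you can compare it with a brute-force count of proper colorings on a tiny tree (this checker is my addition, not part of the original article):

```cpp
#include <functional>
#include <vector>
using namespace std;

// Count proper K-colorings by trying every assignment, pruning as we go.
// g is an adjacency list; exponential in n, so only for tiny graphs.
long long BruteForceColorings(const vector<vector<int>> &g, int K) {
        int n = (int) g.size();
        long long total = 0;
        vector<int> color(n, -1);
        function<void (int)> rec = [&](int u) {
                if (u == n) { total ++; return; }
                for (int c = 0; c < K; c ++) {
                        bool ok = true;
                        //check only already-colored neighbors (v < u)
                        for (int v : g[u]) if (v < u && color[v] == c) ok = false;
                        if (ok) {
                                color[u] = c;
                                rec(u + 1);
                        }
                }
        };
        rec(0);
        return total;
}
```

For the path 0-1-2 (a tree with n = 3), formula (1) gives P(3) = 3 * 2^2 = 12 and P(2) = 2 * 1 = 2, matching the brute force.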

Now, we are ready to calculate the chromatic number.

#### Compute the chromatic number

Chromatic number is computed in the following way:

1. Find the chromatic polynomial P(K)
2. Evaluate the polynomial in the ascending order, K = 1, 2, ..., n
3. When the value becomes positive for the first time, that value of K is the chromatic number

Let's compute the chromatic number of a tree again now. Using (1), we can tell P(1) = 0 and P(2) = 2 > 0, and thus the chromatic number of a tree is 2.

I describe below how to compute the chromatic number of any given simple graph.

As I mentioned above, we need to know the chromatic polynomial first. If you can divide all the vertices into K independent sets, you can color the graph with K colors, because no two vertices in an independent set are adjacent.

Let V be the set of vertices of the graph. For any subset U ⊆ V, define ind(U) as the number of subsets of U which are independent sets.

The number of ways to choose K independent subsets of U (ordered, with repetition) is ind(U)^K; therefore, the inclusion-exclusion principle allows you to calculate the chromatic polynomial in the following way:

P(K) = \sum_{U ⊆ V} (-1)^{n - |U|} ind(U)^K

The graph is K-colorable exactly when this value is positive.

The implementation is really simple. ind(U) can be calculated by bitDP.

The time complexity is O(2^n * n).

int ChromaticNumber(const vector<int> &g) {
        int n = g.size();
        if (n == 0) return 0;
        //randomly chosen large prime
        const int modulo = 1077563119;
        int all = 1 << n;
        vector<int> ind(all), s(all);
        for (int i = 0; i < all; i ++) s[i] = ((n - __builtin_popcount(i)) & 1 ? -1 : 1);
        ind[0] = 1;
        for (int i = 1; i < all; i ++) {
                int ctz = __builtin_ctz(i);
                ind[i] = ind[i - (1 << ctz)] + ind[(i - (1 << ctz)) & ~g[ctz]];
                if (ind[i] >= modulo) ind[i] -= modulo;
        }
        //compute the chromatic number (= \sum (-1)^{n - |i|} * ind(i)^k)
        for (int k = 1; k < n; k ++) {
                long long sum = 0;
                for (int i = 0; i < all; i ++) {
                        long long cur = ((s[i] * (long long) ind[i]) % modulo);
                        s[i] = cur;
                        sum += cur;
                }
                if (sum % modulo != 0) return k;
        }
        return n;
}


That's it! Thank you for reading.
