### Errichto's blog

By Errichto, 15 months ago,

This is my 100th CF blog!

This is a list of techniques that run in $O(\sqrt n)$ time. Watch the lecture (with timestamps!): https://youtu.be/BJhzd_VG61k

1. Square root decomposition — split the sequence into blocks of fixed size.
2. Splitting objects (e.g. vertices) into light and heavy.
3. Square root decomposition by the time of queries & rebuilding the structure.
4. Mo's algorithm — processing queries in a suitable order and maintaining the answer by inserting/erasing elements at the ends of the current range. https://cp-algorithms.com/data_structures/sqrt_decomposition.html
5. Strings — if the sum of lengths is $S$ then there are at most $\sqrt{S}$ distinct lengths.
6. Birthday paradox & baby-step giant-step. See P4 and P6 here, and see https://cp-algorithms.com/algebra/discrete-log.html.

P1. 398D - Instant Messanger
P2. 220B - Little Elephant and Array
P3. 86D - Powerful array (actually, skip this one because it's boring)
P4. Count triangles in a graph, i.e. cycles of size 3.
P5. Given $N$ strings with total length $S$, find a pair where one string is a substring of the other, in $O(S \cdot \sqrt S)$.

Homework (will be discussed in a second stream soon)
P6. You are given $N$ positive integers $a_1, a_2, \ldots, a_N$ and a target value $X$. Check if there is a subset with sum equal to $X$, in $O(X \cdot \sqrt S)$ where $S = \sum a_i$.
P7. 13E - Holes
P8. 455D - Serega and Fun
P9. You're given a sequence $a_1, a_2, \ldots, a_n$ ($1 \leq a_i \leq n$). It describes a functional graph: there is a directed edge from $i$ to $a_i$. Handle updates `U i x` (change $a_i$ to $x$) and answer queries `Q v` — if we start in $v$ and keep following edges, after how many steps do we get to an already visited node?

And two problems from recent contests: 1580C - Train Maintenance and ABC 219 G Propagation.

Thanks to krismaz for letting me use his list of techniques and problems.

 » 15 months ago, # |   0 Are there any easy problems as well? These look out of reach.
•  » » 15 months ago, # ^ |   +41 Problems involving sqrt decomposition tend to have a problem rating of >2000, so I highly doubt that there are easy problems for this topic.
•  » » 2 months ago, # ^ |   0 The first 4 problems from the Range Queries section in CSES are very easy to begin with: https://cses.fi/problemset/task/1646
 » 15 months ago, # | ← Rev. 2 →   0 Knowing Mo's Algorithm finishes P2 and P3.
 » 15 months ago, # | ← Rev. 2 →   +8 Also see this nice Atcoder problem.
 » 15 months ago, # |   +9 Good problem: 1580C - Train Maintenance
 » 15 months ago, # |   +6 Coolest sqrt decomposition I've seen so far: https://codeforces.com/problemset/problem/13/E
 » 15 months ago, # | ← Rev. 3 →   +32 A few good problems based on heavy-set/light-set SQRT decomposition: … A few good problems based on SQRT decomposition on queries: https://codeforces.com/contest/342/problem/E (a standard centroid decomposition problem, but it can also be solved with query SQRT decomposition), https://www.codechef.com/problems/COW3E. A few good problems based on Mo's algorithm: …
•  » » 15 months ago, # ^ |   +8 I don't see why you would want to use square root decomposition on this: https://www.codechef.com/problems/COW3E. It can be solved in $O(nm)$ time with 2D prefix sums; square root decomposition would only make it more complicated. Also, for good practice problems, the USACO guide has a nice list of SQRT decomposition problems: https://usaco.guide/plat/sqrt?lang=cpp#set-a
 » 15 months ago, # |   0 Some really cool problems: https://toph.co/p/fabby-and-her-wedding-dress (heavy/light trick), Problem F - Frank Sinatra (Mo on trees + SQRT decomposition), https://codeforces.com/problemset/problem/348/C (heavy/light trick)
 » 15 months ago, # |   +10 A heavy/light example too: 1545F - AquaMoon and Potatoes
 » 15 months ago, # | ← Rev. 2 →   -48 "Strings — if the sum of lengths is $S$ then there are at most $\sqrt{S}$ distinct lengths". I think this is wrong. For example, with $S = 15$ the strings can have lengths 1, 2, 3, 4, 5: there are 5 distinct lengths, but $\sqrt{15}$ is less than 4.
•  » » 15 months ago, # ^ | ← Rev. 2 →   +50 It's just the order of magnitude. The exact limit should be $\sqrt{2S}$.
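Sketch of the argument behind that bound: if the distinct lengths are $l_1 < l_2 < \dots < l_k$, then

$$S \ \ge\ l_1 + l_2 + \dots + l_k \ \ge\ 1 + 2 + \dots + k \ =\ \frac{k(k+1)}{2},$$

so $k \le \sqrt{2S}$.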
 » 15 months ago, # |   0 A problem on the square root heuristic (not sure if it falls under square root decomposition, but we do decompose into square-root-sized buckets): https://www.codechef.com/JAN19A/problems/PARRTY. Codeforces (2100-rated): Time to Raid Cowavans. Codeforces (2400-rated): Mr. Kitayuta's Colorful Graph.
 » 15 months ago, # | ← Rev. 4 →   0 Update: I managed to find the bug and remove the WA (I was overcounting), but now I am getting TLE on test 63. I also tried various block sizes from 300-700 and it gave TLE on various tests. I am sure that my add, delete and update functions work in O(B) time. I don't know how to remove the TLE now. I also changed set to unordered_set but that failed too :(. Errichto, if you can shed some light on how to select B, and comment on whether I have applied the heavy/light trick correctly, that would be helpful :) Thanks and sorry for tagging.

Code (reconstructed: the forum formatting stripped the template arguments and the exact value of B):

```cpp
#include <bits/stdc++.h>
using namespace std;
#define fast ios::sync_with_stdio(0); cin.tie(0); cout.tie(0);
#define ll long long
#define all(v) v.begin(),v.end()
#define F first
#define S second
#define pb push_back
#define mp make_pair
#define pi pair<int,int>
#define pii pair<ll,ll>
#define REP(i,n) for(int i=0;i<n;i++)

int B = 450; // block size (tried values 300-700)
vector<int> H(50005,0), on(50005,0);
unordered_set<int> edges[50005], heavy[50005];

int deg(int x){ return (int)(edges[x].size()); }

void upd(int u){
    on[u] = 1 - on[u];
    // D1 + D2 + D3 + ... + DN = 2*M (sum of degrees is twice the number of edges).
    // So the number of Di >= sqrt(2*M) is O(sqrt(2*M)).
    // Hence there can be at most sqrt(2*M) heavy vertices.
    for(int x : heavy[u]){
        H[x] += (on[u] > 0 ? 1 : -1);
    }
}

void add(int u,int v){
    edges[u].insert(v); edges[v].insert(u);
    if(deg(u) == B){
        for(int x : edges[u]){
            H[u] += on[x];
            heavy[x].insert(u);
            if(deg(x) >= B){ heavy[u].insert(x); }
        }
    }
    if(deg(v) > B){ heavy[u].insert(v); H[v] += on[u]; }
    if(deg(v) == B){
        for(int x : edges[v]){
            H[v] += on[x];
            heavy[x].insert(v);
            if(deg(x) >= B){ heavy[v].insert(x); }
        }
    }
    if(deg(u) > B){ heavy[v].insert(u); H[u] += on[v]; }
}

void del(int u,int v){
    if(deg(u) == B){ for(int x : edges[u]){ heavy[x].erase(u); } H[u] = 0; }
    if(deg(v) == B){ for(int x : edges[v]){ heavy[x].erase(v); } H[v] = 0; }
    if(deg(u) > B){ heavy[v].erase(u); H[u] -= on[v]; }
    if(deg(v) > B){ heavy[u].erase(v); H[v] -= on[u]; }
    edges[u].erase(v); edges[v].erase(u);
}

int calc(int u){
    if(deg(u) >= B) return H[u];
    int sum = 0;
    // calculate the answer for light nodes directly
    for(int x : edges[u]) sum += on[x];
    return sum;
}

void run_case(){
    int n,m,q,o,u,v,i;
    cin >> n >> m >> q >> o;
    for(i=0;i<o;i++){ cin >> u; --u; on[u]=1; }
    for(i=0;i<m;i++){ cin >> u >> v; --u,--v; add(u,v); }
    while(q--){
        char op; cin >> op;
        if(op=='O'){ cin >> u; upd(u-1); }
        if(op=='F'){ cin >> u; upd(u-1); }
        if(op=='A'){ cin >> u >> v; add(u-1,v-1); }
        if(op=='D'){ cin >> u >> v; del(u-1,v-1); }
        if(op=='C'){ cin >> u; cout << calc(u-1) << "\n"; }
    }
}

int main(){
    fast;
    run_case();
    return 0;
}
```
•  » » 15 months ago, # ^ |   +3 I don't fully understand your code. For example, you sometimes check deg > B and sometimes deg >= B. Sets and unordered sets are slow. Change heavy to a vector. Let's see how to handle erasing: 1) The first two heavy[x].erase(...) you can just skip. Whenever iterating heavy neighbours with for(int x : heavy[u]) in the upd(u) function, remove all $x$ with too small a degree. 2) The other two occurrences, heavy[v].erase(u) and heavy[u].erase(v), can be done linearly by iterating the vector. You might want to erase the small-degree elements there as well. Alternatively, see the author implementation 5926635
 » 15 months ago, # |   0 Is there a place to submit P9?
 » 15 months ago, # | ← Rev. 4 →   +13 Another one: 896E - Welcome home, Chtholly
 » 15 months ago, # |   +10 https://www.codechef.com/problems/GERALD07 (hint: Mo's algorithm with DSU rollbacks)
 » 15 months ago, # |   0 Anyone getting TLE on P1? I'm updating light and heavy nodes and getting TLE on test 32. My submission: 135313235
•  » » 15 months ago, # ^ |   +5 I don't know if this problem can be accepted with sets. See my discussion with DontLookBack a few comments above.
•  » » » 15 months ago, # ^ |   0 Thanks! Changing the sets to vectors instantly gave AC. I forgot that the size of the heavy vector would also be small, so deleting would be fast.
 » 14 months ago, # |   0 Is there any place I can submit P4 (and the other problems without an attached link)?
•  » » 14 months ago, # ^ |   0 I'm not aware of any such place :(
 » 13 months ago, # |   +1 A really nice recent problem directly based on this blog: 1619H - Permutation and Queries
 » 8 months ago, # |   0 Errichto, when is the second stream coming out? Also, can you provide hints for P9?
•  » » 8 months ago, # ^ | ← Rev. 2 →   +6 P9 hint: rebuild the graph once every $\sqrt{Q}$ queries. Also, it might help to solve the following problem: given a functional graph and two starting nodes $A$ and $B$, find the first node where they can meet (i.e. minimize the sum of distances from $A$ and $B$ to the target). P9 full solution: There are $Q$ events (queries and updates). Split them into blocks of size $\sqrt{Q}$. At the beginning of every block, rebuild the graph and keep only the edges that will not change during this block. You get an incomplete (disconnected) graph, where every node eventually leads to a cycle or to a dead-end (out-degree 0). There are $O(\sqrt{Q})$ dead-ends. There always exists an edge out of such a dead-end, but the target node depends on the exact query time. When you get a query, you need to repeatedly jump in $O(1)$ to the next cycle or dead-end (from which there is some current edge). If you ever get to an already visited connected component, you need LCA to find the first repeated node. Actually, blocks of size $\sqrt{Q}/\log{Q}$ or $\sqrt{Q} \cdot \log{Q}$ might be better for the time complexity, depending on your LCA algorithm. I'm not planning a second lecture, sorry.
•  » » » 8 months ago, # ^ |   0 Thanks a lot! I think I have got the solution, but I am getting a different block size for the best time complexity. Can you please go through the below once? Repeating what I have understood: let us rebuild after every $S$ queries. Assuming we find the LCA using binary lifting, the total rebuilding takes $O(\frac{Q}{S} \cdot N \log N)$. Now, for the current block of queries there are $O(S)$ dead-ends. For each query of the second type, it might take us $O(S \log N)$ time to get the first repeated node, where $S$ comes from the number of dead-ends and $\log N$ from the LCA. Hence for the $O(S)$ queries of the second type in a block, it will take us $O(S^2 \log N)$. To minimize the total complexity, we should equate $S \cdot S \cdot S = Q \cdot N$, i.e. a block size of $(QN)^{1/3}$.
•  » » » » 8 months ago, # ^ |   +3 1) You don't need the LCA for every dead-end, just for the one where you enter an already visited connected component (so, an already visited dead-end). 2) There are $Q/S$ blocks, so the final product is $S \cdot S \cdot \frac{Q}{S}$ instead of $S \cdot S \cdot S$.
 » 8 months ago, # |   0 Another one: Magical Subsequence
 » 7 months ago, # |   0 For 1580C - Train Maintenance, I read the editorial but cannot quite understand how handling the trains satisfying $x_i + y_i \le \sqrt{m}$ can be done in $O(\sqrt{m})$. If we create an array of length $a$ for each $a \le \sqrt{m}$, I agree that updating each array across each day takes $O(a)$, but when you sum over the arrays, $1+2+\dots+\sqrt{m} = O(m)$. Am I misunderstanding which arrays to create? Can someone explain in more detail what each array contains and how all the arrays get updated (or how information is extracted) on each day?