### gen's blog

By gen, 6 months ago, translation,

• +101

 » 6 months ago, # | ← Rev. 3 →   +137 My solutions: A: https://ideone.com/X8Ugwu B: https://ideone.com/Qddkde C (LCT): https://ideone.com/l7dTFa (similar to https://dmoj.ca/problem/wac4p7) C (D&C): https://ideone.com/EBWCgV
•  » » 6 months ago, # ^ |   0 How do you use so many functions and manage to get under the TLE?
•  » » » 6 months ago, # ^ |   +25 The time complexity of your code has little to do with the number of functions you use. You should look up basic algorithm complexity analysis (say, an intro to Big O notation) to learn how to analyze the runtime of a program.
•  » » 4 months ago, # ^ | ← Rev. 2 →   +6 Sorry to disturb you, but could you explain how the LCT in problem C works? I don't understand your solution. First I compute the longest suffix of edges that can form a bipartite graph, and then I try to add edges from the prefix and remove edges from the suffix to compute the array $Last$ (the same as $last[]$ in the tutorial). But it doesn't work, because some cycles are ignored (maybe among them there is a cycle of odd length). Thank you very much.
•  » » » 4 months ago, # ^ |   +6 First I concatenate the edges array with itself, doubling its length. For each $l\in [0,M-1]$ I compute the max $r$ such that edges $l\ldots r$ are bipartite. Maintain the maximum spanning tree of edges $l\ldots r$ (where the weight of an edge is just its index). When you increment $l$, you remove edge $l$ only if it is currently present in the spanning tree. When you increment $r$, you add edge $r$ to the spanning tree and remove the edge of minimum weight along the path between the two endpoints of edge $r$ if they were previously connected. You should check the editorial for the other problem I linked if what I said about maintaining the MST doesn't make sense. :)
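A minimal sketch of that two-pointer framework (all names here are my own). To keep it self-contained, the maximum-spanning-tree/LCT structure is replaced by a naive BFS bipartiteness recheck; what makes the scan valid is the monotonicity noted above: a sub-window of a bipartite edge window is bipartite, so the maximal $r$ never decreases as $l$ grows.

```cpp
#include <vector>
#include <queue>
#include <utility>
using namespace std;

// True iff the graph on n nodes with the given edges is bipartite (BFS 2-coloring).
bool isBipartite(int n, const vector<pair<int,int>>& es) {
    vector<vector<int>> adj(n);
    for (auto [u, v] : es) { adj[u].push_back(v); adj[v].push_back(u); }
    vector<int> color(n, -1);
    for (int s = 0; s < n; s++) {
        if (color[s] != -1) continue;
        color[s] = 0;
        queue<int> q; q.push(s);
        while (!q.empty()) {
            int u = q.front(); q.pop();
            for (int v : adj[u]) {
                if (color[v] == -1) { color[v] = color[u] ^ 1; q.push(v); }
                else if (color[v] == color[u]) return false; // odd cycle found
            }
        }
    }
    return true;
}

// maxR[l] = largest r (index into the doubled edge array) such that edges
// l..r form a bipartite graph; r < l + m so no edge is used twice.
vector<int> maxBipartiteR(int n, const vector<pair<int,int>>& edges) {
    int m = edges.size();
    vector<pair<int,int>> d = edges;                  // concatenate the edge
    d.insert(d.end(), edges.begin(), edges.end());    // array with itself
    vector<int> maxR(m);
    int r = -1;
    for (int l = 0; l < m; l++) {
        if (r < l - 1) r = l - 1;
        // Naive recheck: the real solution instead maintains the maximum
        // spanning tree of the window so each step is polylogarithmic.
        while (r + 1 < l + m &&
               isBipartite(n, vector<pair<int,int>>(d.begin() + l, d.begin() + r + 2)))
            r++;
        maxR[l] = r;
    }
    return maxR;
}
```

For a triangle (3 nodes, edges 0-1, 1-2, 2-0) every window of two consecutive edges is bipartite but no window of three is, so `maxBipartiteR` returns `{1, 2, 3}` in doubled-array indices.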
•  » » » » 4 months ago, # ^ |   +6 I see it now. Thank you~
 » 6 months ago, # | ← Rev. 2 →   +22 How do we maintain DSU for C to check if graph is bipartite?
•  » » 6 months ago, # ^ |   +17 Initially set all nodes to the same color. Say we want to add an edge from $u$ to $v$. Check whether $u$ and $v$ have the same root (i.e. are connected). If they do, make sure their colors are different. If they have different roots, say $p$ and $q$ respectively, add an edge from $q$ to $p$ in the DSU (parent[q] = p); if the colors of $u$ and $v$ are the same, put an additional mark on that edge saying to flip the color. When you want to get a node's color, walk the edges up to the root, flipping the color along the way whenever an edge carries the flip mark.
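A minimal sketch of this colored DSU (my own naming; this version adds path compression, folding the flip marks into a parity relative to the root as paths are shortened):

```cpp
#include <numeric>
#include <utility>
#include <vector>

// DSU that tracks, for each node, its color parity relative to its parent.
// addEdge(u, v) returns false iff the edge u-v would close an odd cycle,
// i.e. the graph would stop being bipartite.
struct BipartiteDSU {
    std::vector<int> parent, parity; // parity[v]: color of v XOR color of parent[v]
    BipartiteDSU(int n) : parent(n), parity(n, 0) {
        std::iota(parent.begin(), parent.end(), 0);
    }
    // Returns {root of v, color of v relative to that root},
    // compressing the path and accumulating parities along it.
    std::pair<int,int> find(int v) {
        if (parent[v] == v) return {v, 0};
        auto [r, p] = find(parent[v]);
        parent[v] = r;
        parity[v] ^= p;
        return {r, parity[v]};
    }
    bool addEdge(int u, int v) {
        auto [ru, pu] = find(u);
        auto [rv, pv] = find(v);
        if (ru == rv) return pu != pv;   // already connected: colors must differ
        parent[rv] = ru;
        parity[rv] = pu ^ pv ^ 1;        // mark chosen so u and v get opposite colors
        return true;
    }
};
```

On a triangle, the first two edges succeed and the third reports the odd cycle; on an even cycle all edges succeed.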
•  » » 6 months ago, # ^ |   +8 Look here
 » 6 months ago, # |   0 Can someone give me the solution to 1386A — Colors, Subtask 1 ($N \leq 64$), in Python please?
 » 6 months ago, # |   +5 In problem C in subtask $4$ there is $l_i \leq 500$ in editorial but $l_i \leq 200$ in the problem statement. Which variant is correct?
•  » » 6 months ago, # ^ |   +3 Sorry, it should be 200, I will fix it later.
 » 6 months ago, # | ← Rev. 2 →   -8 In problem A, subtask 3, 2*sqrt(1000) = 2 * 33 = 66, and 66 > 64, the query limit in that problem. So why is 2*sqrt(n) OK? I know you can apply that solution 3 times and get 3*cbrt(n), which is good enough (it uses 30 queries or so), but it is harder to code and has a lot of case handling. UPD: Sorry, sqrt(1000) <= 32.
 » 6 months ago, # |   -18 Bro, please share the test cases. It would be a great help.
 » 6 months ago, # |   +18 For Joker subtask 5, I think the post should say: If we choose $B = M \div \sqrt{Q}$ we get the following runtime (rather than $B = M \sqrt{Q}$); otherwise the numbers don't add up. $B$ is the block size, and there are $M \div B$ blocks. The total number of operations is $M$ per block for the right pointer plus $QB$ in total for the left pointer, i.e. $M^2 \div B + QB$ overall. Equating the two terms so that neither dominates the other gives $B = M \div \sqrt{Q}$. And to expand slightly on the algorithm: at the start of each block, initialize the DSU with all edges to the left of the block's leftmost edge. Sort the queries by $r$ in descending order. For each query, add the new edges on the right, then add the appropriate edges on the left, answer the query, and immediately roll back all of those left edges, so that the left pointer is once again at the very start of the block. Don't bother trying to remove individual left edges (and struggling to do it without also rolling back right edges): just remove them all after each query.
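The "roll back all left edges after each query" step above relies on a DSU whose unions can be undone. A minimal sketch (my own naming): union by size with no path compression, recording each write so it can be reverted to a checkpoint. The absence of path compression is what makes `find` cost $O(\log N)$ per call.

```cpp
#include <utility>
#include <vector>
using namespace std;

// DSU without path compression, so every union is a constant number of
// writes that can be undone in reverse order.
struct RollbackDSU {
    vector<int> p, sz;
    vector<pair<int,int>> hist; // (attached root, old size of the new root)
    RollbackDSU(int n) : p(n), sz(n, 1) {
        for (int i = 0; i < n; i++) p[i] = i;
    }
    int find(int v) { while (p[v] != v) v = p[v]; return v; }
    bool unite(int a, int b) {
        a = find(a); b = find(b);
        if (a == b) return false;
        if (sz[a] < sz[b]) swap(a, b); // union by size
        hist.push_back({b, sz[a]});
        p[b] = a;
        sz[a] += sz[b];
        return true;
    }
    int checkpoint() const { return (int)hist.size(); }
    void rollback(int to) {            // undo unions back to a checkpoint
        while ((int)hist.size() > to) {
            auto [b, s] = hist.back(); hist.pop_back();
            sz[p[b]] = s;
            p[b] = b;
        }
    }
};
```

In the block algorithm, the checkpoint is taken after the right-pointer unions for a query, the left edges are united on top of it, and `rollback` restores that state before the next query.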
•  » » 6 months ago, # ^ | ← Rev. 6 →   0 Further, I think it should be possible to use path compression to improve the complexity somewhat, although I'm not sure it will help in practice. The original reason for the $\log N$ factor is that path compression is not performed in the DSU because it "is impossible" with rollbacks. Of course, there's nothing preventing one from implementing rollbacks of path compression, but the question is whether such revertible path compression actually improves efficiency. I can't answer that. But for this particular problem, we can still improve the complexity by using path compression in the part of the solution that doesn't require rollbacks. Notice that the DSU operations corresponding to the right-hand-side edges are not disturbed by any rollbacks. (When a new block starts, we can rebuild the whole DSU from scratch; it is not necessary to roll back the previous block's right-hand-side operations.) There are $M$ unions per block from the right-hand side. We can do those unions with path compression, for a total run time of $M \alpha(N)$. Assuming revertible path compression doesn't help, we can do the left-hand-side operations without further path compression, at $\log N$ time per operation. The total time is then $M^2 \div B\, \alpha(N) + Q B \log N$ (down from $(M^2 \div B + Q B) \log N$), and equating the two terms so that neither dominates the other gives the optimal $B = M \div \sqrt{Q \frac{\log N}{\alpha(N)}} \approx M \div \sqrt{Q \log N}$, for a grand total run time of $\mathrm{O}\left(\mathrm{sort}\ Q + M \sqrt{Q\, \alpha(N) \log N}\right)$.
•  » » » 6 months ago, # ^ |   +3 You can do the LHS operations with path compression too, making the complexity $M\sqrt Q\alpha(N)$. Spoiler (71 pts):

```cpp
int N;
struct DSU {
    int e[MX], off[MX];
    void clr() { F0R(i,N) e[i] = -1, off[i] = 0; }
    void prop(int x) {
        if (e[x] < 0) return;
        prop(e[x]);
        off[x] ^= off[e[x]];
        e[x] = (e[e[x]] < 0 ? e[x] : e[e[x]]);
    }
    pi get(int x) { prop(x); return {e[x] < 0 ? x : e[x], off[x]}; }
    bool unite(int _x, int _y, int z = 0) { // union by size
        pi x = get(_x), y = get(_y);
        y.s ^= z;
        if (x.f == y.f) {
            if (x.s == y.s) return 0;
            return 1;
        }
        if (e[x.f] > e[y.f]) swap(x, y);
        e[x.f] += e[y.f];
        e[y.f] = x.f;
        off[y.f] = x.s ^ y.s ^ 1;
        return 1;
    }
};
DSU X, Y;
const int BLOCK = 4500;
vector<array<int,3>> query[MX];
int _cnt = 0, vis[MX];
void reset(int x) {
    if (vis[x] != _cnt) {
        vis[x] = _cnt;
        Y.e[x] = -1;
        Y.off[x] = 0;
    }
}
int main() {
    setIO();
    int M, Q;
    re(N, M, Q);
    N++;
    vpi ed(M);
    re(ed);
    F0R(i,Q) {
        int l, r;
        re(l, r);
        l--, r--;
        query[l/BLOCK].pb({r, l, i});
    }
    vector<int> ans(Q);
    F0R(i,MX) {
        if (!sz(query[i])) continue;
        X.clr();
        sort(rall(query[i]));
        bool bad = 0;
        int nex = M-1;
        F0R(l,i*BLOCK) if (!X.unite(ed[l].f, ed[l].s)) bad = 1;
        trav(t, query[i]) {
            while (nex > t[0] && !bad) {
                if (!X.unite(ed[nex].f, ed[nex].s)) bad = 1;
                nex--;
            }
            bool BAD = bad;
            _cnt++;
            FOR(j,i*BLOCK,t[1]) {
                if (BAD) break;
                pi A = X.get(ed[j].f), B = X.get(ed[j].s);
                reset(A.f), reset(B.f);
                if (!Y.unite(A.f, B.f, A.s ^ B.s)) BAD = 1;
            }
            if (BAD) ans[t[2]] = 1;
        }
    }
    F0R(i,Q) {
        if (ans[i]) ps("YES");
        else ps("NO");
    }
}
```
•  » » » » 6 months ago, # ^ |   0 Ah, the trick is to have the LHS-within-block build a DSU over entire sets from the RHS+previous-LHS-blocks rather than over individual nodes. The RHS sets don’t change while moving the LHS, so the LHS can use a separate DSU tree and can be wiped clean after each query without any granular rollbacks. Nice!
•  » » » » » 5 months ago, # ^ |   0 What does LHS mean?
•  » » » » » » 5 months ago, # ^ |   0 Left-hand side. Or simply left, as opposed to right. In this case, I'm talking about the disjoint-set union formed by the streets whose indices are smaller than the patrolled streets.
•  » » 6 months ago, # ^ |   +3 Thanks, fixed!
 » 6 months ago, # |   0 Can someone explain the strategy when k is odd, in Problem A? thx
•  » » 6 months ago, # ^ |   +3 When k is odd, do a query of length k/2, and the problem size becomes at most k/2+1, which is acceptable. BTW the solution misses a key point: let the function begin give the start-point index; then begin(n)=n/2+1-begin(n/2) when n is even, and begin(n)=n/2+2-begin(n/2+1) when n is odd.
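The recurrence above can be written directly as a tiny recursive function (my own naming; the base case begin(1) = 1 is my assumption, chosen because it reproduces the worked values further down the thread: begin(3) = 2, begin(4) = 2, begin(7) = 3).

```cpp
// Start-point index of the first query for problem size n, per the
// recurrence in the comment above. Base case begin(1) = 1 is assumed.
int startPoint(int n) {
    if (n == 1) return 1;
    if (n % 2 == 0) return n / 2 + 1 - startPoint(n / 2);
    return n / 2 + 2 - startPoint(n / 2 + 1);
}
```

For example, startPoint(7) = 7/2 + 2 - startPoint(4) = 5 - 2 = 3, matching the "begin(7)=5-2=3" computation below.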
•  » » » 6 months ago, # ^ |   0 Could you explain more carefully how to use the strategy for k using colors [1, k] to construct a new strategy for 2k+1 using colors [1, 2k+1]?
•  » » » » 5 months ago, # ^ |   0 It's not using strategy k to construct 2k+1, but using strategy k and strategy k+1 to construct 2k+1. For example, the strategy for 3 is 2 -> 3 -> 1 and the strategy for 4 is 2 -> 4 -> (1 or 3). Then strategy 7 is: first calculate the start point, begin(7)=5-2=3, so we do query 3 -> 6 first (6=3+(7/2)). If res==0, then the result is in [4, 7], and the problem reduces to strategy 4, which is 6 -> 1 -> (5 or 7). If res==1, then the range is [1, 3], and the problem reduces to strategy 3, which is 6 -> 5 -> 7.
•  » » » » 5 months ago, # ^ |   0 You may refer to my solution: 89211392
•  » » » » » 5 months ago, # ^ |   0 Got it. But I still have a question: if res == 0, why won't we use a color twice or more?