### mazihang2022's blog

By mazihang2022, 11 days ago,

We are really sorry that C is tough and E is easier than D. But anyway, we hope you can enjoy the problems themselves and learn something from them.

1806A - Walking Master

Solution
Code

1806B - Mex Master

Solution
Code

1806C - Sequence Master

Solution
Code

1806D - DSU Master

Solution
Code

1806E - Tree Master

Solution
Code

1806F2 - GCD Master (hard version)

Solution
Code

• +117

 » 11 days ago, # |   +5 so fast editorial!
 » 11 days ago, # | ← Rev. 2 →   +7 Stuck for a long time on problem C. hardForces
•  » » 10 days ago, # ^ | ← Rev. 6 →   -34 
 » 11 days ago, # |   +11 Orz
 » 11 days ago, # |   +8 this editorial is so fast!
 » 11 days ago, # |   -8 In problem E,I just stored every pair of $(x,y)$ in a map and it would be neither TLE nor MLE. So I gave up in the end. It's a sad story.
•  » » 9 days ago, # ^ |   -18 You mean "EITHER", not "NEITHER"!!!
 » 11 days ago, # | ← Rev. 2 →   +87 I solved E with Mo's algorithm on tree. When I visit a pair $(u, v)$ in Mo's order, I already have most of the data for every node on the path from $u$ to $v$. To solve the problem, I only need to maintain the following: cnt_layer[d] — the number of touched nodes with depth $d$; layer_prod[d] — the product of the values of touched nodes with depth $d$. So when cnt_layer[d] equals 2, I add layer_prod[d] to the answer; otherwise, I remove it from the answer. The total complexity is still $O((n + q) \sqrt n)$.
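A minimal sketch of the add/remove step described above. The names `MoState`, `cnt_layer`, and `layer_prod` mirror the comment, but the Euler-tour/Mo's-ordering machinery is omitted, and node values are assumed nonzero so a removal can divide the product back out:

```cpp
#include <cstdint>
#include <utility>
#include <vector>

// Maintains, per depth d, how many path nodes are currently "touched"
// (cnt_layer[d]) and the product of their values (layer_prod[d]).
// answer accumulates layer_prod[d] over the depths that hold exactly
// two touched nodes (one from each endpoint's side of the path).
struct MoState {
    std::vector<int> depth;
    std::vector<int64_t> val;
    std::vector<int> cnt_layer;
    std::vector<int64_t> layer_prod;
    int64_t answer = 0;

    MoState(std::vector<int> d, std::vector<int64_t> v)
        : depth(std::move(d)), val(std::move(v)),
          cnt_layer(depth.size() + 1, 0),
          layer_prod(depth.size() + 1, 1) {}

    void add(int u) {
        int d = depth[u];
        if (cnt_layer[d] == 2) answer -= layer_prod[d]; // layer was counted
        layer_prod[d] *= val[u];
        if (++cnt_layer[d] == 2) answer += layer_prod[d];
    }

    void remove(int u) {
        int d = depth[u];
        if (cnt_layer[d] == 2) answer -= layer_prod[d];
        layer_prod[d] /= val[u]; // assumes val[u] != 0
        if (--cnt_layer[d] == 2) answer += layer_prod[d];
    }
};
```

In a full solution, `add`/`remove` would be the toggle callbacks of Mo's algorithm on the tree's Euler tour.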
•  » » 11 days ago, # ^ |   0 What does the "ranges::" thing do? Do you have some more problems using this technique?
•  » » » 11 days ago, # ^ |   +11 This is a new feature in C++20 built around functional programming ideas. It is interesting, but many of the functions won't be complete until C++23. You can go to cppreference for more details.
 » 11 days ago, # |   -46 F is so hard. I wrote about 100 lines but... I couldn't pass the sample. hardForces
 » 11 days ago, # |   +7 Passed D just by finding patterns.
•  » » 11 days ago, # ^ |   +6 Also, it feels a little weird that unordered_map can pass E but map cannot.
•  » » » 11 days ago, # ^ |   0 Your Accepted solution now gets TLE. I tried the same!!
•  » » » » 10 days ago, # ^ |   0 True, I wonder what happened too.
•  » » » » 10 days ago, # ^ |   0 I solved E again with a hand-written hash table: 198019367. Hope this solution won't get TLE again...
•  » » 11 days ago, # ^ | ← Rev. 2 →   +9 UPD: For some reason, all of my AC submissions got TLE on Test 10. Just write the hash table by hand and make it as big as possible, so that it has the best efficiency. 198019367. The time limit is strict for problem E, so I'll write down the two important optimizations I used for E here. 1) Reorder the node pair so that x < y, and store the answers in a per-node structure indexed by x (e.g. a vector indexed by x of maps keyed by y), instead of simply using an (unordered) map keyed by the pair, or by a long long where {x,y} is packed as (x<<32)+y. This reduces the time spent from O(log(xx*yy)) to O(1)+O(log(yy)). (xx is the range of x, and yy resp.)
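That first optimization can be sketched as follows (my reading of the comment; `PairMemo` is an illustrative name, and per-`x` `unordered_map`s stand in for whatever container the commenter actually used):

```cpp
#include <cstdint>
#include <unordered_map>
#include <utility>
#include <vector>

// Memoization table for unordered node pairs {x, y}.
// Normalizing to x < y halves the number of stored pairs, and indexing
// the outer dimension by x replaces one hash/compare on a packed 64-bit
// key with O(1) array indexing plus a lookup keyed only by y.
struct PairMemo {
    std::vector<std::unordered_map<int, int64_t>> memo;
    explicit PairMemo(int n) : memo(n) {}

    // Returns a pointer to the stored value, or nullptr if absent.
    int64_t* find(int x, int y) {
        if (x > y) std::swap(x, y); // {x,y} and {y,x} share one slot
        auto it = memo[x].find(y);
        return it == memo[x].end() ? nullptr : &it->second;
    }

    void store(int x, int y, int64_t value) {
        if (x > y) std::swap(x, y);
        memo[x][y] = value;
    }
};
```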
•  » » » 11 days ago, # ^ | ← Rev. 2 →   0 Yet now that I think about it, optimization 1) should make no improvement in the worst case, but without it my solution would get MLE/TLE. Does that imply there is still plenty of room for uphacking? XD
•  » » » 11 days ago, # ^ |   0 How should I think about the complexity of unordered_map? Isn't it O(1)? How could it be as slow as O(log n)? I don't get the "O(log(xx*yy)) to O(1)+O(log(yy))" part, because in my limited knowledge and experience that is a hash and it should be O(1) -> O(1), so how can vector be faster?
•  » » » » 11 days ago, # ^ | ← Rev. 2 →   0 Sorry for my inaccurate expression. The analysis is based on map.
•  » » » » 8 days ago, # ^ |   +18 If two procedures have time asymptotics of $\mathcal{O}(1)$, this does not necessarily mean they have the same running time. In the case of std::unordered_map, it has a large constant factor, which makes it much slower than std::vector.
•  » » » » » 7 days ago, # ^ |   0 you are right,thx
•  » » » 10 days ago, # ^ |   +3 I got AC using something like path compression and it runs in ~1 sec, but I can't analyse the time complexity. Could you provide some help? AC submission: https://codeforces.com/contest/1806/submission/198034329
•  » » » » 10 days ago, # ^ |   +3 Your optimization makes perfect sense, and it's a complexity-level optimization. Map and unordered_map can both pass the problem with this optimization. 198070832 198070932. Denote the distance between two memorized layers by $d$; for each query you need to go up $O(d)$ layers to reach a memorized layer, so the time spent in this part is $O(qd)$. For the memorized layers, consider the worst case of my original solution, where all $a_i=\sqrt{q}$. There are $\frac{n}{d\sqrt{q}}$ memorized layers in this case, and each layer requires storing and checking $q$ different pairs, which is done by an (unordered) map, hence there are $\frac{n\sqrt{q}}{d}$ memorized pairs in total. If the check fails, each memorized pair needs to go up $O(d)$ layers, and the time spent in this part is $O(n\sqrt{q})$. Therefore, the overall time complexity after this optimization is $O(qd+n\sqrt{q}) + O(\frac{n\sqrt{q}}{d}\cdot t_{map})$, where $t_{map}$ is the cost of the data structure we store the values in, in this case an (unordered) map. If we choose $d=\frac{n}{\sqrt{q}}$, the time complexity is $O(n\sqrt{q})+O(q\cdot t_{map})$. However, this solution has its own worst case, where the data puts all vertices at layers $kd$, and the other layers have only 1 vertex each. But this can be easily fixed by selecting the $k$th memorized layer randomly in $[kd,(k+1)d)$. The expected number of memorized pairs in each layer would be $\frac{\sum\limits_{i=1}^{d}\min\left(a_i^2,q\right)}{d}$, and the sum over all layers would be $\sum\limits_{j=0}^{\frac{n}{d}}\frac{\sum\limits_{i=1}^{d}\min\left(a_{jd+i}^2,q\right)}{d}=\frac{\sum\limits_i\min(a_i^2,q)}{d}$, so the worst case is the same as in my original solution.
•  » » 10 days ago, # ^ |   0 Can you teach me how to find the pattern?
•  » » » 10 days ago, # ^ |   +6 Sure. I started with the second sample. You can see that in the second sample the numbers seem to be proportional: 1 = 1 * 0 + 1; 2 = 2 * 1; 7 = 3 * 2 + 1; 31 = 4 * 7 + 3; 167 = 5 * 31 + 12; 1002 = 6 * 167; 7314 = 7 * 1002 + 300; 60612 = 8 * 7314 + 2100. You can observe 2 patterns, $a_i=a_{i-1}\cdot i$ and $a_i=a_{i-1}\cdot i+k$, and which pattern to follow is related to the 0/1 in the input. The remaining problem is the value of $k$. You can see that $k$ also follows a pattern: 1 * 3 = 3, 3 * 4 = 12, 12 * 5 * 5 = 300, 300 * 7 = 2100. For consecutive 0s, $k$ is multiplied by $i$, and for a 1, $k$ is multiplied by $i-1$. Though there are no consecutive 1s in the sample, we can guess that this pattern stays correct for consecutive 1s. This results in my solution 197972006. But it's very rare that the sample contains the complete pattern for the solution; the problem setter can easily avoid this by limiting the information in the sample, and you may at least need to write brute-force code yourself. And after all, observing patterns doesn't work most of the time.
•  » » » » 10 days ago, # ^ | ← Rev. 3 →   0 Those were amazingly wild guesses! Many thanks for your detailed reply.
•  » » » » 7 days ago, # ^ |   0 Sir, as the problem name is DSU Master, can you tell me how to solve this problem using DSU, if possible?
 » 11 days ago, # | ← Rev. 4 →   +27 Why does the solution of problem E, where we just memorize the answers for all encountered pairs of vertices, get TLE on test 10? There are about $449 \cdot 225 \cdot (10^5 / 450) \approx 22 \cdot 10^6$ pairs we need to memorize in the worst case, so it should fit the constraints. The worst case is when the tree forms a "sun": a graph where all subtrees of the root are bamboos, in every query we choose two leaves of different bamboos, and in every query this pair of bamboos is distinct. I chose $450$ bamboos of length approximately $222$. In that case we could choose $10^5$ distinct pairs of bamboos for $10^5$ queries, and the bamboos are as long as possible.
•  » » 11 days ago, # ^ |   +14 Maybe it's just because this problem has a strict time limit, and map & unordered_map are too slow in some implementations. Your time complexity seems right to me.
•  » » » 11 days ago, # ^ |   +5 It seems quite strange to me, because $22 \cdot 10^6$ isn't really much and I feel like map/unordered_map can handle it in $3$ seconds (at least I used to think so and it always worked)
•  » » » » 11 days ago, # ^ |   +13 197986245: I changed the unordered_map[xx<<32+yy] to vector[xx][yy] in your code, and it passed (actually a little faster than my own solution). I cannot analyze it for unordered_map, but for map the difference is from log(xx*yy) to O(1) + log(yy), though the overall complexity is the same. It reduces the constant factor by about half, I suppose.
•  » » » » » 11 days ago, # ^ |   0 Wow, nice optimization, thank you! Actually, I noticed another interesting thing there: if you try to submit the code from your AC submission using GNU C++17, it will still get TLE on test 10. Seems like unordered_map received a decent performance improvement in C++20 =)
•  » » » » » » 11 days ago, # ^ |   +3 Thank you for the information. It never occurred to me that the compiler option could make so much difference. I'll pay more attention from now on XD
•  » » » » » » » 10 days ago, # ^ |   0 https://codeforces.com/contest/1806/submission/198003630 — why TLE on test 58? I have changed my code a thousand times but it just won't pass!??
•  » » » » » » » » 10 days ago, # ^ |   0 Somehow my code that passed previously doesn't work anymore. Try a hand-written hash table and make it as big as possible to maximize the efficiency. 198019367
•  » » » » » 9 days ago, # ^ |   0 My code is almost the same as yours, but I received a TLE on test case 10 when using an unordered_map, and on test case 58 when using a map.
 » 11 days ago, # |   +3 I TLEd using a hashtable for E. Thought it should work, but figured I was missing something, as it seemed too simple for a problem E.
 » 11 days ago, # |   +3 I didn't even work out problem B this time. Actually, this problem is so simple. I need to work harder.
•  » » 11 days ago, # ^ |   +5 Hoshino kawaii!
•  » » 11 days ago, # ^ |   0 same..
•  » » 11 days ago, # ^ |   0 do you know why answer would be <=2?
•  » » » 10 days ago, # ^ |   0 Case 1: 0s are less than half. Then we can construct like 0 a 0 b 0 c ... 0 x 0. It's clear that $a_i+a_{i+1} \neq 0$ for all $1\leq i\leq n-1$, so mex = 0. Case 2: 0s are more than half and the rest are all 1s. No matter what order we construct, there always exist adjacent 0 0 and 0 1. But we can separate adjacent 1s with a 0 to make sure there is no 1 1. So mex = 2. Case 3: 0s are more than half and the rest are not all 1s. There always exist adjacent 0 0, so we can construct like 0 0 0 ... 0 x a b c d ... ($x \neq 1$, $x,a,b,c,\dots\neq 0$). So mex = 1. Beware corner cases.
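The three cases above condense into a small decision function. This is an illustrative sketch of the case analysis (my reading, not the editorial's code); note the all-zero corner case, where every adjacent sum is 0 and the answer is 1, and the fact that zeros can be kept pairwise non-adjacent iff there are at most ceil(n/2) of them:

```cpp
#include <algorithm>
#include <vector>

// Minimum possible mex of the multiset of adjacent sums after reordering.
int mexMaster(const std::vector<long long>& a) {
    int n = (int)a.size();
    int zeros = (int)std::count(a.begin(), a.end(), 0LL);
    int ones  = (int)std::count(a.begin(), a.end(), 1LL);
    if (zeros <= (n + 1) / 2) return 0; // case 1: no two zeros adjacent
    if (zeros == n) return 1;           // corner: all sums are 0, 1 is missing
    if (zeros + ones == n) return 2;    // case 2: only 0s and 1s; 0 and 1 forced
    return 1;                           // case 3: some value > 1 lets us skip sum 1
}
```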
 » 11 days ago, # |   +24 I have seen C when practicing math olympiad problems but I am not sure which olympiad it was from.
•  » » 11 days ago, # ^ |   +58 Ok just found it, it was ELMO 2016 P1: https://artofproblemsolving.com/community/c6h1262189p6556895
 » 11 days ago, # | ← Rev. 2 →   0 About problem E: "Therefore, the time complexity is $O(\sum_{i=1}^k s_i^2)$. When $s_i=\sqrt{n}$, the formula above has a maximum value of $n\sqrt{n}$, because $(a+b)^2\geq a^2+b^2$." How do I understand this? Why did the $k$ in the formula disappear, and why is that the maximum value? Sorry for my poor English and math, but I still want to know how the conclusion came out.
•  » » 11 days ago, # ^ | ← Rev. 2 →   +23 I'll just write my proof here. Denote all vertices with the same depth as a layer. Our solution is to store all different pairs that may be visited. Consider a layer with x nodes. There can be x^2 different pairs in this layer, but because each query visits only 1 pair per layer, the total number of different pairs visited is min(x^2, q). Now consider all k layers. Because there are n nodes in total, the sum of x_i is n, and there are sum min(x_i^2, q) pairs that may be visited in total. This value reaches its maximum when all x_i = sqrt(q). A simple proof: for x_i < x_j < sqrt(q), moving a node from layer i to layer j creates more possible pairs. Therefore we take x_i = sqrt(q), and hence there are n/sqrt(q) layers in total, where each layer has q different pairs. So there may be n/sqrt(q)*q = n*sqrt(q) possible pairs at maximum.
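The claim that $\sum \min(x_i^2, q)$ is maximized by layers of size about $\sqrt{q}$ can be sanity-checked by brute force on tiny inputs (an illustrative helper, exponential-time, only meant for small n):

```cpp
#include <algorithm>

// Maximum of sum(min(x_i^2, q)) over all ways to split n nodes into
// layers of sizes x_i with sum(x_i) = n. Plain recursion over the size
// of the first layer; fine for tiny n.
long long bestSplit(int n, int q) {
    if (n == 0) return 0;
    long long best = 0;
    for (int x = 1; x <= n; x++) {
        long long here = std::min<long long>((long long)x * x, q);
        best = std::max(best, here + bestSplit(n - x, q));
    }
    return best;
}
```

For n = 8, q = 4 the optimum is four layers of size sqrt(q) = 2, giving 4 * 4 = 16, matching the argument above.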
•  » » » 11 days ago, # ^ |   +1 Thanks a lot.
•  » » » » 11 days ago, # ^ |   +3 :D
•  » » » 10 days ago, # ^ |   0 If possible, can you please elaborate the proof for x_i = sqrt(q)?
•  » » » » 10 days ago, # ^ |   0 The number of different pairs is $\sum\limits_{i=1}^n \min(a_i^2,q)$, where $a_i$ is the number of vertices in the $i$th layer. Hence, the number of pairs in the worst case can be transformed into the following problem: imagine you have a sequence $a_1,a_2,\dots,a_n$ and a value $q$, you have $\sum a_i=n$, and you want to maximize $\sum \min(a_i^2,q)$. Because for any $a_i\le a_j<\sqrt{q}-1\;(i\neq j)$, $a'_i=a_i-1$ and $a'_j=a_j+1$ results in a larger sum, the resulting sequence must not have such $a_i$ and $a_j$, which means at most 1 of the $a_i$ satisfies $0 < a_i < \sqrt{q}$.
•  » » » » » 10 days ago, # ^ |   0 Finally able to understand.. thanks
 » 11 days ago, # |   +3 Unfortunately, I couldn't manage to solve C in time (I solved it like 5 minutes post-contest). Anyway, my question is: was the only way to solve it by brute-forcing good-Q arrays and looking for a pattern? (That's how I figured out my solution, but I was wondering if there's a faster way.)
•  » » 11 days ago, # ^ |   0 I mean, you can always prove your solution (if you're good enough at math).
•  » » » 11 days ago, # ^ |   0 Can you recommend resources to get better at math? (Proving stuff like this always puzzled me)
•  » » » » 11 days ago, # ^ |   +9
•  » » » » 11 days ago, # ^ |   0 Oh, I didn't explain well. I'm horrible at math and I didn't prove my solution to problem C, but you can solve these types of problems fast if you can prove your solutions.
•  » » » » » 9 days ago, # ^ |   -6 Hi, in problem C, if n = 3, then what is the good array? According to the editorial, the array is {-1, -1, -1, -1, -1, 3}. Now LHS => -1 * -1 * -1 = -1, RHS => 3 + (-1) + (-1) = 1. Since LHS != RHS, how can this be a good array??
•  » » » » » » 9 days ago, # ^ |   +4 If n = 3, the only good array is {0, 0, 0, 0, 0, 0}
•  » » 11 days ago, # ^ |   0 I had the same thing.
•  » » 10 days ago, # ^ | ← Rev. 3 →   0 I had a strong feeling that there is a very limited number of possible good arrays, since there are a lot of constraints: each set of $N$ elements should have the property mentioned in the problem statement. Then I kinda "guessed" that either all numbers are the same, call it $C$, which gives exactly $1$ possible unique set of $n$ numbers, or only $1$ number, call it $X$, is different from the other $2N-1$ $C$'s, which gives exactly $2$ possible unique sets of $n$ numbers, one with the $X$ and one without it. I tried for $n=2$ to generate a set of $4$ different numbers following all conditions, but it did not feel possible, and it is not. So I explored all possible values for $C, X, N$, which were very limited as I guessed initially. Following my guess, the first case is equivalent to solving $C^N = NC$, which results in the following: $N=1$ [trivial]; $C=0$ [trivial]; $N=2, C=2$, meaning that the array is $[2,2,2,2]$. The second case reduces the problem to the following equations: $(1)\; C^N = X+(N-1)C = X + NC - C$ and $(2)\; C^{N-1}X = NC$. By substituting the RHS of $(2)$ into the RHS of $(1)$, my cases were limited to the following: $C^{N-1}=-1 \Rightarrow C=-1$ and $N$ is even, and substituting into $(2)$ gives $X=N$, meaning that the array is $[-1,-1,\dots,-1,N]$; or $C=X$, which belongs to the first case. So all good arrays were limited to these cases: $[C, C]$ when $N=1$; $[2,2,2,2]$ when $N=2$; $[0,0,\dots,0]$ for any $N$; $[-1,-1,\dots,-1,N]$ when $N$ is even (total number of elements divisible by $4$).
•  » » » 10 days ago, # ^ |   0 Thanks a lot
 » 11 days ago, # |   -25 What a stupid idea for $E$... I abandoned this idea and thought that it is impossible to make it pass.
•  » » 11 days ago, # ^ |   +13 But the magical complexity analysis is also a part of the algorithm, isn't it?
 » 11 days ago, # | ← Rev. 3 →   0 .
 » 11 days ago, # | ← Rev. 3 →   +3 My submission to E: 197981349. Process the queries with ascending depths.
For each of the queries, move up the tree node by node, and stop if we encounter a preexisting pair. However, we only memorize the current answer every 300 moves. If we don't memorize intermediate answers and only the final one (I did this in contest), it gets TLE on test 27: 197946777 :(
•  » » 11 days ago, # ^ |   +3 Can you elaborate on memorizing every 300 moves? I was also getting TLE on test case 27 during the contest. Submission link — 197963238
 » 11 days ago, # |   0 I was stuck on problem C, wondering whether the good q exists... Then I found $q_n$ with 10 minutes left, too late to write code... hardForces
•  » » 9 days ago, # ^ |   0 Hi, in problem C, if n = 3, then what is the good array? According to the editorial, the array is {-1, -1, -1, -1, -1, 3}. Now LHS => -1 * -1 * -1 = -1, RHS => 3 + (-1) + (-1) = 1. Since LHS != RHS, how can this be a good array??
•  » » » 9 days ago, # ^ |   +1 The editorial says that case happens only when $2 \mid n$, so if n = 3 the only good array is {0,0,0,0,0,0}.
•  » » » » 9 days ago, # ^ |   0 Understood, thanks.
 » 11 days ago, # | ← Rev. 2 →   +1 [deleted].
 » 11 days ago, # |   0 Can anybody help, why am I getting TLE on test $10$? My submission. I tried a similar brute-force approach to the one in the editorial.
•  » » 11 days ago, # ^ | ← Rev. 3 →   +3 Map is much slower than a hashtable because it's $O(\log n)$, and the complexity in total will reach $O(n\sqrt{n}\log{n})$. I think you can use an array of hashtables like unordered_map umap[200005] and use it just like umap[x][y] = value.
•  » » » 11 days ago, # ^ |   0 Thank you so much!
•  » » 11 days ago, # ^ |   +6 I modified your code to pass the problem: 197988689. In addition to mashduihca's comment, another important optimization is to reorder the two nodes x, y so that x < y.
•  » » » 11 days ago, # ^ |   0 Thanks a lot! Can you please tell a bit more about why enforcing $x < y$ halves the number of possible pairs?
•  » » » » 11 days ago, # ^ | ← Rev. 2 →   +3 Yes. For example, for the pair {2,3}: without this optimization it may appear 2 times, as {2,3} and {3,2}. Yet now that I think about it, this optimization should make no improvement in the worst case, but without it the solution would get MLE/TLE. Does that imply there is still plenty of room for uphacking? XD
•  » » » » » 11 days ago, # ^ |   0 OK, understood, thanks again for your time!
•  » » » » » » 11 days ago, # ^ |   +3 :D
•  » » » » 11 days ago, # ^ |   +3 For example, now you are calculating $a_3 \times a_5$. It's a waste to save both (3,5) and (5,3), because the order isn't important at all. And as we know, the constant of a hashmap is large, maybe even 10 times more than a single swap operation, so we don't want to use it more than necessary.
•  » » » » » 11 days ago, # ^ |   0 Yes, understood! Thanks!
•  » » » » » » 11 days ago, # ^ |   +3 You're truly welcome.
•  » » » 11 days ago, # ^ | ← Rev. 2 →   0 I submitted your solution again; it doesn't pass.
•  » » » » 10 days ago, # ^ |   0 I solved it again with a hand-written hash table: 198019367. Though I still wonder why I'd get TLE on test 10, which I passed previously.
•  » » » » » 10 days ago, # ^ |   0 Maybe a new case was added.
 » 11 days ago, # |   0 Problem E is so strict that I can't solve it with map/unordered_map within the time limit.
•  » » 11 days ago, # ^ |   0 OK, but test case 58??? I tried other code but got TLE; must this problem use an array instead of unordered_map? I don't know why.
 » 11 days ago, # | ← Rev. 2 →   0 I found that if you just climb up T (a constant) steps first, and then run the normal memoized search, the solution might be faster. I tried some values of T and found it's fastest with T around 700. I think it's kind of 'metaphysical'. Can anyone explain this phenomenon?
•  » » 11 days ago, # ^ |   0 BTW, if T = 0, my solution will get a TLE.
 » 11 days ago, # |   +14 The observation in F is really amazing. This task is truly a great one (and hard).
 » 11 days ago, # |   0 What? The intended solution of E is not Mo's algorithm on tree?
But my HashMap solution failed many times.
 » 11 days ago, # |   0 Can anybody help in problem B, why is the answer always <= 2?
•  » » 11 days ago, # ^ |   +3 3 cases. 1) Less than half the numbers are 0. In this case, we can insert the 0s into positions between positive numbers, and no sum will be 0, hence the mex would be 0. 2) More than half the numbers are 0, and there exists a positive number > 1. In this case, there must be a sum equal to 0, so place all 0s on the left, followed by a positive number > 1, and the rest can be ordered arbitrarily. Because no sum equals 1, the mex would be 1. 3) More than half the numbers are 0, and all positive numbers are 1. In this case, sums equal to 0 and 1 are both inevitable. But by inserting 1s into positions between two 0s, we can prevent sums equal to 2. Hence the mex would be at most 2. The 'half' above is a little different from n/2: it's the border where the number of 0s > the number of positive integers + 1.
•  » » » 11 days ago, # ^ |   0 Thanks a lot.
 » 11 days ago, # |   0 I still don't understand why problem setters add problems like F1 and F2 to a Div. 2 contest! Isn't the round meant for Div. 2 participants? Or were your Div. 2 testers able to solve it? I see even Div. 1 participants struggled with the problem.
•  » » 11 days ago, # ^ |   +36 Actually, it is common to have a problem harder than 2800 in a Div. 2 contest.
•  » » » 11 days ago, # ^ |   0 Yes, but why?
•  » » » » 11 days ago, # ^ |   +10 It is a custom in programming contests that the hardest problem in a contest should be unsolvable for most contestants, which is used to distinguish top contestants.
•  » » » » » 11 days ago, # ^ |   0 It is a Div. 2 contest. It should distinguish top Div. 2 contestants, and I doubt a real Div. 2 participant can solve a 2800 problem in 2:00 hours coupled with other problems.
 » 11 days ago, # |   +3 E is barely possible with Java :(. Had the right solution but still TLE.
 » 11 days ago, # | ← Rev. 3 →   0 In problem E, I don't understand the part: "It's easy to discover that when $s_i > \sqrt{n}$, the contribution of it will equal the contribution when $s_i = \sqrt{n}$"
•  » » 11 days ago, # ^ |   0 Because each query only accesses 1 pair of nodes in each layer, the number of pairs accessed in a layer has an upper bound of q, making it min(s_i*s_i, q).
•  » » » 10 days ago, # ^ |   0 Thanks a lot :D
 » 11 days ago, # | ← Rev. 8 →   +18 I have upsolved 1806E - Tree Master in $O\left(n \cdot q\right)$. My accepted submission (1300 ms). Unfortunately, my $O\left(n \cdot q\right)$ solution, which had passed all of the pretests, got TLE on the 56th test during system testing. It required 2 additional fixes before becoming accepted. I hope next time I will be better at using AVX2. You can find an explanation of the main ideas of the $O\left(n \cdot q\right)$ solution below. How to solve in O(nq): solving in $O\left(n \cdot q\right)$ means that we will calculate the answer to each query in linear time. Let's start from arrays. If we are given 2 distinct arrays of integers, we can calculate their dot product in $O\left(\dfrac{n}{8}\right)$ with 256-bit YMM registers (the AVX2 extension) with the following function:

```cpp
using src_type = int;
using dst_type = long long;

__attribute__((optimize("Ofast,unroll-loops"), target("avx2")))
dst_type dotProd(const src_type * __restrict a,
                 const src_type * __restrict b,
                 const int len) {
    dst_type res{};
    for (int i = 0; i < len; i++)
        res += (dst_type)a[i] * b[i];
    return res;
}
```

Now we are able to calculate the dot product of two arrays. The next step is preparing these arrays: convert each path in the tree to a subsegment of some array. We can do it with Heavy-Light Decomposition. Now we can start to calculate the answer to a query. If the two vertices lie on different paths of the HLD, we calculate part of the answer with our dot product function and go up. When the two vertices come to lie on the same path, they are at the same position of this path (so $u = v$), and we can use a precalculated sum of squares on the path to the root. UPD. With honest walking up to the root I got 1800 ms. Submission
 » 11 days ago, # | ← Rev. 2 →   +6 Solution of E with a block algorithm: mark every node whose depth is divisible by sqrt(n) and whose subtree has at least sqrt(n) nodes as an important node. There are at most sqrt(n) important nodes. For every pair of important nodes whose depths are the same, calculate the answer and store it. As there are at most n pairs of important nodes, and every pair of important nodes reaches another pair of important nodes after sqrt(n) jumps, the time complexity is O(n*sqrt(n)). For every query, there must be a pair of important nodes after at most 2*sqrt(n)-1 jumps, so the time complexity is O(q*sqrt(n)). Overall time complexity: O((n+q)*sqrt(n)); overall memory complexity: O(n).
•  » » 10 days ago, # ^ |   0 Can you elaborate more on why there must be a pair of important nodes after at most 2*sqrt(n)-1 jumps? Is it because we need to have at least sqrt(n) nodes in the subtree? Shouldn't the number of jumps be sqrt(n) + 1, since that will satisfy both conditions for important nodes?
•  » » » 10 days ago, # ^ |   0 If there is a subtree of size sqrt(n)-1 whose root's depth is divisible by sqrt(n), then for a query node in that subtree we will jump at most sqrt(n)-1 times to the root of the subtree; but as the size of the subtree is less than sqrt(n), this will not be an important node. We have to jump sqrt(n) more times to find another node whose depth is divisible by sqrt(n). The subtree of that node must have at least sqrt(n) nodes. So there must be an important node after 2*sqrt(n)-1 jumps. For the other node it is the same. So there must be a pair of important nodes after at most 2*sqrt(n)-1 jumps.
•  » » » » 10 days ago, # ^ |   0 Thanks, understood.
 » 11 days ago, # |   0 How can I derive an equation for this type of problems
, like Problem A? I was trying to solve it by a brute-force approach (with two cases), but that didn't work. I get stuck on this type of problem.
•  » » 11 days ago, # ^ |   0 It doesn't need any equation, just observation. Firstly, you can increase x and y simultaneously, so first of all d >= b must hold, because you can never decrease y; to reach d you have to increase b by (d-b), and a will also increase by (d-b). Then our other move is (x-1, y), so now c <= a must hold, since we can't increase the x coordinate anymore.
•  » » » 10 days ago, # ^ |   0 Thank you
 » 11 days ago, # |   0 Hey, I am stuck on problem A. I understood most of it, but why are we subtracting c?
 » 11 days ago, # |   0 In B, why can the answer not exceed 2?
 » 11 days ago, # |   0 Hi mazihang2022, I just tried to uphack my own solution for problem E and got the response "unexpected verdict". Could somebody look into this? https://codeforces.com/contest/1806/hacks/895387
•  » » 10 days ago, # ^ |   0 I think it works now. You can try again.
•  » » » 10 days ago, # ^ |   0 More and more unexpected verdicts. What should we do?
•  » » » » 10 days ago, # ^ |   0 I saw your hacks. It was just because I wrongly marked some code as AC in Polygon. The intended solution works fine and uses only about 600ms on your input. I think everything should be fixed now.
 » 11 days ago, # |   0 Why do I always get stuck on C? Folks, please suggest something so that I can improve myself :(((
 » 11 days ago, # |   0 In problem C, many good arrays are not being considered. For instance, 1 2 3 1 2 3 is a good array. So for the given array [1,2,4,1,2,3] the output should be 1, but the given code gives output 14.
•  » » 11 days ago, # ^ |   0 You are considering subarrays instead of subsequences.
 » 11 days ago, # |   0 Authors, thank you for the opportunity to practice Mo's algorithm on trees in problem E!
 » 11 days ago, # |   +33 Problem E appeared in a CodeChef long challenge 2.5 years ago: link
 » 10 days ago, # | ← Rev. 2 →   0 Can someone tell me what is wrong here: https://codeforces.com/contest/1806/submission/197944882. I didn't understand why it didn't work. I'm kind of new to Codeforces T_T
•  » » 10 days ago, # ^ |   0 −10^8 ≤ a,b,c,d ≤ 10^8 — it's too slow. (One test can maybe pass in 1 second, but the number of test cases is like 10^4.) You need an O(1) solution here.
 » 10 days ago, # |   +23 Problem F has appeared on CPSPC 2017, here
 » 10 days ago, # | ← Rev. 2 →   0 Could anyone explain D in a more concise way? The editorial is a bit too brief for me :( For a[i] = 1, why is only inserting at the end invalid? I don't really understand how the answer is calculated.
•  » » 9 days ago, # ^ |   +8 Here's my gathering: some intuition for DSU Master. Let's denote node number 1 as N1. f(k) is the count of permutations of [1,..,k] such that, in the corresponding graph on nodes [1,..,k+1], all nodes have a path to N1, either directly or indirectly. Now let's derive f(k) from f(k-1). First of all, if some permutation of [1,..,k] results in all (k+1) nodes having a path to N1, then removing number k from this permutation will also result in the remaining k nodes having a path to N1. This means the value of f(k) is somehow related to the value of f(k-1). Now take some permutation of [1,..,k-1] which contributes to f(k-1). If A(k) is 0, i.e. there will be an edge (k+1) -> (k) at some point, we can insert number k anywhere in between [1,..,k-1] and the resulting permutations will also contribute to f(k). If A(k) is 1, i.e. there will be an edge (k) -> (k+1) drawn at some point, denote the root of the weakly connected component which k belongs to, just before we draw this (k) -> (k+1) edge, as root(k). If that root(k) is not N1, future edges will connect root(k) to N1, by definition of f(k-1). If root(k) is N1, which can only happen after the last edge draw of [1,...,k-1], the new edge becomes root(k) -> (k+1), i.e. 1 -> (k+1), so such a choice doesn't contribute to f(k+1). To calculate the answer, denote the total number of incoming edges of N1 over all [1,..,k-1] permutations as ans(k-1). We can insert number k anywhere into any of these permutations, and the incoming degree of N1 coming from [1,..,k] doesn't change, since number k is only responsible for the (k) -> (k+1) edge (or (k) <- (k+1), depending on A(k)). Since there are k slots to insert number k, ans(k) = ans(k-1) * k + (count of permutations where node k+1 points directly to N1). Note that since the new node k+1 appeared when we switched from (k-1)-length to k-length permutations, this new node can also directly point to N1. This can happen only when all the first nodes [1,2,...,k] are already connected to N1 (directly or indirectly), in f(k-1) ways, and when we apply the (k+1) -> (k) edge as the last edge, which can only be applied when A(k) is 0. Submission: https://codeforces.com/contest/1806/submission/198274330
•  » » » 9 days ago, # ^ |   +1 Thanks for this amazing detailed explanation. Had a hard time understanding the editorial solution.
•  » » » 9 days ago, # ^ | ← Rev. 2 →   +1 Ohh, I was stuck until I read this tutorial. I can understand now. Thanks a lot! You helped me! <3
 » 10 days ago, # |   0 I still don't know why unordered_map will TLE in problem E = =
•  » » 10 days ago, # ^ |   0 Because it's too slow; you should implement the hash table yourself.
 » 10 days ago, # |   0 Can someone explain, in D, why if ai=1 only inserting at the end is invalid, so fi+1=fi⋅(i−1)?
•  » » 9 days ago, # ^ |   0 Here's my understanding: https://codeforces.com/blog/entry/114048?#comment-1014400
 » 10 days ago, # |   0 Got screwed in C by using int and not long long
 » 10 days ago, # |   0 Where can I practice tasks similar to C? I am too bad at intuitive thinking.
 » 10 days ago, # |   +5 There is a mistake in the editorial of D.
In the last two paragraphs, the $f_{i+1}$ should be $f_i$. It means the $(i, i+1, 0)$ operation can contribute only when it is the last operation. I also think the editorial is too brief for us, and it lacks a lot of proof. •  » » 10 days ago, # ^ | 0 Formula fixed. But I'm not quite sure how to explain it in a clearer way :(  » 10 days ago, # | 0 Having trouble understanding problem D even with the editorial; any good explanations of it?  » 10 days ago, # | ← Rev. 2 → 0 Can someone tell me how I can improve this approach in problem E? My submission  » 10 days ago, # | 0 Can anybody share their thoughts on how they came up with problem C? How did they think, and what made them think that way?  » 10 days ago, # | +11 Why is there no Chinese version?  •  » » 10 days ago, # ^ | +3 I think it is necessary, too.  » 10 days ago, # | 0 In B, what do you mean by "the count of 0s can't be greater than n/2"? It can be equal to n/2 + 1 in case n is odd: 01010 achieves a valid mex of 0. •  » » 10 days ago, # ^ | 0 So can we just take the first number whose frequency is at most n/2 for an even-sized array, and at most n/2 + 1 for an odd-sized array? Tried this but got WA.  » 9 days ago, # | ← Rev. 2 → 0 Please help me understand problem D's formula for calculating the ans array! I have been stuck on it. :'(  » 9 days ago, # | +3 Why is F1 = F2 = 2900? F2 is much harder than F1...  » 8 days ago, # | ← Rev. 4 → +24 I remember problem F2 from last year's Turkish End of the Winter Camp OI (I just made up the name; TEWOI sounds kinda cool tho) (It doesn't). I'm pretty sure the Turkish writers had stolen the problem from an obscure OI, and the only difference was that you needed to solve the problem for all values of K, which was a nice addition for whoever wanted to solve it.
I'm not saying the contest writers intentionally used an idea from another OI; the idea of grouping numbers and summing up their gcds is not that unusual, and it's a pretty sick problem nevertheless. I'm just surprised no one else from that camp mentioned it. Also, sorry for somewhat necrocommenting; I only saw the problem yesterday. My idea for the harder version: Still solve the problem for the array of unique elements, and use the fact that there are only $O(\log m)$ different prefix gcds. Notice that the element you merge with the prefix can be an element other than the first one at most $O(\log m)$ times (only when the prefix gcd changes). So, similarly to the repeated elements you need to add at the end, you can extract the elements between prefix gcd changes. Use binary search or two pointers to find how many numbers you will add and extract for each different prefix gcd, thus solving it in $O(n \log^2 m)$ or $O(n \log m)$. (And sorry, I know the way I write is by no means rigorous; just yell at me in the comments if you want me to clarify something.)
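The $O(\log m)$ bound on distinct prefix gcds used above is easy to sanity-check: whenever the prefix gcd changes, the new value properly divides the old one, so it at least halves. A minimal sketch (the function name is mine, not from any submission):

```cpp
#include <numeric>  // std::gcd (C++17)
#include <vector>

// Count how many distinct values the running prefix gcd takes over `a`.
// Each time the prefix gcd changes, it drops to at most half of its
// previous value, so the count is O(log max(a)).
int distinct_prefix_gcds(const std::vector<long long>& a) {
    int changes = 0;
    long long g = 0;  // gcd(0, x) == x, so the first element always counts
    for (long long x : a) {
        long long ng = std::gcd(g, x);
        if (ng != g) ++changes;
        g = ng;
    }
    return changes;
}
```

This is why the elements between consecutive prefix-gcd changes can be handled in bulk, as the comment describes.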
 » 8 days ago, # |   0 Seriously, I need help. I am still not able to understand Tree Master (problem E); I've read all the comments and the editorial, but I can't really understand the logic behind it.
•  » » 7 days ago, # ^ |   0 First, try to implement a "naive" solution that gives the right answers without worrying about execution time. One way to verify its correctness is to pass the first several tests. Then analyze why the solution takes a long time on larger trees, and think about how to optimize it. One natural idea is to memoize the results of visited pairs (x, y) in a map and reuse them in future queries. Such a "natural" idea, however, would still lead to TLE and requires more optimization. One such optimization is to skip adding pairs to the cache during the first M (say, 1000) steps of walking up the tree. Other ideas were discussed in the thread.
•  » » » 7 days ago, # ^ |   0 Could you please elaborate more? I mean the entire approach.
•  » » » » 7 days ago, # ^ | ← Rev. 2 →   0 The tree in the first example input is shown below. Each node is labeled with two numbers, id:value.

      1:1
       |
      2:5
      /  \
    3:2   6:1
    /  \
  4:3   5:1

The first query is (4,5), whose two paths to the root are:

  4:  4:3  3:2  2:5  1:1
  5:  5:1  3:2  2:5  1:1

Multiplying the corresponding values level by level and summing them up:

  3*1 + 2*2 + 5*5 + 1*1 = 3 + 4 + 25 + 1 = 33

so the answer for the pair (4,5) is 33. If you understand this example and apply the idea generally, you will get a "naive" solution. If you could NOT understand this example, I suggest you focus on solving problems A, B, and C instead, and skip E and above.
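The walk-up in this example is easy to turn into the "naive + memo" starting point described a few comments above. Below is a hedged, minimal sketch (the globals par, val, and cache are my names, not from anyone's submission); it computes f(x, y) for two same-depth nodes by multiplying values level by level, caching visited pairs in a map:

```cpp
#include <map>
#include <utility>
#include <vector>

// par[v] = parent of node v (par[1] = 0 for the root), val[v] = value of v.
std::vector<int> par, val;
std::map<std::pair<int, int>, long long> cache;  // memo for visited pairs

// f(x, y) for nodes x and y at the same depth: the sum of val[a] * val[b]
// over the two root paths, taken level by level.
long long f(int x, int y) {
    if (x > y) std::swap(x, y);                // (x, y) and (y, x) coincide
    if (x == 1) return 1LL * val[1] * val[1];  // same depth, so both are the root
    auto it = cache.find({x, y});
    if (it != cache.end()) return it->second;
    long long res = 1LL * val[x] * val[y] + f(par[x], par[y]);
    return cache[{x, y}] = res;
}
```

On the real constraints, caching every pair like this can still TLE or MLE, which is exactly what the optimizations discussed in this thread (e.g. not caching the first M steps of the walk) are meant to fix.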
 » 5 days ago, # |   0 Why is there no YouTube solution?
 » 5 days ago, # |   0 In the Mex Master problem: for the case [0,0,0,0,0,1,1,2], the answer can be 3, right??
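For a case this small, the score can simply be brute-forced over all arrangements (recall the score is the MEX of the adjacent sums $b_i = a_i + a_{i+1}$, and the task asks to minimize it). A hedged sketch, with function names of my own:

```cpp
#include <algorithm>
#include <vector>

// MEX of the multiset of adjacent sums b_i = a_i + a_{i+1}.
int score(const std::vector<int>& a) {
    std::vector<int> b;
    for (int i = 0; i + 1 < (int)a.size(); ++i) b.push_back(a[i] + a[i + 1]);
    int mex = 0;
    while (std::find(b.begin(), b.end(), mex) != b.end()) ++mex;
    return mex;
}

// Minimum score over all arrangements; fine for small n (8! is tiny).
int min_score(std::vector<int> a) {
    std::sort(a.begin(), a.end());  // start from the first permutation
    int best = score(a);
    while (std::next_permutation(a.begin(), a.end()))
        best = std::min(best, score(a));
    return best;
}
```

For [0,0,0,0,0,1,1,2], an arrangement such as 0 1 0 1 0 2 0 0 does score 3, but the minimum over all arrangements is 1: five zeros among eight positions force some adjacent pair of zeros (so the score is at least 1), and 0 0 0 0 0 2 1 1 avoids any adjacent sum equal to 1.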