### skywalkert's blog

By skywalkert, history, 2 years ago,

Hi, all!

This year GP of China is scheduled on Sunday, March 10, 2019 at 11:00 (UTC+3). Writers have spent several days and nights creating and developing these problems. Hope you enjoy the contest!

UPD: Thanks to zimpha, Claris, quailty and me, here is the editorial.

• +61

 » 2 years ago, # |   +36 "The virtual contest is in progress. You are not allowed to participate" again...
•  » » 2 years ago, # ^ | ← Rev. 2 →   0 It seems Div.1 is postponed for 2 minutes, and Div.2 will be postponed for 30 minutes. UPD: As said on the main page, Div.2 will be postponed for 1 hour.
 » 2 years ago, # |   +31 You shouldn't replace problems during the contest. I finished coding some problem, tested it on the samples, found that the samples didn't look correct, then reloaded the problem and found that it had been changed to a completely different problem...
•  » » 2 years ago, # ^ |   +8 Maybe the old problem statement was misplaced by admin? I'm not sure about that. I have no access to the backend so far. Anyway, sorry for the inconvenience.
•  » » » 2 years ago, # ^ |   +24 OK, sorry for complaining if it's not your mistake. I guess it's just an unfortunate error.
 » 2 years ago, # |   +1 I'm just a contestant, can I register a sector?
•  » » 2 years ago, # ^ |   +5 I'm afraid I can't answer the question. If you really want to register, you can send messages to snarknews for more details.
 » 2 years ago, # |   0 What is test 44 in B?
•  » » 2 years ago, # ^ |   -19 k=1,2
•  » » 2 years ago, # ^ | ← Rev. 2 →   +8 Try this test:
```
1
6 6 1
2 4
1 6
6 4
2 5
3 6
5 4
```
Answer: Yes
 » 2 years ago, # |   0 Contrary to its looks, J was actually very cool. Thanks! C is pretty easy if we are given some oracle which draws a line slightly to the right of some two points, but it seems that's what the actual problem is. Is there an easy way to do that? And how to solve H?
•  » » 2 years ago, # ^ |   0 How to solve C if we can draw such line?
•  » » » 2 years ago, # ^ |   0 Assume n is even. Then we divide the n points into halves by sorting by the pair (x, y) and draw some line almost parallel to the x-axis, so that there are n / 2 points on each side. Take the convex hull of all points; then there exists some edge of the hull with one endpoint on the upper side and the other on the lower. Draw a line to eliminate it. Of course this is not verified, so please tell me if this is wrong.
•  » » » » 2 years ago, # ^ |   +11 Here's an "oracle" that passes the tests:

```cpp
vector<segment> GetSegments(const point& a, const point& b) {
    const int deltax = b.x - a.x;
    const int deltay = b.y - a.y;
    const int k = 500001;
    point aa(a.x - k * deltax, a.y - k * deltay);
    point bb(b.x + deltax, b.y + deltay);
    vector<segment> segments;
    static const int dx[4] = { -1, 0, 1, 0 };
    static const int dy[4] = { 0, 1, 0, -1 };
    for (int d = 0; d < 4; ++d) {
        point aaa(aa.x + dx[d], aa.y + dy[d]);
        point bbb(bb.x, bb.y);
        if (ccw(aaa, bbb, a) == 0) continue;
        if (ccw(aaa, bbb, b) == 0) continue;
        segments.push_back(segment(aaa, bbb));
    }
    return segments;
}
```

It returns up to four segments, some of which pass slightly to the right, and others slightly to the left, so you try them all and take the one that eliminates the two points.
•  » » » » » 2 years ago, # ^ |   0 That's very clean, thanks!
•  » » 2 years ago, # ^ |   +1 How to solve J?
•  » » » 2 years ago, # ^ |   0 We first see how to enumerate all repeats in a string. Fix the length of the repeat, 2k. Divide the string into blocks [0, k - 1], [k, 2k - 1], .... Partition these ⌊N / k⌋ blocks into maximal runs of equal blocks, which leaves us with intervals [ik, jk + k - 1]. If we slightly extend such an interval to the right (by the amount LCP(ik, jk)) and to the left (by ReverseLCP(ik, jk)), we get the maximal substring in which every length-2k substring is a repeat. With suffix arrays we can enumerate them in O(n log n) time. Thus, we are left with pieces of information indicating that there is an edge of cost w connecting x + i and y + i for each i in a range. Of course, we can decompose these into pieces whose length is 2^k for some k. We now process from k = 19 down to k = 0. Notice that for each k we only need O(n) such pieces, because anything that is not contained in the spanning forest is useless. So we can compress them by Kruskal's algorithm, and X pieces at level k = t + 1 can be replaced with equivalent 2X pieces at level k = t. If we propagate down to k = 0, we get exactly what we want. This gives O(n log n). While writing it down I found my implementation was actually O(n log^2 n), but it passes in 1.5s so who cares.
•  » » » » 2 years ago, # ^ |   +28 A piece can just be represented as two pieces whose lengths are powers of two, since the redundancy doesn't affect the answer. The intended solution is to sort all pieces by their cost w and merge the pieces of each length level by level. We can maintain a disjoint set union dsu_k for each k. When we want to merge x + i and y + i (0 ≤ i < 2^k), first check whether x and y are already connected in dsu_k; if so then we are done, just break. Otherwise we need to connect x and y in dsu_k and then recursively consider (x, y) and (x + 2^{k-1}, y + 2^{k-1}) in dsu_{k-1}. Since there are at most O(n) merges in each dsu, the total complexity is O(n log n).
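The level-by-level DSU merge described above can be sketched as follows (class and method names are mine, not from any contestant's code): dsu_k tracks connectivity of blocks of length 2^k by their start position, and merging a pair of blocks either stops immediately (already connected at this level) or recurses into the two half-blocks.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Sketch of the layered-DSU merge for problem J (hypothetical names).
// par[k][i] is the DSU parent, at level k, of the block [i, i + 2^k - 1].
struct LayeredDSU {
    vector<vector<int>> par;
    LayeredDSU(int n, int levels) : par(levels + 1, vector<int>(n)) {
        for (auto& p : par) iota(p.begin(), p.end(), 0);
    }
    int find(int k, int x) {
        return par[k][x] == x ? x : par[k][x] = find(k, par[k][x]);
    }
    // Merge positions x..x+2^k-1 with y..y+2^k-1 pairwise.
    void merge(int k, int x, int y) {
        if (find(k, x) == find(k, y)) return;  // whole range already merged: stop early
        par[k][find(k, x)] = find(k, y);
        if (k == 0) return;
        int half = 1 << (k - 1);
        merge(k - 1, x, y);                    // left halves
        merge(k - 1, x + half, y + half);      // right halves
    }
};
```

Each level performs at most O(n) successful unions, and every call either performs a union or returns immediately, which is where the O(n log n) total bound comes from.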
•  » » » » » 2 years ago, # ^ |   0 Yeah, it should be two pieces. I think that was my original thought, and I even precomputed ⌊log2(n)⌋ because of it. But somehow I did it the hard way :p
•  » » 2 years ago, # ^ |   +18 H: Use centroid decomposition on T1. Suppose the centroid is o, and the distance between x and o is w_x. What we need to calculate is the total contribution of pairs x, y lying in a connected component S of T1 according to the centroid decomposition. Take an edge with length len in T2; its contribution to the answer can be easily calculated using a tree DP in O(n). O(n) per edge is too slow, but we can mark the vertices of S as important. If we compress the unimportant vertices whose degree is no more than 2, we get a tree with O(|S|) vertices, and running a linear tree DP on it is OK. The total complexity is O(n log^2 n): the first log is for the centroid decomposition, and the second log is for the std::sort used to build the compressed tree. An O(n log n) solution also exists; we leave it to the readers.
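A minimal centroid-decomposition skeleton (generic, not the full solution for H; names are mine) looks like this — the per-component virtual-tree DP described above would go where the comment indicates:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Generic centroid decomposition skeleton. The O(|S|) compressed-tree DP for H
// would be performed at each centroid before recursing into the parts.
struct CentroidDecomposition {
    vector<vector<int>> g;
    vector<int> sz, cpar;      // subtree sizes, centroid-tree parent
    vector<bool> removed;
    CentroidDecomposition(int n) : g(n), sz(n), cpar(n, -1), removed(n, false) {}
    void addEdge(int u, int v) { g[u].push_back(v); g[v].push_back(u); }
    int calcSize(int u, int p) {
        sz[u] = 1;
        for (int v : g[u]) if (v != p && !removed[v]) sz[u] += calcSize(v, u);
        return sz[u];
    }
    int findCentroid(int u, int p, int total) {
        for (int v : g[u])
            if (v != p && !removed[v] && sz[v] * 2 > total)
                return findCentroid(v, u, total);
        return u;
    }
    // Decomposes the component containing u; returns its centroid.
    int decompose(int u, int parent = -1) {
        int total = calcSize(u, -1);
        int c = findCentroid(u, -1, total);
        cpar[c] = parent;
        removed[c] = true;
        // ... process the component hanging at centroid c here ...
        for (int v : g[c]) if (!removed[v]) decompose(v, c);
        return c;
    }
};
```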
•  » » » 2 years ago, # ^ |   +8 That's awesome... Thanks!
 » 2 years ago, # |   0 What is test 19 on B about?
•  » » 2 years ago, # ^ |   -18 k=3
•  » » 2 years ago, # ^ |   0 We were failing test 19 because of this case:
```
1
4 4 1
1 2
2 3
3 1
1 4
```
Answer: No
•  » » 2 years ago, # ^ |   0 I'm not really sure if this is test 19 or not (I can't pass it myself), but my solution fails miserably on a test like this and I don't know how to fix it yet:
```
1
24 27 4
1 2
2 3
1 3
2 4
4 5
5 6
6 7
7 8
8 9
9 10
10 11
11 6
5 12
12 13
12 14
14 15
15 16
16 17
17 18
18 19
19 15
14 20
20 21
21 22
22 23
23 24
24 21
```
Answer: No
 » 2 years ago, # | ← Rev. 2 →   +16 How to solve D in O(n^2)? We managed to pass the samples in O(n^2 log n), but unfortunately our solution is too slow.
•  » » 2 years ago, # ^ |   0 My solution calculates expo(a, b) only when a, b ≤ n. Hence, powers can be precomputed.
•  » » » 2 years ago, # ^ |   0 Of course, we did that, but we had other difficulties. I will share our O(N^2 log N) approach. Let's calculate the number of ways to make a connected component with i vertices having a cycle with j vertices on it; call this value g_{i,j}. Using g_{i,j}, let's calculate f_i and f2_i: the total number of edges in all valid connected components of size i, and the total number of ways to build a valid component with i vertices, respectively. We calculated these in O(N^2). Now let dp_{i,j} be the total number of edges over graphs having i vertices and no components of size greater than j. Obviously the answer for N is dp_{N,N}, and we don't know how to calculate this dp faster than O(N^2 log N). The main idea we used is to add components in non-descending order of their size. But since we can add more than one component of the same size, we should divide the answer by cnt!, and we can't just add two components of the same size independently. I mean, we can't calculate dp_{i,j} as a linear combination of dp_{i,j-1} and dp_{i-j,j}; we should try every value of k and get dp_{i,j} as a linear combination of dp_{i-k·j, j-1}.
•  » » » » 2 years ago, # ^ |   0 How about only considering dp_i and dp'_i, the total number of edges in graphs having i vertices and the total number of graphs having i vertices? A typical approach is to enumerate the size k of the component which contains vertex 1 and regard the rest as dp'_{i-k}.
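The "component containing vertex 1" trick from this reply can be checked on a simpler instance: counting labeled forests, where the connected building block is Cayley's k^{k-2} trees. The function name below is mine, for illustration only.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Count labeled forests on n vertices via the component containing vertex 1:
//   F_n = sum_{k=1}^{n} C(n-1, k-1) * k^(k-2) * F_{n-k}
// (choose the k-1 other vertices joining vertex 1, times Cayley's tree count).
unsigned long long forests(int n) {
    vector<vector<unsigned long long>> C(n + 1, vector<unsigned long long>(n + 1, 0));
    for (int i = 0; i <= n; ++i) {
        C[i][0] = 1;
        for (int j = 1; j <= i; ++j) C[i][j] = C[i - 1][j - 1] + C[i - 1][j];
    }
    auto trees = [](int k) {            // Cayley: k^(k-2), with trees(1) = trees(2) = 1
        unsigned long long r = 1;
        for (int e = 0; e < k - 2; ++e) r *= k;
        return r;
    };
    vector<unsigned long long> F(n + 1, 0);
    F[0] = 1;
    for (int i = 1; i <= n; ++i)
        for (int k = 1; k <= i; ++k)
            F[i] += C[i - 1][k - 1] * trees(k) * F[i - k];
    return F[n];
}
```

The same shape of recurrence, with the forest/tree counts replaced by the component counts and edge sums of problem D, removes the log factor from the dp above.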
•  » » 2 years ago, # ^ | ← Rev. 4 →   +26 1) Precalculate x^y for all x, y ≤ n. Also precalculate factorials and binomial coefficients.
2) Calculate the number of connected graphs with i vertices and a cycle of length j: it's $\binom{i}{j} \frac{(j-1)!}{2} \cdot j \cdot i^{i-j-1}$ for 3 ≤ j ≤ i ≤ n and $i^{i-2}$ for j = 0 (with no cycle).
3) The remaining part is quite straightforward. Using the values from the previous step, calculate C_i, the number of connected graphs with i vertices and no more than one cycle, and S_i, the sum of cycle edges over all such graphs. Then we can calculate the same values for all graphs (not only connected): $G_i = \sum_{k=1}^{i} \binom{i-1}{k-1} C_k G_{i-k}$, $T_i = \sum_{k=1}^{i} \binom{i-1}{k-1} (S_k G_{i-k} + C_k T_{i-k})$.
It was not hard for me to find the formula in part 2 because I had already seen a similar formula. It was in TopCoder, SRM 700 Div1 Medium.
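The part-2 count can be sanity-checked by brute force over all graphs on a few vertices. The reading checked here (mine, not the author's code) is: C(n,j)·(j−1)!/2 · j · n^{n−j−1} connected graphs on n labeled vertices whose unique cycle has length j.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Brute-force count of connected graphs on n labeled vertices whose unique
// cycle has length j, keyed by j. A connected graph with exactly n edges is
// unicyclic, so we only look at edge subsets of size n.
map<int, long long> bruteUnicyclic(int n) {
    vector<pair<int, int>> edges;
    for (int u = 0; u < n; ++u)
        for (int v = u + 1; v < n; ++v) edges.push_back({u, v});
    int m = edges.size();
    map<int, long long> byCycleLen;
    for (int mask = 0; mask < (1 << m); ++mask) {
        if (__builtin_popcount(mask) != n) continue;
        vector<vector<int>> adj(n);
        for (int e = 0; e < m; ++e)
            if (mask >> e & 1) {
                adj[edges[e].first].push_back(edges[e].second);
                adj[edges[e].second].push_back(edges[e].first);
            }
        // connectivity check via BFS
        vector<int> seen(n, 0);
        queue<int> q; q.push(0); seen[0] = 1; int cnt = 1;
        while (!q.empty()) {
            int u = q.front(); q.pop();
            for (int v : adj[u]) if (!seen[v]) { seen[v] = 1; ++cnt; q.push(v); }
        }
        if (cnt != n) continue;
        // peel degree-1 vertices; what remains is exactly the unique cycle
        vector<int> deg(n);
        for (int u = 0; u < n; ++u) deg[u] = adj[u].size();
        queue<int> leaves;
        for (int u = 0; u < n; ++u) if (deg[u] == 1) leaves.push(u);
        int alive = n;
        while (!leaves.empty()) {
            int u = leaves.front(); leaves.pop(); --alive;
            for (int v : adj[u]) if (--deg[v] == 1) leaves.push(v);
        }
        ++byCycleLen[alive];
    }
    return byCycleLen;
}
```

For n = 4 the formula predicts 12 graphs with a 3-cycle and 3 with a 4-cycle; for n = 5 it predicts 150, 60 and 12.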
•  » » » 2 years ago, # ^ |   0 Thanks! The formula from the third part is exactly what we needed.
•  » » » 2 years ago, # ^ |   0 Great! The formula in part 2 is an application of the generalized Cayley formula, right? And it seems like it appeared in some Codeforces round. Sadly I forgot to use that. Instead, I used the derivative of generating functions to get the conclusion. Silly me :P
•  » » » » 2 years ago, # ^ | ← Rev. 2 →   0 Yes, as TimonKnigge mentioned in a comment below, it's just a generalization of Cayley's formula. By the way, I completely forgot about it too; I just remembered a task with a similar formula on TopCoder :)
•  » » 2 years ago, # ^ | ← Rev. 2 →   +26 Here is our solution, in quadratic time. We had to precompute exponentials to remove the logarithmic factor. I'm sure there are many ways to compute the answer though.

Let C_n be the number of partitions of {1, ..., n} into cycles (each of length at least 3). Take C_0 = 1, then $C_n = \sum_{k=3}^{n} \binom{n-1}{k-1} \frac{(k-1)!}{2} C_{n-k}$. Now note every such partition has n edges in cycles.

Let H_n count the number of partitions of {1, ..., n} into cycles with trees attached (so each component will have exactly one cycle), weighted by the number of edges in cycles (per the problem statement). Then $H_n = \sum_{k=3}^{n} \binom{n}{k} C_k \cdot k \cdot k n^{n-k-1}$. Here we use a generalization of Cayley's formula: there are $k n^{n-k-1}$ forests on n vertices consisting of k trees with specified roots. So $k$ is the number of vertices (and hence edges) in cycles, and the remaining n - k vertices have to be attached as subtrees.

We may also have some components that do not contain cycles. Let W_n count the number of ways to build such a forest with n vertices. Then W_0 = 1 and $W_n = \sum_{k=1}^{n} \binom{n-1}{k-1} k^{k-2} W_{n-k}$.

Finally, to compute the answer A_n we have to partition the n vertices into components with and without cycles, so $A_n = \sum_{k=0}^{n} \binom{n}{k} H_k W_{n-k}$.
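A direct transcription of these recurrences, as I read them (exact arithmetic, small n only, my variable names; the k = n term of the H sum, a bare cycle partition covering everything, is special-cased since the forest factor degenerates to 1):

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef unsigned long long ull;

// C[i]: partitions of {1..i} into cycles of length >= 3
// H[i]: cycle-partitions with trees attached, weighted by cycle edges
// W[i]: labeled forests on i vertices
// A[i]: total number of cycle edges over all graphs whose components have
//       at most one cycle each (acyclic graphs contribute weight 0 via H[0]=0)
vector<ull> solveD(int n) {
    vector<vector<ull>> B(n + 1, vector<ull>(n + 1, 0));
    for (int i = 0; i <= n; ++i) {
        B[i][0] = 1;
        for (int j = 1; j <= i; ++j) B[i][j] = B[i - 1][j - 1] + B[i - 1][j];
    }
    vector<ull> fact(n + 1, 1);
    for (int i = 1; i <= n; ++i) fact[i] = fact[i - 1] * i;
    auto ipow = [](ull b, int e) { ull r = 1; while (e-- > 0) r *= b; return r; };
    vector<ull> C(n + 1, 0), H(n + 1, 0), W(n + 1, 0), A(n + 1, 0);
    C[0] = 1;
    for (int i = 3; i <= n; ++i)
        for (int k = 3; k <= i; ++k)
            C[i] += B[i - 1][k - 1] * (fact[k - 1] / 2) * C[i - k];
    for (int i = 3; i <= n; ++i)
        for (int k = 3; k <= i; ++k) {
            // generalized Cayley: k * i^(i-k-1) forests of k rooted trees (1 if k == i)
            ull forestsK = (k == i) ? 1 : (ull)k * ipow(i, i - k - 1);
            H[i] += B[i][k] * C[k] * k * forestsK;  // extra factor k = edge weight
        }
    W[0] = 1;
    for (int i = 1; i <= n; ++i)
        for (int k = 1; k <= i; ++k)
            W[i] += B[i - 1][k - 1] * (k >= 2 ? ipow(k, k - 2) : 1) * W[i - k];
    for (int i = 0; i <= n; ++i)
        for (int k = 0; k <= i; ++k)
            A[i] += B[i][k] * H[k] * W[i - k];
    return A;
}
```

As a hand check, A_4 = 60: four triangles-plus-isolated-vertex and twelve triangles-with-pendant contribute 3 each, and three 4-cycles contribute 4 each.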
 » 2 years ago, # |   +13 What's the easy way to code B?
•  » » 2 years ago, # ^ |   +5 To give up.
•  » » » 2 years ago, # ^ |   +71 (Seriously, fuck that task.)
•  » » 2 years ago, # ^ |   0 I don't think there's an easy solution, but here's what I did:
- If m < n - 1, just read the 2m integers and print No.
- Compute 2-edge-connected components. There should be exactly one component of each size 3, 4, ..., k + 2. Those components should be cycles (number of internal edges = number of vertices). All other components should have size 1.
- The graph should be connected.
- Cycle components should have exactly one edge to other components.
- Denote by C the number of components. Compress components (we get a tree) and mark the vertices that correspond to the cycles. Note that those vertices are now leaves by the previous condition.
- No vertex should have degree ≥ 4.
- If there is no vertex of degree 3, we need k ≤ 2 and C ≥ k + 2.
- Otherwise, for every marked vertex, store its distance at the closest vertex of degree 3. There shouldn't be a vertex of degree 3 with no distances or with two distances equal to one.
If all the above conditions are satisfied, the answer is Yes. Coding wasn't too bad, but figuring out all the cases was.
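For the 2-edge-connected components step, a standard bridge-finding DFS (Tarjan low-link) suffices: the 2-edge-connected components are what remains after removing bridges. A minimal sketch with names of my choosing:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Find all bridges with one DFS; removing them leaves the 2-edge-connected
// components. Edge ids let us skip the tree edge back to the parent while
// still handling parallel edges correctly.
struct BridgeFinder {
    int n, timer = 0;
    vector<vector<pair<int, int>>> g;       // (neighbor, edge id)
    vector<int> tin, low;
    vector<pair<int, int>> bridges;
    BridgeFinder(int n) : n(n), g(n), tin(n, -1), low(n, -1) {}
    void addEdge(int id, int u, int v) {
        g[u].push_back({v, id});
        g[v].push_back({u, id});
    }
    void dfs(int u, int peid) {
        tin[u] = low[u] = timer++;
        for (auto [v, id] : g[u]) {
            if (id == peid) continue;       // don't reuse the edge we came by
            if (tin[v] != -1) low[u] = min(low[u], tin[v]);
            else {
                dfs(v, id);
                low[u] = min(low[u], low[v]);
                if (low[v] > tin[u]) bridges.push_back({u, v});
            }
        }
    }
    void run() { for (int u = 0; u < n; ++u) if (tin[u] == -1) dfs(u, -1); }
};
```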
 » 2 years ago, # |   +16 How to solve I?
•  » » 2 years ago, # ^ |   +18 We need to count the number of equivalence classes of arrays of positive integers of length n with sum m, and then subtract the bad arrays that have some element of size at least m / 2. Let's deal with bad arrays later. To count the number of equivalence classes we use Burnside's lemma: it's equal to $\frac{1}{|G|} \sum_{\pi \in G} I(\pi)$, where I(π) is the number of arrays that don't change after applying π and G is the group of permutations generated by the cyclic shift by one, s, and the reversal, r. Each permutation in G is either s^k, i.e. a cyclic shift by k, or s^k r, i.e. a reversal followed by a cyclic shift by k, so |G| = 2n.

For s^k, the number of cycles is equal to gcd(k, n) and each cycle has length d = n / gcd(k, n). On each cycle we need to have equal elements. So if the elements of the i-th cycle are equal to a_i, we have $d \sum_i a_i = m$. If d doesn't divide m it's impossible; otherwise we need to count the number of sequences a_i with sum m / d. It's a standard binomial coefficient problem. So far we've learned how to do this part in O(n). But everything depends only on gcd(k, n), and the number of k ≤ n such that gcd(k, n) = g equals φ(n / g). So we can try all g dividing n and for each g multiply the corresponding count by φ(n / g). We can precalculate φ(x) for all x ≤ 10^7, and after that this part of the solution works in O(d(n)), where d(n) is the number of divisors of n.

For s^k r, we notice that we have 0, 1 or 2 cycles of length 1, depending on the parities of n and k, and all other cycles have length 2. If all cycles had length 2, we could again use binomial coefficients and everything would be easy. Notice that if we knew that some cycle of length 1 has an even value, we could split it into 2 halves and treat it like a cycle of length 2. If it has an odd value, we could subtract 1 and also split it into 2 halves. So for each cycle of length 1 we try both cases. Here we need to be careful and remember which cycles can have zero values and which cycles cannot. This part works in O(1).

Now we need to subtract the bad arrays. Notice that there can be only one element of size at least m / 2, so we cannot shift anything. But we still can reverse the array. So now we have |G| = 2 and we need to count the number of fixed points for id and r. id is obvious, and r is very similar to the previous case: we have 1 or 2 cycles of length 1 and all other cycles have length 2. This is solved in the same way as the previous case, except we subtract the threshold from the one big element. Here's the code.
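The rotation part of this Burnside sum can be checked directly for small n and m: the closed form $\sum_{g \mid n} \varphi(n/g) \binom{mg/n - 1}{g - 1}$ (summing only over g with (n/g) | m) must equal the brute-force count of compositions fixed by each shift. A sketch with my helper names:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Brute force: over all length-n positive compositions of m, count the pairs
// (composition, shift k) such that the shift fixes the composition.
long long bruteRotationFixed(int n, int m) {
    vector<int> a(n, 1);
    long long total = 0;
    function<void(int, int)> rec = [&](int pos, int left) {
        if (pos == n) {
            if (left != 0) return;
            for (int k = 0; k < n; ++k) {       // count shifts fixing a
                bool ok = true;
                for (int i = 0; i < n && ok; ++i) ok = a[i] == a[(i + k) % n];
                total += ok;
            }
            return;
        }
        for (int v = 1; v <= left; ++v) { a[pos] = v; rec(pos + 1, left - v); }
    };
    rec(0, m);
    return total;
}

// Closed form: group shifts by g = gcd(k, n); phi(n/g) shifts share each g,
// and a fixed composition is determined by g equal-valued cycles of length n/g.
long long closedForm(int n, int m) {
    auto phi = [](int x) {
        int r = x;
        for (int p = 2; p * p <= x; ++p)
            if (x % p == 0) { r -= r / p; while (x % p == 0) x /= p; }
        if (x > 1) r -= r / x;
        return r;
    };
    auto binom = [](long long a, long long b) {
        if (b < 0 || b > a) return 0LL;
        long long r = 1;
        for (long long i = 1; i <= b; ++i) r = r * (a - b + i) / i;
        return r;
    };
    long long total = 0;
    for (int g = 1; g <= n; ++g) {
        if (n % g) continue;
        int d = n / g;                          // cycle length
        if (m % d) continue;                    // each cycle carries an equal value
        total += (long long)phi(d) * binom(m / d - 1, g - 1);
    }
    return total;
}
```

Dividing either quantity by |G| = 2n (after adding the reflection terms) gives the number of equivalence classes.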
 » 2 years ago, # |   +54 What's the point of constraints on modulo in D? Why 1 ≤ p?
•  » » 2 years ago, # ^ |   +74 Probably the author wanted to increase the amount of pain in this world
•  » » 2 years ago, # ^ |   +82 probably for the same reason we can have 2e5 tests with n=2e5, m=1 in B
•  » » 2 years ago, # ^ |   0 The modulo had nothing to do with primes, but it was somehow written as p.
 » 2 years ago, # |   +8 How to solve I?
 » 2 years ago, # | ← Rev. 2 →   0 How to solve F? We tried a greedy after fixing the permutation in which to kill the monsters. After we kill the first monster using attacks 1...i, if we would deal it more damage than its health, we redirect some of the attacks to the other 2 monsters. These amounts cannot be more than 102 each, so we brute-force over them. Also, these 2 damage amounts cannot be the same if they are ≤ 2. Then we try to kill the second monster using attacks i + 1...j, possibly taking some extra damage dealt to the third monster during an intermediate attack, similar to the first case, but here all of it is transferred to the third monster. Then we kill the third as fast as possible. Can someone point out a flaw in this argument?
•  » » 2 years ago, # ^ | ← Rev. 2 →   +8 We may kill the two little monsters first. In this case, the damage given to the boss monster may exceed 102, right? In fact, we know that in this case the first two monsters should be killed in no more than 102 seconds, since their health points don't exceed 100 and we only need to waste at most one second on the last monster. P.S. The editorial is coming soon :)
•  » » » 2 years ago, # ^ | ← Rev. 2 →   0 Consider this example. HPs: 1, 1, 1+2+...+1000. Attacks: 1, 1, 1e9. In this case it makes sense to kill the third monster in the first 1000 moves. If we delay its death and kill the small monsters earlier, we take more damage from the large one (and less from the smaller ones, but that's minor). So, in this example it is not true that the small monsters die early. Am I wrong somewhere?
•  » » » » 2 years ago, # ^ |   -10 I would like to discuss the health points of the third monster first, and if HP_C > 5050, I would discuss the order of their deaths. A mass of case analysis, huh?
•  » » » » » 2 years ago, # ^ |   0 Probably I don't get your point. In the version of the analysis I have (shipped to the Moscow Workshop) there was a claim that the first two monsters always die in the first 100 seconds. If I understood it wrong and some weaker statement is actually claimed, I am happy to wait for the analysis with a proof.
•  » » » » » » 2 years ago, # ^ |   -10 It seems that the analysis was fixed yesterday afternoon, so maybe what you've seen is an old (and wrong) version. Please check out the final version that I posted on this blog.
•  » » » 2 years ago, # ^ |   0 The damage I am talking about (≤ 102) is dealt to the other monsters before the first monster dies completely. This will definitely be less than 102 if monster 1 has low health. If monster 1 has high health, then it does not make sense to deal more than 102 damage to the other monsters while the high-health monster dies, because they can die to weaker attacks. This failed on test 3. Would it be possible to share the test cases?
•  » » » » 2 years ago, # ^ |   0 If the high-health monster has health 1 + 2 + 3 + ... + 102 and deals high damage too (say, 10^9), whereas the other two monsters deal low damage (say, 1), then you should still attack the high-health monster first. From round 100 (or rather, max{H_a, H_b}) you can speed this process up by binary searching for when monster c is dead, after which you can one-hit the one or two monsters that remain.
•  » » » » 2 years ago, # ^ |   0 Try this one:
```
1
6 3 6
5 3 5
```
Answer: 54
•  » » 2 years ago, # ^ |   +8 F: After the first 100 attacks, we just keep hitting one monster until it dies and only then switch to the next monster. (At that point, we kill the small monsters in one hit, so if we hit them before the boss dies, we should do so right before hitting the boss, as we'll take less damage from them and we'll deal more damage to the boss that way.) We can hence try all 6 permutations. For the first 100 attacks, do DP[attacks_done][hp_a][hp_b] = minimum amount of damage we'll need to take to finish from that state. Note that we can compute the remaining health of the boss from these three parameters. To transition, try hitting every non-dead monster. The base case of DP[100] is computed as in the first paragraph; another base case is all monsters being dead. Note that hp_a, hp_b might become negative, but never smaller than -100.
 » 2 years ago, # |   -10 Auto comment: topic has been updated by skywalkert (previous revision, new revision, compare).