### xiaowuc1's blog

By xiaowuc1, history, 4 months ago

Hi all,

The second contest of the 2017-2018 USACO season will be running from Friday, January 19th to Monday, January 22nd.

Please wait until the contest is over for everyone before discussing problems here.

Update: The contest is now live!


 » 4 months ago, # |   0 Auto comment: topic has been updated by xiaowuc1 (previous revision, new revision, compare).
 » 4 months ago, # |   +1 The contest should be over by now. Can anyone else confirm?
•  » » 4 months ago, # ^ | ← Rev. 2 →   0 Confirmed. "[...] as you start during the larger contest window starting the morning of Jan 19 and ending Jan 22 at 23:59 UTC-12 time" (see contest rules). Adding 4 hours, all contestants are done at 4:00 UTC-12. Now it's 5:25 UTC-12. How to solve gold 3?
•  » » » 4 months ago, # ^ |   +4 First, notice that a string is good if and only if it has a substring consisting of K equal symbols. (That substring corresponds to the last stamp.) I found it easier to compute the complement: the number of strings where no symbol appears more than K-1 times in a row. You can do that by finding some linear recurrences and then doing an O(N) dp.
•  » » » » 4 months ago, # ^ | ← Rev. 4 →   0 Can you elaborate? EDIT: I deduced a recurrence (see below) and will test it when USACO analysis mode opens. Thanks for the hint. Still interested in other approaches.
•  » » » » » 4 months ago, # ^ | ← Rev. 2 →   +5 f(n) = (M-1)(f(n-1) + f(n-2) + f(n-3) + ... + f(n-(k-1))). I found this pattern by working out a 2D DP table on paper, and I had also seen this problem prior to this contest for the two-color case.
•  » » » » » » 4 months ago, # ^ |   0 Here's another idea. Call a string of length n good if it has fewer than k equal symbols in a row, and let f(n) count the good strings. Take a good string s of length n-1 and append a symbol x; there are M·f(n-1) such extensions. The new string stays good unless s ends with exactly k-1 copies of x. Such an s is a good string t of length n-k whose last symbol differs from x, followed by k-1 copies of x, and for each t there are M-1 valid choices of x. Thus, for n > k, there are f(n) = M·f(n-1) - (M-1)·f(n-k) good strings, with f(n) = M^n for n < k and f(k) = M^k - M.
 » 4 months ago, # |   +5 How to solve plat 2?
•  » » 4 months ago, # ^ |   0 Compress the edges.
•  » » » 4 months ago, # ^ | ← Rev. 2 →   0 How can we prove this works? It still seems like O(n^2) in the worst case.
•  » » » » 4 months ago, # ^ |   0 Can you construct a test case to make it even close to O(n^2)? I tried, but the worst I could get was far below that.
•  » » » » » 4 months ago, # ^ | ← Rev. 2 →   0 Compress edges + DFS is still O(N^2) in the worst case. Consider the following test case: Spoiler: cout<<70000<
•  » » » » » » 4 months ago, # ^ |   0 good point.
•  » » » 4 months ago, # ^ |   0 Have you tried this in contest and got AC, or are you just saying that you think this would work? I came up with a weird solution out of contest that uses a Merge Sort Tree and BITs.
•  » » » » 4 months ago, # ^ | ← Rev. 3 →   0 I tried compressing the edges during the contest and it solved all the test cases with ease (around 300 ms runtime).
•  » » » » 4 months ago, # ^ |   0 I got AC with it during contest yeah.
•  » » » 4 months ago, # ^ |   +5 What does “compress the edges” mean? Could you point me to some kind of tutorial on this?
•  » » » » 4 months ago, # ^ | ← Rev. 2 →   0 Compressing the edges means turning every chain of degree-2 nodes into a single edge with a larger weight. For example, A--B--C--D with each edge of length 1 gets compressed into A--D with length 3. Regarding a tutorial, sorry, I don't know any.
•  » » 4 months ago, # ^ |   +11 I came up with an O(n log n) solution during the contest. (I'm not really sure about it, though...)

First, the obvious O(n^2) solution: run a simple DFS in the tree with every root from 1 to n. Assume the current root is r, let dep[u] be the depth of node u and mindist[u] be the minimum distance between a leaf in the subtree of u and u. We go down from the root, and whenever we find a node u such that dep[u] ≥ mindist[u], we stop going down and continue the DFS with other nodes. For simplicity, say "u is an optimal node for r".

Now, my idea to optimize the above solution. We find the result for node 1 first. Keep all optimal nodes for the subtree of u in an array save[u]. Suppose we are currently located at node v with children u1, u2, ... Then the candidates for the optimal nodes of v are the nodes in save[u1], save[u2], ... and their direct parents. Obviously, we should choose the nodes greedily in increasing order of their distance to v; some STL container like set handles this. You may think this is slow, but there is a useful observation: a node can be an optimal node for at most 2 other nodes.

That easily gives the answer for node 1. To compute the answers for 2, 3, ..., n: when we run dfs(u, p) (u is the current node, p its direct parent), save the pair (u, p) in a map. If we revisit the state dfs(u, p), we do not go down any further; instead, we reuse the save[] array found earlier. Since an edge (u, v) can be visited in two directions, (u, v) and (v, u), this DFS runs approximately 2n times, I guess.

This requires a very complicated and tricky implementation. Also, I am not sure the solution runs in O(n log n), because it is very slow in practice.
If there are any counterarguments, please comment. My code: https://pastebin.com/J2FmK10Y
•  » » » 4 months ago, # ^ | ← Rev. 2 →   0 Looks like a node could be optimal for more than two other nodes. Consider a star graph with attached paths of lengths 1 and 2. The central node of this graph will be optimal for the middle node of every attached path of length 2. But the statement "the total number of optimal nodes over all vertices is linear" could still be correct.
•  » » » » 4 months ago, # ^ |   0 Yeah, you are right... My observation is wrong. My solution cannot pass the "star graph" test case.
•  » » » 4 months ago, # ^ |   +6 I solved it in O(N log N) starting from the same O(N^2). With a couple of easy algebraic observations, I deduced a more exact characterization of the properties that uniquely define those nodes u, and coded a nasty centroid decomposition. I don't understand the time limit / N limit; most likely the author wasn't able to solve it in O(N log N).
•  » » » » 4 months ago, # ^ |   +10 It sounds very interesting. Can you please explain your solution in more details?
•  » » » » 4 months ago, # ^ | ← Rev. 3 →   +20 I had the same idea (centroid decomposition) but had to stop halfway through the contest, so I wasn't able to finish it. I don't think the approach is 'nasty' and actually quite like it. The basic idea is to think of all paths starting in a node. We should count the number of paths where d[u] >= mindist[u] ONLY for the last node on the path (where d[u] is the distance to our starting node and mindist[u] is the minimum distance to a leaf). With the centroid decomposition, in a single step we consider only the paths going through the centroid. Now a path from u to v (not in the same subtree) should be counted for u iff 1) dep[u]+dep[v] >= mindist[v], and 2) the walk is not stopped earlier: for every node x on the path from u to the centroid, dep[u]-dep[x] < mindist[x], and for every inner node x on the path from the centroid to v, dep[u]+dep[x] < mindist[x].
•  » » » » » 4 months ago, # ^ |   0 Yes, the idea was the exact same one. The approach itself is not "nasty". I was talking more about the code (specifically, my code, which was not the happiest implementation). But you're right, I may have exaggerated. Also, I'm not really sure I understood how you handle the middle condition. What I did was to prove that you don't need to make sure dep[i] > dep[x] + mindist[x] for any ancestor, but rather only for the immediate father, which significantly eases the solution.
•  » » » » » » 4 months ago, # ^ | ← Rev. 2 →   0 I'm not sure which part you don't understand. Maybe my code clarifies things: centroid decomposition [AC in 360ms] merge-small-to-large [AC in 180ms]
•  » » 4 months ago, # ^ |   0 I solved it with O((sum of answers) · log N). I think that is O(N√N · log N), but I couldn't prove it.
•  » » » 4 months ago, # ^ |   +23 Consider a star graph and insert one vertex into each edge, i.e. edges 1-2, 2-3, 1-4, 4-5, 1-6, 6-7, ... The sum of answers will be O(|V|^2).
 » 4 months ago, # |   0 Does anyone know an easy solution for plat 1? I have an idea to solve it by maintaining k queues and calculating n·k DP values with them, but that looks hard for me to code.
•  » » 4 months ago, # ^ |   +5 Use dp, with state dp[interval][skipped], the brute force transition will be another k factor, but it turns out that you only need to consider one transition (from the leftmost interval that intersects the current interval).
•  » » 4 months ago, # ^ | ← Rev. 2 →   +38 I have an easy DP (I was too lazy to do the contest, but it should work).

If there is an interval that fully covers another one, remove the smaller one and decrease K (obviously there is no point in saving a small interval if we can save a larger one).

After this process, sort the remaining intervals by r in increasing order. For every pair i < j these inequalities will hold: li < lj and ri < rj.

We iterate over the intervals in increasing order of their r (equivalently l). Consider the following DP: dp[current interval][number of removed intervals][0/1 flag showing whether we have chosen the current interval] = the answer considering all intervals up to and including the current one.

We can easily compute dp[i][j][0]: it's simply max(dp[i-1][j-1][1], dp[i-1][j-1][0]). For dp[i][j][1] we use a small observation: if we choose (do not remove) interval x, and y is the leftmost interval with ry ≥ lx, we can safely remove all intervals strictly between y and x, as they simply won't change the answer; and the more intervals we remove there, the better. From this we get dp[i][j][1] = max(dp[p][j-(i-p)][0], dp[p][j-(i-p)][1]), where p is the leftmost interval with rp ≥ li, because it's enough to consider only two cases: we either take the previous kept interval or we don't. The complexity will obviously be O(NK).

PS: Maybe some minor details (like +1 or -1 in the DP states) are wrong in this explanation, as I haven't checked it, but I think one can get the general idea from reading this.
•  » » » 4 months ago, # ^ |   +19 Exactly what I did! Except I switched dp[i][j][0] and dp[i][j][1].Passed all test cases during the contest.
•  » » » » 4 months ago, # ^ |   0 I only looked at dp[i-1] instead of dp[p]. Still somehow got AC :D
•  » » » » 4 months ago, # ^ | ← Rev. 3 →   0 How can one efficiently save the previous DP states for p when calculating dp[i][j][1] (assuming NK memory MLEs on USACO servers)?My Code Without Memory Optimization (Also Need Help to Fix WAs on Cases #2 and #3)
•  » » » » » 4 months ago, # ^ |   +3 NK memory was completely fine for me.
•  » » 4 months ago, # ^ |   0 Is it possible to apply wqs on this task to solve K <= 100000 as well? (Something like, let the score for a set S of lifeguards fired be [amount we can cover + X * len(S)], and then binary search on X)?I'm not sure if this would work, since the cost function here is linear rather than quadratic.
•  » » » 4 months ago, # ^ | ← Rev. 2 →   0 I used convex hull optimization. I think that is what zeliboba was thinking of. Still only O(NK)
•  » » » » 4 months ago, # ^ | ← Rev. 3 →   0 How do you use convex hull trick here? The cost function doesn't look anything like a product, right?
•  » » » » » 4 months ago, # ^ |   0 It's increasing, and that's what matters. My code
•  » » » » » » 4 months ago, # ^ | ← Rev. 2 →   0 Wait how does this work? Doesn't convex hull trick help you find min/max among a set of lines? Or is this something else?
•  » » » » » » » 4 months ago, # ^ |   0 Convex hull trick helps you maintain the set of dp states that will be helpful at some point in a way that the best past dp state is easily found. I'm talking about the one in this video: here
•  » » » 4 months ago, # ^ |   0 This isn't convex hull optimization itself, but I think it could use it (the dp has the dp[n] = dp[i] + something·(n-i) shape); if it works, I think it would be something like O(N log N) (maybe with some more logs).
•  » » » 4 months ago, # ^ |   +3 Yes, it can be solved in NlogN. I don't know how to prove it, but it works (using alien's trick)
•  » » » » 4 months ago, # ^ | ← Rev. 3 →   0 I tried to solve it using alien's trick but something is not working and I can't find out what (it works on 6 test cases out of 10). Can someone please help me? Code:

#include <bits/stdc++.h>
using namespace std;
#define pb push_back
#define mp make_pair
#define fi first
#define se second
#define INF (1LL << 55)
#define MOD (1000 * 1000 * 1000 + 7)
#define maxn 200111
typedef long long ll;
typedef long double ld;
typedef pair<ll, ll> pll;

ll n, k;
pll p[maxn];
ll dp[maxn], from[maxn];
pll pre[maxn];
bool take[maxn]; // 0 - we don't take the guard, 1 - we take the guard
set<pll> s;

bool comp(pll a, pll b){
    if(a.fi != b.fi) return a.fi < b.fi;
    return a.se > b.se;
}

ll solve(ll C, bool pr){
    dp[0] = 0; take[0] = 0; from[0] = -1; pre[0] = mp(0, 0);
    s.clear();
    ll ind = 0;
    for(ll i = 1; i <= n; i++){
        while(ind + 1 <= n && p[ind + 1].se <= p[i].fi){
            ind++;
            s.erase(mp(dp[ind] - p[ind].se, ind));
        }
        take[i] = 0;
        dp[i] = dp[i - 1]; // we don't take this guard
        from[i] = i - 1;
        if(pr) cout << i << " " << dp[i] << " " << from[i] << " " << take[i] << endl;
        pll cur;
        if(s.size()){
            cur = *(--s.end());
            if(p[i].se + cur.fi - C > dp[i]){ // we take the guard (intersects some previous one)
                take[i] = 1;
                dp[i] = p[i].se + cur.fi - C;
                from[i] = cur.se;
            }
        }
        if(pr) cout << i << " " << dp[i] << " " << from[i] << " " << take[i] << endl;
        cur = pre[ind];
        if(p[i].se - p[i].fi + cur.fi - C > dp[i]){ // we take the guard (doesn't intersect any previous one)
            take[i] = 1;
            dp[i] = p[i].se - p[i].fi + cur.fi - C;
            from[i] = cur.se;
        }
        if(pr) cout << i << " " << dp[i] << " " << from[i] << " " << take[i] << endl;
        s.insert(mp(dp[i] - p[i].se, i));
        pre[i] = max(mp(dp[i], i), pre[i - 1]);
    }
    ll x = n, num = 0;
    while(x > 0){ // count the number of guards used in the optimal solution
        num += take[x];
        x = from[x];
    }
    return num;
}

int main(){
    freopen("lifeguards.in", "r", stdin);
    freopen("lifeguards.out", "w", stdout);
    scanf("%lld%lld", &n, &k);
    for(ll i = 1; i <= n; i++) scanf("%lld%lld", &p[i].fi, &p[i].se);
    sort(p + 1, p + 1 + n, comp);
    ll mx = 0, cnt = 0;
    p[0] = mp(-1, -1);
    for(ll i = 1; i <= n; i++){
        if(mx >= p[i].se) continue;
        mx = max(mx, p[i].se);
        p[++cnt] = p[i];
    }
    k -= n - cnt;
    n = cnt;
    ll bound = n - k;
    ll l = 0, d = INF, mid, ans = 0;
    while(l <= d){
        mid = (l + d) / 2;
        if(solve(mid, 0) <= bound){
            d = mid - 1;
            ans = mid;
        }
        else l = mid + 1;
    }
    solve(ans, 0);
    printf("%lld\n", dp[n] + bound * ans);
    fclose(stdin);
    fclose(stdout);
    return 0;
}
•  » » » » » 4 months ago, # ^ |   +5 Haven't looked at the DP but how are you sure that the "cost" in the aliens trick will be an integer?
•  » » » » » » 4 months ago, # ^ | ← Rev. 2 →   0 I was thinking about this but if I used doubles then the answer wouldn't be accurate. Also I remember that we could use integer cost in this problem: https://csacademy.com/contest/archive/task/or-problem/statement/
•  » » » » » » » 4 months ago, # ^ |   +5 Weird. For the CS Academy problem, my solution passed during the contest but now by making the costs integers it doesn't. Also I don't understand your point of "the answer wouldn't be accurate".
•  » » » » » » » » 4 months ago, # ^ | ← Rev. 4 →   0 Answer of the problem is integer. Won't answer differ because of rounding (doubles usually have precision problems)?
•  » » » » » » » » » 4 months ago, # ^ |   +5 No, as the answer in the end is something like "DP answer" + cost·K. You know that "DP answer" is "integer - cost·K", so the end result will be really close to an integer. I get the final answer by rounding down ("DP answer" + cost·K + 0.5), since you can be confident that the double precision error is less than 0.5.
•  » » » » » » » » » 4 months ago, # ^ |   0 Aha ok, thank you. I will change that.
•  » » » » » » » » » 4 months ago, # ^ |   0 I changed cost from long long to long double, but now the answer differs even more (I guess because now all dp values are long doubles). Can you please tell me what I need to change so it will work? Code:

#include <bits/stdc++.h>
using namespace std;
#define pb push_back
#define mp make_pair
#define fi first
#define se second
#define INF (1LL << 55)
#define MOD (1000 * 1000 * 1000 + 7)
#define maxn 200111
typedef long long ll;
typedef long double ld;
typedef pair<ll, ll> pll;

ll n, k;
pll p[maxn];
ld dp[maxn], from[maxn];
pair<ld, ll> pre[maxn];
bool take[maxn]; // 0 - we don't take the guard, 1 - we take the guard
set<pair<ld, ll>> s;

bool comp(pll a, pll b){
    if(a.fi != b.fi) return a.fi < b.fi;
    return a.se > b.se;
}

ll solve(ld C, bool pr){
    dp[0] = 0.0; take[0] = 0; from[0] = -1; pre[0] = mp(0.0, 0);
    s.clear();
    ll ind = 0;
    for(ll i = 1; i <= n; i++){
        while(ind + 1 <= n && p[ind + 1].se <= p[i].fi){
            ind++;
            s.erase(mp(dp[ind] - p[ind].se, ind));
        }
        take[i] = 0;
        dp[i] = dp[i - 1]; // we don't take this guard
        from[i] = i - 1;
        if(pr) cout << i << " " << dp[i] << " " << from[i] << " " << take[i] << endl;
        pll cur;
        if(s.size()){
            cur = *(--s.end());
            if(p[i].se + cur.fi - C > dp[i]){ // we take the guard (intersects some previous one)
                take[i] = 1;
                dp[i] = p[i].se + cur.fi - C;
                from[i] = cur.se;
            }
        }
        if(pr) cout << i << " " << dp[i] << " " << from[i] << " " << take[i] << endl;
        cur = pre[ind];
        if(p[i].se - p[i].fi + cur.fi - C > dp[i]){ // we take the guard (doesn't intersect any previous one)
            take[i] = 1;
            dp[i] = p[i].se - p[i].fi + cur.fi - C;
            from[i] = cur.se;
        }
        if(pr) cout << i << " " << dp[i] << " " << from[i] << " " << take[i] << endl;
        s.insert(mp(dp[i] - p[i].se, i));
        pre[i] = max(mp(dp[i], i), pre[i - 1]);
    }
    ll x = n, num = 0;
    while(x > 0){ // count the number of guards used in the optimal solution
        num += take[x];
        x = from[x];
    }
    return num;
}

int main(){
    freopen("lifeguards.in", "r", stdin);
    freopen("lifeguards.out", "w", stdout);
    scanf("%lld%lld", &n, &k);
    for(ll i = 1; i <= n; i++) scanf("%lld%lld", &p[i].fi, &p[i].se);
    sort(p + 1, p + 1 + n, comp);
    ll mx = 0, cnt = 0;
    p[0] = mp(-1, -1);
    for(ll i = 1; i <= n; i++){
        if(mx >= p[i].se) continue;
        mx = max(mx, p[i].se);
        p[++cnt] = p[i];
    }
    k -= n - cnt;
    n = cnt;
    ll bound = n - k;
    ld l = 0, d = 1e14, mid, ans = 0;
    for(int r = 0; r < 100; r++){
        mid = (l + d) / 2;
        if(solve(mid, 0) <= bound){
            d = mid;
            ans = mid;
        }
        else l = mid;
    }
    solve(ans, 0);
    cout << (ll)(dp[n] + bound * ans + 0.5) << endl;
    fclose(stdin);
    fclose(stdout);
    return 0;
}
 » 4 months ago, # |   +5 plat 3?
•  » » 4 months ago, # ^ |   0 I need this one too
•  » » 4 months ago, # ^ |   +5 There are two observations: 1. The watered/fertilized area will always be one big region. 2. The only way for a rectangular region to be both watered and fertilized is for one sprinkler to be on its SW corner and another on its NE corner. I used two arrays to keep track of the running minimum and maximum of the region. I also kept track of the corner/critical points (only the maxima; there are redundant points that take up time to calculate). Instead of cycling through all the points for every point, which takes O(N^2), I only cycle through the critical points and skip the irrelevant maxima. In the end, I had to subtract the region that was counted twice. My AC code
•  » » » 4 months ago, # ^ |   +5 I don't think it has to be one big region — for example you could have sprinklers at (0,4), (2,6), (4,0), (6,2). But it probably doesn't make a difference for the algorithm.
•  » » 4 months ago, # ^ |   0 Here is my O(N) solution. It is similar to xaz3109's solution, but I used prefix sums and some math to speed things up: link
 » 4 months ago, # |   0 Can you help me find a solution for Gold 2?
•  » » 4 months ago, # ^ | ← Rev. 2 →   0 Root the tree at Bessie's start and use DP to compute number of farmers for each subtree. Utilize the distance to nearest leaf and compare with distance from Bessie's start. Does this hint suffice?
•  » » » 4 months ago, # ^ |   0 So, the state for the DP will only include the node? Don't you have to also include distance from the root, which will inevitably run out of time and memory.
•  » » » » 4 months ago, # ^ | ← Rev. 2 →   +3 Actually, I don't think it is DP. It is just a BFS traversal from Bessie's start. Beforehand, you should precalculate the distance to the closest leaf for each node. In the BFS, each time you arrive at a node whose minimal distance to any leaf is at most its distance from Bessie's start, you add one to the answer and don't push its neighbors into the queue (because you put a farmer at that leaf, and Bessie surely can't go further).
•  » » » » » 4 months ago, # ^ |   0 That makes a lot of sense, actually. To precalculate the distance from each node to its closest leaf, I assume it's just a BFS from each leaf, but that would be N BFSs at worst. Any hint for this precalculation?
•  » » » » » » 4 months ago, # ^ | ← Rev. 2 →   +6 Yes. Since all you care about is the minimal distance from a leaf, you can do a multi-source BFS: first push all leaves into the queue with distance 0, then just do a normal BFS.
•  » » » » » » 4 months ago, # ^ |   0 You could also observe that the number of nodes each farmer will move up is floor(depth/2), if you root the tree at Bessie's start. Then use binary lifting to efficiently calculate the kth ancestor of each leaf.
•  » » » » » 4 months ago, # ^ | ← Rev. 2 →   +6 OK. In fact, I solved it using DP. Let dp[i] denote the minimum number of farmers required for the subtree rooted at vertex i. If the distance from the start to i is ≥ the distance to the nearest leaf from i, we have dp[i] = 1. Else, dp[i] = sum(dp[c]), where the sum is taken over all children c of i.
•  » » 4 months ago, # ^ | ← Rev. 2 →   0 I did a greedy solution; maybe I overcomplicated it. Let's make a DS that can handle these queries: 1. Activate a node. 2. Find the minimum distance from a given query node to an active node. It can be done with a Centroid Tree. Now, sort the leaves by their distance from Bessie; call them v0, v1, ..., vk-1. The pseudocode is something like this: for i = 0 to k - 1: if(dist(bessie, v[i]) < ds.query(v[i])) ans++; ds.activate(v[i]);
 » 4 months ago, # |   +8 Exactly the same as Gold 1: http://www.spoj.com/problems/ADABRANC/ By the way, how to solve Gold 3? I tried dp with inclusion-exclusion, but it seemed not enough to find a solution.
•  » » 4 months ago, # ^ |   +3 Start reading here. :)
•  » » 4 months ago, # ^ |   +10 Hi there, I'm the problem-writer behind Gold 1 :) First of all, I hadn't seen this problem before writing my problem. However, because USACO Silver and Gold use very limited subsets of the material tested in programming contests, it's natural that problems will use similar ideas over time. Indeed, after I wrote the problem (but before the USACO contest), a problem on a CF round reused an extremely similar idea: http://codeforces.com/contest/915/problem/F The similarity between these two problems is not quite so uncanny, but anyone who had solved both would surely notice that they rely on the same idea. Moreover, it's worth noting that anyone who has studied the Silver and Gold-level material so thoroughly as to be able to find this type of thing on short notice probably is ready for promotion anyway. Any contestant who had somehow already seen this problem almost certainly did so in the process of solving a far greater number of relevant problems, in which case they'd most likely achieve promotion regardless. My apologies for the similarity -- however, as we've seen here, it is extremely difficult to create ideas that nobody has ever come up with before when we're able to use such a small subset of the contest material. Situations like this should definitely be avoided in Platinum, but for the lower divisions, they're very likely to show up simply because almost all elementary ideas have been discovered already.
•  » » » 4 months ago, # ^ |   +6 Of course, it is really hard to come up with a new problem with an unused idea. I wrote the above comment without any offense :)
•  » » » » 4 months ago, # ^ |   +3 None taken! I just wanted to clarify, especially to make it clear that the problem was not written directly as a copy of the one you linked. :)
•  » » 4 months ago, # ^ |   0 Noam527 solved it with combi (not sure if PIE), maybe you can pm him
•  » » » 4 months ago, # ^ |   0 I didn't, I just thought it was with combinatorics before I solved it.
 » 4 months ago, # |   +1 How do you do Gold 1?
•  » » 4 months ago, # ^ |   +5 Sort the queries by threshold and use DSU to update the tree. Since the number of vertices must be reported, store the size of each set/component; union-by-size will give you AC.
•  » » 4 months ago, # ^ |   0 Code; the idea is exactly the same as neutron-byte said.

#include <bits/stdc++.h>
using namespace std;

const int maxx = 100005;
int par[maxx];

void init(){
    for(int i = 1; i < maxx; i++) par[i] = -1;
}
int Find(int node){
    return par[node] < 0 ? node : (par[node] = Find(par[node]));
}
void Union(int x, int y){
    x = Find(x); y = Find(y);
    if(x == y) return;
    if(par[x] > par[y]) // size(x) = -par[x] < size(y) = -par[y]
        swap(x, y);
    par[x] += par[y];
    par[y] = x;
}

int n, q, ans[maxx];
vector<pair<pair<int,int>, pair<int,int>>> v;

int main() {
    freopen("mootube.in","r",stdin);
    freopen("mootube.out","w",stdout);
    init();
    cin >> n >> q;
    for (int i = 1; i <= n - 1; i++) {
        int a, b, c;
        cin >> a >> b >> c;
        v.push_back({{c, 1}, {a, b}});
    }
    for (int i = 1; i <= q; i++) {
        int k, vv;
        cin >> k >> vv;
        v.push_back({{k, 0}, {vv, i}});
    }
    sort(v.rbegin(), v.rend());
    for (int i = 0; i < n - 1 + q; i++) {
        if (v[i].first.second == 0) ans[v[i].second.second] = -par[Find(v[i].second.first)];
        else Union(v[i].second.first, v[i].second.second);
    }
    for (int i = 1; i <= q; i++) cout << ans[i] - 1 << endl;
    return 0;
}
 » 4 months ago, # |   0 Any prediction of gold-to-platinum promotion cutoff?
•  » » 4 months ago, # ^ |   0 I've looked, and in past years the January cutoff seems to be 700 or below. (Personally) I found the gold contest quite lenient on partial credit, as I was able to get about 70% for #1 and #2 without really understanding the correct solution (probably ~700-720 total). For #3 I did an N^2 dp instead of the correct O(N) and only got the first 9/12, as I imagine many others did as well. So I'm worried that a relatively high amount of partial credit might push the cutoff to 750 or above. Anyone else have any thoughts on the relative difficulty? This is only my second gold contest, so I'm still new.
•  » » » 4 months ago, # ^ | ← Rev. 3 →   0 Makes sense. The cutoff is 650.
•  » » » 4 months ago, # ^ |   0 For G1 I got 9/10 with suboptimal solution :)
 » 4 months ago, # |   0 Does anyone know how to compute continuous intervals for S1?
•  » » 4 months ago, # ^ |   0 Make an array len[], where len[i] is how many units of the i'th interval are covered by no other interval (i.e. how much coverage is lost if the i'th interval is deleted). To calculate len[], sort the intervals by R and update len[i] with a sweep line. The rest is easy.
 » 4 months ago, # |   0 I tried to download the test cases for plat 2 but when I open test case #2, a string of seemingly random characters in a variety of languages appears. Anyone have a solution to this?
•  » » 4 months ago, # ^ |   0 What are you using to open it? Notepad is what I use and it works perfectly fine on that.
•  » » 4 months ago, # ^ | ← Rev. 2 →   0 Try using atom, it will open them up all in one place too.
 » 4 months ago, # |   0 There is a problem with the author's solution to Gold 3. cout << (((long long)ans) + MOD - ((long long)s[N]) + s[N-1])%MOD << '\n'; should be cout << (((long long)ans) + MOD - ((long long)s[N]) - s[N-1])%MOD << '\n';
•  » » 4 months ago, # ^ |   0 I believe the solution is correct as is, though the number of parentheses is admittedly confusing. We are subtracting s[N] - s[N - 1] from the total number of ways.
 » 4 months ago, # | ← Rev. 2 →   +22 I came up with a nice solution for Plat #2 that runs in O(N√N) worst-case time. It takes 350ms on the test cases from the contest.

Let dp[n][k] be the number of farmers needed to stop Bessie when she starts from node n and may only visit nodes in the n-subtree, after the farmers get a head start of k time steps. Similarly, let dp2[n][k] be the number of farmers needed to stop Bessie starting from node n with a k-step head start for the farmers, but where Bessie may only visit nodes outside the n-subtree.

First, set dp[n][k] = dp2[n][k] = 1 if n is at distance at most k from a leaf. You can then proceed to make the following observations about state transitions. The answer for each node is either 1 if it is a leaf, or dp[n][0] + dp2[n][0]. We thus have a solution for the problem in O(N^2) time.

To improve our algorithm to O(N√N), note the following key lemma:

Lemma: For any fixed value of n and varying value of k, both dp[n][k] and dp2[n][k] take at most O(√N) different values.

Proof (by pacu): Giving a head start of k is equivalent to removing all the leaves from the tree k times. The value of dp is limited by how many leaves the tree has. Thus, supposing there are at least x different values of dp, the number of nodes in the tree must be at least 1 + 2 + ... + x = x(x+1)/2, which completes the proof.

We can thus use a variable-sized vector of distinct values and indices for each node instead of an array, updating state transitions with the merge operation from mergesort. This gives us a solution using O(N√N) time and memory.
Sample Implementation:

#include <bits/stdc++.h>
using namespace std;
typedef long long LL;
const double PI = 4 * atan(1);
#define MAXN 70013
int N;
vector<int> adj[MAXN];
vector<pair<int,int>> dp[MAXN];
vector<pair<int,int>> dp2[MAXN];
int idx[MAXN];
int val[MAXN];
int leafd[MAXN];

void bfs() {
    memset(leafd, -1, sizeof leafd);
    queue<int> q;
    for (int i = 0; i < N; i++) {
        if (adj[i].size() == 1) { q.push(i); leafd[i] = 0; }
    }
    while (!q.empty()) {
        int v = q.front(); q.pop();
        for (int u : adj[v]) {
            if (leafd[u] == -1) { leafd[u] = leafd[v] + 1; q.push(u); }
        }
    }
}

// recursion to calculate dp
void dfs(int n, int p=-1) {
    int ch = adj[n].size() - (p != -1);
    if (ch == 0) { dp[n].push_back({0, 1}); return; }
    typedef pair<pair<int,int>, int> piii;
    priority_queue<piii, vector<piii>, greater<piii>> pq;
    for (int v : adj[n]) if (v != p) {
        dfs(v, n);
        pq.emplace(dp[v][0], v);
    }
    int cur = 0;
    int totalval = 0;
    while (!pq.empty()) {
        auto p = pq.top(); pq.pop();
        int t = max(0, p.first.first - 1);
        if (t != cur) dp[n].emplace_back(cur, totalval);
        cur = t;
        totalval += p.first.second - val[p.second];
        val[p.second] = p.first.second;
        ++idx[p.second];
        if (idx[p.second] != dp[p.second].size()) {
            pq.emplace(dp[p.second][idx[p.second]], p.second);
        }
    }
    dp[n].emplace_back(cur, totalval);
    int case2 = leafd[n];
    while (dp[n].size() && dp[n].back().first >= case2) dp[n].pop_back();
    if (dp[n].empty() || dp[n].back().second > 1) dp[n].emplace_back(case2, 1);
}

// recursion to calculate dp2
void dfs2(int n, int p=-1) {
    if (n == 0) dp2[n].emplace_back(0, 0);
    else {
        // O(N sqrt N) dp2 update
        int a = 0, b = 0, c = 0;
        int va, vb, vc;
        int cur = 0;
        while (true) {
            int ta = a < dp2[p].size() ? dp2[p][a].first - 1 : MAXN;
            int tb = b < dp[p].size() ? dp[p][b].first - 1 : MAXN;
            int tc = c < dp[n].size() ? dp[n][c].first - 2 : MAXN;
            if (ta == MAXN && tb == MAXN && tc == MAXN) break;
            if (ta <= tb && ta <= tc) {
                int t = max(ta, 0);
                if (cur != t) dp2[n].emplace_back(cur, va + vb - vc);
                cur = t;
                va = dp2[p][a++].second;
            } else if (tb <= ta && tb <= tc) {
                int t = max(tb, 0);
                if (cur != t) dp2[n].emplace_back(cur, va + vb - vc);
                cur = t;
                vb = dp[p][b++].second;
            } else {
                int t = max(tc, 0);
                if (cur != t) dp2[n].emplace_back(cur, va + vb - vc);
                cur = t;
                vc = dp[n][c++].second;
            }
        }
        dp2[n].emplace_back(cur, va + vb - vc);
        int case2 = max(0, leafd[p] - 1);
        while (dp2[n].size() && dp2[n].back().first >= case2) dp2[n].pop_back();
        if (dp2[n].empty() || dp2[n].back().second > 1) dp2[n].emplace_back(case2, 1);
    }
    for (int v : adj[n]) if (v != p) dfs2(v, n);
}

int main() {
    ios_base::sync_with_stdio(false);
    cin.tie(0);
    freopen("atlarge.in", "r", stdin);
    freopen("atlarge.out", "w", stdout);
    cin >> N;
    for (int i = 1; i < N; i++) {
        int a, b;
        cin >> a >> b;
        --a; --b;
        adj[a].push_back(b);
        adj[b].push_back(a);
    }
    bfs();
    dfs(0);
    dfs2(0);
    for (int i = 0; i < N; i++) {
        if (adj[i].size() == 1) cout << 1 << '\n';
        else cout << dp[i].front().second + dp2[i].front().second << '\n';
    }
    cout.flush();
    return 0;
}