By SecondThread, history, 2 months ago,

# Meta Hacker Cup Round 3

Good morning! Meta Hacker Cup Round 3 starts on Saturday, October 8th in under 48 hours! You're automatically registered for Round 3 if you placed in the top 500 of Round 2.

## Important info:

• You'll qualify for the Hacker Cup Finals if you place in the top 25 in this round.
• If you place in the top 200 of this round, your Hacker Cup shirt will have a badge on the sleeve indicating you did so.
• The round will be 3 hours long.

• Though we attempt to sort the problems in ascending order of difficulty and assign point values appropriately, we particularly encourage reading all the problems in this round.

A separate post about t-shirt distribution will be coming in the next few days. Good luck, have fun, and we'll see you on the scoreboard!


• +99

 » 2 months ago, # |   +41 SecondThread, could you please read your DMs? I sent you one a few days ago regarding a tax issue with last year's Hacker Cup t-shirt, and it is still unread.
•  » » 2 months ago, # ^ |   +40 Hi, sorry, I've been focused on preparing round 3 and dealing with t-shirts this week. I've read and replied to all of them. There's also a message cap on CF, so I had to respond somewhat slowly as well.
•  » » » 2 months ago, # ^ |   +3 Will we get an email regarding the t-shirt, like last year?
•  » » » 2 months ago, # ^ |   +38 No, you haven't replied to my DM, which I sent 6 months ago.
•  » » » » 2 months ago, # ^ |   0 lmao
 » 2 months ago, # |   +133 Will Meta consider holding contests in future years a couple of hours earlier (around the time slot in which Google Code Jam or Codeforces hold their rounds)? The Code Jam timing seems to allow everyone from the US West Coast all the way to East Asia to take part. By contrast, the Meta Hacker Cup timing requires East Asian participants to compete at 1-4am. I know some people say sleep is for the weak, coders know no sleep, etc., but when you are in your thirties, 3am is a pain...
•  » » 2 months ago, # ^ | ← Rev. 2 →   +40 +1. The majority (?) of contestants are in Asian time zones, so I can't understand why the contest time is like this. If you held the contest just 2 hours earlier (which would be 8am for Facebook engineers in California), thousands of contestants' lives would be much better.
 » 2 months ago, # |   +59 Will finals be an onsite contest?
 » 2 months ago, # |   +8 I can't download the input to problem A. I get: "Sorry, something went wrong. We're working on getting this fixed as soon as we can." The validation worked fine though.
•  » » 2 months ago, # ^ |   +3 Same here... And don't repeat my mistake: I clicked the next button hoping to download the plaintext file, and it doesn't work either :(
 » 2 months ago, # |   +43 5 problems, 3 rolling hash.
•  » » 2 months ago, # ^ |   0 Which 3? You can use it for Second Mistake and Zero Crossings, but what else?
•  » » » 2 months ago, # ^ |   0 Can use it for Third Trie too (I did, and it got AC).
•  » » » 2 months ago, # ^ |   0 We don't actually need it, but I also used the hash in Third Trie.
•  » » » » 2 months ago, # ^ |   +5 What was the approach? The runtime wasn't O(100^6 * 10^4) per testcase, was it? We had specific cases that break that naive approach (it TLEs even in C++ with -O3) to make sure it wouldn't pass.
•  » » » » » 2 months ago, # ^ | ← Rev. 2 →   +3 Can count the number of occurrences of each string over all tries and then add $\binom{n}{3} - \binom{n - count}{3}$ for each string (since each string will only occur once per trie).
•  » » » » » » 2 months ago, # ^ |   +18 Oh, I didn't even consider that. Yeah, the intended solution is to combine the tries into a single trie and then just do a DFS on it. Hashing is just ridiculously strong when checking big things for equality.
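The complement count mentioned a couple of comments up can be sketched like this (hypothetical helper names, not anyone's actual contest code): if a string occurs in `count` of the `n` tries, the triples whose union contains it are all triples minus those built only from tries that lack it.

```cpp
// Hypothetical helpers illustrating the counting formula from the comment above.
// comb3(n) = C(n, 3): the number of ways to choose 3 tries out of n.
long long comb3(long long n) {
    return n < 3 ? 0 : n * (n - 1) * (n - 2) / 6;
}

// If a string occurs in `count` of the `n` tries, the number of unordered
// triples of tries whose union contains it is: all triples minus the
// triples chosen entirely from the (n - count) tries that lack the string.
long long triples_containing(long long n, long long count) {
    return comb3(n) - comb3(n - count);
}
```

Summing `triples_containing` over every distinct string (each occurs at most once per trie) would then give the total; this only illustrates the formula, not a full solution.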
•  » » » 2 months ago, # ^ | ← Rev. 2 →   +13 -
 » 2 months ago, # |   +16 Missed E1 by 6 minutes :( needs more templates...
 » 2 months ago, # |   +33 I feel like D1 was much easier than D2, but it was worth more points.
•  » » 2 months ago, # ^ |   0 I had a small bug in D1, but my D2 passed :( annoying that D2 is worth little
•  » » 2 months ago, # ^ |   +21 It is assumed that if you solve D2, you solve D1 as well, so the points for D2 represent the extra difficulty of D2 compared to D1, not the total difficulty.
•  » » » 2 months ago, # ^ |   +3 The assumption is not correct tho
 » 2 months ago, # |   +13 What's the solution for A? I bricked there so hard :/
•  » » 2 months ago, # ^ |   +8 We look at the cards from highest to lowest and determine which ones will win rounds. A card will win if there is no available higher card from the opposite team that plays later. So, you can do a greedy approach with the following code:

```cpp
// person[i] is the index of the person that holds card i;
// available is an array that starts as all 0s.
int rounds_left = r;
for (int i = n - 1; rounds_left > 0 && i >= 0; i--) {
    if (available[person[i] + 1] > 0) {        // next player can cover my card
        available[person[i] + 1]--;
        available[person[i]]++;
    } else if (available[person[i] + 3] > 0) { // only relevant for b2 covering a1
        available[person[i] + 3]--;
        available[person[i]]++;
    } else {                                   // this card will win
        rounds_left--;
        win[person[i]]++;
        available[person[i]]++;
    }
}
```
 » 2 months ago, # |   +41 Sudden death when I noticed I had failed D1, right as I received WA on the validation of D2.
•  » » 2 months ago, # ^ |   +98 I was smart and validated both D1 and D2 before submitting either. SpoilerI still failed both xd
 » 2 months ago, # |   0 Is E2 persistent treap?
•  » » 2 months ago, # ^ |   0 Yep :)
 » 2 months ago, # |   +39 I failed D because I wrote an N for M. :)
•  » » 2 months ago, # ^ |   +49 Nice clutch making it to the finals with E2 though. We were worried when it looked like you weren't going to qualify, despite solving E1, because of WAs on D.
•  » » » 2 months ago, # ^ |   +50 what kind of favoritism is this
•  » » » » 2 months ago, # ^ |   +81 I like the people who solve our problems
 » 2 months ago, # |   +3 When can we expect the editorials to be up? Fun problems, btw.
 » 2 months ago, # |   +13 I failed E1 because of an invalid sample test. It was actually updated on the site, but I noticed it too late (there was no announcement). As a result, I spent the last 10 minutes of the contest debugging a solution that was in fact correct. SecondThread Can you make an announcement when sample tests are updated? Some people download them once, not copy & paste them from the page each time.
•  » » 2 months ago, # ^ |   0 You're right. Sorry. We will next time.
 » 2 months ago, # |   +23 This is the first time I've done Meta Hacker Cup, and it turned out that I was pretty lucky. Here is what I experienced during the contest. I never thought before that the local stack size might be an issue for CP, but it happened after I downloaded the full input of problem B (since I used a DFS function). Fortunately, I managed to change my DFS into a BFS and submit the output & solution in the last two seconds. (Or maybe even the last one second?) Besides, when I started coding D2, I realized that my code for D1 would output a wrong answer when $k = 1$. Fortunately, I checked the full input data of D1 and found that there was no test case with $k = 1$, XD.
 » 2 months ago, # |   0 It seems that everybody got errors when submitting A. The organizers should have announced the issue instead of letting us waste time (before eventually using clarifications). (It didn't affect my score; I'm just saying that an announcement would improve the contest.)
•  » » 2 months ago, # ^ | ← Rev. 2 →   +16 Most people had no issues with A. We received fewer than 25 clars for A in total, which includes statement-related clars. It seems the slowness came in waves (either of high load or of random maintenance on servers); we're not sure why yet.
•  » » » 2 months ago, # ^ |   +3 OK, then my suggestion is invalid. I asked two friends and they also had issues with submitting. Too small of a sample, I guess.
•  » » 2 months ago, # ^ |   0 By the way, did anyone get constant-factor TLEs on C? My solution isn't optimized that well, but it took 2-3 minutes to run on my (reasonably strong, i5-8400) PC, which makes me suspect it might have TLE'd on a weaker PC.
•  » » » 2 months ago, # ^ |   0 I did! Took me ~8 minutes to run on my laptop (submitted as practice after round ended and it was AC)
 » 2 months ago, # |   0 I passed C with naive iteration for each dictionary word over trie built over all query strings. Was this intended? Can anything be proven due to dictionary words being distinct?
•  » » 2 months ago, # ^ | ← Rev. 2 →   0 Intended was: build one trie that's the union of all tries, then iterate over that mega-trie and do combinatorics on it. It turns out that it's actually very hard to get the answer for every individual query. We were able to disguise that by hiding the fact that you return the sum of query answers in other problems too, thinking perhaps people wouldn't think too much about that approach at first. (The fact that it was solved early gave it away a bit, which led to it being easier to spot, I think.) The judges' solutions wouldn't be able to solve this problem if you needed to print, for instance, the xor of the answers.
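A minimal sketch of the mega-trie idea described above (the structure and names are mine, and it assumes the answer sums union sizes over all unordered triples of tries, per the counting formula earlier in the thread):

```cpp
#include <array>
#include <cstddef>
#include <string>
#include <vector>

// Sketch: merge all tries into one "mega-trie"; every node stores how many
// of the n input tries contain it, then each node is counted once for every
// triple of tries whose union contains it.
struct MegaTrie {
    std::vector<std::array<int, 26>> next;
    std::vector<int> cnt;   // cnt[v]: number of tries containing node v
    std::vector<int> last;  // last[v]: last trie id that visited v
    MegaTrie() { addNode(); }
    int addNode() {
        std::array<int, 26> row;
        row.fill(-1);
        next.push_back(row);
        cnt.push_back(0);
        last.push_back(-1);
        return (int)cnt.size() - 1;
    }
    void insert(const std::string& s, int trieId) {
        int v = 0;
        for (char c : s) {
            int u = next[v][c - 'a'];
            if (u < 0) { u = addNode(); next[v][c - 'a'] = u; }
            v = u;
            if (last[v] != trieId) { last[v] = trieId; cnt[v]++; }
        }
    }
    // Sum, over all non-root nodes, of the number of triples containing them.
    long long answer(long long n) const {
        auto c3 = [](long long x) { return x < 3 ? 0LL : x * (x - 1) * (x - 2) / 6; };
        long long res = 0;
        for (std::size_t v = 1; v < cnt.size(); v++) res += c3(n) - c3(n - cnt[v]);
        return res;
    }
};
```

For example, with tries {"a"}, {"a"}, {"b"}, the single triple's union trie has two non-root nodes, so `answer(3)` is 2.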
•  » » » 2 months ago, # ^ |   +8 I was talking about "Second Mistake". I indeed solved "Third Trie" with the mega-trie.
•  » » » » 2 months ago, # ^ |   0 That ought to TLE on adversarial data. Just because the words are distinct doesn't mean they won't share a long prefix, forcing you to go down the same long path in the trie multiple times.
 » 2 months ago, # |   +8 Interestingly enough, my solution for D1 does not generalize (at least by me) to the solution for D2 (in D1 I maintain the remaining number of connections that need to be made, while in D2 I maintain the GCD of the bars between different-colored stakes)
 » 2 months ago, # |   +65 Wrote tons of hashes in the previous rounds, luckily don't have to write any more, got rank 26 in round 3 finally. (ＴДＴ)
 » 2 months ago, # | ← Rev. 3 →   +22 Whoops: after some discussion, I realized I passed both D1 and D2 with a completely incorrect solution. In all my tests, it was enough to assume that all cells of each color will be consecutive on a good board, so I only cared about the leftmost cell of each color and about all cells being consecutive. D1 fails on something like: 6 4 2 4 3 5 1 6 1 2 1 (123456 -> 123356 -> 123316 -> 123311 -> 113311). Ah, eto... bleh
•  » » 2 months ago, # ^ |   0 The tests are weak in terms of catching slow solutions too. Merging "first to second" instead of "smaller to larger" takes less than 1s. That's $O(N^2)$.
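For context, the "smaller to larger" merging that comment refers to can be sketched generically like this (illustration only, not the problem's actual data structures): because the set an element lives in at least doubles every time the element is moved, each element moves $O(\log N)$ times overall.

```cpp
#include <set>
#include <utility>

// Merge set b into set a with the small-to-large trick: always iterate over
// the smaller set, so any single element is moved O(log N) times in total.
// Merging "first into second" unconditionally degrades this to O(N^2).
void mergeSmallToLarge(std::set<int>& a, std::set<int>& b) {
    if (a.size() < b.size()) std::swap(a, b);  // ensure a is the larger set
    a.insert(b.begin(), b.end());              // move the smaller set's elements
    b.clear();
}
```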
 » 2 months ago, # |   +12 I solved problem C faster than all the other contestants. Why does the first blood list on the Hacker Cup homepage say Yuhao Du (apiad), and not me? :( My rank is 264th. Please check it, SecondThread.
•  » » 2 months ago, # ^ |   +3 Sorry about that, and thanks for the correction! We've updated the summary.
 » 2 months ago, # |   0 Will solutions to the problems be presented as they have been for previous rounds? In C, my solution (which sounds broadly like what people mentioned in the blog — a hashing meet-in-the-middle sort of thing) ran ~8 minutes on the large input. So I am interested in whether I just have a bad constant or there is something more that I missed :)
•  » » 2 months ago, # ^ |   0 Did you compile your program with at least -O2?
•  » » » 2 months ago, # ^ |   0 Yes. I did use meh hardware and didn't take much care with the constant (I didn't realize I'd run into problems with it, which is my fault anyway), but it would still be a bit disappointing if I would have passed simply by working from my desktop instead of my laptop.
 » 2 months ago, # |   +16 I failed miserably, but it was fun anyway. Waiting for the editorial, as I still need to learn how to correct my D1 "solution" :)
 » 2 months ago, # | ← Rev. 3 →   0 I still can't download A's full input (status code 500). If it's just me, could anyone help me out and upload it somewhere? Every other problem is fine. Also, no $k = 1$ tests in D1 -- kinda evil.
 » 2 months ago, # |   +29 Since there is apparently no editorial, I can write up my solutions (briefly) so that nobody needs to wait.

**A: Fourth Player**

There already is a solution in the comments, but mine differs. I claim that the strategy for the Second player is to assign a "counter-card" to every card the First player can play. Therefore, in each round, the set of cards that reaches the Third player is determined by the card played by the First. Moreover, I claim the following:

For each player, from the Second to the Fourth, when it is their turn to play, their team is either winning or losing at that moment. Let's assume that we already know the sets $G$ and $B$, defined as follows: in the cases where the player's team is winning when it is their turn, let $x$ be the highest card so far; then $G$ is the set of these $x$'s. Similarly, $B$ is the set of the highest cards so far in the cases where the player's team is losing. In particular, $|G| + |B| = n/4$. My claim is that the player can assign their cards as responses to all these situations in advance and then play accordingly. The meaningful part of this claim is that we can ignore the order in which the First player plays their cards: if we decided to counter the $2$ with the $9$, we do so no matter what, and we do not waste the $9$ in any other case.

More specifically, the player first wants to counter as many numbers from $B$ as they can: counter the largest counterable card from $B$ with our best card, then the next largest with our next best card, and so on, until there are no cards from $B$ left to counter. The remaining cards are then used to strengthen the weakest cards in the situations where we win.

All this is hand-wavy and without proofs, because I have none, but it got AC 2 minutes after the contest.
**B: Third Trie**

The solution was, again, mentioned here: merge all tries into a mega-trie where each node stores the number of tries it was merged from, then sum something over all nodes.

My solution was, for every trie, to traverse it and simultaneously traverse all other tries in the same way. I don't want to elaborate; I think the previous paragraph explains it better than any way I could describe my solution.

**C: Second Mistake**

For every word $V_i$, create all words at Hamming distance $1$ from it, that is, all words that differ from it by one mistake. There are $3NL$ such words in total. Hash all of them (with a rolling hash, or just assign a random 64-bit number to every pair (position, letter) and let the hash of a word $s$ be the sum of all numbers corresponding to $(i, s[i])$); this can be done in $O(1)$ per added word if we do it carefully. For each hash, store the number of its occurrences.

For every query word $W_j$, generate all the distance-$1$ words as well. If we add up the numbers of their occurrences, we almost obtain twice the answer. Indeed, let $d(s, t)$ be the distance between words $s$ and $t$, that is, the number of positions where they differ. If $d(V_i, W_j) > 2$, then our sum contains nothing from $V_i$, as it should. If $d(V_i, W_j) = 2$, then the sum contains two occurrences; for example, the neighbors of meta and tata intersect in teta and mata. If, however, $d(V_i, W_j) = 1$, then they also have two common neighbors, and if $V_i = W_j$, then they have $3L$ common neighbors. These cases can be handled carefully by additionally hashing the words $V_i$ themselves into a separate set.

About the set: if you use a tree-based map instead of a hashmap, there will be an extra log factor, and you may get TLE. If you look up the query neighbors using operator [] instead of .find(), you may add new elements very often, increasing the map size and therefore the execution time, which may also lead to TLE.
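The per-position random-value hash described for C could look roughly like this (a sketch with made-up names, assuming the four-letter alphabet from the problem's examples): the hash is the sum of independent random values, so a word at Hamming distance 1 is hashed in $O(1)$ by subtracting the old letter's value and adding the new one.

```cpp
#include <cstdint>
#include <random>
#include <string>
#include <unordered_map>
#include <vector>

// Additive hashing sketch: hash(s) = sum over i of R[i][s[i]], where R holds
// independent random 64-bit values per (position, letter). Overflow wraps mod
// 2^64, which is fine for this kind of hashing.
struct PositionHasher {
    std::vector<std::unordered_map<char, uint64_t>> R;
    PositionHasher(int maxLen, const std::string& alphabet) {
        std::mt19937_64 rng(12345);  // fixed seed for reproducibility in this sketch
        R.resize(maxLen);
        for (auto& m : R)
            for (char c : alphabet) m[c] = rng();
    }
    uint64_t hash(const std::string& s) const {
        uint64_t h = 0;
        for (size_t i = 0; i < s.size(); i++) h += R[i].at(s[i]);
        return h;
    }
    // Hash of s with position i replaced by letter c, computed in O(1).
    uint64_t neighbor(uint64_t h, const std::string& s, size_t i, char c) const {
        return h - R[i].at(s[i]) + R[i].at(c);
    }
};
```

Generating all $3L$ neighbors of a word then costs $O(L)$ total instead of $O(L^2)$ rehashing.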
**D1: First Something**

Here is my solution, seemingly useless for D2.

Let stakes $1$ to $K$ be the first group, $K+1$ to $2K$ the second group, and so on. In particular, there are $M = \lceil N / K \rceil$ groups. Initially, each color is present, and all of its occurrences (one) correspond to some particular group that we know. Call an operation successful if both colors it involves are present. There is a particular number of successful operations that need to be made: if each of them were to involve only colors of the same group, this number would be $N - M$. But some successful operations involve two colors from different groups. Sometimes we will have to merge two groups into one, but the meaning of the word "group" is always "a set of stakes that need to end up the same color".

We store, for each color, the group it represents, or $-1$ if there are no stakes of this color. Consider each operation, say replace color $u$ with color $v$. If $u$ is not present, ignore it. If $v$ is not present, swap the groups for colors $u$ and $v$. Otherwise the operation is successful. If $u$ and $v$ correspond to the same group, we decrease the number of successful operations remaining. Otherwise, since some two stakes from different groups will now have the same color for eternity, and in the end each of these groups needs to be same-colored, the groups must be merged into one. We use DSU to do this, then change the groups of all colors of the smaller group. When the number of remaining successful operations becomes zero, we win.

**D2: First Something**

If we look at a fence colored somehow, can we find all $k$ for which it is properly colored?

A fence is properly colored for $k$ iff, for all pairs $(i, i + 1)$ such that stakes $i$ and $i + 1$ are of different colors, $i$ is divisible by $k$. In other words, the GCD of all such $i$ must be divisible by $k$.

Assign a tag to each color; initially assign $i$ to $i$.
We will maintain two things: the tags of all stakes in order, and the set of all $i$ such that stakes $i$ and $i + 1$ have different colors, ergo different tags.

We use tags so that we don't have to recolor everything to maintain the first thing. If we need to recolor $u \to v$, there are several options: if $u$ is not present, do nothing; if $v$ is not present, set $tag(v) = tag(u)$, then $tag(u) = -1$, and do nothing else (since neither thing we care about changed); if both $u$ and $v$ are present, relabel all colors of the smaller of these tags with the other tag. While doing so, exclude every $i$ such that $i$ and $i + 1$ now have the same color and therefore the same tag.

Two things remain: 1) calculate the GCD of all $i$ after each operation, and 2) find the answer.

1) We can either do it online with a segment tree where each node stores the GCD of its children, so that removing number $i$ is "replace some segment tree leaf with 0". Or we can do it in reverse: calculate the final GCD, then go over the operations from the latest to the first, each time doing

```cpp
for (int x : excluded_numbers[i]) { g = gcd(g, x); }
```

2) When we know the GCD after each operation, for each $k$ we can find the first moment when this GCD becomes divisible by $k$ using binary search. There are other ways, but I think this is the easiest.

**E: Zero Crossings**

Step one: parse the picture to build the inclusion tree: the vertices are the polygons plus the whole plane (which can be considered a very large polygon), and we have an edge $u \to v$ if polygon $v$ is inside $u$ and there is no polygon $w$ between them. This can be done by sorting all points and then maintaining some order with a treap.

Step two: given two points, consider the smallest polygons $u$ and $v$ that contain them. We need to verify that $subtree(u)$ is isomorphic to $subtree(v)$, $subtree(par(u))$ is isomorphic to $subtree(par(v))$, and so on (exercise for the reader).

First, let's distinguish the subtrees. Let's define a hash inductively.
For a vertex $v$, let $a_1, \ldots, a_d$ be the sorted list of hashes of its children. Then the hash of $v$ is either the first natural number still unused for hashing vertices, or whatever we hashed the list $(a_1, \ldots, a_d)$ with before, if this is not the first time we see it.

Now that we have the hashes of all vertices, let's hash all paths from the root. The path that contains only the root is hashed with $0$. For every other vertex $v$, if $h$ is the path hash of $par(v)$, the path from the root to $v$ is hashed either with the first natural number still unused for hashing paths, or with whatever we hashed the pair $(h, hash(v))$ with before, if this is not the first time.
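The inductive subtree hashing above can be sketched as follows (my own naming, a minimal illustration rather than the full E solution): the canonical id of a vertex is determined by the sorted list of its children's ids, and a fresh id is issued the first time a particular list is seen, so two vertices get equal ids iff their subtrees are isomorphic.

```cpp
#include <algorithm>
#include <map>
#include <vector>

// Canonical subtree ids for rooted trees: equal id <=> isomorphic subtree.
struct TreeHasher {
    std::map<std::vector<int>, int> seen;    // sorted child-id list -> id
    std::vector<std::vector<int>> children;  // children[v] in a rooted forest
    TreeHasher(int n) : children(n) {}
    int canonId(int v) {
        std::vector<int> childIds;
        for (int u : children[v]) childIds.push_back(canonId(u));
        std::sort(childIds.begin(), childIds.end());
        auto it = seen.find(childIds);
        if (it != seen.end()) return it->second;  // this shape was seen before
        int id = (int)seen.size();                // first unused natural number
        seen[childIds] = id;
        return id;
    }
};
```

For example, a root with children (leaf, path-of-2) gets the same id regardless of the order of its children, while a leaf and a path-of-2 get different ids.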
 » 2 months ago, # |   +25 No post regarding the t-shirt; it has been a week now, SecondThread.
•  » » 2 months ago, # ^ |   0 Please see this post.