Meta Hacker Cup Round 3
Good morning! Meta Hacker Cup Round 3 starts on Saturday, October 8th in under 48 hours! You're automatically registered for Round 3 if you placed in the top 500 of Round 2.
Important info:
 You'll qualify for the Hacker Cup Finals if you place in the top 25 in this round.
 If you place in the top 200 of this round, your Hacker Cup shirt will have a badge on the sleeve indicating you did so.
 The round will be 3 hours long.
Less important advice:
 Though we attempt to sort the problems in ascending order of difficulty and assign point values appropriately, we particularly encourage reading all the problems in this round.
A separate post about t-shirt distribution will be coming in the next few days. Good luck, have fun, and we'll see you on the scoreboard!
Days until contest: 4, 3, 2, 1, 0
SecondThread, could you please read your DMs? I sent you one a few days ago regarding a tax issue with last year's Hacker Cup t-shirt, and it is still unread.
Hi, sorry, I've been focused on preparing Round 3 and dealing with t-shirts this week. I've read and replied to all of them. There's also a message cap on CF, so I had to respond somewhat slowly as well.
will we get mail regarding the t-shirt like last year?
No, you haven't replied to my DM, which was sent 6 months ago.
lmao
Will Meta consider holding contests in future years a couple of hours earlier (around the time slot in which Google Code Jam or Codeforces hold their rounds)? The Code Jam timing seems to allow everyone from the US West Coast all the way to East Asia to take part. By contrast, the Meta Hacker Cup timing requires East Asia participants to take part at 1-4am.
I know some people say sleep is for the weak or coders know no sleep etc, but when you are in your thirties, 3am is a pain...
+1. The majority (?) of contestants are in Asian timezones, so I can't understand why the contest time is like this. If you held the contest just 2 hours earlier (which would be 8am for Facebook engineers in California), thousands of contestants' lives would be much better.
Will finals be an onsite contest?
I can't download the input to problem A.
I get:
Sorry, something went wrong. We're working on getting this fixed as soon as we can.
The validation worked fine though.
Same here...
And don't repeat my mistake — I clicked the next button hoping to download the plaintext file, it doesn't work either :(
5 problems, 3 rolling hash.
Which 3? You can use it for Second Mistake and Zero Crossings, but what else?
Can use it for third trie too (I did and AC)
We don't actually need it, but I also used the hash in Third Trie.
What was the approach? The runtime wasn't O(100^6 * 10^4) per testcase, was it? We had specific cases that make that naive approach TLE even with C++ -O3, to make sure it couldn't pass.
Can count the number of occurrences for each string over all tries and then add $$$\binom{n}{3} - \binom{n-\text{count}}{3}$$$ for each string (since each string will only occur once per trie)
Oh I didn't even consider that. Yeah the intended is to combine the tries into a single trie and then just do a DFS on that. Hashing is just ridiculously strong when checking big things for equality.
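That per-string counting can be sketched as follows. This is a minimal sketch; the input format (each trie given as its list of words) and the helper name are assumptions for illustration, not the actual contest I/O:

```python
from collections import defaultdict
from math import comb

def third_trie_sum(word_lists):
    # word_lists[i] holds the words inserted into trie i (hypothetical format).
    # For every trie node (identified by its prefix), count how many tries
    # contain it; a node present in c of the n tries appears in
    # C(n, 3) - C(n - c, 3) of the triples of tries.
    n = len(word_lists)
    count = defaultdict(int)
    for words in word_lists:
        prefixes = {""}                      # every trie contains the root
        for w in words:
            for j in range(1, len(w) + 1):
                prefixes.add(w[:j])
        for p in prefixes:
            count[p] += 1
    return sum(comb(n, 3) - comb(n - c, 3) for c in count.values())
```

For example, with tries built from `["ab"]`, `["ab", "b"]`, `["a"]`, and `["c"]`, summing merged-trie sizes over all four triples by hand gives 18, which the formula reproduces.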

Missed E1 by 6 minutes :( needs more templates...
I feel like d1 was much easier than d2, but it was worth more points.
I had a small bug in D1, but my D2 passed :( annoying that D2 is worth little
It is assumed that if you solve D2, you solve D1 as well, so the points for D2 represent the extra difficulty of D2 compared to D1, not the total difficulty.
The assumption is not correct tho
What's the solution for A? I bricked there so hard :/
We look at the cards from highest to lowest and determine which ones will win rounds. A card will win if there is no available higher card from the opposite team that plays later. So, you can use a greedy approach.
Sudden death when I noticed I had failed D1 upon receiving a WA at the validation of D2.
I was smart and validated both D1 and D2 before submitting either.
I still failed both xd
Is E2 persistent treap?
Yep :)
I failed D because I wrote an N for M. :)
Nice clutch making it to the finals with E2, though. We were worried when it looked like you weren't going to qualify despite solving E1, because of the WAs on D.
what kind of favoritism is this
I like the people who solve our problems
When can we expect editorials to be up by? fun problems btw
I failed E1 because of an invalid sample test. It was actually updated on the site, but I noticed it too late (there was no announcement). As a result, I spent the last 10 minutes of the contest debugging a solution which was in fact correct. SecondThread, can you make an announcement when sample tests are updated? Some people download them once, not copy&paste them from the page each time.
You're right. Sorry. We will next time.
This is the first time I've done Meta Hacker Cup, and it turned out that I was pretty lucky. Here is what I experienced during the contest.
I never thought before that the local stack size might be an issue for CP, but it became one after I downloaded the full input of problem B (since I used a DFS function). Fortunately, I managed to change my DFS into a BFS and submit the output & solution in the last two seconds. (Or maybe even the last one second?)
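For anyone who hits the same stack-size wall: the standard workaround is an explicit stack instead of recursion. A generic sketch (connected-component counting; not the actual problem B solution):

```python
def component_sizes(adj):
    # Iterative DFS with an explicit stack, immune to recursion-depth limits
    # that a recursive DFS would hit on deep graphs (e.g. a long path).
    n = len(adj)
    seen = [False] * n
    sizes = []
    for s in range(n):
        if seen[s]:
            continue
        stack, size = [s], 0
        seen[s] = True
        while stack:
            v = stack.pop()
            size += 1
            for w in adj[v]:
                if not seen[w]:
                    seen[w] = True
                    stack.append(w)
        sizes.append(size)
    return sizes
```

On a path graph of 200,000 vertices this runs fine, while a recursive DFS would overflow a default-sized stack in many environments.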
Besides, when I started coding D2, I realized that my code for D1 would output wrong answer when $$$k = 1$$$. Fortunately, I checked the full input data of D1 and found that there was no test case with $$$k = 1$$$, XD.
It seems that everybody got errors when submitting A. The organizers should have announced the issue instead of letting us waste time (before eventually using clarifications).
(It didn't affect my score. I'm just saying that an announcement would improve the contest.)
Most people had no issues with A. We received fewer than 25 clars for A in total, which includes statement-related clars. It seems the slowness came in waves (either of high stress or of random maintenance on servers); we're not sure why yet.
ok, then my suggestion is invalid
I asked two friends and they also had issues with submitting. Too small of a sample, I guess.
Some feedback on the contest:
A: Great problem, interesting to think about and not too difficult to implement. On a personal level, it's a bit disappointing that validation didn't catch all the bugs (in my solution, when player three cannot cover any remaining card played by player two, they cover the highest possible card played by player one rather than the lowest; this passes validation tests), but obviously it's unreasonable to expect these tests to catch every little error, and my mistake was both sufficiently subtle and sufficiently silly/one-off (i.e., I doubt many other contestants wrote the same error) that I can't be too frustrated by the lack of pretests covering the case (i.e., I don't think this is evidence that the authors failed to prepare good validation tests).
B: Good problem; reasonably interesting solution and straightforward implementation. I think this was probably easier than A, but it's close.
C: Decent problem; ad-hocs involving hashing are always a bit more implementation-heavy than I'd like, but this one wasn't too bad and the idea was a nice special case of meet-in-the-middle.
D: I enjoyed this problem overall. D1 felt a bit standard as a use case of small-to-large merging, but it wasn't too painful to write. When I read D1, I suspected that D2 wouldn't be too much harder; I think I was right, but even so, the extra step to finish the problem was fairly interesting.
E: I wasn't a fan of this problem. It felt like three disjoint problems, each of which I've seen before (ICPC NAC 2020 C, some problem on Codeforces I can't remember involving hashing trees to check isomorphism of subtrees, and this problem from this year's NA ICPC regionals), concatenated onto each other. As such, coming up with the solution was very easy (as little/no thought was required to reduce the problem to these tasks, and I remembered the solution to each subproblem immediately), but the task still took three problems' worth of implementation/debugging time. I'm also not a huge fan of geometry in the Hacker Cup format because geometry problems tend to be rife with edge cases and MHC penalizes subtle implementation bugs far more heavily than ICPC or (to some degree) Codeforces, but obviously the bug I wrote in this particular case is my own skill issue more than anything else.
Overall, I enjoyed the contest; my one overall point of criticism is that the round felt a bit like speedforces (i.e., exactly four problems must be solved to make finals, there is a clear set of four easiest problems, and finals are decided largely by penalty time on these four problems, with the finalists solving them very quickly). One of the things I think GCJ usually does very well is making sure that there are always multiple paths to the finals. Last year's scoreboard is a great example of this: after picking off the "free points" (A and B small), there were a number of options for how to move forward (fast B large/C small, B/C full, or D full were all strategies that made the finals). This meant that mid-contest, observing the scoreboard and making guesses about FST rates could enable competitors to make better choices about which problem to attempt next. (For example, in the aforementioned GCJ round, I solved A slowly and then decided that because many others had accumulated more points already, my best chance to make finals was to fully solve D; this would have worked if I had a bit more time to finish the small subtask of B afterwards.)
As another example, last year's R3 made B, C, and D all sufficiently challenging that solving two out of three was enough for the finals, meaning that there was some strategy involved in problem selection (and in particular, guessing early on that two of these problems were necessary for the finals, while one could be safely skipped, could have led to better strategic decisions throughout the contest).
In this year's R3, the set of "must-solve" problems for finals was ABCD, without much opportunity for variation. I suppose that someone at ~52 points with around an hour left might be incentivized to attempt E instead of their last problem from A-D, but E was hard enough that this strategy was unlikely to work. If I were writing this contest and could choose problem difficulties arbitrarily, I'd replace two of ABCD with problems harder than D and easier than E (the hope being that contestants will need at least one of these problems, but not both, in order to make finals). More generally, I think future rounds should have fewer than four easy-medium problems and more medium-hard problems, so that finals are decided by more challenging problems and so that there are multiple sets of problems that can be solved in order to make finals.
Thanks for the contest! Hoping to write fewer oneline bugs next year :)
By the way, did anyone get constant-factor TLEs on C? My solution isn't optimized that well, but it took 2-3m to run on my (reasonably strong, i5-8400) PC, which makes me suspect that it may have TLE'd if run on a weaker PC.
I did! Took me ~8 minutes to run on my laptop (submitted as practice after round ended and it was AC)
I passed C with a naive iteration for each dictionary word over a trie built over all query strings. Was this intended? Can anything be proven due to the dictionary words being distinct?
Intended was: build one trie that's the union of all tries. Then iterate over that megatrie and do combinatorics on it.
It turns out that it's actually very hard to get the solution for every query. We were able to disguise it well by hiding the fact that you return the sum of query answers in other problems too, thinking perhaps people wouldn't think too much about that approach at first. (The fact that it was solved early gave that away a bit, which led to it being easier to spot, I think.)
Judges' solutions wouldn't be able to solve this problem if you needed to print the xor of answers for instance.
I was talking about "Second Mistake". I indeed solved "Third Trie" with the megatrie.
That ought to TLE on perfect data. Just because the words are distinct doesn't mean they won't share a large prefix, forcing you to go down the same long path on the trie multiple times.
Interestingly enough, my solution for D1 does not generalize (at least by me) to the solution for D2 (in D1 I maintain the remaining number of connections that need to be made, while in D2 I maintain the GCD of the bars between different-colored stakes)
Wrote tons of hashes in the previous rounds, luckily don't have to write any more, got rank 26 in round 3 finally. (ＴДＴ)
Whoops, after some discussion, I realized I passed both D1 and D2 with a completely incorrect solution. In all my tests, it was enough to assume all cells of each color will be consecutive on a good board. So I only cared about the leftmost cell of each color and of all cells being consecutive.
D1 fails on something like:
The tests are weak in terms of catching slow solutions too. Merging "first to second" instead of "smaller to larger" takes less than 1s. That's $$$O(N^2)$$$.
I solved problem C faster than all the other contestants. Why does the first blood list on the Hacker Cup homepage show Yuhao Du (apiad), and not me? :(
My rank is 264th. Please check it, SecondThread.
Sorry about that, and thanks for the correction! We've updated the summary.
Will solutions to the problems be presented as they have been for previous rounds? In C, my solution (which sounds broadly like what people mentioned in the blog — the hashing meet-in-the-middle sort of thing) ran ~8 minutes on the large input. So I am interested whether I just have a bad constant or there is something more that I missed :)
Did you compile your program with at least O2?
yes. I did use meh hardware and didn't take much care of the constant (didn't realize I'd run into problems with it, which is my fault anyway), it would still be a bit disappointing if I would've passed simply by working from my desktop instead of my laptop
I failed miserably, but it was fun anyway.
Waiting for the editorial, as I still need to learn how to correct my D1 "solution" :)
I still can't download A's full input (status code 500). If it's just me, could anyone help me out and upload it somewhere? Every other problem's fine.
Also no K=1 tests in D1, kinda evil.
Since there is apparently no editorial, I can write my solutions (briefly) so that nobody needs to wait.
There already is a solution in the comments, but mine differs.
I claim that the strategy for the Second player is to assign a "countercard" to every card the First player can play. Therefore, in each round, the set of cards that reaches the Third player is defined by the card played by the First. Moreover, I claim the following:
For each Player, from the Second to the Fourth, when it is their turn to play, either their team is winning or losing at that moment. Let's assume that we already know the sets $$$G$$$ and $$$B$$$ that mean the following: when the Player's team is winning when it is their turn to play, let us say that the highest card now is $$$x$$$; then $$$G$$$ is the set of these $$$x$$$'s. Similarly, $$$B$$$ is the set of the highest cards so far in the cases when the Player's team is losing. In particular, $$$|G| + |B| = n/4$$$. Then my claim is that the Player can assign their cards as the responses to all these situations, and then play correspondingly. The meaningful part of this claim is that we can ignore the order in which the First player plays their cards: if we decided to counter $$$2$$$ with $$$9$$$, we do it no matter what, and we do not waste the $$$9$$$ in any other case.
More specifically, the Player first wants to counter as many numbers from $$$B$$$ as they can: we counter the maximal possible card from $$$B$$$ with our best card, then the next maximal possible card from $$$B$$$ with our next best card, and so on, until there are no more cards from $$$B$$$ that we can counter. Then there are some cards remaining, and we will use them to strengthen the weakest cards where we win.
All this is wavy and without proofs, because I have none, but this got AC 2 minutes after the contest.
The solution was, again, mentioned here, and it was to merge all tries into a megatrie where each node stores the number of tries it is merged from, and then sum something over all nodes.
My solution was, for every trie, to traverse it and simultaneously traverse all the other tries in the same way. I don't want to elaborate; I think the previous paragraph explains it better than any way I could describe my solution.
For every word $$$V_i$$$ create all words at Hamming distance $$$1$$$ from it, that is, all words that differ from it by $$$1$$$ mistake. There are $$$3NL$$$ such words in total. Hash all of them (with a rolling hash, or just assign a random 64-bit number to every pair `(position, letter)` and let the hash of a word `s` be the sum of all numbers corresponding to `(i, s[i])`); this can be done in $$$O(1)$$$ per added word, if we do it carefully. For each hash we store the number of its occurrences.
For every query word $$$W_j$$$ generate all the $$$1$$$-distance words as well. If we add up the numbers of their occurrences, we almost obtain twice the answer. Indeed, let $$$d(s, t)$$$ be the distance between words $$$s$$$ and $$$t$$$, that is, the number of positions in which they differ. If $$$d(V_i, W_j) > 2$$$, then our sum contains nothing from $$$V_i$$$, as it should. If $$$d(V_i, W_j) = 2$$$, then this sum contains two occurrences; for example, the neighbors of `meta` and `tata` intersect in `teta` and `mata`. If, however, $$$d(V_i, W_j) = 1$$$, then they also have two common neighbors, and if $$$V_i = W_j$$$, then they have $$$3L$$$ common neighbors. However, these cases can be carefully handled by additionally hashing the words $$$V_i$$$ themselves into a separate set.
About the set: if you use a tree-based map instead of a hashmap, there will be an extra log factor, and you may get TLE. If you look up the query neighbors using `operator []` instead of `.find()`, you may add new elements very often, thus increasing the map size and therefore the execution time, which may also lead to a TLE.
Here is my solution, seemingly useless for D2.
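(A quick aside back on the Second Mistake write-up above.) The neighbor hashing can be sketched as below. This is a minimal sketch assuming words over the letters of `meta` (an alphabet of size 4, matching the $$$3L$$$ neighbor count); the corrections for the $$$d \le 1$$$ cases and the final division by two are left out:

```python
import random
from collections import Counter

ALPHABET = "meta"  # assumed alphabet for this sketch (4 distinct letters)

def build_neighbor_counts(dictionary, L):
    # One random 64-bit value per (position, letter); a word's hash is the
    # sum of the values at its positions, so a 1-mistake variant's hash is
    # obtained in O(1) by swapping one term.
    rnd = [{c: random.getrandbits(64) for c in ALPHABET} for _ in range(L)]
    cnt = Counter()
    for w in dictionary:
        base = sum(rnd[i][c] for i, c in enumerate(w))
        for i, c in enumerate(w):
            for x in ALPHABET:
                if x != c:
                    cnt[base - rnd[i][c] + rnd[i][x]] += 1
    return rnd, cnt

def neighbor_hits(query, rnd, cnt):
    # Sums the stored counts over all 1-mistake variants of the query;
    # this equals sum over dictionary words V of |N1(V) ∩ N1(query)|.
    base = sum(rnd[i][c] for i, c in enumerate(query))
    return sum(cnt[base - rnd[i][c] + rnd[i][x]]
               for i, c in enumerate(query) for x in ALPHABET if x != c)
```

With the single dictionary word `meta`, a query at distance 2 (like `tata`) hits exactly the two common neighbors described above, and the query `meta` itself hits all $$$3L = 12$$$ of its own neighbors.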
Let the stakes $$$1$$$ to $$$K$$$ be the first group, $$$K+1$$$ to $$$2K$$$ the second group, etc. In particular, there are $$$M = \lceil N / K \rceil$$$ groups. Initially, each color is present, and all its occurrences (one) belong to some particular group, and we know which. Call an operation successful if both colors it involves are present. There is a particular number of successful operations that need to be made: if each of them involves two colors from the same group, then this number of operations is $$$N - M$$$. But some successful operations involve two colors from different groups. Sometimes we will have to merge two groups into one, but the meaning of the word "group" is always "the set of stakes that need to end up the same color".
We store for each color the group it represents, or `-1` if there are no stakes of this color. Consider each operation; say it replaces color $$$u$$$ with color $$$v$$$. If $$$u$$$ is not present, ignore it. If $$$v$$$ is not present, swap the groups for colors $$$u$$$ and $$$v$$$. Otherwise the operation is successful. If $$$u$$$ and $$$v$$$ correspond to the same group, we decrease the number of successful operations remaining. Otherwise, since some two stakes from different groups will now have the same color for eternity, and in the end each of these groups needs to be same-colored, these groups must be merged into one. We use DSU to do this, then change the groups of all colors of the smaller group. When the number of remaining successful operations becomes zero, we win.
If we look at a somehow-colored fence, can we find all $$$k$$$ for which it is properly colored?
A fence is properly colored for $$$k$$$ iff, for all pairs $$$(i, i + 1)$$$ such that stakes $$$i$$$ and $$$i + 1$$$ are of different colors, $$$i$$$ is divisible by $$$k$$$. In other words, GCD of all such $$$i$$$ must be divisible by $$$k$$$.
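This criterion is a single pass; a small sketch (hypothetical helper; stakes given as a 0-indexed list, so the boundary between stakes $$$i$$$ and $$$i + 1$$$ has 1-based index $$$i$$$):

```python
from math import gcd

def valid_ks(colors):
    # g accumulates the gcd of all boundary indices between adjacent
    # stakes of different colors; the fence is properly colored for k
    # exactly when k divides g (or the fence is monochromatic, g == 0).
    g = 0
    for i in range(1, len(colors)):
        if colors[i] != colors[i - 1]:
            g = gcd(g, i)
    return [k for k in range(1, len(colors) + 1) if g == 0 or g % k == 0]
```

For instance, `[1, 1, 2, 2]` has its only boundary at index 2, so it is properly colored for k = 1 and k = 2.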
Assign a tag to each color; initially assign tag $$$i$$$ to color $$$i$$$. We will maintain two things: the tags of all stakes in order, and the set of all $$$i$$$ such that stakes $$$i$$$ and $$$i + 1$$$ have different colors, ergo different tags.
We use tags so we don't have to recolor everything to maintain the first thing. If we need to recolor $$$u\to v$$$, then there are several options:
Two things remaining: 1) calculate the gcd of all $$$i$$$ after each operation, and 2) find the answer.
1) We can either do it online with a segment tree, where each node stores the gcd of its children, and removing the number $$$i$$$ means replacing the corresponding leaf with $$$0$$$. Or we can do it in reverse: calculate the final gcd, then go over the operations from the latest to the first, each time updating the gcd accordingly.
2) When we know the gcd after all operations, for each $$$k$$$ we can find the first moment when this gcd is divisible by $$$k$$$ using binary search. There are other ways, but I think this is the easiest.
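The online variant of 1) can be sketched with a gcd segment tree supporting point assignment (so removing index $$$i$$$ is assigning $$$0$$$ to its leaf). A minimal sketch:

```python
from math import gcd

class GcdSegTree:
    # Iterative segment tree over the initial values; supports point
    # assignment and querying the gcd of the whole range. gcd(x, 0) == x,
    # so an assigned 0 acts as "this index is removed".
    def __init__(self, values):
        n = len(values)
        self.n = n
        self.t = [0] * (2 * n)
        self.t[n:] = values
        for i in range(n - 1, 0, -1):
            self.t[i] = gcd(self.t[2 * i], self.t[2 * i + 1])

    def assign(self, i, x):
        i += self.n
        self.t[i] = x
        i //= 2
        while i:
            self.t[i] = gcd(self.t[2 * i], self.t[2 * i + 1])
            i //= 2

    def overall_gcd(self):
        return self.t[1]
```

Since gcd is commutative and associative, the whole-range query is just the root, and each point update is $$$O(\log n)$$$.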
Step one: parse the picture to build the inclusion tree: the vertices are the polygons plus the whole plane (which can be considered a very large polygon), and we have an edge $$$u\to v$$$ if polygon $$$v$$$ is inside $$$u$$$ and there is no polygon $$$w$$$ between them. This can be done by sorting all points and then maintaining some order with a treap.
Step two: given two points, consider the smallest polygons $$$u$$$ and $$$v$$$ that contain these points. We need to verify that $$$subtree(u)$$$ is isomorphic to $$$subtree(v)$$$, $$$subtree(par(u))$$$ is isomorphic to $$$subtree(par(v))$$$, and so on (exercise for the reader).
First let's distinguish the subtrees. Let's define the hash inductively. For a vertex $$$v$$$, let $$$a_1, \ldots, a_d$$$ be the sorted list of the hashes of its children. Then the hash of $$$v$$$ is either the first natural number that is still unused for hashing vertices, or whatever we hashed the list $$$(a_1, \ldots, a_d)$$$ as before, if it is not the first time we see it.
Now that we have all the hashes of our vertices, let's hash all paths from the root. The path that contains only the root will be hashed with $$$0$$$. For all other vertices $$$v$$$, if $$$h$$$ is the path hash of $$$par(v)$$$, the path from the root to $$$v$$$ will be hashed either with the first natural number that is still unused for hashing paths, or with whatever we hashed the pair $$$(h, hash(v))$$$ as before, if it is not the first time we see it.
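The subtree part of this scheme can be sketched as follows (a sketch with trees given as nested lists of children; a dictionary hands out the "first unused natural number"):

```python
def make_canonizer():
    # memo maps a sorted tuple of child ids to a canonical id, so two
    # subtrees get the same id exactly when they are isomorphic.
    memo = {}
    def canon(children):
        # a tree is the list of its children's trees; a leaf is []
        key = tuple(sorted(canon(c) for c in children))
        if key not in memo:
            memo[key] = len(memo)   # first natural number still unused
        return memo[key]
    return canon
```

Both trees being compared must go through the same canonizer so the memo is shared; then isomorphism is just equality of ids.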
no post regarding the t-shirt. it has now been a week, SecondThread
Please see this post.