LoneFox's blog

By LoneFox, 5 years ago

Round 2 of the 2019 Facebook Hacker Cup is less than 48 hours away!

The round will begin on July 13th, 2019 at 10am PDT and will last for 3 hours. You can check the start time in your local timezone here.

The contest will be available here shortly before the round begins.

You're eligible to compete in Round 2 if you scored at least 30 points in Round 1.

The top 200 contestants will advance to Round 3, which will take place on July 27th, and the top 500 contestants will win Hacker Cup t-shirts! We'll begin shipping out t-shirts by September, at which point the winners will receive emails with tracking information. Please note that all submission judgments will only be revealed after the round ends, and that time penalty will be used to break score ties. More details about rules and other information can be found here.

Good luck!

The corresponding Facebook post can be found here.

Update: The round has ended, and solutions have been posted here. Thanks for participating!


»
5 years ago, # |

Where is the final contest held?

  • »
    »
    5 years ago, # ^ |

    nevermind, found it on the page:
    "We're also excited to announce that this year's Hacker Cup Finals will be held on October 24-26 at Facebook Dublin!"

»
5 years ago, # |

Just to return this to recent actions: the contest will start in 1.5 hours!

»
5 years ago, # |

Can we solve D using the fact that the input is random, relying on the longest decreasing subsequence of a random sequence being short?

  • »
    »
    5 years ago, # ^ |

    I have a solution which solves any instance in $$$O(N \log N)$$$. But don't listen to me, I got WA.

  • »
    »
    5 years ago, # ^ |

    Mine uses a segment tree. For each "R" position, find the interval it covers.

  • »
    »
    5 years ago, # ^ |

    Test 7: n = 199128
    Test 97: n = 799999
    Test 133: n = 799306

    All other tests have decreasing sequences of length $$$n \le 10$$$. So, not all the tests are random :)

  • »
    »
    5 years ago, # ^ |

    The sequence can be made strictly increasing or strictly decreasing by setting $$$A = 0$$$, $$$B = 1$$$, and $$$C = 1$$$ or $$$D - 1$$$.

»
5 years ago, # |

I don't think an email was sent out? I only found out about the contest an hour after it started, and now I didn't make the top 200 (215th) due to the time penalty :(

  • »
    »
    5 years ago, # ^ |

    Same. I could have easily solved A+B and gotten into the top 500 for a T-shirt, but I solved B way too late and had to attempt C, because with the time penalties A alone wouldn't have been enough anymore.

  • »
    »
    5 years ago, # ^ |

    I checked my inbox for an email to see whether the contest was happening this weekend. When it turned up empty, I just went to sleep a little early. And I woke up to this.

»
5 years ago, # |

Gosh, why do you set constraints like $$$T \le 350$$$, $$$n \le 4000$$$, $$$m \le 10000$$$ with no constraint on their sum, and then in the actual test give input data where the sum of the $$$m$$$'s is about 1% of what it could be? These constraints were so high that I was fairly dubious whether $$$O(Tnm \cdot \alpha(n))$$$ would pass, but I decided to go with it because many people had submitted it, it seemed really hard to get anything faster than $$$O(Tnm)$$$, and my laptop is pretty fast. And then you give input data on which my program runs in 0.5s, after I was afraid it would time out within the 6 minutes?

Could you please not copy the worst ideas from Jagiellonian University, where they sometimes state "$$$t$$$ is the number of testcases, $$$t \le 2 \cdot 10^9$$$" and expect contestants to rely on guessed assumptions about the real input size?

  • »
    »
    5 years ago, # ^ |

    In Facebook Hacker Cup, I usually find that the number of maximal tests in a test set is no bigger than 10-20.

  • »
    »
    5 years ago, # ^ |

    You can do $$$O(tn^2)$$$. I expected them to maybe make it hard for $$$O(tnm)$$$, but that was not the scenario.

    Idea
  • »
    »
    5 years ago, # ^ |

    You only need the longest palindrome for each center, so you can build the graph in $$$O(m+n^{2})$$$; I don't know why you'd need DSU when you can do DFS. $$$O(tn^{2})$$$ looked like a pretty safe bet.

    • »
      »
      »
      5 years ago, # ^ |

      No reason for DSU other than that I prefer coding it over DFS ¯\_(ツ)_/¯. I had the observation about the longest palindrome as well, but it only decreases m from 10000 to 8000, so that's not that big a gain :P. On the other hand, if m=8000 then many intervals are short, and we actually need to traverse just half of each interval... Yeah, constant-factor analysis xd...

    • »
      »
      »
      5 years ago, # ^ |

      DFS on 8,000,000 edges, 350 times, doesn't look like a safe bet to me. (I'm quite bad at constant tuning.)

      I only wrote $$$O(tn^2)$$$ because I can't do knapsack in $$$O(n^2/64)$$$ when I also have to recover the actual answer.

      • »
        »
        »
        »
        5 years ago, # ^ |

        What is so special about DFS? The number of recursive calls is only $$$O(n)$$$; everything else is just array traversal, even in a cache-friendly order.
        You need each test to run in no more than 0.5s. Looks fine to me.

  • »
    »
    5 years ago, # ^ |

    Doesn't $$$350 \cdot 4000 \cdot 10000 = 14{,}000{,}000{,}000$$$ operations usually pass when the memory usage is only around 40,000,000 ints (ten megabytes?)? I kinda assumed that it should finish in about 10 seconds or so... but maybe I assumed wrong?

  • »
    »
    5 years ago, # ^ |

    I thought that idea came from the Yangon Regional Contest, where they put $$$n \leq 2^{31}$$$ in problem C ($$$n$$$ being the number of vertices in the graph).

  • »
    »
    5 years ago, # ^ |

    Union-find with path compression runs in $$$O(V + E \log_{2 + \frac{E}{V}}(V))$$$ (here $$$V = n$$$ and $$$E \leq \frac{n m}{2}$$$), which is $$$O(V + E)$$$ on dense graphs (more precisely, if $$$E \in \Omega(V^{1 + \varepsilon})$$$ for some $$$\varepsilon > 0$$$). Hence in this task, the worst-case runtime with union-find is also $$$O(t (n^{1 + \varepsilon} + n m))$$$. And since this only uses $$$O(n+m)$$$ memory, everything fits into cache, so I expected it to run quite fast.

    I would agree that having more precise bounds on the input size would be nice (e.g. $$$n \leq 100$$$ in all except at most $$$50$$$ cases, similar to what they do in Code Jam).
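
    For concreteness, here is a minimal union-find with path compression (compression only, no union by rank), matching the structure analyzed above; this is a generic sketch, not contest code:

        #include <bits/stdc++.h>
        using namespace std;

        struct DSU {
            vector<int> p;                          // p[x] == x for roots
            DSU(int n) : p(n) { iota(p.begin(), p.end(), 0); }
            int find(int x) {                       // path compression only
                return p[x] == x ? x : p[x] = find(p[x]);
            }
            bool unite(int a, int b) {              // true if a merge happened
                a = find(a); b = find(b);
                if (a == b) return false;
                p[a] = b;
                return true;
            }
        };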

    • »
      »
      »
      5 years ago, # ^ |

      If $$$E=V$$$ it will be $$$O(V + E \log_3{(V)})$$$, how is it $$$O(V + E)$$$?
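
      For reference, here is how the dense case in the claim above works out, assuming $$$E \ge V^{1+\varepsilon}$$$:

      $$$\log_{2 + \frac{E}{V}}(V) = \frac{\ln V}{\ln(2 + E/V)} \le \frac{\ln V}{\varepsilon \ln V} = \frac{1}{\varepsilon} = O(1),$$$

      so $$$E \log_{2 + \frac{E}{V}}(V) = O(E)$$$ in that regime. With $$$E = V$$$ the base is a constant, so the bound is indeed $$$O(V \log V)$$$; the $$$O(V + E)$$$ statement applies only to dense graphs.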

  • »
    »
    5 years ago, # ^ |

    LoneFox, is it possible to address this for future contests? It should be very easy to add to future problems, and it would improve quality of life a lot.

»
5 years ago, # |

So, solutions to the two problems I solved:

A — If $$$k = 2$$$, all of the points should have the same $$$(x+y) \bmod 2$$$; otherwise the answer is NO.

B — Used DSU to find the sizes of the sets that have to be equal, then used knapsack and backtracked to find the solution. This should ideally have timed out, but it passed in like 2s because FBHC.

Rank — 450, happy for the T-shirt.
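
As a rough sketch of the knapsack-plus-backtracking step described above (the DSU part and the exact target sum are problem-specific and assumed here):

    #include <bits/stdc++.h>
    using namespace std;

    // Given the component sizes found by the DSU, pick components whose
    // sizes add up to `target`; returns chosen indices, or {-1} if impossible.
    vector<int> pick(const vector<int>& sz, int target) {
        int n = sz.size(), S = accumulate(sz.begin(), sz.end(), 0);
        // dp[i][t]: can the first i sizes realize sum t?
        vector<vector<char>> dp(n + 1, vector<char>(S + 1, 0));
        dp[0][0] = 1;
        for (int i = 0; i < n; ++i)
            for (int t = 0; t <= S; ++t)
                dp[i + 1][t] = dp[i][t] || (t >= sz[i] && dp[i][t - sz[i]]);
        if (target < 0 || target > S || !dp[n][target]) return {-1};
        vector<int> chosen;                     // backtrack through the table
        for (int i = n, t = target; i > 0; --i)
            if (!dp[i - 1][t]) { chosen.push_back(i - 1); t -= sz[i - 1]; }
        return chosen;
    }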

  • »
    »
    5 years ago, # ^ |

    I am usually stupid, so I couldn't solve A in 20 minutes and just skipped it.

    B — just DSU and knapsack, like everyone else.

    C — just a standard DP problem: $$$dp[0/1][i][j]$$$ — at the $$$i$$$-th letter, can we have $$$j$$$ blocks, with the current block starting with 0/1? Got $$$O(s \cdot h^2)$$$ complexity; it can easily be reduced to $$$O(s \cdot h \log h)$$$, but why write such a thing.
    An easy 240 points and a T-shirt.

    • »
      »
      »
      5 years ago, # ^ |

      How do you solve C in $$$O(s \cdot h \log h)$$$? Do you use some extra greedy strategy in that solution compared to $$$O(s \cdot h^2)$$$?

»
5 years ago, # |

How to solve "Seafood" ?

  • »
    »
    5 years ago, # ^ |

    For each pair of clams, if $$$p_i < p_j$$$ and $$$h_i < h_j$$$, then you can delete clam $$$i$$$.

    After that, if we sort the clams by position, their heights are non-increasing.

    So let $$$dp_x$$$ be the smallest cost of deleting all clams at positions $$$\leq x$$$, using only rocks at positions $$$\leq x$$$, and ending at position $$$x$$$.

    To calculate $$$dp_i$$$, find all clams such that there are no rocks of greater height on the segment from the clam to $$$i$$$. Among them, you need to fix the maximum clam that you will delete by going to the left and then back to the right; you should go left as far as the first rock that is higher than this clam.

    You can optimize this $$$dp$$$ using stacks.

    Code
  • »
    »
    5 years ago, # ^ |

    My greedy solution passed the system test. We need a few observations for this:

    1. In the optimal solution we would always go to the rightmost clam.

    2. After we reach the rightmost clam, it makes sense to visit only one rock: one that is hard enough to open all still-unopened clams.

    So the solution is to fix the hardest clam (suppose its hardness is $$$X$$$) that may still be unopened when we reach the rightmost clam. Then we need to deal with all clams whose hardness exceeds $$$X$$$. This can be done greedily by going to the closest rock for each such clam. The only subtlety is that we need to account for possible overlaps when opening multiple clams with the same rock.

    If we iterate over $$$X$$$ in decreasing order, we can calculate the answer for each case with total complexity $$$O(N \log N)$$$.

    code
»
5 years ago, # |

I created a 2D array of size $$$n \times n$$$ locally in problem B, tested on the sample input, and then downloaded the input file. Suddenly my code wouldn't run for more than 9 cases. What to do? Clock ticking! I couldn't spot the fault: a 4000×4000 array can't be declared locally (it overflows the stack), it has to be global. So I just submitted something, knowing it was going to fail eventually. And they won't allow me to resubmit. So I sat there waiting for the contest to end to check my solution (which turned out to be correct). I absolutely hate this 6-minute thing!

Use this comment as a (Remove-the-timer) button.

Why so much hate? Seems like everyone loves the timer.
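
For anyone who hits the same wall: a 4000×4000 int array is about 64 MB, far beyond the usual 1-8 MB stack limit, so it needs static storage duration (or the heap). A tiny illustration, with sizes chosen to match the story above:

    #include <cstdio>

    int table[4000][4000];            // OK: static storage, not on the stack

    int main() {
        // int bad[4000][4000];       // ~64 MB on the stack: crashes on most systems
        table[3999][3999] = 42;
        std::printf("%d\n", table[3999][3999]);
        return 0;
    }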


»
5 years ago, # |

$$$C$$$ can actually be solved in $$$O(hs)$$$ per test case. I presume the majority of people did $$$O(h^2s)$$$, but it actually becomes a much more interesting problem when you try to make it faster.

»
5 years ago, # |

Actually, B can be solved with a much better complexity as well, as pointed out to me by Radewoosh and mnbvmar: $$$O(n \log^2 n)$$$, I think.

  • »
    »
    5 years ago, # ^ |

    How can you solve it faster than $$$O(n^2)$$$ after building the graph? This problem seems to be equivalent to the knapsack problem.

    • »
      »
      »
      5 years ago, # ^ |

      FFT + Divide & Conquer

      • »
        »
        »
        »
        5 years ago, # ^ |

        How to solve knapsack with FFT + D&C?

        • »
          »
          »
          »
          »
          5 years ago, # ^ |

          Divide all elements into two parts, solve them recursively.

          After that, multiply the polynomials from the left and right halves to get the answer for all elements.
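
          A minimal sketch of that divide and conquer, with naive polynomial multiplication as a stand-in where FFT would go (swap in an FFT-based multiply to get the subquadratic bound):

              #include <bits/stdc++.h>
              using namespace std;

              // Coefficient k is 1 iff subset sum k is achievable.
              // Naive O(|a||b|) product; replace with FFT for speed.
              vector<char> multiply(const vector<char>& a, const vector<char>& b) {
                  vector<char> c(a.size() + b.size() - 1, 0);
                  for (size_t i = 0; i < a.size(); ++i)
                      if (a[i])
                          for (size_t j = 0; j < b.size(); ++j)
                              if (b[j]) c[i + j] = 1;
                  return c;
              }

              // Achievable subset sums of w[l..r), as a 0/1 polynomial.
              vector<char> solve(const vector<int>& w, int l, int r) {
                  if (r - l == 1) {
                      vector<char> p(w[l] + 1, 0);
                      p[0] = p[w[l]] = 1;         // skip or take the item
                      return p;
                  }
                  int m = (l + r) / 2;
                  return multiply(solve(w, l, m), solve(w, m, r));
              }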

          • »
            »
            »
            »
            »
            »
            5 years ago, # ^ |

            And presumably the divide-and-conquer also allows the partition to be reconstructed efficiently. Once you know the sum you want, you can scan the two input polynomials to determine a feasible sum from each half, and then recursively reconstruct those two sums. Nice!

    • »
      »
      »
      5 years ago, # ^ |

      Yeah, that's knapsack, but you can significantly optimize this knapsack in at least three ways :). The first one is exactly as 300iq said. However, there is an alternative approach where you observe that the sum of the sizes of the items is at most $$$n$$$, hence there are at most $$$O(\sqrt{n})$$$ distinct sizes, and you can use that to do it in $$$O(n^{3/2})$$$, which shouldn't be slower. And you can divide the brute force's complexity by 64 using bitsets as well (and you can even restore the result).
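
      A sketch of the "few distinct sizes" variant, assuming the item sizes sum to at most $$$S$$$: each distinct size is handled in $$$O(S)$$$ with a bounded-count update, and since the sizes sum to $$$S$$$ there are only $$$O(\sqrt{S})$$$ distinct values:

          #include <bits/stdc++.h>
          using namespace std;

          // reach[t] == 1 iff some subset of `sizes` sums to t.
          vector<char> subsetSums(const vector<int>& sizes) {
              int S = accumulate(sizes.begin(), sizes.end(), 0);
              map<int, int> cnt;                   // distinct size -> count
              for (int s : sizes) cnt[s]++;
              vector<char> reach(S + 1, 0);
              reach[0] = 1;
              for (auto [s, c] : cnt) {
                  vector<int> used(S + 1, 0);      // copies of s used to reach t
                  for (int t = s; t <= S; ++t)
                      if (!reach[t] && reach[t - s] && used[t - s] < c) {
                          reach[t] = 1;
                          used[t] = used[t - s] + 1;
                      }
              }
              return reach;
          }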

      • »
        »
        »
        »
        5 years ago, # ^ |

        How do you restore the result if you use bitsets?

        • »
          »
          »
          »
          »
          5 years ago, # ^ |

          Like in the solution with FFT, find which sum you need to achieve on the left and which on the right, and proceed recursively, bounding the size of the bitset.

          • »
            »
            »
            »
            »
            »
            5 years ago, # ^ |

            I see, so you'd need to use divide-and-conquer rather than just adding one item at a time; and presumably also use a dynamically-sized bitset rather than std::bitset.

        • »
          »
          »
          »
          »
          5 years ago, # ^ |

          If your items have weights $$$w_i$$$, let $$$DP[i]$$$ be the bitset of weights you can achieve with the first $$$i$$$ items, and compute $$$DP[i+1] = DP[i] \mid (DP[i] \ll w_i)$$$. Then you can reconstruct by walking back through the $$$DP$$$ table:

          Start at $$$DP[n][j]$$$, where $$$j$$$ is the optimal achievable weight. To reconstruct from $$$DP[i+1][j]$$$, simply check whether $$$DP[i][j]$$$ is set. If yes, go to $$$DP[i][j]$$$; otherwise go to $$$DP[i][j - w_i]$$$ and add the $$$i$$$-th item to the answer. Repeat until you reach $$$DP[0][0]$$$.

          This does require storing the whole $$$DP$$$ table. If memory is an issue, storing only $$$DP[\sqrt{n}], DP[2\sqrt{n}], \dots$$$ and recomputing the last $$$\sqrt{n}$$$ rows also works.

          Getting $$$O\big(\frac{n^{3/2}}{64}\big)$$$ with a bitset should also be possible.
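
          A minimal sketch of this reconstruction, storing every row (MAXW is an assumed compile-time bound on the total weight):

              #include <bits/stdc++.h>
              using namespace std;

              const int MAXW = 4001;               // assumed bound on total weight

              // Indices of items realizing the best weight <= cap.
              vector<int> reconstruct(const vector<int>& w, int cap) {
                  int n = w.size();
                  vector<bitset<MAXW>> dp(n + 1);
                  dp[0][0] = 1;
                  for (int i = 0; i < n; ++i)
                      dp[i + 1] = dp[i] | (dp[i] << w[i]);
                  int j = cap;
                  while (j > 0 && !dp[n][j]) --j;  // best achievable weight
                  vector<int> items;
                  for (int i = n; i > 0; --i)      // walk back through the table
                      if (!dp[i - 1][j]) {         // item i-1 had to be taken
                          items.push_back(i - 1);
                          j -= w[i - 1];
                      }
                  return items;
              }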

          • »
            »
            »
            »
            »
            »
            5 years ago, # ^ |

            I think we can go even better: let $$$last[i]$$$ be the weight you last added when you achieved total weight $$$i$$$. Initially, all values are uninitialized, and each value will be set exactly once (the first time we get to weight $$$i$$$).

            Now, as you previously mentioned, we have a transition of the form $$$DP[i+1] := DP[i] \mid (DP[i] \ll w_i)$$$. Notice however that $$$DP[i+1] \supseteq DP[i]$$$. We can therefore iterate over the set bits in $$$DP[i + 1] \setminus DP[i]$$$ to find out which new total weights become achievable with $$$w_i$$$. For these weights, set the corresponding elements of the $$$last[\star]$$$ array.

            Notice that such an array allows us to recover the result easily. Moreover, we don't need to remember past rows of $$$DP[\star]$$$, and therefore we only need $$$O(n)$$$ memory. Obviously, we get $$$O(n^2/64)$$$ time complexity, but $$$O(n\sqrt{n}/64)$$$ should be possible with this approach as well.
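
            A sketch of the $$$last[\star]$$$ idea; the bit iteration below uses libstdc++'s _Find_first/_Find_next extensions of std::bitset, and MAXW is again an assumed bound on the total weight:

                #include <bits/stdc++.h>
                using namespace std;

                const int MAXW = 4001;                  // assumed bound

                // Weights used to reach `target` (empty if unreachable).
                vector<int> recover(const vector<int>& w, int target) {
                    bitset<MAXW> dp;
                    dp[0] = 1;
                    vector<int> last(MAXW, -1);
                    for (int x : w) {
                        bitset<MAXW> ndp = dp | (dp << x);
                        bitset<MAXW> fresh = ndp ^ dp;  // newly reachable sums
                        for (int t = fresh._Find_first(); t < MAXW;
                             t = fresh._Find_next(t))
                            last[t] = x;                // set exactly once
                        dp = ndp;
                    }
                    vector<int> items;
                    if (target <= 0 || target >= MAXW || !dp[target]) return items;
                    for (int t = target; t > 0; t -= last[t])
                        items.push_back(last[t]);       // walk the last[] chain
                    return items;
                }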

            • »
              »
              »
              »
              »
              »
              »
              5 years ago, # ^ |

              Nice. I knew that it could be done with $$$O(n^2)$$$ memory, but hadn't thought of this approach for $$$O(n)$$$ memory.

        • »
          »
          »
          »
          »
          5 years ago, # ^ |

          Nah, all that you guys said is overkill (EDIT: ah, just noticed that mnbvmar already said this). When adding the $$$i$$$-th item you do $$$newdp = olddp \mid (olddp \ll w)$$$, then you just compute $$$olddp \oplus newdp$$$ and iterate over its bits to learn which sums came into existence thanks to the $$$i$$$-th item. All of this is easily doable with std::bitset.

      • »
        »
        »
        »
        5 years ago, # ^ |

        (I hope this is not widely known and will be useful to many)

        I was trying the bitset approach on the knapsack subproblem of 102275B - Bitstrings as a Service discussed above, but I ran into the fact that shift operations are not available on the BitSet of the Java/Scala standard libraries (unlike std::bitset in C++).

        I was about to write my own implementation of these operations, but then it occurred to me that I could use BigInteger instead of BitSet. It has all the necessary bit-manipulation methods, including shifts. Here's my implementation in Scala (BigInt is just a wrapper around java.math.BigInteger with symbolic method names, so the same functionality is available in Java as well): 57590997

»
5 years ago, # |

How does Facebook know where to send the t-shirt? Did we fill out an address before all the rounds started or something?

»
5 years ago, # |

Meet the training in the GYM: 2019 Facebook Hacker Cup, Round 2.

»
5 years ago, # |

Is Knapsack required for B, or does some greedy assignment also work?

  • »
    »
    5 years ago, # ^ |

    Greedy is not optimal. Take something like

    7 6 5 4 4
    

    Greedy in descending order gives something like

    [7,4,4] [6,5] diff=4
    

    while optimal is

    [7,6] [5,4,4] diff=0
    

    Edit: there are other rather good heuristics, like this differencing approach, but all have cases where they can fail.
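
    A tiny brute force confirming the example, assuming the goal is to split the multiset into two groups minimizing the difference of their sums (exhaustive, so only for small inputs):

        #include <bits/stdc++.h>
        using namespace std;

        int main() {
            vector<int> a = {7, 6, 5, 4, 4};
            int n = a.size(), total = accumulate(a.begin(), a.end(), 0);
            int best = total;
            for (int mask = 0; mask < (1 << n); ++mask) {  // try every split
                int s = 0;
                for (int i = 0; i < n; ++i)
                    if (mask & (1 << i)) s += a[i];
                best = min(best, abs(total - 2 * s));
            }
            printf("%d\n", best);                // prints 0: [7,6] vs [5,4,4]
        }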

»
5 years ago, # |

I got Top 500! NICE!! My first t-shirt ever!

»
5 years ago, # |

I didn't like A because it was optimal to guess and assume that the first problem is very easy.

And compressed input is a very bad idea in problems like D; FHC organizers should keep this in mind. I found a stack of clams with decreasing hardness (say there are $$$s$$$ of them) and implemented an $$$O(s^2)$$$ dp: for each state, I made $$$O(s)$$$ transitions to states on the right. Then I just changed that to making transitions only to the next $$$2000$$$ and the last $$$2000$$$ states, which is a standard hack against bad tests. I see now that it was enough to consider the next $$$16$$$ and the last $$$1$$$ state each time. Maybe it isn't a coincidence that $$$16$$$ is around $$$\log n$$$, btw.

  • »
    »
    5 years ago, # ^ |

    A: Luckily there was a sample with the hunters in the corners and the prey in the center. It cleared all doubts.

»
5 years ago, # |

It is impossible to see submission times in the standings; only their sum (the penalty) is shown. Please fix this.

»
5 years ago, # |

Even though A is easy, I was so surprised that people could solve it in 3-4 minutes (figuring out the solution + writing the code + submitting). That's amazing :'(.

»
5 years ago, # |

LoneFox, my email is not linked to my Facebook account, so I did not receive the email about the T-shirt for finishing in the top 500 in Round 2. Can you help me with this?

Also, now when I try to add my email, I realize that it was attached to some childhood account. I have deleted that childhood account, but I am still not able to add the email to my Facebook account. Can you help or guide me with this as well?