LoneFox's blog

By LoneFox, history, 6 years ago, In English

Round 2 of the 2018 Facebook Hacker Cup is less than 24 hours away!

The round will begin on August 4th, 2018 at 10am PDT and will last for 3 hours. You can check the start time in your local timezone here.

The contest will be available here shortly before the round begins.

You're eligible to compete in Round 2 if you scored at least 35 points in Round 1.

The top 200 contestants will advance to Round 3, which will take place on August 18th, and the top 500 contestants will win Hacker Cup t-shirts! We'll begin shipping out t-shirts in September, at which point the winners will receive emails with tracking information. Please note that all submission judgements will only be revealed after the round ends, and that time penalty will be used to break score ties. More details about rules and other information can be found here.

Good luck!

The corresponding Facebook post can be found here.

Update: The editorial is now available here.

  • Vote: I like it
  • +71
  • Vote: I do not like it

»
6 years ago, # |
  Vote: I like it +11 Vote: I do not like it

Not exactly related to this, but do you maybe have any information about t-shirts from last year?

  • »
    »
    6 years ago, # ^ |
      Vote: I like it -12 Vote: I do not like it

    We're aware that some winners unfortunately didn't end up receiving their t-shirts in the past, and are working on having those re-delivered through an improved process this year. Please let us know if you were among the top 500 contestants in Round 2 of the 2017 Hacker Cup but didn't receive your t-shirt, thanks!

    • »
      »
      »
      6 years ago, # ^ |
        Vote: I like it +8 Vote: I do not like it

      Do you mean we should send you a message?

      • »
        »
        »
        »
        6 years ago, # ^ |
          Vote: I like it +25 Vote: I do not like it

        Sure, a message on CodeForces will work.

    • »
      »
      »
      6 years ago, # ^ |
        Vote: I like it +8 Vote: I do not like it

      I received mine today :D (from 2017)

»
6 years ago, # |
  Vote: I like it +11 Vote: I do not like it

I scored 35 points in Round 1; however, I did not get an email from Facebook about qualifying for Round 2, like the one I received about qualifying for Round 1. Is it the case that Facebook just didn't send an email for this round?

  • »
    »
    6 years ago, # ^ |
      Vote: I like it +6 Vote: I do not like it

    You're indeed qualified for Round 2 if the Round 1 scoreboard says that you got 35 points. Unfortunately, some qualified contestants may not have received emails for this round.

»
6 years ago, # |
  Vote: I like it +70 Vote: I do not like it

Out of curiosity — why are there no email reminders about FBHC rounds? I would've forgotten about this one if not for the contest calendar imported to my Google Calendar; I assume many people just forget about these rounds.

»
6 years ago, # |
Rev. 2   Vote: I like it +68 Vote: I do not like it
  1. I go to https://www.facebook.com/hackercup/, which is the "main page" of the Hacker Cup.
  2. I CAN'T FIND THE LINK TO THE CONTEST.
  3. Thankfully, this blog on Codeforces exists, and I can follow the link on this page. Here it is, once more: https://www.facebook.com/hackercup/contest
  4. Or https://www.facebook.com/hackercup/round/211051912693116/
»
6 years ago, # |
  Vote: I like it +126 Vote: I do not like it

Tfw stack space screws your B

I hope Facebook changes its contest format like CodeJam did this year. This was just sad and frustrating.

  • »
    »
    6 years ago, # ^ |
    Rev. 2   Vote: I like it +28 Vote: I do not like it

    Same issue here. I had a runtime error on my PC without realizing that the problem was with the stack. I'll share a trick that I saw on HackerRank a few years ago: you can increase the size of the stack with one snippet. I'll post my entire solution for problem B of today's Hacker Cup (the contest is over already).

    The snippet is at the bottom of the code (and one "magic" include at the beginning).

    Spoiler

    Without the trick my code crashes :(
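
    For reference, since the spoiler above isn't reproduced here: a common version of this trick runs the whole solution inside a thread created with a big stack. A minimal sketch, assuming the pthread-based variant (the names solve() and run(), and the 1 GB figure, are my own); compile with g++ -pthread:

      #include <pthread.h>

      void solve() {
          // ... the actual solution, including any deep recursion, goes here ...
      }

      void* run(void*) {
          solve();
          return nullptr;
      }

      int main() {
          // Do all the work in a thread whose stack is 1 GB instead of the default few MB.
          pthread_attr_t attr;
          pthread_attr_init(&attr);
          pthread_attr_setstacksize(&attr, 1024u * 1024 * 1024);
          pthread_t thread;
          pthread_create(&thread, &attr, run, nullptr);
          pthread_join(thread, nullptr);
          return 0;
      }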

    • »
      »
      »
      6 years ago, # ^ |
        Vote: I like it +60 Vote: I do not like it

      On Linux just run ulimit -s unlimited.

      • »
        »
        »
        »
        6 years ago, # ^ |
          Vote: I like it +24 Vote: I do not like it

        I guessed there were some flags to increase the stack size, but I didn't know them at contest time :( and couldn't find anything in the 6-minute time window.

        Anyway, this snippet is really useful for ONLINE judges, where you can't control the compiler flags ;)

      • »
        »
        »
        »
        6 years ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

        Unfortunately, not for Code::Blocks users; at least, I set it for all terminals, but the program run from the Code::Blocks terminal still failed with a segmentation fault :'(

  • »
    »
    6 years ago, # ^ |
      Vote: I like it +49 Vote: I do not like it

    And this becomes more irritating when you realize the first 2 questions were enough to get the T-Shirt.

    • »
      »
      »
      6 years ago, # ^ |
        Vote: I like it +30 Vote: I do not like it

      They were even enough to qualify I think.

    • »
      »
      »
      6 years ago, # ^ |
      Rev. 2   Vote: I like it 0 Vote: I do not like it

      Maybe they already count the ability to detect and fix such an error :V; there's no easy way to get anything from Facebook, you know :V (even if you did pass, it's known to take forever for the T-shirt to arrive)

  • »
    »
    6 years ago, # ^ |
      Vote: I like it +97 Vote: I do not like it

    This sounds sad and strange.

    A failure is an opportunity to learn. If you don't know how to increase the stack size locally, and it cost you a problem in an important contest, the natural reaction is to learn how to do it, once and for all.

    Hoping that the platform changes its contest format instead is counterproductive, as it doesn't make your own situation any better. Are you not going to test your solutions locally then?

    • »
      »
      »
      6 years ago, # ^ |
        Vote: I like it +8 Vote: I do not like it

      Of course I'm going to learn it now.

      I agree that some fault lies with me for not knowing how to increase the stack size locally, but I believe that at a major contest like Facebook Hacker Cup, this non-CP knowledge should not be the deciding factor.

      I knew how to solve the question, and had it passed (which it would have — I checked later), I would have gotten at least a T-shirt and some morale to solve further questions.

      • »
        »
        »
        »
        6 years ago, # ^ |
          Vote: I like it +71 Vote: I do not like it

        It's not a deciding factor. There were 2 other problems which were enough to advance. You should be glad that they put such a problem in Round 2 and not in Round 3 or the Finals, where every problem matters.

        Putting the fault on the organizers is stupid. It's a programming competition, and they even give you complete freedom to use any software you want. If you don't know how to compile and run programs properly, it's your and only your fault.

        • »
          »
          »
          »
          »
          6 years ago, # ^ |
            Vote: I like it +11 Vote: I do not like it

          I'm not putting the fault on them. I just said that I prefer CodeJam style contest over this. I wasn't the only one who suffered, and I do believe that it distorted the ranklist a little.

  • »
    »
    6 years ago, # ^ |
      Vote: I like it +17 Vote: I do not like it

    The easiest solution to this issue was to do local testing on a max test prior to submitting.

    • »
      »
      »
      6 years ago, # ^ |
        Vote: I like it +3 Vote: I do not like it

      Yeah, especially since the generators are short enough. Here are some example max-test generators in Python 2 for this problem: random (non-uniform), star, binary, bamboo. Python 3 would need a few more parentheses.
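
      Since those are only links, here is a rough idea of what the bamboo-style max-test generator can look like, written in C++ rather than Python. The output format and the value of N are assumptions, not the actual input spec; a straight chain maximizes recursion depth, which is exactly what breaks the default stack:

        #include <cstdio>

        int main() {
            // Assumed format: N on the first line, then the parent of node i for i = 1..N-1.
            // parent[i] = i - 1 produces a bamboo (one long chain).
            const int N = 500000;  // replace with the real constraint from the statement
            printf("%d\n", N);
            for (int i = 1; i < N; ++i)
                printf("%d\n", i - 1);
            return 0;
        }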

  • »
    »
    6 years ago, # ^ |
      Vote: I like it +8 Vote: I do not like it

    Regarding stack issues: you can set your stack size programmatically using this code:

    #include <sys/resource.h>
    #include <cstdio>

    // ... rest of the solution above main ...

    int main() {
      const rlim_t kStackSize = 60 * 1024 * 1024;  // desired stack size: 60 MB
      struct rlimit rl;
      int result;

      // Raise the soft stack limit if it is currently below kStackSize.
      result = getrlimit(RLIMIT_STACK, &rl);
      if (result == 0) {
        if (rl.rlim_cur < kStackSize) {
          rl.rlim_cur = kStackSize;
          result = setrlimit(RLIMIT_STACK, &rl);
          if (result != 0) {
            fprintf(stderr, "setrlimit returned result = %d\n", result);
          }
        }
      }

      // ... rest of main ...
    }

    You can put it at the start of the main function and things will run well. Here is a sample solution that sets the stack size this way:

    Solution
»
6 years ago, # |
  Vote: I like it +22 Vote: I do not like it

Feels bad, I didn't get the second problem since my computer's stack memory is too small :(

  • »
    »
    6 years ago, # ^ |
      Vote: I like it +7 Vote: I do not like it

    I also failed B cuz of stack overflow. S**t.

  • »
    »
    6 years ago, # ^ |
    Rev. 3   Vote: I like it +12 Vote: I do not like it

    If you are on Linux, you can change the stack size with the command ulimit -s 1048576. It will set the stack size to 1 GB, which is enough.

    • »
      »
      »
      6 years ago, # ^ |
        Vote: I like it +8 Vote: I do not like it

      ulimit -s unlimited is what I usually do.

  • »
    »
    6 years ago, # ^ |
      Vote: I like it +8 Vote: I do not like it

    Same here

  • »
    »
    6 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    well, it's not that hard to google in 6min how to fix that

    • »
      »
      »
      6 years ago, # ^ |
        Vote: I like it +59 Vote: I do not like it

      Except that you do not know it's definitely Stack Space causing RTE, and not some minor fault in your program, which you are more likely to check first?

      • »
        »
        »
        »
        6 years ago, # ^ |
          Vote: I like it +27 Vote: I do not like it

        Well, you run the debugger and see the stack trace where it fails. It shows you both minor errors and stack overflows.

        • »
          »
          »
          »
          »
          6 years ago, # ^ |
            Vote: I like it 0 Vote: I do not like it

          Could you provide an example of how to use a debugger like gdb or lldb to examine the stack trace? When I use lldb I only find "stop reason = EXC_BAD_ACCESS". When I use gdb I only find "error reading variable: Cannot access memory at address 0x0>".

          • »
            »
            »
            »
            »
            »
            6 years ago, # ^ |
              Vote: I like it +3 Vote: I do not like it

            I use the visual debugger in CLion, but in gdb it should be enough to enter the command bt.

      • »
        »
        »
        »
        6 years ago, # ^ |
          Vote: I like it +5 Vote: I do not like it

        You may use GCC's -fsanitize=address flag.

    • »
      »
      »
      6 years ago, # ^ |
        Vote: I like it +27 Vote: I do not like it

      I tried, but nothing I found in time worked.

      I'm on a Mac, so much of what I found online didn't help.

      • »
        »
        »
        »
        6 years ago, # ^ |
          Vote: I like it +25 Vote: I do not like it

        Also, on a Mac, this worked for me:

        g++ -std=c++17 -Wl,-stack_size -Wl,0x10000000 candy.cpp

        That is, it worked for me after my window expired :-(

      • »
        »
        »
        »
        6 years ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

        Mac, too.

        And I can’t find anything useful on google, either

        My friends told me to use ulimit -s XXXX

    • »
      »
      »
      6 years ago, # ^ |
        Vote: I like it +14 Vote: I do not like it

      Suuuure... But you have to spend X minutes to figure out wtf is going on. I straight up thought that my solution was wrong somewhere, then debugged in panic before realizing that my code suddenly crashed randomly at node 50000, hence X=5 for me D:

      • »
        »
        »
        »
        6 years ago, # ^ |
          Vote: I like it +15 Vote: I do not like it

        Well, I don't know what your debugging strategy is, but the first thing I do when I get a runtime error is run the debugger and examine the stack trace to find the actual line where it fails. Seeing a stack of thousands of functions tells you that you either have too small a stack or are recursing infinitely.

        • »
          »
          »
          »
          »
          6 years ago, # ^ |
          Rev. 2   Vote: I like it 0 Vote: I do not like it

          Your strategy makes sense, but in my case, stack overflowing is the last thing I would ever think of because I recall having increased it a loooong time ago. Then I realized I was coding on a computer I just bought recently xD.

          Otherwise yeah, debugger should have prevented that.

  • »
    »
    6 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Very strange. I had the same problem. There was a segment tree on 1 << 19 vertices; I fixed it by changing the size to 1 << 20 (after six minutes another fix for this problem appeared: stack 300m -> 500m). Here is the code of my program without the stack overflow: 1 << 20

»
6 years ago, # |
Rev. 2   Vote: I like it +23 Vote: I do not like it

Was Jack's Candy Shop just merging the smaller set into the greater one (DSU on trees)?

UPD: Failed because of stack overflow :(
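
For reference, a minimal small-to-large sketch of that idea, under assumed definitions (node v holds one candy whose value is its label, buyers[v] customers stand at v, and a customer at v may take any unsold candy in v's subtree); input parsing is omitted, and the DFS is recursive, so the stack-size caveat discussed elsewhere in this thread applies:

    #include <bits/stdc++.h>
    using namespace std;

    // Small-to-large merging: each node keeps a multiset of unsold candy values in its
    // subtree. A child's set is merged into the larger of the two, so every value moves
    // O(log n) times overall. Customers at a node then greedily take the current maximum.
    int n;
    vector<vector<int>> children;
    vector<int> buyers;
    vector<multiset<int>*> avail;
    long long answer = 0;

    void dfs(int v) {                      // recursive, so deep trees need a big stack
        avail[v] = new multiset<int>{v};   // the candy at v itself (value = label, assumed)
        for (int c : children[v]) {
            dfs(c);
            if (avail[c]->size() > avail[v]->size()) swap(avail[v], avail[c]);
            for (int x : *avail[c]) avail[v]->insert(x);   // merge smaller into larger
            delete avail[c];
        }
        while (buyers[v] > 0 && !avail[v]->empty()) {      // each customer takes the priciest candy left
            auto it = prev(avail[v]->end());
            answer += *it;
            avail[v]->erase(it);
            --buyers[v];
        }
    }

    int main() {
        // Tiny hard-coded demo: path 0 - 1 - 2, one customer at the root and one at node 2.
        n = 3;
        children = {{1}, {2}, {}};
        buyers   = {1, 0, 1};
        avail.assign(n, nullptr);
        dfs(0);
        printf("%lld\n", answer);   // node 2's customer takes candy 2, the root's takes candy 1 -> 3
        return 0;
    }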

  • »
    »
    6 years ago, # ^ |
      Vote: I like it +11 Vote: I do not like it

    Segmentation fault here too bro xD

    • »
      »
      »
      6 years ago, # ^ |
        Vote: I like it +5 Vote: I do not like it

      I got a segmentation fault too. Is it possible that it happened because of a stack overflow on my computer?

      • »
        »
        »
        »
        6 years ago, # ^ |
          Vote: I like it +5 Vote: I do not like it

        Yeah, I think that's exactly what happened...

  • »
    »
    6 years ago, # ^ |
      Vote: I like it +5 Vote: I do not like it

    Yes, I tried the question using the same approach but couldn't generate the output file because of the stack-overflow issue.

»
6 years ago, # |
  Vote: I like it +8 Vote: I do not like it

Simple O(N^5) passed C.

  • »
    »
    6 years ago, # ^ |
      Vote: I like it +10 Vote: I do not like it

    I genuinely couldn't come up with it and found D easier.

    • »
      »
      »
      6 years ago, # ^ |
        Vote: I like it +11 Vote: I do not like it

      How do you solve D? I found C pretty doable (in O(N^5)) but came up with nothing for D for an hour.

      • »
        »
        »
        »
        6 years ago, # ^ |
          Vote: I like it +3 Vote: I do not like it

        DP optimization can be done with segment tree and lazy update (I think).

      • »
        »
        »
        »
        6 years ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

        I think it's very similar to USACO 2012 Bookshelf.

      • »
        »
        »
        »
        6 years ago, # ^ |
          Vote: I like it +11 Vote: I do not like it

        I got it 3 mins after contest ended.

        Obviously, sort the fossils by position first.

        The idea is that we have dp of the form

        dp[i] = min_{j ≥ L[i]} (dp[j-1] + max(a_j, a_{j+1}, ..., a_i)) + S, where L[i] is the smallest index such that P_i - P_{L[i]} ≤ 2M.

        My solution is to maintain the values F[j] = dp[j-1] + max(a_j, a_{j+1}, ..., a_i) as i increases, and note that when we have a new value a_{i+1} with a_k, a_{k+1}, ..., a_i ≤ a_{i+1}, we can remove F[k+1], F[k+2], ..., F[i] from our set, since dp is non-decreasing and the max part is the same. However, when we increment L[i] to L[i+1], we might need some of the values F[k+1], F[k+2], ..., F[i] back. We can handle that by adding the F[.] values one by one as we increment our pointer from L[i] to L[i+1]. Finally, we just need to take the minimum from the set to update the dp.

        My explanation might not be very clear. Here is my code.

        • »
          »
          »
          »
          »
          6 years ago, # ^ |
            Vote: I like it 0 Vote: I do not like it

          I came up with the exact same solution (that I didn't manage to code in time, unfortunately). However, I can't figure out the complexity of maintaining a stack/deque of prefix maxima. Could somebody post it?

          • »
            »
            »
            »
            »
            »
            6 years ago, # ^ |
              Vote: I like it 0 Vote: I do not like it

            It's O(N) because you add at most N elements and remove at most N elements

      • »
        »
        »
        »
        6 years ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

        https://ideone.com/8w07Sk

        It's DP optimization. I think O(N^2) is easy and optimizing it is easy too.

        For O(N^2), just sort the x coordinates and create a shaft between consecutive indices using DP.

        For O(N log N), get the minimum from a segment tree, and also update the "maximum value up to index i" values simultaneously using a stack.

      • »
        »
        »
        »
        6 years ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

        First of all, the rectangles don't need to intersect. Afterwards, you can reduce the problem to the following:

        Given N points on the OX axis, the i-th of which has weight W_i, cover these points with intervals of length at most 2M. Let K be the number of intervals used; then the cost of such a covering is K·S plus the sum of the maximum weights in each of the K ranges.

        Sort the points in increasing order. Let dp_i be the minimum cost of a covering of the first i points with intervals of maximum length 2M.

        It's pretty easy to implement this in O(N^2) time. You can do better by close inspection of the recurrence relation — maintain a stack of maximums while going from i = 1 to i = n and updating the costs used in the dp in a segment tree (operations: add value to range, query maximum in range). Time complexity O(N log N).
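
        To make the O(N^2) part concrete, here is a rough sketch of just that version under the reduction above (the segment-tree speed-up is not shown); the minCost() name and the demo numbers in main are made up:

          #include <bits/stdc++.h>
          using namespace std;

          // O(N^2) DP for the reduced problem: points with weights, covered by intervals of
          // length at most 2M, cost = S per interval plus the maximum weight inside it.
          long long minCost(vector<pair<long long,long long>> pts, long long M, long long S) {
              sort(pts.begin(), pts.end());                 // sort points by coordinate
              int n = pts.size();
              const long long INF = LLONG_MAX / 4;
              vector<long long> dp(n + 1, INF);             // dp[i] = best cost covering the first i points
              dp[0] = 0;
              for (int i = 1; i <= n; ++i) {
                  long long mx = 0;
                  // The last interval covers points j..i; extend it left while it still fits in 2M.
                  for (int j = i; j >= 1 && pts[i - 1].first - pts[j - 1].first <= 2 * M; --j) {
                      mx = max(mx, pts[j - 1].second);
                      dp[i] = min(dp[i], dp[j - 1] + S + mx);
                  }
              }
              return dp[n];
          }

          int main() {
              // Made-up demo: with M = 2 and S = 7, the first two points share one interval.
              vector<pair<long long,long long>> pts = {{0, 3}, {4, 1}, {10, 5}};
              printf("%lld\n", minCost(pts, 2, 7));   // (7 + 3) + (7 + 5) = 22
              return 0;
          }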

      • »
        »
        »
        »
        6 years ago, # ^ |
          Vote: I like it +23 Vote: I do not like it

        I was able to do it without any segment trees / range updates. Code: https://pastebin.com/5661e30m

        First, sort all the fossils by x-coordinate. Let f(i) denote the minimum cost to get all fossils with index ≥ i. We compute f(i) in decreasing order of i.

        For any group of fossils with index ≥ i, the first (leftmost) shaft should have its left edge at x_i.

        Case 1: The first shaft does not overlap any other shaft. In this case it needs to cover all the fossils within its width. As i decreases we use a set of pairs to maintain the set of fossils that are in this range and retrieve the greatest depth among them.

        Case 2: The first shaft overlaps some other shaft. The second shaft will need to have its left edge at the first fossil not covered by the first shaft. The second shaft is assumed to be at least as deep as the first; otherwise we could just move it to the right until it no longer overlaps. These two facts together mean that there should not exist any fossil that is both to the left and deeper than the first fossil not covered by the first shaft. We refer to the set of fossils that could satisfy this constraint as the dominating set.

        We can maintain the dominating set using a deque. They are kept ordered by their x-coordinate. We pop fossils off the back once their x-coordinate is too great to be the left edge of the overlapping second shaft. Before inserting any new fossil to the front, we pop from the front all fossils that are dominated by it.

        Note that for any group  ≥ i, the dominating set of fossils includes fossil i. For any fossil j ≠ i in the dominating set, the total cost if the second shaft begins at that fossil is given by S + D + f(j), where D is the depth of the fossil appearing to the left of j in the dominating set (its depth determines the necessary depth for the first shaft). We can maintain a separate set of these "candidates" for f(i) and update it accordingly whenever the dominating set is modified.

        The answer for f(i) is given by the minimum among the candidate from Case 1 and the set of candidates from Case 2. The solution is fast enough because each fossil enters and leaves the dominating set at most once.

  • »
    »
    6 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    I did the same. Wonder if this problem can be solved in better complexity.

    • »
      »
      »
      6 years ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      https://ideone.com/D1DMBH

      I did it in O(N^3). It's nothing more than some case analysis. For example, we can use a RIGHT, UP pair before s or UP, LEFT after s to cover s. Same thing for e.

  • »
    »
    6 years ago, # ^ |
      Vote: I like it +18 Vote: I do not like it

    why not? 50^5 ≈ 3·10^8

»
6 years ago, # |
  Vote: I like it -24 Vote: I do not like it

This B is a disaster.

  • »
    »
    6 years ago, # ^ |
    Rev. 2   Vote: I like it +13 Vote: I do not like it

    There is a really neat solution. Put the Euler path into a segment tree, and then satisfy the customer in increasing depth[C_i] order by extracting the maxima of the subtrees.

    • »
      »
      »
      6 years ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      I did exactly that on B and got accepted.

      Spoiler
    • »
      »
      »
      6 years ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      I did so too, with Timestamps + Segment Tree, but my stack crashed. Had to change it to iterative DFS but couldn't make it on time :'(

  • »
    »
    6 years ago, # ^ |
      Vote: I like it +3 Vote: I do not like it

    A disaster? It looks like something from an educational round. It was a very straightforward application of the small-to-large technique.

»
6 years ago, # |
  Vote: I like it +31 Vote: I do not like it

When you forgot to print the number of edges in A...

  • »
    »
    6 years ago, # ^ |
      Vote: I like it +8 Vote: I do not like it

    Same :c

  • »
    »
    6 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    When you put 3 nodes as an edge case with the solution set to 0 and don't realize that 3 4 has solution 1 :((. A short and sad story for me that ended in not getting a T-shirt, since I only managed to solve B.

»
6 years ago, # |
  Vote: I like it +5 Vote: I do not like it

When you find out that you should change your computer due to the stack size...

I bought my computer about 8 years ago...

  • »
    »
    6 years ago, # ^ |
      Vote: I like it +11 Vote: I do not like it

    Well, I bought my computer just 1 month ago. I forgot to increase my stack size...

    Seems like both extreme cases screw people up the same way :P

»
6 years ago, # |
  Vote: I like it 0 Vote: I do not like it

When you have high hopes for top 500 and then fail B due to stack size... :^)

»
6 years ago, # |
  Vote: I like it 0 Vote: I do not like it

Aren't editorials out?

»
6 years ago, # |
  Vote: I like it +21 Vote: I do not like it

I had a really simple solution for B: since we want to sell the most expensive candy possible, iterate candies from n-1 to 0. We can buy a candy if it has a grandparent with at least 1 buyer remaining. Let's check this by moving up the parent chain until we either find some buyers or reach the root. Naive implementation of this idea would be O(nodes*edges), but we can speed it up by not traversing the same chains over and over again. Let's say we traversed nodes from 9->5->7->2->3 and we found first buyer at node 3. We use this buyer for node 9 and it's possible there are more buyers at 3 or at its grandparents. Since we know that there are no buyers before node 3, we can mark that down to all nodes we traversed (5,7,2). For example, later when we want to know if node 5 can be bought, we don't have to traverse from 5 again, we can start traversing from 3.

Code: https://pastebin.com/tU2B1GnF

I thought that this would run in O(nodes + edges), but surprisingly it took 3 minutes to run. The time limit was 6 minutes, so I barely got it in on time. Can anyone analyze why it's slower than expected?

  • »
    »
    6 years ago, # ^ |
      Vote: I like it +8 Vote: I do not like it

    Because you should use this: Link to DSU. This makes the algorithm O(M + N·α(N)).
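
    For illustration, a minimal sketch of that find-with-path-compression idea, assuming par[v] is the parent (-1 at the root), buyers[v] is the number of remaining customers at v, and a candy's value is its label; the little tree in main is made up:

      #include <bits/stdc++.h>
      using namespace std;

      vector<int> par, up, buyers;

      // Nearest ancestor of v (or v itself) that still has a buyer, or -1 if none.
      // up[v] is a compressed shortcut past ancestors already known to have no buyers left,
      // so repeated walks over the same chain are amortized away, DSU-style.
      int findBuyer(int v) {
          vector<int> path;
          while (v != -1 && buyers[v] == 0) {
              path.push_back(v);
              v = (up[v] == v) ? par[v] : up[v];
          }
          for (int u : path) up[u] = v;   // path compression
          return v;
      }

      int main() {
          // Made-up demo: chain 0 <- 1 <- 2 <- 3, a single buyer at the root.
          int n = 4;
          par = {-1, 0, 1, 2};
          buyers = {1, 0, 0, 0};
          up.resize(n);
          iota(up.begin(), up.end(), 0);

          long long total = 0;
          for (int candy = n - 1; candy >= 0; --candy) {   // most expensive candy first
              int b = findBuyer(candy);
              if (b != -1) {                               // sell this candy to that buyer
                  total += candy;
                  --buyers[b];
              }
          }
          printf("%lld\n", total);   // the single buyer gets candy 3 -> prints 3
          return 0;
      }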

  • »
    »
    6 years ago, # ^ |
      Vote: I like it +13 Vote: I do not like it

    I got some help with this and improved my solution so it's now linear: https://pastebin.com/PTsahgpw

    (The worst case for my old code was a long chain such as 0->6->5->4->3->2->1. The new code traverses it quickly by making "small jumps" when necessary.)

  • »
    »
    6 years ago, # ^ |
      Vote: I like it +43 Vote: I do not like it

    Yup, I did it exactly like this (+ the DSU trick). It's a neat 30 lines of code imo! :D

    Code
»
6 years ago, # |
  Vote: I like it -88 Vote: I do not like it

Why do only 200 people advance to Round 3? I think it would be fair if at least 250 advanced, as that's half the number of people who will get T-shirts. Can you really consider it?

  • »
    »
    6 years ago, # ^ |
      Vote: I like it +40 Vote: I do not like it

    Why do only 200 people advance to Round 3? I think it would be fair if at least 500 advanced, as that's the number of people who will get T-shirts. Can you really consider it?

    • »
      »
      »
      6 years ago, # ^ |
        Vote: I like it +177 Vote: I do not like it

      Why do as many as 200 people advance to Round 3? I think it would be fair if exactly 71 advanced, as it would make me advance yet keep the competition in the next round as small as possible.

      • »
        »
        »
        »
        6 years ago, # ^ |
          Vote: I like it +23 Vote: I do not like it

        Meh, why as many as 200 if it could have been 194?

        • »
          »
          »
          »
          »
          6 years ago, # ^ |
            Vote: I like it +21 Vote: I do not like it

          Or 195 (just to troll Radewoosh) :)

          • »
            »
            »
            »
            »
            »
            6 years ago, # ^ |
              Vote: I like it +55 Vote: I do not like it

            Why even make round 3? They could just let top 25 advance to the Finals.

»
6 years ago, # |
Rev. 2   Vote: I like it +8 Vote: I do not like it

Sorry, my friends posted it; they just wanted me to get downvotes.

Sorry about that.

  • »
    »
    6 years ago, # ^ |
    Rev. 3   Vote: I like it -82 Vote: I do not like it

    Sorry, my friend posted it; he just wanted to play a joke on me. I did not mean to make the comment in Rev. 1. Sorry.

»
6 years ago, # |
  Vote: I like it +1 Vote: I do not like it

Is there any way to increase the stack limit (say, to 40 MB) permanently (i.e., for all programs I run on my PC)?
P.S.: I use Ubuntu 16.04 and the Sublime Text editor.

  • »
    »
    6 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Yes. I just increased my stack size permanently, after I got a seg fault on B. :( To set the stack limit (say, to 32 MB), open your ~/.bashrc file and paste this line (change the number to your desired limit in KB):

    ulimit -s 32768
    

    This will increase your stack limit every time you open up your terminal.

»
6 years ago, # |
  Vote: I like it +13 Vote: I do not like it

Not a complaint, just a funny fact. My solution for C was ready 15 minutes before the contest was over. I built it successfully, ran it... and got a message that the 'executable file format is not recognized by Windows'. Whaaaaat? I use VS with the same solution file for all problems in a contest and never had such an issue. I tried rebuilding it, played with different configurations (Release, Debug, Win64), deleted all auto-generated folders, and copied the source code into the working project for problem A. Nothing helped. The build succeeded without any warning, but run/debug didn't work because Windows didn't recognize the file format. As it turned out, the problem was in the array I statically allocated for the DP:

int dp[55][55][55][55][55];

It seems it exceeded some limit and when I changed it to

int dp[51][51][51][51][51];

It magically started working (not immediately, but after I wiped out all auto-generated folders, including .vs). Unfortunately, this was done after the contest was over. And what is more sad, this solution without any changes passed all tests. I could have ended up in the first 120 and advanced to the 3rd round. :) Lesson learned: if you statically allocate memory close to 1.5 GB, be prepared to get a 'non-Windows executable binary' when compiling in VS.
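
For what it's worth, a rough size check (my arithmetic, not from the post) is consistent with that:

    55^5 · 4 bytes = 2,013,137,500 bytes ≈ 2.0 GB
    51^5 · 4 bytes = 1,380,101,004 bytes ≈ 1.4 GB

So the 55-sized array plausibly crosses a limit of roughly 2 GB on static data in the executable image, while the 51-sized one stays under it.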