LoneFox's blog

By LoneFox, history, 4 years ago, In English

Round 1 of the 2020 Facebook Hacker Cup is less than 48 hours away!

The round will begin on August 15th, 2020 at 10am PDT and will last for 24 hours. The contest can be found here.

You're eligible to compete in Round 1 if you solved at least one problem correctly in the Qualification Round.

Everyone who scores at least 25 points (regardless of time taken) will advance to Round 2, which will take place on Sat. Aug. 29th. Please note that all submission judgments will only be revealed after the round ends. More details about rules and other information can be found in the FAQ.

We wish you the best of luck, and hope that you enjoy the contest!

The Facebook post can be found here.


Update 1: Due to an error with problem B (Dislodging Logs), only 25 points are now required to advance to Round 2, rather than 30.

Update 2: The round has ended, thank you for participating! The scoreboard can be found here, and the solutions here.

Update 3: As an update on prizes, everybody who solves at least one problem in the upcoming Round 2 will receive a Facebook Hacker Cup t-shirt! The top 200 competitors from Round 2 will also have a "Top 200" badge on their shirt.

Vote: I like it +87 Vote: I do not like it

»
4 years ago, # |
  Vote: I like it +17 Vote: I do not like it

When will the announcement about the t-shirts be made?

  • »
    »
    4 years ago, # ^ |
      Vote: I like it +5 Vote: I do not like it

The only reason anyone below international grandmaster would participate in it.

  • »
    »
    4 years ago, # ^ |
      Vote: I like it +12 Vote: I do not like it

It's already mentioned in the 'Terms of Service'.

    • »
      »
      »
      4 years ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      Wow, you are right. Top 2000 from round 1 will receive t-shirts.

      • »
        »
        »
        »
        4 years ago, # ^ |
          Vote: I like it +46 Vote: I do not like it

        Sorry, that information mentioned in the Terms had been a placeholder and has been removed for now. We hope to offer such prizes based on Round 2, but we're unfortunately unable to make any public announcements about that quite yet, due to working out the effects of COVID-19 on prize shipping. We will announce more information when possible, before Round 2!

        • »
          »
          »
          »
          »
          4 years ago, # ^ |
          Rev. 3   Vote: I like it +102 Vote: I do not like it

        • »
          »
          »
          »
          »
          4 years ago, # ^ |
            Vote: I like it +29 Vote: I do not like it

Didn't all the participants already agree to the terms and conditions? Can you really change that now, after setting expectations?

          • »
            »
            »
            »
            »
            »
            4 years ago, # ^ |
              Vote: I like it +8 Vote: I do not like it

Yes, changes to the TOC are accounted for in section 15.1 of the TOC.

            • »
              »
              »
              »
              »
              »
              »
              4 years ago, # ^ |
                Vote: I like it +3 Vote: I do not like it

              Damn! I can't believe I agreed to such a condition without reading it. XD

Anyway, I think every other TOC that I have actually read had this clause. Reading the TOC is basically a formality nowadays. :)

              • »
                »
                »
                »
                »
                »
                »
                »
                4 years ago, # ^ |
                  Vote: I like it +72 Vote: I do not like it

                100% of the time terms and conditions basically translates to "We can do whatever we want".

»
4 years ago, # |
  Vote: I like it 0 Vote: I do not like it

Are you guys done reviewing all the disabled accounts LoneFox?

  • »
    »
    4 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Yes, I believe we've helped each participant who reached out to us at [email protected] with details regarding a disabled FB account (generally caused by new FB accounts with no profile information sending friend requests to others, which was activity flagged as being normally associated with fake/spam accounts).

    • »
      »
      »
      4 years ago, # ^ |
        Vote: I like it +3 Vote: I do not like it

      generally caused by new FB accounts with no profile information sending friend requests to others, which was activity flagged as being normally associated with fake/spam accounts

Lol, that reminds me of a conversation I once had: "just contact me on FB" "I don't have FB" "make an account, you don't need to use it for anything except as a messenger" "when I tried that, my account got insta-locked every time" "nah, there's no way" "well, yes way, mail me".

»
4 years ago, # |
  Vote: I like it +173 Vote: I do not like it

Please prefer formal problem statements in further rounds.

  • »
    »
    4 years ago, # ^ |
      Vote: I like it +1 Vote: I do not like it

    What do you mean by "formal" in this case?

    • »
      »
      »
      4 years ago, # ^ |
        Vote: I like it +33 Vote: I do not like it

Where the problem statement is not written like an essay.

    • »
      »
      »
      4 years ago, # ^ |
      Rev. 2   Vote: I like it +57 Vote: I do not like it

By formal, I mean very short problem statements without any story. I feel the current problem statements are too long because of the story. Alternatively, you could write the story in the first paragraph in italics, in such a way that someone who skips the first paragraph doesn't lose anything.

For example, see https://codeforces.com/contest/1286/problem/F: the story ends in the first line.

Problem setters often write a long story and end the last paragraph with "Formally, find the number of subarrays of a given array with positive sum." Someone not interested in reading the story can just read the paragraph starting with "Formally". That paragraph often uses mathematical notation to describe the problem in a line or two.

»
4 years ago, # |
Rev. 5   Vote: I like it -46 Vote: I do not like it

LoneFox, where is the time limit for each question mentioned?

I am not talking about the 6-minute rule.

I am talking about the maximum allowed running time of my program. aryanc403, can you help me?

  • »
    »
    4 years ago, # ^ |
      Vote: I like it +11 Vote: I do not like it

    no one limits it, no one cares about it

    • »
      »
      »
      4 years ago, # ^ |
        Vote: I like it -23 Vote: I do not like it

Does that mean code doing more than 10^12 iterations is fine? WitchOfTruth

      • »
        »
        »
        »
        4 years ago, # ^ |
          Vote: I like it +17 Vote: I do not like it

All they need is the output file, so as long as your code finishes on the given input before the 6-minute countdown ends, it's fine.

  • »
    »
    4 years ago, # ^ |
    Rev. 3   Vote: I like it +1 Vote: I do not like it

You can even write a program which takes 1e18 operations, as long as you use a language with a free compiler/interpreter and it completes those 1e18 operations in less than 6 minutes. I'm not sure if using GPUs/supercomputers is allowed, but you can check the rules.

    Spoiler
»
4 years ago, # |
  Vote: I like it +63 Vote: I do not like it

Why, MacOS stack size limit, why...

  • »
    »
    4 years ago, # ^ |
      Vote: I like it +9 Vote: I do not like it

I had the same problem, but on Windows. The thing is that on Windows the default stack size is 1 MB, while on Linux and macOS it is bigger. This is highly operating-system dependent, but it can still be changed with a few commands. Luckily, the same code that gave me an RTE due to the stack on Windows gave me AC on Ubuntu, but the submission time had passed...

    • »
      »
      »
      4 years ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

Unfortunately macOS has a hard limit of 64 MB, and I'm not sure if there's a workaround.

      I even tested my solution with some test cases that resulted in deep recursions, and there was no problem (since I got similarly burned in the qualification round).

      So I went over to my Linux desktop, fumbled around transferring the code and the input file, it ran no problem... and I was 2 seconds too late.

      Next time I'll just SSH to the desktop...

      • »
        »
        »
        »
        4 years ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

I also ran into this problem: passed validation but sadly... I should have tested my code with max limits beforehand, tbh, since it's a 24h contest.

Anyone know a good solution to this other than having another desktop? Are there safe and fast remote sites to run code with file input? I only have a Mac for now, and virtual machines probably don't work if they're on a Mac anyway.

Otherwise I will just have to code things bottom-up or implement my own stack next time, I guess... which is quite sad but doable.

        • »
          »
          »
          »
          »
          4 years ago, # ^ |
            Vote: I like it 0 Vote: I do not like it

Instead of creating arrays of size 10^6 on the stack, create dynamic arrays using pointers and new. This runs fine on my Windows 10, as the memory is then allocated on the heap and not on the stack.
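
For example, a minimal sketch of the idea (sizes and names are just illustrative):

#include <vector>

int main() {
    // int a[1000000];              // ~4 MB on the stack: may crash with a 1 MB stack limit
    int* a = new int[1000000]();    // heap allocation with new, zero-initialized
    std::vector<int> b(1000000);    // or simply use std::vector, which is also heap-allocated
    a[999999] = b[999999];
    delete[] a;
    return 0;
}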

          • »
            »
            »
            »
            »
            »
            4 years ago, # ^ |
              Vote: I like it +5 Vote: I do not like it

Recursion also uses stack memory, and there is no easy way to configure it to use the heap instead.

          • »
            »
            »
            »
            »
            »
            4 years ago, # ^ |
              Vote: I like it 0 Vote: I do not like it

            It's OK to make huge arrays as long as they're global. Global variables aren't allocated on the stack.
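
A tiny illustration of that point (the size is arbitrary):

const int MAXN = 5000000;
int big[MAXN];   // global: lives in static storage, not on the stack, and is zero-initialized

int main() {
    // int local[MAXN];   // the same array declared here would likely overflow a 1 MB stack
    big[MAXN - 1] = 1;
    return 0;
}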

            • »
              »
              »
              »
              »
              »
              »
              4 years ago, # ^ |
                Vote: I like it +8 Vote: I do not like it

Well, I decided not to bother with coordinate compression and instead used a 500M array of ints for A1, which exceeded the 2 GB RAM limit of 32-bit Windows. Thankfully, after panicking for 5 minutes (it was the last hour of the contest), I just launched Ubuntu using Windows Subsystem for Linux, and everything worked like a charm there.

        • »
          »
          »
          »
          »
          4 years ago, # ^ |
            Vote: I like it +9 Vote: I do not like it

          Actually an Ubuntu virtual machine inside the Mac works, I did it that way because I didn't have a compiler on the Mac.

    • »
      »
      »
      4 years ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

This is highly operating system dependent, but can still be changed with a few commands

The maximum stack size can be changed?? If so, how?? I'm on Windows 10, and I'm encountering the same RTE problem...

      • »
        »
        »
        »
        4 years ago, # ^ |
        Rev. 2   Vote: I like it +9 Vote: I do not like it

Actually I couldn't change it, but I was able to google it and found some tutorials on how to do it; some suggest using options like /F and /STACK. I really couldn't figure it out, so I just ran the program in an Ubuntu virtual machine and that fixed the problem. Here are some links: Link 1 Link 2

      • »
        »
        »
        »
        4 years ago, # ^ |
          Vote: I like it +23 Vote: I do not like it

I use '-Wl,--stack,268435456' on Windows. Pass it as a command-line argument to g++. Try with and without the quotes; different shells parse this differently, afaik.
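
For instance, a full MinGW g++ invocation could look like the following (only the stack flag comes from the comment above; the remaining options are just an example):

g++ -std=c++17 -O2 -Wl,--stack,268435456 main.cpp -o main.exe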

  • »
    »
    4 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

Damn it, this was also my issue, except I was on Windows... and I don't even know the flag to increase the stack size. Alas, I was too inexperienced. I didn't even notice it was a stack overflow before the time ran out...

Lesson learned, I guess...

  • »
    »
    4 years ago, # ^ |
    Rev. 2   Vote: I like it +44 Vote: I do not like it

    g++ -std=c++17 -Wl,-stack_size -Wl,0x10000000 main.cpp works to increase the stack size on MacOS

    • »
      »
      »
      4 years ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      Thanks!

      I tried to find these command line options in the past, but couldn't get it to work.

    • »
      »
      »
      4 years ago, # ^ |
      Rev. 2   Vote: I like it 0 Vote: I do not like it

Thanks for the command. I've been bitten by the stack-overflow problem multiple times and have not dealt with it well.

I'm wondering what 0x10000000 means: the stack size in bytes (0x10000000 bytes is 256 MB) or the recursion depth? I searched the g++ documentation for this option without success.

g++ -std=c++17 -Wl,-stack_size -Wl,0x10000000 main.cpp // this command works for 10 million recursive calls on my Mac

g++ -std=c++11 -Wl,-stack_size -Wl,0x10000000 main.cpp // this one does not. I guess g++ fixed some bug in C++17 mode, or this is a recent enhancement


int test(long long t) {
    int a[3];
    if (t <= 0) return 0;
    if (t % 100000 == 0) cout << t << endl;
    int b = test(t - 1);
    if (b == 25) {
        b = 34 + a[2];
    } else {
        b = 53;
    }
    return b;
}

int main() {
    cin.sync_with_stdio(0);
    cin.tie(0);
    test(100000000);
    return 0;
}
  • »
    »
    4 years ago, # ^ |
    Rev. 2   Vote: I like it +5 Vote: I do not like it

    deleting post since too many down votes — perhaps it was too long or the code wasn't good :)

  • »
    »
    4 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    yeah, me too, and lost my problem C...

    • »
      »
      »
      4 years ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

I feel your pain! That's tough, losing two problems, man. Always worth it for the excitement when you do solve one, though; I kinda like the format... the excitement of your computer or connection crashing, making sure you upload the correct file, only having 6 minutes!

      • »
        »
        »
        »
        4 years ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

Yeah, man, lessons learned; now I would run my code with the stack-increasing command in Windows cmd, lol.

»
4 years ago, # |
  Vote: I like it +18 Vote: I do not like it

I saw that B got updated so that S=0 for all cases now. I submitted my solution before that change was made. Will my source code be run against the new cases?

  • »
    »
    4 years ago, # ^ |
      Vote: I like it -29 Vote: I do not like it

    Yes, earlier submissions will be rejudged accordingly, we apologize for the inconvenience!

    • »
      »
      »
      4 years ago, # ^ |
        Vote: I like it +5 Vote: I do not like it

      Is the code run by Facebook against the new S = 0 cases, or is our output file compared to the old S > 0 cases?

    • »
      »
      »
      4 years ago, # ^ |
        Vote: I like it +15 Vote: I do not like it

      How does this work? I just looked at my input file and S is not 0 (at least in the first few cases I checked, I didn't go through the whole file). And my solution assumes S can't be 0 (or at least, I'm not sure if it does).

      So, will my submission be judged against the original output?

    • »
      »
      »
      4 years ago, # ^ |
        Vote: I like it +13 Vote: I do not like it

      In my code, there is a division by S. How are you going to judge my solution?

    • »
      »
      »
      4 years ago, # ^ |
        Vote: I like it +21 Vote: I do not like it

      My submission depends on $$$s > 0$$$ in a very silly way, so rejudging won't work in my case.

Besides that, my code doesn't compile without an extra library of my own (just for basic templates). From my understanding of the following rule, it should be OK:

      What file formats can I submit?

      Your output and source code should both be in plain text. Their file extensions don’t matter. Your source code must be a single file. If your solution depends on additional library files, please upload only the single main source code file.

      Do not submit executable files (such as .exe files.).

  • »
    »
    4 years ago, # ^ |
    Rev. 3   Vote: I like it +21 Vote: I do not like it

I just submitted, and only after refreshing the page did I see the announcement... Since the contest is blind, they should let anyone resubmit with the new constraints. The constraint on S was previously strictly greater than 0, and someone who added an assertion may even fail now...

    • »
      »
      »
      4 years ago, # ^ |
        Vote: I like it +16 Vote: I do not like it

      Don't worry -- if need be we'll handle any such cases manually as they arise.

»
4 years ago, # |
Rev. 2   Vote: I like it +43 Vote: I do not like it

I just realized that S = 0 in the new, updated test for Problem B. This change happened just as I was submitting. When validating, I got the S != 0 input set. I waited around 10 minutes after submitting the validation output file to make sure my solution was correct, and then downloaded the final input file, unaware of this new constraint.

Upon running my code, I got an error and realized that S = 0. I reread the problem and discovered this new constraint. For this reason, I completely missed the submission window for B. I believe that something should be done about this.

First, all competitors that submitted before the change should be given an option to resubmit for the new version of the problem (if they think it's easier). Otherwise, those who submit B later get an advantage. I believe that this "resubmission" window should be rather lenient especially for users like me who had a gap between validation and final submission.

Furthermore, when something else like this happens, I believe that FHC should have a way of notifying contestants (e.g. CF's way of giving a browser notification), not simply rely on contestants refreshing the page. I had no idea this happened until I looked at the input file.

wjomlex LoneFox please look into this. I think that this is a potential reason to make the round "unrated". This new constraint will mess up the point values associated to problems and may cause an imbalance in problems.

Edit: My timer has been refunded. Thank you to FHC for the fast response! Nevertheless, I think a better notification system of these issues should be established. Do you think that the point values will be recalculated?

  • »
    »
    4 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

Yes, you're right, thank you for the suggestion. Aside from rejudging the submissions that were already made, we're also working to reset the timer for everybody who downloaded the input but made no submission, as they could also have been affected (though there are few such cases).

  • »
    »
    4 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Based on what we've seen, we're proceeding with the round and the existing point values. We believe that, though unfortunate, this issue has had little effect on the balance and results of the round.

»
4 years ago, # |
  Vote: I like it 0 Vote: I do not like it

I'm unable to download the validation input file for B (but did it for A1 earlier without issues). Am I missing something or is there an outage?

  • »
    »
    4 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    It should be working, can you please try again after reloading your page, and send us a clarification if the issue persists?

    • »
      »
      »
      4 years ago, # ^ |
        Vote: I like it +2 Vote: I do not like it

      Thanks for your reply. I was able to get it working by quitting Chrome entirely (it seems like the download was hanging in some no-person's-land).

      Now I have another issue -- I'm getting WA on the validation input. I figured out why my algorithm is wrong, but I'm not sure how to fix it. Are you the right person to ask for help with this? :D

      • »
        »
        »
        »
        4 years ago, # ^ |
        Rev. 2   Vote: I like it +1 Vote: I do not like it

        You can download the validation input and re-validate as many times as you need to. Feel free to send a clarification through the contest page if you encounter any issues.

»
4 years ago, # |
  Vote: I like it +50 Vote: I do not like it

I don't understand why the constraint on S was changed for problem B.

I think S > 0 worked perfectly fine.

  • »
    »
    4 years ago, # ^ |
      Vote: I like it +132 Vote: I do not like it

    Let's not discuss the problem from a running contest and wait for the official explanation later on (I have my suspicion about what happened).

    • »
      »
      »
      4 years ago, # ^ |
        Vote: I like it -8 Vote: I do not like it

      Did they forget to generate different S for each test case?

  • »
    »
    4 years ago, # ^ |
      Vote: I like it +137 Vote: I do not like it

    Assume $$$P$$$ and $$$Q$$$ to be sorted. Let $$$x_i$$$ be the index of the log driver that set the explosive charge on cluster $$$i$$$. For $$$S = 0$$$, there will be an optimal solution in which $$$x_i \leq x_{i + 1}$$$ for all $$$i$$$.

    But for $$$S > 0$$$, this might not be true.

    Example : $$$P = [0, 50]$$$, $$$Q = [1, 2, 3, 4, 75, 100]$$$, $$$S = 10^9$$$. In the optimal solution, log driver at position $$$0$$$ covers clusters at $$$1, 2, 100$$$, and the other log driver covers clusters at $$$3, 4, 75$$$.

»
4 years ago, # |
  Vote: I like it 0 Vote: I do not like it

Auto comment: topic has been updated by LoneFox (previous revision, new revision, compare).

»
4 years ago, # |
Rev. 2   Vote: I like it 0 Vote: I do not like it

For some reason I couldn't run the full test case file on 1B... and couldn't submit my solution within the 6 minutes. I tried running it with both file I/O and standard I/O.

Sad :( Does this have to do with the error just announced?

  • »
    »
    4 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    I'm afraid that would have been unrelated — you should have still received a valid input file, meaning that it would have been an issue with your code.

»
4 years ago, # |
  Vote: I like it -48 Vote: I do not like it

Why does making a problem easier reduce the cutoff instead of increasing it?

»
4 years ago, # |
  Vote: I like it +186 Vote: I do not like it

Using dynamite on the river kills the fish. The fish had their revenge in the failure of this task.

»
4 years ago, # |
Rev. 2   Vote: I like it -48 Vote: I do not like it

Help, LoneFox. I just solved the chapter 1 problem in today's round, and when I ran the actual input file on my 32-bit Python installation, it gave me a memory error, and being panicked, I couldn't think much at that time. Then, after my 6 minutes had elapsed, I tried the same input file and code on a 64-bit installation, and it generated the output file. I know it's my fault, but it still feels really bad when you have the solution and couldn't submit it in such an important contest because you chose the wrong version. Can anything be done regarding this, please?

  • »
    »
    4 years ago, # ^ |
      Vote: I like it +86 Vote: I do not like it

    Two things:

    1. You can still solve other tasks.
    2. You can learn from the experience.
»
4 years ago, # |
Rev. 2   Vote: I like it 0 Vote: I do not like it

Just one small doubt: do I get points for B if I submit now, after the change, if my solution complies with the change (S = 0) and is correct? I didn't make a submission for B before the change.

»
4 years ago, # |
Rev. 2   Vote: I like it 0 Vote: I do not like it

Update: Due to an error with problem B (Dislodging Logs), only 25 points are now required to advance to Round 2, rather than 30

So what about this problem? I assume that this error is now fixed, all samples (including the validation input) are correct and we can still solve it?

»
4 years ago, # |
  Vote: I like it -31 Vote: I do not like it

Weird inputs (large). Just declaring the array as global gave correct output; otherwise I was getting a segmentation fault.

Debugging it took more than 6 minutes, and I was not able to submit my solution.

Even B is giving WA (I think my logic is correct).

Now there seems to be no way I can make it to the next round.

  • »
    »
    4 years ago, # ^ |
      Vote: I like it +1 Vote: I do not like it

    You must not be initializing your array. Declaring globally implicitly initializes the array for you.

»
4 years ago, # |
  Vote: I like it +6 Vote: I do not like it

Submitted C; it showed "processing your submission" for ~1 minute, then the timer expired and it said a problem occurred, please try again. I'm not being a crybaby, but please do something for me.

»
4 years ago, # |
  Vote: I like it +10 Vote: I do not like it

The friends standings page is not available anymore; is this intended?

  • »
    »
    4 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    It looks like there's a bug affecting some people. We'll try to fix it soon.

»
4 years ago, # |
  Vote: I like it +133 Vote: I do not like it

Does anyone have a solution to the old version of B (S > 0)?

I was unable to come up with a solution that passes this test case:

input
output and explanation

This case also hacked all of my friends' solutions (possibly this is why they changed the problem).

Any insights?

  • »
    »
    4 years ago, # ^ |
      Vote: I like it +43 Vote: I do not like it

    My suspicion is that they thought the binary search would work for S > 0 too. Before seeing your example I was convinced that the optimal solution would involve the leftmost driver going to the leftmost groups of logs. Really curious about what the solution & best complexity is for the original problem

»
4 years ago, # |
  Vote: I like it 0 Vote: I do not like it

How to solve A2 and B? I was thinking of a lazy-propagation-based solution for A2 but couldn't come up with one! Could anyone please share the correct approach for these problems? Thanks in advance!

  • »
    »
    4 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    B is binary search. Process drivers from left to right. Leftmost driver visits the leftmost cluster, then tries to go right as far as possible. Or vice versa, he goes right then visits the leftmost cluster.

    • »
      »
      »
      4 years ago, # ^ |
      Rev. 2   Vote: I like it 0 Vote: I do not like it

Yeah, exactly; at first glance it looks like binary search, but how do you simulate it for a candidate k seconds? My mind was revolving around right-left-right-left movements only.

      • »
        »
        »
        »
        4 years ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

Well, there are 4 possible routes for each driver: L, R, LR, RL. We know the starting point and the leftmost point that must be visited, and we have to maximize the rightmost point reached. If we can't visit the leftmost point, the answer is no.
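
A minimal sketch of the feasibility check for one candidate time T, assuming S = 0, sorted driver positions P and cluster positions Q, and the greedy described above (names and signature are illustrative, not from the official solution):

#include <bits/stdc++.h>
using namespace std;

// Can all clusters be dislodged within time T (S = 0)?
bool feasible(long long T, const vector<long long>& P, const vector<long long>& Q) {
    size_t j = 0;                                 // leftmost not-yet-covered cluster
    for (long long p : P) {                       // drivers from left to right
        if (j == Q.size()) break;
        long long reach;
        if (Q[j] >= p) {
            reach = p + T;                        // route R: just walk right
        } else {
            long long left = p - Q[j];            // must reach the leftmost uncovered cluster
            if (left > T) return false;           // no later (further right) driver can reach it either
            // route LR: go left first, then right; route RL: go right first, then back left
            reach = max(p + T - 2 * left, p + (T - left) / 2);
        }
        while (j < Q.size() && Q[j] <= reach) ++j;   // everything up to 'reach' is covered
    }
    return j == Q.size();
}

Binary searching T over this predicate then gives the minimum possible time.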

  • »
    »
    4 years ago, # ^ |
    Rev. 2   Vote: I like it +22 Vote: I do not like it
    • »
      »
      »
      4 years ago, # ^ |
        Vote: I like it +15 Vote: I do not like it

      That's my problem, when I read it, I just copied my old code and it worked :D

      • »
        »
        »
        »
        4 years ago, # ^ |
        Rev. 3   Vote: I like it +8 Vote: I do not like it

        I'm so glad I had solved it before hackercup.
        I had taken almost 10-12 hours to figure out the correct solution back then. (Seeing it for the first time during hackercup would have been very bad for me XD).

        PS: It was a nice problem btw!

»
4 years ago, # |
  Vote: I like it +3 Vote: I do not like it

Can someone share his solution for problem C?

  • »
    »
    4 years ago, # ^ |
    Rev. 4   Vote: I like it +1 Vote: I do not like it

    For each vertex, calculate with DP the following values only considering its subtree:

    • total number of possible pairs
    • size of the current connected component of uninfected patients
    • maximum size of the connected component of uninfected patients
    • number of vertices included in maximum size connected components

Now, consider removing an edge connecting a vertex u and its parent. The optimal move is to connect the maximum-size connected components of the two separated trees. For the subtree of u, we can just use the DP values from before. For the other tree, the DP values coming from the parent side can be calculated with the rerooting technique. You should take care of the case where there is no uninfected patient in one of the trees: any pair is optimal in that case.
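
Not the full solution, but as a building block, here is a sketch of computing the pair count contributed by the uninfected components before any edge is changed (the function name and signature are illustrative):

#include <bits/stdc++.h>
using namespace std;

// Baseline only: sum of C(s, 2) over connected components of uninfected vertices.
// The rerooting DP described above refines these quantities per removed edge.
long long baselinePairs(int n, const vector<vector<int>>& adj, const vector<bool>& infected) {
    vector<bool> seen(n, false);
    long long pairs = 0;
    for (int s = 0; s < n; s++) {
        if (infected[s] || seen[s]) continue;
        long long sz = 0;
        vector<int> stk = {s};
        seen[s] = true;
        while (!stk.empty()) {                    // iterative DFS restricted to uninfected vertices
            int v = stk.back(); stk.pop_back();
            sz++;
            for (int to : adj[v])
                if (!infected[to] && !seen[to]) { seen[to] = true; stk.push_back(to); }
        }
        pairs += sz * (sz - 1) / 2;               // each component contributes C(sz, 2) pairs
    }
    return pairs;
}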

»
4 years ago, # |
  Vote: I like it +8 Vote: I do not like it

I had all the cases for C down (no components, 1 component, multiple components) and passed validation but failed the real tests due to a bug in one of my dfs I guess..kind of rip

»
4 years ago, # |
Rev. 2   Vote: I like it 0 Vote: I do not like it

Can someone post the correct validation output for C? I can't for the life of me figure out what's wrong with my logic/code.

Edit: Damn. If anyone is wondering what was wrong, my for loop to generate edges used 0-based indexing, so, copying over the provided formula that uses 1-based indexing, I calculated the edges mod $$$i-2$$$ instead of $$$i-1$$$. This unluckily passes the only sample that generates additional edges, and that's 4 hours of my life gone.

  • »
    »
    4 years ago, # ^ |
      Vote: I like it +11 Vote: I do not like it
    Correct output

    The thing I missed for quite some time is that I allowed removing edges between two uninfected nodes.

    • »
      »
      »
      4 years ago, # ^ |
        Vote: I like it +5 Vote: I do not like it

      Wouldn't removing edges between two uninfected nodes be fine as long as the result remained a tree and the size of the newly created uninfected component was the maximum possible?

      Like say we had three connected areas of uninfected rooms of size $$$x$$$. Suppose removing an edge from one of them breaks the graph into two connected components such that the other two uninfected areas of size $$$x$$$ were in different components, then we would be able to create a room of size $$$2 * x$$$ by attaching any two nodes from among those uninfected areas of size $$$x$$$.

      Anyway, thanks for the input, I seem to be somehow printing the wrong maximum answer for some cases while printing the correct number of ways.

      • »
        »
        »
        »
        4 years ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

In your case, yes, you can break it into parts of size a, b, 2x, where a+b=x. But that's not optimal; it's better to break it into x, 2x. In general, you never want to break up a piece into smaller pieces, because $$$\binom{a}{2}+\binom{b}{2} < \binom{a+b}{2}$$$ (the difference is $$$ab$$$, which is positive for $$$a, b \geq 1$$$).

There's one annoying edge case though: when everyone is uninfected, you do have to remove an edge between two uninfected nodes.

    • »
      »
      »
      4 years ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      Huh. It seems I either have a fundamental misunderstanding of the problem or I am reading input wrong. My O(N^4) brute outputs a different answer on Case #7.

      • »
        »
        »
        »
        4 years ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

        Let’s say you have connected components of uninfected nodes with sizes [3,3,4]. The optimal solution would connect either the first or second component with the third, thus creating a component of size 7. However, since we’re asked about the max number of pairs of people that can visit each other, we can’t break one of the remaining components up, since that would reduce this number. So in this case, the optimal solution will end up with one component of size 3, which contributes 3 pairs to the answer, and one component of size 7, which contributes 21 pairs to the answer. So, the max number of pairs you can get is 24, breaking up any component in the middle would make this number smaller.

        • »
          »
          »
          »
          »
          4 years ago, # ^ |
            Vote: I like it 0 Vote: I do not like it

          Seems the same as my solution. When you have time could you share your initial components for Case #7? I got [11, 4, 4, 3, 2, 1 . . .] which I think gives 115 instead of 193 (Unless I'm missing something): 15*14 + 4*3 + 3*2 + 2 = 230.

          • »
            »
            »
            »
            »
            »
            4 years ago, # ^ |
              Vote: I like it +8 Vote: I do not like it

            If you don't mind me intruding in your conversation, I have [18, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1], which gives $$$20\cdot19+2\cdot1+2\cdot1+2\cdot1 = 386 = 193\cdot2$$$

            • »
              »
              »
              »
              »
              »
              »
              4 years ago, # ^ |
                Vote: I like it 0 Vote: I do not like it

Yep, I think I remember having 18 and four 2s somewhere, thanks for the input :)

            • »
              »
              »
              »
              »
              »
              »
              4 years ago, # ^ |
                Vote: I like it 0 Vote: I do not like it

              Yikes, I probably should not have copied over my component finding into my brute force then

    • »
      »
      »
      4 years ago, # ^ |
      Rev. 2   Vote: I like it 0 Vote: I do not like it

      But what if you have 3 components of uninfected nodes with size 4?

      Suppose one of these components is an ancestor of the other two, and those two are different descendants.

      You can remove one edge between uninfected nodes from the parent component and connect the other two.

      Am I missing something?

      • »
        »
        »
        »
        4 years ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

        Well, this is a much simpler case.

        Link to image

        • »
          »
          »
          »
          »
          4 years ago, # ^ |
            Vote: I like it +5 Vote: I do not like it

          If you’d used one of the edges incident to a red node instead, you would have made two groups of white nodes (with sizes 2 and 4). This means that the number of pairs of patients that can reach each other is 2+6=8. In your example, there are groups with sizes 1, 1 and 4, so the number of pairs of patients that can reach each other is 0+0+6=6, which is less than 8. The problem asks us to count the number of ways to achieve a situation which maximizes the mentioned number of pairs, so removing the edge you showed should not be included in the solution.

        • »
          »
          »
          »
          »
          4 years ago, # ^ |
          Rev. 3   Vote: I like it 0 Vote: I do not like it

You can't do this because we want to maximize the total number of pairs of people who can visit each other (not just in one component, but over ALL components), so the answer in your diagram above should be $$$\binom{4}{2} + \binom{2}{2} = 7$$$.

          I made the same mistake and spent 2 hours trying to debug before rereading the problem statement and re-solving stuff.

          • »
            »
            »
            »
            »
            »
            4 years ago, # ^ |
              Vote: I like it 0 Vote: I do not like it

            Ohhh that's right. Got it. I think I spent so much time solving one component that I forgot the rest.

            Thanks!!

  • »
    »
    4 years ago, # ^ |
      Vote: I like it +5 Vote: I do not like it

    +1, I even wrote an n ^ 4 brute force to stress test but it passed 10k cases against that and I can't figure out what is wrong with it ._.

»
4 years ago, # |
  Vote: I like it +493 Vote: I do not like it

I don't want to be so critical but this contest was IMO really bad (even ignoring the thing with B). Almost all problems contained no interesting parts but had tons of annoying implementation (esp. A and C). I get that Round 1 maybe shouldn't have very hard problems but easy problems should still be tolerable. This contest I felt like quitting but forced myself to continue because it's an "important" contest. I very much liked rng_58's opinion on easy problems:

We try to make ARC-ABs "harmless", that is, they may not be interesting, but also are not annoying.

More generally: how is it that regular small contests (Codeforces, AtCoder, OpenCup, even TopCoder) can deliver such nice problems weekly while "major" contests on average have mediocre or even bad problems? Most of all I mean FHC, but to some extent also GCJ and ICPC WF.

Also the input format: I realize this is a response to people who couldn't download the big inputs, but this is so ugly. And it is quite limited because you can still only do "big input" and "random input" (the latter might help the contestants in problems like A3). I know this requires additional development but wouldn't the following be better:

  • server generates a password and puts the file in a password-protected zip;
  • client downloads the zip;
  • at any time, the client can request the password, after that the 6 minute countdown will start.
  • »
    »
    4 years ago, # ^ |
    Rev. 2   Vote: I like it 0 Vote: I do not like it

    I'd like to offer another opinion on problem A. By doing this, I'm not in any way discounting your opinion, and in fact I do think your opinion as someone who is more experienced and more invested in the sport should matter more than that of the casuals like me.

    That being said, as someone who is in the target audience for easy problems, I enjoyed A quite a bit. In hindsight, figuring out the details of the perimeter updates was tedious, but I guess at the time, I just assumed that was part of the process and didn't think much about it.

    • »
      »
      »
      4 years ago, # ^ |
        Vote: I like it +130 Vote: I do not like it

      Interesting. IMO the details of A were tedious enough to consider quitting the contest.

      Unlike some other reds I have nothing against "classical" problems in contests aimed at lower-rated people. But I'd have thought that no one likes tedious problems.

      • »
        »
        »
        »
        4 years ago, # ^ |
          Vote: I like it +1 Vote: I do not like it

        Yeah I think for me, I just considered figuring that part out as part of the problem's solution, whereas I'm guessing you probably considered it more of a time-wasting implementation detail after you had already figured out the problem's high-level solution?

      • »
        »
        »
        »
        4 years ago, # ^ |
          Vote: I like it +33 Vote: I do not like it

In general, I think having "tricky"/"tedious" implementation problems is OK in principle for programming contests. In fact, I find that it is a very testable and notable skill: some people are actually very good at writing such fairly error-prone, tricky things without mistakes and without taking too much time, and since I wish I had such high implementation skill, I find it OK to reward that coding ability.

However, I think it is critical to consider "implementation", "being tedious" or "being tricky" as part of a problem's difficulty. Both for a problem setter (to achieve the intended difficulty) and for a contestant (to avoid thinking that a problem is "easy" because the idea is easy: an "easy" but "tedious" problem is not easy anymore).

        In that way, I think that the contest completely failed to have easy problems (maybe it was intended): the ones with the "easy" ideas were very tricky and tedious.

        Also I agree that the general contest felt fairly unbalanced: the overall contest difficulty was mainly in the tricky/tedious area, and not the "algorithmic" part itself, which is ok for a single problem but not nice for the whole contest.

  • »
    »
    4 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Good idea!!

  • »
    »
    4 years ago, # ^ |
      Vote: I like it -61 Vote: I do not like it

Well, you could skip A2, A3 and C, to be honest. I am happy that I did that. A1 and B are pretty easy to write and are not as annoying as the classical bullshit segment merging in A2 and A3.

    • »
      »
      »
      4 years ago, # ^ |
        Vote: I like it +68 Vote: I do not like it

      yeah, because everybody's sure their B solution is correct

    • »
      »
      »
      4 years ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      B is only easy to write after the revision setting S=0, which came (I believe) very late in the contest.

    • »
      »
      »
      4 years ago, # ^ |
        Vote: I like it +8 Vote: I do not like it

Not sure about A3, but A2 is reasonably easy to solve with std::set

      • »
        »
        »
        »
        4 years ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

        Solved A2 using set, and A3 can be solved using set too, in the same way.

    • »
      »
      »
      4 years ago, # ^ |
        Vote: I like it +11 Vote: I do not like it

      I think that, since queries are all available beforehand, A2 is much easier to solve using sweep-line.

Sort all the "rectangle interval" events (start and end) and process them, adding a rectangle to the current set when it opens and removing it when it closes. Each "piece" can then be annotated with the lowest-index rectangle containing it, since that rectangle will fill it once and for all. A list of pieces for each rectangle can be precomputed this way.

      Then, process each rectangle in order and for each one, explicitly iterate all its pieces, knowing that each such piece "grows" the area from height 0 to that height (just have to check the height-values of neighbor pieces, which can be kept in a map or array after coordinate-compression).

On the other hand, A3 can be solved with coordinate compression + a lazy-propagation segtree in a much less error-prone way (in my opinion) than the official solution (assuming a correct lazy-propagation segtree is available to simply copy-paste).

      However, although less tricky to implement, this other way involves using totally different methods for each chapter, and would not work with online queries, as the segment merging version does.

  • »
    »
    4 years ago, # ^ |
    Rev. 2   Vote: I like it 0 Vote: I do not like it

There is one detail here: you need to provide each contestant a slightly different file, otherwise contestants may cheat by collaborating: each person solves one problem and shares the output file with the others (technically people can cheat by sharing code too, but that is easier to catch). GCJ (in the old format) did that for the large dataset, and for the small dataset they even generated different files per submission to prevent people from brute-forcing the solution.

There was one task in the GCJ 2009 Finals where Google handled a large input file as follows: they provided the contestants with the script for generating the large input, and when submitting, the contestant downloaded a seed value to run the script with. There are still variations in how fast or slow it is on individual computers, but assuming a reasonably large time limit, that is still pretty reasonable.

  • »
    »
    4 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

Yeah, I think some problems like this are OK for $$$\ge 24$$$ hour contests, but having all/most of them like this is a little tiring. I managed to solve all the problems, but it took quite a portion of the day (due to hunting all the little bugs that caused samples or verification to fail).

»
4 years ago, # |
  Vote: I like it +6 Vote: I do not like it

Unable to check the scoreboard on the friends tab

»
4 years ago, # |
  Vote: I like it +2 Vote: I do not like it

How to solve A2 and A3

  • »
    »
    4 years ago, # ^ |
    Rev. 2   Vote: I like it +1 Vote: I do not like it

Maintain a set of non-overlapping polygons (storing the starting position, ending position and perimeter of each polygon is enough). In the $$$i$$$th iteration, try to combine the $$$i$$$th rectangle with an already stored polygon, or store it as a separate one. The technique for combining two polygons is slightly different in A2 and A3. In A3, you will need another set (or another data structure) to efficiently find the current height of the polygon at position $$$x$$$.
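
A sketch of just the merging step, assuming each polygon is keyed by its starting x in a std::map (the perimeter and height bookkeeping described above still has to be layered on top):

#include <bits/stdc++.h>
using namespace std;

map<long long, long long> polys;   // start -> end of each stored non-overlapping polygon

void addInterval(long long l, long long r) {
    // step back if the previous polygon touches or overlaps [l, r]
    auto it = polys.lower_bound(l);
    if (it != polys.begin() && prev(it)->second >= l) --it;
    // absorb every polygon that touches [l, r], extending l and r as we go
    while (it != polys.end() && it->first <= r) {
        l = min(l, it->first);
        r = max(r, it->second);
        it = polys.erase(it);
    }
    polys[l] = r;   // store the combined polygon
}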

»
4 years ago, # |
Rev. 2   Vote: I like it +17 Vote: I do not like it

Solution to Problem A1 without using the constraint on W:

// 'was' is the current perimeter of the union; 'ans' is the running product
// of perimeters modulo inf (int is presumably defined as long long, given the 0ll below)
int was = 0;
was += h[0] * 2 + w * 2;
int ans = was % inf;
// monotonic deque of indices with decreasing heights (sliding-window maximum)
deque<int> q;
q.push_back(0);
for(int c = 1; c < n; c++){
   // newly exposed top/bottom (horizontal) length added by rectangle c
   was += min(xl[c] - xl[c-1], w) * 2;
   // drop rectangles that no longer overlap rectangle c horizontally
   while(!q.empty() && xl[q.front()] + w < xl[c]) q.pop_front();
   // q.front() is the tallest rectangle still overlapping c: only the part of c
   // rising above it contributes new vertical perimeter
   if(!q.empty()) was += max(0ll, h[c] - h[q.front()]) * 2;
   else was += h[c] * 2;

   // keep the deque heights decreasing before inserting c
   while(!q.empty() && h[q.back()] <= h[c]) q.pop_back();
   q.push_back(c);
   was %= inf;
   ans *= was; ans %= inf;
}
cout << "Case #" << z+1 << ": " << ans << "\n";
»
4 years ago, # |
Rev. 2   Vote: I like it +89 Vote: I do not like it

I don't know if this has been asked before but could there be a way to filter the scoreboard by country?

As far as I know, the only way to filter results is by adding people as friends, and people who recently made a new account will get their accounts disabled if they send friend requests. Filtering by country might help that issue.

  • »
    »
    4 years ago, # ^ |
      Vote: I like it +8 Vote: I do not like it

    Thanks for the suggestions! We'll definitely consider adding public countries of representation in the future.

»
4 years ago, # |
  Vote: I like it -13 Vote: I do not like it

LoneFox, why were we only given the first K elements in the input, with the rest generated using a formula? I personally think it's unnecessary, as it requires no extra thinking, just writing an extra for loop, unless it's because of judge limitations. Is that the case?

  • »
    »
    4 years ago, # ^ |
      Vote: I like it +5 Vote: I do not like it

Not sure of the official reason, but in the Qualification Round some people could not finish downloading the input within the time limit. The formula keeps the input file small while still allowing large test cases.

    • »
      »
      »
      4 years ago, # ^ |
        Vote: I like it -30 Vote: I do not like it

Yeah, that makes sense, but this also limits the kind of problems you can create: suppose you have to print a final array of the same size as the input; in that case the input can be shrunk but the output can't. I think it's better to build a judge like CF's where you just submit the source code; Google also did this last year.

  • »
    »
    4 years ago, # ^ |
      Vote: I like it +3 Vote: I do not like it

I couldn't download the test data for problem C during the qualification round in time. They reset my timer, but they also told me that they wouldn't do it again in the next rounds.

    Generating tests with a formula is a nice solution to this matter. However, one has to be really careful coding these formulas.

»
4 years ago, # |
  Vote: I like it -52 Vote: I do not like it

I personally feel FB should offer a server for executing the source code, primarily because our personal computers may have insufficient resources for running the program on larger inputs. This is especially true for FHC, where the inputs are of the order of 1e6.

Take my case, for example. In the Qualification Round, there was a problem on tree DP (the last problem, if I recall correctly). After checking my program against the validation input, I tried running it on the test input and received a segmentation fault. I assumed there was an error in my code, spent nearly 8 hours debugging it, couldn't find anything, and finally resorted to using Valgrind, through which I found out that my stack overflowed due to the high recursion depth. There was a similar case in Round 1 problem C as well, which left me helpless (not that it affected my qualification, but getting higher in the scoreboard always feels nice).

In my opinion, these are not things one should worry about while coding the solutions, not that I had a workaround for the stack overflow, considering I needed a DFS.

LoneFox, is there any chance of FB providing a server for running code? Will Round 2 also have the same format of downloading and executing on our personal computers? If so, people with superior PCs have an unfair advantage of being able to code a looser solution (let's not even consider my scenario).

  • »
    »
    4 years ago, # ^ |
      Vote: I like it +34 Vote: I do not like it

    I'm not trying to dismiss this opinion, I think these are good points, I'm just trying to understand for my own curiosity. I don't remember that many complaints about code execution on contestants' PCs when Google Code Jam did it not too long ago. I genuinely wonder what's the reason of this?

    1. My memory is just wrong.
    2. People were assuming it as a given for Code Jam and not voicing their concerns, but for Hacker Cup it's a new rule change.
    3. People started to use less powerful machines more to participate in programming competitions (e.g. Chromebooks).
    4. Competitive programming has become more popular over the last years and as such the number of issues we are hearing about this has increased.
    5. Codeforces has become more popular over the last years and as such the number of issues we are hearing about this has increased.
    6. Something else.
    • »
      »
      »
      4 years ago, # ^ |
        Vote: I like it +8 Vote: I do not like it

      Unfortunately it seems like the old GCJ problems are being transitioned to use the new system so we can't access them now.

If my memory serves me right, we rarely had a task with a huge input size (including pseudorandom generation) in the older Google Code Jam (Kickstart not included). I can't remember tasks that differentiate O(N) solutions from slower solutions, which is what N > 500k tasks are usually designed for. The only exception I remember is a Finals problem, which is less of an issue since the contestants have the same working environment there.

Most of the older tasks differentiate between polynomial- and exponential-time solutions, which IMHO makes them interesting.

    • »
      »
      »
      4 years ago, # ^ |
        Vote: I like it +5 Vote: I do not like it

I am not sure about the others, but this is my first ever Facebook Hacker Cup, and until now I haven't taken part in a single GCJ round. So I just voiced my opinion. I feel that the competition environment should be fair. Even if it's impossible to provide a common execution platform because of the huge number of people in the Qualification Round and Round 1, the later rounds should be considered for it, given that the time penalty is an important factor.

  • »
    »
    4 years ago, # ^ |
      Vote: I like it +124 Vote: I do not like it

    I just wanted to say something in support of this local-judge format that FBHC uses and GCJ used to use. Note that the alternative for FBHC would be remote-judging without any feedback, as you only get one shot to submit.

    1. This format is more contestant friendly in terms of debuggability. I think the fact that OP was able to run valgrind to find the bug already illustrates that they have a lot more debugging tools available. If this were on CF or something, and the bug were some out-of-bounds, they would just get an RTE with no further explanation. (Without feedback, it would just fail entirely.) Being able to debug this way is a fine skill to reward. While there is some pressure, you would've failed the problem if it were remotely judged with no feedback, which would be strictly worse for the contestant.
    2. This format is more contestant friendly in terms of tooling. Contestants are free to use whatever tooling or languages they have, without worry about version incompatibility or what standard libraries are available. For C++, I think letting contestants use Boost or Abseil is a great thing! We shouldn't be crippled by this single-file submission format (or even single language), and this is a good way to allow it.
    3. This format is more contestant friendly in terms of performance predictability. Contestants are free to create and run any cases locally and have no doubt that the performance in local testing will match the performance on the judge. Each year, IOI puts a ton of effort into accomplishing this by buying matching laptops for the contestants and the judges; this format gives this predictability for free.
    4. This format may be slightly friendlier to programmers who have never done competitive programming before. This is mostly the same as the point about tooling: new contestants don't have to learn the idiosyncrasies of the remote environment or how to do the right file-io/stdio setup.

    Also, I think the validation file is a great addition; it allows the contestants to ensure their setup is working correctly in a 0-stakes situation. (Old GCJ's small-inputs kind of shared this function.)

    As you point out though, there are some drawbacks, but I don't think they out-weigh the benefits:

    1. Contestants may prefer the well-managed/well-configured environments of remote judges, e.g. to set stack limits. For the record, I don't think this is a valid excuse for any advanced contestant; at the top level, you need to be able to locally test max cases anyways, and you should totally be able to configure your environment for that. On the other hand, this may disproportionately impact novice programmers who don't have a great mastery of their local toolchain/environment. I'm not sure how this weighs against not having to learn the intricacies of the remote environment; perhaps CP nowadays is sufficiently "plug-n-play" that running code remotely "just works" for a beginner. I think the best solution to this, though, is to use some online service like repl.it to run your code; these are designed to be beginner friendly, and give you a nice prebuilt VM to use. These services are plentiful and accessible enough that I think anyone learning to program can easily find one to use. (I think repl.it has file upload/download too.)
    2. Some contestants may have beefier computers/setups which give them an unfair advantage. I think this is a real phenomenon which may impact results, but I'm not quite convinced that it's a big deal. First, 6 minutes is very generous; any solution that could've passed in a blind remote-judging scenario will almost certainly run within 6 minutes on pretty much any machine, so anyone who would've succeeded anyways will continue to succeed. On the other hand, it is true that people may be able to squeeze solutions that otherwise would be too slow by parallelizing or using a bigger machine. I have a few responses: first, writing parallel code is a programming skill that seems fine to reward. Second, this can be partially mitigated by choosing problems where the difference between the intended solution and the slow solutions are larger, e.g. polynomial vs exponential. In practice, I don't think we see people renting out AWS machines to pass problems, so I don't think this is a huge problem.

    Overall, I think this format is net positive for contestants, and I really hope FBHC keeps it (especially with validation sets).

»
4 years ago, # |
Rev. 2   Vote: I like it +59 Vote: I do not like it

Fun fact: Task A can be solved (albeit in a much more technical way) for inputs without any restrictions on $$$H$$$ using the idea behind Segment Tree Beats. In each segtree node you keep the minimum height, the second minimum height, and how much the $$$y$$$-perimeter of the figure would change if you were to add $$$1$$$ to all the blocks with the minimum height (computed from leaves to root). Now you can write the push and pull functions and it should be (kind of) straightforward.

For the restricted subtasks, though, you can obtain $$$O(N \log N)$$$ by only recursing down when the minimum height is strictly smaller than the maximum height in the subtree (i.e. when not all heights are equal), with minimal lazy propagation logic. This is similar to the set solutions, but (IMHO) much easier to implement and with fewer moving parts. And it has the advantage of solving A1-A3 with just one implementation (even without the imposed constraint on $$$W$$$ for A1).

»
4 years ago, # |
  Vote: I like it +148 Vote: I do not like it

Regarding the stack limit:

I'm not surprised that people are struggling with the stack limit, but I hope it won't make FHC change the contest format in the future. The qualification round is there to test the system, and that's when you might be surprised by the stack limit and forced to learn how to deal with it. You should know your own environment and be able to run a program in it.

That being said, it's still bad that machine power and internet speed differ between participants. The former isn't a big deal if intended solutions run in ~10 seconds and brute force requires hours, and the latter is resolved by providing encrypted tests in advance.

  • »
    »
    4 years ago, # ^ |
      Vote: I like it +16 Vote: I do not like it

    It was good that they had a problem in the qual round that forced us to deal with the stack limit. But for some reason, even though I set my system limit to the maximum and passed the qual round problem, I still got a segfault in R1. Only with some extra g++ options did it pass. At least I know how to set my compiler invocation properly now!
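
    For what it's worth, one portable way to sidestep the OS stack limit entirely (not necessarily the g++ options meant above, which vary by platform) is to run the recursive solver on a thread with an explicitly sized stack. A minimal sketch, to be compiled with -pthread:

```cpp
#include <pthread.h>
#include <cstdio>

// Hypothetical deep-recursion solver; replace with the real one.
void solve() {
    // e.g. a DFS that would overflow the default 1-8 MB stack
}

void* run(void*) {
    solve();
    return nullptr;
}

int main() {
    pthread_attr_t attr;
    pthread_attr_init(&attr);
    pthread_attr_setstacksize(&attr, 512 * 1024 * 1024); // 512 MB stack for this thread
    pthread_t tid;
    pthread_create(&tid, &attr, run, nullptr);
    pthread_join(tid, nullptr);
    return 0;
}
```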

    I do think it's good that the format is different. It could lead to more variety — like using more obscure languages or third party libraries, or more room for precomputation for some kinds of problems. It's just that the execution makes it stressful.

    • »
      »
      »
      4 years ago, # ^ |
        Vote: I like it -18 Vote: I do not like it

      It's just that the execution makes it stressful.

      That's the fun!

»
4 years ago, # |
  Vote: I like it 0 Vote: I do not like it

I did the segment merging approach which some people have described here for A2. It passed the validation input but failed on the main cases :(. Could someone tell me what I am doing wrong?

https://gist.github.com/cathreya/e76fb3918bbb4dbd5437d545572ed4cf

  • »
    »
    4 years ago, # ^ |
      Vote: I like it +13 Vote: I do not like it

    auto lpt = ls.lower_bound({l[i],{0,0}}); auto rpt = ls.upper_bound({l[i]+w[i],{0,0}});

    You may need to give sufficiently big values instead of {0,0}. I made exactly the same mistake.
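
    To illustrate the pitfall with a hedged, self-contained example (the actual pair layout is whatever the gist uses, not what is shown here): lower_bound/upper_bound on a set of pairs compare the whole pair, so a dummy payload of {0,0} can stop in the middle of entries sharing the same coordinate, whereas extreme payload values make the search depend on the coordinate alone.

```cpp
#include <bits/stdc++.h>
using namespace std;
// Hypothetical layout: {coordinate, {payload, payload}}.
using Entry = pair<long long, pair<long long, long long>>;

int main() {
    set<Entry> ls = {{5, {2, 7}}, {5, {9, 1}}, {8, {0, 0}}};
    long long x = 5;
    auto bad  = ls.upper_bound({x, {0, 0}});                 // stops at {5, {2, 7}}
    auto good = ls.upper_bound({x, {LLONG_MAX, LLONG_MAX}}); // skips past all coordinate-5 entries
    cout << bad->first << ' ' << good->first << '\n';        // prints "5 8"
    return 0;
}
```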

    • »
      »
      »
      4 years ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      What are {0,0} here?
      I stored {{left,right},perimeter} and used {{l[i],l[i]},0} in the lower_bound function. I had to write two messy functions to find those two bounds.

    • »
      »
      »
      4 years ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      Damn. It worked. I can't believe I messed this up :'(

      Thanks a lot!

»
4 years ago, # |
Rev. 3   Vote: I like it -15 Vote: I do not like it

Lol. In problem B I was like: god damn it, I'm not missing another dp or binary-search-over-the-answer solution again! And then all I did was keep thinking about how I could binary search, try dp, or combine binary search with dp, and then I freaking made it xD Every big contest. EVERY BIG CONTEST I miss a dp or a BS solution. Well, not today! (well, actually not yesterday :0 )

Edit: fun fact: I watched Errichto's video and lol'd hard when I saw him saying the exact same words as me. Dp? Bs? Bs with dp?

  • »
    »
    4 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    One funny thing is that for problem A1, I had initially thought of the same approach as Errichto. Then I figured it wouldn't finish executing within 6 minutes.

    In his video, his computer (which is a lot faster than mine in terms of performance) finished the test in 3 minutes. I think in my case it would have been a very close call if I had implemented Errichto's solution.

    • »
      »
      »
      4 years ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      Interesting... I'll check it out.

    • »
      »
      »
      4 years ago, # ^ |
        Vote: I like it +13 Vote: I do not like it

      If you use unordered_map in his solution, the runtime drops (on my PC) from ~4 minutes to ~26 seconds. Also, if you clear the map when l[i-1] + w + 1 > l[i] (i.e. when you'll never use the previous values again), you get around 2 minutes.

      My solution, which simulates this by keeping a vector for the values and clearing it from time to time, runs in ~2 seconds.
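
      As a generic illustration of that last swap (not the actual A1 code, whose details aren't shown here), the usual map-to-vector trick with cheap clearing looks roughly like this:

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    const int MAX_KEY = 1'000'000;          // assumed bound on the keys
    vector<long long> val(MAX_KEY + 1, 0);  // plays the role of map<int, long long>
    vector<int> touched;                    // keys that were actually written to

    auto add = [&](int key, long long v) {
        if (val[key] == 0) touched.push_back(key);
        val[key] += v;
    };
    auto reset = [&]() {                    // "clear the map" in O(#touched) instead of O(MAX_KEY)
        for (int k : touched) val[k] = 0;
        touched.clear();
    };

    add(3, 5); add(10, 7); add(3, 1);
    cout << val[3] << ' ' << val[10] << '\n';  // 6 7
    reset();
    cout << val[3] << '\n';                    // 0
    return 0;
}
```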

      • »
        »
        »
        »
        4 years ago, # ^ |
          Vote: I like it -8 Vote: I do not like it

        Yup. As I said during the video, I would quickly switch to an array/vector if it looked like I was running out of the 6 minutes.

        • »
          »
          »
          »
          »
          4 years ago, # ^ |
            Vote: I like it 0 Vote: I do not like it

          My map solution was too slow: it was taking around 20+ seconds for every $$$10^6$$$ cases. After switching to unordered_map it was able to finish all cases in 20+ seconds.

          Fun fact: I didn't know that the 6-minute window was the only time limit; I thought they would run our code on their servers. So I spent an hour finding a more efficient solution with an array of size 20 (thanks to the constraints).

          I started late, and I couldn't finish implementing A3 before the CF Global Round started because of the time spent making A1 efficient. Luckily A1 and A2 were enough to clear this round.

          One question: is it possible to initialize an array of size $$$5 \cdot 10^8$$$ on a high-spec PC? I couldn't do it on my machine.

  • »
    »
    4 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Too bad, the binary search + dp solution is flawed. :P

    • »
      »
      »
      4 years ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      Well, I wrote that from my phone, so maybe I wasn't clear, but in the end I solved it with binary search only.

      Or do you mean that you tried dp+bs and it didn't work? :0

    • »
      »
      »
      4 years ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      My solution for $$$S = 0$$$ was DP + optimum monotonicity (because the transitions minimize the maximum of two functions, one non-decreasing and the other non-increasing), so I used binary search to get the optimum for each state. I could have used two pointers and gotten a linear solution, but that wasn't necessary.
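
      A small hedged sketch of that transition pattern (generic, not the actual Problem B code): when the cost is the max of a non-decreasing $$$f$$$ and a non-increasing $$$g$$$, the optimum sits at their crossing point, which binary search finds directly.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Minimize max(f(i), g(i)) over i in [lo, hi], where f is non-decreasing and
// g is non-increasing, by binary searching for the first index where f >= g.
long long minimize_max(long long lo, long long hi,
                       const function<long long(long long)>& f,
                       const function<long long(long long)>& g) {
    long long a = lo, b = hi, first = hi + 1;   // first index with f(i) >= g(i), if any
    while (a <= b) {
        long long mid = a + (b - a) / 2;
        if (f(mid) >= g(mid)) { first = mid; b = mid - 1; }
        else a = mid + 1;
    }
    // the optimum is at the crossing index or the index just before it
    long long best = LLONG_MAX;
    if (first <= hi)     best = min(best, max(f(first), g(first)));
    if (first - 1 >= lo) best = min(best, max(f(first - 1), g(first - 1)));
    return best;
}

int main() {
    // toy check: f(i) = i, g(i) = 10 - i on [0, 10]; the best max is 5, at i = 5
    cout << minimize_max(0, 10,
                         [](long long i) { return i; },
                         [](long long i) { return 10 - i; }) << '\n';
    return 0;
}
```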

»
4 years ago, # |
  Vote: I like it +21 Vote: I do not like it

So has anyone figured out a solution for B with $$$S > 0$$$?

»
4 years ago, # |
Rev. 4   Vote: I like it 0 Vote: I do not like it

.

  • »
    »
    4 years ago, # ^ |
      Vote: I like it +8 Vote: I do not like it

    It fails because you took the max of H every time. Think of a smaller height between two bigger ones: it will increase the perimeter.
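
    A tiny hedged illustration of that point, using the generic skyline perimeter for unit-width columns (not the deleted solution above): a dip between two taller columns adds boundary, so taking the running max of H under-counts it.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Perimeter of a skyline of unit-width columns with heights h[i] > 0:
// left and right walls, the ups and downs between neighbours, plus top and bottom.
long long perimeter(const vector<long long>& h) {
    int n = h.size();
    long long vert = h.front() + h.back();
    for (int i = 0; i + 1 < n; i++) vert += abs(h[i + 1] - h[i]);
    return vert + 2LL * n;
}

int main() {
    cout << perimeter({3, 1, 3}) << '\n';  // 16: the dip contributes 4 extra units
    cout << perimeter({3, 3, 3}) << '\n';  // 12: what "max of H" effectively computes
    return 0;
}
```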

»
4 years ago, # |
  Vote: I like it 0 Vote: I do not like it

Auto comment: topic has been updated by LoneFox (previous revision, new revision, compare).

»
4 years ago, # |
  Vote: I like it +37 Vote: I do not like it

I'm happy to say that we're now able to share information on this year's t-shirt prizes, and will be giving out more than ever before! Everybody who solves at least one problem in the upcoming Round 2 will receive a Facebook Hacker Cup t-shirt. The top 200 competitors from Round 2 will also have a "Top 200" badge on their shirt.