### LoneFox's blog

By LoneFox, 5 weeks ago,

Round 1 of the 2020 Facebook Hacker Cup is less than 48 hours away!

The round will begin on August 15th, 2020 at 10am PDT and will last for 24 hours. The contest can be found here.

You're eligible to compete in Round 1 if you solved at least one problem correctly in the Qualification Round.

Everyone who scores at least 25 points (regardless of time taken) will advance to Round 2, which will take place on Sat. Aug. 29th. Please note that all submission judgments will only be revealed after the round ends. More details about rules and other information can be found in the FAQ.

We wish you the best of luck, and hope that you enjoy the contest!

The Facebook post can be found here.

Update 1: Due to an error with problem B (Dislodging Logs), only 25 points are now required to advance to Round 2, rather than 30.

Update 2: The round has ended, thank you for participating! The scoreboard can be found here, and the solutions here.

Update 3: As an update on prizes, everybody who solves at least one problem in the upcoming Round 2 will receive a Facebook Hacker Cup t-shirt! The top 200 competitors from Round 2 will also have a "Top 200" badge on their shirt.

• +87

 » 5 weeks ago, # |   +17 When will the announcement about the t-shirts be made?
•  » » 5 weeks ago, # ^ |   +5 The only reason anyone less than international grandmaster would participate in it.
•  » » » 5 weeks ago, # ^ |   0 Wow, you are right. Top 2000 from round 1 will receive t-shirts.
•  » » » » 5 weeks ago, # ^ |   +46 Sorry, that information mentioned in the Terms had been a placeholder and has been removed for now. We hope to offer such prizes based on Round 2, but we're unfortunately unable to make any public announcements about that quite yet, due to working out the effects of COVID-19 on prize shipping. We will announce more information when possible, before Round 2!
•  » » » » » 5 weeks ago, # ^ | ← Rev. 3 →   +102
•  » » » » » » 5 weeks ago, # ^ |   +19 Are we getting a new design for this year btw?
•  » » » » » » 3 weeks ago, # ^ |   0 back to panel 2 again for now :smug:
•  » » » » » 5 weeks ago, # ^ |   +29 Didn't all the participants already agree to the terms and conditions? Can you really change that now, after setting the expectations?
•  » » » » » » 5 weeks ago, # ^ |   +8 Yes, changing the TOC is taken into account in section 15.1 (Changes; Termination): "Facebook reserves the right to limit the participation of any person in the Competition, amend or interpret these Terms or official communications governing the Competition or cancel the Competition for any reason with prior notice. Notices for any such amendment, interpretation or cancellation will be deemed given by posting on the Competition website and by virtue of a Competitor's continued participation in the Competition. A Competitor may terminate participation in the Competition upon written notice to Facebook."
•  » » » » » » » 5 weeks ago, # ^ |   +3 Damn! I can't believe I agreed to such a condition without reading it. XD Anyway, I think every other TOC that I have actually read always had this point. Accepting a TOC is basically a formality nowadays. :)
•  » » » » » » » » 5 weeks ago, # ^ |   +72 100% of the time terms and conditions basically translates to "We can do whatever we want".
 » 5 weeks ago, # |   0 Are you guys done reviewing all the disabled accounts LoneFox?
•  » » 5 weeks ago, # ^ |   0 Yes, I believe we've helped each participant who reached out to us at hackercup@fb.com with details regarding a disabled FB account (generally caused by new FB accounts with no profile information sending friend requests to others, which was activity flagged as being normally associated with fake/spam accounts).
•  » » » 5 weeks ago, # ^ |   +3 "generally caused by new FB accounts with no profile information sending friend requests to others, which was activity flagged as being normally associated with fake/spam accounts" Lol, that reminds me of a conversation I once had: "just contact me on FB" "I don't have FB" "make an account, you don't need to use it for anything except as a messenger" "when I tried that my account got instalocked every time" "nah there's no way" "well yes way, mail me".
 » 5 weeks ago, # |   +173 Please prefer formal problem statements in further rounds.
•  » » 5 weeks ago, # ^ |   +1 What do you mean by "formal" in this case?
•  » » » 5 weeks ago, # ^ |   +33 Where problem statement is not written like some essay
•  » » » 5 weeks ago, # ^ | ← Rev. 2 →   +57 By formal, I mean very short problem statements without any story. I feel the current problem statements are too long because of the story. Or you can write the story in the first paragraph in italics, in such a way that one who skips the first paragraph doesn't lose any points. For example, see https://codeforces.com/contest/1286/problem/F, where the story ends in the first line. Many times problem setters write a long story and end the last paragraph with: "Formally, find the number of subarrays of a given array with positive sum." One not interested in reading the story can just read the paragraph starting with "Formally". This paragraph often involves some mathematical notation to describe the problem statement in a line or two.
 » 5 weeks ago, # | ← Rev. 5 →   -46 LoneFox, where is the time limit for each problem mentioned? I am not talking about the 6-minute rule; I am talking about the maximum allowed running time of my program. aryanc403 can you help me?
•  » » 5 weeks ago, # ^ |   +11 no one limits it, no one cares about it
•  » » » 5 weeks ago, # ^ |   -23 Does it mean that code doing more than 10^12 iterations is fine? WitchOfTruth
•  » » » » 5 weeks ago, # ^ |   +17 All they need is the output file, so as long as your code finishes on the given input before the 6-minute countdown ends, it's fine.
•  » » » » » 5 weeks ago, # ^ |   0 Thanx man madlogic
•  » » 5 weeks ago, # ^ | ← Rev. 3 →   +1 You can also write a program which takes 1e18 operations, as long as you use a language with a free compiler/interpreter and it completes those 1e18 operations in less than 6 minutes. I'm not sure if using GPUs/supercomputers is allowed, but you can check the rules. Spoiler: if you still want a strict time limit, assume that the time limit is 15s on a Codeforces server. Most people use those lenient 6 minutes to debug at the last moment in case something fails, because fixes after those 6 minutes won't give you any points.
 » 5 weeks ago, # |   +63 Why, MacOS stack size limit, why...
•  » » 5 weeks ago, # ^ |   +9 I had the same problem but on Windows. The thing is that on Windows the default stack size is 1 MB; on Linux and macOS it is bigger. This is highly operating-system dependent, but can still be changed with a few commands. Luckily the same code that gave me RTE for the stack on Windows gave me AC on Ubuntu, but the submission time had passed...
•  » » » 5 weeks ago, # ^ |   0 Unfortunately Mac has a hard limit of 64 MB, and I'm not sure if there's a workaround. I even tested my solution with some test cases that resulted in deep recursions, and there was no problem (since I got similarly burned in the qualification round). So I went over to my Linux desktop, fumbled around transferring the code and the input file, it ran no problem... and I was 2 seconds too late. Next time I'll just SSH to the desktop...
•  » » » » 5 weeks ago, # ^ |   0 I also ran into this problem: passed validation but sadly failed the real run... I should have tested my code with max limits before this, tbh, since it's a 24h contest. Does anyone know a good solution to this other than having another desktop? Are there safe and fast remote sites to run code with file input? I only have a Mac for now, and virtual machines probably don't work if they're on a Mac anyway. Otherwise I will just have to code things bottom-up or implement my own stack next time, I guess... which is quite sad but doable.
•  » » » » » 5 weeks ago, # ^ |   0 Instead of creating arrays of size 10^6 on the stack, create dynamic arrays using pointers and the new operator. This runs fine on my Windows 10, as the memory is then allocated on the heap and not the stack.
•  » » » » » » 5 weeks ago, # ^ |   +5 Recursion also uses stack memory, and there is no easy way to configure it to use the heap instead.
•  » » » » » » 5 weeks ago, # ^ |   0 It's OK to make huge arrays as long as they're global. Global variables aren't allocated on the stack.
•  » » » » » » » 5 weeks ago, # ^ |   +8 Well, I decided not to bother with coordinates compression and decided to have a 500M array of ints for A1. Which exceeded Windows' 32-bit 2GB RAM limit. Thankfully, after panicking for 5 minutes (it was the last hour of the contest) I just launched Ubuntu using Windows Subsystem for Linux; and everything worked like a charm there.
•  » » » » » 5 weeks ago, # ^ |   +9 Actually an Ubuntu virtual machine inside the Mac works, I did it that way because I didn't have a compiler on the Mac.
•  » » » 5 weeks ago, # ^ |   0 "This is highly operating system dependent, but can still be changed with a few commands" Maximum stack size can be changed?? If yes, how?? I'm on Windows 10, and I'm encountering the same RTE problem...
•  » » » » 5 weeks ago, # ^ | ← Rev. 2 →   +9 Actually I couldn't change it, but I was able to google it and found some tutorials; some suggest using linker options like /F and /STACK. I really couldn't figure out how to do it, so I just ran the program in an Ubuntu virtual machine and that fixed the problem. Here you have some links. Link 1 Link 2
•  » » » » 5 weeks ago, # ^ |   +23 I use this '-Wl,--stack,268435456' in windows. Pass this as a command argument. Try with/ without '', different shells parse this differently afaik.
•  » » 5 weeks ago, # ^ |   0 Damnit, this was also my issue, except I was on Windows... and I didn't even know the flag to increase the stack size. Alas, I was too inexperienced. I didn't even notice it was a stack overflow before I timed out. Lesson learned, I guess...
•  » » 5 weeks ago, # ^ | ← Rev. 2 →   +44 g++ -std=c++17 -Wl,-stack_size -Wl,0x10000000 main.cpp works to increase the stack size on MacOS
•  » » » 5 weeks ago, # ^ |   0 Thanks! I tried to find these command line options in the past, but couldn't get it to work.
•  » » » 5 weeks ago, # ^ | ← Rev. 2 →   0 Thanks for the command. I've been harassed by stack-overflow problems multiple times and have not dealt with them well. I'm wondering what that 0x10000000 means: stack size in bytes (i.e. 268435456 bytes = 256 MB) or recursion depth? I searched the g++ documentation for this option without success.

g++ -std=c++17 -Wl,-stack_size -Wl,0x10000000 main.cpp (this command passes for the recursive calls below on Mac)

g++ -std=c++11 -Wl,-stack_size -Wl,0x10000000 main.cpp (this one does not; I guess g++ fixed some bugs in c++17, or this is a recent enhancement)

```cpp
int test(long long t) {
    int a[3];
    if (t <= 0) return 0;
    if (t % 100000 == 0) cout << t << endl;
    int b = test(t - 1);
    if (b == 25) {
        b = 34 + a[2];
    } else {
        b = 53;
    }
    return b;
}

int main() {
    cin.sync_with_stdio(0);
    cin.tie(0);
    test(100000000);
    return 0;
}
```
•  » » 5 weeks ago, # ^ | ← Rev. 2 →   +5 deleting post since too many down votes — perhaps it was too long or the code wasn't good :)
•  » » 5 weeks ago, # ^ |   0 yeah, me too, and lost my problem C...
•  » » » 5 weeks ago, # ^ |   0 I feel your pain! That's tough, losing two problems, man. Always worth it for the excitement when you do solve one, though. I kinda like the format... the excitement of your computer or connection crashing, making sure you upload the correct file, only having 6 minutes!
•  » » » » 5 weeks ago, # ^ |   0 Yeah, man, lessons learned. Now I would run my code with the stack-increasing command in Windows cmd, lol.
 » 5 weeks ago, # |   +18 I saw that B got updated so that S=0 for all cases now. I submitted my solution before that change was made. Will my source code be run against the new cases?
•  » » 5 weeks ago, # ^ |   -29 Yes, earlier submissions will be rejudged accordingly, we apologize for the inconvenience!
•  » » » 5 weeks ago, # ^ |   +5 Is the code run by Facebook against the new S = 0 cases, or is our output file compared to the old S > 0 cases?
•  » » » 5 weeks ago, # ^ |   +15 How does this work? I just looked at my input file and S is not 0 (at least in the first few cases I checked; I didn't go through the whole file). And my solution assumes S can't be 0 (or at least, I'm not sure if it does). So, will my submission be judged against the original output?
•  » » » 5 weeks ago, # ^ |   +13 In my code, there is a division by S. How are you going to judge my solution?
•  » » » 5 weeks ago, # ^ |   +21 My submission depends on $s > 0$ in a very silly way, so rejudging won't work in my case. Besides that, my code doesn't compile without an extra owned library (just for basic templates). From my understanding of the following rule it should be ok: "What file formats can I submit? Your output and source code should both be in plain text. Their file extensions don't matter. Your source code must be a single file. If your solution depends on additional library files, please upload only the single main source code file. Do not submit executable files (such as .exe files)."
•  » » 5 weeks ago, # ^ | ← Rev. 3 →   +21 I just submitted and, after refreshing the page, saw the announcement... Since the contest is blind, they should let anyone resubmit with the new constraints. The constraint on S before was strictly greater than 0, and someone who put in an assertion may even fail now...
•  » » » 5 weeks ago, # ^ |   +16 Don't worry -- if need be we'll handle any such cases manually as they arise.
 » 5 weeks ago, # | ← Rev. 2 →   +43 I just realized that S = 0 for the new updated test for Problem B. This change happened just when I was submitting. When validating, I got the S != 0 input set. I waited for around 10 minutes after submitting the validation output file to ensure that my solution was correct, and then downloaded the final input file, unaware of this new constraint. Upon running my code, I got an error and realized that S = 0. I reread the problem and discovered this new constraint. For this reason, I completely missed the submission window for B. I believe that something should be done about this. First, all competitors that submitted before the change should be given an option to resubmit for the new version of the problem (if they think it's easier); otherwise, those who submit B later get an advantage. I believe that this "resubmission" window should be rather lenient, especially for users like me who had a gap between validation and final submission. Furthermore, when something like this happens, I believe that FHC should have a way of notifying contestants (e.g. CF's way of giving a browser notification), not simply relying on contestants refreshing the page. I had no idea this happened until I looked at the input file. wjomlex lonefox please look into this. I think that this is a potential reason to make the round "unrated". This new constraint will mess up the point values associated with the problems and may cause an imbalance. Edit: My timer has been refunded. Thank you to FHC for the fast response! Nevertheless, I think a better notification system for these issues should be established. Do you think that the point values will be recalculated?
•  » » 5 weeks ago, # ^ |   0 Yes, you're right, thank you for the suggestion. Aside from rejudging submissions that were made at all, we're also working to reset the timer for everybody who downloaded the input but made no submissions, as they could have also been affected (though there are few such cases).
•  » » 5 weeks ago, # ^ |   0 Based on what we've seen, we're proceeding with the round and the existing point values. We believe that, though unfortunate, this issue has had little effect on the balance and results of the round.
 » 5 weeks ago, # |   0 I'm unable to download the validation input file for B (but did it for A1 earlier without issues). Am I missing something or is there an outage?
•  » » 5 weeks ago, # ^ |   0 It should be working, can you please try again after reloading your page, and send us a clarification if the issue persists?
•  » » » 5 weeks ago, # ^ |   +2 Thanks for your reply. I was able to get it working by quitting Chrome entirely (it seems like the download was hanging in some no-person's-land). Now I have another issue -- I'm getting WA on the validation input. I figured out why my algorithm is wrong, but I'm not sure how to fix it. Are you the right person to ask for help with this? :D
•  » » » » 5 weeks ago, # ^ | ← Rev. 2 →   +1 You can download the validation input and re-validate as many times as you need to. Feel free to send a clarification through the contest page if you encounter any issues.
 » 5 weeks ago, # |   +50 I don't understand why the constraint on S was changed for problem B. I think S > 0 worked perfectly fine.
•  » » 5 weeks ago, # ^ |   +132 Let's not discuss the problem from a running contest and wait for the official explanation later on (I have my suspicion about what happened).
•  » » » 5 weeks ago, # ^ |   -8 Did they forget to generate different S for each test case?
•  » » 5 weeks ago, # ^ |   +137 Assume $P$ and $Q$ to be sorted. Let $x_i$ be the index of the log driver that set the explosive charge on cluster $i$. For $S = 0$, there will be an optimal solution in which $x_i \leq x_{i + 1}$ for all $i$. But for $S > 0$, this might not be true. Example: $P = [0, 50]$, $Q = [1, 2, 3, 4, 75, 100]$, $S = 10^9$. In the optimal solution, the log driver at position $0$ covers clusters at $1, 2, 100$, and the other log driver covers clusters at $3, 4, 75$.
 » 5 weeks ago, # |   0 Auto comment: topic has been updated by LoneFox (previous revision, new revision, compare).
 » 5 weeks ago, # | ← Rev. 2 →   0 For some reason I couldn't run the full test case file on 1B... and couldn't submit my solution within the 6 minutes. Tried to run it with both file and system io. Sad :( Does this have to do with the error just announced?
•  » » 5 weeks ago, # ^ |   0 I'm afraid that would have been unrelated — you should have still received a valid input file, meaning that it would have been an issue with your code.
 » 5 weeks ago, # |   -48 Why does making a problem easier reduce the cutoff instead of increasing it?
 » 5 weeks ago, # |   +186 Using dynamite on the river kills the fish. The fish had their revenge in the failure of this task.
 » 5 weeks ago, # |   -121
 » 5 weeks ago, # | ← Rev. 2 →   -48 Help, LoneFox. I just solved the chapter 1 problem in today's round, and when I ran the actual input file with my 32-bit Python interpreter, it gave me a memory error, and, panicking, I couldn't think much at the time. After my 6 minutes elapsed, I tried the same input file and code with a 64-bit interpreter, and it generated the output file. I know it's my fault, but it still feels really bad when you have the solution and couldn't submit at such an important contest because you chose the wrong interpreter version. Can anything be done regarding this, please?
•  » » 5 weeks ago, # ^ |   +86 Two things: you can still solve other tasks, and you can learn from the experience.
 » 5 weeks ago, # | ← Rev. 2 →   0 Just one small doubt. Do I get points for B if I submit now, after the change, if my solution complies with the change (S=0) and is correct? I didn't make a submission for B before the change.
•  » » 5 weeks ago, # ^ |   0 You can still solve B and score points.
 » 5 weeks ago, # | ← Rev. 2 →   0 "Update: Due to an error with problem B (Dislodging Logs), only 25 points are now required to advance to Round 2, rather than 30" So what about this problem? I assume that this error is now fixed, all samples (including the validation input) are correct, and we can still solve it?
•  » » 5 weeks ago, # ^ |   0 That is correct.
 » 5 weeks ago, # |   -31 Weird (large) inputs. Just declaring the array as global gave correct output; otherwise I was getting a segmentation fault. Debugging it took more than 6 minutes, and I was not able to submit my solution. Even B is giving WA (I think my logic is correct). Now there seems to be no way I can make it to the next round.
•  » » 5 weeks ago, # ^ |   +1 You must not be initializing your array. Declaring it globally implicitly initializes the array for you.
•  » » » 5 weeks ago, # ^ |   -10 It's easier in the case of multiple test cases; that's why I usually do it. It also reduces the memory footprint of the program.
 » 5 weeks ago, # |   +6 Submitted C; it showed "processing your submission" for ~1 min, the timer expired, and it said "a problem occurred, please try again". Not to be a crybaby, but please do something for me.
 » 5 weeks ago, # |   +10 The friends standings page is not available anymore; is that intended?
•  » » 5 weeks ago, # ^ |   0 It looks like there's a bug affecting some people. We'll try to fix it soon.
 » 5 weeks ago, # |   +133 Does anyone have a solution to the old version of B (S > 0)? I was unable to come up with a solution that passes this test case:

Input: 1 2 3 2 6 2 4 0 0 0 10 8 9 0 0 4 10

Output: 16

Explanation: The log driver at position 2 sets an explosive charge at the log at position 9. The log driver at position 4 sets an explosive charge at the logs at positions 5 and 8.

This case also hacked all of my friends' solutions (possibly this is why they changed the problem). Any insights?
•  » » 5 weeks ago, # ^ |   +43 My suspicion is that they thought the binary search would work for S > 0 too. Before seeing your example, I was convinced that the optimal solution would involve the leftmost driver going to the leftmost groups of logs. Really curious what the solution and best complexity are for the original problem.
 » 5 weeks ago, # |   0 How to solve A2 and B? I was thinking of a lazy-propagation-based solution for A2 but couldn't come up with one. Can anyone please give me the correct approach for these problems? Thanks in advance!
•  » » 5 weeks ago, # ^ |   0 B is binary search. Process drivers from left to right. The leftmost driver visits the leftmost cluster, then tries to go right as far as possible. Or vice versa: he goes right first, then visits the leftmost cluster.
•  » » » 5 weeks ago, # ^ | ← Rev. 2 →   0 Yeah, exactly, at first look it seems like binary search, but how to simulate for a possible k seconds? My mind was revolving around right-left-right-left movements only.
•  » » » » 5 weeks ago, # ^ |   0 well there are 4 possible routes for each driver: L, R, LR, RL. We know the starting point, the leftmost point, and we have to maximize the rightmost point. If we can't visit the leftmost point, the answer is no.
•  » » 5 weeks ago, # ^ | ← Rev. 2 →   +22 current B (S=0) : http://codeforces.com/problemset/problem/343/C
•  » » » 5 weeks ago, # ^ |   +15 That's my problem, when I read it, I just copied my old code and it worked :D
•  » » » » 5 weeks ago, # ^ | ← Rev. 3 →   +8 I'm so glad I had solved it before Hacker Cup. It had taken me almost 10-12 hours to figure out the correct solution back then. (Seeing it for the first time during Hacker Cup would have been very bad for me XD.) PS: It was a nice problem btw!
 » 5 weeks ago, # |   +3 Can someone share their solution for problem C?
•  » » 5 weeks ago, # ^ | ← Rev. 4 →   +1 For each vertex, calculate with DP the following values, only considering its subtree:

- total number of possible pairs
- size of the current connected component of uninfected patients
- maximum size of a connected component of uninfected patients
- number of vertices included in maximum-size connected components

Now, consider removing an edge connecting a vertex u and its parent. The optimal way is to connect the maximum-size connected component of each separated tree. For the subtree of u, we can just use the DP values from before. For the other tree, the DP values coming from the parent can be calculated with the rerooting technique. You should take care of the case where there is no uninfected patient in a certain tree: any pair is optimal in this case.
 » 5 weeks ago, # |   +8 I had all the cases for C down (no components, 1 component, multiple components) and passed validation, but failed the real tests due to a bug in one of my DFSs, I guess... kind of rip.
 » 5 weeks ago, # | ← Rev. 2 →   0 Can someone post the correct validation output for C? I can't for the life of me figure out what's wrong with my logic/code. Edit: Damn. If anyone is wondering what was wrong: my for loop to generate edges used 0-based indexing, so, copying over the provided formula that uses 1-based indexing, I calculated the edges mod $i-2$ instead of $i-1$. This unluckily passes the only sample that generates additional edges, and is 4 hours of my life gone.
•  » » 5 weeks ago, # ^ |   +11 Correct output:

Case #1: 3 4
Case #2: 0 4
Case #3: 1 2
Case #4: 3 8
Case #5: 3 12
Case #6: 15 16
Case #7: 193 432
Case #8: 100 180
Case #9: 131955 122400
Case #10: 0 8099820001
Case #11: 3 8099640007
Case #12: 1 6

The thing I missed for quite some time is that I allowed removing edges between two uninfected nodes.
•  » » » 5 weeks ago, # ^ |   +5 Wouldn't removing edges between two uninfected nodes be fine as long as the result remained a tree and the size of the newly created uninfected component was the maximum possible? Like, say we had three connected areas of uninfected rooms of size $x$. Suppose removing an edge from one of them breaks the graph into two connected components such that the other two uninfected areas of size $x$ were in different components; then we would be able to create a room of size $2x$ by attaching any two nodes from among those uninfected areas of size $x$. Anyway, thanks for the input. I seem to be somehow printing the wrong maximum answer for some cases while printing the correct number of ways.
•  » » » » 5 weeks ago, # ^ |   0 In your case, yes, you can break it into parts of size $a$, $b$, $2x$, where $a+b=x$. But that's not optimal: it's better to break it into $x$, $2x$. In general, you never want to break up a piece into smaller pieces, because $\binom{a}{2} + \binom{b}{2} < \binom{a+b}{2}$. There's one annoying edge case though: when everyone is uninfected, you do want to separate uninfected edges.
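That inequality is just the cross-pair count made explicit: splitting a group of size $a+b$ into groups of sizes $a$ and $b$ loses exactly the $ab$ pairs that span the two parts,

$$\binom{a}{2} + \binom{b}{2} + ab = \frac{a(a-1) + b(b-1) + 2ab}{2} = \frac{(a+b)(a+b-1)}{2} = \binom{a+b}{2},$$

so whenever $a, b \geq 1$, keeping the groups together gains $ab > 0$ pairs.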
•  » » » 5 weeks ago, # ^ |   0 Huh. It seems I either have a fundamental misunderstanding of the problem or I am reading the input wrong. My O(N^4) brute force outputs a different answer on Case #7.
•  » » » » 5 weeks ago, # ^ |   0 Let’s say you have connected components of uninfected nodes with sizes [3,3,4]. The optimal solution would connect either the first or second component with the third, thus creating a component of size 7. However, since we’re asked about the max number of pairs of people that can visit each other, we can’t break one of the remaining components up, since that would reduce this number. So in this case, the optimal solution will end up with one component of size 3, which contributes 3 pairs to the answer, and one component of size 7, which contributes 21 pairs to the answer. So, the max number of pairs you can get is 24, breaking up any component in the middle would make this number smaller.
•  » » » » » 5 weeks ago, # ^ |   0 Seems the same as my solution. When you have time, could you share your initial components for Case #7? I got [11, 4, 4, 3, 2, 1, ...], which I think gives 115 instead of 193 (unless I'm missing something): 15*14 + 4*3 + 3*2 + 2 = 230.
•  » » » » » » 5 weeks ago, # ^ |   +8 If you don't mind me intruding in your conversation, I have [18, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1], which gives $20\cdot19+2\cdot1+2\cdot1+2\cdot1 = 386 = 193\cdot2$
•  » » » » » » » 5 weeks ago, # ^ |   0 Yep, I think I remember having 18 and four 2s somewhere, thanks for the input :)
•  » » » » » » » 5 weeks ago, # ^ |   0 Yikes, I probably should not have copied over my component finding into my brute force then
•  » » » 5 weeks ago, # ^ | ← Rev. 2 →   0 But what if you have 3 components of uninfected nodes with size 4? Suppose one of these components is an ancestor of the other two, and those two are different descendants. You can remove one edge between uninfected nodes from the parent component and connect the other two. Am I missing something?
•  » » » » 5 weeks ago, # ^ |   0 Well, this is a much simpler case. Link to image
•  » » » » » 5 weeks ago, # ^ |   +5 If you’d used one of the edges incident to a red node instead, you would have made two groups of white nodes (with sizes 2 and 4). This means that the number of pairs of patients that can reach each other is 2+6=8. In your example, there are groups with sizes 1, 1 and 4, so the number of pairs of patients that can reach each other is 0+0+6=6, which is less than 8. The problem asks us to count the number of ways to achieve a situation which maximizes the mentioned number of pairs, so removing the edge you showed should not be included in the solution.
•  » » » » » 5 weeks ago, # ^ | ← Rev. 3 →   0 You can't do this because we want to maximize total number of pairs of people who can visit each other (not just in one component, but over ALL components), so the answer in your diagram above should be 4C2 + 2C2 = 7. I made the same mistake and spent 2 hours trying to debug before rereading the problem statement and re-solving stuff.
•  » » » » » » 5 weeks ago, # ^ |   0 Ohhh, that's right. Got it. I think I spent so much time solving one component that I forgot the rest. Thanks!!
•  » » 5 weeks ago, # ^ |   +5 +1, I even wrote an n^4 brute force to stress test, but my solution passed 10k cases against it, and I can't figure out what is wrong with it ._.
 » 5 weeks ago, # |   +493 I don't want to be so critical, but this contest was IMO really bad (even ignoring the thing with B). Almost all problems contained no interesting parts but had tons of annoying implementation (esp. A and C). I get that Round 1 maybe shouldn't have very hard problems, but easy problems should still be tolerable. This contest I felt like quitting, but forced myself to continue because it's an "important" contest. I very much liked rng_58's opinion on easy problems: "We try to make ARC-ABs 'harmless', that is, they may not be interesting, but also are not annoying."

More generally: how is it that regular small contests (Codeforces, AtCoder, OpenCup, even TopCoder) can deliver such nice problems weekly, while "major" contests on average have mediocre or even bad problems? Most of all I mean FHC, but to some extent also GCJ and ICPC WF.

Also the input format: I realize this is a response to people who couldn't download the big inputs, but this is so ugly. And it is quite limited, because you can still only do "big input" and "random input" (the latter might help the contestants in problems like A3). I know this requires additional development, but wouldn't the following be better: the server generates a password and puts the file in a password-protected zip; the client downloads the zip; at any time, the client can request the password, after which the 6-minute countdown will start.
•  » » 5 weeks ago, # ^ | ← Rev. 2 →   0 I'd like to offer another opinion on problem A. By doing this, I'm not in any way discounting your opinion, and in fact I do think your opinion as someone who is more experienced and more invested in the sport should matter more than that of the casuals like me.That being said, as someone who is in the target audience for easy problems, I enjoyed A quite a bit. In hindsight, figuring out the details of the perimeter updates was tedious, but I guess at the time, I just assumed that was part of the process and didn't think much about it.
•  » » » 5 weeks ago, # ^ |   +130 Interesting. IMO the details of A were tedious enough to consider quitting the contest.Unlike some other reds I have nothing against "classical" problems in contests aimed at lower-rated people. But I'd have thought that no one likes tedious problems.
•  » » » » 5 weeks ago, # ^ |   +1 Yeah I think for me, I just considered figuring that part out as part of the problem's solution, whereas I'm guessing you probably considered it more of a time-wasting implementation detail after you had already figured out the problem's high-level solution?
•  » » » » 5 weeks ago, # ^ |   +33 In general I think having "tricky" or "tedious" implementation problems is OK in principle for programming contests. In fact, I find that it is a very "testable" and remarkable skill: some people are actually very good at writing such fairly error-prone, tricky things without mistakes and without taking too much time, and since I wish I had such high implementation skill, I find it OK to "reward" that coding ability.

However, I think it is critical to consider "implementation", "being tedious", or "being tricky" as part of a problem's difficulty, both for a problem setter (to achieve the aimed difficulty) and for a contestant (to avoid thinking that a problem is "easy" just because the idea is easy: an "easy" but "tedious" problem is not easy anymore).

In that light, I think the contest completely failed to have easy problems (maybe it was intended): the ones with the "easy" ideas were very tricky and tedious.

Also, I agree that the contest as a whole felt fairly unbalanced: the overall difficulty was mainly in the tricky/tedious area and not in the "algorithmic" part itself, which is OK for a single problem but not nice for the whole contest.
•  » » 5 weeks ago, # ^ |   0 Good idea!!
•  » » 5 weeks ago, # ^ |   -61 Well, you could skip A2, A3 and C, to be honest. I am happy that I did that. A1 and B are pretty easy to write and not as annoying as the classical segment-merging bullshit in A2 and A3.
•  » » » 5 weeks ago, # ^ |   +68 yeah, because everybody's sure their B solution is correct
•  » » » 5 weeks ago, # ^ |   0 B is only easy to write after the revision setting S=0, which came (I believe) very late in the contest.
•  » » » 5 weeks ago, # ^ |   +8 Not sure about A3, but A2 is reasonably easy to solve with std::set
•  » » » » 5 weeks ago, # ^ |   0 Solved A2 using a set, and A3 can be solved using a set too, in the same way.
•  » » » 5 weeks ago, # ^ |   +11 I think that, since the queries are all available beforehand, A2 is much easier to solve using a sweep-line.

Sort all the "rectangle interval" events (start and end) and process them, adding a rectangle to the current set upon opening and removing it upon closing. Each "piece" can then be annotated with the lowest-index rectangle containing it, since that rectangle will fill it once and for all. A list of pieces for each rectangle can be precomputed this way.

Then, process each rectangle in order and, for each one, explicitly iterate over all its pieces, knowing that each such piece "grows" the area from height 0 to that height (you just have to check the height values of neighboring pieces, which can be kept in a map or array after coordinate compression).

On the other hand, A3 can be solved with coordinate compression + a lazy-propagation segtree in a much less error-prone way (in my opinion) than the official solution (assuming a correct lazy-propagation segtree is available to simply copy-paste).

However, although less tricky to implement, this other way involves using totally different methods for each chapter, and would not work with online queries, as the segment-merging version does.
•  » » 5 weeks ago, # ^ | ← Rev. 2 →   0 There is one detail here: you need to provide each contestant a slightly different file, otherwise contestants may cheat by collaborating: each person solves one problem and shares the output file with the others (technically people can cheat by sharing code too, but that is easier to catch). GCJ (in the old format) did that for the large dataset, and for the small dataset they even generated different files per submission to prevent people from brute-forcing the solution.

There was one task in GCJ Final 2009 where Google handled the large input file as follows: they provided the contestants the script for generating the large input, and when submitting, contestants downloaded a seed value as a parameter to run the script with. There are still variations in how fast/slow it is on individual computers, but assuming a reasonably large time limit, that is still pretty reasonable.
•  » » 5 weeks ago, # ^ |   0 Yeah, I think some problems like this are OK for $\ge 24$-hour contests, but having all/most of them like this is a little tiring. I managed to solve all the problems, but it took quite a portion of the day (due to hunting all the little bugs that caused the samples or verification to fail)
 » 5 weeks ago, # |   +6 Unable to check the scoreboard on the friends tab
 » 5 weeks ago, # |   +2 How to solve A2 and A3?
•  » » 5 weeks ago, # ^ | ← Rev. 2 →   +1 Maintain a set of non-overlapping polygons (storing starting position, ending position and perimeter of each polygon would be enough). In the $i$th iteration, try to combine the $i$th rectangle with an already stored polygon or store it as a separate one. The technique for combining two polygons is slightly different in A2 and A3. In A3, you will need another set (or other DS) to efficiently find the current height of the polygon at the position $x$.
 » 5 weeks ago, # | ← Rev. 2 →   +17 Solution to Problem A1 without using the constraint on W: `int was = 0; was += h[0] * 2 + w*2; int ans = was % inf; deque q; q.push_back(0); for(int c=1; c`
 » 5 weeks ago, # | ← Rev. 2 →   +89 I don't know if this has been asked before, but could there be a way to filter the scoreboard by country?

As far as I know, the only way to filter results is by adding people as friends, and people who recently made a new account will get their accounts disabled if they send friend requests. Filtering by country might help with that issue.
•  » » 5 weeks ago, # ^ |   +8 Thanks for the suggestions! We'll definitely consider adding public countries of representation in the future.
 » 5 weeks ago, # |   -13 LoneFox Why were we only given the first K elements in the input, with the rest generated using a formula? I personally think it's unnecessary, as it requires no extra thinking, just writing an extra for loop, unless it's because of judge limitations. Is this the case?
•  » » 5 weeks ago, # ^ |   +5 Not sure of the official reason, but in the Qualification Round some people could not finish downloading the input within the time constraint. This formula keeps the input file small while still yielding a large test case.
•  » » » 5 weeks ago, # ^ |   -30 Yeah, that makes sense, but this also limits the kinds of problems you can create. For example, suppose you have to print a final array of the same size as the input: the input can be shrunk, but the output can't. I think it's better to build a judge like CF's, where you just submit the source code; Google also did this last year.
•  » » 5 weeks ago, # ^ |   +3 I couldn't download the test data for problem C in time during the qualification round. They reset my timer, but they also told me that they wouldn't do it again during the next rounds.

Generating tests with a formula is a nice solution to this matter. However, one has to be really careful coding these formulas.
 » 5 weeks ago, # |   -52 I personally feel FB should offer a server for executing the source code, primarily because of insufficient resources on our personal computers for running the program on larger inputs. This is especially true in FHC, where the inputs are on the order of 1e6.

Take my case for example: in the Qualification Round, there was a problem on tree DP (the last problem, if I recall correctly). After checking my program against the validation input, I tried running it on the test input and got a segmentation fault. I assumed there was an error in my code, spent nearly 8 hours debugging it without finding anything, and finally resorted to Valgrind, through which I found out that my stack had overflowed due to the high recursion depth. There was a similar case in Round 1 problem C as well, which left me helpless (not that it affected my qualification, but getting higher on the scoreboard always feels nice).

In my opinion, these are not things one should worry about while coding the solutions; not that I had a workaround for the stack overflow, considering I needed a DFS.

LoneFox Is there any chance of FB providing us a server for running code? Will Round 2 also have the same format of downloading and executing on our personal computers? If so, people with superior PCs have an unfair advantage of getting away with a looser solution (let's not even consider my scenario).
•  » » 5 weeks ago, # ^ |   +34 I'm not trying to dismiss this opinion (I think these are good points); I'm just trying to understand for my own curiosity. I don't remember that many complaints about code execution on contestants' PCs when Google Code Jam did it not too long ago. I genuinely wonder what the reason for this is:

- My memory is just wrong.
- People were assuming it as a given for Code Jam and not voicing their concerns, but for Hacker Cup it's a new rule change.
- People have started using less powerful machines to participate in programming competitions (e.g. Chromebooks).
- Competitive programming has become more popular over the last years, and as such the number of issues we are hearing about this has increased.
- Codeforces has become more popular over the last years, and as such the number of issues we are hearing about this has increased.
- Something else.
•  » » » 5 weeks ago, # ^ |   +8 Unfortunately, it seems the old GCJ problems are being transitioned to the new system, so we can't access them now.

If my memory serves me right, we rarely had a task with a huge input size (including pseudorandom ones) in older Google Code Jam (Kickstart not included). I can't remember tasks that differentiate O(N) solutions from slower ones, which is what N > 500k tasks are usually designed for. The only exception I remember is a Finals problem, which is less of an issue since the contestants have the same working environment.

Most of the older tasks differentiated between polynomial- and exponential-time solutions, which IMHO makes them interesting.
•  » » » 5 weeks ago, # ^ |   +5 I am not sure about the others, but this is my first ever Facebook Hacker Cup, and I haven't participated in a single GCJ round so far. So I just voiced my opinion. I just feel that the competing environment should be fair. Even if it's impossible to provide a common execution platform because of the huge number of people in the Qualification Round and Round 1, it should be considered for the later rounds, given that time penalty is an important factor.
•  » » 5 weeks ago, # ^ |   +124 I just wanted to say something in support of this local-judge format that FBHC uses and GCJ used to use. Note that the alternative for FBHC would be remote judging without any feedback, as you only get one shot to submit.

- This format is more contestant-friendly in terms of debuggability. I think the fact that OP was able to run valgrind to find the bug already illustrates that they have a lot more debugging tools available. If this were on CF or something, and the bug were some out-of-bounds access, they would just get an RTE with no further explanation. (Without feedback, it would just fail entirely.) Being able to debug this way is a fine skill to reward. While there is some pressure, you would've failed the problem if it were remotely judged with no feedback, which would be strictly worse for the contestant.
- This format is more contestant-friendly in terms of tooling. Contestants are free to use whatever tooling or languages they have, without worrying about version incompatibility or which standard libraries are available. For C++, I think letting contestants use Boost or Abseil is a great thing! We shouldn't be crippled by the single-file submission format (or even a single language), and this is a good way to allow it.
- This format is more contestant-friendly in terms of performance predictability. Contestants are free to create and run any cases locally and have no doubt that the performance in local testing will match the performance on the judge. Each year, IOI puts a ton of effort into accomplishing this by buying matching laptops for the contestants and the judges; this format gives this predictability for free.
- This format may be slightly friendlier to programmers who have never done competitive programming before. This is mostly the same as the point about tooling: new contestants don't have to learn the idiosyncrasies of the remote environment or how to do the right file-IO/stdio setup.

Also, I think the validation file is a great addition; it allows contestants to ensure their setup is working correctly in a zero-stakes situation. (Old GCJ's small inputs kind of shared this function.)

As you point out, though, there are some drawbacks, but I don't think they outweigh the benefits:

- Contestants may prefer the well-managed/well-configured environments of remote judges, e.g. with preset stack limits. For the record, I don't think this is a valid excuse for any advanced contestant; at the top level, you need to be able to locally test max cases anyway, and you should totally be able to configure your environment for that. On the other hand, this may disproportionately impact novice programmers who don't have a great mastery of their local toolchain/environment. I'm not sure how this weighs against not having to learn the intricacies of the remote environment; perhaps CP nowadays is sufficiently "plug-n-play" that running code remotely "just works" for a beginner. I think the best solution to this, though, is to use some online service like repl.it to run your code; these are designed to be beginner-friendly and give you a nice prebuilt VM to use. These services are plentiful and accessible enough that I think anyone learning to program can easily find one to use. (I think repl.it has file upload/download too.)
- Some contestants may have beefier computers/setups which give them an unfair advantage. I think this is a real phenomenon which may impact results, but I'm not quite convinced that it's a big deal. First, 6 minutes is very generous; any solution that could've passed in a blind remote-judging scenario will almost certainly run within 6 minutes on pretty much any machine, so anyone who would've succeeded anyway will continue to succeed. On the other hand, it is true that people may be able to squeeze in solutions that otherwise would be too slow by parallelizing or using a bigger machine. I have a few responses to that: first, writing parallel code is a programming skill that seems fine to reward. Second, this can be partially mitigated by choosing problems where the difference between the intended solution and the slow solutions is larger, e.g. polynomial vs. exponential. In practice, I don't think we see people renting AWS machines to pass problems, so I don't think this is a huge problem.

Overall, I think this format is a net positive for contestants, and I really hope FBHC keeps it (especially with validation sets).
 » 5 weeks ago, # | ← Rev. 2 →   +59 Fun fact: Task A can be solved (albeit in a much more technical way) for inputs without any restrictions on $H$ using the idea behind Segment Tree Beats. In each segtree node, keep the value of the minimum height, the second minimum height, and how much the $y$-perimeter of the figure would change if you were to add $1$ to all the blocks with the minimum height (computed from leaves to root). Now you can write the push and pull functions, and it should be (kind of) straightforward. For the restricted subtasks, though, you could obtain $O(N \log N)$ by recursing down only if the minimum height is strictly smaller than the maximum height in the subtree (i.e. if not all heights are equal), with minimal lazy-propagation logic. This is similar to the set solutions, but (IMHO) much easier to implement and with fewer moving parts. And it has the advantage of solving A1-A3 with just one implementation (even without the imposed constraint on $W$ for A1).
•  » » 5 weeks ago, # ^ |   +23 I actually implemented this after I finished (for fun/for verification), you can find a screencast here: https://www.youtube.com/watch?v=CwEJmC3RWOI.
•  » » » 5 weeks ago, # ^ |   0 Can you paste the final code in text format somewhere? Thanks!
 » 5 weeks ago, # | ← Rev. 2 →   0 Problem A1: TLE :( Code. Is the problem my array size, or is my logic wrong?
 » 5 weeks ago, # |   +148 Regarding the stack limit:I'm not surprised by all people struggling with stack limit but I hope that it won't make FHC to change the contest format in the future. A qualification round is there to test the system and this is when you might be surprised by stack limit and forced to learn how to deal with it. You should know your own environment and be able to run a program. That being said, it's still bad that machine power and internet speed differ for participants but the former isn't a big deal if intended solutions work in ~10 seconds and brute force requires hours, and the latter is resolved by providing encrypted tests in advance.
•  » » 5 weeks ago, # ^ |   +16 It was good that they had a problem in the qual round that forced us to deal with the stack limit. But for some reason, even though I set my system limit to the maximum and passed the qual round problem, I still got a segfault in R1. Only with some extra g++ options did it pass. At least I know how to set my compiler invocation properly now!

I do think it's good that the format is different. It could lead to more variety — like using more obscure languages or third party libraries, or more room for precomputation for some kinds of problems. It's just that the execution makes it stressful.
•  » » » 5 weeks ago, # ^ |   -18 It's just that the execution makes it stressful. That's the fun!
 » 5 weeks ago, # |   0 I did the segment merging approach which some people have described here for A2. It passed the validation input but failed on the main cases :(. Could someone tell me what I am doing wrong?

https://gist.github.com/cathreya/e76fb3918bbb4dbd5437d545572ed4cf
•  » » 5 weeks ago, # ^ |   +13  auto lpt = ls.lower_bound({l[i],{0,0}}); auto rpt = ls.upper_bound({l[i]+w[i],{0,0}}); You may need to give sufficiently big values instead of {0,0}. I made the exact same mistake.
•  » » » 5 weeks ago, # ^ |   0 What are {0,0} here?

I stored {{left,right},perimeter} and used {{l[i],l[i]},0} in the lower_bound function. I had to write two messy functions to find those two bounds.
•  » » » 5 weeks ago, # ^ |   0 Damn. It worked. I can't believe I messed this up :'(

Thanks a lot!
 » 5 weeks ago, # | ← Rev. 3 →   -15 Lol. In problem B I was like: god damn it, I'm not missing another dp or binary-search-over-answer solution again! And then all I did was keep thinking about how I could binary search, try dp, or whether it could be binary search with dp, and then I freaking made it xD Every big contest. EVERY BIG CONTEST I miss a dp or a BS solution. Well, not today! (well, actually not yesterday :0)

Edit: fun fact — I watched Errichto's video and lol'd hard when I saw him saying the exact same words as me. Dp? Bs? Bs with dp?
•  » » 5 weeks ago, # ^ |   0 One funny thing is, for the first problem A1, I had initially thought of the same approach as Errichto. Then I thought it wouldn't finish executing under 6 minutes. In his video, his computer (which is a lot faster than mine in terms of performance) finished the test in 3 minutes. I think in my case it would have been a very close call if I had implemented Errichto's solution.
•  » » » 5 weeks ago, # ^ |   0 Interesting... I'll check it out.
•  » » » 5 weeks ago, # ^ |   +13 If you use unordered_map in his solution, you go (on my PC) from ~4 mins to ~26 seconds. Also, if you clear the map when l[i-1] + w + 1 > l[i] (i.e. when you'll never use the previous values), you get to about 2 minutes.

My solution, which simulates this by keeping a vector of the values and clearing it from time to time, runs in ~2 seconds.
•  » » » » 5 weeks ago, # ^ |   -8 Yup. As I said during a video, I would quickly switch to an array/vector if it seemed that I'm running out of 6 minutes.
•  » » » » » 5 weeks ago, # ^ |   0 My map solution was too slow; it was taking 20+ seconds for each of the $10^6$-sized cases. After switching to an unordered map, it was able to finish all cases in 20+ seconds.

Fun fact: I didn't know that the 6-minute window was the only time limit; I thought they would run the code on their server. So I spent an hour finding a more efficient solution with an array of size 20 (thanks to the constraints).

I started late, and I couldn't finish implementing A3 before the CF Global Round started because of the time spent making A1 efficient. Luckily, A1 and A2 were enough for clearing this round.

One question: is it possible to initialize an array of size $5 \cdot 10^8$ on a high-config PC? I couldn't do it on my machine.
•  » » 5 weeks ago, # ^ |   0 Too bad, the binary search + dp solution is flawed. :P
•  » » » 5 weeks ago, # ^ |   0 Well, I wrote from my phone so maybe I wasn't clear but in the end I solved it with binary search only.Or you mean that you tried dp+bs and it didn't work? :0
•  » » » » 5 weeks ago, # ^ |   0 If you were talking about solving it when S>0, read this comment: https://codeforces.com/blog/entry/81418?#comment-681841
•  » » » 4 weeks ago, # ^ |   0 My solution for $S = 0$ was DP + optimum monotonicity (because the transitions were to minimize the maximum of two functions, one non-decreasing and the other non-increasing); so I used binary search to get the optimum for each state. I could have used two pointers and gotten a linear solution, but this wasn't necessary.
 » 5 weeks ago, # |   0 "Efficient ordered sets are supported in many programming languages, backed by BBSTs (balanced binary search trees) and capable of supporting the necessary searching, deletion, and insertion operations in logarithmic time in the number of elements in the set."

Most examples I found are done in C++ using std::set (with its upper_bound and lower_bound APIs). Could anyone please recommend something similar for Python? I ended up implementing an AVL tree manually during the contest; it passed validation but exceeded the execution time on the main test set.
 » 5 weeks ago, # |   +21 So has anyone figured out a solution for B with S > 0?
 » 4 weeks ago, # | ← Rev. 4 →   0 .
•  » » 4 weeks ago, # ^ |   +8 It fails because you took the max of H every time. Think of a smaller height between two bigger ones: it will increase the perimeter.
•  » » » 4 weeks ago, # ^ |   0 oh! I see. Thanks.
 » 4 weeks ago, # |   0 Auto comment: topic has been updated by LoneFox (previous revision, new revision, compare).
 » 4 weeks ago, # |   +37 I'm happy to say that we're now able to share information on this year's t-shirt prizes, and will be giving out more than ever before! Everybody who solves at least one problem in the upcoming Round 2 will receive a Facebook Hacker Cup t-shirt. The top 200 competitors from Round 2 will also have a "Top 200" badge on their shirt.