Helping Contestants Help Us All or Competition Pitfalls

Revision en3, by MikeMirzayanov, 2016-09-11 21:36:37

Over my competition history I've collected a vast number of fail stories, and those usually come in two flavours: either you get the aftertaste of a personal failure, or you get the feeling that the organisers helped you along in your adventure. It's always useful to try and learn something from these failures. The first kind is rather personal and only affects you or your team, so it is not that interesting to talk about. The second kind, however, matters more. First of all, it is direct feedback to the organisers, in the hope of making future contests better. And then you usually try to apply that feedback yourself once you move to the organisers' side, in an attempt to avoid setting the same pitfalls that annoyed you as a competitor. So here I'm going to describe two types of pitfalls which, in my opinion, organisers can deliberately or accidentally set, and which make it a lot easier for teams to embark on a wonderful journey to fail-land. This was originally intended to be a comment about the Bubble Cup problemset, but since I realised that the problemset contained great examples of both of these pitfalls, I've decided to post it separately to enable discussion.

"We weren't even bothered to try and submit something as stupid as this". Once in a while you get in a situation where everyone is solving one problem but you have no ideas on how to proceed. This usually happens if you manage to overcomplicate this task. However, sometimes these happen when you attempt to solve the task rather than just getting AC. Some cases are reasonably sensible – if you want to prove every single bit of your program before submitting it, you might be doing it wrong – sometimes it is sensible to try and assume something which looks sensible. However, sometimes this goes too far. Imagine you have a greedy or random solution, which certainly doesn't look like it should work and you sure as hell don't have any proof for it, but it actually works, either because there is a complicated proof or just because tests were weak/it turned out to be difficult to construct a test breaking the solution. And this is a type of tasks which I personally highly dislike. Why? You often get penalised for solving the task. If you just happen to try something "maybe this somehow will get AC" early you get a huge advantage in time and manhours for the rest of the contest even if you end up solving it.

So, let's go back to Bubble Cup. Take a look at problem H. It had a much bigger influence on the onsite contest than on the mirror, as it was the fourth problem after C+D+E. From personal experience: with about 1.5 hours to go, roughly a third of the teams had solved it. At that point we looked at the scoreboard and saw that a couple of strong teams we had identified before the competition didn't have it either. So we stopped trying to solve the problem properly and decided to throw in a random-greedy approach, which felt like it shouldn't work, but let's try it out and maybe add some heuristics later. Of course it worked on the first try after fixing the bugs.
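For the record, the general shape of such a "throw it at the judge" attempt is usually something like the sketch below. Problem H's actual statement isn't reproduced here, so `valid()` and the shuffled permutation are purely hypothetical stand-ins for whatever structure the problem asked for:

```cpp
#include <algorithm>
#include <chrono>
#include <numeric>
#include <random>
#include <vector>

// Hypothetical feasibility check for whatever object the problem asks for.
bool valid(const std::vector<int>& order) {
    (void)order;  // problem-specific logic would go here
    return true;
}

int main() {
    int n = 100;  // placeholder input size
    std::vector<int> order(n);
    std::iota(order.begin(), order.end(), 0);
    std::mt19937 rng(std::chrono::steady_clock::now().time_since_epoch().count());

    // Keep shuffling and re-running the greedy check until something sticks
    // or the time budget runs out -- no proof involved whatsoever.
    auto deadline = std::chrono::steady_clock::now() + std::chrono::milliseconds(900);
    while (std::chrono::steady_clock::now() < deadline) {
        std::shuffle(order.begin(), order.end(), rng);
        if (valid(order)) {
            // print the answer and stop
            break;
        }
    }
}
```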

So here is the takeaway. I'm sure the intended solution to this problem is nice. But if a lot of things pass without any thought involved whatsoever, then in my opinion it is not a good task, as it turns into a lottery of who was lucky enough to submit something just to see if it gets AC. Of course, you can get better at this lottery with experience as a contestant, but I don't believe that should be an excuse for setting this sort of task. And of course, sometimes there are solutions the authors didn't imagine, but in this particular problem I got the feeling that anything more or less sensible passes.

"Entering the door through the cat flap". Imagine you are drunk. So you get home and try to enter the house. Sure, inserting the key and opening a door might be a challenge, but eventually you get in. However, instead you can try to get in through the cat flap. Sure, sometimes you will actually get in but most of the time you'll spend a lot of time before failing and realising that you're doing something wrong. Something similar can happen in a programming contest. You would try to do some different approach (generally worse asymptotically) and then spend some time to think about how to actually fit it inside the limits. Sometimes you'll succeed but that's not always the case. Often it is obvious that some solutions would ultimately fail, but this is not always the case. Sometimes you get an impression that you're along the right way but you just need a bit of optimisation to make it across the line and this in my opinion isn't a great situation.

Again, back to Bubble Cup. Let's take a look at problem B. You could take a quadratic dynamic programming solution with a linear transition and optimise it to an amortised constant transition. 100M operations should pass in 1 second, right? However, this approach needs linear memory, so it doesn't really fit. On the other hand, the previous values are used in a specific pattern, so there are a lot of things to optimise, including trading memory for time. That's exactly what we were doing during the contest, and at least one other team onsite asked after the contest how to fit this into memory. So I was really surprised to learn that the intended solution was actually O(log N). Why, then, was N set to only 100M? According to the authors, to avoid giving out a hint. Sure, I can imagine that being an issue sometimes, so fair enough. However, this left a rather visible way of approaching the task sitting right on the boundary of "this looks like it might pass but actually is unlikely to", which is exactly what I believe organisers should try to eliminate. The fact that this problem had the smallest accepted-to-attempts ratio in the mirror hints that it might have been an issue there as well. More often than not this happens when the organisers miss one particular approach. For example, at last NWERC there was a problem with an O(N log N) solution, but if you went the other way about optimising it, with segment trees, you'd get O(N log^2 N), which looked like it could pass with some optimisations, but actually did not. After the competition, speaking with the organisers, I got the impression that they would have been fine with accepting this approach, but they had simply not thought about it.
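To make the memory-for-time trade concrete: below is a minimal sketch of the checkpointing trick under a toy recurrence. Problem B's actual transition is not reproduced here, so `step()` is a hypothetical stand-in; the point is only that storing every B-th DP value and replaying from the nearest checkpoint turns O(N) memory into O(N/B), at the cost of O(B) extra work per lookup:

```cpp
#include <cstdint>
#include <vector>

// Toy stand-in for the real DP transition (purely illustrative).
static uint64_t step(uint64_t prev, int i) {
    return prev * 6364136223846793005ULL + (uint64_t)i;
}

int main() {
    const int N = 100000000;  // ~10^8 states: too many to store at 8 bytes each
    const int B = 1 << 16;    // checkpoint spacing

    // Pass 1: run the DP once, keeping only every B-th value (O(N/B) memory).
    std::vector<uint64_t> checkpoint;
    uint64_t cur = 1;  // dp[0]
    for (int i = 0; i <= N; ++i) {
        if (i % B == 0) checkpoint.push_back(cur);
        cur = step(cur, i + 1);  // cur becomes dp[i+1]
    }

    // To recover dp[q] for an arbitrary q, replay at most B steps from the
    // nearest checkpoint instead of having stored the whole array.
    auto dp_at = [&](int q) {
        uint64_t v = checkpoint[q / B];
        for (int i = q / B * B; i < q; ++i) v = step(v, i + 1);
        return v;
    };

    volatile uint64_t answer = dp_at(31337);  // example query
    (void)answer;
}
```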

So here is the takeaway. Try not to leave cat flaps open. Sometimes it's hard and you will miss some, but I believe an attempt should be made to think about alternative approaches and then either make sure you accept them, or make it reasonably obvious that you won't, so that your contestants don't get stuck in the cat flap.

So, now I guess I'm open to discussion. These are the two personal points I try to make sure do not occur whenever I am involved in organising a programming competition, but perhaps you have some more, or you think that some of the problems I've identified are not actually problems.

And in the end, I'd like to express my thanks to the Bubble Cup organisers. The whole onsite event was wonderful, and I hope this feedback can be put to positive use. In general, it saddens me when a huge amount of effort has clearly been put into the organisation, yet even a single problem can leave a lot of teams with a negative experience due to the issues mentioned above, so I believe it's important to address them.

Tags: problemsetting
