Balkan Olympiad in Informatics — Day 2

Revision en1, by Enchom, 2015-07-02 19:01:31

The second day of the Balkan Olympiad in Informatics has finished, and the competition went pretty smoothly. There was a minor bug in the tests of the second problem, but it was fixed quickly.

Final results

The tasks on the second day were, in my opinion, more interesting than those of the first day. Once again, here is a link to the problems (from both days):

Tasks

Clarkson

This problem was quite interesting and the solution was based on a suffix structure. Both a suffix array and a suffix tree were good enough to solve the problem, and building them in O(N log N), or perhaps even O(N log^2 N), was sufficient. The structure was used to compute the function:

F[i] — the length of the longest substring of the first string starting at index i that also occurs in the second string.
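For illustration, here is a minimal sketch of computing F with a suffix automaton instead of a suffix array or tree (any of the suffix structures works; the helper names below are my own). The automaton walk gives the longest match ending at each position, so reversing both strings turns that into "starting at each position":

```cpp
#include <bits/stdc++.h>
using namespace std;

// Standard suffix automaton of the second string.
struct SuffixAutomaton {
    struct State { int len = 0, link = -1; map<char, int> next; };
    vector<State> st;
    int last = 0;
    SuffixAutomaton(const string& t) {
        st.push_back(State());
        for (char c : t) extend(c);
    }
    void extend(char c) {
        int cur = st.size();
        st.push_back(State());
        st[cur].len = st[last].len + 1;
        int p = last;
        while (p != -1 && !st[p].next.count(c)) { st[p].next[c] = cur; p = st[p].link; }
        if (p == -1) st[cur].link = 0;
        else {
            int q = st[p].next[c];
            if (st[p].len + 1 == st[q].len) st[cur].link = q;
            else {
                int clone = st.size();
                State cl = st[q];
                cl.len = st[p].len + 1;
                st.push_back(cl);
                while (p != -1 && st[p].next[c] == q) { st[p].next[c] = clone; p = st[p].link; }
                st[q].link = st[cur].link = clone;
            }
        }
        last = cur;
    }
};

// Longest substring of s ending at each position that also occurs in t.
vector<int> longestEndingAt(const string& s, const string& t) {
    SuffixAutomaton sa(t);
    vector<int> res(s.size());
    int v = 0, len = 0;
    for (size_t i = 0; i < s.size(); i++) {
        while (v != 0 && !sa.st[v].next.count(s[i])) { v = sa.st[v].link; len = sa.st[v].len; }
        if (sa.st[v].next.count(s[i])) { v = sa.st[v].next[s[i]]; len++; }
        res[i] = len;
    }
    return res;
}

// F[i]: reverse both strings, so "starting at i" becomes "ending at n-1-i".
vector<int> computeF(string s, string t) {
    reverse(s.begin(), s.end());
    reverse(t.begin(), t.end());
    vector<int> m = longestEndingAt(s, t);
    reverse(m.begin(), m.end());
    return m;
}
```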

After this function was computed, a binary search on the answer combined with dynamic programming was enough to solve the problem in O(N log N), making the total complexity O(N log N), assuming the suffix structure is built fast enough.
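I won't restate the statement here, so take this only as a sketch under my reading of the task: split the first string into pieces that all occur in the second string while maximizing the length of the shortest piece. With F already computed, the feasibility check for a fixed minimum piece length L runs in O(N) using a difference array over reachable cut points, and the binary search wraps around it; if the actual objective differs, only this check changes.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Can the first string (length n) be split into pieces of length >= L,
// each occurring in the second string?  "Cut point" j means the first j
// characters are covered; from a reachable j we may jump to any i in
// [j + L, j + F[j]].  A difference array keeps this O(n).
bool feasible(const vector<int>& F, int L) {
    int n = F.size();
    vector<int> diff(n + 2, 0);
    auto addRange = [&](int j) {                 // mark cut points reachable from j
        if (F[j] >= L && j + L <= n) {
            diff[j + L]++;
            diff[min(n, j + F[j]) + 1]--;
        }
    };
    addRange(0);                                 // the empty prefix is reachable
    int cover = 0;
    for (int i = 1; i <= n; i++) {
        cover += diff[i];
        if (cover > 0) {                         // cut point i is reachable
            if (i == n) return true;
            addRange(i);
        }
    }
    return false;
}

// Binary search on the minimum piece length; returns 0 if no split exists.
int solve(const vector<int>& F) {
    int lo = 1, hi = F.size(), best = 0;
    while (lo <= hi) {
        int mid = (lo + hi) / 2;
        if (feasible(F, mid)) { best = mid; lo = mid + 1; }
        else hi = mid - 1;
    }
    return best;
}
```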

Fun fact : I actually practiced my suffix trees right before the first competition day as the last BOI had a problem that involved suffix structures and I wasn't able to solve it because I couldn't build a suffix tree/array, and I was worried it might happen again :)

Radio

This was the hardest problem of the day and one of the best in the competition. Here is a summary of my solution:

Let's imagine each tower as a segment [X-P; X+P]. Our goal is to increase the P values of the chosen subset so that every pair of chosen intervals overlaps or at least touches. It is easy to see that this means all of the chosen intervals share at least one common point, i.e., there is a point that belongs to every interval. Let's call it the "middle" point. Fixing the middle point gives an easy linear computation of the cost.
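As a concrete sketch (the exact cost model is an assumption here: raising a tower's power by d costs d), the cost of a tower for a fixed middle point m is simply how far m lies outside its interval:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Total cost of making every kept tower's interval [X - P, X + P] cover a
// fixed middle point m, assuming raising a tower's power by d costs d.
long long costAt(long long m, const vector<long long>& X, const vector<long long>& P) {
    long long total = 0;
    for (size_t i = 0; i < X.size(); i++)
        total += max(0LL, llabs(X[i] - m) - P[i]);  // extend only if the interval misses m
    return total;
}
```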

This gives the idea for a solution when N = K. However, we can't try all 10^9 positions, so we need the following observation: it is sufficient to try the interval endpoints as middle points. I will not prove it; I believe it is easy enough to see. This gives us a straightforward O(N^2) solution.

To speed this up, we can break the intervals into beginning and ending events and process them from left to right. Maintaining a few running values, one can easily bring this down to O(N log N) in total, which gives 45 points (subtasks 1, 2 and 4).
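I don't remember exactly which running values need to be kept, but for the N = K case one way to reach O(N log N) (using sorting plus prefix sums rather than an explicit event sweep, though the idea is the same, and under the same assumed cost model as above) looks like this:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Evaluate the total extension cost at every interval endpoint and take the
// minimum (N = K case): a tower whose interval ends before m pays m - (X + P),
// one whose interval starts after m pays (X - P) - m, everything else pays 0.
long long bestMiddlePoint(const vector<long long>& X, const vector<long long>& P) {
    int n = X.size();
    vector<long long> L(n), R(n), pts;
    for (int i = 0; i < n; i++) {
        L[i] = X[i] - P[i];
        R[i] = X[i] + P[i];
        pts.push_back(L[i]);
        pts.push_back(R[i]);
    }
    sort(L.begin(), L.end());
    sort(R.begin(), R.end());
    vector<long long> preR(n + 1, 0), sufL(n + 1, 0);
    for (int i = 0; i < n; i++) preR[i + 1] = preR[i] + R[i];        // prefix sums of right ends
    for (int i = n - 1; i >= 0; i--) sufL[i] = sufL[i + 1] + L[i];   // suffix sums of left ends
    long long best = LLONG_MAX;
    for (long long m : pts) {
        int cR = lower_bound(R.begin(), R.end(), m) - R.begin();     // right ends strictly below m
        int idx = upper_bound(L.begin(), L.end(), m) - L.begin();    // left ends strictly above m start here
        int cL = n - idx;
        long long cost = (long long)cR * m - preR[cR] + (sufL[idx] - (long long)cL * m);
        best = min(best, cost);
    }
    return best;
}
```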

For the subtasks with Si = 1, I am not sure what the intended solution was, but from what I understood it has to do with finding a median point, as that will always be the best choice for a middle point.

Finally, for the last subtask one again has to process endpoints as events from left to right, but this time keeping a few priority queues is necessary. I call "active" the towers that I will keep, and "unactive" those that will be sold. At all times I keep four sets/priority queues:

1) RightUnactive — all unactive towers whose beginning hasn't been processed

2) MiddleUnactive — all unactive towers whose beginning has been processed, but whose ending hasn't

3) MiddleActive — all active towers whose beginning has been processed, but whose ending hasn't

4) LeftActive — all active towers whose beginning and end have both been processed

It turns out that after processing a new endpoint, only a few changes can occur in those sets, and they can be updated easily if they are sorted in the right way. I will not go into details, as my solution was quite long. The total complexity should be O(N log N); my solution ran in 0.2s on the largest testcase.

Tiling

Finally, this was a tricky problem if you wanted to prove your solution. The crucial observation was that you had to break the grid into L*K squares of size NxN and query once in each (I'm not sure whether it mattered which cell you pick in each square; I used the center cells). Then the answers always determined a unique solution, and this was minimal with respect to the number of queries. Proving that is difficult as far as I know, but it wasn't necessary; quite a few people managed to figure out that these are the optimal queries.

Probably the harder part was actually reconstructing the tiles' layout. I used backtracking with optimizations, placing some "obvious" tiles before the backtracking, but sadly I spent too much time on problem 2 and couldn't finish my optimizations, hence 70/100.

The surprising thing was that the intended solution was a quick way to place the tiles without backtracking (as backtracking should be exponential?), yet almost all competitors managed to pass with backtracking. I do not know the quick way, so maybe someone can figure out how it works?

The medal distribution will be announced tomorrow at the official closing ceremony. Thanks for reading :)
