xiaowuc1's blog

By xiaowuc1, history, 6 years ago, In English

Hi all,

The third contest of the 2017-2018 USACO season will be running from Friday, February 23rd to Monday, February 26th. This will be the last contest before the US Open, scheduled for March 23rd to March 26th.

Please wait until the contest is over for everyone before discussing problems here.

  • Vote: I like it
  • +77
  • Vote: I do not like it

»
6 years ago, # |
  Vote: I like it +2 Vote: I do not like it

Does the difficulty level increase with each contest, or will this one have the same difficulty as the previous contests?

»
6 years ago, # |
  Vote: I like it 0 Vote: I do not like it

Hi, I read that participation is open to all. Can someone please explain the division (Gold, Silver, etc.) participation rules? Can I directly participate in Gold or Silver, or take part in multiple divisions in the same contest?

  • »
    »
    6 years ago, # ^ |
      Vote: I like it +19 Vote: I do not like it

You start off in Bronze. You get promoted to the next division during the contest if you score 100%, and you get promoted after the contest if you score at least 75% (the threshold can change, though; last time it was 60% to go from Gold to Platinum).

So if you score 100% you can take the Silver contest, if you score 100% again you can take the Gold one, and so on...

»
6 years ago, # |
  Vote: I like it -18 Vote: I do not like it

Predictions for Silver Cutoff? I got 700 exactly.

»
6 years ago, # |
  Vote: I like it +2 Vote: I do not like it

The contest should be over. Can anyone confirm?

»
6 years ago, # |
Rev. 2   Vote: I like it +58 Vote: I do not like it

Contest is over for everyone.

For those who competed in Platinum:

Problem 1: Slingshot
Problem 2: New Barns
Problem 3: Cow Gymnasts
  • »
    »
    6 years ago, # ^ |
    Rev. 5   Vote: I like it +21 Vote: I do not like it

Very nice solution to 2. I was trying to come up with something like this (sparse table + LCA) but I did not notice all the properties of a diameter. I ended up coding an offline centroid decomposition...

Every node and every query appears at a distinct time. Each query is answered using only the values that appeared before it.

The idea is that for each query you traverse the centroid tree (going up through centroid parents), calculating the distance from the query node to the centroid plus the distance from the centroid to the node at maximum distance in a different subtree.

In each centroid you store a vector of the two best pairs (max distance, and the direct child of the centroid through which that distance is realized), where the second pair must go through a different direct child than the best one, for each node in the centroid's component.

First, you record a distance and a direct child of the centroid for each node within the centroid's component (you start a DFS from each child of the centroid, calculate distances, and remember which child the DFS started from). You record these values under different times, since every node has a distinct arrival time. This gives you the best pair for each particular time, and you sort these values according to time.

Then you compute the best and second-best pair for each prefix (the best values seen so far). We need this because we will be looking for a maximum among the nodes that arrived earlier than our query: if the value at time j is worse than the value at time i and j > i, we should use the value from time i instead.

You answer each query by traversing the whole centroid tree and, for each centroid, getting the best value for the largest time less than the query time (via upper_bound). You just need to check whether the direct child of the centroid realizing the best value is the one whose subtree contains the query node; if so, use the second-best value, which goes through a different direct child. Also, at the beginning you should check that the centroid itself appeared before the query; if not, skip it and go to the parent centroid.

Quite an overkill, though, compared to the nice online solution...
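One ingredient of the approach above can be sketched in isolation: keeping, for every prefix of time-sorted records, the best two (distance, direct child) pairs with distinct children, so a query can take the best pair that avoids a forbidden child. This is illustrative code with made-up names, not the commenter's solution:

```python
# Sketch (hypothetical names): given (distance, direct_child) records
# already sorted by arrival time, keep for every prefix the two largest
# distances realized through distinct direct children of the centroid.
NEG = (-1, None)  # sentinel pair: (distance, child)

def merge(best, cand):
    """Fold one candidate pair into the (best, second-best) pair of pairs."""
    (d1, c1), (d2, c2) = best
    d, c = cand
    if c == c1:                 # same child as current best: only slot 1
        return ((max(d1, d), c1), (d2, c2))
    if d > d1:                  # new best; old best becomes second-best
        return ((d, c), (d1, c1))
    if d > d2:                  # improves the second-best slot
        return ((d1, c1), (d, c))
    return best

def build_prefix(records):
    """prefix[i] summarizes records[0..i] (the values seen so far)."""
    prefix, cur = [], (NEG, NEG)
    for pair in records:
        cur = merge(cur, pair)
        prefix.append(cur)
    return prefix

def best_excluding(pair_of_pairs, forbidden_child):
    """Best distance realized through a child other than forbidden_child."""
    (d1, c1), (d2, _) = pair_of_pairs
    return d1 if c1 != forbidden_child else d2
```

A query at time T would binary-search for the last record before T and call `best_excluding` on that prefix entry.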

    • »
      »
      »
      6 years ago, # ^ |
      Rev. 3   Vote: I like it +5 Vote: I do not like it

The intended solution is O(N sqrt N), but I didn't notice the better way; this is also better than the intended one...

It was supposed to be batch processing: answer the shortest-distance queries for the up-to-sqrt(N) nodes in the current block using LCA, and every time the block reaches sqrt(N) nodes, perform a tree DP to update the distances for each node. But it turns out there are much better solutions.

  • »
    »
    6 years ago, # ^ |
      Vote: I like it +8 Vote: I do not like it

I have a solution to Slingshot that is arguably simpler than yours.

First of all, let's notice that the slingshot you use for a query, if any, shoots you in the direction you want to travel. So we can separately consider left-to-right slingshots and queries, and then right-to-left slingshots and queries.

I will explain how to solve the left-to-right case. For the right-to-left one, just swap the x and y values of all queries and slingshots that were initially right-to-left.

    Consider some query (X, Y). Regarding the positioning on the number line, we can distinguish 4 cases for the optimal slingshot (A, B, T), if any:

1. X ≤ A ≤ B ≤ Y. In this case the cost is A - X + T + Y - B = (Y - X) + (A + T - B). Since Y - X is constant for a given query, our task is to minimize A + T - B among all slingshots satisfying this condition.

2. X ≤ A ≤ Y ≤ B. In this case the cost is A - X + T + B - Y = (-Y - X) + (A + T + B). Since -Y - X is constant for a given query, our task is to minimize A + T + B among all slingshots satisfying this condition.

3. A ≤ X ≤ B ≤ Y. In this case the cost is X - A + T + Y - B = (Y + X) + (-A - B + T). Since Y + X is constant for a given query, our task is to minimize -A - B + T among all slingshots satisfying this condition.

4. A ≤ X ≤ Y ≤ B. In this case the cost is X - A + T + B - Y = (-Y + X) + (-A + B + T). Since -Y + X is constant for a given query, our task is to minimize -A + B + T among all slingshots satisfying this condition.

It turns out that we can solve all four cases independently offline with the help of a sweep line and a segment tree for minimum.

In cases 1 and 2, we sort the queries and slingshots in order of decreasing X and A. Then we loop over the queries while keeping a pointer to the last slingshot inserted into the segment tree. Whenever the X of the current query becomes less than or equal to the A of the next slingshot to be inserted, we update position B with A + T - B for case 1 and A + T + B for case 2. After all insertions triggered by the current query are done, we try to update the query's answer with (Y - X) + minquery(X, Y) in case 1 and with (-Y - X) + minquery(Y, ∞) in case 2.

Cases 3 and 4 can be handled the same way; we just sort the queries and slingshots in order of increasing X and A.

The whole code isn't that long if you implement the segment tree structure once and reuse it for all four cases. It was about 200 lines for me, including some long comments before each case. Unfortunately, I delete all my code after a contest, so I have to wait for the final results before I can access my solution and post it here.
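A minimal offline sketch of case 1 (the other three cases are symmetric, differing only in sign and sweep direction). The segment tree is indexed by compressed B coordinates; the names and structure here are illustrative, not P_Nyagolov's actual code:

```python
import bisect

INF = float('inf')

class MinSegTree:
    """Point update / range minimum over n fixed positions."""
    def __init__(self, n):
        self.n = n
        self.t = [INF] * (2 * n)

    def update(self, i, val):
        """t[i] = min(t[i], val), propagated to ancestors."""
        i += self.n
        while i and self.t[i] > val:
            self.t[i] = val
            i //= 2

    def query(self, lo, hi):
        """Minimum over positions [lo, hi)."""
        res = INF
        lo += self.n
        hi += self.n
        while lo < hi:
            if lo & 1:
                res = min(res, self.t[lo])
                lo += 1
            if hi & 1:
                hi -= 1
                res = min(res, self.t[hi])
            lo //= 2
            hi //= 2
        return res

def case1(slingshots, queries):
    """For each query (X, Y): min of (Y - X) + A + T - B over slingshots
    (A, B, T) with X <= A <= B <= Y, or INF if none applies.
    Sweep in decreasing X, inserting slingshots with A >= X."""
    bs = sorted({b for _, b, _ in slingshots})      # compressed B coords
    tree = MinSegTree(len(bs))
    slingshots = sorted(slingshots)                 # by A ascending
    ptr = len(slingshots) - 1
    ans = [INF] * len(queries)
    for i in sorted(range(len(queries)), key=lambda i: -queries[i][0]):
        x, y = queries[i]
        while ptr >= 0 and slingshots[ptr][0] >= x:
            a, b, t = slingshots[ptr]
            tree.update(bisect.bisect_left(bs, b), a + t - b)
            ptr -= 1
        hi = bisect.bisect_right(bs, y)             # keep only B <= Y
        ans[i] = (y - x) + tree.query(0, hi)
    return ans
```

The final answer for a query would be the minimum over all four cases and the walk-only cost |Y - X|.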

    • »
      »
      »
      6 years ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

Can I ask why you delete your code after a contest?

      • »
        »
        »
        »
        6 years ago, # ^ |
          Vote: I like it +10 Vote: I do not like it

Because for contests I have a Work folder in which I create all the A/B/C/... folders, and when the next contest comes I empty the Work folder and create new ones :D Just a bad habit, I don't know how I picked it up :D

    • »
      »
      »
      6 years ago, # ^ |
      Rev. 2   Vote: I like it 0 Vote: I do not like it

      Here is my AC code for Slingshot. It uses the same solution described by P_Nyagolov.

  • »
    »
    6 years ago, # ^ |
    Rev. 4   Vote: I like it +5 Vote: I do not like it

    I heard from bovinophile that problem 1 can also be done with a k-d tree.

PS: I used a 2D segment tree (which is O((N + M) log² N)) to answer the queries online for #1, but it only got 3/10. Turns out the brute-force O(NM) solution got 5/10. :(

    • »
      »
      »
      6 years ago, # ^ |
        Vote: I like it +15 Vote: I do not like it

Well, this is expected if you use a dynamic 2D segment tree instead of the compressed offline one, because its constant factor turns out to be huge.

      • »
        »
        »
        »
        6 years ago, # ^ |
          Vote: I like it +8 Vote: I do not like it

        I agree, but I feel like this should get at least as much credit as brute force. :P

        • »
          »
          »
          »
          »
          6 years ago, # ^ |
            Vote: I like it +16 Vote: I do not like it

At least you didn't spend 3 hours constant-optimizing an N log² N solution for p2 and then thinking about how to remove a log N factor, to no avail.

Guess that's what happens when you make everything a function.

          • »
            »
            »
            »
            »
            »
            6 years ago, # ^ |
              Vote: I like it 0 Vote: I do not like it

            Hey that's basically what I did for p1 :P

            • »
              »
              »
              »
              »
              »
              »
              6 years ago, # ^ |
                Vote: I like it +18 Vote: I do not like it

At least you weren't 5 seconds away from a correct solution to 2. I had two cases, submitted the correct solution without a print statement, got WA on the sample, added the print statement, and uploaded just as time ran out...

              • »
                »
                »
                »
                »
                »
                »
                »
                6 years ago, # ^ |
                  Vote: I like it +16 Vote: I do not like it

                And the new code gets 9 cases out of 10. Sad life.

  • »
    »
    6 years ago, # ^ |
      Vote: I like it +5 Vote: I do not like it

    This is my code for Problem #2 if anyone needs it; it's pretty short and simple. My solution was inspired by this problem that I encountered before. Thanks to Aviously for giving me credit for the solution.

    Code (C++11)
    • »
      »
      »
      6 years ago, # ^ |
      Rev. 2   Vote: I like it 0 Vote: I do not like it

I also remembered that problem, but I did not notice hint 2 (in the post above) and wrote that whole centroid solution. I am a little annoyed, since it would have taken me 5 minutes to adapt that solution, instead of 2 hours to implement and 1 hour to debug the new approach...

      At least got some centroid practice :)

  • »
    »
    6 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    When adding a node and the diameter changes, why must one of the endpoints be one of the old diameter's endpoints?

    • »
      »
      »
      6 years ago, # ^ |
        Vote: I like it +15 Vote: I do not like it

You can use the well-known fact (or prove it on paper) that if you take any node x, go to the farthest node y from it, and then go to the farthest node z from y, then y-z is a diameter.

Now you have a diameter d1-d2 and you add x. Assume that d1 was the farthest node from parent[x] (by the property above). Then d1 is also the farthest node from x. For d1, the farthest node so far was d2, and now it could only change to x. If that happens, we have a new diameter.
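The well-known fact above can be checked with two BFS passes; a small illustrative sketch (not contest code):

```python
from collections import deque

# The stated fact: from any node x, the farthest node y is an endpoint of
# some diameter, and the farthest node z from y gives a diameter y-z.
def bfs_dist(adj, src):
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def diameter_endpoints(adj, x):
    dy = bfs_dist(adj, x)
    y = max(dy, key=dy.get)   # farthest node from x
    dz = bfs_dist(adj, y)
    z = max(dz, key=dz.get)   # farthest node from y
    return y, z, dz[z]

# Example tree: path 0-1-2-3 with node 4 hanging off node 1.
adj = {0: [1], 1: [0, 2, 4], 2: [1, 3], 3: [2], 4: [1]}
y, z, d = diameter_endpoints(adj, 4)   # d == 3 (e.g. the path 0-1-2-3)
```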

    • »
      »
      »
      6 years ago, # ^ |
      Rev. 3   Vote: I like it +5 Vote: I do not like it

Notice that when you add a leaf u as a child of a parent p, the distances between all other pairs of vertices stay the same.

This means that the new diameter must either be the previous diameter v-w or be u-x for some vertex x. Suppose it is u-x. Then x must be the farthest vertex from u. But the farthest vertex from u is also the farthest vertex from p, which must be v or w. So the diameter after adding u is v-w, w-u, or u-v. Thus, when processing an add query, you can just check these three cases (using LCA or whatever) in O(log n) to determine which one it actually is.
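This add-a-leaf bookkeeping can be sketched as follows. The class and method names are made up, and the sketch handles a single tree (New Barns actually has several components, so one would keep one such structure per component). Distances come from binary-lifting LCA, which works online since a new leaf's ancestor table depends only on already-added nodes:

```python
# Sketch: keep the current diameter endpoints; on adding a leaf v, the new
# diameter is the best of (a, b), (a, v), (b, v), where (a, b) is the old one.
LOG = 17  # enough for up to 2^17 nodes

class IncrementalTree:
    def __init__(self):
        self.depth = []
        self.up = []           # up[v][k] = 2^k-th ancestor of v
        self.diam = (0, 0, 0)  # (length, endpoint, endpoint)

    def add_node(self, parent=None):
        """Add a node; parent=None creates the root (call it first, once)."""
        v = len(self.depth)
        if parent is None:
            self.depth.append(0)
            self.up.append([v] * LOG)   # the root's ancestors are itself
            return v
        self.depth.append(self.depth[parent] + 1)
        anc = [parent] * LOG
        for k in range(1, LOG):
            anc[k] = self.up[anc[k - 1]][k - 1]
        self.up.append(anc)
        d, a, b = self.diam
        # The new diameter keeps at least one old endpoint.
        self.diam = max((d, a, b),
                        (self.dist(a, v), a, v),
                        (self.dist(b, v), b, v))
        return v

    def lca(self, a, b):
        if self.depth[a] < self.depth[b]:
            a, b = b, a
        diff = self.depth[a] - self.depth[b]
        for k in range(LOG):
            if diff >> k & 1:
                a = self.up[a][k]
        if a == b:
            return a
        for k in reversed(range(LOG)):
            if self.up[a][k] != self.up[b][k]:
                a, b = self.up[a][k], self.up[b][k]
        return self.up[a][0]

    def dist(self, a, b):
        return self.depth[a] + self.depth[b] - 2 * self.depth[self.lca(a, b)]
```

A "farthest barn" query for node q is then max(dist(q, a), dist(q, b)) over the stored endpoints.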

»
6 years ago, # |
  Vote: I like it +2 Vote: I do not like it

Silver 3? I'm pretty sure it had something to do with medians: sort of like a 1D minimum-sum-of-distances problem with a twist.
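The "1D minimum sum of distances" subproblem mentioned here is the classic fact that the median minimizes the total distance to a set of points on a line. A tiny illustration (not the actual Silver 3 solution):

```python
# The median of a point set minimizes the sum of absolute distances to it.
def total_distance(points, m):
    return sum(abs(p - m) for p in points)

points = [1, 2, 9, 10, 14]
median = sorted(points)[len(points) // 2]  # 9

# The median is at least as good as any other integer position.
best = min(total_distance(points, m)
           for m in range(min(points), max(points) + 1))
assert total_distance(points, median) == best
```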

»
6 years ago, # |
Rev. 3   Vote: I like it +8 Vote: I do not like it

If anyone needs code for the Platinum problems, here are my accepted solutions. I think they are the same as the ones in Aviously's comment, but I'm not sure, as I haven't read his whole comment.

Problem 1: Slingshot
Problem 2: New Barns
Problem 3: Cow Gymnasts
  • »
    »
    6 years ago, # ^ |
    Rev. 2   Vote: I like it +10 Vote: I do not like it

In problem 2, your solution is offline, whilst Aviously's solution is online: you don't need to read all the input first to build the LCA structure.

Edit: I like that feeling when I barely manage to solve one task during the contest and later discover that everybody here got the max score :) Congrats!

»
6 years ago, # |
Rev. 3   Vote: I like it 0 Vote: I do not like it

I have a problem. The code I submitted for Platinum 1 got 1/10 on the competition site. But if I download the test cases and run them locally one by one, all of them pass. When I run them all with my grader, the same problem occurs as on the competition site (the results are large negative numbers). Can someone please explain why this happens and how it can be avoided?

Code
  • »
    »
    6 years ago, # ^ |
      Vote: I like it +10 Vote: I do not like it

Disparities between your computer and the grader, perhaps. Try compiling your code with warnings enabled, checking for uninitialized variables, etc.

»
6 years ago, # |
  Vote: I like it +21 Vote: I do not like it

I'm just curious: does anyone know how they determine the order in which participants with the same score are listed: by time spent, randomly, or something else?

  • »
    »
    6 years ago, # ^ |
    Rev. 2   Vote: I like it +5 Vote: I do not like it

It's random; I believe I actually emailed Brian Dean once, out of curiosity.

»
6 years ago, # |
  Vote: I like it +25 Vote: I do not like it

The test data for Platinum problem 1 is too weak... If you sort all the slingshots and, for each query, use only the nearest 3400 slingshots to update the answer, you can pass all the tests.

This strategy got me a full score.

code
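The described heuristic can be sketched as follows (illustrative code, not the commenter's). It exploits the weak test data and is not correct in general:

```python
import bisect

# Sort slingshots by start point A once; for each query try only the k
# slingshots whose A is nearest to X, plus the walk-only option |Y - X|.
def nearest_k_answer(sorted_slingshots, x, y, k):
    best = abs(y - x)                  # walking without any slingshot
    i = bisect.bisect_left(sorted_slingshots, (x,))
    lo, hi = max(0, i - k), min(len(sorted_slingshots), i + k)
    for a, b, t in sorted_slingshots[lo:hi]:
        best = min(best, abs(x - a) + t + abs(y - b))
    return best

slings = sorted([(0, 10, 1), (5, 5, 0)])   # (A, B, T), sorted by A
```

With k at least the number of slingshots this degenerates into the exact brute force; the point of the comment is that a much smaller k already passed all the tests.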
  • »
    »
    6 years ago, # ^ |
      Vote: I like it +10 Vote: I do not like it

Using only the nearest 1700 slingshots in total is also adequate for the answer :)

    code