ko_osaga's blog

By ko_osaga, history, 8 days ago, In English

Hello! I'm happy to announce XXI Open Cup. Grand Prix of Suwon. Gyeonggi Science High School is located in Suwon, Korea.

This contest was used as the Day 9 contest of the Petrozavodsk Winter 2021 Camp.

List of relevant previous contests:

Enjoy!

 
 
 
 

»
5 days ago, # |

Thank you for participating. Credits are:

The setter is anonymous if not listed.

There are no editorials, sorry! Also, please expect the Gym migration to take about 1-2 weeks.

»
5 days ago, # |

How to solve problem J?

  • »
    »
    5 days ago, # ^ |

    Note that $$$p_3, \ldots, p_7$$$ should always be next to each other in the sorted array. Try every candidate for $$$p_2$$$ in increasing order and maintain the set of valid positions for $$$p_3, \ldots, p_7$$$. Each time, pick the highest valid position and then binary search for $$$p_1$$$.

    • »
      »
      »
      3 days ago, # ^ |

      My solution uses a similar idea, but with an additional observation which makes it easier to implement:

      Suppose that we are trying to use $$$a_i$$$ as $$$p_2$$$:

      • If $$$a_i + a_{i+1} < a_{i+2} + a_{i+3} + a_{i+4} +a_{i+5}$$$, it's optimal to use $$$a_i, \ldots, a_{i+4}$$$ as $$$p_3, \ldots, p_7$$$ and find best $$$p_1$$$ by binary search
      • Otherwise, we can brute-force over all possible positions for $$$p_3, \ldots, p_7$$$ (and find the best $$$p_1$$$ by binary search)

      Notice that the number of positions $$$i$$$ that fall into the second case is $$$O(\log A)$$$, so this solution runs in $$$O(N \log N \log A)$$$, which is sufficiently fast.

  • »
    »
    5 days ago, # ^ |

    My solution: sort the values. We always pick consecutive $$$P_7, P_6, P_5, P_4, P_3$$$, so first decide $$$i$$$ and let $$$P_7 = A[i], P_6 = A[i+1], \dots, P_3 = A[i+4]$$$.

    Then, we want to pick the maximum $$$j$$$ such that $$$A[j] < P_7 + P_6 + P_5 + P_4 - P_3$$$, and such that some $$$P_1$$$ can still be selected, which holds iff $$$A[j+1] - A[j] < P_3$$$.

    The first condition gives an upper bound on $$$j$$$, and we can find the last position satisfying the second condition before it with a range minimum segment tree.
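The "last position satisfying the second condition before an upper bound" step can be done by walking down the min segment tree. A minimal sketch of that walk (my reading, not the commenter's code), with a hypothetical gap array `g[j] = A[j+1] - A[j]`:

```cpp
#include <algorithm>
#include <climits>
#include <vector>

// Min segment tree over gaps g; last_below(bound, x) returns the rightmost
// j <= bound with g[j] < x (or -1), descending only into promising nodes.
struct MinTree {
    int n;
    std::vector<long long> t;
    explicit MinTree(const std::vector<long long>& g) {
        n = 1;
        while (n < (int)g.size()) n *= 2;
        t.assign(2 * n, LLONG_MAX);
        for (int i = 0; i < (int)g.size(); i++) t[n + i] = g[i];
        for (int i = n - 1; i >= 1; i--) t[i] = std::min(t[2 * i], t[2 * i + 1]);
    }
    int go(int node, int nl, int nr, int bound, long long x) const {
        if (nl > bound || t[node] >= x) return -1;  // prune: nothing valid here
        if (nl == nr) return nl;
        int mid = (nl + nr) / 2;
        int res = go(2 * node + 1, mid + 1, nr, bound, x);  // prefer right half
        return res != -1 ? res : go(2 * node, nl, mid, bound, x);
    }
    int last_below(int bound, long long x) const { return go(1, 0, n - 1, bound, x); }
};
```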

  • »
    »
    4 days ago, # ^ |

    If you observe that 3,4,5,6,7 should be consecutive, then it's easy to come up with an $$$O(n^3)$$$ solution. Then you can simply take the top 500 participants and run the $$$O(n^3)$$$ solution. The testers failed to prove or disprove this, but I believe it's correct.

»
5 days ago, # |

Someone please explain F :(

  • »
    »
    4 days ago, # ^ |

    F

  • »
    »
    4 days ago, # ^ |

    Brief explanation: consider cycles as a vector space under XOR. A sufficient set of generators is the cycles formed by back edges in a DFS tree, because a cycle can be taken independently of anything else: traverse to the cycle, go around it, and traverse back along the same path.

    Define $$$D(u,v)$$$ as the maximum XOR path from $$$u$$$ to $$$v$$$; it consists of some path from $$$u$$$ to $$$v$$$ plus some set of cycles. We can take the path from the DFS tree. Define $$$R(u)$$$ as the XOR along the path from the root to $$$u$$$; then the tree path from $$$u$$$ to $$$v$$$ equals $$$R(u) \oplus R(v)$$$.

    Now we maximize $$$R(u) \oplus R(v)$$$ by XORing it with some cycles. Construct a basis for the cycle space and reduce it to reduced row echelon form. Then go greedily from the highest bit: if our answer doesn't have that bit and the basis has a vector with that MSB, XOR the answer with that vector. The only bits we actually care about are the MSBs of the basis vectors, and XORing the answer with a basis vector doesn't change any other bit we care about (because the basis is in reduced row echelon form). So the answer depends only on the basis and $$$R(u) \oplus R(v)$$$.

    Going back to the original queries: for every bit that is the MSB of some basis vector, compute the parity of the number of pairs $$$l \le u < v \le r$$$ with that bit set in $$$R(u) \oplus R(v)$$$; if the parity is odd, add that basis vector to the query answer. Finally, add $$$\bigoplus_{l \le u \le r} R(u)$$$ to the answer when $$$r-l+1$$$ is even.
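The basis manipulation described here is the standard XOR linear basis; a minimal sketch (not the commenter's code) that keeps the basis in reduced row echelon form so the greedy maximization is a single order-independent pass:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// XOR basis in reduced row echelon form: each basis vector's MSB appears in
// no other basis vector, so greedy maximization needs only one pass.
struct XorBasis {
    std::vector<uint64_t> b;

    void insert(uint64_t x) {
        for (uint64_t v : b) x = std::min(x, x ^ v);   // reduce by existing MSBs
        if (!x) return;                                // linearly dependent
        for (uint64_t& v : b) v = std::min(v, v ^ x);  // clear x's MSB elsewhere
        b.push_back(x);
    }

    // maximum of start XOR (any subset of inserted values)
    uint64_t max_with(uint64_t start) const {
        for (uint64_t v : b) start = std::max(start, start ^ v);
        return start;
    }
};
```

`x = std::min(x, x ^ v)` works because `v`'s MSB is its highest set bit: XORing clears that bit of `x` whenever it is set, which always decreases the value.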

»
4 days ago, # |

Is there a nice clean solution to A? My solution was a 300-line monstrosity, which I couldn't debug in time :P

  • »
    »
    4 days ago, # ^ |

    Let's set vertex 1 as the root. Each edge's contribution to the answer is either (the sum of $$$A_i$$$ in its subtree) or (the sum of all $$$A_i$$$) minus (the sum of $$$A_i$$$ in its subtree); the latter case happens only when the edge is on the path from the query vertex to the root. We can maintain the sum of $$$A_i$$$ in each edge's subtree with some path updates and subtree updates, so DFS ordering + HLD is enough.
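For the subtree-sum half of this, a hedged sketch (illustrative, not the commenter's code; assumes a 0-indexed tree): an Euler tour makes every subtree a contiguous range, and a Fenwick tree then gives point update / subtree sum in $$$O(\log N)$$$:

```cpp
#include <vector>

// Fenwick tree: point add, prefix sum.
struct Fenwick {
    std::vector<long long> t;
    explicit Fenwick(int n) : t(n + 1, 0) {}
    void add(int i, long long x) {                 // a[i] += x
        for (++i; i < (int)t.size(); i += i & -i) t[i] += x;
    }
    long long sum(int i) const {                   // sum of a[0..i)
        long long s = 0;
        for (; i > 0; i -= i & -i) s += t[i];
        return s;
    }
};

// Euler tour: subtree of v becomes the contiguous range [tin[v], tout[v]).
struct EulerTour {
    std::vector<std::vector<int>> adj;
    std::vector<int> tin, tout;
    int timer = 0;
    explicit EulerTour(int n) : adj(n), tin(n), tout(n) {}
    void add_edge(int u, int v) { adj[u].push_back(v); adj[v].push_back(u); }
    void dfs(int u, int p) {                       // call as dfs(root, -1)
        tin[u] = timer++;
        for (int v : adj[u]) if (v != p) dfs(v, u);
        tout[u] = timer;
    }
};
```

The subtree sum of `v` is then `fw.sum(tout[v]) - fw.sum(tin[v])`; the path-to-root case is what the HLD layer on top handles.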

  • »
    »
    4 days ago, # ^ |

    It seems that A is a standard task with top trees. A top tree is by no means a "nice clean" solution, but if you have good book code, then things are different.

    Nobody involved with the preparation was aware of that fact. It sounds like high time to learn new technology :) The complexity is $$$O(Q \log N)$$$ in that case.

»
4 days ago, # |

How do you do L and K? For L I thought there's a particularization of some k-th shortest path method (one can easily construct an almost-line graph); as for K, I could find this, which is precisely the same problem, and they do provide a linear-time solution, but I guess the intended solution should've been easier.

  • »
    »
    4 days ago, # ^ |

    The model solution of L simply implements this paper. I failed to find any ad-hoc solutions exploiting the structure of the graph, and hoped none existed, since that was the point. But there are ad-hoc solutions as well, and I think they are interesting.

    You can see a good implementation of Epp98 here, and it is surely a fun read. (I think sufficiently strong competitors can come up with this from scratch.) I can explain my implementation in detail.

    • »
      »
      »
      4 days ago, # ^ |

      Do you have any pointers about those ad-hoc solutions?

      I came up with the following one: build a tree of subproblems. Each subproblem is "cover a segment [L, R] such that its bounds must be included". To reduce a subproblem to smaller ones, we iterate over how the middle is covered (there are six cases: X|X, XO|X, X|OX, XOO|X, XO|OX, X|OOX) and recurse into two smaller segments. This way we get $$$O(n \log n)$$$ segments, but with a pretty big constant.

      Then for each segment we maintain two things: a list of shortest solutions for this segment that we have "graduated", and a priority queue of pending ones. We maintain the invariant that for every subsegment solution used to produce an output, we have graduated at least one solution after it (if there is any), and that for each vertex the priority queue contains the not-yet-graduated combinations of the two children that cover at least (the last child solution used to produce an output + 1) graduated solutions for each child. This ensures that the next solution to graduate from the top-most segment is the next in sorted order.

      Every time we output a solution, we trace it back through the tree; sometimes we graduate one more solution from the priority queue to the list, and sometimes we combine more solutions from the two children and push them into the priority queue.

      I don't have a proof of the complexity, but it feels like this could be $$$O((n+k)\log n)$$$ memory and $$$O((n+k)\log^2 n)$$$ time. However, even on 250K ones it takes 20 seconds and consumes 3 GB of memory :(
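The "combine solutions of the two children lazily" step is the classic k-smallest-combinations pattern. A toy sketch (illustrative only, not the solution above): emit the k smallest pairwise sums of two sorted lists, expanding a frontier through a priority queue so only O(k) combinations are ever materialized:

```cpp
#include <functional>
#include <queue>
#include <tuple>
#include <vector>

// k smallest values of A[i] + B[j] for sorted A, B: start from (0, 0) and
// expand (i, j) to (i+1, j) and (i, j+1), deduplicating visited states.
std::vector<long long> k_smallest_sums(const std::vector<long long>& A,
                                       const std::vector<long long>& B, int k) {
    using State = std::tuple<long long, int, int>;  // (sum, i, j)
    std::priority_queue<State, std::vector<State>, std::greater<State>> pq;
    std::vector<std::vector<char>> seen(A.size(), std::vector<char>(B.size(), 0));
    pq.emplace(A[0] + B[0], 0, 0);
    seen[0][0] = 1;
    std::vector<long long> out;
    while ((int)out.size() < k && !pq.empty()) {
        auto [s, i, j] = pq.top();
        pq.pop();
        out.push_back(s);
        if (i + 1 < (int)A.size() && !seen[i + 1][j]) {
            seen[i + 1][j] = 1;
            pq.emplace(A[i + 1] + B[j], i + 1, j);
        }
        if (j + 1 < (int)B.size() && !seen[i][j + 1]) {
            seen[i][j + 1] = 1;
            pq.emplace(A[i] + B[j + 1], i, j + 1);
        }
    }
    return out;
}
```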

      • »
        »
        »
        »
        4 days ago, # ^ |

        That seems close to what we did in upsolving. Try speeding up like this:

        When dividing segment $$$(l, r)$$$, don't divide it near $$$m = (l+r)/2$$$ but near m = (r & ~((1 << (31 - __builtin_clz(l ^ r))) - 1)) - 1, i.e. near the place where the largest bit changes. This way segments get reused much more. As far as I remember, that was the last optimization we did, and it took us from 4M nodes to 1M nodes.
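My reading of that split point as a sketch (assuming 32-bit indices, GCC builtins, and that the intended builtin is `__builtin_clz`, which locates the highest bit where $$$l$$$ and $$$r$$$ differ):

```cpp
// Split (l, r) just below the highest differing bit of l and r, so the
// resulting subsegments are aligned to powers of two and get reused often.
// Assumes 0 <= l < r.
int split_point(int l, int r) {
    int k = 31 - __builtin_clz(l ^ r);  // highest bit where l and r differ
    return (r & ~((1 << k) - 1)) - 1;   // last index of the left part
}
```

For example, splitting $$$(5, 9)$$$ gives $$$m = 7$$$, i.e. parts $$$[5,7]$$$ and $$$[8,9]$$$ with the boundary aligned at 8.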

        • »
          »
          »
          »
          »
          4 days ago, # ^ |

          Thanks a lot, that's an awesome optimization! It reduced the number of nodes from 4M to 1M for us as well.

          Unfortunately, we're still (barely) over ML, but now it looks possible to get there with some additional small optimizations.

      • »
        »
        »
        »
        4 days ago, # ^ |

        ksun48 told me a solution along that line of thinking.

  • »
    »
    4 days ago, # ^ |

    For K, there is a 20-line solution submitted by Ptz teams, but I doubt the author was aware of it.

»
4 days ago, # |

How to solve D and H?

  • »
    »
    4 days ago, # ^ |

    D: If we cannot cross over between the two sides, then the problem is easy. You can consider clauses for each wire and run 2-SAT (which is actually bipartite coloring).

    Now the key idea is to consider clauses for each endpoint of a wire, not for the wire itself. If some wire is fully contained in another wire (both endpoints of its interval are contained in the other interval), then its two endpoints should lie on the same side. If some wire crosses another wire (exactly one endpoint of its interval is contained in the other interval), then we can find two pairs of endpoints that should not lie on the same side.

    You can see that these two cases are sufficient, and we can run the identical 2-SAT as in the easy case. The time complexity is $$$O(N^2)$$$, and you can optimize it to $$$O(N \log N)$$$ with some typical DS tricks.
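For the easy version mentioned first (no crossing between the two sides), the "2-SAT which is actually bipartite coloring" can be sketched directly. An illustrative $$$O(N^2)$$$ sketch (not the setter's code; wires taken as intervals $$$[l, r]$$$ with distinct endpoints):

```cpp
#include <queue>
#include <utility>
#include <vector>

// Two wires must go to different sides iff their intervals cross, i.e.
// exactly one endpoint of one lies strictly inside the other. Build that
// conflict graph and try to 2-color it with BFS.
// Returns true and fills `side` (0/1 per wire) iff a coloring exists.
bool assign_sides(const std::vector<std::pair<int, int>>& w, std::vector<int>& side) {
    int n = w.size();
    std::vector<std::vector<int>> g(n);
    auto crosses = [&](int a, int b) {
        auto [l1, r1] = w[a];
        auto [l2, r2] = w[b];
        bool in1 = l1 < l2 && l2 < r1, in2 = l1 < r2 && r2 < r1;
        return in1 != in2;  // exactly one endpoint inside
    };
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)
            if (crosses(i, j)) { g[i].push_back(j); g[j].push_back(i); }
    side.assign(n, -1);
    for (int s = 0; s < n; s++) {
        if (side[s] != -1) continue;
        side[s] = 0;
        std::queue<int> q;
        q.push(s);
        while (!q.empty()) {
            int u = q.front(); q.pop();
            for (int v : g[u]) {
                if (side[v] == -1) { side[v] = side[u] ^ 1; q.push(v); }
                else if (side[v] == side[u]) return false;  // odd cycle
            }
        }
    }
    return true;
}
```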

  • »
    »
    4 days ago, # ^ |

    If you have a planarity test that outputs an embedding, then it's a no-brainer in $$$O(n)$$$.

    • »
      »
      »
      4 days ago, # ^ |

      The planarity test is why we asked for a certificate, but hmm, maybe.

      • »
        »
        »
        »
        4 days ago, # ^ |

        We got AC in upsolving using the planarity test from our library, and there were no problems with that; it's really easy. Too bad somehow I didn't realize that during the contest :<. The planarity test tells you whether each edge outgoing from each $$$i$$$ enters it from above or from below. If we have an edge from $$$i$$$ to $$$j$$$ such that it enters both of these vertices from above and $$$i<j$$$, then you go $$$j-i$$$ steps up from $$$i$$$, then $$$j-i$$$ steps right, then $$$j-i$$$ steps down. If it enters $$$i$$$ from above and $$$j$$$ from below, then from $$$i$$$ you go $$$n+i$$$ steps up, $$$2i+j$$$ steps left, $$$2n+i+j$$$ steps down, $$$2j+i$$$ steps right, and $$$n+j$$$ steps up.

        • »
          »
          »
          »
          »
          4 days ago, # ^ |

          Yeah, at first sight that information looked insufficient, but I now agree it is easy. The actual contest was in Dec 2020, so I forgot some properties of the problem :(

          Btw, mind sharing your planarity test code? :)

»
4 days ago, # |

How to solve I -- Integer Array Shuffle?

  • »
    »
    4 days ago, # ^ |

    Hint: figure out the condition for the answer to be 1, then 2, and so on.

»
4 days ago, # |

I'd like to know the solutions for H,K (or at least pointers to some papers?) :D

  • »
    »
    4 days ago, # ^ |

    Relevant paper for H.

    Brief explanation: the optimal solution can be partitioned into a chain of circles where the first/last circle has the maximum possible radius and the others simply follow it. This gives an $$$O(n^2)$$$ solution: enumerate the circle with the maximum possible radius, and move left/right to find the possible radii of the other circles, giving $$$O(n)$$$ candidate radii for each.

    Let $$$D_i = X_i - X_{i - 1}$$$. You can find the maximal increasing/decreasing subarrays of "gaps", like $$$D_i < D_{i + 1} < D_{i + 2} < \ldots$$$; within such a subarray, it is not optimal for points $$$i, i + 1$$$ to have the maximum radius. This reduces the number of candidates.

    Also, you can't move beyond the subarray of gaps (it would make the radius at most 0), so you can just try $$$O(1)$$$ candidate radii for each circle.

»
4 days ago, # |

How to solve G?

  • »
    »
    4 days ago, # ^ |

    Let $$$dp[l][r]$$$ be the answer for segment $$$[l,r]$$$. To compute it, we choose some $$$i$$$ from the segment to be the maximum. For that $$$i$$$, we want $$$-C[i][j] + V[i][j] \cdot q[l][r][i] + dp[l][i - 1] + dp[i + 1][r]$$$ to be maximal, where $$$q[l][r][i]$$$ is the number of queries within the segment $$$[l,r]$$$ containing $$$i$$$. We can precompute $$$q$$$ and the optimal $$$-C[i][j] + V[i][j] \cdot q[l][r][i]$$$ for all possible $$$l,r,i$$$ with the convex hull trick.
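The "optimal $$$-C[i][j] + V[i][j] \cdot q$$$" precomputation is a maximum over lines evaluated at $$$q$$$, so any convex hull trick works. A minimal static upper-envelope sketch (illustrative, not the commenter's code; assumes GCC's `__int128`):

```cpp
#include <algorithm>
#include <vector>

struct Line {
    long long m, b;
    long long at(long long x) const { return m * x + b; }
};

// Static upper envelope: build once from all lines, then query max at any x.
struct UpperEnvelope {
    std::vector<Line> h;  // hull lines, slopes strictly increasing
    explicit UpperEnvelope(std::vector<Line> ls) {
        std::sort(ls.begin(), ls.end(), [](const Line& a, const Line& b) {
            return a.m != b.m ? a.m < b.m : a.b < b.b;
        });
        for (const Line& l : ls) {
            while (!h.empty() && h.back().m == l.m) h.pop_back();  // keep larger b
            while (h.size() >= 2) {
                const Line &a = h[h.size() - 2], &b = h.back();
                // b is useless if l overtakes a no later than b does
                if ((__int128)(l.b - a.b) * (b.m - a.m) >=
                    (__int128)(b.b - a.b) * (l.m - a.m)) h.pop_back();
                else break;
            }
            h.push_back(l);
        }
    }
    long long query(long long x) const {
        int lo = 0, hi = (int)h.size() - 1;
        while (lo < hi) {  // the best line index is non-decreasing in x
            int mid = (lo + hi) / 2;
            if (h[mid + 1].at(x) >= h[mid].at(x)) lo = mid + 1;
            else hi = mid;
        }
        return h[lo].at(x);
    }
};
```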

    • »
      »
      »
      3 days ago, # ^ |

      The most important trick in this solution is that we do not need to ensure that the value at point $$$i$$$ is actually bigger than all other values in the intervals $$$[l, i-1]$$$ and $$$[i+1, r]$$$.

»
4 days ago, # |

What was the model solution for C?

  • »
    »
    4 days ago, # ^ |

    Binary search on the answer and do a line sweep with lazy segment trees and std::set.

    The TL is 4-5x the model solution, but a suboptimal implementation is easier and tempting. We did not want to block it, but there were other issues when we gave more than 5 seconds.

»
3 days ago, # |

when will you stream next time?
& can you share your vimrc as well?