Azret's blog

By Azret, 4 years ago, In English

If it is permitted to discuss the problems now, how do you solve A? I think B and C are more or less straightforward.

Tags: plants, supertrees, tickets

»
4 years ago, # |
  Vote: I like it +10 Vote: I do not like it

Auto comment: topic has been updated by Azret (previous revision, new revision, compare).

»
4 years ago, # |
Rev. 8   Vote: I like it +75 Vote: I do not like it

Solution to $$$supertrees$$$ $$$(B)$$$:

  • if all $$$p_{i,j} = 1$$$, it’s a single tree. Form any tree, for example a “branch”, i.e. a tree that looks like a line (a path).

  • if $$$p_{i,j} \in \{0, 1\}$$$, it’s a “forest” of trees. Use DSU to merge the vertices $$$i, j$$$ for which $$$p_{i,j} = 1$$$ holds, then construct a “branch” for each union (a code sketch of the full construction follows this list).

  • if $$$p_{i,j} \in \{0, 2\}$$$, it’s a “forest” of necklaces (cycles). Again, use DSU and then build a necklace for each union.

  • if $$$p_{i,j} \in \{0, 1, 2\}$$$, it’s a “forest” of necklaces where a “branch” hangs from each vertex of every necklace. Notice that if $$$p_{i,j} = 2$$$, there can’t be a $$$k$$$ such that $$$p_{i,k} = 1$$$ and $$$p_{k,j} = 1$$$; that’s not hard to check. If this condition holds, just construct the branches first and then connect them into necklaces (or the other way around).

  • if there is any $$$p_{i,j} = 3$$$, then the answer doesn’t exist.
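
Here is a minimal C++ sketch of the construction described above (DSU merging, then building branches and necklaces). This is not the official grader interface (the real task reports the answer through a provided build function), the name construct_sketch is ours, and all consistency checks from the bullets (no $$$p_{i,j} = 3$$$, no $$$1$$$-vs-$$$2$$$ conflicts, symmetric input with $$$p_{i,i} = 1$$$) are assumed to be done separately:

```cpp
#include <bits/stdc++.h>
using namespace std;

struct DSU {
    vector<int> par;
    DSU(int n) : par(n) { iota(par.begin(), par.end(), 0); }
    int find(int x) { return par[x] == x ? x : par[x] = find(par[x]); }
    void unite(int a, int b) { par[find(a)] = find(b); }
};

// Returns an adjacency matrix, or an empty vector if no construction exists.
// Validity checks (no p[i][j] = 3, no k with p[i][k] = p[k][j] = 1 but
// p[i][j] = 2, symmetry) are assumed to have been done beforehand.
vector<vector<int>> construct_sketch(const vector<vector<int>>& p) {
    int n = p.size();
    vector<vector<int>> b(n, vector<int>(n, 0));

    // 1) Vertices joined by p = 1 form one "branch" (a path).
    DSU one(n);
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            if (p[i][j] == 1) one.unite(i, j);

    // 2) Whole branches joined by p = 2 form one "necklace" (a cycle).
    DSU two(n);
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            if (p[i][j] == 2) two.unite(one.find(i), one.find(j));

    // Chain the members of every p = 1 component into a path.
    vector<vector<int>> members(n);
    for (int i = 0; i < n; i++) members[one.find(i)].push_back(i);
    for (auto& mem : members)
        for (size_t t = 1; t < mem.size(); t++)
            b[mem[t - 1]][mem[t]] = b[mem[t]][mem[t - 1]] = 1;

    // Connect the representative of each branch into a cycle, per necklace.
    vector<vector<int>> reps(n);
    for (int i = 0; i < n; i++)
        if (one.find(i) == i) reps[two.find(i)].push_back(i);
    for (auto& r : reps) {
        if (r.size() == 2) return {};      // a cycle needs at least 3 branches
        if (r.size() < 3) continue;
        for (size_t t = 0; t < r.size(); t++) {
            int u = r[t], v = r[(t + 1) % r.size()];
            b[u][v] = b[v][u] = 1;
        }
    }
    return b;
}
```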

  • »
    »
    4 years ago, # ^ |
      Vote: I like it +74 Vote: I do not like it

    Why is this comment downvoted? :P Isn’t it the solution? What’s the matter? I am open to logical objections; it’s just that I don’t understand the reason for the downvotes. Not that I care about upvotes/downvotes at all, I’m just curious.

    • »
      »
      »
      4 years ago, # ^ |
        Vote: I like it +73 Vote: I do not like it

      There's nothing wrong with your comment. The CF community is just stupid sometimes.

  • »
    »
    4 years ago, # ^ |
      Vote: I like it +11 Vote: I do not like it

    I might have misread the problem statement, but I don't quite understand why the answer does not exist if there is any p[i][j] = 3. For instance, for the following example:

    1333
    3133
    3313
    3331
    

    I can construct this graph, and it looks like there are 3 distinct paths between every pair of nodes (for A->B they would be AB, ACB, ACDB)

    [image: graph-ex]

    • »
      »
      »
      4 years ago, # ^ |
        Vote: I like it +4 Vote: I do not like it

      For A->D you have 4 (ABD, ACD, ABCD, ACBD)...

»
4 years ago, # |
  Vote: I like it +43 Vote: I do not like it

Where are the problems?

»
4 years ago, # |
Rev. 6   Vote: I like it +35 Vote: I do not like it

Solution to $$$tickets$$$ $$$(C)$$$:

  • if $$$m = 1$$$, select all numbers (trivial); the answer is the sum of the largest $$$\displaystyle \frac{n}{2}$$$ numbers minus the sum of the smallest $$$\displaystyle \frac{n}{2}$$$ numbers.

  • if $$$k = 1$$$, notice that we can pick either the minimum or the maximum number from each row, and we need to pick the maximum from $$$\displaystyle \frac{n}{2}$$$ rows and the minimum from the other $$$\displaystyle \frac{n}{2}$$$ rows. We can solve this greedily. Say we always “add” the maximums and “subtract” the minimums, and denote by $$$l_i$$$ and $$$r_i$$$ the minimum and maximum numbers in row $$$i$$$, respectively. First add all $$$n$$$ maximums to the answer, even though this is not yet a valid selection. Now we need to “trade” $$$\displaystyle \frac{n}{2}$$$ maximums for minimums in the best way possible. Let’s look at what a trade is: when we subtract the minimum of some row, we must also remove the maximum of that row (because it was added initially). So we start from $$$r_1 + \ldots + r_n$$$, and a trade in row $$$i$$$ changes the answer by $$$-l_i-r_i$$$. Sort all trade values $$$-l_i-r_i$$$ and take the best $$$\displaystyle \frac{n}{2}$$$ of them.

  • the full solution is a generalization of the previous one. Notice that we need to select some number of minimums and maximums from each row so that we choose exactly $$$k$$$ numbers from each row, and $$$\displaystyle \frac{nk}{2}$$$ minimums and $$$\displaystyle \frac{nk}{2}$$$ maximums overall. Again, first add the $$$nk$$$ maximums; then we need to make $$$\displaystyle \frac{nk}{2}$$$ trades, but now a trade looks a little different: trading the $$$j$$$-th minimum in the $$$i$$$-th row (0-indexed) gains $$$-a_{i,j}-a_{i,m-k+j}$$$. The best $$$\displaystyle \frac{nk}{2}$$$ trades can be taken using a priority queue (see the sketch below).
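
A rough C++ sketch of this trade greedy, computing only the maximum total value (not the round-by-round allocation the actual task also asks for). It assumes each row of $$$a$$$ is sorted in non-decreasing order and $$$n$$$ is even; max_total_value is our own name, not part of the task interface:

```cpp
#include <bits/stdc++.h>
using namespace std;

// a is n x m with every row sorted in non-decreasing order; n is even.
// Start by taking the k largest entries of every row as "maximums",
// then perform the nk/2 most profitable trades (maximum -> minimum).
long long max_total_value(const vector<vector<long long>>& a, int k) {
    int n = a.size(), m = a[0].size();

    long long ans = 0;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < k; j++)
            ans += a[i][m - k + j];              // add all nk maximums first

    // The j-th trade of row i changes the total by -a[i][j] - a[i][m-k+j].
    // Within a row these gains are non-increasing in j, so a priority queue
    // over "next trade of each row" always offers the best remaining trade.
    using T = tuple<long long, int, int>;        // (gain, row, trade index j)
    priority_queue<T> pq;
    for (int i = 0; i < n; i++)
        pq.push({-a[i][0] - a[i][m - k], i, 0});

    long long trades = (long long)n / 2 * k;     // exactly nk/2 trades in total
    while (trades--) {
        auto [gain, i, j] = pq.top(); pq.pop();
        ans += gain;
        if (j + 1 < k)                           // at most k trades per row
            pq.push({-a[i][j + 1] - a[i][m - k + j + 1], i, j + 1});
    }
    return ans;
}
```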

  • »
    »
    4 years ago, # ^ |
      Vote: I like it +41 Vote: I do not like it

    Can somebody explain why the comment is so heavily downvoted? :P I mean, that’s how Islam Davletov (KGZ team) solved this problem. If you didn’t understand the solution, please ask your questions here :sweat-smile:

»
4 years ago, # |
  Vote: I like it 0 Vote: I do not like it

Auto comment: topic has been updated by Azret (previous revision, new revision, compare).

»
4 years ago, # |
Rev. 5   Vote: I like it +16 Vote: I do not like it

Is it true that in the 4th subtask of $$$plants$$$ (where there always exists an answer for every query) there can initially be only one $$$r_i = 0$$$, and that we can essentially find a topsort of our DAG using a BFS (by decrementing the $$$r_j$$$'s, searching for any new $$$r_j = 0$$$, and so on)?

»
4 years ago, # |
  Vote: I like it +13 Vote: I do not like it

Why is nobody on CF discussing the problems? ;(

  • »
    »
    4 years ago, # ^ |
      Vote: I like it +9 Vote: I do not like it

    I suspect it's because the tasks just got released to the public; as for the actual contestants, I'd usually advise forgetting about Day 1 as soon as it's over and starting to mentally prepare for Day 2.

    • »
      »
      »
      4 years ago, # ^ |
        Vote: I like it +1 Vote: I do not like it

      Definitely good advice for contestants! :)

»
4 years ago, # |
  Vote: I like it +27 Vote: I do not like it

Problem A is soooo hard... Plz help me, Worteltje

What I briefly found out is:

  • If $$$N < 2K$$$, then the sequence is unique. We can find the largest element and reduce $$$N$$$.
  • If the $$$i$$$-th and $$$j$$$-th elements are incomparable, then (I claim that) there is a sequence $$$(h_1, \cdots, h_n)$$$ such that both $$$(h_1, \cdots, h_i, \cdots, h_j, \cdots , h_n)$$$ and $$$(h_1, \cdots, h_j, \cdots, h_i, \cdots , h_n)$$$ satisfy the $$$r$$$-conditions.
  • Thus, if $$$| i-j | < K$$$, then $$$i$$$-th and $$$j$$$-th must be comparable.

And I'm stuck. T.T Could anybody help with the key concepts?

  • »
    »
    4 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Can you elaborate more on those observations, please? :P

    • »
      »
      »
      4 years ago, # ^ |
        Vote: I like it +8 Vote: I do not like it

      Let $$$d(i, j) := j-i$$$ if $$$i \le j$$$, and $$$d(i, j) := N-i+j$$$ otherwise.

      Assume $$$N < 2K$$$. Let $$$i_1, \cdots, i_t$$$ be the indices $$$i$$$ such that $$$r_i = 0$$$. We know that one of the $$$i_*$$$ is the index of the largest $$$h$$$.

      Let $$$h_{i_1}$$$ be the largest. Then, $$$d(i_1, i_k) < K$$$ must hold for every $$$k$$$. If not, then $$$d(i_k, i_1) < K$$$, which would imply that $$$h_{i_k} > h_{i_1}$$$, a contradiction.

      Surprisingly, such $$$i_1$$$ is unique, so we can find the largest $$$h$$$.

      • »
        »
        »
        »
        4 years ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

        So, you mean you find $$$i_1$$$ such that $$$d(i_1, i_{*}) < K$$$ holds? Then, what do we do? Find the largest $$$h$$$ repeatedly this way?

        • »
          »
          »
          »
          »
          4 years ago, # ^ |
            Vote: I like it +5 Vote: I do not like it

          Yes. Generally,

          • We know some values of $$$h_*$$$,
          • then we can find the largest unknown $$$h_*$$$.

          Repeat this for $$$N$$$ steps, and all $$$h_i$$$ are revealed.

          Using a Fenwick tree or a priority queue, the total time complexity is $$$O(N \log N)$$$.

    • »
      »
      »
      4 years ago, # ^ |
        Vote: I like it +5 Vote: I do not like it

      The proof of the second observation is quite hard for me to explain. If $$$(h_1, \cdots, h_n)$$$ satisfies the $$$r$$$-conditions, then:

      • The top $$$k_1$$$ largest $$$h_*$$$ can be shuffled.
      • The next top $$$k_2$$$ largest $$$h_*$$$ can be shuffled.
      • ... and so on.

      So, if $$$i$$$-th and $$$j$$$-th are incomparable, then we can find a sequence $$$(h_1, \cdots, h_i, \cdots, h_j, \cdots, h_n)$$$, which can be modified to $$$(h_1, \cdots, h_j, \cdots, h_i, \cdots, h_n)$$$ by some shuffle operations.

      Therefore, if $$$i$$$-th and $$$j$$$-th are incomparable, and $$$|i-j|<K$$$, one of $$$r_i$$$ and $$$r_j$$$ must be increased when we swap $$$i$$$-th and $$$j$$$-th. That's a contradiction.

»
4 years ago, # |
Rev. 4   Vote: I like it 0 Vote: I do not like it

Did anyone get WA on problem B, subtask 4, test 23? That's how I wasted 3 hours without being able to fix the code :/

UPD: I made a wrong assumption: for each connected component with $$$p[i][j] \geq 1$$$, I assumed that there exists only one connected component with $$$p[i][j] = 1$$$. In fact, I failed on this test:

5
1 1 2 2 2
1 1 2 2 2
2 2 1 2 2
2 2 2 1 1
2 2 2 1 1

I'm quite surprised that this wrong approach passed all but 4 tests.

  • »
    »
    4 years ago, # ^ |
      Vote: I like it +8 Vote: I do not like it

    I feel you, bro. I didn't even take a look past the second two subtasks of problem B and instead spent a lot of time on the first subtasks of problem A... Well, better luck next year.

»
4 years ago, # |
Rev. 3   Vote: I like it +130 Vote: I do not like it

Problem A (plants)

This is a pretty proof-y writeup; a lot of it is just mathematical rigor. The high-level approach section gives an overview of the main ideas of the solution, and the algorithm is a bit scattered throughout the writeup. My implementation is at https://oj.uz/submission/300416, but be warned that it's written to be more optimal and not very readable.

In general, we will be a little loose with our indices and assume that all indices are taken circularly mod $$$N$$$. Inequalities will typically refer to circular ranges, and it should be clear from context how they wrap. We will say that $$$\lvert i-j \rvert$$$ is the shorter circular distance between $$$i$$$ and $$$j$$$.

High-level approach

We can find one valid height array by repeatedly identifying a candidate for the largest element ($$$r[i] = 0$$$ and $$$r[j] > 0$$$ for all $$$i-K < j < i$$$) and deleting it (adjusting the $$$r$$$ accordingly). It turns out that the relative ordering of the elements of each subarray of size $$$K$$$ is unique/fixed, and we can use this greedy solution as an oracle for these subarray orderings.

Furthermore, satisfying the relative ordering of the subarrays is also sufficient for consistency (this is obvious because each value of $$$r$$$ only depends on the relative ordering of one of these subarrays of size $$$K$$$). Thus, we can prove $$$h[x] < h[y]$$$ only if there is some chain $$$h[a_0 = x] < h[a_1] < h[a_2] < \cdots < h[a_{l-1}] < h[a_l = y]$$$ where $$$\lvert a_i - a_{i+1} \rvert < K$$$ (indices are still circular). Any two locations within $$$K$$$ are guaranteed to be ordered, so we only need to consider chains where $$$x = a_0 < a_1 < \cdots < a_l = y$$$ or $$$x = a_0 > a_1 > \cdots > a_l = y$$$ (either all CW or all CCW); WLOG the former.

To answer queries, note that we can greedily jump from $$$a_i$$$ to the smallest successor greater than $$$a_i$$$ as long as it's before $$$y$$$, i.e. $$$a_i \to s(a_i) = \displaystyle \operatorname{argmin}_{a_i < j < a_i+K \text{ and } h[j] > h[a_i]} h[j]$$$ as long as $$$s(a_i) \le y$$$. We can precompute jump pointers for $$$s$$$, and then do one oracle query once $$$y - K < a_i \le y$$$.
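
For concreteness, here is an unoptimized brute-force C++ sketch of the greedy oracle described above (greedy_heights is our own name, for illustration only); it assigns the largest remaining height to each deleted candidate, which yields one consistent height array for a valid input. Part 1 below explains how to do the same thing efficiently.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Repeatedly pick an index i with r[i] = 0 and no other alive zero in the
// circular window (i-K, i), give it the largest remaining height, delete it,
// and decrement r over that window. Assumes r is consistent (an answer
// exists), as the problem guarantees.
vector<int> greedy_heights(vector<int> r, int K) {
    int n = r.size();
    vector<bool> alive(n, true);
    vector<int> h(n, 0);
    for (int t = 0; t < n; t++) {
        int pick = -1;
        for (int i = 0; i < n && pick == -1; i++) {
            if (!alive[i] || r[i] != 0) continue;
            bool blocked = false;
            for (int d = 1; d < K && !blocked; d++) {   // window (i-K, i), circular
                int j = (i - d + n) % n;
                if (alive[j] && r[j] == 0) blocked = true;
            }
            if (!blocked) pick = i;
        }
        assert(pick != -1);          // for a consistent r the tallest alive plant qualifies
        h[pick] = n - t;             // first deleted = tallest
        alive[pick] = false;
        for (int d = 1; d < K; d++) {                   // plants just before pick lose
            int j = (pick - d + n) % n;                 // one taller plant in their window
            if (alive[j]) r[j]--;
        }
    }
    return h;
}
```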

Part 1: Identifying ordering of subarrays of size $$$K$$$

The first key lemma is that we can uniquely derive the relative ordering of all subarrays of size $$$K$$$, and these relative orderings are both necessary and sufficient. More formally:

Lemma 1: For any two vertices $$$i$$$ and $$$j$$$ such that $$$\lvert i-j \rvert < K$$$, we can uniquely determine whether $$$h[i] < h[j]$$$ or $$$h[i] > h[j]$$$. Furthermore, these constraints are sufficient; any array $$$h$$$ that satisfies all of these constraints is consistent with $$$r$$$.

Proof of Lemma 1: These constraints are obviously sufficient, because each $$$r[i]$$$ only depends on $$$h[i] \overset{?}{<} h[j]$$$ for $$$i < j < i+K$$$.

It remains to prove that these constraints are unique. To show this, we give an algorithm which certifies either $$$h[i] < h[j]$$$ or $$$h[i] > h[j]$$$ for all $$$i < j < i+K$$$. The algorithm is the greedy algorithm which removes any possibly-maximal element at any time.

We will maintain a few things at every point in time:

  1. A timestamp $$$t$$$ starting at $$$0$$$.
  2. A set of "alive" indices $$$A_t$$$, initialized at $$$A_0 = \{0,1,2,\ldots,n-1\}$$$. For convenience, we will assume all indices are restricted to ones in $$$A_t$$$.
  3. An array $$$r_t$$$ such that for each $$$i \in A_t$$$, $$$r_t[i] = \text{#}(h[j] > h[i] \mid i < j < i+K \text{ and } j \in A_t)$$$ ($$$r_t$$$ is undefined for $$$i \notin A_t$$$). We will show as we construct $$$r_t$$$ that the definition will hold for any/all consistent choices of $$$h$$$. This is initialized at the given $$$r_0 = r$$$.

Now, the algorithm proceeds as follows.

  1. If $$$A_t$$$ is empty, exit.
  2. Otherwise, identify some $$$i \in A_t$$$ so that $$$r_t[i] = 0$$$ and $$$r_t[j] > 0$$$ for all $$$i-K < j < i$$$ where $$$j \in A_t$$$. This must exist: assume that there is some valid height list $$$h^*[i]$$$. Then, these conditions are satisfied for $$$\operatorname{argmax}_{i \in A_t} h^*[i]$$$.
  3. Note that we are guaranteed that $$$h[i] > h[j]$$$ for all $$$i-K < j < i+K$$$ where $$$j \in A_t$$$ and $$$j \ne i$$$. This is obviously true when $$$i < j < i+K$$$ because $$$r_t[i] = 0$$$. When $$$i-K < j < i$$$, this is also true; assume to the contrary, and consider the maximal $$$j$$$ with $$$h[j] > h[i]$$$. Then, $$$h[j] > h[i]$$$ and by maximality, $$$h[k] < h[i]$$$ for all $$$j < k < i$$$. Also, $$$r_t[i] = 0$$$ so $$$h[i] > h[k]$$$ for all $$$i < k < i+K$$$. Thus, $$$h[j] > h[i] \ge h[k]$$$ for all $$$j < k < i+K, k \in A_t$$$, so $$$r_t[j] = 0$$$, a contradiction, so we're done.
  4. Delete $$$i$$$ from the alive set, i.e. set $$$A_{t+1} = A_t \setminus \{i\}$$$. By the observation in step 3, then, we can show that when $$$i-K < j < i$$$ and $$$j \in A_t$$$, $$$r_{t+1}[j] = r_t[j]-1$$$, and otherwise $$$r_{t+1}[j] = r_t[j]$$$. (Remember $$$r_{t+1}$$$ is defined in terms of $$$A_{t+1}$$$.)
  5. Increment $$$t$$$ and go back to step 1.

Note that at any point in the algorithm, we're deriving the true and unique value of $$$r_t$$$ given the current subset $$$A_t$$$. Thus, consider any $$$i \ne j$$$ with $$$\lvert i - j \rvert < K$$$, and WLOG assume that $$$i$$$ was deleted before $$$j$$$. Then, by step $$$3$$$, we know that $$$h[i] > h[j]$$$, because $$$i-K < j < i+K$$$ and $$$j$$$ was alive at the time. Thus, we have proven our lemma. $$$\mathrm{\square}$$$

(Instead of using this uniquely-defined-$$$r_t$$$ argument, we can also just observe that any greedy choice cannot hurt our ability to make other greedy choices, so the greedy choices "commute" in some sense. That's a bit harder to formalize, though.)

To actually run this algorithm, we need to be able to perform steps 2 and 4 efficiently. We store $$$r$$$ in a lazy-propagation segment tree that supports range-decrement, point-mark-dead, and range-query-argmin. Step 4 is obvious on this data structure.

For step 2, use range-query-argmin on the whole range to find some $$$i$$$ so that $$$r_t[i] = 0$$$. Then, we can do a range-query-argmin to check if there is any $$$i-K < j < i$$$ so that $$$r_t[j] = 0$$$. If $$$r_t[j] > 0$$$ for all $$$j$$$, we use $$$i$$$. Otherwise, find any such $$$j$$$ and note that $$$i$$$ cannot be removed until after $$$j$$$ is removed. Thus, we can push $$$i$$$ onto a stack and try to use $$$j$$$ first; we'll check $$$i$$$ only after $$$j$$$ succeeds.

This runs in an amortized linear number of attempts (so $$$O(n \log n)$$$ runtime), because we either find a valid $$$i$$$ and make progress and shrink our stack by one, or our stack grows by one. Note that the stack size is limited by the number of alive items, because we know it's always possible to make progress.

Finally, we'll also record the order of deletion during the algorithm; by the proof of our lemma, this gives a valid (decreasing) ordering of the plants.
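
Putting Part 1 together, here is a C++ sketch with our own names (SegTree, deletion_order): a lazy segment tree supporting range-add, point-kill, and range-argmin, driven by the stack procedure above; it returns the deletion order (first deleted = tallest). It assumes the input $$$r$$$ is consistent, as the problem guarantees; the removed check is only defensive.

```cpp
#include <bits/stdc++.h>
using namespace std;

const int INF = 1e9;   // "dead" marker; stays far above 0 even after O(n) decrements

// Lazy segment tree over r: range add, point "kill", range argmin.
struct SegTree {
    int n;
    vector<pair<int,int>> mn;   // (minimum value in subtree, index attaining it)
    vector<int> lz;             // pending addition for the whole subtree
    SegTree(const vector<int>& a) : n(a.size()), mn(4 * a.size()), lz(4 * a.size(), 0) {
        build(1, 0, n - 1, a);
    }
    void build(int p, int l, int r, const vector<int>& a) {
        if (l == r) { mn[p] = {a[l], l}; return; }
        int m = (l + r) / 2;
        build(2 * p, l, m, a); build(2 * p + 1, m + 1, r, a);
        mn[p] = min(mn[2 * p], mn[2 * p + 1]);
    }
    void apply(int p, int v) { mn[p].first += v; lz[p] += v; }
    void push(int p) { if (lz[p]) { apply(2 * p, lz[p]); apply(2 * p + 1, lz[p]); lz[p] = 0; } }
    void add(int p, int l, int r, int ql, int qr, int v) {      // r[ql..qr] += v
        if (qr < l || r < ql) return;
        if (ql <= l && r <= qr) { apply(p, v); return; }
        push(p); int m = (l + r) / 2;
        add(2 * p, l, m, ql, qr, v); add(2 * p + 1, m + 1, r, ql, qr, v);
        mn[p] = min(mn[2 * p], mn[2 * p + 1]);
    }
    pair<int,int> query(int p, int l, int r, int ql, int qr) {  // argmin over [ql, qr]
        if (qr < l || r < ql) return {INT_MAX, -1};
        if (ql <= l && r <= qr) return mn[p];
        push(p); int m = (l + r) / 2;
        return min(query(2 * p, l, m, ql, qr), query(2 * p + 1, m + 1, r, ql, qr));
    }
    void kill(int p, int l, int r, int i) {                     // mark index i as deleted
        if (l == r) { mn[p].first = INF; return; }
        push(p); int m = (l + r) / 2;
        if (i <= m) kill(2 * p, l, m, i); else kill(2 * p + 1, m + 1, r, i);
        mn[p] = min(mn[2 * p], mn[2 * p + 1]);
    }
};

// Apply f to the (at most two) non-wrapping pieces of the circular
// window (i-K, i), i.e. the K-1 positions just before i.
template <class F> void on_window(int n, int K, int i, F f) {
    int lo = i - (K - 1);
    if (lo >= 0) f(lo, i - 1);
    else { f(lo + n, n - 1); if (i >= 1) f(0, i - 1); }
}

vector<int> deletion_order(const vector<int>& r, int K) {
    int n = r.size();
    SegTree st(r);
    vector<int> order, stk;
    vector<bool> removed(n, false);
    while ((int)order.size() < n) {
        int cand;
        if (stk.empty()) cand = st.query(1, 0, n - 1, 0, n - 1).second;  // some alive zero
        else { cand = stk.back(); stk.pop_back(); }
        if (removed[cand]) continue;                 // defensive
        pair<int,int> blk = {INT_MAX, -1};
        on_window(n, K, cand, [&](int l, int rr) {
            blk = min(blk, st.query(1, 0, n - 1, l, rr));
        });
        if (blk.first == 0) {            // another zero in cand's window blocks it
            stk.push_back(cand);         // retry cand once the blocker is resolved
            stk.push_back(blk.second);
        } else {                         // cand is the tallest alive plant: delete it
            order.push_back(cand);
            removed[cand] = true;
            on_window(n, K, cand, [&](int l, int rr) { st.add(1, 0, n - 1, l, rr, -1); });
            st.kill(1, 0, n - 1, cand);
        }
    }
    return order;
}
```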

Part 2: Querying arbitrary pairs

At this point, we have one consistent array $$$h_{greedy}$$$, and we know that an array $$$h$$$ is consistent iff for all $$$ \lvert i-j \rvert < K$$$, $$$(h[i] < h[j]) \iff (h_{greedy}[i] < h_{greedy}[j])$$$.

First, for simplicity, we'll answer the question of whether it's guaranteed that $$$h[x] < h[y]$$$; we can just query whether $$$h[x] < h[y]$$$ and $$$h[y] < h[x]$$$ to answer the actual queries.

Now, it's well known that given several inequalities, the only other inequalities you can derive are the transitive closure, i.e. paths of given inequalities. There are several proofs; for example, you can use the existence of a topological sort in any acyclic digraph (DAG).

Then, $$$h[x] < h[y]$$$ iff there exists some path $$$h[x = a_0] < h[a_1] < h[a_2] < \cdots < h[a_{l-1}] < h[a_l = y]$$$ where $$$\lvert a_i - a_{i+1} \rvert < K$$$. We'll now make a few observations to simplify the paths we need to consider. First, consider the path of minimal length (number of terms). WLOG, assign a direction to each jump $$$a_i \to a_{i+1}$$$, either CW or CCW.

Paths are unidirectional and don't loop

Then, this minimal path cannot switch directions; otherwise, if WLOG $$$a_{i-1} < a_i > a_{i+1}$$$, then $$$a_i - K < \{a_{i-1}, a_{i+1}\} < a_i$$$, so $$$\lvert a_{i-1} - a_{i+1} \rvert < K$$$ and we can shrink the path by removing $$$a_i$$$. (Remember, $$$\lvert a_{i-1} - a_{i+1} \rvert < K$$$, so the ordering $$$h[a_{i-1}] < h[a_{i+1}]$$$ is guaranteed.)

Also, this path cannot make more than a full CW or CCW loop: otherwise, by a similar argument, it must come within $$$K$$$ of itself and we can shorten it.

Thus, we can make the stronger claim: $$$h[x] < h[y]$$$ iff there exists some path $$$h[x = a_0] < h[a_1] < \cdots < h[a_l = y]$$$ where $$$x = a_0 < a_1 < \cdots < a_l = y$$$, or if there exists some path with $$$x = a_0 > a_1 > \cdots > a_l = y$$$, i.e. the path is along one of the two circular arcs between $$$x$$$ and $$$y$$$. We will check these 2 cases separately (so now we're at 4 checks per query: $$$h[x] < h[y]$$$ vs $$$h[x] > h[y]$$$ and $$$a_0 < a_l$$$ vs $$$a_0 > a_l$$$). WLOG, we'll try to find if there exists a path with $$$x = a_0 < \cdots < a_l = y$$$.

Paths can be greedily constructed by taking "successor"

Now, we'll show that we can greedily construct most of this path. For any plant $$$p$$$, consider its direct successor in $$$h[p:p+K-1]$$$, i.e. consider $$$\operatorname{successor}(p) = \arg\min_{p < q < p+K, h[q] > h[p]} h[q]$$$. We'll show that you can essentially just pick $$$a_{i+1} = \operatorname{successor}(a_i)$$$.

For any path $$$x = a_0 < \cdots < a_l = y$$$, consider any index where $$$a_i < \operatorname{successor}(a_i) < y$$$ and $$$a_{i+1} \ne \operatorname{successor}(a_i)$$$ if it exists. Consider the first $$$j$$$ where $$$a_j > \operatorname{successor}(a_i)$$$ (so $$$a_i \le a_{j-1} \le \operatorname{successor}(a_i)$$$). Then, we can instead use the path $$$x = a_0 < \cdots < a_{i-1} < a_i < \operatorname{successor}(a_i) < a_j < a_{j+1} < \cdots < a_l = y$$$ because $$$a_j - \operatorname{successor}(a_i) \le a_j - a_{j-1} <K$$$ and $$$h[a_j] > h[a_{j-1}] > h[\operatorname{successor}(a_i)]$$$ by definition of successor. This path has more correct successor links, so we can keep iterating.

Thus, $$$h[x] < h[y]$$$ along an increasing path iff there exists a path $$$x = a_0 < \cdots < a_l = y$$$ with $$$h[a_0] < \cdots < h[a_l]$$$ where $$$a_{i+1} = \operatorname{successor}(a_i)$$$ for all $$$i$$$ except possibly $$$i = l-1$$$, for which $$$\operatorname{successor}(a_{l-1}) \ge a_l = y$$$.

To check for paths of this form, we can just use binary lifting on jump pointers of $$$\operatorname{successor}$$$ to find $$$a_{l-1}$$$, and then query $$$h_{greedy}[a_{l-1}] \overset{?}{<} h_{greedy}[y]$$$. Thus, we have an $$$O(n \log n + q \log n)$$$ algorithm.

Part 3: Answering queries in $$$O(1)$$$

This is an optimization of the jump-pointers in the final step. As before, we'll solve the problem of testing whether there exists a path $$$x = a_0 < a_1 < \cdots < a_l = y$$$ with $$$h[a_0] < h[a_1] < \cdots < h[a_l]$$$, with the observation that it suffices to consider $$$a_{i+1} = \operatorname{successor}(a_i)$$$ for $$$i \ne l-1$$$.

First, we will switch to non-circular indices and "unroll" the circle into a path of length $$$2N$$$, indexed from $$$0$$$ to $$$2N-1$$$. Thus, we're trying to find an unrolled rightwards path from $$$x$$$ to $$$y$$$ if $$$x<y$$$, or from $$$x$$$ to $$$y+N$$$ if $$$x>y$$$.

Now, we will explicitly construct a "maximal topological sort" of these $$$2N$$$ vertices, which obeys all rightwards constraints ($$$x < y$$$ and $$$h[x] < h[y]$$$) and exactly those constraints. This will be maximal in the sense that the small indices occur as late as possible in the ordering. We'll be more formal when we prove its properties.

Algorithm: Maintain a linked list which is initially empty. Iterate from $$$i = 2N-1$$$ to $$$0$$$, and insert element $$$i$$$ in the linked list as the direct predecessor of $$$\operatorname{successor}(i)$$$. Here, we'll modify the definition of successor slightly: $$$\operatorname{successor}(i) = \arg\min_{i < j < \min(i+K,2N), h[j] > h[i]} h[j]$$$. (Equivalently, we can initialize the linked list with the elements $$$2N-K \ldots 2N-1$$$ in sorted order according to $$$h_{greedy}$$$.) Finally let $$$idx[i]$$$ be the index of $$$i$$$ in the final ordering.

Lemma 2: For any $$$0 \le i \le 2N-K$$$, the $$$K$$$ elements from $$$i$$$ to $$$i+K-1$$$ are in the correct relative order; we can prove this via simple induction downwards on $$$i$$$, using the definition of successor.

Using this, we come to our key claim.

Claim: For any $$$x < y$$$, there exists a valid increasing path of the above form from $$$h[x]$$$ to $$$h[y]$$$ iff $$$idx[x] < idx[y]$$$ in this ordering.

First, if there exists a valid path $$$h[a_0 = x] < h[a_1] < \cdots < h[a_l = y]$$$ with $$$a_0 < a_1 < \cdots < a_l$$$ and $$$a_{i+1} - a_i < K$$$, then by Lemma 2, $$$idx[a_0 = x] < idx[a_1] < \cdots < idx[a_l = y]$$$. In the other direction, if $$$idx[x] < idx[y]$$$ where $$$x < y$$$, then when $$$x$$$ was inserted directly before $$$\operatorname{successor}(x)$$$, $$$y$$$ was already in the linked list, so $$$idx[\operatorname{successor}(x)] \le idx[y]$$$. Thus, we can build a path by taking successor jumps until $$$y-x < K$$$, at which point Lemma 2 lets us finish the path. (This array essentially allows us to perform all the greedy jumps.)

Thus, the claim is true, so we can just precompute $$$idx$$$, and at query time, for $$$x < y$$$, we can identify increasing paths from $$$x$$$ to $$$y$$$ by simply testing $$$idx[x] < idx[y]$$$ in $$$O(1)$$$ time. We compute a symmetric $$$rev\_idx$$$ to handle the decreasing paths from $$$x$$$ to $$$y$$$.
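
As an illustration of this construction (with our own names), here is a sketch that finds $$$\operatorname{successor}(i)$$$ with an ordered set over a sliding window of the next $$$K-1$$$ positions and does the insert-before-successor step with a std::list. Here $$$h$$$ is the doubled array, $$$h[i] = h_{greedy}[i \bmod N]$$$, of size $$$2N$$$. One natural way to handle an $$$i$$$ with no successor (it is the maximum of its window) is to append it at the end of the list, which keeps Lemma 2 intact and matches the "as late as possible" maximality:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Build idx[] for the "maximal topological sort" on the unrolled array:
// iterate i from 2N-1 down to 0 and insert i directly before successor(i),
// or at the very end of the list if i has no taller element among the next
// K-1 positions.
vector<int> build_idx(const vector<int>& h, int K) {
    int m = h.size();                        // m = 2N
    list<int> order;                         // the linked list of indices
    vector<list<int>::iterator> where(m);    // each index's node in the list
    set<pair<int,int>> window;               // {h value, index} for positions i+1 .. i+K-1

    for (int i = m - 1; i >= 0; i--) {
        if (i + K < m) window.erase({h[i + K], i + K});   // keep only i+1 .. i+K-1
        // successor(i): smallest h value greater than h[i] inside the window
        auto it = window.upper_bound({h[i], INT_MAX});
        if (it == window.end()) where[i] = order.insert(order.end(), i);
        else                    where[i] = order.insert(where[it->second], i);
        window.insert({h[i], i});
    }

    vector<int> idx(m);
    int pos = 0;
    for (int v : order) idx[v] = pos++;      // final position of every index
    return idx;
}
```

At query time, unroll $$$x$$$ and $$$y$$$ so that $$$x < y$$$ along the arc being tested; an increasing path from $$$x$$$ to $$$y$$$ exists iff $$$idx[x] < idx[y]$$$, per the claim above.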

Final note: this "maximal ordering" can also be derived directly during the greedy algorithm by selecting the smallest $$$i$$$ such that $$$r_t[i] = 0$$$ at each stage, though it modifies the algorithm in other subtle ways. I won't elaborate, but you can see https://oj.uz/submission/300153 (model solution plants-yanhao-ac.java).

  • »
    »
    4 years ago, # ^ |
      Vote: I like it +96 Vote: I do not like it

    I am the author of the problem plants, and I really appreciate that you wrote such a detailed editorial!

    When I found that we can uniquely determine whether h[i] < h[j] or h[i] > h[j] for all |i-j| < K, I was very surprised, and it seems very interesting (maybe contestants could find it more easily because of the hints in the subtasks).

    Your solution is exactly the same as mine, and the testers found the O(1) queries, which I couldn't come up with.

    I hope that many participants enjoyed this problem.

    • »
      »
      »
      4 years ago, # ^ |
        Vote: I like it +21 Vote: I do not like it

      I already get a lot of enjoyment from your wonderful problem, ainta!

      And thank you, ecnerwala. You made my day.

      BTW, I'm surprised that 9 IOI participants solved it in only 5 hours, whereas I spent a whole day. Are they Legendary Grandmasters? XD

      • »
        »
        »
        »
        4 years ago, # ^ |
          Vote: I like it +59 Vote: I do not like it

        I just got lucky, I don't know about the others...

        Also, I didn't really prove anything; I just submitted a bunch of things and saw what worked and what didn't (hence the 16 submissions).

      • »
        »
        »
        »
        4 years ago, # ^ |
          Vote: I like it +29 Vote: I do not like it

        I was able to actually prove most of it, but I spent too much time and failed the problem tickets :/. I loved the problem plants though, it was really fun going through the observations :).

  • »
    »
    4 years ago, # ^ |
      Vote: I like it +10 Vote: I do not like it

    Final note: this "maximal ordering" can also be derived directly during the greedy algorithm by selecting the smallest $$$i$$$ such that $$$r_t[i]=0$$$ at each stage, though it modifies the algorithm in other subtle ways. I won't elaborate, but you can see https://oj.uz/submission/300153 (model solution plants-yanhao-ac.java).

    This solution is somewhat complicated; I will try my best to explain it.

    Suppose at some time $$$t$$$, we have $$$r_t[0]=0$$$, but also $$$r_t[n-1]=0$$$. We cannot simply remove $$$0$$$ from the segment tree: it is being "blocked". The circular nature of the problem makes this tougher than if it were linear.

    Instead, what I do is as follows:

    Let $$$i$$$ be the smallest index satisfying $$$r[i]=0$$$. If $$$i \geq k-1$$$, we can simply decrease $$$r[i-k+1], r[i-k+2], \ldots, r[i-1]$$$ by 1 in the segment tree; we do not need to do anything special.

    If $$$i < k-1$$$, we decrement $$$r[0], r[1], \ldots, r[i-1]$$$. Under the "normal" solution, we would also have decremented $$$r[i+n-k+1], \ldots, r[n-1]$$$. Here, we do not make that decrement immediately; instead we do it lazily.

    This is done by keeping an additional queue (called "stash" in the referenced code). Every time the above happens, we put $$$i$$$ into the stash, to record that we still need to decrement $$$r[i+n-k+1], \ldots, r[n-1]$$$ sometime in the future. The reason is that executing the decrement immediately may make some value of $$$r$$$ negative.

    If every element in the array is non-zero, then we start to execute these lazy operations, in the same order that the instructions are queued.

    Using this, we can obtain paths which do not wrap around. To check if the paths wrap around, we will have to iron out more technical details.

    I had the idea of the maximal topological sort and doubling the circle, which I have implemented as well.

    I was just trying to see how far I could push this idea and reduce the runtime. By not doubling the circle, the number of segment tree operations is halved, and we get a more efficient solution.

    (In case you are wondering, I am from the Scientific Committee and this model solution is mine.)

»
4 years ago, # |
  Vote: I like it +1 Vote: I do not like it

Is there any online judge for these problems?