Thanks for the participation!

1602A - Two Subsequences was authored and prepared by napstablook

1602B - Divine Array was authored and prepared by Anti-Light

1601A - Array Elimination was authored and prepared by isaf27

1601B - Frog Traveler was authored and prepared by KiKoS

1601C - Optimal Insertion was authored and prepared by isaf27

1601D - Difficult Mountain was authored and prepared by Tikhon228

1601E - Phys Ed Online was authored by jury and prepared by fedoseev.timofey

1601F - Two Sorts was authored by cdkrot and prepared by cdkrot, LHiC


Wow! Editorials of previous contests are all very fast! Thanks a lot!

That's the fastest editorial I've seen yet. I struggled with D, but a nice round nonetheless!

shouldn't it be "minimal number of moves needed to travel from $$$i$$$ to $$$0$$$"? (div1.B) Nice round btw!

Problem 1C/2E appeared on stackexchange.

For problem B, can someone prove that it requires at most log(n) steps for the array to become constant?

mark

following. Would like to know for n also.

The array will not become constant until the frequency of each element differs from that of every other element, and the maximum possible frequency is n. If the array has NOT become constant, there are at least two equal frequencies. In the worst situation, say the repeated frequency equals f; then it becomes 2f, 4f, 8f and so on... Even if f equals 1, it takes at most log(n) steps for f to reach n. Done!!!

`let's say the same frequency equals to f, then it will equal to 2f, 4f, 8f and so on`

What does this statement mean? Please tell.

For example, [1, 2, 3, 3, 7, 7, 7, 7].

Hope the example will help you understand my statement.

Someone please correct me if my proof is wrong. Say I denote the total count of repeated frequencies by F. At each step, in the worst case, we get groups of 2 numbers with the same frequency. So if we proceed according to the algorithm, F reduces by a factor of 2, i.e. it becomes F/2. Say it takes k steps for the process to terminate and give us a constant array, so F/2^k >= 1 (once there is only 1 distinct frequency, with 1 additional step I get a constant array). Hence F >= 2^k, but F is at most n, so n >= 2^k, i.e. k <= log(n). So we need at most log(n) steps until we get a constant array.

Why is the repeated frequency not f, 2f, 3f?

If a_i changes, the number of integers equal to a_i at least doubles. When a_i stops changing and at the same time becomes the minimum (least-occurring) value, it's locked and never changes again.

Also, after processing one step, if the array does not remain unchanged, the minimum number of occurrences among the unlocked elements at least doubles. After at most log(n) steps there is no element left unlocked.
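The bound discussed above is easy to sanity-check by brute force. Below is a small simulation sketch (my own code, not from the editorial; `transform_step` and `steps_until_stable` are names I made up) that applies the operation until the array stops changing and counts the steps:

```cpp
#include <bits/stdc++.h>
using namespace std;

// One operation: replace every element by its number of occurrences.
vector<int> transform_step(const vector<int>& a) {
    map<int, int> cnt;
    for (int x : a) cnt[x]++;
    vector<int> b;
    for (int x : a) b.push_back(cnt[x]);
    return b;
}

// Number of operations until the array stops changing.
int steps_until_stable(vector<int> a) {
    int steps = 0;
    while (true) {
        vector<int> b = transform_step(a);
        if (b == a) return steps;
        a = b;
        steps++;
    }
}
```

On the worst-case-style input `{1, 2, 3, 3, 7, 7, 7, 7}` (frequencies 1, 1, 2, 4 that keep doubling), the array stops changing after 4 steps, consistent with a $$$\log_2(8) + 1$$$ style bound.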

1602E - Optimal Insertion is super similar to a recent contest's 1579E2 - Array Optimization by Deque.

For some reason I've decided to not use the Fenwick trees like I did before and switched to the Order Statistics Tree from the GNU C++ PBDS. The solution should be `O((n + m)(log(n) + log(m)))`. Can someone help me understand why I have TLE? The idea is as follows: sort `b`, make `n + m` steps, and "pop" (choose) an element from either `a` or `b` at each step based on the following: for the next element in `a`, count the number of items in the remainder of `b` that are less than this element (this will be the number of inversions we "add" by choosing an element from `a` at this step), and do the same for the next element in `b`.

The asymptotic seems right: there are `n + m` steps, at each of them there are at most `log(n) + log(m)` operations, so this should really fit the limitations. I can't figure out why I get TLE :( Here's the code: 133029362

Okay, it looks like the time limits are really tight, so querying the Order Statistics Tree twice per step, with the $$$log(n) + log(m)$$$ factor, gets a TLE.

I've changed the solution to get past the TLE, but now I get WA on Test #3 (133042951), which means my solution is wrong. I need to think more about the idea. I was under the impression that the greedy additions to the result would be fine, but somehow it's not the correct approach.

I was under the impression that greedily adding the element from $$$a$$$ or $$$b$$$ by choosing which one will have the least number of inversions with the remainders of $$$a$$$ AND $$$b$$$ should be right but the result is somehow not correct in some cases. Any idea how to solve this with Segment Tree/Order Statistics/Fenwick tree?

Here is my solution to 1601C - Optimal Insertion. I had no time to debug it during the round.

First, assume without proof that we can greedily pick the first place with the best outcome and insert there. I don't have a proof yet, so in my eyes my solution is not proven, and perhaps it might be hacked.

Code snippet of the main idea. Details under spoiler.

Wall of text follows. Next, rephrase the influence of the new element $$$x$$$ by two sums:

Now, a trick! If we make $$$f(True) = 2$$$ then the minimum is still at the same index. Next trick: we shift both, making $$$f(True) = 1$$$ and $$$f(False) = -1$$$. The minimum takes the same place again. But now, $$$f(ans[j] > x) = -f(ans[j] \leq x)$$$. The only thing missing to make it similar to the second sum is that the second sum also counts when they are equal. To deal with it, we define:

Then

For fixed $$$x$$$ define $$$p$$$ as follows:

Then inversions formula becomes:

This is the same as minimization of $$$-2p_i$$$ and the same as maximization of $$$p_i$$$. Thus, following snippet should also work:

Code snippet. Next step: notice that when we increase $$$x$$$, only the values which were bigger will become equal, so $$$g$$$ will increase, and then they will become less, so $$$g$$$ increases once again. This allows us to sort the original array $$$b$$$ and then maintain the array $$$p$$$ in a segment tree with operations: add on segment, and get position of maximum. The trickiest part: it would be awesome if we could also insert, but that's a lot of work, because the simplest workaround I know is to implement a dynamic balanced tree (treap, red-black tree, B-tree, or splay tree) with segment-tree-like information.

But I want to avoid that. So, notice that between two values of the original array $$$a$$$, all numbers will be inserted in increasing order. Thus, we can store a list of values we are planning to insert between two numbers, and our decision of where to insert is always "before a certain element of array $$$a$$$". Using this observation we can maintain the $$$p$$$ array with a different meaning: it will instead store prefix sums of $$$g$$$ with this information taken into account, i.e. the prefix sum up to the next element of $$$a$$$, considering all inserted elements. Now the index of the maximum in the new array $$$p$$$ tells us where we want to insert (into which list we should append the element).

With this new workaround, we can use segment-tree with operation: range-add, get index of maximum.

I don't want to claim that my solution doesn't have any bugs, but here it is: 133050747

ORZ, thanks so much.

Let's define $$$p_i$$$ as the optimal position to insert $$$b_i$$$ (after sorting). To prove the conclusion, we just prove that for any $$$i<j$$$, $$$p_i>p_j$$$ doesn't hold. All we have to consider is the interval between $$$p_j$$$ and $$$p_i$$$: let's assume there are $$$r_i$$$ elements less than $$$b_i$$$ and $$$s_i$$$ elements greater than $$$b_i$$$ in that interval. Since the position is optimal for $$$b_i$$$, we have $$$r_i \ge s_i$$$, otherwise $$$p_j$$$ would be a better position for $$$b_i$$$. Since $$$b_i<b_j$$$, $$$r_j \ge s_j$$$ holds too. Therefore, $$$p_i$$$ would be a better position for $$$b_j$$$, so there is a contradiction.

What conclusion do you prove? If you only prove that $$$p_i$$$ increasing, this won't be enough for my solution. I insert values in first best place: 1) there could be plenty of those, 2) I take into account inserted. Also, I think you can't say something would be better so easily, and you didn't mention case $$$b_i = b_j$$$.

In fact, you don't have to construct a solution with $$$p_i$$$ increasing. I proved that $$$p_i$$$ is increasing, even if only in one case, but that's enough. It turns out that if we insert values at the first best place as $$$p_i$$$, the answer will be optimal. So even if we insert the elements at another first best place, the answer won't change.

I don't see the connection with your proof. What if I pick any increasing $$$p_i$$$? You don't use anything related to the idea of picking the first best place at the current state.

Why would the solution that picks the first best place not be optimal? Because there might be a smaller $$$b_i$$$ inserted after a bigger $$$b_j$$$, producing a cost $$$(b_j,b_i)$$$. But by my first conclusion, we can move $$$b_i$$$ to $$$p_j$$$, so the solution that picks the first best place is optimal.

It's not the only case. It may be that $$$b_i$$$ is inserted before $$$b_j$$$ but at the wrong place. I'll stop replying unless it feels necessary.

Div2B: "There is even a better lower bound: it can be shown that after at most log(n) steps a becomes repetitive". Is there a proof of this fact somewhere?

Can anyone pls share their thought process for solving problem D div2 I am not able to understand the editorial's solution.

Firstly, let's notice that if we were standing at some position, there is no need to go there again; this is the main idea. Let us assume that we are standing at depth 0 and want to go to depth n; for that, just reverse arrays a and b. It is not necessary, but it was more comfortable for me. We will do some kind of BFS: if we are at x now, we will try to go to all positions that have not been visited yet. To do this fast enough, we do the following.

We will keep a set of positions that we have not yet rested at. Suppose we are currently at position x; we are interested in all positions that we did not rest at and that are in the range [x, x + a[x]]. We can find them using our set. Suppose y is one such position; then after the rest it becomes y - b[y], and if we have not visited y - b[y], we put it into the queue. It can be seen that this works in O(n log n), because there are at most n "edges", and the log comes from the set.

P.S. here is my code
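Here is a compact sketch of this set-based BFS (my own code, not the commenter's submission; unlike the comment, it keeps the original direction, searching from depth n up to depth 0). Arrays `a` and `b` are 1-indexed with a dummy element at index 0, and `b[i] <= n - i` is assumed, as guaranteed by the problem statement:

```cpp
#include <bits/stdc++.h>
using namespace std;

// a[i] = max jump height from depth i, b[i] = slip depth after landing at i.
// Returns the minimum number of jumps from depth n to depth 0, or -1.
int frog(int n, const vector<int>& a, const vector<int>& b) {
    set<int> unseen;                       // landing depths never tried yet
    for (int i = 0; i < n; i++) unseen.insert(i);
    vector<int> dist(n + 1, -1);           // distance over depths after slipping
    queue<int> q;
    dist[n] = 0;
    q.push(n);
    while (!q.empty()) {
        int x = q.front(); q.pop();
        // every untried landing depth y in [x - a[x], x) is one "edge"
        auto it = unseen.lower_bound(x - a[x]);
        while (it != unseen.end() && *it < x) {
            int y = *it;
            it = unseen.erase(it);         // each y is tried at most once
            if (y == 0) return dist[x] + 1;
            int z = y + b[y];              // slip down after landing
            if (dist[z] == -1) {
                dist[z] = dist[x] + 1;
                q.push(z);
            }
        }
    }
    return -1;
}
```

This only returns the number of jumps; the actual problem also asks for the path, which can be recovered by additionally storing a parent for every pushed state.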

My thought process was more BFS-like than DP. If you were to do a BFS, it's easy to see that you would visit every height at most once. Now, the only part left to optimize is adding nodes into the queue.

If you were to linearly traverse through all possible jumps, the time complexity would be O(N*N). Upon closer inspection we see that you are basically trying to find some numbers in an interval. We can build a minimum range query segment tree over an array `C` where `C[i] = i + b[i]`, that is, the place you will end up after jumping to height `i` and then slipping.

Coming back to our BFS function: when we are adding nodes into the queue, we simply query the desired interval of possible jumps and check if there are any values that are not `INF`. If we find a value that is not `INF`, we add its height into the queue and set its value in the segment tree to `INF`. To clarify the `INF` part, it's basically just a mark of whether a height has been visited. Similarly, if you were to use a maximum range query, you would set the value of marked heights to `-INF`.

To get the traversal order we can keep track of each node's parent.

Here is my code if you want to take a closer look (`seggy` is my segment tree, `wher` is the aforementioned array C): https://codeforces.com/contest/1602/submission/133023221 Hope this helps. :)

Thankyou Guys :))

You can Have a look at my approach here :

Video Solutions to A,B,C and D (div 2)

Fast editorial! Thanks! I got a lot of REs on problem D but finally solved it, and fortunately the conclusion I guessed in problem E is right.

My O(n) solution to Div2D/Div1B: let `dp[i]` be the minimal number of steps from n to i. Notice that we can't just go with a for-loop from n to 0 and call it a day, because some dp[x] states can only be calculated from dp[y] (y < x) states.

My idea was to propagate the DP states: from the current dp[x] state, we try to calculate all states reachable from x.

For now, we will use a heap/priority_queue to maintain, in increasing order of dp value, all the states that haven't been propagated yet. Notice that from position x we reach a subarray of positions x-a[x]...x before slipping. Because of our priority_queue, if a part of this x-a[x]...x subarray was already visited, then by propagating dp[x] you get, in the best-case scenario, the same cost as before (in the already visited part of x-a[x]...x). To be more precise, the already visited part of x-a[x]...x forms a suffix of this subarray. So we can maintain a variable `lim`, which stores the leftmost visited position in the propagation. This is one of the key parts of the O(n) solution: you have to visit each dp state only once.

Then I realised that we can replace the heap with a queue. My gut feeling was that, sort of like in a BFS, the first time you propagate to a certain dp state creates the optimal cost for that state.

AC submission

I think the O(n) code is easier to write than the O(n log n) code.

Using BFS, push each height into the queue once, which is O(n).

And it's best to reach this height for the first time.

here is the code

Hey man, your solution was pretty cool, thanks for sharing it. I have understood most of the code, but I am not able to understand the use of "w" or on what basis you are updating it. It would be very helpful if you could explain. Thank you.

"w" is the minimum depth you can reach

`if(y>=w)break;`

is based on "it must be better to reach this depth for the first time", because you took the least number of steps when you first arrivedSo each depth will only join the queue once

The update is a normal BFS update

But there will also be some unvisited depths; how can we directly disregard those? I believe you are saying: if the furthest I can go in the first iteration is from depth 6 to depth 2, without considering the slippings, then in the next iteration all depths less than 2 cannot give an optimal answer... but there may be some depths between 2 and 6 which were not visited due to slippage???

Yes, there will be some depths not being visited, but those are the depths that do not contribute to the best answer.

When we jump from depth 6 to depth 2, we will search all the depths in [2,6], which represent all the depths that the frog can currently jump to, and then let it slide. Through these sliding depths, we can deduce the next situation.

If the minimum depth to which these sliding depths can jump next time is greater than `w`, it means that there is no need to jump at all, so we ignore it.

Ohkk, got it. Thank you so much!

Consider that this problem can be transferred into a shortest path problem which all edges' weights are 1.

So using BFS, I think we can calculate the answer directly, without using dp.

My submission

In retrospect, my solution is indeed a bit overcomplicated, as it isn't so much a DP solution as a BFS. Though the DP idea definitely helped me find the solution.

Problem B (Div. 1) is so similar to this.

For div1B/div2D, I understand that we want to find all j's that can land on u, but how is this achieved? I don't get the segment tree thing mentioned in the editorial.

When we are looking at possible landing spots, it's always a continuous interval, which leads us to the thought of using a segment tree. For simplicity's sake let's say that we are using min range queries. Initially the segment tree will have all elements set to something that is not `INF`. When we visit a height we set it to `INF` in the segment tree. When we want to look for possible landing spots we query the segment tree over the desired interval, that being from `i` to `i - a[i]`, where `i` is our current height. It's easy to see that if there is an unvisited value, the query will return it, as it will definitely be smaller than `INF`. If our query returns `INF`, then we know that we cannot make a jump, as every value in the desired interval has already been visited. Hope this helps.

Err... Sorry for that, but the editorial for Div2B shows that a tight upper limit is $$$\log(n)$$$, and I think a more accurate limit is $$$\log(n) + 1$$$? Maybe I am wrong, but the data below shows $$$\log(n)$$$ has some little problems:

In this case we need $$$2$$$ rounds but $$$\log(n) = 1$$$.

Probably, the editorial means the upper limit is around $$$\log(n)$$$ but I didn't get the point. Sorry for my poor English again.

Usually when people talk about time complexities or bounds, it doesn't mean exactly N, NlogN or logN. It is approximate. So if it is written logN, it may be logN+1, 5*logN, etc…

Thanks a lot. I'm weak in English and have read few English books; some implied meanings are hard for me to get.

maybe it means `O(log n)`

Div2 D / Div1 B: a linear-time, intuitive solution. (You can check the code out in my submissions.) First, let's say that the max jump from the nth level is x. Then all nodes from n-x to n can be reached in one jump (without considering slipping). Now see where you can slip back to from these x nodes (only one node for each). While looking at the nodes we can slip back to, calculate the highest node you can jump to from these slip-back nodes. Let's say the highest such node is p.

If ever p <= x, then return -1 (you are stuck after this). Else continue to iterate on the nodes from x+1 to p, to see where you can slip back to. Keep going until we reach 0. It has linear time complexity.

Could anyone please give me some hints why BFS solution gets TLE for div2-D (div1-B)?

The complexity is still O(n*log(n)) where E=n*log(n)

Can you post the link to your submission that got you TLE?

This one, Thanks

So basically, for each iteration of the while loop, the for loop runs for a[i] iterations, thus making the complexity O(N x max(A)) in the worst case, which gives you a TLE. Instead, try running the for loop in reverse, i.e. from a[i] to 1 and add a break statement whenever you find that a depth is visited.

The difference is this: suppose you are at depth 'i' now; you would visit i to i - a[i]. Now suppose that after a few iterations of the while loop you are at depth 'j', where i > j >= i - a[i]. Your for loop runs from 1 to a[j], i.e. checks positions from j - 1 to j - a[j]. Recall that in the previous iteration, when we were at depth 'i', we already visited i - a[i] to i, which includes these positions. We can avoid this unnecessary revisit by running the for loop in reverse, so that it iterates only over unvisited depths.

Below is a link to my solution: Solution

i use dp + segment tree lmao =))

nice round for fast judging and tutorial but sad me for the rating

I'm having a hard time understanding the solution of Div1C/Div2E... I just don't get why we can find the optimal p_mid only considering the inversions of b_mid, as choosing p_mid influences choosing the other p values. Does someone have a clear explanation?

I guess it relates to a DP optimization technique (you can see it here: https://codeforces.com/blog/entry/8219)

We want to prove $$$best[i] <= best[i + 1]$$$ where $$$best[i]$$$ is the best position to put $$$b[i]$$$.

But how can we prove it? I don't know...

Oh, I realized something just a few minutes ago

I guess you did the same as me: observe that $$$b$$$ should be sorted, and then find a way to put each $$$b[i]$$$ into $$$a$$$. But we actually skip a few things.

Let $$$best[i]$$$ be the best position for $$$b[i]$$$ in array a, which means: if we put $$$b[i]$$$ at $$$best[i]$$$, the number of inversions of $$$b[i]$$$ with $$$a$$$ will be minimal (notice that we do not consider the inversions within array $$$b$$$).

Assume that we find a pair $$$i$$$ and $$$j$$$ such that $$$b[i] < b[j]$$$ and $$$best[i] > best[j]$$$; it will be a contradiction. Why? Because why not move $$$best[j]$$$ to $$$best[i]$$$? Yup, by answering this question, we get what we want. (Why not write some examples on paper to see it?)

The important fact is: $$$best[i] \le best[i + 1]$$$ (for sorted $$$b$$$).

Amazing.. Thanks for your help!!

A problem in our school's private contest is similar to 1D/2F, and even stronger: the difference is that every climber has a weight and you need to maximize the sum of weights. There is a quite simple solution (but I wrote another one during the CF contest).

An important conclusion is that there exists an optimal answer with $$$a_i+b_i$$$ increasing. It can be proved by case analysis; then any data structure can work.

Can you prove this conclusion about the common solution to 1D? Thanks.

Sorry I can't..

It seems that the solution you showed is based on the classical "interval covering problem"? Each alpinist can be seen as an interval $$$[s_i,\max(s_i,a_i)]$$$ (it is not guaranteed that $$$a_i\ge s_i$$$), and our goal is to select as many disjoint intervals as we can. We can solve that with a simple greedy algorithm.

(Just my opinion... maybe incorrect)
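For reference, the classical greedy this reduction points to can be sketched as follows (my own code; `maxDisjoint` is a made-up name, and whether touching endpoints count as a conflict depends on the exact reduction, here intervals must be strictly disjoint):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Select the maximum number of pairwise disjoint intervals:
// sort by right endpoint, take an interval iff it starts after the
// right endpoint of the last taken one.
int maxDisjoint(vector<pair<long long, long long>> iv) {  // (l, r) pairs
    sort(iv.begin(), iv.end(), [](const pair<long long, long long>& p,
                                  const pair<long long, long long>& q) {
        return p.second < q.second;
    });
    long long lastR = LLONG_MIN;
    int taken = 0;
    for (auto& [l, r] : iv) {
        if (l > lastR) {  // no overlap with the previously taken interval
            taken++;
            lastR = r;
        }
    }
    return taken;
}
```

The exchange argument is the standard one: among all intervals that can still be taken, the one with the smallest right endpoint never hurts.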

This is also how I solved it. Sort by (a_i + s_i) increasing and break ties by s_i. May I ask if the solution to the weighted version is any different?

Perhaps you need a data structure to optimize the DP.

Another easy solution for problem E is:

There are two cases. The first case is $$$K > \sqrt{N}$$$:

In this case it's obvious that any student will buy at most $$$\sqrt{N}$$$ tickets, so the answer is $$$\sum_{i=l}^{n} \min(A_l,\dots,A_i)$$$ over indices $$$i \equiv l\ (\textrm{mod}\ K)$$$.

This can be done easily by using a sparse table to get min queries in $$$O(1)$$$.

Second case: for each $$$m \le K$$$ do the following algorithm.

We are gonna process the queries offline and iterate over the elements in reverse, building a monotonic stack. Having element $$$j$$$ in this stack means that $$$\min(A_l,\dots,A_i) = A_{stk_j}$$$ for all $$$stk_j \leq i < stk_{j+1}$$$.

I can easily calculate the contribution of element $$$i$$$ to the answer by counting the number of indices $$$\equiv m\ (\textrm{mod}\ K)$$$ that it covers.

The rest is just prefix sums on the contributions and binary search to handle the stack elements at the end of the range.
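A minimal sparse-table sketch for the O(1) range-min queries used in the $$$K > \sqrt{N}$$$ case above (standard technique, my own code; `SparseMin` is a made-up name):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Sparse table: O(n log n) build, O(1) min over any inclusive range [l, r].
struct SparseMin {
    vector<vector<int>> t;  // t[j][i] = min of a[i .. i + 2^j - 1]
    SparseMin(const vector<int>& a) {
        int n = a.size(), K = 32 - __builtin_clz(max(n, 1));
        t.assign(K, a);  // level 0 is the array itself
        for (int j = 1; j < K; j++)
            for (int i = 0; i + (1 << j) <= n; i++)
                t[j][i] = min(t[j - 1][i], t[j - 1][i + (1 << (j - 1))]);
    }
    int query(int l, int r) {  // min over [l, r], requires l <= r
        int j = 31 - __builtin_clz(r - l + 1);
        return min(t[j][l], t[j][r - (1 << j) + 1]);
    }
};
```

The two overlapping power-of-two windows cover [l, r] exactly, which is why min (an idempotent operation) allows O(1) queries.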

I have an easy-to-understand greedy method for Div1 D.

In fact, we can divide all the alpinists into two parts: `a <= s` and `a > s`.

For the first part, we can sort it in the order of $$$s$$$.

All the alpinists whose $$$a$$$ is greater than $$$d$$$ can be chosen in this order.

When we consider the `a > s` part, we find that the alpinists we choose form some disjoint segments, which means that if we choose one from a segment, we may have to give up another segment. We can insert those segments into the first part in the order of $$$a$$$.

If we can insert a segment without any conflict, we insert it.

Otherwise, we can prove that the segment affects at least one alpinist in the first part, but it can't give us a bigger contribution.

So we can solve the problem with two pointers.

The new algorithm has time complexity $$$O(n \log n)$$$.

Sorry for poor English.

If you want to see the code: 133058942

Maybe it's a little similar to the tutorial. (C2020jzm told me.)

Can someone explain to me the second question and its solution?

"This sum differs by a constant from": may someone please explain this part of the solve function for problem div2E / div1C (Optimal Insertion)? I don't really get it, so I can't understand how the first sum reduces to the second one.

Can anyone explain div2 C?

Consider an operation on k elements: if all of these elements have the i-th bit equal to 1, their i-th bits are set to 0. So in each bit position we delete either 0 or k ones. Let cnt(i) be the number of array elements with the i-th bit set. The solution is to just iterate g from 1 to n and output g if cnt(i) is divisible by g for all $$$0 \le i \le 29$$$.
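A sketch of that solution (my own code; `validK` is a hypothetical name, and I assume values fit in 30 bits as in the constraints). It counts set bits per position and then tests every candidate k from 1 to n:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Returns all k in [1, n] such that every bit-position count is a multiple
// of k, i.e. all k for which the array can be reduced to zeros.
vector<int> validK(const vector<int>& a) {
    int n = a.size();
    vector<long long> cnt(30, 0);
    for (int x : a)
        for (int i = 0; i < 30; i++)
            if (x >> i & 1) cnt[i]++;   // ones in bit position i
    vector<int> res;
    for (int k = 1; k <= n; k++) {
        bool ok = true;
        for (int i = 0; i < 30; i++)
            if (cnt[i] % k != 0) ok = false;
        if (ok) res.push_back(k);
    }
    return res;
}
```

For example, for `a = {1, 2, 3}` the counts are cnt(0) = 2 and cnt(1) = 2, so only k = 1 and k = 2 work.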

Can anyone please point out why this 133072007 gets TLE but this one 133078047 barely makes it, although both have almost the same logic?

Language: Python

Problem: 1601A - Array Elimination

edit: formatting

There's a typo in E: $$$\min(b_{l+k}, b_{l+2k}+\dots+b_{l+tk})$$$ should be $$$\min(b_{l+k}, b_{l+2k}, \dots, b_{l+tk})$$$.

In the first question, why can't we give the answer a = empty string, b = given string, since the empty string is a subsequence of any string?

Read again: the question clearly says that both should be non-empty strings, so that's why we can't.

thanx bro

Please send solution code for 1602B - Divine Array.

You can look at others' solutions. Here's mine: https://codeforces.com/contest/1602/submission/133140715

Thanks Bro.

You're Welcome.

A HUGE thanks to KiKoS. Problem 1601B - Frog Traveler is totally adorable. The best thing is that I don't have to guard every border violation with an IF, thanks to the perfect input data in this task. Ohhhh my, it's absolutely perfect!

Isn't Div.1 C supposed to work in O(n log^2 n)? I have a DP solution with divide and conquer and I get TLE.

Shouldn't the complexity of 1601A - Array Elimination be $$$O(n \log C + \sqrt{C})$$$? Or can you find all divisors faster than $$$O(\sqrt{C})$$$ without overly complicated number-theory algorithms?

Just iterate over k and check whether cnt[i] % k == 0, where cnt[i] is the count of numbers in which the i-th bit is on.

Oh, indeed, I'm dumb. We only need the divisors of a value which is not greater than n.

Can anyone please advise why this submission for Div2D gets WA6? I tried to implement what is written in the editorial, and was hoping to get a TL so that I could do further optimisations (just wanted to make sure that I got the idea right). Thank you!

big thanks for the round! great problems, i liked Div2E the most

Can you tell me your solution for E please? I can't understand the one described in the editorial.

What would be the solution for div2C/div1A if the operation was OR or XOR instead of AND?

For Div1D I sorted by max(s_i, a_i); if two indices i and j are such that max(s[i], a[i]) == max(s[j], a[j]), then the one with lower skill goes first. Then I just greedily try to climb in the sorted order. Can anyone prove/disprove my approach? I got AC but can't actually prove my solution.

Bonus $$$O(N)$$$ solution for Div2 D/Div1 B here

Did anyone use the method in the editorial to solve Div.1 C? Is it OK to give an implementation? I got WA on #8. Thanks.

For 1C/2E, in the editorial, could you tell me what "This sum differs by a constant" means in the "solve" section? I can't find the constant... please.

I have another solution for 1602D - Frog Traveler which works in $$$O(N \log N)$$$. I created a graph in the form of a segment tree and ran a 0/1 BFS on it. Have a look at it here: 133096348.

I tried implementing div2 C / div1 A, but it gives a runtime error. Can someone please suggest where I went wrong? Your text to link here...

Your divisors array is only of size 31, but valu is in the range 0 <= valu <= N, so you are trying to access an element out of the bounds of 31. You should calculate the answer only for valu, not all possible values of "valu".

Is there a video tutorial for div2D? If yes, please share the link. If not, ak2006, can you please make a video tutorial for div2D? I am not able to understand anything in this one. Help please, thanks in advance!

Is there any proof of the observation that the array remains constant after n operations? (Div2 B)

Note that if array $$$a$$$ remains constant after step $$$i$$$, it will remain constant after any other step $$$j > i$$$. Now note that if array changes after step $$$i$$$ then at least two groups of equal numbers merged in one. Since there are no more than $$$n$$$ groups initially, there won't be more than $$$n - 1$$$ merges. (But $$$+1$$$ step for the first step, since it's the only step when $$$a_i$$$ may differ from the size of its group.)

I got a wrong result from __lg in C++, what is going on? When I calculate it by myself, it works right. Can somebody explain what happened? Here 133143224 is my code for 1E. A runtime error occurs, but when I replaced it with the lower 2 lines, I got accepted.

Does the linear time solution to D involve monotonic queue/stack?

No, it involves noticing that if you made a jump to some height, you should not jump to that height again. You go through every jump from the bottom and you push unique places where you slid to the queue. Now, when you process the next level, you should not repeat those jumps which you already did (from the bottom), you should only jump to the lower heights if possible and again, push to the queue unique places after you slid. So you record the lowest height you jumped so far and make all the jumps that go past that height.

When you start jumping from the bottom, it would be a continuous segment. When you process the next place, it is impossible to create a new disjoint continuous segment (you start within the segment, which was covered by the initial jumps and you make another continuous segment of jumps). So the new jumps would extend that continuous segment and so on.

The amortized complexity would be O(n) as the total number of jumps would be n.
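The argument above can be turned into a short sketch (my own code, not the commenter's; `w` is the lowest landing depth tried by any previous jump, so every depth is scanned at most once and the total work is O(n)). Arrays are 1-indexed with a dummy index 0, and `b[i] <= n - i` is assumed, as in the problem statement:

```cpp
#include <bits/stdc++.h>
using namespace std;

// a[i] = max jump height from depth i, b[i] = slip depth after landing at i.
// Returns the minimum number of jumps from depth n to depth 0, or -1.
int frogLinear(int n, const vector<int>& a, const vector<int>& b) {
    vector<int> dist(n + 1, -1);
    queue<int> q;
    dist[n] = 0;
    q.push(n);
    int w = n;  // lowest landing depth tried so far; [w, n] is already covered
    while (!q.empty()) {
        int x = q.front(); q.pop();
        // only landing depths below w are new for this jump
        for (int y = max(0, x - a[x]); y < w; y++) {
            if (y == 0) return dist[x] + 1;  // surfaced
            int z = y + b[y];                // position after slipping
            if (dist[z] == -1) {
                dist[z] = dist[x] + 1;
                q.push(z);
            }
        }
        w = min(w, max(0, x - a[x]));
    }
    return -1;
}
```

The amortization is exactly the continuous-segment observation above: `w` only ever moves down, so each depth enters the inner loop at most once.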

Wow that's beautifully explained! Thanks!

Sorry, I can't understand div2 D's tutorial. How do we build the minimum segment tree, and how do we use it to find all u for every j?

Can anyone explain how to solve C (Optimal Insertion) using divide and conquer, or share some code?

https://codeforces.cc/contest/1601/submission/133157528

We can apply divide and conquer optimization because $$$p\left[ i \right] \le p\left[ {i + 1} \right]$$$ for every i. (Otherwise we can't)

Got it. Thank you bro.

my simple O(n) solution for div2 D is submission

In problem Frog Traveler, how does the editorial traverse through all j's at most once?

For any particular value of $$$j-a_j$$$ there can be values of $$$j$$$ such that $$$j-a_j\leq j\lt u$$$.

So, we can't simply use all indices with a fixed value of $$$j-a_j$$$.

I tried maintaining a map such that $$$map[x]$$$ has all indices $$$j$$$ such that $$$j-a_j=x$$$ sorted in increasing order. But it leads to TLE. Any solution similar to editorial might help.

Also there must be a correction in the editorial:

$$$dp_i = 1+min(dp_j+b_j)$$$ must be replaced with $$$dp_i=1+min(dp_{j+b_j})$$$

There are two facts:

First, the described solution uses a segment tree which keeps, for each index $$$j$$$, the value $$$j - a_j$$$. For $$$u$$$ you ask range minimums only on the suffix $$$[u, n]$$$. If the minimum $$$m = j - a_j \le u$$$, you make a move at such $$$j$$$ and replace $$$j - a_j$$$ with $$$\mathit{INF}$$$. That's why you take each $$$j$$$ exactly once.

Second (more interesting) fact: actually, you can ignore $$$j$$$, i.e. you can always take non-visited $$$j$$$ with minimum $$$j - a_j$$$. So you can keep segments $$$[j - a_j, j]$$$ either in set, or just sorted in vector. It can be proven that making prohibited steps (from $$$u$$$ to $$$j < u$$$) is never optimal.

But $$$j\lt u$$$ would mean $$$u$$$ can't be reached from $$$j$$$ in one step. So, updating $$$dp[j]$$$ as $$$d+1$$$ would be incorrect.

For C: Optimal Insertion, am I missing something, or does the editorial not explain how we can be sure that the greedy choice of p_mid will result in an optimal solution?

The greedy choice of p_mid is optimal as you covered the whole available range. Then you just limit those ranges based on the fact that p[i] <= p[i+1].

Hm, I still don't totally understand. Perhaps more specifically, why is it not possible that there is some other solution out there that chooses a different p_mid value which gives more inversions for b_mid but makes up for it with fewer inversions in the recursive calls?

Because the ranges would be limited suboptimally. If the optimal choice is p[mid] = p_mid and instead you would choose p_mid-1, some of the values in the left call might get worse because the max value for them would be p_mid-1. However, it may turn out that it would be better if they used p_mid instead. In the right call, they would not use the p_mid-1 because the optimal value is p_mid and we know that p[mid+1] >= p[mid] = p_mid > p_mid-1.

After mulling on this for a while I think I understand. Thanks for helping me.

For future readers, here's how I convinced myself. Consider placing b_mid somewhere in the array A, particularly, the position that minimizes inversions between the elements of A and b_mid. So we have [ (elements of a), b_mid, (elements of a) ].

Now we want to place some B > b_mid. We will prove that even disregarding inversions between B and b_mid, it is never optimal to place B to the left of wherever b_mid is.

The number of inversions for b_mid is (# of elems to my left that are bigger than me) + (# of elems to my right that are smaller than me). Consider marching b_mid to the left. The former quantity may decrease and the latter quantity cannot decrease. But regardless of how much the former quantity decreases, we know that it is never "worth it" since we have found the current placement of b_mid to overall minimize inversions.

Now consider what happens when B is in place of b_mid. If (# of elems to my left that are bigger than me) decreases as we march left, then it must have also done so for b_mid, since such element X > B would also be X > b_mid since B > b_mid. But we knew it was never "worth it" for b_mid, so it will also never be "worth it" for B. And as with b_mid, the latter quantity (# elems to my right that are smaller than me) again will not decrease as we march left. Thus, the optimal placing of B must be to the right of b_mid.

Would someone please offer a proof for B? Specifically,

1) Prove that at most $$$n$$$ steps after transformation, $$$a$$$ becomes repetitive.

2) Prove that at most $$$\log n$$$ steps after transformation, $$$a$$$ becomes repetitive.

If both 1) and 2) are unknown, how do you come up with either of these 2 conjectures?

Thanks a lot.

This one helped

Can anyone help with the proof of

"Prove that at most logn steps after transformation, a becomes repetitive."

Can't KiKoS provide a sample implementation of the logic given in the editorial of Frog Traveler?

It's really a pain trying to decipher what the editorial wants to say and to even believe that it will work the right way if we try to implement it after reading.

Hi! I'm sorry you didn't like the editorial. Still, the solution (in my opinion) has two main ideas:

calculate dp using bfs:

~~~~~
queue<int> q;
q.push(0);
while (!q.empty()) {
    int x = q.front(); q.pop();
    FOR EACH i WHERE (i > x && i - a_i <= x) IS TRUE {
        for (int j : gr[i]) { // gr is the reversed "i -> i + b_i" falls
            if (dp[j] > dp[x] + 1) {
                dp[j] = dp[x] + 1;
                q.push(j); // push the newly relaxed vertex, not x
            }
        }
    }
}
~~~~~

`FOR EACH i WHERE (i > x && i - a_i <= x) IS TRUE`

can be found in $$$O(n \log n)$$$, if we find i with segtree.index_of_min([x, maxn]) where segtree is built on array i - a_i

Thanks, I got your idea.

Thanks again for replying me :).

I have found $$$O(N)$$$ solution for 1602D — Frog Traveler. I couldn't do it during the contest but after some thinking I came to this idea. Here, we will perform a BFS from the bottom of the well to all nodes it can reach above it and store the maximum height we reach as integer $$$start$$$ . Now, when we start seeing all nodes one can visit from the new source node we will only check nodes which are at height greater than $$$max(start, source$$$ $$$node)$$$ and continue to update $$$start$$$ as the height increases.

I might not be very clear about how or why this works so here's my submission for the same.

code : 133240574

Can somebody tell me the key point for 1F? I don't understand the editorial.

In problem B div 1, you can do without a segment tree by storing a `std::set` of indices that bfs has not yet visited. Then, to find the index where you can get from `i`, you need to take `j = lower_bound(i - a[i])` and check that `j` exists and `*j <= i`. Like that: https://codeforces.com/contest/1602/submission/133263356

What's the time complexity for this?

$$$O(n \log n)$$$. Each item will be removed no more than once, at a cost of $$$\log n$$$ per removal, and for each index we can make 1 extra lower_bound because bfs can visit each index only 1 time.
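The accounting argument above boils down to this erase-as-you-visit pattern on a std::set (a generic sketch with my own names, rather than the exact submission):

```cpp
#include <cassert>
#include <set>
#include <vector>
using namespace std;

// Pattern behind the O(n log n) bound: keep unvisited indices in a set and,
// for each query, repeatedly take the first index >= lo that still satisfies
// the extra check (*it <= hi here), erasing it so it is never revisited.
// Each index is erased at most once; each query wastes at most one extra
// lower_bound.
vector<int> takeRange(set<int>& unvisited, int lo, int hi) {
    vector<int> taken;
    auto it = unvisited.lower_bound(lo);
    while (it != unvisited.end() && *it <= hi) {
        taken.push_back(*it);
        it = unvisited.erase(it);  // erase returns the next iterator
    }
    return taken;
}
```

In the Frog Traveler setting, lo would be i - a[i] and hi would be i, so every landing spot is processed by the bfs exactly once.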

133409059: I think my answer is wrong but it got accepted

its divine array

Hello, it was a great contest and nice editorial. But solution for problem div1 F is not clear from the editorial. Can you please post a code solution for it? Thanks ch_egor cdkrot LHiC

It was already posted

https://codeforces.com/contest/1601/submission/133081515

Oh thanks. Didn’t see it

There is a typo in the editorial of the problem 1602D - Frog Traveler:

It should be $$$dp[i] = min(dp[j + b[j]] + 1)$$$ instead of $$$dp[i] = min(dp[j] + b[j] + 1)$$$

Please correct me if I am wrong and the editorial is correct

I think you are right. I suppose it should be $$$\min\lbrace dp_{j + b_j}\rbrace + 1$$$, too.

I think the divide and conquer method of question 1601C is very difficult to think about.

I will disagree: coming to the conclusion that p1<=p2<=...<=pm is fairly straightforward, and after that it is a well-known trick that if the optimal answer satisfies the index condition then we can use divide and conquer.

Maybe this was your first time seeing this trick, so you should remember it now. The same trick is used in the divide and conquer dp optimization, so you should definitely read about that, as that was where I saw this trick for the first time.
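In case it helps, here is a bare-bones sketch of that divide and conquer (with a deliberately naive O(n)-per-evaluation cost function, so it only illustrates the recursion, not the full fast solution; names are mine). b is assumed sorted, and solve returns the minimum number of a-versus-b inversions:

```cpp
#include <cassert>
#include <vector>
using namespace std;

// Inversions created between a and a single value x inserted before a[p].
long long cost(const vector<int>& a, int p, int x) {
    long long c = 0;
    for (int i = 0; i < p; i++) if (a[i] > x) c++;
    for (int i = p; i < (int)a.size(); i++) if (a[i] < x) c++;
    return c;
}

// Place sorted b[bl..br] into insertion positions [pl..pr]: pick the best
// position for the middle element, then recurse on both halves with the
// narrowed position ranges (valid because optimal positions are
// non-decreasing when b is sorted).
long long solve(const vector<int>& a, const vector<int>& b,
                int bl, int br, int pl, int pr) {
    if (bl > br) return 0;
    int mid = (bl + br) / 2, bestP = pl;
    long long bestC = cost(a, pl, b[mid]);
    for (int p = pl + 1; p <= pr; p++) {
        long long c = cost(a, p, b[mid]);
        if (c < bestC) { bestC = c; bestP = p; }
    }
    return bestC + solve(a, b, bl, mid - 1, pl, bestP)
                 + solve(a, b, mid + 1, br, bestP, pr);
}
```

The full answer to 1601C would be this plus the inversions already present inside a, and each level of the recursion touches each position range once, which is where the log factor comes from.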

https://codeforces.com/contest/1602/submission/134405261 can someone point out the error in my solution ?

Please correct me if I am wrong, but for div2 B why is it possible to brute force? We have a proof that there are at most log(2000) steps before the list is repetitive, there are 100000 possible queries, and 2000 elements. If we brute force we will get log(2000)*100000*2000 = 2*10^9
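That estimate assumes re-simulating per query; the trick is to simulate once, keep every intermediate array (only O(log n) of them), and answer each query in O(1). A sketch with my own helper names:

```cpp
#include <algorithm>
#include <map>
#include <vector>
using namespace std;

// Simulate all transformation steps once and keep every intermediate array.
// The process stabilizes after O(log n) steps, so this stores O(n log n)
// integers in total.
vector<vector<int>> history(vector<int> a) {
    vector<vector<int>> h = {a};
    while (true) {
        map<int, int> cnt;
        for (int x : a) cnt[x]++;
        for (int& x : a) x = cnt[x];       // replace each value by its count
        if (a == h.back()) break;          // array stopped changing
        h.push_back(a);
    }
    return h;
}

// Value at position x (0-based) after k steps, answered in O(1).
int query(const vector<vector<int>>& h, int x, long long k) {
    return h[min<long long>(k, (long long)h.size() - 1)][x];
}
```

So the work is roughly 2000 * log(2000) per test for the simulation plus 100000 lookups, far below the 2*10^9 figure.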

1601B Frog Traveler: time complexity is O(n), just regard it as a BFS, a shortest-path problem. https://codeforces.com/contest/1602/submission/136659435