Thank you for participating! Here is the editorial:

1621A - Stable Arrangement of Rooks

**Editorial**

**Solution**


1621G - Weighted Increasing Subsequences

**Editorial**

**Solution**


Hello, Fastest editorial of 2022 :)

You've achieved comedy

Also the slowest so far as of 2022

Also my favorite contest of 2022

Also

The one where tourist beats himself. Spoiler: Congrats magic man!

Funniest comment so far as of 2022

`I hate problem B`

I hate test case 1

I hate pretest 2

I hate test case 3

I have test case 4

Only B?

After trying to solve it for 1.5 hours and submitting 5 almost-correct solutions, I didn't have time to try any other problem.

sounds like a you problem.

I don't think I ever said it's -is-this-fft-'s problem or the writers' problem. I know it's my problem that I'm not good enough, but that being said, my comment was simply a reply to why the OP only disliked B (like me). So I replied, and I don't think my original comment in any way says it's anyone else's problem.

I do not understand at all what is wrong with problem B. After reading the reviews, I sat down and solved this problem in contest mode. It's great! There's a simple greedy idea in there. It is not another math puzzle. There is a place for a little coding in it.

If you have difficulties with its implementation, it rather means that you need to write more code. I'm afraid this is one of the basic skills that you can learn from CP: to accurately and quickly implement simple logic. Do this in such a way as to minimize the number of special cases.

The only minus of the problem is that it is a bit difficult for position B.

I was lucky to solve it without any debugging. And it's a nice feeling.

My code

I guess what most of us meant was that it is slightly tougher than the usual Bs!

That means sir you are using multiple codeforces accounts....:)

Do you agree that hacking during Div. 1 and Div. 2 contests became obsolete after single test cases were changed to packets of test cases? The number of test cases increased and many corner cases are included.

For example, B and F could have had a smaller number of pretests; then participants could search for incorrect solutions, and possibly that round would become no less interesting.

How about splitting CF rounds (Div. 1, Div. 2 with uneven scores) into two variants: 1. a round without hacking (full tests), or with a 30-minute hacking phase after the solving time finishes; 2. an old-school round: weaker pretests and hacking at any time (and possibly with a novelty: a progressive bounty for (un)successful hacks).

Can someone explain why my solution to B (141591638) is getting Runtime Error instead of WA (I'm not handling the case of using a single longest segment, but I don't see anything in my code that wouldn't work) ?

NVM it's because 10^5 != 10000

problems are good but im noob lol Happy new year!

Problem D was nice

How excellent problem B is!

it was just simple implementation

and i am just a simple pupil

and im just a simpleton.

Can anyone help me with B? Why do we need to maintain `(iii)` if we are already maintaining `(i)` and `(ii)`? Won't that lead to an optimal solution?

Example

1 5 10

6 10 10

1 10 18

(iii) keeps track of the last segment; otherwise we may need to take 10 + 10.

Small typo on tutorial for problem E : It can be easy done with prefix sum arrays. --> It can be easily done with prefix sum arrays.

Is there any chance you could include the model solutions for our reference? Links to submissions would suffice. Thanks

I'm curious how you found the solution for problem D? For me, I tried to analyze the problem but got nothing. But since I saw so many people pass this problem, I started to guess some random strategies and verify them manually. But this doesn't seem plausible, so I'm curious if there's some more plausible way to get the solution?

At first I thought that the solution is to find the shortest path (the one with the least snow to clean) from any of the top-left squares to any of the bottom-right ones. But there was no way to move the people along it without cleaning one of the 8 cells. Got AC but wasn't able to prove it during the competition. The fact that so many people solved the problem made me think of an easier solution; otherwise I probably wouldn't have solved it.

When I wanted to find a way to go from the top-left to the bottom-right, I noticed these 8 cells are special, because we cannot reach the bottom-right without passing through one of them. After that, I found that all friends can go to the bottom-right using just one of these cells.

For problem E, how do you compare averages : convert them to doubles or convert them to fractions (p and q) ?

Doubles have precision issues, so it's better to compare them as fractions. A general trick to do the comparison $$$a/b < c/d$$$, is to multiply both sides by $$$b \times d$$$ (assuming $$$b, d >0$$$). Then you get $$$a \times d < c \times b$$$. These numbers could be big, so use long longs.

You can even take ceil value of the division since it has to be greater than or equal to that

I use doubles and coordinate compression, so you only care about integers with maxval = O(n).

In this contest (and the previous one), I knew the solution (as far as it is covered by the editorial) to problem C after like ten minutes. And then I was not able to implement it until the end of the contest.

Now, the question is: isn't that implementation part the interesting one (at least in such a problem), the one that should be covered by the editorial?

I mean, the observation that there are cycles in a permutation is fairly trivial. I don't think that like 20k participants failed to solve it because they did not have that idea.

C wasn't too bad to implement... 141526460

Yes, I did inspect some solutions; it looks like a three-liner. Still, there are a lot of participants not able to write down those (simple-looking) lines of code.

How does one come up with that? Which abstractions are useful here, and which are not?

Usually, when you can't implement a solution, it means that you don't actually (or deeply) understand the solution you have come up with :v By the way, here is my implementation: 141527334.

I think the difficulty with typical implementation problems is more that it's easy to confuse things and make simple off-by-one errors. The solution is then to use the right abstractions in the thought process and to sort them out cleanly.

The approach "I just have to see it, it's simple!" did not work for me in this problem, and actually does not in general.

Edit: However, congrats to reaching a new color!

UwU thank you

Thanks for that implementation, I understand it now. But why are you not printing cout.flush() after printing the query?

I think because endl does the job of printing '\n' as well as flushing the output.

141634306 and 141634429 why one get accepted and the other not

Yes C was pretty easy to implement as compared to B in my opinion.

Here is a simple implementation 141573460

Could you tell me what's wrong in my solution for task B? 141567260

I tried to maintain 5 values: the leftmost number, the cost of the optimal line containing it, the rightmost number, the cost of the optimal line containing it, and a flag for whether these lines are the same.

Comments: When all the segments are available, you purchase $$$[1, 2]$$$ for a cost of 1, and $$$[2, 3]$$$ for a cost of 1.

Ah, got it. Thank you!

tourist's solution to E is so genius, even cleaner than the editorial.

Just throw everything into a segtree then forget about casework.

In problem D, it took me about 30 minutes just to prove the following case doesn't exist

:(

I didn't realize this until now xD

I am also thinking about the same case. Can you please provide your proof?

Let's look at the 4 points (1,1), (1,n), (n,1), (n,n): when the first of them is moved, one of them will go to one of the 8 points mentioned in the solution.

Say you want to take this path. This will result in

Now, you want others to follow the same path, for which you'll shift a column up or down so that the element is placed on the same row (so that the same path can be followed). This can be done on any column, either up or down.

The problem is that one element moves out of the first n*n square and takes another path (B).

So the answer becomes Cost(A) + Cost(B) (for now). It will add up even more if you do the same with other columns. But you can accomplish the same thing with only path B. So that becomes the minimum-cost path (Cost(A) + Cost(B) > Cost(B)). This way, all eight corners can be proved.

PS. Sorry if this didn't explain the problem

I only realized this in the last 2 minutes of the contest. Also couldn't implement C on time. :(

Could you PLEASE show the solution(code) for B? Have been trying to solve it for hours...

Video Tutorial B: https://www.youtube.com/watch?v=D4Jmm1Q9rF0

Thank you so much!

Can anybody show the solution for problem A in Java? I don't know why mine is wrong. I used pretty much the same logic as in the editorial.

I don't know if anything else is wrong in your code, but the first mistake I noticed is that you are taking k and n instead of n and k.

Yes... k for the size of the board and n for the rooks. Btw, thank you for the reply. I forgot to put the continue; after the 2nd if... jsbsvajnfnn... but I hope that's not the only error, or else I won't sleep tonight.

Here is the edited code: 141579198. I just deleted the k==1&&n==1 and n==1 cases, as they were redundant.

I literally got the exact solution described in the editorial for B and got WA on 2. What was B?

Can someone point out my mistake on B? 141546457

I liked C: interactive problems without binary search have become extinct nowadays. My solution is very short, and the problem involved a reasonable amount of thinking.

On the other side, D is not a great problem. I couldn't come up with the solution during the contest (or, to be precise, I came up with it, but it seemed totally wrong to me). The problem is ad hoc, and it is not very beautiful.

E was merely an implementation task. I think it's a rather good problem. The detail about $$$m \le n$$$ (not $$$m = n$$$) was unexpected and confusing, though.

How do you calculate the number of increasing subsequences that start at $$$a_i$$$ and end at $$$a_{x_{y'}}$$$ in G? For me, it was the most difficult part. Am I missing something? We have $$$O(n)$$$ different start points ($$$a_i$$$) and also $$$O(n)$$$ different end points ($$$a_{x_{y'}}$$$); how do we calculate all the numbers fast?

In G, fix the end point and calculate the answer for all the starting points that correspond to the current end point. We can just store the updates done to the Fenwick tree for the current endpoint and undo them at the end.

Ah, thanks a lot! I was missing the fact that no value (no $$$a_i$$$) can correspond to different endpoints. I thought it was somehow possible, so we would need to consider $$$O(n)$$$ values for every endpoint. But actually it is $$$O(n)$$$ in total :)

How can you think that tasks where you just need to implement a trivial idea are good, but tasks where you actually need to think are bad? Do you think CP should become a fast-typing competition?

Yes, indeed it sounds strange.

No, I don't think so. Competitive programming is solving problems and afterwards coding them. I don't like the problems that omit one or another part.

The problem 1375C - Element Extermination is a great example of creating a problem that involves quite a lot of thinking while the correct solution is so simple that you can't believe it. Well, to be objective, it is just simple. I am pretty sure that the majority of the participants who solved it just guessed the answer. I was one of them (the contest was about to end and I just made different tries). The process of solving the problem didn't give me much joy.

I haven't solved problem D from Hello 2022, but I think the problems are really similar. The only difference is that it's harder to guess the answer in the problem you were talking about, while the one-liner for 1375C - Element Extermination makes it somehow beautiful, while easier to guess. (It is my personal opinion; I am not certain what is more beautiful or more evident.)

The proportion (thinking : coding) is really skewed in such problems. You go through a complicated thought process and afterwards submit quite easy code. The problem would look much better not here on CF, but in some mathematical contest. Of course, with a proof required.

On the other side, in problem E the proportion (thinking : coding) was the opposite. The implementation part was huge, and because of that I called it an implementation task. However, I don't think it was stupid, straightforward implementation. The problem required some observations: {you can't dynamically move elements in the set and compare all of them to the corresponding values in the given array; however, here all the values move at most one position to the right or one to the left. It wasn't obvious for me, but it wasn't difficult either}. It required some data structure to achieve 'range logical AND of a boolean array' (prefix sums).

Probably it is too weak for a Global Round E. Well, it may be too standard. However, I managed to mess something up in the binary search (it was really disappointing), so it's not only about finger speed, but also about correctly implementing the solution. Here coding mattered more than thinking, but I believe the proportion was fine.

The $$$\pm 1$$$ shift observation lets you do a prefix-sum-style solution, but it was not necessary. As long as the array $$$b$$$ is split into a constant number of parts, one may apply a certain RMQ-like structure with a log overhead.

Specifically, for all segments $$$b[l..l+2^k)$$$ we can precompute the leftmost position $$$i$$$ such that $$$a_{i + j} \ge b_{l + j}$$$ for all $$$j \in [0, 2^k)$$$. Then we can compute the same position for any segment of $$$b$$$ with log overhead.

imo 141620052 is just plain implementation, underwhelming.

Very cool trick!

Alternative solution for 1621E - New School

If you fix the student to remove, you can check whether the answer is `1` with the following algorithm: the answer is `1` if there is always an available teacher.

The key observation is that the algorithm works for any order of the groups of students. So, if you want to check the answer for all the students of a group, you can insert the other groups first, then try removing each student from that group.

Now, we want to solve this problem:

This can be solved in $$$O(n \log^2 n)$$$ with D&C (similar to offline deletion). The D&C looks like

Implementation: 141541065

Is the answer to the bonus part of A: $$$\binom{n-k+1}{k}^2$$$ ?

Basically, there are $$$\binom{n-k+1}{k}$$$ ways to choose the rows containing a rook, and $$$\binom{n-k+1}{k}$$$ ways to choose the columns containing a rook, independently, such that the arrangement is stable.

Close, but there are $$$k!$$$ ways to arrange the rook given the column and rows.

Problem B goes to show many people are brick at implementation.

In Problem C, is there any specific theorem of any kind, or is it just based on observation?

"A permutation, the way it is presented here, exists as a set of disjoint cycles" -> something useful here, picked up from abstract algebra.

Ok, thanks.

I understand it, pass me!

Why can't problem D act like the following, e.g.:

To simplify, e.g., set i = 10e9.

In this case, I think friends can move by (8, 3) -> (7, 3) -> (7, 4).

Did I ignore something?

You ignored that one of the friends in the four corners must move one cell out of the upper-left quarter. There is simply no way to move them at all without doing this.

B was hell

Why does the solution to E run in time?

We check the removal of each of the 1e5 students. For each one, its group will move some positions (i to j). We recheck all groups in the range (i, j). i and j are not bounded; the distance can also be 1e5.

So how is this not O(n^2)?

`It can be easy done with prefix sum arrays.`

Here's a more complex but IMO more intuitive solution to E, which I believe was used by a lot of other participants in contest.

It's possible to assign teachers to groups if, for every value v, there are at least as many teachers of age at least v as there are groups with average age at least v. It's easy to see that this is equivalent to the original condition.

Let diff[v] be (the number of teachers with age >= v) minus (the number of groups with average age >= v). Then the condition means that for all v = 1...1e5, diff[v] >= 0; equivalently, min(diff[v]) over v = 1...1e5 is >= 0.

It's easy to see that adding a teacher of age v will increase diff[v], diff[v + 1], ..., diff[1e5] by 1, and a group with rounded-up average v will decrease diff[v], diff[v + 1], ..., diff[1e5] by 1. With this, the problem becomes a straightforward lazy segment tree problem: we add all teachers, then add all groups; when we remove a member of a group, we temporarily change its average, and then change it back. All in all, it uses about n + 4m segment tree operations, which is easily fast enough to get AC.

Nice approach! But shouldn't the condition be

`min(diff[i]) for all i from 1...1e5 >= 0`

?

Thanks for pointing that out. That was written in not the best of mental conditions.

Alternatively, instead of using a lazy segment tree, you can use an ordinary segment tree which stores the sum and the minimum suffix sum in each node.

I don't understand; can you explain your idea a bit more?

(adding a teacher/group should actually add 1/-1 to diff[1..v], not diff[v..1e5] in the original solution)

Let a[i] = diff[i] - diff[i+1]. Then adding 1/-1 to diff[1..v] corresponds to adding 1/-1 to a[v]. You can recover the diff array from the a array by noticing that diff is just the suffix sum of a. So, when you're looking for the minimum value in diff, you're looking for the minimum suffix sum of a, which a segment tree can easily handle.

Video Editorial for Problem B: Integers Shop

I use 3 pair data structures, instead of 6 variables as mentioned in the editorial, in order to reduce the length of the code and make it easier to understand.

You can do problem I in $$$O(n \log \log n)$$$ time (linear time per op)! Code: 141611373

We need 3 additional observations. First, we rephrase the copy operation into something a little simpler. Then, we complete the analysis of part 3 of the editorial to find a closed form for whether the smallest subarray starts at $$$c_l$$$ or $$$c_k$$$. Finally, we show a linear-time algorithm for finding the lexicographically smallest suffix of each prefix, based on Duval's algorithm for Lyndon factorization.

**The copy operation**

First, let's rephrase the copy operation. If, in the $$$i$$$th operation, we find the smallest substring of length at least $$$i$$$ starts at index $$$j \le n-i$$$, then $$$D_{i} = (D_{i-1}[1:n-i] + D_{i-1}[j:n])[1:n] = (D_{i-1}[1:n-i] + D_{i-1}[j:n-i] + D_{i-1}[n-i+1:n])[1:n]$$$ (here, we use inclusive substrings). In other words, we insert the string $$$D_{i-1}[j:n-i]$$$ (which is a possibly-empty suffix of $$$D_{i-1}[1:n-i]$$$) directly after the $$$n-i$$$th character, and then we truncate the result to the first $$$n$$$ characters. Then, there are a few useful observations:

From here on, let $$$A_i$$$ denote the suffix of $$$D_0[1:n-i]$$$ we inserted in the $$$i$$$th step, and let $$$B_i$$$ denote the suffix of $$$D_0[1:n-i+1]$$$ which we substituted for $$$D_{i-1}[n-i+1]$$$, so that $$$B_i = A_i + D_0[n-i+1]$$$. Part 3 of the editorial showed that $$$B_i$$$ is either the minimum suffix of $$$D_0[1:n-i+1]$$$, or the maximum number of copies of the minimum suffix. We make a few more observations:

**Closed form for which suffix**

Now, using the last observation, we can classify whether $$$B_{i+1}$$$ starts at $$$c_l$$$ (the minimum suffix of $$$D_0[1:n-i]$$$) or at $$$c_k$$$ (the maximum number of copies of the minimum suffix of $$$D_0[1:n-i]$$$):

$$$B_{i+1}$$$ starts at $$$c_l$$$ if $$$A_i$$$ is empty, and at $$$c_k$$$ otherwise. Note that $$$A_i$$$ is empty iff $$$B_i$$$ has length $$$1$$$.

The proof of this is pretty similar to the analysis in part 3 of the editorial. Let $$$S$$$ be the minimum suffix of $$$D_0[1:n-i]$$$, and let $$$E = D_{i-1}[n-i+1:n]$$$. Then, either $$$A_i$$$ is empty, or $$$A_i$$$ starts with $$$S$$$, and $$$B_{i+1} = S$$$ or $$$B_{i+1} = S^l$$$ for some $$$l$$$ (this denotes $$$S$$$ repeated $$$l$$$ times). Then, note that $$$A_i + E < S + A_i + E$$$ iff $$$S + A_i + E < S^2 + A_i + E$$$ iff $$$S^2 + A_i + E < S^3 + A_i + E$$$ and so on, so we should take $$$B_{i+1} = S$$$ iff $$$S + A_i + E < S^l + A_i + E$$$ iff $$$A_i + E < S + A_i + E$$$.

Now, because $$$A_i$$$ is optimal on the $$$i$$$th step, we must have $$$A_i + E < P + E$$$ for any other prefix $$$P$$$ of $$$D_0[1:n-i]$$$. In particular, if $$$A_i$$$ is empty, then $$$A_i + E < S + E = S + A_i + E$$$, so we should take $$$B_{i+1} = S$$$. Otherwise, $$$A_i$$$ starts with $$$S$$$, so $$$A_i + E < A_i[len(S)+1:] + E \implies S + A_i + E < A_i + E$$$, so we should take $$$B_{i+1}=S^l$$$.

Thus, we've proven that $$$B_{i+1}$$$ is the minimum suffix exactly when $$$B_{i}$$$ has length $$$1$$$.

**Fast algorithm for min suffix**

Finally, we sketch a linear-time algorithm to find the minimum suffix of each prefix of $$$D_0$$$ (and the max number of copies).

Note that the minimum suffix of a string $$$S$$$ is exactly the final Lyndon word in the Lyndon factorization of $$$S$$$. Furthermore, the maximum number of copies is the number of equal Lyndon words at the end of the factorization of $$$S$$$. (This is a pretty trivial/uninteresting statement, but it's useful terminology/framing.)

We can find both using an algorithm similar to Duval's algorithm for Lyndon factorization (emaxx writeup), which is itself similar to KMP. I'll refer to terms from the writeup, so please read that first. In essence, we build the KMP failure function to compare the string to its suffixes, but if the suffix is smaller than the whole string, then the suffix is part of a later Lyndon word, so we can cut off and output some prefixes as a maximal Lyndon word.

In this case, we need to modify it slightly. Duval's algorithm is already almost online, so we're pretty close to being able to find the Lyndon factorization of every prefix. First, we'll modify it to be a little more online and less amortized: in the case where our next character is small and we want to cut a pre-simple string $$$ww\ldots w\overline{w}$$$ down to $$$\overline{w}$$$, the algorithm typically resets the state all the way back to an empty string and reprocesses $$$\overline{w}$$$. Instead, note that $$$\overline{w}$$$ is a prefix of $$$w$$$, so we can just store our current pre-simple string in a stack and pop all the way down to $$$\overline{w}$$$. (This adds linear memory, which is probably why it's not used in normal Duval's.)

Now, to find the "tail" of the Lyndon factorization of a prefix, we need to process up to that prefix, and then append a terminator with value $$$-\infty$$$. Note that this will cause a series of cuts from $$$ww\ldots w\overline{w}$$$ to $$$\overline{w}$$$, each of which jumps back to some smaller stack size. The last cut we do is the tail of identical strings in the Lyndon factorization. Thus, we can just precompute the final cut starting at each location: it's just the final cut of the location you would jump to.

Thus, we have a linear time algorithm for finding all minimum suffixes of each prefix of $$$D_0$$$.

I think that Problem E has a serious problem: Nobody has the age of 1e5

Can someone please explain problem C in detail? I am unable to get through the editorial.

Are there any prerequisites for that question?

I have done it like this (but after the contest): we can consider the p array as a directed graph. E.g., if p = [4, 2, 1, 3], then the graph will look like this: https://ibb.co/sJZ4tM2. Now there are 2 types of edges: one is a self-loop, the other is a cycle. What I do is, for every i in [1, n], I ask for the value at that index if I don't know p[i]. For every i, I ask multiple times; if the corresponding values are equal, then it is a self-loop and p[i] = i; else I keep asking q[i] until I see the complete cycle. Say q1 = a and q2 = b; then p[q1] = q2.

here is my code: https://codeforces.com/contest/1621/submission/141614438

Thanks! With an example I was able to see that elements present in a cycle simply exchange their positions as if they are in a cycle.

Can any explanation be provided about why this happens?

Hi everyone, can anyone provide me an edge case for my code for problem C? CODE. Thank you.

My implementation has the same logic as the editorial, but I'm not able to find why I'm failing on test 2, subtest 34.

If you are looking for video solutions, HERE they are (for A-D).

What is wrong in this implementation for problem B? I tried to implement the same logic as given in the editorial but could not get it right.

```cpp
void solve() {

}

int main() { int t = 1; cin >> t; while (t--) solve(); }
```

Here's another (more systematic) way to arrive at what you want to compute in G.

Let $$$X_i$$$ be the indicator variable such that $$$X_i = 1$$$ if and only if $$$i$$$ (by index) is included in an increasing sequence and there exists $$$x$$$ such that $$$a_x > a_i$$$ and $$$x$$$ is past the end of the sequence. If $$$tot$$$ is the number of total increasing sequences of the array, we are interested in computing $$$tot \cdot E[X_1 + \cdots + X_n]$$$ by definition.

Let's focus on the $$$E$$$ term. By linearity, this collapses to $$$E[X_1] + \cdots + E[X_n]$$$. Note since $$$X_i$$$ are all indicator variables, $$$E[X_i] = P(X_i)$$$.

Let $$$lst_i$$$ be the largest index such that $$$a_{lst_i} > a_i$$$ (similar to editorial). Then,

If $$$tot_i$$$ denotes the total number of sequences including $$$a_i$$$, and $$$tot_{il}$$$ is the number of sequences including $$$i$$$ and $$$lst_i$$$, then this value is just

Proceed as mentioned in the editorial to find $$$tot_i$$$ and $$$tot_{il}$$$, and don't forget to multiply the entire sum by $$$tot$$$.

Here's my implementation (a bit messy).

This idea of assigning indicator variables to compute what we want works on a lot of mathy problems like these, especially on AtCoder. Also, excuse the wonky LaTeX (it was being annoying when typing this up).

~~The proof in $$$D$$$ may not be very convincing, as we can still move the columns $$$[1, N]$$$ one by one through the $$$2^{nd}$$$ column, for example. Although this will not complete the solution, it does not pass through the mentioned $$$8$$$ points in the first operations.~~

Another proof that we must always pass through at least one of the mentioned $$$8$$$ points:

For the points $$$(n+1, n+1)$$$, $$$(n+1, 2n)$$$, $$$(2n, n+1)$$$, and $$$(2n, 2n)$$$ to be filled, they have to be filled from the bottom-right corner itself, as any filling from the other $$$3$$$ corners would pass through one of the mentioned $$$8$$$ points. Now let's try to fill one of the $$$4$$$ points: we will notice that any shifting in the bottom-right corner to fill one of those $$$4$$$ points will result in emptying another one of them, and this just goes on indefinitely.

EDIT: It seems that the editorial's proof is complete as well. We can never make a first move with a row/column containing one of the $$$4$$$ points $$$(1, 1)$$$, $$$(1, n)$$$, $$$(n, 1)$$$, or $$$(n, n)$$$ without moving another one of them to one of the $$$8$$$ mentioned points.

Can anybody please tell me what's wrong with my solution for problem C? :( code-attempt-wa-testcase2

Can anybody help me find out where my solution to B (using an ordered set) is failing on test 2? I tried many randomly generated small test cases with correct output. I wasn't able to find any failing test case. :( https://codeforces.com/contest/1621/submission/141576936

I didn't look into your code too much, but I found a testcase for which your code gives WA

Testcase:

1

4

1 2 3

4 5 2

1 5 4

1 5 3

Expected output:

3

5

4

3

Can someone please help me figure out a test case on which my code will fail for problem B. 141637071

Hey bro thanks a lot for providing them.

Some hints for people stuck at debugging problems B and C (as per the comments above).

avkonyahin

Comments: When all the segments are available, you can purchase $$$[1, 3]$$$ for a cost of 1.

practice_52

Comments: When all the segments are available, you can purchase $$$[1, 3]$$$ for a cost of 3.

pks18

Comments: When all the segments are available, you can purchase $$$[1, 3]$$$ for a cost of 3.

Abhishek357

Comments: When all the segments are available, you can purchase $$$[1, 3]$$$ for a cost of 3.

Mysticode

Comments: When all the segments are available, you purchase $$$[1, 2]$$$ for a cost of 1, and $$$[2, 3]$$$ for a cost of 1.

_s_h_n_

Comments: When all the segments are available, you can purchase $$$[1, 3]$$$ for a cost of 3.

AnestheticCoder

Comments: When all the segments are available, you can purchase $$$[1, 2]$$$ for a cost of 1.

Blinding_Lights

Participant-Jury Interaction:

Jury printed the number of test cases as: 1

Jury is processing testcase: 1

Jury printed the permutation length as: 5

Jury picked the initial permutation as: 1 2 3 5 4

Participant asked for the element at index 1 Jury responded with 1 The permutation Q has now become: 1 2 3 5 4

Participant asked for the element at index 1 Jury responded with 1 The permutation Q has now become: 1 2 3 4 5

Participant asked for the element at index 1 Jury responded with 1 The permutation Q has now become: 1 2 3 5 4

Participant asked for the element at index 2 Jury responded with 2 The permutation Q has now become: 1 2 3 4 5

Participant asked for the element at index 2 Jury responded with 2 The permutation Q has now become: 1 2 3 5 4

Participant asked for the element at index 2 Jury responded with 2 The permutation Q has now become: 1 2 3 4 5

Participant asked for the element at index 3 Jury responded with 3 The permutation Q has now become: 1 2 3 5 4

Participant asked for the element at index 3 Jury responded with 3 The permutation Q has now become: 1 2 3 4 5

Participant asked for the element at index 3 Jury responded with 3 The permutation Q has now become: 1 2 3 5 4

Participant asked for the element at index 4 Jury responded with 5 The permutation Q has now become: 1 2 3 4 5

Participant has exhausted their query quota

Thanks a lot for providing that testcase. I was trying for hours. Thank you! :)

Alternative solution to G:

If $$$a_{x}$$$ affects the weight of a subsequence $$$a_{i_1},\ldots,a_{i_k}$$$, then this subsequence doesn't maximize the value of $$$i_{k}$$$ across all increasing subsequences containing $$$a_{x}$$$. We can also see that the latter implies the former.

So, if we can count, for each $$$a_i$$$, the number of increasing subsequences starting at $$$a_i$$$ that maximize the position of the rightmost element, we can solve the problem. It turns out that this is possible in $$$O(n \log n)$$$ using a segment tree.

Implementation

Could you please tell me why my code for problem B (Integers Shop) is wrong? Here is the implementation: 141707198

Thanks in advance!!

[Edit: Resolved]

I have an interesting idea for problem C:

When the update $$$q_{i}'=q_{p_i}$$$ turns into $$$q_i'=q_{q_i}$$$, how do we solve the problem?

PS: It's also the mistake I made in the contest

If q is not 1, 2, 3, 4, ... initially but is equal to P, then the same idea seems to work.

Hi, I'm pretty sure I have a solution to problem E, but I'm not sure it works, and I do not want to implement it right now. Maybe someone could read the following and comment.

After sorting as in the editorial (in decreasing order), we first check if before removing anyone, it's possible to start lessons.

If yes, the only way to make the lessons impossible is to remove a student with a lower value than the average of his group, which will make the average larger and move that group to the left in the sorted array. Then either that group itself becomes larger than its teacher, or one of the shifted (shifted to the right) groups becomes larger than its corresponding teacher. Since each group can only move 1 index to the right, we will call an index *bad* if, when moved one to the right, it becomes bigger than the teacher. Suppose I remove a student and a group is moved from index 9 to index 3; the only question is: is there a *bad* index between 3 and 8? And that's a very easy thing to answer: all you have to do is save the closest *bad* index to the left of each index. If the closest *bad* index to 9 is, for example, 7, then we know there is a bad index between 3 and 8. If the closest *bad* index to the left is 2, we know there isn't.

If no, it's simpler. Look at the set of all indices violating the condition of being smaller than their teacher. If there is an index in the set such that moving it one to the left doesn't fix the condition, it's never possible. Assuming that moving each of them one index to the left fixes the condition, the only way to make the lessons possible is to move the entire set of these indices one index to the left. Note it's enough to look at the min and max of that set. Say the set includes 3, 6, 8, 11: then you just need to make sure to move the segment [3, 11] to the left, and you have fixed the condition (assuming, of course, that the moved group doesn't ruin it itself). So you just have to go over the groups 0, 1, 2 (i.e. the ones less than the minimum of the set), and see if removing a student moves the group to an index >= 11.

Of course some details are missing, but that's pretty much it. Thinking about it, it might have been faster to code it than to describe it ^_^.

Thanks for the editorial mate :)

A bonus harder version of problem F: solve it if no two consecutive operations can have the same type (instead of the same parity, so types 1 and 3 can be consecutive).

An additional restriction on the binary string s is that the first and last characters must be 1.

This can also be solved with a similar solution to the intended one, but with slightly more casework.