**Problems A(div2), B(div2)**

In problem *B* it was not recommended to use BigInteger in Java because of its slow speed.

**Problem C(div2), A(div1)**

The string *s*1 should be scanned greedily: for each letter of *s*2, take the closest necessary letter to the right of the last used position; if there is no necessary letter to the right of the last used position, the search should be restarted from the beginning of *s*1 and the answer increased by one. But this brute-force solution gets TL, since its complexity is *O*(*Ans* * |*s*1|).

This solution can be optimized in the following way. For every position in *s*1, let's precalculate, for each letter of the alphabet, the position of its closest occurrence to the right. It can be done by moving from right to left in *s*1 and remembering the last seen position of every symbol. This solution has complexity *O*(|*s*1| * *K* + |*s*2|), where *K* is the size of the alphabet.
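The precalculation and the greedy pass can be sketched as follows (a Python sketch, assuming a lowercase Latin alphabet; it returns -1 when some letter of *s*2 never occurs in *s*1):

```python
def min_passes(s1: str, s2: str) -> int:
    n = len(s1)
    K = 26
    # nxt[i][c] = smallest index j >= i with s1[j] == chr(c + ord('a')), or n if none
    nxt = [[n] * K for _ in range(n + 1)]
    for i in range(n - 1, -1, -1):
        nxt[i] = nxt[i + 1][:]              # copy the row for position i + 1
        nxt[i][ord(s1[i]) - ord('a')] = i
    passes, pos = 1, 0                      # pos = next unread index in s1
    for ch in s2:
        c = ord(ch) - ord('a')
        if nxt[0][c] == n:                  # letter absent from s1 entirely
            return -1
        if nxt[pos][c] == n:                # wrap around: start a new pass
            passes += 1
            pos = 0
        pos = nxt[pos][c] + 1
    return passes
```

Building `nxt` costs O(|*s*1| * *K*) and each letter of *s*2 is then answered in O(1), matching the stated complexity.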

**Problem D(div2), B(div1)**

There were a lot of different solutions, but I will describe the author's solution. Let's precalculate the minimum on every suffix [*i*, *n*]. It can be done in *O*(*n*) time, moving from right to left. Define the minimum on [*i*, *n*] as *Min*_{i}. Obviously, *Min*_{i} <= *Min*_{i + 1}. Now for every position *i*, using binary search, let's find *j* (*j* > *i*) such that *Min*_{j} < *a*_{i} and *Min*_{j + 1} >= *a*_{i}. Beyond such *j* there are no walruses younger than walrus *i*: it is obvious because *Min*_{j} is the minimum on [*j*, *n*]. If there is no such *j*, then print -1, else print *j* - *i* - 1.
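The suffix-minimum plus binary-search idea can be sketched like this (a Python sketch; the array is 0-based and the answer for each walrus is *j* - *i* - 1, or -1 if no younger walrus stands ahead). Since the suffix minima are non-decreasing, `bisect_left` finds the boundary directly:

```python
import bisect

def displeasure(a):
    # suffmin[i] = min(a[i:]) -- non-decreasing in i, computed right to left
    n = len(a)
    suffmin = [0] * n
    cur = float('inf')
    for i in range(n - 1, -1, -1):
        cur = min(cur, a[i])
        suffmin[i] = cur
    res = []
    for i in range(n):
        # largest j with suffmin[j] < a[i]: bisect_left gives the first
        # index with suffmin >= a[i], so the answer is one position left
        j = bisect.bisect_left(suffmin, a[i]) - 1
        res.append(j - i - 1 if j > i else -1)
    return res
```

On the sample array discussed in the comments below, `displeasure([10, 8, 5, 3, 50, 45])` gives `[2, 1, 0, -1, 0, -1]`.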

**Problem E(div2), C(div1)**

We will count the number of ski bases including the base consisting of the empty subset of edges (before printing, just subtract one). In the beginning the number of bases is equal to 1. If an added edge connects vertices in the same connected component, then the result should be multiplied by 2; otherwise do nothing. You should use the DSU (disjoint set union) data structure to find which connected components vertices belong to and to unite them.
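The counting loop can be sketched with a tiny DSU (a Python sketch; the modulus 10^9 + 9 here is an assumption about the original statement, adjust it to the actual one):

```python
class DSU:
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

def ski_bases(n, edges, mod=10**9 + 9):
    """After each added edge, report the current number of non-empty ski bases."""
    dsu = DSU(n)
    ans = 1                                  # counts the empty subset as well
    out = []
    for a, b in edges:                       # vertices are 0-based here
        ra, rb = dsu.find(a), dsu.find(b)
        if ra == rb:                         # edge closes a cycle: doubles the count
            ans = ans * 2 % mod
        else:
            dsu.parent[ra] = rb
        out.append((ans - 1) % mod)          # subtract the empty subset
    return out
```

For example, `ski_bases(3, [(0, 1), (1, 2), (0, 1), (0, 2)])` returns `[0, 0, 1, 3]`: the third and fourth edges each close a cycle and double the count.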

Why is it correct?

To prove it we will use the incidence matrix *I*: its rows are edges and its columns are vertices. Let's define the *xor* of two rows: the *xor* of rows *a* and *b* is the row *c* such that *c*_{i} = *a*_{i} xor *b*_{i}. Notice that if the *xor* of some subset of rows is equal to the zero row, then this subset forms a ski base. It is correct because the degree of every vertex within the subset is even, so we can form an Euler cycle in every connected component. The answer is 2^{*m* - *rank*(*I*)}.

Why is it correct? Let's write, to the right of each row, the set of edge numbers that currently make up this row. While finding the matrix rank using Gaussian elimination with the *xor* operation, we also *xor* the subsets written to the right of the rows. In the end, the subsets of edges written to the right of the zero rows form a basis of the linear space of ski bases. That's why we can take any subset of these basis vectors and make up a new ski base. The number of such subsets is equal to 2^{k} = 2^{*m* - *rank*(*I*)}, where *k* is the number of zero rows.

The last thing we should notice is that the added row is linearly dependent on the previous ones if and only if there already exists a path between the vertices *a* and *b* (*a* and *b* are the endpoints of the added edge).
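The rank argument is cheap to check directly if rows of the incidence matrix are stored as integer bitmasks (a Python sketch illustrating the 2^{m - rank(I)} formula; in practice the DSU solution is what you would submit):

```python
def gf2_rank(rows):
    """Rank over GF(2); each row is an integer bitmask (bit v = vertex v)."""
    basis = {}                           # highest set bit -> reduced row
    for r in rows:
        while r:
            h = r.bit_length() - 1
            if h not in basis:
                basis[h] = r
                break
            r ^= basis[h]                # eliminate the leading bit
    return len(basis)

def count_ski_bases(edges):
    """Number of non-empty edge subsets in which every vertex degree is even."""
    rows = [(1 << a) | (1 << b) for a, b in edges]
    return 2 ** (len(edges) - gf2_rank(rows)) - 1
```

A triangle has exactly one such subset (the whole cycle): `count_ski_bases([(0, 1), (1, 2), (0, 2)])` returns `1`, which agrees with 2^{3 - 2} - 1.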

**Problem D(div1)**

Let's find all cycles in substitution:

1 2 3 4 5 ...

*a*_{1} *a*_{2} *a*_{3} *a*_{4} *a*_{5} ...

For example, in the sequence *a* = {2, 1, 4, 5, 3} there are two cycles, of lengths 2 and 3, on positions 1 2 and 3 4 5 respectively. If the length of a cycle is more than 5, then we can take 5 elements of it and reorder them in such a way that 4 elements take the places where they should be in the sorted array. In general: if we take *p* elements in a cycle of length more than *p*, then we can reorder them so that *p* - 1 elements take the places where they should be in the sorted array. If the cycle length is exactly *p*, we can sort all its elements by taking those *p* elements. Further in the analysis I will define the length of a cycle as its real length minus 1 (now I can say that if we take *p* elements, then *p* - 1 of them will take the right places).
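Extracting the cycles of the permutation is a standard visited-array walk (a Python sketch; values are 1-based as in the example above):

```python
def cycle_lengths(a):
    """Lengths of the cycles of permutation a, given with 1-based values."""
    n = len(a)
    seen = [False] * n
    lengths = []
    for i in range(n):
        if not seen[i]:
            length, j = 0, i
            while not seen[j]:           # walk the cycle starting at i
                seen[j] = True
                j = a[j] - 1             # convert 1-based value to 0-based index
                length += 1
            lengths.append(length)
    return lengths
```

On the example from the text, `cycle_lengths([2, 1, 4, 5, 3])` yields cycles of lengths 2 and 3.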

We can also take some elements in one cycle and some in another. Let's write down the partitions of the numbers from 2 to 5 into sums to see how one operation can be used: 5, 4, 3, 2, 2 + 3, 2 + 2. For example, we can take three elements in one cycle and two in another; then their (reduced) lengths decrease by 2 and 1 respectively.

The task now is to get all cycle lengths down to zero. Suppose that an optimal solution has four operations which decrease the same cycle length by one. We will consider operations on 5 elements that share no cycle other than that one, because this is the stronger condition.

Such operations can be shown as a table where rows are operations, columns are cycles, and the cells of the table are the amounts by which the cycles are reduced.

| a | b | c | d | e |
|---|---|---|---|---|
| 1 | 2 |   |   |   |
| 1 |   | 2 |   |   |
| 1 |   |   | 2 |   |
| 1 |   |   |   | 2 |

Replace operations this way:

| a | b | c | d | e |
|---|---|---|---|---|
| 4 |   |   |   |   |
|   | 2 | 1 |   |   |
|   |   | 1 | 2 |   |
|   |   |   |   | 2 |

The number of operations was not increased, but now we can see that if there is an optimal solution where one cycle is reduced by one four times, then there is also an optimal solution where no cycle is reduced by one four times. Look at some other replacements:

| a | b | c |
|---|---|---|
| 2 | 1 |   |
| 2 |   | 1 |

Replace by:

| a | b | c |
|---|---|---|
| 4 |   |   |
|   | 1 | 1 |

and

| a | b | c | d |
|---|---|---|---|
| 2 | 1 |   |   |
| 1 |   | 2 |   |
| 1 |   |   | 2 |

Replace by:

| a | b | c | d |
|---|---|---|---|
| 4 |   |   |   |
|   | 1 | 2 |   |
|   |   |   | 2 |

Now we can see that there is an optimal solution where no operation reduces one cycle in the ways 1 + 1 + 1 + 1, 2 + 2, or 1 + 1 + 2. That's why we can reduce any cycle by 4 while its size is >= 4. Let's use the same method to prove that cycles of size 3 are reduced by 3 in some optimal solution.

| a | b | c |
|---|---|---|
| 2 | 1 |   |
| 1 |   | 2 |

Replace by:

| a | b | c |
|---|---|---|
| 3 |   |   |
|   | 1 | 2 |

and

| a | b | c | d |
|---|---|---|---|
| 1 | 2 |   |   |
| 1 |   | 2 |   |
| 1 |   |   | 2 |

Replace by:

| a | b | c | d |
|---|---|---|---|
| 3 |   |   |   |
|   | 2 | 1 |   |
|   |   | 1 | 2 |

Now we have only cycles of size 1 or 2. Let's brute-force the number of operations of the form {2, 2} -> {0, 1} (one cycle is reduced from 2 to 1 and the other from 2 to 0). Fix some such number and perform these operations. Then we should perform the maximum available number of operations of the form {2, 1} -> {0, 0}. After that we will have either no cycles, or only cycles of size 1, or only cycles of size 2. In the second case we should perform the maximum available number of operations {1, 1} -> {0, 0}, and if one cycle remains, do {1} -> {0}. In the third case we should perform operations {2} -> {0}. In this way we can find the optimal set of operations. We shouldn't split a cycle into parts, because that makes the number of operations bigger or keeps it the same. A stress test shows that uniting cycles does not improve the answer.

**Problem E(div1)**

We are given an array where the value in each cell is described by a formula depending on *T*: *val*_{i} = *a*_{i} + *b*_{i} * *T* (geometrically, it is a line equation). Let's divide the array into blocks of size *d* ~ *sqrt*(*n*): [0; *d*), [*d*; 2 * *d*), [2 * *d*; 3 * *d*), ..., [*k* * *d*; *n*). Now let's precalculate, for each block, the time moments when the leader of the block changes. Since we have line equations, we can perform an intersection of half-planes: values of the cells are marked on the *Oy* axis and time moments on the *Ox* axis. We are interested in the *x* coordinates of the intersection points and in the new leader after each such *x*. So now we know the moments of leader change for every block. For each block it takes *O*(*d* * *log*(*d*)) time. The number of blocks *k* is about *n* / *d* ~ *n* / *sqrt*(*n*) ~ *sqrt*(*n*), so the whole precalculation takes *O*(*n* * *log*(*n*)) time.
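The per-block precalculation is an upper envelope of lines. A possible Python sketch (function name and return format are mine, not from the author's code): it returns the index of the initial leader and the breakpoints `(t, idx)` after which line `idx` leads.

```python
def upper_envelope(lines):
    """lines: list of (a, b) meaning value = a + b * t. Returns
    (initial_leader_index, [(t, new_leader_index), ...]) sorted by t."""
    lns = sorted((b, a, i) for i, (a, b) in enumerate(lines))
    # among equal slopes keep only the line with the largest intercept
    filtered = []
    for b, a, i in lns:
        if filtered and filtered[-1][0] == b:
            if filtered[-1][1] >= a:
                continue
            filtered.pop()
        filtered.append((b, a, i))

    def bad(l1, l2, l3):
        # l2 never leads: l1 and l3 cross no later than l1 and l2 do
        b1, a1, _ = l1; b2, a2, _ = l2; b3, a3, _ = l3
        return (a3 - a1) * (b2 - b1) >= (a2 - a1) * (b3 - b1)

    hull = []                     # lines on the envelope, slopes increasing
    for ln in filtered:
        while len(hull) >= 2 and bad(hull[-2], hull[-1], ln):
            hull.pop()
        hull.append(ln)
    changes = []
    for k in range(len(hull) - 1):
        b1, a1, _ = hull[k]
        b2, a2, i2 = hull[k + 1]
        changes.append(((a1 - a2) / (b2 - b1), i2))
    return hull[0][2], changes
```

For example, with the constant line 10 and the line *t*, the leader switches once at *t* = 10: `upper_envelope([(10, 0), (0, 1)])` returns `(0, [(10.0, 1)])`.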

Let's sort all queries by increasing *t*_{i} and perform them one by one. Iterate over all blocks that lie inside the query interval. For each such block we keep the leader up to date for the current time: process all moments of leader change up to *t*_{i} inclusively and relax the leader of the block. The processed moments should be erased; they no longer carry any useful information, because all queries are sorted by *t*_{i}. Then process all cells which are covered by the query but not by the fully processed blocks, and relax the leader for the query.

The number of erased moments of leader change is not more than *n*, and every query needs *O*(*sqrt*(*n*)) time to calculate the answer, not counting the erasing of moments. So the complexity of the algorithm is *O*(*n* * *log*(*n*) + *m* * *sqrt*(*n*)).

An interval (segment) tree can also be used to solve this problem. Such a solution has complexity *O*(*n* * *log*(*n*)), but uses *O*(*n* * *log*(*n*)) memory.

~~Sorry, but where is the Div1 Problem D editorial?~~ In Problem E (div1), why does the precalculation for one block take *O*(*d* * *log*(*d*)) time? Thank you!

Hello, I think the editorial might have a small problem. Or maybe it's just because my understanding is wrong?

For problem D(div 1)

the editorial mentions:

> In the third case we should perform operations like {2} -> {0}. In this way we can find the optimal set of operations. We shouldn't split a cycle into parts because it makes the number of operations bigger or the same. Stress test shows that uniting cycles will not improve the answer.

However, in test case #9:

9
2 3 1 5 6 4 8 9 7

the algorithm in the editorial that eliminates cycles via {2} -> {0} fails.

In my opinion, {2, 2, 2} -> {0, 0, 0} can be done in 2 steps instead of doing {2} -> {0}. It's better.

Can somebody please explain Egor's solution for the Div1 B problem? In particular, what does the order function do exactly? I have seen Egor use it in other contexts, and I have not been able to comprehend it. Any help would be appreciated; I have spent an hour trying to understand what this code does and have made basically no progress. Link to Egor's submission: 499718

I know that this comment was made 13 months ago :) I decided to write an answer since there may be someone confused who might find my explanation helpful.

The key to this approach lies in understanding what "order" does. It just orders indexes according to increasing values in the array. In the 1st step we look at the index with the smallest element, in the 2nd step at the one with the 2nd smallest, and so on. The answer for the current index is (index farthest to the right among those done so far) - (current position) - 1, because all indexes already considered have values smaller than the current one. Let's look at the first test:

6

10 8 5 3 50 45

The "order" function makes us consider indexes in the following order:

4 3 2 1 6 5

We start with index 4 (arr[4] = 3; 3 is the smallest element in the array). 4 is bigger than the current max (-1), so 4 is now the index farthest to the right among those already considered. And since it is the max, the answer for index 4 is -1. Next we get to index 3 (arr[3] = 5; 5 is the 2nd smallest element in the array). Its index is "worse" than the max, so the result for 3 is max - 3 - 1 = 0. The same goes for indexes 2 and 1. After that, we get to index 6 (arr[6] = 45; 45 is the 5th smallest element). 6 is farthest to the right among those already considered, so our max is now 6, and result[6] = -1. Last step, index 5: not bigger than the max, so result[5] = max - 5 - 1 = 6 - 5 - 1 = 0.
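The walk-through above can be condensed into a short sketch (Python, not Egor's actual Java; indexes here are 0-based, and ties in ages are broken by index so that only strictly smaller values are counted):

```python
def displeasure_by_order(a):
    """For each position i: (farthest j > i with a[j] < a[i]) - i - 1, or -1."""
    n = len(a)
    # indexes ordered by increasing value; equal values left to right
    order = sorted(range(n), key=lambda i: (a[i], i))
    res = [0] * n
    far = -1                      # farthest-right index among processed ones
    for i in order:
        if i > far:               # i itself becomes the new rightmost index
            far = i
            res[i] = -1
        else:                     # everything up to far is smaller than a[i]
            res[i] = far - i - 1
    return res
```

On the test above, `displeasure_by_order([10, 8, 5, 3, 50, 45])` gives `[2, 1, 0, -1, 0, -1]`, matching the 1-based answers 2 1 0 -1 0 -1.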

Problem E(div2), C(div1)

Very Nice problem and solution. :)

Can someone please help me with the editorial for the div2 E problem?

> In the end the subsets of edges written from the right of the zero rows will form the basis of the linear space

What is the linear space we are talking about? The third paragraph is confusing for me.

I think I got it.

There is a small typo: 2(*m* - *rank*(*I*)) should be 2^{*m* - *rank*(*I*)}.

Is there a solution for DIV 2 E / DIV 1 C which doesn't require knowledge of matrices?