**Problems A(div2), B(div2)**

In problem *B* it wasn't recommended to use BigInteger in Java because of its slow speed.

**Problem C(div2), A(div1)**

The letters of *s*_{1} should be taken greedily: for each letter of *s*_{2}, take the closest matching letter to the right of the last used position in *s*_{1}. If there is no necessary letter to the right of the last used position, the search should be restarted from the beginning of *s*_{1} and the answer increased by one. But this brute-force solution gets TL, since it has complexity *O*(*Ans* * |*s*_{1}|).

It can be optimized in the following way. For every position in *s*_{1}, let's precalculate, for each letter of the alphabet, the position of its closest occurrence to the right. This can be done by moving from right to left through *s*_{1} and remembering the last seen position of every symbol. This solution has complexity *O*(|*s*_{1}| * *K* + |*s*_{2}|), where *K* is the size of the alphabet.
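The greedy with the precalculated next-occurrence table can be sketched like this (a minimal sketch assuming a lowercase Latin alphabet and that the answer counts how many copies of *s*_{1} are needed so that *s*_{2} is a subsequence of their concatenation; the function name is mine):

```python
def min_copies(s1: str, s2: str) -> int:
    K = 26
    n = len(s1)
    # nxt[i][c]: index of the closest occurrence of letter c in s1 at a
    # position >= i, or -1 if there is none; filled from right to left.
    nxt = [[-1] * K for _ in range(n + 1)]
    for i in range(n - 1, -1, -1):
        nxt[i] = nxt[i + 1][:]
        nxt[i][ord(s1[i]) - ord('a')] = i

    ans, pos = 1, 0  # copies of s1 used so far, next unread position in s1
    for ch in s2:
        c = ord(ch) - ord('a')
        if nxt[0][c] == -1:
            return -1          # this letter never occurs in s1 at all
        if nxt[pos][c] == -1:  # nothing to the right: start a new copy
            ans += 1
            pos = 0
        pos = nxt[pos][c] + 1
    return ans
```

The table costs *O*(|*s*_{1}| * *K*) to build, and every letter of *s*_{2} is then matched in *O*(1).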

**Problem D(div2), B(div1)**

There were a lot of different solutions, but I will describe the author's solution. Let's precalculate the minimum on every suffix [*i*, *n*]; it can be done in *O*(*n*) time moving from right to left. Define the minimum on [*i*, *n*] as *Min*_{i}. Obviously, *Min*_{i} <= *Min*_{i + 1}. Now for every position *i*, using binary search, let's find *j* (*j* > *i*) such that *Min*_{j} < *a*_{i} and *Min*_{j + 1} >= *a*_{i}. Beyond such *j* there are no walruses younger than walrus *i*; this is obvious because *Min*_{j} is the minimum on [*j*, *n*]. If there is no such *j*, print -1; otherwise print *j* - *i* - 1.
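The suffix-minimum plus binary-search idea can be sketched as follows (a minimal sketch with 0-based indexing; the function name is mine):

```python
def displeasure(a):
    # For each i, the largest j > i with a[j] < a[i] gives answer j - i - 1,
    # or -1 if no younger element exists to the right.
    n = len(a)
    suf = [0] * n          # suf[i] = min(a[i:]), non-decreasing in i
    suf[n - 1] = a[n - 1]
    for i in range(n - 2, -1, -1):
        suf[i] = min(a[i], suf[i + 1])

    res = []
    for i in range(n):
        # binary search for the largest j > i with suf[j] < a[i]
        lo, hi, best = i + 1, n - 1, -1
        while lo <= hi:
            mid = (lo + hi) // 2
            if suf[mid] < a[i]:
                best = mid
                lo = mid + 1
            else:
                hi = mid - 1
        res.append(best - i - 1 if best != -1 else -1)
    return res
```

The precalculation is *O*(*n*) and each position is answered in *O*(*log* *n*).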

**Problem E(div2), C(div1)**

We will count the number of ski bases including the base consisting of the empty subset of edges (before printing, just subtract one). In the beginning the number of bases equals 1. If a new edge connects vertices in the same connected component, the result should be multiplied by 2; otherwise nothing changes. You should use the DSU (disjoint set union) data structure to find which components the vertices belong to and to unite them.
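The counting step can be sketched with a small DSU (a minimal sketch; the modulus and the 0-based vertex labels are assumptions, take them from the actual statement):

```python
class DSU:
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False  # already connected: the new edge closes a cycle
        self.parent[ra] = rb
        return True

def ski_bases(n, edges, mod=10**9 + 9):
    # After each added edge, report the number of non-empty ski bases.
    dsu = DSU(n)
    bases = 1  # includes the empty subset; subtract 1 when reporting
    out = []
    for a, b in edges:
        if not dsu.union(a, b):
            bases = bases * 2 % mod
        out.append((bases - 1) % mod)
    return out
```

Each edge that lands inside an existing component adds one independent cycle, doubling the number of subsets with all-even degrees.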

Why is it correct?

To prove it we will use the incidence matrix *I*, whose rows are edges and columns are vertices. Let's define *xor* of two rows: the *xor* of rows *a* and *b* is the row *c* such that *c*_{i} = *a*_{i} xor *b*_{i}. Notice that if the *xor* of some subset of rows equals the zero row, then this subset forms a ski base. This is correct because the degree of every vertex is then even, so we can form an Euler cycle in every connected component. The answer is 2^(*m* - *rank*(*I*)).

Why is this correct? Let's write next to each row the set of edge numbers that currently make up this row. While finding the matrix rank by Gaussian elimination with the *xor* operation, we also *xor* the edge sets written next to the rows. In the end, the edge sets written next to the zero rows form a basis of the linear space of ski bases. That's why we can take any subset of these basis vectors and make up a new ski base. The number of such subsets equals 2^{k} = 2^(*m* - *rank*(*I*)), where *k* is the number of zero rows.
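The rank over GF(2) can be computed compactly by storing each row of *I* as a bitmask (a minimal sketch to check the 2^(*m* - *rank*(*I*)) formula on small graphs; the function name is mine):

```python
def gf2_rank(rows):
    # rows: list of ints, each a bitmask of one row of the incidence matrix.
    # Gaussian elimination with xor, keeping one pivot row per leading bit.
    rank = 0
    pivots = {}  # highest set bit -> reduced row with that leading bit
    for r in rows:
        while r:
            h = r.bit_length() - 1
            if h in pivots:
                r ^= pivots[h]  # eliminate the leading bit
            else:
                pivots[h] = r
                rank += 1
                break
    return rank
```

For a triangle (rows 011, 110, 101) the rank is 2, so 2^(3 - 2) = 2 subsets with even degrees: the empty one and the whole triangle, matching one non-empty ski base.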

The last thing to notice is that the added row is linearly dependent on the previous ones if and only if there already exists a path between the vertices *a* and *b* (*a* and *b* are the ends of the added edge).

**Problem D(div1)**

Let's find all cycles in the permutation:

| 1 | 2 | 3 | 4 | 5 | ... |
|---|---|---|---|---|-----|
| *a*_{1} | *a*_{2} | *a*_{3} | *a*_{4} | *a*_{5} | ... |

For example, the sequence *a* = {2, 1, 4, 5, 3} contains two cycles, of lengths 2 and 3, on positions 1 2 and 3 4 5 respectively. If the length of a cycle is more than 5, then we can take 5 of its elements and rearrange them so that 4 elements land on the places where they should be in the sorted array. In general: if we take *p* elements of a cycle of length more than *p*, then we can rearrange them so that *p* - 1 elements land on their places in the sorted array. If the cycle length is exactly *p*, we can sort all its elements by taking all *p* of them. Further in this analysis I will define the length of a cycle as its real length minus 1 (so now I can say: if we take *p* elements, then *p* - 1 land on the right places).
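Extracting the cycle lengths from the permutation is straightforward (a minimal sketch; the input is the 1-based permutation from the two-row notation above, the function name is mine):

```python
def cycle_lengths(a):
    # a: permutation of 1..n given as a list; follow i -> a[i] until we
    # return to the start, marking visited positions along the way.
    n = len(a)
    seen = [False] * n
    lengths = []
    for i in range(n):
        if not seen[i]:
            length, j = 0, i
            while not seen[j]:
                seen[j] = True
                j = a[j] - 1  # convert 1-based value to 0-based index
                length += 1
            lengths.append(length)
    return lengths
```

For *a* = {2, 1, 4, 5, 3} this yields the cycles of length 2 and 3 mentioned above.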

We can also take some elements in one cycle and some in another. Writing the numbers from 2 to 5 as sums shows the ways an operation can be spent: 5, 4, 3, 2, 2 + 3, 2 + 2. We can take, for example, three elements in one cycle and two in another; their (adjusted) lengths are then reduced by 2 and 1 respectively.

The task now is to make all cycle lengths equal to zero. Suppose an optimal solution has four operations, each of which decreases the same cycle's length by one. We will consider operations that use 5 elements and that share no cycle other than that one, because this is the stronger condition.

Such a set of operations can be shown as a table where rows are operations, columns are cycles, and the cells contain the amounts by which the cycles are reduced.

| a | b | c | d | e |
|---|---|---|---|---|
| 1 | 2 |   |   |   |
| 1 |   | 2 |   |   |
| 1 |   |   | 2 |   |
| 1 |   |   |   | 2 |

Replace the operations this way:

| a | b | c | d | e |
|---|---|---|---|---|
| 4 |   |   |   |   |
|   | 2 | 1 |   |   |
|   |   | 1 | 2 |   |
|   |   |   |   | 2 |

The number of operations has not increased, but now we can see that if there is an optimal solution where one cycle is reduced by one four times, then there is also an optimal solution where no cycle is reduced by one four times. Look at some other replacements:

| a | b | c |
|---|---|---|
| 2 | 1 |   |
| 2 |   | 1 |

Replace by:

| a | b | c |
|---|---|---|
| 4 |   |   |
|   | 1 | 1 |

and

| a | b | c | d |
|---|---|---|---|
| 2 | 1 |   |   |
| 1 |   | 2 |   |
| 1 |   |   | 2 |

Replace by:

| a | b | c | d |
|---|---|---|---|
| 4 |   |   |   |
|   | 1 | 2 |   |
|   |   |   | 2 |

Now we can see that there is an optimal solution where no set of operations reduces one cycle in the patterns 1 + 1 + 1 + 1, 2 + 2, or 1 + 1 + 2. That's why we can reduce any cycle by 4 while its size is >= 4. Let's use the same method to prove that cycles of size 3 are reduced by 3 in some optimal solution.

| a | b | c |
|---|---|---|
| 2 | 1 |   |
| 1 |   | 2 |

Replace by:

| a | b | c |
|---|---|---|
| 3 |   |   |
|   | 1 | 2 |

and

| a | b | c | d |
|---|---|---|---|
| 1 | 2 |   |   |
| 1 |   | 2 |   |
| 1 |   |   | 2 |

Replace by:

| a | b | c | d |
|---|---|---|---|
| 3 |   |   |   |
|   | 2 | 1 |   |
|   |   | 1 | 2 |

Now we have only cycles of size 1 or 2. Let's brute-force the number of operations of the form {2, 2} -> {0, 1} (one cycle is reduced from 2 to 1 and the other from 2 to 0). Fix such a number and perform these operations. Then perform the maximum possible number of operations of the form {2, 1} -> {0, 0}. After that we are left with no cycles, or only cycles of size 1, or only cycles of size 2. In the second case we perform the maximum possible number of operations {1, 1} -> {0, 0}, and if one cycle remains, do {1} -> {0}. In the third case we perform operations {2} -> {0}. This way we can find the optimal set of operations. We shouldn't split a cycle into parts, because that makes the number of operations bigger or the same. Stress testing shows that uniting cycles does not improve the answer either.
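The counting step can be sketched as follows. This is my reading of the procedure above, under the assumptions that one operation rearranges up to 5 elements and that reducing a cycle of adjusted length >= 4 by 4, and one of length 3 by 3, is always optimal as argued; the function name is mine:

```python
def min_operations(cycle_sizes):
    # cycle_sizes: real cycle lengths; work with "adjusted" sizes (length - 1).
    sizes = [c - 1 for c in cycle_sizes if c > 1]
    ops, ones, twos = 0, 0, 0
    for s in sizes:
        ops += s // 4          # reduce by 4 while adjusted size >= 4
        s %= 4
        if s == 3:             # a remaining 3 is removed by one operation
            ops += 1
            s = 0
        if s == 1:
            ones += 1
        elif s == 2:
            twos += 1
    # brute-force t = number of {2, 2} -> {0, 1} operations
    best = None
    for t in range(twos // 2 + 1):
        o, tw, on = ops + t, twos - 2 * t, ones + t
        pairs = min(tw, on)    # {2, 1} -> {0, 0}
        o += pairs
        tw -= pairs
        on -= pairs
        o += tw                # leftover 2s: {2} -> {0}
        o += on // 2 + on % 2  # {1, 1} -> {0, 0}, then a final {1} -> {0}
        best = o if best is None else min(best, o)
    return best if best is not None else 0
```

For *a* = {2, 1, 4, 5, 3} the cycles (2 and 3) give adjusted sizes 1 and 2, and a single {2, 1} -> {0, 0} operation of 5 elements sorts the array.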

**Problem E(div1)**

We are given an array where the value in each cell is described by a formula depending on *T*: *val*_{i} = *a*_{i} + *b*_{i} * *T* (geometrically, it is a line equation). Let's divide the array into blocks of size *d* ~ *sqrt*(*n*): [0; *d*), [*d*, 2 * *d*), [2 * *d*, 3 * *d*), ..., [*k* * *d*, *n*). Now let's precalculate, for each block, the time moments when the leader of the block changes. Since the values are linear in *T*, this is an intersection of half-planes: cell values are marked on the *Oy* axis and time moments on the *Ox* axis, and we are interested in the *x* coordinates of the intersection points and in the leader after each such time *x*. So now we know the leader-change moments for every block. Each block takes *O*(*d* * *log*(*d*)) time. The number of blocks *k* is about *n* / *d* ~ *n* / *sqrt*(*n*) ~ *sqrt*(*n*), so the whole precalculation takes *O*(*n* * *log*(*n*)) time.
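The per-block precalculation is an upper envelope of lines. A minimal sketch (assuming the leader is the cell with the maximum value and times *T* >= 0; the function names are mine):

```python
def upper_envelope(lines):
    # lines: list of (a, b) with value(T) = a + b*T.
    # Returns (hull, breaks): hull[k] is the index of the line that leads
    # between breaks[k-1] and breaks[k]; slopes along the hull increase.
    order = sorted(range(len(lines)), key=lambda i: (lines[i][1], lines[i][0]))
    hull = []

    def bad(i, j, k):
        # True if line j is never strictly above both i and k.
        a1, b1 = lines[i]; a2, b2 = lines[j]; a3, b3 = lines[k]
        return (a3 - a1) * (b2 - b1) >= (a2 - a1) * (b3 - b1)

    for i in order:
        if hull and lines[hull[-1]][1] == lines[i][1]:
            if lines[hull[-1]][0] >= lines[i][0]:
                continue       # same slope, lower line: useless
            hull.pop()
        while len(hull) >= 2 and bad(hull[-2], hull[-1], i):
            hull.pop()
        hull.append(i)

    breaks = []  # time after which the next hull line overtakes the previous
    for p, q in zip(hull, hull[1:]):
        a1, b1 = lines[p]; a2, b2 = lines[q]
        breaks.append((a1 - a2) / (b2 - b1))
    return hull, breaks
```

Building the envelope for a block of *d* lines costs *O*(*d* * *log*(*d*)) for the sort, matching the estimate above.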

Let's sort all queries by increasing *t*_{i} and perform them one by one. Iterate over all blocks which lie inside the query interval. For each such block we maintain the leader at the current time: process all leader-change moments up to *t*_{i} inclusively and relax the leader of the block. The processed moments should be erased; they no longer hold any useful information, because all queries are sorted by *t*_{i}. Then process all cells which are covered by the query but not by whole blocks, and relax the leader for the query.

The total number of erased leader-change moments is no more than *n*, and every query needs *O*(*sqrt*(*n*)) time to compute the answer, not counting the erased moments. So the complexity of the algorithm is *O*(*n* * *log*(*n*) + *m* * *sqrt*(*n*)).

A segment (interval) tree can also be used to solve this problem. Such a solution has *O*(*n* * *log*(*n*)) time complexity, but also *O*(*n* * *log*(*n*)) memory.