### 1789A - Serval and Mocha's Array

Idea & Preparation: Bazoka13

**Tutorial**

Consider an array $$$a$$$ of $$$n$$$ ($$$n\geq 2$$$) positive integers. The following inequality holds for $$$2\leq i\leq n$$$:

$$$$$$\gcd(a_1,a_2,\dots,a_i)\leq\gcd(a_1,a_2).$$$$$$

Therefore, when the prefix $$$[a_1,a_2]$$$ of $$$a$$$ is good, all the prefixes of $$$a$$$ of length at least $$$2$$$ are good, so $$$a$$$ is beautiful. Conversely, $$$[a_1, a_2]$$$ is obviously good when $$$a$$$ is beautiful. We conclude that $$$a$$$ is beautiful if and only if the prefix $$$[a_1, a_2]$$$ is good.

We can check if there exist $$$a_i, a_j$$$ ($$$i\neq j$$$) such that $$$\gcd(a_i, a_j)\leq 2$$$. If so, we can move $$$a_i,a_j$$$ to the front of $$$a$$$ to make it beautiful, and the answer is `Yes`. If not, the answer is `No`.

Time complexity: $$$O(n^2\log 10^6)$$$.
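
A direct implementation of this check might look as follows (a sketch; the helper `solve`, which handles a single test case and returns the answer string, is illustrative):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Check whether some pair a_i, a_j (i != j) has gcd(a_i, a_j) <= 2;
// if so, moving that pair to the front makes the array beautiful.
string solve(const vector<long long>& a) {
    int n = a.size();
    for (int i = 0; i < n; ++i)
        for (int j = i + 1; j < n; ++j)
            if (__gcd(a[i], a[j]) <= 2) return "Yes";
    return "No";
}
```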

### 1789B - Serval and Inversion Magic

Idea & Preparation: Bazoka13

**Tutorial**

If $$$s$$$ is palindromic initially, we can operate on the interval $$$[1,n]$$$, and the answer is `Yes`.

Let's consider the other case. In a palindrome $$$s$$$, $$$s_{i}=s_{n-i+1}$$$ must hold for each $$$i$$$ in $$$[1,\lfloor n/2\rfloor]$$$. For those $$$i$$$, we may check whether $$$s_{i}=s_{n-i+1}$$$ is true in the initial string. For every illegal position $$$i$$$, the operation must contain exactly one of $$$i$$$ and $$$n+1-i$$$. For the legal positions, the operation must contain either both of $$$i$$$ and $$$n+1-i$$$ or neither of them.

If the illegal positions are contiguous (which means that they are $$$l,l+1,\dots,r-1,r$$$ for some $$$l$$$ and $$$r$$$), we may operate on the interval $$$[l,r]$$$ (or $$$[n+1-r,n+1-l]$$$), making the string palindromic. The answer is `Yes`.

Otherwise, there must be some legal positions that lie between the illegal ones. Suppose the illegal positions span $$$[l,r]$$$ (but are not contiguous), and the operation is $$$[o_{1},o_{2}]$$$. Without loss of generality, let the operation lie in the left part of the string. Then $$$o_{1}\le l$$$ and $$$r\le o_{2}<n+1-r$$$ must hold to correct all the illegal positions. This interval covers all the legal positions that lie between the illegal ones but not their symmetric counterparts, so such an operation produces new illegal positions. In other words, there are no valid operations in this situation. The answer is `No`.

Time complexity: $$$O(n)$$$.
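
The whole check can be sketched in a few lines (the helper `solve`, which returns the answer for one string, is illustrative):

```cpp
#include <bits/stdc++.h>
using namespace std;

// "Yes" iff s is already a palindrome or the mismatched positions
// i (with s[i] != s[n-1-i], 0-indexed) form one contiguous block.
string solve(const string& s) {
    int n = s.size();
    vector<int> bad;
    for (int i = 0; i < n / 2; ++i)
        if (s[i] != s[n - 1 - i]) bad.push_back(i);
    if (bad.empty()) return "Yes";            // already palindromic
    // contiguous iff last - first + 1 == number of mismatches
    return (bad.back() - bad.front() + 1 == (int)bad.size()) ? "Yes" : "No";
}
```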

### 1789C - Serval and Toxel's Arrays

Idea & Preparation: Toxel

**Tutorial**

Consider the contribution of each value. We only need to count the number of concatenated arrays each value appears in, and sum all those counts up; this count depends only on the number of arrays in which the value appears. Notice that the appearances of each value form some intervals: each interval starts when the value overwrites another element (or when it is present in the initial array), and ends when it is overwritten (or at the $$$m$$$-th array). As there are no duplicate elements, the intervals do not intersect, so we can simply sum their lengths up.

Let's use an array $$$\text{appear}$$$ to track the appearance of each value. We first set the appearance of the initial elements to $$$0$$$, and that of all other values to $$$-1$$$, which means the value does not currently appear. Then, in the $$$i$$$-th modification, suppose we modify an element from $$$x$$$ to $$$y$$$: we add $$$i-\text{appear}_{x}$$$ to $$$\text{count}_{x}$$$, set $$$\text{appear}_{x}$$$ to $$$-1$$$, and set $$$\text{appear}_{y}$$$ to $$$i$$$. After all operations, for every $$$x$$$ with $$$\text{appear}_{x}\neq -1$$$, we add $$$m+1-\text{appear}_{x}$$$ to $$$\text{count}_{x}$$$.

Value $$$x$$$ appears in $$$\frac{m(m+1)}{2}-\frac{(m-\text{count}_{x})(m-\text{count}_{x}+1)}{2}$$$ concatenated arrays.

Time complexity: $$$O(n+m)$$$.
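
A sketch of this bookkeeping (the function name `solve` and the 1-indexed `(p, v)` format of `ops` are illustrative; values are assumed to lie in $$$[1, n+m]$$$ as in the problem constraints):

```cpp
#include <bits/stdc++.h>
using namespace std;

// a: initial array; ops[i-1] = (p, v): the i-th modification sets a[p] = v (p is 1-indexed).
long long solve(int n, int m, vector<int> a, const vector<pair<int,int>>& ops) {
    int V = n + m;                               // values are at most n + m
    vector<long long> appear(V + 1, -1), cnt(V + 1, 0);
    for (int x : a) appear[x] = 0;               // initial elements appear from array 0
    for (int i = 1; i <= m; ++i) {
        auto [p, v] = ops[i - 1];
        int x = a[p - 1];
        cnt[x] += i - appear[x];                 // close the interval of the old value
        appear[x] = -1;
        appear[v] = i;                           // open an interval for the new value
        a[p - 1] = v;
    }
    for (int x = 1; x <= V; ++x)                 // close intervals still open at the end
        if (appear[x] != -1) cnt[x] += m + 1 - appear[x];
    long long total = 1LL * m * (m + 1) / 2, ans = 0;
    for (int x = 1; x <= V; ++x)                 // formula from the tutorial above
        ans += total - 1LL * (m - cnt[x]) * (m - cnt[x] + 1) / 2;
    return ans;
}
```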

### 1789D - Serval and Shift-Shift-Shift

Idea & Preparation: Toxel

**Tutorial**

First of all, it can be proven that the answer exists if and only if $$$a$$$ and $$$b$$$ are both zero or both non-zero.

If $$$a$$$ is zero, it remains zero after any operation, so it cannot become $$$b$$$ if $$$b$$$ is non-zero. If $$$a$$$ is non-zero, a logical left shift either increases the index of its lowest set bit or turns it into zero, so the shifted value always differs from $$$a$$$. The same applies to logical right shifts. Therefore, the xor of $$$a$$$ with its shift must be non-zero, and there are no possible operations if $$$b$$$ is zero.

We will show that it is always possible to change $$$a$$$ into $$$b$$$ in the other cases. We denote $$$\text{lb}(a)$$$ as the lowest bit of $$$a$$$ and $$$\text{hb}(a)$$$ as the highest bit of $$$a$$$. If $$$a$$$ and $$$b$$$ are both zero, no operations are needed. If they are both non-zero, the construction consists of four steps:

- If $$$\text{hb}(a)<\text{lb}(b)$$$, logically left-shift $$$a$$$ by $$$\text{lb}(b)-\text{hb}(a)$$$ bits. Then $$$\text{hb}(a)$$$ must be equal to or greater than $$$\text{lb}(b)$$$.
- For each bit $$$i$$$ of $$$\text{lb}(b)-1,\text{lb}(b)-2,\dots,1$$$, if $$$a_{i}=1$$$, we may logically right-shift $$$a$$$ by $$$\text{hb}(a)-i$$$ bits to erase it. After that, we have $$$\text{lb}(a)\ge\text{lb}(b)$$$.
- If $$$\text{lb}(a)>\text{lb}(b)$$$, logically right-shift $$$a$$$ by $$$\text{lb}(a)-\text{lb}(b)$$$ bits. Now it is guaranteed that $$$\text{lb}(a)=\text{lb}(b)$$$.
- For each bit $$$i$$$ of $$$\text{lb}(b)+1,\text{lb}(b)+2,\dots,n$$$, if $$$a_{i}\neq b_{i}$$$, we may logically left-shift $$$a$$$ by $$$i-\text{lb}(a)$$$ bits to fix it. After that, we must have $$$a=b$$$.

Steps 2 and 4 together require at most $$$n-1$$$ operations. We may also note that steps 1 and 3 never appear simultaneously: if step 1 is performed, then $$$\text{lb}(a)=\text{lb}(b)$$$ is already guaranteed after step 2, so step 3 is unnecessary in this case. In conclusion, we may use no more than $$$n$$$ operations to change $$$a$$$ into $$$b$$$ if they are both non-zero.
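
The four steps above can be sketched on 64-bit words (the actual problem works with length-$$$n$$$ bitstrings; here one operation "shift by $$$t$$$" means $$$a \oplus (a \ll t)$$$ or $$$a \oplus (a \gg t)$$$, bits are 0-indexed, and overflowed bits are discarded, matching the fixed length):

```cpp
#include <bits/stdc++.h>
using namespace std;

int lb(uint64_t x) { return __builtin_ctzll(x); }      // index of lowest set bit
int hb(uint64_t x) { return 63 - __builtin_clzll(x); } // index of highest set bit

// Apply the four steps; returns the transformed a (equal to b on success).
uint64_t transform(uint64_t a, uint64_t b) {
    assert(a != 0 && b != 0);
    if (hb(a) < lb(b)) a ^= a << (lb(b) - hb(a));      // step 1: raise hb(a) to lb(b)
    for (int i = lb(b) - 1; i >= 0; --i)               // step 2: clear bits below lb(b)
        if (a >> i & 1) a ^= a >> (hb(a) - i);
    if (lb(a) > lb(b)) a ^= a >> (lb(a) - lb(b));      // step 3: align lb(a) = lb(b)
    for (int i = lb(b) + 1; i < 64; ++i)               // step 4: fix remaining bits upwards
        if (((a ^ b) >> i) & 1) a ^= a << (i - lb(a));
    return a;
}
```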

Time Complexity: $$$O(n^{2})$$$, or $$$O(\frac{n^{2}}{w})$$$ by using `std::bitset`.

### 1789E - Serval and Music Game

Idea & Preparation: Serval

**Tutorial**

Consider the following two cases:

**Case 1**: $$$x$$$ is not a factor of $$$s_n$$$.

In this case we have $$$\left\lfloor{s_n\over x}\right\rfloor + 1 = \left\lceil{s_n\over x}\right\rceil$$$. Let $$$k = \left\lfloor{s_n\over x}\right\rfloor$$$. It can be shown that there are at most $$$2\sqrt{s_n}$$$ different values of $$$k$$$. The constraint of $$$s_i$$$ can be written in the following form: there exist non-negative integers $$$p_i$$$ and $$$q_i$$$ such that $$$p_i k + q_i (k+1) = s_i$$$.

For a certain $$$k$$$, such $$$p_i$$$ and $$$q_i$$$ do not exist if and only if $$$s_i\bmod k > \left\lfloor{s_i\over k}\right\rfloor$$$. To prove it, note that otherwise we would get the contradiction $$$q_i\bmod k = s_i\bmod k > \left\lfloor{s_i\over k}\right\rfloor \geq q_i$$$; conversely, when $$$s_i\bmod k\leq \left\lfloor{s_i\over k}\right\rfloor$$$, the construction $$$q_i = s_i\bmod k$$$ and $$$p_i=\left\lfloor{s_i\over k}\right\rfloor - q_i$$$ works.

By observation, these $$$s_i$$$ lie in one of the following intervals: $$$[jk+j+1,\ jk+k-1]$$$ for $$$j=0,1,\dots,k-2$$$.

We can count the number of these $$$s_i$$$ by pre-calculating the prefix sums of the value counts, which lets us calculate $$$f(x)$$$.

This case can be solved in $$$O(s_n)$$$ time, as we now show:

- When $$$k\leq \sqrt{s_n}$$$, there are $$$k - 2$$$ intervals that need to be considered for a certain $$$k$$$. Since $$$\sum_{k\leq \sqrt{s_n}} k \leq s_n$$$, this part can be solved in $$$O(s_n)$$$ time.
- When $$$k>\sqrt{s_n}$$$, notice that there are at most $$$\left\lceil s_n\over k\right\rceil \leq \sqrt{s_n}$$$ intervals that need to be considered for a certain $$$k$$$. Recall that there are at most $$$\sqrt{s_n}$$$ different values of $$$k$$$ in this part, so it can be solved in $$$O(s_n)$$$ time.
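
For a fixed $$$k$$$, the count of violating $$$s_i$$$ can be sketched with prefix sums; the interval form used below is derived from the condition $$$s_i \bmod k > \left\lfloor{s_i\over k}\right\rfloor$$$ (a sketch; `pre[v]` is assumed to hold the number of indices $$$i$$$ with $$$s_i\le v$$$):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Count the s_i with s_i mod k > floor(s_i / k), i.e. the s_i lying in
// an interval [j*k + j + 1, j*k + k - 1] for some quotient j < k - 1.
long long count_bad(const vector<long long>& pre, long long k, long long sn) {
    long long bad = 0;
    for (long long j = 0; j < k - 1 && j * k + j + 1 <= sn; ++j) {
        long long l = j * k + j + 1, r = min(j * k + k - 1, sn);
        if (l <= r) bad += pre[r] - pre[l - 1];
    }
    return bad;
}
```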

**Case 2**: $$$x$$$ is a factor of $$$s_n$$$.

In this case we have $$$\left\lfloor{s_n\over x}\right\rfloor = \left\lceil{s_n\over x}\right\rceil$$$. Let $$$k = {s_n\over x}$$$. The constraint of $$$s_i$$$ becomes: $$$s_i$$$ must be a multiple of $$$k$$$.

To calculate $$$f(x)$$$, we only need to count the number of multiples of $$$k$$$. To do this, we can first calculate $$$s'_i = \gcd(s_i, s_n)$$$ for all $$$1\leq i\leq n$$$ in $$$O(n\log s_n)$$$ time. It is obvious that $$$s_i'$$$ is a factor of $$$s_n$$$. For a certain $$$k$$$, we can enumerate all the factors of $$$s_n$$$, find out the multiples of $$$k$$$ among them, and sum up the number of times they occur in $$$s'$$$. Recall that $$$s_n$$$ has at most $$$2\sqrt{s_n}$$$ factors, so this takes $$$O(s_n)$$$ time in total.
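
This counting step can be sketched as follows (the helper `count_multiples` is illustrative; it returns, for every divisor $$$k$$$ of $$$s_n$$$, the number of $$$s_i$$$ divisible by $$$k$$$, using that for $$$k\mid s_n$$$ we have $$$k\mid s_i$$$ iff $$$k\mid\gcd(s_i,s_n)$$$):

```cpp
#include <bits/stdc++.h>
using namespace std;

map<long long, long long> count_multiples(const vector<long long>& s) {
    long long sn = s.back();                    // s is sorted, s_n is the largest
    vector<long long> divs;
    for (long long d = 1; d * d <= sn; ++d)
        if (sn % d == 0) {
            divs.push_back(d);
            if (d != sn / d) divs.push_back(sn / d);
        }
    map<long long, long long> occ;              // occurrences of each gcd(s_i, s_n)
    for (long long x : s) occ[__gcd(x, sn)]++;
    map<long long, long long> res;              // res[k] = #{ i : k divides s_i }
    for (long long k : divs) {
        res[k] = 0;
        for (long long d : divs)
            if (d % k == 0 && occ.count(d)) res[k] += occ[d];
    }
    return res;
}
```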

This case can be solved in $$$O(n\log s_n + s_n)$$$ time in total.

Time complexity: $$$O(n\log s_n + s_n)$$$.

$$$O(s_n + \sigma(s_n))$$$ solutions can pass all the tests, where $$$\sigma(n)$$$ denotes the sum of all the factors of $$$n$$$. A well-implemented $$$O(s_n\log s_n)$$$ solution may pass the tests, too.

**Bonus**: Solve this problem in $$$O(n + s_n)$$$ time.

### 1789F - Serval and Brain Power

Idea & Preparation: Serval

**Tutorial**

Assume that the longest powerful subsequence of the given string $$$S$$$ is $$$T$$$, which can be obtained by concatenating $$$k$$$ copies of a string $$$T'$$$. Noticing that $$$|S|\leq 80$$$, we have $$$k\cdot |T'| \leq |S| \leq 80$$$, so it is impossible that both $$$k$$$ and $$$|T'|$$$ are large.

When $$$k < 5$$$, we only need to consider the $$$k = 2$$$ and $$$k = 3$$$ cases. The $$$k = 4$$$ case is covered by the $$$k = 2$$$ case, since $$$T = T'+T'+T'+T' = (T'+T') + (T'+T')$$$.

For the $$$k = 2$$$ case, we split $$$S$$$ into two parts $$$S=S_1+S_2$$$, then calculate the maximal length of $$$\operatorname{LCS}(S_1, S_2)$$$ by dynamic programming over all the possible splits. This case can be solved in $$$O(w_2\cdot|S|^3)$$$ time, where $$$w_2$$$ is a small constant.
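
The $$$k = 2$$$ case can be sketched with the classic LCS DP (the helper names `lcs` and `best_k2` are illustrative; the powerful subsequence found this way has length twice the returned value):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Classic O(|x|*|y|) longest-common-subsequence DP.
int lcs(const string& x, const string& y) {
    vector<vector<int>> dp(x.size() + 1, vector<int>(y.size() + 1, 0));
    for (size_t i = 1; i <= x.size(); ++i)
        for (size_t j = 1; j <= y.size(); ++j)
            dp[i][j] = x[i-1] == y[j-1] ? dp[i-1][j-1] + 1
                                        : max(dp[i-1][j], dp[i][j-1]);
    return dp[x.size()][y.size()];
}

// Try every split S = S1 + S2 and take the best LCS(S1, S2).
int best_k2(const string& s) {
    int ans = 0;
    for (size_t cut = 1; cut + 1 <= s.size(); ++cut)
        ans = max(ans, lcs(s.substr(0, cut), s.substr(cut)));
    return ans;
}
```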

The $$$k = 3$$$ case is similar: we split $$$S$$$ into three parts $$$S = S_1 + S_2 + S_3$$$, then calculate the maximal length of $$$\operatorname{LCS}(S_1, S_2, S_3)$$$ over all the possible splits. This case can be solved in $$$O(w_3\cdot|S|^5)$$$ time, where $$$w_3$$$ is a small constant. We will estimate $$$w_3$$$ later.

When $$$k \geq 5$$$, we have $$$|T'|\leq {|S|\over k}\leq {|S|\over 5}$$$. It can be shown that if we split $$$S$$$ into $$$5$$$ parts of equal length, $$$T'$$$ will be a subsequence of at least one of them. We can therefore enumerate all the subsequences of these five substrings as candidates for $$$T'$$$. For each candidate $$$T'$$$, we can find the corresponding $$$k$$$ by matching $$$T'$$$ against $$$S$$$ greedily. This case can be solved in $$$O(5\cdot 2^{|S|/5}|S|)$$$ time.

Now let us roughly estimate how small $$$w_3$$$ could be. The time that dynamic programming consumed for certain $$$S_1, S_2, S_3$$$ is $$$|S_1|\cdot|S_2|\cdot|S_3|$$$. Since $$$|S_1|+|S_2|+|S_3|=|S|$$$, we have $$$|S_1|\cdot|S_2|\cdot|S_3|\leq {1\over 27}|S|^3$$$. Recall that there are $$${|S|-1 \choose 2} \leq {1\over 2}|S|^2$$$ possible splits, then $$$w_3\leq {1\over 54}$$$ holds.

Time complexity: $$$O(w_3\cdot|S|^5 + 5\cdot 2^{|S|/5}|S|)$$$.

E is best problem

Found E very interesting, nice math trick which is very expandable and practical. Many problems use similar idea of piecewise algorithms.

In A complexity is $$$O(n^2 \log C)$$$, or you can check that $$$gcd \le 2$$$ in $$$O(1)$$$ somehow?

In F to estimate $$$w_3$$$ we can consider the following bijection: let's take $$$n$$$ white balls and 5 black balls and put them in a linear order. White balls will denote letters of the string, 2nd and 4th black balls will denote where we split our string in 3 parts, and 1st/3rd/5th black balls will denote where in each of 3 strings we stay in DP. Thus there is a bijection between orders of balls and all DP states over all ways to split the string into 3 parts (actually we allow splits with empty parts, so it's even a bit smaller than this). The number of ball orders is $$$\binom{n+5}{5} = \frac{n^5}{5!} + O(n^4)$$$, thus $$$w_3 = \frac{1}{120}$$$

Thanks for sharing! I'll fix the typo in A. :)

It's possible to solve it quickly.

## How to Quickly Calculate the GCD in E (Bonus) and A

Bonus of E: The only part of the official solution costing $$$\omega(s_n)$$$ is calculating the GCDs. However, we can calculate $$$\gcd(x,s_n)$$$ for all $$$x\in[1,s_n]$$$ in $$$\Theta(s_n)$$$: since $$$\gcd(xy,s_n)=\gcd(x,s_n)\cdot \gcd(y,s_n)$$$ for all $$$x,y\in \mathbb Z$$$ satisfying $$$\gcd(x,y)=1$$$ (this property is called being *multiplicative*), we can use Euler's sieve to calculate it in $$$\Theta(s_n)$$$, solving the bonus of E.
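
A sketch of that linear sieve (the helper `gcd_with_all` is illustrative; the direct `__gcd` call is only taken on prime powers, of which there are few, so the total stays linear):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Linear (Euler) sieve computing g[x] = gcd(x, N) for every x in [1, N],
// using that x -> gcd(x, N) is multiplicative in x.
vector<long long> gcd_with_all(long long N) {
    vector<long long> g(N + 1, 0), low(N + 1, 0); // low[x] = p^e for the smallest prime p of x
    vector<long long> primes;
    vector<bool> comp(N + 1, false);
    g[1] = 1;
    for (long long x = 2; x <= N; ++x) {
        if (!comp[x]) {                           // x is prime
            primes.push_back(x);
            low[x] = x;
            g[x] = (N % x == 0) ? x : 1;
        }
        for (long long p : primes) {
            if (x * p > N) break;
            comp[x * p] = true;
            if (x % p == 0) {
                low[x * p] = low[x] * p;
                g[x * p] = (low[x * p] == x * p)
                               ? __gcd(x * p, N)  // pure prime power: direct gcd
                               : g[x * p / low[x * p]] * g[low[x * p]];
                break;
            }
            low[x * p] = p;                       // p and x are coprime here
            g[x * p] = g[x] * g[p];
        }
    }
    return g;
}
```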

Bonus of A: there's a well-known technique that can also solve A in $$$O(\text{SIZE})$$$ precalc and $$$O(1)$$$ per query (where $$$\text{SIZE}=\max{a_i}$$$ and the number of pairs of GCD we query is $$$\frac{Tn(n-1)}{2}$$$ in this problem). You can solve this problem (on a mainland Chinese OJ, Luogu).

But in this case, O(SIZE+K) will be of the order of 10^6 per case, so won't it be slower than O(n^2) as n<=100?

It's my typo. Initialization only runs once. Fixed&thx.

In fact we only run the initialization, like an Euler's sieve, to reduce the range to $$$\sqrt {\text{SIZE}}$$$ and precalc $$$\gcd(a,b)$$$ for all $$$a,b\in[1,\sqrt {\text{SIZE}}]$$$ (like the bonus of E) within $$$O(\text{SIZE})$$$; after that we can query $$$\gcd(a,b)$$$ for any $$$a,b\in[1,\text{SIZE}]$$$ in $$$O(1)$$$.

Is it true that someone has O(n) on D or are those just rumors?

I didn't understand the concept of lb(a) and hb(a) in the editorial of D. Is it referring to the left-most and right-most bits of a? Sorry for the confusion.

$$$lb(x)$$$ is lowest bit of $$$x$$$

$$$hb(x)$$$ is highest bit of $$$x$$$

What does it mean by lowest bit of x or highest bit of x? Is it the count of 0's and 1's in x?

No, it is index of leftmost 1 and rightmost 1

Oh, I get it now. Thanks a lot.

Does $$$lb(x)$$$ mean $$$lsb(x)$$$? Because if we do $$$+1$$$, we need to know which direction we are moving in, and $$$lb(x)$$$ alone doesn't give that information, IMO.

DP solution for problem C.

Let $$$ cnt(i, k) $$$ be the number of times the number $$$k$$$ appears in the arrays $$$A_0, A_1, \ldots, A_i$$$, and let $$$dp[i]$$$ be the sum of the values of the concatenations $$$(A_i, A_j), 0 \leq j < i$$$.

When we go from the $$$(i-1)$$$-th iteration to the $$$i$$$-th, only the number $$$a_p$$$ changes to $$$v$$$, so in the transition $$$dp[i-1] \rightarrow dp[i]$$$ we only have to analyze how $$$a_p$$$ and $$$v$$$ affect the result.

Assume that the change is not yet made in $$$A_i \ (A_i = A_{i-1})$$$; then $$$dp[i] = dp[i - 1] + n $$$ (because $$$A_i$$$ and $$$A_{i-1}$$$ are equal, so we just add $$$n$$$ to $$$dp[i]$$$). Once the change is made, for each occurrence of $$$a_p$$$ in $$$A_0, A_1, \ldots, A_{i-1}$$$ we add $$$1$$$ to $$$dp[i]$$$ (because $$$a_p$$$ no longer exists in $$$A_i$$$), and for each occurrence of $$$v$$$ in $$$A_0, A_1, \ldots, A_{i-1}$$$ we subtract $$$1$$$ from $$$dp[i]$$$.

So, the transition is $$$dp[i] = dp[i-1] + n + cnt(i-1, a_p) - cnt(i-1, v)$$$, and the answer is $$$\sum_{i=1}^{m} dp[i]$$$.

To share some random stuff as a tester:

The original contest had only the current A, B, C, E, F; D was newly added to alleviate the gap between C and E (well, as you already found, D is harder than expected, but the gap would be even worse if the problem after C were E).

The original statement of E was horribly longer than the current one (maybe more than two times longer). For me the current version is way better (maybe already the best possible for such an, in fact, complicated problem).

Personally I like problem F, and C is quite an educational problem (which I'd recommend every newbie give a shot; they will definitely learn a lot).

Why does the interval in problem E not contain $$${uk}$$$, where $$$u$$$ is an integer? We can let $$$p_i$$$ be $$$u$$$ and $$$q_i$$$ be $$$0$$$.

I think the tutorial means that those intervals are illegal.

An interval being illegal means that if $$$s_i\in[l, r]$$$, then $$$i$$$ will not be counted into $$$f(x)$$$.

I don't know what $$$\frac{m(m + 1)}{2} - \frac{(m−count_x)(m−count_x+1)}{2}$$$ meant in the interpretation of the problem C.

`m(m+1)/2` is the number of all concatenated arrays, and `(m−countx)(m−countx+1)/2` is the number of all concatenated arrays which don't contain x: among the m arrays, countx arrays contain x, so if x doesn't appear in a concatenated array, the two arrays must both be chosen from the (m−countx) arrays which don't contain x.

In problem C, shouldn't we add $$$ m+1-appear_x $$$ to $$$count_x$$$ after all operations if $$$appear_x \neq -1 $$$?

Sorry for the typo. It is fixed now. :3

I think that $$$m−appear_x$$$ in the solution of problem C is wrong; I think it should be $$$m-appear_x+1$$$.

If you write $$$m-appear$$$ in the code, it will be WA.

Sorry for the typo. Fixed now. :3

Problem C

Shouldn't it be $$$m + 1 - appear_x$$$ ?!! I have been confused about it for hours!

Sorry for the typo. It is fixed now. :3

In Problem D step 4, "we may logical left shift a by i-hb(a) bits to erase it", should it not be i-lb(a) bits instead?

Sorry for the typo. Fixed now. :3

In problem D, my solution is consistently giving wrong answer on case 113 of test case 4. It says the sequence of operations doesn't give b. Kindly help

I have a stupid dp solution of C

I don't understand problem C's solution. Does anybody have an understandable explanation?

Every element contributes in this fashion: at its first occurrence it contributes $$$m$$$ to the answer (as it would be a unique element for the next $$$m$$$ arrays), at its next occurrence it contributes $$$m-1$$$ (it would be unique for the next $$$m-1$$$ arrays), and so on. So we just need to count the frequency of every element; as the range only goes up to $$$n+m$$$, we can count them in a frequency array and then sum up $$$m + (m-1) + (m-2) + \dots + (m-\text{count}+1)$$$ for each element.

"During modifications, Toxel guaranteed that the elements of each array are still pairwise distinct after each operation." Does this mean that the intersection of the elements across all the arrays is empty, or that the elements within one particular array are distinct?

2nd one

Found this for problem F: https://www.youtube.com/watch?v=VBke0Ooe1ys&t=7s

E is fantastic!!!

There is a very slight mistake in the tutorial of Problem E:

Actually there are $$$k - 1$$$ intervals. But anyway, it's insignificant. :D

Great

Can anyone tell me what the bug in this code is?

I thought my concept for C was right, but I couldn't implement it. I can't manage to handle the situation where a position is modified more than once with the same value.

If you're struggling to understand the solution of problem C, you can read my explanation.

My approach: this is my favorite problem now.

Since it looks quite complex when we consider numbers and maintain their counts, I visualised the problem as below.

Let's consider a bitmask representation of the array for better understanding. Our original array was [ 1 , 2 , 3 ], and throughout the queries we will introduce some new elements, say [ 4 , 5 ], so the total set of numbers is [ 1 , 2 , 3 , 4 , 5 ]. Consider the index of each element in this full set as its id; then the bitmasks look like this:

`11100` (original)

`01110` (after the 1st move)

`00111` (after the 2nd move)

Now we can see that even if a number is present in both arrays of a pair it contributes only once, and if it is present in neither array of the pair it contributes zero.

For each bit index we can count `b`, the number of arrays in which this bit is 0. The total number of pairings is `m*(m+1)/2`, but the pairings in which both arrays have a zero at this bit contribute nothing, so we need to remove them. That is: `m*(m+1)/2 - b*(b-1)/2`.

The only problem now is how to maintain the count for each bit: there can be 1e5 numbers in the array, and doing it naively would take N*M time, which we do not want. So we can break the numbers into two groups: `zero_set` (the numbers whose bit is currently zero) and `one_set` (the numbers whose bit is currently one; I did not use it in my implementation, it is just for understanding). Any move changes only one number in each group (they switch places), so we can maintain the count of zeros by maintaining the `in time` and `out time` of each number into and out of the zero set.

Look at my code for reference: 256787804