### 632A - Grandma Laura and Apples

The problem was suggested by unprost.

Consider the process from the end. The last buyer always buys half an apple and gets the other half for free (so the last line is always *halfplus*). Going backwards, each buyer doubles the number of apples and possibly adds one. So, read from the end, the lines simply give the binary representation of the initial number of apples. To calculate the answer we restore that value from the end and along the way accumulate the total money grandma should have.
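A sketch of this reverse reconstruction (an illustration, not the linked solution; money is counted in half-apples to stay in integers, since *p* is even in the statement):

```python
def grandma_money(p, buyers):
    """Replay the sales in reverse; p is the price of one whole apple."""
    apples = 0       # apples grandma has after the last sale (zero)
    halves_sold = 0  # total apples sold, counted in half-apples
    for b in reversed(buyers):
        # before this buyer she had twice as many apples, plus one
        # if she also gave away a free half ("halfplus")
        apples = apples * 2 + (1 if b == "halfplus" else 0)
        # the buyer paid for exactly apples/2 apples, i.e. `apples` halves
        halves_sold += apples
    return halves_sold * p // 2
```

The running `apples` value is exactly the binary representation being restored bit by bit.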

C++ solution by me.

C++ solution by unprost.

Complexity: *O*(*n*).

### 632B - Alice, Bob, Two Teams

The problem was suggested by Lewin Gan lewin.

Let's calculate the prefix sums of all the numbers (storing them in array *s*1) and of the numbers with letter B (array *s*2). Now we can find in *O*(1) time the sum of all numbers on any segment, as well as the sum of the numbers with letter B.

Let's iterate over the prefix or suffix to flip and calculate the resulting sum in each case by the formulas: *sum*(*s*2, 0, *n* - 1) + *sum*(*s*1, 0, *i*) - 2·*sum*(*s*2, 0, *i*) for prefixes and *sum*(*s*2, 0, *n* - 1) + *sum*(*s*1, *i*, *n* - 1) - 2·*sum*(*s*2, *i*, *n* - 1) for suffixes (flipping a segment, team B gains that segment's A-numbers and loses its B-numbers).
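A minimal sketch of this approach (assuming, as above, that *s*1 stores prefix sums of all numbers, *s*2 stores prefix sums of the numbers with letter B, and that we maximize the strength of team B):

```python
def best_bob_strength(p, s):
    """Try every prefix/suffix flip and return the best strength of team B.

    p: list of strengths, s: string of 'A'/'B' team labels."""
    n = len(p)
    s1 = [0] * (n + 1)  # s1[i] = sum of p[0..i-1]
    s2 = [0] * (n + 1)  # s2[i] = sum of p[j] for j < i with s[j] == 'B'
    for i in range(n):
        s1[i + 1] = s1[i] + p[i]
        s2[i + 1] = s2[i] + (p[i] if s[i] == 'B' else 0)
    total_b = s2[n]
    best = total_b  # flipping nothing is also allowed
    for i in range(n + 1):
        # flip prefix p[0..i-1]: B gains its A-part, loses its B-part
        best = max(best, total_b + s1[i] - 2 * s2[i])
        # flip suffix p[i..n-1]
        best = max(best, total_b + (s1[n] - s1[i]) - 2 * (s2[n] - s2[i]))
    return best
```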

C++ solution by me.

Python solution by lewin.

Complexity: *O*(*n*).

### 632C - The Smallest String Concatenation

The problem was suggested by Lewin Gan lewin. The proof of the transitivity also belongs to him.

Let's sort all the strings with the comparator *a* + *b* < *b* + *a* and concatenate them. Let's prove that this gives the optimal answer. Suppose for now that the relation is transitive (so if *a* + *b* < *b* + *a* and *b* + *c* < *c* + *b*, then *a* + *c* < *c* + *a*). Consider an optimal answer containing two strings in reverse order with respect to this relation. By transitivity we can assume that some such pair of strings is adjacent. But then we could swap them and get a better answer, a contradiction.

Let's prove the transitivity of the relation. Consider the strings as numbers written in base 26. Then the relation *a* + *b* < *b* + *a* means *a*·26^{|*b*|} + *b* < *b*·26^{|*a*|} + *a*, which is equivalent to *a* / (26^{|*a*|} - 1) < *b* / (26^{|*b*|} - 1). The last is simply a relation between real numbers, so we have proved the transitivity of the relation *a* + *b* < *b* + *a*.
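For illustration, the comparator sort can be sketched like this (a sketch, not the linked solutions):

```python
from functools import cmp_to_key

def smallest_concatenation(strings):
    """Sort with the comparator a + b < b + a, then concatenate."""
    def cmp(a, b):
        # compare the two possible orders of the pair directly
        if a + b < b + a:
            return -1
        if a + b > b + a:
            return 1
        return 0
    return "".join(sorted(strings, key=cmp_to_key(cmp)))
```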

C++ solution by me.

Python solution by lewin.

Complexity: *O*(*nLlogn*), where *L* is the maximal string length.

### 632D - Longest Subsequence

The problem was suggested by Denis Bezrukov pitfall.

Let *cnt* _{ x} be the number of occurrences of the number *x* in the given array (it is easy to see that we can ignore the numbers greater than *m*). Let's iterate over all pairs (*x*, *k*) with 1 ≤ *k* and *x*·*k* ≤ *m* and increase the value in position *k*·*x* of some array *z* by the value *cnt* _{ x}. Then the value *z* _{ l} equals the number of elements of the given array which divide *l*. Let's find the minimal *l* (1 ≤ *l* ≤ *m*) with the maximum value *z* _{ l}. It is easy to see that the answer to the problem consists of the elements which divide *l*.
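A sketch of this counting (returning the chosen *l* and the 1-based indices of the answer; an illustration, not the linked solutions):

```python
def longest_subsequence(a, m):
    """z[l] = how many elements of a divide l; take the smallest l with
    the maximum z[l], the answer is all elements dividing that l."""
    cnt = [0] * (m + 1)
    for v in a:
        if v <= m:            # elements greater than m can be ignored
            cnt[v] += 1
    z = [0] * (m + 1)
    for x in range(1, m + 1):
        if cnt[x]:
            for multiple in range(x, m + 1, x):
                z[multiple] += cnt[x]
    # maximum count, ties broken towards the smallest l
    best_l = max(range(1, m + 1), key=lambda l: (z[l], -l))
    indices = [i + 1 for i, v in enumerate(a) if best_l % v == 0]
    return best_l, indices
```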

Let's calculate the complexity of the solution. The number of pairs (*k*, *x*) is bounded by *m*/1 + *m*/2 + ... + *m*/*m* = *O*(*mlogm*).

C++ solution by me.

Java solution by pitfall.

Complexity: *O*(*n* + *mlogm*).

### 632E - Thief in a Shop

The problem was suggested by Alexey Chesnokov CleRIC.

Let *k* = 2; then this is a standard problem which can be solved by FFT (Fast Fourier Transform). The solution is the following: consider the polynomial whose *i*-th coefficient equals one if and only if the number *i* occurs in the given array. Let's multiply that polynomial by itself and find all *i* for which the coefficient in the square is non-zero. Those values *i* form the answer. It is easy to modify the solution for arbitrary *k*: we should simply raise the polynomial to the *k*-th power. The complexity is *O*(*WlogWlogk*), where *W* is the maximal sum.
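The idea can be sketched as follows, with naive multiplication standing in for FFT, so the sketch shows only the set-of-sums semantics (which totals are reachable), not the claimed complexity:

```python
def reachable_sums(a, k, W):
    """All totals obtainable as a sum of exactly k values from a
    (each value may be used many times), up to the bound W."""
    poly = [0] * (W + 1)          # 0/1 coefficients of the base polynomial
    for v in a:
        if v <= W:
            poly[v] = 1

    def mul(p, q):                # naive stand-in for an FFT multiply
        r = [0] * (W + 1)
        for i in range(W + 1):
            if p[i]:
                for j in range(W + 1 - i):
                    if q[j]:
                        r[i + j] = 1
        return r

    result, base, e = None, poly, k
    while e:                      # binary exponentiation of the polynomial
        if e & 1:
            result = base if result is None else mul(result, base)
        e >>= 1
        if e:
            base = mul(base, base)
    return [i for i, c in enumerate(result) if c]
```

Replacing `mul` with an FFT-based multiplication gives the stated *O*(*WlogWlogk*) bound.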

We can improve that solution. Instead of computing the *k*-th power of the polynomial we can compute the *k*-th powers of the values of its DFT. The only problem is the large values of those *k*-th powers. We can't use FFT over complex numbers because of precision problems, but we can do it with NTT (the number-theoretic transform). That solution also has a problem: it can happen that some coefficients become zero modulo *p* while actually being non-zero. To get around that problem we can choose two or three random moduli, getting the complexity *O*(*W*(*logW* + *logk*)).

The main author's solution has complexity *O*(*WlogWlogk*) (FFT over complex numbers), the second solution has the same complexity but uses NTT, and the third solution has the improved complexity (but it was already hacked by halyavin).

C++ solution, complex FFT by me.

C++ solution, NTT by me.

C++ solution, improved NTT by me.

C++ solution by CleRIC.

P.S.: To get a faster solution you should each time multiply polynomials of the required degree, not of degree 2^{20}.

Complexity: *O*(*WlogWlogk*) or *O*(*W*(*logW* + *logk*)), depending on the bravery of the coder :-)

UPD: It turns out that the first approach also has complexity *O*(*W*(*logW* + *logk*)). See the comment by halyavin below.

### 632F - Magic Matrix

The problem was suggested by Lewin Gan lewin. The solution and proof also belongs to him.

Consider the undirected complete graph with *n* nodes, with an edge between nodes *i*, *j* with cost *a* _{ ij}. Let *B* _{ ij} denote the minimum possible value of the max edge of a path from *i* to *j*. We know that *a* _{ ij} ≥ *B* _{ ij} by definition.

If the matrix is magic, then for arbitrary *k* _{1}, *k* _{2}, ..., *k* _{ m} we have *a* _{ ij} ≤ *max*(*a* _{ i, k 1}, *a* _{ k 1, k 2}, ..., *a* _{ k m, j}), by repeated invocations of the given inequality. Conversely, if this inequality is satisfied, then the matrix is magic (choose *m* = 1 and *k* _{1} arbitrary).

So, this shows that the matrix is magic if and only if *a* _{ ij} ≤ *B* _{ ij}. Thus, combining with *a* _{ ij} ≥ *B* _{ ij}, we have *a* _{ ij} = *B* _{ ij}.

We need a fast way to compute *B* _{ ij} for all pairs *i*, *j*. This can be done with the MST, as the path in the MST minimizes the max edge between any pair of nodes. So the algorithm works as follows. First, find the MST of the complete graph. Then the matrix is magic if and only if, for every pair *i*, *j*, the max edge on the MST path between *i* and *j* is exactly equal to *a* _{ i, j}. Also you shouldn't forget to check the symmetry of the matrix and that the diagonal is zero.
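A quadratic sketch of the whole check (Prim's algorithm in *O*(*n* ^{2}) plus a traversal from every node; an illustration, the linked solutions use binary lifts instead):

```python
def is_magic(a):
    """Magic iff symmetric, zero diagonal, and a[i][j] equals the
    max edge on the MST path between i and j for every pair."""
    n = len(a)
    for i in range(n):
        if a[i][i] != 0:
            return False
        for j in range(n):
            if a[i][j] != a[j][i]:
                return False
    # Prim's algorithm on the complete graph, O(n^2)
    INF = float('inf')
    in_tree = [False] * n
    dist = [INF] * n
    parent = [-1] * n
    dist[0] = 0
    adj = [[] for _ in range(n)]  # adjacency lists of the MST
    for _ in range(n):
        u = min((v for v in range(n) if not in_tree[v]), key=dist.__getitem__)
        in_tree[u] = True
        if parent[u] != -1:
            adj[u].append(parent[u])
            adj[parent[u]].append(u)
        for v in range(n):
            if not in_tree[v] and a[u][v] < dist[v]:
                dist[v] = a[u][v]
                parent[v] = u
    # max edge on the MST path from every root to every other node
    for root in range(n):
        maxedge = [0] * n
        stack = [(root, -1)]
        while stack:
            u, p = stack.pop()
            for v in adj[u]:
                if v != p:
                    maxedge[v] = max(maxedge[u], a[u][v])
                    stack.append((v, u))
        for j in range(n):
            if j != root and a[root][j] != maxedge[j]:
                return False
    return True
```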

P.S.: Unfortunately we couldn't increase the value *n* in this problem: the tests already had size about 67MB and they couldn't be given via a generator. So most of the users who solved this problem use *bitset*-s. The complexity of their solution is *O*(*n* ^{3} / *b*), where *b* = 32 or *b* = 64.

C++ solution, binary lifts by me.

Java solution by lewin.

Complexity: *O*(*n* ^{2} *logn*) or *O*(*n* ^{2}).

I'm curious. Does there exist an efficient solution to this problem if the condition is instead:

*a* _{ ij} ≤ *a* _{ ik} + *a* _{ kj}?

It seems you can simply run the Floyd algorithm and check that the matrix remains the same.

But the solution will be *O*(*N* ^{3}) then. I think he asked if there is a solution for the same size of the data (*O*(*N* ^{2} *log*(*N*)) or better).

Are you planning to write intended solutions for the other problems?

Another approach to F: the set of all edges that are shorter than any fixed constant must be transitively closed. This property can be checked by filling the graph starting from the shortest edges. Once all edges of the same length are added, we need to check that all connected components are complete graphs. We can compare the number of added edges with the sizes of the connected components to do that.
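This check can be sketched with a DSU that also counts the edges inside each component (my illustration of the idea, adding each weight class as one batch):

```python
def is_magic_dsu(a):
    """Add edges by increasing weight; after each weight class every
    connected component must be a clique: edges == size*(size-1)/2."""
    n = len(a)
    if any(a[i][i] != 0 for i in range(n)):
        return False
    if any(a[i][j] != a[j][i] for i in range(n) for j in range(n)):
        return False
    edges = sorted((a[i][j], i, j) for i in range(n) for j in range(i + 1, n))
    parent = list(range(n))
    size = [1] * n
    ecnt = [0] * n  # edges inside the component rooted at this node

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    idx, m = 0, len(edges)
    while idx < m:
        w = edges[idx][0]
        while idx < m and edges[idx][0] == w:  # add one weight class
            _, u, v = edges[idx]
            ru, rv = find(u), find(v)
            if ru == rv:
                ecnt[ru] += 1
            else:
                if size[ru] < size[rv]:
                    ru, rv = rv, ru
                parent[rv] = ru
                size[ru] += size[rv]
                ecnt[ru] += ecnt[rv] + 1
            idx += 1
        for r in range(n):  # every component must be a complete graph
            if find(r) == r and ecnt[r] != size[r] * (size[r] - 1) // 2:
                return False
    return True
```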

what about editorial for problem D?

It's ready.

If you use the minimum power-of-two size instead of 2^20 in problem E, you will get only O(W*log(W)), because the time complexity of all multiplication steps except the last one doesn't matter: the steps' cost decreases exponentially. The i-th step, counting from the end, only works with data of size W/2^i.

Nice! I'll update the editorial.

beautiful proof for C. keep the educational rounds coming : )

I also think so. Thanks to lewin for that beautiful proof!

So are you saying the first model solution is incorrect? Either way, you could also choose to clean up the vector after each multiplication (map each element x to sign(x)), this seems to work fairly well: 16472285

We will have precision problems only for the improved solution. Of course the model solution is correct and there are no precision problems for the first approach.

In problem C I don't get why we can assume that the pair of strings is neighbouring because of the transitivity?

Please somebody help me !!!

We just assume that the answer is S = s[1].s[2].s[3]....s[n] (some strings); in this case, proving by contradiction, there exist i and j such that i < j and s[i].s[j] > s[j].s[i]. Then we can check i and i+1; if that pair is all right, then i+1 and i+2, and so on until we reach j.

can anyone explain logic behind 16447028 submission?

First, it takes the lowest value 'mn' out of all the ai's.

Say we somehow get to some PSEUDO total cost 'i' by using these modified ai values, and we used y slots out of the original k slots (the order does not matter); here y <= k. Then we can get an ACTUAL total cost if we add 'mn' in ALL the 'k' slots (in ALL 'k' slots, not only in the remaining k-y slots). The new ACTUAL total cost = i + mn*k.

Here dp[i] means: to get to this PSEUDO total cost 'i' we can use a MINIMUM of dp[i] slots out of the k slots. That is, dp[i] is the minimum 'y' among the possible 'y's.

There may be confusion about why we work with the smallest one. It's because if we choose any other ai, then after the subtraction some 'ai' values would become negative, which is not acceptable.

Another point is that it may seem we are using 'mn' in every slot. But actually this is not the case: we have already done a subtraction.

I hope it helps.

You just translated code into words. Though my question was about proof of correctness of the mentioned submission. I am sorry, but you did not provide the proof.

It's a solution with complexity O(n*m*max(a)) = O(10^9).

He calculates dp[x] = the minimum number of objects with price a[i] - min(a) that the thief needs so that the total cost is x.

If dp[x] <= k, he can add as many products with price min(a) - min(a) = 0 as he needs in order to fill the bag with k products.

But the main problem here is that calculating with this formula takes too much time. You can read the code and see it.

Also found a simple dp solution for Problem E. 16454858 by yummy

Can you please explain this solution with a bit more detail? I am not really familiar with FFT concepts so it will be really helpful if you can explain the dp solution.

This comment may help .

I got the basic gist of the solution, but one thing: why do we need to subtract the min cost from each cost when in the end we are adding it again to get the final result?

That is right. If we consider a possible final combination with some modified ai values , then all we have to do is to add that 'minimum' in each of the 'k' slots.

What I am asking here is: why is it necessary to subtract the min element? Can we do it without subtracting any element?

No, we can't do without subtracting the minimum element. Once we subtract the minimum element, we reduce the problem to a case where one number equals zero. If one number equals zero, the problem can be changed to require at most k numbers in the sum instead of exactly k numbers. This allows us to use dynamics where we store, for each sum, the minimum amount of numbers needed to get it.
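A sketch of that dynamic programming (names are mine; `W` bounds the largest reachable modified sum):

```python
def thief_sums_dp(a, k):
    """dp[x] = minimum number of modified values (a[i] - mn) summing to x.
    Any x with dp[x] <= k can be padded with zero-valued items up to
    exactly k items, giving the real total x + k * mn."""
    mn = min(a)
    vals = sorted(set(v - mn for v in a) - {0})
    W = (max(a) - mn) * k         # largest reachable modified sum
    INF = k + 1
    dp = [INF] * (W + 1)
    dp[0] = 0
    for x in range(1, W + 1):
        for v in vals:
            if v > x:
                break
            if dp[x - v] + 1 < dp[x]:
                dp[x] = dp[x - v] + 1
    return [x + k * mn for x in range(W + 1) if dp[x] <= k]
```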

Thanks @halyavin. I solved the question. But I still have one concern about its complexity. The complexity of the solution as explained in the editorial is O(akn). I also found the same complexity. This seems to be of the order of 10^9. Isn't that too high to pass? How do we decide whether some complexity fits a given time limit?

You are correct, it is 10^9. But the TL is 5 seconds for this problem, so it turned out to be ok. I think the reason this solution works so well is its cache-friendliness: it accesses the dynamic programming array sequentially, although in 2 places up to |a| elements apart.

In general, it is hard to predict solution runtime even for me. I try to estimate cache-friendliness, the complexity of the operations in the algorithm and the likelihood that I found the author's solution, but I still regularly end up quite far from the real runtime. Luckily, it doesn't always matter: if you have found only one solution, there is no downside in trying it.

why second test case of problem F is not magic?

got it

The condition in problem F is given as *a* _{ ij} ≤ *max*(*a* _{ ik}, *a* _{ jk}), but in the editorial it is *a* _{ ij} ≤ *max*(*a* _{ i, k 1}, *a* _{ k 1, k 2}, ..., *a* _{ k m, j}). I think *a* _{ jk} and *a* _{ k m, j} aren't the same.

I think that C isn't fully proved. Suppose we sorted the strings using the comparator and have a subarray s[i]...s[j] (i < j) such that s[i] = s[i+1] = ... = s[j], where s[i] = s[j] means s[i]s[j] = s[j]s[i]. Then we need to prove that, whichever permutation of the subarray we take, we will get the same concatenation.

Can someone explain why m + m/2 + m/3 + ... + m/m = O(m log m)?

Never mind, proved it myself. m + m/2 + m/3 + ... + m/m = m(1 + 1/2 + 1/3 + ... + 1/m). The expression in brackets is the harmonic series, which is O(log m), so in total we get O(m log m).

In problem A: Complexity: `O(n)`; `O(p)` is incorrect!!

In problem F, what does the triple mean? Does it stand for a Pythagorean triple?

Problem E: Instead of reducing polynomial exponentiation to polynomial multiplication with the binary algorithm, can't you just do fft -> point-wise exponentiation -> ifft? Same complexity, fewer ffts. However, and this is my second question: is an fft solution possible in Python? See 22923079.

Can anyone explain DP solution of problem E. Thanks in advance:)