The analysis is being updated.

Thanks Arpa for the proofs!

I used binary search to pass div1.E (it has been accepted), but later I found it was probably wrong. Can someone provide a countertest or prove it is right? Thanks! Code goes here: http://codeforces.com/contest/827/submission/28488942

http://codeforces.com/contest/827/submission/28492511 It runs even faster than some FFT solutions.

Please test this case

It seems that your solution can't produce a result within several minutes.

Oh, I feel really sorry about it. During preparation of this task I wrote many tests to check the correctness of solutions' answers, but against slow solutions I invented only an O(cnt_V · cnt_K) solution. The comment below shows that it's possible to fail this solution using a random test with a good distribution of letters, but I didn't think that tests with this distribution could be bad for some solutions. Now I have looked through all solutions accepted during the contest; most of them use FFT or NTT, and only 2 or 3 looked incorrect to me, so I hope it didn't affect participants too much. Anyway, it was my big mistake and I apologize for it.

It is even more bruteforce than you think, because:

It should be

instead of

`vis[j]=0`

(which does nothing). And the result is TL with more than 10 s execution time.

Thanks. I ran the test on my machine; for a long time there were no results. This test data should be added.

827D — Лучший вес для ребра ("Best Edge Weight") — you forgot to translate the problem name.

I still couldn't quite understand the analysis for div2E, can someone explain it to me please?

In div2 C, can I sort an array with 10^6 elements? I thought I could only sort arrays of ~10^5 elements and still pass the TL. I mean the array of <startPos, string_id> pairs in this problem.

Sorting takes O(n log n) time, so roughly (1000000 * 20) / 100000000 = 0.2 sec.
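As a rough sanity check of that estimate (my own sketch, not from the thread; the function name is hypothetical): sorting 10^6 <startPos, string_id> pairs with `std::sort` finishes well within typical time limits.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Sketch: build n random <startPos, string_id> pairs and sort them,
// as estimated in the comment above.
vector<pair<int,int>> sortedOccurrences(int n) {
    mt19937 rng(12345);                       // fixed seed for reproducibility
    vector<pair<int,int>> v(n);
    for (int i = 0; i < n; ++i)
        v[i] = { (int)(rng() % 1000000), i }; // {start position, string id}
    sort(v.begin(), v.end());                 // O(n log n) comparisons
    return v;
}
```

On typical judges this runs in well under a second for n = 10^6.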

Sorting 10^6 `int`s (or `char`s) is usually fast enough in C++.

Got it, big thanks!

How to use centroid decomposition in hint 3 part of div1D?

For problem div1D, after getting TLE with NTT (number-theoretic transform), I have a question:

Why is NTT slower than FFT? FFT uses complex numbers, so it's clear that each of its operations should cost at least twice as much as an integer operation.

But the result is quite different from what I expected: FFT gets accepted easily, while NTT can't finish within 3000 ms.

So I can't help wondering: did I write something wrong, or is NTT really slower than FFT?

Maybe the modulo operation "%" is very slow...

828C - String Reconstruction I did the same as is given in the tutorial.

I am getting MEMORY LIMIT EXCEEDED on test 8.

It would really be of great help if anybody could take a look.

Thanks!!!

Code: 28528063

Use vector<pair<int,int>> instead of vector<pair<int,string>>. In pair<int,int>, first is the start index of the occurrence and second is the index of the string; store the strings themselves in a separate vector.

Also, don't do string tobe = v[i].second; — it may give TLE.

Thanks, the MLE got eliminated and now I am getting TLE on test 8. What might I be doing wrong? Submission: 28528640

Use vec[v[i].second] instead of tobe. You can check the reason here: http://www.cplusplus.com/reference/string/string/operator=/

Thanks!!! Now it got AC.

But I still don't get how just storing the string in a variable can affect performance.

You can check the complexity of the assignment operator for std::string — it copies the contents, so it is linear in the string's length.
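To illustrate why the copy matters (a minimal sketch of my own, not the submission's code; names are made up): `std::string` assignment copies the characters, so copying inside the loop costs O(|s|) per occurrence, while a const reference or an index lookup is O(1).

```cpp
#include <bits/stdc++.h>
using namespace std;

// Sum the lengths of all occurrences without copying any string.
// `occ` holds <position, string index> pairs, as advised above.
long long totalLength(const vector<string>& strs,
                      const vector<pair<int,int>>& occ) {
    long long sum = 0;
    for (const auto& [pos, id] : occ) {
        const string& s = strs[id];  // reference only, no O(|s|) copy
        sum += (long long)s.size();
    }
    return sum;
}
```

Writing `string s = strs[id];` instead would deep-copy the string on every iteration.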

problem name: 828C - String Reconstruction

verdict: Memory Limit Exceeded

my submission: 28526470

Any suggestion please?

See discussion above.

Used the concept of DSU in div2C, code: 28475861

Can you explain your idea a bit more?

sure,

but first you must understand my initial thought process. Here is the code 28455243, which got TLE. In it I simply walk to the next empty position, but not efficiently. For example, suppose you have "dddd 1 1"; then the vis array (which holds the distance to the next unfilled position) will be [4,3,2,1], which tells how far forward we have to move to reach the next unfilled position. Then for "dddd 2 2 3" the array becomes [4,3,2,1,1,1,1] (note that we first jump to position 4). Now observe that on large inputs (like test case 8) we can get 1,1,1,1,... for a long stretch, and each time a new string is entered we have to move many positions one at a time, so the complexity increases many fold.

So I thought about how to reduce the complexity to O(n). I viewed the array as a segment: whenever I fill a sub-segment, I update the array so that I can jump directly to the next unfilled sub-segment, instead of moving one cell at a time in the worst case. Just as the find operation in DSU is (amortized) O(1) with path compression, here too I can jump to the next segment in (amortized) O(1) time, and I fill the result array using currentpos (P) minus the given position of the string (pos) to get the exact character at that unfilled spot. If the next unfilled spot is out of bounds, I simply ignore it.

I hope now you can understand the concept and my code. The only thing left is why this is correct. If you think carefully it is easy: once a position is filled we never need to fill it again, since the data is not contradictory. So we access each unfilled position only once, and we don't need to scan the whole string, only its specific characters. There are at most 1e6 given positions, hence O(n) complexity. Path compression came to my rescue.
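A minimal sketch of the idea described above (my own code with assumed names, not the linked submission): keep nxt[i] = the first unfilled cell at or after i, and compress paths while jumping. Unfilled cells default to 'a', as the problem asks for the lexicographically minimal answer.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Path-compressed "first unfilled cell at or after x" pointer.
int findNext(vector<int>& nxt, int x) {
    while (nxt[x] != x) {
        nxt[x] = nxt[nxt[x]];  // path halving
        x = nxt[x];
    }
    return x;
}

// occ: <0-based start position, occurring string> pairs.
string reconstruct(int len, const vector<pair<int,string>>& occ) {
    string res(len, 'a');             // unfilled cells stay 'a'
    vector<int> nxt(len + 1);
    iota(nxt.begin(), nxt.end(), 0);  // every cell initially unfilled
    for (const auto& [pos, s] : occ) {
        for (int j = findNext(nxt, pos); j < pos + (int)s.size();
             j = findNext(nxt, j)) {
            res[j] = s[j - pos];      // each cell is written at most once
            nxt[j] = j + 1;           // cell j is now filled; skip it next time
        }
    }
    return res;
}
```

For the real 10^6-scale input, occurrences would store string indices instead of copies, as discussed earlier in the thread.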

Can someone please help me visualize how the BITs are actually built in div2E/div1C?

In 827A, "sort all given strings by their positions" — I wonder how to do that.

And if we scanf all of the information, it will read at most 10^5 * 10^6 values — won't it TLE?

There can be at most 10^6 values to scanf.

I didn't see that clearly... sorry.

MLE in 827A — String Reconstruction.

Here is my solution, please help.

consider the case when |s|=10^6 and k=10^6 ...

thanks, it worked :D

AC

Where is the proof of 827B? Why is it not published yet?

For 827B High Load: "Then the depths of all leaves are not greater than d/2." I don't think it is true that if a tree has diameter d, then all of the leaves have depth <= d/2. As a counterexample, take the edges (2,1), (1,3), (3,4), (4,5), (5,6): the diameter is 5, while the depth of leaf 6 is 4, which is > 5/2.

And I don't think we need the whole argument with depths, as it's pretty intuitive that rehanging that path to the root does not increase the diameter.
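The counterexample above can be checked mechanically (a quick sketch of my own, using the standard double-BFS for the diameter):

```cpp
#include <bits/stdc++.h>
using namespace std;

// {farthest node, distance} from src via BFS.
pair<int,int> bfsFar(const vector<vector<int>>& adj, int src) {
    vector<int> d(adj.size(), -1);
    queue<int> q;
    q.push(src); d[src] = 0;
    int best = src;
    while (!q.empty()) {
        int v = q.front(); q.pop();
        if (d[v] > d[best]) best = v;
        for (int to : adj[v])
            if (d[to] < 0) { d[to] = d[v] + 1; q.push(to); }
    }
    return {best, d[best]};
}

// Diameter = distance found by a second BFS from the first farthest node.
int diameter(const vector<vector<int>>& adj, int anyNode) {
    return bfsFar(adj, bfsFar(adj, anyNode).first).second;
}

// The tree from the comment above: edges 2-1, 1-3, 3-4, 4-5, 5-6.
vector<vector<int>> counterexample() {
    vector<vector<int>> adj(7);   // 1-based, nodes 1..6
    int e[5][2] = {{2,1},{1,3},{3,4},{4,5},{5,6}};
    for (auto& p : e) {
        adj[p[0]].push_back(p[1]);
        adj[p[1]].push_back(p[0]);
    }
    return adj;
}
```

Running it confirms a diameter of 5 while leaf 6 sits at depth 4 from vertex 1.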

In Div 1 D, for the MST-edges part, what is the solution using centroid decomposition?

Solution using centroid decomposition 28779873

The idea: find the centroid of the current tree and call it C. Root the current tree at C and call it T. Find all bad edges E such that C is the highest centroid lying on the MST path from E.u to E.v. This can be done by checking whether E connects two different subtrees of T.

Consider the paths (C, E.u) and (C, E.v). Minimize the answer of all the MST edges lying on these two paths with E.w − 1. (This is like taking the minimum of a prefix of some array with a given value!)

Delete C from the tree and repeat the same process for all the new trees created.
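The recursion described above follows the standard centroid-decomposition skeleton; here is a generic sketch (my own code with assumed names, omitting the problem-specific path updates):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Generic centroid decomposition: repeatedly find the centroid of the
// current piece, "delete" it, and recurse into the remaining pieces.
struct CentroidDecomp {
    vector<vector<int>> adj;
    vector<bool> removed;
    vector<int> sz, par;   // par[v] = parent of v in the centroid tree

    CentroidDecomp(int n) : adj(n), removed(n, false), sz(n), par(n, -1) {}

    void addEdge(int u, int v) { adj[u].push_back(v); adj[v].push_back(u); }

    int calcSize(int v, int p) {
        sz[v] = 1;
        for (int to : adj[v])
            if (to != p && !removed[to]) sz[v] += calcSize(to, v);
        return sz[v];
    }

    // Walk toward the heavy subtree until no subtree exceeds half the size.
    int findCentroid(int v, int p, int treeSize) {
        for (int to : adj[v])
            if (to != p && !removed[to] && sz[to] * 2 > treeSize)
                return findCentroid(to, v, treeSize);
        return v;
    }

    void build(int v = 0, int p = -1) {
        int c = findCentroid(v, -1, calcSize(v, -1));
        // ...problem-specific work on the piece rooted at c goes here...
        par[c] = p;
        removed[c] = true;          // delete c and recurse into the pieces
        for (int to : adj[c])
            if (!removed[to]) build(to, c);
    }
};
```

In the linked solution, the "problem-specific work" is processing the bad edges whose MST path passes through c and applying the prefix-minimum updates along (c, u) and (c, v).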

Hi all,

Thank you for the comments on the problems. However, regarding problem 827C DNA Evolution, I do not understand the english of the second paragraph. Can someone please clarify for me the second paragraph? I would like to get a clearer idea about the presented solution.

Cheers,

div1.C is solvable using sqrt(n) decomposition too. The segment size must be a number S such that S % i = 0 for all i in 1..10. The minimum such S is 2520, and since it's too big I computed the answer for two segment sizes: once with S = 360 for |e| in {1,2,3,4,5,6,8,9,10}, and once with S2 = 357 for |e| = 7. We can choose any other pair of numbers such that for every i in [1..10], S % i = 0 or S2 % i = 0. This is my implementation.

Actually, it's possible to use S = 2520. It takes more time, but still runs under a second.
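The choice of sizes checks out arithmetically (a quick sketch of my own; the helper name is made up): lcm(1..10) = 2520, 360 covers every period except 7, and 357 = 7 * 51 covers it.

```cpp
#include <bits/stdc++.h>

// lcm of 1..k; for k = 10 this is the minimal S with S % i == 0 for all i.
long long lcmUpTo(int k) {
    long long l = 1;
    for (int i = 2; i <= k; ++i)
        l = std::lcm(l, (long long)i);   // <numeric>, C++17
    return l;
}
```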

Div 1 E: "So our task reduced to finding all possible numbers representable as the sum of a number from A and a number from B." Why is that?

Can anyone elaborate on the proof of High Load (or restate it in simpler terms)?