We hope you liked the problems!

We are sorry that some wrong solutions passed for problems G & H.

Tutorial is loading...


You can also check ivan100sic's comment for another variant of this solution.

nice problemset, me likey

problemset is fancy !!

Your photo is so beautiful! me likey.

Can you please add implementation of described approaches?


Thanks for very good contest and very fast editorial!

I wrote a segment tree solution for H and it seems that in some cases it will become $$$O(qn)$$$. However, it passed the main tests.

After the contest, I fixed it and it should be $$$O(n+q \log^2 n)$$$ now. May explain it later. (need to sleep now XD)

Wrong (UPD: Correct) solution: 67927407. Fixed solution: 67931310.

UPD: I figured out that my original solution was correct. You don't need to see the fixed solution, just see the original one. It has a time complexity of $$$O(n+q \log^2 n)$$$ in the worst case. It works fast because of the small constant factor, especially on random cases. :D

I also wrote an $$$O(q \log^2 n)$$$ solution. So in this problem, $$$O(q \log^2 n)$$$ solutions may be faster than $$$O(q \log n)$$$ ones.

Not necessarily, my $$$\mathcal{O}(q \log^{2} n)$$$ solution (67941714) took 6925 ms after lots of constant optimisation :Dd

My solution is $$$O(q \sqrt n )$$$ (67924579). After checking TL, I didn't think there would be any log time complexity solution :D

Can you explain your idea?

Build a segment tree on the numbers. For every edge between two positions $$$x < y$$$, there is some node in the segment tree such that the left interval contains $$$x$$$ and the right interval contains $$$y$$$. Hence we can maintain all edges by maintaining for every segment tree node the edges between its left interval and right interval.

To do this for some segment tree node, find the minimum in the left interval, $$$\min(L)$$$, and the maximum in the right interval, $$$\max(R)$$$. If $$$\min(L) > \max(R)$$$, there are no such edges. Otherwise, find the leftmost $$$x$$$ in $$$L$$$ where $$$a[x] < \max(R)$$$, and the rightmost $$$y$$$ in $$$R$$$ where $$$a[y] > \min(L)$$$. This can be done in $$$\mathcal{O}(\log n)$$$. Then all numbers in the interval $$$[x, y]$$$ must be in the same component: if $$$y' \leq y$$$, then either $$$a[y'] < \min(L) < a[y]$$$ and $$$y', y$$$ are in the same component, or $$$a[y'] > \min(L)$$$, and then the minimum in $$$L$$$ has an edge to both $$$y$$$ and $$$y'$$$, hence $$$y', y$$$ are in the same component. Finally, there is an edge between the minimum in $$$L$$$ and the maximum in $$$R$$$. This also proves that all components consist of consecutive elements.

In another segment tree, we will maintain values $$$v[i]$$$ s.t. $$$a$$$ and $$$b$$$, $$$a < b$$$ are in the same component iff $$$\min_{v}([a+1, b]) > 0$$$. Then the number of components is the number of zeroes in this segment tree. To count the number of zeroes, just store range minimum and count of minimum ($$$v[i]$$$ will never be negative). To update the values $$$v$$$, whenever we find $$$x, y$$$ in the first segment tree, add $$$-1$$$ to $$$[px+1, py]$$$ and $$$1$$$ to $$$[x+1, y]$$$, where $$$px, py$$$ is the previous pair of $$$x, y$$$ for that node. This operation also takes $$$\mathcal{O}(\log n)$$$ time.

Whenever we make an update, for each of $$$\mathcal{O}(\log n)$$$ segment tree nodes we do $$$\mathcal{O}(\log n)$$$ work in the form of three segment tree queries, hence the time complexity is $$$\mathcal{O}(q \log^{2} n)$$$.
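For readers who want to see the counting structure concretely, here is a minimal sketch (my own code, not the commenter's submission) of the second segment tree described above: range add with lazy propagation, where the number of zeros is recovered from the minimum and the count of the minimum, assuming stored values never go negative:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Segment tree with lazy range-add that reports the number of zeros,
// assuming all stored values stay non-negative (so zeros, when
// present, are exactly the positions achieving the minimum).
struct ZeroCounter {
    int n;
    vector<long long> mn, lz;
    vector<int> cnt;
    ZeroCounter(int n) : n(n), mn(4 * n, 0), lz(4 * n, 0), cnt(4 * n, 0) {
        build(1, 0, n - 1);
    }
    void build(int v, int l, int r) {
        if (l == r) { cnt[v] = 1; return; }
        int m = (l + r) / 2;
        build(2 * v, l, m);
        build(2 * v + 1, m + 1, r);
        cnt[v] = cnt[2 * v] + cnt[2 * v + 1];
    }
    void push(int v) {
        for (int c : {2 * v, 2 * v + 1}) { mn[c] += lz[v]; lz[c] += lz[v]; }
        lz[v] = 0;
    }
    void add(int v, int l, int r, int ql, int qr, long long x) {
        if (qr < l || r < ql) return;
        if (ql <= l && r <= qr) { mn[v] += x; lz[v] += x; return; }
        push(v);
        int m = (l + r) / 2;
        add(2 * v, l, m, ql, qr, x);
        add(2 * v + 1, m + 1, r, ql, qr, x);
        mn[v] = min(mn[2 * v], mn[2 * v + 1]);
        cnt[v] = (mn[2 * v] == mn[v] ? cnt[2 * v] : 0) +
                 (mn[2 * v + 1] == mn[v] ? cnt[2 * v + 1] : 0);
    }
    void add(int l, int r, long long x) { if (l <= r) add(1, 0, n - 1, l, r, x); }
    // number of zeros in the whole array (= number of components above)
    int zeros() const { return mn[1] == 0 ? cnt[1] : 0; }
};
```

With this structure, the `add(-1, ...)`/`add(+1, ...)` updates from the comment and the final `zeros()` query are all $$$\mathcal{O}(\log n)$$$.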

Did anyone solve E by considering a graph? Please share your solution if you did.

Well, I kinda did. I believe I have an $$$O(n^3)$$$ randomized solution that passes in 982 ms :p (can't prove it though)

Since a solution exists, there are at least $$$n - 1$$$ blue edges (connecting points from different groups), so we can pick a random edge; the expected number of tries until we hit a correct one is $$$O(n)$$$. From there we pick all the edges that have the same length and do a dfs to divide the points between two groups, then just check if the conditions are met in $$$O(n^2)$$$.

code

edit: it doesn't work, bad tests

My solution is in the same spirit as the editorial. Compute all pairwise distances. While they're all even, divide the distances by 2. Finally, take all the edges mod 2.

Then we have a complete graph with each edge either 0 or 1. Find a 2 coloring of this graph and output it. This is way more complex than the editorial solution, but still passes systests.

My code also doesn't handle the case when every distance is odd. Is that possible? Pls hack.

I did. Consider the squares of all distances and calculate their gcd. Now if $$$dist^2 / \gcd \equiv 0 \pmod 2$$$, the two points must be in the same component. There are at least 2 groups because after dividing all squared distances by their gcd at least one of them is odd, and there are at most two groups because the distances satisfy $$$(AB^2 / \gcd) \cdot (BC^2 / \gcd) \equiv (AC^2 / \gcd) \pmod 2$$$, where A, B and C are some points.

UPD: Downgrading the color didn't go as planned. The code for this solution is stupidly short and the complexity is $$$O(n + \log C)$$$, which is better than the editorial: 67939313

Damn your mathematics!!! Nice work!

First. It's important to note that you don't consider all distances: you consider the distances from the first point to all the others. After dividing the distances by the $$$\gcd$$$ you will indeed have at least one odd distance. Otherwise, they would all still be even after the division, which is a contradiction.

Second. I don't understand your formula with A, B, C. Also, I don't see how it leads to pairwise distances.

Finally: complexity is $$$O(n \log(C))$$$, because you do $$$gcd$$$ $$$n$$$ times.

No, complexity is really O(N + log(C)). Not all gcd calls are made equal! You can think like this: there will be at most log(C) times that G will be reduced to a divisor of G in gcd, the rest of the time gcd(something, G) will return G directly.
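To illustrate the amortization, here is a small sketch (function name is mine): a single loop computing the gcd of a whole array. Any call where the running gcd already divides the next element finishes in one step, and the running gcd can strictly decrease at most $$$\log C$$$ times in total, so the loop is $$$O(n + \log C)$$$ overall rather than $$$O(n \log C)$$$:

```cpp
#include <bits/stdc++.h>
using namespace std;

// gcd over a whole array: although a single gcd(g, x) call can take
// O(log) steps, the running g only shrinks O(log C) times in total,
// and every call where g already divides x returns after one
// division, so the whole loop is O(n + log C).
long long gcd_all(const vector<long long>& a) {
    long long g = 0; // gcd(0, x) == x, so 0 is the identity
    for (long long x : a) g = std::gcd(g, x);
    return g;
}
```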

Alright, thanks. Interesting stuff. Question about A, B, C is still up.

Imagine that $$$AB^2, BC^2$$$ and $$$AC^2$$$ are all divisible by some number $$$g$$$ (which can be at most their gcd). Then the formula will work for any three points A, B and C no matter what (it's easy to prove looking at the coordinates parity). So the final observation is this — if we have at least one odd edge we can go through it and end up in a different component.

I think I finally got what you said.

Too long details. First thing to notice: even though I read that A, B, C are points, it took me too long to realize that $$$AB$$$ is just notation for the distance between points A and B. So $$$AB^2$$$ is just the squared distance between the points. I want to stress that.

Now, to clear things up. First, you say:

"Must" is not proved. You put them in same component. Distances may be both even but different.

Second. Formula about distances. Basic one that is easy to prove is:

The main difference from what you wrote is the plus sign. I had to check 9 cases though. Now, if all coordinates are divisible by some number, then this still holds if you divide all distances by that number squared.

Things go wild when you try to divide by an arbitrary common factor. You can't say "consider the parity of coordinates" here, just because you can't divide the coordinates anymore; it's a common factor of distances. Or, I just don't know something.

If the common factor is odd, then the parity of the distances divided by the factor doesn't change, just because a number is even if and only if it's divisible by two: dividing by an odd number doesn't change the number of twos in the factorization. Similarly, an odd number won't become even. With an even factor this doesn't work: an even number may become odd after division.

For an even common factor, I was trying to use the fact that the gcd of a Pythagorean triple is a perfect square. It doesn't work: those three distances are not a Pythagorean triple.

Bruteforce doesn't show any countercase. But I have no proof if you divide by gcd. If you have a proof about division by gcd, then it's easy to show that for arbitrary common factor it still holds, because you can multiply by some number (parity doesn't matter) and equation still holds.

Let's say the equation is correct. What you're really interested in is the following equation.

It's just a rearrangement of the equation above, divided by the gcd. But now, it allows us to tell the parity of the distance BC from the two distances to the first point. And because of what the gcd actually is in your code, this doesn't work: the equation works only if the gcd is a common divisor of all three. Counterexample:

In this case, you get a fraction. I can't find a counterexample for your code though.

It's not about what my code does, it's about the proof. So this fact is true for any three points, but my code does something a little different to be faster. You can proceed to see the $$$O(n^2+\log C)$$$ code: 67922484. I think I can write a little more on it and maybe even write an entry.

Consider the different graphs with edges only of a specific length. If one of those graphs is not bipartite, we know that all vertices that are connected by an edge in that graph should be in the same component. That's our first observation. The second one is that if we have a bipartite graph, we can colour every connected component in it and then the vertices that are coloured the same will always be in the same component (note that we consider every connected component separately).

Well with these two observations we can easily create a 2-sat like approach with dsu. You can check my solution for some more details.

"If one of those graphs is not bipartite, we know that all vertices that are connected by an edge in that graph should be in the same component."

I'm not sure I understand this. If you have 4 points in a square, there are 4 distances equal to the side length, therefore we have a graph with 4 vertices and 4 edges: (1,2), (2,3), (3,4) and (4,1).

We can put points 1 and 3 in component A and points 2 and 4 in component B. Then all the 4 edges are "yellow" (i.e. the two endpoints of the edge are in different groups) and it's ok.

There is no regular polygon with all vertices on lattice points (except the square itself); does that stand as a reason that every such graph will be bipartite?

This is what I did:

(fail if it encounters a case where there are 2 edges of type 1 such that one of them is between nodes of the same color and the other is between those with different colors). Otherwise, our choice is a different color.

You can see there are a couple of assumptions and some greediness in this solution. I would appreciate it if someone can uphack this: 67920620.

Hacked. I thought about that test during the contest a lot, 2/2 heuristic solutions I found seem to fail it...

Thanks! The test is simpler than what I originally thought it would be. This seems to be a downside of such ad-hoc problem because there might be many unpredictable approaches and some of them might end up slipping through.

The solution to G seems magic. It'd be nice if somebody could share how they came up with the solution.

I can xD.

First if we order the elements in some order we can calculate prefix sums and if two of them turn out to be equal then we simply have a subsequence with sum 0.

Now run this algorithm on the elements in several orders: the order they appear in the input; sorted by value; sorted by absolute value. Then try a greedy order: append the smallest positive element first, and then, until all elements are in the sequence, if the current sum is positive append the largest negative element (by absolute value) such that the sum stays nonnegative, or the smallest one (by absolute value) if that's impossible; if the sum is negative, choose a positive element in a similar manner. Do the same but start with the smallest negative element (by absolute value). Finally, if none of that worked: randomly shuffle the positives and negatives, run through all the elements, and if the current sum is positive append the first negative element, otherwise the first positive element.

How to come up with this? It's quite simple. Just add more and more different orders until you can't find a countertest neither by hand nor with a script running random tests.
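The first step, finding two equal prefix sums, can be sketched like this (my code, for illustration only; a repeated prefix value means the elements strictly between the two prefixes sum to zero):

```cpp
#include <bits/stdc++.h>
using namespace std;

// If two prefix sums of a sequence (in some fixed order) coincide,
// the elements between those two prefixes sum to zero. Returns the
// half-open index range (l, r] of such a block, or (-1, -1) if all
// prefix sums are distinct.
pair<int,int> zero_sum_block(const vector<long long>& a) {
    unordered_map<long long,int> seen; // prefix value -> earliest index
    seen[0] = 0;                       // the empty prefix
    long long s = 0;
    for (int i = 0; i < (int)a.size(); i++) {
        s += a[i];
        auto it = seen.find(s);
        if (it != seen.end()) return {it->second, i + 1};
        seen[s] = i + 1;
    }
    return {-1, -1};
}
```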

I wish there could be a haha react on CF

Got uphacked :(. Well done Radewoosh. Apparently need more heuristics. Added removing elements if their absolute value is greater than absolute value of sum of all elements of opposite sign and basic dp on bitset that finds an answer as long as at no point absolute value of sum exceeds 250. Hopefully now it's unhackable.

mnbvmar's test

Your idea of ordering elements to find two equal prefix sums can actually give a solution. One common way to ensure there are two equal prefix sums is to ensure that all $$$n+1$$$ prefix sums fall into a range of length $$$n$$$.

If we pick the range to be $$$0..n-1$$$, then we require that:

This is almost exactly the condition we're given! In particular, picking

exactly satisfies this inequality. This gives rise to the same cycle-finding problem as the editorial.
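If I read the reduction correctly, the resulting cycle-finding looks roughly like this (a sketch under the assumption that the input satisfies $$$i - n \le a_i \le i - 1$$$, 1-indexed, so $$$i - a_i$$$ always lands in $$$[1, n]$$$; names are mine):

```cpp
#include <bits/stdc++.h>
using namespace std;

// With i - n <= a_i <= i - 1 (1-indexed), i -> i - a_i is a map
// from [1, n] to [1, n], i.e. a functional graph. Walking it from
// any start must enter a cycle, and along a cycle the telescoping
// sum of a_i is zero. Returns 1-based indices of one such nonempty
// zero-sum subset.
vector<int> zero_subset(const vector<long long>& a) {
    int n = a.size();
    vector<char> seen(n + 1, 0);
    vector<int> path;
    int v = 1;
    while (!seen[v]) {              // walk until a vertex repeats
        seen[v] = 1;
        path.push_back(v);
        v = v - (int)a[v - 1];
    }
    vector<int> cyc;                // v is where the cycle starts
    bool in = false;
    for (int u : path) {
        if (u == v) in = true;
        if (in) cyc.push_back(u);
    }
    return cyc;
}
```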

Indeed. aryanc403 can you please share how you came up with this idea?

Can someone suggest alternate solution for B?

What I did: first, we can restrict ourselves to subarrays where the maximum and minimum are at the endpoints; this is easy to prove. So, we must find a subarray $$$[a, b]$$$ such that $$$A[b]-A[a] > b-a$$$ or $$$A[a]-A[b] > b-a$$$. These are the same as $$$A[b]-b > A[a]-a$$$ and $$$A[a]+a > A[b]+b$$$, respectively. So, for each position $$$b$$$, we find the position $$$a: a \le b$$$ with minimum value $$$A[a]-a$$$ and the position $$$c: c \le b$$$ with maximum value $$$A[c]+c$$$. If $$$A[a]-a < A[b]-b$$$, then the range $$$[a, b]$$$ is interesting. If $$$A[c]+c > A[b]+b$$$, then the range $$$[c, b]$$$ is interesting.
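This idea can be turned into a single $$$O(n)$$$ pass; here is a sketch (my code, not the commenter's submission):

```cpp
#include <bits/stdc++.h>
using namespace std;

// For each right end b, keep the best left candidates seen so far:
// the position minimizing A[i]-i and the position maximizing A[i]+i.
// A subarray [l, r] is interesting when max - min > r - l, i.e.
// A[b]-b > A[a]-a or A[c]+c > A[b]+b. Returns 1-based (l, r), or
// (-1, -1) if no interesting subarray exists.
pair<int,int> find_interesting(const vector<long long>& A) {
    int n = A.size();
    int best_min = 0; // argmin of A[i]-i so far
    int best_max = 0; // argmax of A[i]+i so far
    for (int b = 1; b < n; b++) {
        if (A[b] - b > A[best_min] - best_min)
            return {best_min + 1, b + 1};
        if (A[best_max] + best_max > A[b] + b)
            return {best_max + 1, b + 1};
        if (A[b] - b < A[best_min] - best_min) best_min = b;
        if (A[b] + b > A[best_max] + best_max) best_max = b;
    }
    return {-1, -1};
}
```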

Thanks for the idea... I'll try and implement it

Problem F has an $$$O(n \log^2 n)$$$ solution! 67930797

In my code, m <= 2*n/i (for i >= 1) after removing redundant segments.

Can you please explain your solution?

gs12117 asked me to translate his solution written in korean.

Given a binary string, think of a path in a grid where you go (x, y)->(x+1, y) if current character is 1 and (x, y)->(x, y+1) otherwise. For two distinct points u and v along this path, if the slope of the line joining u and v is an integer, they correspond to an awesome substring. So I counted, for each slope in range 0 to n, the number of intersection points.

Suppose we're currently checking slope >= k for some integer k. I claim that we only have to check O(m/k) distinct x-coordinates, where m is the number of 1s.

First observe that in order for two distinct points u and v with x-coordinates differing by p to produce an awesome substring, we need at least k * p zeros. Now build a graph where each node represents an x-coordinate and two nodes c and d (c < d) are connected by an edge if and only if the slope of the line joining the point (in the initial grid) with the smallest y-coordinate at x-coordinate c and the point (in the initial grid) with the largest y-coordinate at x-coordinate d is at least k. (Note that if two points are not connected by an edge, it's not possible to produce an awesome substring with the corresponding pair of x-coordinates in the first place.)

Let A be one of the connected components of this graph with |A| >= 2. Note that A is always an interval. It turns out that each A requires at least k * |A| / 2 zeros. Here is the proof. Let a_0 -> a_1 -> ... -> a_p be a directed path where a_0 is the leftmost point and a_p is the rightmost point in A. Define b_1 = a_1 and b_i = (the leftmost point greater than b_(i-1)), and c_i = (the starting point of the directed edge in the directed path having b_i as its endpoint). Then c_(i+1) <= b_i and b_(i-1) < c_(i+1). Now take the two sequences of directed edges {[c_1, b_1], [c_3, b_3], ...} and {[c_2, b_2], [c_4, b_4], ...}: within each sequence the edges are non-overlapping, and taking their union yields the entire range [a_0, a_p]. Therefore, at least one of them must contain k * (a_p - a_0) / 2 = k * |A| / 2 zeros, by our initial observation.

Now sum of all k * |A| / 2 over all A should be equal or less than n and, therefore, the sum of |A| should be equal or less than 2 * n / k.

My solution now calculates for each of these x-coordinates in A the answer in O(log n), resulting in O(n log^2 n) complexity.

Where can we read more about those Lemmas from?

In problem E I can't understand why we should subtract 1 from odd coordinates and then divide by two, and why by doing this we're guaranteed to end up with more than 1 group. Can someone explain this, please?

We can move coordinate system so that one of the points becomes origin (0, 0). This point will belong to $$$A_{00}$$$ no matter how many times we will divide coordinates by 2. If all other points also belong to $$$A_{00}$$$, divide coordinates until there is a point with an odd coordinate.

If anyone is interested in a solution to problem C that requires only 1 added integer, the reasoning is as follows. We shift X to the left and append a zero, since it is being multiplied by two. The last bit of S must equal the last bit of the shifted xor (which is certainly zero). If we add an element, it will only affect the last bit of the sum and cannot affect the last bit of the shifted xor; it affects the second-to-last bit (since it is shifted). If the last bit of the sum is 1, then the element must have a 1 in its last bit, since we need to make the last bit of the sum match the 0 of the xor. This 1 flips the second-to-last bit of the xor and throws a carry into the second-to-last digit of the sum. If the sum has 0 as its last digit, the element must have 0 too.

We then move on to the second-to-last bits to determine the second-to-last bit of the added element. As in the previous case, the second-to-last bit of the element only affects the second-to-last bit of the sum and not that bit of the xor; it affects the third-to-last bit of the xor due to shifting. We force that bit of the sum to match the xor by setting the element's second-to-last bit appropriately. We then flip the third-to-last xor bit if necessary (if the element's second-to-last bit is a 1), and make sure to ripple the carry if present. We keep doing this until all bits match.

I wonder if any of you went through the pain of thinking about and implementing this in contest... I certainly should have paid attention to the more relaxed constraints, as I would have saved a lot of time. But I am not very strong with bit arithmetic, so the smarter editorial methods did not come to my mind.

Yes I did. It wasn't hard: 67895496

I had also done it exactly like you, but did not know that 1<<j overflows even if j is long long (the literal 1 is still an int); 1LL<<j is the only fix needed in the code I submitted. A great question indeed. Super interesting to see so many approaches!

I also am one of those who could not figure out the neat approaches in the editorial.

Each bit can be determined and it is possible to construct the final xor and final sum values bit by bit.

Exactly, it took me an hour to figure out that (1<<i) overflows for i >= 32 and that it should be (1LL<<i). https://codeforces.com/contest/1270/submission/67956105

I thought this way, but I couldn't come up with an appropriate solution. So I had to leave the problem, sadly.

Yes, I also did it the same way. As we try to match the next bit of the cumulative xor to the previous bit of the cumulative sum, the number of bits in the final answer can go at most 2 beyond the maximum of the bit lengths of the cumulative xor and sum. Link: https://codeforces.com/contest/1270/submission/67925984

$$$C$$$ answer for challenge is $$$1$$$ ($$$0$$$ if sequence is already good ofc). Observe that multiplying the xor by $$$2$$$ is just shifting its binary representation to the left by $$$1$$$ bit.

As $$$2 \cdot \text{xor} = \text{sum}$$$, the last bit of the sum must be $$$0$$$. Now, calculate the value of last bit of the sum by adding all the given numbers. If it is $$$1$$$, put $$$1$$$ as the last bit of the new number otherwise put $$$0$$$. Now, find xor of all these last bits of numbers including the number you just added. This must be the value of the second bit from the end of the sum. Keep repeating this process. Each time you fix the $$$i$$$ th bit of the new number, you xor all the $$$i$$$ th bits to get the $$$i+1$$$ th bit of the sum and hence you can fix the next bit.

Prove that this process cannot go on forever.

Complexity: $$$O(n\log{n})$$$

If S was odd, why add a large odd number? Why not a smaller one, like 1?

I thought my solution to C was a very stupid solution. Checking the editorial, I found it as Solution 1 :D

Anyone else came up with that solution?

Yes, I did the same and added $$$10^{15}, 10^{15}+1$$$, although I don't know why adding $$$10^{16}, 10^{16}+1$$$ gave me WA on tc 2, as it would not exceed $$$10^{18}$$$.

I added $$$10^{16}$$$ and it got AC!

Ok, my bad: using pow was the cause of the error, with pow(10,16)+1 returning $$$10^{16}$$$. pow can return incorrect results for values greater than about $$$10^{15}$$$ because of double precision.

I thought about 1270E - Divide Points a slightly different way.

I divide the points by the parity of $$$x+y$$$. If this results in two non-empty sets, then we're done. If all the parities are even, then the points are all within a $$$45^\circ$$$-rotated lattice that's also scaled up by $$$\sqrt{2}$$$, so we can reduce the problem by rotating all points around the origin and scaling down by $$$\sqrt{2}$$$. This is easy to do with the mapping

If $$$x+y$$$ is odd, we can just shift everything over by $$$(1, 0)$$$ before dividing (just to avoid any issues with integer division/rounding).

Submission: Sample code

hi, thanks for sharing! can you explain more what exactly the mapping is doing?

i observed that the mapping brings the point (x,y) closer to the origin by a factor of 2, but what i'm unsure of is whether integer division messes up distances to other points in the process.

(Replying to both your comments here.)

Yes, the mapping rotates the points clockwise by $$$45^\circ$$$, and scales all distances down by a factor of $$$\sqrt{2}$$$. Either all $$$x+y$$$ and $$$x-y$$$ are even or all of them are odd, so we can safely divide by 2 (or shift everything over by $$$1$$$ to make them even and then divide by 2), to ensure there are no integer division issues (see my sample code).

To try a couple examples, this mapping maps $$$(1, 1)$$$ to $$$(1, 0)$$$ and maps $$$(3, 5)$$$ to $$$(4, -1)$$$. If you draw a shape or polygon on a grid and transform every point, you'll be able to see clearly that we're rotating and scaling (but by a factor of $$$\sqrt{2}$$$, not $$$2$$$).
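A small sketch of this transform as I understand it (assuming the mapping is $$$(x, y) \mapsto ((x+y)/2, (x-y)/2)$$$, which matches the examples above; function names are mine). Every squared distance is exactly halved, and the arithmetic stays exact in integers whenever all $$$x+y$$$ share one parity:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Rotate by 45 degrees and scale by 1/sqrt(2): squared distances
// are exactly halved. Exact in integers when all x+y share one
// parity (shift everything by (1, 0) first if they are odd).
pair<long long,long long> rot(long long x, long long y) {
    return {(x + y) / 2, (x - y) / 2};
}

long long sq_dist(pair<long long,long long> p, pair<long long,long long> q) {
    long long dx = p.first - q.first, dy = p.second - q.second;
    return dx * dx + dy * dy;
}
```

The identity behind it: with $$$dx' = (dx+dy)/2$$$ and $$$dy' = (dx-dy)/2$$$, we get $$$dx'^2 + dy'^2 = (dx^2 + dy^2)/2$$$.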

I made a quick diagram of the 2nd sample test case. "BEFORE" is the original square, "AFTER" is the transformed square including the shift to keep the coordinates integers. The unlabeled blue square is the transformation without the shift.

gotcha, i see now what you mean by doing the shift to maintain integer coordinates. the blue points are equivalent to the green points.

are all the points rotated CW by $$$45 ^{\circ}$$$? for example, $$$(1,0)$$$ maps to $$$(0.5, 0.5)$$$ which seems CCW to me.

Oh, I'm not sure. I might have been a little sloppy with my transformation, since all I care about is preserving similarity -- I think maybe it rotates and reflects the points? But it serves the same purpose. :)

great solution!

Doesn't that mapping rotate the point counterclockwise?

"If all the parities are even, then the points are all within a $$$45^\circ$$$-rotated lattice that's also scaled up by $$$\sqrt{2}$$$." Can someone please explain this?

Try drawing every point $$$(x,y)$$$ such that $$$x+y$$$ is even, or is odd. Then turn your head 45 degrees. :)

Here is a picture, the blue circles are $$$x+y$$$ even and the red crosses are $$$x+y$$$ odd.

If you turn your head and look at only the red crosses, you can see it looks the same as the original full lattice but with a bigger spacing between points.

Heh, solution 2 for C was so easy! I thought so hard, but I missed such easy thinking.

Can someone please tell me why I am getting TLE in my code for problem B? The time complexity seems to be $$$O(n \log^2 n)$$$, which should have passed in 2 seconds (I have used binary search and segment trees).

Here is the submission: 67897851

Disappointing contest. I have no idea why problems B and C were so difficult for me. Anyone see a connection between the two problems?

Please, someone tell me how many people solved it? I was stuck for about 30 min and then just guessed the solution.

_Disappointing contest I have no idea why problems B and C was very difficult for me _

Ok. I don't want to troll again. Lmao ... but your (actual) color says it all.

In my solution for problem B, I have taken max(a)-min(a), checked if it is greater than the array size, and printed their positions. What is wrong with this logic? This is my solution: 67923904.

Check this testcase: 1 5 1 2 2 3 5. Your logic is incomplete: you need to find any subarray for which max-min >= its length, but you are checking only l=0, r=n-1.

What intuition exactly does one need to solve problem D?

I did solve it during the round but hadn't found the bug. Shame, I just thought that the server replies with the value first.

My way of thinking was the following. You have $$$n$$$ and $$$k$$$. You look at the range of $$$k$$$: it can be in $$$[1,n-1]$$$. Alright, you think $$$1$$$ and $$$n-1$$$ are corner cases, but I want the general case. It was hard to avoid this thinking, but as soon as I tried to solve the corner cases, it worked. For $$$k=1$$$ it's easy: the only possible $$$m$$$ is $$$1$$$. How about $$$k=n-1$$$? You have $$$n$$$ possible queries, each defined by the number missing from the query. This links with the restriction on the maximum number of queries. Can you determine $$$m$$$ based on all those $$$n$$$ queries? It turns out: yes. But you can also tell it from the problem statement, which says that you can always do it under these restrictions.

How to do that? Just imagine all $$$n$$$ elements; you have the $$$m$$$-th element among them. A query with a gap is just all elements without some element. If you remove an element before the $$$m$$$-th, or the $$$m$$$-th element itself, you'll get the element right after it. If you remove an element after it, you'll get the $$$m$$$-th element itself. So the cases are divided into two outcomes: one of them happens $$$m$$$ times and the other happens $$$n-m$$$ times.

From then on, you can try to apply a similar method by moving the gap along the array, with no result. One very important thing is left, and it's hard to come up with: I spent at least half an hour on this part, and this last remaining thing is the key observation. What if you can find out $$$m$$$ for a shorter array? Then... you did it! Who cares about the rest of the array? That's it: just cut off elements from $$$a$$$ so that $$$n-1 = k$$$. Done. Honestly, I solved the corner case only after I realized that you can cut the array first. Before that, I knew that $$$k = n-1$$$ is solvable by moving the gap, but didn't dig into details and was trying to move the gap for smaller $$$k$$$.

Will anyone explain solution 1 for 1270C (Make Good)? Why are we adding $$$2^{50}$$$ as one of the numbers?

Could you please explain 1270C — Make Good logic in easy terms ?

Solution 2 works like this: we have the sum S and the xor X. Adding X as one number makes the sum (S + X) and the xor 0 [a number xored with itself gives 0]. Now add (S + X) as another number: the sum becomes 2(S + X) and the xor becomes (S + X) [0 xor any number is that number]. So we add 2 numbers: X and (S + X).
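A minimal sketch of this two-number construction (my code; the problem's exact I/O is omitted):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Append X (xor becomes 0, sum becomes S+X), then append S+X
// (sum becomes 2(S+X), xor becomes S+X), so sum == 2 * xor holds.
vector<long long> make_good(vector<long long> a) {
    long long S = 0, X = 0;
    for (long long v : a) { S += v; X ^= v; }
    a.push_back(X);
    a.push_back(S + X);
    return a;
}
```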

I saw this $$$O(N)$$$ solution for E. Is this correct?

~~That's $$$O(N \log N)$$$ because each `gcd` call is $$$O(\log N)$$$.~~ Actually, in this problem it can be proven to take only $$$O(N + \log \max a_i)$$$ -- check out this comment below: https://codeforces.com/blog/entry/72611?#comment-569387

However, there is an $$$O(N)$$$ solution. Doing the transformation twice is equivalent to dividing both coordinates by 2 (ignoring the 90 degree rotation), and it's possible to compute the number of divisions by 2 required using bitwise operations; after that, the maximum number of operations necessary is at most 1.

Oh, I see. Would you mind writing an explanation of your $$$O(N)$$$ solution? Thanks in advance!

I just implemented it. 67969860

We divide only when all points are in the same group. This means that, up to some moment, all their parities match. First, solve an easier problem: how many times do parities match when you divide an array by two? The answer equals the number of common least significant bits of all numbers in the array. Say $$$f(a,b)$$$ is the number of common least significant bits of $$$a$$$ and $$$b$$$. Then the number of times to divide by two until parities differ within the array is $$$\min\limits_{i,\,j} f(a_i,a_j)$$$. But $$$a_1$$$ must also share those low bits, so it's just $$$\min\limits_i f(a_1,a_i)$$$. Bitwise $$$\mathbb{xor}$$$ tells us where the differing bits are, so $$$f(a,b) = g(a\,\mathbb{xor}\,b)$$$ where $$$g(a)$$$ is the position of the least significant set bit. To isolate the least significant bit of $$$x$$$, subtract one from it: that turns the lowest one into a zero and all zeros below it into ones. Now, if we take the bitwise and of the result with $$$x$$$, we get $$$x$$$ without its lowest one. So the formula is $$$h(x) = x - (x\,\mathbb{and}\,(x-1))$$$. Probably you can simplify it, but for the sake of easy explanation I'll leave it as is. Now, what's important. First, if there is no $$$1$$$ in binary, it returns $$$0$$$. Second, it's a single bit, so it's $$$2^k$$$. We want to divide $$$k$$$ times, so this is exactly what we need, and the $$$\min$$$ in the formula carries over. Back to points with two coordinates: the divisor equals $$$\min(\min\limits_i h(x_1\,\mathbb{xor}\,x_i) , \min\limits_i h(y_1\,\mathbb{xor}\,y_i))$$$
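Here is a compact sketch of the formulas above (my code; `lowbit` is the $$$h(x)$$$ from the comment, and equal values are skipped since $$$h(0)=0$$$ carries no information):

```cpp
#include <bits/stdc++.h>
using namespace std;

// h(x): isolate the lowest set bit of x (0 if x == 0).
long long lowbit(long long x) { return x - (x & (x - 1)); }

// min over h(a_1 xor a_i) gives 2^k, where k is the number of
// common least significant bits of all a_i, i.e. how many times
// the whole array can be halved before parities start to differ.
// Returns 0 if all values are equal (no differing bit exists).
long long common_halvings_divisor(const vector<long long>& a) {
    long long d = LLONG_MAX;
    for (long long v : a) {
        long long h = lowbit(a[0] ^ v);
        if (h != 0) d = min(d, h);
    }
    return d == LLONG_MAX ? 0 : d;
}
```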

Wow! You thought so much just to get from $$$O(N \log N)$$$ to $$$O(N)$$$. Thanks a lot for the reply!

Actually, that solution isn't $$$O(N \log N)$$$ but $$$O(N + \log A)$$$. There will be at most $$$O(N + \log A)$$$ iterations of the gcd in total, since every 2 iterations inside the same call scale the gcd down by at least a factor of 1/2.

Oh nice observation!

Using gcd results in $$$O(n + \log)$$$, not $$$O(n \log)$$$. Please don't post this stuff anywhere else.

Both of these are true, btw.

$$$O(N + \log N)$$$

Can anyone explain, in solution 1 for C, why $$$S \le 2 \cdot 2^{50} \le X$$$?

2 * X sorry

$$$2^{50}$$$ is larger than the constraints, so that bit will always be empty.

ok thanks :D

Please, can someone help me understand how they approached problem F, the awesome substrings problem?

Can someone please explain that thing with dividing by 2 in problem E? I am not getting how to solve this problem! Please help me out!

According to the editorial we are just scaling down our plane. Everything depends on making groups of points such that points from two different sets can't have a distance equal to that of two points within a group. To get that inequality we focus on the parity of the sum of coordinates: if we group together the points with odd coordinate sum (and, separately, those with even coordinate sum), then squared intra-group distances will always be even and squared inter-group distances will be odd. Thus we can create a partition successfully if we have both points with odd coordinate sum and points with even coordinate sum; otherwise all N points fall into the same group. Now, only two cases remain.

Case 1: you have zero points with odd coordinate sum. Case 2: you have N points with odd coordinate sum.

The first case can be handled by scaling down the plane, i.e. dividing the coordinates by two: since this case occurred because all sums are even, at some point a coordinate sum will change its parity if you keep dividing the points by two. The same parity change can be induced to handle the second case. Instead of dividing the plane, rotating the axes by 45 degrees is safer and more elegant.
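The parity claim can be checked mechanically; here is a small sketch (my code) that verifies, for a lattice point set, that squared distances are even exactly within a parity class and odd across classes:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Partition points by the parity of x+y. Verifies that squared
// distances inside a group are even and between the groups odd,
// so a blue distance can never equal a yellow one (provided both
// parities actually occur among the points).
bool valid_partition(const vector<pair<long long,long long>>& pts) {
    auto par = [](const pair<long long,long long>& p) {
        return (p.first + p.second) & 1LL;
    };
    for (size_t i = 0; i < pts.size(); i++)
        for (size_t j = i + 1; j < pts.size(); j++) {
            long long dx = pts[i].first - pts[j].first;
            long long dy = pts[i].second - pts[j].second;
            long long d2 = dx * dx + dy * dy;
            // same group <=> even squared distance
            if (((d2 & 1LL) == 0) != (par(pts[i]) == par(pts[j])))
                return false;
        }
    return true;
}
```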

That was a very nice explanation! Thanks buddy for making it crystal clear! Kudos! Add me in your friends :) :)

IDK why, the tutorial has a clear proof but I still can't understand the logic for problem B. Why are we checking whether $$$|a_{i+1} - a_i| \ge 2$$$? Why does that work?

In problem B, why have you considered $$$\max > \min$$$ always? (It might happen that the index of the maximum element is lower than that of the minimum element.)

It's assumed without loss of generality, i.e. there is a symmetry argument. In this case, the same argument will hold if you reverse the array and look at that sequence, but in this new array the max/min elements are on opposite sides.

Thanks

Can someone tell me why is MrDindows mentioned in the editorial for problem? I don't know the story around him and bruteforces.

He once solved Div1 E with an obvious $$$O(n^2)$$$ solution

FYI

In E problem

I didn't get the "divide by 2" part. For example, if the points are (1,1),(3,3),(5,5), all odd numbers, how do we proceed? We can't just take the floor for all points, right? Odd numbers would be scaled down by 2.5 and even numbers by 2?

Example: (1,2) becomes (0,1), and (2,1) becomes (1,0); the ratio by which they shrank is different, right?

Assume all of the points are in $$$A_{ij}$$$; then you may subtract the vector $$$(i,j)$$$ from all of them, and they will be in group $$$A_{00}$$$ while distances won't change. Now, notice that $$$(x - (x \bmod 2)) / 2 = \lfloor x / 2 \rfloor$$$. So, you may just divide by two. Notice, the solution from the editorial adds 1e6 to avoid negative coordinates, because in C++ division of -1 by 2 is still -1.

Thanks a lot got it now

In the editorial code they are saying that clubbing $$$A_{00}$$$ with $$$A_{01}$$$, and $$$A_{11}$$$ with $$$A_{10}$$$, produces the right answer.

But if you see, $$$dist(A_{01}, A_{11})$$$ is odd, whereas $$$dist(A_{11}, A_{10})$$$ is also odd.

Is it not possible that blue distance equals yellow distance?

Also, are you sure about $$$-1/2 = -1$$$? I tried it on GNU G++ and it is $$$0$$$.
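Indeed, a quick check of C++'s division rounding confirms this (trivial hypothetical helper `halve`, for illustration only):

```cpp
#include <cassert>

// Since C++11, integer division truncates toward zero, so -1 / 2 is 0, not
// -1 (before C++11 the rounding of negative quotients was implementation-
// defined). This is why shifting coordinates to be nonnegative before
// halving, as the editorial's code does with its +1e6 offset, is needed.
int halve(int x) { return x / 2; }
```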

Okay, got it: they are checking the case when there are only odd-sum coordinates or only even-sum coordinates.

Seems like I misread their code.

NVM

Can someone help me understand, how would someone get an idea of how to solve C? (talking about solution 2)

I was thinking like this. Let's say $$$a$$$ is the input array, and $$$b$$$ is what we add. First obvious thing: the sum of $$$a$$$ and $$$b$$$ together is $$$sum(a) + sum(b)$$$. Next, a little less known but also well-known fact: the xor of $$$a$$$ and $$$b$$$ together is $$$xor(a) \oplus xor(b)$$$. A little more about this fact: in bitwise operations (except shifts and rotates) all bits are independent, so if a property holds for one bit, then it holds for all. For a single bit the property follows from the fact that $$$a \oplus b = (a + b) \bmod 2$$$.

Now, the first idea coming from the facts above: we have $$$x = sum(a)$$$ and $$$y = xor(a)$$$, so instead of an array we have two numbers, and we have three numbers to 'spend'. The words in your head say "too hard", so let's get rid of $$$y$$$. How? The easiest way: just add $$$y$$$, and the sum will turn into $$$x + y$$$ while the xor turns into zero. Observation: xoring with zero doesn't do anything, so once we add something, the xor will equal that number. Now the question is: what value $$$z$$$ should we add to some value $$$s$$$ to get $$$s + z = 2z$$$? Easy! Just $$$s$$$ itself.
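A minimal sketch of this construction (hypothetical helper `extend`, ignoring any output formatting from the actual problem): append $$$y = xor(a)$$$ first, then $$$sum(a) + y$$$.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Given a, return the two appended numbers {y, x + y} where x = sum(a) and
// y = xor(a). After appending y the xor is 0 and the sum is x + y; after
// appending x + y the xor is x + y and the sum is 2(x + y), so sum == 2*xor.
vector<long long> extend(const vector<long long>& a) {
    long long x = 0, y = 0;
    for (long long v : a) { x += v; y ^= v; }
    return {y, x + y};
}
```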

Great explanation! Thanks!

In the editorial of problem E it is written "From now on we suppose that at least two groups are nonempty". How do we prove this?

If some number is even, then it will come down to 2 eventually, and then down to 1 by further division by 2.

Now the case of odd numbers: after dividing them again and again you can make them zero, so they become even.

Hence you can always change the parity and form two groups.

mhq Quadratic solution in F passed: 67995161 :D

Wow, it's unexpected :(

During testing, danya.smelskiy found a bruteforce solution which passed initially. It was basically this: 68008487 (I added a few more pragmas). So the challenge is to find the difference with 68008440.

So, maybe it's good that we didn't find this solution :) (well, of course it's bad, but it would be really hard to cut this solution off from slow solutions with the correct asymptotics without setting the constraints very high).

The crucial part is AVX2: gather operations work much faster than in plain AVX.

Can you tell me, in the solution of this question, how $$$X \oplus X \oplus (S+X) = (S+X)$$$ is possible? And why, for $$$S + X + (S+X) = 2(S+X)$$$, do we have to find $$$S \oplus X \oplus (S+X)$$$?

I did not fully understand the editorial of F, can someone help me?

The editorial for problem I is a bit hard to understand, anyway, here's how I solved it. Let $$$n = 2^k$$$, let $$$B$$$ be the matrix which describes all moves made, $$$B(i,j)$$$ will be equal to the xor of all $$$w$$$ such that a move $$$(i,j,w)$$$ was done. Clearly, there's no need to do more than one move on the same pair $$$(i,j)$$$. We're also given a matrix $$$F$$$ with $$$t \leq 99$$$ 1-entries, all others are zero ($$$t$$$ is odd). We can think of this problem in terms of cyclic matrix convolutions modulo 2. The problem asks us to find a matrix $$$B$$$ with the minimum number of non-zero entries such that $$$B * F = A$$$. The solution consists of two parts. First, finding the inverse element $$$F^{-1}$$$. Second, finding the product $$$A * F^{-1} = B$$$. Surprisingly, the first part can simply be done by finding $$$F^{n-1}$$$, this can be done efficiently using bitsets, by multiplying the identity element by $$$F$$$ a total of $$$n-1$$$ times, each multiplication can be done in time $$$O(tn^2/32)$$$, this part has complexity $$$O(tn^3 / 32)$$$.

Here's proof that $$$F^n = I$$$, where $$$I$$$ is the identity matrix for cyclic convolutions, i.e. the matrix $$$I$$$ has $$$I_{0,0} = 1$$$ and all other entries equal to $$$0$$$. This also implies that the matrix $$$B$$$ always exists and is unique!

It's sufficient to check that $$$F^2$$$ has nonzero entries only where both $$$i,j$$$ are even. By definition, $$$(F^2)_{i,j} = \sum_{x=0}^{n-1} \sum_{y=0}^{n-1}F_{x,y}F_{i-x,j-y}$$$. Notice that whenever at least one of $$$i,j$$$ is odd, not a single element of $$$F$$$ is paired with itself in the sum, so each pair appears exactly twice and the sum is $$$0$$$. The main result follows by induction: we can kick out all odd-numbered rows and columns after one squaring. We will end up with $$$I$$$ because squaring does not change the fact that $$$F$$$ has an odd number of ones (this is easy to verify).

The second part can be done as follows. First of all, we need to compute the product $$$A * F^{-1}$$$ for each bit position. Each multiplication can be done using 2D-FFT in time $$$O(n^2 \log n)$$$. It's better to use complex numbers because we only need results accurate to around $$$20$$$ binary digits. Also, we can cut the constant factor from $$$60$$$ to $$$30$$$ by making use of both real and imaginary parts (real part for one bit-position, imaginary part for another bit-position). This part has complexity $$$O(n^2 \log n \log M)$$$ but with a much worse constant factor.

Overall the solution barely passes the time limit.

There's another solution which just computes $$$A * F^{n-1}$$$ by repeatedly multiplying $$$A$$$ with $$$F$$$, it has time complexity $$$O(tn^3)$$$ but it is unfortunately not fast enough (but it's close).

Edit: As suggested by mhq, all powers of $$$F$$$ are sparse, i.e. have no more than $$$t$$$ nonzero entries. This allows fast multiplications by arbitrary powers of $$$F$$$ in $$$O(n^2 + t^2)$$$ and eliminates the need for using FFT for the final multiplication with $$$A$$$.

I want to correct this last statement. It isn't true that all powers of $$$F$$$ are sparse — $$$F^{511}$$$ is dense (about 1/2 of all entries are 1 and 1/2 are 0) even for a random set of $$$7$$$ initial cells. However, what's true is that $$$(F^a)^2$$$ is only as dense as $$$F^a$$$, since any $$$(F^a)_{i,j} = 1$$$ contributes $$$1$$$ to $$$(F^{2a})_{2i,2j}$$$, so all powers to powers of $$$2$$$ of $$$F$$$ have at most $$$t$$$ non-zero entries and we can efficiently multiply $$$A$$$ by $$$F^1$$$, $$$F^2$$$, etc. up to $$$F^{2^{k-1}}$$$.
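The support claim for squaring can be sanity-checked by brute force on a tiny example (hypothetical helper `squareGF2`, not part of the actual solution):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Cyclic 2D convolution of F with itself over GF(2), computed naively in
// O(n^4). By the pairing argument, the result is supported only on cells
// whose coordinates are both even (for even n, 2x = i (mod n) has no
// solution when i is odd, so every term cancels with its mirror).
vector<vector<int>> squareGF2(const vector<vector<int>>& F) {
    int n = F.size();
    vector<vector<int>> G(n, vector<int>(n, 0));
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            for (int x = 0; x < n; x++)
                for (int y = 0; y < n; y++)
                    G[i][j] ^= F[x][y] & F[((i - x) % n + n) % n][((j - y) % n + n) % n];
    return G;
}
```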

I think I wanted to say that all iterated squares of $$$F$$$ are sparse. Thanks for clarifying this!

Can someone explain the answer to B?

Can anyone tell me what's the problem in this code for Q2?

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int t; cin >> t;
    while (t--) {
        int n; cin >> n;
        vector<int> v(n);
        int k;
        for (int i = 0; i < n; i++) cin >> v[i];
        // long long r = *max_element(v.begin(), v.end());
        // long long l = *min_element(v.begin(), v.end());
        // long long f = r - l;
        long long maxidx = max_element(v.begin(), v.end()) - v.begin();
        long long minidx = min_element(v.begin(), v.end()) - v.begin();
        long long idx = max(minidx, maxidx);
        k = idx + 1;
        for (int i = 0; i < idx; i++) {
            if (abs(v[idx] - v[i]) >= k) {
                cout << "YES" << endl;
                cout << i + 1 << " " << idx + 1 << endl;
                break;
            } else {
                k--;
            }
        }
        if (k == 1) {
            cout << "NO" << endl;
        }
    }
}
```

In problem E, when the elements belong only to groups, say, $$$A_{00}$$$ and $$$A_{11}$$$, how can we say that when we take elements from different groups, i.e. $$$A_{00}$$$ and $$$A_{11}$$$, the squared distance will always give remainder 2 when divided by 4, and never zero? Is there any proof of it?

Can someone please explain the problem F editorial, or share another solution? I have been at it for 2 days.

Where can I see solutions in Python? I found a link with only C++ solutions.

I'm trying to solve problem F without looking at the solution. I have to say, it's a beautiful problem, at least for a weakling like me.

Can any one explain C?