### Z38's blog

By Z38, 2 years ago, translation

Generous Kefa — Adiv2
(Authors: Fedosik, ZZzzZZzzzZzzzZZzz38)

Consider each balloon color separately. For a color c, all balloons of that color can be assigned to Kefa's friends iff the number of balloons of color c is at most k; otherwise, by the pigeonhole principle, at least one friend would end up with at least two balloons of the same color.
This leads to a fairly simple solution: calculate the number of occurrences cnt_c of each color c.
Then just check that cnt_c ≤ k for every possible c.
Complexity: O(N + K)
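The whole check fits in a few lines; a minimal sketch (the function name is mine):

```python
from collections import Counter

def can_distribute(colors, k):
    # Kefa can hand out all balloons iff no color occurs more than k times:
    # with only k friends, a (k+1)-th balloon of some color would force one
    # friend to hold two balloons of that color (pigeonhole).
    cnt = Counter(colors)
    return all(c <= k for c in cnt.values())
```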

Godsend — Bdiv2
(Author: Fedosik)

The first player wins if there is at least one odd number in the array. Let's prove this.
Let's denote the total count of odd numbers as T.
There are two cases to consider:
1) T is odd. The first player takes the whole array and wins.
2) T is even. Let pos be the position of the rightmost odd number. The strategy for the first player is as follows: on his first move, he takes the prefix [1; pos - 1] (it contains T - 1 odd numbers, an odd count, so its sum is odd). The remaining suffix of the array contains exactly one odd number, which the second player will never be able to include in his subarray. So, regardless of his moves, the first player will take the remaining numbers and win.
Complexity: O(N)
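Since only the existence of an odd element matters, the decision procedure is one line; a sketch (the function name is mine):

```python
def first_player_wins(a):
    # The first player wins iff the array contains at least one odd number;
    # with no odd numbers he cannot even make a first (odd-sum) move.
    return any(x % 2 != 0 for x in a)
```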

Leha and Function — Adiv1
(Author: Fedosik)

First of all, let's understand the value of F(N, K): the expected value of the minimal element of a random K-element subset of {1, ..., N}.
For any subset of size K, say a_1 < a_2 < ... < a_K, we can represent it by a sequence of gaps d_1, d_2, ..., d_{K+1}, so that d_1 = a_1, d_1 + d_2 = a_2, ..., d_1 + ... + d_K = a_K, and d_1 + ... + d_{K+1} = N + 1.
We're interested in E[d_1], the expected value of d_1. By symmetry all the gaps d_i have the same distribution, so, knowing some basic facts about expected values, we can derive the following:
E[d_1 + ... + d_{K+1}] = N + 1
E[d_1] + ... + E[d_{K+1}] = (K + 1)·E[d_1]
And we immediately get F(N, K) = E[d_1] = (N + 1)/(K + 1).
We could also get the formula by using the Hockey Stick Identity, as Benq stated in his comment.
Now, according to the rearrangement inequality, the sum of F(A_i, B_i) = (A_i + 1)/(B_i + 1) over all i is maximized when A is sorted in increasing order against B sorted in decreasing order: the largest A_i is paired with the smallest B_i.
Complexity: O(N log N)
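Putting the two facts together (F(N, K) = (N + 1)/(K + 1) plus the rearrangement inequality), the final pairing can be sketched as follows (the helper name is mine):

```python
def rearrange(a, b):
    # Pair the smallest values of a with the largest values of b, so the
    # sum of (a_i + 1) / (b_i + 1) is maximized (rearrangement inequality).
    order = sorted(range(len(b)), key=lambda i: -b[i])  # b's positions, largest first
    res = [0] * len(a)
    for pos, val in zip(order, sorted(a)):
        res[pos] = val
    return res
```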

Leha and another game about graph — Bdiv1
(Authors: progmatic, ZZzzZZzzzZzzzZZzz38)

Model solution uses the fact that the graph is connected.
We'll prove that a "good" subset exists iff the -1 values among d_i can be changed to 0 / 1 so that the sum of all d_i is even. If the sum can only be odd, there is obviously no solution (every valid graph has an even sum of degrees). Now we'll show how to build the answer in any case where the sum can be made even.
First of all, change all the -1 values so that the sum becomes even.
Then let's find any spanning tree and denote any vertex as the root. The problem is actually much easier now.
Let's process vertices one by one, by depth: from leaves to root. Let's denote current vertex as cur.
There are two cases:
1) d_cur = 0
In this case we ignore the edge from cur to parent(cur) and forget about cur. The sum remains even.
2) d_cur = 1
In this case we add the edge from cur to parent(cur) to the answer, flip d_parent(cur) to the opposite value, and forget about cur. As you can see, the sum changed its parity when we flipped d_parent(cur), but it changed back when we discarded cur (whose d was 1). So, again, the sum remains even.
Using these simple manipulations we arrive at the final answer.
Complexity: O(N + M)
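A sketch of the whole procedure in Python (0-indexed vertices; the function name is mine), assuming, as the editorial does, that the graph is connected:

```python
def good_subset(n, edges, d):
    # Returns a list of edge indices forming a "good" subset, or None.
    # d[v] is in {-1, 0, 1}; edges are (u, v) pairs, 0-indexed.
    d = list(d)
    if sum(x for x in d if x != -1) % 2 == 1:
        # Parity can only be fixed by flipping some d[v] = -1 to 1.
        fixable = [i for i, x in enumerate(d) if x == -1]
        if not fixable:
            return None
        d[fixable[0]] = 1
    d = [0 if x == -1 else x for x in d]

    # Build a rooted spanning tree with an iterative DFS from vertex 0.
    adj = [[] for _ in range(n)]
    for idx, (u, v) in enumerate(edges):
        adj[u].append((v, idx))
        adj[v].append((u, idx))
    parent, parent_edge, order = [-1] * n, [-1] * n, []
    seen = [False] * n
    seen[0] = True
    stack = [0]
    while stack:
        u = stack.pop()
        order.append(u)
        for v, idx in adj[u]:
            if not seen[v]:
                seen[v] = True
                parent[v], parent_edge[v] = u, idx
                stack.append(v)

    # Process vertices from the leaves towards the root.
    answer = []
    for u in reversed(order):
        if u == 0:
            continue
        if d[u] == 1:                 # take the edge to the parent
            answer.append(parent_edge[u])
            d[parent[u]] ^= 1         # the parent's requirement flips
    return answer if d[0] == 0 else None
```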

On the bench — Cdiv1
(Authors: Fedosik, ZZzzZZzzzZzzzZZzz38)

Let's divide all numbers into groups. Scan all positions from left to right; suppose the current position is i. If group_i is not assigned yet, i starts a new group: assign a fresh unique number to group_i. Then, for every j > i such that a[j]·a[i] is a perfect square, set group_j = group_i. (Note that "the product is a perfect square" is transitive: if a·b and b·c are squares, then a·c = (a·b)(b·c)/b² is a square too, so this is a valid partition.) Now we can use dynamic programming to calculate the answer.
F(i, j) will denote the number of ways to place first i groups having j "bad" pairs of neighbors.
Suppose we want to make a transition from F(i, j). Let's denote the size of group i as size, and the total count of numbers placed before as total.
We will iterate S from 1 to min(size, total + 1) and D from 0 to min(j, S).
S is the number of subsegments we will break the next group in, and D is the number of currently existing "bad" pairs we will eliminate.
This transition will add T to F(i + 1, j - D + size - S) (D "pairs" eliminated, size - S new pairs appeared after placing new group).
T is the number of ways to place the new group according to S and D values.
Actually, T = C(size - 1, S - 1) · C(j, D) · C(total + 1 - j, S - D). Why? There are C(size - 1, S - 1) ways to break a group of size size into S non-empty subsegments, and C(j, D) ways to select which D "bad" pairs out of the existing j we eliminate.
And there are C(total + 1 - j, S - D) ways to choose placements for the other S - D subsegments among the total + 1 - j remaining gaps (the D eliminating subsegments must go into the chosen bad gaps, so their positions are predefined).
After all calculations, the answer is F(g, 0), where g is the total number of groups.
Complexity: O(N^3)
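Putting the grouping and the transition together, a compact sketch (function names are mine; math.comb needs Python ≥ 3.8). The final multiplication by the factorials of the group sizes accounts for the elements inside a group being distinguishable, which the transition's composition count C(size - 1, S - 1) leaves out:

```python
from collections import Counter
from math import comb, factorial

MOD = 10**9 + 7

def squarefree_part(x):
    # a*b is a perfect square iff a and b have the same squarefree part,
    # so the squarefree part identifies the group of x.
    r, d = 1, 2
    while d * d <= x:
        while x % (d * d) == 0:
            x //= d * d
        if x % d == 0:
            x //= d
            r *= d
        d += 1
    return r * x

def count_good_permutations(a):
    # dp maps j -> number of ways to place the groups so far with j "bad"
    # adjacent pairs; the answer is dp[0] after all groups are placed.
    sizes = list(Counter(squarefree_part(x) for x in a).values())
    dp = {0: 1}
    total = 0
    for size in sizes:
        ndp = {}
        for j, ways in dp.items():
            for S in range(1, min(size, total + 1) + 1):
                for D in range(0, min(j, S) + 1):
                    T = (comb(size - 1, S - 1) * comb(j, D)
                         * comb(total + 1 - j, S - D))
                    if T:
                        nj = j - D + size - S
                        ndp[nj] = (ndp.get(nj, 0) + ways * T) % MOD
        dp, total = ndp, total + size
    ans = dp.get(0, 0)
    for size in sizes:            # elements within a group are distinguishable
        ans = ans * factorial(size) % MOD
    return ans
```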

Destiny — Ddiv1
(Authors: Fedosik, ZZzzZZzzzZzzzZZzz38)

We will use a classical divide and conquer approach to answer each query.
Suppose the current query is at subsegment [L;R]. Divide the original array into two parts: [1;mid] and [mid + 1;N], where mid = ⌊N / 2⌋. If our whole query belongs to the first or the second part only, discard the other part and repeat the process of division. Otherwise, L ≤ mid ≤ R. We claim that if we form a set of the K most frequent values on [L;mid] and the K most frequent values on [mid + 1;R], one of the values from this set will be the answer, or there is no suitable value.

The "K most frequent values" lists can be precalculated. Run a recursive function build(node, L, R). First, as in a segment tree, run this function for the left and right sons of node.
Then we need the K most frequent values to be precalculated for all subsegments [L1;R1] such that L ≤ L1 ≤ R1 ≤ R and at least one of L1 and R1 is equal to mid = ⌊(L + R) / 2⌋.

We will consider the segments whose left border is mid in the following order: [mid;mid], [mid;mid + 1], ..., [mid;R]. If we already have the K most frequent values and their counts for [mid;i], it's rather easy to calculate them for [mid;i + 1]: we update the count of a_{i+1} and check whether anything should change in the new list of most frequent values.

Exactly the same process happens to the left side of mid: we are working with the subsegments in order [mid;mid], [mid - 1;mid], ..., [L;mid].

Now, having all this data precalculated, we can easily run the divide and conquer and get the candidates for the answer on any [L;R] segment. Checking a candidate is not a problem either: we can save all occurrences in the array for each number and then, using binary search, easily answer questions of the form "how many times does x appear from L to R?".
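The candidate check at the end can be sketched as follows (the class name is mine):

```python
from bisect import bisect_left, bisect_right

class OccurrenceCounter:
    # positions[x] holds the sorted list of indices where x occurs, so
    # "how many times does x appear in a[l..r]?" is two binary searches.
    def __init__(self, a):
        self.positions = {}
        for i, x in enumerate(a):
            self.positions.setdefault(x, []).append(i)

    def count(self, x, l, r):
        # Number of occurrences of x in a[l..r], inclusive, in O(log N).
        p = self.positions.get(x, [])
        return bisect_right(p, r) - bisect_left(p, l)
```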

Complexity: O(KN log N)

In a trap — Ediv1
(Author: Fedosik)

The path from u to v can be divided into blocks of 256 nodes and (possibly) a single block with less than 256 nodes. We can consider this last block separately, by iterating all of its nodes.
Now we need to deal with the blocks of length exactly 256. Each such block is determined by two numbers: x — the last node in the block, and d — the highest 8 bits of the distance from v. We can precalculate answer(x, d) for all pairs and then use these values to answer the queries.
Let's now talk about precalculating answer(x, d). Fix x and the 255 nodes that follow it on the path. It's easy to notice that the lowest 8 bits of the distances will always be 0, 1, ..., 255. We can xor these offsets with the node values: 0 with a_x, 1 with the value of the next node on the path, and so on, and store the results in a trie. Now we can iterate all possible values of d (from 0 to 255), and the only thing left is to find a number q stored in the trie such that q xor 256·d is maximized (256·d is exactly the contribution of the highest bits, since the low 8 bits of 256·d are zero).
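The trie used here is a standard binary max-xor trie; a minimal sketch, assuming 16-bit values (at least one value must be inserted before querying):

```python
class XorTrie:
    # Binary trie over BITS-bit integers; query(v) returns max(v ^ q)
    # over all previously inserted values q.
    BITS = 16

    def __init__(self):
        self.root = [None, None]

    def insert(self, q):
        node = self.root
        for b in range(self.BITS - 1, -1, -1):
            bit = (q >> b) & 1
            if node[bit] is None:
                node[bit] = [None, None]
            node = node[bit]

    def query(self, v):
        node, best = self.root, 0
        for b in range(self.BITS - 1, -1, -1):
            bit = (v >> b) & 1
            if node[1 - bit] is not None:   # greedily take the opposite bit
                best |= 1 << b
                node = node[1 - bit]
            else:
                node = node[bit]
        return best
```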

Complexity: O(N·sqrt(N)·log N)

Comments (35)
 » Why a lot of likes for the word "test"?
•  » » Because the Russian version is not available. Switch to the English one.
 » Can someone reword div1C/div2E? What's the basic idea?
•  » » 2 years ago — Through observation, we notice that if a·b is a perfect square and b·c is a perfect square, then a·c is a perfect square too. So we can split the values into T groups, and the problem reduces to the following (solution quoted from the source site):
First we group all the distinct values in the array. Then we can solve the problem using dynamic programming: let dp[i][j] = the number of distinct arrays that can be obtained using the elements of the first i groups such that there are exactly j pairs of consecutive positions having the same value. The answer can be found in dp[distinctValues][0].
Now let's say the sum of frequencies of the first i groups is X. This means the arrays we can build using the elements from these i groups have size X, so we can insert the elements of group i + 1 into X + 1 gaps: before the first element, after the last, and between any two consecutive ones. We can fix the number k of gaps where we want to insert at least one element from group i + 1, but we also need to fix the number l of these k gaps which lie between elements that previously had the same value. State dp[i][j] will then update the state dp[i + 1][j - l + frequence[i + 1] - k].
The number of ways of choosing k gaps such that exactly l are between consecutive elements having the same value can be computed easily using combination formulas. We are left with finding the number of ways of inserting frequence[i + 1] elements into k non-empty gaps; this is also a classical (stars-and-bars) problem with a simple answer: C(frequence[i + 1] - 1, k - 1).
•  » » » 2 years ago — I didn't understand: why is it dp[i + 1][j - l + frequence[i + 1] - k] and not dp[i + 1][j - l + frequence[i + 1] - k - 1]? Shouldn't we add (frequence[i + 1] - k - 1) instead of (frequence[i + 1] - k), since we add the count of pairs, not of numbers?
•  » » » » I've found my mistake
 » 2 years ago — For those who are having some difficulty understanding the above analysis of Div-1 C: there is a very similar problem on Codechef which has a very nice detailed editorial with an explanation of the time complexity. :)
•  » » 2 years ago — In problem Destiny — Ddiv1, I don't understand this editorial: what are the "K most frequent values"? If anyone understands this editorial, please explain it to me! Thank you!
•  » » I also don't get it, but maybe it's using the following neat algorithm to find majority elements.
Problem: from a sequence of N elements, find one that appears more than N/2 times.
Solution: repeat the following: "find two distinct elements and erase them". If something is left in the end, all of it will be the same number, and it's the only possible candidate for a solution.
Now we can generalize:
Problem: from a sequence of N elements, find one that appears more than N/K times.
Solution: repeat the following: "find K distinct elements and erase them". Whatever is left in the end contains at most K - 1 different numbers, and they are the only possible candidates for a solution.
To use this in problem D, we apply the algorithm to the interval [l, r], obtain K - 1 candidates, count the number of appearances for each, and read off the answer. Applying the algorithm to an interval can be done using a segment tree.
•  » » » Why should we "find K distinct elements and erase them"? I don't understand. Can you explain it again for me?
•  » » » » 2 years ago — Let's talk about K = 2. If we repeatedly delete two distinct numbers from the sequence, in the end we'll have at most one distinct number left.
Now imagine that the original sequence contains a number that appears more than N/2 times. It's impossible for all its appearances to be killed by the algorithm. Why? Because every time we want to erase one of its appearances, we also have to erase one other number, and we won't be able to keep doing that: this number appears more times than all the others combined. Thus it will be the number left at the end of the algorithm.
This doesn't mean that whatever is left at the end necessarily appears more than N/2 times — only that if such a number exists, it will be left at the end of the algorithm. Generalizing to K > 2 shouldn't be hard once you get K = 2.
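The erase-K-distinct-elements idea above is the classic Misra–Gries heavy-hitters algorithm; an offline sketch (the function name is mine), which keeps at most K - 1 counters instead of physically erasing elements:

```python
def candidates(seq, k):
    # Misra-Gries: returns at most k - 1 values; any element occurring more
    # than len(seq) / k times is guaranteed to be among them. The converse
    # does not hold, so candidates must still be verified by counting.
    cnt = {}
    for x in seq:
        if x in cnt:
            cnt[x] += 1
        elif len(cnt) < k - 1:
            cnt[x] = 1
        else:
            # x together with the k - 1 tracked keys forms k distinct
            # elements: "erase" one copy of each.
            for key in list(cnt):
                cnt[key] -= 1
                if cnt[key] == 0:
                    del cnt[key]
    return list(cnt)
```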
 » There is another solution to problem D with the same complexity, O(KN log N), using a wavelet tree (http://codeforces.com/blog/entry/52854). Build the wavelet tree for the input array. Queries are then handled similarly to "count the number of occurrences of x in [L,R]" queries, but instead of picking a branch depending on a number x, we explore both branches, cutting any branch that holds at most ⌊(R - L + 1)/k⌋ elements of the range. At each level of the tree, at most K branches will remain, so a query is handled in time O(K log N). My code: 29654756. I only had to write the solve and main functions, and the solve function was obtained from cnt.
 » Can Div1 D be solved with Mo's algo somehow without TLE?
•  » » Yes: you can use Mo's algorithm to maintain the count of each number on the query segment, and then a randomized check: around 80 times, draw a random index id in the query segment (l, r) and test whether number[id] passes the condition. The chance of success is around 99%.
•  » » » How did you get these numbers, 80 and 99?
•  » » » My logic was more like: use Mo separately for each k, but that will probably TLE.
•  » » » » ((1 - (4/5)^80)^300000) · 100% ≈ 99%
•  » » » » » Neat! :D
 » Still waiting for Div1 E.
 » 2 years ago — Since the editorial for Div-1 E is not yet available, I decided to explain my solution to it on the basis of this comment. Each mask can consist of at most 16 bits. We deal with both halves separately.
For a given query (u, v), we process the path in blocks of 256. For any vertex a on the path from u to v, the distance from v will be of the form 256·A + B, where A ≥ 0 and 0 ≤ B < 256. If we process the path in blocks of 256, the "A" values for all vertices in a block will be the same. Note that 256·200 > 50000, so there will be at most 200 blocks.
We precompute, for each vertex considered as the lowest vertex of a block and for all i such that 0 ≤ i ≤ 200, the max XOR we can get using the most significant 8 bits. In my code, best_most_significant_eight[v][i] denotes the max xor we can get considering only the most significant 8 bits in the block of size ≤ 256 whose lowest vertex is v, where i is the "A" value of the vertices in this block. E.g. for a node at distance 257 (= 256·1 + 1) from node v, the "A" value is 1.
Now we need to handle the lowest 8 bits. For any block, we should consider the lowest 8 bits of only those nodes whose highest 8 bits give us the max xor obtained previously. This we can do using precomputation too. Again with reference to my code, best_lower_significant_eight[v][i] denotes the max xor we can get using the lower 8 bits (i.e. using the "B" values) for the block with lowest vertex v when the highest 8 bits equal i. We can calculate best_lower_significant_eight[][] by just iterating over the immediate 256 ancestors of every vertex, and best_most_significant_eight[][] by inserting all the values of the immediate 256 ancestors into a trie and then querying for the max XOR, for all i from 0 to 200.
All that remains is to handle the vertices in the topmost (possibly incomplete) block while answering a query. In that case, since there are at most 256 vertices to check, we can just run through them all and update our answer. You can see my code for this implementation. Time complexity is roughly O(N·256 + Q·256).
 » Hi! Can anyone please explain this sample test case from Div2D?
Input:
4 5
0 0 0 -1
1 2
2 3
3 4
1 4
2 4
Output: 0
The output of 0 means that there are no edges in the good subset and the original graph already satisfies all the mentioned conditions (for every vertex i, d_i = -1 or its degree modulo 2 is equal to d_i).
In this graph: deg(1) = 2, d_1 = 0 (2 % 2 == 0, satisfied); deg(2) = 3, d_2 = 0 (3 % 2 != 0 — how is this satisfied?); deg(3) = 2, d_3 = 0 (satisfied); deg(4) = 3, d_4 = -1 (satisfied). Moreover, if I remove the 5th edge (2 4), then all conditions are satisfied, but that would be a Wrong Answer. Can anyone point me in the right direction? I am unable to understand this question. Thanks :)