zscoder's blog

By zscoder, history, 4 days ago, In English,

Weekly Training Farm 22 is over. Congratulations to the winners :

  1. W4yneb0t (perfect score in < 1 hour!)

  2. aaaaajack (perfect score)

  3. eddy1021

Here is the editorial :

Problem A

This problem can be solved by greedy. We list down the positive integers one by one. We keep a pointer that initially points to the first letter of s. Whenever the pointed character in the string s matches the corresponding digit of the integer, we move the pointer one step to the right and continue. Repeat this process until the pointer reaches the end.
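The greedy can be sketched as follows in Python, assuming the task is to find the smallest N such that s is completed as a subsequence of the concatenation 123...N (the exact statement isn't included here, so the interface is an assumption):

```python
def first_completed_at(s):
    """Write down 1, 2, 3, ... and advance a pointer over s whenever
    the written digit matches the pointed character; return the number
    being written when the pointer falls off the end of s."""
    ptr = 0
    n = 0
    while ptr < len(s):
        n += 1
        for d in str(n):
            if ptr < len(s) and d == s[ptr]:
                ptr += 1
    return n
```

By the bound above, the loop writes at most about 10·|s| numbers.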

However, we still need to bound how large the answer can be. The key is to note that the answer will never exceed 10^6: among any 10 consecutive numbers written down, at least one has last digit equal to the currently pointed digit, so the pointer moves to the right at least once for every 10 numbers written. Thus, in the worst case, we'll only list down the numbers from 1 to 10^6, which is definitely fast enough.

Code

Problem B

This problem can be solved using dynamic programming. Firstly, observe that if we have already decided which set of problems to solve, then it's best to solve them in increasing order of the time needed, in order to minimize the time penalty. Thus, we can first sort the problems in increasing order of time needed, breaking ties arbitrarily.

Let dp[i][j] denote the maximum number of problems solved and minimum time penalty acquired when doing so by using exactly j minutes and only solving problems among the first i ones. dp[0][0] = (0, 0) (the first integer denotes the number of problems solved and the second integer denotes the time penalty in order to do so). The transitions can be handled easily by simply considering whether to solve the i-th problem or not. The time complexity of this solution is O(nT) (T is the duration of the contest)
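A minimal sketch of this dp in Python (the input format is an assumption: a list of solving times and the contest duration T; states store (solved, -penalty) so that the built-in tuple comparison prefers more problems first, then less penalty):

```python
def best_result(times, T):
    """Knapsack over the exact number of minutes used; a problem
    finished at cumulative time j adds j to the penalty."""
    times = sorted(times)          # solve chosen problems in increasing time
    dp = {0: (0, 0)}               # minutes used -> best (solved, -penalty)
    for t in times:
        for j in range(T, t - 1, -1):   # descending: each problem used once
            if j - t in dp:
                solved, neg_pen = dp[j - t]
                cand = (solved + 1, neg_pen - j)
                if j not in dp or cand > dp[j]:
                    dp[j] = cand
    solved, neg_pen = max(dp.values())
    return solved, -neg_pen
```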

Code

Problem C

This is an ad hoc problem. Firstly, we can use two moves to determine the value of the first bit (simply flipping it twice will tell you its value). Now, if the bit is 1, you don't need to flip it anymore; if it's 0, you'll need to flip it. In either case, we'll flip the second bit as well (if the first bit needs to be flipped, we flip [1, 2]; otherwise we flip [2, 2]). After flipping the second bit, we can determine whether it was a 1 or a 0 by comparing the total number of 1s in the string before and after the flip. We repeat this for every 2 consecutive bits until we arrive at the last two bits. At this point, we know the value of the second last bit, and we also know the total number of 1 bits, so we can deduce the value of the last bit from this information as well. Now, we just need to perform one last flip to make the last 2 bits become 1. The total number of moves made is n + 1.

Code

Problem D1

First, we can use 18 moves to determine the value of a by asking 2 to 19 in increasing order; the first "yes" answer will be the value of a. If there are no "yes" answers, then the value of a is 20.

Call a number good if it can be represented as a sum of nonnegative multiples of a and b. Note that if x is good, then x + a and x + b are both good.

Now that we have the value of a, let's think about what b is. Consider the numbers ka + 1, ka + 2, ..., ka + (a - 1) for a fixed k. If none of these numbers are good, we can immediately say that b is larger than (k + 1)a. Why? Suppose b = qa + r. Clearly, r ≠ 0 since a and b are coprime. Note that xa + r for all x ≥ q will be good, since xa + r = (qa + r) + (x - q)a = b + (x - q)a. So, b cannot be less than any of the numbers ka + 1, ka + 2, ..., ka + (a - 1), or else one of these numbers would've been good, a contradiction. Note that this also means that if y is the smallest integer such that ya + 1, ya + 2, ..., ya + (a - 1) are not all bad, then there will be exactly one good number among them, which will be b itself. Also note that for all integers k > y, there will be at least one good number among ka + 1, ka + 2, ..., ka + (a - 1). Thus, we can binary search for the value of y. In each iteration of the binary search, we need to ask at most a - 1 ≤ 19 questions, and there are at most 19 iterations, so the maximum number of operations needed is 19·19 + 18 = 379 < 380.
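The binary search for y can be sketched non-interactively, with the judge's answers simulated by an is_good oracle (the upper bound hi is a placeholder for whatever the problem's actual limit is):

```python
def find_b(a, is_good, hi=10**6):
    """Binary search the smallest row y whose numbers ya+1..ya+(a-1)
    contain a good number; that unique good number is b."""
    def row_has_good(k):
        return any(is_good(k * a + r) for r in range(1, a))
    lo = 0
    while lo < hi:
        mid = (lo + hi) // 2
        if row_has_good(mid):
            hi = mid
        else:
            lo = mid + 1
    for r in range(1, a):
        if is_good(lo * a + r):
            return lo * a + r

def make_oracle(a, b):
    # simulated judge: x is good iff x = ia + jb with i, j >= 0
    return lambda x: any((x - j * b) % a == 0 for j in range(x // b + 1))
```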

Code

Problem D2

This problem is the same as D1, but with higher constraints. Firstly, we find the value of a in 18 moves as in problem D1. To proceed, we need to think about this problem from another angle. Suppose we know a number N that is good and not a multiple of a, and we can find the maximum number k such that N - ka is good. What does this tell us? It means that N - ka is a multiple of b. Why? We know that N - ka = ax + by for some nonnegative integers x and y, since N - ka is good. If x > 0, then N - (k + 1)a = a(x - 1) + by is also good, contradicting the maximality of k. Thus, x = 0 and so N - ka = by. Note that y > 0, since we chose N so that it's not a multiple of a.

To find a value of N such that N is good and not a multiple of a, it is sufficient to take N = 500000a - 1, since any number greater than ab - a - b is guaranteed to be good (this is a well-known fact, sometimes called the Chicken McNugget theorem).

We can find the largest k such that N - ka is good via binary search, because if N - ma is not good then N - (m + 1)a can't be good (otherwise, if N - (m + 1)a = ax + by, then N - ma = a(x + 1) + by would be good). This takes at most 19 questions.

What do we do after finding a value which is a multiple of b? Let C = N - ka. We consider the prime factorization of C. The main claim is that if a divisor x of C is good, then x must be a multiple of b. The reasoning is the same as what we did before. So, we can find the prime factorization of C and divide out the prime factors one by one. If the number becomes bad, we know that the prime factor cannot be removed, and we proceed to the next prime factor. Since a number less than 10^7 can have at most 23 prime factors (the extreme case being 2^23), this takes another 23 questions.
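The two steps can be put together as a sketch (again with a simulated oracle in place of the interactor; N here is any good non-multiple of a, not necessarily 500000a - 1):

```python
def find_b_d2(N, a, is_good):
    """N must be good and not a multiple of a. Binary search the largest
    k with N - ka good, then strip prime factors of the resulting
    multiple of b while the quotient stays good."""
    lo, hi = 0, N // a
    while lo < hi:                      # good k's form a prefix
        mid = (lo + hi + 1) // 2
        if is_good(N - mid * a):
            lo = mid
        else:
            hi = mid - 1
    C = N - lo * a                      # a multiple of b
    factors, m, d = [], C, 2            # factorize C by trial division
    while d * d <= m:
        while m % d == 0:
            factors.append(d)
            m //= d
        d += 1
    if m > 1:
        factors.append(m)
    for p in factors:                   # drop p if C/p is still good
        if is_good(C // p):
            C //= p
    return C

def make_oracle(a, b):
    # simulated judge: x is good iff x = ia + jb with i, j >= 0
    return lambda x: any((x - j * b) % a == 0 for j in range(x // b + 1))
```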

Thus, we only used at most 18 + 19 + 23 = 60 questions to find the values of a and b.

Code

Problem E

Firstly, note that a connected graph on n vertices with n edges contains exactly 1 cycle. Call the vertices on the cycle the cycle vertices. From each cycle vertex, there's a tree rooted at it. Thus, call the remaining vertices the tree vertices. Note that the number of useless edges is equal to the length of the cycle.

Now, we do some casework :

  • u is a tree vertex

Note that this will not change the length of the cycle. Thus, we just have to count the number of ways to change the value of a_u such that the graph remains connected. The observation is that for each tree node u, the only possible values of a_u are the nodes which are not in the subtree of u (in the tree u belongs to). Thus, the number of possibilities can be calculated with a tree dp: for each tree, we calculate the subtree size of each node, sum all these subtree sizes, and subtract the sum from the total number of ways to pick a tree vertex u and a value of a_u. This part can be done in O(n) time.

  • u is a cycle vertex

For two cycle vertices u and v, let d(u, v) be the directed distance from u to v in the functional graph. Suppose we change a_u to x, and let v be the root of the tree that x is in (v = x if x is a cycle vertex). Then the length of the cycle after the change will be d(v, u) + 1 + h[x], where h[x] is the height of x in its tree. The key idea is that instead of fixing u and iterating through all other nodes x, we iterate through all endpoints x and see how each one changes our answer. Note that if x is fixed, then v is fixed too, so we just have to add 1 to the answer for c = d(v, u) + 1 + h[x] for all cycle vertices u. However, d(v, u) ranges over 0 to C - 1 (where C denotes the length of the original cycle), so this is equivalent to adding 1 to the answer for each of c = h[x] + 1, h[x] + 2, ..., h[x] + C. Now, we can iterate through all vertices x and add 1 to the answer for c = h[x] + 1, h[x] + 2, ..., h[x] + C. To do this quickly, we can employ the "+1, -1" method: whenever we want to add 1 to a range [l, r], we add 1 to ans_l and subtract 1 from ans_{r + 1}. Then, to recover the actual values of the ans array, we just take its prefix sums.
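The "+1, -1" range-increment trick in isolation can be sketched as (ans is indexed by the cycle length c):

```python
def accumulate_ranges(n, ranges):
    """Add 1 to ans[l..r] for each (l, r) with 0 <= l <= r <= n,
    using difference updates plus one prefix-sum pass."""
    ans = [0] * (n + 2)
    for l, r in ranges:
        ans[l] += 1           # start of the range
        ans[r + 1] -= 1       # one past the end
    for i in range(1, n + 1):
        ans[i] += ans[i - 1]  # prefix sums recover the true values
    return ans[:n + 1]
```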

Finally, do not forget to subtract from the answer the cases where the new value equals the original a_u. The total complexity is O(n).

Code


By zscoder, history, 5 days ago, In English,

Hi everyone!

I would like to invite you to the Weekly Training Farm 22! The problemsetter is me (zscoder), and the tester and quality controller is dreamoon.

It will be a contest in ACM-ICPC style with 6 problems. The difficulty is around 500-1500-1500-1750-2500-2500 (compared to Div. 2 contests).

The contest begins at 19:30 UTC+8 and lasts for two hours.

To join the contest, join this group (as participant) first and find Weekly Training Farm 22 on the Group Contest tab.

In addition, there will be a few interactive problems in this round. Please check the Interactive Problems Guide if you're not familiar with interactive problems.

Good luck and hope you enjoy the problems!

UPD : Contest starts in around 4.5 hours.

UPD : You can find the editorial here

UPD : Since next week will be the Lunar New Year, there'll be no Weekly Training Farm next week. It will resume in February.


By zscoder, history, 3 weeks ago, In English,

Congratulations to the winners!

  1. spentplaying

  2. 0w1

  3. lawfung

Also, special props to biGinNer for solving the last 3 problems (and being the only one to solve F during the contest).

Here are the editorials :

Problem A.

This is a simple problem. First, we calculate the position Harry ends up in after performing the move sequence once. This can be done by directly simulating the moves Harry makes. Now, suppose Harry is at (x, y) after one iteration. Since every iteration moves Harry x units to the right and y units up, after n iterations he will end up at (nx, ny). The complexity of the solution is O(|s|).
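As a sketch, assuming moves are encoded as the characters L/R/U/D (the actual encoding isn't shown here):

```python
def position_after(s, n):
    """Net displacement of one pass over s, scaled by n passes."""
    x = s.count('R') - s.count('L')
    y = s.count('U') - s.count('D')
    return (n * x, n * y)
```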

Problem B.

This is a dp problem. Let dp[i] be the maximum possible sum of the remaining numbers in the range [1..i]. For 1 ≤ i ≤ k - 1, the value is just the sum of the numbers in the range, and we let dp[0] = 0. For i ≥ k, we may either keep the element a_i or remove a subsegment of length k which ends at a_i. Thus, we arrive at the recurrence dp[i] = max(dp[i - 1] + a_i, dp[i - k]). We can calculate the dp values in O(n).
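The recurrence translates directly (a sketch; any number of disjoint length-k removals is allowed, as the recurrence implies):

```python
def max_remaining_sum(a, k):
    """dp[i] = best sum over a[0..i-1]: keep a[i-1], or delete the
    length-k segment ending at a[i-1]."""
    n = len(a)
    dp = [0] * (n + 1)
    for i in range(1, n + 1):
        dp[i] = dp[i - 1] + a[i - 1]
        if i >= k:
            dp[i] = max(dp[i], dp[i - k])
    return dp[n]
```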

Problem C.

Observe that we can consider each prime factor separately. For each prime p that appears in N, let's see what prime power p^{k_i} we should pick from each number a_i so that the sum of the k_i is equal to the power of p in the prime factorization of N. Firstly, we need to prime factorize all the numbers a_i. We can use a sieve to find the primes, after which each a_i can be factorized quickly. From now on, we'll focus on a specific prime p. We know the maximum power m_i we can take from each number a_i (so k_i ≤ m_i). From here, we can use a greedy method to decide what to take from each number a_i. Note that m_i ≤ 20 because 2^20 = 1048576 > 10^6. So, for each number a_i, we know the cost needed if we take 1, 2, ..., m_i factors of p from a_i. We can store a vector and, for each a_i, push w_i·p, w_i·(p^2 - p), w_i·(p^3 - p^2), ..., w_i·(p^{m_i} - p^{m_i - 1}) into it. Now, we sort the vector and take the first x elements, where x is the power of the prime p in the prime factorization of N. If we can't take x elements, the answer is -1. We can repeat this for all primes to solve the problem within the time limit.
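The per-prime greedy can be sketched as follows, with the cost list exactly as described above (the (m_i, w_i) input format is an assumption):

```python
def min_cost_for_prime(p, x, items):
    """items: (m_i, w_i) pairs = max exponent of p in a_i and its weight.
    Push the incremental cost of taking the 1st, 2nd, ... factor of p
    from each a_i, sort, and take the x cheapest."""
    costs = []
    for m, w in items:
        for j in range(1, m + 1):
            inc = p if j == 1 else p**j - p**(j - 1)
            costs.append(w * inc)
    if len(costs) < x:
        return -1              # not enough factors of p available
    costs.sort()
    return sum(costs[:x])
```

The incremental costs for a fixed a_i are non-decreasing in j, so taking the globally cheapest x elements never takes a later factor of some a_i without its earlier ones.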

Problem D.

To solve this problem, you need to know a bit about permutations. First, we need to determine how to find the minimum number of swaps needed to sort a permutation; this is a well-known problem. Let the permutation be P = p_1, p_2, ..., p_n. Construct a graph by drawing an edge from i to p_i for all 1 ≤ i ≤ n. Note that the graph is formed by disjoint cycles. You can easily see that swapping two elements either splits a cycle into two smaller cycles or merges two cycles into one. Since the identity permutation is formed by n cycles, the optimal strategy is to keep splitting cycles into two, increasing the total number of cycles by 1 each step. Thus, if we denote by c the number of cycles in the current permutation, the number of moves needed to sort the permutation is n - c. Harry wins if and only if n - c is odd.
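Counting cycles to get n - c, as a quick sketch (0-indexed permutation):

```python
def min_swaps_to_sort(p):
    """n - (#cycles) swaps are needed; traverse each cycle once."""
    n = len(p)
    seen = [False] * n
    cycles = 0
    for i in range(n):
        if not seen[i]:
            cycles += 1
            j = i
            while not seen[j]:
                seen[j] = True
                j = p[j]
    return n - cycles
```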

The key observation is that whenever there are exactly two question marks left, the first player can always win. Why? Consider how the current graph of the permutation looks. It will be a union of a few cycles and 2 chains (we consider a singleton, a component formed by a single vertex, as a chain). Now, the first player can either close off one of the chains or join the two chains together. The latter leaves exactly one fewer cycle than the former, so one of the two options guarantees that n - c is odd; the first player only has to choose the correct move. This implies that if the number of question marks m is at least 2, then Harry wins if m is even and loses otherwise.

Now, the only case left is when there is only 1 question mark in the beginning. This means that Harry has only 1 possible move, and we're left with deciding whether the final permutation has n - c odd. Thus, it is enough to count the number of cycles in the resulting graph, which can be done by dfs. The complexity of the solution is O(n).

Problem E.

First, we form a trie of the given words. Now, the game is equivalent to the following :

  1. Start from the root of the trie.

  2. Each player can either move to one of the children of the current node, or delete one edge connecting the current node to one of the children.

The one who can't move loses. This reduced game can be solved with tree DP. Let dp[u] denote the winner of the game if the first player starts at node u. The leaves have dp[u] = 2. Our goal is to find dp[0] (where 0 is the root). The recurrence is simple. Suppose we're computing dp[u] and the children of u are v_1, v_2, ..., v_k. If one of the children has dp value 2, then Player 1 can just move to that child and win. Otherwise, all children have dp value 1. Thus, both players will try not to move down unless forced to, so they'll keep deleting edges. If there is an even number of children, Player 2 will win, as he will either delete the last edge or force Player 1 to move down. Otherwise, Player 1 wins. This gives a simple O(n) tree dp solution.
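A sketch of the trie plus tree dp (dicts as trie nodes; True means the player to move wins, matching dp[u] = 1):

```python
def first_player_wins(words):
    """Build a trie, then: the mover wins if some child is a losing
    position for its mover, or, when all children are winning, if the
    number of children is odd (edge deletions decide the parity)."""
    root = {}
    for w in words:
        node = root
        for ch in w:
            node = node.setdefault(ch, {})

    def win(node):
        children = list(node.values())
        if not children:
            return False                      # leaf: no move, mover loses
        if any(not win(c) for c in children):
            return True                       # move into a losing child
        return len(children) % 2 == 1         # pure edge-deletion game

    return win(root)
```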

Problem F.

Firstly, we make the same observations as in problem D: swapping two elements will either split a cycle into two or merge two cycles. If we swap two elements from the same cycle, the cycle splits into two; if we swap two elements from different cycles, the two cycles merge. Also note that for a cycle of size c, we can always split it into two cycles of sizes a and b with a, b > 0 and a + b = c by choosing the appropriate two elements to swap. Now, each move of the game reduces to choosing 2 (possibly equal) elements from one cycle, swapping them, and deleting one of the resulting cycles. So, for a given permutation with cycle sizes c_1, c_2, ..., c_k, each move chooses one of the sizes and changes it into any nonnegative number strictly smaller than it. Thus, we have reduced the problem to playing a game of Nim on c_1, c_2, ..., c_k. Since Harry goes second, he wins if and only if the xor of all the cycle sizes is 0 (this is a well-known fact).

Thus, we've reduced the problem to counting the permutations of length n for which the xor of all cycle sizes equals 0. To do so, let dp[i][j] denote the number of permutations of length i with xor of all cycle sizes equal to j. The dp transitions can be done by iterating through all possible sizes s of the cycle containing element i. For each s, there are C(i - 1, s - 1) ways to choose the remaining elements of the cycle containing i and (s - 1)! ways to arrange them in a cycle. Thus, dp[i][j] is the sum of C(i - 1, s - 1)·(s - 1)!·dp[i - s][j xor s] over all 1 ≤ s ≤ i. The whole solution works in O(n^3) time.
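The counting dp as a sketch (xor values stay below the smallest power of two above n, which bounds the second dimension):

```python
from math import comb, factorial

def count_xor_zero_perms(n):
    """dp[i][j] = permutations of length i whose cycle sizes xor to j."""
    M = 1 << max(1, n).bit_length()     # all xors of sizes <= n fit below M
    dp = [[0] * M for _ in range(n + 1)]
    dp[0][0] = 1
    for i in range(1, n + 1):
        for s in range(1, i + 1):       # size of the cycle containing i
            ways = comb(i - 1, s - 1) * factorial(s - 1)
            for j in range(M):
                dp[i][j ^ s] += ways * dp[i - s][j]
    return dp[n][0]
```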


By zscoder, history, 3 weeks ago, In English,

Hi everyone!

I would like to invite you to the Weekly Training Farm 20! The problemsetter is me (zscoder), and the testers and quality controllers are dreamoon and drazil.

It will be a contest in ACM-ICPC style with 6 problems. The difficulty is around 500-1250-1750-2000-2250-2250 (compared to Div. 2 contests).

The contest begins at 20:00 UTC+8 and lasts for two hours.

To join the contest, join this group (as participant) first and find Weekly Training Farm 20 on the Group Contest tab.

Reminder : The contest will start in around 5 hours from now.

Update : Less than 1 hour before start. Good luck!

Here's the editorial.


Announcement of Weekly Training Farm 20
 
 
 
 

By zscoder, history, 3 months ago, In English,

The Codechef October Challenge ended just a few hours ago. Every time, I find that my weakest spot is solving those approximation problems. How do you start solving them? There are people who get very high points, and I'm curious how they manage to do that.


By zscoder, history, 3 months ago, In English,

Hi everyone! Following my last article, today I'm writing about a not-so-common trick that has nevertheless appeared in some problems before and might be helpful to you. I'm not sure if this trick has been given a name yet, so I'll refer to it as the "Slope Trick" here.

Disclaimer : It would be helpful to have a pen and paper with you to sketch the graphs so that you can visualize these claims easier.

Example Problem 1 : 713C - Sonya and Problem Wihtout a Legend

This solution originated from koosaga's comment in the editorial post here.

The solution below will solve this problem in O(n log n), whereas the intended solution is O(n^2).

So, the first step is to get rid of the strictly increasing condition. To do so, we apply a[i] -= i for all i and thus we just have to find the minimum number of moves to change it to a non-decreasing sequence.

Define f_i(x) as the minimum number of moves to change the first i elements into a non-decreasing sequence such that a_i ≤ x.

It is easy to see that by definition we have the recurrences

f_1(X) = min_{Y ≤ X} |a_1 - Y|

and

f_i(X) = min_{Y ≤ X} (f_{i - 1}(Y) + |a_i - Y|) for i > 1.

Now, note that f_i(X) is non-increasing, since f_i(X) is at most f_i(X') for any X' < X by definition. We store a set of integers that denotes where the function f_i changes slope. More formally, consider the function g_i(X) = f_i(X + 1) - f_i(X). The last element of the set will be the smallest j such that g_i(j) = 0, the second last element will be the smallest j such that g_i(j) = -1, and so on (note that the set of slope-changing points is bounded).

Let Opt(i) denote a position where f_i(X) achieves its minimum (i.e. g_i(Opt(i)) = 0). The desired answer will be f_n(Opt(n)). We'll see how to update these values quickly.

Now, suppose we already have everything for f_{i - 1}, and we want to update the data for f_i. First, note that all the values x < a_i will have their slope decreased by 1. Also, every value x ≥ a_i will have its slope increased by 1, unless we have already reached the point where the slope is 0, after which the graph never goes up again.

There are two cases to consider :

Case 1 : Opt(i - 1) ≤ ai

Here, the slope at every point before a_i decreases by 1. Thus, we push a_i into the set: this indicates that the slope at all the slope-changing points decreases by 1, and the slope-changing point for slope 0 is now a_i, i.e. Opt(i) = a_i. Thus, this case is settled.

Case 2 : Opt(i - 1) > ai

Now, we insert a_i into the set, since it decreases the slope at all the slope-changing points before a_i by 1. Furthermore, we insert a_i again, because it increases the slope at the slope-changing points between a_i and Opt(i - 1) by 1. Now, we can just take Opt(i) = Opt(i - 1), since the slope at Opt(i - 1) is still 0. Finally, we remove Opt(i - 1) from the set, because it's no longer the first point where the slope changes to 0 (it's the previous point where the slope changed to -1, and the slope there becomes 0 because of the addition of a_i). Thus, the set of slope-changing points is maintained. We have f_i(Opt(i)) = f_{i - 1}(Opt(i - 1)) + |Opt(i - 1) - a_i|.

Thus, we can just use a priority queue to store the slope-changing points, and it is easy to see that it can handle all these operations efficiently (in O(log n) time each).
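The whole algorithm fits in a few lines with a max-heap (a sketch; heapq is a min-heap, so values are negated, and the Case 1/Case 2 split collapses into one branch). min_moves_strictly_increasing applies the a[i] -= i transform first:

```python
import heapq

def min_moves_non_decreasing(a):
    """Slope trick: the heap holds the slope-changing points of f_i
    (negated for max-heap behaviour); Case 2 pays |Opt(i-1) - a_i|."""
    heap, ans = [], 0
    for x in a:
        heapq.heappush(heap, -x)
        if -heap[0] > x:                 # Case 2: Opt(i-1) > a_i
            ans += -heap[0] - x
            heapq.heappop(heap)
            heapq.heappush(heap, -x)     # a_i inserted twice, Opt removed
    return ans

def min_moves_strictly_increasing(a):
    return min_moves_non_decreasing([v - i for i, v in enumerate(a)])
```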

Here's the implementation of this idea by koosaga : 20623607

This trick is called the "Slope Trick" because we're considering the general function and analyzing how its slope changes at different points to find the minimum or maximum value.

The next example is APIO 2016 P2 — Fireworks

This problem was the "killer" problem of APIO 2016, and was solved by merely 4 contestants in the actual contest.

I'll explain the solution, which is relatively simple and demonstrates the idea of slope trick.

So, the idea is similar to the last problem. For each node u, we store a function f(x) which denotes the minimum cost to change the weights of the edges in the entire subtree rooted at u, including the parent edge of u, such that the sum of weights on each path from u to a leaf is equal to x. We'll store the slope-changing points of the function in a container (which we'll choose later) again. In addition, we store two integers a, b which denote that for all x ≥ X, where X is the largest slope-changing point, the value of the function is ax + b (clearly such a linear tail exists, since as x increases one can always increase the parent edge by 1).

Now, for the leaf nodes i, it is clear that a = 1, b = -c_i, where c_i is the cost of the parent edge of i, and the slope-changing points are {c_i, c_i}.

For a non-leaf node u, we have to combine the functions of its children first. Firstly, we set the function to the sum of all the children's functions, and we'll correct it later. We set the value a of this node to the sum of the a values of its children, and similarly for b. Also, we combine all the slope-changing points together. It is important to merge the smaller sets into the larger set (see dsu on tree, a.k.a. the small-to-large technique).

Now, the function is still incorrect. Firstly, note that all the slope-changing points with slope > 1 are meaningless, because we can just increase the parent edge by 1 to increase the sum of the whole subtree, so we remove these slope-changing points while updating the values of a and b. Suppose we remove a slope-changing point x at slope a; then we decrement a, increase b by x, and remove x from the set (this is because ax + b = (a - 1)x + (b + x)). Repeat this until a becomes at most 1.

Next, since the cost of the parent edge is c_i, we have to shift the points where the slope changes to 0 and to 1 to the right by c_i. Note that the point where the slope changes to -1 doesn't move, because we can just reduce the weight of the parent edge until it reaches 0 (the condition that the weights can be reduced to 0 helps here).

Finally, we have to decrease b by ci, since we shifted the points to the right by ci. Thus, the function for this node is now complete.

Thus, we can do a dfs and keep merging functions until we get the function for the root node. Then, we just have to find the value of the function when a = 0 (using the same method as before: we decrease a until it reaches 0). Finally, the answer will be the updated value of b at the root node, and we're done.

We'll use a priority queue to store the slope changing points as it is the most convenient option.

Official Solution

Beyond APIO 2016 Fireworks

Now, the next example is a generalization of this problem. It came from the Codechef October Challenge — Tree Balancing. We'll solve it using the slope trick as well.

The Codechef problem is the same as the last problem, except :

  1. The weights of the edges can be changed to negative values

  2. You must output a possible construction aside from the minimum cost needed

  3. The edges now have a cost wi, and when you change the value of an edge by 1, your total cost increases by wi.

However, it is still possible to solve this using Slope Trick.

Firstly, we suppose that w_i = 1, to simplify the problem. Now, since the edges can be changed to negative values, at each node there is no point whose slope has absolute value greater than 1, since changing the parent edge would yield a better result. Thus, each node actually has only 2 slope-changing points: the point where the slope changes from -1 to 0 and the point where it changes from 0 to 1. This means that we have to pop slope-changing points from the front as well as the back of the set, so the best way to store the data is a multiset.

With this modification, we can find the minimum cost needed like before. Now, the second part of the question is: how do we reconstruct the answer? This part is not hard if you understand what we're doing here. The problem reduces to solving, for each node u: if I need to make the sum of weights from the parent of u to all leaves equal to a given x, what should the parent edge weight be? We start from the children of the root, with the value x equal to the point where the slope changes from 0 to 1 (i.e. the point that yields the minimum value).

For each node we store the 2 slope-changing points li, ri in an array while we find the minimum cost. Now, if li ≤ x ≤ ri, then the best thing to do is not change the parent edge. If x > ri, then we should increase the parent edge value by x - ri. Otherwise, we should decrease the parent edge value by li - x.

Thus, we can find the required weights for the parent nodes and it remains to push the remaining sum of weights needed to its children and recurse until we get all the weights of the edges. The time complexity is the same.

My submission for this case, which gives 20 points

To get the full AC, we need to solve the cost-weighted case. It is actually similar to this case, but we have to modify the solution a bit.

The idea is still the same. However, the number of slope-changing points has increased a lot. To store these points efficiently, we store the compressed form of the set. For example, the set {3, 4, 5, 5, 5, 5, 6, 6} will be stored as {(3, 1), (4, 1), (5, 4), (6, 2)}. Basically, we store the number of occurrences of each integer instead of storing them one by one. We can use a map to handle this.

The base case is a bit different now. Suppose the leaf node is u and the cost of its parent edge is du. Then, a = du, b =  - cu × du, where cu is the weight of its parent edge. The slope changing points is {(cu, 2du)}.

Merging the functions into the parent is the same as before. Now, we have to update the slope-changing points and the function ax + b. First, we remove all points at slopes > d_u or < -d_u, as we can just change the parent edge instead. Then, we have to shift every slope-changing point by c_u. However, shifting the whole map naively is inefficient. The trick here is to store a counter shift for each node that denotes the amount to add to each slope-changing point; the shifting is then equivalent to just adding c_u to the counter. Finally, we update a and b as before.

To recover the solution, we use the same method as above, with some changes. Firstly, l and r will be the minimum slope changing point of the function and maximum slope changing point of the function respectively. Secondly, if the sum of di of all children is less than the di of the parent edge, then we do not change the weight of the parent edge, as it is sufficient to just update all the children edges.

My implementation of this solution (100 points)

That's it for this post. If you know any other application of this trick, feel free to post them in the comments.


By zscoder, history, 3 months ago, In English,

Hi everyone! Today I want to share some DP tricks and techniques that I have seen in some problems. I think this will be helpful for those who have just started doing DP. Sometimes tutorials are very brief and assume the reader already understands the technique, which makes it hard for people who are new to it.

Note : You should know how to do basic DP before reading the post

DP + Bitmasks

This is actually a very well-known technique and most people should already know it. This trick is usually used when one of the variables has very small constraints, allowing exponential solutions. The classic example is applying it to solve the Travelling Salesman Problem in O(2^n·n^2) time. We let dp[i][j] be the minimum time needed to visit the set of vertices denoted by i, ending at vertex j. Note that i iterates through all possible subsets of the vertices, so the number of states is O(2^n·n). We can go from each state to the next states in O(n) by considering all possible next vertices to go to. Thus, the time complexity is O(2^n·n^2).
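The classic bitmask TSP dp as a sketch (path version: start at vertex 0, end anywhere):

```python
def tsp(dist):
    """dp[mask][j] = min cost to visit exactly the vertex set `mask`,
    ending at j; the start is fixed at vertex 0."""
    n = len(dist)
    INF = float('inf')
    dp = [[INF] * n for _ in range(1 << n)]
    dp[1][0] = 0
    for mask in range(1 << n):
        for j in range(n):
            if dp[mask][j] == INF or not (mask >> j) & 1:
                continue
            for k in range(n):
                if (mask >> k) & 1:
                    continue                      # already visited
                nxt = mask | (1 << k)
                cand = dp[mask][j] + dist[j][k]
                if cand < dp[nxt][k]:
                    dp[nxt][k] = cand
    return min(dp[(1 << n) - 1])
```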

Usually, when doing DP + Bitmasks problems, we store the subsets as an integer from 0 to 2^n - 1. How do we know which elements belong to the subset denoted by i? We write i in its binary representation, and for each bit j that is 1, the j-th element is included in the set. For example, 35 = 100011_2 denotes the set {0, 1, 5} (the bits are 0-indexed from right to left). Thus, to test if the j-th element is in the subset denoted by i, we can test whether i & (1<<j) is positive. (Why? Recall that (1<<j) is 2^j and how the & operator works.)

Now, we look at an example problem : 453B - Little Pony and Harmony Chest

So, the first step is to establish an upper bound for the b_i. We prove that b_i < 2a_i: assume otherwise; then we can replace b_i with 1 and get a smaller answer (and clearly this preserves the coprimality). Thus, since a_i ≤ 30, we have b_i < 60. Note that there are 17 primes less than 60, which prompts us to apply dp + bitmask here. Note that for any pair b_i, b_j with i ≠ j, their sets of prime factors must be disjoint, since they're coprime.

Now, we let dp[i][j] be the minimum answer one can get by changing the first i elements such that the set of primes used (i.e. the set of prime factors of the numbers b_1, b_2, ..., b_i) is equal to the subset denoted by j. Let f[x] denote the set of prime factors of x. Since b_i < 60, we iterate through all possible values of b_i, and for a fixed b_i, let F = f[b_i]. Then, let x be the complement of the set F, i.e. the set of primes not used by b_i. We iterate through all subsets of x (see here for how to iterate through all subsets of a given subset x). For each s which is a subset of x, we set dp[i][s|F] = min(dp[i][s|F], dp[i - 1][s] + abs(a[i] - b[i])). This completes the dp. We can reconstruct the solution by storing, for each state, the position where the dp achieves its minimum value, as usual. This solution is enough to pass the time limit.
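The submask iteration referenced above can be sketched as:

```python
def submasks(x):
    """Yield every submask of the bitmask x (including x itself and 0)."""
    s = x
    while True:
        yield s
        if s == 0:
            break
        s = (s - 1) & x   # drops to the next smaller submask of x
```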

Here are some other problems that use bitmask dp :

678E - Another Sith Tournament

662C - Binary Table

Do we really need to visit all the states?

Sometimes, the naive dp solution to a problem takes too much time and memory. However, it is often worth noting that most of the states can be ignored because they will never be reached, and this can reduce your time and memory complexity.

Example Problem : 505C - Mr. Kitayuta, the Treasure Hunter

So, the most direct way of doing dp would be: let dp[i][j] be the number of gems Mr. Kitayuta can collect after he jumps to island i with the length of his last jump equal to j. The dp transitions are quite obvious: we only need to try all possible next jumps and take the one that yields the maximum result. If you have trouble with the naive dp, you can read the original editorial.

However, the naive method is too slow, because it would take O(m2) time and memory. The key observation here is that most of the states will never be visited; more precisely, j can only lie in a certain range. These bounds can be obtained by greedily trying to maximize or minimize j, which shows that j always stays within O(√m) of the initial jump length. This type of intuition might come in handy to optimize your dp and turn the naive dp into an AC solution.
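As an illustration, here is a memoized sketch of that dp on a tiny hypothetical instance (the island positions, gem counts and first jump length below are made up, not from the problem's tests); because the recursion only ever visits reachable (position, jump length) pairs, the bounded-range observation above is exactly what keeps the state count small:

```python
from functools import lru_cache

def max_gems(d, gems, last):
    """gems maps island -> gem count; the first jump lands on island d."""
    @lru_cache(maxsize=None)
    def solve(pos, l):
        # best number of gems collectable from pos, having just jumped length l
        best = 0
        for nl in (l - 1, l, l + 1):
            if nl > 0 and pos + nl <= last:
                best = max(best, solve(pos + nl, nl))
        return gems.get(pos, 0) + best
    return solve(d, d)

# Gems on islands 2, 5 and 7; first jump has length 2: 0 -> 2 -> 5 -> 7.
print(max_gems(2, {2: 1, 5: 1, 7: 1}, 10))  # 3
```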

Change the object to dp

Example Problem : 559C - Gerald and Giant Chess

This is a classic example. If the board were smaller, say 3000 × 3000, then the normal 2D dp would work. However, the dimensions of the grid are too large here.

Note that the number of blocked cells is not too large though, so we can try to dp on them. Let S be the set of blocked cells. We add the ending cell to S for convenience. We sort S in increasing order of x-coordinate, and break ties by increasing order of y-coordinate. As a result, the ending cell will always be the last element of S.

Now, let dp[i] be the number of ways to reach the i-th blocked cell (assuming it is not blocked). Our goal is to find dp[s], where s = |S|.

Note that since we have sorted S in this order, the j-th blocked cell does not affect the number of ways to reach the i-th blocked cell when i < j. (There is no path that visits the j-th blocked cell before visiting the i-th one.)

The number of ways from square (x1, y1) to (x2, y2) without any blocked cells is C((x2 - x1) + (y2 - y1), x2 - x1) (when x2 ≥ x1 and y2 ≥ y1; if either coordinate decreases there are no paths). Let f(P, Q) denote the number of ways to reach Q from P. We can calculate f(P, Q) in O(1) by precomputing factorials and their inverses as above.

The base case, dp[1] can be calculated as the number of ways to reach S1 from the starting square. Similarly, we initialize all dp[i] as the number of ways to reach Si from the starting square.

Now, we have to subtract the number of paths that pass through some other blocked cell. Assume we have already fixed the values of dp[1], dp[2], ..., dp[i - 1]. For a fixed blocked cell Si, we divide the bad paths into groups according to the first blocked cell they encounter. The number of paths whose first blocked cell is Sj is equal to dp[j]·f(Sj, Si), so we subtract this from dp[i] for every j < i. Thus, this dp works in O(n2).
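A sketch of this dp in full (counts taken modulo 10^9 + 7; the 3 × 4 board with blocked cells (2, 2) and (2, 3) below is a small hand-checkable example, not the problem's actual limits):

```python
MOD = 10**9 + 7

def count_paths(h, w, blocked):
    """Monotone paths from (1,1) to (h,w) avoiding the blocked cells."""
    N = h + w + 10
    fact = [1] * N
    for i in range(1, N):
        fact[i] = fact[i - 1] * i % MOD
    inv_fact = [1] * N
    inv_fact[N - 1] = pow(fact[N - 1], MOD - 2, MOD)
    for i in range(N - 1, 0, -1):
        inv_fact[i - 1] = inv_fact[i] * i % MOD

    def f(x1, y1, x2, y2):
        # paths from (x1,y1) to (x2,y2) moving only right/down
        if x2 < x1 or y2 < y1:
            return 0
        dx, dy = x2 - x1, y2 - y1
        return fact[dx + dy] * inv_fact[dx] % MOD * inv_fact[dy] % MOD

    cells = sorted(blocked) + [(h, w)]   # the ending cell comes last
    dp = []
    for i, (x, y) in enumerate(cells):
        ways = f(1, 1, x, y)             # all paths, ignoring blocked cells
        for j in range(i):               # subtract paths whose first blocked
            xj, yj = cells[j]            # cell is cells[j]
            ways = (ways - dp[j] * f(xj, yj, x, y)) % MOD
        dp.append(ways)
    return dp[-1] % MOD

print(count_paths(3, 4, [(2, 2), (2, 3)]))  # 2
```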

Another problem using this idea : 722E - Research Rover

Open and Close Interval Trick

Example Problem : 626F - Group Projects

First, note that the order doesn't matter, so we can sort the ai in non-decreasing order. Now, note that every set's imbalance is determined by its largest and smallest values. We add the elements to the sets from smallest to largest, in order. Suppose we're adding the i-th element. Some of the current sets are open, i.e. they have a minimum value but are not complete yet (do not have a maximum yet). Suppose there are j open sets. When we add ai, the difference ai - ai - 1 contributes to each of the j open sets, so we increase the current imbalance by j(ai - ai - 1).

Let dp[i][j][k] be the number of ways such that when we have inserted the first i elements, there are j open sets and the total imbalance so far is k. Now, let's see how to do the state transitions. Let v = dp[i - 1][j][k]. We analyze which states involve v.

Firstly, the imbalance of the new state must be val = k + j(ai - ai - 1), as noted above. Now, there are a few cases :

  1. We place the current number ai in its own group, opening and closing it immediately : dp[i][j][val] += v.

  2. We place the current number ai in one of the open groups, but do not close it : dp[i][j][val] += j·v (we choose one of the j open groups for ai).

  3. We open a new group with minimum = ai : dp[i][j + 1][val] += v.

  4. We close one of the open groups by inserting ai into it as its maximum : dp[i][j - 1][val] += j·v.

The answer can be found as dp[n][0][0] + dp[n][0][1] + ... + dp[n][0][k].
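The four transitions can be sketched directly (a minimal version; a = [2, 4, 5] with k = 2 is a small hand-checkable instance, whose 3 valid partitions are {2}{4}{5}, {4,5}{2} and {2,4}{5}):

```python
MOD = 10**9 + 7

def group_ways(a, kmax):
    """Ways to split a into groups with total imbalance <= kmax."""
    a = sorted(a)
    n = len(a)
    # dp[j][k]: ways so far with j groups still open and imbalance k
    # (every open group's maximum is still pending).
    dp = [[0] * (kmax + 1) for _ in range(n + 2)]
    dp[0][0] = 1
    for i in range(n):
        d = a[i] - a[i - 1] if i > 0 else 0
        ndp = [[0] * (kmax + 1) for _ in range(n + 2)]
        for j in range(n + 1):
            for k in range(kmax + 1):
                v = dp[j][k]
                if not v:
                    continue
                val = k + j * d          # each open group's span grows by d
                if val > kmax:
                    continue
                ndp[j][val] = (ndp[j][val] + v) % MOD          # singleton group
                ndp[j][val] = (ndp[j][val] + j * v) % MOD      # join an open group
                ndp[j + 1][val] = (ndp[j + 1][val] + v) % MOD  # open a new group
                if j:
                    ndp[j - 1][val] = (ndp[j - 1][val] + j * v) % MOD  # close one
        dp = ndp
    return sum(dp[0]) % MOD

print(group_ways([2, 4, 5], 2))  # 3
```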

Related Problems :

466D - Increase Sequence

367E - Sereja and Intervals

"Connected Component" DP

Example Problem : JOI 2016 Open Contest — Skyscrapers

Previously, I've made a blog post here asking for a more detailed solution. With some hints from Reyna, I finally figured it out, and I've since seen this trick appear a number of times.

Abridged Statement : Given a1, a2, ..., an, find the number of permutations of these numbers such that |a1 - a2| + |a2 - a3| + ... + |an - 1 - an| ≤ L where L is a given integer.

Constraints : n ≤ 100, L ≤ 1000, ai ≤ 1000

Now, we sort the values ai and add them into the permutation one by one. At each point, we will have some connected components of values (for example it will be something like 2, ?, 1, 5, ?, ?, 3, ?, 4)

Now, suppose we already added ai - 1. We treat the ? as ai and calculate the cost. When we add a new number we increase the values of the ? and update the cost accordingly.

Let dp[i][j][k][l] be the number of ways to insert the first i elements such that :

  • There are j connected components

  • The total cost is k (assuming the ? are ai + 1)

  • l of the two ends of the permutation have been filled. (So, 0 ≤ l ≤ 2)

I will not describe the entire state transitions here as it will be very long. If you want the complete transitions you can view the code below, where I commented what each transition means.

Some key points to note :

  • Each time you add a new element, you have to update the total cost by ai + 1 - ai times the number of filled spaces adjacent to an empty space.

  • When you add a new element, it can either combine 2 connected components, create a new connected component, or be appended to the front or back of one of the connected components.

Code with comments

A problem that uses this idea can be seen here : 704B - Ant Man
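To isolate the transition structure from the cost bookkeeping, here is the same "connected component" dp stripped down to merely counting permutations of 1..n (values inserted in increasing order; a block is a maximal run that will end up contiguous). It must recover n!, which makes it a handy sanity check when implementing the full version:

```python
def count_permutations(n):
    """Connected-component dp that just counts permutations: answer is n!."""
    # dp[j] = ways, after inserting values 1..i in increasing order, to have
    # the inserted values form j blocks (maximal runs that will be contiguous
    # in the final permutation, in an as-yet-undecided left-to-right order).
    dp = [0] * (n + 2)
    dp[0] = 1
    for i in range(1, n + 1):
        ndp = [0] * (n + 2)
        for j in range(i):
            v = dp[j]
            if not v:
                continue
            ndp[j + 1] += v                    # i starts a brand-new block
            ndp[j] += 2 * j * v                # i extends an end of a block
            if j >= 2:
                ndp[j - 1] += j * (j - 1) * v  # i glues two blocks together
        dp = ndp
    return dp[1]

print(count_permutations(6))  # 720
```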

 × 2,  + 1 trick

This might not be a very common trick, and indeed I've only seen it once and applied it myself once. This is a special case of the "Do we really need to visit all the states" example.

Example 1 : Perfect Permutations, Subtask 4

My solution only works up to Subtask 4. The official solution uses a different method but the point here is to demonstrate this trick.

Abridged Statement : Find the number of permutations of length N with exactly K inversions. (K ≤ N, N ≤ 109, K ≤ 1000 (for subtask 4))

You might be wondering : How can we apply dp when N is as huge as 109? We'll show how to apply it below. The trick is to skip the unused states.

First, we look at how to solve this when N, K are small.

Let dp[i][j] be the number of permutations of length i with j inversions. Then, dp[i][j] = dp[i - 1][j] + dp[i - 1][j - 1] + ... + dp[i - 1][j - (i - 1)]. Why? Again, we build the permutation by adding the numbers from 1 to i in this order. When we add the element i, placing it before k of the current elements increases the number of inversions by k. So, we sum over all possibilities 0 ≤ k ≤ i - 1. Using a sliding window/prefix sums, each row can be computed from the previous one in O(K).
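The small-(N, K) dp with the sliding-window/prefix-sum speedup looks like this:

```python
def inversions_table(n, kmax):
    """dp[j] = permutations of length n with j inversions, for j <= kmax."""
    dp = [1] + [0] * kmax
    for i in range(2, n + 1):
        pref = [0] * (kmax + 2)
        for j in range(kmax + 1):          # prefix sums of the previous row
            pref[j + 1] = pref[j] + dp[j]
        ndp = [0] * (kmax + 1)
        for j in range(kmax + 1):
            lo = max(0, j - (i - 1))       # dp[i-1][j-(i-1)] + ... + dp[i-1][j]
            ndp[j] = pref[j + 1] - pref[lo]
        dp = ndp
    return dp

print(inversions_table(4, 3))  # [1, 3, 5, 6]
```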

How do we get rid of the N factor and replace it with K instead? We will use the following trick :

Suppose we calculated dp[i][j] for all 0 ≤ j ≤ K. We have already figured out how to calculate dp[i + 1][j] for all 0 ≤ j ≤ K in O(K). The trick here is we can calculate dp[2i][j] from dp[i][j] for all j in O(K2).

How? We will find the number of permutations using 1, 2, ..., n and n + 1, n + 2, ..., 2n separately and combine them together. Suppose the first permutation has x inversions and the second has y inversions. What will the total number of inversions be when we merge them? Clearly, there'll be at least x + y inversions.

Now, we call the numbers from 1 to n small and n + 1 to 2n large. Suppose we already fixed the permutation of the small and the large numbers. Thus, we can replace the small numbers with the letter 'S' and large numbers with the letter 'L'. Each L increases the number of inversions by the number of Ss to the right of it. Thus, to find the number of ways this can increase the number of inversions by k, we just have to count the unordered tuples of nonnegative integers (a1, a2, ..., an) that sum to k, i.e. the partitions of k into at most n parts (we can view ai as the number of Ss to the right of the i-th L).

How do we count this value? We'll instead count the tuples whose elements are positive, at most k, and sum up to k, regardless of their length. This value is precisely what we want for large enough n, because there can be at most k positive elements, so the length will not exceed n whenever n > k. We can handle the values for small n with the naive O(n2) dp, so there's no need to worry about that case.

Thus, it remains to count the number of such tuples where each element is positive and at most k and the elements sum up to S = k. Denote this value by f(S, k); we want to find f(k, k). We can derive the recurrence f(S, k) = f(S, k - 1) + f(S - k, k), according to whether we use a part equal to k in the sum. Thus, we can precompute these values in O(K2).
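The recurrence for f can be tabulated directly; as a sanity check, f(k, k) is the number of partitions of k, so for k = 0..6 it should read 1, 1, 2, 3, 5, 7, 11:

```python
def partition_table(K):
    """f[S][k] = ways to write S as a sum of positive parts, each <= k."""
    f = [[0] * (K + 1) for _ in range(K + 1)]
    for k in range(K + 1):
        f[0][k] = 1                      # the empty sum
    for S in range(1, K + 1):
        for k in range(1, K + 1):
            f[S][k] = f[S][k - 1]        # never use a part equal to k
            if S >= k:
                f[S][k] += f[S - k][k]   # use at least one part equal to k
    return f

f = partition_table(6)
print([f[k][k] for k in range(7)])  # [1, 1, 2, 3, 5, 7, 11]
```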

Now, let g0, g1, g2, ..., gK be the number of permutations of length n with number of inversions equal to 0, 1, 2, ..., K.

To complete this step, we can multiply the polynomial g0 + g1x + ... + gKxK by itself (in O(K2) or with FFT, but that doesn't really change the complexity since the precomputation already takes O(K2)), to obtain the number of pairs of permutations of {1, 2, ..., n} and {n + 1, n + 2, ..., 2n} with total number of inversions i for all 0 ≤ i ≤ K.

Next, we just have to multiply this with f(0, 0) + f(1, 1)x + ... + f(K, K)xK and we get the desired answer for permutations of length 2n, as noted above.

Thus, we have found a way to obtain dp[2i][·] from dp[i][·] in O(K2).

To complete the solution, we first write N in its binary representation and compute the dp values naively for the number formed from the first 10 bits (enough bits for that number to exceed K). Then, we can update the dp values when N is multiplied by 2 or increased by 1 in O(K2) time each, so we can find the value dp[N][K] in O(K2 log N), which fits in the time limit for this subtask.

My code.

Example 2 : Problem Statement in Mandarin

This solution originated from the comment from WJMZBMR here

Problem Statement : A sequence a1, a2, ..., an is valid if all its elements are pairwise distinct and 1 ≤ ai ≤ A for all i. We define value(S) of a valid sequence S as the product of its elements. Find the sum of value(S) over all possible valid sequences S, modulo p where p is a prime.

Constraints : A, p ≤ 109, n ≤ 500, p > A > n + 1

Firstly, we can ignore the order of the sequence and multiply the answer by n! in the end because the numbers are distinct.

First, we look at the naive solution :

Now, let dp[i][j] be the sum of values of all valid sequences of length j where values from 1 to i inclusive are used.

The recurrence is dp[i][j] = dp[i - 1][j] + i·dp[i - 1][j - 1], depending on whether i is used.

This will give us a complexity of O(An), which is clearly insufficient.
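For reference, the naive dp is just a couple of lines (here checked on a toy case: the 2-element subsets of {1, 2, 3} have products 2, 3 and 6, which sum to 11; multiply by n! at the end if ordered sequences are wanted):

```python
def sum_of_products(A, n, p):
    """dp[i][j] = sum over all j-subsets of {1..i} of the product of elements."""
    dp = [[0] * (n + 1) for _ in range(A + 1)]
    for i in range(A + 1):
        dp[i][0] = 1 % p                 # the empty product
    for i in range(1, A + 1):
        for j in range(1, n + 1):
            # either i is not used, or it multiplies a (j-1)-subset of {1..i-1}
            dp[i][j] = (dp[i - 1][j] + i * dp[i - 1][j - 1]) % p
    return dp[A][n]

print(sum_of_products(3, 2, 10**9 + 7))  # 11
```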

Now, we'll use the idea from the last example. We already know how to calculate dp[i + 1][·] from dp[i][·] in O(n) time. Now, we just have to calculate dp[2i][·] from dp[i][·] fast.

Suppose we want to calculate dp[2A][n]. Then, we consider, for each possible a, the sum of the values of all sequences where a of the elements are selected from 1, 2, ..., A and the remaining n - a are from A + 1, A + 2, ..., 2A.

Firstly, note that (A + x1)(A + x2)...(A + xi) = ΣT Ai - |T|·(product of the elements of T), where T ranges over the subsets of {x1, x2, ..., xi}; this is just the expansion of the product.

Now, let ai denote the sum of all values of sequences of length i where elements are chosen from {1, 2, ..., A}, i.e. dp[A][i].

Let bi denote the same value, but the elements are chosen from {A + 1, A + 2, ..., 2A}.

Now, we claim that bi = Σj = 0..i Ai - j·C(A - j, i - j)·aj. Indeed, this is just a result of the formula above, where we iterate through all possible subset sizes j. Note that the term C(A - j, i - j) is the number of sets of size i, with all elements chosen from 1, 2, ..., A, which contain a given subset of size j. (take a moment to convince yourself about this formula)

Now, computing the values of C(A - j, i - j) modulo p isn't hard (you can write out the binomial coefficient and multiply its terms one by one with some precomputation; see the formula in the original pdf if you're stuck), and once you have them, you can calculate the values of bi in O(n2).

Finally, with the values of bi, we can calculate dp[2A][·] the same way as in the last example, as dp[2A][n] is just Σi ai·bn - i, and we can calculate this by multiplying the two polynomials formed by [ai] and [bi]. Thus, the entire step can be done in O(n2).

Thus, from dp[i][·] we can calculate dp[2i][·] in O(n2) and dp[i + 1][·] in O(n). So we can write A in binary as in the last example and compute the answers step by step, using at most O(log A) steps. The total time complexity is O(n2 log A), which can pass.

This is the end of this post. I hope you benefited from it and please share your own dp tricks in the comments with us.


By zscoder, history, 3 months ago, In English,

Contest link

Announcement

Start time : 21:00 JST as usual

Reminder that this contest actually exists on Atcoder :)

Let's discuss the problems after the contest.


By zscoder, history, 3 months ago, In English,

I don't know about others, but recently I've been getting quite a number of private messages on CF and Hackerrank (well, basically anywhere with a PM system) that sound like this :

"Hi, regarding codechef long challenge october.. How to do that power sum problwm.. did u get any idea.. if so. then please drop me a hint.. thanks"

"Hi,

I was trying POWSUMS in this month's codechef long challenge. Can I get a hint for that problem?

Thanks"

"Can you send me the code for Simplified Chess engine or give me how to solve it ?"

"Hi Zi,

Any hints for Shashank and the Palindromic Strings

Thanks."

and more (FYI "POWSUMS", "Simplified Chess engine" and "Shashank and the Palindromic Strings" are live contest problems)

Is anyone else getting these PMs too? I find them annoying: you see "You received 2 new messages" and all of them are asking for hints/solutions/code for a live contest problem. Why do people do this? It's not like anyone is going to tell them the solution anyway.


By zscoder, history, 4 months ago, In English,

We hope everyone enjoyed the problems. Here is the editorial. I tried to make it more detailed than usual, but some parts might still not be explained clearly.

Div. 2 A — Crazy Computer

Prerequisites : None

This is a straightforward implementation problem. Iterate through the times in order, keeping a counter for the number of words currently on the screen. Whenever the difference between the times of two consecutive words is greater than c, reset the counter to 0 (everything typed so far has disappeared). After processing each time, increment the counter by 1.
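The whole solution is a few lines (the sample values below are a hand-checked toy case, not the problem's tests):

```python
def words_on_screen(times, c):
    """Words remaining on the screen after the last word is typed."""
    count = 0
    for i, t in enumerate(times):
        if i > 0 and t - times[i - 1] > c:
            count = 0        # everything disappeared before this word
        count += 1           # the newly typed word
    return count

print(words_on_screen([1, 3, 8, 14, 19, 20], 5))  # 3
```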

Time Complexity : O(n), since the times are already sorted.

Code (O(n))

Div. 2 B — Complete The Word

Prerequisites : None

Firstly, if the length of the string is less than 26, output -1 immediately.

We want to make some substring of length 26 contain all the letters of the alphabet. Thus, the simplest way is to iterate through all substrings of length 26 (there are O(n) of them), and for each substring count the number of occurrences of each letter, ignoring the question marks. If some letter occurs twice or more, this substring cannot contain all letters of the alphabet, and we move on to the next substring. Otherwise, we can fill in the question marks with the letters that have not appeared in the substring and obtain a substring of length 26 which contains all letters of the alphabet. After iterating through all substrings, either there is no solution, or we have created a nice substring. In the former case, output -1. Otherwise, fill in the remaining question marks with arbitrary letters and output the string.

Note that one can optimize the solution above by noting that we don't need to iterate through all 26 letters of each substring we consider, but we can iterate through the substrings from left to right and when we move to the next substring, remove the front letter of the current substring and add the last letter of the next substring. This optimization is not required to pass.

We can still optimize it further and make the complexity a pure O(|s|). We use the same trick as above: when we move to the next substring, we remove the first letter and add the new last letter. We store a frequency array counting how many times each letter appears in the current substring. Additionally, we store a counter which lets us check in O(1) whether the current substring can contain all the letters of the alphabet. When a letter first appears in the frequency array, increment the counter by 1; if a letter disappears (is removed) from the frequency array, decrement the counter by 1. When we add a question mark, increment the counter by 1; when we remove one, decrement it by 1. To check whether a substring can work, we just check whether the counter equals 26. This solution works in O(|s|).
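A sketch of the O(|s|) version (the counter tracks distinct letters plus question marks in the window, which equals 26 exactly when the window has no duplicate letter and can therefore be completed):

```python
def complete_the_word(s):
    """Fill '?' so some length-26 window has all letters; '-1' if impossible."""
    n, W = len(s), 26
    if n < W:
        return "-1"
    cnt = [0] * 26        # letter frequencies in the current window
    counter = 0           # distinct letters + number of '?' in the window
    s = list(s)

    def add(ch, sign):
        nonlocal counter
        if ch == '?':
            counter += sign
            return
        i = ord(ch) - ord('a')
        cnt[i] += sign
        if (sign == 1 and cnt[i] == 1) or (sign == -1 and cnt[i] == 0):
            counter += sign

    for r in range(n):
        add(s[r], 1)
        if r >= W:
            add(s[r - W], -1)           # slide the window
        if r >= W - 1 and counter == 26:
            window = range(r - W + 1, r + 1)
            used = {s[i] for i in window if s[i] != '?'}
            missing = iter(sorted(set('abcdefghijklmnopqrstuvwxyz') - used))
            for i in window:
                if s[i] == '?':
                    s[i] = next(missing)
            return ''.join('a' if ch == '?' else ch for ch in s)
    return "-1"

print(complete_the_word("abcdefghijklmnopqrstuvwxy??"))  # abcdefghijklmnopqrstuvwxyza
```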

Time Complexity : O(|s|·262), O(|s|·26) or O(|s|)

Code (O(26^2*|s|))
Code (O(26*|s|))
Code (O(|s|))

Div. 2 C/Div. 1 A — Plus and Square Root

Prerequisites : None

Firstly, let ai (1 ≤ i ≤ n) be the number on the screen just before we level up from level i to i + 1. Thus, we require every ai to be a perfect square, and additionally, since we reach ai + 1 from √ai by pressing the plus button at level i + 1, we require ai + 1 ≡ √ai (mod i + 1) and ai + 1 ≥ √ai for all 1 ≤ i < n. Additionally, we also require ai to be a multiple of i. Thus, we just need to construct a sequence of such integers so that the printed numbers do not exceed the limit 1018.

There are many ways to do this. The third sample actually gave a large hint on my approach. If you were to find the values of ai from the second sample, you'll realize that it is equal to 4, 36, 144, 400. You can try to find the pattern from here. My approach is to use ai = [i(i + 1)]2. Clearly, it is a perfect square for all 1 ≤ i ≤ n and when n = 100000, the output values can be checked to be less than 1018

Indeed, after taking the square root at the end of level i, the number on the screen becomes √ai = i(i + 1), which is a multiple of i + 1, and ai + 1 = [(i + 1)(i + 2)]2 is also a multiple of i + 1, so the number of presses needed at level i + 1, (ai + 1 - i(i + 1))/(i + 1), is a nonnegative integer.

The constraint that ai must be a multiple of i was added to make the problem easier for Div. 1 A.
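The construction can be sanity-checked mechanically. What the program actually prints at each level is the number of '+' presses, and for ai = [i(i + 1)]2 those counts stay far below 1018 even for n = 100000. A verification sketch (the screen starts at 2; any sequence satisfying the divisibility conditions is accepted, so these values need not match the sample output):

```python
def press_counts(n):
    """'+' presses per level for the construction a_i = (i(i+1))^2."""
    out = []
    cur = 2                                # the screen initially shows 2
    for i in range(1, n + 1):
        target = (i * (i + 1)) ** 2        # a_i: a perfect square, multiple of i
        assert target % i == 0 and (target - cur) % i == 0
        out.append((target - cur) // i)    # presses of '+' at level i
        cur = i * (i + 1)                  # the square-root button
        assert cur % (i + 1) == 0          # legal to start level i + 1
    return out

print(press_counts(3))  # [2, 17, 46]
```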

Time Complexity : O(n)

Code (O(n))

Div. 2 D/Div. 1 B — Complete The Graph

Prerequisites : Dijkstra's Algorithm

This problem is actually quite simple once you rule out the impossible cases. Call the edges that do not have a fixed weight variable edges. First, we'll determine when a solution exists.

Firstly, we ignore the variable edges. Now, find the length of the shortest path from s to e using the remaining (fixed) edges. If this length is  < L, there is no solution: this path consists of fixed edges only, so no matter which positive weights we assign to the variable edges, the shortest path will stay  < L. (If no path exists, we treat the length as ∞.)

Next, we replace the edges with 0 weight by weight 1. Clearly, among all the graphs you can generate by assigning the weights, this one gives the minimum possible shortest path from s to e, since increasing any weight cannot decrease the length of the shortest path. Thus, if the shortest path of this graph is  > L, there is no solution, since the shortest path will always be  > L. (If no path exists, we treat the length as ∞.)
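The two impossibility checks are just two runs of Dijkstra on modified copies of the graph (a sketch; `feasible` only decides whether an assignment can exist, it does not construct one):

```python
import heapq

def dijkstra(n, adj, s):
    """Standard Dijkstra; returns the distance array from s."""
    dist = [float('inf')] * n
    dist[s] = 0
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

def feasible(n, edges, s, e, L):
    """edges: (u, v, w) with w = 0 meaning 'variable'. The two conditions."""
    fixed = [[] for _ in range(n)]   # graph with variable edges removed
    ones = [[] for _ in range(n)]    # graph with variable edges set to 1
    for u, v, w in edges:
        if w > 0:
            fixed[u].append((v, w))
            fixed[v].append((u, w))
        w1 = w if w > 0 else 1
        ones[u].append((v, w1))
        ones[v].append((u, w1))
    return dijkstra(n, fixed, s)[e] >= L and dijkstra(n, ones, s)[e] <= L

# Path 0-1-2 where edge (0, 1) is variable: setting it to 4 reaches L = 5.
print(feasible(3, [(0, 1, 0), (1, 2, 1)], 0, 2, 5))  # True
```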

Other than these two conditions, there will always be a way to assign the weights so that the shortest path from s to e is exactly L! How do we prove this? First, consider all paths from s to e that has at least one 0 weight edge, as changing weights won't affect the other paths. Now, we repeat this algorithm. Initially, assign all the weights as 1. Then, sort the paths in increasing order of length. If the length of the shortest path is equal to L, we're done. Otherwise, increase the weight of one of the variable edges on the shortest path by 1. Note that this will increase the lengths of some of the paths by 1. It is not hard to see that by repeating these operations the shortest path will eventually have length L, so an assignment indeed exists.

Now, we still have to find a valid assignment of weights. We can use a similar algorithm to our proof above. First, assign 1 to every variable edge. Next, find and keep track of the shortest path from s to e. If its length is exactly L, we're done. Otherwise its length is strictly less than L, and it must contain at least one variable edge, since a path consisting only of fixed edges has length at least L by the first condition.

From now on, whenever we assign weight to a variable edge (after assigning 1 to every variable edge), we call the edge assigned.

Now, mark all variable edges not on the shortest path we found with weight ∞ (we can choose any number greater than L as ∞). Next, we find the shortest path from s to e and raise the weight of an unassigned variable edge on it so that the length of this path becomes exactly L. From now on, we don't touch the assigned edges again. While the shortest path from s to e is still strictly less than L, we repeat the process: pick an unassigned variable edge on the current shortest path and raise its weight so that the path's length becomes L. This is always possible, since otherwise this path would have been the shortest path in one of the previous steps. Eventually, the shortest path from s to e will have length exactly L. We repeat this process at most n times, because we only touch edges on the initial shortest path we found, each at most once, and there are fewer than n such edges. Thus, we find a solution after fewer than n iterations, and the complexity is O(nm log n). This is sufficient to pass all tests.

What if the constraints were n, m ≤ 105? Can we do better?

Yes! Thanks to HellKitsune who found this solution during testing. First, we rule out the impossible cases as above. Then, we assign weight ∞ to all the variable edges and enumerate the variable edges arbitrarily. Now, we binary search for the minimal value p such that if the variable edges numbered 1 to p have weight 1 and the rest ∞, then the shortest path from s to e has length  ≤ L. Note that if we change the weight of edge p to ∞, the length of the shortest path becomes more than L (if p equals the number of variable edges, this length is still more than L, or it would contradict the impossibility conditions). If its weight is 1, the length of the shortest path is  ≤ L. So, if we increase the weight of edge p by 1 repeatedly, the length of the shortest path from s to e will eventually reach L, since this length increases by at most 1 in each step. Since the length of the shortest path is non-decreasing in the weight of this edge, we can binary search for the correct weight. This gives an O(m log n (log m + log L)) solution.

Time Complexity : O(nm log n) or O(m log n (log m + log L))

Code (O(mnlogn))
Code (O(m log n (log m + log L)))

Div. 2 E/Div. 1 C — Digit Tree

Prerequisites : Tree DP, Centroid Decomposition, Math

Compared to the other problems, this one is more standard. The trick is to first solve the problem if we have a fixed vertex r as root and we want to find the number of paths passing through r that works. This can be done with a simple tree dp. For each node u, compute the number obtained when going from r down to u and the number obtained when going from u up to r, where each number is taken modulo M. This can be done with a simple dfs. To calculate the down value, just multiply the value of the parent node by 10 and add the value on the edge to it. To calculate the up value, we also need to calculate the height of the node. (i.e. the distance from u to r) Then, if we let h be the height of u, d be the digit on the edge connecting u to its parent and val be the up value of the parent of u, then the up value for u is equal to 10h - 1·d + val. Thus, we can calculate the up and down value for each node with a single dfs.

Next, we have to figure out how to combine the up values and down values to find the number of paths passing through r that are divisible by M. For this, note that each path is the concatenation of a path from u to r and a path from r to v, where u and v are vertices from different subtrees, plus the paths that have r itself as an endpoint. For the paths that start or end at r, the answer can be easily calculated with the up and down values (just iterate through all nodes as the other endpoint). For the other paths, we iterate through all possible v and count the vertices u such that going from u to v gives a multiple of M. Since v is fixed, we know its height and down value, which we denote as h and d respectively. So, if the up value of u is equal to up, then up·10h + d must be a multiple of M. So, we can solve for up ≡  - d·10 - h (mod M). Note that the multiplicative inverse of 10 modulo M is well-defined, as we have the condition gcd(10, M) = 1. To find it, compute φ(M); by Euler's theorem, xφ(M) ≡ 1 (mod M) whenever gcd(x, M) = 1, so xφ(M) - 1 ≡ x - 1 (mod M) is the multiplicative inverse of x (in this case x = 10) modulo M. After that, 10 - h can be computed by binary exponentiation.
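Computing the inverse of 10 this way needs only φ(M) and modular exponentiation (a small sketch; Python's built-in three-argument pow performs the binary exponentiation):

```python
def phi(m):
    """Euler's totient via trial-division factorization."""
    result, d = m, 2
    while d * d <= m:
        if m % d == 0:
            while m % d == 0:
                m //= d
            result -= result // d
        d += 1
    if m > 1:                  # leftover prime factor
        result -= result // m
    return result

def inverse_mod(x, M):
    """x^{-1} mod M via x^{phi(M)-1}, valid whenever gcd(x, M) = 1."""
    return pow(x, phi(M) - 1, M)

print(inverse_mod(10, 7))  # 5, since 10 * 5 = 50 ≡ 1 (mod 7)
```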

Thus, we can find the unique value of up such that the path from u to v is a multiple of M. This means that we can just use a map to store the up values of all nodes, and also the up values within each subtree. Then, to find the number of viable nodes u, compute the required value of up and subtract the number of suitable nodes in the same subtree as v from the total number of suitable nodes. Thus, for each node v, we can find the number of suitable nodes u in O(log n) time.

Now, we have to generalize this for the whole tree. We can use centroid decomposition. We pick the centroid as the root r and find the number of paths passing through r as above. Then, the other paths won't pass through r, so we can remove r, split the tree into more subtrees, and recursively solve each subtree as well. Since each subtree is at most half the size of the original tree, and the time taken to solve the problem where the path must pass through the root is proportional to the size of the tree (times a log factor), this solution works in O(n log2 n) time, where the extra log factor comes from using maps.

Time Complexity : O(n log2 n)

Code

Div. 1 D — Create a Maze

Prerequisites : None

The solution to this problem is quite simple, if you get the idea. Thanks to dans for improving the solution to the current constraints which is much harder than my original proposal.

Note that to calculate the difficulty of a given maze, we can just use dp. We write in each square (room) the number of ways to get from the starting square to it; the number written in (i, j) is the sum of the numbers written in (i - 1, j) and (i, j - 1), except that if the edge between (i - 1, j) and (i, j) is blocked, we don't add the number written in (i - 1, j), and similarly for (i, j - 1). We'll call the rooms squares and the doors edges, and locking a door an edge deletion.

First, we look at several attempts that do not work.

Write t in its binary representation. To solve the problem, we just need to know how to construct a maze with difficulty 2x and one with difficulty x + 1 from a given maze with difficulty x. The most direct way to get from x to 2x is to increase both dimensions of the maze by 1. Say the bottom-right square of the grid was (n, n), which now becomes (n + 1, n + 1), so the number x is written at (n, n). Then, we can block off the edge to the left of (n + 1, n) and the edge above (n, n + 1). This makes the numbers in these two squares equal to x, so the number in square (n + 1, n + 1) is 2x, as desired. To create x + 1 from x, we can increase both dimensions by 1 and remove edges so that (n + 1, n) contains x while (n, n + 1) contains 1 (this requires deleting most of the edges joining the n-th column and the (n + 1)-th column). Thus, the number in (n + 1, n + 1) would be x + 1. However, this uses way too many edge deletions and the grid becomes too large. This was the original proposal.

There's another way to do it with binary representation. We construct a grid with difficulty 2x and 2x + 1 from a grid with difficulty x. The key idea is to make use of surrounding 1s and maintaining it with some walls so that 2x + 1 can be easily constructed. This method is shown in the picture below. This method would've used around 120 × 120 grid and 480 edge deletions, which is too large to pass.

Now, what follows is the AC solution. Since it's quite easy once you get the idea, I recommend you to try again after reading the hint. To read the full solution, click on the spoiler tag.

Hint : Binary can't work since there can be up to 60 binary digits for t and our grid size can be at most 50. In our binary solution we used a 2 × 2 grid to multiply the number of ways by 2. What about using other grid sizes instead?

Full Solution

Of course, this might not be the only way to solve this problem. Can you come up with other ways of solving this or reducing the constraints even further? (Open Question)

Time Complexity :

Code

Div. 1 E — Complete The Permutations

Prerequisites : Math, Graph Theory, DP, Any fast multiplication algorithm

We'll slowly unwind the problem and reduce it to something easier to count. First, we need to determine a way to tell when the distance between p and q is exactly k. This is a classic problem but I'll include it here for completeness.

Let f denote the inverse permutation of q. So, the minimum number of swaps to transform p into q is the minimum number of swaps to transform p∘f into the identity permutation. Construct the graph with an edge pi → qi for every 1 ≤ i ≤ n. Note that after all the pi and qi are filled in completely, this graph decomposes into disjoint cycles (every value has exactly one outgoing and one incoming edge). The direction of the edges doesn't matter, so we treat them as undirected edges (pi, qi) for all 1 ≤ i ≤ n. Now, if the number of cycles of the graph is t, then the minimum number of swaps needed to transform p into q is n - t. (Each swap can break one cycle into two.) This means we just need to find the number of ways to fill in the empty spaces such that the number of cycles is exactly i, for all 1 ≤ i ≤ n.
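The classical fact used here ("distance = n minus the number of cycles") is easy to check directly (a sketch with 0-indexed permutations):

```python
def swap_distance(p, q):
    """Minimum number of element swaps turning permutation p into q."""
    n = len(p)
    pos_in_q = [0] * n
    for i, x in enumerate(q):
        pos_in_q[x] = i
    # sigma sends position i to the position of p[i] inside q;
    # the answer is n minus the number of cycles of sigma.
    sigma = [pos_in_q[x] for x in p]
    seen = [False] * n
    cycles = 0
    for i in range(n):
        if not seen[i]:
            cycles += 1
            j = i
            while not seen[j]:
                seen[j] = True
                j = sigma[j]
    return n - cycles

print(swap_distance([1, 2, 0], [0, 1, 2]))  # 2: one 3-cycle needs two swaps
```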

Now, some of the values pi and qi are known. The edges can be classified into four types :

A-type : The edges of the form (pi, ?), i.e. pi is known, qi isn't.

B-type : The edges of the form (?, qi), i.e. qi is known, pi isn't.

C-type : The edges of the form (pi, qi), i.e. both pi and qi are known.

D-type : The edges of the form (?, ?), i.e. both pi and qi are unknown.

Now, the problem reduces to finding the number of ways to assign values to the question marks such that the number of cycles of the graph is exactly i, for all 1 ≤ i ≤ n. First, we'll simplify the graph slightly. While there exists a number x that appears twice among the known endpoints (clearly it can't appear more than twice), we combine the two edges containing x. If there's an edge (x, x), we increment the total number of cycles by 1 and remove this edge from the graph. If there are edges (a, x) and (x, b), where a and b may be given numbers or question marks, we merge them into the single edge (a, b). Clearly, these are the only cases in which x can appear twice. Hence, after performing all the reductions, every known number appears at most once among the edges, i.e. all the known numbers are distinct. We'll do this step in O(n2). For each number x, store the position i such that pi = x and the position j such that qj = x, if it has been given, and -1 otherwise. We need to remove a number when both stored positions are valid. We iterate through the numbers from 1 to n; when a number has to be removed, we go to the two positions where it occurs, replace the two edges with the new merged one, and recompute the positions for all numbers (which takes O(n) time). So, we spend O(n) time per number (removing naively and updating positions), and the whole complexity for this part is O(n2). (It is possible to do it in O(n) with a simple dfs as well; basically any correct way of doing this part in at most O(n3) works, since the constraint on n is low.)

Now, suppose there are m edges left and p known numbers remain. Note that in the end, when we form the graph, we might join edges of the form (a, ?) and (?, b) (where a and b are either fixed numbers or question marks) together. So, the choice for a ? can be any of the m - p remaining unused numbers. Note that there will always be m - p such pairs, so we need to multiply our answer by (m - p)! in the end. Also, note that the ?s are distinguishable, and order is important when filling in the blanks.

So, we can actually reduce the problem to the following : Given integers a, b, c, d denoting the number of A-type, B-type, C-type, D-type edges respectively. Find the number of ways to create k cycles using them, for all 1 ≤ k ≤ n. Note that the answer is only dependent on the values of a, b, c, d as the numbers are all distinct after the reduction.

First, we'll look at how to solve the problem for k = 1. We need to fit all the edges in a single cycle. First, we investigate what happens when d = 0. Note that we cannot have a B-type or C-type edge directly before an A-type or C-type edge: all known numbers are distinct, so these edges can't be joined together. With these restrictions, it is easy to see that the cycle must consist entirely of A-type edges or entirely of B-type edges, so the answer can be easily calculated. It is also important to note that if we ignore the cyclic property, then a contiguous string of edges without D must be of the form AA...ABB...B or AA...ACBB...B, where there is exactly one C and zero or more As and Bs.

Now, if d ≥ 1, we can fix one of the D-type edges as the front of the cycle. This helps a lot because now we can ignore the cyclic property (we can place anything at the end of the cycle, because D-type edges can connect with any type of edge). So, we just need to find the number of ways to make a string of length a + b + c + d - 1 with a As, b Bs, c Cs and d - 1 Ds. In fact, we can ignore the fact that the A-type, B-type, C-type and D-type edges are distinguishable, and afterwards multiply the answer by a!b!c!(d - 1)!.

We can easily count the valid strings. First, place all the Ds. Now, we're trying to insert the As, Bs and Cs into the d empty spaces between, after and before the Ds. The key is that, by our observation above, we only care about how many As, Bs and Cs we insert into each space, since after that the way to arrange them is uniquely determined. So, to place the As and Bs, we can use the balls-in-urns formula: the number of ways to place the As is C(a + d - 1, a) and the number of ways to place the Bs is C(b + d - 1, b). The number of ways to place the Cs is C(d, c), since we just choose which spaces get a C (at most one per space).
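The balls-in-urns count C(a + d - 1, a) can be sanity-checked by brute force. This small script (illustrative only, not part of the intended solution) compares the formula against direct enumeration of how many identical balls go into each distinguishable urn:

```python
from itertools import product
from math import comb

def distributions(items, boxes):
    # brute force: count tuples (c_1, ..., c_boxes) of nonnegative
    # integers that sum to `items`
    return sum(1 for c in product(range(items + 1), repeat=boxes)
               if sum(c) == items)
```

Stars and bars predicts `comb(items + boxes - 1, items)` for each case.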

Thus, it turns out that we can find the answer in O(1) (after precomputing binomial coefficients and factorials) when k = 1. We'll use this to find the answer for all k. In the general case, there might be cycles that consist entirely of As, cycles that consist entirely of Bs, and cycles that contain at least one D. We call them A-cycles, B-cycles and D-cycles respectively.

Now, we precompute f(n, k), the number of ways to form k cycles using n distinguishable As. This can be done with a simple dp in O(n^3). We iterate through the number of As we're using for the first cycle. Suppose we use m As. The number of ways to choose which m As to use is C(n, m), and we can permute them in (m - 1)! ways inside the cycle (not m!, because we have to account for the cyclic permutations). Also, after summing this over all m, we have to divide the answer by k, to account for overcounting the candidates for the first cycle (the order of the k cycles is not important).
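Equivalently, f(n, k) is the unsigned Stirling number of the first kind. A sketch using the standard recurrence (rather than the cycle-size dp described above, which gives the same table):

```python
def cycle_counts(n_max):
    """f[n][k] = number of ways to arrange n distinguishable elements
    into exactly k cycles (unsigned Stirling numbers of the first kind).
    Recurrence: element n either forms its own cycle, or is inserted
    after any of the other n - 1 elements in an existing cycle."""
    f = [[0] * (n_max + 1) for _ in range(n_max + 1)]
    f[0][0] = 1
    for n in range(1, n_max + 1):
        for k in range(1, n + 1):
            f[n][k] = f[n - 1][k - 1] + (n - 1) * f[n - 1][k]
    return f
```

For the real problem the table would be taken mod the problem's modulus.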

Thus, f(n, k) can be computed in O(n^3). First, let's see how to compute the answer for a single k. Fix x, y, e, f: the number of A-cycles, the number of B-cycles, the total number of As among the A-cycles and the total number of Bs among the B-cycles. Then, since k is fixed, we know that the number of D-cycles is k - x - y. Now, we can find the answer in O(1). First, we use the values f(e, x), f(f, y), f(d, k - x - y) to count the ways to place the Ds and the As and Bs that are in the A-cycles and B-cycles. Then, to place the remaining As, Bs and Cs, we can use the same method as for k = 1 in O(1), since the number of spaces to place them is still the same. (You can think of it as each D leaving an empty space for As, Bs and Cs to its right.) After that, we multiply the answer by C(a, e)·C(b, f) to account for the choice of the sets of As and Bs used in the A-only and B-only cycles. Thus, the complexity of this method is O(n^4) for each k and O(n^5) in total, which is clearly too slow.

We can improve this by iterating through all x + y, e, f instead. For this to work, we need to precompute f(e, 0)f(f, x + y) + f(e, 1)f(f, x + y - 1) + ... + f(e, x + y)f(f, 0), which we denote by g(x + y, e, f). Naively, this precomputation takes O(n^4). Then, we can calculate the answer by iterating through all x + y, e, f, getting O(n^3) per query and O(n^4) for all k. This is still too slow to pass n = 250.

We should take a closer look at what we're actually calculating. Note that for a fixed pair e, f, the values of g(x + y, e, f) can be calculated for all possible x + y in O(n log n) or O(n^1.58), by using the Number Theoretic Transform or Karatsuba's Algorithm respectively. (Note that the modulus has been chosen for NFT to work.) This is because, with e and f fixed, we're precisely finding the coefficients of the polynomial (f(e, 0) + f(e, 1)x + ... + f(e, n)x^n)(f(f, 0) + f(f, 1)x + ... + f(f, n)x^n), so this can be handled with NFT/Karatsuba.
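A minimal sketch of Karatsuba multiplication for coefficient lists (integer coefficients, lowest degree first; this is not the author's code, and the modular version used in the actual solution would additionally reduce mod the problem's modulus):

```python
def mul_naive(a, b):
    """Schoolbook O(n^2) polynomial multiplication."""
    res = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            res[i + j] += x * y
    return res

def add_poly(a, b):
    out = list(a) if len(a) >= len(b) else list(b)
    for i, v in enumerate(b if len(a) >= len(b) else a):
        out[i] += v
    return out

def karatsuba(a, b):
    """O(n^1.58): split a = a0 + x^m*a1, b = b0 + x^m*b1 and reuse
    (a0+a1)(b0+b1) to get the cross term with only 3 recursive calls."""
    if not a or not b:
        return [0]
    n = max(len(a), len(b))
    if n <= 16:
        return mul_naive(a, b)
    m = n // 2
    a0, a1 = a[:m], a[m:]
    b0, b1 = b[:m], b[m:]
    z0 = karatsuba(a0, b0)
    z2 = karatsuba(a1, b1)
    z1 = karatsuba(add_poly(a0, a1), add_poly(b0, b1))
    res = [0] * (2 * m + max(len(z1), len(z2)))
    for i, v in enumerate(z0):
        res[i] += v
        res[i + m] -= v      # part of the cross term z1 - z0 - z2
    for i, v in enumerate(z1):
        res[i + m] += v
    for i, v in enumerate(z2):
        res[i + m] -= v
        res[i + 2 * m] += v
    return res[:len(a) + len(b) - 1]
```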

Thus, the precomputation of g(x + y, e, f) can be done in O(n^3 log n) or O(n^3.58).

Next, suppose we fix e and f. We calculate the answer for all possible k in O(n log n), similar to how we calculated g(x + y, e, f). This time, we're multiplying the two polynomials f(d, 0) + f(d, 1)x + ... + f(d, n)x^n and g(0, e, f) + g(1, e, f)x + ... + g(n, e, f)x^n. Again, we can calculate this using any fast multiplication method, so the entire solution takes O(n^3 log n) or O(n^3.58), depending on which algorithm is used to multiply polynomials.

Note that if you're using NFT/FFT, there is a small trick that can save some time. When we precompute the values of g(x + y, e, f), we don't need to apply the inverse FFT to the result; we can leave it in FFTed form. Later, when we want the convolution of f(d, i) and g(i, e, f), we just apply the FFT to the first polynomial and multiply them pointwise. This reduces the number of FFTs and cut my solution's runtime in half.

Time Complexity : O(n^3 log n) or O(n^3.58), depending on whether NFT or Karatsuba is used.

Code (NFT)
Code (Karatsuba)


By zscoder, history, 4 months ago, In English,

Hi everyone, it's me again!

Codeforces Round #372 (Div. 1 + Div. 2) will take place on 17 September 2016 at 16:35 MSK.

After my last round, this will be my second round on Codeforces. I believe you'll find the problems interesting and I hope you'll enjoy the round.

This round would not be possible without dans, who improved one of the problems and also helped in preparing and testing the round. Thanks as well to the testers IlyaLos, HellKitsune and phobos, and to MikeMirzayanov for the awesome Codeforces and Polygon platforms.

ZS the Coder and Chris the Baboon's trip in Udayland is over. In this round, you'll help ZS the Coder solve the problems he has randomly come up with. Do you have what it takes to solve them all?

The problems are sorted by difficulty but as always it's recommended to read all the problems.

We wish you many Accepted solutions and hope you enjoy the problems. :)

As usual, the scoring will be published right before the contest.

UPD : There will be 5 problems in both divisions as usual.

Scoring :

Div. 2 : 500 — 1000 — 1500 — 2000 — 2500

Div. 1 : 500 — 1000 — 1500 — 2500 — 2750

Good luck and I hope you enjoy the problems!

UPD : Contest is over. I hope you enjoyed the contest and problems :) I'm sure some of you want to see the editorial now, so here it is while we wait for system tests to start.

UPD : System tests are over. Here are the winners :

Division 1 :

  1. TooDifficuIt

  2. matthew99

  3. jerry

  4. Burunduk1

  5. izrak

  6. LHiC

  7. SanSiroWaltz

  8. dotorya

  9. Arterm

  10. zeliboba

Division 2 :

  1. amethyst0

  2. JOHNKRAM

  3. huzzah

  4. Vax

  5. KyleChen

  6. LeiQ

  7. Tomer.Adar

  8. boredtodeath

  9. OrangePlus

  10. Yukikaze

Congratulations to them!


By zscoder, history, 5 months ago, In English,

Here are the editorials for all the problems. Hope you enjoyed them and found them interesting!

Code
Code
Code (O(nkm^2))
Code (O(nkm))
Code
Code


By zscoder, history, 5 months ago, In English,

Important Update: Our friends have noticed that the upcoming round collides with their contest, and the weekend is also full of many other contests, so the round is now moved to Monday, 29 August 2016 15:05 MSK. We are sorry for the inconvenience caused and hope that you'll understand.

Hi everyone!

Codeforces Round #369 (Div. 2) will take place on 27 August 2016 at 16:05 MSK. As usual, Div.1 participants can join out of competition.

I would like to thank dans for helping me with the preparation of the round, MikeMirzayanov for the amazing Codeforces and Polygon platforms and also Phyto for testing the problems.

I am the author of all the problems, and dans also helped make one of the problems harder. This is my first round on Codeforces! I hope everyone will enjoy the problems and find them interesting. It is advisable to read all the problems ;)

In this round, you will help ZS the Coder and Chris the Baboon while they are on an adventure in Udayland. Can you help them solve their problems? :)

Good luck, have fun, and we wish everyone many Accepted solutions. :)

UPD : Also thanks to IlyaLos and HellKitsune for testing the problems too.

UPD 2 : There will be 5 problems and the scoring is standard : 500-1000-1500-2000-2500.

UPD 3 : Editorial

UPD 4 :

Congratulations to the winners :

Div. 1 winners :

  1. matthew99

  2. uwi

  3. Egor

  4. Um_nik

  5. kmjp

Div. 2 Winners :

  1. _zhandos

  2. stilltooyoung

  3. YangJunzhao

  4. lych_cys

  5. Zharaskhan


By zscoder, history, 6 months ago, In English,

Hi everyone! I created a small group here which is open to the public. There will be three 5-hour contests held there, featuring 3 problems each. The problems are taken from olympiads of different countries, as well as (rarely) from other sites. The contests will be in ACM-ICPC mode (since this is the default CF mode).

Since almost all of the problems are unoriginal, it is very likely that you might have seen some of the problems before. Everyone is welcome to join the group and participate in any contest anytime.

The schedule of the contests has been posted in the group. Additionally, ziadxkabakibi told me he has uploaded some Croatian OI problems before, so he might add those to the group as well.


By zscoder, history, 6 months ago, In English,

Problem statement :

There are N planets in the galaxy. Each planet i has two possible states: it is empty, or it is connected to some other planet j ≠ i by a one-way path. Initially, every planet is in the empty state (i.e. there are no paths between any pair of planets).

There are three types of operations :

  1. If planet i is in empty state, connect it with planet j with a one-way path (i and j are distinct)

  2. If planet i is currently connected to planet j with a one-way path, remove that path. Consequently, planet i is now in empty state again

  3. For a pair of planets u, v, find a planet x such that one can travel from u to x and from v to x. If there are multiple such planets, find the one where the total distance travelled from u and from v is minimized (distance is the number of paths passed through). If there is no solution, output -1.

Q, N ≤ 10^6. Time limit is 10 seconds.

One of the official solutions uses a splay tree to solve this problem, but I have no idea how it works (I haven't used splay trees before). Can anyone tell me how to use a splay tree on this problem? Thanks.


By zscoder, history, 6 months ago, In English,

Recently I encountered a problem which is very similar to RMQ.

Abridged Statement : First, you have an empty array. You have two operations :

  1. Insert a value v between positions x - 1 and x (1 ≤ x ≤ k + 1), where k is the current size of the array. (Positions are 1-indexed.)

  2. Find the maximum in the range [l, r] (i.e. the maximum from the positions l to r inclusive)

There are at most Q ≤ 250000 queries.

My current solution uses SQRT decomposition, but it was not fast enough to get AC. I was wondering if there is a faster solution, say with logarithmic time per query.

Edit : Forgot to mention that the queries must be answered online (it's actually a function call), so offline solutions don't work.


By zscoder, history, 7 months ago, In English,

Problem Statement

Abridged Problem Statement : Given a1, a2, ..., an, find the number of permutations of these numbers such that |a1 - a2| + |a2 - a3| + ... + |a(n-1) - an| ≤ L, where L is a given integer.

The editorial given is quite brief and the sample code is nearly unreadable. I have no idea how to do the dp.

Can anyone explain the solution? Thanks.

UPD : Thanks to ffao for the hint! I finally got how the dp works. The unobfuscated code with comments is here.


By zscoder, history, 8 months ago, In English,

Reminder that Google Distributed Code Jam Online Round 1 starts at this time.

Good luck to all people participating!


By zscoder, history, 8 months ago, In English,

I was trying to solve this problem.

I could only figure out the naive solution (DFS from each vertex). I think I have encountered similar problems before, but I couldn't solve them either. How do I solve this kind of problem?


By zscoder, history, 9 months ago, In English,

I keep getting this message when I try to package the problem in polygon :

PackageException: There exists a test where checker crashes.

Why does this error show up?

(I'm using the standard checker that compares sequences of integers)


By zscoder, history, 9 months ago, In English,

Reminder that Ad Infinitum 15 — Math Programming Contest starts at Apr 15 2016, 11:30 pm CST.

T-shirts for top 10 ranks on leaderboard.


By zscoder, history, 9 months ago, In English,

1. COLOR

This problem is trivial. Note that we want the resulting string to be monochromatic, so we can just choose the color with the maximal number of occurrences and change the remaining letters to that color.

2. CHBLLNS

This problem is also trivial. For each color, we take k - 1 balls, or, if there are fewer than k - 1 balls of that color, take all of them. At this point, there are at most k - 1 balls of each color. Then, take one more ball. By the pigeonhole principle, there now exist k balls of the same color. So, this is our answer.
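In code, the worst-case number of draws is the sum of min(count, k - 1) over all colors, plus one. A sketch (the helper name is mine; it assumes at least one color actually has k or more balls, otherwise the task is impossible):

```python
def draws_needed(counts, k):
    """Fewest draws that guarantee k balls of one color, given the
    ball count of each color: take up to k-1 of every color, then
    one more ball forces some color to reach k (pigeonhole)."""
    return sum(min(c, k - 1) for c in counts) + 1
```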

3. CHEFPATH

This problem is not hard. WLOG, assume n ≤ m. If n = 1, then the task is possible if and only if m = 2, for obvious reasons. Otherwise, if m, n are both odd, then coloring the board in a checkerboard fashion shows that no such path exists. If one of m, n is even, then such a path exists (it's not hard to construct one).

4. BIPIN3

In fact, the answer is the same for any tree. For the root vertex we can select k possible colors. For each child, note that we can select any color except the color of the parent vertex, so there are k - 1 choices. So, in total there are k·(k - 1)^(n - 1) possible colorings. To evaluate this value, we use modular exponentiation.
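A sketch of the count with fast modular exponentiation (Python's built-in three-argument pow; the function name is mine):

```python
def tree_colorings(n, k, mod):
    """Number of proper k-colorings of any tree on n vertices:
    k choices for the root, k-1 for every other vertex."""
    if n == 1:
        return k % mod
    return k * pow(k - 1, n - 1, mod) % mod
```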

5. FIBQ

The idea of this problem is to express the sum in matrix language. Let T be the Fibonacci matrix [[1, 1], [1, 0]].

Additionally, let I be the identity matrix. Then, our desired answer is the top-right element of (T^al + I)(T^a(l+1) + I)...(T^ar + I). Now, to support the update queries, just use a segment tree. The segment tree from this link is perfect for the job.
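The identity behind this is that the product (T^al + I)...(T^ar + I) expands into a sum of T^S over all subset sums S of the range, and the top-right entry of T^m is the m-th Fibonacci number. A small sketch of the product (without the segment tree, which only maintains this product under updates; helper names are mine):

```python
def mat_mul(A, B, mod):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) % mod
             for j in range(2)] for i in range(2)]

def mat_pow(A, e, mod):
    R = [[1, 0], [0, 1]]
    while e:
        if e & 1:
            R = mat_mul(R, A, mod)
        A = mat_mul(A, A, mod)
        e >>= 1
    return R

def fib(n, mod):
    # top-right entry of T^n, where T = [[1, 1], [1, 0]]
    return mat_pow([[1, 1], [1, 0]], n, mod)[0][1]

def product_top_right(a, mod):
    """Top-right entry of (T^a_1 + I)(T^a_2 + I)...(T^a_k + I)."""
    T = [[1, 1], [1, 0]]
    P = [[1, 0], [0, 1]]
    for x in a:
        Tx = mat_pow(T, x, mod)
        M = [[(Tx[i][j] + (1 if i == j else 0)) % mod
              for j in range(2)] for i in range(2)]
        P = mat_mul(P, M, mod)
    return P[0][1]
```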

6. DEVGOSTR

Despite its appearance, this problem is actually very simple. For A = 1, there is at most 1 possible string, namely the string consisting of all `a's. The background of this problem is Van der Waerden's Theorem. According to the Wikipedia page, the maximal length of a good string for A = 2 is 8 and the maximal length of a good string for A = 3 is 26. Now, for A = 2 we can brute force all possible strings. However, for A = 3 we need to brute force more cleverly. The key is to note that if a string s is not good, then appending any letter to s will not make it good. So, we only need to try appending letters to good strings. This pruning will easily AC the problem.

7. AMAEXPER

Call a point that minimizes the answer for the subtree rooted at r the king of r. If there are multiple kings, choose the one closer to r. The problem revolves around an important lemma :

Lemma : For the subtree rooted at r, consider all children of r. Let l be the child such that the distance from r to the furthest node in the subtree rooted at l is the longest. If there are multiple such l, then r is the king. Otherwise, either r is the king, or some node in the subtree rooted at l is the king.

Sketch of proof : Suppose not. Then the maximal distance from some other vertex is the distance from that vertex to r plus the distance from r to the furthest node in the subtree rooted at l, so choosing r is at least as good.

Thus, we can divide the tree into chains. A chain starts from some vertex v, and we keep going down to the child l. For each node r, we store the distance to the root, the maximal distance from r to any vertex in its subtree, as well as the maximal distance if we don't go down to the child l (this distance is 0 if r has only one child). Now, for each chain, we start from the bottom. We also maintain a king pointer starting from the bottom. At first, the answer for the bottom node is the maximal distance stored for that node. Then, as we go up the chain, note that the possible places for the king are on the chain, and the king never lies below the king of the node below. Thus, we can update the king pointer in a sliding-window fashion. How do we find the answer for each node? This value can be calculated in terms of the numbers stored on each node, and using an RMQ we can find the desired answer. The details will not be included here.

8. FURGRAPH

The key observation for this problem is the following : instead of weighing the edges, for each edge with weight w, we add w to both endpoints of the edge (note that for a self-loop the vertex gets it added twice). Then, the difference of the scores of Mario and Luigi is just the difference of the sums of weights on the vertices they have chosen, divided by 2. Why? If an edge has both endpoints picked by Mario (or Luigi), then Mario's (or Luigi's) score increases by 2w. If each player picks one of the endpoints, then the difference of scores is unchanged, as desired. Now, Mario's and Luigi's strategy is obvious : they will always choose the vertex with the maximal remaining weight. So, letting a1 ≤ a2 ≤ ... ≤ an be the weights of the vertices in sorted order, the answer is an - a(n-1) + a(n-2) - ... + (-1)^(n-1)·a1, i.e. the alternating sum of the weights in sorted order, divided by 2.
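A sketch of the static version of this observation (no updates; 0-indexed vertices, helper name mine):

```python
def score_difference(n, edges):
    """Difference (first player minus second) of optimal scores.
    Each edge (u, v, w) adds w to both endpoint weights; both players
    then greedily pick the heaviest remaining vertex, so the answer is
    the alternating sum of sorted vertex weights, halved (each edge
    weight appears in exactly two vertex weights, so the sum is even)."""
    weight = [0] * n
    for u, v, w in edges:
        weight[u] += w
        weight[v] += w
    weight.sort(reverse=True)
    diff = sum(w if i % 2 == 0 else -w for i, w in enumerate(weight))
    return diff // 2
```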

Using this observation alone easily gives us Subtask 2. We can naively store the vertex weights in a map and update as necessary. For each query, just loop through the map and calculate the alternating sum.

Subtask 3 requires more optimization. We will use sqrt decomposition with an order statistic tree to store the vertex weights in sorted order. Note that each update cyclically shifts the values al to ar, for some l, r which can be found from our order statistic tree, after which we set ar to al + w. To perform these queries efficiently, we divide the array (of vertex weights) into blocks of roughly √Q elements each. We start from element l. For each block, we store a deque containing the elements, the sum of the elements (we store an element as negative if it should be subtracted when calculating the alternating sum, for convenience), and the sign of the elements in the block. We iterate through the elements and perform the necessary updates until we reach the end of the block. Then, for entire blocks, we perform the necessary updates on the values, negate the sign, and, since we're using a deque, we can pop the front and push back the desired element (namely the next element). Then, when we reach the block containing r, we iterate naively again. The total complexity is O(Q√Q). My solution with this algorithm ACs the problem.

9. CHNBGMT

Unfortunately, I only got the first 3 subtasks of this problem.

This problem is similar to this, except it's harder. The solution for that problem uses Lindstrom-Gessel-Viennot Lemma. We can apply that lemma to this problem as well.

You can see how to apply the lemma from the old CF problem. Now, the problem reduces to finding the number of ways to go from ai to bj for 1 ≤ i, j ≤ 2, and finding the determinant of the resulting 2 by 2 matrix.

For subtasks 1 and 2, M and N are small so we can find these values using a simple dp. In particular, we can use something like dp[i][j][k] =  number of ways to get to (i, j) passing through exactly k carrots. Since M, N ≤ 60, this solution can easily pass.

For subtask 3, N, M ≤ 10^5. However, we are given that C = 0. For N = 2 or M = 2 the answer can be calculated manually. So, from now onwards, we assume M, N ≥ 3. Then finding the values is equivalent to computing binomial coefficients (the details are trivial). So, all that remains is to compute binomial coefficients mod MOD, where MOD ≤ 10^9 is not guaranteed to be prime.

Firstly, the obvious step is to factor MOD into its prime factors. Then, we can compute the answer mod each prime power dividing MOD and combine the results with the Chinese Remainder Theorem. How do we find the result mod p^k? This is trivial. For each number, we store v_p(n) (the exponent of p in n) and its remainder mod p^k after dividing out all factors of p. A modular inverse mod p^k of a value not divisible by p can then be computed using Euler's Theorem, the same way we find a modular inverse mod p.
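The combining step can be sketched as follows (iterative CRT for pairwise coprime moduli; `pow(m, -1, mi)` for the modular inverse needs Python 3.8+, and the function name is mine):

```python
def crt(remainders, moduli):
    """Combine x ≡ r_i (mod m_i) for pairwise coprime moduli into
    x mod (product of the m_i)."""
    x, m = 0, 1
    for r, mi in zip(remainders, moduli):
        # solve x + m*t ≡ r (mod mi)  =>  t ≡ (r - x) / m (mod mi)
        t = ((r - x) * pow(m, -1, mi)) % mi
        x += m * t
        m *= mi
    return x % m
```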

I would like to know how to get the other subtasks where C > 0.


By zscoder, history, 12 months ago, In English,

For this problem, the tutorial states that maintaining the sum of a Geometric Progression is a simple problem that can be solved using a Segment Tree. However, I don't know how to do this (I think Lazy Propagation should be involved). Any help please?


By zscoder, history, 12 months ago, In English,

I have ideas for problems and would like to write a Codeforces round, so how should I do so? (I know some basics of writing problems, though I'm not quite familiar with the testlib library.)

UPD : I finished all the problems and uploaded all of them to Polygon, but I still haven't gotten a reply (my message hasn't even been seen yet) :(

UPD2 : Got a reply. Thanks to everyone who commented!


By zscoder, history, 12 months ago, In English,

Abridged Problem Statement : You have a list of N distinct words, and you want to print K of them. You have a very basic text editor: it supports 3 types of operations: typing a letter, deleting the previous letter, and printing the current document. You need to leave the document empty after printing the K words. What's the minimum number of operations required to get the job done?

Constraints : 1 ≤ K ≤ N ≤ 300. The total length of all N words in each set will be at most 100,000 characters.

Solution :

Name the words s1, s2, ..., sN, and let s0 = "". We initially have s0 in the document. After some operations, we want to have printed K of s1, s2, ..., sN and returned the document to s0. For a pair of strings si, sj, how many operations do we need to print sj if the document currently holds si? Let p be the length of the longest common prefix of si and sj. We can compute this naively in O(min(|si|, |sj|)). Then, we need to delete |si| - p characters, type |sj| - p characters, and print (1 operation), for a total of |si| + |sj| - 2p + 1 operations. We can precalculate this value for each pair of words si, sj in a short time.

Now, suppose we have chosen our K words s1, s2, ..., sK. In which order should we print them to use the minimum number of operations? We claim that printing the strings in lexicographical order is optimal. Let s1, s2, ..., sK be arranged in lexicographical order, and suppose we instead print them in the sequence s1, s2, ..., si-1, sj, si+1, ..., sj-1, si, sj+1, ..., sK, with i < j. Let a1, a2, ..., aK be the lengths of the words s1, s2, ..., sK respectively, and denote the length of the longest common prefix of sa and sb by L(a, b).

Note that L(i, j) ≥ L(i, k) if i < j < k. Indeed, suppose to the contrary that the longest common prefix of si and sk is longer than that of si and sj, and let p be the longest common prefix of si and sk. Then sj does not have p as a prefix, contradicting the fact that si, sj, sk are in lexicographical order. We'll use this inequality many times below.

It can easily be seen that the total number of operations used is 2(a1 + a2 + ... + aK) - 2(L(1, 2) + L(2, 3) + ... + L(i - 1, j) + L(j, i + 1) + ... + L(j - 1, i) + L(i, j + 1) + ... + L(K - 1, K)). Compare this to printing in lexicographical order. If j > i + 1, we just need to compare L(i - 1, j) + L(j, i + 1) + L(j - 1, i) + L(i, j + 1) against L(i - 1, i) + L(i, i + 1) + L(j - 1, j) + L(j, j + 1), and we want the former to be at most the latter. Indeed, L(i - 1, i) ≥ L(i - 1, j), L(i, i + 1) ≥ L(i, j - 1), L(j - 1, j) ≥ L(i + 1, j) and L(j, j + 1) ≥ L(i, j + 1), since the words are arranged in lexicographical order; summing these yields the desired inequality. Now suppose j = i + 1. We compare L(i - 1, i + 1) + L(i + 1, i) + L(i, i + 2) against L(i - 1, i) + L(i, i + 1) + L(i + 1, i + 2); again, L(i - 1, i) ≥ L(i - 1, i + 1) and L(i + 1, i + 2) ≥ L(i, i + 2), as desired.
So, printing the words lexicographically is the optimal way.

Now, we have our crucial step. It remains to select the K words that minimize the number of operations. We can do this by dynamic programming. First, sort the N words in lexicographical order. Let dp[end][parts] be the minimum number of operations needed to print parts words, all of whose indices are at most end, with the last word printed having index end. We want min(dp[1][K], dp[2][K], ..., dp[N][K]). If end < parts, we let dp[end][parts] be INF.

Let cost(i, j) be the number of operations needed to print word j if our current document is word i. Recall that we can look up cost(i, j) in O(1), since we precomputed the values.

Now, how do we find dp[end][parts]? It is the minimum of dp[i][parts - 1] + cost(i, end) over 1 ≤ i ≤ end - 1. The time complexity is O(N^2·K). Finally, don't forget the costs involving s0: typing the first word from the empty document at the beginning, and deleting the last word at the end.
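Putting it together, here is a compact sketch of the whole solution described above (helper names are mine, not from the original code; lcp is computed naively per pair rather than precomputed):

```python
def min_operations(words, k):
    """Minimum editor operations (type, delete, print) to print k of
    the given words, starting and ending with an empty document.
    Sort words lexicographically; cost(i, j) = |s_i| + |s_j| - 2*lcp + 1
    to go from document s_i to having printed s_j."""
    def lcp(a, b):
        n = 0
        while n < min(len(a), len(b)) and a[n] == b[n]:
            n += 1
        return n

    words = sorted(words)
    n = len(words)
    INF = float('inf')
    # dp[i][j]: min ops having printed j words, the last being words[i]
    dp = [[INF] * (k + 1) for _ in range(n)]
    for i in range(n):
        dp[i][1] = len(words[i]) + 1  # type it from empty, then print
    for j in range(2, k + 1):
        for i in range(n):
            for prev in range(i):
                if dp[prev][j - 1] < INF:
                    p = lcp(words[prev], words[i])
                    cost = len(words[prev]) + len(words[i]) - 2 * p + 1
                    dp[i][j] = min(dp[i][j], dp[prev][j - 1] + cost)
    # finally delete the last word to leave the document empty
    return min(dp[i][k] + len(words[i]) for i in range(n))
```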
