### ed1d1a8d's blog

By ed1d1a8d, history, 5 years ago,

Here is a problem my friend smurty posed:

Say we are given an array of non-negative integers of length n, representing stacks of dirt of certain heights. We can move one piece of dirt from one pile to an adjacent pile for cost 1. What is the minimum cost it takes to transform this array to be non-decreasing?

Neither my friend nor I could come up with a solution polynomial in n. Can CF think of something?

EDIT: BUMP

• +43

By ed1d1a8d, history, 6 years ago,

DCJ Round 2 starts soon.

"You will advance to the onsite final round for Distributed Code Jam if you are one of the top-scoring 15 contestants from Distributed Code Jam Round 2."

Let us discuss problems here afterwards.

Best of luck to all competitors!

• +76

By ed1d1a8d, history, 7 years ago,

#### 608A - Saitama Destroys Hotel

Author: ed1d1a8d

The minimum amount of time required is max(s, max_i(t_i + f_i)), where t_i and f_i are the arrival time and floor of passenger i, respectively.

The initial observation that should be made for this problem is that only the latest passenger on each floor matters. So, we can ignore all passengers that aren't the latest passenger on each floor.

Now, assume there is only a passenger on floor s. Call this passenger a. The time taken for this passenger is clearly t_a + f_a (the time spent waiting for the passenger plus the time taken for the elevator to reach the bottom).

Now, add in one passenger on a floor lower than s. Call this new passenger b. There are two possibilities for this passenger: either the elevator reaches the passenger's floor after the passenger's time of arrival, or the elevator reaches the passenger's floor before the passenger's time of arrival. In the first case, no time is added to the solution, and the solution remains t_a + f_a. In the second case, the passenger on floor s doesn't matter, and the time taken is t_b + f_b for the new passenger.

The only thing left is to determine whether passenger b is already waiting when the elevator reaches floor f_b. The elevator arrives there at time t_a + (f_a - f_b), so the passenger is waiting iff t_a + (f_a - f_b) ≥ t_b. Clearly this is equivalent to t_a + f_a ≥ t_b + f_b. Thus, the solution is max(t_a + f_a, t_b + f_b).

A similar line of reasoning can be applied to the rest of the passengers. Thus, the solution is max(s, max_i(t_i + f_i)).
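The whole solution is thus a single pass over the passengers. A minimal Python sketch (the function name and the (floor, time) pair format are my own framing, not the problem's input format):

```python
def min_time(s, passengers):
    """s: top floor; passengers: list of (floor, arrival_time) pairs.
    Returns the minimum time for the elevator to reach floor 0."""
    ans = s  # the elevator needs at least s seconds just to descend empty
    for f, t in passengers:
        ans = max(ans, t + f)  # wait until t, then descend f floors
    return ans
```

For instance, with s = 7 and passengers (2, 1), (3, 8), (5, 2), the answer is max(7, 3, 11, 7) = 11.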

#### 608B - Hamming Distance Sum

Author: ed1d1a8d

We are trying to find $\sum_{j=0}^{|b|-|a|} \sum_{i=0}^{|a|-1} [a_i \neq b_{i+j}]$, the total Hamming distance between $a$ and every length-$|a|$ substring of $b$. Swapping the sums, we see that this is equivalent to $\sum_{i=0}^{|a|-1} \sum_{j=0}^{|b|-|a|} [a_i \neq b_{i+j}]$.

Summing up the answer in the naive fashion will give an O(|a|·|b|) solution. However, notice that we can actually find the inner sum $\sum_{j=0}^{|b|-|a|} [a_i \neq b_{i+j}]$ for a fixed $i$ without going through each individual character. Rather, all we need is a frequency count of different characters. To obtain this frequency count, we can simply build prefix count arrays of all characters of b. Let's call this prefix count array F, where F[x][c] gives the number of occurrences of the character c in the prefix [0, x) of b. We can then write $\sum_{j=0}^{|b|-|a|} [a_i \neq b_{i+j}]$ as $(|b| - |a| + 1) - (F[i + |b| - |a| + 1][a_i] - F[i][a_i])$. This gives us a linear solution.
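The prefix-count idea above can be sketched as follows in Python (lowercase alphabet and |a| ≤ |b| assumed, as in the problem):

```python
def hamming_sum(a, b):
    """Sum of Hamming distances between a and all |a|-length substrings of b.
    Assumes lowercase letters and len(a) <= len(b)."""
    n, m = len(a), len(b)
    # F[x][c]: occurrences of character c in the prefix b[0:x)
    F = [[0] * 26 for _ in range(m + 1)]
    for x in range(m):
        F[x + 1] = F[x][:]
        F[x + 1][ord(b[x]) - ord('a')] += 1
    window = m - n + 1  # number of substrings, = |b| - |a| + 1
    total = 0
    for i, ch in enumerate(a):
        c = ord(ch) - ord('a')
        # a_i is compared against b[i .. i + m - n]
        matches = F[i + window][c] - F[i][c]
        total += window - matches
    return total
```

For example, with a = "ab" and b = "aab" the substrings are "aa" (distance 1) and "ab" (distance 0), so the total is 1.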

Time Complexity — O(|a| + |b|), Memory Complexity — O(|b|)

#### 607A - Chain Reaction

Author: Chilli

It turns out that it is actually easier to compute the complement of the problem — the maximum number of objects not destroyed. We can subtract this from the total number of objects to obtain our final answer.

We can solve this problem using dynamic programming. Let dp[x] be the maximum number of objects not destroyed in the range [0, x] given that position x is unaffected by an explosion. Writing power[x] for the power level of the object at position x (if there is one), we can compute dp[x] using the following recurrence:

dp[x] = dp[x - 1] if there is no object at position x, and dp[x] = dp[x - power[x] - 1] + 1 otherwise (where dp of a negative index is taken to be 0).

Now, since we can place an object to the right of all objects with any power level, we can destroy any suffix of the (sorted list of) objects. The maximum number of surviving objects, given that some suffix is destroyed first, can be easily evaluated as

max over 0 ≤ x ≤ max(a_i) of dp[x].

Since this is the complement of our answer, our final answer is actually n − max_x dp[x].
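A Python sketch of this dp (the (position, power) pair format is my own framing of the input):

```python
def min_destroyed(beacons):
    """beacons: list of (position, power) pairs with distinct positions.
    Returns the minimum number of beacons destroyed."""
    if not beacons:
        return 0
    power = {p: b for p, b in beacons}
    max_pos = max(power)
    dp = [0] * (max_pos + 1)  # dp[x]: max survivors in [0, x], x unaffected
    best = 0
    for x in range(max_pos + 1):
        if x in power:
            left = x - power[x] - 1       # leftmost position not destroyed by x
            dp[x] = (dp[left] if left >= 0 else 0) + 1
        else:
            dp[x] = dp[x - 1] if x > 0 else 0
        best = max(best, dp[x])
    return len(beacons) - best
```
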

Time Complexity — O(max(a_i)), Memory Complexity — O(max(a_i))

#### 607B - Zuma

Author: Amor727

We use dp on contiguous ranges to calculate the answer. Let D[i][j] denote the number of seconds it takes to collapse some range [i, j]. Let us work out a transition for this definition. Consider the left-most gemstone. This gemstone will either be destroyed individually or as part of a non-singular range. In the first case, we destroy the left-most gemstone and reduce to the subproblem [i + 1, j]. In the second case, notice that the left-most gemstone will match up with some gemstone to its right. We can iterate through every gemstone with the same color as the left-most (let k be the index of this matching gemstone) and reduce to two subproblems [i + 1, k - 1] and [k + 1, j]. We can reduce to the subproblem [i + 1, k - 1] because we can just remove gemstones i and k with the last removal of [i + 1, k - 1]. We must also make a special case for when the first two elements in a range are equal and consider the subproblem [i + 2, j].

Here is a formalization of the dp (with D[i][j] = 0 for i > j, and c_i denoting the color of gemstone i). D[i][j] is the minimum over all applicable cases of:

- D[i + 1][j] + 1 (destroy the left-most gemstone by itself);
- D[i + 2][j] + 1, if c_i = c_{i+1} (destroy the first two gemstones as a length-2 palindrome);
- D[i + 1][k - 1] + D[k + 1][j], for every k in [i + 2, j] with c_k = c_i (remove gemstones i and k together with the last removal of [i + 1, k - 1]).

Why is this dp correct? (See also this comment: http://codeforces.com/blog/entry/22256?#comment-268876.) Notice that the recursive version of our dp will come across the optimal solution in its search. Moreover, every path in the recursive search tree corresponds to some valid sequence of deletions. Since our dp only searches across valid deletions and will at some point come across the optimal sequence of deletions, the answer it produces will be optimal.
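The formalized dp translates almost directly into memoized Python (O(n²) states with O(n) transitions, hence O(n³) overall):

```python
from functools import lru_cache

def zuma(colors):
    """Minimum number of palindromic-substring removals to clear the line."""
    n = len(colors)

    @lru_cache(maxsize=None)
    def D(i, j):
        if i > j:
            return 0
        if i == j:
            return 1
        best = D(i + 1, j) + 1                 # remove gemstone i alone
        if colors[i] == colors[i + 1]:
            best = min(best, D(i + 2, j) + 1)  # remove the pair i, i+1
        for k in range(i + 2, j + 1):
            if colors[k] == colors[i]:
                # remove i and k with the last removal of (i, k)
                best = min(best, D(i + 1, k - 1) + D(k + 1, j))
        return best

    return D(0, n - 1)
```
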

Time Complexity — O(n^3), Space Complexity — O(n^2)

#### 607C - Marbles

Author: ed1d1a8d

Define the reverse of a sequence as the sequence of moves needed to undo its movement. For example, EEE and WWW are reverses, and WWSSSEE and WWNNNEE are reverses. I claim it is impossible to get both balls to the end if and only if some suffix of the first sequence is the reverse of a suffix of the second sequence.

Let us prove the forward case first, that if two suffixes are reverses, then it is impossible to get both balls to the end. Consider a sequence and its reverse, and note that they share the same geometric structure, except that the direction of travel is opposite. Now imagine laying the two grid paths over each other so that their reverse suffixes are laying on top of each other. It becomes apparent that in order to move both balls to their ends, they must cross over at some point within the confines of the suffix. However, this is impossible under the movement rules, as in order for this to happen, the two balls need to move in different directions at a single point in time, which is not allowed.

Now let us prove the backwards case: that if no suffixes are reverses, then it is possible for both balls to reach the end. There is a simple algorithm that achieves this goal: move the first ball to its end, then move the second ball to its end, then move the first ball to its end, and so on. Let's call each of these "move ball x to its end" operations one step of the algorithm. After every step, the combined distance of both balls from the start strictly increases. Without loss of generality, consider a step that moves the first ball to its end, increasing the first ball's distance by some value k. The second ball can move back at most k - 1 steps (only a reverse sequence could move it back a full k steps), so the combined distance changes by at least k - (k - 1) = +1. Hence, at some point the combined distance will reach 2(n - 1) and both balls will be at the end.

In order to check if suffixes are reverses of each other, we can reverse the first sequence and see if one of its prefixes matches a suffix of the second sequence. This can be done using string hashing or KMP in linear time.
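A sketch of this check using the KMP prefix function (the '#' separator is assumed not to occur in the move alphabet; function names are mine):

```python
def invert(seq):
    """Reverse of a move sequence: reversed order, each move negated."""
    opp = {'N': 'S', 'S': 'N', 'E': 'W', 'W': 'E'}
    return ''.join(opp[c] for c in reversed(seq))

def has_reverse_suffixes(s1, s2):
    """True iff some non-empty suffix of s1 is the reverse of a suffix of s2.
    The reverse of a suffix of s1 is a prefix of invert(s1), so we test
    whether a non-empty prefix of invert(s1) is a suffix of s2."""
    t = invert(s1) + '#' + s2
    pi = [0] * len(t)  # KMP prefix function of t
    for i in range(1, len(t)):
        j = pi[i - 1]
        while j and t[i] != t[j]:
            j = pi[j - 1]
        if t[i] == t[j]:
            j += 1
        pi[i] = j
    return pi[-1] > 0
```
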

Time Complexity — O(n), Memory Complexity — O(n)

#### 607D - Power Tree

Author: ed1d1a8d

Let's solve a restricted version of the problem where all queries are about the root. First however, let us define some notation. In this editorial, we will use d(x) to denote the number of children of vertex x. If there is an update involved, d(x) refers to the value prior to the update.

To deal with these queries, notice that each vertex within the tree has some contribution c_i to the root power. This contribution is a multiple m_i of the vertex's value v_i, such that c_i = m_i·v_i. If we sum the contributions of every vertex, we get the power of the root.

To deal with updates, notice that adding a vertex u to a leaf p scales the multiplier of every vertex in p's subtree by a factor of . As for the contribution of u, notice that mu = mp.

Now, in order to handle both queries and updates efficiently, we need a fast way to sum all contributions, a way to scale contributions in a subtree, and a way to add new vertices. This sounds like a job for ... a segment tree!

We all know segment trees hate insertions, so instead of inserting new vertices, we pre-build the tree with initial values 0, updating values instead of inserting new vertices. In order to efficiently support subtree modification, we construct a segment tree on the preorder walk of the tree, so that every subtree corresponds to a contiguous segment within the segment tree. This segment tree stores the contributions of each vertex and needs to support range-sum query, range-multiply update, and point update (updating a single element). The details of implementing such a segment tree are left as an exercise to the reader.

Armed with this segment tree, queries become a single range-sum. Scaling the contribution in a subtree becomes a range-multiply (we don't need to worry about multiplying un-added vertices because they are set to 0). And adding a new vertex becomes a range-sum-query to retrieve the contribution of the parent, and then a point-set to set the contribution of the added vertex.
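For concreteness, here is a sketch of a segment tree supporting those three operations. The modulus is my assumption for illustration; the problem's actual arithmetic isn't restated here:

```python
MOD = 10**9 + 7  # assumed modulus, for illustration only

class SegTree:
    """Range-sum query, range-multiply update, point set, all modulo MOD."""
    def __init__(self, n):
        self.n = n
        self.sum = [0] * (4 * n)
        self.lazy = [1] * (4 * n)  # pending multiplicative factor

    def _push(self, node):
        f = self.lazy[node]
        if f != 1:
            for child in (2 * node, 2 * node + 1):
                self.sum[child] = self.sum[child] * f % MOD
                self.lazy[child] = self.lazy[child] * f % MOD
            self.lazy[node] = 1

    def _mul(self, node, lo, hi, l, r, f):
        if r < lo or hi < l:
            return
        if l <= lo and hi <= r:
            self.sum[node] = self.sum[node] * f % MOD
            self.lazy[node] = self.lazy[node] * f % MOD
            return
        self._push(node)
        mid = (lo + hi) // 2
        self._mul(2 * node, lo, mid, l, r, f)
        self._mul(2 * node + 1, mid + 1, hi, l, r, f)
        self.sum[node] = (self.sum[2 * node] + self.sum[2 * node + 1]) % MOD

    def _set(self, node, lo, hi, i, v):
        if lo == hi:
            self.sum[node] = v % MOD
            self.lazy[node] = 1
            return
        self._push(node)
        mid = (lo + hi) // 2
        if i <= mid:
            self._set(2 * node, lo, mid, i, v)
        else:
            self._set(2 * node + 1, mid + 1, hi, i, v)
        self.sum[node] = (self.sum[2 * node] + self.sum[2 * node + 1]) % MOD

    def _query(self, node, lo, hi, l, r):
        if r < lo or hi < l:
            return 0
        if l <= lo and hi <= r:
            return self.sum[node]
        self._push(node)
        mid = (lo + hi) // 2
        return (self._query(2 * node, lo, mid, l, r)
                + self._query(2 * node + 1, mid + 1, hi, l, r)) % MOD

    def multiply(self, l, r, f): self._mul(1, 0, self.n - 1, l, r, f)
    def point_set(self, i, v): self._set(1, 0, self.n - 1, i, v)
    def range_sum(self, l, r): return self._query(1, 0, self.n - 1, l, r)
```
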

Finally, to solve the full version of the problem, notice that the power of a non-root vertex w is a scaled down range sum in the segment tree. The value of the scale is , the proof of which is left as an exercise to the reader.

Time Complexity — , Space Complexity — O(q)

#### 607E - Cross Sum

Author: GlebsHP

The problem boils down to summing the k closest intersections to a given query point.

We binary search on the distance d to the k-th closest intersection point. For a given distance d, the number of intersection points within distance d of our query point equals the number of pairwise line intersections that lie within a circle of radius d centered at the query point. To count these, we find the intersection points of the lines with the circle and sort them by angle. Two lines intersect inside the circle if and only if their intersection points with the circle interleave around its boundary (i.e. appear in the pattern ABAB, where the As and Bs are the circle intersection points of the two lines). Counting such interleaving pairs can be done with a simple sweep.

Once we have d, we once again draw a circle of radius d, but this time we enumerate the intersection points inside it in O(k) instead of just counting them.

It may happen that there are I < k intersections strictly inside the circle of radius d but I' > k inside a circle of radius d + ε (i.e. several intersection points lie at distance exactly d). In this case, we should calculate the answer for the I points inside and add d·(k - I) for the remaining points.

Time Complexity — , Space Complexity — O(n)

• +78

By ed1d1a8d, history, 7 years ago,

Greetings! CodeForces Round #336 welcomes both divisions this Wednesday, December 23, 2015 at 16:35:00 UTC. The round is authored by me, Amor727, Chilli, and GlebsHP. We hope you'll like the problems. Scoring is not dynamic. Div1: 500 — 1250 — 1500 — 2000 — 3000; Div2: 500 — 1000 — 1500 — 2250 — 2500.

Many thanks to Amor727 and Chilli for writing and editing problems, GlebsHP for organizing the competition and for his very helpful attitude, Delinur for translations, winger for testing, Marina Kruglikova for statement fixes, and MikeMirzayanov for his amazing CF and Polygon platforms.

During this contest you will be assisting Genos from the series One Punch Man. His master Saitama will also make some appearances. We wish everyone good luck and high rating in assisting the two. From the contest crew and the two fellows below, happy holidays!

Congratulations to the winners:

Division 1:

Division 2:

Editorial of round: /blog/entry/22256

• +427

By ed1d1a8d, 7 years ago,

I'm having difficulty understanding the editorial for this problem: Cow Coupons. The editorial states that the algorithm is optimal because it chooses the minimum value for each cow added to the lineup. However, how do we know that we will never choose a cow with greater value that contributes less to the "revoke" heap? I tried constructing a counter-example, but obviously failed because the algorithm is correct. Still, I couldn't find a clear way to show why such a construction is impossible. My question is: why is the algorithm correct? I don't really get their explanation. Thanks.

• +15

By ed1d1a8d, 8 years ago,

Over the past few months, the frequency and magnitude of spam posts have increased dramatically on CodeForces, and today it has gotten to the point where it is a serious inconvenience to the community. When the entire recent actions list is full of spam blogs save for one post on the contest from that day (Picture Reference Link), something must be done.

Firstly, some direct observations:

• Almost all spam blog posts are created by users that are extremely new

• Spam blog users post their blogs almost immediately after account creation

• Spam blog users have NO submitted problems or registered contests

Secondly, I'd like to note (and to refresh your memory) that CodeForces does have a Captcha system in place for registration. This means that all the spam users are more likely than not created by real human beings and not bots (I would commend anyone who is able to make a sufficiently accurate OCR program).

From these observations I would like to propose some changes to the system that can potentially be of use. A word of caution first, however: because each spam blog is created by a real individual, no matter how hard we make the process of posting a blog, a spam user can always find a way. Thus, we are not trying to prevent spam users from posting, but rather to deter them from doing so. That is, we make the amount of work required to create a spam blog disproportionate to what the spammer gains from posting such material. Ideally, a solution would make life as hard as possible for the spammer while minimizing the burden on regular users who want to post.

One such solution that I believe is adequate is to require users to submit an AC program before they are allowed to write a blog post. This is certainly a great deal of work for a spammer (as I assume most spammers are unfamiliar with competitive programming judges), while at the same time easy enough that a beginner programmer could complete the task. Statistics for problems solved are already tracked by the CF system, so I would assume (naively) that it wouldn't be too difficult to link them to the posting system.

Something needs to be done about the spam posts on CF, as they are really interfering with the community. I encourage everyone to post their opinions and potential solutions, as this issue would affect all new CodeForces registrants.

TL;DR Too many spam posts on CF. Require AC submission before posting blog to solve.

P.S. I remember there was a similar blog post a while back, but I couldn't find it.

TIME TO TAKE ACTION

I believe that we've come up with a pretty comprehensive list of possible solutions. The next step is to let the CF mods know that we are serious about getting something done. Right now, the only method I have found is to message MikeMirzayanov using the CodeForces system. If anyone has another reliable method of contact, please do share. I would suggest linking to this discussion rather than including one specific proposal, as it provides a wider variety of options as well as a better organized listing of potential solutions. I would also include in the message why getting rid of spam is important to you, as a personal message conveys more than just a complaint. If you don't have the time or can't think of anything to write, I've prepared a template message that you can send: http://pastebin.com/LHSzG17j

• +44

By ed1d1a8d, 8 years ago,

Last weekend (March 1-2) the 2014 HackerRank 101 Hack took place. In particular, a problem named Coloring Tree was present. Basically, the problem gives you a tree where each vertex is assigned a color and asks you queries where you have to find the number of distinct colors in a particular subtree.

My approach was to compute the number of distinct colors for every subtree using a DFS traversal of the tree. My algorithm went through the tree and, for every vertex, computed a set (std::set) of all the distinct elements within its subtree. I accomplished this for every vertex using the following steps:

1. Compute the sets for each of the children of the current vertex.
2. Merge the children together by inserting all elements of the child sets within the subtree into the set of the largest child. (NOTE: Do not copy the set of the largest child. Simply insert the rest of the elements such that the number of insertions is #_OF_ELEMENTS_IN_SUBTREE-#_OF_ELEMENTS_IN_LARGEST_CHILD).
3. Record the size of the merged set.

I accomplished this in my code through the use of an array of sets, and keeping pointers to the "active set" of a vertex, thus avoiding copying the largest child.
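The three steps above can be sketched as follows (a Python sketch of the merging idea, not my actual submission; `res[v]` plays the role of the recorded set sizes):

```python
import sys

def count_distinct_colors(n, color, children, root=0):
    """res[v] = number of distinct colors in the subtree of v.
    Small-to-large: keep the largest child's set and insert the rest into it."""
    sys.setrecursionlimit(max(1000, 2 * n + 10))
    res = [0] * n

    def dfs(v):
        best = {color[v]}
        for c in children[v]:
            s = dfs(c)
            if len(s) > len(best):
                best, s = s, best  # never copy the largest set, just steal it
            best |= s              # insert the smaller set's elements
        res[v] = len(best)
        return best

    dfs(root)
    return res
```
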

Here is my code on Ideone: http://ideone.com/Lje1R5

Now, onto my actual question. The reason I used this technique is because I remembered a similar technique from a past USACO problem. The USACO editorial explained that the complexity of such a merging is O(n log² n), if I understood it correctly. However, it did not provide a proof of the complexity. I've tried to deduce it on my own with induction, but am stuck on analyzing the merge step, since it involves the size of the largest subtree. I thought of maybe using something like Jensen's inequality to bound the sum over the subtrees, but wasn't able to get very far due to my inexperience with Jensen's. I am pretty sure the complexity is around O(n log² n) because of my program's runtime: Runtimes (unless the test data was extremely weak?). My question is: how can such a complexity be deduced?

• +3

By ed1d1a8d, 9 years ago,

Here's the problem: SUB_PROB

I'm having trouble getting my solution to run in time, even though my algorithm should pass. If you haven't already read the problem statement, it asks you to check whether or not each of N words occurs in a piece of text of length M. I'm currently using Aho-Corasick to solve the problem. Basically, I build the search trie first in O(N·S) (where S is the maximum word length), and then search through the text once in O(M). This adds up to an O(M + N·S) algorithm, which, by my calculations, should AC.
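For reference, here is a compact Python sketch of the build-then-scan pipeline described above (not my submitted code, just the algorithmic shape):

```python
from collections import deque

def aho_corasick_contains(text, patterns):
    """For each pattern, report whether it occurs in text.
    Build: O(total pattern length); scan: O(len(text) + matches)."""
    goto, fail, out = [{}], [0], [[]]  # trie edges, failure links, outputs
    for idx, pat in enumerate(patterns):
        node = 0
        for ch in pat:
            if ch not in goto[node]:
                goto.append({}); fail.append(0); out.append([])
                goto[node][ch] = len(goto) - 1
            node = goto[node][ch]
        out[node].append(idx)
    # BFS to compute failure links; the root's children fail to the root
    q = deque(goto[0].values())
    while q:
        u = q.popleft()
        for ch, v in goto[u].items():
            q.append(v)
            f = fail[u]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[v] = goto[f].get(ch, 0)
            out[v].extend(out[fail[v]])
    found = [False] * len(patterns)
    node = 0
    for ch in text:
        while node and ch not in goto[node]:
            node = fail[node]
        node = goto[node].get(ch, 0)
        for idx in out[node]:
            found[idx] = True
    return found
```
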

Here is my code: OLD http://ideone.com/ndQcN4 OLD

EDIT — HERE IS UPDATED CODE: http://ideone.com/3P7XIj

Note: I had to split up the query strings into 3 batches since my trie would exceed the memory limit otherwise.

Is my algorithm flawed, or is my implementation not optimal? And if it is the latter, what can I do to speed it up? I have already replaced all recursive methods with iterative ones, so I don't know how to speed it up further. Thanks.

• +1

By ed1d1a8d, 9 years ago,

I'm having trouble figuring out what is wrong with my code. It gets WA on test case #9. Some pointers (*no pun intended*) or advice would be appreciated.

Here is a link to my code: http://ideone.com/vcCrHZ

• +1

By ed1d1a8d, 10 years ago,

I'm a big fan of AI, and especially love it when your programs get to compete head to head with ones written by other people. I noticed no one's posted about this site, so I thought I'd bring it up.

HackerRank is basically a platform that allows programmers to compete in a collection of intellectually stimulating AI problems. I haven't delved very deep into it yet, but I just wanted to share this cool site with everyone. (And I'm bored of waiting for Google's new AI challenge.)

Here is the site: HackerRank Site

• +13

By ed1d1a8d, 10 years ago,

What form of I/O should we use in competitive programming?

For example, scanf and printf are undoubtedly faster than cin and cout, but the latter two are easier and faster to code. How should we address this tradeoff?