By darkshadows, 3 weeks ago

Google Kick Start hosts online rounds throughout the year, giving participants the opportunity to test and grow their coding abilities while getting a sample of the programming skills needed for a technical career at Google. Top candidates might be invited for interviews.

I invite you to solve some fun and interesting problems on Sept 29 2019, 18:00 UTC.

Dashboard can be accessed here during the contest. Problem analyses will be published soon after the contest.


By darkshadows, history, 6 months ago

Google Kick Start hosts online rounds throughout the year, giving participants the opportunity to test and grow their coding abilities while getting a sample of the programming skills needed for a technical career at Google. Top candidates might be invited for interviews.

I invite you to solve some fun and interesting problems on Apr 20 2019, 23:00 UTC.

Dashboard can be accessed here during the contest. Problem analysis will be published soon after the contest.


By darkshadows, 12 months ago

Throughout the year, Google Code Jam hosts online Kickstart rounds to give participants the opportunity to develop their coding skills, get acquainted with Code Jam’s competition arena, and get a glimpse into the programming skills needed for a technical career at Google.

Each Kickstart round gives participants 3 hours to solve challenging, algorithmic problems developed by Google engineers. Participating is a fun way to grow your coding skills—and potentially explore opportunities at Google.

I invite you to solve some fun and interesting problems on Sunday, Sept 30, 2018, 19:00 UTC.

Dashboard can be accessed here during the contest. Problem analysis will be published soon after the contest.


By darkshadows, history, 15 months ago

Throughout the year, Google Code Jam hosts online Kickstart rounds to give participants the opportunity to develop their coding skills, get acquainted with Code Jam’s competition arena, and get a glimpse into the programming skills needed for a technical career at Google.

Each Kickstart round gives participants 3 hours to solve challenging, algorithmic problems developed by Google engineers. Participating is a fun way to grow your coding skills—and potentially explore opportunities at Google.

I invite you to solve some fun and interesting problems on Sunday, July 29, 2018, 05:00 UTC.

Dashboard can be accessed here during the contest. Problem analysis will be published soon after the contest.


By darkshadows, 17 months ago

Throughout the year, Google Code Jam hosts online Kickstart rounds to give participants the opportunity to develop their coding skills, get acquainted with Code Jam’s competition arena, and get a glimpse into the programming skills needed for a technical career at Google.

Each Kickstart round gives participants 3 hours to solve challenging, algorithmic problems developed by Google engineers. Participating is a fun way to grow your coding skills—and potentially explore opportunities at Google.

I invite you to solve some fun and interesting problems on Sunday, May 27, 2018, 05:00 UTC.

Dashboard can be accessed here during the contest. Problem analysis will be published soon after the contest.


By darkshadows, 18 months ago

Throughout the year, Google Code Jam hosts online Kickstart rounds to give participants the opportunity to develop their coding skills, get acquainted with Code Jam’s competition arena, and get a glimpse into the programming skills needed for a technical career at Google.

Each Kickstart round gives participants 3 hours to solve challenging, algorithmic problems developed by Google engineers. Participating is a fun way to grow your coding skills—and potentially explore opportunities at Google.

I invite you to solve some fun and interesting problems on Saturday, April 21, 2018, 23:00 UTC.

Dashboard can be accessed here during the contest. Problem analysis will be published soon after the contest.


By darkshadows, history, 2 years ago

Throughout the year, Code Jam hosts online Kickstart rounds to give students the opportunity to develop their coding skills, get acquainted with Code Jam’s competition arena, and get a glimpse into the programming skills needed for a technical career at Google.

I invite you to solve some fun and interesting problems with Professor Shekhu and Akki this Sunday (hopefully in your timezone as well!), Oct 22, 05:00 UTC – 08:00 UTC.

You'll find the link to contest dashboard at https://code.google.com/codejam/kickstart/ on the day of the contest. Also, we'll try to publish analysis as early as possible.


By darkshadows, history, 3 years ago

Hi everyone,

I'm excited to invite you to participate in 101 Hack 47. The contest starts at 1500 UTC, March 21. Note the unusual timing!

All problems have been prepared by me. You might have seen some problems and contests set by me on various platforms. This is my third round on HackerRank, after 101 Hack 26 and 101 Hack 40. Also, here is an old collection of my problems.

There will be five tasks and two hours for you to solve them. Contest will be rated and top-10 contestants on the leaderboard will receive amazing HackerRank T-shirts!

I'd like to thank kevinsogo for testing the problems and his contribution towards fixing problem statements and editorials. It's always a pleasant experience to work with him.

I have tried to create a balanced and interesting problem set. I suggest you read all problems, since there are subtasks and partial scoring. Problems are very much on the easier side compared to previous contests and I bet there will be a lot of full scores (like a lot, seriously!).

Editorials (written by me) will be published right after the contest.

Update 1: Scoring will be 15-30-45-65-80.
Update 2: Contest has ended. Congratulations to the top 10 for getting full scores. Top 10 are:

Update 3: Editorials are up.


By darkshadows, 3 years ago

Hi everyone,

I'm excited to invite you to participate in the Codechef October Cook-Off. The contest starts on Sunday, 23 October 2016, 16:00:00 UTC. In order to participate, you only need a CodeChef account. If you don't have one, you can register here.

All problems have been prepared by me. You might have seen some problems and contests set by me on various platforms. This is my second round on Codechef, after Cook-Off 50. Also, here is an old collection of my problems.

There will be five tasks and two and a half hours for you to solve them. The contest will be rated, and the top 10 performers in the Global and Indian categories will get CodeChef laddus; with them, you can claim CodeChef goodies. You can find out more here. (If you didn't get your previous goodies, you can contact winners@codechef.com.)

I'd like to thank Errichto for testing the problems, Antoniuk for the Russian translations, Team VNOI for the Vietnamese translations, and huzecong for the Mandarin translations. I especially want to thank PraveenDhinwa for his invaluable contributions in preparing the contest and for being the contest admin and language verifier for the problem statements.

I have tried to create a balanced and interesting problem set. I suggest you read all problems. I am expecting a lot of full scores. :)

Editorials (written by me) will be published right after the contest.
UPD 1: Editorials can be found at https://discuss.codechef.com/tags/cook75/
UPD 2: Applause to the top 6 who solved all 5 problems:
1. anta
2. natsugiri
3. LHiC
4. bigINnnner
5. uwi
6. chemthan

Hope you had fun. I'd love to hear your feedback on the quality of the problemset. :)


By darkshadows, 3 years ago

Hi everyone,

I'm excited to invite you to participate in 101 Hack 40. The contest starts at 1630 UTC, August 23.

All problems have been prepared by me. You might have seen some problems and contests set by me on various platforms. This is my second round on HackerRank, after 101 Hack 26. Also, here is an old collection of my problems.

There will be five tasks and two hours for you to solve them. Contest will be rated and top-10 contestants on the leaderboard will receive amazing HackerRank T-shirts!

I'd like to thank wanbo for testing the problems and for his contribution. It's always a pleasant experience to work with him. Also, I'm thankful to my friends karanaggarwal and tanujkhattar for invaluable discussions on a few of the problems.

I have tried to create a balanced and interesting problem set. I suggest you read all problems, since there are subtasks and partial scoring. I think problems are a little on the easier side compared to previous contests.

Editorials (written by me) will be published right after the contest.

PS: Announcement format has been shamelessly copied from previous entries by muratt and malcolm. :)

UPD 1: Bump. 12 hours to go!
UPD 2: Scoring will be 20-30-50-80-100.
UPD 3: Contest has ended. Congratulations to the top 2 for getting full scores. Top 10 are:


By darkshadows, history, 3 years ago

When you load the page for an organisation rating, like this one: http://codeforces.com/ratings/organization/227/, and the organisation has more than 200 members, you need to go to page 2. However, the page 2 button redirects to http://codeforces.com/ratings/page/2 instead of http://codeforces.com/ratings/organization/227/page/2

I hope someone from Codeforces team has a look at this.


By darkshadows, 4 years ago

Recently someone asked me a question on Quora about how groups can help in competitive programming. It reminded me of I_love_Hoang_Yen's modelling of competitive programming as an obstacle-jumping game, so I went ahead and wrote this answer. I'd also like to share the text here.

There is a great post by I_love_Hoang_Yen where he tries to model competitive programming as an obstacle-jumping game. I'd like to use the same analogy here to help you. I have used some excerpts from his post, which I've italicized.

The 'science' of problem solving
For each problem, in order to solve it, you must jump over a gap or an obstacle. It can be either a difficult implementation, or some hard-to-see observation, difficult algorithm, etc.

For me, some problems are very easy (e.g. Codeforces div 2 A, B..), because those gaps seem so small to me, and passing through them feels just like casual walking.

Some problems are very hard. The gap is just too huge, or there are many, many gaps, and you can get stuck in the middle because you're too tired after maybe the first gap.

Or maybe there is some trick/concept that you don't know yet! Here you are, applying brute force or greedy technique to a DP problem; of course, you won't succeed.

But now comes your experienced friend, who has been doing competitive programming for a few months and knows how to do complexity analysis. She knows brute force won't succeed, and with some more convincing she proves to you why greedy is wrong. This is where working in groups is helpful: now you'll easily cross the obstacle with some guidance.

If you have read Thanh Trung Nguyen's post you'll know what to do to improve yourself individually. You just need to keep learning new tricks and concepts, and at the same time you need to make sure you aren't making the same mistakes again. Occasionally, when you are stuck, you can ask your friends for help.

Where to find these friends
You are looking for people who are doing competitive programming and a good community where people help each other to make the game more interesting; what can be a bigger community than Codeforces! If you are facing some difficulties you can always put up a blog post asking for help.
However, as I've noticed, some people might find online collaboration a little ineffective compared with physical presence. Some abstractions might be difficult to understand when they are just written down, instead of someone explaining them to you in front of you on a board or paper. If you are a school or college student, the best way is to look around you; you'll find plenty of people doing these contests who are willing to discuss and share their knowledge with you. If there aren't any such people, you can form a programming club in your institution and inspire other people to join you.

Good luck!

Foot note: All gifs made using Vector — Android Apps on Google Play


By darkshadows, 4 years ago

### 100889A - A Beautiful Array

Given an array of integers, find a permutation of the array to maximize the sum .

If we observe the formula closely we see that we have to calculate . If the array is of odd length, we can simply skip the middle element. This can be maximized by putting the large numbers in the right half, and the small numbers in the left half. One good way to do this is to simply sort the array in ascending order.

Complexity: O(N log N).
Code: http://ideone.com/nJddvt

### 100889B - Backward and Forward

Given an array A, make it palindromic using minimum merging operations. In one merging operation two adjacent elements can be replaced by their sum.

To make an array palindromic we can always apply the merging operation n - 1 times, where n is the size of the array; in that case the array is reduced to a single element. But in this problem we are asked to do it in the minimum number of operations.

Let f(i, j) denote the minimum number of merging operations needed to make the array a palindrome between indices i and j. If i ≥ j, the answer is 0.
If A[i] = A[j], then there is no need to do any merging operation at index i or index j. The answer in this case is f(i + 1, j - 1).
But if A[i] ≠ A[j], we need to merge. If A[i] > A[j], we do a merging operation at index j: merge indices j - 1 and j, updating A[j - 1] = A[j - 1] + A[j]. The answer in this case is 1 + f(i, j - 1).
For the case A[i] < A[j], update A[i + 1] = A[i] + A[i + 1]. The answer in this case is 1 + f(i + 1, j).

Our answer will be f(0, n - 1), where n is the size of array A. This problem can also be solved iteratively with a two-pointer method (one pointer at the start of the array, the other at the last element), keeping count of the total merging operations done so far.

Complexity: O(N).
Code: http://ideone.com/lIHYJF

### 100889C - Chunin Exam

You are required to reach (N, M) from (1, 1) in a matrix in which P cells are blocked, and you are allowed two types of queries:

You can use constant memory and at most O(12 * P) queries.

How would you come out of a maze full of darkness (i.e. you can only check adjacent cells) with memory just enough to remember the last cell you came from? The idea is very intuitive: just walk along the walls (treat blocked cells as walls)! It's guaranteed that a path exists, so in O(P + N + M) queries in the worst case, we should safely reach our destination.

So, we put our left/right hand on the wall and start walking in that direction. For each cell that we visit, we maintain the current direction in which the wall is. Based on that, we try to extend the wall in possible directions by looking at adjacent blocked cells.

The worst case where you might perform O(12 * P) queries is when there is a pattern like this:

| | | | |
| | |0| |
| |0| | |
| | |0| |
| |0| | |

Here 0 denotes a blocked cell.

### 100889D - Dicy Numbers

Author : vmrajas
Explanation :
If n is represented in its canonical prime factorization as

n = p1^k1 * p2^k2 * ... * pj^kj,

the number of numbers having their number of positive divisors equal to n, with their highest prime factor ≤ the m'th prime number, is given by (C(a, b) denotes the binomial coefficient):

ans = C(m + k1 - 1, k1) * C(m + k2 - 1, k2) * ... * C(m + kj - 1, kj)

Explanation of the formula: the number of divisors of a number

x = p1^α1 * p2^α2 * ... * pi^αi

is given by

y = (α1 + 1) * (α2 + 1) * ... * (αi + 1),

where y denotes the number of divisors of x. We need to find the number of x such that their corresponding y = n.
The problem can be seen as: for every kz, where z goes from 1 to j, we need to distribute kz balls into m boxes (one box per allowed prime). This reduces the problem to the formula stated above.

Computation of ans: given the formula stated above, we can compute it as follows.
The prime factors of all numbers from 1 to 2 * 10^6 can be computed efficiently using a sieve. Now, if we know the prime factorization of a given number n, the answer can be computed in O(number of prime factors of n). As this number is very low (~15), 10^5 queries can be easily handled.

Code : http://ideone.com/w202fV

### 100889E - Everyone wants Khaleesi

You are given a Directed Acyclic Graph (DAG) with no multi-edges or self-loops. Two players play a game in which player 1 starts at node 1 and has to reach node N, taking one existing directed edge of the graph per move. Player 2, in one move, can remove any existing edge of the graph. The game ends when player 1 has reached node N or can't make a move. Report who wins if both play optimally.

This is a troll problem! Initially it might look really complex, but the basic idea is that if player 1 is at distance 1 from his destination and it is player 2's move, player 2 can just remove the edge that player 1 could use to reach node N in the next move. The only way player 2 loses is when nodes 1 and N are directly connected, in which case player 1 wins on the first move.

Complexity: O(1)
Code: http://ideone.com/oiXQTR

### 100889F - Flipping Rectangles

The problem asks for the maximum area of the destination rectangle that can be covered by the source rectangle by flipping it at most k times.

Let us formalize a good notation for the problem, so that it becomes easier to solve.
Let us translate the two rectangles so that the bottom-left corner of the source rectangle becomes the origin. As translation does not affect the dimensions of the rectangles, the answer is not affected by this operation. Let a denote the side length of the source rectangle along the X-Axis and b its side length along the Y-Axis. Let (c,d) denote the bottom-left corner of the destination rectangle in the new coordinate frame, and let e, f be its side lengths along the X-Axis and Y-Axis respectively. Therefore the corners of the destination rectangle are (c,d), (c + e,d), (c,d + f), (c + e,d + f).

Let Rect(i,j) denote the rectangle of length a along X-Axis and length b along Y-Axis such that its left bottom corner is at point (i * a,j * b). Therefore, the original position of the source rectangle can be referred to as Rect(0,0). Now, the given operation of flipping can be represented as follows

• Flipping it right Rect(i,j) goes to Rect(i + 1,j)
• Flipping it left Rect(i,j) goes to Rect(i - 1,j)
• Flipping it above Rect(i,j) goes to Rect(i,j + 1)
• Flipping it below Rect(i,j) goes to Rect(i,j - 1)

Now, the minimum number of flips required to take the source rectangle Rect(0,0) to Rect(i,j) is |i|+|j|. Let Rect(x,y) denote a rectangle whose intersection area with the destination rectangle is non-zero and for which |x|+|y| is as small as possible.

Key Observation: at least one of the rectangles Rect(x,y), Rect(x + 1,y), Rect(x,y + 1), Rect(x + 1,y + 1), Rect(x - 1,y), Rect(x - 1,y - 1), Rect(x,y - 1), Rect(x + 1,y - 1), Rect(x - 1,y + 1) covers the maximum intersection area with the destination rectangle.

Therefore, if we are able to find such x, y, we can find the answer by taking the maximum intersection area of the destination rectangle with the above 9 rectangles. To honour the constraint of at most k flips, we also need to check that |i|+|j| ≤ k while considering Rect(i,j) for the maximum intersection area.

How do we find the required x and y?

These can be found using the observation that Rect(x,y) will contain at least one of the corners of the destination rectangle or one of the points (c,0), (c + e,0), (0,d), (0,d + f). Rect(x,y) will not contain a corner of the destination rectangle only when the endpoints of the destination rectangle are on opposite sides of the coordinate axes.

For a given point, the i, j such that Rect(i,j) contains the point can be computed easily using integer division of the coordinates of the point by the sides of the source rectangle.

For more details http://ideone.com/u97AYD

### 100889G

Since XOR and AND are bit-wise operators, we can process x, y and z bit by bit separately. Let's try to construct the x, y and z which give the optimal answer, from MSB to LSB. Since they are bounded by 10^18, we need to consider fewer than 63 bits.

Let's say x has bit i, y has bit j and z has bit k at some position pos; the contribution of this position to the answer is

((i^j) + (j&k) + (k^i)) * 2^pos

So if there were no constraints on the values of x, y and z, we would move from MSB to LSB and simply choose the combination of bits of x, y and z at every bit position that results in the maximum contribution to the answer. But how to deal with the constraints on them?

A thing to note is:
Let's say we are constructing a number p bit by bit from MSB to LSB. If at any position we make a choice such that p becomes greater than a number q at that bit position, then no matter what we choose for the remaining bit positions of p, it can never become ≤ q.
The same holds for keeping p ≥ q.

So while constructing x, y and z from MSB to LSB, the choice of values we have for each of them at every bit position depends on the previous choices and on the bounds L, R, A and B.
We solve the problem using dynamic programming.

Define a state as (pos, f1, f2, g1, g2). The following code explains the transitions between states.

//f1 denotes whether x has already become > L
//f2, g1, g2 denote whether x, y, z have become < R, < A, < B respectively
//pos is the bit position currently being looked at
lld solve(int pos, bool f1, bool f2, bool g1, bool g2)
{
    if(pos < 0) return 0; //base case
    lld ret = dp[pos][f1][f2][g1][g2];
    if(ret != -1) return ret;
    ret = 0;
    int d1 = 0, d2 = 1; //min, max values the bit can take at this position for x
    if(bit(pos, L) == 1 && !f1) d1 = 1; //adjust min value at pos for x
    if(bit(pos, R) == 0 && !f2) d2 = 0; //adjust max value at pos for x
    int D1 = 0, D2 = 1; //min, max values the bit can take at this position for y
    if(bit(pos, A) == 0 && !g1) D2 = 0; //adjust max value at pos for y
    int DD1 = 0, DD2 = 1; //min, max values the bit can take at this position for z
    if(bit(pos, B) == 0 && !g2) DD2 = 0; //adjust max value at pos for z

    //try all possible combinations for this position for x, y, z
    for(int i = d1; i <= d2; i++)
    for(int j = D1; j <= D2; j++)
    for(int k = DD1; k <= DD2; k++)
    {
        //calculate the contribution this bit can make to the final answer
        lld tmp = (1LL*(i^j) + 1LL*(j&k) + 1LL*(k^i)) << pos;

        bool ff1 = f1, ff2 = f2;
        bool gg1 = g1, gg2 = g2;
        //adjust new flags (f1, f2, g1, g2) depending on the current combination
        if(i > bit(pos, L)) ff1 = 1;
        if(i < bit(pos, R)) ff2 = 1;
        if(j < bit(pos, A)) gg1 = 1;
        if(k < bit(pos, B)) gg2 = 1;
        //update the answer with the result for this combination
        ret = max(ret, tmp + solve(pos-1, ff1, ff2, gg1, gg2));
    }

    //memoize and return the result for this state
    return (dp[pos][f1][f2][g1][g2] = ret);
}


### 100889H - Hitting Points

You are given the N points of a polygon pi in counter-clockwise order. In each query, for a different l, k, we place the polygon with the origin at point pl and the edge from pl to p((l + 1) mod N) on the x-axis, and we put an infinite rod on the line x = k. When this rod falls in the counter-clockwise direction, we are supposed to output the indices of the vertices it touches first.

Most interesting observations in geometry come via visualisation and drawing. Consider the following image.

I have projected the edges Vi to Vi+1 (for i ≥ 1) onto the new x-axis (defined by the base). If the vertical rod lies in region (P0, P1), it'll touch vertex V1; if it's in region (P1, P2), it'll touch vertex V2; and so on. So we notice that each edge, taken in counter-clockwise direction, creates a continuous region such that if k lies in that region, the rod will touch the first point of that edge. If you think about the reason, it's quite intuitive. We just have to find the region in which k lies, which can be done via binary search on the edges of the polygon in counter-clockwise direction. We need to consider only those edges whose intersection with the new x-axis lies to the right side of the polygon (i.e. the positive x-axis); so basically we are searching over the right hull of the convex polygon.

To find the distance on the new x-axis we take the intersection with the base line (i.e. the new x-axis) and measure the distance from the base points. Also, we can easily take care of the rod touching a whole edge, which happens when k is at the boundary of one of the regions of an edge.

Complexity: O(log N) per query.
Code: http://ideone.com/w6DGKA

### 100889I - iChandu

Given a string S of length n consisting of only lower case letters, you have to replace a character at exactly 1 position with the character '$' such that the number of distinct palindromes in the final string is maximum.

What we need to do is, for every position i, count the number of:

• new palindromes that would be created, if the substitution was made at index i.
• old palindromes that would be destroyed, if the substitution was made at index i.

For the first case, since the new character is distinct from all the earlier ones it can only create odd length palindromes with '$' at the middle.

Hence the number of palindromes created when a substitution is made at index i, will be equal to the number of odd length palindromes that already exist with index i as the center. This can be found for every i in O(n) using Manacher's algorithm.

For the second case, what we need to note is that the maximum number of distinct palindromes in a string of length n is n, because at every index i, only 1 distinct palindrome can be ending there.

Now consider one of the distinct palindromes present in string S, let it be P.

P might occur in S more than once. After the substitution, P will no longer exist if and only if the index of the substitution is present in each and every occurrence of P. For index i to be present in each occurrence of P, i must be present in the overlap of every occurrence of P. The overlap of every occurrence of P is equal to the overlap of the first and last occurrence of P. So if the substitution is made in the overlap of the first and last occurrence of P, P will be 'destroyed'.

Hence we need to find all distinct palindromes and for each one we need to find the first and last occurrence of it. This can be done with a palindromic tree.

Construct a palindromic tree for S, this will provide the first occurrence of every distinct palindrome. Similarly construct a palindromic tree reverse of S, this will provide the last occurrence of every distinct palindrome. Now for every distinct palindrome preform a range update of +1 for the range of overlap of the first and last occurrence of that palindrome, because a substitution in that range will destroy that palindrome.

Now finally check which index i has the maximum value of (number of palindromes created - number of palindromes destroyed).

Final complexity: O(n)

### 100889J

Given a weighted undirected tree, handle 2 types of queries:

• Update weight of edge between two nodes u, v.
• For a subset of nodes(special nodes), find for every special node the distance to the special node farthest from it.

First let’s look at a simpler version of the problem. Consider there were no updates and all nodes of the tree were special nodes. Let the tree be rooted at node 1. For every node u, the farthest node from it will either be in its subtree or not. With one DFS, we can compute for every node u, what is the maximum distance to any node in its subtree.

-> in_subtree[u] = max(in_subtree[v] + E(u,v)) for all ‘v’, where ‘v’ is a child of ‘u’ and E(u,v) is the weight of the edge between ‘u’ & ‘v’.

With another DFS, we can compute for every node u, what is the maximum distance to any node NOT in its subtree.

-> out_of_subtree[u] = max(in_subtree[v] + E(u,p) + E(v,p)) for all ‘v’, where ‘v’ is sibling of ‘u’ and ‘p’ is the parent of both.
-> out_of_subtree[u] = max(out_of_subtree[u], out_of_subtree[p] + E(u,p)), where ‘p’ is the parent of ‘u’.

Finally for every node u, answer will be the maximum of in_subtree[u] and out_of_subtree[u].

Coming to a slightly more advanced form of the problem, consider there are no updates but now special nodes can be any subset of nodes. When we have a query with K special nodes, we can construct an auxiliary tree of those K nodes and solve for that new tree with the DP used above.

To construct an auxiliary tree, we need all K special nodes and the LCAs of every pair of special nodes. But the set of LCAs of every pair of special nodes equals the set of LCAs of every adjacent pair when the special nodes are sorted in preorder of the tree (arrival time of nodes in DFS). Now for the weights of the edges in the auxiliary tree: the weight of the edge between nodes u and v equals the distance between u and v in the original tree. This can be handled if the distance between every node u and the root of the original tree is precomputed.

Now, to handle updates as well. Note that for every node, the only information that we require and that can change after updates is the distance to the root. If the edge between node u and its parent p is updated from w1 to w2, the only changes are to the distance to root of every node v in u's subtree, and the change is the same for all of them, i.e. for every node v in the subtree of u:
dist_to_root[v] = dist_to_root[v] - w1 + w2

Now if we maintain the array dist_to_root[] sorted in preorder of nodes, any subtree is a contiguous range. Thus the edge updates can be handled with a structure that supports:

• Range Update(add w2 - w1 to range of subtree of u)
• Point query(query for dist_to_root[u])

This can be handled using a Segment Tree or Fenwick Tree.

Complexity:

• Update
• Query

Solution: http://ideone.com/UyhQDX

There are other approaches as well, such as:

• Updates can be handled using HLD (Longer code and slower).
• Centroid Decomposition; Solution: http://ideone.com/2YnsZc
• Breaking the queries apart into those with small K and those with large K. Apply O(K^2) brute force for the first case and an O(N) DFS on the entire tree for the latter.

### 100889K - Kill Bridges

Setter : tanujkhattar
Tester : itisalways42 , karanaggarwal
Problem Statement :
Given a connected undirected unweighted graph with N nodes and M edges, and the ability to add at most K edges to the graph, tell the maximum number of bridges that can be removed from the graph. There will be Q queries, each specifying a different K.
Solution :
First of all, shrink the 2-edge-connected components of the given graph and build its bridge tree. The problem now reduces to:
Given a tree, we need to select 2*K leaves of the tree and match them into pairs such that if a pair of leaves (u,v) is matched, we color the path from u to v in the tree. We want to maximize the number of colored edges after the paths joining all K pairs have been colored. Note that in the process an edge might be colored multiple times, but it is counted only once in the final answer.
Before we solve the above problem, let us look at the following two lemmas :
Lemma-1 : In the optimal solution , the subgraph formed by the colored edges would be connected.
Proof: If it is not so, let (u,v) be an edge such that it is not colored and there exist colored subgraphs on both sides of this edge in the tree. Let (x,y) be a pair of matched leaves on the left side of the edge and let (p,q) be a pair of matched leaves on the right side. Without loss of generality, we can instead match (x,p) and (y,q) so that whatever edges were colored earlier remain colored and the edge (u,v) is now also colored. The same argument extends inductively, so in the optimal solution the subgraph formed is always connected.
Lemma-2 : The center of the tree would always be a part of the optimal connected subgraph.
Proof: The proof of this is left as an exercise to the reader (Hint: for K=1, we would always join the endpoints of the diameter).

Using the above two lemmas, we can move on to the implementation of the greedy approach as follows:
1. Root the tree at its center.
2. Break the tree into chains using an HLD-type idea, where at any node the special child is the one whose subtree contains the longest chain with one end point at that node.
3. For every leaf, define contribution of that leaf as the length of chain that ends at that leaf. The contribution can be found out as follows :

//far[] is assumed to identify the special child: far[u]==far[w] when w lies on
//the longest chain through u (e.g. far[u] = depth of the deepest node in u's subtree)
//cntbn collects the contribution of every leaf (PB = push_back)
void dfs(int u, int p, int len = 0)
{
    bool isLeaf = true;
    for (auto w : tree[u])
    {
        if (w == p) continue;
        isLeaf = false;
        if (far[u] == far[w]) dfs(w, u, len + 1); //special child
        else dfs(w, u, 1);                        //start a new chain
    }
    if (isLeaf) cntbn.PB(len);
}


Sort the leaves in descending order of their contributions; for a query K, pick the first 2*K leaves from the sorted order (or all of them, if there are fewer than 2*K leaves) and add their contributions to get the final answer.

### 100889L - Lazy Mayor

Problem Idea : vmrajas
Author : itisalways42
Tester : hulkblaster
Problem Statement:
Given an undirected weighted graph of n nodes and m edges and a non-negative integer k, you need to find, for each pair of nodes (u, v), the shortest distance between u and v using at most k edges; call it dist[u][v]. We have to compute dist[u][v] and the count of paths from u to v with at most k edges whose total weight equals dist[u][v].

Difficulty: Medium

Explanation:
The problem can be solved using dynamic programming. Let dp[x][y][z] be the shortest distance between node x and node y using at most z edges, and let cnt[x][y][z] denote the number of such shortest paths. The recurrence to compute dp[x][y][z] can be given as :

dp[x][y][z] = min(dp[x][i][z/2] + dp[i][y][z-z/2],dp[x][y][z]) where i goes from 1 to n.


The base cases :

dp[i][j][0] = 0    , if i==j
dp[i][j][0] = INF  , otherwise

cnt[i][j][0] = 1   , if i==j
cnt[i][j][0] = 0   , otherwise

dp[i][j][1] = g[i][j]



where g[i][j] is the length of the edge between node i and j (INF if it doesn't exist, so that a missing edge is never used).

Using the above recurrence, the 2-D matrix representing the shortest distance between any pair of nodes using at most K edges can be computed using matrix exponentiation, with (min, +) in place of the usual (+, *). The cnt table can be computed alongside the same exponentiation.

Have a look at the code below for implementation details.

Code : http://ideone.com/9r975m

• +41

By darkshadows, 4 years ago, ,

Hello,

As you might remember, CodeCraft 2016 was held online on the university's platform on 12th Feb 2016 (find announcement, results). CodeCraft is a 5-hour ACM-ICPC style contest for individuals, organised every year by IIIT Hyderabad. We don't have the resources to keep our platform online for the whole year, so we think a GYM contest is a good option to create an archive and help people train better.

I invite you to the GYM replay of the contest, to be held on 21st Feb 2016 14:00 MSK in GYM. There will be 12 problems and the duration will be 5.5 hours (as in the original contest).

There are some really interesting problems in the contest; in general, all participants should be able to find something that captivates their interest. For a better gradient, the level is a little higher than last year's contest. Also, you'll be able to compete with ghost participants (i.e. participants who actually took part in the original contest).

Detailed editorials will be uploaded once the GYM contest is over.

Good luck and have fun!

UPD: Editorial

• +184

By darkshadows, 4 years ago, ,

Hi,

We present to you our Annual Programming Contest, CodeCraft. It will be an ACM-ICPC style 5-hour contest, but for individuals.

There will be problems of varying difficulty, so beginners and advanced programmers can savour the problems equally (hopefully :P). Come treat yourself!

Contest starts 12th February, 2016 20:30 IST
Register at: http://felicity.iiit.ac.in/register

We've also arranged a practice contest for you, which contains problems from past COCI and 1 interactive problem.
This practice contest will warm you up and hopefully help you get acquainted with the contest platform.
Time: 31st January, 2016 16:00 IST

Here is last year's scoreboard and also, you can see problems in GYM contest!

Good Luck!

UPD: Contest has ended! Out of 12 problems, we saw at least one AC on 10.
Top 5 on the final leaderboard, all solving 8, are:
1. nfssdq
2. mkirsche
3. geniucos
4. Sumeet.Varma
5. Ming.Shen

You can see the image here showing top 10:

We'll be releasing editorials soon after a replay on GYM. We would really like to see all problems solved, even if it's in GYM :)
For now, we are opening submissions on our own domjudge in a while.

• +161

By darkshadows, 4 years ago, ,

As you might have noticed teams get a separate page of their own, like this one. Now, what would be really cool is to have a contest history available for the team on the same page. It would be really awesome to track the progress and recent contests of your competitor teams in ACM ICPC.

• +130

By darkshadows, history, 4 years ago, ,

Today I experienced something that I don't know is intended or not, but it surely doesn't seem good from a UX point of view. If you edit a blog post and save it as a draft, or preview it and don't post it again, it gets unpublished: it's only visible to you in your saved drafts and not to everyone.

• +15

By darkshadows, history, 4 years ago, ,

A certain question on Quora and a junior asking about DP on Trees are what inspired this post. It's been a long time since I wrote a tutorial, so it's a welcome break from the monotony of events.

Pre-requisites:

• Will to read this post thoroughly. :)
• Also, you should know basic dynamic programming, the optimal substructure property and memoisation.
• Trees(basic DFS, subtree definition, children etc.)

Dynamic Programming(DP) is a technique to solve problems by breaking them down into overlapping sub-problems which follow the optimal substructure. We all know of various problems using DP like subset sum, knapsack, coin change etc. We can also use DP on trees to solve some specific problems.

We define functions for the nodes of the tree, which we calculate recursively based on the values of a node's children. One of the states in our DP is usually a node i, denoting that we are solving for the subtree of node i.

As we do examples, things will get clear for you.

## Problem 1

==============

The first problem we solve is as follows: given a tree T of N nodes, where each node i has Ci coins attached to it, you have to choose a subset of nodes such that no two adjacent nodes (i.e. nodes connected directly by an edge) are chosen and the sum of coins attached to the chosen nodes is maximum.

This problem is quite similar to the 1-D array problem where we are given an array A1, A2, ..., AN; we can't choose adjacent elements and we have to maximise the sum of chosen elements. Remember how we define our state dp(i) as the answer for the prefix A1, A2, ..., Ai, and our recurrence as dp(i) = max(dp(i-1), dp(i-2) + Ai) (two cases: skip Ai or choose Ai, respectively).

Now, unlike the array problem where our state covers the first i elements, in the case of trees one of our states usually denotes which subtree we are solving for. For defining subtrees we need to root the tree first. Say we root the tree at node 1 and define DP(V) as the answer for the subtree of node V; then our final answer is DP(1).

Now, similar to array problem, we have to make a decision about including node V in our subset or not. If we include node V, we can't include any of its children(say v1, v2, ..., vn), but we can include any grand child of V. If we don't include V, we can include any child of V.

So, we can write a recursion by taking the maximum of the two cases:
DP(V) = max( CV + sum of DP(g) over all grandchildren g of V, sum of DP(v) over all children v of V ).

As we see in most DP problems, multiple formulations can give the optimal answer. Here, from an implementation point of view, there is an easier formulation. We define two DPs, dp1(V) and dp2(V), denoting the maximum coins possible by choosing nodes from the subtree of node V if we include node V, and if we don't include node V, respectively. Our final answer is the maximum of the two cases, i.e. max(dp1(1), dp2(1)).

And defining the recursion is even easier in this case: dp1(V) = CV + sum of dp2(v) over children v (since we cannot include any of the children), and dp2(V) = sum of max(dp1(v), dp2(v)) over children v (since we can include a child now, but we can also choose not to include it, hence the max of both cases).

About implementation now. Notice that the answer for a node depends on the answers of its children, so we write a recursive DFS which first calls itself for all children and then calculates the answer for the current node.

#include <bits/stdc++.h>
using namespace std;

const int N = 100005;

//adjacency list
//adj[i] contains all neighbors of i
vector<int> adj[N];

//C[i] is the number of coins attached to node i
int C[N];

//functions as defined above
int dp1[N], dp2[N];

//pV is parent of node V
void dfs(int V, int pV){

    //for storing sum of dp2 and sum of max(dp1, dp2) over all children of V
    int sum1 = 0, sum2 = 0;

    //traverse over all children
    for(auto v : adj[V]){
        if(v == pV) continue;
        dfs(v, V);
        sum1 += dp2[v];
        sum2 += max(dp1[v], dp2[v]);
    }

    dp1[V] = C[V] + sum1;
    dp2[V] = sum2;
}

int main(){
    int n;
    cin >> n;

    for(int i = 1; i <= n; i++) cin >> C[i];

    for(int i = 1; i < n; i++){
        int u, v;
        cin >> u >> v;
        adj[u].push_back(v);
        adj[v].push_back(u);
    }

    dfs(1, 0);
    int ans = max(dp1[1], dp2[1]);
    cout << ans << endl;
}


Complexity is O(N).

## Problem 2:

==============

Given a tree T of N nodes, calculate the longest path between any two nodes (also known as the diameter of the tree).

First, let's root the tree at node 1. Now, observe that for any node x we can consider two kinds of paths:

• The longest path that starts at node x and goes into its subtree (denoted by blue lines in the image). Let f(x) denote this path length.
• The longest path that starts in the subtree of x, passes through x, and ends in the subtree of x (denoted by the red line in the image). Let g(x) denote this path length.

If, over all nodes x, we take the maximum of f(x) and g(x), we get the diameter. But first, we need to see how to calculate the maximum path length in both cases.

Now, let's say a node V has n children v1, v2, ..., vn. We have defined f(i) as the length of the longest path that starts at node i and ends in the subtree of i. We can recursively define f(V) = 1 + max(f(v1), f(v2), ..., f(vn)), because we look at the longest path possible from each child of V and take the largest. So, optimal substructure is being followed here. Note that this is quite similar to ordinary DP, except that now we define functions for nodes and recurse on the values of children. This is what DP on trees is.

Now, for case 2, a path can originate from the subtree of a child vi, pass through node V, and end in the subtree of another child vj, where i ≠ j. Since we want this path length to be maximum, we choose the two children vi and vj with the largest values of f; that is, g(V) = 2 + (sum of the two largest values among f(v1), ..., f(vn)).

For implementing this, we note that for calculating f(V), we need f to be calculated for all children of V. So, we do a DFS and we calculate these values on the go. See this implementation for details.

If you can get the two maximum elements in O(n), where n is the number of children, then the total complexity will be O(N), since we do this for all nodes in the tree.

//adjacency list
//adj[i] contains all neighbors of i
vector<int> adj[N];

//functions as defined above
int f[N], g[N], diameter;

//pV is parent of node V
void dfs(int V, int pV){
    //this vector will store f for all children of V
    vector<int> fValues;

    //traverse over all children
    for(auto v : adj[V]){
        if(v == pV) continue;
        dfs(v, V);
        fValues.push_back(f[v]);
    }

    //sort to get top two values
    //you can also get top two values without sorting(think about it) in O(n)
    //current complexity is n log n
    sort(fValues.begin(), fValues.end());

    //update f for current node
    f[V] = 1;
    if(not fValues.empty()) f[V] += fValues.back();

    if(fValues.size() >= 2)
        g[V] = 2 + fValues.back() + fValues[fValues.size()-2];

    diameter = max(diameter, max(f[V], g[V]));
}


Now that we know the basics, let's move on to some slightly more advanced problems.

## Problem 3:

==============

Given a tree T of N nodes and an integer K, find number of different sub trees of size less than or equal to K.

First, what is a sub tree of a tree? It's a subset of nodes of the original tree such that the subset is connected. Note that a sub tree is different from our usual definition of subtree.

Always think by rooting the tree. So, say the tree is rooted at node 1. For now, define S(V) as the subtree rooted at node V; this subtree definition is different from the one in the problem, since S(V) includes all nodes in the subtree of V.

Now, lets try to count total number of sub trees of a tree first. Then, we'll try to use same logic for solving original problem.
Let's define f(V) as the number of sub trees of S(V) which include node V, i.e. we choose V as the root of the sub tree we are forming. Now, for each child u of node V we have two options: include it in the sub tree or not. If we include node u, there are f(u) ways to do so; otherwise there is only one way (we can't choose any other nodes from S(u), since the sub tree we are forming would become disconnected).

So, if node V has children v1, v2, ..., vn, then we can say that f(V) = (1 + f(v1)) * (1 + f(v2)) * ... * (1 + f(vn)). Now, is our solution complete? f(1) counts the number of sub trees of T which are rooted at 1. What about sub trees which are not rooted at 1? We define one more function g(V) as the number of sub trees of S(V) which are not rooted at V, with the recursion g(V) = sum over children v of (f(v) + g(v)), i.e. for each child we add the number of sub trees rooted at that child plus the number not rooted at that child.

Our final answer is f(1) + g(1).

Now, onto our original problem. We are trying to count sub trees of T whose size doesn't exceed K, so we need one more state in our DP at each node. Let's define f(V, k) as the number of sub trees with k nodes rooted at V. Now we can define a recurrence relation for this. Say node V has children v1, v2, ..., vn. To form a sub tree with k + 1 nodes rooted at V, say S(vi) contributes ai nodes. Of course, a1 + a2 + ... + an must equal k, since we are forming a sub tree of size k + 1 (one node is contributed by V). We should realise that f(V, k + 1) is the sum, over all possible distinct sequences a1, a2, ..., an with that total, of the product f(v1, a1) * f(v2, a2) * ... * f(vn, an), where we take f(vi, 0) = 1 (meaning vi is not included).

Now, to do this computation at node V, we form one more DP, dp_buffer(i, j): the number of ways to choose a total of j nodes from the subtrees of v1, v2, ..., vi. The recurrence is dp_buffer(i, j) = sum over k of dp_buffer(i - 1, j - k) * f(vi, k), i.e. we iterate over k assuming that the subtree of vi contributes k nodes (again with f(vi, 0) = 1).

So, finally f(V, j + 1) = dp_buffer(n, j).
And our final solution is the sum of f(V, k) over all nodes V and all k ≤ K.

So, in terms of pseudo code we write:

f[N][K+1]

void rec(cur_node){
    //dp_buffer[j] = ways to choose j nodes from subtrees of children processed so far
    dp_buffer[0..K] = {0}
    dp_buffer[0] = 1

    for(all children v of cur_node){
        rec(v)

        dp_buffer1[0..K] = {0}
        for i=0 to K:
            for j=0 to K-i:
                //j = 0 means v's subtree contributes nothing; f[v][0] is taken as 1
                dp_buffer1[i+j] += dp_buffer[i] * (j == 0 ? 1 : f[v][j])
        dp_buffer = dp_buffer1
    }

    for j=0 to K-1:
        f[cur_node][j+1] = dp_buffer[j]   //one extra node for cur_node itself
}


Now, let's analyse the complexity. At each node with n children we do O(n * K^2) work, so the total complexity is O(N * K^2).

Another similar problem: we are given a tree of N nodes with a weight assigned to each node, along with a number K. The aim is to delete enough nodes from the tree so that it is left with precisely K leaves. The cost of such a deletion is the sum of the weights of the deleted nodes. What is the minimum cost to reduce the tree to one with K leaves? Now, think about the states of our DP and derive a recurrence. Before proceeding to the solution, give it at least a good think. Find the solution here.

## Problem 4:

==============

Given a tree T, where each node i has cost Ci. Steve starts at root node, and navigates to one node that he hasn't visited yet at random. Steve will stop once there are no unvisited nodes. Such a path takes total time equal to sum of costs of all nodes visited. What node should be assigned as root such that expected total time is minimised?

First, let's say the tree is rooted at node 1, and let's calculate the total expected time for the tree so formed. We define f(V) as the expected total time if we start at node V and visit nodes in the subtree of V. If V has children v1, v2, ..., vn, we can say that f(V) = CV + (f(v1) + f(v2) + ... + f(vn)) / n, since with equal probability we move down to each of the children.

Now, we have to find a node v such that if we root the tree at v, then f(v) is minimised. f(v) depends on where we root the tree; brute force over all roots is O(N^2), and we need something faster to pass.

We'll try to iterate over all nodes V and quickly calculate the value of f(V) if the tree is rooted at V. We already know the contribution of the children of V; what we are missing is the contribution of V's parent's side. So we define one more quantity g(V): the expected total time at the parent of V, if we don't consider the contribution of the subtree of V.

Now, if I want to root the whole tree at V, the total expected time at this node will be CV + (g(V) + f(v1) + ... + f(vn)) / (n + 1), since V now has n + 1 neighbours to descend to: its n children plus its former parent (for the original root there is no parent term). To see that this is correct, look at the definition of g(V).

Let's see how we can calculate g(V). Keep referring to the image below this paragraph while reading. Consider a node p which has parent p' and children v1, v2, ..., vn. Now let's find g(vi). g(vi) means: root the tree on vi's side and don't consider the subtree of vi when calculating the expected time at p. We can say that g(vi) = Cp + (g(p) + sum over j ≠ i of f(vj)) / n, since g(p) gives us the expected total time at p' without considering the subtree of p. We divide by n because, seen from vi, p has n children, i.e. p', v1, ..., vi-1, vi+1, ..., vn.

We can calculate both functions f and g recursively in O(N).

## Problem 5:

==============

Another very interesting problem goes as follows: given two rooted trees T1 and T2, you want to make T1 structurally identical to T2. To do that, you can insert leaves one by one into either tree. You have to find the minimum number of insertions required.

Let's say both trees are rooted at node 1. Now, say the root of T1 has children u1, u2, ..., un and the root of T2 has children v1, v2, ..., vm. We are going to create a mapping between the nodes in u and the nodes in v, i.e. we make the subtree of some node ui exactly the same as the subtree of vj, for some i, j, by adding the required nodes. If n ≠ m, the unmatched subtrees have to be added in full.

Now, how do we decide which node in T1 is mapped to which in T2? Again, we use DP. We define dp(i, j) as the minimum number of insertions required to make the subtree of node i in T1 identical to the subtree of node j in T2. We need to come up with a recurrence.

Let's say node i has children u1, u2, ..., un and node j has children v1, v2, ..., vm. Now, if we map node up to node vq, the cost is dp(up, vq). To all nodes in u we have to assign nodes from v such that the total cost is minimised. This is the assignment problem: there is a cost matrix C, where C(p, q) denotes the cost if task p is assigned to person q, and our aim is to assign one task to each person so that the total cost is minimised. This can be done in O(N^3) if there are N tasks (e.g. with the Hungarian algorithm). Here C(p, q) = dp(up, vq), and by solving this assignment problem we get the value of dp(i, j).

Total complexity of this solution is O(N3), where N is maximum number of nodes in T1 and T2.

That's the end of it. Now time for some personal advice :) The more you practice DP/DP on trees, the more comfortable you'll be. So get on your practice shoes and run over the obstacles! There are lots of DP on trees problems which you can try to solve; if you don't get the solution, look at the tutorial/editorial, and if you still don't get it, ask on various platforms.

Problems for practice:
1 2 3 4 5 6 7 (Solution for 7) 8 9 10 11 12 13

• +251

By darkshadows, history, 4 years ago, ,

An undirected complete bipartite graph G = (U, V, E) is given, where |U| = |V|. For all vertices u in U and v in V, the weight of the edge between u and v is W(u, v) (assume positive).

We have to find a perfect matching such that the bitwise OR of the weights of the edges in the matching is maximised.

I was trying to think of a polynomial time solution, but couldn't succeed.

• +18

By darkshadows, 4 years ago, ,

Hello CodeForces Community!

I found Centroid Decomposition a very interesting and powerful technique when I first came across it. My friend tanujkhattar has written a well-designed and insightful blog on it. I am also including some other resources that can easily be found on CF itself.

• +58

By darkshadows, 4 years ago, ,

Hello Codeforces community!

I am glad to announce that HackerRank 101 Hack 26th edition will be held on 23rd June 2015 at 16:30 UTC. You can sign up for the contest here.
The problem statements will be available in English and Chinese. Though what's priceless is solving interesting problems and the thrill of competition, prizes make the competition fierce: the top 10 performers will get a HackerRank T-shirt.

Problems have been set by me (darkshadows) and tested by wanbo. The contest will consist of 5 problems with a variable scoring distribution. I have tried my best to set a balanced problem set; hopefully everyone will enjoy it. You'll also be able to enjoy the detailed editorials written by the setter.

Good luck and have fun!

UPDATE1 Scoring is 30-40-50-100-120.
Cumulative time taken to solve the challenge is used as a tie-breaker.

UPDATE2 Editorials for all problems have been enabled.

• +123

By darkshadows, 5 years ago, ,

This Valentine's Day, do what you love most: yes, a programming contest.

HackerEarth is running a short 2-hour contest on 14th February. There will be 5 algorithmic problems of varying difficulty. The contest is open to all.

Tester: akashdeep
Editorialist: phantom11

Prizes: Top 3 winners to get HackerEarth T-shirts!

Happy Coding!

• +5

By darkshadows, 5 years ago, ,

## 100589A - Queries on the Tree

Given a rooted tree (at node 1) of N nodes, you have to handle a total of Q operations of two types:
1 U L X: increase the coins of all nodes at a distance of L from the root by X.
2 X: report the sum of all coins in the subtree rooted at X.
N <= 10^5, Q <= 10^4

Medium-Hard

#### Explanation:

This approach uses a very specific technique of maintaining a buffer of update queries and updating the whole tree once buffer size exceeds a constant limit(we maintain this limit as sqrt(M)).

buffer = []
q = input
while q--:
    query = input()
    if query == update:
        buffer.push(query)
    else:
        //say query is "2 X"
        prevans = ans(X)   //precomputed answer, valid before the buffered updates
        for i = 0 to buffer.size()-1:
            //add to prevans the effect of buffer[i]
            //let's say buffer[i] = "1 U L Y"
            //if buffer[i] affects K nodes in the subtree of X, add K*Y to prevans
            //so, we need to count how many nodes of subtree X
            //are at level L; we'll show later how to handle this
        print prevans

    if buffer.size() > sqrtM:
        //update the whole tree and precalculate answers of all nodes
        buffer = []



So, we need to look at two things:

1. Given X and L count how many nodes in subtree of X are at a level L(this level is measured from root).
For this we first DFS transform our tree such that all nodes in a subtree lie in contiguous range after new mapping(according to DFS). And then we maintain for each level an array which stores the new indexes of all nodes that are at that level.
For example, a vector level[L] stores all new indexes of nodes that are at level L. These vectors can easily be made in O(N) by a DFS.
Now, for a query “Given X and L count how many nodes in subtree of X are at a level L”, we know the range of new indexes of all nodes in subtree of X(say the range is S to R), we just have to count number of values in vector level[L] that lie in range [S,R], which can be done in O(log N) worst case.

2. Given at-most sqrt(M) queries of type “L X”,(which denote update X at all nodes at level L), we have to update the whole tree.
We traverse over all buffered queries and mark them in a count array (count[i] contains the total coins to be added at level i). Then, while doing a DFS of the tree, we can easily update the current sum at each node; one more DFS precalculates the answer for every node.

So, we don't update our tree more than sqrt(M) times, and each full update takes O(N). Also, each print query scans at most sqrt(M) buffered updates, each needing an O(log N) count. So a loose upper bound on the total complexity is O(N*sqrt(M) + M*sqrt(M)*log N).

Many problems can be solved by this specific technique of making buffers of queries. For example this problem: 447E - DZY Loves Fibonacci Numbers

## 100589B - Count Palindromes

Find the number of palindromes which appear between two given instants of time when seen on a digital clock. Up to 10^5 queries.

Easy

#### Explanation:

The number of distinct strings which one can see is 86400 (from 00:00:00 to 23:59:59). Answering each query naively would take O(86400 * 6) in the worst case, and there are 10^5 queries.
So, we initially pre-process all possible times.
If the time is ab:cd:ef, then converting it to seconds gives s = ab*3600 + cd*60 + ef,
as 1 hour = 3600 seconds and 1 minute = 60 seconds.

X[s] denotes if s is a palindrome or not.
Now we maintain a prefix sum array so as to answer all queries in O(1).

MX = 86400 + 1
for i in xrange(0, MX):
    mem[i] = 0
mem[0] = 1
for i in xrange(1, MX):
    val = conv(i)
    mem[i] = mem[i - 1]
    if Palin(val):
        mem[i] += 1



Let mem[i] denote the number of palindromes from 0 till i.
conv is a function which converts a number x to string of length 6. Appends leading zeroes till the size is 6. Palin is a function which returns True or False depending on whether the given string is a palindrome or not.

Now for each query, we convert the given time to seconds. Let the converted times into seconds be a, b. The number of palindromes between a and b are
mem[b] : if a = 0
mem[b]-mem[a-1] : else

## 100589C - Find P'th Number

Given N and P; either all even or all odd numbers have been removed from the set [1, 2, 3, ..., N]. Find the Pth smallest remaining number.
N <= 10^9

Cakewalk

#### Quick Explanation:

The answer is either 2*P (if all odd numbers were removed) or 2*P - 1 (if all even numbers were removed; one less than in the other case because instead of 2 we report 1, instead of 4 we report 3, and so on).

## 100589D - Desolation of Smaug

Note : Since there are N nodes and N edges in the graph, the graph is a tree plus one extra edge, which creates exactly one cycle (a tree with N nodes has N-1 edges; wherever we add one more edge, we always get a single cycle). Imagine the given graph as a cycle with a tree hanging down at each cycle node. This special property of a graph with N nodes and N edges must be exploited to answer the queries in sublinear time.

#### Explanation

Once again we have many interesting things to observe in this question . Lets start by analyzing each part one by one.

First, What does the question ask us to do?
The problem statement is short and precise. Frodo needs to escape from Smaug. Given the initial positions of both, the destination of Frodo, and the velocities of both, print YES or NO depending on whether Frodo can escape. Sounds easy: a simple Dijkstra once from Frodo and once from Smaug would get us the answer. But then comes the interesting part: Q queries with Q up to 10^5. Running Dijkstra for each query is O(Q * N log N), which would undoubtedly fetch us TLE.

So what to do?
Seeing the constraints, it is clear that we need to do some linear pre-processing on the given graph such that we can answer each query in sublinear time.
But before directly jumping to the implementation and seeing how to achieve the task of answering the queries in sublinear time, let us first do a theoretical analysis of the problem to completely understand what we need to do and then we shall focus on how to do it.

#### Theoretical Analysis

As explained in the note above, we need to imagine the given graph as a cycle with a tree (possibly with only a single node, the root) hanging down at each node in the cycle. (See diagram below.)

Now, let’s analyze the various possible cases based on the parameters which vary in each query, i.e. Vf, Vs, St, Ds, S.

Case 1: Vs >= Vf
We just need to check who reaches the destination (Ds) first, Frodo or Smaug, because if Smaug can catch Frodo at some node on the way to the destination, he can definitely catch him at the destination. Hence

if(Dist(St,Ds)*Vs < Dist(S,Ds)*Vf)  //i.e. Dist(St,Ds)/Vf < Dist(S,Ds)/Vs
    print "YES";
else
    print "NO";


Case 2: Vs < Vf
In this case, if Frodo escapes or gains a lead over Smaug at any node, Smaug cannot catch him and Frodo is gone forever. To better understand this, let us visualise the various cases that arise based on the different locations of St, Ds, S.

Sub-Case 1 : All three of St, Ds, S lie in the same tree and S lies within the subtree rooted at LCA(St, Ds).

As shown in the diagram, if Frodo escapes Smaug at node 7, i.e. Frodo reaches node 7 before Smaug, he can safely reach Ds; otherwise he will surely get caught by Smaug at node 7.
In general, we compare the reaching times of Frodo and Smaug at min(LCA(St,S), LCA(S,Ds)),
where min() denotes the lower of the two (the one with greater depth below the root), because in each case one of the two is always equal to LCA(St,Ds).

Sub-Case 2 : Both St and Ds lie in the same tree and S lies outside the subtree rooted at LCA(St, Ds).

We just need to compare the reaching times of Frodo and Smaug at LCA(St,Ds) :

if(reachingTime[Frodo][LCA(St,Ds)] < reachingTime[Smaug][LCA(St,Ds)])
    printf "YES"
else
    printf "NO"


Sub-Case 3 : One of St or Ds lies in the same tree as S, and the remaining one lies in another tree.

We compare the reaching times of Frodo and Smaug at LCA(St,S) or LCA(Ds,S), depending on which of St or Ds is in the same tree as S. If Frodo can reach this node before Smaug, he can reach the destination; otherwise he will surely get caught.

Sub-Case 4 : All three of St, Ds, S lie in different trees.

In this case,
First of all, check whether Frodo can reach the root of his tree before Smaug. If he can't, then no matter what, he will surely get caught at the root of his tree, because he definitely needs to pass through that node in order to reach the destination.

If Smaug cannot catch Frodo at the root of Frodo's tree, we check whether Frodo can cross the root of the tree of S before Smaug reaches there; if he can, he shall surely escape.

If Smaug can catch Frodo at the root of the tree of S, we compare the time taken by Frodo to reach the root of the tree of Ds along the cycle path not involving the root of the tree of S, with the shortest time taken by Smaug to reach the root of the tree of Ds.
If the time taken by Frodo is less than that of Smaug, Frodo escapes; otherwise he gets caught.

#### How to Implement?

Well, if you're still alive and reading this editorial, congratulations, because we've reached the final part. :P

In the above Theoretical analysis, we made use of two functions :
LCA(A,B) : Returns us the Lowest Common Ancestor of two nodes A and B in the same tree.
Dist(A,B) : Returns us the distance between any two nodes A and B in the whole graph (not necessarily in the same subtree).
To handle the Sub-Case 4 under Case 2, we would need to maintain the prefix sum of the path along the cycle because for the cycle, we need to analyze both the clockwise and anticlockwise paths for Frodo depending on conditions mentioned above.

We’ll need shortest distance between any two nodes in the graph in logarithmic time.
Shortest distance between any two nodes in the graph:
To answer these queries, first recall that the given graph has a single cycle, with trees hanging from the cycle nodes.
So, first we detect the cycle*, and then for each tree hanging at a cycle node we build the LCA DP table so that we can handle LCA queries in logarithmic time.

Also, for each such tree we pre-calculate that distance from root node(ie. cycle node) to tree node.
We can do this by a linear order DFS. Let’s call such array dist_root.

*For detecting cycle two methods are:
1. Store the degree of all nodes and keep removing from [1,2,...,N] every node whose degree is 1 (i.e. keep removing leaf nodes). If the degree of any neighbour reduces to 1, remove it too. All remaining nodes lie on the cycle. We can do this easily with a queue, in a way similar to BFS.

2. Do a DFS, and whenever we detect a back edge, all vertices currently on the recursion stack are cycle nodes.

Now, for the distance between nodes u and v, there are two cases:
u and v are in the same tree (a tree that hangs from a cycle node):
the shortest distance is dist_root[u] + dist_root[v] - 2*dist_root[lca(u,v)].

u and v are in different trees (trees hanging from two distinct cycle nodes):
the shortest distance is dist_root[u] + dist_root[v] + min-distance(root[u], root[v]).
So, we need the minimum distance between any two nodes on the cycle. First, map all cycle nodes (say a total of K) to the values 1 to K. Then pre-calculate a prefix sum array of the array A, where A[i] stores the distance between node 1 and node i when travelling clockwise.

Now, for the minimum distance between a and b (two cycle nodes), let's say:

K = total length of cycle
# assuming mapping[a] > mapping[b]
M = prefix_sum[a] - prefix_sum[b]
min_distance = min(M, K - M)
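As a small runnable sketch (an assumption for illustration: `w[i]` is the clockwise edge length from cycle node i to node (i+1) mod K, after the cycle nodes have been mapped to 0..K-1):

```python
def build_cycle_prefix(w):
    """prefix[i] = clockwise distance from cycle node 0 to cycle node i.

    w[i] is the clockwise edge length from cycle node i to node (i+1) % K.
    Returns (prefix, total cycle length).
    """
    K = len(w)
    prefix = [0] * K
    for i in range(1, K):
        prefix[i] = prefix[i - 1] + w[i - 1]
    return prefix, prefix[K - 1] + w[K - 1]

def cycle_dist(prefix, total, a, b):
    """Shortest distance between cycle nodes a and b (mapped indices)."""
    if prefix[a] < prefix[b]:
        a, b = b, a          # ensure mapping[a] > mapping[b]
    M = prefix[a] - prefix[b]
    return min(M, total - M)
```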


So, we can handle each minimum-distance query in O(log N) worst case.

## 100589F - Count Ways

Given a grid of size N x M, where K given cells are blocked. Find number of ways to reach (N, M) from (1, 1) if you can move right or down.
N, M <= 10^5, K <= 10^3

Medium-Hard

#### Explanation:

First, a basic formula: the number of ways to reach (x2, y2) from (x1, y1), given x2 >= x1 and y2 >= y1:
Let x = x2 - x1 and y = y2 - y1.
Number of ways F(x1, y1, x2, y2) = (x+y)!/(x!y!), where n! denotes n factorial.

Now, an interesting observation: if we block a cell at (i, j), all cells with both coordinates respectively greater than or equal to i and j are affected by it (i.e. the number of ways to reach them changes).

Let's say our set S = {all blocked cells + cell (N, M)}. We sort S in increasing order of x coordinate, then increasing y. We also maintain an array ans, where ans[i] denotes the number of ways to reach the cell at index i in sorted(S).
Initially ans[i] = F(1, 1, S[i].x, S[i].y).

Now, we traverse sorted(S) in increasing order, updating the number of ways for all the cells that each blocked cell affects.

for i = 0 to S.size()-2:
    for j = i+1 to S.size()-1:
        if S[j].x < S[i].x or S[j].y < S[i].y:  # cell j not affected
            continue

        # ans[i] stores the current number of ways to reach cell i;
        # all paths from (1,1) to cell j through cell i are now blocked,
        # so we subtract (ways to reach i) * (paths from i to j)
        ans[j] -= ans[i] * F(S[i].x, S[i].y, S[j].x, S[j].y)

print ans[S.size()-1]


While making a decrement at ans[j] due to blocked cell i, we ignore that there may be other blocked cells between them (note that we are multiplying by F(S[i].x, S[i].y, S[j].x, S[j].y)). We can ignore them because ans[i] currently stores only the valid paths reaching (S[i].x, S[i].y), and every path that passes through that cell is now blocked. So we subtract each possible combination from ans[j].
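The whole approach fits in a short Python sketch. This illustration uses exact big-integer arithmetic via `math.comb`; a contest solution would typically do the same computation modulo a prime with precomputed factorials:

```python
from math import comb

def count_ways(N, M, blocked):
    """Monotone (right/down) paths from (1,1) to (N,M) avoiding blocked cells.

    blocked: list of (x, y) pairs, 1-based.
    """
    def F(x1, y1, x2, y2):
        # paths from (x1,y1) to (x2,y2) moving only right/down;
        # returns 0 when (x2,y2) is unreachable, which also covers
        # the "cell j not affected" case from the pseudocode
        dx, dy = x2 - x1, y2 - y1
        if dx < 0 or dy < 0:
            return 0
        return comb(dx + dy, dx)

    S = sorted(blocked) + [(N, M)]          # sort by x, then y
    ans = [F(1, 1, x, y) for (x, y) in S]   # unrestricted path counts
    for i in range(len(S) - 1):
        for j in range(i + 1, len(S)):
            # subtract every path to cell j passing through blocked cell i
            ans[j] -= ans[i] * F(S[i][0], S[i][1], S[j][0], S[j][1])
    return ans[-1]
```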

Complexity: O(K^2).

## 100589G - Count Permutations

Given N (<= 15) and K (<= N), count the permutations of [1, 2, ..., N] in which no two adjacent elements differ by more than K.

Easy-Medium

#### Quick Explanation:

Maintain a DP over (mask, last_used), where mask denotes the set of elements already used and last_used denotes the value that was placed just before the current index.

#### Explanation:

A naive solution would enumerate all N! permutations. We can use dynamic programming to reduce the complexity; but considering that each number from 1 to N is used only once in a permutation, we can't keep a generalized DP state like "number of permutations of length i ending in j", because it doesn't store which numbers we have already used.

So, we use bitmasks. A bitmask is an N-bit binary number expressed as a decimal. If the i'th bit of mask is set, we assume that number i has already been used somewhere and is not available anymore.

Now, let’s try to form our solution. For placing a number at a certain position, we should know which number was placed before it(because we’ll compare their absolute difference). So in our state we keep two things: mask and last_used, where last_used denotes the number that was used just before current position.

So, let’s denote by DP(mask, last_used) the number of permutations of numbers marked in mask and ending in last_used.

Let’s form recurrences now.

DP(mask, last_used):
    ret = 0
    for all i unmarked in mask:
        # we try to place number i at the current position
        if abs(i - last_used) <= K:
            # last_used is now i;
            # we pass the new mask with the i'th bit set
            ret += DP(mask with i'th bit set, i)
    return ret



So, we use DP with memoization. So our complexity is number of states multiplied with transition cost.
So the total worst-case complexity is O(N^2 * 2^N).
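With memoization this is only a few lines of Python; here `lru_cache` plays the role of the memo table, and we additionally try every number as the (unconstrained) first element:

```python
from functools import lru_cache

def count_permutations(N, K):
    """Permutations of 1..N in which adjacent elements differ by at most K."""
    full = (1 << N) - 1

    @lru_cache(maxsize=None)
    def dp(mask, last):
        # mask: set of already-placed numbers (bit i-1 <-> number i);
        # last: the number placed just before the current position
        if mask == full:
            return 1
        ret = 0
        for i in range(1, N + 1):
            if not (mask >> (i - 1)) & 1 and abs(i - last) <= K:
                ret += dp(mask | (1 << (i - 1)), i)
        return ret

    # the first element has no adjacency constraint, so try every number there
    return sum(dp(1 << (i - 1), i) for i in range(1, N + 1))
```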

## 100589H - Count Subarrays

Given an array A[1], A[2], ..., A[N] and K, count how many subarrays have inversion count greater than or equal to K.
N <= 10^5
K <= N*(N-1)/2

Medium

#### Quick Explanation:

Maintain two pointers (pt1 and pt2) and increase pt2 while the inversion count of subarray [pt1, pt2] is less than K. Then all subarrays [pt1, i] with i >= pt2 have inversion count >= K.
Now, we increase pt1 until the inversion count of subarray [pt1, pt2] drops below K, counting valid subarrays along the way.
And we repeat the above process until we reach end of our array.
Inversion count can be handled easily using BIT.
See detailed explanation for more clarity.

#### Explanation:

The most important property to observe is that if there are P inversions in subarray [S, E], the number of inversions in subarray [S, E+1] is greater than or equal to P, because the value at index E+1 may contribute some inversions. How many inversions exactly does it contribute? Exactly the number of elements greater than A[E+1] in the range [S, E].

So, for each index i, if we get the smallest j(let’s call such a value threshold(i)), such that inversions in subarray [i, j] is greater than or equal to K, we know that all subarrays [i, k] are valid(where k >= j). So, our aim is to get this smallest j for each i.

Let's say we are at index i (pt1) initially and we have found the respective j (pt2) for it.
We know that InvCount[i, j] >= K and
InvCount[i, j-1] < K

If I increase i by 1 now, I know that inversion count is going to reduce. After reduction if inversion count is still greater than or equal to K, we know that threshold for i+1 remains same because we know InvCount[i, j-1] < K, therefore InvCount[i+1, j-1] < K is also valid.

Now we keep increasing pt1 while the inversion count remains at least K. Once we reach an index where it drops below K, we need to find that index's threshold, so we start increasing pt2 until we reach the threshold of pt1.

Now, we need to handle two operations. Let’s say we have right now inversion count for range [L, R], we need inversion count for [L, R+1] quickly and similarly for [L+1, R].

So, we use a BIT here. Let's say we have marked all values in range [L, R] in the BIT. To get the inversion count for [L, R+1], we need to count how many values in range [L, R] are greater than A[R+1], which is easily found with the BIT (since all elements of [L, R] are marked in it). Once we have the new inversion count, we also mark the value A[R+1] in the BIT.

Similarly, for [L+1, R] we count how many values in the BIT are less than A[L]; this count is subtracted from the current inversion count. Also, we unmark A[L] from the BIT.

But since the values A[i] can be up to 10^9 and we only need to compare them (the exact values don't matter), we use coordinate compression (i.e. map the large values to small distinct values). After this we can easily mark any A[i] in the BIT.

Pseudo code:

BIT:
    getsum(i)      # returns sum of elements with index <= i
    update(i, v)   # adds v at index i

A = input
rank[i] = rank of A[i] in sorted(A)   # coordinate compression

pt1 = 0
pt2 = 0        # current subarray is A[pt1 .. pt2-1]
cur_inv = 0    # inversions of the current subarray
ans = 0

while pt1 < N:
    # we increase pt2 while current inversions < K
    while cur_inv < K and pt2 < N:
        # increment = number of marked values greater than A[pt2]
        cur_inv += (pt2 - pt1) - BIT.getsum(rank[pt2])
        BIT.update(rank[pt2], 1)
        pt2 = pt2 + 1

    if cur_inv < K:
        break      # no valid subarray starts at pt1 or later

    # all subarrays [pt1, x] are valid, where x >= pt2 - 1
    ans += N - pt2 + 1

    # remove A[pt1] from the BIT, reflect the change in cur_inv,
    # and advance pt1
    BIT.update(rank[pt1], -1)
    cur_inv -= BIT.getsum(rank[pt1] - 1)
    pt1 = pt1 + 1

Another way would be to use segtree/trie for queries like "find number of elements in range L to R which are less than K".
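The two-pointer + BIT idea can be turned into a runnable sketch (0-based indices; `pt2` points one past the last element of the current window):

```python
class BIT:
    """Fenwick tree over ranks 1..n."""
    def __init__(self, n):
        self.n = n
        self.t = [0] * (n + 1)

    def update(self, i, v):          # add v at index i
        while i <= self.n:
            self.t[i] += v
            i += i & -i

    def getsum(self, i):             # sum over indices 1..i
        s = 0
        while i > 0:
            s += self.t[i]
            i -= i & -i
        return s

def count_subarrays(A, K):
    """Number of subarrays whose inversion count is >= K."""
    n = len(A)
    # coordinate compression: rank in 1..n
    rank = {v: i + 1 for i, v in enumerate(sorted(A))}
    r = [rank[v] for v in A]
    bit = BIT(n)
    ans = cur = 0
    pt2 = 0                          # elements A[pt1 .. pt2-1] are marked
    for pt1 in range(n):
        while cur < K and pt2 < n:
            # new inversions = marked values greater than A[pt2]
            cur += (pt2 - pt1) - bit.getsum(r[pt2])
            bit.update(r[pt2], 1)
            pt2 += 1
        if cur < K:
            break                    # no valid subarray starts here or later
        ans += n - pt2 + 1           # subarrays [pt1, x] for x >= pt2-1
        bit.update(r[pt1], -1)       # drop A[pt1] from the window ...
        cur -= bit.getsum(r[pt1] - 1)  # ... and the inversions it caused
    return ans
```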

## 100589I - Laughing Out Loud

Given a string S, find the number of length-3 subsequences equal to "LOL". |S| <= 10^5

Difficulty : Easy

Explanation : Subsequence is a sequence that can be derived from another sequence by deleting some elements without changing the order of the remaining elements.

Assume the initial string is S, with n = len(S). Take a boolean string X of length n:

if we want to include i'th element of S as part of subsequence : X[i] = 1
else: X[i] = 0


The number of different strings X equals the number of subsequences.
So we have an n-bit string of 0's and 1's.
Let S = "ABA"; then n = 3 and all the n-bit strings are as follows:

000	'' (empty string)
001	'A'
010	'B'
011	'BA'
100	'A'
101	'AA'
110	'AB'
111	'ABA'


So, for a sequence of length n, 2^n subsequences are possible. Finding all the subsequences would time out.

Since we only have to find the subsequences of length 3 ("LOL"), a naive code checks S[i] = 'L', S[j] = 'O', S[k] = 'L' subject to the condition 1 <= i < j < k <= n.

ans = 0
for i in xrange(1, n+1):
    if S[i] == 'L':
        for j in xrange(i+1, n+1):
            if S[j] == 'O':
                for k in xrange(j+1, n+1):   #1
                    if S[k] == 'L':          #2
                        ans += 1             #3


The complexity of the above code is O(n^3), which would time out.

suf[n+1] = 0
for i in xrange(n, 0, -1):
    suf[i] = suf[i+1]
    if S[i] == 'L':
        suf[i] += 1


Suppose we maintain a suffix-sum array suf, where suf[i] is the number of L's from index i to n. We just have to find S[i] == 'L', S[j] == 'O', and we know the number of valid k with j < k <= n is suf[j+1]. To get the answer we replace #1, #2, #3 by ans += suf[j+1]. This reduces the complexity to O(n^2), which would also time out.

pre[0] = 0
for i in xrange(1, n+1):
    pre[i] = pre[i-1]
    if S[i] == 'L':
        pre[i] += 1


Similarly, maintain a prefix-sum array pre, where pre[i] is the number of L's from index 1 to i. If S[j] == 'O', then pre[j-1] tells us the number of L's before j and suf[j+1] tells us the number of L's after j. The answer is the summation of pre[j-1] * suf[j+1] over all j such that S[j] == 'O'.
The complexity is O(n) time with O(n) space, which fits the time limit.

If we consider a string of length 10^5 in which all characters are L except the middle character, which is O, the answer is 5*10^4 * 5*10^4 = 2.5*10^9, which does not fit in a signed 32-bit integer data type but fits in a 64-bit one. In general (roughly equal thirds of L, O, L) the answer can be of the order 10^13.
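The whole O(n) solution fits in a few lines of Python (0-based indices, using a suffix count of L's and a running prefix count):

```python
def count_lol(s):
    """Number of subsequences of s equal to 'LOL'."""
    n = len(s)
    # suf[i] = number of 'L' characters at index i or later
    suf = [0] * (n + 1)
    for i in range(n - 1, -1, -1):
        suf[i] = suf[i + 1] + (s[i] == 'L')
    ans = 0
    left = 0                    # running prefix count of 'L'
    for j, c in enumerate(s):
        if c == 'O':
            ans += left * suf[j + 1]
        elif c == 'L':
            left += 1
    return ans
```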

## 100589J - Three Sorted Arrays

The points to note in the problem are:
1. The arrays are sorted.
2. We need to find all triplets such that i ≤ j ≤ k and A[i] ≤ B[j] ≤ C[k].

In the worst case the number of triplets is of the order O(P*Q*R), hence a brute-force solution won't work.

There are two approaches to solve this problem.

#### Binary Search:

Reduce the problem to finding pairs j ≤ k with B[j] ≤ C[k]. This can easily be done using binary search: for each B[j], find the index of the last element of C (among positions k ≥ j) that is smaller than B[j], say index L; all values at indices greater than L are ≥ B[j], hence the number of valid k is R - L (assuming 1-based indexing).
The computation time is O(Q log R) (a binary search for each element).

The problem then extends to finding i ≤ j ≤ k with A[i] ≤ B[j] ≤ C[k]. We have already counted the pairs j ≤ k with B[j] ≤ C[k].
For every A[i] we need the index of the last element of B that is smaller than A[i] (say M); all values at indices greater than M are ≥ A[i], but each such j must also be weighted by its count of valid k, hence a postfix-sum array over the pair counts j ≤ k with B[j] ≤ C[k] is used. The example will help in better understanding.
The overall complexity of the algorithm is O(P log Q + Q log R).

Example:

A = [1, 5, 6]
B = [3, 7, 8]
C = [2, 4, 9]


First find the count of all j <= k such that B[j] ≤ C[k] and store it in an array.

t1 = [2,1,1]


convert it into postfix-sum array

t1_new = [4,2,1]


Now for every A[i] we need to find out the index of B[j](say x) which is just greater than A[i] and x >= i. The corresponding values would look like.

t2 = [1, 2, 3] (Array has 1-based indexing).

1. The value just greater than 1 is 3(at index 1) in B[].
2. The value just greater than 5 is 7(at index 2) in B[].
3. The value just greater than 6 is 8(at index 3) in B[].
For each A[i], every index of t1 greater than or equal to t2[i] contributes to the final answer (hence the postfix-sum array has to be maintained).
The final answer is:

ans = (t1[1]+t1[2]+t1[3]) + (t1[2]+t1[3]) + t1[3]
    = t1_new[1] + t1_new[2] + t1_new[3]
    = 4 + 2 + 1 = 7

#### 2-pointer search:

2-pointer search just reduces the complexity of finding, for each B[j], the index of C[k] just smaller than B[j] from O(n log n) to O(n). The approach is very intuitive and directly yields the number of values in C larger than B[j] with j <= k.
Considering an example:

B = [1, 4, 5]
C = [2, 3, 6]


Let there be two pointers fixed at j=Q and k=R.
Move the pointers such that whenever:
1. B[j] ≤ C[k].
keep on decrementing k till B[j]>C[k]. The difference of R and present k (=R-k) is the number of values which are larger than B[j] in C[k] such that j <= k.
2. B[j] > C[k]
keep on decrementing j till B[j] ≤ C[k].

The same approach will be used to calculate i <= j <= k such that A[i] ≤ B[j] ≤ C[k] using a postfix sum array as described in method 1.

Complexity: O(P+Q+R)
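Either method can be sketched compactly; here is the binary-search variant using the standard `bisect` module (0-based indices):

```python
import bisect

def count_triplets(A, B, C):
    """Count triples i <= j <= k with A[i] <= B[j] <= C[k] (0-based)."""
    Q, R = len(B), len(C)
    # t1[j] = number of k >= j with B[j] <= C[k]
    t1 = [R - bisect.bisect_left(C, B[j], lo=j) for j in range(Q)]
    # postfix (suffix) sums of t1
    suf = [0] * (Q + 1)
    for j in range(Q - 1, -1, -1):
        suf[j] = suf[j + 1] + t1[j]
    ans = 0
    for i, a in enumerate(A):
        if i >= Q:
            break                    # no j >= i exists in B
        # first j >= i with B[j] >= A[i]; every later j also qualifies
        pos = bisect.bisect_left(B, a, lo=i)
        ans += suf[pos]
    return ans
```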

Solution: http://goo.gl/N4qy6Z


## 100589K - Police Catching Thief

#### Basic Idea

Apply multi-source Dijkstra twice: first with the K policemen's initial positions as sources, and second with the Q special power-up nodes as sources. This gives, for every node in the graph, the shortest time in which the police can reach it, with or without using the power-up.
Apply a third Dijkstra with the thief's initial position as the source, and just check whether at any node the police can catch the thief (reach the node at the same time as the thief or earlier).

#### Note

The above approach works for the more general question where Vt and Vp need not equal 1; in this question Vt = Vp = 1 to keep things simpler. For the thief, we can just check who reaches the final destination first, the police or the thief, because if the police can catch the thief on the way, they can certainly reach the destination no later than the thief, since Vp >= Vt always holds in this case.
(The proof of this, and why it won't work if Vp < Vt, is left to the reader.)

#### Explanation

There are many interesting things to note in this question. Let's analyze it from the basics.

First, what does the question ask us to do? The police need to catch the thief, and the power-up makes the process interesting. Although it's specifically mentioned that only a single power-up is available for use, the thief doesn't know which policeman will avail which power-up to increase his speed, and we need to print the shortest time in which the thief can escape regardless of whatever path the police might take. Therefore, when asked if the thief can escape or not, we can safely assume that the police will take the best possible combination of special node and power-up to catch the thief. That is, for each node, we need to know the shortest time in which the police can reach that node with or without taking the power-up. This assumption doesn't violate the fact that we have only a single power-up, because suppose for some node "x"

reachingTimeOfPolice[x] <= reachingTimeOfThief[x]


Then, in such a case, the police can catch the thief at node x. This means that one of the K different policemen can reach node x before the thief and can catch him there, with or without using the power-up, depending on the position of node x. Therefore, the thief cannot be sure of reaching his destination along this path, because a single policeman using at most a single power-up can catch him.
In short, we need to find the shortest time in which the police (i.e. any of the K different policemen) can reach any node (or only the destination in this case; see the NOTE) with or without using the power-up, whichever takes the shorter time. Once we have this information, we simply check whether the police reach any node (or only the destination in this case) before the thief; if so, the thief cannot take that path. Checking this for every node, we just print the shortest time taken by the thief to reach the destination along a path where no policeman can catch him at any node.

Second, how do we do it?
A single multi-source Dijkstra taking the K policemen's initial positions as sources gives us the shortest time in which the police (i.e. any one of the K policemen) can reach any node without taking the power-up. Next, apply another multi-source Dijkstra using the Q special nodes as sources, and this time make sure to seed:

startTimeOfSpecialNode[i] = policeTimeWithoutPowerUp[SpecialNode[i]];


After the second Dijkstra,

policeTime[i] = min(policeTimeWithoutPowerUp[i], policeTimeWithPowerUp[i]);


Next, apply a third Dijkstra for thief and add a condition that :

if(thiefTime[i]<policeTime[i])
Only Then explore its neighbours
else
thiefTime[i] = INF


The above condition is for the general case. For this specific question just check

if(thiefTime[Destination]<policeTime[Destination])
Print thiefTime[Destination]
else
print -1
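A sketch of the multi-source Dijkstra building block; the graph representation (`adj[u]` as a list of `(v, w)` pairs) and the seeding convention are assumptions for illustration:

```python
import heapq

def multi_source_dijkstra(n, adj, start):
    """start[v] = initial time at node v; float('inf') if v is not a source."""
    dist = list(start)
    pq = [(d, v) for v, d in enumerate(dist) if d < float('inf')]
    heapq.heapify(pq)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue                 # stale queue entry
        for v, w in adj[u]:
            nd = d + w
            if nd < dist[v]:
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist
```

The first run seeds `start` with 0 at every policeman's node. The second run seeds each special node i with `policeTimeWithoutPowerUp[SpecialNode[i]]` (using the boosted edge weights in the general case); `policeTime` is then the element-wise minimum of the two results, and the thief's run relaxes out of a node u only when `thiefTime[u] < policeTime[u]`.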



By darkshadows, 5 years ago, ,

Although two hours late,
I invite you to the Project Euler-styled mathematical contest "Gordian Knot" by IIIT-Hyderabad.
See timings here: http://bit.ly/1x9GA9P
Total duration of contest: 48hrs.
Register yourself at: http://felicity.iiit.ac.in/register/
Prizes worth INR 12k/-

UPD:


By darkshadows, 5 years ago, ,

Hi,

I am happy to extend to you the invitation to participate in the annual programming contest of IIIT Hyderabad. It's an ACM-style programming contest, except that it's for individuals, not teams.

You will have a chance to devour some really nice (hopefully!) problems. Considering the wide range of difficulty levels, everyone should find some interesting/challenging problems to solve.

It begins on 25th Jan, 2015 at 1400 IST.
To see time in other time zones: http://bit.ly/1u8sdBI
Register here: http://felicity.iiit.ac.in/register/
Also, there will be some attractive prizes for winners(to be announced soon).

By the way, here's the leaderboard for last year's contest:

Happy Coding!

Contributors(setters and testers):
1nikhil9 aka_007 alok123t ashu1461 darkshadows darkscale karanaggarwal pulkitg10 sak_agg sherlock_holms tanmaysahay94 tanujkhattar thunderking viv001

UPD1: Prizes worth 20,000 INR to be won.
UPD2: Here is the contest link: http://felicity.iiit.ac.in/codecraft/public/
UPD3:
Like all good things, this one must come to an end as well. Yet another successful CodeCraft comes to an end.
With 2982 submissions, this was one of the most prolific editions of CodeCraft!
tourist wins CodeCraft '15 with 10 correct solutions.
anta comes a close 2nd, also with 10 accepted solutions.
natsugiri comes 3rd with 9 submissions.
Here's the link to the scoreboard:

Just like last year, 1 problem went unsolved. But, we'll be accepting solutions for the next 48 hours, in case you want to try out the problems at your pace.
Editorials will be released in a day or two.
Hope you enjoyed the event. See you next year!

UPD: Here are the detailed editorials by jury: http://codeforces.com/blog/entry/16099
For practice use gym: http://codeforces.com/gym/100589