.o.'s blog

By .o., history, 3 years ago, In English

While preparing problems with subtasks using Polygon, it's super annoying that we cannot set test groups using test scripts.

As a workaround, I wrote a simple script.

Test group of each test = the last token of the test generation command


function setPolygonTestGroupAutomatically() {
    const mapping = {};
    $("#testsTable tbody tr code").each((_, v) => {
        const vid = v.parentNode.parentNode.id;
        if (!vid.startsWith('test_')) return;
        const idx = parseInt(vid.substring('test_'.length));
        const tokens = v.innerText.split(" ");           // Test generation command
        const testGroup = tokens[tokens.length - 1];     // Last token of the test generation command
        if (!mapping[testGroup]) {
            mapping[testGroup] = [];
        }
        mapping[testGroup].push(idx);
    });
    const keyToIndices = Object.entries(mapping);
    function go(idx) {
        if (idx == keyToIndices.length) {
            console.log("Changed everything!");
            return;
        }
        const [groupName, indices] = keyToIndices[idx];
        $.post("/data/tests", {
            action: "setMultipleTestGroup",
            session: $('#session').text(),
            testset: Object.fromEntries(new URLSearchParams($("#add-test").attr('href')))['testset'],
            requestString: indices.map(idx => `testIndex=${idx}`).join('&'),
            groupName,
            ccid: $('meta[name="ccid"]').attr('content')
        }).error((err) => console.error(err))
          .then((data) => {
              console.log(`Done`, groupName, indices, data);
              setTimeout(go, 1000, idx + 1);
          });
    }
    go(0);
}
setPolygonTestGroupAutomatically();

Paste the script above into the developer console on the tests page.

UPD: Added a 1-second delay between API calls. It seems that if we call the API multiple times simultaneously, Polygon will automatically change the '$' placeholders of the test script into non-consecutive numbers, making the whole working copy inaccessible.


By .o., history, 5 years ago, In English

Last week, there was a preliminary ICPC contest in Korea, whose results are used to select the teams from various universities that advance to the Seoul Regional (the regional contest in Korea).

This is the problemset: http://icpckorea.org/2018/preliminary/problems.pdf

Most problems were solved, at least algorithmically, by people during or after the contest. However, no one, including some top-tier people in our country, could solve problem C.

In the contest, problem C was solved, but only because the test data was weak enough to allow solutions with complexity O(n² / k) :p Since the description is short enough, I will not describe the problem here; just click the link to see the task.

I am really curious about the solution of this problem. Could I get some hints or entire solutions for this task? Thanks in advance :)


By .o., history, 6 years ago, In English

Hello CodeForces Community!

The May Long Challenge 2018 is at your doorstep to treat you with a programming fixture for 10 exciting days. You surely wouldn't want to miss this one! Joining me on the problem setting panel are:

I hope you will enjoy solving them. Please give your feedback on the problem set in the comments below, after the contest.

Contest Details:

Time: 4th May 2018 (1500 hrs) to 14th May 2018 (1500 hrs). (Indian Standard Time — +5:30 GMT) — Check your timezone.

Contest link: https://www.codechef.com/MAY18

Registration: You just need a CodeChef handle to participate. All those who are interested and do not have a CodeChef handle are requested to register in order to participate.

Prizes: Top 10 global and top 20 Indian winners get 300 Laddus each, with which the winners can claim cool CodeChef goodies. Know more here: https://www.codechef.com/laddu. (For those who have not yet received their previous winnings, please send an email to [email protected]). First to solve each problem individually: 100 laddus (For problems common to both Divisions, only one user will get laddus for that problem).


A request to problem setters out there: Recently we have been short of problem proposals in general, but especially of hard problems. If you want to set interesting problems, please feel free to send your proposals. More details at https://www.codechef.com/problemsetting/setting.


By .o., 9 years ago, In English

First of all, sorry for the late editorial. I was too exhausted after the round finished and went straight to bed, so I didn't write the editorial right away.

There are many pictures in this editorial, to cover up my poor English skills! If you have spotted any errors or have questions, feel free to ask in the comments. Unfortunately, the authors only know English, so we can't answer comments written in Russian.

500A - New Year Transportation

Let's assume that I am at cell i (1 ≤ i ≤ n). If i ≤ n - 1, I have two choices: stay at cell i, or go to cell (i + ai). If i = t, we are at the target cell, so I should print "YES" and terminate. If i > t, I can never reach cell t because the portals cannot be used backwards, so I should print "NO" and terminate. Otherwise, I must go to cell (i + ai).

Using this method, I visit each cell at most once, and there are n cells, so the problem can be solved in linear time.
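
For reference, a minimal C++ sketch of this simulation (my own code, not the implementation linked below; the I/O format is assumed from the statement):

#include <bits/stdc++.h>
using namespace std;

int main() {
    int n, t;
    scanf("%d %d", &n, &t);
    vector<int> a(n + 1);
    for (int i = 1; i <= n - 1; i++) scanf("%d", &a[i]);

    int cur = 1;
    while (cur < t) cur += a[cur];   // follow the portals; each cell is visited at most once

    puts(cur == t ? "YES" : "NO");
    return 0;
}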

What is test 13?

There are 9 pretests in this problem. There is no pretest in which n = t holds. Many people didn't handle this case, and got hacked or got Wrong Answer on test 13.

  • Time: O(n)
  • Memory: O(n) (maybe O(1))
  • Implementation: 9335683

500B - New Year Permutation

It seems that many contestants were confused by the statement, so let me clarify it first. Given a permutation p1, p2, ..., pn and an n × n binary matrix A, the problem asks us to find the lexicographically smallest permutation which can be obtained by repeatedly swapping two distinct elements pi and pj for which Ai, j = 1 holds. (From the statement, permutation A is prettier than permutation B if and only if A is lexicographically less than B.)

Let's think of matrix A as the adjacency matrix of an undirected, unweighted graph.

If two vertices i and j are in the same component (in other words, connected by some edges), the values of pi and pj can be swapped, using a method similar to bubble sort. Look at the picture below for easy understanding.

Because any two distinct vertices in the same component can have their values swapped, the values within a component can be sorted arbitrarily. In conclusion, we can solve the problem by the following procedure.

  • Find all the connected components.
  • For each component, sort the elements in the component.
  • Print the resulting permutation.

The constraints are quite small, so one can run any traversal algorithm (DFS/BFS/...) as many times as needed.
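
For concreteness, a rough C++ sketch of this procedure (my own code, not the linked implementation; it assumes n ≤ 300 and that the matrix is given as n rows of '0'/'1' characters):

#include <bits/stdc++.h>
using namespace std;

int main() {
    int n;
    scanf("%d", &n);
    vector<int> p(n);
    for (auto &x : p) scanf("%d", &x);
    vector<string> A(n);
    for (auto &row : A) { char buf[310]; scanf("%s", buf); row = buf; }

    // Label connected components with a BFS.
    vector<int> comp(n, -1);
    for (int i = 0, c = 0; i < n; i++) {
        if (comp[i] != -1) continue;
        queue<int> q; q.push(i); comp[i] = c;
        while (!q.empty()) {
            int u = q.front(); q.pop();
            for (int v = 0; v < n; v++)
                if (A[u][v] == '1' && comp[v] == -1) { comp[v] = c; q.push(v); }
        }
        c++;
    }

    // For each component, sort its values and write them back in increasing position order.
    map<int, vector<int>> pos, val;
    for (int i = 0; i < n; i++) { pos[comp[i]].push_back(i); val[comp[i]].push_back(p[i]); }
    for (auto &kv : val) sort(kv.second.begin(), kv.second.end());
    for (auto &kv : pos)
        for (size_t k = 0; k < kv.second.size(); k++)
            p[kv.second[k]] = val[kv.first][k];

    for (int i = 0; i < n; i++) printf("%d ", p[i]);
    printf("\n");
    return 0;
}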

  • Time: O(n²) — because of the graph traversal. O(n³) is okay because of the small constraints.
  • Memory: O(n²)
  • Implementation: 9335689 (.o.)

What is test 15? There are too many wrong answers in this case.

Actually, I made the pretests very weak in order to allow many hacks. As mentioned in the Hack me! post by kostka, just swapping the elements greedily could pass the pretests easily. There were 14 pretests, and I made test 15 a counter-example to such algorithms in order to reduce judging time, and it seems that most of the wrong submissions failed on this test case :D

500C - New Year Book Reading

In order to calculate the minimum possible lifted weight, we should find the initial stack first. Let's find the positions one by one.

  • First, let's decide the position of book b1. After day 1, book b1 will be on the top of the stack, regardless of the order of the initial stack. Therefore, in order to minimize the lifted weight during day 1, book b1 must be on the top of the initial stack. (so actually, in day 1, Jaehyun won't lift any book)
  • Next, let's decide the position of book b2. (assume that b1 ≠ b2 for simplicity) Currently, b1 is on the top of the stack, and it cannot be changed because of the procedure. Therefore, in order to minimize the lifted weight during day 2, book b2 must be immediately below book b1.

Considering this, we can find the optimal strategy to create the initial stack: scan b1, b2, ..., bm step by step. Let's assume that I am handling bi now. If bi has appeared before (formally, if there exists j < i with bj = bi), erase bi from the sequence. Otherwise, keep bi. The resulting sequence will be the optimal stack.

You should note that it is not necessary for Jaehyun to read all the books during 2015, which means the size of the resulting sequence may be smaller than n. The books which don't appear in the resulting sequence must be at the bottom of the initial stack (obviously). However, this problem doesn't require us to print the initial stack, so this turns out not to matter.

In conclusion, finding the initial stack takes O(n + m) time.

After finding the initial stack, we can simulate the procedure given in the statement naively. It will take O(nm) time.
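
Here is a small C++ sketch of both steps (my own code; the I/O format is assumed from the statement, and it is not the linked implementation):

#include <bits/stdc++.h>
using namespace std;

int main() {
    int n, m;
    scanf("%d %d", &n, &m);
    vector<int> w(n + 1);
    for (int i = 1; i <= n; i++) scanf("%d", &w[i]);
    vector<int> b(m);
    for (auto &x : b) scanf("%d", &x);

    // Initial stack: books in the order of their first appearance in b (top to bottom).
    vector<int> stk;
    vector<bool> seen(n + 1, false);
    for (int x : b)
        if (!seen[x]) { seen[x] = true; stk.push_back(x); }

    // Naive O(n * m) simulation of the reading days.
    long long total = 0;
    for (int x : b) {
        auto it = find(stk.begin(), stk.end(), x);
        for (auto jt = stk.begin(); jt != it; ++jt) total += w[*jt];  // lift every book above x
        stk.erase(it);
        stk.insert(stk.begin(), x);                                    // move x to the top
    }
    printf("%lld\n", total);
    return 0;
}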

  • Time: O(nm)
  • Memory: O(n + m)
  • Implementation: 9335694 (.o.)

500D - New Year Santa Network

The problem asks us to calculate the expected value of d(a, b) + d(b, c) + d(c, a) over the choice of the three distinct vertices a, b, c.

Let's write it down: by symmetry, every ordered pair of vertices contributes the same number of times, so everything boils down to the sum of the distances over all ordered pairs.

So, we need to calculate the sum of d(x, y) over all ordered pairs (x, y) with x ≠ y. I'll denote this sum as S. How should we calculate this?

For simplicity, I'll set node 1 as the root of the tree. Let's focus on an edge e, connecting two vertices u and p. See the picture below.

Set A is the set of vertices in the subtree of vertex u, and set B is the set of vertices which are not in the subtree of vertex u. So, A ∪ B is the set of all vertices of the tree, and A ∩ B is empty.

For all pairs of vertices (x, y) with x in A and y in B, the shortest path between x and y contains edge e. There are size(A) × size(B) such pairs. So, edge e increases the sum S by 2 × le × size(A) × size(B) (because d(x, y) and d(y, x) both contribute to S).

size(A) (the size of the subtree rooted at u) can be calculated by a well-known tree DP. size(B) equals n - size(A).

So, for each edge e, we can pre-calculate how much this edge contributes to the total sum. Let's define t(e) = size(A) × size(B) (A and B as mentioned above). If the length of a certain road e decreases by d, S will decrease by 2 × t(e) × d. So we can answer each query.
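
To make the bookkeeping concrete, here is a rough C++ sketch of this idea (my own code and I/O assumptions, not the linked implementation; in particular, the final normalisation 6 / (n × (n - 1)) comes from my own derivation of the average over all triples, so double-check it against the statement):

#include <bits/stdc++.h>
using namespace std;

int main() {
    int n;
    scanf("%d", &n);
    vector<vector<pair<int,int>>> adj(n + 1);            // (neighbour, edge id)
    vector<long long> len(n), coef(n);                   // per edge: current length and t(e)
    vector<int> ea(n), eb(n);
    for (int i = 1; i <= n - 1; i++) {
        int a, b; long long l;
        scanf("%d %d %lld", &a, &b, &l);
        ea[i] = a; eb[i] = b; len[i] = l;
        adj[a].push_back({b, i});
        adj[b].push_back({a, i});
    }

    // Iterative DFS from the root (vertex 1) to get subtree sizes.
    vector<int> sz(n + 1, 1), par(n + 1, 0), order;
    vector<bool> vis(n + 1, false);
    stack<int> st; st.push(1); vis[1] = true;
    while (!st.empty()) {
        int u = st.top(); st.pop(); order.push_back(u);
        for (auto [v, id] : adj[u]) if (!vis[v]) { vis[v] = true; par[v] = u; st.push(v); }
    }
    for (int k = (int)order.size() - 1; k > 0; k--) sz[par[order[k]]] += sz[order[k]];

    long long total = 0;                                  // sum over edges of len[e] * t(e)
    for (int i = 1; i <= n - 1; i++) {
        int child = (par[eb[i]] == ea[i]) ? eb[i] : ea[i];
        coef[i] = (long long)sz[child] * (n - sz[child]); // t(e) = size(A) * size(B)
        total += len[i] * coef[i];
    }

    int q; scanf("%d", &q);
    while (q--) {
        int r; long long w;
        scanf("%d %lld", &r, &w);
        total += (w - len[r]) * coef[r];                  // update the weighted sum
        len[r] = w;
        printf("%.10f\n", (double)total * 6.0 / ((double)n * (n - 1)));
    }
    return 0;
}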

  • Time: O(n) (pre-calculation) + O(q) (query)
  • Memory: O(n) (O(n + q) is okay too)
  • Implementation: 9335695 (.o.)

Something about precision error

Because of the tight constraints, the total sum of d(a, b) + d(b, c) + d(c, a) (which is equal to S × (n - 2)) can be so large that it doesn't fit in a long long (64-bit integer). Some contestants didn't handle this and got WA.

However, some contestants stored this value divided by n × (n - 1) × (n - 2) in a double. A double has a 52-bit mantissa, which is not enough to store the value exactly. But because we allow a 10^-6 precision error, such solutions seem to have been accepted.

What if we had allowed the length of a road to become 0? Then the precision error would get much bigger, and such solutions would print something far from the expected value (for example, a negative value). ainu7 discovered this effect and suggested applying it, but we didn't because it was hard to fix the statement..

Why do the road lengths only decrease? It wouldn't matter even if they could increase.

It's because of the weird legend. It was hard for me to phrase "People repair a certain road each year, and its length may increase or decrease." fluently..

500E - New Year Domino

From the statement, it is clear that, when domino i falls to the right, it makes domino j also fall if and only if pi < pj ≤ pi + li.

For each domino i, let's calculate R[i], the rightmost position reached by the fallen dominoes when we initially push domino i. From the definition, R[i] = max(pi + li, max{R[j] : pi < pj ≤ pi + li}). This can be calculated using a segment tree or a stack, in decreasing order of i (from right to left). After that, we define U[i] as the smallest j such that R[i] < pj. (In other words, the smallest j such that domino j is not affected by pushing domino i.)

Now, the problem becomes: calculate P[U[xi]] - R[xi] + P[U[U[xi]]] - R[U[xi]] + ..., adding terms as long as the next index (U[xi], U[U[xi]], ...) is at most yi. Because i < U[i], this task can be solved by precalculating 'jumps' of 2^k steps, using the method described here. You should read the "Another easy solution in <O(N logN), O(logN)>" part..

Formally, let's define U^(n+1)[k] = U[U^n[k]] and S^(n+1)[k] = S^n[k] + (P[U^(n+1)[k]] - R[U^n[k]]). So, S^n[k] is the total length of the extensions needed when we initially push domino k and prolong the chain exactly n times. Then U^(2^(i+1))[k] = U^(2^i)[U^(2^i)[k]] and S^(2^(i+1))[k] = S^(2^i)[k] + S^(2^i)[U^(2^i)[k]] hold. (If you don't understand this part, I suggest you read the article linked above.)
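
Here is a hedged C++ sketch of this approach (a stack for R, binary search for U, and binary lifting for the jumps); the variable names and the I/O format are my own assumptions, not the linked implementation:

#include <bits/stdc++.h>
using namespace std;

const int LOG = 20;

int main() {
    int n, q;
    scanf("%d %d", &n, &q);
    vector<long long> p(n + 2), l(n + 2);
    for (int i = 1; i <= n; i++) scanf("%lld %lld", &p[i], &l[i]);

    // R[i]: rightmost coordinate reached when domino i is pushed (computed right to left).
    vector<long long> R(n + 2);
    stack<int> st;
    for (int i = n; i >= 1; i--) {
        long long reach = p[i] + l[i];
        while (!st.empty() && p[st.top()] <= reach) { reach = max(reach, R[st.top()]); st.pop(); }
        R[i] = reach;
        st.push(i);
    }

    // up[0][i] = U[i]: first domino strictly beyond R[i]; n+1 acts as a sentinel ("nothing left").
    // cost[0][i]: length we must add to bridge the gap between R[i] and p[U[i]].
    vector<vector<int>> up(LOG, vector<int>(n + 2, n + 1));
    vector<vector<long long>> cost(LOG, vector<long long>(n + 2, 0));
    for (int i = 1; i <= n; i++) {
        int j = int(upper_bound(p.begin() + 1, p.begin() + n + 1, R[i]) - p.begin());
        up[0][i] = j;
        cost[0][i] = (j <= n) ? p[j] - R[i] : 0;
    }
    for (int k = 1; k < LOG; k++)
        for (int i = 1; i <= n + 1; i++) {
            up[k][i] = up[k - 1][up[k - 1][i]];
            cost[k][i] = cost[k - 1][i] + cost[k - 1][up[k - 1][i]];
        }

    while (q--) {
        int x, y;
        scanf("%d %d", &x, &y);
        long long ans = 0;
        for (int k = LOG - 1; k >= 0; k--)
            if (up[k][x] <= y) { ans += cost[k][x]; x = up[k][x]; }   // take the jump only if it stays within [x, y]
        printf("%lld\n", ans);
    }
    return 0;
}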

  • Time : O((n + q) log n) — O(n log n) pre-calculation plus O(log n) per query.
  • Memory : O(n log n)
  • Implementation: 9335698 (.o.)
  • We would like to know whether there is a linear offline solution or not.

500F - New Year Shopping

The i-th item is available during [ti, ti + p - 1]. In other words, at time T, item i is available if and only if ti ≤ T ≤ ti + p - 1. This can be rewritten as (ti ≤ T and T - p + 1 ≤ ti), which is T - p + 1 ≤ ti ≤ T. With this observation, we can transform each item's availability range into a single point and each candidate's visit time into a range: from now on, item i is only available at time ti, and each candidate pair (aj, bj) means that I can visit the store during [aj - p + 1, aj] with a budget of bj dollars. This transformation makes the problem easier to solve.

Each red circled point denotes an item which is sold at that time, and each black interval denotes a candidate. Let's only consider the intervals which pass time T. All the intervals have length exactly p, so every such interval starts after T - p and ends before T + p.

For each interval, we'd like to solve the 0/1 knapsack problem with the items that the interval contains. How can we do that efficiently? There is an important fact that I've described above: all the intervals pass time T. Therefore, the interval [aj - p + 1, aj] can be split into two intervals: [aj - p + 1, T - 1] and [T, aj].

So, let's solve the 0/1 knapsack problem for all intervals [x, T - 1] (T - p ≤ x ≤ T - 1) and [T, y] (T ≤ y ≤ T + p). Because one of the endpoints is fixed, one can run a standard dynamic programming algorithm. Therefore, the time needed for the precalculation is O(S × W), where S is the number of items whose ti lies in [T - p, T + p].

Let's define h(I, b) as the maximum happiness I can obtain by buying items i with ti in I, using at most b dollars. For each candidate (aj, bj), we have to calculate h([aj - p + 1, aj], bj), which is equal to max over 0 ≤ k ≤ bj of {h([aj - p + 1, T - 1], k) + h([T, aj], bj - k)}. So it takes O(bj) = O(W) time to answer each query.
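
As a small illustration, a hedged sketch of the merge step only (the helper and table names are mine; leftTable[k] is assumed to hold h([aj - p + 1, T - 1], k) and rightTable[k] to hold h([T, aj], k), both coming from standard 0/1 knapsack DPs extended item by item towards / away from T):

#include <bits/stdc++.h>
using namespace std;

// Split the budget between the two halves of the interval and take the best combination.
long long answerQuery(const vector<long long> &leftTable,
                      const vector<long long> &rightTable,
                      int budget) {
    long long best = 0;
    for (int k = 0; k <= budget; k++)
        best = max(best, leftTable[k] + rightTable[budget - k]);
    return best;
}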

The only question left is: how should we choose the value of T? We have two facts: (1) the length of every interval is exactly p; (2) the algorithm covers the intervals which pass time T. Therefore, to cover all the intervals given in the input, we should let T be p, 2p, 3p, 4p, ... (one value at a time), and run the algorithm described above for each.

Then, what is the total time complexity? Think of the windows [0, 2p], [p, 3p], [2p, 4p], [3p, 5p], .... Each point is contained in at most two windows. Therefore, each item takes part in the pre-calculation at most twice, so the time needed to precalculate is O(n × W). The time needed to solve each query is O(bj), and each query is answered exactly once. Therefore, the time needed to answer all the queries is O(q × W).

Divide and Conquer approach

Let's assume that t1 ≤ t2 ≤ ... ≤ tn. We can easily see that, at time T, the indices of the available items form a contiguous segment [l, r] (in other words, items l, l + 1, ..., r - 1, r are available). We can use this fact to solve all the queries. Let's assume that items lj, lj + 1, ..., rj - 1, rj are available at time aj.

Define a function solve(L, R). This function calculates the answer for the queries where the condition L ≤ lj ≤ rj ≤ R holds. Let's assume that L < R holds. (In the case L ≥ R, the answer can be calculated easily.) Let M = ⌊(L + R) / 2⌋. Call solve(L, M) and solve(M + 1, R). After that, the queries that still need to be answered satisfy lj ≤ M < rj, which means all those intervals (queries) pass item M. Now, to answer such queries, we can use the method described in the dynamic programming approach.

The time complexity is O(n × W × log n + q × W), because each item takes part in O(log n) levels of the recursion and the answer of each query can be calculated in O(bj).

  • Time : O(n × W + q × W) for the approach above, or O(n × W × log n + q × W) with divide and conquer.
  • Memory : O(n × W)
  • Implementation: 9335709

Unfortunately, this approach is offline, because of the memory. Is there any available online approach using divide and conquer?

Blocking the divide and conquer solution

I tried my best to block divide and conquer solutions, but failed. It seems that many participants solved the problem with divide and conquer approach. My approach takes time, and it gets accepted in 546ms.

500G - New Year Running

Before starting the editorial, I'd like to give a big applause to Marcin_smu, who solved the problem for the first time!

Warning: The editorial is very long and has many mistakes. There are lots and lots of mistakes.. Keep calm, and please tell me in the comments if you discover any errors.

This problem can be solved by an online algorithm. Let's focus on a single query (u, v, x, y). This means that JH runs between u and v, and JY runs between x and y.

Just like in many problems on undirected trees, let vertex 1 be the root of the tree. Also, to make the explanation easier, we will assume that they won't stop when they meet but will continue running.

Some definitions:

  • Path (a, b): the shortest path between vertex a and vertex b.
  • d(a, b): the length of Path (a, b).
  • LCA(a, b): the lowest common ancestor of vertex a and vertex b.

Finding the common path of two paths

If there is no common vertex between Path (u, v) and Path (x, y), the answer is obviously -1. So, let's find the common vertices first. Because the path between two vertices is always unique, the common vertices also form a path. So, I'll denote the path of the common vertices as Path (c1, c2). c1 and c2 may be equal to u, v, x, y, and even to each other.

The possible endpoints are P1 = LCA(u, v), P2 = LCA(x, y), P3 = LCA(u, x), P4 = LCA(u, y), P5 = LCA(v, x), and P6 = LCA(v, y). Figure it out by drawing some examples. (Some of you might think it's obvious :D)

See the pictures above, and make your own criteria to check whether the two paths intersects :D

A small hint: let's assume that we already know that a vertex a lies on Path (x, y). If a also lies on Path (u, v), then a is a common vertex of the two paths. What if a is guaranteed to be an endpoint of the common path?

Additional definitions

  • Let's denote JH's running course as u → c1 → c2 → v → c2 → c1 → u → ... . Then, relative to the common path, there are two possible courses for JY: x → c1 → c2 → y → ... or x → c2 → c1 → y → ... .

  • fJH : the time needed for JH to run his full course once, i.e., 2 × d(u, v).
  • fJY : the time needed for JY to run his full course once, i.e., 2 × d(x, y).
  • t1 : the first time when JH passes vertex c1, moving towards c2.
  • t2 : the first time when JH passes vertex c2, moving towards c1.
  • t3 : the first time when JY passes vertex c1, moving towards c2.
  • t4 : the first time when JY passes vertex c2, moving towards c1.

Case 1) When JH and JY meets while moving in the same direction

Obviously, they must first meet at vertex c1 or vertex c2. Without loss of generality, let's assume that they meet at vertex c1 and that, at that moment, both of them are moving towards vertex c2. (The case of c2 is handled similarly.)

Let's assume that JH and JY meet at time T. Because the movements of both runners are periodic, T must satisfy the conditions below:

  • T ≡ t1 (mod fJH)
  • T ≡ t3 (mod fJY)

Among the possible values of T, we have to find the smallest one. How should we do that? From the first condition, we can let T = fJH × p + t1, where p is a non-negative integer. With this, we can write the second condition as fJH × p + t1 ≡ t3 (mod fJY). In conclusion, we have to find the smallest non-negative p which satisfies the condition

  • fJH × p ≡ t3 - t1 (mod fJY)

Using the Extended Euclidean algorithm is enough to calculate the minimum p. With p, we can easily calculate the minimum T, which is the first time they meet. If there is no such p, they never meet, so the answer is -1.
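
As an illustration, a hedged C++ sketch of this step (the function names are mine, not the author's implementation; it finds the smallest non-negative p with fJH × p ≡ t3 - t1 (mod fJY) and returns the corresponding T, or -1 if no solution exists):

#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

ll extgcd(ll a, ll b, ll &x, ll &y) {          // returns gcd(a, b) and finds x, y with a*x + b*y = g
    if (b == 0) { x = 1; y = 0; return a; }
    ll x1, y1, g = extgcd(b, a % b, x1, y1);
    x = y1; y = x1 - (a / b) * y1;
    return g;
}

ll firstMeeting(ll fJH, ll fJY, ll t1, ll t3) {
    ll rhs = ((t3 - t1) % fJY + fJY) % fJY;    // normalise the right-hand side into [0, fJY)
    ll x, y, g = extgcd(fJH, fJY, x, y);
    if (rhs % g != 0) return -1;               // the congruence has no solution -> they never meet this way
    ll mod = fJY / g;                          // solutions for p repeat with this period
    ll p = ((x % mod) * ((rhs / g) % mod)) % mod;
    p = (p % mod + mod) % mod;                 // smallest non-negative p (values here stay well inside 64 bits)
    return fJH * p + t1;                       // the first meeting time T
}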

Case 2) When JH and JY meets while moving in the opposite direction

In this case, they will meet at a vertex lying on Path (c1, c2). Without loss of generality, let's assume that JH is going from c1 to c2, and JY is going from c2 to c1.

I'll assume that JH and JY meets at time T.

[1] Let's see how JH moves: For all non-negative integer p,

  1. At time fJH × p + t1, he is at vertex c1, moving towards c2.
  2. At time fJH × p + t1 + d(c1, c2), he is at vertex c2.

Therefore, when fJH × p + t1 ≤  (current time)  ≤ fJH × p + t1 + d(c1, c2), JH is on Path (c1, c2). So, T must satisfy the condition:

  • fJH × p + t1 ≤ T ≤ fJH × p + t1 + d(c1, c2)

[2] Let's see how JY moves: Similar to JH, for all non-negative integer q,

  1. At time fJY × q + t4, he is at vertex c2, moving towards c1.
  2. At time fJY × q + t4 + d(c2, c1), he is at vertex c1.

Therefore, when fJY × q + t4 ≤  (current time)  ≤ fJY × q + t4 + d(c2, c1), JY is on Path (c2, c1). So, T must satisfy the condition:

  • fJY × q + t4 ≤ T ≤ fJY × q + t4 + d(c1, c2)

[3] These two conditions are not enough, because even when both of them hold, the two runners might only pass each other in the middle of an edge, never standing on the same vertex at the same time.

We'd like to know when they meet at a vertex, like in the picture below.

As you see from the picture above, in this case, the condition d(c1, a) + d(a, c2) = d(c1, c2) holds. If this picture was taken at time s, this condition can be written as:

{s - (fJH × p + t1)} + {s - (fJY × q + t4)} = d(c1, c2)
2s - (fJH × p + fJY × q) - (t1 + t4) = d(c1, c2)
s = {d(c1, c2) + (fJH × p + fJY × q) + (t1 + t4)} / 2

Because JH and JY both travel their paths back and forth, fJH and fJY are both even. Therefore, in order to make s an integer, d(c1, c2) + t1 + t4 must be even. This can be written as d(c1, c2) + t1 + t4 ≡ 0 (mod 2).

[4] Merging [1], [2] and [3], we can conclude that if the following two conditions hold,

  • d(c1, c2) + t1 + t4 ≡ 0 (mod 2)
  • max{fJH × p + t1, fJY × q + t4} ≤ min{fJH × p + t1 + d(c1, c2), fJY × q + t4 + d(c1, c2)}

then JH and JY meet at a certain vertex lying on Path (c1, c2). The first condition can be easily checked, so let's focus on the second one. The second condition holds if and only if:

  • fJY × q + t4 - t1 - d ≤ fJH × p ≤ fJY × q + t4 - t1 + d, where d = d(c1, c2)

You can check this by substituting the lower and the upper bound of the inequality for "fJH × p". Therefore, the problem is to find the smallest p which satisfies the condition above.

Let's define a function g(M, D, L, R), where M, D, L, R are all non-negative integers. The function returns the smallest non-negative integer m which satisfies L ≤ (D·m mod M) ≤ R. This function can be implemented as follows.

  1. If L = 0, g(M, D, L, R) = 0. (because L = 0 ≤ D·0)
  2. If D = 0 (and L > 0), it is "obvious" that there is no solution.
  3. If 2D > M, g(M, D, L, R) = g(M, M - D, M - R, M - L).
  4. If there exists a non-negative integer m which satisfies L ≤ D·m ≤ R (without the modular condition), g(M, D, L, R) = m. Obviously, we should take the smallest such m.
  5. Otherwise, there exists an integer m which satisfies D·m < L ≤ R < D·(m + 1). We should use this fact..

    If L ≤ (D·m mod M) ≤ R holds, there exists a non-negative integer k which satisfies L + Mk ≤ D·m ≤ R + Mk. Let's write it down..
    D·m - R ≤ M·k ≤ D·m - L
     - R ≤ M·k - D·m ≤  - L
    L ≤ D·m - M·k  ≤ R
    Because , we can write the inequality as
    Therefore the minimum k among all possible solutions is equal to . We can easily calculate the smallest p using k.

Then, what is the time complexity of calculating g(M, D, L, R)? Because 2D ≤ M (the 2D > M case is handled in step 3), the problem size at least halves with each recursive call, so it runs in O(log M) time.

Conclusion

For each query, we have to consider both Case 1) and Case 2), with all possible directions. Both cases can be handled in logarithmic time (the extended Euclidean algorithm and the function g are both logarithmic), so each query takes logarithmic time overall, but with a really huge constant.

  • Time:
  • Memory:
  • Implementation: 9330235

Tutorial of Good Bye 2014

By .o., 9 years ago, In English

Hi, Codeforces!

Welcome to the last Codeforces round of 2014, Good Bye 2014! This round is very unusual. First, the round starts on December 30th, 18:00 MSK. Second, the round lasts 2.5 hours. And lastly, the round will be a combined-division round, which means Div1 and Div2 contestants won't be separated.

The problems were prepared by me (.o.) and Seunghyun Jo (ainta). This is our second round on Codeforces. Because our first round caused(?) the Black Day, we hope this round won't cause any errors like before :D

Thanks to Won-seok Yoo(ainu7), who tested our round and gave us comments about the problemset.

We'd like to thank the people who made this round possible: Maxim Akhmedov (Zlobober) gave us great help in preparing the problems. Maria Belova (Delinur) translated the problem statements into Russian. Mike Mirzayanov (MikeMirzayanov) made the Codeforces and Polygon systems, which are really great. Let's give them a round of applause!

The score distribution will be posted just before the round starts, as usual.

We wish you all the best of luck. Happy New Year!

UPD (2014-12-30 17:33:21) The score for each problem is going to be 500-1000-1000-1500-2750-2750-3500. Thanks to Xellos for giving us some ideas.

UPD (2014-12-31 12:49:05) Round has finished, congratulations to the winners!

  1. tourist
  2. Petr
  3. rng_58
  4. HYPERHYPERHYPERCUBELOVER
  5. subscriber
  6. Merkurev
  7. al13n
  8. mmaxio
  9. mexmans
  10. GlebsHP

Also, thanks to Marcin_smu, who solved problem G after the contest for the first time.

UPD (2014-12-31 12:51:08) The editorial is published. Currently, only A-F are available, but I will add G as soon as possible. Sorry for the late editorial.

UPD(2015-01-02 21:30:44) I wrote the editorial of G. Sorry for the late update..

Announcement of Good Bye 2014

By .o., 9 years ago, In English

Hi, I'm the author of Good Bye 2014. Good Bye 2014 will be held for 2.5 hours with 7 problems, and it will be division combined, like last year. However, I have doubts about the usual score distribution 500-1000-1500-2000-2500-3000-3500.

This post already brought up the problems with the normal distribution: solving Div1E at the last moment of the contest may be worth less than solving Div1A very quickly. So I asked Zlobober whether the score leak (currently 0.4% per minute) could be changed or not, and unfortunately, there is currently no option to decrease the score leak.

So, I would like to know what would be the best score distribution for Good Bye 2014. I will consider all your opinions and set the score distribution. Thanks in advance! :D

  • I don't know how to write or read Russian, so please write in English.


By .o., 10 years ago, In English

Hi, everyone!

The 2014 World Cup in Brazil has started, and I was looking for interesting problems. I found a nice IOI-style problem from the Japanese Olympiad In Informatics 2014 (this is not the online contest!). If you can read Japanese, you can find the problem "Kanji Shiritori" in the statement file. Here is a translation of the problem, powered by Google Translate and me :D


You are given a weighted directed graph. The graph consists of N (2 ≤ N ≤ 300) vertices and M (1 ≤ M ≤ N × (N - 1)) edges, and there are no loops or multiple edges in this graph. The vertices are numbered by integers from 0 to N - 1, and the edges are numbered by integers from 0 to M - 1. The i-th edge goes from vertex Ai to vertex Bi, and the weight of this edge is Ci. (0 ≤ Ai < N, 0 ≤ Bi < N, Ai ≠ Bi, 1 ≤ Ci ≤ 10^16)

Anna and Bruno have to solve Q (1 ≤ Q ≤ 60) queries during an exam. Each query is given by two integers Si and Ti (0 < Si, Ti < N, Si ≠ Ti), which means that they have to find the shortest "path" from vertex Si to vertex Ti. It is guaranteed that such a path always exists. If there are many possible answers, they can write any of them.

Both Anna and Bruno are given the whole graph and the description of the queries. (Of course, these are the same.) Unfortunately, Bruno has forgotten the weights of K (1 ≤ K ≤ 5) distinct edges. (He only forgot the 'weights', not the 'whole edges'.) Fortunately(?), the start vertices of the forgotten edges are all the same.

(Formally, Bruno has forgotten the weight of the U0-th, U1-th, ..., UK - 1-th edge. So Bruno doesn't know the values of CU0, CU1, ..., CUK - 1. It is guaranteed that AU0 = AU1 = ... = AUK - 1.)

Bruno told Anna that he had forgotten the weight of some edges and gave Anna U0, U1, ..., UK - 1. Anna wanted to tell Bruno the weights, but the exam was going to start, so Anna decided to give information during the test by tapping the desk. By one tap, Anna can send a single bit to Bruno.

If Anna taps the desk too many times, the supervisor will notice that they are cheating. So Anna has to send Bruno as few bits as possible. Let L be the number of bits sent. Can you devise a strategy for Anna and Bruno? If L ≤ 64, you get full score.


When I first read this problem, I couldn't find any solution anywhere near 64 bits. But after I introduced it to my friends, unbelievably (at least for me), gs12117 found an algorithm that requires just 22 bits!

His solution is quite simple: For each weight-unknown edge, Anna sends the number of queries whose shortest path passes that edge. With this information, Bruno can find which shortest path(answer for the queries) passes each weight-unknown edge and restore the answer.

Here is the proof below. Because of my poor English skill, I think it is hard to understand. I'm sorry about that.

For all u, v (0 < u, v < N, u ≠ v), the shortest path from vertex u to vertex v passes through at most one unknown edge. This is because the start vertices of the weight-unknown edges are all the same: if the path passed more than one weight-unknown edge, it would contain a positive-weight cycle, which is obviously not optimal.

For each query (shortest path) that passes a weight-unknown edge, we define a "benefit" for this (query, weight-unknown edge) pair.

  • Let the weight-unknown edge go from vertex p to vertex q.
  • Let the query be (s, t).
  • Let dist(a, b) be the length of the shortest path from vertex a to vertex b that does not use any weight-unknown edge. If there is no such path, it is inf.
  • (benefit) = dist(s, t) - dist(s, p) - dist(q, t)

If the shortest path passes (p, q), the length of the shortest path decreases by "(the benefit achieved by passing (p, q)) — (the length of edge (p, q))".

Let's fix the query (i.e., the source and the sink of the shortest path). If this value is negative for every weight-unknown edge, it is better for the shortest path not to pass any weight-unknown edge at all. Otherwise, the shortest path must pass the weight-unknown edge for which this value is maximal.
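
To make this concrete, here is a hedged sketch of the benefit computation (the helper names are mine, not from the official or the linked implementations; dist is computed on the graph with the weight-unknown edges removed, so Anna and Bruno obtain identical values):

#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
const ll INF = LLONG_MAX / 4;

// Dijkstra on the graph WITHOUT the weight-unknown edges.
vector<ll> dijkstra(int src, const vector<vector<pair<int,ll>>> &adj) {
    vector<ll> d(adj.size(), INF);
    priority_queue<pair<ll,int>, vector<pair<ll,int>>, greater<>> pq;
    d[src] = 0; pq.push({0, src});
    while (!pq.empty()) {
        auto [du, u] = pq.top(); pq.pop();
        if (du != d[u]) continue;
        for (auto [v, w] : adj[u])
            if (du + w < d[v]) { d[v] = du + w; pq.push({d[v], v}); }
    }
    return d;
}

// For one query (s, t) and one unknown edge (p, q):
//   distS  = dijkstra(s, adjWithoutUnknown), distQk = dijkstra(q, adjWithoutUnknown),
//   p is the common start vertex of all weight-unknown edges.
ll benefit(const vector<ll> &distS, const vector<ll> &distQk, int p, int t) {
    if (distS[t] >= INF || distS[p] >= INF || distQk[t] >= INF) return -INF;  // "undefined" -> never useful
    return distS[t] - distS[p] - distQk[t];
}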

Bruno can calculate the benefit for all (query, weight-unknown edge) pairs and find the shortest paths. To do this, Bruno has to maximize the total benefit subject to the restriction that Anna has given him. This can be done by solving an assignment problem. There are many ways to do so: we could use the Hungarian method, but because of the low constraints, a simple DP is enough. Now, let's prove that the result we have found is correct.

Take a look at the sum of the total weights of the shortest paths that Bruno calculated for all queries. Let's denote this sum by S1. It equals the sum of:

  • (the sum of the total weight of all shortest paths except the weight of the weight-unknown edges)
  • for each weight-unknown edge, (the number of queries that passes the edge)  ×  (the length of that edge)

Let's calculate the sum described above with no weight-unknown edges, and call it S2.

If we decrease S1 by (the total benefit achieved by passing the weight-unknown edges) minus the sum, over each weight-unknown edge, of (the number of queries that pass the edge) × (the length of that edge), the value becomes S2.

The (count × length) part is a constant for Bruno because the counts are fixed by Anna's restriction. Thus, maximizing the total benefit while obeying Anna's restriction is equivalent to minimizing the sum of the total weights of the shortest paths.

What if Bruno's result is not optimal, i.e., S1 is bigger than the true optimum? Then (the total benefit calculated by Anna) would be bigger than (the total benefit calculated by Bruno), which is impossible because Bruno maximized the total benefit under the restriction given by Anna.

Therefore we can conclude that Bruno's result is optimal.

There are 6H60 = 8259888 ≤ 2^23 - 1 different states, so we can use only 22 bits. If you don't want to code this part, you can just send the number of queries passing each weight-unknown edge. This costs 30 bits.
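
For the simpler 30-bit fallback, a hedged sketch of the encoding could look like this (the helper names are mine; each count is at most 60 and therefore fits in 6 bits):

#include <bits/stdc++.h>
using namespace std;

uint64_t encodeCounts(const vector<int> &counts) {         // counts.size() <= 5, each value <= 60
    uint64_t bits = 0;
    for (int c : counts) bits = (bits << 6) | (uint64_t)c; // pack 6 bits per count
    return bits;
}

vector<int> decodeCounts(uint64_t bits, int k) {           // k = number of weight-unknown edges
    vector<int> counts(k);
    for (int i = k - 1; i >= 0; i--) { counts[i] = (int)(bits & 63); bits >>= 6; }
    return counts;
}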

This is the implementation of the solution: Anna, Bruno. Most of it was coded by ainta.


What do you think about this solution? I think it's brilliant and optimal for this problem, but it is quite hard to implement. I saw the official implementation, and it seemed quite simple, but I couldn't get the idea.

  • Sorry for the confusing title :| If you have a great title, please write in the comments


By .o., 10 years ago, In English

Please write here if you left any comments with suggestions or questions on the announcement. There are a bunch of notification emails, but I couldn't read most of them :(

Here are the final top 100 : Div1, Div2. Congrats to the winners!

(These links are all from the Google search cache, so they may disappear)


First of all, we are sorry for this round becoming unrated. This was an unexpected incident, and I believe everyone on Codeforces understands that. And second, we're sorry for misjudging the difficulties of the problems.

  • Actually, I thought that somebody would get Accepted on 398A - Cards within 5 minutes. But there weren't any submissions during the first 4 minutes, and many people got Wrong Answer on the pretests! I think almost everybody who solved it got at least one wrong answer.. I'm really sorry for judging A to be easy. I think what cgy4ever said is right — it would have been better not to ask to print the sequence.

  • At first, I proposed 398C - Tree and Array as B(!!) because ainta solved it much more easily than I expected. (I could hardly solve it myself, and thought it had to be at least D, before I heard ainta's solution.) We're sorry for giving this problem the same score as 398B - Painting The Wall, which was much easier than 398C - Tree and Array. Next time I'll remember that ainta is really a genius :)

  • Congratulations to the four people who solved 398C - Tree and Array during the contest: cgy4ever, Shef, SillyHook06, and Qwaz.

  • We are surprised that Petr solved 398E - Sorting Permutations during the contest! Even though he implemented it in Java, his solution is faster than ours :p Congratulations to him!

  • Actually, we had proposed another problem for Div1-E, but the same problem appeared at the OpenCup last Sunday, so we couldn't use it. Thanks to Gerald, who helped us prepare a new problem in such a short time. By the way, can you guess which problem that was? :D

  • ...


399A - Pages — Author : .o.

You can solve this problem by just following the statement. First, print "<<" if p - k > 1. Next, iterate over all integers from p - k to p + k, and print each one that lies in [1..n]. Finally, print ">>" if p + k < n.
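
A minimal sketch of this simulation (my own code, not the linked one; the exact output formatting, such as printing the current page in parentheses, is assumed from the statement):

#include <bits/stdc++.h>
using namespace std;

int main() {
    int n, p, k;
    scanf("%d %d %d", &n, &p, &k);
    if (p - k > 1) printf("<< ");
    for (int i = p - k; i <= p + k; i++)
        if (1 <= i && i <= n)
            printf(i == p ? "(%d) " : "%d ", i);   // the current page is wrapped in parentheses
    if (p + k < n) printf(">>");
    printf("\n");
    return 0;
}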

  • Time complexity: O(n).
  • Code : 5926588

399B - Red and Blue Balls — Author : xtalclr

Change the blue balls to bit 1 and the red balls to bit 0, and interpret the stack as an integer written in binary. Now, one can see that a single operation decreases this integer by exactly 1, and operations become unavailable exactly when the integer reaches zero. So the answer will be 2^a1 + 2^a2 + ... + 2^ak, where the (ai + 1)-th (1 ≤ i ≤ k) ball is blue in the beginning.
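
A tiny hedged sketch of this observation (it assumes the stack is given as a string of 'R'/'B' characters with the top ball first and that the answer fits in an unsigned 64-bit integer for the given constraints; both are my assumptions, so double-check them against the statement):

#include <bits/stdc++.h>
using namespace std;

int main() {
    int n;
    char s[128];
    scanf("%d %s", &n, s);
    unsigned long long answer = 0;
    for (int i = 0; i < n; i++)
        if (s[i] == 'B') answer += 1ULL << i;   // the (i+1)-th ball being blue contributes 2^i
    printf("%llu\n", answer);
    return 0;
}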

  • Time complexity: O(n)
  • Code : 5926598

398A - Cards — Author : .o.

Let p be the number of 'o' blocks and q the number of 'x' blocks. It is obvious that |p - q| ≤ 1 and p, q ≤ n. So we can just iterate over all p and q, solve these two problems independently, and merge the answers.

1. Maximize the value of x1² + x2² + ... + xp² where x1 + x2 + ... + xp = a and xi ≥ 1.

  • Assign a - p + 1 to x1, and assign 1 to x2, x3, ..., xp.
  • The value is (a - p + 1)² + (p - 1).

2. Minimize the value of y1² + y2² + ... + yq² where y1 + y2 + ... + yq = b and yi ≥ 1.

  • For all i with 1 ≤ i ≤ (b mod q), assign ⌈b / q⌉ to yi.
  • For the remaining indices j, assign ⌊b / q⌋ to yj.
  • The value is (b mod q) × ⌈b / q⌉² + (q - (b mod q)) × ⌊b / q⌋².

The proof is easy, so I will omit it. With this, we can solve the problem. We can print y1 x's, then x1 o's, and so on.

  • Time complexity: O(n).
  • Code : 5926607
  • Bonus: Is there any solution faster than O(n)? We tried to use ternary search, but we couldn't prove it correct.

398B - Painting The Wall — Author : ainta

This can be solved by dynamic programming. Let T[i, j] be the expected time when exactly i rows and j columns have no painted cell inside.

Let's see all the cases. If we carefully rearrange the order of rows and columns, we can divide the wall into 4 pieces.

  1. Choose a cell in rectangle 1. The size of this rectangle is i × j. The expectation value is T[i - 1, j - 1].
  2. Choose a cell in rectangle 2. The size of this rectangle is i × (n - j). The expectation value is T[i - 1, j].
  3. Choose a cell in rectangle 3. The size of this rectangle is (n - i) × j. The expectation value is T[i, j - 1].
  4. Choose a cell in rectangle 4. The size of this rectangle is (n - i) × (n - j). The expectation value is T[i, j].

Merging these four cases, we can get:

T[i, j] = 1 + {T[i - 1, j - 1] × i × j + T[i - 1, j] × i × (n - j) + T[i, j - 1] × (n - i) × j + T[i, j] × (n - i) × (n - j)} / n²

Moving the T[i, j] term on the right-hand side to the left, we get a formula that can be used to calculate T[i, j] directly. There are some corner cases, like i = 0 or j = 0, but they can be handled simply.
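
For completeness, a small C++ sketch of this DP (my own code; reading the m initially painted cells, and using T[i0][j0] as the answer where i0 and j0 count the initially empty rows and columns, are assumptions based on the statement):

#include <bits/stdc++.h>
using namespace std;

int main() {
    int n, m;
    scanf("%d %d", &n, &m);
    vector<bool> rowPainted(n + 1, false), colPainted(n + 1, false);
    for (int k = 0; k < m; k++) {
        int r, c;
        scanf("%d %d", &r, &c);
        rowPainted[r] = colPainted[c] = true;
    }
    int i0 = 0, j0 = 0;
    for (int k = 1; k <= n; k++) { if (!rowPainted[k]) i0++; if (!colPainted[k]) j0++; }

    // T[i][j] = expected number of additional moves when i rows and j columns are still empty;
    // the closed form below comes from moving the T[i][j] term of the recurrence to the left.
    vector<vector<double>> T(n + 1, vector<double>(n + 1, 0.0));
    double n2 = (double)n * n;
    for (int i = 0; i <= n; i++)
        for (int j = 0; j <= n; j++) {
            if (i == 0 && j == 0) { T[i][j] = 0.0; continue; }     // nothing left to cover
            double num = n2;
            if (i > 0 && j > 0) num += T[i - 1][j - 1] * i * j;
            if (i > 0)          num += T[i - 1][j] * i * (n - j);
            if (j > 0)          num += T[i][j - 1] * (n - i) * j;
            double den = n2 - (double)(n - i) * (n - j);            // zero only when i = j = 0
            T[i][j] = num / den;
        }
    printf("%.10f\n", T[i0][j0]);
    return 0;
}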

  • Time complexity: O(n2)
  • Code : 5926617

398C - Tree and Array — Author : .o.

There are two intended solutions. One is easy, and the other is quite tricky.

Easy Solution

Make a tree by the following method.

  • For all , make an edge between vertex i and with weight 1.
  • For all , make an edge between vertex i and i + 1 with weight .

You can see that are all good pairs. In addition, (1, 3) is also a good pair. So we can print the answer.

But, if n = 5, this method doesn't work because (1, 3) is not a good pair. In this case, one may solve it by hand or use a randomized algorithm.

  • Time complexity: O(n)
  • Code: 5926624
  • Bonus I: Is there any method to make more than good pairs?
  • Bonus II: Is there any method that the maximum weight is relatively small and make at least good pairs?

Tricky Solution

If you get stuck with the problem, you can run a randomized algorithm that solves small cases, and then merge these cases. With this approach, we can even limit the weight to a very small constant!

But I think this is not a good solution, so I won't go into detail unless anybody wants to know it. I came up with this solution and tried to propose it as D. It seems that none of the 6 people who passed the pretests during the contest used this solution :)

  • Time complexity: ???
  • Code : 5926632

398D - Instant Messanger — Author : .o.

Let xi be an integer representing the state of the i-th user: if user i is online, xi = 1, otherwise xi = 0. Also, let Si be the set consisting of the i-th user's friends.

One can see that the queries become like this:

  • Online/Offline: change the value of xi to 1  -  xi.
  • Add: add u to set Sv, and v to set Su.
  • Delete: delete u from set Sv, v from set Su.
  • Count: calculate the sum of xv over all v in Si (the number of online friends of user i).

Let's use sqrt-decomposition to perform these queries. Call user u heavy if |Su| ≥ C, and light otherwise. Also, we will define some arrays and sets:

  • Hu is a set that stores the friends of user u who are heavy users.
  • O[u] stores the sum of xv over all light friends v of user u (i.e., the number of online light friends of u).

If the Online/Offline(u) query is given,

  • If u is a light user: for all v in Su, change the value of O[v]. It takes O(|Su|) time.
  • Otherwise: just update the state of itself.

If the Add(u, v) query is given,

  • If vertex u becomes heavy by adding this edge: for all w in Su, add u into Hw and update the value of O[w]. This can be done in O(C).
  • If vertex u is still light even if we add the edge, update O[v]. This can be done in constant time.
  • If vertex u is heavy (after the operations above), update Hv.
  • Add u to set Sv, and v to set Su.

If the Delete(u, v) query is given,

  • Delete u from set Sv, and v from set Su.
  • Update the values of O[*], Hu and Hv. Because |Hw| ≤ |E| / C for all w, this can be done in O(|E| / C).

If the counting query is given, O[u] plus the sum of xv over all v in Hu will be the answer. Because |Hu| ≤ |E| / C, it will take O(|E| / C) time.

So for each query, O(|E| / C) or O(C) time is required. To minimize this, choose C around √|E|, which makes every query cost about O(√|E|).

We can also solve this by dividing the queries into blocks. For each block, use O(n + m + q) time to build the online & offline state and the answer table for each vertex. For each query, you can then compute the answer from the precomputed table plus the updates inside the current block. If you want more details, please ask in the comments.

398E - Sorting Permutations — Author : xtalclr, eyTns, and great help of Gerald

Here is the draft solution.


Feel free to ask about or discuss the solutions! Unfortunately, I don't know how to read or write Russian, so if you ask me in Russian, I can't respond. (Also, I think the Russian editorial won't be available..) Sorry for that.
