Zlobober's blog

By Zlobober, 6 years ago, translation,

Hi everybody!

Hope you liked our Revolution of Colors and Titles. Maybe you even took part in a contest with your new status. The last round set an incredible record: 8000 participants! And I'm glad to tell you that there was not a single technical issue during the round! Consider the 15-minute delay a part of our evil plan for setting a new record :)

I'm glad to say that the Codeforces team is able not only to tune colors and formulas, but also to work on new features for you. You may see on the contests page that there is now a list of authors for each round! Moreover, a profile now contains an entry called "problemsetting" that lets you see the list of all contests in whose preparation that person took part.

Endagorion looks like this.

• +673

By Zlobober, 6 years ago,

With the Revolution of Colors and Titles a new color was introduced (somewhat similar to TopCoder's target): the color for legendary coders, which currently renders the first letter of the handle in black.

In my browser and operating system (Chrome on Linux Mint) that looks, IMHO, more like a style defect than a nice effect:

My variant and its minor variation are:

What do you think? Feel free to comment or to share your own vision of how the best coders ever should look! The main requirement is not to lose the unique Codeforces style.

• +145

By Zlobober, history, 6 years ago, translation,

The VK Cup championship has just finished. It gave us a great opportunity to try a new contest format in which contestants participate in teams of two. While setting up the online rounds, the finals and the finals mirror, we discovered many nuances and tricky things that should be considered when running such special contests, but in order to come up with a full picture we need feedback from the community.

That's why the Codeforces team wants to ask the community to share thoughts on what was done right, what was done wrong, what could be improved and what should be added. Do not hesitate to tell us about your experience or to give advice regarding this unusual contest format; we would like to hear any reasonable thoughts.

We want feedback of any kind: on the online rounds, on the onsite finals from the finalists, and on the finals mirror that happened yesterday.

• +51

By Zlobober, history, 6 years ago, translation,

Thanks everybody for participating. Tasks follow in the order of the original contest (the mirror order is given in brackets).

562A - Logistical Questions

(in mirror: 566C - Logistical Questions)

Let's think about the formal statement of the problem. We are given a tricky definition of distance on the tree: ρ(a, b) = dist(a, b)^1.5. Each vertex has a weight wi. We need to choose a place x for the competition such that the weighted sum of distances from all vertices of the tree is minimum possible: f(x) = w1·ρ(x, 1) + w2·ρ(x, 2) + ... + wn·ρ(x, n).

Let's understand how the function f(x) behaves. Allow x to be placed not only at vertices of the tree, but also at any point inside an edge, by naturally extending the distance definition (for example, the middle of an edge of length 4 km is located 2 km from both of its ends).

Fact 1. For any path in the tree the function ρ(i, x) is convex. Indeed, the plot of dist(i, x) along any path [a, b] looks like the plot of abs(x): it first decreases linearly down to its minimum (attained at the point of [a, b] closest to i), and then increases linearly. Composing it with the convex increasing function t^1.5, we get a function convex on any path in the tree. Here by a function on a path we mean the usual function of a real variable x, where x is identified with its position on the path via dist(a, x). So each summand in the definition of f(x) is convex on any path in the tree, hence f(x) is also convex on any path in the tree.

Let's call functions that are convex on any path in the tree convex on the tree. Let's formulate a few facts about convex functions on trees.

Fact 2. A convex function on the tree can't have two different local minima. Indeed, otherwise the path between those minima would contradict convexity along that path.

So any convex function f(x) on the tree has a unique local minimum, which coincides with its global minimum.

Fact 3. From each vertex v there is at most one edge along which the function f decreases. Indeed, otherwise the path connecting two edges of decrease would contradict convexity of the function at the point v.

Let's call the edge along which the function decreases the gradient of the function f at the point x. By facts 2 and 3, at each vertex there is either a uniquely defined gradient, or the vertex is the global minimum.

Suppose we can efficiently find the gradient direction at a given vertex v. If our tree were a bamboo (a path), the task would be ordinary convex function minimization, efficiently solved by binary search, i.e. dichotomy. We need an analogue of dichotomy for a tree. What is it?

Let's use centroid decomposition! Indeed, let's take the tree centroid (a vertex such that the sizes of all its hanging subtrees are at most n / 2). By fact 3 we may consider the gradient of f at the centroid. First, there may be no gradient at this point, meaning that we have already found the optimum. Otherwise we know that the global minimum is located in the subtree in the direction of the gradient, so all remaining subtrees and the centroid itself can be excluded from consideration. Thus one gradient calculation halves the number of vertices under consideration.

So, in O(log n) runs of the gradient calculation we have almost solved the problem. Let's understand where exactly the answer is located. Note that the global optimum will most probably be located strictly inside some edge. It is easy to see that the optimal vertex is one of the endpoints of that edge, or more specifically, one of the last two vertices considered by our algorithm. Which one exactly can be determined by computing the exact answer for both and choosing the better of them.

Now let's calculate the gradient direction at a vertex v. Fix a subtree ui hanging from vertex v. Consider the derivative of all summands from that subtree as we move into that subtree, and denote this derivative by deri. Then the derivative of f(x) when moving from x = v in the direction of subtree ui is -der1 - der2 - ... - der(i-1) + deri - der(i+1) - ... - derk, where k is the degree of vertex v. So one DFS from vertex v computes all the values deri, after which we find the gradient direction by applying the formula above and taking a direction with negative derivative.

Finally, we get a solution in O(n log n).

562B - Clique in the Divisibility Graph

(in mirror: 566F - Clique in the Divisibility Graph)

Order the numbers of the sought clique in ascending order. Note that a set X = {x1, ..., xk} is a clique iff xi divides x(i+1) for all 1 ≤ i ≤ k - 1. So it's easy to formulate a dynamic programming problem: D[x] is the length of the longest suitable increasing subsequence ending in the number x. The recurrence: D[y] = max(D[y], D[x] + 1) over all multiples y of x that belong to the set A.

If the DP is written in the "forward" direction then it's easy to estimate the complexity of the solution: in the worst case we process M / 1 + M / 2 + ... + M / M = O(M log M) transitions, where M is the maximum value in A.

562C - Restoring Map

(in mirror: 566E - Restoring Map)

Let's call the neighborhood of a vertex the set consisting of the vertex itself and all vertices adjacent to it. So we know the set of neighborhoods of all vertices in some arbitrary order, and each neighborhood is itself shuffled in an arbitrary order.

Let's call a tree vertex internal if it is not a leaf. Similarly, let's call a tree edge internal if it connects two internal vertices. A nice observation is that if two neighborhoods intersect in exactly two elements a and b, then a and b have to be connected by an edge; in particular, the edge (a, b) is internal. Conversely, any internal edge (a, b) may be represented as the intersection of the neighborhoods C and D of some two vertices c and d such that there is a path c – a – b – d in the tree. In this manner we may find all internal edges by considering pairwise intersections of all neighborhoods. This can be done in about n^3 / 2 operations naively, or 32 times faster using the bitset technique.

Note that knowing all internal edges we may determine all internal vertices, except in the single case of a star graph (a graph consisting of one vertex with several other vertices attached to it). The case of a star should be considered separately.

Now we know the set of all leaves, the set of all internal vertices, and the tree structure on the internal vertices. The only thing that remains is to determine, for each leaf, the internal vertex it should be attached to. This can be done in the following manner. Consider a leaf l and all neighborhoods containing it. Take a minimal neighborhood among them; it can be shown that it is exactly the neighborhood L corresponding to the leaf l itself. Consider all internal vertices in L. There are at least two of them.

If there are three or more of them, then we can uniquely determine which of them l should be attached to: it is the vertex among them whose degree inside L is larger than 1. If there are exactly two internal vertices in L (say, a and b), then determining the attachment point for l is a bit harder.

Statement: l should be attached to the vertex among a, b that has internal degree exactly 1. Indeed, if l were attached to a vertex with internal degree larger than 1, we would have handled this case above.

If both vertices a and b have internal degree 1, then our graph looks like a dumbbell (an edge (a, b) with all remaining vertices attached either to a or to b). This case should also be considered separately.

The solutions for the two special cases are left to the reader as an easy exercise.

562D - Restructuring Company

(in mirror: 566D - Restructuring Company)

This problem allows a lot of solutions with different time complexities. Let's describe a solution in O((n + q) log n).

Let's first consider the problem with only queries of the second and third types. It can be solved in the following manner. Consider a line consisting of all employees from 1 to n. An observation: any department looks like a contiguous segment of workers. Let's keep those segments in any logarithmic data structure such as a balanced binary search tree (std::set or TreeSet). When merging departments from x to y, just extract all segments lying in the range [x, y] and merge them. For answering a query of the third type, just check whether employees x and y belong to the same segment. In this manner we get a solution of the easier problem in O(log n) amortized time per query.

When adding queries of the first type we in fact allow several segments to correspond to the same department. Let's add a DSU for handling equivalence classes of segments. Now a query of the first type is just a DSU merge of the departments that x and y belong to. Also, for queries of the second type it's important not to forget to call merge on all extracted segments.

So we get a solution in O((n + q) log n) time.

562E - Max and Min

(in mirror: 566G - Max and Min)

Consider the following geometrical interpretation. Both Max and Min have a set of vectors from the first quadrant of the plane, and there is a point (x, y). During his turn Max may add any of his vectors to the point (x, y), while Min may subtract any of his vectors. Min wants the point (x, y) to get strictly inside the third quadrant; Max tries to prevent him from doing so. Denote Max's vectors by Mxi and Min's vectors by Mnj.

Consider the following obvious sufficient condition for Max to win. Consider some non-negative direction in the plane, i.e. a vector (a, b) such that a, b ≥ 0 and at least one of the numbers a, b is non-zero. If among Max's vectors there is a vector Mxi that is no shorter than any of Min's vectors Mnj along the direction (a, b), then Max can surely win. Here by the length of a vector v along a direction (a, b) we mean the scalar product of v and (a, b).

Indeed, let Max always use that vector Mxi. Then during one pair of turns of Max and Min, the point (x, y) is shifted by a vector Mxi - Mnj for some j, so its shift along the vector (a, b) equals (Mxi - Mnj, (a, b)) = (Mxi, (a, b)) - (Mnj, (a, b)) ≥ 0. Observing that initially the scalar product ((x, y), (a, b)) = ax + by > 0, we see that ax + by stays strictly positive at all times. This means that Min will never be able to make both x and y negative (since that would mean ax + by < 0).

Now let's formulate a kind of converse statement. Suppose a Max vector Mxi lies strictly inside the triangle formed by Min vectors Mnj and Mnk. In particular, the endpoint of Mxi can't lie on the segment [Mnj, Mnk], but Mxi may be collinear with one of the vectors Mnj and Mnk.

Note that since Mxi lies strictly inside the triangle formed by the vectors Mnj and Mnk, it can be extended to a vector Mx'i whose endpoint lies on the segment [Mnj, Mnk]. By the linear dependence of Mx'i on Mnj and Mnk we have Mx'i = (p / r)Mnj + (q / r)Mnk, where p + q = r and p, q, r are non-negative integers. This is equivalent to rMx'i = pMnj + qMnk. It means that if for every r turns of Max playing Mxi we respond with p turns of Min playing Mnj and q turns of Min playing Mnk, then the total shift equals -pMnj - qMnk + rMxi = -rMx'i + rMxi = -r(Mx'i - Mxi), which is a vector with strictly negative components. So we are able to block that Max turn, i.e. it gives Max no advantage.

The natural wish is to build the convex hull of all Min turns and to consider every Max turn with respect to it. If a Max turn lies inside the convex hull of Min turns, then by the previous fact this turn is useless to Max. Otherwise, there are two possibilities.

First, this turn may intersect the hull but go out of it somewhere; in this case this Max turn is no shorter than all Min turns in some non-negative direction (specifically, in its own direction), so Max wins.

On the other hand, the Max vector may lie off to one side of the convex hull of Min turns; suppose the vector Mxi lies to the left of the Min turns. This case requires special analysis. Consider the topmost of the Min vectors, Mnj. If Mxi is no lower than Mnj, then by the first fact Max is able to win using only this vector. Otherwise the difference Mxi - Mnj is a vector with strictly negative components, with which we are able to block that Max vector.

So the full version of the criterion for Min to be the winner looks as follows. Consider the convex hull of Min turns and extend it to the left from the topmost point and downward from the rightmost point. If all Max turns lie strictly inside the resulting figure, then Min wins; otherwise Max wins.

562F - Matching Names

(in mirror: 566A - Matching Names)

Build a trie of all names and pseudonyms. Mark with red all vertices corresponding to names, and with blue all vertices corresponding to pseudonyms (a single vertex may be marked several times, possibly with different colors). Note that if we match a name a and a pseudonym b, then the quality of the match is lcp(a, b) = (1/2)(2·lcp(a, b)) = (1/2)(|a| + |b| - (|a| - lcp(a, b)) - (|b| - lcp(a, b))), which equals the constant (1/2)(|a| + |b|) minus half the length of the path between a and b over the trie. So what we need is to connect all red vertices with blue vertices by paths of minimum possible total length.

This can be done with the following greedy procedure: if a vertex v has x red and y blue vertices in its subtree, then we must match min(x, y) red vertices of its subtree to min(x, y) blue vertices of the same subtree and pass the remaining max(x, y) - min(x, y) red or blue vertices to the level above. The correctness of this algorithm is easily shown by the following idea: give each edge of each path a direction from the red vertex to the blue one. If some edge receives two different directions after this procedure, we may cross-exchange the two paths through it so that their total length decreases by two.

So we get a solution in O(sumlen), where sumlen is the total length of all names and pseudonyms.

562G - Replicating Processes

(in mirror: 566B - Replicating Processes)

This problem may be solved by simulating the replication process. Let's keep a list of all replications that may be applied at the current step. Apply an arbitrary replication, then update the list by adding/removing all newly suitable or no longer suitable replications touching the servers affected at the current step. The list of replications may be kept in a doubly linked list, a data structure that allows adding and removing elements and extracting an arbitrary element in O(1).

The proof of correctness of this algorithm is not hard and is left as an exercise (maybe it will appear here later).

We get a solution in O(n) operations (though the constant hidden by the O-notation is pretty large; the input size is already 12n numbers, and the solution itself hides a constant of 36 or higher).

Tutorial of VK Cup 2015 - Finals

• +100

By Zlobober, 6 years ago, translation,

The registrations made before 00:00 have been deleted, because the form didn't support teams. Please register again if your registration has been affected.

The VK Cup 2015 Final Round ended two days ago. It's very likely that you've seen our previous posts. The last event to happen is the online mirror of the final round. It will be held on Thursday, July 30th, at 19:00 Moscow time. Individual contestants as well as teams of two may participate in this round. The round lasts three hours; the problems will be shuffled compared to the original order. Participants of both divisions may take part, but we want to warn 2nd division contestants that the problemset may be hard for them. This round is a rated Codeforces round.

Finally, we want to thank all the people who made this Championship happen. The following VK developers, Codeforces team members and other people offered their help in creating and preparing problems: PavelKunyavskiy, burunduk3, Dmitry_Egorov, Kurpilyansky, dark_ai, MikeMirzayanov, Zlobober, MaximShipko, kuviman, Nickolas, Errichto, sankear and malcolm. We want to thank the people who helped us greatly by testing our rounds and giving great advice: winger and AlexFetisov. Also we want to say thank you to all VK members who helped us run the onsite Finals: burunduk3, Burunduk2, KOTEHOK and many others. Thanks to all of them!

Good luck and have fun on our Online Mirror!

UPD: Note that during the round each team is allowed to use only one computer. This means that you may code/use the console/succeed in solving problems in any other manner using only one computer at a time. The only thing allowed on a second computer is reading the statements.

UPD2: Since this is a team contest, specially for your convenience we publish an encrypted zip archive with pdf statements of the problems: vkcup2015-mirror-statements.zip. When the round starts, we'll publish the password for it.

UPD3: The round will use dynamic scoring with a 250-point step.

UPD4: Due to technical reasons the round starts at 19:20 Moscow time.

UPD5: Password for statements archive: vkcup4ever. Good luck!

UPD6: The online mirror has ended! Congratulations to the winners:

1. rng_58
2. Zenith: I_love_Hoang_Yen, ngfam_kongu
3. OrOrZZZ!: zld3794955, KFDong
4. Petr team: Petr, ilyakor
5. jcvb_matthew99: matthew99, jcvb

Also, my personal respect goes to the team "Petr team: Petr, ilyakor" for the only solution to problem E in this mirror, and to user rng_58 and the team "Excited: YuukaKazami, Retired_MiFaFaOvO" for the two correct solutions to problem C.

Congratulations to user rng_58, who showed that a single contestant can compete with teams of two!

Rating will be updated shortly.

UPD7: Editorial!

Announcement of VK Cup 2015 - Finals

• +195

By Zlobober, 6 years ago, translation,

Hi everybody!

Yesterday, on July 24th, we started the VK Cup 2015 Finals! During the day all participants of the competition successfully arrived in St. Petersburg; those who got here earlier enjoyed a great trip over the roofs of this beautiful city. And now we are waiting for the practice round that will happen in a few hours, giving participants a chance to test their contest environment and do their best solving several usual (and not so usual) problems. After lunch, contestants will compete in the first round of the Finals, called CodeGame Coding, where they have to write a strategy for a battle wizard.

During today and tomorrow not all Codeforces features will be available. This weekend you may take a rest from writing contests and enjoy the results of one of the biggest competitions of 2015.

The results of the practice round will be available to spectators at this link. Stay tuned!

• +9

By Zlobober, history, 6 years ago,

Round #309 was first moved from Monday to Tuesday, and now it is moved to Wednesday because of technical reasons on our side and the schedule of other online contests. Don't be confused by that. Stay tuned!

• +148

By Zlobober, history, 6 years ago,

The Google Code Jam Distributed Round starts in 15 minutes. It's a brand new contest format; the top 10 will advance to its own onsite final round in Seattle.

Let's discuss problems here after round finishes.

GL & HF everybody!

• +26

By Zlobober, history, 6 years ago, translation,

Today at 14:00 UTC the third round of Google Code Jam will take place. The top 25 contestants will be invited to the final round in Seattle.

Good luck to everybody!

• +105

By Zlobober, history, 6 years ago,

Introduction

A checker is a program that should be written when a task allows more than one correct solution. Although it may seem very easy to write a checker, there are lots of important technical details that are easy to forget unless you use a specialized library like testlib.h.

A common convention is that a checker is a program taking three command-line arguments: the testdata filename, the participant output filename and the jury answer filename. The checker should read the contents of the input, output and answer, decide whether the participant's answer is correct (and optimal, compared to the jury's answer, if the task admits non-optimal answers) and return one of several pre-defined verdicts:

Ok (_ok): The output is correct, follows the output format and represents an optimal answer (if applicable in this problem).

Wrong Answer (_wa): The output is wrong or incorrect.

Presentation Error (_pe): The output doesn't follow the output format specification of the task. On Codeforces this verdict is replaced by Wrong Answer, since it's usually hard to distinguish Presentation Error from Wrong Answer.

Partially Correct (_pc(score)): If the task has partial scoring, this verdict is used when the solution is partially correct, or to give the solution some number of points. Here score should be an integer between 0 and 200, where 0 represents the lowest mark (no points) and 200 represents the highest mark (maximum possible number of points).

Fail (_fail): This verdict means that the checker encountered a critical internal error, or that the jury's answer is incorrect, or that the contestant found a better answer than the jury. It signals that something has gone wrong and requires special investigation by the jury.

Usually the verdict returned by a checker is indicated by the return code of its executable, but it may also be transferred to the testing system in many other ways: by creating a special xml file with the checker outcome, by writing to stdout, or otherwise. When using testlib.h to write a checker, all those system-dependent ways are combined in a single call quitf(VERDICT, "comment", ...).

Simplest checker example

Problem statement: You are given two integers a and b ( - 1000 ≤ a, b ≤ 1000). Find their sum and output it.

Let's write a checker for this problem. It will be very simple:

#include "testlib.h"

int main(int argc, char* argv[]) {
    // This command initializes the checker environment.
    registerTestlibCmd(argc, argv);
    // Now there are three global variables specifying testlib streams:
    // inf - stream with the testdata.
    // ouf - stream with the contestant output.
    // ans - stream with the jury answer.
    // All those streams provide a similar interface for reading data.

    // This call reads a single integer from the participant output that
    // should be between -2000 and 2000. If it doesn't belong to the specified
    // range, the checker finishes with verdict _pe and a comment saying that
    // [sum of numbers] is outside of the specified range.
    int pans = ouf.readInt(-2000, 2000, "sum of numbers");

    // This call reads a single integer from the jury output. Here we suppose
    // that the jury's answer is correct and we do not need to verify it.
    int jans = ans.readInt();

    if (pans == jans)
        quitf(_ok, "The sum is correct."); // This finishes the checker with verdict OK.
    else
        // quitf handles the comment like printf, i.e. you may use specifiers
        // like %d, %s etc. in the comment.
        quitf(_wa, "The sum is wrong: expected = %d, found = %d", jans, pans);
}


Available methods

There are lots of methods useful for writing checkers.

stream.readXXX: All methods of the form readXXX (readInt, readLong, readDouble, readToken etc.) are common to all testlib uses: checkers, validators and interactors. TODO: put all such methods on a separate page.

void quit(TResult verdict, string message);
void quit(TResult verdict, const char* message);
void quitf(TResult verdict, const char* message, ...);
Finishes the checker with the given verdict and comment.

void quitif(bool condition, TResult verdict, const char* message, ...);
If the condition is true, performs quitf(verdict, message, ...).

void ensuref(bool condition, const char* message, ...);
An equivalent of assert. Checks that the condition is true, otherwise finishes with the _fail verdict. Useful for debugging checkers.

TODO: finish this list.

Suppose you have a task that asks the contestant to find a complex composite answer, not just a single number. Example:

Problem statement: You are given a connected undirected weighted graph with n vertices and m edges. Find a simple path of maximum weight between vertex s and vertex t (s ≠ t) and output it. Sample (the input and output format is clarified in square brackets):

Sample input:
4 5      [n, m]
1 2 4    [edges]
2 4 2
1 4 4
1 3 5
3 4 3
1 4      [s, t]

Sample output:
3        [number of vertices in path]
1 3 4    [path]

#include "testlib.h"
#include <map>
#include <vector>
using namespace std;

map<pair<int, int>, int> edges;

int main(int argc, char* argv[]) {
    registerTestlibCmd(argc, argv);
    int n = inf.readInt(); // there is no need to validate the input file in the checker
    int m = inf.readInt();
    for (int i = 0; i < m; i++) {
        int a = inf.readInt();
        int b = inf.readInt();
        int w = inf.readInt();
        edges[make_pair(a, b)] = edges[make_pair(b, a)] = w;
    }
    int s = inf.readInt();
    int t = inf.readInt();

    // reading and evaluating the jury's answer
    int jvalue = 0;
    vector<int> jpath;
    int jlen = ans.readInt();
    for (int i = 0; i < jlen; i++) {
        jpath.push_back(ans.readInt());
    }
    for (int i = 0; i < jlen - 1; i++) {
        jvalue += edges[make_pair(jpath[i], jpath[i + 1])];
    }

    // reading and verifying the contestant's answer
    int pvalue = 0;
    vector<int> ppath;
    vector<bool> used(n, false);
    int plen = ouf.readInt(2, n, "number of vertices"); // path should at least contain s and t
    for (int i = 0; i < plen; i++) {
        int v = ouf.readInt(1, n, format("path[%d]", i + 1).c_str());
        if (used[v - 1]) // checking that no vertex is used twice
            quitf(_wa, "vertex %d was used twice", v);
        used[v - 1] = true;
        ppath.push_back(v);
    }
    // checking that the path actually goes from s to t
    if (ppath.front() != s)
        quitf(_wa, "path doesn't start in s: expected s = %d, found %d", s, ppath.front());
    if (ppath.back() != t)
        quitf(_wa, "path doesn't finish in t: expected t = %d, found %d", t, ppath.back());
    // checking that each pair of adjacent vertices in the path is connected by an edge
    for (int i = 0; i < plen - 1; i++) {
        if (edges.find(make_pair(ppath[i], ppath[i + 1])) == edges.end())
            quitf(_wa, "there is no edge (%d, %d) in the graph", ppath[i], ppath[i + 1]);
        pvalue += edges[make_pair(ppath[i], ppath[i + 1])];
    }

    if (jvalue != pvalue)
        quitf(_wa, "path weight differs from the jury's one: expected = %d, found = %d", jvalue, pvalue);
    else
        quitf(_ok, "path of weight %d", pvalue);
}


Here are the two main issues with this checker.

1. It believes that the jury's answer is absolutely correct. If the jury's answer is non-optimal and the contestant has really found a better answer, they will get the verdict WA, which is not fair. There is a special verdict Fail exactly for this situation.
2. It contains duplicated code for extracting the answer value of the jury and of the contestant. In this case extracting the value is just one "for" loop, but in a harder task it may be a very complicated subroutine, and, as usual, duplicated code makes it twice as hard to fix, rewrite or change when the output format changes.

In fact, reading an answer value is a subroutine that works exactly the same for the contestant and the jury. That's why it is usually put into a separate function that receives the input stream as a parameter.

Good checker implementation

#include "testlib.h"
#include <map>
#include <vector>
using namespace std;

map<pair<int, int>, int> edges;
int n, m, s, t;

// Reads a path from the given stream, checks its correctness (i.e. that it
// is indeed a correct path from s to t in the graph), calculates its value
// and returns it. If the path is incorrect, it stops the execution with the
// _wa outcome if stream = ouf (contestant) or with the _fail outcome if
// stream = ans (jury).
int readAns(InStream& stream) {
    int value = 0;
    vector<int> path;
    vector<bool> used(n, false);
    int len = stream.readInt(2, n, "number of vertices"); // path should at least contain s and t
    for (int i = 0; i < len; i++) {
        int v = stream.readInt(1, n, format("path[%d]", i + 1).c_str());
        if (used[v - 1]) { // checking that no vertex is used twice
            // stream.quitf works as quitf but it modifies the verdict according
            // to the stream it is invoked on. If stream == ouf then
            // it works exactly like quitf, otherwise if stream == ans then
            // any verdict works like _fail (because it's bad when the jury's answer is incorrect)
            stream.quitf(_wa, "vertex %d was used twice", v);
        }
        used[v - 1] = true;
        path.push_back(v);
    }
    // checking that the path actually goes from s to t
    if (path.front() != s)
        stream.quitf(_wa, "path doesn't start in s: expected s = %d, found %d", s, path.front());
    if (path.back() != t)
        stream.quitf(_wa, "path doesn't finish in t: expected t = %d, found %d", t, path.back());
    // checking that each pair of adjacent vertices in the path is connected by an edge
    for (int i = 0; i < len - 1; i++) {
        if (edges.find(make_pair(path[i], path[i + 1])) == edges.end())
            stream.quitf(_wa, "there is no edge (%d, %d) in the graph", path[i], path[i + 1]);
        value += edges[make_pair(path[i], path[i + 1])];
    }
    return value;
}

int main(int argc, char* argv[]) {
    registerTestlibCmd(argc, argv);
    n = inf.readInt(); // there is no need to validate the input file in the checker
    m = inf.readInt();
    for (int i = 0; i < m; i++) {
        int a = inf.readInt();
        int b = inf.readInt();
        int w = inf.readInt();
        edges[make_pair(a, b)] = edges[make_pair(b, a)] = w;
    }
    s = inf.readInt();
    t = inf.readInt();

    int jans = readAns(ans);
    int pans = readAns(ouf);

    if (jans > pans)
        quitf(_wa, "jury has the better answer: jans = %d, pans = %d", jans, pans);
    else if (jans == pans)
        quitf(_ok, "answer = %d", pans);
    else // (jans < pans)
        quitf(_fail, ":( participant has the better answer: jans = %d, pans = %d", jans, pans);
}


Notice that with this paradigm we also check the jury's answer for correctness. Checkers written in this form are usually shorter and easier to understand and fix. The approach is also applicable when the task output is NO/(YES + certificate).

• Always use the optional range arguments of methods like readInt(), readLong() etc. If you forget to check the range of some variable, your checker may work incorrectly or even crash with a runtime error, which results in Check Failed in the testing system. Bad:

Bad:

// ....
int k = ouf.readInt();
vector<int> lst;
for (int i = 0; i < k; i++)       // This will behave the same for k = 0 as well as for k = -5.
    lst.push_back(ouf.readInt()); // But we don't want to accept a list of length -5, do we?
// ....
int pos = ouf.readInt();
int x = A[pos]; // 100% place for a runtime error. There will surely be a contestant who outputs
                // -42, 2147483456 or some other garbage instead of a correct value.
// ....


Good:

// ....
int k = ouf.readInt(0, n); // Now negative values of k will cause PE.
vector<int> lst;
for (int i = 0; i < k; i++)
    lst.push_back(ouf.readInt());
// ....
int pos = ouf.readInt(0, (int)A.size() - 1); // That's how we prevent an index out of range.
int x = A[pos];
// ....

• Use the optional comment arguments in readXXX methods. With them it is much easier to understand where the checker stopped its execution (if you don't specify them, the message will look like "integer doesn't belong to range [23, 45]" and it's not clear which integer caused such behavior). Also write informative comments in quitf calls; remember that your comments will probably be available to the contestants after the competition (or even during the competition, e.g. while hacking on Codeforces).

• +33

By Zlobober, 6 years ago, translation,

Let's fix a TV channel window and look for a commercial having the largest intersection with it. There are four types of commercials: lying inside the window, overlapping (fully covering) the window, and partially intersecting the window from the left or from the right.

It's easy to determine if there is an overlapping commercial: it's enough to sort the commercials in increasing order of the left end and then, while iterating over them from left to right, keep the maximum value of the right end seen so far. If, when we pass the window j, the current maximum right end is no less than bj, then there exists a commercial overlapping our window, and its value is (bj - aj)cj.

Among all commercials that lie inside our window we need the longest one. This can be done in a similar but slightly harder manner. Let's use the sweepline method. While passing through the right end ri of a commercial, let's assign in some data structure (like a segment tree) the value ri - li at the point li. While passing through the right end of a window, let's calculate the answer for it as the maximum on the segment [aj, bj]. By doing this, we consider all commercials inside the window.

We can process partially intersecting commercials in a similar way. While passing the right end ri of a commercial, let's put the value ri at the point li in our data structure. While passing through the right end bj of a window, let's calculate the answer for it as the maximum on the segment [aj, bj] minus aj.

Among all the computed answers we need to choose the largest one. So we have a solution in O((n + m) log(n + m)).

Challenge. What if there are weights not only on the windows but also on the commercials, and those weights are multiplied by the intersection length? Solve this task efficiently and get a virtual medal!

542B - Duck Hunt

First of all, let's say that the ducks stay still and it is the hunter who moves. Let's define the value F[x][s] as the minimum number of ducks, among those having the right end no further than x, that we can't shoot if the last shot was at the point s. In particular, the value F[x][s] includes all ducks located inside the segment [s + 1, x]. Values F[x][s] for s > x are considered undefined.

Let's look at F[x] as a function of s. This function is defined for all s from 0 to x inclusive. The key idea is to investigate how F[x + 1][·] differs from F[x][·].

Let's first suppose that no duck ends at the point x + 1. Then in the definition of F[x + 1][·] we consider the same set of ducks as in the definition of F[x][·]. That means that all values F[x][s] for s ≤ x stay the same in F[x + 1]. Let's understand what happens with F[x + 1][x + 1]. It's easy to see that a shot at the point x + 1 can't kill any of the ducks ending no further than x + 1 (since we just supposed that no duck ends exactly at x + 1). So, F[x + 1][x + 1] = min F[x + 1][0... x + 1 - r] = min F[x][0... x + 1 - r], where r is the reload time.

Now let's suppose that a duck [l, x + 1] ends at the point x + 1. In this case all values F[x + 1][0... l - 1] should be increased by 1 (since at the moment of a shot at a point s < l the duck [l, x + 1] can't be killed yet). On the other hand, all values F[x + 1][l... x + 1] remain the same since the last shot kills the newly added duck.

Now let's understand how to implement all this stuff. The function F[x][·] is piecewise constant, so it can be stored in a Cartesian tree as a sequence of pairs (beginning of segment, value on segment). Such storage allows us to easily add 1 on a prefix and take the minimum on a prefix of the function.

Now let's say that F[x][·] is defined for all positive values, but the values F[x][s] for s > x do not satisfy the definition of F[x][s] above. In other words, let's just suppose that the last segment in the structure F[x] is infinite to the right.

Let's sweep with the variable x. The function F[x + 1][·] changes very rarely in comparison with F[x][·]. For example, the value F[x + 1][x + 1] is almost always the same as F[x][x + 1] (which is equal to F[x][x] as said above). Indeed, if we suppose that no duck ends at x + 1, then F[x][x] = min(F[x][0... x - r]) and F[x + 1][x + 1] = min(F[x + 1][0... x + 1 - r]) = min(F[x][0... x + 1 - r]) ≤ min(F[x][0... x - r]). So, an interesting event happens only if the value F[x][x - r + 1] is smaller than the whole prefix before it. On the other hand, the value min(F[x][0... x - r]) can't increase by 1 more than n times (each time we pass through the right end of a duck), and so it also can't decrease more than n times (since it is non-negative).

So, the events "we passed through the right end of a duck" and "we should non-trivially recalculate F[x][x]" are linear in number in total. Each of them can be processed in O(log n) operations on the Cartesian tree, which gives us an O(n log n) solution in total. Hooray!

542C - Idempotent functions

In order to solve this task it's useful to understand what the graph corresponding to a function from {1, ..., n} to itself looks like. Let's consider a graph on vertices 1, ..., n with an edge from vertex i to vertex f(i). Such a graph always looks like a set of cycles with several trees leading into those cycles. What should the graph look like for the function to be idempotent? It's easy to see that in such a graph all cycles should be of length 1, and all vertices that are not on cycles of length 1 (i.e. all non-fixed points) should lead immediately to some fixed point.

So, we should satisfy two conditions. First, all cycles should become of length 1 — for that, k should be divisible by lcm(c1, c2, ..., cm), where ci are the lengths of all cycles. Second, all vertices not lying on cycles should start leading to vertices lying on cycles. In other words, k should be no less than the distance from any vertex x to the cycle it falls into (or, which is the same, the length of the pre-period of the sequence f(x), f(f(x)), f(f(f(x))), ...).

So, our task is to find the smallest number k divisible by A that is no less than B, which is not hard at all.

542D - Superhero's Job

The first step is to understand the properties of the Joker function. Its important property is that it is multiplicative: J(ab) = J(a)J(b) for (a, b) = 1, so we can write the value of the function knowing the factorization of the argument: J(p1^k1 · p2^k2 · ... · pm^km) = J(p1^k1)J(p2^k2)...J(pm^km) = (p1^k1 + 1)(p2^k2 + 1)...(pm^km + 1). Let's use this knowledge to solve the task with dynamic programming.

Let's denote by P the set of primes p such that a factor involving p may appear in A = J(x). There are not that many such primes: each divisor d of the number A can correspond to at most one such prime, namely the p such that d - 1 is a power of p (if d - 1 is a prime power at all). Let's calculate the set P and also remember, for each prime p in it, in which powers k the value (p^k + 1) divides A.

Now we can calculate the value D[p][d] equal to the number of ways to get a divisor d of the number A as a product of brackets over the first p primes of the set P. Such a value can be calculated by dynamic programming in O(d(A)^2) time, where d(A) is the number of divisors of A (as was shown above, |P| ≤ d(A)). So, the overall complexity of the solution is O(sqrt(A) + d(A)^2).

Challenge. How does d(A) behave as A increases? What is the maximum value of d(A) over all A ≤ 10^12? (The answer: 6720. A nice estimate that is not an exact asymptotic, though, is that d(A) is about A^(1/3) for numbers of this magnitude.)

Challenge. What is the maximum answer in this task? If you are able to create a test with an answer larger than a million, you will earn my great respect!

542E - Playing on Graph

First, if the original graph isn't bipartite then the answer is -1. Indeed, contracting any pair of vertices in an odd cycle always produces an odd cycle of smaller size, so at some point we will get a triangle that blocks us from doing anything.

On the other hand, each bipartite component can be contracted into a chain whose length is the diameter of the component. Suppose that the pair of vertices (a, b) realizes the diameter of some connected component. Then, by contracting all vertices located at the same distance from a, we can achieve the chain we want.

The last step is to note that the answer for the original graph is the sum of the answers for all connected components, since we can attach them all together. So, the answer is the sum of the diameters of all connected components if all of them are bipartite; otherwise the answer is -1. The solution has complexity O(E + VE) (the first summand is checking bipartiteness, the second one is calculating the diameters by running a BFS from each vertex).

542F - Quest

This task can be solved in a lot of ways. The most straightforward is DP. The task can be seen in the following way: we want to choose some set of vertices as leaves, and for each potential leaf we know the upper bound on its depth and its cost.

Let's sweep over the tree from bottom to top. Let's calculate the value D[h][c]: the maximum possible cost that we can achieve if we stay on level h and have c vertices on it. For the transition, let's fix how many leaves of depth exactly h we take (it's easy to see that among them we should take the several greatest ones). Suppose we take k of them. Then from this state we can move to D[h - 1][⌈(c + k) / 2⌉]. The answer is located in D[0][1], because on level 0 we should have the only vertex: the root of the tree.

The complexity of such a solution is O(n^2 · T).

Challenge Improve the solution above to and then to

• +97

By Zlobober, 6 years ago, translation,

This Sunday, May 3-rd, 19:00 Moscow Time there will be Round 3 of VK Cup 2015 Championship!

As in Round 2, there will also be an online mirror: a rated round available only for Div1 contestants. There will be 6 tasks in random order and a smooth dynamic scoring system.

Round was brought to you by Codeforces team, VK team and yeputons. As usual, we want to thank winger and AlexFetisov for a great testing help.

Top-50 participants of online mirror will get a nice VK Cup T-Shirt!

Good luck and have fun!

UPD1 The round is over! Congratulations to all contestants in top-50, you will get a nice VK Cup 2015 Championship T-Shirt soon! There will be an editorial soon, stay tuned...

UPD2 Finally, the editorial is ready!

Announcement of VK Cup 2015 - Раунд 3

• +375

By Zlobober, 6 years ago, translation,

532A - Berland Miners

We can add n - k miners with height 0 and it won't affect the answer. So we can assume that the numbers of miners and caves are the same.

For every cave let's define ti as the maximal possible height of a miner working in cave i if we don't change any cave. We can calculate it from the root to the leaves with the formula ti = min(tfather, hi).

Let's say we don't change anything. We will try to assign all miners if possible, or to do the best possible assignment otherwise — the one where there are few free (not occupied) caves and they are high (the value ti is big). I will explain later why we want them to be high. Formal definition (you don't have to read the next paragraph):

For every assignment let's sort the free caves by ti. In the best assignment the number of free caves is the minimal possible, and for every position in such a list the free cave in the best assignment has a value ti not lower than in any other assignment. It can be proven that the best possible assignment exists (it's not so obvious, though).

How to find the best possible assignment? Let's sort the caves in ascending order of ti and for every cave assign the tallest free miner who can work there. This gives the best possible assignment. Why? Let's say we've just made the first bad decision (different from the best assignment). It doesn't make sense to leave a cave empty if we can assign someone to it. So we put a miner somewhere and now we won't be able to complete the assignment (we assumed that we've just made a bad decision). From the rule "we put here the tallest possible miner" we know that we couldn't have assigned a taller guy here. Maybe we wanted to assign a shorter miner here, and this "tallest possible" one goes somewhere else? But we can swap them and everything will be fine. So there remains the last option: we don't want to put anyone here. But then we would have to assign our guy to some higher cave, so we can leave that destination cave empty and put him here. To sum up, it's fine to assign the tallest possible free miner while iterating over caves sorted by ti. Almost the same sentences prove another lemma:

If we want to have few free (not assigned) miners and we want them to be short, it's optimal to iterate over the caves in some order and assign the tallest possible free miner to every cave. This works for every order of iterating over the caves, and every order gives us the same set of free miners (but not necessarily the same set of free caves).

Why did we want the free caves to be high? Because to assign everyone, we must change the height of a cave not higher than the lowest free cave. Why? In short: otherwise that lowest free cave would remain free after running our assignment algorithm (described above) on the new tree. But we managed to find the maximal possible height of the lowest free cave. Let's call this value LIM. And we know the minimal set of free miners.

Changing the height of cave i from a to something bigger does something only when ti = a ≤ LIM. Then some changes happen in the set of values ti. There were caves blocked before by cave i, so they had t equal to a. These caves will get a bigger t, so in the set of values t we have a change, e.g. from {5, 5, 5} to {7, 10, 8} (a was equal to 5). Let's throw out the miners from the caves with changed t (maybe some of these caves were empty anyway). If we can't assign the free miners (we found them before) to the new caves, then assigning everything isn't possible. Otherwise it is — we assign them to these caves with changed t, and there are some thrown-out miners left. But all of them were in caves with t = a ≤ LIM, so they are not taller than LIM. And we know that every free cave has tfree ≥ LIM, because LIM is the height of the lowest free cave. So we can put them there.

The solution is to find the result with binary search and to answer the question: can we assign miners to caves while changing one cave by at most D? With our assignment algorithm we calculate the optimal lowest free cave and the set of free miners. Then for every cave we try to increase its height by D if it had t not higher than LIM. It's also important that checking the change of every cave has amortized linear complexity: if increasing the height of cave A affects t of cave B below, then later changing the height of B does nothing — B is blocked by A anyway.

532B - Work Group

This problem can be solved by using dynamic programming over subtrees of the company hierarchy. Denote by D[v][e] the maximum possible efficiency that can be obtained by taking several people from the subtree of v so that the parity of their number is e and the condition from the statement is fulfilled for all already taken people. Then it is easy to calculate D[v][e] from the values of the children of v by considering an intermediate value T[i][e] — the maximum possible efficiency that we can obtain by using the first i subtrees of v with overall parity e.

It's important not to forget that there are two cases: we may take v itself or we may decide not to take it. In the first case it is important that all subtrees have an overall even number of taken people. In the second case there is no such restriction.

532C - Board Game

We will consider three cases:

1) xp + yp ≤ max(xv, yv). In this case Polycarp can be at (0, 0) after xp + yp moves and Vasiliy will always be "behind". It's enough for Polycarp to make any move towards (0, 0), and he is always able to do it. It brings Polycarp closer to (0, 0), and after Vasiliy's move we again have the condition xp + yp ≤ max(xv, yv) fulfilled, so at some moment Polycarp will reach (0, 0). It's impossible that Vasiliy wins, because then our condition would be violated.

2) xp ≤ xv, yp ≤ yv. In this scenario Polycarp must block Vasiliy somehow. He must make such a move that after any of Vasiliy's responses the condition is fulfilled again.

• If xp = xv > 0 he goes to (xp - 1, yp).
• If yp = yv > 0 he goes to (xp, yp - 1).
• Otherwise he makes any move.

With this strategy Vasiliy is unable to get out of our new condition.

3) Otherwise we can consider any shortest path to (0, 0) for Vasiliy. Its length is max(xv, yv). For any cell on this path, Polycarp has a greater distance to it than Vasiliy, so he can't be there before Vasiliy and he can't block him. Vasiliy wins. Alternative explanation: after any possible Polycarp move, Vasiliy can make a move such that neither of conditions (1) and (2) is fulfilled.

532D - Landmarks

First observation: a column crashes only if the distance between its neighbours is greater than 2di, so it doesn't matter where exactly this column is. The only important thing is how far its left and right neighbours are.

For every column C let's calculate whether there exists a subset of columns on the left such that everything is stable between C and the leftmost bearing column. If the answer is yes, then how close can the left neighbour of C be? Then we will know how far the right neighbour can be. We will use dynamic programming.

Slow approach: for every previous column let's check if it can be a neighbour of C. The closest column fulfilling this condition is the best left neighbour of C.

Faster approach: let's denote by far[i] the biggest possible coordinate where the right neighbour of column i can be. In our dp we maintain an extra stack with possible candidates for being the left neighbour of a new column. In this stack, columns are sorted in ascending order by index (and coordinate) and in descending order by far[i]. For every new column we must remove from the top of the stack the columns whose far[i] is too low. Then the last column on the stack is the best left neighbour, and we can calculate the value far for the current column. It is an O(n) algorithm.

Some columns can't be connected with the leftmost bearing column, and for them we have far[i] = 0. If there exists a column with far[i] not less than the coordinate of the rightmost bearing column, then we don't have to add a new column and the answer is 0.

Ok. Now let's run the same dp from right to left. Some columns can be connected with the leftmost bearing column, some others with the rightmost one, and we want to place the new column somewhere between them. The brute-force solution is to check every pair of columns and say: we want these two columns to be the neighbours of the added column. With the values far[i] calculated in the dp we check if such a situation is possible, and we consider half the distance between these two columns as a candidate result.

How to make this last part faster? We create two stacks with the best candidates for the neighbours of the new column: one stack with columns connected to the leftmost bearing column, one with the ones connected to the rightmost one. On these stacks we can find the answer with the two-pointers technique.

The whole solution is linear in time and memory.

532E - Correcting Mistakes

Suppose that S is obtained from W by deleting an earlier symbol than T. Then it is true that W = A + x + B + y + C, S = A + x + B + C, T = A + B + y + C, where x and y are the deleted symbols and A, B and C are some (possibly empty) strings.

Let's calculate A as the longest common prefix of S and T and C as the longest common suffix. Remove both of them from the strings. Now we know that x and y are, respectively, the first letter of S and the last letter of T. Remove them too. The only thing left is to check that the remaining parts of the strings are equal.

Perform this procedure for the pair (S, T) and for the pair (T, S).

532F - Encoding

There are two possible ideas for solving this task.

Fix a pair of letters x and y. Replace all letters x in S with 1s and all remaining letters with 0s. Do the same for y in string T. By using the KMP algorithm or the Z-function, determine all positions where string T can be attached to string S so that there is a match. If this condition is fulfilled for the pair (x, y) and for the pair (y, x), then this position is a possible match position if we use the pair (x, y) (and possibly some other pairs).

Now for each suitable position we need to check if the letters can be distributed into pairs according to the information we know. This can be done in O(sigma), where sigma = 26 is the size of the alphabet. So, this solution works in O(n * sigma^2 + n * sigma) = O(n * sigma^2). It fits in the time limit if the implementation is efficient enough.

Another way is to perform a transformation of both strings that allows us to compare them up to renaming of letters. Let's replace each letter with the distance to the closest equal letter to the left of it (or with -inf if there is no such letter). Now, for the strings to be equal up to renaming, we just need to check that string T matches the substring of S in all positions except, possibly, the first occurrence of each letter in T. This can be done by a modified prefix function or by hashing.

Now suppose we know that in some position string T coincides with a substring of S up to renaming of letters. It's not hard to determine the letter permutation for this renaming (by just checking what matches in S with the first occurrence of each letter in T). Let's check that this permutation is a set of transpositions in O(sigma). So, we have a solution in O(n * sigma).

Tutorial of VK Cup 2015 - Round 2

• +123

By Zlobober, 6 years ago, translation,

This Friday, April 17th, 19:00 there will be Round 2 of VK Cup 2015! For all unofficial participants there will be an online mirror that is a usual rated div1 round. Any div1 contestant who does not participate in the official Round 2 is able to compete in this mirror.

Round consists of 6 problems that are shuffled randomly. There will be a smooth dynamic scoring system.

Round is brought to you by Codeforces team, VK team and user Errichto, who offered his important help as part of his donation for the "Codeforces 5 years" campaign. Significant testing effort was made by user winger.

Good luck and have fun!

UPD: Thanks everybody for participating! Editorial has just appeared. See you on Wild-card Round 2 and mirror of Round 3!

Announcement of VK Cup 2015 - Round 2

• +319

By Zlobober, 6 years ago, translation,

Problemsets slightly differ between the main round and the online mirror; in this editorial, problems follow the order of the main round.

524A - Возможно, вы знаете этих людей?

This problem didn't appear in an online mirror. Its editorial will be left as an exercise in Russian language for those who are curious what it is about =)

524B - Фото на память - 2 (round version)

In the original version there was no constraint that no more than n / 2 friends should lie down. That version can be solved pretty easily: iterate over all possible values of H; for a fixed H we have to minimize the total width of all rectangles on the photo, so for each rectangle we choose the orientation that fits into the photo of height H and has the minimum possible width.

529B - Group Photo 2 (online mirror version)

In the online mirror version the problem was slightly harder. Let's call people with w ≤ h "high", and the remaining people "wide". Let's fix the photo height H and consider the following cases:

• If a high person fits into height H, we leave him as is.
• If a high person doesn't fit into height H, then we have to ask him to lie down, increasing the counter of such people by 1.
• If a wide person fits into height H standing, but doesn't fit lying on the ground, then we leave him standing.
• If a wide person fits into height H both ways, then we first ask him to stand and write into a separate array the value by which the answer decreases if we ask him to lie down: w - h.
• If somebody doesn't fit either way, then such a value of H is impossible.

Now we have several people who have to lie down (from the second case), and if there are too many of them (more than n / 2) then such a value of H is impossible.

After we put down the people from the second case, there can still be some vacant lying positions; we distribute them to the people from the fourth case with the highest values of w - h. Then we calculate the total area of the photo and relax the answer.

524C - The Art of Dealing with ATM

The intended solution is the following. For each possible value x that we can get, write a pair (x, m), where m is the number of bills needed to achieve this value. Sort this array in ascending order of x and leave only the best possible number of bills for each value of x. Then, to answer a query, we iterate over the first summand of the resulting sum and look for the remainder using binary search. An alternative way is the two-pointers method of finding a pair of numbers with a given sum in an array, which works in amortized O(1) time. Check that we used no more than k bills in total and relax the answer if needed.

524D - Social Network

Let's proceed greedily in the following way. Iterate over all requests in chronological order and try to associate each request with a new person. Of course, we can't always do that: when there are already M active users on the site, we should associate the request with some existing person. Now we need to choose who it will be.

Let's show that the best way is to associate the request with the most recently active person. Indeed, such a "critical" state can be represented as a vector of M numbers: the times since the last request of each active person, in descending order. If we are currently in the state (a1, a2, ..., aM), then we can move to one of the M new states (a1, a2, ..., aM - 1, 0), (a1, a2, ..., aM - 2, aM, 0), ..., (a2, a3, ..., aM, 0), depending on whom we associate the new request with. We can see that the first vector is component-wise larger than the other ones, so it is better than the other states (since a larger number in some component means that this person will probably disappear earlier, giving us more freedom in further operations).

So, all we have to do is to simulate the process, keeping all active people in some data structure together with the times of their last activity. As such a structure one can use anything implementing the priority-queue interface (priority_queue, set, segment tree or anything else). The complexity of such a solution is O(n log M), where n is the number of requests.

524E - Rooks and Rectangles

Let's understand what it means that some cell isn't attacked by any rook: there exists a row and a column of the rectangle without rooks on them. It's hard to check this condition, so it is a good idea to check its opposite. We have just shown that the rectangle is good if one of the two conditions holds: there is a rook in each of its rows, or there is a rook in each of its columns.

We can check these conditions separately. How can we check that, for a set of rectangles, there is a rook in each row? This can be done by sweeping a vertical line from left to right. Suppose we are standing at the right side of a rectangle located in rows from a to b with the left side in column y. Then, if we denote by last[i] the position of the last rook that appeared in row number i, the criterion for the rectangle is min(last[a..b]) ≥ y. That means we can keep the values last[i] in a segment tree and answer each rectangle in logarithmic time. Similarly for columns. This solution answers all queries offline in time O((q + k) log(n + m)).

524F - And Yet Another Bracket Sequence

The main idea is that a bracket sequence can be seen as a sequence of prefix balances, i.e. a sequence (ai) such that ai + 1 = ai ± 1.

Calculate the number of opening brackets A and closing brackets B in the original string. If A ≥ B, then the string can be fixed by adding A - B closing brackets at the end and cyclically shifting the resulting string to the point of balance minimum; if A ≤ B, then the string can be similarly fixed by adding B - A opening brackets at the beginning and then properly shifting the whole string. It's obvious that it is impossible to fix the string using a smaller number of added brackets. So we know the size of the answer; now we need to figure out what it looks like.

Suppose that we first cyclically shift and only then add brackets, and suppose that we add x closing brackets. Consider the following two facts:

• If it is possible to fix a string by adding closing brackets at some x positions, then it is possible to fix it by adding x closing brackets at the end of the string.
• Among all strings obtained from a given one by adding closing brackets at x positions, the minimum is the one obtained by putting all x closing brackets at the end.

Each of these statements is easy to prove. They give us the fact that in the optimal answer we put the closing brackets at the end of the string (after rotating the initial string). So we have to consider the set of circular shifts of the original string that become correct bracket sequences after adding x = A - B closing brackets at the end, and choose the lexicographically smallest among them. Comparing circular shifts of a string is a problem that can be solved with a suffix array. Another way is to find the lexicographical minimum among them by using hashing and binary search to compare two circular shifts.

The case A ≤ B is similar, except that the opening brackets should be put at the beginning of the string.

So, the overall complexity is O(n log n).

Tutorial of VK Cup 2015 - Round 1

• +90

By Zlobober, 6 years ago, translation,

527A - Playing with Paper

It's easy to see that the described process is equivalent to the following loop:

while a > 0 and b > 0:
    if a ⩾ b:
        a = a - b
    else:
        b = b - a
    ans = ans + 1


But such a naive approach will obviously lead to the verdict TLE, since it makes ~10^12 operations even on the third sample test. The key idea is to replace the repeated subtraction operations with integer division. This leads to a logarithmic-time solution that looks similar to the Euclidean algorithm:

while a > 0 and b > 0:
    if a ⩾ b:
        ans = ans + a div b
        a = a mod b
    else:
        ans = ans + b div a
        b = b mod a


527B - Error Correct System

The first observation is that the new Hamming distance cannot be less than the old one minus two, since we change only two characters. So the task is actually to determine whether we can decrease the distance by two, by one, or not at all.

The decrease by two is possible if there are two positions with the same two letters in the two strings but appearing in the opposite order (like "double" <-> "bundle").

If there are no such positions, then we just need to check whether we can decrease the distance by one. This can be done by "fixing" a character that stands in a wrong position, like in "permanent" <-> "pergament" (here n stands in a wrong pair with m, and there is also an unmatched m, so we may fix this position).

Otherwise, the answer is to keep everything as it is. The implementation can keep, for each pair (x, y) of symbols, a position where such a pair appears in S and T, and then carefully check the conditions above.

528A - Glass Carving

Obviously, the largest glass piece at any moment is the product of the largest horizontal segment and the largest vertical segment. One possible solution is to carefully implement what is described in the statement, keeping all horizontal and all vertical segments in a priority queue, std::set, or some other logarithmic data structure. This solution works in O(k log k).

But there is also a nice linear solution if we answer all queries in reverse order: suppose the segments are not being cut, but merged. In this case we may keep the horizontal and vertical cut lines in doubly linked lists and track the current maximum (which can only increase, becoming equal to the newly merged segment). This solution works in O(k + n + m).
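A Python sketch of the reverse solution, assuming each cut is given as ('V', x) or ('H', y) (this input encoding is an assumption for the sketch):

```python
def glass_carving(w, h, cuts):
    # Largest piece area after each cut, computed in reverse:
    # start from the fully cut glass and merge segments back.
    vs = [x for d, x in cuts if d == 'V']
    hs = [y for d, y in cuts if d == 'H']

    def setup(limit, positions):
        # doubly linked list over the cut coordinates 0, ..., limit
        coords = [0] + sorted(set(positions)) + [limit]
        idx = {c: i for i, c in enumerate(coords)}
        prev = list(range(-1, len(coords) - 1))
        nxt = list(range(1, len(coords) + 1))
        best = max(coords[i + 1] - coords[i] for i in range(len(coords) - 1))
        return coords, idx, prev, nxt, best

    V, H = setup(w, vs), setup(h, hs)
    bestv, besth = V[4], H[4]
    answers = []
    for d, p in reversed(cuts):
        answers.append(bestv * besth)
        coords, idx, prev, nxt, _ = V if d == 'V' else H
        i = idx[p]                   # remove cut p: merge its two neighbors
        prev[nxt[i]] = prev[i]
        nxt[prev[i]] = nxt[i]
        merged = coords[nxt[i]] - coords[prev[i]]
        if d == 'V':
            bestv = max(bestv, merged)
        else:
            besth = max(besth, merged)
    answers.reverse()
    return answers
```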

528B - Clique Problem

One may think that this task is about graph theory, but after some investigation and several equivalent restatements it can be reduced to a well-known greedy problem.

Initially, two points may lie together in the set only if they are not too close, i.e. |xi - xj| ≥ wi + wj. This is equivalent to the following condition: consider the interval of radius wi centered at point xi and call it the interval of point i. Then the statement says exactly that no two chosen intervals may intersect.

This task is well-known and can be solved greedily after sorting segments in ascending order of right endpoint:

sort segments S in ascending order of S.x + S.w

last = 0
ans = 1
for i = 1..n - 1:
    if S[i].x - S[i].w ⩾ S[last].x + S[last].w:
        last = i
        ans = ans + 1


It’s easy to prove that this solution is correct: among all ways to choose the first k segments, the best one minimizes the x-coordinate of the right endpoint of the last chosen segment (since it restricts us in the least possible way).
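The greedy above can be sketched in Python, with each point given as a pair (x, w):

```python
def max_clique(points):
    # Maximum number of pairwise non-intersecting intervals [x - w, x + w],
    # chosen greedily in ascending order of right endpoint x + w.
    segs = sorted(points, key=lambda p: p[0] + p[1])
    ans = 0
    reach = float('-inf')  # right endpoint of the last chosen interval
    for x, w in segs:
        if x - w >= reach:
            ans += 1
            reach = x + w
    return ans
```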

528C - Data Center Drama

The problem asks you to add the minimum number of edges to the given connected undirected graph (possibly with loops and duplicate edges) and choose a direction for each edge so that both the in-degree and out-degree of every vertex are even.

The first idea is that the resulting graph, before we choose directions (but after we add some edges), contains an Euler circuit, since all degrees are even. That’s almost what we need: if the Euler circuit contains an even number of edges, we may direct them as follows: a <- b -> c <- d -> e … It’s easy to see that each appearance of a vertex in this circuit adds 2 to either its in-degree or its out-degree, so all resulting degrees are even.

But if the Euler circuit is odd (meaning that there is an odd number of edges in the graph), we must add one extra edge before we continue. The easiest way is to add a loop from vertex 0 to itself: it doesn’t break the Euler tour, but now the tour length is even, so everything is fine.

Now we should think about how to add edges optimally. It’s easy to see that the optimal way is to first fix all odd vertex degrees (i.e. combine all odd-degree vertices into pairs and put an edge in each pair), and then possibly add an extra loop as described above. The last part is to actually find an Euler circuit and print the answer.

528D - Fuzzy Search

There were issues with this task. The intended constraints were actually n, m, k ≤ 500000, and the intended solution used the Fast Fourier Transform, which leads to O(n log n) running time. But unfortunately the statement contained wrong constraints, so we reduced the input size during the round. Nevertheless, we will add the harder version of this task and you will be able to submit it shortly.

The key idea is to reduce this task to polynomial multiplication. Let’s solve the task in the following manner: for each position i of the string S and each character c from “ATGC” we calculate match(c, i), the number of characters c in T that have a matching symbol in S if we put string T at position i. Then the criterion for an occurrence at position i is match(A, i) + match(T, i) + match(G, i) + match(C, i) == |T| (which means exactly that each character of T placed at position i has a corresponding character in S).

Now let’s find out how to calculate match(c, i). Keep only the characters c and “not c” in both strings and denote them by 1 and 0 respectively. Let’s also spread each 1 in string S by distance k to the left and to the right. For example, for k = 1 and the sample string AGCAATTCAT, the bit vector corresponding to the character A will be 1111110111, and for the character C it will be 0111001110. This bit vector can be calculated in O(n) by putting two events “+1” and “-1” at positions x - k and x + k for each 1 at position x in the original string S, and then sweeping over the string from left to right and processing those events.
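The event sweep can be sketched in Python (building the bit vector as a string for clarity; the function name is hypothetical):

```python
def spread_bitvector(s, c, k):
    # For each position of s, outputs '1' if some character c lies within
    # distance k, using the "+1"/"-1" event sweep in O(n).
    n = len(s)
    delta = [0] * (n + 1)
    for i, ch in enumerate(s):
        if ch == c:
            delta[max(0, i - k)] += 1          # event "+1" at x - k (clipped)
            delta[min(n - 1, i + k) + 1] -= 1  # event "-1" just after x + k
    out, cur = [], 0
    for i in range(n):
        cur += delta[i]
        out.append('1' if cur > 0 else '0')
    return ''.join(out)
```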

Now our task is reduced to finding all positions where the bit vector of T is a submask of the bit vector of S. Under the constraints n, m, k ≤ 200000 this can be done using bitsets in O(m(n - m) / 32). Nevertheless, this task can also be seen as computing the product of the polynomials corresponding to S and reversed(T). We leave this as an exercise for those who decide to submit the harder version of this task.

528E - Triangles 3000

Let’s draw a bounding box that contains all intersection points. Fix a triangle and consider the three angles shown in the picture. Compute the area of the intersection of each of those angles with the bounding box and call it the “area of the angle”. Then it’s easy to see that those three angles are the complement of the triangle inside the bounding box, i.e. the triangle area equals the bounding box area minus the three angle areas.

This leads us to the idea of solving the task by carefully calculating, for each angle formed on the plane, how many times it appears in the total answer when we sum the values (S - angle_area(a, b) - angle_area(b, c) - angle_area(c, a)) over all triples (a, b, c) of lines.

Actually, an angle is counted as many times as there are lines that intersect both sides of its right adjacent angle. So, our task is reduced to calculating, for each angle on the plane, how many lines intersect its sides (i.e. its rays).

This can be done in O(n² log n) by fixing the first side of the angle, adding the other lines in ascending order of polar angle, and keeping the number of lines that intersect the base line to the left and to the right. The key idea is that the exact one of the four angles formed by a pair of lines (a, b) that is crossed by some third line c can be determined by two numbers: its polar angle alpha and the x-coordinate of its intersection with a. Further details are shown in the picture below.

There is also a nice short O(n²) solution from enot110 here.

• +55

By Zlobober, 6 years ago, translation,

Hi everybody! There will be a Codeforces Round for both divisions today at the standard time, prepared by me.

My teammates from the ICPC team Moscow SU Trinity, sankear and malcolm, helped me a lot in preparing this round, and there was also lots of useful advice from MikeMirzayanov, applause for them. The English translation was made by our veteran translator Delinur.

As usual, there will be five tasks for two hours. The scoring will be announced later.

See you at the round! I hope that each contestant will find something nice among the problems.

UPD: Scoring is standard.

UPD2: Due to technical issues the round is delayed by 15 minutes. We are sorry for the inconvenience.

UPD3: We are sorry for the inconvenience with the task Div1-D. Pretest #16 didn't satisfy the constraints n, m, k <= 200000. The system testing for the first division will be delayed. If you think you were affected by this test, you may write me a message and we will make this round unrated for you.

The system testing for the second division will happen shortly, as usual.

UPD4: System testing is complete. Congratulations to the winners!

Div1:

Div2:

UPD5: The English editorial has been added.

• +368

By Zlobober, 6 years ago, translation,

Note that the schedule has been changed for the New Year contest Good Bye 2014. The round starts on December 30th at 18:00 MSK and lasts 2 hours 30 minutes. It will also be a combined-division round.

Wait for the announcement for further details. New Year is coming!

• +434

By Zlobober, 6 years ago, translation,

Hi everybody!

Today at 12:00 Moscow time there will be Codeforces Round #279, dedicated to second-division contestants. The round is based on the problems of the Saratov second round of the All-Russian School Olympiad in Informatics 2014-2015, which is held at the same time in Saratov.

This round was brought to you by ikar, HolkinPV, IlyaLos and fcspartakm, who are all members and ex-members of the Saratov SU 2 team.

There will be 6 tasks for 2 hours 30 minutes. Scoring will be announced just before the round starts.

The round will be rated for second-division participants. First-division contestants may participate out of competition.

UPD: Scoring: 500-1000-1500-2000-2000-2500.

Good luck!

• +149

By Zlobober, 7 years ago,

Code Jam R3 starts today at 14:00 UTC. Don't miss!

The top 25 contestants will advance to the onsite round this August at Google's LA office.

• +67

By Zlobober, 7 years ago,

Code Jam R2 starts today at 14:00 UTC. Don't miss!

The top 1000 contestants will receive t-shirts.

• +81

By Zlobober, 7 years ago, translation,

It's not the first time that I've received a round notification after the round ends, or haven't received one at all.

That's not good =(

• +138

By Zlobober, 7 years ago, translation,

Tomorrow at 11:00 UTC-4 there will be SRM #613. Don't miss it!