Xellos's blog

By Xellos, history, 5 months ago, In English,

UPD: Contest over. Did you participate?

Online Physics Brawl will take place on November 30th. Registration is possible until November 28th.

It's a team competition intended for secondary school students from ~~Yugoslavia~~ ~~Czechoslovakia~~ the Czech Republic and Slovakia, but there's a category for foreign secondary schools and an open category for anyone. Given the target audience, I don't think it'd be impossibly hard for CF users.

You can find the rules and past problems at online.fyziklani.cz. For each problem, you only submit a number, so programming knowledge may be useful (but the problems are fully solvable without it).

I'm a minor problemsetter/tester and the translator this year.


By Xellos, history, 7 months ago, In English,

I got to IOI solving — and upsolving — a bit late and only recently finished it (with the help of various comments). One thing I didn't really see there was: how would you have done?

Here are my comments about the problems:

  • aliens: a common DP + convex hull trick; apparently, gets TLE for the 5th subtask and even O(NK) has trouble fitting within the TL; 60pts

  • shortcut: I knew that there's binsearch (from a similar TC problem); afterwards, finding a solution isn't hard, but improving it enough to get more points is; 93pts

  • railroad: mfw — I didn't move beyond a bitset DP; even though I had a lot of ideas including what turned out to be the correct one (finding a flow), I thought it'd be some kind of greedy, but those are numerous and hard to check in this case; the last subtask was very easy after solving the 3rd; 34pts

  • messy: the bounds are 7·2^7 with N = 2^7, so it has to be D&C; full score

  • paint, molecules: easy 100pt problems

So that'd give me 7th place.

The first solution I got 93pts for in shortcut was this: when checking if the diameter is  ≤ D and considering i to be the left endpoint, find the largest j = j1(i) such that a shortcut between i and j gives all distances within the created cycle  ≤ D, the largest j = j2(i) such that all distances between vertices from the cycle and vertices to the left of i are  ≤ D, then symmetrically for each j the smallest i = i3(j) such that all distances between vertices from the cycle and those to the right of j are  ≤ D. There are also distances between pairs of vertices not on the cycle, but those are easy.

For j2(i), we can find the leftmost vertex k2(i) to the right of i such that it's farther from some vertex to the left of i without a shortcut; then, all other vertices on the cycle must be reachable using the shortcut, so pos(j2) - pos(a) + d(a) + C + lft(i) ≤ D for all vertices a between k2 and j2; here, pos is the position of each vertex on the main line (prefix sum of l) and lft(i) is the maximum distance to the left from i; we can extend it to all vertices a to the right of k2, which gives us a bound on pos(j2); we can find j2 by binsearching. Also, k2 can be found using an RMQ table for max(pos+d) and an RMQ-like search.

With j1(i), we can do something similar to finding j2 for each vertex, but only for distances to i exactly, not to the left of i (so there's d(i) instead of lft(i)); j1(i) can be computed as their suffix minima.

We're left with this problem: there's an interval Il(i) and Ir(i) for each i; are there some i, j such that j ∈ Il(i) and i ∈ Ir(j)? This can be solved e.g. using another RMQ table, in which we'll store the minimum left ends of Ir in ranges of j; the right ends of Ir aren't important. Then, for each i, we can find out if the minimum in Il(i) is ≤ i. (Alternatively, we can use a sweepline algorithm and store opened Ir-s in a set<>.)

How simple. /sarcasm

I tried to optimise this, but there was no way to squeeze it even into 97 points — surprisingly, since I didn't really need to optimise to get 93pts.

I got full score on shortcut using an approach based on http://codeforces.com/blog/entry/46518?#comment-310338 (note: extending to all pairs i, j is probably unnecessary and wrong — if i = j, we get a non-path which can be longer than D). The hardest part is computing the smallest j0 = j > i and largest i0 = i < j such that u[i] + v[j] > D, since u[i] and v[j] don't have to be monotonous; afterwards, we can find the smallest/largest sum/difference of x1, x2 by considering all j ≥ j0 / i ≤ i0, for which we only need prefix/suffix maxima of u[] and v[] and check if the answer exists using 2x 2 pointers in O(N).

To compute j0, let's move from i = N - 1 to i = 0 and recompute the sequence of closest increasing v[j]-s using the well-known stack algorithm. Obviously, we can find j0 by binsearching among them, but that gives O(N log N) time complexity. However, if j0(i) < j0(i') for i > i', then we can decrease j0(i') to j0(i) without affecting the bounds on the sum/difference of x1, x2; that means we can keep a pointer on the element in the stack (actually an array in the implementation) corresponding to v[j0] and only move this pointer in one direction, so it's O(N) in total.

This got 97 points instantly, but I still needed some optimisations to get to 100. That time limit was really tight. Other than that, the problems were pretty good and difficult (as expected from Russia); I just missed problems with non-binary scoring per subtask.

My codes: molecules railroad shortcut paint messy ayyliens


By Xellos, history, 9 months ago, In English,

A reminder that AtCoder doesn't only have Grand Contests, but also these. Starting time: in the past.

Also post-contest discussion, I guess.


By Xellos, 9 months ago, In English,
const int *
int const *
int * const
int const * const
const int * function (const arg) const

link to repo

Based on the e-maxx implementation and other stuff I found online, I pieced together a powerful treap. It can be used as a set<>, array, segment tree or (within limits) all of them at once.

Consider a set<> represented as a sorted array of distinct elements. You can insert/remove elements just like in a set<>. You can also find the index of any element in this array, remove an element by index — or insert an element at an index, but that will probably break the sortedness and you can't use the operations which rely on it anymore (you can still insert/remove elements by index). As long as the array is sorted, you can query lower bound / upper bound of any element, given as a pair (element, index).

Next, you can reverse a range, add to a range and query range sums; this will work all the time, but range addition can and reversing will break the sortedness. You can implement your own queries (like min/max) or combined range updates (paint+add) like in a segment tree. Since we need lazy propagation, everything is immutable (you can work around it e.g. with erase/insert) and I decided not to bother with iterators.

I took special care with randomisation. It uses testlib random_t and if equal priorities are detected (at insert), they're all regenerated, but it shouldn't happen with reasonably small data — the priorities are 60-bit, which is far above the Birthday Paradox threshold for just about anything.

The DS is templated over some type T, which has to be comparable and support addition / subtraction / multiplication by int (like a vector space in math).

It supports the following operations, all online in O(log N) (with N being the current number of elements, ofc):

function | action | returns
insert(element) | inserts element into a set | bool (was it inserted?)
insert_pos(index, element) | inserts element at index | void
erase(element) | removes element from a set | bool (was it removed?)
erase_pos(index) | removes element at index | void
get_index(element) | finds the element's index, size() if non-existent | int in [0..size()]
[index] | finds the element at index | T (not T&)
lower_bound(element) | finds the lower_bound() of that element, with index | pair<T,int>
upper_bound(element) | the same with upper_bound() | pair<T,int>
shift(left,right,element) | adds element to everything in the range [left,right] | void
reverse(left,right) | reverses the range [left,right] | void
sum(left,right) | finds the sum of the range [left,right] | T

Also, there's the usual empty(), size(), is_sorted() (returns true if the sortedness hasn't been broken yet) and srand(), which lets you choose your seed if you don't want the testlib default.
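To make that concrete, here's a minimal usage sketch — the class name Treap<> and the header are my assumptions (see the linked repo for the real names), but the calls and return types follow the table above:

    // hypothetical usage of the interface above — the class name Treap<>
    // and the header are assumptions; see the linked repo for the real ones
    #include "treap.h"
    #include <cassert>

    int main() {
        Treap<int> t;
        t.insert(5);                    // returns true (newly inserted)
        t.insert(2);
        t.insert(8);
        assert(t.get_index(5) == 1);    // sorted order: 2 5 8
        auto lb = t.lower_bound(4);     // pair (5, 1)
        assert(lb.second == 1);
        t.shift(0, 1, 10);              // array becomes 12 15 8
        t.reverse(0, 2);                // 8 15 12; sortedness is now broken
        assert(t.sum(0, 2) == 35);
        assert(!t.is_sorted());
        return 0;
    }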

I compared it with STL set<> on a million insert/erase/lower_bound operations; the treapset is ~3-4 times slower (which makes sense, since the structure supports many more operations). I also tested it on a million insert_pos/get_index/shift/reverse/sum operations; it took ~4 seconds locally. It seems to work...


By Xellos, history, 10 months ago, In English,

There has been talk about this already, but in case anyone missed it:

The 2016 edition of Internet Problem Solving Contest is going to take place today (starting time).

It's a 5-hour contest for teams of up to 3 people, but there's also an individual division, so feel free to participate even if you can't find yourself a team at this point.

There's going to be an ACM-like number of problems (12+), ranging from classic algorithmic problems to quite unusual ones. Most problems have an easy subproblem worth 1 point and a hard one worth 2 points (think CodeJam); ties are broken using ACM rules (sum of times).

The practice session is over. The contest is over!

Read the rules or my previous blogposts about this for more info.

Belated, yet necessary warning!

Since this is a 5-hour contest where you can execute codes locally, some inputs will be YUGE (gigabytes). Accordingly, the problems will have fairly high system requirements. Get a good computer. While the official solutions run quite comfortably on my mediocre laptop, if you write something too inefficient, you can encounter a nasty surprise, e.g. a frozen system. It happened to me last year.

If an input is big, you won't have to download it; instead, there will be a generator (typically a Python script, meaning they aren't very fast). It's usually a good idea to run all generators as early as possible — as long as it doesn't slow down your computer too much, you can avoid a situation where you can't submit a problem because the contest ended before your generator finished.

Actually, you should just try to read as many problems as possible and determine your strategy after you can guess your chances well enough.

Some quick stats

11145 submissions
5370 successful submissions (48% success rate)
797 active teams out of 1376 (58% did not give up before even trying)
773 teams with positive score (97% of active teams)
12/13 problems solved by someone
maximum number of fully solved problems: 10/13
lowest non-zero success rate: D (D2: 20%)
highest success rate: C,F (C2,F2: 85%)

highest success rate (easy subproblems): G1 (85%)
lowest success rate (easy subproblems): J1,M1 (25%)

hard problems (<50 teams solved) sorted by difficulty:
E: 0/13
M: 2/10
J: 4/17
H: 11/17
B: 11/18
L: 16/46
K: 40/107


By Xellos, history, 11 months ago, In English,

Seeing as there's no blogpost from chrome about the round a few hours before it...

TC SRM 692 starts soon. If you're up at night, you can participate!

UPD: Contest over! As you may have noticed, I was the writer; it's worth reading the problems of both divisions just for the stories.

Hints:

TreeSums
LinenCenter
HardProof
Dubs


By Xellos, history, 13 months ago, In English,

The spoiler tag is here! Finally, solutions can be spoilered!

For example, my solution to F from CROC 2016 - Elimination Round:

We need LaTeX in spoilers, too.

UPD: Hell yeah, finals! I'll just leave this here.


By Xellos, history, 13 months ago, In English,

Hello!

Today (20.3.), another monthly Cook-off takes place. It should last for 2.5 hours (21:30 to 00:00 IST) — starting time.

Contest link: https://www.codechef.com/COOK68

There will be 5 problems of varying difficulties — I think they aren't that hard (not for lack of trying on my part), so there should be a lot more full scores than last month :D. Feel free to discuss the problems / provide feedback in the comments (after the contest ends).

In order to participate, you only need to have a CodeChef account. If you don't have one, you can register here.

Problem Setter: me
Problem Tester & Russian Translator: Antoniuk (Vasya Antoniuk)
Editorialist: PraveenDhinwa (Praveen Dhinwa)
Mandarin Translator: huzecong (Hu Zecong)
Vietnamese Translator: Team VNOI
Language Verifier & Contest Admin: PraveenDhinwa (Praveen Dhinwa)

Prizes: The prize system has changed this month. The top 10 performers in the Global and Indian categories will get CodeChef laddus; with them, you can claim CodeChef goodies. You can find out more here: https://www.codechef.com/laddu. (If you didn't get your previous goodies, you can contact winners@codechef.com.)

BTW, Codechef is celebrating its 7th birthday this month.

Hopefully, there won't be any technical difficulties this time! Good luck!

The contest is over!

Very simple hints for someone who doesn't want to read editorials yet:

ONOZ, ALTARAY
TWONIM
NPLANAR
KTHCON


By Xellos, history, 17 months ago, In English,

Hints:

div2A: Try conversions between bases.

div2B: Solve a simpler version of the problem where Ai + 1 ≠ Ai for all i.

div1A: What are the shortest paths of the vehicles? What's the shorter of those paths?

div1B: Forget about the ceiling function. Draw points (i, A[i]) and lines between them — what's the Lipschitz constant geometrically?

div1C: Some dynamic programming. Definitely not for the exp. score of one person — look at fixed scores instead.

div1D: Compute dif(v) in O(N) (without hashing) and then solve the problem in O(N^2). You need some smart merges.

div1E: Can you solve the problem without events of type 1 or 2? Also, how about solving it offline — as queries on subsets.

Div. 2 A: Two Bases


It's easy to compare two numbers written in the same base. And our numbers can be converted to a common base — just use the formula (a1 a2 … aN) in base b = a1·b^(N-1) + a2·b^(N-2) + … + aN·b^0.

A straightforward implementation takes O(N + M) time and memory. Watch out, you need 64-bit integers! And don't use pow — iterating is better.
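A sketch of such an implementation, assuming each number is given as its length and base followed by the digits from the most significant one:

    #include <cstdio>

    // reads one number given as: length N, base b, then N digits
    // (most significant first) — iterate instead of calling pow()
    long long readNumber() {
        long long n, b, value = 0, digit;
        scanf("%lld %lld", &n, &b);
        for (long long i = 0; i < n; i++) {
            scanf("%lld", &digit);
            value = value * b + digit;  // needs 64 bits for the max. inputs
        }
        return value;
    }

    int main() {
        long long X = readNumber(), Y = readNumber();
        puts(X < Y ? "<" : X > Y ? ">" : "=");
        return 0;
    }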

Div. 2 B: Approximating a Constant Range


Let's process the numbers from left to right and recompute the longest range ending at the currently processed number.

One option would be remembering the last position of each integer using STL map<>/set<> data structures, looking at the first occurrences of Ai plus/minus 1 or 2 to the left of the current Ai and deciding on the almost constant range ending at Ai based on the second closest of those numbers.

However, there's a simpler and more efficient option — notice that if we look at non-zero differences in any almost constant range, then they must alternate: …, +1, -1, +1, -1, … If there were two successive differences of +1 (or of -1, possibly separated by some differences of 0), then we'd have numbers a - 1, a, a, ..., a, a + 1, so a range that contains them isn't almost constant.

Let's remember the latest non-zero difference (whether it was +1 or -1 and where it happened); it's easy to update this info when encountering a new non-zero difference.

When doing that update, we should also check whether the new non-zero difference is the same as the latest one (if Ai - Ai - 1 = Aj + 1 - Aj). If it is, then we know that any almost constant range that contains Ai can't contain Aj. Therefore, we can keep the current leftmost endpoint l of a constant range and update it to j + 1 in any such situation; the length of the longest almost constant range ending at Ai will be i - l + 1.

This only needs a constant number of operations per each Ai, so the time complexity is O(N). Memory: O(N), but it can be implemented in O(1).
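A sketch of the whole O(N) solution (0-based indices; the input is assumed to be N followed by the array):

    #include <cstdio>
    #include <vector>
    #include <algorithm>
    using namespace std;

    int main() {
        int n;
        scanf("%d", &n);
        vector<int> a(n);
        for (int i = 0; i < n; i++) scanf("%d", &a[i]);
        int ans = 1, l = 0;            // l = current leftmost endpoint
        int lastSign = 0, lastPos = 0; // latest non-zero difference a[i]-a[i-1]
        for (int i = 1; i < n; i++) {
            int d = a[i] - a[i-1];     // guaranteed to be -1, 0 or +1
            if (d != 0) {
                // two equal non-zero differences in a row -> a[lastPos-1]
                // can't be in an almost constant range together with a[i]
                if (d == lastSign) l = max(l, lastPos);
                lastSign = d;
                lastPos = i;
            }
            ans = max(ans, i - l + 1);
        }
        printf("%d\n", ans);
        return 0;
    }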

Bonus: the maximum difference permitted in an almost constant range is an arbitrary D.

Div. 2 C / Div. 1 A: The Two Routes


The condition that the train and bus can't meet at one vertex except the final one is just trolling. If there's a railway 1–N, then the train can take it and wait in town N. If there's no such railway, then there's a road 1–N, so the bus can take it and wait in N instead. There's nothing forbidding this :D.

The route of one vehicle is clear. How about the other one? Well, it can move as it wants, so the answer is the length of its shortest path from 1 to N... or -1 if no such path exists. It can be found by BFS in time O(N + M) = O(N^2).

In order to avoid casework, we can just compute the answer as the maximum of the train's and the bus's shortest distance from 1 to N. That way, we compute max(1, d) = d, where d is the shortest distance of the vehicle without the direct edge; since the answer is ≥ 1, it works well.

In summary, time and memory complexity: O(N^2).
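A sketch with an adjacency-matrix BFS; the loop over all possible neighbours is what makes it O(N^2):

    #include <cstdio>
    #include <vector>
    #include <queue>
    using namespace std;

    int main() {
        int n, m;
        scanf("%d %d", &n, &m);
        vector<vector<bool>> rail(n, vector<bool>(n, false));
        for (int i = 0; i < m; i++) {
            int u, v;
            scanf("%d %d", &u, &v);
            rail[u-1][v-1] = rail[v-1][u-1] = true;
        }
        // BFS from town 0 to town n-1, moving either along railways
        // (useRail = true) or along roads = the complement (useRail = false)
        auto bfs = [&](bool useRail) {
            vector<int> dist(n, -1);
            queue<int> q;
            dist[0] = 0;
            q.push(0);
            while (!q.empty()) {
                int u = q.front(); q.pop();
                for (int v = 0; v < n; v++)
                    if (rail[u][v] == useRail && dist[v] == -1) {
                        dist[v] = dist[u] + 1;
                        q.push(v);
                    }
            }
            return dist[n-1];   // -1 if unreachable
        };
        // the vehicle with the direct edge 1-N finishes in 1 hour;
        // the answer is the other vehicle's shortest path (or -1)
        printf("%d\n", rail[0][n-1] ? bfs(false) : bfs(true));
        return 0;
    }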

Bonus: Assume that there are M1 roads and M2 railways given on the input, all of them pairwise distinct.

Bonus 2: Additionally, assume that the edges are weighted. The speed of both vehicles is still the same — traversing an edge of length l takes l hours.

Div. 2 D / Div. 1 B: Lipshitz Sequence


Let L1(i, j) = |Aj - Ai| / (j - i) for i ≠ j.

Key observation: it's sufficient to consider j = i + 1 when calculating the Lipschitz constant. It can be seen if you draw points (i, Ai) and lines between them on paper — the steepest lines must be between adjacent pairs of points.

In order to prove it properly, we'll consider three numbers Ai, Aj, Ak (i < j < k) and show that one of the numbers L1(i, j), L1(j, k) is  ≥ L1(i, k). W.l.o.g., we may assume Ai ≤ Ak. There are 3 cases depending on the position of Aj relative to Ai, Ak:

  • Aj > Ai, Ak — we can see that L1(i, j) > L1(i, k), since |Aj - Ai| = Aj - Ai > Ak - Ai = |Ak - Ai| and j - i < k - i; we just need to divide those inequalities

  • Aj < Ai, Ak — this is similar to the previous case, we can prove that L1(j, k) > L1(i, k) in the same way

  • Ai ≤ Aj ≤ Ak — this case requires more work:

    • we'll denote d1y = Aj - Ai, d2y = Ak - Aj, d1x = j - i, d2x = k - j
    • then, L1(i, j) = d1y / d1x, L1(j, k) = d2y / d2x, L1(i, k) = (d1y + d2y) / (d1x + d2x)
    • let's prove it by contradiction: assume that L1(i, j), L1(j, k) < L1(i, k)
    • d1y + d2y = L1(i, j)d1x + L1(j, k)d2x < L1(i, k)d1x + L1(i, k)d2x = L1(i, k)(d1x + d2x) = d1y + d2y, which is a contradiction

We've just proved that for any L1 computed for two elements A[i], A[k] with k > i + 1, we can replace one of i, k by a point j between them without decreasing L1; a sufficient amount of such operations will give us k = i + 1. Therefore, the max. L1 can be found by only considering differences between adjacent points.

This is actually a huge simplification — the Lipschitz constant of an array is the maximum abs. difference of adjacent elements! If we replace the array A[1..n] by an array D[1..n - 1] of differences, D[i] = A[i + 1] - A[i], then the Lipschitz constant of a subarray A[l, r] is the max. element in the subarray D[l..r - 1]. Finding subarray maxima actually sounds quite standard, doesn't it?

No segment trees, of course — there are still too many subarrays to consider.

So, what do we do next? There are Q queries to answer, but not too many of them, so we can process each of them in O(N) time. One approach that works is assigning the max. difference D[i] to each subarray — since there can be multiple max. D[i], let's take the leftmost one.

We can invert it to determine the subarrays for which a given D[i] is maximum: if D[ai] is the closest difference to the left of D[i] that's ≥ D[i] or ai = 0 if there's none, and D[bi] is the closest difference to the right that's > D[i] or bi = n if there's none (note the strict/non-strict inequality signs — we don't care about differences equal to D[i] to its right, but there can't be any to its left, or it wouldn't be the leftmost max.), then those are all subarrays D[j..k] such that ai < j ≤ i ≤ k < bi.

If we don't have the whole array D[1..n - 1], but only some subarray D[l..r], then we can simply replace ai by max(ai, l - 1) and bi by min(bi, r + 1). The number of those subarrays is Pi = (i - ai)(bi - i), since we can choose j and k independently.

All we have to do to answer a query is check all differences, take ai, bi (as the max/min with some precomputed values) and compute Pi; the answer to the query is the sum of Pi·D[i]. We only need to precompute all ai, bi for the whole array D[1..n - 1] now; that's a standard problem, solvable using stacks in O(N) time or using maps + Fenwick trees in O(N log N) time.

The total time complexity is O(NQ), memory O(N).
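A per-query sketch: instead of precomputing ai, bi globally and clamping them as described above, it simply recomputes them for the queried subarray with two monotonous-stack passes (note the strict/non-strict asymmetry):

    #include <vector>
    using namespace std;
    typedef long long ll;

    // sum of maxima over all subarrays of D[lo..hi] (0-based, inclusive),
    // i.e. the answer to one query; O(hi - lo) time
    ll answerQuery(const vector<ll>& D, int lo, int hi) {
        int m = hi - lo + 1;
        vector<int> a(m), b(m), st;
        for (int i = 0; i < m; i++) {         // nearest >= to the left
            while (!st.empty() && D[lo + st.back()] < D[lo + i]) st.pop_back();
            a[i] = st.empty() ? -1 : st.back();
            st.push_back(i);
        }
        st.clear();
        for (int i = m - 1; i >= 0; i--) {    // nearest > to the right
            while (!st.empty() && D[lo + st.back()] <= D[lo + i]) st.pop_back();
            b[i] = st.empty() ? m : st.back();
            st.push_back(i);
        }
        ll sum = 0;
        for (int i = 0; i < m; i++)           // P_i = (i - a_i) * (b_i - i)
            sum += (ll)(i - a[i]) * (b[i] - i) * D[lo + i];
        return sum;
    }

    // for a query on A[l..r], call answerQuery(D, l, r - 1),
    // where D[i] = |A[i + 1] - A[i]|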

Bonus: Q ≤ 10^5.

Div. 1 C: Kleofáš and the n-thlon


As it usually happens with computing expected values, the solution is dynamic programming. There are 2 things we could try to compute: probabilities of individual overall ranks of Kleofáš or just some expected values. In this case, the latter option works.

  • "one bit is 8 bytes?"
  • "no, the other way around"
  • "so 8 bytes is 1 bit?"

After some attempts, one finds out that there's no reasonable way to make a DP for an expected rank or score of one person (or multiple people). What does work, and will be the basis of our solution, is the exact opposite: we can compute the expected number of people with a given score. The most obvious DP for it would compute E(i, s) — the exp. number of people other than Kleofáš with score s after the first i competitions.

Initially, E(0, 0) = m - 1 and E(0, s > 0) = 0. How can we get someone with score s in competition i? That person can have any score k from 1 to m except xi (since Kleofáš has that one), each with the same probability 1/(m - 1). The expected values are sums with probabilities P(i, s, j) that there are j people with score s: E(i, s) = Σj j·P(i, s, j).

Considering that the probability that one of them will get score k is 1/(m - 1), we know that with probability j·P(i, s, j)/(m - 1), we had j people with score s before the competition and one of them had score s + k after that competition — adding 1 to E(i + 1, s + k). By summation over j, we'll find the exp. number of people who had overall score s and scored k more: Σj j·P(i, s, j)/(m - 1) = E(i, s)/(m - 1).

Lol, it turns out to be so simple.

We can find the expected value E(i + 1, t) afterwards: since getting overall score t after i + 1 competitions means getting score k in the currently processed competition and overall score s = t - k before, and both distinct k and expectations for people with distinct s are totally independent of each other, we just need to sum up the exp. numbers of people with those scores (which we just computed) over the allowed k: E(i + 1, t) = Σ_{k ≠ x_{i+1}} E(i, t - k)/(m - 1).

The formulas for our DP are now complete and we can use them to compute E(n, s) for all 1 ≤ s ≤ mn. Since each of the E(n, s) people with s smaller than the overall score sk of Kleofáš adds 1 to the overall rank of Kleofáš and people with s ≥ sk add nothing, we can find the answer as 1 + Σ_{s < sk} E(n, s).

This takes O(m^2 n^2) time, since there are O(mn) scores, O(mn^2) states of the DP and directly computing each of them takes O(m) time. Too slow.

We can do better, of course. Let's forget about dividing by m - 1 for a while; then, E(i + 1, t) is a sum of E(i, s) for one or two ranges of scores — or for one range minus one value. If you can solve div1C, then you should immediately know what to do: compute prefix sums of E(i, s) over s and find E(i + 1, t) for each t using them.

And now, computing one state takes O(1) time and the problem is solved in O(mn^2) time (and memory).
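A sketch of one transition with prefix sums; E is the previous row indexed by overall score (size mn + 1), x is Kleofáš's rank in this competition:

    #include <algorithm>
    #include <vector>
    using namespace std;

    vector<double> step(const vector<double>& E, int m, int x) {
        int S = (int)E.size();                 // scores 0..S-1
        vector<double> pref(S + 1, 0.0);
        for (int s = 0; s < S; s++) pref[s + 1] = pref[s] + E[s];
        // sum of E[0..s], with out-of-range s clamped
        auto P = [&](int s) { return pref[max(0, min(S, s + 1))]; };
        vector<double> nE(S, 0.0);
        for (int t = 0; t < S; t++) {
            double sum = P(t - 1) - P(t - m - 1);   // scores k = 1..m
            if (t - x >= 0) sum -= E[t - x];        // minus k = x (Kleofáš's)
            nE[t] = sum / (m - 1);
        }
        return nE;
    }

Start from E(0, 0) = m - 1 (everything else zero), apply this n times with the respective xi and sum up E(n, s) over s < sk to get the answer (plus 1).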

Bonus: Really, how fast can you solve this problem?

Div. 1 D: Acyclic Organic Compounds


The name is really almost unrelated — it's just what a tree with arbitrary letters typically is in chemistry.

If you solved problem TREEPATH from the recent Codechef November Challenge, this problem should be easier for you — it uses the same technique, after all.

Let's figure out how to compute dif(v) for just one fixed v. One more or less obvious way is computing hashes of our strings in a DFS and then counting the number of distinct hashes (which is why there are anti-hash tests :D). However, there's another, deterministic and faster way.

Compressing the subtree Tv into a trie.

Recall that a trie is a rooted tree with a letter in each vertex (or possibly nothing in the root), where each vertex encodes a unique string read along the path from the root to it; it has at most σ sons, where σ = 26 is the size of the alphabet, and each son contains a different letter. Adding a son is done trivially in O(σ) (each vertex contains an array of 26 links to — possibly non-existent — sons) and moving down to a son with the character c is then possible in O(1).

Compressing a subtree can be done in a DFS. Let's build a trie Hv (because Tv is already used), initially consisting only of one vertex — the root containing the letter sv. In the DFS, we'll remember the current vertex R of the tree T and the current vertex cur of the trie. We'll start the DFS at v with cur being the root of Hv; all we need to do is look at each son S of R in DFS, create the son curs of cur corresponding to the character sS (if it didn't exist yet) and run DFS(S, curs). This DFS does nothing but construct Hv that encodes all strings read down from v in Tv. And since each vertex of Hv encodes a distinct string, dif(v) is the number of vertices of Hv.

This runs in O(|Tv|·σ) time, since it can create a trie with |Tv| vertices in the worst case. Overall, it'd be O(N^2 σ) if T looks sufficiently like a path.
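A sketch of this slow solution; children, s and the trie layout are illustrative names, not taken from the actual code:

    #include <array>
    #include <string>
    #include <vector>
    using namespace std;

    vector<array<int, 26>> trie;   // trie[x][c] = son of x for letter c, or -1

    int newTrieVertex() {
        trie.push_back({});
        trie.back().fill(-1);
        return (int)trie.size() - 1;
    }

    // walk T_v and H_v simultaneously: R in the tree, cur in the trie
    void buildTrie(int R, int cur, const vector<vector<int>>& children,
                   const string& s) {
        for (int S : children[R]) {
            int c = s[S] - 'a';
            if (trie[cur][c] == -1) trie[cur][c] = newTrieVertex();
            buildTrie(S, trie[cur][c], children, s);
        }
    }

    // dif(v) = number of distinct strings read down from v in T_v
    int dif(int v, const vector<vector<int>>& children, const string& s) {
        trie.clear();
        buildTrie(v, newTrieVertex(), children, s);  // the root holds s[v]
        return (int)trie.size();
    }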

The HLD trick

Well, what can we do to improve it? This trick is really the same — find the son w of v that has the maximum |Tw|, add sv to Hw and make it Hv; then, DFS through the rest of Tv and complete the trie Hv as in the slow solution. The trick resembles HLD a lot, since we're basically remembering tries on HLD-paths.

If v is a leaf, of course, we can just create Hv that consists of one vertex.

How do we "add" v to a trie Hw of its son w? Well, v should be the root of the trie afterwards and the original Hw's root should become its son, so we're rerooting Hw. We'll just create a new vertex in Hw with sv in it, make it the root of Hw and make the previous root of Hw its son. And if we number the tries somehow, then we can just set the number of Hv to be the number of Hw.

It remains true that dif(v) is |Hv| — the number of vertices in the trie Hv, which allows us to compute those values directly. After computing dif(v) for each v, we can just compute both statistics directly in O(N).

Since each vertex of T corresponds to vertices in at most O(log N) tries (one for each light edge on the path from it to the root), we aren't creating tries with a total of O(N^2) vertices, but O(N log N). The time complexity is therefore O(N log N·σ). However, the same is true for the memory, so you can't waste it too much!

Bonus: you have an additional tiebreaker condition for vertices with identical dif(v). Count the number of distinct strings which occurred exactly k times for each k in an array Pr[]; take the vertex/vertices with lexicographically maximum Pr[] (as many strings as possible which occur only once, etc.).

Bonus 2: Can you get rid of the logarithm in the time complexity?

Comic strip name: Indy. Go read the whole thing, it's not very long, but pretty good.

Div. 1 E: A Museum Robbery


In this problem, we are supposed to solve the 0-1 knapsack problem for a set of items which changes over time. We'll solve it offline — each query (event of type 3) is asked about a subset of all N exhibits appearing on the input.

Introduction

If we just had 1 query and nothing else, it's just the standard knapsack DP. We'll add the exhibits one by one and update s(m) (initially, s(m) = 0 for all m). When processing an exhibit with (v, w), in order to get loot with mass m, we can either take that exhibit and get value at least s(m - w) + v, or not take it and get s(m); therefore, we need to replace s(m) by max(s(m), s(m - w) + v); the right way to do it is in decreasing order of m.
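In code, adding one exhibit is the standard 0-1 knapsack update — a sketch:

    #include <algorithm>
    #include <vector>
    using namespace std;

    // add an exhibit (value v, mass w) to the knapsack values s[0..k];
    // decreasing order of m ensures the exhibit is used at most once
    void addExhibit(vector<long long>& s, long long v, int w) {
        for (int m = (int)s.size() - 1; m >= w; m--)
            s[m] = max(s[m], s[m - w] + v);
    }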

In fact, it's possible to merge 2 knapsacks with any number of items in O(k^2), but that's not what we want here.

Note that we can only add exhibits this way, not remove them. Thus, if there were no queries of type 2, we would be able to solve the whole problem in O(Nk) time by just remembering the current s(m) and updating it when adding an exhibit. Even if all queries were of type 2 (with larger n), we'd be able to solve it in O(nk) time in a similar way after sorting the exhibits in the order of their removal and processing queries/removals in reverse chronological order.

The key

Let's have Q queries numbered 1 through Q in the order in which they're asked; query q is asked on some subset Sq of exhibits.

MAGIC TRICK: Compute the values s(m) only for subsets — the intersections of pairs of queries 2q, 2q + 1 (intersection of the first and the second query, of the third and fourth etc.), recursively. Then, recompute s(m) for all individual queries in O((N + Q)k) time by adding elements which are missing in the intersection, using the standard knapsack method.

What?! How does this work?! Shouldn't it be more like O(N^2) time? Well, no — just look at one exhibit and the queries where it appears. It'll be a contiguous range of them — since it's displayed until it's removed (or the events end). This element will only be missing in the intersection, but present in one query (so it'll be one of the elements added using knapsack DP), if query 2q + 1 is the one where it appears first or query 2q the one where it appears last. That makes at most two additions of each element and O(N) over all of them; adding each of them takes O(k) time, which gives O(Nk).

The second part of the complexity, O(Qk) time, is spent by copying the values of s(m) first from the intersection of queries 2q and 2q + 1 to those individual queries.

If we're left with just one query, we can solve it in O(Nk) as the usual 0-1 knapsack.

Since we're halving the number of queries when recursing deeper, we can only recurse to depth O(log Q) and the time complexity is O((N + Q)k·log Q).

A different point of view (Baklazan's)

We can also look at this as building a perfect binary tree with sets S1, ..., SQ in leaves and the intersection of sets of children in every other vertex.

For each vertex v of this tree, we're solving the knapsack — computing s(m) — for the set Dv of displayed exhibits in it. We will solve the knapsack for the root directly and then proceed to the leaves. In each vertex v, we will take s(m), the set Dp of its parent p and find s(m) for v by adding exhibits which are in Dv, but not in Dp.

We know that the set Dp is of the form Sa ∩ Sa+1 ∩ … ∩ Sb for some a, b and Dv is either of the form Sa ∩ … ∩ Sm or Sm+1 ∩ … ∩ Sb for some m (depending on whether it's the left or the right son). In the first case, only elements removed between the m-th and b-th query have to be added and in the second case, it's only elements added between the a-th and m + 1-th query. Since each element will only be added/removed once and the ranges of queries on the same level of the tree are disjoint, we will do O((N + Q)k) work on each level and the overall time complexity is O((N + Q)k·log Q).

Finding the intersections and exhibits not in the intersections

Of course, bruteforcing them in O(NQ) isn't such a bad idea, but it'd probably TLE — and we can do better. We've just described how we can pick those exhibits based on the queries between which they were added/removed. Therefore, we can find — for each exhibit — the interval of queries for which it was displayed and remember for each two consecutive queries the elements added and removed between them; finding the exhibits added/removed in some range is then just a matter of iterating over them. Since we're actually adding all of them, this won't worsen the time complexity.

In order to efficiently determine the exhibits in some set Sa ∩ … ∩ Sb, we can remember for each exhibit the interval of time when it was displayed. The exhibit is in the set if and only if it was displayed before the a-th query and remained displayed at least until the b-th query.

To conclude, the time complexity is O((N + Q)k·log Q) and since we don't need to remember all levels of the perfect binary tree, but just the one we're computing and the one above it, the memory complexity is O(Qk).


By Xellos, history, 18 months ago, In English,

Hello everyone!

CF round #333 (both divisions) is going to take place today. The authors of this round are me and Baklazan.

By total coincidence, I'm the author of even-indexed problems and Baklazan the author of odd-indexed problems. Now I'm going to let you wonder if it's 0-based or 1-based :D.

As usual, thanks to GlebsHP for his help (in particular, helping us not overkill the problems), MikeMirzayanov for CF and Polygon, Delinur for problem statement translations and our testers: misof, Mimino, AlexFetisov and winger.

I wish good results and rating changes to those who earn it, as well as a zero-sum rating update to everyone.

Score distribution:

Div. 2: 500-1000-1500-2250-2250

Div. 1: 500-1250-1250-2000-2500

Winrars:

Div. 1

  1. TooSimple

  2. izrak

  3. rng_58

  4. PavelKunyavskiy

  5. mnbvmar

Div. 2

  1. liuyibo

  2. Mi_Sawa

  3. Gaia

  4. C0LDBL00D

  5. EasyRed29

(homogeneous colours :D)


By Xellos, history, 18 months ago, In English,

As usual, 10 days from the first Friday of the month (link to the contest; starting time), a Long Challenge contest takes place. As usual again, there are 10 problems — 9 with subtask scoring and one challenge problem with relative scoring. Feel free to participate!

The people involved are:

UPD: The editorials are up. I still have stuff like links to add, but everything should be complete otherwise.


By Xellos, history, 19 months ago, In English,

Hello, and, again, you are invited to a monthly contest: HackerEarth October Clash.

The problemsetter this time is RomaWhite; I'm the tester and editorialist. After all, I have to give I_love_Tanya_Romanova a chance to compete! :D

The contest takes place this weekend (starting time) and runs for 24 hours. There should be 6 algorithmic problems with partial scoring (points are given independently for passing each test file) and 1 challenge problem. I think the problems are easier than last month, when two subtasks of one problem went unsolved by anyone.

Good luck!

Btw, round 1 of COCI overlaps with this Clash for 3 hours. That, of course, doesn't mean it's not possible to participate in both at the same time — from my own experience. It gets harder when trying to take part in all contests which take place tomorrow...

UPD: The contest has finished, congratulations to the colourful top 5:

  1. Catalin Stefan Tiseanu
  2. izrak
  3. rantd
  4. fataleagle
  5. lewin

The editorials were delayed (because I overslept), almost all of them are up now.


By Xellos, history, 23 months ago, In English,

The next edition of Internet Problem Solving Contest is here!

In case you don't know (yet), IPSC problems come in a great variety, from standard algorithmic ones to problems where you interact with the grader or need to find a specific input.

Most problems have an easy input worth 1 point and hard input worth 2 points; otherwise, the scoring is ACM-like, with WAs on easy inputs giving time penalty 10 minutes instead of 20. The input files are available for download and you only upload your outputs (in a similar manner to GCJ, except there's no additional time limit).

It's a team competition for teams of three. IPSC 2015 takes place on 20th of June, from 11:00 UTC and lasts 5 hours. You may register here.

CONTEST IS OVER.

Registered teams

(I won't be adding teams here anymore, because who cares anymore)

Place Points Time Team name Member 1 Member 2 Member 3
5 30 2775 Warsaw Capybaras mnbvmar Swistakk Errichto
6 29 2155 Havka-papstvo Egor pashka Petr
12 28 2166 Knifeproof Tank niyaznigmatul VArtem tourist
16 27 2577 sudo set-position rand()%N fsouza marcoskwkm stefanot
26 25 1815 Andromeda Express ainu7 Jongman astein
27 25 1873 Team Accepted Limit Exceeded popoffka Alex_2oo8 Ingugugus
32 24 2417 ThankYouMikeMirzRAYanovForYou- (sic) CodeforceAndPolygonPlatforms abyssmall izrak junkbot
33 24 2423 Unpretired ilyakor Jacob gusakov
35 23 1803 SPb SU 8/3 Dmitry_Egorov PavelKunyavskiy nk.karpov
36 23 2029 Corridors of Time flydutchman Riatre this_isssssyy
40 23 2468 Dig AliB PrinceOfPersia Swift
42 23 2565 stigma sugim48 evima stqn
44 22 1528 RaccoonRush subscriber enot.1.10 antonkov
52 21 1826 W4yneb0t W4yneb0t
55 21 1890 MooOOoOooOOOOoOooooP DemiGuo ksun48 yummy
61 20 1503 iThan chaotic_iak jonathanirvings nathanajah
63 20 1639 itmo150216 izban vlad107 belonogov
64 20 1706 My Igloo Is Melting Kuroba FatalEagle zxqfl
67 20 1773 Return of Celtic Warriors Tanaeem sunny dragoon
72 20 2014 ALREADY HAVE DONE HYEA koosaga Jiyong Youn
80 19 1345 dolphin secret agents stan acherepanov permin
91 19 1837 MSU Tashkent Old Coders Timur_Sitdikov SergeyLazarev helThazar
102 19 2425 Ural FU Dandelion mpivko Umqra Um_nik
104 18 1335 PAPFans M.Mahdi PAP SeyedParsa
115 18 1692 GD-PSP Jokser sweiss pvasilyev
128 17 1244 Frozen Heart Nikitosh Tehnar ComradePetr
131 17 1433 Zenith langtrunghieu I_love_Hoang_Yen flashmt
141 17 2324 12.0ngskar dolphinigle Gyosh fushar
142 16 1244 CodeFights ---Grigor--- aram90 armen.tsirunyan
143 16 1281 AOI2 gdisastery fleimgruber
156 16 1722 BajaB shayanH Amdi AliA
167 15 1042 kraskevich_team ILoveCoding
174 15 1309 Please explain why havka eto papstvo OutSide FxF Fcdkbear
180 15 1472 Bangladesh Avengers emo moshiur sohelH
183 15 1533 Saratov SU 1 kuviman IlyaLos dans
212 14 1319 ZER zholnin Elgun _resul
243 13 1314 cup_of_team cup_of_tea
255 13 1748 Dirsa kristapuciitis sanja Mishutnik
265 12 1073 Masr Islamia (╥‿╥) khaledkee ahmednaoum Mohamed Al-Jamal
270 12 1197 B-b-b-b-bones! mysterion knst
271 12 1240 ☺ I can't see plus-plus ☺ tjandra
279 12 1389 namelist.insert("দৌড়ের উপর") enzam VUAcoder wasi.ahmed
280 12 1425 kvark161: kvark161
282 12 1564 8-HD-720p-YIFY-Ganool.3gp azaky farisv makanbeling
301 11 1128 For the watch ashish1610 rohangarg kshitij_jain
303 11 1164 Chega de saudades ivanilos
309 11 1246 sorry_helli SaDDaS Silver_Soul I_Love_Yuuki_Asuna
318 10 739 Donkey Fly EKGMA HamidReza_H Rubik_AA
320 10 802 dpsd_team rajat1603 sidhbansal additya1998
321 10 817 Flawless fdg Furko mgch
335 10 982 unemployed, useless dropout & cute woman vadimmm Rubanenko baba_beda
343 10 1062 Alexander Udalov udalov
348 10 1104 85-85 farzad.shbfn shamir0xe
361 10 1303 Samne Porikkha...Asen Contest Kori zubaer.kh amlansaha Honour_00
366 10 1348 Choker riad LinKin jehad131
376 9 673 bambino 2shraaf badry mohamed.mehany
383 9 824 SPiromanii&Messi LUN4M0R1P5 teoionescu george_stelian
414 8 786 Never Slowdown reza_h DemoVersion
428 8 1023 les apparences sont trompeuses members safrout KarimElSheikh MedoN11
430 8 1057 UHv6 jcg marX
464 7 697 NHSPC Juniors ruhan.habib39 tasmeemreza rubabredwan
485 6 228 TeamUFRN heliobdf Zailton RailtonT
488 6 254 milkyway ptnk_1215_panaka_13 touristv2 TiChuot97
505 6 462 code_phoenix ajinkya1p3 yogeshg39 prabhu_3095
519 6 647 Vypiem_za_Lubov Nicolas_Cage Montezuma vip_gevorg
565 5 489 Nalin Bhardwaj NibNalin
576 5 1031 Sandor team SandorGarcia
588 4 87 GG izi commend me jlr.terceiro ronisds
595 4 110 Nemidunam MR.GoryTnych Alimol
618 4 188 ONU_Feuerbach ONU_V_V_Ivanov Dubzzy illarionovam_onu
623 4 204 Hot Tomato Sauce nirob_mh shm0007
633 4 259 DS & DP^2 besher Alex7 joudzouzou
660 4 846 Primo Drizlerz zulkan
689 3 76 BK.Troll farmerboy thomas
717 3 159 The Deterministics asdoc PowerShell xennygrimmato
746 3 312 hichzat bayram98 horojyk Kerim.K
769 3 512 thehandofgod belowthebelt deepankarak pulkit_180894
773 3 606 KBTU Tarjan Azizkhan Madiyar Temirulan
N/A N/A N/A 2017 ACM-ICPC absolute winners T0RRES
N/A N/A N/A Abdelkarim abdelkarim
N/A N/A N/A Binam bijan Nastaran75 sPooky
N/A N/A N/A DivineByCode amankedia Kanish_The_Vista ace_d_king
N/A N/A N/A Dodgers vlade087 balle deinier
N/A N/A N/A guptashvm gupta_shvm
N/A N/A N/A Istrebitel DARIUS_XIX Suhaylee
N/A N/A N/A j1k7_7 jaskamalkainth
N/A N/A N/A Korol', mudak, i rzhavaya zapchast' silap-aliyev bayram osmanuss
N/A N/A N/A Panda Po chrome N.Elibay ErzhaNN
N/A N/A N/A shockers Ajay Sundar Karthik
N/A N/A N/A Team Zabava radoslav11 vanjo9800 P_Nyagolov
N/A N/A N/A Tmcoteam Allanur_tkm Bekmyrat _NinjA
N/A N/A N/A Whatever ikbal EMINAYAR enesoncu
N/A N/A N/A Zerry2015 lamngo96 nhathuyen95 duongtnhat


By Xellos, 2 years ago, In English,

These days (26.-27.3.2015), another national round of the Slovak Olympiad in Informatics took place. Just short problem statements for now; I'll write them up in more detail and with solutions later:

1.

There's a hazelnut chocolate of size N × M, which can be viewed as a 2D grid. In each cell of the grid, there's a given number of nuts. Two players play a game, alternating turns: each player has to cut off either the bottom row or the rightmost column of the remaining part of the chocolate in her turn (and do with it guess what). The cut off row/column has to contain an even number of nuts. The player that can't make a move loses. Determine the winner if both play optimally.

2.

You have two numbers N, K, one input file with N - K distinct numbers between 1 and N and 2K + 2 empty files. Find the missing K numbers.

Your program has a very limited amount of memory (the files' content isn't stored in that memory), just 10 kB for K ≤ 100; its memory complexity can't depend on N. Primarily, you need to minimise the worst-case number of reads+writes to files; only then are you minimising the time and memory complexity of your algorithm.

Files work like queues and can be erased in small constant time.

3.

a very long introductory text and some heavily theoretical magic with simulating boolean conditions using other conditions and proving that it's always possible/impossible

4.

There are N numbers up to 10^9 and another number R, also up to 10^9. Find up to 4 numbers (one number can't be used multiple times) among these N that sum up to R or decide that they don't exist. N ≤ 4000.

5.

There's a straight river of constant width, N points on one side of it and M on the other side. You need to connect some of them with (obviously straight) lines that have the minimum total length in such a way that each of these N + M points has at least one line connecting it with another point on the other side of the river. N, M ≤ 20000.

The first 3 problems are theoretical (points for explaining your algorithm), the other 2 practical (typical contest style, partial scoring); the max. time limit was 10s in both.


By Xellos, 2 years ago, In English,

Complete problemset + tests + original presentation of solutions.

Official scoreboard, public contest scoreboard, used for training elsewhere.

Short hints first, more detailed solutions afterwards.

A. Avoiding the Apocalypse

(difficulty: medium-hard, code)

Maxflow in a suitably selected graph. Ford-Fulkerson is sufficient.


The vertices of our graph need to correspond to states that one person can be in when traversing the original road network; the number of people moving in a group is then represented by flow along edges. Therefore, the most natural choice is to use vertices corresponding to states (location, time).

The rules from the problem statement say that at most c people can move along an edge e = (u → v) of the original graph at any time. That means if we add an edge from (u, t) to (v, t + s) (s is the time it takes to traverse edge e), it needs to have capacity c; such edges fully describe these rules.

Staying in place is also moving, just into the same vertex; it takes one time unit. That adds some more edges (with infinite capacity, as any number of people can stay in place).

Now, a path of some person is represented by a unit flow in this graph. In order to find the maximum number of people that can get somewhere, we clearly want a maximum flow in this graph.

Wait, people can't wait forever! We still haven't used the restriction on time S till zombification, but that's simple — just don't use vertices corresponding to time > S. This also bounds the number of vertices by O(NS) and the number of edges by O((N + M)S).

Furthermore, we need a source vertex vS, a sink vertex vT and edges from/to them; the answer is the maxflow from vS to vT. It's obvious that we need 1 edge from the source to the starting vertex of our group of G people (at time 0), and that edge should have capacity G. For edges to the sink, we'll use the last remaining part of the input (finally, the input is spent!): they need to go from the medical facilities, at time S (there's no harm in waiting for a film-like cliffhanger), and have infinite capacity.

All that's left is applying a standard maxflow algorithm, for example Ford-Fulkerson. That one has complexity O((V + E)F), where F is our answer (the maxflow), V and E are vertices and edges of our graph — in this case, it's O((N + M)SG). It actually runs in time, because its constant is fairly small and most tests are small or have small answers.

B. Button Bashing

(diff: easy-medium, code)

BFS in a graph of cooking times.


Again, we can construct a directed graph of states. In this case, each state will be a time set on the microwave so far; from it, you have N edges corresponding to button presses that lead you to a different time as described in the problem statement.

Obviously, you want to find the minimum distance from vertex (t = 0) to (t = T), or min. distances to vertices (t > T) (since you can always keep pressing one positive button, at least vertex (t = 3600) is guaranteed to be reachable) when necessary. One BFS is enough to find the min. distances of all vertices from (t = 0) and loop over all vertices (t ≥ T) until you find one with non-infinite distance.

There are O(3600·N) edges in our graph and O(3600) vertices, so our BFS takes O(3600·N) time.
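A sketch for one test case; the exact I/O format (and whether there are multiple test cases) is an assumption here:

    #include <cstdio>
    #include <vector>
    #include <queue>
    #include <algorithm>
    using namespace std;

    int main() {
        int n, T;
        scanf("%d %d", &n, &T);
        vector<int> b(n);
        for (int i = 0; i < n; i++) scanf("%d", &b[i]);
        vector<int> dist(3601, -1);
        queue<int> q;
        dist[0] = 0;
        q.push(0);
        while (!q.empty()) {
            int t = q.front(); q.pop();
            for (int x : b) {
                int nt = min(3600, max(0, t + x));  // time is clamped to [0, 3600]
                if (dist[nt] == -1) {
                    dist[nt] = dist[t] + 1;
                    q.push(nt);
                }
            }
        }
        for (int t = T; t <= 3600; t++)     // first reachable time >= T
            if (dist[t] != -1) {
                printf("%d %d\n", dist[t], t - T);
                break;
            }
        return 0;
    }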

C. Citadel Construction

(diff: med-hard, code)

Convex hull; fix 1 vertex (u) of the quadrilateral, iterate the opposite one (w) along the convex hull and recalculate the furthest vertices v, x from line u - w on each of its sides. O(N^2).


We need to pick a quadrilateral (possibly degenerated into a triangle) with maximum area. Obviously, a triangle is better than a concave quadrilateral, so let's stick to convex ones.

Suppose that we picked one diagonal of our quadrilateral. Its area is the sum of triangles made by this line segment and 2 vertices on opposite sides from it. Using the well-known formula: area (of a triangle) = base * height / 2, we realize that the most distant points from the line give the largest area, because the chosen diagonal is each triangle's base.

We can imagine taking a line parallel to our diagonal and moving it perpendicularly to it; the last vertices it crosses (depending on the direction of its movement) are the most distant ones. But this is equivalent to saying that these most distant vertices must lie on the convex hull — if we draw a line through a point that's not on the convex hull, there will be points (stricty) on both sides from it, the ones belonging to the convex hull. Applied to both diagonals, it means we only need to use points on the convex hull.

Let's now pick a diagonal between 2 vertices on the convex hull and rotate our coordinate system so this diagonal coincides with the x-axis. We need to find the topmost point (farthest from the x-axis on one side, the other side can be done analogously). By definition of convexity, as we go from left to right on the hull, the angle between the x-axis and the side of the hull we're currently moving along will be non-increasing, which means the y-coordinates of points we pass first increase (up to the first segment with negative angle) and then decrease.

If we rotate this diagonal around one vertex u of the hull, starting with position parallel (angle 0) to one side u - v of the hull in that place, adding vertices to the part "above" the diagonal, the angles of all segments with the diagonal increase equally and the first segment with negative angle moves along the hull in the same direction we're adding vertices in. Therefore, we can use two pointers to store the other point of the diagonal w and the topmost point v.

Since the "bottommost" point x can be recomputed using the 2 pointers' method with w in parallel to recomputing v and we can calculate the respective area easily using vector cross products, this gives us (together with the convex hull) an O(N2) algorithm.

D. Dropping Directions

(diff: med-hard, code)

Build a graph with vertices (intersection, direction); it's a specific union of cycles. Adding a signpost points all locations on 0 or 2 cycles to the goal.


The graph we're given is not very good, since the next intersection depends on the previous one. A much better graph is a directed one with vertices (intersection, direction). In this graph, the next vertex is given uniquely and the vertex from which we had to arrive is also given uniquely, so it must be a union of cycles.

Furthermore, if we had a cycle in it along some intersections, then there must be another cycle in which the order of intersections is opposite — it corresponds to traversing the same path in the original graph, just taking the opposite direction at each intersection.

Now, what would happen if we added some signposts? Each vertex would still have outdegree 1, but some would have indegree 0, so each connected component would be a cycle with some trees (directed to the root) appended to its vertices as roots. The situation we want to obtain is for each component's cycle to contain one of the goal's vertices (there are 4 of them, and at most 4 such cycles).

Let's look at the intersections in the goal's cycles, and at signposts in them. Such an intersection corresponds to 4 vertices, two of which originally lied in the goal's cycles already. Adding a signpost would therefore point at most 2 other cycles to the goal. After we added such a signpost, we could look at the graph again, find an intersection that leads tourists to the goal in two directions (either directly along an original goal's cycle, or by finding the previously added signpost) and add a signpost to it, and repeat until the goal is always reachable.

In fact, we can always find an intersection such that adding a signpost would point 2 cycles to the goal this way. Thanks to our strategy and the symmetric property of our graph, we know that pointing one cycle to the goal means pointing its opposite direction cycle to the goal as well, so we point 0 or 2 cycles. And if we could only point 0 cycles, that would mean the original graph isn't connected.

Therefore, we can calculate the answer very easily: it's (the number of components that don't contain the goal's intersection) / 2. All it takes is one BFS for each component, in O(N) time total.

E. Excellent Engineers

(diff: medium, code)

A classical problem solvable simply using sorting and a minimum-BIT in O(N log N).


The order of engineers doesn't matter, so let's sort them by their first ranks. Now, the only people who can kick an engineer out of the shortlist now have to be before him in the sorted order, so let's process them in that order.

For an engineer (r2, r3), we need to find out if, among people processed before him and with smaller r2, there's someone also with smaller r3. That's straightforward if we use a minimum-BIT: if we store in the BIT an array A[] with A[r2] = r3 for already processed engineers and A[r2] = ∞ for the rest, then it's equivalent to asking if min(A[1..r2 - 1]) < r3. Based on the answer, we can decide whether to add him to the shortlist. Then, we need to add the currently processed engineer by updating A[r2] to min(A[r2], r3).

Sorting can be done in O(N) (the ranks are permutations), queries and updates on the BIT work in O(log N), so we have an O(N log N) algorithm.
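A sketch of the whole solution (single test case assumed; since the ranks are permutations of 1..N, all comparisons are strict):

    #include <cstdio>
    #include <vector>
    #include <array>
    #include <algorithm>
    using namespace std;

    const int INF = 1 << 30;
    int n;
    vector<int> bit;   // min-Fenwick tree over r2, 1-based

    void update(int i, int val) {
        for (; i <= n; i += i & -i) bit[i] = min(bit[i], val);
    }
    int prefMin(int i) {   // min A[1..i]
        int res = INF;
        for (; i > 0; i -= i & -i) res = min(res, bit[i]);
        return res;
    }

    int main() {
        scanf("%d", &n);
        vector<array<int, 3>> eng(n);
        for (auto &e : eng) scanf("%d %d %d", &e[0], &e[1], &e[2]);
        sort(eng.begin(), eng.end());   // by the first rank
        bit.assign(n + 1, INF);
        int shortlist = 0;
        for (auto &e : eng) {
            // kicked out iff someone with smaller r1 (processed earlier)
            // also has smaller r2 and smaller r3
            if (prefMin(e[1] - 1) > e[2]) shortlist++;
            update(e[1], e[2]);
        }
        printf("%d\n", shortlist);
        return 0;
    }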

F. Floating Formation

(diff: hard, code)

Boats are edges, designs are edges. Compress the graph to a tree with its root marked, then keep marking the vertex that has the most unmarked vertices on the path from it to the root. It can be done with preorder numbering and a segment tree.


The input graph has specific form that simplifies stuff a lot — it's connected and has a part that always stays afloat. We can identify this part and compress it into the root of a tree. How? By identifying the part that doesn't stay afloat, and that can be done simply by cutting off leaves of the graph (stored in a queue) and adding their neighbours to the queue if they become leaves.

Now, we have a rooted tree, with only its root marked as "stays afloat". If we connect a boat to its unmarked vertex, that vertex and all on the path from it to the root will also become marked that way. Obviously, we want to connect boats to leaves only; in fact, we want to connect a boat to a deepest vertex (if there are more, any one will do). If we didn't connect a boat to any of the deepest vertices, we could look at the first vertex w on the path from one of them (v) to the root that becomes marked in the end, take a boat from its subtree (connected to u) and connect it to that deepest vertex (v) instead, gaining at least one marked vertex, because we mark all vertices from w to v, unmark at most all vertices from u to w and v must have been deeper than u.

We can connect the first boat to this vertex and mark all vertices on the path from it to the root; this can be done by simply moving along the path until we encounter a previously marked vertex. Where to put the next boat? Again, to the vertex with the most unmarked ones on the path from it to the root (let's call this number "true depth"). The argument is that the cost is the same as if we just took all marked vertices and merged them into the root (we did this at the beginning, remember?), obtaining another tree with all but the root unmarked, which is the same situation as when choosing the vertex for the first boat.

We now have a clear idea of what to do — K times "pick the truly deepest vertex and mark all vertices above it (including itself)". Then, we just need to count unmarked vertices to get the answer. The only question remaining is how to do this.

Marking vertices is straightforward, the only problem is updating true depths and finding the truly deepest vertex. What often helps with this type of problems is renumbering vertices so that each subtree would contain vertices from one interval. For example, with preorder numbering, which can be computed with one DFS — the root gets number 0, its first son gets number 1, then the subtree of the first son is numbered recursively, the second son gets the first unused number, then its subtree is numbered recursively, the 3rd son gets the first unused number again etc. This produces the desired numbering, and updating true depths when vertex v is marked turns into subtracting 1 from all true depths in an interval (corresponding to v's subtree).
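A sketch of that numbering (children, pre, sub are illustrative names); the subtree of v becomes exactly the interval [pre[v], pre[v] + sub[v] - 1], which is what the range updates below need:

    #include <vector>
    using namespace std;

    vector<vector<int>> children;   // the rooted tree
    vector<int> pre, sub;           // preorder number, subtree size
    int timer_ = 0;

    void dfs(int v) {
        pre[v] = timer_++;          // first unused number
        sub[v] = 1;
        for (int w : children[v]) { // then number the subtrees recursively
            dfs(w);
            sub[v] += sub[w];
        }
    }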

So, we need to find the maximum and subtract a constant from an interval. Does that remind you of anything? Yes, maximum interval/segment tree. With lazy updates and able to find the vertex that this maximum belonged to. Lazy propagation is a standard thing, so I won't describe the tree here, but I'll mention that in order to get the vertex, you can keep the maximum of pairs (true depth, corresponding vertex). The time complexity is O(N + K·log N), because all but the query answering/updating takes just linear time and a segment tree can be constructed in O(N).

G. Growling Gears

(diff: easy, code)

Take the derivative of function F(R), find the optimal R and F.


This is one of the most basic problems in calculus. F(R) is a polynomial, which is differentiable on ℝ, so its maxima must satisfy F'(R) = 0 and F''(R) < 0.

In this case, it means 2aR = b and -2a < 0. The second rule is always satisfied, the first one has exactly one solution: Rmx = b / 2a, with the corresponding maximum F(Rmx) = b^2 / 4a + c.

We want just solutions with R > 0, but in this problem, it's always satisfied.

We get a straightforward O(N) code; doubles are fully sufficient for max. values of F.
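The whole solution, assuming the task asks for the 1-based index of the gear with the largest maximum F (and a single test case):

    #include <cstdio>

    int main() {
        int n;
        scanf("%d", &n);
        double bestF = -1e18;
        int best = 0;
        for (int i = 1; i <= n; i++) {
            double a, b, c;
            scanf("%lf %lf %lf", &a, &b, &c);
            double F = b * b / (4 * a) + c;  // maximum of -aR^2 + bR + c at R = b/2a
            if (F > bestF) { bestF = F; best = i; }
        }
        printf("%d\n", best);
        return 0;
    }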

H. Highway Hassle

(diff: harder, I'm not sure if "very hard" is accurate)

Lol nope. Look for it down in the comments.

I. Interesting Integers

(diff: medium, code)

N = Gn = F_{n-1}·b + F_{n-2}·a; try all reasonable n, solve the equation (think about why N = Gn is equivalent to N = F_{n-1}·b + F_{n-2}·a)


We know that $N = G_n = F_{n-1}b + F_{n-2}a$ with $F_{-1} = 1$, $F_0 = 0$ (it's not called a linear recurrence without reason); proof by seeing that this expression satisfies all conditions it needs to satisfy. For $a, b > 0$, $G_n \ge F_n$ and Fibonacci numbers increase quickly, so we can try all n for which $F_n \le N$ holds.

Let's fix $n \ge 3$. Modulo $F_{n-1}$, we know that $N \equiv F_{n-2}a$; consecutive Fibonacci numbers are also coprime (easy proof by induction), so $F_{n-2}$ has a modular inverse modulo $F_{n-1}$ and

$$a \equiv N F_{n-2}^{\varphi(F_{n-1}) - 1} \pmod{F_{n-1}}\,,\quad\text{i.e.}\quad a = kF_{n-1} + \left(N F_{n-2}^{\varphi(F_{n-1}) - 1} \bmod F_{n-1}\right)$$

for a non-negative integer k.

We can precompute Euler's $\varphi$ of all $F_{n-1} \le N$ after factorising them. Then, we can compute the smallest positive $a_0$ that can lead to a good pair (a, b) using fast exponentiation, and its respective $b_0$.

We can add any multiple of $F_{n-1}$ to $a_0$ and subtract the same multiple of $F_{n-2}$ from $b_0$ to get all the other pairs (a, b). The largest k for which $a_0 + kF_{n-1} \le b_0 - kF_{n-2}$ holds is $k = \lfloor (b_0 - a_0) / (F_{n-1} + F_{n-2}) \rfloor$ (integer division). If $b_0 < a_0$, there's clearly no way to make a good pair (a, b) with this n ($a_0$ must be the smallest possible a here, but we'd try to subtract something from it). Otherwise, the pair corresponding to this k must be good.

We just need to pick the optimal one of the good pairs (a, b) found this way. There are $O(\log N)$ possible n-s, each requires factorisation for $\varphi$ and an exponentiation; all remaining steps are O(1), so we get complexity $O(\log^2 N)$ per query once the $\varphi(F_{n-1})$ values are precomputed.
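A sketch of the per-n step, assuming $\varphi(F_{n-1})$ is already known (all names are mine; __int128 keeps the intermediate products from overflowing):

```cpp
#include <cstdint>
typedef unsigned long long ull;

ull mulmod(ull a, ull b, ull m) { return (unsigned __int128)a * b % m; }

ull powmod(ull b, ull e, ull m) {
    ull r = 1 % m;
    for (b %= m; e; e >>= 1, b = mulmod(b, b, m))
        if (e & 1) r = mulmod(r, b, m);
    return r;
}

// One n: F1 = F_{n-1}, F2 = F_{n-2}, phi = phi(F1).
// Fills (a, b) and returns true iff a good pair (0 < a <= b) exists for this n.
bool solveFixedN(ull N, ull F1, ull F2, ull phi, ull &a, ull &b) {
    // smallest positive a with F2*a == N (mod F1)
    ull a0 = mulmod(N % F1, powmod(F2, phi - 1, F1), F1);
    if (a0 == 0) a0 = F1;
    if (F2 * a0 > N) return false;
    ull b0 = (N - F2 * a0) / F1;            // exact by construction of a0
    if (b0 < a0) return false;              // a0 can't be decreased any further
    ull k = (b0 - a0) / (F1 + F2);          // largest k with a0+k*F1 <= b0-k*F2
    a = a0 + k * F1;
    b = b0 - k * F2;
    return true;
}
```

The caller loops over all n with $F_n \le N$ and keeps the best of the returned pairs.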

J. Jury Jeopardy

(diff: easy, code)

Trivial simulation of what's described in the statement (walk around the maze and mark cells as empty), just bug-prone. Nothing else to add, you can read my code to understand better.

K. Key to Knowledge

(diff: medium, code)

Meet in the middle, pick one half of correct answers. $31^{12} < 10^{18}$, so a vector can be compressed into a 64-bit integer.


The simplest bruteforce would try all possible combinations of correct answers, count the number of correct answers each student got and decide if it's correct. But that's too slow.

A better solution uses a meet-in-the-middle approach. We can bruteforce all combinations of the first $\lceil M/2 \rceil$ correct answers and count how many of them each student answered correctly, as a vector v of size N. The same can be done for the last $\lfloor M/2 \rfloor$ answers, obtaining vectors w of how many answers each student still needs to answer correctly in the other half. The answer to the problem is the number of pairs (v, w) with v = w.

We could just throw all vectors v into a map<> and compute the answer directly by looping over all w. But to make sure it doesn't TLE, we can convert the vectors into numbers in base M + 1. Since these numbers are at most $(M+1)^N \le 10^{18}$ here, 64-bit integers are sufficient for this representation. Comparing vectors in O(N) now turns into comparing integers in O(1).

There are $O(2^{M/2})$ halves of correct answers to check (O(MN) each), compress (O(N)) and insert into or look up in a map (O(M)), so the time for this is $O(2^{M/2} MN)$.
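A sketch of that counting (the representation is assumed by me: answers[i] is student i's answers as an M-bit mask, need[i] is his required number of correct answers):

```cpp
#include <bits/stdc++.h>
using namespace std;

long long countKeys(int M, int N, const vector<int>& answers, const vector<int>& need) {
    int M1 = M / 2, M2 = M - M1;          // split the questions into two halves
    unordered_map<unsigned long long, long long> cnt;

    for (int key = 0; key < (1 << M1); key++) {   // all first halves of the answer key
        unsigned long long code = 0;              // per-student correct counts, base M+1
        for (int i = 0; i < N; i++)
            code = code * (M + 1) + __builtin_popcount(~(key ^ answers[i]) & ((1 << M1) - 1));
        cnt[code]++;
    }
    long long total = 0;
    for (int key = 0; key < (1 << M2); key++) {   // all second halves
        unsigned long long code = 0;              // how many answers each student still needs
        bool ok = true;
        for (int i = 0; i < N; i++) {
            int got = __builtin_popcount(~(key ^ (answers[i] >> M1)) & ((1 << M2) - 1));
            int rest = need[i] - got;
            if (rest < 0 || rest > M1) { ok = false; break; }
            code = code * (M + 1) + rest;
        }
        if (ok && cnt.count(code)) total += cnt[code];
    }
    return total;
}
```

An unordered_map<> is used here instead of a map<>; either works, the map<> just costs an extra O(M) factor per operation.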


By Xellos, 2 years ago, In English,

The last blog got downvote bombed into oblivion, but I have enough contribution to spare.

TC SRM 638 takes place soon (ಠ_ಠ).

Feel free to discuss the problems here after the round ends (because TC apparently doesn't have placeholders for new matches in its forum anymore...).


By Xellos, 2 years ago, In English,

takes place soon (ಠ_ಠ).

Feel free to discuss the problems here after the round ends (because TC apparently doesn't have placeholders for new matches in its forum anymore...).

UPD: this blog post is now obsolete, because it doesn't show on the Recent actions list due to negative votes. Comment in here instead.


By Xellos, 3 years ago, In English,

This was originally intended as an answer for this comment, but eventually I realized that it's too long (also, the Preview feature takes way too long to process it), so it's a blog post instead.

This has complexity of $O(K^2 \log N)$ and a horrible constant. If there's a better solution, write it in the comments.

UPD: KADR provides a solution using polynomial interpolation in $O(K^2)$.

Problem statement

There are N boxes numbered 1..N; box number i contains i candies and 1 stone. We pick 1 random object from each box. Calculate $p \bmod M$ (p is a rational number; multiply its numerator by the modular inverse of its denominator), where p is the probability that K of these N objects are candies.

Constraints: $1 \le K \le N \le 10^9$, $10^9 \le M \le 2 \cdot 10^9$ and M is prime.

Solution

It's clear that since the probability of picking a candy from box k is $\frac{k}{k+1}$ and the probability of not picking one is $\frac{1}{k+1}$, the answer is ($S_n$ denotes the set $\{1, 2, \dots, n\}$)

$$p = \sum_{\substack{S \subseteq S_N \\ |S| = K}} \left(\prod_{i \in S} \frac{i}{i+1}\right) \left(\prod_{i \notin S} \frac{1}{i+1}\right) = \frac{1}{(N+1)!} \sum_{\substack{S \subseteq S_N \\ |S| = K}} \prod_{i \in S} i\,.$$

Let's denote this sum as P(N, K) and introduce another sum (N / 2 is integer division)

$$P_2(N, K) = \sum_{\substack{S \subseteq \{N - N/2 + 1, \dots, N\} \\ |S| = K}} \prod_{i \in S} i\,,$$

which calculates the same sum as P(N, K), but with an additional restriction: S can only be a subset of the range $[N - N/2 + 1, N]$, i.e. the upper half of $S_N$ (excluding the central element N / 2 + 1 for odd N).

If that restriction was instead that S can only be a subset of the range $[1, N/2]$, the sum would obviously be just P(N / 2, K).

Also, let's denote $\pi(S) = \prod_{i \in S} i$, with $\pi(\emptyset) = 1$.

Naive solution

If the constraints were small, we could use simple DP on P(N, K). If we don't use N in S, then we sum up $\pi(S')$ of the same subsets S' as in P(N - 1, K). If $N \in S$, we only need K - 1 elements from the range $S_{N-1}$, so we sum up $\pi(S) = N\pi(S')$ of the same subsets S' as in P(N - 1, K - 1). Together, $P(N, K) = P(N-1, K) + N \cdot P(N-1, K-1)$.
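That recurrence translates directly into a table (a sketch with my names, everything mod M):

```cpp
#include <cstdint>
#include <vector>
using namespace std;

// P[n][k] = sum of pi(S) over all k-element subsets S of {1, ..., n}, mod M
vector<vector<uint64_t>> naiveP(int N, int K, uint64_t M) {
    vector<vector<uint64_t>> P(N + 1, vector<uint64_t>(K + 1, 0));
    for (int n = 0; n <= N; n++) P[n][0] = 1;            // only the empty subset
    for (int n = 1; n <= N; n++)
        for (int k = 1; k <= K; k++)                     // either skip n, or take it
            P[n][k] = (P[n-1][k] + n % M * P[n-1][k-1]) % M;
    return P;
}
```

With M just below $2^{31}$, the products fit into 64 bits comfortably.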

This runs in O(NK) and would obviously give TLE. What we can do instead is splitting the range $S_N$ into 2 halves of size N / 2 (and possibly 1 integer, N / 2 + 1, in the center for odd N).

Calculating P(N, K) — even N

Suppose we knew how to calculate $P_2(\cdot, \cdot)$ already. In order to calculate P(N, K) for even N, we can split any $S \subseteq S_N$ disjointly and uniquely into $S_a = S \cap S_{N/2}$ and $S_b = S \cap \{N/2 + 1, \dots, N\}$.

If $|S_a| = i$, then $|S_b| = K - i$; any 2 sets $S_a$ and $S_b$ can be merged to form one set S with $\pi(S) = \pi(S_a)\pi(S_b)$, and so we have for fixed i

$$\sum_{S:\ |S_a| = i} \pi(S) = P(N/2, i) \cdot P_2(N, K - i)\,.$$

Since i can be any integer in [0, K], we get

$$P(N, K) = \sum_{i=0}^{K} P(N/2, i)\, P_2(N, K - i)\,.$$

Calculating P(N, K) — odd N

In this case, we can again split S disjointly and uniquely into $S_a$, $S_b$ as above and another set $S_c = S \cap \{N/2 + 1\}$. We have 2 choices for $S_c$: either it's empty or contains N / 2 + 1; if it's empty, then we get the same sum as for even N.

If $S_c$ is non-empty, we can again use that $\pi(S) = \pi(S_a)\pi(S_b)\pi(S_c) = \pi(S_a)\pi(S_b)(N/2 + 1)$, where we can take N / 2 + 1 out of the sum and get a similar sum as for even N — the only change is that if $|S_a| = i$, then $|S_b| = K - 1 - i$ and

$$\sum_{S:\ N/2+1 \in S,\ |S_a| = i} \pi(S) = (N/2 + 1) \cdot P(N/2, i)\, P_2(N, K - 1 - i)\,.$$

Again iterating over all i from 0 to K - 1, we get a formula for odd N:

$$P(N, K) = \sum_{i=0}^{K} P(N/2, i)\, P_2(N, K - i) + (N/2 + 1) \sum_{i=0}^{K-1} P(N/2, i)\, P_2(N, K - 1 - i)\,.$$

Calculating P2(N, K)

Let's expand the sum using the reflection $j \mapsto N + 1 - j$, which maps $S_{N/2}$ exactly onto the range $[N - N/2 + 1, N]$ of $P_2$:

$$P_2(N, K) = \sum_{\substack{S \subseteq S_{N/2} \\ |S| = K}} \prod_{j \in S} (N + 1 - j) = \sum_{\substack{S \subseteq S_{N/2} \\ |S| = K}} \sum_{S' \subseteq S} (-1)^{|S'|} (N+1)^{K - |S'|} \pi(S')\,.$$

It's clear that if $|S'| = K - i$, then S' contributes to this sum exactly

$$(-1)^{K-i} (N+1)^{i}\, \pi(S') \cdot \#\{S : S' \subseteq S \subseteq S_{N/2},\ |S| = K\}\,,$$

where $\#\{\cdot\}$ denotes the number of sets S (of size K) satisfying the condition.

How to count that number? We can just exclude set S' from all S and $S_{N/2}$ (since they all contain S'); then, we can see that we just need to count the number of subsets of $S_{N/2} \setminus S'$ of size i. That's obviously just $\binom{N/2 - (K-i)}{i}$, so

$$P_2(N, K) = \sum_{i=0}^{K} (-1)^{K-i} (N+1)^{i} \binom{N/2 - K + i}{i} P(N/2, K - i)\,.$$

Binomial coefficient

The binomial coefficient has kinda big arguments, no? We can't just pre-compute a Pascal triangle or factorials, but i is sufficiently small, so we can just use one of the most basic formulas for binomial coefficients,

$$\binom{n}{i} = \frac{n(n-1)\cdots(n-i+1)}{i!}\,,$$

and so, for given N, K, compute the necessary binomial coefficients along with corresponding terms in the sum for $P_2(N, K)$. It's useful to have modular inverses precomputed; here, we can use that M is a prime larger than K, so by Fermat's little theorem, $(i!)^{-1} \equiv (i!)^{M-2} \pmod{M}$.
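A sketch of that binomial computation (names mine; inv_fact[i] is $(i!)^{M-2} \bmod M$, precomputed once with the same powmod):

```cpp
#include <cstdint>
typedef unsigned long long ull;

ull powmod(ull b, ull e, ull M) {
    ull r = 1;
    for (b %= M; e; e >>= 1, b = b * b % M)   // M < 2^31, so b*b fits into 64 bits
        if (e & 1) r = r * b % M;
    return r;
}

// C(n, i) mod prime M for huge n but small i, via the falling factorial
ull binom(ull n, int i, ull M, const ull* inv_fact) {
    if (n < (ull)i) return 0;
    ull r = 1;
    for (int j = 0; j < i; j++) r = r * ((n - j) % M) % M;
    return r * inv_fact[i] % M;
}
```

Since the sum for $P_2(N, K)$ uses $\binom{N/2-K+i}{i}$ for consecutive i, each coefficient can also be obtained from the previous one with O(1) multiplications instead of being computed from scratch.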

Complexity

We now have a fairly simple way to compute P(N, K) and $P_2(N, K)$ in O(K) time with some pre-computation. The important thing is that thanks to integer division, the N in the argument can only be the N from the input divided by powers of 2; there are just $O(\log N)$ such powers that don't lead to the trivial case N = 0, and with fixed N, we only spend $O(K^2)$ time on computing P() and P2() for all possible K, so the complexity is $O(K^2 \log N)$.

The limits are too tight, though — the biggest problem is there are a lot of modulos that take a lot of time. One possible improvement is only taking the modulo when computing the sums for P() and P2() after every 4 additions, that gives me worst-case runtime of around 2.1 seconds (so close TLE T_T). Also, we can precompute the products $(N+1)^i \binom{N/2-K+i}{i}$, which saves us some modulo operations compared to precomputing $(N+1)^i$ and the binomial coefficients separately. Code


By Xellos, 3 years ago, In English,

Badum badum...

The Internet Problem Solving Contest takes place again in 2014! The registration's started (those of you who competed last year should've gotten an e-mail about it) and will be open until the end of the contest.

The contest will take place from 15.6.2014 11:00 UTC to 15.6.2014 16:00 UTC (shifted by an hour from the originally announced 10:00–15:00) and will be preceded by a short practice session.

You can find the registration form, archive and everything at the contest site.

The rules can be found at the contest site or in my blog post about last year's IPSC. That's why this post is short — you can just read that one and replace the dates.

Also, if you think one blog post isn't a good enough emphasis, here are two more :D

*CONTEST HAS ENDED!* You can find the solutions in the archive.


Here is a summary of some registered teams by CF handles. It's not very long, but better than nothing.

| Team name | Member 1 | Member 2 | Member 3 | Place |
|---|---|---|---|---|
| Zenith | MLGuy | flashmt | __tutankhamun__ | 39 |
| MSU Trinity | Zlobober | sankear | malcolm | 31 |
| cutie, dropout & useless woman | vadimmm | Rubanenko | baba_beda | 113 |
| XZ Team | eatmore | AlexFetisov | winger | 26 |
| KPZ | eduardische | popoffka | Alex_2oo8 | 28 |
| Break a leg | Olja | Oleg805 | andgein | 270 |
| Charles University Legion | fhlasek | k21 | simsa.st | 12 |
| SPb SU 4 | Dmitry_Egorov | PavelKunyavskiy | yeputons | 9 |

And separately for single-person teams.

| Team name | Member 1 | Place |
|---|---|---|
| Alexander Udalov | udalov | 28 |
| duolCauqA | VVan | 170 |
| Xellos | Xellos | 15 |
| Snowbear | marat.snowbear | 44 |
| Futures Leaguer | gs12117 | 12 |


By Xellos, 3 years ago, In English,

This is a continuation of my blog post about day 1.

You can find the constraints and samples in the original problem statements (in Slovak, but that doesn't matter with numbers :D).

UPD2: The contest is in Gym now (both days merged into 1 contest, one problem omitted due to being too theoretical). It only took a year!

Problems: 4 5 6

Solutions: 4 5 6


By Xellos, 3 years ago, In English,

The last USACO contest, that is, US Open, takes place this weekend. You can start the contest anytime in the 3-day window starting at April 4th.

Well, at least it's not during the next weekend (Crazy Weekend of April 2014, there's one every month :D).

You'll be able to access the contest from the USACO front page. You need to register for an account on the site to compete.

I hope I'll be able to post my solutions again here after the contest ends. I also hope that I'll be able to compete at all, because I'll be who knows where (I certainly don't :D) during the weekend and I wonder if there'll be Internet connection...

Also note that this contest lasts 5 hours, unlike the previous ones. For the same number of problems.

Problems and solutions (just gold so far; post the silver/bronze problems if you have access to them):

Problem 1: Fair Photography.

Given $N \le 10^5$ integers in range [1, 8] placed on integer points of the x-axis and an integer $K \ge 2$, find the longest interval that satisfies the following conditions:

  • at least K distinct integers occur in the interval,

  • all integers that do occur in it must occur there an equal number of times.

The interval must start and end at some of the given points. Print the maximum length of such an interval, or -1 if there's none.

Solution

If all integers must occur in the interval, the problem can be solved easily in $O(NM\log N)$ time (M = 8 is the size of the alphabet). This is basically problem ABCSTR of Codechef; read its editorial for further explanation.

In this case, however, a subset of integers from the range [1, 8] could occur in the chosen interval. Ok, why not just run the algorithm on all subsets? :D We only need to count the differences as in ABCSTR for integers from the present subset, and we'll remove all entries from the map<> whenever we encounter an integer that's not in that subset — this way, we can make sure that none of these integers can occur in the selected substring. We get complexity $O(2^M NM\log N)$, which isn't particularly great.

We can get rid of the logarithmic factor by storing hashes of vector<>s in the unordered_map<> (actually, we can't get rid of it in the contest without coding an entire hash table... shame on USACO for not updating the compiler). A polynomial hash of elements of the vector<> by 2 moduli ($10^9 + 7$ and $10^9 + 9$, for example) is strong enough.

We can also get rid of the factor M by updating hashes in O(1). The only 2 updates we need to do are: add 1 to one element of the vector; subtract 1 from all its elements. For polynomial hashes ($H(A) = \sum_i (A_i + N) p^{i-1}$ of vector A, where the +N is in order to have all terms non-zero and thus remove obvious collisions), these operations correspond to adding $p^{i-1}$ and subtracting $\sum_i p^{i-1}$ (you don't have to calculate that, just take the difference of hashes of A filled with 1-s and filled with 0-s).

Notice that so far, we can't check if the selected interval actually contains K distinct integers. We can separately calculate for each right endpoint of an interval the largest possible left endpoint for which this condition is satisfied. That can be done by processing points in sorted order and, for each integer, storing its rightmost occurrence in a map<>; the rightmost left endpoint is the K-th last of these occurrences and can be found by iterating over the map<>. Adding a point only leads to changing 1 entry of the map<>, so this takes $O(NM)$ time (the whole algorithm now takes $O(2^M N)$ time). Afterwards, we can easily check if some interval contains at least K distinct integers.

This leads to a more efficient algorithm. Notice that any point can only be the right endpoint in O(M) different subsets — if you decrease the left endpoint, you can only add integers to the subset, which can be done just O(M) times. The above mentioned algorithm gives us an easy way to compute these subsets, even. Similarly, any point can only be the left endpoint in O(M) different subsets, which can be found by running that same algorithm in the opposite direction.

Let's not try subsets separately, but keep $2^M$ unordered_map<>s, in which we'll store the hashes of vector<>s.

We need a way that allows us to modify hashes quickly — for example, the i-th element of the vector<> is the number of occurrences of integer i among the first j points if it isn't in the subset; otherwise, it's the difference between the number of occurrences of i and of the smallest integer in the subset. Note that when adding integers into the subset, these numbers change, but only all in the subset by the same number c, which makes polynomial hashes rather easy to modify — decreasing all numbers in the subset S by c means decreasing the hash by $c \sum_{i \in S} p^{i-1}$, where these sums can be pre-computed for all subsets.

Try all subsets which have j as the right endpoint and contain at least K elements, in the order in which they go when we move the left endpoint to the left; for each of them, find out if there was some vector<> before that could occur in that subset (was placed in the hash-map of that subset) and was identical to the one we have now — such identical vectors correspond to balanced intervals. Then, add our current vector<> to the hashmaps of all subsets that can have the j + 1-st (think why not j-th) point as their left endpoint.

The resulting complexity is $O(2^M + N(M + \log N))$. There are, of course, simpler algorithms where you don't use hashes or clever updates of them at the cost of larger powers of M.

Problem 2

There's a laser placed at point (0, 0) in a plane, shining in the direction of the positive y-axis, and $N \le 10^5$ mirrors (reflecting on both sides) placed at integer points and tilted at 45° to the coordinate axes, that is, as '/' and '\'. The laser beam reflects from these mirrors. You should add one such mirror so that the laser beam passes through a given point $(B_x, B_y)$. There are also additional restrictions:

  • the laser beam can't pass through the point (0, 0) again (this condition is satisfied in the initial configuration),

  • the new mirror can't be placed at the same point as one of the original ones.

It's also guaranteed that the laser beam doesn't pass through $(B_x, B_y)$ in the initial configuration. Print the number of points at which the new mirror can be placed. Coordinates can be large, up to $10^9$ in absolute value.

Solution

It's obvious that the laser beam will always be parallel to one coordinate axis.

Store the mirrors in every row (equal y-coordinate) in sorted order (by x-coordinate), and in every column (equal x-coordinate) also in sorted order (by y-coordinate). Trace the beam's path by remembering from which point in which direction it went and finding the first mirror in its path, until you find out that it started to loop or went out of bounds.

Trace back the beam's path that could lead to it hitting point $(B_x, B_y)$ from each of the 4 possible directions. This time, stop when it starts to loop (including hitting $(B_x, B_y)$ again), goes out of bounds or hits point (0, 0).

We've obtained line segments of 2 types ("real" and "desired") and 2 orientations (vertical and horizontal). The new mirror should and can be placed at the intersection of any two segments of both different types and orientations, because we can pick the direction in which it could be placed to tilt the "real" beam by 90° and have it go in the direction of the "imaginary" one into point $(B_x, B_y)$.

The problem now reduces to a classic "compute the number of intersections between vertical and horizontal segments". This can be solved by compressing y-coordinates, sorting and sweeplining the segments by x-coordinate. Keep, in a Fenwick tree indexed by y, the number of currently open horizontal segments covering each y; for every vertical segment, the number of horizontal segments intersecting it is then a range sum over its y-span. You can keep track of the horizontal segments to add using 2 pointers and of the ones to remove using a map<> of their right endpoints, or just replace each horizontal segment by 2 events which add 1 and -1 to the Fenwick tree.

Time complexity: $O(N \log N)$.
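A sketch of that final counting step, with y-coordinates assumed to be already compressed to $[0, maxY]$ (struct and function names are mine):

```cpp
#include <bits/stdc++.h>
using namespace std;

struct Fenwick {
    vector<long long> t;
    Fenwick(int n) : t(n + 1, 0) {}
    void add(int i, int v) { for (i++; i < (int)t.size(); i += i & -i) t[i] += v; }
    long long pref(int i) { long long s = 0; for (i++; i > 0; i -= i & -i) s += t[i]; return s; }
    long long range(int l, int r) { return l > r ? 0 : pref(r) - pref(l - 1); }
};

struct Horiz { int y, x1, x2; };   // horizontal segment (endpoints inclusive)
struct Vert  { int x, y1, y2; };   // vertical segment (endpoints inclusive)

long long countIntersections(const vector<Horiz>& hs, const vector<Vert>& vs, int maxY) {
    // events sorted by x; at equal x: open (0) before query (1) before close (2)
    vector<array<int,3>> ev;
    for (int i = 0; i < (int)hs.size(); i++) {
        ev.push_back({hs[i].x1, 0, i});
        ev.push_back({hs[i].x2, 2, i});
    }
    for (int i = 0; i < (int)vs.size(); i++) ev.push_back({vs[i].x, 1, i});
    sort(ev.begin(), ev.end());
    Fenwick fw(maxY + 1);
    long long total = 0;
    for (auto& e : ev) {
        if (e[1] == 0) fw.add(hs[e[2]].y, 1);
        else if (e[1] == 1) total += fw.range(vs[e[2]].y1, vs[e[2]].y2);
        else fw.add(hs[e[2]].y, -1);
    }
    return total;
}
```

The event ordering at equal x makes touching endpoints count as intersections; whether that's wanted depends on how the beam segments are generated.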

Problem 3

You're given a rooted tree of $N \le 2 \cdot 10^4$ vertices. Each vertex can contain a digit from 0 to 9, inclusive. You're also given $M \le 5 \cdot 10^4$ rules as pairs (vertex, 5-digit string); string s at vertex x says that if you read the first 5 letters from x (inclusive) to the root, you have to get a string different from s.

The tree is rooted at vertex 1 and vertex i has parent p(i) < i.

Print the number of ways in which you can't assign digits to vertices of the tree (i.e. such that at least 1 of the M rules is broken), modulo 1234567.

Solution

Instead of computing the number of ways in which we can't assign the digits, we'll compute the number of ways in which we can; the answer is then $10^N$ minus that (there are $10^N$ ways to assign digits, in total).

This can be solved by dynamic programming. Let DP[i][j] be the answer for the subtree rooted at vertex i, in case the 4 digits above that vertex are read as j, but this time from top to bottom. There are at most 10 digits we could put in vertex i; for each digit d that we can put there, the number of ways to fill the subtree is the product of DP[x][(10j + d)%10000] over all sons x, because we can fill the subtrees of sons independently and the last 4 digits read on the path from the root to each son will be (10j + d)%10000. Then, DP[i][j] is the sum of these results over all d. We can then find the final answer in DP[1][j] (we can pick any j, it's like creating a path of 4 vertices above the root and choosing the digits in them; there are obviously no rules starting at vertices with depth  < 4 below the root, so these digits won't affect anything).

This approach has complexity $O(10^5 N)$ — every edge of the tree is used $O(10^5)$ times.

We can improve this by noticing that in case all digits are allowed, the first digit of j (taken as a 4-digit number with leading zeroes) doesn't really matter. In fact, there are just M possible states of the DP where it can matter, because each of the M rules forbids one digit d for one state of the DP. So if we calculate an analogous table A[i][j] for 3-digit numbers j (ignoring the topmost digit), then, for any state of the DP that has all digits allowed, we can immediately say that DP[i][j] = A[i][j%1000]. If a state doesn't have all digits allowed, we can just compute it the same way we did before.

This approach gives a complexity of $O(10^4 N + 100 M)$, which I dare say could work in a second or two, because it's just DP, with really just that many operations.

It'd be even better if we could always replace taking modulo by subtraction if necessary, but we can't, because we need to compute products sometimes, not sums. The large modulus (around $10^6$) also forces us to use 64-bit variables. BTW, this got me thinking about how the Chinese Remainder Theorem can sometimes be useful — if the modulus is composite, we can decompose it into prime powers, compute partial answers modulo each of them and reconstruct the final answer from the partial ones by CRT. In this case, 1234567 = 127·9721, which would allow us to use ints, but I doubt it'd be actually helpful.

I don't know USACO's time/memory limits, but I suppose this would still need memory optimization. If the tree was thin enough (there were few vertices in any depth), the DP could be done by decreasing depth, where we'd remember just the answer for states of vertices in the last 2 depths, but I'm not sure what to do in a general case.

UPD: The results are up. Optics and Code have really bad results...


By Xellos, 3 years ago, In English,

This Thursday (27.3.2014), the first (theoretical) day of the national round of Slovak Olympiad in Informatics took place (and I appeared there, much to everyone's surprise :D). So, I bring you the problems and solutions. The second, practical day, can be found here.

UPD: The original version of the problems and solutions is up. In case you were curious, final results. In Slovak, of course :D

You can have fun trying to find CF handles of some contestants :D

UPD2: The contest is in Gym now (both days merged into 1 contest, one problem omitted due to being too theoretical). It only took a year!

Problems: 1 2 3

Solutions: 1 2 3


By Xellos, 3 years ago, In English,

Continuing the recent streak of competitions, TopCoder SRM 612 takes place at this time (in Europe, this night). Still, that's no reason not to participate!


By Xellos, 3 years ago, In English,

Note that this weekend (Friday 7.3. to Monday 10.3. inclusive,  ±  time zone), this year's last round of USACO (before US Open) takes place.

You'll be able to access the contest from the USACO page. Registration is required.

Feel free to discuss the problems here after the contest ends (which is, btw, not very clear, because there are no official start/end times posted — around Tuesday afternoon in UTC). I'll probably also post my solutions here.

Also notice how similar this is to my blog post about COCI :D. This is one of the usual Crazy Weekends — there's COCI, USACO and start of the Codechef Long Challenge. It's like the contest organizers do this on purpose...

UPD: The results are up now!

UPD: Negative votes... trolls growing strong here!

UPD^2: Trolls are gone now. Lol.

UPD: Solutions (gold)

The Lazy Cow (code)

We're given N points $(x_i, y_i)$ in a plane, each with a weight $g_i$, and are supposed to find a point (x, y) which maximizes the sum of $g_i$ of points located at a distance of at most K from (x, y). The distances are Manhattan ones — the distance of point $(x_i, y_i)$ from our (x, y) is $|x_i - x| + |y_i - y|$. Constraints: $N \le 10^5$, coordinates $0 \le x_i, y_i \le 10^6$, $1 \le K \le 2 \cdot 10^6$.

Let's say that we picked a point (x, y). We can transform coordinates (x, y) to (x', y') = (x + y, x - y) and similarly for all points i. In these new coordinates, the distance of our point from any given one is (evaluating all possibilities for what the abs. values can look like) $\max(|x' - x_i'|, |y' - y_i'|)$, and the region for which it's $\le K$ is an axis-parallel square around (x', y').

Let's sort the given points by x'-coordinate. Now, we can use a sweepline + interval tree, which supports operations "add value v to all elements of interval I" and "get maximum of all elements". We'll try adding the points one by one, in the order of increasing x'-coordinate, and in the interval tree, we'll remember the sum of $g_i$ of points that we could get by picking y' as the other coordinate in the element y'. That's done by adding $g_i$ to all elements from $y_i' - K$ to $y_i' + K$ inclusive; removing a point is the same, just with subtracting $g_i$ from all those elements.

We need to remove the points whose $x_i'$-coordinates are $< x' - 2K$ (we're basically forcing the right side of our square to touch some point and store in the interval tree only answers when accounting for points that have the $x_i'$-coordinate $\ge x' - 2K$; if it didn't touch, we could move the square to the left without changing the result). Since the points are already sorted, we can use two pointers... well, just one additional "pointer", which points to the leftmost point still in the interval tree.

The time complexity of constructing an interval tree is $O(X_{\max})$ (the maximum coordinate); each query can be processed in $O(\log X_{\max})$ time and the sorting takes $O(N \log N)$, so the resulting complexity is $O(X_{\max} + N \log X_{\max})$. With y'-coordinate compression, we can get $O(N \log N)$ runtime.

Sabotage (code)

We're given an array A of size $N \le 10^5$, and are asked to compute the minimum arithmetic average of all elements of this array after removing a substring (contiguous subsequence) from it. Plus some additional constraints: the elements of this array are non-negative, at most $10^4$, and we can't remove the first or last element of A.

This type of problem (searching for an optimal arithmetic average) can usually be solved with binsearch. It's clear that if we can achieve an average of at most x, then we can also achieve an average of at most any y > x. So let's fix x and check if we can achieve $\le x$.

Prefix sums are useful. Let S[i] denote the sum of the first i elements (S[0] = 0). Let's say that we erased the subarray A[i..j] ($2 \le i \le j \le N - 1$) and got an average $\le x$. It means that

$$\frac{S[N] - (S[j] - S[i-1])}{N - (j - i + 1)} \le x\,, \quad\text{i.e.}\quad \bigl(S[i-1] - x(i-1)\bigr) + \bigl(S[N] - S[j] - x(N - j)\bigr) \le 0\,.$$

Let's say that we fixed j. Now, the best choice (minimizing the left side of this inequality) is obviously picking the smallest S[i - 1] - x(i - 1). This gives us a simple way of checking if this inequality ever holds: iterate over j from 2 to N - 1, remember and update the minimum of all S[i - 1] - x(i - 1) (always just taking the minimum of the smallest one found so far and S[j - 1] - x(j - 1)) and check if the smallest value for given j is $\le 0$. If it never is for given x, there's no way to get an average $\le x$.

This check works in O(N) time; plus binsearch, it's $O(N \log \varepsilon^{-1})$ time, where $\varepsilon$ is the required precision of the answer.
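A sketch of the whole thing (input format assumed: N, then the N elements):

```cpp
#include <cstdio>
#include <vector>
#include <algorithm>
using namespace std;

int main() {
    int n;
    scanf("%d", &n);
    vector<long long> S(n + 1, 0);           // prefix sums
    for (int i = 1; i <= n; i++) {
        long long a;
        scanf("%lld", &a);
        S[i] = S[i-1] + a;
    }
    // can we erase some A[i..j], 2 <= i <= j <= n-1, so that the average is <= x?
    auto ok = [&](double x) {
        double best = S[1] - x;              // min of S[i-1] - x(i-1); i-1 = 1 so far
        for (int j = 2; j <= n - 1; j++) {
            if (best + (S[n] - S[j]) - x * (n - j) <= 0) return true;
            best = min(best, S[j] - x * j);  // allow i-1 = j for the next step
        }
        return false;
    };
    double lo = 0, hi = 1e4;                 // elements are in [0, 10^4]
    for (int it = 0; it < 50; it++) {        // binsearch on the average
        double mid = (lo + hi) / 2;
        if (ok(mid)) hi = mid; else lo = mid;
    }
    printf("%.6f\n", hi);
}
```

50 iterations halve the interval down to about $10^{-11}$, far below any reasonable required precision.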

Counting friends (code)

We're given an array of N + 1 elements (N ≤ 500) and are supposed to find, and list in sorted order, all possible elements of this array such that after removing said element, the remaining ones can be degrees of a simple undirected graph.

Why not try to remove each element and check if the remaining ones are degrees of a graph? We have a simple way of checking whether they are: the Erdős–Gallai theorem, which provides a necessary and sufficient condition.

Let's sort the numbers beforehand. Now, we can get a sorted array after removing an element in O(N) time without much difficulty. For each removed element, we can just compute the sums on both sides of the inequality from the Erdős–Gallai theorem for all k (possibly with some optimizations), which gives us an $O(N^3)$ time complexity.
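A sketch of the check itself (the theorem: $d_1 \ge \dots \ge d_n$ is a degree sequence of a simple graph iff the sum is even and $\sum_{i=1}^{k} d_i \le k(k-1) + \sum_{i=k+1}^{n} \min(d_i, k)$ holds for every k):

```cpp
#include <bits/stdc++.h>
using namespace std;

// d must be sorted in non-increasing order
bool isDegreeSequence(const vector<long long>& d) {
    int n = d.size();
    if (accumulate(d.begin(), d.end(), 0LL) % 2 != 0) return false;
    long long left = 0;
    for (int k = 1; k <= n; k++) {
        left += d[k-1];                                  // sum of the k largest degrees
        long long right = (long long)k * (k - 1);
        for (int i = k; i < n; i++) right += min(d[i], (long long)k);
        if (left > right) return false;
    }
    return true;
}
```

This is $O(N^2)$ per call, so $O(N^3)$ over all removed elements, exactly as above; with $N \le 500$, no further optimization is needed.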


By Xellos, 3 years ago, In English,

Note that this Saturday, this year's last (before the Olympiad's Open) round of COCI takes place.

Registration and entry into the contest: here. Watch out — don't register for HONI (local version), but COCI.

Feel free to discuss problems here after the contest ends. I'll probably post my solutions here, too.

The contest is over. The results are accessible in your account on evaluator.hsin.hr under the Results tab. The top 5 are:

  1. bmerry
  2. a_h1926
  3. eduardische
  4. svanidz1
  5. Topi Talvitie

Solutions

(all full, except the last problem; you can find the solution of that problem in the comments)

VJEKO

(code)

Do I even have to write anything? For every word on the input, check if the part of the pattern before the asterisk is its prefix and the part after it is its suffix. If they both are, the answer's yes. Can be implemented more comfortably using the STL substr() function. Any bruteforce works.

FONT

(code)

Assign a bitmask of 26 bits to each word — the i-th bit is 1 iff the i-th letter of the alphabet is present in the string. It's obvious that we're asked to count the sets of words whose bitwise OR is equal to $2^{26} - 1$.

We face 2 difficulties here. First, the time limit is too low for an $O(N 2^N)$ solution. We need an $O(2^N)$ one, using something like Gray code. But we can't do a classic Gray code memoization, because we don't have enough memory. We need to find some kind of trick to get a full score. I iterated over all numbers with N - 1 bits (corresponding to subsets of the last N - 1 words) and used the lastone() function from Fenwick trees to always get the last non-zero bit of the number; I remembered the bitmasks of the words corresponding to all bits (in 2 arrays, to save memory; it just adds an if-else condition) and used bitwise magic to make my program faster. When I know the OR value of bitmasks of all words in that subset, I just check separately whether that value's good and whether its OR with the last word's bitmask is good. Time complexity: $O(2^N)$; memory: $O(2^{N/2})$.

The TEST option helps a lot in estimating what's fast enough.

KOCKICE

(code)

We can only choose the middle column's height. Let's say that this height is H; then, the cost for changing the i-th column of array R is $|R_i - H|$, and similarly for array S. So then, let's define a new array A with 2N elements, consisting of all the values $R_i$ and $S_i$, and sort it. If we subtracted H from all elements of A (the sorted order of elements remains the same), the result would be the sum of absolute values of all elements of A.

If, after subtracting H > 0 from all elements of A, we got A[N + 1] < 0, then subtracting H - 1 would give us a smaller answer. That's because all absolute values of A[1..N + 1] would decrease and at most N - 1 remaining ones could increase — the answer would decrease by at least 1. The same applies for A[N] > 0 (so A[N + 1] ≥ A[N] > 0) and subtracting H + 1 instead of H ≥ 0.

That means: if A[N + 1] ≤ 0, then H = 0 (H < 0 is digging a hole in the ground...); if A[N + 1] > 0, we want H = A[N + 1]. Pick the right H, subtract and calculate the answer. Time complexity: $O(N \log N)$ due to sorting.

KRUZNICE

(code)

This problem would be called the same in Slovak :D

Observe that we don't need whole circles, the intervals that are formed by their intersection (as full circles) with the x-axis are sufficient. That's because the circles can touch only at the x-axis (proof to the reader). Each circle cuts out one new region from the whole plane or a circle that it's inside of; for each circle c, one more region's added if there are two or more smaller circles that form a chain from one to another end of c. In this case, a chain of circles (or intervals) is a sequence of circles that touch (or intervals with a common endpoint). It's obvious that if such a chain exists, the inside of circle c is cut into 2 symmetric halves; otherwise, those 2 halves are connected, because no 2 circles can intersect, and no other regions can be cut out (once again, proof to the reader; during a contest in this case, an insight into this is sufficient and there's no need for solid proofs).

This sounds like a graph with endpoints of intervals being vertices and intervals connecting the endpoints as edges — participants of Codechef Feb Long Challenge would agree :D. A chain in this case is a cycle formed when adding circles (intervals, edges) by increasing radius.

This kind of adding edges sounds a lot like DSU. In fact, you can apply the DSU algorithm (the answer is then N + 1, that is, one region for each circle plus the region that remains from the initial plane, plus the number of cyclic edges found during DSU), or just realize that the number of cyclic edges is given just by the numbers of edges, vertices and connected components of the graph.

Time complexity: $O(N \log N)$, if we want to compress the x-coordinates or store them in a map<>.
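A sketch of the DSU counting (input format assumed: N, then the centre and radius of each circle; since the count of cyclic edges doesn't depend on the processing order, no sorting by radius is needed):

```cpp
#include <bits/stdc++.h>
using namespace std;

vector<int> par;
int find(int x) { return par[x] == x ? x : par[x] = find(par[x]); }

int main() {
    int n;
    scanf("%d", &n);
    vector<pair<int,int>> seg(n);            // circle -> interval [x - r, x + r] on the axis
    vector<int> xs;
    for (auto& s : seg) {
        int x, r;
        scanf("%d %d", &x, &r);
        s = {x - r, x + r};
        xs.push_back(s.first);
        xs.push_back(s.second);
    }
    sort(xs.begin(), xs.end());              // compress endpoint coordinates
    xs.erase(unique(xs.begin(), xs.end()), xs.end());
    par.resize(xs.size());
    for (int i = 0; i < (int)par.size(); i++) par[i] = i;
    long long regions = n + 1;               // one per circle, plus the outer plane
    for (auto& s : seg) {
        int a = lower_bound(xs.begin(), xs.end(), s.first) - xs.begin();
        int b = lower_bound(xs.begin(), xs.end(), s.second) - xs.begin();
        if (find(a) == find(b)) regions++;   // cyclic edge: this circle gets cut in two
        else par[find(a)] = find(b);
    }
    printf("%lld\n", regions);
}
```

Path compression alone is enough here; union by rank would only shave the already-small DSU factor.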

HASH

(code)

This problem was another pain in the ass when optimizing time and memory. Seriously, was it so hard to use N ≤ 8 or a time limit of 3 seconds?

My solution is a bruteforce using the meet-in-the-middle trick. The point is that we can calculate hashes of the first $\lceil N/2 \rceil$ letters for all ways of picking those letters, and also what those hashes would have to be in order to give K when hashing them with the last $\lfloor N/2 \rfloor$ letters.

The first bruteforce can be done by remembering hashes in a 2D array: H[i] stores all hashes that we can get with i letters ($26^i$ elements), and we can get the elements of H[i + 1] by iterating over all elements of H[i] and trying to add each of the 26 possible i + 1-st letters. It's clear that the hash for every word with $\le i$ letters will be computed exactly once and in O(1) time, and the number of those words, and so the time complexity of this part, is $O(26^{N/2})$.

What about the second bruteforce? Xor is its own inverse and 33 is coprime to the modulus, so it has a modular inverse; its value mod $2^M$ is $I = 33^{\varphi(2^M) - 1} = 33^{2^{M-1} - 1}$. So if the last letter had an ordinal value of x, then the hash of our word without that letter is $((K \oplus x) \cdot I) \bmod 2^M$ (the modulo trashes all but the last M bits, so xor doesn't affect it, and multiplying by the modular inverse is an inverse operation to multiplying by 33 mod $2^M$, so it gives us the hash we're looking for by definition). This is almost identical to the first bruteforce, we just start with K instead of 0 and do the operations (xor, multiplication) in reverse order. The time complexity is the same.
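A sketch of that inverse step (M is the hash width from the statement; the forward step it undoes is $h' = (33h \oplus x) \bmod 2^M$):

```cpp
#include <cstdint>
typedef unsigned long long ull;

ull powmod2(ull b, ull e, ull mask) {      // exponentiation mod 2^M; mask = 2^M - 1
    ull r = 1;
    for (b &= mask; e; e >>= 1, b = b * b & mask)
        if (e & 1) r = r * b & mask;
    return r;
}

// I = 33^(phi(2^M) - 1) = 33^(2^(M-1) - 1) mod 2^M, the modular inverse of 33
ull inverse33(int M) {
    ull mask = (1ULL << M) - 1;
    return powmod2(33, (1ULL << (M - 1)) - 1, mask);
}

// hash of the word without its last letter x, given the hash h of the whole word
ull backward(ull h, ull x, ull I, ull mask) {
    return (h ^ x) * I & mask;
}
```

One can sanity-check it by composing the two directions: for any h < 2^M, backward(((33*h) ^ x) & mask, x, I, mask) returns h again.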

How to merge the 2 results? One more array with some $2^{25} = 3.3 \cdot 10^7$ elements can still fit into the memory. This array, P, will store the number of occurrences of each hash among the first bruteforce's results: traverse all those hashes H, for each of them increment P[H], then traverse the results of the second bruteforce (the required hashes) and for hash H add P[H] to the answer. Once again, every word with hash K will be counted in the answer exactly once.

Time and space complexity: $O(26^{N/2} + 2^M)$. It actually fits into the limit, if you don't do something stupid like sorting the bruteforces' resulting hashes and using 2 pointers to compute the answer (quicksort is fast, but not fast enough, even if it cuts the $2^M$ factor from the complexity).

GRASKRIZA

(code)

You can look for an optimal solution in the comments. But let me tell you that a stupid bruteforce (add traffic lights to all grid points of the smallest rectangle bounding the input traffic lights, that are above any already existing traffic light) gives 96 points :D

I tell you, I don't miss the times (first COCI contest of 2013/14) when a solution that was significantly faster than a bruteforce got 30% of the points just like a bruteforce.
