### Xellos's blog

By Xellos, history, 13 months ago,

UPD: 10 points! Ha!


This is a continuation of https://codeforces.com/blog/entry/81365#comment-719271. I'll use what we proved there.

Let's assign all consecutive '0'-s to the closest non-'0' before them. We get a sequence of tokens between which we may add "**" arbitrarily.

How should we pick the first base? There are special cases if the whole string is "102" or "1d0...01", where we don't split at all. Otherwise, the first base won't be "102" or "1d0...01", only a 1-digit or 2-digit number, "d0...01", "1d0...0" or "d0...0".

• If the first token is $1$, it'll be "1d0...0". We can't add more tokens and we obviously don't want $1$ as a base. From now on we can eliminate "1d0...0" as an option.
• If the second token is at least $10$, the first token forms the first base. This also follows directly from Step 2.
• If the second token is at least $2$ and we include it in the first base:
• The first token must be one digit.
• If the exponent (rest of the string) has at least two digits, then $e \ge 10$, the first token/digit is $a \ge 2$, the second token/digit is $b \ge 2$ and we need $a^{b^e} \lt (10a+b)^e$.
• Since $10a+b \le a^5$ (proof by bruteforce) and $b^e \ge 5e$ in this case, then $a^{b^e} \ge a^{5e} \ge (10a+b)^e$.
• The only possible case is when the exponent is just one digit, i.e. one token. Then we only have three one-digit tokens and this can be solved easily since there's at most one option greater than $9^{99}$. If you're worried about time complexity, precompute.
• In all other cases, the first base is just the first token.
• Now the second token is $1$. Do we put it at the end of the first base or the start of the second base? Probably the second, right?

Once again, let's use the rapid increase of the remaining exponent. We set two conditions that cover the vast majority of the remaining cases:

• There are at least three digits in the third+ tokens.
• The remaining tokens don't form only one (the last) base. They form an exponential "h**r", where $r \ge 2$ (Step 1) and "h" is the second base, with length $k$.

We want to prove that putting the second token in the second base instead would be better, i.e. that $(10a+1)^{h^r} \le a^{(10^k + h)^r}$. Since $(10a+1)^{h^r} \le (21a/2)^{h^r}$, and $a \ge 2$, we can simplify that to $\left((10^k+h)^r-h^r\right)/h^r \gt \ln (21/2) / \ln a$ and further to $(10^k/h+1)^r \ge \frac{\ln (21/2)}{\ln 2} + 1$.

• Since $h \lt 10^k$, it obviously holds for $r \ge 3$, where the left side is greater than $8$ and the right side smaller than $5$.
• For $r = 2$, we can further simplify to $h/10^k \le 91/100$, i.e. $h \le 91 \cdot 10^{k-2}$.
• Let's keep in mind that "h" must be a valid base that's not at the end and has at least two digits, so it's either an exactly 2-digit number ($k = 2$), "d0...01", "1d0...0" or "d0...0".
• The last three options obviously satisfy $h \le 91 \cdot 10^{k-2}$.
• For $92 \le h \le 99$, though, we can immediately see that $92^2 \lt 9^{22}$ and $99 \lt 9^3$, so these are always suboptimal bases if they aren't at the end of the string.

All that remains is dealing with a few quite special cases.

First, if our string is "d0...01c", where $c$ and $d$ are non-zero digits and there are $k$ '0'-s in the first token (which isn't "1"), then "d0...0**1c" and "d0...01**c" are the only options (we eliminate the case with no "**" since the first token isn't "1" and there are $3$ non-zero digits). We're putting the token "1" in the second base if $(d \cdot 10^k)^{10+c} \ge (d \cdot 10^{k+1} + 1)^c$.

• Let's simplify it to $(d \cdot 10^k)^{10} \ge \left(10 + \frac{1}{d \cdot 10^k}\right)^c$.
• For $k \ge 1$, we have $d \cdot 10^k \ge 20$, so the right side is $\le \left(10 + \frac{1}{20}\right)^c \lt 11 \cdot 10^8$ while the left side is $\ge 10^{10}$, so the inequality holds.
• For $k = 0$, we can bruteforce. We don't get a clean rule for when "1" goes to the first base, but since we're dealing with 3-digit strings, we don't need it.

If our string is "d0...01cg", where we just added an arbitrary digit $g$, then "d0...0**1cg", "d0...0**1c**g", "d0...01**cg" and "d0...01**c**g" are the only options.

• Let's start with leaving $k = 0$ to bruteforce/precomputation just like before. That shit's ugly.
• We can eliminate "d0...01**c**g", it's worse than "d0...0**1c**g". Proof just above.
• The same proof works for $(d \cdot 10^k)^{100+10c+g} \ge (d \cdot 10^{k+1} + 1)^{10c+g}$: $(d \cdot 10^k)^{100} \ge 10^{100} \gt \left(10 + \frac{1}{20}\right)^{99} \gt \left(10 + \frac{1}{20}\right)^{10c+g}$. We eliminate "d0...01**cg" too.
• Now we know that the first base is just the first token.

Finally, if there are at least three digits in the third+ tokens, but they can form one base, then it's a suffix "102", "1c0...01", "c00...01", "1c00...0" or "c000...0" (the whole string minus the prefix "d0...01", where the first token isn't "1").

• With "102", we can form "d0...0**110**2", i.e. $(d \cdot 10^k)^{12,100}$. Among other options, only "d0...01**102" isn't completely obviously bad, and since $\left(10+1/(d \cdot 10^k)\right)^{102} \le (10+1/2)^{102} \lt 2^{12,100-102}$, that has a smaller value for any valid $k$ and $d$. We can throw out this case.
• With "1c0...01", the only options are "d0...01**1c0...01" and "d0...0**11**c0...01". Here, $(d \cdot 10^{k+1} + 1)^{(10+c) \cdot 10^{l+1} + 1} \lt (d \cdot 10^k)^{11^{c \cdot 10^{l+1} + 1}}$ holds since it simplifies to
$\left(10 + 1/(d \cdot 10^k)\right)^{(10+c) \cdot 10^{l+1} + 1} \lt (d \cdot 10^k)^{11^{c \cdot 10^{l+1} + 1} - (10+c) \cdot 10^{l+1} - 1}\,;$

the exponent on the right side is $11^{c \cdot 10^{l+1} + 1} - (10+c) \cdot 10^{l+1} - 1 \gt \left((10+c) \cdot 10^{l+1} + 1\right) \cdot 10$ (prove that it's increasing, by induction on $c$ and $l$), so even for $d \cdot 10^k = 2$, the right side is at least ${2^{10}}^{(10+c) \cdot 10^{l+1} + 1}$. We can also throw out this case.

• With "1c00...0" (at least one '0'), only "d0...0**11**c00...0" and "d0...01**1c0...0" are options. Since there's at least one '0', this is also bad and the proof is almost identical to the one just above — we get an exponent that's at least $11^{10}$ instead of $11^{11}$, but that's still a huge number and everything works the same way.
• With "c000...0", only "d0...0**1c000...0" and "d0...01**c000...0" are options. The number of '0'-s at the end doesn't matter since multiplying the exponent by $10$ is here just taking the $10$-th power of the previous exponential's value, for both options.
• It's easy to prove that if $d \cdot 10^k \ge 11$, then $(d \cdot 10^k)^{10+c} \ge (d \cdot 10^{k+1}+1)^c$ since $(d \cdot 10^k)^{10} \ge \left(10+1/(d \cdot 10^k)\right)^c$, where the left side is $\ge 11^{10}$ and the right side $\le 11^c$.
• If $d \cdot 10^k = 10$, then $10^{10} \ge \left(10+1/10\right)^9 \ge \left(10+1/10\right)^c$ still holds.
• We can see that for $k \ge 1$, the second token ("1") doesn't go to the first base. For $k = 0$, we bruteforce all remaining cases.
• With "c00...01", this suffix must stay together, so the only options are "d0...0**1c0...01" and "d0...01**c0...01". This looks similar to the previous case. For $k \ge 1$, we'd still expect $(d \cdot 10^k)^{(10+c) \cdot 10^{l+1} + 1} \ge (d \cdot 10^{k+1} + 1)^{c \cdot 10^{l+1} + 1}$, i.e.
$(d \cdot 10^k)^{10^{l+2}} \ge \left(10 + 1/(d \cdot 10^k)\right)^{9 \cdot 10^{l+1} + 1} \ge \left(10 + 1/(d \cdot 10^k)\right)^{c \cdot 10^{l+1} + 1} \,;$

for $d \cdot 10^k \gt 10$, it obviously holds, and for $= 10$, $10^{10^{l+1}-1} \ge \left(1 + 1/100\right)^{10^{l+2}-10} \gt \left(1 + 1/100\right)^{9 \cdot 10^{l+1} + 1}$ still holds since $l \ge 1$ and $10 \ge (1+1/100)^{10}$.

• Finally, if it's between "d**1c0...01" and "d1**c0...01", the first option is better if $10 \ln d / \ln(10+1/d) \ge c + 10^{-l-1}$; even though any $l \ge 1$ is possible, the last term is so small ($\le 1/100$) that it doesn't matter — we decide based on $c$ and $d$ in the same way as with "c000...0".

This is a huge proof (I didn't find a shorter one that I'd be sure of), so I'm sure to have made some small mistakes at least. Of course, this isn't how you solve the problem during a contest — you can precompute more cases, use intuition to decide that all bases except the last few must follow some nice rules, just pray that you didn't miss anything and still submit... or test your solution on small diverse cases. Nevertheless, a proof like this is the only way to make sure no failing cases slipped through.

• +66

By Xellos, history, 15 months ago,

I just got a mail about Yandex contests this year. Good news: besides the 4 tracks they used last year, they're adding back Algorithms!

Seems like it won't be the exact same contest as before, though. The time frame is much shorter — qualification is in the week from 19th to 25th October, and the finals (presumably online) are 2 weeks later. It's unclear if there will still be 3 short main rounds in the qualification, although it's described as several chances to qualify. T-shirts aren't mentioned, only financial prizes for the top 3 and Yandex promocodes for the top 100.

At least they're saying it's a continuation of Yandex.Algorithm from 2018 and offering its problems as example problems. Let's hope it will really be like that.

• +29

By Xellos, history, 2 years ago,

I want to promote the option of virtual participation in contests, since I noticed it's not used as often as it could be. Perhaps it's just not something people keep in mind, but I'm sure that if you use it often, it will help you a lot.

What does it do? It allows you to compete in past contests as if you were actually competing when the contest took place — there are only two differences:

• it's unrated
• there are no hacks and so far, no pretests

Virtual participation is a great way to compete without worrying about your rating. Instead of making a fake account that you then throw away, you can just participate with your real account with no risk. If you care about embarrassing yourself with a poor performance, don't worry about it. Nobody cares.

Virtual participation is also a great way to train. Unlike regular upsolving, you have a timer, so you can't just take your time trying stupid shit or overthinking. This way, it's much better for training fast thinking. Taking your time with problems can also be useful if you want to learn more difficult things or try to solve a problem in the best way, not just in some way, so I'm not saying to disregard that, but combining both ways to train is surely better than just sticking to one.

Are you in div2 and do you want to train above your level? Virtual participation is the way to go.

Do you want to train for ACM as a team, but don't want to or can't meet up with a physical computer? Instead of making one team account, you also have the option of creating a team, adding your CF handles to it and trying virtual participation as a team.

The main disadvantage of virtual participation right now is that the format doesn't perfectly correspond to normal CF rounds. During the contest, the scoreboard you're shown is ACM-style, but it's back in CF style when it ends and your score in the final scoreboard follows the rules of the contest. More importantly, the judging is also ACM-style: there are no pretests. It would be a useful improvement for CF if this was fixed and virtual participation always followed the real contest rules, i.e. for CF rounds, the results during the virtual round would be shown just on pretests and updated to what they'd be after systests when the contest ends.

So, what do you do?

1. Click "Contests" or "Gym".
2. Pick a contest you haven't tried. Click "Virtual participation" under its name.
3. Set the starting time (default: up to 5 minutes from now), click "Register".
4. Improve your competitive programming skills!

• +207

By Xellos, history, 2 years ago,

Hello!

As you may have noticed, this year's Central European Olympiad in Informatics (link) will take place from July 23rd to 29th, in Slovakia, and we'll hold mirror contests for each day here. We want to preserve the (subtasks, no hacks) format of the competition, so the mirrors will be unrated, but I still encourage you to find the time and participate.

I think the problems will be fairly well-balanced — not very hard for LGMs, but not too easy either, and everyone should be able to find subtasks on their own level.

The mirrors will start shortly after the official contest starts (that's how it's scheduled right now) or shortly after it ends.

The problems were prepared by majk, Monika Steinová, misof, Hodobox, Peter Perešíni and me.

Also, thanks to MikeMirzayanov for Codeforces and Polygon.

• +147

By Xellos, history, 2 years ago,

As you may know, this year, CEOI is held in Slovakia (and we've finally got a website that doesn't look like it's from 1990!). There's a list of participants for IOI, but never for CEOI. So let's make it!

Since there are just a few participating countries, I won't bother automating it. Just mention your country's participants in the comments.

UPD: What about other countries? I added Slovakia, but there still aren't many listed.

UPD2: The teams are listed on the official website now.

• +46

By Xellos, history, 3 years ago,

Hi!

Two years ago, I wrote a post about a physics competition — primarily for secondary school students, but there's also an open category intended for everyone. It took place last year too, I just didn't think about writing a blog post here.

If you like physics problems, go ahead and register on online.fyziklani.cz/en/!

UPD: Now with fixed registration.

• +27

By Xellos, history, 3 years ago,

(In case you haven't noticed the occasional blogs about it,) Internet Problem Solving Contest is going to take place on 6th October.

For those who don't know, this is a contest for teams of up to 3 people which features both algorithmic and "unusual" problems. You can find more information in my blogs about past editions... and the link to the contest site in Bugman's comment below.

UPD: Only a short time left!

• +167

By Xellos, history, 3 years ago,

The link for the mirror contest of IOI day 1 has appeared!

Day 1

If you want to discuss anything about the problems in comments, use spoilers.

UPD: Day 2

• +46

By Xellos, history, 3 years ago,

A long time ago, I started writing syntax definitions for Q# in Sublime Text. Motivated again by the recent contest, I significantly improved them, so they're actually usable in a competition. You can check the current version at https://gitgud.io/Xellos/qsharp-sublime-syntax.

• +43

By Xellos, history, 4 years ago,

As you may have noticed, this year's edition of IPSC will take place a month later than usual — 8.7. (time). The registration has been open for a while now. Don't forget to participate!

You can find details of the competition in my previous blog or on its website.

Contest is over.

• +123

By Xellos, history, 5 years ago, translation,

Questions That Don't Deserve Their Own Blog — I guess it'd be good to have something like this as a place to ask random questions about problems ("I accidentally my code, wat do?") without having your blog post get downvotebombed?

old question

Next: I was upsolving DCJ problems during the second practice round (which ended yesterday). The only non-final problem I didn't manage to solve in time was highest_mountain and only because it gives me weird RE.

code

I'm merging O(nodes) small convex hulls of size O(N / nodes) by sending only every K-th point in each hull, merging them in master and sending back the range that remains in the merged convex hull; then, each node sends just the smallest O(N / nodes / K) leftmost or rightmost points of this range to each other node and each node recomputes the convex hull of its own O(N / nodes) points plus all O(N / K) points it got sent.

I proved that this is correct and it passes the small subproblem (with K = 10, nodes = 10, N = 1000), while slightly incorrect implementations don't, so it really should be correct. I also verified that it passes at least one official maxtest, but gives RE later.

The cause of the RE shouldn't be sending too much data, since each node sends/receives only O(N / K) of it (numerically 2-3 MB; I tried "if this array is too large then return 0" checks and it changed nothing either). The memory limit shouldn't be exceeded either.

Any ideas? Btw don't forget about GCJ/DCJ last online rounds this weekend.

• +41

By Xellos, history, 5 years ago,

UPD: Contest over. Did you participate?

Online Physics Brawl will take place on November 30th. Registration is possible until November 28th.

It's a team competition intended for secondary school students from ~~Yugoslavia~~ ~~Czechoslovakia~~ the Czech Republic and Slovakia, but there's a category for foreign secondary schools and an open category for anyone. Given the target audience, I don't think it'd be impossibly hard for CF users.

You can find the rules and past problems at online.fyziklani.cz. For each problem, you only submit a number, so programming knowledge may be useful (but the problems are fully solvable without it).

I'm a minor problemsetter/tester and the translator this year.

• +38

By Xellos, history, 5 years ago,

I got to IOI solving — and upsolving — a bit late and I just recently finished it (with help of various comments). One thing I didn't really see there was: how would you have done?

• aliens: a common DP + convex hull trick; apparently, gets TLE for the 5th subtask and even O(NK) has trouble fitting within the TL; 60pts

• shortcut: I knew that there's binsearch (from a similar TC problem); afterwards, finding a solution isn't hard, but improving it enough to get more points is; 93pts

• railroad: mfw — I didn't move beyond a bitset DP; even though I had a lot of ideas including what turned out to be the correct one (finding a flow), I thought it'd be some kind of greedy, but those are numerous and hard to check in this case; the last subtask was very easy after solving the 3rd; 34pts

• messy: the bounds are $7 \cdot 2^7$ with $N = 2^7$, so it has to be D&C; full score

• paint, molecules: easy 100pt problems

So that'd give me 7th place.

The first solution I got 93pts for in shortcut was this: when checking if the diameter is  ≤ D and considering i to be the left endpoint, find the largest j = j1(i) such that a shortcut between i and j gives all distances within the created cycle  ≤ D, the largest j = j2(i) such that all distances between vertices from the cycle and vertices to the left of i are  ≤ D, then symmetrically for each j the smallest i = i3(j) such that all distances between vertices from the cycle and those to the right of j are  ≤ D. There are also distances between pairs of vertices not on the cycle, but those are easy.

For j2(i), we can find the leftmost vertex k2(i) to the right of i such that it's farther from some vertex to the left of i without a shortcut; then, all other vertices on the cycle must be reachable using the shortcut, so pos(j2) - pos(a) + d(a) + C + lft(i) ≤ D for all vertices a between k2 and j2; here, pos is the position of each vertex on the main line (prefix sum of l) and lft(i) is the maximum distance to the left from i; we can extend it to all vertices a to the right of k2, which gives us a bound on pos(j2); we can find j2 by binsearching. Also, k2 can be found using an RMQ table for max(pos+d) and an RMQ-like search.

With j1(i), we can do something similar to finding j2 for each vertex, but only for distances to i exactly, not to the left of i (so there's d(i) instead of lft(i)); j1(i) can be computed as their suffix minima.

We're left with this problem: there's an interval $I_l(i)$ and $I_r(i)$ for each $i$; are there some $i, j$ such that $j \in I_l(i)$ and $i \in I_r(j)$? This can be solved e.g. using another RMQ table, in which we'll store the minimum left ends of $I_r$ in ranges of $j$; the right ends of $I_r$ aren't important. Then, for each $i$, we can find out if the minimum in $I_l(i)$ is $\le i$. (Alternatively, we can use a sweepline algorithm and store opened $I_r$-s in a set<>.)

How simple. /sarcasm

I tried to optimise this, but there was no way to squeeze it even into 97 points — surprisingly, since I didn't really need to optimise to get 93pts.

I got full score on shortcut using an approach based on http://codeforces.com/blog/entry/46518?#comment-310338 (note: extending to all pairs i, j is probably unnecessary and wrong — if i = j, we get a non-path which can be longer than D). The hardest part is computing the smallest j0 = j > i and largest i0 = i < j such that u[i] + v[j] > D, since u[i] and v[j] don't have to be monotonous; afterwards, we can find the smallest/largest sum/difference of x1, x2 by considering all j ≥ j0 / i ≤ i0, for which we only need prefix/suffix maxima of u[] and v[] and check if the answer exists using 2x 2 pointers in O(N).

To compute j0, let's move from i = N - 1 to i = 0 and recompute the sequence of closest increasing v[j]-s using the well-known stack algorithm. Obviously, we can find j0 by binsearching among them, but that gives $O(N \log N)$ time complexity. However, if j0(i) < j0(i') for i > i', then we can decrease j0(i') to j0(i) without affecting the bounds on the sum/difference of x1, x2; that means we can keep a pointer on the element in the stack (actually an array in the implementation) corresponding to v[j0] and only move this pointer in one direction, so it's $O(N)$ in total.

This got 97 points instantly, but I still needed some optimisations to get to 100. That time limit was really tight. Other than that, the problems were pretty good and difficult (as expected from Russia); I just missed problems with non-binary scoring per subtask.

• +167

By Xellos, history, 5 years ago,

A reminder that AtCoder doesn't only have Grand Contests, but also these. Starting time: in the past.

Also post-contest discussion, I guess.

• +55

By Xellos, 5 years ago,
const int *                              // pointer to const int
int const *                              // the same: pointer to const int
int * const                              // const pointer to (mutable) int
int const * const                        // const pointer to const int
const int * function (const arg) const   // const method returning pointer to const int
return void(l = r = 0);                  // assign both and "return" void in one statement


VERSION 2.0 RELEASED! Now with 100% less memory leaks or MLE verdicts. Project moved from Github.

The following text is for version 1.0.

Based on the e-maxx implementation and other stuff I found online, I pieced together a powerful treap. It can be used as a set<>, array, segment tree or (within limits) all of them at once.

Consider a set<> represented as a sorted array of distinct elements. You can insert/remove elements just like in a set<>. You can also find the index of any element in this array, remove an element by index — or insert an element at an index, but that will probably break the sortedness and you can't use the operations which rely on it anymore (you can still insert/remove elements by index). As long as the array is sorted, you can query lower bound / upper bound of any element, given as a pair (element, index).

Next, you can reverse a range, add to a range and query range sums; this will work all the time, but range addition can and reversing will break the sortedness. You can implement your own queries (like min/max) or combined range updates (paint+add) like in a segment tree. Since we need lazy propagation, everything is immutable (you can work around it e.g. with erase/insert) and I decided not to bother with iterators.

I took special care with randomisation. It uses testlib random_t and if equal priorities are detected (at insert), they're all regenerated, but it shouldn't happen with reasonably small data — the priorities are 60-bit, which is far above the Birthday Paradox threshold for just about anything.

The DS is templated over some type T, which has to be comparable and support addition / subtraction / multiplication by int (like a vector space in math).

It supports the following operations, all online in $O(\log N)$ (with N being the current number of elements, ofc):

| function | action | returns |
| --- | --- | --- |
| insert(element) | inserts element into a set | bool (was it inserted?) |
| insert_pos(index, element) | inserts element at index | void |
| erase(element) | removes element from a set | bool (was it removed?) |
| erase_pos(index) | removes element at index | void |
| get_index(element) | finds the element's index, size() if non-existent | int in [0..size()] |
| [index] | finds the element at index | T (not T&) |
| lower_bound(element) | finds the lower_bound() of that element, with index | pair (element, index) |
| upper_bound(element) | the same with upper_bound() | pair (element, index) |
| shift(left, right, element) | adds element to everything in the range [left, right] | void |
| reverse(left, right) | reverses the range [left, right] | void |
| sum(left, right) | finds the sum of the range [left, right] | T |

Also, there's the usual empty(), size(), is_sorted() (returns true if the sortedness hasn't been broken yet) and srand(), which lets you choose your seed if you don't want the testlib default.

I compared it with STL set<> on a million insert/erase/lower_bound operations; the treapset is ~3-4 times slower (which makes sense, since the structure supports many more operations). I also tested it on a million insert_pos/get_index/shift/reverse/sum operations; it took ~4 seconds locally. It seems to work...

• +103

By Xellos, history, 5 years ago,

The 2016 edition of Internet Problem Solving Contest is going to take place today (starting time).

It's a 5-hour contest for teams of up to 3 people, but there's also an individual division, so feel free to participate even if you can't find yourself a team at this point.

There's going to be an ACM-like number of problems (12+), ranging from classic algorithmic problems to quite unusual ones. Most problems have an easy subproblem worth 1 point and a hard one worth 2 points (think CodeJam); ties are broken using ACM rules (sum of times).

The practice session is over. The contest is over!

#### Belated, yet necessary warning!

Since this is a 5-hour contest where you execute code locally, some inputs will be YUGE (gigabytes). Accordingly, the problems will have fairly high system requirements. Get a good computer. While the official solutions quite comfortably run on my mediocre laptop, if you write something too inefficient, you can encounter a nasty surprise, e.g. a frozen system. It happened to me last year.

If an input is big, you won't have to download it; instead, there will be a generator (typically a Python script, meaning they aren't very fast). It's usually a good idea to run all generators as early as possible — as long as it doesn't slow down your computer too much, you can avoid a situation where you can't submit a problem because the contest ended before your generator finished.

Actually, you should just try to read as many problems as possible and determine your strategy after you can guess your chances well enough.

#### Some quick stats

11145 submissions
5370 successful submissions (48% success rate)
797 active teams out of 1376 (58% did not give up before even trying)
773 teams with positive score (97% of active teams)
12/13 problems solved by someone
maximum number of fully solved problems: 10/13
lowest non-zero success rate: D (D2: 20%)
highest success rate: C,F (C2,F2: 85%)

highest success rate (easy subproblems): G1 (85%)
lowest success rate (easy subproblems): J1,M1 (25%)

hard problems (<50 teams solved) sorted by difficulty:
E: 0/13
M: 2/10
J: 4/17
H: 11/17
B: 11/18
L: 16/46
K: 40/107


• +193

By Xellos, history, 5 years ago,

Seeing as there's no blogpost from chrome about the round a few hours before it...

TC SRM 692 starts soon. If you're up at night, you can participate!

UPD: Contest over! As you may have noticed, I was the writer; it's worth reading the problems of both divisions just for the stories.

Hints:

TreeSums
LinenCenter
HardProof
Dubs

• +107

By Xellos, history, 6 years ago,

Spoiler support is here! Finally, solutions can be spoilered!

For example, my solution to F from CROC 2016 - Elimination Round:

We need LaTeX in spoilers, too.

UPD: Hell yeah, finals! I'll just leave this here.

• +43

By Xellos, history, 6 years ago,

Hello!

Today (20.3.), another monthly Cook-off takes place. It should last for 2.5 hours (21:30 to 00:00 IST) — starting time.

There will be 5 problems of varying difficulties — I think they aren't that hard (not for lack of trying on my part), there should be a lot more full scores than last month :D. Feel free to discuss the problems / provide feedback in the comments (after the contest ends).

In order to participate, you only need to have a CodeChef account. If you don't have one, you can register here.

Problem Setter: me
Problem Tester & Russian Translator: Antoniuk (Vasya Antoniuk)
Editorialist: PraveenDhinwa (Praveen Dhinwa)
Mandarin Translator: huzecong (Hu Zecong)
Vietnamese Translator: Team VNOI
Language Verifier & Contest Admin: PraveenDhinwa (Praveen Dhinwa)

Prizes: The prize system has changed this month. The top 10 performers in Global and Indian category will get CodeChef laddus; with them, you can claim CodeChef goodies. You can find out more here: https://www.codechef.com/laddu. (If you didn't get your previous goodies, you can contact winners@codechef.com.)

BTW, Codechef is celebrating its 7th birthday this month.

Hopefully, there won't be any technical difficulties this time! Good luck!

The contest is over!

Very simple hints for someone who doesn't want to read editorials yet:

ONOZ, ALTARAY
TWONIM
NPLANAR
KTHCON

• +87

By Xellos, history, 6 years ago,

### Hints:

div2A: Try conversions between bases.

div2B: Solve a simpler version of the problem where $A_{i+1} \ne A_i$ for all $i$.

div1A: What are the shortest paths of the vehicles? What's the shorter of those paths?

div1B: Forget about the ceiling function. Draw points (i, A[i]) and lines between them — what's the Lipschitz constant geometrically?

div1C: Some dynamic programming. Definitely not for the exp. score of one person — look at fixed scores instead.

div1D: Compute dif(v) in $O(N)$ (without hashing) and then solve the problem in $O(N^2)$. You need some smart merges.

div1E: Can you solve the problem without events of type 1 or 2? Also, how about solving it offline — as queries on subsets.

### Div. 2 A: Two Bases

It's easy to compare two numbers if they're both written in the same base. And our numbers can be converted to a common base — just use the formulas

A straightforward implementation takes O(N + M) time and memory. Watch out, you need 64-bit integers! And don't use pow — iterating is better.

### Div. 2 B: Approximating a Constant Range

Let's process the numbers from left to right and recompute the longest range ending at the currently processed number.

One option would be remembering the last position of each integer using STL map<>/set<> data structures, looking at the first occurrences of $A_i$ plus/minus 1 or 2 to the left of the current $A_i$ and deciding on the almost constant range ending at $A_i$ based on the second closest of those numbers.

However, there's a simpler and more efficient option — notice that if we look at non-zero differences in any almost constant range, then they must alternate: $\ldots, +1, -1, +1, -1, \ldots$ If there were two successive differences of $+1$-s or $-1$-s (possibly separated by some differences of $0$), then we'd have numbers $a - 1, a, a, \ldots, a, a + 1$, so a range that contains them isn't almost constant.

Let's remember the latest non-zero difference (whether it was +1 or -1 and where it happened); it's easy to update this info when encountering a new non-zero difference.

When doing that update, we should also check whether the new non-zero difference is the same as the latest one (if $A_i - A_{i-1} = A_{j+1} - A_j$). If it is, then we know that any almost constant range that contains $A_i$ can't contain $A_j$. Therefore, we can keep the current leftmost endpoint $l$ of an almost constant range and update it to $j + 1$ in any such situation; the length of the longest almost constant range ending at $A_i$ will be $i - l + 1$.

This only needs a constant number of operations per element, so the time complexity is $O(N)$. Memory: $O(N)$, but it can be implemented in $O(1)$.

Bonus: the maximum difference permitted in an almost constant range is an arbitrary D.

### Div. 2 C / Div. 1 A: The Two Routes

The condition that the train and bus can't meet at one vertex except the final one is just trolling. If there's a railway between towns 1 and N, then the train can take it and wait in town N. If there's no such railway, then there's a road between 1 and N, so the bus can take it and wait in N instead. There's nothing forbidding this :D.

The route of one vehicle is clear. How about the other one? Well, it can move as it wants, so the answer is the length of its shortest path from 1 to N... or −1 if no such path exists. It can be found by BFS in O(N + M) = O(N²) time.

In order to avoid casework, we can just compute the answer as the maximum of the train's and the bus's shortest distance from 1 to N (treating a distance as infinite, i.e. answer −1, when the corresponding vehicle can't reach N). The vehicle with the direct edge has distance 1 and the answer is ≥ 1, so this maximum is exactly the other vehicle's distance.

In summary, time and memory complexity: O(N²).
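A compact sketch of the whole solution under these observations (0-indexed towns; function names are mine):

```cpp
#include <algorithm>
#include <queue>
#include <utility>
#include <vector>

const int INF = 1e9;

// BFS distance from vertex 0 to n-1 on an adjacency matrix; INF if unreachable.
int bfs(const std::vector<std::vector<bool>>& adj, int n) {
    std::vector<int> dist(n, INF);
    std::queue<int> q;
    dist[0] = 0;
    q.push(0);
    while (!q.empty()) {
        int v = q.front(); q.pop();
        for (int u = 0; u < n; u++)
            if (adj[v][u] && dist[u] == INF) {
                dist[u] = dist[v] + 1;
                q.push(u);
            }
    }
    return dist[n - 1];
}

// `railways` lists railway edges; roads are the complement. One vehicle takes
// the direct edge (0, n-1) and waits, so the answer is the larger of the two
// BFS distances (-1 if that one is infinite).
int solve(int n, const std::vector<std::pair<int, int>>& railways) {
    std::vector<std::vector<bool>> rail(n, std::vector<bool>(n, false));
    for (auto [a, b] : railways) rail[a][b] = rail[b][a] = true;
    std::vector<std::vector<bool>> road(n, std::vector<bool>(n, false));
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            road[i][j] = (i != j) && !rail[i][j];
    int res = std::max(bfs(rail, n), bfs(road, n));
    return res >= INF ? -1 : res;
}
```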

Bonus: Assume that there are M1 roads and M2 railways given on the input, all of them pairwise distinct.

Bonus 2: Additionally, assume that the edges are weighted. The speed of both vehicles is still the same — traversing an edge of length l takes l hours.

### Div. 2 D / Div. 1 B: Lipshitz Sequence

Let L1(i, j) = |Ai − Aj| / |i − j| for i ≠ j.

Key observation: it's sufficient to consider j = i + 1 when calculating the Lipschitz constant. It can be seen if you draw points (i, Ai) and lines between them on paper — the steepest lines must be between adjacent pairs of points.

In order to prove it properly, we'll consider three numbers Ai, Aj, Ak (i < j < k) and show that one of the numbers L1(i, j), L1(j, k) is  ≥ L1(i, k). W.l.o.g., we may assume Ai ≤ Ak. There are 3 cases depending on the position of Aj relative to Ai, Ak:

• Aj > Ai, Ak — we can see that L1(i, j) > L1(i, k), since |Aj - Ai| = Aj - Ai > Ak - Ai = |Ak - Ai| and j - i < k - i; we just need to divide those inequalities

• Aj < Ai, Ak — this is similar to the previous case, we can prove that L1(j, k) > L1(i, k) in the same way

• Ai ≤ Aj ≤ Ak — this case requires more work:

• we'll denote d1y = Aj - Ai, d2y = Ak - Aj, d1x = j - i, d2x = k - j
• then, L1(i, j) = d1y / d1x, L1(j, k) = d2y / d2x, L1(i, k) = (d1y + d2y) / (d1x + d2x)
• let's prove it by contradiction: assume that L1(i, j), L1(j, k) < L1(i, k)
• d1y + d2y = L1(i, j)d1x + L1(j, k)d2x < L1(i, k)d1x + L1(i, k)d2x = L1(i, k)(d1x + d2x) = d1y + d2y, which is a contradiction

We've just proved that for any L1 computed for two elements A[i], A[k] with k > i + 1, we can replace one of i, k by a point j between them without decreasing L1; a sufficient number of such operations will give us k = i + 1. Therefore, the max. L1 can be found by only considering differences between adjacent points.

This is actually a huge simplification — the Lipschitz constant of an array is the maximum abs. difference of adjacent elements! If we replace the array A[1..n] by an array D[1..n - 1] of differences, D[i] = A[i + 1] - A[i], then the Lipschitz constant of a subarray A[l, r] is the max. element in the subarray D[l..r - 1]. Finding subarray maxima actually sounds quite standard, doesn't it?
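As a sanity check of this reduction, here's the all-pairs definition (with the ceiling from the problem statement) against the adjacent-difference maximum — a brute force, not the intended solution:

```cpp
#include <algorithm>
#include <cstdlib>
#include <vector>

// Brute-force Lipschitz constant: the smallest non-negative L with
// |A[i] - A[j]| <= L * |i - j|, i.e. max over pairs of ceil(|A[i]-A[j]|/(j-i)).
int lipschitz_brute(const std::vector<int>& a) {
    int n = a.size(), L = 0;
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)
            L = std::max(L, (std::abs(a[i] - a[j]) + (j - i) - 1) / (j - i));
    return L;
}

// The claim above: the maximum is always attained by adjacent elements,
// where the ceiling is irrelevant (the denominator is 1).
int lipschitz_fast(const std::vector<int>& a) {
    int L = 0;
    for (size_t i = 1; i < a.size(); i++)
        L = std::max(L, std::abs(a[i] - a[i - 1]));
    return L;
}
```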

No segment trees, of course — there are still too many subarrays to consider.

So, what do we do next? There are Q queries to answer, but not too many of them, so we can process each of them in O(N) time. One approach that works is assigning a max. difference D[i] to each subarray — since there can be multiple maxima, let's take the leftmost one.

We can invert it to determine the subarrays for which a given D[i] is maximum: if D[ai] is the closest difference to the left of D[i] that's ≥ D[i] (ai = 0 if there's none), and D[bi] is the closest difference to the right that's > D[i] (bi = n if there's none — note the strict/non-strict inequality signs: we don't care about differences equal to D[i] to its right, but there can't be any to its left, or it wouldn't be the leftmost max.), then those are all subarrays D[j..k] such that ai < j ≤ i ≤ k < bi.

If we don't have the whole array D[1..n − 1], but only some subarray D[l..r], then we can simply replace ai by max(ai, l − 1) and bi by min(bi, r + 1). The number of those subarrays is Pi = (i − ai)(bi − i), since we can choose j and k independently.

All we have to do to answer a query is check all differences, take ai, bi (as the max/min with some precomputed values) and compute Pi; the answer to the query is Σi D[i]·Pi. We only need to precompute all ai, bi for the whole array D[1..n − 1] now; that's a standard problem, solvable using stacks in O(N) time or using maps + Fenwick trees in O(N log N) time.

The total time complexity is O(NQ), memory O(N).
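One possible O(N)-per-query sketch; unlike the text above, it recomputes ai, bi with monotonic stacks inside each query instead of clamping precomputed values, which keeps the same asymptotics (names are mine):

```cpp
#include <cstdint>
#include <cstdlib>
#include <vector>

// Sum of Lipschitz constants of all subarrays of a[l..r] (0-indexed,
// inclusive). Each difference d[i] is the maximum of (i - le) * (ri - i)
// subarrays of the difference array, where `le` is the nearest position with
// d >= d[i] on the left and `ri` the nearest with d > d[i] on the right
// (strict vs. non-strict breaks ties toward the leftmost maximum).
int64_t query(const std::vector<int>& a, int l, int r) {
    std::vector<int64_t> d;
    for (int i = l; i < r; i++) d.push_back(std::abs(a[i + 1] - a[i]));
    int m = d.size();
    std::vector<int> le(m), ri(m), st;
    for (int i = 0; i < m; i++) {                 // nearest d[j] >= d[i] to the left
        while (!st.empty() && d[st.back()] < d[i]) st.pop_back();
        le[i] = st.empty() ? -1 : st.back();
        st.push_back(i);
    }
    st.clear();
    for (int i = m - 1; i >= 0; i--) {            // nearest d[j] > d[i] to the right
        while (!st.empty() && d[st.back()] <= d[i]) st.pop_back();
        ri[i] = st.empty() ? m : st.back();
        st.push_back(i);
    }
    int64_t ans = 0;
    for (int i = 0; i < m; i++)
        ans += d[i] * (i - le[i]) * (ri[i] - i);
    return ans;
}
```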

Bonus: Q ≤ 10^5.

### Div. 1 C: Kleofáš and the n-thlon

As it usually happens with computing expected values, the solution is dynamic programming. There are 2 things we could try to compute: probabilities of individual overall ranks of Kleofáš or just some expected values. In this case, the latter option works.

• "one bit is 8 bytes?"
• "no, the other way around"
• "so 8 bytes is 1 bit?"

After some attempts, one finds out that there's no reasonable way to make a DP for an expected rank or score of one person (or multiple people). What does work, and will be the basis of our solution, is the exact opposite: we can compute the expected number of people with a given score. The most obvious DP for it would compute E(i, s) — the exp. number of people other than Kleofáš with score s after the first i competitions.

Initially, E(0, 0) = m − 1 and E(0, s > 0) = 0. How can we get someone with score s in competition i? That person can have any score k from 1 to m except xi (since Kleofáš has that one), each with the same probability 1/(m − 1). The expected values are sums over the probabilities P(i, s, j) that there are j people with score s: E(i, s) = Σj j·P(i, s, j).

Considering that the probability that any one of them gets score k is 1/(m − 1), we know that with probability P(i, s, j), there were j people with score s before the competition, and each of them has score s + k after it with probability 1/(m − 1) — adding j·P(i, s, j)/(m − 1) to E(i + 1, s + k) in expectation. By summation over j, we'll find the exp. number of people who had overall score s and scored k more: Σj j·P(i, s, j)/(m − 1) = E(i, s)/(m − 1).

Lol, it turns out to be so simple.

We can find the expected value E(i + 1, t) afterwards: since getting overall score t after i + 1 competitions means getting score k in the currently processed competition and overall score s = t − k before, and both distinct k and expectations for people with distinct s are totally independent of each other, we just need to sum up the exp. numbers of people with those scores (which we just computed) over the allowed k: E(i + 1, t) = Σ E(i, t − k)/(m − 1), summing over 1 ≤ k ≤ m, k ≠ xi+1.

The formulas for our DP are now complete and we can use them to compute E(n, s) for all 1 ≤ s ≤ mn. Since the E(n, s) people with s smaller than the overall score sk of Kleofáš each add 1 to his overall rank and people with s ≥ sk add nothing, we can find the answer as 1 + Σs<sk E(n, s).

This takes O(m²n²) time, since there are O(mn) scores, O(mn²) states of the DP and directly computing each of them takes O(m) time. Too slow.

We can do better, of course. Let's forget about dividing by m - 1 for a while; then, E(i + 1, t) is a sum of E(i, s) for one or two ranges of scores — or for one range minus one value. If you can solve div1C, then you should immediately know what to do: compute prefix sums of E(i, s) over s and find E(i + 1, t) for each t using them.

And now, computing one state takes O(1) time and the problem is solved in O(mn²) time (and memory).
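The whole DP can be sketched as follows (assuming m ≥ 2; the function name is mine):

```cpp
#include <algorithm>
#include <vector>

// Expected overall rank of Kleofáš. E[s] is the expected number of *other*
// participants with total score s; one competition step replaces E by
//   E'(t) = (sum of E(t - k) for k = 1..m, k != x_i) / (m - 1),
// computed for all t at once with prefix sums. x[i] in 1..m, m >= 2 assumed.
double expected_rank(int n, int m, const std::vector<int>& x) {
    int maxS = n * m;
    std::vector<double> E(maxS + 1, 0.0);
    E[0] = m - 1;
    for (int i = 0; i < n; i++) {
        std::vector<double> pref(maxS + 2, 0.0);
        for (int s = 0; s <= maxS; s++) pref[s + 1] = pref[s] + E[s];
        std::vector<double> nE(maxS + 1, 0.0);
        for (int t = 1; t <= maxS; t++) {
            int lo = std::max(0, t - m), hi = t - 1;  // scores t-k for k = 1..m
            double sum = pref[hi + 1] - pref[lo];
            if (t - x[i] >= 0) sum -= E[t - x[i]];    // exclude k = x[i]
            nE[t] = sum / (m - 1);
        }
        E = nE;
    }
    int sk = 0;
    for (int xi : x) sk += xi;                        // Kleofáš's total score
    double rank = 1.0;
    for (int s = 0; s < sk && s <= maxS; s++) rank += E[s];
    return rank;
}
```

On the first sample of the problem (n = 4, m = 10, scores 2 1 2 1) no other participant can score strictly less, so the rank is exactly 1.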

Bonus: Really, how fast can you solve this problem?

### Div. 1 D: Acyclic Organic Compounds

The name is really almost unrelated — it's just what a tree with arbitrary letters typically is in chemistry.

If you solved problem TREEPATH from the recent Codechef November Challenge, this problem should be easier for you — it uses the same technique, after all.

Let's figure out how to compute dif(v) — the number of distinct strings read downwards from v — for just one fixed v. One more or less obvious way is computing hashes of our strings in a DFS and then counting the number of distinct hashes (which is why there are anti-hash tests :D). However, there's another, deterministic and faster way.

#### Compressing the subtree Tv into a trie.

Recall that a trie is a rooted tree with a letter in each vertex (or possibly nothing in the root), where each vertex encodes a unique string read along the path from the root to it; it has at most σ sons, where σ = 26 is the size of the alphabet, and each son contains a different letter. Adding a son is done trivially in O(σ) (each vertex contains an array of 26 links to — possibly non-existent — sons) and moving down to a son with the character c is then possible in O(1).

Compressing a subtree can be done in a DFS. Let's build a trie Hv (because Tv is already used), initially consisting only of one vertex — the root containing the letter sv. In the DFS, we'll remember the current vertex R of the tree T and the current vertex cur of the trie. We'll start the DFS at v with cur being the root of Hv; all we need to do is look at each son S of R in DFS, create the son curs of cur corresponding to the character sS (if it didn't exist yet) and run DFS(S, curs). This DFS does nothing but construct Hv that encodes all strings read down from v in Tv. And since each vertex of Hv encodes a distinct string, dif(v) is the number of vertices of Hv.

This runs in O(|Tv|·σ) time, since it can create a trie with |Tv| vertices in the worst case. Overall, it'd be O(N²σ) if T looks sufficiently like a path.
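A sketch of this slow, single-vertex version (children lists plus one lowercase letter per tree vertex; names are mine):

```cpp
#include <array>
#include <string>
#include <vector>

// Trie with 26 child links per vertex; -1 means "no such son".
struct Trie {
    std::vector<std::array<int, 26>> next;
    int add_node() {
        next.push_back({});
        next.back().fill(-1);
        return next.size() - 1;
    }
};

// Walk the subtree of v and the trie in lockstep, creating trie sons on demand.
void dfs(int v, int cur, const std::vector<std::vector<int>>& sons,
         const std::string& s, Trie& trie) {
    for (int u : sons[v]) {
        int c = s[u] - 'a';
        if (trie.next[cur][c] == -1) {
            int nn = trie.add_node();
            trie.next[cur][c] = nn;
        }
        dfs(u, trie.next[cur][c], sons, s, trie);
    }
}

// dif(v): each trie vertex encodes one distinct downward string from v.
int dif(int v, const std::vector<std::vector<int>>& sons, const std::string& s) {
    Trie trie;
    int root = trie.add_node();   // the root stands for the 1-letter string s[v]
    dfs(v, root, sons, s, trie);
    return trie.next.size();
}
```

For a root 'a' with two sons 'b', 'b' (one of them having a son 'c'), the strings are "a", "ab", "abc", so dif = 3 — the two "ab" branches collapse in the trie.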

#### The HLD trick

Well, what can we do to improve it? This trick is really the same — find the son w of v that has the maximum |Tw|, add sv to Hw and make it Hv; then, DFS through the rest of Tv and complete the trie Hv as in the slow solution. The trick resembles HLD a lot, since we're basically remembering tries on HLD-paths.

If v is a leaf, of course, we can just create Hv that consists of one vertex.

How do we "add" v to a trie Hw of its son w? Well, v should be the root of the trie afterwards and the original Hw's root should become its son, so we're rerooting Hw. We'll just create a new vertex in Hw with sv in it, make it the root of Hw and make the previous root of Hw its son. And if we number the tries somehow, then we can just set the number of Hv to be the number of Hw.

It remains true that dif(v) is |Hv| — the number of vertices in the trie Hv, which allows us to compute those values directly. After computing dif(v) for each v, we can just compute both statistics directly in O(N).

Since each vertex of T corresponds to vertices in at most O(log N) tries (one more for each light edge on the path from it to the root), we aren't creating tries with a total of O(N²) vertices, but O(N log N). The time complexity is therefore O(Nσ log N). However, the same is true for the memory, so you can't waste it too much!

Bonus: you have an additional tiebreaker condition for vertices with identical dif(v). Count the number of distinct strings which occurred exactly k times for each k in an array Pr[]; take the vertex/vertices with lexicographically maximum Pr[] (as many strings as possible which occur only once, etc).

Bonus 2: Can you get rid of the logarithm in the time complexity?

Comic strip name: Indy. Go read the whole thing, it's not very long, but pretty good.

### Div. 1 E: A Museum Robbery

In this problem, we are supposed to solve the 0-1 knapsack problem for a set of items which changes over time. We'll solve it offline — each query (event of type 3) is asked about a subset of all N exhibits appearing on the input.

#### Introduction

If we just had 1 query and nothing else, it's just standard knapsack DP. We'll add the exhibits one by one and update s(m) (initially, s(m) = 0 for all m). When processing an exhibit with (v, w), in order to get loot with mass m, we can either take that exhibit and get value at least s(m − w) + v, or not take it and get s(m); therefore, we need to replace s(m) by max(s(m), s(m − w) + v); the right way to do it is in decreasing order of m.
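The update can be sketched as (here s(m) is the best value with total mass at most m; names are mine):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Add one exhibit (value v, mass w) to the knapsack table s[0..k].
// Iterating m downwards ensures the exhibit is used at most once:
// s[m - w] still refers to the table *before* this exhibit was added.
void add_exhibit(std::vector<int64_t>& s, int v, int w) {
    for (int m = (int)s.size() - 1; m >= w; m--)
        s[m] = std::max(s[m], s[m - w] + v);
}
```

Adding N exhibits to a table of size k + 1 this way is the O(Nk) knapsack mentioned below.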

In fact, it's possible to merge 2 knapsacks with any number of items in O(k²), but that's not what we want here.

Note that we can add exhibits this way. Thus, if there were no queries of type 2, we would be able to solve the whole problem in O(Nk) time by just remembering the current s(m) and updating it when adding an exhibit. Even if all queries were of type 2 (with larger n), we'd be able to solve it in O(nk) time in a similar way after sorting the exhibits in the order of their removal and processing queries/removals in reverse chronological order.

#### The key

Let's have Q queries numbered 1 through Q in the order in which they're asked; query q is asked on some subset Sq of exhibits.

MAGIC TRICK: Compute the values s(m) only for subsets — the intersections of pairs of queries 2q, 2q + 1 (intersection of the first and the second query, of the third and fourth etc.), recursively. Then, recompute s(m) for all individual queries in O((N + Q)k) time by adding elements which are missing in the intersection, using the standard knapsack method.

What?! How does this work?! Shouldn't it be more like O(N²) time? Well, no — just look at one exhibit and the queries where it appears. It'll be a contiguous range of them — since it's displayed until it's removed (or the events end). This element will only be missing in the intersection, but present in one query (so it'll be one of the elements added using knapsack DP), if query 2q + 1 is the one where it appears first or query 2q the one where it appears last. That makes at most two additions of each element and O(N) over all of them; adding each of them takes O(k) time, which gives O(Nk).

The second part of the complexity, O(Qk) time, is spent by copying the values of s(m) first from the intersection of queries 2q and 2q + 1 to those individual queries.

If we're left with just one query, we can solve it in O(Nk) as the usual 0-1 knapsack.

Since we're halving the number of queries when recursing deeper, we can only recurse to depth O(log Q) and the time complexity is O((N + Q)k log Q).

#### A different point of view (Baklazan's)

We can also look at this as building a perfect binary tree with sets S1, ..., SQ in leaves and the intersection of sets of children in every other vertex.

For each vertex v of this tree, we're solving the knapsack — computing s(m) — for the set Dv of displayed exhibits in it. We will solve the knapsack for the root directly and then proceed to the leaves. In each vertex v, we will take s(m), the set Dp of its parent p and find s(m) for v by adding exhibits which are in Dv, but not in Dp.

We know that the set Dp is of the form Sa ∩ Sa+1 ∩ … ∩ Sb for some a, b, and Dv is either of the form Sa ∩ … ∩ Sm or Sm+1 ∩ … ∩ Sb for some m (depending on whether it's the left or the right son). In the first case, only elements removed between the m-th and b-th query have to be added and in the second case, it's only elements added between the a-th and (m + 1)-th query. Since each element will only be added/removed once and the ranges of queries on the same level of the tree are disjoint, we will do O((N + Q)k) work on each level and the overall time complexity is O((N + Q)k log Q).

#### Finding the intersections and exhibits not in the intersections

Of course, bruteforcing them in O(NQ) isn't such a bad idea, but it'd probably TLE — and we can do better. We've just described how we can pick those exhibits based on the queries between which they were added/removed. Therefore, we can find — for each exhibit — the interval of queries for which it was displayed and remember for each two consecutive queries the elements added and removed between them; finding the exhibits added/removed in some range is then just a matter of iterating over them. Since we're actually adding all of them, this won't worsen the time complexity.

In order to efficiently determine the exhibits in some set Sa ∩ … ∩ Sb, we can remember for each exhibit the interval of time when it was displayed. The exhibit is in the set if and only if it was displayed before the a-th query and remained displayed at least until the b-th query.

To conclude, the time complexity is O((N + Q)k log Q) and, since we don't need to remember all levels of the perfect binary tree, but just the one we're computing and the one above it, the memory complexity is O(Qk).

• +490

By Xellos, history, 6 years ago,

Hello everyone!

CF round #333 (both divisions) is going to take place today. The authors of this round are me and Baklazan.

By total coincidence, I'm the author of even-indexed problems and Baklazan the author of odd-indexed problems. Now I'm going to let you wonder if it's 0-based or 1-based :D.

As usual, thanks to GlebsHP for his help (in particular, helping us not overkill the problems), MikeMirzayanov for CF and Polygon, Delinur for problem statement translations and our testers: misof, Mimino, AlexFetisov and winger.

I wish good results and rating changes to those who earn it, as well as a zero-sum rating update to everyone.

Score distribution:

Div. 2: 500-1000-1500-2250-2250

Div. 1: 500-1250-1250-2000-2500

Winrars:

Div. 1

Div. 2

(homogeneous colours :D)

• +770

By Xellos, history, 6 years ago,

As usual, 10 days from the first Friday of the month (link to the contest; starting time), a Long Challenge contest takes place. As usual again, there are 10 problems — 9 with subtask scoring and one challenge problem with relative scoring. Feel free to participate!

The people involved are:

UPD: The editorials are up. I still have stuff like links to add, but everything should be complete otherwise.

• +90

By Xellos, history, 6 years ago,

Hello, and, again, you are invited to a monthly contest: HackerEarth October Clash.

The problemsetter this time is RomaWhite; I'm the tester and editorialist. After all, I have to give I_love_Tanya_Romanova a chance to compete! :D

The contest takes place this weekend (starting time) and runs for 24 hours. There should be 6 algorithmic problems with partial scoring (points are given independently for passing each test file) and 1 challenge problem. I think the problems are easier than last month, when two subtasks of one problem went unsolved by anyone.

Good luck!

Btw, round 1 of COCI overlaps with this Clash for 3 hours. That, of course, doesn't mean it's not possible to participate in both at the same time — from my own experience. It gets harder when trying to take part in all contests which take place tomorrow...

UPD: The contest has finished, congratulations to the colourful top 5:

1. Catalin Stefan Tiseanu
2. JoeyWheeler
3. rantd
4. FatalEagle
5. Lewin

The editorials were delayed (because I overslept), almost all of them are up now.

• +76

By Xellos, history, 7 years ago,

The next edition of Internet Problem Solving Contest is here!

In case you don't know (yet), IPSC problems come in a great variety, from standard algorithmic ones to problems where you interact with the grader or need to find a specific input.

Most problems have an easy input worth 1 point and hard input worth 2 points; otherwise, the scoring is ACM-like, with WAs on easy inputs giving time penalty 10 minutes instead of 20. The input files are available for download and you only upload your outputs (in a similar manner to GCJ, except there's no additional time limit).

It's a team competition for teams of three. IPSC 2015 takes place on 20th of June, from 11:00 UTC and lasts 5 hours. You may register here.

#### CONTEST IS OVER.

Registered teams

(I won't be adding teams here anymore, because who cares anymore)

Place Points Time Team name Member 1 Member 2 Member 3
5 30 2775 Warsaw Capybaras mnbvmar Swistakk Errichto
6 29 2155 Havka-papstvo Egor pashka Petr
12 28 2166 Knifeproof Tank niyaznigmatul VArtem tourist
16 27 2577 sudo set-position rand()%N fsouza marcoskwkm StefanoT
26 25 1815 Andromeda Express ainu7 JongMan Astein
27 25 1873 Team Accepted Limit Exceeded popoffka Alex_2oo8 Ingus
32 24 2417 ThankYouMikeMirzRAYanovForYou- (sic) CodeforceAndPolygonPlatforms xxTastyHypeBeast666xx JoeyWheeler junkbot
33 24 2423 Unpretired ilyakor Jacob gusakov
35 23 1803 SPb SU 8/3 Dmitry_Egorov PavelKunyavskiy nk.karpov
36 23 2029 Corridors of Time flydutchman Riatre this_isssssyy
40 23 2468 Dig LiTi PrinceOfPersia HosseinYousefi
42 23 2565 stigma sugim48 evima stqn
44 22 1528 RaccoonRush subscriber enot110 antonkov
52 21 1826 W4yneb0t W4yneb0t
55 21 1890 MooOOoOooOOOOoOooooP DemiGuo ksun48 yummy
61 20 1503 iThan chaotic_iak jonathanirvings nathanajah
63 20 1639 itmo150216 izban vlad107 Belonogov
64 20 1706 My Igloo Is Melting Kuroba FatalEagle zxqfl
67 20 1773 Return of Celtic Warriors Tanaeem Sunny dragoon
72 20 2014 ALREADY HAVE DONE conflict ko_osaga Jiyong Youn
80 19 1345 dolphin secret agents stan acherepanov permin
91 19 1837 MSU Tashkent Old Coders Timur_Sitdikov SergeyLazarev helThazar
102 19 2425 Ural FU Dandelion mpivko Umqra Um_nik
104 18 1335 PAPFans M.Mahdi PAP SeyedParsa
115 18 1692 GD-PSP Jokser sweiss pvasilyev
128 17 1244 Frozen Heart Nikitosh Tehnar ComradePetr
131 17 1433 Zenith ngfam_kongu I_love_Hoang_Yen flashmt
141 17 2324 12.0ngskar dolphinigle Gyosh fushar
142 16 1244 CodeFights ---Grigor--- aram90 armen.tsirunyan
143 16 1281 AOI2 GaryYe fleimgruber
156 16 1722 BajaB ShayanH Lost aliasadiiii
174 15 1309 Please explain why havka eto papstvo OutSide FxF Fcdkbear
180 15 1472 Bangladesh Avengers emo moshiur sohelH
183 15 1533 Saratov SU 1 kuviman IlyaLos danilka.pro
212 14 1319 ZER zholnin e19-un AClover
243 13 1314 cup_of_team cup_of_tea
255 13 1748 Dirsa how_to_become_purple Sanja Mishutnik
265 12 1073 Masr Islamia (╥‿╥) KhaledKEE ahmednaoum Mohamed Al-Jamal
270 12 1197 B-b-b-b-bones! mysterion knst
271 12 1240 ☺ I can't see plus-plus ☺ tjandra
279 12 1389 namelist.insert("দৌড়ের উপর") enzam VUAcoder wasi.ahmed
280 12 1425 kvark161: Kvark161
282 12 1564 8-HD-720p-YIFY-Ganool.3gp azaky farisv makanbeling
301 11 1128 For the watch ashish1610 rohangarg kshitij_jain
303 11 1164 Chega de saudades ivanilos
309 11 1246 sorry_helli SaDDaS Reyna I_love_Yuuki_Asuna
318 10 739 Donkey Fly EKGMA ITDOI Teshnizi
320 10 802 dpsd_team rajat1603 sidhant additya1998
321 10 817 Flawless Fdg Furko mgch
335 10 982 unemployed, useless dropout & cute woman vadimmm Rubanenko baba_beda
343 10 1062 Alexander Udalov udalov
348 10 1104 85-85 farzad.shbfn shamir0xe
361 10 1303 Samne Porikkha...Asen Contest Kori zubaer.kh amlansaha Honour_00
376 9 673 bambino 2shraaf Badry mohamed.mehany
383 9 824 SPiromanii&Messi patrick.sava teoionescu george_stelian
414 8 786 Never Slowdown Reza_H DemoVersion
428 8 1023 les apparences sont trompeuses members Safrout KarimElSheikh MedoN11
430 8 1057 UHv6 jcg marX
464 7 697 NHSPC Juniors ruhan.habib39 tasmeemreza rubabredwan
485 6 228 TeamUFRN xyz222 Zailton RailtonT
488 6 254 milkyway ptnk_1215_panaka_13 touristv2 TiChuot97
505 6 462 code_phoenix ajinkya1p3 yogeshg39 InnocentFool
519 6 647 Vypiem_za_Lubov Nicolas_Cage Montezuma JoriQ
565 5 489 Nalin Bhardwaj NibNalin
576 5 1031 Sandor team SandorGarcia
588 4 87 GG izi commend me jlr.terceiro ronisds
595 4 110 Nemidunam ArtinTD Alimol
618 4 188 ONU_Feuerbach VVI BronfenBova illarionovam_onu
623 4 204 Hot Tomato Sauce nirob_mh shm0007
633 4 259 DS & DP^2 besher Alex7 RedNextCentury
660 4 846 Primo Manurung zulkan
689 3 76 BK.Troll farmerboy thomas
717 3 159 The Deterministics asdoc asawasa xennygrimmato
746 3 312 hichzat bayram98 horojyk Kerim.K
769 3 512 thehandofgod belowthebelt deepankarak pulkit_180894
773 3 606 KBTU Tarjan azizkhan Madiyar Temirulan
N/A N/A N/A 2017 ACM-ICPC absolute winners T0RRES
N/A N/A N/A Abdelkarim abdelkarim
N/A N/A N/A Binam bijan Nastaran75 spOoky
N/A N/A N/A DivineByCode amankedia Kanish_The_Vista ace_atul
N/A N/A N/A Dodgers vlade087 balle deinier
N/A N/A N/A guptashvm gupta_shvm
N/A N/A N/A Istrebitel Alnair Suhaylee
N/A N/A N/A j1k7_7 j1k7_7
N/A N/A N/A Korol', mudak, i rzhavaya zapchast' accidentallygivenfuck bayram osmanuss
N/A N/A N/A Panda Po chrome Elibay ErzhaNN
N/A N/A N/A shockers AJAY Sundar karthik
N/A N/A N/A Team Zabava radoslav11 vanjo9800 P_Nyagolov
N/A N/A N/A Tmcoteam Allanur Bekmyrat.A _NinjA
N/A N/A N/A Whatever ikbal EMINAYAR enesoncu
N/A N/A N/A Zerry2015 lamngo96 nhathuyen95 duongtnhat

• +167

By Xellos, 7 years ago,

These days (26.-27.3.2015), another national round of Slovak Olympiad in Informatics took place. Just short problem statements for now, I'll write it in more detail and with solutions later:

#### 1.

There's a hazelnut chocolate of size N × M, which can be viewed as a 2D grid. In each cell of the grid, there's a given number of nuts. Two players play a game, alternating turns: each player has to cut off either the bottom row or the rightmost column of the remaining part of the chocolate in her turn (and do with it guess what). The cut off row/column has to contain an even number of nuts. The player that can't make a move loses. Determine the winner if both play optimally.

#### 2.

You have two numbers N, K (), one input file with N - K distinct numbers between 1 and N and 2K + 2 empty files. Find the missing K numbers.

Your program has a very limited amount of memory (the files' content isn't stored in that memory), just 10 kB for K ≤ 100; its memory complexity can't depend on N. Primarily, you need to minimise the worst-case number of reads+writes to files; only then are you minimising the time and memory complexity of your algorithm.

Files work like queues and can be erased in small constant time.

#### 3.

a very long introductory text and some heavily theoretical magic with simulating boolean conditions using other conditions and proving that it's always possible/impossible

#### 4.

There are N numbers up to 10^9 and another number R, also up to 10^9. Find up to 4 numbers (one number can't be used multiple times) among these N that sum up to R or decide that they don't exist. N ≤ 4000.

#### 5.

There's a straight river of constant width, N points on one side of it and M on the other side. You need to connect some of them with (obviously straight) lines that have the minimum total length in such a way that each of these N + M points has at least one line connecting it with another point on the other side of the river. N, M ≤ 20000.

The first 3 problems are theoretical (points for explaining your algorithm), the other 2 practical (typical contest style, partial scoring); the max. time limit was 10s in both.