
Questions That Don't Deserve Their Own Blog — I guess it'd be good to have something like this as a place to ask random questions about problems ("I accidentally my code, wat do?") without having your blog post get downvote-bombed.

I have a question about Yandex.Algorithm 2015 Round 2.2, problem E: the problem can be reduced to computing the minimum bipartite vertex cover *that can't fully cover one side*, and since the dual of min vertex cover is max matching, I tried to find the dual of this problem.

I mean, that vertex cover can be described as a linear programming problem: minimize Σ_{i} *x*_{i} subject to

- *x*_{i} ≥ 0
- *x*_{u_j} + *x*_{v_j} ≥ 1 for each edge *e*_{j} = (*u*_{j}, *v*_{j}), 1 ≤ *j* ≤ *E*

Its dual should be: maximize Σ_{j} *y*_{j} subject to *y*_{j} ≥ 0 and

- for 1 ≤ *i* ≤ *N*: (sum of *y*_{j} over edges incident on vertex *i* in the left part) ≤ 1
- for *N* + 1 ≤ *i* ≤ *N* + *M*: the same for the right part

*y*_{j} are variables assigned to edges (flow through them); further dual variables are assigned to the left and right part of the graph, and if those are fixed, finding the optimal *y*_{j} is just bipartite matching/flow. However, this gives an optimal solution that doesn't make sense — for example for an almost complete bipartite graph *K*_{N, N} that's missing just one edge.

What am I missing here?

Next: I was upsolving DCJ problems during the second practice round (which ended yesterday). The only non-final problem I didn't manage to solve in time was highest_mountain, and only because it gives me a weird RE.

```
#include <bits/stdc++.h>
#include "highest_mountain.h"
// iostream is too mainstream
#include <cstdio>
// bitch please
#include <iostream>
#include <algorithm>
#include <cstdlib>
#include <vector>
#include <set>
#include <map>
#include <queue>
#include <stack>
#include <list>
#include <cmath>
#include <iomanip>
#include <time.h>
#include <message.h>
#include <stdio.h>
#define dibs reserve
#define OVER9000 1234567890
#define ALL_THE(CAKE,LIE) for(auto LIE =CAKE.begin(); LIE != CAKE.end(); LIE++)
#define tisic 47
#define soclose 1e-8
#define chocolate win
// so much chocolate
#define patkan 9
#define ff first
#define ss second
#define abs(x) ((x < 0)?-(x):x)
#define uint unsigned int
#define dbl long double
#define pi 3.14159265358979323846
using namespace std;
// mylittledoge
typedef long long cat;
#ifdef DONLINE_JUDGE
// palindromic tree is better than splay tree!
#define lld I64d
#endif
#define MASTER_NODE 0
int main() {
cin.sync_with_stdio(0);
cin.tie(0);
cout << fixed << setprecision(10);
int nodes =NumberOfNodes(), my_id =MyNodeId();
int N =NumberOfPeaks();
vector<int> ilen(nodes,N/nodes);
for(int i =0; i < N%nodes; i++) ilen[i]++;
int L =0, R =0;
for(int i =0; i < my_id; i++) L +=ilen[i];
for(int i =0; i <= my_id; i++) R +=ilen[i];
vector< pair<cat,int> > H;
H.dibs(R-L);
for(int i =L; i < R; i++) {
cat h =GetHeight(i);
int s =H.size();
while(s >= 2 && (H[s-1].ff-h)*(H[s-2].ss-i) > (H[s-2].ff-h)*(H[s-1].ss-i)) {
H.pop_back();
s--;}
H.push_back(make_pair(h,i));}
int Hs =H.size();
vector<int> comp1;
int K =2000;
for(int i =0; i < Hs; i +=K) comp1.push_back(H[i].ss);
if(Hs > 0 && comp1.back() != H[Hs-1].ss) comp1.push_back(H[Hs-1].ss);
PutInt(MASTER_NODE,comp1.size());
ALL_THE(comp1,it) PutInt(MASTER_NODE,*it);
Send(MASTER_NODE);
if(my_id == MASTER_NODE) {
vector< pair<cat,int> > Hcomp;
Hcomp.dibs(nodes*2000);
vector< pair<int,int> > I(nodes,make_pair(N+1,-1));
vector< pair<int,int> > I2(nodes,make_pair(N+1,-1));
vector< vector<int> > V(nodes);
for(int i =0; i < nodes; i++) {
Receive(i);
int l =GetInt(i);
V[i].resize(l);
for(int j =0; j < l; j++) {
V[i][j] =GetInt(i);
I[i].ff =min(I[i].ff,V[i][j]);
I[i].ss =max(I[i].ss,V[i][j]);}
}
for(int i =0; i < nodes; i++) {
for(int j =0; j < (int)V[i].size(); j++) {
int a =V[i][j];
cat h =GetHeight(a);
int s =Hcomp.size();
while(s >= 2 && (Hcomp[s-1].ff-h)*(Hcomp[s-2].ss-a) > (Hcomp[s-2].ff-h)*(Hcomp[s-1].ss-a)) {
Hcomp.pop_back();
s--;}
Hcomp.push_back(make_pair(h,a));}
for(int j =(int)Hcomp.size()-1; j >= 0; j--) {
if(Hcomp[j].ss < I[i].ff) break;
I2[i].ff =Hcomp[j].ss;}
}
Hcomp.clear();
for(int i =nodes-1; i >= 0; i--) {
for(int j =(int)V[i].size()-1; j >= 0; j--) {
int a =V[i][j];
cat h =GetHeight(a);
int s =Hcomp.size();
while(s >= 2 && (Hcomp[s-1].ff-h)*(Hcomp[s-2].ss-a) < (Hcomp[s-2].ff-h)*(Hcomp[s-1].ss-a)) {
Hcomp.pop_back();
s--;}
Hcomp.push_back(make_pair(h,a));}
for(int j =(int)Hcomp.size()-1; j >= 0; j--) {
if(Hcomp[j].ss > I[i].ss) break;
I2[i].ss =Hcomp[j].ss;}
}
for(int i =0; i < nodes; i++) {
PutInt(i,I2[i].ff);
PutInt(i,I2[i].ss);
Send(i);}
}
Receive(MASTER_NODE);
int mi =GetInt(MASTER_NODE);
int mx =GetInt(MASTER_NODE);
int l =0, r =Hs-1;
while(l < Hs && H[l].ss < mi) l++;
while(r >= 0 && H[r].ss > mx) r--;
vector<int> comp2l,comp2r;
for(int i =max(0,l-K); i <= min(Hs-1,r+K); i++) {
if(i <= l+K) comp2l.push_back(H[i].ss);
if(i >= r-K) comp2r.push_back(H[i].ss);
}
for(int i =0; i < my_id; i++) {
PutInt(i,comp2l.size());
ALL_THE(comp2l,it) PutInt(i,*it);
Send(i);}
for(int i =my_id+1; i < nodes; i++) {
PutInt(i,comp2r.size());
ALL_THE(comp2r,it) PutInt(i,*it);
Send(i);}
vector<int> Pi;
Pi.dibs(Hs+nodes*5000);
vector< pair<cat,int> > Hi;
Hi.dibs(Hs+nodes*5000);
for(int i =0; i < nodes; i++) {
if(i == my_id) ALL_THE(H,it) Pi.push_back(it->ss);
else {
Receive(i);
l =GetInt(i);
for(int j =0; j < l; j++) Pi.push_back(GetInt(i));
}
}
ALL_THE(Pi,it) {
int a =*it;
cat h =GetHeight(a);
int s =Hi.size();
while(s >= 2 && (Hi[s-1].ff-h)*(Hi[s-2].ss-a) > (Hi[s-2].ff-h)*(Hi[s-1].ss-a)) {
Hi.pop_back();
s--;}
Hi.push_back(make_pair(h,a));}
int inH =0;
ALL_THE(Hi,it) if(it->ss >= L && it->ss < R) inH++;
PutInt(MASTER_NODE,inH);
Send(MASTER_NODE);
if(my_id == MASTER_NODE) {
int ans =0;
for(int i =0; i < nodes; i++) {
Receive(i);
ans +=GetInt(i);}
cout << ans << "\n";}
}
// look at my code
// my code is amazing
```

I'm merging *O*(*nodes*) small convex hulls of size *O*(*N* / *nodes*) by sending only every *K*-th point of each hull, merging them on the master node and sending back the range that remains in the merged convex hull. Then, each node sends just the *O*(*N* / *nodes* / *K*) leftmost or rightmost points of this range to each other node, and each node recomputes the convex hull of its own *O*(*N* / *nodes*) points plus all *O*(*N* / *K*) points it got sent.

I proved that this is correct and it passes the small subproblem (with *K* = 10, *nodes* = 10, *N* = 1000), while slightly incorrect implementations don't, so it really should be correct. I also verified that it passes at least one official maxtest, but gives RE later.

The cause of the RE shouldn't be sending too much data, since each node sends/receives only *O*(*N* / *K*) values (numerically 2-3 MB; I also tried "if this array is too large, return 0" checks and nothing changed). The memory limit shouldn't be exceeded either.

Any ideas? Btw don't forget about GCJ/DCJ last online rounds this weekend.

UPD: Contest over. Did you participate?

Online Physics Brawl will take place on November 30th. Registration is possible until November 28th.

It's a team competition intended for secondary school students from ~~Yugoslavia~~ ~~Czechoslovakia~~ Czech Republic and Slovakia, but there's a category for foreign secondary schools and an open category for anyone. Given the target audience, I don't think it'd be impossibly hard for CF users.

You can find the rules and past problems at online.fyziklani.cz. For each problem, you only submit a number, so programming knowledge may be useful (but the problems are fully solvable without it).

I'm a minor problemsetter/tester and the translator this year.

I got to IOI solving — and upsolving — a bit late and only recently finished (with the help of various comments). One thing I didn't really see there was: how would you have done?

Here are my comments about the problems:

- aliens: a common DP + convex hull trick; apparently, it gets TLE for the 5th subtask, and even *O*(*NK*) has trouble fitting within the TL; 60pts
- shortcut: I knew that there's binsearch (from a similar TC problem); afterwards, finding a solution isn't hard, but improving it enough to get more points is; 93pts
- railroad: mfw — I didn't move beyond a bitset DP; even though I had a lot of ideas, including what turned out to be the correct one (finding a flow), I thought it'd be some kind of greedy, but those are numerous and hard to check in this case; the last subtask was very easy after solving the 3rd; 34pts
- messy: the bounds are 7·2^{7} with *N* = 2^{7}, so it has to be D&C; full score
- paint, molecules: easy 100pt problems

So that'd give me 7th place.

The first solution I got 93pts for in shortcut was this: when checking if the diameter is ≤ *D* and considering *i* to be the left endpoint, find the largest *j* = *j*_{1}(*i*) such that a shortcut between *i* and *j* gives all distances within the created cycle ≤ *D*, the largest *j* = *j*_{2}(*i*) such that all distances between vertices from the cycle and vertices to the left of *i* are ≤ *D*, then symmetrically for each *j* the smallest *i* = *i*_{3}(*j*) such that all distances between vertices from the cycle and those to the right of *j* are ≤ *D*. There are also distances between pairs of vertices not on the cycle, but those are easy.

For *j*_{2}(*i*), we can find the leftmost vertex *k*_{2}(*i*) to the right of *i* such that it's farther from some vertex to the left of *i* without a shortcut; then, all other vertices on the cycle must be reachable using the shortcut, so *pos*(*j*_{2}) - *pos*(*a*) + *d*(*a*) + *C* + *lft*(*i*) ≤ *D* for all vertices *a* between *k*_{2} and *j*_{2}; here, *pos* is the position of each vertex on the main line (prefix sum of *l*) and *lft*(*i*) is the maximum distance to the left from *i*; we can extend it to all vertices *a* to the right of *k*_{2}, which gives us a bound on *pos*(*j*_{2}); we can find *j*_{2} by binsearching. Also, *k*_{2} can be found using an RMQ table for max(pos+d) and an RMQ-like search.

With *j*_{1}(*i*), we can do something similar to finding *j*_{2} for each vertex, but only for distances to *i* exactly, not to the left of *i* (so there's *d*(*i*) instead of *lft*(*i*)); *j*_{1}(*i*) can be computed as their suffix minima.

We're left with this problem: there's an interval *I*_{l}(*i*) and *I*_{r}(*i*) for each *i*; are there some *i*, *j* such that *j* ∈ *I*_{l}(*i*) and *i* ∈ *I*_{r}(*j*)? This can be solved e.g. using another RMQ table, in which we'll store the minimum left ends of *I*_{r} in ranges of *j*; the right ends of *I*_{r} aren't important. Then, for each *i*, we can find out if the minimum over *I*_{l}(*i*) is ≤ *i*. (Alternatively, we can use a sweepline algorithm and store opened *I*_{r}-s in a set<>.)

How simple. /sarcasm

I tried to optimise this, but there was no way to squeeze it even into 97 points — surprisingly, since I didn't really need to optimise to get 93pts.

I got full score on shortcut using an approach based on http://codeforces.com/blog/entry/46518?#comment-310338 (note: extending to all pairs *i*, *j* is probably unnecessary and wrong — if *i* = *j*, we get a non-path which can be longer than *D*). The hardest part is computing the smallest *j*_{0} = *j* > *i* and largest *i*_{0} = *i* < *j* such that *u*[*i*] + *v*[*j*] > *D*, since *u*[*i*] and *v*[*j*] don't have to be monotonic; afterwards, we can find the smallest/largest sum/difference of *x*_{1}, *x*_{2} by considering all *j* ≥ *j*_{0} / *i* ≤ *i*_{0}, for which we only need prefix/suffix maxima of *u*[] and *v*[], and check if the answer exists using two 2-pointer passes in *O*(*N*).

To compute *j*_{0}, let's move from *i* = *N* - 1 to *i* = 0 and recompute the sequence of closest increasing *v*[*j*]-s using the well-known stack algorithm. Obviously, we can find *j*_{0} by binsearching among them, but that gives *O*(*N* log *N*) time complexity. However, if *j*_{0}(*i*) < *j*_{0}(*i*') for *i* > *i*', then we can decrease *j*_{0}(*i*') to *j*_{0}(*i*) without affecting the bounds on the sum/difference of *x*_{1}, *x*_{2}; that means we can keep a pointer to the element in the stack (actually an array in the implementation) corresponding to *v*[*j*_{0}] and only move this pointer in one direction, so it's *O*(*N*) in total.

This got 97 points instantly, but I still needed some optimisations to get to 100. That time limit was really tight. Other than that, the problems were pretty good and difficult (as expected from Russia); I just missed problems with non-binary scoring per subtask.

A reminder that AtCoder doesn't only have Grand Contests, but also these. Starting time: in the past.

Also post-contest discussion, I guess.

```
const int *        // pointer to const int
int const *        // pointer to const int (identical to the above)
int * const        // const pointer to (mutable) int
int const * const  // const pointer to const int
const int * function (const arg) const  // const member function taking a const argument, returning a pointer to const int
```

Based on the e-maxx implementation and other stuff I found online, I pieced together a powerful treap. It can be used as a set<>, array, segment tree or (within limits) all of them at once.

Consider a set<> represented as a sorted array of distinct elements. You can insert/remove elements just like in a set<>. You can also find the index of any element in this array, remove an element by index — or insert an element at an index, but that will probably break the sortedness and you can't use the operations which rely on it anymore (you can still insert/remove elements by index). As long as the array is sorted, you can query lower bound / upper bound of any element, given as a pair (element, index).

Next, you can reverse a range, add to a range and query range sums; this will work all the time, but range addition can and reversing will break the sortedness. You can implement your own queries (like min/max) or combined range updates (paint+add) like in a segment tree. Since we need lazy propagation, everything is immutable (you can work around it e.g. with erase/insert) and I decided not to bother with iterators.

I took special care with randomisation. It uses testlib random_t and if equal priorities are detected (at insert), they're all regenerated, but it shouldn't happen with reasonably small data — the priorities are 60-bit, which is far above the Birthday Paradox threshold for just about anything.

The DS is templated over some type T, which has to be comparable and support addition / subtraction / multiplication by int (like a vector space in math).

It supports the following operations, all online in *O*(log *N*) (with *N* being the current number of elements, ofc):

function | action | returns |
---|---|---|
insert(element) | inserts element into a set | bool (was it inserted?) |
insert_pos(index, element) | inserts element at index | void |
erase(element) | removes element from a set | bool (was it removed?) |
erase_pos(index) | removes element at index | void |
get_index(element) | finds the element's index, size() if non-existent | int in [0..size()] |
[index] | finds the element at index | T (not T&) |
lower_bound(element) | finds the lower_bound() of that element, with index | pair<T,int> |
upper_bound(element) | the same with upper_bound() | pair<T,int> |
shift(left,right,element) | adds element to everything in the range [left,right] | void |
reverse(left,right) | reverses the range [left,right] | void |
sum(left,right) | finds the sum of the range [left,right] | T |

Also, there's the usual `empty()`, `size()`, `is_sorted()` (returns true if the sortedness hasn't been broken yet) and `srand()`, which lets you choose your seed if you don't want the testlib default.

I compared it with STL set<> on a million insert/erase/lower_bound operations; the treapset is ~3-4 times slower (which makes sense, since the structure supports many more operations). I also tested it on a million insert_pos/get_index/shift/reverse/sum operations; it took ~4 seconds locally. It seems to work...

There has been talk about this already, but in case anyone missed it:

The 2016 edition of Internet Problem Solving Contest is going to take place **today** (starting time).

It's a 5-hour contest for teams of up to 3 people, but there's also an individual division, so feel free to participate even if you can't find yourself a team at this point.

There's going to be an ACM-like number of problems (12+), ranging from classic algorithmic problems to quite unusual ones. Most problems have an easy subproblem worth 1 point and a hard one worth 2 points (think CodeJam); ties are broken using ACM rules (sum of times).

The practice session is over. **The contest is over!**

Read the rules or my previous blogposts about this for more info.

Since this is a 5-hour contest where you can execute codes locally, some inputs will be YUGE (gigabytes). Accordingly, they will have fairly high system requirements. Get a good computer. While the official solutions quite comfortably run on my mediocre laptop, if you write something too inefficient, you can encounter a nasty surprise, e.g. frozen system. It happened to me last year.

If an input is big, you won't have to download it; instead, there will be a generator (typically a Python script, meaning they aren't very fast). It's usually a good idea to run all generators as early as possible — as long as it doesn't slow down your computer too much, you can avoid a situation where you can't submit a problem because the contest ended before your generator finished.

Actually, you should just try to read as many problems as possible and determine your strategy after you can guess your chances well enough.

```
11145 submissions
5370 successful submissions (48% success rate)
797 active teams out of 1376 (58% did not give up before even trying)
773 teams with positive score (97% of active teams)
12/13 problems solved by someone
maximum number of fully solved problems: 10/13
lowest non-zero success rate: D (D2: 20%)
highest success rate: C,F (C2,F2: 85%)
highest success rate (easy subproblems): G1 (85%)
lowest success rate (easy subproblems): J1,M1 (25%)
hard problems (<50 teams solved) sorted by difficulty:
E: 0/13
M: 2/10
J: 4/17
H: 11/17
B: 11/18
L: 16/46
K: 40/107
```

Seeing as there's no blogpost from chrome about the round a few hours before it...

TC SRM 692 starts soon. If you're up at night, you can participate!

**UPD**: Contest over! As you may have noticed, I was the writer; it's worth reading the problems of both divisions just for the stories.

**Hints:**

The optimal node is either centroid.

Centroids can be computed using something like LCA or a binary search.

Think in terms of various sums over 2^{k} ancestors of each vertex, just like the jumps to the 2^{k}-th ancestor used to compute LCAs.

Solve the case *K* = 1 first.

Fast exponentiation of arrays.

Try all minima.

Binary search (or 2 pointers in a different approach).

What's the answer when *R* and *L* are divisible by 100?

It's here! Finally, solutions can be spoilered!

For example, my solution to F from CROC 2016 - Elimination Round:

The first observation is that for a fixed set of species, the max. number of farms is the GCD of their a_i (we can say c_j=a_{N+j}). So we want the sum of GCDs over all K-element subsets.

The second observation is that this sum for the first x+1 species = the sum for the first x species + the contribution of the K-element subsets containing the (x+1)-th species. In order to recalculate it, we only need to find, for each divisor g of a_{x+1}, the number of (K-1)-element subsets such that their numbers together with a_{x+1} have GCD equal to g (obviously, that GCD can't be a non-divisor of a_{x+1}); let's call the number of such subsets p(g); then the answer in step x+1 = answer in step x + sum of g·p(g).

It's much easier to calculate the number of K-1-element subsets whose GCD is a multiple of g; let's call that number P(g). We'll get to that later, it's more important to show how to get p-s from P-s.

We'll calculate p-s in the order of decreasing g. At first, let's take p(g)=P(g); we need to subtract the subsets whose GCD is strictly bigger than g and a multiple of g. Since we calculated p(g') for all g' > g already, we can just subtract all p(g') such that g' > g is a multiple of g. However, we need to list divisors for all numbers anyway, so it's simpler to subtract p(g') from all p-s of proper divisors of g' right after calculating it.

The time complexity of this is O(number of pairs g | g' | a_x). The maximum number of such pairs can be found easily if we know divisor lists; it's 8505 for a_x <= 10^6. We don't need any modulos there, so that's manageable.

Now to computing P(g). If there are M_g species with numbers divisible by g, then P(g)={M_g over K-1}. Since M <= N+Q, we can precompute factorials and their modular inverses, which allows us to compute P(g) in O(1) time.

Computing M_g is very similar to p-s from P-s: when the number of species with f flowers (an analogy of P) increases by 1, M_d for all d|f (which is an analogy of p) increases by 1.

We also need divisor lists; for them, we can use a simplified sieve of Eratosthenes. There are O(A log A) divisors in total, where A=10^6, and the maximum number of divisors of one number <= A is D ~ 300, which means the total time complexity is O((N+Q)(D+8500) + A log A), where the constant next to D is larger than the one next to 8500 due to modulos and more stuff to compute.

This may be too slow; to speed it up, we can use the fact that we don't need online processing for the first N added flowers.

UPD: Hell yeah, finals! I'll just leave this here.

Hello!

Today (20.3.), another monthly Cook-off takes place. It should last for 2.5 hours (21:30 to 00:00 IST) — starting time.

**Contest link:** https://www.codechef.com/COOK68

There will be 5 problems of varying difficulties — I think they aren't that hard (not for lack of trying on my part), there should be a lot more full scores than last month :D. Feel free to discuss the problems / provide feedback in the comments (after the contest ends).

In order to participate, you only need to have a CodeChef account. If you don't have one, you can register here.

**Problem Setter:** me

**Problem Tester & Russian Translator:** Antoniuk (Vasya Antoniuk)

**Editorialist:** PraveenDhinwa (Praveen Dhinwa)

**Mandarin Translator:** huzecong (Hu Zecong)

**Vietnamese Translator:** Team VNOI

**Language Verifier & Contest Admin:** PraveenDhinwa (Praveen Dhinwa)

**Prizes:** The prize system has changed this month. The top 10 performers in Global and Indian category will get CodeChef laddus; with them, you can claim CodeChef goodies. You can find out more here: https://www.codechef.com/laddu. (If you didn't get your previous goodies, you can contact winners@codechef.com.)

BTW, Codechef is celebrating its 7th birthday this month.

Hopefully, there won't be any technical difficulties this time! Good luck!

**The contest is over!**

Very simple hints for someone who doesn't want to read editorials yet:

Trivial.

When the game ends, what piles remain?

What are the vertices with regard to triangles? Bitwise operations are good. Some sort of optimised recursion is also good.

Think about convex hulls.

div2A: Try conversions between bases.

div2B: Solve a simpler version of the problem where *A*_{i + 1} ≠ *A*_{i} for all *i*.

div1A: What are the shortest paths of the vehicles? What's the shorter of those paths?

div1B: Forget about the ceiling function. Draw points (*i*, *A*[*i*]) and lines between them — what's the Lipschitz constant geometrically?

div1C: Some dynamic programming. Definitely not for the exp. score of one person — look at fixed scores instead.

div1D: Compute *dif*(*v*) in *O*(*N*) (without hashing) and then solve the problem in *O*(*N*^{2}). You need some smart merges.

div1E: Can you solve the problem without events of type 1 or 2? Also, how about solving it offline — as queries on subsets.

It's easy to compare two numbers if they're both written in the same base. And our numbers can be converted to a common base — just use the standard positional formula (the value is Σ_{i} *a*_{i}·*b*^{i}).

A straightforward implementation takes *O*(*N* + *M*) time and memory. Watch out, you need 64-bit integers! And don't use `pow` — iterating is better.

Let's process the numbers from left to right and recompute the longest range ending at the currently processed number.

One option would be remembering the last position of each integer using STL `map<>`/`set<>` data structures, looking at the first occurrences of *A*_{i} plus/minus 1 or 2 to the left of the current *A*_{i}, and deciding on the almost constant range ending at *A*_{i} based on the second closest of those numbers.

However, there's a simpler and more efficient option — notice that if we look at non-zero differences in any almost constant range, they must alternate: ..., +1, -1, +1, -1, ... If there were two successive differences of +1 (or of -1), possibly separated by some differences of 0, then we'd have numbers *a* - 1, *a*, *a*, ..., *a*, *a* + 1, so a range that contains them isn't almost constant.

Let's remember the latest non-zero difference (whether it was +1 or -1 and where it happened); it's easy to update this info when encountering a new non-zero difference.

When doing that update, we should also check whether the new non-zero difference is the same as the latest one (if *A*_{i} - *A*_{i - 1} = *A*_{j + 1} - *A*_{j}). If it is, then we know that any almost constant range containing *A*_{i} can't contain *A*_{j}. Therefore, we can keep the current leftmost endpoint *l* of an almost constant range and update it to *j* + 1 in any such situation; the length of the longest almost constant range ending at *A*_{i} is then *i* - *l* + 1.

This only needs a constant number of operations per each *A*_{i}, so the time complexity is *O*(*N*). Memory: *O*(*N*), but it can be implemented in *O*(1).

Bonus: the maximum difference permitted in an almost constant range is an arbitrary *D*.

The condition that the train and bus can't meet at one vertex except the final one is just trolling. If there's a railway between towns 1 and *N*, then the train can take it and wait in town *N*. If there's no such railway, then there's a road between 1 and *N*, so the bus can take it and wait in *N* instead. There's nothing forbidding this :D.

The route of one vehicle is clear. How about the other one? Well, it can move as it wants, so the answer is the length of its shortest path from 1 to *N*... or - 1 if no such path exists. It can be found by BFS in time *O*(*N* + *M*) = *O*(*N*^{2}).

In order to avoid casework, we can just compute the answer as the maximum of the train's and the bus's shortest distance from 1 to *N*. That way, the vehicle with the direct edge contributes just 1 to the maximum; since the answer is always ≥ 1, it works well.

In summary, time and memory complexity: *O*(*N*^{2}).

Bonus: Assume that there are *M*_{1} roads and *M*_{2} railways given on the input, all of them pairwise distinct.

Bonus 2: Additionally, assume that the edges are weighted. The speed of both vehicles is still the same — traversing an edge of length *l* takes *l* hours.

Let *L*_{1}(*i*, *j*) = |*A*_{i} - *A*_{j}| / |*i* - *j*| for *i* ≠ *j*.

Key observation: it's sufficient to consider *j* = *i* + 1 when calculating the Lipschitz constant. It can be seen if you draw points (*i*, *A*_{i}) and lines between them on paper — the steepest lines must be between adjacent pairs of points.

In order to prove it properly, we'll consider three numbers *A*_{i}, *A*_{j}, *A*_{k} (*i* < *j* < *k*) and show that one of the numbers *L*_{1}(*i*, *j*), *L*_{1}(*j*, *k*) is ≥ *L*_{1}(*i*, *k*). W.l.o.g., we may assume *A*_{i} ≤ *A*_{k}. There are 3 cases depending on the position of *A*_{j} relative to *A*_{i}, *A*_{k}:

- *A*_{j} > *A*_{i}, *A*_{k} — we can see that *L*_{1}(*i*, *j*) > *L*_{1}(*i*, *k*), since |*A*_{j} - *A*_{i}| = *A*_{j} - *A*_{i} > *A*_{k} - *A*_{i} = |*A*_{k} - *A*_{i}| and *j* - *i* < *k* - *i*; we just need to divide those inequalities
- *A*_{j} < *A*_{i}, *A*_{k} — this is similar to the previous case; we can prove that *L*_{1}(*j*, *k*) > *L*_{1}(*i*, *k*) in the same way
- *A*_{i} ≤ *A*_{j} ≤ *A*_{k} — this case requires more work:
  - we'll denote *d*_{1y} = *A*_{j} - *A*_{i}, *d*_{2y} = *A*_{k} - *A*_{j}, *d*_{1x} = *j* - *i*, *d*_{2x} = *k* - *j*
  - then, *L*_{1}(*i*, *j*) = *d*_{1y} / *d*_{1x}, *L*_{1}(*j*, *k*) = *d*_{2y} / *d*_{2x}, *L*_{1}(*i*, *k*) = (*d*_{1y} + *d*_{2y}) / (*d*_{1x} + *d*_{2x})
  - let's prove it by contradiction: assume that *L*_{1}(*i*, *j*), *L*_{1}(*j*, *k*) < *L*_{1}(*i*, *k*); then *d*_{1y} + *d*_{2y} = *L*_{1}(*i*, *j*)*d*_{1x} + *L*_{1}(*j*, *k*)*d*_{2x} < *L*_{1}(*i*, *k*)*d*_{1x} + *L*_{1}(*i*, *k*)*d*_{2x} = *L*_{1}(*i*, *k*)(*d*_{1x} + *d*_{2x}) = *d*_{1y} + *d*_{2y}, which is a contradiction

We've just proved that in any *L*_{1} computed for two elements *A*[*i*], *A*[*k*] with *k* > *i* + 1, we can replace one of *i*, *k* by a point *j* between them without decreasing *L*_{1}; a sufficient number of such operations will give us *k* = *i* + 1. Therefore, the max. *L*_{1} can be found by only considering differences between adjacent points.

This is actually a huge simplification — the Lipschitz constant of an array is the maximum abs. difference of adjacent elements! If we replace the array *A*[1..*n*] by an array *D*[1..*n* - 1] of differences, *D*[*i*] = *A*[*i* + 1] - *A*[*i*], then the Lipschitz constant of a subarray *A*[*l*, *r*] is the max. element in the subarray *D*[*l*..*r* - 1]. Finding subarray maxima actually sounds quite standard, doesn't it?

No segment trees, of course — there are still too many subarrays to consider.

So, what do we do next? There are *Q* queries to answer, but not too many of them, so we can process each of them in *O*(*N*) time. One approach that works is assigning the maximum difference *D*[*i*] to each subarray — since there can be multiple maximum *D*[*i*]-s, let's take the leftmost one.

We can invert it to determine the subarrays for which a given *D*[*i*] is maximum: if *D*[*a*_{i}] is the closest difference to the left of *D*[*i*] that's ≥ *D*[*i*] or *a*_{i} = 0 if there's none, and *D*[*b*_{i}] is the closest difference to the right that's > *D*[*i*] or *b*_{i} = *n* - 1 if there's none (note the strict/non-strict inequality signs — we don't care about differences equal to *D*[*i*] to its right, but there can't be any to its left, or it wouldn't be the leftmost max.), then those are all subarrays *D*[*j*..*k*] such that *a*_{i} < *j* ≤ *i* ≤ *k* < *b*_{i}.

If we don't have the whole array *D*[1..*n* - 1], but only some subarray *D*[*l*..*r*], then we can simply replace *a*_{i} by max(*a*_{i}, *l* - 1) and *b*_{i} by min(*b*_{i}, *r* + 1). The number of those subarrays is *P*_{i} = (*i* - *a*_{i})(*b*_{i} - *i*), since we can choose *j* and *k* independently.

All we have to do to answer a query is check all differences, take *a*_{i}, *b*_{i} (as the max/min with some precomputed values) and compute *P*_{i}; the answer to the query is Σ_{i} *P*_{i}·*D*[*i*]. We only need to precompute all *a*_{i}, *b*_{i} for the whole array *D*[1..*n* - 1] now; that's a standard problem, solvable using stacks in *O*(*N*) time or using maps + Fenwick trees in *O*(*N* log *N*) time.

The total time complexity is *O*(*NQ*), memory *O*(*N*).

Bonus: *Q* ≤ 10^{5}.

As it usually happens with computing expected values, the solution is dynamic programming. There are 2 things we could try to compute: probabilities of individual overall ranks of Kleofáš or just some expected values. In this case, the latter option works.

- "one bit is 8 bytes?"
- "no, the other way around"
- "so 8 bytes is 1 bit?"

After some attempts, one finds out that there's no reasonable way to make a DP for an expected rank or score of one person (or multiple people). What does work, and will be the basis of our solution, is the exact opposite: we can compute the expected number of people with a given score. The most obvious DP for it would compute *E*(*i*, *s*) — the exp. number of people other than Kleofáš with score *s* after the first *i* competitions.

Initially, *E*(0, 0) = *m* - 1 and *E*(0, *s* > 0) = 0. How can we get someone with score *s* in competition *i*? That person can have any score *k* from 1 to *m* except *x*_{i} (since Kleofáš has that one) with the same probability . The expected values are sums with probabilities *P*(*i*, *s*, *j*) that there are *j* people with score *s*:

Considering that the probability that one of them will get rank *k* is 1/(*m* - 1), we know that with probability *P*(*i*, *s*, *j*), there were *j* people with score *s* before the competition, and each of them gets score *s* + *k* after it with probability 1/(*m* - 1) — adding 1/(*m* - 1) to *E*(*i* + 1, *s* + *k*). By summation over *j*, we'll find the exp. number of people who had overall score *s* and scored *k* more: Σ_{j} *j*·*P*(*i*, *s*, *j*)/(*m* - 1) = *E*(*i*, *s*)/(*m* - 1).

Lol, it turns out to be so simple.

We can find the expected counts *E*(*i* + 1, *t*) afterwards: since getting overall score *t* after *i* + 1 competitions means getting rank *k* in the currently processed competition and overall score *s* = *t* - *k* before, and the contributions for distinct *k* (and for people with distinct *s*) are totally independent of each other, we just need to sum up the exp. numbers of people with those scores (which we just computed) over the allowed *k*: *E*(*i* + 1, *t*) = Σ_{k ≠ x_{i + 1}} *E*(*i*, *t* - *k*)/(*m* - 1).

The formulas for our DP are now complete and we can use them to compute *E*(*n*, *s*) for all 1 ≤ *s* ≤ *mn*. Since the *E*(*n*, *s*) people with score *s* smaller than the overall score *s*_{k} of Kleofáš each add 1 to the overall rank of Kleofáš and people with *s* ≥ *s*_{k} add nothing, we can find the answer as 1 + Σ_{s < s_k} *E*(*n*, *s*).

This takes *O*(*m*^{2}*n*^{2}) time, since there are *O*(*mn*) scores, *O*(*mn*^{2}) states of the DP and directly computing each of them takes *O*(*m*) time. Too slow.

We can do better, of course. Let's forget about dividing by *m* - 1 for a while; then, *E*(*i* + 1, *t*) is a sum of *E*(*i*, *s*) for one or two ranges of scores — or for one range minus one value. If you can solve div1C, then you should immediately know what to do: compute prefix sums of *E*(*i*, *s*) over *s* and find *E*(*i* + 1, *t*) for each *t* using them.

And now, computing one state takes *O*(1) time and the problem is solved in *O*(*mn*^{2}) time (and memory).
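For concreteness, the DP with prefix sums might look like this (a sketch; `expectedRank` is my naming, `x` is 0-indexed, so the blog's *x*_{i} is `x[i-1]`):

```cpp
#include <bits/stdc++.h>
using namespace std;

// E[s] = expected number of non-Kleofáš participants with total score s.
// Transition: E'(t) = (sum of E(t - k) over allowed ranks k) / (m - 1),
// computed with prefix sums so each state costs O(1).
double expectedRank(int n, int m, const vector<int>& x) {
    int maxS = m * n;
    vector<double> E(maxS + 1, 0.0);
    E[0] = m - 1;
    for (int i = 0; i < n; i++) {
        vector<double> pre(maxS + 2, 0.0);
        for (int s = 0; s <= maxS; s++) pre[s + 1] = pre[s] + E[s];
        vector<double> nE(maxS + 1, 0.0);
        for (int t = 0; t <= maxS; t++) {
            int lo = max(0, t - m), hi = t - 1;   // s = t - k, k in [1, m]
            if (hi < lo) continue;
            double sum = pre[hi + 1] - pre[lo];
            int bad = t - x[i];                   // k = x[i] is Kleofáš's rank
            if (bad >= lo && bad <= hi) sum -= E[bad];
            nE[t] = sum / (m - 1);
        }
        E = nE;
    }
    long long sk = accumulate(x.begin(), x.end(), 0LL);
    double rank = 1;                              // 1 + people strictly below s_k
    for (int s = 0; s < sk; s++) rank += E[s];
    return rank;
}
```

Sanity check by hand: with *n* = 1, *m* = 3 and Kleofáš ranked 2nd, the other two people land on ranks 1 and 3 with probability 1/2 each, so his expected rank is 1 + 2·(1/2) = 2.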

Bonus: Really, how fast can you solve this problem?

The name is really almost unrelated — it's just what a tree with arbitrary letters typically is in chemistry.

If you solved problem TREEPATH from the recent Codechef November Challenge, this problem should be easier for you — it uses the same technique, after all.

Let's figure out how to compute *dif*(*v*) for just one fixed *v*. One more or less obvious way is computing hashes of our strings in a DFS and then counting the number of distinct hashes (which is why there are anti-hash tests :D). However, there's another, deterministic and faster way.

Recall that a trie is a rooted tree with a letter in each vertex (or possibly nothing in the root), where each vertex encodes a unique string read along the path from the root to it; it has at most σ sons, where σ = 26 is the size of the alphabet, and each son contains a different letter. Adding a son is done trivially in *O*(σ) (each vertex contains an array of 26 links to — possibly non-existent — sons) and moving down to a son with the character *c* is then possible in *O*(1).

Compressing a subtree can be done in a DFS. Let's build a trie *H*_{v} (because *T*_{v} is already used), initially consisting only of one vertex — the root containing the letter *s*_{v}. In the DFS, we'll remember the current vertex *R* of the tree *T* and the current vertex *cur* of the trie. We'll start the DFS at *v* with *cur* being the root of *H*_{v}; all we need to do is look at each son *S* of *R* in DFS, create the son *cur*_{s} of *cur* corresponding to the character *s*_{S} (if it didn't exist yet) and run *DFS*(*S*, *cur*_{s}). This DFS does nothing but construct *H*_{v} that encodes all strings read down from *v* in *T*_{v}. And since each vertex of *H*_{v} encodes a distinct string, *dif*(*v*) is the number of vertices of *H*_{v}.

This runs in *O*(|*T*_{v}|σ) time, since it can create a trie with |*T*_{v}| vertices in the worst case. Overall, it'd be *O*(*N*^{2}σ) if *T* looks sufficiently like a path.

Well, what can we do to improve it? This trick is really the same — **find the son w of v that has the maximum |T_{w}|, add s_{v} to H_{w} and make it H_{v}; then, DFS through the rest of T_{v} and complete the trie H_{v} as in the slow solution.** The trick resembles HLD a lot, since we're basically remembering tries on HLD-paths.

If *v* is a leaf, of course, we can just create *H*_{v} that consists of one vertex.

How do we "add" *v* to a trie *H*_{w} of its son *w*? Well, *v* should be the root of the trie afterwards and the original *H*_{w}'s root should become its son, so we're rerooting *H*_{w}. We'll just create a new vertex in *H*_{w} with *s*_{v} in it, make it the root of *H*_{w} and make the previous root of *H*_{w} its son. And if we number the tries somehow, then we can just set the number of *H*_{v} to be the number of *H*_{w}.

It remains true that *dif*(*v*) is |*H*_{v}| — the number of vertices in the trie *H*_{v}, which allows us to compute those values directly. After computing *dif*(*v*) for each *v*, we can just compute both statistics directly in *O*(*N*).

Since each vertex of *T* corresponds to vertices in at most *O*(log*N*) tries (one for each light edge on the path from it to the root), we aren't creating tries with a total of *O*(*N*^{2}) vertices, but *O*(*N*log*N*). The time complexity is therefore *O*(*N*σlog*N*). However, the same is true for the memory, so you can't waste it too much!
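The whole scheme fits in one recursive function (a hedged sketch; `allDif` and the parent-array input are mine — note the careful link write after `newNode()`, since growing the pool invalidates references into it):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Returns dif[v] = number of distinct strings read downward from v.
// par[v] = parent of v (par[0] ignored, 0 is the root), s[v] = letter of v.
vector<int> allDif(const vector<int>& par, const string& s) {
    int n = par.size();
    vector<vector<int>> ch(n);
    for (int v = 1; v < n; v++) ch[par[v]].push_back(v);
    // subtree sizes via a BFS order processed in reverse
    vector<int> bfs{0}, sub(n, 1);
    for (int i = 0; i < (int)bfs.size(); i++)
        for (int c : ch[bfs[i]]) bfs.push_back(c);
    for (int i = n - 1; i >= 1; i--) sub[par[bfs[i]]] += sub[bfs[i]];

    vector<array<int, 26>> pool;                 // trie node pool, -1 = no son
    auto newNode = [&]() {
        array<int, 26> a; a.fill(-1);
        pool.push_back(a); return (int)pool.size() - 1;
    };
    // the "slow" insertion, used only for light subtrees
    function<void(int, int, int&)> insertSub = [&](int v, int cur, int& sz) {
        for (int c : ch[v]) {
            int nxt = pool[cur][s[c] - 'a'];
            if (nxt == -1) {                     // new trie vertex = new string
                nxt = newNode();
                pool[cur][s[c] - 'a'] = nxt;     // write AFTER the possible realloc
                sz++;
            }
            insertSub(c, nxt, sz);
        }
    };
    vector<int> dif(n);
    // returns {root of H_v, |H_v|}
    function<pair<int, int>(int)> solve = [&](int v) -> pair<int, int> {
        int heavy = -1;
        for (int c : ch[v]) if (heavy == -1 || sub[c] > sub[heavy]) heavy = c;
        int root = -1, sz = 0;
        if (heavy == -1) { root = newNode(); sz = 1; }   // leaf
        else {
            for (int c : ch[v]) {
                auto pr = solve(c);              // dif of every son
                if (c == heavy) {                // reroot H_heavy: new root = s_v
                    root = newNode(); sz = pr.second + 1;
                    pool[root][s[heavy] - 'a'] = pr.first;
                }
            }
            for (int c : ch[v]) if (c != heavy) {        // light sons: from scratch
                int nxt = pool[root][s[c] - 'a'];
                if (nxt == -1) { nxt = newNode(); pool[root][s[c] - 'a'] = nxt; sz++; }
                insertSub(c, nxt, sz);
            }
        }
        return {root, dif[v] = sz};
    };
    solve(0);
    return dif;
}
```

On the tree 0('a') with sons 1('a'), 2('b') and 1's son 3('b'), the strings from the root are "a", "aa", "aab", "ab", so *dif* = 4, 2, 1, 1.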

Bonus: you have an additional tiebreaker condition for vertices with identical *dif*(*v*). Count the number of distinct strings which occurred exactly *k* times for each *k* in an array *P*_{r}[]; take the vertex/vertices with lexicographically maximum *P*_{r}[] (as many strings as possible which occur only once, etc).

Bonus 2: Can you get rid of the logarithm in the time complexity?

Comic strip name: Indy. Go read the whole thing, it's not very long, but pretty good.

In this problem, we are supposed to solve the 0-1 knapsack problem for a set of items which changes over time. We'll solve it offline — each query (event of type 3) is asked about a subset of all *N* exhibits appearing on the input.

If we just had 1 query and nothing else, it's just standard knapsack DP. We'll add the exhibits one by one and update *s*(*m*) (initially, *s*(*m*) = 0 for all *m*). When processing an exhibit with (*v*, *w*), in order to get loot with mass *m*, we can either take that exhibit and get value at least *s*(*m* - *w*) + *v*, or not take it and get *s*(*m*); therefore, we need to replace *s*(*m*) by max(*s*(*m*), *s*(*m* - *w*) + *v*); the right way to do it is in decreasing order of *m*.
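In code, the update (plus a tiny wrapper over a whole item list) is just a few lines — a sketch, names mine:

```cpp
#include <bits/stdc++.h>
using namespace std;

// One 0-1 knapsack update: decreasing order of m guarantees the exhibit
// is taken at most once (s[m] starts as all zeroes).
void addItem(vector<long long>& s, int w, long long v) {
    for (int m = (int)s.size() - 1; m >= w; m--)
        s[m] = max(s[m], s[m - w] + v);
}

// convenience: knapsack over a list of (w, v) items with mass limit k
vector<long long> knapsack(int k, const vector<pair<int, long long>>& items) {
    vector<long long> s(k + 1, 0);
    for (auto& [w, v] : items) addItem(s, w, v);
    return s;
}
```

For example, items (w=2, v=4) and (w=3, v=5) with k=5 give *s*(5) = 9 (take both) and *s*(4) = 5 (take only the second).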

In fact, it's possible to merge 2 knapsacks with any number of items in *O*(*k*^{2}), but that's not what we want here.

Note that we can add exhibits this way, but there's no similarly easy way to remove them. Thus, if there were no queries of type 2, we would be able to solve the whole problem in *O*(*Nk*) time by just remembering the current *s*(*m*) and updating it when adding an exhibit. Even if all queries were of type 2 (with larger *n*), we'd be able to solve it in *O*(*nk*) time in a similar way after sorting the exhibits in the order of their removal and processing queries/removals in reverse chronological order.

Let's have *Q* queries numbered 1 through *Q* in the order in which they're asked; query *q* is asked on some subset *S*_{q} of exhibits.

MAGIC TRICK: Compute the values *s*(*m*) only for subsets — the intersections of pairs of queries 2*q*, 2*q* + 1 (intersection of the first and the second query, of the third and fourth etc.), **recursively**. Then, recompute *s*(*m*) for all individual queries in *O*((*N* + *Q*)*k*) time by adding elements which are missing in the intersection, using the standard knapsack method.

What?! How does this work?! Shouldn't it be more like *O*(*N*^{2}) time? Well, no — just look at one exhibit and the queries where it appears. It'll be a contiguous range of them — since it's displayed until it's removed (or the events end). This element will only be missing in the intersection, but present in one query (so it'll be one of the elements added using knapsack DP), if query 2*q* + 1 is the one where it appears first or query 2*q* the one where it appears last. That makes at most two additions of each element and *O*(*N*) over all of them; adding each of them takes *O*(*k*) time, which gives *O*(*Nk*).

The second part of the complexity, *O*(*Qk*) time, is spent by copying the values of *s*(*m*) first from the intersection of queries 2*q* and 2*q* + 1 to those individual queries.

If we're left with just one query, we can solve it in *O*(*Nk*) as the usual 0-1 knapsack.

Since we're halving the number of queries when recursing deeper, we can only recurse to depth *O*(log*Q*) and the time complexity is *O*((*N* + *Q*)*k*log*Q*).

We can also look at this as building a perfect binary tree with sets *S*_{1}, ..., *S*_{Q} in leaves and the intersection of sets of children in every other vertex.

For each vertex *v* of this tree, we're solving the knapsack — computing *s*(*m*) — for the set *D*_{v} of displayed exhibits in it. We will solve the knapsack for the root directly and then proceed to the leaves. In each vertex *v*, we will take *s*(*m*), the set *D*_{p} of its parent *p* and find *s*(*m*) for *v* by adding exhibits which are in *D*_{v}, but not in *D*_{p}.

We know that the set *D*_{p} is of the form *S*_{a}∩*S*_{a + 1}∩...∩*S*_{b} for some *a*, *b* and *D*_{v} is either of the form *S*_{a}∩...∩*S*_{m} or *S*_{m + 1}∩...∩*S*_{b} for some *a* ≤ *m* < *b* (depending on whether it's the left or the right son). In the first case, only elements removed between the *m*-th and *b*-th query have to be added and in the second case, it's only elements added between the *a*-th and *m* + 1-th query. Since each element will only be added/removed once and the ranges of queries on the same level of the tree are disjoint, we will do *O*((*N* + *Q*)*k*) work on each level and the overall time complexity is *O*((*N* + *Q*)*k*log*Q*).

Of course, bruteforcing them in *O*(*NQ*) isn't such a bad idea, but it'd probably TLE — and we can do better. We've just described how we can pick those exhibits based on the queries between which they were added/removed. Therefore, we can find — for each exhibit — the interval of queries for which it was displayed and remember for each two consecutive queries the elements added and removed between them; finding the exhibits added/removed in some range is then just a matter of iterating over them. Since we're actually adding all of them, this won't worsen the time complexity.

In order to efficiently determine the exhibits in some set *S*_{a}∩...∩*S*_{b}, we can remember for each exhibit the interval of time when it was displayed. The exhibit is in the set if and only if it was displayed before the *a*-th query and remained displayed at least until the *b*-th query.

To conclude, the time complexity is *O*((*N* + *Q*)*k*log*Q*) and since we don't need to remember all levels of the perfect binary tree, but just the one we're computing and the one above it, the memory complexity is *O*(*Qk*).
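One compact way to implement the tree of intersections is the usual divide-and-conquer over the query timeline: an item displayed for queries *L*..*R* is added to the knapsack at the topmost recursion ranges fully covered by [*L*, *R*]. A hedged sketch (identifiers mine; an item is `{L, R, w, v}`):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Divide and conquer over queries lo..hi; s is the knapsack of all items
// covering the whole current range (passed by value = one copy per level).
void rec(int lo, int hi, vector<long long> s,
         const vector<array<int, 4>>& items, vector<vector<long long>>& ans) {
    vector<array<int, 4>> rest;
    for (auto it : items) {
        auto [L, R, w, v] = it;
        if (L <= lo && hi <= R) {
            // displayed during the whole range: add to the knapsack now
            for (int m = (int)s.size() - 1; m >= w; m--)
                s[m] = max(s[m], s[m - w] + v);
        } else if (L <= hi && R >= lo) {
            rest.push_back(it);          // relevant only deeper down
        }
    }
    if (lo == hi) { ans[lo] = s; return; }
    int mid = (lo + hi) / 2;
    rec(lo, mid, s, rest, ans);
    rec(mid + 1, hi, s, rest, ans);
}

// answers all Q queries; ans[q][m] = best value with mass limit m at query q
vector<vector<long long>> solveAll(int Q, int k, const vector<array<int, 4>>& items) {
    vector<vector<long long>> ans(Q);
    rec(0, Q - 1, vector<long long>(k + 1, 0), items, ans);
    return ans;
}
```

Each item is added *O*(log*Q*) times in total (once per covered node), which is where the *O*((*N* + *Q*)*k*log*Q*) bound comes from.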

Tutorial of Codeforces Round #333 (Div. 1)

Tutorial of Codeforces Round #333 (Div. 2)

(original)

Hello everyone!

CF round #333 (both divisions) is going to take place today. The authors of this round are me and Baklazan.

By total coincidence, I'm the author of even-indexed problems and Baklazan the author of odd-indexed problems. Now I'm going to let you wonder if it's 0-based or 1-based :D.

As usual, thanks to GlebsHP for his help (in particular, helping us not overkill the problems), MikeMirzayanov for CF and Polygon, Delinur for problem statement translations and our testers: misof, Mimino, AlexFetisov and winger.

I wish good results and rating changes to those who earn it, as well as a zero-sum rating update to everyone.

Score distribution:

Div. 2: 500-1000-1500-2250-2250

Div. 1: 500-1250-1250-2000-2500

Winrars:

Div. 1

Div. 2

(homogeneous colours :D)

As usual, 10 days from the first Friday of the month (link to the contest; starting time), a Long Challenge contest takes place. As usual again, there are 10 problems — 9 with subtask scoring and one challenge problem with relative scoring. Feel free to participate!

The people involved are:

setters: Furko, eartemov_tsogu, EG0R, aangairbender, Rebryk, cenadar, Cherrytree, KaiZeR, tehnar

tester: iscsi

editorialist: myself

language verifier: RahulArora

translators: Cherrytree — Russian, Team VNOI — Vietnamese, huzecong — Mandarin

contest admin: I_lost_my_handle

**UPD:** The editorials are up. I still have stuff like links to add, but everything should be complete otherwise.

Hello, and, again, you are invited to a monthly contest: HackerEarth October Clash.

The problemsetter this time is RomaWhite; I'm the tester and editorialist. After all, I have to give I_love_Tanya_Romanova a chance to compete! :D

The contest takes place this weekend (starting time) and runs for 24 hours. There should be 6 algorithmic problems with partial scoring (points are given independently for passing each test file) and 1 challenge problem. I think the problems are easier than last month, when two subtasks of one problem went unsolved by anyone.

Good luck!

Btw, round 1 of COCI overlaps with this Clash for 3 hours. That, of course, doesn't mean it's not possible to participate in both at the same time — from my own experience. It gets harder when trying to take part in all contests which take place tomorrow...

UPD: The contest has finished, congratulations to the colourful top 5:

- Catalin Stefan Tiseanu
- izrak
- rantd
- fataleagle
- lewin

The editorials were delayed (because I overslept), ~~almost~~ all of them are up now.

The next edition of Internet Problem Solving Contest is here!

In case you don't know (yet), IPSC problems come in a great variety, from standard algorithmic ones to problems where you interact with the grader or need to find a specific input.

Most problems have an easy input worth 1 point and hard input worth 2 points; otherwise, the scoring is ACM-like, with WAs on easy inputs giving time penalty 10 minutes instead of 20. The input files are available for download and you only upload your outputs (in a similar manner to GCJ, except there's no additional time limit).

It's a team competition for teams of three. IPSC 2015 takes place on 20th of June, from 11:00 UTC and lasts 5 hours. You may register here.

Registered teams |
---|

(I won't be adding teams here anymore, because who cares anymore)

Place | Points | Time | Team name | Member 1 | Member 2 | Member 3 |
---|---|---|---|---|---|---|

5 | 30 | 2775 | Warsaw Capybaras | mnbvmar | Swistakk | Errichto |

6 | 29 | 2155 | Havka-papstvo | Egor | pashka | Petr |

12 | 28 | 2166 | Knifeproof Tank | niyaznigmatul | VArtem | tourist |

16 | 27 | 2577 | sudo set-position rand()%N | fsouza | marcoskwkm | stefanot |

26 | 25 | 1815 | Andromeda Express | ainu7 | Jongman | astein |

27 | 25 | 1873 | Team Accepted Limit Exceeded | popoffka | Alex_2oo8 | Ingugugus |

32 | 24 | 2417 | ThankYouMikeMirzRAYanovForYou- (sic) CodeforceAndPolygonPlatforms | abyssmall | izrak | junkbot |

33 | 24 | 2423 | Unpretired | ilyakor | Jacob | gusakov |

35 | 23 | 1803 | SPb SU 8/3 | Dmitry_Egorov | PavelKunyavskiy | nk.karpov |

36 | 23 | 2029 | Corridors of Time | flydutchman | Riatre | this_isssssyy |

40 | 23 | 2468 | Dig | AliB | PrinceOfPersia | Swift |

42 | 23 | 2565 | stigma | sugim48 | evima | stqn |

44 | 22 | 1528 | RaccoonRush | subscriber | enot.1.10 | antonkov |

52 | 21 | 1826 | W4yneb0t | W4yneb0t | ||

55 | 21 | 1890 | MooOOoOooOOOOoOooooP | DemiGuo | ksun48 | yummy |

61 | 20 | 1503 | iThan | chaotic_iak | jonathanirvings | nathanajah |

63 | 20 | 1639 | itmo150216 | izban | vlad107 | belonogov |

64 | 20 | 1706 | My Igloo Is Melting | Kuroba | FatalEagle | zxqfl |

67 | 20 | 1773 | Return of Celtic Warriors | Tanaeem | sunny | dragoon |

72 | 20 | 2014 | ALREADY HAVE DONE | HYEA | koosaga | Jiyong Youn |

80 | 19 | 1345 | dolphin secret agents | stan | acherepanov | permin |

91 | 19 | 1837 | MSU Tashkent Old Coders | Timur_Sitdikov | SergeyLazarev | helThazar |

102 | 19 | 2425 | Ural FU Dandelion | mpivko | Umqra | Um_nik |

104 | 18 | 1335 | PAPFans | M.Mahdi | PAP | SeyedParsa |

115 | 18 | 1692 | GD-PSP | Jokser | sweiss | pvasilyev |

128 | 17 | 1244 | Frozen Heart | Nikitosh | Tehnar | ComradePetr |

131 | 17 | 1433 | Zenith | langtrunghieu | I_love_Hoang_Yen | flashmt |

141 | 17 | 2324 | 12.0ngskar | dolphinigle | Gyosh | fushar |

142 | 16 | 1244 | CodeFights | ---Grigor--- | aram90 | armen.tsirunyan |

143 | 16 | 1281 | AOI2 | gdisastery | fleimgruber | |

156 | 16 | 1722 | BajaB | shayanH | Amdi | AliA |

167 | 15 | 1042 | kraskevich_team | ILoveCoding | ||

174 | 15 | 1309 | Please explain why havka eto papstvo | OutSide | FxF | Fcdkbear |

180 | 15 | 1472 | Bangladesh Avengers | emo | moshiur | sohelH |

183 | 15 | 1533 | Saratov SU 1 | kuviman | IlyaLos | dans |

212 | 14 | 1319 | ZER | zholnin | Elgun | _resul |

243 | 13 | 1314 | cup_of_team | cup_of_tea | ||

255 | 13 | 1748 | Dirsa | kristapuciitis | sanja | Mishutnik |

265 | 12 | 1073 | Masr Islamia (╥‿╥) | khaledkee | ahmednaoum | Mohamed Al-Jamal |

270 | 12 | 1197 | B-b-b-b-bones! | mysterion | knst | |

271 | 12 | 1240 | ☺ I can't see plus-plus ☺ | tjandra | ||

279 | 12 | 1389 | namelist.insert("দৌড়ের উপর") | enzam | VUAcoder | wasi.ahmed |

280 | 12 | 1425 | kvark161: | kvark161 | ||

282 | 12 | 1564 | 8-HD-720p-YIFY-Ganool.3gp | azaky | farisv | makanbeling |

301 | 11 | 1128 | For the watch | ashish1610 | rohangarg | kshitij_jain |

303 | 11 | 1164 | Chega de saudades | ivanilos | ||

309 | 11 | 1246 | sorry_helli | SaDDaS | Silver_Soul | I_Love_Yuuki_Asuna |

318 | 10 | 739 | Donkey Fly | EKGMA | HamidReza_H | Rubik_AA |

320 | 10 | 802 | dpsd_team | rajat1603 | sidhbansal | additya1998 |

321 | 10 | 817 | Flawless | fdg | Furko | mgch |

335 | 10 | 982 | unemployed, useless dropout & cute woman | vadimmm | Rubanenko | baba_beda |

343 | 10 | 1062 | Alexander Udalov | udalov | ||

348 | 10 | 1104 | 85-85 | farzad.shbfn | shamir0xe | |

361 | 10 | 1303 | Samne Porikkha...Asen Contest Kori | zubaer.kh | amlansaha | Honour_00 |

366 | 10 | 1348 | Choker | riad | LinKin | jehad131 |

376 | 9 | 673 | bambino | 2shraaf | badry | mohamed.mehany |

383 | 9 | 824 | SPiromanii&Messi | LUN4M0R1P5 | teoionescu | george_stelian |

414 | 8 | 786 | Never Slowdown | reza_h | DemoVersion | |

428 | 8 | 1023 | les apparences sont trompeuses members | safrout | KarimElSheikh | MedoN11 |

430 | 8 | 1057 | UHv6 | jcg | marX | |

464 | 7 | 697 | NHSPC Juniors | ruhan.habib39 | tasmeemreza | rubabredwan |

485 | 6 | 228 | TeamUFRN | heliobdf | Zailton | RailtonT |

488 | 6 | 254 | milkyway | ptnk_1215_panaka_13 | touristv2 | TiChuot97 |

505 | 6 | 462 | code_phoenix | ajinkya1p3 | yogeshg39 | prabhu_3095 |

519 | 6 | 647 | Vypiem_za_Lubov | Nicolas_Cage | Montezuma | vip_gevorg |

565 | 5 | 489 | Nalin Bhardwaj | NibNalin | ||

576 | 5 | 1031 | Sandor team | SandorGarcia | ||

588 | 4 | 87 | GG izi commend me | jlr.terceiro | ronisds | |

595 | 4 | 110 | Nemidunam | MR.GoryTnych | Alimol | |

618 | 4 | 188 | ONU_Feuerbach | ONU_V_V_Ivanov | Dubzzy | illarionovam_onu |

623 | 4 | 204 | Hot Tomato Sauce | nirob_mh | shm0007 | |

633 | 4 | 259 | DS & DP^2 | besher | Alex7 | joudzouzou |

660 | 4 | 846 | Primo | Drizlerz | zulkan | |

689 | 3 | 76 | BK.Troll | farmerboy | thomas | |

717 | 3 | 159 | The Deterministics | asdoc | PowerShell | xennygrimmato |

746 | 3 | 312 | hichzat | bayram98 | horojyk | Kerim.K |

769 | 3 | 512 | thehandofgod | belowthebelt | deepankarak | pulkit_180894 |

773 | 3 | 606 | KBTU Tarjan | Azizkhan | Madiyar | Temirulan |

N/A | N/A | N/A | 2017 ACM-ICPC absolute winners | T0RRES | ||

N/A | N/A | N/A | Abdelkarim | abdelkarim | ||

N/A | N/A | N/A | Binam | bijan | Nastaran75 | sPooky |

N/A | N/A | N/A | DivineByCode | amankedia | Kanish_The_Vista | ace_d_king |

N/A | N/A | N/A | Dodgers | vlade087 | balle | deinier |

N/A | N/A | N/A | guptashvm | gupta_shvm | ||

N/A | N/A | N/A | Istrebitel | DARIUS_XIX | Suhaylee | |

N/A | N/A | N/A | j1k7_7 | jaskamalkainth | ||

N/A | N/A | N/A | Korol', mudak, i rzhavaya zapchast' | silap-aliyev | bayram | osmanuss |

N/A | N/A | N/A | Panda Po | chrome | N.Elibay | ErzhaNN |

N/A | N/A | N/A | shockers | Ajay | Sundar | Karthik |

N/A | N/A | N/A | Team Zabava | radoslav11 | vanjo9800 | P_Nyagolov |

N/A | N/A | N/A | Tmcoteam | Allanur_tkm | Bekmyrat | _NinjA |

N/A | N/A | N/A | Whatever | ikbal | EMINAYAR | enesoncu |

N/A | N/A | N/A | Zerry2015 | lamngo96 | nhathuyen95 | duongtnhat |

These days (26.-27.3.2015), another national round of Slovak Olympiad in Informatics took place. Just short problem statements for now, I'll write it in more detail and with solutions later:

There's a hazelnut chocolate of size *N* × *M*, which can be viewed as a 2D grid. In each cell of the grid, there's a given number of nuts. Two players play a game, alternating turns: each player has to cut off either the bottom row or the rightmost column of the remaining part of the chocolate in her turn (and do with it guess what). The cut off row/column has to contain an even number of nuts. The player that can't make a move loses. Determine the winner if both play optimally.

You have two numbers *N*, *K*, one input file with *N* - *K* distinct numbers between 1 and *N* and 2*K* + 2 empty files. Find the missing *K* numbers.

Your program has a very limited amount of memory (the files' content isn't stored in that memory), just 10 kB for *K* ≤ 100; its memory complexity can't depend on *N*. Primarily, you need to minimise the worst-case number of reads+writes to files; only then are you minimising the time and memory complexity of your algorithm.

Files work like queues and can be erased in small constant time.

a very long introductory text and some heavily theoretical magic with simulating boolean conditions using other conditions and proving that it's always possible/impossible

There are *N* numbers up to 10^{9} and another number *R*, also up to 10^{9}. Find up to 4 numbers (one number can't be used multiple times) among these *N* that sum up to *R* or decide that they don't exist. *N* ≤ 4000.

There's a straight river of constant width, *N* points on one side of it and *M* on the other side. You need to connect some of them with (obviously straight) lines that have the minimum total length in such a way that each of these *N* + *M* points has at least one line connecting it with another point on the other side of the river. *N*, *M* ≤ 20000.

The first 3 problems are theoretical (points for explaining your algorithm), the other 2 practical (typical contest style, partial scoring); the max. time limit was 10s in both.

Complete problemset + tests + original presentation of solutions.

Official scoreboard, public contest scoreboard, used for training elsewhere.

Short hints first, more detailed solutions afterwards.

(difficulty: medium-hard, code)

Maxflow in a suitably selected graph. Ford-Fulkerson is sufficient.

The vertices of our graph need to correspond to states that one person can be in when traversing the original road network; the number of people moving in a group is then represented by flow along edges. Therefore, the most natural choice is to use vertices corresponding to states (location, time).

The rules from the problem statement say that at most *c* people can move along an edge *e* = (*u* → *v*) in the original graph at any time. That means if we add an edge from (*u*, *t*) to (*v*, *t* + *s*) (*s* is the time it takes to traverse edge *e*), it needs to have capacity *c*; such edges fully describe these rules.

Staying in place is also moving, just into the same vertex; it takes one time unit. That adds some more edges (with infinite capacity, as any number of people can stay in place).

Now, a path of some person is represented by a unit flow in this graph. In order to find the maximum number of people that can get somewhere, we clearly want a maximum flow in this graph.

Wait, people can't wait forever! We still haven't used the restriction on time *S* till zombification, but that's simple — just don't use vertices corresponding to time > *S*. This also bounds the number of vertices to *O*(*NS*) and edges to *O*((*N* + *M*)*S*).

Furthermore, we need a source vertex *v*_{S}, a sink vertex *v*_{T} and edges from/to them; the answer is the maxflow from *v*_{S} to *v*_{T}. It's obvious that we need 1 edge from the source to the starting vertex of our group of *G* people (at time 0), and that edge should have capacity *G*. For edges to the sink, we'll use the last remaining part of the input (finally, the input is spent!): they need to go from the medical facilities, at time *S* (there's no harm in waiting for a film-like cliffhanger), and have infinite capacity.

All that's left is applying a standard maxflow algorithm, for example Ford-Fulkerson. That one has complexity *O*((*V* + *E*)*F*), where *F* is our answer (the maxflow), *V* and *E* are vertices and edges of our graph — in this case, it's *O*((*N* + *M*)*SG*). It actually runs in time, because its constant is fairly small and most tests are small or have small answers.
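The maxflow itself can be any standard implementation. Here's a minimal BFS-augmenting (Edmonds-Karp-style) Ford-Fulkerson on an adjacency matrix — a generic sketch, not the contest code; the zombie problem would first build the time-expanded graph described above and then call this:

```cpp
#include <bits/stdc++.h>
using namespace std;

// cap[u][v] = capacity of edge u -> v; returns the s-t maxflow.
// Repeatedly find an augmenting path by BFS and push flow along it.
int maxflow(vector<vector<int>> cap, int s, int t) {
    int n = cap.size(), flow = 0;
    while (true) {
        vector<int> par(n, -1); par[s] = s;
        queue<int> q; q.push(s);
        while (!q.empty() && par[t] == -1) {
            int u = q.front(); q.pop();
            for (int v = 0; v < n; v++)
                if (par[v] == -1 && cap[u][v] > 0) { par[v] = u; q.push(v); }
        }
        if (par[t] == -1) return flow;       // no augmenting path left
        int aug = INT_MAX;
        for (int v = t; v != s; v = par[v]) aug = min(aug, cap[par[v]][v]);
        for (int v = t; v != s; v = par[v]) {
            cap[par[v]][v] -= aug;           // use up forward capacity
            cap[v][par[v]] += aug;           // add residual back-capacity
        }
        flow += aug;
    }
}
```

On a small graph with edges 0→1 (cap 3), 0→2 (cap 2), 1→2 (cap 1), 1→3 (cap 2), 2→3 (cap 3), the maxflow from 0 to 3 is 5.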

(diff: easy-medium, code)

BFS in a graph of cooking times.

Again, we can construct a directed graph of states. In this case, each state will be a time set on the microwave so far; from it, you have *N* edges corresponding to button presses that lead you to a different time as described in the problem statement.

Obviously, you want to find the minimum distance from vertex (*t* = 0) to (*t* = *T*), or min. distances to vertices (*t* > *T*) (since you can always keep pressing one positive button, at least vertex (*t* = 3600) is guaranteed to be reachable) when necessary. One BFS is enough to find the min. distances of all vertices from (*t* = 0) and loop over all vertices (*t* ≥ *T*) until you find one with non-infinite distance.

There are *O*(3600*N*) edges in our graph and *O*(3600) vertices, so our BFS takes *O*(3600*N*) time.
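If you want to play with this, here's a BFS sketch. The exact button behaviour isn't restated here, so I'm assuming (hypothetically) that button *i* adds *b*_{i} seconds and the display clamps to [0, 3600] — adjust to the real statement; `minPresses` is my naming:

```cpp
#include <bits/stdc++.h>
using namespace std;

// BFS over microwave times 0..3600; returns the min number of presses to
// reach the smallest reachable time >= T, or -1 if none exists.
// ASSUMPTION: button i adds b[i] seconds, clamped into [0, 3600].
int minPresses(const vector<int>& b, int T) {
    const int MAXT = 3600;
    vector<int> dist(MAXT + 1, -1);
    queue<int> q;
    dist[0] = 0; q.push(0);
    while (!q.empty()) {
        int t = q.front(); q.pop();
        for (int d : b) {
            int nt = max(0, min(MAXT, t + d));
            if (dist[nt] == -1) { dist[nt] = dist[t] + 1; q.push(nt); }
        }
    }
    for (int t = T; t <= MAXT; t++)      // first reachable time >= T
        if (dist[t] != -1) return dist[t];
    return -1;
}
```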

(diff: med-hard, code)

Convex hull; fix 1 vertex (*u*) of the quadrilateral, iterate the opposite one (*w*) along the convex hull and recalculate the furthest vertices *v*, *x* from line *u* - *w* on each of its sides. *O*(*N*^{2}).

We need to pick a quadrilateral (possibly degenerated into a triangle) with maximum area. Obviously, a triangle is better than a concave quadrilateral, so let's stick to convex ones.

Suppose that we picked one diagonal of our quadrilateral. Its area is the sum of triangles made by this line segment and 2 vertices on opposite sides from it. Using the well-known formula: area (of a triangle) = base * height / 2, we realize that the most distant points from the line give the largest area, because the chosen diagonal is each triangle's base.

We can imagine taking a line parallel to our diagonal and moving it perpendicularly to it; the last vertices it crosses (depending on the direction of its movement) are the most distant ones. But this is equivalent to saying that these most distant vertices must lie on the convex hull — if we draw a line through a point that's not on the convex hull, there will be points (strictly) on both sides from it, the ones belonging to the convex hull. Applied to both diagonals, it means we only need to use points on the convex hull.

Let's now pick a diagonal between 2 vertices on the convex hull and rotate our coordinate system so this diagonal coincides with the *x*-axis. We need to find the topmost point (farthest from the *x*-axis on one side, the other side can be done analogously). By definition of convexity, as we go from left to right on the hull, the angle between the *x*-axis and the side of the hull we're currently moving along will be non-increasing, which means the *y*-coordinates of points we pass first increase (up to the first segment with negative angle) and then decrease.

If we rotate this diagonal around one vertex *u* of the hull, starting with position parallel (angle 0°) to one side *u* - *v* of the hull in that place, adding vertices to the part "above" the diagonal, the angles of all segments with the diagonal increase equally and the first segment with negative angle moves along the hull in the same direction we're adding vertices in. Therefore, we can use two pointers to store the other point of the diagonal *w* and the topmost point *v*.

Since the "bottommost" point *x* can be recomputed using the 2 pointers' method with *w* in parallel to recomputing *v* and we can calculate the respective area easily using vector cross products, this gives us (together with the convex hull) an *O*(*N*^{2}) algorithm.

(diff: med-hard, code)

Build a graph with vertices (intersection, direction); it's a specific union of cycles. Adding a signpost points all locations on 0 or 2 cycles to the goal.

The graph we're given is not very good, since the next intersection depends on the previous one. A much better graph is a directed one with vertices (intersection, direction). In this graph, the next vertex is given uniquely and the vertex from which we had to arrive is also given uniquely, so it must be a union of cycles.

Furthermore, if we had a cycle in it along some intersections, then there must be another cycle in which the order of intersections is opposite — it corresponds to traversing the same path in the original graph, just taking the opposite direction at each intersection.

Now, what would happen if we added some signposts? Each vertex would still have outdegree 1, but some would have indegree 0, so each connected component would be a cycle with some trees (directed to the root) appended to its vertices as roots. The situation we want to obtain is for each component's cycle to contain one of the goal's vertices (there are 4 of them, and at most 4 such cycles).

Let's look at the intersections in the goal's cycles, and at signposts in them. Such an intersection corresponds to 4 vertices, two of which originally lay in the goal's cycles already. Adding a signpost would therefore point at most 2 other cycles to the goal. After we added such a signpost, we could look at the graph again, find an intersection that leads tourists to the goal in two directions (either directly along an original goal's cycle, or by finding the previously added signpost) and add a signpost to it, and repeat until the goal is always reachable.

In fact, we can always find an intersection such that adding a signpost would point 2 cycles to the goal this way. Thanks to our strategy and the symmetric property of our graph, we know that pointing one cycle to the goal means pointing its opposite direction cycle to the goal as well, so we point 0 or 2 cycles. And if we could only point 0 cycles, that would mean the original graph isn't connected.

Therefore, we can calculate the answer very easily: it's the number of components that don't contain the goal's intersection / 2. All it takes is one BFS for each component, in *O*(*N*) time total.

(diff: medium, code)

A classical problem solvable simply using sorting and a minimum-BIT in *O*(*N*log*N*).

The order of engineers doesn't matter, so let's sort them by their first ranks. Now, the only people who can kick an engineer out of the shortlist now have to be before him in the sorted order, so let's process them in that order.

For an engineer (*r*_{2}, *r*_{3}), we need to find out if, among people processed before him and with smaller *r*_{2}, there's someone also with smaller *r*_{3}. That's straightforward if we use a minimum-BIT: if we store in the BIT an array *A*[] with *A*[*r*_{2}] = *r*_{3} for already processed engineers and *A*[*r*_{2}] = ∞ for the rest, then it's equivalent to asking if min(*A*[1..*r*_{2} - 1]) < *r*_{3}. Based on the answer, we can decide whether to add him to the shortlist. Then, we need to add the currently processed engineer by updating *A*[*r*_{2}] to min(*A*[*r*_{2}], *r*_{3}).

Sorting can be done in *O*(*N*), queries and updates on the BIT work in *O*(log*N*), so we have an *O*(*N*log*N*) algorithm.
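Put together, the sweep might look like this (a sketch; `shortlistSize` and the prefix-minimum BIT layout are mine, and I assume each rank column is a permutation of 1..*n*):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Fenwick tree for prefix minima over ranks 1..n; point updates may only
// decrease values, queries return min(A[1..i]).
struct MinBIT {
    vector<int> t;
    MinBIT(int n) : t(n + 1, INT_MAX) {}
    void update(int i, int v) {
        for (; i < (int)t.size(); i += i & -i) t[i] = min(t[i], v);
    }
    int query(int i) {
        int r = INT_MAX;
        for (; i > 0; i -= i & -i) r = min(r, t[i]);
        return r;
    }
};

// eng[i] = {r1, r2, r3}: an engineer stays on the shortlist iff nobody
// beats him in all three ranks at once.
int shortlistSize(vector<array<int, 3>> eng) {
    sort(eng.begin(), eng.end());                 // by first rank
    int n = eng.size(), cnt = 0;
    MinBIT bit(n);
    for (auto& e : eng) {
        if (bit.query(e[1] - 1) > e[2]) cnt++;    // no dominating predecessor
        bit.update(e[1], e[2]);
    }
    return cnt;
}
```

For instance, {1,2,3}, {2,1,2}, {3,3,1} are pairwise incomparable (everyone is best at something), so all three stay; {1,1,1}, {2,2,2} leaves only one.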

(diff: hard, code)

Boats are edges, designs are edges. Compress the graph to a tree with its root marked, then keep marking the vertex that has the most unmarked vertices on the path from it to the root. It can be done with preorder numbering and a segment tree.

The input graph has specific form that simplifies stuff a lot — it's connected and has a part that always stays afloat. We can identify this part and compress it into the root of a tree. How? By identifying the part that doesn't stay afloat, and that can be done simply by cutting off leaves of the graph (stored in a queue) and adding their neighbours to the queue if they become leaves.

Now, we have a rooted tree, with only its root marked as "stays afloat". If we connect a boat to its unmarked vertex, that vertex and all on the path from it to the root will also become marked that way. Obviously, we want to connect boats to leaves only; in fact, we want to connect a boat to a deepest vertex (if there are more, any one will do). If we didn't connect a boat to any of the deepest vertices, we could look at the first vertex *w* on the path from one of them (*v*) to the root that becomes marked in the end, take a boat from its subtree (connected to *u*) and connect it to that deepest vertex (*v*) instead, gaining at least one marked vertex, because we mark all vertices from *w* to *v*, unmark at most all vertices from *u* to *w* and *v* must have been deeper than *u*.

We can connect the first boat to this vertex and mark all vertices on the path from it to the root; this can be done by simply moving along the path until we encounter a previously marked vertex. Where to put the next boat? Again, to the vertex with the most unmarked ones on the path from it to the root (let's call this number "true depth"). The argument is that the cost is the same as if we just took all marked vertices and merged them into the root (we did this at the beginning, remember?), obtaining another tree with all but the root unmarked, which is the same situation as when choosing the vertex for the first boat.

We now have a clear idea of *what* to do — *K* times "pick the truly deepest vertex and mark all vertices above it (including itself)". Then, we just need to count unmarked vertices to get the answer. The only question remaining is *how* to do this.

Marking vertices is straightforward, the only problem is updating true depths and finding the truly deepest vertex. What often helps with this type of problems is renumbering vertices so that each subtree would contain vertices from one interval. For example, with preorder numbering, which can be computed with one DFS — the root gets number 0, its first son gets number 1, then the subtree of the first son is numbered recursively, the second son gets the first unused number, then its subtree is numbered recursively, the 3rd son gets the first unused number again etc. This produces the desired numbering, and updating true depths when vertex *v* is marked turns into subtracting 1 from all true depths in an interval (corresponding to *v*'s subtree).

So, we need to find the maximum and subtract a constant from an interval. Does that remind you of anything? Yes, a maximum interval/segment tree. With lazy updates, and able to find the vertex that this maximum belonged to. Lazy propagation is a standard thing, so I won't describe the tree here, but I'll mention that in order to get the vertex, you can keep the maximum of pairs (true depth, corresponding vertex). The time complexity is *O*(*N* + *K* log *N*), because all but the query answering/updating takes just linear time and a segment tree can be constructed in *O*(*N*).

(diff: easy, code)

Take the derivative of function *F*(*R*), find the optimal *R* and *F*.

This is one of the most basic problems in calculus. *F*(*R*) is a polynomial, which is differentiable on all of ℝ, so its maxima must satisfy *F*'(*R*) = 0 and *F*''(*R*) < 0.

In this case, it means 2*aR* = *b* and -2*a* < 0. The second condition is always satisfied; the first one has exactly one solution, *R*_{mx} = *b* / 2*a*, with the corresponding maximum *F*(*R*_{mx}) = *b*^{2} / 4*a* + *c*.

We want just solutions with *R* > 0, but in this problem, it's always satisfied.

We get a straightforward *O*(*N*) code; doubles are fully sufficient for max. values of *F*.

(diff: harder, I'm not sure if "very hard" is accurate)

Lol nope. Look for it down in the comments.

(diff: medium, code)

*N* = *G*_{n} = *F*_{n - 1}*b* + *F*_{n - 2}*a*; try all reasonable *n* and solve the equation (think about why this is equivalent to *N* = *F*_{n - 1}*b* + *F*_{n - 2}*a*).

We know that *N* = *G*_{n} = *F*_{n - 1}*b* + *F*_{n - 2}*a* with *F*_{ - 1} = 1, *F*_{0} = 0 (it's not called a linear recurrence without reason); proof by seeing that it satisfies all conditions it needs to satisfy. For *a*, *b* > 0, *G*_{n} ≥ *F*_{n} and Fibonacci numbers increase quickly, so we can try all *n*, for which *F*_{n} ≤ *N* holds.

Let's fix *n* ≥ 3. Modulo *F*_{n - 1}, we know that *N* ≡ *F*_{n - 2}*a*; consecutive Fibonacci numbers are also coprime (easy proof by induction), so *F*_{n - 2} has a modular inverse and *a* ≡ *N*·*F*_{n - 2}^{-1} (mod *F*_{n - 1}).

We can precompute Euler's φ of all *F*_{n - 1} ≤ *N* after factorising them. Then, we can compute the smallest positive *a*_{0} that can lead to a good pair (*a*, *b*) using fast exponentiation and its respective *b*_{0}.

We can add any multiple of *F*_{n - 1} to *a*_{0} and subtract the same multiple of *F*_{n - 2} from *b*_{0} to get all the other pairs (*a*, *b*). The largest *k*, for which *a*_{0} + *kF*_{n - 1} ≤ *b*_{0} - *kF*_{n - 2} holds, is *k* = ⌊(*b*_{0} - *a*_{0}) / (*F*_{n - 1} + *F*_{n - 2})⌋ (integer division). If *b*_{0} < *a*_{0}, there's clearly no way to make a good pair (*a*, *b*) with this *n* (*a*_{0} must be the smallest possible *a* here, but we'd try to subtract something from it). Otherwise, the pair corresponding to this *k* must be good.

We just need to pick the optimal one of the good pairs (*a*, *b*) found this way. There are *O*(log *N*) possible *n*-s, each requires an exponentiation for the modular inverse (the factorisations for φ are precomputed); all remaining steps are *O*(1), so we get *O*(log^{2} *N*) complexity per query with precomputation.

(diff: easy, code)

Trivial simulation of what's described in the statement (walk around the maze and mark cells as empty), just bug-prone. Nothing else to add, you can read my code to understand better.

(diff: medium, code)

Meet in the middle, pick one half of correct answers. 31^{12} < 10^{18}, so a vector can be compressed into a 64-bit integer.

The simplest bruteforce would try all possible combinations of correct answers, count the number of correct answers each student got and check whether it matches what the students claim. But that's too slow.

A better solution uses the meet-in-the-middle approach. We can bruteforce all combinations of the first ⌈*M* / 2⌉ correct answers and count how many of them each student answered correctly, as a vector *v* of size *N*. The same can be done for the last ⌊*M* / 2⌋ answers, obtaining vectors *w* of how many answers each student needs to answer correctly in the other half. The answer to the problem is the number of pairs (*v*, *w*) with *v* = *w*.

We could just throw all vectors *v* into a `map<>` and compute the answer directly by looping over all *w*. But to make sure it doesn't TLE, we can convert the vectors into numbers in base *M* + 1. Since these numbers are at most (*M* + 1)^{N} ≤ 10^{18} here, 64-bit integers are sufficient for this representation. Comparing vectors in *O*(*N*) now turns into comparing integers in *O*(1).

There are *O*(2^{M / 2}) halves of correct answers to check (*O*(*MN*)), compress (*O*(*N*)) and insert into or search in a map (*O*(*M*), since the map holds up to 2^{M / 2} keys), so the time for this is *O*(2^{M / 2}*MN*).

The last blog got downvote bombed into oblivion, but I have enough contribution to spare.

TC SRM 638 takes place soon (ಠ_ಠ).

Feel free to discuss the problems here after the round ends (because TC apparently doesn't have placeholders for new matches in its forum anymore...).


UPD: this blog post is now obsolete, because it doesn't show on the Recent actions list due to negative votes. Comment in here instead.

This was originally intended as an answer for this comment, but eventually I realized that it's too long (also, the Preview feature takes way too long to process it), so it's a blog post instead.

This has complexity of *O*(*K*^{2} log *N*) and a horrible constant. If there's a better solution, write it in the comments.

UPD: KADR provides a solution using polynomial interpolation in *O*(*K*^{2}).

There are *N* boxes numbered 1..*N*; box number *i* contains *i* candies and 1 stone. We pick 1 random object from each box. Calculate *p* modulo *M* (the rational *p* = *P* / *Q* represented as *P*·*Q*^{-1} mod *M*, as usual), where *p* is the probability that exactly *K* of these *N* objects are candies.

Constraints: 1 ≤ *K* ≤ *N* ≤ 10^{9}, 2·10^{9} ≥ *M* ≥ 10^{9} and *M* is prime.

It's clear that since the probability of picking a candy from box *k* is *k* / (*k* + 1) and the probability of not picking one is 1 / (*k* + 1), the answer is (*S*_{n} denotes the set {1, 2, …, *n*})

$$p = \frac{1}{(N+1)!} \sum_{S \subseteq S_N,\ |S| = K}\ \prod_{k \in S} k\,.$$

Let's denote the sum here (without the 1 / (*N* + 1)! factor) as *P*(*N*, *K*) and introduce another sum (*N* / 2 is integer division)

$$P_2(N, K) = \sum_{S \subseteq \{N - N/2 + 1, \ldots, N\},\ |S| = K}\ \prod_{k \in S} k\,,$$

which calculates the same sum as *P*(*N*, *K*), but with an additional restriction: *S* can only be a subset of the range [*N* - *N* / 2 + 1, *N*], the upper half of *S*_{N}.

If that restriction was instead that *S* can only be subsets of the range [1, *N* / 2], the sum would obviously be just *P*(*N* / 2, *K*).

Also, let's denote $\pi(S) = \prod_{k \in S} k$.

**Naive solution**

If the constraints were small, we could use simple DP on *P*(*N*, *K*). If we don't use *N* in *S*, then we sum up π(*S*') of the same subsets *S*' as in *P*(*N* - 1, *K*). If *N* ∈ *S*, we only need *K* - 1 elements from the range *S*_{N - 1}, so we sum up π(*S*) = *N*π(*S*') of the same subsets *S*' as in *P*(*N* - 1, *K* - 1).

This runs in *O*(*NK*) and would obviously give TLE. What we can do instead is splitting the range *S*_{N} into 2 halves of size *N* / 2 (and possibly 1 integer *N* / 2 + 1 in the center for odd *N*).

**Calculating P(N, K) — even N **

Suppose we knew how to calculate *P*_{2}(·, ·) already. In order to calculate *P*(*N*, *K*) for even *N*, we can split any *S* ⊆ *S*_{N} with |*S*| = *K* disjointly and uniquely into *S*_{a} = *S* ∩ *S*_{N / 2} and *S*_{b} = *S* ∩ {*N* / 2 + 1, …, *N*}.

If |*S*_{a}| = *i*, then |*S*_{b}| = *K* - *i*; any 2 sets *S*_{a} and *S*_{b} can be merged to form one set *S* with π(*S*) = π(*S*_{a})π(*S*_{b}), and so we have for fixed *i*

$$\sum_{|S_a| = i}\ \sum_{|S_b| = K - i} \pi(S_a)\,\pi(S_b) = P(N/2, i)\, P_2(N, K - i)\,.$$

Since *i* can be any integer in [0, *K*], we get

$$P(N, K) = \sum_{i = 0}^{K} P(N/2, i)\, P_2(N, K - i)\,.$$

**Calculating P(N, K) — odd N **

In this case, we can again split *S* disjointly and uniquely into *S*_{a}, *S*_{b} as above and another set *S*_{c} = *S* ∩ {*N* / 2 + 1}. We have 2 choices for *S*_{c}: either it's empty or contains *N* / 2 + 1; if it's empty, then we get the same sum as for even *N*.

If *S*_{c} is non-empty, we can again use that π(*S*) = π(*S*_{a})π(*S*_{b})π(*S*_{c}) = π(*S*_{a})π(*S*_{b})(*N* / 2 + 1), where we can take *N* / 2 + 1 out of the sum and get a similar sum as for even *N* — the only change is that if |*S*_{a}| = *i*, then |*S*_{b}| = *K* - 1 - *i* and, for fixed *i*, the sum is (*N* / 2 + 1) *P*(*N* / 2, *i*) *P*_{2}(*N*, *K* - 1 - *i*).

Again iterating over all *i* from 0 to *K* - 1, we get a formula for odd *N*:

$$P(N, K) = \sum_{i = 0}^{K} P(N/2, i)\, P_2(N, K - i) + (N/2 + 1) \sum_{i = 0}^{K - 1} P(N/2, i)\, P_2(N, K - 1 - i)\,.$$

**Calculating P_{2}(N, K) **

Let's expand the sum, writing each element of *S* as *N* + 1 - *j* with *j* ∈ *S*_{N / 2}:

$$P_2(N, K) = \sum_{S \subseteq S_{N/2},\ |S| = K}\ \prod_{j \in S} (N + 1 - j) = \sum_{S}\ \sum_{S' \subseteq S} (-1)^{|S'|} (N+1)^{K - |S'|}\, \pi(S')\,.$$

It's clear that if |*S*'| = *K* - *i*, then the total contribution of *S*' to this double sum is

$$(-1)^{K - i} (N+1)^{i}\, \pi(S') \cdot \left\langle S' \subseteq S \subseteq S_{N/2},\ |S| = K \right\rangle,$$

where $\langle\cdot\rangle$ denotes the number of sets *S* (of size *K*) satisfying the condition.

How to count that number? We can just exclude set *S*' from all *S* and *S*_{N / 2} (since they all contain *S*'); then, we can see that we just need to count the number of subsets of *S*_{N / 2} \ *S*' of size *i*. That's obviously just $\binom{N/2 - K + i}{i}$, so

$$P_2(N, K) = \sum_{i = 0}^{K} (-1)^{K - i} (N+1)^{i} \binom{N/2 - K + i}{i} P(N/2, K - i)\,.$$

**Binomial coefficient**

The binomial coefficient has kinda big arguments, no? We can't just pre-compute a Pascal triangle or factorials, but *i* is sufficiently small, so we can just use one of the most basic formulas for binomial coefficients

$$\binom{n}{i} = \frac{n (n - 1) \cdots (n - i + 1)}{i!}$$

and so, for given *N*, *K*, compute the necessary binomial coefficients along with corresponding terms in the sum for *P*_{2}(*N*, *K*). It's useful to have modular inverses precomputed; here, we can use that *M* is a prime larger than *K*, so by Fermat's little theorem, $i^{-1} \equiv i^{M - 2} \pmod{M}$.

**Complexity**

We now have a fairly simple way to compute *P*(*N*, *K*) and *P*_{2}(*N*, *K*) in *O*(*K*) time with some pre-computation. The important thing is that thanks to integer division, the *N* in the argument can only be *N* from the input divided by powers of 2; since there are just *O*(log *N*) such powers that don't lead to the trivial case *N* = 0, and with fixed *N*, we only spend *O*(*K*^{2}) time on computing *P*() and *P*_{2}() for all possible *K*, the complexity is *O*(*K*^{2} log *N*).

The limits are too tight, though — the biggest problem is there are a lot of modulo operations that take a lot of time. One possible improvement is taking the modulo in the sums for *P*() and *P*_{2}() only after every 4 additions; that gives me a worst-case runtime of around 2.1 seconds (so close to TLE T_T). Also, we can precompute the whole terms $(N+1)^{i}\binom{N/2 - K + i}{i}$, which saves us some modulo operations compared to precomputing $(N+1)^{i}$ and $\binom{N/2 - K + i}{i}$ separately. Code

Badum badum...

The Internet Problem Solving Contest takes place again in 2014! The registration's started (those of you who competed last year should've gotten an e-mail about it) and will be open until the end of the contest.

The contest will take place from ~~15.6.2014 10:00 UTC~~ 15.6.2014 11:00 UTC to ~~15.6.2014 15:00 UTC~~ 15.6.2014 16:00 UTC and will be preceded by a short practice session.

You can find the registration form, archive and everything at the contest site.

The rules can be found at the contest site or in my blog post about last year's IPSC. That's why this post is short — you can just read that one and replace the dates.

Also, if you think one blog post isn't a good enough emphasis, here are two more :D

Here is a summary of some registered teams by CF handles. It's not very long, but better than nothing.

Team name | Member 1 | Member 2 | Member 3 | Place |
---|---|---|---|---|
Zenith | MLGuy | flashmt | __tutankhamun__ | 39 |
MSU Trinity | Zlobober | sankear | malcolm | 31 |
cutie, dropout & useless woman | vadimmm | Rubanenko | baba_beda | 113 |
XZ Team | eatmore | AlexFetisov | winger | 26 |
KPZ | eduardische | popoffka | Alex_2oo8 | 28 |
Break a leg | Olja | Oleg805 | andgein | 270 |
Charles University Legion | fhlasek | k21 | simsa.st | 12 |
SPb SU 4 | Dmitry_Egorov | PavelKunyavskiy | yeputons | 9 |

And separately for single-person teams.

Team name | Member 1 | Place |
---|---|---|
Alexander Udalov | udalov | 28 |
duolCauqA | VVan | 170 |
Xellos | Xellos | 15 |
Snowbear | marat.snowbear | 44 |
Futures Leaguer | gs12117 | 12 |

This is a continuation of my blog post about day 1.

You can find the constraints and samples in the original problem statements (in Slovak, but that doesn't matter with numbers :D).

**UPD2: The contest is in Gym now (both days merged into 1 contest, one problem omitted due to being too theoretical). It only took a year!**

The last USACO contest, that is, US Open, takes place this weekend. You can start the contest anytime in the 3-day window starting at April 4th.

Well, at least it's not during the next weekend (Crazy Weekend of April 2014, there's one every month :D).

You'll be able to access the contest from the USACO front page. You need to register for an account on the site to compete.

I hope I'll be able to post my solutions again here after the contest ends. I also hope that I'll be able to compete at all, because I'll be who knows where (I certainly don't :D) during the weekend and I wonder if there'll be Internet connection...

Also note that this contest lasts 5 hours, unlike the previous ones. For the same number of problems.

Problems and solutions (just gold so far; post the silver/bronze problems if you have access to them):

Given *N* ≤ 10^{5} integers in range [1, 8] placed on integer points of the *x*-axis and an integer *K* ≥ 2, find the longest interval that satisfies the following conditions:

- at least *K* distinct integers occur in the interval,
- all integers that *do* occur in it must occur there an equal number of times.

The interval must start and end at some of the given points. Print the maximum length of such an interval, or -1 if there's none.

If all integers must occur in the interval, the problem can be solved easily in *O*(*NM* log *N*) time (with *M* = 8 possible values). This is basically problem ABCSTR of Codechef; read its editorial for further explanation.

In this case, however, a subset of integers from the range [1, 8] could occur in the chosen interval. Ok, why not just run the algorithm on all subsets? :D We only need to count the differences as in ABCSTR for integers from the present subset, and we'll remove all entries from the `map<>` whenever we encounter an integer that's not in that subset — this way, we can make sure that none of these integers can occur in the selected substring. We get complexity *O*(2^{M}*NM* log *N*), which isn't particularly great.

We can get rid of the logarithmic factor by storing hashes of `vector<>`s in an `unordered_map<>` (actually, we can't get rid of it in the contest without coding an entire hash table... shame on USACO for not updating the compiler). A polynomial hash of the elements of the `vector<>` by 2 moduli (10^{9} + 7 and 10^{9} + 9, for example) is strong enough.

We can also get rid of the factor *M* by updating hashes in *O*(1). The only 2 updates we need to do are: add 1 to one element of the vector; subtract 1 from all its elements. For polynomial hashes ($h(A) = \sum_{i} (A_i + N)\, p^{i-1}$ of vector *A*, where the + *N* is in order to have all terms non-zero and thus remove obvious collisions), these operations correspond to adding *p*^{i - 1} and subtracting $\sum_{i} p^{i-1}$ (you don't have to calculate that, just take the difference of hashes of *A* filled with 1-s and filled with 0-s).

Notice that so far, we can't check if the selected interval actually contains *K* distinct integers. We can separately calculate for each right endpoint of an interval the largest possible left endpoint for which this condition is satisfied. That can be done by processing points in sorted order and, for each integer, storing its rightmost occurrence in a `map<>`; the rightmost left endpoint is the *K*-th last of these occurrences and can be found by iterating over the `map<>`. Adding a point only leads to changing 1 entry of the `map<>`, so this takes *O*(*M*) time per point (*O*(*NM*) for this whole precomputation). Afterwards, we can easily check if some interval contains at least *K* distinct integers.

This leads to a more efficient algorithm. Notice that any point can only be the right endpoint in *O*(*M*) different subsets — if you decrease the left endpoint, you can only add integers to the subset, which can be done just *O*(*M*) times. The above mentioned algorithm gives us an easy way to compute these subsets, even. Similarly, any point can only be the left endpoint in *O*(*M*) different subsets, which can be found by running that same algorithm in the opposite direction.

Let's not try subsets separately, but keep 2^{M} `unordered_map<>`s, in which we'll store the hashes of `vector<>`s.

We need a way that allows us to modify hashes quickly — for example, the *i*-th element of the `vector<>` is the number of occurrences of integer *i* among the first *j* points if it isn't in the subset; otherwise, it's the difference between the number of occurrences of *i* and of the smallest integer in the subset. Note that when adding integers into the subset, these numbers change, but only all in the subset by the same number *c*, which makes polynomial hashes rather easy to modify — decreasing all numbers in the subset *S* by *c* means decreasing the hash by $c \sum_{i \in S} p^{i-1}$, where the sums $\sum_{i \in S} p^{i-1}$ for all subsets can be pre-computed.

Try all subsets which have *j* as the right endpoint and contain at least *K* elements, in the order in which they go when we move the left endpoint to the left; for each of them, find out if there was some `vector<>` before that could occur in that subset (was placed in the hash-map of that subset) and was identical to the one we have now — such identical vectors correspond to balanced intervals. Then, add our current `vector<>` to the hashmaps of all subsets that can have the (*j* + 1)-st (think why not *j*-th) point as their left endpoint.

The resulting complexity is *O*(2^{M} + *NM*). There are, of course, simpler algorithms where you don't use hashes or clever updates of them at the cost of larger powers of *M*.

There's a laser placed at point (0, 0) in a plane, shining in the direction of positive *y*-axis, and *N* ≤ 10^{5} mirrors (reflecting on both sides) placed at integer points and tilted at 45° to the coordinate axes, that is, as '/' and '\'. The laser beam reflects from these mirrors. You should add one such mirror so that the laser beam passes through a given point (*B*_{x}, *B*_{y}). There are also additional restrictions:

the laser beam can't pass through the point (0, 0) again (this condition is satisfied in the initial configuration),

the new mirror can't be placed at the same point as one of the original ones.

It's also guaranteed that the laser beam doesn't pass through (*B*_{x}, *B*_{y}) in the initial configuration. Print the **number of points** at which the new mirror can be placed. Coordinates can be large, up to 10^{9} in absolute value.

It's obvious that the laser beam will always be parallel to one coordinate axis.

Store the mirrors in every row (equal *y*-coordinate) in sorted order (by *x*-coordinate), and in every column (equal *x*-coordinate) also in sorted order (by *y*-coordinate). Trace the beam's path by remembering from which point in which direction it went and finding the first mirror in its path, until you find out that it started to loop or went out of bounds.

Trace back the beam's path that could lead to it hitting point (*B*_{x}, *B*_{y}) from each of the 4 possible directions. This time, stop when it starts to loop (including hitting point (*B*_{x}, *B*_{y}) again), goes out of bounds or hits point (0, 0).

We've obtained line segments of 2 types — "real" and "desired" — and 2 orientations — vertical and horizontal. The new mirror can and should be placed at the intersection of any two segments of different types and different orientations, because we can pick the mirror's tilt so that it turns the "real" beam by 90° and sends it along the "desired" one into point (*B*_{x}, *B*_{y}).

The problem now reduces to a classic "compute the number of intersections between vertical and horizontal segments". This can be solved by compressing *y*-coordinates, sorting and sweeplining the segments by *x*-coordinate. Then, iterate over the vertical segments and remember, in a Fenwick tree, how many horizontal segments cover each *y*-coordinate; the number of horizontal segments that intersect a vertical one is then a range sum. You can keep track of the horizontal segments to add using 2 pointers and of the ones to remove using a `map<>` of their left endpoints, or just replace each horizontal segment by 2 half-segments which add 1 and -1 to the Fenwick tree.

Time complexity: *O*(*N* log *N*).

You're given a rooted tree of *N* ≤ 2·10^{4} vertices. Each vertex can contain a digit from 0 to 9, inclusive. You're also given *M* ≤ 5·10^{4} rules as pairs (vertex, 5-digit string); string *s* at vertex *x* says that if you read the first 5 letters from *x* (inclusive) to the root, you have to get a string different from *s*.

The tree is rooted at vertex 1 and vertex *i* has parent *p*(*i*) < *i*.

Print the number of ways in which you can't assign digits to vertices of the tree (i.e. such that at least 1 of the *M* rules is broken), modulo 1234567.

Instead of computing the number of ways in which we can't assign the digits, we'll compute the number of ways in which we can; the answer is then 10^{N} minus that (there are 10^{N} ways to assign digits, in total).

This can be solved by dynamic programming. Let *DP*[*i*][*j*] be the answer for the subtree rooted at vertex *i*, in case the 4 digits above that vertex are read as *j*, but this time from top to bottom. There are at most 10 digits we could put in vertex *i*; for each digit *d* that we can put there, the number of ways to fill the subtree is the product of *DP*[*x*][(10*j* + *d*)%10000] over all sons *x*, because we can fill the subtrees of sons independently and the last 4 digits read on the path from the root to each son will be (10*j* + *d*)%10000. Then, *DP*[*i*][*j*] is the sum of these results over all *d*. We can then find the final answer in *DP*[1][*j*] (we can pick any *j*, it's like creating a path of 4 vertices above the root and choosing the digits in them; there are obviously no rules starting at vertices with depth < 4 below the root, so these digits won't affect anything).

This approach has complexity *O*(10^{5}*N*) — every edge of the tree is used *O*(10^{5}) times.

We can improve this by noticing that in case all digits are allowed, then the first digit of *j* (taken as a 4-digit number with leading zeroes) doesn't really matter. In fact, there are just *M* possible states of DP where it can matter, because each of the *M* rules forbids one digit *d* for one state of the DP. So if we calculated the analogous values *A*[*i*][*j*] for 3-digit numbers *j*, then if some state of the DP has all digits allowed, we can immediately say that *DP*[*i*][*j*] = *A*[*i*][*j*%1000]. If a state doesn't have all digits allowed, we can just compute it the same way we did before.

This approach gives a complexity of *O*(10^{4}*N* + 100*M*), which I dare say could work in a second or two, because it's just DP, with really just that many operations.

It'd be even better if we could always replace taking the modulo by subtraction if necessary, but we can't, because we need to compute products sometimes, not sums. The large modulus (around 10^{6}) also forces us to use 64-bit variables. BTW, this got me thinking about how the Chinese Remainder Theorem can sometimes be useful — if the modulus is composite, we can decompose it into prime powers, compute partial answers modulo each of them and reconstruct the final answer from the partial ones using CRT. In this case, 1234567 = 127·9721, which would allow us to use ints, but I doubt it'd be actually helpful.

I don't know USACO's time/memory limits, but I suppose this would still need memory optimization. If the tree was thin enough (there were few vertices in any depth), the DP could be done by decreasing depth, where we'd remember just the answer for states of vertices in the last 2 depths, but I'm not sure what to do in a general case.

UPD: The results are up. Optics and Code have really bad results...

This Thursday (27.3.2014), the first (theoretical) day of the national round of Slovak Olympiad in Informatics took place (and I appeared there, much to everyone's surprise :D). So, I bring you the problems and solutions. The second, practical day, can be found here.

**UPD**: The original version of the problems and solutions is up. In case you were curious, final results. In Slovak, of course :D

You can have fun trying to find CF handles of some contestants :D

**UPD2: The contest is in Gym now (both days merged into 1 contest, one problem omitted due to being too theoretical). It only took a year!**

