
### FMota's blog

By FMota, history, 10 months ago

How to solve C and D?
I think I have seen the problem J somewhere before

+28

 » 10 months ago, # |   +8 How to solve E and H? They are both pretty interesting problems.
•  » » 10 months ago, # ^ | ← Rev. 3 →   +39 H: First of all, we can ask n queries to find all leaves of the tree. To check whether a vertex is a leaf, we ask a query containing all n - 1 other vertices; if the answer is n - 1, the vertex we excluded from the query is a leaf.

Then let's root our tree at some leaf. Let L(x) be the set of leaves in the subtree of vertex x. Since there are no vertices with degree 2, for any two vertices x and y we have L(x) ≠ L(y); furthermore, x is an ancestor of y iff L(y) ⊆ L(x). So if we obtain the information about L(x) for every vertex, we can reconstruct the tree.

For the root, L(root) is the set of all leaves. For every other leaf z, L(z) = {z}. So we need to find L(x) only for non-leaves. To check whether a leaf z is in the subtree of a non-leaf vertex y, we may ask the query 3 root y z.

So if the number of leaves is l, we have to ask (l - 1)(n - l) queries to find L(x) for all non-leaf vertices. This won't exceed 2450, so we won't ask more than 2550 queries in total.
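Once the sets L(x) are known, the reconstruction step can be sketched like this (a toy offline version with precomputed leaf sets; the function and example are mine, not the interactor): since ancestry is subset containment, each vertex's parent is the vertex whose leaf set is the smallest proper superset of its own.

```python
# Sketch: reconstruct the rooted tree from the sets L(x), assuming they are
# already known (the interactive queries are replaced by precomputed sets).
# The root leaf itself is left out of the leaf sets to keep containment strict.
def reconstruct(L):
    # L: dict vertex -> frozenset of leaves in its subtree
    parent = {}
    for v, lv in L.items():
        best = None
        for u, lu in L.items():
            if u != v and lv < lu:                # u is a proper ancestor of v
                if best is None or lu < L[best]:  # keep the closest ancestor
                    best = u
        if best is not None:
            parent[v] = best
    return parent

# Example: root 1 with children 2 (internal) and leaf 5; vertex 2 has leaves 3, 4.
L = {
    1: frozenset({3, 4, 5}),
    2: frozenset({3, 4}),
    3: frozenset({3}),
    4: frozenset({4}),
    5: frozenset({5}),
}
print(reconstruct(L))  # {2: 1, 3: 2, 4: 2, 5: 1}
```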
•  » » » 10 months ago, # ^ |   +10 Cool! It is a really simple solution.
•  » » » 10 months ago, # ^ |   0 We had a solution with the same queries, but the second part used a different perspective. For each leaf we know the "path" to the root (only the vertices, not their order). The intersection of any two such "paths" is also a path to the root from some vertex (their lca). And each vertex is the lca of 2 leaves (because there are no vertices of degree 2). So now we have the list of "paths" for all vertices and need to reconstruct the tree, which can be done by a simple dfs.
•  » » » 6 months ago, # ^ | ← Rev. 2 →   0 If the tree is a bamboo, shouldn't the answers for all queries of the first part of the algorithm be n - 1? If not, please explain why.
•  » » 10 months ago, # ^ |   +18 E: In a DAG, every node can be reached from A and can reach B iff A has indegree 0, B has outdegree 0, and indegree ≥ 1 and outdegree ≥ 1 hold for all nodes other than A and B. Now you should select two disjoint edge sets EA, EB such that edges in EA have distinct endpoints and edges in EB have distinct startpoints. This can be found by maximum matching in O(m·sqrt(n)).
•  » » » 10 months ago, # ^ |   +8 Wow, flow on 1000000 edges, that's a new record for me :) By the way, it can be solved without flow.
•  » » » » 10 months ago, # ^ |   +8 I don't understand why it's a new record for you, but bipartite matching on 10^6 vertices should obviously run in time :) Clearing the graph for every TC was a pain, though.
•  » » » 10 months ago, # ^ |   +8 Also, it can be done in O(m), and it can even be done with minimal total cost of edges in O(m) time. Let's add two edges from B to A. Now we only have a condition on degrees. Let's build a bipartite graph with 2 copies of each vertex in the left part (one corresponding to the vertex in EA and one to the vertex in EB) and the edges in the right part. Every vertex in the right part has exactly two edges (to its start vertex's copy for EA and to its end vertex's copy for EB). The problem is to find a perfect (with respect to the left part) matching in this graph. This can be done in two ways. First, every augmenting chain we search for in Kuhn's method has a unique edge at each step, so we can just maintain the end of this chain for each vertex with a disjoint set union. The other way, which seems more beautiful to me, is to think of each degree-2 vertex as an edge between its neighbors. It can be shown that a matching is the same as covering this graph by a set of edges such that no component has more edges than vertices (that is, each component is a tree, or a cycle with trees on it). Such a covering can be found by something like Kruskal's algorithm, and to be honest, it produces exactly the same code as the first method. And if we sort the edges by increasing cost in either of these methods, it leads us to the min-cost solution.
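A rough sketch (names and structure are my own) of the Kruskal-like covering described above: each degree-2 right vertex becomes an edge between its two left neighbours, and an edge is accepted only while its component keeps at most as many edges as vertices (a tree, or a cycle with trees on it):

```python
# Sketch: DSU that accepts an edge only if no component ends up with more
# edges than vertices, i.e. every component stays a tree or unicyclic.
class DSU:
    def __init__(self, n):
        self.par = list(range(n))
        self.size = [1] * n   # vertices per component
        self.edges = [0] * n  # accepted edges per component

    def find(self, x):
        while self.par[x] != x:
            self.par[x] = self.par[self.par[x]]  # path halving
            x = self.par[x]
        return x

    def add_edge(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            if self.edges[ra] + 1 > self.size[ra]:
                return False      # would create a second cycle
            self.edges[ra] += 1
            return True
        if self.edges[ra] + self.edges[rb] + 1 > self.size[ra] + self.size[rb]:
            return False          # both sides already carry a cycle
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.par[rb] = ra
        self.size[ra] += self.size[rb]
        self.edges[ra] += self.edges[rb] + 1
        return True

dsu = DSU(3)
print(dsu.add_edge(0, 1))  # True  (tree)
print(dsu.add_edge(1, 2))  # True  (tree)
print(dsu.add_edge(0, 2))  # True  (one cycle is still fine)
print(dsu.add_edge(0, 1))  # False (edges would exceed vertices)
```

Processing the edges in increasing order of cost turns the same loop into the min-cost variant, as in the comment above.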
 » 10 months ago, # |   +5 Anyone know the intended solution to B? How are you supposed to do the computations quickly after arriving at a formula for the answer?
•  » » 10 months ago, # ^ | ← Rev. 2 →   +15 While I haven't got AC, I got something in python that might pass in a faster language with builtin GMP gcd and some constant optimizations.

Code:

```python
from fractions import Fraction
from fractions import gcd
import operator

def prod(a):
    if len(a) < 50:
        return reduce(operator.mul, a, 1)
    m = len(a)//2
    return prod(a[:m]) * prod(a[m:])

def pow_binomial(n, k):
    if k < 0 or k > n:
        return 0
    def eratosthenes_simple_numbers(N):
        yield 2
        nonsimp = set()
        for i in xrange(3, N + 1, 2):
            if i not in nonsimp:
                nonsimp |= {j for j in xrange(i * i, N + 1, 2 * i)}
                yield i
    def calc_pow_in_factorial(a, p):
        res = 0
        while a:
            a //= p
            res += a
        return res
    ans = 1
    #a = tuple(eratosthenes_simple_numbers(n))
    #for p in a:
    #    ans *= p ** (calc_pow_in_factorial(n, p) - calc_pow_in_factorial(k, p) - calc_pow_in_factorial(n - k, p))
    a = tuple(map(lambda p: p ** (calc_pow_in_factorial(n, p) - calc_pow_in_factorial(k, p) - calc_pow_in_factorial(n - k, p)), eratosthenes_simple_numbers(n)))
    #ans = reduce(operator.mul, a, 1)
    ans = prod(a)
    return ans

def binom(n, k):
    ret = 1
    for i in xrange(k):
        ret = ret*(n-i)//(i+1)
    return ret

def npr(n, k):
    ret = 1
    for i in xrange(k):
        ret = ret*(n-i)
    return ret

def fact(n):
    return npr(n, n)

# Lehmer's gcd algorithm; revised version
DIGIT_BITS = 30
BASE = 1 << DIGIT_BITS

def nbits(n):
    """Number of bits in binary representation of the positive integer n, or 0 if n == 0."""
    return n.bit_length()

def lehmer_gcd(a, b):
    """Greatest common divisor of nonnegative integers a and b."""
    # initial reductions
    if a < b:
        a, b = b, a
    while b >= BASE:
        size = nbits(a) - DIGIT_BITS
        x, y = int(a >> size), int(b >> size)
        # single-precision arithmetic from here...
        A, B, C, D = 1, 0, 0, 1
        while True:
            if y+C == 0 or y+D == 0:
                break
            q = (x+A)//(y+C)
            if q != (x+B)//(y+D):
                break
            A, B, x, C, D, y = C, D, y, A-q*C, B-q*D, x-q*y
        # ...until here
        if B:
            a, b = A*a + B*b, C*a + D*b
        else:
            a, b = b, a % b
        if b == 0:
            return a
        a, b = int(b), int(a % b)
    # final single-precision gcd computation
    while b:
        a, b = b, a % b
    return a

n, m, b = map(int, raw_input().split())
a = list(map(int, raw_input().split()))
reason = [x for x in a if x <= b]
bad = [x for x in a if x > b]
avg_reason = Fraction(0, 1) if len(reason) == 0 else Fraction(sum(reason), len(reason))
avg_bad = Fraction(0, 1) if len(bad) == 0 else Fraction(sum(bad), len(bad))
bads = len(bad)
mm = min(bads, m)
binom_1 = pow_binomial(n, mm+1)
binom_2 = pow_binomial(bads, mm+1)
den = (n+1-bads)*binom_1
if binom_1 == 0:
    ans = m*avg_reason + Fraction(mm, 1)*(avg_bad - avg_reason)
    AA = ans.numerator
    BB = ans.denominator
else:
    #ans = Fraction(m*avg_reason*den + (avg_bad - avg_reason)*((n-mm)*binom_2 - bads*binom_1), den)
    A1 = -(avg_bad.numerator*avg_reason.denominator - avg_reason.numerator*avg_bad.denominator)*(n-mm)*binom_2
    A2 = (avg_bad.numerator*avg_reason.denominator - avg_reason.numerator*avg_bad.denominator)*bads*binom_1
    AA = m*avg_reason.numerator*avg_bad.denominator*den + A1 + A2
    BB = avg_reason.denominator*avg_bad.denominator*den
CC = lehmer_gcd(AA, BB)
CC = abs(CC)
AA = AA // CC
BB = BB // CC
print(AA)
print(BB)
```

Profile (top entries; the many near-zero rows are omitted):

```
ncalls   tottime  percall  cumtime  percall  filename:lineno(function)
     1     0.009    0.009    3.307    3.307  :1()
113046     0.107    0.000    0.107    0.000  gp_saratov_B_2_2.py:14()
339144     0.174    0.000    0.174    0.000  gp_saratov_B_2_2.py:16(calc_pow_in_factorial)
  5431     0.005    0.000    0.008    0.000  gp_saratov_B_2_2.py:47(nbits)
     1     0.724    0.724    0.732    0.732  gp_saratov_B_2_2.py:53(lehmer_gcd)
     2     1.144    0.572    1.752    0.876  gp_saratov_B_2_2.py:6(pow_binomial)
     1     0.545    0.545    3.299    3.299  gp_saratov_B_2_2.py:89(main)
113050     0.326    0.000    0.434    0.000  gp_saratov_B_2_2.py:9(eratosthenes_simple_numbers)
     2     0.161    0.081    0.161    0.081  {map}
     2     0.063    0.032    0.063    0.032  {method 'split' of 'str' objects}
     3     0.041    0.014    0.041    0.014  {raw_input}
```

Let 'avg_bad' be the average cost of an unreasonable item and 'avg_reason' be the average cost of a reasonable item. Let 'bads' be the number of unreasonable items. Then the answer is given by a closed formula (it can be derived with some calculations and [this](https://www.wolframalpha.com/input/?i=sum_%7Bk%3D1%7D%5E%7Bm%7D+binom(b,+k)%2Fbinom(n,+k)) identity). The binomials can be computed quite a bit faster by sieving over primes; this could probably be sped up more by computing the product in divide & conquer fashion. The main bottleneck was reducing the fraction. The builtin python gcd is too slow; I found a faster Lehmer gcd here, but it's still too slow. Loading GMP into c++ does not work on Yandex unfortunately (at least the method that works on spoj doesn't work on yandex), so I couldn't try GMP gcd. I'll try some more stuff tomorrow.

Edit: Got some speedup in pow_binomial by computing the product in a smarter way, now TLE 36 (previously TLE 34).
•  » » 10 months ago, # ^ |   +13 Alright, I got AC in 1.5s with some more optimizations.

Code:

```python
from fractions import Fraction
from fractions import gcd
import operator

def mul2(a, b):
    return (a[0]*b[0], a[1]*b[1])

def prod(a):
    if len(a) < 50:
        return reduce(operator.mul, a, 1)
    m = len(a)//2
    return prod(a[:m]) * prod(a[m:])

def prod2(a):
    if len(a) < 50:
        return reduce(mul2, a, (1, 1))
    m = len(a)//2
    return mul2(prod2(a[:m]), prod2(a[m:]))

def pow_binomial(n, k):
    if k < 0 or k > n:
        return 0
    def eratosthenes_simple_numbers(N):
        yield 2
        nonsimp = set()
        for i in xrange(3, N + 1, 2):
            if i not in nonsimp:
                nonsimp |= {j for j in xrange(i * i, N + 1, 2 * i)}
                yield i
    def calc_pow_in_factorial(a, p):
        res = 0
        while a:
            a //= p
            res += a
        return res
    a = tuple(map(lambda p: p ** (calc_pow_in_factorial(n, p) - calc_pow_in_factorial(k, p) - calc_pow_in_factorial(n - k, p)), eratosthenes_simple_numbers(n)))
    ans = prod(a)
    return ans

def pow_binomials(n, m, k):
    if k < 0 or k > n:
        return (0, 0)
    if k > m:
        return (pow_binomial(n, k), 0)
    def eratosthenes_simple_numbers(N):
        yield 2
        nonsimp = set()
        for i in xrange(3, N + 1, 2):
            if i not in nonsimp:
                nonsimp |= {j for j in xrange(i * i, N + 1, 2 * i)}
                yield i
    def calc_pow_in_factorial(a, p):
        res = 0
        while a:
            a //= p
            res += a
        return res
    def pow_in_binom(p):
        # the common factor k! cancels between the two binomials
        a = calc_pow_in_factorial(n, p) - calc_pow_in_factorial(n - k, p)
        b = calc_pow_in_factorial(m, p) - calc_pow_in_factorial(m - k, p)
        c = min(a, b)
        a -= c
        b -= c
        return (p**a, p**b)
    a = tuple(map(pow_in_binom, eratosthenes_simple_numbers(n)))
    ans = prod2(a)
    return ans

def binom(n, k):
    ret = 1
    for i in xrange(k):
        ret = ret*(n-i)//(i+1)
    return ret

def npr(n, k):
    ret = 1
    for i in xrange(k):
        ret = ret*(n-i)
    return ret

def fact(n):
    return npr(n, n)

# Lehmer's gcd algorithm; revised version
DIGIT_BITS = 30
BASE = 1 << DIGIT_BITS

def nbits(n):
    """Number of bits in binary representation of the positive integer n, or 0 if n == 0."""
    return n.bit_length()

def lehmer_gcd(a, b):
    """Greatest common divisor of nonnegative integers a and b."""
    # initial reductions
    if a < b:
        a, b = b, a
    while b >= BASE:
        size = nbits(a) - DIGIT_BITS
        x, y = int(a >> size), int(b >> size)
        # single-precision arithmetic from here...
        A, B, C, D = 1, 0, 0, 1
        while True:
            if y+C == 0 or y+D == 0:
                break
            q = (x+A)//(y+C)
            if q != (x+B)//(y+D):
                break
            A, B, x, C, D, y = C, D, y, A-q*C, B-q*D, x-q*y
        # ...until here
        if B:
            a, b = A*a + B*b, C*a + D*b
        else:
            a, b = b, a % b
        if b == 0:
            return a
        a, b = int(b), int(a % b)
    # final single-precision gcd computation
    while b:
        a, b = b, a % b
    return a

n, m, b = map(int, raw_input().split())
a = list(map(int, raw_input().split()))
reason = [x for x in a if x <= b]
bad = [x for x in a if x > b]
avg_reason = Fraction(0, 1) if len(reason) == 0 else Fraction(sum(reason), len(reason))
avg_bad = Fraction(0, 1) if len(bad) == 0 else Fraction(sum(bad), len(bad))
bads = len(bad)
mm = min(bads, m)
if n <= mm:
    ans = m*avg_reason + Fraction(mm, 1)*(avg_bad - avg_reason)
    AA = ans.numerator
    BB = ans.denominator
else:
    #binom_1 = pow_binomial(n, mm+1)
    #binom_2 = pow_binomial(bads, mm+1)
    binom_1, binom_2 = pow_binomials(n, bads, mm+1)
    #G = lehmer_gcd(binom_1, binom_2)
    #binom_1 //= G
    #binom_2 //= G
    den = (n+1-bads)*binom_1
    A1 = -(avg_bad.numerator*avg_reason.denominator - avg_reason.numerator*avg_bad.denominator)*(n-mm)*binom_2
    A2 = (avg_bad.numerator*avg_reason.denominator - avg_reason.numerator*avg_bad.denominator)*bads*binom_1
    AA = m*avg_reason.numerator*avg_bad.denominator*den + A1 + A2
    BB = avg_reason.denominator*avg_bad.denominator*den
CC = lehmer_gcd(AA, BB)
CC = abs(CC)
AA //= CC
BB //= CC
print(AA)
print(BB)
#print(CC)
```

Optimizations used:
- Formula involving only two different binomial coefficients and some smaller numbers. (See my post above.)
- Computing the binomials by counting the number of occurrences of each prime.
- Computing the resulting product of prime-powers by binary splitting.
- (new) Cancel primes that occur in both binomials early.
- Use Lehmer gcd.
- Use pypy4, not python2.7.
•  » » » 10 months ago, # ^ |   0 Oh my...You're awesome :)
 » 10 months ago, # |   +10 How to solve problem F? How do we use the constraint of ?
•  » » 10 months ago, # ^ | ← Rev. 10 →   +21 I think we need to use the trick from here: if we choose a random index, then with some probability (which by our condition is bounded below, so checking around 10 random candidates is enough to be confident) it will appear in the final sequence. Then we find its biggest divisor such that at least n - k numbers are divisible by it.

UPD. About the last part: suppose we chose a number N; it has a bounded number of divisors. Suppose its divisors are 1 = d1 < d2 < ... < dK = N and the primes that divide it are p1 < p2 < ... < pt (1 ≤ t ≤ 15). We'll calculate an array cnt with cnt_i being the number of values in our sequence divisible by d_i. As in the problem mentioned above, we can do the following: first, set cnt_i to the number of values whose gcd with N equals d_i. Second, iterate over every i from K down to 1 and every j from i + 1 to K; if d_j ≡ 0 (mod d_i), add the count for d_j into the count for d_i. Now we need to optimise this, because O(K^2) is too much; that's why I introduced the sequence of primes: basically, for every prime p_i and for every j we push cnt_j into the positions of cnt corresponding to the divisors d_j / p_i^r (r ≥ 1, d_j ≡ 0 (mod p_i^r)). This can again be done in linear time per prime, so the complexity drops considerably and in practice is way smaller.

If you find something wrong, please let me know. I hope I didn't miss any easy solution :) .
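The counting step can be sketched as follows; this is a naive O(K^2) pass over the divisor lattice (without the prime-wise speedup described above), with toy values for N and the input:

```python
from math import gcd

# Sketch: for a candidate N, compute cnt[d] = how many array values are
# divisible by d, for each divisor d of N, via gcd buckets.
def divisor_counts(N, values):
    divs = [d for d in range(1, N + 1) if N % d == 0]  # trial division: toy N only
    bucket = {d: 0 for d in divs}
    for v in values:
        bucket[gcd(v, N)] += 1   # d | v and d | N  <=>  d | gcd(v, N)
    cnt = {d: 0 for d in divs}
    for g in divs:               # lift bucket counts down the divisor lattice
        for d in divs:
            if g % d == 0:
                cnt[d] += bucket[g]
    return cnt

vals = [6, 10, 15, 1000000007]
cnt = divisor_counts(30, vals)
print(cnt[5])  # 2: both 10 and 15 are divisible by 5
print(cnt[1])  # 4: everything is divisible by 1
```

The answer for one candidate N is then the largest d with cnt[d] ≥ n - k; the double loop over divisors is what the prime-wise pass above replaces.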
•  » » 10 months ago, # ^ |   +8 I got accepted with the following heuristic. If n is less than 80, just factorize all numbers and solve the problem. Otherwise, take 80 random numbers, factorize them, and collect all divisors that occur in at least 16 of them. For each of these divisors, starting with the biggest, solve the problem in O(number of different numbers) time. The idea is that if we have many big numbers that are divisors of a lot of random numbers but not of n - k numbers, then we probably have a lot of equal numbers in the input. No idea about formal proofs.
 » 10 months ago, # |   +5 C: On a line this problem could be solved with a simple greedy: take the path with the leftmost right end, take its right end as a point, delete all the paths that contain this point, and repeat until we cover all paths. On a tree the same greedy works, except we always take the path with the lowest lca.

D: Let dp_i be the answer for the i-th prefix. Then dp_i = min_{j < i}(max(t_i, dp_j) + 2 * max_{j < k ≤ i}(a_k)). Notice that dp_i is nondecreasing. Then there exists pos such that dp_j < t_i for j < pos and dp_j ≥ t_i for j ≥ pos. We can maintain pos with two pointers. Now we can consider j < pos and j ≥ pos separately. For j < pos we need to calculate min_{j < pos}(t_i + 2 * max_{j < k ≤ i}(a_k)). We can maintain a segment tree where the j-th value is max_{j < k ≤ i}(a_k) and update it easily with a stack of maximums. The case j ≥ pos is almost the same.
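The line version of the greedy for C can be sketched like this (the segments are toy data; the tree version would order paths by the depth of their lca instead):

```python
# Sketch: cover all segments with the fewest stabbing points by always
# taking the leftmost right end among segments not yet covered.
def min_stabbing_points(segments):
    points = []
    last = None
    for l, r in sorted(segments, key=lambda s: s[1]):
        if last is None or l > last:   # segment not hit by the last chosen point
            last = r
            points.append(r)
    return points

print(min_stabbing_points([(1, 3), (2, 5), (4, 7), (8, 9)]))  # [3, 7, 9]
```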
 » 10 months ago, # |   +15 How to solve J?
 » 10 months ago, # |   +10 How to solve J faster than O(n·m^3) or O(n log n · m^2)?
•  » » 10 months ago, # ^ |   +10 I found the solution just after asking: the approach actually runs faster than that, and the problem is almost the same as problem K from http://codeforces.com/blog/entry/55467
•  » » 10 months ago, # ^ | ← Rev. 2 →   +23 (Maybe that's the same thing you described in the previous comment, but) we did it as follows: using divide and conquer, fix the center of the array. Every query is either entirely to the left of the center, entirely to the right, or goes through it. Those to the left and to the right we handle recursively. Now, from the center, calculate dp[l:center] and dp[center:r] for all l and r in O(nm). Then each query through the center can be answered in O(m^2). So the overall complexity is O(nm^2) for the queries (well, O(qm^2), but who cares) plus the divide & conquer part.
•  » » » 10 months ago, # ^ |   0 Isn't it O(qm)? If you let the remainder on the left part be x, then the remainder on the right part is (m - x).
•  » » » » 10 months ago, # ^ | ← Rev. 2 →   0 Thanks. What we actually submitted was O(qm^2), but you are right, it is easier to do it in O(qm).
 » 10 months ago, # |   +10 I: Suppose you can afford a trie on the given strings; then you can try DP on the tree, with DP[i] = number of subsets for the subtree of i, and you have simple recurrences. Our solution is online. We build a tree from the set of strings S, where the parent of a node v is the longest prefix of v that is not v itself and is in S. Basically this is a sort of compressed trie, so you can run the identical DP on it. We sort all strings in lexicographical order (done with a suffix array); this gives a preorder traversal of the tree. With a suffix array you can check whether one given substring is a prefix of another, so you can decide whether string x is an ancestor of string y. Now you can use a stack to build the tree iteratively. Time complexity: O(n log n + q log n).
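The stack construction can be sketched like this (using plain string comparison instead of the suffix array, so it is only illustrative): in sorted order, each string's parent is the deepest string on the stack that is a prefix of it.

```python
# Sketch: build the "longest proper prefix in S" tree from sorted strings.
# Sorted order is a preorder traversal, so a stack of the current root-to-node
# path suffices: pop until the stack top is a prefix of the new string.
def build_prefix_tree(strings):
    parent = {}
    stack = []
    for s in sorted(strings):
        while stack and not s.startswith(stack[-1]):
            stack.pop()
        parent[s] = stack[-1] if stack else None
        stack.append(s)
    return parent

print(build_prefix_tree(["a", "ab", "abc", "ad", "b"]))
# {'a': None, 'ab': 'a', 'abc': 'ab', 'ad': 'a', 'b': None}
```

In the actual solution the `startswith` test is replaced by the suffix-array prefix check, which is what makes it fit the stated complexity.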
 » 10 months ago, # |   0 G: Do a binary search on T. Rearrange the inequality to o_{a_j} ≤ o_{b_j} + T - d_j. This corresponds to an edge b_j → a_j with weight T - d_j when interpreting o_x as the distance to x. Initialize the o as given in the input and replace '?' by 10^9. Run Bellman-Ford; if a negative cycle is found or one of the given o changes, T is too small.

L: Use Dijkstra to compute all edges that lie on some shortest path starting at the capital. Then use the solution of problem C from GP of Poland to get the number of edges that are used in all shortest paths to a given node.

K: I have a solution with rolling hashes and the fact that the query strings can only have few different lengths. Is there something faster?

F passes in 2s if you use bitset and prune divisor candidates that are smaller than the current best or that are divisible by an already discarded candidate when generating the divisors of a recursively from the prime factors.
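For a fixed T, the Bellman-Ford check for G can be sketched roughly as follows (the function, its signature, and the fixed-value handling are my own simplification): each constraint x[a] ≤ x[b] + w is relaxed until a fixed point; the check fails if a given value would have to decrease or relaxation never settles (negative cycle).

```python
# Sketch: feasibility of difference constraints x[a] <= x[b] + w by
# Bellman-Ford-style relaxation from an initial assignment; fixed[i] marks
# values that came from the input and must not change.
def feasible(n, edges, init, fixed):
    x = list(init)
    for _ in range(n):                 # at most n passes
        changed = False
        for b, a, w in edges:
            if x[b] + w < x[a]:
                if fixed[a]:
                    return False       # a given value would have to change
                x[a] = x[b] + w
                changed = True
        if not changed:
            return True                # fixed point reached: constraints hold
    return False                       # still changing after n passes: negative cycle

# x1 <= x0 + 2 and x0 <= x1 - 5 is contradictory once x0 is pinned:
print(feasible(2, [(0, 1, 2), (1, 0, -5)], [0, 10], [True, False]))  # False
print(feasible(2, [(0, 1, 2)], [0, 10], [True, False]))              # True
```

In the binary search, running this check with the edge weights T - d_j tells on which side of the answer the current T lies.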
•  » » 10 months ago, # ^ | ← Rev. 2 →   0 K: You can do it with the same complexity but with a trie and KMP. You build a trie on every word in the set whose length is below the threshold; for every node in the trie you keep the current answer and the last index of an occurrence, then update these values by running over the short substrings of the text; for the rest of the strings in the set you do KMP. I don't know if it's faster; it passed in 275 ms.
•  » » » 10 months ago, # ^ |   0 Actually, you can skip the part with KMP. If you walk the trie via suffix links, you can visit only sqrt(M) terminal nodes (nodes in which some word from the input ends), where M is the sum of the word lengths.
 » 10 months ago, # |   0 Is there any easy-to-code solution for A?
•  » » 10 months ago, # ^ |   +13 Count each triplet at its smallest element. When fixing the smallest element x, the other two elements need to be in [x, x + d], so we can count them with binary searches on the two other arrays. Slight care has to be taken to not count triplets with duplicate elements multiple times.

Code:

```cpp
#include <bits/stdc++.h>  // header restored; the template arguments below were eaten by HTML
using namespace std;
using ll = long long;

signed main() {
    int d, n, m, o;
    while (cin >> d >> n >> m >> o) {
        vector<ll> u(n), v(m), w(o);  // the arrays are assumed to be given sorted
        for (auto &e : u) cin >> e;
        for (auto &e : v) cin >> e;
        for (auto &e : w) cin >> e;
        auto lower_cnt = [&](vector<ll> const &v, ll const &x) {
            return upper_bound(v.begin(), v.end(), x) - v.begin();
        };
        ll ans = 0;
        for (auto &e : u) {
            ll x = lower_cnt(v, e + d) - lower_cnt(v, e - 1);
            ll y = lower_cnt(w, e + d) - lower_cnt(w, e - 1);
            ans += x * y;
        }
        for (auto &e : v) {
            ll x = lower_cnt(u, e + d) - lower_cnt(u, e);
            ll y = lower_cnt(w, e + d) - lower_cnt(w, e - 1);
            ans += x * y;
        }
        for (auto &e : w) {
            ll x = lower_cnt(u, e + d) - lower_cnt(u, e);
            ll y = lower_cnt(v, e + d) - lower_cnt(v, e);
            ans += x * y;
        }
        cout << ans << "\n";
    }
    return 0;
}
```
•  » » » 10 months ago, # ^ | ← Rev. 2 →   0 It may also be done in O(n) with the two-pointers technique instead of binary search (it also helps to take care of equal minimums automatically).
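The two-pointer replacement for the binary searches can be sketched like this (the helper below is mine; it counts, for each x in a sorted array, the elements of another sorted array lying in [x, x + d]):

```python
# Sketch: since xs is sorted, both window boundaries into ys only move
# forward, so all counts are produced in O(len(xs) + len(ys)).
def range_counts(xs, ys, d):
    lo = hi = 0
    out = []
    for x in xs:
        while lo < len(ys) and ys[lo] < x:        # first element >= x
            lo += 1
        hi = max(hi, lo)
        while hi < len(ys) and ys[hi] <= x + d:   # first element > x + d
            hi += 1
        out.append(hi - lo)
    return out

print(range_counts([1, 4, 8], [2, 3, 5, 9], 3))  # [2, 1, 1]
```

Shifting one boundary by a strict inequality gives the duplicate handling mentioned above.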
 » 10 months ago, # |   +51 My problems are E and G. Hope you liked it!
•  » » 10 months ago, # ^ |   +1 E is very cool; G is good, but it isn't fresh. I am sure I have already seen it somewhere.
 » 10 months ago, # |   0 Where can we access the problem statements? Can you please provide me with the link to the problem set?
•  » » 10 months ago, # ^ |   0 It's here. BTW, statements of most Opencup rounds can be found here some time after the contest.
•  » » » 10 months ago, # ^ |   0 Can you please tell me where we can submit our solutions to the above problems?
•  » » » » 10 months ago, # ^ |   -13 You can submit your code to me, I will review your code and give you a verdict (Accepted/ Wrong Answer/ Time limit exceeded/ Presentation error)
•  » » » » 9 months ago, # ^ |   +12 The contest is also in Codeforces gym here where you can submit.