Let's discuss problems.

How to solve 2, 3, 6, 7, 11?





Codeforces (c) Copyright 2010-2018 Mike Mirzayanov


And 4, please :)

`dp[L][R]` = the cost to join the stations of the interval [L,R] at the point (x[L], y[R]). Calculate it as the minimum over K of

`dp[L][K] + dp[K+1][R] + (x[K+1] - x[L]) + (y[K] - y[R])`

The answer is dp[0][n-1] plus the path from (x[0], y[n-1]) to (0, 0).

Hey-ho, the contest is still running!

9?

You need to find the number of primes P (let P be a prime, `2 <= P <= 10^7`) such that N is the smallest number with `Q ^ N % P = 1`.

From Fermat's little theorem, `Q^(P - 1) % P = 1` since P is prime, so N < P. Also, since `Q ^ N % P = 1` implies `Q ^ (2 * N) % P = 1`, and `Q ^ (P - 1) % P = 1`, we know that N is a divisor of P - 1.

If N is not the smallest number with `Q^N % P = 1`, there must be a number D such that D is a divisor of N (just as N is a divisor of P - 1). For each divisor D of N, check if `Q ^ D % P = 1`; if that's true, N is not the smallest number that respects the property. If N is ok and all divisors fail, P is part of the solution for N.

One can check only divisors of the form N/p, where p is a prime divisor of N.

You can also check a divisor only if it is a divisor of both N and P - 1.

In fact, you can get a very fast O(T) solution (T = 10^7) without even doing fast exponentiation (after you precalc primes, of course). Since you search only primes P = k*N + 1, there are only T/N candidates. So you can check each of them in O(N) time with trivial exponentiation, i.e. take 1 and multiply it by Q N times.

Is 6 like a typical "write the most lazy segment tree you can"?

I think you can solve 6 with cascading on a segment tree. You also must keep a strange thing in each node. The lazy part on each node will amortize at the end to O(number of elements in that node).

The lazy part is like: "OK, so on this node every value bigger than X will become X." When doing updates on the first array, some values may become less than X. You keep how many values are bigger than X and the product of the elements lower than X.

The way I implement lazy is like this: if you pass through a node, you propagate the lazy to the children and the current node has no lazy. Here, every node will have an X value, and when propagating the lazy, the value X will only increase, so the overall complexity will be O(size of node).

The lazy part comes from how you compute an array c[i] = max(initial_swag, b[1] .. b[i - 1]).

I didn't implement this, but it seems good. O(NlogN) time and memory.

Don't know if this will help, but it's the last line of the input. It's the only place where I saw this information:

`"It is guaranteed that with each change the older parameter value is strictly smaller than the new value."`

Not only

`"Each change either raises ... or raises ...."`

Raising means increasing.

7?

Max regex size is 16:

`((a|(c|(g|t)))*)`

which matches anything. Can you do better than using a regex implementation for every string?

PS: this gets TLE.

This is the intended solution: check each regex of length < 16 against all strings.

However, your success largely depends on how many candidate regexes you have (there are various ways to prune regexes), and how you check a string against a regex (again, a lot of options here).

I had around 110K regexes, which is a lot.

Also, is C++'s std::regex_match enough to pass the test-data?

Java's java.util.Pattern is enough, but std::regex is very slow.

According to this article, std::regex_match has exponential time complexity. In this problem,

`(((a*)*)t)`

against

`aa..aag`

takes an extremely long time. I don't know whether std::regex_match can get AC, but it must be used carefully.

These are sometimes called "evil" regular expressions. Some examples of slow regex constructions to avoid are given here.

I'm pretty sure that std::regex was not enough while Java's Pattern was enough. Meet the problem where the C++ library sucks while the Java library works =) The right way to match regexes is to build a nondeterministic automaton.

Also, it is rather easy to decrease the number of regexes to 40K by removing 0, pruning closing or "or"-ing a closure, ensuring A < B for (A|B), and removing cases like (A(BC)) in favor of ((AB)C).

Was interested in this problem and solved it in around a week.

Used to generate all possible regexes, excluding "duplicates" and longer equivalents if there are shorter ones. Got TLE.

Here are some ideas which I used to get AC:

Found the minimal number of Kleene stars a regex can contain.

Found the minimal cardinality of sets consisting of different letters; otherwise the regex

`/[<set>]*/`

(in Simon's form) can be used as the shortest one. Later used "abstract" regular expressions consisting of dots (dot = match any letter). The earlier algorithm generated only a few hundred such regexes. So an input set meets only some hundred "abstract" regular expressions, and often only a few dozen of them survive. Later I fed the "abstract" regexes with permuted letters and got a few thousand normal regexes. This increased the speed of the program significantly and still brought TLE.

Finally, remembered that one should pre-compile all regexes (the

`qr`

operator) before using them against plenty of test sets; otherwise they are compiled each time. Got AC in 2.5s. Perl code.

Link to the problems?

How to solve 8?

Let's make a list c so that c[i] = a[i] - d[i].

For any [l,r], you need to make the (r-l+1)/3 players with the lowest c forwards, the (r-l+1)/3 with the highest c defenders, and the other (r-l+1)/3 midfielders.

It can be easily done with a persistent segment tree in O(q*logn*logn).

Isn't that overkill? A binary indexed tree + parallel binary search is enough.

Could you please describe how parallel binary search works?

http://codeforces.com/blog/entry/45578

can you please elaborate on your persistent segment tree?

We want to find the k-th smallest element in the segment [l,r]. We build a persistent segment tree over the values c[i]. Then we binary search for the index of the k-th minimum element, using the persistent tree for the check. Code: https://ideone.com/bk5Qag

There is also another solution, although implementing it during the contest was probably not the best idea.

Define the value of a player to be the difference of his attack and defense. Now sort players by value (and by index in case of ties). From this point build a segment tree on them, and in each node of the tree store all of its players sorted by index, with prefix sums of their values alongside.

Now you can perform queries like "how many players with indices in [L..R) have values in [Vmin..Vmax)" in O(log^2) time using a standard segment tree operation (you have to run a binary search by L and R in each node of the tree). So you can use another binary search to find values V1 and V2 such that the number of players with indices in [L..R) is equal for the value ranges [Vmin..V1), [V1..V2), [V2..Vmax). You can use the prefix sums to find the sum of values in each of the value ranges, so this gives an O(log^3) per query solution.

Now recall that in a standard RSQ segment tree we can do operations like "which maximal prefix of the array has sum less than X" in O(log) time, which is better than doing a binary search by position and calculating the sum on a prefix inside (which is O(log^2)). Now apply this idea to the segment tree above, and you don't need the outer binary search by value. So the solution becomes O(log^2) per query.

Finally, you can use fractional cascading to avoid running the binary search by L and R in each node, so that you need only one binary search at the root node. This gives an O(log) per query solution.

There is a similar/equivalent solution with wavelet trees.

Just modify the k-th element function with some prefix sums to return the sum of the k smallest elements instead.

2: Keep going forward. At the right moment turn right. Keep going forward again. At the right moment stop and dig.

When you keep going forward, you get a sequence of 'c' (closer) and 'f' (farther or same). If you analyze various patterns, you can notice that the run lengths of this sequence have a period of 8. You should stop when you have just received a 'c', you expect to get 'f' next, and the run length of the current 'c' is the maximum possible.

I haven't solved it (failed on TC #2) and tried another approach: go forward if the answer was "c", otherwise go randomly left or right, until you reach a cell from which you can get only "f" answers in all 4 directions. This means you are in the correct cell, or in its mirror cell on the opposite side of the cube. Then you go one circle around the cube with two stops at a c/f boundary, and dig depending on the lengths of the last "f" and "c" runs.

I solved it with the same approach.

11?

Binary search over the answer. Let's check the distance `d`, i.e. check whether `n` segments are enough. It should be obvious that the greedy strategy works, i.e. each segment should be as long as possible. It is easy to see that the best way to approximate some interval `[x0, x1]` is the line that passes through the points `(x0, f(x0) + d)` and `(x1, f(x1) + d)`; say it is the line `y = kx + b`. Then the maximum distance to `f(x)` will be at the point with `x = c / k` or at the ends of the segment. So we can use one more binary search to find the maximum length of each segment, and build them one by one per each test.

There is an O(1) solution.

3?

I tried to access these problems on Yandex, but it said I need a "coordinator login & pass". How can I look at these problems?

We made this cube to solve 2 during the contest, and it was very helpful.