How to solve L, E, C?

C: The answer is $$$\phi(n) \cdot \binom{n-1}{k-1}$$$, which you can compute in $$$O(k)$$$ time (note that $$$k \le \frac{n}{4}$$$, so it's barely fast enough).
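A minimal sketch of evaluating that formula, assuming the answer is wanted modulo a prime (the modulus 998244353 is my assumption, since the problem statement isn't quoted here); $$$\phi(n)$$$ is found by trial division and the binomial by a falling-factorial product:

```python
def solve_c(n, k, mod=998244353):
    # Euler's totient phi(n) by trial division, O(sqrt(n))
    phi, m = n, n
    d = 2
    while d * d <= m:
        if m % d == 0:
            phi -= phi // d
            while m % d == 0:
                m //= d
        d += 1
    if m > 1:
        phi -= phi // m
    # binom(n-1, k-1) mod p as a falling-factorial product, O(k) multiplications
    binom = 1
    for i in range(k - 1):
        binom = binom * ((n - 1 - i) % mod) % mod
        binom = binom * pow(i + 1, mod - 2, mod) % mod
    return phi % mod * binom % mod
```

The modular inverses via `pow` make the loop $$$O(k \log p)$$$; precomputed inverse factorials would bring it down to a true $$$O(k)$$$.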

E: Compute a maximum matching (of size $$$M$$$) using Edmonds' blossom algorithm. You can then reduce the question of whether there is a vertex cover of size $$$M$$$ to 2-SAT: such a cover must contain exactly one endpoint of each edge in the matching (and no unmatched vertex), so each matched edge yields one boolean variable, and each edge of the graph yields an "at least one endpoint is taken" clause; this is solvable in $$$O(m+n)$$$ time. For the case $$$M+1$$$, also try every unmatched vertex as the extra vertex of the cover, as well as every matched edge from which both endpoints are taken.
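A sketch of the size-$$$M$$$ check under the encoding above (the function names and the Kosaraju-based 2-SAT solver are my own, not from the editorial): each matched edge $$$(u,v)$$$ becomes a variable choosing which endpoint enters the cover, and unmatched vertices are forced out.

```python
def solve_2sat(nvars, clauses):
    """2-SAT via Kosaraju's SCC. Literal 2*i is x_i, 2*i+1 is NOT x_i.
    Returns a satisfying assignment (list of bools) or None."""
    n2 = 2 * nvars
    adj = [[] for _ in range(n2)]
    radj = [[] for _ in range(n2)]
    for a, b in clauses:          # clause (a OR b)
        adj[a ^ 1].append(b)      # NOT a implies b
        adj[b ^ 1].append(a)      # NOT b implies a
        radj[b].append(a ^ 1)
        radj[a].append(b ^ 1)
    order, seen = [], [False] * n2
    for s in range(n2):           # iterative DFS for finish order
        if seen[s]:
            continue
        seen[s] = True
        stack = [(s, 0)]
        while stack:
            v, i = stack.pop()
            if i < len(adj[v]):
                stack.append((v, i + 1))
                u = adj[v][i]
                if not seen[u]:
                    seen[u] = True
                    stack.append((u, 0))
            else:
                order.append(v)
    comp, c = [-1] * n2, 0
    for s in reversed(order):     # second pass: SCCs in topological order
        if comp[s] != -1:
            continue
        comp[s] = c
        stack = [s]
        while stack:
            v = stack.pop()
            for u in radj[v]:
                if comp[u] == -1:
                    comp[u] = c
                    stack.append(u)
        c += 1
    res = []
    for i in range(nvars):
        if comp[2 * i] == comp[2 * i + 1]:
            return None           # x_i forced equal to NOT x_i: unsatisfiable
        res.append(comp[2 * i] > comp[2 * i + 1])
    return res


def cover_of_matching_size(n, edges, matching):
    """Is there a vertex cover of size len(matching)? Such a cover takes
    exactly one endpoint of each matched edge and no unmatched vertex."""
    lit = {}
    for idx, (u, v) in enumerate(matching):
        lit[u] = 2 * idx          # x_idx true  <=> u in the cover
        lit[v] = 2 * idx + 1      # x_idx false <=> v in the cover
    clauses = []
    for u, v in edges:
        if u in lit and v in lit:
            clauses.append((lit[u], lit[v]))
        elif u in lit:
            clauses.append((lit[u], lit[u]))   # v unmatched: force u in
        elif v in lit:
            clauses.append((lit[v], lit[v]))   # u unmatched: force v in
        else:
            return None           # both endpoints unmatched: no such cover
    return solve_2sat(len(matching), clauses)
```

For size $$$M+1$$$ one would rerun this check once per candidate extra vertex (or per matched edge taken whole), as the comment describes.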

How to solve H?

Auto comment: topic has been updated by Um_nik (previous revision, new revision, compare).

The contest is still running for Div. 2 :) It was postponed 3 times, then started, but without problems(!), and then restarted again :) UPD. It's just finished. How to solve Div. 2 Problem H, "Ending by 3"?

Thanks for participation! The link to the editorial is in the post; it should be clickable, but if it isn't, please write me.

How do you solve B?

I'm not sure if it's intended, but problem I can also be solved using the observation that to get the minimum cost to buy $$$i$$$ souvenirs, it always makes sense to keep all $$$i - 1$$$ souvenirs from before and buy one extra (not necessarily at time $$$i$$$). So one can solve it by iteratively picking the next cheapest souvenir and updating the costs. Fast simulation of this procedure can be done with sqrt decomposition and CHT. Complexity is $$$O(N \sqrt{N})$$$.

If that holds, then you can also solve the problem in $$$\mathcal{O}(n C^{1/3})$$$ without assuming that all $$$p_i$$$ are distinct, by grouping items with equal $$$p_i$$$ and noticing that if we have a group of elements with equal $$$p_i$$$, and in the optimal solution to $$$dp[i][k]$$$ we take $$$m$$$ of them, then in $$$dp[i][k+1]$$$ we take $$$m$$$ or $$$m+1$$$ of them.

I didn't notice that all $$$p_i$$$ are distinct during the contest, so this was what I did. I don't know how to prove that that observation holds, though.

You can prove the observation by considering this as a min-cost matching problem. The addition of one new "slot" can only add one new augmenting path, which must consist of taking one new souvenir and shuffling some others around.

What could be "any unproven algorithm of your choice" in problem E? Before, I usually used "many times, take any unmatched vertex and match it with a random neighbor", but it let me down this time.

I'm pretty sure you can't solve max matching just by trying random matchings; there are graphs with few maximum matchings, so finding one randomly is probably impossible.

I think the editorial is referring to randomized algorithms like https://github.com/kth-competitive-programming/kactl/blob/master/content/graph/GeneralMatching.h.
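The KACTL routine linked above is based on the Tutte matrix: fill a skew-symmetric matrix with random values at the positions of the edges, and with high probability its rank equals twice the maximum matching size. A minimal sketch of that rank check (this only returns the matching *size*, not the matching itself, and the prime modulus is my choice):

```python
import random

def max_matching_size(n, edges, p=(1 << 31) - 1):
    """Estimate the max matching size of an undirected graph via the rank
    of a random Tutte matrix over GF(p). Correct with probability >= 1 - n/p."""
    T = [[0] * n for _ in range(n)]
    for u, v in edges:
        x = random.randrange(1, p)
        T[u][v] = x          # skew-symmetric: T[v][u] = -T[u][v]
        T[v][u] = p - x
    # Gaussian elimination mod p to compute the rank
    rank = 0
    for col in range(n):
        pivot = next((r for r in range(rank, n) if T[r][col]), None)
        if pivot is None:
            continue
        T[rank], T[pivot] = T[pivot], T[rank]
        inv = pow(T[rank][col], p - 2, p)
        for r in range(n):
            if r != rank and T[r][col]:
                f = T[r][col] * inv % p
                T[r] = [(a - f * b) % p for a, b in zip(T[r], T[rank])]
        rank += 1
    return rank // 2
```

Randomized in the Monte Carlo sense: a failure can only under-report the rank, so in practice one can repeat the check a couple of times.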

I think it's referring to heuristic algorithms for finding augmenting paths that don't have a good time bound on non-bipartite graphs.

In fact, L can be solved much more simply in Python. The extended Stirling formula $$$\ln(n!) = \frac12\ln(2\pi) + (n + \frac12)\ln{n} - n + \frac1{12n} + O(n^{-3})$$$ works like a charm even for $$$n = 10^4$$$, as the next term of the series is only $$$-\frac{1}{360n^3}$$$. The only problem is that we can't calculate $$$\ln$$$ precisely enough in C++ doubles, so we need an arbitrary-precision decimal type.

Or use a "better formula" (courtesy of Wikipedia):

Of course, with a brute force when $$$k$$$ is small.

The issue is not in the calculation of $$$\ln$$$ itself; the issue is in taking the difference of two such values, which creates an awful relative error (catastrophic cancellation).

But it's good enough if you use Python and tell its `decimal` module to use, say, 50 decimal digits of precision.
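A minimal sketch of that approach with the stdlib `decimal` module at 50 digits (the value of $$$\pi$$$ is pasted in as a constant, since `decimal` has no built-in one):

```python
from decimal import Decimal, getcontext
import math

getcontext().prec = 50  # 50 significant decimal digits for all Decimal arithmetic

# pi to 50 decimal places (decimal provides no built-in constant)
PI = Decimal("3.14159265358979323846264338327950288419716939937510")

def ln_factorial(n):
    """ln(n!) via the extended Stirling series:
    (n + 1/2) ln n - n + (1/2) ln(2 pi) + 1/(12 n);
    the truncation error is about -1/(360 n^3)."""
    nd = Decimal(n)
    return ((nd + Decimal(1) / 2) * nd.ln()
            - nd
            + (2 * PI).ln() / 2
            + Decimal(1) / (12 * nd))
```

Differences such as `ln_factorial(a) - ln_factorial(b)` then keep far more correct digits than a `double` subtraction would, which is exactly the cancellation issue discussed above.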