By Radewoosh, 5 years ago, In English

Hello everybody! A new round is upcoming and I'm honored to be its author. The round will first appear as the onsite round of the Dasha.AI Code Championship, and later we will hold a rated mirror. Everybody is welcome to participate on Codeforces, and I wish good luck to the people in Saint Petersburg and Novosibirsk.

Using the opportunity, I want to thank:

  • mnbvmar for helping with the preparation of a great part of everything. Really, without you, I wouldn't have managed.
  • 300iq for coordination and help with preparation.
  • As always, MikeMirzayanov for taking care of international competitive programming and making such great platforms as Codeforces and Polygon.
  • KAN, zscoder, Lewin, Jeffrey, Darkstalker, Darko and Gtaumaturgo for testing the problems and great advice.
  • Dasha.AI for organizing and sponsoring the championship.

Scoring will appear later.

Good luck and see you during the contest!

UPD1a: Scoring in div2: 500-750-1250-1750-2500-3000

UPD1b: Scoring in div1: 500-1000-1500-2250-(1500+1250)-3500

UPD2: editorial

UPD3: Congratulations to the winners!

In div1:

  1. Um_nik
  2. yosupo
  3. ksun48

In div2:

  1. lqs2o15
  2. rainboy
  3. HoshigakiKisame

In the onsite competition in Saint Petersburg:

  1. aid
  2. vintage_Vlad_Makeev
  3. cookiedoth

In the onsite competition in Novosibirsk:

  1. Timonnable
  2. Pechalka
  3. Maho

»
5 years ago, # |

Let's upvote this blog to make Radewoosh top 1 contributor

»
5 years ago, # |

Will this contest also have some awesome theme (and cool figures)?

»
5 years ago, # |

I think it would then be called a "Replay", not a "Mirror", if it is not happening in parallel with the original contest :)

  • »
    »
    5 years ago, # ^ |

    The Rules for Mirror Contests don't say anything like that. Of course, that's because there are no Rules for Mirror Contests and the term has always been used for contests that don't run in parallel as well.

»
5 years ago, # |

300iq, will the statements be available in Russian??

»
5 years ago, # |

orz orz

»
5 years ago, # |

Is it Rated?

»
5 years ago, # |

Hope for interesting tasks as Radewoosh is the author!

»
5 years ago, # |

Sorry if I misunderstood, but are the problems the same as in the Dasha Code Championship Finals? If they first appear in the Dasha Code Championship Finals, wouldn't some of the finals participants know the problems before this contest starts? Sorry if I'm wrong, my English understanding is poor... Thank you!

  • »
    »
    5 years ago, # ^ |
    Rev. 2

    It's standard for mirror rounds. In the interest of sportsmanship, onsite participants should not participate in the mirror rounds or discuss the problems until the mirror rounds have concluded.

»
5 years ago, # |

First time in Div. 1 qwq...

Scared and worried, but happy.

»
5 years ago, # |

Wow! Subtask again.

»
5 years ago, # |

I hope for ethical and beautiful behavior now

»
5 years ago, # |

I want to do both Div. 1 and Div. 2 at once, so I made two accounts. If problem D of Div. 2 coincides with problem A of Div. 1 and I submit the same code, will my contest be skipped?

»
5 years ago, # |

queueForces

»
5 years ago, # |

Another queueforces :)

»
5 years ago, # |

The queue is soooooo slow: 5 minutes and I still don't know if it's accepted.

»
5 years ago, # |
Rev. 6

Graphforces

»
5 years ago, # |
Comment deleted for violating the Codeforces rules
»
5 years ago, # |

How to solve Div2 D / Div1 A ?

»
5 years ago, # |
Rev. 2

I was one of the testers of the round. Here is a brief description of my solutions for d2A-d1E (couldn't solve d1F T_T):

d2A: The easiest way is to try all possible subsets.
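
A minimal sketch of that brute force, assuming d2A is the "four bags, split them into two groups with equal sums" problem (input handling simplified):

~~~~~
#include <bits/stdc++.h>
using namespace std;

int main() {
    // Four bag sizes; check every subset against its complement.
    array<long long, 4> a;
    long long total = 0;
    for (auto &x : a) { cin >> x; total += x; }
    for (int mask = 0; mask < 16; mask++) {
        long long s = 0;
        for (int i = 0; i < 4; i++)
            if (mask >> i & 1) s += a[i];
        if (2 * s == total) { cout << "YES\n"; return 0; }
    }
    cout << "NO\n";
}
~~~~~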

d2B: Try to make all digits 0 (or 1 for the first digit) from the front. Be careful of the 1-digit case (given in the samples).
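
A minimal sketch of that greedy, assuming the statement is "an n-digit number, change at most k digits, no leading zeros allowed" (which is how I remember d2B):

~~~~~
#include <bits/stdc++.h>
using namespace std;

int main() {
    // Greedily minimize the number from the most significant digit.
    int n, k;
    string s;
    cin >> n >> k >> s;
    if (n == 1) {
        // A single digit may simply become 0.
        cout << (k > 0 ? "0" : s) << "\n";
        return 0;
    }
    if (k > 0 && s[0] != '1') { s[0] = '1'; k--; }   // the first digit must stay nonzero
    for (int i = 1; i < n && k > 0; i++)
        if (s[i] != '0') { s[i] = '0'; k--; }        // every other digit wants to be 0
    cout << s << "\n";
}
~~~~~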

d2C: Try every possible assignment of a number from 1-6 to each node and see how many distinct dominoes you can place.
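
A minimal sketch of that brute force (with n <= 7 there are at most 6^7 ≈ 2.8·10^5 labelings, and m <= 21 edges):

~~~~~
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n, m;
    cin >> n >> m;
    vector<pair<int,int>> edges(m);
    for (auto &[a, b] : edges) cin >> a >> b;

    vector<int> label(n + 1);     // label[v] in 1..6 = which domino half vertex v gets
    int best = 0;

    long long total = 1;
    for (int i = 0; i < n; i++) total *= 6;

    // Enumerate all 6^n labelings with a base-6 counter.
    for (long long code = 0; code < total; code++) {
        long long c = code;
        for (int v = 1; v <= n; v++) { label[v] = int(c % 6) + 1; c /= 6; }

        // Each distinct unordered pair {label[a], label[b]} is one usable domino.
        set<pair<int,int>> used;
        for (auto [a, b] : edges)
            used.insert({min(label[a], label[b]), max(label[a], label[b])});
        best = max(best, (int)used.size());
    }
    cout << best << "\n";
}
~~~~~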

d1A: Note that if >= 2 students have the same set of solved problems, you can take them all without affecting anything. Suppose you take them all, and call the current set S. Now, every other student (with no other student sharing the same set of solved problems) can be taken iff their set of solved problems is a subset of the set of solved problems of one of the guys in S (if not, their set of solved problems would have to be a proper subset of that of some other guy in the group, and then just induct on the size of the set of solved problems).
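
A minimal sketch of this idea with bitmasks (the variable names and input reading are illustrative, not the exact problem format):

~~~~~
#include <bits/stdc++.h>
using namespace std;

int main() {
    // a[i] = bitmask of problems solved by student i, b[i] = their skill value.
    int n;
    cin >> n;
    vector<unsigned long long> a(n);
    vector<long long> b(n);
    for (auto &x : a) cin >> x;
    for (auto &x : b) cin >> x;

    // Masks shared by at least two students form the set S described above.
    map<unsigned long long, int> cnt;
    for (auto x : a) cnt[x]++;
    vector<unsigned long long> core;
    for (auto [m, c] : cnt)
        if (c >= 2) core.push_back(m);

    // A student can join iff their mask is a subset of some mask in S.
    long long sum = 0;
    int taken = 0;
    for (int i = 0; i < n; i++)
        for (auto m : core)
            if ((a[i] | m) == m) { sum += b[i]; taken++; break; }

    cout << (taken >= 2 ? sum : 0) << "\n";
}
~~~~~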

d1B: Relatively standard idea. Build a sparse table such that for each node x, we know the gcd from that node to its 2^i-th parent. Now, for each node u, consider the sequence of gcds on the path to the root. Since the gcd either is 0 or gets divided by >= 2 each time it changes, there are only $$$O(\log 10^{12})$$$ distinct gcds. So, for each node x, we can use $$$O(\log n)$$$ time to determine the range covered by the current gcd and keep moving up to the node where the gcd changes. Complexity: $$$O(n \log n \log 10^{12})$$$.

d1C: Straightforward sqrt decomposition works. We call a node big if it has >= 400 neighbours and small otherwise. For each node x, we maintain h[x], the number of neighbours with more rubles than x. For each big node, we store the list of small neighbours with more rubles than it (the idea being that when we have an update on the big node, these are the only small nodes we need to update). For each update, if the node u is small, we just update all its neighbours and u itself naively, and append u to the lists of all its big neighbours. If node u is big, we update all the small neighbours in the list for u and also update the big neighbours if necessary. One can see that each query can increase the total size of the "lists of small neighbours" by sqrt(n), so the amortized complexity is $$$O(n\sqrt{n})$$$ (treating n~m~q for convenience).

d1D: Pretty straightforward if you have some group theory knowledge. Each shuffling method can be considered as an element of $$$S_5$$$ (the symmetric group on 5 elements). $$$f(l,r)$$$ is literally just asking for the size of the group generated by the $$$l$$$-th to $$$r$$$-th shuffling methods.

Note that there are < 200 (actually around 150) subgroups of $$$S_5$$$. Consider the subgroups generated by the elements starting from l. The sequence of distinct subgroups formed will be short (since each subgroup must be contained in the next). Hence, we can use the same idea as in Div1B: just build a sparse table where each element denotes the subgroup generated by [l, l+2^x-1] and then find the set of segments representing the same subgroup (so the complexity should be something like $$$n \log n$$$ times some small constant).
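
As an illustration of the "size of the generated subgroup" part, here is a tiny standalone closure/BFS over $$$S_5$$$ (just the group-size computation, not the full solution; the example generators are mine):

~~~~~
#include <bits/stdc++.h>
using namespace std;

int main() {
    using Perm = array<int, 5>;                 // p[i] = image of i
    // Example generators: the transposition (0 1) and the 5-cycle (0 1 2 3 4).
    vector<Perm> gens = {{1, 0, 2, 3, 4}, {1, 2, 3, 4, 0}};

    auto compose = [](const Perm &p, const Perm &q) {   // (p*q)(i) = p(q(i))
        Perm r;
        for (int i = 0; i < 5; i++) r[i] = p[q[i]];
        return r;
    };

    // BFS closure starting from the identity; |S_5| = 120, so this is instant.
    set<Perm> group;
    queue<Perm> bfs;
    Perm id = {0, 1, 2, 3, 4};
    group.insert(id);
    bfs.push(id);
    while (!bfs.empty()) {
        Perm cur = bfs.front(); bfs.pop();
        for (const Perm &g : gens) {
            Perm nxt = compose(g, cur);
            if (group.insert(nxt).second) bfs.push(nxt);
        }
    }
    cout << group.size() << "\n";               // these two generators give all of S_5: prints 120
}
~~~~~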

d1E1: You can just use the method from d1E2 to solve this, or you can be dumb like me and try to solve this subtask alone (this solution looks overcomplicated once you see the d1E2 solution). My idea that works for this subtask is that 6C3=20, so we can use a meet-in-the-middle approach. For each half of the left vertices, we brute force all possible graphs (2^18 of them, since there are <= 3*6 edges) and for each graph, compute the set of all possible perfect matchings. Let L[S] denote the probability that a graph with only the first n/2 left vertices can match the set S of subsets of size n/2 of the right-hand side (<= 6C3 possible subsets, so the number of possible S is <= 2^20). Define R[S] similarly, but as the probability that a graph with only the last n/2 left vertices is such that the set S of subsets of size n/2 of the right-hand side can possibly be the set that is NOT matched by any of the last n/2 left vertices. Then, we are looking for sum(L[A]*R[B]) where (A&B)!=0, so we can just do an FWT on the arrays L and R to find this sum.

d1E2: Consider the following way of determining whether the graph has a perfect matching: start from the first vertex, and when we consider the first i vertices, keep track of the set of all subsets of right vertices that could have been matched by the first i left vertices. It turns out that we can also use the exact same method to compute our answer! The key is that there are actually not many possible values of "the set of all subsets of right vertices that could have been matched by the first i left vertices", so if we do something like dp[i][S] = probability that, considering the first i left vertices, S is the set of all subsets of right vertices that can be matched by them, it will still run in time (there are < 200k possible S for each layer, if I'm not mistaken). What's the proof? Generate all possible states by program XD

  • »
    »
    5 years ago, # ^ |
    Rev. 2

    So in div1A the case when several students have the same set of problems was not part of the 20 pretests, was it? ohh boy...

    • »
      »
      »
      5 years ago, # ^ |

      No, there were tests with an answer different from zero, if that is what you are asking about.

      • »
        »
        »
        »
        5 years ago, # ^ |

        Never mind, I'm just dumb. I was thinking that when shrinking the set you might eliminate several elements with equal masks at once, which could themselves form a potentially valid group, but that's not possible, since it would imply that one of them beats the other.

  • »
    »
    5 years ago, # ^ |

    For Div1B I actually used the same algorithm but got TLE :(( How could you do it in $$$O(n \log n \log 10^{12})$$$?

    I had to use binary search, so it was $$$O(n \log^2 n \log 10^{12})$$$.

  • »
    »
    5 years ago, # ^ |

    I solved Div2 E using the same approach but I don't know why I got WA on test 6. Can you please help? code link

    • »
      »
      »
      5 years ago, # ^ |

      Try this test:

~~~~~
6
9 12 4 4 4 8
1 2
2 3
3 4
4 5
5 6
~~~~~

      The real answer is 88, while my code (WA) gets 96.

  • »
    »
    5 years ago, # ^ |

    Problems B and D were very similar; the key insight was the same in both of them.

  • »
    »
    5 years ago, # ^ |

    How would you be able to find the gcd from a node X to the K-th ancestor of X in log n? Shouldn't it be log n (for binary lifting) * log n (for the gcd)?

  • »
    »
    5 years ago, # ^ |

    For d1B, isn't there another log(10^12) factor from the gcd operation itself? I mean, when binary lifting, I take a gcd at each step, so for each node it is O(log n * log(10^12)) to go from one gcd to the next.

    • »
      »
      »
      5 years ago, # ^ |

      In my code, what I did was when I start from one node $$$u$$$, I try to go up 2^18,2^17,...,2^0 steps. When I find a step size that still maintains the same gcd, I move up. To check if the current step size maintains the same gcd, I check if the sparse table entry is divisible by the current gcd (special case handling when gcd=0), so I don't need to do another gcd operation for every step size.

  • »
    »
    5 years ago, # ^ |

    For Div1B, we only need to maintain the list of ancestors where the gcd decreases, removing the $$$\log n$$$ part.

    • »
      »
      »
      5 years ago, # ^ |

      You're right, we can just maintain the gcd list as we move down the tree. This indeed seems easier (I got too accustomed to doing binary lifting lol).

    • »
      »
      »
      5 years ago, # ^ |

      Can you explain a little bit more, please?

      • »
        »
        »
        »
        5 years ago, # ^ |

        Let's solve the problem on an array. Assume you have this array:

        6 9 12 30 24 36

        Imagine we are at 36 and we want to compute the gcd for every suffix, the gcd array will be:

        3 3 6 6 12 36

        For repeated values, we can easily compress them by storing, for each distinct gcd, the maximum starting index at which it appears. So what we store would be something like:

        (3, 1) (6, 3) (12, 4) (36, 5)

        When you add a new value, you can just iterate through the compressed gcd list and update it. Assume we add 18; the list will be:

        (3, 1) (6, 3) (6, 4) (18, 5) (18, 6)

        And then just remove (6, 3) and (18, 5).
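
        If it helps, here is a minimal sketch of this compressed suffix-gcd list on a plain array (same values as in the example above; the tree version keeps such a list along the root-to-node path):

~~~~~
#include <bits/stdc++.h>
using namespace std;

int main() {
    vector<long long> a = {6, 9, 12, 30, 24, 36, 18};

    // Each entry is {gcd of a[start..i], largest start index giving that gcd}.
    vector<pair<long long, int>> suf;
    for (int i = 0; i < (int)a.size(); i++) {
        vector<pair<long long, int>> nxt;
        for (auto [g, idx] : suf) {
            long long ng = gcd(g, a[i]);
            // Merge consecutive equal gcds, keeping the largest start index.
            if (!nxt.empty() && nxt.back().first == ng) nxt.back().second = idx;
            else nxt.push_back({ng, idx});
        }
        if (!nxt.empty() && nxt.back().first == a[i]) nxt.back().second = i;
        else nxt.push_back({a[i], i});
        suf = nxt;

        // The list has only O(log(max a)) entries, so each step is cheap.
        for (auto [g, idx] : suf) cout << "(" << g << "," << idx << ") ";
        cout << "\n";
    }
}
~~~~~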

  • »
    »
    5 years ago, # ^ |
    Rev. 4

    There is an easier solution for problem D1C.

    Let's make a directed edge from vertex $$$u$$$ to vertex $$$v$$$ if $$$u$$$ earns more money than $$$v$$$ and they dislike each other. A salary revision of the $$$v$$$-th employee changes the direction of all edges that currently end in vertex $$$v$$$. We can maintain the list of these edges for each vertex and recalculate the answer by iterating through the list for the current vertex.

    Let's prove that the whole algorithm works in $$$O(n \cdot \sqrt{n})$$$ (we assume that $$$O(n) = O(m) = O(q)$$$).

    We can note that the number of iterations is equal to the number of edge direction changes. Let $$$x_{i}$$$ be the number of salary revisions of the $$$i$$$-th employee. The edge $$$(u, v)$$$ changes its direction at most $$$O(\min(x_{u}, x_{v}))$$$ times. Let $$$S_{i}$$$ be the set of vertices $$$v$$$ such that $$$x_{v} \geq i$$$. We can note that the sum of these minimums over the edges is equal to the sum of the numbers of edges in the subgraphs induced by the $$$S_{i}$$$. For each such subgraph, the ratio between the number of edges and the number of vertices can't be more than $$$O(\sqrt{n})$$$. Also, the sum of the sizes of the $$$S_{i}$$$ is exactly the total number of queries ($$$O(n)$$$), so the sum of minimums is at most $$$O(n \cdot \sqrt{n})$$$.
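
    A minimal sketch of this solution, assuming the statement I remember (person $$$i$$$ initially earns $$$i$$$ rubles, each query makes person $$$v$$$ the richest, and the answer after each change is the number of directed two-edge paths, i.e. $$$\sum_v \mathrm{indeg}(v) \cdot \mathrm{outdeg}(v)$$$):

~~~~~
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n, m, q;
    cin >> n >> m >> q;
    vector<vector<int>> in(n + 1);               // in[v] = neighbours currently richer than v
    vector<long long> indeg(n + 1, 0), outdeg(n + 1, 0);

    long long ans = 0;
    auto sub = [&](int v) { ans -= indeg[v] * outdeg[v]; };
    auto add = [&](int v) { ans += indeg[v] * outdeg[v]; };

    for (int i = 0; i < m; i++) {
        int u, v;
        cin >> u >> v;
        if (u < v) swap(u, v);                   // initially salary = index, so u is richer
        sub(u); sub(v);
        outdeg[u]++; indeg[v]++;                 // edge u -> v
        in[v].push_back(u);
        add(u); add(v);
    }
    cout << ans << "\n";

    while (q--) {
        int v;
        cin >> v;
        // v becomes the richest: every edge that pointed into v now points out of v.
        sub(v);
        for (int u : in[v]) {
            sub(u);
            outdeg[u]--; indeg[u]++;             // u -> v becomes v -> u
            in[u].push_back(v);
            add(u);
        }
        outdeg[v] += (long long)in[v].size();
        indeg[v] = 0;
        in[v].clear();
        add(v);
        cout << ans << "\n";
    }
}
~~~~~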

»
5 years ago, # |

How to solve C (div 2)?

  • »
    »
    5 years ago, # ^ |

    More specifically, is there a better way to do it other than trying every possible arrangement?

    • »
      »
      »
      5 years ago, # ^ |
      Rev. 2

      I don't know if my solution is perfect, but this was my thought process:

      • If N <= 6, then you can always put a domino on every edge, so the answer is M.
      • If there is more than one component, you can also put dominoes on all edges.
      • Now comes the case when all 7 nodes are in one component. Two of them must get the same number. So use a double loop to choose which two nodes should have the same number (let's say 1). Mark all the edges these two nodes are incident to in a set (we can assign them dominoes like 1-2, 1-3 and so on); the answer contribution is the size of the set. If they have an edge to one another, we can also add a domino there (1-1). And of course, we can add dominoes to the edges between all other nodes except these two.
      • »
        »
        »
        »
        5 years ago, # ^ |

        Sounds like the model solution. :)

        • »
          »
          »
          »
          »
          5 years ago, # ^ |

          I put values from 1 to n in decreasing order of degrees of nodes, and then if there is a node without a value, I brute forced its value. (and that was when I realised I could have just brute forced the value of all the nodes...)

»
5 years ago, # |

Were Div 1 E1 and E2 based on Hall's theorem?

»
5 years ago, # |

The test cases for problem B Div2 are quite strong.

»
5 years ago, # |

suck contest

»
5 years ago, # |

Solved problem E and could not press the submit button in the last second.
Waiting for the system testing to finish so I can submit my solution.
If it is correct, then I am gonna kill myself.

»
5 years ago, # |

Am I the only one who thought Div. 2 E was about finding the sum over all pairs??? And didn't notice the ancestor part :(

  • »
    »
    5 years ago, # ^ |

    The setter underestimated my dumbness and didn't put this in neon in the statement. I ended up losing more than an hour until I found out that only ancestors count.

»
5 years ago, # |

First, the full input needs to be read. If the code can't even read the full input, how can the problem be accepted?! Whether the answer is correct or not is a later priority... I got -50 for this in the hacking phase... ****

  • »
    »
    5 years ago, # ^ |

    No one requires solutions to read the full input. If there is already enough information to output the answer and finish, that is still okay.

»
5 years ago, # |

When can I submit my code? Is system testing running now?

»
5 years ago, # |

What is the answer for D2-D for the case A = 5, 4, 1 and B = 5, 5, 5?

»
5 years ago, # |
Rev. 5

[deleted]

»
5 years ago, # |

If I solved a problem after the contest ended and I want to check whether it is correct, should I wait until system testing is over?

»
5 years ago, # |

So, is it sufficient in d1C to simply store, for every node, the list of neighbors with a greater value? I couldn't come up with any test case with complexity worse than $$$O(q \sqrt m)$$$ for such a solution, so I submitted it. Any counterexamples?

  • »
    »
    5 years ago, # ^ |

    I have the same solution, but came to it in a slightly different way. I say that each edge is directed from the higher vertex to the lower one, and sometimes we want to flip it. Suppose $$$n = m = q$$$ (for simplicity); it seems that the number of such flips will be $$$O(n\sqrt n)$$$.

    The proof is as follows. Consider $$$w_v$$$, the number of occurrences of $$$v$$$ in the queries. A vertex is "light" if $$$w_v < \sqrt n$$$, and "heavy" otherwise. Now the following is true: the edge $$$u \rightarrow v$$$ will be flipped at most $$$O(\min(w_u, w_v))$$$ times, since consecutive flips of the same edge must alternate between updates of $$$u$$$ and updates of $$$v$$$. So, the edges with at least one "light" endpoint will be flipped no more than $$$O(\sqrt n)$$$ times each. Now consider the edges between two "heavy" vertices. Remove the "light" vertices; the remaining graph has $$$O(\sqrt n)$$$ vertices, so at one query we will flip no more than $$$O(\sqrt n)$$$ such edges. So, the total number of flips is $$$O(n\sqrt n)$$$.

  • »
    »
    5 years ago, # ^ |

    Let's prove the complexity. Say that nodes with degree less than K are light and the others are heavy. Updates of light nodes add no more than nK in total. Now consider one heavy node. Between two consecutive updates of it, the number of nodes updated by the second one is no more than the number of queries in between, because all these nodes must have been touched in between (otherwise nothing changes for them). Thus, for any node, all its updates except the first one sum up to n. The first updates are no more than m in total.

    Thus, if we set $$$K=\sqrt{n}$$$, we get $$$m+n\sqrt{n}$$$ in total.

  • »
    »
    5 years ago, # ^ |

    I don't know how to prove it (or even whether it is correct), but I think the complexity of that strategy is $$$O(n + m + q)$$$ amortized. Do you know a test that proves it wrong?

    • »
      »
      »
      5 years ago, # ^ |

      A complete graph on $$$\sqrt{m}$$$ vertices, and queries 1 2 $$$\ldots$$$ $$$n$$$ 1 2 $$$\ldots$$$. Each query then takes $$$\sqrt{m}$$$ time.

    • »
      »
      »
      5 years ago, # ^ |

      Yes. Consider the complete graph on $$$n$$$ vertices with $$$\frac{n\cdot(n-1)}2$$$ edges, and $$$q = n^2$$$ queries $$$n\ n-1\ \dots\ 1\ 2\ 3\ \dots\ n-1\ n\dots$$$. The number of flips here is $$$O(n^3) > O(n + m + q) = O(n^2)$$$, because each block of $$$n$$$ queries flips all the edges.

  • »
    »
    5 years ago, # ^ |

    Let's look at any $$$\sqrt{m}$$$ consecutive queries; they touch at most $$$\sqrt{m}$$$ distinct vertices. Each of the at most $$$m$$$ edges not inside this group of vertices changes at most once, plus at most $$$\sqrt{m}$$$ edges inside the group can change per query, so the total complexity is $$$O(q \sqrt{m})$$$.

  • »
    »
    5 years ago, # ^ |

    Another way to prove it is to consider a potential function. Call a node heavy if it has at least $$$\sqrt{m}$$$ edges. Direct each edge from higher node to lower node. Let the potential be the total number of edges to a heavy node.

    When we access a light node, we change the direction of at most $$$\sqrt{m}$$$ edges and do at most $$$\sqrt{m}$$$ work, so we use at most amortized $$$O(\sqrt{m})$$$ time.

    When we access a heavy node, say we do $$$k$$$ work. Then the potential function decreases by $$$k$$$, but can increase by at most $$$\sqrt{m}$$$ (since there are at most $$$\sqrt{m}$$$ heavy nodes). Therefore the time used is $$$O(\sqrt{m} - k + k) = O(\sqrt{m})$$$.

»
5 years ago, # |

How did you solve Div2 C (Anadi and Domino)?

  • »
    »
    5 years ago, # ^ |

    Try all possible permutations of [1-6] over the first 6 vertices. Also test all possible values for the 7th vertex.

    Iterate through the edges and maintain a set of the already used dominoes. Return the size of the set.

  • »
    »
    5 years ago, # ^ |

    I just tried all 6^7 ways of coloring the vertices and tried to add as many dominoes as possible with a simple DFS/BFS; got AC.

»
5 years ago, # |
Rev. 2

First time learning that shifting an unsigned long long by 64 bits is undefined behavior ...

  • »
    »
    5 years ago, # ^ |

    Is it? How does it work? Is shifting by 63 bits ok?

    • »
      »
      »
      5 years ago, # ^ |
      Rev. 2

      Yup. It's only defined when you're shifting by [0, sizeinbits-1] bits*.

      In practice (but you should never, ever, ever rely on this), The Way Our Computers Do It™ is that the processors ignore all but the five least-significant bits of the shift amount (or six if we're shifting 64-bit values): ideone.

      [*] Disclaimer: small integer types will usually be promoted to int before the shift, so it's totally possible to shift short 20 bits to the left. Unless int overflows, of course.
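
      A small illustration of the rule, with a common way to guard the full-width shift explicitly:

~~~~~
#include <cstdint>
#include <iostream>

int main() {
    std::uint64_t x = 1;
    int k = 64;

    // Undefined behavior: for a 64-bit type the shift amount must be in [0, 63].
    // std::cout << (x << k) << '\n';            // don't do this

    // Common safe pattern: handle the full-width shift as a special case.
    std::uint64_t shifted = (k >= 64) ? 0 : (x << k);
    std::cout << shifted << '\n';                // prints 0

    // Shifting by 63 is fine.
    std::cout << (x << 63) << '\n';              // prints 9223372036854775808
}
~~~~~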

»
5 years ago, # |
Rev. 3

I was scared by all the "system test failed" solutions for div2-C. I was completely sure that my solution with no adjacency list + 7 nested loops + (I don't know what I did to save it from TLE) would fail. Surprisingly, it got Accepted. 61168827 (weirdest solution ever)

»
5 years ago, # |

Why can't we choose all three students in the second test case of Div2 D, i.e. 01, 10, 11 (their binary representations)? None of them is better than the other two.

»
5 years ago, # |
Rev. 2

Can I get the full input of the 8th test of d2 D?

»
5 years ago, # |

Congratulations, codelegend!

»
5 years ago, # |

I got a message from the system that my solution 61138460 for problem 1230A significantly coincides with the solutions nafiscode/61138460 and tourist_plus_kan/61141927.

It was really a coincidence. I solved the problem by myself. I didn't take help from anyone. I didn't publish the code anywhere.

  • »
    »
    5 years ago, # ^ |
    Rev. 2

    Are you sure you didn't publish your code? With the wrong settings on ideone, anyone can see your code.

    • »
      »
      »
      5 years ago, # ^ |

      Yes, I used ideone. I always use it, but I didn't share my source code link with anyone. Then how could anyone see my code??

      • »
        »
        »
        »
        5 years ago, # ^ |

        If you left the default visibility setting (which is "Open"), a link to your code is published on the "Recent codes" page.

»
5 years ago, # |

Why did I get TLE using the doubling (binary lifting) algorithm, $$$O(n \log n \cdot \log(\max a_i))$$$, to solve problem B (Div 1)?

  • »
    »
    5 years ago, # ^ |
    Rev. 2

    I looked at your last contest submission, 61172578, and I found a bug that worsened your time complexity to $$$O(n^2)$$$ or something a bit worse. In solve, you've got an incorrect condition testing whether you can use a jump-pointer without changing the gcd. It's not enough to compare your current gcd with the gcd of the values on the jump-pointer (why?).

    Your submission with this condition fixed passes in 1s/4s: 61238834.

    • »
      »
      »
      5 years ago, # ^ |

      Thank you very much. I realized that I should use gcd(gd[j][i], g) for the check instead of g, because there may be a tree chain with all values 0. Thanks again for your help.

»
5 years ago, # |

Really amazing contest! Enjoyed it a lot. Hope you make another one soon :D

»
5 years ago, # |

Can anyone please explain the "check all possibilities" solution for C? Actually, I don't understand the part when n > 6. Can you explain that case a bit? An example would help me understand it better. :) Thanks :)

  • »
    »
    5 years ago, # ^ |
    Rev. 3

    As we know, if n <= 6 we can give every vertex a distinct number, so we can simply place a domino on every edge.

    And what if n == 7? You just need to pick 6 of the 7 vertices and give them distinct numbers (the n <= 6 case).

    Then, for each such assignment, you iterate over the possible numbers for the remaining vertex, relabel it, and count how many distinct dominoes can be placed.

    example: n = 7 m = 9

    • 1 2
    • 2 3
    • 3 4
    • 4 5
    • 5 6
    • 6 7
    • 7 1
    • 6 2
    • 7 2

    So if we just take vertices 1-6 and ignore vertex 7, no domino appears twice.

    For vertex 7, you can relabel it as any of the vertices 1-6, but if you relabel it as 6, only 8 dominoes are used, since the domino (6,2) appears twice:

    • 1 2
    • 2 3
    • 3 4
    • 4 5
    • 5 6
    • 6 6
    • 6 1
    • 6 2
    • 6 2 (unused)

    And if you relabel vertex 7 as 4, you can use all of the dominoes:

    • 1 2
    • 2 3
    • 3 4
    • 4 5
    • 5 6
    • 6 4
    • 4 1
    • 6 2
    • 4 2

    here is my solution: https://codeforces.com/contest/1230/submission/61308257

»
5 years ago, # |

Problem sets from Polish people are like Belgian chocolates :D

»
5 years ago, # |

I am not able to understand the case n > 6 in Div 2, Problem C... Can anyone explain it to me in detail?

  • »
    »
    5 years ago, # ^ |
    Rev. 2

    Assume that each side of a domino denotes a color, so basically you need to find a coloring of the graph that covers the maximum number of edges using the given set of 21 dominoes. You can do this by checking all possible colorings. Check out my implementation here: https://codeforces.com/contest/1230/submission/61163475

    • »
      »
      »
      5 years ago, # ^ |

      I understood the problem... but I am not able to understand the implementation, that is, the else statement in your code... Can you explain to me how you used the map and the iteration?