SomethingNew's blog

By SomethingNew, 20 months ago, translation

Sorry for the not-so-great examples and the late editorial. This was our first round, so it was very stressful for us, but we hope you enjoyed the problems nevertheless!

1719A - Chip Game

Hint 1
Hint 2
Hint 3
Tutorial

1719B - Mathematical Circus

Hint 1
Hint 2
Hint 3
Tutorial

1719C - Fighting Tournament

Hint 1
Hint 2
Hint 3
Tutorial

1718A2 - Burenka and Traditions (hard version)

Hint 1
Hint 2
Hint 3
Tutorial

1718B - Fibonacci Strings

General hint
The mathematician's way
The programmer's way
Tutorial

1718C - Tonya and Burenka-179

Hint 1
Hint 2
Hint 3
Hint 4
Tutorial

1718D - Permutation for Burenka

Hint 1
Hint 2
Hint 3
Tutorial

1718E - Impressionism

Hint 1
Hint 2
Hint 3
Solution

1718F - Burenka, an Array and Queries

Hint
Tutorial

Please rate the problems, it will help us make the problems better next time!

Leave feedback
Tutorial of Codeforces Round 814 (Div. 1)
Tutorial of Codeforces Round 814 (Div. 2)

»
20 months ago

In Div1D, how is it easy to see that the suitable $$$d$$$ form a contiguous segment? Can somebody please elaborate?

  • »
    »
    20 months ago

    I think it's by far the hardest part of the entire problem, so the editorial's claim that it's easy is quite baffling.

    But here's how someone explained it to me: by Hall's theorem, for there to be a matching between the points and the intervals, you need, for every set of intervals $$$I$$$, the number of elements of $$$S$$$ inside these intervals to be at least the number of intervals (let's say $$$x(I) \ge N(I)$$$). In other words, your added value $$$d$$$ is valid if and only if it is inside the union of intervals for every set $$$I$$$ such that $$$x(I) < N(I)$$$.

    Now the trick is to realize that sets of intervals whose union is not a contiguous segment don't matter -- if a set whose union is two or more disjoint parts has $$$x(I) < N(I)$$$, then definitely one of the parts violates the condition as well, and that part alone would be a more restrictive condition on the added value $$$d$$$. Therefore the valid values of $$$d$$$ are in the intersection of some contiguous segments, which is a contiguous segment.
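    A tiny worked instance of that last step (my own example, not from the editorial): take $$$S = \{1\}$$$ and the disconnected set $$$I = \{[1,2], [5,6]\}$$$, so $$$x(I) = 1 < 2 = N(I)$$$. But the single part $$$\{[5,6]\}$$$ alone already has $$$x = 0 < 1 = N$$$, and "$$$d$$$ must lie in $$$[5,6]$$$" is strictly more restrictive than "$$$d$$$ must lie in $$$[1,2] \cup [5,6]$$$". So every violated set with a disconnected union is dominated by one of its contiguous parts.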

    • »
      »
      »
      20 months ago

      Wow!! Really nice proof. My approach didn't use this observation; it might be interesting to you.

      Approach: the problem reduces to the following: we have $$$k$$$ intervals and $$$k-1$$$ points, plus one query point, and we have to check whether we can perfectly match them or not.

      The idea is to find the intervals whose removal lets us perfectly match the given $$$k-1$$$ points with the remaining $$$k-1$$$ intervals.

      Let's match the intervals greedily (a sketch of this step follows at the end of this comment):

      • Sort the intervals according to left endpoints

      • Sort the points in increasing order

      • Move from left to right; for the current point, say $$$x$$$, insert into a set the right endpoints of all intervals whose left endpoint is $$$\le x$$$ (this can be done with a pointer).

      • Match the point with the segment having the smallest $$$r$$$ value.

      • Also, for each point, we can calculate what it would be matched to if the segment from the previous step were removed ($$$-1$$$ for points that don't have such an option).

      In the end, the set should have exactly one spare segment.

      Let $$$seg_i$$$ be the segment allotted to the $$$i$$$-th point $$$(1 \le i \le k - 1)$$$, and let $$$seg_k$$$ be the only spare segment.

      We can be sure that we can match $$$seg_k$$$ to $$$d$$$ ensuring that the remaining segments still have a perfect match.

      Let's make a DP where $$$dp_i$$$ denotes whether we can remove $$$seg_i$$$ such that the remaining segments perfectly match the given $$$k-1$$$ points. Initially, $$$dp_k = true$$$.

      • Iterate from $$$k-1$$$ down to $$$1$$$; for each $$$seg_i$$$ we know the segment it would be replaced with if $$$seg_i$$$ didn't exist (i.e., the segment that would then be matched to the $$$i$$$-th point); let it be $$$nxt_i$$$.
      • $$$nxt_i$$$ is guaranteed to be greater than $$$i$$$, because that segment would only have been allotted to some later point.
      • $$$dp_i = dp_{nxt_i}$$$

      Finally, we have all the segments in which $$$d$$$ can lie. Since this approach doesn't assume those segments form one contiguous range, we can answer queries offline (using a sweepline) or online (using a segment tree).

      Code
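      For reference, a minimal sketch of the greedy matching step (my illustration with a hypothetical name greedyMatch, not the code from the spoiler above); intervals are assumed to be $$$(l, r)$$$ pairs sorted by $$$l$$$, and the points are sorted:

      #include <bits/stdc++.h>
      using namespace std;

      // Greedily match sorted points against intervals sorted by left end:
      // each point takes the available interval with the smallest right end
      // that still reaches it. Returns match[i] = interval index for point i,
      // or -1 if the point cannot be matched.
      vector<int> greedyMatch(const vector<pair<int,int>>& iv, const vector<int>& pts) {
          vector<int> match(pts.size(), -1);
          multiset<pair<int,int>> avail;              // (right end, interval index)
          size_t j = 0;
          for (size_t i = 0; i < pts.size(); i++) {
              while (j < iv.size() && iv[j].first <= pts[i])
                  avail.insert({iv[j].second, (int)j}), j++;   // interval now reaches pts[i] from the left
              auto it = avail.lower_bound({pts[i], INT_MIN});  // smallest right end >= pts[i]
              if (it == avail.end()) continue;        // no interval can take this point
              match[i] = it->second;
              avail.erase(it);
          }
          return match;                               // unused intervals stay in avail
      }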

      • »
        »
        »
        »
        20 months ago

        "Transposed" version of your solution can be used to prove that "suitable $$$d$$$ will form a contiguous segment" as well (and as a bonus, with a few standard tricks it can be implemented in $$$O(n \alpha(n))$$$, faster not only in theory but in practice).

        • Store $$$S$$$ in a set, sort segments in the order of increasing right ends.
        • Iterate through segments.
        • Greedily assign to the current segment the first point to the right of its left end, removing it from the set.
        • If such a point can't be assigned (either it's beyond the right end of the segment or there's no such point), we know that another point must be added somewhere to the left of the right end of this segment, so we get an upper bound $$$R$$$ on $$$d$$$ and we pretend to match this segment with $$$R$$$.
        • If such a situation happens a second time, even the "optimal" $$$d$$$ wasn't enough to save $$$S$$$, so the answer is no for every $$$d$$$.
        • Repeat the same from the other direction, sorting segments in the order of decreasing left ends, to find a lower bound $$$L$$$. We don't have to continue after finding the first problematic segment: if $$$L$$$, which is optimal in its own greedy way, isn't good enough, then $$$R$$$ couldn't have been good enough either.
        • If we determined that there is at least some solution for $$$d$$$, imagine adding $$$d = L$$$ and again matching segments with points from the left, as we did when looking for $$$R$$$. Without this additional point there was a segment that had no matching point, but with $$$L$$$ added it must have a match: either it matched $$$L$$$ itself, or some previous segment did, every segment after that one shifted its match one point to the left, and some point eventually "carried over" to the problematic segment.
        • If we shift the added point from $$$L$$$ to some position $$$x$$$ with $$$L < x < R$$$, then up until that point the segments are matched to the same points as without $$$x$$$. The segment that tries to match $$$x$$$ was able to match both a point to the left (when we added $$$L$$$, which might itself be that "point to the left" of $$$x$$$) and a point to the right (when we added no point at all); the exception is the problematic segment, which has no suitable point to the right, but $$$x < R$$$ means $$$x$$$ still satisfies its right end. So it can match $$$x$$$, and every subsequent segment matches the same point as with $$$L$$$ added. Hence every such $$$x$$$ is good too.
        Tricks for better complexity
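        A sketch of one direction of that scan, under my own assumptions about representation (segments as $$$(l, r)$$$ pairs, points in a multiset; upperBoundForD is a hypothetical name):

        #include <bits/stdc++.h>
        using namespace std;

        // Segments sorted by right end each greedily take the leftmost
        // remaining point >= their left end. The first segment left without
        // a point forces d <= its right end; a second failure means no
        // single added point can fix the matching.
        pair<bool, long long> upperBoundForD(vector<pair<long long,long long>> segs,
                                             multiset<long long> pts) {
            sort(segs.begin(), segs.end(),
                 [](const auto& a, const auto& b) { return a.second < b.second; });
            bool usedExtra = false;
            long long R = LLONG_MAX;                 // no constraint yet
            for (auto [l, r] : segs) {
                auto it = pts.lower_bound(l);        // leftmost point >= l
                if (it != pts.end() && *it <= r) {
                    pts.erase(it);                   // greedy assignment
                } else if (!usedExtra) {
                    usedExtra = true;                // this segment must take d
                    R = r;                           // hence d <= r
                } else {
                    return {false, 0};               // two failures: impossible
                }
            }
            return {true, R};
        }

        The mirrored scan (segments by decreasing left end, points taken greedily from the right) gives the lower bound $$$L$$$ in the same way.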
      • »
        »
        »
        »
        20 months ago

        Can we also just go in order of decreasing right endpoint and match every segment to its rightmost available value? I feel like this strategy works and is simpler.

      • »
        »
        »
        »
        20 months ago

        Neat! Eventually it needs to be looked at in a way similar to this because we still need a way to construct the interval of valid $$$d$$$, even if we know it's an interval.

        And it's easy to show it's an interval from your solution: $$$dp_i = dp_{nxt_i}$$$, and obviously segments $$$i$$$ and $$$nxt_i$$$ have at least one common point by the definition of $$$nxt_i$$$.

»
20 months ago

1718A2: I think the observations (segments of length 1 and 2) are pretty simple and obvious. The more interesting part is how to implement a solution from them. I worked on it for about an hour and did not even solve A1.

The editorial answers this with "this amount can be calculated by dynamic programming or greedily." Thanks a lot!

  • »
    »
    20 months ago

    Yeah, the editorial should definitely elaborate on that. And some of these observations aren't even needed for D1.

    My DP approach: the optimal solution for an array of $$$n$$$ elements can be constructed by taking the optimal solution for the first $$$k$$$ elements (for some $$$k < n$$$) and then applying the XOR chain (overlapping segments of size 2) to the rest of the elements. We need to try each value of $$$k$$$, however, which leads to an $$$O(n^2)$$$ solution, maintaining both an array of optimal times and an array of accumulating XOR-chain results for each possible starting index. This doesn't work in D1.

    Greedy approach: maintain a running XOR while also storing each intermediate result in a set. If a result appears twice, i.e., $$$r \oplus a_i \oplus a_{i+1} \oplus \cdots \oplus a_{i+k} = r$$$, then the subarray $$$a_i, \ldots, a_{i+k}$$$ has a XOR sum of 0. We don't need to know where the subarray starts ($$$i$$$); we just need to detect when it ends (when the running XOR is not a new value), so we can update the savings counter and reset the running XOR chain. You can check out just how simple it is over here: https://codeforces.com/contest/1719/submission/168640804
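    A minimal sketch of that greedy (my illustration, assuming the standard input format of 1718A2; not the linked submission):

    #include <bits/stdc++.h>

    // Answer = n - (max number of disjoint subarrays with XOR 0).
    // Keep the prefix XORs of the current chain in a set; when the running
    // XOR repeats, a zero-XOR subarray just ended: count it, reset the chain.
    int main() {
        int t;
        scanf("%d", &t);
        while (t--) {
            int n;
            scanf("%d", &n);
            std::set<int> seen = {0};
            int run = 0, saved = 0;
            for (int i = 0; i < n; i++) {
                int x;
                scanf("%d", &x);
                run ^= x;
                if (seen.count(run)) {  // not a new value: a 0-XOR subarray ended
                    saved++;
                    seen = {0};         // reset the chain
                    run = 0;
                } else {
                    seen.insert(run);
                }
            }
            printf("%d\n", n - saved);
        }
        return 0;
    }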

    That being said, the observation that a running XOR will encounter an old element if a subarray has a XOR sum of 0 should definitely be included in the editorial, in my opinion, since that's essential to the greedy approach (unless the authors had a different greedy approach in mind).

    • »
      »
      »
      20 months ago

      Hi, I found a nice dp solution for Div1 A1: https://codeforces.com/contest/1718/submission/168713136

      Can you please help me understand the transitions in the following solution?

    • »
      »
      »
      13 months ago

      Why don't we need to know where the subarray starts?

      Say I have XOR equal to 0 for a subarray from index 4..20, and inside it we have a subarray from index 9..16 whose XOR is 0 as well. Then shouldn't ignoring where the subarray starts be wrong in this case? I would have 2 saves here, but if I clear the set when reaching index 16, how would I know about the save for 4..20?

      Just curious about this; I couldn't actually understand how this works.

      Any help would be really appreciated :)

      • »
        »
        »
        »
        12 months ago

        We only want to count the maximum number of non-overlapping subarrays with a XOR of 0. If a 0-XOR subarray is inside a larger 0-XOR subarray, we will not benefit from exploiting the larger subarray, since we don't want to waste time revisiting the smaller subarray.

        For your example, if we find that 9-16 has a XOR of 0, then we can save 1 second on the 9-16 subarray. The "saving" means that we can perform the operation on [9, 10], [10, 11], [11, 12], [12, 13], [13, 14], [14, 15], and [15, 16]. We always choose $$$x$$$ as the current value of the first index we pick (so the first index turns into 0). Since indices 9-16 have a XOR value of 0, the last operation, on [15, 16], will also turn index 16 to 0, allowing us to make the 9-16 subarray (8 elements) all 0s in 7 seconds, one second less than zeroing its 8 elements individually.

        The fact that the subarray from 4-20 also has a XOR of 0 is not something we can exploit anymore. It's true that if we then perform the operation on pairs like [4, 5], [5, 6], ..., [18, 19], and [19, 20], we will get them all to 0, but this requires revisiting indices 9 to 16, which are already 0! The operations from [8, 9] to [16, 17] take 9 seconds to perform, when it would have been much faster to just apply the operation on [8] alone and then move on to [17, 18] and so on.

        So it makes no sense to try to take advantage of BOTH the 9-16 subarray and the 4-20 subarray: the moment you decide to exploit the 9-16 subarray, you turn this range into 0 (saving 1 second) and you don't want to touch the 9-16 subarray ever again. Now, it's possible that we could instead decide to exploit the 4-20 subarray directly without exploiting the 9-16 one; however, it's always better to exploit the subarray that ends earliest first, since then you have more indices available for future exploits (e.g., if there is a subarray from, say, 18-25 that has a XOR of 0, then we can save 1 second from 9-16 and another second from 18-25; whereas if we instead decided to save 1 second from 4-20, we would no longer be able to save another second through 18-25, since 18-20 have been zeroed out).

        Let me know if you're still confused about anything here.

  • »
    »
    20 months ago

    I am quite the opposite: I couldn't do anything during the contest on either version, and only when I was told the hint about segments of length 1 and 2 did I solve the hard version instantly.

    • »
      »
      »
      20 months ago

      Well, the above code is fairly simple, but actually I still do not really understand why it works.

      What is the key idea we can see here instantly?

      • »
        »
        »
        »
        20 months ago

        The key idea is to notice that the extra XOR arriving at $$$a_i$$$ before we turn $$$a_i$$$ into zero with one operation can be represented as the XOR of a subarray ending at index $$$i-1$$$ (possibly empty), namely the chain of consecutive size-2 operations leading up to index $$$i$$$. So, to have $$$a_i$$$ already 0 before reaching it, we have to find a subarray ending at index $$$i$$$ with XOR sum equal to 0 (i.e., find the maximum $$$j$$$ such that $$$pref_i \oplus pref_j = 0$$$).

      • »
        »
        »
        »
        20 months ago

        As far as I understood:
        You can always use $$$n$$$ steps to turn all elements to zero by just taking all indices consecutively, and you want to improve on this result. How can you do it?
        You notice that there's no point in using a range of size greater than 2, given the formula for the number of seconds. What can you do with this?
        If you take two numbers $$$a_i$$$ and $$$a_{i+1}$$$ and apply XOR with $$$a_i$$$ to them, you can get two results:
        1) $$$[0,0]$$$: this means $$$a_i = a_{i+1}$$$, so you saved 1 second here!
        2) $$$[0,X]$$$: now you can apply XOR with $$$X$$$ to the next pair $$$[X, a_{i+2}]$$$, and so on; you keep applying it until you either get a $$$[0,0]$$$ pair (then you saved 1 second!) or reach the end of the array and turn the last element to zero with a range of size 1.
        So what do both saving situations have in common? The XOR of all elements in the covered range equals 0.
        So now you can convert the problem into finding the number of disjoint subarrays with XOR equal to 0 (every such subarray is 1 saved second), and the answer will be $$$n - (\text{count of such subarrays})$$$; a small worked trace follows below.
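        A tiny trace of the chain in case 2 (my own example, not from the comment above): take $$$a = [1, 2, 3]$$$, whose total XOR is $$$0$$$. Apply $$$x = 1$$$ to $$$(a_1, a_2)$$$, giving $$$[0, 3, 3]$$$; then apply $$$x = 3$$$ to $$$(a_2, a_3)$$$, giving $$$[0, 0, 0]$$$. Three elements are cleared in 2 seconds, i.e., 1 second saved, which matches $$$n - (\text{number of disjoint zero-XOR subarrays}) = 3 - 1 = 2$$$.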

        • »
          »
          »
          »
          »
          12 months ago

          MrOtter gave a very nice explanation. To add some detail on 168713136: the reason we calculate $$$dp[i][0 \ldots 8191]$$$ is that before the next $$$v_{i+1}$$$ comes, we don't know the result of $$$v_i \oplus v_{i+1}$$$. Therefore, for every $$$v_i$$$ we should consider every possible value of the accumulated XOR, which is $$$0 \ldots 8191$$$, since the maximum value is not greater than $$$5000$$$ ($$$1001110001000_2 < 1111111111111_2 = 8191$$$).

»
20 months ago

"Unable to parse markup [type=CF_MATHJAX]" in problem D.

»
20 months ago

Can anyone share their Div1 C solution using a segment tree which passes for all factors of $$$n$$$, as claimed in the editorial? I used a segment tree but got TLE.

»
20 months ago

Another approach for Div2 C, based on the editorial one.

Let's calculate the next greater element and the previous greater element. Then, for each $$$a_i$$$, we can calculate how many rounds the athlete wins for infinite $$$k$$$. If a previous greater element exists, he won't win a single round. Otherwise, the answer equals the distance between the athlete and his next greater element.

When answering queries, one should notice that an athlete starts to fight only after $$$i-1$$$ rounds — this is the time he has to wait to get to the second position and start fighting.

Time complexity: $$$O(n)$$$

https://codeforces.com/contest/1719/submission/168605787
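A small sketch of the next-greater-element pass (my illustration, not submission 168605787; nextGreater is a hypothetical name):

#include <bits/stdc++.h>
using namespace std;

// nxt[i] = index of the first element to the right of i with a greater
// value, or n if there is none. Classic monotonic stack, O(n) total.
vector<int> nextGreater(const vector<int>& a) {
    int n = (int)a.size();
    vector<int> nxt(n, n);
    stack<int> st;                        // indices with decreasing values
    for (int i = 0; i < n; i++) {
        while (!st.empty() && a[st.top()] < a[i]) {
            nxt[st.top()] = i;            // a[i] is their next greater element
            st.pop();
        }
        st.push(i);
    }
    return nxt;
}

With this, the "distance between the athlete and his next greater element" from the comment above is $$$nxt_i - i$$$.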

»
20 months ago

Oh, wow, am I the only one who actually started filling in Sprague-Grundy values for Div2 A (Chip Game), only to realize and prove that they form a binary chessboard? I felt that this was more straightforward (albeit requiring knowledge of the Sprague-Grundy theorem) than the number-theoretic observations.

Is it planned for solutions to be posted later? I'm curious to see whether the authors' solution for Div2E/Div1B actually passes some of the really nasty but valid input test cases.

»
20 months ago

I solved problem C (Div2) in $$$O(n)$$$ time complexity.

Submission: 168668282

  • »
    »
    20 months ago

    There are $$$2$$$ cases in this problem.

    I use $$$tmp$$$ to denote the number of rounds needed to bring the biggest value to the front of the array.

    The array $$$cnt_i$$$ counts the number of victories of the $$$i$$$-th player during those first $$$tmp$$$ rounds.

    If $$$k \ge tmp$$$

    There are two subcases in this condition:

    • $$$val_{id}$$$ is the biggest number ($$$val_{id} = n$$$): then the answer is $$$cnt_{id} + k - tmp$$$, because this player wins every round after round $$$tmp$$$.

    • Otherwise: the answer is $$$cnt_{id}$$$.

    Otherwise ($$$k \lt tmp$$$)

    There are also two subcases:

    • $$$val_{id}$$$ is the biggest number in its prefix.

    I use an array $$$r$$$ to mark, for each element, the first element to its right with a bigger value (you can compute the array $$$r$$$ with a stack in $$$O(n)$$$ at the beginning).

    The answer in this case is $$$(id \ne 1) + \min(r_{id} - id - 1, k - id + 1)$$$.

    • Otherwise: the answer is $$$0$$$.

    In this way, I can get each answer in $$$O(1)$$$, using $$$O(n)$$$ time to precompute $$$tmp$$$ and the arrays $$$cnt$$$ and $$$r$$$.

»
20 months ago

Is there an English editorial for this contest?

Edit: The issue has been fixed. Thanks.

»
20 months ago

Any proof for Div1 B / Div2 E of why greedily choosing the letter with maximum frequency as the current last block works?

  • »
    »
    20 months ago

    You could search for "Zeckendorf representation" — that's the formal name for the Fibonacci-sum representation of a positive integer.

  • »
    »
    20 months ago

    Assume the answer is YES, with $$$n+1$$$ blocks in total. Then one can prove that the count of the letter with the third-highest frequency can't be greater than $$$F_n$$$. If the last block used the letter with the second-highest frequency, then one can prove (because $$$F_n = F_{n-1} + F_{n-2}$$$) that the next two blocks would BOTH have to come from the letter with maximum frequency, which is a contradiction. Therefore the last block has to come from the letter with maximum frequency.

    The parts I didn't prove can be handled by messy but routine bounding.

»
20 months ago

I'm sorry, but I really do not understand the editorial of Div2 D. Especially the "$$$n -$$$ (the maximum number of disjoint sub-segments with a XOR of 0)" part.

»
20 months ago

Another way to understand the solution to Div1 B is Zeckendorf's theorem, which states that each positive integer can be uniquely represented as the sum of a set of distinct Fibonacci numbers, no two of which are adjacent.

We can apply this representation directly to the frequency of each letter and solve the problem.

And due to this uniqueness of the representation, the greedy solution will also always work; see the sketch below.
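A small sketch of the greedy decomposition (my illustration; zeckendorf is a hypothetical helper name):

#include <bits/stdc++.h>
using namespace std;

// Greedy Zeckendorf decomposition: repeatedly subtract the largest
// Fibonacci number that fits. After taking F_i the remainder is less
// than F_{i-1}, so no two chosen Fibonacci numbers are ever adjacent.
vector<long long> zeckendorf(long long x) {
    vector<long long> fib = {1, 2};                  // F_2, F_3, ...
    while (fib.back() <= x)
        fib.push_back(fib.end()[-1] + fib.end()[-2]);
    vector<long long> parts;
    for (int i = (int)fib.size() - 1; i >= 0 && x > 0; i--)
        if (fib[i] <= x) {
            parts.push_back(fib[i]);
            x -= fib[i];
        }
    return parts;
}

int main() {
    for (long long f : zeckendorf(100)) printf("%lld ", f);   // prints: 89 8 3
    printf("\n");
    return 0;
}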

»
20 months ago

Div2 (A-F) video editorial in Chinese:

Bilibili

»
20 months ago

Made an editorial in Russian and got zero Russian comments :(

  • »
    »
    20 months ago

    Here, have a Russian comment.

  • »
    »
    20 months ago

    One more Russian comment. Thanks for the Russian editorial, very convenient)))

  • »
    »
    20 months ago

    I think the Div2 D2 editorial could mention how to solve D1 and what the difference from D2 is.

    • »
      »
      »
      20 months ago

      I think there are enough hints for D1.

      For technical reasons they are missing from the Russian version)

»
20 months ago

I remember now that there was a problem similar to Div2 D on USACO, except that it asked to count the number of subsegments whose sum is divisible by some integer $$$k$$$, if I remember correctly. Similar observations to that problem, but not quite the same.

»
20 months ago

Does anyone have the DP approach to Div2 D1? Also, is there a mistake in Hint 2 for A1? Should it be $$$a_i = v$$$?

»
20 months ago

Hello there! A question for the authors of the round: I see that there are a lot of problems about Buryatia in this round. Is one of you from there, or is it just a meme? Thanks.

»
20 months ago

During the contest my solution to problem B was accepted, but after the contest it shows that the time limit was exceeded on test case 5. Was that a bug?

  • »
    »
    20 months ago

    Nope, not a bug. Looks like you failed the system tests.

    To put it short: the test cases run against your code during the contest are called pretests. These are normally not too big, but are certainly meant to be strong. After the round is over, there are system tests: basically extra cases added on top of the pretests to make sure that your solution is indeed efficient/correct.

»
20 months ago

My comment will be published soon

»
20 months ago

I have a solution with bitsets in $$$O(\frac{nC\log n}{p(k)}+\frac{nC\log n}{w}+\frac{qC}{w}+\frac{2^kC}{w})$$$.

We just calculate the bitset for every subset of the first $$$k$$$ primes. For larger primes, we can use the trick called "猫树分治" (I don't know how to translate it into English; maybe "cat tree divide and conquer").

Since std::bitset has an extremely small constant, we can solve this problem with this weird method.

Edit:

$$$p(k)$$$ is the $$$k$$$-th prime; I can't prove the exact bound, but it has a small constant.

Edit2:

I think $$$k=14$$$ or $$$k=15$$$ would be the fastest, but that takes too much memory. It runs in 1668 ms with $$$k=13$$$.

»
20 months ago

Hello SomethingNew, can you please explain, for problem 1718A2 (hard version): what does "disjoint sub-segments with XOR 0" actually mean?

I know it is a very noob question, and you can say I should have googled it. I did search and sort of understood it, but with that limited knowledge I failed on my own test cases/examples for this problem.

For example, in case you want to know, the array I am confused about:

1 3 2 1 2 3 1

  Prefix XOR:  1 2 0 1 3 0 1

I am not able to clearly find the "disjoint sub-segments with XOR 0" here.

  • »
    »
    20 months ago

    if prefix_xor[i] = prefix_xor[j] for some $$$i < j$$$, then a[i+1] ^ a[i+2] ^ ... ^ a[j] = 0; "disjoint sub-segments" simply means disjoint subarrays.
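    Applying this to the array above (my walkthrough): the prefix XOR first returns to $$$0$$$ at index $$$3$$$, so $$$(1, 3, 2)$$$ is a sub-segment with XOR $$$0$$$; restarting the running XOR from index $$$4$$$, it hits a repeated value again at index $$$6$$$, so $$$(1, 2, 3)$$$ is another one. These two sub-segments are disjoint, so the answer for this array is $$$7 - 2 = 5$$$.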

»
20 months ago

If you feel like it, 1718F - Burenka, an Array and Queries can be solved in $$$O(n \cdot q)$$$ (169446727), and it works fast enough to get AC on the current tests.

The idea is to do a scanline over the left border of the queries and, for each $$$x$$$, maintain the minimum position $$$first[x]$$$ such that $$$\gcd(x, a[first[x]]) \ne 1$$$. Then, to answer a query with right border $$$r$$$, we count the number of entries of $$$first$$$ that are at most $$$r$$$.

You can read about the solution in more detail here.

»
20 months ago

Div.1 A~D solutions for Chinese readers. https://www.cnblogs.com/jklover/p/16595927.html

»
20 months ago

A2. Burenka and Traditions (hard version), https://codeforces.com/contest/1718/submission/169707649 — I just can't understand why this is giving TLE, although it's $$$O(N)$$$.

»
20 months ago

I don't think the editorial of 1E is understandable, but my poor English doesn't allow me to write a whole editorial myself.

Still, I can give a proof of the time complexity. Divide the connected components of the two graphs into groups such that all components in a group have the same number of left vertices. Then we only need to consider matchings between vertices whose connected components are in the same group.

Now consider only the components in the $$$i$$$-th group. Let $$$x_i$$$ be the number of left vertices in each such component, $$$y_i$$$ the number of such components in graph $$$a$$$, and $$$z_i$$$ the number of edges of graph $$$b$$$ in this group. If $$$x_i=0$$$, each of the components consists of a single right vertex, and this situation is easy to handle.

Otherwise $$$x_i\ne0$$$. Consider a component in $$$a$$$ and a vertex $$$u$$$ in it; we are looking for a vertex to match with $$$u$$$. For an edge in $$$b$$$, suppose the edge belongs to component $$$S$$$; then the edge contributes to the running time only when we try to match $$$u$$$ with some left vertex of $$$S$$$. So the time complexity of dealing with this group is $$$O(x_iy_iz_i)$$$, and the total time is $$$\sum x_iy_iz_i\le(\sum x_iy_i)(\sum z_i)=O(n^2m)$$$. With that, we can solve the whole problem in $$$O(nm\sqrt{nm})$$$ time.

»
20 months ago

Coincidentally, I noticed that Div2 C this time is very similar to 569 Div2 C. XD

»
20 months ago

Stuck on Fibonacci Strings: 1718B (link: https://codeforces.com/problemset/problem/1718/B) for the past few days; I can't see why it is going wrong. Any help would be really nice.

Submission: https://codeforces.com/problemset/submission/1718/170409697 Test 9: WA on case 136 — found YES instead of NO.

»
17 months ago

Can anyone find the error in this code? It gets WA on test 2.560 :(

#include <bits/stdc++.h>
using namespace std;

int a[100005];
vector<int> ans[100005];   // ans[v] = rounds (0-based) won by the athlete with strength v
deque<int> que;

int main() {
	int t;
	scanf("%d", &t);
	while (t--) {
		int n, q;
		scanf("%d %d", &n, &q);
		// ans is indexed by value, which goes up to n, so ans[n] must be
		// cleared as well: clearing only ans[0..n-1] leaves stale wins from
		// the previous test case in ans[n] (this was the bug).
		for (int i = 0; i <= n; i++) {
			ans[i].clear();
		}
		int before = 0;   // position of the strongest athlete (value n)
		for (int i = 0; i < n; i++) {
			scanf("%d", &a[i]);
			que.push_back(a[i]);
			if (a[i] == n) {
				before = i;
			}
		}
		// Simulate rounds until the strongest athlete reaches the front;
		// after that he wins every round.
		for (int i = 0; i < before; i++) {
			int one = que.front();
			que.pop_front();
			int two = que.front();
			que.pop_front();
			ans[max(one, two)].push_back(i);
			que.push_front(max(one, two));
			que.push_back(min(one, two));
		}
		que.clear();   // reset the deque for the next test case
		while (q--) {
			int i, k;
			scanf("%d %d", &i, &k);
			i--;
			if (k <= before) {
				// wins of athlete i among the first k simulated rounds
				printf("%d\n", (int)(upper_bound(ans[a[i]].begin(), ans[a[i]].end(), k - 1) - ans[a[i]].begin()));
			} else {
				if (a[i] == n) {
					printf("%d\n", k - before + (i != 0));
				} else {
					printf("%d\n", (int)ans[a[i]].size());
				}
			}
		}
	}
	return 0;
}
»
17 months ago

1718A1 — Burenka and Traditions (easy version)

"Note that if a segment of length 2 intersects with a segment of length 1, they can be changed to 2 segments of length 1."

Can someone help me understand this? If a segment of length 1 intersects a segment of length 2, it lies completely inside it, right? I do not understand the purpose of this statement. I know that I'm missing something very silly.

I just did the basic 2-length-segment thing, which fails: 183824117. I cannot find a case where it fails, though. Please, if somebody could help me out. Thanks!

Also, if possible, it would be very helpful if the problem setters could create small test cases (preferably corner cases) which can fit the status screen, say for the first five test cases. It would help a lot while upsolving/practising. I know a WA is a mistake on our part, but doing this would help us a lot. Thanks!

»
17 months ago

Funny, but there is a bruteforce solution to problem Div1 B: 184648373

»
14 months ago

Fibonacci Strings: 1718B, submission: https://codeforces.com/contest/1718/submission/194429652

Why am I getting TLE??

Got it: the fix was changing int to long long!

»
10 months ago

One of the best solutions for C (1719C — Fighting Tournament): simple and easy. Using my approach, the time complexity is only $$$O(n)$$$ and the memory used is $$$O(n)$$$ as well. Here is my code, you can check it out: click here

»
9 months ago

[Deleted]

»
2 months ago

It's been a while, but for problem C we can use a monotonic stack. Good luck everyone!!!

»
2 months ago

Another stupid solution to Div1 F:

  • First, do divide and conquer on the queries to remap them to "answer this query given the set of these prime numbers in my range" (storing them in a bitset of O(primes under M) size).

  • Then, sort the queries in bitset-lexicographical order (where primes[0] = 2, primes[1] = 3, etc.). Process them left to right, processing adjacent differences between bitsets; whenever we add/remove a bit, naively do O(C / primes[i]) work on all of the multiples of that prime to maintain how many times each multiple is hit. We want to maintain how many indices are non-zero.

To really squeeze and cache-optimize this, make everything chars, since the max # of distinct prime factors under 2e4 is 6.

You can show that the sum of the O(C / primes[i]) ops takes ~2e4 * MAXC operations in the worst case (the sum 2*(1/2) + 4*(1/3) + 8*(1/5) + ..., until we exhaust (max # distinct prime factors) * N terms, times 2 * MAXC), but I guess 2e9 simple operations are fast enough.