Enigma27's blog

By Enigma27, 5 years ago, In English

1208A — XORinacci

The sequence is $$$a$$$, $$$b$$$, $$$a\oplus b$$$, $$$a$$$, $$$b$$$, $$$a\oplus b$$$ $$$\cdots$$$

Since the sequence has a period of $$$3$$$, $$$f[i] = f[i \bmod 3]$$$.
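The periodicity is all we need. Below is a minimal sketch of this (not the author's code behind the spoiler), assuming the usual input format of $$$T$$$ independent queries, each consisting of $$$a$$$, $$$b$$$, $$$n$$$:

#include <cstdio>

int main() {
    int t;
    scanf("%d", &t);
    while (t--) {
        long long a, b, n;
        scanf("%lld %lld %lld", &a, &b, &n);
        long long f[3] = {a, b, a ^ b};   // one full period of the XORinacci sequence
        printf("%lld\n", f[n % 3]);
    }
    return 0;
}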

Code

1208B — Uniqueness

After removing a sub-segment, a prefix and a suffix remain, possibly of length $$$0$$$. Let us fix a prefix that contains no duplicate elements and find the longest suffix we can keep so that no element repeats, either inside the suffix or against the prefix. We can use a map/set to keep track of the elements already taken.

Time complexity: $$$O(n^2 \cdot log(n))$$$
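A hedged sketch of this $$$O(n^2 \cdot \log n)$$$ idea (not the author's code behind the spoiler), assuming the input is $$$n$$$ followed by the array:

#include <bits/stdc++.h>
using namespace std;

int main() {
    int n;
    scanf("%d", &n);
    vector<int> a(n);
    for (auto &x : a) scanf("%d", &x);

    int ans = n;                              // deleting everything always works
    set<int> prefix;                          // elements of the current duplicate-free prefix
    for (int p = 0; p <= n; ++p) {            // p = length of the kept prefix
        set<int> used = prefix;
        int j = n - 1;                        // j will end up as the last deleted index
        while (j >= p && !used.count(a[j])) { // grow the suffix while it stays conflict-free
            used.insert(a[j]);
            --j;
        }
        ans = min(ans, j - p + 1);            // we delete exactly a[p..j]
        if (p < n) {                          // try to extend the prefix by one element
            if (prefix.count(a[p])) break;    // it would stop being duplicate-free
            prefix.insert(a[p]);
        }
    }
    printf("%d\n", ans);
    return 0;
}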

Code
Bonus

1208C — Magic Grid

Divide the grid into four quadrants. Assign distinct integers to the first quadrant from $$$0$$$ to $$$(\frac{N^2}{4} - 1)$$$. Copy this quadrant to the other three. This way XOR of each row and column becomes $$$0$$$.

Now, to make the numbers distinct among the quadrants, multiply every number (in all four quadrants) by $$$4$$$, then add $$$1$$$, $$$2$$$, and $$$3$$$ to the numbers in the $$$1^{st}$$$, $$$2^{nd}$$$ and $$$3^{rd}$$$ quadrants respectively, leaving the remaining quadrant as it is. The XOR of each row and column still remains $$$0$$$: the multiples of $$$4$$$ cancel in pairs as before, and each added constant appears $$$N/2$$$ times in a row/column, which is even. The elements become distinct while staying in the range $$$[0, N^2-1]$$$.

Another approach to this problem is to take the $$$4 \times 4$$$ grid given in the sample itself and replicate it across the $$$N \times N$$$ grid, adding $$$16, 32, 48, \cdots$$$ to the copies to make the elements distinct.

Of course, there are multiple ways to solve the problem. These are just a few of them.
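A possible implementation of the quadrant construction above (not the author's code), assuming the input is the single integer $$$N$$$, which the problem guarantees to be a multiple of $$$4$$$:

#include <cstdio>

int main() {
    int n;
    scanf("%d", &n);
    int h = n / 2;                                 // side of one quadrant
    for (int i = 0; i < n; ++i) {
        for (int j = 0; j < n; ++j) {
            int idx = (i % h) * h + (j % h);       // 0 .. n*n/4 - 1, identical in all quadrants
            int quadrant = (i / h) * 2 + (j / h);  // 0, 1, 2 or 3
            printf("%d ", 4 * idx + quadrant);     // multiply by 4, then add the quadrant offset
        }
        printf("\n");
    }
    return 0;
}

Each value $$$4 \cdot idx + \text{offset}$$$ is produced exactly once, so the entries are exactly the numbers $$$0 \ldots N^2-1$$$.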

Code

1208D — Restore Permutation

Approach 1

Let us place the numbers from $$$1$$$ to $$$N$$$ one by one, in increasing order.

$$$1$$$ will lie at the last index $$$i$$$ such that $$$s_{i} = 0$$$. Find this index $$$i$$$, remove it from further consideration, and for all indices $$$j > i$$$ reduce $$$s_{j}$$$ by $$$1$$$. Repeat this process for the numbers $$$2, 3, \ldots, N$$$: in the turn for value $$$v$$$, reduce the elements by $$$v$$$.

To find the last index with value zero, we can use a segment tree that supports range minimum queries with lazy propagation (for the range decrements).

Time complexity: $$$O(N \cdot log(N))$$$
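A possible implementation sketch of Approach 1 (not the author's code behind the spoiler): a segment tree with range add and range minimum; at the start of every turn the global minimum is $$$0$$$, attained rightmost at the position of the value being placed, so we descend to the rightmost zero, and a removed position is simply pushed up to a huge value so it never becomes zero again. The input is assumed to be $$$N$$$ followed by the array $$$s$$$.

#include <bits/stdc++.h>
using namespace std;

const long long INF = 2e18;

int n;
vector<long long> s, tre, lz;

void build(int v, int l, int r) {
    if (l == r) { tre[v] = s[l]; return; }
    int m = (l + r) / 2;
    build(2 * v, l, m);
    build(2 * v + 1, m + 1, r);
    tre[v] = min(tre[2 * v], tre[2 * v + 1]);
}
void push(int v) {                               // move the pending addition to the children
    for (int c : {2 * v, 2 * v + 1}) { tre[c] += lz[v]; lz[c] += lz[v]; }
    lz[v] = 0;
}
void add(int v, int l, int r, int ql, int qr, long long x) {
    if (qr < l || r < ql) return;
    if (ql <= l && r <= qr) { tre[v] += x; lz[v] += x; return; }
    push(v);
    int m = (l + r) / 2;
    add(2 * v, l, m, ql, qr, x);
    add(2 * v + 1, m + 1, r, ql, qr, x);
    tre[v] = min(tre[2 * v], tre[2 * v + 1]);
}
int rightmostZero(int v, int l, int r) {         // assumes the minimum of the whole tree is 0
    if (l == r) return l;
    push(v);
    int m = (l + r) / 2;
    return tre[2 * v + 1] == 0 ? rightmostZero(2 * v + 1, m + 1, r)
                               : rightmostZero(2 * v, l, m);
}

int main() {
    scanf("%d", &n);
    s.resize(n);
    for (auto &x : s) scanf("%lld", &x);
    tre.assign(4 * n, 0);
    lz.assign(4 * n, 0);
    build(1, 0, n - 1);
    vector<int> p(n);
    for (int v = 1; v <= n; ++v) {
        int i = rightmostZero(1, 0, n - 1);      // value v belongs at the last zero
        p[i] = v;
        add(1, 0, n - 1, i, i, INF);             // "remove" this position
        if (i + 1 <= n - 1)
            add(1, 0, n - 1, i + 1, n - 1, -v);  // v now counts as a placed smaller value for later indices
    }
    for (int i = 0; i < n; ++i) printf("%d ", p[i]);
    printf("\n");
    return 0;
}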

Code
Approach 2

For every $$$i$$$ from $$$N$$$ down to $$$1$$$, let the value of $$$s_{i}$$$ be $$$x$$$. It means that the $$$k$$$ smallest unused numbers sum to exactly $$$x$$$, so we put the $$$(k+1)$$$-st smallest unused number into the output permutation at this $$$i$$$, mark it as used, and continue moving left. This can be implemented using a BIT and binary lifting.

Thanks to real.emerald for expressing the solution in the above words.
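A possible implementation sketch of this approach (not the author's code behind the spoiler). A Fenwick tree over the values $$$1 \ldots N$$$ stores $$$v$$$ at position $$$v$$$ while $$$v$$$ is unused, so a prefix sum gives the total of the unused values up to some value; binary lifting finds the largest prefix with sum at most $$$s_i$$$, and the answer at position $$$i$$$ is one past it.

#include <bits/stdc++.h>
using namespace std;

int n, LOG;
vector<long long> bit;                           // 1-indexed Fenwick tree of value sums

void update(int pos, long long delta) {
    for (; pos <= n; pos += pos & -pos) bit[pos] += delta;
}
// largest k in [0, n] with prefix_sum(k) <= target (prefix sums are non-decreasing)
int descend(long long target) {
    int pos = 0;
    long long cur = 0;
    for (int j = LOG; j >= 0; --j) {
        int nxt = pos + (1 << j);
        if (nxt <= n && cur + bit[nxt] <= target) { pos = nxt; cur += bit[nxt]; }
    }
    return pos;
}

int main() {
    scanf("%d", &n);
    vector<long long> s(n + 1);
    for (int i = 1; i <= n; ++i) scanf("%lld", &s[i]);
    bit.assign(n + 1, 0);
    LOG = 0;
    while ((1 << (LOG + 1)) <= n) ++LOG;
    for (int v = 1; v <= n; ++v) update(v, v);   // all values start unused
    vector<int> p(n + 1);
    for (int i = n; i >= 1; --i) {
        // the unused values smaller than p[i] sum to s[i],
        // so p[i] is one past the largest prefix with sum s[i]
        int v = descend(s[i]) + 1;
        p[i] = v;
        update(v, -v);                           // mark v as used
    }
    for (int i = 1; i <= n; ++i) printf("%d ", p[i]);
    printf("\n");
    return 0;
}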

Code

1208E — Let them slide

For every array $$$i$$$ from $$$1$$$ to $$$N$$$, let us maintain two pointers $$$L[i]$$$ and $$$R[i]$$$ representing the range of elements of the $$$i$$$-th array that can be placed under the current column index $$$j$$$.

Initially all $$$L[i]$$$ and $$$R[i]$$$ are set to $$$0$$$.

As we move from the $$$j$$$-th column to the $$$(j+1)$$$-th, some $$$L[i]$$$ and $$$R[i]$$$ change. Specifically, exactly those arrays with $$$size \ge \min(j, W-j-1)$$$ have their $$$L[i]$$$ or $$$R[i]$$$ change.

The total movement of $$$L[i]$$$ and $$$R[i]$$$ for array $$$i$$$ is at most $$$2 \cdot$$$ size_of_array($$$i$$$), so the overall number of pointer changes is of the order of the total input size, $$$O(\sum \text{size}(i))$$$. For every array we need a range maximum query over $$$(L[i], R[i])$$$.

We can use multisets / segment trees / deques to update the answer for an array whenever its $$$L[i], R[i]$$$ change. This way we get complexity $$$O(N)$$$ or $$$O(N \cdot \log N)$$$ (with $$$N$$$ being the total input size) depending upon the implementation.
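For the range-maximum part, here is a generic sliding-window maximum with a monotonic deque, a hedged illustration (not the author's code) of the deque option mentioned above. It is shown for a fixed window length for simplicity; the essential requirement is only that both window ends move forward, which is exactly what happens to $$$(L[i], R[i])$$$ as $$$j$$$ increases. The array and window length below are made-up example values.

#include <bits/stdc++.h>
using namespace std;

// maximum of every window of length w in a (0-indexed), in O(n) total
vector<long long> slidingMax(const vector<long long>& a, int w) {
    vector<long long> res;
    deque<int> dq;                                 // indices, their values strictly decreasing
    for (int r = 0; r < (int)a.size(); ++r) {
        while (!dq.empty() && a[dq.back()] <= a[r]) dq.pop_back();
        dq.push_back(r);
        if (dq.front() <= r - w) dq.pop_front();   // left end slid past this index
        if (r >= w - 1) res.push_back(a[dq.front()]);
    }
    return res;
}

int main() {
    vector<long long> a = {3, -1, 4, 1, 5, 9, 2};
    for (long long x : slidingMax(a, 3)) printf("%lld ", x);   // 4 4 5 9 9
    printf("\n");
    return 0;
}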

Code

1208F — Bits And Pieces

The idea is to first fix some $$$a[i]$$$ and try to collect the bits which are off in $$$a[i]$$$ from some $$$2$$$ elements to the right of $$$i$$$. Since we need to maximize the value, we try to get the higher bits first. What we need, for every number $$$x$$$ from $$$0$$$ to $$$2^{21}-1$$$, is the $$$2$$$ rightmost positions such that the bits present in $$$x$$$ are also present in the elements at those positions. This can be computed by iterating over submasks (slow) or with SOS DP (fast).

Once we have these positions for every $$$x$$$, let us fix some $$$a[i]$$$ and iterate over the bits which are off in $$$a[i]$$$ from the highest to the lowest. Let's say the mask built so far is $$$w$$$ and we are considering the $$$y^{th}$$$ bit. We can take this bit if the $$$2$$$ stored positions for $$$w|2^{y}$$$ are both to the right of $$$i$$$; otherwise we cannot.

The final answer is the maximum of $$$a[i]|w$$$ over all $$$i$$$ that have at least two elements to their right.

Time complexity: $$$O((M+N)\cdot \log M)$$$ where $$$M$$$ is the max element in the array.
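A hedged sketch of this solution (not the author's exact code): best[x] stores the two rightmost indices whose element contains $$$x$$$ as a submask, filled by a superset-style SOS DP, and then the greedy over bits is run for every valid $$$i$$$.

#include <bits/stdc++.h>
using namespace std;

const int B = 21, M = 1 << B;

int main() {
    int n;
    scanf("%d", &n);
    vector<int> a(n);
    for (auto &x : a) scanf("%d", &x);

    // best[x] = two rightmost indices i with (a[i] & x) == x, -1 if absent
    vector<array<int, 2>> best(M, array<int, 2>{-1, -1});
    auto put = [&](int x, int idx) {
        if (idx > best[x][0]) { best[x][1] = best[x][0]; best[x][0] = idx; }
        else if (idx != best[x][0] && idx > best[x][1]) best[x][1] = idx;
    };
    for (int i = 0; i < n; ++i) put(a[i], i);
    for (int b = 0; b < B; ++b)
        for (int x = 0; x < M; ++x)
            if (!(x >> b & 1)) {                 // pull the info down from the superset x | 2^b
                put(x, best[x | 1 << b][0]);
                put(x, best[x | 1 << b][1]);
            }

    long long ans = 0;
    for (int i = 0; i + 2 < n; ++i) {            // need two indices j, k > i
        int w = 0;
        for (int b = B - 1; b >= 0; --b) {
            if (a[i] >> b & 1) continue;         // this bit comes for free from a[i]
            int cand = w | 1 << b;
            if (best[cand][1] > i) w = cand;     // two elements right of i contain cand
        }
        ans = max(ans, (long long)(a[i] | w));
    }
    printf("%lld\n", ans);
    return 0;
}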

Code

1208G — Polygons

If we choose a polygon with $$$l$$$ sides, it is always fine to also take the polygons whose numbers of sides are divisors of $$$l$$$, because this does not increase the answer.

So we may assume our final set is divisor-closed: for every chosen polygon with $$$l$$$ sides, the polygons whose numbers of sides are divisors of $$$l$$$ are chosen as well.

All polygons can be assumed to share at least one common point $$$P$$$ in the final arrangement; otherwise we can rotate them to obtain such a $$$P$$$. For a formal proof, please refer to this comment by orz.

Let us represent the points on the circle by their fractional distance from the point $$$P$$$: for a $$$k$$$-sided polygon these are $$$0$$$, $$$\frac{1}{k}, \frac{2}{k}, \ldots, \frac{k-1}{k}$$$.

Now the number of distinct fractions over all the polygons is our answer, which equals the sum of $$$\phi(l)$$$ over all chosen numbers of sides $$$l$$$: an $$$l$$$-sided polygon requires exactly $$$\phi(l)$$$ extra points (the fractions whose denominator in lowest terms is exactly $$$l$$$) compared to the polygons corresponding to its divisors.

One observation needed to reach the final solution is that $$$\phi(l) \ge \phi(d)$$$ for every divisor $$$d$$$ of $$$l$$$. So we can sort the side counts by their $$$\phi$$$ values and take the $$$k$$$ smallest of them. This minimizes the number of points and stays consistent with the divisor-closure property of our set.

NOTE: We cannot take a polygon with $$$2$$$ sides (it does not exist in this problem). This case can be handled easily.
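A hedged sketch of the computation (not the author's code behind the spoiler), assuming the input is the two integers $$$n$$$ and $$$k$$$: a totient sieve, then the $$$k$$$ smallest $$$\phi$$$ values among $$$3 \ldots n$$$. The corner-case handling here ($$$k = 1$$$ giving a single triangle with $$$3$$$ points, and the constant $$$2 = \phi(1)+\phi(2)$$$ added otherwise for the common point and its antipode) is this sketch's reading of the NOTE above, not something spelled out in the editorial.

#include <bits/stdc++.h>
using namespace std;

int main() {
    long long n, k;
    scanf("%lld %lld", &n, &k);                  // assumes n >= 3, 1 <= k <= n - 2
    vector<long long> phi(n + 1);
    for (long long i = 0; i <= n; ++i) phi[i] = i;
    for (long long i = 2; i <= n; ++i)
        if (phi[i] == i)                         // i is prime: apply the factor (1 - 1/i)
            for (long long j = i; j <= n; j += i)
                phi[j] -= phi[j] / i;

    if (k == 1) {                                // a single polygon: the triangle, 3 points
        printf("3\n");
        return 0;
    }
    vector<long long> vals(phi.begin() + 3, phi.end());
    sort(vals.begin(), vals.end());
    long long ans = 2;                           // phi(1) + phi(2), assumption noted above
    for (long long i = 0; i < k && i < (long long)vals.size(); ++i) ans += vals[i];
    printf("%lld\n", ans);
    return 0;
}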

Code
Tutorial is loading...
Code

»
5 лет назад, # |
Rev. 2   Проголосовать: нравится +10 Проголосовать: не нравится

In H, what is the easiest way to consider a vertex which has many children in the compressed tree? We have to calculate $$$the~number~of~such~children+ 1$$$ boundary values, how to do it without sorting the boundary values from the rest of the children?

  • »
    »
    5 лет назад, # ^ |
      Проголосовать: нравится +27 Проголосовать: не нравится

    I don't understand what you mean by $$$the\,number\,of\,such\,children+1$$$ boundary values. I don't compute any boundary value for an interesting vertex, I compute its color only when a query with fixed $$$k$$$ comes.

    btw, the tutorial is updated to a more complete version (someone posted a sketch instead of the tutorial first by mistake).

»
5 лет назад, # |
Rev. 2   Проголосовать: нравится +7 Проголосовать: не нравится

I did a $$$O(n^2\log{n})$$$ solution for question B and it got TLE. I don't understand why it got TLE.

My solution :

https://codeforces.com/contest/1208/submission/59465481

After contest I did a $$$O(n\log{n})$$$ solution. (binary search + two pointer)

»
5 лет назад, # |
  Проголосовать: нравится +13 Проголосовать: не нравится

You can also reduce the runtime for B by $$$\log n$$$ using a constant-time map instead of a log-time one, so it's possible to solve in $$$O(n)$$$. (example: 59466996)

»
5 лет назад, # |
Rev. 2   Проголосовать: нравится -15 Проголосовать: не нравится

So Problem C could have been a multiple of 2 as well, since all you need is to divide into 4 quadrants?

Or use the following $$$2x2$$$ matrix as building block:

0 1
2 3
  • »
    »
    5 лет назад, # ^ |
      Проголосовать: нравится +34 Проголосовать: не нравится

    Apparently no, because we don't just need to divide the grid in $$$4$$$ quadrants but also add some number to each quadrant. If $$$N$$$ is of the form $$$4k+2$$$, the element would be added odd times in a row/col of a quadrant, which changes the xor value.

    Also, the $$$2$$$ x $$$2$$$ matrix you mentioned does not have xor of a row equal to that of a col.

    • »
      »
      »
      5 лет назад, # ^ |
      Rev. 2   Проголосовать: нравится +1 Проголосовать: не нравится

      I used the $$$2\times 2$$$ matrix that mkagenius mentioned as a building block -- but it also relies on $$$N$$$ being a multiple of $$$4$$$. The 2x2 building block has xor $$$2$$$ in every column and $$$1$$$ in every row, so pairs of building blocks cancel out as long as you have an even number of building blocks.

      See submission 59457320 for an example.

»
5 лет назад, # |
  Проголосовать: нравится +19 Проголосовать: не нравится

As for D, I have something of an idea, but I'm not sure whether it's possible to come up with an efficient algo. So, for every index from n-1 to 0, let's say the value of the input array is x at a given index. So it means there are k smallest unused numbers whose sum is x. We simply put the k+1st number in the output permutation at this index, and continue (move left). The question is, can this be implemented efficiently?

  • »
    »
    5 лет назад, # ^ |
      Проголосовать: нравится +21 Проголосовать: не нравится

    Exactly, this is the 2nd approach. Please take a look and thanks for expressing it well.

    • »
      »
      »
      5 лет назад, # ^ |
        Проголосовать: нравится +9 Проголосовать: не нравится

      you put his comment in the editorial, Nice trick there

    • »
      »
      »
      5 лет назад, # ^ |
        Проголосовать: нравится 0 Проголосовать: не нравится

      Oh, now I get it, but what's BIT?

      • »
        »
        »
        »
        5 лет назад, # ^ |
          Проголосовать: нравится +6 Проголосовать: не нравится

        Binary Indexed Tree.

        It is a useful data structure for calculating prefix sums.

        • »
          »
          »
          »
          »
          5 лет назад, # ^ |
            Проголосовать: нравится 0 Проголосовать: не нравится

          Thanks! I'll try to implement it, then (during the contest I didn't solve D, only got this idea)

          • »
            »
            »
            »
            »
            »
            5 лет назад, # ^ |
              Проголосовать: нравится -6 Проголосовать: не нравится

            your rating is reaching 1900 and you still don't know what BIT is?

            • »
              »
              »
              »
              »
              »
              »
              5 лет назад, # ^ |
                Проголосовать: нравится +3 Проголосовать: не нравится

              Knowing BIT is not necessary. It is useful, but not necessary. A segment tree can do everything a BIT can do.

        • »
          »
          »
          »
          »
          5 лет назад, # ^ |
          Rev. 3   Проголосовать: нравится +4 Проголосовать: не нравится

          Second approach for D(about searching in BIT in O(logN)) has been well explained in this(Link) blog.

          • »
            »
            »
            »
            »
            »
            5 лет назад, # ^ |
              Проголосовать: нравится +3 Проголосовать: не нравится

            Wow, thanks a lot!

          • »
            »
            »
            »
            »
            »
            5 лет назад, # ^ |
              Проголосовать: нравится 0 Проголосовать: не нравится

I guess a normal binary search will work in this case; I saw tourist's solution, he implemented it naively. But thanks for the link, I learnt something new.

            • »
              »
              »
              »
              »
              »
              »
              5 лет назад, # ^ |
                Проголосовать: нравится 0 Проголосовать: не нравится

This is because N <= 2e5, so N(logN)^2 is roughly 3e7, which will easily run in 2 seconds. But what if the constraints were higher, like N <= 1e6? Then N(logN)^2 would be almost 4e8, which might not run in 2 seconds.

          • »
            »
            »
            »
            »
            »
            5 лет назад, # ^ |
              Проголосовать: нравится 0 Проголосовать: не нравится

Sorry, I don't understand. What does an element of the BIT represent? And why does it always follow a pattern like 1 3 3 10 5...? Thanks in advance.

            • »
              »
              »
              »
              »
              »
              »
              5 лет назад, # ^ |
                Проголосовать: нравится 0 Проголосовать: не нравится

You can learn about the Binary Indexed Tree from here: BIT. It has the best explanation I have found online so far.

  • »
    »
    5 лет назад, # ^ |
      Проголосовать: нравится 0 Проголосовать: не нравится

    Can you explain what you mean by unused here and the basic intution used here?

    • »
      »
      »
      5 лет назад, # ^ |
        Проголосовать: нравится 0 Проголосовать: не нравится

      Here, the term "unused" means the numbers that have not yet been included in the permutation (while iterating from the last). When a number is used, you can replace it with 0, so that it is not counted for numbers larger than it that occur before it.

»
5 лет назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

any Easy way to solve E ?

  • »
    »
    5 лет назад, # ^ |
      Проголосовать: нравится +3 Проголосовать: не нравится

    I think maybe segment tree can solve it.

  • »
    »
    5 лет назад, # ^ |
    Rev. 20   Проголосовать: нравится +8 Проголосовать: не нравится

    The first optimization tip I didn't quite cotton on at first is, instead of producing the answer values one by one, to instead create an array as a difference map (opposite of prefix sum, i.e. $$$D_i = ans_i - ans_{i-1}$$$), then getting the final answer from its prefix sum. This allows the input arrays to be processed individually, greatly reducing memory usage and avoiding the need for sorting. (Be sure to add a check to skip the "boring" changeless middle part for the smaller arrays to avoid $$$O(nw)$$$ or worse)
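To make the tip concrete, here is a tiny self-contained illustration (made-up numbers, not tied to the problem's input format): a range addition becomes two point updates on the difference array, and a single prefix-sum pass at the end recovers the per-column answers.

#include <bits/stdc++.h>
using namespace std;

int main() {
    int w = 8;                                          // row width (example value)
    vector<long long> diff(w + 2, 0);
    auto rangeAdd = [&](int l, int r, long long v) {    // add v to answers for columns l..r (1-indexed)
        diff[l] += v;
        diff[r + 1] -= v;
    };
    rangeAdd(2, 5, 7);                                  // e.g. one array contributes 7 to columns 2..5
    rangeAdd(4, 8, 3);                                  // another contributes 3 to columns 4..8
    long long run = 0;
    for (int j = 1; j <= w; ++j) {
        run += diff[j];                                 // run is now the answer for column j
        printf("%lld ", run);                           // prints: 0 7 7 10 10 3 3 3
    }
    printf("\n");
    return 0;
}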

    When processing the arrays, there are several ways you can use to get range maximums:

    • Segment tree. Main difficulty is implementing one, but it's worth learning, as you can use it to solve other problems. Once you have one, it's pretty easy to use; just query for the range between your pointers. $$$O(\log n) \times O(n)$$$ queries = $$$O(n \log n)$$$

    • Sorted multiset/countmap. Really easy to use, just add and remove (if using a countmap, be sure to delete zero-count entries), then grab the maximum. Same asymptotics as segment tree, but tends to be faster for this problem, as the multiset tends to keep a smaller size compared to the segment tree.

    • Deque. Fastest of these, with $$$O(n)$$$, but requires some strategic thinking to use here; details in spoiler.

    Spoiler

    Another tip is that whichever method you use, it can be helpful to think of the arrays as having fictitious "0" elements to its left and right (to represent the empty space). Whether you actually add those elements to the array is up to you, but be sure to adjust your math accordingly.

»
5 лет назад, # |
Rev. 2   Проголосовать: нравится 0 Проголосовать: не нравится

I am using unordered_map in B with binary search, but I am still getting TLE. The complexity after using unordered_map should be O(n^2 log(n)), right? Here is my submission: https://codeforces.com/contest/1208/submission/59490232

»
5 лет назад, # |
Rev. 3   Проголосовать: нравится +3 Проголосовать: не нравится

For an $$$O(n \log n)$$$ solution to problem B, it is possible to do it with a maximum segment tree + two pointers. It leads to 31ms and 256KB.

For example, my code: 59492725

»
5 лет назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

Can someone help me understand error in my logic for E please?

The submission is https://codeforces.com/contest/1208/submission/59494009 .

I tried doing it with max heaps for each input array and using restrictions on possible positions reachable tried to implement them.

»
5 лет назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

Can someone explain why i got TLE, and how to fix it?

Problem B : https://codeforces.com/contest/1208/submission/59492817.

  • »
    »
    5 лет назад, # ^ |
      Проголосовать: нравится +3 Проголосовать: не нравится

Your time complexity is $$$O(n^2 \cdot \log(n))$$$ but due to the constant factor of unordered_map it is getting TLE. Try this test case :

    testcase
»
5 лет назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

Missed problems on Graphs :)

»
5 лет назад, # |
  Проголосовать: нравится +161 Проголосовать: не нравится

What is the proof that all polygons in G should have a common point?

  • »
    »
    5 лет назад, # ^ |
      Проголосовать: нравится 0 Проголосовать: не нравится

    Is G a well-known problem? I was surprised to see so many people getting it AC'ed in <10 minutes.

    • »
      »
      »
      5 лет назад, # ^ |
        Проголосовать: нравится +5 Проголосовать: не нравится

      I don't think that it's known, but it is definitely possible to see the trick pretty quickly and then coding is just a matter of 3 minutes. I think that difficulty-wise it is more like Div1C.

  • »
    »
    5 лет назад, # ^ |
      Проголосовать: нравится +32 Проголосовать: не нравится

    Most of the people did proof by AC. Even after getting WA on pretest 11 I was still quite confident in the approach and just looked for some special case/overflow/off-by-one.

    It is a really nice problem, but I am also very much interested in the proof.

  • »
    »
    5 лет назад, # ^ |
    Rev. 5   Проголосовать: нравится +221 Проголосовать: не нравится

    The fact that in problem G all the polygons should contain the same vertex was in Saint Petersburg Lyceum 239 Mathematical Olympiad in 2011 as a problem.

    Zeroly, we will consider a single point and two diametrically opposite points as regular 1-polygon and 2-polygon as well.

    Firstly, all the polygons may be considered contained in a one big regular polygon $$$F$$$ with $$$N$$$ vertices inscribed in the same circle (otherwise the graph formed by the vertices and edges of our regular polygons is disconnected, and thus the number of vertices can be reduced). We will now prove by induction that for every positive integer $$$N$$$, if all the polygons are rotated so that they contain a common vertex, the total number of vertices shall not increase. The basis $$$N=1$$$ is obvious, so we shall now concentrate on the induction step from all numbers $$$1, 2, \ldots, N-1$$$ to $$$N$$$.

    Let $$$p$$$ be any prime dividing $$$N$$$. $$$F$$$ can now be separated into $$$p$$$ regular polygons $$$F_1, F_2, \ldots, F_p$$$ with $$$\frac{N}{p}$$$ vertices each. We are now able to highlight two types of our inscribed polygons:

    1. Ones that are completely contained in one of the $$$F_i$$$'s.
    2. The $$$k$$$-polygons that intersect with each of the $$$F_i$$$'s by a regular $$$\frac{k}{p}$$$ polygon.

    It's easy to prove that the first case takes place if and only if the multiplicity of the prime $$$p$$$ occurring in the prime factorization of $$$N$$$ is greater than in $$$k$$$, and the second takes place if and only if the multiplicities are the same. Since $$$k$$$ is divisor of $$$N$$$, exactly one of the cases takes place for each of the polygons.

    Let us now choose an arbitrary vertex $$$A_1$$$ of $$$F_1$$$ and rotate all polygons of the second type so that they contain $$$A_1$$$. It's easy to prove that since now there exist points $$$A_2, \ldots, A_p$$$ in polygons $$$F_2, \ldots, F_p$$$ respectively such that each of the polygons of the second type contains all the vertices $$$A_1, \ldots, A_p$$$. Now we will rotate each of the first-type polygons: if one is contained in $$$F_k$$$, we will rotate it so that it begins to contain $$$A_k$$$.

    After such two series of rotations, what happened? In each of the $$$F_k$$$ there are some regular polygons of the first type and some pieces of regular polygons of the second type that are regular polygons themselves. We rotated these polygons so that they still lie in $$$F_k$$$, but now they contain point $$$A_k$$$. By induction hypothesis after the rotations the number of vertices of $$$F_k$$$ covered by our polygons won't increase. Let's sum this up for all $$$\frac{N}{p}$$$-polygons, and we will acquire the following: the total number of vertices of $$$F$$$ covered by all our polygons won't increase.

    Finally, let's rotate the polygons of the first type so that they contain $$$A_1$$$ (we can rotate the polygons of the second type as well, but they will just stay still because they already have $$$A_1$$$ among their vertices). This won't increase the total number of vertices (but it can decrease it since some vertices from polygons of the first type may glue up).

    • »
      »
      »
      5 лет назад, # ^ |
        Проголосовать: нравится +31 Проголосовать: не нравится

      Thanks for the proof. The one we thought of during the preparation of the problem was flawed. Fortunately, the assumption is still correct. We would be more careful regarding the proofs in future.

»
5 лет назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

In F, the SOS DP is more like Sum over Supersets DP instead of Sum over Subsets DP. So shouldn't the inner loop go from Max (1 << bits) to 0 instead of 0 to Max?

»
5 лет назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

https://codeforces.com/contest/1208/submission/59496981 I am getting wrong answer on tc 54. Can't really figure out whats wrong.

»
5 лет назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

For the B bonus you can do it in $$$O(N \log N)$$$ with binary search on the range size; each candidate range size is checked with a sliding window + hashmap to find the number of distinct elements.

»
5 лет назад, # |
  Проголосовать: нравится +3 Проголосовать: не нравится

I think B can be solved in O(n) using hashing and two pointers. Am I wrong?

»
5 лет назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

May I clarify why the first approach for problem D has complexity O(n log n)? It seems to me that for i = 1 to N, we are doing a linear scan to decrease the elements accordingly, so shouldn't the complexity be quadratic? Thank you!

  • »
    »
    5 лет назад, # ^ |
    Rev. 2   Проголосовать: нравится +3 Проголосовать: не нравится

For each index from 1 to N, he uses a segment tree that handles both updating the values after index i and getting the answer for the number i. So, as the segment tree does these operations in $$$O(\log n)$$$, we achieve an $$$O(N \log n)$$$ time complexity.

»
5 лет назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

In problem C, I solved it by making the first column the first N even numbers,

then the second column the first N odd numbers,

the third column the next N even numbers, etc. So if N = 4 it will look like

0 1 8 9

2 3 10 11

4 5 12 13

6 7 14 15

I can see that for every column the resulting XOR equals 0, but why is it also true for every row? Is there any proof of that?

my solution: https://codeforces.com/contest/1208/submission/59502655

»
5 лет назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

Can someone explain the nlogn approach for problem B? What do we do after binary searching on the length we want to delete?

  • »
    »
    5 лет назад, # ^ |
    Rev. 2   Проголосовать: нравится 0 Проголосовать: не нравится

The binary search is applied to the size of the sub-segment we want to delete. For example, for an array of size 5 we set lo = -1 and high = 5, then find mid and check whether we can delete a sub-segment of size mid and have unique elements after that. If we can get unique elements with this mid size then we explore the range lo to mid in the same way, and if not then we explore mid to high.

Now, to check uniqueness you can use a sliding window with two pointers. Say you want to check a sub-segment size s: excluding this sub-segment, you have to check that the left part contains only unique elements, that the elements present on the left side are not present on the right side of the sub-segment, and that the right side contains only unique elements as well.

    In order to implement this we can use map, you can see the implementation here 59494540
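For reference, a hedged sketch of that check plus the binary search (hypothetical helper, made-up example array, using a map just like the linked solution): cnt counts how many copies of each value lie outside the deleted window, and dups counts values that occur at least twice outside it.

#include <bits/stdc++.h>
using namespace std;

// can we delete some sub-segment of length s and keep all remaining elements distinct?
bool check(const vector<int>& a, int s) {
    int n = a.size(), dups = 0;
    map<int, int> cnt;
    auto add = [&](int v) { if (++cnt[v] == 2) ++dups; };
    auto del = [&](int v) { if (--cnt[v] == 1) --dups; };
    for (int j = s; j < n; ++j) add(a[j]);   // deleted window starts at positions [0, s-1]
    if (dups == 0) return true;
    for (int i = 1; i + s <= n; ++i) {       // slide the deleted window to [i, i+s-1]
        add(a[i - 1]);                       // a[i-1] re-enters as part of the kept prefix
        del(a[i + s - 1]);                   // a[i+s-1] is now inside the deleted window
        if (dups == 0) return true;
    }
    return false;
}

int main() {
    vector<int> a = {1, 2, 1, 3, 3, 2};
    int lo = 0, hi = a.size();               // smallest s with check(s) == true
    while (lo < hi) {
        int mid = (lo + hi) / 2;
        if (check(a, mid)) hi = mid; else lo = mid + 1;
    }
    printf("%d\n", lo);                      // 3 for this example: delete {2, 1, 3}
    return 0;
}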

    • »
      »
      »
      5 лет назад, # ^ |
      Rev. 2   Проголосовать: нравится 0 Проголосовать: не нравится

Ahh, thanks a lot bro. Cool solution. But our solution is nlog^2n; is there an nlogn solution as well?

      • »
        »
        »
        »
        5 лет назад, # ^ |
          Проголосовать: нравится 0 Проголосовать: не нравится

Yes. You can compress the elements, which takes NlogN time, and then use an array for hashing in the check function of the binary search instead of a map, which drops one logN factor from your solution. So the total time in the worst case will be NlogN for compressing the elements plus NlogN for the binary search.

»
5 лет назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

Nice contest. Level of questions was good.

»
5 лет назад, # |
  Проголосовать: нравится +11 Проголосовать: не нравится

Problem H can be solved in $$$O(n \log n )$$$ .

The key is that $$$f_{i}-f'_{i} \in \{-1,0,1\}$$$, where $$$f'_{i}$$$ denotes what $$$f_i$$$ would be if we did not consider one particular son of $$$i$$$.

So we can use HLD to maintain the information simply.

And you can see the fastest submission to know more.

  • »
    »
    5 лет назад, # ^ |
      Проголосовать: нравится +92 Проголосовать: не нравится

    Have you ever considered using spaces and enters in your code?

  • »
    »
    5 лет назад, # ^ |
      Проголосовать: нравится +10 Проголосовать: не нравится

    How do you remove the 2nd log factor ($$$O(n \log^2 n)$$$ to $$$O(n \log n)$$$)? I think I see how to maintain $$$f_i'$$$ with linked lists instead of BST's, but how do you update heavy chains in $$$O(1)$$$ time?

    • »
      »
      »
      5 лет назад, # ^ |
        Проголосовать: нравится +10 Проголосовать: не нравится

      Maybe LCT can do this thing in $$$O(n \log n)$$$.

      And it's amortized,not $$$O(1)$$$ for each heavy chain.

  • »
    »
    5 лет назад, # ^ |
      Проголосовать: нравится +11 Проголосовать: не нравится

    You should put a link to the submission since it is not the fastest submission anymore.

  • »
    »
    5 лет назад, # ^ |
      Проголосовать: нравится 0 Проголосовать: не нравится

    Could you explain what your solution is? I don't see what you are maintaining in the HLD/LCT.

»
5 лет назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

Better explanation of Approach-1 for problem D ?

»
5 лет назад, # |
Rev. 2   Проголосовать: нравится 0 Проголосовать: не нравится

Can anyone please explain how we are using a segment tree to get the index of the last 0 in problem D? I could not get it during the contest either. smh. TIA.

»
5 лет назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

What is the time complexity in B? My solution is O(n) in Python.

n = int(input())
a = list(map(int, input().split()))
s, d = {}, {}
for q in range(n):
    s[a[q]] = q
    d[a[q]] = d.get(a[q], 0)+1
q2, ans = 0, n-1
for q1 in d:
    while d[q1] > 1:
        d[a[q2]] -= 1
        q2 += 1
f = set()
for q in range(n):
    ans = min(ans, q2-q)
    if a[q] in f:
        break
    f.add(a[q])
    q2 = max(q2, s[a[q]]+1, q+1)
print(ans)
  • »
    »
    5 лет назад, # ^ |
      Проголосовать: нравится -8 Проголосовать: не нравится

    This is not O(n). You can check the time complexity of Python collections here: https://wiki.python.org/moin/TimeComplexity

    Not having a for doesn't mean it's O(1). Python abstracts the most it can, so some builtin structure actually have a high overhead and it's important to understand the complexities of each one you usually use (otherwise you may think it's an O(n) algorithm when it's actually an O(n^2) like this one).

    Funny enough dict/set in Python seem to not be binary trees, so the complexities are O(n) instead of O(logn).

    • »
      »
      »
      5 лет назад, # ^ |
        Проголосовать: нравится 0 Проголосовать: не нравится

      I'm sorry for my English, this is Yandex translator.

Yes, in the worst case the dict / set time will be O(n), but on average everything happens in O(1); this is called amortized O(1). Therefore, considering that the operation works in O(n) is impractical.

And that means the solution works in O(n) multiplied by a constant.

      • »
        »
        »
        »
        5 лет назад, # ^ |
          Проголосовать: нравится +12 Проголосовать: не нравится

        I would not believe in these average cases hahah Too many TLE for using hash tables instead of log structures. You will see.

        And these are not amortized, average case is not amortized, don't go around saying that average is called amortized. You can search for the difference, but basically average case is considering a somewhat general input, you can still have worst case in every operation on a specific input. Amortized is independent of input cases, it basically take all operations times and divide by the number of operations.

        This is a hash table, the average case is expected to be O(1), worst case is still O(n). If you find some structure that can work as a map/set/hash table and has amortized O(1) for insert/query/remove, please share, you will be one of the most important people from our generation xD

        • »
          »
          »
          »
          »
          5 лет назад, # ^ |
            Проголосовать: нравится +7 Проголосовать: не нравится

          It even says it in the link I sent: "The Average Case times listed for dict objects assume that the hash function for the objects is sufficiently robust to make collisions uncommon. The Average Case assumes the keys used in parameters are selected uniformly at random from the set of all keys."

          Assuming that your input is random it's exactly what hashtables require. Since some cases are made by hand I could just see how python hash function works and create a case that every number would end in with the same key. Of course this problem has a big time limit and O(n^2) is still accepted, but in other problems that's usually an issue.

          I'm not imposing my beliefs, this is how hash tables work, I'm just trying to help you so in the future you understand that some TLE or MLE using hash tables are expected in non random input.

          Sad to see so many downvotes haha people should search when someone says something to try to see if it makes sense or not instead of believing in magic data structures that can sort every input in O(n) xD

»
5 лет назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

Here is my O(n) solution to 1208B — Uniqueness

Code
»
5 лет назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

Can someone explain to me how fast bitset works? My binary search solution in n^2*(logn)^2 got TLE on the main tests, which is obvious. But then I submitted the same thing using bitset, which got AC with time complexity n^2*(bitset optimization). I want to know how much it is optimized. link to the solution

»
5 лет назад, # |
  Проголосовать: нравится +1 Проголосовать: не нравится

Can you please explain why in problem C, after we multiply the numbers by 4 and make the additions (+1, +2, +3 to the zones), the XOR still remains 0 in each row and column?

»
5 лет назад, # |
Rev. 2   Проголосовать: нравится 0 Проголосовать: не нравится

Can anybody hack my solution for F or prove it's correct ?

here

The main idea is brute force .

»
5 лет назад, # |
Rev. 3   Проголосовать: нравится +1 Проголосовать: не нравится

In solution of problem C, "The XOR of each row and column would still remain 0 as N/2 is also even" , How ?

  • »
    »
    5 лет назад, # ^ |
      Проголосовать: нравится +9 Проголосовать: не нравится

The main fact this relies on is that $$$a \oplus a = 0$$$ for an integer $$$a$$$, so in an array in which every number appears an even number of times the XOR is $$$0$$$. Also, XOR acts on bits independently, so if every bit appears an even number of times in an array, the XOR is $$$0$$$. After copying the numbers into the $$$4$$$ quadrants, each row/column becomes a duplicated array (the first half equal to the second half). This way the XOR is $$$0$$$ (each number appears $$$2$$$ times, and in such an array the XOR is $$$0$$$). Now, multiplying by $$$4$$$ just shifts the numbers left by $$$2$$$ bits, so the XOR remains $$$0$$$. The last operation, adding $$$0,1,2,3$$$, just sets the last $$$2$$$ bits. Thinking of each row/column as a duplicated array as before, we only need to consider the last $$$2$$$ bits, because we proved the rest contribute nothing to the XOR. Looking at how we add, the last $$$2$$$ bits are the same for the whole first half of a row/column ($$$n/2$$$ numbers), and the same for the whole second half ($$$n/2$$$ numbers). Because $$$n/2$$$ is even, the XOR of each half over the last $$$2$$$ bits is $$$0$$$, so the XOR of the whole row/column is $$$0$$$.

    • »
      »
      »
      5 лет назад, # ^ |
        Проголосовать: нравится 0 Проголосовать: не нравится

      by multiplying by 4 you just shift the numbers by 2 bits.

      how?

      suppose in first quadrant number is 2 then in second quadrant number will be 8

      but 2 xor 8 !=0

      • »
        »
        »
        »
        5 лет назад, # ^ |
          Проголосовать: нравится 0 Проголосовать: не нравится

        Multiply by 4 is done in all quadrants. All numbers get shifted by 2 bits.

        • »
          »
          »
          »
          »
          5 лет назад, # ^ |
            Проголосовать: нравится 0 Проголосовать: не нравится

          if we multiply every thing by 4 wouldn't the numbers exceed n^2-1

          • »
            »
            »
            »
            »
            »
            5 лет назад, # ^ |
              Проголосовать: нравится 0 Проголосовать: не нравится

            No, because the numbers used in the quadrants were between 0 to n*n/4-1.

            • »
              »
              »
              »
              »
              »
              »
              5 лет назад, # ^ |
                Проголосовать: нравится 0 Проголосовать: не нравится

              say for first quadrant for n=4 {1 2} {3 4} then how do u proceed after that i see why we are shifting bits by 2 and alterring the last two bits even number of times but numbers are not starting form 1 if i multiply by 4 on every quadrant

              • »
                »
                »
                »
                »
                »
                »
                »
                5 лет назад, # ^ |
                  Проголосовать: нравится 0 Проголосовать: не нравится

                The numbers used in the quadrant start from 0 to n*n/4 -1 each exactly once. After multiply by 4, numbers range from 0 to n*n-4. Now changing last 2 bits increases number by at max 3. So, range becomes 0 to n*n-1.

    • »
      »
      »
      3 года назад, # ^ |
        Проголосовать: нравится 0 Проголосовать: не нравится

      very helpful explanation.

»
5 лет назад, # |
Rev. 2   Проголосовать: нравится 0 Проголосовать: не нравится

Can someone help me understand what's wrong with my logic in Problem B? My code failed in system test case 54. Here is my code ; https://codeforces.com/contest/1208/submission/59480026

»
5 лет назад, # |
  Проголосовать: нравится +32 Проголосовать: не нравится

59495293 passes system tests(test 139 is an after-contest hack) even though it ignores i<j<k...

»
5 лет назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

A lot of contestants had a problem with unordered_map (1208B - Uniqueness) when they wrote the binary search, so they got TLE. I know how to fix it! You can just give an id to each number in your array. This is correct because MAX_ID can be at most 2000 (if we have 2000 different numbers). So, after that we can do it like this :

how?

for each i.

So, my solution, which get TLE : https://codeforces.com/contest/1208/submission/59492817.

My "OK" solution : https://codeforces.com/contest/1208/submission/59534462.

»
5 лет назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

For the 2nd approach in D, why are we moving from the last position? Can this be proved somehow?

  • »
    »
    5 лет назад, # ^ |
      Проголосовать: нравится +28 Проголосовать: не нравится

    "If $$$s_i=x$$$, it means there are $$$k$$$ smallest unused numbers whose sum is $$$x$$$."

    Here "unused" means the elements $$$p_1,p_2,\dots,p_{i-1}$$$, so we need to check the array $$$s$$$ from $$$N$$$ to $$$1$$$. In other words, we need to move from the last position.

»
5 лет назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

Can someone please tell me why we use a segment tree in D? And also, how do we remove an element with value 0? Can't we update this value to some negative value instead of removing it? Please clarify.

  • »
    »
    5 лет назад, # ^ |
      Проголосовать: нравится 0 Проголосовать: не нравится

    Here, using segtree you get the rightmost minimum value and place current value i.e if you are checking for X then placing X over there. Yeah, for removal you update the value with a very large number not a small one

    • »
      »
      »
      4 года назад, # ^ |
        Проголосовать: нравится 0 Проголосовать: не нравится

      can you tell how to find rightmost element with 0 value, i have made all functions of segment tree (create, rangeupdate, query) but can't understand how to find index of last element with s[i] as 0

»
5 лет назад, # |
Rev. 2   Проголосовать: нравится -8 Проголосовать: не нравится

can anyone tell me why it is getting WA on test case 9 ? 59586904 I have implemented using binary search on BIT. But still getting WA..please help

»
5 лет назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

Hi! Can someone help me understand sliding window approach in question B? Thank you! :)

»
5 лет назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

why in problem C if i used the given example with N=4 as building block and keep on adding multiples of 16 (16,32,48,...) to make numbers distinct what does make me sure that the xor of all rows and columns will be the same ?

»
5 лет назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

Hello!

My code for 1208D is giving TLE on the first test case itself, but is running perfectly locally. Can anybody help?

»
5 лет назад, # |
Rev. 2   Проголосовать: нравится 0 Проголосовать: не нравится

Hello,

So, I've solved 1208D — Restore Permutations by using fenwick trees and binary search. But it's giving TLE, and rightfully, it should. So this is what I've done:

  1. Initialize BIT with all 0s.
  2. Do range updates for the BIT for each sum from 1,1+2,1+2+3,...,1+2+3+...n, by doing updates for ranges [1,1],[2,2],[3,3],[4,4],...,[n,n].
  3. Now, we're given the sum array. We go from i=n-1 to i=0, while doing the following:

    Find the sum inside the Fenwick tree,by doing a Binary Search, by using the Query method. We get the position of the sum inside the Fenwick tree. We know that this is the number that must be present at this position. So, we deduct this value (say v), from every element after i. This can simply use the normal update function.

  4. We do 3, and we get the answer array.

Now, time complexities!:

  1. O(n)

  2. O(n log n)

  3. O(n log n log n)

  4. O(n)

So, O(n log n log n + n log n + n) = 2 x 10^5 x ~16 x ~16+ 2 x 10^5 x ~16 + 2 x 10^5) > 5.12 x 10^7

That is way too much. So, where am I wrong? What should I optimize more? Here is my submission

  • »
    »
    5 лет назад, # ^ |
      Проголосовать: нравится +1 Проголосовать: не нравится

I also used a BIT and binary search, and my complexity is $$$O(n \log^2 n)$$$. Here is my solution: 59884969

    Let's define a function called sum: $$$sum(n)=1+2+...+(n-1)+n=\frac{n(n+1)}{2}$$$

    You can calculate the raw value ( the result ) from right to left ( i from n to 1) . Because at the position n, the $$$a[n]$$$ must be $$$sum(res[n])$$$, because all numbers smaller than $$$res[n]$$$ are on it's left side.

    Then you add this number to the BIT, and let i--. When you use binary search to find the number of $$$res[i]$$$, the sum you get on the number $$$mid$$$ is $$$sum(mid-1)-bit.query(mid)$$$, which means the sum of the arithmetic progression minus the sum of numbers that appears on the right side of it and smaller than $$$mid$$$. Then you can get $$$res[i]$$$ by using that binary search.

    Here's a small trick that if number $$$mid$$$ is used and $$$cur==a[i]$$$, then you should let l=l+1, because at this case the answer must be greater than this value.

»
5 лет назад, # |
Rev. 2   Проголосовать: нравится +8 Проголосовать: не нравится

Nice contest

»
5 лет назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

Can we implement Approach-1 for problem D using BIT? Thanks

»
4 года назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится
»
4 года назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

The tutorial of problem H has a typo. In the second line, the right expression should be "Note that when k=−inf all internal vertices are blue."

»
4 года назад, # |
Rev. 3   Проголосовать: нравится 0 Проголосовать: не нравится

In problem C, a hint to my way of solution is two observations:

1) XOR of $$$4k, 4k+1, 4k+2, 4k+3$$$ is zero.

2) XOR of $$$16k, 16k+4, 16k+8,$$$ and $$$16k+12$$$ is zero.

»
4 года назад, # |
Rev. 2   Проголосовать: нравится 0 Проголосовать: не нравится

Add 1, 2, and 3 to the numbers in 1st, 2nd and 3rd quadrants respectively. The XOR of each row and column would still remain 0 as N/2 is also even

Can someone please tell me why xor will not change ? Thanks in advance.

»
4 года назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

My n^2 log(n) solution gives TLE at test case 29:

https://codeforces.com/problemset/submission/1208/95378891

  • »
    »
    4 года назад, # ^ |
      Проголосовать: нравится 0 Проголосовать: не нравится

    I'm not sure if this is the exact reason for it, but you are using unordered_set, which has a high constant and can be hacked to run in linear time. I haven't really used unordered_set before, mostly I have just stuck to using set. See if changing to that helps in any case. I'm also not 100% familiar with the way that you did binary search, but if you have used it in the past then I would expect it works fine.

    • »
      »
      »
      4 года назад, # ^ |
      Rev. 2   Проголосовать: нравится 0 Проголосовать: не нравится

Hey, thanks so much for coming around. I've used set and got TLE at test case 5 this time. But didn't we expect that, since a BST just blows up the access time from constant to logarithmic?

      https://codeforces.com/problemset/submission/1208/95397240

      And yeah the binary search way that I used works fine — I learnt that from Codeforces EDU :) Same style of doing it worked well in the practice exercises given

Also, there is apparently an N log(N) solution (as per the editorial) which I spent a lot of time thinking about but couldn't get. If you have an idea for that, please include it in the reply as well.

      truly I say thanks again :) for reaching out

      • »
        »
        »
        »
        4 года назад, # ^ |
          Проголосовать: нравится 0 Проголосовать: не нравится

        unordered_set is usually O(1), but there are cases which could make it O(n) if you aren't careful. It doesn't seem like the test case that you TLE on though is a hack case, so this probably isn't the issue. O(n^2 log n) should definitely work for this question, so I'm not sure why your code is TLEing specifically on this testcase. The reason why I believe that it is something to do with unordered_set is because the previous testcase doesn't TLE even though the size of the list is approximately the same. An easy way to speed this up would be the compress the numbers and then use an array to store counts of numbers, but this doesn't seem necessary for this problem. I suggest trying to find another solution with the same complexity, I think that the constant factor just might not be good enough.

»
3 дня назад, # |
Rev. 2   Проголосовать: нравится +5 Проголосовать: не нравится

Sorry for necroposting. I was solving problem E and got MLE on test 14; then I did some optimizations and got AC. After that, I looked at the editorial and the author's code. The code was the same as my solution that got MLE on test 14, so I copied it and submitted it in C++20 (GCC 13-64), and surprisingly this got MLE on test 14 too. Then I resubmitted it in C++17 and got AC. Why could it get MLE?