Sereja's blog

By Sereja, 10 years ago, translation, In English

Hello everyone!

Codeforces Round #243 will take place on Sunday, April 27th at 19:30 MSK. This is my eleventh Codeforces round, and I hope it won't be the last.

I'd like to thank Gerald and KADR for helping me prepare the round. Special thanks to Delinur for translating all the problem statements into English.

I strongly recommend that you read ALL the problems.

Problem point values in the first division are 500-1000-1500-2000-2000. In the second division they are standard.

Gl & hf ! :)

Tutorial.
Statistics.

  Vote: I like it +234 Vote: I do not like it

»
10 years ago, # |
  Vote: I like it -18 Vote: I do not like it

Thanks for the recommendation. :)

This information might be handy during the competition.

»
10 years ago, # |
  Vote: I like it +24 Vote: I do not like it

nice to have the regular rounds back :)

»
10 years ago, # |
  Vote: I like it +7 Vote: I do not like it

Another Sereja round to enjoy. Thanks and good luck to everyone :D

»
10 years ago, # |
  Vote: I like it -12 Vote: I do not like it

What's meant by Gl & hf ?

»
10 years ago, # |
  Vote: I like it -34 Vote: I do not like it

Hey Sereja! You are my favourite contest writer. I think you prepare very smart problems :)

»
10 years ago, # |
  Vote: I like it -14 Vote: I do not like it

Time for some Sereja and [?] problems ... :-)

»
10 years ago, # |
  Vote: I like it -10 Vote: I do not like it

Let's hope for math/ad-hoc problems!

»
10 years ago, # |
  Vote: I like it -7 Vote: I do not like it

Yes, another Sereja round :D

»
10 years ago, # |
Rev. 4   Vote: I like it -13 Vote: I do not like it

I strongly recommend that you read ALL the problems.

Not usually written by other authors... :)

»
10 years ago, # |
  Vote: I like it +8 Vote: I do not like it

Thanks for your efforts for preparing problems:)

»
10 years ago, # |
  Vote: I like it 0 Vote: I do not like it

*standard — not standart ;)

»
10 years ago, # |
Rev. 3   Vote: I like it +2 Vote: I do not like it

Problem D: I think you wanted to say "two cells are connected if they are adjacent in the sAme row or sAme column of the table", NOT "two cells are connected if they are adjacent in sOme row or sOme column of the table".

Am I right? ;)

»
10 years ago, # |
Rev. 2   Vote: I like it 0 Vote: I do not like it

DELETED

»
10 years ago, # |
  Vote: I like it +4 Vote: I do not like it

What is the approach to solve Div2 C / Div1 A?

  • »
    »
    10 years ago, # ^ |
    Rev. 2   Vote: I like it 0 Vote: I do not like it

    Simple brute force in O(N^3) + Sorting in O(NlogN) should easily pass the time limit.

  • »
    »
    10 years ago, # ^ |
      Vote: I like it +2 Vote: I do not like it

    Bruteforce, more or less. Try all possible ranges [l, r]; for each of them, put the numbers from that range into a multiset and the numbers not from it into another multiset. As long as swapping the largest number out of the range and the smallest number in the range improves the result (and you still have swaps available), swap them. Time: O(N^2 * K * log N).
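
    For reference, a minimal sketch of this approach (the input format and the limits n <= 200, k <= 10 are assumptions here, not quoted from the statement):

        #include <bits/stdc++.h>
        using namespace std;

        int main() {
            int n, k;
            cin >> n >> k;
            vector<int> a(n);
            for (auto &x : a) cin >> x;

            long long best = LLONG_MIN;
            for (int l = 0; l < n; l++) {
                for (int r = l; r < n; r++) {
                    multiset<int> in(a.begin() + l, a.begin() + r + 1);  // the range [l, r]
                    multiset<int> out(a.begin(), a.begin() + l);         // everything before l...
                    out.insert(a.begin() + r + 1, a.end());              // ...and after r

                    long long sum = accumulate(in.begin(), in.end(), 0LL);
                    // Greedily swap the smallest element inside with the largest outside
                    // while it improves the sum and swaps remain.
                    for (int s = 0; s < k && !out.empty(); s++) {
                        int lo = *in.begin();
                        int hi = *prev(out.end());
                        if (hi <= lo) break;                             // no improving swap left
                        sum += hi - lo;
                        in.erase(in.begin());       in.insert(hi);
                        out.erase(prev(out.end())); out.insert(lo);
                    }
                    best = max(best, sum);
                }
            }
            cout << best << "\n";
        }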

    • »
      »
      »
      10 years ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      So we have an O(N^2) for choosing the ends of the interval, but how can we do the rest in O(KlogN)?

      • »
        »
        »
        »
        10 years ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

        Since you are using a set, elements will be searched in O(log n) and you have to choose 'k' such elements.

        • »
          »
          »
          »
          »
          10 years ago, # ^ |
            Vote: I like it 0 Vote: I do not like it

          But inserting N elements will take O(N*logN) in a multiset. I feel the time complexity of this solution will be O(N^3*K*logN)

          • »
            »
            »
            »
            »
            »
            10 years ago, # ^ |
              Vote: I like it 0 Vote: I do not like it

            My solution along the same lines works in O(N^2 * K * log N).

            • »
              »
              »
              »
              »
              »
              »
              10 years ago, # ^ |
                Vote: I like it +1 Vote: I do not like it

              I'm sorry for the offtopic, but how do you get such O?

              • »
                »
                »
                »
                »
                »
                »
                »
                10 years ago, # ^ |
                  Vote: I like it +15 Vote: I do not like it

                Via \mathcal{O} in LaTeX.
                • »
                  »
                  »
                  »
                  »
                  »
                  »
                  »
                  »
                  10 years ago, # ^ |
                    Vote: I like it 0 Vote: I do not like it

                  this looks really cool! :)
                  and I think \mathcal{O} looks cooler than a plain O :D

          • »
            »
            »
            »
            »
            »
            10 years ago, # ^ |
              Vote: I like it 0 Vote: I do not like it

            My solution's complexity is O(N^2*logN + N*K*logN)

      • »
        »
        »
        »
        10 years ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

        As long as swapping the largest number out of the range and the smallest number in the range improves the result (and you still have swaps available), swap them.

        There are at most K allowed swaps and each can be done in O(log N) (because multiset).

    • »
      »
      »
      10 years ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      Any way to solve it without multiset?

      • »
        »
        »
        »
        10 years ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

        Sort the numbers that you'd put into multisets otherwise and do the swapping the same way.

    • »
      »
      »
      10 years ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      Can you provide the link to a properly implemented solution? Thanks in advance!

  • »
    »
    10 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Bruteforce all l..r on the array (two nested for loops), and try to calculate the best value on l..r by swapping the worst values with the best ones from 1..l-1 & r+1..n. I did it with sorting, but the best approach I saw is to create two priority queues "in, out" and take r-l+1 values from them (don't take more than "k" values from the "out" queue).

»
10 years ago, # |
Rev. 3   Vote: I like it +61 Vote: I do not like it

Div1 D was the same as this ONTAK problem.

EDIT: It also exists on StackOverflow.

  • »
    »
    10 years ago, # ^ |
      Vote: I like it +45 Vote: I do not like it

    Yes, it seems that it is a very well-known problem.

  • »
    »
    10 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Is there any detailed analysis of the complexity?

  • »
    »
    10 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    The one on StackOverflow does not require the sides to be parallel to the axes.

    • »
      »
      »
      10 years ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      Yes, but the algorithms described in the answers are very, very slow.

    • »
      »
      »
      10 years ago, # ^ |
        Vote: I like it +5 Vote: I do not like it

      At the bottom there is a link to a paper.

  • »
    »
    10 years ago, # ^ |
      Vote: I like it +37 Vote: I do not like it

    Btw, I like how the answer on StackOverflow which is clearly the most valuable has -1 votes :)

    • »
      »
      »
      10 years ago, # ^ |
      Rev. 4   Vote: I like it +18 Vote: I do not like it

      Because it just contains a link. Link-only answers are only valuable as long as the link is valid, so they are very susceptible to link-rot. One is expected to at least summarize the contents of the linked resource so that the answer does not lose its value if the link goes dead. Note that this particular answer does not even mention the title of the paper, which could be used to locate the resource later.

      Also I feel that pointing to a 23-page paper does not cut it for this problem. At the very least I would expect the poster to point to the page where the actual problem of finding squares is addressed. But I would really prefer to see a paraphrased overview over the algorithm, so that I can quickly examine whether it actually addresses the problem without reading through pages of scientific notation. Remember that Stack Overflow is for regular users not necessarily with a background in Computer Science.

      Stack Overflow still has the vision to create an archive of programming knowledge for the future, not just to make a single question asker happy for the moment.

»
10 years ago, # |
  Vote: I like it +8 Vote: I do not like it

What is the idea behind problem D (Div 1)?

  • »
    »
    10 years ago, # ^ |
      Vote: I like it +34 Vote: I do not like it

    I had an O(n^1.5) solution. A square is uniquely defined by its 2 top (or 2 bottom etc.) points. Split the horizontal lines into large (at least sqrt(n) points on the line) and small (otherwise); then, there are at most O(n^1.5) candidate squares whose 2 top points lie on a small line, and the same holds for those whose 2 bottom points lie on a small line and 2 top ones on a large one.

    There are at most sqrt(n) large lines. For each point on one of these lines and each large line below it, you can also check if there's a square for which this point and the one directly below it on that line are the 2 right points. There are also O(n^1.5) such checks.

    How do you check the existence of so many pairs of points, where both points of a pair lie on 1 horizontal line? Just make a vector saying whether there's a point with a given x-coordinate on that line. You can process many lines with 1 vector, taking just O(1) time per query + O(N) amortized time for filling and cleaning the vector.
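
    A rough sketch of that decomposition, if it helps (a sketch only: the sqrt(n) threshold, non-negative coordinates up to 10^5, and the input format are assumptions, and an unordered_set of encoded points stands in for the boolean vector described above):

        #include <bits/stdc++.h>
        using namespace std;

        int main() {
            ios::sync_with_stdio(false);
            cin.tie(nullptr);

            int n;
            cin >> n;
            unordered_set<long long> pts;
            map<int, vector<int>> byY;                  // y -> x-coordinates on that horizontal line
            // encoding assumes coordinates in [0, 10^5]
            auto enc = [](long long x, long long y) { return x * 200003 + y; };

            for (int i = 0; i < n; i++) {
                int x, y;
                cin >> x >> y;
                pts.insert(enc(x, y));
                byY[y].push_back(x);
            }

            const int T = max(1, (int)sqrt((double)n)); // a line is "large" if it has >= T points
            vector<int> largeY;
            for (auto &[yy, xs] : byY)
                if ((int)xs.size() >= T) largeY.push_back(yy);
            unordered_set<int> largeSet(largeY.begin(), largeY.end());

            auto has = [&](long long x, long long y) { return pts.count(enc(x, y)) > 0; };
            long long ans = 0;

            // Squares with at least one horizontal edge on a small line.
            for (auto &[yy, xs] : byY) {
                if ((int)xs.size() >= T) continue;      // small lines only
                for (int i = 0; i < (int)xs.size(); i++)
                    for (int j = i + 1; j < (int)xs.size(); j++) {
                        int x1 = min(xs[i], xs[j]), x2 = max(xs[i], xs[j]);
                        int d = x2 - x1;
                        // this pair as the TOP edge of a square
                        if (yy - d >= 0 && has(x1, yy - d) && has(x2, yy - d)) ans++;
                        // this pair as the BOTTOM edge, counted only when the top line is large
                        // (otherwise that square is already counted from its own top line)
                        if (largeSet.count(yy + d) && has(x1, yy + d) && has(x2, yy + d)) ans++;
                    }
            }

            // Squares whose both horizontal edges lie on large lines:
            // fix the top-left corner on a large line and a large line below it.
            for (int yt : largeY)
                for (int xl : byY[yt])
                    for (int yb : largeY) {
                        if (yb >= yt) break;            // largeY is sorted (map order)
                        int d = yt - yb;
                        if (has(xl + d, yt) && has(xl, yb) && has(xl + d, yb)) ans++;
                    }

            cout << ans << "\n";
        }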

  • »
    »
    10 years ago, # ^ |
      Vote: I like it +18 Vote: I do not like it

    Or you could use the fact that unordered_set is extremely fast and write something very short like this: click ^_^

    • »
      »
      »
      10 years ago, # ^ |
        Vote: I like it +1 Vote: I do not like it

      Wow, that looks useful. I don't know much about unordered_set beyond the fact that it erases a log-factor from the theoretical complexity (and reduces the practical one) :D

    • »
      »
      »
      10 years ago, # ^ |
      Rev. 3   Vote: I like it 0 Vote: I do not like it

      I really liked the following comparison: vx[x[i]].size() < vy[y[i]].size() in HellKitsune's solution.

      I believe O(n^1.5) hides behind it, doesn't it? How to prove that? :)

      • »
        »
        »
        »
        10 years ago, # ^ |
        Rev. 2   Vote: I like it 0 Vote: I do not like it

        After reading the code again, I think it is O(n^1.5)

      • »
        »
        »
        »
        10 years ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

        To be honest, I have no idea how to prove it formally. :(

        But informally, we will most likely achieve the highest number of iterations if for each point we minimize the difference between the number of points lying on the same horizontal and the same vertical line with it. It then only seems logical to assume that the worst case scenario for this algo would be when the points form a square, which would give us O(n^1.5) complexity.

»
10 years ago, # |
  Vote: I like it +4 Vote: I do not like it

Paper for problem D and harder versions of it!!!

Link

  • »
    »
    10 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Uhm, this paper is about checking whether a set of points contains the desired figure, not about counting them.

    • »
      »
      »
      10 years ago, # ^ |
      Rev. 3   Vote: I like it +8 Vote: I do not like it

      But the algorithm can easily be used to count the number of figures.

      It's proved in the article that there are at most O(N^1.5) squares in the plane, and the existence of each square is checked by brute force over all possible candidates.

      I used this article to solve this problem. Is there a faster approach than O(N^1.5 * log(N))? The log(N) is for checking whether a point is in the set, and I think we can avoid that logarithm, but what about a better estimate than O(N^1.5)?

»
10 years ago, # |
Rev. 2   Vote: I like it +8 Vote: I do not like it

How to solve B? It was awesome. :)) But I did not get the idea.

  • »
    »
    10 years ago, # ^ |
    Rev. 2   Vote: I like it 0 Vote: I do not like it

    Deleted.

  • »
    »
    10 years ago, # ^ |
    Rev. 2   Vote: I like it +14 Vote: I do not like it

    n, m <= k: brute force over every possible first row.
    Otherwise: freeze one row and calculate the minimum number of changes assuming this row is unchanged.
    6492596

  • »
    »
    10 years ago, # ^ |
    Rev. 2   Vote: I like it +26 Vote: I do not like it

    If n>10 or m>10, there must be at least one line that does not change. Then iterate over all such possible lines. Once we know one line, we know the vertical partition of the table and can easily find the minimum number of changes.

    If n<=10 and m<=10, iterate over all possible first lines (2^m) and again we know one line.
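
    A rough sketch along these lines (it leans on the usual observation that in a good table every row equals some pattern p or its complement; that observation, the limits n, m <= 100, k <= 10 and the input format are assumptions here, not quoted from the statement):

        #include <bits/stdc++.h>
        using namespace std;

        int main() {
            int n, m, k;
            cin >> n >> m >> k;
            vector<vector<int>> a(n, vector<int>(m));
            for (auto &row : a)
                for (auto &x : row) cin >> x;

            // If the table is wide but short, transpose it so the short side is the row length.
            if (m > k && n <= k) {
                vector<vector<int>> b(m, vector<int>(n));
                for (int i = 0; i < n; i++)
                    for (int j = 0; j < m; j++) b[j][i] = a[i][j];
                a.swap(b);
                swap(n, m);
            }

            // Candidate patterns for the line that stays unchanged.
            vector<vector<int>> cand;
            if (m <= k) {                           // short rows: try every possible row pattern
                for (int mask = 0; mask < (1 << m); mask++) {
                    vector<int> p(m);
                    for (int j = 0; j < m; j++) p[j] = (mask >> j) & 1;
                    cand.push_back(p);
                }
            } else {                                // otherwise some original row stays as it is
                cand = a;
            }

            long long best = LLONG_MAX;
            for (auto &p : cand) {
                long long cost = 0;
                for (auto &row : a) {
                    int d = 0;
                    for (int j = 0; j < m; j++) d += (row[j] != p[j]);
                    cost += min(d, m - d);          // turn the row into p or into its complement
                }
                best = min(best, cost);
            }
            cout << (best <= k ? best : -1) << "\n";
        }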

    • »
      »
      »
      10 years ago, # ^ |
        Vote: I like it +11 Vote: I do not like it

      I observed that with one line fixed you can compute the answer, but I didn't notice that at least one line must stay unchanged. Now it seems kind of obvious. Thanks. :)

    • »
      »
      »
      10 years ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      "If n>10 or m>10 there must be at least 1 line that does not change"-why?

»
10 years ago, # |
  Vote: I like it +5 Vote: I do not like it

very strong pretests
low chance for hacks

»
10 years ago, # |
  Vote: I like it 0 Vote: I do not like it

when will the ratings be updated?

»
10 years ago, # |
  Vote: I like it 0 Vote: I do not like it

Div.1 B: What about this case, what's the answer please?
3 4 10
1 0 1 1
0 0 1 1
1 1 1 1

»
10 years ago, # |
  Vote: I like it +3 Vote: I do not like it

I could not understand problem C correctly until someone explained it to me after the contest. A quote from the problem statement: "the element of sequence a with the maximum index among the chosen ones must be equal to the element of sequence b with the maximum index among the chosen ones; remove the chosen elements from the sequences."

If the maximum indices of the two chosen values are equal, it means you take the same number of elements from each sequence, doesn't it? Shouldn't the wording be "the index of sequence a with the maximum value" instead of "the element of sequence a with the maximum index"?

  • »
    »
    10 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    This means that the last deleted elements must be the same in both sequences.

    • »
      »
      »
      10 years ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      Ah... I think I was just wrong. Thank you.

  • »
    »
    10 years ago, # ^ |
      Vote: I like it -8 Vote: I do not like it

    What's wrong with removing the same number of elements from each sequence? It's a perfectly valid move.

    But since taking any more elements than the equal ones seems to be a non-optimal strategy, "maximum value" should mean the same as "maximum index".

    • »
      »
      »
      10 years ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      That's a clear explanation. I now understand the problem statement :) Thanks!

  • »
    »
    10 years ago, # ^ |
      Vote: I like it +61 Vote: I do not like it

    Though I got it correctly during the contest, I still don't see why it wasn't something like "both prefixes should end with the same number", which is shorter and easier to understand. Sometimes formal language is too formal.

    • »
      »
      »
      10 years ago, # ^ |
      Rev. 2   Vote: I like it +60 Vote: I do not like it

      Yes, I feel that sometimes problems on CodeForces are about deciphering the problem statement rather than understanding it and then having insights. Some authors also seem to be really into mathematical symbolic notation :)

»
10 years ago, # |
  Vote: I like it +10 Vote: I do not like it

why is rating update delayed?

  • »
    »
    10 years ago, # ^ |
      Vote: I like it -60 Vote: I do not like it

    Because it's going to be unrated!!!! hahahaha =)))))))

    • »
      »
      »
      10 years ago, # ^ |
        Vote: I like it -13 Vote: I do not like it

      That wasn't very funny, and that you needed to add your laughter at the end didn't help. I guess you're going to be downvoted.

»
10 years ago, # |
Rev. 2   Vote: I like it +1 Vote: I do not like it

What exactly is special about test 22 of Problem B (Div 2)? Many got WA.

»
10 years ago, # |
  Vote: I like it +10 Vote: I do not like it

Good round, thanks.

»
10 years ago, # |
  Vote: I like it +14 Vote: I do not like it

The problems in Div1 are pretty good, except problem D. Problem D is so classic that many people have solved it before (at least I know several well-known OJs with almost the same problem). And I thought its time limit was quite tight. I first used std::set but got TLE on pretest 13 several times. After I changed my code to use lower_bound on a sorted std::vector, I got accepted. That's why I think the time limit is somewhat too tight. Does anyone agree with me?

  • »
    »
    10 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    No, I don't agree with you. Operations like lower_bound (or set lookups) take O(log n), so you got complexity O(n^1.5 * log n) instead of O(n^1.5), and you shouldn't be surprised that you got TLE. What's more, the tests weren't in fact max-tests; they could have been even more time-demanding.

  • »
    »
    10 years ago, # ^ |
      Vote: I like it +8 Vote: I do not like it

    Besides, set<> has a large constant factor. Sorting is generally much faster than set<> operations, and this could serve as a lesson for you to prefer sorting in the future.
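
    A tiny illustration of that idiom (the container and values here are made up for the example): membership tests via lower_bound on a sorted vector instead of a std::set.

        #include <bits/stdc++.h>
        using namespace std;

        int main() {
            vector<long long> keys = {5, 1, 9, 3};
            sort(keys.begin(), keys.end());                          // build once, O(N log N)

            auto contains = [&](long long v) {
                auto it = lower_bound(keys.begin(), keys.end(), v);  // O(log N), small constant
                return it != keys.end() && *it == v;
            };

            cout << contains(3) << ' ' << contains(4) << '\n';       // prints: 1 0
        }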

  • »
    »
    10 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    I got my solution to pass after changing the point lookup from O(log(n^2)) (i.e. 2 log n) to O(log n): simply store a set for each possible x instead of one set for all points. However, I think the time limit is fine after all.

  • »
    »
    10 years ago, # ^ |
    Rev. 3   Vote: I like it +3 Vote: I do not like it

    Look at my solution 6490475.

    Although I was aware of the N^1.5 solution, my solution, based on the two-pointer method, is O(N^2).

    If the tests were strong enough to contain a case with two far-apart parallel lines (<0,0>, <100000,0>, <0,1>, <100000,1>, ...), my program would easily TLE.

    I realized this only after the contest had ended, so I was lucky to pass problem D.

»
10 years ago, # |
Rev. 2   Vote: I like it 0 Vote: I do not like it

At least to me, C/Div.2 was much easier to implement than B/Div.2. Does anyone think the same?

»
10 years ago, # |
  Vote: I like it +11 Vote: I do not like it
»
10 years ago, # |
  Vote: I like it 0 Vote: I do not like it

My solution for problem D got AC luckily. I thought its complexity was O(n^1.5), but it is really O(n^2). For each point, my solution collects the points to its right, the points below it, and the points to its lower right, then uses three pointers to count the squares. The complexity becomes O(n^2) on this kind of input:

20
0 1
0 2
0 3
0 4
0 5
0 6
6 1
6 2
6 3
6 4
6 5
6 6
6 7
6 8
6 9
6 10
6 11
6 12
6 13
6 14

Running the solution on larger inputs (of the same type as above), I get the following statistics:

  • On my computer (Intel Pentium 2.2 GHz), n=100000: 22.312 s
  • On my computer, compiled with -O2, n=100000: 3.444 s

I cannot run the solution on Codeforces because my input is larger than 256 KB (mine is 1020 KB).