hmehta's blog

By hmehta, history, 4 years ago, In English

Topcoder SRM 790 is scheduled to start at 11:00 UTC-4 on Sept 10, 2020.

Registration is now open for the SRM in the Arena or Applet and will close at 10:55 UTC-4. The coding phase will start at 11:05 UTC-4, so make sure that you are all ready to go.

Thanks to nikhil_chandak for writing the problem set. Also thanks to misof for testing and coordinating the round.

Choose Your Competition Arena

There are two ways to compete in Algorithm Competitions or Single Round Matches (SRMs). Read here to find out which works best for you.

Some Important Links

Match Results (To access match results, rating changes, challenges, failure test cases)
Problem Archive (To access previous problems with their categories and success rates)
Problem Writing (To access everything related to problem writing at Topcoder)
Algorithm Rankings (To access Algorithm Rankings)
Editorials (To access recent Editorials)

Good luck to everyone!

Vote: +49

»
4 years ago, # | Vote: +4

Gentle Reminder: The Round begins in 3 hours!

»
4 years ago, # | Vote: +36

Is this a div.3? I'm sorry, but this guy's maximal rating is lower than my minimal rating. Are you sure this should be rated for div.1 and affect TCO points?

»
4 years ago, # |
Rev. 2   Vote: +58

I'm so ashamed that I couldn't be the fastest to submit today's 250-point problem, because it is the same as a past 500-point SRM problem authored by me.

»
4 years ago, # | Vote: +46

Thanks, everyone, for participating in the round! I hope you all found something interesting and challenging. If you have any feedback, you can share it here anonymously: https://www.admonymous.co/nikhil_chandak

Unfortunately, no one was able to solve Div1-Hard (I guess ksun48's solution was very close but failed in system tests due to some bug). The rough stats on the percentage of users who successfully solved each problem (i.e. passed system tests) are quite disheartening:

  • Total Participants ~ 150
  • Div1-Easy ~ 90%
  • Div1-Med ~ 10%
  • Div1-Hard ~ 0%

I didn't expect such a drastic drop. Anyway, I hope future TC rounds turn out to be more balanced. I'm adding the editorial for Div1-Med and Div1-Hard below for convenience. The editorial for the whole problem set will be published on Topcoder later.

Div1-Med
Div1-Hard
  • »
    »
    4 years ago, # ^ | Vote: +38

    I still don't understand how to implement div1 Hard. I'm referring to the part where we change basis.

    What do you mean by swapping two vectors when we are about to xor them? To do Gaussian elimination we have to maintain some kind of row echelon form, so vectors in this form are already linear combinations of something from the basis. It's not clear what the index of $$$b_{i}$$$ is, since it's a linear combination. Do you do Gaussian elimination somehow differently?

    Also some trivia:
    1. In ptz 2016 there was a simpler problem with the same ideas, and it went unsolved during the contest. Did we evolve such that problems that were unsolved by the best teams in 5 hours four years ago are now OK to give in a 75-minute individual contest?
    2. I was the author of that problem and I still don't understand how to write this one.

    • »
      »
      »
      4 years ago, # ^ | Vote: 0

      Hey, I think simple code illustrating that will do more justice than words, so I have attached it below:

      GreedyBasisMaintenance

      The above code's idea can be easily extended to the original solution using bitsets and some Fenwick tree or set.
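
      Since the collapsed GreedyBasisMaintenance spoiler is not reproduced here, the following is a minimal independent sketch of the technique being discussed (at every pivot, keep the vector contributed by the largest index, so that queries can be restricted to a suffix of the inserted elements). A plain 64-bit mask stands in for the actual bit vector over primes, and all names are illustrative rather than taken from the attached code:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Illustrative sketch: an XOR basis over 64-bit masks in which every pivot
// keeps the vector contributed by the LARGEST index seen so far.  After
// inserting elements 0..r in order, representable(x, l) answers whether x is
// in the span of the elements with indices in [l, r].
struct GreedyBasis {
    static const int B = 64;
    uint64_t vec[B] = {};   // vec[b] has its highest set bit at position b (or is 0)
    int      idx[B] = {};   // index associated with the vector stored at pivot b

    void insert(uint64_t x, int id) {
        for (int b = B - 1; b >= 0; --b) {
            if (!((x >> b) & 1)) continue;
            if (!vec[b]) { vec[b] = x; idx[b] = id; return; }
            // "Swap before xor": keep the larger index at this pivot and keep
            // reducing the vector that carries the smaller index.
            if (idx[b] < id) { swap(vec[b], x); swap(idx[b], id); }
            x ^= vec[b];
        }
        // x reduced to 0: it is dependent on stored vectors with indices >= id.
    }

    // Is x a XOR of inserted elements whose indices are all >= l?
    bool representable(uint64_t x, int l) const {
        for (int b = B - 1; b >= 0; --b) {
            if (!((x >> b) & 1)) continue;
            if (!vec[b] || idx[b] < l) return false;
            x ^= vec[b];
        }
        return true;
    }
};

int main() {
    vector<uint64_t> a = {5, 9, 12, 7};
    GreedyBasis basis;
    for (int i = 0; i < (int)a.size(); ++i) basis.insert(a[i], i);
    cout << basis.representable(9, 2) << "\n";   // 9 is not in span of {12, 7} -> 0
    cout << basis.representable(11, 2) << "\n";  // 11 = 12 ^ 7                 -> 1
}
```

      The invariant is that vec[b] is always expressible using elements whose indices are at least idx[b], which is what makes the swap-before-xor step from the question above safe.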

      Thanks for sharing the trivia. I believe roughly two major ideas were required to solve the problem:

      1. Distinctly handling large and small primes.
      2. Greedily maintaining the basis for answering queries.

      Initially, we thought of keeping the problem without queries, but I felt that it required just one observation, which is not so hard once you imagine simulating Gaussian elimination naively and noting the redundancy. The version with static queries can generally be solved by the divide-and-conquer trick (or maybe even overkill it with a segtree?). If that were the intended method, it wouldn't really add much to the existing problem and would feel like just another mechanical step, as top coders would already be familiar with it. However, the method of greedily maintaining the basis for queries felt like an interesting challenge, and it certainly feels non-trivial to come up with.
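
      For what it's worth, here is a minimal sketch of the divide-and-conquer trick mentioned above for the static (offline) setting, assuming an illustrative query form "is x in the XOR-span of a[l..r]?" (the real problem's queries may differ): every query either falls entirely inside one half or crosses the midpoint, and crossing queries are answered from bases grown outwards from the midpoint.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Plain XOR basis over 64-bit masks.
struct Basis {
    uint64_t v[64] = {};
    void add(uint64_t x) {
        for (int b = 63; b >= 0; --b)
            if ((x >> b) & 1) {
                if (!v[b]) { v[b] = x; return; }
                x ^= v[b];
            }
    }
    bool spans(uint64_t x) const {
        for (int b = 63; b >= 0; --b)
            if ((x >> b) & 1) {
                if (!v[b]) return false;
                x ^= v[b];
            }
        return true;
    }
};

struct Query { int l, r; uint64_t x; int id; };

// Offline D&C: L[i] = span(a[i..mid]) and R[j] = span(a[mid+1..j]) are built
// outwards from the midpoint; a query crossing the midpoint is answered by
// merging its two halves, and the remaining queries are passed down.
void solve(int lo, int hi, vector<Query> qs,
           const vector<uint64_t>& a, vector<char>& ans) {
    if (qs.empty()) return;
    if (lo == hi) {
        Basis b; b.add(a[lo]);
        for (auto& q : qs) ans[q.id] = b.spans(q.x);
        return;
    }
    int mid = (lo + hi) / 2;
    vector<Basis> L(mid - lo + 1), R(hi - mid);
    for (int i = mid; i >= lo; --i) {
        if (i < mid) L[i - lo] = L[i - lo + 1];
        L[i - lo].add(a[i]);
    }
    for (int j = mid + 1; j <= hi; ++j) {
        if (j > mid + 1) R[j - mid - 1] = R[j - mid - 2];
        R[j - mid - 1].add(a[j]);
    }
    vector<Query> leftQ, rightQ;
    for (auto& q : qs) {
        if (q.r <= mid)      leftQ.push_back(q);
        else if (q.l > mid)  rightQ.push_back(q);
        else {
            Basis b = L[q.l - lo];                                  // span of a[l..mid]
            for (uint64_t v : R[q.r - mid - 1].v) if (v) b.add(v);  // merge span of a[mid+1..r]
            ans[q.id] = b.spans(q.x);
        }
    }
    solve(lo, mid, leftQ, a, ans);
    solve(mid + 1, hi, rightQ, a, ans);
}

int main() {
    vector<uint64_t> a = {5, 9, 12, 7};
    vector<Query> qs = {{1, 3, 5, 0}, {0, 1, 7, 1}};  // 5 in span(a[1..3])?  7 in span(a[0..1])?
    vector<char> ans(qs.size());
    solve(0, (int)a.size() - 1, qs, a, ans);
    for (char c : ans) cout << int(c) << "\n";        // expect 1, then 0
}
```

      This sketch stores a basis per position per level, which is memory-heavy; an actual implementation over prime-parity bit vectors would likely stream the crossing queries instead of keeping all the copies.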

      Overall, the problem did turn out to be a little too difficult for 75 minutes. I think it would have been fine had the contest been around 90 minutes long. After all, the implementation looks fairly doable to me once you have figured out the solution.

      On your point of whether we have actually evolved that much, I don't have any comment; maybe other experienced coders can address that better. On a side note, we did recently witness the very first problem of AGC 045 requiring a non-trivial application of Gaussian elimination.

      • »
        »
        »
        »
        4 years ago, # ^ |
        Rev. 2   Vote: +10

        When we have trouble talking about a solution, it is generally helpful to mention the complexity (and the (coming?) editorial should contain it).

        I got accepted with running time $$$\Theta\bigl(N \cdot (\#\{p \text{ prime} : p \equiv 3 \pmod{4},\ p \le \sqrt{\max A_i}\})^2 / W + Q \log N\bigr)$$$, where $$$W$$$ is the word size (i.e. the /64 from bitset). Not sure about the worst case. Is the intended solution better than this?

        I think the difficulty is OK (hard enough, of course), but I see no reason why there were only a few small examples if you knew it was hard.

        • »
          »
          »
          »
          »
          4 years ago, # ^ | Vote: 0

          Our solution has the same complexity as yours, so that's the intended time complexity. Here, I just wanted to mention the idea of how to solve it (I shouldn't have called these solution sketches an editorial, I guess). The proper editorial will contain the complexity analysis and the reference solution.

          I do regret not having added enough examples to Div1-Hard. Given that a generator was being used, we could have easily added larger samples. I am sorry for that.

    • »
      »
      »
      4 years ago, # ^ |
      Rev. 4   Vote: 0

      You can check ksun's code (apparently there's a one-line bug) or my code for Easy Win.

      Explanation (Easy Win)

      The concept in Div1-Hard (maintaining a lexicographically maximum or minimum basis) is almost the same; set the weight of index $$$i$$$ to be equal to $$$i$$$. Using vectors of indices instead of bitsets seems to be fast enough.
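
      As an aside on the "vectors of indices instead of bitsets" remark: a sparse GF(2) vector can be stored as a sorted list of its set-bit positions, in which case xor is just a symmetric difference and the pivot is the largest stored position. A minimal sketch of that xor (names illustrative):

```cpp
#include <bits/stdc++.h>
using namespace std;

// A sparse GF(2) vector stored as a sorted list of set-bit positions.
// XOR of two such vectors is their symmetric difference, computed by a merge.
using SparseVec = vector<int>;

SparseVec xorVec(const SparseVec& a, const SparseVec& b) {
    SparseVec out;
    size_t i = 0, j = 0;
    while (i < a.size() && j < b.size()) {
        if (a[i] == b[j])     { ++i; ++j; }          // 1 ^ 1 = 0: position cancels
        else if (a[i] < b[j]) out.push_back(a[i++]);
        else                  out.push_back(b[j++]);
    }
    out.insert(out.end(), a.begin() + i, a.end());
    out.insert(out.end(), b.begin() + j, b.end());
    return out;
}

int main() {
    SparseVec u = {1, 4, 7};                        // bits 1, 4, 7 set
    SparseVec v = {4, 5};                           // bits 4, 5 set
    for (int p : xorVec(u, v)) cout << p << ' ';    // prints: 1 5 7
    cout << '\n';
}
```

      The greedy basis insertion sketched earlier in the thread carries over unchanged with this representation; only the xor and the pivot lookup change.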

  • »
    »
    4 years ago, # ^ |
    Rev. 2   Vote: 0

    I wonder how many people failed Med because of precision errors. I'm not used to seeing binary-search problems that ask for $$$10^{-9}$$$ precision, so I hesitated for a bit before coding this solution.

    Other than that, I don't even see how a solution with the correct idea that passes the examples can fail system tests :/ And there were a lot of such solutions.
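
    As a side note on the precision point, a common way to sidestep epsilon tuning in a real-valued binary search is simply to halve a fixed number of times; `feasible` below is a hypothetical monotone predicate standing in for the actual check:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Hypothetical monotone predicate standing in for the actual feasibility check.
bool feasible(double x) { return x * x >= 2.0; }

int main() {
    double lo = 0.0, hi = 2.0;      // invariant: the answer lies in (lo, hi]
    // 100 halvings shrink the interval by a factor of 2^100, far below any
    // reasonable precision requirement, with no epsilon comparison at all.
    for (int it = 0; it < 100; ++it) {
        double mid = (lo + hi) / 2;
        if (feasible(mid)) hi = mid; else lo = mid;
    }
    printf("%.9f\n", hi);           // ~1.414213562
}
```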

»
4 years ago, # | Vote: +3

Any updates on the complete editorial? Thanks in advance!