hmehta's blog

By hmehta, history, 5 years ago, In English

Hi!

Topcoder SRM 755 is scheduled to start at 11:00 UTC-4 on April 15, 2019. Registration is now open for the SRM in the Web Arena or Applet and will close 5 minutes before the match begins.

Editorials: https://www.topcoder.com/blog/single-round-match-755-editorials/

Problem Setter: misof

This is the seventh SRM of Stage 3 of TCO19 Algorithm. This stage will also help you qualify and win reimbursements/tickets to TCO19 Regional Events.

Stage 3 TCO19 Points Leaderboard | Topcoder Java Applet | Next SRM 756 — April 25, 07:00 UTC-4

All the best!

  • +20

»
5 years ago, # | +24

755, cool number!
By the way, who is the writer(s)?

»
5 years ago, # | 0

How does one register? The web arena just shows the IIT Roorkee contest, with no mention of SRM 755.

»
5 years ago, # | +85

Did I participate in Div2 or what happened?

»
5 years ago, # | +49

Is the story real?

  • »
    »
    5 years ago, # ^ | Rev. 2, +28

    Yeah, it's true. I was clumsy, and for a while I had some bad cuts and had to avoid using my left hand to let them heal.

    It's back to more or less normal now; I can already type with both hands, and there will be no permanent damage.

    (The parts that were the actual problems are not necessarily true. I tend to avoid physical work, so definitely no road painting and such :) )

    • »
      »
      »
      5 years ago, # ^ | +17

      I wish you health, but is it some kind of post-irony to set such simple problems for the contest?

      • »
        »
        »
        »
        5 years ago, # ^ | Rev. 2, +41

        Hey :) I already posted about this multiple times in the past, but I'll gladly repeat myself again.

        This is actually reasonably close to the intended difficulty for our regular SRMs. We are trying to vary SRM difficulty a bit up and down between rounds, but in general we don't want SRMs to be about one problem for most of the div1 field. So we are aiming to make most of them such that the success rate is 80-90% for the easy, somewhere around 50% for the medium, and 10-20% for the hard. This covers the range of contestants' abilities much better than a problem set where the medium has 20 solutions and the hard has 0-2. There's enough time for those in the later TCO rounds :)

        (Also, we are looking into changing the format at some point in the future, allowing for a wider range of problem difficulties, but until that happens, we like this result better than the alternative where the problems are too hard for most of the field.)

        • »
          »
          »
          »
          »
          5 years ago, # ^ | -10

          I see, but today's medium is not even a problem. The difficulty of the medium was guessing what the author might have considered a problem there. My guess was that the "difficulty" was in summing an arithmetic progression, but I'm not sure.

          With this approach Topcoder might lose all the red coders, who will not participate in SRMs if they are sure they will learn nothing. I don't see that as a good thing.
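
          (For reference, the standard closed form presumably being alluded to: an arithmetic progression with first term $$$a_1$$$ and last term $$$a_n$$$ sums to $$$a_1 + a_2 + \dots + a_n = \frac{n\,(a_1 + a_n)}{2}$$$.)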

        • »
          »
          »
          »
          »
          5 years ago, # ^ | +5

          But these rounds decide who goes to TCO, so they should have problems suitable for TCO finalists.

        • »
          »
          »
          »
          »
          5 years ago, # ^ | +58

          "This is actually reasonably close to the intended difficulty for our regular SRMs."

          Oh my God. These problems are jokes. If you had some explanation like "I am so sorry I had to use these problems, we are currently really lacking interesting prepared problems and the resources to prepare better ones, but we have many new ideas, so stay tuned!" then maybe I would somehow forgive you. But if the TC coordinator states that this is actually the intended difficulty of an SRM, then I seriously start wondering whether I will ever participate in TC again. And I am sure I am not alone. These problems required less than 5 seconds of thinking in total, which is less than I spend on a median Div1A on Codeforces. The hardest problem is incredibly standard, the easy one is incredibly standard even for an easy problem, and the medium is just a joke as well.

          Why would you want to lower the difficulty of the problems so much, and moreover against the will of your users? A 50% success rate for the medium is ridiculously high.

          • »
            »
            »
            »
            »
            »
            5 years ago, # ^ | Rev. 2, +30
            1. I really don't want to blame my injury; it feels like a cheap way out.

            2. Ideally, I would like a contest format that allows me both to give reasonable challenges to the middle-rated coders to help them grow, and to give reasonable challenges to the (sometimes oh-so-very-vocal :) ) people at the top, to help them grow as well. The current contest format does not really allow that, so sometimes there will be an easy round. Not all of them will be as easy as this.

            3. This particular problem set would have been ideal if it came with a fourth problem harder than these three to give the top 20-30 people something more to do. Other than that, the round went fine.

            4. Even though you claim that the three problems were ridiculously easy, I don't think that the results agree with you. I think the problem set still managed to discriminate quite well at the very top (albeit by testing other parts of the skill set, not "being able to solve a ridiculously hard problem"), but also gave meaningful results for lower-rated coders.

            5. We do have interesting problems in the pipeline, and most SRMs will be at least somewhat harder than this one. Stay tuned!

            • »
              »
              »
              »
              »
              »
              »
              5 years ago, # ^ | +41

              Regarding your point 3, I suggested adding another problem to Div1 SRMs on the Topcoder Forums almost a year ago. If possible, please (re)consider at least trying it out.

        • »
          »
          »
          »
          »
          5 years ago, # ^ | +8

          Please, don't even necessarily respond to us, but ask yourself: do you really think that these three tasks represent a good, complete problem set for a div1 round, with costs of 250 (not 200), 500 (not 450) and 1000 (not 900)? Forget the statistics, the 80-90% for the easy and so on. Do you think these problems suit each other and can live in a single problem set? Are these problems good enough for their slots in any possible problem set? Do you think they contain at least one worthy idea? Do you personally like this problem set? Would you enjoy solving a contest like this? Do you think the contest went well? Have you reached your goal, whatever it was?

          • »
            »
            »
            »
            »
            »
            5 years ago, # ^ | Rev. 2, +22

            I'll pick your post as the one to reply to, as you are asking a lot of interesting questions.

            Do I really think that these tasks represent a good div1 problem set? Yes. It's close to the bottom (significantly easier would definitely be bad), but the problems gave a lot of non-red coders reasonable challenges, and even among reds the differences are obvious: tourist's final score is twice as high as the scores of some other people who managed to finish the set close to the end of the coding time. (See below for a comment on changing difficulty between SRMs.)

            Re the problems not being 250-450-900: I am surprised how often this is misunderstood, even by red coders. The score should not be some magical signal telling you "this problem is too easy, this one is too hard (for its slot)". The score determines how much the problem should be worth, first, compared to the other problems in that set, and second, compared to a single challenge, which is always worth 50. By setting them to 500 and 1000 I'm not claiming that they are as hard as other div1 mediums and hards; I'm saying that I don't want the challenge phase to matter more in this particular round.

            Do I think they contain at least one worthy idea? Yes, absolutely: I think the div1 hard is a lovely non-trivial problem. If you want to claim that it's obvious to all the div1 participants, I strongly disagree.

            Do I personally like the problem set? I'm not in love with it. I like the div2 part of it. The div1 easy and medium were... let's say ordinary. If you are red, the div1 medium is just about careful implementation, but for lower-rated coders it can be a good lesson in proving your greedy algorithm, as many of them can be tempted into running the greedy in the wrong direction. As stated above, I do like the div1 hard.

            Would I enjoy solving a contest like this? In two weeks I probably wouldn't remember it because it wouldn't give me a challenge.

            Do I think that it went well? Have I reached my goal? Given the state in which I was preparing the contest (re: the injury), I think it went well. There were no bugs, the statements were clear as always, and the results we got (and the rating changes for most participants) are more meaningful than they were for the unintentionally-too-hard round we had previously. I do think the round was a bit boring in terms of problem solving for the people at the very top, but we do need at least occasional rounds like this; we cannot have half of div1 solving nothing on a regular basis.

            P.S.: I understand the need to complain, and honestly, I might have done the same in your place. Viewed objectively, it's a good strategy: you want contests that give more to you personally. That being said, note that the people complaining today are rated red and higher. With the current format we have no better way to please everyone than to oscillate the difficulty up and down somewhat. Also, with three problems per round you will get this oscillation regardless of what you do; it's impossible to keep the difficulty constant. There will be easier SRMs and there will be harder ones, at least until we can come up with something better.

            • »
              »
              »
              »
              »
              »
              »
              5 years ago, # ^ | +58

              5-6 years ago I completely stopped solving Topcoder because almost every round for me was like "solve the first in 10 minutes, then do nothing", so I think a lot of people like me would support decreasing the difficulty of the second problem.

              • »
                »
                »
                »
                »
                »
                »
                »
                5 years ago, # ^ | -22

                Maybe it wouldn't be that bad if you did something in the remaining 65 minutes.

        • »
          »
          »
          »
          »
          5 years ago, # ^ | +28

          I agree with the 90%-50%-10% rule (if I remember correctly, it was stated somewhere long ago) as long as the distribution of contestants' skills is close to a normal distribution. However, at least in div1, the distribution is heavily skewed. In that case I'm not sure the "success rate" is the best measure of difficulty.

          Codeforces uses ratings to represent the difficulties of problems (see Mike's post). What are your target difficulties in terms of "difficulty rating"? (I suggest something like 1000-2000-3000 in TC rating.)

          • »
            »
            »
            »
            »
            »
            5 years ago, # ^ | Rev. 2, +3

            Difficulty-rating-based contest design is good, but I definitely don't think that 1000-2000-3000 is the best choice.

            The main reason is that the gap between 1000 and 2000 is too big. I know that the TC rating is not Elo-based, but assuming an Elo-like rating system, the probability of solving the Div1 Easy would be something like 5.3%, and similarly for the Div1 Medium. But that probably doesn't hold for very difficult problems. (I suggest something like 1300-2000-2900, meaning the Div1 Easy can be solved half the time by someone rated 1300.)

            This target rating (as opposed to a target solving rate) is surely controversial and should be discussed thoroughly. Also, I think there's a gap between misof's opinion and everyone else's, and between the opinions of reds and of yellows/blues.
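
            For reference, a sketch of the Elo-style expectation presumably behind that 5.3% figure: a coder rated $$$R_c$$$ solves a problem rated $$$R_p$$$ with probability $$$P = \frac{1}{1 + 10^{(R_p - R_c)/400}}$$$, so a 500-point gap gives $$$P = 1/(1 + 10^{1.25}) \approx 5.3\%$$$.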

        • »
          »
          »
          »
          »
          5 years ago, # ^ | +45

          Just increase the required rating to be in div1 instead of making problems easy.

          • »
            »
            »
            »
            »
            »
            5 years ago, # ^ | +21

            This is an interesting idea. When I have some time, I'll look into it. Thanks!

          • »
            »
            »
            »
            »
            »
            5 years ago, # ^ | 0

            +1

          • »
            »
            »
            »
            »
            »
            5 years ago, # ^ | Rev. 2, -19

            I don't think it's a good idea to mess with the division boundaries. That system has worked perfectly fine for almost 20 years. Besides, wouldn't any change at this point result in only around 30-50 div1 competitors?

            • »
              »
              »
              »
              »
              »
              »
              5 years ago, # ^ | +18

              Unfortunately, I can't say that everything is perfectly fine with the state of the Topcoder algo track these days.

              • »
                »
                »
                »
                »
                »
                »
                »
                5 years ago, # ^ | 0

                I meant the rating system. The infrastructure is not perfect, the tasks could have been better at times, and the participation level is lower. But the rating works fine. No inflation, no erasing history, etc.

                • »
                  »
                  »
                  »
                  »
                  »
                  »
                  »
                  »
                  5 years ago, # ^ | +5

                  Well, how do you know why people left? Maybe it was because previously blue and yellow coders had little to do during rounds.

        • »
          »
          »
          »
          »
          5 years ago, # ^ | +1

          As someone at the mid-div1 level, I found the set challenging enough, but I can see how some of the higher-rated coders would find it standard. There are a few ways in which we could fix the imbalance:

          1. Increase the div1 rating boundary to 1500: now even applying the 90/50/10 rule allows for harder problems. Applying the same rule to div2, however, means making the div2 hard harder than it currently is. I think this would require the least effort to implement.

          2. Have 3 divisions: a separate division for the blue/lower-yellow range. This would ensure all divisions get problems at an appropriate level. IMO, this is the ideal solution, but the number of participants is low, so I'm not sure how that would affect room allocation and hence challenges.

          3. Have 4 problems in 2 divisions: this would have consequences similar to point 2, but would the total contest time be enough? I feel this option would be tricky, as for most coders it would be a gamble deciding whether to open the 3rd or the 4th problem after solving the first two.

        • »
          »
          »
          »
          »
          5 years ago, # ^ | +8

          I'm in the very low 1200s, and I thought the previous SRMs hit the 90% easy / 50% medium target perfectly for me in terms of knowing how to solve the problems (my success rates were lower due to implementation bugs). The fun of an SRM lies much more in the problems than in the number of problems solved, so I liked the recent SRM with the 300-point easy problem about bridges much more, even though I didn't solve anything.

»
5 years ago, # | Rev. 8, +26

Some solutions using $$$2 \cdot 10^9$$$ operations in the Medium work fine, for example: https://imgur.com/a/fRdHzAo.

Good job, problemsetters!

UPD: surprisingly, this solution failed systests. However, both of the challenges

$$$\{1\}, \{2 \cdot 10^9\}, 1$$$

and

$$$\{2i \cdot 10^8 + 1\}, \{2 (i + 1) \cdot 10^8\}, 2$$$

were unsuccessful (I tried the second one because of the guess "maybe it's some optimization by the compiler," and thus decided to use $$$p=2$$$, leading to $$$10^9$$$ operations). Any thoughts on why this happened?

  • »
    »
    5 years ago, # ^ | +20

    Be aware that the computers executing systests are slower than the computers measuring your time when you test through the interface xD. Summoning mnbvmar. I don't know about the ones executing hacks, though.

    • »
      »
      »
      5 years ago, # ^ | +41

      Yeah. I once fell victim to a mysterious FST. There was a task where the main part of the solution was a costly precomputation; you could either find some patterns in your preprocessing, or be me and just try to squeeze the precomputation into 2s (this required something like a 5-6x speedup). Then you had to answer some queries, but that part was instantaneous (a couple hundred queries, $$$O(\log n)$$$ time per query).

      I optimized the precomputation so that it always ran in 1600-1700ms on manual tests. The precomputation didn't depend on the tests in any way, so I assumed that my solution would pass the system tests with a comfortable margin. Now imagine my surprise when it turned out that the solution failed due to TLE. :/

      Later I took the failing test and ran it as a manual test. Aaaand... it finished in 1600-1700ms. I submitted to practice: FST again, because of some other random test. I copied it back and ran it manually on the servers... 1600-1700ms again. This repeated consistently a few times. So I have a suspicion that the servers might work a bit slower when running systests. Who knows, maybe the system load is higher then and solutions run slower?

      Either way, this taught me that it's quite dangerous to submit solutions to Topcoder even if you're pretty sure you've got a ~20% TL margin. Be safe, try to keep execution times below 1.5s.
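
      A minimal sketch of the kind of safety check this suggests (hypothetical code, not Topcoder's API): time the expensive part locally and keep a margin well under the limit.

        #include <chrono>
        #include <cstdio>

        // Stand-in for the costly precomputation discussed above (hypothetical).
        long long precompute() {
            long long s = 0;
            for (long long i = 0; i < 200000000LL; i++) s += i % 7;
            return s;
        }

        int main() {
            auto t0 = std::chrono::steady_clock::now();
            long long result = precompute();
            auto t1 = std::chrono::steady_clock::now();
            double ms = std::chrono::duration<double, std::milli>(t1 - t0).count();
            // With a 2000 ms limit, aim below ~1500 ms: the judge may be slower
            // or more loaded during systests than during manual test runs.
            std::printf("precompute: %.0f ms (result %lld, target < 1500 ms)\n", ms, result);
            return 0;
        }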

      • »
        »
        »
        »
        5 years ago, # ^ | 0

        That doesn't seem like slower machines at all. If test A runs within the TL on one run but exceeds it on another, and on a resubmit it passes while some other test B gets TLE instead, then there isn't any consistent correlation between machine and running time for a fixed test.

        At first glance, it seems like random variation in running time. I've encountered that on CF too, although probably not on such a scale (only around ~100ms), so it could be something more complex.

        • »
          »
          »
          »
          »
          5 years ago, # ^ | 0

          A mere fluctuation doesn't explain why a manual run consistently gives 1600-1700 ms. It seems more likely that one of the servers is slower than the others: running all the systests almost surely means hitting that server, while running a particular test usually lands on a good one.

          AFAIR that's what happened with Codeforces once.

          • »
            »
            »
            »
            »
            »
            5 years ago, # ^ | Rev. 2, 0

            Maybe when you run everything at once, there's greater fluctuation (or random wait time) because of the greater load, but the average running time stays the same. In that case, it would come down to how the running time is calculated.

            Alternatively, maybe he didn't test the running time thoroughly enough to spot that it isn't always 1600-1700 ms even in single-test custom invocation. I know I never want to test problems super rigorously, even though the samples are usually trash and it's necessary to make sure I didn't make a stupid mistake...

            One slower machine, or a small subset of them, is also possible. I'm just skeptical about a large difference between the custom-testing and systesting environments, because that would be an unnatural architecture. Usually you have one pool of machines you can run jobs on in an automated way, so you don't have to think about the details.

  • »
    »
    5 years ago, # ^ | 0

    Actually, now I wonder whether I was lucky or whether there is an actual difference between these two solutions:

    • the one that Kostroma unsuccessfully tried to hack
    • the one that I successfully hacked

    Does anyone see a crucial difference between these two, from an architectural point of view (vectorization or something like that)?
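
    Not the actual contest code, but one generic effect that can produce a several-fold gap between two loops doing the same $$$\sim 2 \cdot 10^9$$$ additions is whether the compiler can auto-vectorize them; for instance, a loop-carried early exit usually blocks SIMD:

      #include <cstdint>
      #include <cstdio>

      int main() {
          const int64_t N = 2000000000LL;

          // Plain reduction: g++/clang at -O2/-O3 typically emit SIMD here,
          // processing several iterations per instruction.
          int64_t a = 0;
          for (int64_t i = 0; i < N; i++) a += i & 15;

          // Same work, but the (never-taken) early exit depends on the
          // accumulator, so the auto-vectorizer usually gives up and this
          // loop can run several times slower.
          int64_t b = 0;
          for (int64_t i = 0; i < N; i++) {
              b += i & 15;
              if (b < 0) break;  // never true here: b stays below 15 * N
          }

          std::printf("%lld %lld\n", (long long)a, (long long)b);
          return 0;
      }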

»
5 years ago, # | +1

So what happened with this round? Everyone is discussing something but I don't understand the situation at all...

  • »
    »
    5 years ago, # ^ | +19

    The problems were bad (easy and also boring), and people are blaming the setters, which is just another day at TC. IMHO, TC is overrated and people take it too seriously. If such a failure is the setter's will, then I'd rather accept it and not participate. It's not the IOI.

    • »
      »
      »
      5 years ago, # ^ | +32

      One reason I take TC seriously is that, after you graduate from IOI/ICPC, not many onsite contests are left (the "Big 3" of TCO+GCJ+FHC, maybe the AtCoder WTF; even Yandex was cancelled recently. What else?). Another reason is its great history.

      • »
        »
        »
        »
        5 years ago, # ^ | +6

        For mere mortals like me, taking 8 contests in a row for TCO is a crazy idea. TC results are highly luck-dependent (due to its tricky contest/grading system and low-quality problems). And 8 contests, mostly at different times, are very hard to afford for anyone at university. If I'm going to devote some serious effort, pulling out of appointments and such, then I need some expectations; but I'm not like you, so...

        For similar reasons, I don't like the AtCoder WTF, but its problems are usually great and the GP scoring is quite reasonable, so it's much more worth looking at.

        As for history, I think that would be before 2017, when I first learned how to use the Java applet. Well, I think I saw some cool rounds in 2017, but I can't recall any cool rounds in 2018-2019. (Recommendations, anyone?)

        • »
          »
          »
          »
          »
          5 years ago, # ^ | +44

          There have been 7 SRMs this season so far. In my opinion, the order of recommendation is $$$749 > 754 > 752 > 751 > 750 > 755 > 753$$$.

          749 and 754 are nice; I would recommend SRMs to you if they keep this quality.

          752 and 751 are fine; I think I'd be happy to use them in an ARC, for example, maybe after some small modifications.

          The other three should be quite straightforward for you.

        • »
          »
          »
          »
          »
          5 years ago, # ^ | +8

          From my point of view, the quality of the problems became very low after cgy4ever left (from SRM 732 on). Most of the problems are quite straightforward and just need careful implementation. Sometimes the div1 hard is easier than a CF div.2 E.

          And the system is bad too. Usually your rank just depends on whether you can implement the problems correctly with no feedback, and on how much time it takes you to solve them, rather than on whether you can come up with a nice idea. So I don't recommend taking part in TC for training. There are many better contests: AtCoder, Codeforces, etc.

          By the way, I think SRMs 727, 728 and 730 are nice. You can try them :)

          • »
            »
            »
            »
            »
            »
            5 years ago, # ^ | 0

            From my point of view, TC has always contained some problems that were about careful implementation under time pressure; it's just that now they sit at the Medium level rather than the Easy level.

            It's the contest format. With a little over an hour, and a steep drop in points shortly after you open a problem, a challenging problem will go untouched by most people and is actually worth far fewer points than it seems. I'd rather solve such problems when I know I have a chance to finish on time and still try the other problems too. Yellows/blues have it even worse.
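
            For reference, if I remember Topcoder's score-decay formula correctly: a problem worth $$$MP$$$ points, kept open for time $$$PT$$$ out of a coding phase of length $$$TT$$$, scores $$$MP \cdot \left(0.3 + \frac{0.7\,TT^2}{10\,PT^2 + TT^2}\right)$$$, so spending the whole 75-minute phase on a 1000-pointer already caps you at about $$$1000 \cdot (0.3 + 0.7/11) \approx 364$$$ points.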

            • »
              »
              »
              »
              »
              »
              »
              5 years ago, # ^ | 0

              I agree. Maybe it would be better to have at least 4 or 5 problems over about 2 hours?

              • »
                »
                »
                »
                »
                »
                »
                »
                5 years ago, # ^ | 0

                Yeah, and a slower point drop to go with the extra time, of course. And/or an increase in the division cutoff.

                • »
                  »
                  »
                  »
                  »
                  »
                  »
                  »
                  »
                  5 years ago, # ^ | +5

                  And rename it to Codeforces 2? As a blue past-university guy, I like that TC takes me just over an hour to take part in.

                • »
                  »
                  »
                  »
                  »
                  »
                  »
                  »
                  »
                  5 years ago, # ^ | +10

                  How about increasing the number of problems and the time only in div2, along with increasing the division cutoff? That way, you'll keep competing in the same format... in div2 ( ͡° ͜ʖ ͡°)

                  More seriously, some contest formats are better than others. It's fine if the current format is kept, but then the division cutoff should increase. Letting people with wildly different skills compete in a shorter contest just leaves more people unsatisfied.