giorgosgiapis's blog

By giorgosgiapis, history, 2 years ago, In English

Perhaps this is an unpopular opinion, but I think we need more testers on the lower end of the rating spectrum. Having only red/orange testers for Div 2 contests might (and does) result in underestimating the difficulty of the proposed problems. Blue/cyan or even green/gray testers form a more representative sample of the actual contestants. I understand that there should be a few highly rated and experienced testers, but having only them can result in speedforces rounds for Div 2 participants.

  • Vote: I like it
  • +162
  • Vote: I do not like it

»
2 years ago, # |
  Vote: I like it 0 Vote: I do not like it

I agree

»
2 years ago, # |
  Vote: I like it +1 Vote: I do not like it

Yes, Div 2s are way too hard.

»
2 years ago, # |
Rev. 2   Vote: I like it +5 Vote: I do not like it

OK, what about Round 745? There were plenty of expert/specialist testers, but the round was still very unbalanced (sorry, problemsetters).

  • »
    »
    2 years ago, # ^ |
      Vote: I like it +1 Vote: I do not like it

I'm not saying blue/cyan testers will eliminate all unbalanced rounds, but I think they can help.

    • »
      »
      »
      2 years ago, # ^ |
        Vote: I like it +27 Vote: I do not like it

In my experience, blue/cyan testers are no better at evaluating a problem's difficulty than orange/red ones.

      • »
        »
        »
        »
        2 years ago, # ^ |
          Vote: I like it +15 Vote: I do not like it

That's nonsense. Problems that are easy for an orange/red can be much more difficult for a blue/cyan, so I don't understand why you think they would give the same evaluation. Why would I say a problem is easy if I'm unable to solve it during testing?

        • »
          »
          »
          »
          »
          2 years ago, # ^ |
            Vote: I like it +4 Vote: I do not like it

For example, I am able to solve D2A, D2B, D2C, and D2D. Are you saying that, for me, they all have the same difficulty?

          • »
            »
            »
            »
            »
            »
            2 years ago, # ^ |
              Vote: I like it 0 Vote: I do not like it

You missed the point. When someone is highly rated, they might not be able to estimate the difficulty gap between problems, and that's quite reasonable tbh.

            • »
              »
              »
              »
              »
              »
              »
              2 years ago, # ^ |
              Rev. 2   Vote: I like it 0 Vote: I do not like it

Hmm... there is something to that. I remember preparing my Round 741. Before testing, I was absolutely sure that problem C was rated 800-900 (it ended up at 1500 on Codeforces).

But this is only half of the truth. I've seen problems rated 1800-1900 that are hard for me, and problems rated 2900 that are pretty easy for me. To balance a round you need many testers, no matter how they are rated.

This is because a person at any rating can estimate the difficulty gap (if they solved the problem), but everyone (except tourist) has weak and strong sides. You just need many testers to get a combined view.

              • »
                »
                »
                »
                »
                »
                »
                »
                2 years ago, # ^ |
                  Vote: I like it 0 Vote: I do not like it

Agreed. Many testers from different rating ranges is probably the way to go. The problem is that, unlike the choice of coordinator, both the number of testers and who those testers are depend on the problem setter.

                • »
                  »
                  »
                  »
                  »
                  »
                  »
                  »
                  »
                  2 years ago, # ^ |
                    Vote: I like it 0 Vote: I do not like it

Sad truth. Maybe that's how it should be: a tester of each colour :)

          • »
            »
            »
            »
            »
            »
            2 years ago, # ^ |
              Vote: I like it +8 Vote: I do not like it

Maybe I don't know how testing works, but I believe that if there had been more cyan/blue testers for today's round, most of them wouldn't have been able to solve Div2D, and any sensible author would consider that a red flag, since most of the target audience might not be able to solve that problem. I can't say the same for orange/red testers: maybe it was an easy or okay-ish problem for them, and they solved it during testing, so the setter felt the problem was fine.

»
2 years ago, # |
  Vote: I like it +1 Vote: I do not like it

I agree... I think ideally there should be one tester of each rating color, something like Codeforces Round #736 (authored by Agnimandur), which had 32 testers spanning every color range. Having a bunch of testers in different rating ranges does not guarantee a balanced round, but it certainly makes a round more balanced imo...

»
2 years ago, # |
  Vote: I like it 0 Vote: I do not like it

Please allow me to be a tester

»
2 years ago, # |
  Vote: I like it -8 Vote: I do not like it

Huh? Then some random testers would start selling solutions. Eventually, the number of cheaters per round would increase. End of the world incoming with this kind of decision.

  • »
    »
    2 years ago, # ^ |
      Vote: I like it +15 Vote: I do not like it

    That's not how testing works. Testers are (almost always) people whom the authors know (fairly well) and trust, and are experienced participants who would (hopefully) not only never intentionally leak problems/solutions, but would also be careful about unintentionally leaking anything.