
ACGN's blog

By ACGN, 14 months ago, In English

As most people know, in yesterday’s round, Codeforces Round 852 (Div. 2), problem F coincided with a previous problem on Codeforces. A few months ago, a similar thing happened in Codeforces Round 810 (Div. 1), which also had an issue of coinciding problems.

Of course, we know what happened next: Codeforces Round 810 (Div. 1) was made unrated for Div. 1, while, as it stands, this round will remain rated, as per Mike’s latest blog post. I won’t elaborate further on that post; I assume you are familiar with Mike’s rationale.

I mulled over that decision for a while, and I’d like to raise a few points. First, let’s ask: is it fair? Is it fair, for all parties involved, to keep the contest rated, or conversely, to make it unrated?

For the contest authors, it is undeniably unfair if the contest becomes unrated because of factors beyond their control. For those who performed well in the round, it would also be unfair to have their good performance discredited.

But on the other hand, for those affected by the issues with the contest, who might have lost quite a bit of rating because of them, is it fair to keep the contest rated? At the top of the leaderboard, a small shift in placement can already mean a sizable rating difference. Taking Round 851 as an example, the performances at #19 (2600) and #57 (2400) differ by 200 points over a span of fewer than 40 places, while the “specialist” performance range covers a far larger number of places. So while it might not matter for the majority, those near the top would be hit hard: a 200-point difference in performance could translate to a difference of 50 in delta. Thus, you can’t say that keeping the contest rated affects no one.

The impact was even greater in Round 810. In that round, 170 participants solved problem E, most of them presumably copying from an online source. Someone who refused to copy might lose many places simply by being honest; jiangly finishing 27th is evidence of this.

In that case, the round was thoroughly ruined. The standings could barely be associated with skill; rather, they became a question of conscience. If you were Mike, even assuming the problem wasn’t deliberately copied, would you leave the round rated? Some questions begin to come to mind. It isn’t fair to discredit a good performance, but is it fair to disregard this massive disruption?

POI: it is also a form of “skill” to be able to associate the problem at hand with a previously solved problem.

This brings us to the next point: “cheating”. Yes, it is written in the contest rules that we shall not discuss problems until the end of the round. I’m sure that not all of the 100+ people who solved F did so on their own; some presumably saw the issue mentioned somewhere and decided to copy. But how do we enforce that? If we ban or skip all participants who got AC on that problem, it isn’t fair: we would skip quite a few participants who legitimately figured the problem out, and you cannot prove that they were cheating. Copying the solution of a previous problem cannot, by itself, be considered cheating. Nor is a round ruined simply because many people solve a difficult problem: if the solution to a non-coinciding problem F were leaked and posted in a large Discord server, we wouldn’t say the round is ruined; that would just be another case of large-scale cheating, which Mike has plenty of resources to combat. This is not.

The issue with Mike’s argument is that just because the coincidence isn’t the authors’ fault, and no blame should be put on the authors, it doesn’t follow that the contest should stay rated. If Codeforces suddenly malfunctions 20 minutes into a contest and cannot recover for a few hours, is it rational to keep the round rated? With such severe disruption, the round is no longer fair for its participants. Making the contest unrated would be a net negative for the authors, but from the point of view of an author, do unfair rating changes mean as much as fair ones? In this case, making the round unrated is by no means a punishment of the authors; it protects the rights and the contest experience of all the affected participants.

Sure, as a problemsetter you want the thrill and satisfaction of being able to change others’ ratings (among other things, of course), but is that meaningful if the changes don’t reflect actual skill? And would you be proud of causing rating changes that you yourself consider unjust? Whether a contest is rated shouldn’t be a credit system, and making a contest unrated does not, should not, and never should imply a punishment of the authors, or any fault on their part whatsoever. My belief is that when a round has been seriously ruined, as the examples above demonstrate, it should be unrated. This is for the benefit of the participants overall.

Another issue is that by the time such an incident comes to light, people already believe that the round is unrated, myself included (even though I’m not a rated participant). There are definitely people who, believing that the contest is unrated, treat it much less seriously.

This brings us to another issue: the “unrated mentality”, the belief that the round is unrated because of issues with it. This might happen with coinciding problems, or with major issues on Codeforces itself; I recall that during a previous round with major server issues, many of my friends were adamant that the round would be unrated. It therefore came as a massive shock to all of us when it remained rated, and by then I had already stopped focusing on the contest. Yes, I wasn’t a rated participant. Yes, nothing was at stake for me. But the same belief holds for rated participants as well. And some might believe that, since the round is going to be unrated anyway, it is no longer necessary to adhere to Codeforces rules. In a problem-coincidence scenario, some publicly cheat by reporting in a public blog that the problem is a duplicate; this happened in Round 810, and it seems to have occurred yesterday as well. (I have no solid evidence of what happened, as I didn’t check until later on.) Sadly, it is this kind of action, and this mentality, that exacerbates the situation: when people no longer treat the contest seriously, the contest is further ruined as a competitive occasion. The same can be said for network issues: when a round is disrupted by major judging issues lasting upwards of 15 or 20 minutes, most people will already have lost their mood and momentum.

I repeat: it is this mentality that leads to the worsening of such issues. As such, to tackle them, we should start by rectifying this mentality.

What prompts this response? The belief that the round will become unrated. One easy remedy, then, is a prompt reassurance, through an announcement as soon as an issue occurs, that the round will remain rated (or not). But this does not address the issue of “cheating”, and such an announcement might only draw more attention to the problem.

Thus, anti-cheating measures should be put in place. In yesterday’s round, blog posts were blocked. This is a step in the right direction, though simply disabling comments during contest time, or delaying their publication until after the contest, would already be an effective way to minimize the damage. The rest is up to the participants: it is their responsibility to keep the Codeforces environment fair by refraining from discussing problems in private. If necessary, the problem in question can also be cancelled mid-contest, depending on the number of solves prior to the discovery of the issue. A timely cancellation would minimize the number of people impacted by having attempted the problem early, and keep the contest rated and as fair as possible. (As a side note, the reason we can’t just remove problem F now is that quite a few people spent time on it, thinking it was easy and overrated, and they would be heavily disadvantaged.)

In summary, if a round has been ruined, I believe it should become unrated. But as long as we take timely measures to nip “cheating” in the bud, we can very well keep a contest rated and everyone happy.


»
14 months ago

100% agree. I also had the “unrated mentality” and treated problem D as a “fun” problem.