### VLamarca's blog

By VLamarca, history, 8 months ago

Hi!

In this blog I would like to humbly suggest a change in how hacking is scored in CF rounds. I've talked to some of my friends, and they agree that it feels a little unfair when someone who solved more problems scores less than someone who hacked more. A great example of this happening is the top positions (and other positions too) in today's round: Codeforces Round #482 (Div. 2).

I have two main suggestions on how to calculate hacking points in a way that I consider fairer. The specific way may vary, but the key points are:

1) The more times you successfully hack a certain problem, the less each additional hack should score. The first hack on problem A could get you 100 points, the second 50, the third 25, and so on (or something like this; maybe keep 50 for every hack after the first). A decreasing function feels fairer because, in most cases, once you have hacked a certain problem it is easier to hack it again: it is mostly a matter of luck whether your room contains many people who made the mistake you spotted. Notice that, under this suggestion, if you hack problem A once and then problem B once, each of them should still get you 100 points.

2) The harder the problem, the more a successful hack on it should score. Hacking problem A could get you 100 points, and hacking problem B 150 points, or something like this. In most cases it is harder to hack harder problems, because their solutions are generally longer and therefore harder to read. Also, fewer people solve harder problems, so it makes sense to increase the value of hacking them, to make it more attractive for people who passed the pretests on such a problem to try to hack others' solutions.
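To make the two suggestions concrete, here is a minimal sketch of such a scoring rule. The base values per problem and the halving rule are just the example numbers above; the real formula could of course differ:

```cpp
#include <map>
#include <string>
#include <vector>

// Hypothetical base values per problem (harder problems worth more, as in
// suggestion 2); the exact numbers are only the examples from this post.
const std::map<char, int> kBaseValue = {
    {'A', 100}, {'B', 150}, {'C', 200}, {'D', 250}, {'E', 300}};

// Points for one successful hack, given how many successful hacks this
// contestant already made on the same problem (suggestion 1: halve each time).
int hackPoints(char problem, int priorHacksOnProblem) {
  int points = kBaseValue.at(problem);
  for (int i = 0; i < priorHacksOnProblem; ++i) points /= 2;
  return points;
}

// Total hack score for one contestant's sequence of successful hacks.
// Hacks on different problems do not reduce each other's value.
int totalHackScore(const std::vector<char>& hacks) {
  std::map<char, int> count;
  int total = 0;
  for (char p : hacks) total += hackPoints(p, count[p]++);
  return total;
}
```

For example, three hacks on problem A would give 100 + 50 + 25 = 175 points, while one hack on A followed by one on B would give the full 100 + 150 = 250.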

I am happy to discuss ideas in this regard.

Thanks for reading! (Also, if you spot any mistakes in my English, I would appreciate hearing about them via private message :) )


By VLamarca, history, 9 months ago

Hi!

The main objective of my post is to make most Div. 2-only rounds rated for purple users.

This is beneficial because it would differentiate their ratings better. It would also mean more frequent participation for those people, which is good for the growth of the platform. It makes no sense to make a purple coder wait for a Div. 1 round, which is rare due to the difficulty of coming up with Div. 1 D/E problems, when in most cases those users don't even read those harder problems at all.

Another reason is that most Div. 2 rounds are hard enough for people with a purple rating. I assume the difficulty of contests should be high enough not to turn them into a typing race, which means it is good that few people solve all the problems. I will try to convince you that we can make Div. 2 contests rated for people with a higher rating without breaking this principle.

EDIT: Originally the title of this blog was "Please increase div2 upper rating limit", but this was misleading, as I only want to allow rated Div. 2 participation by purple users in Div. 2-only contests. By Div. 2-only I mean the contests that do not have a corresponding rated Div. 1 problem set.

## The distribution of participants in div. 2 and div. 1 (when there is a corresponding Div. 1 contest) could and should remain the same.

Some data to back my suggestion:

In Codeforces Round #476 (Div. 2 only) [Thanks, Telegram!], only one person solved all the problems officially, and I would say that more than 95% of all the unofficial purple contestants solved at most 4 of the 5 problems.

In Codeforces Round #477 (rated, Div. 1, based on VK Cup 2018 Round 3), let's assume that the contestants who solved 4 problems would have had a chance of solving all the problems in the corresponding Div. 2 round. Fewer than 50 (including red- and yellow-rated people) out of more than 450 Div. 1 contestants solved at least 4 problems.

In Educational Codeforces Round 42 (Rated for Div. 2), only one red-rated person solved all the problems, and the vast majority of purple-rated unofficial contestants could not solve 2 of the contest's problems. Also, three Legendary Grandmaster contestants could not solve all the problems.

With that being said, I suggest that most Div. 2-only contests should allow rated participation by users with a rating of up to 2200, or less depending on the difficulty of the specific round. But please do allow participation by more users in Div. 2 rounds when the difficulty is sufficient; this is healthy for the platform!

I am open to discussing ideas in this regard.


By VLamarca, history, 16 months ago

I was getting WA on this problem, but after suspecting that the problem statement might be wrong, I got AC with this code.

One sentence of the problem statement says: "The state of objects that do not have any nested objects will be '-'." From this I understood that when k = 0, s would necessarily be '-'. I then submitted this code, identical to the AC one with one exception: if s = '+' and k = 0, it would give TLE. It did indeed TLE, which shouldn't happen if the statement were correct.
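For reference, the probing trick described above can be sketched like this (the function name and the exact check are illustrative, not my actual submission):

```cpp
#include <string>

// Illustrative helper: returns true when the input contradicts the statement's
// guarantee "if an object has no nested objects (k == 0), its state s is '-'".
bool violatesStatement(int k, const std::string& s) {
  return k == 0 && s != "-";
}

// In the probing submission, the idea is to force a distinctive verdict (TLE)
// whenever the guarantee is violated, e.g.:
//   if (violatesStatement(k, s)) { while (true) {} }  // deliberate TLE
```

If the judge's test data obeyed the statement, this branch could never trigger, so a TLE verdict shows that some test violates the guarantee.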

Did I misinterpret the statement, or is the problem indeed wrong? In the latter case I will try to contact the author.


By VLamarca, history, 23 months ago

Hello!

My solution to this problem E implements the same idea as the editorial. But first I used an O(m*n*lg n) solution here (for m = 1000 and n = 5000, that is ~6*10^7 operations), which got TLE. I used a map in it, which made it slower. Then I tried using the map only in the beginning instead of all the way through the code (as shown here): first I mapped each string to the index of the element that the map would represent. Accessing the elements this way is O(1), so the solution became O(m*n) and got accepted.
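The speed-up can be sketched as follows (the names are illustrative, not from my actual submission): assign each distinct string an index once, up front, so the hot loop works with plain O(1) vector accesses instead of O(lg n) map lookups.

```cpp
#include <string>
#include <unordered_map>
#include <vector>

// Assigns a distinct index 0, 1, 2, ... to each distinct string.
struct Indexer {
  std::unordered_map<std::string, int> id;
  int indexOf(const std::string& s) {
    auto it = id.find(s);
    if (it != id.end()) return it->second;
    int next = static_cast<int>(id.size());
    id.emplace(s, next);
    return next;
  }
};

// Done once before the main loop: convert the strings to indices so the
// O(m*n) part of the algorithm never touches a map again.
std::vector<int> toIndices(const std::vector<std::string>& items, Indexer& ix) {
  std::vector<int> out;
  out.reserve(items.size());
  for (const auto& s : items) out.push_back(ix.indexOf(s));
  return out;
}
```

After this preprocessing, every per-iteration access is a direct array index, which is what removes the lg n factor (and the map's constant factor) from the main loop.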

But shouldn't the 3-second limit be enough for fewer than 10^8 operations in the first solution? I thought that 1 second meant 10^9 operations.

Sorry for the beginner's doubt. Possibly it's because I used many operations per iteration, but how should I have known? What do you consider the boundary, 10^7 per second?

PS: Actually, in the first solution the map is deleted and rebuilt in every iteration, which means that its size is not always the maximum n. This is actually faster than what I assumed in this post.