
By MrDecomposition, history, 24 hours ago, In English,
Problem A
Problem B
Problem C
Problem D
Problem E
Problem F
Problem G
Problem H
Problem I

Score: +305

By chokudai, history, 23 hours ago, In English,

We will hold AtCoder Beginner Contest 173.

The point values will be 100-200-300-400-500-600.

We are looking forward to your participation!

Score: +60

By nikgaevoy, history, 27 hours ago, In English,

Finally, I have finished implementing an improved version of the TrueSkill rating system, which EbTech named "TrueSkill from St.Petersburg".


Results are here. Full repository is here.

Important notes on the results

Note that these results were obtained by running on EbTech's testset (only rounds before Codeforces Round #645, with team contests excluded) and thus do not represent the current standings on Codeforces. Also, the file contains only users with at least 10 contests and at least one contest in 2020 (otherwise, the file would be very large). However, you can obtain the full rating by running the algorithm on your own computer; it should take about a minute. Also note that the results may not look very impressive, since I chose the initial parameters out of the blue and they should of course be tuned.

What is this rating system?

It is like the original TrueSkill, but without its problems, such as multiway ties. What is TrueSkill? It is a generalization of Elo/Glicko to the case where many teams compete simultaneously. However, the original TrueSkill was designed for many players, large teams, and few ties, and therefore should not work correctly in our case. The improved version of TrueSkill was designed for the sport version of "What? Where? When?", so it solves the original TrueSkill's problems and perfectly matches Codeforces-like competitions.


  • It is mathematically perfect. It does exact Bayesian inference on the model with many simultaneous players (just like Codeforces), with no approximate inference or any other tweaking of the results. Therefore, no other rating system could be more precise. However, there is room for customisation to fit your personal preferences (the stability of the rating, etc.).

  • It is very fast. As I have already mentioned, it processes almost the entire Codeforces history in less than a minute. The complexity of processing a contest with $$$n$$$ players is $$$\mathcal{O}\left(n \log \frac{1}{\varepsilon} \right)$$$, where $$$\varepsilon$$$ is the designated precision of users' ratings.

  • Teams out of the box. Although the results above were obtained on tests with team contests excluded, that is not a limitation of the rating system. The only reason I excluded team contests from the testset is that the CF loader I took from EbTech's repository doesn't process team contests, and I didn't want to spend time writing a loader myself. Currently, team performance is computed as the sum of the players' performances, but this could easily be changed to any other formula.

  • Reduced rating inflation, explicit performances, visible rating feature and more.

  • tourist on the top of the rating!


  • Some formulas lack numerical stability. This happened because the formulas in the original article contain mistakes (at least they fail on some obvious tests, such as $$$(-m) \cdot \chi_{[-\varepsilon, \varepsilon]} = -\left(m \cdot \chi_{[-\varepsilon, \varepsilon]}\right)$$$), and I failed to retrace the author's steps in the computations. However, my formulas seem to be somewhere near the truth, so I believe this can be fixed.

Further discussion

Rating system parameters

As I already mentioned, I haven't spent much time choosing the best parameters for the Codeforces case, so there is still some work with the data ahead.

Normal vs logistic distribution

Which distribution describes player rating and performance better? Any choice is very debatable. Currently my implementation uses the normal distribution, but it could easily be replaced with the logistic or any other distribution; the only thing you need to do is implement the operators: multiplication, division, addition, and two multiplications by indicator functions (the last two are trickier than the others, but still not very hard; see the article's appendix for an explanation of what to do and this file as an example for normal distributions).
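As a standard illustration of why the first three operators are easy for the normal case (this is textbook Gaussian algebra, not code from the repository): the product of two Gaussian densities is again proportional to a Gaussian,

$$$\mathcal{N}(x; \mu_1, \sigma_1^2) \cdot \mathcal{N}(x; \mu_2, \sigma_2^2) \propto \mathcal{N}\!\left(x;\ \frac{\mu_1\sigma_2^2 + \mu_2\sigma_1^2}{\sigma_1^2 + \sigma_2^2},\ \frac{\sigma_1^2\sigma_2^2}{\sigma_1^2 + \sigma_2^2}\right),$$$

and division and addition have similar closed forms. Multiplication by the indicator $$$\chi_{[-\varepsilon,\varepsilon]}$$$ instead produces a truncated distribution that falls outside the family, so it has to be projected back (e.g. by moment matching), which is presumably why those two operators are the tricky ones.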

Large multiway tie in last place

Codeforces standings have an unusual artifact: a large multiway tie of players with zero score, which does not perfectly match the theory and possibly affects those players' ratings too much. There are many ways to deal with this problem, e.g. excluding them from the rating entirely, using a separate tie-$$$\varepsilon$$$ for them, etc.


I'd like to thank EbTech for starting the discussion that led us here, S. Nikolenko for pointing the solution out to me, and MikeMirzayanov for the great Codeforces system, which crashed only twice during my experiments.

Score: +237

By Adamenko, 7 hours ago, In English,

A few years ago I read about a great data structure — ordered_set — which supports operations like:

1. Insert an element into the structure.

2. Remove an element from the structure.

3. Find the k-th element in the structure.

4. Find the order (rank) of an element in the structure.

So now, I decided to check what is faster: the built-in ordered_set or my own implementation of a treap.

I'm going to conduct the testing on a simple task that ordered_set solves.

You are given queries of the three types described above (operations 1 — 3).

For completeness of the experiment, the codes are tested at various values of Q (the number of queries) and various ratios of operations.

That's what I did!

This table shows the running time of the codes depending on the number of queries and the ratio of operations. Each cell contains the running times (on the left side of the cell — ordered_set, on the right side — treap).

To my surprise, the built-in ordered_set runs in about the same time as the treap I wrote (sometimes faster).

Apparently, the built-in ordered_set is written optimally, so there is no point in writing your own (except when you need to merge several ordered_sets).

Score: +72

By throwawayatcoder, history, 19 hours ago, In English,

In Codeforces Global Round 9, conqueror_of_tourist became the first Python user (to my knowledge) to reach Grandmaster, with a rating of 2464!

Considering how much Python is derided by most of the people in the CP community, some people thought this was impossible. conqueror_of_tourist has truly mastered writing cancer code in Python. Feel free to PM him for all of your Python needs :^)

Score: +172

By ismail, history, 3 days ago, In English,

During Codeforces Round #654 (Div. 2), overwrite said, "tell me C and I will tell you B". This is clearly a cheating proposal, which is why I want to report it. How can I report this user for proposing to cheat? Message

Score: +89

By physics0523, history, 4 days ago, In English,
Jury solution: 85699387
Jury solution: 85699884
Jury solution: 85699939
Jury solution: 85700178
Jury solution: 85700334
Jury solution (Solution 1): 85700515
Jury solution (Solution 2): 85700518
Jury solution: 85700669

Feel free to share your approach here!

Score: +116

By re_eVVorld, 23 hours ago, In English,

Hi, fellow programmer :)

I want to describe ideas that I used during ICPC Challenge 2020: Marathon.

First, how to get more than 600k score?

For a 600k score we can just implement what this article says. Let's start with each cluster containing only one vertex. (1) Then repeat the following until convergence: choose a random vertex and move it to the cluster of one of its adjacent vertices if that improves the result. (2) Now compress each cluster into a single vertex, build a new graph with weighted edges, and do everything as in (1), but now you are making clusters of clusters. I think the image in the article above explains it quite well. So, do (1) once and (2) until convergence. Basically, that's it.
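The local-move loop in (1) can be sketched as follows. Note the objective here is just the number of intra-cluster edges, chosen only to keep the sketch short — the actual contest uses a modularity-style score, and a raw intra-edge count would eventually merge everything into one cluster:

```cpp
#include <vector>
#include <random>
#include <utility>
using namespace std;

// Count edges whose endpoints share a cluster (simplified objective).
int intra_edges(const vector<pair<int,int>>& edges, const vector<int>& cl) {
    int s = 0;
    for (auto& e : edges)
        if (cl[e.first] == cl[e.second]) ++s;
    return s;
}

// Step (1): start from singleton clusters, repeatedly pick a random vertex
// and move it into the cluster of a random neighbour, keeping the move
// only when the objective improves.
vector<int> local_moves(int n, const vector<pair<int,int>>& edges, int iters) {
    vector<vector<int>> adj(n);
    for (auto& e : edges) {
        adj[e.first].push_back(e.second);
        adj[e.second].push_back(e.first);
    }
    vector<int> cl(n);
    for (int i = 0; i < n; ++i) cl[i] = i;          // singleton clusters
    mt19937 rng(12345);
    int cur = intra_edges(edges, cl);
    for (int it = 0; it < iters; ++it) {
        int v = (int)(rng() % n);
        if (adj[v].empty()) continue;
        int u = adj[v][rng() % adj[v].size()];
        int old = cl[v];
        cl[v] = cl[u];                              // tentative move
        int s = intra_edges(edges, cl);
        if (s > cur) cur = s; else cl[v] = old;     // keep only improvements
    }
    return cl;
}
```

A real implementation would compute the score delta of a move incrementally instead of rescoring the whole graph, but the control flow is the same.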


Second, let's analyze the received clusters

As we are now relatively close to the optimal result, we can analyze the structure of our partition. Here is a random example for A3 in the form {size of cluster : number of clusters of that size}:


We mostly have 10-50 large clusters and 1000-3000 clusters with a single vertex.


So we can improve the result by partitioning the big clusters better or by choosing better singleton vertices.

Third, a kind of genetic algorithm

Notice that (1) and (2) use randomness, so every new run gives us a new partition. The main idea is this: different partitions are better than others in different local regions, so we can combine them to get a better result.

In particular, here is what I did:

  1. generate 50-200 partitions
  2. choose 5-15 of them at random and build a new partition by the following principle: two vertices are in the same cluster if and only if they are in the same cluster in every one of the chosen 5-15 partitions. Take a look at the image.
  3. run (2) 5-10 times on the new partition (do step 3 on the same partition from step 2 eight times, in parallel on processor cores, and then take the best one)
  4. replace the worst of the 5-15 partitions chosen in step 2 with the best of the 8 partitions generated in step 3
  5. repeat steps 2 -> 3 -> 4 500-5000 times (it takes 5-20 hours on my laptop, depending on A1/A2/A3)

You can also take the 5-100 best partitions, generate 195-100 new partitions, and repeat step 5. This way you will probably get stuck at a different local maximum.
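The combination rule in step 2 is exactly the common refinement of the chosen partitions: two vertices share a cluster in the output iff their label tuples across all inputs coincide. A sketch (illustrative, not re_eVVorld's actual implementation):

```cpp
#include <vector>
#include <map>
using namespace std;

// Common refinement of several partitions over the same vertex set.
// parts[p][v] is vertex v's cluster label in partition p.
vector<int> common_refinement(const vector<vector<int>>& parts) {
    int n = (int)parts[0].size();
    map<vector<int>, int> id;            // label tuple -> new cluster id
    vector<int> out(n);
    for (int v = 0; v < n; ++v) {
        vector<int> key;
        for (const auto& p : parts) key.push_back(p[v]);
        auto ins = id.emplace(key, (int)id.size());
        out[v] = ins.first->second;
    }
    return out;
}
```

Running (2) on this refined partition then re-merges the pieces, which is what step 3 does.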

One more small improvement

So far, in (1) and (2) we were only moving vertices to adjacent clusters. We can gain an extra 100-200 points if we also try moving a vertex into a new singleton cluster.

That's all. I hope my English is not too bad and that you like this post <3

Score: +171

By ismail, history, 5 hours ago, In English,

villazer took a screenshot from CF Talks and swapped its author and recipient. Then he attached that edited screenshot in a comment and claimed that I had asked for a solution. But I never asked for a solution. I have proof, and I want to complain about this spiteful matter because it is really hurtful to me.

Maybe that happened because of this post.

Here villazer's comment with 40+ up votes : Comment 1, Comment 2  CF Talks

Visual Proof

BTW, villazer accepted my challenge, so let's schedule a live stream for clear proof.

Score: +9

By SummerPark, history, 7 hours ago, In English,

Hello, I'm in need of your opinions.

I am bad at solving problems that involve finding one out of multiple possible solutions; it's one of my weak areas, and I want to improve on it.

I AM solving a lot of problems, so I don't think that's the issue. This topic is different from others: for most topics I've developed a general strategy for approaching them, so it sometimes takes me only about 15 minutes to come up with a solution, even for a Div 2 D.

But for problems like the ones I mentioned, I am at times totally stuck, even on a Div 2 B. I'm trying to develop a problem-solving pattern or a general approach for them.

How do you guys approach problems like these? Any help is appreciated.

Score: +30