### galen_colin's blog

By galen_colin, 24 hours ago

Hi everyone! I've gotten a few recommendations to start screencasting/making video solutions, and I was initially reluctant, but I realized it's kinda fun. I started by screencasting today's Div. 3 and explaining my solution ideas at the end, arguably a bit awkwardly, but I'll get better as I gain more experience with it. There are, of course, other awesome youtubers like SecondThread who do the same thing (and better), but I figured the more resources, the better for everyone.

Anyway, my first video is here. The quality's a bit bad for this one, but I know how to fix it for the future, so it's just a one-time thing. In this video, you can see me spending too much time on C and F, watching my rank fall over time, and somehow still placing 14th with no mistakes. Enjoy!

As it's my first video, some of the explanations may be confusing (especially F). If you have questions about them or just feedback in general, feel free to leave comments (preferably on this blog, since there's math mode here). Expect more screencasts for the upcoming Div. 2's, contests on other sites like AtCoder and maybe CodeChef, and probably algorithm tutorials in the future.

edit: jesus christ, 100 subs in less than a day. You guys are insane, and I love you all!

• +131

By galen_colin, 10 days ago

I've found that an interesting way to ~~waste~~ pass time is by watching the standings as rounds happen. As far as I know, there are mainly three ways to do this:

• Watch a contest as it happens
• Watch a contest as system tests run
• Virtually participate, and watch standings that way

Obviously, the first two are time-dependent and won't work for past contests. The third option is more viable, but it both introduces meaningless data into the CF system and forces you to watch with ICPC rules, which is inaccurate and doesn't work for other scoring systems. There's also no way to rewind; you have to watch in real time. It should be easier than that. And now it is!

I introduce you to Contest Replay — a simple website that will let you "watch" a contest's standings at any point in time. You can also target a user and watch their ranking instead of the overall leaderboard, as well as customize a ton of things to your preference. I've kept it mostly consistent with CF's design because it's nice and classic.

There are also some cool results you can find from watching past rounds — for example, in round 632 (id 1333), some troll held first place in official standings from 1:20 to 1:25 without having solved problem B!

A note: you'll have to pause "autoplay" and press Enter to change some of the settings because I'm bad at UI design. Sorry about that. Also, try not to enter invalid values.

If the thing crashes, anything about the site is unclear, or you have suggestions, feel free to comment below.

Some things I plan to implement:

• API key support — this means "friends standings", viewing private groups/contests, and more.
• Similar to "friends standings", an option to track just a subset of users.
• Data validation, so it won't crash so easily
• Support for IOI contests (might be simple, but it seems like it won't be)
• Leaderboard improvements (distinguishing between systest/hack/pretest fails, and making the "-x" marks that show up when a problem is never solved consistent with the current time)
• The two things above both require requesting all of the submissions made for the contest. This might be too much.

The site's hosted on repl.it, and you can view the source code at the repl here. It's... not exactly great, but you can see it if you want to.

This blog post is way too long already, but one final thing: autoplay works with negative numbers :)

UPD #1: major changes — API key support and friends standings. Also, a changelog so I don't have to update the blog for small changes; you can find it in the footer.

It also seems like it doesn't work for the edu round today. I think this is because people have changed their handles after competing. On the leaderboard, it shows their old handles, but since they've changed, the API thinks I'm requesting an invalid handle and returns a 400 error. Not really sure what to do about this.

Looks like the issue with the edu round resolved itself — maybe handles auto-fix themselves on the standings given time, or some manual configuration was involved. Anyway, if you ever see it get permanently stuck on "Loading handles... ", that probably means something is wrong. An error message should pop up, but it may not (more bad UI design).

• +243

By galen_colin, 3 weeks ago
A1
A2
B
C
D
E

Some notes/backstory about the round in general:

• Sorry about the mistake in C! I hope it didn't affect you too much, if at all. You can blame this on my bad organization — a tester did actually catch this mistake, but somehow I forgot to correct it.
• The only tester feedback I received besides corrections was to swap the problems C and D. Based on the solve counts, I guess it worked out.
• A lot of people received WA17 or WA20 on D because of not making the default values in the array small enough. With $0$ as the default, you'd get WA17. Others, who used $-10^9$ or $-10^{10}$, got WA20. I actually made this mistake myself, and my brute-force didn't catch it because it only seemed to happen with large enough $n, k$.
• I was skeptical about placing E at the end because of how simple the solution was, but that seems like it was a good choice now.
• I understand criticism about the problems being too "standard" — in fact, I'm not really proud of having a problem like D, but I thought it was necessary for balance (turns out it wasn't anyway). At the very least, they could serve to teach people new algorithms.

Tutorial of Testing Round #XVII

• +33

By galen_colin, 3 weeks ago

Since the two contest failures last weekend, a lot of people have expressed that they want to help Codeforces in some way, but there isn't much most of us can actually do about the system. At the same time, there have also been a lot of (reasonable) requests for a testing round before round 656. So let's put those two together: why don't we set some rounds specifically as testing rounds? Maybe they should be rated, maybe they shouldn't, but in any case, they should be interesting enough to draw in participants like normal rounds do. It's probably not a good idea to "sacrifice" rounds not rated for Div. 1, as some have suggested, because that still wastes the setters' efforts. So what I'm suggesting is that we prepare rounds with the specific intent of them being testing rounds.

Advantages:

• If these rounds are unrated, there's much less pressure for original/novel problems (and probably fewer problems will be rejected), so more of these rounds can be prepared
• On this same point, it's a great entry point for new problemsetters: rounds are easier to prepare and there will likely be fewer participants (meaning it's easier to manage during the contest)
• It reduces pressure on MikeMirzayanov and the Codeforces team
• You get more problems to solve!

Possible objections:

• If the problems are of lower quality, they might pollute CF! But there's a simple solution: if this becomes an issue, just don't add the problems to "Problemset"
• People would be preparing rounds with a larger chance of the whole round failing! While this is true, these rounds support CF in a different way: they test its infrastructure. And it's not like the whole round is wasted if something goes wrong — after all, that's exactly what a testing round is meant to catch.

I think it would help, and a testing round seems especially necessary before round 656. So... I'm not one to just throw around words: I have a few problems of my own prepared (not the most "novel", but they should be somewhat educational — I tried to include some common topics, as people have been wanting lately), and I'm willing to offer them for a testing round. I'll probably host it in the gym, unless it becomes an "official" testing round, and I've adjusted the constraints/time limits to take longer per test case to better simulate the judge pressure of an actual round, since there will likely be fewer people.

Tentatively (it's still in the testing phase, so things may change), there will be 5 problems and the round will last 2 hours. Expect the difficulty to be around Div. 3, maybe somewhat easier. When it's ready, I'll post it in the gym.

I think a good time to schedule it would be around a day before the upcoming round 656, perhaps at the time that Monogon's round was to be held before it was rescheduled. So for now, it will be held at this time (later changed to 1 hour earlier — ignore this link).

UPD: you can now view the contest in the gym. Registration opens 6 hours before it starts, which I don't think I have control over. It will start at Jul/16/2020 16:35 (Moscow time), which is 1 hour earlier than it was initially set to.

Thanks to 12tqian, meiron03, and Kaunta for testing. A special thanks to 12tqian for letting me theme the problems around him. And, of course, thanks to MikeMirzayanov for the awesome Polygon, the even more awesome Codeforces, and the platform issues so that this round can be possible :P

UPD 2: the scoring distribution is $(1 + 1) - 1 - 1 - 1 - 1$ (it's ICPC mode lol)

A is split into subtasks. A1 has a time limit of 15 seconds (maximum setting) with 20 tests, and I highly encourage submitting slow solutions that get close to TL. The rest of the problems have relatively normal constraints. If more people register than I expect, I'll lower these numbers.

If you really want to put load on the queue, you can stall your program until it gets close to 15 seconds. An example of this in C++:

code

Other languages have similar facilities; check the documentation for your language of choice.

UPD 3: Editorial is out. I hope you enjoyed the round! It seems like there were way too few participants for it to be effective, though. I hope the div. 3 tomorrow goes well.

Congrats to the winners!

1. Geothermal
2. Andreii
3. Lumberjackman
3. NotGivingUp
5. komiel

And an honorable mention to Ashwathama and erekle for full-solving.

Announcement of Testing Round #XVII

• +257

By galen_colin, history, 6 weeks ago

Hi everyone! I'm writing this blog to hopefully throw out a bit of inspiration.

With any measure of skill such as rating, people are always going to tie up their self-confidence with it. Unfortunately, that means when rating is lost, people lose confidence and/or motivation, which always sucks, especially with huge rating losses that seem impossible to overcome. But I want to demonstrate with an example that in the long run, even the largest rating losses won't affect much.

We'll use my performance on Global Round 7 as an example. A whopping -166*. Devastating, right? Well, let's see about that...

Let's look at the alternate timeline where I didn't participate in the round and the rating loss never happened (but my performance in the later contests was the same). Thanks to sjsakib and cfviz for allowing me to break the rules of the universe (probably not an intended use case, but you never know with users...).

• +446

By galen_colin, history, 2 months ago

Author's editorial

This is a duplicate of my comment here. I've gotten a couple of requests to make it a blog (for some reason), so here it is. Note: this is unofficial; I was just a competitor.

A
B
C
D
E
F

• +25

By galen_colin, 4 months ago

If you failed on test 66 on (this problem | div2 version), you probably haven't reset enough of your array between test cases. This problem gives rise to an unusual type of bug, since we access the array beyond the range we initially read elements into. What I mean is: if a large case is followed by a small one, then on the small one you'll read the first $2^n - 1$ elements in, but the elements at indices $2^n$ to $2^{n+1}-1$ may still hold values from the last test case. Many people adapted the function definition from the problem statement, treating an index as a leaf iff the two elements 'below' it (i.e. $a_{2i}$ and $a_{2i+1}$) are both $0$. But if the array isn't reset, indices that are supposed to be leaves may be treated as internal nodes. Unfortunately, I made this mistake too :(

Not resetting state between test cases is a common mistake, but this is a particularly sneaky instance of it, specific to this problem. I don't think it's reasonable to blame the setter for not covering it in the pretests. It's surprising that none of the testers made the same mistake, though...