### galen_colin's blog

By galen_colin, 7 days ago,

In round 702, which happened a few days ago, somehow all submissions were rejudged on all tests, including some of the uphacks that were added after the contest's hacking phase. For example, my F retroactively changed from accepted to TLE. Maybe this has changed, but I thought the original point of uphacks was to strengthen system tests for practice, not to change verdicts after contests finish.

I'm a bit sad about losing first place, but it was unofficial for me anyway, so that's not really the point... but, is this a permanent change in the stance on uphacking? I was only hacked because I talked with someone else (after the hacking phase, when it was supposedly final) about a case I found that would cause TLE. Will it be better to keep this sort of thing secret in the future?

• +172

By galen_colin, 5 weeks ago,

upd: has been fixed now

This seems like a problem. On problem A, the statement guarantees $2 \leq n \leq 10^{14}$, but the validator accepts $n = 1$, causing an invalid edge case for many solutions.

For example, this hack should not work. The input is just:

1
1


pinging {Supermagzzz, Stepavly, MikeMirzayanov}

• +39

By galen_colin, 7 weeks ago,

Some time ago I posted this blog, which showed the diminishing effect of a huge rating loss over a period of many contests. Recently, I decided to try to show this effect a bit more broadly. So I extended it to be a bit more extreme and simulated my entire contest history while ignoring all rating losses. That is, I ran through every contest I was ever rated for, calculated the expected rating change with cfviz, and clamped every rating change to be at least $0$ (i.e., I replaced every negative change with $0$).

Now, the question is, how much of an effect did ignoring rating losses have? I give more details in this video, at the 6:56 mark (said video also has many more tips, generally being about how you can learn more from the contests you do, which is not-so-subtly the whole point of this blog). But, essentially...

the "optimistic" rating was $2662$, only $50$ more than my current rating. There's probably some sort of bias here, one of the possible issues being that I've had an unnatural streak of rating rises within my last few contests, which would likely significantly decrease the difference. You can find the full data here, which probably does show that $50$ is kind of below normal. Taking the average over all differences gives $90$, which is maybe more accurate. You can decide for yourself how significant this number is, but I would say that it's not that large of a difference, given that it's as optimistic as it possibly could be.

• +178

By galen_colin, 3 months ago,

Hi! As another "celebration" video (of 5K subs), I've made a video that attempts to offer a general strategy for problem-solving. As I mention many times, there's no real substitute for practice, but what I'm trying to do is help you avoid those frustrating "I should have seen that!" or "How did I miss that? I knew that already" moments where your own knowledge falters. In a sense, you should be able to (at least semi-consistently) perform at your full potential. Here's the link: https://www.youtube.com/watch?v=2TNyrw4hkIY

• +85

By galen_colin, 3 months ago,

I was gonna save this for another day, but after this post stunned the world, I figured that I might as well get on the shitposting train too.

Currently, Mr. Daved SecondThread Harmayor and I have the same rating: 2622. So it's clear that we need some sort of way to distinguish who's better between us, maybe some sort of duel or something. I wanted to get a bit of insight into how exactly this would go down, so I called up an old friend of mine, Petya. Because he was such a fan of the statue, he let me see into the future to find out the outcome. But what I saw might shock you:

I know, I know, it may be surprising, and it's such a one-sided battle too! But you can't dispute a real photo, even if it was taken in the future. Hope this clears things up

• +174

By galen_colin, 3 months ago,

YouTube's community post system is kinda bad, so I'll make an explicit blog about this so people here can see it too. You can probably see on the sidebar that there will be another topic stream tomorrowish, and it'll be about general tree problems. Like last time, I've prepared a mashup, which can be found here: https://codeforces.com/contestInvitation/06ef32f75d130163cc2536475b38aebe91ba3113. There are 8 problems, essentially hand-picked this time to either be cool problems or include some topic I wanted to cover. A warning: the last problem is much harder than the others.

Making a blog for each of these (there will probably be many in the future) seems kinda annoying though. Maybe a good solution would be to add some sort of "description" field to streams on Codeforces, since that doesn't exist on Twitch (except for channel description), and most people probably wouldn't look at the YouTube description?

I'll give an intro to trees, but it may also be a good idea to be familiar with them beforehand. For example, one useful thing for this would be the well-known tree basics video (by our lord and savior SecondThread).

• +62

By galen_colin, 4 months ago,

Hey guys!

I have a new (for me, at least) idea for stream content that I'm a fan of right now. The main idea is that I'll pick a particular topic — e.g. dynamic programming, greedy, binary search, etc. — and binge a bunch of problems on that topic for a few hours. I'll probably also start off the stream with an overview of what the topic is and/or what it achieves.

I feel like this idea has some advantages:

• It gathers a lot of problems and ideas for a topic into a given video (this is an advantage because certain ideas often relate to other ones in the same topic)
• It covers not only common usages of the topic but also certain tricks that only show up in practice rather than theory, like implementation tricks
• It can easily cover a wide range of difficulties
• It's more of a chill stream than covering a particular contest :)

What do you think about this? Either way, interested or not, I've opened a poll where you can decide on the first topic that'll be covered. You can see the stream time on the sidebar (here) or at the post (on youtube). For now, I'm gonna stick with beginner topics, and we'll see where it goes later.

18-ish hours from now, I'll make a big mashup of problems on that topic and release it here, so you'll be able to see/try the problems beforehand.

UPD: Sorry that this is a bit late, but it looks like dynamic programming wins by a landslide! You can find the mashup here (ping me if the link doesn't work), and you should be able to preview and attempt the problems beforehand. The difficulty distribution is:

Distribution

On another note, I do recommend familiarizing yourself with the topic beforehand — I'll try to explain it somewhat, but it's better to have your own understanding of it

UPD 2: For archive reasons, I'll add the stream replay here: https://www.youtube.com/watch?v=zDEQaDl3cso. I ended up covering problems A-H and K, and may cover the others in a future stream.

• +152

By galen_colin, 4 months ago,

Hi! In celebration of reaching international grandmaster, I've made a video describing my entire journey up to this point. This includes practice and contest mentality tips throughout for different skill levels. So, in a sense, it can serve as a (partially) complete roadmap up to at least red, although it gets a bit sparse later on. You can find the video here. Enjoy :)

• +344

By galen_colin, history, 5 months ago,

I know a lot of people (including myself) have failed system tests on this notorious problem, and if you've been scratching your head for a while trying to figure out why, this (might be) the blog for you! (although it seems like there's more than 1 type of bug)

One common fail was a wrong answer on test 46. This bug might be because you haven't checked that the two characters you're removing were actually consecutive in the initial string (for a test case like deedc) — this can be remedied by storing indices along with the characters of the string and making the appropriate checks. However, this may now lead you to a different wrong answer. What I found that broke my program was the following simple case: ddeedc. The assumption I made was that if two equal characters at the front of the string haven't been removed yet, then it's not optimal to remove any more of those characters in that "chain" of equal characters (i.e. we checked it before). So to check if you wanted to remove two characters, it would be enough to check two positions ahead (or one, depending on implementation), and remove those two characters only if they were greater than the character two ahead.

However, that assumption is wrong. When ee is removed from that string, we end up with dddc. The reason that the last two d's weren't removed is because they weren't together initially in the string, meaning that it breaks my assumption (since it's optimal to remove the first two d's). So instead of checking the character two characters ahead, I instead had to check for the next different character (which is kind of obvious in retrospect). This can be fixed by storing another stack that contains exactly one copy of each character, rather than the consecutive groups of them. It's probably not easy to read, but if you want it, here's my correct submission.

Were there other bugs you found? Maybe you should comment them here, and we can have this blog as a universal compilation of our stupidity.

• +199

By galen_colin, history, 6 months ago,

Hey guys! I will be streaming my (virtual, obviously; get clickbaited lol) participation in the Div. 3 round that will happen soon. I'll start around 10 minutes after the official contest ends, and you'll be able to interact and ask questions through the chat. Since it's a virtual contest (and a stream), I'll try to solve slowly, explain everything thoroughly, and answer any questions (ideally problem-related ones) live.

The stream will happen on my channel here. It's also my first time streaming, so there will probably be some fun technical difficulties. If this goes well, more streams of various rounds (or general streams) may happen in the future!

UPD: Stream is live! The contest will start as scheduled.

• +50

By galen_colin, 6 months ago,

See the original post on Codeforces for a playlist of all similar tutorials and also more information about what this is.

# "Intro"

Timestamp: 00:00

This is a "hybrid tutorial" — it has both a blog and a video component. See my original post for more explanation of what this is and also a playlist of all past tutorials. You can view the full video here.

This tutorial is on centroid decomposition, a cool concept that has many uses in different problems. It's also another "decomposition" technique, one of many. I feel like this subject in particular doesn't have many great tutorials online that really capture the essence of what we do with the technique, so hopefully, I can improve that with this one.

# Notation

We'll use $0$-indexing for the whole tutorial, and the example problem will also be $0$-indexed. In addition, "vertex" and "node" mean the same thing and may be used interchangeably.

# Example problem

Timestamp: 02:04

If you've read other centroid decomposition tutorials, you've probably seen a problem called Xenia and Tree. While it's rather well-known now, it's a great example to use for an introductory tutorial because most of the implementation is simple.

I'll restate the problem here as well:

You have a tree of $n$ vertices, where each vertex is initially blue except for vertex $0$, which is red. There are two types of queries ($q$ in total):

• Set the color of a vertex $v$ to red

• Given a vertex $v$, compute the distance from $v$ to the closest red vertex (which could be $v$ itself)

There are other ways to solve this such as sqrt decomposition, but we'll do it with centroid decomposition.
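Since the excerpt stops here, a minimal sketch of the standard centroid-decomposition approach may help (all names here are mine, not from any official solution): build the centroid tree, and for each vertex record all of its $O(\log n)$ centroid ancestors together with the distance to each. Painting a vertex red then updates a "best distance to a red vertex" value at each of its centroid ancestors, and a query combines the stored distances with those values.

```cpp
#include <bits/stdc++.h>
using namespace std;

const int INF = 1e9;
int n;
vector<vector<int>> adj;
vector<int> sz, best;               // best[c] = min dist from centroid c to a red vertex in its piece
vector<bool> removed;
vector<vector<pair<int,int>>> anc;  // anc[v] = list of (centroid ancestor, distance to it)

int calc_sz(int v, int p) {
  sz[v] = 1;
  for (int u : adj[v]) if (u != p && !removed[u]) sz[v] += calc_sz(u, v);
  return sz[v];
}
int find_centroid(int v, int p, int tree_sz) {
  for (int u : adj[v])
    if (u != p && !removed[u] && sz[u] > tree_sz / 2) return find_centroid(u, v, tree_sz);
  return v;
}
void record_dists(int v, int p, int d, int c) {
  anc[v].push_back({c, d});
  for (int u : adj[v]) if (u != p && !removed[u]) record_dists(u, v, d + 1, c);
}
void decompose(int v) {
  int c = find_centroid(v, -1, calc_sz(v, -1));
  record_dists(c, -1, 0, c);        // distances from c within its current piece
  removed[c] = true;
  for (int u : adj[c]) if (!removed[u]) decompose(u);
}
void paint_red(int v) {             // query type 1
  for (auto [c, d] : anc[v]) best[c] = min(best[c], d);
}
int nearest_red(int v) {            // query type 2
  int res = INF;
  for (auto [c, d] : anc[v]) if (best[c] < INF) res = min(res, d + best[c]);
  return res;
}
void init(int n_, const vector<pair<int,int>>& edges) {
  n = n_;
  adj.assign(n, {}); sz.assign(n, 0); removed.assign(n, false);
  anc.assign(n, {}); best.assign(n, INF);
  for (auto [u, v] : edges) { adj[u].push_back(v); adj[v].push_back(u); }
  decompose(0);
  paint_red(0);                     // vertex 0 starts red
}
```

Both operations touch only the centroid ancestors of one vertex, so each runs in $O(\log n)$ list entries (times the cost of reading them), which is the point of the technique.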

• +264

By galen_colin, 7 months ago,

Here is a playlist of all hybrid tutorials I've done.

# "Intro"

Timestamp: 00:00

Hi! Definitely not inspired by this comment, I've decided to try something that seems relatively novel — combining a blog and a video tutorial into one, in a "hybrid" fashion. Both should be usable independently, but they will have the same "flow" and structure, so you can reference both for concepts that are harder to grasp, and the two will supplement each other. The goal of these is to be complete — beneficial for both video and blog lovers, and full of enough information that anyone without much of an idea of the concept should be able to fully understand it. There will be code as well; however, I very highly recommend not looking at it, but rather working out the implementation for yourself.

The concept of this tutorial is heavy-light decomposition. This one isn't 100% fair, as I'm basically copy-pasting one of my old tutorials that can be found here and adding a video component, but it's never been on Codeforces before.

The full video can be found here.

# The motivation (path queries)

Timestamp: 01:20

Heavy-light decomposition is kind of a scary name, but it's really cool for what it does. Take some classic segment tree problem with two types of queries:

• Compute the XOR of values on the subarray from $l$ to $r$
• Update the value at position $i$ to $x$

Now, anyone familiar with a segment tree will recognize that it can be easily applied to solve this. But what if we moved the problem from queries on an array to queries on a tree? Let's reformulate the queries to be:

• Compute the XOR of values on the path from $u$ to $v$
• Update the value at vertex $i$ to $x$

which is exactly this problem from USACO, and the problem I'll be using as an example for the rest of this tutorial. To note, its editorial also provides a nice description of HLD.
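To make the shape of the solution concrete, here is a minimal sketch of HLD over a xor segment tree (the naming and structure are my own illustration, not the USACO editorial's code): decompose the tree into chains, lay the chains out consecutively in a segment tree, and answer a path query by jumping chain by chain toward the root.

```cpp
#include <bits/stdc++.h>
using namespace std;

int n, cur_pos;
vector<vector<int>> adj;
vector<int> par, dep, heavy, chain_head, pos_in_seg;
vector<long long> seg;  // iterative segment tree over chain positions, combining with xor

int dfs_size(int v) {   // find each vertex's heavy child (largest subtree)
  int size = 1, max_child = 0;
  for (int u : adj[v]) if (u != par[v]) {
    par[u] = v; dep[u] = dep[v] + 1;
    int c = dfs_size(u);
    if (c > max_child) { max_child = c; heavy[v] = u; }
    size += c;
  }
  return size;
}
void decompose(int v, int h) {
  chain_head[v] = h; pos_in_seg[v] = cur_pos++;
  if (heavy[v] != -1) decompose(heavy[v], h);   // heavy child continues the current chain
  for (int u : adj[v]) if (u != par[v] && u != heavy[v]) decompose(u, u);  // light child starts a new chain
}
void seg_update(int i, long long x) {
  for (seg[i += n] = x; i > 1; i >>= 1) seg[i >> 1] = seg[i] ^ seg[i ^ 1];
}
long long seg_query(int l, int r) {  // xor over positions l..r, inclusive
  long long res = 0;
  for (l += n, r += n + 1; l < r; l >>= 1, r >>= 1) {
    if (l & 1) res ^= seg[l++];
    if (r & 1) res ^= seg[--r];
  }
  return res;
}
long long path_xor(int u, int v) {
  long long res = 0;
  while (chain_head[u] != chain_head[v]) {       // climb whole chains until u and v share one
    if (dep[chain_head[u]] < dep[chain_head[v]]) swap(u, v);
    res ^= seg_query(pos_in_seg[chain_head[u]], pos_in_seg[u]);
    u = par[chain_head[u]];
  }
  if (dep[u] > dep[v]) swap(u, v);               // now one contiguous segment remains
  return res ^ seg_query(pos_in_seg[u], pos_in_seg[v]);
}
void update_vertex(int v, long long x) { seg_update(pos_in_seg[v], x); }
void init_hld(int n_, const vector<pair<int,int>>& edges, const vector<long long>& val) {
  n = n_; cur_pos = 0;
  adj.assign(n, {}); par.assign(n, -1); dep.assign(n, 0);
  heavy.assign(n, -1); chain_head.assign(n, 0); pos_in_seg.assign(n, 0);
  for (auto [a, b] : edges) { adj[a].push_back(b); adj[b].push_back(a); }
  dfs_size(0);
  decompose(0, 0);
  seg.assign(2 * n, 0);
  for (int v = 0; v < n; v++) seg[n + pos_in_seg[v]] = val[v];
  for (int i = n - 1; i >= 1; i--) seg[i] = seg[2 * i] ^ seg[2 * i + 1];
}
```

Every root-to-vertex path crosses $O(\log n)$ chains, so a path query costs $O(\log^2 n)$, and a point update is a single segment tree update.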

• +404

By galen_colin, 7 months ago,

Hi everyone! I've gotten a few recommendations to start screencasting/making video solutions, and I was initially reluctant, but I realized it's kinda fun to do it. I've started out by screencasting today's Div. 3 and explaining my solution ideas at the end, arguably a bit awkwardly, but I'll get better as I get more experience with it. There are, of course, other awesome youtubers like SecondThread who do the same thing (and better), but I figured the more resources, the better for everyone.

Anyway, my first video is here. The quality's a bit bad for this one, but I know how to fix it for the future, so it's just a one-time thing. In this video, you can see me spending too much time on C and F, watching my rank fall over time, and somehow still placing 14th with no mistakes. Enjoy!

As it's my first video, some of the explanations may be confusing (especially F). If you have questions about them or just feedback in general, feel free to leave comments (preferably on this blog, since there's math mode here). Expect more screencasts for the upcoming Div. 2's, contests on other sites like AtCoder and maybe CodeChef, and probably algorithm tutorials in the future.

edit: jesus christ, 100 subs in less than a day. You guys are insane, and I love you all!

• +172

By galen_colin, 7 months ago,

I've found that an interesting way to ~~waste~~ pass time is by watching the standings as rounds happen. As far as I know, there are three main ways to do this:

• Watch a contest as it happens
• Watch a contest as system tests run
• Virtually participate, and watch standings that way

Obviously, the first two are time-dependent and won't work for past contests. The third option is more viable, but it both introduces meaningless data into the CF system and also forces you to watch with ICPC rules, which is inaccurate and doesn't work for other scoring systems. There's also no way to rewind, you have to watch in real-time, etc. It should be easier than that. And now it is!

I introduce you to Contest Replay — a simple website that will let you "watch" a contest's standings at any point in time. You can also target a user and watch their ranking instead of the overall leaderboard, as well as customize a ton of things to your preference. I've kept it mostly consistent with CF's design because it's nice and classic.

There are also some cool results you can find from watching past rounds — for example, in round 632 (id 1333), some troll held first place in official standings from 1:20 to 1:25 without having solved problem B!

A note: you'll have to pause "autoplay" and press Enter to change some of the settings because I'm bad at UI design. Sorry about that. Also, try not to enter invalid values.

If the thing crashes, anything about the site is unclear, or you have suggestions, feel free to comment below.

Some things I plan to implement:

• API key support — this means "friends standings", viewing private groups/contests, and more.
• Similar to "friends standings", an option to track just a subset of users.
• Data validation, so it won't crash so easily
• Support for IOI contests (might be simple, but it seems like it won't be)
• Leaderboard improvements (distinguishing between systest/hack/pretest fails, and making the "-x" marks that show up when a problem is never solved consistent with the current time)
• The two things above both require requesting all of the submissions made for the contest. This might be too much.

The site's hosted on repl.it, and you can view the source code at the repl here. It's... not exactly great, but you can see it if you want to.

This blog post is way too long already, but one final thing: autoplay works with negative numbers :)

UPD #1: major changes — API key support and friends standings. Also, a changelog so I don't have to update the blog for small changes; you can find it on the footer.

It also seems like it doesn't work for the edu round today. I think this is because people have changed their handles after competing. On the leaderboard, it shows their old handles, but since they've changed, the API thinks I'm requesting an invalid handle and returns a 400 error. Not really sure what to do about this.

Looks like the issue with the edu round resolved itself — maybe handles auto-fix themselves on the standings given time, or there was some manual intervention. Anyway, if you see it get permanently stuck on "Loading handles..." again, that probably means something is wrong. An error message should pop up, but it may not (more bad UI design).

• +243

By galen_colin, 7 months ago,
A1
A2
B
C
D
E

Some notes/backstory about the round in general:

• Sorry about the mistake in C! I hope it didn't affect you too much, if at all. You can blame this on my bad organization — a tester did actually catch this mistake, but somehow I forgot to correct it.
• The only tester feedback I received besides corrections was to swap the problems C and D. Based on the solve counts, I guess it worked out.
• A lot of people received WA17 or WA20 on D because of not making the default values in the array small enough. With $0$ as the default, you'd get WA17. Others, who used $-10^9$ or $-10^{10}$, got WA20. I actually made this mistake myself, and my brute-force didn't catch it because it only seemed to happen with large enough $n, k$.
• I was skeptical about placing E at the end because of how simple the solution was, but that seems like it was a good choice now.
• I understand criticism about the problems being too "standard" — in fact, I'm not really proud of having a problem like D, but I thought it was necessary for balance (turns out it wasn't anyway). At the very least, they could serve to teach people new algorithms.

Tutorial of Testing Round #XVII

• +33

By galen_colin, 7 months ago,

After the two contest failures last weekend, a lot of people have expressed that they want to help Codeforces in some way, but there isn't much most of us can actually do about the system. At the same time, there have also been a lot of (reasonable) requests for a testing round before round 656. So let's put those two together: why don't we run some dedicated testing rounds? Maybe they should be rated, maybe they shouldn't, but in any case, they should be interesting enough to draw in participants like normal rounds do. It's probably not a good idea to "sacrifice" rounds not rated for Div. 1, as some have suggested, because that still wastes the setters' efforts. So what I'm suggesting is that we prepare rounds with the specific intent of them being testing rounds.

• If these rounds are unrated, there's much less pressure for original/novel problems (and probably fewer problems will be rejected), so more of these rounds will be able to be prepared
• On this same point, it's a great entry point for new problemsetters: rounds are easier to prepare and there will likely be fewer participants (meaning it's easier to manage during the contest)
• It reduces pressure on MikeMirzayanov and the Codeforces team
• You get more problems to solve!

• If problems are of lower quality, then they might pollute CF! But there's a simple solution to this: if this becomes an issue, just don't add the problems to "Problemset"
• People would be preparing rounds with a larger chance of the whole round failing! While this is true, these rounds support CF in a different way: they test its infrastructure. And it's not like the whole round is wasted if something goes wrong — after all, that's what we possibly expect from a testing round.

I think it would help, and some testing round would especially be necessary before round 656. So... I'm not one to just throw around words. I have a few problems of my own prepared (not the most "novel", but they should be somewhat educational — I tried to include some common topics, as people have been wanting lately), and am willing to offer them to a testing round. I'll probably host it in the gym, unless it becomes an "official" testing round, and I've adjusted the constraints/time limits to take longer per test case to better simulate the judge pressure of an actual round, since there will likely be fewer people.

Tentatively (it's still in the testing phase, so things may change), there will be 5 problems and the round will last 2 hours. Expect the difficulty to be around Div. 3, maybe somewhat easier. When it's ready, I'll post it in the gym.

I think a good time to schedule it would be around a day before the upcoming round 656, perhaps at the time that Monogon's round was to be held before it was rescheduled. So for now, it will be held at this time (later changed to 1 hour earlier — ignore this link).

UPD: you can now view the contest in the gym. Registration opens 6 hours before it starts, which I don't think I have control over. It will start at Jul/16/2020 16:35 (Moscow time), which is 1 hour earlier than it was initially set to.

Thanks to 12tqian, meiron03, and Kaunta for testing. A special thanks to 12tqian for letting me theme the problems around him. And, of course, thanks to MikeMirzayanov for the awesome Polygon, the even more awesome Codeforces, and the platform issues so that this round can be possible :P

UPD 2: the scoring distribution is $(1 + 1) - 1 - 1 - 1 - 1$ (it's ICPC mode lol)

A is split into subtasks. A1 has a time limit of 15 seconds (maximum setting) with 20 tests, and I highly encourage submitting slow solutions that get close to TL. The rest of the problems have relatively normal constraints. If more people register than I expect, I'll lower these numbers.

If you really want to put load on the queue, you can stall your program until it gets close to 15 seconds. An example of this in C++:

code

Other languages have similar facilities; check the documentation for your language.
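For illustration only (this is my own hypothetical sketch, not the code from the spoiler above), one way to stall in C++ is to busy-wait against a monotonic clock until a chosen deadline:

```cpp
#include <bits/stdc++.h>
using namespace std;

// hypothetical sketch: spin until the deadline so the program keeps
// consuming CPU time instead of sleeping
void stall_for(chrono::milliseconds limit) {
  auto deadline = chrono::steady_clock::now() + limit;
  volatile unsigned long long sink = 0;  // volatile keeps the loop from being optimized away
  while (chrono::steady_clock::now() < deadline) sink += 1;
}
```

In the scenario described above you would call something like `stall_for(chrono::seconds(14))` at the end of `main` to stay just under the 15-second limit.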

UPD 3: Editorial is out. I hope you enjoyed the round! It seems like there were way too few participants for it to be effective, though. I hope the div. 3 tomorrow goes well.

Congrats to the winners!

$~$ 1. Geothermal
$~$ 2. Matei
$~$ 3. Lumberjackman
$~$ 3. NotGivingUp
$~$ 5. komiel

And an honorable mention to h2o_ and erekle for full-solving.

Announcement of Testing Round #XVII

• +257

By galen_colin, history, 8 months ago,

Hi everyone! I'm writing this blog to hopefully throw out a bit of inspiration.

With any measure of skill such as rating, people are always going to tie up their self-confidence with it. Unfortunately, that means when rating is lost, people lose confidence and/or motivation, which always sucks, especially with huge rating losses that seem impossible to overcome. But I want to demonstrate with an example that in the long run, even the largest rating losses won't affect much.

We'll use my performance on Global Round 7 as an example. A whopping -166*. Devastating, right? Well, let's see about that...

Let's look at the alternate timeline where I didn't participate in the round and the rating loss never happened (but my performance in the later contests was the same). Thanks to sjsakib and cfviz for allowing me to break the rules of the universe (probably not an intended use case, but you never know with users...).

• +446

By galen_colin, history, 9 months ago,

Author's editorial

This is a duplicate of my comment here. I've gotten a couple of requests to make it a blog (for some reason), so here it is. Note: this is unofficial, I was just a competitor

A
B
C
D
E
F

• +25

By galen_colin, 11 months ago,

If you failed on test 66 on (this problem | div2 version), you probably haven't reset enough of your array between test cases. This problem gives rise to a very particular type of bug, since we access the array beyond the part we initially read elements into. That is, if you have a large case followed by a small one, on the small one you'll read the first $2^n - 1$ elements in, but the elements at indices $2^n$ to $2^{n+1}-1$ may still hold values from the previous test case. Many people adapted the function definition from the problem statement, treating an index as a leaf iff the two elements 'below' it (i.e. $a_{2i}$ and $a_{2i+1}$) are both $0$. But if the array isn't reset, vertices that are supposed to be leaves may not be treated as such. Unfortunately, I made this mistake too :(
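A minimal sketch of the fix (the names here are mine, just for illustration): after reading the $2^n - 1$ values of a case into indices $1$ through $2^n - 1$, also clear the next level, indices $2^n$ through $2^{n+1} - 1$, so the leaf check never sees stale values from an earlier, larger case.

```cpp
#include <bits/stdc++.h>
using namespace std;

const int MAXA = 1 << 20;
long long a[MAXA];

void read_case(int n, const vector<long long>& vals) {
  for (int i = 1; i < (1 << n); i++) a[i] = vals[i - 1];   // this case's values
  for (int i = 1 << n; i < (1 << (n + 1)); i++) a[i] = 0;  // the crucial reset of the level below
}

// leaf check adapted from the statement: i is a leaf iff both "children" are 0
bool is_leaf(int i) { return a[2 * i] == 0 && a[2 * i + 1] == 0; }
```

Equivalently, you can clear the whole used range after each case; the key is that the reset must extend one level past what you read.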

Not resetting state between test cases is a common mistake, but this is a very particular instance of it, specific to this problem. I don't think it's reasonable to blame the setter for overlooking this in the pretests. It's surprising that none of the testers made the same mistake, though...