darkkcyan's blog

By darkkcyan, history, 6 weeks ago, In English

Hello everyone! There is a problem from an Olympiad in my country 7 years ago that neither I nor some of my friends could solve. So I want to ask for help with it. The problem is as follows:

Given a bipartite graph with $$$n$$$ nodes on the left side and $$$m$$$ nodes on the right side. There is also a weight matrix $$$C$$$, where $$$C_{i,j}$$$ is the weight of the edge connecting the $$$i$$$-th node on the left side to the $$$j$$$-th node on the right side. The problem asks to find a subtree of the graph with minimum total edge weight that connects every node on the left side. In other words, we need to find a minimum Steiner tree whose terminals are the nodes on the left side. It is guaranteed that the answer exists.

Constraints: $$$n, m \le 1000$$$, $$$-1 \le C_{i, j} \le 10000$$$. $$$C_{i, j} = -1$$$ means there is no edge connecting the $$$i$$$-th node on the left side and the $$$j$$$-th node on the right side (the weight can be considered $$$+\infty$$$).

Here are the samples from the problem statement.

Sample 1
Sample 2

Here are also some interesting examples that my friend has drawn:

Friend's graph 1
Friend's graph 2

I have done some Googling, but with no luck. I found a Wikipedia article on quasi-bipartite graphs, which only mentions approximation algorithms, and I'm not sure whether it is even relevant to this problem.

Please help me find a solution to this problem, or help me prove that it is NP-hard, because I'm really depressed :(.


By darkkcyan, history, 2 months ago, In English

Hello Codeforces!

I am an author of a Codeforces contest, and recently I became a coordinator for a Codeforces round as well. During those two rounds, a number of problems arose while preparing the problems. Thinking about the solutions to these issues, I decided to gather them into a blog series. Hopefully, this series will be helpful to future problem setters, or to people who want to create problems in general.

I have finished the first part of this series. It aims to introduce the Polygon platform to new problem setters and guide them through the basic steps of creating a new problem from scratch. In the post, I have explained the concepts and practices that a problem setter needs to know in order to use Polygon to prepare problems.

Here is the link to the blog post: Polygon.Codeforces Tutorial — A Guide to Problem Preparation [Part 1]

This is a personal blog. I don't post its content directly onto Codeforces because I want to make things more official on my side, I can customize things there, and the amount of content is kinda too large for a normal Codeforces post.

Special thanks

I want to give special thanks to Nikolay KAN Kalinin for proofreading, as well as for giving wonderful comments and feedback! I don't think I could have made it this far with this blog post without him.

Shout out to MikeMirzayanov and the Codeforces team for making Polygon! There are still more and more features being added, even during the writing process of this post.

And thank you for reading this post!


By darkkcyan, 5 months ago, In English

In my opinion, all of the problems have very simple solutions and actually require no special data structures at all. To demonstrate the point, I will use EvErYoNe'S fAvOrItE lAnGuAgE: Pascal.

I will also include some notes that are not related to the solutions at all, but which I find interesting.

A. Robot Cleaner

Obviously, the problem is solvable using simulation. But it is solvable in $$$O(1)$$$ time as well, and that is what I will discuss.

Problem note

Pascal solution: 140968868. C++ solution: 140968900

B. Game On Ranges

There are a lot of solutions to this problem. The one I describe below might be the simplest, in my opinion.

Problem note

Pascal solution: 140968967. C++ solution: 140968942

C. Balanced Stone Heaps


There is not much to note about this problem.

Pascal solution: 140968985. C++ solution: 140969004

D. Robot Cleaner Revisit

Problem note

Pascal solution: 140969014. C++ solution: 140969025

E. Middle Duplication

Problem note

Pascal solution: 140969053. C++ solution: 140969037


By darkkcyan, history, 5 months ago, In English

Hello, Codeforces!

I am really excited to invite you to Codeforces Round #763 (Div. 2) on Dec/28/2021 16:35 (Moscow time)! The start time is unusual, so please pay attention.

All the problems were authored and prepared by me. I would like to give special thanks to everyone who made this round possible:

The round consists of 5 problems and you have 2 hours to solve them.

Wish you good luck and high ratings!

UPD0: The score distribution is 500 — 1000 — 1750 — 2500 — 2750.

UPD1: Congratulations to the winners of the round!

Div. 2
  1. wanyuezaifengli
  2. MilimNava
  3. proton1126
  4. qazsxdew
  5. Hayasaka
Div. 1 + 2
  1. heno239
  2. Geothermal
  3. Vercingetorix
  4. dorijanlendvaj
  5. wanyuezaifengli

UPD2: The Editorial is out!

Hope you guys enjoyed the round, and thank you again for participating! <3


By darkkcyan, history, 17 months ago, In English

Hello Codeforces!

This is my first ever blog on Codeforces, and today I want to share a small invention of mine from my upsolving. I don't know if a similar idea already exists, but as far as I can tell, the editorial for the problem that I solved does not use it. I think it would be nice to share it with you guys and hear your opinions on this idea.


You should first understand what a segment tree and a merge-sort tree are.

All the code in this blog is written in the efficient (iterative) segment tree style, so you should at least be familiar with it before reading the code.

Idea explanation

How do we build a merge-sort tree again?

A merge-sort tree is a segment tree, each node of which contains a sorted set/vector of every element in its range. We build it in a bottom-up manner: to build a parent node, we merge the two sorted sets/vectors of its children, the same way we do in the merge-sort algorithm, in $$$O(\text{number of elements})$$$ time, hence the name merge-sort tree.
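As a concrete reference point, the classical bottom-up build can be sketched as follows (the function name build_classic and the vector-of-vectors layout are my own choices, assuming the iterative layout with leaves at positions $$$n..2n-1$$$):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Classical bottom-up build of a merge-sort tree over array a.
// Iterative layout: leaves at n..2n-1, node i has children 2i and 2i+1.
vector<vector<int>> build_classic(const vector<int>& a) {
  int n = (int)a.size();
  vector<vector<int>> it(2 * n);
  for (int i = 0; i < n; ++i) it[n + i] = {a[i]};  // leaves hold single elements
  for (int i = n - 1; i >= 1; --i)
    // merge the two sorted children, exactly as in merge sort
    merge(it[2 * i].begin(), it[2 * i].end(),
          it[2 * i + 1].begin(), it[2 * i + 1].end(),
          back_inserter(it[i]));
  return it;
}
```

For example, for a = {3, 1, 2} the root it[1] ends up holding {1, 2, 3}.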

But there is another way.

Let A be the array we are building the merge-sort tree on, and let it[i] be the vector that contains the sorted elements in the range covered by the $$$i$$$-th node. Let's create another array B of pairs with B[i].first = A[i] and B[i].second = i, and sort B in increasing order of first. Then we iterate over the pairs (value, position) of B in that sorted order, and push value to the back of every it[i] such that the $$$i$$$-th node of the merge-sort tree contains position.

As you can see, we sort the array before inserting, so I decided to call this trick "sort before insert". Do you guys have a better name?

Some advantages of "sort before insert" over the classical way

  • We can handle the case where multiple elements are located at the same position.
  • With the efficient style, we can build the tree iteratively.

Here is the small snippet for building the tree.

const int N = 1e5;
vector<int> it[2 * N];

void build(const vector<int>& a) {
  vector<pair<int, int>> b;
  for (int i = 0; i < (int)a.size(); ++i)
    b.emplace_back(a[i], i);
  sort(b.begin(), b.end());
  for (auto [val, p] : b)
    for (p += (int)a.size(); p > 0; p >>= 1)
      it[p].push_back(val);  // append to every node containing position p
}

So we can call the merge-sort tree a "quick-sort tree" from now on. :))

Can we do it with ranges?

As we can see, each node in the merge-sort tree contains only the elements of its range, and in the sort before insert version, we used point updates to build the tree. Can we do the same with range-like updates? Yes, yes we can!

But what does each node contain exactly? We already know that every interval can be broken into $$$O(\log n)$$$ sub-intervals, each of which corresponds to a node of the segment tree. So if we have some intervals, each with an associated value, and we add the values in sorted order to every node corresponding to the sub-intervals of the interval under consideration, the vectors of the nodes will still be sorted.

Here is the implementation.

const int N = 1e5;
vector<int> it[2 * N];

struct Range {
  int l, r, value;  // half-open range [l, r)
};

// n is the length of the underlying array the ranges live on
void build(vector<Range> a, int n) {
  sort(a.begin(), a.end(),
       [](const Range& u, const Range& v) { return u.value < v.value; });
  for (auto [l, r, val] : a)
    for (l += n, r += n; l < r; l >>= 1, r >>= 1) {
      if (l & 1) it[l++].push_back(val);  // l is a left child: take it, move right
      if (r & 1) it[--r].push_back(val);  // r is a right child: take it
    }
}

Basic application


We are given an array $$$a$$$ of length $$$n$$$. We need to process $$$q$$$ queries. For each query, we are given 3 numbers $$$l, r, k$$$, and we need to count the number of indices $$$i$$$ such that $$$l \le i \le r$$$ and $$$a[i] < k$$$.

This is a classical problem and can be solved with a Fenwick tree and coordinate compression in $$$O((n + q) \log n)$$$. But if the merge-sort tree is used instead, the complexity is $$$O(n \log n + q \log^2 n)$$$, with a binary search on each node of the sub-intervals for each query. Using sort before insert, we can replace the binary searching with the two-pointer technique as follows:

  • Build 2 trees with sort before insert: one for the array's elements, one for the queries.
  • We go from top to bottom in both trees simultaneously. Consider the pair of nodes corresponding to the same interval in both trees. Here we find the local answer for all queries containing this sub-interval with the two-pointer technique. Since the queries that contain this sub-interval are sorted by their value, and the array elements contained in this sub-interval are also sorted, we can maintain 2 pointers: one pointing to the current query, the other pointing to the last array element that is bigger than (or the first that is smaller than or equal to) the current query's value.

The complexity of this approach is $$$O((n + q) \log n + q \log q)$$$. Firstly, we must sort both the array elements and the queries. Secondly, the number of times we visit each array element equals the number of nodes that contain it, which is $$$O(n \log n)$$$ in total. And finally, the number of times we visit each query equals the number of sub-intervals of its range, which is $$$O(\log n)$$$.
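The two steps above can be sketched in a self-contained way. This is only a sketch of the technique, not the original implementation: the function name count_less and the query encoding (l, r, k) with inclusive l, r are my own choices.

```cpp
#include <bits/stdc++.h>
using namespace std;

// For each query (l, r, k), count indices i with l <= i <= r and a[i] < k,
// using "sort before insert" on two trees plus a per-node two-pointer pass.
vector<int> count_less(const vector<int>& a, const vector<array<int, 3>>& qr) {
  int n = (int)a.size();
  vector<vector<int>> elems(2 * n);  // node -> element values, sorted
  vector<vector<int>> qs(2 * n);     // node -> query ids, sorted by k

  // 1) point-insert elements in increasing value order
  vector<pair<int, int>> b;
  for (int i = 0; i < n; ++i) b.emplace_back(a[i], i);
  sort(b.begin(), b.end());
  for (auto [val, p] : b)
    for (p += n; p > 0; p >>= 1) elems[p].push_back(val);

  // 2) range-insert query ids in increasing k order
  vector<int> order(qr.size());
  iota(order.begin(), order.end(), 0);
  sort(order.begin(), order.end(),
       [&](int x, int y) { return qr[x][2] < qr[y][2]; });
  for (int id : order) {
    int l = qr[id][0], r = qr[id][1] + 1;  // convert to half-open [l, r)
    for (l += n, r += n; l < r; l >>= 1, r >>= 1) {
      if (l & 1) qs[l++].push_back(id);
      if (r & 1) qs[--r].push_back(id);
    }
  }

  // 3) in each node, one two-pointer pass over the two already-sorted lists
  vector<int> ans(qr.size(), 0);
  for (int node = 1; node < 2 * n; ++node) {
    int ptr = 0;
    for (int id : qs[node]) {
      while (ptr < (int)elems[node].size() && elems[node][ptr] < qr[id][2])
        ++ptr;
      ans[id] += ptr;  // elements of this node that are < k
    }
  }
  return ans;
}
```

Each element is visited once per node containing it, and the pointer in each node only moves forward, which matches the complexity bound above.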

The implementation is here


We are given an array $$$a$$$ of length $$$n$$$. We need to process $$$q$$$ queries. For each query, we are given 3 numbers $$$l, r, k$$$. We need to find the $$$k$$$-th smallest element among $$$a_l, a_{l + 1}, \ldots, a_r$$$.

There is already a solution using the merge-sort tree in $$$O((n + q) \log^2 n)$$$. But I want to discuss a slightly naive solution with the merge-sort tree. Let's binary search for the answer. If $$$F(x, l, r)$$$ is the number of elements in the range $$$[l, r]$$$ that are smaller than or equal to $$$x$$$, then the answer for the query $$$(l, r, k)$$$ is the smallest number $$$v$$$ such that $$$F(v, l, r) \ge k$$$. We can already see that this is exactly the problem SPOJ KQUERY. If we use the above approach, then for a single query ($$$q = 1$$$) we have a complexity of $$$O(n \log n \log (10^9))$$$. But doing that for $$$q > 1$$$ queries, we would have to multiply the complexity by $$$q$$$, which is very bad.

But in the problem KQUERY, we can actually check $$$q$$$ numbers simultaneously. So the obvious optimization here is to use parallel binary search. The implementation is here.

2D range sum query for a big, sparse matrix (with updates)

I can't actually find this problem anywhere (with my constraints), so here is the statement: you are given a matrix of size $$$n \times m$$$, initially filled with 0. You need to process $$$q$$$ queries of 2 types:

  • 1 r c v — change the value of cell $$$(r, c)$$$ to $$$v$$$.
  • 2 r1 c1 r2 c2 — get the sum of the elements of the submatrix limited by the 2 cells $$$(r_1, c_1)$$$ and $$$(r_2, c_2)$$$

If we have $$$n \times m \le 10^5$$$, we can solve this problem with a 2D Fenwick tree in $$$O(\log n \log m)$$$ per query. But here I want $$$n \le 10^5$$$, $$$m \le 10^5$$$, $$$q \le 10^5$$$. (Also note that without the first query type, we could solve this problem with a persistent segment tree.)
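For completeness, the classical 2D Fenwick tree mentioned above could be sketched like this. It is only feasible when the whole $$$n \times m$$$ grid fits in memory, and this sketch is add-based (a point assignment can be emulated by adding $$$v$$$ minus the stored old value); the struct name is my own.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Classical 2D Fenwick tree: point add, rectangle sum, 1-indexed.
struct Fenwick2D {
  int n, m;
  vector<vector<long long>> t;
  Fenwick2D(int n, int m) : n(n), m(m), t(n + 1, vector<long long>(m + 1, 0)) {}

  void add(int r, int c, long long v) {  // a[r][c] += v
    for (int i = r; i <= n; i += i & -i)
      for (int j = c; j <= m; j += j & -j) t[i][j] += v;
  }
  long long sum(int r, int c) {  // sum over a[1..r][1..c]
    long long s = 0;
    for (int i = r; i > 0; i -= i & -i)
      for (int j = c; j > 0; j -= j & -j) s += t[i][j];
    return s;
  }
  long long sum(int r1, int c1, int r2, int c2) {  // inclusive rectangle sum
    return sum(r2, c2) - sum(r1 - 1, c2) - sum(r2, c1 - 1) + sum(r1 - 1, c1 - 1);
  }
};
```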

As you can already guess, we add the updates and queries into the nodes of the merge-sort tree and then do something like the two-pointer technique. But what is the sort order? Conveniently, the sort order here is the processing order (already given by the input). So first we build 2 trees, one for updates and one for queries. Each update is added to the first tree at the position given by its $$$r$$$ coordinate. Each query is added to the second tree with the range $$$[r_1, r_2]$$$. Now we can go down the trees and find the answers limited by the columns only; for this, I use a Fenwick tree. Because we add the updates and queries in their order of appearance in the input, in each node we can simply process them in the order they were added.

A small implementation detail, though: we need a fresh Fenwick tree for each node, fast. For this, we can record all changes, and later we just need to roll them back.
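Here is one way this could look. This is my own sketch of the "record changes, then roll them back" idea, with a hypothetical struct name; the point is that resetting costs $$$O(\text{changes made})$$$ rather than $$$O(m)$$$.

```cpp
#include <bits/stdc++.h>
using namespace std;

// 1D Fenwick tree with a change log, so it can be reused as a
// "fresh" tree for the next node in O(number of changes).
struct RollbackFenwick {
  vector<long long> t;
  vector<int> touched;  // indices modified since the last rollback
  RollbackFenwick(int m) : t(m + 1, 0) {}

  void add(int i, long long v) {  // a[i] += v, 1-indexed
    for (; i < (int)t.size(); i += i & -i) {
      touched.push_back(i);  // duplicates are harmless for the rollback
      t[i] += v;
    }
  }
  long long sum(int i) {  // prefix sum a[1..i]
    long long s = 0;
    for (; i > 0; i -= i & -i) s += t[i];
    return s;
  }
  void rollback() {  // zero out exactly the cells we touched
    for (int i : touched) t[i] = 0;
    touched.clear();
  }
};
```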

To summarize, we get an $$$O(q \log n \log m)$$$ solution. Each query is added to $$$O(\log n)$$$ different nodes, and in each node we add/query with the Fenwick tree in $$$O(\log m)$$$.

Because I cannot find the problem, I also wrote a small test generator with a naive solution and checked my solution against it. The implementation is here.

Some real problem application

H. Hold the Line. 2019-2020 ICPC Asia Hong Kong Regional Contest

Link to the problem in the Gym. You can find the editorial here. Basically, we are given a number $$$n$$$ and an array $$$a$$$ of length $$$n$$$, initially filled with $$$-1$$$. Then we need to process $$$m$$$ queries of 2 types:

  • Given 2 numbers $$$i$$$ and $$$h$$$: set $$$a[i]$$$ to $$$h$$$. Each position will be set at most once.
  • Given 3 numbers $$$l, r, H$$$: find a position $$$i \in [l, r]$$$ such that $$$a[i] \ne -1$$$ and $$$|a[i] - H|$$$ is minimized. If there is no such $$$i$$$, output $$$-1$$$.

In the editorial, this problem is solved in $$$O(n \log n + m \log^2 n)$$$. But I don't like the square there, so instead I will demonstrate how I solved it in $$$O((n + m) \log n)$$$ (or rather $$$O((n + m) \log n \cdot \alpha(n + m))$$$, since I also used a DSU).

Let's first see how we solve this problem without the positions, i.e. we have an empty set of numbers, and each operation either adds an element to the set or queries for the lower/upper bound of a value in the set. This can literally be solved with the STL set in $$$O(n \log n)$$$. But if we somehow have the sorted order of the added/queried elements, we can do it in $$$O(n \cdot \alpha(n))$$$. Let's solve the reversed problem then: we initially have a set of numbers, and each operation either removes an element or queries for a lower/upper bound. Because we have the sorted order, let's create a linked list containing both the elements of the set and the query values. We can already see that removal costs $$$O(1)$$$. For a query, we can simply look at the next and previous elements of the query value in the linked list. A little problem here is that the next and previous elements must be elements of the original array, not query values. We can use a DSU to join all adjacent query values into one, so that the next and previous elements are never query values.
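As an illustration of the reversed problem, here is a sketch achieving the same $$$\alpha$$$ bound. Note that this is not the author's exact structure: instead of the linked list + DSU combination described above, it uses the classic "nearest alive position" DSU with path compression, and the struct name is mine.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Positions 1..n (the sorted values) start "alive"; support removal plus
// predecessor/successor queries in near-constant amortized time.
// Positions 0 and n+1 act as sentinels ("no such element").
struct PredSucc {
  vector<int> left, right;  // DSU parent pointers toward each side
  PredSucc(int n) : left(n + 2), right(n + 2) {
    iota(left.begin(), left.end(), 0);
    iota(right.begin(), right.end(), 0);
  }
  int findL(int i) { return left[i] == i ? i : left[i] = findL(left[i]); }
  int findR(int i) { return right[i] == i ? i : right[i] = findR(right[i]); }

  void erase(int i) {   // remove position i from the set
    left[i] = i - 1;    // its predecessor is now the predecessor of i-1
    right[i] = i + 1;   // its successor is now the successor of i+1
  }
  int pred(int i) { return findL(i); }  // nearest alive position <= i
  int succ(int i) { return findR(i); }  // nearest alive position >= i
};
```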

Now back to the original problem. For each node, we need to maintain the order of the operations in it, as well as their sorted order by value; in other words, each node needs 2 vectors. We first add the operations to the first vector of each node in the order given by the input; after that, we sort the updates/queries by their values and add them to the second vector of each node. Doing so, we have reduced the problem to adding to and querying the set in each node.

If you guys want to see my ugly implementation, here it is (test generator included).

Fun fact: I invented this trick when solving this problem.

G. Greatest Square — Grand Prix of NorthBeach

You can find the link to the upsolving and problem statement here.

We are given a polygon whose edges are parallel to either $$$Ox$$$ or $$$Oy$$$, and each consecutive pair of edges is perpendicular. We need to process $$$q$$$ queries. In each query, we are given a point $$$P$$$ lying strictly inside the polygon, and we need to find the side length of the largest square having $$$P$$$ as its lower-left corner and lying inside the polygon.

The base solution is based on this comment. Please read it before continuing. The problem is, I actually don't know how to do the second part with 1 basic segment tree, so I improvised and finally decided to use this trick.

Let's reformalize what we need to find. For each query point, we draw a line parallel to the line $$$x = y$$$ through it, and another line parallel to $$$Ox$$$ through it. Let's call these lines $$$a$$$ and $$$b$$$ respectively. We need to find the point with the smallest $$$x$$$ that lies below line $$$a$$$, but not below line $$$b$$$.

So first we sort all polygon points, as well as the query points, increasingly by $$$(x - y)$$$. In other words, we sort them in the sweep-line order of a diagonal sweep. For each polygon point $$$(x, y)$$$, we add it to the tree with the range $$$[0, y]$$$. For each query point $$$(x, y)$$$, we add it to the tree at the position $$$y$$$. Doing so, when processing each node we no longer need to care about the condition "not below line $$$b$$$". And because we add the points in sweep-line order, we can use the two-pointer technique to ensure the "below line $$$a$$$" condition. The local answer in each node for each query is the minimum $$$x$$$ over some prefix of the polygon points.

My ugly implementation again.


I know that this trick is not as powerful as it sounds in the title (the title is just clickbait though :)). But it has some nice advantages:

  • Less implementation effort than other data structures.
  • Removes 2 (or maybe more) restrictions at the same time in one node.
  • Everything is iterative.

But the main drawback is (or is it???) that this is heavily offline processing.

I really hope this idea can be seen more often, but sadly, until now I have only found the above "meaningful" applications. I hope you guys find this idea useful!

Applicable problems

I think this trick is nice and all, but because it is just an implementation trick, there might not be a lot of problems where it can, or should, be used (it might be overkill). However, I will add problems here as I find them.

  • CF848C. My idea is roughly the same, and the formula is the same. But thanks to the power of offline processing, we don't need coordinate compression or multiple Fenwick trees: we only need 1 Fenwick tree and clear its data after processing each node. Implementation
