bicsi's blog

By bicsi, history, 3 months ago,

... without doing much of anything.

I've been obsessed for the last two days with problem Data Centers from EGOI 2022. In particular, it has a subtask where $1 \leq N \leq 10^5$ and $1 \leq S \leq 5000$, where it seems $O(NS)$ wouldn't fit in the TL. I conjectured that using treaps would amortize to a good complexity, and after two days of burning grey matter, I've finally found a proof, and it's cool enough to make a blog out of it.

The problem basically starts with an array $[v_1, v_2, ..., v_N]$, where $1 \leq v_i \leq V$ for all $i$. It then asks us to iteratively simulate the operation "subtract $x_i$ from the biggest $m_i$ elements", all while keeping the array sorted.

We'll show that just using treaps (or any BBST for that matter) yields a good amortized complexity. More exactly, we'll prove $O((N + Q) \log V \log N)$, and it should actually be better in practice.

Naive solution with treaps

Of course, the "naive" solution (which is essentially what we'll implement here) does three things:

1. Extract the $m_i$ suffix of the sorted array (we'll call it $B$, and the remainder $A$)
2. Subtract $x_i$ from all elements of $B$
3. Merge the resulting sequences $A$ and $B$

Using treaps, it's easy to see that steps 1 and 2 can be done in $O(\log N)$ via split and lazy propagation. Step 3 is the heaviest, potentially requiring $O(N)$ or even $O(N \log N)$ steps, depending on the implementation.

Interweaving number

For a given merging process of sequences $A$ and $B$, let $k$ be the number of interweaving segments. See example below:

A = {2, 5,          15,   25, 30}
B = {      7, 9, 12,    21      }


In the example above, we have $k = 5$ interweaving segments (of lengths $2$, $3$, $1$, $1$, and $2$ respectively).

First, it is rather straightforward to write an implementation of merge on BBST that has complexity $O(k \log N)$ (via $k$ binary searches and $k$ splits).

Secondly, there can be at most $O((N + Q) \log V)$ interweaving segments in total, across all merges, for an array of size $N$ and $Q$ queries.

Potential method

For a given (increasing) sequence $A = [x_1, x_2, ..., x_n]$, let's define:

$\pi(A) = \sum_{i=1}^{n-1} \log(x_{i+1} - x_{i})$

(all logarithms are base-2)

Let's suppose we are merging two sequences $A$ and $B$, and there are $k$ interweaving segments. See example below:

             <-d1->         <--d2-->      <--d3-->       <---d4--->
A = {[--a1--],                      [-a2-],                        [--a3--]}
B = {              [---b1---],                     [-b2-]                  }


In the picture above, the segments [-----] are placeholders for some sequences of elements. The numbers $d_i$ are the distances between consecutive segments, e.g. $d_1 = first(b_1) - last(a_1)$.

The difference between potentials before and after merge $\Delta \pi = \pi (A) + \pi (B) - \pi (A \cup B)$ is:

$\Delta \pi = (\log (d_1 + b_1 + d_2) + \log (d_2 + a_2 + d_3) + \cdots + \log (d_{k-1} + a_{k/2} + d_{k})) - (\log d_1 + \log d_2 + \cdots + \log d_k) \quad (1)$

Dropping the (non-negative) segment spans $a_i$ and $b_i$, we can see that:

$\Delta \pi \geq (\log (d_1 + d_2) + \cdots + \log (d_{k-1} + d_k)) - (\log d_1 + \cdots + \log d_k) \quad (2)$

(intuitively, the "worst case" is a Faro shuffle, i.e. interweaving element by element).

However, because $\log$ is concave, we can use that $\log (\frac{a + b}{2}) \geq \frac{\log a + \log b}{2}$. Reformulating, we obtain:

$\log(a+b) \geq 1 + \frac{1}{2}(\log a + \log b) \quad (3)$

We'll add and subtract $\log (d_k + d_1)$ to inequality $(2)$, use inequality $(3)$, and simplify everything to finally obtain:

$\Delta \pi \geq k - \log (d_1 + d_k)$

(thanks freak93 for the easy one-liner about concavity of $\log$ )
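Spelled out, the chain from $(2)$ and $(3)$ looks like this (indices taken cyclically, $d_{k+1} = d_1$):

$\Delta \pi \geq \sum_{i=1}^{k} \log (d_i + d_{i+1}) - \log (d_k + d_1) - \sum_{i=1}^{k} \log d_i$

$\sum_{i=1}^{k} \log (d_i + d_{i+1}) \geq \sum_{i=1}^{k} \left(1 + \tfrac{1}{2}(\log d_i + \log d_{i+1})\right) = k + \sum_{i=1}^{k} \log d_i$

Substituting the second inequality into the first cancels the $\sum \log d_i$ terms and leaves exactly $\Delta \pi \geq k - \log (d_1 + d_k)$.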

The initial potential of the sequence is bounded by $N \log V$, and each merge operation adds at most $\log V$ to the potential, yielding a total of $(N + Q) \log V$ potential. Then, the number of interweavings is at most $(N + Q) \log V$.

Extra

It's not hard to see the power of this result. In essence, you can do all sorts of operations like: adding/subtracting some value $x$ to an arbitrary subsequence, dividing subsequence by some value, square rooting subsequence, and so on. All while keeping the sequence sorted.

Also, doing the $k$ binary searches on treaps might be inconvenient. Another approach, which is easier to code and faster, is to use the treap union function (source: Wikipedia):

function union(t1, t2):
    if t1 = nil:
        return t2
    if t2 = nil:
        return t1
    if priority(t1) < priority(t2):
        swap t1 and t2
    t<, t> ← split t2 on key(t1)
    return join(union(left(t1), t<), key(t1),
                union(right(t1), t>))


Conclusions

• It feels like this is a bit more general method than this one
• Replace the join treap operation in your library with merge
• I feel like this trick is well known in some countries...


By bicsi, history, 22 months ago,

Hello Codeforces!

Over the past few weeks I have been working on a tool for preparing contests (for the Romanian NOI and programming camps), inspired by Polygon. It allows people to quickly and safely prepare problems for contests using a simple command-line utility.

It has support for generators, validators, close-to-TL warnings, and even stress-testing to maximize a given objective value (e.g., generate tests where the answer is as big as possible) or to make a given solution fail. It also has some neat checks, like checking for duplicate tests, or checking whether a generator is deterministic, for reproducibility. More features will be added in the future.

I have already successfully prepared a couple of problems with it, and it worked flawlessly (for me, at least).

Installation

In order to install it, you just (hopefully) need to make sure you have Python installed, and then run:

pip3 install cprep

I have tested it on Linux/macOS and it should work. I am curious about Windows, so let me know how it works there.

Usage

Note: The GIF is stale and shows an old command. In the meantime I replaced "testutil" with "cprep" and "make" with "runall", mainly because testutil was already taken on PyPI

The package installed contains two utilities:

• A library for doing all sorts of operations (compilation/evaluation/stress test generation)
• A command-line tool to use it from the command line with a pretty colorful interface

The command-line tool can be used via cprep [command] [args...].

The main idea is that I wanted to write the code such that other interfaces (e.g. web) can be added easily.

For more information about usage and features, I recommend you check the GitHub page from time to time. I know the information is a bit sparse, but I will do my best to update it over the following days.

Epilogue

If the tool doesn't work for you, that's expected (I have only tested it on my machine until now). Just let me know in the comments what seems to be the problem and I will try to fix it.

If you fancy extra features, let me know privately or in the comments below :).


By bicsi, history, 2 years ago,

Recently, I’ve figured out a way to improve matrix exponentiation-like algorithms for sparse matrices. I’m pretty sure that some of you might find this being elementary linear algebra, but I’m pretty proud of my discovery, so I’m going to write a blog post about it. I've marked some TODOs and ideas in square brackets, in case you want to help me enhance this.

Prerequisites

Cayley-Hamilton theorem, Berlekamp-Massey algorithm, fast linear recurrence solvers.

Terminology

• $M$ is an $n \times n$ “sparse” matrix, with $m$ entries different than $0$.
• $k$ is the exponent.

Complexities will be written in terms of $n, m, k$.

Cayley-Hamilton theorem

One of the most astonishing theorems, in my opinion. The theorem says that for any $n \times n$ matrix $M$, there exist coefficients $c_i$ such that:

$M^n = \sum_{i = 1}^n c_i M^{n - i}$

The coefficients $c_i$ come from what is often called the characteristic polynomial of $M$.

This theorem essentially says that the powers of any $n \times n$ matrix follow a linear recurrence of order at most $n$ (to understand why that is, take the above equation and multiply both sides with $M^x$ for any $x \geq 0$).

Computing quadratic forms $\sum{M^k_{i, j} a_i b_j}$

Doing a little algebra, it is easy to see that $\sum{Q_{i, j} a_i b_j}$ can be written equivalently as $a^T Q b$, for any matrix $Q$. This is called a quadratic form.

Many problems involving matrix exponentiation in which you have to output one number essentially boil down to these quadratic forms. For example, computing the $k$-th Fibonacci number essentially asks us to compute $a^T M^k b$, where $a = b = [1, 0]^T$, and $M$ is the Fibonacci recurrence matrix.

Note that, due to the Cayley-Hamilton theorem, after doing some algebra it can be seen that $s_k = a^T M^k b$ follows a linear recurrence with recurrence coefficients equal to $c$. Therefore, if we compute $s_1, …, s_{2n}$, we can use the Berlekamp-Massey algorithm to deduce $c$, then use a fast linear recurrence algorithm to compute $X^k \bmod c$ (polynomial modulo) in $O(n^2 \log{k})$ or $O(n \log{n} \log{k})$, and then find $s_k$ [there was a tutorial on Codeforces for this, perhaps you can share the link in the comments].

However, to do that, we have to compute $s_0, …, s_{2n}$. To do so, remember that $s_i = a^T M^i b$, which means $s_i = a^T M (M^{i - 1} b)$. If we compute $M^i b$ for $i = 1, 2, …, 2n$ (using iterated matrix-vector products), we can compute $s_1, …, s_{2n}$ in time $O(m n)$ (it is essential to understand that a matrix-vector product $Mv$ can be computed in complexity $O(m)$).

Summing up, we can do the following:

• Compute $s_0, …, s_{2n}$ in $O(m n)$
• Compute $c$ in $O(n^2)$ using Berlekamp-Massey
• Compute $X^k \bmod c$ in $O(n^2 \log{k})$ or $O(n \log{n}\log{k})$
• Compute $s_k$ in $O(n)$

Total complexity is $O(mn + n^2 + n^2 \log{k} + n)$ or $O(mn + n^2 + n \log{n} \log{k} + n)$.

Computing matrix-vector products $M^k v$.

Let’s go even further and compute matrix-vector products. This is the same as computing $(e^{(j)})^T M^k v$ for all $j$. Here $e^{(j)}$ is the $j$-th element of the canonical basis ($e^{(j)}_k = [k = j]$). The key thing to notice here is that the characteristic polynomial $c$ is the same for all $j$, so all elements follow the same recurrence. Also, the products $M^i v$ are computed in the previous procedure, so we can just use them to compute all initial terms of the recurrence!

This essentially means that you can just take any non-zero vector $w$ (*), compute the initial $2n$ terms $M^i v$ that you need, then run Berlekamp-Massey on $w^T M^i v$, and you are done! The only thing that changes is the last step, which now requires $O(n^2)$ work.

*There are some good and bad choices for such a vector. In particular, you want to choose a vector $w$ such that the sequence $w^T M^i v$ has order of recurrence equal to $n$. I think it's fine to just pick random vectors until this works?

Total complexity is $O(mn + n^2 + n^2 \log{k} + n^2)$ or $O(mn + n^2 + n \log{n} \log{k} + n^2)$.

Computing $M^k$?

I’m not sure if this is possible, but I’m eager to hear your ideas!

Example problems

Given a directed graph $G$ with $n$ vertices and $m$ edges, compute the number of paths of length exactly $k$. Optionally, compute the number of paths of length $k$ starting in each of the $n$ vertices. $n \leq 1000, m \leq 10000, k \leq 10^9$. [any links to this problem?]

Given an undirected tree with $n$ vertices, find the number of paths of length $k$ from a given root vertex $r$ to all other vertices. [Link]

Given a planar graph $G$ with $n$ vertices, find the expected number of steps until you reach node $n$, starting from node $1$. At every step you choose an arbitrary neighbor of the current vertex. [Link]

[other problems?]

Sample code

Note that the sample code does not compute matrix-vector products in $O(m)$. While that kind of defeats the purpose of this blog post, it is more a test of correctness than an actual solution to a problem.

Code


By bicsi, history, 2 years ago,

Recently, the Moscow Fall Workshops took place (they ended about one week ago), and I wanted to address some issues that came to my mind during that period. As you might notice, it took me some time to build the courage and get in the right mindset to write about this.

First of all, I want to say that I am a huge supporter of programming camps and, in general, of initiatives that enhance the joy of doing competitive programming, and I respect the effort that goes into building communities around CP. I have always felt a kind of excitement when thinking about participating in programming camps (Petrozavodsk, Moscow Workshops, etc.) and their “hardcore” style with lots of contests. I therefore appreciate the immense efforts of the organizers, problem setters, and testers in preparing this camp.

Main point

Coming towards the main point of the blog, I felt that the last programming camp was a bit… underwhelming. Moreover, I’m having a hard time trying to justify the costs of such a camp, given the participation fees. For a quick overview, there were 87 Div1 + Div2 teams participating, and the participation fees varied from 30000 rubles (approx. $400) for EA/EU participants to $480 for other participants. Some discounts are applicable, but let’s say for the sake of argument that the average participation fee was $400. This comes to a total of around $35,000. I’m not including sponsors here (e.g. Yandex), mainly because I’m not sure whether they contributed money, but this would be a good enough (albeit underestimated) budget for the camp.

Let’s get more technical

Let’s analyze possible costs of doing such a programming camp.

Accommodation & Extra activities

This should not apply, as this edition was held remotely.

Problem setting

Problem-setting is another part of the story. It is well known that making a good problem set is very time-consuming and involves a lot of hard work. However, after the surprise of recalling an already-solved problem during one of the contests, I did some quick research and found that most of the contests consisted of problems taken from past contests (e.g. Seoul Regional 2020, Shanghai 2004, as well as several problems from the Japanese AIZU Online Judge). Among the ones that were not taken from other sources, one day was an Opencup contest, and the others are probably going to make up a future Opencup contest.

This essentially means that the problems shouldn't have required much work to prepare. There is, nonetheless, still some work to be done in order to find the right problems, port them to a different judging system, and rephrase statements. However, going back to the main point: does this explain the budget?

It’s very important to emphasize that I have nothing against making problem sets from past contests; I think it’s highly educational and efficient. However, I simply can’t ignore the fact that this camp had a budget of about $35K.

Editorials & Analysis

I don’t want to say too much about the editorials. I think they were pretty well written. I don’t think they were the most insightful, and often they were very short, but some may appreciate their conciseness. The video analyses, on the other hand, were not of the greatest quality, and more often than not they were rushed in the interest of time. I’m curious about other participants’ opinions, but I felt that the analyses did not provide much on top of the editorials.

Contest platform

The contest platform might impose extra costs, because servers have to be kept up and running for a premium contest. Oh no, I think I’ve just opened Pandora’s box! Aaargh!

The contest system was bad, there's no doubt about it. A lot of the time there were no standings, or the standings shown on the platform were from two hours before the current time. On top of that, I remember that on one of the days the system wasn’t evaluating any submissions in the queue for the first 30 minutes of the contest (and the chat suggested that it had been down even before we started), which made me essentially want to just skip the day.

Of course, this would be very understandable, except for the fact that the exact same problems happened in the previous camp. Participants suggested using other platforms like Codeforces last time; I understand that it makes sense to use the Yandex platform because of sponsors and whatnot, but at least make it work.

Div 2

What about Div2? I’m not too sure how the Div2 experience felt (maybe you could help me with this); however, I couldn’t help but notice that the lessons and contests prepared for Div2 were, more or less, identical to the ones in the Moscow Pre-Finals Workshop last year. I’m not sure if Div2 participants were aware of this fact before signing up, but I’d certainly not want to experience the surprise of having participated in the last workshop, only to find the same lessons and problem sets again. Again, please take this information with a grain of salt, as I haven’t participated in Div2, and have only taken a couple of glances over the materials.

Happy thoughts

On a positive note, I think that this camp (and last ones) had one pretty awesome thing: the Telegram chat. In my past participation in programming camps, I felt that I could have benefited a lot from interacting more with the participants, sharing ideas, or even doing some basic chit-chat. In this sense, I think the recent system comes much closer to that goal, and I actually felt a lot more of the “community” aspect of CP. I think this relates to the fact that, due to the “permanent” nature of chats, it’s easy to record and recall conversations and ideas.

Conclusion

I don’t think the last programming camp was bad. I think there is room for improvement, but the overall experience was okay. However, I do think that the camp was overpriced, and I’m having a hard time reasoning about where this budget is spent. The main reason why I’ve made this blog post is partly to try to understand the situation, and partly to open up to the community about this. My intention is not to hate on people who make these events happen, but rather to try to figure out what proportion of these camps are genuine interest in making the CP community more engaged and improve people’s skills, and what proportion is financial interest.


By bicsi, history, 2 years ago,

I recently found out about this paper by Tarjan et al. describing a data structure that bears a striking similarity to a data structure many of us have been familiar with throughout the years: the treap without rotations.

I was wondering what you think about it. Is it just a coincidence? Is there something more subtle that ‘zip trees’ have that treaps without rotations don’t? And, moreover, are there no notable mentions of this treap implementation in the literature?


By bicsi, history, 2 years ago,

Hello!

Today I will present a data structure that I have worked on for my Bachelor thesis a while ago, called (by me) a PreTree (link to thesis). This is a type of "compressed" trie that is arguably easier to implement than a regular compressed trie, and it achieves merging in amortized $O(N L)$ complexity for $N$ keys of equal length $L$. I will present an implementation as a segment tree, though, as it is easier to understand.

Prerequisite: know segment trees, and "sparse" segment trees (a.k.a. bitwise tries).

Main idea

We will keep a regular segment tree, with the twist that the root contains the minimum key. Basically, a PreTree on keys in the universe $[b, e)$ is a binary tree $T$ where either $T$ is empty, or the following conditions hold:

1. $T$ holds the lowest key present in the range $[b, e)$
2. $T$'s left son is a PreTree on values $[b, \frac{b + e}{2})$
3. $T$'s right son is a PreTree on values $[\frac{b + e}{2}, e)$

Because $T$ gives a one-to-one mapping between tree vertices and keys, the memory complexity of $T$ is obviously $O(N)$.

The reason for the name PreTree comes from it being a "pretty tree" (of course, beauty is subjective), and also because their pre-order traversals yield keys in sorted order.

Implementation

In order to make the implementation easier, we will define a single big operation: "merging" two PreTrees. Suppose we have two trees $T1$ and $T2$. Merging them is done by the following recursive procedure:

function Merge(T1, T2, b, e):
    if T1 is empty then return T2
    if T2 is empty then return T1

    m = (b + e) / 2
    if T1.k > T2.k then swap T1 and T2
    T1.l = Merge(T1.l, T2.l, b, m)
    T1.r = Merge(T1.r, T2.r, m, e)
    T2.l = T2.r = empty

    if T1.k != T2.k:
        ## We have to insert T2 into the corresponding child
        if T2.k >= m:
            T1.r = Merge(T1.r, T2, m, e)
        else:
            T1.l = Merge(T1.l, T2, b, m)
    return T1


Insertion

In order to insert a key into the tree, just merge it with a singleton tree.

Erase

This is a bit more involved. Notice, however, that any PreTree $T$ is a min-heap on the keys. Find the key and then pop it as you would do in a heap.

Predecessors

In order to find the greatest key smaller than a given $k$, one can do the following:

function Smaller(T, k):
    if T is empty or T.k >= k:
        return NOT_FOUND
    ret = Smaller(T.r, k)
    if ret is NOT_FOUND:
        ret = Smaller(T.l, k)
    if ret is NOT_FOUND:
        ret = T
    return ret


Finding the smallest key greater than or equal to a given $k$ is possible, albeit a little more complicated. The simplest way would be to also keep the maximum on the subtree. I will omit the actual implementation.

Performance

The Merge function seems not too efficient at first glance. However, its amortized complexity is bounded by $O(N \log U)$, where $U$ is the size of the universe and $N$ is the total number of nodes. In order to see why that is, we will have to look at the depths of the nodes in the tree.

Let's suppose $T1$ and $T2$ have distinct keys. Each non-trivial call to the Merge function increases the depth of at least one vertex: after each call, the root node with the higher key gets "demoted" one level downwards, increasing its depth by one. As the depth of a node can never exceed $O(\log U)$, the total complexity cannot exceed $O(N \log U)$.

Follow-up 1: Prove that this still holds for non-distinct keys!

Follow-up 2: What about erase operations? How do they affect the amortized analysis?

Follow-up 3: What about (Treap) splitting operations? Can you split T into two trees, depending on some (monotonic) predicate? How does that affect complexity?

Applications

One of the most standard applications of this is to improve the $O(N \log^2 N)$ bound on various problems involving the small-to-large trick. It has the additional improvement of having $O(N)$ memory without making the implementation a lot harder.

Example problems:

Could you tell me about other problems involving some form of sparse segment trees, and other tasks where this could come in handy, so I can add them to the blog?


By bicsi, history, 2 years ago,

Hello Codeforces!

For the past couple of years I have been developing a competitive programming platform that aggregates solutions from different OJs. It was initially written for assigning tasks for my own preparation (and some professors at our university used it for grading students in algorithmic courses).

Recently I have been developing a "ladders" feature, because I wanted to train and realized that this would be the best way to do it. Now, I am excited to say that I have decided to "go public" with this feature and invite you to try it out.

How it works

Do the steps below:

• Create an account at https://competitive.herokuapp.com
• Go to ladders and start solving problems
• When you start a task, a 2-hour timer starts; you have to solve the task in this time limit
• Tasks that are solved successfully are marked green, tasks that were unsolved are marked red
• You can also see a leaderboard with the best performances for the ladders

I will have a strict policy against griefers. If you do one of the following, you will probably get banned:

• Add a Codeforces/Atcoder/etc. account that is not yours
• Try to mess with the website in any way that would harm the data inside it
• Create any scripts that would mass-create tasks/groups/etc
• Exploit bugs of any sorts
• Please don't lie to yourself and cheat

If I see many such attempts, I will discontinue the website. If you see someone do this, or get affected by it, please report it by completing the feedback form.

Other features of platform

I might come back here later to showcase some other nice features that I have been using on this platform. Until then, feel free to explore :). If you have ideas of interesting features, please let me know below :).

Problems?

Please let me know if you find any problems by completing the feedback form on the website, or by writing in the comments. The website is still in development, and there will be a lot of changes in the future.


By bicsi, history, 3 years ago,

In a recent contest in Romania, there was a problem that was formulated as follows:

Given a convex polygon $P$ specified by points $(x_i, y_i)$ in the 2D plane, and a number $R$, construct a polygon $P'$ given by $(x'_i, y'_i)$, such that:

• the Euclidean distance between $(x_i, y_i)$ and $(x'_i, y'_i)$ is at most $R$
• the area of polygon $P'$ is as large as possible

It is guaranteed that any two points on the polygon are at distance at least $2R$ apart.

The problem was stated a bit awkwardly, as it required the area not to be the largest possible, but as large as the "model solution's". In case you are interested, the task (in Romanian) can be found here. Keep in mind that the task at hand is a bit ill-posed and has some notable problems with floating point precision. However, I'm not too concerned about that.

It feels to me that the problem might actually be convex; however, I don't have the mathematical skills to (dis)prove it.

The problem can be formulated in terms of constrained optimization as follows:

max $x'_1 y'_2 + x'_2 y'_3 + ... + x'_n y'_1 - x'_2 y'_1 - x'_3 y'_2 - ... - x'_1 y'_n$

s.t. $(x'_i - x_i)^2 + (y'_i - y_i)^2 \leq R^2$

(here $x'_i, y'_i$ are the actual variables to optimize, excuse me for that)

The constrained space is convex. The objective function seems concave, but I'm not sure how to prove it. For three points, the Hessian of the objective function looks like this.

The model solution is to iteratively fix all points apart from one and optimize for a single point (up to convergence). It can be proven rather easily that the best placement lies on the intersection between the perpendicular from $(x_i, y_i)$ to the vector given by $(x_{i - 1}, y_{i - 1})$ and $(x_{i + 1}, y_{i + 1})$ and the circle of radius $R$. Check it out.

• Is the constrained optimization convex?
• If yes/no, does the algorithm above produce the global maximum?
• What are these types of "iterative optimization" algorithms called in the literature? Are there known bounds for convergence (in the convex case)?


By bicsi, history, 3 years ago,

Hello!

After seeing the recent hype around debugging utils, code snippets, and whatnot, and because I had 30 minutes of free time to spend this afternoon, I decided to write a small Python script that pre-compiles multiple files into one big file for Codeforces submission.

The tool is here: https://pastebin.com/c4dS4Pik (Python 3 required)

Basically the tool does these steps:

• remove the #include <bla> lines from the code
• run the GCC preprocessor on the code
• add the #include <bla> lines back
• run some cleanup routine (optional)

In order to use this script, copy it somewhere (preferably into a directory on your PATH), and run it using /path/to/scr.py file.cpp [ARGS]. The ARGS are similar to the ones you would give to your g++ command.

For example, let's say we have three files:

FILE: debug.hpp
FILE: dsu.hpp
FILE: sol.cpp

Output of /path/to/scr.py sol.cpp:

OUTPUT

Output of /path/to/scr.py sol.cpp -DDEBUG:

OUTPUT

The main idea is that it merges all your #include "bla" files (not the #include <bla> ones, though), and replaces all the #defines and other preprocessor instructions.

Let me know what you think! The tool should work fine in UNIX-based systems (Mac OS X, Linux). I would be happy if someone could test/port this tool for Windows. I think one cool thing about this tool is that it pre-compiles all the #defines, so that most of the output code would look more standardized (C++-ish).

I would personally probably not use this tool too much, as my general approach is to copy-paste implementations and tweak them to the specific problem, but I know a lot of people prefer to use them as black boxes, so this might be useful.


By bicsi, history, 3 years ago,

https://github.com/bicsi/autotest

Hello!

For the last two days I have been working on an idea of mine that I had had for quite some time but hadn't had the time to implement until now. It is a framework for automatic test generation using optimization methods.

The main point

I want to help people that are into problem-setting but might have problems writing generators (or might have little time to do so). I was thinking that, instead of coming up with complicated generators, one might be able to reuse some generic parametric generators for given structures (partitions, graphs, polygons, etc.) and generate tests by focusing on designing a "fitness" function for a given input case.

Not only that, but many times while preparing problems I found myself writing generators with some free parameters, trying to optimize something in the test while keeping some randomness (for example, making the output answer as big as possible subject to the constraints, generating test cases that have a given number of candidate solutions, generating test cases where line 100 in my code gets called as many times as possible, etc.).

Also, I think it's also useful to have some generic framework for easy hyperparameter optimization with as few distractions as possible (in contests like Hash Code, for example).

I've recently learned about automatic hyperparameter optimization using techniques suitable for fitness functions that take time to compute (e.g. programs), and I've experimented with the hyperopt Python library for some university projects. And, to be fair, I was pretty impressed. This thing knows what it's about when it comes to optimization.

The framework

The framework should be more or less simple to use, if you go to the GitHub repository and read the readme file there. I focused my work on having a front-end API that is as clean as possible and requires as few extra libraries as possible (just clone/download the repository and install some Python libraries).

How to use it

https://github.com/bicsi/autotest

More technical stuff (not mandatory for library users)

Every generator compiled with the autotest framework will allow some under-the-hood extra interaction with the autotest.py script. Populating the autotest::Params is being done via command-line arguments. For example, if you want to feed $n = 5$ into a generator, you have to run it via ./gen -Pn 5. When the .get() method is called, the argument is asserted to have been passed to the executable, and its value is validated and converted to the expected type.

However, the autotest.py script can't know what parameters should be fed into the generator. That's why a generator compiled with the autotest framework can be run in interactive mode (via the --interactive command flag). The only difference here is that when a parameter isn't found as a command-line argument, instead of failing, the generator 'asks' the script via stdin/stdout what its value should be. This way, the autotest.py script "gains knowledge" about the generator dynamically at runtime. This is essential, because it means that you can implement generator libraries with "hidden parameters" that are transparent to the user, and which will be optimized automatically by the script.

The script then optimizes the sum of absolute differences between the goal metrics produced by the model solution and the goal metrics sought in the tests config file. Some weighted sum might be useful in the case of multiple goals, but this hasn't been implemented. It also has some L2 regularization implemented, which means it penalizes parameters with high absolute values.

Future work

For now, the framework does not provide a lot of features (but it works, so at least some credit is due :) ). I think a lot of work will go into creating adapters from different random generator libraries to this autotest::Param format, with proper modelling (I have some examples inside the lib/generators.hpp file illustrating that, but they are weak).

Integration with Polygon doesn't sound too bad either.

Probably just as important is improving stability. If you think the framework is useful and you also prepare contests, try using it for a contest and see how it goes. However, don't expect it to work perfectly, as I've only tested it on happy paths.

This version is just the MVP

Contribution

I feel like this can grow into a rather large project (if you think it deserves to be used), so let me know what your thoughts and ideas are. If you want to contribute, feel free to let me know, and potentially submit a PR (I'm a bit picky about coding style sometimes, so keep that in mind).

• +147

By bicsi, history, 3 years ago,

Hello!

Recently I have been trying to learn link cut trees (and splay trees, for that matter). I have tried to polish an implementation that is short (for our ICPC notebooks), as well as easy to use and extend. I might extend this blog post into a tutorial on how link cut trees are used and, more so, how this template can be used.

The implementation is here:

Implementation

It is around 100 lines long (which is not ideal), but it has some neat features. For example, if you call lc.Path(a, b), the implementation will return the root of a splay tree that contains the whole path from a to b. Also, the function access(v) will make sure that node v's splay tree contains the whole path from the root at that time down to v. It also allows subtree queries via the (amazing) trick of virtual subtrees (https://codeforces.com/blog/entry/67637).

I feel like there is no implementation out there with a very clean way of modifying it (for people who don't know LCT), so I decided to adapt one myself (also, I don't like using raw pointers -- if you're like me, you might enjoy this implementation). Another thing I like about this implementation is that the splay tree code is independent of the fact that it's used inside a link-cut tree (it's just plain splay tree code, which can be factored out).

Another very useful thing, in my opinion, is that the splay tree can be used completely separately from the link cut tree (it is not embedded in its implementation), so the two could go into separate snippets in your ICPC notebooks.

In order to adapt the implementation to path aggregates (updates/queries of any type), you would have to create some function that updates/queries the splay tree node returned by GetPath(u, v), and potentially modify the push and pull methods (push is lazy propagation, whereas pull is the regular update of a node from its children).
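To make the push/pull contract concrete, here is a self-contained sketch for the "add on path" / "sum on path" case. This is not the blog's actual template; the names (Node, Apply, Push, Pull, NewNode) and the index-based node storage are illustrative assumptions.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Illustrative splay tree node extended for "add on path" / "sum on path"
// aggregates, following the push/pull convention described above.
struct Node {
    int child[2] = {0, 0};
    long long val = 0, sum = 0, lazy = 0;
    int size = 0;
};

vector<Node> T(1);  // T[0] is a sentinel "null" node with size 0

int NewNode(long long v) {
    Node n;
    n.val = n.sum = v;
    n.size = 1;
    T.push_back(n);
    return (int)T.size() - 1;
}

// Apply an "add" update lazily to a whole subtree.
void Apply(int x, long long by) {
    if (!x) return;
    T[x].val += by;
    T[x].sum += by * T[x].size;
    T[x].lazy += by;
}

// push: lazy propagation downward to the children.
void Push(int x) {
    if (!x || !T[x].lazy) return;
    Apply(T[x].child[0], T[x].lazy);
    Apply(T[x].child[1], T[x].lazy);
    T[x].lazy = 0;
}

// pull: recompute a node's aggregates from its children.
void Pull(int x) {
    if (!x) return;
    int l = T[x].child[0], r = T[x].child[1];
    T[x].size = 1 + T[l].size + T[r].size;
    T[x].sum = T[x].val + T[l].sum + T[r].sum;
}
```

To support a different aggregate (min, max, hashes, ...), only Apply and Pull need to change; the rotation/splay code never looks inside them.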

Also, here is a pastebin of some stress test code, in case you want to test the implementation.

Any thoughts on how to improve the implementation? What do you think about it overall?

• +235

By bicsi, history, 3 years ago,

Hello!

I will be continuing my "solving problems on first sight" series with another livestream on YouTube tomorrow. In case you missed my first post and don't know what it is about, please refer to my blog posts.

The stream is going to be at 5 PM EEST on Sunday, March 22 (note the time difference from my past streams).

Please suggest which problems I should solve by replying to this blog post. Also, let me know if this time doesn't work for you or if I announced it too late.

I hope to see you all tomorrow!

• +68

By bicsi, history, 3 years ago,

Hello!

I will be continuing my "solving problems on first sight" series with another livestream on YouTube this Sunday. In case you missed my first post and don't know what it is about, please refer to the following link:

https://codeforces.com/blog/entry/73212

The stream is going to be at 3 PM EEST

If there is a contest and there is a better time for the livestream that would not clash with it, please let me know by Friday and I might adjust the time. As before, you can vote suggestions for what I would solve on this link: https://poll.ly/#/G65O8lOL (please do your best to check that I haven't actually solved the tasks beforehand). I will also be looking for suggestions inside the comments section on this blog and the one before.

I hope to see you all on Sunday!

• +83

By bicsi, history, 3 years ago,

TL;DR I am thinking of holding a series of livestreams on youtube/twitch of me solving problems without having seen them beforehand. I want to highlight how to approach a medium-hard programming task when you are faced with one. I want this to be a social and chill environment. Your ideas/feedback would be greatly appreciated.

TS;WR

Hello guys! I have been doing competitive programming for a while, and lately I have been thinking about how I have developed during the past years, and about what is different between how I used to approach problems in the beginning and how I do it nowadays. I thought it would be a pretty nice idea to give my thoughts back to the community.

What I was thinking about is recording some videos or streaming myself solving some competitive programming tasks that I haven't seen before. The inspiration for this idea came from a friend (teammate) of mine who wanted to do this quite some time ago, in order to train for ICPC. The format would be like this: you propose tasks that you find interesting or that challenged you when you first approached them, and I try to solve them during the stream (deciphering the statement, finding the solution, implementing the solution).

Why I want to do this

I have been coaching some people for a while, and during this time I have had the feeling that explaining a problem/solution turns out to be very different when you have solved the problem beforehand, as opposed to coming up with the solution when you are facing the task for the very first time. Not only do I tend to forget the pitfalls I ran into once I have solved a problem, but a lot of the time I don't even recall the thought process that was needed to solve it. That's why it's sometimes hard for me to even give hints to people if they're stuck on a task, for example.

With this format, I am hoping to capture and highlight the general thought process that happens between reading the statement of a problem and getting AC on it, in addition to the solution itself. It might also include tips on implementation and debugging along the way.

Some more details (FAQ style)

So is this some sort of online coaching sessions?

Yes and no. While the main goal of this is me trying to highlight the thought process of solving the problem, I want to see it as more of a group where you and I both discover which ideas and approaches work on tasks, and which don't.

What happens when you run out of ideas for a solution?

Some problems might prove difficult for me as well, and given that I would be live, there's a high chance that I might get stuck. In those cases, I think I would move on to another problem (which is probably what I would do in a contest), or ask for a hint from you guys.

What I was thinking of is a simple poll where you could add options, and vote for the best ones. This website seems to provide a quite neat interface for creating exactly what I need. You could also post a comment, and whichever comments receive most upvotes will be chosen. This way, the post will be kept more alive until the actual stream.

Some people solve contests; why don't you just do that?

While I certainly don't want to restrict myself to this proposed format, the reason I am thinking of solving problems "offline" as opposed to during a contest boils down to the fact that, even though solving contests would yield a similar scenario, I am usually more stressed during contests, and that translates into less talking and more thinking. For now, I think solving tasks offline will give me enough calm to describe my thoughts more easily and succinctly.

When is it going to happen?

Vote your options here (you can also just comment down below)

Please only use links when adding options. I will pick the tasks that have the most votes at the time of the stream. For this session, please restrict yourselves to problems that are not too hard, so that I can build up my courage :). Problems can be from any judge: Codeforces, Atcoder, CSAcademy, Kattis, Infoarena, and others. They can be either ICPC style (preferred) or IOI style. Difficulty level should be around Codeforces Div1 A/B/C (or Div2 C/D/E).

Final words

I will be updating this post as the week progresses, but I was very eager to post this. I am very excited about this idea, and I can't wait to hear your feedback. Any ideas of yours are greatly appreciated.

I hope that I'll see you during the stream!

UPD: I have settled the stream date, and added a link to my channel.

UPD2: The stream is now on YouTube for whoever wants to watch it. Quick jumps are in the descriptions, for each specific problem. https://www.youtube.com/watch?v=fpxArbp3FLo

• +232

By bicsi, history, 3 years ago,

There seems to be a lot of encouragement for new people to learn segment trees, and in particular the lazy propagation technique, and it seems to me that most of the time it is not actually needed.

As a quick refresher (although I feel most of you would already know), lazy propagation is a technique in segment trees that lets you do updates on whole ranges of elements, by first only updating the smallest factoring of the update range in the tree. For example, in order to update range $[2, 5]$, what you do is you update only ranges $[2, 2], [3, 4], [5, 5]$ in the segment tree, and next time node $[3, 4]$ is accessed you "propagate" the updates downward into ranges $[3, 3]$ and $[4, 4]$. This allows you to effectively aggregate (combine) multiple updates on a given range, to save extra work.

Usually when people talk about lazy propagation, some tasks like "add on range" — "minimum on range" or "add on range" — "sum of range" naturally come to mind. However, I feel like these are poor examples of applications, as most of these can be solved without lazy propagation.

OMG, how is that possible?

An alternative way to approach these kinds of problems is to keep an extra array $lazy$ alongside the usual segment tree. $lazy(node)$ is an aggregate of all the update operations done on $node$ up to the present time. In the above examples, $lazy(node)$ would be the sum of all the updates done on the node. Then the recurrence becomes $data(node) = lazy(node) + min(data(node * 2), data(node * 2 + 1))$ in the "minimum on range" case and $data(node) = data(node * 2) + data(node * 2 + 1) + lazy(node) * length(node)$ in the "sum on range" case (here $length(node)$ denotes the length of the range corresponding to $node$ in the tree).

The way to query is to pass an extra parameter down the traversal, which aggregates the $lazy$ values while going down the tree. The technique is similar to something called "upwards-downwards dynamic programming on trees" in Romania.

An example implementation is here.
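As a minimal sketch of the idea (not the linked implementation; names and structure are illustrative), here is the "add on range" / "minimum on range" variant over a zero-initialized array:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Non-propagating segment tree for "add on range" / "minimum on range".
// Invariant from the text: data(node) = lazy(node) + min(children),
// so data(node) already includes the updates stored at that node, but
// NOT the ones stored at its ancestors.
struct SegTree {
    int n;
    vector<long long> data, lazy;
    SegTree(int n) : n(n), data(4 * n, 0), lazy(4 * n, 0) {}

    // Add x to all elements in [l, r]; nothing is ever propagated down.
    void Update(int node, int b, int e, int l, int r, long long x) {
        if (r < b || e < l) return;
        if (l <= b && e <= r) {
            // data(node) includes lazy(node), so both get the update.
            lazy[node] += x;
            data[node] += x;
            return;
        }
        int m = (b + e) / 2;
        Update(2 * node, b, m, l, r, x);
        Update(2 * node + 1, m + 1, e, l, r, x);
        data[node] = lazy[node] + min(data[2 * node], data[2 * node + 1]);
    }

    // 'acc' aggregates the lazy values on the path strictly above 'node'.
    long long Query(int node, int b, int e, int l, int r, long long acc) {
        if (r < b || e < l) return LLONG_MAX;
        if (l <= b && e <= r) return acc + data[node];
        int m = (b + e) / 2;
        acc += lazy[node];
        return min(Query(2 * node, b, m, l, r, acc),
                   Query(2 * node + 1, m + 1, e, l, r, acc));
    }
};
```

Usage is the standard recursive form: st.Update(1, 0, n - 1, l, r, x) and st.Query(1, 0, n - 1, l, r, 0).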

So, you still store lazy array, but you just don't propagate. Why should we care?

Honestly, first I would say that it's simpler and more natural. A lot of data structure problems I've encountered need some sort of "lazy" value that holds an aggregate of operations affecting the whole structure (take a simple data structure problem where you have to model adding a constant value to all elements of a collection and popping the min element, which can be done with a priority queue along with an extra value that lazily aggregates all add operations).

Second of all, I think it's easier to debug a solution that does not propagate, as I can reason about the correctness of the segment tree values more easily with this approach. In contests like ACM ICPC, it is key to debug on paper as much as possible when applicable, and with this approach one can simulate the range updates and build expectations about the data structure at different points in time more easily.

Third of all (and maybe most importantly), it is way faster. I will probably make a full comparison if people are interested, but from my experience lazy propagation yields a particularly slow solution (the hidden constant is bigger), probably because it does a lot more mutations on the underlying array; this approach reduces that constant very considerably.

Ok, then, why ever propagate?

Well, it seems that not all problems can be solved via this method. For example, a "set to constant value on range" and "sum on range" type problem cannot be easily solved by this. What is particular about these scenarios is that the update operations are order-dependent (in other words, they are not commutative). However, in a lot of cases the update operations are commutative, and I don't see why any of these kind of problems would use lazy propagation instead of this technique.

I'm curious to hear your opinions on this.

• +70

By bicsi, history, 3 years ago,

I was recently trying to solve problem K. Kingdom of Ants on Kattis. However, my submission strangely receives a Memory Limit Exceeded verdict, and I can see no way of achieving that with my code.

Do you guys spot any reasons why the above code would give MLE? Maybe some weird C++ undefined behaviour, or maybe some compiler flaw?

UPDATE: After handling the (very degenerate) case where there is only one y-coordinate (right between lines 119 and 120), the solution gets an Accepted verdict. However, I still don't understand why the initial verdict was MLE; it was very misleading.

• +3

By bicsi, history, 5 years ago,

I wanted to showcase a simpler algorithm for the closest pair of points in 2D problem, and maybe to discuss its performance / countercases.

The problem is:

Given N distinct points in Euclidean 2D space, compute the minimum (squared) distance between any two distinct points.

The usual approach here is a divide-and-conquer algorithm, which can be found virtually anywhere, including on Wikipedia. The complexity of this algorithm is O(n log n), but it is rather tricky to achieve this complexity.

The alternative approach (based on the same idea) is to do a sweep line. We sort the points by x-coordinate and keep a set of the points in the vertical strip (x - d, x], sorted by y-coordinate, where d is the smallest distance so far (we maintain the strip with the two-pointers technique). Now, for each new point (x, y), we query the whole range [y - d, y + d] in this set and possibly update our answer.

Due to the proof of the D&C algorithm, at any time the queried range should contain O(1) points on average, so the total complexity is O(n log n). The code is below:

long long ClosestPair(vector<pair<int, int>> pts) {
    int n = pts.size();
    sort(pts.begin(), pts.end());
    set<pair<int, int>> s;

    long long best_dist = 1e18;
    int j = 0;
    for (int i = 0; i < n; ++i) {
        int d = ceil(sqrt(best_dist));
        while (pts[i].first - pts[j].first >= d) {
            s.erase({pts[j].second, pts[j].first});
            j += 1;
        }

        auto it1 = s.lower_bound({pts[i].second - d, pts[i].first});
        auto it2 = s.upper_bound({pts[i].second + d, pts[i].first});

        for (auto it = it1; it != it2; ++it) {
            int dx = pts[i].first - it->second;
            int dy = pts[i].second - it->first;
            best_dist = min(best_dist, 1LL * dx * dx + 1LL * dy * dy);
        }
        s.insert({pts[i].second, pts[i].first});
    }
    return best_dist;
}


What do you think? Are there any special cases (e.g. many points at the same x coordinate) that would break the solution?

• +43

By bicsi, history, 5 years ago,

This article will be presenting a rather classical problem that can be solved using deques, along with an extension that allows you to solve the problem in its more general multi-dimensional case. I have decided to write this article after this discussion on 2D range-minimum query.

The article will be mainly based on this following problem:

You are given an array of numbers A[] of size n and a number k ≤ n. Find the minimum value for each contiguous subarray of size k.

We will be now focusing on the linear-time solution to this problem.

Solution:

Consider sweeping from left to right through the array. At every moment we keep a list of "candidates" for minimum values throughout the process. That means that at each moment, you have to add one element to the list and (potentially) remove one element from the list.

The key observation is that if, during the sweep line process, we find two values A[i] and A[j] with i < j and A[i] ≥ A[j], then we can safely discard A[i]. That is because, intuitively, A[j] will "live" in our sweep line longer than A[i], and we will never prefer A[i] over A[j].

We should now consider pruning all the "useless" values ("useless" as in the statement above). It is easy to see now that doing this will lead to a strictly increasing list of candidates (why?). In this case, the minimum will always be the first element (O(1) query).

In order to insert an element at the back of the pruned candidate list, we take a stack-like approach of removing all elements that are greater than or equal to it, and to erase an element, we just pop the front of the list (if it has not already been removed).

This is a well-known approach for finding minima over fixed-size contiguous subarrays. I will now present an extension that allows you to do the same trick on matrices and even multi-dimensional arrays.
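As a reference point, the 1D algorithm above can be written compactly like this (a self-contained sketch; names are illustrative):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Sliding window minimum via a monotonic deque, as described above.
// Returns the minimum of every contiguous subarray of size k.
vector<int> SlidingMin(const vector<int>& a, int k) {
    deque<int> cand;  // indices of candidates; their values are increasing
    vector<int> res;
    for (int i = 0; i < (int)a.size(); ++i) {
        // Prune "useless" candidates: a[i] outlives them and is not larger.
        while (!cand.empty() && a[cand.back()] >= a[i]) cand.pop_back();
        cand.push_back(i);
        // Drop the front candidate once it leaves the window.
        if (cand.front() <= i - k) cand.pop_front();
        if (i >= k - 1) res.push_back(a[cand.front()]);
    }
    return res;
}
```

Each index is pushed and popped at most once, so the total running time is O(n) regardless of k.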

The multi-dimensional extension

Problem (2D):

You are given a matrix of numbers A[][] of size n × m and two numbers k ≤ n, l ≤ m. Find the minimum value for each contiguous submatrix of size k × l.

Solution:

Consider the matrix as a list of rows. For each row vector of A, use the 1D algorithm to compute the minimum value over all l-length subarrays, and store the results in ColMin[][] (obviously, ColMin[][] is now an n × (m - l + 1)-sized matrix).

Now, consider the new matrix as a list of columns. For each column vector of ColMin, use the algorithm to compute the minimum value over all k-length subarrays, and store them in Ans[][] (of size (n - k + 1) × (m - l + 1)).

The Ans[][] is the solution to our problem.

The following picture shows the intuition behind how it works when computing Ans[1][1] for n = 5, m = 7, k = 3, l = 4:

The pseudocode is as follows:

def solve_2d(M, k, l):
    column_minima = {}  # empty list
    for each row in M.rows:
        # We suppose we have the algorithm that solves
        # the 1D problem
        min_row = solve_1d(row, l)
        column_minima.append_row(min_row)

    ans = {}
    for each col in column_minima.cols:
        min_col = solve_1d(col, k)
        ans.append_col(min_col)

    return ans


Note that the pseudocode is (deliberately) hiding some extra complexity of extracting rows / columns and adapting the 1D algorithm to the 2D problem, in order to make the understanding of the solution clearer.

The total complexity of the algorithm can be easily deduced to be O(n * m).

Multi-dimensional case analysis

The solution can be extended to an arbitrary number of dimensions. For a d-dimensional matrix of size s1, s2, ..., sd, the time complexity of the problem is O(d * s1 * ... * sd), and the memory complexity is O(s1 * ... * sd). This is much better than other algorithms that do the same thing on non-fixed-size submatrices (e.g. multi-dimensional RMQ has O(s1 * ... * sd * log(s1) * ... * log(sd)) time and memory complexity).

Finding the best k minima

The deque approach by itself is limited in the sense that it only lets you find the minimum value over the ranges. But what happens if you want to compute more than one minimum? We will discuss an approach that I used during a national ACM-style contest, where we were able to compute the best 2 minima, and then argue that you can extend it to an arbitrary number of minimum values.

In order to store the lowest 2 values, we will do the following:

Keep 2 deques, namely D1 and D2. Do a similar algorithm of "stack-like popping" on D1 when you add a new element, but instead of discarding elements from D1 when popping, transfer them down to D2 and "stack-like pop" it.

It is easy to see why the lowest 2 elements will always be in one of the two deques. Moreover, there are only 2 cases for the lowest two elements: they are either the first two elements of D1, or the first elements of D1 and D2 subsequently. Checking the case should be an easy thing to do.

The extension to an arbitrary number of minima is, however, not so great, in the sense that the complexity of this approach becomes O(n * k^2) for an n-sized array, currently bottlenecked by the number of elements you have to consider in order to find the first k minima. [Maybe you can come up with a cleverer way of doing that?]

This is the problem I referred to above: http://www.infoarena.ro/problema/smax. I recommend thinking it through and implementing it; you can translate the statement via Google Translate or equivalent.

• +349

By bicsi, history, 7 years ago,

Hello! I am a fairly advanced programmer (although I'm pretty new), and I know the basic algorithms and data structures used for most problems. However, I don't know how to get better from this state onwards.

I should say that my strength is mainly DS problems. Greedy, D&C, and DP I do pretty well (once I recognize a specific problem type), while constructive algorithms seem very hard to me (for example link). For DS, I know pretty much all there is to know about segment trees, BIT, sqrt-decomposition (I call it a DS, don't blame me :D), BSTs, hashing, etc. (the basic ones), although problems that involve advanced tricks with these (e.g. persistent segment trees, lazy propagation) seem very appealing.

I want to prepare myself for this year's ACM-ICPC contest, as it is my first year in uni. I have been learning algorithms intensively since December.

I would be grateful if you know any good lists of problems that are really crucial and/or teach useful new techniques, as I am stuck momentarily. I have read the results on Google for ACM algorithms and such, so a personal response would be a lot more appreciated :).

I will also try to keep an updated list of interesting problems and techniques, in case other people struggle with the same issue. Maybe this thread will become a learning portal for more people :D.

LIST:

• (http://e-maxx.ru/algo/) -> a lot of algorithmic problems are explained in great detail there, along with tricks and great algorithms -- the website is in Russian, but it can be translated using Google Translate -- a great resource

Thank you a lot!

• +14