ATSTNG's blog

By ATSTNG, history, 2 months ago, In English

Polygon automatically updates test meta files and produces a strange difference: as JSON these files are identical, but inputSha1 is moved to the end.

MODIFIED	testsets/tests/1.meta
MODIFIED	testsets/tests/11.meta
MODIFIED	testsets/tests/2.meta
MODIFIED	testsets/tests/3.meta
MODIFIED	testsets/tests/5.meta
MODIFIED	testsets/tests/6.meta
MODIFIED	testsets/tests/7.meta
MODIFIED	testsets/tests/8.meta
MODIFIED	testsets/tests/9.meta

This difference appears after any action, and it prevents me from committing any changes and from building any packages. Building a package asks to commit the uncommitted changes first. Committing the changes fails with "Your changes have not been committed. Try to find and resolve conflicts".

Is there any workaround?

MikeMirzayanov


By ATSTNG, history, 3 months ago, In English

Recently Polygon got enhanced with AI-based features, and this is great. Those features can be very helpful in certain situations, but if you don't need them you can just ignore them.

But a few days ago I encountered a new block on the right panel for the first time:

and this one gave me more anxiety than usual. Why does it invite me to review something that I never requested from the AI? Why does it have a separate block and a bold label suggesting some tips, as if it were some kind of advertisement?

The review link contains useSuggestGrammaticalStatementFixesCache=true, so the correction tips are actually grammatical fixes. This block seems to appear immediately both on problems with newly created statements and on problems that have not been visited for years. Also, the grammatical fixes provided by this block and by "Suggest grammatical fixes" seem to be identical.

This grammatical fixes cache seems to be updated with every statement/tutorial update. I was not able to find a way to disable this behaviour.

How do I disable this behaviour? Why is this feature enabled by default? Why does this feature need a cache in the first place?

I expect that CodeForces does not run its own AI service and uses an external one.

Then there are multiple reasons why I would prefer to disable it:

  1. Querying an AI service is usually more expensive than running some server-side script. So why should I waste someone's API quota on intermediate or insignificant versions of my texts that will never be used and whose fixes I do not care about?
  2. Recently I've noticed occasional significant delays in saving a problem statement/tutorial. When it happens, the statement form is locked with a Saving... label for more than $$$30$$$ seconds. But if you decide to interrupt this process and reload the page, the statement is still saved. I have no idea whether this issue is connected to querying AI services, but if it can be fixed by opting out of AI assistance, I would be glad to do it.
  3. Most importantly: data security. Some authors (including me) can have prepared and partially prepared problems sitting in Polygon, waiting to be revealed for the first time, untouched for a few years. How can I be sure that prompts containing an unrevealed problem statement and tutorial are not saved on the other side of the AI service and will not be used as part of a future training dataset? I do not want to eventually find out that some AI knows my problem in its exact original wording, directly tied to the problem tutorial, in its training dataset.

Please correct me if I'm wrong and give some comments, Polygon developers and MikeMirzayanov.


By ATSTNG, history, 4 months ago, In English

I would like to ask about certain problemsetting features in CF and Polygon that I was not able to figure out how to use (or am not even sure are supported at all). Probably some of them deserve feature requests.

SVG images in problem statements

Scalable Vector Graphics is a nice format for drawing simple images and schemes: files can be very lightweight and can be integrated directly into an HTML page.

Now, if you include an SVG image with

\includegraphics[scale=1]{a.svg}

it will be substituted with a PNG version of the image in HTML, and the PDF build will fail with

LaTeX Error: Cannot determine size of graphic in a.svg (no BoundingBox).

What am I doing wrong?

Animated GIFs in problem statements

Providing an animated example is a great way to explain processes described in the problem. Providing an external link to a GIF feels somewhat inconvenient.

Now, if you include an animated GIF with

\includegraphics[scale=1]{s.gif}

Polygon tries to substitute it with a PNG image, but the link is broken and a missing image is rendered in HTML.

Is it possible to make it work?

Custom HTML / JS in problem statements

If a problem statement is just an HTML page, it would be cool to enhance it with a static frontend JS-based application that can play animations or provide interactive examples for participants, especially for educational purposes.

Yes, the ability to include user-written JS scripts can be a serious security issue, but in some way you have to trust the contest author anyway.

Ghosts in gym standings

Ghost participants from past competitions are a really cool feature. I'm planning to add some contests from a different platform to the CF gym, and I would also like to export ghost data for them. But how do I configure ghosts for a mashup? What format does it use?

I've only found the "Export the judgment log to DAT-file" section with the Ghosts checkbox, but this is not the way to include them.

Different LaTeX or different examples for HTML and for PDF outputs

When a problem statement is going to be used both as HTML and as PDF, it is very annoying when you would like to make the two layouts a little different. For example, long lines in input/output examples are perfectly fine in the HTML version but will not fit into the default PDF layout, so it would be good to provide custom input/output data for the examples, but only in the PDF.

Also, PDF and HTML statements are rendered differently, and some LaTeX can work in one of them but break the other.


By ATSTNG, history, 4 months ago, translation, In English

What is happening with CodeForces and Polygon now?

Every other request to both sites gives "A timeout occurred (Error code 524)" or loads the page for $$$30+$$$ seconds.

Polygon shows Invokers waiting: 0 almost all the time. Problem statements cannot be previewed in HTML or in PDF. Validator tests are stuck in the queue. Packages fail to build with the message "Can't compile source file '<filename>': Can't receive answer from compilation service".

This started for me about $$$2$$$ hours ago.


By ATSTNG, history, 19 months ago, In English

From time to time I enjoy doing some theorycrafting and implementing calculators for games and for my decisions in them.

But this time the theorycrafting ended up way more on the computational side of things than usual. So I decided to also publish my results here.

League of Legends ARAM champion selection math & statistics

This is an article about cognitive biases in random champion selection and about the model used to estimate the likelihood of certain events in a sequence of played games. The model is a DP over the number of games and a vector of the current distribution of champions, which uses the hypergeometric distribution to calculate transitions. The model itself is implemented in JS inside the browser page to make it easily customizable for everyone.

I'd be happy to see criticism and suggestions, especially about the mathematical and implementation parts (see "Approach and model details").


By ATSTNG, history, 3 years ago, In English

The number system conversion algorithm is well known and taught in almost every basic programming course. To get $$$y_b = x_{10}$$$ (the number $$$y$$$ written in base $$$b$$$ equal to the number $$$x$$$ written in base $$$10$$$) we iteratively chop off the lowest digit in base $$$b$$$ from the number $$$x$$$ using the division and remainder operators:

def naive_conversion(x, base):
    y = ""
    while x > 0:
        y = str(x % base) + y
        x //= base

    return y

This works in $$$\mathcal{O}(\log x)$$$ for all $$$x$$$ that fit into a machine word $$$(\log x < w)$$$. However, this becomes unacceptably slow when we consider numbers of length $$$n$$$ that are large enough that we can no longer use $$$\mathcal{O}(1)$$$ division. Let's define $$$n$$$ as the length of the converted number $$$x$$$ in some number system $$$(w \ll \log x = n)$$$. Still, the base $$$b$$$ and all digits in that base fit into a usual integer, so we can implement the conversion above in $$$\mathcal{O}(n^2)$$$ with some efficient implementation of big-number-by-small-number division.
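
For illustration, here is a minimal sketch of that quadratic approach (the name quadratic_conversion and the digit-list representation are my own choices for this example), assuming the number is stored as a list of base-$$$b$$$ digits, most significant first; each outer pass is one big-number-by-small-number division that produces one base-$$$c$$$ digit:

def quadratic_conversion(digits, b, c):
    # digits: the number in base b, most significant digit first
    out = []
    while any(d != 0 for d in digits):
        rem = 0
        quotient = []
        for d in digits:                 # one O(n) big-by-small division
            cur = rem * b + d
            quotient.append(cur // c)
            rem = cur % c
        out.append(rem)                  # the lowest base-c digit of the current value
        digits = quotient
    return out[::-1] or [0]              # m passes of O(n) work => O(n * m)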

This operation is commonly required when you are trying to print big integers in base $$$10$$$ that are stored by a big integer library in binary notation. To get around this, some competitive programming libraries use base $$$10^9$$$ internally, so that numbers can be printed in base $$$10$$$ digit by digit without a complicated conversion. But this is just a trick for the most common presentation. How do we approach this problem in general and quickly convert big integers to any number system?

We are given a big integer $$$x_b$$$ and another base $$$c$$$. We have to find $$$y$$$ such that $$$y_c = x_b$$$, i.e. $$$f(x_b, c) = y_c$$$. Suppose $$$x_b$$$ has length $$$n$$$ and $$$y_c$$$ will have length $$$m$$$.

The best approach that I can think of is divide and conquer. The main problem with arbitrary bases is that many digits of $$$x$$$ can affect many digits of $$$y$$$. To overcome this we can choose $$$p \approx \frac{m}{2}$$$ and represent $$$x_b = L_b \cdot c^p + R_b$$$, where $$$R_b < c^p$$$. In this form we can see that the $$$L$$$ part can only affect the higher half of the digits of $$$y$$$ and the $$$R$$$ part can only affect the lower half. Now we can safely split our number into two, solve the problem recursively obtaining $$$f(L_b, c) = L'_c$$$ and $$$f(R_b, c) = R'_c$$$, and finally concatenate the parts: $$$y_c = L'_c \cdot c^p + R'_c$$$.

To split $$$x_b$$$ into $$$L_b, R_b$$$ we can use division and remainder. Assuming that big integer divmod can be implemented in $$$\mathcal{O}(n \log n)$$$, the whole recursive calculation of $$$f(x_b, c)$$$ can be done in $$$\mathcal{O}(n \log^2 n)$$$.

def fastbase(x, b, order=-1):
    # On the outermost call, pick the split size: keep doubling `order`
    # while b**(order+1) <= x; afterwards x fits into 2*order digits of base b.
    if order == -1:
        order = 1
        while b**(order+1) <= x:
            order *= 2

    # Base case: x is a single digit of base b (printed as a decimal number).
    if order == 0:
        return str(x)

    # Split x into the higher and the lower `order` digits and recurse on both halves.
    sep = b ** order

    hi, lo = divmod(x, sep)

    return fastbase(hi, b, order//2) + fastbase(lo, b, order//2)

In this implementation $$$m$$$ is naively over-estimated by rounding up to a power of two, so the extra leading zeroes must be stripped from the result.
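
A minimal usage sketch (for a positive $$$x$$$ and a small base such as $$$7$$$, where each digit is a single decimal character, so the result is directly comparable with naive_conversion above):

x = 12345678901234567890
s = fastbase(x, 7).lstrip("0") or "0"   # drop the padding; keep a single "0" if everything was stripped
assert s == naive_conversion(x, 7)      # both give the same base-7 digit string
print(s)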

We did some benchmarking in Kotlin and in Python. This algorithm appears to be somewhat competitive with the built-in conversion in Kotlin; in Python 3 it works even faster than the built-in one.

Benchmark code in Python3
Local testing results

Blazing fast conversions to binary bases (and powers of two) show that the numbers are stored in binary internally.

Are there any well-known algorithms for this? What algorithms are used in standard libraries? Are there any better solutions?


By ATSTNG, history, 3 years ago, In English

Recently I was preparing some problems in Polygon. One of the problems required a custom checker. I implemented and set up my checker, and it turned out that it gives the verdict FL for all checker tests. I tried alternative checker implementations, replacing it completely with the standard wcmp.cpp code, and simplifying the logic to a single line

quitf(_ok, "ok");

but none of that helped. The only thing I managed to achieve was exit code 13131313 for roughly half of the checker tests.

After messing around for a few hours I finally noticed the red "Show warnings" label. (I'm not sure if this label was there when I first encountered the problem roughly a week ago.) It turns out there is this critical warning:

It turns out I was unlucky to choose the problem name set-diff-patch-notes and to carry it over into the checker's file name. After renaming the source file, everything worked just fine.

But how does this work? Are there any other bad substrings? Why is there such a weird limitation on checker file names?

A friend of mine suggested that it has something to do with the version control system used in Polygon. But if that's the case, why do other source files (solutions, validators) have no problems with the patch substring?


By ATSTNG, history, 5 years ago, In English

[This article is also available in Russian]

Hello, CodeForces.

I think that matroids are a beautiful and powerful concept which is, however, not really well known in competitive programming.

I discovered matroids at the 2019 Petrozavodsk Winter Training Camp. There was a problem that clearly could not be solved using the usual techniques I knew, and the editorial for it was just these three words: "just matroid intersection". Back then it took me more than 2 days of upsolving to find all the information and details I needed and to implement a solution that gets Accepted. And it took way longer to actually understand why it works and exactly how it works. (I am still unsure about some details.)

Of course, it is not hard to google up all the definitions and some related articles, but in my opinion they are all focused more on the mathematical side of the theory, on strict proofs done in short but not really obvious ways, and on covering only the key points, omitting a lot of intermediate results, logical steps and examples. In one article I encountered this formula-heavy proof:

That was nowhere close to clear for me. (Probably I'm not enough of a mathematician, yet.) But still, I'm sure there is a way to make this more clear and detailed.

I am sure there are a lot of people in CP who would also like to get familiar with matroids. So I decided to review this topic from the very beginning, focusing more on explanations and logical steps and providing more examples. This is the goal of this article, and you do not need to know anything about matroids beforehand.


By ATSTNG, history, 6 years ago, translation, In English

Hi, CodeForces.

Recently I decided to take a closer look at the data structure called the Fenwick tree, or binary indexed tree.

In this blog post I will only consider the Fenwick tree for sums, but everything stated here can be easily generalized to any associative, commutative and invertible operation.

Some implementation

After reading the articles about it on e-maxx.ru and neerc.ifmo.ru and asking friends, I got interested in the following point. Both articles (and my friends) use this algorithm to initialize a Fenwick tree from a given array:

  1. The Fenwick tree for an array of zeros is an array of zeros; we take it as the starting point
  2. We use update queries to replace all elements of the array with the elements we need
void init_vector_classic (vector<int> a) {
    init_empty(a.size());

    for (int i = 0; i < a.size(); i++) inc(i, a[i]);
}

This works in O(N log N), where N is the length of the given array.

Can we do it faster? We definitely can!

By definition of the Fenwick tree, each tree element is the sum of a contiguous segment of the initial array. Since there are no queries changing elements during the initialization process, we can use prefix sums to calculate the elements of our Fenwick tree. This requires O(N) preprocessing but allows us to compute each tree element in O(1). So the initialization complexity improves to O(N) + O(1) * O(N) = O(N).

void init_vector_linear_prefix (vector<int> a) {
    init_empty(a.size());

    vector<int> prefix_sum;
    prefix_sum.assign(a.size(), 0);
    prefix_sum[0] = a[0];
    for (int i = 1; i < a.size(); i++) prefix_sum[i] = prefix_sum[i-1] + a[i];

    for (int i = 0; i < a.size(); i++) {
        int lower_i = (i & (i+1)) - 1; // fenwick[i] covers the segment [i & (i+1), i] of the array
        fenwick[i] = prefix_sum[i] - (lower_i >= 0 ? prefix_sum[lower_i] : 0);
    }
}

But in this implementation we use O(N) additional memory to store the extra array.

Can we avoid this? Yes, we can!

We can notice that to initialize fenwick[i] we only require elements prefix_sum[j] with j ≤ i. This allows us to use only one array and, going from the end, turn each prefix sum into the corresponding Fenwick tree element.

void init_vector_linear (vector<int> a) {
    init_empty(a.size());

    fenwick[0] = a[0];
    for (int i = 1; i < a.size(); i++) fenwick[i] = fenwick[i-1] + a[i];

    for (int i = a.size()-1; i > 0; i--) {
        int lower_i = (i & (i+1)) - 1;
        if (lower_i >= 0) fenwick[i] -= fenwick[lower_i];
    }
}

This way we can improve the initialization time of the Fenwick tree to O(N) using O(1) additional memory.

Of course, in most problems the number of queries is not much smaller than the size of the Fenwick tree, so improving the initialization time will not improve the final asymptotics of your solution. But this optimization gives us the ability to improve the constant factor of the part of the solution that works with the Fenwick tree (or trees) with just a few lines of code.

What do you think about this and how do you initialize your Fenwick trees?

UPD: There is another approach to this problem that leads to even shorter code! Thanks to Jakube for mentioning this in the comments and to sdnr1 for creating a blog post about it.


By ATSTNG, history, 7 years ago, translation, In English

Hi, CodeForces.

Today I tried to solve the interactive problem 753B - Interactive Bulls and Cows (Easy) in Ruby, but all of my submissions (23586661, 23586795, 23586863, 23611181) got "Idleness limit exceeded on test 1".

Also, this problem (and its Hard version) has no Accepted submissions in Ruby.

Is CF bugged or am I doing something wrong?

UPD: Question solved: the reason was my carelessness in reading the problem samples, because of which my solution was incorrect. Correct solution — 23649752.
