### roll_no_1's blog

By roll_no_1, 18 months ago,

This post elaborates on what is written in the title, so it may not be relevant to many people. But if you ever get this kind of error on Codeforces, this post may be helpful for you.

TL;DR: If you get this kind of error, check whether you are initializing a global array with a non-default value. If so, move the initialization code inside the main function, or just replace the array with a vector.

A few days back, I was solving a problem with $N \le 10^6$. I thought of using lazy propagation, so I used my own lazy-propagation template. But after coding the solution, I realized that compiling the source file was taking too long on my laptop. So I tried the custom invocation feature of Codeforces and, to my surprise, got a compilation error like this:

```
Invocation failed [COMPILATION_ERROR]
Can't compile file:
Compiled file is too large [34412820 bytes], but maximal allowed size is 33554432 bytes.
```

Then, I remembered that once I tried using my lazy propagation template on some problem and got an error like this:

```
Invocation failed [COMPILATION_ERROR]
Can't compile file:
Compilation process timed out.
```

At that time, I got help from this comment by Riatre, according to which, if we initialize a global array with some non-default value, the compiler generates the fully initialized array and writes it into the output binary, which not only increases the size of the binary file but also increases the compilation time.

A workaround for this is to initialize the array in the main function.

I thought it was a one-time thing, and I managed to work around it by replacing the arrays with vectors. But a few days back I got a similar error (the one in the title) again, with my lazy-propagation template.

In my code, the part that was responsible for this looked something like this:

```cpp
struct ans {
    int val = 1, lazy = -1;
};
```


which represents a node. And for the segment tree itself, I had a global array declaration like this:

```cpp
ans st[N << 2];
```


This was the reason the output binary on my system was 32 MB in size: with $N = 10^6$, the array holds $4 \times 10^6$ nodes of 8 bytes each, i.e. 32 MB of initialized data.

One workaround would have been to initialize the array values explicitly using std::fill or a loop. But I like default arguments, so this is what I did instead:

The following piece of code does exactly the same thing as the code above, but with it, the size of the output binary dropped to 16.2 KB, which also made compilation faster.

```cpp
struct ans {
    int val, lazy;
    ans(int _val = 1, int _lazy = -1)
        : val(_val), lazy(_lazy) {}
};
```


So, if you ever come across such an error, look out for global array initializations and work them out: either move them into the main function or use vectors. If you are using structs / classes with default values for their members, consider writing a constructor and moving the default values into it.

Here are the 2 pieces of code if you want to try them out in custom invocation.

Code 1: Compilation Error on custom invocation.

Code 2: Compiles successfully on custom invocation.

P.S.: As the two snippets above do almost exactly the same thing, if you know why the array gets written to the binary file in the first case but not in the second, or if you have relevant links on the topic, please share them in the comments section.


By roll_no_1, history, 2 years ago,

This is a general question, which I came across while solving the Div2 1000-point problem from SRM 735 on Topcoder in practice. I couldn't find an efficient solution to the problem, so I decided to read the editorial. I understood the approach, but a series of operations in it was implemented using a Fenwick tree.

There were two kinds of operations: answering the number of elements less than a given value at a given instant, and inserting elements (something like updating a count). I have read about the policy-based order-statistic tree, and I thought it might also be a good choice for these operations, since it supports inserting elements and finding the order of a key (the number of elements less than a given element), both in O(lg n). Similarly, every Fenwick-tree operation also costs O(lg n). With either structure, the overall complexity came out to O(m * n * lg n), where n <= 1e5 and m <= 50.

But the maximum time on a test case using approach 1 (Fenwick tree) was 139 ms, whereas using approach 2 (order-statistic tree) it was 1.489 s. In fact, with approach 2 there were several more cases that took longer than 1 s.

Here is my code using the fenwick tree, and here is the one with the order-statistic tree.

I haven't solved many problems with the order-statistic tree. Indeed, I don't think it is that popular a data structure, and a problem setter will generally have some other data structure or technique in mind when making a problem. So my question is: given a problem that I know can be solved asymptotically using an order-statistic tree, should I use it, or should I look for alternative solutions and abandon the idea of using it altogether?

To put it more clearly: can I trust the O(lg n) bound per operation that is specified for this data structure?


By roll_no_1, history, 2 years ago,

I recently participated in HourStorm #1 on HackerEarth, and during the contest I only managed to solve the last problem partially. I have now decided to upsolve the problems, starting with the first one, but I could not understand the editorial or the author's solution. To show that I am not asking without first trying to understand the editorial myself, these are the points I did not understand:

• The editorialist says to fix gcd(Ai, Aj), but I do not know how to do that. How can we fix the gcd if we don't know all pairwise gcds that exist, without iterating over all pairs?

• The line: "An element Ai is inserted in the set as many times as the number of divisors of Bi." (Even in his solution, the author iterates over all divisors of Bi and pushes them to the back of the list p[Ai]; I do not understand why he did this.)

• The author's solution: frankly, I understood almost no part of it. Besides the point above, I saw that he loops i from mx_value down to 1 and considers every multiple of i. I did not understand why he did that, or what he computes in those loops.

P.S.: Sorry if the above points sound too nooby, but I don't know whether I am the only one having trouble understanding it, or whether the author omitted details that are obvious to more experienced coders.


By roll_no_1, 2 years ago,

Hi. Recently, I decided to understand and solve some high-quality problems a little above my level, so I decided to start off with Topcoder Div1 500s. This is the first problem I picked, but unfortunately it had no editorial. I searched the internet and found an explanation in Chinese here. I tried translating it, but the translated version was incomprehensible to me because of the jumbled words (and also my current level). Still, I am determined to understand and solve the Div1 500s, so I will be very thankful if you could help me get started by providing an overview of the approach used to solve this problem. Thanks. Here is a link to the problem: Problem


By roll_no_1, history, 2 years ago,

This question might sound a little silly, but I was not able to figure out the solution to it.

Say I want to compute an expression of the form (a * b) / c, and I want to do it modulo 13. Now, if a = 13, b = 3, c = 13, then the answer clearly is 3. But if I try to compute it as (((a % 13) * (b % 13)) * inv(c % 13)) % 13, where inv(x) returns the multiplicative inverse of x modulo 13, it will not work, since c % 13 = 0 has no multiplicative inverse. So, how do I work around this?

Magumi_Tadokoro provided a nice workaround for this in the comments. But there is another thing I would like to ask. The expression above is a very common one, and the problem mentioned may also occur for other similar expressions, yet I haven't seen anyone do anything special to avoid it in any CF problem. Is it that even the setters do not use this in their problems, so as to avoid requiring this unconventional way of doing modular-arithmetic computations?