Since I started competing in competitive programming, I've discovered more and more ways to get RTE or WA verdicts because of stupid mistakes I make during the coding phase. I'll try to mention some of them here for my future self, and you can also share your experience in the matter.

Initially, I will list them in no particular order, and I may write a proper blog if I find myself with good material, so feel free to comment your thoughts.

- wrong fixed array sizes // such as 3*10^5 vs 10^6
- #define int int64_t // work smart not hard
- #define endl '\n' // one less thing to take care of
- 1e9+9 // or any other stupid value
- INF vs INFLL // always use INFLL with its value equal to 0x3f3f3f3f3f3f3f3f ("0x" + "3f"*(8)) // this can be used for int32 also
- defining a void function as a non-void one without returning anything // the compiler's fault; it will mostly work just fine on your machine, unlike on the judge's
- freopen with the wrong file name // sad memories live here
- not printing the size of the answer when the problem asks for it
- scanf("%d", (long long)x) // wrong format specifier, and scanf needs a pointer: scanf("%lld", &x)
- for(int i = 2; i*i <= x; i++) if(!not_prime[i]) // i*i may overflow
- while(p%x==0) p/=x; // p may be equal to zero.
- n*(n+1)/2 // overflows 32-bit int for large n
- lcm = x*y/gcd(x, y) // where (x=y=0)
- gcd(x, 0) = x // special case
- (x-y)%mod // the result can be negative in C++; just use well-defined adding and multiplying functions
- (x+y-1)/y where y < 0 // there is no need for the ceiling trick with negative numbers, and it breaks there
- x & (1<<p) // use ((x>>p)&1)
- for(int i = empty_vec.size()-1; i >= 0; i--) // unsigned integer
- for(int i = 0; i < log_n; i++) if(par[u][i] != par[v][i]) // (LCA) // go from the MSB down to the LSB, not the other way around
- for(int i = 0; i < strlen(s); i++) // strlen is O(n), so the loop becomes O(n^2)
- --upper_bound(vec.begin(), vec.end(), (pair<int, int>) {first, -1}) // use lower_bound({first+1, -1}) or inf instead of -1
- lower_bound(set.begin(), set.end(), element) // you silly, use set.lower_bound(element); the free function is linear on a set
- *empty_set.begin(), *--empty_set.end() // you're not trustworthy
- x = min(x, map[x]) // map[new element] default-constructs to zero
- multiset.erase(x) // erases ALL copies of x; use multiset.erase(multiset.find(x))
- not using a size variable in centroid decomposition, HLD or small-to-large // a very dangerous one
- DFSing from a node other than the 0-th when filling binary-lifting values (LCA) // par[u][inf]=0
- using a wrongly sorted priority queue in Dijkstra's algorithm // multiply by -1 instead of using a greater operator
- not re-initializing global variables in multi-testcase problems
- using memset or fill in a multi-testcase problem // be smart and use integer visited arrays with a visited id
- breaking out of the input loop in a multi-testcase problem
- segment tree base cases conditions // (r < q_l || q_r < l) // don't trust me with this :)
- using 1-indexed bit or sparse table. // never use 1-indexed stuff
- [l, r) // who defines things this way??
- ~node() // not writing a destructor for a pointer-based segment tree or trie in a multi-testcase problem
- new node() // MLE
- if(x) where x < 0
- if(x==true && y==true) // put parentheses (acpc kickoff 2020)
- if(x&(1<<p)==0 && y==true) // & is evil: == binds tighter than &, so parenthesize (x&(1<<p))==0
- b = y&(1<<p); x = child[y][b] // b must be a boolean (0/1), not 2^p
- to_string // write it yourself
- std rand function // use mt19937
- const bool debug = true // check any debug-related stuff like watchers
- Yes vs YES
- alic and bobe
- n==1 or even n==0
- use array<> instead of vector<> when size is fixed
- return answer // return mem[x] = answer
- return x <= y // in user-defined compare functions; comparators must be strict, use <
- switching between n and m // 15m of awkwardness in a session // spoj.com/problems/LABYR1
- vector.clear() // use v = vector<int>(), because .clear() or .resize(0) doesn't deallocate memory

Dividing by a double whose value is 0 returns INF. This may give you a wrong answer, and it's very annoying because you might think it should be a runtime error.

That's why I stay away from geometry problems XD

You forgot:

Your solution is actually wrong.

You misread the problem statement,

title: Common reasons for NO verdict in CP assuming your solution is correct on paper

now solve the new problem :(

My point is that you can't know for sure if your solution is correct on paper or not because the process of checking that can have human error mixed in.

assuming your solution is correct,

-> your solution is (not (not correct)) -> your solution is (not (wrong)) -> your solution is not wrong

tada!

Sorry, I'm just being sarcastic; of course I know what you mean, have a nice day.

downvoters

I can't freely describe my thoughts, but I want to ask: why do you sometimes code in C but not C++?

even here? do I have to take any precautions?

If by C you mean where I use pointers, then I do it because it feels cool and risky, like riding a motorcycle: you never know when you'll write an infinite loop that fills the memory and freezes the computer during a contest.

other than that, I never use pure C.

Printing + breaking out early in a multi-test-case problem without reading the entire input.

This trick fails with negatives: double x;... x+.5 (to round)

This can round the wrong way with negatives: int m=(l+r)/2

relevant: https://github.com/kth-competitive-programming/kactl/blob/main/content/contest/troubleshoot.txt

Also what bug did you have with to_string?

Great points, I totally forgot about your first one, although I've done it a couple of times.

About to_string: it has never failed me on an online judge, but somehow in a local ACM contest they managed to find a compiler with a broken to_string function. Next time they'll make us wonder whether we can use cin and cout normally.

Was it related to these?

https://en.cppreference.com/w/cpp/string/basic_string/to_string

No, we were converting a normal integer into a string, and the code worked for the sample testcases.

Oh. I thought you meant no verdict... My bad.

That's a new one :D

It is at this moment that he knew... he fked up

This is a very underrated blog. Thank you sir

That's the power of being red with a lot of followers on cf.

In my opinion, I think "1-indexed" is good for prefix sums/Fenwick trees since if we want to find the sum of $$$[L, R]$$$ it's just $$$pre[R]-pre[L-1]$$$.

What about

pre[r]-pre[l]+arr[l] :) This principle can be applied to all cumulative arrays,

examples:

pre[i] = pre[i-1] + arr[i] -> val(l, r) = pre[r]-(pre[l]-arr[l])

pre[i] = pre[i-1] ^ arr[i] -> val(l, r) = pre[r]^(pre[l]^arr[l])

pre[i] = pre[i-1] * arr[i] -> val(l, r) = pre[r]/(pre[l]/arr[l])

so for any

pre[i] = add(pre[i-1], arr[i])

val(l, r) = rem(pre[r], pre[l-1]) = rem(pre[r], rem(pre[l], arr[l]))

In my opinion it's not "1-indexed", it's half-open intervals:

[0, R) - [0, L) = [L, R)