Hello all!

If you're like me and want to become good at coding, but don't want to put in the effort, just follow this simple guide. Guaranteed LGM within 3 weeks or your money back!

### 1. Time Travel

Have you heard of using "vectorization" or "pragmas" to allow brute force solutions to pass (see, for example, here)? There's no need to include that rather confusing block of code in your template: there's a much simpler and more effective method.

```
#include <iostream>
#include <chrono>
#include <thread>

int main() {
    using namespace std::chrono_literals;
    std::this_thread::sleep_for(-9999999999999ms);
}
```

Sleeping/stalling for a negative amount of time tricks the system into giving you extra time for your submission. Simply placing this in front of all your code makes any solution pass (albeit in a long time).

### 2. Better Hash

Current top-rated LGM Benq recommends using the following hash function combined with gp_hash_table to optimize unordered_map and avoid hacks and anti-hash bases.

```
struct chash {
    const uint64_t C = ll(2e18 * PI) + 71;
    const int RANDOM = rng();
    ll operator()(ll x) const {
        return __builtin_bswap64((x ^ RANDOM) * C);
    }
};
```

However, a little trick that LGMs don't tell you is that there's an even faster hash function that works in O(1) for all inputs, including strings.

```
struct fhash {
    ll operator()(ll x) const { return 7; }
};
```

This function is intuitive to understand and code, and works extremely efficiently. To confirm this, I tested it against a set of 10^5 identical strings; this hash function performed significantly better than chash and the default one used by std::unordered_map.

### 3. Get Rid of Constant Factors!

Does your program have the correct complexity, but fail because of constant factors? Try this hack out! Everyone knows that the constant factor of a program is directly proportional to the number of semicolons in it. Thus, it follows that putting everything on one line will save you massively on unnecessary constant factors. Note that this also means using for loops is slow, because a for loop takes two semicolons; replace them with while loops instead.

For instance, the following code is an unoptimized way to multiply two numbers:

```
int multiply(int a, int b) {
    int res = 0;
    for (int i = 0; i < b; i++)
        res += a;
    return res;
}
```

However, this program can be substantially sped up using optimization tactics. Note that in general you should prioritize getting rid of semicolons, but removing lines and characters also improves run time because the compiler does not have to read over as many characters to check it. A general rule of thumb is that if your program is readable, it's not optimized enough.

```
int multiply(int a,int b){int res=0;while(true){res += a,b--;if(b==0)return res;}}
```

### 4. O(1) Optimization

Many people already use this trick for memory management; however, I haven't seen anybody use it for runtime yet. Take the following code to retrieve a number n.

```
int get(int n) {
    int res = 0;
    for (int i = 0; i < n; i++)
        res++;
    return res;
}
```

Anybody can tell you that this runs in O(n). But making a couple of changes will make it run in O(1). Assume that the problem constraints tell you that n <= 10^9.

```
int get(int a) {
    int res = 0;
    for (int i = 0; i <= 1000000000; i++) {
        res++;
        if (i >= a)
            break;
    }
    return res;
}
```

Now it runs in O(1). You can apply this anywhere, even to quadratic, cubic, and exponential solutions (for example, all-pairs shortest paths in O(1)).

### 5. Antihack Trick

None of the tricks mentioned above will work if you get hacked (although this is virtually impossible). A simple trick is to create an extremely long template with confusing aliases so that no one can possibly read your code and thus can't hack you! Example: Literally any one of Benq's submissions

Now, I hear you say, "But isn't this purposefully obfuscating code?". It's not, as long as it's legitimately "part of your template". After all, who doesn't need to define custom istream/ostream operators for every STL data structure to save them a couple of seconds when debugging? And obviously a template for taking the bitwise AND of two multisets is essential for any serious competitive programmer. And of course, make sure to define any common data structure with some random name no sane person could understand. This ensures that you, and only you, can read the code (as a side note, did you know C++ places no limit on variable name sizes?).

I hope you will incorporate these simple hacks and tricks into your coding skillset. I think most coders will benefit from this article, and I expect to see more LGMs on the leaderboard soon!

dutin orz


:)

Tourist: *sweats nervously*

Now that all the LGM strats have been exposed, I wonder if there is anything else they are hiding.

Nice title.


"4. O(1) Optimization" — Does this really work?

I figured that some people might take this seriously XD. To answer your question: no, it won't!

In fact, loop unrolling (which only works for loops whose size is known at compile time) can actually optimize the code and make it up to 2-4 times faster. I once cheesed an OI problem, scoring 50 points with that method when it should only have gotten 20.

You had us in the first half, not gonna lie.

If you want to make your solution even more unhackable, you should use a programming language with a nice, minimalist syntax.

I used to underestimate JS coders.

You have a bug in (3): you can't multiply when $$$b$$$ is negative. Other than that, it actually works in O(1).

I'm assuming that the problem constraints tell you that b is positive.

Works pretty well due to Int overflow.

If b is negative, say -x, then it will just add a to the result (2^32 - x) times. The answer modulo 2^32 remains the same.

I was thinking about the first snippet. For the second one, the optimizing compiler recognizes the pattern and replaces it with a single multiplication.

For so long, slow hashing has been holding me back, but finally, I can become LGM!!!

The post title is similar to a YouTube clickbait video XD

I always thought that LGMs beat us because they have superior algorithm & DS knowledge and strong math skills. Thank you for sharing the tricks. Now we are ready to dethrone them. LOL

"Literally any one of Benq's submissions" — Thanks, I'm having a heart attack

Another Tip about time travel:

Write an $$$O(\log n)$$$ algorithm and run it on $$$n = 1/2$$$

Through a very highly secret source that LGMs use, I have learned that using 69 instead of 7 in that hash function is like 69000000 times faster.

Actually, shouldn't we be happy if we get hacked? Someone was nice enough to find the bug beforehand so that we can correct it, if possible, and avoid WA on system tests.

The O(1) hash was such a nice trick. Thanks for sharing. Alternatively, you can make use of a multidimensional bitset in the hyperbolic voids of the C++ cache to speed things up even further. This trick is not new and has been exploited by LGMs for a long time. It is even mentioned here.

"OPTIMIZATION OF FLOYD VARŠAL ALGORITHM TO RUN IN N^2: simply instead of going from 1 to N in the third loop we will use bitset which will visit the remaining nodes automatically"

Pathetic, not even O(1)

I don't understand the O(1) Optimization. Can you explain it to me?

You replace all instances of 'N' or any other given variable with the maximum possible value of the variable provided in the problem statement. For example, for a graph with N nodes, always assume that the graph has the largest possible number of nodes (e.g. 2*10^5), and simply ignore the excess nodes. This simplifies your program from polynomial time to amortized constant time, albeit with a high constant factor.

But in the example we just replace n with 1000000000 and add an "if". Why does that mean the program will run in O(1)? We still have n operations.

1st tip on the blog is "Time Travel" and this bothers you?

No, I'm talking about fourth optimization.

You just trolled me(

Sorry, haha. What OP is saying is that the "optimized" loop is bounded by constants, so it can be considered a constant-time operation.

Why do you always return 7 in fhash? You always get the same hash. Doesn't that mean our code will run slower? Sorry for bad English.

He mentioned he's hashing 10^5 identical strings. Since they all get hashed to 7, there won't be any collisions, resulting in a much faster retrieval.

Oh see! Thanks a lot! It is really cool optimization!

edit: resolved.

Is it "Top 5 Optimizations, Tips, and Tricks LGMs Don't Know About !" ?

The 1st trick, time travel, simply doesn't work.

3 is why I switched to python


Thanks for the fourth trick. It makes my day!

Now all my templates work in O(1).

Can you please tell me why/how it works in O(1)?

Does the time travel trick really work??? Asking seriously.