### dutinmeow's blog

By dutinmeow, history, 3 weeks ago,

Hello all!

If you're like me and want to become good at coding, but don't want to put in the effort, just follow this simple guide. Guaranteed LGM within 3 weeks or your money back!

### 1. Time Travel

Have you heard of using "vectorization" or "pragmas" to let brute-force solutions pass (as described, for example, here)? There's no need to include that rather confusing block of code in your template: there's a much simpler and more effective method.

#include <chrono>
#include <iostream>
#include <thread>

int main()
{
    using namespace std::chrono_literals;
    // sleep for a negative duration to claim the extra time
    std::this_thread::sleep_for(-10000s);

    // ... your actual solution goes here ...
}


Sleeping/stalling for a negative amount of time tricks the system into giving you extra time for your submission. Simply put this in front of all your code and any solution will pass (albeit in a long time).
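For the skeptics, here is a minimal sketch (mine, not part of any template) that measures how long a negative sleep actually blocks; on standard library implementations, `sleep_for` with a non-positive duration simply returns immediately:

```cpp
#include <cassert>
#include <chrono>
#include <thread>

// Measures how long sleep_for(d) actually blocks, in milliseconds.
long long measure_sleep_ms(std::chrono::seconds d) {
    auto start = std::chrono::steady_clock::now();
    std::this_thread::sleep_for(d);
    auto end = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count();
}
```

In other words, `measure_sleep_ms` on a negative duration reports roughly 0 rather than a time refund, so keep the receipt for that money-back guarantee.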

### 2. Better Hash

Current top-rated LGM Benq recommends using the following hash function combined with gp_hash_table to optimize unordered_map and avoid hacks and anti-hash tests.

// assumes the usual template definitions:
//   using ll = long long;
//   const long double PI = acos(-1.0L);
//   mt19937 rng(chrono::steady_clock::now().time_since_epoch().count());
struct chash {
    const uint64_t C = ll(2e18 * PI) + 71;
    const int RANDOM = rng();
    ll operator()(ll x) const {
        return __builtin_bswap64((x ^ RANDOM) * C);
    }
};


However, a little trick that LGMs don't tell you is that there's an even faster hash function that works in O(1) for all inputs, including strings.

struct fhash {
    ll operator()(ll x) const { return 7; }
};


This hash is intuitive to understand and code, and runs extremely efficiently. To confirm this, I tested it against a set of identical strings of 10^5 characters each; it performed significantly better than both chash and the default hash used by std::unordered_map.
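If you'd like to see the mechanics yourself, here is a small sketch (mine, not from any LGM template) that plugs a constant hash into std::unordered_map and counts how many buckets the keys end up in:

```cpp
#include <cassert>
#include <cstddef>
#include <unordered_map>

// The O(1)-for-everything hash from above: every key maps to 7.
struct fhash {
    std::size_t operator()(long long) const { return 7; }
};

// Counts how many distinct buckets the map's keys actually occupy.
std::size_t buckets_used(const std::unordered_map<long long, int, fhash>& m) {
    std::size_t used = 0;
    for (std::size_t b = 0; b < m.bucket_count(); ++b)
        if (m.bucket_size(b) > 0) ++used;
    return used;
}
```

Every key lands in the same bucket, so lookups degrade into a linear scan of everything you inserted. With 10^5 identical strings there is only one distinct key anyway, which is exactly why the benchmark looks so good.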

### 3. Get Rid of Constant Factors!

Does your program have the correct complexity, but fail because of constant factors? Try this hack out! Everyone knows that the constant factor of a program is directly proportional to the number of semicolons in it. Thus, it follows that putting everything on one line will save you massively on unnecessary constant factors. Note that this also means using for loops is slow, because a for loop takes two semicolons; replace it with a while loop instead.

For instance, the following code is an unoptimized way to multiply two numbers:

int multiply(int a, int b) {
    int res = 0;
    for (int i = 0; i < b; i++)
        res += a;
    return res;
}


However, this program can be substantially sped up using optimization tactics. Note that in general you should prioritize getting rid of semicolons, but removing lines and characters also improves run time because the compiler does not have to read over as many characters to check it. A general rule of thumb is that if your program is readable, it's not optimized enough.

int multiply(int a,int b){int res=0;while(true){res += a,b--;if(b==0)return res;}}
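A quick sanity check that the optimized one-liner still multiplies, under the assumption (see the comments) that b is strictly positive:

```cpp
#include <cassert>

// The semicolon-optimized multiply from above, reproduced verbatim.
// Assumes b > 0; as the comments point out, b <= 0 sends the loop
// wrapping through the entire int range.
int multiply(int a,int b){int res=0;while(true){res += a,b--;if(b==0)return res;}}
```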


### 4. O(1) Optimization

Many people already use this trick for memory management; however, I haven't seen anybody use it for runtime yet. Take the following code to retrieve a number n.

int get(int n) {
    int res = 0;
    for (int i = 0; i < n; i++)
        res++;
    return res;
}



Anybody can tell you that this runs in O(n). But making a couple of changes will make it run in O(1). Assume that the problem constraints tell you that n <= 10^9.

int get(int a) {
    int res = 0;
    for (int i = 0; i <= 1000000000; i++) {
        if (i >= a)
            break;
        res++;
    }
    return res;
}


Now it runs in O(1). You can apply this anywhere, even to quadratic, cubic, and exponential solutions (for example, all-pairs shortest paths in O(1)).
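To convince yourself the transformation preserves the answer (only the loop bound changes, now a constant from the assumed n <= 10^9 constraint), here is a side-by-side sketch of both versions:

```cpp
#include <cassert>

// Linear version: get_linear(n) counts up to n.
int get_linear(int n) {
    int res = 0;
    for (int i = 0; i < n; i++) res++;
    return res;
}

// "O(1)" version: identical work, but the loop bound is the
// constant 10^9 taken from the assumed problem constraints.
int get_const(int a) {
    int res = 0;
    for (int i = 0; i <= 1000000000; i++) {
        if (i >= a) break;
        res++;
    }
    return res;
}
```

Both return n; the second is "O(1)" because its loop bound is a constant, and a constant is a constant.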

### 5. Antihack Trick

None of the tricks mentioned above will work if you get hacked (although this is virtually impossible). A simple trick is to create an extremely long template with confusing aliases so that no one can possibly read your code and thus can't hack you! Example: Literally any one of Benq's submissions

Now, I hear you say, "But isn't this purposefully obfuscating code?". It's not, as long as it's legitimately "part of your template". After all, who doesn't need to define custom istream/ostream operators for every STL data structure to save a couple of seconds when debugging? And obviously a template for taking the bitwise AND of two multisets is essential for any serious competitive programmer. And of course, make sure to define every common data structure with some random name no sane person could understand. This ensures that you, and only you, can read the code (as a side note, did you know C++ places no limit on variable name length?).

I hope you will incorporate these simple hacks and tricks into your coding skillset. I think most coders will benefit from this article, and I expect to see more LGMs on the leaderboard soon!

• +707

 » 3 weeks ago, # |   +8 dutin orz
•  » » 3 weeks ago, # ^ |   0 dutin orz
•  » » » 3 weeks ago, # ^ | ← Rev. 2 →   0 :)
 » 3 weeks ago, # |   +48 Tourist: *sweats nervously* Now that all the LGM strats have been exposed, I wonder if there is anything else they are hiding.
 » 3 weeks ago, # |   +17 Nice title.
 » 3 weeks ago, # |   +14 dutin orz
 » 3 weeks ago, # |   0 "4. O(1) Optimization" — Does this really work?
•  » » 3 weeks ago, # ^ |   +18 I figured that some people might take this seriously XD. To answer your question, no it won't!
•  » » 3 weeks ago, # ^ |   +2 In fact, unrolling loops (which only works for a loop whose size is known at compile time) can actually optimize it and make the code up to 2 or 4 times faster. I once cheesed an OI problem with that method, scoring 50 when it should only have gotten 20.
 » 3 weeks ago, # |   +4 You had us in the first half, not gonna lie.
 » 3 weeks ago, # |   +17 If you want to make your solution even more unhackable, you should use a programming language with a nice, minimalist syntax.
•  » » 9 days ago, # ^ |   0 I used to underestimate JS coders.
 » 3 weeks ago, # |   +26 You have a bug in (3): you can't multiply when $b$ is negative. Other than that, it actually works in O(1).
•  » » 3 weeks ago, # ^ |   +5 I'm assuming that the problem constraints tell you that b is positive.
•  » » 2 weeks ago, # ^ | ← Rev. 2 →   0 Works pretty well due to int overflow. If b is negative, say -x, then it will just add a to the result (2^32)-x times, and the answer modulo 2^32 remains the same.
•  » » » 2 weeks ago, # ^ |   +5 I was thinking about the first snippet. For the second one, the optimizing compiler recognizes the pattern and replaces it with a single multiplication.
 » 3 weeks ago, # |   +43 For so long, slow hashing has been holding me back, but finally, I can become LGM!!!
 » 3 weeks ago, # |   0 The post title reads like a YouTube clickbait video XD
 » 2 weeks ago, # |   0 I always thought that LGMs beat us because they have superior algorithm & DS knowledge and strong math skills. Thank you for sharing the tricks. Now we are ready to dethrone them. LOL
 » 2 weeks ago, # |   +13 "Literally any one of Benq's submissions" — Thanks, I'm having a heart attack
 » 2 weeks ago, # |   +76 Another tip about time travel: write an $O(\log n)$ algorithm and run it on $n = 1/2$.
 » 2 weeks ago, # | ← Rev. 2 →   -19 Through a very highly secret source that LGMs use, I have come to know that using 69 instead of 7 in that hash function is like 69000000 times faster.
 » 2 weeks ago, # |   -19 Actually, shouldn't we be happy when we get hacked? Someone was nice enough to find the bug beforehand so we can correct it, if possible, and avoid a WA in system tests.
 » 2 weeks ago, # |   0 The O(1) hash was such a nice trick. Thanks for sharing. Alternatively, you can make use of a multidimensional bitset in the hyperbolic voids of the C++ cache to speed things up even further. This trick is not new and has been exploited by LGMs for a long time. It is even mentioned here.
•  » » 2 weeks ago, # ^ |   +3 "OPTIMIZATION OF FLOYD VARŠAL ALGORITHM TO RUN IN N^2: simply instead of going from 1 to N in third loop we will use bitset which will visit the remaining nodes automatically" Pathetic, not even O(1).
 » 2 weeks ago, # |   +3 I don't understand the O(1) optimization. Can you explain it to me?
•  » » 2 weeks ago, # ^ |   +6 You replace all instances of 'N' or any other given variable with the maximum possible value of the variable provided in the problem statement. For example, for a graph with N nodes, always assume that the graph has the largest possible number of nodes (eg 2*10^5), and simply ignore the excess nodes. This simplifies your program from polynomial time to amortized constant time, albeit with a high constant factor.
•  » » » 2 weeks ago, # ^ | ← Rev. 2 →   +4 But in the example we just replace n with 1000000000 and add an "if". Why does that mean the program will run in O(1)? We still do n operations.
•  » » » » 2 weeks ago, # ^ |   +13 1st tip on the blog is "Time Travel" and this bothers you?
•  » » » » » 2 weeks ago, # ^ |   +11 No, I'm talking about fourth optimization.
•  » » » » » 2 weeks ago, # ^ |   0 You just trolled me(
•  » » » » » » 2 weeks ago, # ^ |   +8 Sorry haha, what OP is saying is that the 'optimized' loop is bounded by constants, so it can be considered a constant-time operation.
 » 2 weeks ago, # | ← Rev. 2 →   +3 Why do you always return 7 in fhash? You always get the same hash. Doesn't that mean our code will run slower? Sorry for bad english.
•  » » 2 weeks ago, # ^ |   0 He mentioned he's hashing 10^5 identical strings. Since they all get hashed to 7, there won't be any collisions, resulting in a much faster retrieval.
•  » » » 2 weeks ago, # ^ |   +3 Oh, I see! Thanks a lot! It is a really cool optimization!
•  » » » 9 days ago, # ^ | ← Rev. 2 →   0 Edit: resolved.
 » 2 weeks ago, # |   +8 Is it "Top 5 Optimizations, Tips, and Tricks LGMs Don't Know About !" ?
 » 2 weeks ago, # |   -11 The 1st trick, time travel, simply doesn't work.
 » 13 days ago, # |   +13 3 is why I switched to python
 » 13 days ago, # |   0 dutin orz
 » 13 days ago, # | ← Rev. 3 →   +5 Thanks for the fourth trick. It made my day! Now all my templates work in O(1).
•  » » 13 days ago, # ^ |   0 Can you please tell me why/how it works in O(1)?
 » 11 days ago, # |   0 Does the time travel trick really work? Asking seriously.