BessieTheCow's blog

By BessieTheCow, history, 3 weeks ago

I use Microsoft Visual Studio as my IDE, so I submit my solutions here using the Visual C++ 2017 compiler. Recently, I submitted a solution to 1180D - Tolik and His Uncle which passed the pretests but TLEed on a system test. My solution (55894505) is a bit overcomplicated, but it runs in O(nm), which should be fast enough. In fact, it passes in 280ms when submitted under G++. A custom test shows that my solution takes 1029ms on the system test case it TLEed on, and adding ios_base::sync_with_stdio(false) and cin.tie(nullptr) only shaved off about 20ms, still over the time limit. The only way I could get AC with MSVC++ was to use printf instead of cout. So for this solution, MSVC++ is apparently three to four times slower than G++.

I then ran a custom test with MSVC++ 2010 instead of MSVC++ 2017, and got 904ms without fast I/O and 842ms with it, which doesn't seem to make any sense. I got similar results with the official solution, so my code isn't the problem. According to https://codeforces.com/blog/entry/4088, the command line for MSVC++ 2010 includes the /O2 flag, which optimizes for maximum speed, while the command line for MSVC++ 2017 isn't shown. Why is MSVC++ so much slower than G++ here, and why is MSVC++ 2017 slower than MSVC++ 2010?


 » 3 weeks ago, # |   +8 proprietary software memes. The lesson is: Use g++!
•  » » 3 weeks ago, # ^ |   0 But then I have to waste time with preprocessor directives when using non-portable stuff such as intrinsics.
•  » » » 3 weeks ago, # ^ |   0 My point remains unchanged
 » 3 weeks ago, # |   +1 sync_with_stdio is ignored by msvc
 » 3 weeks ago, # | ← Rev. 2 →   -17 Try adding
#pragma GCC optimize(2)
#pragma GCC optimize(3)
#pragma GCC optimize("Ofast")
into your code.