### cotester's blog

By cotester, 7 weeks ago,

I was looking over some C++ solutions written by the best of CodeForces when I realized that most of them use global arrays declared with a large size instead of instantiating a vector inside the main function. Is there any benefit to doing it the former way, other than (maybe) saving a few keystrokes?

Here are some examples of what I mean.

vector <int> graph[200010];
int a[200010];
ll dp[200010][2];
ll dp2[200010][2];

const int Maxn = 1e6 + 9;
typedef long long ll;
const ll oo = (ll)1e18;
vector<int> al[Maxn];
ll val[Maxn];
vector<pair<ll,int> > sub[Maxn];
ll dp[Maxn][2],ans[2],tmp[2];

Upd: Thanks to everyone who responded! The main points from the comments below have been compiled here:

In general, C++ arrays…

• are faster / more efficient:
  • there's a smaller runtime constant;
  • vectors require an extra indirection and are stored on the heap;
  • the difference in performance becomes more obvious if you use a lot of small vectors;
• use less memory (useful for tight ML constraints);
• are more convenient to use: easy to instantiate and to clear (with memset).

So overall, I guess vectors are fine in most cases, but you can use arrays just to be on the safe side.

• +73

 » 7 weeks ago, # |   +160 It pleases the cp gods
 » 7 weeks ago, # |   0 how would you do your examples but using vectors?
•  » » 7 weeks ago, # ^ |   0 something like vvi g(N); (where N is defined from the input) would work in place of vector<int> graph[200010];
 » 7 weeks ago, # |   -22 vector has a large constant factor (at least on Chinese judges, I think), so using an array reduces the chance that your code with the correct time complexity gets TLE, I guess?
 » 7 weeks ago, # |   0 Does someone have any examples where if you use vector you get TLE, but if you use int[] you get AC?
•  » » 7 weeks ago, # ^ | ← Rev. 2 →   +24 It happened to me once, but I don't remember the problem. So yes, if you don't use any special vector features, it's better to use int[]: it's just cleaner and faster, at least.
•  » » 7 weeks ago, # ^ | ← Rev. 2 →   +5 Almost every problem on matrix exponentiation, like one of the last problems (link). I got TLE using vectors (submission) and AC using std::array (submission). Replacing vectors with arrays can improve the constant factor of your solution and help you squeeze it into the TL...
•  » » » 7 weeks ago, # ^ |   0 Honestly, I use valarrays for matrix exponentiation. They are usually a good match for linear algebra (though not always), and a valarray can simulate a matrix by itself, which makes it well suited to representing matrices.
•  » » » 6 weeks ago, # ^ |   +8 6500 ms vs 717 ms does not seem like a constant-factor kind of difference... "Replacing vectors with arrays can improve constant in your solution and help you to squeeze it in TL..." It may indeed happen with a noticeable constant difference, but I doubt it, unless you are allocating std::array on the stack, which is, in general, a bad idea. Also, an additional small factor can come from the fact that std::vector fills its contents with the default value while arrays (of any kind) don't, but that too should be dominated by the allocation.
•  » » 6 weeks ago, # ^ |   0 Depends on your implementation, but for me I got TLE in these 2 problems with vector and AC with a regular array: https://codeforces.com/edu/course/2/lesson/4/4/practice/contest/274684/problem/B and https://www.codechef.com/JULY221C/problems/LARSQR31
 » 7 weeks ago, # |   +45 vector stores data on heap. A global array is stored in .data or .bss and has a constant address. This saves one indirection.
•  » » 7 weeks ago, # ^ |   0 What about std::array?
•  » » » 7 weeks ago, # ^ |   0 In most implementations (where a stack is used), std::array is either allocated on the stack (when local) or in the same place as global arrays (when global). The more accurate term is that locals have automatic storage duration.
•  » » » 7 weeks ago, # ^ |   0 std::array is basically a fancy wrapper of T[N], so it's fast-ish too.
•  » » 7 weeks ago, # ^ |   0 Does saving an indirection result in a significant/noticeable speed improvement?
•  » » » 7 weeks ago, # ^ |   +3 Speaking from experience, in some problems this reduces runtime by 10-20 percent. Incidentally, using global arrays may also result in fewer cache misses, so maybe that's one of the reasons for the improvement.
 » 7 weeks ago, # |   +8 The only practical case I can come up with where a raw array can be better: a vector uses memory for the data itself + 2 * size_t + 1 pointer. So if you need a 2D array and know the rows are small enough, you can save some memory by using raw arrays, which may help in such an uncommon case with tight ML (I don't remember any example problem :)). Performance-wise they are absolutely identical. Usage-wise vector is strictly better. Even the described case can be solved with a vector, just in a slightly roundabout way.
 » 7 weeks ago, # | ← Rev. 2 →   +18 For me, writing ar[n] is easier than writing vector<int> ar. This becomes more complex in 2D: vector<vector<int>> V vs int ar[][]. Also, arrays are memory-efficient, as per my observation.
•  » » 7 weeks ago, # ^ |   +2 You can use typedef vector<int> vi; and typedef vector<vector<int>> vvi; in your template.
•  » » » 7 weeks ago, # ^ |   +7 You cannot use your template everywhere :) e.g. in coding tests. It's easier to write an array compared to a vector; see the comment of ivan100sic. I use both vectors and arrays: when it's time to go multidimensional I switch to arrays; otherwise I use vectors, because vi v(n) :) Thanks
•  » » 7 weeks ago, # ^ |   0 About "Also arrays are memory efficient as per my observation": any practical example where using vector<int> a(n) doesn't work (not efficient enough, gives TLE), but int a[n] works?
•  » » 7 weeks ago, # ^ | ← Rev. 2 →   +5 In the newest version of C++ you can do this:

vector my_lovely_2d_array(n, vector<int>(m));
auto my_other_lovely_2d_array = vector(n, vector<int>(m));

This can be done with more dimensions. When I said newest version, I meant C++17, but somehow gcc on Codeforces does not recognize this. Thankfully it works in C++20. This feature is called class template argument deduction, btw.
•  » » » 7 weeks ago, # ^ | ← Rev. 2 →   0 Or that:

template <typename T, typename OuterSize>
auto CreateVector(OuterSize outerSize) {
    return vector<T>(outerSize);
}
template <typename T, typename OuterSize, typename... OtherSize>
auto CreateVector(OuterSize outerSize, OtherSize&&... otherSize) {
    return vector(outerSize, CreateVector<T>(forward<OtherSize>(otherSize)...));
}
....
auto v = CreateVector<int>(2, 5);

This works on CF's GCC C++17. You can even modify this to work on earlier C++.
•  » » » » 7 weeks ago, # ^ |   +10 Yeah, if you use templates. I wanted to propose a non-template solution. Newer versions have a lot of quality-of-life improvements, and the effort to learn, understand and write has become significantly less.
•  » » » » » 7 weeks ago, # ^ |   0 naaahhh... deduction is cool, but not in this case. "This can be done with more dimensions": sure, but vectors of 3 or more dimensions would not be satisfying to write. CreateVector(2,3,4,5,6) already gives a 5-d vector with basic template writing (basic, not the great Y-combinator, of course).
 » 7 weeks ago, # |   +46 It's faster.
 » 7 weeks ago, # |   +66 Much simpler and faster when doing multidimensional DP, for example. You can also clear the entire thing with a single memset.
•  » » 7 weeks ago, # ^ |   +10 It's not like you can't clear a vector
•  » » » » 7 weeks ago, # ^ |   0 I don't think any sane person would want to use a 3+ dimensional vector XD
•  » » » » 7 weeks ago, # ^ |   +3
•  » » » 7 weeks ago, # ^ |   +24 Clearing int d[2][50][50][50] (quite common):

memset(d, 0, sizeof(d));

Clearing a vector<vector<vector<vector<int>>>> d (the most elegant way I know):

d.assign(2, vector(50, vector(50, vector(50, 0))));

The first one is also over 4 times faster.
•  » » » » 7 weeks ago, # ^ | ← Rev. 2 →   0 noob question but why is clearing array/vector desirable or needed?
•  » » » » » 7 weeks ago, # ^ |   0 For multiple test cases, global variables should be reset because they still hold the values from the previous test.
•  » » » » » » 7 weeks ago, # ^ |   -6 You don’t need to make vectors global
•  » » » » 7 weeks ago, # ^ |   0 Oh, ok, I didn't think of this, since I almost never need to clear vectors of more than 1 (or rarely 2) dimensions. I've used a 4d vector maybe once or twice at most, I think.
•  » » » » 7 weeks ago, # ^ |   +8 Maybe I don't understand something, but clearing may be needed in some multitest cases, where you need to clear after each test. Of course memset would be faster than assign, but isn't that speed negligible compared to the rest of the code? I don't see a big deal in the destruction/creation of a vector in a multitest handler (complexity-wise it's the same; in practice, correct solutions would still pass and incorrect solutions mustn't pass).
•  » » » » » 7 weeks ago, # ^ | ← Rev. 2 →   0 Sometimes, without some linear optimization, even proper solutions won't pass. I literally failed a D problem in a contest because I used vector, lol. Then, after the contest, I submitted the solution with a plain array, and it passed. The problem: That D problem
•  » » 6 weeks ago, # ^ |   0 Is a multidimensional vector slower due to cache locality? I used a 5-dimensional vector in a recent ABC and it was nearly 10x slower than a regular array.
•  » » » 6 weeks ago, # ^ |   0 Well, yes. A vector itself has sequential memory, but a multidimensional vector is a vector of addresses pointing to the inner vectors (which are on the heap, same as the outer vector's buffer), usually (with high probability) far away from the outer vector. So, yes, multidimensional vectors do have a cache-miss issue.
 » 7 weeks ago, # |   +31 I just learned to code that way, but I'm slowly changing my code style to use fewer global variables. However, I'm not used to writing recursion inside main, and writing all functions inside main may not be good. In that case, I'd probably use global variables.
•  » » 7 weeks ago, # ^ |   0 auto dfs = [&](auto&& dfs, int v) -> void { ... }; dfs(dfs, 0); works like charm. (trailing return type is required when compiler can't deduce it, which is often the case)
•  » » » 7 weeks ago, # ^ |   +8 It might be fine for this case, but to avoid confusion, it is much better to rename the auto&& dfs to something like auto&& self instead (and use self instead of dfs in the lambda body, of course).
•  » » » 7 weeks ago, # ^ |   +18 But that's quite ugly, right? There are some workarounds (y-combinators) but I'm not a big fan of lengthy templates, so...
•  » » » » 7 weeks ago, # ^ |   +13 To fix this (and a large number of other issues that make it harder to use C++), C++23 has a feature on the way (called "deducing this", for instance, see this paper). It will make writing recursive lambdas pretty easy and without any templates.To paraphrase from section 5.3 of the proposal, something like the following will work: auto fib = [](this auto self, int n) { if (n < 2) return n; return self(n-1) + self(n-2); }; It's unfortunate that we can't use this already, but when C++23 comes into the picture, it would definitely make sense to migrate to recursive lambdas to avoid polluting the global namespace (and localize code much better).
•  » » » » » 7 weeks ago, # ^ | ← Rev. 2 →   0 Also, as a current alternative, std::function makes "recursive lambdas" possible, but I don't know how much overhead they cause, so I can't recommend it with certainty.
•  » » » » » » 7 weeks ago, # ^ |   +10 I've used it for a couple of months now and haven't run into any TLE issues, so I don't think the overhead is that bad.
•  » » » » » » » 7 weeks ago, # ^ | ← Rev. 3 →   0 trust me, I've tried making a recreation of Python's @cache by making a functor that uses std::function for storing the function (same way as how Python decorator classes work), and its functor-std::function-lambda overhead was a massive pain in the arse (though most of the pain might just have been an issue of using std::map for storing the memoized values as there was no hash function for tuples)
•  » » » » » » » » 7 weeks ago, # ^ |   +9 If you're still looking for such a thing, I implemented something of that sort last year. 134333870 has an implementation of cached recursion. Note that this will work only for pure functions (references will need significantly more code to work).
•  » » » » » » 7 weeks ago, # ^ |   0 fib is a perfect example to test: std::function runs like 5x slower than a classic function
•  » » » » » » 7 weeks ago, # ^ | ← Rev. 3 →   +8 A trivial example like the following gives a > 70x overhead on my local machine. Also, many times function calls are turned into loops, and std::function sometimes can't be optimized to that extent. Recursive lambdas are practically the same as usual functions (the assembly generated is pretty much the same). I used std::vector instead of replacing a[i] by i; if you use i instead of a[i], you can get even higher overhead. And if you make n const, the call to the recursive lambda is optimized away to something that is O(1). Spoiler

#pragma GCC optimize("O3,unroll-loops")
#pragma GCC target("avx2,bmi,bmi2,popcnt,lzcnt")
#include "bits/stdc++.h"
using ll = int64_t;
template <typename C = std::chrono::high_resolution_clock>
struct Stopwatch {
    std::string name;
    std::chrono::time_point<C> start;
    Stopwatch(const std::string& s) : name{s} { start = C::now(); }
    Stopwatch() : Stopwatch("Time") {}
    void print() const {
        std::cerr << name << ": "
                  << std::chrono::duration_cast<std::chrono::milliseconds>(C::now() - start).count()
                  << " ms\n";
    }
    ~Stopwatch() { print(); }
};
int main() {
    int n = 1e8;
    std::vector<ll> a(n);
    for (int i = 0; i < n; ++i) a[i] = i;
    auto f1 = [&](auto&& self, int i) -> ll {
        if (i == n) return 0;
        return a[i] + self(self, i + 1);
    };
    std::function<ll(int)> f2 = [&](int i) -> ll {
        if (i == n) return 0;
        return a[i] + f2(i + 1);
    };
    {
        Stopwatch st{"Recursive lambda"};
        std::cout << f1(f1, 0) << '\n';
    }
    {
        Stopwatch st{"std::function"};
        std::cout << f2(0) << '\n';
    }
}
 » 7 weeks ago, # | ← Rev. 2 →   -11 My ignorance, I've edited that, lol. Scroll down to the comment for a convenient implementation of multidimensional vectors.
•  » » 7 weeks ago, # ^ |   +5 I hadn't realized that until GitHub Copilot tried to recommend adding vvvi to my template after I added vi and vvi for one-dimensional and two-dimensional vectors, and I thought that was really ugly.
•  » » 7 weeks ago, # ^ |   0 Why am I downvoted? lol
•  » » » 7 weeks ago, # ^ |   0 Because people actually use multidimensional vectors, and many consider them fine. Heck, I've even heard of someone using an 11-dimensional vector to solve a problem requiring 11-dimensional BFS (I know, that sounds totally weird, but it's real).
•  » » 7 weeks ago, # ^ |   +8 you can apply a bit of C++ templates:

auto vec(auto elem, size_t sz, auto... dim) {
    if constexpr (sizeof...(dim) == 0) {
        return vector(sz, elem);
    } else {
        return vector(sz, vec(elem, dim...));
    }
}

so for example vec(1, 4, 2) will create a 4x2 vector filled with 1s
•  » » » 6 weeks ago, # ^ | ← Rev. 2 →   0 thx a lot, this is in my snippet now.
•  » » » » 6 weeks ago, # ^ |   0 Could you please help me: is there an implementation for lower GCC versions (C++14 or lower)? Many thanks. C++17 doesn't work on some dated online judges, though it works on my laptop. :)
•  » » » » 6 weeks ago, # ^ |   +10 template auto vec(T elem, size_t sz) { return vector(sz, elem); } template auto vec(T elem, size_t sz, Args... dim) { auto ret = vec(elem, dim...); return vector(sz, ret); } should work with C++14, I don't know how to do it for lower standarts
•  » » » » » 6 weeks ago, # ^ |   0 Please accept my best thanks!
 » 7 weeks ago, # |   +36 Arrays are convenient so people use them. Some comments mention a speed difference. It mainly happens when you create a lot of small vectors (e.g. an array with $10^6$ vectors of size 0-3). Every vector causes a small time & memory overhead. The same happens when you create a multi-dimensional vector with the last dimension of size 2.You shouldn't worry about it when you deal with a few huge vectors.Also, push_back(x) is slightly less efficient than just accessing an array/vector element. It's good to resize or reserve the vector in advance if possible.
•  » » 7 weeks ago, # ^ |   0 Most of the time it should be fine to use vectors, and it's just personal preference. However, vectors do cause unexpected TLEs sometimes: not because push_back() is inefficient, but exactly because it tries to be efficient and allocates new capacity in powers of two (4 when you need 3, 8 when you need 6, 2048 when you need 1500; you get what I mean by now). Deques do not have this issue, as they allocate memory in buckets, but for the same reason they suffer from cache misses.
 » 7 weeks ago, # | ← Rev. 2 →   +33 For me it's a matter of using different things for different ideas. I use arrays for things which I know have a fixed size, while I use vectors for things which I know have variable lengths. In my head these are two quite different concepts, and coding them with different methods helps tell me the nature (dynamic or fixed length) of the thing. For example, when I write vector<int> adj[100005], I can tell the first dimension is fixed (i.e. the number of nodes in the graph is fixed), while the second is dynamic (i.e. I don't know how many adjacent nodes there are until I read the input).
•  » » 7 weeks ago, # ^ |   +2 I mean, how is this better in any sense than: int n; cin >> n; vector<vector<int>> adj(n); The only thing I can come up with is using a global variable adj and having to write vector<vector<int>> twice: vector<vector<int>> adj; int main() { int n; cin >> n; adj = vector<vector<int>>(n); } For this, using graph = vector<vector<int>>; fits perfectly, and avoid global variables if possible.