cotester's blog

By cotester, 7 weeks ago, In English

I was looking over some C++ solutions written by the best of CodeForces when I realized that most of them use arrays initialized to a large value instead of instantiating a vector inside the main method. Is there any benefit to doing it the former way other than (maybe) saving a few keystrokes?

Here are some examples of what I mean.

vector <int> graph[200010];
int a[200010];
ll dp[200010][2];
ll dp2[200010][2];

const int Maxn = 1e6 + 9;
typedef long long ll;
const ll oo = (ll)1e18;
vector<int> al[Maxn];
ll val[Maxn];
vector<pair<ll,int> > sub[Maxn];
ll dp[Maxn][2],ans[2],tmp[2];

Upd: Thanks to everyone who responded! The main points from the comments below have been compiled here:

In general, C++ arrays…

  • are faster/more efficient
    • There's a smaller runtime constant
    • Vectors require an extra indirection and are stored on the heap
    • The difference in performance becomes more obvious if you use a lot of small vectors.
  • use less memory (useful for tight ML constraints)
  • are more convenient to use
    • Easy to instantiate and clear (with memset)

So overall, I guess vectors are fine in most cases, but you can use arrays just to be on the safe side.

 
 
 
 
  • Vote: I like it
  • +73
  • Vote: I do not like it

»
7 weeks ago, # |
  Vote: I like it +160 Vote: I do not like it

It pleases the cp gods

»
7 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

how would you do your examples but using vectors?

  • »
    »
    7 weeks ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    something like vvi g(N); (where N is defined from the input) would work in place of vector <int> graph[200010];

»
7 weeks ago, # |
  Vote: I like it -22 Vote: I do not like it

vector has a fairly large constant factor (at least on Chinese judges, I think), so using an array reduces the chance that your code with the correct time complexity still gets TLE, I guess?

»
7 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

Does someone have any examples where if you use vector<int> you get TLE, but if you use int[] you get AC?

  • »
    »
    7 weeks ago, # ^ |
    Rev. 2   Vote: I like it +24 Vote: I do not like it

    It happened to me once, but I don't remember the problem. So yes, if you don't need any special vector features, it's better to use int[]; it's cleaner and at least somewhat faster.

  • »
    »
    7 weeks ago, # ^ |
    Rev. 2   Vote: I like it +5 Vote: I do not like it

    Almost every problem on matrix exponentiation, like one of the recent problems link. I got TLE using vectors submission and AC using std::array submission. Replacing vectors with arrays can improve the constant factor of your solution and help you squeeze it into the TL...

    • »
      »
      »
      7 weeks ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      Honestly, I use valarrays for matrix exponentiation. Not only are they usually a good match for linear algebra (though, admittedly, not always), a single valarray can simulate a whole matrix by itself, which makes it a natural fit for representing matrices.

    • »
      »
      »
      6 weeks ago, # ^ |
        Vote: I like it +8 Vote: I do not like it

      6500 ms vs 717 ms does not seem like a constant-factor kind of difference...

      Replacing vectors with arrays can improve constant in your solution and help you to squeeze it in TL...

      I doubt a noticeable constant-factor difference really happens here, unless you are allocating std::array on the stack, which is, in general, a bad idea. An additional small factor can also come from the fact that std::vector fills its contents with the default value while arrays (of any kind) don't, but that too should be dominated by the allocation cost.

  • »
    »
    6 weeks ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    It depends on your implementation, but I got TLE on these two problems with a vector and AC with a regular array:

    https://codeforces.com/edu/course/2/lesson/4/4/practice/contest/274684/problem/B

    https://www.codechef.com/JULY221C/problems/LARSQR31

»
7 weeks ago, # |
  Vote: I like it +45 Vote: I do not like it

vector stores data on heap. A global array is stored in .data or .bss and has a constant address. This saves one indirection.

  • »
    »
    7 weeks ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    What about std::array?

    • »
      »
      »
      7 weeks ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      In most implementations (where a stack is used), a local std::array is allocated on the stack, while a global one lives in the same place as global arrays. The more accurate term for the local case is automatic storage duration.

    • »
      »
      »
      7 weeks ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      std::array<T, N> is basically a fancy wrapper of T[N], so it's fast-ish too.

  • »
    »
    7 weeks ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Does saving an indirection result in a significant/noticeable speed improvement?

    • »
      »
      »
      7 weeks ago, # ^ |
        Vote: I like it +3 Vote: I do not like it

      Speaking from experience, in some problems this reduces runtime by 10-20 percent. Incidentally, global arrays may also result in fewer cache misses, which may be one of the reasons for the improvement.

»
7 weeks ago, # |
  Vote: I like it +8 Vote: I do not like it

The only practical case I can come up with where a raw array can be better:
a vector spends memory on the data itself plus per-vector bookkeeping (typically three pointers, or a pointer and two sizes). So if you need a 2D array and you know the rows are small, you can save some memory by using raw arrays, which may help in the uncommon case of a tight ML. (I don't remember any concrete problem as an example :))

Performance-wise they are practically identical. Usage-wise vector is strictly better. Even the described case can be solved with a vector, just in a slightly roundabout way.

»
7 weeks ago, # |
Rev. 2   Vote: I like it +18 Vote: I do not like it

For me, writing int ar[n] is easier than writing vector<int> ar(n).

This gets more complex in 2D: vector<vector<int>> v vs int ar[][].

Also, arrays are more memory-efficient, as per my observation.

  • »
    »
    7 weeks ago, # ^ |
      Vote: I like it +2 Vote: I do not like it

    You can use typedef vector<int> vi; and typedef vector<vector<int>> vvi; in your template.

    • »
      »
      »
      7 weeks ago, # ^ |
        Vote: I like it +7 Vote: I do not like it

      You cannot use your template everywhere :) For example, in coding tests.

      It's easier to write an array compared to a vector; see ivan100sic's comment.

      I use both vectors and arrays: when it's time to go multidimensional I switch to arrays; otherwise I use vectors, because vi v(n) is easy :) Thanks

  • »
    »
    7 weeks ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Regarding "arrays are memory efficient as per my observation":

    Any practical example where using vector<int> a(n) doesn't work (not efficient enough and gives TLE), but using int a[n] works?

  • »
    »
    7 weeks ago, # ^ |
    Rev. 2   Vote: I like it +5 Vote: I do not like it

    In the newest version of C++ you can do this:

    vector my_lovely_2d_array(n, vector<int>(m));
    auto my_other_lovely_2d_array = vector(n, vector<int>(m));
    

    This can be done with more dimensions.

    By "newest version" I meant C++17, but somehow GCC on Codeforces does not accept it there; thankfully it works under C++20. This feature is called class template argument deduction (CTAD), by the way.

    • »
      »
      »
      7 weeks ago, # ^ |
      Rev. 2   Vote: I like it 0 Vote: I do not like it

      Or that:

      template<typename T, typename OuterSize>
      auto CreateVector(OuterSize outerSize) {
          return vector<T>(outerSize);
      }
      
      template<typename T, typename OuterSize, typename ...OtherSize>
      auto CreateVector(OuterSize outerSize, OtherSize&& ...otherSize) {
          return vector(outerSize, CreateVector<T, OtherSize...>(forward<OtherSize>(otherSize)...));
      }
      
      ....
      
      auto v = CreateVector<int>(2, 5);
      

      This works with CF's GCC under C++17.

      You can even modify it to work with earlier C++ standards.

      • »
        »
        »
        »
        7 weeks ago, # ^ |
          Vote: I like it +10 Vote: I do not like it

        Yeah, if you use templates. I wanted to propose a non-template solution. Newer versions have a lot of quality-of-life improvements, and the effort needed to learn, understand, and write the code has become significantly smaller.

        • »
          »
          »
          »
          »
          7 weeks ago, # ^ |
            Vote: I like it 0 Vote: I do not like it

          naaahhh... deduction is cool, but not in this case.

          Sure, "this can be done with more dimensions", but vectors of 3 or more dimensions would not be satisfying to write that way. CreateVector<int>(2,3,4,5,6) already gives a 5-d vector with basic template writing (basic, not a full y-combinator, of course).

»
7 weeks ago, # |
  Vote: I like it +46 Vote: I do not like it

It's faster.

»
7 weeks ago, # |
  Vote: I like it +66 Vote: I do not like it

Much simpler and faster when doing multidimensional DP, for example. You can also clear the entire thing with a single memset.

  • »
    »
    7 weeks ago, # ^ |
      Vote: I like it +10 Vote: I do not like it

    It's not like you can't clear a vector

    • »
      »
      »
      7 weeks ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      I don't think any sane person would want to use a 3+ dimensional vector XD

    • »
      »
      »
      7 weeks ago, # ^ |
        Vote: I like it +24 Vote: I do not like it

      Clearing int d[2][50][50][50] (quite common):

      memset(d, 0, sizeof(d));
      

      Clearing a vector<vector<vector<vector<int>>>> d (most elegant way I know):

      d.assign(2, vector(50, vector(50, vector(50, 0))));
      

      The first one is also over 4 times faster.

      • »
        »
        »
        »
        7 weeks ago, # ^ |
        Rev. 2   Vote: I like it 0 Vote: I do not like it

        noob question but why is clearing array/vector desirable or needed?

        • »
          »
          »
          »
          »
          7 weeks ago, # ^ |
            Vote: I like it 0 Vote: I do not like it

          For multiple test cases, global variables should be reset, because values left over from one test can leak into the next.

          • »
            »
            »
            »
            »
            »
            7 weeks ago, # ^ |
              Vote: I like it -6 Vote: I do not like it

            You don’t need to make vectors global

      • »
        »
        »
        »
        7 weeks ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

        Oh, ok, I didn't think of this, since I almost never need to clear vectors of more than one (or rarely two) dimensions. I think I've used a 4D vector at most once or twice.

      • »
        »
        »
        »
        7 weeks ago, # ^ |
          Vote: I like it +8 Vote: I do not like it

        Maybe I don't understand something, but clearing is mostly needed in multitest problems, where you have to reset state before each test. Of course memset is faster than assign, but:

        1. Isn't that speedup negligible compared to the rest of the code?
        2. I don't see a big deal in destroying/creating the vector in the multitest handler (complexity-wise it's the same, and in practice correct solutions should still pass while incorrect ones shouldn't)
        • »
          »
          »
          »
          »
          7 weeks ago, # ^ |
          Rev. 2   Vote: I like it 0 Vote: I do not like it

          Sometimes, without some constant-factor optimization, even correct solutions won't pass. I literally failed a D problem in a contest because I used vectors, lol. Then, after the contest, I submitted the solution with a plain array and it passed. The problem: That D problem

  • »
    »
    6 weeks ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Is a multidimensional vector slower due to cache locality? I used a 5-dimensional vector in a recent ABC and it was nearly 10x slower than a regular array.

    • »
      »
      »
      6 weeks ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      Well, yes. A single vector's storage is contiguous, but a multidimensional vector is a vector of handles pointing to inner vectors, each allocated on the heap (as is the outer vector) and usually far away from it. So yes, multidimensional vectors do have cache-miss issues.

»
7 weeks ago, # |
  Vote: I like it +31 Vote: I do not like it

I just learned to code that way, but I'm slowly changing my style to use fewer global variables.

But I'm not used to writing recursion inside main, and defining every function inside main may not be good. In those cases, I'd probably still use global variables.

  • »
    »
    7 weeks ago, # ^ |
      Vote: I like it 0 Vote: I do not like it
    auto dfs = [&](auto&& dfs, int v) -> void {
      ...
    };
    dfs(dfs, 0);
    

    works like a charm. (A trailing return type is required when the compiler can't deduce it, which is often the case.)

    • »
      »
      »
      7 weeks ago, # ^ |
        Vote: I like it +8 Vote: I do not like it

      It might be fine for this case, but to avoid confusion, it is much better to rename the auto&& dfs to something like auto&& self instead (and use self instead of dfs in the lambda body, of course).

    • »
      »
      »
      7 weeks ago, # ^ |
        Vote: I like it +18 Vote: I do not like it

      But that's quite ugly, right? There are some workarounds (y-combinators) but I'm not a big fan of lengthy templates, so...

      • »
        »
        »
        »
        7 weeks ago, # ^ |
          Vote: I like it +13 Vote: I do not like it

        To fix this (and a large number of other issues that make it harder to use C++), C++23 has a feature on the way (called "deducing this", for instance, see this paper). It will make writing recursive lambdas pretty easy and without any templates.

        To paraphrase from section 5.3 of the proposal, something like the following will work:

        auto fib = [](this auto self, int n) {
            if (n < 2) return n;
            return self(n-1) + self(n-2);
        };
        

        It's unfortunate that we can't use this already, but when C++23 comes into the picture, it would definitely make sense to migrate to recursive lambdas to avoid polluting the global namespace (and localize code much better).

        • »
          »
          »
          »
          »
          7 weeks ago, # ^ |
          Rev. 2   Vote: I like it 0 Vote: I do not like it

          Also, as a current alternative, std::function makes "recursive lambdas" possible, but I don't know how much overhead it causes, so I can't recommend it with full certainty.

          • »
            »
            »
            »
            »
            »
            7 weeks ago, # ^ |
              Vote: I like it +10 Vote: I do not like it

            I've used it for a couple of months now and haven't run into any TLE issues, so I don't think the overhead is that bad.

            • »
              »
              »
              »
              »
              »
              »
              7 weeks ago, # ^ |
              Rev. 3   Vote: I like it 0 Vote: I do not like it

              Trust me, I've tried recreating Python's @cache by making a functor that uses std::function to store the function (the same way Python decorator classes work), and the functor/std::function/lambda overhead was a massive pain in the arse (though most of the pain might just have been from using std::map to store the memoized values, since there was no hash function for tuples).

              • »
                »
                »
                »
                »
                »
                »
                »
                7 weeks ago, # ^ |
                  Vote: I like it +9 Vote: I do not like it

                If you're still looking for such a thing, I implemented something of that sort last year. 134333870 has an implementation of cached recursion. Note that this will work only for pure functions (references will need significantly more code to work).

          • »
            »
            »
            »
            »
            »
            7 weeks ago, # ^ |
              Vote: I like it 0 Vote: I do not like it

            fib is a perfect example to test: std::function runs about 5x slower than a classic function.

          • »
            »
            »
            »
            »
            »
            7 weeks ago, # ^ |
            Rev. 3   Vote: I like it +8 Vote: I do not like it

            A trivial example like the following gives a > 70x overhead on my local machine. Also, many times, function calls are turned into loops, and std::function sometimes can't be optimized to that extent. Recursive lambdas are practically the same as usual functions (the assembly generated is pretty much the same). I used std::vector instead of replacing a[i] by i; if you use i instead of a[i], you can get even higher overhead. And if you make n const, the call to the recursive lambda is optimized away to something that is O(1).

            Spoiler
»
7 weeks ago, # |
Rev. 2   Vote: I like it -11 Vote: I do not like it

My ignorance, I've edited that lol. Scroll down to the comment below for a convenient implementation of multidimensional vectors.

  • »
    »
    7 weeks ago, # ^ |
      Vote: I like it +5 Vote: I do not like it

    I hadn't realized that until GitHub Copilot tried to recommend adding vvvi to my template, after I had added vi and vvi for one- and two-dimensional vectors, and I thought that was crappy and ugly.

  • »
    »
    7 weeks ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Why am I downvoted? lol

    • »
      »
      »
      7 weeks ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      Because people actually use multidimensional vectors, and many consider that fine. Heck, I've even heard of someone using an 11-dimensional vector to solve a problem requiring 11-dimensional BFS (I know, that sounds totally weird, but it's real).

  • »
    »
    7 weeks ago, # ^ |
      Vote: I like it +8 Vote: I do not like it

    you can apply a bit of C++ templates

    auto vec(auto elem, size_t sz, auto... dim) {
      if constexpr (sizeof...(dim) == 0) {
        return vector(sz, elem);
      } else {
        return vector(sz, vec(elem, dim...));
      }
    }
    

    so for example vec(1, 4, 2) will create a 4x2 vector filled with 1s

    • »
      »
      »
      6 weeks ago, # ^ |
      Rev. 2   Vote: I like it 0 Vote: I do not like it

      thx a lot, this is in my snippet now.

    • »
      »
      »
      6 weeks ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      Could you please help with an implementation for lower GCC versions (C++14 or lower)? Many thanks.

      C++17 doesn't work on some dated online judges, though it works on my laptop. :)

      • »
        »
        »
        »
        6 weeks ago, # ^ |
          Vote: I like it +10 Vote: I do not like it
        template <typename T> 
        auto vec(T elem, size_t sz) {
          return vector<T>(sz, elem);
        }
        
        template <typename T, typename... Args>
        auto vec(T elem, size_t sz, Args... dim) {
          auto ret = vec(elem, dim...);
          return vector<decltype(ret)>(sz, ret);
        }
        

        should work with C++14; I don't know how to do it for lower standards

»
7 weeks ago, # |
  Vote: I like it +36 Vote: I do not like it

Arrays are convenient so people use them.

Some comments mention a speed difference. It mainly happens when you create a lot of small vectors (e.g. an array with $$$10^6$$$ vectors of size 0-3). Every vector causes a small time & memory overhead. The same happens when you create a multi-dimensional vector with the last dimension of size 2.

You shouldn't worry about it when you deal with a few huge vectors.

Also, push_back(x) is slightly less efficient than just accessing an array/vector element. It's good to resize or reserve the vector in advance when possible.

  • »
    »
    7 weeks ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Most of the time it should be fine to use vectors, and it's just personal preference. However, vectors do cause unexpected TLEs sometimes, not because push_back() is inefficient, but precisely because it tries to be efficient and allocates capacity in powers of two (4 when you need 3, 8 when you need 6, 2048 when you need 1500; you get the idea). Deques don't have this issue, since they allocate memory in fixed-size buckets, but for the same reason they suffer more cache misses.

»
7 weeks ago, # |
Rev. 2   Vote: I like it +33 Vote: I do not like it

For me it's a matter of using different things for different ideas.

I use arrays for things I know have a fixed size, and vectors for things I know have variable length. In my head these are quite different concepts, and coding them with different constructs tells me the nature (dynamic or fixed-length) of each thing. For example, when I write vector<int> adj[100005], I can tell the first dimension is fixed (i.e. the number of nodes in the graph is fixed) while the second is dynamic (i.e. I don't know how many adjacent nodes there are until I read the input).

  • »
    »
    7 weeks ago, # ^ |
      Vote: I like it +2 Vote: I do not like it

    I mean, how is this better than

    int n;
    cin >> n;
    vector<vector<int>> adj(n);
    

    in any sense?

    The only thing I can come up with is using a global variable adj and having to write vector<vector<int>> twice

    vector<vector<int>> adj;
    
    int main() {
        int n;
        cin >> n;
        adj = vector<vector<int>>(n);
    }
    

    For this use case, using graph = vector<vector<int>> fits perfectly, along with avoiding global variables where possible.