cirex's blog

By cirex, 20 months ago, In English

I was looking over some C++ solutions written by the best on Codeforces when I realized that most of them use global arrays declared with a large fixed size instead of instantiating a vector inside main. Is there any benefit to doing it the former way other than (maybe) saving a few keystrokes?

Here are some examples of what I mean.

vector <int> graph[200010];
int a[200010];
ll dp[200010][2];
ll dp2[200010][2];

const int Maxn = 1e6 + 9;
typedef long long ll;
const ll oo = (ll)1e18;
vector<int> al[Maxn];
ll val[Maxn];
vector<pair<ll,int> > sub[Maxn];
ll dp[Maxn][2],ans[2],tmp[2];

Upd: Thanks to everyone who responded! The main points from the comments below have been compiled here:

In general, C++ arrays…

  • are faster/more efficient
    • There's a smaller runtime constant
    • Vectors require an extra indirection and are stored on the heap
    • The difference in performance becomes more obvious if you use a lot of small vectors.
  • use less memory (useful for tight ML constraints)
  • are more convenient to use
    • Easy to instantiate and clear (with memset)

So overall, I guess vectors are fine in most cases, but you can use arrays just to be on the safe side.
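
To make the comparison concrete, here is a minimal sketch of the two styles (my own illustration, not taken from any submission; MAXN and the variable names are made up):

#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

// Array style: fixed compile-time bound, global storage, trivially reset.
const int MAXN = 200010;            // assumed problem limit
int a[MAXN];
ll dp[MAXN][2];

int main() {
    int n;
    cin >> n;

    // Vector style: sized from the input, no compile-time bound needed.
    vector<int> b(n);
    vector<array<ll, 2>> dp2(n);

    // Resetting state between test cases:
    memset(dp, 0, sizeof dp);                // array: one memset
    dp2.assign(n, array<ll, 2>{0, 0});       // vector: reassign with defaults
}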

+73

»
20 months ago, # | +160

It pleases the cp gods

»
20 months ago, # | 0

how would you do your examples but using vectors?

  • »
    »
    20 months ago, # ^ | 0

    something like vvi g(N); (where N is defined from the input) would work in place of vector <int> graph[200010];
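
    Spelled out without the typedef, that would look roughly like this (a sketch; n is assumed to be read before the declaration):

    #include <bits/stdc++.h>
    using namespace std;
    typedef vector<vector<int>> vvi;

    int main() {
        int n;
        cin >> n;
        vvi g(n);                 // replaces: vector<int> graph[200010];
        // edges are added as usual, e.g. g[u].push_back(v);
    }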

»
20 months ago, # | 0

Does someone have any examples where if you use vector<int> you get TLE, but if you use int[] you get AC?

  • »
    »
    20 months ago, # ^ | Rev. 2, +24

    It happened to me once, but I don't remember the problem. So yes, if you don't use any special vector features, it's better to use int[]; it's just cleaner and faster, at least.

  • »
    »
    20 months ago, # ^ | Rev. 2, +5

    Almost every problem on matrix exponentiation, like one of the recent problems (link). I got TL using vectors (submission) and got AC using std::array (submission). Replacing vectors with arrays can improve the constant in your solution and help you squeeze it into the TL...

    • »
      »
      »
      20 months ago, # ^ | 0

      Honestly, I use valarrays for matrix exponentiation. Not only are they usually a good match for linear algebra (that said, it's not always true), a valarray can simulate a matrix by itself, which is why valarray is perfect for representing matrices.
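
      For what it's worth, here is a rough sketch of what a valarray-backed matrix could look like (my own illustration with an assumed modulus MOD and a naive mul; it is not the commenter's code):

      #include <bits/stdc++.h>
      using namespace std;
      typedef long long ll;

      const ll MOD = 1000000007;          // assumed modulus

      // A square matrix stored row-major in a single valarray.
      struct Mat {
          int n;
          valarray<ll> a;
          Mat(int n) : n(n), a(0LL, (size_t)n * n) {}
          ll& at(int i, int j) { return a[(size_t)i * n + j]; }
      };

      // Naive O(n^3) modular multiplication.
      Mat mul(Mat& x, Mat& y) {
          Mat z(x.n);
          for (int i = 0; i < x.n; i++)
              for (int k = 0; k < x.n; k++)
                  for (int j = 0; j < x.n; j++)
                      z.at(i, j) = (z.at(i, j) + x.at(i, k) * y.at(k, j)) % MOD;
          return z;
      }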

    • »
      »
      »
      20 months ago, # ^ | +8

      6500ms vs 717ms does not seem like a constant-factor kind of difference...

      Replacing vectors with arrays can improve constant in your solution and help you to squeeze it in TL...

      I doubt that it may indeed happen with a noticeable constant difference, unless you are allocating std::array on the stack, which is, in general, a bad idea. Also, an additional small factor can come from the fact that std::vector fills its contents with the default value, while arrays (of any kind) don't, but that should also be dominated by the allocation.

  • »
    »
    20 months ago, # ^ | 0

    It depends on your implementation, but I got TLE in these 2 problems with vector and AC with a regular array:

    https://codeforces.com/edu/course/2/lesson/4/4/practice/contest/274684/problem/B

    https://www.codechef.com/JULY221C/problems/LARSQR31

»
20 months ago, # | +45

vector stores its data on the heap. A global array is stored in .data or .bss and has a constant address. This saves one indirection.
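
A tiny illustration of what that means for element access (the names are just for the example):

#include <bits/stdc++.h>
using namespace std;

int g[200010];              // fixed address known at link time (.bss)

int main() {
    vector<int> v(200010);  // the vector object only holds a pointer to heap storage

    // g[i]: load from (constant address + offset).
    // v[i]: first load v's data pointer, then load from (pointer + offset),
    //       i.e. one extra memory access unless the compiler can hoist it.
    g[5] = 1;
    v[5] = 1;
    cout << g[5] + v[5] << '\n';
}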

  • »
    »
    20 months ago, # ^ | 0

    What about std::array?

    • »
      »
      »
      20 months ago, # ^ | 0

      In most implementations (where a stack is used), std::array is allocated either on the stack (when local) or in the same place as global arrays (when global). More precisely, a local one has automatic storage duration, while a global one has static storage duration.

    • »
      »
      »
      20 months ago, # ^ | 0

      std::array<T, N> is basically a fancy wrapper of T[N], so it's fast-ish too.

  • »
    »
    20 months ago, # ^ | 0

    Does saving an indirection result in a significant/noticeable speed improvement?

    • »
      »
      »
      20 months ago, # ^ | +3

      Speaking from experience, in some problems this reduces runtime by 10-20 percent. Incidentally, using global arrays may also result in fewer cache misses, so maybe that's one of the reasons for the improvement.

»
20 months ago, # | +8

The only practical case I can come up with where a raw array can be better:
vector uses memory for the data itself plus roughly 2 * size_t + 1 pointer of bookkeeping. So if you need a 2D array and know that the rows are small enough, you can save some memory by using raw arrays, which may help in the uncommon case of a tight ML. (I don't remember any concrete problem as an example :))

Performance-wise they are absolutely identical. Usage-wise vector is strictly better. Even the described case can be solved with a vector, just in a slightly roundabout way.
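
As a rough illustration of that bookkeeping overhead (the 24-byte figure is typical for 64-bit libstdc++, not guaranteed):

#include <bits/stdc++.h>
using namespace std;

int main() {
    // Each vector object is usually 3 pointers = 24 bytes of bookkeeping,
    // in addition to its data, which is a separate heap allocation.
    cout << sizeof(vector<int>) << '\n';   // typically prints 24

    // So 10^6 short rows stored as vectors cost ~24 MB of headers alone:
    //   vector<int> rows[1000000];
    // while a flat 2D array with, say, 3 ints per row is 12 MB in total:
    //   static int flat[1000000][3];
}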

»
20 months ago, # | Rev. 2, +18

For me, writing ar[n] is easier than writing vector<int> ar(n).

This becomes more noticeable in 2-D: vector<vector<int>> V vs int ar[][].

Also, arrays are more memory efficient, as per my observation.

  • »
    »
    20 months ago, # ^ | +2

    You can use typedef vector<int> vi; and typedef vector<vector<int>> vvi; in your template.

    • »
      »
      »
      20 months ago, # ^ | +7

      You cannot use your template everywhere :) e.g., in coding tests.

      It's easier to write an array compared to a vector; see the comment of ivan100sic.

      I use both vectors and arrays: when it's time to use something multidimensional I switch to an array, otherwise I use a vector, because vi v(n) is short enough :) Thanks

  • »
    »
    20 months ago, # ^ | 0

    About this: "Also, arrays are more memory efficient, as per my observation"

    Any practical example where using vector<int> a(n) doesn't work (not efficient enough and gives TLE), but using int a[n] works?

  • »
    »
    20 months ago, # ^ | Rev. 2, +5

    In the newest version of C++ you can do this:

    vector my_lovely_2d_array(n, vector<int>(m));
    auto my_other_lovely_2d_array = vector(n, vector<int>(m));
    

    This can be done with more dimensions.

    When I said the newest version, I meant C++17. But somehow GCC on Codeforces does not recognize this. Thankfully it works in C++20. This feature is called class template argument deduction (CTAD), btw.

    • »
      »
      »
      20 months ago, # ^ | Rev. 2, 0

      Or that:

      template<typename T, typename OuterSize>
      auto CreateVector(OuterSize outerSize) {
          return vector<T>(outerSize);
      }
      
      template<typename T, typename OuterSize, typename ...OtherSize>
      auto CreateVector(OuterSize outerSize, OtherSize&& ...otherSize) {
          return vector(outerSize, CreateVector<T, OtherSize...>(forward<OtherSize>(otherSize)...));
      }
      
      ....
      
      auto v = CreateVector<int>(2, 5);
      

      This works with C++17 on CF's GCC.

      You can even modify this to work with earlier C++ standards.

      • »
        »
        »
        »
        20 months ago, # ^ | +10

        Yeah, if you use templates. I wanted to propose a non-template solution. Newer versions have a lot of quality-of-life improvements, and the effort to learn, understand and write has become significantly less.

        • »
          »
          »
          »
          »
          20 months ago, # ^ | 0

          naaahhh... deduction is cool, but not in this case.

          "This can be done with more dimensions." Sure, but 3-or-more-dimensional vectors would not be satisfying to write. CreateVector<int>(2,3,4,5,6) already gives a 5-d vector with basic template writing (basic, not a fancy Y-combinator, of course).

»
20 months ago, # | +46

It's faster.

»
20 months ago, # | +66

Much simpler and faster when doing multidimensional DP, for example. You can also clear the entire thing with a single memset.

  • »
    »
    20 months ago, # ^ | +10

    It's not like you can't clear a vector

    • »
      »
      »
      20 months ago, # ^ | 0

      I don't think any sane person would want to use a 3+ dimensional vector XD

    • »
      »
      »
      20 months ago, # ^ | +24

      Clearing int d[2][50][50][50] (quite common):

      memset(d, 0, sizeof(d));
      

      Clearing a vector<vector<vector<vector<int>>>> d (most elegant way I know):

      d.assign(2, vector(50, vector(50, vector(50, 0))));
      

      The first one is also over 4 times faster.

      • »
        »
        »
        »
        20 months ago, # ^ | Rev. 2, 0

        noob question but why is clearing array/vector desirable or needed?

        • »
          »
          »
          »
          »
          20 months ago, # ^ | 0

          With multiple test cases, global variables should be reset between tests; otherwise values left over from the previous test case carry into the next one.

      • »
        »
        »
        »
        20 months ago, # ^ | 0

        Oh, ok, I didn't think of this, since I almost never need to clear >1 (or rarely 2) dimensional vectors. I think I have used a 4-d vector at most once or twice.

      • »
        »
        »
        »
        20 months ago, # ^ | +8

        Maybe I don't understand something, but clearing is only needed in some multi-test problems, where you have to reset state for each test. Of course memset would be faster than assign, but:

        1. Isn't that speed negligible compared to the rest of the code?
        2. I don't see a big deal in destroying/creating the vector in the multi-test handler (complexity-wise it's the same; in practice, correct solutions would still pass and incorrect solutions shouldn't).
        • »
          »
          »
          »
          »
          20 months ago, # ^ | Rev. 2, 0

          Sometimes, without some linear optimization, even proper solutions won't pass. I literally failed a D problem in a contest because I used a vector lol. Then, after the contest, I submitted the solution with a plain array and it passed. The problem: That D problem

  • »
    »
    20 months ago, # ^ | 0

    Is a multidimensional vector slower due to cache locality? I used a 5-dimensional vector in a recent ABC and it was nearly 10x slower than a regular array.

    • »
      »
      »
      20 months ago, # ^ | 0

      Well, yes. A vector itself uses contiguous memory, but a multidimensional vector is a vector of handles pointing to the inner vectors (which live on the heap, same as the outer vector), and those are usually (with high probability) far away from the outer vector. So yes, multidimensional vectors do have cache-miss issues.

»
20 months ago, # | +31

I just learned to code that way, but I'm slowly changing my code style to use fewer global variables.

But I'm not used to writing recursion inside main, and writing all functions inside main may not be good. In that case, I'd probably use global variables.

  • »
    »
    20 months ago, # ^ | 0
    auto dfs = [&](auto&& dfs, int v) -> void {
      ...
    };
    dfs(dfs, 0);
    

    works like a charm. (The trailing return type is required when the compiler can't deduce it, which is often the case.)
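
    A complete (hypothetical) example of the pattern, with adj and visited as assumed local variables:

    #include <bits/stdc++.h>
    using namespace std;

    int main() {
        int n, m;
        cin >> n >> m;
        vector<vector<int>> adj(n);
        for (int i = 0; i < m; i++) {
            int u, v;
            cin >> u >> v;
            adj[u].push_back(v);
            adj[v].push_back(u);
        }

        vector<bool> visited(n, false);
        // The lambda receives itself as its first argument in order to recurse.
        auto dfs = [&](auto&& self, int v) -> void {
            visited[v] = true;
            for (int to : adj[v])
                if (!visited[to]) self(self, to);
        };
        dfs(dfs, 0);
    }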

    • »
      »
      »
      20 months ago, # ^ | +8

      It might be fine for this case, but to avoid confusion, it is much better to rename the auto&& dfs to something like auto&& self instead (and use self instead of dfs in the lambda body, of course).

    • »
      »
      »
      20 months ago, # ^ | +18

      But that's quite ugly, right? There are some workarounds (y-combinators) but I'm not a big fan of lengthy templates, so...

      • »
        »
        »
        »
        20 months ago, # ^ | +13

        To fix this (and a large number of other issues that make it harder to use C++), C++23 has a feature on the way (called "deducing this", for instance, see this paper). It will make writing recursive lambdas pretty easy and without any templates.

        To paraphrase from section 5.3 of the proposal, something like the following will work:

        auto fib = [](this auto self, int n) {
            if (n < 2) return n;
            return self(n-1) + self(n-2);
        };
        

        It's unfortunate that we can't use this already, but when C++23 comes into the picture, it would definitely make sense to migrate to recursive lambdas to avoid polluting the global namespace (and localize code much better).

        • »
          »
          »
          »
          »
          20 months ago, # ^ | Rev. 2, 0

          Also, as a current alternative, std::function makes "recursive lambdas" possible, but I don't know how much overhead it causes, so I can't recommend it with certainty.
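
          Something like this, for instance (a sketch; adj and visited are assumed variables):

          #include <bits/stdc++.h>
          using namespace std;

          int main() {
              vector<vector<int>> adj = {{1, 2}, {0}, {0}};
              vector<bool> visited(adj.size(), false);

              // std::function can refer to itself by name, unlike a plain auto lambda,
              // at the cost of a type-erased (and harder to inline) call.
              function<void(int)> dfs = [&](int v) {
                  visited[v] = true;
                  for (int to : adj[v])
                      if (!visited[to]) dfs(to);
              };
              dfs(0);
          }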

          • »
            »
            »
            »
            »
            »
            20 months ago, # ^ | +10

            I've used it for a couple of months now and haven't run into any TLE issues, so I don't think the overhead is that bad.

            • »
              »
              »
              »
              »
              »
              »
              20 months ago, # ^ | Rev. 3, 0

              Trust me, I've tried recreating Python's @cache by making a functor that uses std::function to store the function (the same way Python decorator classes work), and its functor/std::function/lambda overhead was a massive pain in the arse (though most of the pain might just have been from using std::map to store the memoized values, since there was no hash function for tuples).
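
              For the record, a rough sketch of that kind of wrapper, using std::map as the cache so no tuple hash is needed (this is my own illustration, not the code described above):

              #include <bits/stdc++.h>
              using namespace std;

              // A memoizing functor: stores the recursive step in a std::function
              // and the computed values in a std::map keyed by the argument tuple.
              template <class R, class... Args>
              struct Memoized {
                  function<R(Memoized&, Args...)> f;
                  map<tuple<Args...>, R> cache;
                  R operator()(Args... args) {
                      auto key = make_tuple(args...);
                      auto it = cache.find(key);
                      if (it != cache.end()) return it->second;
                      R value = f(*this, args...);
                      cache[key] = value;
                      return value;
                  }
              };

              int main() {
                  Memoized<long long, int> fib{
                      [](Memoized<long long, int>& self, int n) -> long long {
                          return n < 2 ? n : self(n - 1) + self(n - 2);
                      }};
                  cout << fib(80) << '\n';   // memoized, so this is fast
              }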

              • »
                »
                »
                »
                »
                »
                »
                »
                20 months ago, # ^ | +9

                If you're still looking for such a thing, I implemented something of that sort last year. 134333870 has an implementation of cached recursion. Note that this will work only for pure functions (references will need significantly more code to work).

              • »
                »
                »
                »
                »
                »
                »
                »
                6 months ago, # ^ | 0

                Does writing a custom hash function for tuples make it more expensive than a map of tuples?

                • »
                  »
                  »
                  »
                  »
                  »
                  »
                  »
                  »
                  6 months ago, # ^ | Rev. 2, 0

                  std::map does have that $$$\log$$$ factor, so logically it depends. But generally, comparing tuples and hashing tuples should differ by at most a constant factor in the worst case.
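
                  For reference, a small sketch of a tuple hash that would let you switch to unordered_map (the hash-combine constant is the usual boost-style one; treat it as illustrative, not a vetted scheme):

                  #include <bits/stdc++.h>
                  using namespace std;

                  // Boost-style hash_combine over the elements of a 3-int key.
                  struct TupleHash {
                      size_t operator()(const tuple<int, int, int>& t) const {
                          size_t seed = 0;
                          for (long long x : {get<0>(t), get<1>(t), get<2>(t)})
                              seed ^= hash<long long>{}(x) + 0x9e3779b9
                                      + (seed << 6) + (seed >> 2);
                          return seed;
                      }
                  };

                  int main() {
                      unordered_map<tuple<int, int, int>, long long, TupleHash> memo;
                      memo[{1, 2, 3}] = 42;
                      cout << memo[{1, 2, 3}] << '\n';
                  }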

          • »
            »
            »
            »
            »
            »
            20 months ago, # ^ | 0

            fib is a perfect example to test: std::function runs about 5x slower than a classic function.

          • »
            »
            »
            »
            »
            »
            20 months ago, # ^ | Rev. 3, +8

            A trivial example like the following gives a > 70x overhead on my local machine. Also, many times, function calls are turned into loops, and std::function sometimes can't be optimized to that extent. Recursive lambdas are practically the same as usual functions (the assembly generated is pretty much the same). I used std::vector instead of replacing a[i] by i; if you use i instead of a[i], you can get even higher overhead. And if you make n const, the call to the recursive lambda is optimized away to something that is O(1).

            Spoiler
»
20 months ago, # | Rev. 2, -11

My ignorance, I've edited that lol. Scroll down to the comment for a convenient implementation of multidimensional vectors.

  • »
    »
    20 months ago, # ^ | +5

    I had not realized that until GitHub Copilot tried to recommend adding vvvi to my template after I added vi and vvi for one-dimensional and two-dimensional vectors, and I thought that looked crappy and ugly.

  • »
    »
    20 months ago, # ^ | 0

    Why am I downvoted? lol

    • »
      »
      »
      20 months ago, # ^ | 0

      Because people actually use multidimensional vectors, and many consider it fine. Heck, I've even heard of someone using an 11-dimensional vector to solve a problem requiring 11-dimensional BFS (I know, that sounds totally weird, but it's real).

  • »
    »
    20 months ago, # ^ | +8

    you can apply a bit of C++ templates

    auto vec(auto elem, size_t sz, auto... dim) {
      if constexpr (sizeof...(dim) == 0) {
        return vector(sz, elem);
      } else {
        return vector(sz, vec(elem, dim...));
      }
    }
    

    so for example vec(1, 4, 2) will create a 4x2 vector filled with 1s

    • »
      »
      »
      20 months ago, # ^ | Rev. 2, 0

      thx a lot, this is in my snippet now.

    • »
      »
      »
      20 months ago, # ^ | 0

      Could you please help me: is there an implementation for lower GCC versions (C++14 or lower)? Many thanks.

      C++17 doesn't work on some dated online judges, though it works on my laptop. :)

      • »
        »
        »
        »
        20 months ago, # ^ | +10
        template <typename T> 
        auto vec(T elem, size_t sz) {
          return vector<T>(sz, elem);
        }
        
        template <typename T, typename... Args>
        auto vec(T elem, size_t sz, Args... dim) {
          auto ret = vec(elem, dim...);
          return vector<decltype(ret)>(sz, ret);
        }
        

        should work with C++14, I don't know how to do it for lower standards

»
20 months ago, # | +36

Arrays are convenient so people use them.

Some comments mention a speed difference. It mainly happens when you create a lot of small vectors (e.g. an array with $$$10^6$$$ vectors of size 0-3). Every vector causes a small time & memory overhead. The same happens when you create a multi-dimensional vector with the last dimension of size 2.

You shouldn't worry about it when you deal with a few huge vectors.

Also, push_back(x) is slightly less efficient than just accessing an array/vector element. It's good to resize or reserve the vector up front if possible.
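
For example (a sketch; the size is arbitrary):

#include <bits/stdc++.h>
using namespace std;

int main() {
    int n = 1000000;

    vector<int> v;
    v.reserve(n);                  // one allocation up front, so the
    for (int i = 0; i < n; i++)    // push_back loop never reallocates
        v.push_back(i);

    // Or size it once and use plain indexing:
    vector<int> w(n);
    for (int i = 0; i < n; i++)
        w[i] = i;
}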

  • »
    »
    20 months ago, # ^ | 0

    Most of the time it should be fine to use vectors, and it's just personal preference. However, vectors do cause unexpected TLEs sometimes, not because push_back() is inefficient, but precisely because it tries to be efficient and grows capacity in powers of two (4 when you need 3, 8 when you need 6, 2048 when you need 1500, you get the idea). Deques do not have this issue, as they allocate memory in buckets, but for the same reason they suffer from cache misses.

»
20 months ago, # | Rev. 2, +33

For me it's a matter of using different things for different ideas.

I use arrays for things which I know have a fixed size, while I use vectors for things which I know have variable length. In my head, these are quite different concepts, and coding them with different constructs tells me the nature (dynamic or fixed length) of each thing. For example, when I write vector<int> adj[100005], I can tell the first dimension is fixed (i.e. the number of nodes in the graph is fixed), while the second is dynamic (i.e. I don't know how many adjacent nodes there are until I read the input).

  • »
    »
    20 months ago, # ^ | +2

    I mean, how is this better than

    int n;
    cin >> n;
    vector<vector<int>> adj(n);
    

    in any sense?

    The only thing I can come up with is using a global variable adj and having to write vector<vector<int>> twice

    vector<vector<int>> adj;
    
    int main() {
        int n;
        cin >> n;
        adj = vector<vector<int>>(n);
    }
    

    For this, using graph = vector<vector<int>>; fits the bill perfectly, together with not using global variables when possible.
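
    For example (a small sketch of that alias in use; read_graph is a made-up helper):

    #include <bits/stdc++.h>
    using namespace std;
    using graph = vector<vector<int>>;

    graph read_graph(int n, int m) {
        graph adj(n);
        for (int i = 0; i < m; i++) {
            int u, v;
            cin >> u >> v;
            adj[u].push_back(v);
            adj[v].push_back(u);
        }
        return adj;
    }

    int main() {
        int n, m;
        cin >> n >> m;
        graph adj = read_graph(n, m);
    }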