AnandOza's blog

By AnandOza, 3 years ago, In English

Hi all, AtCoder Beginner Contest 183 was today. I wrote an unofficial English editorial. Hope it helps!

A: ReLU

We simply write an if statement.

Runtime: $$$\mathcal{O}(1)$$$.

Sample code
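
A minimal sketch of this (an illustrative rewrite, not the sample code above):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    long long x;
    cin >> x;
    // ReLU: print x itself if it is positive, otherwise 0.
    cout << max(x, 0LL) << endl;
    return 0;
}
```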

B: Billiards

The trick is to reflect $$$G$$$ over the x-axis, so the desired path becomes a straight line. Then it's a matter of simple math to compute the intersection of that line with the x-axis.

Runtime: $$$\mathcal{O}(1)$$$.

Sample code
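
A minimal sketch of the reflection idea (illustrative, not the sample code above; it assumes the input is $$$S_x\ S_y\ G_x\ G_y$$$ with both points strictly above the x-axis):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    double sx, sy, gx, gy;
    cin >> sx >> sy >> gx >> gy;
    // Reflect G across the x-axis to G' = (gx, -gy). The bounce point is where
    // the segment from S to G' crosses y = 0, i.e. at parameter t = sy / (sy + gy).
    double t = sy / (sy + gy);
    printf("%.10f\n", sx + (gx - sx) * t);
    return 0;
}
```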

C: Travel

Because $$$N$$$ is so small, we can simply try all $$$(N-1)!$$$ orderings ($$$7! = 5040$$$), and it takes $$$O(N)$$$ time to evaluate each one. (Why $$$(N-1)!$$$? Well, we know we have to start at $$$1$$$ and end at $$$1$$$, so we can order the $$$N-1$$$ other cities in any order.)

Runtime: $$$\mathcal{O}(N!)$$$.

Sample code
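
A sketch of the brute force over orderings (illustrative; it assumes the input is $$$N$$$, $$$K$$$, then the $$$N \times N$$$ matrix of travel times, and counts routes whose total time equals $$$K$$$):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n;
    long long k;
    cin >> n >> k;
    vector<vector<long long>> t(n, vector<long long>(n));
    for (auto &row : t) for (auto &x : row) cin >> x;

    // Fix city 1 (index 0) as the start/end and try every ordering of the rest.
    vector<int> perm;
    for (int i = 1; i < n; i++) perm.push_back(i);

    int answer = 0;
    do {
        long long total = 0;
        int cur = 0;
        for (int city : perm) { total += t[cur][city]; cur = city; }
        total += t[cur][0];  // return to city 1
        if (total == k) answer++;
    } while (next_permutation(perm.begin(), perm.end()));

    cout << answer << endl;
    return 0;
}
```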

D: Water Heater

We can simulate the process over all periods of time, and check whether there exists a time such that the total water needed at that time is strictly greater than $$$W$$$.

To avoid checking every person at every time step, we can note that each person changes the "current water needed" exactly twice: they add $$$P_i$$$ at time $$$S_i$$$, and they subtract $$$P_i$$$ at time $$$T_i$$$. We can sort these $$$2N$$$ events by time (taking care to put the subtraction events before addition events at the same time, because $$$T_i$$$ is exclusive), then iterate through them.

Runtime: $$$\mathcal{O}(N \log N)$$$.

Sample code
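
A sketch of this sweep (illustrative; it assumes the input is $$$N$$$, $$$W$$$ followed by lines $$$S_i\ T_i\ P_i$$$, and prints "Yes" exactly when the capacity is never exceeded):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n;
    long long w;
    cin >> n >> w;
    // Each person contributes two events: +P_i at time S_i and -P_i at time T_i.
    vector<pair<long long, long long>> events;  // (time, delta)
    for (int i = 0; i < n; i++) {
        long long s, t, p;
        cin >> s >> t >> p;
        events.push_back({s, +p});
        events.push_back({t, -p});
    }
    // Sorting pairs places subtractions before additions at equal times
    // (negative delta < positive delta), which is exactly the tie-break we want.
    sort(events.begin(), events.end());
    long long cur = 0;
    bool ok = true;
    for (auto &e : events) {
        cur += e.second;
        if (cur > w) ok = false;
    }
    cout << (ok ? "Yes" : "No") << endl;
    return 0;
}
```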

E: Queen on Grid

We could solve this in $$$O(H \cdot W \cdot \max(H, W))$$$ using a classic DP (for each cell, traverse all cells that could have led to it -- this has $$$HW$$$ states, and the transition takes $$$O(\text{number of previous candidate cells})$$$, which is $$$O(\max(H,W))$$$).

However, because $$$H, W \le 2000$$$, this is too slow. Let's speed up the transition to $$$O(1)$$$, so this will run in time.

Let's just discuss how to optimize the transition for horizontal moves (it's the same for vertical and diagonal moves). The simplest way to do the transition (but too slow) is to loop over all cells to the left of the current cell and add their counts to the current cell's count. Instead, we track a running sum of those counts along the row (resetting it to zero at walls, since the queen cannot pass through them), which can be updated and read in $$$O(1)$$$ each, instead of $$$O(W)$$$.

Runtime: $$$\mathcal{O}(HW)$$$.

Sample code
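
A sketch of the $$$O(HW)$$$ DP with running sums (illustrative; it folds the running-sum updates into the main loop, so it is not line-for-line the same as the sample code):

```cpp
#include <bits/stdc++.h>
using namespace std;

const long long MOD = 1000000007;

int main() {
    int h, w;
    cin >> h >> w;
    vector<string> grid(h);
    for (auto &row : grid) cin >> row;

    // dp[i][j] = number of paths from (0,0) to (i,j).
    // horiz / vert / diag hold running sums of dp along the row / column /
    // diagonal ending at (i,j); they stay 0 at walls, which blocks the queen.
    vector<vector<long long>> dp(h, vector<long long>(w, 0));
    auto horiz = dp, vert = dp, diag = dp;

    for (int i = 0; i < h; i++) {
        for (int j = 0; j < w; j++) {
            if (grid[i][j] == '#') continue;
            if (i == 0 && j == 0) {
                dp[i][j] = 1;
            } else {
                long long ways = 0;
                if (j > 0) ways += horiz[i][j - 1];              // from the left
                if (i > 0) ways += vert[i - 1][j];               // from above
                if (i > 0 && j > 0) ways += diag[i - 1][j - 1];  // from up-left
                dp[i][j] = ways % MOD;
            }
            horiz[i][j] = ((j > 0 ? horiz[i][j - 1] : 0) + dp[i][j]) % MOD;
            vert[i][j] = ((i > 0 ? vert[i - 1][j] : 0) + dp[i][j]) % MOD;
            diag[i][j] = ((i > 0 && j > 0 ? diag[i - 1][j - 1] : 0) + dp[i][j]) % MOD;
        }
    }
    cout << dp[h - 1][w - 1] << endl;
    return 0;
}
```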

F: Confluence

The core idea is to combine two $$$O(NQ)$$$ solutions (each too slow on its own). Let's call type 1 queries "updates" and type 2 queries "queries".

First option: we can use a union-find data structure to maintain the groups. To query, loop over all members of class $$$y$$$ and check if they're in the same group as $$$x$$$. This takes $$$O(1)$$$ to process updates, but $$$O(\text{size of class})$$$ to process queries, which is up to $$$O(N)$$$.

Alternatively: we can use one union-find per class, to maintain the count of students from that class in each group. To query, simply output the already-computed count from that union-find. To update, we have to update all the union-finds, so this takes $$$O(\text{number of classes})$$$, which is up to $$$O(N)$$$. This also takes $$$O(N \cdot (\text{number of classes}))$$$ to initialize.

The key observation here is that the first solution is really good if we have small class sizes, and the second solution is really good if we have a small number of classes. As a result, we'll use a classic trick: use the first solution for small classes, and the second solution for big classes. If we define a "big" class as having at least $$$K$$$ students, the number of big classes is bounded by $$$N/K$$$. This means we can solve the problem in $$$O(N \cdot (N/K) + \max(K, N/K) \cdot Q)$$$. If we set $$$K = \sqrt{N}$$$, then we have a solution in $$$O((N+Q) \sqrt{N} )$$$, which is fast enough if you're careful with the constant factor.

Runtime: $$$\mathcal{O}((N + Q) \sqrt N)$$$.

Sample code
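
As an aside, small-to-large merging of per-group class counts (discussed in the comments below) also works. A minimal sketch of that alternative, for reference (it assumes the input format $$$N$$$, $$$Q$$$, the classes $$$C_i$$$, then queries of the form "1 a b" or "2 x y"):

```cpp
#include <bits/stdc++.h>
using namespace std;

// DSU over students; each root keeps a map class -> count, merged small-to-large.
struct DSU {
    vector<int> parent, sz;
    vector<map<int, long long>> classCount;
    DSU(int n, const vector<int>& cls) : parent(n), sz(n, 1), classCount(n) {
        iota(parent.begin(), parent.end(), 0);
        for (int i = 0; i < n; i++) classCount[i][cls[i]] = 1;
    }
    int find(int x) { return parent[x] == x ? x : parent[x] = find(parent[x]); }
    void unite(int a, int b) {
        a = find(a), b = find(b);
        if (a == b) return;
        if (sz[a] < sz[b]) swap(a, b);  // merge the smaller component into the larger
        for (auto& [c, cnt] : classCount[b]) classCount[a][c] += cnt;
        classCount[b].clear();
        parent[b] = a;
        sz[a] += sz[b];
    }
    long long query(int x, int c) {
        auto& m = classCount[find(x)];
        auto it = m.find(c);
        return it == m.end() ? 0 : it->second;
    }
};

int main() {
    int n, q;
    cin >> n >> q;
    vector<int> cls(n);
    for (auto& c : cls) cin >> c;
    DSU dsu(n, cls);
    while (q--) {
        int type, a, b;
        cin >> type >> a >> b;
        if (type == 1) dsu.unite(a - 1, b - 1);    // merge the groups of students a and b
        else cout << dsu.query(a - 1, b) << "\n";  // students of class b in a's group
    }
    return 0;
}
```

With union by size, each class-count entry is moved into a larger component's map at most $$$O(\log N)$$$ times, giving $$$O(N \log^2 N)$$$ overall with std::map (as discussed in the comments).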

You can see my submissions here as well, if you prefer that to reading the code in the blog.

Thanks for reading! Let me know if you have any questions or feedback for me.


»
3 years ago, # |

In Problem F, I think just a map will be sufficient and we only need small-to-large merging. The runtime will of course be $$$O(N \log^2 N)$$$.

  • »
    »
    3 years ago, # ^ |

    Hi, I solved F by this method, but I don't get why the time complexity is $$$O(N \log^2 N)$$$. Can you elaborate? I think it has something to do with union by size, but I can't figure it out.

    • »
      »
      »
      3 years ago, # ^ |

      Yes, exactly: if you do union by size, then each element is merged no more than $$$\log n$$$ times, so the merging does $$$O(n \log n)$$$ work in the worst case; because of the use of map, it is $$$O(n \log^2 n)$$$ here.

      • »
        »
        »
        »
        3 years ago, # ^ |

        I think that if we do the following, we can get $$$O(n \log n)$$$ instead of $$$O(n \log^2 n)$$$.

        Submission

        The small-to-large merging ensures each element is merged at most $$$O(\log n)$$$ times. However, for the class accesses, an amortized analysis shows that the total number of class accesses is at most $$$O(n)$$$, since the number of classes is at most the number of students.

    • »
      »
      »
      3 years ago, # ^ |

      If your hash table's access/modification operations cost $$$O(\log n)$$$, then with small-to-large merging you will do $$$O(n \log n)$$$ operations, and the final complexity is $$$O(n \log^2 n)$$$. C++ map has $$$O(\log n)$$$ operations, so there you have it. If you use C++ unordered_map, you should get amortized $$$O(1)$$$ per access/modification, and then the total is $$$O(n \log n)$$$.

      • »
        »
        »
        »
        3 years ago, # ^ |

        You meant expected instead of amortized :)

        • »
          »
          »
          »
          »
          3 years ago, # ^ |

          Is that so? I don't really know the difference so could you explain what's the difference between those two concepts?

          • »
            »
            »
            »
            »
            »
            3 years ago, # ^ |

            Well, hoping you're not trolling me (though I think you just might be, and I'll be a little sad if I wrote this for a troll), then sure.

            There's actually a big difference, and it's one of the concepts that CS students taking a DSA course usually struggle to differentiate (even right up to exam time :) ).

            Amortized analysis is analyzing the WORST CASE time of ANY SEQUENCE of n operations on some data structure.
            If there is only one operation A, the analysis is straightforward: the amortized cost of A is O(a) if and only if the WORST CASE time of n operations of type A is O(a*n). Notice this doesn't have to be tight, just like regular worst-case complexity.
            Once you have more operations in the DS, you must give all of them (or at least the ones you plan on using) an amortized cost! That's because you are looking at ANY sequence of operations on the DS (one that may use all of them). For example, in a Fibonacci Heap, Insert is O(1) amortized and Delete-Min is O(logn) amortized. That means any sequence of n operations with a insertions and b delete-mins will take at most O(a*1 + b*logn). (Notice a cool trick: you can change it so that Insert is O(logn) amortized and Delete-Min is O(1), assuming there are more insertions than deletions.)

            Now there is Expected (in terms of probability):
            Expected over what? What is the sample space? Basically, when analyzing hash tables we usually sample, well, hash functions! So very generally speaking, what's guaranteed is that the expected time per Insert/Search/Delete for any input, over all hash functions, is small.
            If you've been here some time (and you have), you've seen that hacks happen when you use unordered_set, because unless asked otherwise it uses the same hash function, ALWAYS.
            That means you can find collisions for this particular hash function (a collision means x, y such that h(x) = h(y)) and then make sure all the elements you insert go to the exact same cell in the hash table, so n inserts will take n^2 time.

            Therefore, from all that was written above, you can see it is not right to say that unordered_set will insert/delete in amortized O(1), because there exists an input for which n operations will take O(n^2), meaning the amortized cost is at least Omega(n), which is not O(1).

            For that reason, people here who have been burned before usually use a more "random" hash function that is created at runtime (or compile time?).
            This practically doesn't happen in python (the one thing python is better at than cpp by default) because it uses the address of the object in memory as the hash value, which is pretty random.

            Edit: If the data that's being hashed is random, the default cpp hash function does fine. That's why it's usually fine using it when there's no open hacking phase.
            Edit 2: when I wrote any sequence of n operations, it's important to note that the operations start with an empty DS!
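
            For reference, a minimal sketch of such a runtime-randomized hash (illustrative only; this is the widely shared splitmix64-based custom_hash pattern, not code from this thread):

            ```cpp
            #include <bits/stdc++.h>
            using namespace std;

            // A hash whose output depends on a value chosen at runtime, so an adversary
            // who only sees the source cannot precompute colliding keys in advance.
            struct custom_hash {
                static uint64_t splitmix64(uint64_t x) {
                    x += 0x9e3779b97f4a7c15;
                    x = (x ^ (x >> 30)) * 0xbf58476d1ce4e5b9;
                    x = (x ^ (x >> 27)) * 0x94d049bb133111eb;
                    return x ^ (x >> 31);
                }
                size_t operator()(uint64_t x) const {
                    static const uint64_t FIXED_RANDOM =
                        chrono::steady_clock::now().time_since_epoch().count();
                    return splitmix64(x + FIXED_RANDOM);
                }
            };

            int main() {
                // Used exactly like a normal unordered_map, just with the custom hash.
                unordered_map<long long, int, custom_hash> cnt;
                cnt[42]++;
                cout << cnt[42] << endl;
                return 0;
            }
            ```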

            • »
              »
              »
              »
              »
              »
              »
              3 years ago, # ^ |

              Oh, I see now. Thanks for taking the time to explain that. I certainly was not trolling you -- I did learn this in college only to forget it afterwards, so I'm glad you helped me refresh that knowledge.

  • »
    »
    3 years ago, # ^ |

    Wow, I'll have to try implementing that. I had to fiddle with my implementation for constant-factor purposes, so that sounds better, thanks!

  • »
    »
    3 years ago, # ^ |

    Did the same thing, which I think is simpler than the solution presented here. But the solution presented here is pretty cool!

»
3 years ago, # |

Can someone please explain why, in the sample code for problem E, we are doing "vert[i][j] = vert[i - 1][j]" (and the same for horiz and diag)? I mean, we add the previous ways here and then we also add dp[i][j] to them later. Why doesn't that result in double counting, since we are adding the number of ways of previous cells twice?

»
3 years ago, # |

Can you explain your E solution a little more? For example, why are you doing vert[i][j] = vert[i-1][j] and then at the end adding dp[i][j] to vert[i][j] again? Is that not double counting?

Also, how is the presence of walls next to a non-wall square handled? If you can elaborate more, that would be very helpful.

  • »
    »
    3 years ago, # ^ |

    It all boils down to the definitions of these things. We have that $$$dp_{i,j}$$$ is the number of ways to reach the square $$$(i,j)$$$ from the top left square. Define $$$vert_{i,j}$$$ as the sum of $$$dp_{i,j}, dp_{i-1,j}, dp_{i-2,j}, \ldots$$$ going up until you hit a wall or the border. Then you can see that $$$vert_{i,j}=vert_{i-1,j}+dp_{i,j}$$$ in normal cases (not involving walls or the border).

    If there is a wall at $$$(i',j')$$$, you just define $$$dp_{i',j'}$$$ and $$$vert_{i',j'}$$$ to be zero to ensure that the future calculations will be able to ignore squares beyond the wall when considering the squares you could come from to reach $$$(i,j)$$$.

  • »
    »
    3 years ago, # ^ |
    Handling of walls

    My submission.

    • »
      »
      »
      3 years ago, # ^ |

      Ok, now I understand. I did not pay attention to the analogy between a normal prefix sum and this problem. Thank you for helping.

»
3 years ago, # |

AnandOza, do you have a GitHub repository for your templates? If so, could you share the link?

»
3 years ago, # |

Hello guys, could anyone point out resources for small-to-large merging, and explain how its complexity is calculated? Any help would be appreciated.

»
3 years ago, # |

In problem D, I got TLE on my submission. It has a similar complexity to the tutorial, $$$O(n \log n)$$$. How can I optimize my submission?

  • »
    »
    3 years ago, # ^ |

    I think you have an invalid comparator. It's possible to have two items such that a < b and b < a are both true, so the sort function will have problems.