agrawal117's blog

By agrawal117, history, 4 years ago, In English

How do we solve this problem?

Given an array of N integers, we have to perform two types of queries:

1: L, R, K --> find the sum of the values in the range L to R that are less than K.

2: A, M --> update the value at index A to M, i.e. arr[A] = M.

N, Q <= 1e5.


»
4 years ago, # |
Rev. 2

Nvm I'm wrong

»
4 years ago, # |

Auto comment: topic has been updated by agrawal117 (previous revision, new revision, compare).


»
4 years ago, # |
Rev. 5

There are many ways of solving this problem with different time complexities:

First, with SQRT decomposition: let's partition the array into buckets of size $$$k$$$ (we will see soon which $$$k$$$-value to choose). On each bucket, keep a data structure that allows us to get the sum of some prefix of the values and also to update single points. This data structure can be a Binary Indexed Tree (also called a Fenwick tree) indexed by value. This consumes $$$O(n\cdot \frac n k)$$$ space. Queries of the first type take $$$O(\frac n k \cdot \log n + k)$$$ time each, and queries of the second type take $$$O(\log n)$$$ time each. We must choose the $$$k$$$-value that minimizes $$$\frac n k \cdot \log n + k$$$, which is $$$k=\sqrt{n\cdot \log n}$$$. Time complexity: $$$O(Q\cdot \sqrt{n\cdot \log n})$$$. Memory complexity: $$$< O(n\cdot \sqrt{n})$$$.

Second, replace the Binary Indexed Tree with a Balanced Binary Search Tree (BBST). We can easily extend a red-black tree, an AVL tree, or a treap to support the sum of the first $$$z$$$ elements. This gives the same time complexity, but let's look at the space: each of the $$$\frac n k$$$ buckets holds $$$O(k)$$$ elements, so it consumes $$$O(n)$$$ space. Time complexity: $$$O(Q\cdot \sqrt{n\cdot \log n})$$$. Space complexity: $$$O(n)$$$.

Third, replace the SQRT decomposition with a Segment Tree: now on each node of the segment tree we keep a BBST with the elements of the corresponding subarray. Since each leaf has $$$O(\log n)$$$ ancestors in the Segment Tree, this consumes $$$O(n\cdot\log n)$$$ space, and it takes only $$$O(\log^2 n)$$$ operations to perform either type of query. Time complexity: $$$O(Q\cdot \log^2 n)$$$. Space complexity: $$$O(n\cdot \log n)$$$.

In all these approaches I ignored the build time, because it can be seen as N updates, although in many cases we can build faster than doing N updates.

Also, there are some approaches with SQRT decomposition on queries.

  • »
    »
    4 years ago, # ^ |

    Can you elaborate more on the third approach? How do you calculate the sum for a given range from the BBST?

    • »
      »
      »
      4 years ago, # ^ |
      Rev. 2

      First, do you know how to maintain a BBST that supports asking for the sum of the elements $$$\le x$$$? To do this, we keep on each node of the BBST the sum of the elements below it. This is easy to update because $$$sum(u) = sum(left(u)) + sum(right(u)) + val(u)$$$. Now, to query the sum of the elements $$$\le x$$$, we do a binary-search traversal over the BBST: each time we move to the right child, we add to the answer the sum of the left child of the current node plus the current node's value. It's something like this:


      struct node {
          node *left;
          node *right;
          int sum, val;
      };

      int sum(node *u) {
          if (!u) return 0;
          return u->sum;
      }

      // get the sum of all elements <= x in u's subtree
      int get_sum(node *u, int x) {
          if (!u) return 0;
          if (u->val > x) return get_sum(u->left, x);
          return get_sum(u->right, x) + sum(u->left) + u->val;
      }

      The remaining part of the code is the actual implementation of your BBST.

      Now if you know how to do that, then just keep a BBST on each node of the segment tree.

      • »
        »
        »
        »
        4 years ago, # ^ |

        Okay, I understood your approach, but I still have some doubts about how things work when descending down the tree for a range [l, r]. If you have a link to any problem which requires this technique, please share it; that would be helpful.
        Thanks :)

»
4 years ago, # |
Rev. 2

Can you please share a link to the problem? I want to test my solution.

»
4 years ago, # |

You can solve this problem here: https://www.spoj.com/problems/RACETIME/

»
4 years ago, # |

The fastest (and easiest to implement) approach I know uses a 2D Fenwick tree. You have to know the points at which you will do your updates in advance, so it's an offline solution.

  • »
    »
    4 years ago, # ^ |

    Sir, will you please share your approach in detail?

    • »
      »
      »
      4 years ago, # ^ |
      Rev. 3

      Consider the matrix $$$M(x, y)$$$ where $$$M(x, y)$$$ is $$$y$$$ if $$$v_x = y$$$ and $$$0$$$ otherwise.

      Then, you can simulate update $$$pos$$$ $$$val$$$ by doing $$$M(pos, v_{pos}) -= v_{pos}$$$, $$$v_{pos} = val$$$, $$$M(pos, v_{pos}) += v_{pos}$$$.

      Similarly, a query $$$l$$$ $$$r$$$ $$$k$$$ is just $$$S(r, k) - S(l-1, k)$$$, where $$$S$$$ denotes (2D) prefix sum.

      By 2D Fenwick tree I mean this.

      Both of these operations are supported by a 2D Fenwick tree if you know all the update points in advance (basically, all possible values that $$$v_i$$$ can take during the process, for each $$$i$$$).

      • »
        »
        »
        »
        4 years ago, # ^ |
        Rev. 3

        Well yes, but this isn't fully online. However, I can replace the Segment Tree in my approach with a BIT, and this would be better. We can also replace the BBST with a BIT implemented over a map<int,int>, but this adds an extra $$$\log n$$$ factor to the time complexity.

        • »
          »
          »
          »
          »
          4 years ago, # ^ |

          I said it's not fully online at the beginning. Although, to be fair, in 90% of programming tasks offline solutions work just as well (if not better).

  • »
    »
    4 years ago, # ^ |

    You can also solve it in $$$O(n \log n)$$$ with a persistent segment tree and a similar approach: just preprocess all the updates; then the problem reduces to getting the sum of the numbers smaller than x in a given interval, with simple updates that can be managed with the PST.

    • »
      »
      »
      4 years ago, # ^ |

      How do you manage the updates with PST?

      • »
        »
        »
        »
        4 years ago, # ^ |

        Let's do the following approach. First, how to preprocess the updates: maintain a set S, which initially holds the triples (i, 0, A_i), and an array cnt[x], which stores the number of updates done so far to position x. When we see a new update to position u with value new_Au, we insert (u, ++cnt[u], new_Au) into the set. Finally, map all the values of the set so they become 1, 2, 3, ..., n+q.

        Then we build a persistent segment tree such that each version of the PST stores the information of a prefix of the mapped set. The queries become sum(r, x) - sum(l-1, x), where r and l-1 refer to the versions of the PST that store the last updates done to positions r and l-1 respectively; within a version, the sum is the sum over some prefix of the segment tree. An update (u, new_Au) subtracts A_u from version u + number_of_updates_done_to_u and adds new_Au to version u + number_of_updates_done_to_u + 1. These versions actually exist in our PST, so it is a simple point modification.

»
4 years ago, # |
Rev. 4

Edit: I think my solution will not work due to the changing value of K.

Given the constraints, I think you might be able to nuke this problem using Mo's algorithm with updates. I might be wrong, as I didn't give this solution much thought.

»
4 years ago, # |
Rev. 2

Here is my solution using sqrt decomposition, with time complexity $$$O(\sqrt{n} \cdot \log{\sqrt{n}})$$$ for 1) the count query and $$$O(\sqrt{n})$$$ for 2) the update.
The idea is simple: break the array into blocks of size $$$\sqrt{n}$$$ and sort each block individually. We store the elements as pairs so that every value has its index attached. To improve the runtime, I didn't use the modulo/divide operators to find blocks; instead I precomputed the block of each index in an array in linear time (again without using modulo/divide).

  • »
    »
    4 years ago, # ^ |

    Look at the first approach in this comment. You'll see that it is better to choose bucket size $$$= \sqrt{n\cdot\log n}$$$.

»
9 months ago, # |

Any simpler way to do this if there are no updates?

  • »
    »
    9 months ago, # ^ |

    Persistent segment tree — O(log n) per query.

    Merge sort tree — O(log^2 n) per query, with a good constant.