agrawal117's blog

By agrawal117, history, 4 weeks ago, In English,

How do I solve this problem?

Given an array of N integers, we have to perform two types of queries:

1: L, R, K --> find the sum of the values in the range L to R that are less than K.

2: A, M --> update the value at index A to M, i.e. arr[A] = M.

N, Q <= 1e5.


»
4 weeks ago, # |

Nvm I'm wrong

  • »
    »
    4 weeks ago, # ^ |

    I don't think it works, or I don't get you.

  • »
    »
    4 weeks ago, # ^ |

    This won't work, as the value of K is not the same for each query.

»
4 weeks ago, # |

Auto comment: topic has been updated by agrawal117 (previous revision, new revision, compare).


»
4 weeks ago, # |

There are many ways of solving this problem with different time complexities:

First, with SQRT decomposition: let's partition the array into buckets of size $$$k$$$ (we will see soon which $$$k$$$-value to choose). Let's keep on each bucket a data structure that allows us to get the sum of some prefix of the values and also update single points. This data structure can be a Binary Indexed Tree (also called a Fenwick Tree). This consumes $$$O(n\cdot \frac n k)$$$ space. Queries of the first type take $$$O(\frac n k \cdot \log n + k)$$$ time each, and queries of the second type take $$$O(\log n)$$$ time each. We must choose the $$$k$$$-value that minimizes $$$\frac n k \cdot \log n + k$$$, which is $$$k=\sqrt{n\cdot \log n}$$$. Time complexity: $$$O(Q\cdot \sqrt{n\cdot \log n})$$$. Memory complexity: $$$< O(n\cdot \sqrt{n})$$$.
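A minimal sketch of this first approach (the class name and the 0-based interface are mine, and it assumes values are already compressed to the range $$$1..V$$$; with values up to $$$10^9$$$ you would coordinate-compress the initial values and all update values first):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Buckets of size B, each bucket keeps a Fenwick tree indexed by value,
// storing sums of the elements currently in that bucket.
// Assumes all values (initial and updated) lie in 1..V.
struct SqrtSumLess {
    int n, B, V;
    vector<long long> a;            // current array values
    vector<vector<long long>> fen;  // one Fenwick per bucket, indexed by value

    SqrtSumLess(const vector<long long>& init, int maxValue)
        : n(init.size()), V(maxValue), a(init) {
        B = max(1, (int)sqrt((double)n * max(1.0, log2((double)n + 1))));
        fen.assign((n + B - 1) / B, vector<long long>(V + 1, 0));
        for (int i = 0; i < n; i++) fenAdd(i / B, a[i], a[i]);
    }
    void fenAdd(int b, long long v, long long delta) {
        for (; v <= V; v += v & -v) fen[b][v] += delta;
    }
    long long fenQuery(int b, long long v) const { // sum of values <= v in bucket b
        long long s = 0;
        for (; v > 0; v -= v & -v) s += fen[b][v];
        return s;
    }
    // sum of a[i] for l <= i <= r with a[i] < k   (0-based l, r)
    long long query(int l, int r, long long k) const {
        long long res = 0;
        int bl = l / B, br = r / B;
        if (bl == br) {
            for (int i = l; i <= r; i++) if (a[i] < k) res += a[i];
            return res;
        }
        for (int i = l; i < (bl + 1) * B; i++) if (a[i] < k) res += a[i];
        for (int i = br * B; i <= r; i++)      if (a[i] < k) res += a[i];
        for (int b = bl + 1; b < br; b++)
            res += fenQuery(b, min<long long>(k - 1, V));
        return res;
    }
    void update(int pos, long long val) {  // a[pos] = val
        fenAdd(pos / B, a[pos], -a[pos]);
        a[pos] = val;
        fenAdd(pos / B, val, val);
    }
};
```

For example, on the array {5, 1, 4, 2, 3}, query(0, 4, 4) returns 1 + 2 + 3 = 6.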

Second, replacing the Binary Indexed Tree by a Balanced Binary Search Tree (BBST): we can easily extend a Red-Black Tree, an AVL, or a Treap to support the sum of the first $$$z$$$ elements. This gives us the same time complexity, but let's look at the space. Each of the $$$\frac n k$$$ buckets holds $$$O(k)$$$ elements, so this consumes $$$O(n)$$$ space in total. Time complexity: $$$O(Q\cdot \sqrt{n\cdot \log n})$$$. Space complexity: $$$O(n)$$$.

Third, replacing SQRT decomposition by a Segment Tree: now on each node of the segment tree we keep a BBST with the elements of the corresponding subarray. Since each leaf has $$$O(\log n)$$$ ancestors in the Segment Tree, this consumes $$$O(n\cdot\log n)$$$ space, and now it takes only $$$O(\log^2 n)$$$ operations to perform either type of query. Time complexity: $$$O(Q\cdot \log^2 n)$$$. Space complexity: $$$O(n\cdot \log n)$$$.

In all these approaches I ignored the build time, since it can be seen as $$$N$$$ updates, although in many cases we can build faster than that.

Also, there are some approaches with SQRT decomposition on queries.

  • »
    »
    4 weeks ago, # ^ |

    Can you elaborate more on the 3rd approach? How are you calculating the sum for a given range from the BBST?

    • »
      »
      »
      4 weeks ago, # ^ |

      First, do you know how to keep a BBST that supports asking for the sum of elements $$$\le x$$$? To do this, we keep on each node of the BBST the sum of the elements in its subtree. This is easy to maintain because $$$sum(u) = sum(left(u)) + sum(right(u)) + val(u)$$$. Now, to query the sum of elements $$$\le x$$$, we do a binary-search traversal of the BBST: whenever the current node's value is $$$\le x$$$, we add to the answer the sum of its left subtree plus its own value and move to the right child; otherwise we move left. It's something like this:


      struct node {
          node *left, *right;
          int sum, val;      // sum = sum of all values in this subtree
      };

      int sum(node *u) { return u ? u->sum : 0; }

      // sum of all elements <= x in u's subtree
      int get_sum(node *u, int x) {
          if (!u) return 0;
          if (u->val > x) return get_sum(u->left, x);
          return get_sum(u->right, x) + sum(u->left) + u->val; // count u itself too
      }

      The remaining part of the code is the actual implementation of your BBST.

      Now if you know how to do that, then just keep a BBST on each node of the segment tree.

      • »
        »
        »
        »
        4 weeks ago, # ^ |

        Okay, I understood your approach, but I still have some doubts about how things work when descending the tree for a range [l, r]. If you have a link to any problem that requires this technique, please share it; that would be helpful.
        Thanks :)

        • »
          »
          »
          »
          »
          4 weeks ago, # ^ |

          Well, this problem can be solved with Divide and Conquer on trees + Segment Tree + BBST.

»
4 weeks ago, # |

Can you please share a link to the problem? I want to test my solution.

»
4 weeks ago, # |

You can solve this problem here: https://www.spoj.com/problems/RACETIME/

»
4 weeks ago, # |

The fastest (and easiest to implement) approach I know uses a 2D Fenwick tree. You have to know the points at which you do your updates in advance, so it's an offline solution.

  • »
    »
    4 weeks ago, # ^ |

    Sir, will you please share your approach in detail?

    • »
      »
      »
      4 weeks ago, # ^ |

      Consider the matrix $$$M(x, y)$$$ where $$$M(x, y)$$$ is $$$y$$$ if $$$v_x = y$$$ and $$$0$$$ otherwise.

      Then, you can simulate update $$$pos$$$ $$$val$$$ by doing $$$M(pos, v_{pos}) -= v_{pos}$$$, $$$v_{pos} = val$$$, $$$M(pos, v_{pos}) += v_{pos}$$$.

      Similarly, a query $$$l$$$ $$$r$$$ $$$k$$$ is just $$$S(r, k) - S(l-1, k)$$$, where $$$S$$$ denotes (2D) prefix sum.

      By 2D Fenwick tree I mean this.

      Both of these operations are supported by a 2D Fenwick tree if you know all the update points in advance (basically, all possible values that $$$v_i$$$ will take during the process, for each $$$i$$$).
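A sketch of what such a structure could look like (my own illustrative code, not the commenter's: an outer Fenwick tree over positions where each node keeps an inner Fenwick tree over the values that can ever occur in its range, collected in advance from the initial array and the update list):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Offline 2D Fenwick: O(n log n) memory because every (position, value)
// pair is registered in only O(log n) outer nodes.
struct Fenwick2D {
    int n;
    vector<vector<long long>> vals; // per outer node: sorted possible values
    vector<vector<long long>> fen;  // per outer node: inner Fenwick (sums)

    // init: starting array; updates: (1-based pos, new value), known in advance
    Fenwick2D(const vector<long long>& init,
              const vector<pair<int,long long>>& updates) : n(init.size()) {
        vals.assign(n + 1, vector<long long>());
        auto touch = [&](int pos, long long v) {
            for (int i = pos; i <= n; i += i & -i) vals[i].push_back(v);
        };
        for (int p = 0; p < n; p++) touch(p + 1, init[p]);
        for (auto& [p, v] : updates) touch(p, v);
        fen.assign(n + 1, vector<long long>());
        for (int i = 1; i <= n; i++) {
            sort(vals[i].begin(), vals[i].end());
            vals[i].erase(unique(vals[i].begin(), vals[i].end()), vals[i].end());
            fen[i].assign(vals[i].size() + 1, 0);
        }
        for (int p = 0; p < n; p++) add(p + 1, init[p], init[p]);
    }
    void add(int pos, long long value, long long delta) {
        for (int i = pos; i <= n; i += i & -i) {
            int j = lower_bound(vals[i].begin(), vals[i].end(), value)
                    - vals[i].begin() + 1;
            for (; j < (int)fen[i].size(); j += j & -j) fen[i][j] += delta;
        }
    }
    long long prefix(int pos, long long k) const { // sum over positions <= pos of values < k
        long long s = 0;
        for (int i = pos; i > 0; i -= i & -i) {
            int j = lower_bound(vals[i].begin(), vals[i].end(), k) - vals[i].begin();
            for (; j > 0; j -= j & -j) s += fen[i][j];
        }
        return s;
    }
    long long query(int l, int r, long long k) const { // sum of a[l..r] < k, 1-based
        return prefix(r, k) - prefix(l - 1, k);
    }
    void set(int pos, long long oldValue, long long newValue) {
        add(pos, oldValue, -oldValue);
        add(pos, newValue, newValue);
    }
};
```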

      • »
        »
        »
        »
        4 weeks ago, # ^ |

        Well yes, but this isn't fully online. However, I can replace the Segment Tree in my approach with a BIT, which would be better. We can also replace the BBST with a BIT implemented with map<int,int>, but this adds an extra $$$\log n$$$ factor to the time complexity.
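For reference, a sketch of the map-based-BIT variant mentioned here (illustrative code with names of my own; it assumes values lie in $$$[1, 10^9]$$$, stays fully online, and pays the extra log factor in both time and the map's memory constants):

```cpp
#include <bits/stdc++.h>
using namespace std;

const long long MAXV = 1'000'000'000; // assumed value bound

// Fenwick tree over the value domain, stored sparsely in a std::map so
// only touched indices use memory.
struct MapBIT {
    map<long long, long long> t;
    void add(long long v, long long d) { for (; v <= MAXV; v += v & -v) t[v] += d; }
    long long sumLess(long long k) const {           // sum of stored values < k
        long long s = 0;
        for (long long v = min(k - 1, MAXV); v > 0; v -= v & -v) {
            auto it = t.find(v);
            if (it != t.end()) s += it->second;
        }
        return s;
    }
};

// Segment tree over positions; every node holds a MapBIT for its subarray.
struct SegTree {
    int n;
    vector<long long> a;
    vector<MapBIT> node;
    SegTree(const vector<long long>& init) : n(init.size()), a(init), node(4 * n) {
        for (int i = 0; i < n; i++) insert(1, 0, n - 1, i, a[i], a[i]);
    }
    void insert(int x, int lo, int hi, int pos, long long v, long long d) {
        node[x].add(v, d);
        if (lo == hi) return;
        int mid = (lo + hi) / 2;
        if (pos <= mid) insert(2 * x, lo, mid, pos, v, d);
        else insert(2 * x + 1, mid + 1, hi, pos, v, d);
    }
    long long query(int x, int lo, int hi, int l, int r, long long k) const {
        if (r < lo || hi < l) return 0;
        if (l <= lo && hi <= r) return node[x].sumLess(k);
        int mid = (lo + hi) / 2;
        return query(2 * x, lo, mid, l, r, k) + query(2 * x + 1, mid + 1, hi, l, r, k);
    }
    long long query(int l, int r, long long k) const { return query(1, 0, n - 1, l, r, k); }
    void update(int pos, long long val) {
        insert(1, 0, n - 1, pos, a[pos], -a[pos]);
        a[pos] = val;
        insert(1, 0, n - 1, pos, val, val);
    }
};
```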

        • »
          »
          »
          »
          »
          4 weeks ago, # ^ |

          I said at the beginning that it's not fully online. Although, to be fair, in 90% of programming tasks offline solutions work just as well (if not better).

  • »
    »
    4 weeks ago, # ^ |

    You can also solve it in $$$O(n\log n)$$$ with a persistent segment tree and a similar approach: just preprocess all the updates. Then the problem reduces to getting the sum of the numbers smaller than $$$x$$$ in a given interval, with simple updates that can be managed with the PST.

    • »
      »
      »
      4 weeks ago, # ^ |

      How do you manage the updates with PST?

      • »
        »
        »
        »
        4 weeks ago, # ^ |

        Let's do the following approach. First, how to preprocess the updates: maintain a set $$$S$$$, which initially contains the values $$$(i, 0, A_i)$$$, and an array $$$cnt[x]$$$, which stores the number of updates done so far to position $$$x$$$. When we see a new update to position $$$u$$$, we insert $$$(u, ++cnt[u], newA_u)$$$ into the set. Finally, map all the values of the set so they become $$$1, 2, 3, \dots, n+q$$$.

        Then build a persistent segment tree such that each version stores the information of a prefix of the mapped set. A query becomes $$$sum(r, x) - sum(l-1, x)$$$, where $$$r$$$ and $$$l-1$$$ refer to the versions of the PST that store the last updates done to positions $$$r$$$ and $$$l-1$$$ respectively; within a version, the sum is the sum of some prefix of the segment tree. An update $$$(u, A_u)$$$ means: subtract the old $$$A_u$$$ from version $$$u$$$ + (number of updates done to $$$u$$$), and add the new $$$A_u$$$ to version $$$u$$$ + (number of updates done to $$$u$$$) + 1. These versions actually exist in our PST, so it's a simple point modification.

»
4 weeks ago, # |

Edit: I think my solution will not work due to the changing value of K.

Given the constraints, I think you might be able to nuke this problem using Mo's algorithm with updates. I might be wrong, as I didn't give this solution much thought.

»
4 weeks ago, # |

Here is my solution using sqrt decomposition, with time complexity 1) $$$O(\sqrt{n}\cdot \log{\sqrt{n}})$$$ per count query and 2) $$$O(\sqrt{n})$$$ per update.
The idea is simple: break the array into blocks of size $$$\sqrt{n}$$$ and sort each block individually. We store the elements as (value, index) pairs so that every value keeps its index attached. To improve the runtime I didn't use the modulo/divide operators for finding blocks; I precomputed them in an array in linear time (again without using modulo/divide).
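A hedged sketch of this sorted-blocks scheme (my own code, not the commenter's; for simplicity it rebuilds a whole block on update with a full sort, which is $$$O(\sqrt n \log n)$$$ rather than the stated $$$O(\sqrt n)$$$ — inserting the single changed element into the sorted block would achieve $$$O(\sqrt n)$$$):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Each block of size ~sqrt(n) keeps its elements sorted together with
// prefix sums, so a full block answers "sum of elements < k" with one
// binary search; an update rebuilds only its own block.
struct SortedBlocks {
    int n, B;
    vector<long long> a;
    vector<vector<long long>> sorted, pref; // per block: sorted values, prefix sums

    SortedBlocks(const vector<long long>& init) : n(init.size()), a(init) {
        B = max(1, (int)sqrt((double)n));
        int nb = (n + B - 1) / B;
        sorted.resize(nb); pref.resize(nb);
        for (int b = 0; b < nb; b++) rebuild(b);
    }
    void rebuild(int b) {
        int l = b * B, r = min(n, l + B);
        sorted[b].assign(a.begin() + l, a.begin() + r);
        sort(sorted[b].begin(), sorted[b].end());
        pref[b].assign(sorted[b].size() + 1, 0);
        for (size_t i = 0; i < sorted[b].size(); i++)
            pref[b][i + 1] = pref[b][i] + sorted[b][i];
    }
    long long query(int l, int r, long long k) const { // sum of a[l..r] < k, 0-based
        long long res = 0;
        int bl = l / B, br = r / B;
        if (bl == br) {
            for (int i = l; i <= r; i++) if (a[i] < k) res += a[i];
            return res;
        }
        for (int i = l; i < (bl + 1) * B; i++) if (a[i] < k) res += a[i];
        for (int i = br * B; i <= r; i++)      if (a[i] < k) res += a[i];
        for (int b = bl + 1; b < br; b++) {
            int cnt = lower_bound(sorted[b].begin(), sorted[b].end(), k)
                      - sorted[b].begin();
            res += pref[b][cnt]; // sum of the cnt smallest elements, all < k
        }
        return res;
    }
    void update(int pos, long long val) {
        a[pos] = val;
        rebuild(pos / B);
    }
};
```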

  • »
    »
    4 weeks ago, # ^ |

    Look at the first approach in this comment. You'll see that it is better to choose bucket size $$$= \sqrt{n\cdot\log n}$$$.