-arma-'s blog

By -arma-, history, 7 years ago, In English

Hi CodeForces, in this post I want to explain what sqrt decomposition is, give you an example, and list some problems that can be solved with it.

The whole idea of sqrt decomposition is to reduce complexity by splitting the data or the queries into blocks and balancing the work between the blocks. For example, if I have an algorithm with O(N^2) complexity and another algorithm with O(k·N + N^2/k) complexity, where k is the size of the blocks I made, then by setting k to sqrt(N) I get an O(N·sqrt(N)) algorithm.

We'll solve a problem to make things concrete. The problem is this:

You are given an array A with n elements and q queries of 2 types: "? l r": print the sum of the values between l (inclusive) and r (exclusive), i.e. on [l, r); "+ l r x": increase all elements in [l, r) by x. Here n <= 1e5 and q <= 1e5. (By the way, you can solve this problem more easily with a segment tree or a Fenwick tree.)

Let's view A as n/k consecutive buckets, where each bucket (except possibly the last) has k elements. Then we can do the following: when a query covers a whole bucket we update/read a single variable kept for that bucket, and otherwise we touch the elements one by one. Each query therefore uses at most 2 incomplete buckets and at most n/k complete buckets, so the complexity is O(q·n/k + q·k). Since the product of q·n/k and q·k is constant, their sum is minimized when q·n/k = q·k, thus k = sqrt(n) and the complexity is O(q·sqrt(n)) (which is O(n·sqrt(n)) here, since q and n have the same bound).
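
To spell out the last step: by the AM-GM inequality $$$q\cdot\frac{n}{k} + q\cdot k \ge 2\sqrt{q\cdot\frac{n}{k}\cdot q\cdot k} = 2q\sqrt{n}$$$, with equality exactly when $$$q\cdot\frac{n}{k} = q\cdot k$$$, i.e. when $$$k = \sqrt{n}$$$. So no block size can do better than $$$O(q\sqrt{n})$$$ for this scheme, and $$$k = \sqrt{n}$$$ achieves it.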

There is sample code that does this here (and a sketch of the same idea is given below).
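
Since the original link may no longer work, here is a minimal sketch of that idea (my own code, not the author's linked implementation). It assumes 0-indexed positions, half-open ranges [l, r) as in the statement, and an input format of n and q followed by the n initial values and then the q queries; the block size is k = sqrt(n), with an up-to-date per-block sum and a per-block lazy "add x to everything" value.

```cpp
#include <bits/stdc++.h>
using namespace std;

int n, k;                     // array size and block size
vector<long long> a;          // element values, NOT including the block-wide lazy adds
vector<long long> blockSum;   // up-to-date sum of each block (lazy adds already counted)
vector<long long> blockAdd;   // pending "+x to every element of this block"

void rangeAdd(int l, int r, long long x) {     // add x to all positions in [l, r)
    while (l < r && l % k != 0) { a[l] += x; blockSum[l / k] += x; ++l; }          // left partial block
    while (l + k <= r) { blockAdd[l / k] += x; blockSum[l / k] += x * k; l += k; } // full blocks
    while (l < r) { a[l] += x; blockSum[l / k] += x; ++l; }                        // right partial block
}

long long rangeSum(int l, int r) {             // sum over [l, r)
    long long res = 0;
    while (l < r && l % k != 0) { res += a[l] + blockAdd[l / k]; ++l; }
    while (l + k <= r) { res += blockSum[l / k]; l += k; }
    while (l < r) { res += a[l] + blockAdd[l / k]; ++l; }
    return res;
}

int main() {
    int q;
    scanf("%d %d", &n, &q);
    k = max(1, (int)sqrt((double)n));
    a.assign(n, 0);
    blockSum.assign(n / k + 1, 0);
    blockAdd.assign(n / k + 1, 0);
    for (int i = 0; i < n; ++i) {
        scanf("%lld", &a[i]);
        blockSum[i / k] += a[i];
    }
    while (q--) {
        char op; int l, r;
        scanf(" %c %d %d", &op, &l, &r);
        if (op == '?') {
            printf("%lld\n", rangeSum(l, r));
        } else {
            long long x; scanf("%lld", &x);
            rangeAdd(l, r, x);
        }
    }
    return 0;
}
```

Each query walks through O(k) single elements and at most n/k whole blocks, which matches the O(q·n/k + q·k) bound above.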

Read 342E - Xenia and Tree, think about how you would solve it first, and then continue.

Think about how we could solve it if O(q^2) time were acceptable, for example.

Answer

Now let's try to improve the algorithm by partitioning the queries into groups of size k and see how that helps.

Answer

So now we have an O(n·q/k + q·k) algorithm; as we showed before, the optimum is k = sqrt(n), which makes the algorithm O(q·sqrt(n)).
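
Here is a minimal sketch of this batched approach for 342E (my own code, not the hidden answers above; the discussion in the comments below explains the idea in more detail). It assumes 342E's input format: n and the number of queries, then n-1 edges, then queries "1 v" (paint v red) or "2 v" (ask for the nearest red vertex). Vertex 1 starts red, every $$$k=\sqrt{n}$$$ paint operations we recompute the distance to the nearest red vertex for every vertex with a multi-source BFS, and a query additionally checks the red vertices painted since the last rebuild using an LCA-based distance.

```cpp
#include <bits/stdc++.h>
using namespace std;

const int LOG = 17;                    // 2^17 > 1e5, enough levels for binary lifting
int n, q, timer_ = 0;
vector<vector<int>> adj, up;           // adjacency list and binary-lifting ancestors
vector<int> dep, tin, tout, distRed;   // depth, Euler times, distance to nearest red vertex

void dfs(int v, int p) {               // recursive; a long chain may need an iterative version
    tin[v] = timer_++;
    up[v][0] = p;
    for (int j = 1; j < LOG; ++j) up[v][j] = up[up[v][j - 1]][j - 1];
    for (int u : adj[v]) if (u != p) { dep[u] = dep[v] + 1; dfs(u, v); }
    tout[v] = timer_++;
}
bool isAnc(int u, int v) { return tin[u] <= tin[v] && tout[v] <= tout[u]; }
int lca(int u, int v) {
    if (isAnc(u, v)) return u;
    if (isAnc(v, u)) return v;
    for (int j = LOG - 1; j >= 0; --j)
        if (!isAnc(up[u][j], v)) u = up[u][j];
    return up[u][0];
}
int treeDist(int u, int v) { return dep[u] + dep[v] - 2 * dep[lca(u, v)]; }

void rebuild(const vector<int>& red) { // multi-source BFS from every red vertex
    fill(distRed.begin(), distRed.end(), INT_MAX);
    queue<int> bfs;
    for (int v : red) { distRed[v] = 0; bfs.push(v); }
    while (!bfs.empty()) {
        int v = bfs.front(); bfs.pop();
        for (int u : adj[v])
            if (distRed[u] > distRed[v] + 1) { distRed[u] = distRed[v] + 1; bfs.push(u); }
    }
}

int main() {
    scanf("%d %d", &n, &q);
    adj.assign(n + 1, {});
    up.assign(n + 1, vector<int>(LOG));
    dep.assign(n + 1, 0); tin.assign(n + 1, 0); tout.assign(n + 1, 0);
    distRed.assign(n + 1, 0);
    for (int i = 0; i < n - 1; ++i) {
        int a, b; scanf("%d %d", &a, &b);
        adj[a].push_back(b); adj[b].push_back(a);
    }
    dfs(1, 1);
    int k = max(1, (int)sqrt((double)n));  // batch size, as derived above
    vector<int> red = {1}, recent;         // vertex 1 is red from the start
    rebuild(red);
    while (q--) {
        int type, v; scanf("%d %d", &type, &v);
        if (type == 1) {                   // paint v red
            red.push_back(v); recent.push_back(v);
            if ((int)recent.size() == k) { rebuild(red); recent.clear(); }
        } else {                           // distance from v to the nearest red vertex
            int ans = distRed[v];
            for (int u : recent) ans = min(ans, treeDist(u, v));
            printf("%d\n", ans);
        }
    }
    return 0;
}
```

Each rebuild is $$$O(n)$$$ and happens about $$$q/k$$$ times, and each query scans at most $$$k$$$ recent red vertices (each check costing an $$$O(\log n)$$$ LCA query), which matches the $$$O(n\cdot q/k + q\cdot k)$$$ analysis above up to that log factor.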

Here are some other problems that can be solved with sqrt decomposition: 44C - Holidays, 13E - Holes, 86D - Powerful array, 398D - Instant Messanger and 455D - Serega and Fun.

In the end, as you saw from the problems, there is no single type of problem that sqrt decomposition solves. Another thing you should notice is that your bucket size should not always be sqrt(n): say you have an algorithm that costs O(k + n·log n/k) per operation; as we showed before, the minimum is reached when k^2 = n·log n, i.e. k = sqrt(n·log n).
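
Concretely, taking $$$k=\sqrt{n}$$$ there would cost $$$\sqrt{n} + \sqrt{n}\log n = O(\sqrt{n}\log n)$$$ per operation, while $$$k=\sqrt{n\log n}$$$ costs only $$$2\sqrt{n\log n}$$$ per operation, a $$$\sqrt{\log n}$$$ factor better; the balancing argument is the same, only the two terms being balanced have changed.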


» 7 years ago

Thanks a lot :) Very nice implementation of sqrt decomposition.

» 7 years ago

Good tutorial.

» 7 months ago

"OK, we can rebuild after every k queries and keep, for each node, a variable with the answer as of the last multiple of k, using a BFS that starts from the red nodes."

Please explain this clearly... why don't you all remember that we are newbies? You should explain your approach clearly and be clear in your words... please explain.

    » » 7 months ago

    You're right. Sorry.

    So after k queries, I want to find the closest red node to each of the nodes in the tree. Let's reformulate this problem: we have a tree $$$T$$$ and we want to find $$$d_v$$$, the distance from $$$v$$$ to the closest red node, for every node $$$v$$$ of $$$T$$$.

    To solve this problem, I make an observation: the closest red node to a node $$$v$$$ is either the highest red node in $$$v$$$'s subtree or the red node closest to $$$v$$$'s parent (since in the latter case the path has to go through the parent).

    You can solve this linearly: first figure out the highest red node in each subtree with a DFS (return the highest red node observed to the parent, and take the highest over all the children).

    Then, for the root of $$$T$$$, the answer has to come from the highest red node in its subtree, as there are no other nodes available. For each of the root's children we can then find the answer, because we know both the highest red node in its subtree and the parent's answer. So with a second DFS or a BFS we can find the answer for all the nodes (a short sketch of this is at the end of this comment).

    Now that we know the answer with the red nodes up to query $$$ik$$$, we can answer query $$$ik + j$$$ by figuring out whether any of the $$$j$$$ new red nodes are closer than the last answer we computed for $$$T$$$.

    Feel free to write back if it's still unclear.
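
    Here is a small self-contained sketch (my code, not the commenter's) of the two passes described above, for a tree given as an adjacency list with a boolean red-mark per node; INF stands for "no red node in that direction".

    ```cpp
    #include <bits/stdc++.h>
    using namespace std;

    const int INF = 1e9;   // "no red node reachable this way"

    // Returns dist[v] = distance from v to the closest red node, via the two passes
    // described above: first the best red node inside each subtree (children before
    // parents), then pushing the parent's answer down.
    vector<int> nearestRed(const vector<vector<int>>& adj, const vector<bool>& red, int root) {
        int n = adj.size();
        vector<int> par(n, -1), order, down(n, INF), ans(n, INF);
        // iterative DFS to get a preorder and a parent array (avoids deep recursion)
        vector<int> st = {root};
        vector<bool> seen(n, false);
        seen[root] = true;
        while (!st.empty()) {
            int v = st.back(); st.pop_back();
            order.push_back(v);
            for (int u : adj[v]) if (!seen[u]) { seen[u] = true; par[u] = v; st.push_back(u); }
        }
        // pass 1 (bottom-up): closest red node inside each node's subtree
        for (int i = n - 1; i >= 0; --i) {
            int v = order[i];
            if (red[v]) down[v] = 0;
            if (par[v] != -1 && down[v] != INF)
                down[par[v]] = min(down[par[v]], down[v] + 1);
        }
        // pass 2 (top-down): the answer is the better of "inside my subtree"
        // and "my parent's answer plus one edge"
        for (int v : order) {
            ans[v] = down[v];
            if (par[v] != -1) ans[v] = min(ans[v], ans[par[v]] + 1);
        }
        return ans;
    }

    int main() {
        // tiny example: the path 0-1-2-3 with only node 3 red
        vector<vector<int>> adj = {{1}, {0, 2}, {1, 3}, {2}};
        vector<bool> red = {false, false, false, true};
        for (int d : nearestRed(adj, red, 0)) printf("%d ", d);   // prints: 3 2 1 0
        printf("\n");
    }
    ```

    The first pass needs children before parents and the second needs parents before children, which is why one preorder (and its reverse) is enough.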

      » » » 7 months ago

      Can you please explain in detail how you derived the time complexity? And you have written that naively we would have a time complexity of O(q^2)... how is that?

        » » » » 7 months ago

        Go watch Errichto's video.

        » » » » 7 months ago

        Let's say I'm dividing the queries into batches of $$$k$$$ and we have $$$q$$$ queries in total. After every batch I do $$$O(n)$$$ processing to update the closest red node for each node. And for every query I have at most $$$k$$$ new red nodes to compare with the most recent value from the last batch.

        So in total we have $$$q$$$ queries, each taking $$$O(k)$$$ time, and at each of the $$$q/k$$$ batches we do $$$O(n)$$$ processing. So it's going to be $$$O(qk + nq/k)$$$.

        To find the distance between two nodes of a tree, you can find their Least Common Ancestor (LCA; there are many tutorials on how to compute it in logarithmic time) and the rest is just simple calculations based on the nodes' depths. If you draw the path from each of the two nodes to the root, you can see that together the two paths contain the (unique) shortest path between the nodes, plus the part between the LCA and the root traversed twice.
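
        In formulas, with $$$\mathrm{depth}(v)$$$ denoting the distance from the root to $$$v$$$: $$$\mathrm{dist}(u, v) = \mathrm{depth}(u) + \mathrm{depth}(v) - 2\cdot \mathrm{depth}(\mathrm{lca}(u, v))$$$.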

          » » » » » 7 months ago

          OK... now I understand it... thanks for replying to a comment 6 years after its original post. Just one last thing to clarify... I understand the O(n) processing time via BFS after each batch... and the O(k) time (for each query, in the worst case) comes when, after each colour update of a node, we update the minimum distance for the (minimum-distance queries of) nodes in that batch... am I right? Please confirm...

            » » » » » » 7 months ago

            No worries.

            Didn't get what you mean.

            After each batch, the closest red node for each of the tree's nodes is updated (you need those to find the values anyway, I guess).

            For each query, you compare the last solution you have from the previous batch with all the new red nodes in the current batch.

              » » » » » » » 7 months ago

              Yes... I got it... I was trying to say the same thing: that after each query we update the solution of each blue node in the batch by comparing the previous solution with the new list of red nodes... thanks for taking so much of your precious time... sorry for being too dumb!!

                » » » » » » » » 7 months ago

                Also... please suggest some useful beginner-friendly and intermediate resources, videos and problems on sqrt decomposition... because there is so much on Codeforces that it becomes very difficult to filter the content.