Sqrt-tree: answering queries in O(1) with O(n loglog n) preprocessing.

Revision en1, by gepardo, 2018-01-11 21:09:18

Hello, Codeforces!

Some time ago I invented an interesting data structure and I'd like to share it with the community. Maybe it was already known before; if you knew about it before this blog post, please share a link to the source.

# What can the data structure do?

Given an array $a$ that contains $n$ elements and an operation $op$ that satisfies some criteria:

1. Associative property: $(x \, op \, y) \, op \, z = x \, op \, (y \, op \, z)$.
2. Commutative property (optional, not required): $x \, op \, y = y \, op \, x$.

So, such operations as gcd, min, max, sum, multiplication, bitwise and/or/xor, etc. satisfy these conditions.

Also we have some queries $(l, r)$. For each query, we need to find $a_l \, op \, a_{l+1} \, op \, \dots \, op \, a_r$. Let's call such queries $q(l, r)$.

My data structure can process such queries in $O(1)$ time with $O(n \log \log n)$ preprocessing time and memory.

# How does it work?

## 1. Sqrt

Let's do a sqrt-decomposition: we divide our array into blocks, each of size $\sqrt{n}$. For each block, we compute:

1. Answers to the queries that lie in the block and begin at the beginning of the block (_prefix-op_)
2. Answers to the queries that lie in the block and end at the end of the block (_suffix-op_)

We'll also calculate one more thing:

1. $between[i, j]$ ($i \le j$) -- the answer to the query that begins at the start of block $i$ and ends at the end of block $j$. Note that we have $\sqrt{n}$ blocks, so the size of this array will be $O(\sqrt{n} \cdot \sqrt{n}) = O(n)$.

Let's see an example.

Let op be $+$ (we calculate the sum on a segment) and suppose we have the following array $a$:

1 2 3 4 5 6 7 8 9

It will be divided into three blocks: 1 2 3, 4 5 6 and 7 8 9.

For the first block, prefix-op is 1 3 6 and suffix-op is 6 5 3.

For the second block, prefix-op is 4 9 15 and suffix-op is 15 11 6.

For the third block, prefix-op is 7 15 24 and suffix-op is 24 17 9.

The between array is (the element in row $i$ and column $j$ is $between[i, j]$):

6 21 45
- 15 39
-  - 24


It's easy to see that these arrays can be calculated in $O(n)$ time and memory.

We can answer such queries using these arrays. If the query doesn't fit in one block, we can divide it into three parts: a suffix of some block, then a segment of contiguous blocks, and then a prefix of some block. We can answer the query by taking the op of a value from _suffix-op_, then a value from _between_, then a value from _prefix-op_. But if a query fits entirely into one block, we cannot process it using these three arrays. So, we need to do something else.

## 2. Tree

The only queries we cannot answer are those that fit entirely in one block. But what if we build the same structure as described above for each block? Yes, we can do it. And we do it recursively, until we reach a block size of 1 or 2. Answers for such blocks can be calculated easily in $O(1)$.

So, we get a tree. Each node of the tree represents some segment of the array. A node that represents a segment of size $k$ has $\sqrt{k}$ children -- one for each block. Also, each node contains the three arrays described above for the segment it represents. The root of the tree represents the entire array.

It's also easy to see that the height of this tree is $O(\log \log n)$: if some node of the sqrt-tree represents a segment of length $k$, then its children have length $\sqrt{k}$, so $\log k$ decreases two times on every layer of the tree. The time for building and the memory usage are $O(n \log \log n)$, because every element of the array appears exactly once on each layer of the tree.

Now we can answer the queries in $O(\log \log n)$. We go down the tree until we meet a segment of length 1 or 2 (the answer for it can be calculated in $O(1)$ time) or meet a segment in which our query doesn't fit entirely into one block. See the first section for how to answer the query in that case.

OK, now we can do $O(\log \log n)$ per query. Can it be done faster?

## 3. Optimizing the query complexity

One of the most obvious optimizations is to binary search for the node we need in the tree: whether a query fits into a single block is monotone over the layers, so binary search over the $O(\log \log n)$ layers gives $O(\log \log \log n)$ per query. Can we do it even faster?

The answer is yes. Let's assume the following two things:

1. Each block size is a power of two.
2. On each layer, all blocks have equal size.

To achieve this, we can pad our array with neutral elements so that its size becomes a power of two. When we do this, some block sizes may become twice as large to reach a power of two, but a block of a length-$k$ segment is still $O(\sqrt{k})$ in size, and the complexity of building the arrays in one block remains linear.

Now, we can easily check whether a query fits entirely into a block of size $2^k$. Let's write the bounds of the query, $l$ and $r$ (we use 0-indexing), in binary form. For instance, let's take $k = 4$, $l = 39$, $r = 46$. The binary representations of $l$ and $r$ are:

$l = 39_{10} = 100111_2$

$r = 46_{10} = 101110_2$

Remember that one layer contains segments of equal size, and the blocks on one layer also have equal size (in our case, their size is $2^k = 2^4 = 16$). The blocks cover the array entirely, so the first block covers elements $0$ to $15$ ($000000$ to $001111$ in binary), the second one covers elements $16$ to $31$ ($010000$ to $011111$ in binary), and so on. We see that the indices of positions covered by one block can differ only in the $k$ (in our case, 4) lowest bits.

So, we need to check that no more than the $k$ lowest bits differ (in other words, that $l \oplus r$ doesn't exceed $2^k - 1$).

Using this observation, we can find a layer suitable for answering the query quickly. Here is how:

1. For each $i$ that doesn't exceed the array size, we find the highest bit of $i$ that is equal to 1. To do this quickly, we use DP and a precalculated array.

2. Now, for each query $q(l, r)$, we find the highest bit of $l \oplus r$, and, using this information, it's easy to choose the layer on which we can process the query. We can also use a precalculated array here.

For more details, see the code below.

So, using this, we can answer the queries in O(1) each.

# Conclusion

We have a data structure that answers segment queries asymptotically very fast. However, its constant factor is rather big. Also, it doesn't support changing the elements as described, but it can be modified to perform single-element modification queries in roughly $O(\sqrt{n})$, at the cost of a worse query complexity. You can think about how to do this as an exercise.

https://paste.ubuntu.com/26367387/

If you have some suggestions on how to improve this data structure or how to modify it, or something is written unclear and needs more explanation, feel free to write in the comments.
