RaresC's blog

By RaresC, 2 months ago

Hello everyone. I just wanted to share something with all of you that I thought of. I found it interesting, and I think some of you may find it interesting as well. There's quite a lot going on, and I'm not sure how useful it really is in the end, but it may be interesting nonetheless.

It will be useful to be familiar with segment trees. If not, I encourage you to read up on those.

We'll be able to answer range queries (for any type of query that a segment tree can support) in O(1) time, with O(n log n) preprocessing.

The Problem

I think it's important we take a little step back, and really think about what segment trees do at a more abstract level. We are given a list A = {a_1, a_2, ..., a_n}. As we move forward, we'll denote [i, k] ≡ {a_i, ..., a_k}. Our goal is to create a set P = {p_1, ..., p_m} where each element p_i is a contiguous sub-sequence of A. Answering a query (i, k) consists of finding the smallest subset X ⊆ P that partitions the range [i, k].

In actuality, the goal is to compute some function f(a_i, ..., a_k) for each query (i, k). So, how is partitioning related? We want to compute f(a_i, ..., a_k), and we have the additional restriction that f has the following property:

Given any two adjacent ranges [i, j] and [j + 1, k], the answer f(i, k) must be completely determined by the values of f(i, j) and f(j + 1, k). We say that we merge [i, j] and [j + 1, k] to produce [i, k]. So, if we are capable of finding a partition X of [i, k], we can merge all of the elements of X to compute the answer to our query. All we need to do is precompute the value of f(p) for all p ∈ P at the beginning.
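As a concrete illustration, here is a minimal sketch of the merge property, with f chosen as the range minimum (one of many functions that satisfy it; the names are mine, not from the post):

```python
# Sketch of the merge property, using range minimum as f.
# f over [i, k] is completely determined by f over [i, j] and f over [j+1, k].
def f(values):
    return min(values)

def merge(left, right):
    return min(left, right)

a = [5, 2, 7, 1, 9, 3]
# the answer for [0, 5] is the merge of the answers for [0, 2] and [3, 5]
assert merge(f(a[0:3]), f(a[3:6])) == f(a[0:6])
```

Sum, gcd, and matrix product work the same way; something like "the median of the range" does not, since it cannot be recovered from the two halves' medians alone.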

We haven't specified what P must be. However, we know that we want |P| to be small to reduce preprocessing time, but also for |X| to be small (for all possible queries) in order to reduce query time.

Segment Trees

This is an existing data structure, which I'm sure most of you are very familiar with, so I won't go into detail here. The only aspect we need from segment trees is the set of ranges which it precomputes. Below is a picture of a segment tree. If you haven't seen this before, it should give you a feel for what ranges are used.

Looking back at the partitioning idea we talked about, segment trees are a way of partitioning any range into O(log n) parts, and each element of A is contained in O(log n) elements of P.
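To make the set P concrete, here is a small sketch (my own convention: 0-indexed positions, n a power of two) enumerating the ranges a segment tree stores:

```python
def segment_tree_ranges(n):
    """All ranges [l, r] (inclusive) stored by a segment tree over n elements,
    where n is a power of two: one range of length n, two of length n/2, ..."""
    ranges = []
    length = n
    while length >= 1:
        for l in range(0, n, length):
            ranges.append((l, l + length - 1))
        length //= 2
    return ranges

P = segment_tree_ranges(8)
assert len(P) == 2 * 8 - 1                       # |P| = 2n - 1
assert sum(1 for l, r in P if l <= 5 <= r) == 4  # each element lies in log n + 1 ranges
```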

A Graph Construction

Let's create a graph G = (P, E), where each node in the vertex set, P, represents one of the ranges of a segment tree. Additionally, let p_i = [i_1, i_2] and p_k = [k_1, k_2]. Then (p_i, p_k) ∈ E if i_2 + 1 = k_1. More simply put, there is an edge between two nodes if the ranges they represent are adjacent. Moreover, this is a directed edge from the node that 'lies on the left' to the node that is on the 'right'.

We also define the length of a node to be the length of the range it represents, and we define a node's column by its left endpoint. So, we say that a node [i, k] is in column i.

Constructing Partitions

Any path in G is a partition. In order to answer a query [i, k], we need to find the shortest path from a node in column i to a node whose right endpoint is k. Something like BFS would work here, but we have a very structured graph with some nice properties which will make the task much simpler, as we'll see.

Here's an initial naive algorithm that finds an optimal partition. By optimal, we mean that there does not exist any other partition of [i, k] using fewer nodes.

Step 1: Find the longest node in column i completely contained in the query Q = [i, k]. Say it has length 2^j.

Step 2: Solve the sub-problem of finding an optimal partition for the query Q' = [i + 2^j, k]. Go to Step 1 until Q' is empty.
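The two steps above can be sketched as follows (0-indexed, assuming the tree is built over a power-of-two length, so a node of length 2^j exists exactly at the columns divisible by 2^j):

```python
def greedy_partition(i, k):
    """Steps 1-2: repeatedly take the longest segment-tree node starting at
    column i that is completely contained in the remaining query [i, k]."""
    parts = []
    while i <= k:
        length = 1
        # grow while a node twice as long still starts at i and still fits
        while i % (length * 2) == 0 and i + length * 2 - 1 <= k:
            length *= 2
        parts.append((i, i + length - 1))
        i += length  # Step 2: recurse on Q' = [i + 2^j, k]
    return parts

assert greedy_partition(3, 10) == [(3, 3), (4, 7), (8, 9), (10, 10)]
```

Note the node lengths in the example (1, 4, 2, 1): they rise and then fall, which foreshadows the observation in the next section.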

This is essentially finding the same partition that a segment tree would find, only doing so in a different manner. So, the size of the partition we find is also O(log n). Finding the longest node in a column will take us at most O(log n) time as well, since that is the height of each column. So, overall this is O(log^2 n) per query. This is worse than a segment tree, but don't worry, there's a quick fix.

Reduction to O(log n)

We need a small observation.

The optimal partition will consist of two sequences. The first contains nodes of increasing length, and the second contains nodes of decreasing length. To see why this is true, think about what happens if we need to start decreasing the length at some point. If a node of length 2^c is followed directly by a node of length 2^d, with d < c, then no node of length greater than or equal to 2^d will be chosen again. Think about it if it's not clear why. This implies that there is an increasing sequence followed by a decreasing one.

So, why does this help? Searching for the longest node in a column which is contained in the query becomes much simpler: in total, we will only traverse the graph up and down at most once. We can now find a partition in O(log n) time.

One More Observation

First, a definition. We call the longest node in a column a 'maximal' node.

Let's only look at the nodes in the increasing sequence of the partition. All elements, except perhaps the last one in the increasing sequence, are maximal nodes. This very specifically defines the path that we walk on the graph, at least for the increasing half. What about the decreasing sequence?
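With 0-indexed columns and n a power of two (my convention for these sketches; the post doesn't fix one), the maximal node of a column has a simple closed form: its length is the largest power of two dividing the column index, or the whole array for column 0.

```python
def maximal_len(i, n):
    """Length of the maximal node in column i of G (0-indexed, n a power of two)."""
    return n if i == 0 else i & -i  # i & -i extracts the lowest set bit of i

n = 16
assert maximal_len(0, n) == 16
assert [maximal_len(i, n) for i in range(1, 8)] == [1, 2, 1, 4, 1, 2, 1]
```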

The Decreasing Sequence: An Equivalent Problem

Create a graph G' with the same vertex set as G, and if (p_i, p_k) ∈ E in G, then (p_k, p_i) is an edge of G'. Essentially, we reverse the direction of the edges. We will also define columns differently: we say that a node [i, k] is in column k, not in column i as before.

Note that the increasing sequence in G' directly corresponds to the decreasing sequence in G. So, instead of finding the decreasing sequence in G, we find the increasing sequence in G'.
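Under the same 0-indexed convention, the maximal node of G' in column k (i.e., the longest segment-tree node ending at position k) also has a closed form: its length is the largest power of two dividing k + 1.

```python
def maximal_len_rev(k):
    """Length of the maximal node in column k of G' (0-indexed)."""
    return (k + 1) & -(k + 1)

assert maximal_len_rev(7) == 8    # the node [0, 7]
assert maximal_len_rev(11) == 4   # the node [8, 11]
assert maximal_len_rev(12) == 1   # the node [12, 12]
```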

One Final Observation

Every node is a maximal node in G or in G'. I won't prove this here since this post is long enough. However, here is a picture of the two graphs side by side, to more easily convince yourselves.
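In place of a proof, the claim is easy to check exhaustively for a small n (a sketch, again assuming 0-indexing and n a power of two):

```python
# Check for n = 16: every segment-tree node is maximal in G or in G'.
n = 16
nodes = [(l, l + length - 1)
         for length in (1, 2, 4, 8, 16)
         for l in range(0, n, length)]
for l, r in nodes:
    length = r - l + 1
    maximal_in_G = length == (n if l == 0 else l & -l)
    maximal_in_G_rev = length == ((r + 1) & -(r + 1))
    assert maximal_in_G or maximal_in_G_rev
```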

Let's clean up a little bit. We said that "All elements, except perhaps the last one in the increasing sequence, are maximal nodes". Now, we can say that

• All elements in the increasing sequence are maximal nodes in G and all elements in the decreasing sequence are maximal nodes in G'.

This only makes use of the previous observation. If the last element in the increasing sequence is not maximal in G, we can view it instead as the first element of the decreasing sequence, making it maximal in G'. So, we can represent any range query as the union of a path of increasing maximal nodes in G and a path of increasing maximal nodes in G'.

So, we have found a very nice property of the optimal partition. At each step, we know exactly which node to choose next. In fact, from the very beginning, we know exactly which nodes to choose: The maximal ones.

So, let's precompute the answer to all paths consisting of maximal nodes. How many of these paths are there? Well, there are n maximal nodes at which a path can begin, and each successive element in a maximal path has increasing length, so a path contains at most O(log n) nodes. We then have O(n log n) paths we need to precompute the answer for. We use a similar merging technique as in segment trees to do this. (Note that we initially need to compute the answer for each node in G.) We also need to proceed in the same fashion for G'.
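Here is a sketch of a single maximal path in G (0-indexed, n a power of two): starting from a column, each jump is by that column's maximal node length, so the lengths strictly increase and the path has at most log n + 1 nodes.

```python
def maximal_path(i, n):
    """Columns visited and node lengths used by the maximal path from column i."""
    cols, lens = [], []
    while i < n:
        length = n if i == 0 else i & -i  # maximal node length in column i
        cols.append(i)
        lens.append(length)
        i += length
    return cols, lens

cols, lens = maximal_path(5, 16)
assert cols == [5, 6, 8] and lens == [1, 2, 8]
assert lens == sorted(lens)  # lengths strictly increase along the path
```

Storing the answer for every prefix of every such path (in G and in G') gives the O(n log n) precomputed values mentioned above.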

Reduction to O(log log n)

To find the increasing sequence when partitioning a query [i, k], we only need to look at paths which begin in column i. So, when we store the answers to paths, we group them by left endpoint, and sort by increasing length. To find the longest path which is completely contained, we can now do a simple binary search.

The number of paths that begin at a specific column is at most O(log n). Using binary search, we then have O(log log n) time to find the increasing sequence. We perform an identical operation in G' to find the decreasing path.

A nice aspect of this approach is that we partition any query using exactly two precomputed ranges, no matter what n is. The only thing keeping us from having O(1) query time is the time it takes to actually find those two ranges. Let's work on that.

O(1) Query Time

The idea here is similar to that of sparse tables. All nodes in a path have lengths that are powers of two. So, we can precompute the largest power of two smaller than or equal to i, for each i = 1, 2, ..., n. This gives us an idea of where the last node in the path is.
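A sketch of that precomputation (the usual floor-log table from sparse tables):

```python
# Precompute floor(log2(x)) for x = 1..n; then the largest power of two
# <= any query length is 1 << logtbl[length], found in O(1).
n = 16
logtbl = [0] * (n + 1)
for x in range(2, n + 1):
    logtbl[x] = logtbl[x // 2] + 1

assert [1 << logtbl[x] for x in (1, 5, 8, 11, 16)] == [1, 4, 8, 8, 16]
```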

For any query [i, k], we find c such that 2^c is the largest power of two smaller than or equal to k - i + 1. There are now two cases: 1) The path starting in column i and ending in a node of length 2^c is completely contained in the query range. In this case, that's our answer. 2) It's not completely contained. Let's look at the path which ends at the node preceding the one of length 2^c. This path has length less than 2^c (it's clearly upper bounded by 2^0 + 2^1 + ... + 2^(c-1) < 2^c). Therefore, this path is completely contained in our query. So, that can be our answer.

Both cases above can be carried out in O(1) time. Proceed similarly for G'.
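Putting everything together, here is a self-contained sketch of the whole structure (0-indexed, n a power of two, f = min; all names are mine, not from the post). Preprocessing stores, for every column, the prefix answers along its maximal path in G and in G', plus the index of the first path node in each length class; a query then performs the two-case selection above and merges at most two precomputed answers.

```python
from functools import reduce

class RangeMinO1:
    """Sketch of the structure described above, specialized to f = min."""

    def __init__(self, a, merge=min):
        n = len(a)
        assert n > 0 and n & (n - 1) == 0, "sketch assumes n is a power of two"
        self.n, self.merge = n, merge
        self.logtbl = [0] * (n + 1)
        for x in range(2, n + 1):
            self.logtbl[x] = self.logtbl[x // 2] + 1
        # pref_left[i][t] = (end, ans): the first t+1 nodes of the maximal path
        # from column i cover [i, end - 1] with precomputed answer ans.
        # first_ge[i][c] = index of the first path node of length >= 2^c.
        self.pref_left, self.first_ge = [], []
        for i in range(n):
            cur, ans, prefs, lens = i, None, [], []
            while cur < n:
                length = n if cur == 0 else cur & -cur  # maximal node length
                block = reduce(merge, a[cur:cur + length])
                ans = block if ans is None else merge(ans, block)
                cur += length
                prefs.append((cur, ans))
                lens.append(length)
            self.pref_left.append(prefs)
            self.first_ge.append([next(t for t, L in enumerate(lens)
                                       if L >= 1 << c)
                                  for c in range(self.logtbl[n - i] + 1)])
        # pref_right[k] maps a start column s to the answer over [s, k],
        # for every prefix [s, k] of the maximal path of G' from column k.
        self.pref_right = []
        for k in range(n):
            cur, ans, table = k + 1, None, {}
            while cur > 0:
                length = cur & -cur
                block = reduce(merge, a[cur - length:cur])
                ans = block if ans is None else merge(block, ans)
                cur -= length
                table[cur] = ans
            self.pref_right.append(table)

    def query(self, i, k):
        """Answer for [i, k] (inclusive) by merging at most two stored answers."""
        c = self.logtbl[k - i + 1]
        t = self.first_ge[i][c]
        left_ans, m = None, i - 1  # G contributes [i, m] (possibly empty)
        if self.pref_left[i][t][0] - 1 <= k:  # case 1: path through node t fits
            m, left_ans = self.pref_left[i][t][0] - 1, self.pref_left[i][t][1]
        elif t > 0:                           # case 2: stop one node earlier
            m, left_ans = (self.pref_left[i][t - 1][0] - 1,
                           self.pref_left[i][t - 1][1])
        if m == k:
            return left_ans
        right_ans = self.pref_right[k][m + 1]  # the rest is a G' path prefix
        return right_ans if left_ans is None else self.merge(left_ans, right_ans)
```

The table lookup `t = self.first_ge[i][c]` is what replaces the binary search from the earlier section, which is where the O(log log n) turns into O(1).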

So, we have O(n log n) preprocessing time, and O(1) query time.

Conclusion

I hope all of you found this as interesting as I did. And possibly, you may find some applications that I've overlooked.

-Rares
