bicsi's blog

By bicsi, history, 4 years ago, In English

Hello!

Today I will present a data structure that I worked on for my Bachelor's thesis a while ago, called (by me) a PreTree (link to thesis). This is a type of "compressed" trie that is arguably easier to implement than a regular compressed trie, and it achieves merging in amortized $$$O(N L)$$$ complexity for $$$N$$$ keys of equal length $$$L$$$. I will present it as a segment tree, though, as that is easier to understand.

Prerequisites: segment trees and "sparse" segment trees (a.k.a. bitwise tries).

Main idea

We will keep a regular segment tree, with the twist that the root contains the minimum key. Basically, a PreTree on keys in the universe $$$[b, e)$$$ is a binary tree $$$T$$$ where either $$$T$$$ is empty, or the following conditions hold:

  1. $$$T$$$ holds the lowest key present in the range $$$[b, e)$$$
  2. $$$T$$$'s left son is a PreTree on values $$$[b, \frac{b + e}{2})$$$
  3. $$$T$$$'s right son is a PreTree on values $$$[\frac{b + e}{2}, e)$$$

Because $$$T$$$ defines a one-to-one mapping between tree vertices and keys, the memory complexity of $$$T$$$ is obviously $$$O(N)$$$.
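To make the definition concrete, here is one possible node layout in C++ (just a sketch; the field names are my own):

#include <cstdint>

// One node per key; no auxiliary fields are needed.
struct Node {
    int64_t key;
    Node *l = nullptr, *r = nullptr;
    explicit Node(int64_t k) : key(k) {}
};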

The name PreTree comes from it being a "pretty tree" (of course, beauty is subjective), and also from the fact that its pre-order traversal yields the keys in sorted order.

Implementation

In order to make the implementation easier, we will define a single big operation called "merging" two PreTrees. Suppose we have two trees $$$T1$$$ and $$$T2$$$. Merging the two trees is done by the following recursive procedure:

function Merge(T1, T2, b, e):
  if T1 is empty then return T2
  if T2 is empty then return T1

  m = (b + e) / 2
  ## Keep the smaller of the two keys at the root.
  if T1.k > T2.k then swap T1 and T2
  T1.l = Merge(T1.l, T2.l, b, m)
  T1.r = Merge(T1.r, T2.r, m, e)
  T2.l = T2.r = empty

  if T1.k != T2.k:
    ## T2 is now a singleton; insert it into the child
    ## whose range owns its key (duplicates are dropped).
    if T2.k >= m:
      T1.r = Merge(T1.r, T2, m, e)
    else:
      T1.l = Merge(T1.l, T2, b, m)
  return T1
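For concreteness, here is a direct C++ transcription of the pseudocode (a sketch using the Node layout from above; memory reclamation is left out):

#include <utility>  // std::swap

// Merge two PreTrees over the universe [b, e); returns the new root.
Node* Merge(Node* t1, Node* t2, int64_t b, int64_t e) {
    if (!t1) return t2;
    if (!t2) return t1;

    int64_t m = b + (e - b) / 2;
    if (t1->key > t2->key) std::swap(t1, t2);  // smaller key becomes the root
    t1->l = Merge(t1->l, t2->l, b, m);
    t1->r = Merge(t1->r, t2->r, m, e);
    t2->l = t2->r = nullptr;

    if (t1->key != t2->key) {
        // t2 is now a singleton; push it into the half that owns its key
        if (t2->key >= m) t1->r = Merge(t1->r, t2, m, e);
        else              t1->l = Merge(t1->l, t2, b, m);
    }
    return t1;  // if the keys were equal, the duplicate node t2 is simply dropped
}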

Insertion

In order to insert a key into the tree, just merge it with a singleton tree.
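As a sketch, reusing Merge and Node from above:

// Insert key k (assumed to lie in [b, e)) by merging with a singleton tree.
Node* Insert(Node* t, int64_t k, int64_t b, int64_t e) {
    return Merge(t, new Node(k), b, e);
}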

Erase

This is a bit more involved. Notice, however, that any PreTree $$$T$$$ is a min-heap on its keys: find the node holding the key, then pop it as you would in a heap.
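A possible sketch of this, under the same assumptions as above (PopMin is just an illustrative helper name):

// Remove the root's key heap-style: promote the smaller child's key
// (which is the new subtree minimum) and recurse into that child.
Node* PopMin(Node* t) {
    bool pick_left = t->l && (!t->r || t->l->key < t->r->key);
    Node*& c = pick_left ? t->l : t->r;
    if (!c) { delete t; return nullptr; }  // no children: just delete the node
    t->key = c->key;
    c = PopMin(c);
    return t;
}

// Erase key k by navigating ranges: any non-minimum key lives in the
// child whose half of [b, e) contains it.
Node* Erase(Node* t, int64_t k, int64_t b, int64_t e) {
    if (!t) return nullptr;
    if (t->key == k) return PopMin(t);
    int64_t m = b + (e - b) / 2;
    if (k < m) t->l = Erase(t->l, k, b, m);
    else       t->r = Erase(t->r, k, m, e);
    return t;
}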

Predecessors

In order to find the greatest key smaller than a given $$$k$$$, one can do the following:

function Smaller(T, k):
  if T is empty or T.k >= k:
    ## T.k is the subtree minimum, so no key here is smaller than k
    return NOT_FOUND
  ret = Smaller(T.r, k)    ## the larger keys live in the right subtree
  if ret is NOT_FOUND:
    ret = Smaller(T.l, k)
  if ret is NOT_FOUND:
    ret = T
  return ret

Note that a call that returns NOT_FOUND does $$$O(1)$$$ work (because $$$T.k$$$ is the subtree minimum), so the whole search takes $$$O(\log U)$$$ time.
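In C++, with the same layout as before, this could look like:

// Largest key strictly smaller than k, or nullptr if none exists.
Node* Smaller(Node* t, int64_t k) {
    if (!t || t->key >= k) return nullptr;  // subtree minimum is already >= k
    Node* ret = Smaller(t->r, k);           // the larger keys live on the right
    if (!ret) ret = Smaller(t->l, k);
    return ret ? ret : t;                   // fall back to the root's key
}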

Finding the smallest key greater than or equal to a given $$$k$$$ is also possible, albeit a little more complicated. The simplest way would be to additionally keep the maximum key of each subtree; I will only sketch the idea below.
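A rough sketch of one way to do it, assuming each node is augmented with a field mx holding the maximum key in its subtree (which Merge and Erase would then have to maintain):

// Smallest key >= k, or nullptr; the mx field lets failing calls return in O(1).
Node* Greater(Node* t, int64_t k) {
    if (!t || t->mx < k) return nullptr;  // prune: nothing >= k in this subtree
    if (t->key >= k) return t;            // the subtree minimum already qualifies
    Node* ret = Greater(t->l, k);         // prefer the smaller half first
    return ret ? ret : Greater(t->r, k);
}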

Performance

The Merge function does not seem too efficient at first glance. However, its amortized complexity is bounded by $$$O(N \log U)$$$, where $$$U$$$ is the size of the universe and $$$N$$$ is the total number of nodes. To see why that is, we have to look at the depth of each node in the tree.

Let's suppose $$$T1$$$ and $$$T2$$$ have distinct keys. Each non-trivial call to the Merge function increases the depth of at least one vertex: after each call, the root node with the higher key gets "demoted" one level downwards, increasing its depth by one. As the depth of a node can never exceed $$$O(\log U)$$$, the total complexity cannot exceed $$$O(N \log U)$$$.
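To phrase the same argument as a potential function: let $$$\Phi$$$ be the sum of the depths of all nodes, over all trees. Merging never decreases $$$\Phi$$$, every non-trivial call to Merge increases it by at least one, and $$$\Phi \le N \log_2 U$$$ at all times, so at most $$$N \log_2 U$$$ non-trivial calls can ever happen; trivial calls (where one argument is empty) take $$$O(1)$$$ time each and can be charged to the call that spawned them.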

Follow-up 1: Prove that this still holds for non-distinct keys!

Follow-up 2: What about erase operations? How do they affect the amortized analysis?

Follow-up 3: What about (Treap-like) splitting operations? Can you split $$$T$$$ into two trees, depending on some (monotonic) predicate? How does that affect the complexity?

Applications

One of the most standard applications of this is to improve the $$$O(N \log^2 N)$$$ bound on various problems involving the small-to-large trick. It has the additional benefit of $$$O(N)$$$ memory, without making the implementation much harder.
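To illustrate, consider a hypothetical subtree-values problem (the names and setup here are mine): each vertex of a rooted tree carries a value, and we want, for every vertex, the set of values in its subtree. Instead of small-to-large insertions, each child's set is merged in whole:

#include <cstdint>
#include <vector>

const int64_t U = 1LL << 30;          // key universe [0, U), a power of two
std::vector<std::vector<int>> g;      // adjacency lists of the rooted tree
std::vector<int64_t> val;             // value stored at each vertex
std::vector<Node*> root;              // root[v] = PreTree of v's subtree values

void Dfs(int v, int parent) {
    root[v] = new Node(val[v]);
    for (int u : g[v]) {
        if (u == parent) continue;
        Dfs(u, v);
        root[v] = Merge(root[v], root[u], 0, U);  // O(log U) amortized per node
    }
    // answer offline queries about v's subtree here, e.g. Smaller(root[v], k)
}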

Example problems:

Could you tell me about some other problems involving some form of sparse segment trees, and other tasks where this could come in handy, so I can add them to the blog?


»
4 years ago, # |

Commenting before a 14 yo Chinese boy comes saying "This is ... method, well known in China, made popular by ..."

»
4 years ago, # |

By the way, I just realized that this implementation can serve as a regular segment tree as well. See this solution (I have added the solution to the blog as well).

It is probably not nearly as useful in this scenario, but I guess it's a nice thing to have, especially if you want a top-down implementation of a segment tree with less memory (one that is also a tiny bit more cache-friendly, because the left son is basically always in the same cache line as the current vertex).

It's probably nothing new either; I think I've seen glimpses of tourist's segment tree implementation, and it uses a similar node layout.

  • »
    »
    4 years ago, # ^ |

    Can you tell me where I can learn about the sparse segment trees / bitwise tries mentioned in the prerequisites?

    • »
      »
      »
      4 years ago, # ^ |

      Imagine keeping a segment tree over the whole universe of values (e.g. $$$[0, 10^9]$$$ or even something like $$$[-10^9, 10^9]$$$). This would require too much memory; however, you can create nodes only as you update them.

      This data structure is conceptually equivalent to implementing a trie over the binary alphabet $$$\Sigma = $$$ { $$$0, 1$$$ }, where the "strings" are the bitwise representations of the integers (in MSB to LSB order).

      I haven't found an article that explains this more thoroughly; maybe other people can point to some resources.
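      A minimal sketch of the idea (point updates with lazy node creation; the code and names are just for illustration):

      #include <cstdint>

      // K updates over a universe of size U allocate only O(K log U) nodes.
      struct SNode {
          int64_t sum = 0;
          SNode *l = nullptr, *r = nullptr;
      };

      // Add delta at position pos; [b, e) is the range this node covers.
      void Add(SNode*& t, int64_t pos, int64_t delta, int64_t b, int64_t e) {
          if (!t) t = new SNode();  // materialize the node on first touch
          t->sum += delta;
          if (e - b == 1) return;
          int64_t m = b + (e - b) / 2;
          if (pos < m) Add(t->l, pos, delta, b, m);
          else         Add(t->r, pos, delta, m, e);
      }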

»
4 years ago, # |

The blog heading is a bit misleading, since the complexity is $$$O(N \log(U))$$$, while $$$O(N \log(N))$$$ is stated.

  • »
    »
    4 years ago, # ^ |

    I guess what I wanted to emphasise is that you can obtain sorted sets + the small-to-large merging trick in $$$O(N \log N)$$$ instead of $$$O(N \log^2 N)$$$. I think most of the time large keys can be compressed down to a universe of size $$$O(N)$$$, but I did not concern myself with that in the blog post. While I agree that the title is a bit misleading, I think adding an extra variable to the title would make it a bit more confusing.

»
4 years ago, # |

It seems to me that you just made merging of sparse segment trees a bit more complicated, without gaining any additional capabilities. The point is that you don't need to store $$$k$$$ to merge sparse trees, because merge and split are supported by default with $$$O(N \log U)$$$ amortized complexity.

UPD. gamegame already pointed to the blog which shows that.

  • »
    »
    4 years ago, # ^ |

    I assume that the key advantage of this structure over merging segment trees is the memory reduction from $$$O(N \log U)$$$ to $$$O(N)$$$. Also, it looks like this structure supports all treap operations but has a smaller constant, considering $$$U = N$$$. So basically this structure takes the advantages of both treaps and merging segment trees. bicsi, can you clarify how to do split? I'm a bit stuck on this.

    • »
      »
      »
      4 years ago, # ^ |

      Yes, my key point here is that it has $$$O(N)$$$ memory, and that it turns out to be one of the fastest data structures that I know of to implement by hand for BBST operations. And it's especially useful when you need to augment the DSU data structure with these sets (as in the 'small-to-large' trick). It also seems to share some interesting structure with skew binary number systems and the linear-memory binary lifting here.

      Regarding splitting, it's hard. First of all, it's easy to see that in this scenario, for a given set of keys, the tree that represents them is unique. However, if you have the full tree and you split it, the minimum value greater than the splitting value has to go to the root of the second tree, which decreases the depths of $$$O(\log U)$$$ nodes. In the worst case this may yield a total potential increase of $$$O(\log^2 U)$$$, so the data structure would achieve $$$O(\log^2 U)$$$ instead of $$$O(\log U)$$$ complexity. What you could do instead is keep 'phantom nodes' in the tree, but this makes it similar to regular merging segment trees.

      Overall: splitting is not too great (unless you are okay with $$$O(\log^2 U)$$$ time complexity or $$$O(N \log U)$$$ memory). If you have ideas on how to modify the structure to allow it, I'm open to them.

»
4 years ago, # |

yeah boy!