### brunomont's blog

By brunomont, history, 3 months ago,

Hello, Codeforces!

In this blog I will describe yet another useless unconventional data structure. This time, I will show how to maintain an extended suffix array (suffix array, inverse of suffix array, and $lcp$ array) while being able to add or remove some character at the beginning of the string in $\mathcal{O}(\log n)$ amortized time.

It can be used to overkill-solve problem M from the 2020-2021 ACM-ICPC Latin American Regionals, which my team couldn't solve during the contest. I learned the technique from here (Chinese), but that source does not show how to maintain the $lcp$ array.

Here is the full code of the data structure. To read this blog, I recommend ignoring this code for now, but use it if you have questions about helper functions, constructors, etc., since I will gloss over some implementation details in the explanations.

Full code

## 1. Definitions

I will assume the reader is familiar with the definition of suffix array ($SA$) and $lcp$ array. If that is not the case, I recommend learning it from Codeforces EDU.

Here, the $i$-th position of the $lcp$ array will denote the length of the longest common prefix between suffixes $SA[i]$ and $SA[i-1]$. Also, $lcp[0] = 0$. The inverse suffix array ($ISA$), or rank, of a suffix gives the position of that suffix in the $SA$. That is, $ISA[SA[i]] = i$.
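To make these definitions concrete, here is a naive reference construction (nothing like the data structure in this blog, just a baseline; all names here are illustrative):

```cpp
#include <algorithm>
#include <cassert>
#include <numeric>
#include <string>
#include <vector>
using namespace std;

// Naive O(n^2 log n) construction, only to make the definitions concrete.
struct extended_sa { vector<int> sa, isa, lcp; };

extended_sa build_naive(const string& s) {
    int n = s.size();
    extended_sa r;
    r.sa.resize(n); r.isa.resize(n); r.lcp.assign(n, 0);
    iota(r.sa.begin(), r.sa.end(), 0);
    sort(r.sa.begin(), r.sa.end(), [&](int i, int j) {
        return s.substr(i) < s.substr(j);   // compare suffixes directly
    });
    for (int i = 0; i < n; i++) r.isa[r.sa[i]] = i;  // ISA[SA[i]] = i
    // lcp[i] = longest common prefix of suffixes SA[i-1] and SA[i], lcp[0] = 0
    for (int i = 1; i < n; i++) {
        int a = r.sa[i-1], b = r.sa[i];
        while (a < n && b < n && s[a] == s[b]) a++, b++, r.lcp[i]++;
    }
    return r;
}
```

For $baabac$ this produces $SA = [1, 2, 4, 0, 3, 5]$ and $lcp = [0, 1, 1, 0, 2, 0]$.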

In the data structure, since we add characters at the front of the string, we will use a reversed index representation of each suffix. For example, the suffix of reversed index 2 of $baabac$ is $bac$ (the suffix of length 3). We will use a function $\texttt{mirror}$ to go from the standard index representation to the reversed one, and vice versa.

Mirror function

## 2. Binary search tree

Let's represent the $SA$ using a binary search tree, such that the in-order traversal of the tree gives the $SA$. For example, take the $SA$ of $baabac$ below. For each suffix, its (reversed) index and $lcp$ are represented.

We could represent this using the following binary search tree.

Additionally, we will store an array $tag$, such that $tag[i] < tag[j]$ if, and only if, suffix $i$ comes before suffix $j$ in the $SA$ (equivalently, in the binary search tree). Using this, we can compare two suffixes lexicographically in constant time. This is pivotal to make the complexity of the operations $\mathcal{O}(\log n)$ amortized.

Because we want to maintain the $tag$ array, we will not use a binary search tree that does rotations, since it is not trivial to update the $tag$ values during the rebalancing. Instead, we will use amortized weight-balanced trees (see below for references). The idea of these trees is that, at any point, every subtree is $\alpha$-balanced for some $0.5 < \alpha < 1$, which means that for every subtree rooted at node $x$, $\max(\text{size}(x.left), \text{size}(x.right)) \leq \alpha \cdot \text{size}(x)$. It turns out that if we insert and delete using the trivial binary search tree algorithms and, after every change, for every node that is not $\alpha$-balanced we simply rebuild the entire subtree rooted at that node so that it becomes as balanced as possible, then both the height of the tree and the amortized cost of inserting and deleting are $\mathcal{O}(\log_{\frac{1}{\alpha}} n)$. Proving this is beyond the scope of this blog. In the code, I use $\alpha = 2/3$.
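The check-and-rebuild step can be sketched like this (the node layout is illustrative, not the exact one from the full code): an unbalanced subtree is flattened in-order and rebuilt as balanced as possible.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>
using namespace std;

// Sketch of the alpha-balance rule and the rebuild, with alpha = 2/3.
struct node {
    node *l = nullptr, *r = nullptr;
    int size = 1;  // subtree size
};

int sz(node* x) { return x ? x->size : 0; }

// a node violates the invariant if one child holds more than 2/3 of its subtree
bool unbalanced(node* x) {
    return 3 * max(sz(x->l), sz(x->r)) > 2 * sz(x);
}

// flatten the subtree in-order...
void flatten(node* x, vector<node*>& out) {
    if (!x) return;
    flatten(x->l, out); out.push_back(x); flatten(x->r, out);
}
// ...then rebuild it perfectly balanced from nodes a[lo..hi)
node* build(vector<node*>& a, int lo, int hi) {
    if (lo >= hi) return nullptr;
    int mid = (lo + hi) / 2;
    a[mid]->l = build(a, lo, mid);
    a[mid]->r = build(a, mid + 1, hi);
    a[mid]->size = hi - lo;
    return a[mid];
}
node* rebuild(node* x) {
    vector<node*> a;
    flatten(x, a);
    return build(a, 0, (int)a.size());
}
```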

This rebuilding scheme makes it easy to maintain the $tag$ values by having a large enough interval of possible values (I use $[0, 10^{18}]$) and recursively dividing it in half (refer to the code for more details).
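A sketch of how the tags can be assigned during a rebuild, assuming an illustrative node layout: the subtree root takes the midpoint of its allowed interval and the children recurse on the two halves, so an in-order traversal sees strictly increasing tags.

```cpp
#include <cassert>
using namespace std;

// Illustrative node layout; the real code stores tags per suffix.
struct node { node *l = nullptr, *r = nullptr; long long tag = 0; };

const long long LIM = 1000000000000000000LL;  // tag values in [0, 10^18]

// Assumes the interval is large enough for the subtree, which holds
// because the tree is rebalanced and the interval supports ~60 levels.
void assign_tags(node* x, long long lo, long long hi) {
    if (!x) return;
    long long mid = lo + (hi - lo) / 2;
    x->tag = mid;
    assign_tags(x->l, lo, mid - 1);   // everything to the left gets a smaller tag
    assign_tags(x->r, mid + 1, hi);   // everything to the right gets a larger tag
}
```

Comparing two suffixes lexicographically is then a single comparison of their tags.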

Given that we have a binary search tree representing the $SA$ and $lcp$ array with logarithmic height, how do we query for $SA$, $lcp$ and $ISA$? If we maintain the subtree sizes of the tree, we simply go down the tree to get the $i$-th smallest node and so we get $SA[i]$ and $lcp[i]$ in $\mathcal{O}(\log n)$, if $n$ is the size of the string. To get $ISA[i]$, we go down the tree, counting how many nodes in the tree have $tag$ value smaller than $tag[i]$.

Code for ISA, SA and lcp

Finally, we can also use the tree to query for the $lcp$ between two arbitrary suffixes $i$ and $j$ (not necessarily consecutive in the $SA$). We know that this is equal to the minimum between the $lcp$ values in the range defined by the two suffixes. So we just need to store the minimum over all $lcp$ values on every subtree, and query for the minimum over all nodes that have $tag$ value in the range $(tag[i], tag[j]]$, assuming $tag[i] < tag[j]$. This can be done by going down the tree similar to a segment tree, in $\mathcal{O}(\log n)$ time.

Code for arbitrary suffix lcp query

## 3. Adding a character

Since we add or remove characters at the front of the string, each update adds or removes a single suffix of the string (equivalently, a single node in the binary search tree), and the other suffixes remain unchanged and keep their relative order. Below we see represented in red what changes in the $SA$ and $lcp$ after adding $b$ to $aabac$.


So, to update the structure with the new suffix, we just need to find the position where the new suffix ends up, compute its $lcp$ value, and possibly update the $lcp$ value of the suffix that comes just after the added suffix.

### 3.1. Comparing the new suffix with another suffix

If we can compare the new suffix with any other suffix, we can use this to go down the tree and figure out the position we need to insert the new suffix. To make this comparison, we use the following trick: we compare the first character of the suffixes. If they are different, we know which one is smaller lexicographically. If they are the same, then we end up with a comparison between two suffixes that are already in the tree, and we know how to compare them! Just use their $tag$ value.

Code for comparing the new suffix with some other suffix in the tree

### 3.2. Getting the $lcp$ values

To get the $lcp$ value for the new suffix and for the node that goes after it (since its $lcp$ value might change), we will use the same trick: if the first characters of the suffixes are different, then their $lcp$ is zero. Otherwise, we are left with an $lcp$ query between two suffixes that are already in the tree, which we already know how to answer: just call the $\texttt{query}$ function.

Code for getting the lcp between the new suffix and some other suffix

Now that we know how to compare the new suffix with any suffix in the tree, and how to get the $lcp$ between the new suffix and any suffix in the tree, we are ready to insert it: go down the tree to the correct position, insert the new suffix, compute its $lcp$, and recompute the $lcp$ of the next node in the tree.

Code for inserting a new suffix

## 4. Removing a character

Removing a suffix is relatively straightforward: go down the tree, using the $tag$ array, to find the node of the suffix we want to remove. When found, just remove it using the standard binary search tree deletion algorithm.

Now note that the $lcp$ value of the node that comes after the removed node might be incorrect. Luckily, it is easy to get its correct value: it is the minimum between the current value and the $lcp$ value of the suffix we are removing. This follows from the fact that the $lcp$ between two suffixes is the minimum over the $lcp$ values of the range defined by them in the $SA$.

Code for removing a suffix

Note that I don't do the rebalancing/rebuilding on node deletion. Because of this, the height of the tree won't always be logarithmic in the current size of the string; however, it will still be logarithmic in the maximum size of the string so far, so this is not a problem.

## 5. Problems

I only know of one problem where using this data structure makes sense.

Solution

## 6. Final remarks

This data structure is relatively simple and yet very powerful. Not even suffix trees and suffix automata, which can update their structure under character additions, are capable of rollbacks without breaking their complexity. So this scores yet another point for suffix arrays being the best suffix structure.

This data structure is not too fast, but it is not super slow either. It runs in about 3 times the time of my static $\mathcal{O}(n \log n)$ suffix array implementation.

I would like to thank arthur.nascimento for creating the problem 103185M - May I Add a Letter?, which made me look into this data structure. Also tdas for showing me the existence of it.

## References

Explanation of the data structure (without maintaining the $lcp$ array) (Chinese): Link


By brunomont, history, 12 months ago,

Hello, Codeforces!

This blog is heavily inspired by TLE's blog on using segment tree merging to solve problems about sorted lists. I don't know exactly how well known this data structure is, but I thought it would be nice to share it anyway, along with some more operations that are possible with it.

## What it can do

We want a data structure that we can think of as a set/multiset of non-negative integers, or even as a sorted array. It should support all of the following operations:

1. Create an empty structure;
2. Insert an element to the structure;
3. Remove an element from the structure;
4. Print the $k$'th smallest element (if we think of the structure as a sorted array $a$, we are asking for $a[k]$);
5. Print how many numbers less than $x$ there are in the set (similar to lower_bound in a std::vector);
6. Split the structure into two: one containing the $k$ smallest elements, and the other containing the rest;
7. Merge two structures into one (no extra conditions required).

It turns out that, if all elements are not greater than $N$, we can perform any sequence of $t$ such operations in $\mathcal{O}(t \log N)$, meaning that each operation costs $\mathcal{O}(\log N)$ amortized (as we will see, all operations except (7) are worst-case $\mathcal{O}(\log N)$).

## How it works

We are going to use a dynamic segment tree to represent the elements. Think of each element as an index, a leaf on a segment tree. We want the segment tree to be dynamic (we only create the nodes when we need them).

So, initially, there are no nodes on the structure (grey nodes are not present in the structure):

If we insert the value $2$, we need to create the path from the root to the leaf that represents $2$. It is also useful to store, on each node, the number of created leaves in that node's subtree.

Let's also add $1$, $4$ and $5$. In the end, the tree will look like this:

Using this representation, it is very straightforward to implement operations (1), (2) and (3). Operations (4) and (5) are also easy; they are classic segtree operations. To do operation (6), we can go down the tree, recursing into either the left or the right child. Here is pseudocode for this operation:

node* split(node*& i, int k) {
    if (!k or !i) return NULL;
    node* ret = new node();
    if (i is a leaf) {
        i->cnt -= k; // assuming multiset: k occurrences go to the new structure
        ret->cnt += k;
    } else {
        int left_cnt = i->left ? i->left->cnt : 0; // the left child may not exist
        if (k <= left_cnt) ret->left = split(i->left, k); // split the left
        else { // take everything from the left and split the right
            ret->left = i->left;
            ret->right = split(i->right, k - left_cnt);
            i->left = NULL;
        }
        // recompute the counts of both roots from their children
        i->cnt = (i->left ? i->left->cnt : 0) + (i->right ? i->right->cnt : 0);
        ret->cnt = (ret->left ? ret->left->cnt : 0) + (ret->right ? ret->right->cnt : 0);
    }
    return ret;
}


It is clear that all these operations cost $\mathcal{O}(\log N)$. For operation (6), note that we only recurse into either the left or the right child, never both.
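For concreteness, operations (2) and (4) might look like this on the dynamic segment tree (node layout and names are illustrative):

```cpp
#include <cassert>
using namespace std;

struct node {
    node *left = nullptr, *right = nullptr;
    int cnt = 0;  // number of created leaves (with multiplicity) in this subtree
};

const int N = 1 << 20;  // all values assumed to be in [0, N)

// operation (2): insert value x, creating nodes only along one root-to-leaf path
void insert(node*& i, int x, int lo = 0, int hi = N - 1) {
    if (!i) i = new node();
    i->cnt++;
    if (lo == hi) return;
    int mid = (lo + hi) / 2;
    if (x <= mid) insert(i->left, x, lo, mid);
    else insert(i->right, x, mid + 1, hi);
}

// operation (4): k-th smallest element, 0-indexed (assumes k < i->cnt)
int kth(node* i, int k, int lo = 0, int hi = N - 1) {
    if (lo == hi) return lo;
    int mid = (lo + hi) / 2;
    int left_cnt = i->left ? i->left->cnt : 0;
    if (k < left_cnt) return kth(i->left, k, lo, mid);
    return kth(i->right, k - left_cnt, mid + 1, hi);
}
```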

But what about operation (7)? It turns out that we can merge two structures in the following way: to merge the subtrees defined by nodes l and r, if one of them is empty, just return the other. Otherwise, recursively merge l->left and r->left, and l->right and r->right. After that, we can delete either l or r, because they are now redundant: we only need one of them.
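A sketch of operation (7) following this description (with minimal insert/kth helpers repeated so the snippet is self-contained; names are illustrative):

```cpp
#include <cassert>
using namespace std;

struct node {
    node *left = nullptr, *right = nullptr;
    int cnt = 0;
};

const int N = 1 << 20;  // all values assumed to be in [0, N)

void insert(node*& i, int x, int lo = 0, int hi = N - 1) {
    if (!i) i = new node();
    i->cnt++;
    if (lo == hi) return;
    int mid = (lo + hi) / 2;
    if (x <= mid) insert(i->left, x, lo, mid);
    else insert(i->right, x, mid + 1, hi);
}
int kth(node* i, int k, int lo = 0, int hi = N - 1) {
    if (lo == hi) return lo;
    int mid = (lo + hi) / 2, lc = i->left ? i->left->cnt : 0;
    if (k < lc) return kth(i->left, k, lo, mid);
    return kth(i->right, k - lc, mid + 1, hi);
}

// operation (7): either tree is empty in O(1), otherwise merge the
// children recursively and delete the now-redundant node
node* merge(node* a, node* b) {
    if (!a) return b;
    if (!b) return a;
    a->cnt += b->cnt;                    // counts add up at every level
    a->left = merge(a->left, b->left);
    a->right = merge(a->right, b->right);
    delete b;                            // only the node itself, not its subtrees
    return a;
}
```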

Now let's see why any sequence of $t$ operations takes $\mathcal{O}(t \log N)$ time. The number of nodes we create is bounded by $\mathcal{O}(t \log N)$, because each operation creates at most $\mathcal{O}(\log N)$ nodes. Now note that the merge algorithm either returns in $\mathcal{O}(1)$ or deletes one node. So the total number of times the algorithm doesn't return in $\mathcal{O}(1)$ is bounded by the total number of created nodes, which is $\mathcal{O}(t \log N)$.

## Implementation

There are some implementation tricks that make the code easier to write and use. Firstly, we don't need to set $N$ as a constant. Instead, we can have $N = 2^k-1$ for some $k$, and if we want to insert an element larger than $N$, we can just increase $k$, and update the segment tree (we will only need to create a new root, and set its left child to the old root).
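A sketch of this trick (names are illustrative): keep $N = 2^k$, and when an element $x \geq N$ arrives, stack new roots on top of the old one; the old root becomes the left child because the old range is the lower half of the new one.

```cpp
#include <cassert>
using namespace std;

struct node { node *left = nullptr, *right = nullptr; int cnt = 0; };

struct sorted_multiset {
    node* root = nullptr;
    long long N = 2;  // current value range is [0, N)

    void grow(long long x) {
        while (x >= N) {
            if (root) {
                node* nroot = new node();
                nroot->left = root;      // old range = lower half of new range
                nroot->cnt = root->cnt;
                root = nroot;
            }
            N *= 2;
        }
    }
    void insert(long long x) {
        grow(x);
        node** i = &root;
        long long lo = 0, hi = N - 1;
        while (true) {
            if (!*i) *i = new node();
            (*i)->cnt++;
            if (lo == hi) return;
            long long mid = (lo + hi) / 2;
            if (x <= mid) { i = &(*i)->left;  hi = mid; }
            else          { i = &(*i)->right; lo = mid + 1; }
        }
    }
};
```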

Also, it's easy to change between set and multiset representation. It is also easy to insert $x$ occurrences of an element at once, just increase the cnt variable of the respective leaf accordingly.

This implementation uses $\mathcal{O}(t \log N)$ of memory, for $t$ operations.

Code
How to use

## Memory optimization

Another way to think about this data structure is as a trie. If we want to insert numbers 1, 2, 4, and 5, we can think of them as 001, 010, 100, and 101. After inserting them, it will look like this:

Using this idea, it is possible to represent this trie in compressed form, just like a Patricia trie or radix trie: we only need to store the non-leaf nodes that have 2 children. So we only need to store nodes that are the LCA of some pair of leaves. If we have $t$ leaves, that is at most $2t-1$ nodes in total.

This is very tricky to implement, but it can reduce the memory usage from $\mathcal{O}(t \log N)$ to $\mathcal{O}(t)$ (see below for implementation).

## Extra operations

There are some more operations you can do with this data structure. Some of them are:

1. Insert an arbitrary number of occurrences of the same element;
2. It can be modified to work as a map from int to some value type, with a rule for merging two mappings (for example, if $x$ maps to $u$ in set $A$ and to $v$ in set $B$, it can be made so that $x$ maps to $u \cdot v$ in the merged set);
3. Using lazy propagation and possibly modifying the merge function, it is possible to do operations like insert a range of values (insert values $a, a+1, a+2, \dots b$ all at once), or flip a range of values (for every $x \in [a, b]$, if it is in the set, remove it, otherwise insert it), etc, maintaining the complexity of all operations;
4. It is possible to use an implicit treap (or other balanced binary search tree) to represent an array, so that in each node of the treap you store a sorted subarray represented by this structure. Using this, we can do all of the usual operations of implicit tree such as split, concatenate, reverse, etc (but not lazy updates); however we can also sort a subarray (increasing or decreasing). To sort a subarray, just split the subarray into a treap and merge all of the subarray structures. The time complexity is $\mathcal{O}((n+t)(\log n + \log N))$ for $t$ operations on an array of size $n$.

## Downsides

The memory consumption of the data structure might be a problem, though it can be reduced with more complicated code. The code is already not small at all. Another big downside is the amortized complexity, which leaves us with almost no hope of doing rollbacks or making this structure persistent. Also, the constant factor seems to be quite big, so the structure is not so fast.

## Final remarks

The core of the idea was already explained on TLE's blog, and I also found this paper which describes its use as a dictionary (map). Furthermore, while I was writing this, bicsi posted a very cool blog where he shows an easier way to have the memory optimization, but it seems to make the split operation more complicated.

## Problems

Using this structure to solve these problems is overkill, but they are the only examples I got.

Solution
Solution

UPD: I have implemented the memory optimization for segment trees, you can check it out here: 108428555 (or, alternatively, use this pastebin) Please let me know if there is a better way to implement this.


By brunomont, 16 months ago,
TL; DR

Hello, Codeforces!

Here I'll share an algorithm to solve the classic problem of Range Minimum Query (RMQ): given a static array $A$ (there won't be any updates), we want to find, for every $\text{query(l, r)}$, the index of the minimum value of the sub-array of $A$ that starts at index $\text{l}$ and ends at index $\text{r}$. That is, we want to find $\text{query(l, r)} = \text{arg min}_{l \leq i \leq r}{\left(A[i]\right)}$. If there is more than one such index, we can answer any of them.

I would like to thank tfg for showing me this algorithm. If you have read about it somewhere, please share the source. The only source I could find was a comment from jcg, where he explained it briefly.

## Introduction

Sparse table is a well known data structure to query for the minimum over a range in constant time. However, it requires $\Theta(n \log n)$ construction time and memory. Interestingly, we can use a sparse table to help us answer RMQ with linear time construction: even though we can't build a sparse table over all the elements of the array, we can build a sparse table over fewer elements.

To do that, let us divide the array into blocks of size $b$ and compute the minimum of each block. If we then build a sparse table over these minimums, it will cost $\mathcal{O}(\frac{n}{b} \log{\frac{n}{b}}) \subseteq \mathcal{O}(\frac{n}{b} \log{n})$. Finally, if we choose $b \in \Theta(\log n)$, we get $\mathcal{O}(n)$ time and space for construction of the sparse table!

So, if our query indices happen to align with the limits of the blocks, we can find the answer. But we might run into the following cases:

• Query range is too small, so it fits entirely inside one block:

• Query range is large and doesn't align with block limits:

Note that, in the second case, we can use our sparse table to query the middle part (in gray). In both cases, if we were able to answer small queries (queries such that $r-l+1 \leq b$), we would be done.

## Handling small queries

Let's consider queries ending at the same position $r$. Take the following array $A$ and $r = 6$.

Obviously, $\text{query}(6, 6) = 6$. Since $\text{query}(5, 6) = 5 \neq \text{query}(6, 6)$, we can think of the position $\text{5}$ as "important". Position $\text{4}$, though, is not important, because $\text{query}(4, 6) = 5 = \text{query}(5, 6)$. Basically, for fixed $r$, a position is important if the value at that position is smaller than all the values to the right of it. In this example, the important positions are $6, 5, 2, 0$. In the following image, important elements are represented with $\text{1}$ and others with $\text{0}$.

Since we only have to answer queries with size at most $b \in \Theta(\log n)$, we can store this information in a mask of size $b$: in this example, $\text{mask[6] = 1010011}$, assuming $b \geq 7$. If we had these masks for the whole array, how could we figure out the minimum over a range? Well, we can simply take $\text{mask[r]}$, look at its $r-l+1$ least significant bits, and of those bits take the most significant one! The index of that bit tells us how far away from $r$ the answer is.

Using our previous example, if the query was from $\text{1}$ to $\text{6}$, we would take $\text{mask[6]}$, only look at the $r-l+1=6$ least significant bits (that would give us $\text{010011}$) and out of that take the index of the most significant set bit: $\text{4}$. So the minimum is at position $r - 4 = 2$.

Now we only need to figure out how to compute these masks. If we have the mask representing position $r$, let's change it to represent position $r+1$. Obviously, a position that was not important can't become important, so we won't need to turn on any bits. However, some positions that were important can stop being important. To handle that, we keep turning off the least significant currently set bit of our mask, until either there are no more bits to turn off, or the value at $r+1$ is greater than the element at the position represented by the least significant set bit (in that case we can stop, because the elements at the important positions to the left of the least significant set bit are even smaller).

Let's append an element with value $\text{3}$ at the end of array $A$ and update our mask.

Since $\text{A[6] > 3}$, we turn off that bit. After that, once again $\text{A[5] > 3}$, so we also turn off that bit. Now we have that $\text{A[2] < 3}$, so we stop turning off bits. Finally, we need to append a 1 to the right of the mask, so it becomes $\text{mask[7] = 10100001}$ (assuming $b \geq 8$).

This process takes $\mathcal{O}(n)$ time: only one bit is turned on for each position of the array, so the total number of times we turn a bit off is at most $n$, and using bit operations we can get and turn off the least significant currently set bit in $\mathcal{O}(1)$.
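The mask computation and the small query can be condensed into a standalone sketch. The array below is my own choice (the blog's array appears only in the images), picked so that it reproduces the masks from the example above:

```cpp
#include <cassert>
#include <vector>
using namespace std;

const int b = 8;  // masks keep at most b bits

vector<int> build_masks(const vector<int>& A) {
    vector<int> mask(A.size());
    int curr = 0;
    for (int i = 0; i < (int)A.size(); i++) {
        curr = (curr << 1) & ((1 << b) - 1);  // all distances to r grow by 1
        // drop important positions whose element is not smaller than A[i]
        while (curr > 0 && A[i] < A[i - __builtin_ctz(curr)])
            curr &= curr - 1;                 // turn off the least significant bit
        curr |= 1;                            // position i is always important
        mask[i] = curr;
    }
    return mask;
}

// index of the minimum of A[r-size+1..r], for size <= b
int small(const vector<int>& mask, int r, int size) {
    int m = mask[r] & ((1 << size) - 1);
    return r - (31 - __builtin_clz(m));       // distance = index of msb
}
```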

## Implementation

Here is a detailed C++ implementation of the whole thing.

template<typename T> struct rmq {
    vector<T> v; int n;
    static const int b = 30; // block size
    vector<int> mask, t; // mask[i]: important positions in the block ending at i
                         // t: linearized sparse table over the blocks

    int op(int x, int y) {
        return v[x] < v[y] ? x : y;
    }
    // least significant set bit
    int lsb(int x) {
        return x & -x;
    }
    // index of the most significant set bit
    int msb_index(int x) {
        return __builtin_clz(1)-__builtin_clz(x);
    }
    // answer query of v[r-size+1..r] using the masks, given size <= b
    int small(int r, int size = b) {
        // get only 'size' least significant bits of the mask
        // and then get the index of the msb of that
        int dist_from_r = msb_index(mask[r] & ((1<<size)-1));

        return r - dist_from_r;
    }
    rmq(const vector<T>& v_) : v(v_), n(v.size()), mask(n), t(n) {
        int curr_mask = 0;
        for (int i = 0; i < n; i++) {
            // shift mask by 1, keeping only the 'b' least significant bits
            curr_mask = (curr_mask<<1) & ((1<<b)-1);

            while (curr_mask > 0 and op(i, i - msb_index(lsb(curr_mask))) == i) {
                // current value is smaller than the value represented by the
                // last 1 in curr_mask, so we need to turn off that bit
                curr_mask ^= lsb(curr_mask);
            }
            // append extra 1 to the mask
            curr_mask |= 1;
            mask[i] = curr_mask;
        }

        // build sparse table over the n/b blocks
        // the sparse table is linearized, so what would be at
        // table[j][i] is stored in table[(n/b)*j + i]
        for (int i = 0; i < n/b; i++) t[i] = small(b*i+b-1);
        for (int j = 1; (1<<j) <= n/b; j++) for (int i = 0; i+(1<<j) <= n/b; i++)
            t[n/b*j+i] = op(t[n/b*(j-1)+i], t[n/b*(j-1)+i+(1<<(j-1))]);
    }
    // query(l, r) returns the actual minimum of v[l..r]
    // to get the index, just change the first and last lines of the function
    T query(int l, int r) {
        // query too small
        if (r-l+1 <= b) return v[small(r, r-l+1)];

        // get the minimum of the endpoints
        // (there is no problem if the ranges overlap with the sparse table query)
        int ans = op(small(l+b-1), small(r));

        // 'x' and 'y' are the blocks we need to query over
        int x = l/b+1, y = r/b-1;

        if (x <= y) {
            int j = msb_index(y-x+1);
            ans = op(ans, op(t[n/b*j+x], t[n/b*j+y-(1<<j)+1]));
        }

        return v[ans];
    }
};


## But is it fast?

As you might have guessed, although the asymptotic complexity is optimal, the constant factor of this algorithm is not so small. To get a better understanding of how fast it actually is and how it compares with other data structures capable of answering RMQ, I did some benchmarks (link to the benchmark files).

I compared the following data structures. Complexities written in the notation $<\mathcal{O}(f), \mathcal{O}(g)>$ means that the data structure requires $\mathcal{O}(f)$ construction time and $\mathcal{O}(g)$ query time.

• RMQ 1 (implementation of RMQ described in this post): $<\mathcal{O}(n), \mathcal{O}(1)>$;
• RMQ 2 (different algorithm, implementation by catlak_profesor_mfb, from this blog): $<\mathcal{O}(n), \mathcal{O}(1)>$;
• Sparse Table: $<\mathcal{O}(n \log n), \mathcal{O}(1)>$;
• Sqrt-tree (tutorial and implementation from this blog by gepardo): $<\mathcal{O}(n \log \log n), \mathcal{O}(1)>$;
• Standard Segment Tree (recursive implementation): $<\mathcal{O}(n), \mathcal{O}(\log n)>$;
• Iterative (non recursive) Segment Tree: $<\mathcal{O}(n), \mathcal{O}(\log n)>$.

The data structures were tested with array sizes $10^6$, $2 \times 10^6$, $\dots, 10^7$. At each of these array sizes, the build time and the time to answer $10^6$ queries were measured, averaging across 10 runs. The codes were compiled with the $\text{O2}$ flag. Below are the results on my machine.

Results in table form

## Conclusion

We have an algorithm with optimal complexity to answer RMQ, and it is also simple to understand and implement. From the benchmark, we can see that its construction is much faster than Sparse Table and Sqrt-tree, but a little slower than Segment Trees. Its query time seems to be roughly the same as Sqrt-tree, losing only to Sparse Table, which proved to be the fastest in query time.