In the static RMQ problem, one has to answer queries about the minimum in a range of a fixed array. A classic variation allows point updates of the form $$$A_i \leftarrow x$$$. A good solution is a segment tree, which achieves $$$O(\log{n})$$$ query and update times. My question is: how fast can we update if we must answer queries in $$$O(1)$$$?
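For reference, the $$$O(\log{n})$$$-per-operation baseline can be sketched as an iterative segment tree (a minimal sketch in Python; the class and method names are mine, not from any particular library):

```python
import math

class SegTree:
    """Iterative segment tree: O(log n) point update, O(log n) range-min query."""
    def __init__(self, a):
        self.n = len(a)
        self.t = [math.inf] * (2 * self.n)
        self.t[self.n:] = a
        for i in range(self.n - 1, 0, -1):
            self.t[i] = min(self.t[2 * i], self.t[2 * i + 1])

    def update(self, i, x):          # A_i <- x
        i += self.n
        self.t[i] = x
        while i > 1:
            i //= 2
            self.t[i] = min(self.t[2 * i], self.t[2 * i + 1])

    def query(self, l, r):           # min of A[l..r], inclusive
        res = math.inf
        l += self.n
        r += self.n + 1
        while l < r:
            if l & 1:
                res = min(res, self.t[l]); l += 1
            if r & 1:
                r -= 1; res = min(res, self.t[r])
            l //= 2; r //= 2
        return res
```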

I can think of a solution which works in $$$O(\sqrt{n})$$$ per update:

We can answer queries in $$$O(1)$$$ using a sparse table. Since each element is part of $$$O(n)$$$ ranges in the sparse table, we can clearly update a sparse table of an array of size $$$n$$$ in $$$O(n)$$$. Now, divide the array into $$$\sqrt{n}$$$ blocks of size $$$\sqrt{n}$$$. For each block, maintain the prefix and suffix minimums and also a sparse table. Clearly, a block can be updated in $$$O(\sqrt{n})$$$. On the upper level, maintain another sparse table over the block minimums, which can answer the minimum over a range of blocks. Clearly this can also be updated in $$$O(\sqrt{n})$$$, and the query time is still $$$O(1)$$$. The memory used and precomputation done are both $$$O(n \log{n})$$$.
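A sketch of the building block used above: a sparse table with $$$O(1)$$$ queries, where a point update recomputes, level by level, only the entries whose range covers the changed index (class and method names are mine):

```python
class SparseTableRMQ:
    """Sparse table: O(n log n) build/memory, O(1) range-min query,
    O(n) point update (recompute only the entries whose range covers i)."""
    def __init__(self, a):
        self.n = n = len(a)
        self.st = [a[:]]                        # st[k][j] = min of a[j .. j + 2^k - 1]
        for k in range(1, max(1, n.bit_length())):
            half = 1 << (k - 1)
            prev = self.st[-1]
            self.st.append([min(prev[j], prev[j + half])
                            for j in range(n - (1 << k) + 1)])

    def query(self, l, r):                      # min of A[l..r], inclusive
        k = (r - l + 1).bit_length() - 1
        return min(self.st[k][l], self.st[k][r - (1 << k) + 1])

    def update(self, i, x):                     # A_i <- x, O(n) total work
        self.st[0][i] = x
        for k in range(1, len(self.st)):
            half = 1 << (k - 1)
            lo = max(0, i - (1 << k) + 1)       # ranges [j, j + 2^k - 1]
            hi = min(i, self.n - (1 << k))      # that contain index i
            prev = self.st[k - 1]
            for j in range(lo, hi + 1):
                self.st[k][j] = min(prev[j], prev[j + half])
```

Processing levels in increasing order is what makes this valid: each recomputed entry reads level-$$$(k-1)$$$ entries that are either untouched or already refreshed.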

Can we do better, say $$$\text{polylog}(n)$$$, perhaps using more memory and/or precomputation? Is there a known lower bound?

Updating the block sparse table, or the upper level sparse table, runs in $$$O(\sqrt{n} \log(n))$$$, not in $$$O(\sqrt{n})$$$.

Your approach reminds me a little of the Sqrt Tree by gepardo. That one does updates in true $$$O(\sqrt{n})$$$ and still answers queries in $$$O(1)$$$. As a bonus, it supports more complex operations than computing the minimum, like computing the sum in a range, for which a sparse table cannot compute the answer in $$$O(1)$$$. You can learn about it here or here.

The second link also contains an idea of how to improve it even further with less precomputation/memory, $$$O(n \log^* n)$$$, so almost $$$O(n)$$$, if you can overlap ranges like you can with min/max queries. I'm not 100% sure, but I think updates are still $$$O(\sqrt{n})$$$ though.

Each element is part of $$$O(n)$$$ ranges in the sparse table of an array with $$$n$$$ entries. Proof: for the range $$$[j, j + 2^k - 1]$$$ to contain $$$i$$$, we need $$$i - 2^{k} + 1 \leq j \leq i$$$. So there are $$$\leq 2^k$$$ ranges of length $$$2^k$$$ that require a change. Summing over all $$$k$$$ gives $$$\sum_{k} 2^k = O(n)$$$.
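A quick numeric check of this counting argument (the helper name is mine):

```python
def ranges_containing(n, i):
    """Count sparse-table ranges [j, j + 2^k - 1] with 0 <= j and
    j + 2^k <= n that contain index i."""
    total = 0
    k = 0
    while (1 << k) <= n:
        length = 1 << k
        lo = max(0, i - length + 1)
        hi = min(i, n - length)
        if hi >= lo:
            total += hi - lo + 1
        k += 1
    return total

n = 1 << 12
worst = max(ranges_containing(n, i) for i in range(n))
assert worst <= 2 * n   # the total is O(n), as claimed
```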

Another way to see this is to take all $$$n \log{n}$$$ ranges and subtract those lying entirely to the left or entirely to the right of $$$i$$$: $$$n \log{n} - i \log{i} - (n-i) \log{(n-i)} = O(n)$$$.

So, the upper level sparse table takes $$$O(\sqrt{n})$$$ per update.

Anyway, I also posted this on Stack Exchange, and as the answer there says, we can apply the construction recursively to get $$$O(n^{\epsilon})$$$ updates for any constant $$$\epsilon > 0$$$, which is really cool.

The possibility of a polylogarithmic update time is still a question though!

Oh, right. You are correct.

Have you looked at the idea in the second article? It might have polylog updates. Basically, it splits the array into $$$\log(n)$$$ segments and does the same thing for each block recursively, and it keeps a sparse table for every block in every layer. But I haven't read the article in a long time, so I don't remember all the details right now.

Well... I don't think that's what you want, but you can get $$$O(k n^{\frac{2}{k}})$$$ for any fixed $$$k$$$.

Let's build a $$$k$$$-layer structure: the $$$i$$$-th layer is divided into blocks of length $$$n^{\frac{i}{k}}$$$, and each block is a union of $$$n^{\frac{1}{k}}$$$ blocks of the previous layer. Each block stores the minima on all $$$O((n^{\frac{1}{k}})^{2})$$$ segments of blocks of the previous layer.

I think you get the idea. An update is $$$O(k n^{\frac{2}{k}})$$$, a query is $$$O(k)$$$, which is $$$O(1)$$$ for fixed $$$k$$$. With $$$k = \log n$$$ you get the segment tree.
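A sketch of the $$$k = 2$$$ case of this construction (class name mine): blocks of length about $$$\sqrt{n}$$$, each storing the minima of all its subsegments, plus a top-level all-pairs table over block minima. An update rebuilds one block table and the top table, $$$O((\sqrt{n})^2 + (n/\sqrt{n})^2) = O(n)$$$, matching $$$k n^{\frac{2}{k}}$$$ for $$$k = 2$$$, and queries are $$$O(1)$$$:

```python
import math

class TwoLayerRMQ:
    """k = 2 instance of the layered structure: O(1) query, O(n) update."""
    def __init__(self, a):
        self.a = a[:]
        self.n = n = len(a)
        self.b = max(1, math.isqrt(n))             # block length ~ sqrt(n)
        self.nb = (n + self.b - 1) // self.b       # number of blocks
        self.inblock = [None] * self.nb            # per-block all-subsegment minima
        for bi in range(self.nb):
            self._rebuild_block(bi)
        self._rebuild_top()

    def _rebuild_block(self, bi):                  # O(b^2)
        lo, hi = bi * self.b, min((bi + 1) * self.b, self.n)
        size = hi - lo
        tab = [[0] * size for _ in range(size)]    # tab[l][r] = min a[lo+l .. lo+r]
        for l in range(size):
            cur = self.a[lo + l]
            for r in range(l, size):
                cur = min(cur, self.a[lo + r])
                tab[l][r] = cur
        self.inblock[bi] = tab

    def _rebuild_top(self):                        # O((n/b)^2)
        nb = self.nb
        bmin = [self.inblock[bi][0][len(self.inblock[bi]) - 1] for bi in range(nb)]
        self.top = [[0] * nb for _ in range(nb)]   # top[i][j] = min of blocks i..j
        for i in range(nb):
            cur = bmin[i]
            for j in range(i, nb):
                cur = min(cur, bmin[j])
                self.top[i][j] = cur

    def update(self, i, x):                        # O(n) total for k = 2
        self.a[i] = x
        self._rebuild_block(i // self.b)
        self._rebuild_top()

    def query(self, l, r):                         # min of A[l..r], inclusive; O(1)
        bl, br = l // self.b, r // self.b
        if bl == br:
            return self.inblock[bl][l - bl * self.b][r - bl * self.b]
        left_tab, right_tab = self.inblock[bl], self.inblock[br]
        res = min(left_tab[l - bl * self.b][len(left_tab) - 1],   # suffix of bl
                  right_tab[0][r - br * self.b])                  # prefix of br
        if bl + 1 <= br - 1:
            res = min(res, self.top[bl + 1][br - 1])              # whole blocks
        return res
```

Every query decomposes into a block suffix, a run of whole blocks, and a block prefix, and each of the three parts is a single table lookup.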