Noam527's blog

By Noam527, history, 23 months ago, In English

Throughout the last several weeks I have tried to invent and solve a generalization of segment trees (although by now it doesn't quite fit to call it "segment trees"... see for yourself). The results are few (and not great, if you ask me), but the problem was very appealing and fun to think about.

The purpose of this blogpost is to share my experience with this problem, in order to spark curiosity in at least a few people, and also to let bright-minded users tackle this problem together and hopefully achieve better results. Since I want to share my experience, I will also put some possibly challenging parts under spoilers, in case you want to attempt them yourself.

So, there is no tl;dr. Sorry :)

The Motivation and The Problem

As the title suggests, the motivation was from inspection of segment trees. Define the following problem, which we will try to solve throughout this post:

Given $$$n$$$ elements $$$a_0, \ldots, a_{n-1}$$$, initially $$$0$$$, and $$$m$$$ subsets $$$S_i \subseteq [n]$$$ (where $$$[n] = \lbrace 0, \ldots, n-1 \rbrace$$$), we need to support the following:

  • Update $$$(i, x)$$$: add $$$x$$$ to $$$a_i$$$.
  • Query $$$v$$$: return $$$\sum_{i \in S_v} a_i$$$.

Segment trees define the $$$m = \binom{n+1}{2}$$$ subsets implicitly — as ranges. For some "magical" reason, when we have ranges, we can define $$$n' = O(n)$$$ elements, and subsets $$$S_i, T_i \subseteq [n']$$$, such that an update $$$(v, x)$$$ requires adding $$$x$$$ to all $$$a_j, j \in S_v$$$, and a query $$$v$$$ is equal to $$$\sum_{j \in T_v} a_j$$$ — but with the bonus that $$$|S_i|, |T_i|$$$ are all $$$O(\log n)$$$, so all operations can be done in $$$O(\log n)$$$.

The construction just sets $$$n'$$$ to the number of nodes in the segment tree, sets $$$S_i$$$ for every element to the set of $$$O(\log n)$$$ tree ranges that contain it, and sets $$$T_i$$$ for every query range to the $$$O(\log n)$$$ canonical tree ranges that exactly cover it.
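For concreteness, here is a minimal sketch of how these sets can be read off a standard iterative segment tree, assuming 0-indexed elements, inclusive query ranges, and a tree padded to a power of two. This is my own illustrative code, not part of the original post; the names SubsetView, update_set and query_set are mine.

#include <bits/stdc++.h>
using namespace std;

// Sketch of the construction above (not from the post):
// update_set(i)  = the O(log n) tree nodes whose range contains element i  (S_i)
// query_set(l,r) = the O(log n) canonical nodes that exactly cover [l, r]  (T for that range)
struct SubsetView {
    int n, sz;
    SubsetView(int n) : n(n) {
        sz = 1;
        while (sz < n) sz *= 2;                   // leaves are nodes sz .. sz + n - 1
    }
    vector<int> update_set(int i) const {         // all ancestors of leaf i, including itself
        vector<int> s;
        for (int v = sz + i; v >= 1; v /= 2) s.push_back(v);
        return s;
    }
    vector<int> query_set(int l, int r) const {   // canonical cover of [l, r], inclusive
        vector<int> t;
        for (int lo = sz + l, hi = sz + r + 1; lo < hi; lo /= 2, hi /= 2) {
            if (lo & 1) t.push_back(lo++);
            if (hi & 1) t.push_back(--hi);
        }
        return t;
    }
};

int main() {
    SubsetView view(8);
    for (int v : view.update_set(3)) cout << v << ' ';   // S_3: path from leaf to root
    cout << '\n';
    for (int v : view.query_set(2, 6)) cout << v << ' '; // canonical nodes covering [2, 6]
    cout << '\n';
}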

I wanted to figure out whether there is a generalization that allows such optimizations for problems where we don't have ranges. Analyzing cases where the subsets are too many and implicit was too difficult for me. So the problem is as above, where the subsets $$$S_i$$$ are all given explicitly, and then queries and updates are made. So the input size is $$$O(n + \sum |S_i|)$$$.

Note: Segment trees also support different associative operations, such as minimum instead of sum. It was convenient to think about sums, and a lot of the following can be generalized to other operations, with perhaps an additional log factor. We can also generalize to "laziness", as subset updates and subset queries, but this generalization is much slower (perhaps it would be interesting to attempt this as well?). Anyway, both of these are left as an exercise to the reader.

Interpreting as a Graph

We can define a similar problem, and see that it's equivalent (up to a constant): Given an undirected graph, each node has a value $$$a_i$$$, initially $$$0$$$. An update requires us to add to a node's value, and a query $$$v$$$ asks for the sum of values of all neighbors of $$$v$$$.

The reduction from this problem to the aforementioned problem is simple: just define every subset as the set of neighbors of a node. In the opposite direction, given the subsets we can define a bipartite graph on $$$n + m$$$ vertices: the $$$n$$$ element vertices on one side and the $$$m$$$ subset vertices on the other, where the vertex of subset $$$S_i$$$ is adjacent exactly to the elements it contains.

I found that, by interpreting the problem as a graph, I was able to make progress.

Solving for a tree

Suppose the graph is a tree. Can we solve this problem efficiently? You can attempt this yourself before reading.

Found Complexity
Solution

That was awesome, let's try a general graph

Found Complexity
Solution

Let's analyze what happened here

The tree solution is significantly better than the general graph solution. Applying the general solution to a tree gives us $$$O(\sqrt{V})$$$ instead of $$$O(1)$$$. So what was this magic in trees that allowed us to cheat like this?

The magic boils down to relating vertices and their parents. Let's generalize this to any graph:

We direct every edge of the graph in some direction. For every vertex $$$v$$$, define $$$T_v$$$ as the set of vertices to which $$$v$$$ has an outgoing edge; think of it as $$$v$$$'s "parents". The sum over the neighbors of a vertex splits into the sum over its outgoing neighbors and the sum over its incoming neighbors. Now the tree solution generalizes:

Maintain $$$a_v$$$ for every vertex as usual, and maintain $$$b_v$$$ for every vertex as the sum of its incoming neighbors. Then on an update $$$(v, x)$$$ we add $$$x$$$ to $$$a_v$$$ and to $$$b_u$$$ for all $$$u \in T_v$$$. On a query $$$v$$$, we return $$$b_v + \sum_{u\in T_v} a_u$$$.

The time complexity becomes $$$O(\max |T_i|)$$$ per operation. While this result doesn't improve our situation in dense graphs, it can be useful in other ones, for example trees.
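A minimal sketch of this scheme, assuming the orientation (the lists $$$T_v$$$) is already given as input; this is my own code, and the names NeighborSums, update and query are mine, not from the post.

#include <bits/stdc++.h>
using namespace std;

// Sketch of the orientation-based scheme above:
// out[v] lists the vertices v points to (the set T_v), a[v] is v's own value,
// b[v] is the maintained sum of a over v's *incoming* neighbors.
struct NeighborSums {
    vector<vector<int>> out;
    vector<long long> a, b;
    NeighborSums(vector<vector<int>> out_) : out(move(out_)), a(out.size()), b(out.size()) {}

    void update(int v, long long x) {     // add x to a_v, O(|T_v|)
        a[v] += x;
        for (int u : out[v]) b[u] += x;   // v is an incoming neighbor of each u in T_v
    }
    long long query(int v) const {        // sum of values over all neighbors of v, O(|T_v|)
        long long res = b[v];
        for (int u : out[v]) res += a[u];
        return res;
    }
};

int main() {
    // Path 0 - 1 - 2, edges oriented toward vertex 0 as the root: T_1 = {0}, T_2 = {1}.
    NeighborSums ds({{}, {0}, {1}});
    ds.update(0, 5);
    ds.update(2, 7);
    cout << ds.query(1) << '\n';          // neighbors of 1 are {0, 2} -> 12
}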

Minimizing $$$\max |T_i|$$$

Given a graph, let's say we want to find the minimum value of $$$\max |T_i|$$$ among all possible ways to direct the edges. Turns out that it is computable in polynomial time!

Hint
Solution

Another interesting fact is that there is a relatively simple formula that expresses the minimum achievable $$$\max |T_i|$$$:

Formula
Proof

To finish with what I covered in this area: the main issue is that if we first compute this orientation and only then start the queries and updates, the orientation algorithm itself might be too slow. So here is a linear-time algorithm that directs all of the edges such that the maximum outgoing degree is at most $$$2d(G)$$$, which is asymptotically optimal.

Approximation Algorithm

Before I found this algorithm, I had a different one, with a complex proof that it also achieves a constant-factor approximation. The algorithm starts by directing every edge arbitrarily. Then observe that we can flip an edge and update our data in $$$O(1)$$$. For every vertex we get a query/update on, flip all of its outgoing edges and then operate on it (a sketch follows below). You can attempt to prove its amortized complexity if you want.
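Here is a sketch of that flipping algorithm as I understand it; this is my own illustrative code and naming (FlippingNeighborSums, flip_all), not the author's implementation. Flipping an edge from $$$v$$$ to $$$u$$$ only needs $$$O(1)$$$ corrections to $$$b_u$$$ and $$$b_v$$$, and once all of $$$v$$$'s edges point into $$$v$$$, an update touches only $$$a_v$$$ and a query is just $$$b_v$$$.

#include <bits/stdc++.h>
using namespace std;

// Sketch of the "flip before you touch" variant described above.
// Edges are stored in adjacency lists of *outgoing* neighbors; flipping edge v->u
// moves it to u's list and repairs the partial sums in O(1) per edge.
struct FlippingNeighborSums {
    vector<vector<int>> out;              // out[v] = current outgoing neighbors of v
    vector<long long> a, b;               // a[v] = value, b[v] = sum of a over vertices pointing at v
    FlippingNeighborSums(int n, const vector<pair<int,int>>& edges) : out(n), a(n), b(n) {
        for (auto [u, v] : edges) out[u].push_back(v);   // arbitrary initial orientation
    }
    void flip_all(int v) {                // make every edge incident to v point *into* v
        for (int u : out[v]) {
            b[u] -= a[v];                 // v no longer points at u
            b[v] += a[u];                 // u now points at v
            out[u].push_back(v);
        }
        out[v].clear();
    }
    void update(int v, long long x) {
        flip_all(v);
        a[v] += x;                        // v has no outgoing edges, so nothing else to touch
    }
    long long query(int v) {
        flip_all(v);
        return b[v];                      // all neighbors of v now point at v
    }
};

int main() {
    // Triangle 0-1-2.
    FlippingNeighborSums ds(3, {{0, 1}, {1, 2}, {2, 0}});
    ds.update(0, 4);
    ds.update(1, 6);
    cout << ds.query(2) << '\n';          // neighbors of 2 are {0, 1} -> 10
}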

Random Dense Graphs

I have to address the issue that we still don't have anything good for dense graphs. Of course, for large $$$n$$$ we're probably not going to be given a dense graph explicitly, but it would still be interesting to analyze this case.

Let's backtrack to the initial problem (not the graph formulation). Up until now, we were given $$$n$$$ elements and our $$$m$$$ "query subsets" as input, while all of our strategies consist of choosing a new $$$N$$$ (for instance, in a tree we had $$$N = 2n$$$ since we also initialized another array $$$b$$$) and constructing $$$n$$$ "update subsets" and $$$m$$$ "query subsets", so that every update adds to all elements of one update subset and every query computes the sum over one query subset.

Let's prove a depressing $$$\Omega \left( \frac{n}{\log n} \right)$$$ lower bound for this approach.

Proof

Furthermore, I believe that any algorithm can be transformed via linear algebra to an algorithm that constructs update subsets and query subsets, but I was unable to prove it.

Anyway, there is also an $$$O\left( \frac{n}{\log n} \right)$$$ construction, so it seems we hit our limit. This follows the pattern that many algorithms use to shave a log factor off the complexity, like RMQ.

The almost-awful-but-not-quite algorithm

As a final note, this means that I also don't know how to do $$$o \left( \frac{d(G)}{\log n} \right)$$$ for any graph $$$G$$$ (following the definition of $$$d(G)$$$ from before). Can we use this trick to reduce the time complexity below $$$d(G)$$$? (when it is bigger than $$$\log n$$$).

Implicit Subsets

Since I did feel like I covered enough ground on explicit inputs, it might be time to select some interesting implicit cases. I didn't spend a lot of time on this, so I don't have any results.

For implicit subsets as ranges, segment trees work :). Here's an interesting implicit case that I have no clue how to solve: we are given the subsets in the input, except that every given subset is either a pair of indices, or a union of two previously given subsets. Maybe we can provide solutions based on small "depth" of subsets?

An interesting application of this would be reachability queries in a DAG.

Applications

All my work was purely theoretical, but since some people enjoy a new topic only if it leaves the "sweet flavor of applications", I guess I have to make up some.

  • You begin with $$$n,k$$$ and an empty set. In an update you insert or erase an element in $$$[0, 2^n)$$$ from your set, and in a query you are asked for the minimum bitwise OR that can be made by choosing $$$k$$$ elements from the current set. If we construct $$$2^n$$$ elements, and $$$2^n$$$ subsets such that $$$S_i$$$ contains all $$$j$$$ whose bits are included in $$$i$$$'s bits, then we have $$$3^n$$$ edges in this graph. The average degree of the graph is $$$1.5^n$$$, and I believe that $$$d(G)$$$ is also $$$O(1.5^n)$$$, so this problem is solvable in $$$O(1.5^n \cdot n)$$$ per query, instead of $$$O(2^n \cdot n)$$$.

  • Given $$$n$$$, we can construct a bipartite graph with $$$n$$$ vertices on each side, and connect $$$(i, j)$$$ if $$$i$$$ divides $$$j$$$. Then using this graph we can solve a problem of inserting/removing a value $$$x \in [1, n]$$$ from a set and querying for the number of values that divide $$$x$$$ / are multiples of $$$x$$$. $$$d(G)$$$ of this graph I think has to do with highly composite numbers — choose some number $$$k$$$, on one side take all divisors of $$$k$$$ and on the other take all multiples of $$$k$$$. The density of this subgraph is $$$\frac{n\cdot \sigma_0(k)}{n + k\sigma_0(k)}$$$, where $$$\sigma_0$$$ is the divisor counting function. Choosing some highly composite $$$k$$$ such that $$$k \cdot \sigma_0(k)$$$ is about $$$n$$$, feels like it hits the upperbound, and I think it leads to quite a small density. If I recall correctly, running the greedy approximation algorithm on $$$n = 200000$$$ gave around $$$50$$$ maximum outdegree, which is not bad.

You can probably find applications that are more interesting, but I don't find it important right now. I think we should try to:

  • Disprove my proof of optimality on dense graphs.
  • Prove or disprove that any algorithm can be transformed into one where we construct update subsets and query subsets.
  • Try to make up implicit inputs and solve them — this might be the most interesting road to obtain new results.


By Noam527, 4 years ago, In English

In many data structures, the operation of "undo" on the last update can be implemented easily: we can (usually) maintain a stack of the updates, where each update on the stack holds the memory cells it changed, and their original values. To undo an operation, just revert all changes from the top update on the stack. To maintain good complexity, we require the updates to operate in non-amortized time. I've seen this being used multiple times on DSU (without path compression).
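As an illustration of this stack-of-changes idea, here is a minimal rollback DSU sketch (union by size, no path compression). It is my own code, written only to illustrate the paragraph above; RollbackDSU and its member names are mine.

#include <bits/stdc++.h>
using namespace std;

// Minimal sketch of a DSU with stack-like undo, as described above:
// union by size, no path compression, and every change an update makes
// is recorded so it can be reverted in non-amortized O(1).
struct RollbackDSU {
    vector<int> parent, sz;
    vector<int> history;                        // child root attached by each unite, or -1 for a no-op
    RollbackDSU(int n) : parent(n), sz(n, 1) {
        iota(parent.begin(), parent.end(), 0);
    }
    int find(int v) const {                     // O(log n) without compression
        while (parent[v] != v) v = parent[v];
        return v;
    }
    void unite(int u, int v) {                  // one "update" on the stack
        u = find(u), v = find(v);
        if (u == v) { history.push_back(-1); return; }
        if (sz[u] < sz[v]) swap(u, v);
        parent[v] = u;
        sz[u] += sz[v];
        history.push_back(v);
    }
    void undo() {                               // revert the most recent unite
        int v = history.back(); history.pop_back();
        if (v == -1) return;
        sz[parent[v]] -= sz[v];
        parent[v] = v;
    }
    bool same(int u, int v) const { return find(u) == find(v); }
};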

If we imagine the updates as a sequence, then we can push an update to the end, and pop an update from the end by undo. Then, this sequence is a stack of updates. Here we discuss the idea of having a queue of updates: we can add a new update, or undo the oldest update still active. Specifically, this blogpost attempts to solve the following problem:

Given a data structure that can support some updates, some queries and an "undo" operation (each with their own time complexity), how can we create a data structure that supports the same updates and queries, but its undo operation works like a queue (and we don't support a stack-like undo in the new one)? And of course, creating such a data structure that adds the smallest time complexity factor we can.

Well, this is almost what this post discusses; I require the DS to have another property, which I'll call "commutativity". The best way to put it is probably: A data structure is commutative if for any sequence of updates, and a query afterwards, the result we want from the query remains the same, regardless of how we order those updates. This property holds in DSU (which was the motivation for this post): For any sequence of adding edges, after which we query whether two nodes are in the same component, the query result doesn't change for any order of edge additions. However, if the DSU query asks for the specific root of the node we query on, then this result depends on the order of edge additions, and also on the implementation. I think this query is less interesting anyway.

A data structure this won't work on is, for example, a segment tree with the update of "set in range", since the order of updates matters. Perhaps this specific DS can support queue undoing with a different algorithm, but here we discuss a general commutative data structure.

A few more notes: We build a DS from a commutative DS. We don't require any other constraint on the original DS (we don't care if its complexities are amortized, this includes the complexity of its undo operation). The data structure we create will work in amortized time, and it will also be online (which was a main purpose). Finally, I've never heard about this anywhere, yet I wouldn't be surprised if this idea is well known in China.

The Idea

Let $$$S$$$ be the stack of updates, which is initially empty. We begin by receiving updates which fill up $$$S$$$, until the first undo. What we do now, is reversing $$$S$$$: we undo all the updates and redo them in reverse order. We use commutativity here which ensures this reversal is fine. Now, to undo the first operation, we just call the undo operation on the original data structure: it pops the topmost element of $$$S$$$ which is the first update. In an ideal world, we'd hope that we only receive undo's until the stack is empty, but it is probably the case that we'll get both new updates and undoing: We must push new updates into $$$S$$$, but also must be able to pop old updates, so we have to somehow interleave new updates and old updates. We'll call this "The Stack Problem", which is the main roadblock.

The Stack Problem

We'll consider the stack problem in the following way: we have a stack $$$S$$$ of $$$n$$$ letters $$$A$$$, which represent old updates we need to pop (even while denoted the same, these $$$A$$$'s are different updates... but we don't care about their difference, we just need to remove them in order from topmost to bottommost). On $$$S$$$ we can get 2 types of operations: either pop an $$$A$$$, or add a $$$B$$$, which represents a new update. We will assume there are a total of $$$2n$$$ operations; $$$n$$$ which pop $$$A$$$ and $$$n$$$ which add $$$B$$$, and these arrive online. What we really care about is the number of $$$A$$$'s and the number of $$$B$$$'s in $$$S$$$: when we're asked to pop an $$$A$$$, we modify $$$S$$$ so that the number of $$$A$$$'s decreases by 1, and symmetrically for adding a $$$B$$$. At the end we should have a stack of only $$$B$$$'s.

You might have noticed we assume there will be exactly $$$n$$$ new updates before we pop exactly $$$n$$$ old updates. This is just to ease on solving the stack problem, and we'll later see this assumption makes no asymptotic difference.

We assume that a push or pop on $$$S$$$ takes $$$\mathcal{O}(1)$$$, since for now we care about minimizing stack operations. If we solve this problem in time complexity $$$\mathcal{O}(n \cdot f(n))$$$, think of it as duplicating every update and undo $$$\mathcal{O}(f(n))$$$ times. Following is an algorithm in $$$\mathcal{O}(n \log n)$$$ time.

Solving The Stack Problem

This will be our algorithm:

  • When provided a $$$B$$$ update (add B), we just push it to the top of $$$S$$$.
  • When provided an $$$A$$$ update: if $$$S$$$ already has an $$$A$$$ on top, we just pop. Otherwise, we begin the following process (which I'll call "fixing"): pop from $$$S$$$, saving all the elements we popped, until we have popped an equal number of $$$A$$$'s and $$$B$$$'s, or until no $$$A$$$'s remain in the stack (i.e. the stack is empty or only $$$B$$$'s remain below; we can keep an index of this position in the stack, which only increases). Then push back all the elements we popped, first all the $$$B$$$'s, then all the $$$A$$$'s (we use commutativity here). Since the top of $$$S$$$ had a $$$B$$$, and we were asked to pop an existing $$$A$$$, after fixing the topmost element will be an $$$A$$$, which we pop. A code sketch of this process follows below.
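Below is a toy sketch of this fixing process on a character stack, counting stack operations instead of performing real updates. It is my own illustration (the names StackProblem, settled and fix are mine), under the assumption that a pop is only requested while an $$$A$$$ still exists somewhere in the stack.

#include <bits/stdc++.h>
using namespace std;

// Toy model of the stack problem above: 'A' = old update still to be popped,
// 'B' = new update. Every push/pop here stands for one update / undo on the
// underlying structure, so ops measures the total cost.
struct StackProblem {
    vector<char> S;
    size_t settled = 0;    // below this index only B's remain; it never decreases
    long long ops = 0;

    void push_B() { S.push_back('B'); ++ops; }

    void pop_A() {                         // undo the oldest update still active
        if (S.back() != 'A') fix();
        S.pop_back(); ++ops;               // top is now an A
    }

    // Pop until we removed as many A's as B's (at least one of each), or until
    // we hit the settled region; then push the B's back first and the A's on top.
    void fix() {
        int a = 0, b = 0;
        while ((a != b || a == 0) && S.size() > settled) {
            (S.back() == 'A' ? a : b)++;
            S.pop_back(); ++ops;
        }
        bool reached_settled = (S.size() == settled);
        for (int i = 0; i < b; ++i) { S.push_back('B'); ++ops; }
        if (reached_settled) settled = S.size();   // everything below the A's is a B now
        for (int i = 0; i < a; ++i) { S.push_back('A'); ++ops; }
    }
};

int main() {
    StackProblem sp;
    int n = 8;
    sp.S.assign(n, 'A');                           // n old updates already on the stack
    for (int i = 0; i < n; ++i) {                  // the alternating worst-case-ish pattern
        sp.push_B();
        sp.pop_A();
    }
    cout << "stack operations: " << sp.ops << "\n"; // Theta(n log n) for this pattern
}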

Let's show an upper bound of $$$\mathcal{O}(n \log n)$$$ time on this algorithm. This actually might be interesting for those who want to try, before opening the spoiler:

Upperbound

But can this algorithm be better than $$$\mathcal{O}(n \log n)$$$ in the worst case? Well... no. In the case where the operations are $$$BABA...BA$$$ (alternating between adding a $$$B$$$ and popping an $$$A$$$; we'll return to this case later), this algorithm takes $$$\Theta(n \log n)$$$ time (the sequence of positive fixing costs is 1, 2, 1, 4, 1, 2, 1, 8, ..., basically the cost of fix $$$i$$$ equals the least significant bit of $$$i$$$).

The Idea, Cont.

As mentioned, we begin by pushing updates into $$$S$$$ until the first undo operation. Suppose at this time, the number of updates in $$$S$$$ was $$$s_1$$$. So we did $$$s_1$$$ updates, then we do $$$s_1$$$ undo's (on the original structure), and insert these back in reverse order, which is another $$$s_1$$$ updates (asymptotically the same). Now we execute our algorithm for the stack problem, as long as the last of the first $$$s_1$$$ updates still exists. Suppose once we undo the last of those, $$$S$$$ contains $$$s_2$$$ new updates.

Even though $$$s_1, s_2$$$ might not be equal, the proposed algorithm is still well defined. As for its complexity, we can give a simple upper bound that is tight in the worst case: if we denote $$$n = s_1 + s_2$$$, any sequence of operations that begins with $$$s_1$$$ $$$A$$$'s and ends with $$$s_2$$$ $$$B$$$'s is a prefix of some sequence of operations where we start with $$$n$$$ $$$A$$$'s and end with $$$n$$$ $$$B$$$'s, which takes $$$\mathcal{O}(n \log n)$$$, so we can bound by $$$\mathcal{O}((s_1 + s_2) \log (s_1 + s_2))$$$, tight when $$$s_1 = s_2$$$.

Anyway, once $$$S$$$ contains these $$$s_2$$$ new updates (which we can also assume are ordered by the order we received them), we can again reverse $$$S$$$, and begin our algorithm again until we get $$$s_3$$$ new updates, and so on.

If we had a total of $$$U$$$ updates (and upto $$$U$$$ undo's between them) asked of us, and we had $$$k$$$ "phases" of the algorithm ($$$s_1$$$ through $$$s_k$$$), then $$$U = \sum_{i=1}^{k}s_i$$$, and our time complexity in terms of push and pop into $$$S$$$:

  • Across all reversals of $$$S$$$ we take $$$\mathcal{O}(\sum_{i=1}^{k}s_i) = \mathcal{O}(U)$$$.
  • Across all executions of the algorithm we take $$$\mathcal{O}(\sum_{i=1}^{k-1}(s_i + s_{i+1}) \log (s_i + s_{i+1}))$$$, bounded by $$$\mathcal{O}(U \log U)$$$.

In total we take $$$\mathcal{O}(U \log U)$$$ operations on $$$S$$$: in other words, if our original data structure operated in $$$\mathcal{O}(f(U))$$$ for any sequence of $$$U$$$ operations (including undo's), then we support queue-undo's for any sequence of $$$U$$$ operations in time $$$\mathcal{O}(f(U \log U))$$$, which in most cases would imply we add a logarithmic factor to our total complexity.

Can We Do Better?

A logarithmic factor is small, but not constant. I suppose usually our original structure would have its updates run in $$$\mathcal{O}(\log n)$$$, which now becomes $$$\mathcal{O}(\log n \cdot \log U)$$$, and that's not too small...

So, can we do better than an $$$\mathcal{O}(\log U)$$$ factor? Well, for a general commutative DS (without any more assumptions), the answer is a sad no.

If we only assume a general commutative DS, we can't do any funny tricks; after each update or undo, the stack of updates of the original DS must hold exactly the active updates. The following attempts to prove that we cannot achieve all these states of the stack in $$$o(U \log U)$$$ stack operations. Some parts of this might be informal, and if it fails somewhere I would happily hear someone point it out... I want a better factor.

First, let's define a similar problem, which we'll conveniently name $$$P$$$: Given a positive integer $$$n$$$ and an empty stack $$$S$$$, all operations we're able to do are push and pop $$$A$$$'s and $$$B$$$'s into $$$S$$$. Define the state of the stack as a pair (number of A's, number of B's). Then our goal is to pass through states $$$(n, 0), (n-1, 1), (n-2, 2), \dots, (0, n)$$$, in this order, while minimizing the number of operations. (note that for problem $$$P$$$ of order $$$n$$$ we have $$$n+1$$$ states... don't get confused with off by 1's).

We recursively define the optimal algorithm for problem $$$P$$$: Let $$$T(n)$$$ be the minimum number of operations required for problem $$$P$$$ of order $$$n$$$, with $$$T(0) = 0$$$.

A few observations on the optimal algorithm:

  • It must begin by pushing $$$A$$$, since the first state we need to visit is $$$(n, 0)$$$, so if we were to push a $$$B$$$ at the start, we would end up popping it without reaching any state.
  • At some moment, this $$$A$$$ at the bottom of $$$S$$$ must be replaced with a $$$B$$$, since we must visit $$$(0, n)$$$ at the end. Informally I claim that once we put a $$$B$$$ at the bottom, we never pop it again. A formal proof can probably use some exchange argument: suppose we visited some states after putting the first $$$B$$$ at the bottom, then some more states after a second $$$A$$$ at the bottom, and at some point we put a $$$B$$$ at the bottom again. Then all the states visited with the first $$$B$$$ at the bottom could instead have been visited with the first $$$A$$$ there, using fewer operations, by merging the states visited with the first $$$A$$$, the first $$$B$$$ and the second $$$A$$$.

So the optimal algorithm is split to two phases; an $$$A$$$ at the bottom and a $$$B$$$ at the bottom. This implies there exists some $$$1 \leq k \leq n$$$ such that we visit the first $$$k$$$ states in the first phase, and the rest $$$n+1-k$$$ states in the second phase. We can analyze how the algorithm works for any $$$k$$$ and choose the best $$$k$$$:

Now it's informal, but can probably be proven with an ugly exchange argument; for a fixed $$$k$$$, what the algorithm does is:

  • At the beginning, push $$$n-k+1$$$ $$$A$$$'s into $$$S$$$.
  • From this point, recursively solve a subproblem of order $$$k-1$$$, which takes $$$T(k-1)$$$ operations.
  • Pop the entire stack, which is $$$n$$$ operations.
  • Push $$$k$$$ $$$B$$$'s into $$$S$$$.
  • From this point, recursively solve a subproblem of order $$$n-k$$$, which takes $$$T(n-k)$$$ operations.

So this running time is $$$2n+1 + T(k-1) + T(n-k)$$$. Minimizing across all $$$k$$$, we get the equation:

$$$T(n) = \mathcal{O}(n) + \min_{1 \leq k \leq n}(T(k-1) + T(n-k))$$$

This achieves minimum when $$$k = \frac{n}{2}$$$, resulting in $$$T(n) = \Theta(n \log n)$$$, so the optimal running time for $$$P$$$ is $$$\Theta(n \log n)$$$.

Back to our original problem; Let $$$A$$$ be an algorithm that can handle $$$n$$$ updates and $$$n$$$ undo's in running time $$$\mathcal{O}(f(n))$$$. Specifically, the following scenario is also solved in $$$\mathcal{O}(f(n))$$$: push $$$\frac{n}{2}$$$ updates, then alternate between pushing an update and undoing an update.

(Informally?) If no further assumptions are made on the updates, we must be able to operate on the stack of updates $$$S$$$, so that after each update or undo, exactly those updates that are active, should be in $$$S$$$. Now one can see that, if we name the first $$$\frac{n}{2}$$$ updates as $$$A$$$'s and the new ones as $$$B$$$'s, we pass through states $$$(\frac{n}{2}, 0), \dots, (0, \frac{n}{2})$$$. So algorithm $$$A$$$ can generate a solution to problem $$$P$$$ of order $$$\frac{n}{2}$$$ in running time $$$\mathcal{O}(n + f(n)) = \mathcal{O}(f(n))$$$ (we add $$$n$$$ since in problem $$$P$$$ we begin with an empty stack).

This implies we can solve $$$P$$$ of order $$$\frac{n}{2}$$$ in $$$\mathcal{O}(f(n))$$$ operations, but by the optimality we've shown on $$$P$$$, we get $$$f(n) = \Omega(n \log n)$$$.

Open Questions

Although we showed some kind of optimality in a general case, there are still open questions for improvement:

  • If the data structure supports different updates in different complexities, can we prioritize some updates over others? Our algorithm does overall $$$\mathcal{O}(U \log U)$$$ stack operations, but it could be that some updates get called many times and some get called only a few times. Perhaps a weight function on each update in the stack, and a variant of the fixing process?
  • Can a similar algorithm be applied with less assumptions (for instance, no commutativity)?
  • Can a similar algorithm be applied with more assumptions, and better running time? Perhaps the same algorithm can be proven to have better running time on specific scenarios?

I've yet to try answering any of these.

Examples

My motivation problem was supporting a queue of updates on DSU (LCT probably defeats this purpose with even better complexity, so I was looking for something more general and simple). We can use it to solve this problem online in running time $$$\mathcal{O}(n \log n \log q)$$$. There also exists an offline D&C algorithm for the case where we have monotonicity (which corresponds to queue undoing), and a cool offline algorithm here where we compute for each update its "living time", split these intervals across a segment tree and perform a DFS on it (this is offline, but it doesn't even require the undoing to be in any form of stack or queue).

If I had to come up with another problem... Given an array of length $$$n$$$ of integers, an update provides $$$(l, r, x)$$$ to set all $$$A_i = \min(A_i, x)$$$ for $$$l \leq i \leq r$$$, and a query could ask for maximum in range. These updates are commutative so we could apply the idea on a lazy segment tree.

Another example could be 1386C, from BOI. Here it is interesting, because we can use the idea to solve the problem, even though we don't explicitly take a DS and modify it; we can add all edges from $$$i = 1$$$ to $$$i = n$$$, then proceed by either undoing an update from the suffix, or doing an update from the prefix.

I haven't seen many instances where this can be applied, but it's probably not difficult to come up with such. If you remember problems that can utilize this, link them in the comments.

Implementation

I was wondering if we can have a C++ generic implementation, probably using inheritance, but I don't know enough of it to come up with such. Can you find any? Maybe in another language?

Anyway, here is an implementation (of a DSU-queue), solving 1386C in time $$$\mathcal{O}(m \log n \log m)$$$. Hovering over all tests, the worst-case running time was 358ms, which is actually faster than I expected. Either the constant is small or there happens to be no testcase that hits us hard (like some alternating case).

Edit: bicsi has another implementation here, using a few clean tricks to cut down implementation size. This implementation also removes some amortization, explained in his comment here.

Thanks for reading :)


By Noam527, history, 4 years ago, In English

Proposal

The feature to "star" or "favorite" a problem already exists; if you go to the problemset archive, you can choose to mark any problem as favorite. Same holds for marking any contest.

Unfortunately, it seems the only use for this feature, is so one could revisit his favorite problems in the future — you can't view the favorite problems of other participants. I think this feature can be easily extended to something more meaningful.

Maintain for each problem, the number of accounts who liked it (now referred to as the likeability of the problem). Then implement the option to filter (or sort) problems by their likeability (hopefully filtering problems by likeability could be combined with sorting by the number of people solved, so that people could find their own difficulty level). Same could go for contest likeability.

Pros

  • I think this can mainly help beginners. I hope that people who try out competitive programming for the first time will first get introduced to good problems. This way, they will get intrigued to solve more problems; I know this is what got me and a few friends to invest in this branch. (Of course, this means that the option to filter or sort problems by likeability should be very apparent to any newcomer.)

  • I believe this will improve the CF problemset as a source of practice, which is probably a main intention of this problem archive.

Cons

I can't find any at the moment. Feel free to point out some in the comments.

Side Notes

  • Perhaps Mike would want to weight the likeability marks, depending on the rating of people who marked it. This is open for discussion of course, but I think Mike would have better expertise in that area, so I'd like to hear his opinion.

  • Though unlikely, I suppose some people could create fake accounts to like their own problems, for unknown reasons. So, it could be implemented to include only rated accounts' marks into the likeability of a problem.

  • Contests' likeability could increase the contribution of all authors. Just a side note though.

Everything is open for discussion. You're welcome to throw in more pros, cons or side notes.


By Noam527, history, 5 years ago, In English

Is it allowed now to discuss the APIO? According to the schedule, the official competition is done by now... but also according to the schedule, the open contest is in about a week.

Edit: It seems like discussions should be kept until after the open contest ends. So, don't scroll down if you want to avoid any kind of spoiler to the open contest (and yes, let's stop discussing until then).

Edit2: Should be good now.


By Noam527, history, 5 years ago, In English

It happens to me once in a while that I want to open some contest on CF to have a look at its problems, or attempt a virtual contest. Of course, for that I need to pick a contest, but (unfortunately?) the quality of problems here is inconsistent and I just need to hope I can pick contests with good quality.

For this and other training purposes, I think a feature to rate contests (possibly using the same system of upvoting blog posts or comments) can be very helpful. I don't think a contest should have the same upvotes as its announcement, since it's common the announcements get downvoted for technical issues like a huge queue, or server crashes, which don't affect virtual participation.

What do you think?


By Noam527, history, 5 years ago, In English

(prologue) Some time ago this crossed my mind, but I only recalled it now and I think it could be worth a small blog post. This is nothing big and rarely useful but nevertheless, I found it interesting so hopefully you will too (don't expect to find this enriching).

It is widely known that the time complexity to compute the GCD (greatest common divisor) of two integers $$$a, b$$$, using the Euclidean algorithm, is $$$O(\log(\min(a, b)))$$$.

Short proof

This bound is nice and all, but we can provide a slightly tighter bound to the algorithm: $$$O\left(1 + \log\frac{\min(a, b)}{\gcd(a, b)}\right)$$$.

We show this bound by adding a few sentences to the above proof: once the smaller element becomes 0, we know that the larger element becomes the resulting gcd. Therefore we can first bound by $$$O\left(1 + \log\frac{\max(a, b)}{\gcd(a, b)}\right)$$$, and lastly notice that we can change the maximum to minimum, since after one step of the algorithm the current maximum is the previous minimum; $$$\min(a, b) = \max(b, a \bmod b)$$$ when $$$a \geq b$$$.

This bound is of course negligible... for a single gcd computation. It turns out to be somewhat useful when used multiple times in a row. I'll explain with an example "problem":

(1) Given an array A of n integers in range [1, M], compute their greatest common divisor.

The solution is of course: we start with the initial answer $$$G = A_0$$$, and iterate over all the remaining elements, assigning to $$$G$$$ the value $$$\gcd(G, A_i)$$$. The known time complexity analysis gives us the bound of $$$O(n \log M)$$$, for computing gcd $$$n$$$ times over values of order $$$M$$$. The tighter analysis actually gives a bound that is asymptotically better: $$$O(n + \log M)$$$ (for practical values of $$$M$$$, you can refer to this as $$$O(n)$$$). Why is that so? Again, we can determine the time complexity more carefully:

The iteration over the array gives us the factor of $$$n$$$, while the rest comes from the gcd computations, which we analyze now; let the value $$$G_i$$$ be equal to $$$G$$$ after the $$$i$$$-th iteration, that is:

  • $$$G_0 = A_0$$$
  • $$$G_i = \gcd(G_{i-1}, A_i)$$$ for $$$1 \leq i \leq n - 1$$$

On the $$$i$$$-th iteration, the gcd computation starts with the two values $$$G_{i-1}, A_i$$$, and results in $$$G_i$$$, so its time complexity is $$$O\left(1 + \log\frac{\min(G_{i-1}, A_i)}{G_i}\right)$$$, which is in the worst case $$$O\left(1 + \log\frac{G_{i-1}}{G_i}\right)$$$, so we will assume it's the latter.

The total number of gcd iterations (up to a constant factor) is:

$$$\sum_{i=1}^{n-1}\left(1 + \log\frac{G_{i-1}}{G_i}\right) = (n - 1) + \log\frac{G_0}{G_{n-1}} \leq n + \log M$$$

And generally speaking, this analysis sometimes allows us to show that the solution is quicker by a factor of $$$\min(n, \log M)$$$.

As a last note, we can use (1) to show more such improvements. For example, (2): consider the problem of gcd queries on ranges, with point updates. Of course, we solve this with a segment tree. The known analysis gives us a bound of $$$O(\log n \log M)$$$ per query or update. We can use (1) to give $$$O(\log n + \log M)$$$; an update consists of a starting value, to which we repeatedly, for $$$O(\log n)$$$ steps, assign its gcd with some other value. Following (1), this takes the desired complexity. The same analysis is done for queries.

If the constraints were $$$n, q \leq 5 \cdot 10^5$$$, $$$A_i \leq 10^{18}$$$, then this shows that instead of doing around $$$6 \cdot 10^8$$$ operations, we do around $$$4 \cdot 10^7$$$ operations (if we follow the big O notation and ignore the constant), which is close to the well known bitset optimization (factor of 1/32).
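To make (2) concrete, here is a minimal sketch of such a gcd segment tree with point assignment updates. This is my own illustrative code (an iterative tree using C++17 std::gcd; the name GcdSegTree is mine), not an implementation taken from anywhere in the post. Each update recomputes gcd's along one leaf-to-root path, and each query combines $$$O(\log n)$$$ canonical nodes, matching the analysis above.

#include <bits/stdc++.h>
using namespace std;

// Sketch of the structure from (2): range gcd queries with point assignment updates.
struct GcdSegTree {
    int sz;
    vector<long long> t;
    GcdSegTree(const vector<long long>& a) {
        sz = 1;
        while (sz < (int)a.size()) sz *= 2;
        t.assign(2 * sz, 0);
        for (size_t i = 0; i < a.size(); ++i) t[sz + i] = a[i];
        for (int v = sz - 1; v >= 1; --v) t[v] = gcd(t[2 * v], t[2 * v + 1]);
    }
    void update(int i, long long x) {          // set a_i = x, then walk up taking gcd's
        t[i += sz] = x;
        for (i /= 2; i >= 1; i /= 2) t[i] = gcd(t[2 * i], t[2 * i + 1]);
    }
    long long query(int l, int r) const {      // gcd over [l, r], inclusive
        long long res = 0;                     // gcd(0, x) = x
        for (int lo = l + sz, hi = r + sz + 1; lo < hi; lo /= 2, hi /= 2) {
            if (lo & 1) res = gcd(res, t[lo++]);
            if (hi & 1) res = gcd(res, t[--hi]);
        }
        return res;
    }
};

int main() {
    GcdSegTree st({12, 18, 24, 8, 16});
    cout << st.query(0, 2) << '\n';            // gcd(12, 18, 24) = 6
    st.update(1, 9);
    cout << st.query(0, 2) << '\n';            // gcd(12, 9, 24) = 3
}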

Thanks for reading :). As a question to the reader: What other tasks can utilize this, like (2) can?


By Noam527, history, 6 years ago, In English

In the problem archive right now, we are able to sort the problems by the number of people who solved them (and also filter by the subjects a problem requires), but that's it.

I was wondering whether we should have more options to sort by. For example, the feature to make a contest or a problem one of your "favourites" already exists, so what about being able to sort the problems in the archive by the number of people that made it one of their favourites? (Could also be applied to favourite contests.)

I'd be happy if more options were suggested in the comments, as I don't have many, but I am interested to hear some more.

MikeMirzayanov (Hopefully this is worth the mike-tag :P.)


By Noam527, history, 6 years ago, In English

Hey!

I am not related to the organization of this competition, however there doesn't seem to be any blog post regarding it (and there's not much time left), which is unfortunate so that's why I'm writing this :).

Today (in approximately 4.5 hours, 14:00 UTC) starts this year's COI — Croatian Olympiad in Informatics. The contest is open and the registration page is here (while this page contains a redirect to the registration, along with their past problemsets).

The competition should be close to IOI style; It seems to contain ~4 problems and from what I understood, has a duration of 5 hours. A big difference is that there is no full feedback during the contest: every submission is only tested on the samples provided during the contest (I would expect them to change this format for COI but it doesn't seem to be the case).

Good luck! Let's discuss the problems and the solutions here after the contest ends :).


By Noam527, history, 6 years ago, In English

This is a quick question; I'd like to know if there is any place (some online judge possibly) in which I could put 2 programs of mine to interact with each other, just like interactive problems work here (the judge's program interacts with the submitted program).

And if there seems to be none, I'm wondering what's the best way I could implement something to do this for me.

Thanks :).


By Noam527, history, 6 years ago, In English

I have 2 questions in mind, just out of curiosity :). They both consider tries (not compressed). Also note: by a trie's size I'm not counting its root (even though it does not matter for the purpose of this problem).

1) For some 2 natural numbers N and K (suppose K ≤ N), what is the maximal size of a trie that can be achieved if you take some string S of length N, with the size of its language (number of distinct characters it can contain) being K, and insert into the trie all its suffixes (so, a suffix trie)?

For instance, if N = 2, K = 2, we can form the string "ab", which makes the trie of size 3. On the other hand, if N = 10, K = 1, the only string we can form is "aaaaaaaaaa" which makes the trie size 10.

2) How do you form such a string with maximal trie size? An algorithm or an idea, both are acceptable.

Thanks.


By Noam527, history, 6 years ago, In English

Usually the wavelet tree is built so as not to support updates. I wonder what types of updates it can receive that still keep all its operations in $$$O(\log A)$$$, where $$$A$$$ is the range of values it gets. For instance, the only one I found is that you can support appending or removing an element at the back of the array (on which the wavelet tree is built).

A short tutorial for this data structure can be found here, for those who are interested.


By Noam527, history, 6 years ago, In English

I really want to know what's the shortest problem statement in CF but I'm unable to find a blog post talking about it. So, what's the shortest problem statement you know of in CF?


By Noam527, history, 6 years ago, In English

Here is a problem I thought of. This is just for others' interest, maybe to help them pass the time. I thought of this problem initially because I wanted to put it in a contest; however, it turned out to be too hard for me and a few others, so it shouldn't be in a contest:

You need to construct a permutation of length N. You're given M pairs of integers (each pair contains 2 distinct elements, both between 1 and N). The permutation you construct needs to have the following value minimized: for each pair {X1, X2}, add to the total sum the distance (in indices) between the values X1 and X2. You can print any correct answer if multiple ones exist.

input:
N M
M pairs, described in the statement
output:
the minimized value described in the statement on the 1st line, and the permutation on the 2nd one.

examples:
input:
3 2
1 2
3 1
output:
2
3 1 2

input:
4 4
1 2
4 2
3 1
3 4
output:
6
2 1 4 3


By Noam527, history, 7 years ago, In English

I came up (I think, I just didn't see it anywhere) with this problem earlier today and I think it's better if I ask for help here as I still don't know its solution:

You have an array of N integers and in each step you can merge any 2 of them into a single integer (and then this integer is added to the array while the previous 2 are erased from it). The cost of merging two elements X and Y is max(X, Y).

You need to find the minimal cost you can get after merging all the integers into 1 (essentially, doing N - 1 merges).

Currently there's no constraint (not on N, nor on the limit of the integers) so just try to come up with the best complexity you can.

Examples:

input:
3
4 6 8
output:
16

(merging 4 and 6 to 10 with the cost of 6, and then merging 8 and 10 with the cost of 10, giving a total of 16).

input:
4
10 11 13 16
output:
55
(10->16) + (11->13) + (24->26) = 16 + 13 + 26 = 55.


By Noam527, history, 7 years ago, In English

I came up with the following problem a few days ago and I didn't manage to solve it in a better complexity than O(N lg N + QN); maybe you can help, as I'm stuck and starting to think it's impossible:

You're given an array of N elements, and Q queries. Each query consists of a single integer X, and you're asked to determine if there's a pair of integers in the array whose sum is X, or if there isn't any.

I didn't determine any limits, but the lower, the better. Some bounds can be put on the maximal value of an element in the array, I don't mind, as long as someone finds a solution better than mentioned earlier. It would also be better if the algorithm were able to output the indices of the 2 integers with sum X.

I'd also be happy if someone proves it to be impossible.
