### whzzt's blog

By whzzt, history, 4 months ago,

The statement and dataset of this problem can be found here.

Short statement: there are $N \le 20000$ objects with value $0$ or $1$. Each time you can select some objects and permute them to form a 0-1 sequence, and the interactor will tell you the total number of 0-1 and 1-0 shifts in the sequence (e.g., 011000 has 2 shifts and 000101 has 3 shifts). It is guaranteed that the value of the 1st object is $0$. You are required to find out the number of objects with value $0$ within $226$ queries ($100\%$) / $904$ queries ($>25\%$), and the total length of the sequences over all queries is bounded by $10^5$.

On first encountering the problem, a linear algorithm comes to mind directly. We can check whether there is a shift in the value sequence $0, x$: if so, $x = 1$; otherwise $x = 0$. Hence we can solve the problem within $N$ queries, getting $10\%$ as a result.
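As a sanity check, the interactor and the linear algorithm can be simulated in a few lines (a minimal sketch; `shifts` stands in for the interactor, and `values` is a made-up hidden input):

```python
def shifts(seq):
    # number of adjacent positions whose values differ (0-1 and 1-0 shifts)
    return sum(a != b for a, b in zip(seq, seq[1:]))

# hypothetical hidden values; object 0 is guaranteed to be 0
values = [0, 1, 1, 0, 1, 0, 0, 1]

# linear algorithm: query the pair (object 0, object i) for each i
recovered = [0] + [shifts([0, values[i]]) for i in range(1, len(values))]
assert recovered == values
print(sum(1 for v in recovered if v == 0))  # prints 4: objects with value 0
```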

We can then make some tiny optimizations without changing the linearity of the algorithm. After two queries, we will have at least two $0$s or two $1$s. Then we can set the value sequence to $x, 0, y, 0$ or $x, 1, y, 1$, so that the values of $x$ and $y$ can be determined simultaneously. This leads to an approach with $\frac{N}{2} + 1$ queries, which gets $25\%$.
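Concretely, the response to the sequence $x, 0, y, 0$ is exactly $x + 2y$, so both unknowns can be read off from a single number (a small sketch, with `shifts` again standing in for the interactor):

```python
def shifts(seq):
    # interactor: number of adjacent 0-1 and 1-0 shifts
    return sum(a != b for a, b in zip(seq, seq[1:]))

for x in (0, 1):
    for y in (0, 1):
        r = shifts([x, 0, y, 0])   # x -> 0 contributes x; 0 -> y -> 0 contributes 2y
        assert r == x + 2 * y
        assert (r % 2, r // 2) == (x, y)  # recover both values at once
```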

For a higher score, we need a sublinear algorithm. As we only need the sum of the values of the objects, we do not have to determine the value of each object individually. One observation is that we can take advantage of the objects we have already determined, and infer the sum of values by asking $a, X, b, X, c, X, d, X, \ldots, X$, where $X = 0$ or $1$ depending on which kind of determined object we have more of (so that the sequence can contain more unseen objects for counting). Taking $X = 0$ as an example, the number returned by the interactor is just $a + 2b + 2c + 2d + \ldots$. Notably, the value of $a$ can be determined from the last bit of the result, since every object is counted twice except $a$. Hence we gain more objects with determined values (which can be used as $X$ to extend the length of our queries) while counting the others. Directly applying this approach enables us to solve for $N \le \sum_{i = 1}^{m + 1} \lceil i / 2 \rceil$ within $m$ queries, which gets $80\%$.
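The counting query and its decoding can be simulated as follows (a sketch assuming $X = 0$): the response to $a, 0, b, 0, c, 0, \ldots, 0$ equals $a + 2(b + c + \cdots)$, so the parity reveals $a$ and the remainder gives the sum of the other unknowns.

```python
def shifts(seq):
    # interactor: number of adjacent 0-1 and 1-0 shifts
    return sum(u != v for u, v in zip(seq, seq[1:]))

def counting_query(unknowns):
    # interleave the unknowns with known zeros: a,0,b,0,c,0,...,0
    seq = []
    for u in unknowns:
        seq += [u, 0]
    return shifts(seq)

hidden = [1, 0, 1, 1, 0]   # hypothetical values of a, b, c, d, e
r = counting_query(hidden)
a = r % 2                  # a is the only value counted once
rest = (r - a) // 2        # every other unknown is counted twice
assert a == hidden[0]
assert rest == sum(hidden[1:])
```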

However, one should notice that putting more than one unknown between two $X$s, or mixing both $1$s and $0$s as separators, seems useless. The former is exactly why we use a pattern like $a, X, b, X, \ldots$; the latter is because, if a query only yields the value $p - q$, we must perform another query to solve for $p$ and $q$, so why not ask about $p$ and $q$ individually? This may leave us confused about how to optimize the algorithm further.

In fact, there is still an easy optimization of the current algorithm. We can divide the program into two phases: in the first phase, we use patterns like $x, 0, y, 0$ and determine two variables at a time; in the second phase, we use the determined variables to perform counting queries. Balancing the two phases (roughly 3:4 in number of queries) gives $244$ queries as a result (about $1.75 \sqrt{N}$ according to calculation), which gets $92.62\%$.

We now reformulate the statement (into a stronger form) according to the discussion above. We hold a value $k$, initially $k = 1$. In each step, we may ask for the sum of no more than $\lceil k / 2 \rceil$ objects. Once we know the value of some specific object, $k$ increases by one; moreover, after each step $k$ automatically increases by one (the parity of the response always determines the first object of the query). Our target is to determine the sum of all objects. Our two-phase algorithm is then just to ask about one object at a time in the first phase, and about as many objects as possible in the second phase.
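The two-phase query count can be checked numerically under this reformulation (an idealized model, assuming each first-phase query determines two objects and each second-phase query covers $\lceil k/2 \rceil$ fresh objects while determining one more; the real algorithm's bookkeeping may differ slightly):

```python
N = 20000

def total_queries(p):
    # phase 1: p queries of the form x,0,y,0, each determining two objects
    k = 1 + 2 * p          # objects with known value (object 1 is free)
    covered = k            # objects accounted for so far
    q = 0
    # phase 2: each query covers ceil(k/2) fresh objects; the parity of the
    # response determines one of them, so k grows by one per query
    while covered < N:
        covered += (k + 1) // 2
        k += 1
        q += 1
    return p + q

best = min(total_queries(p) for p in range(300))
print(best)  # prints 244, matching the count reported above
```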

Intuitively, the bottleneck of our algorithm seems to be that the first phase only asks about one object at a time, which extracts only $1$ bit of information per query. This still seems hard to optimize, but if you have met a certain other problem before, this one becomes easier. The target of that problem is to recover a 0-1 sequence of length $n$ within $O(n / \log n)$ steps by asking for the sum of some subset of elements (for example, $n = 1000$, solved in $300$ steps).

The solution of that problem is a divide-and-conquer-like approach. Let $f_m$ be the length of the longest sequence we can recover within $2^m$ steps, and let $q_m$ be its query sequence, with the last step asking for the sum of the whole sequence. Then we have $f_{m + 1} \ge 2 f_m + 2^m - 1$. This is done by dividing the sequence into two blocks of size $f_m$ and one block of size $2^m - 1$. We first ask for the total number of ones $c$ in the second block of size $f_m$. For each non-last query $q_{m, i}$ in $q_m$, applying it to the two $f_m$ blocks would give $a$ and $b$ respectively. We construct two queries asking for $a + b$ and $a + (c - b) + i$, where $i$ is a bit in the last block; then clearly we can recover $a$, $b$, and $i$ from the two responses. The last query is used to obtain the total number of ones in the whole sequence. We apply this approach recursively, and finally we are done within $O(n / \log n)$ steps.
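The recursion can be implemented directly (a sketch of the construction, 0-indexed; each query is represented as a set of positions, and by convention the last query of every level is the full sum):

```python
import random

def build_queries(i):
    """Return (f_i, list of 2**i index sets); the last set is the full set."""
    if i == 0:
        return 1, [frozenset({0})]
    f, qs = build_queries(i - 1)
    n = 2 * f + (1 << (i - 1)) - 1          # f_i = 2 f_{i-1} + 2^{i-1} - 1
    out = []
    for j, S in enumerate(qs[:-1]):          # every query except the full-sum one
        out.append(S | {x + f for x in S} | {2 * f + j})
        out.append(S | {x + f for x in range(f) if x not in S})
    out.append(frozenset(range(f, 2 * f)))   # total of the second block
    out.append(frozenset(range(n)))          # total of the whole sequence
    return n, out

def recover(i, resp):
    """Reconstruct the bits from the responses to build_queries(i)."""
    if i == 0:
        return [resp[0]]
    f, _ = build_queries(i - 1)
    pairs = (1 << (i - 1)) - 1               # number of paired queries
    m, total = resp[2 * pairs], resp[2 * pairs + 1]
    r1, r2, extra = [], [], []
    for j in range(pairs):
        a, b = resp[2 * j], resp[2 * j + 1]
        r1.append((a + b - m) // 2)          # sum over S in the first block
        r2.append((a - b + m) // 2)          # sum over shifted S in the second
        extra.append((a + b + m) % 2)        # the lone bit at position 2f + j
    r2.append(m)
    r1.append(total - m - sum(extra))        # total of the first block
    return recover(i - 1, r1) + recover(i - 1, r2) + extra

# sanity check: 2^4 = 16 queries recover f_4 = 4 * 2^3 + 1 = 33 bits
random.seed(1)
n, qs = build_queries(4)
bits = [random.randrange(2) for _ in range(n)]
resp = [sum(bits[x] for x in S) for S in qs]
assert n == 33 and len(qs) == 16
assert recover(4, resp) == bits
```

Each level turns the $2^i$ responses on the long sequence back into two full response vectors for the previous level (one per block) plus the $2^i - 1$ extra bits, so the recursion bottoms out at single bits.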

Thus in the first phase of our algorithm, we can try to use a batch of $2^m$ queries to obtain $f_m$ determined objects, provided we already have enough determined objects to serve the required sums. We can sort the required bits in $q_m$ to get the required initial bits, then do a DP on the number of queries (taking the increments of $k$ in each step into consideration). Finally, we enumerate the position at which to begin the second phase. This yields $209$ queries in total, which is much smaller than the bound given in the problem. A tiny optimization reducing the number of ones required in the last query of $q_m$ further enables us to solve the problem in $203$ queries.

The implementation of this approach can be found here.


 » 4 months ago, # |   0 Can you explain in more detail how we can recover the 0-1 sequence in $O(n/\log n)$ steps, please? Or maybe provide a link to the problem if it exists on Codeforces?
•  » » 4 months ago, # ^ | ← Rev. 2 →   +21 We maintain a query sequence $q_i$ (consisting of sets, indexed from 1) of length $2^i$ for recovering a sequence of length $f_i$. Initially, $i = 0$ and $f_i = 1$, $q_i = \lbrace \lbrace 1\rbrace \rbrace$.

Each time, we recursively obtain $q_{i + 1}$ based on $q_i$ and use $2^{i + 1}$ queries to recover a 0-1 sequence of length $2 f_i + 2^i - 1$. Throughout the process, we maintain that the last term of $q_i$ queries the sum of the whole sequence.

We perform the following two queries for the $j$-th set $S \in q_i$ which is not a complete set: $S \cup \lbrace x + f_i \mid x \in S\rbrace \cup \lbrace 2 f_i + j \rbrace$ and $S \cup \lbrace x + f_i \mid 1 \le x \le f_i, x \not \in S\rbrace$. The remaining two queries are $\lbrace x \mid f_i + 1 \le x \le 2 f_i \rbrace$ and $\lbrace x \mid 1 \le x \le 2 f_i + 2^i - 1 \rbrace$, which is consistent with our assumption.

From each pair of queries, we get results $a$ and $b$. Suppose the number of ones among the items indexed from $f_i + 1$ to $2 f_i$ is $m$; then $\lfloor (a + b - m) / 2 \rfloor$ gives the sum over $S$, $\lfloor (a - b + m) / 2 \rfloor$ gives the sum over $\lbrace x + f_i \mid x \in S\rbrace$, and $(a + b + m) \bmod 2$ gives the value at position $2 f_i + j$.

This enables us to use $2^n$ queries to recover a sequence of length $n 2^{n - 1} + 1$ since $f_n = n 2^{n - 1} + 1$.
•  » » » 6 weeks ago, # ^ | ← Rev. 2 →   0 Very cool!! This is also clearly optimal, since we are discovering $n$ bits of information, and each query yields $\log n$ bits.

Just two typos:

- To recover bit $2f_i+j$ I think you need to compute $(a+b+m)\bmod 2$ rather than $(a+b)\bmod 2$.
- In the final sentence, it should say $f_n = n 2^{n-1}+1$, rather than $2^n(\log n-1)$.

Do you know who originally came up with this algorithm?
•  » » » » 6 weeks ago, # ^ |   0 Updated. I do not even remember where the problem originally came from :(
 » 3 months ago, # |   0 How do we solve the main problem if we can recover a 0-1 sequence given a sum?
•  » » 3 months ago, # ^ |   0 The high-level idea is that we divide the main problem into two phases: in the first phase, we determine the values of some positions; in the second phase, we query the sums of the remaining positions. Recovering a 0-1 sequence is used in the first phase to determine the values of more positions, so that the main problem can be solved within the bounded number of queries.