The contest announcement and invitation link is here. The contest link is here.

## A. Artistic Masterpiece

**Tutorial**

Since $$$(x,y)$$$ is covered by the brush iff $$$(x - x_c)^2 + (y - y_c)^4 \leq r$$$, that means we can decompose the area of the brush into $$$O(\sqrt[4]{r})$$$ independent 1-dimensional intervals (a contiguous block with the same $$$y$$$-coordinate).

For each center $$$(x_c,y_c)$$$ traversed by the brush, add the 1-dimensional intervals for $$$y \in [y_c - \sqrt[4]{r}, y_c + \sqrt[4]{r}]$$$. After adding these intervals, the problem is now reduced to computing the area union of 1-dimensional intervals for each separate $$$y$$$-coordinate.
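The interval decomposition can be sketched as follows; `brush_intervals` is an illustrative helper (integer coordinates and inclusive interval endpoints are assumptions, not given in the statement):

```python
from math import isqrt

def brush_intervals(xc, yc, r):
    """Decompose the brush centered at (xc, yc) into one x-interval per
    y-coordinate: (x - xc)^2 + (y - yc)^4 <= r holds iff
    |x - xc| <= floor(sqrt(r - dy^4)) for dy = y - yc with dy^4 <= r."""
    intervals = []  # (y, x_lo, x_hi), endpoints inclusive
    dy = 0
    while dy ** 4 <= r:
        w = isqrt(r - dy ** 4)
        intervals.append((yc + dy, xc - w, xc + w))
        if dy > 0:
            intervals.append((yc - dy, xc - w, xc + w))
        dy += 1
    return intervals
```

The union step then groups these triples by their $$$y$$$-coordinate and computes a 1-dimensional interval union per group.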

## B. Broken Calculator

**Tutorial**

First, we consider the minimum cost to get from an integer $$$X$$$ to another integer $$$Y$$$ using the operations *without using the reset operation*. Suppose we fix the number $$$b$$$ of times we use the doubling operation. Then $$$X 2^b \leq Y$$$ must hold. Moreover, if we use $$$a_i$$$ increment operations after the $$$i$$$-th and before the $$$(i+1)$$$-th doubling operation, then $$$X$$$ will become $$$X 2^b + a_0 2^b + a_1 2^{b-1} + \ldots + a_b$$$, using a total of $$$a_0 + a_1 + \ldots + a_b$$$ increment operations.

Let $$$D = Y - X 2^b$$$; then we can determine the optimal values for $$$a_0,a_1,\ldots,a_b$$$ by looking at the binary representation of $$$D$$$. Since $$$X,Y \leq 10^{18}$$$, we can bound $$$b$$$ by $$$O(\log \max(X, Y))$$$. For each fixed value $$$b$$$, we can find $$$a_0,a_1,\ldots,a_b$$$ in $$$O(\log \max(X, Y))$$$ as well. Thus, we can find the cost of going from $$$X$$$ to $$$Y$$$ without the reset operation in $$$O(\log^2 \max x)$$$. We can also further optimize this and find the cost in $$$O(\log \max x)$$$.
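As a sketch of the reset-free cost computation, assuming for illustration that each increment and each doubling costs one unit (substitute the problem's actual operation costs as the two weights):

```python
def cost_no_reset(X, Y, inc_cost=1, dbl_cost=1):
    """Minimum cost to go from X to Y using +1 and *2 only (X >= 1).

    For a fixed number of doublings b (requires X * 2^b <= Y), let
    D = Y - X * 2^b.  Writing D = a_0*2^b + a_1*2^(b-1) + ... + a_b, the
    minimum number of increments is (D >> b) + popcount(D mod 2^b).
    """
    assert X >= 1
    best = None
    b = 0
    while X << b <= Y:
        D = Y - (X << b)
        incs = (D >> b) + bin(D & ((1 << b) - 1)).count("1")
        cost = b * dbl_cost + incs * inc_cost
        if best is None or cost < best:
            best = cost
        b += 1
    return best  # None if Y < X (unreachable without a reset)
```

For example, going from $$$3$$$ to $$$10$$$ costs three operations: two increments to reach $$$5$$$, then one doubling.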

Let $$$\texttt{cost}(X,Y)$$$ denote the cost of going from $$$X$$$ to $$$Y$$$ without the reset operation. Next, we build an adjacency matrix $$$a_{i,j}$$$, which denotes the cost of going from $$$x_i$$$ to $$$x_j$$$ while also allowing the reset operation; it is equal to $$$a_{i,j} = \min(\texttt{cost}(x_i, x_j),\ C + \texttt{cost}(0, x_j))$$$, where $$$C$$$ is the cost of the reset operation.

Finally, we find the minimum-cost perfect (bipartite) matching using the adjacency matrix $$$a$$$. The total weight of the matching, minus $$$C$$$, is equal to the answer.

Note that a perfect matching corresponds to a decomposition of the complete graph into cycles; specifically, if the vertex with index $$$i$$$ on the left part is matched with $$$j$$$ on the right part, this corresponds to having the outgoing edge of vertex $$$i$$$ (in the cycle decomposition) set to vertex $$$j$$$. However, we further note that all cycles visit the integer $$$0$$$, since there is at least one reset operation (as a cycle must have $$$x_i \geq x_j$$$ for some adjacent nodes in the cycle). Therefore, this cycle decomposition actually forms a valid solution to this special instance of the traveling salesman problem by visiting all integers $$$0, x_1, x_2, \ldots, x_n$$$.

We can build the adjacency matrix in $$$O(n^2 \log^2 \max x)$$$ or $$$O(n^2 \log \max x)$$$, then use the Hungarian algorithm to find a perfect matching in $$$O(n^3)$$$.

## C. Counting Trees

**Tutorial**

The condition is equivalent to choosing $$$c-1$$$ edges that will be deleted. However, if we fix the $$$c-1$$$ edges to be deleted, then there are only two possible colorings that result in these edges being deleted. Thus, the number of valid colorings is simply $$$2 \cdot \binom{n-1}{c-1}$$$.

Note that $$$10^{10} + 3233 = 3 \times 11 \times 101 \times 3\,000\,301$$$, and all of these factors are prime. For each fixed prime $$$p \in \{3, 11, 101, 3\,000\,301\}$$$, we can compute $$$\binom{n-1}{c-1} \bmod p$$$ using Lucas's theorem in $$$O(\log n)$$$, after first precomputing $$$i! \bmod p$$$ for all $$$0 \leq i \leq p-1$$$.

Thus, we can get the answer modulo $$$p$$$ for all $$$p \in \{3, 11, 101, 3\,000\,301\}$$$. To find the answer modulo $$$10^{10}+3233$$$, we can use the Chinese Remainder Theorem (three times) to find the answer modulo $$$3 \times 11$$$, then $$$3 \times 11 \times 101$$$, and finally $$$10^{10} + 3233$$$.

The total time complexity is $$$O(p)$$$ for $$$p \in \{3, 11, 101, 3\,000\,301\}$$$ to first precompute the factorials for each different prime modulus, then $$$O(\log n)$$$ per query to compute $$$\binom{n-1}{c-1}$$$ using Lucas's theorem and the Chinese Remainder Theorem.
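A sketch of the Lucas/CRT combination described above (function names are illustrative; `pow(m, -1, p)` requires Python 3.8+):

```python
def lucas(n, k, p, fact, inv_fact):
    """C(n, k) mod p via Lucas's theorem; fact/inv_fact are tables mod p."""
    res = 1
    while n or k:
        ni, ki = n % p, k % p
        if ki > ni:
            return 0
        res = res * fact[ni] % p * inv_fact[ki] % p * inv_fact[ni - ki] % p
        n //= p
        k //= p
    return res

def crt(r1, m1, r2, m2):
    """Combine x = r1 (mod m1) and x = r2 (mod m2), with m1, m2 coprime."""
    t = (r2 - r1) * pow(m1, -1, m2) % m2
    return r1 + m1 * t, m1 * m2

def binom_mod(n, k, primes=(3, 11, 101, 3000301)):
    """C(n, k) modulo the product of `primes` (10^10 + 3233 by default)."""
    r, m = 0, 1
    for p in primes:
        fact = [1] * p
        for i in range(1, p):
            fact[i] = fact[i - 1] * i % p
        inv_fact = [1] * p
        inv_fact[p - 1] = pow(fact[p - 1], p - 2, p)
        for i in range(p - 1, 0, -1):
            inv_fact[i - 1] = inv_fact[i] * i % p
        r, m = crt(r, m, lucas(n, k, p, fact, inv_fact), p)
    return r
```

In the real solution the factorial tables would be built once up front, not per query.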

## D. Decision Making

**Tutorial**

We build the Aho-Corasick automaton $$$A$$$ on the given strings. Furthermore, for each state $$$s \in A$$$, we precompute $$$\texttt{win}(s)$$$, denoting the winner if we arrive at state $$$s$$$. Let $$$\texttt{nxt}(s,0)$$$ denote the state if we are currently at state $$$s$$$ and the coin flip results in $$$\texttt{H}$$$, and $$$\texttt{nxt}(s,1)$$$ if it results in $$$\texttt{T}$$$.

First, we fix the winner $$$w$$$. Consider the following naive solution. Let $$$p(s)$$$ be the probability of winning if we initially start at state $$$s$$$. $$$p(s) = 1$$$ if $$$\texttt{win}(s) = w$$$ and $$$p(s) = 0$$$ if $$$\texttt{win}(s) = i$$$ for $$$1 \leq i \leq n$$$, $$$i \neq w$$$. Then, for all other states $$$s$$$ with $$$\texttt{win}(s) = -1$$$, the equation $$$2 p(s) = p(\texttt{nxt}(s,0)) + p(\texttt{nxt}(s,1))$$$ must hold. If we let $$$p(s)$$$ be variables, then we can use Gaussian elimination to solve this set of linear equations. This runs in $$$O((\sum |s_i|)^3)$$$ per winner, for a total of $$$O(n (\sum |s_i|)^3)$$$.

We can slightly optimize the solution above. First, notice that if we have a different winner, the equations only change slightly. Specifically, if the equations are in the form $$$\mathbf{Ax} = \mathbf{b}$$$, then only the values in $$$\mathbf{b}$$$ are different. Therefore, we can first compute $$$\mathbf{A}^{-1}$$$ in $$$O((\sum |s_i|)^3)$$$, then compute the answer for each winner in $$$O(\sum |s_i|)$$$.

We try to optimize this solution. Consider the underlying trie in the Aho-Corasick automaton and its relation with $$$\texttt{nxt}$$$. Now, instead of assigning a new variable for every state $$$s$$$, we do it "lazily" in the following manner with the BFS or DFS order of the states $$$s$$$ using the trie edges:

- If we have not yet expressed $$$p(s)$$$ as a sum of currently assigned variables, we assign it a new variable: $$$p(s) \gets x_k$$$. In this case, we add a new variable.
- If we have expressed neither $$$p(\texttt{nxt}(s,0))$$$ nor $$$p(\texttt{nxt}(s,1))$$$ as a sum of currently assigned variables, we assign one of them a new variable $$$x_k$$$ and recurse to the case below. In this case, we add a new variable.
- If we have not expressed exactly one of $$$p(\texttt{nxt}(s,0))$$$ or $$$p(\texttt{nxt}(s,1))$$$ as a sum of currently assigned variables, we can determine its value as a sum of currently assigned variables via $$$p(\texttt{nxt}(s,i)) = 2 p(s) - p(\texttt{nxt}(s,1 - i))$$$. In this case, we do not add a new variable.
- If we have expressed both $$$p(\texttt{nxt}(s,0))$$$ and $$$p(\texttt{nxt}(s,1))$$$ as sums of currently assigned variables, we do nothing. In fact, this case never occurs without running into the cases above first.

In this way, by "lazily" assigning variables, we can reduce the number of variables used. It remains to find out the total number of variables assigned.

Since we do the above in BFS or DFS order, for the root node (denoting the empty string), we run into the first case and assign it a new variable. Let $$$\texttt{trie}(s,c)$$$ denote the state after state $$$s$$$ by following the trie edge with character $$$c$$$.

In the following, we have an invariant: when visiting state $$$s$$$ for the first time, we know its expression as the sum of currently assigned variables. For the root node, since we initialized it with a new variable beforehand, it is true. For all other nodes, there are only a few cases:

- If $$$\texttt{trie}(s,0)$$$ and $$$\texttt{trie}(s,1)$$$ do not exist, then we must be in a winning state. Thus, we do not need to add a new variable. Add the current expression of $$$p(s)$$$ as the sum of variables as a row to the matrix $$$\mathbf{A}$$$.
- If only one of $$$\texttt{trie}(s,0)$$$ or $$$\texttt{trie}(s,1)$$$ exists, then exactly one of $$$\texttt{nxt}(s,0)$$$ or $$$\texttt{nxt}(s,1)$$$ leads to an unvisited state. Since we know the current $$$p(s)$$$ as a sum of variables, and we know one of $$$p(\texttt{nxt}(s,c))$$$ (the previously visited one) as a sum of variables, we can deduce the other, unvisited $$$p(\texttt{nxt}(s,1-c))$$$ as a sum of currently assigned variables.
- If both $$$\texttt{trie}(s,0)$$$ and $$$\texttt{trie}(s,1)$$$ exist, then both $$$\texttt{nxt}(s,0)$$$ and $$$\texttt{nxt}(s,1)$$$ lead to unvisited states. We assign one of them, $$$p(\texttt{nxt}(s,c))$$$, a new variable, and reduce it to the case above.

Thus, the only time we add a new variable is during the third case when $$$\texttt{trie}(s,0)$$$ and $$$\texttt{trie}(s,1)$$$ both exist. However, for a trie of $$$n$$$ binary strings, there are at most $$$n-1$$$ nodes with 2 children. Therefore, with the "lazy" method of assigning variables, we assign at most $$$n$$$ variables.

We can compute all $$$p(s)$$$ as the sum of the assigned variables in $$$O(n \sum |s_i|)$$$, compute $$$\textbf{A}^{-1}$$$ in $$$O(n^3)$$$, and evaluate each winner in $$$O(n)$$$. The total time complexity is $$$O(n^3 + n \sum |s_i|)$$$.

**Remark**: We set $$$n \leq 100$$$ to allow $$$O(n^4)$$$ solutions to pass; e.g., if we do Gaussian elimination for each winner in $$$O(n^3)$$$.

## E. Erosion Filter

**Tutorial**

We can use a slightly modified prefix sum to solve this problem. Consider the prefix sum $$$p(i) = A[i] + \frac{1}{2} p(i-1)$$$, and the suffix sum $$$s(i) = A[i] + \frac{1}{2} s(i+1)$$$. Then $$$B[i] = p(i)+s(i)-A[i]$$$. We can compute these in $$$O(n)$$$.
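A sketch of the computation, assuming the filter is $$$B[i] = \sum_j A[j] / 2^{|i-j|}$$$ as the recurrences suggest:

```python
def erosion(A):
    """B[i] = sum over j of A[j] / 2^{|i - j|}, via prefix/suffix sums."""
    n = len(A)
    p = [0.0] * n  # p[i] = A[i] + p[i-1] / 2
    s = [0.0] * n  # s[i] = A[i] + s[i+1] / 2
    for i in range(n):
        p[i] = A[i] + (p[i - 1] / 2 if i > 0 else 0.0)
    for i in range(n - 1, -1, -1):
        s[i] = A[i] + (s[i + 1] / 2 if i < n - 1 else 0.0)
    # A[i] is counted in both p[i] and s[i], so subtract it once
    return [p[i] + s[i] - A[i] for i in range(n)]
```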

There's also an alternative solution by taking advantage of floating-point precision. For each $$$B[i]$$$, we just need to compute the contribution of the values $$$A[j]$$$ for $$$j \in [i-64, i+64]$$$. This is correct since $$$n \cdot \max A \cdot \epsilon^{-1} \leq 2^{64}$$$, and runs in $$$O(n \cdot k)$$$ with $$$k=64$$$.

## F. Fizz Buzz

**Tutorial**

Instead of constructing $$$F_i$$$ from $$$F_{i-1}$$$ by replacing all $$$\texttt{F}$$$s with $$$F_0$$$, we can construct $$$F_i$$$ by replacing all $$$\texttt{F}$$$s in $$$F_0$$$ with $$$F_{i-1}$$$. Thus, let $$$\texttt{len}(i)$$$ denote the number of characters in $$$F_i$$$. Then $$$\texttt{len}(0)=33$$$, and $$$\texttt{len}(i) = 33 - 3 + 3\,\texttt{len}(i-1)$$$ from the construction above. Note that $$$\texttt{len}(i)$$$ grows exponentially, and thus with $$$K = O(\log 10^{18})$$$, we will have $$$\texttt{len}(K) > 10^{18}$$$.

For each query $$$(X,Y)$$$, we first set $$$Y \gets \min(Y, K)$$$. Next, we carefully consider the structure of $$$F_Y$$$ using the alternative recursive expression: $$$F_Y$$$ is $$$F_0$$$ with each of its three $$$\texttt{F}$$$s replaced by a copy of $$$F_{Y-1}$$$, which partitions $$$F_Y$$$ into literal blocks from $$$F_0$$$ interleaved with three copies of $$$F_{Y-1}$$$.

We can use the precomputed $$$\texttt{len}(Y-1)$$$ to determine which part of this partition $$$X$$$ falls into. If $$$X$$$ falls into a copy of $$$F_{Y-1}$$$, we recursively solve the problem with $$$Y \gets Y-1$$$ (shifting $$$X$$$ accordingly). Since $$$Y \leq K = O(\log 10^{18})$$$ after clamping, the total time we need to answer each query is $$$O(K)$$$.
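A sketch of the recursive query, with a placeholder $$$F_0$$$ (the actual 33-character template with three $$$\texttt{F}$$$s should be taken from the problem statement):

```python
# Hypothetical 33-character template with exactly three replaceable 'F's;
# substitute the real F_0 from the problem statement.
F0 = "fizzbuzzFfizzbuzzFfizzbuzzFfizzbu"

LIMIT = 10 ** 18
length = [33]                       # length[i] = len(F_i)
while length[-1] < LIMIT:
    length.append(30 + 3 * length[-1])
K = len(length) - 1                 # beyond K, the first 10^18 chars agree

def char_at(X, Y):
    """0-indexed character X of F_Y, assuming X < len(F_Y)."""
    Y = min(Y, K)
    while Y > 0:
        for ch in F0:
            if ch != 'F':
                if X == 0:
                    return ch
                X -= 1
            else:
                if X < length[Y - 1]:   # descend into this copy of F_{Y-1}
                    Y -= 1
                    break
                X -= length[Y - 1]      # skip the whole copy
        else:
            raise IndexError("X out of range for F_Y")
    return F0[X]
```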

## G. Geometry is Fun

**Tutorial**

The first observation we need is that it is enough to consider the convex hull of vectors $$$V_i$$$ when determining the polygon formed by $$$S_i$$$. Furthermore, since the vectors are randomly generated, the convex hull size is quite small, around the order of $$$O(\log |V_i|)$$$.

Now, let's solve the problem when $$$q=1$$$. Consider the set of polygons. We add an edge from polygon $$$i$$$ to polygon $$$j$$$ if they intersect. We also add an edge from polygon $$$i$$$ to the top wall if it intersects, and similarly for the bottom wall. Then, the answer is equal to the minimum cost to remove vertices in this graph, where each vertex has its own cost, such that the nodes representing the top and bottom walls are disconnected.

But this subproblem is just a variant of the minimum-cut problem! We can solve it by splitting each vertex $$$v$$$ into $$$v_{\text{in}}$$$ and $$$v_{\text{out}}$$$, and connecting them with a directed edge of cost $$$\texttt{cost}(v)$$$.

To solve for queries, notice that after answering the $$$j$$$-th query, to answer the $$$(j+1)$$$-th query we need to add some (possibly no) edges to the graph, then find the minimum cut. For two polygons $$$i$$$ and $$$j$$$, let $$$d_{i,j}$$$ be the first time in seconds at which $$$i$$$ and $$$j$$$ intersect, which we can find through binary search. We add edge $$$(i,j)$$$ whenever $$$d_{i,j} \leq q_k$$$.

After modifying the graph by adding edges, we can rerun Dinic's algorithm *without resetting the flow of the edges first*. In other words, after adding a new edge, we simply rerun Dinic's algorithm on the current network with previously-found flow values in each edge. Due to the nature of the random generation of graphs, this method runs in time.

There's one last issue — the capacities between $$$v_{\text{in}}$$$ and $$$v_{\text{out}}$$$ change since the factor $$$F$$$ changes. However, a polygon where each side is $$$F$$$-times longer has an area that is $$$F^2$$$-times larger, so the capacity for each polygon is simply the capacity when $$$F=1$$$, multiplied by the current value of $$$F^2$$$. Therefore, we can choose not to modify the capacity, and simply multiply the minimum-cut value with $$$F^2$$$ before outputting it.

Finally, to check whether two convex polygons intersect, we only need to check two cases: first, whether a vertex of one polygon lies inside the other polygon; otherwise, whether two line segments (edges) of the two polygons intersect each other.
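A minimal sketch of both checks (integer coordinates and counter-clockwise vertex order assumed; collinear/touching degeneracies are omitted here and need extra care in a full solution):

```python
def cross(o, a, b):
    """Cross product OA x OB; positive if O->A->B turns left."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def point_in_convex(pt, poly):
    """pt inside (or on the boundary of) a counter-clockwise convex polygon."""
    n = len(poly)
    return all(cross(poly[i], poly[(i + 1) % n], pt) >= 0 for i in range(n))

def segments_intersect(p1, p2, p3, p4):
    """Proper-crossing test; collinear/touching cases are omitted here."""
    d1, d2 = cross(p3, p4, p1), cross(p3, p4, p2)
    d3, d4 = cross(p1, p2, p3), cross(p1, p2, p4)
    return (d1 > 0) != (d2 > 0) and (d3 > 0) != (d4 > 0)

def convex_polygons_intersect(P, Q):
    if any(point_in_convex(p, Q) for p in P):
        return True
    if any(point_in_convex(q, P) for q in Q):
        return True
    n, m = len(P), len(Q)
    return any(segments_intersect(P[i], P[(i + 1) % n], Q[j], Q[(j + 1) % m])
               for i in range(n) for j in range(m))
```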

You can use `__int128_t` to prevent overflow when handling the calculations.

**Remark**: The random generation of vectors is (partly) inspired by the fact that it is almost impossible to randomly generate a test where flow algorithms run slowly.

## H. Hashing Algorithm

**Tutorial**

First, we compute the suffix array and the $$$\texttt{lcp}$$$ array. We will only work on the $$$\texttt{lcp}$$$ array.

Consider the algorithm to count the number of distinct substrings. We compute the sum of $$$\texttt{lcp}(i) - \texttt{lcp}(i-1)$$$ over all $$$i$$$, since $$$\texttt{lcp}(i-1)$$$ of the substrings induced by $$$\texttt{lcp}(i)$$$ have already been counted before.

Now, we continue to the original problem. If the minimum value of the $$$\texttt{lcp}$$$ array is $$$m > 0$$$, then there are $$$m$$$ substrings which occur *exactly* $$$\texttt{size}(\texttt{lcp})$$$ times. So, we add $$$m \cdot \texttt{size}(\texttt{lcp})^2$$$ to the answer. Afterward, we subtract $$$m$$$ from every element of the $$$\texttt{lcp}$$$ array, since we have already counted the contributions of these substrings. However, if the minimum value is $$$m=0$$$, then we split the $$$\texttt{lcp}$$$ array into two halves at the $$$\texttt{lcp}$$$ index with value $$$0$$$.

Note that the subproblem above can be solved with the recursive function $$$\texttt{solve}(l,r)$$$, which solves for the $$$\texttt{lcp}$$$ array from index $$$l$$$ to $$$r-1$$$. To find the minimum value of a segment and subtract the values in a segment, we can use a segment tree.
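A sketch of the recursion, assuming the quantity being computed is the sum of (occurrences)$$$^2$$$ over distinct substrings, as the $$$m \cdot \texttt{size}^2$$$ contribution suggests; the naive suffix array and linear scans below stand in for the real suffix array construction and segment tree:

```python
def substring_occ_squared(s):
    """Sum over distinct substrings of (number of occurrences)^2."""
    n = len(s)
    sa = sorted(range(n), key=lambda i: s[i:])  # naive O(n^2 log n) suffix array
    lcp = []
    for i in range(1, n):
        a, b = s[sa[i - 1]:], s[sa[i]:]
        k = 0
        while k < min(len(a), len(b)) and a[k] == b[k]:
            k += 1
        lcp.append(k)

    def solve(l, r, counted):
        # Handles suffixes sa[l..r] and lcp entries lcp[l..r-1]; the first
        # `counted` prefix lengths are already accounted for higher up.
        if l == r:
            return n - sa[l] - counted  # remaining prefixes occur once
        m = min(lcp[l:r])
        res = (m - counted) * (r - l + 1) ** 2
        start = l
        for i in range(l, r):
            if lcp[i] == m:             # split where the minimum is attained
                res += solve(start, i, m)
                start = i + 1
        return res + solve(start, r, m)

    return solve(0, n - 1, 0) if n else 0
```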

## I. Interconnectivity Measure

**Tutorial**

The abridged statement is: given $$$q$$$ queries, find the *bitwise-or* shortest path. Let's solve for $$$q=1$$$ first.

We maintain a variable $$$a$$$, denoting the current answer, and a bit-position $$$b$$$. Let $$$k = \lceil \log (m+1) \rceil$$$. Then we initially set $$$a=2^k-1$$$ and $$$b=k-1$$$. At each iteration, we will try to turn off the $$$b$$$-th bit of $$$a$$$.

To check whether the answer is less than or equal to $$$a \oplus 2^b$$$, we iterate through all $$$2^{\texttt{popcount}(a \oplus 2^b)}$$$ edges which have their indices as sub-masks of $$$a \oplus 2^b$$$, and use a Disjoint-Set Union-Find structure to add these edges. Then, we simply check after connecting all these edges, whether $$$(s,t)$$$ are disconnected from each other. If they are, then the $$$b$$$-th bit of $$$a$$$ cannot be turned off; if they are not, then it can be turned off and we set $$$a \gets a \oplus 2^b$$$. Afterward, we recurse to the case $$$b \gets b-1$$$.
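The single-query ($$$q=1$$$) greedy can be sketched as follows; for brevity it tests every edge index for the sub-mask property instead of enumerating sub-masks directly, and the `edges` layout (a dict from edge index to endpoint pair) is illustrative:

```python
def min_or_path(n, edges, s, t):
    """Minimum possible OR of edge indices over any s-t path.

    edges: dict mapping edge index (1..m) -> (u, v).
    """
    m = max(edges)
    k = (m + 1).bit_length()

    def connected(mask):
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path halving
                x = parent[x]
            return x
        for idx, (u, v) in edges.items():
            if idx & ~mask == 0:               # idx is a sub-mask of `mask`
                parent[find(u)] = find(v)
        return find(s) == find(t)

    a = (1 << k) - 1
    if not connected(a):
        return -1  # s and t are disconnected entirely
    for b in reversed(range(k)):               # try to clear bits high to low
        if connected(a ^ (1 << b)):
            a ^= 1 << b
    return a
```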

Notice that in the procedure above, each $$$a \in [0, 2^k-1]$$$ is checked *at most once*. Let $$$\texttt{solve}(a, b, \texttt{queries})$$$ be a recursive function, where

- for each $$$(s,t) \in \texttt{queries}$$$, they are not disconnected when considering edges which are sub-masks of $$$a$$$;
- for each $$$(s,t) \in \texttt{queries}$$$, we aim to check whether we can turn off the $$$b$$$-th bit of $$$a$$$.

We split $$$\texttt{queries}$$$ into two sets $$$\texttt{queries}_0$$$ and $$$\texttt{queries}_1$$$, containing the queries for which we can and cannot turn off the $$$b$$$-th bit of $$$a$$$, respectively. Then, we recursively call $$$\texttt{solve}(a \oplus 2^b, b-1, \texttt{queries}_0)$$$ and $$$\texttt{solve}(a, b-1, \texttt{queries}_1)$$$.

Again, considering all instances of calls of $$$\texttt{solve}$$$, for each $$$a \in [0, 2^k-1]$$$, we check it at most once.

Now, we just need to check, for a given $$$(a,\texttt{queries})$$$, whether each $$$(s,t) \in \texttt{queries}$$$ is disconnected if we only consider edges which are sub-masks of $$$a$$$. Before any calls to $$$\texttt{solve}$$$, we initialize a Disjoint-Set structure with $$$n$$$ nodes. Then, when checking a mask $$$a$$$, we first add the $$$2^{\texttt{popcount}(a)}$$$ edges to the Disjoint-Set structure. Next, we check for each $$$(s,t) \in \texttt{queries}$$$ whether they are disconnected. Finally, we need to reset the Disjoint-Set structure. Notice that the $$$\texttt{parent}$$$ array entries in the Disjoint-Set structure only change for vertices that are endpoints of the edges that we added. So, we can iterate over all $$$2^{\texttt{popcount}(a)}$$$ edges again, and reset the $$$\texttt{parent}$$$ value for each endpoint of the iterated edges.

With this algorithm, our solution runs in $$$O(3^k \alpha(n) + q \log m)$$$. Since $$$k = O(\log m)$$$, this is equal to $$$O(m^{\log 3} \alpha(n) + q \log m)$$$, with $$$\log 3 \approx 1.585$$$.

## J. Judo Championship

**Tutorial**

Let $$$E$$$ be the set of edges, and $$$B \subseteq E$$$ be the set of *bridges* in the graph. The maximum possible ambiguity can be computed by removing $$$B$$$, then computing $$$\sum \binom{\texttt{size}(c)}{2}$$$ for each connected component $$$c$$$, since each remaining connected component is 2-edge-connected (it contains no bridges).

For each bi-connected component, we need to orient the edges such that a strongly-connected component is formed. A rough algorithm that works on bi-connected components is as follows:

- Initially, we have $$$S = \{s\}$$$, where $$$s$$$ is an arbitrary (source) vertex in the graph. At every point in time, we ensure $$$S$$$ forms a strongly-connected component.
- Find $$$u \in V \setminus S$$$ such that $$$u$$$ is *adjacent* to a vertex in $$$S$$$ through an edge $$$e$$$. We do a depth-first search starting from $$$u$$$, without going through the edge $$$e$$$ and traversing each edge at most once, until we arrive at a vertex in $$$S$$$ again. We will eventually arrive back at $$$S$$$ since the component is bi-connected.
- We orient the traversed edges according to the direction in which we traversed them, which creates a path starting from $$$S$$$ and ending also at $$$S$$$. This maintains and extends the strongly-connected component.
- We do the above until all vertices are in $$$S$$$. At this point, we can arbitrarily direct the remaining edges.

A possible and efficient implementation of the ideas of the rough algorithm above is as follows.

**Code**

```
def dfs(u):
    visited.add(u)
    for v, edge_id in adj[u]:
        if edge_id not in directed:
            direct(edge_id, u, v)  # orient this edge from u to v
            directed.add(edge_id)
            if v not in visited:
                dfs(v)
```

The algorithm is correct due to the property of the DFS tree. You can refer to [Tutorial] The DFS tree and its applications for a more thorough understanding of the DFS tree.

## K. Knapsack 101

**Tutorial**

The key idea to solve this problem is to utilize the Birthday paradox.

Suppose we partition the $$$2n$$$ items into two sets $$$L$$$ and $$$R$$$, and we pick $$$k_L$$$ items $$$S_L$$$ from $$$L$$$ and $$$k_R$$$ items $$$S_R$$$ from $$$R$$$, such that $$$k_L + k_R = n$$$. We need to satisfy the following constraint:

\begin{align*} \left(\prod_{i \in S_L} w_i \right) \times \left(\prod_{i \in S_R} w_i \right) &= z \\ \left(\prod_{i \in S_L} w_i \right) &= z \times \left(\prod_{i \in S_R} w_i^{-1} \right) \end{align*}

First, precompute the values $$$w_i^{-1}$$$. We pick $$$k_L$$$ items from $$$L$$$ randomly and compute $$$\left(\prod_{i \in S_L} w_i \right)$$$, and pick $$$k_R$$$ items from $$$R$$$ randomly and compute $$$z \times \left(\prod_{i \in S_R} w_i^{-1} \right)$$$. If we generate roughly $$$k$$$ products on each side, then the probability that a conflict occurs is $$$1 - (1 - \frac{1}{p-1})^{k^2}$$$. For $$$k \approx \sqrt{p}$$$, this value is larger than $$$\frac{1}{2}$$$ (Birthday paradox).

However, $$$k_L$$$ and $$$k_R$$$ may be large. There are numerous ways to resolve this. One of them is to pick two small constants $$$c$$$ and $$$d$$$, and randomly partition the $$$2n$$$ items into sets of sizes $$$[2n-2c, c, c]$$$. We pick $$$n-d$$$ random items from the set of size $$$2n-2c$$$, and pick the other $$$d$$$ items from the two sets of size $$$c$$$. We can then enumerate all $$$2^c$$$ possible ways of choosing $$$S_L$$$ and $$$S_R$$$ from these two sets, and check whether a conflict occurs among the chosen $$$d$$$ items in $$$O(c\,2^c)$$$. Repeat this until an answer is found.
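A small illustration of the collision idea: `find_subset_with_product` matches products of half-subsets of $$$L$$$ against $$$z$$$ times inverse-products of half-subsets of $$$R$$$. For this tiny demo the half-subsets are enumerated exhaustively (the real solution samples them randomly and relies on the Birthday paradox); the instance and names are illustrative:

```python
from itertools import combinations
from functools import reduce

def find_subset_with_product(items, z, p):
    """Find n of the 2n items whose product is z (mod p, p prime)."""
    n = len(items) // 2
    L, R = items[:n], items[n:]
    half = n // 2
    prod = lambda xs: reduce(lambda a, b: a * b % p, xs, 1)
    seen = {}  # product of a half-subset of L -> its indices
    for S in combinations(range(n), half):
        seen[prod(L[i] for i in S)] = S
    for S in combinations(range(n), n - half):
        # We need prod(S_L) = z * prod(S_R)^{-1}  (mod p)
        target = z * pow(prod(R[i] for i in S), p - 2, p) % p
        if target in seen:
            return [L[i] for i in seen[target]] + [R[i] for i in S]
    return None
```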

## L. Language Interpreter

**Tutorial**

Let $$$\textbf{r}$$$ be a row vector of size $$$33$$$, denoting the register values plus an entry for the bias term $$$1$$$, such that $$$\textbf{r} = (\texttt{r0}, \texttt{r1}, \ldots, \texttt{r31}, 1)$$$. Notice that each of $$$\texttt{add}$$$, $$$\texttt{addi}$$$, $$$\texttt{move}$$$, and $$$\texttt{li}$$$ is a *linear transformation*; we can represent the operation as a matrix $$$\textbf{M}$$$, such that $$$\textbf{r}$$$ changes to $$$\textbf{rM}$$$ after doing this operation.

Now, we try to handle $$$\texttt{for}$$$ loops. A $$$\texttt{for}$$$ loop consists of repeated instructions. Therefore, if a $$$\texttt{for } c$$$ loop consists of the instructions with matrices $$$\textbf{M}_1, \textbf{M}_2, \ldots, \textbf{M}_k$$$, then the matrix of the whole $$$\texttt{for } c$$$ loop, including all instructions inside, is equal to $$$\textbf{M}' = (\textbf{M}_1 \textbf{M}_2 \cdots \textbf{M}_k)^c$$$, since we repeat these instructions $$$c$$$ times. The register vector $$$\textbf{r}$$$ will change to $$$\textbf{rM}'$$$ after executing the $$$\texttt{for } c$$$ loop.

Therefore, we can handle all instructions as if they were a matrix, and compute the matrix $$$\textbf{T}$$$ which consists of all given instructions. Afterward, we can easily compute the final register values. There are at most $$$\frac{n}{2}$$$ $$$\texttt{for}$$$ loops, and we can compute matrix power in $$$O(k^3 \log c)$$$ for each $$$\texttt{for}$$$ loop.
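A sketch of the matrix encoding, assuming MIPS-like three-operand semantics for the instructions (e.g., $$$\texttt{add rd, rs, rt}$$$ meaning $$$r_d \gets r_s + r_t$$$ — an assumption, since the exact syntax is in the statement):

```python
SIZE = 33   # r0..r31 plus a constant-1 bias entry at index 32
BIAS = 32

def identity():
    return [[int(i == j) for j in range(SIZE)] for i in range(SIZE)]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(SIZE)) for j in range(SIZE)]
            for i in range(SIZE)]

def mat_pow(M, e):
    R = identity()
    while e:
        if e & 1:
            R = mat_mul(R, M)
        M = mat_mul(M, M)
        e >>= 1
    return R

def _clear_col(M, rd):  # column rd of M determines the new value of r_rd
    for i in range(SIZE):
        M[i][rd] = 0

def op_li(rd, c):            # rd <- c
    M = identity(); _clear_col(M, rd); M[BIAS][rd] = c; return M

def op_move(rd, rs):         # rd <- rs
    M = identity(); _clear_col(M, rd); M[rs][rd] = 1; return M

def op_addi(rd, rs, imm):    # rd <- rs + imm   (assumed semantics)
    M = identity(); _clear_col(M, rd); M[rs][rd] = 1; M[BIAS][rd] = imm; return M

def op_add(rd, rs, rt):      # rd <- rs + rt    (assumed semantics)
    M = identity(); _clear_col(M, rd); M[rs][rd] += 1; M[rt][rd] += 1; return M

def apply_program(r, matrices):
    """Apply the instruction matrices in order to the row vector r."""
    T = identity()
    for M in matrices:
        T = mat_mul(T, M)
    return [sum(r[i] * T[i][j] for i in range(SIZE)) for j in range(SIZE)]
```

For example, `li r0, 5` followed by a `for 3` loop containing `addi r0, r0, 2` is the matrix `mat_mul(op_li(0, 5), mat_pow(op_addi(0, 0, 2), 3))`, which sends $$$\texttt{r0}$$$ to $$$11$$$.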

## M. Most Difficult

**Tutorial**

The only even prime is $$$2$$$. Therefore, if $$$x_i$$$ is odd, then we check whether $$$x_i + 2$$$ is a prime. If it is not, then the answer does not exist.

For even $$$x_i$$$, iterate over primes $$$p \in \{2, 3, 5, 7, \ldots\}$$$ and check whether $$$x_i + p$$$ is prime with the Miller-Rabin primality test. Primes behave semi-randomly, and their density near $$$M = 10^{18}$$$ is about $$$\frac{1}{\log M}$$$, so the number of primes we need to try until $$$x_i + p$$$ is prime is quite low.
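A sketch of a deterministic Miller-Rabin test for 64-bit inputs (the first twelve primes are a known deterministic witness set below roughly $$$3.3 \cdot 10^{24}$$$):

```python
def is_prime(n):
    """Deterministic Miller-Rabin for 64-bit integers."""
    if n < 2:
        return False
    small = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    for p in small:
        if n % p == 0:
            return n == p
    # write n - 1 = d * 2^s with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in small:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False  # a witnesses that n is composite
    return True
```

For an even $$$x_i$$$ one would then try $$$p = 2, 3, 5, \ldots$$$ in order and test `is_prime(x + p)`.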

We generated over $$$2 \cdot 10^{11}$$$ random values $$$x_i$$$, and the worst test case we found has the 652nd prime number as the answer.

**Remark**: "Every even number can be written as the difference of two primes" is an unproven conjecture.

## N. Never Give Up

**Tutorial**

The key idea is that we can consider the top and bottom ridges separately.

Intuitively, if we only consider the bottom wall ridges, if we throw with speed $$$v$$$ and it hits the ridge, that means *we did not throw hard enough*. Therefore, we can binary search for the minimum speed $$$v_{\min}$$$ such that throwing with a speed $$$v_{\min}$$$ will not hit any of the bottom ridges.

Similarly, if we throw with speed $$$v$$$ and it hits the top ridges, that means *we threw too hard*. Therefore, we can binary search for the maximum speed $$$v_{\max}$$$ such that throwing with a speed $$$v_{\max}$$$ will not hit any of the top ridges.

Assuming $$$v_{\min}, v_{\max} \in [L, R]$$$, the answer is equal to $$$\frac{v_{\max} - v_{\min}}{R - L}$$$. Note that we need to handle the case where $$$v_{\min} > v_{\max}$$$.

When we binary search for both $$$v_{\min}$$$ and $$$v_{\max}$$$, we initially set our valid range of values as $$$[L, R]$$$. After $$$\log \epsilon^{-1}$$$ steps of the binary search, we can output $$$\frac{v_{\max} - v_{\min}}{R - L}$$$ with $$$\epsilon$$$ precision. Therefore, $$$\log \epsilon^{-1} \leq 20$$$ steps of the binary search is enough.

The final step is to check for a speed $$$v$$$, whether it intersects with the top or bottom ridges. Notice that the ridges are composed of multiple line segments, and we just need to check whether our throw intersects with one of the line segments. From the given equations, we can derive the parabolic equation of our throw as follows.

Therefore, if we throw with a speed $$$v$$$, the parabola of our throw has equation $$$y = k - 5x^2/v^2$$$.

To check whether a quadratic equation intersects with a line segment, consider the line equation $$$y = mx + c$$$ which goes through the line segment. We solve $$$y = mx + c$$$ and $$$y = k - 5x^2/v^2$$$ simultaneously to get \begin{align*} 5x^2 + mv^2 x + (c - k)v^2 &= 0 \end{align*}

We can use the quadratic formula to find the solutions of $$$5x^2 + mv^2 x + (c - k)v^2 = 0$$$. If no real solution exists, our parabola never intersects the line. Otherwise, let $$$x_1$$$ and $$$x_2$$$ be the solutions to the quadratic equation. If either $$$x_1$$$ or $$$x_2$$$ falls between the $$$x$$$-coordinates of the endpoints of the line segment, then the parabola intersects the line segment; otherwise, it does not.

Therefore, we can check whether a given speed $$$v$$$ intersects with the ridges in $$$O(n)$$$. The total time complexity is $$$O(n \log \epsilon^{-1})$$$.