As the editorial for round #709 has yet to be released, I thought I'd write up and post my solutions to the problems (with the exception of Div. 1 F). My code may not be especially useful, since it's a bit messier than necessary in some cases, but I'm hoping that the written explanations will be fairly helpful. Feel free to comment if you have any questions!

# 2A — Prison Break

By simply playing around with a few small test cases, you might guess that the answer is always equal to $$$ab$$$. This intuition is correct; let's prove it formally.

Consider a graph with $$$ab+1$$$ vertices: one vertex representing each cell of the prison, one vertex representing the outside world, and an edge for each wall connecting the two vertices on either side of that wall. Then, observe that a set of walls we can remove so that the outside world is accessible from every cell corresponds to a set of edges connecting our graph (if every cell can reach the outside world, then all vertices lie in a single connected component). In particular, a minimal set of removable walls corresponds to a spanning tree of our graph, and since our graph has $$$ab+1$$$ vertices, its spanning trees have $$$ab$$$ edges.

Consequently, we can simply output $$$ab$$$.
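Concretely, the entire solution is one multiplication per test case; a minimal sketch of the computation (input parsing omitted):

```python
def prison_break(a: int, b: int) -> int:
    # The prison graph has a*b cell vertices plus one outside-world vertex,
    # so any spanning tree (i.e., any minimal set of removed walls) has a*b edges.
    return a * b
```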

Time Complexity: $$$\mathcal{O}(1)$$$ per test case. Click here for my solution.

# 2B — Restore Modulo

Let $$$d_i = a_{i+1} - a_i.$$$ The key observation is that $$$c$$$ must be congruent to every $$$d_i$$$ modulo $$$m$$$. Moreover, since $$$0 \leq c < m$$$ and $$$|d_i| < m$$$, every positive $$$d_i$$$ must equal $$$c$$$ exactly, and every negative $$$d_i$$$ must equal $$$c - m$$$ exactly. Consequently, if the $$$d_i$$$ take on multiple distinct positive values or multiple distinct negative values, no valid pair $$$(m, c)$$$ exists and the answer is no.

Now, let's say $$$d_i$$$ takes on at most one positive value and at most one negative value. We do casework.

First, suppose $$$d_i$$$ takes no positive values and no negative values. This occurs when $$$n = 1$$$ or when every $$$d_i$$$ is zero (i.e., the array is constant); in either case, $$$m$$$ can be arbitrarily large and the answer is zero. Note also that if some $$$d_i$$$ is zero while another is nonzero, then $$$d_i = 0$$$ forces $$$c = 0 \mod m$$$, which would in turn force every $$$d_i$$$ to be zero; hence no valid pair exists. From here on, we may assume every $$$d_i$$$ is nonzero.

Second, suppose $$$d_i$$$ takes on a positive value but no negative values. Then the array is increasing, so $$$m$$$ can take any value greater than $$$a_n$$$, with $$$c$$$ equal to the common value of the $$$d_i$$$. Therefore, since $$$m$$$ is unbounded, the answer to the problem is zero.

Third, suppose $$$d_i$$$ takes on a negative value but no positive values. Then the array is decreasing, so $$$m$$$ can take any value greater than $$$a_1$$$, with $$$c = m + d_i$$$ (recall that $$$d_i$$$ is negative) for all $$$i$$$. Therefore, $$$m$$$ is unbounded, so the answer to the problem is zero.

Finally, suppose $$$d_i$$$ takes on a positive value and a negative value. Let the positive value be $$$x$$$ and the negative value be $$$y$$$. Then, since $$$x = y \mod m$$$, we must have $$$x - y = 0 \mod m$$$. Therefore, $$$m$$$ cannot be larger than $$$x-y$$$. We then verify that $$$x-y$$$ is a valid choice of $$$m$$$ with $$$c = x$$$; in particular, $$$m = x - y$$$ must be greater than every element of the array, and otherwise no valid pair exists.

Having addressed every case, we're finished here.
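The casework translates directly into code. Below is a sketch in Python, assuming the usual output convention (report $$$-1$$$ if no valid pair exists, $$$0$$$ if $$$m$$$ can be arbitrarily large, and a valid pair $$$(m, c)$$$ otherwise); it also handles zero differences defensively, since $$$d_i = 0$$$ forces $$$c = 0 \mod m$$$ and hence forces every difference to be zero.

```python
def restore_modulo(a):
    # Returns -1 if no valid (m, c) exists, 0 if m can be arbitrarily large,
    # and a valid pair (m, c) otherwise.
    d = [y - x for x, y in zip(a, a[1:])]
    pos = {x for x in d if x > 0}
    neg = {x for x in d if x < 0}
    if len(pos) > 1 or len(neg) > 1:
        return -1            # c cannot match two distinct positive (or negative) d_i
    if not pos and not neg:
        return 0             # n == 1 or a constant array: m is unbounded
    if 0 in d:
        return -1            # d_i = 0 forces c = 0 mod m, i.e. ALL differences zero
    if not pos or not neg:
        return 0             # monotone array: m is unbounded
    x, y = pos.pop(), neg.pop()
    m = x - y                # m must divide x - y, and m > max(a) forces m = x - y
    if m <= max(a):
        return -1            # every a_i must be smaller than m
    return (m, x)
```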

Time Complexity: $$$\mathcal{O}(n)$$$ per test case. Click here for my solution.

# 2C/1A — Basic Diplomacy

Initially, let's pick $$$f_{i1}$$$ as our teammate on day $$$i$$$, for all $$$i$$$. (In other words, we choose the first available friend from the given list on each day.) Obviously, this may not provide a valid set of selections, as some friend may be chosen more than $$$\lceil \frac{m}{2} \rceil$$$ times.

Note that because $$$2 \lceil \frac{m}{2} \rceil \geq m$$$, it is impossible for two friends to both be chosen more than $$$\lceil \frac{m}{2} \rceil$$$ times. If no friend is chosen this many times, we can simply output our initial selections and be done. Otherwise, there must be exactly one friend $$$x$$$ who is chosen more than $$$\lceil \frac{m}{2} \rceil$$$ times.

We give a greedy algorithm that will turn this into a valid set of selections, if doing so is possible. Iterate over all days once again. If friend $$$x$$$ is chosen on day $$$i$$$ and at least two friends are available on this day, replace friend $$$x$$$ with $$$f_{i2}$$$ in our selections. Terminate the algorithm once we've iterated over all the days or friend $$$x$$$ no longer appears more than $$$\lceil \frac{m}{2} \rceil$$$ times.

We conclude by proving that this algorithm is correct. Clearly, this algorithm ensures that friend $$$x$$$ appears at most $$$\lceil \frac{m}{2} \rceil$$$ times unless there are more than $$$\lceil \frac{m}{2} \rceil$$$ days on which friend $$$x$$$ is the only available friend. In that case, it is impossible to choose friend $$$x$$$ at most $$$\lceil \frac{m}{2} \rceil$$$ times, and the answer is no.

Then, we note that this algorithm will never lead to a different friend being chosen more than $$$\lceil \frac{m}{2} \rceil$$$ times. Suppose for contradiction that this algorithm causes friend $$$y$$$ to be chosen more than $$$\lceil \frac{m}{2} \rceil$$$ times. Then, immediately before this occurs, friend $$$y$$$ must have been chosen exactly $$$\lceil \frac{m}{2} \rceil$$$ times, while friend $$$x$$$ must still have been chosen more than $$$\lceil \frac{m}{2} \rceil$$$ times. However, the sum of these two counts would then be greater than $$$m$$$, which is impossible since we choose only $$$m$$$ teammates in total, providing our desired contradiction.

Therefore, if it is possible for friend $$$x$$$ to be chosen at most $$$\lceil \frac{m}{2} \rceil$$$ times, this algorithm ensures that this happens, and it never causes another friend to be chosen more than $$$\lceil \frac{m}{2} \rceil$$$ times. Hence, if a solution exists, this algorithm will find it, as desired.
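A sketch of the initial selection plus the greedy repair pass, in Python (function and variable names are mine):

```python
def choose_teammates(n, m, days):
    # days[i] lists the friends available on day i (1-indexed friend ids).
    # Returns a valid selection, or None if none exists.
    cap = (m + 1) // 2                    # ceil(m / 2)
    pick = [d[0] for d in days]           # initially take the first listed friend
    cnt = [0] * (n + 1)
    for f in pick:
        cnt[f] += 1
    over = [f for f in range(1, n + 1) if cnt[f] > cap]
    if not over:
        return pick
    x = over[0]                           # at most one friend can exceed the cap
    for i, d in enumerate(days):
        if cnt[x] <= cap:
            break
        if pick[i] == x and len(d) > 1:   # swap x out wherever an alternative exists
            pick[i] = d[1]
            cnt[x] -= 1
            cnt[d[1]] += 1
    return pick if cnt[x] <= cap else None
```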

Time Complexity: $$$\mathcal{O}(n + m + \sum k)$$$ per test case. Click here for my solution.

# 2D/1B — Playlist

We maintain two sets. First, we maintain a set containing all songs that have not been removed from the playlist. (We'll refer to this as set one.) Then, we maintain a set containing all songs such that the GCD of their genre with the genre of the previous song in the current playlist is equal to $$$1$$$. (We'll refer to this as set two.) Both of these sets can be initialized efficiently using the initial array.

Then, maintain the index $$$nxt$$$ of the next song we must listen to. While set two is non-empty, the next song we must remove from the playlist is the next element of set two at or after $$$nxt$$$, wrapping around cyclically. Call this song $$$rem.$$$ Then, we must remove $$$rem$$$ from both sets and set $$$nxt$$$ to the song following $$$rem$$$. Afterwards, we must update set two. The trick here is that we only need to update one element of set two with each removal: the next element in set one after $$$rem$$$ is the only song whose predecessor changed, and it belongs in set two if and only if its genre has GCD $$$1$$$ with the genre of its new predecessor in set one.

We repeat this process until set two is empty. It will then give us a complete list of all the songs that must be removed from the playlist, in the order in which we remove them.
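Here's a sketch of this simulation in Python. A real implementation would use an ordered set (e.g. `std::set` in C++) for the successor queries; for a sketch, I'm using plain sorted lists with `bisect`, which makes each removal $$$O(n)$$$ but keeps the logic identical. Songs are 0-indexed internally and reported 1-indexed; all names are my own.

```python
from bisect import bisect_left, insort
from math import gcd

def removed_songs(genres):
    # Returns the removed songs (1-indexed) in removal order.
    n = len(genres)
    alive = list(range(n))                 # set one: songs still in the playlist
    bad = [i for i in range(n)             # set two: gcd with predecessor is 1
           if gcd(genres[(i - 1) % n], genres[i]) == 1]
    order = []
    nxt = 1 % n                            # listening starts with the pair (1, 2)
    while bad:
        k = bisect_left(bad, nxt) % len(bad)   # next removable song at or after
        rem = bad.pop(k)                       # nxt, wrapping around cyclically
        i = bisect_left(alive, rem)
        alive.pop(i)
        order.append(rem + 1)
        if not alive:
            break
        succ = alive[i % len(alive)]       # only succ's predecessor changed
        prev = alive[(i - 1) % len(alive)]
        ok = gcd(genres[prev], genres[succ]) == 1
        j = bisect_left(bad, succ)
        in_bad = j < len(bad) and bad[j] == succ
        if ok and not in_bad:
            insort(bad, succ)
        elif not ok and in_bad:
            bad.pop(j)
        nxt = succ
    return order
```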

Time Complexity: $$$\mathcal{O}(n \log n)$$$ per test case. Click here for my solution.

# 2E/1C — Skyline Photo

We'll need a bit of heavy machinery to solve this problem. We'll maintain a segment tree of pairs $$$(dp_i, val_i)$$$ that allows us to perform the following operations:

- Set the value of $$$dp_i$$$ for some $$$i$$$.
- Set the value of $$$val_i$$$ to some fixed value for all $$$i$$$ in a range.
- Find the maximum value of $$$dp_i + val_i$$$ over all $$$i$$$ in a given range.

Now, let's actually define these variables. We'll iterate over the buildings in reverse order. Let $$$dp_i$$$ be the maximum beauty of a set of photos covering buildings $$$i$$$ through $$$n$$$. For simplicity, define $$$dp_{n+1}$$$ to be zero. Then, we'll maintain $$$val_i$$$ such that when we process building $$$j$$$, we have that $$$val_i$$$ is the beauty of the shortest building from $$$j$$$ to $$$i-1$$$. Note that $$$val_i$$$ is then the beauty of a photo consisting of buildings $$$j$$$ through $$$i-1.$$$ The last piece of machinery we'll need is a monotonic stack, which will contain a series of buildings in decreasing order (from the bottom) of position and increasing order of height. We'll use this to find the next building shorter than each building we process.

Let's begin iterating over the buildings. When we reach building $$$i$$$, pop all taller buildings from the stack. Once we're finished, the building on top of the stack is the next building shorter than building $$$i$$$; let it have position $$$j$$$ (if the stack is empty, take $$$j = n+1$$$). Then, update $$$val_k$$$ to $$$b_i$$$ over the interval $$$[i+1, j].$$$ Next, let $$$dp_i$$$ be the maximum value of $$$dp_k + val_k$$$ over the interval $$$[i+1, n+1]$$$; each choice of $$$k$$$ accounts for the case in which our first photo contains buildings $$$i$$$ through $$$k-1$$$. Finally, push building $$$i$$$ onto the stack.

After iterating over all our buildings, our answer is $$$dp_1$$$.
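To make the machinery concrete, here's a sketch in Python: a recursive lazy segment tree supporting point-set of $$$dp$$$, range-assignment of $$$val$$$, and range-max of $$$dp + val$$$, driven by the monotonic stack described above. Variable names are mine, and heights are assumed distinct for simplicity.

```python
NEG = float('-inf')

class SegTree:
    """Point-set dp[i], range-assign val[i], range-max of dp[i] + val[i]."""
    def __init__(self, size):
        self.n = size
        self.dpmax = [NEG] * (4 * size)   # max dp in subtree
        self.best = [NEG] * (4 * size)    # max dp + val in subtree
        self.lazy = [None] * (4 * size)   # pending val assignment
        self.val = [0] * (size + 1)       # current val at each leaf (1-indexed)

    def _apply(self, node, l, r, v):
        # val is constant over this node after assignment, so best = dpmax + v.
        self.best[node] = self.dpmax[node] + v
        self.lazy[node] = v
        if l == r:
            self.val[l] = v

    def _push(self, node, l, r):
        if self.lazy[node] is not None:
            m = (l + r) // 2
            self._apply(2 * node, l, m, self.lazy[node])
            self._apply(2 * node + 1, m + 1, r, self.lazy[node])
            self.lazy[node] = None

    def _pull(self, node):
        self.dpmax[node] = max(self.dpmax[2 * node], self.dpmax[2 * node + 1])
        self.best[node] = max(self.best[2 * node], self.best[2 * node + 1])

    def assign(self, ql, qr, v, node=1, l=1, r=None):
        if r is None:
            r = self.n
        if qr < l or r < ql:
            return
        if ql <= l and r <= qr:
            self._apply(node, l, r, v)
            return
        self._push(node, l, r)
        m = (l + r) // 2
        self.assign(ql, qr, v, 2 * node, l, m)
        self.assign(ql, qr, v, 2 * node + 1, m + 1, r)
        self._pull(node)

    def set_dp(self, pos, d, node=1, l=1, r=None):
        if r is None:
            r = self.n
        if l == r:
            self.dpmax[node] = d
            self.best[node] = d + self.val[l]
            return
        self._push(node, l, r)
        m = (l + r) // 2
        if pos <= m:
            self.set_dp(pos, d, 2 * node, l, m)
        else:
            self.set_dp(pos, d, 2 * node + 1, m + 1, r)
        self._pull(node)

    def query(self, ql, qr, node=1, l=1, r=None):
        if r is None:
            r = self.n
        if qr < l or r < ql:
            return NEG
        if ql <= l and r <= qr:
            return self.best[node]
        self._push(node, l, r)
        m = (l + r) // 2
        return max(self.query(ql, qr, 2 * node, l, m),
                   self.query(ql, qr, 2 * node + 1, m + 1, r))

def skyline(h, b):
    # h, b: heights and beauties of buildings 1..n (0-indexed lists).
    n = len(h)
    st = SegTree(n + 1)
    st.set_dp(n + 1, 0)                     # dp[n+1] = 0
    stack = []                              # positions, shorter buildings deeper
    ans = 0
    for i in range(n, 0, -1):
        while stack and h[stack[-1] - 1] > h[i - 1]:
            stack.pop()
        j = stack[-1] if stack else n + 1   # next shorter building to the right
        st.assign(i + 1, j, b[i - 1])       # photos starting at i, ending before j
        ans = st.query(i + 1, n + 1)        # dp[i]
        st.set_dp(i, ans)
        stack.append(i)
    return ans
```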

Time Complexity: $$$\mathcal{O}(n \log n)$$$. Click here for my solution.

# 2F/1D — Useful Edges

Let's start with some precomputation. First, using Floyd-Warshall, let's compute the shortest paths between all pairs of vertices. Let the shortest path from $$$i$$$ to $$$j$$$ have length $$$d_{ij}.$$$ Then, let's compute an array $$$L_{ij}$$$ representing the maximum length of a path from vertex $$$i$$$ to vertex $$$j$$$ such that this path can be extended to form a path satisfying one of our conditions $$$(u, v, l).$$$ We initialize this array such that $$$L_{uv} = L_{vu} = l$$$ for all queries $$$(u, v, l)$$$ (the graph is undirected, so a path may start from either endpoint). Then, observe that for all $$$i, j,$$$ and $$$k$$$, we have $$$L_{ij} \geq L_{ik} - d_{jk}$$$, since a path from $$$i$$$ to $$$j$$$ can be part of a valid path from $$$i$$$ to $$$k$$$ if extending it with the shortest possible path from $$$j$$$ to $$$k$$$ yields a length of at most $$$L_{ik}.$$$ We can therefore compute the final array $$$L$$$ using another application of Floyd-Warshall, taking the edges to have negative length.

Now, let's find the useful edges. Iterate over all root vertices $$$i$$$. Then, iterate over each edge $$$(u, v),$$$ and let this edge have length $$$x$$$. If $$$d_{iu} + x \leq L_{iv}$$$ or $$$d_{iv} + x \leq L_{iu}$$$, the edge $$$(u, v)$$$ can be marked as useful. In the first case, $$$d_{iu} + x$$$ is the minimum length of a path starting at $$$i$$$ and using the edge in the direction from $$$u$$$ to $$$v$$$ to finally reach $$$v$$$, and we know from our computation above that $$$L_{iv}$$$ is the maximum length of a path starting at $$$i$$$ and ending at $$$v$$$ such that this path can be extended to satisfy one of our queries. The second case proceeds similarly.

Once we've finished this process, we can output the list of valid edges.
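A compact sketch of this pipeline in Python (all names are mine). Note the symmetric initialization $$$L_{uv} = L_{vu} = l$$$, and note that a single extra Floyd-Warshall-style pass over $$$L$$$ suffices because $$$d$$$ is already a full shortest-path metric.

```python
def useful_edges(n, edges, queries):
    # n vertices (1-indexed), edges = [(u, v, w)] undirected,
    # queries = [(u, v, l)]. Returns the set of indices of useful edges.
    INF, NEG = float('inf'), float('-inf')
    d = [[INF] * (n + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][i] = 0
    for u, v, w in edges:
        d[u][v] = min(d[u][v], w)
        d[v][u] = min(d[v][u], w)
    for k in range(1, n + 1):               # Floyd-Warshall for shortest paths
        for i in range(1, n + 1):
            for j in range(1, n + 1):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    L = [[NEG] * (n + 1) for _ in range(n + 1)]
    for u, v, l in queries:                 # symmetric initialization
        L[u][v] = max(L[u][v], l)
        L[v][u] = max(L[v][u], l)
    for k in range(1, n + 1):               # relax: L[i][j] >= L[i][k] - d[k][j]
        for i in range(1, n + 1):
            for j in range(1, n + 1):
                if L[i][k] - d[k][j] > L[i][j]:
                    L[i][j] = L[i][k] - d[k][j]
    useful = set()
    for idx, (u, v, w) in enumerate(edges):
        for i in range(1, n + 1):           # try every root vertex
            if d[i][u] + w <= L[i][v] or d[i][v] + w <= L[i][u]:
                useful.add(idx)
                break
    return useful
```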

Time Complexity: $$$\mathcal{O}(n^3 + nm + q)$$$. Click here for my solution.

# 1E — Vabank

Let's start with an observation. Let's let our account balance equal $$$B$$$, and let $$$H$$$ be the highest query we've made so far without exceeding the limit. Then, we should never make a query greater than $$$\max(B, H)$$$, as we'll go bankrupt if $$$M$$$ is between $$$\max(B, H)$$$ and our query.

Consequently, we should clearly start by querying $$$1$$$. Moreover, this observation motivates us to increase $$$H$$$ as quickly as possible, both to bound our answer and to maximize our ability to increase our account balance quickly. Therefore, let's begin by querying $$$1, 2, 4, \cdots$$$ until one of these values exceeds $$$M$$$ or until we would need to query a value greater than $$$10^{14}$$$. These are the maximum values we can query at any given time, since each query is equal to our current balance, which is greater than the highest previous query we've made. Since $$$\log_2 10^{14} \approx 46.5$$$, this will take at most $$$47$$$ queries.

Now, we've bounded the answer in a range of the form $$$[2^k, 2^{k+1} - 1].$$$ At this point, we might be tempted to binary search for $$$M$$$. Suppose at any given time that we've bounded $$$M$$$ in the range $$$[lo, hi].$$$ Then, we might query $$$lo$$$ until our balance exceeds $$$\frac{lo+hi}{2}$$$, at which point we query $$$\frac{lo+hi}{2}$$$ and update our range based on whether this query is successful. However, this is not efficient enough. If all queries of this form fail, then we make about $$$\log_2 10^{14}$$$ of these queries, and we must also query $$$lo$$$ at least once (in some cases, twice) for each of them in order to ensure our balance is at least $$$\frac{lo+hi}{2}.$$$ Therefore, this process alone consumes a number of queries on the order of $$$2 \log_2 10^{14}$$$; combined with the doubling step above, that's about $$$3 \log_2 10^{14}$$$ queries in total, which vastly exceeds our query limit.

We realize here that we need more queries to get an upper bound on our range than a lower bound, since when a query fails, it costs us money (and thus forces us to query $$$lo$$$ again), while when a query succeeds, it increases our balance (and thus decreases the extent to which we'll need to query $$$lo$$$ in the future). Therefore, as an improvement over the above idea, we might try doing a similar approach in the style of binary search, but instead of taking our query to be $$$\frac{lo+hi}{2}$$$, we might take the query to be $$$lo + r(hi-lo)$$$ for some fixed ratio $$$r$$$. By taking $$$r < \frac{1}{2}$$$, we try to establish relatively tight lower bounds so that it will take fewer lower bounds to find our answer, thus avoiding the $$$3 \log_2 10^{14}$$$ worst-case complexity of the above solution.

This is the approach that many people attempted in-contest; unfortunately, some quick math shows that this idea is probably hopeless. With about $$$60$$$ queries remaining after our initial step, we have the capacity to perform roughly $$$30$$$ upper bounds, since each upper bound must be paid for with at least one additional query of $$$lo$$$. (In practice, the maximum number of upper bounds turns out to be slightly lower, but this is a good enough approximation for our purposes.) Since our range can contain up to $$$2^{45}$$$ values initially, we must have $$$r^{30} \leq \frac{1}{2^{45}}$$$, which implies $$$r \leq \frac{1}{2^{3/2}} \approx 0.35.$$$ However, when $$$M = hi$$$, we must also be able to identify $$$M$$$ using at most about $$$60$$$ lower bounds, which means that we must have $$$(1-r)^{60} \leq \frac{1}{2^{45}}.$$$ This gives $$$1-r \leq \frac{1}{2^{3/4}} \approx 0.59$$$, which implies $$$r \geq 0.41.$$$ No $$$r$$$ satisfies both of these conditions, implying that any choice of $$$r$$$ is doomed to fail (though it may be quite hard to find a value of $$$M$$$ to break any given solution using this approach, leading to a number of unfortunate FSTs).

For the right value of $$$r$$$, this approach intuitively feels relatively optimized (and, in fact, it's quite close to passing), so let's figure out what precision we haven't yet squeezed out of this algorithm. The key idea is that when our queries are successful, they actively help us perform future queries: after successfully finding a new lower bound for our answer, performing upper bounds becomes cheaper. As a result, after each lower bound, we can afford to increase $$$r$$$ slightly, to account for the fact that fewer queries of $$$lo$$$ will be needed to fund a given number of future upper bounds.

Let's now state the correct solution formally. Suppose that at some point, we know that $$$M$$$ is in the range $$$[lo, hi]$$$ and we have $$$Q$$$ queries left. Moreover, let's say that $$$\frac{B}{lo} = K$$$. At this point, we can perform approximately $$$K$$$ upper bounds without needing to query $$$lo$$$ in order to restore our balance. Therefore, the total number of upper bounds we can perform is $$$\frac{Q+K}{2}.$$$ As a result, if all our future queries are unsuccessful, we must have $$$r \leq \frac{1}{(hi-lo+1)^{2/(Q+K)}}$$$. We take this value of $$$r$$$, or $$$\frac{1}{2}$$$ if $$$r > \frac{1}{2}$$$, and query $$$lo + r \cdot (hi-lo).$$$ Finally, we update $$$lo$$$ or $$$hi$$$ accordingly.
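To make this concrete, here's a self-contained simulation of the strategy against a hidden $$$M$$$. The starting balance of $$$1$$$, the query budget of $$$105$$$, and the cap of $$$10^{14}$$$ are my assumptions about the problem's constants; this untuned sketch illustrates the structure of the strategy and, without the tweaking discussed below, may exceed the official query limit on adversarial inputs.

```python
LIMIT = 10 ** 14   # assumed maximum possible value of M

def find_m(M, query_limit=105):
    # Simulates the strategy against a hidden value M >= 1 (the judge).
    # Returns (guess, queries_used).
    balance = 1    # assumed starting balance
    used = 0

    def ask(q):    # one query to the simulated judge
        nonlocal balance, used
        used += 1
        if q <= M:
            balance += q
            return True
        if balance < q:
            raise RuntimeError("bankrupt")
        balance -= q
        return False

    # Phase 1: query 1, 2, 4, ... to bracket M; each query equals our balance.
    q, lo = 1, None
    while q <= LIMIT:
        if ask(q):
            lo = q
            q *= 2
        else:
            break
    hi = min(q - 1, LIMIT)

    # Phase 2: binary search with the adaptive ratio r.
    while lo < hi:
        K = balance // lo                    # upper bounds we can afford for free
        Q = max(query_limit - used, 1)       # never let the exponent blow up
        r = min(0.5, (hi - lo + 1) ** (-2.0 / (Q + K)))
        mid = lo + max(1, int(r * (hi - lo)))
        while balance < mid:                 # rebuild balance with safe lo queries
            ask(lo)
        if ask(mid):
            lo = mid
        else:
            hi = mid - 1
    return lo, used
```

Querying $$$lo$$$ is always safe, since $$$lo$$$ was itself a successful query, so the rebuild loop never risks bankruptcy.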

Unfortunately, this solution is a little finicky; I had to perform a bit of tweaking to get my solution to pass, but once configured appropriately, this approach consistently passes within the given query limit.

Time Complexity: $$$\mathcal{O}(q)$$$ per test case, where $$$q$$$ is our query limit. Click here for my solution.