
Geothermal's blog

By Geothermal, history, 8 days ago, In English,

A — Calc

Just directly print the given sum.

Time Complexity: $$$O(1)$$$. Click here for my submission.


B — Minor Change

The answer is simply the number of positions $$$i$$$ for which $$$S[i] \neq T[i]$$$. This is because we must make a move fixing each of those positions, and each move can only make $$$S[i] = T[i]$$$ for one new position. Thus, we can iterate through the positions and count those where $$$S$$$ and $$$T$$$ differ, printing the total as our answer.
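
In code, this is a single pass; here's a minimal sketch (assuming $$$S$$$ and $$$T$$$ have equal length, as the problem guarantees):

    #include <bits/stdc++.h>
    using namespace std;

    int main() {
        string S, T; cin >> S >> T;
        int cnt = 0;
        // Count positions where the two strings disagree.
        for (int i = 0; i < (int)S.size(); i++) cnt += (S[i] != T[i]);
        cout << cnt << '\n';
    }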

Time Complexity: $$$O(|S|)$$$. Click here for my submission.


C — Tsundoku

We apply two pointers. Iterate over the number of books to be read from desk A in decreasing order, starting with the greatest number of those books we could read without exceeding $$$K$$$ minutes. Maintain the sum of all books to be read from desk A. Then, while we can do so without exceeding $$$K$$$ minutes, add the next book from desk B to the list of books to be read. We can then consider the total number of books read between desk A and desk B as a possible answer, decrement the number of books from desk A, and continue.

The basic intuition is that as we decrease the number of books we read from desk A, the number of books we read from desk B monotonically increases. By processing the number of books to be read from desk A in order, we avoid having to recompute any sums, achieving a time complexity of $$$O(N+M)$$$.
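
Here's a self-contained sketch of the two-pointer idea; this is my own illustration rather than the linked submission, so the variable names are mine:

    #include <bits/stdc++.h>
    using namespace std;

    int main() {
        int N, M; long long K;
        cin >> N >> M >> K;
        vector<long long> A(N), B(M);
        for (auto &x : A) cin >> x;
        for (auto &x : B) cin >> x;

        // Start with as many desk-A books as fit within K minutes.
        long long sumA = 0;
        int i = 0;
        while (i < N && sumA + A[i] <= K) { sumA += A[i]; i++; }

        long long sumB = 0, best = 0;
        int j = 0;
        for (; i >= 0; i--) {
            // Greedily extend desk B while the total still fits.
            while (j < M && sumA + sumB + B[j] <= K) { sumB += B[j]; j++; }
            best = max<long long>(best, i + j);
            if (i > 0) sumA -= A[i - 1];   // drop the last desk-A book
        }
        cout << best << '\n';
    }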

Time Complexity: $$$O(N+M)$$$. Click here for my submission.


D — Sum of Divisors

Let's reframe the problem by considering the total contribution each divisor makes to the answer. For any given $$$i$$$, $$$i$$$ contributes $$$K$$$ to the sum for each $$$K$$$ from $$$1$$$ to $$$N$$$ such that $$$K$$$ is a multiple of $$$i$$$.

Notice that the list of these $$$K$$$ is of the form $$$i$$$, $$$2i$$$, $$$3i$$$, ..., $$$\lfloor \frac{N}{i} \rfloor i$$$, noting that the last term is the greatest multiple of $$$i$$$ that is at most $$$N$$$. For simplicity, let $$$M = \lfloor \frac{N}{i} \rfloor.$$$ Then, the sum of this sequence is equal to $$$i + 2i + 3i + \cdots + Mi = i (1 + 2 + \cdots + M) = \frac{iM(M+1)}{2}.$$$

Thus, by computing $$$M$$$ and the above product, we can compute each divisor's contribution to the answer in $$$O(1)$$$, giving us an $$$O(N)$$$ solution.
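
A minimal sketch of this loop (for the given constraints, the sum fits in a 64-bit integer):

    #include <bits/stdc++.h>
    using namespace std;

    int main() {
        long long N; cin >> N;
        long long ans = 0;
        for (long long i = 1; i <= N; i++) {
            long long M = N / i;          // multiples of i up to N: i, 2i, ..., Mi
            ans += i * M * (M + 1) / 2;   // i * (1 + 2 + ... + M)
        }
        cout << ans << '\n';
    }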

Time Complexity: $$$O(N)$$$. Click here for my submission.


E — NEQ

First, let's count the number of ways to choose $$$A$$$, irrespective of $$$B$$$. There are $$$\dbinom{M}{N}$$$ ways to choose the $$$N$$$ integers to appear in $$$A$$$ and $$$N!$$$ ways to arrange them, so we have $$$\dbinom{M}{N} N!$$$ choices for $$$A$$$ in total.

Now, without loss of generality, assume the sequence $$$A$$$ is simply $$$1, 2, \cdots, N$$$, since the number of choices for $$$B$$$ doesn't depend on our sequence $$$A$$$. We want to count the number of sequences $$$B$$$ where there is no position $$$i$$$ such that $$$B_i = i$$$.

We do this using the principle of inclusion and exclusion. By PIE, we know that the number of sequences $$$B$$$ where $$$B_i \neq i$$$ for all $$$i$$$ is equal to the sum over all valid $$$j$$$ of $$$(-1)^j$$$ times the number of ways to create a sequence $$$B$$$ where at least $$$j$$$ positions $$$i$$$ have $$$B_i = i$$$.

Thus, it remains to compute the number of sequences $$$B$$$ with at least $$$j$$$ positions where $$$B_i = i$$$. There are $$$\dbinom{N}{j}$$$ ways to choose $$$j$$$ positions where $$$B_i = i$$$. Then, it doesn't matter what we put in the remaining positions, so there are $$$\dbinom{M-j}{N-j}$$$ ways to choose the numbers and $$$(N-j)!$$$ ways to arrange them. Thus, the total number of ways to ensure that at least $$$j$$$ positions have $$$B_i = i$$$ is $$$\dbinom{N}{j} \dbinom{M-j}{N-j} (N-j)!$$$.

By precomputing all relevant factorials and their modular inverses in $$$O(M)$$$, we can compute the above product in $$$O(1)$$$ for each $$$j$$$. We then sum these up using our PIE formula, giving us an $$$O(N+M)$$$ solution. Since $$$N \leq M$$$, this is simply $$$O(M)$$$.
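
Here's an illustrative implementation; it's a sketch rather than the linked submission, and it assumes the modulus $$$10^9+7$$$ from the problem statement:

    #include <bits/stdc++.h>
    using namespace std;
    const long long MOD = 1e9 + 7;   // assumed modulus from the problem

    long long mpow(long long b, long long e) {
        long long r = 1; b %= MOD;
        for (; e; e >>= 1, b = b * b % MOD) if (e & 1) r = r * b % MOD;
        return r;
    }

    int main() {
        long long N, M; cin >> N >> M;
        int mx = M + 1;
        vector<long long> fact(mx), inv(mx);
        fact[0] = 1;
        for (int i = 1; i < mx; i++) fact[i] = fact[i-1] * i % MOD;
        inv[mx-1] = mpow(fact[mx-1], MOD - 2);
        for (int i = mx-1; i > 0; i--) inv[i-1] = inv[i] * i % MOD;
        auto C = [&](long long n, long long k) -> long long {
            if (k < 0 || k > n) return 0;
            return fact[n] * inv[k] % MOD * inv[n-k] % MOD;
        };

        // PIE over the number j of forced positions with B_i = i.
        long long waysB = 0;
        for (long long j = 0; j <= N; j++) {
            long long term = C(N, j) * C(M - j, N - j) % MOD * fact[N - j] % MOD;
            waysB = (waysB + (j % 2 ? MOD - term : term)) % MOD;
        }
        long long waysA = C(M, N) * fact[N] % MOD;   // choices for A
        cout << waysA * waysB % MOD << '\n';
    }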

Time Complexity: $$$O(M)$$$. Click here for my submission.


F — Unfair Nim

It is well known that the second player wins a game of Nim if and only if the bitwise XOR of the numbers of stones in each pile is equal to $$$0$$$. Thus, we want to make the equation $$$A_1 \wedge A_2 \wedge A_3 \wedge \cdots \wedge A_n = 0$$$ hold, where $$$\wedge$$$ denotes XOR. By taking the XOR of both sides with $$$A_3 \wedge A_4 \wedge \cdots \wedge A_n$$$, we have $$$A_1 \wedge A_2 = A_3 \wedge A_4 \wedge \cdots \wedge A_n.$$$ We can immediately compute the right-hand side, since it is fixed before we make our moves. Now, we want to find the largest $$$x$$$ between $$$1$$$ and $$$A_1$$$, inclusive, such that $$$x \wedge (A_1 + A_2 - x) = A_3 \wedge A_4 \wedge \cdots \wedge A_n.$$$

We build $$$x$$$ via digit DP, determining the bits of $$$x$$$ from the most significant bit to the least significant bit. Let $$$a$$$ be the maximum possible value of $$$x$$$ thus far such that we have found a bit in which $$$x$$$ contains a $$$0$$$ while $$$A_1$$$ contains a $$$1$$$ (so, we know that no matter what bits we add to $$$a$$$ in the future, it will be less than $$$A_1$$$), or $$$-\infty$$$ if no such value exists thus far. Let $$$b$$$ be the value of $$$x$$$ such that thus far, all bits in $$$b$$$ are the same as those in $$$A_1$$$, or $$$-\infty$$$ if creating this value is impossible. Note that we cannot add a $$$1$$$ to $$$b$$$ if the corresponding bit in $$$A_1$$$ is a $$$0$$$, as this would make $$$b$$$ too large.

Let $$$S = A_3 \wedge A_4 \wedge \cdots \wedge A_n.$$$ If $$$S$$$ contains a $$$0$$$ in any given bit, we know that $$$x$$$ and $$$A_1 + A_2 - x$$$ must either both contain that bit or both lack it. We can tell which case applies by determining whether, after subtracting out all bits we've assigned so far, $$$A_1 + A_2$$$ is at least $$$2^{k+1}$$$, where $$$k$$$ is the index of the current bit. This is because the sum of all future bits will be at most $$$2^{k+1} - 2$$$, so $$$A_1 + A_2 \geq 2^{k+1}$$$ both enables us to add two $$$1$$$s in position $$$k$$$ and forces us to do so. If we add a bit to both numbers, we add $$$2^k$$$ to $$$a$$$ and $$$b$$$, but set $$$b$$$ to $$$-\infty$$$ if $$$A_1$$$ contains a $$$0$$$ in position $$$k$$$. Otherwise, we don't add anything to either $$$a$$$ or $$$b$$$, but if $$$A_1$$$ contains a $$$1$$$ in position $$$k$$$, we can set $$$a$$$ to $$$\max(a, b)$$$, since now we've found a position in which $$$x < A_1$$$.

If $$$S$$$ contains a $$$1$$$ in any given bit, we know that exactly one of $$$x$$$ and $$$A_1 + A_2 - x$$$ must contain that bit. Thus, we can add $$$2^k$$$ to $$$a$$$, because this will not make $$$a$$$ exceed $$$A_1$$$. If $$$A_1$$$ contains a $$$1$$$ in position $$$k$$$, we can also add $$$2^k$$$ to $$$b$$$, or we can alternatively assign a $$$0$$$ in order to guarantee that $$$x < A_1$$$, allowing us to set $$$a$$$ to $$$\max(a, b)$$$.

At the end of this process, $$$a$$$ and $$$b$$$ are our candidate values for $$$x$$$. Of course, if either of them is zero or $$$-\infty$$$, it can't work (the zero case fails because we cannot move every stone from the first pile). Moreover, we also must confirm that $$$a \wedge (A_1 + A_2 - a) = S$$$ in order to use $$$a$$$, and we must check the same equation for $$$b$$$. This is because the total bits we assign throughout our process may not sum to $$$A_1 + A_2$$$, since the number of bits we assign is largely determined by $$$S$$$. (One counterexample that demonstrates this issue is $$$N=3, A = 7, 5, 6$$$: our solution assigns one bit in position $$$2$$$, one bit in position $$$1$$$, and two bits in position $$$0$$$, but the total of these bits is $$$4 + 2 + 2 = 8 < 7+5.$$$) Then, we take the largest of these possible values for $$$x$$$ and subtract it from $$$A_1$$$ to get our answer.

Time Complexity: $$$O(N + \log A_i).$$$ Click here for my submission.



By Geothermal, history, 5 weeks ago, In English,

I just did my first ABC in a while, so I decided to write up and share my solutions below. Feel free to leave questions in the comments!

There were several questions this round that had the potential to create precision issues; however, the solutions below give approaches that sidestep those errors altogether. Sadly, I think compiling and testing my A prevented me from winning the round :(

A — Multiplication 1

Just multiply the numbers and print them out. It's not that hard.

Time Complexity: $$$O(1)$$$. Click here for my submission.


B — Multiplication 2

This turns out not to be quite as easy as the last problem. Multiplying numbers this large is actually a challenge in C++ because the result is likely to be greater than $$$2^{63}$$$, and thus will cause long long overflow. Thus, in C++, we can't just multiply the numbers and check if the result is greater than $$$10^{18}$$$.

One possible approach is to switch to a different language in order to solve this problem. For example, Python integers seem like a natural choice given their ability to handle integers of arbitrary size. Java BigIntegers also seem workable for similar reasons. However, we have to be a little careful here: if you multiply all the integers together and only then check whether the product is greater than $$$10^{18}$$$, you might TLE: you're performing $$$O(N)$$$ multiplications on a number with $$$O(N)$$$ digits, so the time complexity could potentially be at least $$$O(N^2)$$$.

To get around this, we need a slightly smarter approach: multiply the numbers together, and print $$$-1$$$ as soon as our product exceeds $$$10^{18}$$$. Now, the number of digits we're working with is bounded, so this is safe time-wise. Note that you need to separately check if the input contains a $$$0$$$, since that's the only way the product can decrease: for example, on an input of $$$10^{10}$$$, $$$10^{10}$$$, $$$0$$$, you need to check for the $$$0$$$ separately because otherwise you'll stop as soon as you scan the first two numbers and reach a product of $$$10^{20}$$$.

But, what if you don't want to switch languages? It turns out that there's a pretty simple C++ solution. The key idea is that we can easily compute the base-10 logarithm of the answer, using the identity $$$\log A_1 A_2 A_3 \cdots A_N = \log A_1 + \log A_2 + \log A_3 + \cdots + \log A_N$$$. Then, the product is greater than $$$10^{18}$$$ if and only if its logarithm is greater than $$$18$$$.

To avoid precision errors, though, my solution just checks that the logarithm is small enough that we can perform the computation without long long overflow. If the logarithm is smaller than some value slightly greater than $$$18$$$ (say, $$$18.1$$$), we perform the multiplication and print the result if it is less than $$$10^{18}$$$. Note that we also need to handle the $$$0$$$ case separately here because $$$\log 0$$$ is undefined.
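
A sketch of this approach (the cutoff $$$18.1$$$ is the slack described above, and the names are mine):

    #include <bits/stdc++.h>
    using namespace std;

    int main() {
        int N; cin >> N;
        vector<long long> A(N);
        bool hasZero = false;
        double lg = 0;
        for (auto &x : A) {
            cin >> x;
            if (x == 0) hasZero = true;
            else lg += log10((double)x);
        }
        if (hasZero) { cout << 0 << '\n'; return 0; }
        // If the logarithm is clearly too large, the product certainly exceeds 10^18.
        if (lg > 18.1) { cout << -1 << '\n'; return 0; }
        long long prod = 1;
        for (auto x : A) prod *= x;   // safe: the product is below ~10^18.1 < 2^63
        cout << (prod > 1000000000000000000LL ? -1 : prod) << '\n';
    }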

Time Complexity: $$$O(N)$$$. Click here for my submission.


C — Multiplication 3

One could perform the multiplication using doubles, but precision errors are always scary, so let's try a different approach. Let's read $$$A$$$ as a long long and $$$B$$$ as two integers: one representing its integer part and one representing its decimal part. Then, we can write the integer $$$100B$$$ as the sum of $$$100$$$ times $$$B$$$'s integer part plus the two digits representing $$$B$$$'s fractional part. For example, in the first sample, we read $$$1$$$ as $$$B$$$'s integer part and $$$10$$$ as its decimal part, so $$$100B = 110$$$.

Now that we have $$$A$$$ and $$$100B$$$ as integers, we can compute $$$100AB$$$ by taking their product. Then, we can compute the integer part of $$$AB$$$ by dividing this by $$$100$$$, noting that division in C++ drops the fractional part.
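
A minimal sketch, assuming $$$B$$$ is always given with exactly two digits after the decimal point, as in the samples:

    #include <bits/stdc++.h>
    using namespace std;

    int main() {
        long long A; string B;
        cin >> A >> B;                       // B looks like "1.10"
        // 100B = 100 * (integer part) + (two fractional digits)
        long long ip = stoll(B.substr(0, B.size() - 3));
        long long fp = stoll(B.substr(B.size() - 2));
        long long B100 = 100 * ip + fp;
        cout << A * B100 / 100 << '\n';      // integer division drops the fraction
    }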

Time Complexity: $$$O(1).$$$ Click here for my submission.


D — Div Game

First, let's factor $$$N$$$ in $$$O(\sqrt{N})$$$ time. This is doable by attempting trial divisions by all numbers up to $$$\sqrt{N}$$$. Then, if there's anything left over when all these factors are removed from $$$N$$$, what's left must be $$$N$$$'s last prime factor.

Realize that since each $$$z$$$ we select only affects one prime, we can just process each of $$$N$$$'s prime factors independently, compute the number of times we can apply the given operation to the power of that prime factor, and sum up the results.

So, we now only need to solve the problem for numbers equal to $$$p^k$$$ for some prime $$$p$$$. We first attempt a simple greedy algorithm: divide out $$$p$$$, then $$$p^2$$$, then $$$p^3$$$, and so on, until our input is no longer divisible by the next power of $$$p$$$; let the last power of $$$p$$$ we divided out be $$$p^x$$$. Then, we claim that $$$x$$$ is the maximum number of times we can apply this operation to this number.

First, we see that $$$x+1$$$ is impossible: since $$$p \cdot p^2 \cdot p^3 \cdots p^x \cdot p^{x+1}$$$ does not divide $$$p^k$$$, we know that even the smallest possible $$$x+1$$$ operations are too large in total, so we cannot have $$$x+1$$$ or more operations. Then, we see that $$$x$$$ is doable: apply the operations with $$$p$$$, $$$p^2$$$, and so on, up to $$$p^{x-1}$$$, then apply one last operation to whatever is left over. The leftover factor must be at least $$$p^x$$$ because we know $$$p \cdot p^2 \cdot p^3 \cdots p^x$$$ divides $$$p^k$$$, so we don't repeat any values of $$$z$$$, and this is a valid series of operations. Thus, we can't do any better than $$$x$$$, but we can definitely achieve $$$x$$$ operations, so $$$x$$$ is our answer for $$$p^k$$$.

We thus compute the value of $$$x$$$ for each $$$p^k$$$ and sum up the results to get our answer. Though there are more efficient ways to compute $$$x$$$, you can just do it naively (by adding $$$1$$$, then $$$2$$$, then $$$3$$$, and so on, until the result exceeds $$$k$$$): $$$N$$$ can't have very many prime factors and none of them can be raised to especially large powers, so the $$$O(\sqrt{N})$$$ factor from factoring $$$N$$$ will dominate.
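
Here's a compact sketch of the factorization plus the naive greedy; the names are mine:

    #include <bits/stdc++.h>
    using namespace std;

    int main() {
        long long N; cin >> N;
        long long ans = 0;
        for (long long p = 2; p * p <= N; p++) {
            if (N % p) continue;
            int k = 0;
            while (N % p == 0) { N /= p; k++; }   // k = exponent of p
            // Greedily use exponents 1, 2, 3, ... while their sum fits in k.
            int used = 0, x = 0;
            while (used + x + 1 <= k) { x++; used += x; }
            ans += x;
        }
        if (N > 1) ans += 1;   // one leftover prime with exponent 1: one operation
        cout << ans << '\n';
    }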

Time Complexity: $$$O(\sqrt{N}).$$$ Click here for my submission.


E — Count Median

So that we can deal with integers rather than decimals, we redefine the median for even $$$N$$$ to be $$$x_{N/2} + x_{N/2 + 1}$$$, which is simply twice the usual median. This does not change the number of unique medians: we're essentially doubling every number in the set of medians, so two equal medians remain equal and two different medians remain different.

We can easily compute the smallest and largest possible medians: take the medians of the arrays $$$A$$$ and $$$B$$$, respectively. Let these medians be $$$Y$$$ and $$$Z$$$. Then, we make a critical claim: the medians we can achieve are exactly the set of integers between $$$Y$$$ and $$$Z$$$. Thus, our answer is the number of integers from $$$Y$$$ to $$$Z$$$, which is $$$Z-Y+1$$$.

But, of course, we need to prove this claim. Obviously, we can't achieve a non-integer median, nor can we have a median lower than $$$Y$$$ or greater than $$$Z$$$, so there's no way to have any medians that aren't integers from $$$Y$$$ to $$$Z$$$.

Now, we just need to show that every median from $$$Y$$$ to $$$Z$$$ is achievable. Start with an array equal to $$$A$$$, which has median $$$Y$$$. Here's the key observation: whenever we increase an integer in the array by $$$1$$$, the median will either not change or will increase by $$$1$$$. This can be proven by fairly simple casework, but it's also pretty intuitive. Thus, as we increase the integers by $$$1$$$ in some arbitrary order until the array becomes $$$B$$$, our median never skips any integers, so it goes from $$$Y$$$ to $$$Z$$$ and, at some point, touches all the integers in between. This shows that any integer between $$$Y$$$ and $$$Z$$$ is a possible median, as desired.

Now, we can sort the arrays $$$A$$$ and $$$B$$$, compute their medians, and print $$$Z-Y+1$$$ as our answer.
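
A short sketch of the resulting algorithm:

    #include <bits/stdc++.h>
    using namespace std;

    int main() {
        int N; cin >> N;
        vector<long long> A(N), B(N);
        for (auto &x : A) cin >> x;
        for (auto &x : B) cin >> x;
        sort(A.begin(), A.end());
        sort(B.begin(), B.end());
        long long Y, Z;
        if (N % 2) {                       // odd N: the middle element
            Y = A[N/2]; Z = B[N/2];
        } else {                           // even N: the doubled median
            Y = A[N/2 - 1] + A[N/2];
            Z = B[N/2 - 1] + B[N/2];
        }
        cout << Z - Y + 1 << '\n';
    }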

Time Complexity: $$$O(N \log N).$$$ Click here for my submission.


F — Knapsack for All Subsets

Consider a subset $$$x_1, x_2, x_3, \cdots, x_k$$$ summing to $$$S$$$. Then, notice that there are $$$2^{N-k}$$$ subsets of $$$A$$$ containing this subset, because for each of the $$$N-k$$$ elements not in our subset, we have two options: include it or exclude it. So, for each subset of $$$k$$$ integers summing to $$$S$$$, we want to add $$$2^{N-k}$$$ to our answer.

We compute this summation using DP. Let $$$dp[i][j]$$$ be the sum of $$$2^{N-k}$$$ over all subsets of the first $$$i$$$ elements of the array $$$A$$$ that sum to $$$j$$$. Initially, $$$dp[0][0] = 2^N$$$ and $$$dp[0][j] = 0$$$ for all other $$$j$$$, since there is one subset of the first $$$0$$$ elements, which has size $$$0$$$ and sums to $$$0$$$.

Then, to transition, we can either add the next element in the array to our subset or not. If we don't add it, the result is simple: we add $$$dp[i][j]$$$ to $$$dp[i+1][j]$$$, effectively skipping this element. Alternatively, we can add this element to the subset. However, this adds $$$1$$$ to $$$k$$$, effectively multiplying $$$2^{N-k}$$$ by one-half, so we add $$$\frac{dp[i][j]}{2}$$$ to $$$dp[i+1][j + A[i]]$$$. Since we're working with modular arithmetic, we can divide by $$$2$$$ by multiplying by the modular inverse of $$$2$$$.

Then, our answer is $$$dp[N][S]$$$. Since we have $$$O(NS)$$$ states and each state has $$$O(1)$$$ transitions, this easily passes in time.
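
Here's an illustrative implementation; note that it compresses the DP table to one dimension (iterating $$$j$$$ downward so each element is used at most once) and assumes the modulus $$$998244353$$$ from the problem statement:

    #include <bits/stdc++.h>
    using namespace std;
    const long long MOD = 998244353;   // assumed modulus from the problem

    long long mpow(long long b, long long e) {
        long long r = 1; b %= MOD;
        for (; e; e >>= 1, b = b * b % MOD) if (e & 1) r = r * b % MOD;
        return r;
    }

    int main() {
        int N, S; cin >> N >> S;
        vector<int> A(N);
        for (auto &x : A) cin >> x;
        long long inv2 = mpow(2, MOD - 2);      // modular inverse of 2
        vector<long long> dp(S + 1, 0);
        dp[0] = mpow(2, N);                     // the empty subset contributes 2^N
        for (int i = 0; i < N; i++)
            for (int j = S - A[i]; j >= 0; j--) // 1-D knapsack: iterate j downward
                dp[j + A[i]] = (dp[j + A[i]] + dp[j] * inv2) % MOD;
        cout << dp[S] << '\n';
    }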

Time Complexity: $$$O(NS).$$$ Click here for my submission.



By Geothermal, history, 5 weeks ago, In English,

As most of you are likely aware, cheating in virtual contests (typically by submitting prewritten or copied solutions in order to appear at the top of the leaderboard) is very common. Most recently, wannabecandidatemaster submitted solutions to all problems in this round in order to take first place in the standings, and bemandrei competed virtually in this round after competing in the round officially, and ended up in second place on the leaderboard.

In response to the rise of VC cheating (as well as some other issues with the VC system I'll describe below), I'd like to make a simple proposal: Remove all virtual participants from the contest leaderboards. The one exception is that you should see your own name on leaderboards for contests you VCed (that is, virtual contestants should be visible only to themselves). Similarly, virtual participants should not be counted in the solve statistics during future contestants' VCs.

Cheating on VCs is harmful for two reasons. First, it makes it impossible to review the scoreboard as it appeared at the end of the round; in particular, this makes it difficult to discern accurate rankings for out-of-contest participants. Though this may seem like a relatively minor issue, Div 2, 3, and 4 rounds regularly attract hundreds or even thousands of competitors too high rated to compete officially, so it has an exceptionally broad scope and affects a large number of users.

Second, and more importantly, VC cheaters damage the VC experience for future competitors by making the solve statistics inaccurate. Many competitors use the dashboard's view of how many people have solved each problem in order to make strategic decisions during contests. In the round when I first reached GM, I achieved a high rank by noticing a few early solves on D, correctly discerning that it was unusually easy, and solving it before B or C in order to achieve a better penalty time. This kind of strategy is very common, but the presence of VC cheaters makes it much less practical because in pretty much every contest, when there are a few early solves on the hardest problems, they're not because those problems are easy--they're the result of VC cheaters. The result is that VC cheaters make the VC experience less realistic, so removing them from the solve statistics and leaderboards is critical to achieving the intended goals of the VC feature.

One might claim that the solution is simply to identify cheaters and remove them from the standings after the fact. However, this is a worse approach than removing VCs from the leaderboards altogether for three reasons. First, this approach requires continual effort on the part of the site administrators to remove VC cheaters, whereas my proposed solution needs only a one-time update. Second, it is impossible to accurately detect cheaters, especially because many forms of cheating are less obvious--for example, VCers can look up the test data using a second browser in order to gain an advantage when debugging, and there is no way to accurately distinguish between this and simply guessing the error quickly.

Third and finally, VCs are fundamentally different from participating in a contest live, so even in the absence of cheating, they should not appear on the leaderboards. There are a number of reasons for this, but I see two of them as most important. First, VC solutions are judged on all tests, rather than just the pre-tests, so it is impossible for them to FST: if their solution fails a test not originally included in the pre-tests, they get a chance to fix their error and resubmit, giving them a huge advantage over in-contest competitors. Second, frequently, Codeforces users release blog posts discussing techniques that came up in recent rounds; competitors who read an article inspired by a certain round and then VCed that round would have an advantage over anyone who did that round live.

As a result, I claim that the ideal way to deal with VCs is to remove them from the leaderboards entirely. Please feel free to leave thoughts below; I'm happy to discuss this further with anyone interested or to answer questions about my proposal.


By Geothermal, history, 6 weeks ago, In English,

Hi all!

I'm here to share my solutions to today's Div. 3 round, since I managed to finish the round relatively quickly and thought that my approaches might be of some use to others. Feel free to leave questions in the comments!


A — Minimal Square

Without loss of generality, let's say $$$A \leq B$$$, that is, $$$B$$$ is the larger side-length. We claim that the minimum side length is $$$\max(2A, B)$$$, hence, the answer is $$$\max(2A, B)^2$$$. This is relatively easy to observe just by looking at the sample cases and trying different configurations, but I'll prove this claim below in case you're curious.

To prove this, we first show that $$$\max(2A, B)$$$ is a valid side-length. To achieve this, just stack the two rectangles on top of each other, with the two short sides stacking vertically and the long sides overlapping. (This looks much like the solution given in the problem statement for the second test case.) The vertical side-length of the block formed will be $$$2A$$$, and the horizontal side-length will be $$$B$$$. Thus, as long as we can fit a block with dimensions $$$2A$$$ by $$$B$$$ in the square, we can fit in the two rectangles; this is clearly doable when the side-length of the square is $$$\max(2A, B)$$$.

Now, we prove that no smaller side-length works. First, suppose the side-length is less than $$$B$$$. Then, there is no way to fit either rectangle in the square because the sides of length $$$B$$$ won't fit. Thus, the side-length must be at least $$$B$$$. Similarly, one can show through somewhat messier casework that the side-length must be at least $$$2A$$$, so it must be at least $$$\max(2A, B)$$$.

Since a side-length of $$$\max(2A, B)$$$ is both attainable and optimal, our answer is $$$\max(2A, B)^2$$$. We can print this value directly, after swapping $$$A$$$ and $$$B$$$ if necessary to ensure that $$$B$$$ is larger.

Time Complexity: $$$O(1)$$$ per test case. Click here for my solution.


B — Honest Coach

We claim that the answer is the minimum difference between any two athletes' strengths. Obviously, since the answer must be a difference between two strengths, it is impossible to do any better. Then, to show that this is attainable, consider the two athletes $$$X$$$ and $$$Y$$$ forming this minimal difference, letting $$$X$$$ be at least as strong as $$$Y$$$. Place $$$X$$$ and all athletes weaker than $$$X$$$ (except for $$$Y$$$) on the first team, and $$$Y$$$ and all the remaining athletes on the second team. It is easy to see that each team will have at least one athlete and that each athlete will be on a team. Moreover, $$$X$$$ is the strongest athlete on the first team and $$$Y$$$ is the weakest athlete on the second team, so $$$|\max(A) - \min(B)|$$$ is equal to the difference in their strengths, as desired.

Thus, we can simply sort the input, compare every two consecutive values, and print the minimum difference. Alternatively, since the bound on $$$n$$$ is fairly low, we can just brute-force all pairs of athletes and compare their strengths to give the answer.

Time Complexity: $$$O(N \log N)$$$ per test case. $$$O(N^2)$$$ is also possible and acceptable. Click here for my solution.


C — Similar Pairs

Suppose that there is an even number of even numbers in the input (and thus, there is also an even number of odd numbers in the input). Then, the answer is always yes--we can just pair up the even numbers with each other and pair up the odd numbers similarly.

Now, suppose that there is an odd number of even numbers (and thus, an odd number of odd numbers). Then, if there are no numbers with difference $$$1$$$ in the input, the answer is no, because in this case we cannot pair odd numbers with even numbers, and it is impossible to pair all the numbers with other numbers of their same parity. However, if there does exist a pair of numbers with difference $$$1$$$, the answer is yes: pair these two numbers up, and then there will be even numbers of odd and even numbers in the input; we have already shown that we can pair up these remaining numbers.

Thus, the answer is yes if there is a pair of numbers with difference $$$1$$$ or there is an even number of even numbers in the input. We can check these conditions in $$$O(N \log N)$$$ by sorting the input and checking consecutive numbers to see if they have difference $$$1$$$ or in $$$O(N^2)$$$ by brute-forcing all possible pairs; either will be accepted. (Obviously, we can count even numbers in $$$O(N)$$$.)
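
A sketch of the sorting-based check:

    #include <bits/stdc++.h>
    using namespace std;

    int main() {
        int T; cin >> T;
        while (T--) {
            int N; cin >> N;
            vector<int> a(N);
            int evens = 0;
            for (auto &x : a) { cin >> x; evens += (x % 2 == 0); }
            sort(a.begin(), a.end());
            bool adj = false;                  // is there a pair with difference 1?
            for (int i = 0; i + 1 < N; i++)
                if (a[i+1] - a[i] == 1) adj = true;
            cout << ((evens % 2 == 0 || adj) ? "YES" : "NO") << '\n';
        }
    }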

Time Complexity: $$$O(N \log N)$$$ per test case. $$$O(N^2)$$$ is also possible and acceptable. Click here for my submission.


D — Buying Shovels

Clearly, the package size we buy must be a divisor of $$$N$$$: to get exactly $$$N$$$ shovels from some number of identical packages, the number of shovels per package must divide $$$N$$$. We can compute all factors of $$$N$$$ in $$$O(\sqrt{N})$$$; any factor that is at most $$$K$$$ is a package size we're allowed to use. The last observation is that we want to buy the largest possible package in order to minimize the number of packages. Thus, the package size we want is the largest factor of $$$N$$$ that is at most $$$K$$$, and our answer is $$$N$$$ divided by this package size.
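
A minimal sketch of this divisor enumeration:

    #include <bits/stdc++.h>
    using namespace std;

    int main() {
        int T; cin >> T;
        while (T--) {
            long long N, K; cin >> N >> K;
            long long best = 1;                     // package size 1 always works
            for (long long d = 1; d * d <= N; d++) {
                if (N % d) continue;
                if (d <= K) best = max(best, d);            // small divisor
                if (N / d <= K) best = max(best, N / d);    // paired divisor
            }
            cout << N / best << '\n';
        }
    }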

Time Complexity: $$$O(\sqrt{N})$$$. Click here for my submission.


E — Polygon

We claim that the answer is YES if and only if every $$$1$$$ has either another $$$1$$$ or the grid border in the cell directly to its right or directly below it. We prove that this criterion is correct.

First, suppose that this is the case. Then, the desired grid is achievable; we will prove it by giving a construction. We build the grid such that the lowest $$$1$$$'s are added first; if several $$$1$$$'s appear on the same level, then we add the rightmost $$$1$$$'s first. This order ensures that whenever it comes time to add a $$$1$$$ to the grid, all $$$1$$$'s in the rectangle below and to the right of the $$$1$$$ we intend to add are already in the grid, while no $$$1$$$'s directly above or to the left of this $$$1$$$ have been added yet. Then, if there is a $$$1$$$ or the grid border below the $$$1$$$ we intend to add, we can fire the $$$1$$$ vertically and it will land in its intended position, while if there is a $$$1$$$ or the grid border to the right of the $$$1$$$ we intend to add, we can fire the $$$1$$$ horizontally and it will land where we want it. Thus, we can use this process to add $$$1$$$'s to the grid until all $$$1$$$'s have been added, showing that this grid is achievable.

Now, suppose this is not the case. Then, there exists a $$$1$$$ with $$$0$$$'s below it and to its right. It is then clearly impossible to fire a $$$1$$$ into this position because if it is fired horizontally, it will pass through to the $$$0$$$ position on its right, and if it is fired vertically, it will pass through to the $$$0$$$ position below it. Thus, in these cases, the grid is not achievable, showing that our criterion is exactly right.

Then, we can check this criterion for each $$$1$$$ in our grid; it is fairly easy to directly confirm whether it is satisfied.

Time Complexity: $$$O(N^2)$$$ per test case. Click here for my submission.


F — Spy-string

This problem falls to a brute-force approach. Clearly, the solution must differ from the first string in the input in at most one position; thus, there are fewer than $$$26M$$$ candidate strings. We can simply check all of them: for each candidate, iterate over the input strings and count the differences between each input string and our potential solution; if every input string differs from the candidate in at most one position, we can report it as our answer.

Time Complexity: $$$O(26NM^2)$$$ per test case. Click here for my submission.


G — A/B Matrix

First, note that the matrix must have exactly $$$NA$$$ ones, since it has $$$N$$$ rows and $$$A$$$ ones per row. Likewise, it must have exactly $$$MB$$$ ones, since it has $$$M$$$ columns and $$$B$$$ ones per column. Thus, we must have $$$NA = MB$$$, so if $$$NA \neq MB$$$, the answer is NO.

If $$$NA = MB$$$, then we claim the answer is YES. Initially, let the matrix contain only zeros. Then, iterate over all rows in the matrix. For each row, $$$A$$$ times, let $$$X$$$ be the number of ones we've added so far, then, place a $$$1$$$ in the corresponding row and in column $$$X \mod M$$$. (We can maintain $$$X$$$ throughout this process so we don't need to recompute it every time.)

This clearly gives us $$$A$$$ ones per row, for a total of $$$NA$$$ ones. Moreover, it is fairly easy to see that this process distributes ones evenly across all the columns, so each column gets $$$\frac{NA}{M}$$$ ones. Then, since $$$NA = MB$$$, we have $$$B = \frac{NA}{M}$$$, so each column gets $$$B$$$ ones, as desired.
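
Here's a sketch of the construction (it relies on $$$A \leq M$$$, which the constraints guarantee):

    #include <bits/stdc++.h>
    using namespace std;

    int main() {
        int T; cin >> T;
        while (T--) {
            long long N, M, A, B; cin >> N >> M >> A >> B;
            if (N * A != M * B) { cout << "NO\n"; continue; }
            vector<string> g(N, string(M, '0'));
            long long X = 0;                     // total ones placed so far
            for (int r = 0; r < N; r++)
                for (int t = 0; t < A; t++, X++)
                    g[r][X % M] = '1';           // cycle through the columns
            cout << "YES\n";
            for (auto &row : g) cout << row << '\n';
        }
    }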

Runtime: $$$O(NM)$$$ per test case. Click here for my submission.


H — Binary Median

First, note that the resulting set contains $$$2^M - N$$$ strings. Thus, our median will be larger than exactly $$$X = \lfloor \frac{2^M - N - 1}{2} \rfloor$$$ strings in our set.

Then, we build the answer bit by bit, starting from the most significant bit. At each step, tentatively append a $$$1$$$ and count how many strings in our set are lexicographically smaller than the resulting prefix (not counting strings that match the prefix itself). Since the median is greater than exactly $$$X$$$ strings in the set, if this count is at most $$$X$$$, the median must be at least as large as the $$$1$$$-prefix, so we keep the $$$1$$$. Otherwise, the count already exceeds $$$X$$$, so any string starting with the $$$1$$$-prefix would be greater than more than $$$X$$$ strings and cannot be the median; we append a $$$0$$$ instead.

We can compute the number of lexicographically smaller strings than our current answer by computing our answer string's binary representation and subtracting the number of smaller strings we've removed from the set.

We need to complete this building process $$$O(M)$$$ times. Each time, we must evaluate whether each of the input strings is lexicographically smaller; this can be done in $$$O(NM)$$$ time. By reusing data between iterations, we can also count smaller input strings in $$$O(N)$$$ time, but this makes the code slightly more complex. As is, we have an $$$O(NM^2)$$$ solution; since the sum of all $$$NM$$$ is bounded at $$$10^5$$$, the sum of all $$$NM^2$$$ is at most $$$6 \cdot 10^6$$$, so we can expect this algorithm to pass easily.

Time Complexity: $$$O(NM^2)$$$ per test case. Click here for my submission.


By Geothermal, history, 7 weeks ago, In English,

Since yesterday's Div. 3 round, I've received several PMs asking me to explain my solution to problem E (link here). As far as I'm aware, it's one of the simplest solutions submitted from an implementation standpoint; the code is relatively short and only one line involves nontrivial logic. So, I figured it might be more efficient to just post a writeup publicly. (As an aside, this is not something I do often; I am unable to respond to most PMs I receive asking for problem explanations. I'm writing this post because I received multiple requests related specifically to my solution for this particular problem, and because I happened to have the time. Generally, though, PMing me over Codeforces is unfortunately not an especially efficient way to get help with problems.) I'll try to outline my thought process as I worked through the problem, since I think solving E within three minutes was probably the biggest single contributing factor to my performance in the round.

The first thing I noticed about this problem was that the definition of $$$k$$$-periodic was unusual---typically, for example, 110110110 would have period $$$3$$$, while 1001000 would not. After parsing the definition, we see that the basic idea is that there must be one string of 1's, each $$$k$$$ spaces apart, and no other 1's left in the string.

The intuition we gather from this is that positions $$$i$$$ and $$$i-k$$$ are probably related. More formally, if we have a valid string whose last 1 is at position $$$i-k$$$, then we can create a valid string whose last 1 is at position $$$i$$$ by taking the same string and changing the character in position $$$i$$$ to a 1.

Our ability to reuse computation for earlier positions in the string to help us deal with later positions motivates a DP approach. We define our state as follows: let $$$dp[i]$$$ be the maximum value of the number of 1's kept as 1's (rather than changed to 0) minus the number of 0's changed to 1's in order to get a valid string that has its last 1 at position $$$i$$$, if it has any 1's at all. Then, our answer is the minimum value of $$$cnt_1 - dp[i]$$$ over all $$$i$$$, where $$$cnt_1$$$ is the number of 1's in the whole string. This is because, since $$$dp[i]$$$ counts unchanged 1's minus changed 0's, $$$cnt_1 - dp[i]$$$ counts changed 1's plus changed 0's, which is exactly what we want.

This seems like a very bizarre DP state, but it's actually quite nicely motivated. The key idea is that since we're forming a chain of 1's separated by $$$k$$$ spaces each, and our transition will thus relate $$$dp[i]$$$ to $$$dp[i-k]$$$, we probably want to form a state based only on changes made within the chain of positions we're changing to 1's, so that we don't need to worry about any positions after $$$i$$$ or outside this chain. Then, within our chain, we keep all the 1's as 1's and change all the 0's to 1's. Keeping 1's that were in the string already effectively saves us from performing a modification, while changing 0's effectively forces us to perform a modification. Thus, $$$dp[i]$$$ is effectively the number of modifications we saved minus the number of extra modifications we added compared to if we changed all the elements of the string to 0.

Then, we just need to figure out the transitions. First, it is possible for $$$dp[i]$$$ to equal $$$S[i]$$$. If $$$S[i] = 0$$$, then we can achieve the string with no 1's without changing any 0's to 1's, so $$$dp[i]$$$ can equal $$$0$$$. If $$$S[i] = 1$$$, then we can save one modification by creating the string where the only 1 is at position $$$i$$$.

One case remains: taking a chain ending at position $$$i-k$$$ and adding a $$$1$$$ at position $$$i$$$. Starting from $$$dp[i-k]$$$, this forces one extra change if $$$S[i] = 0$$$, because we now need to change position $$$i$$$ to a 1. However, it saves one modification if $$$S[i] = 1$$$, because we no longer need to change position $$$i$$$ to a 0. This can be written concisely as dp[i-K] - 1 + 2 * (S[i] - '0') (with K as the code's name for $$$k$$$), since $$$dp[i] = dp[i-k] + 1$$$ if $$$S[i] = 1$$$ or $$$dp[i-k] - 1$$$ if $$$S[i] = 0$$$.

From here, we can compute the DP table in $$$O(N)$$$ and use it to generate the answer. A self-contained version of the relevant code is below.

    #include <bits/stdc++.h>
    using namespace std;

    int main() {
        int T; cin >> T;
        while (T--) {
            int N, K; cin >> N >> K;
            string S; cin >> S;
            vector<int> dp(N);
            int ans = 0;
            for (int i = 0; i < N; i++) {
                // Start a new chain at i: keep S[i] if it's a 1.
                dp[i] = S[i] - '0';
                if (i >= K) {
                    // Extend a chain ending at i-K: saves a change if S[i] == '1',
                    // costs an extra change if S[i] == '0'.
                    dp[i] = max(dp[i], dp[i-K] - 1 + 2 * (S[i] - '0'));
                }
                ans = max(ans, dp[i]);
            }
            // cnt = number of 1's in S; the answer counts changed 1's plus changed 0's.
            int cnt = 0;
            for (int i = 0; i < N; i++) cnt += S[i] - '0';
            cout << cnt - ans << '\n';
        }
        return 0;
    }


By Geothermal, history, 3 months ago, In English,

Hi all!

Since nobody has posted about the round yet, I thought I'd open up a page to discuss GCJ Round 1A, which happened earlier this evening.

The scoreboard, problems, and editorials are available at this link. Practice mode should be open now--I'd recommend trying the problems if you didn't compete, since I thought they were generally pretty nice.

It looks like everyone with a score greater than 63 or with a score of 63 and a penalty time no greater than 2:14:05 qualifies for Round 2, assuming that nobody gets removed from the scoreboard later on. Everyone else can attempt to qualify through Rounds 1B and 1C, even if they participated in this contest but failed to qualify.

If nobody posts solutions here within the nearish future, I'll write up and share mine, but as far as I can tell, my ideas were pretty much identical to the ones explained in the official editorials.


By Geothermal, history, 3 months ago, In English,

Hi all!

As it might be a little while before the editorial for today's round is released (due to the open hacking phase), I thought I'd publish my solutions in case anyone wants to review the problems right away.


A — Divisibility Problem

There are a few ways to approach this. One is to find the smallest multiple of $$$B$$$ that is at least $$$A$$$ and subtract $$$A$$$. Recall that we can compute $$$\frac{A}{B}$$$, rounded up, as $$$\frac{A+B-1}{B}$$$, rounded down. We can compute this value and then multiply it by $$$B$$$ to get the desired multiple of $$$B$$$, subtracting $$$A$$$ to get our answer.

An alternative solution (and the one I implemented) is to realize that, as long as $$$A$$$ isn't already a multiple of $$$B$$$, the number of moves required is $$$B - (A \% B)$$$, since the distance to the next lower multiple of $$$B$$$ plus the distance to the next higher multiple of $$$B$$$ will be $$$B$$$. We can use this to compute the answer. In the case where $$$A$$$ is a multiple of $$$B$$$, this prints $$$B$$$ instead of $$$0$$$, so we can take this value modulo $$$B$$$ to compute the correct answer.
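
A one-line sketch of this formula:

    #include <bits/stdc++.h>
    using namespace std;

    int main() {
        int T; cin >> T;
        while (T--) {
            long long A, B; cin >> A >> B;
            // Distance to the next multiple of B, with the exact-multiple case mapped to 0.
            cout << (B - A % B) % B << '\n';
        }
    }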

Runtime: $$$O(1)$$$. Click here for my submission.


B — K'th Beautiful String

We build the string character by character. At any stage, we want to find the $$$K$$$'th lexicographically smallest string starting with a certain prefix, with $$$X$$$ characters left to place, of which $$$Y$$$ are b's. There are $$$Z = \dbinom{X-1}{Y}$$$ strings satisfying these conditions that have an 'a' as the next character. These strings are lexicographically smaller than all strings with b's as the next character, so if $$$K \leq Z$$$, we'll use one of these strings: we append an 'a' and continue with $$$X$$$ decreased by $$$1$$$. Otherwise, we need to append a 'b', so for the next stage, we want the $$$(K-Z)$$$'th lexicographically smallest string with $$$X-1$$$ more characters and $$$Y-1$$$ more b's.

Runtime: $$$O(N)$$$. Click here for my submission.


C — Ternary XOR

We build the answer digit by digit. Notice that the two numbers will be equal up to a certain point. As soon as we make the two numbers different, though, whichever number is bigger at this stage will always be bigger, no matter what we do, so we should seek to minimize this number without caring what happens to the smaller number. This means that as soon as we add different characters to the two numbers, every remaining digit in the larger number should be a zero, while the remaining digits in the smaller number should be the same as the digits of the input number.

Until we reach this point, though, we should keep the two numbers as balanced as possible to avoid making the larger one too big. When we process a $$$0$$$, we are clearly best off adding a $$$0$$$ to each number. When we process a $$$2$$$, we should add a $$$1$$$ to each number: adding a $$$0$$$ to one number and a $$$2$$$ to the other would make the larger number's digit $$$2$$$ instead of $$$1$$$. When we process a $$$1$$$, we must add a $$$1$$$ to one number and a $$$0$$$ to the other, and from here, we move into the stage described above, appending only $$$0$$$'s to the number that received the $$$1$$$ and copying all remaining digits of the input to the number that received the $$$0$$$.

Runtime: $$$O(N)$$$. Click here for my submission.


D — Carousel

First, note that the answer will always be at most $$$3$$$. To achieve this, alternate between $$$1$$$ and $$$2$$$ for the first $$$N-1$$$ numbers, with the last number receiving a $$$3$$$. We can easily see that this guarantees that all adjacent figures have distinct colors, so we will never have a pair of adjacent figures with the same color and distinct types. (The reason $$$2$$$ may not be achievable is because the carousel is circular, so the last figure is adjacent to the first--this makes the odd case harder to deal with, since, for example, when $$$N=3$$$, we could end up with a result of 1, 2, 1, which doesn't work if the first and last figures are of different types.)

Now, we determine whether we can do better. It is trivial to check if the answer is $$$1$$$: this works only if all figures are of the same type. Now, we simply need to check if the answer is $$$2$$$ and provide a construction if so. To do this, we use dynamic programming. Without loss of generality, let the first figure have color $$$1$$$. Then, if it is possible to color the first $$$i$$$ figures in a valid way such that figure $$$i$$$ has color $$$j$$$, let $$$dp[i][j]$$$ be the color of figure $$$i-1$$$ used in one such coloring. Otherwise, let $$$dp[i][j] = -1$$$. To transition, if $$$dp[i][j] \neq -1$$$, we can transition to $$$dp[i+1][3-j]$$$ (if we let the colors be $$$1$$$ and $$$2$$$), and if figures $$$i$$$ and $$$i+1$$$ are of the same type, we can also transition to $$$dp[i+1][j].$$$

From here, we can determine whether an answer of $$$2$$$ is possible. If $$$dp[N-1][2]$$$ isn't $$$-1$$$, then we can definitely get an answer of $$$2$$$. If $$$dp[N-1][1]$$$ isn't $$$-1$$$ and figure $$$N-1$$$ is of the same type as figure $$$0$$$, we can also achieve an answer of $$$2$$$, since giving both the same color is then allowed. Either way, we can pick one of these cases and backtrack through our DP table to compute a valid coloring, if one exists.

If no such coloring exists, the answer must be $$$3$$$, and we can output the construction described above.

Runtime: $$$O(N)$$$. Click here for my submission.


E — Tree Queries

Let's start with an observation: if a path from the root passes within distance $$$1$$$ of a vertex $$$v$$$, it must pass through $$$v$$$'s parent. Thus, the problem reduces to determining whether some path from the root contains the parents of all vertices in the query set. For simplicity, we simply replace every vertex in the query set with its parent (replacing the root with itself).

Since a path from the root to a vertex includes that vertex and all of its ancestors, the answer is yes if and only if some node in the set is a descendant of every other node in the set. We can perform a pre-order traversal, tracking entry and exit times, to determine in $$$O(1)$$$ whether some node $$$v$$$ is an ancestor of some node $$$u$$$. Then, we maintain a node that's a descendant of all nodes we've considered thus far. We process each node in the query set---if this node is an ancestor of our current node, we do nothing, and if the current node is an ancestor of this node, this node is now a descendant of all nodes we've considered, so we replace the current node with the new one. If neither node is an ancestor of the other, there is no vertex in the set that is a descendant of all the others, so the answer is no. If we finish this process and still have a valid result node, the answer is yes.
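
Here's an illustrative implementation of this check using entry/exit times; it's a sketch with my own names (and an assumed bound on $$$n$$$), not the linked submission:

    #include <bits/stdc++.h>
    using namespace std;

    const int MAXN = 200005;             // assumed bound on n
    vector<int> adj[MAXN];
    int par[MAXN], tin[MAXN], tout[MAXN], timer_ = 0;

    void dfs(int v, int p) {
        par[v] = p; tin[v] = timer_++;
        for (int u : adj[v]) if (u != p) dfs(u, v);
        tout[v] = timer_++;
    }

    // Is a an ancestor of b (or equal to it)?
    bool anc(int a, int b) { return tin[a] <= tin[b] && tout[b] <= tout[a]; }

    int main() {
        int n, m; cin >> n >> m;
        for (int i = 0; i < n - 1; i++) {
            int u, v; cin >> u >> v;
            adj[u].push_back(v); adj[v].push_back(u);
        }
        dfs(1, 1);                        // root at 1; the root is its own parent
        while (m--) {
            int k; cin >> k;
            vector<int> q(k);
            for (auto &v : q) { cin >> v; v = par[v]; }  // replace each vertex by its parent
            int low = q[0];               // candidate descendant of all others
            bool ok = true;
            for (int v : q) {
                if (anc(v, low)) continue;
                if (anc(low, v)) low = v;
                else ok = false;
            }
            cout << (ok ? "YES" : "NO") << '\n';
        }
    }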

Click here for my submission.


F — Make k Equal

Suppose we want to make $$$K$$$ elements of the set equal to some number $$$X$$$. If there are at least $$$K$$$ copies of $$$X$$$ in the set, this takes $$$0$$$ operations. Otherwise, realize that to make any number lower than $$$X$$$ equal to $$$X$$$, we must first raise all numbers lower than $$$X$$$ to $$$X-1$$$. Meanwhile, to make any number higher than $$$X$$$ equal to $$$X$$$, we must first lower all numbers greater than $$$X$$$ to $$$X+1$$$. Take for granted that we can compute the costs of doing these things---we'll explain how to do this below. Then, if there are at least $$$K$$$ numbers less than or equal to $$$X$$$, we can bring all the lower numbers to $$$X-1$$$ and then raise however many of them we need to $$$X$$$. If there are at least $$$K$$$ numbers greater than or equal to $$$X$$$, we can bring all the higher numbers to $$$X+1$$$ and then lower however many we need to $$$X$$$. In any case, we can also bring all the lower numbers to $$$X-1$$$ and all the higher numbers to $$$X+1$$$, after which we can take however many numbers we need to $$$X$$$.

Now, how can we determine the number of operations this will take for each possible $$$X$$$ efficiently? First, a quick observation: we only need to consider elements of the input array as $$$X$$$, as we can prove that as long as $$$X$$$ is not an element of the array, we can increase or decrease it to the next element of the input array without increasing the number of required operations. Then, we consider each possible $$$X$$$ in increasing order. We can then maintain the number of elements less than $$$X$$$ and the sum of all those elements, as well as the same values for the numbers greater than $$$X$$$. These values allow us to compute the costs of bringing all numbers lower than $$$X$$$ to $$$X-1$$$ or the costs of bringing all numbers higher than $$$X$$$ to $$$X+1$$$, after which we can consider each of the three cases above to compute our answer.

Runtime: $$$O(N \log N)$$$, to account for sorting. Click here for my submission.


By Geothermal, history, 4 months ago, In English,

It's been a while since I've done one of these!


A — Station and Bus

After parsing the problem statement, we see that we simply need to determine whether the two companies each operate at least one station, since if this is the case, those stations will be connected, and otherwise, if one company operates every station, no connections will be built. This is now a fairly easy task---one way to do it is to check whether every character in the string is the same, printing No if this is the case and Yes otherwise. Alternatively, we can directly compare $$$S$$$ to "AAA" and "BBB", rejecting if the input is one of these strings and accepting otherwise. There are a wide variety of approaches, but they're all essentially equivalent.

Runtime: $$$O(1)$$$. Click here for my submission.


B — Count Balls

Given the large limits on $$$N, A,$$$ and $$$B$$$, a brute force simulation will be vastly too slow---we need a faster approach here. First, we compute $$$X = \lfloor \frac{N}{A+B} \rfloor$$$ to determine the number of times the operation is fully completed before we reach $$$N$$$ balls. Then, each of these operations adds $$$A$$$ blue balls to our answer, so we can initialize our answer as $$$XA$$$. Then, we have $$$Y = N \% (A+B)$$$ remaining balls to add. If $$$Y < A$$$, all of the remaining balls will be blue, and if $$$Y \geq A$$$, then we add another full set of blue balls. Thus, we can add $$$\min(Y, A)$$$ to our answer and print it.
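
A minimal sketch of this computation:

    #include <bits/stdc++.h>
    using namespace std;

    int main() {
        long long N, A, B; cin >> N >> A >> B;
        long long X = N / (A + B);        // completed cycles
        long long Y = N % (A + B);        // balls in the partial cycle
        cout << X * A + min(Y, A) << '\n';
    }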

Runtime: $$$O(1)$$$. Click here for my submission.


C — Tax Increase

This initially seems like an annoying but doable math exercise--we could compute the minimum and maximum values at which the taxes are $$$A$$$ and $$$B$$$, intersect these ranges, and take the minimum value in the intersection. However, implementing this is mildly frustrating, and it turns out that there's an even simpler solution.

Notice that the bounds on $$$A$$$ and $$$B$$$ are extremely low. Indeed, once the price reaches $$$1010$$$, the $$$10\%$$$ tax is already greater than $$$100$$$, so the answer must be less than $$$1010$$$. Thus, we can check the taxes resulting from every potential answer from $$$1$$$ to $$$1009$$$ and print the first one that works, returning $$$-1$$$ if none of them are valid.

Runtime: $$$O(A+B)$$$, albeit with a bad constant factor. Click here for my submission. Note that I capped the answer at $$$10000$$$ instead of $$$1009$$$ in my solution--this doesn't change the output; I just used an unnecessarily large bound in case I did my arithmetic wrong and the correct bound was higher than I thought.


D — String Formation

We can simulate the process fairly directly. First, we maintain whether or not the string $$$S$$$ is currently reversed or in its original configuration. Then, we maintain two strings $$$B$$$ and $$$A$$$, containing the characters before and after $$$S$$$ when $$$S$$$ is in its original configuration.

Processing the queries is fairly easy. For queries of type $$$1$$$, we just update our variable to indicate that $$$S$$$ is in the reverse of its position before the query. For queries of type $$$2$$$, we simply append the new character to either $$$B$$$ or $$$A$$$. If $$$S$$$ is in its original configuration, we append to the end specified in the problem. If $$$S$$$ is reversed, we append to the other end, because, for example, appending to the end when $$$S$$$ is reversed is equivalent to appending to the beginning when $$$S$$$ is in its original configuration.

Then, at the end of the process, if $$$S$$$ is in its original configuration, we print $$$B^RSA$$$, where $$$B^R$$$ is $$$B$$$ reversed (we define this operator for other strings similarly). We reverse $$$B$$$ because we want characters added to the beginning last to show up at the beginning of our string. If $$$S$$$ is reversed, we print $$$A^RS^RB$$$, which is the reverse of this string.

Runtime: $$$O(|S|+Q).$$$ Click here for my submission.


E — Divisible Substring

We use a common trick in problems asking us to count substrings satisfying a specific criterion. Essentially, we store the residues, modulo $$$P$$$, of every prefix of the string, followed by appending zeros until we reach $$$N$$$ characters. For example, in the first sample input, the numbers we get are $$$0000$$$, $$$3000$$$, $$$3500$$$, $$$3540$$$, and $$$3543$$$, giving us residues of $$$0, 0, 2, 0,$$$ and $$$0$$$ modulo $$$3$$$. Then, given that each residue appears $$$C[i]$$$ times in the result, we sum $$$\dbinom{C[i]}{2}$$$ over all $$$i$$$ and print that as our answer. There's one catch: because we're appending extra zeros, we handle the cases where $$$P = 2$$$ or $$$5$$$ separately. (These are fairly easy because we simply need to count the substrings whose last digits are divisible by $$$P$$$.)

Why does this work? Notice that if we subtract the $$$i$$$'th value in this array from the $$$j$$$'th, we get the substring over the range $$$i+1 \cdots j$$$, followed by some zeros. For example, subtracting $$$3000$$$ from $$$3540$$$ gives us $$$540$$$. Moreover, note that as long as $$$10$$$ is relatively prime to $$$P$$$, appending zeros won't affect whether or not this is divisible by $$$P$$$. Thus, we can choose any two of these numbers with the same residue mod $$$P$$$ and subtract them to get a substring divisible by $$$P$$$, while choosing any two of these numbers with different residues mod $$$P$$$ will not give us a substring divisible by $$$P$$$. Therefore, this process computes exactly the right answer.

How do we compute these numbers? Note that we don't need to know the numbers exactly (and in fact, computing all of them would require $$$O(N^2)$$$ memory): we only need their residues modulo $$$P$$$. We can thus compute these values sequentially. First, we compute the residue of each prefix--to compute one prefix's residue from the last, multiply by ten, add the new digit, and take the remainder modulo $$$P$$$. Then, to get the residue of a number from its corresponding prefix, multiply by $$$10^{N-1-i}$$$, given that we just added the digit in position $$$i$$$ in the string. These powers of ten can be computed quickly using any efficient modular exponentiation algorithm.

Once we have the counts of each residue, we can compute the answer easily.
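
Here's a sketch of the whole solution; note that it precomputes the powers of ten iteratively (an $$$O(N)$$$ alternative to repeated modular exponentiation), and the names are mine:

    #include <bits/stdc++.h>
    using namespace std;

    int main() {
        long long N, P; cin >> N >> P;
        string S; cin >> S;
        long long ans = 0;
        if (P == 2 || P == 5) {
            // A number is divisible by 2 or 5 iff its last digit is.
            for (long long i = 0; i < N; i++)
                if ((S[i] - '0') % P == 0) ans += i + 1;   // i+1 choices of start
        } else {
            // cnt[r] = how many zero-padded prefixes have residue r mod P.
            vector<long long> cnt(P, 0), pw(N + 1, 1);
            for (long long i = 1; i <= N; i++) pw[i] = pw[i-1] * 10 % P;
            cnt[0]++;                                      // the empty prefix
            long long pre = 0;
            for (long long L = 1; L <= N; L++) {
                pre = (pre * 10 + (S[L-1] - '0')) % P;
                cnt[pre * pw[N - L] % P]++;                // pad with N-L zeros
            }
            for (long long c : cnt) ans += c * (c - 1) / 2;
        }
        cout << ans << '\n';
    }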

Runtime: $$$O(N \log P).$$$ Click here for my submission.


F — Removing Robots

Start by sorting the robots by position. Realize that each robot $$$i$$$, either directly or through a chain reaction, will necessarily activate all robots up to some position $$$L[i]$$$, and no robots after this position. As a subproblem, we determine how to compute $$$L[i]$$$. We compute the values in reverse. Maintain a segment tree storing the values of $$$L$$$. Then, for each robot, use binary search to determine the furthest robot it directly activates. Finally, set $$$L[i]$$$ to the maximum of the values of $$$L[i]$$$ up to this furthest robot (using the segment tree to query for this value). This works because the furthest robot activated by $$$i$$$ is also the furthest robot directly or indirectly activated by any robot $$$i$$$ touches.

Now, let's compute the answer. Let $$$dp[i]$$$ be the number of sets of robots that could be activated given that we're only activating robots in the range $$$i \cdots N-1$$$. Define $$$dp[N]$$$ to be $$$1$$$. We see that $$$dp[0]$$$ is our answer. Then, we compute $$$dp[i]$$$ from $$$N-1$$$ to $$$0$$$. Note that when we consider adding given robot, we have two possible choices. We could avoid activating this robot, giving us $$$dp[i+1]$$$ possible sets (since if we're considering robots $$$i \cdots N-1$$$ but don't activate robot $$$i$$$, we can now pick any valid set from $$$i+1 \cdots N-1$$$). Alternatively, we can activate this robot, which adds robots $$$i \cdots L[i]$$$ to this set and gives us $$$dp[L[i]+1]$$$ possible sets. Notice also that if we're considering robots $$$i \cdots N-1$$$ and don't directly activate robot $$$i$$$, no higher-numbered robot will activate robot $$$i$$$, so the only way we'll ever activate robot $$$i$$$ is if we choose it now. Thus, we have $$$dp[i] = dp[i+1] + dp[L[i] + 1].$$$

We compute this recursion and print $$$dp[0]$$$ to get our answer.

Runtime: $$$O(N \log N)$$$. Click here for my submission.


By Geothermal, history, 5 months ago, In English,

A — Serval vs Monster

Unpacking the problem, we need to find the least multiple of $$$A$$$ that is at least $$$H$$$. This is equal to $$$H$$$ divided by $$$A$$$, rounded up, or $$$\lceil \frac{H}{A} \rceil$$$. Using the integer division provided by most programming languages, we can compute this as $$$\frac{H + A - 1}{A}$$$, rounded down. (This is correct because when $$$H$$$ is equal to $$$0$$$ mod $$$A$$$, the $$$A-1$$$ component will be discarded, but otherwise, adding $$$A-1$$$ will be enough to push $$$H$$$ to the next multiple of $$$A$$$, effectively adding $$$1$$$ to the result similarly to rounding up.)

Runtime: $$$O(1)$$$. Click here for my submission.


B — Common Raccoon vs Monster

Given that we can't use the same move twice, our best option is to just use all the moves once, so the amount of damage we can deal is the sum of $$$A_i$$$ over all $$$i$$$. We can compute this sum and then compare it to $$$H$$$, printing Yes if the sum is at least $$$H$$$ and No otherwise.

Runtime: $$$O(N)$$$. Click here for my submission.


C — Fennec vs Monster

First, we claim that we should use the special move on the $$$K$$$ monsters with the greatest $$$H_i$$$. To prove this, note that we should never attack a monster and then use the special move on it (it'd be pointless--we could save a move just by not attacking beforehand), so the special move will always save us exactly $$$H_i$$$ attacks when used on monster $$$i$$$. Thus, since we want to save as many attacks as possible, we should use the special move on the monsters with the $$$K$$$ greatest values of $$$H_i$$$.

Now, we need to attack the remaining $$$N-K$$$ monsters. We sort the data and take the sum of $$$H_i$$$ over the $$$N-K$$$ monsters with the lowest values (or over no monsters at all, if $$$K \geq N$$$). We can then print this sum as our answer.
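A minimal sketch, assuming the input is $$$N$$$ and $$$K$$$ followed by the $$$H_i$$$:

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    long long n, k; cin >> n >> k;
    vector<long long> h(n);
    for (auto &x : h) cin >> x;
    sort(h.begin(), h.end());       // the special move handles the top K values
    long long ans = 0;
    for (long long i = 0; i + k < n; i++) ans += h[i];  // attack the rest
    cout << ans << endl;
}
```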

Runtime: $$$O(N \log N)$$$. Click here for my submission.


D — Caracal vs Monster

We claim that the answer is $$$2^{\lfloor \log_2 H \rfloor + 1} - 1$$$. We will prove this by strong induction on $$$H$$$. The base case, $$$H = 1$$$, is trivial: $$$2^{0 + 1} - 1 = 2 - 1 = 1$$$, and indeed, we need to use only a single attack to kill the monster, as desired.

Now, for our inductive step, we prove that this answer is correct for some $$$H$$$ given that it works for all other values. First, notice that the answer for $$$H$$$ is equal to one more than twice the answer for $$$\lfloor \frac{H}{2} \rfloor$$$, since after our first attack, we have two independent subproblems with value $$$\lfloor \frac{H}{2} \rfloor$$$. Thus, our answer for $$$H$$$ is

$$$2(2^{\lfloor \log_2 \lfloor \frac{H}{2} \rfloor \rfloor + 1} - 1) + 1.$$$

We can observe that $$$\lfloor \log_2 \lfloor \frac{H}{2} \rfloor \rfloor = \lfloor \log_2 H \rfloor - 1$$$. The intuition comes from a basic application of logarithm rules, but we can also prove this formally: note that if $$$H$$$ is an odd number greater than $$$1$$$, then $$$\lfloor \log_2 H \rfloor = \lfloor \log_2 (H-1) \rfloor$$$ and $$$\lfloor \frac{H}{2} \rfloor = \lfloor \frac{H-1}{2} \rfloor$$$, so subtracting $$$1$$$ from $$$H$$$ won't change the result on either side of the equation. Thus, if $$$H$$$ is odd, we can subtract $$$1$$$ from it, so we can assume $$$H$$$ is even. Then, $$$\lfloor \frac{H}{2} \rfloor = \frac{H}{2}$$$, and $$$\lfloor \log_2 \frac{H}{2} \rfloor = \lfloor \log_2 H \rfloor - 1$$$, as desired, where this final step comes from our logarithm rules.

Thus, our answer becomes

$$$2(2^{\lfloor \log_2 H \rfloor} - 1) + 1 = \boxed{2^{\lfloor \log_2 H \rfloor + 1} - 1},$$$

as desired.

Now, we can simply implement this formula and output its value for our $$$H$$$.
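Alternatively, we can evaluate the underlying recurrence directly and avoid floating-point logarithms entirely; a minimal sketch, assuming the input is just $$$H$$$:

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    long long h; cin >> h;
    long long ans = 1;
    while (h > 1) {        // one attack splits H into two copies of floor(H/2)
        ans = 2 * ans + 1;
        h /= 2;
    }
    cout << ans << endl;
}
```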

Runtime: $$$O(1)$$$ if you use a native function to compute logarithms, or $$$O(\log H)$$$ if you do it yourself. Click here for my submission.


E — Crested Ibis vs Monster

Though it initially might seem like this problem demands some sort of greedy solution, the problem is fraught with edge cases, so coming up with a correct greedy approach is more difficult than it looks (maybe even impossible). Luckily, the constraints are low enough that $$$O(HN)$$$ solutions are admitted, motivating a dynamic programming approach.

Let $$$dp[i]$$$ be the fewest MP necessary to deal $$$i$$$ damage. Of course, we are looking for $$$dp[H]$$$. To start off, note that $$$dp[0] = 0$$$, and initialize all other $$$dp[i]$$$ to $$$\infty$$$. Then, our transitions are our spells--for each $$$i$$$ and $$$j$$$, we take $$$dp[i] = \min(dp[i], dp[i-A_j] + B_j)$$$, clamping $$$i-A_j$$$ to zero if it is negative. The intuition is that we're transitioning from the state $$$i-A_j$$$, since that's the damage we had dealt before casting this spell, and then we add $$$B_j$$$, the cost of this spell.

Our answer is then $$$dp[H]$$$.
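A minimal sketch of this DP (assuming the input is $$$H$$$, $$$N$$$ followed by $$$A_i$$$ $$$B_i$$$ pairs):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int h, n; cin >> h >> n;
    vector<int> a(n), b(n);
    for (int i = 0; i < n; i++) cin >> a[i] >> b[i];
    vector<long long> dp(h + 1, LLONG_MAX);
    dp[0] = 0;                            // zero MP to deal zero damage
    for (int i = 1; i <= h; i++)
        for (int j = 0; j < n; j++) {
            int prev = max(0, i - a[j]);  // overkill clamps the state to zero
            dp[i] = min(dp[i], dp[prev] + (long long)b[j]);
        }
    cout << dp[h] << endl;
}
```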

Runtime: $$$O(HN)$$$. Click here for my submission.


F — Silver Fox vs Monster

My solution essentially overkills this problem with a lazy segment tree. Of course, this problem can be solved via several simpler approaches, most of which implement the same general idea using prefix sums or a BIT (or any other structure that can support range updates and point queries), but the asymptotic complexity is the same either way, and I found this solution easiest to write quickly since I had an easily accessible template I could use.

We start by sorting the monsters by location and creating a map from monster locations to their indices in the sorted list. We then create a lazy segment tree, where the value at position $$$i$$$ represents the health of monster $$$i$$$, where monsters are indexed based on their position in the sorted list. We initialize all of these values to $$$H_i$$$.

Then, we proceed through the monsters in increasing order of location. Our basic idea is to continually drop bombs whose leftmost position is the position of the leftmost remaining monster. We can prove that this will lead to an optimal solution because any bombs with positions further to the right will avoid killing the leftmost remaining monster, and this configuration kills the leftmost monster while maximizing damage to monsters further to the right.

For each monster $$$i$$$, we start by querying the segment tree for its health. Then, we compute the number of bombs $$$B$$$ necessary to kill it, similarly to our approach to problem A. We also compute, using the map created beforehand, the highest index $$$j$$$ whose monster is at a position less than or equal to $$$X_i + 2D$$$, since this will be the furthest monster to the right that we can reach. Finally, we update the segment tree by subtracting $$$AB$$$ over the range $$$[i, j]$$$ and add $$$B$$$ to our answer. Then, we move on to the next monster.
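Since the range updates arrive in left-to-right order and every query is a point query at the sweep position, a difference array also suffices in place of the lazy segment tree. Here is a sketch under that substitution (assuming the input is $$$N$$$ $$$D$$$ $$$A$$$ followed by $$$X_i$$$ $$$H_i$$$ pairs):

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

int main() {
    ll n, d, a; cin >> n >> d >> a;
    vector<pair<ll,ll>> m(n);             // (position, health)
    for (auto &p : m) cin >> p.first >> p.second;
    sort(m.begin(), m.end());
    vector<ll> dec(n + 1, 0);             // difference array of damage applied
    ll ans = 0, damage = 0;               // damage = running point query
    for (ll i = 0; i < n; i++) {
        damage += dec[i];
        ll hp = m[i].second - damage;     // health after earlier bombs
        if (hp <= 0) continue;
        ll b = (hp + a - 1) / a;          // bombs needed, as in problem A
        ans += b;
        damage += b * a;                  // the bombs cover monsters i onward...
        ll lim = m[i].first + 2 * d;      // ...up to position X_i + 2D
        ll k = upper_bound(m.begin(), m.end(), make_pair(lim, LLONG_MAX))
               - m.begin();
        dec[k] -= b * a;                  // cancel the damage past the range
    }
    cout << ans << endl;
}
```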

Runtime: $$$O(N \log N)$$$, due to sorting, our maps, and $$$O(N)$$$ segment tree operations. Click here for my submission.


By Geothermal, history, 6 months ago, In English,

A — Next Alphabet

The easiest way to deal with this problem is probably to convert the given character to an integer, add one, and convert it back. If you absolutely can't do that, a somewhat more annoying implementation would involve iterating through the alphabet, waiting until you find the given letter, using a boolean variable to store that the letter has been found, and printing the letter on the next iteration of the loop. (However, iterating over the alphabet is itself nasty unless you have conversion from letters to integers, so this approach is rather pointless unless your language just doesn't support the first method, though in that case you should probably switch languages anyway.)
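The first method is essentially a one-liner with character arithmetic (a sketch, assuming the input is the single character $$$C$$$):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    char c; cin >> c;
    cout << char(c + 1) << endl;  // characters are integers: 'a' + 1 == 'b'
}
```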

Runtime: $$$O(1)$$$. Click here for my submission.


B — Achieve the Goal

In total, we need to earn $$$NM$$$ points. We subtract the points earned so far and determine whether the remainder is a valid score for the final test. There are two special cases to deal with. First, if we need more than $$$K$$$ points, the answer is $$$-1$$$ because we cannot achieve the goal. Second, if we need a negative number of points, we should print $$$0$$$, since we must earn a nonnegative score. Otherwise, we can print $$$NM$$$ minus the sum of the input values.
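A sketch covering both special cases (assuming the input is $$$N$$$, $$$K$$$, $$$M$$$ followed by the $$$N-1$$$ scores):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    long long n, k, m; cin >> n >> k >> m;
    long long sum = 0;
    for (long long i = 0; i + 1 < n; i++) { long long a; cin >> a; sum += a; }
    long long need = n * m - sum;         // points still required
    if (need > k) cout << -1 << endl;     // unreachable even with a perfect score
    else cout << max(need, 0LL) << endl;  // scores can't be negative
}
```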

Runtime: $$$O(N)$$$. Click here for my submission.


C — Welcome to AtCoder

This problem can be solved via direct simulation. Maintain two arrays: the number of incorrect submissions and whether we've seen an accepted solution, with both arrays containing entries for each problem. Then, when we see an incorrect submission for a problem, we can increment the first array. When we see a correct submission, we check whether this is the first correct submission we've seen so far and, if so, add the count of incorrect submissions for this problem to our answer.
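A sketch of the simulation (assuming the input is $$$N$$$, $$$M$$$ followed by $$$M$$$ lines of problem number and verdict, and that we print the counts of solved problems and penalties):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n, m; cin >> n >> m;
    vector<int> wrong(n + 1, 0);          // incorrect submissions per problem
    vector<bool> solved(n + 1, false);    // accepted yet?
    int ac = 0, pen = 0;
    for (int i = 0; i < m; i++) {
        int p; string s; cin >> p >> s;
        if (solved[p]) continue;          // submissions after an AC don't matter
        if (s == "AC") { solved[p] = true; ac++; pen += wrong[p]; }
        else wrong[p]++;
    }
    cout << ac << " " << pen << endl;
}
```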

Runtime: $$$O(N)$$$. Click here for my submission.


D — Maze Master

Note that using BFS, we can compute the distance from one point to all other points in $$$O(HW)$$$. Thus, by performing BFSes rooted from all $$$HW$$$ points, we can compute all pairwise distances in this graph in $$$O(H^2W^2)$$$. From here, we can iterate over each of these distances and compute the largest among them, which is our answer.
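A sketch of the all-sources BFS (assuming the input is $$$H$$$, $$$W$$$ followed by the grid, with '#' marking walls):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int h, w; cin >> h >> w;
    vector<string> g(h);
    for (auto &row : g) cin >> row;
    int ans = 0;
    int dr[] = {0, 0, 1, -1}, dc[] = {1, -1, 0, 0};
    for (int sr = 0; sr < h; sr++)
        for (int sc = 0; sc < w; sc++) {  // BFS rooted at every open cell
            if (g[sr][sc] == '#') continue;
            vector<vector<int>> dist(h, vector<int>(w, -1));
            queue<pair<int,int>> q;
            dist[sr][sc] = 0; q.push({sr, sc});
            while (!q.empty()) {
                auto [r, c] = q.front(); q.pop();
                ans = max(ans, dist[r][c]);
                for (int k = 0; k < 4; k++) {
                    int nr = r + dr[k], nc = c + dc[k];
                    if (nr < 0 || nr >= h || nc < 0 || nc >= w) continue;
                    if (g[nr][nc] == '#' || dist[nr][nc] != -1) continue;
                    dist[nr][nc] = dist[r][c] + 1;
                    q.push({nr, nc});
                }
            }
        }
    cout << ans << endl;
}
```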

Runtime: $$$O(H^2W^2)$$$. Click here for my submission.


E — Max-Min Sums

We will separately count the number of times each element appears as the minimum and the maximum in a set of $$$K$$$ integers. Then, if $$$A_i$$$ appears $$$L_i$$$ times as the largest element in the set and $$$S_i$$$ times as the smallest element in the set, we add $$$(L_i - S_i) A_i$$$ to our answer.

First, sort the array and zero-index it (so that, for example, $$$A_0$$$ is the smallest element). Note that the number of sets in which $$$A_i$$$ is the largest value is the number of ways to choose $$$K-1$$$ smaller integers, or $$$\dbinom{i}{K-1}$$$. Meanwhile, the number of sets in which $$$A_i$$$ is the smallest value is the number of ways to choose $$$K-1$$$ larger integers, or $$$\dbinom{N-i-1}{K-1}$$$. We can thus compute $$$L_i$$$ and $$$S_i$$$ for each value in $$$O(1)$$$ each, with $$$O(N \log MOD)$$$ precomputation. (If you aren't familiar with how to efficiently compute the choose function, I strongly recommend looking into this--the basic idea is to precompute all relevant factorials and their modular inverses, which we can then use to compute the value of the choose function.) Now, we can use these values to compute our final answer.
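A sketch of the counting (assuming the input is $$$N$$$, $$$K$$$ followed by the $$$A_i$$$, that the modulus is $$$10^9+7$$$, and that the $$$A_i$$$ may be negative, hence the extra normalization):

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
const ll MOD = 1e9 + 7;

ll mpow(ll b, ll e) {                      // modular exponentiation
    ll r = 1; b %= MOD;
    for (; e; e >>= 1, b = b * b % MOD) if (e & 1) r = r * b % MOD;
    return r;
}

int main() {
    int n, k; cin >> n >> k;
    vector<ll> a(n);
    for (auto &x : a) cin >> x;
    sort(a.begin(), a.end());
    vector<ll> f(n + 1, 1), inv(n + 1, 1);
    for (int i = 1; i <= n; i++) f[i] = f[i - 1] * i % MOD;
    inv[n] = mpow(f[n], MOD - 2);          // Fermat inverse of the top factorial
    for (int i = n; i >= 1; i--) inv[i - 1] = inv[i] * i % MOD;
    auto C = [&](int x, int y) -> ll {
        if (y < 0 || y > x) return 0;
        return f[x] * inv[y] % MOD * inv[x - y] % MOD;
    };
    ll ans = 0;
    for (int i = 0; i < n; i++) {
        ll L = C(i, k - 1);                // sets where a[i] is the maximum
        ll S = C(n - i - 1, k - 1);        // sets where a[i] is the minimum
        ll v = ((a[i] % MOD) + MOD) % MOD; // normalize negative values
        ans = (ans + (L - S + MOD) % MOD * v) % MOD;
    }
    cout << ans << endl;
}
```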

Runtime: $$$O(N \log MOD)$$$. Click here for my submission.


F — Enclose All

Using the Google Algorithm, we can find code for this problem on the internet in $$$O(1)$$$ real time. From here, we can apply the copy-and-paste lemma to enter this code into our editor. Afterwards, we can read in our input and feed it to our copied algorithm, printing out the result. If you implemented the Google Algorithm effectively, your code should have $$$O(N)$$$ runtime, which will easily pass in time. This is the approach I implemented during the contest.

Of course, I'm (mostly--I actually did do this during the contest) joking, and I'll now describe the solution I assume was intended. The basic idea is that there are two cases to deal with. First, two points form a diameter of the circle, or second, at least three points are on the circle.

We will now prove that in any other case, we can shrink the circle to create a better answer. First, if only one point is on the circle, we can shrink the circle by dilating it with respect to that one point until another point is on the border, shrinking it without excluding any points. Second, if two points are on the circle, we can move the center towards these points while keeping them on the border, until the two points are on the diameter or another point is on the border, to shrink the circle without excluding any points. The reason this shrinks the circle is that the distance between the two points is constant, but the distance from the center of the circle to the line is decreasing, so by the Pythagorean Theorem we can see that the distance from the center to each of the two points, which is the radius, decreases. In all other cases, we have that either two points are on the diameter or three points are on the circle, as desired.

Thus, we can make a list of possible circles by computing the circles with the segments connecting each pair of points as diameters and the circumcircles of all triples of points. The circumcenter can be computed with some algebra by noting that it is the intersection of the perpendicular bisectors or through a closed form in terms of the given coordinates. Given the circumcenter, we can compute the circumradius by finding its distance to one of our three points. Now, we can consider each of these circles in turn and determine whether they contain all points. Among all the circles that do contain all of the points, we print the smallest radius.
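For reference, here is a sketch of the circumcenter closed form mentioned above (it returns false for degenerate, nearly collinear triples):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Circumcenter of the triangle (ax,ay), (bx,by), (cx,cy) via perpendicular
// bisector algebra; writes the center into (ox, oy).
bool circumcenter(double ax, double ay, double bx, double by,
                  double cx, double cy, double &ox, double &oy) {
    double d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by));
    if (fabs(d) < 1e-12) return false;     // collinear: no circumcircle
    double a2 = ax * ax + ay * ay, b2 = bx * bx + by * by,
           c2 = cx * cx + cy * cy;
    ox = (a2 * (by - cy) + b2 * (cy - ay) + c2 * (ay - by)) / d;
    oy = (a2 * (cx - bx) + b2 * (ax - cx) + c2 * (bx - ax)) / d;
    return true;
}
```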

Runtime: $$$O(N^4)$$$ if implemented from scratch, or $$$O(N)$$$ if copied from the internet. Click here for my submission. All credit goes to nayuki.io!

A bit of a soapbox: though this was an ABC, I think this problem was far too standard to appear on a modern programming competition, especially as problem F. Finding the solution on Google took me less than five minutes during the round, and given the simplicity of the problem statement, at least one of the round's authors/testers ought to have realized that the solution might exist on the internet already. Then, they should have done some basic research and, upon realizing how easy it would be to copy/paste the solution, scrapped this problem and written a new F.

Yes, writing original, challenging problems is very challenging, but this would have been a very easy issue to avoid. And, yes, the intended solution was not absurdly complex, but this problem still substantially advantages those who use Google and copy/paste code (including yours truly, oops), and even the intended solution is a lot easier to write if you Google the circumcenter formula. I'm not especially bothered by this as a one-time issue, but I do hope that this doesn't create a norm of allowing standard problems to appear as ABC E or F. (In contrast, for example, I'm mostly fine with problem D, even though it was also very standard, because at that level the focus probably should be on educating newcomers over encouraging competition.) I don't intend this as a personal attack on the authors, though, and I thank them for the contest and for the other five problems, which made for a very solid (if somewhat easier than usual) ABC.


By Geothermal, history, 6 months ago, In English,

A — 500 Yen Coins

We are essentially asked whether $$$500K \geq X$$$. We can determine this using an if statement. If you'd like to be fancy, you can shorten your code using the ternary operator, printing $$$\texttt{500*K >= X ? "Yes" : "No"}$$$.

Runtime: $$$O(1)$$$. Click here for my submission.


B — Count ABC

There are several ways to do this. The first is to compute each three-letter substring of $$$S$$$, either through brute force or your language's substring computation function, then comparing them to "ABC". Another, which I implemented, is to simply iterate over each position in the string up to $$$N-3$$$, using zero-indexing, and check whether $$$S[i] = A$$$, $$$S[i+1] = B$$$, and $$$S[i+2] = C$$$. If so, we increment the answer. Either way, we can maintain a count of "ABC" substrings and return it at the end. (Of course, we theoretically could use a more complicated pattern matching algorithm, but because the string whose occurrences we're counting is extremely short, brute force is more than sufficient here.)
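The sliding check, as a sketch (assuming the input is $$$N$$$ and $$$S$$$):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n; string s; cin >> n >> s;
    int cnt = 0;
    for (int i = 0; i + 2 < n; i++)       // last valid start is N-3, zero-indexed
        if (s.substr(i, 3) == "ABC") cnt++;
    cout << cnt << endl;
}
```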

Runtime: $$$O(N)$$$. Click here for my submission.


C — Count Order

Obviously, if we can compute $$$a$$$ and $$$b$$$ efficiently, we're finished. Since the two values can be computed similarly, this reduces the problem to determining the lexicographical position of a permutation.

Luckily, $$$N$$$ is quite small, so we can use brute force. Using $$$\texttt{next_permutation}$$$ in C++, we can generate all permutations in lexicographical order and compare each of them to $$$P$$$ and $$$Q$$$ in order to compute $$$a$$$ and $$$b$$$. Then, we can print $$$|a-b|$$$, as requested. If you're not using C++, you could simply generate all arrays of $$$N$$$ numbers from $$$1$$$ to $$$N$$$ and ignore the ones that aren't permutations. This has complexity $$$O(N^{N+1})$$$, which still passes in time.
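A sketch of the brute force (assuming the input is $$$N$$$, then $$$P$$$, then $$$Q$$$):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n; cin >> n;
    vector<int> p(n), q(n), cur(n);
    for (auto &x : p) cin >> x;
    for (auto &x : q) cin >> x;
    iota(cur.begin(), cur.end(), 1);      // lexicographically first permutation
    long long idx = 0, a = 0, b = 0;
    do {
        if (cur == p) a = idx;
        if (cur == q) b = idx;
        idx++;
    } while (next_permutation(cur.begin(), cur.end()));
    cout << abs(a - b) << endl;
}
```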

As an extra challenge, it turns out that there's a way to solve this problem in $$$O(N \log N)$$$. Of course, doing so is not necessary here and takes substantially more thought and code.

Runtime: $$$O(N \cdot N!)$$$. Click here for my submission.


D — Semi Common Multiple

It should be noted that much of this explanation is largely messy number theory used to prove that the solution works. The majority of the actual implementation details are contained in the final two paragraphs.

We are essentially being asked to count values of $$$X$$$ such that $$$X = \frac{a_i}{2} \mod a_i$$$ for all $$$i$$$. Let's start by characterizing the possible values of $$$X$$$. The key here is the Chinese remainder theorem, which tells us in this case that there is at most one valid solution to this system of modular congruences when we take it modulo $$$L = \texttt{lcm}(a_1, a_2, \cdots, a_n)$$$. In other words, there is at most one value $$$K$$$, with $$$0 \leq K < L$$$, such that $$$K = \frac{a_i}{2} \mod a_i$$$ for all $$$i$$$.

Now, there are two possible cases. First, suppose $$$2$$$ divides two of the values $$$a_i$$$ in the array a different number of times. (This is the case in the second sample input.) In this case, there is no working value of $$$K$$$. To prove this, suppose that $$$a_i$$$ is divisible by $$$2^a$$$ but not $$$2^{a+1}$$$, and $$$a_j$$$ is divisible by $$$2^b$$$ but not $$$2^{b+1}$$$, with $$$a < b$$$. Then, any value that is $$$\frac{a_i}{2} \mod a_i$$$ must be divisible by $$$2^{a-1}$$$ but not $$$2^a$$$, but any value satisfying the same equation for $$$a_j$$$ must be divisible by $$$2^{b-1}$$$, which, since $$$a < b$$$, implies that it is divisible by $$$2^a$$$, a contradiction. Therefore, in this case, the answer is $$$0$$$.

Now, assume that $$$2$$$ divides all $$$a_i$$$ the same number of times. In this case, $$$K = \frac{L}{2}$$$. Since $$$L$$$ contains the same power of two as all of the $$$a_i$$$, $$$K$$$ contains the same power of two as all of the $$$\frac{a_i}{2}$$$ values. Moreover, for any other prime $$$p$$$ such that $$$p^k$$$ divides $$$a_i$$$, we have $$$L = a_i = 0$$$ mod $$$p^k$$$, so $$$\frac{L}{2} = \frac{a_i}{2}$$$ mod $$$p^k$$$. Thus, $$$\frac{L}{2}$$$ is congruent to $$$\frac{a_i}{2}$$$ modulo any prime power dividing $$$a_i$$$, proving that $$$\frac{L}{2} = \frac{a_i}{2} \mod a_i$$$, as desired.

By the way, it's probably not worth working through the logic above to prove that $$$\frac{L}{2}$$$ works in the second case--in-contest, I figured this out by experimenting with the sample inputs.

From here, we can first determine which case we're in--we can do this either by counting the times $$$2$$$ divides each $$$a_i$$$ or by computing $$$\frac{L}{2}$$$ and checking that it is congruent to $$$\frac{a_i}{2}$$$ modulo $$$a_i$$$ for all $$$i$$$. (Recall that we can compute the LCM of two numbers by multiplying them and dividing by their GCD. To prevent overflow, we need to return zero immediately if $$$L$$$ gets too large, noting that if $$$L > 2M$$$, the answer will always be zero.)

Now, we need to count values that are $$$\frac{L}{2}$$$ mod $$$L$$$ and less than or equal to $$$M$$$. To start, our answer is at least $$$M / L$$$, rounded down, since for any block of $$$L$$$ consecutive numbers, one of them will be $$$\frac{L}{2} \mod L$$$. Then, if $$$M \% L \geq \frac{L}{2}$$$, we get one extra value, so we increment the answer. Finally, we print the result.
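Putting the pieces together, as a sketch (assuming the input is $$$N$$$, $$$M$$$ followed by the $$$a_i$$$):

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

int main() {
    ll n, m; cin >> n >> m;
    vector<ll> a(n);
    for (auto &x : a) cin >> x;
    ll L = 1;
    for (ll x : a) {
        L = L / __gcd(L, x) * x;          // lcm built incrementally
        if (L > 2 * m) { cout << 0 << endl; return 0; }  // no valid X can fit
    }
    ll K = L / 2;
    for (ll x : a)                        // detect mismatched powers of two
        if (K % x != x / 2) { cout << 0 << endl; return 0; }
    cout << (m - K) / L + 1 << endl;      // count K, K+L, K+2L, ... up to M
}
```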

Runtime: $$$O(N \log M)$$$. (The latter factor comes in from our LCM computation.) Click here for my submission.


E — Change a Little Bit

Starting with an observation, note that we should make the necessary changes in increasing order of cost, as we want to pay the smallest cost the most times and the largest cost the fewest times. Let's sort the data so that without loss of generality, $$$C_1 \geq C_2 \geq \cdots \geq C_N$$$ (so the positions are fixed from the last index to the first).

Now, let's compute the amount each $$$C_i$$$ contributes to the answer. We do this by computing the amount each $$$C_i$$$ contributes on average given a completely random $$$(S, T)$$$ pair (to simplify the computation, we won't assume $$$S \neq T$$$, as the cases where $$$S = T$$$ don't actually add to the answer) and multiplying by the $$$2^{2N}$$$ total ways to pick $$$S$$$ and $$$T$$$.

How much does $$$C_i$$$ contribute for any given string? Obviously, if the two strings are not different in position $$$i$$$, $$$C_i$$$ doesn't contribute at all. However, if the strings are different in position $$$i$$$, $$$C_i$$$ will count once for position $$$i$$$, plus once for each position less than $$$i$$$ at which $$$S$$$ and $$$T$$$ are different. $$$S$$$ and $$$T$$$ have a $$$\frac{1}{2}$$$ chance of differing at each position, so this adds $$$\frac{i-1}{2}$$$, using one-indexing, to our count, for a total of $$$\frac{i+1}{2} C_i$$$ as our average cost among strings that differ in position $$$i$$$. Half the strings differ in position $$$i$$$, so $$$C_i$$$ contributes $$$\frac{i+1}{4} C_i$$$ to our sum.

Thus, our answer is $$$2^{2N} \sum_{i = 1}^N \frac{i+1}{4} C_i$$$, or alternatively, $$$2^{2N-2} \sum_{i = 1}^N (i+1)C_i$$$. We can directly compute this sum in $$$O(N)$$$.
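The final computation is then only a few lines (a sketch, assuming the input is $$$N$$$ followed by the $$$C_i$$$ and that the modulus is $$$10^9 + 7$$$):

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
const ll MOD = 1e9 + 7;

ll mpow(ll b, ll e) {                      // modular exponentiation
    ll r = 1; b %= MOD;
    for (; e; e >>= 1, b = b * b % MOD) if (e & 1) r = r * b % MOD;
    return r;
}

int main() {
    ll n; cin >> n;
    vector<ll> c(n);
    for (auto &x : c) cin >> x;
    sort(c.rbegin(), c.rend());            // C_1 >= C_2 >= ... >= C_N
    ll sum = 0;
    for (ll i = 1; i <= n; i++) sum = (sum + (i + 1) * (c[i - 1] % MOD)) % MOD;
    cout << sum * mpow(2, 2 * n - 2) % MOD << endl;
}
```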

Runtime: $$$O(N \log N)$$$, due to sorting. Click here for my submission.


F — Xor Shift

Consider the function $$$d(a)$$$ that computes, given an array $$$a$$$, a result array such that $$$d[i] = a[i] \, XOR \, a[i+1]$$$ for $$$0 \leq i < N-1$$$ and $$$d[N-1] = a[N-1] \, XOR \, a[0].$$$ The first insight here is that if the operation given in the problem transforms $$$a$$$ into $$$a'$$$, then $$$d(a')$$$ is a cyclic shift of $$$d(a)$$$ by $$$k$$$ positions. The proof is that XORing two values by the same number does not change the XOR of the two values--in other words, $$$(a \, XOR \, x) \, XOR \, (b \, XOR \, x) = a \, XOR \, b$$$ for all $$$a, b, x$$$, since XOR is commutative, associative, and satisfies $$$x \, XOR \, x = 0$$$. Therefore, since we obviously must have $$$d(a') = d(b)$$$, our values of $$$k$$$ must be such that $$$d(a)$$$, cyclically shifted $$$k$$$ positions, is equal to $$$d(b)$$$.

The second observation is that each of these values of $$$k$$$ admits exactly one value of $$$x$$$. The proof is that we can uniquely reconstruct an array $$$a$$$ from $$$d(a)$$$ given $$$a[0]$$$, by the recursion $$$a[i+1] = a[i] \, XOR \, d[i]$$$. Thus, if $$$a[0] \, XOR \, x = b[(N-k) \mod N]$$$ and $$$d(a') = d(b)$$$, we will have $$$a' = b$$$. Thus, for any $$$k$$$ such that $$$d(a') = d(b)$$$, we can take $$$x = a[0] \, XOR \, b[(N-k) \mod N]$$$ and have a working pair. Note that this uniquely defines $$$x$$$, so there will be at most one value of $$$x$$$ for any $$$k$$$.

Now, we need to enumerate the ways to cyclically shift $$$d(a)$$$ to get $$$d(b)$$$. There are several ways to do this--the one I used involves hashing the sequence. Note that we can compute the polynomial hash of an array $$$d$$$ as $$$d[0] \cdot s^{N-1} + d[1] \cdot s^{N-2} + \cdots + d[N-1] \cdot s^0$$$ for some arbitrary base $$$s$$$, all modulo some large prime. We can compute the hash of $$$a$$$ and $$$b$$$ in $$$O(N)$$$. Then, to cyclically shift our hash of $$$a$$$ by one position, we can subtract $$$a[0] \cdot s^{N-1}$$$, multiply the remainder of the hash by $$$s$$$, and add $$$a[0]$$$ back. Now, we have the hash of the array $$$a[1], a[2], \cdots, a[N-1], a[0]$$$. We can repeat this process for all $$$i$$$ and list the cyclic shifts for which our hash of $$$a$$$ equals our hash of $$$b$$$. With an appropriate hashing prime (the prime should be low enough to prevent long long overflow but higher than $$$2^{30}$$$ to prevent collisions), this is extremely unlikely to cause collisions and should pass essentially every time.

Once we have our values of $$$k$$$, we can use the approach outlined above to compute the corresponding values of $$$x$$$, at which point we're done.
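A sketch of the hashing approach (assuming the input is $$$N$$$ followed by the two arrays, and that the pairs are printed in increasing order of $$$k$$$; the base and prime below are arbitrary choices subject to the constraints discussed above, and collisions, while unlikely, are not impossible):

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
const ll HM = 2147483647;   // hashing prime: products still fit in a long long
const ll S = 1000003;       // arbitrary base

int main() {
    ll n; cin >> n;
    vector<ll> a(n), b(n);
    for (auto &x : a) cin >> x;
    for (auto &x : b) cin >> x;
    auto d = [&](vector<ll> &v) {          // adjacent-XOR difference array
        vector<ll> r(n);
        for (ll i = 0; i < n; i++) r[i] = v[i] ^ v[(i + 1) % n];
        return r;
    };
    vector<ll> da = d(a), db = d(b);
    ll ha = 0, hb = 0, top = 1;            // top = S^(N-1) mod HM
    for (ll i = 0; i < n; i++) {
        ha = (ha * S + da[i]) % HM;
        hb = (hb * S + db[i]) % HM;
        if (i) top = top * S % HM;
    }
    for (ll k = 0; k < n; k++) {
        if (ha == hb)                      // d(a) rotated by k matches d(b)
            cout << k << " " << (a[0] ^ b[(n - k) % n]) << "\n";
        // rotate one step: remove the leading da[k], append it at the back
        ha = ((ha - da[k] * top) % HM + HM) % HM;
        ha = (ha * S + da[k]) % HM;
    }
}
```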

Complexity: $$$O(N)$$$. Click here for my submission.


By Geothermal, history, 6 months ago, In English,

Hi all!

I'm proud to announce Competitive Programming Academy, my new programming competition training community.

Though the broad objective of CPA is to help people grow as competitive programmers, there are three specific goals I hope to achieve with this program:

  1. To provide a programming competition community focused specifically on training and growth. There are many existing communities that work well for discussing programming contests, but I think having one emphasizing training will make it easier for people to ask questions without fear of being ignored (or downvoted). Of course, the other advantage of asking questions there is that I'll be around to answer them. Additionally, my hope is that having a community with which to train is likely to help people who don't have other competitive programmers around locally find motivation to grow.
  2. Helping people create training plans and find helpful resources to use while practicing. While I'm not convinced there's a huge difference between various training platforms (e.g. I don't think it matters much whether you solve problems on Codeforces versus AtCoder versus Codechef versus Kattis, as long as you solve problems at all), I do think identifying these resources may be helpful for complete beginners, and for others, I'm hoping to minimize the amount of time trainees need to spend thinking about how to train in order to maximize the amount of time they can actually spend training.
  3. Communicating the intuition behind challenging problems. While most successful competitive programmers train individually (and this approach is generally effective), I think there's one question most people have trouble answering on their own or with editorials while working on challenging problems: how would I have thought of that? I think the most important way I (or a coach/mentor/etc in general) can help with the training process is by helping trainees build the intuition helpful in solving competition problems.

In particular, I should note that because I don't want to restrict CPA to a smaller size, it is not a one-on-one training program--for the most part, I won't be able to select specific problems for each individual person in the program, for example. If enough people would find it useful, I'm open to creating some sort of community-based mentoring structure, matching mentors with groups of trainees for a more personal touch. (My instinct, however, is that this is probably not the way to go because finding active and effective mentors would be challenging and because a similar concept has been tried in the past and failed, so I plan to implement it only if much of the community is fairly convinced that this would be useful.)

Currently, the plans include a set of chats for general discussions and questions and answers and moderated discussion sessions after Codeforces rounds or group virtual contests. I'm starting small because I'd like to get a sense for the program's scale and what people are looking for before expanding too much--in the near future, I'm hoping to add additional services. Some ideas I've been bouncing around include the aforementioned mentoring program, lectures (with accompanying problemsets) on specific topics that come up regularly in contests, and a problem-of-the-day system.

Competitive Programming Academy is hosted on Discord--join the server at https://discord.gg/VxVnGHu. Please read the announcements in #first-time, fill out the linked form, and you'll be added soon afterwards. Please make sure to read the rules and guidelines carefully. To ensure a high quality of discourse, violations of these rules will result in mutes and/or bans from the server.

Looking forward to seeing many of you there!


By Geothermal, history, 7 months ago, In English,

Hi all!

I'm currently considering opening up a competitive programming training program of some sort. The details are definitely still in the works, though, and I wanted to discuss some preliminary ideas with the community to get feedback on what would be most helpful (and on whether there would be any interest in this).

I'm interested in starting a training program for three main reasons:

  • To make competitive programming more accessible to those without access to resources like college courses or training camps

  • To provide a space in which questions are encouraged and actively answered (especially since I think this generally isn't the case in the community at large)

  • To help competitive programmers build problem-solving skills and intuition, which I think aren't sufficiently emphasized by most contest training resources

I think one of the most frustrating things about being a new (or even intermediate) competitive programmer is that it's generally pretty difficult to get help from people more experienced than you. My hope is that creating a community with the explicit intent of training competitive programmers less experienced than myself will alleviate this problem, and perhaps will help establish a norm of helping newcomers within the overall competitive programming community.

However, I'm not sure exactly what such a program would entail, so I'd love to get some input from the community on what would be most helpful. I would most likely organize this through a Discord server, which would include places to ask for help with problems, to discuss competitive programming in general, and to discuss how best to train. However, I think it'd probably be best to incorporate some element of active training into the program--some preliminary ideas I've considered include:

  • Regularly published problemsets, including tasks of varying difficulties. Most likely, I'd publish relatively short problemsets a few times a week (something like one problem at each of three overall levels, with each trainee attempting one of the problems). For the next few days, those interested could work on solutions and message me for hints and feedback, and afterwards, I'd post my solutions, with a particular focus on outlining how I came up with my overall approach.

  • Topic lectures introducing important algorithms and discussing several problems applying them. These would probably be most useful for people with some experience (e.g. stably rated above 1400-1500), since below that there aren't really too many specific topics to cover.

  • Scheduled community virtual contests, with discussion sessions afterwards to talk through the problems and discuss ideas, especially with a focus on the intuition underlying the solutions.

  • For a smaller group of people with more competitive programming experience, I'd be open to providing personal assistance with training, problem/topic selection, overall contest strategy advice, etc.

  • If enough strong competitors are willing to get involved, it might make sense to have some sort of mentor-trainee program in which newcomers are matched with more experienced programmers in order to get help and advice.

The target audience for this would depend on who's interested--I think that at least in theory, I'd be able to work with anyone up to around (and including) the 2200s. My main goal would probably be to help novices have their questions answered and come up with a good general training plan while working more personally with more advanced competitors to focus specifically on building up intuition and shoring up weak areas.

Of course, all of this is very vague, since the above isn't any sort of finalized plan. At this point, though, I have a couple requests: 1. Post below if you would be at all interested in this. It'd be helpful for me to figure out how many people would want to be involved as I decide whether starting this up would be worth the effort. 2. If you have any thoughts on which (if any) of the above ideas you'd find helpful, or any other forms of training that would be useful to you, please share them--in the event I do decide to make this happen, I'd like to know what I can do to be of as much assistance as possible.

Thanks to everyone who read this far--if you have any questions or comments, please be sure to post them below!


By Geothermal, history, 7 months ago, In English,

A — Can't Wait for Holiday

The only approach here is the trivial one--compare the input to each of the possible seven values and print the appropriate answer for each of them. One reasonably fast way to implement this is to create an array $$$A$$$ containing the seven inputs, with "SUN" in position zero and "SAT" in position six. Then, we compare $$$S$$$ to each value in the array and, upon finding the value $$$i$$$ such that $$$A[i] = S$$$, we print $$$7-i$$$. This is valid because the next Sunday after the day represented by our input would come in position seven.
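A sketch of the array method (assuming the input is the single string $$$S$$$):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    string a[] = {"SUN", "MON", "TUE", "WED", "THU", "FRI", "SAT"};
    string s; cin >> s;
    for (int i = 0; i < 7; i++)
        if (a[i] == s) cout << 7 - i << endl;  // Sunday sits at position seven
}
```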

Runtime: $$$O(1)$$$. Click here for my submission.


B — ROT N

Again, the only real solution here is to directly implement the approach given in the problem statement. However, character arithmetic gives us a fairly nice way to do so. For each character in $$$S$$$, we can compute the position of the corresponding letter in the alphabet by taking $$$S[i]$$$ minus the character 'A'. From here, we can add $$$N$$$ to this value and take the result modulo $$$26$$$ to find the offset corresponding to our new letter upon ciphering. Then, we can add 'A' to the offset and set $$$S[i]$$$ to the result to update $$$S$$$ accordingly. Finally, we simply print $$$S$$$.
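The whole cipher, as a sketch (assuming the input is $$$N$$$ followed by $$$S$$$):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n; string s; cin >> n >> s;
    for (auto &c : s) c = 'A' + (c - 'A' + n) % 26;  // shift, wrapping past 'Z'
    cout << s << endl;
}
```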

Runtime: $$$O(N)$$$. Click here for my submission.


C — Buy an Integer

Observe that we should buy a number with the greatest possible number of digits, since any number is strictly greater than any other number with fewer digits. We iterate over the possible numbers of digits in decreasing order and print our answer as soon as we find a number of digits at which we can afford some number.

Given that we want to buy an $$$N$$$-digit number, the cheapest one is $$$10^{N-1}$$$, which will cost $$$10^{N-1}A + NB$$$. If this is greater than $$$X$$$, we must try a lower value of $$$N$$$. If not, let's solve for the maximum number we can buy. Suppose this number is $$$M$$$. Then, we must have $$$MA + NB \leq X$$$, so $$$MA \leq X - NB$$$, so $$$M \leq \frac{X - NB}{A}$$$. Thus, any $$$N$$$-digit number that is at most $$$\frac{X - NB}{A}$$$ will be cheap enough. Keep in mind that since we've assumed we're buying an $$$N$$$-digit number, $$$M$$$ must be at most $$$10^N - 1$$$, even if $$$\frac{X - NB}{A}$$$ is $$$10^N$$$ or greater. (For example, if $$$A = 1$$$, $$$B = 20$$$, and $$$X = 40$$$, then we clearly cannot buy any number with at least two digits, and when we consider one digit, we find $$$\frac{X - NB}{A} = 20$$$. However, since we're only considering one-digit numbers, the greatest possible answer is $$$9$$$.)

From here, we can iterate over all possible values of $$$N$$$ in decreasing order and print the answer corresponding to the first sufficiently cheap value of $$$N$$$. It's probably a good idea to check separately whether we can buy $$$10^9$$$, since this case works a little differently because we can't buy any larger $$$10$$$-digit numbers.
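A sketch of the digit-count loop (assuming the input is $$$A$$$, $$$B$$$, $$$X$$$ and that $$$0$$$ is printed when nothing is affordable):

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

int main() {
    ll a, b, x; cin >> a >> b >> x;
    // the 10-digit case first: 10^9 is the only 10-digit number on sale
    if (a * 1000000000LL + 10 * b <= x) { cout << 1000000000 << endl; return 0; }
    ll p10 = 1000000000;
    for (ll d = 9; d >= 1; d--) {
        p10 /= 10;                             // 10^(d-1): cheapest d-digit number
        if (a * p10 + d * b > x) continue;     // no d-digit number is affordable
        ll m = (x - d * b) / a;                // largest affordable value here
        cout << min(m, p10 * 10 - 1) << endl;  // cap at the largest d-digit number
        return 0;
    }
    cout << 0 << endl;
}
```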

Runtime: $$$O(\log MX)$$$, where $$$MX$$$ is the greatest integer sold. Click here for my submission.


D — Coloring Edges on Tree

We claim $$$K$$$ is the maximum degree of any vertex. Our construction is fairly simple. Root the tree at vertex $$$1$$$ and perform a DFS. Then, when we reach a node $$$v$$$, color the edges to its children with any set of colors from $$$1$$$ to $$$K$$$ not already used to color the edge from $$$v$$$ to its parent, if $$$v$$$ isn't the root. This will always be possible because we know by our choice of $$$K$$$ that there will be at least $$$\textrm{deg} \, v$$$ colors to choose from, so we can use the colors $$$1$$$ through $$$\textrm{deg} \, v$$$ for this, possibly excluding the one already used for the edge from $$$v$$$ to $$$v$$$'s parent. In practice, my solution iterates through the colors starting with $$$1$$$ and skipping the one already used if necessary.

Proving that no answer lower than this $$$K$$$ is possible is fairly trivial. Suppose for contradiction that $$$\textrm{deg} \, v$$$ is greater than the minimum possible $$$K$$$ for some vertex $$$v$$$. Then, by the Pigeonhole Principle, it is impossible to color the edges incident to $$$v$$$ with different colors from $$$1$$$ to $$$K$$$, so this must not be the case. Therefore, $$$K$$$ must be at least $$$\textrm{deg} \, v$$$ for all vertices on the graph, which implies that our solution is optimal.
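A sketch of the construction (assuming the input is $$$N$$$ followed by the $$$N-1$$$ edges, with $$$K$$$ printed first and then one color per edge in input order):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n; cin >> n;
    vector<vector<pair<int,int>>> adj(n + 1);  // (neighbor, edge index)
    for (int i = 0; i < n - 1; i++) {
        int u, v; cin >> u >> v;
        adj[u].push_back({v, i});
        adj[v].push_back({u, i});
    }
    int K = 0;
    for (int v = 1; v <= n; v++) K = max(K, (int)adj[v].size());
    vector<int> color(n - 1, 0);
    vector<bool> vis(n + 1, false);
    stack<pair<int,int>> st;                   // (vertex, color of parent edge)
    st.push({1, 0}); vis[1] = true;
    while (!st.empty()) {
        auto [v, pc] = st.top(); st.pop();
        int c = 0;
        for (auto [u, e] : adj[v]) {
            if (vis[u]) continue;
            if (++c == pc) ++c;                // skip the parent edge's color
            color[e] = c;
            vis[u] = true;
            st.push({u, c});
        }
    }
    cout << K << "\n";
    for (int i = 0; i < n - 1; i++) cout << color[i] << "\n";
}
```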

Runtime: $$$O(N)$$$. Click here for my submission.


E — Rem of Sum is Num

First, note that the required conditions cannot hold for a sequence of at least $$$K$$$ elements, since any value $$$K$$$ or greater cannot be the remainder when our sum is divided by $$$K$$$. Otherwise, we can write the given equation for the range $$$i$$$ through $$$j$$$ as

$$$\sum_{k = i}^j (A_k - 1) = 0 \, \textrm{mod} \, K.$$$

We thus decrease all values in the array by $$$1$$$, reducing our problem to counting subarrays summing to $$$0 \, \textrm{mod} \, K$$$ with length less than $$$K$$$. This can be solved using prefix sums and a sliding window. First, ignore the length condition. Then, the number of valid sequences ending at $$$j$$$ is the number of earlier positions $$$i$$$ such that

$$$\sum_{k = 0}^i (A_k - 1) = \sum_{k = 0}^j (A_k - 1) \, \textrm{mod} \, K,$$$

working modulo $$$K$$$. We can iterate over all prefixes of the array and maintain a map from prefix sums modulo $$$K$$$ to the number of times they've occurred so far to keep count of this. Then, to bring back the condition on the length of our array, we can remove the sum that occurred $$$K$$$ positions earlier from our map before incrementing the answer at each position so that at any given time, all subarrays of length $$$K$$$ or greater are not considered as options.
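A sketch of this window (assuming the input is $$$N$$$, $$$K$$$ followed by the $$$A_i$$$):

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

int main() {
    ll n, k; cin >> n >> k;
    vector<ll> a(n);
    for (auto &x : a) cin >> x;
    vector<ll> pre(n + 1, 0);              // prefix sums of A_i - 1, mod K
    for (ll i = 0; i < n; i++) pre[i + 1] = (pre[i] + a[i] - 1) % k;
    map<ll, ll> cnt;
    ll ans = 0;
    for (ll j = 0; j <= n; j++) {
        if (j >= k) cnt[pre[j - k]]--;     // window: subarray length must be < K
        ans += cnt[pre[j]]++;
    }
    cout << ans << endl;
}
```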

Runtime: $$$O(N \log N)$$$. Click here for my submission.


F — Sugoroku

We employ dynamic programming. Let $$$dp[i]$$$ be the minimum number of moves to reach cell $$$N$$$ from cell $$$i$$$, and let $$$nxt[i]$$$ be the least possible next move from position $$$i$$$ that makes it possible to achieve this minimum. We can compute $$$dp[i]$$$ for all positions except the game-over cells as one plus the minimum of $$$dp[i+j]$$$ over all $$$j$$$ from $$$1$$$ to $$$M$$$, and $$$nxt[i]$$$ is the least $$$j$$$ that achieves this minimum. (For simplicity, we let $$$dp[i]$$$ be infinity for all game-over cells.)

Of course, implementing this as given would take $$$O(NM)$$$ time, so we need to be faster about it. There are lots of ways to do this--for example, we could use a segment tree. However, my solution employs a priority queue. The queue will store pairs $$$(dp[i], i)$$$ and will be sorted such that the lowest element is that with the lowest value of $$$dp[i]$$$, with ties broken by selecting the lowest value of $$$i$$$ (in order to satisfy lexicographic minimality), since at any given point we should transition to the smallest possible next point. Then, we iterate over all positions in decreasing order of $$$i$$$. To compute $$$dp[i]$$$, we first check the top element $$$(dp[j], j)$$$ from the queue and, while this top element has $$$j > i+M$$$, pop it from the queue, since we can no longer make a move that will get us far enough to reach this position. Then, if any elements remain in the queue, we can set $$$dp[i]$$$ to $$$dp[j] + 1$$$ and $$$nxt[i]$$$ to $$$j-i$$$. If no elements remain in the queue, it is impossible to reach the end of the board from position $$$i$$$.

From here, we can print $$$-1$$$ if $$$dp[0]$$$ is infinity. Otherwise, we can reconstruct and print the answer using the $$$nxt$$$ array.
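A sketch of the priority queue solution, following the tie-breaking described above (assuming the board is given as $$$N$$$, $$$M$$$ and a 0/1 string of length $$$N+1$$$, and that the moves are printed space-separated):

```cpp
#include <bits/stdc++.h>
using namespace std;
const int INF = INT_MAX;

int main() {
    int n, m; string s;
    cin >> n >> m >> s;                    // s[i] == '1' marks a game-over square
    vector<int> dp(n + 1, INF), nxt(n + 1, -1);
    // min-heap of (dp[j], j); entries with j out of reach are popped lazily,
    // which is safe because the window (i, i+M] only slides left
    priority_queue<pair<int,int>, vector<pair<int,int>>, greater<>> pq;
    dp[n] = 0;
    pq.push({0, n});
    for (int i = n - 1; i >= 0; i--) {
        if (s[i] == '1') continue;
        while (!pq.empty() && pq.top().second > i + m) pq.pop();
        if (pq.empty()) continue;          // nothing in (i, i+M] is reachable
        auto [dj, j] = pq.top();           // min dp, ties broken by smallest j
        dp[i] = dj + 1;
        nxt[i] = j - i;
        pq.push({dp[i], i});
    }
    if (dp[0] == INF) { cout << -1 << endl; return 0; }
    for (int i = 0; i != n; i += nxt[i]) cout << nxt[i] << " ";
    cout << endl;
}
```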

Runtime: $$$O(N \log N)$$$. Click here for my submission.


As always, questions and comments are welcome below.


By Geothermal, history, 8 months ago, In English,

A — Circle

The area of a circle with radius $$$r$$$ is $$$r^2 \pi$$$, while the area of a circle with radius $$$1$$$ is $$$\pi$$$. The answer is thus $$$\frac{r^2 \pi}{\pi} = r^2$$$, so we can simply print $$$r \cdot r$$$.

Runtime: $$$O(1)$$$. Click here for my submission.


B — Echo

First, the answer is obviously no if $$$N$$$ is odd. Otherwise, we can directly check whether the strings created by the first and last halves of $$$S$$$ are the same. One way to do this is to iterate over all positions $$$i$$$ from $$$0$$$ to $$$\frac{N}{2}-1$$$ (remembering that the positions in a string are zero-indexed) and check whether position $$$i$$$ stores the same character as position $$$i + \frac{N}{2}$$$. If this isn't the case for some $$$i$$$, the answer is no. If, on the other hand, this holds for all $$$i$$$, the answer is yes.

Runtime: $$$O(N)$$$. Click here for my submission.


C — Average Length

The key to this problem is the C++ STL next_permutation function. (Some other languages may have similar methods; some do not.) If you aren't familiar with this function, running next_permutation(vec.begin(), vec.end()) rearranges the elements in the vector vec into the next lexicographically higher permutation, returning true if this was possible or false if vec is already in its lexicographically highest possible arrangement (which occurs when it is sorted in decreasing order). The function runs in $$$O(N)$$$.

We can use this function to generate all possible permutations of $$$N$$$ elements in $$$O(N \cdot N!)$$$. Then, for each permutation, we can easily compute the distance between each consecutive pair of points and add them to the answer. Finally, we divide the answer by $$$N!$$$ to get our average and print it.

If you're using a language that doesn't support a function like this: first of all, you should probably learn C++, but secondly, there are also other ways to generate the permutations. One fairly simple way to do this is recursive backtracking, storing an array containing the elements used so far in our current permutation and iterating over all possible remaining elements. This can be implemented efficiently, but even an extremely inefficient implementation runs in $$$O(N^3 \cdot N!)$$$, which, given $$$N \leq 8$$$, should run in time.
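A sketch of the permutation loop (assuming the input is $$$N$$$ followed by the coordinates):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n; cin >> n;
    vector<double> x(n), y(n);
    for (int i = 0; i < n; i++) cin >> x[i] >> y[i];
    vector<int> p(n);
    iota(p.begin(), p.end(), 0);
    double total = 0;
    long long perms = 0;
    do {                                   // walk the points in order p
        for (int i = 0; i + 1 < n; i++)
            total += hypot(x[p[i]] - x[p[i + 1]], y[p[i]] - y[p[i + 1]]);
        perms++;
    } while (next_permutation(p.begin(), p.end()));
    cout << fixed << setprecision(10) << total / perms << endl;
}
```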

Runtime: $$$O(N \cdot N!)$$$. Click here for my submission.

Fun fact: There's also an $$$O(N^2)$$$ solution to this problem relying on a slightly more clever insight. Note that element $$$i$$$ will appear immediately before element $$$j$$$ in $$$(N-1)!$$$ of our permutations, since there are $$$N-1$$$ ways to position $$$i$$$ and $$$(N-2)!$$$ ways to arrange the remaining $$$N-2$$$ elements. Thus, we can add the distance from point $$$i$$$ to point $$$j$$$ multiplied by $$$\frac{1}{N}$$$ to our answer, since it will appear in $$$\frac{1}{N}$$$ of our permutations. The total sum over all ordered pairs $$$(i, j)$$$ with $$$i \neq j$$$ is our answer.


D — Knight

First, note that $$$X+Y$$$ must be a multiple of three for any paths to exist, since both of our moves increase the sum of our coordinates by $$$3$$$. If this does not hold, the answer is zero. Then, we must take $$$K = \frac{X+Y}{3}$$$ moves. Suppose that we take $$$N$$$ moves that increase our coordinates by $$$(2, 1)$$$ and $$$K-N$$$ moves that increase our position by $$$(1, 2)$$$. Then, in order to reach $$$X$$$ as our x-coordinate, we must have $$$2N + (K-N) = X$$$, so $$$N = X - K$$$, leaving $$$K - N = 2K - X$$$ moves that increase our position by $$$(1, 2)$$$. Note that if $$$X-K$$$ or $$$2K-X$$$ is less than zero, we obviously cannot make a negative number of moves in one direction, so our answer is zero. (This will occur if one of $$$X$$$ and $$$Y$$$ is more than double the other.)

Now, note that the order of our moves doesn't affect our final outcome. Thus, we need to make $$$K$$$ moves, of which $$$N$$$ are one type and $$$K-N$$$ are the other. The number of ways to do this is simply $$$\dbinom{K}{N}$$$, or, in terms of our original variables, $$$\dbinom{\frac{X+Y}{3}}{\frac{2X-Y}{3}}$$$, which is our answer. The choose function can be computed in $$$O(X+Y)$$$.
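A sketch of the computation (assuming the input is $$$X$$$ $$$Y$$$ and that the modulus is $$$10^9+7$$$):

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
const ll MOD = 1e9 + 7;

ll mpow(ll b, ll e) {                      // modular exponentiation
    ll r = 1; b %= MOD;
    for (; e; e >>= 1, b = b * b % MOD) if (e & 1) r = r * b % MOD;
    return r;
}

int main() {
    ll x, y; cin >> x >> y;
    if ((x + y) % 3 != 0) { cout << 0 << endl; return 0; }
    ll k = (x + y) / 3, n = x - k;         // n moves of (2,1), k-n of (1,2)
    if (n < 0 || k - n < 0) { cout << 0 << endl; return 0; }
    vector<ll> f(k + 1, 1);                // factorials for the binomial
    for (ll i = 1; i <= k; i++) f[i] = f[i - 1] * i % MOD;
    cout << f[k] * mpow(f[n], MOD - 2) % MOD * mpow(f[k - n], MOD - 2) % MOD
         << endl;
}
```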

Runtime: $$$O(X+Y)$$$. Click here for my submission. The logic I used is slightly different from what was described here, but it is ultimately equivalent to this solution.


E — All-you-can-eat

We employ dynamic programming. One key insight here is that the timing condition is equivalent to eating some dishes that take a total of no longer than $$$T-1$$$ minutes, followed by one final dish whose time constraint doesn't matter.

Let $$$dp[i][j][k]$$$ be the greatest happiness we can achieve if we've eaten dishes taking a total of $$$i$$$ minutes to consume and we've taken some subset of the first $$$j$$$ dishes. $$$k$$$ is $$$1$$$ if we've already taken a dish and ignored its time constraint and $$$0$$$ if not. Then, we can transition from $$$dp[i][j][k]$$$ to $$$dp[i+A_{j+1}][j+1][k]$$$ or, if $$$k = 0$$$, to $$$dp[i][j+1][1]$$$, and add $$$B_{j+1}$$$ happiness in the process. Additionally, since we can waste time, skip a dish, or give up the time exemption without using it, we can also transition to $$$dp[i+1][j][k]$$$, $$$dp[i][j+1][k]$$$, or $$$dp[i][j][k+1]$$$ with no happiness increase. Of course, each $$$dp$$$ value should be the maximum possible over all ways to transition to it.

From here, our answer is $$$dp[T-1][N-1][1].$$$
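A sketch that collapses the $$$j$$$ dimension into a rolling update and takes the maximum over all end states, which is equivalent (assuming the input is $$$N$$$, $$$T$$$ followed by $$$A_i$$$ $$$B_i$$$ pairs):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n, t; cin >> n >> t;
    vector<int> a(n), b(n);
    for (int i = 0; i < n; i++) cin >> a[i] >> b[i];
    // dp[k][i]: best happiness with i minutes of time-constrained eating,
    // where k = 1 once some dish has had its time constraint ignored
    vector<vector<long long>> dp(2, vector<long long>(t, -1));
    dp[0][0] = 0;
    for (int j = 0; j < n; j++) {
        auto ndp = dp;
        for (int k = 0; k < 2; k++)
            for (int i = 0; i < t; i++) {
                if (dp[k][i] < 0) continue;       // unreachable state
                if (i + a[j] < t)                 // eat dish j within the limit
                    ndp[k][i + a[j]] = max(ndp[k][i + a[j]], dp[k][i] + b[j]);
                if (k == 0)                       // eat dish j as the final dish
                    ndp[1][i] = max(ndp[1][i], dp[0][i] + b[j]);
            }
        dp = ndp;
    }
    long long ans = 0;
    for (int k = 0; k < 2; k++)
        for (int i = 0; i < t; i++) ans = max(ans, dp[k][i]);
    cout << ans << endl;
}
```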

Runtime: $$$O(N^2)$$$, as we have $$$O(N^2)$$$ states and $$$O(1)$$$ transitions in our DP table. Click here for my submission. Note that I used a 2D DP table storing rows corresponding to $$$i$$$ and $$$k$$$. My loop iterating over $$$x$$$ corresponds to $$$j$$$ in this table.


F — Laminate

Again, we use dynamic programming. Let $$$dp[i][j]$$$ be the minimum cost of painting the first $$$i$$$ columns, where we've changed $$$H_i$$$ for $$$j$$$ of them and column $$$i$$$ itself has not had its height changed.

Let's figure out the transitions. Suppose we want to reach $$$dp[i][j]$$$, and the last $$$k$$$ columns before column $$$i$$$ will have their heights changed. In this case, our last painted column is $$$i-k-1$$$, so our previous state was $$$dp[i-k-1][j-k]$$$. Now, if the height of column $$$i$$$ is less than or equal to the height of column $$$i-k-1$$$, we can set the height of all adjusted columns in the middle to the height of column $$$i$$$ and then paint all of these columns using some of the same strokes we used for column $$$i-k-1$$$, so the cost of this transition is zero. Otherwise, we can use $$$H_{i-k-1}$$$ strokes to start painting column $$$i$$$ but will need $$$H_i - H_{i-k-1}$$$ more to finish, so this is the cost of our transition.

There are several ways to compute the final answer from this DP table, though the constraint that column $$$i$$$ must not have had its height modified is bothersome since it could be that the best solution involves modifying the last column's height. Probably the easiest way to get around this is to add an extra column with $$$H_i = 0$$$ at the end--given our cost scheme, this will never increase the height, nor will it be advantageous to change the height of this column. Then, our answer is $$$dp[N][K].$$$
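A sketch of this DP with the sentinel column (assuming the input is $$$N$$$, $$$K$$$ followed by the $$$H_i$$$):

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
const ll INF = LLONG_MAX / 2;

int main() {
    int n, K; cin >> n >> K;
    vector<ll> h(n + 2, 0);                // h[0] and h[n+1] are virtual 0-columns
    for (int i = 1; i <= n; i++) cin >> h[i];
    // dp[i][j]: min strokes for columns 1..i with j heights changed,
    // where column i itself is unchanged
    vector<vector<ll>> dp(n + 2, vector<ll>(K + 1, INF));
    dp[0][0] = 0;
    for (int i = 1; i <= n + 1; i++)
        for (int j = 0; j <= K; j++)
            for (int k = 0; k <= j && k < i; k++) {  // change k columns before i
                if (dp[i - k - 1][j - k] == INF) continue;
                ll cost = max(0LL, h[i] - h[i - k - 1]);
                dp[i][j] = min(dp[i][j], dp[i - k - 1][j - k] + cost);
            }
    ll ans = INF;
    for (int j = 0; j <= K; j++) ans = min(ans, dp[n + 1][j]);
    cout << ans << endl;
}
```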

Runtime: $$$O(N^2K)$$$, since each of our states has $$$O(N)$$$ transitions. Click here for my submission.


Feel free to post below if you have any questions or comments.


By Geothermal, history, 8 months ago, In English,

Now that the first North American ICPC regionals are complete, I'm starting a list of teams attending the North American Championships next February.

To start things off, the list will include team names and ranks. However, please comment below if you know any of the members of the advancing teams--I'll fill in their Codeforces handles if they have them or their names if not. (To avoid leaking sensitive personal information, please do not provide someone's Codeforces handle if their profile does not include their real name and you haven't received permission from them to post their username.)

Additionally, if you notice any errors, let me know. In particular, I'm gathering a list of schools by taking the top teams at each regional, but I may need to adjust this list if any teams choose not to accept their NAC invitations.


East Central

| Rank | University | Team Name | Member 1 | Member 2 | Member 3 |
| --- | --- | --- | --- | --- | --- |
| 1 | University of Waterloo | Waterloo Gold | FatalEagle | Reyna | wleung_bvg |
| 2 | Purdue University | Purdue RE | _Kuroni_ | strongoier | qbbqrl |
| 3 | Carnegie Mellon University | CMU2 | Gilwall | PlasmaVortex | WhaleVomit |
| 4 | University of Toronto | UofT Royal Blue | Deemo | Growth | teochaban |
| 5 | University of Michigan | Victors | Kognition | ramchandra | xukailun0316 |
| 6 | Case Western Reserve University | CWRU White | Hung Vu | Huy Nguyen | Trung Nguyen |

Greater New York

| Rank | University | Team Name | Member 1 | Member 2 | Member 3 |
| --- | --- | --- | --- | --- | --- |
| 1 | Cornell University | Tank Cadets | aaaaajack | chilli | daggertye |
| 2 | Columbia University | Columbia-1 | taorunz | xzj1997 | zhj |
| 3 | New York University | NYUCIMS-TEAM1 | Andy Polizzotto | Muge Chen | Zhen Li |
| 4 | Princeton University | Princetn-1 | joshkol1 | roosephu | Shunyu Yao |

Mid-Atlantic USA

| Rank | University | Team Name | Member 1 | Member 2 | Member 3 |
| --- | --- | --- | --- | --- | --- |
| 1 | Swarthmore College | cout << 1/0 << endl; | Geothermal | nitrousoxide | silxi |
| 2 | Duke University | Castle Jack | Yuheng Xu | Liang Lyu | Muthu Arivoli |
| 3 | University of Maryland | Yellow | | | |
| 4 | North Carolina State University | Punctuation Blender | Duncan Page | Thomas Barnette | Yosuke Mizutani |
| 5 | Virginia Tech | 46 is a Prime Number | atakyn | benpr99 | Yibo_Huang |
| 6 | University of North Carolina at Chapel Hill | UNC-Alternate | | | |

Mid-Central

| Rank | University | Team Name | Member 1 | Member 2 | Member 3 |
| --- | --- | --- | --- | --- | --- |
| 1 | University of Illinois at Urbana-Champaign | UIUC — A | I_love_simplespy | Suzukaze | yhchang3 |
| 2 | University of Chicago | Fire Enchanted | | | |
| 3 | Truman State University | TruWarriorss | | | |
| 4 | Washington University in St. Louis | Hashirama | | | |
| 5 | Rose-Hulman Institute of Technology | RHIT Fighting Engineers — Black | | | |
| 6 | University of Kentucky | uky-blue | | | |

North Central

| Rank | University | Team Name | Member 1 | Member 2 | Member 3 |
| --- | --- | --- | --- | --- | --- |
| 1 | University of Wisconsin-Madison | Model Solution | bvd | RobeZH | top34051 |
| 2 | University of Nebraska-Lincoln | Poincare | maxnguyen | Cuong Viet Than | Kim-Hao Duong Nguyen |
| 3 | Iowa State University | Oak Nation Armada | | | |
| 4 | Milwaukee School of Engineering | Javanaughts | Jacob Huebner | Kiel Dowdle | Nicholas Johnson |
| 5 | Kansas State University | KSU Lynx | | | |
| 6 | Dakota State University | Dunder Mifflin Scranton | | | |
| 7 | South Dakota School of Mines and Technology | Blue | hofergabriel | Brian Brunner | Mangesh Sakordekar |

Northeastern

| Rank | University | Team Name | Member 1 | Member 2 | Member 3 |
| --- | --- | --- | --- | --- | --- |
| 1 | Massachusetts Institute of Technology | MIT NULL | Benq | C_SUNSHINE | sqrtdecompton |
| 2 | Harvard | Harvard | ekzhang | Franklyn_W | pbt17 |
| 3 | Brown University | Blueno Bears | | | |
| 4 | McGill University | Bees | | | |
| 5 | Northeastern University | Husky Alpha | | | |

Pacific Northwest

| Rank | University | Team Name | Member 1 | Member 2 | Member 3 |
| --- | --- | --- | --- | --- | --- |
| 1 | U British Columbia | University of British California | chenvictor1999 | davidberard | paulll |
| 2 | UC Berkeley | Berkeley Blue | Doriath | eygmath | Suchir |
| 3 | U Washington | Combo | googlebot | milmillin | Nonthakit Chaiwong |
| 4 | Stanford | Stanford Cardinal | andyshih12 | helios1111 | miagkov |
| 5 | UBCO | UBC O(1) | IvanCarvalho | keyvankhademi | Andrew Kiggins |

Rocky Mountain

| Rank | University | Team Name | Member 1 | Member 2 | Member 3 |
| --- | --- | --- | --- | --- | --- |
| 1 | University of Utah | Utah Hydrogen | Igor Durovic | Oliver Flatt | Sam Zachary |
| 2 | University of Alberta | Alberta Green | IanDeHaan | Noah Gergel | Jacob Skitsko |
| 3 | University of Calgary | Emerald | BENYAM1N | Thomas Vu | Charlie Zheng |
| 4 | University of Lethbridge | University of Lethbridge WA | Anemone | JSwidinsky | rmaccrimmon |

South Central

| Rank | University | Team Name | Member 1 | Member 2 | Member 3 |
| --- | --- | --- | --- | --- | --- |
| 1 | University of Texas at Austin | UT Orange | LamoreauxAJ | vamaddur | 0adityad |
| 2 | University of Texas at Dallas | Comets 1 | | | |
| 3 | Texas A&M University | Unordered Cartographers | Durumu | hoke_t | nathanmandell |
| 4 | Rice University | I see PC | | | |

Southeastern USA

| Rank | University | Team Name | Member 1 | Member 2 | Member 3 |
| --- | --- | --- | --- | --- | --- |
| 1 | Georgia Institute of Technology | Georgia Tech 5 | AkaiNeko | rekt_n00b | xyz2606 |
| 2 | University of Central Florida | UCF Kamino | Ahmad | AuroraGarden | SecondThread |
| 3 | Emory University | M\|\|E | ChatC | Rudy358 | Chenxi Xu |
| 4 | University of Florida | UF Fire | | | |
| 5 | Florida Institute of Technology | Panthers A | | | |

Southern California

| Rank | University | Team Name | Member 1 | Member 2 | Member 3 |
| --- | --- | --- | --- | --- | --- |
| 1 | California Institute of Technology | Baltech B | | | |
| 2 | University of Southern California | USC Trojans | Adhyyan1252 | deSignTheClutch | TianyiChen |
| 3 | University of California San Diego | UCSD J | mjguru | nibnalin | zucker42 |
| 4 | University of California Los Angeles | UCLA Bruins | | | |

Comment below if you notice any missing or incorrect information.


By Geothermal, history, 8 months ago, In English,

A — 9x9

We can simply directly implement the procedure given in the problem. Print $$$AB$$$ if $$$A$$$ and $$$B$$$ are less than $$$10$$$ and $$$-1$$$ otherwise. One particularly fast way to do this is to use the ternary operator, which takes a boolean expression and two values as inputs and returns the first value if the expression is true and the second value otherwise. In C++, this looks like

$$$\texttt{A < 10 && B < 10 ? A*B : -1}$$$.

Runtime: $$$O(1)$$$. Click here for my submission.


B — 81

Since we're only considering pairs of numbers from one through nine, we can simply iterate over every pair of numbers from one to nine and check if each pair multiplies to $$$N$$$. As soon as we find such a pair, we output Yes and exit the program. If we reach the end of the loop, we can then print No, since we've checked every possible pair and found that none of them work.

Notice that we could also iterate over all numbers from $$$1$$$ to $$$9$$$, and for each number $$$i$$$, check if $$$N$$$ is divisible by $$$i$$$ and, if so, whether $$$N / i$$$ is between $$$1$$$ and $$$9$$$. However, this would take longer to implement, and with so few values to check, the above approach is efficient enough anyway. This is a good example of a problem where it's better to quickly implement a trivial solution than to waste time coming up with a more efficient idea.
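A sketch of the double loop (assuming the input is the single integer $$$N$$$):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n; cin >> n;
    for (int i = 1; i <= 9; i++)
        for (int j = 1; j <= 9; j++)
            if (i * j == n) { cout << "Yes" << endl; return 0; }
    cout << "No" << endl;
}
```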

Runtime: Technically $$$O(1)$$$, though our loop will iterate $$$81$$$ times (or $$$45$$$, if you only iterate over pairs $$$(i, j)$$$ with $$$j \geq i$$$). Either way, it will run very quickly. Click here for my submission.


C — Walk in Multiplication Table

Note that we can walk to any pair $$$(i, j)$$$ in $$$i+j-2$$$ steps. Thus, we simply need to find the pair $$$(i, j)$$$ with $$$ij = N$$$ that minimizes $$$i+j$$$. Luckily, we can enumerate all possible pairs $$$(i, j)$$$ and check them in $$$O(\sqrt{N})$$$. To do this, without loss of generality, we can assume $$$i \leq j$$$, and we must have $$$i \leq \sqrt{N}$$$ since $$$N = ij \geq i^2$$$, so $$$\sqrt{N}^2 \geq i^2$$$, so $$$\sqrt{N} \geq i$$$. Thus, we can iterate over all possible values of $$$i$$$ from $$$1$$$ to $$$\sqrt{N}$$$ and, among all working pairs $$$(i, j)$$$, we pick the lowest value of $$$i+j-2$$$ and print it.
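A sketch of the divisor enumeration (assuming the input is the single integer $$$N$$$):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    long long n; cin >> n;
    long long best = LLONG_MAX;
    for (long long i = 1; i * i <= n; i++)  // i <= sqrt(N) covers every pair
        if (n % i == 0) best = min(best, i + n / i - 2);
    cout << best << endl;
}
```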

Runtime: $$$O(\sqrt{N})$$$. Click here for my submission.


D — Water Bottle

First, note that when we take an $$$A$$$-by-$$$B$$$ rectangular cross-section, tilting it counterclockwise at an angle of $$$\theta$$$, the water will be at the height of the top-left corner. (Draw the picture yourself if it's hard to visualize this in your head.) Then, we want $$$A$$$ times the area within this rectangle at or below this height to equal $$$X$$$.

We perform casework on whether the water bottle is at least half full.

Suppose that it is not half-full. Then, examining the figure, we can observe the wet area will be a right triangle with one side of length $$$B$$$ and adjacent angle $$$90^\circ - \theta$$$. We can thus express the area of this triangle as $$$\frac{B^2 \tan(90^\circ - \theta)}{2}$$$, and the total volume of the water will be $$$\frac{AB^2 \tan(90^\circ - \theta)}{2}$$$. Thus, we have $$$X = \frac{AB^2 \tan(90^\circ - \theta)}{2}$$$, which gives $$$\tan(90^\circ - \theta) = \frac{2X}{AB^2}$$$, so $$$90^\circ - \theta = \arctan \frac{2X}{AB^2}$$$ and $$$\theta = 90^\circ - \arctan \frac{2X}{AB^2}$$$.

Now, suppose that the bottle is at least half-full. Then, we can observe that the part of the rectangle not filled with water is a right triangle with one side of length $$$A$$$ and adjacent angle $$$\theta$$$. This triangle has area $$$\frac{A^2 \tan \theta}{2}$$$, so the volume of the box not filled with water is $$$\frac{A^3 \tan \theta}{2}$$$. Therefore, we have $$$A^2B - X = \frac{A^3 \tan \theta}{2}$$$, so $$$\tan \theta = \frac{2A^2B - 2X}{A^3}$$$, so $$$\theta = \arctan \frac{2A^2B - 2X}{A^3}.$$$

Now, we can simply implement these formulas. Make sure to convert from radians to degrees if your programming language uses radians by default! You can do this by multiplying by $$$\frac{180^\circ}{\pi}$$$. (If your programming language doesn't have $$$\pi$$$ built in as a constant, you can compute it as $$$2 \arcsin 1$$$.)
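A sketch of the casework (assuming the input is $$$A$$$, $$$B$$$, $$$X$$$ on one line):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    double a, b, x; cin >> a >> b >> x;
    const double PI = 2 * asin(1.0);
    double theta;
    if (2 * x < a * a * b)                 // less than half full
        theta = 90.0 - atan(2 * x / (a * b * b)) * 180.0 / PI;
    else                                   // at least half full
        theta = atan((2 * a * a * b - 2 * x) / (a * a * a)) * 180.0 / PI;
    cout << fixed << setprecision(10) << theta << endl;
}
```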

Runtime: $$$O(1)$$$. Click here for my submission.


E — Gluttony

First, an intuitive observation: we should assign the fastest eaters to the most challenging foods. One way to prove this is to note that if we have two eaters with constants $$$a \leq b$$$ and two foods with difficulties $$$c \leq d$$$, then $$$\max(ad, bc) \leq bd = \max(ac, bd)$$$, so we are better off assigning the eater with the lower constant to the food with the higher difficulty. Applying this exchange argument repeatedly proves that this ordering--fastest eaters with the highest-difficulty foods--is optimal.

So, let's sort the eaters in increasing order and the foods in decreasing order. Then, we binary search for the answer, noting that this is valid because if we can finish within some amount of time, then we can also finish within any greater amount of time. To check whether some value $$$T$$$ is achievable, note that in order to make the maximum eating time at most $$$T$$$, we need to train every eater for whom $$$A[i] F[i] > T$$$ until $$$A[i] = \lfloor \frac{T}{F[i]} \rfloor$$$. We can thus compute the number of training sessions required in $$$O(N)$$$; the value $$$T$$$ is achievable if at most $$$K$$$ sessions are needed. Binary search then finds the least achievable $$$T$$$ using $$$O(\log \max(A_i F_i))$$$ of these checks, since the answer is at most $$$\max(A_i F_i)$$$.
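A sketch of the check plus the binary search (variable names are mine):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n; long long k;
    cin >> n >> k;
    vector<long long> a(n), f(n);
    for (auto &v : a) cin >> v;
    for (auto &v : f) cin >> v;
    sort(a.begin(), a.end());                  // fastest eaters first
    sort(f.rbegin(), f.rend());                // hardest foods first
    auto feasible = [&](long long t) {         // can the max eating time be <= t?
        long long need = 0;
        for (int i = 0; i < n; i++)
            if (a[i] * f[i] > t) need += a[i] - t / f[i];
        return need <= k;
    };
    long long lo = 0, hi = 0;
    for (int i = 0; i < n; i++) hi = max(hi, a[i] * f[i]);
    while (lo < hi) {                          // least feasible t
        long long mid = (lo + hi) / 2;
        if (feasible(mid)) hi = mid; else lo = mid + 1;
    }
    cout << lo << endl;
}
```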

Runtime: $$$O(N \log A_i F_i)$$$. Click here for my submission.


F — Fork in the Road

Let's first compute three auxiliary arrays. First, we compute an array $$$deg$$$ such that $$$deg[i]$$$ is the number of edges coming from node $$$i$$$. Then, compute an array $$$P$$$ such that $$$P[i]$$$ is the probability that we reach node $$$i$$$ at some point in the process. To do this, we can initialize $$$P[0]$$$ (using zero-based indexing for the nodes, of course) to $$$1$$$ and all other $$$P[i]$$$ to $$$0$$$. Then, process the nodes in increasing order (valid because every passage leads from a lower-numbered node to a higher-numbered one), and for each edge from $$$i$$$ to $$$j$$$, add $$$\frac{P[i]}{deg[i]}$$$ to $$$P[j]$$$.

Finally, we want an array $$$E$$$ such that $$$E[i]$$$ is the expected number of steps needed to reach node $$$N-1$$$ from node $$$i$$$. We can write $$$E[i]$$$ in terms of higher values of $$$i$$$ as follows, where the summation is over all vertices $$$j$$$ to which $$$i$$$ has an edge:

$$$E[i] = 1 + \frac{1}{deg[i]} \sum_{j} E[j]$$$

From here, we can initialize $$$E[N-1]$$$ to $$$0$$$ and then compute the $$$E[i]$$$ values in decreasing order. Note that the computation of all of these arrays is $$$O(N^2)$$$.

Now, it should be easy to see that the answer, if we don't remove any passages, is $$$E[0]$$$. What if we remove some passage, though?

Let's say we remove a passage from $$$i$$$ to $$$j$$$. To compute the new value of $$$E[0]$$$, we compute $$$E'[i]$$$, the value of $$$E[i]$$$ if we didn't have access to this passage, and add $$$(E'[i] - E[i]) P[i]$$$ to $$$E[0]$$$. The intuition here is that this will capture all of the resulting change because if we don't reach $$$i$$$, then this will have no effect on us, but if we do reach $$$i$$$, which happens with probability $$$P[i]$$$, our expected value changes by the addition of $$$E'[i] - E[i]$$$.

Now, how can we compute $$$E'[i]$$$? One way to do this would be to simply reevaluate the formula given above, subtracting one from $$$deg[i]$$$ and considering only the other nodes. This would run in $$$O(N^3)$$$, but with a two-second time limit and a good constant factor (since there can only be about $$$\frac{N^2}{2}$$$ edges and any given vertex, on average, can only have about $$$\frac{N}{2}$$$ as its degree, so we'll take $$$\frac{N^3}{4}$$$ operations in total), this will probably be fast enough.

However, we'll present a solution that computes $$$E'[i]$$$ in $$$O(1)$$$, for an $$$O(N^2)$$$ solution in total. We're essentially trying to compute

$$$1 + \frac{1}{deg[i] - 1} \sum_{j} E[j].$$$

This time, the summation is over only the values of $$$j$$$ still adjacent to $$$i$$$ after the removal. First, let's find this summation. To do so, subtract $$$1$$$ from $$$E[i]$$$ and multiply the result by $$$deg[i]$$$, which gives the summation from the original $$$E[i]$$$. Then, subtract the $$$E$$$ value corresponding to the removed passage, giving the new summation. Now we can multiply it by $$$\frac{1}{deg[i]-1}$$$ and add $$$1$$$, at which point we're finished. Then, we can use the value of $$$E'[i]$$$ as described above to compute our answer given the removal of a certain passage.

We can then compute the answer for every possible passage removal and print the best one.
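For completeness, here's a compact sketch of the full $$$O(N^2)$$$ computation. The input format (N, M, then M passages, each from a lower-numbered to a higher-numbered intersection) and all variable names are my assumptions, not necessarily the problem's exact conventions:

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n, m;
    cin >> n >> m;
    vector<vector<bool>> adj(n, vector<bool>(n, false));
    vector<int> deg(n, 0);
    for (int e = 0; e < m; e++) {
        int a, b;
        cin >> a >> b;                     // assumed 1-indexed, a < b
        adj[a - 1][b - 1] = true;
        deg[a - 1]++;
    }
    vector<double> P(n, 0.0), E(n, 0.0);
    P[0] = 1.0;
    for (int i = 0; i < n; i++)            // reach probabilities, increasing order
        for (int j = i + 1; j < n; j++)
            if (adj[i][j]) P[j] += P[i] / deg[i];
    for (int i = n - 2; i >= 0; i--) {     // expected steps, decreasing order
        double sum = 0;
        for (int j = i + 1; j < n; j++)
            if (adj[i][j]) sum += E[j];
        E[i] = 1 + sum / deg[i];           // deg[i] >= 1 assumed for i < n-1
    }
    double best = E[0];                    // removing nothing is always allowed
    for (int i = 0; i < n; i++) {
        if (deg[i] < 2) continue;          // assume we may not delete a node's only exit
        double sum = (E[i] - 1) * deg[i];  // sum of E[j] over i's successors
        for (int j = i + 1; j < n; j++) {
            if (!adj[i][j]) continue;
            double Eprime = 1 + (sum - E[j]) / (deg[i] - 1);
            best = min(best, E[0] + (Eprime - E[i]) * P[i]);
        }
    }
    printf("%.10f\n", best);
}
```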

Runtime: $$$O(N^2)$$$. Click here for my submission.


By Geothermal, history, 10 months ago, In English,

A — Weather Prediction

Store the three weather states in an array. Then, identify the element of the array equal to the input value, and print the next element in the array.

Time complexity: $$$O(1)$$$. Click here for my submission.


B — Tap Dance

We can employ brute force here, checking each element of the string individually and printing No as soon as we find one that doesn't satisfy the given condition. Of course, if all elements of the string comply with the requirements, the answer is Yes.

To implement the solution faster, note that the given conditions are equivalent to $$$S[i] \neq$$$ 'L' when $$$i$$$ is odd and $$$S[i] \neq$$$ 'R' when $$$i$$$ is even, so you can implement each of them using just one condition instead of three.

Time complexity: $$$O(N)$$$. Click here for my submission.


C — Attack Survival

Suppose player $$$i$$$ answers $$$C_i$$$ questions correctly. Then, they lose $$$Q - C_i$$$ points from the other players' correct answers, leaving them with $$$K - (Q - C_i) = K - Q + C_i$$$ points in the end. Then, they must have a positive score to survive, which occurs when $$$K - Q + C_i > 0$$$. This is equivalent to $$$C_i > Q - K$$$.

We can thus compute the value of $$$Q-K$$$ and the values of $$$C_i$$$ for each $$$i$$$. Then, for each $$$i$$$, print Yes if $$$C_i > Q-K$$$ and No otherwise.

Time complexity: $$$O(N)$$$. Click here for my submission.


D — Powerful Discount Tickets

We'll employ a greedy approach: at each point in this process, apply a ticket to the item for which this ticket will decrease our cost the most. Because each subsequent ticket we apply to any given item will have less value than the last ticket we used on the same item, this approach is always optimal.

To implement this strategy, we'll employ a priority queue, containing a single entry for each item and sorted by the amount a ticket would save us if used on that item. $$$M$$$ times, we pick the most valuable use of a ticket and subtract the savings from our total cost. Then, we compute the amount the next ticket will save us on the current item and insert this into the priority queue. (To save implementation time, you could alternatively just insert every possible ticket for each item into the priority queue immediately--since each item's price will become zero after the use of at most thirty tickets, this won't be too time-consuming. However, this does degrade the performance to $$$O(N \log^2 N)$$$.)
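One compact way to realize this greedy: since each ticket halves an item's current price (rounding down), the savings from a ticket are largest on the item with the largest current price, so we can simply keep halving the maximum. A sketch:

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n, m;
    cin >> n >> m;
    priority_queue<long long> pq;          // current price of each item
    for (int i = 0; i < n; i++) { long long x; cin >> x; pq.push(x); }
    while (m--) {                          // each ticket halves the priciest item
        long long top = pq.top(); pq.pop();
        pq.push(top / 2);
    }
    long long total = 0;                   // final cost = sum of remaining prices
    while (!pq.empty()) { total += pq.top(); pq.pop(); }
    cout << total << endl;
}
```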

Time complexity: $$$O((N+M) \log N)$$$. The logarithmic factor is due to our priority queue operations. Click here for my submission.


E — Who Says a Pun?

To start, let's hash every substring of $$$S$$$. (Be careful to do this in $$$O(N^2)$$$, not $$$O(N^3)$$$, by using the shorter hashes to help compute the long ones.) Then, we can iterate over each possible value $$$L$$$ of the answer and use brute force to check if two substrings of length $$$L$$$ satisfy our condition. To do so, we maintain a set containing hashes we've encountered so far and iterate over all positions of the string. At each position, we add the hash of the appropriate length ending at this position to our set and check if the hash starting at this position is in our set already. If so, $$$L$$$ is indeed a possible answer.

This has complexity $$$O(N^2 \log N)$$$, since we have $$$N$$$ values for $$$L$$$, $$$O(N)$$$ hashes at each length, and our set operations take $$$O(\log N)$$$. This is a little on the slow end, so to speed up our program, we use binary search rather than iterating over every possible $$$L$$$. It is easy to see that if we can achieve some $$$L$$$, we can also find an example for any lower value of $$$L$$$ (simply take prefixes of the length $$$L$$$ substring), so we can binary search on the answer, which speeds up the time complexity after hashing to $$$O(N \log^2 N)$$$.
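A sketch of the binary search with rolling hashes. Rather than storing all $$$O(N^2)$$$ substring hashes up front, this version derives any substring hash from prefix hashes; it also uses a single modulus, so a real submission might prefer double hashing to guard against collisions. All names here are mine:

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef unsigned long long ull;

const ull MOD = (1ULL << 61) - 1;
ull mulmod(ull a, ull b) { return (ull)((__uint128_t)a * b % MOD); }

int n; string s;
vector<ull> pref, pw;

ull get(int l, int len) {                  // hash of s[l .. l+len-1]
    return (pref[l + len] + MOD - mulmod(pref[l], pw[len])) % MOD;
}

bool ok(int len) {                         // two non-overlapping equal substrings?
    unordered_set<ull> seen;
    for (int i = len; i + len <= n; i++) {
        seen.insert(get(i - len, len));    // covers every start j with j + len <= i
        if (seen.count(get(i, len))) return true;
    }
    return false;
}

int main() {
    cin >> n >> s;
    pref.assign(n + 1, 0); pw.assign(n + 1, 1);
    const ull B = 131;
    for (int i = 0; i < n; i++) {
        pref[i + 1] = (mulmod(pref[i], B) + s[i]) % MOD;
        pw[i + 1] = mulmod(pw[i], B);
    }
    int lo = 0, hi = n / 2;                // two disjoint copies need 2L <= n
    while (lo < hi) {                      // largest L for which ok(L) holds
        int mid = (lo + hi + 1) / 2;
        if (ok(mid)) lo = mid; else hi = mid - 1;
    }
    cout << lo << endl;
}
```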

Time complexity: $$$O(N^2)$$$, due to the hashing step. Click here for my submission.


F — Xor Sum 3

Let's start with an observation: we really only care about the bits that occur an even number of times in the overall set. If a bit occurs an odd number of times in the overall set, it will occur an odd number of times in one of our two subsets and an even number of times in the other, so it will be counted once in our overall sum. On the other hand, a bit occurring an even number of times in the overall set could occur an even number of times in each subset, counting it zero times, or an odd number of times in each subset, counting it once.

Let's remove every bit that occurs an odd number of times in our overall set from all elements in the data. Now, we're left only with the bits occurring an even number of times. Now, we want to find the maximum XOR-sum of any subset of the processed data. This is because any bits (that show up an even number of times in total) occurring in the XOR-sum of one of our subsets will also occur in the XOR-sum of the other, while any bits that don't occur in the XOR-sum of one subset won't occur in the other. Afterwards, we can double this XOR-sum (since each bit in it occurs in both subsets) and add back the values of the bits showing an odd number of times to get our final answer.

We've thus essentially reduced this problem to identifying the maximum XOR-sum of any subset of our data. We can solve this problem by computing the linear basis of our data and iterating over the values it contains in decreasing order of significance, adding each value if it increases our sum. More details on using linear bases to solve XOR problems can be found in this article--I used the author's solution to Problem 3 to finish this problem.
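A sketch of the reduction plus a standard linear basis (assuming values below $$$2^{60}$$$):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n;
    cin >> n;
    vector<long long> a(n);
    long long odd = 0;                     // bits set in an odd number of elements
    for (auto &x : a) { cin >> x; odd ^= x; }

    long long basis[60] = {};              // basis[b]: vector with leading bit b
    for (long long x : a) {
        x &= ~odd;                         // keep only bits occurring evenly overall
        for (int b = 59; b >= 0 && x; b--) {
            if (!((x >> b) & 1)) continue;
            if (!basis[b]) { basis[b] = x; x = 0; }
            else x ^= basis[b];            // reduce and continue to lower bits
        }
    }
    long long best = 0;                    // max subset XOR of the processed values
    for (int b = 59; b >= 0; b--) best = max(best, best ^ basis[b]);
    cout << 2 * best + odd << endl;        // both halves share best; odd bits count once
}
```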

Time complexity: $$$O(N \log A_i)$$$. Click here for my submission. Again, substantial credit goes to DrSwad, as I used code from his article to compute the maximum subset XOR after processing the data.


As usual, feel free to leave comments or discuss alternate solutions below!


By Geothermal, history, 10 months ago, In English,

A — Password

We have $$$N$$$ choices for each of the three characters in our password. This gives us a total of $$$N \cdot N \cdot N = N^3$$$ possible passwords. We can thus print $$$N \cdot N \cdot N$$$ as our answer.

Runtime: $$$O(1)$$$. Click here for my submission.


B — Buffet

We can solve this with a brute-force simulation, simply implementing the procedure given in the problem. Iterate over the dishes in the order specified by array $$$A$$$, and when we eat dish $$$i$$$, add $$$B[i]$$$ to our answer, plus $$$C[i-1]$$$ if we just ate dish $$$i-1$$$.

Note that since we'll eat every dish once, we could also just add the sum of array $$$B$$$ to our answer as we read it in, iterating over $$$A$$$ only to add values from $$$C$$$ where necessary. My solution implements this approach.

Runtime: $$$O(N)$$$. Click here for my submission.


C — Maximal Value

We're given a condition on $$$B_i$$$, but since we want to find the best possible array $$$A$$$, let's try to rephrase it as a condition on $$$A_i$$$. We claim that the given condition is equivalent to $$$A_i \leq min(B_i, B_{i-1})$$$, given that both of those values exist. (If $$$i=0$$$, $$$B_{i-1}$$$ won't exist, so we just have $$$A_0 \leq B_0$$$. Likewise, since $$$B_N$$$ doesn't exist, we have that $$$A_N \leq B_{N-1}$$$, since the $$$B_i$$$ term won't come into play.)

This might be relatively intuitive, but let's prove it formally. We'll show that our condition on $$$A_i$$$ is both necessary and sufficient for the condition on $$$B_i$$$ to be true. Let's start by showing necessity. Suppose that our condition does not hold. If $$$A_i > B_i$$$, then this contradicts the fact that $$$B_i \geq max(A_i, A_{i+1})$$$, and similarly, if $$$A_i > B_{i-1}$$$, then we cannot have that $$$B_{i-1} \geq max(A_{i-1}, A_i)$$$. Therefore, if this condition is not true, we cannot have the condition on $$$B_i$$$ given in the problem, showing that the $$$A_i$$$ condition is necessary.

Now, we'll prove sufficiency. If $$$B_i \geq max(A_i, A_{i+1})$$$, we must have that $$$B_i \geq A_i$$$ and $$$B_i \geq A_{i+1}$$$. Both of these are implied by our $$$A_i$$$ condition, so this condition is sufficient--that is, as long as our $$$A_i$$$ condition is true, our $$$B_i$$$ condition is also true.

We have thus shown that the $$$A_i$$$ condition is both necessary and sufficient for the $$$B_i$$$ condition to hold, so the two are equivalent.

Now, the problem is relatively easy. To maximize the sum of the array $$$A$$$, we want each $$$A_i$$$ to be as large as possible, so we simply set each $$$A_i$$$ to $$$min(B_i, B_{i-1})$$$. Then, our answer is the sum of these values.
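A minimal sketch (recall that $$$A$$$ has $$$N+1$$$ elements while $$$B$$$ has $$$N$$$):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n;
    cin >> n;
    vector<long long> b(n);
    for (auto &v : b) cin >> v;
    long long sum = b[0] + b[n - 1];       // A_0 = B_0 and A_N = B_{N-1}
    for (int i = 1; i < n; i++)            // interior elements: min of the two neighbors
        sum += min(b[i - 1], b[i]);
    cout << sum << endl;
}
```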

Runtime: $$$O(N)$$$. Click here for my submission.


D — Face Produces Unhappiness

We make two key observations. (Note that for simplicity, we'll refer to the number of happy people as the answer.)

  1. The answer will be at most $$$N-1$$$.
  2. Each of our $$$K$$$ moves can increase the answer by at most $$$2$$$.

The intuition behind these observations isn't too complicated, but the proofs are fairly annoying. The main interesting part of this problem is coming up with the algorithm which generates optimal moves. If you're not interested in the proofs, you can skip the following section.


Claim 1 is relatively easy to prove. Suppose that all $$$N$$$ people are happy. Then, all $$$N$$$ people must be looking at someone else facing the same direction, which implies that all must be facing the same direction. However, then, a person on one of the ends won't be looking at anyone, so that person won't be happy, a contradiction.

Claim 2 is a little more complicated. First, note that there are only four people whose happiness could be affected by any of our moves: the two people on either end of the move and the two people outside of the move standing next to the people inside of it. It's fairly easy to see that everyone else outside of the move won't be affected--they'll be looking at the same person as they were before the move. Additionally, the people inside the move and not on the end will be looking at the same person after the move, and those two people will both have swapped the directions in which they're looking, so the happiness of this group also won't change.

Now, let's discuss the remaining four people. First, we'll prove that we can't increase the answer by more than two by showing that there cannot be more than two of these people who were unhappy before the move but happy after it.

The proof is long and relatively uninteresting, but the basic idea is to do casework on which three of these four people could go from unhappy before the move to happy after it. As we can show that none of these cases are possible, we cannot increase the answer by more than two with a single move.


Now that we've proven that an increase of two per move is optimal, we'll give an algorithm that achieves this maximum. Suppose that the input starts with an L. (The same logic applies if it starts with an R.) Then, with each move, take a block of consecutive R's (with L's on each side of the block) and flip it. Flipping any of these blocks except for one on the end of the string will increase our answer by two, so we should flip these ones first. Finally, if we have any moves left over and there's a block of R's on the end, we flip that block, giving us the maximum answer of $$$N-1$$$.

To make this approach clearer, let's go over the second sample test case. We have the string LRRLRLRRLRLLR and can make up to three moves. We'll thus flip the first block of R's to get LLLLRLRRLRLLR. Likewise, we'll flip the second and third blocks, to get a final string of LLLLLLLLLRLLR. This gives the desired answer of nine happy people. If we had any more moves, we would flip the second-to-last block of R's, to get LLLLLLLLLLLLR and an answer of eleven, and with one more move after that, we would flip the final R to get LLLLLLLLLLLLL, which gives an answer of twelve.

This gives us a fairly simple implementation. First, count the answer in the given string. Then, for each move, we can increase the answer by two as long as it doesn't go over $$$N-1$$$.
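The implementation really is short--the number of happy people equals the number of adjacent equal pairs, since each such pair makes exactly one of its two members happy. A sketch:

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n, k; string s;
    cin >> n >> k >> s;
    int happy = 0;
    for (int i = 0; i + 1 < n; i++)        // each equal adjacent pair = one happy person
        if (s[i] == s[i + 1]) happy++;
    cout << min(n - 1, happy + 2 * k) << endl;
}
```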

Runtime: $$$O(N)$$$. Click here for my submission.


E — Second Sum

We iterate over the numbers in decreasing order. For each number, we count how many subarrays contain exactly one of the numbers we've seen previously. We then multiply that by the current number and add it to our answer.

Now, we'll explain how to do this more precisely. Let's let $$$a$$$ and $$$b$$$ be the last and second-to-last positions of greater numbers in the set before the current number. Similarly, let $$$x$$$ and $$$y$$$ be the first and second positions of greater numbers after the current number. Then, there are two ways our subarray could contain exactly one larger number: it could contain position $$$a$$$, but not positions $$$b$$$ or $$$x$$$, or it could contain position $$$x$$$, but not positions $$$a$$$ or $$$y$$$. Thus, if our current number's position is $$$i$$$, there are $$$(a-b)(x-i) + (y-x)(i-a)$$$ subarrays containing $$$i$$$ and exactly one greater position. (The first term accounts for subarrays containing $$$i$$$ and $$$a$$$ and the second accounts for subarrays containing $$$i$$$ and $$$x$$$.)

To query for $$$a$$$, $$$b$$$, $$$x$$$, and $$$y$$$, we can use a set containing all the positions we've seen so far. The one remaining detail is to figure out what to do if $$$a, b, x,$$$ or $$$y$$$ doesn't exist (because there aren't two lower or higher positions of greater numbers). It turns out that if $$$a$$$ or $$$b$$$ doesn't exist, we can set it to $$$-1$$$, and if $$$x$$$ or $$$y$$$ doesn't exist, we can set it to $$$N$$$. (The intuition here is that if there are no such positions within the array, the points at which our subarrays become invalid occur when we leave the array--that is, one position before the start of the array and one position after the end.) The rest of the solution works the same as before.
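A sketch using a multiset with doubled sentinels, so that the two lookups on each side always succeed (the input is a permutation of $$$1$$$ through $$$N$$$):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n;
    cin >> n;
    vector<int> pos(n + 1);
    for (int i = 0; i < n; i++) { int v; cin >> v; pos[v] = i; }

    multiset<long long> seen = {-1, -1, (long long)n, (long long)n}; // sentinels
    long long ans = 0;
    for (int v = n; v >= 1; v--) {                // values in decreasing order
        long long i = pos[v];
        auto it = seen.lower_bound(i);
        long long x = *it, y = *next(it);         // nearest two greater positions, right
        long long a = *prev(it), b = *prev(prev(it)); // nearest two, left
        ans += (long long)v * ((a - b) * (x - i) + (y - x) * (i - a));
        seen.insert(i);
    }
    cout << ans << endl;
}
```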

Runtime: $$$O(N \log N)$$$. Click here for my submission.


F — Many Slimes

We employ a greedy approach. For obvious reasons, our initial slime should be the largest one (as otherwise, we will never be able to create the largest value). Then, when each slime reproduces, make it create the largest possible slime we still need to build. (Of course, this new slime must be smaller than our current slime.) If any of our slimes cannot create any new slimes when required to do so, the answer is no. If we can build all the slimes in this manner, the answer is yes.

The proof that this approach is correct is fairly simple. Larger slimes can do anything smaller slimes can do and then some--therefore, if we have the choice between creating a large slime and a smaller slime, we are never worse off when we simply create the large slime.

We can implement this by storing the slimes in a multiset and using C++'s upper_bound function to query for the largest possible slime for a given slime to create.

Runtime: $$$O(N \cdot 2^N)$$$. (The multiset operations make this $$$O(2^N \log 2^N)$$$, which is equivalent to the given complexity.) Click here for my submission.

As an aside, I think this problem was substantially easier than E, and proving that the solution works was probably easier than doing the same for D.


Please feel free to share questions or comments below.


By Geothermal, history, 10 months ago, In English,

A — Tenki

This is a fairly simple task--just iterate over all positions in the strings and count the ones where S[i] = T[i].

Runtime: $$$O(|S|)$$$. Click here for my submission.


B — Power Socket

Note that we already have one socket, so we need to fill $$$B-1$$$ more sockets. Additionally, each new outlet we add will take up one socket and add $$$A$$$ sockets, resulting in a net addition of $$$A-1$$$ sockets. We thus need to create at least $$$B-1$$$ sockets where each outlet gives us $$$A-1$$$ sockets, making our answer $$$\lceil \frac{B-1}{A-1} \rceil$$$. Since most programming languages use floor division, we can also express this as $$$\lfloor \frac{A+B-3}{A-1} \rfloor$$$.
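As a one-liner in code (assuming $$$A \geq 2$$$, which the constraints guarantee):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    long long a, b;
    cin >> a >> b;
    cout << (a + b - 3) / (a - 1) << endl; // ceil((B-1)/(A-1)) via floor division
}
```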

Runtime: $$$O(1)$$$. Click here for my submission.


C — Lower

We essentially use two pointers. Maintain the starting position at zero. Then, while the next position isn't higher than the current one, we make that move. If the next position is higher than the current one, we need to reset the starting position to the current position. The answer is then the maximum number of consecutive moves we make without resetting the starting position.

You can also think of this as simulating the given process and picking the best answer, but only from starting positions that are higher than the previous position. The reason this gives us the best possible answer is that if we have a starting position lower than the position before it, we could get one extra move by starting from the previous position instead, so we should never start at a position that is not higher than its previous position.

Runtime: $$$O(N)$$$. Click here for my submission.


D — ModSum

Notice that the result when a number is taken mod $$$X$$$ can be at most $$$X-1$$$. Therefore, since we're summing results mod $$$1$$$, $$$2$$$, $$$3$$$, and so on, up to $$$N$$$, our answer can be at most $$$0 + 1 + 2 + \cdots + N-1 = \frac{N(N-1)}{2}$$$.

Now, let's prove that we can always achieve this answer. Take $$$1$$$ mod $$$2$$$, $$$2$$$ mod $$$3$$$, and so on, up to $$$N-1$$$ mod $$$N$$$. Then, take $$$N$$$ mod $$$1$$$, which adds zero to the sum. Therefore, this approach gives us an answer of $$$1 + 2 + 3 + \cdots + N-1 = \frac{N(N-1)}{2}$$$.

Runtime: $$$O(1)$$$. Click here for my submission.


E — League

Let's rephrase this as a graph problem. We essentially have $$$\dbinom{N}{2}$$$ matches we need to play, and we are given a set of criteria that each indicate that some match must be played before some other match. We can express the matches as vertices on a graph and the given requirements as directed edges. An edge from one match to another means that the first match must be played before the second.

Phrased like this, the problem seems very similar to topological sorting (given the set of requirements that one vertex must come before another). And indeed, this motivates a similar solution. Maintain the set of matches with no incoming edges. Then, while this set is non-empty, increase the day by one and delete these matches from the graph, removing their outgoing edges as well. Finally, replace the set of matches with no incoming edges by adding any matches whose last incoming edges were removed. (To avoid checking each individual match every time, we can maintain the indegrees of each match and then simply check whether a match now has indegree zero each time we remove one of its incoming edges.)

After the set is empty, we must have either processed every match or encountered a cycle. In the latter case, the answer is $$$-1$$$, as this indicates that there is no order in which to play the matches. In the former case, we can now output the number of days we computed.
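A sketch of this peeling process. Here a match is a node indexed by its normalized pair of players, and per-node day labels play the role of the layer counter; all names are mine:

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n;
    cin >> n;
    auto id = [&](int i, int j) {          // match between players i and j (0-indexed)
        if (i > j) swap(i, j);
        return i * n + j;
    };
    vector<vector<int>> order(n, vector<int>(n - 1));
    vector<vector<int>> adj(n * n);
    vector<int> indeg(n * n, 0);
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n - 1; j++) {
            cin >> order[i][j]; order[i][j]--;
            if (j > 0) {                   // player i's previous match must come first
                int u = id(i, order[i][j - 1]), v = id(i, order[i][j]);
                adj[u].push_back(v);
                indeg[v]++;
            }
        }
    int total = n * (n - 1) / 2, seen = 0, days = 0;
    vector<int> day(n * n, 0);
    queue<int> q;
    for (int i = 0; i < n; i++)            // matches nobody needs to wait for
        for (int j = i + 1; j < n; j++)
            if (indeg[id(i, j)] == 0) { q.push(id(i, j)); day[id(i, j)] = 1; }
    while (!q.empty()) {
        int u = q.front(); q.pop();
        seen++; days = max(days, day[u]);
        for (int v : adj[u])
            if (--indeg[v] == 0) { day[v] = day[u] + 1; q.push(v); }
    }
    cout << (seen == total ? days : -1) << endl; // unseen matches mean a cycle
}
```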

Runtime: $$$O(N^2)$$$. Click here for my submission.


F — Engines

First, I'll present the (rather sketchy) solution I used in-contest. Then, I'll discuss the somewhat simpler approach that was most likely intended by the problemsetters.

My solution used a pruned complete search. The actual idea is fairly simple--the challenging part is showing that it will run in time.

First, let's split into four cases. In two of the cases, we'll be attempting to maximize the x-coordinate; in the other two we'll try to minimize it. Similarly, in two of the cases, we'll attempt to maximize the y-coordinate, and in the other two we'll try to minimize it. (The motivation for these cases is that we maximize our end result by either maximizing or minimizing the two coordinates.) The rest of the solution will be tailored to the case in which we attempt to maximize both coordinates--the other cases can be made identical by multiplying the appropriate values in the data by $$$-1$$$.

Now, let's make a key observation. If, after considering some number of the engines, we could reach two points $$$(a, b)$$$ and $$$(c, d)$$$, with $$$a \leq c$$$ and $$$b \leq d$$$, we shouldn't consider $$$(a, b)$$$ in the future, as no matter what moves we make later on, $$$(c, d)$$$ will always do a better job of maximizing $$$x$$$ and $$$y$$$.

Let's suppose we're adding the engines one by one. Given a set of candidate points we've reached so far, our new set of candidates is the original set plus the points in the original set when the current engine is added to them. Then, we can prune this data to delete any redundant nodes as outlined in our observation.

Finally, after we process every engine, we can iterate over all points in our candidate set and pick the one that gives us the best answer. We'll then print the best answer from all four of our cases.

We should be able to intuitively see that this is correct--each case is guaranteed to print the correct answer if that answer lies in its quadrant. However, the tricky part is bounding its time complexity. One fairly simple bound is $$$O(N^2 \, maxX)$$$, as we consider $$$N$$$ engines, and at any point our set may contain at most $$$O(N \, maxX)$$$ points because it will contain at most one point for each X-coordinate. However, in practice, we actually do much better than this, as it's rare that we'll actually have anywhere near $$$N \, maxX$$$ points. My solution had an unnecessary $$$\log N$$$ factor in the big-O expression (though this made it somewhat faster in practice--I sorted the candidate points rather than iterating over every possible position), but it passed in about 900 ms. It's possible that some particularly tough test case might cause this submission to TLE, but such a case would have to be very contrived.

Click here for my submission.


Now, let's discuss a simpler and more efficient solution. The key observation is that when we order the engines by the angles their vectors make with the x-axis, the answer will include vectors in some contiguous range. A proof of this fact is at this link. Then, the solution is fairly easy to implement--we can iterate over all possible ranges and compute the answers for each of them, taking the maximum as our result. When implemented appropriately, this is an $$$O(N^2)$$$ approach. Credit for this solution goes to silxi, as he was the one who showed it to me after we both finished the contest.


As always, feel free to post questions or comments below.


By Geothermal, history, 11 months ago, In English,

Although AtCoder seems to be getting back to posting English editorials, I'm continuing to release some myself in case having the extra explanations helps some people.

(Sidenote: Some of the provided submissions come from after the contest itself, because I had to leave midway through, before I got the chance to debug my E or write my F.)


A — +-X

Just use your language's max function. As $$$A$$$ and $$$B$$$ are small, we don't need to worry about integer overflow.

Runtime: $$$O(1).$$$ Click here for my submission.


B — One Clue

Let's try to find the minimum and maximum values in the answer. It's fairly easy to see that all integers between them will also be in the set.

To find the minimum possible position of a black stone, we place $$$K$$$ black stones such that $$$X$$$ is at the maximum position. Then, the minimum position will thus be $$$X-K+1$$$, as we have $$$K-1$$$ black stones to the left of $$$X$$$. Likewise, if we place $$$K-1$$$ black stones to the right of $$$X$$$, our maximum position is thus $$$X+K-1$$$.

We should thus print all integers from $$$X-K+1$$$ to $$$X+K-1$$$.

Runtime: $$$O(K)$$$. Click here for my submission.


C — Green Bin

We take advantage of the small length of the strings. Represent the strings as character arrays. Then, note that two strings can be rearranged to become equal if and only if, after sorting their character arrays, the arrays are equal.

We read in the strings and sort their character arrays. We then build an array out of the character arrays and sort that array. From there, we know that all equal character arrays will form a consecutive subsegment of the new array.

Each set of $$$K$$$ equal strings adds $$$\binom{K}{2}$$$ to our answer. So, we now simply need to find the lengths of each of the subsegments containing identical arrays.

Counting sets of equal elements in an array is a relatively standard problem. We can use two pointers--maintain the starting position of our current segment and, when we find a position $$$i$$$ whose array differs from the starting position's, we have a set of size $$$i - start$$$, so we add $$$\binom{i-start}{2}$$$ to the answer and set $$$start = i$$$.

Runtime: $$$O(N |S| \log N)$$$, to account for the sorting (since comparing two of the character arrays takes $$$O(|S|)$$$ time). Click here for my submission.


D — Summer Vacation

Note that on day $$$i$$$, it's pointless to do any jobs that take more than $$$M-i$$$ days to pay out. We use this lemma to build a greedy strategy.

Let's pick our jobs for the days in reverse order. Sort the jobs in increasing order of payout time. We keep a priority queue containing the values of the jobs we can do. Then, iterate over the days $$$i$$$ from $$$M-1$$$ to $$$0$$$. Add the jobs that take $$$M-i$$$ days to the priority queue (by maintaining our position in the array, we can do this in $$$O(N)$$$ across all values of $$$i$$$), then pick the most valuable job in the priority queue (if the queue isn't empty, of course) and do it on day $$$i$$$. We're essentially picking the most valuable job we can do on day $$$i$$$ that we haven't already reserved for a future day.

The idea here is that on each of these days, there's no reason to wait to do the most valuable job we can--we're no better off doing a more valuable job on an earlier day rather than doing it on our current day, and in fact we may be worse off because some other jobs we want to do might become possible in that time.
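A sketch of the greedy (jobs are pairs $$$(A_i, B_i)$$$ of payout delay and reward; names are mine):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n, m;
    cin >> n >> m;
    vector<pair<int,int>> jobs(n);         // (days until payout, reward)
    for (auto &j : jobs) cin >> j.first >> j.second;
    sort(jobs.begin(), jobs.end());        // increasing payout time
    priority_queue<int> pq;                // rewards of currently doable jobs
    long long total = 0;
    int idx = 0;
    for (int i = m - 1; i >= 0; i--) {     // pick jobs for days m-1, m-2, ..., 0
        while (idx < n && jobs[idx].first <= m - i) pq.push(jobs[idx++].second);
        if (!pq.empty()) { total += pq.top(); pq.pop(); }
    }
    cout << total << endl;
}
```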

Runtime: $$$O((N+M) \log N)$$$. The logarithm factor is due to our priority queue operations. Click here for my submission.


E — Coins Respawn

Note that when we travel along edge $$$i$$$, we gain $$$C_i$$$ coins, but eventually, $$$P$$$ coins will be subtracted from our final score. Therefore, we effectively have increased our score by $$$C_i - P$$$.

Afterwards, we have a weighted directed graph (possibly with both positive and negative weights), and we want to find the longest path from vertex $$$1$$$ to vertex $$$N$$$ on this graph. If we multiply all of the edge weights by $$$-1$$$, this becomes a shortest path problem, and it can be solved in $$$O(NM)$$$ with the Bellman-Ford Algorithm.

The trick here is to handle the case in which the answer does not exist properly. Bellman-Ford is equipped to detect negative cycles (by determining whether processing the edges one more time would still make our path better, even if the algorithm already terminated), but the catch is that we can only use a negative cycle if it is on some path from $$$1$$$ to $$$N$$$. We therefore run a BFS (a DFS would work too) on the graph with all its edges reversed, starting with vertex $$$N$$$, to determine which vertices can reach $$$N$$$. Then, before confirming that an edge creates a negative cycle, we must check that it has been found both in our Bellman-Ford process (which means it can be reached from $$$1$$$) and in our BFS (which means it can reach $$$N$$$).

If this doesn't give us a usable negative cycle, we can simply print the given answer--that is, $$$-1$$$ times the shortest path--or zero, if it's greater.
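A sketch of this plan. Pruning the edge list to edges that are reachable from $$$1$$$ and can reach $$$N$$$ means that any edge still relaxing after $$$N-1$$$ rounds witnesses a usable positive-gain cycle (a negative cycle after negation); variable names are mine:

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n, m; long long p;
    cin >> n >> m >> p;
    vector<array<long long,3>> edges(m);
    vector<vector<int>> adj(n), radj(n);
    for (auto &e : edges) {
        cin >> e[0] >> e[1] >> e[2];
        e[0]--; e[1]--; e[2] -= p;                 // effective gain per traversal
        adj[e[0]].push_back(e[1]);
        radj[e[1]].push_back(e[0]);
    }
    auto bfs = [&](int src, vector<vector<int>> &g) {
        vector<bool> vis(n, false);
        queue<int> q; q.push(src); vis[src] = true;
        while (!q.empty()) {
            int u = q.front(); q.pop();
            for (int v : g[u]) if (!vis[v]) { vis[v] = true; q.push(v); }
        }
        return vis;
    };
    vector<bool> fromS = bfs(0, adj), toT = bfs(n - 1, radj);
    vector<array<long long,3>> use;                // edges on some 1 -> N walk
    for (auto &e : edges)
        if (fromS[e[0]] && toT[e[1]]) use.push_back(e);
    vector<long long> best(n, LLONG_MIN / 4);
    best[0] = 0;
    for (int it = 0; it < n - 1; it++)             // Bellman-Ford, maximizing coins
        for (auto &e : use)
            best[e[1]] = max(best[e[1]], best[e[0]] + e[2]);
    for (auto &e : use)                            // still improvable => usable cycle
        if (best[e[0]] + e[2] > best[e[1]]) { cout << -1 << endl; return 0; }
    cout << max(0LL, best[n - 1]) << endl;
}
```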

Runtime: $$$O(NM).$$$ Click here for my submission.


F — Polynomial Construction

There are several solutions to this problem. The one presented here actually does not depend on the fact that all values of the polynomial are $$$0$$$ or $$$1$$$. However, I think it's still one of the simpler solutions, and it relies on a relatively accessible idea.

The basic idea is to employ finite differences. If you aren't familiar with this concept, I recommend this article to learn about the topic. While the article works with real-valued polynomials, the key theorems turn out to be true when applied over the integers modulo $$$P$$$.

We start by computing all finite differences of this polynomial in $$$O(P^2)$$$. Sequence zero is simply the input data; sequence one is the differences of consecutive values in the input array, and so on, until we have sequence $$$P-1$$$, which only has one element. Let $$$D_i$$$ be the first element in sequence $$$i$$$.

Our key lemma is that $$$f$$$ is equal to:

$$$\sum_{n = 0}^{P-1} \frac{D_n}{n!} \left( \prod_{m = 0}^{n-1} (x-m) \right).$$$

Of course, all arithmetic is modulo $$$P$$$. Division is represented by multiplication by a modular inverse.

This is a similar concept to the second theorem presented in the Proof of Theorem section in the linked article. The proof is fairly lengthy and is thus omitted here, but I encourage you to take a look at it if you're interested. It's a complex formula, but it's relatively easy to evaluate, and it should be fairly tractable with a solid understanding of finite differences.

The naive implementation of this algorithm could be $$$O(P^3)$$$ due to the need to evaluate $$$P$$$ different inner products. However, we take advantage of the fact that the products are relatively similar. We evaluate the sum starting at $$$n=P-1$$$ and moving downwards. At each step, we multiply the existing polynomial by $$$x-n$$$ (doable in $$$O(P)$$$ since this polynomial only has two terms) and then add $$$\frac{D_n}{n!}$$$ to the result. The result after dealing with $$$n=0$$$ is our answer.

Runtime: $$$O(P^2)$$$. Click here for my submission.


As usual, feel free to comment below if you have any questions.


By Geothermal, history, 11 months ago, In English,

A — Transfer

First, we compute the resulting amount of water in Cup 1. Observe that this is equal to $$$min(A, B+C)$$$. Let this value be $$$D$$$. Then, the amount transferred from Cup 2 to Cup 1 is simply $$$D - B$$$, so the amount remaining in Cup 2 is $$$C - D + B$$$.

Time complexity: $$$O(1)$$$. Click here for my submission.


B — Uneven Numbers

There are several efficient ways to approach this problem. One is simply to count the one-, three-, and five-digit numbers less than or equal to $$$N$$$. However, because of the small maximum on $$$N$$$, a more naive approach works. Simply iterate over every value from $$$1$$$ to $$$N$$$, count its digits, and add one to the answer if the count is odd. (We can count a number's digits by repeatedly adding one to the count and dividing the number by ten until it reaches zero.)

Time complexity: $$$O(N \log N)$$$. Note that $$$O(\log N)$$$ is possible with the more efficient approach. Click here for my submission.


C — Build Stairs

We employ a greedy strategy. Process the elements in increasing order, and decrease the current element whenever we can do so without making it less than the last element. (The first element should thus always be decreased.) If at any point the current element must be less than the last element, no matter what we do, the answer is no.

The reason this strategy is optimal is that decreasing a number will make it easier (or at least as easy) to deal with the next element in the list: any value the next element could have taken will still work when the previous number is lower, and decreasing the previous number expands the range of possible values for the next element. (For example, if an element is 3, the next element must be at least 3, but if we decrease this to 2, all the values 3 or higher still work, and 2 does as well.)
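A sketch of the greedy (heights are at least $$$1$$$, so initializing the previous value to $$$0$$$ is safe):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n;
    cin >> n;
    vector<long long> h(n);
    for (auto &x : h) cin >> x;
    long long prev = 0;
    bool ok = true;
    for (int i = 0; i < n; i++) {
        if (h[i] - 1 >= prev) h[i]--;      // decrease whenever it keeps the order
        if (h[i] < prev) ok = false;       // forced below the previous element
        prev = h[i];
    }
    cout << (ok ? "Yes" : "No") << endl;
}
```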

Time complexity: $$$O(N)$$$. Click here for my submission.


D — Gathering Children

We first note that we can break up the given string into components consisting of a string of R's followed by a string of L's. Observe that no child will ever leave its component.

We thus solve the problem for each component. Within each component, a child on the left side will continually move to the right, or a child on the right side will continually move to the left, until it reaches the middle two characters, RL. At this point, the child will move between these two positions until the end of the process. With $$$10^{100}$$$ steps, we can see that all children will reach these two positions.

How can we determine how many children will land on the R and how many will land on the L? Note that one of these two characters must be in an odd position (based on its position in the original string) and the other must be in an even position. Because $$$10^{100}$$$ is even and each move changes the parity of a child's position, each child will end up in whichever position has the same parity as its original position. That means that whichever of R or L is on an odd position will get all the children in the component starting in odd positions, while the other will get all the children in the component starting in even positions.

We can implement this by using a two pointers-style technique to track the components.

Time complexity: $$$O(N)$$$. Click here for my submission.


E — Max GCD

First, can we substantially limit the possible answers to this problem? It turns out that we can: our $$$K$$$ moves keep the sum of our elements constant, so the answer must be a factor of our sum. Since our sum is at most $$$5 \cdot 10^8$$$, we can check every value less than or equal to the square root of the sum to compute a list of these factors. Now, we can simply check, for each of these factors, whether it is possible to use our $$$K$$$ operations to make all items in the list divisible by this factor. We then take the maximum of the factors for which this is possible.

To perform a check on a factor $$$F$$$, start by creating a multiset storing the values of the array modulo $$$F$$$. We'll need to decrease some of the values in the multiset and increase others so that all are equal to an element of the list $$$-F, 0, F, 2F,$$$ etc.

The key lemma is as follows: there will always exist a way to turn the smallest $$$X$$$ elements of the multiset, for some $$$X$$$, into $$$0$$$, and the remaining elements into $$$F$$$, in some number of moves. Moreover, this will be the smallest possible number of moves we could use.

In case you're interested, I'll give a proof that this is true after the solution. (The proof is relatively convoluted and hard to follow, which is why I'm emphasizing the rest of the solution first.)

Iterate over the multiset in increasing order. Maintain the number of subtractions required to decrease the elements we've seen so far to $$$0$$$ and the number of additions required to increase the elements we haven't seen yet to $$$F$$$. When these costs are equal, that's the minimum number of moves we need to achieve the factor $$$F$$$. If this minimum is at most $$$K$$$, $$$F$$$ is a possible answer. We can then simply print the maximum value $$$F$$$.
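A sketch of the whole solution--enumerate the factors of the sum, and for each factor run the balance scan just described. (The difference between the two costs starts at a multiple of $$$F$$$ and drops by exactly $$$F$$$ per switch, so it always hits zero; see the proof below.)

```cpp
#include <bits/stdc++.h>
using namespace std;

// minimum moves to make every element divisible by f (per the lemma above)
long long opsFor(vector<long long> r, long long f) {
    for (auto &x : r) x %= f;
    sort(r.begin(), r.end());
    long long low = 0, high = 0;
    for (long long x : r) high += f - x;   // start with every element "high"
    for (size_t i = 0; i <= r.size(); i++) {
        if (low == high) return low;       // balanced split found
        if (i < r.size()) { low += r[i]; high -= f - r[i]; }
    }
    return LLONG_MAX;                      // unreachable when f divides the sum
}

int main() {
    int n; long long k;
    cin >> n >> k;
    vector<long long> a(n);
    long long s = 0;
    for (auto &x : a) { cin >> x; s += x; }
    long long best = 1;
    for (long long d = 1; d * d <= s; d++) {
        if (s % d) continue;
        for (long long f : {d, s / d})     // both members of the factor pair
            if (opsFor(a, f) <= k) best = max(best, f);
    }
    cout << best << endl;
}
```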

This concludes the solution--for completeness, a proof of the lemma is included below.

To prove existence, note that the sum of the values in the multiset must be $$$0$$$ mod $$$F$$$. Call an element "high" if we'll be raising it to $$$F$$$ and "low" if we'll be lowering it to $$$0$$$. Let the "cost" of an element be the number of additions or subtractions necessary to raise or lower it to its desired value. Call the "high cost" the sum of the costs of all high elements and the "low cost" the sum of the costs of all low elements. Note that the cost of a high element $$$i$$$ is $$$F - i$$$ and the cost of a low element $$$i$$$ is $$$i$$$.

Start with all elements being high. We maintain the difference of the high cost and the low cost. Initially, the high cost is the sum of $$$F - i$$$ over all elements. Since $$$F$$$ is $$$0$$$ mod $$$F$$$ and the sum of all $$$i$$$'s is $$$0$$$ mod $$$F$$$, this sum will be $$$0$$$ mod $$$F$$$. Express it as $$$kF$$$, for some integer $$$k$$$. Note that $$$0 \leq k \leq N$$$. The low cost, meanwhile, will be $$$0$$$, because no elements are currently low.

Suppose that we switch an element $$$i$$$ from high to low. In this case, the high cost will decrease by $$$F - i$$$ and the low cost will increase by $$$i$$$. Therefore, the difference will decrease by $$$(F-i) + i = F$$$. Therefore, if we switch the $$$k$$$ lowest elements from high to low, the high cost will be the same as the low cost. This means that it will take the same numbers of subtractions to deal with the low elements and additions to deal with the high elements, meaning that these subtractions and these additions can be our moves.

We have thus proven that this is doable. Now, we prove that it is optimal. First, note that if we're going to set all elements of the multiset to either $$$0$$$ or $$$F$$$, we minimize our costs by making the least values our low elements and the greatest values our high elements. Moreover, we can observe that it will always be inefficient to set any element to a value other than $$$0$$$ or $$$F$$$, as since our values are all between $$$0$$$ and $$$F-1$$$, moving to one of these other values will pass $$$0$$$ or $$$F$$$ and thus will cost us moves without getting any elements closer to $$$0$$$ mod $$$F$$$.

Time complexity: $$$O(\sqrt{S} + N \log N F(S))$$$, where $$$S$$$ is the sum of the $$$A_i$$$ and $$$F(S)$$$ is the maximum number of factors of any number less than $$$S$$$. While the exact value is hard to compute, it's not hard to see that with $$$N = 500$$$, $$$F(S)$$$ will be small enough to easily pass. Click here for my submission.


F — Enclosed Points

For each point, we count the number of subsets whose bounding rectangles enclose this point. Our answer is the sum of these values.

First, perform coordinate compression, mapping the X and Y-coordinates to integers from $$$0$$$ to $$$N-1$$$. Then, iterate over the points in increasing order of $$$X$$$. Additionally, maintain a segment tree, where each node represents a Y-coordinate and is $$$0$$$ if we have not seen this coordinate yet but $$$1$$$ if we have.

We solve the problem for each point using the Principle of Inclusion/Exclusion. First, the total number of rectangles is $$$2^N$$$, as we can either include or exclude each point from our subset. However, we must subtract the ones that don't contain the current point. For a rectangle not to contain the current point, all of the points in its subset must be above, below, to the left, or to the right of our current point. For each of these cases, let $$$P$$$ be the number of points satisfying its criterion. Then, we subtract $$$2^P$$$ to delete these cases.

However, we've subtracted out some cases twice. In particular, if all points are to the upper-left, upper-right, lower-left, or lower-right of our current point, we've subtracted it out twice (for example, the upper-left case was subtracted in the upper and left cases above), so we need to add $$$2^P$$$ back to account for each of these cases, where $$$P$$$ is the number of points satisfying each criterion.

The only case in which all points in a subset satisfy three or more of our subtraction criteria is the empty set (as no point can be both above and below our current point or both to the left and the right of this point). We added the empty set once in our original count, subtracted it four times, and then added it back four times in our most recent cases. We want it to count zero times, so we need to finally subtract the empty set once to get our final answer for the point.

Now, we need to compute the number of points satisfying these eight criteria. The first four are easy. A point $$$(X, Y)$$$, where $$$X$$$ and $$$Y$$$ are coordinates from $$$0$$$ to $$$N-1$$$, has $$$X$$$ points to its left and $$$N-X-1$$$ points to its right. Similarly, this point has $$$Y$$$ points below it and $$$N-Y-1$$$ points above it.

The remaining four criteria require us to use our segment tree. In particular, the number of lower-left points is equal to $$$query(0, Y)$$$ on our segment tree, as this stores the number of points to the left with Y-coordinates from $$$0$$$ to $$$Y$$$. Similarly, the number of upper-left points is equal to $$$query(Y, N-1)$$$ (or alternatively, simply the total number of left points minus the number of lower-left points). We can then compute the lower-right points by subtracting the lower-left points from the total lower points, and likewise, we can compute the upper-right points by subtracting the upper-left points from the total upper points.

Once we have these counts, we can use binary exponentiation to compute $$$2^P$$$ in $$$O(\log P)$$$ for each $$$P$$$. After we update the answer, we can update the segment tree at $$$Y$$$ to prepare for our next point.

Time complexity: $$$O(N \log N)$$$. Click here for my submission.


By Geothermal, history, 11 months ago, In English,

A — Harmony

Without loss of generality, let $$$A < B$$$. Then, we have three cases:

  • $$$K$$$ is less than $$$A$$$. This gives $$$A - K = B - K$$$, which gives $$$A = B$$$, which is false.
  • $$$K$$$ is greater than $$$B$$$. This gives $$$K - A = K - B$$$, which is also false.
  • $$$K$$$ is between $$$A$$$ and $$$B$$$. This gives $$$K - A = B - K$$$, which gives $$$2K = A+B$$$.

Thus, we must have $$$2K=A+B$$$. If $$$A+B$$$ is odd, there is thus no solution. If $$$A+B$$$ is even, our answer is $$$\frac{A+B}{2}$$$. It is easy to verify that this number is indeed between $$$A$$$ and $$$B$$$.

Runtime: $$$O(1)$$$. Click here for my submission.


B — 0 or 1 Swap

Let $$$K$$$ be the number of positions $$$i$$$ at which $$$p_i \neq i$$$ (using 1-indexing). If $$$K = 0$$$, the answer is yes, as we can simply leave the permutation as is. If $$$K = 2$$$, the answer is also yes: swap the two misplaced elements. (Notice that we can never have $$$K = 1$$$, as if any element is put in the wrong position, the element that was meant to be in that position must also be misplaced.)

If $$$K > 2$$$, the answer is no: our swap can only affect two elements and can thus only correct at most two misplacements. We can thus simply compute $$$K$$$ and check whether it is at most two to get our answer.

Runtime: $$$O(N)$$$. Click here for my submission.


C — City Savers

We employ a greedy strategy. Notice that the first hero is the only one who can attack monsters in the first city. Therefore, the first hero should attack as many monsters in the first city as possible, only moving on to the second city after all the monsters in the first city are eradicated. Then, after the first hero attacks, the second hero is now the only remaining hero who can attack the monsters in the second city, so the second hero should prioritize second-city monsters over third-city monsters.

This logic extends to the following strategy. Process the heroes in increasing order of $$$i$$$. Let each hero start by attacking as many monsters as possible in city $$$i$$$. Then, if they can still kill more monsters after all the monsters in city $$$i$$$ are defeated, our hero can use their remaining energy to attack the monsters in city $$$i+1$$$.

Runtime: $$$O(N)$$$. Click here for my submission.


D — Digits Parade

Let $$$dp[i][j]$$$ be the number of ways to create an $$$i$$$-digit number consistent with the first $$$i$$$ digits of the given pattern and congruent to $$$j$$$ modulo $$$13$$$. As our base case, $$$dp[0][i] = 0$$$ for $$$i$$$ from $$$1$$$ to $$$12$$$, and $$$dp[0][0] = 1$$$ (as our length-zero number has value zero and thus is zero mod $$$13$$$.)

Notice that appending a digit $$$k$$$ to the end of a number that's $$$j$$$ mod $$$13$$$ gives a number that's congruent to $$$10j+k$$$ mod $$$13$$$. We use this fact to perform our transitions. For every state $$$dp[i][j]$$$ with $$$i < N$$$, iterate over the possible values of $$$k$$$. (If $$$S[i]$$$ is ?, there will be ten choices for $$$k$$$; otherwise, there will only be one choice.) Then, we add $$$dp[i][j]$$$ to $$$dp[i+1][(10j+k) \, \% \, 13].$$$

To get our final answer, we can simply print $$$dp[N][5].$$$
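A sketch of the DP with a rolling one-dimensional table (the answer is taken modulo $$$10^9+7$$$):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    string s;
    cin >> s;
    const long long MOD = 1e9 + 7;
    vector<long long> dp(13, 0);
    dp[0] = 1;                             // the empty prefix has value 0 mod 13
    for (char c : s) {
        vector<long long> nxt(13, 0);
        for (int j = 0; j < 13; j++) {
            if (!dp[j]) continue;
            for (int k = 0; k <= 9; k++) { // candidate digits for this position
                if (c != '?' && c - '0' != k) continue;
                nxt[(10 * j + k) % 13] = (nxt[(10 * j + k) % 13] + dp[j]) % MOD;
            }
        }
        dp = nxt;
    }
    cout << dp[5] << endl;
}
```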

Runtime: $$$O(N).$$$ Notice that this will have a weak constant factor, with $$$13$$$ states in every row of our DP table and up to $$$10$$$ transitions for each state, but our algorithm is still easily fast enough. Click here for my submission.


E — Golf

Let's assume $$$X$$$ and $$$Y$$$ are at least zero, for simplicity. (If $$$X$$$ and/or $$$Y$$$ is negative, we simply multiply the appropriate values in our output by $$$-1$$$.)

Let $$$N$$$ be the minimum number of moves we need. Then, the total Manhattan-distance across all of our moves is $$$NK$$$. Additionally, in the end, we must move $$$X+Y$$$ further in positive directions than in negative directions. Therefore, if $$$A$$$ and $$$B$$$ are the total distances we move in positive directions and negative directions, we must have $$$A+B = NK$$$ and $$$A-B = X+Y$$$. Solving gives $$$A = \frac{NK + X + Y}{2}$$$ and $$$B = \frac{NK - X - Y}{2}$$$. Since we can only move to integer points, these must both be nonnegative integers.

We use these facts to make the following observations.

  • If $$$K$$$ is even and $$$X+Y$$$ is odd, there is no solution. This is because $$$NK + X + Y$$$ will be the sum of an even and an odd number, and will thus be odd, so dividing it by two cannot yield the integer value we need for $$$A$$$.
  • We must have that $$$NK \geq X+Y$$$ and that $$$NK = X+Y$$$ mod $$$2$$$. Both of these are implied by the fact that $$$B$$$ is a nonnegative integer. The former also should be intuitive: if we move less than $$$X+Y$$$ distance, there is no way we can get to $$$(X, Y)$$$.
  • If $$$N = 1$$$, then we must have $$$X+Y = K$$$, as we can only move to a point with $$$X+Y = K$$$ in a single move.

We claim that the first observation is the only case in which the answer is $$$-1$$$ and the latter two observations are the only restrictions on $$$N$$$. Therefore, the answer is the least $$$N$$$ that satisfies the latter two observations' requirements. To prove this, we simply give the construction, which is necessary to finish anyway.

We start by dealing with our negative movement. While $$$B$$$ remains greater than $$$K$$$, simply move $$$K$$$ in the negative direction of whichever coordinate is currently closer to our goal. Once $$$B$$$ is at most $$$K$$$, we move $$$B$$$ in the negative direction of the coordinate closer to the goal, as before, and $$$K - B$$$ in the positive direction of the other coordinate.

Afterwards, we have moved exactly $$$B$$$ units in the negative direction, so we know the rest of our movement must be in the positive direction. We can therefore simply increase $$$x$$$ by $$$K$$$ until it reaches $$$X$$$, and then switch to increasing $$$y$$$ until it reaches $$$Y$$$.

You may be asking yourself why in our first step, we need to move in whichever direction is currently closer to the goal. The reason is that this protects us from forcing ourselves into unnecessary waste. As an example, suppose that $$$X=8$$$, $$$Y=0$$$, and $$$K=5$$$. We can compute by the above rules that $$$N = 2$$$, $$$A = 9$$$, and $$$B = 1$$$. Since we're currently at $$$(0, 0)$$$, we have distance $$$8$$$ from the goal in the $$$X$$$ direction and distance $$$0$$$ from the goal in the $$$Y$$$ direction. Our algorithm tells us to take our negative step in the $$$Y$$$ direction while using the rest of our movement to increase $$$X$$$. This takes us to $$$(4, -1)$$$. Our next move is then to $$$(8, 0)$$$, as desired. If we took our negative step in the $$$X$$$ direction, we would get to $$$(-1, 4)$$$ or $$$(-1, -4)$$$, which leads to wasted movement in the $$$Y$$$ direction. There is no one-step path from either of these points to $$$(8, 0)$$$.

Runtime: $$$O(N)$$$. Our limiting factor is the need to reconstruct our steps. We can bound $$$N$$$ as $$$O(X+Y)$$$, so this is fast enough. Click here for my submission.


F — Strings of Eternity

For simplicity, we let $$$N = |S|$$$ and $$$M = |T|$$$.

Let's try to understand the setup here. We can reformulate the problem: we want to find the greatest $$$i$$$ for which the concatenation of $$$i$$$ copies of $$$T$$$ appears somewhere in an infinite concatenation of $$$S$$$.

One key observation is that it never matters which copy of $$$S$$$ we're in: we can define our state simply as the position of $$$S$$$ we're in after adding a copy of $$$T$$$ to our concatenation.

This motivates the following first step. Concatenate copies of $$$S$$$ until the resulting string has length of at least $$$N+M$$$. Then, using any efficient string matching algorithm (I successfully used Rabin-Karp), determine which of the $$$N$$$ positions in $$$S$$$ could be the start of a copy of $$$T$$$.

We now reformulate this as a graph problem. Notice that if we start a copy of $$$T$$$ in position $$$i$$$ on $$$S$$$, the next copy of $$$T$$$ would start at position $$$(i+M) \, \% \, N$$$. We build a directed graph where each node represents one of the $$$N$$$ positions in $$$S$$$. If position $$$i$$$ could be the start of a copy of $$$T$$$, we add an edge from $$$i$$$ to $$$(i + M) \, \% \, N$$$.

The answer is now the length of the longest path in this graph, as traversing an edge represents adding a copy of $$$T$$$. This is a fairly standard problem. We can solve it using an algorithm similar to topological sort. Maintain a queue of vertices we are currently able to process, and store arrays maintaining the lengths of the longest paths ending at each node and whether each node has been visited. To start, consider every vertex with indegree zero. Add these vertices to the queue and set the lengths of their longest paths to zero.

When we visit a node, mark this node as visited, then subtract one from the indegree of the vertex at the end of this vertex's edge (if the vertex does have an edge). Then, set the longest path to this next vertex as the maximum of its previous value or one plus the longest path to our starting vertex. If the indegree of the next vertex is now zero, add it to the queue.

At the end of our process, the length of the longest path is infinity if and only if some vertex has never been visited (as this occurs exactly when our graph contains a cycle). If every vertex has been visited, our answer is the highest value in our distance array.

Runtime: $$$O(N+M).$$$ Click here for my submission.


If you have any questions, please feel free to post them below!


By Geothermal, history, 11 months ago, In English,

As many of you are aware, some users abuse Codeforces' virtual participation feature in order to submit prewritten solutions immediately when the contest starts so as to appear at the top of the contest leaderboards. A few examples are here, here, and here.

Of course, preventing cheating by virtual participants is rather difficult, especially because while these three are relatively obvious cases, other forms of dishonest behavior are just as plausible but are harder to detect. For example, someone who looked at the comments on a round's announcement before participating virtually and thus learned that a particularly valuable problem was relatively easy to solve would have an unfair advantage (as they would be able to earn more points by knowing to attempt this problem early into the contest), but one that would be rather difficult to detect.

Luckily, all Codeforces scoreboards include a checkbox towards the top-right of the page through which users can select whether to include unofficial scores in the rankings. This works great for rounds rated for all users (i.e. combined Div 1/Div 2 Rounds and rounds with separate contests for the two divisions). However, for rounds run only for Division 2 or Division 3, we run into a problem. There is no way to let the standings include participants who participated in the actual contest but who were over the rating cap, but not virtual participants from after the contest. I personally enjoy participating in these rounds to compete with the other unofficial contestants, to practice quick problem-solving, and to solve the harder problems, which are often interesting and challenging even though they're designed for a lower rating cap. Therefore, it's somewhat frustrating that it's difficult to access the standings as they were at the end of the contest (i.e. including only live participants) a while later.

It turns out that there's an easy solution already implemented. On Educational Round scoreboards, an option at the top-left allows users to select whether to view only Division 2 users or users from both divisions (separately from the option to show or hide virtual participants). I propose that Codeforces extend this feature to all rounds rated only for Division 2 or Division 3. In other words, Div. 2-only and Div. 3-only rounds would have the same option to show or hide participants above the contest's rating cap, independently of whether virtual participants are hidden.

Please feel free to share your thoughts below. While this is a relatively small problem, it does impact a large number of users: for example, 1557 competitors above the 1600 rating cap registered for the last Division 3 round alone. Moreover, I'm optimistic that this wouldn't be overly difficult to fix, as the necessary infrastructure already exists and is in active use. Implementing this proposal would simply be a matter of extending it to more rounds.


By Geothermal, history, 11 months ago, In English,

Hi all! As I've done several times in the past, I've written up an unofficial set of solutions to Codeforces Round 575 so that some solutions will be available before system testing completes. If my code gets hacked, I'll update the appropriate problem's solution as soon as I can.


A — Three Piles of Candies

We claim that the answer is always $$$\lfloor \frac{a+b+c}{2} \rfloor$$$. We clearly can't do any better: if Alice and Bob could each get more candies than this, then in total, they would have more than $$$a+b+c$$$ candies, which is impossible. To prove that this is always achievable, let Alice take the smallest pile and Bob take the second-smallest pile. Then, distribute enough candies from the largest pile so that Alice has as many candies as Bob (which is always possible because the largest pile contains at least as many candies as the difference between the other two piles). Finally, distribute the remaining candies from the largest pile as evenly as possible.
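As a minimal sketch (untested, with my own I/O handling, assuming each query is given as a triple $$$a, b, c$$$):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int q;
    cin >> q;
    while (q--) {
        long long a, b, c;
        cin >> a >> b >> c;
        // Each of Alice and Bob ends with floor((a+b+c)/2) candies.
        cout << (a + b + c) / 2 << '\n';
    }
}
```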

Runtime: $$$O(1)$$$ per query. Click here for my submission.


B — Odd Sum Segments

Let's first figure out when there is an answer. Notice that the K subarrays must each have a sum of 1 mod 2, so the sum of the entire array must be congruent to K mod 2. Moreover, because every odd subarray must contain at least one odd element, there must be at least K odd elements in the array. If these conditions aren't satisfied, the answer will be No.

We claim that the answer is Yes in all other cases. To construct a solution, we output the indices of the first K-1 odd elements, followed by N. Each of the first K-1 subarrays contains exactly one odd element, so its sum is odd, and because the sum of the entire array is congruent to K mod 2, the sum of the final subarray is congruent to K-(K-1) = 1 mod 2, so it also has an odd sum, as desired.
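A rough implementation sketch (untested; I'm assuming a YES/NO output format with the K right borders on one line, so adjust to match the actual statement):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int q;
    cin >> q;
    while (q--) {
        int n, k;
        cin >> n >> k;
        vector<int> odd;                     // 1-based indices of odd elements
        for (int i = 1; i <= n; i++) {
            int a; cin >> a;
            if (a % 2 != 0) odd.push_back(i);
        }
        // Need at least k odd elements, and the total parity (which equals
        // the number of odd elements mod 2) must be congruent to k mod 2.
        if ((int)odd.size() >= k && (int)odd.size() % 2 == k % 2) {
            cout << "YES\n";
            for (int j = 0; j < k - 1; j++) cout << odd[j] << ' ';
            cout << n << '\n';
        } else {
            cout << "NO\n";
        }
    }
}
```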

Runtime: $$$O(N)$$$ per query. Click here for my submission.


C — Robot Breakout

Notice that for each robot individually, disabling action one means that the resulting $$$X$$$ cannot be lower than the robot's $$$X$$$, because the robot cannot move towards a lower value of $$$X$$$. Disabling action two means that the resulting $$$Y$$$ cannot be higher than the robot's $$$Y$$$, for similar reasons. Actions three and four have similar effects.

We thus maintain the maximum and minimum possible values for $$$X$$$ and $$$Y$$$. Initially, both maxima are $$$100,000$$$ and both minima are $$$-100,000$$$, in order to satisfy the output constraints. Then, as we process each robot, for each disabled action, we update the appropriate maximum or minimum. If the maximum $$$X$$$ is less than the minimum $$$X$$$ or the maximum $$$Y$$$ is less than the minimum $$$Y$$$, the answer is $$$0$$$. If not, we can output any valid point in the given range.
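A hedged sketch of this bookkeeping (untested; it assumes each robot is given as $$$x, y, f_1, f_2, f_3, f_4$$$ with $$$f_i = 0$$$ meaning action $$$i$$$ is disabled, matching the directions described above):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int q;
    cin >> q;
    while (q--) {
        int n;
        cin >> n;
        int loX = -100000, hiX = 100000, loY = -100000, hiY = 100000;
        for (int i = 0; i < n; i++) {
            int x, y, f1, f2, f3, f4;
            cin >> x >> y >> f1 >> f2 >> f3 >> f4;
            if (!f1) loX = max(loX, x);  // can't move to lower X
            if (!f2) hiY = min(hiY, y);  // can't move to higher Y
            if (!f3) hiX = min(hiX, x);  // can't move to higher X
            if (!f4) loY = max(loY, y);  // can't move to lower Y
        }
        if (loX <= hiX && loY <= hiY)
            cout << "1 " << loX << ' ' << loY << '\n';  // any valid point works
        else
            cout << "0\n";
    }
}
```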

Runtime: $$$O(N)$$$ per query. Click here for my submission.


D — RGB Substring

For the smaller version, D1, we can consider every length-K substring of S individually. For each substring, we take three cases: the substring could start with an R, a G, or a B. The rest of the substring is then fixed: any R must be followed by a G, any G by a B, and any B by an R. To compute the cost of editing this substring into one of these three cases, we simply count the positions whose values differ from the intended value. The best answer for this substring is then the minimum cost over the three cases, and the best answer overall is the minimum cost over all substrings.

For the larger version, we apply a very similar technique, using a sliding window to optimize the solution. We can reframe the cases as follows: the string needs a length-K substring matching the corresponding substring at the same position in RGBRGBRGB..., GBRGBRGBR..., or BRGBRGBRG.... We consider each of these cases individually. Start by computing the number of "errors" (i.e. positions at which the original string differs from the goal string) among the first K elements. Then, to shift the window, subtract one from the error count if the position leaving the window caused an error, and add one if the position entering at the end of the window contains an error. We then simply take the minimum number of errors over all windows.
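Here's an untested sketch of the sliding-window solution; the three cases correspond to offsets into the repeating pattern RGB:

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int q;
    cin >> q;
    const string pat = "RGB";
    while (q--) {
        int n, k;
        string s;
        cin >> n >> k >> s;
        int best = k;  // a window can have at most k errors
        for (int c = 0; c < 3; c++) {        // c = offset into "RGB"
            int errors = 0;
            for (int i = 0; i < k; i++)      // errors in the first window
                if (s[i] != pat[(i + c) % 3]) errors++;
            best = min(best, errors);
            for (int i = k; i < n; i++) {    // slide the window right by one
                if (s[i - k] != pat[(i - k + c) % 3]) errors--;
                if (s[i] != pat[(i + c) % 3]) errors++;
                best = min(best, errors);
            }
        }
        cout << best << '\n';
    }
}
```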

Runtime: $$$O(N)$$$ per query (or $$$O(K(N-K))$$$ for the easy version). Click here for my submission.


E — Connected Component on a Chessboard

Without loss of generality, assume $$$B \geq W$$$. (The other case proceeds similarly; in fact, we can simply add one to $$$x$$$ at every point in a solution to this case to get a solution to the case in which $$$B$$$ and $$$W$$$ are swapped.)

The answer is Yes if and only if $$$B \leq 3W+1$$$. We prove this by induction on $$$W$$$. If $$$W = 1$$$, $$$B$$$ can be at most $$$4$$$ by inspection. Then, every new white cell we add can give us at most three new black cells, as the white cell must be adjacent to at least one preexisting black cell; this shows the bound is necessary. To see that it is achievable, make a long column of alternating black and white cells, then add black cells above and below the white cells at the ends of the column and to the left and right of every white cell, for $$$3W+1$$$ black cells in total. To get fewer black cells, we can omit some of the black cells on the sides and/or above and below the ending white cells.

This proof is essentially the key to solving the problem. We can simply implement this procedure to construct a solution. Start by building a column of $$$W$$$ white cells and $$$W-1$$$ black cells. Then, we have space for $$$2W+2$$$ more black cells at the top and bottom and on the sides, so we should simply use as many of those cells as necessary to add the remaining black cells to our component.
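Below is an untested sketch of this construction, assuming the (hypothetical) convention that cell $$$(x, y)$$$ is black exactly when $$$x + y$$$ is even; if the actual statement uses the opposite convention, shifting every $$$x$$$ by one fixes the colors, just as in the swapped case.

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int q;
    cin >> q;
    while (q--) {
        long long b, w;
        cin >> b >> w;
        bool swapped = false;
        if (b < w) { swap(b, w); swapped = true; }  // ensure B >= W
        if (b > 3 * w + 1) { cout << "NO\n"; continue; }
        cout << "YES\n";
        vector<pair<long long, long long>> cells;
        // Column: W white cells at (0,1), (0,3), ..., (0,2W-1),
        // with W-1 black cells between them.
        for (long long y = 1; y <= 2 * w - 1; y++) cells.push_back({0, y});
        // Candidate spots for extra black cells: the two column ends,
        // plus the cells left and right of each white cell (2W+2 total).
        vector<pair<long long, long long>> extras = {{0, 0}, {0, 2 * w}};
        for (long long y = 1; y <= 2 * w - 1; y += 2) {
            extras.push_back({-1, y});
            extras.push_back({1, y});
        }
        long long need = b - (w - 1);   // black cells still required
        for (auto& p : extras) {
            if (need == 0) break;
            cells.push_back(p);
            need--;
        }
        // Shifting x by one flips every cell's color, handling B < W.
        for (auto& [x, y] : cells)
            cout << (swapped ? x + 1 : x) << ' ' << y << '\n';
    }
}
```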

Runtime: $$$O(B+W)$$$ per query. Click here for my submission.


F — K-th Path

I'll first present the solution I used in contest. Then, I'll discuss a simpler solution, courtesy of dorijanlendvaj.

Recall that Dijkstra's algorithm visits nodes in increasing order of their distance from the source. Therefore, if we wanted to find the K'th shortest path from a single node, we could do so quickly by running Dijkstra's and terminating as soon as we've visited K other nodes. We exploit a similar property to solve the problem while considering paths from all nodes.

First, we read in the data. For each vertex, we sort the edges coming from that vertex in increasing order of weight. (Ties can be broken arbitrarily.) Then, we define a "state" variable. A state represents a path from a starting vertex to some other vertex and logs four pieces of data: the starting node, the second-to-last node on the path, the index of the final edge in the second-to-last node's adjacency list, and the length of the path.

We maintain a priority queue of these states, pulling the least-weight ones first. Start by adding a state for the single-edge path using each node's shortest edge. Then, to process a state, we first check whether the shortest path from its starting node to its ending node has already been found. If it hasn't, we add this length to our list and output it if it's the K'th one we've found; we also add a new state to the priority queue representing the path from our starting node to the ending node, plus the shortest edge out of the ending node.

In either case, we then add a new state in which we replace the last edge of the current state with the next-shortest edge out of the second-to-last node, if one exists. In other words, we increase the index variable in our state by one.

The key reason this method works is that it guarantees that we process all possible states in increasing order of weight. Moreover, we start with $$$N$$$ states, and each state only creates a second extra state if it creates one of our $$$K$$$ paths. Therefore, we'll end up with $$$O(N+K)$$$ states in total.

Due to the priority queue operations and sorting, our runtime is $$$O((N+K) \log (N+K) + M \log M).$$$

Click here for my submission.

Here's a second solution. I haven't submitted this one myself, so please feel free to comment if you spot any errors. (Although I received this solution from Dorijan, I wrote it up myself, so all blame for any errors should go to me.)

Eliminate all edges except the K cheapest ones, with ties broken arbitrarily. Then, run Dijkstra's algorithm from each vertex incident to a remaining edge (or use an efficient all-pairs shortest paths algorithm) to find all possible path lengths. From here, sort the path lengths and pick the K'th smallest. This is correct because none of the K shortest paths can use an edge outside the K cheapest: intuitively, any path containing a pricier edge is at least as long as each of the K single-edge paths formed by the K cheapest edges, so it can never be among the K shortest. Moreover, it's fairly easy to see that this will run in time: at most $$$O(K^2)$$$ paths result from this process.
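Here's an untested sketch of this second approach (again, any errors are mine, and the I/O format is assumed):

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

int main() {
    ll n, m, k;
    cin >> n >> m >> k;
    vector<array<ll, 3>> edges(m);               // {weight, u, v}
    for (auto& e : edges) cin >> e[1] >> e[2] >> e[0];
    sort(edges.begin(), edges.end());            // sort by weight
    if ((ll)edges.size() > k) edges.resize(k);   // keep only the k cheapest

    // Compress the vertices that appear in the kept edges (at most 2k).
    vector<ll> verts;
    for (auto& e : edges) { verts.push_back(e[1]); verts.push_back(e[2]); }
    sort(verts.begin(), verts.end());
    verts.erase(unique(verts.begin(), verts.end()), verts.end());
    int v = verts.size();
    auto id = [&](ll x) {
        return lower_bound(verts.begin(), verts.end(), x) - verts.begin();
    };
    vector<vector<pair<int, ll>>> adj(v);
    for (auto& e : edges) {
        int a = id(e[1]), b = id(e[2]);
        adj[a].push_back({b, e[0]});
        adj[b].push_back({a, e[0]});
    }

    // Dijkstra from every remaining vertex; record each pair once (s < t).
    vector<ll> paths;
    for (int s = 0; s < v; s++) {
        vector<ll> dist(v, LLONG_MAX);
        priority_queue<pair<ll, int>, vector<pair<ll, int>>, greater<>> pq;
        dist[s] = 0;
        pq.push({0, s});
        while (!pq.empty()) {
            auto [d, u] = pq.top(); pq.pop();
            if (d > dist[u]) continue;
            for (auto [to, wt] : adj[u])
                if (d + wt < dist[to]) {
                    dist[to] = d + wt;
                    pq.push({dist[to], to});
                }
        }
        for (int t = s + 1; t < v; t++)
            if (dist[t] < LLONG_MAX) paths.push_back(dist[t]);
    }
    sort(paths.begin(), paths.end());
    cout << paths[k - 1] << '\n';
}
```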


Feel free to post below if you have any questions!
