# A — Password

We have $$$N$$$ choices for each of the three characters in our password. This gives us a total of $$$N \cdot N \cdot N = N^3$$$ possible passwords. We can thus print $$$N \cdot N \cdot N$$$ as our answer.
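No real code is needed here, but as a minimal sketch (the function name is my own):

```cpp
// Number of possible 3-character passwords when each of the
// three characters independently has n choices: n * n * n.
long long countPasswords(long long n) {
    return n * n * n;
}
```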

Runtime: $$$O(1)$$$. Click here for my submission.

# B — Buffet

We can solve this with a brute-force simulation, simply implementing the procedure given in the problem. Iterate over the dishes in the order specified by array $$$A$$$, and when we eat dish $$$i$$$, add $$$B[i]$$$ to our answer, plus $$$C[i-1]$$$ if we just ate dish $$$i-1$$$.

Note that since we'll eat every dish once, we could also just add the sum of array $$$B$$$ to our answer as we read it in, iterating over $$$A$$$ only to add values from $$$C$$$ where necessary. My solution implements this approach.

Runtime: $$$O(N)$$$. Click here for my submission.

# C — Maximal Value

We're given a condition on $$$B_i$$$, but since we want to find the best possible array $$$A$$$, let's try to rephrase it as a condition on $$$A_i$$$. We claim that the given condition is equivalent to $$$A_i \leq \min(B_i, B_{i-1})$$$, given that both of those values exist. (Using 1-indexing: if $$$i=1$$$, $$$B_{i-1}$$$ won't exist, so we just have $$$A_1 \leq B_1$$$. Likewise, since $$$B_N$$$ doesn't exist, we have that $$$A_N \leq B_{N-1}$$$, since the $$$B_i$$$ term won't come into play.)

This might be relatively intuitive, but let's prove it formally. We'll show that our condition on $$$A_i$$$ is both necessary and sufficient for the condition on $$$B_i$$$ to be true. Let's start by showing necessity. Suppose that our condition does not hold. If $$$A_i > B_i$$$, then this contradicts the fact that $$$B_i \geq \max(A_i, A_{i+1})$$$, and similarly, if $$$A_i > B_{i-1}$$$, then we cannot have that $$$B_{i-1} \geq \max(A_{i-1}, A_i)$$$. Therefore, if this condition is not true, we cannot have the condition on $$$B_i$$$ given in the problem, showing that the $$$A_i$$$ condition is necessary.

Now, we'll prove sufficiency. If $$$B_i \geq \max(A_i, A_{i+1})$$$, we must have that $$$B_i \geq A_i$$$ and $$$B_i \geq A_{i+1}$$$. Both of these are implied by our $$$A_i$$$ condition, so this condition is sufficient--that is, as long as our $$$A_i$$$ condition is true, our $$$B_i$$$ condition is also true.

We have thus shown that the $$$A_i$$$ condition is both necessary and sufficient for the $$$B_i$$$ condition to hold, so the two are equivalent.

Now, the problem is relatively easy. To maximize the sum of the array $$$A$$$, we want each $$$A_i$$$ to be as large as possible, so we simply set each $$$A_i$$$ to $$$\min(B_i, B_{i-1})$$$. Then, our answer is the sum of these values.
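A sketch of this computation (the function name is my own; `B` is stored 0-indexed, so the end terms are `B.front()` and `B.back()`):

```cpp
#include <algorithm>
#include <vector>
using std::vector;

// Maximum possible sum of A, where each A_i is set to the minimum of
// its neighboring B values (taking only the terms that exist at the ends).
long long maxArraySum(const vector<int>& B) {
    int n = (int)B.size() + 1;                 // A has one more element than B
    long long total = B.front() + B.back();    // A_1 <= B_1 and A_N <= B_{N-1}
    for (int i = 1; i < n - 1; ++i)            // middle elements of A
        total += std::min(B[i - 1], B[i]);
    return total;
}
```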

Runtime: $$$O(N)$$$. Click here for my submission.

# D — Face Produces Unhappiness

We make two key observations. (Note that for simplicity, we'll refer to the number of happy people as *the answer*.)

- The answer will be at most $$$N-1$$$.
- Each of our $$$K$$$ moves can increase the answer by at most $$$2$$$.

The intuition behind these observations isn't too complicated, but the proofs are fairly annoying. The main interesting part of this problem is coming up with the algorithm which generates optimal moves. If you're not interested in the proofs, you can skip the following section.

Claim 1 is relatively easy to prove. Suppose that all $$$N$$$ people are happy. Then, all $$$N$$$ people must be looking at someone else facing the same direction, which implies that all must be facing the same direction. However, then, a person on one of the ends won't be looking at anyone, so that person won't be happy, a contradiction.

Claim 2 is a little more complicated. First, note that there are only four people whose happiness could be affected by any of our moves: the two people on either end of the move and the two people outside of the move standing next to the people inside of it. It's fairly easy to see that everyone else outside of the move won't be affected--they'll be looking at the same person as they were before the move. Additionally, the people inside the move and not on the end will be looking at the same person after the move, and those two people will both have swapped the directions in which they're looking, so the happiness of this group also won't change.

Now, let's discuss the remaining four people. First, we'll prove that we can't increase the answer by more than two by showing that there cannot be more than two of these people who were unhappy before the move but happy after it.

The proof is long and relatively uninteresting, but the basic idea is to do casework on which three of these four people were unhappy before the move but happy after it. As we can show that none of these cases is possible, we cannot increase the answer by more than two with a single move.

Now that we've proven that an increase of two per move is the best possible, we'll give an algorithm that achieves this maximum. Suppose the input starts with an L. (The same logic applies if it starts with an R.) Then, with each move, take a block of consecutive R's (with L's on each side of the block) and flip it. Flipping any of these blocks except one at the end of the string increases our answer by two, so we should flip those first. Finally, if we have any moves left over and there's a block of R's at the end, we flip that block, giving us the maximum answer of $$$N-1$$$.

To make this approach clearer, let's go over the second sample test case. We have the string LRRLRLRRLRLLR and can make up to three moves. We'll thus flip the first block of R's to get LLLLRLRRLRLLR. Likewise, we'll flip the second and third blocks, to get a final string of LLLLLLLLLRLLR. This gives the desired answer of nine happy people. If we had any more moves, we would flip the second-to-last block of R's, to get LLLLLLLLLLLLR and an answer of eleven, and with one more move after that, we would flip the final R to get LLLLLLLLLLLLL, which gives an answer of twelve.

This gives us a fairly simple implementation. First, count the answer in the given string. Then, for each move, we can increase the answer by two as long as it doesn't go over $$$N-1$$$.
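A sketch of this implementation (a person is happy exactly when they and the neighbor they face form a same-direction pair, so counting adjacent equal characters gives the initial answer; names are my own):

```cpp
#include <algorithm>
#include <string>

// Happy people = number of adjacent same-direction pairs. Each of the
// k moves adds at most 2 to this count, and the answer is capped at n-1.
int maxHappy(const std::string& s, long long k) {
    int n = (int)s.size();
    long long happy = 0;
    for (int i = 0; i + 1 < n; ++i)
        if (s[i] == s[i + 1]) ++happy;        // this pair makes one person happy
    return (int)std::min<long long>(n - 1, happy + 2 * k);
}
```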

Runtime: $$$O(N)$$$. Click here for my submission.

# E — Second Sum

We iterate over the numbers in decreasing order. For each number, we count how many subarrays contain exactly one of the numbers we've seen previously. We then multiply that by the current number and add it to our answer.

Now, we'll explain how to do this more precisely. Let's let $$$a$$$ and $$$b$$$ be the last and second-to-last positions of greater numbers in the set before the current number. Similarly, let $$$x$$$ and $$$y$$$ be the first and second positions of greater numbers after the current number. Then, there are two ways our subarray could contain exactly one larger number: it could contain position $$$a$$$, but not positions $$$b$$$ or $$$x$$$, or it could contain position $$$x$$$, but not positions $$$a$$$ or $$$y$$$. Thus, if our current number's position is $$$i$$$, there are $$$(a-b)(x-i) + (y-x)(i-a)$$$ subarrays containing $$$i$$$ and exactly one greater position. (The first term accounts for subarrays containing $$$i$$$ and $$$a$$$ and the second accounts for subarrays containing $$$i$$$ and $$$x$$$.)

To query for $$$a$$$, $$$b$$$, $$$x$$$, and $$$y$$$, we can use a set containing all the positions we've seen so far. The one remaining detail is what to do if $$$a, b, x,$$$ or $$$y$$$ doesn't exist (because there aren't two positions of greater numbers before or after the current one). It turns out that if $$$a$$$ or $$$b$$$ doesn't exist, we can set it to $$$-1$$$, and if $$$x$$$ or $$$y$$$ doesn't exist, we can set it to $$$N$$$. (The intuition here is that if there are no such positions within the array, the points at which our subarrays become invalid occur when we leave the array--that is, one position before its start and one position after its end.) The rest of the solution works the same as before.
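A sketch of the whole solution (names are my own; I seed the set with sentinel positions $$$-2, -1, N, N+1$$$ so the four lookups always succeed, then clamp the results to the $$$-1$$$ / $$$N$$$ values described above):

```cpp
#include <algorithm>
#include <iterator>
#include <set>
#include <vector>
using std::vector;

// Sum over all subarrays (length >= 2) of their second-largest element.
// p is a permutation of 1..n. Values are processed in decreasing order;
// for the value at position i, a and b are the last two earlier positions
// of larger values, x and y the first two later ones.
long long secondSum(const vector<int>& p) {
    int n = (int)p.size();
    vector<int> pos(n + 1);
    for (int i = 0; i < n; ++i) pos[p[i]] = i;

    std::set<int> seen = {-2, -1, n, n + 1};   // sentinel positions
    long long answer = 0;
    for (int v = n; v >= 1; --v) {
        int i = pos[v];
        auto it = seen.lower_bound(i);
        long long x = *it, y = *std::next(it);
        long long a = *std::prev(it), b = *std::prev(it, 2);
        a = std::max(a, -1LL); b = std::max(b, -1LL);   // clamp to -1
        x = std::min(x, (long long)n); y = std::min(y, (long long)n);
        answer += (long long)v * ((a - b) * (x - i) + (y - x) * (i - a));
        seen.insert(i);
    }
    return answer;
}
```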

Runtime: $$$O(N \log N)$$$. Click here for my submission.

# F — Many Slimes

We employ a greedy approach. For obvious reasons, our initial slime should be the largest one (as otherwise, we will never be able to create the largest value). Then, when each slime reproduces, make it create the largest possible slime we still need to build. (Of course, this new slime must be smaller than our current slime.) If any of our slimes cannot create any new slimes when required to do so, the answer is no. If we can build all the slimes in this manner, the answer is yes.

The proof that this approach is correct is fairly simple. Larger slimes can do anything smaller slimes can do and then some--therefore, if we have the choice between creating a large slime and a smaller slime, we are never worse off when we simply create the large slime.

We can implement this by storing the slimes in a multiset and using C++'s upper_bound function to query for the largest possible slime for a given slime to create.
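A sketch of this greedy (names are my own; I use `lower_bound` and step back one element to find the largest still-needed slime strictly smaller than the current one):

```cpp
#include <iterator>
#include <set>
#include <vector>

// Greedy check: start from the largest slime; each second, every existing
// slime creates the largest still-needed slime strictly smaller than
// itself. s contains all 2^n target health values.
bool canProduce(int n, const std::vector<int>& s) {
    std::multiset<int> need(s.begin(), s.end());
    std::multiset<int> have;
    have.insert(*need.rbegin());              // the initial slime is the largest
    need.erase(std::prev(need.end()));
    for (int day = 0; day < n; ++day) {
        std::multiset<int> created;
        // Let larger slimes pick first, so they grab the larger targets.
        for (auto it = have.rbegin(); it != have.rend(); ++it) {
            auto pick = need.lower_bound(*it); // first value >= current slime
            if (pick == need.begin()) return false;  // nothing smaller remains
            --pick;                           // largest value < current slime
            created.insert(*pick);
            need.erase(pick);
        }
        have.insert(created.begin(), created.end());
    }
    return need.empty();
}
```

The two test cases discussed in the comments below (`4 3 3 3 2 2 2 1` and `8 7 7 7 2 1 1 1`, both Yes) are handled correctly by this greedy.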

Runtime: $$$O(N \cdot 2^N)$$$. (The multiset operations make this $$$O(2^N \log 2^N)$$$, which is equivalent to the given complexity.) Click here for my submission.

As an aside, I think this problem was substantially easier than E, and proving that the solution works was probably easier than doing the same for D.

Please feel free to share questions or comments below.

Answer for D can be directly written as min(n-1, a+2*k) where a is the number happy people initially

Can you please explain more? How did you arrive at this solution?

A pair of neighbors in the queue can be one of two types: people looking in the same direction, or people looking in opposite directions. The real key observations are:

- The number of happy people is the number of type-1 pairs.
- Each of our moves changes the type of exactly those pairs that are divided by the endpoints of the chosen segment.

Why does this not work? https://atcoder.jp/contests/abc140/submissions/7396993 It just greedily matches the greatest slime we have to the greatest slime we need to make, etc.

`8`

`4 3 3 3 2 2 2 1`

Says No; it should be Yes.

Why is it Yes? Sorry, I'm bad at programming.

You aren't, lol.

A lotta people did the same greedy you did, including me.

Take the test case:

`8 7 7 7 2 1 1 1`

We would produce

`2 1 1 1`

on the final day and will get the answer `NO`.

But the correct way of doing it will be:

`8`

`8 7`

`8 7 7 2`

`8 7 7 2 7 1 1 1`

Thanks a lot XD

Why doesn't building the tree for problem F by BFS work?

I mean that the first slime is slime number 0000...0, and a slime that became separated from its father in second $$$i$$$ has its father's index plus the $$$i$$$th bit. So the father of some bitmask `Mask` is `Mask - (Mask & -Mask)`. My idea is to run a BFS on this kind of tree from the root (0000...0), and when we are at a node, choose its health to be the biggest unused health yet.

But I don't know why this is wrong.

Can you help me?

Sorry for my bad English (:

That feeling when you came up with problem E a long time ago with an $$$O(n)$$$ solution, were unable to cut off $$$O(n\log{n})$$$ solutions, and so discarded the problem :c

So, a challenge: solve the problem in $$$O(n)$$$. Go!

PS: I didn't implement this myself. Refer to this blog. :)

My attempt

I believe the $$$O(N \log N)$$$ solution can be adapted fairly easily to run in $$$O(N)$$$: we can simply calculate $$$a$$$, $$$b$$$, $$$x$$$, and $$$y$$$ by sweeping through the array twice (once forwards, to compute $$$a$$$ and $$$b$$$, and once backwards, to compute $$$x$$$ and $$$y$$$) rather than using the set operations. From there, the given solution works in $$$O(N)$$$. Did you have a different approach in mind?

I personally don't have a problem with $$$O(N \log N)$$$ solutions being accepted here--I don't think the $$$O(N)$$$ requirement makes the problem too much more difficult or interesting. (Of course, on the off chance there's an error in this approach, let me know.)

I read your code and was confused until I saw `FORd`. At first I thought it was just `FOR` :'(

Will you please explain the sweeping technique, or share any similar problems with solutions?

My Solution

I think it is $$$O(n)$$$.

The idea is similar to that of finding the previous or next larger number for each position $$$i$$$ (this is why there are two for loops in the code: they find the two previous and the two next larger numbers for each position $$$i$$$).

In the classic problem, every time we meet a number $$$a$$$ in the list which is less than the number at position $$$i$$$, we can delete this $$$a$$$. Here we can't delete it immediately, because this $$$a$$$ can be the second larger number for a future number. But if it happens twice that $$$a$$$ is less than the current number, then we can delete this $$$a$$$ by the same logic as in the classic problem.

Also, for each number we are currently on, we seek only two larger numbers.

So every number in the list will be deleted once we pass over it twice, and every number we are currently on will stop iterating over the list after finding two larger numbers.

I am not sure about the constant in $$$O(n)$$$, but I feel that it is $$$O(n)$$$.

Just curious: if in problem D we had the constraint that the same direction had to be 'L' for a person to be happy, instead of any same direction, how would we solve the problem?

Can anyone please explain the approach to problem E?

In problem D, I think only 3 people are affected: if we flip the range $$$[l:r]$$$, only the 3 people located at $$$l-1$$$, $$$r$$$, and $$$r+1$$$ are affected.

For D, a different approach: I took the blocks of consecutive identical characters and then greedily selected which ones to change direction. Submission.