lucifer1004's blog

By lucifer1004, history, 10 months ago, In English

Google Kick Start 2022 Round A Tutorial

中文版 | Chinese version

Here are my solutions to Google Kick Start 2022 Round A.

Problem A — Speed Typing

Solution I: Single Pointer

We use a pointer to the next position to be matched in the target string. We then scan the original string and move the pointer forward whenever a match is found.

If the target string is fully matched, the answer will be the difference between the length of the original string and the target string, otherwise there will be no valid answer.


  • Time complexity is $$$\mathcal{O}(|S|+|T|)$$$
  • Space complexity is $$$\mathcal{O}(1)$$$
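As an illustration, here is a minimal C++ sketch of the single-pointer scan (the function name and signature are mine, not taken from the hidden code): it returns the number of deletions, or $$$-1$$$ when there is no valid answer.

```cpp
#include <string>

// Single-pointer subsequence match: j is the next position of t to match.
// Returns |s| - |t| if t is a subsequence of s, otherwise -1.
long long speedTyping(const std::string& s, const std::string& t) {
    size_t j = 0;
    for (char c : s)
        if (j < t.size() && c == t[j]) ++j;  // advance only on a match
    return j == t.size() ? (long long)(s.size() - t.size()) : -1;
}
```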
Code (C++)

Problem B — Challenge Nine

Solution I: Math + Greedy

We know that a number is divisible by $$$9$$$ if and only if its digit sum is divisible by $$$9$$$. Based on this, we can quickly determine the digit we need to insert. If the number is already divisible by $$$9$$$, we can insert either $$$0$$$ or $$$9$$$; to keep the answer smallest, we choose $$$0$$$.

We then look for the insertion position greedily: we scan from left to right and insert before the first digit that is strictly larger than the digit to be inserted.

However, when we are inserting $$$0$$$, we cannot place it at the beginning (that would create a leading zero), so we put it in the second place instead.


  • Time complexity is $$$\mathcal{O}(N)$$$
  • Space complexity is $$$\mathcal{O}(N)$$$
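The greedy insertion above can be sketched as follows (a minimal version of my own; it assumes the input has no leading zeros):

```cpp
#include <string>

// d is the digit that makes the digit sum divisible by 9.
std::string challengeNine(const std::string& n) {
    int sum = 0;
    for (char c : n) sum += c - '0';
    int d = (9 - sum % 9) % 9;
    std::string res = n;
    if (d == 0) {                    // '0' cannot lead, so put it second
        res.insert(1, 1, '0');
        return res;
    }
    size_t pos = 0;                  // first digit strictly larger than d
    while (pos < n.size() && n[pos] - '0' <= d) ++pos;
    res.insert(pos, 1, char('0' + d));
    return res;
}
```

For example, for `"97"` the needed digit is $$$2$$$, and inserting it before the `9` yields the smallest multiple of nine.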
Code (C++)

Problem C — Palindrome Free Strings

Solution I: Dynamic Programming

Any palindrome of length $$$\ge$$$ 5 must contain a 5-character or a 6-character palindrome centered at the same position. So it suffices to forbid all 5- and 6-character palindromic windows, which we can do by maintaining the set of feasible patterns of the last $$$5$$$ characters during a dynamic programming; appending a character forms the 5- and 6-character windows to check.


  • Time complexity is $$$\mathcal{O}(2^K\cdot|S|)$$$, where $$$K=5$$$
  • Space complexity is $$$\mathcal{O}(2^K)$$$
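A hedged sketch of this bitmask DP (names and layout are mine, not the author's): `dp` marks which patterns of the last five characters are feasible, and a new character is rejected when it completes a palindromic window of length 5 or 6.

```cpp
#include <string>
#include <vector>

bool palindromeFree(const std::string& s) {
    auto isPal = [](int v, int len) {  // are the low `len` bits symmetric?
        for (int i = 0; i < len / 2; ++i)
            if (((v >> i) & 1) != ((v >> (len - 1 - i)) & 1)) return false;
        return true;
    };
    std::vector<char> dp(1 << 5, 0);
    dp[0] = 1;
    for (int i = 0; i < (int)s.size(); ++i) {
        std::vector<char> ndp(1 << 5, 0);
        for (int m = 0; m < (1 << 5); ++m) {
            if (!dp[m]) continue;
            for (int c = 0; c <= 1; ++c) {
                if (s[i] != '?' && s[i] - '0' != c) continue;
                int w6 = ((m << 1) | c) & 63, w5 = w6 & 31;
                if (i >= 4 && isPal(w5, 5)) continue;  // 5-window complete
                if (i >= 5 && isPal(w6, 6)) continue;  // 6-window complete
                ndp[w5] = 1;
            }
        }
        dp.swap(ndp);
    }
    for (char ok : dp)
        if (ok) return true;
    return false;
}
```

Windows are only checked once they lie fully inside the string, so the zero-padding in short prefixes is harmless.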
Code (C++)

Problem D — Interesting Integers

Solution I: Digit Dynamic Programming

Problems that ask about an interval $$$[L,R]$$$ can usually be transformed to $$$F(R)-F(L-1)$$$. In this problem, $$$F(x)$$$ means the number of good numbers in the interval $$$[1,x]$$$.

Then we can calculate $$$F(x)$$$ via digit dynamic programming.

  • We need to consider the relationship between digit product and digit sum. When the digit sum is fixed, we only need to record the GCD of the digit product and the digit sum.
  • We can only use numbers no larger than $$$x$$$. Such problems with an upper bound or a lower bound can be solved by using a flag to denote whether the current prefix is equal to the bound.
  • We need an extra flag to denote whether we are still within the leading zeros.

Suppose the upper bound has $$$N$$$ digits, then the digit sum cannot exceed $$$9N$$$. We can therefore enumerate the digit sums. For a fixed digit sum $$$sum$$$, we do a 4-dimensional DP $$$dp[p][s][z][same]$$$, in which:

  • $$$p$$$ denotes the GCD of the digit product and $$$sum$$$
  • $$$s$$$ denotes the current digit sum
  • $$$z$$$ denotes whether the current number is larger than zero
  • $$$same$$$ denotes whether the current prefix is equal to that of the upper bound

The contribution of this DP to the final answer is $$$dp[sum][sum][1][0]+dp[sum][sum][1][1]$$$.

The details of the dynamic programming can be found in the code below.


  • Time complexity is $$$\mathcal{O}(S^{5/2}\cdot ND)$$$, where $$$S\le9\times12=108$$$, $$$N$$$ is the length of the upper bound and $$$D=10$$$
  • Space complexity is $$$\mathcal{O}(S^2)$$$
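The DP itself is intricate, so a brute-force $$$F(x)$$$ is useful for validating it on small ranges. This is my own reference code, not the contest solution; a number is "interesting" when its digit product is divisible by its digit sum.

```cpp
// Brute-force reference for small ranges; the digit DP replaces this.
bool interesting(long long x) {
    long long p = 1, s = 0;
    for (; x > 0; x /= 10) { p *= x % 10; s += x % 10; }
    return p % s == 0;
}

long long F(long long x) {  // count of interesting numbers in [1, x]
    long long cnt = 0;
    for (long long i = 1; i <= x; ++i) cnt += interesting(i);
    return cnt;
}

long long countInRange(long long l, long long r) { return F(r) - F(l - 1); }
```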
Code (C++)


By lucifer1004, history, 21 month(s) ago, In English

Google Kick Start 2021 Round B Tutorial

中文版 | Chinese version

Here are my solutions to Google Kick Start 2021 Round B. Some of them (C & D) are not optimal, although they passed all the test cases.

Problem A — Increasing Substring

We simply scan the string and try to append the current letter after the previous one: the increasing run extends when the current letter is exactly one greater, and restarts otherwise.


  • Time complexity is $$$\mathcal{O}(|S|)$$$.
  • Space complexity is $$$\mathcal{O}(|S|)$$$.
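A minimal sketch of the scan (the problem asks for the run length at every position; the function name is mine):

```cpp
#include <string>
#include <vector>

// len[i]: length of the increasing substring ending at position i.
std::vector<int> increasingSubstring(const std::string& s) {
    std::vector<int> len(s.size(), 1);
    for (size_t i = 1; i < s.size(); ++i)
        if (s[i] == s[i - 1] + 1) len[i] = len[i - 1] + 1;
    return len;
}
```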
Code (C++)

Problem B — Longest Progression

For $$$N\leq3$$$, it is obvious that the answer is $$$N$$$.

For $$$N\geq4$$$, we can first calculate $$$l[i]$$$, which denotes the length of the longest arithmetic array from left till $$$i$$$ without making any changes, and $$$r[i]$$$, which denotes the length of the longest arithmetic array from right till $$$i$$$ without making changes.

Then we check each position $$$i$$$ to see if we could get a longer length by changing $$$a[i]$$$ so that we can:

  • combine $$$l[i-1]$$$ and $$$r[i+1]$$$
  • combine $$$l[i-1]$$$ and $$$a[i+1]$$$
  • combine $$$a[i-1]$$$ and $$$r[i+1]$$$

And the complexity:

  • Time complexity is $$$\mathcal{O}(N)$$$.
  • Space complexity is $$$\mathcal{O}(N)$$$.
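A hedged, 0-indexed sketch of the $$$l[]/r[]$$$ combination idea (names and guard conditions are mine, not from the original code):

```cpp
#include <vector>
#include <algorithm>

// l[i]/r[i]: length of the arithmetic run ending/starting at i.
int longestProgression(const std::vector<long long>& a) {
    int n = a.size();
    if (n <= 3) return n;
    std::vector<int> l(n, 1), r(n, 1);
    l[1] = 2;
    for (int i = 2; i < n; ++i)
        l[i] = (a[i] - a[i-1] == a[i-1] - a[i-2]) ? l[i-1] + 1 : 2;
    r[n-2] = 2;
    for (int i = n - 3; i >= 0; --i)
        r[i] = (a[i+1] - a[i] == a[i+2] - a[i+1]) ? r[i+1] + 1 : 2;
    int ans = std::max(l[n-1], r[0]);
    for (int i = 0; i < n; ++i) {
        // change a[i] to extend an adjacent run by one
        if (i > 0) ans = std::max(ans, l[i-1] + 1);
        if (i + 1 < n) ans = std::max(ans, r[i+1] + 1);
        // change a[i] to glue the runs around it
        if (i >= 2 && i + 1 < n && a[i+1] - a[i-1] == 2 * (a[i-1] - a[i-2]))
            ans = std::max(ans, l[i-1] + 1 +
                ((i + 2 < n && a[i+2] - a[i+1] == a[i-1] - a[i-2]) ? r[i+1] : 1));
        if (i >= 1 && i + 2 < n && a[i+1] - a[i-1] == 2 * (a[i+2] - a[i+1]))
            ans = std::max(ans, r[i+1] + 1 +
                ((i >= 2 && a[i-1] - a[i-2] == a[i+2] - a[i+1]) ? l[i-1] : 1));
    }
    return ans;
}
```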
Code (C++)

Problem C — Consecutive Primes

Solution I: Find the three primes $$$A<B\leq\sqrt{S}<C$$$ closest to $$$\sqrt{S}$$$

The official solution leverages the concept of the prime gap and is a very beautiful brute-force solution.

I would not repeat the prime gap part, but will instead talk about the primality test. In this problem, a naive $$$\sqrt{N}$$$ primality test is enough to pass, but we could do better with Miller-Rabin, which runs in $$$\mathcal{O}(K\log^3N)$$$ time.

Since $$$S\leq10^{18}$$$, the largest number we would need to check will be around $$$10^9$$$, in this case, $$$[2,7,61]$$$ would be enough to ensure the correctness of the primality test.

Code (C++)

Solution II: Find all primes and binary search

Since the test cases are bundled, we can also pass Test 3 if we use Euler sieve to generate all primes smaller than $$$\sqrt{\text{MAXN}}$$$ optimally, and then use binary search for each query. Note that we need to make $$$\text{MAXN}$$$ a bit larger than $$$10^{18}$$$ so that we will also generate the smallest prime that is larger than $$$10^9$$$.

Note that we need to use bitset instead of bool[] to save space.

And the complexity:

  • Time complexity is $$$\mathcal{O}(\sqrt{\text{MAXN}}+Q\log\frac{\sqrt{\text{MAXN}}}{\ln\sqrt{\text{MAXN}}})$$$.
  • Space complexity is $$$\mathcal{O}(\sqrt{\text{MAXN}})$$$.
Code (C++)

Problem D — Truck Delivery

The official solution uses the segment tree. However, we can also use the blocking technique to solve this problem.

We separate edges into blocks according to their limits $$$L_i$$$, and each block will have an optimal size of $$$\sqrt{N}$$$.

We first do a depth-first search to gather the information we need. That is, for each node, we would like to know the GCD value of all the edges whose limits are within the same block ($$$acc[i][j]$$$ in the code below), along the path from the root (capital city) to the current node. Also, we would like to know the previous edge ($$$last[i][j]$$$ in the code below) that has a limit that falls within a certain block, along the path.

For each query, we first determine the block $$$j$$$ that $$$W_i$$$ belongs to. Then we can safely calculate the GCD value of all the blocks that are smaller than $$$j$$$. For the $$$j$$$-th block, however, we need to check each edge to find out whether $$$L_k\leq W_i$$$, which can be done with the help of the $$$last$$$ array.

And the complexity:

  • Time complexity is $$$\mathcal{O}((N+Q)\sqrt{N})$$$, if the block size is set as $$$\sqrt{N}$$$. (In the code below, I used a constant block size of $$$500$$$ to avoid discretization.) Also, note that all $$$L_i$$$ are distinct, so each block with size $$$\sqrt{N}$$$ can contain at most $$$\sqrt{N}$$$ edges, which ensures the correctness of the time complexity.
  • Space complexity is $$$\mathcal{O}(N\sqrt{N})$$$.
Code (C++)


By lucifer1004, 22 months ago, In English

Google Kick Start 2021 Round A Tutorial

中文版 | Chinese version

Problem A — K-Goodness String

First, we need to calculate the current score $$$K_c$$$. Since in one operation we can increase or decrease the score by $$$1$$$, the answer is $$$|K_c-K|$$$.

  • Time complexity is $$$\mathcal{O}(|S|)$$$.
  • Space complexity is $$$\mathcal{O}(1)$$$.
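The computation of the current score and the answer can be sketched as (names are mine):

```cpp
#include <cstdlib>
#include <string>

// Current score: number of mismatched symmetric pairs.
long long kGoodness(const std::string& s, long long k) {
    long long kc = 0;
    for (size_t i = 0; i < s.size() / 2; ++i)
        if (s[i] != s[s.size() - 1 - i]) ++kc;
    return std::llabs(kc - k);  // one operation changes the score by exactly 1
}
```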
Code (C++)

Problem B — L Shaped Plots

First, we calculate, via simple dynamic programming, the length of the longest run of consecutive $$$1$$$s starting from each cell in each of the four directions, namely $$$W[i][j]$$$, $$$E[i][j]$$$, $$$N[i][j]$$$ and $$$S[i][j]$$$.

Then we enumerate each cell and count the L-shaped plots centering at the cell. There are four types of L-shaped plots considering the orientation:

  • WN
  • WS
  • EN
  • ES

And for each type, we need to consider which side to be the longer side.

  • Time complexity is $$$\mathcal{O}(RC)$$$.
  • Space complexity is $$$\mathcal{O}(RC)$$$.
Code (C++)

Problem C — Rabbit House

Obviously, the highest cell should not be increased. With this insight, we can solve this problem in a Dijkstra-like manner.

We use a max-heap to store the cells to be processed. When we process a cell with height $$$h$$$, we need to ensure that all its neighbors are at least as high as $$$h-1$$$. If the height of any of its neighbors is increased, then we should enqueue that cell with the new height.

In the end, we accumulate the difference between each cell's final height and original height to get the answer.

  • Time complexity is $$$\mathcal{O}(RC\log(RC))$$$.
  • Space complexity is $$$\mathcal{O}(RC\log(RC))$$$.
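A hedged sketch of the Dijkstra-like relaxation with a max-heap (names are mine); it returns the total number of boxes added:

```cpp
#include <queue>
#include <tuple>
#include <vector>

long long rabbitHouse(std::vector<std::vector<long long>> g) {
    int r = g.size(), c = g[0].size();
    std::priority_queue<std::tuple<long long, int, int>> pq;  // max-heap by height
    for (int i = 0; i < r; ++i)
        for (int j = 0; j < c; ++j) pq.emplace(g[i][j], i, j);
    long long added = 0;
    const int dx[] = {0, 0, 1, -1}, dy[] = {1, -1, 0, 0};
    while (!pq.empty()) {
        auto [h, x, y] = pq.top();
        pq.pop();
        if (h < g[x][y]) continue;  // stale entry
        for (int d = 0; d < 4; ++d) {
            int nx = x + dx[d], ny = y + dy[d];
            if (nx < 0 || nx >= r || ny < 0 || ny >= c) continue;
            if (g[nx][ny] < h - 1) {  // neighbor must be at least h - 1
                added += h - 1 - g[nx][ny];
                g[nx][ny] = h - 1;
                pq.emplace(h - 1, nx, ny);
            }
        }
    }
    return added;
}
```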
Code (C++)

Problem D — Checksum

The hardest part of this problem is to recognize that it is actually a Minimum Spanning Tree (actually Maximum Spanning Tree, but they are equivalent) problem.

The first observation is that we can consider the known numbers to be unknown with a cost of $$$0$$$, and solve an equivalent problem in which all numbers are unknown.

The second observation is that we need to check exactly $$$(N-1)^2$$$ cells for an $$$N\times N$$$ matrix:

  • If we check fewer than $$$(N-1)^2$$$, then at least $$$2N$$$ cells are unknown, while we can induce at most $$$2N-1$$$ cells.
  • If we check more than $$$(N-1)^2$$$, then there is at least a row/column whose cells are all checked, which is unnecessary.

Now, instead of determining the $$$(N-1)^2$$$ cells to be checked, we try to determine the $$$2N-1$$$ cells which are induced. We want the sum of their costs to be maximized so that the sum of the checked cells can be minimized.

An insight is that, if we consider the $$$N$$$ rows and $$$N$$$ columns to be nodes, and the cells to be undirected edges (cell $$$(i,j)$$$ is considered to be an edge $$$r_i\leftrightarrow c_j$$$), then there will be exactly $$$2N$$$ nodes, while we need to choose $$$2N-1$$$ edges from all edges such that the sum of their weights is maximized. Very close to MST!

To confirm that it is exactly MST, we need to check whether loops are allowed. Since there are no edges that connect two rows or two columns, a loop must be in the form $$$r_0\leftrightarrow c_0\leftrightarrow r_1\leftrightarrow c_1\cdots\leftrightarrow r_0$$$. If a loop occurs, then all the rows and columns in this loop would have at least two unknown cells, which is unsolvable.

So we need to choose $$$2N-1$$$ edges in a graph with $$$2N$$$ nodes and loops are not allowed. Now we are confirmed that this is an MST.

The MST part is quite trivial, both Kruskal and Prim can work fluently.

  • Time complexity is $$$\mathcal{O}(N^2\log N)$$$ for Kruskal, and $$$\mathcal{O}(N^2)$$$ for Prim (since it is a dense graph).
  • Space complexity is $$$\mathcal{O}(N^2)$$$.
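A hedged Kruskal sketch on the row/column graph (names are mine): the answer is the total cost minus the weight of the maximum spanning tree.

```cpp
#include <algorithm>
#include <numeric>
#include <tuple>
#include <vector>

struct DSU {
    std::vector<int> p;
    DSU(int n) : p(n) { std::iota(p.begin(), p.end(), 0); }
    int find(int x) { return p[x] == x ? x : p[x] = find(p[x]); }
    bool unite(int a, int b) {
        a = find(a), b = find(b);
        if (a == b) return false;
        p[a] = b;
        return true;
    }
};

long long checksum(const std::vector<std::vector<long long>>& cost) {
    int n = cost.size();
    long long total = 0;
    std::vector<std::tuple<long long, int, int>> edges;  // (cost, row, col)
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j) {
            total += cost[i][j];
            edges.emplace_back(cost[i][j], i, n + j);
        }
    std::sort(edges.rbegin(), edges.rend());  // heaviest edges first
    DSU dsu(2 * n);
    long long saved = 0;  // weight of the induced (free) cells
    for (auto& [w, u, v] : edges)
        if (dsu.unite(u, v)) saved += w;
    return total - saved;
}
```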
Code (C++)


By lucifer1004, history, 2 years ago, In English

中文版 | Chinese version

Problem A — Retype

We only have two options, thus

$$$ ans=K-1+\min(N + 1, K - S + N - S + 1) $$$

Time complexity is $$$O(1)$$$.

Code (C++)

Problem B — Boring Numbers

All problems that require counting numbers within a range $$$[L,R]$$$ can be transformed into solving for $$$[0,R]$$$ and $$$[0,L-1]$$$ separately, and taking the difference as the final answer.

Now suppose $$$X$$$ has $$$D$$$ digits and we want to count boring numbers within $$$[0,X]$$$.

First, let's consider all numbers with $$$d<D$$$ digits. For $$$d$$$ digits, we can generate $$$5^d$$$ boring numbers since we have $$$5$$$ options for each position (the most significant digit must be odd, so it cannot be $$$0$$$). So all numbers with $$$d<D$$$ digits make a contribution of $$$\sum_{i<D}5^i$$$.

Then we consider numbers with $$$D$$$ digits and are no larger than $$$X$$$.

Start from the most significant digit, and suppose that we are at the $$$i$$$-th digit now.

  • If $$$X[i]$$$ does not satisfy the parity requirement, we count the digits that are smaller than $$$X[i]$$$ and satisfy the parity (precalculated in $$$b[X[i]]$$$), then add $$$b[X[i]]\cdot5^{D-i}$$$ to the total, since for each of these $$$b[X[i]]$$$ choices the remaining $$$D-i$$$ digits can be chosen arbitrarily. In this case, we can stop right here.
  • Otherwise, we first count the digits that are smaller than $$$X[i]$$$ and satisfy the parity (precalculated in $$$a[X[i]]$$$) and add $$$a[X[i]]\cdot5^{D-i}$$$ to the total. Then we continue with the boring numbers whose first $$$i$$$ digits are exactly the same as $$$X$$$'s. Note that if $$$i=D$$$, we need to add $$$1$$$ to the total, since this means $$$X$$$ itself is a boring number.

Time complexity is $$$O(\log R)$$$ if we exclude the precalculations.
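A brute-force reference for small ranges (my own sanity-check code, not the digit-counting solution): a number is boring when digits at odd positions (1-indexed from the most significant) are odd and digits at even positions are even.

```cpp
#include <string>

long long countBoring(long long l, long long r) {
    long long cnt = 0;
    for (long long x = l; x <= r; ++x) {
        std::string s = std::to_string(x);
        bool ok = true;
        for (size_t i = 0; i < s.size(); ++i)
            // position i (0-based) is odd-indexed (1-based) when i is even
            if (((s[i] - '0') % 2) != (int)(i % 2 == 0)) { ok = false; break; }
        cnt += ok;
    }
    return cnt;
}
```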

Code (C++)

Problem C — Rugby

Apparently, we can solve for $$$x$$$ and $$$y$$$ independently.

First consider $$$y$$$. Since all the people will be in the same row, this becomes a classical problem in which we just need to take the median of $$$Y_i$$$ as the meeting place.

Then we consider $$$x$$$. It is obvious that once we determine the starting point $$$x$$$, the optimal movement is determined. The leftmost person will go to the leftmost cell, and the rest follow.

Thus we can solve this problem via ternary search. In order to prove the correctness, we need to prove that $$$dist(x)$$$ has only one extreme point, which is also its minimum point. (If we consider integer points, there might be two, but the two must be $$$x$$$ and $$$x+1$$$).

Obviously, when $$$x+N-1\leq\min(X_i)$$$, $$$dist(x)$$$ decreases with $$$x$$$. While when $$$x\geq\max(X_i)$$$, $$$dist(x)$$$ increases with $$$x$$$.

We then observe that, when we move the starting point from $$$x$$$ to $$$x+1$$$, there will be $$$k(x)$$$ people who move $$$1$$$ less, and $$$N-k(x)$$$ people who move $$$1$$$ more. So $$$dist(x+1)-dist(x)=N-2\cdot k(x)$$$. As $$$x$$$ moves from $$$-\infty$$$ to $$$\infty$$$, $$$k(x)$$$ decreases from $$$N$$$ to $$$0$$$ and never increases. So $$$dist(x+1)-dist(x)$$$ increases from $$$-N$$$ to $$$N$$$ and never decreases, and $$$dist(x)$$$ takes its extreme value (also its minimum) at the minimal $$$x$$$ that makes $$$dist(x+1)-dist(x)\geq0$$$.

The final time complexity is $$$O(N\log N+N\log MAX)$$$, in which $$$MAX$$$ is our search range.
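A hedged sketch of the ternary search on the starting cell (names and the search bounds are mine):

```cpp
#include <algorithm>
#include <cstdlib>
#include <vector>

// dist(x): total horizontal moves if the sorted people occupy x, x+1, ...
long long distStart(const std::vector<long long>& xs, long long x) {
    long long d = 0;
    for (size_t i = 0; i < xs.size(); ++i)
        d += std::llabs(xs[i] - (x + (long long)i));
    return d;
}

long long minMovesX(std::vector<long long> xs) {
    std::sort(xs.begin(), xs.end());  // leftmost person -> leftmost cell
    long long lo = -2'000'000'000LL, hi = 2'000'000'000LL;
    while (hi - lo > 2) {
        long long m1 = lo + (hi - lo) / 3, m2 = hi - (hi - lo) / 3;
        if (distStart(xs, m1) <= distStart(xs, m2)) hi = m2;
        else lo = m1;
    }
    long long best = distStart(xs, lo);
    for (long long x = lo + 1; x <= hi; ++x)
        best = std::min(best, distStart(xs, x));
    return best;
}
```

Because $$$dist(x)$$$ is convex with possibly a small flat valley, the loop keeps the minimum inside $$$[lo,hi]$$$ and the final few candidates are checked directly.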

Code (C++), Ternary search

We can also do binary search on $$$dist(x+1)-dist(x)$$$, or $$$k(x)$$$, and the solution is very similar.

Code (C++), Binary search

Actually, we can also apply the median method on $$$x$$$. But we need to substitute $$$X_i$$$ with $$$X_i-i$$$ after the first sort, and then do a second sort. Detailed explanation can be seen in the official analysis.

Code (C++), Two-pass sorting for x

Problem D — Friends

If we build a graph with the strings, we will have too many edges.

So instead we can build a graph on the distinct letters (in this problem we have $$$C=26$$$ letters). We will store this graph in an adjacency matrix.

The initial setting is $$$d[i][j]=\infty$$$ and $$$d[i][i]=0$$$. Then we enumerate all $$$N$$$ strings. If two different letters $$$a$$$ and $$$b$$$ both occur in the same string $$$S$$$, we set $$$d[a][b]=d[b][a]=1$$$. The meaning is that, if we have a string $$$S'$$$ containing $$$a$$$ and another string $$$S''$$$ containing $$$b$$$, we can build a chain $$$S'\to S\to S''$$$ which has $$$1$$$ intermediate string.

Then we run Floyd-Warshall on this adjacency matrix. Now $$$d[i][j]$$$ is the minimum number of intermediate strings needed to build a chain from $$$i$$$ to $$$j$$$.

For each query, we enumerate on letters in $$$S[X_i]$$$ and $$$S[Y_i]$$$, and the final answer will be

$$$ \min_{p\in S[X_i],q\in S[Y_i]}d[p][q] + 2 $$$

If the answer is $$$\infty$$$, we just output $$$-1$$$.

The total time complexity is $$$O((N+Q)L^2+C^3)$$$, in which $$$C=26$$$ is the size of the alphabet.
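A hedged sketch of the letter-graph construction and query (names are mine; it assumes uppercase strings):

```cpp
#include <algorithm>
#include <string>
#include <vector>

const int INF = 1e9;

std::vector<std::vector<int>> buildDist(const std::vector<std::string>& names) {
    std::vector<std::vector<int>> d(26, std::vector<int>(26, INF));
    for (int i = 0; i < 26; ++i) d[i][i] = 0;
    for (const auto& s : names)        // letters sharing a string: distance 1
        for (char a : s)
            for (char b : s)
                if (a != b) d[a - 'A'][b - 'A'] = 1;
    for (int k = 0; k < 26; ++k)       // Floyd-Warshall on 26 nodes
        for (int i = 0; i < 26; ++i)
            for (int j = 0; j < 26; ++j)
                d[i][j] = std::min(d[i][j], d[i][k] + d[k][j]);
    return d;
}

int query(const std::vector<std::vector<int>>& d,
          const std::string& x, const std::string& y) {
    int best = INF;
    for (char p : x)
        for (char q : y)
            best = std::min(best, d[p - 'A'][q - 'A']);
    return best >= INF ? -1 : best + 2;  // + the two endpoint strings
}
```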

Code (C++)


By lucifer1004, history, 2 years ago, In English


1437A - Marketing Scheme

Code (Python 3)

1437B - Reverse Binary Strings

Code (Python 3)

1437C - Chef Monocarp

Code (Python 3)

1437D - Minimal Height Tree

Code (C++)

1437E - Make It Increasing

Code (C++)

1437F - Emotional Fishermen

Code (C++)

1437G - Death DBMS

Code (C++)


By lucifer1004, history, 2 years ago, In English

中文版 | Chinese version

Problem A — ATM Queue

It takes a person $$$\left\lceil\frac{A_i}{X}\right\rceil$$$ rounds to withdraw $$$A_i$$$. So we can turn $$$A_i$$$ into pairs $$$(\left\lceil\frac{A_i}{X}\right\rceil, i)$$$, then sort the pairs to get the final sequence.

Total time complexity is $$$O(N\log N)$$$ since we need to sort.
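A minimal sketch of the pairing-and-sorting step (my own names; person ids are 1-based), using a stable sort so that ties keep their original order:

```cpp
#include <algorithm>
#include <vector>

std::vector<int> atmQueue(const std::vector<long long>& a, long long x) {
    int n = a.size();
    std::vector<std::pair<long long, int>> rounds(n);
    for (int i = 0; i < n; ++i)
        rounds[i] = {(a[i] + x - 1) / x, i + 1};  // ceil(A_i / X), id
    std::stable_sort(rounds.begin(), rounds.end(),
                     [](const auto& p, const auto& q) { return p.first < q.first; });
    std::vector<int> order(n);
    for (int i = 0; i < n; ++i) order[i] = rounds[i].second;
    return order;
}
```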

Code (C++)

Problem B — Metal Harvest

First we need to sort all the time segments. Since there are no overlaps, we then handle them one by one. During the process, we keep a record of the right endpoint of the last coverage, $$$R$$$.

For each segment, if it can be covered by the last coverage, we do nothing. Otherwise, we will need to cover a length of $$$r_i-\max(l_i,R)$$$, which requires $$$\left\lceil\frac{r_i-\max(l_i,R)}{K}\right\rceil$$$ robots. After adding those robots, we update $$$R$$$ with $$$\max(l_i, R)+K\cdot\left\lceil\frac{r_i-\max(l_i,R)}{K}\right\rceil$$$.

Total time complexity is $$$O(N\log N)$$$ since we need to sort.
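A hedged sketch of the greedy (names are mine); it treats each segment as a half-open interval $$$[l,r)$$$ of time units, which may differ from the original problem's endpoint convention:

```cpp
#include <algorithm>
#include <vector>

long long metalHarvest(std::vector<std::pair<long long, long long>> seg,
                       long long k) {
    std::sort(seg.begin(), seg.end());
    long long robots = 0, R = 0;  // R: right end of the last coverage
    for (auto& [l, r] : seg) {
        long long start = std::max(l, R);
        if (start >= r) continue;            // already fully covered
        long long need = (r - start + k - 1) / k;  // ceil of remaining length
        robots += need;
        R = start + need * k;
    }
    return robots;
}
```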

Code (C++)

Problem C — Painters' Duel

The museum can be easily represented by a graph with $$$S^2$$$ nodes, and each node has at most $$$3$$$ edges.

The hardest point is how to represent the current state of the museum. Since there are at most $$$6^2=36$$$ rooms, a $$$64$$$-bit integer is just enough. We can use the last $$$40$$$ bits for rooms, $$$[41,50]$$$ for person $$$B$$$ (who moves second), and $$$[51,60]$$$ for person $$$A$$$ (who moves first). A room bit is set if it is under construction or it has been painted in previous steps.

Then we can do DFS.

There are in total three cases for each step:

  • $$$A$$$ can move to a room. From all choices, we need to choose the one that minimizes the points (since the role of $$$A$$$ and $$$B$$$ are swapped).
  • $$$A$$$ cannot move while $$$B$$$ can move. We just swap $$$A$$$ and $$$B$$$ and continue.
  • Neither $$$A$$$ nor $$$B$$$ can move. This is a boundary condition. The value is $$$0$$$.

Of course, we will use memoization to avoid duplicate calculation.

The theoretical time complexity is $$$O(2^{S^2})$$$ which is huge, but as in many other DFS problems, many conditions are automatically pruned, and this solution is enough to pass all test cases.

Code (C++)

Problem D — Yeetzhee

An intuition is that we should not reroll dices unless current situation is already invalid.

Similar to Problem C, the hardest point is how to represent a state, and how to validate it.

Here, prefix sum can be a good choice. We use $$$S[i]$$$ to represent the number of colors (I change "numbers" to "colors" just for narrative convenience) that have occured no more than $$$i$$$ times.

For example, if the target state is $$$[1,2,2,3]$$$ (which implies that $$$N=8$$$) and $$$M=6$$$, the target prefix sum can be represented as $$$[2,3,5,6]$$$ (the array is truncated at $$$i=3$$$ since the largest frequency is $$$3$$$). And the original prefix sum is $$$[6,6,6,6]$$$. For any valid state, all elements in the prefix sum array should be larger than or equal to the corresponding element in the target prefix sum array, because numbers in the prefix sum array can only decrease.

During each transition, we enumerate all current states. For each state $$$(S, p, e)$$$ where $$$S$$$ is the prefix sum, $$$p$$$ is its probability and $$$e$$$ is the expected value of rolling times, we first find all valid updates. An update $$$(i_k, c_k)$$$ is valid, only if there is at least one color occuring $$$i_k$$$ times (which means $$$c_k=S[i_k]-S[i_k-1]>0$$$) and $$$S[i_k]>target[i_k]$$$. We can then count the number of good colors $$$g$$$. We are expected to roll $$$\frac{M}{g}$$$ times to get a good color. This update is stored as $$$(S',p\cdot\frac{c_k}{g},e+\frac{M}{g})$$$.

We then apply all updates. For a new state $$$S'$$$, its probability is the sum of all updates targeted at it, while its expected value is the weighted sum of all updates targeted at it.

The time complexity is hard to estimate. But since the partition number of $$$50$$$ is $$$\sim10^6$$$, there will not be too many states at each step. So this solution can pass all the test cases.

Code (C++)


By lucifer1004, history, 2 years ago, In English

There is also a Chinese version.

1397A - Juggling Letters

Just count the occurrences of every letter, and check whether every count is divisible by $$$N$$$.
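The check fits in a few lines of C++ (the blog's solution is in Python 3; this sketch with my own names assumes lowercase strings):

```cpp
#include <string>
#include <vector>

bool canJuggle(const std::vector<std::string>& strs) {
    int n = strs.size();
    std::vector<long long> cnt(26, 0);
    for (const auto& s : strs)
        for (char c : s) ++cnt[c - 'a'];
    for (long long c : cnt)
        if (c % n != 0) return false;  // cannot be split evenly
    return true;
}
```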

Code (Python 3)

1397B - Power Sequence

First sort the array in ascending order, then enumerate $$$c$$$. We can constrain the range of $$$c$$$ by $$$c^{n-1}\leq2\sum a_i$$$; otherwise, we could simply change all $$$a_i$$$ to $$$1$$$, which costs less.

Code (Python 3)

1396A - Multiples of Length

We can make use of the fact that $$$N-1$$$ and $$$N$$$ are coprime. First we operate on $$$[1,N-1]$$$ whose length is $$$N-1$$$, then we operate on $$$[2,N]$$$ whose length is also $$$N-1$$$. After these two operations, we can make all numbers divisible by $$$N$$$. Finally, we operate on $$$[1,N]$$$ whose length is $$$N$$$, and we can just add $$$-a_i$$$ to each $$$a_i$$$ ($$$a_i$$$ may have changed after the previous operations).

Note that $$$N=1$$$ is an edge case.

Code (C++)

1396B - Stoned Game

If $$$\max a_i>\sum a_i - \max a_i$$$, then T can always win. He can just take stones from the largest pile, while HL can only take from the other piles. Since $$$\max a_i>\sum a_i - \max a_i$$$, there is definitely a moment when HL has no stone to take after T takes a stone.

Otherwise, $$$\max a_i\leq\sum a_i - \max a_i$$$, then both players can avoid $$$\max a_i>\sum a_i - \max a_i$$$ with the following strategy:

  • If the maximum pile is currently available, just take from it
  • Otherwise, it means that the maximum pile has just been taken, so we have $$$\max a_i\leq\sum a_i - \max a_i - 1$$$. No matter which pile we take a stone from, we will have $$$\max a_i\leq\sum a_i - \max a_i$$$ afterwards.

So the winner depends on the parity of stones. If the total number of stones is odd, then T wins, otherwise HL wins.
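The whole case analysis above collapses into a few lines (a sketch with my own names):

```cpp
#include <algorithm>
#include <numeric>
#include <string>
#include <vector>

std::string stonedGame(const std::vector<long long>& a) {
    long long mx = *std::max_element(a.begin(), a.end());
    long long sum = std::accumulate(a.begin(), a.end(), 0LL);
    if (mx > sum - mx) return "T";        // T drains the dominant pile
    return sum % 2 == 1 ? "T" : "HL";     // otherwise parity decides
}
```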

Code (C++)

1396C - Monster Invaders

First, let's consider one level. We have two ways to clear all monsters:

  • Kill the boss with one shot. We use the pistol to kill all normal monsters, then use the AWP to kill the boss. The total time cost is $$$p[i]=a[i]\cdot r_1+r_3$$$.
  • Kill the boss with two shots. We can use the pistol to kill all normal monsters, and then attack the boss once. Or we can use the laser gun to kill all normal monsters while damaging the boss at the same time. Then we are forced to leave. The next time we reach this level, we use the pistol again to kill the boss. If we only consider the time spent on reloading, the total time cost is $$$q[i]=\min((a[i]+1)\cdot r_1, r_2)+r_1$$$.

Now we can consider the general route. Let's reflect on where we shall end.

image for 1396C

Supposing we end at $$$3$$$, our best choice should be like the figure above. That is to say, we first come to $$$3$$$, then we go to the other end and go back to $$$3$$$. Otherwise we will spend extra time on teleportation.

So we can divide the total time cost into three parts. The time cost of $$$[1,2]$$$, the time cost of $$$[3,8]$$$ and the time cost of moving from $$$2$$$ to $$$3$$$ which is $$$d$$$.

Let $$$L[i]$$$ be the minimal time to clear $$$[1,i]$$$ and stop at level $$$i$$$, $$$R[i]$$$ be the minimal time to clear $$$[i,N]$$$ and stop at level $$$i$$$, our final answer would be

$$$ \min_{i=1}^NL[i]+R[i+1]+d $$$

In order to avoid edge cases, we can let $$$L[0]=R[N+1]=-d$$$.

Then we need to solve $$$L[i]$$$ and $$$R[i]$$$.

$$$R[n]$$$ is special because there are no further levels. We can choose to kill all monsters in $$$p[n]$$$, or use the teleporter twice and spend $$$q[n]+2d$$$. For the remaining $$$R[i]$$$, since we need to go from $$$i$$$ to $$$i+1$$$ and then come back to $$$i$$$, we must have passed level $$$i$$$ twice. So the total time cost is

$$$ R[i]=R[i+1]+2d+\min(p[i],q[i]) $$$

Then we will consider $$$L[i]$$$. For $$$L[i]$$$, we have two strategies.

  1. Not going back. We go from level $$$i-1$$$ to level $$$i$$$ then kill all monsters, using $$$L[i-1]+d+p[i]$$$.
  2. Going back. We take the route $$$i-2\rightarrow i-1\rightarrow i\rightarrow i-1\rightarrow i$$$. Since we have passed level $$$i-1$$$ and $$$i$$$ twice, we can use both $$$p[i-1]$$$ and $$$q[i-1]$$$, and $$$p[i]$$$ and $$$q[i]$$$.

Comparing the two strategies, we will have

$$$ L[i]=\min(L[i-1]+d+p[i],L[i-2]+4d+\min(p[i-1],q[i-1])+\min(p[i],q[i])) $$$
Why don't we consider going back further?

We can then use the formula derived earlier to calculate the final answer.
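The recurrences above transcribe fairly directly into code (a sketch with my own names; levels are 1-based internally, `a` holds the normal-monster counts):

```cpp
#include <algorithm>
#include <climits>
#include <vector>

long long monsterInvaders(int n, long long r1, long long r2, long long r3,
                          long long d, const std::vector<long long>& a) {
    std::vector<long long> p(n + 2), q(n + 2), L(n + 2), R(n + 2);
    for (int i = 1; i <= n; ++i) {
        p[i] = a[i - 1] * r1 + r3;                      // one-shot boss kill
        q[i] = std::min((a[i - 1] + 1) * r1, r2) + r1;  // boss killed over two visits
    }
    L[0] = -d;
    for (int i = 1; i <= n; ++i) {
        L[i] = L[i - 1] + d + p[i];                     // no going back
        if (i >= 2)                                     // i-2 -> i-1 -> i -> i-1 -> i
            L[i] = std::min(L[i], L[i - 2] + 4 * d +
                   std::min(p[i - 1], q[i - 1]) + std::min(p[i], q[i]));
    }
    R[n + 1] = -d;
    R[n] = std::min(p[n], q[n] + 2 * d);
    for (int i = n - 1; i >= 1; --i)
        R[i] = R[i + 1] + 2 * d + std::min(p[i], q[i]);
    long long ans = LLONG_MAX;
    for (int i = 1; i <= n; ++i)
        ans = std::min(ans, L[i] + R[i + 1] + d);       // end the route at level i
    return ans;
}
```

This sketch reproduces both sample answers of the original problem.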

Code (C++)

1396D - Rainbow Rectangles

Not yet.

1396E - Distance Matching

First, we want to find when the problem has no answer.

Considering $$$dist(u,v)=dep(u)+dep(v)-2\cdot dep(LCA(u,v))$$$, we can conclude that all possible values have the same parity, since they share the $$$\sum dep(i)$$$ part, while the second part is always even.

Now let's turn to the perspective of edges. We randomly choose a root, then for each non-root node, we consider the edge from its parent to it. We define $$$sz[i]$$$ to be the size of the subtree rooted at $$$i$$$. Then the edge can be counted at most $$$\min(sz[i],n-sz[i])$$$ times (we try to make as many pairs as we can from nodes in the subtree to the outer nodes). Meanwhile, it will be counted at least $$$sz[i] \% 2$$$ times (we try to make as many pairs as we can in its subtree). For this part, see also 1280C - Jeremy Bearimy.

So we have found the lower bound $$$LB=\sum_{i\neq root} sz[i]\%2$$$ and the upper bound $$$UB=\sum_{i\neq root} \min(sz[i],n-sz[i])$$$. And we know $$$k$$$'s parity must conform to that of $$$LB$$$ and $$$UB$$$. If $$$k<LB$$$, $$$k>UB$$$ or $$$k$$$'s parity differs from $$$LB$$$'s, the problem has no answer.

Can we definitely find a valid answer if $$$LB\leq k\leq UB$$$ and $$$k-LB$$$ is even?

Yes, we can, based on the strategy below.

We will consider how many nodes in each subtree (except the root tree) are cross matched (matched with an external node). The upper limit for each subtree has been calculated in the last step, which equals $$$w[i]=\min(sz[i],n-sz[i])$$$. We use binary search to find the minimal global upper bound $$$limit$$$ which will be applied to all subtrees, so that

$$$crossmatch[i]=\min(w[i], limit+(w[i]-limit)\%2)$$$ and $$$\sum_{i\neq root}crossmatch[i]\geq k$$$

both hold. Note that the $$$(w[i]-limit)\%2$$$ part is added so that the parity can be kept.

Since we need $$$\sum_{i\neq root}crossmatch[i]=k$$$, further modification is required. Here we only modify those nodes with $$$crossmatch[i]=limit+1$$$ to $$$limit-1$$$, until the updated sum meets the requirement. Since $$$limit$$$ is the minimal possible value to make $$$\sum_{i\neq root}crossmatch[i]\geq k$$$, it is ensured that we can meet the requirement within finitely many steps; otherwise $$$limit-1$$$ would also make $$$\sum_{i\neq root}crossmatch[i]\geq k$$$, which contradicts the premise.

Now that we have constructed all $$$crossmatch[i]$$$, how can we ensure that a valid matching can be made based on it? Let's consider an invalid configuration instead. In an invalid configuration, there must be a node $$$u$$$ where $$$crossmatch[u]>\sum_\text{v is child of u}crossmatch[v]+1$$$, but this case has been excluded via the global upper bound.

Since we already have a valid configuration, we now need to implement it.

We will use another DFS, and handle the deepest subtrees first, because a cross match for a deeper subtree will finally become an internal match at a higher level. For each subtree, we deal with its internal matches. Note that all sub-subtrees of current subtree will have no more internal matches, since that has been handled earlier, so we must match a node in one sub-subtree with a node in another sub-subtree. To achieve that, we will use a set to store the current number of unmatched nodes in every sub-subtree (the root of the current subtree is also considered a sub-subtree with a single element). Every time, we choose one node from the largest group, and one node from the second largest group, then make them a pair. We repeat this until the number of internally matched nodes has met our configuration. Since all remaining unmatched nodes will be handled outside the current subtree, they will be cross matched just as we have expected.

We still have a few finishing touches. We need to collect all unmatched nodes in the sub-subtrees and put them into the $$$tomatch$$$ list of the current subtree. There is a trick in this step: we need to always merge a shorter vector into a longer one, in order to save time.

This solution is from jiangly.

Code (C++, based on jiangly's solution)


By lucifer1004, history, 2 years ago, In English

In the original problem, the heights are strictly increasing, as a result, there can be at most one equal pair, leading to a simple solution.

However, what if the heights are non-decreasing, instead of strictly increasing? In this case, there can be more than one equal pair in the final heights. For example, $$$[1,1,2,2,3,3]$$$ will remain the same, in which there are three equal pairs. The fact that there can be more than one equal pair makes it impossible for us to solve the problem based only on the sum and length of the original heights. For example, $$$[4,4,5,5]$$$ and $$$[3,4,5,6]$$$ are both valid heights.

How can we solve this modification?

In the Official Tutorial, it has been proved that the sequence of operations does not matter. So we can assume that when we process the $$$k$$$-th element, the first $$$k-1$$$ elements are already a valid sequence (but might not be in their final form).

Let's consider the difference of a valid sequence. It can only have $$$0$$$ and $$$1$$$. For example, $$$[2,3,3,4]$$$ has difference $$$[1,0,1]$$$. How will the difference change when future slides happen?

We can do some experiments. Supposing there is an infinite hill at the end, we will have:

$$$[2,3,3,4]$$$ ==> $$$[2,3,4,4]$$$ ==> $$$[2,3,4,5]$$$ ==> $$$[3,3,4,5]$$$

And the difference array changes like this:

$$$[,1,0,1]$$$ ==> $$$[,1,1,0]$$$ ==> $$$[,1,1,1]$$$ ==> $$$[,0,1,1]$$$

We can see that the position of $$$0$$$ in the difference array keeps moving backward until it vanishes at the end. Then it will occur again at the front.

In the following, we will call a $$$0$$$ in the difference array a "slot". In the above example, there is only one slot, what about multiple slots?

Let's see another example.

$$$[2,2,3,3,4]$$$ ==> $$$[2,2,3,4,4]$$$ ==> $$$[2,2,3,4,5]$$$ ==> $$$[2,3,3,4,5]$$$ ==> $$$[2,3,4,4,5]$$$ ==> $$$[2,3,4,5,5]$$$ ==> $$$[2,3,4,5,6]$$$

Where the difference array goes like:

$$$[,0,1,0,1]$$$ ==> $$$[,0,1,1,0]$$$ ==> $$$[,0,1,1,1]$$$ ==> $$$[,1,0,1,1]$$$ ==> $$$[,1,1,0,1]$$$ ==> $$$[,1,1,1,0]$$$ ==> $$$[,1,1,1,1]$$$

We can find that the last slot goes to the end and vanishes. Then the first slot goes to the end and vanishes.

How many steps does a slot need to vanish? We can see that it is related to its distance to the right border since the slot moves one position backward in every step.

Now we can come to an intuition that we can maintain the positions of current slots. When we add a new element to the right, we first use the rightmost slot, then the one to its left, and so on. Also, to restore the final heights, we need to keep track of the current height of the first element.

After all elements have been processed, we can first restore the difference array using remaining slots, then restore the final heights with the height of the first element, and the difference array.

More details can be seen in the code.

What's the complexity of this solution? Since a new element can create at most one slot, we can conclude that the total time complexity is still $$$O(n)$$$.



By lucifer1004, history, 3 years ago, In English


Problem A — Record Breaker

Implementation. Keep a record of the current maximum. Compare the current value to it, and also to the next number.

Time complexity is $$$O(N)$$$.
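A minimal sketch of the check (names are mine; it assumes non-negative visitor counts):

```cpp
#include <algorithm>
#include <vector>

// A day is record breaking when it beats all previous days strictly and
// also strictly beats the following day (or is the last day).
int recordBreakers(const std::vector<int>& v) {
    int n = v.size(), best = -1, cnt = 0;
    for (int i = 0; i < n; ++i) {
        if (v[i] > best && (i + 1 == n || v[i] > v[i + 1])) ++cnt;
        best = std::max(best, v[i]);
    }
    return cnt;
}
```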


Problem B — Alien Piano

Simple DP. Enumerate all choices of the last step and the current step.

Time complexity is $$$O(P^2K)$$$, where $$$P=4$$$ in this problem.
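The DP can be sketched as follows (a hedged version with my own names; `notes` holds the song's pitches and a "break" is a violated comparison):

```cpp
#include <algorithm>
#include <vector>

// dp[j]: minimum breaks so far if the current note is assigned key j (0..3).
int alienPiano(const std::vector<int>& notes) {
    std::vector<int> dp(4, 0);
    for (size_t i = 1; i < notes.size(); ++i) {
        std::vector<int> ndp(4, 1 << 30);
        for (int j = 0; j < 4; ++j)        // key for the current note
            for (int k = 0; k < 4; ++k) {  // key for the previous note
                bool ok = (notes[i] > notes[i-1] && j > k) ||
                          (notes[i] < notes[i-1] && j < k) ||
                          (notes[i] == notes[i-1] && j == k);
                ndp[j] = std::min(ndp[j], dp[k] + !ok);
            }
        dp = ndp;
    }
    return *std::min_element(dp.begin(), dp.end());
}
```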


Problem C — Beauty of Tree

Inclusion-exclusion, DFS.

We need to find, for every node, the number of descendants whose distance to it is divisible by $$$A$$$, and similarly for $$$B$$$. This can be done during DFS if we keep a record of the current path.

After having $$$ca[i]$$$ and $$$cb[i]$$$, we can simply calculate how many times the current node will be counted, via inclusion-exclusion:

$$$t[i]=(ca[i]+cb[i])\cdot N-ca[i]\cdot cb[i]$$$

Then we add up all the results and divide it by $$$N^2$$$.

The total time complexity is $$$O(N)$$$.


Problem D — Locked Doors

I used a rather complicated algorithm leveraging monotonic stack, sparse table and binary search.

For each door, calculate the first door to its left having higher difficulty, and the first door to its right having higher difficulty. For simplicity, a door with infinite difficulty is added at either end. This part is done by monotonic stack in $$$O(N)$$$.

Then construct a sparse table for range maximum query in $$$O(N\log N)$$$.

In each query, on the $$$K_i$$$-th day, there will be $$$K_i-1$$$ opened doors. Some are to the left of the $$$S_i$$$-th room, while some are to the right. If there are $$$l$$$ opened doors to the left on the $$$K_i$$$-th day, there needs to be at least $$$f(l)$$$ opened doors to the right, so that the highest difficulty among the $$$f(l)$$$ doors to the right is higher than the highest difficulty among the $$$l$$$ doors to the left. An observation is that $$$l+f(l)$$$ is monotonic. So the suitable $$$l$$$ can be found via binary search.

The last step is to determine whether we should use the leftmost room or the rightmost room. This can be done by comparing the highest difficulty of the doors to the left and those to the right. If the highest difficulty is to the left, then on the $$$K_i$$$-th day we must be in the leftmost room, otherwise rightmost.

The total time complexity is $$$O((N+Q)\log N)$$$.


Heltion's Cartesian tree solution.

