1775A1 - Gardener and the Capybaras (easy version)

**Solution**

1775A2 - Gardener and the Capybaras (hard version)

**Solution**

1775B - Gardener and the Array

**Solution**

**Solution**

**Solution**

**Solution**

**Solution**

Now it is time for the bonus task and author solutions!

**Bonus**

**Bonus answer**

**Code for Problem A (solution for large alphabet)**

**Code for Problem B**

**Code for Task C**

**Code for Task D**

**Code for Task E**

**Code for Task F**

Links for the codes are not working

Will be fixed soon.

Okay! The art in the problems was really great btw.

Big thanks to BaluconisTima for it)

Not working yet

Thank you very much for the interesting problems and the good tutorial!

Thank you! :)

In solution for A1, shouldn't there be just $$$O(n^2)$$$ ways to do the splitting because the third segment is completely defined by the first two? The time complexity of the algorithm would still be $$$O(n^3)$$$ though since string equality checking takes $$$O(n)$$$ time if I'm not mistaken.

Easier solution for A2:

If $$$s[1]=a$$$, then we can split it into $$$a=s[0], b=s[1], c=s[2...n-1]$$$.

If $$$s[1]=b$$$, then we can split it into $$$a=s[0], b=s[1....n-2], c=s[n-1]$$$.
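A minimal sketch of this casework (0-based indexing, $$$n \geq 3$$$, only letters 'a'/'b'; the function name `splitCapybaras` is my own):

```cpp
#include <bits/stdc++.h>

// Sketch of the casework above.
// If s[1] == 'a', the middle part b = "a" is <= both neighbours;
// otherwise b = s[1..n-2] starts with 'b' and is >= the single-character a and c.
std::array<std::string, 3> splitCapybaras(const std::string& s) {
    int n = s.size();
    if (s[1] == 'a')
        return {s.substr(0, 1), s.substr(1, 1), s.substr(2)};
    return {s.substr(0, 1), s.substr(1, n - 2), s.substr(n - 1, 1)};
}
```

Each returned part is non-empty and the three parts concatenate back to $$$s$$$, so the capybara condition holds by the argument above.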

Thanks for this (: I was curious whether there was some cool (really easy to interpret) solution for it or not.

One more

if $$$s[ind] = a$$$ for $$$1 \leq ind \leq n-2$$$, then we can have $$$s_1 = s[0...ind-1], s_2 = s[ind], s_3 = s[ind+1...n-1]$$$

Otherwise, $$$s_1 = s[0], s_2 = s[1...n-2], s_3 = s[n-1]$$$

Very cool bipartite idea, I couldn't find an elegant way to set up the graph for problem D. Also, because it is bipartite, it is easy to print the path, since we know to skip every other element, if I'm not mistaken?

My first bipartite problem solved! Here is my bipartite implementation using red and blue adjacency lists, if anyone is interested: 188940781

Wait, so my bruteforce submission somehow eventually covered all possible cases?

A-F video editorial in Chinese

BiliBili

In problem D, why is the output the distance divided by 2?

Because in the author's solution they draw edges between the numbers of legs and prime factors and traverse the graph that way. But the prime-factor vertices just serve to connect the numbers of legs, and don't truly count towards the distance.
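A sketch of that modelling (the function name `minJumps` and the trial-division factorization are my own choices; vertices 0..n-1 are indices, and each prime gets its own vertex):

```cpp
#include <bits/stdc++.h>

// Bipartite modelling sketch: vertices are array indices plus one vertex per prime,
// with an edge (i, p) whenever p divides a[i]. A BFS distance in this graph counts
// both index->prime and prime->index steps, so the number of jumps is dist / 2.
int minJumps(const std::vector<long long>& a, int s, int t) {
    int n = a.size();
    std::map<long long, int> primeId;          // prime value -> vertex id
    std::vector<std::vector<int>> adj(n);
    auto primeVertex = [&](long long p) {
        auto it = primeId.find(p);
        if (it != primeId.end()) return it->second;
        int id = n + (int)primeId.size();
        primeId[p] = id;
        adj.emplace_back();
        return id;
    };
    for (int i = 0; i < n; i++) {
        long long x = a[i];
        for (long long d = 2; d * d <= x; d++) {
            if (x % d) continue;
            int p = primeVertex(d);
            adj[i].push_back(p);
            adj[p].push_back(i);
            while (x % d == 0) x /= d;
        }
        if (x > 1) {
            int p = primeVertex(x);
            adj[i].push_back(p);
            adj[p].push_back(i);
        }
    }
    std::vector<int> dist(adj.size(), -1);
    std::queue<int> q;
    dist[s] = 0;
    q.push(s);
    while (!q.empty()) {
        int v = q.front(); q.pop();
        for (int u : adj[v])
            if (dist[u] < 0) { dist[u] = dist[v] + 1; q.push(u); }
    }
    return dist[t] < 0 ? -1 : dist[t] / 2;  // halve: prime vertices are only connectors
}
```

For example, with a = {2, 14, 7} the shortest route 0 → 1 → 2 uses two jumps, while the bipartite BFS distance is 4.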

Can you tell me why I am getting an MLE error on [submission:207782995]? Here is the link: https://codeforces.com/contest/1775/submission/207782995

I had this issue too. If it is terminating incorrectly for you, I would reexamine the part of the code that deals with this and find where your infinite loop is occurring.

I didn't notice that the string consisted of only 'a' and 'b' in problem A and solved it for all letters. I just realized it after seeing the editorial :')

Same, it was lucky that this problem had a linear solution for the whole alphabet, or I would have been really screwed.

Actually the original problem was set for the whole alphabet, and then it was made easier to fit for D2A.

Hey, why couldn't the original problem be kept as B or C, with a new Problem A created instead?

Creating problems is not as easy as you think.

Same, I also did it for all letters. I was upsolving, and my friend asked why I was taking so long on an 800-rated problem, but I was solving it for the whole alphabet. Only after solving did I notice it was just 'a' and 'b'.

E is a great problem. A 2000+ problem with a solution which can be implemented by any beginner is pretty hard to propose. I just completely didn't think that such an easy solution could exist, and wrote an O(n*log(n)) solution using a doubly linked list and a priority queue, which I failed to debug before the contest ended.

E is amazing. SSRS could AC problem E in just 2 minutes.

lol I had the same idea, I went through three different implementations and finally settled on using two multisets storing {value, index} and {index, value} pairs and spent like an hour debugging. Felt so stupid after reading other people's solutions post contest.

E is 2000+ only because everyone was stuck on D.

If E was swapped with C, it would have a rating of 1600 max

Did you manage to debug it? I'd love to see an alternate solution.

My submission: 188766704. It's too complicated compared with the intended solution.

Let me share a solution for Problem C. First, `n & x = x` must hold. Second, n decreases from the low bit to the high bit, and the value that makes n smaller is `lowbit << 1`. For example, with n = 10101 and x = 10100: lowbit is 1, n becomes 10100, and m is `10100 + 10`. However, there is an exception: n = 11010, x = 10000. After n becomes 10000, lowbit = 1000, and m = `10000 + 10000` is illegal. 188746063

In problem F we could let $$$x=\lfloor \frac{p}{2} \rfloor$$$, $$$y=\lceil \frac{p}{2} \rceil$$$ at first, and after each step do $$$x = x-1$$$, $$$y = y+1$$$ until $$$xy<n$$$. Also if $$$x \neq y$$$ we need to consider the symmetric situation. Thus the complexity of each query will be $$$O(n^{1/4})$$$.

Also, the official solution just said "you must optimize to $$$O(n)$$$". In fact the final array we get is the 4th convolution power of the array of partition numbers $$$p=[1, 1, 2, 3, 5, 7, 11...]$$$, where $$$p[n] = \sum_{i \neq 0}(-1)^{i+1}p[n-i(3i-1)/2]$$$ and $$$i$$$ iterates over all non-zero integers. We only need its values up to $$$\sqrt{maxn}$$$, so we can calculate $$$p[n]$$$ in $$$O(maxn^{3/4})$$$ and calculate the convolution in $$$O(maxn)$$$.
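For reference, a sketch of computing the partition numbers with that pentagonal-number recurrence (here without a modulus, so only valid while values fit in 64 bits; computing it only up to $$$\sqrt{maxn}$$$ gives the $$$O(maxn^{3/4})$$$ step mentioned above):

```cpp
#include <bits/stdc++.h>

// Partition numbers p[0..N] via Euler's pentagonal number theorem:
// p[n] = sum over k != 0 of (-1)^(k+1) * p[n - k(3k-1)/2].
// Each p[n] touches O(sqrt(n)) earlier terms, so total work is O(N^1.5).
std::vector<long long> partitionNumbers(int N) {
    std::vector<long long> p(N + 1, 0);
    p[0] = 1;
    for (int n = 1; n <= N; n++) {
        for (int k = 1;; k++) {
            long long g1 = 1LL * k * (3 * k - 1) / 2;  // pentagonal number for k
            long long g2 = 1LL * k * (3 * k + 1) / 2;  // pentagonal number for -k
            if (g1 > n && g2 > n) break;
            long long sign = (k % 2 == 1) ? 1 : -1;
            if (g1 <= n) p[n] += sign * p[n - g1];
            if (g2 <= n) p[n] += sign * p[n - g2];
        }
    }
    return p;
}
```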

Interesting to see the official solution for problem D. I solved it in a slightly different way, by making the vertices prime numbers and the edges the indices of the arrays. For each pair of prime factors of $$$a_i$$$, I added edge $$$i$$$. I ran a breadth first search, initializing the queue with all prime factors of $$$a_s$$$, running it until I hit edge $$$t$$$. Did anyone else do something similar?

Yes, I also happened to use the same approach but was getting TLE on test #132. Here is the link to the submission if someone can help.

I don't recommend using maps and sets for your adjacency list because you introduce a log factor into your time complexity unnecessarily. Try using C++ vectors instead.

I was able to make it work by adding another if statement to check if the starting and end node have the same value. Here is the submission I made; you can compare it with the previous one.

I have no idea why adding this condition makes the test pass, although the runtime is still 1.7s, so it just might be what you said and I got lucky.

Thanks

Nice approach. The number of nodes will be on the order of 3e4, and each number can have up to about 7 distinct prime factors, so the total number of edges will be on the order of O(1e7). Multi-source BFS is great here. Sometimes we break a node into edges; here you have modeled it the opposite way.

Problem A2 I somehow solved more easily than in the editorial. For each letter I looked for a substring before it and a substring after it, and checked the inequality conditions. Trying the letters of the string from left to right resulted in TLE on test 2, while trying the letters from right to left resulted in a complete solution.

How to solve C using binary search?

Bitwise AND over a range is non-increasing in the right endpoint. So we just need a function to efficiently calculate the bitwise AND over a range; we can derive it ourselves or look it up: https://www.geeksforgeeks.org/bitwise-and-or-of-a-range/ . Then we binary search over all possible values of m that are >= n, setting r = mid-1 if the result is too low and l = mid+1 if the result is too high. My submission: 188831318
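A sketch of that approach (the helper names `rangeAnd` and `findM` are mine; `rangeAnd` keeps the common binary prefix of the endpoints, which is exactly the AND of the whole range, and I assume inputs below $$$2^{62}$$$):

```cpp
#include <bits/stdc++.h>

// AND of all integers in [l, r]: only the common binary prefix of l and r survives.
long long rangeAnd(long long l, long long r) {
    int shift = 0;
    while (l < r) { l >>= 1; r >>= 1; shift++; }
    return l << shift;
}

// Smallest m >= n with n & (n+1) & ... & m == x, or -1 if none exists.
// rangeAnd(n, m) is non-increasing in m, so binary search the first m
// where it drops to <= x, then verify equality.
long long findM(long long n, long long x) {
    long long lo = n, hi = 1LL << 62;  // AND over [n, 2^62] is 0 <= x
    while (lo < hi) {
        long long mid = lo + (hi - lo) / 2;
        if (rangeAnd(n, mid) <= x) hi = mid;
        else lo = mid + 1;
    }
    return rangeAnd(n, lo) == x ? lo : -1;
}
```

For example, for n = 10 (1010) and x = 8 (1000) the smallest valid m is 12, since 10 & 11 & 12 = 8.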

Observe that the range AND will be either greater than $$$x$$$, equal to $$$x$$$, or smaller than $$$x$$$, and these three conditions will appear in order (interval equal to $$$x$$$ may be empty). So, you can find the smallest value $$$t$$$ where the range AND of $$$[n,t]$$$ is equal to or smaller than $$$x$$$, and do a simple sanity check on whether the range AND is actually equal to $$$x$$$.

UPD: submission (Ruby): 188951229

I do not understand the logic of problem B. According to the solution, if any bit is set in only one number, the answer is "NO"; otherwise "YES". But what if we don't take that number? E.g. [2, 4, 4]; binary representations: 10, 100, 100. The first bit is set in the 2 and unset in the 4s. But the answer should be "YES", because we can take the subsequences [4], [4].

I think there are some problems in the description of question B, so I didn't understand it

I don't understand how the dp works to find the number of ways to choose staircases in the 4 corners in problem F; can anyone explain it more clearly?

Look at my comment. It can be calculated directly from the partition numbers.

Is the final result $$$F[X \cdot Y - n]$$$, where $$$F$$$ is the 4th convolution power of the partitions array?

You need to iterate over all possible $$$(x,y)$$$ and sum up $$$f[x \cdot y-n]$$$.

Thank u so much <3

I can't understand the editorial of F. In the second paragraph, what do the 4 figures that form empty cells in the matrix mean? Where do they come from? And how did the staircase come into the picture?

Imagine that you have a rectangle with width X and height Y. It must have minimal perimeter for N cells, and $$$N \leq X \cdot Y$$$.

If we iterate deleting cells that have an outside 'wall' in this rectangle, we can see that the empty cells form a 'ladder'.

For example, X = 10, Y = 10 and we delete 6 cells (3 in the 1st row, 2 in the 2nd and 1 in the 3rd row):

...#######

..########

.#########

##########

##########

##########

##########

##########

##########

##########

I have a small doubt regarding F's editorial. In the DP transition, what is maxP?

It is a constant denoting the maximum possible perimeter that can occur in the task.

Did I understand that right? Tell me. For task B, the first test: according to the test, the fifth bit is only in the 1st number, the fourth bit is contained only in the 2nd number, and the 3rd bit is only in the third number. Each number contains one unique bit, so the numbers are different and the answer is No. If I'm not reasoning correctly, then could someone explain in detail on this example how the answer turns out? Thanks.

Why does it say "Code for Problem A" but "Code for Task C"?

How does binary search work in problem C? Can someone explain in simple words please?

Let $$$F(x,y)$$$ be $$$x \text{ & } (x+1) \text{ & } ... \text{ & } y$$$.

For fixed $$$x$$$ we see that $$$F(x,y) \geq F(x, w)$$$ if $$$y \le w$$$; in other words, for fixed $$$x$$$, $$$F(x,y)$$$ is a non-increasing function of $$$y$$$.

Let's look at $$$F(n, m) \leq x$$$. If it is true for some value $$$m$$$, it is also true for all $$$y \geq m$$$. So just binary search for the value $$$m$$$.

Thanks, I got it now. Since every bit starting from 0 will get turned off after at most $$$2^i$$$ steps, the overall value gradually decreases, which makes it a non-increasing function.

There is another solution in $$$\mathcal{O}(\log{n})$$$: since we are AND-ing increasing values, sooner or later we will get a zero in every bit position. So let's calculate the number $$$f(i)$$$: $$$f(i)$$$ is the first number such that the $$$i$$$-th bit of $$$x \text{ & } (x+1) \text{ & } ... \text{ & } f(i)$$$ is zero. For example, for $$$x = 5$$$ $$$(101_2)$$$: $$$f(0) = 6$$$, $$$f(1) = 5$$$, $$$f(2) = 8$$$.

So if the $$$i$$$-th bit is $$$1$$$ in the start number but $$$0$$$ in the end number, then we have to choose $$$m \geq f(i)$$$; but if the $$$i$$$-th bit is $$$1$$$ in the start number and still $$$1$$$ in the end number, then $$$m$$$ must be strictly lower than $$$f(i)$$$ (otherwise we would get $$$0$$$ at this bit position!). So we just calculate $$$f(i)$$$ for all $$$i$$$, which can be done in $$$\mathcal{O}(\log{n})$$$, and choose the maximum of the lower bounds.
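A sketch of that per-bit idea (I assume 60-bit inputs; `firstZero` is my name for $$$f(i)$$$, and for a bit that is already $$$0$$$ in $$$x$$$ it just returns $$$x$$$):

```cpp
#include <bits/stdc++.h>

// f(i): the first y >= x such that bit i of x & (x+1) & ... & y is zero.
// If bit i of x is already 0, that's x itself; otherwise it's the next
// number whose i-th bit is 0, i.e. ((x >> i) + 1) << i.
long long firstZero(long long x, int i) {
    if (((x >> i) & 1) == 0) return x;
    return ((x >> i) + 1) << i;
}

// Smallest m >= n with n & ... & m == x, or -1, via per-bit constraints.
long long findM(long long n, long long x) {
    long long lo = n, up = LLONG_MAX;
    for (int i = 0; i < 60; i++) {
        bool bn = (n >> i) & 1, bx = (x >> i) & 1;
        if (!bn && bx) return -1;                           // AND can never gain a bit
        if (bn && !bx) lo = std::max(lo, firstZero(n, i));  // must reach f(i)
        if (bn && bx)  up = std::min(up, firstZero(n, i));  // must stay below f(i)
    }
    return lo < up ? lo : -1;
}
```

For n = 10 (1010) and x = 8 (1000): bit 1 forces $$$m \geq f(1) = 12$$$, bit 3 forces $$$m < f(3) = 16$$$, so the answer is 12.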

Actually this is what my brain was trying to do to solve this problem. Thanks, it's such a wonderful solution!

D is nice though; you have to use a set to minimize the complexity of BFS through the vertices.

Problem E can also be solved with dp. Thanks to callmepandey for explaining this approach to me. For the dp we will maintain 2 states: the types of operations we have done so far, updating accordingly at each index. Sample code.

Hi, can you please tell me the intuition behind this dp solution? I am not getting it. I was unable to solve the problem and now after seeing the editorial I can't unsee the solution provided by the author. I would like to know your approach. Thank you.

Can anyone explain why (max_prefix - min_prefix) works in problem E?

What don't you understand in the editorial for task E?

"If we calculate the array of prefix sums, we'll see that the operations now look like "add 1 on a subsequence" or "take away 1 on a subsequence". Why? If we take the indices $$$i$$$ and $$$j$$$ and apply our operation to them (i.e. $$$a_i=a_i+1$$$ and $$$a_j=a_j-1$$$), it will appear that we added 1 on the segment $$$[i...j-1]$$$ in the prefix sums array." I didn't understand this.

For example:

we have an array $$$5$$$ $$$2$$$ $$$-10$$$ $$$9$$$ $$$0$$$. Pref sums will be $$$5$$$ $$$7$$$ $$$-3$$$ $$$6$$$ $$$6$$$. If we do the operation for subsequence $$${1, 3, 4}$$$, we subtract $$$1$$$ from $$$a_1$$$, add $$$1$$$ to $$$a_3$$$ and subtract $$$1$$$ from $$$a_4$$$. Our array becomes $$$4$$$ $$$2$$$ $$$-9$$$ $$$8$$$ $$$0$$$. Pref sums: $$$4$$$ $$$6$$$ $$$-3$$$ $$$5$$$ $$$5$$$. And we can see that we decreased $$$pf_1$$$, $$$pf_2$$$, $$$pf_4$$$ and $$$pf_5$$$.

As said in the editorial, if we do an operation on $$$a_i$$$ and $$$a_j$$$, we decrease/increase by $$$1$$$ the prefix sums on the segment $$$[i; j)$$$. We can see that in the example above.

Ok, now I understand how the prefix sums increase/decrease. But how does that give the minimum number of steps? Each operation only increases/decreases by 1, not all the way to 0. In other words, what is the relationship between $$$maxPF - minPF$$$ and the minimum number of steps?

After this observation we need to make our prefix sums equal to 0, because we need all $$$a_i=0$$$. And each operation can increase or decrease a subsequence of prefix sums by 1. So we decrease all positive prefix sums until they become 0 and increase all negative ones until they become 0. The count of operations equals $$$maxPF-minPF$$$.

Note that we also have the prefix sum $$$pf_0=0$$$.
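The whole argument fits in a few lines; a sketch (with the implicit $$$pf_0 = 0$$$ included, the answer is just the maximum prefix sum minus the minimum prefix sum):

```cpp
#include <bits/stdc++.h>

// Minimum number of operations for problem E: build prefix sums
// (with pf_0 = 0 included) and take max - min.
long long minOperations(const std::vector<long long>& a) {
    long long pf = 0, mx = 0, mn = 0;  // pf_0 = 0 participates in max/min
    for (long long v : a) {
        pf += v;
        mx = std::max(mx, pf);
        mn = std::min(mn, pf);
    }
    return mx - mn;
}
```

For the array above (5, 2, -10, 9, 0) the prefix sums are 0, 5, 7, -3, 6, 6, so the answer is 7 - (-3) = 10.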

Yeah, got it, thank u <3

Why do local prefix sums have an impact on the whole array?

For problem B: if you remove this line, then everything will work: `--a[i][j];`

Problem F can be solved in $$$O(n^{0.75}+Tn^{0.25})$$$.

We need to calculate $$$(\prod_{i\ge 1}(1-x^i))^{-4}$$$, which can be done in $$$O(n^{0.75})$$$ by differentiating the function.

If the width of the rectangle is $$$\sqrt n-x$$$, we can see that $$$x$$$ is at most $$$O(n^{0.25})$$$: because $$$(\sqrt n+x)(\sqrt n-x)=n-x^2$$$, it is easy to see that $$$x^2\le \sqrt n$$$.

So the problem can be solved in $$$O(n^{0.75}+Tn^{0.25})$$$.

Can u explain it in more detail?

We only need to calculate the first $$$\sqrt n$$$ coefficients of $$$F,G$$$, and there are only $$$O(n^{0.25})$$$ coefficients that are not $$$0$$$ in $$$F$$$. So we can calculate $$$G$$$ in $$$O(n^{0.75})$$$.

Belarusian students don't know how to differentiate functions (or maybe have minimal knowledge of it).

Interesting solution, thanks :)

I used DP for E, since I noticed that the last element $$$a_n$$$ must have exactly $$$a_n$$$ operations applied to it (any more would just be redundant: if $$$a_n > 0$$$, there's no point in adding 1 to it, since we can just choose not to include it in our subsequence).

So then I just apply this idea backwards, keeping track of the operations done so far (how many increases and decreases), and making sure to alternate increases to decreases when necessary. I don't like how messy my solution is though lol.

NEED HELP FOR E! I was going through many solutions to problem E by different people. Most have done it using either DP or two variables (i.e. inc & dec), traversing linearly and solving accordingly. My understanding of the problem: if I have 5 numbers, i.e. 4 -8 -4 7 -3, then it's better to perform 4 "decreasing" operations on the first element, because it is a terminal element, so it's reasonable to make it 0 here. Now I can say I have 4 "decreasing" operations in my pocket which I can distribute further, but the problem is the even/odd criteria. I am not getting how to take care of this at all.

Is there some observation/fact I am missing?

I have an $$$O(max_n + t n^{1/4})$$$ solution for pF.

A "good shape" can be viewed as an $$$h \times w$$$ rectangle with some "staircase" deleted at each corner. Consider calculating the number of "staircases" consisting of $$$x$$$ blocks, which is equivalent to the number of non-decreasing sequences $$$s$$$ whose sum is $$$x$$$ with $$$s_1 > 0$$$; this can be further transformed into an unbounded knapsack problem by seeing every object of weight $$$w$$$ as "adding $$$1$$$ to the last $$$w$$$ elements of $$$s$$$".

And here comes the interesting part: consider an $$$h \times w (h < w)$$$ rectangle. We can't delete all elements in one row/col, otherwise we could achieve a lower perimeter, so the calculation of the "staircase" for each corner is independent! So we calculate the above dp in $$$O(max_n)$$$ ($$$\sqrt{max_n}$$$ different objects weighted from $$$1$$$ to $$$\sqrt{max_n}$$$, max sum of weights $$$= \sqrt{max_n}$$$); let the final result be $$$cnt$$$, where $$$cnt_i$$$ is the number of "staircases" with $$$i$$$ blocks. Then, by independence, the number of combinations of "staircases" for the four corners is just $$$cnt^4$$$, which can be calculated naively in $$$O(max_n)$$$ since the size of $$$cnt$$$ is $$$O(\sqrt{max_n})$$$.

Finally, for a given $$$n$$$, enumerate all possible heights and widths of the rectangle; say it is $$$h \times w$$$. Then the answer for this rectangle is $$$[x^{(h\times w-n)}]cnt^4$$$, and there are $$$O(n^{1/4})$$$ possible $$$(h, w)$$$.

Why? Let's say $$$h = w = x$$$ is a possible square; then you can delete at most $$$x-1$$$ blocks from it, and all other possible combinations of $$$h, w$$$ will be $$$(x + 1, x - 1), (x + 2, x - 2), ..., (x + d, x - d)$$$, whose area is $$$x^2 - d^2$$$. We need $$$d^2$$$ to be less than $$$x-1$$$, otherwise we can't fit $$$n$$$ blocks in the rectangle, and we have $$$d^2 < x \le \sqrt{n}$$$, so the number of possible $$$d$$$ is $$$O(n^{1/4})$$$; the same argument applies for the case $$$h = x, w = x - 1$$$.

So we can do it with $$$O(n)$$$ precompute and $$$O(n^{1/4})$$$ per query.
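The staircase count described above is just the partition-counting knapsack; a small sketch (function names are mine, with a naive truncated convolution for $$$cnt^4$$$, which is fine since the arrays have size $$$O(\sqrt{max_n})$$$):

```cpp
#include <bits/stdc++.h>

// cnt[i] = number of staircases made of i blocks = number of partitions of i,
// computed as an unbounded knapsack over part sizes 1..S.
std::vector<long long> staircaseCounts(int S) {
    std::vector<long long> dp(S + 1, 0);
    dp[0] = 1;                            // the empty staircase
    for (int w = 1; w <= S; w++)
        for (int j = w; j <= S; j++)
            dp[j] += dp[j - w];
    return dp;
}

// Naive polynomial multiplication truncated at degree S.
std::vector<long long> convolve(const std::vector<long long>& a,
                                const std::vector<long long>& b, int S) {
    std::vector<long long> c(S + 1, 0);
    for (int i = 0; i <= S && i < (int)a.size(); i++)
        for (int j = 0; i + j <= S && j < (int)b.size(); j++)
            c[i + j] += a[i] * b[j];
    return c;
}
```

Convolving `cnt` with itself twice gives `cnt^4`, whose $$$k$$$-th coefficient counts the ways to distribute $$$k$$$ deleted blocks over the four independent corners (e.g. 4 ways for one block, 14 for two).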

In problem B, why would using a vector instead of a map for counting the number of occurrences cause TLE?

The number of active bits is limited. Let's say n = 2, with num1 having 1 active bit at position 20 and num2 having an active bit at position 10000. For a map you would need only O(1) space, but for a vector you would need at least a 10001-length vector to count active bits, which will cause TLE.

The complexity of the solution of F can be optimized to $$$O(n+t\times \sqrt{\sqrt n})$$$, just by going through fewer $$$(x,y)$$$ pairs when answering each query.

One specific way can be seen in jiangly's solution. (That's just where I found it, hah :)

Well... somebody has proved it above.

188759886: this submission passed during the contest. But now it gives MLE on test 130, which is one of the hack testcases. Still, even after system testing, this submission appears as accepted in the standings. How???

Also, how can I correct the MLE verdict?

Really great tournament, had a blast! Very interesting questions indeed.

Alternate (and I believe simpler) solution for E: Link

We can just iterate over $$$i$$$, and maintain the minimum number of subsequences ending with a $$$+1$$$ element before the $$$i'th$$$ element, and minimum number of subsequences ending with a $$$-1$$$ element before $$$i$$$. For each $$$i$$$ greedily choose the subsequences that the $$$i'th$$$ element will be part of, and create new subsequences if required, such that $$$a[i]$$$ becomes equal to $$$0$$$.

Intuition: just observe the 1st element; we can easily show that if it is negative, it will only be part of subsequences in which the odd elements are positive, and vice versa if it is positive. Then just use some greedy ideas and induction.

I have no clue what the editorial is trying to say, but this problem should be 1500-rated tbh.

Can anyone tell me why this code is causing time limit exceeded?

I managed to discover that memset() is actually an O(n) operation, i.e. all values are initialized in one pass, which might be the reason for the time limit exceeded error, as there is one loop from 1 to n and an inner loop from 1 to k, where k can be up to 200000.

I just changed the array structure into a basic map and the exact same code gets accepted.

It's not that I am testing out code in a trial-and-error way, but my expectations about the speed and inner workings of memset might be profoundly changed after solving this question tonight.

We can also solve problem D using Dijkstra and the gcd function for filling in edges with their weights. Code. I don't know why my code isn't getting accepted though.