The tutorial for problem G will be added soon, we are translating it.

Update #1: OK it's added now.

Update #2: Chinese editorial with lots of pretty derby anime pictures


Author: dXqwq


Author: SpaceJellyfish

Author: Forever_Pursuit

Author: unputdownable

Author: SpaceJellyfish

Author: antontrygubO_o

Idea: antontrygubO_o. Solution: orzdevinwang


Author: orzdevinwang
Really fast editorial!

Is it just me, or did everyone else feel that this round should be renamed to

~~Pinely Round 1 (Div.1 + Div.1)~~ Pinely Round 1 (Div.0 + Div.1)

Div-1 + Div0 ;D

Why? A, B, C are good and easy if you can make the observations. I feel D and E are hard for my level.

D -> I knew the concepts involved but could not come up with a correct solution during the contest.

E -> I learned a new concept called clique.

B is not easy to prove, or am I too weak...?

I agree that B is relatively difficult compared to C. But I was lucky today: I tried writing down different cases, observed the pattern, and tried my luck by submitting, and luckily it worked. I bet it won't happen all the time. I feel constructive algorithms and greedy sometimes need luck.

I think always, not sometime (

p.s. I did the same thing as you to solve B lol

So you were unlucky during the contest. You may get lucky sometime later when you try to solve the same problem. I have faced this a lot of times.

emmm, I solved B by writing down different cases and observing the pattern, like you, so I was lucky to solve B quickly and become blue. Did you misunderstand me?

Never mind, this is an unnecessary discussion.

I think so (

orz tyr

come on, first 3 problems aren't hard

trust me, those 3 problems were the type of problems where you likely won't get the idea till the end of the contest if you couldn't in the first 5 minutes.

Well yes I did it virtual today, and I was so slow at them but I was tired so I can't really judge XD

it's a good contest, but is it rated?

rated for all

i was able to solve C but not B :LOL

i did not even solve A

Could anyone please explain the solution of D to me?

I do understand the following DP approach:

However, I couldn't figure out how to straighten this access pattern.

Define a value function $$$f(s)$$$ where s is a binary string as $$$3^t$$$, where $$$t$$$ is the number of indexes $$$i$$$ such that $$$s_i=s_{i+1}$$$. Notice that your DP is equivalent to finding the sum of all $$$f(s)$$$ for a binary string $$$s$$$ of length $$$n+1$$$ that includes $$$k$$$ 1s. But now it's easier to use some combinatorial approach to tackle this problem.

Was this your approach while solving the problem during the contest? I have trouble solving combinatorics problems on Codeforces; I didn't know how to proceed after finding the DP.

Yes. I found this approach natural since the DP transitions seem nice and symmetric.

kurisutina!

Did anyone solve problem D by using generating functions?

Like that?

I think he means the combinatorial method of describing a sequence as a formal power series, not creating a function in C++ lol

Indeed. I actually think it's possible to find it, but I couldn't manage to do it in time during the contest.

I feel generating functions approach is a bit difficult and tedious.

i describe my solution with convolutions here, which may be of some use to you in framing this as a GF?

Check this comment

Nice contest. For me C was easier than B: B took me 1 hour and I did not solve it, but C took me 5 minutes.

Can you guys check the solution for q1? I was getting a wrong answer despite it passing the sample test case. And the solution mentioned here is exactly the same as mine.

~~~~~
#include <bits/stdc++.h>
using namespace std;
#define int long long

signed main() {
    int t;
    cin >> t;
    while (t--) {
        int n, a, b;
        cin >> n >> a >> b;
        // ... (rest of the solution omitted)
    }
}
~~~~~

use a==b&&b==n

yeah that's fine, but it's not a syntax error I guess, since my compiler didn't show any

well, in C++, a==b==n means ((a==b)==n), and that is fine for your compiler. But (a==b) is 0 or 1, so comparing it with n will not give you the correct answer.

ohhh okay now i get it, thank you for the input

First, you didn't solve the problem; second, please use Markdown.

I was facing the same problem, that's why I posted my code.

So please don't post your code -- someone already mentioned this problem. All you have to do is search online or wait for others' solutions, like this one.

While your problem has been solved, I want to remind you: please use Markdown when you post your source code, like

```
//your code here...
```

How many tests did you make for G? Problemsetters: Yes

500 lol

imagine WA on test 498

pain

181811355 he did..

Why the hell did I use topological sort in C?

I thought of the same approach, and that made it way too hard to actually solve this problem.

In problem D I thought of an n^2 solution which involved some combinatorics, and figured that if this problem is solvable there must exist a fast way to compute the sum of all terms; I was stuck trying to achieve this for the entire contest.

I want to know how other contestants decide when to stop trying to simplify the combinatorial expression and think of a different approach altogether in these kinds of problems.

The summation I came up with: $$$\sum_{x=1}^{k} \sum_{y=0}^{n-k} {k-1 \choose x-1} {x+y \choose x} {n-k \choose y} {2^{n-(x+y)}}$$$

something i've learned is that the tools we have access to today really are quite powerful, so there's almost always a way to "plow through" using an amalgamation of advanced techniques.

you can solve many combinatorics problems with convolutions and FFT, many tree problems with link-cut trees and heavy-light decomposition, and many data structure problems with your favorite BBST (splay tree, treap, etc.). however, these techniques often produce long (less of a problem on CF than ICPC and OI), gross, and (personally) bug-prone code, so you should try to avoid them if you can. it's really easy for me to spend the entire contest debugging or improving constant factors with these. my old teammate Sharon would always call HLD "Highly Likely Don't-need", because it's very infrequent that authors require it these days (at least on CF and in my ICPC region).

so in this contest, i came up with a similar summation, noticed i'd (probably) need some convolution (i couldn't apply the hockey-stick or Chu–Vandermonde identities to my summation), and saw that as a sign to change up the way i was counting, especially with a 1 s TL and 1e6 input (intended FFT solutions usually have more wiggle room/use the other mod). i often see the "direct" convolution solution as a proof of solution existence/runtime as opposed to a solution itself. from here, a few potential threads to follow are trying to re-derive (part of) the summation in the style of a double-counting proof, trying to count a different way using the framework i've already established, or looking for more observations in the problem or framework.

Thanks! I also have a similar pov regarding stuff like hld and treaps.

My O(n) for D, quite similar to the editorial, did not pass. Perhaps 1000 ms was too strict.

If you use C++20 then you will get Accepted.

you're right.. sad..

There's a huge constant factor because of the mod operations.

Your code also has an extra $$$O(\log(MOD))$$$ factor from repeatedly computing modular inverse and modular exponentiation, you can get rid of those with precomputation.

what is the precomputation that achieves this?

For this problem, you need to precompute factorials, inverses of factorials, and powers of 3. Powers of 3 are easy to precompute in linear time, and factorials are also easy.

To compute inverses of factorials in linear time, note that $$$n! = (n-1)! \cdot n \implies 1/(n-1)! = (1/n!) \cdot n$$$. Thus, we can compute $$$1/\text{MAXN}!$$$ with one modular inverse, and then iterate downwards to compute $$$1/(\text{MAXN}-1)!, \ldots, 1/0!$$$ using only modular multiplication. This means we only have to do a single modular inverse, and our algorithm is linear time!

As a bonus, note that we can now easily evaluate $$$1/n = (n-1)! \cdot (1/n!)$$$. This means that we can do batch modular inversion of $$$1, \ldots, n$$$ in linear time. In fact, this generalizes to any set of values: we just take prefix products, invert the final one, and go backwards to compute inverses of the prefix products.

thank you very much for the great explanation!

B was really hard :(

thanks for fast editorial

When will the solution for G be published?

Did anyone try solving Problem B using Doubly Linked-List?

I don't know why my solution fails for pretest 3.

Can anyone please help?

My solution

Lightning Fast Editorial.

The pain of missing out on a problem by 2 mins in 2 consecutive rounds

Can someone explain proof of B? Didn't understand from editorial.

It's that if you have only 2 elements repeating again and again, then you can perform only (n/2)+1 operations, because after you delete any one element, another gets deleted automatically by the mentioned property. At the end only two elements remain, which you need to delete individually; hence the +1 in the equation. For the rest of the cases, where there are more than 2 distinct elements in the array, you can simply wipe out the whole left side and then the right side individually, starting from a unique element, without making the array use its special property. Try doing it for the given array: 1 2 1 2 1 2 3 2 1 2 1 2.

You can easily understand by solving above array. Even if you still have doubt feel free to ask.

This is not a complete proof. For "the rest of the cases" you didn't justify why there is always a "unique element". You didn't define what it means to "wipe out left and then right". This is a non-trivial problem and I don't think we can understand it "simply" or "easily" in any sense. And solving one example certainly does not mean it is always correct.

I think it's like this:

Given a sequence a where there are at least three types, there is either a unique type in a, or every type has at least two elements in a.

If every type has at least two elements and there are at least three types, then at least one of the indices will have neighbors of two different types. To show this, suppose not; that is, for every (i, j) where a[i] = a[j], the neighbors of a[i] are the same. Then you can show that every even index of a has the same type and every odd index of a has the same type. Thus, there are only two types, and a contradiction is reached.

If there is an element that is unique, then you can keep removing the neighboring elements of the unique element, because that element won't meet another element of the same type, so it won't be deleted.

If there is not, then we remove an element in a that has two different neighbors, and repeat until there is a unique element.

Thanks a lot!! Nice proof.

Can anyone tell me how to master combinatorics and probability problems? Any suggested blogs or tutorials? What rating of problems should I solve? I have tried up to 1500 but still can't get it.

I don't understand B's proof. In particular, why is the following claim true?

"When the length of the sequence is greater than 3, there will always be a pair of positions (i, j), such that a[i] = a[j] and a[i] has two different neighboring elements."

(I'm assuming that this is for the case "there are > 2 types of elements".)

Can someone explain it?

UPDATE: now I understand.

How I thought of it: the minimum answer is achieved when the elements are like ababab = (no. of a's)/2 + 1 = (length)/2 + 1. [Remove the second b, which also erases one of the two adjacent a's, leaving abab; again remove the second b, and so on. At last ab remains, which takes two operations to remove.]

If there is any other sequence, then the answer is always n, such as "abcab" (remove all elements before c and all elements after c).

But still, I don't see why in the case with >= 3 types of elements you can always achieve n operations. In your example there is a unique "c" but this is not always true (consider 123213123).

In 123123123 operation performed :

-> remove two ones ->2312323 ->212323 ->12323 ->1323 ->123 ->12 ->13

c can be any element that differs from the two elements before it. Then it is always possible to remove all elements before it without a collision. If there is such a c near the end of the sequence, remove that one first before emptying the first part.

I know in this "particular" case we can do it, but what I'm looking for is a proof showing that we can always do it. In particular, what do you mean by "applying the operation backward"? Imagine "...aca...bcb...aca...": how do you guarantee that you can always eliminate the third element ("c" in this case) first? And how do you define which one is the "third" element? It seems that you are still assuming there is only one "c", but again this is not always true.

My intuition:

If there are at least 3 different numbers, there is a segment '......xyz.....' where x, y and z are all different.

That must be true, otherwise, our ring would look like xyxyxyxyxyxyxy...

After getting ....xyz..., if y appears only once in the entire ring, we can keep removing elements to the right of y until we remove all elements.

If y appears at least 2 times, we remove y and continue our algorithm, as there are at least 3 unique numbers left.

Thank you, you are great!

REMOVED.

OK, I think I finally understand it by myself: suppose there is no `i` such that `a[i]` has different neighbors; then every position's two neighbors are the same. This means if `a[1] = a` and `a[2] = b`, then `a[3]` has to be `a`, `a[4]` has to be `b`, ..., and finally we can see this is just the case of having only two types of elements.

Here's the way I thought of it. Suppose there are 3 consecutive positions with 3 distinct values: ABC. Then I can always delete B, then go to AC? where ? is not equal to C, and then keep going -> n. Otherwise there are only 2 values in the entire thing, which yields (n/2)+1.

As long as a third element exists, it has a first occurrence going from the left. So if you go from the left and find the first "third" element, the two elements to its left must be different, so the element immediately to the left of it can always be deleted.

My first ever contest on CF, very good learning experience, looking forward to the Div. 4 contest tomorrow.

If the constraints for problem B were 1 <= n <= 100000 instead of 1 <= n <= 100, I would have solved B a lot quicker XD

In the editorial of D, what does c[i] mean?

From what I understood, c[i] should not be this: "Notice that c[i]=a[i]∨b[i]∨c[i−1]". It should be (a[i]∧b[i])∨((a[i]∨b[i])∧c[i−1]).

There is definitely a mistake there because the formula isn't true if we use the values of the next line (c[i]=0, c[i-1]=0, and (a[i], b[i]) = (0, 1) makes the formula 0 = 0 v 1 v 0). Hopefully it will get fixed soon ^^

Time limit for D was so tight. Is there a reason for such tight bounds?

My solution passed after precomputing powers of 3 instead of using fast exponentiation. I think it's not fair: the problem was fairly hard, and such a time-expensive test case should have been in the pretests.

orz

Let me expand B's proof: why for >= 3 types of elements can we always do n operations?

(1) If there is a type of element occurring only once, say it is `x`, then we repeatedly remove the element before `x` and finally remove `x` itself. The entire process has no auto-erasing, since `x` is different from anything before it.

(2) If every type of element occurs more than once, then the length of the ring is at least 6. Also, there must exist a position `j` such that `a[j - 1] != a[j + 1]` (otherwise, for all `j`, `a[j - 1] = a[j + 1]`; then if `a[1] = x` and `a[2] = y`, then `a[3]` has to be `x`, `a[4]` has to be `y`, etc., which contradicts the assumption that there are >= 3 types of elements). So we remove a[j] and the problem is reduced. We can eventually reduce the problem to case (1) and use (1)'s process to remove the entire ring.

If anyone finds this helpful, here are my written solutions (video):

A. Two Permutations

Solution

If $$$n - a - b \le 1$$$, then you can see that $$$p=q$$$, because either there is exactly one element not in the common prefix or suffix, making the position of that element forced, or the common prefix and suffix intervals are touching/overlapping; so check if $$$n=a=b$$$. Otherwise, $$$n - a - b \ge 2$$$, and you can take $$$q = p_1, ..., p_a, p_{n-b}, ..., p_{a+1}, p_{n-b+1}, ..., p_n$$$, so the answer is "Yes."

B. Elimination of a Ring

Solution

If there is $$$1$$$ distinct element, then $$$n=1$$$, so the answer is $$$1$$$.

If there are $$$2$$$ distinct elements, they alternate between two elements, and two elements are deleted each operation no matter what, except for the last two operations, where one element is deleted each time, so the maximum number of operations is $$$n/2+1$$$ ($$$n$$$ will be even).

If there are $$$\ge3$$$ distinct elements, the main idea is to make it so that there is a unique element in the ring. Then you can just delete elements next to the unique element one at a time. So assume the frequency of each element is at least two. There will be two equal elements on the ring with distance at least $$$3$$$ apart, going clockwise or counter-clockwise. Take one of those elements $$$x$$$. If its two neighbors are different, delete $$$x$$$. Otherwise, delete one of its neighbors, then you are guaranteed able to delete $$$x$$$. Repeat the process until $$$x$$$ is unique, and then we can do as stated before. The answer is $$$n$$$.

C. Set Construction

Solution

If $$$A_i$$$ is a subset of $$$A_j$$$, make a directed edge from $$$j$$$ to $$$i$$$. Then run a depth-first search from all the nodes $$$u$$$ with no parents. Let $$$x$$$ be the smallest number not used yet (this is guaranteed to be $$$\le n$$$, since there are $$$n$$$ nodes). Let $$$N$$$ be the set of all the neighbors of $$$u$$$. Then you can set $$$A_u=(\bigcup_{v \in N} A_v)\cup \{x\}$$$. $$$A_u$$$ is guaranteed not to be a subset of anything we don't want, because of $$$x$$$.

D. Carry Bit

Solution

Consider all blocks of carry bits and non-carry bits. Loop over $$$i$$$, the number of carry-bit blocks.

If a carry bit comes right before a non-carry bit, there is only one way. If a carry bit comes right before another carry bit, there are three ways.

If a non-carry bit comes right before a carry bit, there is only one way. If a non-carry bit comes right before another non-carry bit, there are three ways.

So you can go through each of the four cases, whether it starts/ends with a carry bit or non-carry bit, and calculate the number of ways in that case, using the stars and bars formula. Special cases need to be handled for $$$k=0$$$ and $$$i=1$$$.

\begin{array}{|c|c|c|c|c|} \hline \text{First block} & \text{Last block} & \text{Ways to put carry blocks} & \text{Ways to put non-carry blocks} & \text{Ways to set bits} \cr \hline \text{Carry} & \text{Carry} & \binom{k-1}{i-1} & \binom{(n-k)-1}{(i-1)-1} & 3^{n-i-(i-1)}\cr \hline \text{Carry} & \text{Non-carry} & \binom{k-1}{i-1} & \binom{(n-k)-1}{i-1} & 3^{n-i-(i-1)} \cr \hline \text{Non-carry} & \text{Carry} & \binom{k-1}{i-1} & \binom{(n-k)-1}{i-1} & 3^{n-i-i} \cr \hline \text{Non-carry} & \text{Non-carry} & \binom{k-1}{i-1} & \binom{(n-k)-1}{(i+1)-1} & 3^{n-i-i} \cr \hline \end{array}

I solved it :)

I like your D solution. It involves a lot of concepts that were known to me, but I could not come up with the solution. Looks like I need more practice on the math+combinatorics section.

For C, I can't believe I overcomplicated my initial approach so much.

In problem C, one more hint in addition to what is stated: an element with bigger indegree has more elements in its set.

We can consider this case.

1 5 00010 00010 00010 00001 00000

According to your hint, 4 has the highest indegree among all nodes, so A4 should have the most elements in its set. However, we know that A4 is a proper subset of A5, hence A5 contains more elements.

Correct me if I'm wrong.

Problem E is very good. I learned a new concept called a clique and how problems can be framed around it. However, the time limit is a bit tight.

For example, a feasible vertex (a vertex whose operation can make the graph connected) always exists in a non-clique component. By the definition of a clique, every vertex has an edge to every other vertex. So in a clique, a vertex has an edge to every other vertex of that component, and if we perform the operation on it, it gets connected to the other components but disconnected from its own, so it can't be a feasible vertex. Also, if a vertex is an articulation point (also called a cut vertex), operating on it disconnects it from the main component and also splits the main component into one or more pieces. People who did not use this concept or missed this case lost on test case 4:

1

7

0100000

1000000

0001110

0010100

0011000

0010001

0000010

$$$2$$$ corrections for $$$D$$$'s editorial:

Can you please explain the second statement? If a=0101 and b=0101, then c=0101 and q=2; it doesn't have any consecutive 1s at all.

Even a single $$$1$$$ is a group of consecutive ones (of length $$$1$$$). In your example we have:

$$$c_3 c_2 c_1 c_0 c_{-1} = 01010$$$

Here $$$q=4$$$. So the number of $$$1$$$-groups is $$$\lceil \dfrac{4}{2}\rceil =2$$$, and the number of $$$0$$$-groups is $$$\lfloor \dfrac{4}{2}\rfloor +1=3$$$.

Problem E is a trash problem. `cin` or `scanf("%1d",&x)` will get TLE.

Skill issue

But I used cin and it passed quickly.

Agree.

E has so many border cases.

Here's the solution for Problem D from the tester myee, which can give the answers for the query pairs $$$(k,k),(k+1,k),\dots,(n,k)$$$ in $$$O(n)$$$ total time.

Let's define a function $$$L(v)=\log_2\operatorname{lowbit}(v+1)$$$; that is, $$$L(v)$$$ is the number of trailing $$$1$$$ bits of $$$v$$$ in binary.

For example, $$$L((1011)_2)=2,L((11100100111)_2)=3$$$.

Then, let's define

and the answer would be

Let's think about making a recursion (or Dynamic Programming?) on it.

By case analysis on the last bits of $$$i,j$$$, it's easy for us to prove that

and it immediately gives out

That is, we just need to get some information about $$$C(n,k,0)$$$!

Let's define a Bivariate Generating Function.

Then we can find that

And our answer would be

Let's think about how to calculate $$$Ans_{t,k}$$$ for all $$$t\in[k,n]$$$.

The two parts are similar, so let's just look at the first part. Let's just call it $$$A_t$$$.

If we just want to get a single $$$A_n$$$, we can find that

So we can get it in the time complexity of $$$O(n+\log\mathbf{Mod})$$$, which solves Problem D.

As for the bonus, how can we get the answers in $$$O(n)$$$ time?

Let's think of the method of ODE, which is well-known in China.

Let's call

and then

So

That is

So

And we can find that

So this is the way the sequence $$$G$$$ can be calculated in $$$O(n)$$$ time.

Moreover, in fact we can get the single answer in $$$O(\sqrt n\log n)$$$ time, if $$$\mathbf{Mod}=998244353$$$.

Code 1 for D itself.

Code 2 for the bonus.

Why do you have so many interesting solutions?

Boring.

oops.

So interesting!

You can submit it on loj now.

The bonus needs you to output

How do you "get the single answer in $$$O(\sqrt n\log n)$$$"?

The version in Chinese.

## Preknowledge 1: $$$p$$$-recursive sequence and D-finite function

Let's call a sequence $$$\{A_n\}$$$ $$$p$$$-recursive if there are fixed polynomials $$$P_0,\dots,P_p$$$ such that

where $$$P_0(n)\neq0$$$ always.

We have such a fact:

For example,

are all D-finite, and their corresponding coefficient sequences are $$$p$$$-recursive.

## Preknowledge 2: Continuous Point Value Shift

If there's a polynomial $$$F$$$ with $$$\deg F<n$$$, and we know

Then we can get

in $$$O(n\log n)$$$ time, with FFT (or NTT).

That can be solved with the Lagrange Interpolation Formula.

Moreover, if we know

then we can get

in $$$O(n\log n)$$$ time.

## Preknowledge 3: $$$\lambda$$$-matrix

A $$$\lambda$$$-matrix is a matrix over $$$F[\lambda]$$$, where $$$F$$$ is a field.

## The algorithm

For a $$$p$$$-recursive sequence $$$A$$$, we can get a single item $$$A_n$$$ in $$$O(\sqrt n\log n)$$$ time with the following algorithm.

We can find that

Let's call

And we want

while the multiplication is carried out from right to left.

Let's call $$$T=2^t$$$.

Let's call $$$d=\max\{\deg P_k \mid 0\le k\le p\}$$$.

Let's call $$$B_T(\lambda)=\prod_{i=0}^{T-1}B(\lambda+i)$$$, a $$$\lambda$$$-matrix with the degree $$$\le dT$$$.

so we can get $$$B_T(p)$$$, $$$B_T(p+T)$$$, $$$B_T(p+2T)$$$, $$$\dots$$$, $$$B_T(p+(dT-1)T)$$$, $$$B_T(p+dT^2)$$$ of the $$$\lambda$$$-matrix $$$B_T$$$.

To increase $$$t=\log_2 T$$$ by $$$1$$$, let's do the following.

get $$$B_T(p+dT^2)$$$, $$$B_T(p+(dT+1)T)$$$, $$$B_T(p+(dT+2)T)$$$, $$$\cdots$$$, $$$B_T(p+(2dT-1)T)$$$, $$$B_T(p+(2dT)dT)$$$ in $$$O(T\log T)$$$.

get $$$B_T(p+2dT^2)$$$, $$$B_T(p+(2dT+1)T)$$$, $$$B_T(p+(2dT+2)T)$$$, $$$\cdots$$$, $$$B_T(p+(3dT-1)T)$$$, $$$B_T(p+(3dT)dT)$$$ in $$$O(T\log T)$$$.

get $$$B_T(p+3dT^2)$$$, $$$B_T(p+(3dT+1)T)$$$, $$$B_T(p+(3dT+2)T)$$$, $$$\cdots$$$, $$$B_T(p+(4dT-1)T)$$$, $$$B_T(p+(4dT)dT)$$$ in $$$O(T\log T)$$$.

$$$B_{2T}(v)=B_{T}(v+T)B_{T}(v)$$$.

In the beginning $$$T=1$$$, $$$B_1(\lambda)=B(\lambda)$$$, and we need $$$B(p),B(p+1),\cdots,B(p+d)$$$.

We just need to make $$$T\ge\sqrt n$$$ with $$$T=\Theta(\sqrt n)$$$.

And what we want can be obtained with $$$\Theta(\sqrt n)$$$ matrix multiplications.

The contribution of $$$-\frac1{P_0(n)}$$$ can be handled with a similar process.

And the total time is

Not sure why this is downvoted, but it's awesome. Thank you so much for sharing the technique!

I tried to solve problem C using topological sort, but it gave me WA.

But when I just initialize the sets with one value each (`ans[s][cur] = 1`) before the topological sort, it gets accepted; here is my accepted code. Why does this little modification make the difference? Can anyone find a counterexample?

Take a look at Ticket 16478 from CF Stress for a counterexample.

I hope I wasn't the only one going on OEIS for D. Felt really close to the answer there.

Can somebody explain why on this test in problem E my solution was judged as invalid output:

Test case:
1
7
0100000
1000000
0001110
0010100
0011000
0010001
0000010
My output: 1 3. Answer: 1 7.

Spoiler: my solution

I count the number of connected components. If there is only 1 connected component, the answer is 0; if there is a connected component with two nodes that are not directly connected, the answer is 1, because you can choose one of those nodes; otherwise the answer is the size of the smallest connected component.

``Solution not valid'' means the graph didn't become connected after your operation(s).

if you operate on vertex $$$3$$$ the graph will become

There are two connected components, one contains $$$1$$$ $$$2$$$ $$$3$$$ $$$7$$$, and one contains $$$4$$$ $$$5$$$. The graph isn't connected.

Thanks a lot, I understand now.

Update: I know why I was wrong, thanks to ganesh_6.

Did I help you? Anyway, you are welcome!

In problem C, what is this check for whether it is a proper subset or not? I found it in tourist's and other top coders' solutions but am not able to understand it: `e[i][j] |= e[i][k] && e[k][j];`

Why is my solution for E (181836762) getting TLE? I implemented it as said in the editorial; the time complexity is O(n*n). The same solution in C++ gives AC: 181837428.

Can we say that the number of carries in (a+b) equals the number of ones in the binary representation of (a&b)? Here & is the bitwise AND.

Sadly not. For example, in the case 00001+11111, the number of carries is 5, but a&b has only one set bit.

Yeah, I just found that. If it were true, the problem would be very simple: just 3^(n-k) * nCk.

I love C! Although I did not solve it during the round.

Can anybody help me debug my solution for Problem C? I'm not able to figure out why it's failing on test 2.

Link of submission: https://codeforces.com/contest/1761/submission/181847962

I am going with the idea of topological sorting. If i is a subset of j, then I add an edge from j to i in the graph, and I also construct the transposed graph. Then, for the nodes having no out-edges, I assign single elements to them, and proceed similarly with their ancestors. If any ancestor's set is equal to its child's set, I add one more element.

It's the "if and only if" from the problem statement that's causing WA. Take a look at this example:

Clearly $$$A_3$$$ is a subset of $$$A_6$$$, but the input says it shouldn't be. Similarly for $$$A_4$$$/$$$A_5$$$.

Thank You !!

I solved it :)

Can anyone point out the name of the well-known problem from F2? Thank you in advance ;)

Catalan's trapezoids

Thank you

My submission

Why does Problem D have the fft tag? I am new to it. Any solutions using that?

Alternate solution to G:

As in the editorial, we reduce the problem to finding vertices $$$a$$$ and $$$b$$$ such that the centroid lies on the path between them. Picking $$$a$$$ and $$$b$$$ at random will succeed with probability at least $$$1/2$$$. We would like to repeat this to boost our success rate, but we can only afford to test one pair. Instead, we will repeat a cheaper process that finds $$$a'$$$ and $$$b'$$$ such that the path $$$a'$$$-$$$b'$$$ will contain the centroid with constant probability if $$$a$$$-$$$b$$$ does not contain the centroid, and with near certainty if $$$a$$$-$$$b$$$ does contain the centroid.

To do this, we sample $$$s$$$ vertices at random. Let $$$A_1,\ldots,A_k$$$ be the components of the graph with the edges of the path $$$a$$$-$$$b$$$ deleted, such that $$$a\in A_1,\ldots,b\in A_k$$$ (i.e. same $$$A_i$$$ as in editorial). For each sampled vertex $$$v$$$, we compute $$$dist(a,b)+dist(v,a)-dist(v,b)$$$ using two queries to determine which $$$A_i$$$ it belongs to. If the centroid does not lie on $$$a$$$-$$$b$$$, with probability at least $$$1/2$$$, at least half the vertices will be on the opposite side of the centroid as the path $$$a$$$-$$$b$$$, so some $$$A_i$$$ will contain half our sampled vertices. We can then replace an endpoint of our path $$$a$$$-$$$b$$$ with a sampled vertex from $$$A_i$$$ to have at least a $$$1/4$$$ chance of our new path containing the centroid.

To ensure that we do not lose the centroid if it is already on $$$a$$$-$$$b$$$, we use our freedom to choose which of $$$a$$$ or $$$b$$$ to replace. Suppose our new endpoint is selected from $$$A_i$$$. Let $$$w(A_j)$$$ denote the number of sampled vertices in $$$A_j$$$. We replace $$$a$$$ if $$$w(A_1)+w(A_2)+\ldots+w(A_{i-1})\le w(A_{i+1})+w(A_{i+2})+\ldots+w(A_k)$$$ and $$$b$$$ otherwise. Since we only replace if $$$w(A_i)\ge s/2$$$, we can only lose the centroid if $$$3/4$$$ of the sampled vertices lie on one side of the centroid. For $$$s=100$$$, the probability of this happening is already less than $$$10^{-6}$$$.

Code: 181852626

Oh, B is interesting indeed. I submitted something that doesn't explicitly compute the frequency of elements (182076103).

In summary, "once removable implies twice removable". Here we say an element is "removable" if its adjacent neighbours are different.

Suppose our sequence contained exactly one removable element $$$R$$$. Then by uniqueness of removability, it would look like:

But our circular sequence is finite, so the ends must meet:

contradicting the uniqueness of removability in both cases.

Therefore, if there exists some removable element, we can remove it without affecting removability of the other one. Apply the same argument to this element in the new sequence.

In the editorial of problem E, to prove that there always exists a non-cut vertex in a non-clique component, so that it is not adjacent to all other vertices in this component, it reads:

However, the proof above is wrong: after erasing vertices, the component may be divided into several components, and these components may all be cliques.

Instead, the proof can go like this: find any non-cut vertex $$$u$$$. If $$$u$$$ is not adjacent to all other vertices in this component, we are done. Otherwise, all other vertices must be non-cut vertices, since after removing any one of them the component remains connected through $$$u$$$. Since the component is not a clique, we can always find a vertex (other than $$$u$$$) that is not adjacent to all other vertices in the component, and by the above it must be a non-cut vertex.

Q.E.D.

By the way, here is the proof that a vertex with the least degree in a non-clique connected component is a feasible vertex. Let's name the vertex $$$u$$$. If $$$u$$$ is a non-cut vertex, then since $$$u$$$ is not adjacent to all other vertices in this component, we are done. Otherwise, consider any one of the connected components $$$V$$$ left after removing $$$u$$$, and let its size be $$$s$$$. The degree of any vertex in $$$V$$$ is at most $$$s$$$ (otherwise $$$|V|$$$ would be greater than $$$s$$$), so, since $$$u$$$ has the least degree, the degree of $$$u$$$ is also at most $$$s$$$. And because there are other connected components besides $$$V$$$, the number of edges between $$$u$$$ and $$$V$$$ is less than $$$s$$$, which means that after operating on $$$u$$$, $$$u$$$ and $$$V$$$ will still be connected.

Q.E.D.

Yes, I misremembered my proof when rewriting it after many days

will fix it soon

Thanks for mentioning it btw

If you view the page saved in the Wayback Machine (Internet Archive), you'll get the Chinese editorial without the an*me pictures. (Maybe that's because he used a Chinese website to host his pictures, and it was not accessible to the Internet Archive.)

For problem E, the editorial said "Actually, the vertex with the least degree in a connected component always satisfy the condition", but I don't understand why the least-degree node in a connected component must NOT be a cut vertex. Imagine a connected component where node A connects to two different connected components with one edge each. So A has degree 2, and within each of those two components every node connects to every other node. Then A has the least degree, but apparently A is a cut vertex. Would anyone please help me? Thanks a lot.

The vertex with the least degree does not have to be a non-cut vertex; this is just a sufficient condition.

Let $$$u$$$ be the vertex with the least degree. If $$$u$$$ is indeed a non-cut vertex, then we are done. Otherwise, consider the connected components separated by $$$u$$$. Let one of them be $$$S$$$. If $$$S$$$ contains exactly one vertex $$$v$$$, then $$$v$$$ must have degree $$$1$$$ and $$$u$$$ has degree at least $$$1$$$. If $$$u$$$ has degree $$$1$$$ as well, then $$$u$$$ and $$$v$$$ form a clique, which contradicts the premise. Otherwise, $$$v$$$ would be the vertex with the least degree, which contradicts the premise. If $$$S$$$ contains more than one vertex, then since $$$u$$$ is the vertex with the least degree as well as a cut vertex, $$$S\cup \{u\}$$$ cannot be a clique (otherwise $$$u$$$ would have a larger degree than the vertices in $$$S$$$). Since $$$S\cup \{u\}$$$ is not a clique, $$$u$$$ is not connected to all vertices in $$$S$$$, so $$$u$$$ will still be connected with $$$S$$$ after an operation on $$$u$$$.

Really, thanks for your explanation! Now I understand. The minimum-degree node doesn't have to be a non-cut vertex; even when it is a cut vertex, for each component it separates, it cannot have an edge to every node in that component, otherwise it would have a higher degree than the nodes in that component. So after the operation, this node still connects to all the components.
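The counting claim above (a minimum-degree vertex has fewer than $$$|V|$$$ edges into each component $$$V$$$ left after its removal) can be checked exhaustively on tiny graphs. A Python sketch, with hand-rolled connectivity and made-up variable names:

```python
from itertools import combinations

# Exhaustive check on 5-vertex graphs: if u has minimum degree in a
# connected non-clique graph, then for every component V of G - u,
# u is adjacent to fewer than |V| of its vertices (so the operation on u
# keeps u connected to V).

def components(verts, adj):
    """Connected components of the subgraph induced by `verts`."""
    seen, comps = set(), []
    for s in verts:
        if s in seen:
            continue
        comp, stack = set(), [s]
        seen.add(s)
        while stack:
            x = stack.pop()
            comp.add(x)
            for y in adj[x]:
                if y in verts and y not in seen:
                    seen.add(y)
                    stack.append(y)
        comps.append(comp)
    return comps

n = 5
all_edges = list(combinations(range(n), 2))
for mask in range(1 << len(all_edges)):
    adj = {v: set() for v in range(n)}
    for i, (x, y) in enumerate(all_edges):
        if mask >> i & 1:
            adj[x].add(y)
            adj[y].add(x)
    verts = set(range(n))
    if len(components(verts, adj)) != 1:
        continue  # not connected
    if all(len(adj[v]) == n - 1 for v in range(n)):
        continue  # clique
    u = min(range(n), key=lambda v: len(adj[v]))  # a minimum-degree vertex
    for V in components(verts - {u}, adj):
        assert len(adj[u] & V) < len(V)
print("checked all connected non-clique graphs on 5 vertices")
```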

Can anyone explain why this code is going TLE? 183461784

In problem C, won't the time complexity be $$$O(n^{3})$$$ or $$$O(n^{3}\log n)$$$? Looking at tourist's solution, it certainly doesn't seem to be $$$O(n^2)$$$. 181757137 (tourist's solution) 189290765 (my solution)

Also, if it's $$$O(n^{3})$$$, how does it fit in the time limit if the sum of $$$n$$$ over all test cases doesn't exceed $$$1000$$$?

$$$\sum n^3\leq \sum (n\cdot n_{max}^2)=(\sum n)\cdot n_{max}^2=10^7$$$
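Plugging in numbers: assuming $$$\sum n\le 1000$$$ and $$$n_{max}=100$$$ (the per-test bound the $$$10^7$$$ figure implies), the worst case is ten tests of $$$n=100$$$:

```python
# The bound above, evaluated at the extreme: with sum of n at most 1000 and
# n_max = 100 (assumed per-test limit), the worst case is 10 tests of n = 100.
tests = [100] * 10
assert sum(tests) <= 1000
total = sum(n ** 3 for n in tests)
assert total == sum(tests) * max(tests) ** 2  # matches (sum n) * n_max^2
print(total)
```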

code https://codeforces.com/contest/1761/submission/181776511