## A. LuoTianyi and the Palindrome String

Consider the substring of $$$s$$$ from the second character to the last, i.e. $$$s_2s_3\cdots s_n$$$. If it's not a palindrome, then the answer is $$$n-1$$$. What if it is a palindrome? This implies that $$$s_2=s_n$$$, $$$s_3=s_{n-1}$$$, and so on. Meanwhile, the fact that $$$s$$$ is a palindrome implies $$$s_1=s_n$$$, $$$s_2=s_{n-1}$$$, etc. So we get $$$s_1=s_n=s_2=s_{n-1}=\cdots$$$, i.e. all characters in $$$s$$$ are the same. In this situation every subsequence of $$$s$$$ is of course a palindrome, so the answer is $$$-1$$$.
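The whole case analysis fits in a few lines. A hedged C++ sketch (it assumes, as in the problem, that the input $$$s$$$ is itself a palindrome; the function name is chosen for illustration):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Case analysis from the editorial, assuming s is a palindrome:
// the answer is n - 1 unless all characters are equal, in which case -1.
int longestNonPalindrome(const string& s) {
    int n = (int)s.size();
    // Check whether s_2...s_n (0-indexed s[1..n-1]) is a palindrome.
    bool pal = true;
    for (int i = 1, j = n - 1; i < j; ++i, --j)
        if (s[i] != s[j]) pal = false;
    // If it is, all characters of s coincide, so every subsequence
    // of s is a palindrome and no valid answer exists.
    return pal ? -1 : n - 1;
}
```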

**Code**

## B. LuoTianyi and the Grid

Assume that $$$n>m$$$. Thinking greedily, we want the maximum value of $$$a$$$ to appear as the maximum of as many subtables as possible, and at the same time we want the minimum value of $$$a$$$ to appear as the minimum of as many subtables as possible. This gives us two choices: making the upper-left square the minimum or the maximum. The two cases are symmetric, so we'll only consider the minimum one.

Now that all subtables share the same minimum value, we want to maximize the number of subtables in which the largest $$$a$$$ appears as the maximum. Placing it at $$$(1,2)$$$ or $$$(2,1)$$$ makes that number $$$n(m-1)$$$ or $$$m(n-1)$$$ respectively. Because $$$n>m$$$, we have $$$m(n-1)>n(m-1)$$$, so we place the largest $$$a$$$ at $$$(2,1)$$$ and the second largest at $$$(1,2)$$$. The answer for this case is $$$m(n-1)\times \max+m\times \text{second max}-mn\times\min$$$.
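Both symmetric cases can be evaluated and compared directly. A sketch (the function name is illustrative; the "maximum in the corner" formula is the mirror image of the one above, obtained by swapping the roles of the smallest and largest values):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Evaluate both symmetric corner placements and keep the better one.
// a holds all n*m values of the grid.
long long solveB(long long n, long long m, vector<long long> a) {
    if (n < m) swap(n, m);                     // ensure n >= m as in the editorial
    sort(a.begin(), a.end());
    long long sz = n * m;
    long long mn = a[0], mn2 = a[1];           // minimum and second minimum
    long long mx = a[sz - 1], mx2 = a[sz - 2]; // maximum and second maximum
    // Corner holds the minimum of all nm subtables.
    long long cornerMin = m * (n - 1) * mx + m * mx2 - n * m * mn;
    // Mirrored case: corner holds the maximum of all nm subtables.
    long long cornerMax = n * m * mx - m * (n - 1) * mn - m * mn2;
    return max(cornerMin, cornerMax);
}
```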

**Code**

## C. LuoTianyi and the Show

First we can notice that if someone with a specific favourite seat (i.e. neither $$$-1$$$ nor $$$-2$$$) would have his seat taken by a $$$-1$$$ or $$$-2$$$ person, it's better to let the former go first and the $$$-1$$$ or $$$-2$$$ one go after him.

So we know it's better to let those with a favourite seat go in first. After they are seated, we consider filling the space between them with $$$-1$$$ and $$$-2$$$. It's easy to notice that we can find a non-overlapping prefix and suffix, fill the blank seats in the prefix with $$$-1$$$, and fill the blanks in the suffix with $$$-2$$$. We then only need to evaluate the answer greedily for each division point between the prefix and the suffix.

The time complexity is $$$O(n)$$$.

**Code**

## D. LuoTianyi and the Floating Islands

Call a node special if there is a person in it.

When $$$k$$$ is odd, we find that there is only one node satisfying the conditions.

$$$\bf{Proof}.$$$ Assume distinct nodes $$$x$$$ and $$$y$$$ are both good. Let $$$x$$$ be the root of the tree. Define $$$s_i$$$ as the number of special nodes in subtree $$$i$$$. Think about the process of moving from $$$x$$$ to $$$y$$$. If we move the chosen node from $$$i$$$'s father to $$$i$$$, the variation of cost is $$$k-2s_i$$$. When we move from $$$x$$$ to the son $$$i$$$ whose $$$s_i$$$ is maximal, $$$k-2s_i\geq 0$$$ holds (otherwise $$$x$$$ wouldn't be a good node). Moreover, $$$k-2s_i>0$$$ because $$$k$$$ is odd and $$$2s_i$$$ is even. Since $$$\min_{1\leq j\leq n}{(k-2s_j)}=k-2s_i$$$, we find $$$k-2s_j>0$$$ for all $$$j$$$, so every move strictly increases the cost and $$$y$$$ can't be a good node.

Then think about the situation where $$$k$$$ is even. Choose a root arbitrarily. By the same method, we find that good nodes satisfy $$$2s_i=k$$$, and this condition is also sufficient. Define $$$p_i$$$ as the probability that $$$s_i=\frac{k}{2}$$$; then the answer is $$$1+\sum_{i=1}^{n}p_i$$$.

Define $$$S_i$$$ as the size of subtree $$$i$$$. When $$$s_i=\frac{k}{2}$$$, there are $$$\frac{k}{2}$$$ special nodes in subtree $$$i$$$ and $$$\frac{k}{2}$$$ in the other part. The number of ways to place special nodes is $$$\binom{n}{k}$$$, and $$$\binom{S_i}{\frac{k}{2}}\binom{n-S_i}{\frac{k}{2}}$$$ of them satisfy the condition. So $$$p_i=\dfrac{\binom{S_i}{\frac{k}{2}}\binom{n-S_i}{\frac{k}{2}}}{\binom{n}{k}}$$$.

So we can solve the problem in $$$O(n)$$$.
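A sketch of the even-$$$k$$$ computation (the modulus $$$998244353$$$ and all names are assumptions, since the required output format isn't restated here; a DFS collects the subtree sizes $$$S_i$$$ and the formula above is summed):

```cpp
#include <bits/stdc++.h>
using namespace std;

const long long MOD = 998244353; // assumed modulus

long long pw(long long b, long long e) {
    long long r = 1; b %= MOD;
    for (; e; e >>= 1, b = b * b % MOD) if (e & 1) r = r * b % MOD;
    return r;
}

// Expected number of good nodes for even k: 1 + sum_i p_i, where
// p_i = C(S_i, k/2) * C(n - S_i, k/2) / C(n, k), computed modulo MOD.
long long solveD(int n, int k, const vector<pair<int,int>>& edges) {
    vector<long long> fact(n + 1, 1), inv(n + 1, 1);
    for (int i = 1; i <= n; ++i) fact[i] = fact[i - 1] * i % MOD;
    inv[n] = pw(fact[n], MOD - 2);
    for (int i = n; i >= 1; --i) inv[i - 1] = inv[i] * i % MOD;
    auto C = [&](int a, int b) -> long long {
        if (b < 0 || b > a) return 0;
        return fact[a] * inv[b] % MOD * inv[a - b] % MOD;
    };
    vector<vector<int>> g(n + 1);
    for (auto [u, v] : edges) { g[u].push_back(v); g[v].push_back(u); }
    // Iterative DFS from root 1 to compute the subtree sizes S_i.
    vector<int> sz(n + 1, 1), par(n + 1, -1), order, st = {1};
    while (!st.empty()) {
        int u = st.back(); st.pop_back(); order.push_back(u);
        for (int v : g[u]) if (v != par[u]) { par[v] = u; st.push_back(v); }
    }
    for (int i = (int)order.size() - 1; i > 0; --i)
        sz[par[order[i]]] += sz[order[i]];
    long long invTotal = pw(C(n, k), MOD - 2), ans = 1;
    for (int i = 1; i <= n; ++i)
        ans = (ans + C(sz[i], k / 2) * C(n - sz[i], k / 2) % MOD * invTotal) % MOD;
    return ans;
}
```

On a tree of two vertices with $$$k=2$$$ this gives $$$2$$$, matching the discussion in the comments below.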

**Code**

## E. LuoTianyi and XOR-Tree

Hint: Consider a brute force dynamic programming solution and try to optimize it.

Denote by $$$f_{u,w}$$$ the minimum number of operations needed to make every path from a leaf inside the subtree of $$$u$$$ to the root have xor value $$$w$$$. Observe that for every $$$u$$$, $$$f_{u,w}$$$ takes at most $$$2$$$ different values. This is because if $$$f_{u,w_1}-f_{u,w_2}>1$$$, we can first reach the $$$w_2$$$ configuration and then xor $$$a_u$$$ with $$$w_1 \ \text{xor} \ w_2$$$, which takes $$$f_{u,w_2}+1$$$ steps instead of $$$f_{u,w_1}$$$.

Now we only need to calculate $$$\text{minn}_u=\min_w f_{u,w}$$$ and the set $$$S_u$$$ of $$$w$$$ that attain the minimum. For a leaf $$$v$$$ we have $$$\text{minn}_v=0$$$ and $$$S_v=\{\text{the xor value from the root to }v\}$$$. Given the sets, it's trivial to calculate $$$\text{minn}_u$$$.

Note that $$$S_u$$$ consists of the values appearing the most times among the sets of $$$u$$$'s sons. We can maintain $$$S_u$$$ using a map with heuristic (small-to-large) merging. Consider merging sets into a new set $$$S'$$$. If every element of $$$S'$$$ appears only once in the original sets, we keep $$$S'$$$ as the result; otherwise, we brute-force over the whole set $$$S'$$$ and keep the elements appearing the most times. In the second situation, every element's number of appearances is at least halved (those appearing once end up with $$$0$$$ and the others with $$$1$$$), so the total number of brute-force checks is $$$O(n\log n)$$$.
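To make the merging rule concrete, here is a direct (unamortized) illustration of "keep the values appearing the most times across the sons' sets"; the actual solution applies this rule inside the heuristic merging described above, and all names here are illustrative:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Combine the sons' candidate sets and keep only the values with maximal
// multiplicity; returns that set together with the multiplicity itself.
pair<set<long long>, int> mergeSets(const vector<set<long long>>& sons) {
    map<long long, int> cnt;
    for (const auto& s : sons)
        for (long long x : s) ++cnt[x];
    int best = 0;
    for (const auto& [x, c] : cnt) best = max(best, c);
    set<long long> res;
    for (const auto& [x, c] : cnt)
        if (c == best) res.insert(x);
    return {res, best};
}
```

One can check that with this multiplicity $$$cnt$$$, $$$\text{minn}_u=\sum_v(\text{minn}_v+1)-cnt$$$ over the sons $$$v$$$, matching the transition discussed in the comments.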

The final time complexity is $$$O(n\log^2 n)$$$.

**Code**

## F. LuoTianyi and the Function

Consider an alternative method of calculating $$$g$$$. Notice that $$$g(i,j)$$$ is the minimum of the last appearing positions of all colors (let's call different values of $$$a_x$$$ colors for short) in the interval $$$[i,j]$$$.

Consider the sequence from $$$a_n$$$ to $$$a_1$$$. Adding $$$a_i$$$ to the front of the sequence only affects the values $$$g(i,x)$$$ for $$$i\leq x<\text{nxt}_i$$$, where $$$\text{nxt}_i$$$ is the next position after $$$i$$$ with the same $$$a$$$ value. In other words, we set the $$$g$$$ values in the submatrix $$$[(1,i),(i,\text{nxt}_i-1)]$$$ to $$$i$$$, which can be done in $$$O(n\log^2 n)$$$, but that's not fast enough.

Because the queries happen after all modifications, we can store the queries offline and compute each submatrix sum using $$$4$$$ queries of submatrices having $$$(1,1)$$$ as the upper-left corner. Now we need a data structure that can: 1. set all values in an interval to the same value $$$x$$$; 2. query the history sum (the sum of the values over all previous versions). We can maintain the segments of adjacent positions with equal values, turning each modification into a 'range add' on a segment.

An operation creates at most $$$O(1)$$$ new segments, so there are only $$$O(n)$$$ range-add modifications and $$$O(m)$$$ range history-sum queries; the problem can now be solved in $$$O(n\log n)$$$ with a segment tree.

**Code**

## G. LuoTianyi and Cartridge

Consider finding the maximum value of $$$B+D$$$ for every $$$\min(A,C)$$$. Denote $$$\min(A,C)$$$ as $$$x$$$. We call a vertex $$$u$$$ satisfying $$$a_u\geq x$$$ or an edge satisfying $$$c_e\geq x$$$ **optional**. Denote as $$$V$$$ the optional vertex set and as $$$E_0$$$ the optional edge set.

Firstly, if all optional vertices are on the same side of an edge, this edge must not be chosen. Delete these edges from $$$E_0$$$ and we get the edge set $$$E$$$. Formally, an edge $$$e$$$ is in $$$E$$$ if and only if $$$c_e\geq x$$$ and there exist $$$u,v\in V$$$ such that $$$e$$$ is on the path between them.

$$$\bf{Lemma.}$$$ **There exists an optimal $$$T_{\text{ans}}=(V_\text{ans},E_\text{ans})$$$ that either $$$V=V_\text{ans}$$$ or $$$E=E_\text{ans}$$$.**

$$$\bf{Proof.}$$$ Assume an optimal $$$T'=(V',E')$$$ with $$$V'\neq V,E'\neq E$$$. Choose an edge $$$e$$$ that is in $$$E$$$ but not in $$$E'$$$. Because $$$V'\neq V$$$, there must exist two vertices $$$u,v$$$ on different sides of edge $$$e$$$ and $$$u\in V',v\notin V'$$$. Consider adding the edge $$$e$$$ and the vertex $$$v$$$ into our chosen tree, the resulting graph is obviously a tree. Note that $$$b_v,d_e\geq 0$$$, so the resulting choice is no worse than $$$T'$$$.

When we delete the edges of $$$E$$$ from the original tree, we get some connected components. Shrink each component into a single vertex to get $$$V'$$$, and then for each edge $$$(u,v)\in E$$$, connect $$$u$$$'s and $$$v$$$'s components in the new graph to get $$$E'$$$. Obviously, the new graph $$$T'=(V',E')$$$ is a tree.

For any leaf vertex $$$u'$$$ of the new tree $$$T'$$$, there must exist a chosen vertex $$$u$$$ in the component $$$u'$$$; otherwise the edge connecting to $$$u'$$$, say $$$e'$$$, is not chosen either, and adding $$$u$$$ and $$$e'$$$ into our answer tree achieves a better answer.

Assume that we have now chosen a vertex $$$u$$$ for every leaf $$$u'$$$, and denote the set of chosen vertices by $$$V_x$$$. Consider an arbitrary choice of vertices $$$V_c$$$ and edges $$$E_c$$$ satisfying $$$V_x\subseteq V_c\subseteq V$$$, $$$E_c\subseteq E$$$, $$$|V_c|-1=|E_c|$$$. It's easy to prove that such a choice is a legal answer, given that every $$$e\in E_c$$$ has at least one leaf component on each side and every leaf component contains a chosen vertex.

Reconsider the lemma, and we can get a solution for a fixed $$$x$$$:

- Find $$$V,E$$$. Calculate the components and get $$$V',E'$$$.
- Find the vertex with the maximum $$$b$$$ in every leaf-component in $$$V'$$$ and get $$$V_x$$$.
- Let $$$m$$$ be $$$\min(|V|,|E|+1)$$$, and $$$m'$$$ be $$$|V_x|$$$. Choose the vertices in $$$V\setminus V_x$$$ with the first $$$m-m'$$$ largest $$$b$$$, and the edges in $$$E$$$ with the first $$$m-1$$$ largest $$$d$$$ and get the answer.

As $$$x$$$ grows, the sets $$$V,E$$$ shrink while the components merge into each other. We use segment trees to maintain the $$$b$$$ values of the vertices and the $$$d$$$ values of the edges; when merging two components, we simply merge the two segment trees.

The final time complexity is $$$O(n\log n)$$$.

**Code**

Why is the time limit for 1D so tight

agree

first, my std and some testers' solutions run in 3.5s; second, i didn't want $$$O(n\log^2 n)$$$ solutions to pass; third, this is my first problem on CF and i don't have enough experience, so the TL is 7s. sorry for an uncomfortable problem :(

i'll be more careful next time, hoping for your forgiveness

Using a segment tree, which is $$$O(n\log n)$$$, will exceed the time limit on test 11; maybe it's because the segment tree has a large constant :(

yeah i just used a BIT to solve that and forgot to check the time of the segment tree... that's my fault

so fast!

Who else solved B2 by using arbitrary mod convolution because they looked at vertices instead of edges?

Actually though we look at vertices we don't need MTT or sth since

Here $$$x_i$$$ denotes the size of some subtree

So we can calculate this in linear time.

https://codeforces.com/contest/1824/submission/205182489

How is LHS = RHS?

https://www.cnblogs.com/YunQianQwQ/p/17385469.html

We can explain it by using its combinatorial meaning: LHS = the number of ways to place $$$k$$$ balls in $$$n$$$ boxes such that there are at least $$$w+1$$$ balls in the first $$$x_i$$$ boxes. Let's count this by the position of the $$$(w+1)$$$-th ball: if it is placed in the $$$r$$$-th box, there are $$$\binom{r-1}{w}\times\binom{n-r}{k-w-1}$$$ ways. Sum this up from $$$r=1$$$ to $$$x_i$$$ and we get RHS.
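This counting argument is easy to sanity-check by brute force for small parameters. A sketch with illustrative names, enumerating $$$k$$$-subsets with bitmasks:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Exact small binomial coefficient.
long long binom(int a, int b) {
    if (b < 0 || b > a) return 0;
    long long r = 1;
    for (int i = 0; i < b; ++i) r = r * (a - i) / (i + 1);
    return r;
}

// RHS: count by the position r of the (w+1)-th ball.
long long rhs(int n, int k, int w, int x) {
    long long s = 0;
    for (int r = 1; r <= x; ++r)
        s += binom(r - 1, w) * binom(n - r, k - w - 1);
    return s;
}

// LHS by brute force: k-subsets of {1..n} with at least w+1 elements
// among the first x positions.
long long lhsBrute(int n, int k, int w, int x) {
    long long s = 0;
    for (int mask = 0; mask < (1 << n); ++mask)
        if (__builtin_popcount(mask) == k &&
            __builtin_popcount(mask & ((1 << x) - 1)) >= w + 1)
            ++s;
    return s;
}
```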

It would be helpful if you also provide the code for the problem. It would be useful for beginners like me . Peace!

Added links in the editorial.

Yes, thanks, that would be useful.

It shows: You are not allowed to view the requested page

Sorry, fixing it soon.

done now.

we need Chinese

You mentioned in the tutorial of 1B that

However consider a tree with $$$2$$$ vertices and $$$k = 2$$$. Let's say the root is $$$1$$$, so $$$p_1 = 0$$$ because the other part has $$$0$$$ vertices, and $$$p_2 = 1$$$ because $$$\binom{2}{2} = 1$$$ and there is only $$$1$$$ way to choose $$$1$$$ special node from $$$1$$$ vertex. So you're indicating that the answer is $$$1$$$. However the answer is obviously $$$2$$$.

Could you please explain this case?

maybe the answer should be $$$1+\sum\limits_{i=1}^{n}p_i$$$ ?

The expected number of good nodes is in fact $$$1 + \displaystyle\sum p_i$$$.

$$$\displaystyle\sum p_i$$$ counts the expected number of edges on the path made of the good nodes.

Thanks for the explanation. I see that the tutorial has also changed.

You say that "$$$\sum p_i$$$ counts the expected number of edges on the path made of the good nodes". So you're indicating that all good vertices are always connected? Why is it true? If the good vertices form two connected components then we'll have to add $$$2$$$.

Yes, all good vertices are always connected. For a given set of points, it's the intersection of k/2 paths on the tree.

Isn't it also assuming that there cannot be "only one good vertex"?

Yeah good nodes are always connected.

If you visit node $$$K$$$ from $$$V$$$ whose sum of paths is bigger, then there will not be a node $$$U$$$ s.t. $$$K$$$ is on the path from $$$V$$$ to $$$U$$$ and $$$U$$$'s sum of paths is smaller than $$$V$$$'s sum. So there can't be two nodes $$$V$$$ and $$$U$$$ s.t. their sums are minimal and there is a node on their path whose sum is not minimal :)

The answer is $$$1 + \sum_{i=2}^{N} p_i$$$. Actually, $$$p_i$$$ is the probability that the edge from $$$i$$$ to its parent is included in the answer.

Edit: look at my submission 205128466

You missed +1 in the answer.

Maybe considering the edges is easier to understand.

we need unrated register!!!!!!!!!

If you're having trouble solving 1825C - LuoTianyi and the Show, you might find this way of thinking useful: we are able to consider a non-overlapping prefix and suffix because in the optimal solution, we will never find a $$$-1$$$ person seated to the right of a $$$-2$$$ person. If the $$$-2$$$ person is seated first, then it is either the leftmost position or has a position to the left of it. So, the position at which $$$-1$$$ will be seated will always be to the left of a $$$-2$$$ position. A similar argument holds when you consider seating $$$-1$$$ first.

This means that we can split the array into a left and right segment where the left segment contains $$$-1$$$ people and no $$$-2$$$ and the right segment contains $$$-2$$$ people and no $$$-1$$$.

In D's tutorial,

"Assume distinct node x and node y are good nodes. Let x be the root of the tree. Define si as the number of special nodes in subtree i. Think about the process we move from x to y. If we try to move the chosen node from its father to i, the variation of cost is k−2si"

What is node i? What exactly are we doing here?

i is an arbitrary vertex. cost is the sum of distances from this vertex to all special vertices. If the cost of par[i] was cp, then the cost of i will be cp - si + (k - si) = cp + (k - 2si): -si because of -1 for si vertices, and +1 for the remaining (k - si) vertices

In editorial of problem D1B2:

Then think about the situation that k is even. Choose a node as root arbitrarily. With the same method, we find that good nodes satisfy $$$2s_{i} = k$$$.

Shouldn't it be $$$2s_{i} \leq k$$$?

In the following graph, k = 4 and yellow vertices are special. Vertex 1 is good but doesn't satisfy given condition

Because the given condition is satisfied only if there is more than one good node

So how are we handling the case of only 1 good vertex?

We don't need to handle the case that has only 1 good node, because every random case will have at least 1 good node. We just need to figure out the excess part.

I think the editorial is very confusing. That property $$$2s_i = k$$$ does NOT always hold for good nodes, even when there is more than one good node, as here:

But note that actually $$$p_i$$$ is NOT computing the probability that node $$$i$$$ is good. It is computing the probability that BOTH node $$$i$$$ and its parent are good. In a sense it is computing the probability that the "edge" from $$$i$$$ to its parent is a "good edge". By the analysis you can notice that when there is more than one good node, they form a single path.

And that is a very clever idea, to focus on EDGES and not on nodes, which is not stressed in the editorial. By doing so, a very ugly summation of binomials (to compute all the options for $$$2s_i \leq k$$$, which would be needed to compute the precise probability that a PARTICULAR node is good, and which is totally avoided with the editorial solution) turns into a single product of binomials, because for an EDGE having good nodes on both ends, that equality must hold.

Thank you

[Deleted]

Very nice explanation!

In Div2B's editorial, it is said that "It's symmetrical so we'll only consider the minimum situation.". Why don't we have to consider the case when the top left cell is the maximum value? Why does it suffice to check it only as the minimum value? Thanks!

Sorry for bad English.

What I meant was something like: it's symmetrical so I'll only explain the minimum scenario

here, the problem D1/B1 can be easily solved using the rerooting technique. This is a pretty standard problem of finding the sum of distances from each node to every other node.

Can you explain your method or give me a link to it

do the authors know any topics except greedy algorithms and mathematics?

have you read all the problems yet bruh

100+ people fail to look at the tags

100+ people succeeded in realizing tags aren't everything. Nearly every problem can be given a mathematics tag since there are only so many things which do not do basic maths at the very least.

yes, B might have a math tag, but it's not a math problem. Expected value is just there, and they want you to compute the sum over all ways of choosing k nodes instead (at least that's how most people solved it)

C is a dp problem with a cool optimization.

D is a (i gather) data structure problem.

I haven't checked exactly what E is on.

Only A perhaps is greedy algorithms, but i would classify it as ad hoc.

ABC : it would be unfair and unbalanced to put other things as the starter problems because they are intended to be approachable by beginners.

D: alright, i (and cf tags ig) consider them separate, but sure if you want to. [dp is memoized recurrence => maths :yay:]

E: wtf?? i dont get what greedy aspect there is in the observation.

Either way, get good and stop (falsely) complaining about problem topics. Even after losing rating due to div1 B and C, i liked both of them, and B is definitely one of my favourites.

In general, stop focusing on problem topics and focus on beauty instead. (however this round definitely wasnt at fault with relation to topics)

Evidence : 100+ people disagreeing with you

i dont care about my rating so i don't hate contests because i lose rating on them

also i dont care about people opinion because the crowd is the crowd, the crowd does not have its own opinion, they see -100 on the post and immediately put a dislike without even reading it

as for me, greedy problems are complete shit, because i think its not ok when the hardest part of a problem is to guess one fact(almost always without proof) and then write a few lines of simple code that magically work

combinatorics problems are complete shit because its just math formula but you have to code it

i think that cp problems are something different where you can come up with a solution step by step, not just using one idea, its not interesting to sort through ideas and hope that the next unproven fact will get AC, its interesting to come up with a solution step by step, first n^3, then n^2, then n*log while using several ideas

in this contest unfortunately i didnt see any such problem, maybe div1D or div1E are interesting, but these tasks were not in div2

You just can't solve greedy problems, ha-ha!

"in this contest unfortunately i didnt see any such problem, maybe div1D or div1E are interesting, but these tasks were not in div2"

D1C, D1B were both exactly what you mentioned. D1C involved getting a bruteforce dp of n * a[i], then seeing optimization to n^2, and then seeing optimization to nlog^2. D1B involved solving for k = 2, observing for odd k, and then solving for general even k. I highly doubt you solved these tasks if you say the above.

These problems also tend to be harder due to being multistep, so expecting them as div2A or B is quite unreasonable.

"combinatorics problems are complete shit because its just math formula but you have to code it"

right......okay :). You are missing out on a very nice field of problems, but whatever your loss.

"as for me, greedy problems are complete shit, because i think its not ok when the hardest part of a problem is to guess one fact(almost always without proof) and then write a few lines of simple code that magically work"

its very few times that i have ever needed to guess an assumption in recent contests, i think most problems are solvable without such guessing (ofcourse i can only speak for <= my level). This doesnt mean people dont guess, they certainly do. Its just not required by any means in most problems.

div1B has two subtasks, one is to note that if k is odd then the answer is 1 (k = 2 is actually a trivial case), and one is for even k >= 2

in div1C as for me the key idea is "Observe that for every $$$u$$$, there are only $$$2$$$ possible different values for $$$f_{u,w}$$$"; after noticing this fact the problem is trivial

congrats on finding them trivial? what more do you want exactly? Just because they are trivial to you doesnt mean they arent multistep.

how exactly do you intend to find problems which have 5 difficult steps in div2. Your hopes are quite unreasonable in a 2hr long contest unless youre GM+.

"its not interesting to sort through ideas and hope that the next unproven fact will get AC"

In B and C, tell me exactly which idea felt to you like this?

You talk about div1C as if you have solved it. Yes, noticing that there are only 2 different values is a step forward, but it's just a very small step, the whole idea is so much harder to get. I spent literally 1.5 days solving that problem, and in the end it seems as a very hard problem with a lot of steps.

hindsight solvers after reading editorial :)

"how exactly do you intend to find problems which have 5 difficult steps in div2. Your hopes are quite unreasonable in a 2hr long contest unless youre GM+" a lot of div2E-div2F with quite few contest solves have 3-4 big steps in its solution, for example 1797F - Li Hua and Path or 1805F2 - Survival of the Weakest (hard version).

"Just because they are trivial to you doesnt mean they arent multistep" — thus every problem can be called multistep, for me those problems have 1 big step and then they are easy

maybe for u they are multistep, but i see only 1 big idea in every problem

"In B and C, tell me exactly which idea felt to you like this?" — B is trivial; i wasn't able to solve C, when i got wa2 i threw the contest in the trash, but i don't really remember (fortunately) what i was writing, i think some piece of shit with no proof, oh no, with "it's easy to see".

You are just helping my point :) you sent a GM level problem and an LGM level problem, which was exactly my point.

So getting the dp in div1C is that trivial huh? Congrats i guess, you find stuff trivial this master found hard.

I was talking about div1B and C :). Also congrats on learning guesses dont work always, just like it shud be. If you had some actual problem solving skills instead of complaining, you wud have solved div1A fast instead of guessing and rage quitting

"You are just helping my point :) you sent a GM lebel problem and a LGM level problem, which was exactly my point" — those problems were in div2 and some people solved it. "Congrats i guess, you find stuff trivial this master found hard" — i dont see anything wrong with it. "Also congrats on learning guesses dont work always, just like it shud be. If you had some actual problem solvimg skills instead of complaining, you wud have solved div1A fast instead of guessing and rage quitting" — i wasnt even raging, idk were u get it. I am 100% sure i would solve div1A, but i dont want to solve problems if I dont get positive emotions from solving them

u want to say that problems are ok and i am just crying that i couldnt solve it, no, i dont care about it, i just dont like the problems thats all, if you think that contest feedback should be only positive, i am so pity for you

"Some people solved them"

Right some lgms and igms and gms did indeed solve them, i cant find a single div2 person who solved them in both the problems, atleast if he did solve them, he didnt solve the others. Thanks for proving my point for me. You tried to give examples of good problems, in my mind you did the opposite. I dont think an unsolved problem is a good problem.

No i dont need contest feedback to be all positive, but i want feedback to be justifiable. Your complaints are not. There are other complaints about div1CDE being classical, i am fine with that complaint, it probably is reasonable.

At some point, i need to stop replying since you clearly dont care about what others have to say having made up your mind. Have a good day :).

i have my point of view, u have yours, so its ok i guess that we are just discussing, but if u dont want to talk anymore, thats ok, good luck

Can anybody explain D1/C more detailed: what do we store in dp and what sets do we merge?

dp[u][val] = minimum operations to make the subtree of u have all paths to leaves with xor val.

then, you get the transitions dp[u][val] = min(dp[u][y] + 1 (change the subtree root), sum(dp[v][val]))

It becomes obvious that if ANS is the lowest value of dp[u][val] over all val, then all dp[u][val] are at most ANS + 1.

Thus, you just store the sets of val with lowest value of dp[u][val] (i.e. = ANS)

I hope you understood

Thank you, I understood

you're welcome :)

Can anyone explain D1/B2 in more simpler way, the editorial seems a bit tough to grasp stuff for me.

For odd $$$k$$$, let's assume the node which is giving the minimum sum of distance is node $$$x$$$. Now, if you move in whatever direction, the distance from each of the nodes will change, since $$$k$$$ is odd, you will have either a majority of node's distances increased or else the majority of node's distances decreased. The count of increment and decrement of nodes distances can't be the same as $$$k$$$ is odd. Hence you will never end up with that minimum distance.

For even $$$k$$$, I intuitively thought that if you consider a particular edge, then on both sides it should have $$$k/2$$$ nodes each, following a similar reasoning as above. In this case, the count of increment and decrement of nodes distances will be equal and hence there will be no change in the final summation. Now the only thing that remained to prove was that the summation will be the minimum among all the possible options. If you try to move in a direction that doesn't lie on the line of connecting $$$k/2$$$ pairs on both sides, then you will end up increasing both sides' node distances and hence the total will increase. So, that edge will surely give the minimum value as it lies on the line connecting the $$$k/2$$$ pairs.

Hope it helps in the visualization :)
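For small trees the intuition can be checked against a brute force (an illustrative helper; a good node is one minimizing the total distance to the special nodes):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Count good nodes for one fixed set of special nodes by brute force:
// BFS from every node, then count the nodes with minimal total distance.
int countGoodNodes(int n, const vector<pair<int,int>>& edges,
                   const vector<int>& special) {
    vector<vector<int>> g(n + 1);
    for (auto [u, v] : edges) { g[u].push_back(v); g[v].push_back(u); }
    vector<long long> total(n + 1, 0);
    for (int s = 1; s <= n; ++s) {
        vector<int> d(n + 1, -1);
        queue<int> q; q.push(s); d[s] = 0;
        while (!q.empty()) {
            int u = q.front(); q.pop();
            for (int v : g[u]) if (d[v] < 0) { d[v] = d[u] + 1; q.push(v); }
        }
        for (int t : special) total[s] += d[t];
    }
    long long best = *min_element(total.begin() + 1, total.end());
    return (int)count(total.begin() + 1, total.end(), best);
}
```

On a path $$$1\!-\!2\!-\!3\!-\!4$$$ with special nodes $$$\{1,4\}$$$ (even $$$k$$$) the whole path is good, while adding node $$$2$$$ (odd $$$k$$$) leaves a unique good node.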

Thank you, now I got the solution, I was initially trying to count each node's contribution, which is relatively harder it seems, than doing the same for edges.

Yeah, I was also trying to count each node's contribution during the contest :(

In the tutorial of D, when you write $$$sz_i$$$, do you actually mean the size of the subtree rooted at $$$i$$$ or do you just mean $$$s_i$$$?

Note that the editorial defines two similar but different things: $$$S_i$$$ for the size of the subtree (does not depend on choosing special nodes), and $$$sz_i$$$ for the number of special nodes in the subtree (so this is while reasoning for a specific choice of special nodes).

Check my other comment if the editorial seems a bit confusing.

div2D.

how do you understand that in the end we should add 1? i explain it like this: we are counting the probability that edge u-v is "good". for a given subset of special nodes, all "good" vertices form a single path. it means that for every edge in this path the probability we are finding is the same. but since we actually want to find the probability that vertex v is good, we can notice that in this path #vertices = #edges + 1. so for every subset of special nodes there always exists exactly one vertex that we didn't count when summing up over all edges. so, in the end we should add one.

are there any easier ways to understand this? thanks in advance.

Yes, we are calculating the expected number of edges for all the pairs. Since all the good vertices will be connected so expected number of good vertices is 1 more than the expected number of edges.

The editorial for E is written poorly. I can't understand a thing

EDIT: Ok, I'm finally starting to understand it. Please add more explanations tho

lis05 can u please help me understand the solution ?

Let's calculate $$$dp(v,x)$$$ — how many operations we need to make every leaf in the subtree of $$$v$$$ have value of $$$x$$$

For any node $$$v$$$ suppose that the minimum $$$dp(v,x)$$$ over all $$$x$$$ is attained at $$$x$$$ equal to $$$x_1$$$. Then we can make any $$$dp(v, x_2)$$$ equal to $$$dp(v, x_1) + 1$$$ by first making all leaves equal to $$$x_1$$$ in $$$dp(v, x_1)$$$ operations and then XORing the value of node $$$v$$$, which takes one more operation. So $$$dp(v, x)$$$ can take only two distinct values.

For any node v we will maintain a few things (call these dp as a structure, function dp is another thing)

The minimum value of $$$dp(v, x)$$$, call it $$$cost_v$$$

Set $$$S_v$$$ which contains all values $$$x$$$ that we can make all leaves be equal to $$$x$$$ in $$$cost_v$$$ operations

For leaves calculating all these things it trivial.

Now suppose we have a node $$$v$$$, and its children $$$u_1$$$, $$$u_2$$$, $$$...$$$ We have already calculated dp for the children, now we want to calculate dp of $$$v$$$.

First, note that for each child we will either make all leaves in its subtree equal to some value in its $$$S_u$$$ in $$$cost_u$$$ operations, or to any other value not in $$$S_u$$$ in $$$cost_u + 1$$$ operations. Also, we want all leaves in the subtree of $$$v$$$ be equal to some single value. So if this value is $$$y$$$, and it appears in $$$S_u$$$ for $$$cnt$$$ different children $$$u$$$, then the cost to make all leaves in the subtree of $$$v$$$ be equal to $$$y$$$ will be $$$sum(cost_u)$$$ for $$$u$$$ that have $$$y$$$ in their sets, plus $$$sum(cost_u + 1)$$$ for $$$u$$$ that don't have $$$y$$$ in their sets. Simply said, it's just $$$sum(cost_u + 1)- cnt$$$.

Now it's obvious that we need to maximize $$$cnt$$$. It means that we need to find all numbers $$$y$$$ that appear the most in total in the sets of children of $$$v$$$. It is very important to say that these numbers will be included in $$$S_v$$$, and none others will.

The question is: how do we find these numbers efficiently? Let's use small-to-large merging. For each child $$$u$$$ we will merge its dp into $$$dp(v)$$$. This merging will maintain $$$S$$$ correctly. To do this we need to add a new thing to the dp; call it $$$S_{cnt}$$$. For each value $$$y$$$ present in the set $$$S_v$$$, $$$S_{cnt,v}$$$ will contain the number of occurrences of that value (how many times it appears in the sets already merged into $$$v$$$).

Also, some elements in $$$S_{cnt}$$$ may not be in $$$S$$$. And if any element is in $$$S$$$ but not in $$$S_{cnt}$$$, we say its number of occurrences is $$$1$$$, and if we access it in $$$S_{cnt}$$$, we set it to $$$1$$$.

Also, for a dp let's maintain the greatest number of occurrences in $$$S_{cnt}$$$; call it $$$gCnt$$$.

So now we are merging a smaller dp (call it a) into the bigger one (call it b).

First we merge $$$S_a$$$ into $$$b$$$. To do this, we must add the occurrences of each element of $$$S_a$$$ into $$$S_{cnt,b}$$$. Note that if an element's number of occurrences becomes greater than $$$gCnt_b$$$, we need to clear all elements from $$$S_b$$$ and update $$$gCnt_b$$$. After this, if the new $$$gCnt_b$$$ equals the element's number of occurrences, we push it into $$$S_b$$$. Note that we are still maintaining $$$S_{cnt,b}$$$, but we are not deleting elements from it.

An important thing is that when you delete an element from $$$S_b$$$, you need to insert its number of occurrences into $$$S_{cnt,b}$$$.

After we are done merging $$$S_a$$$ into $$$b$$$, we merge $$$S_{cnt,a}$$$ into $$$b$$$. It is very similar; however, we need to check that we are not merging any element twice (because it may be present in $$$S_a$$$ but not in $$$S_{cnt,a}$$$).

After we are finished with all the children, we calculate $$$cost_v$$$ by subtracting the number of occurrences of any element of $$$S_v$$$ from $$$\sum (cost_u + 1)$$$.

Before we exit node $$$v$$$ we must set every element of $$$S_{cnt,v}$$$ equal to 1. To do this efficiently, we just clear the entire map (this works because our program counts the $$$S_{cnt}$$$ value of a number $$$x$$$ as 1 if it is present in $$$S$$$ but not in $$$S_{cnt}$$$).

Now if 0 is in the dp of the root, we print $$$cost_{root}$$$. Otherwise, print it plus 1.

The problem was really hard and I tried my best to explain my idea. If anyone reading this has any questions, feel free to ask me in any way.

thank you :) .

If we try to move the chosen node from its father to i, the variation of cost is k-2si.

Why is it k-2si? Could anyone help explaining this?

See This.

Thanks.

What's the intended solution of D's easy version? I can't even figure that out.

There are 3 cases:

You might have noticed that the answer for $$$k=1$$$ or $$$k=3$$$ is $$$1$$$ (try some cases).

Now, it remains to find the answer for $$$k=2$$$. For any $$$2$$$ nodes in the tree, the good nodes for them are simply all the nodes lying on the simple path connecting them. So I just summed up the contribution (how many times it is a good node) for each node, and then divided the sum by the total number of cases (which is $$$\binom{n}{2}$$$).

So for $$$k = 1$$$ and $$$k = 3$$$ the answer is $$$1$$$.

For $$$k = 2$$$: for each node $$$i$$$, we need to pick two nodes such that no two of them are from the same subtree hanging off $$$i$$$.

For example, here $$$R$$$ cannot be a good node if the friends are at the two red nodes.

The number of ways to choose $$$2$$$ nodes such that $$$i$$$ can be a good node (considering $$$i$$$ as root) can be calculated as $$$\binom{n}{2} - \sum_{v \in child(i)} \binom{cnt_v}{2}$$$. Here $$$cnt_v$$$ is the number of nodes in the subtree of $$$v$$$. The final answer is the sum of the answers for each $$$i$$$ divided by the total number of ways to choose $$$2$$$ nodes, i.e. $$$\binom{n}{2}$$$.

Here is my submission link.

May I ask what the 1 in the 1D answer means, and why it is added?

If you mean div2D, it's clear that in any path, the number of vertices (which is the number of good islands) equals 1 + the number of edges. Let's call these the good edges.

(In fact, considering the edges is easier to understand since the subtree and the other part of the tree is actually divided by an edge.)

We define $$$p_i$$$ as the probability of the following: the edge between $$$i$$$ and its father divides the tree into two parts, each of which has $$$\frac{k}{2}$$$ special islands (i.e., this edge is a good edge). Then $$$\sum p_i$$$ is the expected number of good edges, and adding 1 to it turns it into the expected number of good islands.

($$$E(X+Y)=E(X)+E(Y)$$$; that is, adding 1 to the final answer is equivalent to adding 1 to the answer of each path.)

I never figured out what p means. I understand now, thanks bro

"The number of vertices (which is the number of good islands) is equal to 1 + the number of edges." Yeah, this line is important. Please add this to the editorial to avoid all the confusion. Thanks Imdie

There is a solution with $$$O(n\log n)$$$ complexity for Div.1 C using segment tree merging. But its constant factor is so large that it has the same performance as the $$$O(n\log^2n)$$$ solution.

The tutorial of Div.1 B2 had problems with $$$p$$$.

As was mentioned by elsantodel90 before, $$$p_{u,v}$$$ actually represents the probability that the edge $$$(u,v)$$$ is a so-called 'good edge', meaning both of its ends are good nodes. Thus, the formula to compute $$$p$$$ is like this:

$$$p_{u,v}=\frac{\binom{size_u}{S_u}\binom{size_v}{S_v}}{\binom{n}{k}},\qquad S_u=S_v=\frac{k}{2},$$$

where $$$size_u$$$ and $$$size_v$$$ are the sizes of the two parts the edge splits the tree into.

(Although $$$S_u$$$ equals $$$S_v$$$, the meaning is completely different.)

[Deleted]

Can anyone explain solution for E? I can't understand the last paragraph of the editorial.

EDIT: Solved the problem. Still, the editorial is hard to read and understand, especially the last paragraph

Essentially what they say is the following:

We are solving for some $$$dp[i][j]$$$, which is the minimum number of operations to make the xor of every path to a leaf in the subtree of $$$i$$$ equal to $$$j$$$. Now if $$$dp[i][k] < dp[i][j]$$$, with one operation we can turn all $$$j$$$-s into $$$k$$$-s. What this means is that we only need to maintain the set of values which achieve the optimal dp answer.

From here we need to learn how to compute the answer efficiently. For some node $$$v$$$, we have solved the dp for all its children. Now let's take a certain value $$$j$$$ and call the number of its appearances in the children's sets $$$x$$$. Then we need (number of children $$$- x$$$) operations to make all paths to leaves equal to $$$j$$$. So it is optimal to use the value $$$j$$$ which appears the highest number of times.

We will use the small-to-large technique to pass the sets up the tree (check my solution if you need to). If all values appear once, we do nothing. If the maximum number of appearances of a value is $$$mx > 1$$$, we keep only the values which appear $$$mx$$$ times, and only one copy of each. This way the number of values we keep at least halves.

Sorry for the sloppy text, hope it helps though.

Thanks. Lol, why do people downvote this additional editorial for E?

I see you are iterating through mp[v] at the end of dfs in your solution. How is the time complexity not at least quadratic?

This is due to the small to large technique. Here is a reference: https://codeforces.com/blog/entry/67696

Thanks for the nice contest , I am finally a CM :)

Congrats :)

Thanks

Nice contest

For div 2 D2, let's assume that $$$1$$$ is the node I choose as the root to count every subtree size. From this, I can find every combination of special nodes for which $$$i$$$ is a good node, except for the root (since the subtree sizes do not cover the whole tree for a non-root node). But how do I count the combinations of special nodes for which $$$1$$$ is a good node? Can someone explain? I'm confused :(

nvm i can just reroot it (edit: nvm again i'm still confused)

Nice problem E, thanks for sharing. Learned something new. Basically, in the author's implementation from the editorial, they do a dfs. Upon returning from each node `v` during the dfs, the invariant is that `A[v]` XOR'ed with every value `K` from `set(v)` gives the set of values for which the minimum number of operations is needed (to make the xor of every `leaf->v` path in the subtree rooted at `v` equal to `A[v] ^ K`). This allows for an effective small-to-large merge. I implemented similar logic in Java: https://codeforces.com/contest/1824/submission/205282718

Please consider replacing the codes in the editorial for E and F. Currently they are absolutely unreadable. A lot of people use the codes not only to debug their own solutions, but also to help understand the solutions.

I can't understand some of the editorials. Is there any other way?

Which problem? I might be able to help you

please explain problem C div2!

If the first person you ever place is of type -1, then you can't place any person of type -2 in the future. If the first person you ever place is of type -2, then you can't place any person of type -1.

So, there are 3 cases:

Persons of type -1 get placed but persons of type -2 do not.

Persons of type -2 get placed but persons of type -1 do not.

Both persons of type -1 and of type -2 get placed.

The first case is easy: place persons of type -1 greedily. If there is a person of type > 0 who wants the current seat, it's always optimal to place them first, and then continue placing persons of type -1.

The second case is very similar to the first one, you just go from right to left instead.

The third case is different. If we want persons of both type -1 and type -2 to be present in the seats, then we first need to give a seat to some person of type > 0. After this, we can greedily place persons of type -1 and -2 independently. But if we want to place a person of type -1 or -2 in a seat where a person of type > 0 wants to sit, we place that person first, and then continue placing those of types -1 and -2. Repeat this as long as you can.

To do the third case efficiently, we use prefix and suffix sums of how many persons of type > 0 there are. If you place the starting person at position x, you can then calculate how many persons of type -1 and -2 will be able to get a seat after placing all the persons with types > 0.

Thank you it was of great help!

Why is the answer for this testcase

n = 12, m = 10

7 1 1 7 9 6 -2 -2 -2 -1 7 2

my answer = 9

system answer = 8

Why is the answer not 9?

Hey, can you help me understand how to implement the third case?

for a person of type x > 0:

denote by cntL the number of people who have type > 0 and want to sit to the left of x, not counting duplicates

denote by cntR the number of people who have type > 0 and want to sit to the right of x, not counting duplicates

denote by cnt1 the number of people with type -1, and by cnt2 the number of people with type -2

denote by distLeft and distRight the number of seats to the left of x and the number of seats to the right of x

the answer is $$$1 + cntL + min(distLeft - cntL, cnt2) + cntR + min(distRight - cntR, cnt1)$$$

https://codeforces.com/contest/1825/submission/205098188

Thank you so much, but it was this: `ans = max(ans, 1 + cntR + cntL + min(distLeft - cntL, cnt1) + min(distRight - cntR, cnt2));`

Since we are placing them from left to right for -1's

I may have confused cnt1 and cnt2, sorry. But I hope you got the idea; if not, DM me.

I solved it. It was exceeding the time limit at first, but then I figured it out. I really like how you explain. Thanks!

In F, how does the structure that the authors maintain solve the problem of modifying a submatrix and querying the sum of a submatrix?

1+1=?

Solved F. A very good problem. However, the editorial is very bad: too little explanation, and the transition from "setting on a submatrix" to "adding on a subarray" is not explained at all. The data structure that can maintain historic sums should be in the editorial too.

Unfortunately, "the data structure that can maintain historic sums" is a well-known algorithm in China (it appears in the contest that every Chinese competitive programmer takes part in) :( and that may be the reason why the authors did not elaborate on it. However, it's not a good idea to leave it out of the official editorial.

Why isn't it a good idea? If you prepare a problem and use a poorly-known idea in it, you must explain it in the editorial. Otherwise what's the point in having an "empty" editorial?

Oh, my fault, I used two "not"s there and that may have confused you... So literally I meant that the authors should have introduced the technique.

You can read about historic information here.

Hello! I cannot understand, in problem D, why the answer is equal to $$$1+\sum p_i$$$?

1825B - LuoTianyi and the Table. Can anyone please tell me what the configuration of the table is for the testcase:

2 3

7 8 9 -3 10 8

The statement in div1E is a bit off. It talks about the minimum of $$$c_v$$$ values over edges, so I assumed that there should be at least one edge. However, in the tests there are examples where the optimal way is to take only one vertex. In this situation I think it would be better not to define the $$$C$$$ value and $$$A$$$ value separately, but instead a combined value $$$D$$$ that takes the minimum of $$$a_v$$$ over vertices and $$$c_e$$$ over edges.

I have another way to come up with the solution of D1B, which may be more intuitive (?)

First, by linearity of expectation, we have

and observe that an island $$$v$$$ is good iff there are no more than $$$\frac{k}{2}$$$ people in any child's subtree when rooted at $$$v$$$, because otherwise we could move to that child and get a better cost.

So an island is bad iff at least one of its children has subtree size greater than $$$\frac{k}{2}$$$ (actually, exactly one). Then we can enumerate which subtree is bad and sum over it, which gives us

Finally, observe that $$$\sum\limits_{x = 0}^{\left\lceil \frac{k}{2} \right\rceil - 1}\binom{n - size_u}{x}\binom{size_u}{k - x}$$$ will be calculated both from $$$u$$$ to $$$v$$$ and from $$$v$$$ to $$$u$$$, which sums up to

which is equal to $$$\sum\limits_{x = 0}^{k}\binom{n - size_u}{x}\binom{size_u}{k - x} = \binom{n}{k}$$$ when $$$k$$$ is odd, and equal to $$$\binom{n}{k} - \binom{size_u}{k / 2}\binom{n - size_u}{k / 2}$$$ otherwise.

So for odd $$$k$$$, summing this over all edges gives $$$(n-1)\binom{n}{k}$$$, and the answer will be

and for even $$$k$$$, the sum is $$$(n-1)\binom{n}{k} - \sum\limits_{(u, v)\in E}\binom{size_u}{k / 2}\binom{n-size_u}{k/2}$$$, and the answer will be

which can be calculated using dfs and some combo stuff in $$$O(n)$$$.

Bad coding for problem D. Not that readable at all...:)

Hello, I would appreciate some help with my logic for B. LuoTianyi and the Table.

The formula I came up with when solving it before looking at the editorial was: $$$|f -d_{1}| \cdot ((max(m, n) - 1) \cdot min(m, n)) + |f-d_{2}| \cdot (min(m, n) - 1)$$$

Where $$$f$$$ is the number furthest from the average of the elements of the array $$$b$$$, $$$d_{1}$$$ is the number with the greatest distance from $$$f$$$, and $$$d_{2}$$$ is the number with the second greatest distance from $$$f$$$.

My logic passes the first test cases, but I am failing on test 2. I don't think my implementation is the problem, but here is my code if anyone wants to check.

Can I solve D1 of Div. 2 by rerooting the tree and calculating the sum of distances every time?