Let's discuss problems.

Does anyone have an idea for D?



Codeforces (c) Copyright 2010-2017 Mike Mirzayanov


Server time: Apr/23/2017 16:42:15 (c2).


How to solve C, H, G, K?

H: The answer is the number of ways we can delete two vertices from the tree so that the size of its maximum matching doesn't change.

Compute this by subtree dp:

dp[x][u][k][r] is the number of ways to delete r vertices from the subtree of x such that the size of its maximum matching is k smaller than the size of the optimal matching for the subtree of x. If u = 0 then x is not matched and not removed (we can later match it to its parent), and if u = 1 then x is removed or used in the matching.

You should count each way of removing vertices only once, so you should fix a strategy that produces exactly one maximum matching for each set of removed vertices. One way to do this is to match a vertex to its first child that is not matched or removed.

C: First, I put a vertex on each edge: [v1-v2] => [v1-e1-v2]. Then the problem can be solved with a dominator tree, but I got TLE. So I rewrote the program to be faster, faster, faster..., and got AC with 0.999s out of the 1s limit!!!! (with +14... XD)

H: If you know Edmonds' blossom algorithm, you can convert this problem to: "we have a forest whose vertices are colored red, blue, or yellow; count the pairs (u, v) with color(u) == red && color(v) == red such that path(u, v) contains a blue vertex." Constructing the forest and solving this problem are easy, but my poor English can't explain the conversion...

G: Choose a root arbitrarily (respecting all the requests), then recursively solve for each connected component of the "I want this guy to be my superior" requests.

Wow. Before reading your solution I thought my algorithm was wrong. But....

I am very surprised it was solved by just 7 teams. It was meant to be one of the easier problems. On AE it was solved by ~75 people, including some Div2 guys.

Some of these desperate Div2 guys like me spent 24 hours on the problem on AE ;)

C: We will disregard all vertices which are not reachable from 1.

We calculate DP[i] in topological order, which is

For some vertex v

It's not hard to prove why this works. O(n+m)

K: Sort the array a. For each i we want to check whether we can distinguish a_{i} and a_{i + 1}. We can distinguish them if there exist x, y such that x is the sum of some subset of {a_{1}, ..., a_{i - 1}}, y is the sum of some subset of {a_{i + 2}, ..., a_{N}}, and we can tell apart x + y + a_{i} and x + y + a_{i + 1} using the scale.

We define dpL[i][j]: the minimum z such that z % C = j and z is the sum of some subset of the first i elements. Similarly define dpR for suffixes. With these DP arrays, straightforward checking of the conditions above works in O(NC^{2}). It's not hard to improve it to O(NC log C) with a priority_queue.

Why should we check only a_{i} and a_{i + 1}?

We get 2^{N} numbers by measuring all subsets. These numbers can be divided into two multisets of size 2^{N - 1} by the inclusion of a particular item. We can't distinguish two items when this division results in the same pair of multisets, thus this is an equivalence relation.

Why is this true? We also know which elements were used for every number in the multiset.
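A minimal sketch of the dpL table described above (the plain O(NC) per-prefix knapsack; the priority_queue speed-up and the symmetric dpR are left out, and the function name is mine):

```cpp
#include <bits/stdc++.h>
using namespace std;

// dpL[i][j]: the minimum z with z % C == j such that z is the sum of some
// subset of the first i elements (LLONG_MAX / 4 marks "no such subset").
vector<vector<long long>> build_dpL(const vector<long long>& a, long long C) {
    const long long INF = LLONG_MAX / 4;
    int n = a.size();
    vector<vector<long long>> dp(n + 1, vector<long long>(C, INF));
    dp[0][0] = 0; // the empty subset has sum 0
    for (int i = 1; i <= n; i++) {
        for (long long j = 0; j < C; j++) {
            dp[i][j] = dp[i - 1][j];                      // skip a[i-1]
            long long pj = ((j - a[i - 1]) % C + C) % C;  // take a[i-1]
            if (dp[i - 1][pj] < INF)
                dp[i][j] = min(dp[i][j], dp[i - 1][pj] + a[i - 1]);
        }
    }
    return dp;
}
```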

Upd.: I understand why this is true: a_i is equivalent to a_j if for every subset of {1..n} \ {i} \ {j} we get the same value. So I got a solution using two iterators, roughly 2*N*NC operations.

Upd.2: I proved it from my understanding but still don't understand your proof.

Suppose that the multiset corresponding to the subsets with a_{1} (call it X) and the multiset corresponding to the subsets with a_{2} (call it Y) are the same. In this case, the multiset corresponding to the subsets that contain a_{1} and don't contain a_{2}, and the multiset corresponding to the subsets that contain a_{2} and don't contain a_{1}, are the same. In each of these sets, obviously the numbers are sorted by the part that comes from {a_{3}, ..., a_{N}}, so for any subset S of {a_{3}, ..., a_{N}}, the "weight" (shown by the scale) of S plus a_{1} and the weight of S plus a_{2} are the same. That means even if someone secretly swaps a_{1} and a_{2}, you can't notice that.

Yes, this proof is easier than mine. Thanks.)

"For each i we want to check whether we can distinguish ai and ai + 1" — omg, that's so trivial and brilliant and the same time :o

We can improve it to O(NC) by using a sliding window with a deque.

C: We've got a topological sort of the graph vertices, sorted the edges by the left endpoint according to the topological sort, and processed the edges. Let's call a vertex that can be reached by two necessary paths "good". Let's process the edge A -> B. If B has not been visited earlier, remember its parent A. Otherwise find LCA(A, parent[B]) and check whether there is a good vertex on the path between LCA(A, parent[B]) and A or between LCA(A, parent[B]) and parent[B]. If there is such a vertex on either of these paths, we call the vertex B "good".

H: Calculate the maximum matching bottom-up. For each matching edge, color black its parent-side vertex (all other vertices are white). Cut the edges connecting two black vertices. For each black leaf, remove it and the adjacent white vertex. Remove black vertices that have at least three neighbours. Then we can connect two vertices if and only if they are white vertices from different connected components.
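The first step (the bottom-up maximum matching with the black coloring) can be sketched as below; the later cutting and removal steps are omitted, and the function name is mine:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Bottom-up greedy maximum matching on a tree rooted at 0: a vertex is matched
// to one of its unmatched children if possible (this greedy is optimal on
// trees). black[v] marks the parent side of each matching edge.
int tree_matching(int n, const vector<vector<int>>& adj, vector<char>& black) {
    black.assign(n, 0);
    vector<char> matched(n, 0), seen(n, 0);
    vector<int> order, par(n, -1), st = {0};
    seen[0] = 1;
    while (!st.empty()) {              // iterative preorder DFS
        int v = st.back(); st.pop_back();
        order.push_back(v);
        for (int u : adj[v])
            if (!seen[u]) { seen[u] = 1; par[u] = v; st.push_back(u); }
    }
    int size = 0;
    for (int i = n - 1; i >= 0; i--) { // reverse preorder: children first
        int v = order[i];
        for (int u : adj[v])
            if (par[u] == v && !matched[u] && !matched[v]) {
                matched[u] = matched[v] = 1;
                black[v] = 1;          // v is the parent side of the edge
                size++;
            }
    }
    return size;
}
```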

Problem D:

If you sort (a_i, b_i) by a_i, then you need to do this: take the maximum element, change all numbers to the right of it by b_j += a_j (its own adding value), and delete the maximum (put -inf). We couldn't understand how to implement it.

Where can we find results?

http://opencup.ru/

Explain, please, the samples in problems I and O (Div 2).

How to solve O,E div2?

Some ideas for D:

Sorry for posting completely wrong stuff

The problem is pretty similar to Bear and Bowling from one of Errichto's rounds. You can write the DP and optimize it with e.g. a splay tree. You can also write a not-very-nice segtree or sqrt convex hulls (the last approach actually got a nonzero number of points on AE, but it's really unlikely to squeeze it to accepted).

Problem A please!

Probably the easiest way: create a trie containing all the binary words describing the correct assignments of the variables x_{1}, x_{2}, ..., x_{n} (e.g. if the assignment is okay, we want the trie to contain the word 101). Do it using the following construction: start with a single leaf (no variables). Add the variables in decreasing index order.

When adding a variable x_{j} to the trie T describing the choices on variables x_{j + 1}, ..., x_{n}, we need to take two copies of T and join them with a root (one for each possible choice of x_{j}). Then, for each clause starting with x_{j}, remove the appropriate subtree from the trie (e.g. a clause failed by the assignment 010 means we remove the subtree 010).

Doing it straightforwardly, we'd have an O(2^{n} + m) algorithm (m — the total length of the clauses). However, instead of copying the tries, we can make them persistent and get linear runtime.

In other words, we need to count the number of binary words that don't contain any of the forbidden substrings (for every substring we have one particular place where it is forbidden).

Let's assume we fixed some prefix of our word. The key observation is that this prefix affects our future choices only through the smallest i such that there exists a forbidden substring starting at position i that so far matches our choices. So we can use a dp that, for every such state, keeps the number of such prefixes. The number of such states is the total length of the forbidden substrings. We need to cleverly create the dp for position k + 1 if we already know the dp for position k. What we need to solve is the following problem: "Assume we are in state S. What will the state be if we append 0/1 to the end?". You can answer that if you keep the states in some clever tree structure (maybe it has some clever name like KMP automaton or Aho-Corasick, I don't know).

Can you please explain the last part (appending 0/1 and finding new states after that) in more detail?

During the contest we came up with everything except this moment.

Fix a position k. Let s_{1}, ..., s_{l} be the forbidden substrings that contain position k. We consider their prefixes ending not further than the k-th position — call them r_{1}, ..., r_{l}.

We say that r_{i} is a descendant of r_{j} iff r_{j} is a suffix of r_{i}. We say that r_{i} is a child of r_{j} if r_{i} is a descendant of r_{j} and there is no r_{k} in between (meaning r_{k} is a descendant of r_{j} and r_{i} is a descendant of r_{k}). You can see that everything is well defined and we get a tree structure on those strings. Another way of getting it is to consider the trie created from the reversed strings r_{i}, mark their ends in that trie, and compress it so that only the marked nodes remain.

Let's fix some particular node in that tree (it corresponds to some r_{i}). Assume its next symbol is 0. Then if we append 0, we go to the state corresponding to the same string. If we append 1, we need to go to some string that is our ancestor. Out of those we need to choose the longest one whose next symbol is 1. These can be determined by running a simple dfs that goes from top to bottom and keeps an array of two elements meaning "what is the lowest node that is my ancestor and is extended by appending 0/1". When doing this we need to keep track of whether some string that is our ancestor will end if we append 0/1, in which case there is no next state for the corresponding transition.

Here is my code, I think it is pretty clear: http://ideone.com/psR3HF
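For the simpler variant where every substring is forbidden at any position (not at one fixed place, as in the actual problem), the "clever tree structure" is exactly the Aho-Corasick automaton over the binary alphabet. A self-contained sketch of that simplified counting, with names of my own choosing:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Count binary strings of length n that avoid every pattern in `patterns`
// anywhere. States are Aho-Corasick automaton nodes over {0, 1}.
long long count_avoiding(int n, const vector<string>& patterns) {
    vector<array<int, 2>> nxt = {{-1, -1}};
    vector<int> fail = {0};
    vector<char> bad = {0};
    for (const string& p : patterns) {           // build the trie
        int v = 0;
        for (char c : p) {
            int b = c - '0';
            if (nxt[v][b] < 0) {
                nxt[v][b] = (int)nxt.size();
                nxt.push_back({-1, -1});
                fail.push_back(0);
                bad.push_back(0);
            }
            v = nxt[v][b];
        }
        bad[v] = 1;
    }
    queue<int> q;                                // BFS: failure + goto links
    for (int b = 0; b < 2; b++) {
        if (nxt[0][b] < 0) nxt[0][b] = 0;
        else q.push(nxt[0][b]);
    }
    while (!q.empty()) {
        int v = q.front(); q.pop();
        bad[v] |= bad[fail[v]];
        for (int b = 0; b < 2; b++) {
            int u = nxt[v][b];
            if (u < 0) nxt[v][b] = nxt[fail[v]][b];
            else { fail[u] = nxt[fail[v]][b]; q.push(u); }
        }
    }
    vector<long long> dp(nxt.size(), 0);         // dp over automaton states
    dp[0] = 1;
    for (int step = 0; step < n; step++) {
        vector<long long> ndp(nxt.size(), 0);
        for (int v = 0; v < (int)nxt.size(); v++) {
            if (!dp[v] || bad[v]) continue;
            for (int b = 0; b < 2; b++)
                if (!bad[nxt[v][b]]) ndp[nxt[v][b]] += dp[v];
        }
        dp = ndp;
    }
    return accumulate(dp.begin(), dp.end(), 0LL);
}
```

The per-position variant of the problem replaces this automaton with the compressed tree of prefixes r_{i} described above, rebuilt as k advances.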

Full solution to D:

We use the first three observations that lewin noted. Using these we arrive at what -XraY- said. Unfortunately, I don't know any way it can be implemented efficiently (Marcin_smu, do you know one? I didn't get whether that's what you do).

We assume the glades are sorted according to a_{i}. Let p_{1}, ..., p_{n} be the optimal order of adding glades to our partial solutions (that means, if we want to get the answer for k days we need to collect mushrooms from the glades with indices p_{1}, ..., p_{k} in ascending order of a values).

We need to turn our thinking upside down. Instead of determining first p_{1}, then p_{2} and so on, for each k we will determine the optimal order if we consider only the glades with the k smallest a_{i} values. The key observation here is that adding the (k+1)-th glade doesn't affect the order of adding the previous glades (which follows from what -XraY- said)! So the (k+1)-th order will be the k-th order with the (k+1)-th glade inserted somewhere in the middle. Given this, we can determine those orders using your favorite balanced binary tree.

By the way, I think that lewin's third claim is highly nontrivial. My proof used some augmenting-paths-like argument, but I don't remember it clearly now.

I like that problem very much; it's sad nobody solved it, but I am not very surprised, since it took me 5 hours to come up with the solution and 5 hours to code it :P (this was my first time implementing a BBST). However, on AE ~10 people got full marks (but we were given almost 2 days), so it was not impossible to solve.

I think I have a very easy proof of this fact.

Suppose i_{0} is the item with the biggest b_{i} (if there are many of them, we choose arbitrarily). I claim that we should take this item at some moment for each k. It's easy: if we don't take it, then we can change our first move and take i_{0} as our first move.

Also, it's trivial that for each k we should take the items in increasing order of a_{i} (because we can swap two adjacent items in our sequence). And we know that we should certainly take item i_{0}. So, if we just delete item i_{0} and for every item i such that a_{i} > a_{i_0} we increase b_{i} by a_{i}, then we get the same problem as before.

Ahh, yeah, that's nice :).
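A brute-force sketch for checking these claims on tiny inputs, under my reading of the statement (an assumption: visiting glade i on day t collects b_i + t·a_i mushrooms, one glade per day, and a chosen set is visited in increasing order of a, per the exchange argument above); all names are mine:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Value of visiting the glades in `idx` over days 1..k, in increasing a order.
long long subset_value(const vector<long long>& a, const vector<long long>& b,
                       vector<int> idx) {
    sort(idx.begin(), idx.end(), [&](int x, int y) { return a[x] < a[y]; });
    long long total = 0, day = 1;
    for (int i : idx) total += b[i] + day++ * a[i];
    return total;
}

// The best size-k subset by exhaustive search (indices in increasing order).
vector<int> best_subset(const vector<long long>& a, const vector<long long>& b,
                        int k) {
    int n = a.size();
    long long best = LLONG_MIN;
    vector<int> arg;
    for (int mask = 0; mask < (1 << n); mask++) {
        if (__builtin_popcount(mask) != k) continue;
        vector<int> idx;
        for (int i = 0; i < n; i++)
            if (mask >> i & 1) idx.push_back(i);
        long long v = subset_value(a, b, idx);
        if (v > best) { best = v; arg = idx; }
    }
    return arg;
}
```

On small random instances one can check that the optimal sets are nested (S_k ⊂ S_{k+1}) and that each S_k with k ≥ 1 contains the item with the largest b, matching the claims above.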

lewin's idea can be implemented in roughly n^{3/2} time, but such a complexity of course doesn't get accepted.

How to solve B?

First, consider the table dp[i][j] — the minimum cost to transform the first i symbols of the pattern into the first j symbols of the text. We have dp[0][j] = 0 (because the substring can start at any position) and the usual transitions.

Now for each difference j - i (say j - i = d) and each value s (0 ≤ s ≤ k), let's find the maximum i such that dp[i][i + d] = s; call it f[s][d]. If one of these maximums is equal to |P|, then we have found a required substring.

f[0][d] is just equal to the lcp of P and T[d..]. To calculate f[i][d] it's enough to consider f[i - 1][d + {-1, 0, 1}], skip one symbol in P, in T, or in both, move forward by the corresponding lcp, and choose the maximum of the three values.

Our solution for C:

The main idea is to track the "sources" of single paths, to quickly check whether different incoming edges to a vertex are from the same "source".

Let's associate with every graph vertex two additional values:

`paths[i]` is the number of paths that lead to vertex `i` from vertex `1`, capped at two: `0 <= paths[i] <= 2`. Initially `paths[i] = 0` for every `i > 1` and `paths[1] = 2`.

`source[i]` is the pointer to the vertex from where the single path that leads to vertex `i` takes its "source" (it only matters for vertices that have `paths` set to `1`).

A "source" of a path is a vertex with `paths` = `1` such that there is an edge leading to it from a vertex with `paths` = `2`.

Algorithm: let's calculate `paths` for each vertex in topological order.

Calculating for vertex `i`: let's check, for every edge that leads to vertex `i`, whether this edge gives us an additional path to this vertex. Let's call `j` the vertex that is the start of the current edge leading to `i`.

If `paths[i] == 0 && paths[j] > 0`, then we can add a path from `j` to `i` and set `paths[i] = 1`. Now we should set the "source" of this path: if `paths[j] == 2`, then we create a new "source" of a single path that starts in vertex `i`; if `paths[j] == 1`, then we "push the stream" from some other "source" and save the pointer to its initial "source": `source[i] = source[j]`.

If `paths[i] == 1`, then we can only accept another path from another "source" or from a vertex with `paths[j] == 2`.

If `paths[i] == 2` already, then we cannot change anything and we can skip all other edges to this vertex.

In the end, just print all indices with `2` in the `paths` array. Complexity: O(N + M).

Why does it work: a "source" of single paths is just the endpoint of an edge from a two-path vertex. Each source spreads its "single stream" along all edges reachable from it, and all those vertices will point to this "source". All the vertices that can be reached only through this source use the edge that created it: all of them become unreachable if we remove this edge. If a vertex can be reached from multiple sources (or directly from two-path vertices via two edges), it becomes two-path itself and can create its own sources. Two-path vertices can create any number of new sources, because we only need two paths, and in the end, to get the answer, we can choose two for each vertex.
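The counting loop above, as I understand it, can be sketched like this (my 0-based reconstruction, so the start vertex `1` becomes 0; the function name is mine):

```cpp
#include <bits/stdc++.h>
using namespace std;

// paths[i] counts paths from vertex 0, capped at 2; source[i] tracks the
// "source" of a single path. Returns the vertices with paths[i] == 2.
vector<int> two_path_vertices(int n, const vector<pair<int, int>>& edges) {
    vector<vector<int>> g(n);
    vector<int> indeg(n, 0);
    for (auto& [u, v] : edges) { g[u].push_back(v); indeg[v]++; }
    vector<int> topo;                      // Kahn's topological order
    queue<int> q;
    for (int i = 0; i < n; i++) if (!indeg[i]) q.push(i);
    while (!q.empty()) {
        int v = q.front(); q.pop();
        topo.push_back(v);
        for (int u : g[v]) if (!--indeg[u]) q.push(u);
    }
    vector<int> paths(n, 0), source(n, -1);
    paths[0] = 2;
    for (int j : topo) {                   // every predecessor of i comes first
        if (paths[j] == 0) continue;
        for (int i : g[j]) {
            if (paths[i] == 2) continue;   // nothing can change anymore
            if (paths[i] == 0) {
                paths[i] = 1;
                // a new source starts at i if j already has two paths
                source[i] = (paths[j] == 2) ? i : source[j];
            } else if (paths[j] == 2 || source[j] != source[i]) {
                paths[i] = 2;              // a second, independent path
            }
        }
    }
    vector<int> res;
    for (int i = 0; i < n; i++) if (paths[i] == 2) res.push_back(i);
    return res;
}
```

For the diamond 0->1, 0->2, 1->3, 2->3 the two branches have different sources, so vertex 3 ends with `paths = 2`; with 0->1, 1->2, 1->3 everything downstream shares the single source at vertex 1.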

Probably this is the easiest solution to implement: only a topological sort is required. And the solution code is also pretty short.

What do you think about this?

Where can I find the statements?