waaitg's blog

By waaitg, history, 20 months ago, In English

1708A - Difference Operations

idea: Imakf

Solution
Code

1708B - Difference of GCDs

idea: waaitg

Solution
Code

1707A - Doremy's IQ

idea: Imakf

Solution
Code of solution 1
Code of solution 2

1707B - Difference Array

idea: Imakf and waaitg

Unfortunately, a harder version of this problem appeared in a CodeChef contest here.

In fact, we came up with this problem more than a year ago, before the problem in that CodeChef contest. We apologize again.

Solution
Code

1707C - DFS Trees

idea: waaitg

Solution
Code

1707D - Partial Virtual Trees

idea: Imakf and waaitg

Solution
Code

1707E - Replace

idea: Wolam, Imakf and waaitg

Solution
Code

1707F - Bugaboo

This name was chosen almost one year ago, when this problem was created.

idea: Imakf and waaitg

Solution
Code
  • Vote: I like it
  • +172
  • Vote: I do not like it

»
20 months ago, # |
  Vote: I like it +65 Vote: I do not like it

c was tough today :/

  • »
    »
    20 months ago, # ^ |
      Vote: I like it +3 Vote: I do not like it

    Yeah lol, but it was really enjoyable for me (even though I didn't solve it)

»
20 months ago, # |
Rev. 4   Vote: I like it +72 Vote: I do not like it

From testing, I had a solution for E that works in $$$O(n \log n)$$$, assuming a claim that I guessed is true. Unfortunately, no one could prove its correctness or find a counterexample. Here is the solution; hopefully someone can prove my claim :P

Spoiler
  • »
    »
    20 months ago, # ^ |
    Rev. 2   Vote: I like it +10 Vote: I do not like it

    I am able to prove a linear bound for it, but not able to prove the exact bound you mention.

    Here is a brief sketch of the proof. Consider the important segments at a given level, say $$$lvl$$$. By important segments at level $$$lvl$$$, I mean all the important segments for which the answer is equal to $$$lvl$$$. So for level $$$0$$$, $$$[0,n-1]$$$ is the only important segment, and so on.

    Now, a small detour: instead of looking at segments like $$$[l, l+1, \ldots, r]$$$, look at them as $$$[l,l+1],\cdots,[r-1,r]$$$. So, when I say the endpoints of a segment, I mean $$$[l,l+1]$$$ and $$$[r-1,r]$$$. If the segment has just two points, then it has only one unique endpoint. Now, suppose we're at level $$$lvl+1$$$, and we want to find all the important segments at this level. Consider the endpoints of all the important segments at all the previous levels (endpoints defined as above). My claim is that

    • There can't be any important segment at $$$lvl+1$$$ that strictly contains any of these end-points. So, for any end-point, either it's outside of a segment at this level, or, it's the end-point for that segment, but it can't be strictly inside.

    • Segments at the same level can only share up to an end-point. i.e. they can either be disjoint, or touch each other at a point, or have a common end-point (have two points in common)

    The proof of this claim is by induction. For the first part of the claim, say there was an important segment $$$[l',r']$$$ at this level strictly containing a previously occurring endpoint $$$[i,i+1]$$$. Since this endpoint must have been the endpoint of some important segment previously, WLOG $$$[i, j]$$$ was the important segment at a previous level. Then $$$f([i, j])$$$ must either be equal to the entire array, or contain some important segment $$$[a,b]$$$ at a previous level. Now, if its image contains some segment at a previous level, then notice that $$$f([i+1,j-1])$$$ must be strictly inside that segment (not even touching the ends); more precisely, $$$f([i+1,j-1])\subseteq [a+1,b-1]$$$. If not, there would have been a smaller segment containing this segment, which is a contradiction. A similar argument applies when the image is equal to the entire array: if $$$f([i+1,j-1])$$$ touched either end of the array, we would have had a smaller segment and the segment wouldn't be important.

    Now, $$$r<j$$$ (as the contrary would imply $$$[l,r]$$$ contains $$$[i,j]$$$), so $$$r \in [i+1,j-1]$$$ and hence $$$f([r])$$$ falls strictly inside the segment $$$[a,b]$$$. Also, for $$$[l,r]$$$ to be an important segment at $$$lvl+1$$$, there must be a segment at previous level that is inside $$$f([l,r])$$$. But notice that by our induction hypothesis, any such segment must either be disjoint with $$$[a,b]$$$, or inside it completely, or outside it but touching $$$[a,b]$$$ at one end, or inside/outside it but intersecting with $$$[a,b]$$$ at exactly one endpoint. One can show that for all possible scenarios, either $$$f([l,i+1])$$$ contains this segment, or $$$f([i,r])$$$ contains it, and our induction hypothesis is true. Similar arguments can be applied to prove the second part of the claim.

    Using this observation, we can deduce that each endpoint, when introduced at any level, can be part of two segments (one to the left and one to the right). Finally, all endpoints themselves can be important segments, giving an upper bound of roughly $$$3n-6$$$.

»
20 months ago, # |
  Vote: I like it +68 Vote: I do not like it

$$$a_i - a_{i-1}$$$ forces

»
20 months ago, # |
  Vote: I like it +54 Vote: I do not like it

It's been a while since there was a contest this stupidly difficult

»
20 months ago, # |
  Vote: I like it +7 Vote: I do not like it

wow, thanks for superfast editorial

»
20 months ago, # |
  Vote: I like it +53 Vote: I do not like it

Just listen to my absurd mistake in B: I thought that all $$$a[i]$$$ and all $$$gcd(i, a[i])$$$ had to be distinct. When I realized my mistake, I solved the problem in about a minute. But before that, I had spent more than an hour.

»
20 months ago, # |
  Vote: I like it +10 Vote: I do not like it

woah new lesson learned: brute force sometimes does work

»
20 months ago, # |
  Vote: I like it +1 Vote: I do not like it

I liked the idea of B. It is a good math problem for beginners.

»
20 months ago, # |
Rev. 2   Vote: I like it 0 Vote: I do not like it

Fine contest. I love problems 1707A (Div. 2 C) and 1707B (Div. 2 D) because they are interesting to solve. Thank you.

»
20 months ago, # |
  Vote: I like it +94 Vote: I do not like it

Every single problem is great, but the whole problemset is disappointing.

  • »
    »
    20 months ago, # ^ |
      Vote: I like it +6 Vote: I do not like it

    Yes, exactly, it is too boring to have several problems with the same topic, especially ones like this.

»
20 months ago, # |
  Vote: I like it +33 Vote: I do not like it

"DFS Trees" is essentially formulated as "given a spanning tree of a graph, find all vertices for which the tree is a DFS tree". This task is kind of folklore, I guess... Another interesting variation would be to find all vertices for which the given tree is a BFS tree.
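For reference, the folklore check can be sketched like this (my own illustrative Python, not anyone's submission): with unordered adjacency lists, a rooted spanning tree can arise from some DFS iff every non-tree edge connects an ancestor with a descendant, which entry/exit times detect.

```python
def is_dfs_tree(n, tree_adj, extra_edges, root):
    """Can the spanning tree (given as adjacency lists) be a DFS tree from root?"""
    tin, tout = [0] * n, [0] * n
    timer = 0
    # iterative DFS over the tree to compute entry/exit times
    stack = [(root, -1, False)]
    while stack:
        v, p, exiting = stack.pop()
        if exiting:
            tout[v] = timer
            timer += 1
            continue
        tin[v] = timer
        timer += 1
        stack.append((v, p, True))
        for u in tree_adj[v]:
            if u != p:
                stack.append((u, v, False))

    def is_ancestor(a, b):  # is a an ancestor of b (or equal)?
        return tin[a] <= tin[b] and tout[b] <= tout[a]

    # no non-tree edge may be a cross edge
    return all(is_ancestor(u, v) or is_ancestor(v, u) for u, v in extra_edges)
```

For a path tree 0-1, 0-2 with the extra edge (1, 2), rooting at 0 makes (1, 2) a cross edge, while rooting at 1 does not.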

  • »
    »
    20 months ago, # ^ |
      Vote: I like it +28 Vote: I do not like it

    You look very similar to one of the greats..."Dale steyn"

    For Those Who Dont know
  • »
    »
    20 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    The MST detail ensures that we won't use an edge $$$(u, v)$$$ before using the tree edges on the path between $$$u$$$ and $$$v$$$. I think that makes the problem much easier. Does your solution work for any spanning tree?

    • »
      »
      »
      20 months ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      It depends on how you formulate it... If you're given unordered adjacency lists and you need to check whether there exists some ordering for which the given spanning tree is a DFS tree, then the solution would be exactly the same, i.e. check that there are no cross edges for each starting vertex.

      I have a hunch that it should be possible to modify the algorithm when you're given specifically ordered adjacency lists, but I feel like it's going to be a bit more tedious...

      By the way, 102129F - Milliarium Aureum is formulated almost the same as DFS Trees, but asks to count the vertices for which the given tree is a widest-paths tree, rather than a DFS/BFS tree.

»
20 months ago, # |
  Vote: I like it 0 Vote: I do not like it

Able to solve only 1.

»
20 months ago, # |
  Vote: I like it +1 Vote: I do not like it

In problem C solution 2, why is the greedy choice valid that she should test a contest whenever it increases her IQ? On one hand it does increase her IQ, but it also brings her closer to the limit q, beyond which she cannot test any contest.

  • »
    »
    20 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    You can use the same reasoning as in solution 1, that there is an optimal solution where all the skipping happens before the tests that decrease the IQ.

»
20 months ago, # |
  Vote: I like it +6 Vote: I do not like it

Div.2 C Was SICK

»
20 months ago, # |
Rev. 3   Vote: I like it +2 Vote: I do not like it

For Difference Array, I have another solution:

First we have an $$$O(n^2)$$$ simulation algorithm.

Notice that $$$n$$$ equal numbers generate $$$n-1$$$ zeros after one operation, and that multiple copies of a number have the same effect as a single copy (the difference is only the number of zeros at the beginning). Also, if a number $$$m>0$$$ is a sum of distinct non-negative numbers $$$a_i$$$, there can be at most $$$O(\sqrt m)$$$ distinct numbers among them; for this problem $$$m$$$ is $$$5\times 10^5$$$. So we can de-duplicate the original sequence. Then only $$$O(\sqrt m)$$$ numbers are left, and if the simulation is performed on them, the complexity is just $$$O(m)$$$.
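A minimal sketch of this de-duplicated simulation, as I understand it (illustrative Python, not the author's code; `final_value` is my own naming). It keeps only the distinct values together with the real length, so duplicates and zeros are accounted for:

```python
def final_value(a):
    """Repeatedly sort and take adjacent differences until one element is left,
    tracking only the distinct values plus the true array length."""
    length = len(a)
    vals = sorted(set(a))
    while length > 1:
        if len(vals) == 1:
            # all remaining elements are equal: the next operation makes them all 0
            return 0
        # differences of the distinct values are exactly the nonzero differences
        diffs = {vals[i + 1] - vals[i] for i in range(len(vals) - 1)}
        if length > len(vals):
            diffs.add(0)  # duplicates in the real array turn into zeros
        vals = sorted(diffs)
        length -= 1
    return vals[0]
```

For example, `final_value([0, 0, 2, 5])` returns 1, matching the full simulation [0,0,2,5] -> [0,2,3] -> [1,2] -> [1].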

  • »
    »
    20 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    can you send the code here

  • »
    »
    20 months ago, # ^ |
    Rev. 2   Vote: I like it 0 Vote: I do not like it

    Deleted

  • »
    »
    20 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    You say the complexity becomes O(m); how is this true? Shouldn't the simulation be run N times? You can't just remove the leading zeros and simulate, because they still affect your answer. Shouldn't the complexity be O(N * sqrt(m))?

  • »
    »
    20 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Why are we neglecting zeroes? For example, if we have a = [0,0,2,5] and we neglect the 0's, we get [2,5] -> [3]. But the last element should actually be 1, as [0,0,2,5] -> [0,2,3] -> [1,2] -> [1]. Can you please explain?

    • »
      »
      »
      20 months ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      I don't mean that you should delete all 0s; if there are many 0s, keep only one.

»
20 months ago, # |
  Vote: I like it +20 Vote: I do not like it

Why, in problem 2C/1A, should we assume $$$Q = 0$$$ at first? That is to say: why is starting with $$$Q \neq 0$$$ not optimal?

  • »
    »
    20 months ago, # ^ |
    Rev. 2   Vote: I like it +15 Vote: I do not like it

    That's what I hate about cf editorials. Such very important details are always omitted and it is considered to be ok.

    I think we can reason like that.

    Imagine a chart q = q(i). It is a staircase.

    Lemma: among equal values, if you don't take some of them, the taken ones always have greater indices than the not-taken ones. Otherwise you can swap them and get a result that is at least as good.

    Now suppose that in an optimal solution Q = x != 0. Look at the rightmost value you haven't taken, and take it. Obviously nothing changes to the left of the taken value on the chart, because q remains the same there. What happens on the right? On the right, all values are smaller than q by definition (you took the rightmost one), so you can still take them just the same even though q decreases by 1. Now you have obtained a solution that is just as good, with Q = x - 1. By induction you can reduce it to Q = 0, so it is enough to always consider only Q = 0.

»
20 months ago, # |
Rev. 4   Vote: I like it +1 Vote: I do not like it

Might be helpful for somebody who is/was struggling with B (alternative idea):
Just loop i from 1 to n and for each i check whether l is divisible by i (l % i == 0); if so, ans[i] = l.
Otherwise ans[i] = l + (i - (l % i)), which is the nearest number to l that is divisible by i.


Code
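The idea above can be sketched as follows (a hypothetical Python version; the function name and the `None`-for-impossible convention are my own). Since each `ans[i]` is a multiple of i, gcd(i, ans[i]) = i, so all the gcd values are pairwise distinct:

```python
def construct(n, l, r):
    """For each i in 1..n, take the smallest multiple of i that is >= l."""
    ans = []
    for i in range(1, n + 1):
        m = (l + i - 1) // i * i  # nearest number >= l divisible by i
        if m > r:
            return None  # no valid value fits in [l, r]
        ans.append(m)
    return ans
```

For instance, `construct(3, 5, 9)` gives [5, 6, 6], whose gcds with the indices are 1, 2, 3.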
  • »
    »
    20 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    You are so great! It helped me to understand the whole process.

»
20 months ago, # |
  Vote: I like it 0 Vote: I do not like it

https://codeforces.com/contest/1708/submission/164510862 Why is this submission of mine working for problem D? I am not sure whether it should pass within the constraints.

»
20 months ago, # |
  Vote: I like it +1 Vote: I do not like it

Today's C was quite hard !

»
20 months ago, # |
Rev. 2   Vote: I like it +2 Vote: I do not like it

Finally I'm Pupil! The round was cool, and I really liked problem C. In my opinion the idea of that problem was similar to This problem

»
20 months ago, # |
  Vote: I like it +6 Vote: I do not like it

wait Difference Array is brute force? :*)

»
20 months ago, # |
  Vote: I like it +1 Vote: I do not like it

Anyone else failing test case 5509 on problem C? I would love to know what that case is

  • »
    »
    20 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    1
    6 2
    6 1 1 1 1 2
    Your program prints 011111 instead of 111111.

    • »
      »
      »
      20 months ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      How did you get the test case? I was making the same mistake, and sometimes it is difficult to think of the test case where the code fails.

      • »
        »
        »
        »
        20 months ago, # ^ |
          Vote: I like it +3 Vote: I do not like it

        Method 1:
        Here's the logic of the wrong solution:

        Logic
        Long and chaotic explanation follows (sorry for my bad English)

        Method 2 (simpler):
        His code failed on testcase 5477, so I've copied his code and replaced

        this

        with

        this

        Then you just have to submit the modified code and read the checker log.

        checker log


»
20 months ago, # |
  Vote: I like it 0 Vote: I do not like it

For those who solved problem C, can you guys explain how did you come up with the required observations during contest?

  • »
    »
    20 months ago, # ^ |
      Vote: I like it +6 Vote: I do not like it

    It was clear that after we reach the last index it is useless to have q more than 1. So I thought of binary search, because I thought there would be a certain index after which we have to test every contest, but after thinking a bit I could not find any monotonic behavior.

    So I thought of a simple greedy approach: keep q at 1 for index n and start traversing backwards; if a[i] >= q then we increase q, else we keep q the same, and when q becomes equal to the original IQ we stop. Now all the places after this index will be 1, and before this index the places whose values do not exceed the IQ will be 1.

    Solution Link
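One possible formulation of this kind of backward greedy (my own Python sketch, which may differ in details from the linked submission): in reverse order, a hard contest raises the running IQ bound until it reaches the original q.

```python
def doremy(n, q, a):
    """Greedy in reverse: cur is the IQ Doremy would have here going forward."""
    cur = 0
    marks = []
    for x in reversed(a):
        if x <= cur:
            marks.append('1')   # easy at this point: test it for free
        elif cur < q:
            cur += 1            # spend one point of IQ on this hard contest
            marks.append('1')
        else:
            marks.append('0')   # out of budget: skip
    return ''.join(reversed(marks))
```

On the test case mentioned elsewhere in this thread (n=6, q=2, a=[6,1,1,1,1,2]) it marks all six contests.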

  • »
    »
    20 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Just brute force from all the possible variants) This problem was unexpectedly hard for me.

    • »
      »
      »
      20 months ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      Same here! I was not able to find a proper greedy solution during the contest.

  • »
    »
    20 months ago, # ^ |
    Rev. 3   Vote: I like it +7 Vote: I do not like it

    I think I spent like, 20 minutes trying different ideas, but it all ultimately boiled down to "she can do at most q hard contests, which ones should she do?"

    The complication was that doing a hard contest early can cause some later contests to become hard (that otherwise wouldn't have been hard); this seemed really annoying at first, but then I realized that it simply means that doing a hard contest later is better than doing a hard contest earlier.

    This led to the result that the q hard contests for her to do should ideally be the last q hard contests that she encounters. In other words, once she loses her first point of IQ, she should not skip any more contests after this point. Once I mentally proved this (greedy exchange argument that skipping a hard contest B from after losing the first point of IQ cannot be better than skipping an earlier hard contest A instead), the biggest hurdle was overcome.

    There were still further issues (like how this can lead to solutions where she does fewer than q hard contests, but I had to assure myself that they're still optimal), since I didn't realize I could binary search and tried to process the vector in reverse, but I think at this point, a correct implementation should eventually fall in place regardless.

    My submission, FWIW: https://codeforces.com/contest/1708/submission/164517038

    (I kept imagining her derpy LoLK face as she makes her decisions and struggles on hard contests, but I'm not sure if this helped me realize the critical observations or distracted me from discovering them earlier)

»
20 months ago, # |
  Vote: I like it 0 Vote: I do not like it

Spent the whole time optimizing my indexed_multiset in C, only to realise just after the contest that binary searching the answer is sufficient. :( Nice bugaboos btw!

  • »
    »
    20 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    I also spent the whole time just finding out how to use the indexed_multiset, after which I was not even able to test it locally :(

»
20 months ago, # |
  Vote: I like it +1 Vote: I do not like it

In case someone is interested, solution 1 of Div 1A can be done in linear complexity too. Let dp[i] be the minimum value of q such that we can test all contests from i....n-1. Then dp[n-1]=1 and dp[i] = (a[i]<=dp[i+1])?dp[i+1]:dp[i+1]+1. Now we can iterate over values of i -> we take all tests with a[i] <=q (this can be maintained using prefix sums) for indices <=i-1 and then add dp[i] to it, the value of i resulting in the largest sum will be optimal.
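Under my reading of this comment, the linear solution could look like this (hypothetical Python; `dp[n] = 0` stands for the empty suffix, and the function name is my own):

```python
def max_contests(n, q, a):
    # dp[i]: minimum IQ needed to test every contest from i to n-1
    dp = [0] * (n + 1)
    dp[n - 1] = 1
    for i in range(n - 2, -1, -1):
        dp[i] = dp[i + 1] if a[i] <= dp[i + 1] else dp[i + 1] + 1
    best, easy = 0, 0  # easy = count of a[j] <= q among j < i (a prefix sum)
    for i in range(n + 1):
        if dp[i] <= q:
            # test the free contests before i, then everything from i onward
            best = max(best, easy + (n - i))
        if i < n:
            easy += a[i] <= q
    return best
```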

  • »
    »
    20 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    I always have trouble reconstructing the chosen contests from the dp array; can you please explain that part too?

    And if possible, please share some problems that require this technique.

    Thanks in advance for your help.

»
20 months ago, # |
  Vote: I like it 0 Vote: I do not like it

Can someone explain C?

  • »
    »
    20 months ago, # ^ |
    Rev. 2   Vote: I like it 0 Vote: I do not like it

    Let's say there are $$$k$$$ elements such that $$$a_i \le q$$$. Now suppose the answer is $$$x$$$; then $$$x \ge k$$$, so there are $$$e = x - k$$$ extra elements that we need. The optimal way of choosing those $$$e$$$ elements is to take them from the end of the array. So if we want to check whether it is possible to make the answer equal to $$$x$$$, we can just simulate the process and see whether it is possible to choose those extra $$$e$$$ elements from the end of the array. Knowing this, we can apply binary search to find the best answer.
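This check-plus-binary-search can be sketched as follows (an illustrative Python version of one common formulation, not anyone's submission: binary search the longest fully testable suffix, and test only the free contests before it).

```python
def solve_bs(n, q, a):
    def can(m):
        # can Doremy test all of the last m contests, arriving with IQ q?
        cur = q
        for x in a[n - m:]:
            if cur == 0:
                return False    # no IQ left, cannot test at all
            if x > cur:
                cur -= 1        # a hard contest costs one point of IQ
        return True

    lo, hi = 0, n
    while lo < hi:              # largest feasible suffix length (monotone)
        mid = (lo + hi + 1) // 2
        if can(mid):
            lo = mid
        else:
            hi = mid - 1
    # before the suffix she only tests contests that cost no IQ
    return ''.join('1' if x <= q else '0' for x in a[:n - lo]) + '1' * lo
```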

»
20 months ago, # |
  Vote: I like it 0 Vote: I do not like it

I would fail a phone screen very easily if the question was problem B

»
20 months ago, # |
  Vote: I like it -28 Vote: I do not like it

This was the worst contest i've ever had

»
20 months ago, # |
  Vote: I like it +3 Vote: I do not like it

This is the perfect example of editorial and solution to ruin a beginner's mind and crush their confidence ;-;

»
20 months ago, # |
  Vote: I like it +61 Vote: I do not like it

Can someone explain how to derive complexity in 1707B - Difference Array:

So in each operation, you spend $$$O(n \log n)$$$ time to sort the new array and decrease $$$S$$$ by at least $$$n-1$$$.

After the first operation, $$$S$$$ is $$$O(a_n)$$$. The complexity is $$$O(A \log A)$$$, where $$$A=\max(n, a_n)$$$.

I understand the first two sentences, but I still struggle to understand why the complexity is $$$A \log A$$$.

  • »
    »
    20 months ago, # ^ |
      Vote: I like it +3 Vote: I do not like it

    Glad I'm not the only one who doesn't understand that. It should be explained in the editorial.

  • »
    »
    20 months ago, # ^ |
    Rev. 2   Vote: I like it +11 Vote: I do not like it

    I assume this complexity is simply incorrect, or correct in a strict sense of big-O notation, not the way we informally use it.

    Because even the author's solution from this editorial successfully passes the CodeChef tests (the same problem), where A is limited by 10^18.

    • »
      »
      »
      20 months ago, # ^ |
        Vote: I like it +17 Vote: I do not like it

      True.

      I think the correct complexity would be $$$O(n \log a_n)$$$, as described in codechef's solution

»
20 months ago, # |
  Vote: I like it 0 Vote: I do not like it

Did anyone solve Div. 2 C using an indexed set? If yes, can you please share the code?

»
20 months ago, # |
  Vote: I like it 0 Vote: I do not like it

For C I used this logic: traversing forward (i = 1 to n), if a[i] <= q then appear in the contest. If not, check whether any occurrence of a[j] = q exists with j > i; if there is no such occurrence, appear in the contest and decrement q, otherwise skip the contest.

(But if n - i <= q, appear in the contest regardless.) Here is my submission: https://codeforces.com/contest/1708/submission/164519432. Can anybody help me? It was not accepted and I can't find the error.

  • »
    »
    20 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    For example, if you consider a case like 5 5 5 5 5 5 5 5 5 5 5 5 5 2 1 1 1 1 1 with q = 2, you will miss out on more 5's than by skipping the 2.

»
20 months ago, # |
  Vote: I like it -15 Vote: I do not like it

Unfortunately, a harder version of this problem has appeared in a codechef contest here.

SAME PROBLEM

  • »
    »
    20 months ago, # ^ |
    Rev. 2   Vote: I like it +1 Vote: I do not like it

    At least look at the constraints before commenting.

    • »
      »
      »
      20 months ago, # ^ |
        Vote: I like it +6 Vote: I do not like it

      Bro I submitted the same solution to both of these problems codechef codeforces

      • »
        »
        »
        »
        20 months ago, # ^ |
        Rev. 2   Vote: I like it +5 Vote: I do not like it

        Funnily enough, even the solution from the Codeforces editorial successfully passes the CodeChef tests after a couple of algorithm-unrelated fixes.

        Which means the asymptotic analysis here was sloppy; no wonder no one understands it.

»
20 months ago, # |
  Vote: I like it +19 Vote: I do not like it

1707C - DFS Trees: "If findMST(x) creates an MST, there is no cross edge in the graph."

Is there any explanation or proof of this?

  • »
    »
    20 months ago, # ^ |
    Rev. 3   Vote: I like it +16 Vote: I do not like it

    Well, an MST is unique iff no non-MST edge of equal weight can substitute for an MST edge (we call an edge included in the MST an MST edge) such that the result is still an MST. (You can find a Chinese version here.)

    The problem requires the weights of all edges to be pairwise different, so the MST must be unique.

    So we have the unique MST and the corresponding MST edges. If we choose a root rt in the MST and some non-MST edge is a cross edge (say its two endpoints are u and v, and neither u is in the subtree of v nor v in the subtree of u), then the MST cannot be a DFS tree starting from rt, since a cross edge is not allowed to appear in a DFS tree of an undirected graph. Otherwise, each non-MST edge is bigger than every edge on the path between u and v, so it can never be traversed, since we visit the smallest edge from the current vertex first.

    • »
      »
      »
      20 months ago, # ^ |
      Rev. 4   Vote: I like it +16 Vote: I do not like it

      Additionally, after proving the theorem above, we can find that a root is disqualified iff there exists at least one non-MST edge $$$(u,v)$$$ such that the MST path between $$$u$$$ and $$$v$$$ ($$$u,v$$$ excluded) includes it.

      Thus, we can calculate how many non-MST edges make it disqualified for every vertices, costing $$$O(nm)$$$ time in total.

      We can use rerooting DP to optimize it. First, calculate the number of cross edges for the case where the root is 1, and then start a DFS. When we travel out of a path contributing to the answer, or into one, we update the answer.

      UPD: I've found that the previous algorithm has an incorrect time complexity; it can be hacked with a graph containing a vertex of high degree, because of a bad implementation in the rerooting DP.

      Sorry for misleading you.

      I fixed the bug by preprocessing, for each vertex, toward which neighbor it goes to enter a path; however, this introduces binary lifting on the tree (without the LCA part) to find the $$$k$$$-th ancestor of every vertex, bringing the running time to $$$O(m\log n)$$$ (excluding finding the MST), similar to the official solution.

      Code: 164904402

      I hope someone can come up with a solution to 1708E - DFS Trees that runs in $$$O(m)$$$ time in the rerooting process and is still an "online" algorithm, i.e. it obtains the answers of most vertices during the DFS, instead of using techniques like differencing and accumulating on the tree.

»
20 months ago, # |
  Vote: I like it +124 Vote: I do not like it

Codeforces editorials be like:

It is easy to see that apsodnvapowrbnapwoiv eo + awevoi ^ 23 + 34 = x/2. From this, obviously x + 239 vonwef oij 238 + ovw= 23t239ug 3ig --> 2039jjoi. Finally, we see that oawijpoi24n = 23gin 3p23 + 3gi2bj 9 )g3j.

Now we can easily solve the problem in O(log n + m^xy + q)

»
20 months ago, # |
  Vote: I like it 0 Vote: I do not like it

Can anyone please tell me where my code fails for the third problem? I first marked all the numbers less than k as 1 and then proceeded to the greater numbers.

https://codeforces.com/contest/1708/submission/164523613

»
20 months ago, # |
  Vote: I like it 0 Vote: I do not like it

Can someone give me a counter test for my submission for problem C? Submission

»
20 months ago, # |
  Vote: I like it +17 Vote: I do not like it

Alternative solution for Div1C:

Each of the edges not in MST gives us a restriction on which vertices can be good. We can view those restrictions as values, i.e. for each edge we can either add 1 to all good vertices or subtract 1 from all bad vertices. In the end vertices with values equal to the total number of additions are good.

The trick is that we actually don't need to update all good or all bad vertices for every edge, but just 2.

Pick an any vertex $$$s$$$ and run dfs on MST starting from $$$s$$$.

Now suppose that we are at some vertex $$$v$$$ and there is an edge $$$(v, w)$$$ in the original graph, but not in the MST. We can also assume that we have already visited $$$w$$$, since edges are bidirectional.

There are 2 cases:

  1. $$$w$$$ is not an ancestor of $$$v$$$: then the only possible good vertices lie in the subtrees of $$$v$$$ and $$$w$$$, so we can increment the values of $$$v$$$ and $$$w$$$ by $$$1$$$

  2. $$$w$$$ is an ancestor of $$$v$$$: let $$$p$$$ be an ancestor of $$$v$$$, such that $$$w$$$ is the parent of $$$p$$$, then all vertices that are descendants of $$$p$$$, but not descendants of $$$v$$$ are bad, so we can decrement value of $$$p$$$ by $$$1$$$ and increment value of $$$v$$$ by $$$1$$$

After that the values of each vertex $$$v$$$ is sum of the values of vertices on the path between $$$v$$$ and root $$$s$$$, so we can just run another dfs from $$$s$$$ to calculate the answer.

164545959

  • »
    »
    20 months ago, # ^ |
      Vote: I like it +1 Vote: I do not like it

    Thanks very much. The editorial's code is similar to your statement, but your statement is much clearer, more precise, and more complete than the editorial's.

    some additions:

    For case 1, why are all other vertices bad? Because when we call findMST from them and reach v first, we will always reach w by adding (v,w), which is not in the MST; reaching w first is symmetric.

    Case 2 has a similar reason.

»
20 months ago, # |
  Vote: I like it 0 Vote: I do not like it

Do you have a bound for the number of operations in Div1E? Some submissions are assuming $$$O(n\log n)$$$ moves.

»
20 months ago, # |
Rev. 2   Vote: I like it +77 Vote: I do not like it

Notably, the "brute-force" solution for Div1B works for the harder version ($$$a_i$$$ up to $$$10^{18}$$$) too.

Theorem (TL;DR). The "brute-force" solution (handling zeros separately) has $$$O(n \log C)$$$ time complexity.

Statement. If we have $$$k$$$ non-zeros after $$$d$$$ iterations, then initial sequence had an element, which is at least $$$\left(\frac{k - 1}{d}\right)^d$$$.

Proof. Suppose we have at least $$$k$$$ non-zeros after $$$d$$$ iterations. Then, after the previous iteration we had an element greater than or equal to $$$k$$$; moreover, the lexicographically smallest possible sequence was $$$0, 1, 2, \ldots, k$$$. Two iterations before, the lexicographically smallest sequence was $$$0, 0, 1, 3, 6, 10, \ldots, C_{k}^2$$$. By induction, we can prove that $$$d$$$ iterations before (at the beginning), the lexicographically smallest possible sequence was $$$a_i = C_{i - 1}^d$$$, and thus $$$a_n \geqslant C_{k - 1}^d \geqslant \left( \frac{k - 1}{d} \right)^d$$$.

UPD. Optimal sequence was not $$$a_i = C_{i - 1}^d + \ldots + C_{i - 1}^0$$$, but just $$$a_i = C_{i - 1}^d$$$, but theorem still holds.

Picking $$$k = 2d + 1, d = \log_2 C + 1$$$ yields $$$a_n > C$$$. So after $$$\log C + 1$$$ iterations($$$O(n \log n)$$$ each) there will be at most $$$2\log C + 3$$$ non-zeros and thus total complexity is $$$O(n \log n \log C)$$$.

But picking $$$k = \sqrt[3]{n}d + 1, d = \log_{\sqrt[3]{n}}C = \frac{3\log C}{\log n}$$$ also yields $$$a_n > C$$$. After $$$O\left(\frac{\log C}{\log n}\right)$$$ iterations we will have $$$O\left(\sqrt[3]{n} \frac{\log C}{\log n}\right)$$$ non-zeros. It is still too small, so bound $$$O(k^2 \log k)$$$ for the rest part of the algorithm is still good($$$O(n)$$$). And complexity of first $$$d$$$ iterations is $$$O\left(n \log n \frac{\log C}{\log n}\right) = O(n \log C)$$$. Thus, we have upgraded our bound to $$$O(n \log C)$$$ which is cool, actually.

Just for some illustration, how optimal sequences look like:

$$$0, 0, 0, 0, 1, 5, 15, 35, 70$$$

$$$0, 0, 0, 1, 4, 10, 20, 35$$$

$$$0, 0, 1, 3, 6, 10, 15$$$

$$$0, 1, 2, 3, 4, 5$$$

$$$1, 1, 1, 1, 1$$$

  • »
    »
    20 months ago, # ^ |
    Rev. 2   Vote: I like it +8 Vote: I do not like it

    Yeah, I think my brute force is $$$O(n\log n\log V)$$$, too, where $$$n\log n$$$ is the sorting complexity.

    However, my submission TLed on pretest 3.

    Can you tell me why?


    Ah, I see. I didn't ignore the 0's.

    That was kind of interesting.

  • »
    »
    20 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Could you explain the transition from "one iteration before" to "two iterations before"? I.e., where do the binomials come from?

    • »
      »
      »
      20 months ago, # ^ |
        Vote: I like it +3 Vote: I do not like it

      I made a mistake: the sequence is not a sum of binomials, but just the binomials.

      But the method is the following: we are trying to "invert" an iteration. How'd we do it? $$$a_1, \ldots a_n \mapsto 0, a_1, a_1 + a_2, \ldots, (a_1 + \ldots + a_n)$$$

      $$$1, 1, 1, 1, 1 \to 0, 1, 2, 3, 4, 5$$$

      $$$0, 1, 2, 3, 4, 5, \to 0, 0, 1, 3, 6, 10, 15$$$

      Obviously, we will get the correct initial sequence, but why is it optimal?

      Statement. Let $$$f$$$ be our transform and $$$\Delta$$$ the function mapping a sequence to its pairwise differences. Then for all $$$B$$$ satisfying $$$\Delta(B) = A$$$, we have $$$B_i \geqslant f(A)_i$$$.

      Proof. Note that we can assume that any inversion $$$B$$$ has the form $$$0, a_{i_1}, a_{i_1} + a_{i_2}, \ldots, (a_{i_1} + \ldots + a_{i_n})$$$, where $$$i_1, \ldots, i_n$$$ is some permutation of the numbers $$$1, \ldots, n$$$, and $$$a_{i_1} + \ldots + a_{i_k} \geqslant a_1 + \ldots + a_k$$$ since the $$$a_i$$$ are sorted.

      Statement. If $$$A_i \leqslant B_i$$$, then $$$f(A)_i \leqslant f(B)_i$$$.

So $$$f(f(\ldots f(A)\ldots))$$$ gives us the optimal sequence (any other has each element greater than or equal to ours), and thus the one with the least possible maximum element.

  • »
    »
    20 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Very neat proof. I made a video solution on Problem D illustrating this proof.

  • »
    »
    20 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    thank you

»
20 months ago, # |
Rev. 2   Vote: I like it 0 Vote: I do not like it

Video Solution for Problem C and Problem D

»
20 months ago, # |
  Vote: I like it +36 Vote: I do not like it

To me, 1A is harder than 1BCD ... High quality round!

»
20 months ago, # |
Rev. 2   Vote: I like it +16 Vote: I do not like it

I wish editorials could explain more about how to come up with such a solution instead of only proving why the solution works.

»
20 months ago, # |
  Vote: I like it 0 Vote: I do not like it

even not able to solve C /kk

»
20 months ago, # |
  Vote: I like it +10 Vote: I do not like it

For the solution 2 of 1707A :

  • $$$a_i\gt Q$$$ and $$$Q\lt q$$$. If Doremy tests the contest, her IQ will decrease by $$$1$$$, so in the reverse order, her IQ increases by $$$1$$$.

Why? Let's say $$$a_i = Q+1$$$. Now if we increase the IQ by $$$1$$$ at this point, it becomes $$$Q+1$$$, which is equal to $$$a_i$$$. The former was in reverse order; in the forward order, what we are creating is: after testing a contest with difficulty $$$Q+1$$$, Doremy's IQ dropped from $$$Q+1$$$ to $$$Q$$$, and that's incorrect.

  • »
    »
    20 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

In that case, you can actually increase the initial q value (from 0 to 1, 2, 3 and so on) until such cases no longer happen. If you do this, you will get the "real" q, which, however, is the same as the q calculated by the editorial's algorithm.

If your IQ still doesn't reach the upper bound after testing all rounds, you can also increase the initial q value. Also, this operation will not influence the contests you are able to test.

    Sorry that I didn't mention this in the editorial. My English wording is bad.
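A minimal sketch of the reverse greedy discussed in this thread (my own reconstruction of the approach, not necessarily the editorial's exact code): scan the contests from last to first, keeping a counter `cur` that starts at 0 and plays the role of IQ in reverse, rising at most to `q`.

```python
def max_tested(n, q, a):
    # Walk over contests from last to first. A contest with a[i] <= cur is
    # free to test; a harder contest is tested only while cur < q, and
    # "restores" one IQ point in reverse (costs one point going forwards).
    cur = 0
    tested = 0
    for i in range(n - 1, -1, -1):
        if a[i] <= cur:
            tested += 1        # easy contest: IQ unchanged either way
        elif cur < q:
            cur += 1           # hard contest: uses one of the q IQ points
            tested += 1
        # otherwise skip this contest
    return tested
```

For example, with $$$q=2$$$ and difficulties $$$[1,4,1,4]$$$ all four contests can be tested, while with $$$q=1$$$ and $$$[1,5,1]$$$ only two can.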

»
20 months ago, # |
  Vote: I like it 0 Vote: I do not like it

E wa 7???

»
20 months ago, # |
Rev. 2   Vote: I like it +6 Vote: I do not like it

How can I reach the idea 'If findMST(x) creates an MST, there is no cross edge in the graph. So if you can determine whether there is a cross edge when starting DFS from every node, the problem is solved.' in div1C?

I knew the MST is fixed, and tried to find which vertex's rooted DFS tree includes edges not in the MST, but I tried tree dp and dfs to figure it out for every vertex and failed to reach that idea.

  • »
    »
    20 months ago, # ^ |
    Rev. 2   Vote: I like it +8 Vote: I do not like it

    Since the MST is fixed, we know which edges are bad edges (not in the MST).

    Now given a bad edge, can we figure out the vertices $$$r$$$ such that $$$\mathrm{findMST}(r)$$$ will pick this edge?

    Observe that $$$\mathrm{findMST}(r)$$$ is like a regular DFS, but with some extra logic to choose the order of exploring children. So for a moment, think of it to be a normal DFS.

    Since DFS explores all edges of a child before returning to its parent, if there is a bad edge from a vertex $$$u$$$ currently being explored by the DFS algorithm, to an unvisited vertex $$$v$$$ in a different subtree that is not its ancestor, the DFS algorithm will pick this edge and explore $$$v$$$ (yielding an incorrect MST). This is nothing but a cross-edge.

    So the problem boils down to finding for which vertices $$$r$$$, none of the bad edges are cross-edges in $$$\mathrm{findMST}(r)$$$.
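To make that reduction concrete, here is a small brute-force sketch (my own illustration with hypothetical names, quadratic in the worst case, not the intended near-linear solution): for every root, DFS the MST, record entry/exit times, and verify each bad edge joins a vertex with one of its ancestors (a back edge) rather than being a cross edge.

```python
def good_roots_bruteforce(n, tree_adj, bad_edges):
    # tree_adj: adjacency lists of the MST (vertices 1..n)
    # bad_edges: list of (u, v) edges of the graph that are NOT in the MST
    # Returns a 0/1 string: position r-1 is '1' iff no bad edge is a
    # cross edge when the MST is DFS-rooted at r.
    res = []
    for r in range(1, n + 1):
        tin, tout, t = {}, {}, 0
        stack = [(r, 0, False)]
        while stack:                      # iterative DFS with exit events
            v, p, leaving = stack.pop()
            if leaving:
                tout[v] = t
                t += 1
                continue
            tin[v] = t
            t += 1
            stack.append((v, p, True))    # schedule the exit event
            for to in tree_adj[v]:
                if to != p:
                    stack.append((to, v, False))

        def is_ancestor(u, v):            # u is an ancestor of v (or u == v)
            return tin[u] <= tin[v] and tout[v] <= tout[u]

        ok = all(is_ancestor(u, v) or is_ancestor(v, u) for u, v in bad_edges)
        res.append('1' if ok else '0')
    return ''.join(res)
```

For a path MST 1-2-3 with the extra edge (1, 3), only roots 1 and 3 work: rooted at 2, the edge (1, 3) connects two different subtrees, i.e. it is a cross edge.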

»
20 months ago, # |
Rev. 2   Vote: I like it 0 Vote: I do not like it

typo in Div. 1 D?

was
need???
»
20 months ago, # |
  Vote: I like it +3 Vote: I do not like it

Can someone explain why edge (u, v) is not a cross edge when DFS from t ?

Definition from GFG: It is a edge which connects two node such that they do not have any ancestor and a descendant relationship between them. 

But isn't v a descendant of u in the DFS traversal?

»
20 months ago, # |
  Vote: I like it +1 Vote: I do not like it

Anyone else failing test case 2161 on problem B?

My submission :- https://codeforces.com/contest/1708/submission/164581294

»
20 months ago, # |
  Vote: I like it +3 Vote: I do not like it

can someone explain problem D clearly as editorials are not clear enough about brute force approach

»
20 months ago, # |
  Vote: I like it 0 Vote: I do not like it

1D is quite interesting for me.

»
20 months ago, # |
  Vote: I like it 0 Vote: I do not like it

In Div2 E (or Div1 C), I found the cycle and the maximum-weight edge on it. Suppose the max-weight edge is between u and v: I traversed all the nodes on the cycle other than u and v, then marked all the nodes in those subtrees as '0' in the final answer string. I am getting TLE on the 7th test case. Can anyone help?

164528852

»
20 months ago, # |
Rev. 2   Vote: I like it +32 Vote: I do not like it

I was able to find a test case where Isonan's submission for Div1E: Replace fails.

Submission: 164525375

Ticket 15813

Can someone hack it for me? Or let me know if I constructed an invalid test case? Thanks!

  • »
    »
    20 months ago, # ^ |
      Vote: I like it +8 Vote: I do not like it

    You were right.

    I didn't consider some corner cases...

»
20 months ago, # |
Rev. 2   Vote: I like it 0 Vote: I do not like it

Div 2 problem D: can somebody prove how it only requires $$$O(\log n)$$$ operations to make the array consist of at most 1 non-zero element?

»
20 months ago, # |
  Vote: I like it 0 Vote: I do not like it

Hints for Div2C?

»
20 months ago, # |
  Vote: I like it 0 Vote: I do not like it

A was tougher than B... Amazing!!!

»
20 months ago, # |
  Vote: I like it 0 Vote: I do not like it

Does anyone know why a solution might not be passing the 3rd test of 1707B?

»
20 months ago, # |
  Vote: I like it +8 Vote: I do not like it

Can someone post a solution for 1B, it feels like the person who wrote the editorial didn't even solve the problem.

  • »
    »
    20 months ago, # ^ |
    Rev. 2   Vote: I like it 0 Vote: I do not like it

    Here's mine: https://codeforces.com/problemset/submission/1707/164635699

    Basically, you keep track of where the first non-zero element is, and then perform the operations exactly as specified, except you start at the first non-zero element, i.e., start calculating differences and perform the sorting on the non-zero range (since the difference between 0s will be 0s and will remain at the left side of the array after sorting).

    The actual challenge to this problem is realizing that this will not exceed the time limit. It may seem impractical, since there are 100000 elements, each as large as 500000. But if you consider a single operation, the largest element before the operation is an upper bound on the sum of ALL elements after the operation, e.g., for n = 4, $$$(a_2 - a_1) + (a_3 - a_2) + (a_4 - a_3) = a_4 - a_1$$$. Basically, the values drop down really fast, and you get a lot of 0s quickly, making it practical to actually perform every operation on the non-zero elements directly.
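A runnable sketch of that idea (hypothetical helper name; assumes non-negative input values, as in the problem): each round, take differences only over the non-zero suffix, plus the single boundary difference between the smallest non-zero element and a zero, and pad with the remaining zeros.

```python
from bisect import bisect_right

def final_value(a):
    # Apply len(a)-1 rounds of "consecutive differences, then sort",
    # but compute differences only over the non-zero part each round.
    a = sorted(a)
    for _ in range(len(a) - 1):
        i = bisect_right(a, 0)             # number of zeros (a is sorted, >= 0)
        nz = a[i:]
        if not nz:                         # all zeros: one zero disappears
            a = [0] * (i - 1)
            continue
        diffs = [nz[j + 1] - nz[j] for j in range(len(nz) - 1)]
        if i > 0:
            diffs.append(nz[0])            # boundary difference nz[0] - 0
            i -= 1                         # that pair consumed one zero
        a = [0] * i + sorted(diffs)        # zeros first, then sorted diffs
    return a[0]
```

The telescoping argument above is why this is fast in practice: after one round, the sum of all elements is at most the previous maximum, so non-zero elements die out quickly.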

  • »
    »
    20 months ago, # ^ |
    Rev. 3   Vote: I like it 0 Vote: I do not like it

After excluding the zeros, let's say we have only non-zero elements $$$a_1, a_2, a_3, \ldots, a_n$$$.

Let $$$S = a_1 + a_2 + \ldots + a_n$$$.

Now when we do one iteration our array becomes $$$a_2 - a_1, a_3 - a_2, \ldots, a_n - a_{n-1}$$$,

so the new sum is $$$a_n - a_1 = S - (a_1 + a_2 + \ldots + a_{n-1})$$$.

$$$S$$$ decreases the least when $$$a_1 + a_2 + \ldots + a_{n-1} = n-1$$$, as all these elements are $$$\geqslant 1$$$.

So in each step $$$S$$$ decreases by at least $$$n-1$$$.

First it decreases by $$$n-1$$$, then $$$n-2$$$, ... (in the worst case for us), and it gets to $$$0$$$ fast.

So brute force works.

You can refer to this: https://discuss.codechef.com/t/array-ops-editorial/100450/5

»
20 months ago, # |
Rev. 3   Vote: I like it 0 Vote: I do not like it

Can I have an explanation for this? In problem A, if the entered array is 2 3 4 6 5, the writer's code will generate "NO", but here I can make it possible! Note: indices start from 1.
 Before any operations, the array is [2,3,4,6,5]
 Choose i=4, and the array becomes [2,3,4,2,5]
 Choose i=3, and the array becomes [2,3,1,2,5]
 Choose i=5, and the array becomes [2,3,1,2,3]
 Choose i=5, and the array becomes [2,3,1,2,1]
 Choose i=4, and the array becomes [2,3,1,1,1]
 Choose i=2, and the array becomes [2,1,1,1,1]
 ... and you can make 3 operations now to convert the ONE's to ZERO's
 so the result is : [2,0,0,0,0]
Is there anything wrong?!
  • »
    »
    20 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    All great, except for the fact your array ended up with [2,1,0,0,0] in reality and $$$A_2$$$ is now in eternal misery being unable to change him/herself to $$$0$$$

»
20 months ago, # |
  Vote: I like it 0 Vote: I do not like it

1E is really brilliant!

»
20 months ago, # |
  Vote: I like it 0 Vote: I do not like it

May I ask what is bin-up on tree?

»
20 months ago, # |
  Vote: I like it +1 Vote: I do not like it

Any List of Problems similar to Div2C?

»
20 months ago, # |
  Vote: I like it +3 Vote: I do not like it

Found this explanation for 1707B - Difference Array to be much more intuitive and easy to understand : https://discuss.codechef.com/t/array-ops-editorial/100450/6

»
19 months ago, # |
  Vote: I like it 0 Vote: I do not like it

Why does the solution of C work?

»
18 months ago, # |
Rev. 3   Vote: I like it 0 Vote: I do not like it

In the editorial for 1D, dp is defined as below:

$$$dp_{x,i}$$$ is the number of ways to delete everything in the subtree of $$$x$$$ in exact $$$i$$$ operations.

I am unable to understand what do we mean by "operation" here.

Can someone please help with what one operation means?


Update:

Figured it out. It is the same operation as in the problem statement.

delete everything in the subtree of $$$x$$$ in exact $$$i$$$ operations.

means none of the vertices from the subtree of $$$x$$$ is present in the partial virtual tree (i.e., set $$$U$$$ in the problem statement).

»
11 months ago, # |
  Vote: I like it 0 Vote: I do not like it

For Problem C, where is it failing 202440451

My Approach
»
9 months ago, # |
  Vote: I like it 0 Vote: I do not like it

The solution of div1b is based on the defense

»
8 months ago, # |
  Vote: I like it 0 Vote: I do not like it

In problem B (Div2), why is it (l-1) / i + 1 instead of just l / i?
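For what it's worth, this is the standard integer-ceiling idiom (a general arithmetic note, not tied to any particular submission): with truncating integer division, l / i rounds down, so (l-1) / i + 1 computes the ceiling of l / i, i.e. the index of the smallest multiple of i that is at least l.

```python
def first_multiple_at_least(l, i):
    # (l - 1) // i + 1 equals ceil(l / i) for l >= 1, so the result is
    # the smallest multiple of i that is >= l; plain (l // i) * i would
    # round DOWN and can be strictly smaller than l
    return ((l - 1) // i + 1) * i
```

E.g. for l = 10, i = 3 this gives 12, while (10 / 3) * 3 = 9 < 10.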

»
7 months ago, # |
  Vote: I like it 0 Vote: I do not like it

The crux of the problem 1707A — Doremy's IQ lies in making the following observation:

If we start from day 1 and attend all the contests and are not able to reach the day n because the iq is no more strictly greater than 0, then if we skip a bad contest (contest which lowers iq) then it guarantees that we will cover at least the same no of contests before or more. This is because the iq on the last day will in the worst case be one more than it was before and hence allows us to cover one more contest which makes up for the skipped contest.