Bazoka13's blog

By Bazoka13, 3 weeks ago, In English

1975A - Bazoka and Mocha's Array

Idea: Bazoka13 Solution: Bazoka13 Prepared by: Bazoka13

Hint 1
Hint 2
Solution
Code

1975B - 378QAQ and Mocha's Array

Idea: Bazoka13 Solution: Bazoka13 Prepared by: Bazoka13

Hint 1
Hint 2
Solution
Code

1975C - Chamo and Mocha's Array

Idea: Bazoka13 Solution: Bazoka13 Prepared by: Bazoka13

Hint 1
Hint 2
Hint 3
Solution
Code

1975D - Paint the Tree

Idea: SanweiTreap Solution: Atomic-Jellyfish Prepared by: Bazoka13

Hint 1
Hint 2
Solution
Code

1975E - Chain Queries

Idea: Bazoka13 Solution: Serval Prepared by: Bazoka13

Hint 1
Hint 2
Hint 3
Solution
Code

1975F - Set

Idea: Toxel Solution: errorgorn, Toxel Prepared by: Nerovix

Hint 1
Solution
Code

1975G - Zimpha Fan Club

Idea: Atomic-Jellyfish, zimpha Solution: Atomic-Jellyfish Prepared by: Atomic-Jellyfish, errorgorn, MagicalFlower

Hint 1
Hint 2
Hint 3
Solution
Code

1975H - 378QAQ and Core

Idea: Bazoka13 Solution: Toxel Prepared by: Bazoka13

Solution
Bonus
Code

1975I - Mind Bloom

Idea: Atomic-Jellyfish Solution: Atomic-Jellyfish Prepared by: Atomic-Jellyfish, errorgorn

Hint 0
Hint 1
Hint 2
Solution
Code

Solution by Melania

Solution
  • Vote: I like it
  • +111
  • Vote: I do not like it

»
3 weeks ago, # |
  Vote: I like it +9 Vote: I do not like it

Any guesses on what game Jellyfish will be playing next?

»
3 weeks ago, # |
Rev. 2   Vote: I like it +18 Vote: I do not like it

Fun fact about problem E: at first, this problem was Div. 2 D and required adding vertices dynamically.

»
3 weeks ago, # |
  Vote: I like it -34 Vote: I do not like it

E was Nice.

»
3 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

I had a different solution for C: I was able to infer hint 1 but could not deduce hint 2, so I instead ended up simulating the entire process, considering each element as a candidate (in descending order) and optimizing the $$$O(n^2)$$$ solution with a data structure (a set) to maintain the minimum distance between adjacent elements.

Overkill, I know, but just wanted to share my thought process during the contest.

  • »
    »
    3 weeks ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

During the contest, I almost implemented the following solution. In decreasing order of elements, let's check whether there is a subarray with median >= X. In the original array, let's replace all numbers < X by -1 and all numbers >= X by 1. A subarray with median >= X exists iff there exists a subarray with length >= 2 and sum >= 1.

In the segment tree, we maintain the following in each node:
- sum
- max prefix sum
- max suffix sum
- max subarray sum

    After each -1 -> 1 update we can check if there exists a subarray with length >=2 and sum >= 1.

    262621530 262621561

By the way, once you realise this idea, we don't even need a segment tree; we can binary search on the median and check whether a subarray with sum >= 1 and length >= 2 exists.
    262622439
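
For reference, a minimal sketch of the node merge such a segment tree could use; the struct and the names below are illustrative, not taken from the linked submissions. Keeping a separate "best subarray of length >= 2" field handles the length constraint directly:

#include <bits/stdc++.h>
using namespace std;

// One node of the segment tree built over the +1/-1 array.
struct Node {
    long long sum;   // total sum of the segment
    long long pref;  // best non-empty prefix sum
    long long suf;   // best non-empty suffix sum
    long long best2; // best sum over subarrays of length >= 2
};
const long long NEG = LLONG_MIN / 4;

Node make_leaf(int value) {            // value is +1 or -1
    return {value, value, value, NEG}; // a single element never has length >= 2
}

Node merge(const Node& l, const Node& r) {
    Node res;
    res.sum   = l.sum + r.sum;
    res.pref  = max(l.pref, l.sum + r.pref);
    res.suf   = max(r.suf, r.sum + l.suf);
    // l.suf + r.pref crosses the border, so it always covers at least 2 elements
    res.best2 = max({l.best2, r.best2, l.suf + r.pref});
    return res;
}
// After each -1 -> 1 point update, a subarray with length >= 2 and sum >= 1
// exists iff root.best2 >= 1.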

    • »
      »
      »
      3 weeks ago, # ^ |
      Rev. 2   Vote: I like it 0 Vote: I do not like it

I have done this in O(N): 262595462. I couldn't formally prove it, but my intuition was: if there are numbers x <= y <= z, then their median is y, right? Take any window of size K whose median is x; this implies there are at least floor((K+1)/2) elements in that window with value less than or equal to x. For odd K = 2*M-1, the number of elements less than or equal to x is at least M, and since the window size is 2*M-1, there must be two such elements whose indices differ by at most 2. In other words, inside the window of size K there exists a smaller window [a, b, c] such that either a and b, or b and c, or a and c are less than or equal to x.

      • »
        »
        »
        »
        3 weeks ago, # ^ |
          Vote: I like it +3 Vote: I do not like it

That's the solution in the editorial too.

Look at it like this.

Let's say we have an array of 11 elements and the median is 10.
What I will prove is that there exist 3 adjacent elements in this array whose median is >= 10.

Median = 10 implies that this array has at least 6 elements >= 10.

Let's look at the first 2 elements.
If both of them are >= 10, then we can take the first 3 elements and the median of these first 3 elements will be >= 10.

Otherwise either none of them or only one of them is >= 10. We can delete these two elements. Among the remaining 9 elements we will still have at least 5 elements >= 10.

If we keep repeating this process, either we will find a leading pair of elements that are both >= 10, in which case we are done, or at the end we will be left with 3 elements and at least 2 of them must be >= 10.

Among 11 elements, at least 6 of them are >= 10.
Among 9 elements, at least 5 of them are >= 10.
Among 7 elements, at least 4 of them are >= 10.
Among 5 elements, at least 3 of them are >= 10.
Among 3 elements, at least 2 of them are >= 10. If we reach this subarray, we can just use it as a subarray of size 3 which has at least 2 elements >= 10.

The proof for the even case is similar.
Similarly, if we have an array of 12 elements and the median is 10, there must exist at least 7 elements >= 10.
We can use similar reasoning to show that there exist at least 2 adjacent elements >= 10.

This is the reason why we only need to look at subarrays of length 2 and 3 to get the maximum possible median.
Using a subarray of size 5 (a1,a2,a3,a4,a5) can only make the median worse compared with the size-3 subarrays inside it, namely (a1,a2,a3), (a2,a3,a4) and (a3,a4,a5).
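
For concreteness, here is a minimal sketch of the resulting solution (my own code, not the editorial's): for n = 2 the answer is the smaller element, and for n >= 3 it is the maximum middle element over all length-3 windows, since any length-2 window sits inside a length-3 window whose median is at least as large.

#include <bits/stdc++.h>
using namespace std;

int main() {
    int t;
    scanf("%d", &t);
    while (t--) {
        int n;
        scanf("%d", &n);
        vector<int> a(n);
        for (auto &x : a) scanf("%d", &x);
        int ans;
        if (n == 2) {
            ans = min(a[0], a[1]);   // median of a length-2 subarray is its minimum
        } else {
            ans = 0;
            for (int i = 0; i + 2 < n; i++) {
                int x = a[i], y = a[i + 1], z = a[i + 2];
                int mid = max(min(x, y), min(max(x, y), z)); // middle value of three
                ans = max(ans, mid);
            }
        }
        printf("%d\n", ans);
    }
    return 0;
}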

      • »
        »
        »
        »
        3 weeks ago, # ^ |
        Rev. 2   Vote: I like it 0 Vote: I do not like it

        nice

»
3 weeks ago, # |
  Vote: I like it +1 Vote: I do not like it

Fun fact about problem H: at first, this problem was Div. 2 F. (=・ω・=)

»
3 weeks ago, # |
  Vote: I like it +33 Vote: I do not like it

The intended solution for E is much simpler than thinking about HLD .~.

  • »
    »
    3 weeks ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

At first I used DSU; after getting WA I thought I needed HLD. I almost gave up, but somehow I figured out that DSU works too.

  • »
    »
    3 weeks ago, # ^ |
    Rev. 3   Vote: I like it +11 Vote: I do not like it

At first glance it surely looks like an HLD problem. However, I noticed that the graph is a tree and only the degrees of the nodes influence the answer, so I wrote something very close to the editorial with a higher complexity of $$$O(n \sqrt{n})$$$, which ran rather quickly. If you use a set to implement the editorial's idea, it becomes a much simpler solution than the $$$O(n)$$$ one. The way the solution figures out the root which has two sons amazed me. Anyway, it's a problem worth thinking about.

  • »
    »
    3 weeks ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    DuongForeverAlone can you explain your solution for E please?

    • »
      »
      »
      3 weeks ago, # ^ |
        Vote: I like it +3 Vote: I do not like it

I also used HLD (and got TLE), but my approach was to keep the set of all minimal elements of the relation "a is an ancestor of b", i.e. for each path to the root I only keep the lowest vertex (max depth) in the set. When adding/removing a black vertex you have to know the next black vertex on the path to the root, which can be done using HLD and segment trees.

The rest is similar to the editorial; you have a few conditions with which you can check whether the minimal vertices form a chain (there must be at most 2 of them, the path between them (==> Euler tour trick + segtree/Fenwick tree) must consist only of black vertices, ...).

      • »
        »
        »
        »
        3 weeks ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

The constraints were not generous. I do not know HLD very well either, so I did not go for that. I was trying something similar to the editorial's approach, but quite different in implementation.

Sorry, I am not very good at tree queries, but I get your idea a bit. By "euler tour trick + segtrees/fenwick tree" do you mean the approach where you flatten the tree and build a segment tree / Fenwick tree over the flattened tree?

        • »
          »
          »
          »
          »
          3 weeks ago, # ^ |
            Vote: I like it +3 Vote: I do not like it

With HLD you can answer path sum queries in $$$O(\log^2(n))$$$, and the constants (at least in my implementation) aren't that good. I couldn't come up with another approach though, so I gave it a try.

I used the Euler tour trick to count the number of black vertices on the path between two "minimal" vertices. I could've also used the HLD, but this would be another $$$O(\log^2(n))$$$ factor, so I used ETT after the first TLE. You basically have each vertex twice in the flattened tree, once at "dfs_innum" and once at "dfs_outnum", with a "1" at "innum" and a "-1" at "outnum". Then a path sum from the root to "u" is the sum over [0; dfsin[u]], and to answer such queries you can use a Fenwick tree / segment tree.

The total complexity is still $$$O(n \log^2(n))$$$.
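
A minimal sketch of that counting part, with my own names and under the assumption that tin/tout come from a standard DFS that stamps both entry and exit times:

#include <bits/stdc++.h>
using namespace std;

// Fenwick tree over the Euler tour: for every black vertex v we add +1 at tin[v]
// and -1 at tout[v], so prefix(tin[u]) = number of black vertices on the path
// from the root to u (black ancestors of u, including u itself).
struct Fenwick {
    int n;
    vector<int> bit;
    Fenwick(int n) : n(n), bit(n + 1, 0) {}
    void add(int pos, int delta) {
        for (++pos; pos <= n; pos += pos & -pos) bit[pos] += delta;
    }
    int prefix(int pos) const { // sum of positions [0, pos]
        int res = 0;
        for (++pos; pos > 0; pos -= pos & -pos) res += bit[pos];
        return res;
    }
};

// Toggle vertex v between black and white.
void toggle(Fenwick& fw, int v, bool nowBlack,
            const vector<int>& tin, const vector<int>& tout) {
    int d = nowBlack ? +1 : -1;
    fw.add(tin[v], d);
    fw.add(tout[v], -d);
}

// Black vertices on the root -> u path; the count on the path between two
// "minimal" vertices then follows from prefix sums at both ends plus their LCA.
int blacksOnRootPath(const Fenwick& fw, int u, const vector<int>& tin) {
    return fw.prefix(tin[u]);
}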

          • »
            »
            »
            »
            »
            »
            3 weeks ago, # ^ |
              Vote: I like it +3 Vote: I do not like it

            Thank you, I understood this part. I will try implementing it on my own. Thanks a lot!

    • »
      »
      »
      3 weeks ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      Do you mean HLD approach?

      • »
        »
        »
        »
        3 weeks ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

        No, the one that you submitted here 262667007.

        • »
          »
          »
          »
          »
          3 weeks ago, # ^ |
          Rev. 9   Vote: I like it +3 Vote: I do not like it

Oh, I just implemented the approach of the editorial. Looking at the 4 "if"s at the end of the while loop, you will see exactly the cases the editorial describes:

If the black vertices form a chain:
- no black vertex has three or more black child vertices, and there is at most one black vertex that has two black child vertices;
- there is at most one black vertex whose parent vertex is white;
- if there is one black vertex that has two black child vertices, its parent vertex must be white.

If you need the details of the variables I used: $$$cnt1$$$ and $$$cnt3$$$ are the numbers of black nodes which have $$$1$$$ and $$$3$$$ (or more) black children, respectively. $$$st$$$ is the set of black vertices which have $$$2$$$ black children. white_cnt is the number of black vertices which have a white parent. I used a $$$cnt$$$ array to keep track of the number of black children of each vertex $$$i$$$, denoted $$$cnt_i$$$. For each query, I need to update all those variables before checking the $$$4$$$ conditions mentioned above.

          if(cnt3){
          	cout << "NO" << endl;
          	continue;
          }
          

This ensures there's no black vertex having 3 or more black children.

          if(st.size() > 1){
          	cout << "NO" << endl;
          	continue;
          }
          

This ensures there is at most 1 black vertex with 2 black children.

          if(white_cnt != 1){
          	cout << "NO" << endl;
          	continue;
          }
          

          Exactly 1 black vertex has a white parent.

          if(st.size()){
          	if(a[par[*st.begin()]] == 1){
          		cout << "NO" << endl;
          		continue;
          	}
          }
          

If there exists a black vertex with 2 black children, then its parent must be white.

          PS: I just realize that the $$$cnt1$$$ is pretty useless here, never mind it :v.

          • »
            »
            »
            »
            »
            »
            3 weeks ago, # ^ |
            Rev. 3   Vote: I like it 0 Vote: I do not like it

            Thank you for the explanation. I had some doubts —

            Exactly 1 black vertex has a white parent.

How do you handle the case of the root being part of the chain? In that case there won't be any white parent. I put a dummy node above the root as a white node, but I am not sure if that is the best way to go about it.

            Also, could you please explain how the parent is being updated per query.

            if(a[v] == 1){
            	if(cnt[v] == 1) --cnt1;
            	else if(cnt[v] == 2) st.erase(v);
            	else if(cnt[v] >= 3) --cnt3;
            }else{
            	white_cnt -= cnt[v];
            }
            

I did read the editorial, but the implementation there is going over my head.

            • »
              »
              »
              »
              »
              »
              »
              3 weeks ago, # ^ |
              Rev. 5   Vote: I like it +5 Vote: I do not like it

I've already handled the case when the root node is part of the chain. I use the dummy node $$$0$$$ as a white node and connect it to node $$$1$$$ (which is the root of the tree). Focus on the part before the DFS, and that's it:

              par[1] = 0;
              cnt[0] = a[1];
              

For the updating part, I recommend doing it yourself for better understanding, as different people have their own coding style. However, I'll still explain mine. It is similar to updating the sum of an array: let's say I have an array $$$a$$$ with sum $$$s$$$. If I need to set $$$a_i = x$$$, then $$$s$$$ is updated like:

              s -= a[i];
              a[i] = x;
s += a[i];
              

which is pretty much what my implementation does: remove the previous state, then add the new state to the variable.

PS: If you have any more questions, just DM me, as the comment section is getting pretty crowded right now.

              • »
                »
                »
                »
                »
                »
                »
                »
                3 weeks ago, # ^ |
                  Vote: I like it 0 Vote: I do not like it

                Thank you, I will try thinking and coding it again myself. Thanks a lot for all the explanations, really kind of you.

  • »
    »
    3 weeks ago, # ^ |
      Vote: I like it +5 Vote: I do not like it

And it is much simpler than thinking about the number of black chains of three vertices and debugging the counter's update function lol 262610439

»
3 weeks ago, # |
  Vote: I like it +1 Vote: I do not like it

I liked this contest very much, the best one I did this year I guess. F then E are my favourite problems in the contest.

»
3 weeks ago, # |
Rev. 2   Vote: I like it +5 Vote: I do not like it

Problem A,B,C Video Editorial : YOUTUBE VIDEO LINK --Click Here

»
3 weeks ago, # |
Rev. 2   Vote: I like it +29 Vote: I do not like it

Problem G has a linear solution. Or maybe it doesn't.

»
3 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

Nice problems. E is very interesting; the tutorial solution is way easier than mine. Any source to study how to solve problems like F? It took me a lot of time to understand it, and I didn't even get close to a solution.

  • »
    »
    3 weeks ago, # ^ |
    Rev. 2   Vote: I like it 0 Vote: I do not like it

What does xor_two represent in the tutorial solution? Could you please explain the editorial code of problem E?

    • »
      »
      »
      13 days ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

There is a case for the chain, when:

there is 1 vertex with 2 black children,

there are 2 vertices with 0 black children,

and the rest must have 1 black child.

But you also have to check that the vertex having 2 black children has a white parent.

So if you maintain the XOR of all the vertices having 2 black children, then when their count is 1 the XOR is exactly the vertex which has 2 children, so you can check whether it has a white parent or not.
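
A tiny sketch of that bookkeeping, assuming colour[] (0 = white, 1 = black), parent[] and the per-vertex black-child counts are maintained elsewhere; the names are mine:

#include <bits/stdc++.h>
using namespace std;

// Over all black vertices with exactly two black children, keep their count
// and the XOR of their indices.
int cnt2 = 0, xor_two = 0;

void addTwoChildVertex(int v)    { cnt2++; xor_two ^= v; }
void removeTwoChildVertex(int v) { cnt2--; xor_two ^= v; }

// In the "exactly one vertex with two black children" case, cnt2 == 1 and
// xor_two is exactly that vertex, so its parent's colour can be checked directly.
bool centerHasWhiteParent(const vector<int>& colour, const vector<int>& parent) {
    return cnt2 == 1 && colour[parent[xor_two]] == 0;
}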

»
3 weeks ago, # |
  Vote: I like it +7 Vote: I do not like it

Problem D: idk why it works, but it does.
op1 = 2 * (n - 1) - max_dph_a
op2 = 2 * (n - 1) - max_dph_b + dist_bwn_a_b % 2

ans = min(op1, op2) + dist_bwn_a_b

  • »
    »
    3 weeks ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Ok so I was working on this problem for a while and I think this is why your solution works:

You calculate the minimum steps required to paint all the nodes starting from red's position, as red doesn't depend on blue. You also calculate the minimum steps required to travel all the nodes starting from blue's position.

If red has a better start, blue simply goes to red's start and then follows its path, therefore: ans = (min_steps_red) + dist_ab;

If blue has a better start, then red and blue should try to meet each other on their path. If dist_ab is even, they can synchronize perfectly; otherwise blue stays one step behind: ans = (min_steps_blue + dist_ab % 2) + dist_ab;

    You calculate your answer from the minimum of the two above approaches.

»
3 weeks ago, # |
  Vote: I like it +1 Vote: I do not like it

1975C — Chamo and Mocha's Array

Detailed Video Editorial

English Language => https://www.youtube.com/watch?v=J-xsU37pJoI

Hindi Language => https://www.youtube.com/watch?v=g6720vEw8r4

»
3 weeks ago, # |
  Vote: I like it +8 Vote: I do not like it

Can you explain the editorial of F more clearly? Firstly, what does the array $$$f[n][s]$$$ mean, and why such transitions on it?

  • »
    »
    3 weeks ago, # ^ |
    Rev. 2   Vote: I like it +8 Vote: I do not like it

I don't understand the editorial of F either. It would be nice if someone explained it in more detail.

  • »
    »
    3 weeks ago, # ^ |
      Vote: I like it +8 Vote: I do not like it

It's some sort of SOS dp. The meaning of f[n][s] is: each set bit b of f[n][s] means that, after considering all masks over the first n bits, you have b 1's to spare in the remaining bits.

Basically, when you use a 1 bit as 1, you consume one spare, which is why there's the right shift. When you use a 1 as 0, or a 0 as anything, no spare is consumed.

  • »
    »
    3 weeks ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    I guess in fact they are enumerating $$$i$$$ from $$$n-1$$$ to $$$0$$$ in the sample implementation.

  • »
    »
    3 weeks ago, # ^ |
    Rev. 4   Vote: I like it +8 Vote: I do not like it

Crux of the idea:

We are adding numbers from n-1 down to 0.

N = 6
Our current set is ***101, i.e. we have added 3 and 5 to the set, and we are yet to decide for 2, 1, 0.

Let's say we have:
1. f(100101) allows set sizes 0,1,3,4
2. f(100001) allows set sizes 0,3,5

The idea is that we can merge all f(100***) into just one constraint.

The intersection of 100101 with ***101 is a set of size 2. So we can say that for the remaining bits we are allowed to include only -2,-1,1,2 numbers. We can drop the negative values and say we must include 1 or 2 numbers only.

The intersection of 100001 with ***101 is a set of size 1. So we can say that for the remaining bits we are allowed to include only -1,2,4 numbers. We can drop the negative values and say we must include 2 or 4 numbers only.

In fact we can merge these two constraints and say we must include 2 numbers among 0,1,2.

This way, when we have chosen X numbers so far, the remaining numbers give $$$2^{N-X}$$$ distinct combinations, so we can reduce the original $$$2^N$$$ constraints to $$$2^{N-X}$$$ distinct constraints.

Basically, after deciding whether we should include n-1 in our set or not, we can reduce the original $$$2^N$$$ constraints to $$$2^{N-1}$$$ constraints:

by merging $$$s_0s_1s_2s_3s_40$$$ with $$$s_0s_1s_2s_3s_41$$$.
If we include $$$N-1$$$, we subtract 1 from all the allowed sizes in $$$s_0s_1s_2s_3s_41$$$ and then merge with $$$s_0s_1s_2s_3s_40$$$.
If we do not include $$$N-1$$$, we directly merge $$$s_0s_1s_2s_3s_41$$$ with $$$s_0s_1s_2s_3s_40$$$.

Now, if you read the editorial and the model solution again, it should make sense what they are doing.

    • »
      »
      »
      3 weeks ago, # ^ |
        Vote: I like it +8 Vote: I do not like it

      Oh, thanks. Now it makes sense. Looks very easy for problem F :)

    • »
      »
      »
      2 weeks ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

So why should f[n][0] be odd for s to be included in the answer?

Also, why do we AND with (f[i][m | t] >> 1) in the not-take case? Why are we right-shifting by 1, i.e. dividing by 2?

Thank you.

»
3 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

Can someone explain the binary search approach for problem C?

  • »
    »
    3 weeks ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

To determine whether the answer is at least x, you check whether there are two indices i, j (i != j) such that the difference between i and j is smaller than 3 and a[i] >= x and a[j] >= x. You can look at my submission for my implementation.

  • »
    »
    3 weeks ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

The idea is similar to what is explained in the GfG link below.

LINK: https://www.geeksforgeeks.org/find-largest-median-of-a-sub-array-with-length-at-least-k/

IDEA: we apply binary search on the set of values given.

CHECK(_): check(mid) validates whether there exists a subarray of length >= 2 having median >= mid.

LOGIC for CHECK(_): we build an array pre with
pre[i] = 1 if a[i] >= mid, otherwise -1. Then find the maximum subarray sum; if it is positive, the median is possible, otherwise not.

SUBMISSION LINK: https://codeforces.com/contest/1975/submission/262866407
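
A hedged sketch of such a check (my own variable names, not the linked submission): map each element to +1/-1 against mid and run a Kadane-style scan restricted to subarrays of length at least 2; the binary search then keeps the largest mid for which it returns true.

#include <bits/stdc++.h>
using namespace std;

// True if some subarray of length >= 2 has median >= mid, i.e. if some
// subarray of length >= 2 of the +1/-1 array has a positive sum.
bool check(const vector<int>& a, int mid) {
    int n = a.size();
    auto val = [&](int i) { return a[i] >= mid ? +1 : -1; };
    long long bestEndingPrev = val(0);   // best sum of a subarray (length >= 1) ending at i-1
    long long best2 = LLONG_MIN;         // best sum over subarrays of length >= 2 so far
    for (int i = 1; i < n; i++) {
        long long lenTwoOrMore = bestEndingPrev + val(i); // extend something ending at i-1
        best2 = max(best2, lenTwoOrMore);
        bestEndingPrev = max<long long>(val(i), lenTwoOrMore);
    }
    return best2 >= 1;
}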

    • »
      »
      »
      3 weeks ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

"if it is positive, the median is possible, otherwise not"

How do you make sure this always works?

Is there any proof or logic behind it?

      • »
        »
        »
        »
        3 weeks ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

-> A subarray of a′ (the transformed array) having a non-negative sum means that the number of elements ≥ mid is at least the number of elements < mid in that subarray, so if we sort the subarray and take its median, it comes out to be at least mid.

CONDITION: for a subarray to have mid as its median, at least half (or more) of its elements must be ≥ mid.

-> Hence, if a subarray of a′ has sum ≥ 0, then the subarray has at least as many 1's as −1's, implying mid can be a median.

I hope this makes sense.

        • »
          »
          »
          »
          »
          3 weeks ago, # ^ |
            Vote: I like it 0 Vote: I do not like it

Yes, indeed it makes sense. I tried to implement the same idea but couldn't figure out how.

Thank you!

»
3 weeks ago, # |
  Vote: I like it +1 Vote: I do not like it

Too unfortunate that I didn't realize the minimum number of steps to visit every vertex starting from a certain vertex of a tree is a classical problem: take the furthest distance d, then the answer is 2 * (n - 1) - d. I tried to implement that from scratch but couldn't make it...😭

  • »
    »
    3 weeks ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Do you have any resources for this classic problem?

  • »
    »
    3 weeks ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

You can implement it in two ways:

If you want to use the formula, get the maximum distance by simply using BFS/DFS.

Otherwise, to implement everything from scratch, here's what you do: do a DFS that returns the maximum depth from a node; for each node, add twice the depth of each neighbor to a sum, keep track of the maximum depth among the neighbors, and then simply subtract that maximum depth from the sum.

    int get_steps(int pos, auto& adj, auto& vis, bool child = false) {
        if (vis[pos]) return 0;
        vis[pos] = true;
    
        int sum = child, max_branch = 0;
        for (const auto& c : adj[pos]) {
            int branch = get_steps(c, adj, vis, true);
            max_branch = std::max(max_branch, branch);
            sum += 2 * branch;
        }
        return sum - max_branch;
    }
    

    I did it like this. Sorry if my implementation is a bit naive, I am still new to graphs and trees

»
3 weeks ago, # |
  Vote: I like it +83 Vote: I do not like it

I was discussing with adamant about some different approaches to do the Wildcard Matching in problem G. I had some ideas to hack him, because his solution was randomized and had a pretty bad probability of failure. The basic idea is the following case:

*a*a*a*a*
b-b-b-

Here, whenever the characters a and b accidentally match, the algorithm fails, because afterwards it can match all the a's with the wildcards. So with this you can build test cases where the solution survives only with probability $$$(1 - p)^{10^6}$$$, with $$$p$$$ the failure probability at one position. Unfortunately people do not always reset their randomness every time they do a wildcard matching, so then this case is pretty useless. But by putting multiple different kinds of strings at the places of a and b, and for example constructing similar test cases with words of $$$2$$$ and $$$3$$$ characters, you can also hack fishy solutions which don't reset their randomness.

This hack hacked some solution in Polygon, so it gave an Unexpected Verdict :(.

»
3 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it
int mx = INT_MIN;
for (int i = 1; i < n; i++) {
    int t = min(a[i], a[i - 1]);
    mx = max(mx, t);
}
cout << mx << "\n";

Can someone explain why this logic does not work for problem C? Why can't we just check the median of every subarray of length 2 and return the maximum among them?

»
3 weeks ago, # |
Rev. 2   Vote: I like it 0 Vote: I do not like it

Deleted. Irrelevant

»
3 weeks ago, # |
Rev. 2   Vote: I like it 0 Vote: I do not like it

For G, I am wondering why this submission gets WA but this one gets AC.

I tried to use Atomic-Jellyfish's template because the MOD = 2013265921 seemed cool, but is the way I copied it somehow wrong? I copied it just as it is, except that I precomputed p[] and w[] for every 2^i beforehand.

»
3 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

A very good round with some traps: the guess for C looked so trivial, but it wasn't. Also, I worked out the correct algorithm for D when the distance between A and B is even; then it is quite simple, as we just reach a common vertex and do an Euler tour together from there. But I couldn't conclude that the same holds when the distance is odd. Can someone explain why it works with an odd distance between A and B?

  • »
    »
    3 weeks ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

To find the best vertex where they can meet, I did a modified BFS from both vertices until B visits a vertex that was also visited by A; have a look at the getVertex function here: 262674537

»
3 weeks ago, # |
Rev. 2   Vote: I like it 0 Vote: I do not like it

The editorial for 1975D - Paint the Tree is not clear to me.

Will anyone explain it to me in more detail?

  • »
    »
    3 weeks ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

Firstly, it's clear that PB's movements are useless while it is not moving onto a red vertex. Once it reaches a red vertex, it's easy to see that we can make PA move ahead of PB (or together with PB), so after reaching a red vertex we just need to traverse the whole tree to make everything blue. We can traverse the whole tree and come back to where we started in 2*(n-1) moves, but once we color the last vertex blue we don't need to go back; so if the distance between the starting vertex and that last-colored vertex is d, we need 2*(n-1)-d moves. Obviously we want to maximize d, so the last vertex should be the farthest one from our starting position. The last thing is that we need to reach a red vertex as soon as possible, which takes the distance between PA and PB divided by 2, rounded up.
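
A minimal sketch of this approach (my own implementation of the description above, not the editorial's code; the input reading assumes the problem's format): find the vertex c on the a-b path at distance floor(d/2) from a, which is the first vertex P_B paints blue after ceil(d/2) of its moves, then add a full traversal from c.

#include <bits/stdc++.h>
using namespace std;

// BFS distances and parents from src over an unweighted tree.
void bfs(int src, const vector<vector<int>>& g, vector<int>& dist, vector<int>& par) {
    int n = g.size();
    dist.assign(n, -1);
    par.assign(n, -1);
    queue<int> q;
    dist[src] = 0;
    q.push(src);
    while (!q.empty()) {
        int v = q.front(); q.pop();
        for (int u : g[v]) if (dist[u] == -1) {
            dist[u] = dist[v] + 1;
            par[u] = v;
            q.push(u);
        }
    }
}

int main() {
    int t;
    scanf("%d", &t);
    while (t--) {
        int n, a, b;
        scanf("%d %d %d", &n, &a, &b);
        vector<vector<int>> g(n + 1);
        for (int i = 0; i < n - 1; i++) {
            int u, v;
            scanf("%d %d", &u, &v);
            g[u].push_back(v);
            g[v].push_back(u);
        }
        vector<int> distA, parA, distC, parC;
        bfs(a, g, distA, parA);
        int d = distA[b];
        // Walk up from b towards a until we are floor(d/2) away from a:
        // this is the vertex P_B first steps on when it is already red.
        int c = b;
        while (distA[c] > d / 2) c = parA[c];
        bfs(c, g, distC, parC);
        long long far = *max_element(distC.begin() + 1, distC.end());
        // ceil(d/2) steps for P_B to reach c, then a full traversal minus the deepest branch.
        printf("%lld\n", (d + 1) / 2 + 2LL * (n - 1) - far);
    }
    return 0;
}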

»
3 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

In problem B, I take the minimum even and the minimum odd number of array a, then iterate over a and check whether each a_i is divisible by the minimum even or the minimum odd number. So what is the problem with this approach?

»
3 weeks ago, # |
Rev. 2   Vote: I like it 0 Vote: I do not like it

We can simplify E further by considering another array pc[v] = xor of c[v] and the c-values of its children. Now c and pc form a bijection, and a chain of black vertices is just 4 black vertices in pc that satisfy some conditions. 262582001

»
3 weeks ago, # |
Rev. 3   Vote: I like it +8 Vote: I do not like it

Could someone please tell me what is wrong with this solution for problem D: my submission (it's failing on test 3).

Or please provide a test case where it may fail.

  • »
    »
    3 weeks ago, # ^ |
      Vote: I like it +1 Vote: I do not like it

    hack

    1
    5
    1 5
    2 1
    3 1
    4 3
    5 4
    
    • »
      »
      »
      3 weeks ago, # ^ |
        Vote: I like it +8 Vote: I do not like it

      Thank you!!

      There is an issue in the algorithm for finding the middle element on the path between a and b.

»
3 weeks ago, # |
  Vote: I like it +1 Vote: I do not like it

Can someone please explain solution of Problem D in detail with "proof"? Thanks in advance

  • »
    »
    3 weeks ago, # ^ |
      Vote: I like it +4 Vote: I do not like it

The key point is to notice that, no matter how the vertices are painted red before they meet for the first time, the minimum cost will be $$$2\cdot(n-1)-maxdep$$$ if we choose the vertex first painted blue as the root. The proof is that $$$P_A$$$ can choose to go to any subtree of the root before they meet, ensuring $$$2\cdot(n-1)-maxdep$$$ can be reached (it's obvious that this is the minimum). And it is also obvious that delaying the meeting decreases $$$maxdep$$$ by at most 1, as the new root can only be a child of the original root.

»
3 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

My submission: can someone explain why I am getting WA? Please give me one test case.

»
3 weeks ago, # |
  Vote: I like it +6 Vote: I do not like it

Probably overkill, but E can be solved using an Euler tour and a segment tree. Code

»
3 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

Please explain how we find the first vertex that will be painted blue in problem D; the editorial is not clear to me.

»
3 weeks ago, # |
Rev. 5   Vote: I like it +3 Vote: I do not like it

For problem C: why check a length of 3? Initially I used a length of 2 and noticed that some tests failed; I then considered a length of 3. Can you help me?

»
3 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

I love the idea of problem number B. You solved it in a beautiful way. Love you man :)

»
3 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

I was able to get the logic of C during the contest. But using the wrong Data Structure just overkilled me.

»
3 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

Please explain any approach to problem D (I was unable to understand the editorial). Also, if you were able to relate the problem to any classical problem (as mentioned in the editorial), please provide a reference.

»
3 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

Problem C: why is this giving a wrong answer...



void solve() {
    ll n; cin >> n;
    vector<ll> v(n);
    for (int i = 0; i < n; i++) cin >> v[i];
    sort(all(v));
    cout << v[n - 2] << nl;
}
  • »
    »
    3 weeks ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

Just consider this test case: 4 3 2 1 5. You can try, but the answer can't be more than 3, while your approach gives 4, which is wrong. Sorting messes up the array and gives wrong answers.

»
3 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it
  • »
    »
    3 weeks ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

The 2nd number B must be the smallest number not divisible by A, not the smallest number not equal to A.

    Because then B can divide those numbers that A can't.
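
A short sketch of that idea, assuming the usual reading of B (every element must be divisible by at least one of the two chosen values); names and I/O are mine:

#include <bits/stdc++.h>
using namespace std;

int main() {
    int t;
    scanf("%d", &t);
    while (t--) {
        int n;
        scanf("%d", &n);
        vector<long long> a(n);
        for (auto &x : a) scanf("%lld", &x);
        long long A = *min_element(a.begin(), a.end()); // the minimum must be one of the chosen values
        long long B = -1;                               // smallest element NOT divisible by A, if any
        for (long long x : a)
            if (x % A != 0 && (B == -1 || x < B)) B = x;
        bool ok = true;
        if (B != -1)
            for (long long x : a)
                if (x % A != 0 && x % B != 0) { ok = false; break; }
        puts(ok ? "Yes" : "No");
    }
    return 0;
}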

  • »
    »
    3 weeks ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

Example: 2 4 5. You must pick 2 and 5; your code picks 2 and 4.

»
3 weeks ago, # |
Rev. 2   Vote: I like it 0 Vote: I do not like it

Can anybody let me know what is wrong with my logic in Problem C:

My observation: for any array of size n, you can convert the entire array to any element that is not at the last position. So the answer must be the highest element in the subarray of size n-1 that excludes the last element. What is wrong with my observation?

For example, take an array [a, b, c, d, e, f]: with the given operations, you can convert all elements to a, b, c, d or e, but not f. Therefore the answer is the highest among a, b, c, d and e.

Edit: nvm I got it

  • »
    »
    3 weeks ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    shitty mistake

  • »
    »
    3 weeks ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

Hey, what's the mistake here? I tried the same but got WA.

    • »
      »
      »
      3 weeks ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

If you are excluding values by position then fine, but don't do it by value, since the array can contain duplicates. For example, take 4 3 2 5 5:

here the answer is 5, but if you discard a value by comparing it with the first and last element it will cause issues, and if you calculate the median ignoring the right 5 you might get 3 as your answer.

      • »
        »
        »
        »
        3 weeks ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

My approach was to find the biggest element in [0, n-2], for the same reason kyojin27 mentioned. In the case you mentioned, the answer is still 5 with this approach, right? Could you give a test case where this fails?

        • »
          »
          »
          »
          »
          3 weeks ago, # ^ |
            Vote: I like it 0 Vote: I do not like it

Try doing it on 4 3 2 1 5. According to your logic the answer should be 4, but here the answer is 3, since no matter how you choose the subarrays, 4 can never become a median.

          • »
            »
            »
            »
            »
            »
            3 weeks ago, # ^ |
              Vote: I like it 0 Vote: I do not like it

Can't I select $$$(1,2)$$$ and then have the $$$⌊\frac{(2+1)}{2}⌋ = 1st$$$ element as the median? Then similarly select $$$(2,3)$$$, $$$(3,4)$$$, ... to make the entire array 4.

            • »
              »
              »
              »
              »
              »
              »
              3 weeks ago, # ^ |
                Vote: I like it 0 Vote: I do not like it

Bro, you misread the question, I guess. When we find the median we always sort the subarray, so for (1,2) the median will be 1, then for (1,3) it will be 1, and then for (1,4) it will be 1 again.

              • »
                »
                »
                »
                »
                »
                »
                »
                3 weeks ago, # ^ |
                  Vote: I like it 0 Vote: I do not like it

                Thanks for the clarification. I was also stuck in this same kind of logic.

              • »
                »
                »
                »
                »
                »
                »
                »
                3 weeks ago, # ^ |
                  Vote: I like it 0 Vote: I do not like it

                Ah sorry, stupid mistake, Thanks a lot :)

»
3 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

In question C, I think the answer should be the max of what is present between the first and second-to-last elements, because then we can change every element to that max, first taking a subarray of length 2 and then 3.

Correct me if I am wrong.

»
3 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

262692303 Can anyone tell me why this gets MLE, and any possible way to avoid it?

»
3 weeks ago, # |
Rev. 2   Vote: I like it 0 Vote: I do not like it

Problem C was very similar to a past problem, 1349B - Orac and Medians. It is exactly the same operation on a subarray; it only asks a slightly different question, but both problems can easily be reduced to each other and the same observations can be used.

»
3 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

What would be the rating of problem D?

»
3 weeks ago, # |
Rev. 2   Vote: I like it 0 Vote: I do not like it

In G, can somebody explain why I can take only $$$2|t_i|$$$ characters of $$$s$$$ and match them with $$$t_i$$$?

»
3 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

Can problem D be solved using the centroid of a tree? Please explain why it will or will not work!

»
3 weeks ago, # |
Rev. 2   Vote: I like it 0 Vote: I do not like it

Help needed. Can anyone explain my mistake in problem A? My approach was as follows: from the beginning, find the first element that breaks the non-decreasing order, and starting from there again find an element that breaks the current non-decreasing run. If there is none (i == n), it's good. Also, make sure that the last element is actually less than or equal to the first.

I'm sure the bug is stupid, but can't see it. Also, the submission is here. 262528960

Gist of the code
  • »
    »
    3 weeks ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

The code of your submission fails on cases like this one.

    1
    4
    4 6 7 5
    

    Your code gives YES, though it should be NO.

    But when I use your snippet 262716265 it works.

    • »
      »
      »
      3 weeks ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

Thanks. I didn't realize that the two codes are different; I thought the change had no impact, so I didn't bother submitting the updated (and correct) one. My bad.

»
3 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

The presentation of problem D was absolutely mind-blowing. The concept of using the formula 2 * (n - 1) - d to traverse a tree using DFS was cleverly presented and quite tricky. I don't think beginner programmers could have easily guessed the approach required for this problem!

The hint and solution provided for problem D were excellent and are highly recommended to go through. The contest questions overall were amazing!

»
3 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

Can anyone explain why a subarray of length 3 always suffices to provide the correct solution for C?

  • »
    »
    3 weeks ago, # ^ |
      Vote: I like it +1 Vote: I do not like it

Think about determining whether any sub-array of an array has a median $$$>x$$$. Clearly, such a sub-array needs to have a majority of $$$>x$$$ values (i.e. 3 values greater than $$$x$$$ out of 5, or 6 out of 9, 51 out of 100, etc.). So, for every $$$<x$$$ value, there has to be an equal (in fact, larger) number of $$$>x$$$ values.

Now, with that observation in mind, if the array starts to look like $$$x, <x, <x, <x,\dots$$$, which is one $$$x$$$ and 3 values less than $$$x$$$, then you'll need 3 more values that are $$$x$$$ (or $$$>x$$$) to have a 4-out-of-7 majority. So it's better to ignore the first value (so that we don't have to carry the "burden" of the 3 $$$<x$$$ values) and start fresh from a later position. If the answer exists (for the previous example, there are indeed 3 more $$$>x$$$ values coming ahead), you'll find it later anyway.

Thus, you can see that the moment we get an $$$x$$$ followed by 2 values $$$<x$$$, we ignore that and start again from the next position. This is the reason why every sub-array of length $$$3$$$ and its median is a potential answer (and we take the maximum of them). Hopefully I was clear enough.

»
3 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

Thank you for the contest; I really learnt something from E.

»
3 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

Can someone explain the solution to Problem F? The tutorial mentions "constraints" and sets T1 and T2 without defining anything.

»
3 weeks ago, # |
Rev. 2   Vote: I like it 0 Vote: I do not like it

The problem C solution is not clear to me. Can anyone explain?

Also, the approach I used was not the right one, but I still don't know why. The median is the middle value of a non-decreasing sequence; the middle value's position is floor((m+1)/2). Now, for 2 elements like [5, 4] the answer would be 4. For [1, 3, 5] the answer would be 3. So, for any array a, I just sort it in non-decreasing order and say that the maximum median value possible after performing the operations and making every value the same is simply a[n-2]. Why is it wrong? The 2nd-to-last value of a non-decreasing array with n>=2 is always a valid candidate to be a median, as [a1, a2] has median a1 if it is sorted. Now I just need to pick l, r in such a way that a[n-2] stays the median of the subarray a[l...r]. What is the flaw in this logic? Can anybody explain, if possible with a test case where this logic does not work?

Edit: Got the answer to my 2nd query.

»
3 weeks ago, # |
  Vote: I like it +8 Vote: I do not like it

Here's an alternative solution for E which I found easier to understand than the model.

Let's root the tree arbitrarily. If the forest formed by the black vertices has 1 leaf, it consists of a single chain leading upwards from that leaf. If there are 2 leaves, each leaf similarly forms a chain leading upward, except now we need to check if they meet and stop at their LCA.

This is as simple as checking if the highest black vertex has 2 black children. The 2 black children tell us it's the LCA we're looking for, and we know that each of them must lead directly to a leaf, because it would mean there are more than 2 leaves otherwise.

»
3 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

Problem C, hint 3:

"then there must be a subarray of length 3 with a median of y (y ≥ x)."

This is not clear; can someone please explain why a length of 3 is enough?

»
3 weeks ago, # |
  Vote: I like it +10 Vote: I do not like it

Problem G is very beautiful. Both the problem and the solution are.

»
3 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

int j = (i % n) + 1;

Why has this been done in the first solution? What's the point?

»
3 weeks ago, # |
Rev. 2   Vote: I like it 0 Vote: I do not like it

Can someone please explain hints 2 and 3 for problem C? I couldn't connect them, actually.

»
2 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

What a nonsense question C is. Clearly we can do the same by taking two numbers at a time, converting the larger number into the smaller one, and then expanding to make all elements the same using subarrays of size 3.

»
2 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

Can anyone tell me why my solution 263565067 for problem A is wrong?

»
2 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

In problem $$$D$$$, you can try brute-forcing all the nodes to determine the best answer.

Let $$$sumTree[i]$$$ be the minimum number of steps needed to traverse the whole tree considering $$$i$$$ as the root. As we know, $$$sumTree[i] = 2 \cdot (N - 1) - maxHeight(i)$$$. Now the problem is to find $$$maxHeight$$$ for each node. We can see that this value always equals the maximum distance from node $$$i$$$ to one of the endpoints of a tree diameter (as they are the farthest-apart nodes). So now we can calculate $$$sumTree$$$ for any node in $$$O(1)$$$ by determining the diameter and the distances from its endpoints to every node in the tree (which can simply be done with $$$3$$$ BFS/DFS).

Now, to calculate our $$$answer$$$, we just need the distances from our start nodes $$$P_A$$$, $$$P_B$$$.

For each node $$$i$$$, $$$ans[i]$$$ $$$=$$$ $$$distFromB[i]$$$ + $$$sumTree[i]$$$ (don't forget to check that person $$$A$$$ reaches node $$$i$$$ before person $$$B$$$). The $$$answer$$$ will be the minimum over all nodes.

This is the solution link
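
A small sketch of the maxHeight part described above (the combination with distFromB and the reachability check is left out; the names are mine): the farthest vertex from any node of a tree is one of the two diameter endpoints, so three BFS runs give maxHeight for every node.

#include <bits/stdc++.h>
using namespace std;

vector<int> bfs(int src, const vector<vector<int>>& g) {
    vector<int> dist(g.size(), -1);
    queue<int> q;
    dist[src] = 0;
    q.push(src);
    while (!q.empty()) {
        int v = q.front(); q.pop();
        for (int u : g[v]) if (dist[u] == -1) {
            dist[u] = dist[v] + 1;
            q.push(u);
        }
    }
    return dist;
}

// maxHeight[i] = distance from i to the farthest vertex of the tree (vertices 1..n).
vector<int> maxHeight(const vector<vector<int>>& g) {
    vector<int> d0 = bfs(1, g);
    int u = max_element(d0.begin() + 1, d0.end()) - d0.begin();  // one diameter endpoint
    vector<int> du = bfs(u, g);
    int v = max_element(du.begin() + 1, du.end()) - du.begin();  // the other endpoint
    vector<int> dv = bfs(v, g);
    vector<int> h(g.size(), 0);
    for (int i = 1; i < (int)g.size(); i++) h[i] = max(du[i], dv[i]);
    return h;
}
// sumTree[i] = 2 * (N - 1) - maxHeight[i]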

»
2 weeks ago, # |
Rev. 2   Vote: I like it 0 Vote: I do not like it

In the tutorial of problem 1975C - Chamo and Mocha's Array, I couldn't understand the $$$3^{rd}$$$ hint.

  • »
    »
    2 weeks ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

Do you understand the 2nd hint? Please explain; I didn't understand the 2nd and 3rd hints.

»
11 days ago, # |
Rev. 2   Vote: I like it 0 Vote: I do not like it

int d;
if ((d1[i] + d2[i]) & 1 && d1[i] > d2[i]) {
    d = 1 + max(d1[i], d2[i]);
} else {
    d = max(d1[i], d2[i]);
}

This is for problem D. Is this the correct code to find the number of steps taken before the first node is turned blue? d1 and d2 hold the path lengths from A and B to the i-th node, respectively.

»
10 days ago, # |
Rev. 2   Vote: I like it 0 Vote: I do not like it

In this question, i.e. part C, I tried to use a sliding window of length 2 in the contest, but in the editorial they use a sliding window of length 3. What is the problem with a sliding window of length 2 in this question?

  • »
    »
    9 days ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    I have a simple proof for length 3, but not for 2. In fact I can provide a counter example.

    Consider the array 5 1 4 2 3. The median of the array is 3. But the medians of sliding window of length 2 are 1 1 2 2. Since they are all smaller than 3, length 2 will not work.

Proof for length 3 (by contradiction): Consider the array a1 a2 a3 ... an and let ak be the median. Let the number of elements less than ak be l and the number greater be r; clearly l < 2*r. Suppose the median of every subarray of length 3 is less than ak. Then each length-3 subarray which contains a number greater than ak must contain 2 numbers less than ak. If we sum over all those numbers (notice they are disjoint) we get l >= 2*r. Thus, contradiction.

  • »
    »
    34 hours ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

I am also having the same issue. I don't understand why we can't take subarrays of length 2.

»
9 days ago, # |
Rev. 3   Vote: I like it +10 Vote: I do not like it

For problem I, it looks like the second solution naturally extends to arbitrary $$$c_i$$$ as long as division by zero does not occur.


Here's an argument that division by zero does not occur when the $$$c_i$$$ are constrained to be nondecreasing:

  • When $$$c_i\neq 1$$$, $$$C=0$$$ (you can never loop back to the same state)
  • When $$$c_i=1$$$, and $$$C$$$ is nonzero, $$$C$$$ must be of the form $$$1/x$$$ for some integer $$$x\in [2,N]$$$.

So in either case $$$1-C\not\equiv 0\mod{10^9+7}$$$.


I wonder if there are other examples of nontrivial constraints on $$$c_i$$$ (other than $$$c_i$$$ nondecreasing) that guarantee that division by zero does not occur.

  • »
    »
    5 days ago, # ^ |
    Rev. 2   Vote: I like it 0 Vote: I do not like it

It is interesting that the original problem was actually to choose any card to play, maximizing the probability of emptying the draw pile.

Through a large amount of data verification, it was shown to be equivalent to the final problem. However, until the day before the start of the competition, no one was able to provide a rigorous proof, so the final version was given instead.

»
4 days ago, # |
  Vote: I like it 0 Vote: I do not like it

Can anyone explain the proof in problem D?