
hugopm's blog

By hugopm, 4 years ago, In English
Vote: I like it +255 Vote: I do not like it

»
4 years ago, # |
  Vote: I like it +86 Vote: I do not like it

Thanks for the really fast and detailed editorial :D Problems E and F are really interesting too :D

»
4 years ago, # |
  Vote: I like it +3 Vote: I do not like it

Thanks

»
4 years ago, # |
Rev. 4   Vote: I like it +35 Vote: I do not like it

D can be solved in O(V+E). We iterate over the nodes in increasing order of index, i = 1...N. From each unvisited node i we run a DFS that records the node with the greatest index reachable from a vertex v, denoted f(v). We also color the nodes we visit with i, marking their connected component.

Then we essentially have the interval (i, f(i)). Because we iterate in order of node index, it is guaranteed to be the earliest disjoint interval.

Now we iterate through all the nodes j in the interval, and if j is not colored the same as i, we add an edge between them, DFS from j, and extend the right end of the interval to max(f(i), f(j)). We do this because if f(j) > f(i), there might be nodes between f(i) and f(j) that must be connected to i.

Once we are done iterating through the interval, all nodes to the right of it are guaranteed to be disjoint from everything to the left, so we simply set i to f(i) (this makes the next iteration start from f(i)+1).

code
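For reference, a minimal sketch of this interval idea (my own reconstruction, not the code behind the spoiler above; it uses a recursive DFS whose depth can reach n):

#include <bits/stdc++.h>
using namespace std;

vector<vector<int>> adj;
vector<int> comp;   // comp[v] = index of the component v was colored with (0 = unvisited)
int best;           // greatest index reached so far by the current merged component

void dfs(int v, int color) {
    comp[v] = color;
    best = max(best, v);
    for (int u : adj[v])
        if (comp[u] == 0) dfs(u, color);
}

int main() {
    int n, m;
    scanf("%d %d", &n, &m);
    adj.resize(n + 1);
    comp.assign(n + 1, 0);
    for (int e = 0; e < m; ++e) {
        int a, b;
        scanf("%d %d", &a, &b);
        adj[a].push_back(b);
        adj[b].push_back(a);
    }
    long long added = 0;
    for (int i = 1; i <= n; ++i) {
        if (comp[i] != 0) continue;
        best = i;
        dfs(i, i);                              // colors i's component; best becomes f(i)
        for (int j = i + 1; j <= best; ++j) {   // best may grow while we scan the interval
            if (comp[j] != i) {                 // j belongs to another, still uncolored, component
                ++added;                        // one new edge (i, j) merges it in
                dfs(j, i);                      // extends best to max(f(i), f(j))
            }
        }
        i = best;                               // everything up to best is now one component
    }
    printf("%lld\n", added);
    return 0;
}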

  • »
    »
    4 years ago, # ^ |
    Rev. 2   Vote: I like it +4 Vote: I do not like it

    There can be simpler code for D: iterate from 1 to n; in the loop, find the maximum of the current component using DFS, and if some node is unvisited and it is less than the maximum, just increase your answer. It's as simple as that; go through the code for clarification: 65209450

    • »
      »
      »
      4 years ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      Please, can you explain it in some more detail? I am not able to understand why we are calculating the maximum. Can you clarify once more?

      • »
        »
        »
        »
        4 years ago, # ^ |
        Rev. 3   Vote: I like it +3 Vote: I do not like it
        • We want to 'resolve' all triplets such that there is a path from L to R but no path from L to M, where L < M < R.

        • Since it's an undirected graph, every vertex in a component is reachable from every other vertex of that component.

        • This means no such triplet can exist within a single component.

        • In the given testcase the components are [1,2,5,7] [3,4,6,8] [9] [10] [11,12] [13] [14].

        • Paths which should exist but don't, after visiting the first component: 1-3 1-4 1-6 2-3 2-4 2-6 5-6.

        • When the DFS from 1 is over, we see for 3 that a vertex greater than 3 could already be visited from a vertex smaller than it.

        • So we add an edge. Adding an edge between any two components merges them into one. Now see point three: adding an edge between [1,2,5,7] and [3,4,6,8] removes all the issues from point 5.

        • Now the claim is that for every i you will have to add only one edge. Why? Suppose two vertices x and y exist such that x < y < i, and both x and y could reach a point > i.

        • Then we would already have added an edge between y and x, so x and y are already one component, and adding one edge is sufficient for i.

        • »
          »
          »
          »
          »
          4 years ago, # ^ |
            Vote: I like it 0 Vote: I do not like it

          Thanks shameek, I have solved it. Your efforts are appreciated.

»
4 years ago, # |
  Vote: I like it +4 Vote: I do not like it

Can someone explain the solution to C? I'm not sure I understood what is written in the editorial.

  • »
    »
    4 years ago, # ^ |
    Rev. 2   Vote: I like it +4 Vote: I do not like it

    So let's first sort the sweets by sugar concentration; because of this, the complexity of the program is $$$O(n \log n)$$$.

    Consider some number of sweets $$$k \le m$$$. Then all of them will be eaten on the same day (day 1), so the sugar penalty is just their sum. We want to choose the first $$$k$$$ sweets after sorting because they have the lowest $$$a_i$$$, so their sum is the smallest (simple greedy).

    To calculate the sum of elements in $$$O(1)$$$ we use a cumulative sum: we hold the sum of the first $$$k-1$$$ elements, and when we get to $$$k$$$ we add the $$$k^{th}$$$ element to it.

    Now consider $$$k > m$$$: some of the sweets will be eaten on day 2, 3, ... So it is optimal for the sweets with the lowest sugar to be eaten as late as possible.

    Let's remember what happened with $$$k-m$$$ sweets: we have some solution using the $$$k-m$$$ sweets with the smallest sugar concentration. Since then we have added $$$m$$$ sweets, and we know their $$$a_i$$$ is at least as big as every $$$a_i$$$ used for $$$k-m$$$. So let's eat those $$$m$$$ sweets on the first day.

    Now we need to eat the remaining $$$k-m$$$ sweets starting from day 2. The multiplier is now bigger by one, so we need $$$2 \cdot a_{k-m} + \dots + x \cdot a_1$$$, but we know that the answer for $$$k-m$$$ sweets is $$$1 \cdot a_{k-m} + \dots + (x-1)\cdot a_1$$$. Therefore the answer for that part is the sum of $$$a_i$$$ ($$$i$$$ in range $$$[1, k-m]$$$) plus the answer for $$$k-m$$$.

    And don't forget about the first part: its contribution is the sum of $$$a_i$$$ ($$$i$$$ in range $$$(k-m, k]$$$).

    So the answer is the sum of those two parts, which is the sum of $$$a_i$$$ ($$$i$$$ in range $$$[1, k]$$$) plus the answer for $$$k-m$$$.

    The sum is still kept as a cumulative sum, so that part is $$$O(1)$$$, and the answer for $$$k-m$$$ is already known if we iterate from 1 to $$$n$$$. Overall the complexity of this part is $$$O(n)$$$.

    (my solution — 65182757)
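    A minimal sketch of this recurrence (my own code, not the commenter's; 0-indexed, with pre[] holding prefix sums of the sorted array):

    #include <bits/stdc++.h>
    using namespace std;

    int main() {
        int n, m;
        cin >> n >> m;
        vector<long long> a(n);
        for (auto &x : a) cin >> x;
        sort(a.begin(), a.end());

        // pre[i] = a[0] + ... + a[i], prefix sums of the sorted sugar values
        vector<long long> pre(n), ans(n);
        for (int i = 0; i < n; ++i) pre[i] = a[i] + (i ? pre[i - 1] : 0);

        // answer for k = i + 1 sweets: sum of the k smallest values
        // plus the answer for k - m sweets (those are all eaten one day later)
        for (int i = 0; i < n; ++i) ans[i] = pre[i] + (i >= m ? ans[i - m] : 0);

        for (int i = 0; i < n; ++i) cout << ans[i] << " \n"[i + 1 == n];
        return 0;
    }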

    • »
      »
      »
      4 years ago, # ^ |
        Vote: I like it +4 Vote: I do not like it

      Since you have used the STL sort, the overall complexity will be $$$O(n \log n)$$$.

      • »
        »
        »
        »
        4 years ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

        You are right — thanks! I'll correct it! :)

    • »
      »
      »
      4 years ago, # ^ |
        Vote: I like it +4 Vote: I do not like it

      Great explanation, thanks!

»
4 years ago, # |
  Vote: I like it 0 Vote: I do not like it

D can be solved in $$$O(n+m)$$$

(my solution for reference — 65186179)

So let's construct the graph using vectors to represent adjacency (if a is in vector c[b] then there is an edge between b and a, and vice versa). We also use a bool array $$$v[]$$$ to mark whether a node has been visited.

Let $$$maks$$$ be the maximum index of a node we have visited so far; initially it is 0.

It is enough to check, for every node from 1 to $$$n$$$, whether we have already visited it (that's why we have v[]):

  • if the answer is yes, continue to the next one

  • if the answer is no

    • if its index is smaller than the current maks, we need to add an edge connecting maks to the current index, because maks was reached from some node whose index is smaller than the current index
    • start a recursive DFS (in my implementation; BFS or any other algorithm that finds all nodes of the component works too) to find all nodes connected to the current node. We keep track of maks while doing this: maks = max(maks, index of the node currently being processed)

$$$O(n)$$$ to check every v[i] and $$$O(m)$$$ for the BFS/DFS, so $$$O(n+m)$$$ overall. A sketch of this approach follows below.
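For reference, a minimal sketch of this approach (my own rendering with BFS, reusing the names c[], v[] and maks from the comment):

#include <bits/stdc++.h>
using namespace std;

int main() {
    int n, m;
    scanf("%d %d", &n, &m);
    vector<vector<int>> c(n + 1);        // adjacency lists
    for (int e = 0; e < m; ++e) {
        int a, b;
        scanf("%d %d", &a, &b);
        c[a].push_back(b);
        c[b].push_back(a);
    }
    vector<bool> v(n + 1, false);        // visited marks
    int maks = 0;                        // largest index seen in any visited component
    long long ans = 0;
    for (int i = 1; i <= n; ++i) {
        if (v[i]) continue;
        if (i < maks) ++ans;             // i lies inside an earlier interval: one edge fixes it
        queue<int> q;                    // BFS over the whole component of i
        q.push(i);
        v[i] = true;
        while (!q.empty()) {
            int x = q.front(); q.pop();
            maks = max(maks, x);
            for (int y : c[x])
                if (!v[y]) { v[y] = true; q.push(y); }
        }
    }
    printf("%lld\n", ans);
    return 0;
}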

»
4 years ago, # |
  Vote: I like it +18 Vote: I do not like it

Intended solution for D was in fact already $$$O(n + m)$$$. I was sorting an already sorted array...

»
4 years ago, # |
  Vote: I like it 0 Vote: I do not like it

I have a wrong answer on test 13 of problem B. Can you please check my submission to see what is wrong with it?

»
4 years ago, # |
  Vote: I like it -15 Vote: I do not like it

This is my code for the SWEETS EATING problem:

#include <bits/stdc++.h>
using namespace std;
#define ll long long

int main() {
    ios_base::sync_with_stdio(0);
    cin.tie(0);
    cout.tie(0);
    ll n, m;
    cin >> n >> m;

    ll arr[100005];
    for (ll i = 0; i < n; ++i) {
        cin >> arr[i];
    }
    sort(arr, arr + n);
    for (ll i = 1; i < n; ++i)
        arr[i] = arr[i - 1] + arr[i];

    ll ans[100005];
    for (ll i = 0; i < n; ++i) {
        if (i < m)
            ans[i] = arr[i];
        if (i >= m)
            ans[i] = arr[i] + ans[i - m];
    }
    for (ll i = 0; i < n; ++i) {
        cout << ans[i] << " ";
    }

    return 0;
}

It's giving TLE. Can anyone point out why?

»
4 years ago, # |
  Vote: I like it 0 Vote: I do not like it

Is there any source code from the editorial for problem E?

»
4 years ago, # |
  Vote: I like it +8 Vote: I do not like it

A really good contest, thanks

»
4 years ago, # |
Rev. 2   Vote: I like it 0 Vote: I do not like it

For problem F, is there a countertest for not just replacing the edge weights but also moving the endpoints to their nearest centrals?

I'm wondering because there can be ambiguity in the choice of central, since the distance to two or more centrals can be the same. So does it always work, or is there a countertest?

  • »
    »
    4 years ago, # ^ |
    Rev. 7   Vote: I like it 0 Vote: I do not like it

    I'm pretty sure it works, due to the key insights (going to the nearest central is always possible and does not change the set of reachable nodes).

    More precise proof: let $$$t_u$$$ be the nearest central to node $$$u$$$ (choose any central if there are several closest ones). Add your supplementary edges to the original graph (keeping the old ones) together with edges $$$(u \leftrightarrow t_u)$$$ of weight $$$d_u$$$; they obviously do not change the answer.

    Then take any optimal path from $$$a_i$$$ to $$$b_i$$$. As long as there is a "bad" edge $$$u \rightarrow v$$$ in the path, with ($$$u > k$$$ and $$$t_u \neq v$$$) or ($$$v > k$$$ and $$$t_v \neq u$$$), replace $$$u \rightarrow v$$$ by $$$u \rightarrow t_u \rightarrow t_v \rightarrow v$$$ (it can't be heavier). Each time, the number of such edges decreases by one, so it eventually reaches zero. Delete the useless $$$t_u \rightarrow u \rightarrow t_u$$$ parts: there will be no more non-central intermediate nodes. The resulting path exists in your second graph and requires at most the same capacity.

»
4 years ago, # |
  Vote: I like it 0 Vote: I do not like it

For problem F — Cheap Robot, I have an alternative approach 65217904 that does not use the observation "replace the weight of each edge $$$(u,v,w)$$$ by $$$w' = d_u + d_v + w$$$". Run Dijkstra from node 1; whenever you reach a node $$$u \in [1, k]$$$ that has not been seen before, connect $$$u$$$ with the origin of the minimum path that reaches $$$u$$$ with cost $$$dist[u]$$$, set $$$dist[u] = 0$$$, push $$$u$$$ onto the queue again and continue running Dijkstra. Basically I am doing Prim over the nodes $$$[1, k]$$$, so I get a spanning tree of the nodes $$$[1, k]$$$ in which, for every pair of nodes $$$u, v \in [1, k]$$$, the maximum distance travelled without recharging between $$$u$$$ and $$$v$$$ in the original graph is minimized; so the answer for every query is the maximum edge on the unique path between $$$u$$$ and $$$v$$$. The most interesting part is the complexity of the modified Dijkstra; I'm 99% sure it is still $$$O(E \log E)$$$.

»
4 years ago, # |
  Vote: I like it 0 Vote: I do not like it

In problem F solution 1 can be extended to online queries with persistent DSU

»
4 years ago, # |
  Vote: I like it 0 Vote: I do not like it

I don't understand why I got TLE for D... Can anyone help? 65226875 I basically followed the solution above and implemented it myself.

»
4 years ago, # |
  Vote: I like it +6 Vote: I do not like it

I was solving E and came up with a similar solution, but my solution was wrong because I did not take the default transition (dp[x] := (m−x)). After a while I still could not understand the use of the default transition and what it means.

"If i was the last interval used, we don't care because the default transition will take care of this case."

I believe that this sentence is the key, but I am still confused. Can anyone describe it in more detail? Any help will be appreciated.

(Also sorry for not using LaTeX, I haven't learned it well)

  • »
    »
    4 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Oh I get it

    Consider an antenna at i+1 with initial scope 0. When we are at i and calculating the new value, we extend it by one and get (1 + dp[i+3]) as the cost of choosing it: positions i, i+1, i+2 become covered, while i+3, i+4, ..., m are still uncovered. However, if the best choice is to extend this antenna all the way to the end, we cannot get the correct value without setting dp[i+3] to m − (i+3) + 1. When we set the default dp[x] := (m−x), it is actually the cost of extending the antenna to the end.

    • »
      »
      »
      4 years ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      Do you mean an antenna at (i+2) in the first line?

»
4 years ago, # |
  Vote: I like it 0 Vote: I do not like it

In problem D : A small change in DSU find algorithm changes TLE to AC.

TLE
AC

Submissions : 65229651 65229747

Could someone point out why this is happening?

  • »
    »
    4 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    The first one is equivalent to

    return par[node]==node ? node : find(par[node]);
    

    This is just a normal union-find ($$$O(\log N)$$$ per find with union by rank/size, or $$$O(N)$$$ without).

    The second one is union-find with path compression (effectively $$$O(1)$$$ amortized per find with union by rank/size, or $$$O(\log N)$$$ amortized without).
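    For reference, a tiny self-contained demo of find() with path compression (my own example, not taken from the submissions above):

    #include <bits/stdc++.h>
    using namespace std;

    vector<int> par;

    // find() with path compression: every node on the query path is re-hooked
    // directly onto the root, so later queries become almost constant time.
    int find(int node) {
        return par[node] == node ? node : par[node] = find(par[node]);
    }

    int main() {
        par = {0, 0, 1, 2, 3};              // a chain 4 -> 3 -> 2 -> 1 -> 0
        cout << find(4) << '\n';            // prints 0
        for (int x : par) cout << x << ' '; // after compression: 0 0 0 0 0
        cout << '\n';
        return 0;
    }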

  • »
    »
    4 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    I think the first code runs in this way:

    if (par[node] == node) {
        return par[node] = node;
    } else {
        return find(par[node]);
    }
    

    So you can see that every time you call find(u) for any node u, it costs O(distance to the root) time; in the worst case the distance to the root can be n (think about a chain), so your first code caused TLE.

    For your second code, the biggest difference from the first one is that whenever you call find(u) for any node u, every par[node] for the nodes between u and the root is changed to point directly to the root.

    For example, if par[1]=1, par[2]=1, par[3]=2, par[4]=3 and we call find(4), then after the call par[1]=1, par[2]=1, par[3]=1, par[4]=1.

    So your second code got AC.

  • »
    »
    4 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Oh, my bad. I was thinking that par[node] would be updated to one of the returned values in the first case too. Thanks, both.

  • »
    »
    4 years ago, # ^ |
      Vote: I like it +8 Vote: I do not like it

    if/else exist for a reason...

»
4 years ago, # |
  Vote: I like it +11 Vote: I do not like it

It's a really interesting contest, thanks to all the contributors :D

»
4 years ago, # |
  Vote: I like it 0 Vote: I do not like it

Problem 1253B — Silly Mistake: please help, I am not able to find what is wrong in my code. my submission

»
4 years ago, # |
  Vote: I like it +1 Vote: I do not like it

In tutorial E: "If position x+1 is initially covered, dp[x] := dp[x+1]". I wonder if it's wrong; maybe it should be "If position x is initially covered, dp[x] := dp[x+1]".

  • »
    »
    4 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    No. The state $$$dp_x$$$ assumes that position $$$x$$$ is covered.

»
4 years ago, # |
Rev. 4   Vote: I like it 0 Vote: I do not like it

What should the answer to problem A be on this input?

1

2

10 20

15 25

»
4 years ago, # |
Rev. 2   Vote: I like it 0 Vote: I do not like it

.

  • »
    »
    4 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    I think it's very similar to my own implementation (but in reverse order), which is available in the editorial. You should look at it.

»
4 years ago, # |
  Vote: I like it 0 Vote: I do not like it

I have a question about E's editorial: why does "If position x+1 is initially covered, dp[x] = dp[x+1]" work? How are we sure that if x+1 is covered, x is handled too, without the need to pay 1 coin for the distance from x to x+1?

  • »
    »
    4 years ago, # ^ |
    Rev. 2   Vote: I like it 0 Vote: I do not like it

    If position x is initially covered, dp[x] = dp[x+1]. I think it's right after that change, and I have passed the problem with it.

    • »
      »
      »
      4 years ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      Yeah, I know it should be right; I was just asking why it is like that.

      • »
        »
        »
        »
        4 years ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

        I mean you should replace x+1 with x to understand it: if (cover(x)) dp[x] = dp[x+1];

»
4 years ago, # |
  Vote: I like it 0 Vote: I do not like it

Can anyone explain the editorial of D in some other way?

  • »
    »
    4 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    First of all, the problem is about an undirected graph. For undirected graphs the following properties hold:

    • $$$u$$$ is reachable from $$$u$$$
    • if $$$v$$$ is reachable from $$$u$$$, then $$$u$$$ is reachable from $$$v$$$
    • if $$$v$$$ is reachable from $$$u$$$ and $$$w$$$ is reachable from $$$v$$$, then $$$w$$$ is reachable from $$$u$$$

    In other words, for an undirected graph reachability is an equivalence relation (google it if you don't know what that is). So, for each node $$$v$$$ you can define the equivalence class $$$C(v)$$$: the set of all nodes within the same equivalence class as $$$v$$$. Why do I name it $$$C(v)$$$? Because for undirected graphs these equivalence classes are called components. The thing I want to stress: the components of an undirected graph are the equivalence classes of the reachability relation. So, if $$$v$$$ is reachable from $$$u$$$ then $$$C(u) = C(v)$$$, and $$$u \in C(u)$$$ and $$$v \in C(u) = C(v)$$$.

    Now, from the statement:

    For every triple of integers $$$(l,m,r)$$$ such that $$$1\leq l< m < r\leq n$$$, if there exists a path going from node $$$l$$$ to node $$$r$$$, then there exists a path going from node $$$l$$$ to node $$$m$$$.

    Obvious key observation: pick any node $$$v$$$ and find the minimum node $$$l_v = \min \{u : u \in C(v)\}$$$ and the maximum node $$$r_v = \max \{u : u \in C(v)\}$$$ reachable from $$$v$$$; then every $$$m$$$ with $$$l_v < m < r_v$$$ must be reachable from $$$l_v$$$. This is so obvious that it's not described in detail.

    We're not allowed to delete edges; the only thing we can do is add edges. When we add an edge $$$(u, v)$$$ within the same component, then $$$l_u = l_v$$$, $$$r_u = r_v$$$, $$$C(u) = C(v)$$$ and reachability doesn't change, so nothing changes and we just waste an edge; we should not do that. So we should add edges $$$(u, v)$$$ between different components, $$$C(u) \neq C(v)$$$. When we do that, the components merge. The new component is $$$C(u) \cup C(v)$$$, the union of the sets. That's why we need a DSU (disjoint set union). The new bounds are $$$l'_v = \min (l_u, l_v),\;r'_v = \max(r_u, r_v)$$$. The problem is: the new $$$l'_v$$$ can be lower than the previous one. We want to avoid that.

    From now on it should be obvious that the answer is a set of segments $$$[l_v, r_v]$$$, so we can order them by increasing $$$l$$$. The idea is to build the components in exactly this order: from the lowest $$$l_v$$$ to the highest. It's just a loop over $$$v$$$ in increasing order; then $$$l_v = v$$$. The previous components are complete, so $$$l_v$$$ will never change; the only thing that changes is $$$r_v$$$. So we run a loop over $$$m$$$ from the fixed $$$l_v = v$$$ towards the changing $$$r_v$$$ until we reach $$$r_v$$$. And what do we do in the loop? We join $$$m$$$ to $$$v$$$ whenever they are in different components. Important note: when $$$m$$$ reaches $$$r_v$$$, the component is complete, so we don't need to check the nodes in $$$[l_v, r_v]$$$ again; we should skip them all and continue from $$$r_v + 1$$$. Otherwise we'd get an $$$O(n^2)$$$ solution.

  • »
    »
    4 years ago, # ^ |
    Rev. 3   Vote: I like it +5 Vote: I do not like it

    Another way to use DSU for this problem: Modify your DSU to also keep track of the maximum-indexed vertex in each component. This can be done by keeping a new array $$$max$$$ alongside $$$parent$$$ / $$$rank$$$ / $$$size$$$ or whatever you use in your DSU implementation, then changing the $$$join$$$ function to update it as well. Add a function $$$max(v)$$$ (internally max[root(v)]) for easy retrieval of the maximum-indexed vertex connected to $$$v$$$.

    Now read the graph as input and connect components as usual. Then iterate through the vertices from $$$1$$$ to $$$n - 1$$$. If $$$max(v) > v + 1$$$ and $$$v$$$ is not connected to $$$v + 1$$$, connect $$$v$$$ to $$$v+1$$$ (making sure to update the DSU online) and increment the answer.

    Sample: 65261938
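    A minimal sketch of that modified DSU (my own code, using path compression only, which is enough here):

    #include <bits/stdc++.h>
    using namespace std;

    struct DSU {
        vector<int> parent, mx;              // mx[root] = largest vertex index in the component
        DSU(int n) : parent(n + 1), mx(n + 1) {
            iota(parent.begin(), parent.end(), 0);
            iota(mx.begin(), mx.end(), 0);
        }
        int root(int v) { return parent[v] == v ? v : parent[v] = root(parent[v]); }
        int maxOf(int v) { return mx[root(v)]; }
        void join(int a, int b) {
            a = root(a); b = root(b);
            if (a == b) return;
            parent[a] = b;
            mx[b] = max(mx[b], mx[a]);       // keep the component maximum on the new root
        }
    };

    int main() {
        int n, m;
        cin >> n >> m;
        DSU dsu(n);
        for (int i = 0; i < m; ++i) {
            int a, b;
            cin >> a >> b;
            dsu.join(a, b);
        }
        int answer = 0;
        for (int v = 1; v < n; ++v) {
            if (dsu.maxOf(v) > v + 1 && dsu.root(v) != dsu.root(v + 1)) {
                dsu.join(v, v + 1);          // one added edge merges the two components
                ++answer;
            }
        }
        cout << answer << '\n';
        return 0;
    }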

»
4 years ago, # |
  Vote: I like it +4 Vote: I do not like it

In problem E, why are we not considering the transition for $$$x > r_i$$$?

»
4 years ago, # |
Rev. 2   Vote: I like it 0 Vote: I do not like it

Can someone explain why this idea doesn't work for problem E?

• sort all the antennas by position (part 1)
• eliminate every antenna whose signal area is entirely contained in another single antenna's area, so that if antennas a and b have xa < xb, then xa+sa < xb+sb and xa-sa < xb-sb (part 2)
• since the last antenna's signal is now the closest to m, add the remaining distance to m to its scope and, again, eliminate all the antennas to its left that it now overrides (part 3)
• keep a variable bord = the last position which I don't know for sure is covered (so all positions from 1 to bord-1 are covered 100%)
• from the first to the last antenna: (part 4)
  • if bord > x+s, just ignore it (the entirety of this antenna is covered)
  • else if bord >= x-s and bord <= x+s, set bord = x+s+1 (you know for sure that all positions between bord and x+s are covered)
  • otherwise bord < x-s, so add to both the solution and s the distance between bord and x-s-1, which is x-s-bord; now both the area between bord and x-s-1 AND the area between x-s and x+s are covered, so set bord = x+s+1

I got the 10th test wrong, so I made a mistake in my code or in my understanding of the problem, but since it passed 9 tests it can't be that far off.

#include <bits/stdc++.h>
#define x first
#define s second
typedef long long ll;
ll n, m, i, j, l, bord=1, sol;
std::pair<ll, ll> cin[85], list[85];
int main()
{
    scanf("%lld%lld", &n, &m);
    for(i=1; i<=n; ++i) scanf("%lld%lld", &cin[i].x, &cin[i].s);
    ///part 1
    std::sort(cin+1, cin+n+1);
    ///part 2
    for(i=1; i<=n; ++i){
        ///if the present antenna is overriden by the closest antenna ignore the present antenna
        if(l && cin[i].x+cin[i].s<=list[l].x+list[l].s) continue;
        ///while the present antenna overrides the closest antenna eliminate the closest antenna
        while(l && list[l].x-list[l].s>=cin[i].x-cin[i].s) --l;
        list[++l]=cin[i];
    }
    ///part 3
    if(m>list[l].x+list[l].s){
        sol+=(m-(list[l].x+list[l].s));
        list[l].s+=(m-(list[l].x+list[l].s));
        while(l-1 && list[l].x-list[l].s<=list[l-1].x-list[l-1].s){
            list[l-1]=list[l];
            --l;
        }
    }
    ///part 4
    for(i=1; i<=l; ++i){
        if(bord>list[i].x+list[i].s) continue;
        else if(bord>=list[i].x-list[i].s) bord=list[i].x+list[i].s+1;
        else{
            sol+=std::max(0LL, (list[i].x-list[i].s-bord));
            list[i].s=list[i].x-bord;
            bord=list[i].x+list[i].s+1;
        }
    }
    printf("%lld", sol);
    return 0;
}
  • »
    »
    4 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Part 3 seems wrong. Consider the following test:

    2 50
    25 0
    40 1
    

    Best choice is to increase the scope of the middle antenna by 25. You should not increase the scope of the last antenna.

»
4 years ago, # |
  Vote: I like it 0 Vote: I do not like it

I used a disjoint set (DSU) data structure in problem D. In the union function I did not change the value of size after joining, and got a wrong answer. But it got accepted when I changed the value of size[parent_a] or size[parent_b] to 0 or 1 or any int. Why does this happen?

Accepted 65228886: changing the value of size[root] to 0 after joining in join()

Accepted 65247284: changing the value of size[root] to 1 after joining in join()

Wrong answer 65228524

  • »
    »
    4 years ago, # ^ |
      Vote: I like it +9 Vote: I do not like it

    In your DSU join function, you should return if parent_a == parent_b (the nodes are already connected).

    Otherwise you will be joining a component to itself, which artificially multiplies the size of the component by 2, and doing this around 65 times is enough to overflow a long long.
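    For reference, a minimal sketch of a size-tracking join with that guard (hypothetical names par/sz, not the submitter's exact code):

    #include <bits/stdc++.h>
    using namespace std;

    vector<int> par;
    vector<long long> sz;

    int find(int v) { return par[v] == v ? v : par[v] = find(par[v]); }

    void join(int a, int b) {
        a = find(a);
        b = find(b);
        if (a == b) return;          // same component already: joining would double-count sz
        if (sz[a] < sz[b]) swap(a, b);
        par[b] = a;
        sz[a] += sz[b];
    }

    int main() {
        int n = 5;
        par.resize(n);
        iota(par.begin(), par.end(), 0);
        sz.assign(n, 1);
        join(0, 1);
        join(1, 0);                  // a no-op thanks to the guard
        cout << sz[find(0)] << '\n'; // prints 2, not 4
        return 0;
    }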

»
4 years ago, # |
  Vote: I like it 0 Vote: I do not like it

I have written exactly the same code in both languages, but the C++ code gives a wrong answer on test 2 while the Python code is correct and accepted. Could anyone explain why this is happening, given that the logic is the same and only the syntax differs?

C++

#include <bits/stdc++.h>
using namespace std;

void func() {
	int n, x;
    cin >> n;

    vector<int> a(n);
    vector<int> d(n);
    set<int> s;
    bool cond = false;

    for(int i = 0; i < n; ++i) {
    	cin >> x;
    	a[i] = x;
    }

    for(int i = 0; i < n; ++i) {
    	cin >> x;
    	d[i] = (x - a[i]);
    	if(d[i] > 0) s.insert(d[i]);
    	if(d[i] < 0) {
    		cond = true;
    		break;
    	}
    }

    if(s.size() > 1 || cond == true)
    	cout << "NO\n";
    else {
		for(int i = 1; i < n-1; ++i) {
		    if(d[i] == 0) {
		    	if(d[i-1] > 0 && d[i+1] > 0) {
		    		cond = true;
		    		break;
		    	}
		    }
		}
		if(cond == true)
		    cout << "NO\n";
		else 
			cout << "YES\n";
	}
}

int main() {
	ios_base::sync_with_stdio(false);
	cin.tie(NULL);
	cout.tie(NULL);

	int t;
	cin >> t;
	while(t--) {
		func();
	}
	return 0;
}


Python

def func():
	n = int(input())
	a = list(map(int,input().split()))
	b = list(map(int,input().split()))
	
	d = []
	s = set()
	cond = False
	for i in range(n):
		x = b[i] - a[i]
		d.append(x)
		if x > 0:
			s.add(x)
		if x < 0:
			cond = True
			break
				
	if len(s) > 1 or cond == True:
		print("NO")
	else:
		for i in range(1,n-1):
			if d[i] == 0:
				if d[i-1] > 0 and d[i+1] > 0:
					cond = True
					break
		if cond == True:
			print("NO")
		else:
			print("YES")
				
t = int(input())
for i in range(t):
	func()

  • »
    »
    4 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    In your C++ code, you're doing a "break" without having completely read the input of the current testcase. You will not start at the right place when you read the next testcase.

»
4 years ago, # |
  Vote: I like it 0 Vote: I do not like it

Can someone explain why the same code can produce different results with different compilers? 65170371 Here's one input: 1 6 3 7 1 4 1 2 3 7 3 6 3 4

The correct answer should be Yes, but if you use custom invocation the result is: Yes (C++11), NO (C++14).

  • »
    »
    4 years ago, # ^ |
      Vote: I like it +1 Vote: I do not like it

    Local variables pos and pos2 are uninitialized, their default value is undefined and may depend on the compiler and the environment (it's not always 0 for local variables).

»
4 years ago, # |
  Vote: I like it 0 Vote: I do not like it

What's wrong with problem A's 9th test? I can't find my mistake. Here is my submission: 65207960. Can anyone help me with this?

  • »
    »
    4 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Your solution fails on that test

    1
    3
    1 2 3
    1 2 3
    

    Note that you should do at most one operation, so if arrays are initially equal, the answer is YES.

»
4 years ago, # |
  Vote: I like it +15 Vote: I do not like it

I think problem E can be solved with shortest path algorithms. It's necessary to add some edges with weight 0.

See my code

»
4 years ago, # |
  Vote: I like it 0 Vote: I do not like it

For problem F, why should we go to the nearest central and then come back to $$$u$$$, if $$$x \ge d_u$$$?

What if we instead reached $$$u$$$ from another central (the distance may be longer than $$$d_u$$$)?

Going to the nearest central and then back to $$$u$$$ works when $$$x \le c - d_u$$$.

But if not, maybe $$$x > c - d_u$$$ while still $$$x \ge d_u$$$, so why must we go to the nearest central and come back?

  • »
    »
    4 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Oh, I got it! The source is a central.

»
4 years ago, # |
  Vote: I like it 0 Vote: I do not like it

For problem A, can't we use a set which contains only '0's and the differences (b[i] - a[i])? Then, if the size of the set is 2, we print "yes", otherwise "no"?

  • »
    »
    4 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    A: 2 1 2

    B: 1 1 1

    This test involves the two problems your solution has

    • »
      »
      »
      4 years ago, # ^ |
      Rev. 2   Vote: I like it 0 Vote: I do not like it

      What problem would this test case cause? Won't there be only two elements in the set (i.e. 0 and 1), since b[i] - a[i] yields only 1 and 0 and those are pushed into the set? Correct me if I am wrong. Thanks in advance :)

      • »
        »
        »
        »
        4 years ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

        No. b[0]-a[0] is -1.

        And remember that you can apply the update at most once, so checking if the set is of size two is not enough.

        A: 1 1 1

        B: 2 1 2

        Here your set will have size two but you need two updates with k=1
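        For reference, a minimal sketch of a check that handles both issues (my own code, not the editorial's): all differences must be non-negative, the positive ones must all equal the same k, and they must form one contiguous block.

        #include <bits/stdc++.h>
        using namespace std;

        int main() {
            int t;
            cin >> t;
            while (t--) {
                int n;
                cin >> n;
                vector<long long> a(n), b(n);
                for (auto &x : a) cin >> x;
                for (auto &x : b) cin >> x;

                bool ok = true;
                long long k = 0;          // the single value added on the chosen segment
                int first = -1, last = -1;
                for (int i = 0; i < n; ++i) {
                    long long d = b[i] - a[i];
                    if (d < 0) ok = false;            // we can only add, never subtract
                    if (d > 0) {
                        if (k == 0) k = d;
                        else if (d != k) ok = false;  // two different positive differences
                        if (first == -1) first = i;
                        last = i;
                    }
                }
                // a zero strictly inside [first, last] would need a second operation
                if (ok && first != -1)
                    for (int i = first; i <= last; ++i)
                        if (b[i] - a[i] != k) ok = false;
                cout << (ok ? "YES" : "NO") << '\n';
            }
            return 0;
        }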

»
4 years ago, # |
  Vote: I like it 0 Vote: I do not like it

Hey, I need help with Silly Mistake. I am getting a wrong answer but I don't know the reason; I simply implemented it directly. Can anyone please help me find my error?

Problem : Silly Mistake

My Solution

»
4 years ago, # |
  Vote: I like it 0 Vote: I do not like it

Solution for F is so beautiful...

»
4 years ago, # |
Rev. 3   Vote: I like it 0 Vote: I do not like it

Hey guys, does anyone know why my solution to 1253F — Cheap Robot

gets TLE when using "priority_queue<pair, vector, greater>" (a min-heap)

but gets AC when using "priority_queue<pair>" (the default max-heap) in 670 ms?

TLE code

AC code

  • »
    »
    4 years ago, # ^ |
      Vote: I like it +3 Vote: I do not like it

    Both priority queue versions are actually wrong; the tests are just not strong enough to break the other one. You need a stale-entry check along the lines of int v = pq.top().S; if (pq.top().F != d[v]) { pq.pop(); continue; } so that outdated queue entries are skipped, otherwise the code has an O(n^2 log n) worst case.
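    For reference, a minimal sketch of Dijkstra with that stale-entry (lazy deletion) check (my own generic code, not adapted to the specifics of problem F):

    #include <bits/stdc++.h>
    using namespace std;

    // g[v] holds pairs (neighbour, weight).
    vector<long long> dijkstra(int src, const vector<vector<pair<int, long long>>> &g) {
        const long long INF = LLONG_MAX / 4;
        vector<long long> d(g.size(), INF);
        priority_queue<pair<long long, int>,
                       vector<pair<long long, int>>,
                       greater<pair<long long, int>>> pq;
        d[src] = 0;
        pq.push({0, src});
        while (!pq.empty()) {
            auto [dist, v] = pq.top();
            pq.pop();
            if (dist != d[v]) continue;        // stale entry: v was already settled with a smaller key
            for (auto [to, w] : g[v]) {
                if (d[v] + w < d[to]) {
                    d[to] = d[v] + w;
                    pq.push({d[to], to});      // older entries for `to` simply become stale
                }
            }
        }
        return d;
    }

    int main() {
        // tiny usage example: edges 0-1 (weight 2), 1-2 (weight 3), 0-2 (weight 10)
        vector<vector<pair<int, long long>>> g(3);
        g[0].push_back({1, 2}); g[1].push_back({0, 2});
        g[1].push_back({2, 3}); g[2].push_back({1, 3});
        g[0].push_back({2, 10}); g[2].push_back({0, 10});
        for (long long x : dijkstra(0, g)) cout << x << ' ';   // prints 0 2 5
        cout << '\n';
        return 0;
    }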

»
4 years ago, # |
  Vote: I like it 0 Vote: I do not like it

I tried problem D using the same idea as explained in the tutorial and I am still getting an error. https://codeforces.com/contest/1253/submission/65542541

»
4 years ago, # |
  Vote: I like it 0 Vote: I do not like it

For Problem D, I used the following approach —

  1. Firstly, I initialized all the pairs of vertices in an adjacency list.
  2. I used DFS to create an array of components, along with a visited boolean array for tracking which vertices are visited; we could use a DSU to do the same.

This step takes O(V+E) time complexity.

So after the DFS/DSU, we arrive at the following -

[[1,2,7,5],[3,4,6,8],[9],[10],[11,12],[13],[14]]

Now, we can compute the max/min range for each component, so we would get:

[[1,7],[3,8],[9,9],[10,10],[11,12],[13,13],[14,14]]

Now, the logic for counting how many edges have to be added (given that if we have a path from a->b, we also want a path a->c for every a<=c<=b) is: if two ranges intersect, then we need to add an edge between those two components.

Note that we don't need to sort the range array as it is pre-sorted.

Now, we can keep a prevRange for storing the previous range, and if we find that the next range intersects prevRange, we have to connect both components and increase the count of extraEdge by 1. If there is no intersection, we set prevRange = currRange and move to the next range. For the interval logic, please refer to: link

Submission Link — link

»
3 years ago, # |
  Vote: I like it 0 Vote: I do not like it

in the F solution, you DO NOT need a Kruskal algorithm.

You can just use a dsu with weights as a replacement.

»
3 years ago, # |
  Vote: I like it 0 Vote: I do not like it

A simple approach to problem D:
Use a DSU where the value of the parent decides the dominant parent during the union operation. Then run a simple loop from 1 to n and check whether the root of the node is greater than the node; if so, check whether the root of the next node is the same as that of this node, and if not, increase ans by 1. See my submission