### Stepavly's blog

By Stepavly, 4 months ago, translation,

1490A - Dense Array

Problem author: MikeMirzayanov

Editorial
Solution

1490B - Balanced Remainders

Problem author: MikeMirzayanov

Editorial
Solution

1490C - Sum of Cubes

Problem author: MikeMirzayanov

Editorial
Solution

1490D - Permutation Transformation

Problem author: MikeMirzayanov

Editorial
Solution

1490E - Accidental Victory

Problem authors: Stepavly, Supermagzzz

Editorial
Solution

1490F - Equalize the Array

Problem author: MikeMirzayanov

Editorial
Solution

1490G - Old Floppy Drive

Problem authors: Stepavly, Supermagzzz, MikeMirzayanov, sodafago

Editorial
Solution

• +87

 » 4 months ago, # |   +40 Balanced round with good and interesting problems!!
 » 4 months ago, # |   0 Can we use ternary search to solve F?
•  » » 4 months ago, # ^ |   +8 I think not; for ternary search to work you would need the cost to be decreasing and then increasing as a function of C (U shape). But here it's more complicated: the cost is increasing at some values of C (the counts of how often a given number appears) and decreasing everywhere else.
•  » » 4 months ago, # ^ |   +1 I think you cannot use ternary search on integers if there is f(i) == f(i+1).
•  » » 4 months ago, # ^ |   +3 You can solve it with ternary search if you erase duplicate counts. But then ternary search isn't really necessary, because the number of distinct counts drops to sqrt(n) and it fits in the time limit.
•  » » 4 months ago, # ^ |   +21 I used ternary search and it passed pretests
•  » » » 4 months ago, # ^ |   +14 I am sure it only passed pretests because no setter considered that someone would try ternary search on a function so potentially far from having a single local minimum. You are lucky nobody hacked any similar-enough solutions during the open hacking phase.
•  » » » » 4 months ago, # ^ |   0 Thanks for the hack. I wasn't sure it would pass systests; I guess I was lucky enough.
•  » » 4 months ago, # ^ |   0 Can anyone please tell me why this code (link) for F gives TLE while this code (link) passes? unordered_map is faster on average than map, right?
•  » » » 4 months ago, # ^ |   +1 Anti-hash test. Read this
 » 4 months ago, # |   +9 For E, maybe the linear solution is: find the largest i such that prefix_sum[i-1] < a[i]; then exactly the players i through n can win.
•  » » 4 months ago, # ^ | ← Rev. 2 →   0 deleted
•  » » » 4 months ago, # ^ |   0 radix-sort ((:
•  » » » 4 months ago, # ^ |   +2 They do say it's linear after sorting.
•  » » 4 months ago, # ^ |   0 Consider this possibility too. For example: if the prefix sum up to the third element is x, which is less than the largest element but greater than some elements between the third element and the last element. Now if the third player defeats some players in between and gets their tokens, maybe he can defeat the largest-token player now... PEACE.
 » 4 months ago, # |   0 In D, isn't the worst-case time complexity of given solution O(n^2)? Then how will it not give TLE?
•  » » 4 months ago, # ^ |   +3 $n\le 100$ so $O(n^2)$ is enough.
•  » » » 4 months ago, # ^ |   0 But t<=100, this means at worst case there would be (100*100)^2 which is 10^8?
•  » » » » 4 months ago, # ^ | ← Rev. 2 →   +4 $O(n^2)$ is the time complexity for a single run. There are $t$ cases, so the total complexity is $O(t\cdot n^2)$. $100 \times 100^2 = 10^6$ is enough.
•  » » » » 4 months ago, # ^ |   +3 It's $t \times n^2 = 100 \times 100 \times 100 = 10^6$.
 » 4 months ago, # |   0 In problem G, once we change x = x - full_spins*S, how can we be sure that the new x will be less than max(pref) and that we will be able to find it using lower_bound?
•  » » 4 months ago, # ^ | ← Rev. 2 →   +3 "Now we have a new problem: given x, find the first position i in the array such that pref[i] ≥ x. This problem can be solved using binary search. If pref is not sorted, that is, there is j such that pref[j−1] > pref[j], then pref[j] can simply be thrown out of the array (the answer will never be reached on it)." After assigning the new value x = x - full_spins*sum, we cannot be sure that it will be less than max(pref), so the above argument is not valid. Example: a = [1, 1, 1, 1, -2], x = 15. Then S = 2, full_spins = (15-4)//S = 11//2 = 5, so the new x = 15 - 5*2 = 5, and since max(pref) = 4 < 5 we cannot find it using lower_bound(). I think full_spins should use ceil instead of floor.
•  » » » 4 months ago, # ^ |   +3 You are right
•  » » 4 months ago, # ^ |   +8 Have you noticed how full_spins is calculated? full_spins = (x - maxPref + sum - 1) / sum, where maxPref is the maximum prefix sum of the array, and sum is the sum of the entire array. After full_spins full turns, the remaining amount is at most maxPref, so it can be covered within one more pass. There is a small error in the editorial: you need to divide with rounding up.
•  » » » 4 months ago, # ^ |   0 Yeah, there is a mistake in the editorial while the code is right. I didn't see the code and thought my approach was wrong, but it is basically the same.
•  » » » 4 months ago, # ^ |   0 I also tried to implement it based on the editorial but didn't get the correct output for the sample. I looked at the solution and found that ceil was used instead of floor. So the formula in the editorial should be changed from $\lfloor \frac{x - \max_{i=0}^{n-1} pref[i]}{S} \rfloor$ to $\lceil \frac{x - \max_{i=0}^{n-1} pref[i]}{S} \rceil$. Stepavly Can you please check this and update the editorial accordingly? Thanks
•  » » » » 6 weeks ago, # ^ |   0 It's updated now.
 » 4 months ago, # |   +5 I overlooked the constraints in problem D and ended up with an efficient solution of nlogn. :)
•  » » 4 months ago, # ^ |   +3 I don't think your solution is n*log n. Take this case, for example, [1, 2, 3, 4] or any sorted array.
•  » » » 4 months ago, # ^ | ← Rev. 5 →   +1 I used an input of size 2e5 (a sorted array) and it ran in 410 ms, 52.67 MB.
•  » » » » 4 months ago, # ^ |   0 Try like a sorted array of size 2*10^5 or something, it will probably take a while.
 » 4 months ago, # |   +19 Here are some video solutions to all of the problems, and a bit of hype around being 1st
•  » » 3 months ago, # ^ |   0 it will be good if u can make the codes available. thank you
 » 4 months ago, # | ← Rev. 3 →   +2 Linear solution (after sorting) for E: Sort players by tokens in ascending order. Player $i$ can win if $a_1+\dots + a_i\ge a_{i+1}$ and player $i+1$ can win (player $n$ can always win). So we can calculate the prefix sum and iterate from the player with max tokens down to the player with min tokens, and this gives us the linear solution. Code: 107566237
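For readers following along, here is a minimal self-contained sketch of that linear scan (the function name and signature are my own, not from the linked submission):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Sketch of the linear (post-sort) check for problem E.
// Returns, for each player in the ORIGINAL order, 1 if they can win.
// Sorted ascending, player i wins iff pref[i] >= a[i+1] and player i+1
// wins; the richest player always wins, and winners form a suffix.
vector<int> accidental_victory(const vector<long long>& a) {
    int n = a.size();
    vector<int> idx(n);
    iota(idx.begin(), idx.end(), 0);
    sort(idx.begin(), idx.end(), [&](int x, int y) { return a[x] < a[y]; });
    vector<long long> pref(n);
    for (int i = 0; i < n; i++)
        pref[i] = a[idx[i]] + (i ? pref[i - 1] : 0);
    vector<int> win(n, 0);
    win[idx[n - 1]] = 1;                 // the max-token player always wins
    for (int i = n - 2; i >= 0; i--) {
        if (pref[i] >= a[idx[i + 1]])
            win[idx[i]] = 1;             // can catch up to the next winner
        else
            break;                       // winners form a suffix, so stop
    }
    return win;
}
```

On the first sample of the problem, `{1, 2, 4, 3}`, this marks the players with 2, 4 and 3 tokens as possible winners.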
•  » » 4 months ago, # ^ |   0 Hey Thallium54, I have also done the prefix sum approach with sorting and my code looks similar to yours, but my code is giving a wrong answer on test case 2. Can you please tell me why? Code: 107641891
•  » » » 4 months ago, # ^ |   +8 You only considered the first half of the winning condition.
•  » » » » 4 months ago, # ^ |   0 thanks got it
•  » » 4 months ago, # ^ |   +5 I did exactly this, and did not even think about the solution from the editorial, since this was just right there in front of my eyes after realizing what the problem wanted lol
•  » » 4 months ago, # ^ | ← Rev. 2 →   0 Hey Thallium54, I solved the problem exactly like your approach but I got stuck in the last loop. I know exactly the test case that I got stuck on, but I don't know why iterating from the max gives me AC while iterating from the minimum gives me WA on test 2. I would appreciate your help. WA: 107739869 AC: 107733400
•  » » » 4 months ago, # ^ |   0 See here, the reason your second code works is that you broke the loop if the winning condition isn't satisfied, thus the later players won't be counted. The editorial also says that winning players form a suffix, that's why you should iterate from the max.
•  » » » » 4 months ago, # ^ |   0 I only tried to reverse the winning condition, but I still don't get why it didn't work and why I have to iterate from the max.
•  » » 4 months ago, # ^ |   0 Hi, I tried nearly the same code as yours but changed the sort a little, and got WA at test 5. Sadly, I can't see the failing case. When I use a normal sort it passes. So... maybe the order of a[i].second does matter to the result, but why? I think only the tokens matter. Or is there some other problem? Thanks.
•  » » » 4 months ago, # ^ |   0 Your comparison doesn't establish a "strict weak ordering". See this for more information.
•  » » » » 4 months ago, # ^ |   0 oh, i see. i'd better just use '<'. thx
 » 4 months ago, # | ← Rev. 2 →   0 The recursive solution for D was very elegant; I wasted a lot of time coming up with an iterative solution :( My iterative solution, for anyone interested:
#include "bits/stdc++.h"
using namespace std;
void solve() {
    int n; cin >> n;
    vector<int> a(n), right(n, -1), left(n, -1), idx(n), dp(n);
    for (int i = 0; i < n; i++) cin >> a[i], --a[i], idx[a[i]] = i;
    for (int i = n - 2; ~i; --i) {
        int rt = n - 1;
        auto upd = [&](vector<int>& A) {
            A[rt] = (A[rt] == -1 ? i : A[rt]);
            rt = A[rt];
        };
        while (rt ^ i) upd(idx[i] < idx[rt] ? left : right);
    }
    ...
}
int main() {
    int TC; cin >> TC;
    while (TC--) solve();
}
•  » » 4 months ago, # ^ |   0 Can you explain this iterative solution
 » 4 months ago, # | ← Rev. 2 →   0 The editorial mentions an nlogn solution for problem F, but my O(nlogn) solution got TLE on test case 5 :( Link: https://codeforces.com/contest/1490/submission/107615964
•  » » 4 months ago, # ^ |   0 I think it is O(N^2). Test case: when all elements occur only once.
 » 4 months ago, # |   +18 Can problem G be solved if the drive only stopped when the sum was exactly $x$? I misread the problem statement, and while trying to solve it I made some interesting observations (until I realized my mistake), but I'm not sure they form a solution that always works.
I noticed that for each query we need to get something like $k \cdot sum + pref = x$, where $sum$ is the sum of all the elements of the array, $pref$ is any prefix sum of the array (even an empty one), $x$ is the number in each query, and $k \geq 0$. So we can rewrite it as $k = \frac{x-pref}{sum}$, and since $k$ has to be an integer, $x-pref$ has to be divisible by $sum$, i.e. $x-pref \equiv 0 \pmod{sum}$, so $x \equiv pref \pmod{sum}$. Knowing this, I can store and sort all the prefixes of the array by their values $\mathrm{mod}\ sum$, while also keeping the original numbers (for example, by storing the prefix sums in a separate array and the corresponding indices in the sorted array, as pairs of numbers, so that I know both the original value and the index to calculate the number of movements).
Then when answering queries, I only need to take $x \pmod{sum}$ and find it in the sorted prefix array I calculated before, with binary search, for example. If I can't find it, then it's impossible. Otherwise I can calculate the number of movements by retrieving the original value of the prefix I found, and the answer would be, I think, $\lfloor{}\frac{x-pref}{sum}{}\rfloor{} \cdot n + prefInd$, where $prefInd$ is the index of the chosen prefix.
Note that I would have to be careful if $x \equiv 0 \pmod{sum}$: in that case we don't add any prefix index, and we even subtract $1$ from the previous expression, as we are looking for the number of movements (we didn't do this before when having a prefix, since with the array 0-indexed we were already counting one extra movement from the end back to the start of the array).
Something to note is that I may not have to store all the prefixes from the original array, as they can be both positive and negative. This means I could end up choosing a negative prefix while $sum$ is positive. To explain this better, let's look at how I'm supposedly getting $k$. I said $k = \frac{x-pref}{sum}$. $x$ is always a positive integer (given by the constraints of the problem), while both $sum$ and $pref$ can be either positive or negative. $k$ has to be non-negative (obviously, we can't make a negative number of full turns with the drive), so we need to consider some cases:
- If $sum < 0$, then I would have to keep only $pref \geq x$ so that $k$ is not negative.
- If $sum > 0$, then I would keep only $pref \leq x$, for the same reason.
- If $sum = 0$, then we only reach our goal if some $pref = x$ exists, because each full turn is practically a reset back to the start, and if there is no such $pref$, the drive will keep spinning indefinitely.
And I think by processing the queries offline from lower to greater $x$ (we can sort them beforehand), and also using some data structure like a set to add and erase the prefixes, sorted by their values $\mathrm{mod}\ sum$, while handling the cases mentioned, it's possible to do it efficiently. Please correct me if I'm wrong, as it's quite probable that I messed up when thinking this through. But I thought it was an interesting problem to think about, which derived from a silly mistake on my part.
•  » » 4 months ago, # ^ |   +3 I too misread and was thinking how it could be solved
 » 4 months ago, # |   0 Can someone please explain how the first optimization in F works, i.e. why there are no more than sqrt(n) unique values of C? Thanks in advance. P.S.: I tried to write sqrt(n) in the fashion shown on the KaTeX website, but didn't see the symbol in the preview, so if someone could help me with that too, it would be great. Thanks in advance.
•  » » 4 months ago, # ^ |   0 Cnt cannot have more than about sqrt(n) distinct values, because in the worst case the distinct counts are 1, 2, 3, ..., X, which gives n = X*(X+1)/2, so X = O(sqrt(n)).
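That bound is exactly what makes the simple "try every distinct count as the target C" loop fast enough. A minimal sketch (names are mine, not from the editorial code):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Sketch for problem F: the candidate targets C are exactly the distinct
// frequencies, and since distinct frequencies sum to at most n there are
// O(sqrt(n)) of them; the double loop below is therefore O(n * sqrt(n)).
long long equalize_removals(const vector<int>& a) {
    map<int, long long> freq;                 // value -> occurrences
    for (int x : a) freq[x]++;
    vector<long long> cnt;                    // all frequencies
    for (auto& kv : freq) cnt.push_back(kv.second);
    set<long long> candidates(cnt.begin(), cnt.end());  // distinct counts
    long long best = LLONG_MAX;
    for (long long C : candidates) {
        long long removed = 0;
        for (long long c : cnt)
            removed += (c < C) ? c : c - C;   // drop entirely or trim to C
        best = min(best, removed);
    }
    return best;
}
```

On the problem's samples this returns 2, 1 and 2, matching the expected output.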
•  » » » 4 months ago, # ^ |   +1 got it thanks
 » 4 months ago, # |   0 For G, the number of spins should be ceil instead of floor?
•  » » 4 months ago, # ^ |   0 Yess , It should be ceil.
 » 4 months ago, # | ← Rev. 2 →   0 I think Problem C's editorial does not consider floating-point error. In Python, we can calculate $b = \sqrt[3]{x-a^{3}}$ with the code b = (x - a ** 3) ** (1/3). However, this calculation can suffer from floating-point error > < For example, with $x = 65, a = 1$, I calculate $b$ as follows:
x = 65
a = 1
b = (x - a ** 3) ** (1/3)
print(b)
Output: 3.9999999999999996
$b$ should be $4$, but this calculation has floating-point errors. By using Decimal and the round function, this problem goes away. My submission is here; I subtract the cubes $i^3$ (for $i$ in $(1,10^4)$) from $x$: https://codeforces.com/contest/1490/submission/107547316 I'm wary of editorials that make integer decisions with floating-point values. I don't think it's something that would trip up Gray-level participants, but...
 » 4 months ago, # |   0 For problem F, just curious, what is the binary search approach? Any explanation or code will be helpful..
•  » » 4 months ago, # ^ |   +3 107604489 of user Ranger uses ternary search
•  » » » 4 months ago, # ^ |   +8 bcollet It turns out that my solution doesn't work. Thanks to clyring for the hack. Here is the graph for the hack where my solution fails. There are several local minima. Spoiler
•  » » » » 4 months ago, # ^ |   0 What exactly is plotted in this graph? The costs I calculated when preparing the hack were [24, 14, 15, 12], tricking your code with the local minimum at $C=2$. It is possible to create more local minima, but the size $n$ of such testcases will get rather large fairly quickly.
•  » » » » » 4 months ago, # ^ | ← Rev. 2 →   0 The x-axes represent the value of $C$. and the y-axes represent the cost (which is the minimum number of elements to remove) based on the hack test data. $cc$ in my code represent the frequency of frequencies of numbers provided by your hack test data. The function I used is: Spoilerauto get = [&](int mid) { int cnt = 0 ; for(auto &p : cc){ if (p.first > cc[mid].first){ cnt += (p.first - cc[mid].first) * p.second; } else if ( p.first < cc[mid].first ) { cnt += p.first * p.second; } } return cnt; }; Spoiler$[1, 2, 2, 3, 3, 11, 11, 4, 4, 5, 5, 5, 6, 6, 6, 6, 7, 7, 7, 7, 8, 8, 8, 8, 9, 9, 9, 9, 10, 10, 10, 10, 12, 12, 12, 12]$$[cc: [(1, 1), (2, 4), (3, 1), (4, 6)]]$
•  » » » » » » 4 months ago, # ^ |   +3 That's what I calculated... and I used your code to calculate it! Now I see the problem: The x-axis on your chart is the parameter mid you passed into get, which causes out-of-bounds accesses when it is greater than 3, since cc.size() is 4.
 » 4 months ago, # |   0 In problem F, the complexity of my code is O(nlogn), which is fine according to the constraints, but it is giving TLE in system testing. Can anyone please find the problem? solution link
•  » » 4 months ago, # ^ | ← Rev. 2 →   +4 Use map instead of unordered_map. unordered_map can hit its worst-case complexity (O(n) per operation) in case of collisions. map will never go beyond O(log n), since it is implemented as a self-balancing tree. Please correct me if I am wrong.
•  » » » 4 months ago, # ^ |   0 But in either case the complexity should not exceed nlogn...
•  » » » » 4 months ago, # ^ |   0 Code:
unordered_map<ll, ll> m1;
// ll count = 0;
for (ll i = 0; i < n; i++) {
    cin >> v[i];
    m1[v[i]]++;
}
The above loop can itself take O(n*n) in the worst case if you use unordered_map.
•  » » 4 months ago, # ^ | ← Rev. 3 →   +4 Use map instead of unordered_map, because the worst-case complexity of unordered_map is O(n). My solution was also giving TLE, but when I used map it got accepted within the time limit. See my solution; I have also used binary search. Heuit Invincible06
•  » » 4 months ago, # ^ | ← Rev. 2 →   0 I also used unordered_map, and the solution got hacked. I didn't understand why it was giving TLE, but after reading this I finally understand. P.S.: the solution compiled with the Microsoft C++ 2017 compiler with unordered_map was also accepted; the hashing was bad in the GNU compiler in this case.
•  » » 4 months ago, # ^ |   0 Use a custom hash function to avoid anti-hash hacks. I added one to your code and it passed easily: 107711223
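For reference, the usual fix (popularized by the well-known Codeforces blog post on blowing up unordered_map) looks roughly like this; the struct name is arbitrary:

```cpp
#include <bits/stdc++.h>
using namespace std;

// A randomized splitmix64-style hash that defeats anti-hash tests:
// the per-run random offset makes collisions unpredictable to a hacker.
struct custom_hash {
    static uint64_t splitmix64(uint64_t x) {
        x += 0x9e3779b97f4a7c15ULL;
        x = (x ^ (x >> 30)) * 0xbf58476d1ce4e5b9ULL;
        x = (x ^ (x >> 27)) * 0x94d049bb133111ebULL;
        return x ^ (x >> 31);
    }
    size_t operator()(uint64_t x) const {
        static const uint64_t FIXED_RANDOM =
            chrono::steady_clock::now().time_since_epoch().count();
        return splitmix64(x + FIXED_RANDOM);
    }
};

// usage: unordered_map<long long, int, custom_hash> freq;
```

Dropping this struct into a template and passing it as the third template argument is enough; no per-problem changes are needed.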
•  » » » » 4 months ago, # ^ |   0 Thanks a lot. So should I add it to my template and use it whenever I want an unordered_map, or do I need to change something in the struct each time?
•  » » » » 4 months ago, # ^ |   0 no need to change
 » 4 months ago, # |   0 Can someone explain how to derive the formula for the number of full spins in problem G?
•  » » 4 months ago, # ^ |   +1 Suppose we get maxima as the maximum prefix sum during the first spin. Now assume we have y full spins, and sum[n] is the prefix sum at the end of the first spin. We need (y*sum[n] + maxima) >= x, which implies y = (x - maxima + sum[n] - 1) / sum[n] (integer division, i.e. rounding up).
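A tiny sketch of that computation with the rounding-up division made explicit (the function name is mine). With the example from an earlier comment, a = [1,1,1,1,-2] and x = 15 give maxima = 4, S = 2 and 6 full spins, whereas floor division would give 5 and fall short:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Number of complete spins needed in problem G before the target x
// becomes reachable within one more pass. maxima is the best prefix sum
// of a single pass, S = sum[n] > 0 is the gain of one full spin.
// Note the CEILING division: (x - maxima + S - 1) / S.
long long full_spins(long long x, long long maxima, long long S) {
    if (maxima >= x) return 0;          // reachable already in the first pass
    return (x - maxima + S - 1) / S;    // ceil((x - maxima) / S)
}
```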
 » 4 months ago, # |   +11 .
 » 4 months ago, # |   +8 .
 » 4 months ago, # |   0 Could anyone please help me with why my submission gets a TLE for D? I implemented the same idea during the contest. I tried several times to debug but I couldn't :/ I can see that there are at most about 1e6 operations involved, so some optimisation must be needed to prevent TLE, but I don't know what exactly to do. My submission: 107614061
•  » » 4 months ago, # ^ |   +3 You're solving the subparts (recursion) inside the loop. Try taking them outside; it'd work then.
•  » » 4 months ago, # ^ |   +2 rec(l, mx_idx - 1, depth + 1); rec(mx_idx + 1, r, depth + 1); These function calls should be outside (after) the loop.
•  » » » 4 months ago, # ^ |   0 Oh no... my bad. Thanks a ton to both of you! Missed it by the blink of the eye
 » 4 months ago, # | ← Rev. 3 →   0 Found the answer. Ignore
 » 4 months ago, # |   0 1490D - Permutation Transformation can be solved in $O(n)$ time and space using a Cartesian tree: 107689365
•  » » 4 months ago, # ^ |   0 Nice solution!! Can we use a sparse table similarly to improve the complexity? I thought of using a sparse table during the contest but then saw the constraints and abandoned the idea.
 » 4 months ago, # |   0 Can anyone prove the time complexity for B? Thanks in advance :)
 » 4 months ago, # | ← Rev. 2 →   0 Problem F - Equalize the Array. "you can consider only unique values of C (there are no more than O(√n) of them), and get a solution in O(n·√n)" I used this approach mentioned in the editorial during the contest and got a TLE on test case 12. Here is my submission: 107609642
•  » » 4 months ago, # ^ |   +1 Use map instead of unordered_map. Same thing happened with me. Also read this blog
 » 4 months ago, # |   0 I didn't expect the first three problems to be so easy. Missed a golden opportunity to increase my rank.
•  » » 4 months ago, # ^ |   0 Nice pfp
•  » » » 4 months ago, # ^ |   0 lol
 » 4 months ago, # |   +1 Will someone please attach this editorial to the contest itself? I think my color grants me the power to do so, but haven't tried and don't want to risk making a mess on a public round.
 » 4 months ago, # | ← Rev. 2 →   0 Can we use two pointers to solve C? Code: 107687231
 » 4 months ago, # |   0 Can someone please elaborate why only O(sqrt(n)) unique values of C are possible in F?
 » 4 months ago, # |   0 For problem 1490C - Sum of Cubes: in the editorial's code there is precomputation of the cubes up to 10^9. That means the loop is going to iterate 10^9 times, so it should give TLE, since we are allowed only about 10^8 operations per second by the usual standards of online judges. Please suggest if I am wrong.
•  » » 4 months ago, # ^ | ← Rev. 4 →   0 It iterates $\sqrt[3]{10^{12}} = 10^4$ times. The for-loop condition is i * i * i <= N.
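To make the bound concrete, here is a small sketch of the precomputation plus lookup (a hypothetical helper, not the editorial's exact code):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Problem C sketch: precompute all cubes up to 10^12. The loop condition
// i*i*i <= 10^12 stops after only 10^4 iterations, not 10^9. Then for
// every a we check whether x - a^3 is itself a cube.
bool sum_of_two_cubes(long long x) {
    static const set<long long> cubes = [] {
        set<long long> s;
        for (long long i = 1; i * i * i <= 1000000000000LL; i++)
            s.insert(i * i * i);
        return s;                        // ~10^4 elements
    }();
    for (long long a = 1; a * a * a < x; a++)
        if (cubes.count(x - a * a * a)) return true;
    return false;
}
```

For example, 35 = 2^3 + 3^3 is representable, while 1 and 4 are not.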
•  » » » 4 months ago, # ^ |   0 Thanks for clarification.
 » 4 months ago, # |   0 Can somebody help me with my solution for F? Solution for F. IMO this solution works in O(N) time, but somehow it is giving TLE.
•  » » 4 months ago, # ^ |   0 It was because of unordered_map in my case. Taking map instead of unordered_map did the trick :)
 » 4 months ago, # |   0 There is a small bug in the editorial for 1490G - Old Floppy Drive . The formula $\left\lfloor\frac{t}{s}\right\rfloor \cdot S + pref[t \bmod n]$needs to be changed to $\left\lfloor\frac{t}{n}\right\rfloor \cdot S + pref[t \bmod n]$So that in the first member you have amount of full cycles multiplied by the total sum during the cycle.
 » 4 months ago, # |   0 Wrong answer on test 9 in F. Can anyone help?
t = int(input())
for _ in range(t):
    n = int(input())
    arr = list(map(int, input().split()))
    arr.sort()
    freq = [1]
    for i in range(n - 1):
        if arr[i + 1] == arr[i]:
            freq[-1] += 1
        else:
            freq.append(1)
    freq.sort()
    ans = len(freq)
    sett = set(freq)
    for c in sett:
        val = 0
        for i in range(len(freq) + 1):
            if i == len(freq) or freq[i] >= c:
                break
            val += freq[i]
        val += sum(freq[i:]) - c * (len(freq[i:]))
        ans = min(ans, val)
    print(ans)
 » 4 months ago, # | ← Rev. 2 →   0 For Problem G: for the testcase
1
2 2
2 0
1 2
according to me the answer should be [-1 0], but the query is giving [0 0]. Can someone help me understand where I am going wrong? Here is my code:
•  » » 4 months ago, # ^ |   0 See the third step in the algorithm: "sum is at least x". So for the query x=1, a sum of 2 is already enough, and the time is 0.
•  » » » 4 months ago, # ^ |   0 My mistake, I didn't notice "at least" in the statement. Thanks for helping me out.
 » 4 months ago, # |   0 107766764 Problem F. Can anyone help me with what part I messed up in my code? I created an unordered_map for storing the frequency of each element, so it is of size O(N), and a set for storing unique frequencies, of size O(sqrt(N)). Then I iterated over the set, checked all the elements in my map, and computed the answer for each frequency. So it should be O(N*sqrt(N)), but it is showing TLE. Can anyone tell me where I messed up?
•  » » 4 months ago, # ^ |   0 Sorry, my bad, I forgot to use safe hashing to minimize collisions.
 » 4 months ago, # |   0 I am getting TLE on test case 11 in problem F (https://codeforces.com/contest/1490/submission/107779522). I used almost the same approach as given in the editorial. Can someone please help?
 » 4 months ago, # |   0 Why am I getting WA on problem E? :( 107859076
 » 4 months ago, # | ← Rev. 2 →   0 I have a 'problem' with task F. I use map to count the number of occurrences of a given number. When I use unordered_map I get TLE on test 12 (> 2 sec): https://codeforces.com/contest/1490/submission/107876739 When I change to map, my solution takes < 200 ms: https://codeforces.com/contest/1490/submission/107876786 This means that unordered_map is at least 10 times slower in this case. Why is that? My previous experience (including performance tests) was that unordered_map is not slower than map, especially if the number of elements is large (as is probably true in test case 12, where the sequence seems to be arithmetic). Test case 12: 1 200000 53201 106402 159603 212804 266005 319206 372407 425608 478809 532010 585211 638412 691613 744814 798015 851216 904417 957618 1010819 1064020 ... Edit: I should have read the previous comments before posting this, sorry. Problem solved.
 » 4 months ago, # | ← Rev. 3 →   0 Hey guys, can you please suggest when to use unordered_map? In problem F, collisions increase the time complexity of unordered_map, and this can happen in other problems too. So should I always prefer to stick with the normal map?
 » 4 months ago, # |   0 Really liked this contest!
 » 4 months ago, # | ← Rev. 2 →   0 )
 » 4 months ago, # |   0 Problem G. In the full-spin calculation step, why do we compute (x - max(pref[0..n])) before dividing by sum? If the pointer does not reach max(pref[0..n]) at the end, we would subtract more than necessary, and the number of full spins could be less than the answer. Please help me, thank you.
 » 4 months ago, # |   0 In F, unordered_map gives TLE on test 11 for all of the O(n^2), O(nlogn) and O(n) solutions, but using map works just fine. Why?
•  » » 4 months ago, # ^ |   0 Thanks for pointing this out, I was scratching my head trying to figure out what my problem was. From what I researched it seems that the default hashing function for unordered_map can be slow enough and cause enough collisions that it's just better to use a regular map: https://stackoverflow.com/questions/4846798/why-would-map-be-much-faster-than-unordered-map
 » 4 months ago, # |   0 In problem F (Equalize the Array) I am getting a wrong answer at test case 20. I have tried a lot of test cases but I am not able to figure out the problem. Can someone please help me with this? This is my submission: 109139946.
 » 3 months ago, # | ← Rev. 2 →   0 For C, I have stored the cubes in a vector instead of an unordered set. Inside the loop, when I use "i*i*i" the code is accepted, but when I use "cubes[i]" I get a wrong answer, even though the value of cubes[i] is i*i*i. Why is this happening? Thanks in advance!
Accepted code:
int t; cin >> t;
while (t--) {
    ll n; cin >> n;
    bool res = false;
    for (ll i = 1; i * i * i <= n; i++) {
        ll x = n - i * i * i;
        ll pos = lower_bound(cubes.begin(), cubes.end(), x) - cubes.begin();
        if (pos > 0 && cubes[pos] == x) { res = true; break; }
    }
    res ? cout << "YES" : cout << "NO";
    cout << "\n";
}
Wrong answer code:
int t; cin >> t;
while (t--) {
    ll n; cin >> n;
    bool res = false;
    for (ll i = 1; cubes[i] <= n; i++) {
        ll x = n - cubes[i];
        ll pos = lower_bound(cubes.begin(), cubes.end(), x) - cubes.begin();
        if (pos > 0 && cubes[pos] == x) { res = true; break; }
    }
    res ? cout << "YES" : cout << "NO";
    cout << "\n";
}
•  » » 37 hours ago, # ^ |   0 In the wrong answer part where cubes[i] is used, you missed that the starting index for the vector is 0: for(ll i=1;cubes[i]<=n;i++). So i=1 means a^3 cannot take the initial value 1^3, but b^3 still handles that case. The problem arises when n=2, i.e. both a and b are 1^3. Just starting from i=0 will solve it: for(ll i=0;cubes[i]<=n;i++)
 » 3 months ago, # | ← Rev. 2 →   0 Your code here...
#include <bits/stdc++.h>
using namespace std;
typedef long long int lli;
typedef long double lld;
#define FOR(i,l,u) for(lli i=l;i<=u;i++)
void solve() {
    lli t; cin >> t;
    while (t--) {
        lli n; cin >> n;
        lli A[n];
        FOR(i, 0, n - 1) { cin >> A[i]; }
        unordered_map<lli, lli> count;
        FOR(i, 0, n - 1) { count[A[i]]++; }
        vector<lli> C;
        for (auto x : count) { C.push_back(x.second); }
        sort(C.begin(), C.end());
        lli greater_than_or_equal_to_count, mini = n, size = C.size();
        for (lli i = 0; i < ...
 » 5 weeks ago, # |   0 I'm a bit late, but can someone please explain why in the first problem it is better to use the loop instead of the explicit formula?