SlavicG's blog

By SlavicG, history, 17 months ago,

Thank you for participating!

1676A - Lucky?

Idea: mesanu and SlavicG

Tutorial
Solution

1676B - Equal Candies

Idea: Errichto

Tutorial
Solution

1676C - Most Similar Words

Idea: MikeMirzayanov

Tutorial
Solution

1676D - X-Sum

Idea: mesanu

Tutorial
Solution

1676E - Eating Queries

Idea: mesanu

Tutorial
Solution

1676F - Longest Strike

Idea: MikeMirzayanov

Tutorial
Solution

1676G - White-Black Balanced Subtrees

Idea: MikeMirzayanov

Tutorial
Solution

1676H1 - Maximum Crossings (Easy Version)

Idea: flamestorm

Tutorial
Solution

1676H2 - Maximum Crossings (Hard Version)

Idea: flamestorm

Tutorial
Solution
 » 17 months ago, # | ← Rev. 2 →   +25 It was an honor to test this contest. Thanks for the great round. Also, great blog; thanks for helping the community.
 » 17 months ago, # |   +5 Thanks for the effort, much appreciated!
 » 17 months ago, # |   0 I had another solution for 1676G - White-Black Balanced Subtrees: I used DFS to recursively compute every vertex's values, saved them in an array, and then just scanned the array for the vertices that have an equal number of white and black vertices in their subtrees. Maybe this is an easier explanation that doesn't explicitly involve DP.
 •  » » 17 months ago, # ^ |   0 This approach is DP as well: you are finding and storing the white and black counts for each subtree, and then iterating over all vertices to check which have an equal number of white and black vertices in their subtree. It is actually quite similar to what is done in the editorial, except that the editorial counts the balanced subtrees during the DFS itself, while you run a separate loop after the DFS.
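The DFS both comments describe can be sketched as follows (a minimal illustration, not the editorial's code; the function names and the 0-rooted `children` representation are my own):

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Returns {white, black} counts in the subtree of v and increments `balanced`
// whenever the two counts are equal. `children[v]` lists the children of v
// in a tree rooted at vertex 0; `color[v]` is 'W' or 'B'.
std::pair<int, int> dfs(int v, const std::vector<std::vector<int>>& children,
                        const std::vector<char>& color, int& balanced) {
    int white = (color[v] == 'W');
    int black = (color[v] == 'B');
    for (int c : children[v]) {
        auto [w, b] = dfs(c, children, color, balanced);
        white += w;
        black += b;
    }
    if (white == black) ++balanced;
    return {white, black};
}

int countBalanced(const std::vector<std::vector<int>>& children,
                  const std::vector<char>& color) {
    int balanced = 0;
    dfs(0, children, color, balanced);
    return balanced;
}
```

Whether the balanced check happens inside the DFS (as here) or in a separate loop afterwards, the work done is the same O(n).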
 » 17 months ago, # |   0 I really threw on C and D cuz I'm bad at determining time complexity.
•  » » 17 months ago, # ^ |   0 The same with D
•  » » » 17 months ago, # ^ |   0 For C, I calculated the number of operations as (100 * 50 * 8)^2 instead of (50 * 8)^2 * 100. For D, I assumed that the intended solution was actually O(n^2)
 » 17 months ago, # |   +2 My first unrated round, and I solved all of the problems within the contest time. Happy coding :) The problems were so interesting.
 •  » » 17 months ago, # ^ |   0 Wow, I saw your profile. If your rating were just 12 lower, it would go like a slingshot: a little further back, and so much further forward.
•  » » » 17 months ago, # ^ |   0 Yes you are right.. but I enjoyed this round. :)
 » 17 months ago, # |   -16 The stated time complexity of problem D is O(q log n + n). But if we calculate the worst case according to the given constraints, it should be 10^7 * 10^3 + 10^7. Also, you are not considering the loop over test cases, so in my opinion its time complexity should be O(T * N * q log N). That would not be fast either, so I have a doubt here. Please correct me if I am wrong.
 •  » » 17 months ago, # ^ |   0 I guess you are talking about problem E, not D. It is guaranteed that $\sum n$ and $\sum q$ over all test cases don't exceed $1.5\times10^5$, so there is no need to multiply by $T$ when calculating the time complexity. Under this condition, the time complexity is $O(q\log n)\approx 1.5\times 10^5\times 17 = 2.6\times 10^6$ operations, which passes easily within the 3.5 second time limit.
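The $O((n+q)\log n)$ approach for E that this complexity refers to (sort descending, build prefix sums, binary-search each query) can be sketched like this; a rough illustration, with the helper name my own:

```cpp
#include <algorithm>
#include <cassert>
#include <functional>
#include <numeric>
#include <vector>

// For each query x: the minimum number of candies needed to reach at least x
// total sugar when we always eat the sweetest remaining candy, or -1 if even
// all candies together are not enough. O((n + q) log n) overall.
std::vector<long long> eatingQueries(std::vector<long long> a,
                                     const std::vector<long long>& queries) {
    std::sort(a.begin(), a.end(), std::greater<>());      // sweetest first
    std::vector<long long> prefix(a.size());
    std::partial_sum(a.begin(), a.end(), prefix.begin()); // prefix[i] = top i+1 candies
    std::vector<long long> ans;
    for (long long x : queries) {
        // first prefix sum that reaches x
        auto it = std::lower_bound(prefix.begin(), prefix.end(), x);
        ans.push_back(it == prefix.end() ? -1 : (it - prefix.begin()) + 1);
    }
    return ans;
}
```

The sort and prefix sums are done once per test case, so each query costs only one binary search.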
 » 17 months ago, # |   +1 Great Round, I am really excited to see my new cyan color.
 » 17 months ago, # |   +4 E, F and H2 are really good problems. Nice Div. 4!
 » 17 months ago, # |   +3 Problem H2: has anyone used an ordered_set (a multiset using pairs, from pbds)?
 » 17 months ago, # | ← Rev. 2 →   +3 Can someone please help me understand the ordered_set solution to problem H2? I understand that order_of_key(k) returns the number of elements strictly less than k. However, I have seen many solutions that use an ordered_set of pairs to find the number of elements with a[i] >= a[j], like this one:
ordered_set st;
for (i = n; i > 0; i--) {
    ans += st.order_of_key({a[i], inf});
    st.insert({a[i], i});
}
cout << ans;
 •  » » 17 months ago, # ^ |   0 From what I understand, people use pairs so that they do not have to worry about equal values. Since we need to count pairs that might be equal, it is a good idea to use a pair in which every value carries its own unique index, and then we can get the answer simply by calling order_of_key. You can also do this without pairs: just use greater_equal in your ordered_set typedef and use a map to store equal values.
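A self-contained sketch of the pair trick discussed above (assuming GCC's pbds headers are available; the helper name is my own):

```cpp
#include <ext/pb_ds/assoc_container.hpp>
#include <ext/pb_ds/tree_policy.hpp>
#include <cassert>
#include <climits>
#include <utility>
#include <vector>

using ordered_set =
    __gnu_pbds::tree<std::pair<int, int>, __gnu_pbds::null_type,
                     std::less<std::pair<int, int>>, __gnu_pbds::rb_tree_tag,
                     __gnu_pbds::tree_order_statistics_node_update>;

// Counts pairs (i, j) with i < j and a[i] >= a[j]. Scanning from the right,
// the set holds {a[j], j} for every j already seen; order_of_key({a[i], INT_MAX})
// counts entries whose value is <= a[i], and the index in the pair keeps equal
// values distinct, so a plain (non-multi) set suffices.
long long countPairs(const std::vector<int>& a) {
    ordered_set st;
    long long ans = 0;
    for (int i = (int)a.size() - 1; i >= 0; --i) {
        ans += st.order_of_key({a[i], INT_MAX});
        st.insert({a[i], i});
    }
    return ans;
}
```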
 » 17 months ago, # |   +6 I loved the round! The problems all had short and clean implementations, and I had a good time taking the contest.
 » 17 months ago, # |   +3 Thanks a lot for the round. It was great. Hope to have many more Div. 4 rounds like this.
 » 17 months ago, # |   +91
•  » » 17 months ago, # ^ |   0 and they sort too
 » 17 months ago, # |   0 Can someone please explain how the BIT approach works for H2? Or maybe redirect me to some article which explains it well. Thanks.
 •  » » 17 months ago, # ^ |   0 My BIT approach for H2: 156714981
 •  » » 17 months ago, # ^ |   +3 You have to count pairs with a_i >= a_j for i < j. If you know BIT, you can process range sum queries: at each step, calculate rangesum(a[j], max(a)), which gives the count of all already-inserted elements greater than or equal to a[j]; after doing this, increase the count of a[j] in the BIT.
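A minimal sketch of this BIT idea (not the linked submission; names are my own):

```cpp
#include <cassert>
#include <vector>

// Fenwick tree (BIT) over values 1..maxVal, storing how many times each
// value has been inserted so far.
struct BIT {
    std::vector<long long> t;
    explicit BIT(int n) : t(n + 1, 0) {}
    void add(int i) {                       // insert one occurrence of value i
        for (; i < (int)t.size(); i += i & -i) ++t[i];
    }
    long long query(int i) const {          // total occurrences of values 1..i
        long long s = 0;
        for (; i > 0; i -= i & -i) s += t[i];
        return s;
    }
};

// Counts pairs (i, j) with i < j and a[i] >= a[j]: scan left to right, and
// before inserting a[j], count the already-seen values that are >= a[j].
long long countPairsBIT(const std::vector<int>& a, int maxVal) {
    BIT bit(maxVal);
    long long ans = 0, seen = 0;
    for (int x : a) {
        ans += seen - bit.query(x - 1);     // seen values in [x, maxVal]
        bit.add(x);
        ++seen;
    }
    return ans;
}
```

Subtracting the prefix count `query(x - 1)` from the number of inserted elements is the same as the rangesum(a[j], max(a)) query in the comment above.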
•  » » 17 months ago, # ^ |   0 https://codeforces.com/blog/entry/86294 — Inversion count
 » 17 months ago, # | ← Rev. 2 →   0 The editorial solution for F in Python TLEs.
 » 17 months ago, # | ← Rev. 2 →   0 Hi, can someone explain why my n log n solution failed for problem F? It got accepted during the contest, and even after the hacking phase finished it was still accepted. But when I checked 14 hours after the contest, it suddenly got TLE on test case 18.
 •  » » 17 months ago, # ^ |   +3 It's because after the hacking phase is over, all solutions are retested with the test cases used in hacks, and so was yours. The main reason is that the worst-case time complexity is not n log n but n^2: the C++ unordered_map searches for elements with a linear scan within a bucket when collisions occur. Ideally it would be n log n if a balanced tree (such as an AVL tree) were used instead of a linear structure, which is the case in Java.
 •  » » » 17 months ago, # ^ |   0 Thank you for the explanation. So is it not recommended to use unordered_map? I usually avoid an ordered map for better time complexity, but if unordered_map is not consistent, should I just stick with an ordered map or some other alternative?
 •  » » » » 17 months ago, # ^ |   0 Try to avoid it as much as you can; always use an ordered map if the time complexity allows.
 » 17 months ago, # |   0 Can someone please tell me why it's giving TLE? It's very similar to what's given in the tutorial. Here is my submission: https://codeforces.com/contest/1676/submission/156791528
 •  » » 17 months ago, # ^ |   0 You are getting TLE because, rather than iterating over the n values in the array, your loop iterates over all values from mini to maxi, which can be as large as 10^9, since the constraints state 1 <= a[i] <= 10^9.
 » 17 months ago, # |   +10 If you use Python and get TLE on problem F, I suggest you read this blog: https://codeforces.com/blog/entry/101817
 » 17 months ago, # |   0 Can someone explain F? I got confused during the contest.
 •  » » 17 months ago, # ^ |   0 You have to output a range [L, R] such that every element between L and R has a count greater than or equal to k; if a value does not appear in the array, its count is considered zero.
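One possible way to implement that (a sketch, not the editorial's solution; the helper name is my own): count occurrences, keep only values appearing at least k times, and take the longest run of consecutive values among them.

```cpp
#include <cassert>
#include <map>
#include <utility>
#include <vector>

// Finds a longest interval [l, r] such that every value in l..r occurs at
// least k times in a; returns {-1, -1} if no single value occurs k times.
// std::map iterates keys in increasing order, so a single left-to-right scan
// over the qualifying values finds every run of consecutive values.
std::pair<long long, long long> longestStrike(const std::vector<long long>& a,
                                              long long k) {
    std::map<long long, long long> cnt;
    for (long long x : a) ++cnt[x];
    std::pair<long long, long long> best = {-1, -1};
    long long runStart = 0, prev = 0;
    bool inRun = false;
    for (auto [v, c] : cnt) {
        if (c < k) { inRun = false; continue; }
        if (!inRun || v != prev + 1) runStart = v;   // start a new run
        inRun = true;
        prev = v;
        if (best.first == -1 || v - runStart > best.second - best.first)
            best = {runStart, v};
    }
    return best;
}
```

Counting dominates, so this is O(n log n) including the implicit sorting done by the map.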
 » 17 months ago, # |   0 orz
 » 17 months ago, # |   0 Can somebody help me with my solution for problem F? It got accepted last night during the contest, but it was rejected in system testing with TLE on test case 18. My solution for F. I am basically doing exactly what is mentioned in the editorial, but mine is getting TLE.
•  » » 17 months ago, # ^ |   0 See this comment https://codeforces.com/blog/entry/102710?#comment-911393
•  » » 17 months ago, # ^ |   0
 » 17 months ago, # |   0 I have experienced some shocking results on problem F: if I use unordered_map it gives me TLE, but using map it works fine! Am I missing something?
 •  » » 17 months ago, # ^ | ← Rev. 2 →   +3 When the value of n is large (usually larger than 10^5), there can be a high number of collisions, so the hash map hits its worst-case insertion time of O(n) instead of O(1), making the overall time complexity O(n^2). The solution is to use a map: it has no collision problem, and each operation takes O(log n).
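If you still want expected O(1) operations, a common workaround (the splitmix64 custom-hash technique popularized on Codeforces, not something from this editorial) is to keep unordered_map but give it a randomized hash:

```cpp
#include <cassert>
#include <chrono>
#include <cstdint>
#include <unordered_map>

// splitmix64 mixed with a runtime seed: an adversary can no longer
// pre-compute a set of keys that all land in the same bucket.
struct CustomHash {
    static uint64_t splitmix64(uint64_t x) {
        x += 0x9e3779b97f4a7c15ULL;
        x = (x ^ (x >> 30)) * 0xbf58476d1ce4e5b9ULL;
        x = (x ^ (x >> 27)) * 0x94d049bb133111ebULL;
        return x ^ (x >> 31);
    }
    size_t operator()(uint64_t x) const {
        static const uint64_t seed = static_cast<uint64_t>(
            std::chrono::steady_clock::now().time_since_epoch().count());
        return splitmix64(x + seed);
    }
};

// Drop-in replacement for unordered_map<long long, long long> that keeps
// expected O(1) operations even against anti-hash hack tests.
using SafeMap = std::unordered_map<long long, long long, CustomHash>;
```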
 » 17 months ago, # |   0 Please, can you tell me why my submission is giving TLE? Submission
 •  » » 17 months ago, # ^ |   0 It is because you are using unordered_map and not map; I have already explained the reason in an earlier comment. Check it out if interested.
 » 17 months ago, # |   0 Hi mesanu, for D. X-Sum my Python code got TLE. I looked at the editorial and it looks similar to mine. Is it a language problem (do I have to use C++ rather than Python), or am I missing something in my code? My submission ID: 156738601. Thanks!
 •  » » 17 months ago, # ^ |   0 You don't have to use C++. Try submitting in PyPy 3; it is way faster.
 » 17 months ago, # |   +1 The time complexity of F is not O(n) as mentioned in the solution; it's O(n log n), since we need to sort the array.
 » 17 months ago, # |   0 For problem G, why does this Java code get TLE? https://codeforces.com/contest/1676/submission/156893634
 » 16 months ago, # |   0 In problem D, why did you do num -= ar[i][j] * 3?
 •  » » 16 months ago, # ^ |   0 Because you count it 4 times, once in each of the 4 diagonal directions (top right, top left, bottom left, bottom right).
 » 16 months ago, # |   0 Hi, shouldn't the time complexity for problem $D$ be $\mathcal{O}(n \cdot m \cdot \min(n, m))$?
 •  » » 11 months ago, # ^ |   0 I also have the same doubt.
 » 16 months ago, # |   0 Still getting TLE on problem E even after applying lower-bound binary search. Solution: https://codeforces.com/contest/1676/submission/159866893
 •  » » 15 months ago, # ^ | ← Rev. 3 →   0 You are sorting the array and calculating the sums inside each query. That is why you are getting TLE. You can do both before the first query, since those values are the same for every query. See my submission.
 » 14 months ago, # |   0 For F, finding the longest subarray of good values takes O(n) time. However, we must sort the array in advance, which is an O(n log n) operation. I think the overall time complexity should be O(n log n) instead of O(n). Correct me if I am wrong.
 » 8 months ago, # |   0 I guess H2 can also be solved using an ordered multiset: Link