By Baadshah_, history, 7 months ago,

How to solve the Codechef problem SMARKET using Mo's algorithm?


 » 7 months ago, # |   +6 I am not sure whether this will pass the TL. Maintain additions and deletions using a BIT. For each query you then need to find the number of elements greater than k in the current window, which can also be done with the BIT. The BIT adds an extra log(n) factor, though.
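The BIT idea above can be sketched like this: store a count per block size, and answer "how many stored sizes are >= k" with two prefix sums. This is a minimal illustrative sketch, not the commenter's actual code; the struct and method names are assumptions.

```cpp
#include <vector>

// Fenwick tree (BIT) over values 1..n; a sketch of the counting idea above.
struct BIT {
    int n;
    std::vector<int> t;
    BIT(int n) : n(n), t(n + 1, 0) {}
    void add(int i, int delta) {        // a block of size i appears (+1) or disappears (-1)
        for (; i <= n; i += i & -i) t[i] += delta;
    }
    int prefix(int i) const {           // total count of stored sizes in 1..i
        int s = 0;
        for (; i > 0; i -= i & -i) s += t[i];
        return s;
    }
    int count_at_least(int k) const {   // number of stored sizes >= k
        return prefix(n) - prefix(k - 1);
    }
};
```

Both `add` and the query are O(log n), which is exactly the extra factor the comment mentions.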
•  » » 7 months ago, # ^ |   +11 You can get rid of the log(n) factor using a pretty standard trick. Suppose cnt[K] is the number of stable blocks of order K. Maintain another array block_cnt[SQRT_N] which stores, for each bucket of orders, the total count of stable blocks in that range. So if you need to count the number of stable blocks of order >= x, you iterate over at most SQRT_N buckets. The final complexity is O(N*sqrt(N)), which should fit in the given time limit.
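A minimal sketch of the cnt[] / block_cnt[] trick above: updates are O(1), and the "order >= x" query walks the tail of one partial bucket plus the full buckets after it, O(sqrt(N)) total. Names are illustrative assumptions.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Sqrt decomposition over the cnt[] array, as described above.
struct OrderCounter {
    int n, bs;                        // max order, bucket size ~ sqrt(n)
    std::vector<int> cnt, bucket;     // cnt[k]: blocks of order k; bucket: per-range sums
    OrderCounter(int n) : n(n), bs(std::max(1, (int)std::sqrt((double)n))),
                          cnt(n + 1, 0), bucket(n / bs + 2, 0) {}
    void add(int k, int delta) {      // a block of order k appears (+1) / disappears (-1)
        cnt[k] += delta;
        bucket[k / bs] += delta;
    }
    int count_at_least(int x) const { // number of blocks of order >= x
        int res = 0, b = x / bs;
        // tail of the partial bucket containing x
        for (int k = x; k < std::min(n + 1, (b + 1) * bs); ++k) res += cnt[k];
        // all full buckets after it
        for (int j = b + 1; j < (int)bucket.size(); ++j) res += bucket[j];
        return res;
    }
};
```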
•  » » » 7 months ago, # ^ |   0 Yeah, that passed for me with sqrt decomposition. That is when I realized that a segment tree is not the most versatile data structure for query problems.
•  » » » » 7 months ago, # ^ |   0 Of course sqrt decomposition is more flexible than a segment tree, but you can also solve this problem with a persistent segment tree in O(n log n), or with a plain segment tree in O(n log^2 n).
•  » » » » » 7 months ago, # ^ |   0 Can you please explain how we can do that?
•  » » » » » » 7 months ago, # ^ |   0 Precompute all the stable blocks and sort them by order. Add all the blocks of order 1 to the persistent tree, then remove them again; add all the blocks of order 2, then remove them again; and so on. Now you have a timepoint corresponding to the blocks of order 1, a timepoint corresponding to the blocks of order 2, and so on. When you process a query, you might end up with a block that is split by Li, and likewise one that is split by Ri; there are also cases where Li and Ri fall inside the same block. You can use a BST or a similar structure to handle these edge cases.
•  » » 7 months ago, # ^ | ← Rev. 3 →   +9 You need neither a BIT nor the block_cnt[SQRT_N] method that satyaki3794 mentioned. All you need is a simple cnt[N] array, which you update every time the size of a stable block increases or decreases by 1. For example, if the current stable block of length x grows by one, just do ++cnt[x+1]; if it shrinks by one, do --cnt[x]. This works because the left/right pointer in Mo's algorithm moves by one unit per step, so the size of a stable block changes by exactly 1 each step. Therefore, if we have a stable block of size 5, it must have existed with sizes 1, 2, 3 and 4 at some point, and the corresponding count array entries were updated then. So finally, the number of blocks of order 'k' is simply cnt[k]. You can refer to my solution, which uses the above method, here
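The update described above is tiny; the whole point is that growing or shrinking a stable block touches exactly one counter. A hedged sketch (names are assumptions, not the commenter's code):

```cpp
#include <vector>

// cnt[k] ends up counting stable blocks of size >= k, because a block that
// reached size L incremented cnt[1], cnt[2], ..., cnt[L] on its way up.
struct StableCounts {
    std::vector<int> cnt;                  // cnt[k]: number of blocks of order k
    StableCounts(int n) : cnt(n + 2, 0) {}
    void grow(int x) {                     // block length goes x -> x+1
        ++cnt[x + 1];                      // orders 1..x were counted earlier
    }
    void shrink(int x) {                   // block length goes x -> x-1
        --cnt[x];                          // only order x disappears
    }
};
```

Inside Mo's algorithm you would call `grow`/`shrink` from the add/remove routines as the window pointers move.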
•  » » » 7 months ago, # ^ |   0 Shit! This was awesome. Another easy way, told to me by hm_98, is to split each (L, R, k) into two queries (L, k) and (R, k). Sort them, then sweep left to right, using a BIT to query the number of integers > k. The answer for one query would be query(R, k) - query(L, k).
 » 7 months ago, # |   0 I solved it with a persistent segment tree in O(N * logN)
•  » » 7 months ago, # ^ |   0 Can you please explain it?
 » 7 months ago, # | ← Rev. 2 →   +1 I am not familiar with persistent segment trees, so please point it out if what I describe resembles that idea. My solution uses binary search to find the answer. First I transformed the problem into an equivalent one: for every contiguous run of identical numbers of length k, I replace that run with the numbers 1, 2, ..., k. For example, the array 20,10,10,7,7,7,10 becomes 1,1,2,1,2,3,1. Now the query (l,r,k) becomes: how many times does the number k occur in the range (l,r)? This equals query(1,r,k) - query(1,l-1,k). To find query(1,r,k) we can simply maintain, for each k, a vector storing the index of each occurrence of k in the array, and binary search for the number of such indices less than or equal to r. The only thing we need to be careful about is whether we overcounted the leftmost instance of k or undercounted the leftmost absence of k; this can easily be checked by maintaining a pointer to the next '1' in the array. The overall complexity comes out to O(Q*log(n)). Here is a link to my submission: https://www.codechef.com/viewsolution/13229877
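The transform and the prefix query described above can be sketched as follows. This is an illustrative sketch only; the left-boundary correction the commenter mentions (the pointer to the next '1') is omitted.

```cpp
#include <algorithm>
#include <vector>

// Replace each run of identical values with 1, 2, ..., runLength.
// Example from the comment: 20,10,10,7,7,7,10 -> 1,1,2,1,2,3,1
std::vector<int> run_transform(const std::vector<int>& a) {
    std::vector<int> t(a.size());
    for (size_t i = 0; i < a.size(); ++i)
        t[i] = (i > 0 && a[i] == a[i - 1]) ? t[i - 1] + 1 : 1;
    return t;
}

// pos[k] = sorted 1-based indices where the transformed value equals k
std::vector<std::vector<int>> build_positions(const std::vector<int>& t) {
    int mx = *std::max_element(t.begin(), t.end());
    std::vector<std::vector<int>> pos(mx + 1);
    for (size_t i = 0; i < t.size(); ++i) pos[t[i]].push_back((int)i + 1);
    return pos;
}

// query(1, r, k): occurrences of value k among transformed positions 1..r,
// found by binary search on the sorted index vector
int prefix_count(const std::vector<std::vector<int>>& pos, int r, int k) {
    if (k >= (int)pos.size()) return 0;
    const auto& v = pos[k];
    return (int)(std::upper_bound(v.begin(), v.end(), r) - v.begin());
}
```

With these pieces, the range answer is `prefix_count(pos, r, k) - prefix_count(pos, l - 1, k)` plus the boundary fix at l.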