### LittleMaster_7's blog

By LittleMaster_7, history, 8 years ago,

SPOJ — Most Frequent Value

I solved this problem using Mo's algorithm.

Is there any online solution (answering each query as it arrives) for this problem?

• +44

 » 8 years ago, # | ← Rev. 3 →   +21 I only know the O(N^(5/3)) solution. Tried to find a better solution for quite a long time, but no results yet :(
•  » » 8 years ago, # ^ |   +8 Can you describe it?
•  » » » 8 years ago, # ^ | ← Rev. 2 →   +31 Note: I consider all values in the array ≤ N. (If that is not true, you can use a hashmap instead of an array, or remap the values into [1; N] with a precalculation.)

Split your array into blocks of size K. Precalculate the answer for every interval between two block boundaries, along with an array cnt[x] which tells you how many occurrences of x that interval contains. You can do that simply in linear time for every interval, so we have spent O((N/K)^2 * N) time and memory by this point. What can we do now?

Consider a query [L; R] such that R - L ≥ 2·K. (Otherwise we can do a linear scan to count the occurrences of every value on the interval.) Now we know for sure that one of the precalculated intervals is completely inside our query. To be exact, it covers [K·ceil(L/K); K·floor((R+1)/K) - 1]. (Further I'll call these borders [A; B].) Notice that A - L ≤ K, and likewise R - B ≤ K. So we can take the precalculated array for [A; B] and extend the answer to [L; R] with a linear scan of the borders, which works in O(K) per query.

Note: you can not copy this cnt array, because that can take up to O(N) time; you need to use its values in place and backtrack (undo your increments) after you're done.

You can solve this task with this idea even when you have update queries (say, set Arr[i] = X); you can try it out for yourself, this isn't that hard.

To sum up: this approach works in O((N/K)^2 * N + Q·K); by choosing K = N^(2/3) (for Q around N) you can get the time complexity I was talking about.

Note 2: You can improve the memory if you use some sort of persistent structure instead of linear memory for every interval, but then most likely you will get an additional log factor in both the query processing and the precalc. This is kinda hard in some cases.
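The scheme above can be sketched as follows. This is my own minimal implementation (the names RangeMode, best, cnt are mine, not from the comment); it assumes values lie in [0, V) and returns only the frequency of the mode:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Minimal sketch of the block idea above (my naming, not from the comment).
// Values are assumed to lie in [0, V); query() returns the mode's frequency.
struct RangeMode {
    int n, K, nb, V;
    vector<int> a;
    vector<vector<int>> best;            // best[i][j]: mode frequency on blocks i..j
    vector<vector<vector<int>>> cnt;     // cnt[i][j]: frequency array for blocks i..j

    RangeMode(vector<int> arr, int K, int V)
        : n(arr.size()), K(K), V(V), a(move(arr)) {
        nb = (n + K - 1) / K;
        best.assign(nb, vector<int>(nb, 0));
        cnt.assign(nb, vector<vector<int>>(nb, vector<int>(V, 0)));
        for (int i = 0; i < nb; i++) {   // one sweep per left block boundary
            vector<int> c(V, 0);
            int mx = 0;
            for (int j = i; j < nb; j++) {
                for (int p = j * K; p < min(n, (j + 1) * K); p++)
                    mx = max(mx, ++c[a[p]]);
                best[i][j] = mx;
                cnt[i][j] = c;           // O((N/K)^2) intervals, O(N) memory each
            }
        }
    }

    int query(int L, int R) {            // 0-based, inclusive
        if (R - L + 1 < 2 * K) {         // short query: plain linear scan
            vector<int> c(V, 0);
            int mx = 0;
            for (int p = L; p <= R; p++) mx = max(mx, ++c[a[p]]);
            return mx;
        }
        int bi = (L + K - 1) / K;        // first block fully inside [L, R]
        int bj = (R + 1) / K - 1;        // last block fully inside [L, R]
        int A = bi * K, B = (bj + 1) * K - 1;
        vector<int> &c = cnt[bi][bj];    // borrowed in place, NOT copied
        int mx = best[bi][bj];
        for (int p = L; p < A; p++) mx = max(mx, ++c[a[p]]);
        for (int p = B + 1; p <= R; p++) mx = max(mx, ++c[a[p]]);
        for (int p = L; p < A; p++) --c[a[p]];      // backtrack: undo the borders
        for (int p = B + 1; p <= R; p++) --c[a[p]];
        return mx;
    }
};
```

The per-interval cnt copies are the dominant cost, exactly the memory problem that Note 2 trades for a log factor via persistence.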
•  » » » » 4 years ago, # ^ |   0 The same approach with a different K can be used. Let K = N^z and Q = N^y (y ≤ 1). Total complexity: O(N^(z^2 - 2z + 2) + N^(z + y)). We would want the two exponents to be the same; equating gives z^2 - 3z + (2 - y) = 0, so z = (3 - sqrt(4y + 1)) / 2. For y = 1 (Q = N), z ≈ 0.38 and the total complexity is O(N^1.38).
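As a quick sanity check of the arithmetic above (my own snippet; the two exponents z^2 - 2z + 2 and z + y are taken from the comment as given):

```cpp
#include <cassert>
#include <cmath>

// Root of z^2 - 3z + (2 - y) = 0 chosen so the two exponents balance.
double best_z(double y) { return (3.0 - std::sqrt(4.0 * y + 1.0)) / 2.0; }

// The two exponents from the comment: preprocessing N^(z^2 - 2z + 2)
// and total query work N^(z + y).
double prep_exp(double z) { return z * z - 2.0 * z + 2.0; }
double query_exp(double z, double y) { return z + y; }
```

For y = 1 this gives z = (3 - sqrt(5)) / 2 ≈ 0.382 and both exponents equal ≈ 1.382, matching the claimed O(N^1.38).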
 » 8 years ago, # |   +18 Is there any offline solution without Mo's?
•  » » 3 years ago, # ^ |   -7 galen_colin covered a solution to the related problem https://codeforces.com/contest/1514/problem/D in his recent stream on YouTube; it's worth watching.
•  » » 8 years ago, # ^ |   +13 Well explained. :)
•  » » 8 years ago, # ^ | ← Rev. 2 →   +16 If there are updates of values, then how do we solve it?
•  » » » 8 years ago, # ^ |   +24 And linear memory please. Seriously, isn't this problem already hard enough? Encho's solution is quite complicated (and btw, I tried to solve it yesterday, spent 40-50 minutes and didn't succeed) and you just casually ask "ok, what if we also change values".
•  » » » » 8 years ago, # ^ |   +28 I also tried to implement that idea, but failed :(
•  » » 8 years ago, # ^ |   +15 Wow, that's really cool :) Liked the magic section very much.
•  » » 8 years ago, # ^ |   +5 Thanks a lot, nice idea.
•  » » 8 years ago, # ^ | ← Rev. 2 →   0 Great stuff! Although note that there is no need for the Next matrix, as some prefix sums would do the trick just fine! I wonder if there is some preprocessing that would let us find information about the "frequent" elements faster than O(number of elements). I doubt it, but it would surely be interesting to check out! EDIT: By keeping the prefix matrix as n vectors of size sqrt(n), the solution becomes very cache-friendly for the big values.
 » 8 years ago, # |   +18 There is an entry on Wikipedia: Range Mode Query.
•  » » 8 years ago, # ^ | ← Rev. 2 →   +38 That page mentions an O(n)-space, O(sqrt(n))-query method.

Theorem 1: Let A, B be any multisets. Then a mode of A ∪ B is either a mode of A or an element of B. Proof: trivial.

Now assume we have an array A of size n. Split it into blocks, each of size sqrt(n). Precompute the mode and its frequency for every range of consecutive blocks. This takes O(n) space and O(n·sqrt(n)) time.

For each query, we have a prefix, a span of whole blocks, and a suffix. By Theorem 1, the mode must be the mode of the span, an element of the prefix, or an element of the suffix. For each element in the prefix or the suffix, check if it is more frequent than the current mode. With additional preprocessing and analysis, O(sqrt(n)) per query can be achieved.

You can refer to the original page for more detail if you can't figure it out yourself.
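A sketch of that scheme (my own code, my naming; for brevity I count each candidate's frequency by binary search over position lists, O(log n) per candidate, rather than the O(1)-per-element rank trick from the Wikipedia page):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Sketch of the Theorem 1 scheme. Values are assumed to lie in [0, V).
// query() returns the frequency of the mode on [L, R].
struct RangeModeSqrt {
    int n, K, nb;
    vector<int> a;
    vector<vector<int>> pos;        // pos[v]: sorted positions of value v
    vector<vector<int>> spanMode;   // spanMode[i][j]: a mode of blocks i..j

    RangeModeSqrt(vector<int> arr, int V) : n(arr.size()), a(move(arr)) {
        K = max(1, (int)sqrt((double)n));
        nb = (n + K - 1) / K;
        pos.assign(V, {});
        for (int p = 0; p < n; p++) pos[a[p]].push_back(p);
        spanMode.assign(nb, vector<int>(nb, 0));
        for (int i = 0; i < nb; i++) {            // sqrt(n) sweeps of O(n) each
            vector<int> c(V, 0);
            int bestv = a[i * K], bestf = 0;
            for (int j = i; j < nb; j++) {
                for (int p = j * K; p < min(n, (j + 1) * K); p++)
                    if (++c[a[p]] > bestf) { bestf = c[a[p]]; bestv = a[p]; }
                spanMode[i][j] = bestv;
            }
        }
    }

    int freq(int v, int L, int R) {               // occurrences of v in [L, R]
        const vector<int> &P = pos[v];
        return upper_bound(P.begin(), P.end(), R) - lower_bound(P.begin(), P.end(), L);
    }

    int query(int L, int R) {
        int bi = (L + K - 1) / K, bj = (R + 1) / K - 1;
        int ans = 0;
        if (bi > bj) {                            // query shorter than two blocks
            for (int p = L; p <= R; p++) ans = max(ans, freq(a[p], L, R));
            return ans;
        }
        ans = freq(spanMode[bi][bj], L, R);       // Theorem 1: span mode is a candidate
        for (int p = L; p < bi * K; p++)          // ...as is every prefix element
            ans = max(ans, freq(a[p], L, R));
        for (int p = (bj + 1) * K; p <= R; p++)   // ...and every suffix element
            ans = max(ans, freq(a[p], L, R));
        return ans;
    }
};
```

Storing only one mode per block range is enough here: if some other value ties it inside the span, its frequency on [L, R] can't exceed the stored mode's frequency unless it also appears in the borders, where it is checked anyway.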
•  » » » 8 years ago, # ^ |   0 What if c is a mode of both A and B?
•  » » » » 8 years ago, # ^ |   +8 Well, nothing breaks. What do you think the issue is then?
 » 8 years ago, # | ← Rev. 3 →   0 Would you please share your source? I've tried to solve it with Mo's in O(n + q*sqrt(n)*log(n)) and got TL. Then I tried sqrt decomposition, O((n+q)*sqrt(n)), with every little optimization I could, and I'm still getting TL. Thanks in advance.
•  » » 8 years ago, # ^ | ← Rev. 2 →   +3 Can't you get rid of the log n factor?
•  » » » 8 years ago, # ^ | ← Rev. 5 →   0 I would need some structure that supports insertion & removal of elements plus a maximum query, all in O(1)... and that is impossible. Probably there is a completely different way. UPD: probably there is, but it gives it only on average. UPD2: Since we need just the frequency and not the element itself, it is possible to support this in O(1) worst case. Thanks for forcing me to think about it again. AC with Mo's.
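The UPD2 observation can be sketched like this (my own code and naming; cnt/freqCnt are hypothetical helper arrays): adding an element can raise the maximum frequency by at most one, and removing one can lower it by at most one, so Mo's add/remove both run in O(1) worst case.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Sketch: maintain only the frequency of the mode (not the mode itself)
// under Mo's add/remove in O(1) worst case. Values assumed in [0, V).
struct ModeCounter {
    vector<int> cnt;       // cnt[v]: occurrences of value v in the window
    vector<int> freqCnt;   // freqCnt[f]: how many values occur exactly f times
    int maxFreq = 0;       // frequency of the current mode

    ModeCounter(int V, int n) : cnt(V, 0), freqCnt(n + 1, 0) {}

    void add(int v) {
        if (cnt[v] > 0) --freqCnt[cnt[v]];
        ++freqCnt[++cnt[v]];
        if (cnt[v] > maxFreq) maxFreq = cnt[v];   // can only grow by one
    }
    void remove(int v) {
        --freqCnt[cnt[v]];
        if (--cnt[v] > 0) ++freqCnt[cnt[v]];
        if (maxFreq > 0 && freqCnt[maxFreq] == 0)
            --maxFreq;                            // can only drop by one
    }
};
```

The key is the last line of remove(): if the removed value was the only one at maxFreq, it now sits at maxFreq - 1, so the new maximum is exactly one less and no search is needed.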
 » 8 years ago, # | ← Rev. 2 →   -13 Edit: My bad, it was a different problem.
•  » » 8 years ago, # ^ | ← Rev. 2 →   0 The LightOJ problem is different. There, equal values appear in consecutive runs, which simplifies the solution.