### ismail_2311's blog

By ismail_2311, history, 4 weeks ago

Given N intervals and Q queries, each of the form (L, R). For each query, find the size of the largest interval lying completely inside the query range [L, R]. 1 <= N, Q, L, R <= 1e6.

• +7

 » 4 weeks ago, # | ← Rev. 2 →   +18 Assuming that the queries are offline (i.e. all known in advance), one can write quite a simple $O(n \log n)$ solution with just a max Fenwick tree.

Put all ranges (both queries and intervals) in the same container and sort them by left bound. Also create a maximum Fenwick tree $ft$ that, for each position $x$, stores the maximum size of an interval whose right bound equals $x$. Initially $ft$ is filled with zeroes.

Then process the ranges in order of decreasing left bound:

- If the current range is an interval $[l, r]$, just set $ft[r] = r - l + 1$: its size is not less than the size of any previously processed interval ending in $r$, since those all have larger left bounds.
- If the current range is a query $[L, R]$, the answer for that query is the maximum in $ft$ on the prefix $[0..R]$. At this point $ft$ stores only intervals with $l \ge L$, since we haven't processed the others yet, and taking the maximum on the prefix $[0..R]$ restricts us to $r \le R$. So only intervals fully inside the query are considered, and we simply take the largest of them.

Also, use of a Fenwick tree is correct here, because we have only non-decreasing updates and prefix maximum queries.
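A minimal sketch of this sweep in C++ (the names `MaxFenwick`, `solve`, and the coordinate bound `maxC` are my own; 1-based coordinates up to `maxC` are assumed):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Max-Fenwick tree supporting non-decreasing point updates and prefix-max queries.
struct MaxFenwick {
    int n; vector<int> t;
    MaxFenwick(int n) : n(n), t(n + 1, 0) {}
    void update(int i, int v) { for (; i <= n; i += i & -i) t[i] = max(t[i], v); }
    int query(int i) { int r = 0; for (; i > 0; i -= i & -i) r = max(r, t[i]); return r; }
};

// Offline sweep: ans[i] = size of the largest interval fully inside query i (0 if none).
vector<int> solve(const vector<pair<int,int>>& intervals,
                  const vector<pair<int,int>>& queries, int maxC) {
    // Event: {left bound, type, index}; type 0 = interval, 1 = query.
    // Sort by left bound descending; on ties, intervals first, so an interval
    // with l == L is already in the tree when query [L, R] is answered.
    vector<array<int,3>> events;
    for (int i = 0; i < (int)intervals.size(); i++)
        events.push_back({intervals[i].first, 0, i});
    for (int i = 0; i < (int)queries.size(); i++)
        events.push_back({queries[i].first, 1, i});
    sort(events.begin(), events.end(), [](const auto& a, const auto& b) {
        if (a[0] != b[0]) return a[0] > b[0];
        return a[1] < b[1];
    });

    MaxFenwick ft(maxC);
    vector<int> ans(queries.size(), 0);
    for (auto& e : events) {
        if (e[1] == 0) {
            auto [l, r] = intervals[e[2]];
            ft.update(r, r - l + 1);            // non-decreasing update at r
        } else {
            ans[e[2]] = ft.query(queries[e[2]].second);  // prefix max on [1..R]
        }
    }
    return ans;
}
```

For example, with intervals {[2,4], [1,10], [5,6]}, the query [1,6] picks [2,4] (size 3), and [4,7] picks [5,6] (size 2).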
•  » » 4 weeks ago, # ^ |   0 Thanks a lot for the explanation. Is there a way to process the queries online?
•  » » » 4 weeks ago, # ^ |   0 For an offline solution we can use a segment tree.

First, build a segment tree where each node contains a list of all the segments whose left endpoint lies in the node's range, sorted by right endpoint. Since each segment appears in only $\log n$ nodes, this takes $O(n \log n)$ space in total. If we do the sorting by merging as we go upwards, the construction takes $O(n \log n)$ time as well.

After constructing this, run a prefix max over the segment lengths in each node's list.

A query then looks like: walk the segment tree to the appropriate nodes to find all the segments that start in bounds, use binary searches to limit to those segments that also end in bounds, and finally read the prefix max arrays to get the longest such segment.

Total time complexity is $O(n \log^2 n)$, but it can probably be refined.
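A sketch of this merge-sort-tree idea, assuming an iterative bottom-up segment tree indexed by left endpoints (the names `OnlineSolver`, `nodeBest`, and `maxC` are mine, not from the comment):

```cpp
#include <bits/stdc++.h>
using namespace std;

struct OnlineSolver {
    int n;
    // Per node: segments (r, len) whose left endpoint falls in the node's
    // range, sorted by r, plus running prefix maxima of len.
    vector<vector<pair<int,int>>> seg;
    vector<vector<int>> pref;

    OnlineSolver(int maxC, const vector<pair<int,int>>& intervals) : n(maxC) {
        seg.assign(2 * n, {});
        pref.assign(2 * n, {});
        // Each interval goes into the leaf of its left endpoint (leaves n..2n-1).
        for (auto [l, r] : intervals)
            seg[n + (l - 1)].push_back({r, r - l + 1});
        for (int i = n; i < 2 * n; i++) sort(seg[i].begin(), seg[i].end());
        // Build internal nodes by merging the children's sorted lists.
        for (int i = n - 1; i >= 1; i--)
            merge(seg[2*i].begin(), seg[2*i].end(),
                  seg[2*i+1].begin(), seg[2*i+1].end(),
                  back_inserter(seg[i]));
        for (int i = 1; i < 2 * n; i++) {
            int best = 0;
            for (auto& [r, len] : seg[i]) pref[i].push_back(best = max(best, len));
        }
    }

    // Longest segment stored in node i that ends at or before R (0 if none).
    int nodeBest(int i, int R) {
        int k = upper_bound(seg[i].begin(), seg[i].end(), make_pair(R, INT_MAX))
                - seg[i].begin();
        return k == 0 ? 0 : pref[i][k - 1];
    }

    // Online query: largest segment fully inside [L, R].
    int query(int L, int R) {
        int best = 0;
        // Decompose left endpoints [L, R] into O(log n) tree nodes.
        for (int lo = n + (L - 1), hi = n + R; lo < hi; lo >>= 1, hi >>= 1) {
            if (lo & 1) best = max(best, nodeBest(lo++, R));
            if (hi & 1) best = max(best, nodeBest(--hi, R));
        }
        return best;
    }
};
```

Each query touches $O(\log n)$ nodes and runs one binary search in each, giving the stated $O(\log^2 n)$ per query; the fractional-cascading refinement mentioned below removes all but one of those binary searches.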
•  » » » » 4 weeks ago, # ^ |   +8 You mean online? :)

Yes, and it can be improved by replacing the binary searches with fractional cascading (i.e. remember positions while merging, and for a query run a single binary search in the root and then go down in $O(1)$ per node using those positions). Then it is $O(n \log n)$ time and memory.