### LashaBukhnikashvili's blog

By LashaBukhnikashvili, 7 years ago,

I am trying to solve this problem, and I need your help ;)

Is there any idea to solve it without 2D trees? Or what kind of 2D tree should I use that doesn't need O(n*m) memory and makes updates and queries as fast as possible (like O(log N * log M))?

P.S. I was reading material about the Fenwick tree, but it needs O(n*m) memory (sorry for my poor English).

• 0

 » 7 years ago, # |   -11 think it up yourself
•  » » 7 years ago, # ^ |   +12 Probably he already tried, if he's asking for help?
 » 7 years ago, # | ← Rev. 2 →   0 Compress the pairs into numbers 1..X (UTFG compressing coordinates), then it's a classic LIS in O(N log N). This is wrong. At least for this operator (which, in fact, is not actually a comparison operator, so it technically isn't an increasing sequence because the term "increasing" remains undefined; it's actually a pair of LIS with common indices in 2 sequences). Well, an obvious extension to compressed 2D segment trees works, and I found an answer on Stack Overflow about it.
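(For reference, the compression step mentioned above could look like this minimal sketch; the helper name `compress` is illustrative, not from the thread. Each coordinate array is mapped independently to ranks 1..X, where X is the number of distinct values.)

```cpp
#include <algorithm>
#include <vector>

// Map each value of a to its rank 1..X among the distinct values of a.
// Order is preserved: a[i] < a[j]  <=>  rank[i] < rank[j].
std::vector<int> compress(const std::vector<int>& a) {
    std::vector<int> vals(a);
    std::sort(vals.begin(), vals.end());
    vals.erase(std::unique(vals.begin(), vals.end()), vals.end());
    std::vector<int> rank(a.size());
    for (std::size_t i = 0; i < a.size(); ++i)
        rank[i] = int(std::lower_bound(vals.begin(), vals.end(), a[i])
                      - vals.begin()) + 1;  // 1-based ranks
    return rank;
}
```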
•  » » 7 years ago, # ^ |   0 Thanksss, I GOT IT :)
•  » » » 7 years ago, # ^ |   0 Can you tell me how you solved the problem?...
•  » » » » 7 years ago, # ^ |   0 lol, I was thinking about numbering these pairs, but it seems it is incorrect :\
•  » » 7 years ago, # ^ |   0 No, you can't compare 2 pairs (x1, y1) and (x2, y2) if x1 < x2 and y1 > y2. As far as I know, there is no way to solve this without 2D tree. Using set from C++ can simplify the implementation for this problem.
•  » » » 7 years ago, # ^ |   0 Yeah, I found out already. The problem is that I didn't read beyond "pair (a,b) is less than (c,d) if" because there is just one comparison operator on pairs.
•  » » 7 years ago, # ^ | ← Rev. 3 →   0 I think you could maybe sort all pairs by their x coordinates, and then build a segment tree over their y coordinates. Oh, I am wrong.
•  » » 7 years ago, # ^ |   0 Can you explain what UTFG compressing coordinates is? I googled it but couldn't find anything.
•  » » » 7 years ago, # ^ |   -6 http://lmgtfy.com/?q=utfg — one of the first page results: the Urban Dictionary entry.
 » 7 years ago, # |   0 Need help again; if you can, please tell me an idea to solve this problem without 2D trees :\
•  » » 7 years ago, # ^ |   0 A solution without trees was explained by Xellos up here.
•  » » » 7 years ago, # ^ |   -7 And it is wrong. More specifically, it is wrong for the inequalities from the problem statement.
•  » » » » 7 years ago, # ^ |   0 The explanation you linked from Stack Overflow, on the other hand, doesn't need an implementation of 2D trees and is correct. My accepted code on SPOJ has 55 lines.
•  » » 7 years ago, # ^ |   0 I think flashmt is right. I think you can use a range tree, which is a 2D tree :D
 » 7 years ago, # | ← Rev. 3 →   0 Notice that when you call update on a Fenwick tree, you only update O((log N)^d) nodes. If a node is never updated, its value is zero. So you don't need O(NM) memory at all; O(M (log N)^d) is enough. PS: d means the dimension.
•  » » 7 years ago, # ^ |   +5 How would you remember the nodes to avoid O(N^2) memory? The most obvious way to store Fenwick trees this way is using a map instead of a vector, but that adds a nasty O(log N) factor to the complexity.
•  » » » 7 years ago, # ^ | ← Rev. 3 →   0 Hash tables would be more useful, so we can get rid of that O(log N) factor.
•  » » » » 7 years ago, # ^ |   0 While introducing a large constant. So much win...
•  » » » » » » 7 years ago, # ^ |   0 We are not dealing with that huge a constant. It is also much better than red-black trees under realistic current limits.
•  » » » » » » 7 years ago, # ^ |   0 So, you're saying that we use a hashmap and still aren't dealing with the huge constant that it involves because... magic?
•  » » » » » » » » 7 years ago, # ^ |   0 I'm just saying that the constant is not huge, but it obviously exists.
•  » » » » » » » » » 7 years ago, # ^ |   0 Here, it would appear that inserts are only about twice as fast for hashmap as for map in C++, while fetches are much faster. And we still need to do a lot of inserts. If you want to convince me, you'd need to provide a better argument than that you're saying so, because I'm saying hashmap is a bad idea here, and if anything, then quicksort on sub-Fenwick trees (1D) with many accessed elements and bubblesort on those with few (~5?) is faster.
•  » » » » » » » » » 7 years ago, # ^ |   0 Well, I don't care about convincing you. From my point of view, a twice-as-fast hash table is tolerable.
•  » » » » » » » » » 7 years ago, # ^ | ← Rev. 2 →   0 Did you get AC on that problem? (If not and you're still satisfied, then it isn't a good attitude for competitive programming :D) It's SPOJ, and SPOJ is slow; if map is a factor of 20 and hashmap is a factor of 10, then it's just a question of getting TLE with 6 or 3 seconds... Also: how hash always works.
•  » » » 7 years ago, # ^ |   +5 We can avoid this by preprocessing. Let's simulate all the queries to find out, for each value of x, which values of y we will need to access.
•  » » » » 7 years ago, # ^ |   0 I realize that we can find these values, but there are still O(N (log N)^2) of them in total, and we still need to sort them or map them or something. How else would you determine that queries q1 and q2 both access the same vertex (x, y)? Quicksorting (map and hashmap are both slow IRL) all y for all x would probably be enough to pass this on any fast online judge (I'd comfortably go code it if it was CF), but SPOJ isn't fast in any way; that's why I'm doubting stuff so much.
•  » » » » » 7 years ago, # ^ | ← Rev. 7 →   0 That's exactly what I did for this problem on SPOJ. Since we break the N (log N)^2 values into N different lists by x, the sorting part runs quite fast. After we get all y sorted for each x, when processing a query (x, y) we only need to find the actual index of y once for each of the log N values of x (by binary search), so it would cost O(N (log N)^2) in total.
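(A rough sketch of that offline preprocessing, assuming prefix-maximum queries; this is not flashmt's actual code, and the two-pass interface — `reserve` every update first, then `build`, then `update`/`query` — is illustrative naming. Each outer Fenwick node keeps a sorted list of only the y values it will ever touch, so memory is O(N (log N)^2) overall and inner indices are found by binary search instead of a map.)

```cpp
#include <algorithm>
#include <vector>

// Offline 2D Fenwick tree: pass 1 announces all update points, build()
// sorts/deduplicates the y-lists, pass 2 runs the real updates/queries.
struct OfflineBIT2D {
    int n;
    std::vector<std::vector<int>> ys;    // ys[i]: sorted y's touched at x-node i
    std::vector<std::vector<int>> tree;  // inner prefix-max Fenwick per x-node

    explicit OfflineBIT2D(int n) : n(n), ys(n + 1), tree(n + 1) {}

    // Pass 1: announce every (x, y) that update() will later be called with.
    void reserve(int x, int y) {
        for (int i = x; i <= n; i += i & -i) ys[i].push_back(y);
    }

    void build() {
        for (int i = 1; i <= n; ++i) {
            std::sort(ys[i].begin(), ys[i].end());
            ys[i].erase(std::unique(ys[i].begin(), ys[i].end()), ys[i].end());
            tree[i].assign(ys[i].size() + 1, 0);
        }
    }

    // 1-based count of stored y's <= y at x-node i (binary search, no map).
    int pos(int i, int y) const {
        return int(std::upper_bound(ys[i].begin(), ys[i].end(), y)
                   - ys[i].begin());
    }

    void update(int x, int y, int val) {  // (x, y) must have been reserved
        for (int i = x; i <= n; i += i & -i)
            for (int j = pos(i, y); j <= (int)ys[i].size(); j += j & -j)
                tree[i][j] = std::max(tree[i][j], val);
    }

    int query(int x, int y) const {  // max over updates with x' <= x, y' <= y
        int best = 0;
        for (int i = x; i > 0; i -= i & -i)
            for (int j = pos(i, y); j > 0; j -= j & -j)
                best = std::max(best, tree[i][j]);
        return best;
    }
};
```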