Laggy's blog

By Laggy, 7 years ago, In English

tl;dr version of the problem: you have a set of numbers and you need to find a subset whose sum lies in the range [l, u], given that u - l is greater than or equal to the difference between the maximum and minimum elements of the set.

Why does the two-pointer method work here? Sort the numbers, keep adding elements until the sum goes over the range boundary, then subtract elements until it falls back into the range (if a solution exists, this always works). Why? What's the logic behind it?
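For concreteness, here is a minimal sketch of the procedure I mean, in C++ (the function name find_subset and its signature are just illustrative, not any judge's official interface): sort the weights, grow the window from the right while the sum is below l, and drop elements from the left whenever it goes above u.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Sketch of the procedure described above. Returns the original indices of a
// subset whose sum lies in [l, u], or an empty vector if no such subset exists.
// Assumes l >= 1 and u - l >= max(w) - min(w).
vector<int> find_subset(long long l, long long u, const vector<long long>& w) {
    int n = w.size();
    vector<int> idx(n);
    iota(idx.begin(), idx.end(), 0);
    sort(idx.begin(), idx.end(), [&](int a, int b) { return w[a] < w[b]; });

    long long sum = 0;
    int left = 0;
    for (int right = 0; right < n; ++right) {
        sum += w[idx[right]];                   // keep adding until we reach l
        while (sum > u) sum -= w[idx[left++]];  // went over u: subtract from the left
        if (sum >= l)                           // now l <= sum <= u: done
            return vector<int>(idx.begin() + left, idx.begin() + right + 1);
    }
    return {};                                  // no subset has a sum in [l, u]
}

int main() {
    // Toy example: weights {6, 8, 8, 7}, target range [15, 17] (u - l = max - min = 2).
    for (int i : find_subset(15, 17, {6, 8, 8, 7}))
        cout << i << ' ';                       // prints the indices of one valid subset
    cout << '\n';
}
```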


»
7 years ago, # |

Suppose there exists a subset of size x whose sum is in the range [l, u]. If no such x exists, the answer is -1.

Now consider the sorted list of elements, A:

  • Look at the sum of the first x elements, i.e. the x smallest ones. This is the minimum possible sum of any x-element subset, so it must be ≤ u, else our assumption is false.
  • If this sum is ≥ l, we have our solution.
  • Else, this sum is < l. Now we can slide the window one element at a time, i.e. look at A_2 + A_3 + ... + A_{x+1}, then A_3 + A_4 + ... + A_{x+2}, and so on.
  • Each sum differs from the previous one by A_{i+x} - A_i, which is at most max - min ≤ u - l. The sums never decrease, and if even the last window (the x largest elements) stayed below l, our assumption would again be false. So, starting below l and increasing by at most u - l per step, we are guaranteed to hit at least one value in the range [l, u], as sketched in the code below.
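To make the argument concrete, here is a minimal sketch of that fixed-size sliding window in C++ (the helper name check_size and the driver in main are mine, purely for illustration): for a fixed size x, start from the x smallest elements and slide the window one position to the right at a time.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Sketch of the argument above for one fixed subset size x.
// A must be sorted ascending, and u - l >= max(A) - min(A) is assumed.
// Returns true iff some window of x consecutive elements has a sum in [l, u].
bool check_size(const vector<long long>& A, int x, long long l, long long u) {
    int n = A.size();
    long long sum = accumulate(A.begin(), A.begin() + x, 0LL);  // x smallest elements
    if (sum > u) return false;        // even the minimum possible x-element sum is too big
    for (int i = 0; ; ++i) {
        if (sum >= l) return true;    // the sum also stayed <= u, so it is inside [l, u]
        if (i + x >= n) return false; // even the x largest elements sum to less than l
        // Slide the window: drop A[i], take A[i + x]. The sum grows by
        // A[i + x] - A[i] <= max - min <= u - l, so it cannot jump over [l, u].
        sum += A[i + x] - A[i];
    }
}

int main() {
    vector<long long> A = {6, 7, 8, 8};   // sorted weights with max - min = 2
    long long l = 15, u = 17;             // u - l = 2 >= max - min
    for (int x = 1; x <= (int)A.size(); ++x)   // try every size, just for illustration
        cout << "size " << x << ": " << (check_size(A, x, l, u) ? "found" : "not found") << '\n';
}
```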
»
7 years ago, # |

Problem link?!

»
7 years ago, # |

Protip: you can find all IOI 2016 solutions at http://ioinformatics.org/locations/ioi16/contest/index.shtml