
Laggy's blog

By Laggy, 7 years ago, In English

tl;dr version of the problem: you are given a set of numbers and need to find a subset whose sum lies in the range [l, u], given that u - l is greater than or equal to the difference between the maximum and minimum elements of the set.

Why does the two-pointer method work here? Sort the numbers, keep adding elements until the sum goes over the upper boundary, then subtract elements from the other end until the sum fits back into the range (whenever a solution exists, this always works). Why? What's the logic behind it?
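For concreteness, here is a minimal sketch of that two-pointer routine in C++. It assumes nonnegative weights and the guarantee u - l ≥ max - min; the name find_subset and the index-returning interface are my own illustration, not taken from the original post.

    #include <algorithm>
    #include <vector>

    // Sketch of the two-pointer routine described above.
    // Assumes nonnegative weights and u - l >= max(w) - min(w).
    // Returns indices of a subset whose sum lies in [l, u],
    // or an empty vector if no such subset exists.
    std::vector<int> find_subset(long long l, long long u, const std::vector<int>& w) {
        int n = (int)w.size();
        std::vector<int> idx(n);
        for (int i = 0; i < n; ++i) idx[i] = i;
        // Sort indices by weight; it is enough to consider windows of
        // consecutive elements in sorted order (see the answer below).
        std::sort(idx.begin(), idx.end(),
                  [&](int a, int b) { return w[a] < w[b]; });

        long long sum = 0;
        int lo = 0;                        // current window is idx[lo..hi]
        for (int hi = 0; hi < n; ++hi) {
            sum += w[idx[hi]];             // "adding elements"
            while (sum > u)                // "subtracting until it fits"
                sum -= w[idx[lo++]];
            if (sum >= l)                  // now l <= sum <= u
                return std::vector<int>(idx.begin() + lo, idx.begin() + hi + 1);
        }
        return {};                         // no valid subset
    }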


»
7 years ago, # |

Suppose there exists a subset of x elements whose sum is in the range [l, u]. If no such x exists, the answer is -1.

Now consider the sorted list of elements, A:

  • Look at the sum of the first x elements. This is the smallest possible sum of x elements, so it must be ≤ u, else our assumption is false.
  • If this sum is ≥ l, we have our solution.
  • Else, this sum is < l. Now slide the window one element at a time, i.e. look at A_2 + A_3 + ... + A_{x+1}, then A_3 + A_4 + ... + A_{x+2}, and so on.
  • Each slide replaces A_i with A_{i+x}, so the sum changes by A_{i+x} - A_i ≤ max - min ≤ u - l. The last window (the x largest elements) has the largest possible sum of x elements, which is ≥ l by our assumption. A sum that starts below l and grows in steps of at most u - l cannot jump over the interval, so we are guaranteed to hit at least one value in the range [l, u]. A sketch of this check follows after the list.
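A compact sketch of this fixed-size check, under the same assumptions as above (A sorted ascending, nonnegative values); the helper name is mine, not from the comment:

    #include <numeric>
    #include <vector>

    // Checks one subset size x (assumes 1 <= x <= n): returns true iff
    // some window of x consecutive elements of the sorted array a has a
    // sum in [l, u]. (Illustrative helper, not from the original comment.)
    bool window_of_size_x(const std::vector<long long>& a, int x, long long l, long long u) {
        const int n = (int)a.size();
        long long sum = std::accumulate(a.begin(), a.begin() + x, 0LL);
        if (sum > u) return false;         // even the smallest x-sum exceeds u
        for (int i = 0; ; ++i) {
            if (sum >= l) return true;     // first crossing of l cannot overshoot u
            if (i + x >= n) return false;  // ran out of windows while still below l
            // Slide: the sum changes by a[i+x] - a[i] <= u - l.
            sum += a[i + x] - a[i];
        }
    }

Trying every x from 1 to n and reporting the first window found turns this into a complete solution in O(n^2) after sorting; the two-pointer routine sketched near the top achieves the same in a single pass.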
»
7 years ago, # |

Problem link?!

»
7 years ago, # |

Protip: you can find all IOI 2016 solutions at http://ioinformatics.org/locations/ioi16/contest/index.shtml