

By arujbansal, 2 years ago, In English

Problem

Let's look at the following two problems:
1. Given an array $$$A$$$ of $$$N$$$ elements where $$$0 \le A_i \le X$$$ for some positive integer $$$X$$$, find all possible subset sums.
2. Given an array $$$A$$$ of $$$N$$$ elements where $$$0 \le A_i \le X$$$ for some positive integer $$$X$$$, for each possible subset sum, calculate the minimum number of elements required to achieve it.

Thanks to socho for improving the tutorial!

Solution

This is a classical dynamic programming problem which can be solved in $$$O(N^2 \times X)$$$ (the sum of elements is bounded by $$$N \times X$$$), so that solution won't be discussed in much detail here. We can create an array $$$\it{dp}$$$ of size $$$N \times X + 1$$$ and say that $$$\it{dp}_a$$$ is $$$1$$$ if it's possible to form a sum of $$$a$$$ using some elements from the array and $$$0$$$ otherwise. Then, for each element $$$A_i$$$ in turn, we set $$$\it{dp}_a$$$ to $$$1$$$ for every $$$a$$$ with $$$\it{dp}_{a - A_i} = 1$$$, iterating over $$$a$$$ in decreasing order so that each element is used at most once.
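For concreteness, here is a minimal sketch of that classical DP; the function name subset_sums and the choice to return the list of achievable sums are illustrative, not from the original post:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Classical subset-sum DP: returns all achievable subset sums.
// Runs in O(N * maxSum) time, where maxSum <= N * X.
vector<int> subset_sums(const vector<int> &A) {
    int maxSum = accumulate(A.begin(), A.end(), 0);
    vector<char> dp(maxSum + 1, 0);
    dp[0] = 1; // the empty subset forms sum 0
    for (int a : A)
        for (int s = maxSum; s >= a; s--) // descending: each element used at most once
            if (dp[s - a]) dp[s] = 1;
    vector<int> sums;
    for (int s = 0; s <= maxSum; s++)
        if (dp[s]) sums.push_back(s);
    return sums;
}
```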

However, if the sum of elements is bounded by $$$N$$$, it turns out that this problem can be solved in $$$O(N\sqrt{N})$$$. Formally, we have the following constraint:

$$$1 \le \sum_{i = 1}^{N} A_i \le N$$$


So, what is the maximum number of distinct elements our array can have if its sum is bounded by $$$N$$$? Since we are trying to maximise the number of elements, it is optimal to consider the smallest possible positive elements, starting from $$$1$$$. We want the largest $$$P$$$ such that the sum of the first $$$P$$$ natural numbers is at most $$$N$$$. That sum is given by the formula $$$\frac{P (P + 1)}{2}$$$. Solving $$$\frac{P (P + 1)}{2} = N$$$ for $$$P$$$, we get:

$$$P^2 + P = 2N \implies P = \sqrt{2N - P}$$$

It's okay for $$$P$$$ to be slightly bigger, as we only want a rough upper bound, so we can drop the $$$-P$$$ under the root:

$$$P \le \sqrt{2N}$$$

Therefore, we have at most $$$\sqrt{2N}$$$ distinct positive values in the array, which allows us to develop an $$$O(N\sqrt{N})$$$ algorithm. For example, with $$$N = 10$$$, the best we can do is $$$\{1, 2, 3, 4\}$$$, and indeed $$$4 \le \sqrt{20} \approx 4.47$$$.

Let's retain the definition of $$$\it{dp}$$$; only our transitions will change. We process elements in pairs $$$(W_i, K_i)$$$, where $$$W_i$$$ and $$$K_i$$$ are the value and frequency of element $$$i$$$ of the compressed array. How do we update the $$$\it{dp}$$$ array? To understand this, first take a look at what an array looks like when its $$$i^{\it{th}}$$$ element is $$$i \mathbin{\%} W_i$$$. Consider $$$W_i = 4$$$:

$$$[0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2...]$$$

So, for each $$$W_i$$$, let $$$T$$$ be any residue modulo $$$W_i$$$, i.e. $$$0 \le T \le W_i - 1$$$. Array indices holding the same value of $$$T$$$ are exactly $$$W_i$$$ apart. Let's consider indices with the same value of $$$T$$$ together: set $$$T = 1$$$ and list the indices where $$$1$$$ appears in the array above. Call this array $$$B$$$:

$$$[1, 5, 9]$$$

Recall that the $$$i^{\it{th}}$$$ index of the $$$\it{dp}$$$ array stores whether we can form a sum of $$$i$$$. Since indices with the same value of $$$T$$$ occur $$$W_i$$$ apart, going from one occurrence to the next uses a single copy of $$$W_i$$$. Since we have exactly $$$K_i$$$ copies of $$$W_i$$$, we can form sum $$$i$$$ if $$$\it{dp}_{i - W_i \times Y}$$$ is $$$1$$$ for some $$$Y$$$ $$$(0 \le Y \le K_i)$$$. In other words, if the sum of all such $$$\it{dp}$$$ values is positive, then sum $$$i$$$ is achievable. We can maintain a variable which stores the sum of the last $$$K_i$$$ such $$$\it{dp}$$$ values as we sweep along $$$B$$$. For example, for the ninth index, if we have at least two copies of $$$W_i = 4$$$, then we want at least one of $$$\it{dp}_5$$$ or $$$\it{dp}_1$$$ to be $$$1$$$.

For every $$$W_i$$$, for every residue $$$T$$$ in the range $$$[0, W_i - 1]$$$, we loop over all indices congruent to $$$T$$$ modulo $$$W_i$$$, which is about $$$\frac{N}{W_i}$$$ indices per residue. Since $$$T$$$ assumes exactly $$$W_i$$$ different values, processing one element takes $$$O(W_i \times \frac{N}{W_i}) = O(N)$$$ time. Since we do this for at most $$$\sqrt{2N}$$$ distinct elements, the final complexity is $$$O(N\sqrt{N})$$$. We require $$$O(N)$$$ memory. Take a look at the code below to understand this better.

Code (C++)
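(The original snippet is collapsed on the blog. Below is a minimal sketch of the algorithm described above, assuming $$$\sum_{i} A_i \le N$$$; the function name subset_sums_fast is illustrative.)

```cpp
#include <bits/stdc++.h>
using namespace std;

// O(N * sqrt(N)) subset sums, assuming sum(A) <= N.
// Returns dp where dp[s] == 1 iff sum s is achievable.
vector<char> subset_sums_fast(const vector<int> &A, int N) {
    map<int, int> freq; // compressed array: value W -> frequency K
    for (int a : A)
        if (a > 0) freq[a]++;

    vector<char> dp(N + 1, 0);
    dp[0] = 1;

    for (auto [w, k] : freq) {
        for (int t = 0; t < w; t++) {  // residue class T
            int cnt = 0;               // 1s among the last k *old* dp values
            vector<char> window;       // old dp values at t, t+w, t+2w, ... (array B)
            for (int p = t; p <= N; p += w) {
                char old = dp[p];
                if (!dp[p] && cnt > 0) dp[p] = 1; // reachable with <= k copies of w
                window.push_back(old);            // push the old value, not the new one
                cnt += old;
                if ((int)window.size() > k)       // keep only the last k values
                    cnt -= window[window.size() - 1 - k];
            }
        }
    }
    return dp;
}
```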

This successfully solves the first problem mentioned at the start of the post. Let's now take a look at how to modify our approach to solve the second one as well. Before proceeding, make sure you understand the tutorial up to this point. If you want to try the second problem yourself first, you may do so here: 95E - Lucky Country. Here's my code for it: 137173086.

This time, we want to find the minimum number of elements in a subset for each subset sum. $$$\it{dp}_i$$$ should now store the minimum number of elements required to form a subset sum of $$$i$$$, with $$$\it{dp}_i = \infty$$$ if it is impossible to do so. How do we transition between states? This time, it is optimal to take the minimum among the last $$$K_i$$$ $$$\it{dp}$$$ values (remember that the last $$$K_i$$$ values refers to the last $$$K_i$$$ indices with the same residue $$$T$$$, as shown for the first problem, i.e. array $$$B$$$). We can do so by maintaining the minimum over the last $$$K_i$$$ $$$\it{dp}$$$ values with indices belonging to array $$$B$$$. This essentially becomes a sliding window minimum problem, and the time complexity remains the same. At this point, you might be wondering what value to push into the deque so that we can accurately find the minimum, since each extra copy of $$$W_i$$$ costs one more element. Let $$$L$$$ be the number of times we have added $$$W_i$$$ to $$$T$$$ so far. When inserting the candidate for index $$$B_j$$$ (where $$$L = j$$$), we insert $$$\it{dp}_{B_j} - L$$$. Once we have the window minimum, we add back the current value of $$$L$$$: the candidate for index $$$B_j$$$ then contributes $$$\it{dp}_{B_j} + (L - j)$$$, charging exactly one element per copy of $$$W_i$$$ used. This is the same idea used to solve the problem Lucky Country mentioned above. Take a look at the code snippet below.

Code (C++)
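(Again, the original snippet is collapsed. Below is a minimal sketch of the sliding-window-minimum version, under the same assumption $$$\sum_{i} A_i \le N$$$; the name min_subset_counts is illustrative.)

```cpp
#include <bits/stdc++.h>
using namespace std;

const long long INF = 1e18;

// dp[s] = minimum number of elements forming sum s (INF if impossible),
// assuming sum(A) <= N. Runs in O(N * sqrt(N)).
vector<long long> min_subset_counts(const vector<int> &A, int N) {
    map<int, int> freq; // compressed array: value W -> frequency K
    for (int a : A)
        if (a > 0) freq[a]++;

    vector<long long> dp(N + 1, INF);
    dp[0] = 0;

    for (auto [w, k] : freq) {
        for (int t = 0; t < w; t++) {
            // sliding window minimum over pairs (old dp value - L, L)
            deque<pair<long long, int>> dq;
            int L = 0; // how many times w has been added to t
            for (int p = t; p <= N; p += w, L++) {
                // candidates more than k steps back would need > k copies of w
                while (!dq.empty() && dq.front().second < L - k) dq.pop_front();
                long long old = dp[p];
                if (!dq.empty()) dp[p] = min(dp[p], dq.front().first + L);
                // push the OLD value so updated states can't reuse extra copies
                if (old < INF) {
                    while (!dq.empty() && dq.back().first >= old - L) dq.pop_back();
                    dq.push_back({old - L, L});
                }
            }
        }
    }
    return dp;
}
```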

Check out my YouTube channel Algorithms Conquered to learn many more interesting algorithms!


»
2 years ago, # |
  • »
    »
    2 years ago, # ^ |

    Hey, I've seen those posts. I think the method I have described here is slightly different, though the $$$O(\frac{N\sqrt{N}}{W})$$$ version is surely faster wherever it can be used.

Although this particular problem doesn't use the $$$\sqrt{2N}$$$ distinct elements observation, there are at most $$$100$$$ elements and the method described in my tutorial for the second problem can be slightly modified to solve this problem in $$$O(N \times X)$$$.

    • »
      »
      »
      2 years ago, # ^ |

Actually, I meant it for reference; it is always good to add similar blogs as references for future readers.

»
2 years ago, # |

That is actually a very interesting idea.

»
2 years ago, # |

There is an (IMO) simpler variant of this trick, at least for your first problem. Based on a cursory reading I'm not sure if it's just a different way to explain or actually a distinct method, but it does have the same complexity.

Imagine your array contains at least 3 copies of the element $$$a$$$. We can notice that it doesn't change the answer if you replace these 3 copies $$$a, a, a$$$ with 2 numbers: $$$a, 2a$$$. You can repeat this process until the array has only up to 2 copies of each element. Because the sum also doesn't change, by the bounds you derived the length of the array must now be $$$O(\sqrt{N})$$$. Now you can use a classical algorithm.
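For reference, here is a small sketch of this replacement step; the function name compress is hypothetical, and dropping zero-valued elements (which never change the achievable sums) is an added assumption:

```cpp
#include <bits/stdc++.h>
using namespace std;

// While some value a appears 3+ times, replace three copies (a, a, a)
// with (a, 2a). Achievable subset sums and the total sum are unchanged,
// and afterwards each value appears at most twice, so the result has
// O(sqrt(N)) elements when the total sum is at most N.
vector<int> compress(const vector<int> &A) {
    map<int, int> cnt;
    for (int a : A)
        if (a > 0) cnt[a]++; // zeros never change achievable sums
    // iterate in increasing order so merged values 2a are processed later
    for (auto it = cnt.begin(); it != cnt.end(); ++it) {
        int a = it->first, c = it->second;
        if (c > 2) {
            int merges = (c - 1) / 2;    // each merge removes two copies of a...
            cnt[2 * a] += merges;        // ...and adds one copy of 2a
            it->second = c - 2 * merges; // 1 or 2 copies of a remain
        }
    }
    vector<int> res;
    for (auto [a, c] : cnt)
        while (c--) res.push_back(a);
    return res;
}
```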

  • »
    »
    2 years ago, # ^ |

    Wow, that's a cool trick! The algorithm described in this post is quite different. I think the second problem too is solvable that way.