How to solve this problem?

Given two arrays A[] and B[] of length N, the task is to find the minimum number of operations needed to convert array A into array B, where each operation consists of adding any integer K to a subarray from L to R.

PS: I came across this problem on gfg, but then realised that their solution is **wrong**. It fails on the following case: A = {0, 0, 0}, B = {1, 2, 1}. The output should be 2, but their code gives 3.

Edit: there can be two variations of this problem:

1st: when K is a positive integer

2nd: when K can be any integer

There might be a different solution for each, so please answer assuming each case.

If the question is to convert an all-zero array into the target array by adding any integer x, we can use coordinate compression on the target array.

And then the answer is the summation of max(a[i] - a[i-1], 0) for all i, assuming a[-1] = 0.
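A minimal sketch of this summation (assuming a 0-indexed Python list, with the element before index 0 treated as 0):

```python
def ops_by_adjacent_increases(a):
    """Proposed formula: sum of max(a[i] - a[i-1], 0), with a[-1] taken as 0."""
    prev = 0
    total = 0
    for x in a:
        total += max(x - prev, 0)  # only count the amount by which we step up
        prev = x
    return total
```

On {1, 2, 1} this gives 2; on {2, 0, 1} it gives 3, which the reply below shows is not optimal when K is unrestricted.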

Not exactly. Assume the target array is {2, 0, 1}. Then your algorithm gives 3, but we can generate the target array in two operations.

(To OP: I hope this problem serves as a cautionary tale to not trust gfg in the future.)

Yes, actually the problem gets converted to finding the difference array of the target array; for {2, 0, 1} it is {2, -2, 1, -1}. Then we find as many disjoint subsets with sum 0 as possible, so the answer is 4 - 2 = 2.
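For concreteness, the difference array with an implicit trailing 0 (so its entries always sum to 0) can be built like this, a small sketch:

```python
def diff_array(target):
    """Backward differences of target, extended with a trailing 0;
    e.g. [2, 0, 1] -> [2, -2, 1, -1]. The entries always sum to 0."""
    ext = list(target) + [0]
    return [ext[0]] + [ext[i] - ext[i - 1] for i in range(1, len(ext))]
```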

But I don't know how to solve that efficiently.

Is it always possible to convert array $$$A$$$ into $$$B$$$ with only positive $$$K$$$?

Anyway, when there is no restriction on $$$K$$$, I think it can be solved in this way assuming $$$C_i = B_i - A_i$$$:

Final answer:

`fun(0,n-1,c[rmq(0,n-1)]-1)`

Please tell me if you have a counter case for this approach.

If you're allowing negative K, then I can solve {2, 1, 2} in two operations. But if I understand the approach correctly, your code would return 3.

`Is it always possible to convert array A into B with only positive K?`

If A[i] <= B[i] for every i, then it is always possible to convert A into B; otherwise it is not possible.
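As a one-line check (a trivial sketch):

```python
def convertible_with_positive_k(a, b):
    # With only positive K we can never decrease an element,
    # so every position must already satisfy A[i] <= B[i].
    return all(x <= y for x, y in zip(a, b))
```

Sufficiency follows because each deficient position can always be fixed on its own with one interval of length 1.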

I think I have a solution for positive `K`. (EDIT2: As clyring pointed out, this solution is wrong.) Let `C[i]` be `B[i] - A[i]`. In other words, `C[i]` shows us how much we need to increase `A[i]`. If there is an index i such that `C[i] < 0`, then there is no solution. Otherwise, let's maintain a set. For every index i, this set will store the amounts by which we can increase `A[i]` without using a new operation. We first add 0 to this set. Say the variable storing our answer is `ans` (initialised to 0). For every i from 1 to `n`, if `C[i-1] < C[i]` (we suppose `C[0]` is 0), we increment `ans`; otherwise, we remove from the set all the numbers that are strictly bigger than `C[i]`, and if we don't find the value `C[i]` in the set, we increment `ans`. At every step, we add to the set `C[i]` minus every value currently in the set.

I discussed this with many people; many of them suggested their own approaches, but all of them failed on some test cases. So I will give a test case to you too: let array C (the difference array) be {2, 3, 1}. Its output should be 2, while I think your approach will give 3. Please verify this test case.
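For concreteness, here is a sketch of the set-based procedure as I read it (remember the EDIT2 above: the procedure is wrong in general; on `c = [2, 3, 1]` it actually returns 2, and the case that breaks it is `c = [4, 11, 7, 3]`, where it returns 2 although the optimum is 3):

```python
def greedy_positive_k(c):
    """Set-based heuristic from the comment above (known to be wrong in general).
    c[i] = B[i] - A[i]; only positive increments are allowed."""
    if any(x < 0 for x in c):
        return -1  # impossible with positive K
    avail = {0}    # amounts we can add "for free", without a new operation
    ans = 0
    prev = 0       # C[0] is taken to be 0
    for x in c:
        if prev < x:
            ans += 1
        else:
            # drop amounts that are now too large, then check if x is free
            avail = {v for v in avail if v <= x}
            if x not in avail:
                ans += 1
        # every existing amount v lets us also realise x - v later on
        avail |= {x - v for v in avail}
        prev = x
    return ans
```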

[Deleted]

I think the test case with `N=4` and `c = [4, 11, 7, 3]` breaks this method (as revised), if I understand you correctly. 7 enters the set as 11 - 4 and 3 enters the set as 7 - 4, so this outputs ans = 2, but the optimal answer has 3 steps: since 11 > 7 > 3 > 0, there must be an increased interval ending at each of positions 2, 3, and 4 in any construction.

For variation 2 of the problem, I don't have a solution handy, but here is some insight:

Suppose for convenience that the arrays $$$A$$$ and $$$B$$$ naturally have indices from $$$1$$$ to $$$N$$$, and that they have been extended to have $$$A[0] = B[0] = A[N+1] = B[N+1] = 0$$$. I claim that for any $$$k \in \mathbb{Z}, 0 \leq k \leq N$$$, there exists a solution with $$$k$$$ operations if and only if the multiset $$$S = \{ (B[i] - A[i]) - (B[i-1] - A[i-1]) : i \in \mathbb{Z},\, 1 \leq i \leq N+1 \}$$$ can be partitioned into $$$N+1-k$$$ nonempty multisets each with sum zero.

To see this, note that adding $$$x$$$ to every position from $$$L$$$ to $$$R$$$ in the original problem basis amounts to adding $$$x$$$ in position $$$L$$$ and $$$-x$$$ in position $$$R+1$$$ in a backwards-differences basis. Thus, for any sequence of $$$k$$$ operations that transforms $$$A$$$ into $$$B$$$, if we consider the graph $$$G$$$ with vertices numbered $$$1$$$ to $$$N+1$$$ and with an edge between $$$L$$$ and $$$R+1$$$ for every operation used, the sum

$$$\sum_{i \in C} \left( (B[i] - A[i]) - (B[i-1] - A[i-1]) \right)$$$

must be equal to $$$0$$$ for every connected component $$$C$$$ of the graph $$$G$$$. These connected components thus give us a partition of $$$S$$$ into at least $$$N + 1 - E$$$ nonempty multisets with sum zero, where $$$E$$$ is the number of edges, which is at most the number of operations $$$k$$$. We can then merge a few of these if $$$E < k$$$ to get exactly $$$N + 1 - k$$$ multisets.

In the reverse direction, given a partition of $$$S$$$ into $$$N + 1 - k$$$ nonempty multisets each with sum zero, we can select $$$k$$$ edges between pairs of numbers between $$$1$$$ and $$$N+1$$$ to give us a graph with connected components corresponding to this partition, and after a little bit of fairly simple linear algebra there will be a unique selection of integer weights $$$x$$$ for the operations on the $$$k$$$ intervals corresponding to these edges that will transform $$$A$$$ into $$$B$$$.

EDIT: A similar idea works for variation 1 of the problem, but with partitions of $$$\mathbb{Z} \cap [1, N+1]$$$ into nonempty sets $$$S$$$ such that the sum

$$$\sum_{i \in S,\, i \leq j} \left( (B[i] - A[i]) - (B[i-1] - A[i-1]) \right)$$$

is $$$0$$$ for $$$j = \max S$$$ and non-negative for all other $$$j \in S$$$.

The argument is more or less the same.

The main problem is how to maximize k. I explained the same thing in my second comment above, but I am stuck on how to maximize k.

I couldn't find any approach other than repeatedly finding a minimal subset with sum 0; since every set in this family sums to 0, this greedy + knapsack DP should work in polynomial time.

If n is small, then we can use an exponential method.

If you can help here, that would be great.

I'm not too optimistic about a greedy strategy providing an optimal solution in general, but I'd be happy to be proven wrong. A bitmask-dp approach isn't too hard to implement based on these ideas, but it will obviously struggle except for very small $$$N$$$.

Here is my implementation: 91605890. It takes from standard input the problem variation (1 or 2) and the size of the array (N), then the two arrays themselves. It outputs the length of the optimal solution (or -1 if there is no solution) and then an optimal sequence of moves (L, R, increment) with 0-indexed coordinates including both endpoints.

I can't currently prove a better worst-case runtime than $$$O(3^N)$$$ for variation 1 or $$$O(\frac{3^N}{\sqrt{N}})$$$ for variation 2 of the problem, but my code solved all of the testcases I generated locally up to $$$N=20$$$ within around half of a second each, so those bounds seem to be quite loose in practice. I'm also not 100% sure that it is bug-free, since I don't have anything to compare it to, but its output looks OK and passed a sanity-checker I wrote.

It can be implemented in $$$O(2^{N}N)$$$ time. Basically we want to find a sequence of sets $$$\emptyset = S_{0} \subset S_{1} \subset \ldots \subset S_{k - 1} \subset S_{k} = U$$$ such that for every $$$S_{i}$$$ sum over it is 0. Then our components will be $$$S_{i} \setminus S_{i - 1}$$$. So we can mark good subsets on boolean cube and we want to find a path from $$$\emptyset$$$ to $$$U$$$ with maximum number of good subsets. This can be easily done with DP.
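Here is a sketch of that DP in Python for variation 2 (any integer $$$K$$$), with my own naming: it marks the zero-sum subsets of the $$$m = N + 1$$$ backward differences, then looks for a path from $$$\emptyset$$$ to $$$U$$$ that adds one element at a time and visits as many good (zero-sum) subsets as possible.

```python
def min_ops_var2(a, b):
    """Minimum operations for variation 2 via the chain-of-good-subsets DP.
    Assumes len(a) == len(b) == n; runs in O(2^m * m) with m = n + 1."""
    n = len(a)
    c = [b[i] - a[i] for i in range(n)] + [0]           # pointwise gap, trailing 0
    d = [c[0]] + [c[i] - c[i - 1] for i in range(1, n + 1)]  # backward differences
    m = n + 1                                            # these m entries sum to 0

    # subset sums over the m difference entries
    ssum = [0] * (1 << m)
    for mask in range(1, 1 << m):
        lb = mask & -mask
        ssum[mask] = ssum[mask ^ lb] + d[lb.bit_length() - 1]

    # dp[mask]: max number of good (zero-sum) subsets on any one-bit-at-a-time
    # path from the empty set to mask; the empty set itself counts as good.
    dp = [0] * (1 << m)
    dp[0] = 1
    for mask in range(1, 1 << m):
        best = 0
        sub = mask
        while sub:                       # try removing each element last
            lb = sub & -sub
            best = max(best, dp[mask ^ lb])
            sub ^= lb
        dp[mask] = best + (1 if ssum[mask] == 0 else 0)

    # a chain of k+1 good sets gives k zero-sum parts, hence m - k operations
    return m - (dp[(1 << m) - 1] - 1)
```

On {0,0,0} → {2,0,1} this returns 2, matching the "4 - 2 = 2" computation earlier in the thread.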

It took me a while to catch your meaning: Consider paths adding one bit at a time and count the maximum number of usable subsets found along the way, rather than considering potentially large steps and suffering the cost of the myriad ways of so doing. Nice trick! It doesn't directly apply to variation 1 of the problem, since if $$$S_1 \subseteq S_2$$$ are good sets in that version of the problem it does not imply that $$$S_2 \setminus S_1$$$ is a good set. But I think this issue can be worked around by making several passes, in each of which adding a good set one bit at a time from left to right is considered, with a total cost of $$$O(2^N \cdot N \cdot (k_{\mathrm{optimal}}+1))$$$, which is $$$O(2^N N^2)$$$ for worst-case input.

I think $$$S_1$$$ should be a proper subset of $$$S_2$$$. Also, if $$$S_1$$$ and $$$S_2$$$ are good, then $$$S_2 \setminus S_1$$$ should be good, as $$$0 + 0 = 0$$$. Please correct me if I am wrong.

This reasoning goes through when we allow incrementing intervals by any integer (variation 2), but fails when we require intervals to be incremented only by positive integers (variation 1). For a specific example, take $$$N=3$$$, $$$A=[0,0,0]$$$, $$$B=[2,1,2]$$$. Then our backward differences are $$$\Delta = [+2, -1, +1, -2]$$$. The set $$$S_1 = \{1,4\}$$$ is a good set, as is its proper superset $$$S_2 = \{1,2,3,4\}$$$, but $$$S_2 \setminus S_1 = \{2, 3\}$$$ is not a good set because although $$$\Delta[2] + \Delta[3] = -1 + 1 = 0$$$, this set corresponds to the action of increasing the middle element of $$$A$$$ by $$$-1$$$, which is not allowed.
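The variation-1 "good set" condition can be checked like this (a small sketch; `delta` is 0-indexed here, unlike the 1-indexed positions above):

```python
def is_good_var1(delta, s):
    """A set s of indices is good for variation 1 iff its running sum,
    scanning indices in increasing order, stays >= 0 and ends at 0."""
    run = 0
    for j in sorted(s):
        run += delta[j]
        if run < 0:          # would require a negative increment somewhere
            return False
    return run == 0
```

With delta = [2, -1, 1, -2], the sets {0, 3} and {0, 1, 2, 3} are good, but their difference {1, 2} is not, illustrating the counterexample above.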

I've implemented the $$$O(2^N N^2)$$$ multiple-scanning idea (91667731) and verified that it works by testing it against my $$$O(3^N)$$$ algorithm, but it doesn't seem to be significantly faster on normal inputs with $$$n \leq 20$$$, at least not with my implementations. (Of course, it has the advantage of being less likely to fall victim to some pathological test case!)

I implemented variation 2, mostly following what umnik said. Still, I didn't know what a boolean cube is, so I tried to maximise the chain length considering proper subsets. I haven't implemented the backtracking for how to increment, but it gives correct output on most cases I know.

O(2^N * N) solution: basically I used DP for maximising the chain length. Could you please verify it on your part?

My code basically calculates the minimum number of steps to reach [a[0], a[1], a[2], .., a[n-1]] from [0, 0, 0, .., 0].