```
Return the length of the shortest, non-empty, contiguous subarray of A with sum at least K.
If there is no non-empty subarray with sum at least K, return -1.
EXAMPLE:
Input: A = [2,-1,2], K = 3
Output: 3
---------
Input: A = [1,2], K = 4
Output: -1
---------
Input: A = [84,-37,32,40,95], K = 167
Output: 3
```

I have tried 2-pointers/sliding-window but it can't handle negative cases and gives the wrong answer. Can the geniuses of CF help me out on how to approach this problem?

Here is my (non-working) code:

```java
public int shortestSubarray(int[] A, int K) {
    int ptr = 0;
    int sum = 0;
    int sol = Integer.MAX_VALUE;
    for (int i = 0; i < A.length; ++i) {
        sum += A[i];
        // Shrinking from the left assumes dropping A[ptr] can only lose usable sum,
        // which breaks once negative numbers are allowed.
        while (sum >= K) {
            sol = Math.min(sol, i - ptr + 1);
            sum -= A[ptr++];
        }
    }
    return (sol == Integer.MAX_VALUE) ? -1 : sol;
}
```

The two-pointer method should work. While advancing the right pointer, if the sum becomes negative, move the left pointer to the current index and reset the sum to 0, because it is never better to keep a prefix with negative sum. The rest is the same as normal two pointers on an array of positive integers.

This would fail for {5, -4, 1, 2} and K = 3.

It works for that case, but I get your point. The logic fails for sample 3 because we cannot directly subtract a[1] when l = 1, r = 5; we would need to remove both a[1] and a[2].
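To make the failure concrete, here is a minimal sketch of the reset-on-negative variant (class and method names are mine, not from the thread). On sample 3 it reports 5 even though [32, 40, 95] has length 3: once l = 0, r = 4 with sum 214, the shrink step can only drop a[0] = 84 (leaving 130 < 167), never the pair 84, -37 together.

```java
public class ResetVariantDemo {
    // Two pointers plus the "reset when the running sum goes negative" tweak.
    static int shortestSubarrayReset(int[] a, int k) {
        int l = 0, sum = 0, best = Integer.MAX_VALUE;
        for (int r = 0; r < a.length; r++) {
            sum += a[r];
            while (sum >= k) {                 // record and shrink from the left
                best = Math.min(best, r - l + 1);
                sum -= a[l++];
            }
            if (sum < 0) {                     // reset on negative running sum
                sum = 0;
                l = r + 1;
            }
        }
        return best == Integer.MAX_VALUE ? -1 : best;
    }

    public static void main(String[] args) {
        // Works on {5, -4, 1, 2}, K = 3 (answer 1, the subarray [5]) ...
        System.out.println(shortestSubarrayReset(new int[]{5, -4, 1, 2}, 3));
        // ... but reports 5 on sample 3, whose true answer is 3.
        System.out.println(shortestSubarrayReset(new int[]{84, -37, 32, 40, 95}, 167));
    }
}
```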

My bad!

So in conclusion this doesn't work, right?

An example test case would be:

The same two-pointer idea you mentioned is written on GFG (here), and it fails on the above case.

Is there a way to do this in O(n) time and O(1) auxiliary space?

It can be done in O(n) time and O(n) auxiliary space.

https://leetcode.com/problems/shortest-subarray-with-sum-at-least-k/solution/

Complexity of the solution is O(N * log(N)).

For every index i we'll try to find the smallest-length subarray that ends at position i and has sum >= k. pref[i] is the prefix sum of the array ending at position i. So if we're looking at position i and we don't plan on taking the entire prefix, we know there has to be some previous prefix sum at index j such that pref[i] - pref[j] >= k. Rearranging this inequality we get pref[i] - k >= pref[j]. So what we're essentially looking for at every i is the largest j whose prefix sum is <= pref[i] - k (the largest j, so that we can cut off as much of the array as possible). This can be solved with a segment tree that handles point updates and stores the max on a range. I think that solves the problem, and I hope that made sense.
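The "largest j with pref[j] <= pref[i] - k" query doesn't strictly need a segment tree; an ordered map works too. A sketch under my own naming (not code from the thread): keep a TreeMap from prefix-sum value to the latest index where it occurred, prune dominated entries so keys and indices increase together, and answer each query with floorEntry.

```java
import java.util.Map;
import java.util.TreeMap;

public class OrderedMapSolution {
    // O(n log n): for each i, find the largest j with pref[j] <= pref[i] - k.
    public static int shortestSubarray(int[] a, int k) {
        int n = a.length;
        long pref = 0;
        // Maps a prefix-sum value to the latest index where it occurred.
        // Entries with a larger value but smaller index are pruned, so keys
        // and indices increase together and floorEntry yields the largest j.
        TreeMap<Long, Integer> best = new TreeMap<>();
        best.put(0L, 0);                         // pref[0] = 0 at index 0
        int ans = n + 1;
        for (int i = 1; i <= n; i++) {
            pref += a[i - 1];
            Map.Entry<Long, Integer> e = best.floorEntry(pref - k);
            if (e != null) ans = Math.min(ans, i - e.getValue());
            // Prune prefix sums >= the current one: they are earlier and no
            // smaller, so they can never give a shorter answer than index i.
            while (!best.isEmpty() && best.lastKey() >= pref) best.pollLastEntry();
            best.put(pref, i);
        }
        return ans == n + 1 ? -1 : ans;
    }

    public static void main(String[] args) {
        System.out.println(shortestSubarray(new int[]{2, -1, 2}, 3));          // 3
        System.out.println(shortestSubarray(new int[]{84, -37, 32, 40, 95}, 167)); // 3
    }
}
```

floorEntry returns the entry with the greatest key <= pref - k, which by the pruning invariant is also the latest usable start index.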

You can also implement this using only an `std::set`: http://codeforces.com/blog/entry/59912?#comment-436136

It turns out there is an O(N) solution. Can anyone give some hints for that?

Keep a deque of the "best" prefix sums found so far. It will be increasing both in index and in value. Going from left to right, we add new prefix sums to the back (making sure the deque stays increasing in value) and remove prefix sums from the front (if we can use one here, it will never be better to use it later).

The code will look something like this:
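The original snippet didn't come through here, so the following is only my sketch of the deque idea described above (naming is mine): store indices into the prefix-sum array, keep their prefix values increasing, and pop from the front as soon as an index becomes usable.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class DequeSolution {
    // O(n): monotone deque over prefix sums.
    public static int shortestSubarray(int[] a, int k) {
        int n = a.length;
        long[] pref = new long[n + 1];          // pref[i] = sum of a[0..i)
        for (int i = 0; i < n; i++) pref[i + 1] = pref[i] + a[i];

        Deque<Integer> dq = new ArrayDeque<>(); // indices; pref values increasing
        int best = n + 1;
        for (int i = 0; i <= n; i++) {
            // Front: if an index is usable now, it will never be better later.
            while (!dq.isEmpty() && pref[i] - pref[dq.peekFirst()] >= k)
                best = Math.min(best, i - dq.pollFirst());
            // Back: a later index with a smaller-or-equal prefix sum dominates.
            while (!dq.isEmpty() && pref[dq.peekLast()] >= pref[i])
                dq.pollLast();
            dq.addLast(i);
        }
        return best == n + 1 ? -1 : best;
    }

    public static void main(String[] args) {
        System.out.println(shortestSubarray(new int[]{2, -1, 2}, 3)); // 3
        System.out.println(shortestSubarray(new int[]{1, 2}, 4));     // -1
    }
}
```

Each index enters and leaves the deque at most once, which is why the whole pass is linear.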

Hey there,

Thanks for the reply and solution. Would you be willing to share why you thought of these things so I can get an idea of your thought process and apply it to future problems?

Thanks

The first step I took is to think of the problem in terms of prefix sums. This is a very common approach in problems dealing with sums over subarrays, so the reason I thought of it is experience.

The second step I took is to notice that we're only interested in the last position in the list whose value is at most x. Therefore, we can remove items from the list whenever a later, smaller prefix sum appears. This makes the list increasing in both index and value. This is also quite a common trick, but I think it should be possible to come up with it even if you haven't seen it before.

Now, for a given end position, we can binary search the list to find the best start position for it. If we want to make the algorithm linear, we somehow have to get rid of the binary search. Suppose the optimal start position for a given end position is in the middle of the list. If we could remove the elements before it, this would make the algorithm linear. The problem is that those earlier elements could be the optimal start positions for some later end positions. The third step is to notice that although this might indeed cause the algorithm to miss some local minima, it will never cause it to miss the global minimum.

The way to make the algorithm linear (drop elements before the current optimum) is also quite a common trick, but one I think you could come up with even if you haven't seen it before. The most difficult part of this problem for me was to see why this works (local minima vs. global minima).

So in conclusion, to solve this problem in linear time I think you have to have some experience (to know the prefix sums technique), make some logical simplifications (here experience helps) and finally make the observation why this works (the creative, thinking part needing intuition and problem understanding).

Thank you very much for the answer.

I understood why we remove elements from the front: if we could use an element here, that's as good as using it anywhere else, so it can be discarded once used. That part is clear. What is not clear to me is why we remove larger elements from the back when inserting a smaller one. I tried to relate this to how we remove elements when finding the maximum in a sliding window. The reasoning there is: if I'm greater, you are better off having me in the window, and since I'm greater than all of you, I'll be the max anyway. But the same reasoning is hard to apply here. Could you shed some light on why we remove elements from the back? Thank you!

It is very similar. To put it in similar words: if my prefix sum is smaller, the interval will still be at least K if you use me, and since I come later, the interval will be shorter.

ahhhh I see! aha.. Thank you!

There's another easy solution in O(n log n). You can check in linear time whether there exists a subarray of length l with sum >= k (with two pointers), and then binary search for the optimal length.

Why did I get so many downvotes? Is my solution wrong?

Oh well, I just realised it doesn't work. Sorry, my bad.

For those who don't know, this idea is not entirely wrong. It works for arrays containing only non-negative numbers, or only non-positive numbers (plus 0): with non-negative elements, for example, widening a window never decreases its sum, so the feasibility of a length is monotone and binary search applies.

Are there any problems like that for testing?