Google Kick Start hosts online rounds throughout the year, giving participants the opportunity to test and grow their coding abilities while getting a sample of the programming skills needed for a technical career at Google. Top candidates might be invited for interviews.

I invite you to solve some fun and interesting problems on Apr 20 2019, 23:00 UTC.

The dashboard can be accessed here during the contest. The problem analysis will be published soon after the contest.

For C Large, I somehow feel that $$$Opt(l_1) \le Opt(l_2)$$$ if $$$l_1 \le l_2$$$, where $$$Opt(l)$$$ is the right-most index $$$r$$$ such that the number of trinkets collected in the range $$$[l, r]$$$ is maximized. Then I applied some divide-and-conquer, but failed to pass. Can anyone give an example where this is not the case? Thanks!

:`|

I changed my code during the contest to find the "leftmost" instead, but it still failed.

Could you please explain why $$$ans[l_1][j] - ans[l_2][j]$$$ is non-increasing in $$$j$$$? Or could you give an example where my assumption is wrong?

Sorry, I made a mistake in my proof. This case will break your argument:

6 2

5 5 5 1 5 2

But then the $$$Opt$$$ value for each left endpoint is the right-most point of the array, right?

With 1-based indexing, $$$Opt(1) = 6$$$ but $$$Opt(2) = 4$$$, right?
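A quick brute force confirms this. The sketch below is mine, not from the thread: it assumes the Diverse Subarray rule that a type appearing more than $$$S$$$ times in a window contributes nothing, and defines $$$Opt(l)$$$ as the right-most $$$r$$$ maximizing the value of $$$[l, r]$$$ (function names are hypothetical):

```python
from collections import Counter

def window_value(a, s, l, r):
    # value of a[l..r] (1-based, inclusive): a type occurring more
    # than s times in the window contributes nothing at all
    counts = Counter(a[l - 1:r])
    return sum(c for c in counts.values() if c <= s)

def opt(a, s, l):
    # right-most r maximizing the value of the window starting at l
    best_value, best_r = -1, l
    for r in range(l, len(a) + 1):
        v = window_value(a, s, l, r)
        if v >= best_value:
            best_value, best_r = v, r
    return best_r

a, s = [5, 5, 5, 1, 5, 2], 2
print(opt(a, s, 1), opt(a, s, 2))  # prints "6 4": Opt is not monotone in l
```

For $$$l = 1$$$ the maximum value 2 is reached at both $$$r = 2$$$ and $$$r = 6$$$ (right-most wins), while for $$$l = 2$$$ the unique best window is $$$[2, 4]$$$ with value 3, so $$$Opt$$$ jumps backwards.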

I see! Thank you so much!

I feel this year is harder than last year. Am I wrong?

darkshadows, I am ranked 58th globally. Are there any chances of getting a call for an interview? Thanks.

Could anyone please explain why B, Test 1, uses sorting by $$$L_i$$$? I thought it must be $$$\min(E_i, L_i)$$$ at each step, since $$$L_i$$$ is meaningless if it is more than the current energy.

In fact, I had the same doubt. Your condition, i.e. $$$\min(E_i, L_i)$$$, works only for the first second. Consider the following case:

$$$E_1 = 70$$$

$$$L_1 = 50$$$

$$$E_2 = 80$$$

$$$L_2 = 40$$$

Let's say the optimal sequence has the above stones eaten after the first second (maybe because other stones have large $$$L$$$ and $$$E$$$ as well). Then even though $$$L_2 < L_1$$$, stone 2 causes more energy loss if they are eaten after the first second: after the first second, stone 1 can only lose 20 more (70 - 50), while stone 2 can still lose 40 (80 - 40). So how does one build intuition, or find a proof, for these kinds of problems? If someone could help me, I'd appreciate it.

So if I apply DP after sorting according to $$$L_i$$$ only, I might get a wrong answer.

No matter what $$$E_i$$$ is, the solution is always sorted by $$$L_i$$$; this is the key point. So the problem reduces to choosing, or not choosing, stones from an $$$L_i$$$-sorted list.

Yes, but I want to understand why it works like that and why $$$E_i$$$ is not considered.

Imagine when the smaller $$$L$$$ would ever be preferred first. In such cases the actual potential loss of the bigger one is its current energy $$$E_{big}$$$ (which is $$$< L_{small} < L_{big}$$$). After the smaller one is picked, the bigger one's energy drops to zero, so it can never be picked. Hence the order smaller -> bigger is actually smaller -> not picked, which is the same as bigger -> smaller with the bigger not picked. That's why the bigger-$$$L$$$-first order covers all the cases.

Okay, it makes more sense if we account for the not-picked case. Thanks.

Let's say there are two sweets whose happiness and decay factors are $$$(e_1, l_1)$$$ and $$$(e_2, l_2)$$$, and suppose $$$l_1 \ge l_2$$$. Let $$$s$$$ be the time spent eating a sweet. If you eat sweet 1 and then sweet 2, the total happiness is $$$H_1 = e_1 + e_2 - s \cdot l_2$$$. But if you eat sweet 2 and then sweet 1, the total happiness is $$$H_2 = e_1 + e_2 - s \cdot l_1$$$. You can easily observe that the order in which you eat depends only on $$$l_1$$$ and $$$l_2$$$, and that $$$H_1 \ge H_2$$$: eating the sweet with the higher decay factor first is better.

It's not hard to see that this can be easily extended to large test.
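As a sketch of how the exchange argument turns into a solution (my own code, assuming the small-test setting where every sweet takes one second to eat, and `max_happiness` is a hypothetical name): sort by decay factor in decreasing order, then run a knapsack-style DP on how many sweets have been eaten so far, since the $$$j$$$-th sweet eaten has already decayed for $$$j - 1$$$ seconds:

```python
def max_happiness(sweets):
    # sweets: list of (E, L) pairs; assumes each sweet takes one second to eat
    sweets = sorted(sweets, key=lambda el: -el[1])  # higher decay factor first
    n = len(sweets)
    NEG = float("-inf")
    dp = [NEG] * (n + 1)  # dp[j] = best happiness after eating j sweets
    dp[0] = 0
    for e, l in sweets:
        for j in range(n, 0, -1):  # iterate j downward so each sweet is used once
            if dp[j - 1] > NEG:
                # eaten as the j-th sweet, it has decayed for j - 1 seconds;
                # remaining happiness cannot drop below zero
                dp[j] = max(dp[j], dp[j - 1] + max(0, e - (j - 1) * l))
    return max(dp)

print(max_happiness([(70, 50), (80, 40)]))  # prints 110
```

On the example above this eats the $$$L = 50$$$ sweet first (70) and the $$$L = 40$$$ sweet second (80 - 40 = 40), for 110 total, beating both single-sweet choices and the opposite order (100).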

actually, I think it should be e_1 + e_2 — s * min(e_2, l_2). Am I right?

Can somebody please explain the segment tree operations for Problem 3, Diverse Subarray, on the large test case?

I'm not able to understand how the max prefix sums of the two child nodes can be used to calculate the max prefix sum of the parent node.

Please include complexity analysis too.

Store, for each node, the sum of the elements covered by the node's range and the maximum prefix sum of that range. Consider a node P with left child L and right child R. Then P.maxPrefix = max(L.maxPrefix, L.sum + R.maxPrefix) and P.sum = L.sum + R.sum, so recomputing a single node is O(1). During an update we start at the leaf and walk up through its ancestors, visiting O(log n) nodes at O(1) work each, so a single update takes O(log n) in total.
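A minimal sketch of that node layout (my own Python, not the commenter's code; as a design choice here, leaves treat the empty prefix as 0, so the stored max prefix sum is never negative):

```python
class MaxPrefixSegTree:
    """Point update; the root's `best` is the max prefix sum of the whole array."""

    def __init__(self, n):
        size = 1
        while size < n:
            size *= 2
        self.size = size
        self.sum = [0] * (2 * size)   # sum of the node's range
        self.best = [0] * (2 * size)  # max prefix sum of the node's range

    def update(self, i, value):
        # set position i (0-based) to value, then fix the O(log n) ancestors
        i += self.size
        self.sum[i] = value
        self.best[i] = max(0, value)  # a leaf may fall back to the empty prefix
        i //= 2
        while i >= 1:
            left, right = 2 * i, 2 * i + 1
            self.sum[i] = self.sum[left] + self.sum[right]
            # a prefix of the parent either stays inside the left child,
            # or covers all of it and continues into a prefix of the right child
            self.best[i] = max(self.best[left],
                               self.sum[left] + self.best[right])
            i //= 2

    def max_prefix(self):
        return self.best[1]
```

For example, after setting the leaves to 2, -1, 3, -5 the max prefix sum is 2 - 1 + 3 = 4; point-updating the first leaf to -2 makes every non-empty prefix negative, so the answer falls back to 0.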

Can someone help me with my code for the problem Diverse Subarray (https://code.hackerearth.com/cdc08bM)? I can't find my mistake; the sample test cases and my own cases run fine.