### lunchbox's blog

By lunchbox, 11 months ago,

This post contains editorials for all tasks in the AtCoder DP Contest, since there is no official editorial. It was written by nwatx and me.

## A — Frog 1

Analysis
Code

## B — Frog 2

Analysis
Code

## C — Vacation

Analysis
Code

## D — Knapsack 1

Analysis
Code

## E — Knapsack 2

Analysis
Code

## F — LCS

Analysis
Code

## G — Longest Path

Analysis
Code

## H — Grid 1

Analysis
Code

## I — Coins

Analysis
Code

## J — Sushi

Analysis
Code

## K — Stones

Analysis
Code

## L — Deque

Analysis
Code

## M — Candies

Analysis
Code

## N — Slimes

Analysis
Code

## O — Matching

Analysis
Code

## P — Independent Set

Analysis
Code

## Q — Flowers

Analysis
Code

## R — Walk

Analysis
Code

## S — Digit Sum

Analysis
Code

## T — Permutation

Analysis
Code

## U — Grouping

Analysis
Code

## V — Subtree

Analysis

## W — Intervals

Analysis
Code

## X — Tower

Analysis
Code

## Y — Grid 2

Analysis
Code

## Z — Frog 3

Analysis
Code


» 11 months ago: LUNCHBOX OFZ
» » 11 months ago: LUNCHBOX OFZ
» » » 11 months ago: LUNCHBOX OFZ
» » 11 months ago: LUNCHBOX OFZ
» 11 months ago: LUNCHBOFZ
» 11 months ago: Someone finally wrote one!
» 11 months ago: Thanks a lot! It will be really helpful :)
» 11 months ago: Good job! I was waiting for this for ages. Thanks a lot... OFZ
» 11 months ago: I'm telling you, lunchbox is as cracked as he is jacked. Saw him the other day benching 696 lbs while winning IOI. I asked him what he was doing and he said "better than you" and walked out with an explosion.
» 11 months ago: Thank you very much. I was solving these problems for the past 2-3 days but couldn't find an editorial. What nice timing!
» 11 months ago: For problem T — Permutation, I came up with the following idea: we create $n-1$ directed edges between adjacent positions, each oriented from the position that must hold the smaller value to the one that must hold the larger, i.e. if the $i^{th}$ sign is $<$ then we create an edge from $i$ to $i+1$. The answer is then the number of topological orderings of this graph, but I am stuck there. Can anybody help me count them? Thanks in advance.
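Counting topological orderings of an arbitrary DAG is hard in general, but for this chain-shaped constraint graph the standard trick is a rank DP with prefix sums: let dp[i][j] be the number of ways to arrange the first i+1 values so that the element just placed is the (j+1)-th smallest among them. A minimal sketch, assuming 0-indexed signs (the function name `countSignPermutations` is made up):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Count permutations p of 1..n with p[i] < p[i+1] wherever s[i] == '<'
// and p[i] > p[i+1] wherever s[i] == '>', modulo 1e9+7. O(n^2) total.
long long countSignPermutations(int n, const string& s) {
    const long long MOD = 1000000007;
    // dp[j]: ways to place the first i+1 values with the newest element
    // ranked (j+1)-th smallest among those placed so far
    vector<long long> dp(n, 0);
    dp[0] = 1;  // one element placed; it is rank 0
    for (int i = 1; i < n; i++) {
        vector<long long> pre(n + 1, 0);  // prefix sums of dp
        for (int j = 0; j < n; j++) pre[j + 1] = (pre[j] + dp[j]) % MOD;
        vector<long long> ndp(n, 0);
        for (int j = 0; j <= i; j++) {
            if (s[i - 1] == '<')
                ndp[j] = pre[j];                        // previous element had rank < j
            else
                ndp[j] = (pre[i] - pre[j] + MOD) % MOD; // previous element had rank >= j
        }
        dp = ndp;
    }
    long long ans = 0;
    for (long long v : dp) ans = (ans + v) % MOD;
    return ans;
}
```

For example, n = 4 with signs `<><` admits 5 valid permutations.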
» 11 months ago: I can't find the editorial for N — Slimes anywhere in this post... Am I missing something?
» » 11 months ago: Thanks, I just added it!
» 11 months ago: This is one of the best lists covering all variants of DP, with a difficulty gradient starting from absolute beginner. Very nicely written analysis and code! Thanks a lot :)
» 10 months ago: For task U (Grouping), you can reduce computing cost[mask] from $O(2^n \cdot n^2)$ to $O(2^n \cdot n)$ like this:

```cpp
for (int i = 1; i < 1 << n; i++) {
    int first = (int) log2(i), last = (int) log2(i & -i);
    int p_first = 1 << first, p_last = 1 << last;
    if (first == last) cost[i] = 0;
    else cost[i] = cost[i - p_first] + cost[i - p_last]
                 - cost[i - p_first - p_last] + a[first][last];
}
```

And you can get rid of the log2 calls with some precalculation, bringing it down to $O(2^n)$.
» 10 months ago: I think in the analysis of W — Intervals the range is supposed to be $[0, l-1]$, not $[0, a-1]$. lunchbox
» » 10 months ago: Thanks for catching that!
» 9 months ago: sed
» 9 months ago: The time complexity for K is wrong.
» 8 months ago: Can someone explain problem L more clearly? Why are the transitions like that, and why does the analysis use dp[i][i] = 0 while the code uses dp[i][i] = a[i]?
» » 8 months ago: Errichto explained it well on his stream about that contest, in case you didn't see it: https://youtu.be/FAQxdm0bTaw?t=6892 The transition formula can be rephrased this way: "Take the left bound; the opponent will then achieve the best value on the interval dp[L+1][R], and the difference between your pick and the opponent's result is your answer for that choice. Or take the right bound; the opponent will then achieve the best value on the interval dp[L][R-1], and again the difference is your answer for that choice. Take the max of the two." I can't explain why dp[i][i] = 0 in the analysis while the actual implementation uses dp[i][i] = a[i], though.
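Concretely, the score-difference formulation described above, with the dp[i][i] = a[i] base case that the code uses, can be sketched like this (the function name `dequeGame` is made up):

```cpp
#include <bits/stdc++.h>
using namespace std;

// dp[l][r] = (score of the player to move) - (score of the opponent)
// when only a[l..r] remains and both play optimally. O(n^2) states.
long long dequeGame(const vector<long long>& a) {
    int n = a.size();
    vector<vector<long long>> dp(n, vector<long long>(n, 0));
    for (int i = 0; i < n; i++) dp[i][i] = a[i];  // one element left: take it
    for (int len = 2; len <= n; len++)
        for (int l = 0; l + len - 1 < n; l++) {
            int r = l + len - 1;
            // take the left end or the right end; the opponent then earns
            // the dp value of the remaining interval against us
            dp[l][r] = max(a[l] - dp[l + 1][r], a[r] - dp[l][r - 1]);
        }
    return dp[0][n - 1];  // X - Y under optimal play
}
```

For a = {10, 80, 90, 30} this returns 10, the value of X - Y under optimal play.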
» 6 months ago: The editorials are awesome and helped me a lot, thanks! But I am not able to understand the editorial for problem E. Can anyone please explain it in more detail?
» 6 months ago: In problem J, in the memo function, why are we adding n to the expected value?
» » 5 months ago: It's a bit late, but $\frac{n-x-y-z}{x+y+z}$ is the contribution of the wasted choices: $n-x-y-z$ is the number of empty plates, and $E[\text{wasted}]$ satisfies $E[\text{wasted}] = \frac{n-x-y-z}{n}\,(E[\text{wasted}]+1)$, which solves to $\frac{n-x-y-z}{x+y+z}$. On top of that they add $\frac{x+y+z}{x+y+z}$, i.e. 1, the "price" of transitioning from the smaller states. Putting the two together gives $\frac{(n-x-y-z)+(x+y+z)}{x+y+z} = \frac{n}{x+y+z}$.
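A sketch of that recurrence with the n/(x+y+z) term visible (the function name `expectedSpins` and the memo layout are made up):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Expected number of spins in J - Sushi until every dish is empty.
// State: x, y, z = number of dishes with exactly 1, 2, 3 pieces left.
// Hitting an empty dish wastes a spin; that is where n/k comes from.
double expectedSpins(int n, vector<int> counts) {
    int x = 0, y = 0, z = 0;
    for (int c : counts) {
        if (c == 1) x++;
        else if (c == 2) y++;
        else if (c == 3) z++;
    }
    int d = n + 1;
    vector<double> memo((size_t)d * d * d, -1.0);
    function<double(int, int, int)> E = [&](int a, int b, int c) -> double {
        if (a + b + c == 0) return 0.0;
        double& m = memo[((size_t)a * d + b) * d + c];
        if (m >= 0) return m;
        int k = a + b + c;
        double res = (double)n / k;  // expected spins until a nonempty dish is hit
        if (a) res += (double)a / k * E(a - 1, b, c);
        if (b) res += (double)b / k * E(a + 1, b - 1, c);
        if (c) res += (double)c / k * E(a, b + 1, c - 1);
        return m = res;
    };
    return E(x, y, z);
}
```

With three dishes of one piece each this gives 5.5, matching the coupon-collector value 3(1/3 + 1/2 + 1).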
» 5 months ago: In problem Q, you can use LIS instead of a segment tree: https://atcoder.jp/contests/dp/submissions/28292226
» 4 months ago: Sorry for necroposting, but since I found the solution to problem J (Sushi) a bit confusing, I am posting my approach here. $E(X) = \sum p_i x_i$, so for each 3D DP state (one, two, three) we can divide the event space into four mutually exclusive groups: a dish with 1 piece being selected, a dish with 2 pieces, a dish with 3 pieces, and none of these. Since the last group requires an infinite number of steps, its probability approaches 0 and it can be ignored. The $x_i$ of each of the first three groups is the number of steps taken to finally select a dish from that group plus the expectation of the state reached by selecting it. The number of steps can again be unbounded, so a suitable epsilon limits the processing to the first few terms, since the contribution becomes negligible after a while.

My submission:

```cpp
#include <bits/stdc++.h>
using namespace std;

int n;
long double dp[301][301][301] = {};
bool flags[301][301][301] = {};

long double acolyte(int one, int two, int three) {
    if (one + two + three == 0) return 0;
    if (flags[one][two][three]) return dp[one][two][three];
    flags[one][two][three] = true;
    long double output = 0, unsel = 1 - (long double)(one + two + three) / n, eps = 1e-16L;
    if (one) {
        long double partial = acolyte(one - 1, two, three);
        long double tout = 1 + partial, tunsel = unsel;
        int i = 1;
        while (tunsel > eps) { tout += tunsel * (i + 1 + partial); tunsel *= unsel; i++; }
        output += tout * one / n;
    }
    if (two) {
        long double partial = acolyte(one + 1, two - 1, three);
        long double tout = 1 + partial, tunsel = unsel;
        int i = 1;
        while (tunsel > eps) { tout += tunsel * (i + 1 + partial); tunsel *= unsel; i++; }
        output += tout * two / n;
    }
    if (three) {
        long double partial = acolyte(one, two + 1, three - 1);
        long double tout = 1 + partial, tunsel = unsel;
        int i = 1;
        while (tunsel > eps) { tout += tunsel * (i + 1 + partial); tunsel *= unsel; i++; }
        output += tout * three / n;
    }
    return dp[one][two][three] = output;
}

void processor() {
    cin >> n;
    int one = 0, two = 0, three = 0;
    for (int i = 0; i < n; i++) {
        int current;
        cin >> current;
        if (current == 1) one++;
        else if (current == 2) two++;
        else three++;
    }
    cout.precision(18);
    cout << acolyte(one, two, three) << endl;
}

int main() {
    ios_base::sync_with_stdio(false);
    cin.tie(NULL);
    processor();
    return 0;
}
```
» 3 months ago: Thank you very much!
» 3 months ago: Hey, could anyone tell me why a greedy approach to N — Slimes might fail? I maintain a multiset of all the slimes; at each iteration I take out the two lightest slimes, add their sum to my total cost, erase them, and insert their combined weight back into the set. This passes 50% of the test cases but fails the rest. Any clue?
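One likely culprit: in N — Slimes only adjacent slimes may be merged, while a multiset greedy merges the two globally lightest slimes even when they are not adjacent, so it can produce an infeasible (too cheap) plan. The intended approach is the classic O(n^3) interval DP over merge orders; a minimal sketch (the function name `minSlimeCost` is made up):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Minimum total cost to merge a row of slimes into one, where only
// adjacent slimes may merge and a merge costs the sum of their weights.
long long minSlimeCost(const vector<long long>& a) {
    int n = a.size();
    vector<long long> pre(n + 1, 0);            // prefix sums of weights
    for (int i = 0; i < n; i++) pre[i + 1] = pre[i] + a[i];
    const long long INF = LLONG_MAX / 4;
    vector<vector<long long>> dp(n, vector<long long>(n, 0));
    for (int len = 2; len <= n; len++)
        for (int l = 0; l + len - 1 < n; l++) {
            int r = l + len - 1;
            dp[l][r] = INF;
            for (int m = l; m < r; m++)         // last merge joins [l..m] and [m+1..r]
                dp[l][r] = min(dp[l][r], dp[l][m] + dp[m + 1][r]);
            dp[l][r] += pre[r + 1] - pre[l];    // the final merge costs the whole sum
        }
    return dp[0][n - 1];
}
```

For weights {10, 20, 30, 40} the minimum cost is 190.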
» 3 months ago: Thanks for all your great efforts ^w^
» 3 months ago: I had to do a double take at this code from problem U. What does it do? `for (int k = j; k; k = (k - 1) & j)`
» » 2 months ago: It just loops over all submasks of the mask j.
» » » 2 months ago: Thank you!
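A tiny self-contained illustration of that idiom (the helper name `submasks` is made up):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Collect all nonempty submasks of j. The step k = (k - 1) & j drops the
// lowest set bit pattern and re-restricts to the bits of j, so every
// submask is visited exactly once, in decreasing numeric order.
vector<int> submasks(int j) {
    vector<int> out;
    for (int k = j; k; k = (k - 1) & j) out.push_back(k);
    return out;
}
```

For j = 0b1011 this yields 11, 10, 9, 8, 3, 2, 1, i.e. every nonempty submask in decreasing order. Summed over all masks of an n-bit universe there are 3^n (mask, submask) pairs in total, which is what makes the Grouping DP run in O(3^n) overall.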
» 2 months ago: Actually the time complexity for problem K — Stones is $O(N \cdot K)$. Anyway, thanks for the awesome editorial, it helps a lot <3
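For reference, the O(N * K) win/lose table costs one pass over the N allowed moves for each pile size; a sketch (the function name `firstWins` is made up):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Returns true iff the first player wins with K stones, where a move
// removes some x in a. Classic win/lose game DP in O(|a| * K).
bool firstWins(const vector<int>& a, int K) {
    vector<bool> win(K + 1, false);  // win[k]: the player to move with k stones wins
    for (int k = 1; k <= K; k++)
        for (int x : a)
            if (x <= k && !win[k - x]) { win[k] = true; break; }
    return win[K];
}
```

The player to move wins exactly when some move leads to a losing position for the opponent.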