### icecuber's blog

By icecuber, 3 years ago,

I'm using bottom-up implementations and pull dp when possible. Pull dp is when we calculate each dp entry as a function of previously calculated dp entries. This is the way used in recursion / memoization. The other alternative would be push dp, where we update future dp entries using the current dp entry.

I think CSES is a nice collection of important CP problems, and would like it to have editorials. Without editorials, users get stuck on problems and give up without learning the solution. I think this slows down learning significantly compared to solving problems with editorials. Therefore, I encourage others who want to contribute to write editorials for other sections of CSES.

Feel free to point out mistakes.

## Dice Combinations (1633)

dp[x] = number of ways to make sum x using numbers from 1 to 6.

Sum over the last number used to create x; it is some number between 1 and 6. For example, the number of ways to make sum x ending with a 3 is dp[x-3]. Summing over the possibilities gives dp[x] = dp[x-1] + dp[x-2] + dp[x-3] + dp[x-4] + dp[x-5] + dp[x-6].

We initialize by dp[0] = 1, saying there is one way to make sum zero (the empty sequence of throws).

The complexity is $O(n)$.

Code
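As a sketch of the recurrence above (the helper name `dice_ways` is hypothetical, not the original hidden code):

```cpp
#include <vector>

// Pull dp: dp[x] = sum of dp[x-1..x-6], mod 1e9+7.
// dice_ways is a hypothetical name, not from the original solution.
long long dice_ways(int n) {
    const long long MOD = 1000000007;
    std::vector<long long> dp(n + 1, 0);
    dp[0] = 1;  // one way to reach sum zero: throw no dice
    for (int x = 1; x <= n; x++)
        for (int k = 1; k <= 6 && k <= x; k++)
            dp[x] = (dp[x] + dp[x - k]) % MOD;
    return dp[n];
}
```

For instance, `dice_ways(3)` counts 1+1+1, 1+2, 2+1 and 3, giving 4.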

## Minimizing Coins (1634)

This is a classical problem called the unbounded knapsack problem.

dp[x] = minimum number of coins with sum x.

We look at the last coin added to get sum x, say it has value v. We need dp[x-v] coins to get value x-v, and 1 coin for value v. Therefore we need dp[x-v]+1 coins if we are to use a coin with value v. Checking all possibilities for v must include the optimal choice of last coin.

As an implementation detail, we use dp[x] = 1e9 = $10^9 \approx \infty$ to signify that it is not possible to make value x with the given coins.

The complexity is $O(n\cdot target)$.

Code
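A minimal sketch of this unbounded knapsack (the helper name `min_coins` and the -1 convention for "impossible" are my own choices):

```cpp
#include <vector>
#include <algorithm>

// dp[x] = minimum number of coins with sum x; 1e9 stands in for infinity.
int min_coins(const std::vector<int>& coins, int target) {
    const int INF = 1e9;
    std::vector<int> dp(target + 1, INF);
    dp[0] = 0;
    for (int x = 1; x <= target; x++)
        for (int v : coins)
            if (v <= x && dp[x - v] != INF)
                dp[x] = std::min(dp[x], dp[x - v] + 1);  // use a coin of value v last
    return dp[target] == INF ? -1 : dp[target];          // -1: target unreachable
}
```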

## Coin Combinations I (1635)

This problem has a very similar implementation to the previous problem.

dp[x] = number of ways to make value x.

We initialize dp[0] = 1, saying the empty set is the only way to make 0.

Like in "Minimizing Coins", we loop over the possibilities for last coin added. There are dp[x-v] ways to make x, when adding a coin with value v last. This is since we can choose any combination for the first coins to sum to x-v, but need to choose v as the last coin. Summing over all the possibilities for v gives dp[x].

The complexity is $O(n\cdot target)$.

Code
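A sketch of the counting version (hypothetical helper `count_ordered`; ordered combinations, so the sum goes in the outer loop and the coins in the inner loop):

```cpp
#include <vector>

// dp[s] = number of ordered ways to make value s, mod 1e9+7.
long long count_ordered(const std::vector<int>& coins, int x) {
    const long long MOD = 1000000007;
    std::vector<long long> dp(x + 1, 0);
    dp[0] = 1;  // the empty combination makes 0
    for (int s = 1; s <= x; s++)
        for (int v : coins)                     // v = possible last coin
            if (v <= s) dp[s] = (dp[s] + dp[s - v]) % MOD;
    return dp[x];
}
```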

## Coin Combinations II (1636)

dp[i][x] = number of ways to pick coins with sum x, using the first i coins.

Initially, we say we have dp[0][0] = 1, i.e we have the empty set with sum zero.

When calculating dp[i][x], we consider the i'th coin. Either we didn't pick the coin, in which case there are dp[i-1][x] possibilities, or we picked it. Since we are allowed to pick it again, there are dp[i][x - v] possibilities, where v is the value of the i'th coin (not dp[i-1][x - v] possibilities).

Because we consider the coins in order, we will only count one order of coins. This is unlike the previous task, where we considered every coin at all times.

The complexity is $O(n\cdot target)$.

Code
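A sketch of the 2D recurrence described above (hypothetical helper `count_unordered`; the full 2D table is kept for clarity, though the i dimension can be rolled away):

```cpp
#include <vector>

// dp[i][x] = number of (unordered) ways to pick coins with sum x using the first i coins.
long long count_unordered(const std::vector<int>& coins, int target) {
    const long long MOD = 1000000007;
    int n = coins.size();
    std::vector<std::vector<long long>> dp(n + 1, std::vector<long long>(target + 1, 0));
    dp[0][0] = 1;  // the empty set makes sum zero
    for (int i = 1; i <= n; i++)
        for (int x = 0; x <= target; x++) {
            dp[i][x] = dp[i - 1][x];  // didn't pick coin i
            if (x >= coins[i - 1])    // picked coin i (possibly again): stay in row i
                dp[i][x] = (dp[i][x] + dp[i][x - coins[i - 1]]) % MOD;
        }
    return dp[n][target];
}
```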

## Removing Digits (1637)

dp[x] = minimum number of operations to go from x to zero.

When considering a number x, for each nonzero digit in the decimal representation of x, we can try to remove it. Each removal costs one operation, so the transition is dp[x] = $1 + \min_{d \in digits(x),\, d > 0}$ dp[x-d].

We initialize dp[0] = 0.

The complexity is $O(n)$.

Note that the greedy solution of always subtracting the maximum digit is also correct, but we are practicing DP :)

Code
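A sketch of the DP (hypothetical helper `removing_digits`; every x ≥ 1 has at least one nonzero digit, so dp[x] always becomes finite):

```cpp
#include <vector>
#include <algorithm>

// dp[x] = minimum number of operations to reduce x to zero.
int removing_digits(int n) {
    std::vector<int> dp(n + 1, 0);
    for (int x = 1; x <= n; x++) {
        dp[x] = 1e9;
        for (int t = x; t > 0; t /= 10) {   // walk over the decimal digits of x
            int d = t % 10;
            if (d > 0) dp[x] = std::min(dp[x], dp[x - d] + 1);
        }
    }
    return dp[n];
}
```

For example, 27 → 20 → 18 → 10 → 9 → 0 takes 5 steps.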

## Grid Paths (1638)

dp[r][c] = number of ways to reach row r, column c.

We say there is one way to reach (0,0), dp[0][0] = 1.

When we are at some position with a ., we came either from the left or top. So the number of ways to get to there is the number of ways to get to the position above, plus the number of ways to get to the position to the left. We also need to make sure that the number of ways to get to any position with a # is 0.

The complexity is $O(n^2)$, so linear in the number of cells of input.

Code
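A sketch of the grid DP (hypothetical helper `grid_paths`, using '.' and '#' as in the text above):

```cpp
#include <vector>
#include <string>

// dp[r][c] = number of ways to reach (r, c), mod 1e9+7; '#' cells keep 0 ways.
long long grid_paths(const std::vector<std::string>& g) {
    const long long MOD = 1000000007;
    int n = g.size();
    std::vector<std::vector<long long>> dp(n, std::vector<long long>(n, 0));
    for (int r = 0; r < n; r++)
        for (int c = 0; c < n; c++) {
            if (g[r][c] == '#') continue;               // blocked cell
            if (r == 0 && c == 0) { dp[r][c] = 1; continue; }
            if (r > 0) dp[r][c] += dp[r - 1][c];        // came from above
            if (c > 0) dp[r][c] += dp[r][c - 1];        // came from the left
            dp[r][c] %= MOD;
        }
    return dp[n - 1][n - 1];
}
```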

## Book Shop (1158)

This is a case of the classical problem called 0-1 knapsack.

dp[i][x] = maximum number of pages we can get for price at most x, only buying among the first i books.

Initially dp[0][x] = 0 for all x, as we can't get any pages without any books.

When calculating dp[i][x], we look at the last considered book, the i'th book. We either didn't buy it, leaving x money for the first i-1 books, giving dp[i-1][x] pages. Or we bought it, leaving x-price[i-1] money for the other i-1 books, and giving pages[i-1] extra pages from the bought book. Thus, buying the i'th book gives dp[i-1][x-price[i-1]] + pages[i-1] pages.

The complexity is $O(n\cdot x)$.

Code
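A sketch of the 0-1 knapsack (hypothetical helper `book_shop`). This is a space-optimized 1D version of the dp[i][x] recurrence above: iterating the money downward makes dp[m - price[i]] still refer to the previous row, so each book is bought at most once.

```cpp
#include <vector>
#include <algorithm>

// dp[m] = maximum pages for budget m, considering the books processed so far.
int book_shop(const std::vector<int>& price, const std::vector<int>& pages, int x) {
    std::vector<int> dp(x + 1, 0);
    for (size_t i = 0; i < price.size(); i++)
        for (int m = x; m >= price[i]; m--)          // downward: 0-1, not unbounded
            dp[m] = std::max(dp[m], dp[m - price[i]] + pages[i]);
    return dp[x];
}
```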

## Array Description (1746)

dp[i][v] = number of ways to fill the array up to index i, if x[i] = v.

We treat i = 0 separately. If x[0] = 0, we can replace it by anything, i.e. dp[0][v] = 1 for all v. Otherwise x[0] = v $\ne$ 0 is fixed, so dp[0][v] = 1 and dp[0][w] = 0 for all w $\ne$ v.

Now to the other indices i > 0. If x[i] = 0, we can replace it by any value v, but then the previous value must be v-1, v or v+1. Thus the number of ways to fill the array up to i is the sum over the previous value being v-1, v or v+1. If instead x[i] = v $\ne$ 0 from the input, only value v is allowed (i.e. dp[i][j] = 0 for j $\ne$ v), and dp[i][v] is given by the same sum: dp[i][v] = dp[i-1][v-1] + dp[i-1][v] + dp[i-1][v+1].

The complexity is $O(n\cdot m)$ with worst-case when x is all zeros.

Code
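A sketch of the DP (hypothetical helper `count_arrays`; the value dimension is padded at 0 and m+1 so the v-1 / v+1 lookups need no bounds checks):

```cpp
#include <vector>

// dp[i][v] = number of ways to fill the array up to index i with x[i] = v, mod 1e9+7.
long long count_arrays(const std::vector<int>& x, int m) {
    const long long MOD = 1000000007;
    int n = x.size();
    std::vector<std::vector<long long>> dp(n, std::vector<long long>(m + 2, 0));
    for (int v = 1; v <= m; v++)
        if (x[0] == 0 || x[0] == v) dp[0][v] = 1;       // base case at i = 0
    for (int i = 1; i < n; i++)
        for (int v = 1; v <= m; v++)
            if (x[i] == 0 || x[i] == v)                 // value v allowed at index i
                dp[i][v] = (dp[i - 1][v - 1] + dp[i - 1][v] + dp[i - 1][v + 1]) % MOD;
    long long ans = 0;
    for (int v = 1; v <= m; v++) ans = (ans + dp[n - 1][v]) % MOD;
    return ans;
}
```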

## Edit Distance (1639)

This is a classic problem called edit distance.

We call the input strings a and b, and refer to the first i characters of a by a[:i].

dp[i][k] = minimum number of moves to change a[:i] to b[:k].

When we calculate dp[i][k], there are four possibilities to consider for the rightmost operation. We check all of them and take the cheapest one.

1. We deleted character a[i-1]. This took one operation, and we still need to change a[:i-1] to b[:k]. So this costs 1 + dp[i-1][k] operations.

2. We added character b[k-1] to the end of a[:i]. This took one operation, and we still need to change a[:i] to b[:k-1]. So this costs 1 + dp[i][k-1] operations.

3. We replaced a[i-1] with b[k-1]. This took one operation, and we still need to change a[:i-1] to b[:k-1]. So this costs 1 + dp[i-1][k-1] operations.

4. a[i-1] was already equal to b[k-1], so we just need to change a[:i-1] to b[:k-1]. That takes dp[i-1][k-1] operations. This possibility can be viewed as a replace operation where we don't actually need to replace a[i-1].

The complexity is $O(|a|\cdot |b|)$.

Code
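A sketch of the recurrence (hypothetical helper `edit_distance`; possibilities 3 and 4 merge into one term whose cost is 0 when the characters already match):

```cpp
#include <string>
#include <vector>
#include <algorithm>

// dp[i][k] = minimum operations to change a[:i] into b[:k].
int edit_distance(const std::string& a, const std::string& b) {
    int n = a.size(), m = b.size();
    std::vector<std::vector<int>> dp(n + 1, std::vector<int>(m + 1, 0));
    for (int i = 0; i <= n; i++) dp[i][0] = i;  // delete everything
    for (int k = 0; k <= m; k++) dp[0][k] = k;  // insert everything
    for (int i = 1; i <= n; i++)
        for (int k = 1; k <= m; k++) {
            int replace = dp[i - 1][k - 1] + (a[i - 1] != b[k - 1]);
            dp[i][k] = std::min({dp[i - 1][k] + 1,   // delete a[i-1]
                                 dp[i][k - 1] + 1,   // insert b[k-1]
                                 replace});          // replace (or free match)
        }
    return dp[n][m];
}
```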

## Rectangle Cutting (1744)

dp[w][h] = minimum number of cuts needed to cut a w x h piece into squares.

Consider a $w \times h$ piece. If it is already square (w = h), we need 0 cuts. Otherwise, we need to make the first cut either horizontally or vertically. Say we make it horizontally; then we can cut at any position 1, 2, ..., h-1. If we cut at position k, we are left with two pieces of sizes $w \times k$ and $w \times (h-k)$. We can look up the number of moves to reduce these to squares in the dp array. We loop over all possibilities k and take the best one. Similarly for vertical cuts.

The complexity is $O(a^2\cdot b + a\cdot b^2)$.

Code
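A sketch of this DP (hypothetical helper `min_cuts`):

```cpp
#include <vector>
#include <algorithm>

// dp[w][h] = minimum cuts to reduce a w x h piece to squares.
int min_cuts(int a, int b) {
    std::vector<std::vector<int>> dp(a + 1, std::vector<int>(b + 1, 0));
    for (int w = 1; w <= a; w++)
        for (int h = 1; h <= b; h++) {
            if (w == h) continue;                  // already square: 0 cuts
            int best = 1e9;
            for (int k = 1; k < h; k++)            // horizontal first cut
                best = std::min(best, dp[w][k] + dp[w][h - k] + 1);
            for (int k = 1; k < w; k++)            // vertical first cut
                best = std::min(best, dp[k][h] + dp[w - k][h] + 1);
            dp[w][h] = best;
        }
    return dp[a][b];
}
```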

## Money Sums (1745)

This is a case of the classical problem called 0-1 knapsack.

dp[i][x] = true if it is possible to make x using the first i coins, false otherwise.

It is possible to make x with the first i coins if either it was possible with the first i-1 coins, or we chose the i'th coin and it was possible to make x - v using the first i-1 coins, where v is the value of the i'th coin.

Note that we only need to consider sums up to 1000 $\cdot$ n, since we can't make more than that using n coins of value $\le$ 1000.

The complexity is $O(n^2\cdot \max x_i)$.

Code
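A sketch of the reachability DP (hypothetical helper `money_sums`, returning the sorted list of achievable sums; the i dimension is rolled away by iterating sums downward):

```cpp
#include <vector>

// can[s] = true if some subset of the coins sums to s.
std::vector<int> money_sums(const std::vector<int>& x) {
    int total = 0;
    for (int v : x) total += v;
    std::vector<char> can(total + 1, 0);
    can[0] = 1;                                  // the empty set makes 0
    for (int v : x)
        for (int s = total; s >= v; s--)         // downward: each coin used once
            if (can[s - v]) can[s] = 1;
    std::vector<int> res;
    for (int s = 1; s <= total; s++)
        if (can[s]) res.push_back(s);
    return res;
}
```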

## Removal Game (1097)

The trick here is to see that since the sum of the two players' scores is the sum of the input list, player 1 tries to maximize $score_1-score_2$, while player 2 tries to minimize it.

dp[l][r] = the difference $score_1-score_2$ when the game is played only on the interval [l, r].

If the interval contains only one element (l = r), then the first player must take that element. So dp[i][i] = x[i].

Otherwise, player 1 can choose to take the first element or the last element. If he takes the first element, he gets x[l] points, and we are left with the interval [l+1,r], but with player 2 starting. $score_1-score_2$ on interval [l+1,r] is just dp[l+1][r] if player 1 starts. Since player 2 starts, it is -dp[l+1][r]. Thus, the difference of scores will be x[l]-dp[l+1][r] if player 1 chooses the first element. Similarly, it will be x[r]-dp[l][r-1] if he chooses the last element. He always chooses the maximum of those, so dp[l][r] = max(x[l]-dp[l+1][r], x[r]-dp[l][r-1]).

In this problem dp[l][r] depends on dp[l+1][r], and therefore we need to compute larger l before smaller l. We do it by looping through l from high to low. r still needs to go from low to high, since we depend only on smaller r (dp[l][r] depends on dp[l][r-1]). Note that in all the other problems in this editorial, dp only depends on smaller indices (like dp[x] depending on dp[x-v], or dp[i][x] depending on dp[i-1][x]), which means looping through indices in increasing order is correct.

We can reconstruct the score of player 1 as the mean of the total sum and the dp value: $score_1 = \frac{(score_1+score_2) + (score_1-score_2)}{2}$.

The complexity is $O(n^2)$.

Code
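A sketch of this game DP (hypothetical helper `removal_game`; as described above, l runs from high to low and r from low to high):

```cpp
#include <vector>
#include <algorithm>

// dp[l][r] = score1 - score2 on interval [l, r]; answer = (sum + dp[0][n-1]) / 2.
long long removal_game(const std::vector<int>& x) {
    int n = x.size();
    long long total = 0;
    for (int v : x) total += v;
    std::vector<std::vector<long long>> dp(n, std::vector<long long>(n, 0));
    for (int l = n - 1; l >= 0; l--) {
        dp[l][l] = x[l];                          // single element: must take it
        for (int r = l + 1; r < n; r++)
            dp[l][r] = std::max((long long)x[l] - dp[l + 1][r],   // take front
                                (long long)x[r] - dp[l][r - 1]);  // take back
    }
    return (total + dp[0][n - 1]) / 2;            // score1 = (sum + diff) / 2
}
```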

## Two Sets II (1093)

This is a 0-1 knapsack in disguise. If we are to have two subsets of equal sum, they must sum to half the total sum each. This means if the total sum $\frac{n(n+1)}{2}$ is odd, the answer is zero (no possibilities). Otherwise we run 0-1 knapsack to get the number of ways to reach $\frac{n(n+1)}{4}$ using subsets of the numbers 1..n-1. Why n-1? Because by only considering numbers up to n-1, we always put n in the second set, and therefore only count each pair of sets once (otherwise we count every pair of sets twice).

dp[i][x] = number of ways to make sum x using subsets of the numbers 1..i .

We say there is one way (the empty set) to make sum 0, so dp[0][0] = 1.

For counting number of ways to make sum x using values up to i, we consider the number i. Either we didn't include it, then there are dp[i-1][x] possibilities, or we included it, and there are dp[i-1][x-i] possibilities. So dp[i][x] = dp[i-1][x] + dp[i-1][x-i].

The complexity is $O(n^3)$.

Code
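A sketch (hypothetical helper `two_sets`; the i dimension is rolled away by iterating sums downward, and only the numbers 1..n-1 are used, as explained above):

```cpp
#include <vector>

// Number of ways to split {1..n} into two sets of equal sum, mod 1e9+7.
long long two_sets(int n) {
    const long long MOD = 1000000007;
    long long total = (long long)n * (n + 1) / 2;
    if (total % 2) return 0;                     // odd total: impossible
    int half = (int)(total / 2);
    std::vector<long long> dp(half + 1, 0);
    dp[0] = 1;
    for (int i = 1; i < n; i++)                  // n itself is fixed in the second set
        for (int s = half; s >= i; s--)          // downward: each number used once
            dp[s] = (dp[s] + dp[s - i]) % MOD;
    return dp[half];
}
```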

## Increasing Subsequence (1145)

This is a classical problem called Longest Increasing Subsequence or LIS for short.

dp[x] = minimum ending value of an increasing subsequence of length x+1, using the elements considered so far.

We add elements one by one from left to right. Say we want to add a new value v. For v to extend an increasing subsequence, the previous value in the subsequence must be less than v. We might as well extend the maximum-length such subsequence, since the values before v don't matter for the continuation to the right of v. So we want the rightmost element in the dp array (position corresponds to subsequence length) with value less than v; say it is at position x. Then v becomes a new candidate ending value at position x+1, since we now have an increasing subsequence of length x+2 ending in v. Note that since x was the rightmost position with value less than v, changing dp[x+1] to v can only make that entry smaller (better), so we can always set dp[x+1] = v without checking whether it is an improvement first.

Naively locating position x with a for loop gives complexity $O(n^2)$. However, dp is always an increasing array, so we can locate x by binary search (std::lower_bound in C++ directly gives position x+1).

The final answer is the length of the dp array after considering all elements.

The complexity is $O(n\cdot \log n)$.

In this task we were asked to find the longest strictly increasing subsequence. To find the longest increasing subsequence where we allow consecutive equal values (for example 1,2,2,3), change lower_bound to upper_bound.

Code
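A sketch of the binary-search LIS (hypothetical helper `lis`; the array is called `tails` here, playing the role of the dp array above):

```cpp
#include <vector>
#include <algorithm>

// tails[x] = minimum ending value of a strictly increasing subsequence of length x+1.
int lis(const std::vector<int>& a) {
    std::vector<int> tails;
    for (int v : a) {
        // first position with value >= v; for the non-strict variant use upper_bound
        auto it = std::lower_bound(tails.begin(), tails.end(), v);
        if (it == tails.end()) tails.push_back(v);  // extends the longest subsequence
        else *it = v;                               // better (smaller) ending value
    }
    return tails.size();
}
```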

## Projects (1140)

Even though days can go up to $10^9$, we only care about days where we either start or just finished a project. So before doing anything else, we compress all days to their index among all interesting days (i.e days corresponding to $a_i$ or $b_i+1$ for some i). This way, days range from 0 to less than $2 n \le 4\cdot 10^5$.

dp[i] = maximum amount of money we can earn before day i.

On day i, maybe we just did nothing, so we earn what we earned on day i-1, i.e dp[i-1]. Otherwise, we just finished some project. We earned some money doing the project, and use dp[start of project] to know how much money we could have earned before starting the project. Loop through all projects finishing just before day i, and take the best one.

The complexity is $O(n\cdot \log n)$, log comes from day compression.

Code
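A sketch of this DP (hypothetical helper `max_reward` and struct `Project`). Instead of compressing days explicitly, this variant sorts projects by end day and binary-searches for the last project ending before the current one starts, which realizes the same recurrence:

```cpp
#include <vector>
#include <algorithm>

struct Project { int a, b; long long p; };  // start day, end day, reward

long long max_reward(std::vector<Project> ps) {
    std::sort(ps.begin(), ps.end(),
              [](const Project& x, const Project& y) { return x.b < y.b; });
    int n = ps.size();
    std::vector<int> ends(n);
    for (int i = 0; i < n; i++) ends[i] = ps[i].b;
    // dp[i] = best reward using only the first i projects (by end day)
    std::vector<long long> dp(n + 1, 0);
    for (int i = 0; i < n; i++) {
        // number of earlier projects ending strictly before ps[i].a
        int j = std::upper_bound(ends.begin(), ends.begin() + i, ps[i].a - 1)
                - ends.begin();
        dp[i + 1] = std::max(dp[i],               // skip project i
                             dp[j] + ps[i].p);    // take it after a compatible prefix
    }
    return dp[n];
}
```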


 » 3 years ago, # |   0 Hello. Thanks for editorial. Do you have links to above problems or where I can submit them. I want to try all of them as they look good from IPOV.
 » 3 years ago, # |   +46 Cool, thanks. I already recommend CSES problem set to some people and I will now link to this blog for dp. Just next time consider using names like ways or best_score in the future instead of dp in every single problem — if you want to then publish your codes to help others.
 » 3 years ago, # |   +21 In my opinion, open editorials to CSES problems (with code!) goes against the spirit of the platform. Not providing solutions or access to accepted submissions must have been a conscious decision by the maintainers.
•  » » 3 years ago, # ^ | ← Rev. 2 →   +37 I thought about that, and therefore asked pllk for permission before writing the editorial. He encourages editorials for the problems. However, he mentioned that it might cause some people to read the editorial without thinking about the problem.
•  » » » 3 years ago, # ^ |   0 meooow I think you are wrong here. I agree one should think before rushing to editorials but having no editorials forces one to leave the question without any further learning and development.Thanks to icecuber who invested his precious time in writing some editorials and pllk for allowing it.
•  » » » » 3 years ago, # ^ |   +20 Having no editorial is not enough reason to leave a problem. There are many avenues to ask for help once you have truly exhausted all approaches you can think of.However, since pllk approves of this, my point of complaint isn't valid.
•  » » » » » 3 years ago, # ^ |   +1 Can you kindly suggest some of the many avenues you mention of? And are they more effective and efficient than having the editorial at your disposal?
•  » » » » » » 3 years ago, # ^ |   +2 Asking for help FAQIs the answer you get by asking for help going to be better than an editorial? Maybe not.Is the absence of an editorial going to make you work harder to solve the problem? Absolutely yes.Please do not think that I am preaching for a world with no editorials. It is just that the design of the CSES problemset implies that it is meant to solved by putting in maximum individual effort.
•  » » » » » » » 3 years ago, # ^ |   +1 Well, I believe that every problem should have an editorial. Let's say I put in 2 days worth of effort solving a problem without making any real progress then an editorial guarantees me closure for my efforts. On the other hand, asking help from others doesn't provide me that certainty. Especially for the guys like me who don't have coding circles where we can ask for help from friends. Cyans(or lower rating) people asking for help from a random person usually gets ignored. That's just what I have observed and my opinion.
•  » » » » » » » » 3 years ago, # ^ |   0 Idk, I usually answer people that ask nicely. Show the problem source and more people might help.
 » 3 years ago, # |   +12 Thanks for this. I love the work here. CSES needs more editorials like this; sometimes even just stalking people's code isn't enough to understand what's going on. I hope you keep going with this too.
•  » » 3 years ago, # ^ |   0 How to see other peoples submission in CSES.fi ?
•  » » » 3 years ago, # ^ |   0 I meant looking at people’s GitHubs or something like that since you can’t view other solutions until you solve it yourself
 » 3 years ago, # |   0 Thanks to this blog, I go back to my account on CSES problem set. I found out that I didn't solve 2 problems in DP section (Got TLE several testcases). I solved it and AC in first submit, but the logic is the same as I have solved it before. Lmao =)))
•  » » 3 years ago, # ^ |   +7 I've gotten two more questions about TLE in Coin Combinations II, so I guess it deserves a comment. Since we are doing $10^8$ operations in one second, we need those operations to be efficient. This means we can't have many cache misses.You get cache misses by accessing array entries that are far away from each other. My implementation loops through i, then j. It accesses dp[i][j], dp[i-1][j] and dp[i][j-x[i-1]].If you order your loops differently (j, then i), or use dp[j][i] instead of dp[i][j] (so you swapped the meaning of the dimensions), you will likely get TLE.In terms of rules of thumb, we see that the dimension containing j-x[i-1] varies most, and therefore put it as the inner dimension of the dp array. And we loop through the dimensions the same order as the dimensions of the array. Below are some more detailed explanations.If we loop through i, j-x[i-1] varies a lot, this means cache misses. Therefore we need to loop through i in the outer loop. Looping through j gives contiguous memory accesses, so we get few cache misses by having j as the inner loop.If you define your array as dp[j][i] instead of dp[i][j], then dp[j-x[i-1]][i] goes far from dp[j][i], compared to dp[i][j] and dp[i][j-x[i-1]]. This is because the outer dimension gives smaller distance in memory than the inner dimension (you can think of dp[i][j] as accessing index $10^6 i + j$ in a linear array, if the second dimension has size $10^6$).
•  » » » 3 years ago, # ^ | ← Rev. 2 →   0 Is it the same reason (that the O(n * sum) solution requires $10^8$ ops in the worst case), and hence top-down is not feasible due to the extra overhead of recursion?
•  » » » 10 months ago, # ^ |   0 I can't believe this cache miss concept has only 8 upvotes! It's amazing!
 » 3 years ago, # |   0 In Book Shop , if I buy the i'th book then shouldn't x-price[i] money be left for the remaining i-1 books , so I can't see why x-price[i-1] money is left for the remaining i-1 books as that would imply that the money left is after buying the (i-1)'th book. So if icecuber or anyone else could please explain this it would be a great help. Thanks.
•  » » 3 years ago, # ^ |   0 The array "price" is 0-indexed, so the price of the i'th book is price[i-1].
•  » » » 3 years ago, # ^ |   +3 I feel so dumb for asking such a question as now it is obvious what you were trying to say. And I am sorry for asking something like this. I am thankful to you for answering and also for the editorial. Thanks icecuber.
 » 3 years ago, # | ← Rev. 2 →   0 For Coin Combinations II (1636), my solution getting TLE in 3 TCs. Unlike editorial, I'm trying top-down approach. Can someone help please?
•  » » 3 years ago, # ^ |   0 TLE on 100 1000000? I don't know but I think top-down will be slower when the table is filled-up
 » 3 years ago, # |   0 In Two Sets II problem, Because by only considering numbers up to n-1, we always put n in the second set, and therefore only count each pair of sets once (otherwise we count every pair of sets twice). Why putting n always in second set gurantees unique sets? How to prove above statement.
•  » » 3 years ago, # ^ | ← Rev. 2 →   0 Since you aren't having 'n' in your first set, you'll never have a first set that consists of a combination involving 'n' i.e. all the times a combination requires 'n' you are simply not counting it... thus you end up counting things only once.
 » 3 years ago, # |   +7 Please do the same for "Graph Algorithms" section. Thanks.
 » 3 years ago, # |   0 please provide editorial for graphs problem. Thanks:)
•  » » 3 years ago, # ^ |   +3 It takes a long time to write up an editorial like this, and the graph section has like a billion problems. If icecuber provides an editorial that's great, but there exists code on github for all the problems so if you're stuck you shouldn't rely on him making an editorial, as he may not want to. Notice how people asked 7 months ago (comment) and he still has not, so I doubt it'll happen smh. No need for people to keep asking.
 » 3 years ago, # | ← Rev. 3 →   +13 Check out the AC codes for all CSES dp problems here:https://github.com/pratikkundnani/cses-dp-solutions/tree/masterI will keep updating the repository! I will be solving graph problems too.Star it if you liked it.Thank you!
 » 3 years ago, # |   0 Can you write an editorial for the graph section too...it will be a great help..
 » 3 years ago, # | ← Rev. 2 →   0 Why my code for Minimizing Coins gives TLE for some test cases, even it's complexity is also O(n*target)? Code#pragma GCC optimize("Ofast") #pragma GCC target("avx,avx2,fma") #include using namespace std; typedef long long int ll; ll dp[1000005]; int main(){ ios_base::sync_with_stdio(false); cin.tie(0);cout.tie(0); ll n,target,i,j; cin>>n>>target; ll arr[n]; for(i=0;i>arr[i]; for(i=0;i<=target;i++) dp[i]=INT_MAX; for(i=0;i<=target;i++){ for(j=0;j
•  » » 3 years ago, # ^ | ← Rev. 2 →   +1 The i%arr[j] is causing the TLE I believe. I removed it, since it isn't required as dp[i-arr[j]] will handle it.This is because the constraints are already too tight and modulo operation is time costly operation. Also, sorting arr also improved the time a bit, but major contribution was made by remove the module operation.Hope this helps. Code#include "bits/stdc++.h" using namespace std; typedef long long int ll; ll dp[1000005]; ll arr[101]; int main(){ ios_base::sync_with_stdio(false); cin.tie(0);cout.tie(0); ll n,target,i,j; cin>>n>>target; for(i=0;i>arr[i]; sort(arr,arr+n); for(i=0;i<=target;i++) dp[i]=INT_MAX; dp[0] = 0; for(i=0;i<=target;i++){ for(j=0;j
 » 3 years ago, # |   0 Can anyone explain the matrix expo solution to Dice Combinations? It has much lower complexity ($6^3 \log n$ afaict). This is the code, for example, which uses the matrix

1 1 1 1 1 1
1 0 0 0 0 0
0 1 0 0 0 0
0 0 1 0 0 0
0 0 0 1 0 0
0 0 0 0 1 0

raises it to the power $n$, and then prints dp[0][0] as the answer. Can someone explain how we come up with this matrix in the first place?
•  » » 3 years ago, # ^ |   0 Check CP handbook's matrix chapter's linear recurrences section.
•  » » » 22 months ago, # ^ |   0 Please give the link of CP handbook , you mentioned above.
 » 3 years ago, # |   0 Thanks a lot icecuber! I found your editorial super useful.It was hard for me to understand the Removal Game (1097) solution at first. I even found another version here, and struggled comprehending it as well :-) Like, I saw how the code could possibly work, of course, but the trail of thought leading to that code was eluding me. So I found my own trail then, which may be somewhat more beginner-friendly. So, here it is:Given the input numbers xs, I started with two arrays: A[l][r], B[l][r] -- the scores for each player, when they have the first move on the range of [l, r]. Now, say, I'm the player A, and I pull from the front of the range -- the xs[l]. Let t be the range_sum(l, r). Then, t = xs[l] + B[l + 1][r] + alpha, which leads to alpha = t - xs[l] - B[l + 1][r]. Thus, A[l][r] = xs[l] + alpha = t - B[l + 1][r]. Similarly, if I pull from the back of the range, A[l][r] = t - B[l][r - 1]. The optimal strategy is to take the largest of the two numbers.Next, we make an observation, that the A will be identical to B, and we only need one DP matrix. The codeusing ll = long long; using vll = vector; using vvll = vector; ll max_a_score(const vll &xs) { const int sz = xs.size(); vll psums(sz, 0); partial_sum(xs.cbegin(), xs.cend(), psums.begin()); const auto range_sum = [&psums](const int l, const int r) { const ll b = psums[r]; const ll a = l > 0 ? psums[l - 1] : 0; return b - a; }; // score when playing [from index l] [to index r] vvll score(sz, vll(sz, 0)); for (int r = 0; r != sz; ++r) { score[r][r] = xs[r]; for (int l = r - 1; l >= 0; --l) { score[l][r] = range_sum(l, r) - min(score[l + 1][r], score[l][r - 1]); } } return score.front().back(); } 
•  » » 2 years ago, # ^ | ← Rev. 2 →   0 What is the alpha here?and can u explain how A and B and identical?
•  » » » 2 years ago, # ^ |   0 Oh, right. I didn't explain that alpha, did I? alpha is whatever remains from A[l][r] after xs[l] is taken: alpha := A[l][r] - xs[l]. It's just a temporary variable that helps in deriving the recurrence.Now, regarding the sameness of A and B. As defined, A[l][r] is the top score for a player, when they have the first move on the range of [l, r]. And, whether that player 1st or 2nd is, is fully determined by the parity (even/odd) of the interval's length: r - l + 1. Thus, designating the A from the B is actually superfluous.
•  » » » » 2 years ago, # ^ |   0 That's clear nowthanks
 » 3 years ago, # |   0 In the edit distance problem, it says -- "When we calculate dp[i][k], there are four possibilities to consider for the rightmost operation. We check all of them and take the cheapest one."Why do we only need to consider the rightmost operation? I feel this is worth discussing (either in the post or as a reply to my comment).
 » 3 years ago, # | ← Rev. 4 →   +3 In Coin Combinations 2, you don't need extra dimension/state. dp[x + 1] = {0} would work fine. dp[0] = 1. dp[i] represents no. of ways to reach value 'i' with coins considered so far. So loop through coins in any order, and update values of future states of dp by looping through previous states of dp from left to right i.e. from 0 to x. That is dp[current_coin + state] += dp[state]; Spoiler - Code#include using namespace std; #define watch(x) cout << (#x) << " is " << (x) << endl #define fr(i,n) for(int i=0; i=en; i--) #define sq(a) (a) * (a) typedef long long ll; typedef pair pii; typedef pair pll; ll mod = 1e9+7; #define INF INT_MAX void add_self(int &a, int b){ a += b; if(a >= mod) a -= mod; } void solve(){ int n, x; cin>>n>>x; int c[n]; fr(i, n) cin>>c[i]; vector dp(x + 1, 0); /// dp[i] = no. of ordered ways to have value i dp[0] = 1; for(int coin: c){ for(int i = 0; i + coin <= x; i++){ add_self(dp[i + coin], dp[i]); } } cout<>t; while(t--){ solve(); } return 0; } Space: O(target)Time: O(target * n)
•  » » 2 years ago, # ^ |   0 Can you please clarify your approach?
 » 3 years ago, # |   0 In Coin Combinations II(1636) shouldn't the vector x be sorted?
 » 3 years ago, # | ← Rev. 3 →   0 I am making detailed video explanations for some of the problems of this section, I will keep updating this comment whenever I add a new video. Along with the explanation I live code the solution at the end of the video and submit the code on the platform.Interested may check the following videos. Dice Combinations: https://youtu.be/5gd5jptXWAM Coin Combinations: https://youtu.be/-pXjopzMVrE Grid Paths: https://youtu.be/V64F4wlodUM Book Shop: https://youtu.be/qpNy2nuSuaI Also added on my channel videos for coin combinations 2, array Description, removing digits, edit distance, Rectangle cutting and projects.Hopefully people will find this helfpul.Links to Video solution for all problems can be found here: https://codeforces.com/blog/entry/80064
 » 3 years ago, # | ← Rev. 2 →   0 I have been trying the problem Book Shop for a very long time now. I am using the same logic as given by you. Still, it is giving me TLE. Is there any other approach for this question or some optimization that I can do on my code:Here is my code: /* Priyansh Agarwal*/ #include using namespace std; #define fastio() ios_base::sync_with_stdio(false);cin.tie(NULL);cout.tie(NULL) #define MOD 1000000007 #define MOD1 998244353 #define nline "\n" #define pb push_back #define mp make_pair #define ff first #define ss second #define PI 3.141592653589793238462 #define debug(x) cout << #x << " " << x <> n >> x; int value[n]; int weight[n]; for (int i = 0; i < n; i++) cin >> weight[i]; for (int i = 0; i < n; i++) cin >> value[i]; int **dp = new int*[n]; for (int i = 0; i < n; i++) dp[i] = new int[x + 1]; for (ll i = 0; i <= x; i++) { for (ll j = 0; j < n; j++) { dp[j][i] = j - 1 >= 0 ? dp[j - 1][i] : 0; if (weight[j] <= i) dp[j][i] = max(dp[j][i], (j - 1 >= 0 ? dp[j - 1][i - weight[j]] : 0) + value[j]); } } cout << dp[n - 1][x]; return 0; } 
•  » » 3 years ago, # ^ |   +3 Incase you missed it, I have replied to your comment on my youtube video: "Honestly it's sad that the time limit on the platform is very tight. Try changing the loops, make outer loop for n and inner loop for x and it should pass. Try it and tell me if it works. I'll give you a plausible explanation why this works."
•  » » » 3 years ago, # ^ |   0 Yes, I just did that and it worked. But, I have no idea why this happened. The time complexity for both of them is same but just interchanging the loops is reducing the time it is taking for the program to run. Looking forward to your explanation on this. And yes, Thanks a lot.
•  » » » » 3 years ago, # ^ |   +3 I am no expert but here is what I can tell you, There is something known as locality of reference, if your program uses a part of the memory which is located in contiguous memory addresses(or nearby memory addresses) it will execute faster.Why is that so? Caching. The OS(again I am no expert, but hopefully someone will put more light on this) will cache some of the regions of the data stored in main memory, accessing cache is much faster than accessing main memory.Ever Thought why iterating an array is much faster than iterating a link list of same size even though asymptotically both of them are O(N) operations.Try laying out your 2d array and think which of two approaches will make more compact memory region accesses. Hope this helps!
•  » » » » » 3 years ago, # ^ | ← Rev. 3 →   0 Thanks for the reply again but correct me if I am wrong, If I have 2D array of N*X and another of X*N and in both of them I iterate wouldnt it take the same time. I understood the part that contiguous memory will be faster to iterate on but aren't both the loops doing the same thing. In both cases we are iterating on contiguous memory locations. In the first case, we will do like X iterations N times and in the second we will do N itetations X times. So, does it affect the running tume considering X is greater than N. Sorry for the trouble but it would really help me if you van explain a little by taking an example. How about you explain this when N is 2 and X is 5. This will really clear the doubt.
•  » » » » » » 3 years ago, # ^ |   +3 keeping in mind caching and locality of reference think which of the two recurrences will likely evaluate faster: 1. dp[i] = f(dp[i-1]) 2. dp[i] = f(dp[i-some_random_number(1,i)])In main memory 2d, 3d, 4d all arrays are flattened and stored as 1d arrays. So flatten N.X array into N arrays of size X each and X.N array into X arrays of size N each.Take 5mins and then check if this pic helps you a bit: Even I would appreciate a response from someone who is an intermediate or expert at COA and OS concepts but till then this is the best explanation I can come up with.
•  » » » » » » » 3 years ago, # ^ |   +5 Ohhh I think now I am having a much better understanding of this thing. The picture was quite explanatory. I looked at both of my codes and realised this is actually quite sensible. When I am doing dp[i-1][j- some random number] it is like going 1 time back in 10^3 array and then it is about 10^5 places to move in the end to reach j-some random number. But in case I do dp[j-some random number][i-1] it will go back in 10^5 array which has 10^3 size sub array and this will happen arbitrary number of times so it becomes 10^8. I guess this is only what you are trying to say. Thanks for the wonderful explanation, really appreciate your effort to explain this. I have understood it.
•  » » » » » » » » 3 years ago, # ^ |   0 Yes that is what I was trying to say, I am glad it helped :)
•  » » » » » » » » » 23 months ago, # ^ |   0 bro, can you send a solution for the Elevator Rides question?
 » 3 years ago, # |   0 Thanks a lot for this! Best problems for beginner in DP and editorial is perfect for beginner to understand dp!
 » 2 years ago, # |   0 This is my code for the Two Sets II problem. My approach: dp[i] = number of ways to make the sum using the subset 1..i.

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
typedef unsigned long long ull;

int main() {
    ios_base::sync_with_stdio(false);
    cin.tie(0);
    int n; cin >> n;
    ll mod = 1e9 + 7;
    int sum = (n * (n + 1)) / 2;
    if (sum % 2) { cout << 0; exit(0); }
    sum /= 2;
    vector<ll> dp(sum + 1);
    dp[0] = 1;
    for (int i = 1; i <= n; i++) {
        for (int j = sum; j >= 0; j--) {
            if (j - i < 0) continue;
            dp[j] += dp[j - i];
            dp[j] %= mod;
        }
    }
    // total number of ways would be double since I am counting both sets
    dp[sum] /= 2;
    dp[sum] = (dp[sum] % mod + mod) % mod;
    cout << dp[sum];
}
```
•  » » 2 years ago, # ^ |   0 You can't do dp[sum] /= 2 after taking the modulo. You need to use the modular inverse for the division.
 » 2 years ago, # | ← Rev. 2 →   0 Hey, did anyone else get a runtime error (like a heap space error or something) on Coin Combinations II, or is it just my code?
 » 2 years ago, # |   0 Can someone explain why the solution for Increasing Subsequence works? Specifically, I am confused about how we can replace an element in the dp array with an element found later. Doesn't this make dp not an actual subsequence?
 » 2 years ago, # |   0 I didn't get the Removal Game solution. Can anyone help me understand why we have to take the mean at the end?
•  » » 2 years ago, # ^ |   0 As stated, score1 + score2 = sum. With our dp, we calculate max(score1 - score2). Add both of these: sum + (score1 - score2) = score1 + score2 + score1 - score2 = 2 * score1. Divide by 2 to get max score1.
 » 2 years ago, # |   0 Hey! Can someone give a formal proof for the greedy solution of 'Removing Digits'? I tried to formulate a proof, but no luck :(
 » 2 years ago, # |   0 Thanks for the great editorial. I solved some problems on my own; if I didn't get the logic for some, I read the editorial and got the idea. Some problems involved some good tricks which I learnt. Thanks a lot.
 » 2 years ago, # |   0 Two Sets II (1093): for this problem, why does the recursive solution work even if we take n instead of n-1?

```cpp
#include <bits/stdc++.h>
using namespace std;
const int MOD = 1e9 + 7;

vector<vector<int>> dp;
int n;

int solve(int current, int sum) {
    if (current < 1) return 0;
    if (sum == 0) return 1;
    if (sum < 0) return 0;
    if (dp[current][sum] != -1) return dp[current][sum];
    int ans = 0;
    ans = (ans + solve(current - 1, sum - current)) % MOD;
    ans = (ans + solve(current - 1, sum)) % MOD;
    return dp[current][sum] = ans;
}

int main() {
    cin >> n;
    int sum = (n * (n + 1)) / 2;
    if (sum & 1) { cout << 0; return 0; }
    int req = sum / 2;
    dp.resize(n + 1, vector<int>(req + 1, -1));
    cout << solve(n, req);
}
```
•  » » 2 years ago, # ^ |   0 When current = 1 and sum = 1, you go to solve(0, 0), which returns 0 because the condition current < 1 is checked before sum == 0. So your code is effectively excluding 1 from the first set instead of n.
 » 2 years ago, # | ← Rev. 2 →   0 The problem Projects (1140) can also be solved without using the compression technique.

Approach: the problem can be treated as a 0-1 knapsack optimisation with one extra constraint: the chosen events must not overlap. To handle this constraint, sort the events by their start times alone, so that we can binary search for the first event that can be performed after an event E. See the implementation below for more details.

```cpp
#include <bits/stdc++.h>
#define int long long
using namespace std;

int n;
struct event { int start, end, profit; };
bool comp(event e1, event e2) { return e1.start < e2.start; }
vector<event> events;
vector<int> dp;

int solve(int i) {
    if (i == n) return 0;
    if (dp[i] != -1) return dp[i];
    int p = events[i].profit;
    int ans = solve(i + 1); // skip event i
    event e = {events[i].end + 1, events[i].end + 1, 0};
    int ind = lower_bound(events.begin(), events.end(), e, comp) - events.begin();
    if (ind != n) ans = max(ans, p + solve(ind)); // take event i, jump to next compatible
    else ans = max(ans, p);
    return dp[i] = ans;
}

int32_t main() {
    cin >> n;
    events.resize(n);
    for (int i = 0; i < n; i++)
        cin >> events[i].start >> events[i].end >> events[i].profit;
    sort(events.begin(), events.end(), comp);
    dp.assign(n + 1, -1);
    cout << solve(0);
    return 0;
}
```
 » 2 years ago, # |   0 Why does CSES problem Set give TLE in some problems (in one or two test cases), even if my asymptotic time complexity is the same as that of the intended solution?
 » 2 years ago, # |   0 In Rectangle cutting, one can exploit symmetry and stop both the innermost loops at i/2 and j/2 respectively.
 » 2 years ago, # |   0 Hey, I am stuck on the Counting Towers problem of the DP section of the CSES problem set. Can someone please help? Thanks in advance.
•  » » 2 years ago, # ^ |   +5 I found this useful: youtube video solution
 » 2 years ago, # |   0 Code for the new problem Counting Towers.

```cpp
#include <bits/stdc++.h>
using namespace std;

const long long mod = 1e9 + 7;
long long dp[10000005][2];

int main() {
    ios_base::sync_with_stdio(false);
    cin.tie(NULL);
    cout.tie(NULL);
    int test_case = 1;
    cin >> test_case;
    while (test_case--) {
        long long n;
        cin >> n;
        dp[1][0] = 1;
        dp[1][1] = 1;
        for (int i = 2; i <= n; i++) {
            dp[i][0] = (4 * dp[i-1][0] + dp[i-1][1]) % mod;
            dp[i][1] = (dp[i-1][0] + 2 * dp[i-1][1]) % mod;
        }
        cout << (dp[n][0] + dp[n][1]) % mod << endl;
    }
}
```
•  » » 19 months ago, # ^ |   0 Thank you for your useful comment, but I'm too foolish to understand the code. :
•  » » » 18 months ago, # ^ | ← Rev. 2 →   +8 Basically dp[i] means the number of ways to build a tower of height i. The second index says whether the tile at the i-th row is split or not: 0 means the row is split (two 1x1 tiles) and 1 means a single 2x1 tile.

So dp[i][0] is the number of ways to build a tower of height i such that the top row is split (two 1x1 tiles), and dp[i][1] is the number of ways such that the top row is continuous (one 2x1 tile).

Now for the recurrence. If the top row is split (dp[i][0]), it has the following options:

- extend both split tiles at row i-1: dp[i-1][0]
- extend only the left split tile at row i-1: dp[i-1][0]
- extend only the right split tile at row i-1: dp[i-1][0]
- extend neither split tile at row i-1, adding its own: dp[i-1][0]

This gives 4*dp[i-1][0]. In the last case, it can add split tiles over the continuous tile of row i-1: dp[i-1][1]. So dp[i][0] = 4*dp[i-1][0] + dp[i-1][1].

Similarly, if the top row is continuous (dp[i][1]):

- it can extend the continuous tile at row i-1: dp[i-1][1]
- it can start a new continuous tile above row i-1: dp[i-1][1]
- it can add a continuous tile over the split tiles of row i-1: dp[i-1][0]

So dp[i][1] = 2*dp[i-1][1] + dp[i-1][0]. Hope this helps!

PS: Use an int array instead of a vector in the code, otherwise you'd get a TLE.
 » 2 years ago, # | ← Rev. 2 →   +4 It would be great if everyone could enclose their code into a spoiler, it creates unnecessary mess sometimes while navigating.
 » 2 years ago, # |   0 I was trying to solve these problems using recursive dp, I was facing an issue in the Book Shop problem, I was getting runtime error on large test cases though I feel I am using the same space as required in iterative dp. Link to my solution.
 » 2 years ago, # |   0 icecuber For Two Sets II, why won't taking it all and dividing my 2 work? Is it because of mod?
•  » » 2 years ago, # ^ |   0 yes, it's because of the mod. However for odd mod, you can divide by 2 by multiplying with 1/2 which is just (mod+1)/2.
•  » » » 2 years ago, # ^ |   0 Wow! Thank you so much :-)
•  » » » » 23 months ago, # ^ |   0 Just for information: if the mod is prime and you have to divide by x, multiply the numerator by pow(x, mod-2). Do not use the predefined pow function; create your own pow function that handles the mod.
 » 23 months ago, # | ← Rev. 3 →   0 Can someone explain this to me: vector<vector<int>> dp(n+1, vector<int>(x+1, 0)); in the Book Shop problem, in the code given by the author? I was also implementing 0-1 knapsack, but it is giving me Runtime Error on both iterative and recursive solutions. Link to recursive: https://cses.fi/paste/f6b687511890d6331a4ea4/ Link to iterative: https://cses.fi/paste/3fbfe3cdf25c866d1a4e16/ UPD: When I just replaced int dp[n+1][x+1] with this code, the code worked. Please tell me what this does?
•  » » 23 months ago, # ^ |   0 See OEIS A034999
•  » » 23 months ago, # ^ |   0 Consider the top row of a rectangle filled with n rows. There are two cases: both cells of the top row belong to one element, or to two elements. In both cases we can formulate how the numbers are calculated for n+1 rows. Spoiler:

```
/* dp[i][j] = number of towers with height i and
 * j=0: top row single element;
 * j=1: top row two elements
 *
 * dp[1][0]=1;
 * dp[1][1]=1;
 * dp[i][0]=1+dp[i-1][0]+dp[i-1][1];
 * dp[i][1]=0+dp[i-1][0]+dp[i-1][1];
 */
```
 » 23 months ago, # |   0 Bookmarked this blog. Thanks a lot :)
 » 23 months ago, # |   0 What about Elevator Rides? A hint would help. It seems to be some form of bitmask dp, but anything I implemented goes TLE :/ Here is the idea behind one of my not-working brute-force solutions:

```cpp
/* Since N=20 there are only about 1e6 different
 * possibilities to choose some persons.
 * Each such combination has a weight.
 * Use the ones with weight <= limit.
 *
 * Observations:
 * It never makes sense to send the elevator
 * while another person could still fit into it.
 * So, if we find a combination where the weight is so high
 * that we cannot put another person into the elevator,
 * then we can remove all subsets of this set from the
 * possibilities, since they would not make sense.
 *
 * step 1: find all subsets with weight <= max weight
 * step 2: remove all subsets of other sets from that list
 *         (iterating the numbers sorted by number of set bits desc???)
 * step 3: kind of Dijkstra:
 *         sort the sets by included heaviest person first,
 *         then use edges only from i to j if i < j
 */
```
•  » » 23 months ago, # ^ |   +3 My bitmask dp state was dp[mask] = {minimum elevator rides, weight of the last ride} for the set of people in mask, with the weight at most x. For every mask the people are fixed, so we find the minimum rides by adding each person of the mask individually and checking whether the existing rides are enough (the weight of the last ride stays at most x) or we need another ride to accommodate the new person.
•  » » 23 months ago, # ^ |   +3 Check pages 103-104 of the CP Handbook.
 » 22 months ago, # |   -8 Can anyone tell me why my code is giving a runtime error on some of the test cases for Coin Combinations II?

```cpp
#include <bits/stdc++.h>
using namespace std;
#define ll long long int
const int mod = 1e9 + 7;

int main() {
    ios_base::sync_with_stdio(false);
    cin.tie(NULL);
#ifndef ONLINE_JUDGE
    freopen("input.txt", "r", stdin);
    freopen("output.txt", "w", stdout);
#endif
    ll n, x;
    cin >> n >> x;
    ll a[n + 1];
    for (int i = 1; i <= n; i++) cin >> a[i];
    ll dp[n + 1][x + 1];
    for (int i = 1; i <= n; i++) dp[i][0] = 1;
    for (int i = 1; i <= n; i++) {
        for (int j = 1; j <= x; j++) {
            ll option1 = (a[i] > j) ? 0 : dp[i][j - a[i]];
            ll option2 = (i == 1) ? 0 : dp[i - 1][j];
            dp[i][j] = (option1 + option2) % mod;
        }
    }
    cout << dp[n][x];
}
```
•  » » 22 months ago, # ^ | ← Rev. 3 →   -8 Update: I found my mistake. Changing the type of the dp array from ll to int worked.
 » 20 months ago, # |   0 Coin Combinations II gives TLE even with O(n*target) time complexity. Solution link: https://cses.fi/paste/c0e3ad19fd184ec3227a11/
•  » » 20 months ago, # ^ |   0 While defining your dp vector just change it from vll dp(amount+1,0) to vi dp(amount+1,0). Using 'long long' instead of 'int' is probably why you're getting TLE
•  » » » 20 months ago, # ^ |   0 I tried it and got TLE again:https://cses.fi/paste/04b6ac464bf1ec0b228547/
•  » » » » 20 months ago, # ^ |   0 Here have a look at this : https://cses.fi/paste/262b104a22044a1a22864a/ Just changed the way you applied your mod operation and the code got AC!
•  » » » » » 20 months ago, # ^ | ← Rev. 3 →   0 Yes, got AC. Thanks! But why did my code give TLE even after changing vll to vi?
 » 20 months ago, # |   0 Can anyone give me some hints or kind of editorial to solve the problem, Counting Towers? Thanks in advance.
•  » » 20 months ago, # ^ |   +1 There are some YouTube videos explaining the approach; you can check them out. It helped me understand how to approach this kind of problem.
 » 19 months ago, # |   0 I don't know why but the recursive approach is giving me TLE for some of the problems.
•  » » 18 months ago, # ^ |   0 The CSES grader is slower than CF's, and the recursive approach has a higher constant factor than bottom-up.
 » 17 months ago, # |   0 A doubt regarding the Edit Distance problem: wouldn't it be fine if we just store dp[i][k] for k = nb and nb-1? i goes all the way from 0 to na, but k takes only nb or nb-1. I don't see the point of storing all the k from 0 to nb when we are just trying to make a equal to b and only use k and k-1. Let me know if this makes any sense... thanks!
 » 15 months ago, # |   0 For Two Sets II (1093), the article above says the time complexity is O(n^3), isn't it supposed to be O(n^2)? Am I missing something?Also, the solution seems to get accepted even if you include the nth element and not divide the final count by 2? See https://cses.fi/paste/c3ec33b10e14f1392ddcb6/
•  » » 8 months ago, # ^ |   0 Target = n*(n+1)/4, and the loops are for(i...n) and for(j...target), so the work is n * n*(n+1)/4, i.e. O(n^3).
 » 15 months ago, # |   +1 I made a YouTube playlist where I solved all the problems. Check it out here: https://www.youtube.com/watch?v=vuEWTjqf9Wo&list=PLC6tU0n3mgCL-WyIIxzNt5BEgPpquufeB&ab_channel=AnthonyNgeneGood luck.
•  » » 15 months ago, # ^ |   0 great job
 » 11 months ago, # |   0 Counting Towers?? Please provide a Counting Towers solution as well.
•  » » 11 months ago, # ^ |   0 Ohk
 » 11 months ago, # |   0 Thanks for giving editorials really it's helpful
 » 10 months ago, # |   0 While doing the Book Shop problem, creating a 2D vector of long long gives RE and creating a 2D vector of int gives AC. Does anyone know why this happens? The 2D vector may take up space up to 10^8.

RE code:

```cpp
ll n, x;
cin >> n >> x;
vll price(n + 1), pages(n + 1);
rep(i, 1, n + 1) { cin >> price[i]; }
rep(i, 1, n + 1) { cin >> pages[i]; }
vvll dp(n + 1, vll(x + 1, 0));
for (ll i = 1; i <= n; i++) {
    for (ll j = 0; j <= x; j++) {
        if (j >= price[i]) {
            dp[i][j] = max(dp[i - 1][j], pages[i] + dp[i - 1][j - price[i]]);
        } else {
            dp[i][j] = dp[i - 1][j];
        }
    }
}
cout << dp[n][x];
```

Link to full code: https://cses.fi/problemset/result/3791349/

AC code:

```cpp
int n, x;
cin >> n >> x;
vi price(n + 1), pages(n + 1);
for (int i = 1; i <= n; i++) { cin >> price[i]; }
for (int i = 1; i <= n; i++) { cin >> pages[i]; }
vector<vi> dp(n + 1, vi(x + 1, 0));
for (int i = 1; i <= n; i++) {
    for (int j = 0; j <= x; j++) {
        if (j >= price[i]) {
            dp[i][j] = max(dp[i - 1][j], pages[i] + dp[i - 1][j - price[i]]);
        } else {
            dp[i][j] = dp[i - 1][j];
        }
    }
}
cout << dp[n][x];
```

Link to full code: https://cses.fi/problemset/result/3791442/
 » 7 months ago, # |   0 How to solve Coin Combination 1 using a 2D DP table where states are index and value?
 » 5 months ago, # |   0 How to solve Dice Combinations with recursion? Please explain, someone.
 » 5 months ago, # |   0 Can someone explain to me why a greedy algorithm does not give the correct answer in the problem Rectangle Cutting (Link)? What I am doing is cutting the maximum possible square at every step. Say at the current step the size of the rectangle is a*b (a < b); then I cut a square of size a*a, after which the rectangle becomes a*(b-a), and I repeat until the rectangle is 1*1. I was not able to come up with an argument for why this does not work; it seemed pretty intuitive to me, so I just wanted an explanation proving it wrong.
•  » » 5 months ago, # ^ |   +12 Take a $5 \times 6$ rectangle. Here's what you do vs a better solution. PictureReally the better question is to ask "why would greedy be correct", not "why is greedy wrong". Greedy is usually wrong; if it is not, there is a good reason for it.
•  » » » 5 months ago, # ^ |   0 Understood. Thanks
 » 4 months ago, # | ← Rev. 2 →   0 Problem: Coin Combinations I

```cpp
if (i - c[j] >= 0) (dp[i] += dp[i - c[j]]) %= mod;                         // 1
if (i - c[j] >= 0) dp[i] = ((dp[i] % mod) + (dp[i - c[j]] % mod)) % mod;   // 2
```

If I write line 2 it gives TLE, but on writing line 1 the code is accepted. What's the difference, if both statements do the same thing?
 » 4 months ago, # |   0 Thanks, it's going to help me and others too.
 » 4 months ago, # |   0 Isn't the Removing Digits code actually O(n log n), since for each number we have to iterate over all its digits, and the number of digits grows logarithmically?
 » 3 months ago, # |   0 Coin Combinations II can be done using a 1-D dp; below I have attached my code for reference.

State: dp[x] = number of unique combinations that form sum x.

```cpp
#include <bits/stdc++.h>
using namespace std;

const int mod = 1e9 + 7;
long long int dp[1'000'007];

int main() {
    int n, sum;
    cin >> n >> sum;
    int coin[n + 1];
    for (int i = 1; i <= n; i++) cin >> coin[i];
    dp[0] = 1;
    for (int i = 1; i <= n; i++) {
        for (int j = 1; j <= sum; j++) {
            if (j - coin[i] >= 0) (dp[j] += dp[j - coin[i]]) %= mod;
        }
    }
    cout << dp[sum];
    return 0;
}
```

 » 3 months ago, # |   0 I have noticed that Removing Digits gives AC for a greedy solution too, but I am not able to prove it correct. Can someone give me a proof?
 » 2 months ago, # |   0 I think this can be a much simpler implementation for the Array Description problem, following the same idea you presented.
 » 2 months ago, # | ← Rev. 2 →   0 Hello everyone! I have a question about Dice Combinations. Why is this code giving a runtime error for big numbers like 1000000?

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
#define MAXN 1000001
#define MOD 1000000007

ll DP[MAXN], n;

ll compute(ll left) {
    if (DP[left] != 0) {
        return DP[left];
    }
    for (int i = 1; i <= 6; i++) {
        if (left - i >= 0) {
            DP[left] += compute(left - i);
            DP[left] %= MOD;
        }
    }
    return DP[left];
}

int main() {
    cin >> n;
    memset(DP, 0, sizeof(DP));
    DP[0] = 1;
    cout << compute(n) << endl;
}
```