Several recent problems on Codeforces concerned dynamic programming optimization techniques.
The following table summarizes methods known to me.
Name | Original Recurrence | Sufficient Condition of Applicability | Original Complexity | Optimized Complexity | Links
---|---|---|---|---|---
Convex Hull Optimization1 | dp[i] = min_{j < i}{dp[j] + b[j] * a[i]} | b[j] ≥ b[j + 1] (optionally a[i] ≤ a[i + 1]) | O(n^2) | O(n) | 1 2 3 p1
Convex Hull Optimization2 | dp[i][j] = min_{k < j}{dp[i - 1][k] + b[k] * a[j]} | b[k] ≥ b[k + 1] | O(kn^2) | O(kn) | 1 p1 p2
Divide and Conquer Optimization | dp[i][j] = min_{k < j}{dp[i - 1][k] + C[k][j]} | A[i][j] ≤ A[i][j + 1] | O(kn^2) | O(kn log n) | 1 p1
Knuth Optimization | dp[i][j] = min_{i < k < j}{dp[i][k] + dp[k][j]} + C[i][j] | A[i][j - 1] ≤ A[i][j] ≤ A[i + 1][j] | O(n^3) | O(n^2) | 1 2 p1
Notes:
- A[i][j] — the smallest k that gives the optimal answer, for example in dp[i][j] = dp[i - 1][k] + C[k][j]
- C[i][j] — some given cost function
- We can generalize a bit in the following way: dp[i] = min_{j < i}{F[j] + b[j] * a[i]}, where F[j] can be computed from dp[j] in constant time.
- It looks like Convex Hull Optimization2 is a special case of Divide and Conquer Optimization.
- It is claimed (in the references) that Knuth Optimization is applicable if C[i][j] satisfies the following 2 conditions:
  - quadrangle inequality: C[a][c] + C[b][d] ≤ C[a][d] + C[b][c] for all a ≤ b ≤ c ≤ d
  - monotonicity: C[b][c] ≤ C[a][d] for all a ≤ b ≤ c ≤ d
- It is claimed (in the references) that the recurrence dp[j] = min_{i < j}{dp[i] + C[i][j]} can be solved in O(n log n) (and even O(n)) if C[i][j] satisfies the quadrangle inequality. WJMZBMR described how to solve some cases of this problem.
Open questions:
- Are there any other optimization techniques?
- What is the sufficient condition of applying Divide and Conquer Optimization in terms of function C[i][j]? Answered
References:
- "Efficient dynamic programming using quadrangle inequalities" by F. Frances Yao.
- "Speed-Up in Dynamic Programming" by F. Frances Yao.
- "The Least Weight Subsequence Problem" by D. S. Hirschberg, L. L. Larmore.
- "Dynamic programming with convexity, concavity and sparsity" by Zvi Galil, Kunsoo Park.
- "A Linear-Time Algorithm for Concave One-Dimensional Dynamic Programming" by Zvi Galil, Kunsoo Park.
Please, share your knowledge and links on the topic.
Here is another way to optimize some 1D/1D dynamic programming problems that I know.
Suppose that an old choice only becomes worse compared to a newer choice as time goes on (this is quite common in such problems).
Then suppose that at the current time we are dealing with dp_i, and we have candidate choices a_0 < a_1 < ... < a_{k-1} < a_k. At the current time, a_i should be better than a_{i+1}; otherwise a_i will never again be better than a_{i+1}, so it is useless.
We can use a deque to store all the a_i.
Let us denote by D(a, b) the smallest time i at which choice b becomes better than choice a.
If D(a_i, a_{i+1}) > D(a_{i+1}, a_{i+2}), then a_{i+1} is also useless: by the time it overtakes a_i, it has already been overtaken by a_{i+2}.
So we maintain D(a_i, a_{i+1}) < D(a_{i+1}, a_{i+2}) along the deque; then an overtake can only happen at the front of the deque.
Thus the deque can be maintained quickly, and if D(a, b) can be computed in O(1), the whole thing runs in O(n).
Could you please give some example problems?
Please give a sample problem, or explain the technique through a problem.
For question 2: The sufficient condition is: C[a][d] + C[b][c] ≥ C[a][c] + C[b][d] where a < b < c < d.
Is that the quadrangle inequality?
∀ i ≤ j: w[i][j] + w[i+1][j+1] ≤ w[i+1][j] + w[i][j+1] — and are these two inequalities equivalent except for the direction (≥ vs ≤)?
There are both concave and convex quadrangle inequalities: the concave one is for minimization problems, while the convex one is for maximization problems. Refer to Yao '82.
How do you prove that if this condition is met, then A[i][j]<= A[i][j+1]?
This is an intuitive proof. While it is not fully rigorous in the equality case, it shows how the inequality can be deduced.
(based on https://codeforces.com/blog/entry/8192#comment-491779):
Assume that for some i, j we have A[i][j] > A[i][j+1]. Let x = A[i][j] and y = A[i][j+1] (so x > y).
For dp[i][j], the value at x is at least as good as at y: d[i-1][x] + C[x][j] ≤ d[i-1][y] + C[y][j].
For dp[i][j+1], the value at y is at least as good as at x: d[i-1][y] + C[y][j+1] ≤ d[i-1][x] + C[x][j+1].
Adding the two inequalities cancels d[i-1][x] and d[i-1][y]:
C[x][j] + C[y][j+1] ≤ C[y][j] + C[x][j+1]
With strict inequality this contradicts the quadrangle inequality (applied to y < x ≤ j < j+1). In the equality case one can check that it does not affect the correctness of the algorithm.
There is one more optimization of dynamic programming: 101E - Candies and Stones (editorial)
More Problem Collection.
You have put problem "B. Cats Transport" under "Convex Hull Optimization1"; actually it belongs under "Convex Hull Optimization2".
fixed
As of this moment it's the most useful topic of this year. Exactly in the middle: June 30th, 2013.
This one seemed like a nice DP with optimization to me: https://www.hackerrank.com/contests/monthly/challenges/alien-languages
The problem mentioned in the article (Breaking Strings) is the traditional "Optimal Binary Search Tree" problem.
It can be solved by simple DP in O(N^3), or by using Knuth's optimization in O(N^2). But it can even be solved in O(N log N) — http://poj.org/problem?id=1738 (same problem but with bigger test cases). (I don't know how to solve it; I hear the algorithm uses a meldable heap.)
Convex Hull Optimization 1 Problems:
APIO 2010 task Commando
TRAKA
ACQUIRE
SkyScrapers (+Data Structures)
Convex Hull Optimization 2 Problems:
Convex Hull Optimization 3 Problems (no conditions on the a[] and b[] arrays):
GOODG
BOI 2012 Day 2 Balls
Cow School
Solution-Video
GOODG can be solved with Type 1
EDIT: I explain that below.
How? I noticed that, in this problem, b[j] follows no order and a[i] can be either decreasing or increasing, depending on how the equation is modeled. I was able to solve it using the fully dynamic variant, but I can't see how to apply the "type 1" optimization.
Can you add a link to your code? I tried to implement the dynamic variant a few weeks ago but there were so many bugs in my code :( Maybe yours can help :/
Yeah, I'm sorry about saying this and not explaining. Actually I should give credit because ItsYanBitches first realized the fully dynamic approach was not necessary. Here's my code.
Maybe the most natural approach for this problem is to try to solve the following recurrence (or something similar), where f(0) = 0 and d_0 = 0:
Well, this recurrence really requires a fully dynamic approach. We'll find one that doesn't. Instead of trying to solve the problem for each prefix, let's try to solve it for each suffix. We'll set g(n + 1) = 0, a_0 = d_0 = 0 and compute
which can be written as
Now we notice that the function inside the max is actually a line with angular coefficient j and constant term a_j + g(j) (both constant in i), evaluated at -d_i. Apply the convex hull trick there (the standard one) and we're done.
Notifying possibly interested people after a long delay (sorry about that again): fofao_funk, samier_aldroubi and synxazox. And sorry in advance for any mistakes; the idea for the solution is there.
Why the downvotes? Is it wrong?
New link for Commando:
http://www.spoj.com/problems/APIO10A/
For some reason I cannot open the links with firefox because they go over the Top Rated table.
Try to zoom out, pressing Ctrl + -
One more problem where Knuth Optimization is used:
Andrew Stankevich Contest 10, Problem C.
BTW, does anybody know how to insert a direct link to a problem from gyms?
I need some problems to solve on Divide and Conquer Optimization. Where can I find them? An online judge / testdata available would be helpful.
Check this one : Guardians of the Lunatics
Learnt Divide and Conquer Optimization just from there. :P That is why I'm asking for more problems to practice. :D
Is this the best complexity for this problem? Can't we do any better? Can't we somehow turn the logL needed into a constant?
We can, using that opt[i-1][j] <= opt[i][j] <= opt[i][j+1]. The key is to see that the opt function is monotone in both arguments. With that observation, we don't need the binary search.
Check out my submission.
Can anyone provide a good editorial for DP with bitmasks?
Have matrix-exponent optimizations been included here?
wtf do you mean by matrix-expo optimization?
Can the matrix chain multiplication problem also be optimized by Knuth optimization? If not, do you know why?
Quote from the first of the references above:
The second reference gives an O(n^2) dynamic programming solution, based on some properties of the matrix chain multiplication problem.
There is also an algorithm by Hu and Shing.
Link to the Hu and Shing algorithm?
Here is a link to a 1981 version of the thesis. The original was published in two parts in 1982 and 1984.
http://i.stanford.edu/pub/cstr/reports/cs/tr/81/875/CS-TR-81-875.pdf
However, I doubt that this will be used in competitive programming.
What are some recent USACO questions that use this technique or variations of it?
Can this problem be solved using convex hull optimization?
You are given a sequence A of N positive integers. Let's define the "value of a splitting" of the sequence into K blocks as the sum of the maximums in each of the K blocks. For the given K, find the minimal possible value of a splitting.
N ≤ 10^5
K ≤ 100
I don't think so, but I guess it can be solved by Divide And Conquer optimization.
Divide and Conquer optimization doesn't work here, since the monotonicity of the argmin doesn't hold. Consider e.g. [2, 3, 1, 5]: the optimal partition is [2] [3 1 5], but when you remove the 5 it becomes [2 3] [1].
Could you elaborate a little more on the "Convex Hull Optimization2" and other sections, with clearer notation?
For example, you have k — a constant in O(kn^2). So the first dimension is of length K and the second dimension is of length N?
I think it would be clearer if you wrote dp[n], dp[k][n], ... instead of dp[i], dp[i][j].
Best regards,
I don't get why there is O(log N) depth of recursion in Divide and Conquer optimization.
Can someone explain it?
Because each time the range is halved.
Oh, that was very trivial.
I get it now: we spend O(N) in total for computing the cost at each depth (2N, to be specific, at the last level of the recursion tree),
and therefore O(N log N) is the cost of the whole computation in the divide and conquer scheme for the relaxation.
Thanks
Hello, I have a doubt; can anyone help?
In the divide and conquer optimization, can we always say that it is applicable whenever we have to minimize the total cost of k continuous segments (such that their union is the whole array and their pairwise intersection is the empty set), where the cost of a segment increases as its length increases?
I feel we can, and that we can prove it by contradiction. Thanks :)
For convex hull optimization with only b[j] ≥ b[j + 1] but WITHOUT a[i] ≤ a[i + 1],
I don't think the complexity can be improved to O(n), only to O(n log n). Is there any example that shows I am wrong?
I think you're right.
ZOJ is currently dead. For the problem "Breaking Strings" (Knuth opt.), please find it here.
fixed
Please, someone tell me why in convex hull optimization we should have b[j] ≥ b[j + 1] and a[i] ≤ a[i + 1].
In APIO '10 Commando the DP equation is
Dp[i] = -2 * a * pre_sum[j] * pre_sum[i] + pre_sum[j]^2 + Dp[j] - b * pre_sum[j] + a * pre_sum[i]^2 + b * pre_sum[i] + c
We can use the convex hull trick, so the line is y = A * X + B with
A = -2 * a * pre_sum[j]
X = pre_sum[i]
B = pre_sum[j]^2 + Dp[j] - b * pre_sum[j]
Z = a * pre_sum[i]^2 + b * pre_sum[i] + c
and then we can add Z to Dp[i], because Z has no relation to j.
The question is: since a is always negative (according to the problem statement) and pre_sum[i], pre_sum[j] are always increasing, we conclude that b[j] ≤ b[j + 1] and a[i] ≤ a[i + 1].
I've coded it with the convex hull trick and got AC, and the official solution also uses the convex hull trick.
Someone please explain to me why I'm wrong, or why this is happening.
Thanks in advance.
If b[j] >= b[j + 1], then the technique computes the minimum value of the lines; if b[j] <= b[j + 1], then it computes the maximum value of the lines, as this problem requires.
Is it necessary for the recurrence relation to be of the specific form in the table for Knuth's optimization to be applicable? For example, take this problem. The editorial mentions Knuth Optimization as a solution, but the recurrence is not of the form in the table. Rather, it is similar to the Divide-and-Conquer one, i.e. dp[i][j] = min_{k < j}{dp[i - 1][k] + C[k][j]}. Does anyone know how/why Knuth's optimization is applicable here?
It is also worthwhile to mention the DP Optimization given here http://codeforces.com/blog/entry/49691 in this post.
Can we have the same dp optimizations with dp[i][j] = max (dp....)?
Yes.
In that case (dp[i][j] = max(dp...)), is the condition still unchanged, i.e. A[i][j] ≤ A[i][j + 1]? Is that true? Thanks!
Yes.
Can someone please give me intuition on the proof of A[i][j - 1] ≤ A[i][j] ≤ A[i + 1][j] given for the Knuth optimization of the optimal binary search tree?
You can try LARMY on spoj for Divide and Conquer Optimization.
Fresh problem. Can be solved with CHT.
What will be the actual complexity of Knuth optimization on an O(n^2 * k) DP solution? Will it be O(n^2) or O(n*k)? Here n = number of elements and k = number of partitions.
I would like to politely mention Zlobober, rng_58, Errichto, Swistakk here. It would be really helpful if you kindly answered my query.
It can only remove n from the complexity because it (more or less) gets rid of iterating over all elements of the interval (up to n elements).
Thanks a lot for replying. So what you are saying is: an O(n^2 * k) solution will turn into an O(n * k) solution? I got TLE in problem SPOJ NKLEAVES (n = 10^5, k = 10) using Knuth optimization, but passed comfortably using divide & conquer optimization in O(n k log n). That's why I am curious whether Knuth optimization reduces an n factor or a k factor from the original O(n^2 * k) solution, since an O(n * k) solution should definitely have passed. Would you please have a look?
Can you show the code? I'm not sure, but I think that Knuth optimization can be used here to speed up the O(n^3·k) solution with states dp[left][right][k]. The improved complexity would be O(n^2·k).
How does it help in any way xd?
He asked if N or K will be removed from the complexity, I answered. What is wrong here?
You said "at most n", which brings 0 bits of information since k <= n.
It is O(n^2), as the complexity upper bound is proven by summing the running time of the DP value calculation over the diagonals of the n × k DP matrix. There are n + k - 1 diagonals, and on each of them the running time is at most O(n) due to the given inequalities, so the total running time is O((n + k)·n) = O(n^2).
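Written out for the classical n × n interval version, the per-diagonal bound telescopes. With the work for cell (i, j) bounded by A[i+1][j] - A[i][j-1] + 1 (a consequence of A[i][j-1] ≤ A[i][j] ≤ A[i+1][j]), summing along the diagonal j - i = d gives:

```latex
\sum_{i=1}^{n-d} \bigl( A[i+1][i+d] - A[i][i+d-1] + 1 \bigr)
  = (n-d) + A[n-d+1][n] - A[1][d] \le 2n
```

since the positive term of one summand cancels the negative term of the next. Summing this O(n) bound over the O(n) diagonals gives O(n^2).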
Does someone know where I can find an article about Lagrange optimization? (I know that this can be used to reduce one state of the dp by performing a binary search on a constant that is added to every transition, until the realized number of transitions used to reach the final state becomes the desired one.)
I know this can be treated as off-topic by many of us, but what about solving DP states which are direct arithmetic (specifically sums/differences) of previous states?
Say the DP state is 1D, as in the Fibonacci series (dp[i] = dp[i-1] + dp[i-2]); we already know of a solution in O(k^3 * log n) using matrix binary exponentiation, where k means that dp[i] depends on dp[i-1], dp[i-2], ..., dp[i-k].
Is there a similar optimisation for 2D states? Say the state is as follows:
I am finding it hard to reduce it to anything less than O(n * m) for finding dp[n][m].
I guess you should ask it after the Long Challenge.
Yeah, I agree; I was also trying to solve it.
I found a good tutorial on 2D-DP Matrix Exponentiation.
https://www.youtube.com/watch?v=OrH2ah4ylv4 — a video by Algorithms Live on the convex hull trick. The link given for convex hull optimization is dead now; please update.
Use this archive: https://web.archive.org/web/20181030143808/http://wcipeg.com/wiki/Convex_hull_trick
I know it's necroposting, but still: can anyone give a link to a tutorial or something (in English) for the trick used in the IOI '16 Aliens problem?
Here's a similar problem with a nice description of the algorithm used
You can find the link here.
What is the meaning of the table, and how do we apply these methods? Take row 1 of table 1, for example — what do we do?
Can someone help me prove that the sufficient conditions for Divide and Conquer Optimization are satisfied in this problem?
I was able to solve it using this technique with some intuition, but wasn't able to formally prove it.
Please update the link for Convex Hull Optimization1; that link is now dead.
https://web.archive.org/web/20181030143808/http://wcipeg.com/wiki/Convex_hull_trick
I solved Problem 1, Redistricting, from the Platinum division of the USACO 2019 January contest in O(n) using the deque optimization described in this comment, while the official solution is O(n log n).
First, for each suffix, we compute the number of Guernseys minus the number of Holsteins and store it in the suffix array. We define dp[i] to be the minimum number of Guernsey-majority or tied districts among the first i pastures. So dp[i] = min(f(k, i)) for k < i, where f(k, i) = dp[k] + (suffix[k] - suffix[i + 1] >= 0). For two indices a < b, if dp[a] > dp[b], then f(a, i) >= f(b, i) for any i > b. The same is also true if dp[a] == dp[b] && suffix[a] >= suffix[b]. In all other cases, f(a, i) <= f(b, i) for all i > b. We'll define bad(a, b) = dp[a] != dp[b] ? dp[a] > dp[b] : suffix[a] >= suffix[b], which is true if a will never be the optimal value for k in the future.
We can maintain a deque q such that !bad(u, v) for u < v. Before we insert a new index i, we pop indices off the back of the queue while bad(q.back(), i), since those indices will never be better than i. After popping indices off the front of the deque which are more than K away from i, the optimal k value is always at the front. Both the loops popping from the front and the back are amortized O(1), since there are a total of N indices, each of which can be popped at most once. This makes the solution O(n).
orz indy256 meta-san. Optimizing the optimization technique itself?
Cp-algorithms has added official translation of Divide and Conquer Optimization, thought it might be useful to add. https://cp-algorithms.com/dynamic_programming/divide-and-conquer-dp.html
There also exists the Aliens optimization (link1, link2).
Alien's optimization? Seriously? Can humans use it?
I don't know how to use Knuth optimization. Can someone help me please? <3
iShibly