indy256's blog

By indy256, 4 years ago, In English

Several recent problems on Codeforces concerned dynamic programming optimization techniques.

The following table summarizes methods known to me.

Name: Convex Hull Optimization1
  Original recurrence: dp[i] = min_{j < i} {dp[j] + b[j] * a[i]}
  Sufficient condition of applicability: b[j] ≥ b[j + 1], optionally a[i] ≤ a[i + 1]
  Original complexity: O(n^2); optimized complexity: O(n)
  Links: 1, p1

Name: Convex Hull Optimization2
  Original recurrence: dp[i][j] = min_{k < j} {dp[i - 1][k] + b[k] * a[j]}
  Sufficient condition of applicability: b[k] ≥ b[k + 1], optionally a[j] ≤ a[j + 1]
  Original complexity: O(kn^2); optimized complexity: O(kn)
  Links: 1, p1, p2

Name: Divide and Conquer Optimization
  Original recurrence: dp[i][j] = min_{k < j} {dp[i - 1][k] + C[k][j]}
  Sufficient condition of applicability: A[i][j] ≤ A[i][j + 1]
  Original complexity: O(kn^2); optimized complexity: O(kn log n)
  Links: 1, p1

Name: Knuth Optimization
  Original recurrence: dp[i][j] = min_{i < k < j} {dp[i][k] + dp[k][j]} + C[i][j]
  Sufficient condition of applicability: A[i][j - 1] ≤ A[i][j] ≤ A[i + 1][j]
  Original complexity: O(n^3); optimized complexity: O(n^2)
  Links: 1, 2, p1

Notes:

  • A[i][j] — the smallest k that gives the optimal answer, for example in dp[i][j] = dp[i - 1][k] + C[k][j]
  • C[i][j] — some given cost function
  • We can generalize a bit in the following way: dp[i] = min_{j < i} {F[j] + b[j] * a[i]}, where F[j] is computed from dp[j] in constant time (see the code sketch after these notes).
  • It looks like Convex Hull Optimization2 is a special case of Divide and Conquer Optimization.
  • It is claimed (in the references) that Knuth Optimization is applicable if C[i][j] satisfies the following 2 conditions:
      • quadrangle inequality: C[a][c] + C[b][d] ≤ C[a][d] + C[b][c] for a ≤ b ≤ c ≤ d
      • monotonicity: C[b][c] ≤ C[a][d] for a ≤ b ≤ c ≤ d
  • It is claimed (in the references) that the recurrence dp[j] = min_{i < j} {dp[i] + C[i][j]} can be solved in O(n log n) (and even O(n)) if C[i][j] satisfies the quadrangle inequality. WJMZBMR described how to solve some case of this problem.
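
A minimal code sketch of Convex Hull Optimization1 in the form above (dp[i] = min_{j < i} {dp[j] + b[j] * a[i]}), assuming b[j] ≥ b[j + 1] and a[i] ≤ a[i + 1]; the function names, the equal-slope guard and the toy data in main are illustrative additions, not part of the original post.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Convex Hull Optimization1 sketch:
//   dp[i] = min_{j < i} (dp[j] + b[j] * a[i]),  dp[0] = 0,
// assuming b[j] >= b[j + 1] (slopes non-increasing) and a[i] <= a[i + 1]
// (query points non-decreasing), as in the table above.
struct Line { long long k, m; };                       // y = k * x + m

long long eval(const Line& l, long long x) { return l.k * x + l.m; }

// true if the middle line l2 is never strictly below both l1 and l3
// (slopes satisfy l1.k >= l2.k >= l3.k); cross-multiplied in __int128 to avoid division
bool useless(const Line& l1, const Line& l2, const Line& l3) {
    return (__int128)(l2.m - l1.m) * (l2.k - l3.k) >= (__int128)(l3.m - l2.m) * (l1.k - l2.k);
}

// a, b are 0-indexed arrays of length n; returns dp[0..n-1]
vector<long long> solveCHT(const vector<long long>& a, const vector<long long>& b) {
    int n = a.size();
    vector<long long> dp(n, 0);
    vector<Line> hull = {{b[0], dp[0]}};               // line contributed by j = 0
    int ptr = 0;                                       // pointer for the monotone queries
    for (int i = 1; i < n; i++) {
        ptr = min(ptr, (int)hull.size() - 1);
        while (ptr + 1 < (int)hull.size() && eval(hull[ptr + 1], a[i]) <= eval(hull[ptr], a[i]))
            ptr++;
        dp[i] = eval(hull[ptr], a[i]);
        Line cur = {b[i], dp[i]};                      // line contributed by j = i
        if (cur.k == hull.back().k) {
            // equal slopes: keep only the line with the smaller constant term
            if (cur.m >= hull.back().m) continue;
            hull.pop_back();
        }
        while (hull.size() >= 2 && useless(hull[hull.size() - 2], hull.back(), cur))
            hull.pop_back();
        hull.push_back(cur);
    }
    return dp;
}

int main() {
    // toy data: b non-increasing, a non-decreasing (a[0] is never queried by the recurrence)
    vector<long long> a = {0, 2, 3, 5};
    vector<long long> b = {5, 3, 2, 1};
    for (long long v : solveCHT(a, b)) cout << v << " ";   // brute force gives 0 10 15 25
    cout << "\n";
}
```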

Open questions:

  1. Are there any other optimization techniques?
  2. What is the sufficient condition of applying Divide and Conquer Optimization in terms of function C[i][j]? Answered

References:

  • "Efficient dynamic programming using quadrangle inequalities" by F. Frances Yao. find
  • "Speed-Up in Dynamic Programming" by F. Frances Yao. find
  • "The Least Weight Subsequence Problem" by D. S. Hirschberg, L. L. Larmore. find
  • "Dynamic programming with convexity, concavity and sparsity" by Zvi Galil, Kunsoo Park. find
  • "A Linear-Time Algorithm for Concave One-Dimensional Dynamic Programming" by Zvi Galil, Kunsoo Park. find

Please, share your knowledge and links on the topic.

 
 
 
 

»
4 years ago, # |

Here is another way to optimize some 1D1D dynamic programming problems that I know.

Suppose that an old choice can only get worse over time compared to a newer choice (this is quite common in such problems).

Then suppose that at the current moment we are dealing with dp[i], and we have candidate choices a[0] < a[1] < ... < a[k - 1] < a[k]. At the current moment a[j] should be better than a[j + 1]; otherwise a[j] will never be better than a[j + 1] again, so it is useless.

We can use a deque to store all the a[j].

Also, let us denote by D(a, b) the smallest i such that choice b becomes better than choice a.

If D(a[j], a[j + 1]) > D(a[j + 1], a[j + 2]), then a[j + 1] is also useless, because by the time it overtakes a[j] it has already been overtaken by a[j + 2].

So we also keep D(a[j], a[j + 1]) < D(a[j + 1], a[j + 2]); then an overtake can only happen at the front of the deque.

So we can maintain this deque quickly, and if we can compute D(a, b) in O(1), the whole thing runs in O(n).
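
A skeleton of this deque maintenance, assuming candidates are indices 0..n-1 and each index becomes a candidate once its dp value is known; value() and cross() (the D above) are placeholder declarations to be filled in for a concrete problem, so this is a template rather than a complete program.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Placeholders to be supplied for the concrete problem (not defined here):
//   value(j, i) -- the value dp[i] gets if candidate j is chosen (may read dp[j])
//   cross(a, b) -- the D(a, b) from the comment: the smallest i at which
//                  candidate b becomes better than candidate a (assumed O(1))
int n;
vector<long long> dp;
long long value(int j, int i);
int cross(int a, int b);

void solve1D1D() {
    dp.assign(n, 0);                 // assumed base case: dp[0] = 0, candidate 0 available
    deque<int> q = {0};              // useful candidates only, oldest at the front
    for (int i = 1; i < n; i++) {
        // the front candidate is dropped once the next one has overtaken it
        while (q.size() >= 2 && cross(q[0], q[1]) <= i) q.pop_front();
        dp[i] = value(q.front(), i);
        // insert i as a candidate; a back candidate is useless if it would be
        // overtaken by i no later than it overtakes its own predecessor
        while (q.size() >= 2 && cross(q[q.size() - 2], q.back()) >= cross(q.back(), i))
            q.pop_back();
        q.push_back(i);
    }
}
```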

»
4 years ago, # |

For question 2: The sufficient condition is: C[a][d] + C[b][c] ≥ C[a][c] + C[b][d] where a < b < c < d.

  • »
    »
    4 years ago, # ^ |

    Is this the quadrangle inequality? For all i ≤ j, w[i, j] + w[i + 1, j + 1] ≤ w[i + 1, j] + w[i, j + 1] — and are these two inequalities equivalent apart from the ≥ / ≤?
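
    Writing the two forms side by side (with C in place of w; this restatement is an addition, not from the thread), they are the same inequality, and the adjacent-index form implies the general one by summing second differences:

    \[ C_{a,c} + C_{b,d} \le C_{a,d} + C_{b,c} \qquad (a \le b \le c \le d) \quad \text{(general form, as in the parent comment)} \]
    \[ C_{i,j} + C_{i+1,j+1} \le C_{i+1,j} + C_{i,j+1} \qquad (i \le j) \quad \text{(adjacent-index form)} \]
    \[ (C_{a,d} + C_{b,c}) - (C_{a,c} + C_{b,d}) = \sum_{i=a}^{b-1} \sum_{j=c}^{d-1} \bigl( C_{i+1,j} + C_{i,j+1} - C_{i,j} - C_{i+1,j+1} \bigr) \ge 0. \]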

    • »
      »
      »
      4 months ago, # ^ |

      There are both concave and convex quadrangle inequalities: the concave one is for minimization problems, while the convex one is for maximization problems; refer to Yao '82.

»
4 years ago, # |

There is one more dynamic programming optimization: 101E - Candies and Stones (editorial)

»
4 years ago, # |

You have put the problem "B. Cats Transport" under "Convex Hull Optimization1"; actually it belongs under "Convex Hull Optimization2".

»
4 years ago, # |

At this moment it's the most useful topic of the year. Exactly in the middle: June 30th, 2013.

»
4 years ago, # |

This one seemed like a nice DP with optimization to me: https://www.hackerrank.com/contests/monthly/challenges/alien-languages

»
4 years ago, # |

The problem mentioned in the article (Breaking Strings) is the traditional "Optimal Binary Search Tree" problem.

It can be solved by simple DP in O(N^3) and, using Knuth's optimization, in O(N^2). But it can also be solved in O(N log N) — http://poj.org/problem?id=1738 (same problem but with bigger test cases). (I don't know how to solve it that way; I hear the algorithm uses a meldable heap.)
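
A minimal sketch of the O(N^2) Knuth-optimized interval DP in the form from the table, dp[i][j] = min_{i < k < j} {dp[i][k] + dp[k][j]} + C[i][j]; the base case and the toy cost in main are assumptions chosen to mimic the string-breaking setting, not taken from the comment.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Knuth Optimization sketch for dp[i][j] = min_{i < k < j} (dp[i][k] + dp[k][j]) + C[i][j].
// Assumption: C is an (n+1) x (n+1) table satisfying the quadrangle inequality and
// monotonicity conditions from the notes in the post; indices run over 0..n.
vector<vector<long long>> knuthDP(const vector<vector<long long>>& C) {
    int n = (int)C.size() - 1;
    const long long INF = LLONG_MAX / 4;
    vector<vector<long long>> dp(n + 1, vector<long long>(n + 1, 0));
    vector<vector<int>> opt(n + 1, vector<int>(n + 1, 0));
    for (int i = 0; i + 1 <= n; i++) {
        dp[i][i + 1] = 0;          // base case is problem-specific; here: no inner split, no cost
        opt[i][i + 1] = i;         // sentinel so the bounds below are always defined
    }
    for (int len = 2; len <= n; len++) {
        for (int i = 0; i + len <= n; i++) {
            int j = i + len;
            dp[i][j] = INF;
            // Knuth's bound: opt[i][j - 1] <= opt[i][j] <= opt[i + 1][j]
            int lo = max(i + 1, opt[i][j - 1]);
            int hi = min(j - 1, opt[i + 1][j]);
            for (int k = lo; k <= hi; k++) {
                long long cur = dp[i][k] + dp[k][j];
                if (cur < dp[i][j]) { dp[i][j] = cur; opt[i][j] = k; }
            }
            dp[i][j] += C[i][j];   // total work is O(n^2) because the [lo, hi] windows telescope
        }
    }
    return dp;
}

int main() {
    // toy cost as in the string-breaking problem: C[i][j] = total weight of pieces i+1..j
    vector<long long> w = {3, 1, 4, 1, 5};
    int n = (int)w.size();
    vector<long long> pre(n + 1, 0);
    for (int i = 0; i < n; i++) pre[i + 1] = pre[i] + w[i];
    vector<vector<long long>> C(n + 1, vector<long long>(n + 1, 0));
    for (int i = 0; i <= n; i++)
        for (int j = i; j <= n; j++) C[i][j] = pre[j] - pre[i];
    cout << knuthDP(C)[0][n] << "\n";   // minimum total cutting cost for the toy weights
}
```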

»
4 years ago, # |

Convex Hull Optimization 1 Problems:

Convex Hull Optimization 2 Problems:

Convex Hull Optimization 3 Problems (no conditions on the a[] and b[] arrays):

»
4 years ago, # |

For some reason I cannot open the links with Firefox because they go over the Top Rated table.

»
4 years ago, # |

One more problem where Knuth Optimization is used:
Andrew Stankevich Contest 10, Problem C.
BTW, does anybody know how to insert a direct link to a problem from gyms?

»
3 years ago, # |

I need some problems to solve on Divide and Conquer Optimization. Where can I find them? An online judge or available test data would be helpful.

»
3 years ago, # |

Can anyone provide me a good editorial for DP with bitmask?

»
3 years ago, # |

Have matrix-exponentiation optimizations been included here?

»
3 years ago, # |

Can the matrix chain multiplication problem also be optimized by Knuth optimization? If not, then why?

  • »
    »
    3 years ago, # ^ |

    Quote from the first of the references above:

    The monotonicity property for the division points does not hold for the matrix multiplication chain problem...

    Consider the matrices M1,M2,M3,M4 with dimensions 2x3, 3x2, 2x10, and 10x1, respectively. As can be easily verified, the proper order to compute M1M2M3 is to parenthesize it as (M1M2)M3, while the optimal computation of M1M2M3M4 corresponds to M1(M2(M3M4)).

    The second reference gives an O(n^2) dynamic programming solution, based on some properties of the matrix chain multiplication problem.

    There is also an algorithm by Hu and Shing.

»
2 years ago, # |

What are some recent USACO questions that use this technique or variations of it?

»
22 months ago, # |

Can this problem be solved using convex hull optimization?

You are given a sequence A of N positive integers. Let's define the "value of a splitting" of the sequence into K blocks as the sum of the maximums of the K blocks. For a given K, find the minimal possible value over all splittings.

N <= 10^5

K <= 100

Input:       Output: 
5 2          6
1 2 3 4 5    
  • »
    »
    22 months ago, # ^ |

    I don't think so, but I guess it can be solved by Divide And Conquer optimization.

»
21 months ago, # |

Could you elaborate a little more on the "Convex Hull Optimization2" and other sections, with clearer notation?

For example, you have "k" — a constant in O(kn^2). So the first dimension is of length K and the second dimension is of length N?

I think it would be clearer if you wrote dp[n], dp[k][n], ... instead of dp[i], dp[i][j].

Best regards,

»
10 months ago, # |

I don't get why there is an O(log N) depth of recursion in the Divide and Conquer optimization.
Can someone explain it?

  • »
    »
    10 months ago, # ^ |

    Because each time the range is halved.

    • »
      »
      »
      10 months ago, # ^ |

      Oh, that was quite trivial.

      I get it now: we spend O(N) in total computing the costs at each depth (2N, to be specific, at the last level of the recursion tree).

      Therefore O(N log N) is the cost of the whole computation in the divide and conquer scheme for the relaxation.
      Thanks
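
A minimal sketch of the divide and conquer scheme being discussed, for dp[i][j] = min_{k < j} {dp[i - 1][k] + C[k][j]}, assuming the optimal split point for a row is non-decreasing (the condition from the table); the cost function below is a placeholder chosen only so the snippet is self-contained.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Divide and Conquer Optimization sketch for
//   dp[i][j] = min_{k < j} (dp[i - 1][k] + C[k][j]),
// assuming the optimal split point for row i is non-decreasing in j
// (the A[i][j] <= A[i][j + 1] condition from the table).
// cost() is a placeholder: squared segment length, which satisfies the condition.
long long cost(int k, int j) { return (long long)(j - k) * (j - k); }

const long long INF = LLONG_MAX / 4;
vector<long long> prevRow, curRow;

// fill curRow[lo..hi], knowing its optimal split points lie in [optLo, optHi]
void compute(int lo, int hi, int optLo, int optHi) {
    if (lo > hi) return;
    int mid = (lo + hi) / 2;
    pair<long long, int> best = {INF, optLo};
    for (int k = optLo; k <= min(mid - 1, optHi); k++)
        best = min(best, {prevRow[k] + cost(k, mid), k});
    curRow[mid] = best.first;
    // the range [lo, hi] halves at every call and the candidate windows at one
    // depth overlap only at their endpoints, so one row costs O(n log n)
    compute(lo, mid - 1, optLo, best.second);
    compute(mid + 1, hi, best.second, optHi);
}

// split n items into exactly `groups` contiguous groups, minimizing the total cost
long long solveDC(int n, int groups) {
    prevRow.assign(n + 1, INF);
    prevRow[0] = 0;                                  // dp[0][0] = 0, dp[0][j > 0] = infinity
    for (int i = 1; i <= groups; i++) {
        curRow.assign(n + 1, INF);
        compute(1, n, 0, n - 1);
        swap(prevRow, curRow);
    }
    return prevRow[n];
}

int main() {
    // 10 items, 3 groups, placeholder cost: the best split is 4 + 3 + 3 -> 16 + 9 + 9 = 34
    cout << solveDC(10, 3) << "\n";
}
```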

»
10 months ago, # |

Hello, I have a doubt; can anyone help?

In the divide and conquer optimization, can we always say that it is applicable when we have to minimize the sum of costs of k contiguous segments (such that their union is the whole array and they are pairwise disjoint), where the cost of a segment increases as its length increases?

I feel we can, and we can prove it by contradiction. Thanks :)

»
9 months ago, # |

For convex hull optimizations, with only b[j] ≥ b[j + 1] but WITHOUT a[i] ≤ a[i + 1],

I don't think the complexity can be improved to O(n), but only to O(n log n). Is there any example showing that I am wrong?

»
8 months ago, # |

ZOJ is currently down. The problem "Breaking Strings" (Knuth opt.) can be found here

»
8 months ago, # |

Please, someone tell me why in convex hull optimization we should have b[j] ≥ b[j + 1] and a[i] ≤ a[i + 1].
In APIO'10 Commando the DP equation is
Dp[i] = -2 * a * pre_sum[j] * pre_sum[i] + pre_sum[j]^2 + Dp[j] - b * pre_sum[j] + a * pre_sum[i]^2 + b * pre_sum[i] + c
so we can use the convex hull trick with the line y = A * X + B, where
A = -2 * a * pre_sum[j]
X = pre_sum[i]
B = pre_sum[j]^2 + Dp[j] - b * pre_sum[j]
Z = a * pre_sum[i]^2 + b * pre_sum[i] + c
and then we can add Z to Dp[i], because Z does not depend on j.
The question is: since a is always negative (according to the problem statement) and pre_sum[i], pre_sum[j] are always increasing, we conclude that b[j] ≤ b[j + 1] and a[i] ≤ a[i + 1].
I've coded it with the convex hull trick and got AC, and the official solution uses the convex hull trick.
Someone please explain to me why I'm wrong or why this happens.
Thanks in advance.

  • »
    »
    7 months ago, # ^ |

    If b[j] ≥ b[j + 1], then the technique calculates the minimum over the lines; if b[j] ≤ b[j + 1], then it calculates the maximum over the lines, as this problem requires.

»
4 months ago, # |

Is it necessary for the recurrence relation to be of the specific form in the table for Knuth's optimization to be applicable? For example, take this problem. The editorial mentions Knuth Optimization as a solution, but the recurrence is not of the form in the table. Rather, it is similar to the Divide-and-Conquer one, i.e. dp[i][j] = min_{k < j} {dp[i - 1][k] + C[k][j]}. Does anyone know how/why Knuth's optimization is applicable here?