
ouuan's blog

By ouuan, history, 4 years ago, In English

Many people think they know how to solve the min-cost max-flow problem, but most of them only know how to solve it with a pseudo-polynomial-time algorithm (although it usually runs very fast). In this blog, min_25 provided a counter-case generator on which most people's MCMF algorithms run in $$$O(2^{\frac n 2}n^2\log n)$$$.

the counter case

I tried to find learning materials on polynomial minimum-cost flow algorithms, but I could only find papers like A Faster Strongly Polynomial Minimum Cost Flow Algorithm and Theoretical Improvements in Algorithmic Efficiency for Network Flow Problems, and I can't understand them very well.

Are there any other learning materials besides the papers? Thanks in advance.

UPD. I've learned an algorithm with a time complexity of $$$O(m^2\log U\log m)$$$ ($$$U$$$ refers to the maximum capacity). For Chinese readers and those who want to see my code, I wrote a blog; for those who want English learning materials, the one mentioned in Laakeri's comment is good.


»
4 years ago, # |

cp-algorithms has an article for Dinic's.

  • »
    »
    4 years ago, # ^ |

    It's an algorithm for maximum flow without cost.

»
4 years ago, # |

I am a bit confused here. According to CP-Algorithms, if we use the Edmonds-Karp algorithm with Bellman-Ford in order to find the lowest-cost path between the source and the sink vertices, then the complexity is $$$O(n^2m^2)$$$, where $$$n$$$ is the number of vertices and $$$m$$$ is the number of edges. Where are you getting this $$$O(2^{n/2}n^2\log n)$$$ complexity from?

  • »
    »
    4 years ago, # ^ |

    Have you tested it on the counter case I mentioned in this blog?

    • »
      »
      »
      4 years ago, # ^ |

      Unfortunately, my main language is English, so I did not understand the blog post. What is the format of the test case? Does every line represent an edge?

      • »
        »
        »
        »
        4 years ago, # ^ |

        The format for each line is $$$(from, to, capacity, cost)$$$.

  • »
    »
    4 years ago, # ^ |

    The runtime of that algorithm is $$$O(f \cdot n m)$$$ where $$$f$$$ is the number of augmenting paths. The article incorrectly claims that $$$f \in O(n m)$$$. This is wrong, as we're considering shortest paths with edge weights. For unweighted graphs, any path has length at most $$$n$$$ (and you have at most $$$m$$$ augmenting paths of every length, hence the $$$n m$$$ bound on $$$f$$$ in the unweighted case), but this is obviously not true for weighted graphs (where the length of a path is the sum of its weights).

    • »
      »
      »
      4 years ago, # ^ |

      Oh, that is very interesting. Just curious, but is there any way to get the complexity of $$$f$$$ in terms of $$$n$$$ and $$$m$$$? Also, where does the $$$O(2^{n/2}n^2\log n)$$$ complexity come from?

      • »
        »
        »
        »
        4 years ago, # ^ |

You can read this paper. It's related to $$$n$$$ because it's a specific construction.

»
4 years ago, # |

While I've only seen it described in papers, I've found the successive approximation algorithm by Goldberg and Tarjan to be a natural generalization of the push-relabel algorithm (it also uses a height function and push and relabel operations). The paper on it, Solving Minimum-Cost Flow Problems by Successive Approximation, should be easier to read than the two you linked, especially if you're familiar with push-relabel. The algorithm runs in $$$O(n^3 \log(n C))$$$ where $$$C$$$ is the maximum edge cost (so it's polynomial in the input size but not strongly polynomial), and it is fast in practice.

IIRC you can do capacity scaling with the successive shortest path algorithm (linked on CP-Algorithms above) to get something like $$$O(m^2 \log n \log U)$$$ where $$$U$$$ is the largest edge capacity. This would also be polynomial but not strongly polynomial. This algorithm might be easier to understand, as you can first study both components (the capacity scaling and the successive shortest paths) separately.

  • »
    »
    4 years ago, # ^ |

    Thank you very much! I watched a video and I think now I understand the basic idea of capacity scaling.

  • »
    »
    4 years ago, # ^ |

    Won't scaling for min cost max flow possibly create negative cycles?

    • »
      »
      »
      4 years ago, # ^ |

      Sure, so I guess it's not that simple. At the start of each scaling phase, let's add edges that now get a positive capacity one by one and at each point check whether it is included in a negative cycle and update the potentials so that we have no negative edges. (And only look for augmenting paths after adding all edges). Note that the newly added edge is the only edge with negative reduced weight, so this check boils down to running Dijkstra's algorithm to find a path from one endpoint of the edge to the other one (similarly to how we find the augmenting paths).

      Equivalently, we could saturate all newly added edges which have negative reduced weight. This creates some excesses and deficits, which we resolve by finding augmenting paths. As we saturated all negative edges, we only deal with edges of non-negative costs, so we can keep using Dijkstra's. (This involves at most $$$m$$$ augmenting paths, so the runtime remains the same.)

»
4 years ago, # |

https://web.stanford.edu/class/cs361b/files/cs361b-notes.pdf — this material has an in-depth explanation of a capacity-scaling min-cost flow algorithm with $$$O(m^2 \log U)$$$ time complexity. I have implemented the algorithm here.

  • »
    »
    4 years ago, # ^ |

Could you please explain the fixPotentials function (especially how it avoids overflow, and the range of the potentials after fixing)? Or is it in the notes (the pdf)? So far I have only read the first 19 pages.

I tried to implement the capacity scaling algorithm mentioned in Theorem 3.1 but failed, because there are many places which need a "sufficiently large number", the relationships between these numbers are really confusing, and overflow may also happen. I read your implementation and found the fixPotentials function, which may help.

    • »
      »
      »
      4 years ago, # ^ |

You are right. The issue with the algorithm is that there is no proof of how the values of the potential function behave, and in naive implementations they might grow exponentially during the algorithm.

The fixPotentials function is intended to fix this issue. I wrote it 3 years ago, so I don't remember the details anymore, but my documentation (which is written in Finnish) says that it works in $$$O(m \log n)$$$ time, and that after running it, the maximum absolute value of a potential will be $$$nC$$$, where $$$C$$$ is the maximum absolute value of a cost in the input.

Here is my old description of the fixPotentials function: Select a vertex $$$v$$$ whose potential has not been fixed yet. Then compute the shortest paths (with the old potential function) from $$$v$$$ using only the vertices that are not fixed yet. For each vertex $$$u$$$ reached from $$$v$$$, set the new potential $$$p(u)$$$ to the length of the shortest path from $$$v$$$ plus the greatest fixed potential before this stage. Repeat this process until all vertices have been fixed.

      • »
        »
        »
        »
        4 years ago, # ^ |

I have just implemented it, and I found another way to "fix potentials" (in fact it's mentioned in the notes): add an extra node $$$s$$$ to the residual graph with zero-cost edges from $$$s$$$ to each node, and for each node $$$v$$$, assign $$$p(v) = \mathrm{dist}(s, v)$$$.

        And I also found negative reduced costs in Dijkstra in your implementation.

        My implementation is here.

        • »
          »
          »
          »
          »
          4 years ago, # ^ |

Thanks. I implemented your approach and got an even faster solution, thanks to using an indexed heap instead of a priority queue.