Link: https://github.com/stjepang/snippets

Many problems require standard algorithms like network flows or suffix arrays. Whenever I need such an algorithm, I just copy-paste the same well-tested implementation.

This is my personal collection of algorithms and data structures, which I have used over the last few years on hundreds of problems and at two ACM-ICPC World Finals. My favorites are FFT, suffix array, and convex hull.

I hope they will inspire you to write your own library or serve as a learning resource.

Enjoy! Let me know if you have any feedback or questions. :)

Great Job, thanks a lot.

Your code is really smart and well-written. Thanks for sharing! :)

Just a brief suggestion from my side: it would be nice if you included the complexities of the functions. (For example, I had never thought of counting primes below N in any way other than with a sieve. Knowing the complexity of your code may help me read further into it and find something new. Since the space complexity was mentioned, I knew your method was something different, but a complexity analysis would help.)

Also, knowing whether 0-based or 1-based indexing is used would help us.

BTW, nice short templates. Also, they don't require many macros for loops or typedefs, which I liked the most.

To be honest, I have no idea what the complexity of that function is :)

The only thing I can tell you is that you can count primes up to 10^12 in around 1 second.

Regarding indexing, 0-based indexing is used everywhere.

I'll go through the comments in the code and try to make them more detailed. Thanks for the suggestion!

Edit: updated.

Looks like it is the Meissel-Lehmer algorithm from the wiki.

Wolfram gives its complexity:

https://en.wikipedia.org/wiki/Prime-counting_function http://mathworld.wolfram.com/PrimeCountingFunction.html

Not really, it's just an optimized version of Legendre's formula.

You can check out Problem 10 from Project Euler. The method is explained in its forum thread, and the complexity is about n^(3/4).
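For reference, the n^(3/4) idea from that thread can be sketched as follows. This is a hedged reconstruction of the general method, not the repo's actual code, and `count_primes` is an illustrative name: maintain S(v) = count of integers in [2, v] that survive sieving, but only at the O(sqrt(n)) "key values" v = n / k.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>
using namespace std;

// Counts primes up to n in roughly O(n^(3/4)) time and O(sqrt(n)) space.
// For each prime p we remove numbers whose smallest prime factor is p,
// updating S(v) only at the key values v = n / k.
int64_t count_primes(int64_t n) {
  if (n < 2) return 0;
  int64_t r = 1;
  while ((r + 1) * (r + 1) <= n) r++;          // r = floor(sqrt(n))
  vector<int64_t> small(r + 1), large(r + 1);  // small[v] = S(v), large[k] = S(n/k)
  for (int64_t v = 1; v <= r; v++) small[v] = v - 1;
  for (int64_t k = 1; k <= r; k++) large[k] = n / k - 1;
  for (int64_t p = 2; p <= r; p++) {
    if (small[p] == small[p - 1]) continue;    // p was sieved out: not prime
    int64_t sp = small[p - 1];                 // number of primes below p
    int64_t p2 = p * p;
    for (int64_t k = 1; k <= r && n / k >= p2; k++) {
      int64_t vp = n / k / p;                  // (n/k)/p == n/(k*p)
      large[k] -= (vp <= r ? small[vp] : large[n / vp]) - sp;
    }
    for (int64_t v = r; v >= p2; v--)
      small[v] -= small[v / p] - sp;
  }
  return large[1];                             // S(n) = number of primes <= n
}
```

The two update loops read only values not yet touched for the current prime, which is why the order (large values first, then small values downward) matters.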

What exactly does this code do? How do you define the maximum amount of circulation in the graph? I believe O(VE) would be too good a time complexity for such simple code if it could compute min-cost max-flow. (Though in practice it could be really fast.)

Suppose you have a directed weighted graph. All capacities are 1, while weights are the costs of edges.

Let's say the maximum number of (edge-wise) disjoint cycles you can mark in the graph is F. That's the circulation flow.

Now choose a set of F disjoint cycles such that the sum of costs of all their edges is minimum. This minimum cost is what the algorithm computes.

Yeah, O(VE) is definitely too good :) However, when I solve problems and look at the algorithm's runtime, in practice it behaves as roughly O(VE). I'll change the comment to remove this confusion.

Edit: Hopefully the comment about time complexity is clearer now.

Thanks for the clarifications.

In programming contests it is good to know the worst-case time complexity of an algorithm, because the worst case could be included in the tests. Here is a test generator that generates a graph with 51 vertices and 196 edges on which your code takes 21 seconds to run. In general it behaves exponentially in the argument of genBadGraph, and therefore exponentially in the size of the graph. The test is based on figure 2 in this paper.

However, this test might be somewhat irrelevant to programming contests, since usually the capacities will be small or the structure of the graph will not allow constructions like this.

Ah, but this is easy to fix. Instead of always augmenting cycles by one, we can augment by the minimum capacity on the cycle. I pushed a better version to the repo.
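Augmenting by the bottleneck capacity instead of one at a time looks like this in a classic negative-cycle-cancelling setting. This is a minimal sketch of the textbook algorithm under assumed names, not the repo's actual implementation:

```cpp
#include <cassert>
#include <climits>
#include <vector>
using namespace std;

struct Edge { int to, cap, cost; };
vector<Edge> edges;          // edges[i ^ 1] is the reverse of edges[i]
vector<vector<int>> adj;

void add_edge(int u, int v, int cap, int cost) {
  adj[u].push_back((int)edges.size()); edges.push_back({v, cap, cost});
  adj[v].push_back((int)edges.size()); edges.push_back({u, 0, -cost});
}

// Bellman-Ford over the residual graph (all distances start at 0, which
// acts as a virtual source). Returns one negative cycle as a list of edge
// indices, or an empty list if none exists.
vector<int> find_negative_cycle(int n) {
  vector<long long> dist(n, 0);
  vector<int> pre(n, -1);    // index of the edge used to reach each node
  int last = -1;
  for (int it = 0; it < n; it++) {
    last = -1;
    for (int u = 0; u < n; u++)
      for (int id : adj[u]) {
        const Edge& e = edges[id];
        if (e.cap > 0 && dist[u] + e.cost < dist[e.to]) {
          dist[e.to] = dist[u] + e.cost;
          pre[e.to] = id;
          last = e.to;
        }
      }
    if (last == -1) return {};  // no relaxation => no negative cycle
  }
  // Walk predecessor pointers until we revisit a node; that node lies
  // on a negative cycle of the predecessor graph.
  vector<int> seen(n, 0), cycle;
  int v = last;
  while (!seen[v]) { seen[v] = 1; v = edges[pre[v] ^ 1].to; }
  for (int u = v;;) {
    cycle.push_back(pre[u]);
    u = edges[pre[u] ^ 1].to;
    if (u == v) break;
  }
  return cycle;
}

long long min_cost_circulation(int n) {
  long long cost = 0;
  vector<int> cyc;
  while (!(cyc = find_negative_cycle(n)).empty()) {
    int push = INT_MAX;                       // bottleneck on the cycle
    for (int id : cyc) push = min(push, edges[id].cap);
    for (int id : cyc) {
      edges[id].cap -= push;
      edges[id ^ 1].cap += push;
      cost += (long long)edges[id].cost * push;
    }
  }
  return cost;
}
```

Pushing the bottleneck `push` units at once is exactly what avoids the exponential behavior: each cancellation saturates at least one edge of the cycle instead of nibbling one unit at a time.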

Diff: https://github.com/stjepang/snippets/commit/4dc97bceb57a9edd0c4ebe9cef921e77c1d13ac8

Now it takes 0.002 seconds on your test :) Also, this implementation holds the 3rd position on this problem: http://www.spoj.com/ranks/PHU09K/

I apologize for my confusing "rough" complexity estimate. Shouldn't have mentioned O(VE) at all.

What I wanted to say is: I've successfully solved problems where V <= 1000 and E <= 1000 using this algorithm.

Oh, I thought I was clever with that test, but it seems a trivial test would have been sufficient to make your old code TLE :). I tried to modify the test but could not get it to break your new solution. The same test case broke my cycle-cancelling algorithm even though it always augmented by the maximum capacity, so it seems you use some useful heuristics.

Yeah, on some problems with circulations it's crazy difficult to get AC even if you have a correct algorithm with good time complexity.

One of the past ACM World Finals had a problem with circulations. If you want AC, you must put a lot of effort into heuristics/optimizations; otherwise you get TLE. Pretty frustrating. :)

(My implementation passes on that problem.)

If you're curious, I'll try to dig that problem out of the archives.

Edit: This is the problem.

Please check the comment for the "run" function in mcmf_dijkstra.cpp. It looks like this function returns

`{total_cost, total_flow}`

instead of `{total_flow, total_cost}`.

Thanks! Fixed.