Dekacc's blog

By Dekacc, history, 8 years ago, In English

Recently I've been reading about matrix multiplication algorithms, more specifically Strassen's algorithm for fast matrix multiplication, which runs in O(n^2.81) time as opposed to the standard O(n^3) time for the naive approach. But this speed-up comes at a cost: for example, we need to pad the matrix with zeroes if n isn't a power of 2, and even though it is asymptotically faster, it carries a bigger constant factor since it does many more additions. It is also more memory intensive, and it is certainly harder to implement correctly and efficiently.
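For reference, here is a minimal sketch of how the Strassen recursion looks (not from the post itself). It assumes square matrices whose size is a power of two, falls back to the naive cubic product below an arbitrary cutoff of 64, and uses the classic seven products M1..M7; the function names are my own.

```cpp
#include <vector>
using Matrix = std::vector<std::vector<long long>>;

Matrix add(const Matrix& A, const Matrix& B) {
    int n = A.size();
    Matrix C(n, std::vector<long long>(n));
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++) C[i][j] = A[i][j] + B[i][j];
    return C;
}

Matrix sub(const Matrix& A, const Matrix& B) {
    int n = A.size();
    Matrix C(n, std::vector<long long>(n));
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++) C[i][j] = A[i][j] - B[i][j];
    return C;
}

// Standard O(n^3) product, used below the recursion cutoff.
Matrix naive(const Matrix& A, const Matrix& B) {
    int n = A.size();
    Matrix C(n, std::vector<long long>(n, 0));
    for (int i = 0; i < n; i++)
        for (int k = 0; k < n; k++)
            for (int j = 0; j < n; j++) C[i][j] += A[i][k] * B[k][j];
    return C;
}

// Strassen's recursion; n must be a power of two here.
// Seven half-size products instead of eight gives the n^log2(7) ~ n^2.81 bound.
Matrix strassen(const Matrix& A, const Matrix& B) {
    int n = A.size();
    if (n <= 64) return naive(A, B);  // cutoff chosen arbitrarily
    int h = n / 2;
    auto quad = [&](const Matrix& M, int r, int c) {
        Matrix Q(h, std::vector<long long>(h));
        for (int i = 0; i < h; i++)
            for (int j = 0; j < h; j++) Q[i][j] = M[r + i][c + j];
        return Q;
    };
    Matrix A11 = quad(A, 0, 0), A12 = quad(A, 0, h), A21 = quad(A, h, 0), A22 = quad(A, h, h);
    Matrix B11 = quad(B, 0, 0), B12 = quad(B, 0, h), B21 = quad(B, h, 0), B22 = quad(B, h, h);
    Matrix M1 = strassen(add(A11, A22), add(B11, B22));
    Matrix M2 = strassen(add(A21, A22), B11);
    Matrix M3 = strassen(A11, sub(B12, B22));
    Matrix M4 = strassen(A22, sub(B21, B11));
    Matrix M5 = strassen(add(A11, A12), B22);
    Matrix M6 = strassen(sub(A21, A11), add(B11, B12));
    Matrix M7 = strassen(sub(A12, A22), add(B21, B22));
    Matrix C(n, std::vector<long long>(n));
    for (int i = 0; i < h; i++)
        for (int j = 0; j < h; j++) {
            C[i][j]         = M1[i][j] + M4[i][j] - M5[i][j] + M7[i][j];
            C[i][j + h]     = M3[i][j] + M5[i][j];
            C[i + h][j]     = M2[i][j] + M4[i][j];
            C[i + h][j + h] = M1[i][j] - M2[i][j] + M3[i][j] + M6[i][j];
        }
    return C;
}
```

The temporaries for M1..M7 and the quadrant copies are where the extra memory goes, and the many add/sub passes are where the constant factor comes from.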

So my question is: Is the time complexity improvement worth all of these drawbacks in a competitive setting? Are there any problems that are unsolvable with standard matrix multiplication, but are solvable with fast matrix multiplication? And what method do all the best competitors use when they need to multiply 2 matrices?

  • Vote: I like it
  • +46
  • Vote: I do not like it

»
8 years ago, # |
  Vote: I like it +51 Vote: I do not like it

I have never heard of anyone using something faster than O(n^3) to get AC.

»
8 years ago, # |
  Vote: I like it 0 Vote: I do not like it

There are practical matrix sizes where a good implementation of Strassen's algorithm beats the naive algorithm. I don't think I've ever seen a competition task that required this, but I've encountered situations in practice when Strassen helped. As far as algorithms go, Strassen isn't that complicated and the overhead certainly isn't that bad. The newer algorithms I've seen (that are asymptotically faster than Strassen but with even bigger constant factors) probably aren't worth the effort in practice.

»
8 years ago, # |
  Vote: I like it 0 Vote: I do not like it

Strassen's method is actually pretty helpful for research purposes, where you have huge matrices to work with and simulations take days to complete. That is when the exponent of 2.81 outperforms 3.

But in competitive programming I don't think it's possible to work with such huge data within 1 second, or maybe 10 seconds, so there isn't room to exploit this difference.

And I've always seen people work with the traditional O(n^3) matrix multiplication :)

»
8 years ago, # |
  Vote: I like it +30 Vote: I do not like it

Not very related, but it might be useful: once I came across a problem where, given three NxN matrices A, B and C, you needed to check whether A*B = C. Doing this the naive way in O(N^3) would of course get a TLE, so you needed a randomized algorithm called Freivalds' algorithm. The quick idea is to generate a random vector v of size Nx1 and multiply both sides by it (the equality should still hold): A*B*v = C*v. Each side can now be computed in O(N^2) as matrix-vector products, and you compare the resulting vectors. If they differ, the answer is definitely "not equal"; if they are the same, it could potentially be wrong (a false positive), so you iterate K times, or until you find a difference. The probability of a false positive is at most 2^-K, so the whole check runs in O(K*N^2).

Wikipedia link: https://en.wikipedia.org/wiki/Freivalds%27_algorithm
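The check described above can be sketched as follows (my own sketch, not from the comment). It uses random 0/1 entries for v, for which a single round misses a difference with probability at most 1/2; the function names are mine.

```cpp
#include <vector>
#include <random>
using Matrix = std::vector<std::vector<long long>>;
using Vec = std::vector<long long>;

// O(n^2) matrix-vector product.
Vec mul(const Matrix& M, const Vec& v) {
    int n = M.size();
    Vec r(n, 0);
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++) r[i] += M[i][j] * v[j];
    return r;
}

// Freivalds' check: is A*B == C (probably)?
// Each round picks a random 0/1 vector v and compares A*(B*v) with C*v.
// A "false" answer is always correct; a "true" answer is wrong with
// probability at most 2^-k. Total cost: O(k * n^2).
bool freivalds(const Matrix& A, const Matrix& B, const Matrix& C,
               int k, std::mt19937& rng) {
    int n = A.size();
    std::uniform_int_distribution<int> bit(0, 1);
    for (int round = 0; round < k; round++) {
        Vec v(n);
        for (auto& x : v) x = bit(rng);
        Vec left = mul(A, mul(B, v));  // (A*B)*v via two O(n^2) products
        Vec right = mul(C, v);
        if (left != right) return false;  // witness found: definitely A*B != C
    }
    return true;  // no witness in k rounds: probably equal
}
```

Note that the bracketing A*(B*v) is what keeps each round at O(N^2); computing (A*B)*v directly would bring back the O(N^3) product you were trying to avoid.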