
Jakube's blog

By Jakube, history, 6 years ago, In English

I solved 986F - Oppa Funcan Style Remastered today and faced the following problem.

You had to solve a linear diophantine equation:

a·x + b·y = c

This is of course quite easy. With the extended Euclidean algorithm you can find a solution:

a·x0 + b·y0 = g

and multiplying by c / g gives a final solution:

a·(x0·c / g) + b·(y0·c / g) = c

And all solutions can be written as

a·(x0·c / g + k·b / g) + b·(y0·c / g - k·a / g) = c
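
In code, this looks roughly as follows (a minimal sketch with a standard recursive extended Euclid; the scaling by c / g is the step that will become problematic):

// Standard recursive extended Euclid: finds x, y with a*x + b*y = gcd(a, b).
long long extgcd(long long a, long long b, long long &x, long long &y) {
  if (b == 0) { x = 1; y = 0; return a; }
  long long x1, y1;
  long long g = extgcd(b, a % b, x1, y1);
  x = y1;
  y = x1 - (a / b) * y1;
  return g;
}

// Naive solve of a*x + b*y = c: mathematically correct, but the products
// x * (c / g) and y * (c / g) can overflow long long.
bool solve_naive(long long a, long long b, long long c, long long &x, long long &y) {
  long long g = extgcd(a, b, x, y);
  if (c % g != 0) return false;
  x *= c / g;
  y *= c / g;
  return true;
}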

The problem is that this can easily give an integer overflow, e.g. when a, b and c are as large as 10^18. To solve the problem I just took my BigInteger implementation. But this is of course quite ugly (400 additional lines of code just for one simple calculation).

How can we avoid the overflow? I want a solution for a·x + b·y = c with |x|, |y| ≤ 10^18.

I found the following code in the solution of tourist, but I'm unable to wrap my head around it. Can somebody explain the logic of this formula to me?

g = extgcd(a, b, x, y);
if (c % g != 0) {
  return false;
}
long long dx = c / a;
c -= dx * a;
long long dy = c / b;
c -= dy * b;
x = dx + mulmod(x, c / g, b);
y = dy + mulmod(y, c / g, a);
g = abs(g);

»
6 years ago, # |

Exactly the same question, but I gave up and just copied it.

»
6 years ago, # |

Note: Oversized LaTeX doesn't work for me right now, so some of the formulas are ugly.

The lines with dx, dy reduce to the case where |c| < min(|a|, |b|). (If |c| ≥ |a|, we can subtract/add (depending on signs) a from c and increment/decrement x by one in the end and similarly for |b| and y.)
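
For a tiny example, take a = 7, b = 5, c = 100: the code sets dx = 14 (so c becomes 100 - 14·7 = 2) and dy = 0, and indeed |2| < min(7, 5).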

Note that his mulmod function computes a·b mod c with sign, so if a·b is negative, so is the result. This part effectively needs a 128-bit multiplication, which is simulated in the while (b > 0) loop, but that's simpler than a full-blown big-integer implementation.
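
For concreteness, such a signed mulmod could look like this (my reconstruction of the idea, not his exact code; it assumes the modulus m is positive and all values fit in a long long):

// Computes a * b mod m without overflow, keeping the sign of a * b,
// by double-and-add instead of a direct 128-bit product.
long long mulmod(long long a, long long b, long long m) {
  long long sign = 1;
  if (a < 0) { a = -a; sign = -sign; }
  if (b < 0) { b = -b; sign = -sign; }
  a %= m;
  long long res = 0;
  while (b > 0) {
    if (b & 1) res = (res + a) % m;
    a = (a + a) % m;  // doubling stays below 2*m, so no long long overflow
    b >>= 1;
  }
  return sign * res;
}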

In the last part, he computes X := x0·c / g, X' := X mod b and Y := y0·c / g, Y' := Y mod a (X and Y are not computed in his code, but I'll use them in my equations). Since the code only keeps the remainders X' and Y', we don't get equality right away. Instead we get the congruence

X'a + Y'b ≡ c (mod ab)

If Y' = 0, then a divides Y and therefore a divides c = Xa + Yb; as |c| < |a|, this forces c = 0. This implies 0 = X = X'. The case X' = 0 implies c = 0 and Y' = 0 in similar fashion. The cases with a = 0 or b = 0 are treated separately elsewhere in his code, so we can ignore them here.

Note that if X·a and Y·b have the same sign (not sure if this can happen actually), we get the inequality

|X| ≤ |X·a| + |Y·b| = |X·a + Y·b| = |c| < |b|

hence X = X' (since |X| < |b|, reducing mod b changes nothing) and by symmetry Y = Y', so we get equality, not only the congruence.

The case where X·a and Y·b have opposite sign is more tricky. W.l.o.g. let X·a > 0 > Y·b and assume C := X'a + Y'b > c. From the congruence, we get C ≥ c + |ab|. Note that

c - Y'b ≥ -|c| - Y'b ≥ 0

as |c| < |b| ≤ |Y'b| and Y'b < 0 by sign preservation in the computation of Y'. Therefore

|X'a| = X'a = C - Y'b ≥ c - Y'b + |ab| ≥ |ab|

Hence |X'| ≥ |b|, a contradiction. The case C < c is similar and leads to |Y'| > |a|, a contradiction. Therefore C = c, which proves correctness.
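
Putting the pieces together, here is a self-contained sketch of the whole overflow-free solve for a, b > 0 (my reconstruction, reusing the extgcd from the post and the mulmod sketch above; the a = 0 or b = 0 cases would still need the separate handling mentioned earlier):

#include <cassert>

// Solves a*x + b*y = c for a, b > 0 without intermediate overflow,
// following the structure of the code in the blog post.
bool solve(long long a, long long b, long long c, long long &x, long long &y) {
  long long g = extgcd(a, b, x, y);   // a*x + b*y = g
  if (c % g != 0) return false;
  long long dx = c / a; c -= dx * a;  // now |c| < a
  long long dy = c / b; c -= dy * b;  // now |c| < min(a, b)
  x = dx + mulmod(x, c / g, b);       // x0 * (c / g) reduced mod b, sign kept
  y = dy + mulmod(y, c / g, a);       // y0 * (c / g) reduced mod a, sign kept
  return true;
}

int main() {
  long long x, y;
  assert(solve(6, 10, 104, x, y));    // gcd(6, 10) = 2 divides 104
  assert(6 * x + 10 * y == 104);      // e.g. x = 19, y = -1
}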

  • »
    »
    4 years ago, # ^ |

    Thanks for the proof. Where is the 128-bit multiplication done in his solution?