I solved 986F - Oppa Funcan Style Remastered today and faced the following problem.

You had to solve a linear Diophantine equation:

*a*·*x* + *b*·*y* = *c*

This is of course quite easy, with the extended Euclidean algorithm you can find a solution:

*a*·*x*_{0} + *b*·*y*_{0} = *g*

and multiplying by *c*/*g* gives a particular solution:

*a*·(*x*_{0}·*c*/*g*) + *b*·(*y*_{0}·*c*/*g*) = *c*
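For reference, the extended Euclidean step can be sketched like this (a standard recursive implementation, my own sketch rather than code from the problem setters):

```cpp
// Returns g = gcd(a, b) and sets x, y so that a*x + b*y = g.
long long extgcd(long long a, long long b, long long &x, long long &y) {
    if (b == 0) {
        x = 1;
        y = 0;
        return a;
    }
    long long x1, y1;
    long long g = extgcd(b, a % b, x1, y1);
    // b*x1 + (a % b)*y1 = g  and  a % b = a - (a/b)*b, so:
    x = y1;
    y = x1 - (a / b) * y1;
    return g;
}
```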

And all solutions can be written as

*a*·(*x*_{0}·*c*/*g* + *k*·*b*/*g*) + *b*·(*y*_{0}·*c*/*g* - *k*·*a*/*g*) = *c*

The problem is that this can easily produce an integer overflow, e.g. when *a*, *b* and *c* are as large as 10^{18}. To solve the problem I just used my BigInteger implementation, but that is of course quite ugly (400 additional lines of code for one simple calculation).

How can we avoid the overflow? I want a solution for *a*·*x* + *b*·*y* = *c* with |*x*|, |*y*| ≤ 10^{18}.
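To make the overflow concrete, here is the naive approach written out (with `extgcd` as a standard recursive implementation, my own sketch); the marked multiplications are exactly where 64 bits are not enough:

```cpp
long long extgcd(long long a, long long b, long long &x, long long &y) {
    if (b == 0) { x = 1; y = 0; return a; }
    long long x1, y1;
    long long g = extgcd(b, a % b, x1, y1);
    x = y1;
    y = x1 - (a / b) * y1;
    return g;
}

// Naive construction of one solution of a*x + b*y = c.
// WARNING: x0 * (c / g) can exceed 64 bits for a, b, c near 10^18,
// even when a reduced solution would fit -- this is the overflow
// the question is about.
bool solve_naive(long long a, long long b, long long c,
                 long long &x, long long &y) {
    long long x0, y0;
    long long g = extgcd(a, b, x0, y0);
    if (c % g != 0) return false;  // no solution unless g divides c
    x = x0 * (c / g);  // may overflow
    y = y0 * (c / g);  // may overflow
    return true;
}
```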

I found the following code in tourist's solution, but I'm unable to wrap my head around it. Can somebody explain the logic of this formula to me?

```
g = extgcd(a, b, x, y);       // a*x + b*y = g
if (c % g != 0) {
return false;                 // no solution unless g divides c
}
// Reduce to |c| < min(|a|, |b|) by taking out whole multiples of a and b.
long long dx = c / a;
c -= dx * a;
long long dy = c / b;
c -= dy * b;
// Overflow-free multiplication: mulmod computes a*b mod m, keeping the sign.
x = dx + mulmod(x, c / g, b);
y = dy + mulmod(y, c / g, a);
g = abs(g);
```

Exactly the same question, but I gave up and just copied it.

Swistakk or Xellos may help.

Note: Oversized LaTeX doesn't work for me right now, so some of the formulas are ugly.

The lines with `dx, dy` reduce to the case where |*c*| < min(|*a*|, |*b*|). (If |*c*| ≥ |*a*|, we can subtract/add (depending on signs) *a* from *c* and increment/decrement *x* by one in the end, and similarly for |*b*| and *y*.)

Note that his `mulmod` function computes *a*·*b* mod *c* with sign, so if *a*·*b* is negative, so is the result. This part does need 128-bit multiplication, done in the `while (b > 0)` loop, but it's simpler than a full-blown big-integer implementation.

In the last part, he computes *X* := *x*_{0}·*c*/*g*, *X*' := *X* mod *b* and *Y* := *y*_{0}·*c*/*g*, *Y*' := *Y* mod *a*, so we don't get equality right away (*X*, *Y* are not computed in his code, but I'll use them in my equations). Instead we get the congruence

*X*'·*a* + *Y*'·*b* ≡ *c* (mod *ab*)

If *Y*' = 0, then *a* divides *c*, hence *c* = 0. This implies 0 = *X* = *X*'. The case *X*' = 0 implies *c* = 0 and *Y*' = 0 in similar fashion. The cases with *a* = 0 or *b* = 0 are treated separately elsewhere in his code, so we can ignore them here.

Note that if *X*·*a* and *Y*·*b* have the same sign (not sure if this can happen actually), we get the inequality

|*X*·*a*| ≤ |*X*·*a*| + |*Y*·*b*| = |*X*·*a* + *Y*·*b*| = |*c*| < |*b*|

hence *X* = *X*' and by symmetry *Y* = *Y*', so we get equality, not only ≡.

The case where *X*·*a* and *Y*·*b* have opposite signs is more tricky. W.l.o.g. let *X*·*a* > 0 > *Y*·*b* and assume *C* := *X*'·*a* + *Y*'·*b* > *c*. From the congruence, we get *C* ≥ *c* + |*ab*|. Note that

*c* - *Y*'·*b* ≥ -|*c*| - *Y*'·*b* ≥ 0

as |*c*| < |*b*| ≤ |*Y*'·*b*| and *Y*'·*b* < 0 by sign preservation in the computation of *Y*'. Therefore

|*X*'·*a*| = *X*'·*a* = *C* - *Y*'·*b* ≥ *c* - *Y*'·*b* + |*ab*| ≥ |*ab*|

Hence |*X*'| ≥ |*b*|, a contradiction. The case *C* < *c* is similar and leads to |*Y*'| > |*a*|, a contradiction. Therefore *c* = *C*, which proves correctness.

Thanks for the proof. Where is the 128-bit multiplication done in his solution?
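Tourist's actual `mulmod` isn't shown here, but a sign-preserving mulmod of the kind described above can be sketched with double-and-add (binary) multiplication, which is the `while (b > 0)` loop in question. Every intermediate value stays below 2·|*m*|, so nothing overflows for |*m*| < 2^{62}. This is my own sketch of the idea, not the original code:

```cpp
// Computes a*b mod m, preserving the sign of a*b (if a*b < 0, the
// result is <= 0).  Uses double-and-add so each intermediate stays
// below 2*|m|; assumes |m| < 2^62 so additions cannot overflow.
long long mulmod(long long a, long long b, long long m) {
    if (m < 0) m = -m;
    int sign = 1;
    if (a < 0) { a = -a; sign = -sign; }
    if (b < 0) { b = -b; sign = -sign; }
    a %= m;
    long long res = 0;
    while (b > 0) {                  // binary multiplication loop
        if (b & 1) {
            res += a;                // add current power-of-two multiple
            if (res >= m) res -= m;
        }
        a += a;                      // double a (mod m)
        if (a >= m) a -= m;
        b >>= 1;
    }
    return sign * res;
}
```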