### mango_lassi's blog

By mango_lassi, history, 12 months ago,

Earlier today I saw a blog (thanks to Bugman for finding it!) asking how to solve the following problem:

Given a linear function $ax + b$, find the minimum $x \geq 0$ such that $ax + b \in [L, R]\ (\text{mod } M)$.

To solve that problem, we can make the following reduction: if $\gcd(a, M) > 1$, we divide everything by the GCD. The $x$ for which $ax + b \equiv L\ (\text{mod } M)$ is $(L - b) a^{-1}\ (\text{mod } M)$. Denote this value by $b_0$. Then, the minimum $x$ to get to $L + y$ is $a^{-1} y + b_0\ (\text{mod } M)$. This gives us a reduced problem:
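As a concrete sketch of this reduction (the function names are mine, and it assumes $\gcd(a, M) = 1$ after the GCD has been divided out), $b_0$ can be computed with a modular inverse:

```cpp
#include <cassert>

using ll = long long;

// Modular inverse of a mod m via the extended Euclidean algorithm
// (assumes gcd(a, m) == 1 and 0 < a < m).
ll modInverse(ll a, ll m) {
    ll r0 = a, r1 = m, x0 = 1, x1 = 0;
    while (r1 != 0) {
        ll q = r0 / r1;
        ll t = r0 - q * r1; r0 = r1; r1 = t;
        t = x0 - q * x1; x0 = x1; x1 = t;
    }
    return (x0 % m + m) % m;
}

// b0 = (L - b) * a^{-1} mod M: the first y >= 0 with a*y + b ≡ L (mod M).
ll firstHit(ll a, ll b, ll M, ll L) {
    return ((L - b) % M + M) % M * modInverse(a, M) % M;
}
```

For example, with $a = 3$, $b = 5$, $M = 7$, $L = 2$ this gives $b_0 = 6$, and indeed $3 \cdot 6 + 5 \equiv 2\ (\text{mod } 7)$.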

Find the minimum value of $(ay + b) \bmod M$ over $0 \leq y \leq k$.

This seemed pretty hard, but surprisingly I figured out how to do it in $\mathcal{O}(\log M)$ time! The algorithm is as follows:

In every step, we reduce the modulus from $M$ to $\min(a, M - a) \leq M / 2$. This guarantees we do at most $\mathcal{O}(\log M)$ steps.

To reduce it to $a$, we consider the first value $s$ in $[0, a)$ achieved by $ay + b$. If $b < a$, it is $b$; otherwise, it is $(b - M) \bmod a$. We check in $\mathcal{O}(1)$ whether we reach $s$ for some $y \leq k$. If we do, set $M$ to $a$, $a$ to $(-M) \bmod a$ and $b$ to $s$. Otherwise, output $b$: the values in $[0, a)$ are exactly the values whose previous value was larger, so if we never attain one of them, the sequence never wraps around and is strictly increasing, making the first value $b$ the smallest we attain.

To reduce it to $M - a$, we consider the first value $s = b \bmod (M - a)$ in $[0, M - a)$ achieved by $ay + b$. If we reach $s$ for some $y \leq k$, set $M$ to $M - a$, $a$ to $a \bmod (M - a)$ and $b$ to $s$. Otherwise, we can calculate the number of steps needed to go from a value $v \geq M - a$ to $v - (M - a)$, and from this the smallest value we can reach with $y \leq k$.
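The two reduction steps above can be sketched as follows. This is my own reconstruction, not the author's hidden code: it assumes $0 < a < M$ and $\gcd(a, M) = 1$ (the GCD is divided out beforehand), and the reachability check is always made against the *original* parameters via `stepsTo`, never against the reduced ones.

```cpp
#include <cassert>

using ll = long long;

// Extended-Euclid modular inverse; assumes gcd(a, m) == 1.
ll invMod(ll a, ll m) {
    ll r0 = a, r1 = m, x0 = 1, x1 = 0;
    while (r1 != 0) {
        ll q = r0 / r1;
        ll t = r0 - q * r1; r0 = r1; r1 = t;
        t = x0 - q * x1; x0 = x1; x1 = t;
    }
    return (x0 % m + m) % m;
}

// Minimum of (a*y + b) mod M over 0 <= y <= k, following the reduction
// described above.  Assumes 0 < a < M and gcd(a, M) == 1.
struct MinLinearMod {
    ll a0, b0, M0, inv0, k;
    MinLinearMod(ll a, ll b, ll M, ll k)
        : a0(a % M), b0(((b % M) + M) % M), M0(M), inv0(invMod(a % M, M)), k(k) {}

    // Reachability is checked against the ORIGINAL parameters: the unique
    // y in [0, M0) with a0*y + b0 ≡ s (mod M0) is (s - b0) * a0^{-1} mod M0.
    ll stepsTo(ll s) const {
        return (ll)((__int128)(((s - b0) % M0 + M0) % M0) * inv0 % M0);
    }

    // Invariant: b is the first value of the original sequence lying in
    // [0, M), and stepsTo(b) <= k.  Each call shrinks M to min(a, M - a).
    ll run(ll a, ll b, ll M) const {
        if (a == 0) return b;  // the modulus has shrunk to 1
        if (2 * a <= M) {
            // First value in [0, a): b itself, or (b - M) mod a after the
            // first wrap-around.
            ll s = (b < a) ? b : ((b % a - M % a) % a + a) % a;
            if (stepsTo(s) <= k) return run((a - M % a) % a, s, a);
            return b;  // no wrap within k steps: the sequence only grows
        } else {
            // First value in [0, M - a): the sequence decreases by M - a
            // each step until it first enters that range.
            ll s = b % (M - a);
            if (stepsTo(s) <= k) return run(a % (M - a), s, M - a);
            // s is out of reach; each decrement by M - a costs a fixed
            // number d of original steps, so take as many as k allows.
            ll d = stepsTo(b - (M - a)) - stepsTo(b);
            return b - (k - stepsTo(b)) / d * (M - a);
        }
    }

    ll solve() const { return run(a0, b0, M0); }
};
```

For example, `MinLinearMod(3, 5, 7, 1).solve()` returns $1$: the sequence is $5, 1, 4, 0, \dots$ and its minimum over the first two terms is $1$.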

code

What I wanted to ask is, is this a known problem, and is there a simpler or perhaps even faster solution to it? The problem seems fairly simple, so I doubt nobody has thought about it before.

• +179

 » 12 months ago, # |   +35 The problem has some obvious connections to rational approximations, and I would expect that it has previously been studied in this context. In the case $b = L = 0$ it is equivalent to the following problem about finding a good approximation to $\frac{a}{M}$ from below: "What is the fraction $\frac{p}{q}$ with smallest positive denominator $q$ satisfying $0 \leq q \cdot \left( \frac{a}{M} - \frac{p}{q} \right) \leq \frac{R}{M}$?" The case of $b \neq 0$ can be incorporated into this approximation problem by asking more specifically about denominators $q \geq (a^{-1}b \bmod M)$. (And we can take $L = 0$ by subtracting $L$ from $b$.) Without having yet fully thought out the details, I notice that your algorithm has a close resemblance to the Euclidean gcd algorithm, which is well-known to be equivalent to producing continued-fraction approximations, and those in turn are well-known to be a useful tool for finding good or best rational approximations.
 » 12 months ago, # |   +12 "If gcd(a,M)>1, we divide everything by the GCD." I have some doubts about this line. Which variables will you divide by $\gcd(a, M)$? What if $b$ is not divisible by $\gcd(a, M)$?
•  » » 12 months ago, # ^ |   +3 Note that we can get rid of $b$ and transform $[L, R]$ into at most 2 intervals, then solve the problem for $b = 0$ twice and combine the results.
•  » » 12 months ago, # ^ | ← Rev. 2 →   0 First assume that $b = 0$. Let the value we end up at in the interval be $v \in [L, R]$, so $ax \equiv v \pmod M$. Since $\gcd(a, M)$ may be $> 1$, we cannot directly take $a^{-1}$:

$ax \equiv v \pmod M$
$\Rightarrow ax + qM = v$
$\Rightarrow d(a'x + qM') = v$ where $d = \gcd(a, M)$
$\Rightarrow a'x + qM' = v'$ where $v' = v/d$, which requires $d \mid v$
$\Rightarrow a'x \equiv v' \pmod{M'}$

Now we can use $(a')^{-1}$ to get the value of $x$ given $v'$. Since $L \leq v \leq R$, we have $L/d \leq v/d \leq R/d$, so we can divide $L$ and $R$ by $d$. Now if $b > 0$ then $d \mid (v - b)$, so $d$ divides either both of $v$ and $b$ or neither of them; this means that $\frac{v-b}{d} = \frac{v}{d} - \frac{b}{d}$, so we can also divide $b$ by $d$ and use the inverse.
•  » » 12 months ago, # ^ |   0 See the if (g > 1) case in firstInRange. We add $1$ to $L$ after dividing if its remainder is greater than that of $b$ (mod GCD) and subtract $1$ from $R$ after dividing if its remainder is less than that of $b$.
 » 12 months ago, # |   +41 This is a very nice algorithm! It's definitely "well-known to some people". There is for example problem F from the 2014-2015 Summer Petrozavodsk Camp, Petr Mitrichev Contest 12 and at least one Project Euler problem based on this. I'm sure you can find it in various guises in the literature as well. I'll explain how I've come to understand the algorithm. This makes clear a symmetry to the problem, and hopefully also the connection with the extended Euclidean algorithm and continued fractions.

I'll rephrase the problem a bit: suppose we have real numbers $x$, $y$, $l$, $r$, with $x > 0$, $y < 0$, $l < r$, and we want to find the "first" pair $(u, v)$ of non-negative naturals such that $l \le ux + vy \le r$ (assuming such a pair exists). (In your case, $y = -M$, $x = a$, $l = L - b$, $r = R - b$.) Imagine that $r - l$ is very small, in particular smaller than $x$ and $-y$, so that the "first" pair simultaneously minimises $u$ and $v$. Here is a very naive algorithm:

1. Initialize $u = v = 0$. We keep track of $p = ux + vy$, initially $0$.
2. If $l \le p \le r$, return $(u, v)$.
3. While $p < l$, add $x$ to $p$ and $1$ to $u$.
4. While $r < p$, add $y$ to $p$ and $1$ to $v$.
5. Go back to 2.

It's not hard to see that this algorithm is correct, but of course it's hopelessly slow -- it's linear in the answer. To speed it up, imagine we are at step 5, and we have $x + y > 0$. From now on, whenever we get to step 3, we can only add $x$ once, and we must add $y$ immediately afterwards. So we may as well combine this into one operation of adding $x + y$ to $p$ and $1$ to both $u$ and $v$. So the efficient algorithm is this:

1. Initialize $ans = (0, 0)$, $p = 0$, $e_x = (1, 0)$, $e_y = (0, 1)$.
2. If $l \le p \le r$, return $ans$.
3. While $p < l$, add $x$ to $p$ and $e_x$ to $ans$.
4. While $r < p$, add $y$ to $p$ and $e_y$ to $ans$.
5. Compute $x + y$. If it's positive, replace $x$ with $x + y$ and $e_x$ with $e_x + e_y$. Else replace $y$ with $x + y$ and $e_y$ with $e_x + e_y$. Go back to 2.

This one does very well on generic inputs. But to optimise worst-case performance you want to insert remainder operations in steps 3, 4, and 5, to avoid doing the same thing many times in a row.
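For integer inputs the efficient version can be sketched like this (my own reconstruction and naming; it omits the remainder-based worst-case speedup just mentioned, and it loops forever if no pair exists):

```cpp
#include <cassert>
#include <utility>

using ll = long long;

// First pair (u, v) of non-negative integers with l <= u*x + v*y <= r,
// for x > 0, y < 0 (in the blog's setting x = a, y = -M).  Follows the
// five numbered steps above.
std::pair<ll, ll> firstPair(ll x, ll y, ll l, ll r) {
    ll p = 0;                                    // current value u*x + v*y
    std::pair<ll, ll> ans = {0, 0};              // (u, v) accumulated so far
    std::pair<ll, ll> ex = {1, 0}, ey = {0, 1};  // what one x-step / y-step adds
    while (true) {
        if (l <= p && p <= r) return ans;
        while (p < l) { p += x; ans.first += ex.first; ans.second += ex.second; }
        while (r < p) { p += y; ans.first += ey.first; ans.second += ey.second; }
        // Merge the smaller-magnitude direction into the larger one.
        if (x + y > 0) { x += y; ex.first += ey.first; ex.second += ey.second; }
        else           { y += x; ey.first += ex.first; ey.second += ex.second; }
    }
}
```

For example, `firstPair(3, -7, 2, 2)` returns $(3, 1)$, matching $3 \cdot 3 - 1 \cdot 7 = 2$.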
•  » » 12 months ago, # ^ |   0 I imagine you can apply this technique to this Project Euler problem.
•  » » » 12 months ago, # ^ |   0 Yep! I was thinking of another PE problem, but this one is a good example. Apparently problem G from Good Bye 2014 also used this technique.
•  » » 12 months ago, # ^ |   0 That's a super nice solution! Seems like that algorithm has terrible, terrible precision issues though (when working with floating point). Can you pass F from that Petrozavodsk camp in C++, or do you need Python? (I assume that you use the algorithm with $x = \log_{10}(2)$, $y = -1$, $l = \log_{10}(p)$, $r = \log_{10}(p + 1) - \epsilon$.) As a side note, does the judge for F even work? My submission fails on the first test (with input $7$) even though it works as a custom invocation.
•  » » » 12 months ago, # ^ |   +10 It's a really cool problem, one of my favourites! I don't actually know how to pass it with logarithms and floating points. I guess you need >50 digits of precision. There is a simple trick which makes it feasible to do in C++: don't take logarithms. Instead work with decimal integers and do multiplication instead of addition. You need to implement something like multiplication of 100-digit integers, but that's not so bad. (I have no idea about these issues with the judge.)
 » 12 months ago, # |   +31 It was discussed here
 » 12 months ago, # |   +19 "To reduce it to $M - a$, we consider the first value $s = b \bmod (M - a)$ among $[0, M - a)$ achieved by $ay + b$." How can we prove this?
•  » » 12 months ago, # ^ |   +17 The next value after $v$ is $v + a$ if $v + a < M$ and $v - (M - a)$ otherwise. The first condition is just $v < M - a$.Thus, before the first time $v$ is in $[0, M - a)$, it decreases by $M - a$ as $y$ increases by $1$. Thus, the first value we achieve in $[0, M - a)$ is $b \text{ mod } (M - a)$.
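This argument can be checked directly with a few lines of code (the helper name and example values are mine):

```cpp
#include <cassert>

using ll = long long;

// Step the sequence v -> (v + a) mod M starting from b, and return the
// first value that lands in [0, M - a).  Per the argument above, while
// v >= M - a each step is exactly v -> v - (M - a), so the loop
// terminates and the result is b mod (M - a).
ll firstBelow(ll a, ll b, ll M) {
    ll v = b % M;
    while (v >= M - a) v = (v + a) % M;
    return v;
}
```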
 » 12 months ago, # |   +39 It reminds me of a legacy problem in 2008 (editorial).
•  » » 12 months ago, # ^ | ← Rev. 2 →   +5 Oh, so always reducing to $a$ also works >_> Well, that simplifies things a lot, thanks!
•  » » » 12 months ago, # ^ |   +15 Yes, but sorry for not filling in the details. Your new code fails for the $M = D + 1$ case. Please refer to the comments below!
•  » » 12 months ago, # ^ |   +20 Why does the recursion stop quickly? If I choose $M = D + 1$ and $L = R \ll D$, then for many first steps, $(M, D, L, R) = (D + 1, D, L, L)$ and $(M', D', L', R') = (D, (-1) \bmod D, L \bmod D, L \bmod D) = (D, D - 1, L, L)$. Doesn't it run in $O(D)$ steps?
•  » » » 12 months ago, # ^ |   +5 Good point. We need to replace $(M, D)$ with $(D, M \bmod D)$ to achieve the Euclidean complexity, so my understanding of "recursively" actually refers to $(M', M' - D', M' - R', M' - L')$.
•  » » » 12 months ago, # ^ |   0 Wow, good point, so it's not unnecessary after all >_> I guess I'll revert the blog to the previous version.
 » 12 months ago, # |   0 If I am not mistaken, when reducing to $a$ we go to another step of the recursion with new $M$, $a$, $b$. But why can't we get an answer that is achievable with the new values $M$, $a$, $b$ and $y \leq k$, yet impossible for the starting values?
•  » » 12 months ago, # ^ |   0 Not sure if I understand you correctly, but the function that counts how many steps we need to reach a residue is always called with the initial parameters $a, b, m$, so exactly the same set of residues is reachable at every step (though some of them get eliminated as suboptimal).
•  » » » 12 months ago, # ^ |   0 But why can't there be a situation like this: $x$ is the minimal value achieved by $(ay + b) \bmod M$ for $y \leq k$; we go to another step of the recursion with new $M$, $a$, $b$ (let's call them $M_1$, $a_1$, $b_1$), and at this step we find some $z < x$ achieved by $(a_1 y + b_1) \bmod M_1$ with $y \leq k$, but $z$ is unreachable for $ay + b$. In short: why can't the algorithm find an answer that is less than the minimum value of $(ay + b) \bmod M$ over $y \leq k$? (I hope this sounds better.)
•  » » » » 12 months ago, # ^ |   0 We don't check if we have $a_1 \cdot y + b_1 \equiv z\ (\text{mod } m_1)$ for some $y \leq k$.We check if $a \cdot y + b \equiv z\ (\text{mod } m)$ for some $y \leq k$ at every level of the recursion, no matter what the current values $a_1, b_1, m_1$ are. The current values only help us determine the value $z$ to test, and the next values $a_2, b_2, m_2$.Since we always test with the original values, we clearly never return a value only achievable with $y > k$.
•  » » » » » 12 months ago, # ^ |   +10 So you find $s$ with $M_i, a_i, b_i$, but check whether it satisfies the condition with the original values? I thought that you were checking with the new values, and that is why I had problems with understanding :-) Thank you! I'll try to figure out the problem (and ask questions sometimes ^-^).
•  » » » » » » 11 months ago, # ^ |   0 I've tried really hard, but I can't understand why we set $M$, $a$ and $b$ to these values! And why, if there are more than $k$ steps to reach $s$, there is no value less than $b$ that we could reach in at most $k$ steps. Please help! :-(