I saw a blog (thanks to Bugman for finding it!) earlier today asking how to solve the following problem:

*Given a linear function $$$ax + b$$$, find the minimum $$$x \geq 0$$$ such that $$$ax + b \in [L, R]\ (\text{mod } M)$$$*.

To solve that problem, we can make the following reduction: if $$$\gcd(a, M) > 1$$$, we divide everything by the GCD. The $$$x$$$ for which $$$ax + b \equiv L\ (\text{mod } M)$$$ is $$$x = (L - b) a^{-1}\ (\text{mod } M)$$$. Denote this value by $$$b_0$$$. Then, the minimum $$$x$$$ needed to reach $$$L + y$$$ is $$$a^{-1} y + b_0\ (\text{mod } M)$$$. This gives us a reduced problem (with $$$a^{-1}$$$ playing the role of $$$a$$$, $$$b_0$$$ the role of $$$b$$$, and $$$k = R - L$$$):

*Find the minimum value of $$$ay + b \text{ mod } M$$$ over $$$y \leq k$$$*.
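As a quick sanity check of this reduction (the concrete numbers and variable names here are mine):

```python
# Sanity check of the reduction, assuming gcd(a, M) = 1.
a, b, M, L, R = 3, 4, 7, 2, 5
a_inv = pow(a, -1, M)        # modular inverse of a (Python 3.8+)
b0 = (L - b) * a_inv % M     # minimal x with a*x + b ≡ L (mod M)
assert (a * b0 + b) % M == L

# For each offset y <= k = R - L, the minimal x landing on L + y
# is a_inv*y + b0 (mod M); verify minimality by brute force.
for y in range(R - L + 1):
    x = (a_inv * y + b0) % M
    assert (a * x + b) % M == L + y
    assert all((a * t + b) % M != L + y for t in range(x))
```

So minimising $$$x$$$ over the whole interval amounts to minimising $$$(a^{-1} y + b_0) \bmod M$$$ over $$$0 \leq y \leq R - L$$$.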

This seemed pretty hard, but surprisingly I figured out how to do it in $$$\mathcal{O}(\log M)$$$ time! The algorithm is as follows:

In every step, we reduce the modulo from $$$M$$$ to $$$\min(a, M - a) \leq M / 2$$$. This guarantees we do at most $$$\mathcal{O}(\log M)$$$ steps.

To reduce it to $$$a$$$, we consider the first value $$$s$$$ among $$$[0, a)$$$ achieved by $$$ay + b$$$. If $$$b < a$$$, it is $$$b$$$. Otherwise, it is $$$(b - M) \bmod a$$$. We check in $$$\mathcal{O}(1)$$$ whether we reach $$$s$$$ for some $$$y \leq k$$$. If we do, set $$$M$$$ to $$$a$$$, $$$a$$$ to $$$(-M) \bmod a$$$ and $$$b$$$ to $$$s$$$. Otherwise, output $$$b$$$: values in $$$[0, a)$$$ are the only ones whose previous value was larger, so if we never attain one of them, the attained values form an increasing sequence and the first value, $$$b$$$, is the smallest we reach.

To reduce it to $$$M - a$$$, we consider the first value $$$s = b \bmod (M - a)$$$ among $$$[0, M - a)$$$ achieved by $$$ay + b$$$. If we reach $$$s$$$ for some $$$y \leq k$$$, set $$$M$$$ to $$$M - a$$$, $$$a$$$ to $$$a \bmod (M - a)$$$ and $$$b$$$ to $$$s$$$. Otherwise, we can calculate the number of steps needed to go from $$$v$$$ to $$$v - (M - a)$$$ for $$$v \geq M - a$$$, and from this the smallest value we can reach with $$$y \leq k$$$.
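Both formulas for the first value $$$s$$$ can be checked against brute force (a small sketch; the helper name is mine):

```python
def first_value_in(a, b, M, bound):
    """Brute force: first value of (a*y + b) % M, y = 0, 1, 2, ...,
    that lies in [0, bound)."""
    v = b % M
    while v >= bound:
        v = (v + a) % M
    return v

a, b, M = 5, 13, 17
# Reduction to a: first value in [0, a) is b if b < a, else (b - M) mod a.
assert first_value_in(a, b, M, a) == (b if b < a else (b - M) % a)
# Reduction to M - a: first value in [0, M - a) is b mod (M - a).
assert first_value_in(a, b, M, M - a) == b % (M - a)
```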

**code**

What I wanted to ask is, is this a known problem, and is there a simpler or perhaps even faster solution to it? The problem seems fairly simple, so I doubt nobody has thought about it before.

The problem has some obvious connections to rational approximations, and I would expect that it is previously studied in this context.

In the case $$$b = L = 0$$$ it is equivalent to the following problem about finding a good approximation to $$$\frac{a}{M}$$$ from below: "What is the fraction $$$\frac{p}{q}$$$ with smallest positive denominator $$$q$$$ satisfying $$$0 \leq q \cdot \left( \frac{a}{M} - \frac{p}{q} \right) \leq \frac{R}{M}$$$?" The case of $$$b \neq 0$$$ can be incorporated into this approximation problem by asking more specifically about denominators $$$q \geq (a^{-1}b \text{ mod } M)$$$. (And we can take $$$L = 0$$$ by subtracting $$$L$$$ from $$$b$$$.)

Without having yet fully thought out the details, I notice that your algorithm has a close resemblance to the Euclidean gcd algorithm which is well-known to be equivalent to producing continued-fraction approximations, and those in turn are well-known to be a useful tool for finding good or best rational approximations.

"If gcd(a,M)>1, we divide everything by the GCD."

I have some doubts about this line. Which variables will you divide by $$$\gcd(a, M)$$$? What if $$$b$$$ is not divisible by $$$\gcd(a, M)$$$?

Note that we can get rid of $$$b$$$ and transform $$$[L, R]$$$ into at most 2 intervals, then solve the problem for $$$b = 0$$$ twice and combine the results.
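The splitting into at most 2 intervals looks like this (a sketch; the function name is mine):

```python
def shifted_intervals(L, R, b, M):
    """After subtracting b, the targets for a*x (mod M) form the set
    [L - b, R - b] (mod M); as residues in [0, M) this is one interval,
    or two if it wraps around 0. Assumes R - L + 1 <= M."""
    lo, hi = (L - b) % M, (R - b) % M
    if lo <= hi:
        return [(lo, hi)]
    return [(0, hi), (lo, M - 1)]
```

For example, with $$$M = 7$$$, $$$b = 4$$$ and $$$[L, R] = [2, 5]$$$, the shifted target set wraps around and splits into $$$[0, 1]$$$ and $$$[5, 6]$$$.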

First assume that $$$b = 0$$$. Let the value we end up at in the interval be $$$v \in [L, R]$$$, so $$$ax \equiv v \pmod{M}$$$. Since $$$\gcd(a, M)$$$ may be $$$> 1$$$, we cannot directly take $$$a^{-1}$$$.

$$$\Rightarrow ax \equiv v \pmod{M}$$$

$$$\Rightarrow ax + qM = v$$$ for some integer $$$q$$$

$$$\Rightarrow d(a'x + qM') = v$$$ where $$$d = \gcd(a, M)$$$, $$$a = da'$$$, $$$M = dM'$$$

$$$\Rightarrow a'x + qM' = v' = v/d$$$, which requires $$$d \mid v$$$

$$$\Rightarrow a'x \equiv v' \pmod{M'}$$$

Now we can use $$$(a')^{-1}$$$ to get the value of $$$x$$$ given $$$v'$$$.

Since $$$L \leq v \leq R$$$,

$$$\Rightarrow L/d \leq v/d \leq R/d$$$, so we can divide $$$L$$$ and $$$R$$$ by $$$d$$$.

Now if $$$b > 0$$$, then $$$d \mid (v - b)$$$, so $$$d$$$ divides either both of $$$v$$$ and $$$b$$$ or neither of them; when it divides both, $$$\frac{v-b}{d} = \frac{v}{d} - \frac{b}{d}$$$. Now we can also divide $$$b$$$ by $$$d$$$ and use the inverse.
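The divide-by-$$$d$$$ bookkeeping can be sketched as follows (my own naming; the endpoint adjustments are the ceiling/floor corrections for the interval):

```python
from math import gcd

def reduce_by_gcd(a, b, M, L, R):
    """Sketch: with d = gcd(a, M), every value (a*y + b) % M has residue
    b % d modulo d. Writing such a value as d*w + (b % d), the sequence of
    w's is ((a//d)*y + b//d) % (M//d), and the target interval for w is
    [ceil((L - b%d)/d), floor((R - b%d)/d)] (empty if newL > newR)."""
    d = gcd(a, M)
    b0 = b % d
    newL = -((b0 - L) // d)   # ceil((L - b0) / d)
    newR = (R - b0) // d      # floor((R - b0) / d)
    return a // d, b // d, M // d, newL, newR
```

For example, with $$$a = 6$$$, $$$b = 7$$$, $$$M = 9$$$ the values $$$(6y + 7) \bmod 9$$$ are $$$7, 4, 1, \ldots$$$, i.e. $$$3w + 1$$$ with $$$w$$$ following $$$(2y + 2) \bmod 3$$$.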

See the `if (g > 1)` case in `firstInRange`. We add $$$1$$$ to $$$L$$$ after dividing if its remainder (mod the GCD) is greater than that of $$$b$$$, and subtract $$$1$$$ from $$$R$$$ after dividing if its remainder is less than that of $$$b$$$.

This is a very nice algorithm! It's definitely "well-known to some people". There is, for example, problem F from the 2014-2015 Petrozavodsk Summer Training Camp, Petr Mitrichev Contest 12, and at least one Project Euler problem based on this. I'm sure you can find it in various guises in the literature as well. I'll explain how I've come to understand the algorithm. This makes clear a symmetry in the problem, and hopefully also the connection with the extended Euclidean algorithm and continued fractions.

I'll rephrase the problem a bit: suppose we have real numbers $$$x$$$, $$$y$$$, $$$l$$$, $$$r$$$, with $$$x > 0$$$, $$$y < 0$$$, $$$l < r$$$, and we want to find the "first" pair $$$(u, v)$$$ of non-negative naturals such that $$$l \le ux + vy \le r$$$ (assuming such a pair exists). (In your case, $$$y = M$$$, $$$x = a$$$, $$$l = L - b$$$, $$$r = R - b$$$.) Imagine that $$$r - l$$$ is very small, in particular smaller than $$$x$$$, $$$-y$$$, so that the "first" pair simultaneously minimises $$$u$$$ and $$$v$$$. Here is a very naive algorithm:
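The algorithm itself was in a hidden block; here is my reconstruction of it from the description (the names and step numbering are guesses):

```python
from fractions import Fraction

def first_pair_naive(x, y, l, r, limit=10**6):
    """Naive walk, reconstructed from the description:
      1. set p = 0, u = 0, v = 0
      2. if l <= p <= r, stop and return (u, v)
      3. if p < l, add x to p and 1 to u
      4. if p > r, add y to p and 1 to v
      5. go back to step 2
    Runs in time linear in the answer."""
    p, u, v = Fraction(0), 0, 0
    for _ in range(limit):  # guard so the sketch always terminates
        if l <= p <= r:
            return (u, v)
        if p < l:
            p, u = p + x, u + 1
        else:
            p, v = p + y, v + 1
    return None
```

For example, with $$$x = 3$$$, $$$y = -5$$$, $$$l = r = 1$$$ the walk visits $$$0, 3, -2, 1$$$ and returns $$$(u, v) = (2, 1)$$$.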

It's not hard to see that this algorithm is correct, but of course it's hopelessly slow -- it's linear in the answer. To speed it up, imagine we are at step 5, and we have $$$x + y > 0$$$. From now on, whenever we get to step 3, we can only add $$$x$$$ once, and we must add $$$y$$$ immediately afterwards. So we may as well combine this into one operation of adding $$$x+y$$$ to $$$p$$$ and $$$1$$$ to both $$$u$$$ and $$$v$$$. So the efficient algorithm is this:
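Under the same reconstruction, and assuming $$$x + y > 0$$$ so that the combined step applies, the sped-up version might look like:

```python
from fractions import Fraction

def first_pair_combined(x, y, l, r, limit=10**6):
    """Sped-up walk (my reconstruction, assuming x + y > 0): after the
    initial ascent, a position below l is always followed by exactly one
    x-step and then y-steps, so the first two merge into one x+y step."""
    p, u, v = Fraction(0), 0, 0
    while p < l:                     # initial ascent by x alone
        p, u = p + x, u + 1
    for _ in range(limit):           # guard so the sketch always terminates
        if l <= p <= r:
            return (u, v)
        if p > r:
            p, v = p + y, v + 1      # descend
        else:                        # p < l: combined x+y step
            p, u, v = p + x + y, u + 1, v + 1
    return None
```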

This one does very well on generic inputs. But to optimise worst-case performance you want to insert remainder operations in steps 3, 4, and 5, to avoid doing the same thing many times in a row.

I imagine you can apply this technique to this Project Euler problem.

Yep! I was thinking of another PE problem, but this one is a good example. Apparently problem G from Good Bye 2014 also used this technique.

That's a super nice solution! Seems like that algorithm has terrible, terrible precision issues though (when working with FP). Can you pass F from that Petrozavodsk camp in C++ or do you need Python?

(I assume that you use the algorithm with $$$x = \log_{10}(2)$$$, $$$y = -1$$$, $$$l = \log_{10}(p)$$$, $$$r = \log_{10}(p + 1) - \epsilon$$$)

As a side note, does the judge for F even work? My submission fails on the first test (with input $$$7$$$) even though it works as a custom invocation.

It's a really cool problem, one of my favourites! I don't actually know how to pass it with logarithms and floating points. I guess you need >50 digits of precision. There is a simple trick which makes it feasible to do in C++: don't take logarithms. Instead work with decimal integers and do multiplication instead of addition. You need to implement something like multiplication of 100-digit integers, but that's not so bad. (I have no idea about these issues with the judge.)
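For instance, checking whether $$$2^n$$$ starts with given digits can be done exactly, with no logarithms at all; in Python arbitrary-precision integers are built in (in C++ you would hand-roll the ~100-digit multiplication). The helper name here is mine:

```python
def power_of_two_starts_with(n, prefix):
    """Exact test that the decimal expansion of 2**n begins with `prefix`,
    using big-integer arithmetic instead of floating-point logarithms."""
    return str(2 ** n).startswith(str(prefix))
```

For example, $$$2^{10} = 1024$$$ starts with $$$10$$$, and $$$2^7 = 128$$$ starts with $$$12$$$ but not $$$13$$$.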

It was discussed here

"To reduce it to M−a, we consider the first value s=b mod M−a among [0,M−a) achieved by ay+b".

How to prove this?

The next value after $$$v$$$ is $$$v + a$$$ if $$$v + a < M$$$ and $$$v - (M - a)$$$ otherwise. The first condition is just $$$v < M - a$$$.

Thus, before the first time $$$v$$$ is in $$$[0, M - a)$$$, it decreases by $$$M - a$$$ as $$$y$$$ increases by $$$1$$$. Thus, the first value we achieve in $$$[0, M - a)$$$ is $$$b \text{ mod } (M - a)$$$.

It reminds me of a legacy problem in 2008 (editorial).

Oh, so always reducing to $$$a$$$ also works >_>

Well that simplifies things a lot, thanks!

Yes, but sorry for not spelling out the details. Your new code fails for the $$$M = D + 1$$$ case. Please refer to the comments below!

Why does the recursion stop quickly?

If I choose $$$M = D + 1$$$ and $$$L = R \ll D$$$, then for many of the first steps,

$$$(M, D, L, R) = (D + 1, D, L, L)$$$ and

$$$(M', D', L', R') = (D, -1 \% D, L \% D, L \% D) = (D, D - 1, L, L)$$$.

Doesn't it run in $$$O(D)$$$ steps?

Good point. We need to replace $$$(M, D)$$$ with $$$(D, M \bmod D)$$$ to achieve the Euclidean complexity, so my understanding of "recursively" actually refers to $$$(M', M' - D', M' - R', M' - L')$$$.

Wow, good point, so it's not unnecessary after all >_>

I guess I'll revert the blog to the previous version.

If I am not mistaken, during the reduction to $$$a$$$ we go to another step of the recursion with new $$$b, M, a$$$. But why can't we get an answer that is achievable with the new values $$$b, M, a$$$, but impossible for the starting values with $$$y \leq k$$$?

Not sure if I understand you correctly, but the function that counts how many steps we need to reach a residue is always called with the initial parameters $$$a, b, m$$$, so exactly the same set of residues is reachable at every step (though some of them get eliminated as suboptimal).

But why can't there be a situation like this: $$$x$$$ is the minimal value achieved by $$$ay + b \bmod M$$$ over $$$y < k$$$; we go to another step of the recursion with new $$$M, a, b$$$ (let's call them $$$M_1, a_1, b_1$$$), and for this step we find out that there is some $$$z < x$$$ achieved by $$$a_1 y + b_1 \bmod M_1$$$, but $$$z$$$ is unreachable for $$$ay + b$$$ with $$$y < k$$$.

In short: why can't the algorithm find an answer that is less than the minimum value of $$$ay + b \bmod M$$$ over $$$y \leq k$$$? (I hope this sounds better.)

We don't check whether $$$a_1 \cdot y + b_1 \equiv z\ (\text{mod } m_1)$$$ for some $$$y \leq k$$$.

We check whether $$$a \cdot y + b \equiv z\ (\text{mod } m)$$$ for some $$$y \leq k$$$ at every level of the recursion, no matter what the current values $$$a_1, b_1, m_1$$$ are. The current values only help us determine the value $$$z$$$ to test, and the next values $$$a_2, b_2, m_2$$$. Since we always test with the original values, we clearly never return a value only achievable with $$$y > k$$$.

So you find $$$s$$$ with $$$M_i, a_i, b_i$$$, but check whether it satisfies the condition with the original values? I thought you were checking with the new values, and that is why I had trouble understanding :-) Thank you! I'll try to figure out the problem (and ask some questions from time to time ^-^)

I've tried really hard, but I can't understand why we set $$$M$$$, $$$a$$$ and $$$b$$$ to these values! And why, if it takes more than $$$k$$$ steps to reach $$$s$$$, there is no $$$x$$$ less than $$$b$$$ that is reachable in fewer than $$$k$$$ steps. Please help! :-(