### SWExShanto's blog

By SWExShanto, history, 9 days ago

## How to find how many numbers in range [l, r] are coprime to n where r-l < n? [in O(sqrt(n)) complexity]

Thanks in advance.

Comments (5)
 » You might want to look at the Euler totient function. It can be computed in O(sqrt(N)) time alongside the trial-division factorization of N.
•  » » It can only give us the total number of integers coprime to some number. But can it also give the list of those numbers, or the list of those that are not coprime to that number, within the same complexity? Thanks in advance!
 » 9 days ago, # | ← Rev. 2 →

We will focus on counting the numbers coprime to $n$ that are less than or equal to a bound $l$. From that, the answer for a range $[l,r]$ follows easily (count up to $r$, subtract the count up to $l-1$). Equivalently, we can count the numbers up to $l$ that are NOT coprime to $n$ and subtract the result from $l$.

**Solution**

Let's count the numbers $x$ such that $1\leq x\leq l$ and $\gcd(x,n)\gt 1$. First compute all the prime factors of $n$; the numbers we want are exactly the multiples of these primes that do not exceed $l$. Denote the list of prime factors as $A[]=\{a_1,a_2,\cdots,a_m\}$. But one difficult task remains unsolved:

**Avoiding overcounting**

First note that the number of multiples of $a$ not greater than $b$ is $\left \lfloor{\frac{b}{a}}\right \rfloor$, so define $f(a,b) =\left \lfloor{\frac{b}{a}}\right \rfloor$. From here the work becomes a little tricky, so please reply if a detailed explanation is needed. We pick a subset of the list $A[]$ and compute the LCM of its elements, call it $L$. If the size of the subset is odd, we add $f(L,l)$ to the answer; otherwise we subtract it. This alternating addition and subtraction avoids overcounting: we add the contribution of single elements, subtract the contribution of pairs, add back triples (whose contribution has by then been both added and removed), subtract quadruples, and so on.

This approach runs in $O(\sqrt{n})$ time. The cost of enumerating subsets is effectively constant, because the number of distinct prime factors grows very slowly with $n$ and is always small enough to sustain an exponential enumeration; the prime factorization itself takes $O(\sqrt{n})$ time.
I have tried my best to provide a satisfactory explanation, but do reply if any clarification is needed or if I have messed up somewhere :D. I have also found an implementation of this approach on the internet. Link: https://www.geeksforgeeks.org/count-of-natural-numbers-in-range-l-r-which-are-relatively-prime-with-n/
•  » » 9 days ago, # ^ | ← Rev. 3 →

Can you give me a direct link to the theory you used to solve this part, or its name?

> What we will do is to pick a subset of the list A[], calculate the LCM of the elements of it, let's call it L. Now, if the size of this subset was odd, we add the value of f(L,l) to answer else we subtract it.

And how can generating subsets from a set be done within that complexity? A short explanation would be very helpful. Thanks in advance!
•  » » » There is no single link, but the idea is the inclusion-exclusion principle, and it's explainable. We first count the numbers that are multiples of some $a_i$. Then we subtract the numbers that are multiples of $a_i\cdot a_j$, because we have counted them twice (once with $a_i$ and once with $a_j$). But in this process we have cancelled the contribution of multiples of $a_i\cdot a_j\cdot a_k$ entirely (added with $a_i,a_j,a_k$ and subtracted with $a_i\cdot a_j,a_j\cdot a_k,a_k\cdot a_i$), so we must add the contribution of subsets of size three back. Similarly, we subtract for size four, and so on.

As for generating subsets with a bitmask, that takes time exponential in the number of prime factors. But $n$ has at most $11$ distinct prime factors when $1\leq n\leq 10^{12}$ (the product of the first $12$ primes already exceeds $10^{12}$), so a bitmask approach will always work within the needed time complexity.