How can I find how many numbers in the range $$$[l, r]$$$ are coprime to $$$n$$$, where $$$r-l < n$$$, in $$$O(\sqrt{n})$$$ time?
Thanks in advance.
We will focus on counting the numbers coprime to $$$n$$$ that are at most some bound $$$l$$$. From that, the answer for a range $$$[l,r]$$$ follows easily (take the count up to $$$r$$$ and subtract the count up to $$$l-1$$$). It is also simpler to count the numbers that are NOT coprime to $$$n$$$ and at most $$$l$$$, and then subtract that count from $$$l$$$.
Let's now count the numbers $$$x$$$ such that $$$1\leq x\leq l$$$ and $$$\gcd(x,n)\gt 1$$$. For that, we first compute the distinct prime factors of $$$n$$$; denote this list as $$$A=\{a_1,a_2,\cdots,a_m\}$$$. The numbers we want are exactly those not greater than $$$l$$$ that are multiples of at least one $$$a_i$$$. But counting them without overcounting is the tricky part, i.e...
Let's first note that the number of multiples of $$$a$$$ not greater than $$$b$$$ is given by $$$\left \lfloor{\frac{b}{a}}\right \rfloor$$$. So let's define $$$f(a,b) =\left \lfloor{\frac{b}{a}}\right \rfloor$$$. For example, $$$f(3,10)=3$$$, counting the multiples $$$3$$$, $$$6$$$, $$$9$$$.
From here the work becomes a little tricky, so please reply if a detailed explanation is needed. What we will do is pick a subset of the list $$$A$$$ and compute the LCM of its elements; call it $$$L$$$. (Since the $$$a_i$$$ are distinct primes, $$$L$$$ is simply their product.) Now, if the size of this subset is odd, we add $$$f(L,l)$$$ to the answer; otherwise we subtract it.
This alternating addition and subtraction is the inclusion–exclusion principle, used to avoid overcounting. We add the contribution of single elements, then subtract the contribution of pairs; next we add back subsets of size three (a number divisible by three of the primes has by then been added three times and subtracted three times, so its net count is zero and it must be added once more), then subtract for size four, and so on...
This approach works in $$$O(\sqrt{n})$$$ time overall. Iterating over the subsets is essentially free because the number of distinct prime factors of $$$n$$$ is very small (at most $$$11$$$ for $$$n\leq 10^{12}$$$), so the exponential subset enumeration is easily affordable. The prime factorization itself takes $$$O(\sqrt{n})$$$ by trial division.
I have tried my best to provide a satisfactory explanation, but do reply if any clarification is needed or I have messed up somewhere ':D.
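Here is a minimal, self-contained C++ sketch of the whole procedure (function names are my own, not from any standard source):

```cpp
#include <vector>

// Distinct prime factors of n by trial division, O(sqrt(n)).
std::vector<long long> primeFactors(long long n) {
    std::vector<long long> ps;
    for (long long p = 2; p * p <= n; ++p) {
        if (n % p == 0) {
            ps.push_back(p);
            while (n % p == 0) n /= p;
        }
    }
    if (n > 1) ps.push_back(n);  // leftover prime factor > sqrt(original n)
    return ps;
}

// Count of x in [1, l] with gcd(x, n) > 1: inclusion-exclusion over all
// non-empty subsets of the distinct prime factors ps. Since distinct primes
// are pairwise coprime, the LCM of a subset is just the product.
long long countNotCoprime(long long l, const std::vector<long long>& ps) {
    int m = (int)ps.size();
    long long res = 0;
    for (int mask = 1; mask < (1 << m); ++mask) {
        long long prod = 1;
        int bits = 0;
        for (int i = 0; i < m; ++i)
            if (mask >> i & 1) { prod *= ps[i]; ++bits; }
        // odd-sized subsets add, even-sized subtract
        res += (bits % 2 == 1) ? l / prod : -(l / prod);
    }
    return res;
}

// Numbers in [l, r] that are coprime to n.
long long countCoprimeInRange(long long l, long long r, long long n) {
    std::vector<long long> ps = primeFactors(n);
    long long upToR   = r - countNotCoprime(r, ps);
    long long upToLm1 = (l - 1) - countNotCoprime(l - 1, ps);
    return upToR - upToLm1;
}
```

For example, `countCoprimeInRange(1, 10, 10)` counts $$$1, 3, 7, 9$$$ and returns $$$4$$$, which matches $$$\varphi(10)$$$.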
Also, I have found an implementation of this approach on the internet.
Link : https://www.geeksforgeeks.org/count-of-natural-numbers-in-range-l-r-which-are-relatively-prime-with-n/
Can you give me a direct link to the theory that you have used to solve this part or the name of the theory?
What we will do is to pick a subset of the list A[], calculate the LCM of the elements of it, let's call it L. Now, if the size of this subset was odd, we add the value of f(L,l) to answer else we subtract it.
And how can generating subsets of a set be done within that complexity? It would be more helpful if you added a short explanation for me.
Thanks in advance!
There is no single theory link I know of (the underlying idea is the inclusion–exclusion principle), but I think it's explainable.
We first count the numbers that are multiples of some $$$a_i$$$. Then we subtract the numbers that are multiples of $$$a_i\cdot a_j$$$ (because these were counted twice, once with $$$a_i$$$ and once with $$$a_j$$$). But in doing so we have nullified the contribution of multiples of $$$a_i\cdot a_j\cdot a_k$$$ (added with $$$a_i,a_j,a_k$$$ and subtracted with $$$a_i\cdot a_j,a_j\cdot a_k,a_k\cdot a_i$$$), so we must add back the contribution of subsets of size three. Similarly we subtract for size four, and so on...
As for the bitmasking approach to subset enumeration: yes, it takes time exponential in the number of distinct prime factors, but that number is at most $$$11$$$ for $$$1\leq n\leq 10^{12}$$$ (the product of the first $$$12$$$ primes already exceeds $$$10^{12}$$$). So a bitmask over at most $$$2^{11}$$$ subsets will always work within the needed time complexity.
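To make the subset enumeration concrete, here is a short C++ sketch (an illustration of the bitmask idea, not taken from any linked source; `__builtin_popcount` is a GCC/Clang builtin). Bit $$$i$$$ of an integer `mask` says whether $$$a_i$$$ is in the subset, so counting `mask` from $$$1$$$ to $$$2^m-1$$$ visits every non-empty subset exactly once:

```cpp
#include <vector>

// Count numbers in [1, l] divisible by at least one of the distinct primes
// in ps, by inclusion-exclusion over all 2^m - 1 non-empty subsets of ps.
long long countMultiplesOfAny(long long l, const std::vector<long long>& ps) {
    int m = (int)ps.size();
    long long res = 0;
    for (int mask = 1; mask < (1 << m); ++mask) {
        long long prod = 1;  // product == LCM, since the primes are distinct
        for (int i = 0; i < m; ++i)
            if (mask >> i & 1) prod *= ps[i];
        // odd-sized subsets add, even-sized subtract (inclusion-exclusion)
        res += (__builtin_popcount((unsigned)mask) % 2 == 1) ? l / prod
                                                             : -(l / prod);
    }
    return res;
}
```

For example, with the primes $$$\{2,3\}$$$ and $$$l=10$$$ this computes $$$\lfloor 10/2\rfloor + \lfloor 10/3\rfloor - \lfloor 10/6\rfloor = 5+3-1 = 7$$$, matching the seven numbers $$$2,3,4,6,8,9,10$$$.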