By adamant, history, 6 weeks ago,

Hi everyone!

You probably know that the primitive root modulo $m$ exists if and only if one of the following is true:

• $m=2$ or $m=4$;
• $m = p^k$ is a power of an odd prime number $p$;
• $m = 2p^k$ is twice a power of an odd prime number $p$.

Today I'd like to present an interesting justification of this fact through $p$-adic numbers.

Hopefully, this will allow us to develop a deeper understanding of the multiplicative group modulo $p^k$.

### Tl;dr.

For a prime number $p>2$ and $r \equiv 0 \pmod p$ one can uniquely define

$\exp r = \sum\limits_{k=0}^\infty \frac{r^k}{k!} \pmod{p^n}.$

In this notation, if $g$ is a primitive root of remainders modulo $p$, lifted to have order $p-1$ modulo $p^n$ as well, then $g \exp p$ is a primitive root of remainders modulo $p^n$.

Finally, for $p=2$ and $n>2$ the multiplicative group is generated by two numbers, namely $-1$ and $\exp 4$.

### $p$-adic numbers

We define a $p$-adic number $r$ as a formal power series $r = \sum\limits_{i=k}^\infty r_i p^i$, where $0 \leq r_i < p$ are integer coefficients.

Essentially, just as formal power series in $x$ extend polynomials in $x$, $p$-adic numbers extend regular base-$p$ numbers, using the same rules for summation and multiplication as base-$p$ numbers.

Generally, any rational number can be uniquely represented as $p^k \frac{n}{d}$, where $n$ and $d$ are co-prime integers, both co-prime with $p$, and $d$ is positive. In this representation, the possibly negative integer $k$ corresponds to the number $k$ in the definition above.

This number $k$ is called the $p$-adic valuation of $r$. The numbers with non-negative valuation are called $p$-adic integers, and we will focus on such numbers in this article, because for them we can say

$r \equiv \sum\limits_{i=0}^{k-1} r_i p^i \pmod{p^k}$

and treat a $p$-adic number as a nested sequence of remainders modulo $p^k$ for increasing $k$, which is essentially the same as with regular polynomials in $x$ that are much better known in competitive programming.
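To make the "nested remainders" view concrete, here is a small Python sketch (the helper `padic_digits` and its interface are my own, not from the article) that recovers the first $k$ base-$p$ digits of any rational that is a $p$-adic integer:

```python
from fractions import Fraction

def padic_digits(x, p, k):
    # first k base-p digits of a rational x whose denominator is
    # co-prime with p, i.e. the digits of x modulo p^k
    x = Fraction(x)
    mod = p**k
    r = x.numerator * pow(x.denominator, -1, mod) % mod
    digits = []
    for _ in range(k):
        digits.append(r % p)
        r //= p
    return digits

print(padic_digits(-1, 3, 5))              # [2, 2, 2, 2, 2]
print(padic_digits(Fraction(1, 2), 3, 4))  # [2, 1, 1, 1]
```

For instance, the $3$-adic expansion of $-1$ consists entirely of digits $2$, matching the formal identity $-1 = 2 + 2 \cdot 3 + 2 \cdot 3^2 + \dots$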

### $p$-adic logarithms and exponents

One of the things that we regularly do with formal power series is taking their logarithms and exponents. These have combinatorial meaning in terms of generating functions, but also, e.g., provide an efficient way to compute the first several coefficients of $P^k(x)$ as

$P^k = \exp (k \log P).$

A similar thing can be done for $p$-adic numbers, but in this case it is done to investigate the multiplicative group of remainders modulo $p^k$ in additive terms. Let's recall the power series definitions of $\log$ and $\exp$:

$\begin{matrix} \log (1 - x) = -\sum\limits_{k=1}^\infty \frac{x^k}{k}, \\ \exp x = \sum\limits_{k=0}^\infty \frac{x^k}{k!}. \end{matrix}$

The former can be rewritten for $\log x$ as

$\log x = -\sum\limits_{k=1}^\infty \frac{(1-x)^k}{k}.$

For these expressions to be meaningful, we need them to converge. That is, for each coefficient of $p^k$ we should be able to eventually determine its exact value. Since we have $x^k$ in the summation, a natural thing to demand here is $x \equiv 0 \pmod p$ for $\exp x$ and $x \equiv 1 \pmod p$ for $\log x$.

In this case, $x^k$ for $\exp$ and $(1-x)^k$ for $\log$ are divisible by $p^k$, generally leading to an increasing valuation of each summand.

Note that dividing by $k$ for $\log$ and by $k!$ for $\exp$ drags the valuation of the $k$-th summand down.

For $\log$, it is decreased by at most $\log_p k$, and for $\exp$ by $\frac{k - s_p(k)}{p-1} \leq \frac{k}{p-1}$, where $s_p(k)$ is the sum of the base-$p$ digits of $k$ (Legendre's formula).

Therefore, the valuation of the $k$-th summand for $\log$ is at least $k-\log_p k$, which is good, as it tends to infinity. For $\exp$, it is at least $\frac{k(p-2)}{p-1}$, which is also fine for all $p$ except $p=2$, for which the summand has valuation exactly $s_2(k)$, which equals $1$ whenever $k$ is a power of $2$.

So, for odd prime numbers, $\exp x$ converges for all $x$ that are divisible by $p$.

But for $p=2$, we need $x$ to be divisible by at least $4=p^2$ for $\exp$ to converge.
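Since each summand is an ordinary rational number, convergence can be checked numerically with exact rational arithmetic. Below is a hypothetical helper `padic_exp` (my own sketch, not code from the article); the number of terms $2n+2$ is a crude bound derived from the valuation estimates above:

```python
from fractions import Fraction

def padic_exp(x, p, n):
    # exp(x) modulo p^n via the truncated series, assuming x = 0 (mod p)
    # for odd p (or x = 0 (mod 4) for p = 2)
    mod = p**n
    total, term = Fraction(0), Fraction(1)
    # the k-th summand has valuation at least k(p-2)/(p-1) for odd p
    # (and at least k+1 for p = 2 with 4 | x), so 2n + 2 terms suffice
    # for precision p^n
    for k in range(2 * n + 2):
        total += term
        term = term * x / (k + 1)
    # the truncated sum is a p-adic integer: its denominator is a unit
    return total.numerator * pow(total.denominator, -1, mod) % mod

print(padic_exp(3, 3, 2))  # exp(3) = 1 + 3 + 9/2 + ... = 4 (mod 9)
```

One can check that the usual identity $\exp x \cdot \exp y \equiv \exp(x+y) \pmod{p^n}$ holds for this helper, e.g. $\exp 5 \cdot \exp 10 \equiv \exp 15 \pmod{5^4}$.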

### Why does it matter for primitive roots?

Let's take for granted that a primitive root $g$ exists for every prime number $p > 2$.

Any invertible remainder modulo $p^k$ can be represented as $r \equiv g^{l}t \pmod{p^k}$, where $t \equiv 1 \pmod p$.

It is possible to prove that $\exp \log x = x$ whenever both $\log x$ and $\exp \log x$ converge. Consequently, any $p$-adic number $x$ such that $x \equiv 1 \pmod{p}$ can be uniquely represented as $x = \exp \alpha$, where $\alpha = \log x$.

Note that there are $p^{k-1}$ numbers $x$ (taken modulo $p^k$) such that $x \equiv 1 \pmod p$, as well as $p^{k-1}$ numbers $\alpha$ (modulo $p^k$) such that $\alpha \equiv 0 \pmod p$, so $\log$ and $\exp$ define a one-to-one correspondence between the numbers with remainder $1$ and the numbers with remainder $0$ modulo $p$.
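The $\log$ side of this correspondence can also be evaluated as a truncated sum; `padic_log` below is again just my illustrative sketch, with a crude bound on the number of terms:

```python
from fractions import Fraction

def padic_log(x, p, n):
    # log(x) modulo p^n via -sum (1-x)^k / k, assuming x = 1 (mod p)
    mod = p**n
    t = 1 - Fraction(x)  # t = 0 (mod p)
    total, power = Fraction(0), Fraction(1)
    # the k-th summand has valuation at least k - log_p(k), so 2n + 4
    # terms comfortably reach precision p^n
    for k in range(1, 2 * n + 5):
        power *= t
        total -= power / k
    return total.numerator * pow(total.denominator, -1, mod) % mod

print(padic_log(6, 5, 2))  # log(6) = 5 (mod 25), matching exp(5) = 6 (mod 25)
```

Being the inverse of $\exp$, this $\log$ turns multiplication into addition, $\log(xy) \equiv \log x + \log y \pmod{p^n}$, and its result is always divisible by $p$.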

Therefore, the remainder $r$ can be represented alternatively as

$r \equiv g^{l(r)} \exp \alpha(r) \pmod{p^k}.$

Here $l(r)$ is a number such that $r \equiv g^{l(r)} \pmod p$ and $\alpha(r)$ is a number such that the equation above is true.

To make sure that what's written next is correct, $g$ should be lifted (see the comment below) to have order $p-1$ in the ring of $p$-adic numbers as well.

There are $p-1$ different meaningful values of $l(r)$ and $p^{k-1}$ different meaningful values of $\alpha(r)$, making up for $(p-1)p^{k-1}$ different invertible numbers modulo $p^k$. If we were to multiply two numbers modulo $p^k$, the following would stand:

$xy \equiv g^{l(x) + l(y)} \exp(\alpha(x) + \alpha(y)) \pmod{p^k}.$

So, any number in the multiplicative group can be uniquely represented by the pair $(l,\alpha)$, where $l$ ranges over $0, 1, \dots, p-2$ and $\alpha$ goes from $0$ to $(p-1)(p+p^2+\dots+p^{k-1})=p^k - p$, traversing all remainders modulo $p^k$ that are divisible by $p$.

In these terms, all meaningful values of $l$ are generated by the multiples of the number $1$, granted that they're taken modulo $p-1$, and all meaningful values of $\alpha$ are generated by the multiples of the number $p$, taken modulo $p^k$.

The first number has period $p-1$, the second has period $p^{k-1}$. These periods are co-prime, hence the joint period of $(1, p)$ is $(p-1)p^{k-1}$, which means that $(1, p)$ generates all possible pairs $(l, \alpha)$.

This, in turn, means that all invertible remainders modulo $p^k$ are generated by the $p$-adic number $g \exp p$.
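The whole construction can be tested numerically. In the sketch below (all helper names are mine), $g$ is lifted to its Teichmüller representative $g^{p^{n-1}} \bmod p^n$, which is $\equiv g \pmod p$ and has order dividing $p-1$, and is then multiplied by $\exp p$:

```python
from fractions import Fraction

def padic_exp(x, p, n):
    # exp(x) modulo p^n via the truncated series, x = 0 (mod p), p odd
    mod = p**n
    total, term = Fraction(0), Fraction(1)
    for k in range(2 * n + 2):
        total += term
        term = term * x / (k + 1)
    return total.numerator * pow(total.denominator, -1, mod) % mod

def teichmuller_lift(g, p, n):
    # g^(p^(n-1)) is congruent to g modulo p and satisfies
    # lift^(p-1) = 1 (mod p^n), i.e. its order divides p - 1
    return pow(g, p**(n - 1), p**n)

def mult_order(x, m):
    # brute-force multiplicative order of x modulo m
    k, y = 1, x % m
    while y != 1:
        y = y * x % m
        k += 1
    return k

p, n = 5, 3
g = 2  # a primitive root modulo 5
r = teichmuller_lift(g, p, n) * padic_exp(p, p, n) % p**n
assert r % p == g                                    # r = 117 (mod 125)
assert mult_order(r, p**n) == (p - 1) * p**(n - 1)   # order 100 = phi(125)
```

Here $l(r) = 1$ is co-prime with $p-1$ and $\alpha(r) = p$ generates the multiples of $p$, so $r$ has the full order $(p-1)p^{n-1}$.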

### Ok, and what about $p=2$?

As we mentioned earlier, $\exp r$ is still well-defined for $r \equiv 0 \pmod 4$, so modulo $2^k$ for $k>2$, the $\exp$-$\log$ pair defines a bijection between numbers that have remainder $1$ modulo $4$ and numbers that have remainder $0$ modulo $4$.

As such, it means that powers of $\exp 4$ generate all the numbers that are equal to $1$ modulo $4$. This fact highlights that among remainders modulo $2^k$ there is an element of order $2^{k-2}$ that generates the multiplicative subgroup of numbers that are equal to $1$ modulo $4$.

Therefore the whole multiplicative group of remainders modulo $2^k$ is generated by $-1$ and $\exp 4$.
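This too can be checked directly; below is a small sketch (again, my own helper, not code from the article) that evaluates $\exp 4$ modulo $2^k$ and verifies that $(-1)^a (\exp 4)^b$ runs over all odd residues:

```python
from fractions import Fraction

def padic_exp2(x, k):
    # exp(x) modulo 2^k via the truncated series, assuming x = 0 (mod 4);
    # the j-th summand then has 2-adic valuation at least j + 1
    mod = 2**k
    total, term = Fraction(0), Fraction(1)
    for j in range(k + 2):
        total += term
        term = term * x / (j + 1)
    return total.numerator * pow(total.denominator, -1, mod) % mod

k = 6
mod = 2**k
e = padic_exp2(4, k)  # exp(4) = 13 (mod 64)
residues = {(-1)**a * pow(e, b, mod) % mod
            for a in (0, 1) for b in range(2**(k - 2))}
assert residues == {x for x in range(mod) if x % 2 == 1}
```

The $2 \cdot 2^{k-2} = 2^{k-1}$ products are pairwise distinct and exhaust the odd residues, confirming that $-1$ and $\exp 4$ generate the whole group.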


 » 6 weeks ago, # | ← Rev. 3 →   +10

By the way, Newton's method, which is used to derive e.g. inverses of formal power series in $x$, works just fine with $p$-adic numbers as well, where it's called Hensel's lifting. One nice example of this technique in competitive programming is the following problem about automorphic numbers:

Timus — Square Country 5. Count all integers $x$ of length at most $n$ such that the first $n$ digits of $x^2$ are $x$ itself.

Formally speaking, we have to find $x$ such that $x^2 \equiv x \pmod{10^n}$. The equation above has $4$ solutions modulo $10$, which are $x\in \{0, 1, 5, 6\}$. With Hensel's lemma, each of them can be uniquely extended modulo $10^n$ using Newton's algorithm.

In Newton's method terms, we need to find zeroes of the function $F(x) = x^2 - x$. Assuming $x \equiv x_k \pmod{10^{2^k}}$ and knowing $x_0$, Newton's method gives the following formula:

$x_{k+1} \equiv x_k - \frac{F(x_k)}{F'(x_k)} \equiv x_k - \frac{x_k^2 - x_k}{2x_k-1} \equiv \frac{x_k^2}{2x_k-1} \pmod{10^{2^{k+1}}}.$

The inverse of $t$ modulo $10^k$ can also be computed with Newton's method as

$x_{k+1} \equiv x_k (2-x_k t) \pmod{10^{2^{k+1}}}.$

With this in mind, finding the inverse modulo $10^n$ and then finding automorphic numbers can be implemented as

```python
def inv10(x, n):
    # inverse of x modulo 10**n via Newton's iteration, for x co-prime with 10
    y = x % 10
    if y == 3 or y == 7:
        y = 10 - y
    k = 1
    while k < n:
        k *= 2
        y = y * (2 - y * x) % (10**k)
    return y % 10**n

def autom(x, n):
    # lift a solution of x^2 = x (mod 10) to a solution modulo 10**n
    k = 1
    while k < n:
        k *= 2
        x = x * x * inv10(2 * x - 1, k) % (10**k)
    return x % 10**n
```

using Python for long arithmetic, leading to an $O(n \log n)$ solution if FFT is used for multiplications.
 » 6 weeks ago, # | ← Rev. 2 →   +8

Your main argument actually needs $g^{p-1} \cong_{p^n} 1$, not just that $g$ is a primitive root mod $p$. Otherwise you run into trouble since $g^{l(x) + l(y)}$ might not equal $g^{(l(x) + l(y)) \!\mod (p-1)}$. (Of course, it is easy to lift $g$ so that it satisfies this equation.)

It's also possible to observe that $x$ is a primitive root mod $p^n$ if and only if $l(x)$ is coprime with $p-1$ and $\alpha(x) / p$ is coprime with $p$, and these are determined entirely by $x \!\mod p^2$. So any primitive root mod $p^2$ is a primitive root mod $p^n$ for all $n$. (This conclusion can also be proved more directly using the observation that if $x \cong_{p^n} 1 + kp^{n-1}$ and $\max(p,n) > 2$, then $x^p \cong_{p^{n+1}} 1 + kp^n$.)

PS: If you know only the basics of FFT/NTT, the large-power-of-two roots of $1$ mod $2^k$ might excite you. But this doesn't actually lead anywhere, because the necessary* analogue of "primitive $n$-th root of unity" in a commutative ring that is not a field is "root of the $n$-th cyclotomic polynomial," and no such roots exist in $\mathbb{Z}_{2^k}$. ("Artificially" introducing such roots leads to the Schönhage-Strassen algorithm, of course.)

*OK, the exact condition, being a principal $n$-th root of unity, is a slightly weaker condition. However, any principal $n$-th root of unity is a root of $n$ times the $n$-th cyclotomic polynomial, and taking the inverse discrete Fourier transform involves a division by $n$ anyway.
•  » » 6 weeks ago, # ^ |   0 Yep, you're absolutely right about $g^{p-1} \equiv 1 \pmod{p^n}$, thanks for pointing this out! I just got a bit confused with how it would work in the ring of polynomials modulo $x^n$. It also adds a bit of sense to my comment above, as lifting wasn't mentioned in the article before :)
•  » » 6 weeks ago, # ^ |   0 By the way, what would be the problem with FFT/NTT? Inverse FFT would probably require division by $2$, which is not that good modulo $2^k$ and maybe would require computations with higher powers of $2$, but is there anything else?
•  » » » 6 weeks ago, # ^ | ← Rev. 2 →   0 I edited my post at about the same time as this comment clarifying just a little. Any base $\omega$ which is a principal $2^n$-th root of unity mod $2^{k+n}$ satisfies $2^n \cdot \Phi_{2^n}(\omega) \cong_{2^{k+n}} 0$ and thus $\Phi_{2^n}(\omega) \cong_{2^k} 0$. But $\Phi_{2^n}(\omega) = \omega^{2^{n-1}} + 1$ and $-1$ has no square root mod $4$, so such an $\omega$ cannot exist if $n>1$.