Everule's blog

By Everule, history, 18 months ago, In English

Those of us who have been doing competitive programming for a long time have come to love the representation of a number as an infinite prime power vector, i.e.

$$$n = \prod p_i^{e_i} = [e_1, e_2, \ldots ] = v_n$$$

where $$$n$$$ is a number and $$$p_i$$$ is the $$$i$$$-th prime number. This representation has many useful properties for problems, such as the following mappings (a small worked example is given right after the list):

$$$n \times m \sim v_n + v_m$$$
$$$n / m \sim v_n - v_m$$$
$$$n \mid m \sim v_n \le v_m$$$
$$$\gcd(n, m) \sim \min(v_n, v_m)$$$
$$$\operatorname{lcm}(n, m) \sim \max(v_n, v_m)$$$
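
For a quick sanity check of these mappings, take $$$n = 12 = 2^2 \cdot 3$$$ and $$$m = 18 = 2 \cdot 3^2$$$, so $$$v_n = [2, 1, 0, \ldots]$$$ and $$$v_m = [1, 2, 0, \ldots]$$$. Then $$$\min(v_n, v_m) = [1, 1, 0, \ldots]$$$, which corresponds to $$$\gcd(12, 18) = 6$$$, and $$$\max(v_n, v_m) = [2, 2, 0, \ldots]$$$, which corresponds to $$$\operatorname{lcm}(12, 18) = 36$$$.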

All the above operations are done elementwise. In some cases, however, the problem authors may effectively forbid you from factorising the numbers, for example by raising $$$n$$$ to around $$$10^{36}$$$, where no simple factorisation algorithm finishes in reasonable time. Using effective prime factorisation, we can still use these properties freely on a set of numbers $$$S$$$. If this set contained all integers, we would be forced to truly prime factorise each integer; but we only need these operations on a subset of integers, which makes our task a lot simpler. We will build a finite array $$$q$$$ of size $$$t$$$ with the following properties.

$$$(1)$$$ Any element $$$x \in S$$$ is representable by $$$q$$$, i.e. there exists $$$[f_1, f_2, \ldots, f_t] = v'_x$$$ such that $$$x = \prod q_i^{f_i}$$$.

$$$(2)$$$ For all the above operations, replacing $$$v_n$$$ with $$$v'_n$$$ should produce the same result.

Let us show that if $$$q$$$ satisfies $$$(1)$$$ and $$$\gcd(q_i, q_j) = 1$$$ for all $$$i < j$$$, then $$$q$$$ satisfies $$$(2)$$$.

Proofs
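
In brief (a sketch of the argument, not the full write-up): since the $$$q_i$$$ are pairwise coprime, every prime $$$p$$$ divides at most one $$$q_i$$$; primes dividing none of the $$$q_i$$$ have exponent $$$0$$$ in every representable number and can be ignored. For a representable $$$x = \prod q_i^{f_i}$$$, the exponent of $$$p$$$ in $$$x$$$ is therefore $$$f_i \cdot e_p(q_i)$$$ for that single index $$$i$$$, where $$$e_p(q_i) > 0$$$ is a fixed constant. Multiplying by a positive constant commutes with addition, subtraction, $$$\min$$$, $$$\max$$$ and $$$\le$$$, so performing these operations elementwise on the $$$f$$$-vectors agrees, prime by prime, with performing them on the true exponent vectors, which is exactly $$$(2)$$$.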

Let us now consider how we can find $$$q$$$ given $$$S$$$. We proceed element by element: assume we already have a valid $$$q$$$ for the elements processed so far, and we want to add another element.

Now let us add the next element $$$x \in S$$$. Go through $$$q$$$ until we spot an element that has a common factor with $$$x$$$. Let that element be $$$q_i$$$.

Let $$$P(q)$$$ be the set of values that can be produced as a product of (possibly repeated) elements of $$$q$$$. Notice that if $$$q$$$ has two elements $$$a, b$$$ with $$$a \mid b$$$, then the pair $$$a, b$$$ can be replaced by $$$a, \frac{b}{a}$$$, and $$$P$$$ will not lose any elements. Let us do this recursively until neither divides the other, or an element becomes $$$1$$$. Let us call this procedure reduce_pair.
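
A minimal sketch of reduce_pair with this behaviour; the name matches the routine in the linked implementation, but this signature and the use of long long are my own simplification (the actual code is templated so it can work with wider integer types):

    // Sketch only: while neither element is 1, divide the one that is
    // divisible by the other. Stops when neither divides the other, or
    // when one of them has been reduced to 1. Replacing (a, b) with
    // (a, b / a) never loses any representable products.
    void reduce_pair(long long& a, long long& b) {
        while (a != 1 && b != 1) {
            if (b % a == 0)      b /= a;
            else if (a % b == 0) a /= b;
            else break;
        }
    }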

(It may be useful to look at the example implementation in parallel from here.)

We run the procedure reduce_pair on the elements $$$q_i, x$$$. If $$$x$$$ becomes $$$1$$$, we are done. If $$$q_i$$$ becomes $$$1$$$, we remove it from the set and continue.

We then run an iterative algorithm on a list $$$L$$$, initialised with $$$[q_i, x]$$$. Maintain a pointer $$$ptr$$$, initialised to $$$1$$$, with the invariant that the elements before $$$L[ptr]$$$ are pairwise coprime. Run reduce_pair on each element before $$$ptr$$$ against $$$L[ptr]$$$; then add their gcd to the end of the list if it is not $$$1$$$, and divide both elements by it. This makes all elements up to $$$ptr + 1$$$ pairwise coprime. There can only be as many elements in this list as there are distinct primes in $$$q_i$$$ and $$$x$$$, so the size of the list will be at most $$$O(\log{\log{\max(S)}})$$$.
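
Here is a sketch of that list procedure as a standalone helper. The name split_coprime is hypothetical: in the linked submission the corresponding routine is solve_inner and it works in place on the basis. Also, as the author notes later in the comments, the reduce_pair pass inside this loop is an optimisation rather than a requirement, so the sketch omits it; dividing both elements by their gcd once already makes that pair coprime.

    #include <cstddef>
    #include <numeric>   // std::gcd (C++17)
    #include <vector>

    // Given two values a and b (here: the matched basis element q_i and the
    // new element x), return pairwise coprime values > 1 such that both a and
    // b are products of powers of the returned values. Hypothetical helper,
    // not the exact solve_inner from the linked code.
    std::vector<long long> split_coprime(long long a, long long b) {
        std::vector<long long> L = {a, b};
        // Invariant: L[0..ptr) is pairwise coprime at the start of each step.
        for (std::size_t ptr = 1; ptr < L.size(); ++ptr) {
            for (std::size_t j = 0; j < ptr; ++j) {
                long long g = std::gcd(L[j], L[ptr]);
                if (g == 1) continue;
                L[j]   /= g;
                L[ptr] /= g;       // now gcd(L[j], L[ptr]) == 1
                L.push_back(g);    // g is checked against everything later
            }
        }
        std::vector<long long> res;
        for (long long v : L) if (v > 1) res.push_back(v);
        return res;
    }

In the add_element quoted further down in the comments, solve_inner appears to do this splitting in place: the gcds are appended to the end of basis, basis[i] is removed if it becomes $$$1$$$, and whatever part of val survives the scan over the whole basis is pushed as a new element at the end.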

Since this procedure is run $$$O(\log{x})$$$ times, the whole algorithm runs in $$$O(n^2\log{A} + n\log{A} \times (\log{\log{A}})^2)$$$, where $$$A = \max(S)$$$, and we obtain our array $$$q$$$.

Example implementation : 159590539


»
18 months ago, # | +19

I did indeed use something like this before; I can't remember the exact problem.

»
18 months ago, # | +16

Wait, does this algo also give the smallest set with the given properties?

  • »
    »
    18 months ago, # ^ | 0

    I doubt it. However, one strong property we have is that this is the smallest set that works on the elements generated by $$$S$$$, i.e. all elements that can be created by applying the 5 operations to some elements of $$$S$$$.

»
18 months ago, # | +17

Great blog! I believe I have used this idea before when trying to implement a way to represent a limited kind of irrational number. More precisely, I wanted to represent irrational numbers of the following form.

$$$a_1 \sqrt{k_1} + a_2 \sqrt{k_2} + \cdots + a_n \sqrt{k_n}$$$

For convenience, we also want all values under the root to be square-free, as square factors are better pulled out into the multiplied values (the "coefficients") instead.

For these values to be square-free, every prime exponent should be below 2, so we can think of their implicit prime power vector as a binary vector. For convenience, let us define these functions to convert between numbers and their prime power vectors.

$$$V(n)=[e_{n,1},e_{n,2},\cdots]$$$
$$$N(v)=\prod p_i ^{v_i}$$$

Now, we know that multiplying two values corresponds to adding their prime power vectors. However, we want to keep the power of each prime below 2, which suggests working over the field $$$\mathbb{F}_2$$$ ("binary operations"): keep primes with power 0 or 1, and move any power of 2 out into the coefficient. We can now define multiplication between two square roots with coefficients as follows.

$$$a \sqrt{k} \times a' \sqrt{k'} = a a' N(V(k) \& V(k')) \sqrt{N(V(k) \oplus V(k'))}$$$

This lets us compute the product of two sums of roots in quadratic time, assuming there are few enough distinct factors that their implicit vectors can be represented as array indices (or keys of a map, if you want). The other two operations, addition and subtraction, are trivial to implement, so I only explained how multiplication works.
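
A minimal sketch of that multiplication rule for two single terms, assuming square-free radicands that fit in 64 bits; the Term struct and the mul function are hypothetical names, not from any library. Since the radicands are square-free, $$$N(V(k) \& V(k')) = \gcd(k, k')$$$ and $$$N(V(k) \oplus V(k')) = \frac{k k'}{\gcd(k, k')^2}$$$, so no explicit prime power vectors are needed:

    #include <numeric>   // std::gcd (C++17)

    // Hypothetical representation of a single term a * sqrt(k), k square-free.
    struct Term {
        long long a;   // coefficient
        long long k;   // square-free radicand
    };

    // a*sqrt(k) * a'*sqrt(k') = (a*a'*g) * sqrt((k/g) * (k'/g)), g = gcd(k, k').
    // For square-free k, k' this is exactly N(V(k) & V(k')) = g and
    // N(V(k) xor V(k')) = k*k' / g^2. Watch for overflow on large inputs.
    Term mul(const Term& x, const Term& y) {
        long long g = std::gcd(x.k, y.k);
        return { x.a * y.a * g, (x.k / g) * (y.k / g) };
    }

Multiplying two sums of such terms is then the quadratic-time procedure described above: multiply every pair of terms and merge the coefficients of equal radicands.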

I have been thinking of a way to do this with FFT since then, but I still could not find one. Please notify me if anyone finds it.

»
18 months ago, # | +10

I have a slight implementation question about the add_element method:

    void add_element(T val){
        for(int i=0;i<basis.size();i++){
            reduce_pair(val, basis[i]);
            if(basis[i] == 1){
                swap(basis[i], basis.back());
                basis.pop_back();
                continue;
            }
            if(val == 1) return;
            if(gcd(basis[i], val) > 1) solve_inner(i, val);
        }
        if(val > 1) basis.push_back(val);
    }

When basis[i] == 1, you do continue after replacing basis[i] with basis.back(). But this essentially means you are skipping over doing reduce_pair on val and what was originally basis.back().

I think this is OK because if basis[i] == 1, this means the original basis[i] had the new val as a divisor and thus every other element of basis will be coprime with the new value of val. But in this case, why not make this a break instead of a continue? I tried submitting your code with a break in place of this continue and it got AC: 176634276

  • »
    »
    18 months ago, # ^ | Rev. 2, +10

    There is also

    if(basis[pos] == 1){
        swap(basis[pos], basis.back());
        basis.pop_back();
    }
    

    in solve_inner, which might skip iterating some elements in basis as well. But I am not sure whether skipping some elements in this case is also fine.

    • »
      »
      »
      18 months ago, # ^ | Rev. 2, 0

      It always works here, since if basis[pos] == 1, there was some g > 1 added to the end of the basis, which would be useless to check against again.

  • »
    »
    18 months ago, # ^ | 0

    This is true, but I didn't think of it at the time. When I wrote solve_inner I realised the reduce_pair is unnecessary, but I left it there because it's faster that way.

    The continue was a mistake in my thinking. break works there and is faster. You can just have solve_inner inside the for loop and it would work.

»
18 months ago, # | Rev. 2, +21

Can you say a little bit more about the $$$\log \log \max (S)$$$ bound? I see that the expected number of prime divisors of a number up to $$$n$$$ is $$$O(\log \log n)$$$, but why the worst case?

Super nice and well-written blog, thanks!

  • »
    »
    18 months ago, # ^ | 0

    If all $$$n$$$ numbers are primes, won't the size be just $$$n$$$?

    • »
      »
      »
      18 months ago, # ^ | +8

      Based on the example implementation, when we call add_element, it may happen that we need to call solve_inner multiple times. $$$\log \log \max(S)$$$ is supposed to bound how much the basis vector can increase in size for each add_element call.

      In the case of a sequence of $$$n$$$ primes, solve_inner won't be called, and so the size of basis will indeed be $$$n$$$. But think of a case where you add a new number $$$X$$$ and each of its prime factors is an element of basis. You will end up adding precisely the number of prime factors of $$$X$$$ to basis, which is not bounded by $$$\log \log X$$$.

      My question is: is this really the bound, or just an approximation for CP purposes? As I said, the expected number of primes that divide $$$n$$$ is $$$\log \log n$$$, but the only worst case bound I could find is $$$\text{inverseFactorial}(n)$$$, which is worse than claimed by the blog.

      • »
        »
        »
        »
        18 months ago, # ^ | 0

        I'm very sorry, I thought you meant that the size of basis will be $$$\log \log \max(S)$$$.

      • »
        »
        »
        »
        18 months ago, # ^ | 0

        It was an approximation for CP purposes. I find it really hard to pin down the exact worst case, as it depends on really complicated factors. It seems to be something related to integer ratios between sets of primes. I'm sure with a better understanding, you can optimize that loop.

»
13 months ago, # | 0

Just found this lol — this reminds me a lot of finding an orthogonal basis of a given vector space!