Elegia's blog

By Elegia, history, 3 years ago, In English

This is a part of my report on China Team Selection 2021. I might translate some other parts later if you are interested.

I'm going to show that we can calculate $$$f(G)$$$ for any polynomial $$$f\in R[x]$$$ and any set power series $$$G: 2^{n} \rightarrow R$$$ using $$$\Theta(n^2 2^n)$$$ operations in $$$R$$$.

Let's first consider a special case: calculating $$$F = \exp G$$$. We regard set power series as truncated multivariate polynomials, i.e., elements of $$$R[x_1,\dots,x_n]/(x_1^2,\dots,x_n^2)$$$. We can see that $$$F' = FG'$$$ for whichever partial derivative $$$\frac{\partial}{\partial x_k}$$$ we take. Taking $$$k = n$$$ and comparing the parts free of $$$x_n$$$, we can rewrite this equation as $$$[x_n^1] F = ([x_n^1]G)\cdot [x_n^0]F$$$. Hence we have reduced the problem to calculating $$$[x_n^0] F$$$, and one subset convolution then gives the rest. The time complexity is $$$T(n)=\Theta(n^2 2^n) + T(n-1)$$$, which is exactly $$$\Theta(n^2 2^n)$$$. I call this method "Point-wise Newton Iteration".
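To make the reduction concrete, here is a minimal sketch in Python (exact arithmetic via `Fraction`; the names `subset_convolution` and `sps_exp` are mine, not standard). Set power series are stored as lists of length $$$2^n$$$ indexed by bitmask, and the helper is the usual ranked zeta/Möbius subset convolution in $$$\Theta(n^2 2^n)$$$.

```python
from fractions import Fraction

def subset_convolution(a, b, n):
    # c[S] = sum of a[I] * b[J] over disjoint I, J with I | J == S,
    # computed with the ranked zeta/Moebius transforms in Theta(n^2 2^n).
    fa = [[Fraction(0)] * (1 << n) for _ in range(n + 1)]
    fb = [[Fraction(0)] * (1 << n) for _ in range(n + 1)]
    for s in range(1 << n):
        fa[bin(s).count('1')][s] = Fraction(a[s])
        fb[bin(s).count('1')][s] = Fraction(b[s])
    for f in (fa, fb):                      # zeta transform of every rank slice
        for k in range(n + 1):
            for i in range(n):
                for s in range(1 << n):
                    if s >> i & 1:
                        f[k][s] += f[k][s ^ (1 << i)]
    h = [[Fraction(0)] * (1 << n) for _ in range(n + 1)]
    for k in range(n + 1):                  # multiply the rank vectors pointwise
        for j in range(k + 1):
            for s in range(1 << n):
                h[k][s] += fa[j][s] * fb[k - j][s]
    for k in range(n + 1):                  # Moebius (inverse zeta) transform
        for i in range(n):
            for s in range(1 << n):
                if s >> i & 1:
                    h[k][s] -= h[k][s ^ (1 << i)]
    return [h[bin(s).count('1')][s] for s in range(1 << n)]

def sps_exp(g, n):
    # F = exp G for a set power series g (list of length 2^n) with g[0] == 0.
    # Write G = G0 + x_n * G1; then F = F0 + x_n * (F0 * G1), where F0 = exp G0.
    if n == 0:
        return [Fraction(1)]
    half = 1 << (n - 1)
    f0 = sps_exp(g[:half], n - 1)                   # [x_n^0] F
    f1 = subset_convolution(f0, g[half:], n - 1)    # [x_n^1] F: one subset convolution
    return f0 + f1
```

For example, `sps_exp([0, 1, 1, 0], 2)` returns `[1, 1, 1, 1]`, i.e. $$$\exp(x_1 + x_2) = 1 + x_1 + x_2 + x_1 x_2$$$.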

This method is more natural than calculating $$$\exp$$$ on each rank vector, because $$$\frac{G^k}{k!}$$$ counts all "partitions" into $$$k$$$ nonempty sets in $$$G$$$, so the exponential exists whenever $$$1, \dots, n$$$ are invertible in $$$R$$$.

Now we try to apply this kind of iteration to an arbitrary polynomial $$$f$$$. Let $$$F=f(G)$$$; then $$$F'=f'(G)G'$$$. So we have reduced the problem to calculating $$$[x_n^0]f(G)$$$ and $$$[x_n^0]f'(G)$$$. This has become two problems! But don't worry: in the second round of the recursion, we need $$$[x_{n-1}^0 x_n^0]$$$ of $$$f(G), f'(G), f''(G)$$$. The number of problems grows only linearly. You will find that the complexity is $$$\sum_{0\le k\le n} (n-k) \cdot \Theta(k^2 2^k)$$$.

It's not hard to see that the complexity is still $$$\Theta(n^2 2^n)$$$, because $$$\sum_{k\ge 0} k\cdot 2^{-k}$$$ converges.
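Continuing the sketch above (and reusing `Fraction` and `subset_convolution` from it), here is one possible way to code this recursion; the name `sps_composite` and the interface (a coefficient list for $$$f$$$, a bitmask-indexed list for $$$G$$$) are my own choices. After processing $$$k$$$ variables it keeps $$$f^{(j)}(G)$$$ with the remaining variables set to $$$0$$$, for every $$$j$$$ that is still needed.

```python
def sps_composite(f, g, n):
    # F = f(G): f is a polynomial given by its coefficient list f[0..m],
    # g is a set power series over n variables (list of length 2^n).
    m = len(f) - 1
    c = Fraction(g[0])
    # Level 0: d[j] = f^{(j)}(c), the j-th derivative of f at the constant term of G.
    d = []
    for j in range(n + 1):
        val = Fraction(0)
        for i in range(j, m + 1):
            coef = Fraction(f[i])
            for t in range(i - j + 1, i + 1):   # falling factorial i * (i-1) * ... * (i-j+1)
                coef *= t
            val += coef * c ** (i - j)
        d.append([val])
    # Level k: extend every d[j] from the first k-1 variables to the first k, using
    # the Newton step [x_k^1] f^{(j)}(G) = ([x_k^1] G) * [x_k^0] f^{(j+1)}(G).
    for k in range(1, n + 1):
        half = 1 << (k - 1)
        g1 = g[half:2 * half]                    # [x_k^1] G, restricted to x_1..x_{k-1}
        d = [d[j] + subset_convolution(d[j + 1], g1, k - 1) for j in range(n - k + 1)]
    return d[0]
```

As a quick check, `sps_composite([0, 0, 1], g, n)` returns $$$G^2$$$ in the quotient ring, and level $$$k$$$ performs $$$n-k+1$$$ subset convolutions on $$$k-1$$$ variables, which matches the complexity sum above up to reindexing.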

Noting that $$$\frac{(A+B)^2}{2}-\frac{A^2}{2}-\frac{B^2}{2}=AB$$$, we have proved the equivalence between composition and subset convolution.
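For instance (assuming $$$2$$$ is invertible in $$$R$$$, and reusing `sps_composite` and `Fraction` from the sketches above), one direction of this equivalence can be spelled out as follows; it is only meant to illustrate the reduction, not to be an efficient way to multiply.

```python
def subset_convolution_via_squaring(a, b, n):
    # A * B recovered from the polarization identity ((A+B)^2 - A^2 - B^2) / 2,
    # where each square is obtained by composing with the polynomial f(x) = x^2 / 2.
    half_square = lambda v: sps_composite([0, 0, Fraction(1, 2)], v, n)
    ab = [Fraction(x) + Fraction(y) for x, y in zip(a, b)]
    return [p - q - r for p, q, r in zip(half_square(ab), half_square(a), half_square(b))]
```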

Here are some remarks:

  • If $$$G$$$ has no constant term, $$$f$$$ is better understood as an EGF.
  • This algorithm has the same complexity for calculating $$$f(A, B)$$$ and so on.
  • We can optimize the constant factor of this algorithm by always working with the rank vectors during the iteration. It can then beat many methods that only work for some specific $$$f$$$.
  • If you have some knowledge of the "transposition principle", you will find that, for any fixed $$$F, G: 2^n \rightarrow R$$$, the transposed algorithm gives $$$\sum_S F_S \cdot (G^k)_S$$$ for all $$$k$$$. This immediately yields a $$$\Theta(n^2 2^n)$$$ algorithm for calculating the whole chromatic polynomial of a graph with $$$n$$$ vertices.

»
3 years ago, # |

Does it generalize to $$$R[x_1,\dots,x_n]/(x_1^{k_1},\dots,x_n^{k_n})$$$?

  • »
    »
    3 years ago, # ^ |

    I thought about it for arbitrary $$$(k_i)$$$, and at least we can do $$$\Theta(nK\log K)$$$, where $$$K=\prod k_i$$$, when $$$f$$$ is an elementary function. I made a problem (in Chinese) for calculating $$$\log$$$.

    I'm not sure about arbitrary $$$f$$$, because the current best algorithm [1] for $$$n=1$$$ is very complicated :)

    [1] Kedlaya, Kiran & Umans, Christopher. (2008). Fast polynomial factorization and modular composition.

»
3 years ago, # |

Looks like for "or" convolution we consider $$$R[x_1,\dots,x_n]/(x_1^{2}-x_1,\dots,x_n^{2}-x_n)$$$ and for "xor" it is $$$R[x_1,\dots,x_n]/(x_1^{2}-1,\dots,x_n^{2}-1)$$$ in this notation...

  • »
    »
    3 years ago, # ^ |

    That's right, and then FMT and FWT are exactly multipoint evaluations.
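    As a small illustration in Python (the function names are mine): evaluating at every point of $$$\{0,1\}^n$$$ is the zeta transform (FMT), and evaluating at every point of $$$\{1,-1\}^n$$$ is the Walsh–Hadamard transform (FWT); multiplying the evaluations pointwise then gives OR- and XOR-convolution in the respective quotient rings.

```python
def evaluate_on_01(a, n):
    # Value of sum_T a[T] * prod_{i in T} x_i at the point "x_i = 1 iff i in S", for every S.
    # This is exactly the zeta transform: result[S] = sum of a[T] over T subset of S.
    a = a[:]
    for i in range(n):
        for s in range(1 << n):
            if s >> i & 1:
                a[s] += a[s ^ (1 << i)]
    return a

def evaluate_on_pm1(a, n):
    # Value at the point "x_i = -1 if i in S else +1", for every S.
    # This is exactly the Walsh-Hadamard transform: result[S] = sum_T (-1)^{|S & T|} a[T].
    a = a[:]
    for i in range(n):
        for s in range(1 << n):
            if not (s >> i & 1):
                u, v = a[s], a[s | (1 << i)]
                a[s], a[s | (1 << i)] = u + v, u - v
    return a
```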

    • »
      »
      »
      3 years ago, # ^ |

      What does FMT mean?

      • »
        »
        »
        »
        3 years ago, # ^ |

        fast Möbius transform?

      • »
        »
        »
        »
        3 years ago, # ^ |
        Rev. 2

        Fast Möbius Transformation (on the subset lattice). It's called the Möbius transformation in the subset convolution paper and some other articles. It's also known as the zeta transformation, e.g. in https://arxiv.org/abs/0711.2585 (by the same authors as the previous paper), where its inverse is called the Möbius transformation. In Enumerative Combinatorics, it is closely related to the zeta function, and its inverse is closely related to the Möbius function.

        • »
          »
          »
          »
          »
          3 years ago, # ^ |

          Where do you guys get this much knowledge from? I mean, what are the sources where you find the theory? I am a newbie and I have difficulty finding the right sources to study :( Please help!!

          • »
            »
            »
            »
            »
            »
            3 years ago, # ^ |

            Although this stuff is not necessary at all to succeed in CP or even to become an LGM, university classes usually teach (or try to teach) advanced algorithmic theory. In most cases, those classes are not part of the departmental program or are graduate classes, but you can always take them. Even if the staff does not do a good job teaching it, they at least provide the necessary materials and refer you to sources. It's up to you, after all, to choose to learn advanced theory, since the only place you will use it is academia (or some crazy rare Div. 1 E problem).

          • »
            »
            »
            »
            »
            »
            3 years ago, # ^ |

            The subset convolution paper is the origin of all these things; you can just google "subset convolution" and find it. The Enumerative Combinatorics book is the standard textbook(?) on enumerative combinatorics. But just as MITRocks said, "this stuff is not necessary at all to succeed in CP or even to become an LGM".

  • »
    »
    3 years ago, # ^ |

    By the way, the subset convolution can be viewed as multipoint evaluation on $$$R[z][x_1,\dots,x_n]/(x_1^2-zx_1,\dots,x_n^2-zx_n)$$$, finally letting $$$z \to 0$$$.

    • »
      »
      »
      3 years ago, # ^ |
      Rev. 3

      Yep, that's exactly how I understood it when I wrote this. Though $$$[z^l x^k]AB$$$ has a meaning as well: it sums $$$a_i b_j$$$ over all $$$i, j$$$ with $$$|i \cap j|=l$$$ and $$$i \cup j = k$$$.
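      A hedged sketch of this viewpoint in Python (`mul_with_z` is my own name): evaluate at the points $$$x_i \in \{0, z\}$$$, keeping each value as a polynomial in $$$z$$$ truncated at degree $$$n$$$, multiply pointwise, and invert the evaluation. The entry at mask $$$S$$$ and $$$z$$$-degree $$$|S|+l$$$ is then $$$[z^l x^S]AB$$$, i.e. the sum of $$$a_i b_j$$$ over $$$i \cup j = S$$$, $$$|i \cap j| = l$$$; the slice $$$l = 0$$$ ($$$z \to 0$$$) is the subset convolution.

```python
def mul_with_z(a, b, n):
    # Multiply two set power series in R[z][x_1..x_n]/(x_i^2 - z*x_i) by evaluating
    # every x_i at {0, z}; each "value" is a polynomial in z truncated at degree n.
    def eval_at_points(coeffs):
        # v[S](z) = sum over T subset of S of coeffs[T] * z^|T|  (a weighted zeta transform)
        v = [[0] * (n + 1) for _ in range(1 << n)]
        for s in range(1 << n):
            v[s][bin(s).count('1')] = coeffs[s]
        for i in range(n):
            for s in range(1 << n):
                if s >> i & 1:
                    for l in range(n + 1):
                        v[s][l] += v[s ^ (1 << i)][l]
        return v
    va, vb = eval_at_points(a), eval_at_points(b)
    # pointwise product of the z-polynomials, truncated at degree n
    vc = [[sum(va[s][j] * vb[s][l - j] for j in range(l + 1)) for l in range(n + 1)]
          for s in range(1 << n)]
    # invert the evaluation (a Moebius transform), coefficient-wise in z
    for i in range(n):
        for s in range(1 << n):
            if s >> i & 1:
                for l in range(n + 1):
                    vc[s][l] -= vc[s ^ (1 << i)][l]
    # [z^l x^S] of A*B sits at vc[S][|S| + l]; the l = 0 column is the subset convolution
    popcount = lambda s: bin(s).count('1')
    return [[vc[s][popcount(s) + l] if popcount(s) + l <= n else 0 for l in range(n + 1)]
            for s in range(1 << n)]
```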

    • »
      »
      »
      3 years ago, # ^ |
      Rev. 2

      You gave me the idea to consider $$$R[z][x_1, \dots, x_n]/(x_1^2-z^2, \dots,x_n^2-z^2)$$$ and evaluate it on the cube $$$\{-z,z\}^n$$$. It also yields the subset convolution as $$$z \to 0$$$, but $$$[z^l x^k]AB$$$ in this ring is the sum of $$$a_i b_j$$$ over $$$i \Delta j = k$$$ and $$$|i \cap j|=\frac{l}{2}$$$.

      • »
        »
        »
        »
        3 years ago, # ^ |
        Rev. 3

        This algorithm was presented by vfleaking in "集合幂级数的性质与应用及其快速算法" (Properties, Applications, and Fast Algorithms of Set Power Series). The notion of a "set power series" (集合幂级数) itself was also introduced in that article; you can find it in the IOI2015中国国家候选队论文集 (IOI 2015 China Candidate Team Essay Collection). Note that this algorithm cannot be used when the ring $$$R$$$ has characteristic 2 (while the FMT-like one is fine). At that time, set power series were not regarded as elements of a quotient ring; later, people found that many things can be explained naturally in this way, and this approach has become increasingly well known in some small circles over the years.

»
3 years ago, # |

All hail Elegia, the king of combinatorics!

»
3 years ago, # |

Can you please elaborate on the transposition principle part?

  • »
    »
    3 years ago, # ^ |

    If you don't know what the transposition principle is, google it. It's something like duality. Note that substituting a fixed set power series into a polynomial, i.e. $$$f \mapsto f(G)$$$, is a linear map from polynomials to set power series. Just apply the transposition principle to it.