### zscoder's blog

By zscoder, history, 3 years ago

Hi everyone! Inspired by the recent Codeforces Round 641, I decided to write an introductory tutorial on generating functions here. I am by no means an expert in generating functions, so I will write about what I currently know about them. amiya has written a really interesting blog here on Codeforces about more advanced applications of generating functions, but I think there is no English tutorial on the basics of this topic yet (or at least on CP sites). Thus, I would like to share about this topic here.

I plan to split this tutorial into two parts. The first part (this post) will be an introduction to generating functions for those who have never learned about them at all, and some standard examples and showcases of generating functions. The second part will be a collection of several applications of generating functions in CP-style problems. If you are already familiar with generating functions, you may just skip to Part 2 directly.

In this part, many examples are taken from the book generatingfunctionology, which I think is also a great introductory text on generating functions.

### What are Generating Functions?

Let's say we have a sequence $a_0, a_1, a_2, ...$. We associate with $a$ a series $A$ which "encodes" the terms in $a$ with its coefficients.

Formally, for a sequence of numbers $\{a_{i}\}_{i=0}^{\infty}$, we define the ordinary generating function (OGF) of $a$ to be $A(x) = \displaystyle\sum_{i=0}^{\infty}a_{i}x^{i}$.

For example, consider the Fibonacci sequence $f$ with the terms $0, 1, 1, 2, 3, 5, 8, …$. Then, $F(x) = 0 + x + x^{2} + 2x^{3} + 3x^{4} + 5x^{5} + 8x^{6} + …$.

You may imagine that you are putting the (infinitely many) terms of the sequence in a line and assigning a power of $x$ to each term of the sequence in order. Adding them up, you get an “infinite polynomial” that encodes the sequence. The nice thing about generating functions is that the series is often easier to manipulate than the sequence itself, which can uncover surprising properties of the sequence.

There are other types of generating functions, such as the Exponential Generating Function (EGF) and Dirichlet Generating Function. We will look at some examples of EGF later in this post, but for the next few examples we will focus on OGFs.

Before that, we introduce a simple notation. For a series $A(x) = \displaystyle\sum_{n \ge 0}a_{n}x^{n}$, we let $[x^{n}]A(x) = a_{n}$ (i.e. the coefficient of $x^n$ in $A$).

#### Simple Examples of OGFs

Let’s start with a very simple example. What is the OGF of the sequence $1,1,1,...,1$? By definition, we have $A(x) = 1+x+x^{2}+x^{3}+...$. Does this series look familiar?

$A(x)$ is actually a geometric series with common ratio $x$! Thus, we can use the geometric series formula to write $A(x) = \frac{1}{1-x}$.

Note: Don’t we need to care about convergence issues like $|x|<1$? Well, it depends on what you want to do with your series. We will work with formal power series most of the time, which allows us to ignore the issue of convergence. However, this also means that we can’t “substitute” a fixed value for $x$ without care. For example, the series $A(x)$ diverges at $x=2$ but converges at, say, $x=-\frac{1}{2}$. In this post, we won’t deal with the analytic properties of the series most of the time. If you really need to substitute values, though, the general rule of thumb is that you can do it if the series converges.

We can manipulate generating functions very much like how we would manipulate other algebraic expressions. Here is a classic example.

Example. Consider the sequence $f_{n}$ defined by $f_{0}=0$, $f_{1}=1$ and $f_{n}=f_{n-1}+f_{n-2}$ for $n \ge 2$. Find the OGF of $f$ (we usually denote it with a capital letter, say $F$).

Clearly, $f_{n}$ is the $n$-th Fibonacci number. We will use the recurrence relation to find the OGF of $f_{n}$.

Firstly, we need to make the terms of the series appear. The easiest way to do this is to multiply the recurrence relation by $x^{n}$ to obtain $f_{n}x^{n} = f_{n-1}x^{n} + f_{n-2}x^{n}$.

Next, we sum up the terms on both sides over all valid $n$ (in this case $n \ge 2$) to obtain:

$\displaystyle\sum_{n=2}^{\infty}f_{n}x^{n} = x\displaystyle\sum_{n=2}^{\infty}f_{n-1}x^{n-1} + x^{2}\displaystyle\sum_{n=2}^{\infty}f_{n-2}x^{n-2}$.

This is equivalent to:

$F(x) - f_{0}x^{0} - f_{1}x^{1} = x(F(x) - f_{0}x^{0}) + x^{2}F(x)$.

$\Rightarrow F(x) - x = (x+x^2)F(x)$

$\Rightarrow F(x)(1-x-x^2)=x$

$\Rightarrow F(x) = \frac{x}{1-x-x^2}$

Thus, we obtain the OGF of Fibonacci numbers.
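As a quick sanity check, here is a small Python sketch (the helper name is mine, not from the post) that expands $\frac{x}{1-x-x^2}$ as a power series by inverting the denominator, recovering the Fibonacci numbers:

```python
from fractions import Fraction

def series_inverse(q, n):
    """First n coefficients of 1/Q(x), assuming q[0] != 0."""
    p = [Fraction(1) / q[0]]
    for m in range(1, n):
        s = sum(q[k] * p[m - k] for k in range(1, min(m, len(q) - 1) + 1))
        p.append(-s / q[0])
    return p

# F(x) = x / (1 - x - x^2): invert the denominator, then shift by one power of x.
inv = series_inverse([Fraction(1), Fraction(-1), Fraction(-1)], 10)
fib = [Fraction(0)] + inv[:9]          # multiplying by x shifts the coefficients
print([int(c) for c in fib])           # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```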

Let’s see another example of OGF of a common sequence.

Example. The Catalan numbers $c_{n}$ are defined by $c_{0} = 1$ and $c_{n+1} = \displaystyle\sum_{i=0}^{n}c_{i}c_{n-i}$ for $n \ge 0$. Find the OGF of $c_{n}$.

Again, our strategy is to multiply both sides by a power of $x$ and sum over all $n$. We obtain:

$\displaystyle\sum_{n=0}^{\infty}c_{n+1}x^{n+1} = \displaystyle\sum_{n=0}^{\infty}\sum_{i=0}^{n}c_{i}c_{n-i}x^{n+1} = \displaystyle x\sum_{n=0}^{\infty}\sum_{i=0}^{n}c_{i}x^{i}c_{n-i}x^{n-i}$

The LHS is easy to interpret: it is just $C(x) - 1$.

How do we interpret the RHS? We claim that it is $xC(x)^{2}$. Consider the expansion of $C(x)^2$. Which terms contribute to the coefficient of $x^{n}$? If we look at $C(x)^2 = (c_{0}+c_{1}x+c_{2}x^2+...)(c_{0}+c_{1}x+c_{2}x^2+...)$, we see that we can only obtain $x^{n}$ by picking $c_{i}x^{i}$ from the first bracket and $c_{n-i}x^{n-i}$ from the second bracket. Hence, the coefficient of $x^{n}$ in $C(x)^2$ is $\displaystyle\sum_{i=0}^{n}c_{i}c_{n-i}$, as desired.

Hence, we have $C(x)-1=xC(x)^{2}$, which is a quadratic equation in $C(x)$! Using the quadratic formula, we can obtain $C(x) = \frac{1 \pm \sqrt{1 - 4x}}{2x}$. Which sign should we choose? If we choose the + sign, then the numerator $\rightarrow 2$ as $x \rightarrow 0$, while the denominator $\rightarrow 0$, so the ratio will become infinite at $0$. However, $C(x)$ can be expanded as a power series at $x=0$, so $C(x)$ should converge at $x=0$. Thus, we should choose the minus sign to obtain $C(x) = \frac{1 - \sqrt{1-4x}}{2x}$ (indeed, by L'Hopital Rule it converges to $c_{0}=1$ at $x=0$).
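We can verify the functional equation $C(x)-1=xC(x)^{2}$ numerically by treating both sides as truncated polynomials; a minimal Python sketch (names are my own):

```python
N = 12
# Catalan numbers from the recurrence c_{n+1} = sum c_i * c_{n-i}.
c = [0] * N
c[0] = 1
for n in range(N - 1):
    c[n + 1] = sum(c[i] * c[n - i] for i in range(n + 1))

def mul(a, b, n):
    """Truncated product of two power series given as coefficient lists."""
    out = [0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < n:
                out[i + j] += ai * bj
    return out

c2 = mul(c, c, N)
lhs = [c[0] - 1] + c[1:]            # C(x) - 1
rhs = [0] + c2[:N - 1]              # x * C(x)^2
assert lhs == rhs
print(c)  # [1, 1, 2, 5, 14, 42, ...]
```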

Tip: Try looking for common sequences and see if you can derive the formula for their OGFs from scratch. It is really helpful if you can derive the intuition where you can see the functional equation directly from the recurrence.

#### OGFs in more than one variable

We don’t have to limit ourselves to one variable. We can have multivariable OGFs. Let’s look at the following simple example.

Example. The binomial coefficients $f(n,k)$ are defined by the recurrences $f(n,0)=1$ for $n \ge 0$, $f(0,n)=0$ for $n \ge 1$ and $f(n,k)=f(n-1,k)+f(n-1,k-1)$ for $n,k \ge 1$. Find the OGF of $f(n,k)$.

We define the OGF $F(x,y) = \displaystyle\sum_{n \ge 0}\sum_{k \ge 0}f(n,k)x^{n}y^{k}$. As usual, we try to relate $F$ with itself using the recurrence. We have

$\displaystyle\sum_{n \ge 1}\sum_{k \ge 1}f(n,k)x^{n}y^{k} = x\displaystyle\sum_{n \ge 1}\sum_{k \ge 1}f(n-1,k)x^{n-1}y^{k} + xy\displaystyle\sum_{n \ge 1}\sum_{k \ge 1}f(n-1,k-1)x^{n-1}y^{k-1}$.

Hence, we have

$F(x,y) - \displaystyle\sum_{n \ge 0}x^{n} = x(F(x,y) - \displaystyle\sum_{n \ge 0}x^{n}) + xyF(x,y)$

$\Rightarrow F(x,y) - \frac{1}{1-x} = (x+xy)F(x,y) - \frac{x}{1-x}$

$\Rightarrow (1-x-xy)F(x,y) = 1$

$\Rightarrow F(x,y) = \frac{1}{1-x-xy}$

From the bivariate OGF, we can deduce some interesting identities in one-variable. For example, we have

$F(x,y) = \frac{1}{1-x-xy} = \frac{1}{1 - x(y+1)} = \displaystyle\sum_{k \ge 0}(y+1)^{k}x^{k}$

Hence, $[x^{n}]F(x,y) = (y+1)^{n}$. However, $[x^{n}]F(x,y) = \displaystyle\sum_{k=0}^{\infty}f(n,k)y^{k}$, so $\displaystyle\sum_{k=0}^{\infty}f(n,k)y^{k} = (y+1)^{n}$. Note that this gives the same result as the binomial theorem: $(y+1)^{n} = \binom{n}{0}y^{0}+\binom{n}{1}y^{1}+...+\binom{n}{n}y^{n}$.

It is more interesting to look at $[y^{k}]F(x,y)$ in terms of an OGF in $x$. We have

$F(x,y) = \frac{1}{(1-x)-xy} = \frac{\frac{1}{1-x}}{1-\frac{x}{1-x}y} = \frac{1}{1-x}(1 + \frac{x}{1-x}y + (\frac{x}{1-x})^2y^2 + …)$

Comparing coefficients, we obtain $[y^{k}]F(x,y) = \frac{x^{k}}{(1-x)^{k+1}}$, so using the same argument as before, we have

$\displaystyle\sum_{n=0}^{\infty}f(n,k)x^{n} = \frac{x^{k}}{(1-x)^{k+1}}$.

This identity is interesting because it allows us to “expand” $\frac{1}{(1-x)^{k+1}}$ in terms of the OGF of binomial coefficients! In particular, we have $[x^{n-k}]\frac{1}{(1-x)^{k+1}} = \binom{n}{k}$, so $[x^{n}]\frac{1}{(1-x)^{k}} = \binom{n+k-1}{k-1}$. This identity is very useful especially when dealing with sums involving binomial coefficients where $k$ is fixed and $n$ varies.
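A quick Python check of this identity, using the fact that multiplying by $\frac{1}{1-x}$ takes prefix sums of the coefficients (the helper name is mine):

```python
from math import comb

def inv_pow(k, n):
    """First n coefficients of 1/(1-x)^k: multiplying by 1/(1-x)
    takes prefix sums, so apply prefix sums k times to 1."""
    coeffs = [1] + [0] * (n - 1)
    for _ in range(k):
        for i in range(1, n):
            coeffs[i] += coeffs[i - 1]
    return coeffs

k, n = 4, 10
series = inv_pow(k, n)
assert all(series[m] == comb(m + k - 1, k - 1) for m in range(n))
print(series)  # [1, 4, 10, 20, 35, 56, 84, 120, 165, 220]
```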

#### Exponential Generating Function

So far we have looked at ordinary generating functions. Now, let me introduce a new type of generating functions called the exponential generating function.

Definition: Let $a_{0},a_{1},a_{2},...$ be a sequence of numbers. Then, the EGF of $a$ (say $A(x)$) is defined as $\displaystyle\sum_{i=0}^{\infty}\frac{a_{i}}{i!}x^{i}$.

In other words, the EGF is just the OGF but every term $a_{i}$ is now divided by $i!$. Why the weird choice of division by $i!$? The next example will shed some light on this choice.

Example. Let $b_{n}$ denote the $n$-th Bell number, which counts the number of ways to partition $\{1,2,...,n\}$ into disjoint sets. For example, $b_{3}=5$, because there are $5$ ways to partition $\{1,2,3\}$ into sets: $\{1,2,3\}$; $\{1,2\},\{3\}$; $\{1,3\},\{2\}$; $\{1\},\{2,3\}$; $\{1\},\{2\},\{3\}$. Find the EGF of $b_{n}$.

Our first step is to look for a recurrence relation. Suppose you have this as a Div. 2 C problem. What is the simplest dp you can come up with?

We can fix the size of the set containing the element $1$, say $i$. Then, there are $\binom{n-1}{i-1}$ ways to choose the other $i-1$ elements of the set, and $b_{n-i}$ ways to partition the remaining $n-i$ elements. Hence, we obtain the simple recurrence formula

$b_{n} = \displaystyle\sum_{i=1}^{n}\binom{n-1}{i-1}b_{n-i} = \displaystyle\sum_{i=0}^{n-1}\binom{n-1}{i}b_{i}$ for $n \ge 1$.

By precomputing binomial coefficients, this is an $O(n^2)$ dp, which should be sufficient for a Div. 2 C. However, why stop here? Suppose the problem asks you to find $b_{n}$ for $n \le 3 \cdot 10^{5}$. Can you still solve this?
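For concreteness, here is a short Python sketch of that $O(n^2)$ dp (the function name is my own):

```python
def bell_numbers(n):
    """b[0..n] via the recurrence b_m = sum_{i<m} C(m-1, i) * b_i (O(n^2))."""
    # Precompute binomial coefficients with Pascal's triangle.
    C = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        C[i][0] = 1
        for j in range(1, i + 1):
            C[i][j] = C[i - 1][j - 1] + C[i - 1][j]
    b = [1] + [0] * n
    for m in range(1, n + 1):
        b[m] = sum(C[m - 1][i] * b[i] for i in range(m))
    return b

print(bell_numbers(6))  # [1, 1, 2, 5, 15, 52, 203]
```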

The answer is yes. Let’s use our recurrence to find the EGF of $b_{n}$. Note that

$b_{n} = \displaystyle\sum_{i=0}^{n-1}\binom{n-1}{i}b_{i} = \displaystyle\sum_{i=0}^{n-1}\frac{(n-1)!}{i!(n-1-i)!}b_{i}$

$\Rightarrow n\frac{b_{n}}{n!} = \displaystyle\sum_{i=0}^{n-1}\frac{b_i}{i!}\frac{1}{(n-1-i)!}$

$\Rightarrow \displaystyle\sum_{n \ge 1}n\frac{b_{n}}{n!}x^{n} = \displaystyle\sum_{n \ge 1} x\displaystyle\sum_{i=0}^{n-1}\frac{b_{i}x^{i}}{i!}\frac{x^{n-1-i}}{(n-1-i)!}$

Now we see why EGFs are convenient for us. If our convolutions involve binomial coefficients (which is often the case when we deal with combinatorial objects), then multiplying EGFs kind of “automatically” helps us multiply our terms by a suitable binomial coefficient (more details later).

Back to our problem, we want to write everything in terms of $B(x)$. The RHS is easy, since it is just $xB(x)e^{x}$ (recall that $\displaystyle\sum_{n \ge 0}\frac{x^{n}}{n!}$ is the Maclaurin series of $e^x$). However, the LHS needs a bit of work, since we have the unfortunate $nb_{n}$ term instead of the $b_{n}$ term. To deal with this obstacle, we use a common trick when dealing with formal power series: differentiate $B(x)$, then multiply by $x$. Verify that if $A(x)$ is a formal power series with $A(x) = a_{0}+a_{1}x^{1}+a_{2}x^{2}+... = \displaystyle\sum_{n \ge 0}a_{n}x^{n}$, then $xA'(x) = a_{1}x^{1} + 2a_{2}x^{2} + 3a_{3}x^{3} + ... = \displaystyle\sum_{n \ge 0}na_{n}x^{n}$.

Thus, looking back at our equation, we have

$xB'(x) = xB(x)e^{x}$, which implies $\frac{B'(x)}{B(x)} = e^{x}$. If you are familiar with calculus, you will recognize that if we integrate both sides, we get $\ln B(x) = e^{x} + c$. Since $b_{0}=1$, $B(0)=1$ and we have $c = -1$. Thus, $B(x) = e^{e^{x}-1}$ is our desired EGF.

So, how do we find $b_{n}$ faster than in $O(n^2)$ time? The idea is that we can find the first $n$ terms of $e^{P(x)}$ in $O(n\log n)$ time, so we just need to compute the first few terms of our EGF and read off the answer! In this 2-part article, I will omit explaining how to perform certain well-known polynomial operations such as $\sqrt{P(x)}$, $\ln(P(x))$, etc. in $O(n\log n)$ or $O(n\log^{2} n)$ time. There are already tutorials written for them (for example cp-algorithms). Hence, I will just state that we can perform those polynomial operations, since that is not the main focus of this article.
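As an illustration, here is a Python sketch that recovers the Bell numbers from $B(x)=e^{e^{x}-1}$. It uses the $O(n^2)$ recurrence $nb_{n} = \sum_{k \ge 1} kp_{k}b_{n-k}$ for $B = \exp(P)$ (which follows from $B' = P'B$) as a simple stand-in for the $O(n\log n)$ algorithm mentioned above:

```python
from fractions import Fraction
from math import factorial

def series_exp(p, n):
    """First n coefficients of exp(P(x)) for a series with p[0] == 0,
    via m*b_m = sum_k k*p_k*b_{m-k} (an O(n^2) stand-in for the
    O(n log n) algorithm)."""
    b = [Fraction(1)] + [Fraction(0)] * (n - 1)
    for m in range(1, n):
        b[m] = sum(Fraction(k) * p[k] * b[m - k] for k in range(1, m + 1)) / m
    return b

n = 7
# P(x) = e^x - 1 as a truncated series: p_k = 1/k! for k >= 1, p_0 = 0.
p = [Fraction(0)] + [Fraction(1, factorial(k)) for k in range(1, n)]
egf = series_exp(p, n)
bell = [int(egf[m] * factorial(m)) for m in range(n)]
print(bell)  # [1, 1, 2, 5, 15, 52, 203]
```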

### Algebraic Manipulation of Generating Functions

Here are some common ways to manipulate generating functions and how they change the sequence they are representing. In this section, $a_{i}$, $b_{i}$ will represent sequences and $A(x)$ and $B(x)$ are their corresponding generating functions (OGF or EGF depending on context which will be stated clearly). As an exercise, verify these statements.

#### Addition

For both OGF and EGF, $C(x)=A(x)+B(x)$ generates the sequence $c_{n}=a_{n}+b_{n}$.

#### Shifting

For OGF, $C(x) = x^{k}A(x)$ generates the sequence $c_{n}=a_{n-k}$ where $a_{i}=0$ for $i<0$. For EGF, you need to integrate the series $A(x)$ $k$ times to get the same effect.

For OGF, $C(x) = \frac{A(x) - (a_{0} + a_{1}x + a_{2}x^2 + ... + a_{k-1}x^{k-1})}{x^{k}}$ generates the sequence $c_{n} = a_{n+k}$.

For EGF, $C(x) = A^{(k)}(x)$ generates the sequence $c_{n} = a_{n+k}$, where $A^{(k)}(x)$ denotes $A$ differentiated $k$ times.

#### Multiplication by $n$

For both OGF and EGF, $C(x) = xA'(x)$ generates the sequence $c_{n}=na_{n}$.

In general, you can obtain the generating function for the sequence whose terms are multiplied by any polynomial in $n$ by iterating this operation (but I do not include the general form here to avoid confusion).

#### Convolution

This is really the most important operation on generating functions.

For OGF, $C(x)=A(x)B(x)$ generates the sequence $c_{n} = \displaystyle\sum_{k=0}^{n}a_{k}b_{n-k}$.

For EGF, $C(x)=A(x)B(x)$ generates the sequence $c_{n} = \displaystyle\sum_{k=0}^{n}\binom{n}{k}a_{k}b_{n-k}$ (verify this!). This is also why EGF is useful in dealing with recurrences involving binomial coefficients or factorials.
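A quick numerical check of the EGF convolution rule on random sequences (Python, exact arithmetic via `Fraction`; the setup is my own):

```python
from fractions import Fraction
from math import comb, factorial
import random

random.seed(0)
n = 8
a = [random.randint(0, 9) for _ in range(n)]
b = [random.randint(0, 9) for _ in range(n)]

# Treat a and b as EGFs: divide each term by a factorial before convolving.
A = [Fraction(a[i], factorial(i)) for i in range(n)]
B = [Fraction(b[i], factorial(i)) for i in range(n)]
C = [sum(A[k] * B[m - k] for k in range(m + 1)) for m in range(n)]

for m in range(n):
    direct = sum(comb(m, k) * a[k] * b[m - k] for k in range(m + 1))
    assert C[m] * factorial(m) == direct
print("EGF product gives the binomial convolution")
```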

#### Power of Generating Function

This is just a direct consequence of convolution, but I include it here because it is so commonly used.

For OGF, $C(x)=A(x)^{k}$ generates the sequence $c_{n} = \displaystyle\sum_{i_{1}+i_{2}+...+i_{k}=n}a_{i_{1}}a_{i_{2}}...a_{i_{k}}$

For EGF, $C(x)=A(x)^{k}$ generates the sequence $c_{n} = \displaystyle\sum_{i_{1}+i_{2}+...+i_{k}=n}\frac{n!}{i_{1}!i_{2}!...i_{k}!}a_{i_{1}}a_{i_{2}}...a_{i_{k}}$

#### Prefix Sum Trick

This only works for OGF, but is useful to know. Suppose we want to generate the sequence $c_{n} = a_{0}+a_{1}+...+a_{n}$. Then, we can take $C(x) = \frac{1}{1-x}A(x)$.

Why does this work? If we expand the RHS, we get $(1+x+x^{2}+...)A(x)$. To obtain the coefficient of $x^n$ which is $c_{n}$, we need to choose $x^{i}$ from the first bracket and $a_{n-i}x^{n-i}$ from $A(x)$, so summing over all $i$ gives us $c_{n} = \displaystyle\sum_{i=0}^{n}a_{i}$.

### List of Common Series

Before we delve into applications, I want to compile a short list of series that we will use frequently below. They are

$\frac{1}{1-x} = 1 + x + x^{2} + ... = \displaystyle\sum_{n \ge 0}x^{n}$

$-\ln (1-x) = x + \frac{x^2}{2} + \frac{x^3}{3} + ... = \displaystyle\sum_{n \ge 1}\frac{x^{n}}{n}$

$e^{x} = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + ... = \displaystyle\sum_{n \ge 0}\frac{x^{n}}{n!}$

$(1-x)^{-k} = \binom{k-1}{0}x^{0} + \binom{k}{1}x^{1} + \binom{k+1}{2}x^{2} + ... = \displaystyle\sum_{n}\binom{n+k-1}{n}x^{n}$

Our goal in many problems will be to reduce the EGF or OGF involved in the problem into some composition of functions that we know above.

You can find a more complete list on page 57 of generatingfunctionology.

### Generating Functions in Counting Problems

Generating functions is a powerful tool in enumerative combinatorics. There are so many applications that I can only cover a small fraction of them here. If you are interested in more examples of counting using generating functions, you can try the books generatingfunctionology and Enumerative Combinatorics.

Here, I will show some classical examples of counting problems involving generating functions. In the next post, I will focus on CP problems which utilize generating functions.

#### Catalan Numbers, revisited

We have shown before that the OGF of the Catalan numbers is $C(x) = \frac{1 - \sqrt{1-4x}}{2x}$. Suppose we want to find a closed-form formula for $c_{n}$. Of course, it is well-known that $c_{n} = \frac{1}{n+1}\binom{2n}{n}$, but let's pretend we don't know that yet. We want to "expand" our generating function $C(x)$, but there is a troublesome square root in our way.

This is where the generalized binomial theorem comes to our rescue. Before that, we need to define generalized binomial coefficients.

Definition. Let $r$ be any complex number and $n$ be a nonnegative integer. Then, $\binom{r}{n} = \frac{r(r-1)...(r-(n-1))}{n!}$.

This is the same as the usual binomial coefficients, but now we no longer require the first term to be a nonnegative integer.

Next, we show a special case of the theorem.

Theorem. Let $r$ be a real number. Then

$(1+x)^{r} = \displaystyle\sum_{n \ge 0}\binom{r}{n}x^{n}$.

The proof is just differentiating the left side $n$ times and comparing the constant terms. I leave this as an exercise.

In particular, our mysterious function $\sqrt{1-4x} = (1-4x)^{\frac{1}{2}} = \displaystyle\sum_{n \ge 0}\binom{\frac{1}{2}}{n}(-4x)^{n}$

$= \displaystyle\sum_{n \ge 0}\frac{1}{2} \cdot \frac{-1}{2} \cdot \frac{-3}{2} \cdot ... \cdot \frac{-(2n-3)}{2} \cdot \frac{1}{n!} \cdot (-4)^{n}x^{n}$

$= 1 + \displaystyle\sum_{n \ge 1}\frac{(-1)^{n-1}(1 \cdot 3 \cdot ... \cdot (2n-3))}{2^{n}} \cdot \frac{(-4)^{n}}{n!} x^{n}$

$= 1 + \displaystyle\sum_{n \ge 1}-2^{n} \cdot \frac{(2n-2)!}{2^{n-1}(n-1)!}\cdot \frac{1}{n!}x^{n}$

$= 1 + \displaystyle\sum_{n \ge 1}\frac{-2 \cdot (2n-2)!}{(n-1)!n!}x^{n}$

$= 1 + \displaystyle\sum_{n \ge 1}-\frac{2}{n} \cdot \binom{2n-2}{n-1}x^{n}$.

Hence, $C(x) = \frac{1-\sqrt{1-4x}}{2x} = \frac{1}{2x}\left[1 - 1 - \displaystyle\sum_{n \ge 1}-\frac{2}{n} \cdot \binom{2n-2}{n-1}x^{n}\right]$

$= \displaystyle\sum_{n \ge 1}\frac{1}{n} \cdot \binom{2n-2}{n-1}x^{n-1}$

$= \displaystyle\sum_{n \ge 0}\frac{1}{n+1}\binom{2n}{n}x^{n}$, as desired.
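Both steps of this derivation are easy to check numerically; the sketch below verifies the coefficients of $\sqrt{1-4x}$ from the generalized binomial theorem and the resulting closed form against the Catalan recurrence (helper names are mine):

```python
from fractions import Fraction
from math import comb, factorial

def gen_binom(r, n):
    """Generalized binomial coefficient r(r-1)...(r-n+1)/n!."""
    num = Fraction(1)
    for j in range(n):
        num *= r - j
    return num / factorial(n)

# Coefficient of x^n in sqrt(1-4x) is gen_binom(1/2, n) * (-4)^n ...
half = Fraction(1, 2)
for n in range(1, 10):
    assert gen_binom(half, n) * (-4) ** n == Fraction(-2, n) * comb(2 * n - 2, n - 1)

# ... and the closed form read off from C(x), checked against the recurrence.
catalan = [1]
for n in range(9):
    catalan.append(sum(catalan[i] * catalan[n - i] for i in range(n + 1)))
assert all(catalan[n] == comb(2 * n, n) // (n + 1) for n in range(10))
print(catalan)  # [1, 1, 2, 5, 14, 42, 132, 429, 1430, 4862]
```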

#### Some problems involving permutations

For a permutation $p = (p_{1},p_{2},...,p_{n})$, consider the graph formed by the edges $i \rightarrow p_{i}$. It is well-known that the graph is a union of several disjoint cycles.

Problem. Count the number of permutations of length $n$ with $k$ cycles.

These numbers are also called Stirling numbers of the first kind.

Let $c_{n} = (n-1)!$ (for $n \ge 1$) be the number of permutations of length $n$ that form a single cycle. Let $C(x) = \displaystyle\sum_{n \ge 1}\frac{c_{n}}{n!}x^{n}$ denote the EGF of $c$. Let $f_{n}$ be our answer and $F(x)$ be its EGF. The key observation here is that $F(x) = \frac{1}{k!}C(x)^{k}$.

Suppose for a moment our cycles are labelled from $1$ to $k$. For every permutation, label each element with the label of the cycle it is in. Let's fix the length of cycle $i$ to be $a_{i}$ (so $\sum a_{i} = n$). Then, there are $c_{a_{i}}$ ways to permute the elements in the $i$-th cycle and $\frac{n!}{a_{1}!a_{2}!...a_{k}!}$ ways to assign cycle labels to the elements of the permutation. Finally, in our actual problem, the order of cycles doesn't matter, so we need to divide by $k!$ in the end.

To summarize, the answer is $\frac{n!}{k!}\displaystyle\sum_{a_{1}+a_{2}+...+a_{k}=n}\frac{c_{a_{1}}c_{a_{2}}...c_{a_{k}}}{a_{1}!a_{2}!...a_{k}!}$. Verify that $\displaystyle\sum_{a_{1}+a_{2}+...+a_{k}=n}\frac{c_{a_{1}}c_{a_{2}}...c_{a_{k}}}{a_{1}!a_{2}!...a_{k}!}$ is $[x^{n}]C(x)^{k}$, so $F(x) = \frac{1}{k!}C(x)^{k}$ (the $n!$ disappears into $F(x)$ because we are dealing with EGFs).

Let's assume this is a CP problem and we are asked to find the answer for $(n,k)$. Then, we can calculate the answer directly using generating functions in $O(n\log n)$ since we can do exponentiation in $O(n\log n)$ ($P(x)^{k} = \exp(k\ln(P(x)))$).
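As a concrete check, here is a Python sketch that computes a Stirling number of the first kind from $\frac{1}{k!}C(x)^{k}$ (with naive $O(n^2)$ multiplication standing in for FFT-based exponentiation) and compares it against the classic dp $s(n,k)=s(n-1,k-1)+(n-1)s(n-1,k)$:

```python
from fractions import Fraction
from math import factorial

def mul(a, b, n):
    """Truncated power-series product."""
    out = [Fraction(0)] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < n:
                out[i + j] += ai * bj
    return out

n, k = 7, 3
# EGF of a single cycle: c_m/m! = (m-1)!/m! = 1/m for m >= 1.
C = [Fraction(0)] + [Fraction(1, m) for m in range(1, n + 1)]
Ck = [Fraction(1)] + [Fraction(0)] * n
for _ in range(k):
    Ck = mul(Ck, C, n + 1)
via_gf = int(Ck[n] / factorial(k) * factorial(n))

# Cross-check against the dp for unsigned Stirling numbers of the first kind.
s = [[0] * (k + 1) for _ in range(n + 1)]
s[0][0] = 1
for i in range(1, n + 1):
    for j in range(1, k + 1):
        s[i][j] = s[i - 1][j - 1] + (i - 1) * s[i - 1][j]
assert via_gf == s[n][k]
print(via_gf)  # 1624
```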

Actually, what we just did is a special case of the more general Exponential Formula. However, I feel that it is easier to understand the Exponential Formula from these specific examples and you should try to understand it intuitively until it becomes common sense.

Problem. Count the number of permutations of length $n$ such that all cycle lengths are in a fixed set of positive integers $S$.

We use the same trick as the previous problem, but let $c_{i}=0$ if $i$ is not in $S$.

This time, we need to find $[x^{n}]\displaystyle\sum_{k \ge 0}\frac{1}{k!}C(x)^{k} = [x^{n}]\exp(C(x))$ because we need to sum over all values of $k$ (number of cycles), which can also be computed easily.

Problem. Find the expected number of cycles of a permutation of length $n$.

To compute the expected number of cycles, we count the sum of number of cycles over all permutations of length $n$. Let $g_{n}$ denote the sum of number of cycles over all permutations of length $n$ and $G(x)$ as the EGF of $g$. Using the same function $C$ in the previous problems, we need to find (note the extra factor $k$ which is the difference between this and the previous examples) $[x^{n}]G(x) = [x^{n}]\displaystyle\sum_{k \ge 0}\frac{k}{k!}C(x)^{k} = [x^{n}]C(x)\displaystyle\sum_{k \ge 1}\frac{1}{(k-1)!}C(x)^{k-1} = [x^{n}]C(x)\exp(C(x))$

However, $C(x) = \displaystyle\sum_{k \ge 1}\frac{(k-1)!}{k!}x^{k} = \displaystyle\sum_{k \ge 1}\frac{x^{k}}{k} = -\ln(1 - x)$. Hence, $C(x)\exp(C(x)) = -\frac{\ln(1-x)}{(1-x)}$.

For $n \ge 1$, $[x^{n}](-\ln(1-x)) = \frac{1}{n}$. By the Prefix Sum trick, $[x^{n}]\frac{-\ln(1-x)}{1-x} = 1+\frac{1}{2}+...+\frac{1}{n}$.

Thus, $[x^{n}]G(x) = 1+\frac{1}{2}+...+\frac{1}{n}$. Since $\frac{g_n}{n!}$ is the expected number of cycles of a permutation of length $n$, our answer is $1 + \frac{1}{2}+... +\frac{1}{n}$, the $n$-th Harmonic number!
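This is a striking claim, so here is a brute-force Python check over all permutations of length up to $6$ (the helper name is mine):

```python
from fractions import Fraction
from itertools import permutations
from math import factorial

def cycle_count(p):
    """Number of cycles of the permutation i -> p[i]."""
    seen = [False] * len(p)
    cycles = 0
    for i in range(len(p)):
        if not seen[i]:
            cycles += 1
            j = i
            while not seen[j]:
                seen[j] = True
                j = p[j]
    return cycles

for n in range(1, 7):
    total = sum(cycle_count(p) for p in permutations(range(n)))
    harmonic = sum(Fraction(1, k) for k in range(1, n + 1))
    assert Fraction(total, factorial(n)) == harmonic
print("average number of cycles equals H_n for all n <= 6")
```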

We see that the exponential trick is viable when we are dealing with items that are made up from smaller pieces.

#### Stirling Numbers of the Second Kind

Problem. Find the number of ways to partition the set $\{1,2,...,n\}$ into $k$ subsets.

These numbers are also called Stirling numbers of the second kind.

Denote the answer by $f(n,k)$. The trick is to consider the series (also known as a deck enumerator) $D(x) = \displaystyle\sum_{n \ge 1}\frac{x^{n}}{n!}$. What is $D(x)^{k}$? We have $[x^{n}]D(x)^{k} = \displaystyle\sum_{a_{1}+a_{2}+...+a_{k}=n, a_{i} \ge 1}\frac{1}{a_{1}!a_{2}!...a_{k}!}$. This sum has a similar combinatorial interpretation as the ones in the previous problems. Let's assume the partition sets are labelled from $1$ to $k$. Then, $a_{i}$ denotes the size of the $i$-th set and there are $\frac{n!}{a_{1}!a_{2}!...a_{k}!}$ ways to assign a set to each element (by the multinomial theorem). However, we have counted each partition $k!$ times, since in our final answer the sets shouldn't be ordered. Thus, $k!f(n,k) = n![x^{n}]D(x)^{k}$.

Rearranging gives us $\frac{f(n,k)}{n!} = \frac{[x^{n}]D(x)^{k}}{k!}$. Hence, $\displaystyle\sum_{n \ge 0}\frac{f(n,k)}{n!}x^{n} = \frac{D(x)^{k}}{k!}$. Introducing the variable $y$ to correspond to the variable $k$, we have $\displaystyle\sum_{k \ge 0}\displaystyle\sum_{n \ge 0}\frac{f(n,k)}{n!}x^{n}y^{k} = \displaystyle\sum_{k \ge 0}\frac{[D(x)y]^{k}}{k!} = \exp(D(x)y)$.

Note: The series $H(x,y) = \displaystyle\sum_{k \ge 0}\displaystyle\sum_{n \ge 0}f(n,k)\frac{x^{n}}{n!}y^{k}$ is also known as a hand enumerator.

Thus, we have the simple formula $H(x,y) = \exp(D(x)y)$. Note that $D(x) = \displaystyle\sum_{n \ge 1}\frac{x^{n}}{n!} = e^{x} - 1$, so we have $H(x,y) = e^{(e^{x}-1)y}$ (note how similar this is to the EGF of Bell numbers! In fact $H(x,1)$ is the EGF of Bell numbers (can you see why?)).

To get the answer, we just need to find $n![x^{n}y^{k}]H(x,y) = n![x^{n}]\frac{(e^{x}-1)^{k}}{k!}$ which you can compute using polynomial operations efficiently.
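A Python sketch of this computation (again with naive polynomial multiplication standing in for the fast operations), cross-checked against the dp $S(n,k)=S(n-1,k-1)+kS(n-1,k)$:

```python
from fractions import Fraction
from math import factorial

def mul(a, b, n):
    """Truncated power-series product."""
    out = [Fraction(0)] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < n:
                out[i + j] += ai * bj
    return out

n, k = 8, 3
# D(x) = e^x - 1 truncated to degree n.
D = [Fraction(0)] + [Fraction(1, factorial(m)) for m in range(1, n + 1)]
Dk = [Fraction(1)] + [Fraction(0)] * n
for _ in range(k):
    Dk = mul(Dk, D, n + 1)
via_gf = int(Dk[n] * factorial(n) / factorial(k))

# Cross-check: S(n,k) = S(n-1,k-1) + k*S(n-1,k).
S = [[0] * (k + 1) for _ in range(n + 1)]
S[0][0] = 1
for i in range(1, n + 1):
    for j in range(1, k + 1):
        S[i][j] = S[i - 1][j - 1] + j * S[i - 1][j]
assert via_gf == S[n][k]
print(via_gf)  # 966
```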

#### Graph Counting

Problem. Find the number of vertex-labeled undirected graphs with $n$ vertices so that each vertex has degree $2$.

Every such graph must be a union of disjoint cycles (why?). As usual, we start by considering the generating function for one "component" in the item we need to count. Let $d_{n}$ denote the number of undirected cycles of length $n$ and $D(x)$ denote its EGF. Then, $D(x) = \displaystyle\sum_{n \ge 3}\frac{(n-1)!/2}{n!}x^{n} = \frac{1}{2}\displaystyle\sum_{n \ge 3}\frac{x^{n}}{n} = \frac{1}{2}\left(-\ln(1-x) - x - \frac{x^2}{2}\right)$.

Let $G(x)$ denote the EGF of our answer. Using the same argument as before, we find that $G(x) = \exp(D(x))$, so we get the formula $G(x) = \exp\left(\ln\left(\sqrt{\frac{1}{1-x}}\right) - \frac{x}{2} - \frac{x^2}{4}\right) = \frac{e^{-\frac{x}{2}-\frac{x^2}{4}}}{\sqrt{1-x}}$, and you can compute the coefficient of $x^{n}$ to obtain the answer.
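We can check this formula against brute force over all labeled graphs on a few vertices; the sketch below exponentiates $D(x)$ with the same $O(n^2)$ recurrence used for the Bell numbers (names are my own):

```python
from fractions import Fraction
from math import factorial
from itertools import combinations

N = 8
# d_n/n! = 1/(2n) for n >= 3, i.e. D(x) = (1/2)(-ln(1-x) - x - x^2/2).
d = [Fraction(0)] * 3 + [Fraction(1, 2 * m) for m in range(3, N)]
# Exponentiate: G = exp(D) via m*g_m = sum_k k*d_k*g_{m-k} (from G' = D'G).
g = [Fraction(1)] + [Fraction(0)] * (N - 1)
for m in range(1, N):
    g[m] = sum(k * d[k] * g[m - k] for k in range(1, m + 1)) / m
counts = [int(g[m] * factorial(m)) for m in range(N)]

def brute(n):
    """Count labeled graphs on n vertices in which every vertex has degree 2."""
    edges = list(combinations(range(n), 2))
    total = 0
    for mask in range(1 << len(edges)):
        deg = [0] * n
        for i, (u, v) in enumerate(edges):
            if mask >> i & 1:
                deg[u] += 1
                deg[v] += 1
        total += all(x == 2 for x in deg)
    return total

assert counts[:7] == [brute(n) for n in range(7)]
print(counts)  # [1, 0, 0, 1, 3, 12, 70, 465]
```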

Let's look at a trickier example.

Problem. Find the number of bipartite vertex-labeled graphs with $n$ vertices.

It is tempting to try a similar approach as the previous problem. We can relate the number of bipartite graphs with the number of connected bipartite graphs. Can we count the number of connected bipartite graphs easily? Unfortunately, it does not seem to be too easy to count.

Instead, let us color each vertex of the graph with red or blue, and count the number of colored bipartite graphs (not necessarily connected). Suppose we choose $k$ vertices to color red and $n-k$ to color blue. Then, there are $\binom{n}{k}$ ways to choose the coloring and $2^{k(n-k)}$ ways to choose the edges (since each edge must join a red vertex and a blue vertex). Thus, the number of colored bipartite graphs on $n$ vertices is $\displaystyle\sum_{k \ge 0}\binom{n}{k}2^{k(n-k)}$. Call this number $a_{n}$ and its EGF $A(x)$.

The next step is to relate the number of colored bipartite graphs with the number of colored connected bipartite graphs. Let $b_{n}$ denote the number of colored connected bipartite graphs on $n$ vertices and $B(x)$ be its EGF. Using a similar argument as before, we have the relation $A(x) = \exp(B(x))$, and thus $B(x) = \ln(A(x))$.

Returning to our original problem, our next step is to count the number of connected bipartite graphs on $n$ vertices (call the count $c_{n}$ and its EGF $C(x)$). However, this is easy, since each connected bipartite graph can be colored in exactly two ways (the coloring is determined once we choose the color of a single vertex). Hence, $C(x) = \frac{B(x)}{2}$.

Finally, let $d_{n}$ be the number of bipartite graphs on $n$ vertices and $D(x)$ be its EGF. Then, we have $D(x) = \exp(C(x))$ using the exponential argument. Thus, $D(x) = \exp(C(x)) = \exp\left(\frac{B(x)}{2}\right) = \exp\left(\frac{\ln(A(x))}{2}\right) = \sqrt{A(x)}$, which is a nice formula!
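The formula $D(x)=\sqrt{A(x)}$ is easy to test: take the coefficient-wise square root of the EGF of $a_{n}$ and compare with a brute-force count of bipartite graphs on up to $5$ vertices (Python sketch, helper names mine):

```python
from fractions import Fraction
from math import comb, factorial
from itertools import combinations

N = 6
# a_n = number of 2-colored bipartite graphs = sum_k C(n,k) * 2^{k(n-k)}.
a = [sum(comb(n, k) * 2 ** (k * (n - k)) for k in range(n + 1)) for n in range(N)]
A = [Fraction(a[n], factorial(n)) for n in range(N)]

# Coefficient-wise square root of A (with A[0] = 1): solve S^2 = A term by term.
S = [Fraction(1)] + [Fraction(0)] * (N - 1)
for n in range(1, N):
    S[n] = (A[n] - sum(S[i] * S[n - i] for i in range(1, n))) / 2
d = [int(S[n] * factorial(n)) for n in range(N)]

def is_bipartite(n, edge_mask, edges):
    color = [-1] * n
    for s in range(n):
        if color[s] == -1:
            color[s] = 0
            stack = [s]
            while stack:
                u = stack.pop()
                for i, (x, y) in enumerate(edges):
                    if edge_mask >> i & 1 and u in (x, y):
                        v = x + y - u
                        if color[v] == -1:
                            color[v] = 1 - color[u]
                            stack.append(v)
                        elif color[v] == color[u]:
                            return False
    return True

def brute(n):
    edges = list(combinations(range(n), 2))
    return sum(is_bipartite(n, m, edges) for m in range(1 << len(edges)))

assert d == [brute(n) for n in range(N)]
print(d)  # [1, 1, 2, 7, 41, 376]
```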

#### Placing Rooks and PIE

Let's look at a last example which demonstrates the use of the Inclusion-Exclusion Principle (PIE).

Consider an $n \times n$ chessboard where some cells are colored black and the others white. Suppose we magically know the sequence $r_{k}$, the number of ways to place $k$ non-attacking rooks on white cells of the chessboard (i.e. no two are on the same row or column, and no rook is on a black cell). Let $e_{k}$ denote the number of ways to place $n$ non-attacking rooks on the chessboard so that exactly $k$ of the rooks are on white cells. Can we find $e_{k}$ in terms of $r_{k}$?

The trick is that exact conditions are usually harder to count while "at least" conditions are easier to count. For a fixed subset of white cells $S$, denote $N(S)$ as the number of ways to place $n$ non-attacking rooks on the chessboard such that there is at least one rook on each cell in $S$. Let $n_{k} = \displaystyle\sum_{|S| = k}N(S)$.

We relate $n_{k}$ with $e_{k}$. Consider a subset $T$ of size $t$ and a way to place $n$ non-attacking rooks so that the set of white cells they occupy is exactly $T$. Every $k$-element subset of $T$ contributes to the sum $n_{k}$. Thus, we obtain the relation $n_{k} = \displaystyle\sum_{t \ge 0}\binom{t}{k}e_{t}$.

Let $N(x)$ and $E(x)$ be the OGFs of $n_{k}$ and $e_{k}$. We can derive a simple relation between $N(x)$ and $E(x)$. Indeed, we have

$N(x) = \displaystyle\sum_{k \ge 0}x^{k}\displaystyle\sum_{t \ge 0}\binom{t}{k}e_{t} = \displaystyle\sum_{t \ge 0}e_{t}\displaystyle\sum_{k \ge 0}\binom{t}{k}x^{k} = \displaystyle\sum_{t \ge 0}e_{t}(x+1)^{t} = E(x+1)$. Thus, we have the simple relation $E(x) = N(x-1)$.

It turns out that $n_{k}$ is usually much easier to find. In our problem, $n_{k} = r_{k}(n-k)!$, since we can first choose our set $S$ as any set of $k$ non-attacking rooks on white cells, then place the other $n-k$ rooks in $(n-k)!$ ways. Thus, we obtain $N(x) = \displaystyle\sum_{k \ge 0}r_{k}(n-k)!x^{k}$ and $E(x) = \displaystyle\sum_{k \ge 0}r_{k}(n-k)!(x-1)^{k}$. Hence, we can read $e_{j}$ from the coefficients of $E(x)$.
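Here is a Python sketch checking the whole pipeline on a small board of my own choosing: compute $r_{k}$ by brute force, obtain $e_{k}$ from $E(x)=N(x-1)$, and compare against direct enumeration of all $n!$ rook placements:

```python
from math import comb, factorial
from itertools import combinations, permutations

n = 4
# A hypothetical 4x4 board; the black cells are chosen arbitrarily.
black = {(0, 1), (1, 1), (1, 3), (2, 2), (3, 0)}
white = [(r, c) for r in range(n) for c in range(n) if (r, c) not in black]

# r_k: number of ways to place k non-attacking rooks on white cells.
r = [0] * (n + 1)
for k in range(n + 1):
    for cells in combinations(white, k):
        if len({x for x, y in cells}) == k and len({y for x, y in cells}) == k:
            r[k] += 1

# e_k by direct enumeration: full placements of n non-attacking rooks
# correspond to permutations; count how many rooks land on white cells.
e = [0] * (n + 1)
for p in permutations(range(n)):
    e[sum((i, p[i]) not in black for i in range(n))] += 1

# e_k from the generating function: E(x) = N(x-1) with n_k = r_k * (n-k)!.
E = [0] * (n + 1)
for k in range(n + 1):
    for j in range(k + 1):
        E[j] += r[k] * factorial(n - k) * comb(k, j) * (-1) ** (k - j)

assert E == e
print(r, e)
```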

### Proving Some Interesting Theorems via Generating Functions

This is not entirely CP related but here are some cool theorems you can prove easily with generating functions.

#### Partition in odd parts = Partition in distinct parts

A partition of $n$ into $k$ parts is a multiset of positive integers of size $k$ which sums to $n$. For example, $\{3,1,1\}$ is a partition of $5$ into $3$ parts. Note that the order of elements does not matter, so $\{3,1,1\}$ and $\{1,3,1\}$ are the same partition.

You might have heard of the well-known problem of proving that the number of partitions of $n$ into odd parts is equal to the number of partitions of $n$ into distinct parts. Here, we prove a generalization of it.

Problem. Prove that the number of partitions of $n$ into parts of size not divisible by $k+1$ is equal to the number of partitions of $n$ into parts such that there are at most $k$ parts of each size.

Note that when $k=1$ we reduce this to the standard problem.

Fix $k$ and let $A(x)$ be the OGF of the first object and $B(x)$ be the OGF of the second object. Observe that choosing a partition is the same as choosing the number of times we use each integer in our multiset, so

$B(x) = \displaystyle\prod_{r \ge 1}(1 + x^{r} + x^{2r} + ... + x^{kr})$

$= \displaystyle\prod_{r \ge 1}\left(\frac{1 - x^{r(k+1)}}{1 - x^{r}}\right)$

$= \displaystyle\prod_{r \ge 1, (k+1) \nmid r}\left(\frac{1}{1 - x^{r}}\right)$

$= \displaystyle\prod_{r \ge 1, (k+1) \nmid r}(1 + x^{r} + x^{2r} + ...) = A(x)$
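A quick Python check of this identity for small $n$ (the dp below counts partitions under the two kinds of restrictions; names are my own):

```python
def count_partitions(n_max, part_ok, max_copies):
    """dp[m] = number of partitions of m using only parts with part_ok(part),
    each part used at most max_copies times (None = unlimited)."""
    dp = [0] * (n_max + 1)
    dp[0] = 1
    for part in range(1, n_max + 1):
        if not part_ok(part):
            continue
        if max_copies is None:
            for m in range(part, n_max + 1):   # unbounded knapsack
                dp[m] += dp[m - part]
        else:
            new = dp[:]                        # c = 0 copies of this part
            for c in range(1, max_copies + 1):
                if c * part > n_max:
                    break
                for m in range(c * part, n_max + 1):
                    new[m] += dp[m - c * part]
            dp = new
    return dp

N, k = 30, 2
no_multiples = count_partitions(N, lambda p: p % (k + 1) != 0, None)
bounded_counts = count_partitions(N, lambda p: True, k)
assert no_multiples == bounded_counts
print(no_multiples[:8])
```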

#### Binet's Formula (and solving Linear Recurrences)

Let $f_{n}$ denote the $n$-th Fibonacci number (with $f_{0}=0$, $f_{1}=1$, $f_{n}=f_{n-1}+f_{n-2}$). You may have heard of Binet's Formula, which states that $f_{n} = \frac{1}{\sqrt{5}}\left[\left(\frac{1+\sqrt{5}}{2}\right)^{n} - \left(\frac{1-\sqrt{5}}{2}\right)^{n}\right]$.

This might look very random, but it actually comes directly from the generating function. Recall that $F(x) = \frac{x}{1-x-x^2}$ is the OGF of $f$. The trick here is to use partial fractions (we will explore another example in the next part). Let $-\gamma_{1}, -\gamma_{2}$ be the roots of the equation $1-x-x^{2}=0$ (where $\gamma_{1} = \frac{1+\sqrt{5}}{2}$, $\gamma_{2} = \frac{1 - \sqrt{5}}{2}$). Then, we can write $F(x) = \frac{A}{x + \gamma_{1}} + \frac{B}{x + \gamma_{2}}$. With some calculation, we can obtain $F(x) = \frac{1}{\gamma_{1} - \gamma_{2}}\left(\frac{1}{1 - \gamma_{1}x} - \frac{1}{1 - \gamma_{2}x}\right) = \frac{1}{\sqrt{5}}\left(\displaystyle\sum_{j \ge 0}\gamma_{1}^{j}x^{j} - \displaystyle\sum_{j \ge 0}\gamma_{2}^{j}x^{j}\right)$.

Comparing coefficients, we get $f_{n} = \frac{1}{\sqrt{5}}(\gamma_{1}^{n} - \gamma_{2}^{n})$. Recalling that $\gamma_{1} = \frac{1+\sqrt{5}}{2}$ and $\gamma_{2} = \frac{1 - \sqrt{5}}{2}$ gives us Binet's Formula.

Note that this method generalizes to arbitrary linear recurrences.
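As a quick numerical sanity check (not needed for the derivation), Binet's formula matches the recurrence for the first few dozen terms, before floating-point error becomes an issue:

```python
import math

def fib_iter(n):
    # f_0 = 0, f_1 = 1, f_n = f_{n-1} + f_{n-2}
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def fib_binet(n):
    # f_n = (phi^n - psi^n) / sqrt(5), rounded to the nearest integer
    s5 = math.sqrt(5.0)
    phi, psi = (1 + s5) / 2, (1 - s5) / 2
    return round((phi ** n - psi ** n) / s5)
```

Since $|\gamma_{2}| < 1$, the term $\gamma_{2}^{n}$ vanishes quickly, so $f_n$ is just $\gamma_{1}^{n}/\sqrt{5}$ rounded to the nearest integer.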

#### Probability that a random permutation has no cycle length which is the square of an integer

Problem. What is the probability that a random permutation has no cycle length which is the square of an integer?

This problem seems pretty random (no pun intended), but I include it here to show the power of generating functions.

Firstly, suppose we know that our permutation is of length $n$ and there are $a_{i}$ cycles of length $i$ (so $\sum_{i \ge 1} ia_{i} = n$). How many such permutations are there? With some simple counting, we can obtain the formula $\frac{n!\displaystyle\prod_{i\ge 1}((i-1)!)^{a_{i}}}{\displaystyle\prod_{i\ge 1}(i!)^{a_{i}} \cdot \displaystyle\prod_{i\ge 1}(a_{i}!)} = \frac{n!}{\displaystyle\prod_{i\ge 1}i^{a_{i}} \cdot \displaystyle\prod_{i\ge 1}(a_{i}!)}$ (Hint: Assume the cycles are labelled first, assign the elements into cycles, then arrange the elements within each cycle. Divide by some factorials to handle the overcounting).

The sequence $a = (a_{1},a_{2},...)$ defined above is also called the cycle type of a permutation.

Let $c(a)$ denote the number of permutations of length $n = a_{1}+2a_{2}+...$ with cycle type $a$ and $p(a)$ denote the probability that a permutation of length $n = a_{1}+2a_{2}+...$ has cycle type $a$. Hence, $c(a) = \frac{n!}{\displaystyle\prod_{i\ge 1}i^{a_{i}} \cdot \displaystyle\prod_{i\ge 1}(a_{i}!)}$ and $p(a) = \frac{c(a)}{n!}$
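Here is a brute-force check of the formula for $c(a)$ over all permutations of length at most $6$ (a Python sketch; the function names are mine):

```python
from itertools import permutations
from math import factorial
from collections import Counter

def cycle_type(p):
    """Sorted tuple of cycle lengths of a 0-indexed permutation p."""
    n = len(p)
    seen = [False] * n
    lens = []
    for i in range(n):
        if not seen[i]:
            l, j = 0, i
            while not seen[j]:
                seen[j] = True
                j = p[j]
                l += 1
            lens.append(l)
    return tuple(sorted(lens))

def c_formula(lens):
    """c(a) = n! / (prod_i i^{a_i} * prod_i a_i!) for the cycle type given by lens."""
    n = sum(lens)
    mult = Counter(lens)  # a_i = multiplicity of cycle length i
    denom = 1
    for i, a in mult.items():
        denom *= (i ** a) * factorial(a)
    return factorial(n) // denom
```

For example, among the $6$ permutations of length $3$, the formula gives $3$ permutations of type $(1,2)$ and $2$ of type $(3)$, which matches a direct enumeration.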

Now, the trick is to consider the generating function in the infinitely many variables $x_{1},x_{2},...$ (together with a variable $y$ tracking the length), which we abbreviate as $C(x,y)$:

$C(x,y) = \displaystyle\sum_{n \ge 0}\frac{y^{n}}{n!}\displaystyle\sum_{a_{1}+2a_{2}+...=n,a_{i} \ge 0}c(a)x_{1}^{a_{1}}x_{2}^{a_{2}}...$

From our discussion above, we know how to find $c(a)$, thus we can write $C(x,y)$ as

$C(x,y) = \left(\displaystyle\sum_{a_{1} \ge 0}\frac{(yx_{1})^{a_{1}}}{a_{1}!1^{a_{1}}}\right)\left(\displaystyle\sum_{a_{2} \ge 0}\frac{(y^{2}x_{2})^{a_{2}}}{a_{2}!2^{a_{2}}}\right)... = \exp(yx_{1})\exp\left(y^{2}\frac{x_{2}}{2}\right)... = \exp\left(\displaystyle\sum_{i \ge 1}\frac{y^{i}x_{i}}{i}\right)$.

Hence, for a fixed cycle type $a = (a_1,a_2,...)$, $p(a) = [x_{1}^{a_{1}}x_{2}^{a_{2}}...]\exp\left(\displaystyle\sum_{i \ge 1}\frac{y^{i}x_{i}}{i}\right)$.

Let us return to our problem. Call a cycle type $a$ good if $a_{j}=0$ for all perfect squares $j$. We want to find $\displaystyle\lim_{n \rightarrow \infty}\displaystyle\sum_{a \text{ good}}[y^{n}x_{1}^{a_{1}}x_{2}^{a_{2}}...]\exp\left(\displaystyle\sum_{i \ge 1}\frac{y^{i}x_{i}}{i}\right)$. We can "substitute" $x_{j}=1$ for all non-perfect squares $j$ to indicate that we don't care about the power of $x_{j}$ when $j$ is not a perfect square, so (writing $i = z^{2}$ under a sum to mean "$i$ ranges over perfect squares", and using that $a_{j}=0$ for all perfect squares $j$) our problem reduces to finding

$\displaystyle\lim_{n \rightarrow \infty}[y^{n}]\exp\left(\displaystyle\sum_{i=z^{2} }\frac{y^{i}x_{i}}{i} + \displaystyle\sum_{i \neq z^{2}}\frac{y^{i}}{i}\right)$

$= \displaystyle\lim_{n \rightarrow \infty}[y^{n}]\exp\left(\displaystyle\sum_{i=z^{2} }\frac{y^{i}(x_{i}-1)}{i} + \displaystyle\sum_{i \ge 1}\frac{y^{i}}{i}\right)$

$= \displaystyle\lim_{n \rightarrow \infty}[y^{n}]\exp\left(\displaystyle\sum_{i=z^{2} }\frac{y^{i}(x_{i}-1)}{i} - \ln(1-y)\right)$

$= \displaystyle\lim_{n \rightarrow \infty}[y^{n}]\frac{1}{1-y}\exp\left(\displaystyle\sum_{i=z^{2}}\frac{y^{i}(x_{i}-1)}{i}\right)$

If we let $A(y)$ be the OGF of $a_{n} = [y^{n}]\exp\left(\displaystyle\sum_{i=z^{2}}\frac{y^{i}(x_{i}-1)}{i}\right)$, then by the Prefix Sum trick, our limit is equal to $\displaystyle\lim_{n \rightarrow \infty}\sum_{i=0}^{n}a_{i} = \sum_{i \ge 0}a_{i}$ (assuming the sum converges).

Intuitively, we can get the "sum to infinity" by substituting $y=1$ into $\exp\left(\displaystyle\sum_{i=z^{2}}\frac{y^{i}(x_{i}-1)}{i}\right)$, and we are only interested in the terms without $x_{i}$ (for $i$ a perfect square), so we let these $x_{i}=0$, to obtain

$\displaystyle\lim_{n \rightarrow \infty}[y^{n}]\frac{1}{1-y}\exp\left(\displaystyle\sum_{i=z^{2}}\frac{y^{i}(x_{i}-1)}{i}\right) = \exp\left(\displaystyle\sum_{i = z^{2}}-\frac{1}{i}\right) = e^{-\frac{\pi^{2}}{6}}$, which is actually our answer (recall that $\displaystyle\sum_{i \ge 1}\frac{1}{i^2} = \frac{\pi^{2}}{6}$).
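Here is a sketch (mine, not from the original derivation) that evaluates $[y^{n}]\frac{1}{1-y}\exp\left(-\sum_{i=z^{2}}\frac{y^{i}}{i}\right)$ with exact rational arithmetic, using the standard recurrence $e_{n} = \frac{1}{n}\sum_{k=1}^{n}k\,s_{k}e_{n-k}$ for the exponential of a power series, and checks it against brute force over all permutations of small length:

```python
from fractions import Fraction
from itertools import permutations
from math import factorial, isqrt

N = 25
# s_i = -1/i at perfect squares i (this is sum_{i=z^2} y^i (x_i - 1)/i with x_i = 0)
s = [Fraction(0)] * (N + 1)
z = 1
while z * z <= N:
    s[z * z] = Fraction(-1, z * z)
    z += 1
# e = exp(s) coefficient-by-coefficient: e_n = (1/n) * sum_{k=1}^{n} k * s_k * e_{n-k}
e = [Fraction(1)] + [Fraction(0)] * N
for n in range(1, N + 1):
    e[n] = sum(k * s[k] * e[n - k] for k in range(1, n + 1)) / n
# Multiplying by 1/(1-y) takes prefix sums; P[n] is the probability for length n
P, acc = [], Fraction(0)
for n in range(N + 1):
    acc += e[n]
    P.append(acc)

def prob_brute(n):
    """Fraction of permutations of length n with no cycle of square length."""
    good = 0
    for p in permutations(range(n)):
        seen, ok = [False] * n, True
        for i in range(n):
            if seen[i]:
                continue
            l, j = 0, i
            while not seen[j]:
                seen[j] = True
                j = p[j]
                l += 1
            if isqrt(l) ** 2 == l:
                ok = False
        if ok:
            good += 1
    return Fraction(good, factorial(n))
```

For example $P_1 = 0$ (a single element is always a $1$-cycle, and $1$ is a square), $P_2 = \frac{1}{2}$, $P_3 = \frac{1}{3}$, all agreeing with direct enumeration; the partial sums slowly approach $e^{-\pi^{2}/6} \approx 0.193$.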

### Snake Oil Trick in Proving (or Finding) Combinatorial Identities

To end the first part of this tutorial, I will briefly introduce a trick to simplify combinatorial identities using generating functions. The idea is that instead of dealing with the sum directly, it is easier to deal with the series obtained from the generating functions.

Problem. Find the sum $\displaystyle\sum_{k \ge 0}\binom{k}{n-k}$ for a fixed positive integer $n$.

Suppose the answer to our problem is $f(n)$. The idea is that it is easier to consider the OGF of $f$, which is $F(x) = \displaystyle\sum_{n \ge 0}f(n)x^{n}$. We have

$F(x) = \displaystyle\sum_{n \ge 0}f(n)x^{n}$

$= \displaystyle\sum_{n \ge 0}\displaystyle\sum_{k \ge 0}\binom{k}{n-k}x^{n}$

Now, we switch summation signs to obtain

$= \displaystyle\sum_{k \ge 0}\displaystyle\sum_{n \ge 0}\binom{k}{n-k}x^{n}$

The key idea is to make the inner sum easy to compute, and we know how to compute $\displaystyle\sum_{n \ge 0}\binom{k}{n-k}x^{n-k}$, since it is just $\displaystyle\sum_{r \ge 0}\binom{k}{r}x^r$ in disguise with $r=n-k$!

Thus, we factor out $x^{k}$ to obtain

$= \displaystyle\sum_{k \ge 0}x^{k}\displaystyle\sum_{n \ge 0}\binom{k}{n-k}x^{n-k}$

$= \displaystyle\sum_{k \ge 0}x^{k}(1+x)^{k}$

$= \displaystyle\sum_{k \ge 0}(x(1+x))^{k}$

$= \frac{1}{1 - x - x^2}$

Do you recognize the last expression? It is actually $\frac{1}{x}F(x)$ where $F(x)$ is the OGF of the Fibonacci numbers! Thus, $\frac{1}{1-x-x^2} = \displaystyle\sum_{n \ge 0}f_{n+1}x^{n}$, and by comparing coefficients we get $f(n) = f_{n+1}$, the $(n+1)$-th Fibonacci number!
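A short numerical check of the conclusion (Python; the names are mine):

```python
from math import comb

def f(n):
    """The sum from the problem, computed directly (comb(k, r) = 0 for r > k)."""
    return sum(comb(k, n - k) for k in range(n + 1))

# Fibonacci numbers with f_0 = 0, f_1 = 1
fib = [0, 1]
while len(fib) < 40:
    fib.append(fib[-1] + fib[-2])
```

For instance $f(2) = \binom{1}{1} + \binom{2}{0} = 2 = f_{3}$, as the identity predicts.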

There are many other similar applications of the Snake Oil method, but I won't go into detail here. In general, this method might be useful in CP if you encounter a math problem, reduce it to a double or triple summation of binomial coefficients, and need an $O(n)$ solution. Sometimes, you can forcefully simplify your summations using the Snake Oil method. We will use the trick of introducing a power series and swapping summation signs again to simplify some expressions in the next part of this article.

As an exercise, try to prove the following identity with the Snake Oil method:

$\displaystyle\sum_{r=0}^{k}\binom{m}{r}\binom{n}{k-r} = \binom{n+m}{k}$ (this is Vandermonde's identity; there is an obvious bijective proof, can you see why it is true?). This identity is very useful in simplifying sums involving binomial coefficients.
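A quick brute-force check of the exercise's identity (not a substitute for the Snake Oil proof you are asked to find):

```python
from math import comb

def convolution_sum(m, n, k):
    # Left-hand side: sum_r C(m, r) * C(n, k - r); comb returns 0 out of range
    return sum(comb(m, r) * comb(n, k - r) for r in range(k + 1))
```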

This ends the first part of my tutorial on generating functions. The next part will focus more on applications of GFs in CP problems, so stay tuned!

UPD: Part 2 is now available here.

P.S. Let me know if I made any errors or typos in the post (which is likely to happen).

### Comments
 » I love how an article which considers taking the square root of a polynomial implicitly known is tagged with #basic math. Time to get back to my mortal activities now.
 » Thanks for writing this tutorial! However, it is really difficult for an introduction to the topic. The tutorial would be easier to understand if you explained more of what you are doing and the intuition behind each step, and did not skip "simple" things and observations. Some examples: "Which sign should we choose? If we choose the + sign, then the numerator ->2 as x->0, while the denominator ->0, so the ratio will become infinite at 0. Thus, we should choose the minus sign" What is happening here? Why is it a bad thing if the ratio becomes infinite, and why are we interested in x->0? We are working with formal power series, so why can't we ignore this? "Verify that this agrees with the binomial theorem." What should we verify? What is the binomial theorem? How should we interpret the given formula in terms of binomial coefficients?
 •  » » Thank you for your feedback! I have updated the blog to try to answer these questions. For the limit problem, I wanted to stay away from talking about convergence issues as much as possible, but for this specific case, I think it's because $C(x)$ can be written as a power series centered at $0$, so it should converge at $x=0$. I agree that it's not completely for beginners, especially for those who have trouble dealing with summations and convolutions. I tried not to skip too many steps, but I guess some of the counting arguments and manipulations might be non-obvious, and the intuition behind some manipulations isn't explicitly stated (though I do try to "separate" terms, especially for convolutions, which I think is the main motivation behind most of these manipulations). Which parts do you think require more explanation? It will be hard to explain every single step, especially when similar manipulations were done previously, so it is easier to answer specific questions.
 •  » » » Thanks, the binomial theorem part is now much easier to understand. The +/- sign part is still somewhat unclear. One technique that may help is to read your text again after some time (for example, a week). Then you may see things that can be improved when you reread it. This has helped me a lot in writing.
 » For both OGF and EGF, $C(x) = x^k A(x)$ generates the sequence $c_n = a_{n-k}$ (...) I believe this is not true for EGF?
•  » » Ah yes thanks for pointing it out. It should be repeated integration (and picking $C=0$) for EGF.
 » Added a list of common series that are used in this article, for reference.
 » From the graph counting problem, it looks like if we want to get a disjoint set of elements, we use exp() and if we want to connect the disjoint set into a single element, we use ln().
•  » » Yea that's kind of the general idea. This phenomenon is also known as exponential formula.
•  » » » Here are some slides on the exponential formula that are also very useful.
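To make the $\exp/\ln$ correspondence in this thread concrete: the EGF of all labeled graphs is $G(x) = \sum_{n} 2^{\binom{n}{2}}\frac{x^{n}}{n!}$, and if $C(x)$ is the EGF of connected labeled graphs, then $G = \exp(C)$, i.e. $C = \ln G$. A sketch (mine, not from the thread) that recovers the connected counts via the usual coefficient recurrence $g_{n} = \sum_{k=1}^{n}\binom{n-1}{k-1}c_{k}\,g_{n-k}$ (classify by the component containing vertex $1$):

```python
from math import comb

def connected_graph_counts(N):
    """c[n] = number of connected labeled graphs on n vertices, for n = 0..N."""
    g = [2 ** (n * (n - 1) // 2) for n in range(N + 1)]  # all labeled graphs
    c = [0] * (N + 1)
    for n in range(1, N + 1):
        c[n] = g[n] - sum(comb(n - 1, k - 1) * c[k] * g[n - k] for k in range(1, n))
    return c
```

The first values $1, 1, 4, 38, 728, 26704$ match the known counts of connected labeled graphs.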
 » Can you explain the last step of "Probability that a random permutation has no cycle length which is the square of an integer"? How did you calculate the limit? (It doesn't seem obvious to me.) UPD: Understood (L'Hospital's rule).
 » In the problem "Probability that a random permutation has no cycle length which is the square of an integer", I get that you took a summation over good $a$, but I could not understand why you took the limit as $n$ tends to infinity to compute the final answer. I hope you could provide some reasoning.
 •  » » I have the same problem in the section "Probability that a random permutation has no cycle length which is the square of an integer". What I first came up with was: $\displaystyle\lim_{k \rightarrow \infty}\sum_{n=1}^{k}\frac{n!}{1!+2!+\ldots +k!}[y^{n}]...$ I'm not good at infinities; am I wrong, or are they the same?
 » "Let $c_{n}=(n-1)!$ be the number of permutations of length $n$ which are a single cycle." How did this $(n-1)!$ come about? I don't think I understand the definition of $c_{n}$ as the number of permutations of length $n$ which are a cycle. Can someone please help me? Thanks in advance.
 •  » » If I understood well: let's say we've decided there will be a cycle of length 3, involving indices 1, 4, and 6. There are $(3-1)!$ ways to arrange these three numbers into a distinct cycle: 1 can go to 4 or 6, whichever of these two was chosen will go to the other one (for example, 4 to 6), and that last one will go back to 1. So there are $2 \cdot 1$ ways to organize those 3 numbers into a cycle. More generally, if we have determined that $n$ indices will form a cycle, the first of these indices can go to each of the other $(n-1)$ indices, each of those can go to one of the $(n-2)$ remaining indices, and so on, until the last chosen index goes back to the first. So there are $(n-1)!$ ways to organize a cycle among $n$ elements.
•  » » » Yeah, That makes sense. Thank you :)
 » This is unrelated to generating functions, but I have some useful things to say about the "find the expected number of cycles in a random permutation" problem. There's a more elementary approach. Let $dp[n]$ denote the answer. The idea is to choose an appropriate algorithm to move from a permutation of length $n-1$ to a permutation of length $n$ that preserves as much as possible about the cycles. Assume we have a permutation $p$ of length $n-1$, and we want to choose $p_n$. Then we can:

- Set $p_n=n$. This adds one more cycle to the permutation (the one containing $n$ alone).
- Set $p_n=i$ where $i \neq n$. Then, look at the index $j$ such that $p_j=i$ and change $p_j$ to $n$. This doesn't change the number of cycles (it inserts $n$ into an existing cycle).

By symmetry, this algorithm forms all possible permutations with equal probability, so we can use it in our recurrence. The former case, which occurs with probability $\frac{1}{n}$, adds one cycle, and the latter adds none, so $dp[n]=dp[n-1]+\frac{1}{n}$, which indeed means the answer is the $n$-th harmonic number.

A few more insights: I knew beforehand that the expected number of times the prefix maximum of a permutation changes is also the $n$-th harmonic number (try to prove it yourself by finding an appropriate algorithm to generate permutations that doesn't change much about the number of prefix-maximum changes, and see a useful result here). This is not a coincidence; there's a bijection between the permutations that have $k$ cycles and the permutations where the prefix maximum changes $k$ times! To see it, take the cycles of the permutation and write them down. Shift every cycle so that its maximum comes first. Then sort the cycles by their maxima and concatenate the results. It's clear that the prefix maximum changes $k$ times (again, we chose the algorithm so that this happens; this is not magic).

Simulated example: let's start with the permutation $[4,5,2,1,6,3]$. Its cycles are $[1,4]$ and $[2,5,6,3]$. Shift them so that the maximum comes first: they become $[4,1]$ and $[6,3,2,5]$. Sort them by the first (maximum) element and concatenate; we get the permutation $[4,1,6,3,2,5]$, in which the prefix maximum changes twice (at the points of concatenation).
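A brute-force check of both claims above, i.e. that the expected number of cycles is $H_n$ and that the number of cycles and the number of prefix-maximum changes have the same distribution (a sketch of mine, not from the comment):

```python
from itertools import permutations
from fractions import Fraction
from collections import Counter

def count_cycles(p):
    seen, c = [False] * len(p), 0
    for i in range(len(p)):
        if not seen[i]:
            c += 1
            j = i
            while not seen[j]:
                seen[j] = True
                j = p[j]
    return c

def count_records(p):
    """Number of times the prefix maximum changes."""
    best, r = -1, 0
    for v in p:
        if v > best:
            best, r = v, r + 1
    return r

def harmonic(n):
    return sum(Fraction(1, i) for i in range(1, n + 1))
```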
 » With respect to the Binet's formula application, I think $-\gamma_1$ and $-\gamma_2$ are the solutions of $1-x-x^2=0$, not $\gamma_1$ and $\gamma_2$ (using the quadratic formula, the roots of the equation are $-\frac{1\pm\sqrt{5}}{2}$). So the partial fraction representation would be $\frac{A}{x+\gamma_1} + \frac{B}{x+\gamma_2}$ if we define $\gamma_1$ and $\gamma_2$ to be $\frac{1\pm\sqrt{5}}{2}$.
 •  » » Fixed, thanks.
 » In "Find the number of bipartite vertex-labeled graphs with $n$ vertices": how do you calculate $A(x)$, i.e. the EGF for the count of colored bipartite graphs on $n$ vertices, efficiently? The $2^{k^2}$ in the summation is not allowing me to get the formula into a nice form.
•  » » Hmm ya unfortunately I'm not sure how to compute $A(x)$ faster than using the naive recurrence relation in $O(n^2)$. This example was more of a demonstration of concept.
 •  » » $k(n-k) = \binom n 2 - \binom k 2 - \binom {n - k} 2$, so $\displaystyle{ \sum_n \left( \sum_k \binom n k 2^{k(n-k)} \right) 2^{-\binom n 2}\frac{x^n}{n!} = \left(\sum_n 2^{-\binom n 2}\frac{x^n}{n!}\right)^2}$. It's called the "Chirp-Z Transform".
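The displayed identity can be checked coefficient by coefficient with exact arithmetic (a quick sketch, not part of the comment): equating $[x^n]$ on both sides means $\frac{2^{-\binom n2}}{n!}\sum_k \binom nk 2^{k(n-k)}$ must equal the convolution of the right-hand series with itself.

```python
from fractions import Fraction
from math import comb, factorial

def check(n):
    """Compare [x^n] of both sides of the identity, exactly."""
    lhs = Fraction(sum(comb(n, k) * 2 ** (k * (n - k)) for k in range(n + 1)),
                   2 ** (n * (n - 1) // 2) * factorial(n))
    rhs = sum(Fraction(1, 2 ** (k * (k - 1) // 2) * factorial(k)) *
              Fraction(1, 2 ** ((n - k) * (n - k - 1) // 2) * factorial(n - k))
              for k in range(n + 1))
    return lhs == rhs
```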
 » How can we compute the exact value of $a_n$ from the OGF? For example, the OGF of the Fibonacci numbers is $F(x) = \frac{x}{1-x-x^2}$. Or, for an unknown problem, we compute the OGF of $a_n$, find a recurrence with the same OGF, then use matrix power (or polynomial modular power) on the recurrence to finally get $a_n$.
 » Thank you for writing this useful tutorial. I can't understand how to find $a_i$ using the OGF/EGF of $a$; for example, I can't find the Fibonacci numbers using their OGF. Can someone explain it, please?
 •  » » Usually, you can find $a_i$ from $A(x)$ if the function can be expanded as a Taylor series: $a_i$ is simply the $i$-th coefficient after you do partial fractions and expand each piece as a Taylor series. For example, $A(x) = \frac{2x}{1-x^2} = \frac{1}{1-x} -\frac{1}{1+x} = (1+x+x^2+x^3+...)-(1-x+x^2-x^3+...) = 2(x+x^3+x^5+....)$. Then you get $a_i=2$ if $i$ is odd, and zero otherwise. For the Fibonacci numbers, I think the explanation in generatingfunctionology (the book linked at the top of the blog) is quite good.
•  » » » Thanks, it really helped.
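One mechanical alternative to partial fractions for a rational OGF is long division of power series: the coefficients of $\frac{P(x)}{Q(x)}$ (with $Q(0) \neq 0$) satisfy $q_n = \big(p_n - \sum_{k\ge1} Q_k q_{n-k}\big)/Q_0$. A hypothetical sketch:

```python
from fractions import Fraction

def series_div(num, den, N):
    """First N+1 coefficients of num(x) / den(x), given den[0] != 0."""
    num = [Fraction(c) for c in num] + [Fraction(0)] * (N + 1)
    den = [Fraction(c) for c in den]
    q = []
    for n in range(N + 1):
        c = num[n] - sum(den[k] * q[n - k] for k in range(1, min(n, len(den) - 1) + 1))
        q.append(c / den[0])
    return q
```

For example, dividing $2x$ by $1-x^2$ gives the coefficients $0,2,0,2,\ldots$ from the comment above, and dividing $x$ by $1-x-x^2$ gives the Fibonacci numbers.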
 » Can someone help me, how to solve this using generating functions? https://atcoder.jp/contests/abc198/tasks/abc198_f, Generating function is https://oeis.org/A054473How to solve it in O(logS) time?
 » This is a good article, but what saddens me is that generating functions are explained only through algebraic manipulations, while many of them have deep meaning with respect to combinatorial species. For example, if $A(x)$ and $B(x)$ are generating functions of two classes of objects, then $A(x)B(x)$ corresponds to the class of objects consisting of pairs whose first element is of class $A$ and whose second element is of class $B$. And this is meaningful for both OGF and EGF, if you consider OGFs to represent unlabeled objects and EGFs to represent labeled ones! Examining it further, we can see that, for example, $\frac{1}{1-A(x)}$ corresponds to the generating function of the class of finite sequences of objects, each of which comes from class $A$.

Applying the same intuition to EGFs, you'll find that $\exp A(x)$ corresponds to the class of sets of objects of class $A$, and $\log \frac{1}{1-A(x)}$ corresponds to the class of cycles of objects from this class. Now if we combine these two operators, as in $\exp \log \frac{1}{1-A(x)}=\frac{1}{1-A(x)}$, we have an EGF for permutations over class $A$ on the RHS and an EGF for sets of cycles over class $A$ on the LHS, essentially demonstrating that permutations and sets of cycles are one and the same!

Now, there was a problem about counting the number of permutations with a fixed number of cycles. If cycles are marked with $y$ and elements with $x$, the EGF for a single cycle is $y\log\frac{1}{1-x}$, while the EGF for a set of cycles is $\exp\left(y \log \frac{1}{1-x}\right)=\sum\limits_{k=0}^\infty \frac{y^k}{k!} \left(\log\frac{1}{1-x}\right)^k=(1-x)^{-y}$. If we have a generating function $F(x, y)$ and want to average over the parameter marked by $y$, we may take the derivative and evaluate it at $y=1$. In our case, it would be $\frac{d}{dy}(1-x)^{-y}=(1-x)^{-y}\ln \frac{1}{1-x}=\frac{1}{1-x}\ln\frac{1}{1-x}\bigg|_{y=1}$.

Another example of the more intuitive approach is enumerating trees. We may say that a tree is a root together with a (possibly empty) sequence of subtrees: \begin{equation} F(x) = x \cdot\left( \frac{1}{1-F(x)}\right) \end{equation} Solving this, we obtain $F(x)=\frac{1-\sqrt{1-4x}}{2}$, which is $x$ times the generating function for the Catalan numbers.
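A quick check of the last claim (a sketch of mine): the functional equation $F = x/(1-F)$, i.e. $F = x + F^{2}$, determines the coefficients, and they match the shifted Catalan numbers $[x^{n}]F = C_{n-1} = \frac{1}{n}\binom{2n-2}{n-1}$:

```python
from math import comb

def tree_counts(N):
    """f[n] = [x^n] F, where F = x + F^2 (equivalently F = x / (1 - F))."""
    f = [0] * (N + 1)
    if N >= 1:
        f[1] = 1
    for n in range(2, N + 1):
        f[n] = sum(f[k] * f[n - k] for k in range(1, n))  # coefficient of F^2
    return f
```

The first values $1, 1, 2, 5, 14$ (for $n = 1, \ldots, 5$) are indeed the Catalan numbers shifted by one index.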