Hi everyone!
Today I would like to write about some identities that might come in handy when using generating functions to solve competitive programming problems. I will also try to include some useful examples of them in action.
Some notation
For brevity, we will sometimes skip the specific bounds in indexed sums, meaning that the summation happens among all valid indices. Please also read the information below if you're not familiar with generating functions, or want to brush up on some aspects.
Definitions and notation

Let's briefly recall that, for a sequence $$$a_0, a_1, \dots$$$, its ordinary generating function (OGF) is defined as the formal power series
$$$ F(x) = \sum\limits_k a_k x^k, $$$and its exponential generating function (EGF) is defined as
$$$ F(x) = \sum\limits_k a_k \frac{x^k}{k!}. $$$Generating functions are used because it's very simple to represent convolutions with them.
For two sequences $$$a_0, a_1, \dots$$$ and $$$b_0, b_1, \dots$$$, their convolution is defined as a sequence $$$c_0, c_1, \dots$$$ such that
$$$ c_k = \sum\limits_{i+j=k} a_i b_j. $$$Similarly, their binomial convolution is defined as a sequence $$$c_0, c_1, \dots$$$, such that
$$$ c_k = \sum\limits_{i+j=k} \binom{k}{i} a_i b_j. $$$If $$$A(x)$$$ and $$$B(x)$$$ are the OGFs of $$$a_i$$$ and $$$b_j$$$ then $$$A(x) B(x)$$$ is the OGF of their convolution. Similarly, if $$$A(x)$$$ and $$$B(x)$$$ are the EGFs of $$$a_i$$$ and $$$b_j$$$, then $$$A(x) B(x)$$$ is the EGF of their binomial convolution. You can read more about it here.
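As a quick sanity check (not part of the original text; the helper name is mine), here is a short Python sketch verifying that the OGF product gives the plain convolution and the EGF product gives the binomial convolution:

```python
from fractions import Fraction
from math import comb, factorial

def poly_mul(p, q, n):
    """Multiply two power series (coefficient lists), truncated to n terms."""
    r = [0] * n
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            if i + j < n:
                r[i + j] += pi * qj
    return r

a = [1, 4, 2, 7]
b = [3, 0, 5, 1]
n = len(a)

# OGF product vs. plain convolution
conv = [sum(a[i] * b[k - i] for i in range(k + 1)) for k in range(n)]
assert poly_mul(a, b, n) == conv

# EGF product vs. binomial convolution
A = [Fraction(a[k], factorial(k)) for k in range(n)]
B = [Fraction(b[k], factorial(k)) for k in range(n)]
bconv = [sum(comb(k, i) * a[i] * b[k - i] for i in range(k + 1)) for k in range(n)]
egf = poly_mul(A, B, n)
assert [egf[k] * factorial(k) for k in range(n)] == bconv
```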
We will often need to extract the coefficient near $$$x^k$$$ in the expansion of an OGF $$$A(x)$$$. The conventional notation for this is
$$$ [x^k] A(x) = a_k. $$$Note that if $$$A(x)$$$ is an EGF, $$$[x^k]$$$ will return $$$\frac{a_k}{k!}$$$ instead. To avoid this, for an EGF $$$A(x)$$$ we write
$$$ \left[\frac{x^k}{k!}\right] A(x) = a_k, $$$meaning that $$$a_k$$$ is the coefficient near $$$\frac{x^k}{k!}$$$ in the expansion of $$$A(x)$$$.
The concepts above naturally generalize for multivariate generating functions.
For example, we can consider a two-dimensional sequence $$$a_{ij}$$$ and its bivariate OGF $$$A(x, y)$$$ defined as
$$$ A(x, y) = \sum\limits_{i, j} a_{ij} x^i y^j. $$$Such two-dimensional sequences are often useful, for example, when analyzing sequences of polynomials. When extracting coefficients, it is possible to extract a coefficient near a specific pair of powers $$$x^i y^j$$$, as well as near a specific power $$$x^i$$$. In these cases,
$$$\begin{align} [x^i y^j] A(x, y) &= a_{ij}, \\ [x^i] A(x, y) &= \sum\limits_j a_{ij} y^j. \end{align}$$$In other words, $$$[x^i y^j]$$$ would extract the specific coefficient, while $$$[x^i]$$$ would treat $$$A(x, y)$$$ as a power series over $$$x$$$, whose coefficients are power series over $$$y$$$, and extract the coefficient near $$$x^i$$$, which in itself is a power series over $$$y$$$.
Multinomial coefficients
If you're not familiar with them, multinomial coefficients are defined as
$$$ \large \binom{n}{a_1, \dots, a_r} = \frac{n!}{a_1! \dots a_r!}, $$$where $$$a_1+\dots+a_r=n$$$. When dealing with identities that contain binomial coefficients, we can often combine them and "reshuffle" them into another combination of binomial coefficients which is more convenient for us. This is because multinomial coefficients can be expanded as
$$$ \boxed{\binom{n}{a_1,\dots,a_r} = \binom{n}{a_1} \binom{n-a_1}{a_2} \binom{n-a_1-a_2}{a_3} \dots \binom{a_{r-1}+a_r}{a_{r-1}} = \binom{a_1+a_2}{a_2} \binom{a_1+a_2+a_3}{a_3} \dots \binom{n}{a_r}} $$$In other words, we can represent any multinomial coefficient as a chain of binomial coefficients which account for each $$$a_i$$$ one piece at a time. We will see a lot of examples of this below, mostly with trinomial coefficients, and for now we can look at the following one:
$$$ \binom{n}{k} \binom{n-k}{k} = \binom{n}{k,k,n-2k} = \binom{n}{2k} \binom{2k}{k}, $$$which allows us to fully exclude $$$n$$$ from one of the coefficients, simplifying our work with the part that depends on $$$n$$$.
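This "reshuffling" is easy to check numerically; a small Python sketch (the `multinomial` helper is mine, not from the text):

```python
from math import comb, factorial

def multinomial(n, parts):
    """n! / (a_1! ... a_r!), assuming the parts sum to n."""
    assert sum(parts) == n
    out = factorial(n)
    for p in parts:
        out //= factorial(p)
    return out

# binom(n,k) * binom(n-k,k) = binom(n; k,k,n-2k) = binom(n,2k) * binom(2k,k)
for n in range(20):
    for k in range(n // 2 + 1):
        lhs = comb(n, k) * comb(n - k, k)
        mid = multinomial(n, [k, k, n - 2 * k])
        rhs = comb(n, 2 * k) * comb(2 * k, k)
        assert lhs == mid == rhs
```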
Binomial theorem
Let's start with the simplest substitution ever. We most likely know the binomial theorem:
$$$ \boxed{(x+y)^n = \sum\limits_k \binom{n}{k} x^k y^{n-k}} $$$This allows us, whenever we have a binomial coefficient in our expression, to substitute it with
$$$ \boxed{\binom{n}{k} = [x^k] (1+x)^n = [x^{n-k}] (1+x)^n = [x^n] x^k (1+x)^n} $$$The last identity is especially useful, as it allows us to drag $$$[x^n]$$$ outside of the summation.
Example 1. Consider the sum from one of Elegia's blog posts:
$$$ \sum\limits_{k} \binom{n}{k}^2 \binom{3n+k}{2n}. $$$Assume that you're given several values of $$$n$$$ and you need to compute it in $$$O(1)$$$ with $$$O(n)$$$ pre-processing.
Solution

Let's substitute two of the binomial coefficients:
$$$ =\sum\limits_k \binom{n}{k} \left([x^n]x^{n-k}(1+x)^n\right)\left( [y^{2n}](1+y)^{3n+k}\right). $$$Taking $$$[x^n y^{2n}]$$$ out from the summation, we need to compute
$$$\begin{align} &=[x^n y^{2n}] (1+x)^n (1+y)^{3n} \sum\limits_k \binom{n}{k} x^{n-k} (1+y)^k \\&= [x^n y^{2n}] (1+x)^n (1+y)^{3n} (1+y+x)^n. \end{align}$$$Now that we regrouped and simplified the formula a bit, let's expand $$$(1+x+y)^n$$$ as $$$((1+x)+y)^n$$$ instead of $$$((1+y)+x)^n$$$:
$$$\begin{align} &=[x^n y^{2n}] \sum\limits_k \binom{n}{k} y^{n-k} (1+x)^{n+k}(1+y)^{3n} \\&= \sum\limits_k \binom{n}{k} \binom{n+k}{n} [y^{n+k}] (1+y)^{3n} \\&= \sum\limits_k \binom{n}{k} \binom{n+k}{n} \binom{3n}{n+k}. \end{align} $$$The last two binomial coefficients can be regrouped as a multinomial coefficient:
$$$\begin{align} &= \sum\limits_k \binom{n}{k} \binom{3n}{n,k,2n-k} \\&= \binom{3n}{n} \sum\limits_k \binom{2n}{2n-k} \binom{n}{k} \\&= \binom{3n}{n} \sum\limits_k \binom{n}{k} [x^{2n-k}] (1+x)^{2n} \\&= \binom{3n}{n} [x^{2n}] (1+x)^{2n} \sum\limits_k \binom{n}{k} x^k \\&= \binom{3n}{n} [x^{2n}] (1+x)^{3n} = \boxed{\binom{3n}{n}^2} \end{align}$$$which indeed can be computed in $$$O(1)$$$ with $$$O(n)$$$ pre-computation.
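The closed form is easy to verify against a brute-force evaluation of the original sum; a quick Python check (not part of the derivation):

```python
from math import comb

def S(n):
    # the original sum: sum_k binom(n,k)^2 * binom(3n+k, 2n)
    return sum(comb(n, k) ** 2 * comb(3 * n + k, 2 * n) for k in range(n + 1))

for n in range(12):
    assert S(n) == comb(3 * n, n) ** 2
```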
Exercise 1: Solve ProjectEuler 831 — Triple Product.
Negative binomial expansion
Now, let's learn a few more well-known expansions. You most likely know that
$$$ \frac{1}{1-x} = 1+x+x^2+\dots $$$This expression is also a consequence of the binomial expansion, which in fact allows values of $$$n$$$ that are not positive integers. So, more generally, we can write
$$$ (1-x)^{-n} = \sum\limits_{k} \binom{-n}{k} (-1)^k x^k, $$$but at the same time the binomial coefficient rewrites as
$$$ \binom{-n}{k} = \frac{(-n)(-n-1)\dots(-n-(k-1))}{k!} = (-1)^k \binom{n+k-1}{k}. $$$The result above allows us to rewrite
$$$ \boxed{\frac{1}{(1-x)^{k+1}} = \sum\limits_t \binom{t+k}{k} x^t} $$$This, in turn, means that while $$$\binom{n}{k}$$$ is often substituted with $$$[x^k]{}(1+x)^n$$$, there is another useful substitution:
$$$ \boxed{\binom{a+b}{a} = [x^a] \frac{1}{(1-x)^{b+1}} = [x^b] \frac{1}{(1-x)^{a+1}}} $$$To differentiate between the two, the substitution $$$\binom{n}{k}=[x^k]{}(1+x)^n$$$ is typically more useful when $$$n$$$ is constant and summation goes over $$$k$$$. Conversely, when $$$k$$$ is constant and summation goes over $$$n$$$, it would likely be more convenient to use the substitution above.
Inverse square root expansion
Another important case of binomial expansion is when $$$n=-\frac{1}{2}$$$. Consider the expansion
$$$ \frac{1}{\sqrt{1-x}} = \sum\limits_{k} \binom{-1/2}{k} (-x)^k. $$$We can rewrite the fractional binomial as
$$$ \binom{-1/2}{k} = \frac{(-1/2)(-1/2-1)\dots(-1/2-k+1)}{k!} = \frac{(-1)^k}{2^k} \frac{1\cdot3\cdot5\cdot\ldots\cdot(2k-1)}{k!} = \frac{(-1)^k}{2^k}\frac{(2k-1)!!}{k!}, $$$where $$$k!! = k(k-2)(k-4)\dots$$$ is the double factorial of $$$k$$$. Note that $$$(2k)!! (2k-1)!! = (2k)!$$$ and that $$$(2k)!! = 2^k k!$$$.
Thus, multiplying the numerator and denominator by $$$2^k k!$$$, we get
$$$ =\frac{(-1)^k}{4^k} \frac{(2k)!}{k!^2} = \frac{(-1)^k}{4^k} \binom{2k}{k}, $$$which means that
$$$ \boxed{\frac{1}{\sqrt{1-x}} = \sum\limits_k \frac{x^k}{4^k} \binom{2k}{k}} $$$This identity allows us to collapse some more complicated sums that involve $$$\binom{2k}{k}$$$, as it means that
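Here is a quick exact check of this expansion in Python, using the generalized binomial recurrence $$$\binom{\alpha}{k+1} = \binom{\alpha}{k}\frac{\alpha-k}{k+1}$$$ (the helper name is mine):

```python
from fractions import Fraction
from math import comb

def inv_sqrt_coeffs(N):
    """First N coefficients of (1-x)^(-1/2) via the generalized binomial theorem."""
    alpha = Fraction(-1, 2)
    c = Fraction(1)                    # binom(alpha, k), starting at k = 0
    out = []
    for k in range(N):
        out.append(c * (-1) ** k)      # [x^k] (1-x)^(-1/2) = binom(alpha, k) (-1)^k
        c = c * (alpha - k) / (k + 1)  # advance to binom(alpha, k+1)
    return out

assert inv_sqrt_coeffs(15) == [Fraction(comb(2 * k, k), 4 ** k) for k in range(15)]
```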
$$$ \boxed{\binom{2k}{k} = [x^k] \frac{1}{\sqrt{1-4x}}} $$$Example 2. The problem 446227I — O Andar do Trêbado reduces to computing
$$$ [x^n](1+x+x^2)^n. $$$It's well known that this is doable in $$$O(n)$$$. But in this problem we need to do it for all $$$n$$$ up to some limit.
Solution

Expanding it as a binomial, we get
$$$[x^n]\sum\limits_k \binom{n}{k} x^{2k}(1+x)^{n-k} = \sum\limits_k \binom{n}{k} \binom{n-k}{k}.$$$As we have already seen above, this rewrites as
$$$ \binom{n}{k}\binom{n-k}{k} = \binom{n}{k,k,n-2k} = \binom{n}{2k}\binom{2k}{k}. $$$Let's now find the generating function for $$$[x^n]{}(1+x+x^2)^n$$$:
$$$ \sum\limits_n x^n \sum\limits_{k} \binom{n}{2k}\binom{2k}{k} = \sum\limits_k \binom{2k}{k} \sum\limits_n \binom{n}{2k} x^n. $$$The inner sum reduces to
$$$ \sum\limits_n \binom{n}{2k} x^n = \sum\limits_t \binom{2k+t}{2k} x^{2k+t} = \frac{x^{2k}}{(1-x)^{2k+1}}. $$$From this, we get the generating function
$$$ \frac{1}{1-x}\sum\limits_k \binom{2k}{k} \frac{x^{2k}}{(1-x)^{2k}} =\frac{1}{1-x} \frac{1}{\sqrt{1-4\frac{x^2}{(1-x)^2}}}=\boxed{\frac{1}{\sqrt{(1-x)^2-4x^2}}} $$$Conveniently, we can compute the first $$$n$$$ terms of it in $$$O(n)$$$ using the same recurrence as for computing the first $$$n$$$ terms of $$$(1+x+x^2)^n$$$ itself. Note that the derivation of the recurrence did not rely on the fact that $$$n$$$ is a positive integer, so we can just use $$$n=-\frac{1}{2}$$$ here.
Note that the linked problem also allowed $$$O(n^2)$$$ solutions. Thanks to tfg for telling me about the problem!
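Here is a quick exact check of the boxed genfunc. For $$$g(x) = f(x)^\alpha$$$ we have $$$f g' = \alpha f' g$$$; with $$$f = 1-2x-3x^2$$$ and $$$\alpha = -\frac{1}{2}$$$ this gives the recurrence $$$(k+1)\,g_{k+1} = (2k+1)\,g_k + 3k\,g_{k-1}$$$ (derived by me from the ODE, not quoted from the text):

```python
from fractions import Fraction

def central_trinomial(N):
    """[x^n] (1+x+x^2)^n for n = 0..N-1, by brute-force polynomial powers."""
    out = []
    for n in range(N):
        poly = [1]
        for _ in range(n):
            nxt = [0] * (len(poly) + 2)
            for i, c in enumerate(poly):
                nxt[i] += c
                nxt[i + 1] += c
                nxt[i + 2] += c
            poly = nxt
        out.append(poly[n])
    return out

# series of ((1-x)^2 - 4x^2)^(-1/2) = (1 - 2x - 3x^2)^(-1/2)
# via (k+1) g_{k+1} = (2k+1) g_k + 3k g_{k-1}
N = 20
g = [Fraction(1)]
for k in range(N - 1):
    prev = g[k - 1] if k >= 1 else 0
    g.append(((2 * k + 1) * g[k] + 3 * k * prev) / (k + 1))

assert g == central_trinomial(N)
```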
Example 3. The function $$$f(n,m)$$$ defined below can be computed in $$$O(1)$$$ with $$$O(n+m)$$$ pre-computation:
$$$ f(n, m) = \sum\limits_{i,j} \binom{i+j}{i}^2 \binom{n+m-2i-2j}{n-2j}. $$$ Solution

Its genfunc has a somewhat nice closed form:
$$$ \boxed{G(x, y)=\frac{1}{1-x-y}\frac{1}{\sqrt{(1-x^2-y^2)^2-4x^2y^2}}} $$$See my previous blog post for details on how to compute $$$f(n,m)$$$. Note that it also requires series multisection.
Example 4. Find the closed form of the function
$$$ G(x, y) = \sum\limits_{i,j} \binom{i+j}{i}^2 x^i y^j. $$$ Example 5. Find the closed form of the function
$$$ G(t, x) = \sum\limits_n t^n P_n(x), $$$where $$$P_n(x)$$$ is the sequence of Legendre polynomials, defined for this example as
$$$ P_n(x) = \frac{1}{2^n}\sum\limits_k \binom{n}{k}^2 (x-1)^k (x+1)^{n-k}. $$$ Note: Although examples 4 and 5 don't look useful for competitive programming, they can be used to solve example 3, which is based on a real problem from some training camp (and also on some prior publications, apparently).
Powers of $$$n$$$
Another less-known, but quite useful, substitution involves the exponential function:
$$$ e^{nx} = \sum\limits_k \frac{n^k x^k}{k!}, $$$from which it follows that we can use the substitution
$$$ \boxed{n^k = \left[\frac{x^k}{k!}\right] e^{nx}} $$$Let's see how it can be used to solve some problems.
Example 6. Given $$$m$$$, we want to compute $$$f(k)$$$ in $$$O(1)$$$ per $$$k$$$ with $$$O(k \log k)$$$ pre-computation:
$$$ f(k)=\sum\limits_{n=0}^{m-1} n^k. $$$Solution: Using the substitution above, we turn the sum into a geometric progression:
$$$ f(k) = \left[\frac{x^k}{k!}\right]\sum\limits_{n=0}^{m-1} e^{nx} = \left[\frac{x^k}{k!}\right] \frac{1-e^{mx}}{1-e^x}. $$$Well, not exactly new, as I already wrote a blog about it quite some time ago...
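The extraction can be carried out exactly over the rationals; here is a small Python sketch (not the intended contest implementation, which would work modulo a prime; the helper name is mine) that computes $$$\left[\frac{x^k}{k!}\right]\frac{1-e^{mx}}{1-e^x}$$$ by power-series division:

```python
from fractions import Fraction
from math import factorial

def power_sum(k, m):
    """f(k) = sum_{n=0}^{m-1} n^k, via [x^k/k!] (1 - e^{mx}) / (1 - e^x)."""
    K = k + 1
    # both 1 - e^{mx} and 1 - e^x start at x^1, so divide each by x first;
    # (1 - e^{cx}) / x has [x^j] = -c^{j+1} / (j+1)!
    num = [Fraction(-(m ** (j + 1)), factorial(j + 1)) for j in range(K)]
    den = [Fraction(-1, factorial(j + 1)) for j in range(K)]
    # power-series long division q = num / den (den[0] = -1 is invertible)
    q = [Fraction(0)] * K
    for j in range(K):
        q[j] = (num[j] - sum(den[j - i] * q[i] for i in range(j))) / den[0]
    return q[k] * factorial(k)

for m in range(1, 8):
    for k in range(6):
        assert power_sum(k, m) == sum(n ** k for n in range(m))  # 0^0 = 1 convention
```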
Example 7. Given $$$A(x) = a_0 + a_1 x + \dots$$$, we want to quickly compute $$$f(k)$$$ defined as
$$$ f(k) = \sum\limits_{n} a_n n^k. $$$Solution: Using the same substitution, we get
$$$ f(k) = \left[\frac{x^k}{k!}\right] \sum\limits_n a_n e^{nx} = \left[\frac{x^k}{k!}\right] A(e^x). $$$Well, still not new, as I had another blog about it some time ago.
At the same time, the result above assumes that we're able to compute the series expansion of $$$A(e^x)$$$ up to $$$x^k$$$ sufficiently quickly, which might turn out to be quite problematic if $$$A(x)$$$ is arbitrary. However, when $$$A(x)$$$ is sufficiently nice, this solution has an interesting side effect: it allows $$$A(x)$$$ to have quite a large number of terms, as long as it can still be represented in a "nice" way.
For this, we rely on the following statement:
Claim: Let $$$F(x)$$$, $$$G(x)$$$ and $$$H(x)$$$ be formal power series, and let $$$G_0 = [x^0] G(x)$$$. Then the following implication holds:
$$$ \boxed{F(x) \equiv H(x) \pmod{(x-G_0)^k} \implies F(G(x)) \equiv H(G(x)) \pmod{x^k}} $$$In particular, as $$$[x^0] e^x = 1$$$, it means that
$$$ \boxed{F(x) \equiv H(x) \pmod{(x-1)^k} \implies F(e^x) \equiv H(e^x) \pmod{x^k}} $$$ Proof

By the Taylor expansion formula, the left-hand side essentially says that $$$F^{(i)}(G_0) = H^{(i)}(G_0)$$$ for $$$i=0,\dots, k-1$$$.
On the other hand, we can represent $$$G(x) = G_0 + x G_1(x)$$$, and consider the Taylor expansions
$$$\begin{align} F(G(x)) &= \sum\limits_{i} \frac{F^{(i)}(G_0)}{i!} x^i G_1(x)^i, \\ H(G(x)) &= \sum\limits_{i} \frac{H^{(i)}(G_0)}{i!} x^i G_1(x)^i \end{align}$$$The terms for $$$F$$$ and $$$H$$$ match up to $$$i=k-1$$$, and all subsequent terms are divisible by $$$x^k$$$, meaning that they're equal modulo $$$x^k$$$.
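To make the claim concrete, here is a small exact check in Python for $$$F(x)=x^7$$$ and $$$G(x)=e^x$$$ (helper names are mine): we compute $$$H = F \bmod (x-1)^k$$$ by a Taylor shift and compare $$$F(e^x)$$$ with $$$H(e^x)$$$ modulo $$$x^k$$$.

```python
from fractions import Fraction
from math import comb, factorial

k = 6
F = [0] * 7 + [1]                      # F(x) = x^7

# H(x) = F(x) mod (x-1)^k: write x^7 = (1+t)^7 with t = x-1,
# truncate mod t^k, then shift back to x.
Ft = [comb(7, j) for j in range(k)]
H = [sum(Ft[j] * comb(j, i) * (-1) ** (j - i) for j in range(i, k))
     for i in range(k)]

def compose_with_exp(poly, k):
    """First k coefficients of poly(e^x), exactly over the rationals."""
    e = [Fraction(1, factorial(j)) for j in range(k)]
    out = [Fraction(0)] * k
    cur = [Fraction(1)] + [Fraction(0)] * (k - 1)   # (e^x)^0 = 1
    for c in poly:
        for j in range(k):
            out[j] += c * cur[j]
        nxt = [Fraction(0)] * k                     # cur *= e^x
        for i in range(k):
            for j in range(k - i):
                nxt[i + j] += cur[i] * e[j]
        cur = nxt
    return out

assert compose_with_exp(F, k) == compose_with_exp(H, k)
```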
From the result above it follows that to find the first $$$k$$$ coefficients of $$$F(e^x)$$$, we can instead find $$$H(x)$$$ such that
$$$ F(x) \equiv H(x) \pmod{(x-1)^k}, $$$and compute the first $$$k$$$ coefficients of $$$H(e^x)$$$ instead. This allows us to efficiently solve the following problem:
Example 8. 392C - Yet Another Number Sequence. Let $$$f_n=f_{n-1} + f_{n-2}$$$. Given $$$m$$$ and $$$k$$$, compute $$$f(k)$$$ in $$$O(k \log k)$$$:
$$$ f(k) = \sum\limits_{n=0}^{m-1} f_n n^k. $$$ Solution

Let $$$F(x)$$$ be the generating function of the Fibonacci numbers. It is well known that
$$$ F(x) = \sum\limits_{n=0}^\infty f_n x^n = \frac{1}{1-x-x^2}. $$$But the expression above assumes that the sequence is infinite. In our particular example, however, the sequence is finite, so we should account for it. Let's say that $$$f_0 = a$$$ and $$$f_1 = b$$$ instead of $$$f_0=f_1=1$$$. How would it change the genfunc? The denominator won't change, as it corresponds to the characteristic polynomial of the linear recurrence. But the numerator will change to $$$c+dx$$$ such that
$$$ F(x) (1-x-x^2) \equiv (c+dx) \pmod{x^2}, $$$which is equivalent to $$$(a+bx)(1-x) \equiv c+dx \pmod{x^2}$$$, thus $$$c=a$$$ and $$$d=b-a$$$. Hence, the new genfunc would be
$$$ F'(x) = \frac{a+(b-a)x}{1-x-x^2}. $$$Now we can use $$$a = f_m$$$ and $$$b=f_{m+1}$$$ and subtract $$$F'(x)$$$ multiplied by $$$x^m$$$, so that only terms from $$$0$$$ to $$$m-1$$$ remain:
$$$ F_m(x) = \sum\limits_{n=0}^{m-1} f_n x^n = \frac{1}{1-x-x^2} - \frac{f_m x^m + f_{m-1} x^{m+1}}{1-x-x^2} = \boxed{\frac{1 - f_m x^m - f_{m-1} x^{m+1} }{1-x-x^2}} $$$Here we used the identity $$$f_{m+1} - f_m = f_{m-1}$$$ to simplify the expression. Now, we can compute $$$H(x)$$$, the remainder of $$$F_m(x)$$$ modulo $$$(x-1)^{k+1}$$$. To do this, we can substitute $$$t=x-1$$$, thus computing
$$$ \frac{1- f_m (1+t)^m - f_{m-1} (1+t)^{m+1}}{1-(1+t)-(1+t)^2} \pmod{t^{k+1}}, $$$after which we can do the Taylor shift back to $$$x = 1+t$$$ to get $$$H(x)=h_0 + h_1 x + \dots + h_k x^k$$$. After that, the answer is simply
$$$ \boxed{f(k) = \sum\limits_{i=0}^{m-1} f_i i^k = \sum\limits_{i=0}^{k} h_i i^k} $$$ Note: Apparently, this problem is also a subject of one of Elegia's blogs, but it is difficult for me to translate on my own...
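The whole pipeline of example 8 fits in a short Python sketch over exact rationals (a naive $$$O(k^2)$$$ toy with my own helper names, not the intended $$$O(k \log k)$$$ modular implementation), verified against brute force:

```python
from fractions import Fraction
from math import comb

def f_bruteforce(m, k):
    fib = [1, 1]
    while len(fib) < max(m + 2, 2):
        fib.append(fib[-1] + fib[-2])
    return sum(fib[n] * n ** k for n in range(m))

def f_genfunc(m, k):
    K = k + 1
    fib = [1, 1]
    while len(fib) < m + 2:
        fib.append(fib[-1] + fib[-2])
    # numerator of F_m(x): 1 - f_m x^m - f_{m-1} x^{m+1}, shifted to t = x-1
    num = [Fraction(0)] * K
    num[0] += 1
    for coef, deg in [(-fib[m], m), (-fib[m - 1], m + 1)]:
        for j in range(min(K, deg + 1)):
            num[j] += coef * comb(deg, j)
    # denominator 1 - x - x^2 at x = 1 + t: -1 - 3t - t^2
    den = [Fraction(-1), Fraction(-3), Fraction(-1)] + [Fraction(0)] * max(0, K - 3)
    # power-series division mod t^{k+1}
    q = [Fraction(0)] * K
    for j in range(K):
        q[j] = (num[j] - sum(den[j - i] * q[i] for i in range(max(0, j - 2), j))) / den[0]
    # shift back: H(x) = sum_j q_j (x-1)^j
    h = [sum(q[j] * comb(j, i) * (-1) ** (j - i) for j in range(i, K)) for i in range(K)]
    return sum(h[i] * i ** k for i in range(K))

for m in range(1, 10):
    for k in range(6):
        assert f_genfunc(m, k) == f_bruteforce(m, k)
```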
Exercise 2. Find a way to solve example 8 in $$$O(k)$$$.
Hint

Similar to $$$F_m(x)$$$, it is possible to find the exact expression for
$$$ \frac{1- f_m (1+t)^m - f_{m-1} (1+t)^{m+1}}{1-(1+t)-(1+t)^2} $$$truncated modulo $$$t^{k+1}$$$. It would be of the form $$$\frac{1-\alpha t^{k+1} - \beta t^{k+2}}{1-(1+t)-(1+t)^2}$$$. Then substitute $$$t=x-1$$$ back and compute it modulo $$$x^{k+1}$$$ in $$$O(k)$$$.
Example 9. Solve Library Judge — Sum of Exponential times Polynomial Limit. Given $$$r \neq 1$$$ and $$$d$$$, compute $$$f(d)$$$ in $$$O(d)$$$, where
$$$ f(d) = \sum\limits_{i=0}^{\infty} r^i i^d. $$$ Solution

The solution is similar to example 8, except now we also deal with limits and convergence. However, the answer is still
$$$ \sum\limits_{i=0}^{\infty} r^i i^d = \left[\frac{x^d}{d!}\right] \frac{1}{1-re^x}, $$$because the LHS and RHS converge to the same limit value for $$$x \in \mathbb C$$$. So, using a similar approach to the previous problem, we need to find
$$$ H(x) \equiv \frac{1}{1-rx} \pmod{(x-1)^{d+1}}. $$$This is also done with substitution $$$t = x-1$$$, thus we get
$$$ H(t+1) \equiv \frac{1}{1-r-rt} \pmod{t^{d+1}}. $$$Similar to the previous example, we can find this expression exactly instead of truncating modulo $$$t^{d+1}$$$:
$$$ H(t+1) = \frac{1}{1-r}\frac{1-\frac{r^{d+1}}{(1-r)^{d+1}} t^{d+1}}{1-\frac{r}{1-r}t} = \frac{(1-r)^{d+1} - r^{d+1} t^{d+1}}{(1-r)^{d+2} - (1-r)^{d+1} r t}. $$$From this we get
$$$ \boxed{H(x) = \frac{(1-r)^{d+1} - r^{d+1} (x-1)^{d+1}}{(1-r)^{d+2} - (1-r)^{d+1} r (x-1)}} $$$From the definition above, $$$H(x)$$$ can be computed exactly in $$$O(d)$$$: the numerator is obtained from a binomial expansion, and the subsequent division by the denominator can also be done in $$$O(d)$$$, because it is an exact division and the denominator is small, so one can simply use the long division algorithm. After that, having $$$H(x) = h_0 + h_1 x + h_2 x^2 + \dots$$$, we compute
$$$ \sum\limits_{i=0}^\infty r^i i^d = \sum\limits_{i=0}^d h_i i^d. $$$To compute the latter sum in $$$O(d)$$$, we need to find all the values $$$1^d, 2^d, \dots, d^d$$$ in $$$O(d)$$$. This is doable with a linear sieve: compute $$$p^d$$$ with binary exponentiation only at primes, and use the factorization provided by the sieve for composite numbers, since $$$(ab)^d = a^d b^d$$$. I have implemented the solution to test it out: #144632.
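As an exact sanity check of the boxed $$$H(x)$$$, we can compare $$$\sum_i h_i i^d$$$ against the classical Stirling-number formula $$$\sum_{i\ge 0} i^d x^i = \sum_j S(d,j)\, j!\, x^j/(1-x)^{j+1}$$$ (the formula and helper names are mine, not from the text):

```python
from fractions import Fraction
from math import comb

def H_coeffs(r, d):
    """Coefficients h_0..h_d of the boxed H(x), computed exactly over Q."""
    s = 1 - r
    # numerator: (1-r)^{d+1} - r^{d+1} (x-1)^{d+1}
    num = [-(r ** (d + 1)) * comb(d + 1, i) * (-1) ** (d + 1 - i) for i in range(d + 2)]
    num[0] += s ** (d + 1)
    # denominator: (1-r)^{d+2} - (1-r)^{d+1} r (x-1), a linear polynomial
    den0 = s ** (d + 2) + s ** (d + 1) * r
    den1 = -(s ** (d + 1)) * r
    # exact long division from the top; the remainder must vanish
    h = [Fraction(0)] * (d + 1)
    rem = [Fraction(c) for c in num]
    for i in range(d + 1, 0, -1):
        h[i - 1] = rem[i] / den1
        rem[i - 1] -= h[i - 1] * den0
    assert rem[0] == 0
    return h

def exact_sum(r, d):
    """sum_{i>=0} r^i i^d as a rational-function value, via Stirling numbers."""
    S = [[0] * (d + 1) for _ in range(d + 1)]
    S[0][0] = 1
    for n in range(1, d + 1):
        for j in range(1, n + 1):
            S[n][j] = j * S[n - 1][j] + S[n - 1][j - 1]
    total, fact = Fraction(0), 1
    for j in range(d + 1):
        if j:
            fact *= j
        total += S[d][j] * fact * r ** j / (1 - r) ** (j + 1)
    return total

for r in [Fraction(1, 2), Fraction(-2, 3), Fraction(3)]:
    for d in range(6):
        h = H_coeffs(r, d)
        assert sum(h[i] * i ** d for i in range(d + 1)) == exact_sum(r, d)
```

Note that the check also goes through for $$$|r| > 1$$$, since both sides are algebraic identities in $$$r$$$ rather than convergent sums.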
Exercise 3. Solve Library Judge — Sum of Exponential times Polynomial. Given $$$r$$$, $$$d$$$ and $$$n$$$, compute $$$f(d)$$$ in $$$O(d)$$$, where
$$$ f(d) = \sum\limits_{i=0}^{n-1} r^i i^d. $$$Exercise 4. Solve CodeChef — Quasi-Polynomial Sum.
This might be good practice for the negative binomial expansion: GR21-pE
Thank you for this wonderful blog! It may not be a big deal, but there is a flaw in the last trick: for $$$f(g(x))$$$ to be defined as a formal power series, either $$$f(x)$$$ should be a polynomial or $$$g(0) = 0$$$; hence $$$f(e^x)$$$ may not be well-defined as a formal power series.
The way to rectify that would be:
$$$F(x) \equiv H(x) \pmod{x^k} \Rightarrow F(G(x)-G(0)) \equiv H(G(x)-G(0)) \pmod{x^k}$$$,
where $$$F$$$, $$$G$$$ and $$$H$$$ are formal series. Following this, the proof remains almost the same, just shifted.
This is mostly what has been used in the sections above.
References: Chapter 2 — https://www2.math.upenn.edu/~wilf/gfologyLinked2.pdf
Counting Reorders
A difficult task on CSES; I solved it with Laguerre polynomials and unsigned Lah numbers:
https://en.wikipedia.org/wiki/Laguerre_polynomials#Explicit_examples_and_properties_of_the_generalized_Laguerre_polynomials
https://dmtcs.episciences.org/2369/pdf
You overcomplicated it. I solved it with DP and the inclusion-exclusion principle.
I had the idea before that inclusion-exclusion is somehow used, but I was not able to reduce the complexity; mine was around $$$n^3$$$.
But then I found this, and understood how $$$n^2$$$ could be implemented using inclusion-exclusion from the generating function implementation. It was a very good task; I learned something new.
Wait... there is something called multinomial coefficient? :o
I finally worked through Elegia's blog. It seems to work only if we are able to come up with recurrence relations.
The hint is actually misleading, because if you think about it, the numerators in the non-reduced and reduced versions should at least be equal $$$\pmod{t^{k+1}}$$$, which wouldn't happen in this case.
The easiest way I could come up with to solve it in $$$O(k)$$$ was to use the fact that Fibonacci numbers are powers of the golden ratio; using exercise 3, one could obtain exercise 2 as well, just with $$$r$$$ of the form $$$a+b\sqrt{5}$$$.