Hi everyone!

This blog is inspired by some contestant solutions to 104234F - Palindromic Polynomial. In short, the problem asks you to find a polynomial such that $$$A(x_i) = y_i$$$ for several given pairs $$$(x_i, y_i)$$$, and the coefficients of $$$A(x)$$$ read the same left-to-right and right-to-left (i.e., they form a palindrome). The intended solution utilizes the fact that for such a polynomial, it always holds that

$$$$$$A(x) = x^n A(x^{-1}),$$$$$$

as $$$x^n A(x^{-1})$$$ is exactly the polynomial $$$A(x)$$$ with reversed coefficients (assuming the degree is $$$n$$$).

#### Representing palindromic polynomials with polynomials in $$$x+\frac{1}{x}$$$

Several solutions from contestants, however, relied on a different criterion for $$$A(x)$$$. Rather than using the identity above, participants found out that palindromic polynomials can always be represented as

$$$$$$A(x) = x^d B\left(x + \frac{1}{x}\right)$$$$$$

when its degree $$$2d$$$ is even, or as

$$$$$$A(x) = (1+x)\, x^d B\left(x + \frac{1}{x}\right)$$$$$$

when its degree $$$2d+1$$$ is odd. In both cases, $$$B(x)$$$ is a polynomial of degree $$$d$$$. This representation allows for a much simpler solution, but in this blog we will talk about why it exists in the first place, rather than how to use it to solve the problem.
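As a quick sanity check (a minimal sketch of mine, not part of the intended solution; the function name and the sample polynomial are made up), expanding $$$x^d B\left(x + \frac{1}{x}\right)$$$ for an arbitrary $$$B$$$ of degree $$$d$$$ indeed produces a palindromic coefficient list:

```python
from math import comb

def palindromic_from_B(b):
    # coefficients of A(x) = x^d * B(x + 1/x), where b[k] is the coefficient of y^k in B(y)
    d = len(b) - 1
    a = [0] * (2 * d + 1)  # A has degree 2d
    for k, bk in enumerate(b):
        # x^d * (x + 1/x)^k = sum over j of C(k, j) * x^(d + k - 2j)
        for j in range(k + 1):
            a[d + k - 2 * j] += bk * comb(k, j)
    return a

a = palindromic_from_B([3, 1, 4, 1])  # an arbitrary B(y) = 3 + y + 4y^2 + y^3
print(a)             # [1, 4, 4, 11, 4, 4, 1]
print(a == a[::-1])  # True: the coefficients form a palindrome
```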

Thanks to ecnerwala for explaining this approach to me in more detail!

#### Getting rid of odd degrees

First things first, we should notice that a palindromic polynomial $$$A(x)$$$ of odd degree always has the root $$$x=-1$$$: indeed, $$$A(-1)$$$ is an alternating sum of the polynomial's coefficients, in which every coefficient from the right half appears with the opposite sign to the corresponding coefficient from the left half, so the sum cancels to zero. Then, dividing $$$A(x)$$$ by $$$1+x$$$, we get a palindromic polynomial of degree $$$2d$$$ instead of $$$2d+1$$$.

This allows us to focus on the polynomials of even degree.
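To make the degree reduction concrete, here is a small sketch (helper name is mine): synthetic division of a palindromic polynomial of odd degree by $$$1 + x$$$, checking that the quotient is palindromic again.

```python
def divide_by_one_plus_x(a):
    # exact division of A(x) by (1 + x); a[i] is the coefficient of x^i,
    # assuming A(-1) == 0, which holds for palindromes of odd degree
    n = len(a) - 1
    q = [0] * n
    q[n - 1] = a[n]
    for i in range(n - 1, 0, -1):
        q[i - 1] = a[i] - q[i]  # match the coefficient of x^i in (1 + x) * Q(x)
    assert a[0] == q[0], "A(-1) != 0, so A(x) is not divisible by 1 + x"
    return q

a = [1, 4, 2, 2, 4, 1]          # a palindrome of odd degree 5
q = divide_by_one_plus_x(a)
print(q)             # [1, 3, -1, 3, 1]
print(q == q[::-1])  # True: degree dropped to 4, still palindromic
```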

#### Representing polynomials of even degrees with $$$x^k + \frac{1}{x^k}$$$

First of all, we may notice that palindromic polynomials of even degree $$$2d$$$ can always be represented as

$$$$$$A(x) = x^d \sum\limits_{k=0}^{d} a_k \left(x^k + \frac{1}{x^k}\right)$$$$$$

for some sequence $$$a_0, a_1, \dots, a_d$$$ which is easily derived from the coefficients of $$$A(x)$$$. Now we should show that the sum

$$$$$$\sum\limits_{k=0}^{d} a_k \left(x^k + \frac{1}{x^k}\right)$$$$$$

can be rewritten as

$$$$$$\sum\limits_{k=0}^{d} b_k \left(x + \frac{1}{x}\right)^k$$$$$$

with some sequence $$$b_0, b_1, \dots, b_d$$$.

#### Changing basis from $$$x^k + \frac{1}{x^k}$$$ to $$$\left(x+\frac{1}{x} \right)^k$$$

Consider the sequence of Laurent polynomials (i.e. polynomials in which negative degrees are allowed)

$$$$$$A_k = x^k + \frac{1}{x^k}, \quad k = 0, 1, 2, \dots$$$$$$

and another sequence of Laurent polynomials

$$$$$$B_k = \left(x + \frac{1}{x}\right)^k, \quad k = 0, 1, 2, \dots$$$$$$

As it turns out, these two sequences have the same linear span. In practical terms this means that every element of $$$B_k$$$ may be represented as a finite linear combination of the elements of $$$A_k$$$, and vice versa. It is somewhat trivial in one direction:

$$$$$$B_k = \left(x + \frac{1}{x}\right)^k = \sum\limits_{j=0}^{k} \binom{k}{j} x^{k-2j} = \sum\limits_{j=0}^{\lfloor k/2 \rfloor} \binom{k}{j} A_{k-2j},$$$$$$

with some special consideration for even $$$k$$$, as the middle coefficient $$$\binom{k}{k/2}$$$ should be additionally divided by $$$2$$$ (because $$$A_0 = 2$$$). You can represent this transition as a lower triangular matrix with $$$1$$$ on the diagonal (except for the first row, corresponding to $$$B_0 = \frac{1}{2} A_0$$$, where it is $$$\frac{1}{2}$$$), which means that the matrix is invertible and there is, indeed, a representation of each $$$A_k$$$ in terms of $$$B_0, \dots, B_k$$$ as well.
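A quick numeric check of this triangular change of basis (a sketch under my own naming): expand $$$B_k$$$ into the $$$A_j$$$ at a sample point, halving the middle term for even $$$k$$$.

```python
from math import comb, isclose

def A(k, x):  # x^k + x^(-k); note that A(0, x) == 2
    return x**k + x**(-k)

def B(k, x):  # (x + 1/x)^k
    return (x + 1/x)**k

x = 1.7  # arbitrary sample point
for k in range(8):
    # pair up the symmetric terms of the binomial expansion of B_k
    s = sum(comb(k, j) * A(k - 2 * j, x) for j in range(k // 2 + 1))
    if k % 2 == 0:
        s -= comb(k, k // 2) * A(0, x) / 2  # the middle term x^0 occurs once, not as a pair
    assert isclose(B(k, x), s)
print("all expansions match")
```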

#### But what about cosines?

We will need a bit of complex number magic here. The explanation above, while technically correct, doesn't really shed any light on why it is true at all. To better understand it, consider the substitution $$$x=e^{i \theta}$$$; then one can use the explicit formula for $$$\cos \theta$$$:

$$$$$$\cos \theta = \frac{e^{i\theta} + e^{-i\theta}}{2},$$$$$$

and notice that

$$$$$$x^k + \frac{1}{x^k} = e^{ik\theta} + e^{-ik\theta} = 2 \cos k \theta.$$$$$$
Then, it is a well-known fact that $$$\cos k \theta$$$ can be represented in terms of $$$1, \cos \theta, \cos^2 \theta, \dots, \cos^k \theta$$$. To prove it, consider the identity

$$$$$$(\cos \theta + i \sin \theta)^k = \left(e^{i\theta}\right)^k = e^{ik\theta};$$$$$$

then on one hand

$$$$$$e^{ik\theta} = \cos k\theta + i \sin k\theta,$$$$$$

on the other hand

$$$$$$(\cos \theta + i \sin \theta)^k = \sum\limits_{j=0}^{k} \binom{k}{j} i^j \sin^j \theta \cos^{k-j} \theta.$$$$$$

Now, we're only interested in the real part of the said expression, hence in the terms with even $$$j$$$. Using $$$\sin^2 \theta = 1 - \cos^2 \theta$$$, we get

$$$$$$\cos k\theta = \sum\limits_{j=0}^{\lfloor k/2 \rfloor} \binom{k}{2j} (-1)^j \left(1 - \cos^2 \theta\right)^j \cos^{k-2j} \theta.$$$$$$
Hence, if we define

$$$$$$T_k(x) = \sum\limits_{j=0}^{\lfloor k/2 \rfloor} \binom{k}{2j} \left(x^2 - 1\right)^j x^{k-2j},$$$$$$

we can see that $$$T_k (\cos \theta) = \cos k\theta$$$. The sequence of polynomials $$$T_0, T_1, T_2, \dots$$$ defined in such a way is also known as the Chebyshev polynomials. In the context of our problem, it would mean that

$$$$$$x^k + \frac{1}{x^k} = 2 \cos k\theta = 2 T_k(\cos \theta) = 2 T_k\left(\frac{1}{2}\left(x + \frac{1}{x}\right)\right),$$$$$$

meaning that the desired transition could be expressed as

$$$$$$\sum\limits_{k=0}^{d} a_k \left(x^k + \frac{1}{x^k}\right) = \sum\limits_{k=0}^{d} 2 a_k T_k\left(\frac{1}{2}\left(x + \frac{1}{x}\right)\right).$$$$$$
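A short numeric check of the Chebyshev connection (a sketch; the helper name is mine): the explicit sum for $$$T_k$$$ satisfies both $$$T_k(\cos \theta) = \cos k\theta$$$ and $$$x^k + x^{-k} = 2 T_k\left(\frac{1}{2}(x + x^{-1})\right)$$$.

```python
from math import comb, cos, isclose

def T(k, x):
    # T_k(x) = sum over j of C(k, 2j) * (x^2 - 1)^j * x^(k - 2j)
    return sum(comb(k, 2 * j) * (x * x - 1)**j * x**(k - 2 * j)
               for j in range(k // 2 + 1))

theta, x = 0.8, 1.3
for k in range(10):
    assert isclose(T(k, cos(theta)), cos(k * theta), abs_tol=1e-9)
    assert isclose(x**k + x**(-k), 2 * T(k, (x + 1 / x) / 2), abs_tol=1e-9)
print("Chebyshev identities verified")
```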
Thanks! From this blog I learned that the plural of cosine is cosines.

It's nice to know that the most important part of the blog did not go unnoticed ^.^'

"The explanation above, while technically correct, doesn't really shed any light on why it is true at all. " — Idk, for me the trivial induction sheds much more light on this than any cosines will ever do :P

I kind of agree, I think the trivial induction sheds more light on Chebyshev polynomials than the other way around.

I guess what I mean there is that cosines showcase how this result is connected to some kind of grander scheme of things, rather than just being an isolated fact on its own.

Btw I think that one of the coolest (and best-known) applications of Chebyshev polynomials is the following problem:

What is the largest leading coefficient of a polynomial $$$P$$$ of degree $$$n$$$ such that $$$|P(x)| \le 1$$$ for each $$$|x| \le 1$$$?

Solution? So, expanding $$$T_k(x)$$$, it seems to be $$$\sum\limits_{j=0}^{\lfloor n / 2 \rfloor} \binom{n}{2j} = 2^{n-1}$$$? Funny fact, indeed. But why is it unique?

I'll just give the full solution here.

As $$$T_k(\cos x) = \cos(kx)$$$, we get that Chebyshev's polynomials alternate between touching $$$y=1$$$ and $$$y=-1$$$ on the interval $$$[-1, 1]$$$ as many times as their degree allows them to.

We will prove that Chebyshev's polynomials yield optimal answers to our question. Let us assume that there is a polynomial $$$P$$$ of degree $$$n$$$ satisfying the problem's assumptions with a larger leading coefficient than $$$T_n$$$. Let $$$r>1$$$ be the ratio of the leading coefficient of $$$P$$$ to the leading coefficient of $$$T_n$$$. Then consider the polynomial $$$\frac{P(x)}{r}$$$, whose leading coefficient is the same as that of $$$T_n$$$. Its image of $$$[-1, 1]$$$ is contained within $$$(-1, 1)$$$ (note it's an open interval), while $$$T_n$$$ alternately attains $$$1$$$ and $$$-1$$$ at $$$n+1$$$ points of $$$[-1, 1]$$$, so their graphs intersect at least $$$n$$$ times. Hence $$$T_n(x) - \frac{P(x)}{r}$$$ has at least $$$n$$$ zeros, but it has degree smaller than $$$n$$$, hence it must be a zero polynomial — contradiction.

Btw if we change the requirements from $$$P([-1, 1]) \subseteq [-1, 1]$$$ to $$$P([-2, 2]) \subseteq [-2, 2]$$$, then the answer is a clean $$$1$$$ :)
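These claims are easy to check numerically; here is a sketch (helper name mine) computing the coefficients of $$$T_n$$$ via the recurrence and confirming both the leading coefficient $$$2^{n-1}$$$ and the bound $$$|T_n| \le 1$$$ on a grid in $$$[-1, 1]$$$.

```python
def cheb_coeffs(n):
    # coefficient list of T_n, built via T_k = 2x * T_{k-1} - T_{k-2}
    t0, t1 = [1], [0, 1]
    if n == 0:
        return t0
    for _ in range(n - 1):
        t2 = [0] + [2 * c for c in t1]  # multiply T_{k-1} by 2x
        for i, c in enumerate(t0):
            t2[i] -= c                  # subtract T_{k-2}
        t0, t1 = t1, t2
    return t1

for n in range(1, 10):
    c = cheb_coeffs(n)
    assert c[-1] == 2 ** (n - 1)  # leading coefficient of T_n
    for i in range(101):          # |T_n(x)| <= 1 on a grid in [-1, 1]
        x = -1 + i / 50
        assert abs(sum(ck * x**k for k, ck in enumerate(c))) <= 1 + 1e-9
print("leading coefficient of T_n is 2^(n-1)")
```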

There's a much cleaner way (IMO) to get to the conclusion that $$$x^k + x^{-k}$$$ can be written as a polynomial in $$$x + x^{-1}$$$, via the idea of a norm in quadratic extensions (Pell's equation, anyone?).

Let's consider $$$x$$$ such that $$$x + x^{-1}$$$ is an even positive integer greater than $$$2$$$, say $$$2t$$$ for $$$t > 1$$$. Then $$$x = t \pm \sqrt{t^2 - 1}$$$. Let's choose the positive sign. Consider the quadratic extension $$$\mathbb{Z}[\sqrt{t^2 - 1}]$$$ of the ring of integers, where the norm is $$$N(a + b \sqrt{t^2 - 1}) = a^2 - b^2(t^2 - 1)$$$. (This is a quadratic extension since $$$t > 1$$$.) Clearly, $$$x$$$ is in this quadratic extension, and $$$N(x) = t^2 - (t^2 - 1) = 1$$$. Remember that the norm is multiplicative. Since this is a ring, $$$x^k$$$ is in this ring too. Its norm is $$$1$$$ too, so if $$$x^k = a + b \sqrt{t^2 - 1}$$$, then $$$a^2 - b^2(t^2 - 1) = 1$$$. That is, $$$x^k = a + \sqrt{a^2 - 1}$$$, and its conjugate $$$a - \sqrt{a^2 - 1}$$$ is exactly $$$x^{-k}$$$, as their product is $$$N(x^k) = 1$$$; hence $$$x^k + x^{-k} = 2a$$$. When expanding $$$(t + \sqrt{t^2 - 1})^k$$$ via the binomial theorem, we note that $$$a$$$ is a polynomial in $$$t$$$. Hence there is a fixed polynomial $$$P$$$ such that the identity $$$P(x + x^{-1}) = x^k + x^{-k}$$$ holds for infinitely many values of $$$x$$$ (one for each integer $$$t > 1$$$), which means that this is an identity: multiplying the difference of the two sides by $$$x^k$$$ gives a polynomial with infinitely many roots, which must be the zero polynomial.

Note that if you actually expand out the whole thing using binomial theorem, you get the same expression for $$$T_k$$$ as you claimed.

Essentially, this is almost isomorphic to your argument, but has the added benefit that you don't have to deal with imaginary powers of $$$e$$$. Indeed, if you set $$$t = \cos \vartheta$$$, then tracing this argument for $$$|t| \le 1$$$, we recover the main parts of your proof.
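The norm argument can be traced concretely in $$$\mathbb{Z}[\sqrt{t^2-1}]$$$; below is a sketch (with $$$t=5$$$ picked arbitrarily) multiplying out $$$x^k = (t + \sqrt{t^2-1})^k$$$ and checking that the norm stays $$$1$$$ and that $$$x^k + x^{-k} = 2a$$$.

```python
from math import isclose, sqrt

t = 5
D = t * t - 1  # work in Z[sqrt(D)]

def mul(p, q):
    # (p0 + p1*sqrt(D)) * (q0 + q1*sqrt(D)) in Z[sqrt(D)]
    return (p[0] * q[0] + p[1] * q[1] * D, p[0] * q[1] + p[1] * q[0])

xv = t + sqrt(D)  # the real value of x = t + sqrt(t^2 - 1)
xk = (1, 0)
for k in range(1, 8):
    xk = mul(xk, (t, 1))
    a, b = xk
    assert a * a - b * b * D == 1            # the norm of x^k is still 1
    assert isclose(2 * a, xv**k + xv**(-k))  # hence x^k + x^{-k} = 2a
print("norm-1 powers give x^k + x^(-k) = 2a")
```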

Do we really need Chebyshev polynomials, or quadratic extensions, to prove such an intuitive thing?

Calm down guys!!

Spoiler (degree is $$$2k$$$, even; degree is $$$2k + 1$$$, odd): in both cases, doesn't it become a function of $$$x + \frac{1}{x}$$$ by induction?

Ooorrr $$$(x+x^{-1})^k = x^k + x^{-k} + \sum_{i=0}^{k-1}a_i (x^i + x^{-i})$$$ for some sequence $$$a_0, \ldots, a_{k-1}$$$ and we're done by induction :P

As adamant pointed out in the comment above, it's not so much about simplicity as it is about being connected to the bigger picture. You could also get the same conclusion from $$$f_k = x^k + x^{-k} \implies f_k = (x + x^{-1}) f_{k-1} - f_{k-2}$$$.
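The recurrence follows from $$$(x + x^{-1})(x^{k-1} + x^{1-k}) = (x^k + x^{-k}) + (x^{k-2} + x^{2-k})$$$, and is immediate to verify numerically (a minimal sketch):

```python
from math import isclose

x = 1.5
y = x + 1 / x
f = [2.0, y]  # f_0 = x^0 + x^0 = 2, f_1 = x + 1/x
for k in range(2, 12):
    f.append(y * f[-1] - f[-2])  # f_k = (x + 1/x) * f_{k-1} - f_{k-2}
for k in range(12):
    assert isclose(f[k], x**k + x**(-k))
print("recurrence f_k = y * f_{k-1} - f_{k-2} verified")
```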

Now using this recurrence, someone can use tridiagonal matrices to show more properties but I won't bother you with those.

Oh, I like this one! So, as I understand, from this it follows that

$$$$$$x^k + \frac{1}{x^k} = \det \begin{pmatrix} y & 2 & & & \\ 1 & y & 1 & & \\ & 1 & y & \ddots & \\ & & \ddots & \ddots & 1 \\ & & & 1 & y \end{pmatrix}, \qquad y = x + \frac{1}{x},$$$$$$

where the matrix has size $$$k \times k$$$. Neat, I like it when things are expressed directly through such determinants, as you may guess from 104234D - Triterminant!

From this it also follows that the Chebyshev polynomials can be expressed as

$$$$$$T_k(x) = \det \begin{pmatrix} x & 1 & & & \\ 1 & 2x & 1 & & \\ & 1 & 2x & \ddots & \\ & & \ddots & \ddots & 1 \\ & & & 1 & 2x \end{pmatrix}$$$$$$

with a $$$k \times k$$$ matrix, as for them the recurrence is $$$T_k(x) = 2 x T_{k-1}(x) - T_{k-2}(x)$$$.
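Both determinant identities can be sanity-checked with exact arithmetic; here is a sketch (helper names mine) evaluating tridiagonal determinants: diagonal $$$y = x + x^{-1}$$$ with a single $$$2$$$ next to the corner for $$$x^k + x^{-k}$$$, and diagonal $$$(x, 2x, \dots, 2x)$$$ for $$$T_k$$$.

```python
from fractions import Fraction

def det(m):
    # exact determinant via Gaussian elimination with row swaps
    m = [row[:] for row in m]
    n, sign = len(m), 1
    for c in range(n):
        p = next((r for r in range(c, n) if m[r][c] != 0), None)
        if p is None:
            return Fraction(0)
        if p != c:
            m[c], m[p] = m[p], m[c]
            sign = -sign
        for r in range(c + 1, n):
            f = m[r][c] / m[c][c]
            for j in range(c, n):
                m[r][j] -= f * m[c][j]
    res = Fraction(sign)
    for i in range(n):
        res *= m[i][i]
    return res

def tridiag(diag, corner=None):
    # tridiagonal matrix with the given diagonal, 1 on the off-diagonals,
    # and an optional replacement for the entry just right of the corner
    k = len(diag)
    m = [[Fraction(0)] * k for _ in range(k)]
    for i in range(k):
        m[i][i] = Fraction(diag[i])
        if i + 1 < k:
            m[i][i + 1] = m[i + 1][i] = Fraction(1)
    if corner is not None and k > 1:
        m[0][1] = Fraction(corner)
    return m

x = Fraction(3)
y = x + 1 / x
t0, t1 = Fraction(1), x  # T_0(x), T_1(x), advanced via the recurrence
for k in range(1, 8):
    assert det(tridiag([y] * k, corner=2)) == x**k + x**(-k)
    assert det(tridiag([x] + [2 * x] * (k - 1))) == t1
    t0, t1 = t1, 2 * x * t1 - t0
print("determinant identities verified")
```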