Several years ago, a wise person told me that convolution over a bitwise operator is possible: Given $$$A, B$$$ of size $$$2^N$$$, you can compute

$$$C[i] = \sum_{j \oplus k = i} A[j] B[k]$$$

$$$C[i] = \sum_{j \land k = i} A[j] B[k]$$$

$$$C[i] = \sum_{j \lor k = i} A[j] B[k]$$$

in $$$O(2^N N)$$$ time. Cool!

I asked the wise person how such things are possible. The wise person replied, *"Of course you know how FFT works, let's begin with the Fast Walsh-Hadamard Transform..."* I said, No. I don't know how FFT works. Thank you. Then I just threw it into my ICPC teamnote.

Years have passed, I still don't know how FFT works, and while writing some stupid essay, a random idea came to my mind. I wondered, "Does nobody really know this? Why hasn't anyone explained OR convolution this way?" I searched on Google, and nobody was telling things this way, so this is certainly not a common explanation. But why? It should be. Let me use my time to change things for good.

## Sum of Subsets

For convenience, I'll use set notation. We want to compute:

$$$C[i] = \sum_{j \cup k = i} A[j] B[k]$$$

If we can do this, we can also do $$$j \cap k$$$ easily. My approach can't handle XOR convolution anyway, so let's skip it.

Let's relax the condition as follows: $$$C^\prime[i] = \sum_{(j \cup k) \subseteq i} A[j] B[k]$$$

Which is $$$C^\prime[i] = \sum_{(j \subseteq i) \land (k \subseteq i)} A[j] B[k] = (\sum_{j \subseteq i} A[j]) (\sum_{k \subseteq i} B[k])$$$

Given an array $$$A$$$, how to compute $$$(\sum_{j \subseteq i} A[j])$$$? This is just a sum-of-subsets DP. Let's do it for both arrays $$$A$$$, $$$B$$$. Code:

```
// compute sum-of-subset
for (int i = 0; i < n; i++) {
    for (int j = 0; j < (1 << n); j++) {
        if ((j >> i) & 1) {
            A[j] += A[j - (1 << i)];
            B[j] += B[j - (1 << i)];
        }
    }
}
```

Then we have $$$C^\prime[i] = A[i] \times B[i]$$$.

## A naughty cat

You have $$$C^\prime[i] = \sum_{j \subseteq i} C[j]$$$. How to get $$$C$$$ from $$$C^\prime$$$?

Think about this. You had an array $$$A$$$, but a naughty cat computed its sum-of-subsets and replaced the array with it. You want to get $$$A$$$ back. What should you do? Just undo it!

```
// undo the sum-of-subset transform, in reverse
for (int i = n - 1; i >= 0; i--) {
    for (int j = (1 << n) - 1; j >= 0; j--) {
        if ((j >> i) & 1) {
            A[j] -= A[j - (1 << i)];
        }
    }
}
```

You know what's going on, you are doing everything in reverse.

But $$$C^\prime$$$ is a sum-of-subset of $$$C$$$. What?

```
// compute C^\prime
for (int i = 0; i < (1 << n); i++) {
    C[i] = A[i] * B[i];
}
// reverse sum-of-subset
for (int i = n - 1; i >= 0; i--) {
    for (int j = (1 << n) - 1; j >= 0; j--) {
        if ((j >> i) & 1) {
            C[j] -= C[j - (1 << i)];
        }
    }
}
```

That's it, enjoy your convolution without some crazy ad-hoc maths!
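Putting the pieces above together, a self-contained sketch (the wrapper name `or_convolution` is mine, not from the blog):

```cpp
#include <bits/stdc++.h>
using namespace std;

// C[i] = sum of A[j] * B[k] over all pairs with (j | k) == i
vector<long long> or_convolution(vector<long long> A, vector<long long> B, int n) {
    // sum-of-subsets transform on both inputs
    for (int i = 0; i < n; i++)
        for (int j = 0; j < (1 << n); j++)
            if ((j >> i) & 1) {
                A[j] += A[j - (1 << i)];
                B[j] += B[j - (1 << i)];
            }
    // pointwise product gives C', the sum-of-subsets of C
    vector<long long> C(1 << n);
    for (int j = 0; j < (1 << n); j++) C[j] = A[j] * B[j];
    // undo the sum-of-subsets transform, in reverse
    for (int i = n - 1; i >= 0; i--)
        for (int j = (1 << n) - 1; j >= 0; j--)
            if ((j >> i) & 1) C[j] -= C[j - (1 << i)];
    return C;
}
```

For example, with $$$n = 2$$$, $$$A = [1, 2, 3, 4]$$$, $$$B = [5, 6, 7, 8]$$$, this returns $$$[5, 28, 43, 184]$$$.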

**Remark 1.** The same approach works for GCD and LCM convolution, since it's something like the (number of primes $$$\leq n$$$)-dimensional equivalent of the above approach, and the "sum over divisors" transform can be done in $$$O(n \log n)$$$ time.
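For concreteness, here is a sketch of what Remark 1 might look like for GCD convolution (the helper name `gcd_convolution` and the per-prime sieve are my own choices; a plain harmonic-sum loop over divisors also works):

```cpp
#include <bits/stdc++.h>
using namespace std;

// GCD convolution: C[i] = sum of A[j] * B[k] over pairs with gcd(j, k) == i.
// Indices run 1..n; index 0 is unused. "Sum over multiples" plays the role
// of sum-of-subsets, one prime per dimension, and we undo it the same way.
vector<long long> gcd_convolution(vector<long long> A, vector<long long> B) {
    int n = (int)A.size() - 1;
    vector<int> primes;
    vector<bool> comp(n + 1, false);
    for (int p = 2; p <= n; p++) {
        if (comp[p]) continue;
        primes.push_back(p);
        for (int q = 2 * p; q <= n; q += p) comp[q] = true;
    }
    // forward: A[i] <- sum of A[m] over all multiples m of i
    for (int p : primes)
        for (int i = n / p; i >= 1; i--) { A[i] += A[i * p]; B[i] += B[i * p]; }
    vector<long long> C(n + 1, 0);
    for (int i = 1; i <= n; i++) C[i] = A[i] * B[i];
    // undo, prime by prime in reverse
    for (int t = (int)primes.size() - 1; t >= 0; t--)
        for (int i = 1; i * primes[t] <= n; i++) C[i] -= C[i * primes[t]];
    return C;
}
```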

**Remark 2.** This article used 50 minutes of time that should be used to complete the stupid essay.

Wow, and this approach almost trivializes subset convolution. Now I can do subset convolution without any complicated tutorial :)

I think it works in any lattice, that is, a set with a $$$\leq$$$ relation (a partially ordered set) in which any two elements have a "least common ancestor" $$$i \wedge j$$$ and a "highest common descendant" $$$i \vee j$$$.

In this notion, for $$$C[k]$$$ defined as

$$$C[k] = \sum_{i \vee j = k} A[i] B[j],$$$

it always holds that

$$$\sum_{t \leq k} C[t] = \left(\sum_{i \leq k} A[i]\right) \left(\sum_{j \leq k} B[j]\right).$$$

And you can always invert the sum over $$$t \leq k$$$ via the Möbius inversion formula as

$$$C[k] = \sum_{t \leq k} \mu(t, k) \sum_{s \leq t} C[s],$$$

where $$$\mu$$$ is the Möbius function of the poset. See also this article by nor which extensively goes into the topic from the poset point of view.

I think this approach is generally known to the CP community, as there are problems about GCD/LCM convolutions on Library Checker, but maybe it's not standard in tutorials because tutorial authors perceive polynomial evaluation/interpolation, or sometimes the matrix Kronecker product, as more natural, insightful and relatable to cpers than the poset point of view?

Besides, these tutorials are usually written in a way that is supposed to be uniform for OR convolution, AND convolution and XOR convolution, and as you pointed out yourself, it doesn't work with XOR convolution.

Lattice, such a cool name, thanks!

Here's a fun lattice: subspaces of $$$\mathbb{F}_q^n$$$ under inclusion. (This recently appeared as part of problem A on UCup 1 Stage 13: Iberia).

You can experimentally find the closed form of the Möbius function or derive it using some math.

Spoiler: The result turns out to be $$$\mu(x, y) = (-1)^{d} q^{\binom{d}{2}}$$$ where $$$d = \dim y - \dim x$$$, for $$$x \subseteq y$$$.

Does anyone have a nice combinatorial interpretation of this? (Combinatorial means something like this comment for the lattice of partitions.)

Let me try.

Let $$$f(d)=(-1)^d q^{d \choose 2}$$$. For a subspace $$$W \subset V$$$, the equation we ultimately want to have is $$$\sum_{W \subset X \subset V} f(\dim(V)-\dim(X))=1$$$ if $$$W=V$$$ and $$$0$$$ otherwise. If we let $$$n=\dim(V)-\dim(W)$$$ and $$$k=\dim(V)-\dim(X)$$$, the equation becomes $$$\sum_{0 \leq k \leq n} f(k) \times \unicode{35} \{$$$ subspaces of $$$F_q^n$$$ with dimension $$$k\} = 0$$$ for $$$n > 0$$$ (and $$$1$$$ for $$$n = 0$$$). (BTW How did you use \mathbb{F} in math mode? It gave me an error...)

The case of $$$n=0$$$ is easy, so let's focus on $$$n>0$$$.

Take an arbitrary vector $$$v$$$, say, $$$v=(1,0,0,\cdots,0)$$$. For a subspace of $$$F_q^n$$$, let's call it type A if it contains $$$v$$$ and type B otherwise. I will show that $$$\unicode{35} \{$$$ type A subspaces of dimension $$$(k+1)\} \times q^k=\unicode{35}\{$$$ type B subspaces of dimension $$$k\}$$$, then everything cancels out in the sought sum.

Consider a type A subspace of dimension $$$k+1$$$ and let $$$b_i$$$ ($$$1 \leq i \leq k+1$$$) be its basis. We can safely assume $$$b_{k+1}=v$$$. Then, for any $$$(x_1,x_2,\cdots,x_k) \in F_q^k$$$, $$$(b_i-x_i \times v)$$$ ($$$1 \leq i \leq k$$$) form a basis and they correspond to a type B subspace of dimension $$$k$$$. Thus we have a $$$q^k$$$-to-one correspondence and this completes the proof.
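For what it's worth, here is a sketch of the "experimentally find" route mentioned above (the function names are mine): numerically check that $$$\sum_{0 \leq k \leq n} f(k) \times \unicode{35}\{$$$ subspaces of dimension $$$k\}$$$ vanishes for $$$n > 0$$$.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Number of k-dimensional subspaces of F_q^n (Gaussian binomial coefficient):
// prod_{i=0}^{k-1} (q^n - q^i) / (q^k - q^i). Plain long long suffices for small n, q.
long long subspace_count(int n, int k, long long q) {
    long long qn = 1, qk = 1;
    for (int i = 0; i < n; i++) qn *= q;
    for (int i = 0; i < k; i++) qk *= q;
    long long num = 1, den = 1, qi = 1;
    for (int i = 0; i < k; i++) { num *= qn - qi; den *= qk - qi; qi *= q; }
    return num / den; // the division is exact
}

// sum over k of (-1)^k q^{k(k-1)/2} * #{k-dim subspaces}; should be 1 iff n == 0
long long mobius_sum(int n, long long q) {
    long long sum = 0;
    for (int k = 0; k <= n; k++) {
        long long f = (k % 2 ? -1 : 1);
        for (int i = 0; i < k * (k - 1) / 2; i++) f *= q;
        sum += f * subspace_count(n, k, q);
    }
    return sum;
}
```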

Wow, did you upsolve it?

Yeah, it was a pretty cute problem, the math wasn't too bad. I wonder if it's possible to solve in subquadratic time, and I also wonder if it's possible without PIE (e.g. over floating point numbers).

From your comment it seems that you found it not that hard? Mind telling the gist of your solution? We have some digit dps with insane tricks.

First, we'll do PIE on the basis condition: for each subspace $$$B'$$$ of the given basis $$$B$$$, we want to count the number of subsets of $$$\{1, \ldots, X \}$$$ with all elements in $$$B'$$$, weighted by $$$\mu(B', B)$$$.

The key observation is that the problem on a subspace $$$S$$$ of dimension $$$M$$$ and an upper bound $$$X$$$ can be reduced to the problem on $$$\mathbb{F}_q^M$$$ with some upper bound $$$X'$$$ (which clearly has $$$2^{X'}$$$ valid subsets).

The proof is not too difficult: consider the reduced-row-echelon-form basis $$$B$$$ of $$$S$$$, and express each element of $$$S$$$ in terms of this basis (i.e. as $$$S \ni Bv$$$ for some $$$v \in \mathbb{F}_q^M$$$). One nice property of an RREF basis is that lexicographic ordering is preserved, i.e. $$$v \le u \iff Bv \le Bu$$$. Thus, the elements of $$$S$$$ less than $$$X$$$ must be the elements of $$$\mathbb{F}_q^M$$$ at most some upper bound $$$X'$$$.

We can give an explicit construction of $$$X'$$$: consider the element $$$Bv$$$ of $$$S$$$ which matches $$$X$$$ for as many digits as possible. Then, if $$$Bv > X$$$, our upper bound is $$$X' = v-1$$$; otherwise, our upper bound is $$$X' = v + 2^d - 1$$$, where $$$d$$$ is the number of digits of $$$v$$$ after the mismatch.

To finish the problem, we do digit-DP on the RREF basis of the subspace $$$S$$$. We go from the least significant bit upwards, and our state is the current dimension, as well as whether we've hit the first mismatch of $$$Bv$$$ with $$$X$$$ or not. For each digit, we either add a vector starting at this digit to our basis, or we don't. It's helpful to note that, in the second case, exactly half of the bases will match $$$X$$$ at that digit.

It's kind of interesting that the problem statement gives you an explicit subspace; it's almost a hint that you should look at subspaces and try to reduce the problem. I think the solution would've been harder to come up with if the problem just asked for full-rank-subsets of $$$\{1, \ldots, X \}$$$.

A little nitpick: you only need semilattices for such an argument to work.

One obvious case where this becomes significant is the case of trees — a tree forms a semilattice (but not a lattice) under the LCA operation, and you can find things like $$$f(v) = \sum_{LCA(a, b) = v} x_a x_b$$$ where $$$x_i$$$ are some numbers assigned to vertices.

Looking at it from a poset perspective, the DP to recover $$$f$$$ from the zeta transform — which is the square of the subtree sum of that vertex — is just a more manageable way to compute the Möbius function via the theorem for lattices mentioned in my blog, and since the answer is linear in the Mobius function, it manages to also solve the problem.
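A sketch of that tree computation in code (my own; the helper name `lca_convolution` is assumed, and `children` is a rooted-tree adjacency list): take subtree sums, square them, then subtract the children's squares to recover $$$f$$$.

```cpp
#include <bits/stdc++.h>
using namespace std;

// f(v) = sum over ordered pairs (a, b) with LCA(a, b) = v of x[a] * x[b].
// Zeta transform over this semilattice = subtree sums S[v]; S[v]^2 counts
// all pairs with both endpoints inside subtree(v), and subtracting the
// children's squares (the Mobius inversion step) isolates pairs whose LCA is v.
vector<long long> lca_convolution(const vector<vector<int>>& children,
                                  const vector<long long>& x, int root) {
    int n = (int)x.size();
    vector<long long> S(n), f(n);
    function<void(int)> dfs = [&](int v) {
        S[v] = x[v];
        for (int c : children[v]) { dfs(c); S[v] += S[c]; }
        f[v] = S[v] * S[v];
        for (int c : children[v]) f[v] -= S[c] * S[c];
    };
    dfs(root);
    return f;
}
```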

Yea, sorry, I'm not a lattice expert and did not know the name semilattice >.<

super straightforward approach, I love it.

"Does nobody really know this?" — it's certainly the way I always thought about it and I didn't even know that it has any FWHT/FFT connections xd

I still don't know what FWHT is...

This is literally the transform you do to the array for XOR convolution...

Thanks! Maybe you are finding this?

Looks interesting, thank you.

1103E - Radix sum may be relevant too. Is it what you are asking about in (2.)?

There is a super nice and general way to view all kinds of transforms that are used for different kinds of convolution. The trick is to think about properties (functions) $$$f$$$ such that $$$f(A) \cdot f(B) = f(conv(A,B))$$$. In your example of OR-convolution, the properties you are using are $$$f_0,...,f_{2^n-1}$$$ where $$$f_j(A) := \sum_{i \subseteq j} A[i]$$$.

As another example, let's take circular convolution with periodicity $$$n$$$. Meaning you are given two polynomials $$$A(x)$$$ and $$$B(x)$$$, and your goal is to find the polynomial $$$C(x) := A(x) \cdot B(x) \pmod{x^n - 1}$$$, which is an $$$(n-1)$$$-degree polynomial.

Note that for any complex number $$$\omega$$$ such that $$$\omega^n = 1$$$, we have that $$$A(\omega) \cdot B(\omega) = C(\omega)$$$. There are $$$n$$$ such complex numbers, and $$$C$$$ is an $$$(n - 1)$$$-degree polynomial. This means that by evaluating $$$A$$$ and $$$B$$$ in all of the $$$\omega$$$:s we have enough information to calculate $$$C$$$ using interpolation. The properties "$$$f$$$" that we are using here is the evaluation of a polynomial at the $$$\omega$$$:s.

So what is FFT? If you give FFT a $$$(n-1)$$$-degree polynomial, then it evaluates the polynomial in all $$$n$$$-th roots of unity (the $$$\omega$$$:s). Inverse FFT is the reverse of this, it finds a $$$(n-1)$$$-degree polynomial using interpolation. So if you have FFT and IFFT implemented, then it is trivial to implement circular convolution.

Here is an example of a Python implementation of circular convolution using FFT.

Sir, I think the point of this blog is specifically to avoid this general way to view them...

True (and this blog does a good job of explaining it in that way), but the DPs used for such transforms usually end up being representable as such maps. There's stuff like the infimal convolution that feels hard to explain in such a simplified manner, but is useful in a lot of places nevertheless (for example, in Alien's trick).

No, just using sensible terminology and approach.

But pajenegod uses the same approach as the blogs on fwht you complain about...

Yeah, glad that you know where it got wrong

If something has an established name, it's sensible to call it by the name, rather than explain it without mentioning the name...

Opposite of how I learned to write: Use the easiest way to describe things.

I mean, I think your posts have a specific audience, they probably know what's going on, so it's ok. I just found it sad that nobody discussed this for general audience (such as me).

I mean, I get your point. It's much worse in math textbooks and I hate it. But complaining that a blog about something that is exactly FWHT (XOR convolution) mentions FWHT and FFT, and skipping it because of this, is still weird.

Considering that Um_nik doesn't know FWHT, nor do I, most schoolchildren won't, and my 2400+ friends say they are enlightened, I don't think this is a niche position.

Bruh, it's more like you and Um_nik know it, but refuse to acknowledge it by this name.

No I really don't know it. Really.. I kinda know FFT, but zero idea on FWHT.

I don't think that's always true. Do you know what "Kadane's algorithm" is?

It's the algorithm to find the max-sum segment. I constantly forget its details because prefix sums are more natural for this. I think, it still makes sense to mention its name in a tutorial about it, and it shouldn't divert people from reading the tutorial? Besides, I don't know a better name for it anyway.

That depends on people, I think. The algorithm in this situation is so simple that giving it a name (from my viewpoint) creates an unearned aura of scientific significance and/or complexity. I don't think that algorithm deserves a name, and I would prefer to refer to it by "max-sum subsegment".

The situation is a bit different with FWHT and XOR convolution. Here I just think that "XOR convolution in $$$O(k 2^k)$$$" is the algorithm, and FWHT might be the name of the transform used in the algorithm, but mentioning it instead of XOR convolution for me seems like an obfuscation, like you are trying to hide XOR convolution and/or give it a scientific-sounding name.

And I have to acknowledge that my position is weird because I don't have anything against saying FFT instead of "polynomial multiplication in $$$O(n \log n)$$$".

One day someone said he was using an "imos method" to do some stuff. I asked, what's that trick? And it was a difference array. I almost died. I don't know who "imos" is, but shame on him. I can say that. That's like a communication crashing because of some crazy ego.

Here's an even more obtuse example: you can use something called the Bird–Meertens formalism to derive Kadane's algorithm from the $$$O(n^3)$$$ definition. Though it is definitely a cool thing when it comes to deriving programs formally from their specification (and I really like it for that reason), it's not really useful to learn in the context of CP.

I would say it is more useful to name ideas rather than to name specific algorithms (so stuff like posets etc. make more sense in this context, but FWHT in isolation does not).

Well yes, this blog is about doing OR-convolution using sum-of-subsets.

However, I think that it is still good to point out that there are very similar explanations of how to do other convolutions. For example, the blog just leaves out XOR-convolution, claiming that "My approach can't do XOR convolution", which is not entirely true.

The transform used for XOR-convolution is $$$A'[j] = \sum_i A[i] \cdot (-1)^{|i \cap j|}$$$.

The reason for how I know this is explained in my original comment. It has to do with the fact that $$$f_j(A) \cdot f_j(B) = f_j(\text{XOR_CONV}(A,B))$$$ for $$$f_j(A) := \sum_i A[i] \cdot (-1)^{|i \cap j|}$$$, $$$j = 0, ..., 2^n - 1$$$.
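For illustration, a sketch of how this property turns into code (my own, not from the comment): the same double loop as the blog's sum-of-subsets, with the `+=` replaced by a butterfly $$$(x, y) \to (x + y, x - y)$$$; applying the transform twice multiplies everything by $$$2^n$$$, which gives the inverse.

```cpp
#include <bits/stdc++.h>
using namespace std;

// XOR convolution via A'[j] = sum_i A[i] * (-1)^{popcount(i & j)} (FWHT).
// Applying the same transform again multiplies every entry by 2^n,
// so the inverse is "transform, then divide".
vector<long long> xor_convolution(vector<long long> A, vector<long long> B, int n) {
    auto fwht = [&](vector<long long>& F) {
        for (int i = 0; i < n; i++)
            for (int j = 0; j < (1 << n); j++)
                if ((j >> i) & 1) {
                    long long x = F[j - (1 << i)], y = F[j];
                    F[j - (1 << i)] = x + y;
                    F[j] = x - y;
                }
    };
    fwht(A);
    fwht(B);
    vector<long long> C(1 << n);
    for (int j = 0; j < (1 << n); j++) C[j] = A[j] * B[j];
    fwht(C);
    for (long long& c : C) c /= 1LL << n; // always divides exactly
    return C;
}
```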

Does it have any natural explanation?

The one I know is to treat $$$A$$$ and $$$B$$$ as multivariate polynomials, where the entry with the index

$$$i = \sum_{k} 2^k i_k, \quad i_k \in \{0, 1\}$$$

corresponds to the coefficient near

$$$x_0^{i_0} x_1^{i_1} \cdots x_n^{i_n}$$$

for $$$(n+1)$$$ variables $$$x_0, x_1, \dots, x_n$$$, one per bit. We want them to behave like

$$$\prod_k x_k^{i_k} \cdot \prod_k x_k^{j_k} = \prod_k x_k^{(i \oplus j)_k},$$$

where $$$i \oplus j$$$ is the XOR of $$$i$$$ and $$$j$$$. So, we evaluate them in numbers $$$\omega_0, \omega_1, \dots, \omega_n$$$ such that

$$$\omega_k^{i_k} \cdot \omega_k^{j_k} = \omega_k^{(i \oplus j)_k}$$$

for each bit $$$k$$$. Conveniently it's equivalent to $$$\omega_k^2 = \omega_k^1 \omega_k^1 = \omega_k^0 = 1$$$, hence $$$\omega_k \in \{-1, 1\}$$$, so the transform for the XOR convolution is equivalent to evaluating these multivariate polynomials in the vertices of the cube $$$[-1, 1]^{n+1}$$$.

ko_osaga, this (poly eval/interpolation interpretation) and the comment below (set-theoretic interpretation) is essentially FWHT. Do you have any idea of it now?

Ok, adamant eventually made me understand this weird math stuff. You win ;_;

I think I get that set theory 100%, but for the poly one I don't know how it corresponds to an algorithm. But I just want to chill, so plz don't explain it

I think, you just change

`A[j] += A[j - (1 << i)];`

to

`{ auto x = A[j - (1 << i)], y = A[j]; A[j - (1 << i)] = x + y; A[j] = x - y; }`

in your impl of the OR transform, as the OR transform is eval in $$$[0, 1]^{n+1}$$$.

Here is how I would intuitively explain the transforms of many different types of convolutions.

**OR-convolution:** $$$C[i] = \sum_{j \cup k = i} A[j] B[k]$$$

uses the transform $$$A'[j] = \sum_i A[i] I_{i \subseteq j}$$$ (called sum-of-subsets transform)

since $$$I_{i_1 \subseteq j} \cdot I_{i_2 \subseteq j} = I_{(i_1 \cup i_2) \subseteq j}$$$.

**AND-convolution:** $$$C[i] = \sum_{j \cap k = i} A[j] B[k]$$$

uses the transform $$$A'[j] = \sum_i A[i] I_{i \supseteq j}$$$

since $$$I_{i_1 \supseteq j} \cdot I_{i_2 \supseteq j} = I_{(i_1 \cap i_2) \supseteq j}$$$.

**XOR-convolution:** $$$C[i] = \sum_{j \triangle k = i} A[j] B[k]$$$

uses the transform $$$A'[j] = \sum_i A[i] (-1) ^ {|i \cap j|}$$$ (called FWHT)

since $$$(-1) ^ {|i_1 \cap j|} \cdot (-1) ^ {|i_2 \cap j|} = (-1) ^ {|(i_1 \triangle i_2) \cap j|}$$$.

**Circular convolution mod $$$n$$$:** $$$C[i] = \sum_{j + k \equiv i \pmod{n}} A[j] B[k]$$$

uses the transform $$$A'[j] = \sum_i A[i] (e^{\mathrm{i} \frac{2 \pi j}{n}}) ^ i$$$ (called FFT)

since $$$(e^{\mathrm{i} \frac{2 \pi j}{n}}) ^ {i_1} \cdot (e^{\mathrm{i} \frac{2 \pi j}{n}}) ^ {i_2} = (e^{\mathrm{i} \frac{2 \pi j}{n}}) ^ {(i_1 + i_2 (\text{mod }n))}$$$.

**GCD-convolution:** $$$C[i] = \sum_{gcd(j,k) = i} A[j] B[k]$$$

uses the transform $$$A'[j] = \sum_i A[i] I_{j \mid i}$$$

since $$$I_{j \mid i_1} \cdot I_{j \mid i_2} = I_{j \mid gcd(i_1,i_2)}$$$.

**LCM-convolution:** $$$C[i] = \sum_{lcm(j,k) = i} A[j] B[k]$$$

uses the transform $$$A'[j] = \sum_i A[i] I_{i \mid j}$$$

since $$$I_{i_1 \mid j} \cdot I_{i_2 \mid j} = I_{lcm(i_1, i_2) \mid j}$$$.

Oh, I see. Yeah, looking on it this way makes a lot of sense. The family of transforms for XOR convolution still looks arbitrary and somewhat unmotivated, but it's still a very nice way to present it.

I suppose, from a set-theoretic point of view you define it as

$$$A^\prime[j] = \sum_{i} A[i] (-1)^{|i \cap j|},$$$

and then

$$$A^\prime[j] B^\prime[j] = \sum_{i_1, i_2} A[i_1] B[i_2] (-1)^{|i_1 \cap j| + |i_2 \cap j|},$$$

which then rewrites as

$$$A^\prime[j] B^\prime[j] = \sum_{k} (-1)^{|k \cap j|} \sum_{i_1 \triangle i_2 = k} A[i_1] B[i_2] = C^\prime[j],$$$

which explains the formula. But there's no way I would ever consider such transform naturally... Like, I don't see a meaningful relaxation of $$$i \Delta j = k$$$ similar to how the blog goes from $$$i \cup j = k$$$ to $$$i \cup j \subset k$$$.

P.S. $$$i \Delta j$$$ is the symmetric difference of $$$i$$$ and $$$j$$$.

This is clean, good!

We need $$$n = 2^k$$$ to have an $$$O(n \log n)$$$ algorithm, right?

Standard FFT needs it, but you can reduce arbitrary $$$n$$$ to a convolution (which is doable with powers of $$$2$$$) with chirp Z-transform. On the other hand, if you only want to compute $$$A(x) B(x) \pmod{x^n - 1}$$$, rather than $$$A(\omega)$$$ for each root of $$$\omega^n=1$$$, you can as well compute $$$A(x) B(x)$$$ in full and reduce it modulo $$$x^n-1$$$ in $$$O(n)$$$.

Essentially yes.

However, there is a common trick to get around this limitation. By paying a factor of 2 extra, you can use circular convolution to calculate usual convolution (no modding by $$$(x^n - 1)$$$). This is because modding with $$$(x^n - 1)$$$ doesn't matter if $$$n$$$ becomes large enough.

So to do circular convolution for an $$$n$$$ that is not a power of 2, you start by doing a circular convolution with a large power of 2 as its periodicity. Then you simply mod the result of that by $$$x^n - 1$$$.

Modding by $$$x^n$$$ is removing all coefficients with index $$$\geq n$$$. You want to mod by $$$x^n-1$$$, that is substitute $$$x^n = 1$$$ and turn $$$x^{kn+b}$$$ into $$$x^b$$$. Note that you can't do this if you want to invert the circular convolution (e.g. in 901E - Cyclic Cipher).

Ah oops, I meant $$$\pmod{x^n - 1}$$$ and not $$$\pmod{x^n}$$$. Thanks for pointing that out. I'll fix it.

What about uncommon people like us?

Thanks for the easy to understand blog — it is clearly another step in the right direction since it provides more evidence for the claim that all problems can be solved using DP.

You may also consider what you did (specifically sum-of-subset) as if you calculated the following DP: `dp[b][i]` is the sum of $$$f(j)$$$ over all $$$j$$$ such that $$$j$$$ is a subset of $$$i$$$ and may differ from $$$i$$$ only in the last (or first, doesn't matter) $$$b$$$ bits. Do you know how to calculate `dp[b][i]` from `dp[b - 1][*]`? Yes: if the $$$b$$$-th bit of $$$i$$$ is zero, then these values coincide; otherwise `dp[b][i] = dp[b - 1][i] + dp[b - 1][i - (1 << b)]`. Your two nested loops compute this dp one layer at a time.

Keeping this in mind, you may generalize this to xor-convolution. Indeed, let `dp[b][i]` be the sum of $$$f(j)$$$ over $$$j$$$ that differ from $$$i$$$ only in the last $$$b$$$ bits, but with each $$$f(j)$$$ multiplied by $$$-1$$$ if the number of bits they differ in is odd. Now I think you also know how to compute `dp[b][i]` from `dp[b - 1][*]`.

Idk if this makes any visual sense to you, but probably it is easier to understand xor-convolution in terms of dp.

Number of legend's comment in this blog >>>>>>>>>>>

Do not read this article if you are not red. Go solve binary search instead.

Was funny before 1000000007th time.

It's not funny and the post is not exclusively for 2400+. If you understand SOS DP you can read it.

keep it up. do nto lsten to ahders. yuo aer fnuonny.

-Yuor Snciernly Btsowana

Wow FMT&FWT!

Cool! I used this technique to solve these problems:

You'll have to join this group for the second one.

Thank you for writing such a clear and concise blog.

What is this set notation in your summation ($$$\sum$$$)? Apologies if the question is duplicate or too noob.

The numbers $$$j, k$$$ are not sets per se, but I'm considering them as sets, which makes sense if you consider their bitmasks.

Can somebody explain how to read/understand that formula: $$$C[i] = \sum_{j \cup k = i} A[j] B[k]$$$

If A={1,2,3} and B={4,5,6}, what would C[] be? Do A[] and B[] have the same size?

Using bitwise operations, the formula is $$$C[i] = \displaystyle\sum_{j | k = i}A[j]B[k]$$$.

The formula tells us that for all $$$0 \le i < 2^N$$$, $$$C[i]$$$ is the sum of $$$A[j]B[k]$$$ over all pairs $$$(j, k)$$$ such that $$$0 \le j, k < 2^N$$$ and $$$j | k = i$$$.

The first line of the blog says "... Given $$$A$$$, $$$B$$$ of size $$$2^N$$$ ...". Your arrays $$$A$$$ and $$$B$$$ are not of size $$$2^N$$$. But let's suppose that $$$N = 2$$$, $$$A = [1, 2, 3, 4]$$$, and $$$B = [5, 6, 7, 8]$$$.

Now, we go through all pairs $$$(j, k)$$$ such that $$$0 \le j, k < 2^N$$$, or in this case, $$$0 \le j, k < 4$$$, and calculate the bitwise OR of each pair. We can group the pairs by their bitwise OR like this:

Thus,

$$$C[0] = A[0]B[0] = 1 \cdot 5 = 5$$$

$$$C[1] = A[0]B[1] + A[1]B[0] + A[1]B[1]$$$

$$$C[1] = 1 \cdot 6 + 2 \cdot 5 + 2 \cdot 6$$$

$$$C[1] = 6 + 10 + 12 = 28$$$

$$$C[2] = A[0]B[2] + A[2]B[0] + A[2]B[2]$$$

$$$C[2] = 1 \cdot 7 + 3 \cdot 5 + 3 \cdot 7$$$

$$$C[2] = 7 + 15 + 21 = 43$$$

$$$C[3] = A[0]B[3] + A[1]B[2] + A[1]B[3] + A[2]B[1] + A[2]B[3] + A[3]B[0] + A[3]B[1] + A[3]B[2] + A[3]B[3]$$$

$$$C[3] = 1 \cdot 8 + 2 \cdot 7 + 2 \cdot 8 + 3 \cdot 6 + 3 \cdot 8 + 4 \cdot 5 + 4 \cdot 6 + 4 \cdot 7 + 4 \cdot 8$$$

$$$C[3] = 8 + 14 + 16 + 18 + 24 + 20 + 24 + 28 + 32 = 184$$$

This means that $$$C = [5, 28, 43, 184]$$$.

You can check that this is correct using the code in the blog.

Thanks a lot!