By adamant, history, 6 months ago

Hi there! Imagine you're participating in a CodeChef long challenge and you see a problem from chemthan asking you to calculate some sums of powers, like $1^p + 2^p + \ldots + n^p$ for all $p$ from $0$ to $k$. You immediately understand what's going on here and pull Faulhaber's formula out of your wide pants, do ya? Just take a look at it!

$$1^p + 2^p + \ldots + n^p = \frac{1}{p+1}\sum_{j=0}^{p} \binom{p+1}{j} B_j^{+}\, n^{p+1-j}$$

Beautiful, right? Wrong! This formula is dumb, and it's hard to understand and remember! Why would you ever think about Bernoulli numbers on any contest that lasts less than a week? And what the hell are those Bernoulli numbers anyway?! Here is what you should do instead:

Let $S_p = 1^p + \ldots + n^p$. Consider its exponential generating function (EGF):

$$\sum_{p=0}^{\infty} S_p \frac{x^p}{p!} = \sum_{i=1}^{n}\sum_{p=0}^{\infty} \frac{(ix)^p}{p!} = \sum_{i=1}^{n} e^{ix} = \frac{e^{(n+1)x} - e^x}{e^x - 1}$$

Now you can simply find $S_0, \ldots, S_k$ by finding the inverse series of $\frac{e^x - 1}{x}$ and multiplying it with $\frac{e^{(n+1)x} - e^x}{x}$. Enjoy your power sums without Stirling and/or Bernoulli numbers!
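To make the recipe above concrete, here is a minimal sketch in Python. It works with exact rationals and a naive $O(k^2)$ series inverse for readability; a real contest solution would use modular arithmetic and an $O(k \log k)$ inverse instead. The function name `power_sums` is mine, not from the problem.

```python
from fractions import Fraction
from math import factorial

def power_sums(n, k):
    """S_p = 1^p + ... + n^p for all p = 0..k, via the EGF
    (e^{(n+1)x} - e^x)/(e^x - 1), computed as A(x) * (1/C(x)) with
    A = (e^{(n+1)x} - e^x)/x and C = (e^x - 1)/x."""
    # [x^i] A = ((n+1)^{i+1} - 1) / (i+1)!
    A = [Fraction((n + 1) ** (i + 1) - 1, factorial(i + 1)) for i in range(k + 1)]
    # [x^i] C = 1 / (i+1)!
    C = [Fraction(1, factorial(i + 1)) for i in range(k + 1)]
    # Naive O(k^2) series inverse B = 1/C; in a contest you'd use
    # Newton's iteration with NTT for O(k log k).
    B = [Fraction(1)] * (k + 1)  # C[0] = 1, so B[0] = 1
    for i in range(1, k + 1):
        B[i] = -sum(C[j] * B[i - j] for j in range(1, i + 1))
    # Read off S_p = p! * [x^p] (A * B)
    return [factorial(p) * sum(A[j] * B[p - j] for j in range(p + 1))
            for p in range(k + 1)]

print([int(s) for s in power_sums(3, 2)])  # [3, 6, 14]
```

Note that the inverse series $B$ here is exactly the EGF of the Bernoulli numbers, which is why this is Faulhaber's formula in disguise.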

Exercise: Solve this problem in $O(k \log k)$.

P.S. Yes, you basically perform the same calculations as in Faulhaber's formula, but now you hopefully understand what you're doing.

+131

 » 6 months ago, # |   +11 First of all, you don't need to remember Faulhaber's formula, you only need to know that it exists and interpolate. Also, these solutions are substantially different, because Faulhaber's formula finds Sp(n) for fixed p and all n, and yours finds it for fixed n and all p.
•  » » 6 months ago, # ^ | ← Rev. 2 →   0 I don't really agree with you here, because by calculating the inverse of $\frac{e^x-1}{x}$ you calculate nothing but the EGF for Bernoulli numbers, so the approaches are exactly the same, just with different interpretations. And I think you should be more precise about what you mean by "all n". If it's all n from 1 to N, then you don't seem to need any special formulas at all: you can calculate it with simple prefix sums. If you mean fixed p and an arbitrary single n, then my solution allows you to do that as well. And yes, you're right, interpolation also solves the particular case with both n and p fixed in O(p). But as you may see at the beginning of my post, I wanted to write specifically about the case when n is fixed and you need to calculate the values for all p from 0 to k. I understand that it might be a bit misleading, since I only pasted the formula here without further comments, but it was supposed to be considered as a convolution, and in that interpretation you clearly may calculate the values for all p.
•  » » » 6 months ago, # ^ |   0 Sorry, I forgot that evaluating a polynomial takes O(p) time, which is not even better than interpolation. I got that your solution works for all p from 0 to k. But still, for fixed n and p, I don't see a way to make it work in linear time, like interpolation. Calculating a single coefficient of a product is easy, but here we have a quotient. So it seems your solution is only good when we need to calculate the sum for different p. In other cases the additional logarithm, with a pretty big constant from the inverse, seems bad.
•  » » » » 6 months ago, # ^ |   0 That is true: for fixed n and p, Lagrange interpolation is better. Though it is worth mentioning that you seemingly can't apply interpolation in this problem, while the similar EGF can be calculated fast enough to solve it.
 » 6 months ago, # |   0 How does one find the inverse series? I know there is a post by vfleaking on Codeforces about this. I read it several times, but I still can't understand it. Please tell me, how can we find the inverse series?
•  » » 6 months ago, # ^ |   0 link
•  » » » 6 months ago, # ^ |   0 Here. Read the editorial part for problem E, starting from the line "There is an interesting algorithm which calculates the inverse of a power series F(z)".
•  » » 6 months ago, # ^ |   0 I have a paragraph about it here.
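For reference, one standard way to compute the inverse of a power series is Newton's iteration: if $B \cdot C \equiv 1 \pmod{x^t}$, then $B' = B(2 - CB)$ satisfies $B' \cdot C \equiv 1 \pmod{x^{2t}}$. A sketch, assuming coefficients modulo the prime 998244353 and using plain $O(k^2)$ multiplication to keep it short (with FFT/NTT multiplication the whole thing is $O(k \log k)$); the function names are mine:

```python
def series_inverse(C, k, mod=998244353):
    """Inverse of power series C (C[0] != 0) modulo x^k, coefficients mod a prime.
    Newton's iteration doubles the number of correct coefficients each step."""
    def mul(P, Q, t):  # product of P and Q truncated to t coefficients
        R = [0] * t
        for i, p in enumerate(P):
            if i >= t:
                break
            for j, q in enumerate(Q):
                if i + j >= t:
                    break
                R[i + j] = (R[i + j] + p * q) % mod
        return R

    B = [pow(C[0], mod - 2, mod)]  # base case: one correct coefficient
    t = 1
    while t < k:
        t = min(2 * t, k)
        CB = mul(C[:t], B, t)                     # C * B = 1 + (error terms)
        two_minus = [(2 - CB[0]) % mod] + [(-c) % mod for c in CB[1:]]
        B = mul(B, two_minus, t)                  # B <- B * (2 - C*B) mod x^t
    return B[:k]
```

For example, the inverse of $1 + x$ comes out as $1 - x + x^2 - x^3 + \ldots$ (with the negative coefficients represented mod p).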
 » 6 months ago, # |   0 I am thinking of using the Maclaurin expansion to find the inverse series of (1 - e^x)/x, but it's hard to implement. Is there any other way to do this?
•  » » 6 months ago, # ^ |   0 using some complex analysis
 » 6 months ago, # |   +3 This problem too, but here Lagrange interpolation is much better: https://codeforces.com/contest/622/problem/F
 » 6 months ago, # | ← Rev. 5 →   0 It should follow from the binomial theorem, by telescoping $\sum_{i=1}^{n} \left((i+1)^{p+1} - i^{p+1}\right) = (n+1)^{p+1} - 1$, that for p ≥ 0

$$\sum_{j=0}^{p} \binom{p+1}{j} S_j = (n+1)^{p+1} - 1$$

The first three cases are:

p = 0: $S_0 = n$
p = 1: $S_0 + 2S_1 = n^2 + 2n$
p = 2: $S_0 + 3S_1 + 3S_2 = n^3 + 3n^2 + 3n$

Therefore, the following recursive expression can be derived for p ≥ 1:

$$S_p = \frac{1}{p+1}\left((n+1)^{p+1} - 1 - \sum_{j=0}^{p-1} \binom{p+1}{j} S_j\right)$$

where the base case is $S_0 = n$.
•  » » 6 months ago, # ^ |   0 Yep, there is this formula, but you can't calculate it fast enough with it.
•  » » » 6 months ago, # ^ | ← Rev. 8 →   0 I wonder why this recursive formula may not be fast enough for modular arithmetic with large n and small p, as both $(n+1)^{p+1}$ and the binomial coefficients $\binom{p+1}{j}$ may be computed recursively in reasonable time when p is small. Furthermore, the division by (p + 1) may be performed using the modular multiplicative inverse. Any plausible explanation for such presumed inefficiency would be appreciated.
•  » » » » 6 months ago, # ^ |   0 Because you'll have to compute each of Sj for j from 0 to p.
•  » » » » » 6 months ago, # ^ |   0 But isn't it required in the problem statement to compute Sp for all p from 0 to k? It seems that the only difference between both requirements is the change in the parameter names.
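The recurrence from this thread does indeed compute all $S_p$ for p from 0 to k in $O(k^2)$ total. A sketch in Python, assuming a prime modulus so the division by (p + 1) can be done with a modular inverse (the helper name `power_sums_rec` is mine):

```python
def power_sums_rec(n, k, mod=10**9 + 7):
    """S_p = ((n+1)^{p+1} - 1 - sum_{j<p} C(p+1,j)*S_j) / (p+1) for p = 0..k."""
    S = []
    row = [1]  # row p of Pascal's triangle: C(p, 0..p)
    for p in range(k + 1):
        # extend to row p+1: C(p+1, 0..p+1)
        row = [1] + [(row[i] + row[i + 1]) % mod for i in range(len(row) - 1)] + [1]
        rhs = (pow(n + 1, p + 1, mod) - 1
               - sum(row[j] * S[j] for j in range(p))) % mod
        # division by (p+1) via modular inverse (mod must be prime)
        S.append(rhs * pow(p + 1, mod - 2, mod) % mod)
    return S

print(power_sums_rec(3, 2))  # [3, 6, 14]
```

The inner sum over j is what makes each step O(p), hence $O(k^2)$ overall, versus $O(k \log k)$ for the series-inverse approach.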
 » 6 months ago, # |   0 I'd just like to clarify a doubt: is it possible to solve these types of sums using Lagrange interpolation?
•  » » 6 months ago, # ^ |   0 Yes, but only if you have to calculate it for a single fixed n and p.
•  » » » 6 months ago, # ^ |   0 Thanks for clarifying :)
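For completeness, a sketch of that interpolation approach: $S_p(n)$ is a polynomial of degree p + 1 in n, so sampling it at the p + 2 consecutive points 0, 1, ..., p + 1 determines it, and prefix/suffix products evaluate it at n in O(p) multiplications. The helper name `power_sum_interp` is mine, and a prime modulus is assumed:

```python
def power_sum_interp(n, p, mod=10**9 + 7):
    """sum_{i=1}^n i^p via Lagrange interpolation at x = 0..p+1."""
    d = p + 1                            # degree of S_p as a polynomial in n
    y = [0] * (d + 1)                    # y[i] = S_p(i), the sample values
    for i in range(1, d + 1):
        y[i] = (y[i - 1] + pow(i, p, mod)) % mod
    if n <= d:
        return y[n]
    pre = [1] * (d + 2)                  # pre[i] = prod_{j < i} (n - j)
    for i in range(d + 1):
        pre[i + 1] = pre[i] * ((n - i) % mod) % mod
    suf = [1] * (d + 2)                  # suf[i] = prod_{j >= i} (n - j)
    for i in range(d, -1, -1):
        suf[i] = suf[i + 1] * ((n - i) % mod) % mod
    fact = [1] * (d + 1)
    for i in range(1, d + 1):
        fact[i] = fact[i - 1] * i % mod
    res = 0
    for i in range(d + 1):
        # basis polynomial at n: prod_{j != i} (n - j) / (i - j)
        num = pre[i] * suf[i + 1] % mod
        den = fact[i] * fact[d - i] % mod
        term = y[i] * num % mod * pow(den, mod - 2, mod) % mod
        if (d - i) % 2:                  # sign of prod_{j > i} (i - j)
            term = mod - term
        res = (res + term) % mod
    return res % mod

print(power_sum_interp(10, 3))  # 3025, i.e. (1 + 2 + ... + 10)^2
```

This is the linear-time (in p) method mentioned earlier in the comments for a single fixed n and p.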
 » 6 months ago, # |   +11 Finally, a nice and short text about it! You are definitely my favorite blog writer from now on :D Big +
•  » » 6 months ago, # ^ |   0 Great to see such feedback! Now you really have to learn Russian so you can enjoy even more of my blogs X)