lbm364dl's blog

By lbm364dl, 19 months ago, In English

The editorial says the complexity is $$$O(M \log M \log N)$$$. I would think one part is from $$$O(M \log M)$$$ FFT, but I don't get the $$$\log N$$$ part. The editorial just says that each 'element' (aka a $$$dp'_{(A_i)}$$$) is 'involved' in at most $$$O(\log N)$$$ calls. I understand this claim, but I don't see how it helps. The only conclusion that comes to mind is that we're doing $$$O(\log N)$$$ FFTs, but that doesn't seem to be the case.

I would say there are roughly $$$\frac{n}{2} + \frac{n}{4} + \frac{n}{8} + \dots \approx n$$$ multiplications in total, each costing $$$O(M \log M)$$$, so to me this would be $$$O(N M \log M)$$$.

Some time ago I had this same misconception about the complexity analysis in the editorial of a CodeChef problem (you can also see the comments there).

Basically, I want to know what I didn't understand, and why the complexity is actually $$$O(M \log M \log N)$$$.

PS: I made this blog because it seems there was no announcement (nor any discussion blog). Feel free to use it to discuss other problems too.


By lbm364dl, 2 years ago, In English

This problem is from here (in Spanish), and you can submit it there. Below is a simplified statement.

You have an array $$$a$$$ of $$$n$$$ numbers ($$$n \le 20$$$) with values $$$a_i \le 20$$$. The value of a merged set of numbers is the sum of the numbers. You are asked to find the maximum possible value of a number, after merging numbers with a total cost of at most $$$m$$$, $$$m \le 500$$$. The cost of merging two numbers is the sum of the cost of each number. Initially, the cost of each number is its value.

Note that this is not the same as this problem, which has a greedy solution related to Huffman encoding (there the cost of a merge is just the sum of values).
The reason for the title is that I guess I should just find the minimum cost of merging each subset of numbers and then take the maximum value among those subsets whose cost does not exceed $$$m$$$.
Also, in the original problem $$$n \le 15$$$, but I write $$$20$$$ here because my $$$O(3^n)$$$ bruteforce iterating over subsets of subsets gets TLE due to the large number of test cases.
I guess the solution has something to do with bitmask DP. I wasn't familiar with this technique, so I looked at some problems using it, but I can't relate them to this one. Any help is welcome.

EDIT: I misunderstood the statement, and this is actually the same as the Huffman problem linked above (the one I claimed was different). I apologize.


By lbm364dl, history, 2 years ago, In English

I read the editorial of this problem and I'm struggling to understand why the matrix construction works. I think the editorial falls short here, because a proof is important and it's far from trivial for me. How does the construction ensure that every column has the same number of ones? I get that by picking $$$d$$$ such that $$$dn \equiv 0 \pmod{m}$$$ the pattern repeats after (at least) $$$n$$$ rows, but I don't really know what's happening in between.
This comment explains the choice of $$$d = a$$$, which I think I understand intuitively, but it may not be a proof, and I can't see how it would work for other values of $$$d$$$. This comment is a proof, but unfortunately for a different approach. And, as a bonus, this comment shares my frustration with the problem.
Can someone prove it? Thank you.


By lbm364dl, history, 3 years ago, In English

I'll keep it short. I was getting TLE because of the big input, so I started submitting some I/O variations. I also decided to compare with scanf, which should be faster, but for some reason cin with ios_base::sync_with_stdio(false) takes about 400ms (120387767) while scanf takes about 750ms (120392401). Am I doing something wrong? Why is the scanf variant slower here?
