I read somewhere that the C++ STL *partial_sort* function (used to find the **k smallest numbers**) has a running time complexity of **O(n log k)**.

It achieves this by scanning the entire data while maintaining a *k-sized max-heap*, throwing away the top whenever the size exceeds k, and finally printing the k elements remaining in the heap.
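To make the idea concrete, here is a minimal sketch of that max-heap strategy (the function name `k_smallest_maxheap` is mine, not from the STL; `std::partial_sort` itself works in-place on a range rather than through a `priority_queue`):

```cpp
#include <algorithm>
#include <cstddef>
#include <queue>
#include <vector>

// Return the k smallest elements of v in ascending order, using a
// k-sized max-heap: O(n log k) time, O(k) extra space.
std::vector<int> k_smallest_maxheap(const std::vector<int>& v, std::size_t k) {
    std::priority_queue<int> heap;          // max-heap: top() is the largest
    for (int x : v) {
        heap.push(x);                       // O(log k) per push
        if (heap.size() > k) heap.pop();    // discard the current maximum
    }
    std::vector<int> out;
    while (!heap.empty()) {                 // pops come out largest-first
        out.push_back(heap.top());
        heap.pop();
    }
    std::reverse(out.begin(), out.end());   // ascending order
    return out;
}
```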

Couldn't an *n-sized min-heap* be used instead, and the first k smallest numbers simply extracted from it? I understand it wouldn't be in-place and would require additional memory.

But time-complexity-wise, it would take **O(n + k log n)**, which is better than **O(n log k)**, right? (assuming k to be any number smaller than n)
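The alternative I have in mind would look roughly like this (again, `k_smallest_minheap` is just an illustrative name): `std::make_heap` builds the heap in O(n), and each of the k extractions is O(log n), giving O(n + k log n) overall.

```cpp
#include <algorithm>
#include <cstddef>
#include <functional>
#include <vector>

// Return the k smallest elements of v in ascending order, using an
// n-sized min-heap: O(n) to heapify, then O(log n) per extraction.
std::vector<int> k_smallest_minheap(std::vector<int> v, std::size_t k) {
    // std::greater turns the default max-heap into a min-heap.
    std::make_heap(v.begin(), v.end(), std::greater<int>());   // O(n)
    std::vector<int> out;
    for (std::size_t i = 0; i < k && !v.empty(); ++i) {
        // Move the current minimum to the back, then take it.
        std::pop_heap(v.begin(), v.end(), std::greater<int>()); // O(log n)
        out.push_back(v.back());
        v.pop_back();
    }
    return out;
}
```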

Then why is the **O(n log k)** version preferred? Why is it mentioned everywhere and used by the standard library?