I tried to solve 1801C - Music Festival. First I compressed the albums as in the editorial solution, then normalized all the coolness values to avoid the case where the maximum $$$a_i$$$ is too big, then sorted the albums in increasing order of their maximum-coolness track. This got me TLE on test case 25.

In short, I have a

```
vector<vector<int>> a(n);
```

The sort function is

```
sort(a.begin(), a.end(), [&] (auto a1, auto a2) {
return (a1.back() < a2.back());
});
```

Instead of sorting, create a map that stores, for each maximum coolness value, the positions of the albums with that maximum.

I didn't know about this, so I'm curious: what's the time complexity of the sort function in this case?

I think it's $$$O(n^2 \log n)$$$, because in the sorting lambda you are passing the vectors by value, not by reference, so you make a copy of both vectors for every comparison, which is $$$O(n)$$$ per comparison. Changing

`auto`

to `auto&`

gives AC.

Interesting. Thanks a lot!

If I were to bet, given the quite strict TL, I'm not sure I agree with aymanrs's hypothesis.

I've never seen a counter-test where passing vectors by value would degenerate the complexity (I conjecture that it's $$$O(\log n)$$$ comparisons per element, yielding the correct complexity). Can someone show one such test, or a strategy to build it, and prove that I'm blatantly wrong?

However, I do think that the reason passing by value is slow here while passing by reference is not is memory allocation: one allocates a linear amount of memory, the other a linearithmic amount. This adds a considerable constant factor to the solution.

One big vector + many small ones?

Still, you would need to build a test case where the big vector gets compared with $$$\omega(\log n)$$$ elements, which is non-trivial as far as I'm aware (without digging into the internals of how the

`std::sort`

algorithm works).

If you use merge sort, the complexity will be $$$O((\sum len) \cdot \log n)$$$, because every element is used in the comparator $$$O(\log n)$$$ times.

But std::sort uses a quicksort-style algorithm: it picks a pivot element, puts smaller elements at the beginning and bigger elements at the end, then sorts recursively. So if we pick the long vector as the pivot, it will be compared $$$O(\text{len(current range)})$$$ times.

You may use stable_sort: [sorry, can't add a link, because my submissions are hidden in this contest, but you can try it by yourself]

It works in $$$O(n \log^2 n)$$$, but with only $$$O(\log^2 n)$$$ comparisons per element, so the total will be $$$O((\sum len) \cdot \log^2 n)$$$.

Hmm, ok, my words are only true until C++11.

Since C++11 it uses a different algorithm:

https://github.com/gcc-mirror/gcc/blob/d9375e490072d1aae73a93949aa158fcd2a27018/libstdc%2B%2B-v3/include/bits/stl_algo.h#L1950

which uses an additional function:

https://gcc.gnu.org/onlinedocs/libstdc++/libstdc++-html-USERS-4.4/a00978.html#7db47762a47726a861b61ecddd91e12e

It uses not a random element, but the median of the first, middle and last elements, and then partitions around that element.

So, in this case, the first element is the median that gets found on the first execution. You can random_shuffle the array and get AC too. [also can't give a link to the submission, but you can check it yourself: add random_shuffle(a.begin(), a.end()); before the sort]

So, I think it's a good hack: std::random_shuffle the array before sorting to minimize the chance of the partitioner picking the big element as the pivot.

But using references in your functions is also a good fix :)

Also, if you're wondering why std::sort uses this kind of sorting instead of merge sort:

It's all because of additional memory. If you want to sort elements in place, you have to use in-place merging. Merging in place in $$$O(n \log n)$$$ is not a really hard algorithm, but if you want to do it in $$$O(n)$$$: good luck, have fun. It's a really hard algorithm with a big constant. You can read this article: https://nms.kcl.ac.uk/informatics/techreports/papers/TR-04-05.pdf

Also you can use make_heap and sort_heap to sort in place. It's also unstable, and works in $$$O(n \log n)$$$, but in real life it's somewhat slower than sort.

I think you should have edited the first comment instead of continuing to reply to it.

I have some misunderstanding…

Why did you downvote this thread? It has a lot of interesting things about C++ standards, compilers, algorithms, and features of competitive programming.

Only because of a lot of messages? Lol?

What difference does it make to you: reading one big message or several small ones?

If you have questions, or you disagree, you can write it here…

You're just downvoting an interesting thread, and that's all.

Maybe because it's too hard for your Specialist brain? LMAO.

I hope you will be Expert one day…

Sometimes I can't understand Codeforces users, tbh.

As @oversolver said, the test case that got me TLE has 1 big vector and many small ones. I have verified that it really is the sorting that kills my solution.

I never said otherwise. I was debating whether it's because the complexity becomes quadratic or the constant factor is too big.

Just tested some cases, and it seems like std::sort makes no more than $$$\log$$$ comparisons per item.

And talking about auto vs auto& in comparators generally: I have a lot of experience, even with pair<int,int>, where the reference gives a major performance gain.

When $$$n$$$ is sufficiently large, sorting an already ordered array causes the second element to be compared $$$\Theta(n)$$$ times. (However, I still don't know how to construct a test so that it reaches the complexity of $$$\Theta(n^2\log n)$$$.)

If you do random_shuffle first and then sort, then the complexity is also correct. But I don't think there is a sorting algorithm that can do $$$O(\log n)$$$ comparisons per element with worst-case luck.

upd: there is a sorting network, "Bitonic Sort", that does $$$\Theta(\log^2 n)$$$ comparisons for each element even with worst-case luck.

Let's keep things simple and assume

`std::sort`

is a merge sort algorithm. For merge sort, the conjecture that it's $$$O(\log N)$$$ comparisons per element is blatantly wrong. It has $$$O(N \log N)$$$ comparisons in total, but nowhere is it guaranteed to be $$$O(\log N)$$$ comparisons per element. Here is one such trivial counter-test:

`Inf | 1 | 2 3 | 4 5 6 7 | 8 9 10 11 12 13 14 15 | ...`

`Inf`

will be compared with all elements.

I do not want to dig into the internals of how the

`std::sort`

algorithm works. What I know is that

`std::stable_sort`

is implemented using merge sort. Here is a working example.

JFYI: you can merge two sorted arrays with $$$O(\log n)$$$ comparisons per element.

When you have two sorted arrays A and B, just binary search how many elements you have to take from A before the first element of B. The search works in $$$O(\log(ans))$$$, and then you move those elements in $$$O(ans)$$$, so the total is $$$O(ans)$$$.

To do it, first check all powers of two until the first negative result, and then binary search between $$$2^i$$$ and $$$2^{i+1}$$$.

The total for the merge will be $$$O(n)$$$.

I never claimed all algorithms take $$$O(N)$$$ comparisons for an element. I just claimed that naive textbook code for merge sort can take $$$O(N)$$$ per element.

Also, can you elaborate more on your analysis?

Either I do not understand your algorithm, or it's $$$O(\log^2 N)$$$ comparisons per element, with the overall complexity being $$$O(N \log^2 N)$$$.

Merge sort has $$$O(\log N)$$$ layers, and if we use your algorithm as an intermediate step, it will take $$$O(\log 2^i)$$$ comparisons at step $$$i$$$, totalling $$$O(\log^2 N)$$$ per element.

I was talking about the merge component only. In merge sort, yes, it will be $$$O(\log^2 N)$$$ comparisons per element.

Yes, that's indeed true. But for randomized quick sort on the other hand, the number of comparisons seems to be $$$O(\log n)$$$ per element on average.

Proof: An element $$$i$$$ will be compared to any other element:

As far as I'm aware,

`std::sort`

is more like quicksort than merge sort, so I thought maybe it behaves similarly.

About std::sort: I wrote a thread under your previous comment. It got downvoted a lot (lol), but you can open the comments.

https://codeforces.com/blog/entry/114167?#comment-1014962

If I'm the problem setter, here is one way I will try to hack quick-sort solutions.

Here, $$$N$$$ is up to $$$200000$$$.

Let's add $$$100$$$ vectors of length $$$1000$$$ each, and the remaining $$$100000$$$ vectors of length $$$1$$$. For a particular test case, one of these large vectors gets picked as the first pivot with probability $$$0.001$$$. Add $$$100$$$ such test cases, and the probability of avoiding TLE is $$$(1-0.001)^{100} \approx 0.904$$$; it's too risky when only one submission will be evaluated during system tests. It might work for educational rounds, where people can make $$$10$$$ submissions. Parameters can be tweaked. It works well when the sum of lengths is up to $$$2 \cdot 10^6$$$.

And what's wrong with having a pivot of length 1000? Only 100 comparisons on each level of recursion will look through all 1000 elements; the other comparisons will stop at length 1.

The comparator in the blog copies instead of passing by reference, so it will make ~1e5 copies of a vector of length 1000. It will probably TLE for a sum of 2e5, and is a sure TLE if the sum is raised to 2e6.