At my university, we are taking the algorithms course this semester. The course is easy for me, since I already know most of the syllabus from my high-school CP background. However, yesterday I learnt something quite surprising in this course. We discussed time complexities: the big $$$O$$$, $$$\Omega$$$, and $$$\Theta$$$ notations. Take a look at this (mostly) formal definition of $$$O$$$:
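For reference, this is presumably the usual textbook (CLRS-style) formulation, which the discussion below matches:

```latex
% Standard definition of big-O (the usual textbook formulation; assumed here):
f(n) = O(g(n)) \iff \exists\, c > 0 \ \exists\, n_0 \ \forall\, n \ge n_0 : \ 0 \le f(n) \le c \cdot g(n)
```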
Notice that any function $$$g$$$ works, as long as $$$f$$$ is $$$\leq g$$$ (up to a constant factor) for large enough $$$n$$$. So the big $$$O$$$ is just an upper bound on the number of operations of an algorithm, not necessarily the tightest upper bound on its complexity. In other words, when you say "the number of operations of this algorithm is $$$O(n^2)$$$", it actually means that the algorithm does not do significantly more than $$$n^2$$$ operations. That means that not only is an algorithm with $$$n^2$$$ steps $$$O(n^2)$$$, but algorithms doing $$$n \log n$$$ or $$$n$$$ operations are also $$$O(n^2)$$$!
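To make that last claim concrete, here is a small Python sketch (my own illustration, not from the course) that spot-checks hand-picked witnesses $$$c$$$ and $$$n_0$$$ showing $$$n \log n = O(n^2)$$$:

```python
import math

# Claim: n*log2(n) is O(n^2). Hand-picked witnesses: c = 1, n0 = 2.
# We obviously can't check "for all n" by running code, but we can
# spot-check a large range; log2(n) <= n holds for every n >= 1.
c, n0 = 1, 2
for n in range(n0, 100_000):
    assert n * math.log2(n) <= c * n * n
```

Of course a loop proves nothing for all $$$n$$$; the actual proof is just $$$\log_2 n \leq n$$$, but the check makes the role of $$$c$$$ and $$$n_0$$$ tangible.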
The notation that actually corresponds to the worst-case running time we talk about in CP seems to be $$$\Theta$$$ (big theta).
For example, binary search does $$$\Theta(\log n)$$$ steps in the worst case.
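You can even count the iterations explicitly. The sketch below (a hypothetical instrumented binary search, my own illustration) shows the worst case, searching for an absent element, doing about $$$\log_2 n$$$ steps:

```python
import math

def binary_search_steps(a, target):
    """Binary search on a sorted list a; returns the number of loop iterations."""
    lo, hi = 0, len(a) - 1
    steps = 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if a[mid] == target:
            return steps
        if a[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return steps

# Searching for an absent element forces the range to shrink to zero,
# which takes roughly log2(n) halvings -- matching Theta(log n).
for n in (10, 1000, 10**6):
    steps = binary_search_steps(list(range(n)), -1)
    assert math.floor(math.log2(n)) <= steps <= math.floor(math.log2(n)) + 1
```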
So my question is: why do we use big-$$$O$$$ notation for time complexity in CP, when it's not fully accurate? Or have I misunderstood something about these notations? I don't want to be pedantic, but I'm genuinely wondering why we do this.