Why no floating point precision errors?

Revision en1, by ShubhamAvasthi, 2020-08-09 03:39:03

I recently solved a problem using floating point arithmetic. I suspect the solution should fail on some test case because of floating point precision errors, but I cannot find a case where it actually fails. Moreover, the solution gets the "Accepted" verdict.

Link to the submission (Codeforces Round #660 E): 89366718

The submission is an implementation of the Convex Hull Trick approach mentioned in the editorial of the problem.

If you look through the code, you will find that I maintain forbidden ranges of cotangents of an angle and then iterate over the critical angles (i.e. the extremities of the forbidden ranges that are not themselves forbidden).
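
To make that concrete, here is a rough standalone sketch of the idea (the names and the interval representation are mine, not code from the submission, and I use a generic real parameter instead of cotangents): collect the forbidden open intervals, merge the ones that genuinely overlap, and keep the endpoints of the merged intervals as the candidate "critical" values.

~~~~~
#include <algorithm>
#include <utility>
#include <vector>

// Hypothetical helper, not copied from the submission: given open forbidden
// intervals (l, r) of some real-valued parameter, return the critical values,
// i.e. interval endpoints that are not strictly inside any interval.
std::vector<double> criticalValues(std::vector<std::pair<double, double>> iv) {
    std::sort(iv.begin(), iv.end());
    std::vector<std::pair<double, double>> merged;
    for (auto [l, r] : iv) {
        // Merge only genuinely overlapping intervals; intervals that merely
        // touch (previous r equals current l) stay separate, because the
        // shared endpoint itself is an allowed value.
        if (!merged.empty() && l < merged.back().second)
            merged.back().second = std::max(merged.back().second, r);
        else
            merged.emplace_back(l, r);
    }
    std::vector<double> candidates;
    for (auto [l, r] : merged) {
        candidates.push_back(l);
        candidates.push_back(r);
    }
    return candidates;
}
~~~~~

Note that two intervals which only touch at a point leave that point available as a candidate, and that is exactly where the precision concern below comes in.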

I suspect that float(xr[j] - xl[i]) / (y[i] - y[j]) could return unequal values for some mathematically equal fractions (like $$$5/3$$$ and $$$15/9$$$) because of precision errors.
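
To illustrate the kind of mismatch I have in mind, here is a small standalone snippet (not part of my submission; the values are hand-picked): once the integers being divided exceed the 24-bit significand of float, they are rounded during the conversion, before the division even happens, and two mathematically equal fractions can come out unequal.

~~~~~
#include <cstdio>

int main() {
    // Both quotients are mathematically 1/3. The scale factor 2^24 + 1 is not
    // exactly representable in float, so the scaled numerator and denominator
    // are rounded before the division and the two results can differ.
    int p = 1, q = 3;
    int k = 16777217;                       // 2^24 + 1
    float direct = float(p) / q;            // correctly rounded 1/3
    float scaled = float(k * p) / (k * q);  // operands rounded first, then divided
    std::printf("%.9g vs %.9g -> %s\n", direct, scaled,
                direct == scaled ? "equal" : "not equal");
}
~~~~~

With the usual IEEE 754 round-to-nearest behaviour these two values come out one ULP apart. For small inputs like $$$5/3$$$ and $$$15/9$$$, on the other hand, both divisions are correctly rounded from exactly representable operands and actually agree, so a mismatch needs operands large enough to be rounded during the int-to-float conversion.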

If such a mismatch does happen, suppose there are two forbidden ranges $$$[a, b]$$$ and $$$[c, d]$$$ with $$$b = c$$$ (mathematically). Then $$$b$$$ itself is a possible value that is not forbidden. This clearly breaks if precision errors make the computed value of $$$b$$$ slightly greater than the computed value of $$$c$$$: the two ranges then overlap, and $$$b$$$ ends up strictly inside a forbidden range.
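
For reference (this is not what my submission does), the standard way to dodge the issue completely is to never divide at all and instead compare the fractions by cross-multiplication in 64-bit integers, so that mathematically equal endpoints compare exactly equal. A minimal sketch, assuming positive denominators and products that fit in 64 bits:

~~~~~
#include <cstdint>

// Compare a/b and c/d exactly, assuming b > 0, d > 0 and that the cross
// products fit in 64 bits (for coordinates up to about 2e9 the products are
// at most about 4e18, which is below 2^63).
// Returns -1, 0 or +1 when a/b is less than, equal to or greater than c/d.
int cmpFrac(int64_t a, int64_t b, int64_t c, int64_t d) {
    int64_t lhs = a * d, rhs = c * b;  // cross-multiplied numerators
    if (lhs < rhs) return -1;
    if (lhs > rhs) return 1;
    return 0;
}
~~~~~

With such a comparator the endpoints $$$b$$$ and $$$c$$$ above would compare exactly equal, so touching ranges could never be mistaken for overlapping ones.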

My question is: does the above case simply not show up in the test cases of the problem, or is the floating point arithmetic I described somehow guaranteed to be exact?

Tags floating-point, precision

History

  Rev. Lang. By When Δ Comment
en2 English ShubhamAvasthi 2020-08-09 03:49:09 904 Tiny change: 'o occur:\n~~~~~\n#' -> 'o occur:\n\n~~~~~\n#'
en1 English ShubhamAvasthi 2020-08-09 03:39:03 1472 Initial revision (published)