Hi all,

I would like to know: is there a mathematical way to choose an appropriate value of epsilon, given the maximum/minimum values of the x and y coordinates?

And will choosing a very small value of epsilon cause any sort of problems?




It doesn't depend on the max or min value in a direct way. Suppose your algorithm itself adds no error; then the only source of error is round-off. If the final value is a sum of 1000 values, each carrying some round-off error, the maximum total error can be (round-off error of one number) × 1000.

An epsilon at least that large will absorb the accumulated error while leaving the remaining bits unchanged, so comparing with it has a good chance of giving a result that is correct to the number of places you care about.

So write your algorithm in a way that minimizes error, and you can get away with a very small epsilon. If you need less accuracy, you can allow your algorithm to accumulate more error, but then you have to use a bigger epsilon.

Doubles can store the 53 most significant bits of a number. As 2^53 is slightly less than 10^16, this means that you'll get the 15 most significant decimal digits. A good rule of thumb is to say that the last few of those digits can become garbage due to rounding errors. So, for example, if you are working with numbers up to 10^6, a good choice of epsilon can be 10^{-6}.

Many competition problems use the phrase "10^{-9} relative or absolute error". This essentially means that only the first 9 significant digits of your final result have to be correct. This precision requirement is intentionally lower than the precision you want to be using internally during your computation.

You can get a better estimate of a suitable epsilon if you also take into account the type and number of operations you do with your numbers. Still, using epsilons for floating-point comparison will always be a hack, and as such it is sometimes best not to overcomplicate it -- just close your eyes and hope that you guesstimated it correctly :)