### forthright48's blog

By forthright48, 6 years ago,

I was trying to solve a problem on HackerRank, Devu and a Journey on Metro.

I wrote a solution using BFS and top-down memoization. Here is my submission: Submission.

When I compared my output with the judge's output, I found that I was getting Wrong Answer due to a precision error: the 6th decimal place differs on some cases. So I looked at the analysis to see if they had some trick to reduce the precision error. Well, they did the same thing, just bottom-up.

Bottom-up is faster than top-down, and in some cases it even saves memory. But I had never heard of bottom-up being more precise.

Later I changed all double variables to long double, which got more test cases right, but not all of them. Submission.

Any idea what is going on? Why is this happening?

UPD1: Got my answer from misof's comment.

• +33

 » 6 years ago, # |   -11 Thank you for the site; I was not aware of it.
 » 6 years ago, # | ← Rev. 2 →   +5 I've also solved this problem with a top-down approach. My code might be useful to you.
•  » » 6 years ago, # ^ |   0 Thanks for the code. After looking at it, I changed a single line of mine. From: p = 1 / SZ(adj[pos]); res *= p; To: res /= SZ(adj[pos]); This got me even more accepted test cases (Submission).

I have a question though. At line 235 of your code, what exactly are you doing? if (res == res) return res;

Also, our solutions differ in one aspect: you run the dp N times, finding the probability to reach X from 1 and then multiplying by the cost, while I do it all at once. Does calculating them separately give more accurate results? Shouldn't they be the same?
•  » » » 6 years ago, # ^ |   +45 That is a trick to check the number for NaN.
•  » » » 6 years ago, # ^ |   +5 As Egor pointed out, it's a trick to check whether the current index of the memoization array is initialized. In the beginning, it sets every bit of that array to 1 (memset(memo, -1, sizeof memo)), which makes each double a NaN (I don't know if that's guaranteed by the standard, but it works that way in gcc). And according to the IEEE standard, comparisons involving NaN values always evaluate to false, which is why the expression res == res is true only when res is not NaN. Some answer related to it on Stackoverflow.
 » 6 years ago, # |   +8 I had come across the same situation before, solving a probability problem with dp: my solution got WA due to precision error, but implementing the same dp approach another way got AC! I would be interested to hear an explanation for this from experienced programmers, and how to tell which implementation of a particular solution is better.
•  » » 6 years ago, # ^ | ← Rev. 2 →   0 I am wondering if this is caused by the extra "assignment" operations. In recursive functions, we store the variables on the stack. So could it be that pushing to and popping from the stack causes precision loss? I can't think of any other explanation. Compiler optimizations during recursion?
•  » » » 6 years ago, # ^ | ← Rev. 2 →   +5 This is machine dependent: some processors (including the majority of modern ones) do calculations internally, in FPU registers, with 80 bits, which get truncated to 64 bits (double) when written to memory. That means the result can differ between two cases: doing the whole calculation in the FPU and only then storing the final result, versus doing operations one by one and saving intermediate results to memory.

At the C++ level you have very limited control over what the compiler does with the calculation, so you can't rely on it. Most likely, if you do the whole calculation in one place it will be optimized to run on the FPU only, while if you spread it out, saving results to memory, you'll get less precision in the end (unless the compiler notices and optimizes that too). You can use long double, which is compiler/machine specific, but even if you use plain double, internal FPU calculations will still be done with 80 bits of precision.

Long story short: treat floating point arithmetic with respect :)
 » 6 years ago, # | ← Rev. 4 →   +16 UPD: This comment is not applicable to this specific situation, as misof properly mentions below: last-digit errors are clearly below the 1e-6 relative error bound and should be AC. I hope it still makes some sense in the general case.

Going bottom-up you are generally summing small numbers with small numbers, and they grow big as you go up, so in the end you add big numbers to big numbers. With probabilities (when you add up the probabilities of a "branchy" tree of events) this is even more so. When you go top-down, you end up adding big numbers from adjacent branches to small numbers in the current branch, so you might run into rounding error.

When I face a situation like this (needing to sum up lots of small and not-so-small floating point numbers) in a contest, I get very nervous. I always try to go from the small ones to the big ones, so in a situation like this I wouldn't try top-down, just as a precaution. Once in a contest I even remember sorting an array of positive doubles before summing them up, just to reduce rounding errors. It proved unnecessary in the end, but I didn't want a failed test case because of a 1e-6 rounding error.