
Floating Point Precision and Rounding

Apr 10, 2012 at 7:03 PM
Edited Apr 10, 2012 at 7:24 PM

Hello,

Back to some CUDA work now, and I've got a few issues with rounding on floating point numbers.  More precisely, I'm trying to minimize the differences between CPU and GPU, which add up over time in my application (even over short runs).  I really just want a float / single-precision equivalent of Math.Round(number, decimals).  What's a good alternative?
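For concreteness, here's the kind of helper I've been experimenting with. It's just my own sketch (RoundToDecimals is my name, not anything Cudafy provides), and note two caveats: powers of ten aren't exactly representable in binary float, so the scaling itself is approximate, and this rounds half away from zero, whereas Math.Round defaults to banker's rounding (pass MidpointRounding.AwayFromZero if you compare against it):

    using System;

    static class FloatRounding
    {
        // Sketch of a float-only stand-in for Math.Round(number, decimals).
        // Powers of ten are not exactly representable in binary float, so
        // this can still disagree with Math.Round on a double by an ulp.
        public static float RoundToDecimals(float value, int decimals)
        {
            float scale = (float)Math.Pow(10.0, decimals);
            float shifted = value * scale;
            // Round half away from zero; plain Floor(x + 0.5) would bias
            // negative inputs toward positive infinity.
            float rounded = shifted >= 0.0f
                ? (float)Math.Floor(shifted + 0.5f)
                : (float)Math.Ceiling(shifted - 0.5f);
            return rounded / scale;
        }
    }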

The strange part is that the issue only appears ~20% of the time.  The models are all stochastic, so I haven't been able to track down specifics.

Apr 11, 2012 at 7:43 PM

If it only appears some of the time, then it is likely an issue with something like uninitialized memory.  Do you re-cudafy the code between passing and failing runs?  Is there a difference in the generated CUDA code?
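One way to check is to dump the generated source from a passing run and a failing run and diff them.  A rough sketch below; I'm going from memory that CudafyModule exposes the generated source as CudaSourceCode, so verify that property name against your Cudafy version:

    using System;
    using System.IO;
    using Cudafy;
    using Cudafy.Translator;

    class Dumper
    {
        static void Main()
        {
            // Translate the [Cudafy]-attributed types in this assembly to CUDA C.
            CudafyModule km = CudafyTranslator.Cudafy();

            // Write the generated source out with a timestamp so two runs
            // can be compared with an ordinary diff tool.
            string path = "generated_" + DateTime.Now.Ticks + ".cu";
            File.WriteAllText(path, km.CudaSourceCode);
            Console.WriteLine("Wrote " + path);
        }
    }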

Apr 12, 2012 at 5:58 PM

I think it has to do with large numbers of additions and multiplications.  I'm working with small numbers as inputs (currency exchange rates) that only have four decimal places, but I think the error just accumulates over time.  Something like an 80-period moving average triggers it.  The thing is, I'm not so sure the GPU result is wrong; it's just different.  I've removed some casting and converted the ints involved to float, and now I'm down to about half the discrepancies I saw before.  I'm sure it's just rounding issues, and that since the CPU is using 64-bit registers the over/underflows show up differently.
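Here's a toy repro of the accumulation effect, with nothing GPU-specific in it: sum the same four-decimal inputs in float and in double and compare the means.  The numbers are made up, but the drift pattern is the same:

    using System;

    class Accumulation
    {
        static void Main()
        {
            var rng = new Random(1);
            const int period = 80;
            float sumF = 0.0f;
            double sumD = 0.0;

            // Accumulate 80 exchange-rate-like values with four decimal places.
            for (int i = 0; i < period; i++)
            {
                double rate = Math.Round(1.0 + rng.NextDouble() * 0.5, 4);
                sumF += (float)rate;
                sumD += rate;
            }

            float meanF = sumF / period;
            double meanD = sumD / period;
            Console.WriteLine("float mean:  {0:R}", meanF);
            Console.WriteLine("double mean: {0:R}", meanD);
            Console.WriteLine("difference:  {0:R}", meanD - meanF);
        }
    }

Compensated (Kahan) summation on the float side is one way to shrink that drift without switching everything to double.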