Sent on: Wednesday, March 21, 2012 11:01 AM
I would second the vote for using the CPU if possible. GPU fragment shaders can use all sorts of funky floating-point formats and handle rounding, denormals, etc. in subtly different ways; many are limited to 16-bit floats. With so many different embedded GPUs out there, verifying consistent results on all of them is a testing problem. The CPU is the safer option.
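To make the 16-bit point concrete, here is a minimal C sketch (mine, not from any GPU driver; the round_to_half helper is purely illustrative) that quantizes a 32-bit float to IEEE 754 half precision, the format those GPUs are limited to, and shows the precision loss on a simple value. Denormals, infinities, and NaNs are deliberately skipped for brevity, and those are exactly the cases where real GPUs tend to differ the most.

#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Round a 32-bit float to the nearest half-precision (16-bit) value
 * and back, using round-to-nearest-even. Handles the normal half
 * range only; denormals, infinities, and NaNs are returned as-is to
 * keep the illustration short. */
static float round_to_half(float f)
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);

    uint32_t sign     = bits & 0x80000000u;
    int32_t  exponent = (int32_t)((bits >> 23) & 0xFFu) - 127; /* unbiased */

    if (exponent < -14 || exponent > 15)   /* outside normal half range */
        return f;

    /* Half precision keeps 10 mantissa bits; drop the lower 13 with
     * round-to-nearest-even on the discarded part. */
    uint32_t mant = bits & 0x007FFFFFu;
    uint32_t keep = mant >> 13;
    uint32_t rem  = mant & 0x1FFFu;
    if (rem > 0x1000u || (rem == 0x1000u && (keep & 1u)))
        keep++;

    /* Addition (not OR) lets a mantissa round-up carry correctly
     * into the exponent field. */
    uint32_t out = sign | (((uint32_t)(exponent + 127) << 23) + (keep << 13));
    memcpy(&f, &out, sizeof f);
    return f;
}

int main(void)
{
    float x = 0.1f;
    printf("32-bit float:            %.9f\n", x);
    printf("after 16-bit round trip: %.9f\n", round_to_half(x));
    return 0;
}

Running this, 0.1f comes back as 0.099975586 after the round trip. That kind of drift is per-format and per-rounding-mode, so on the GPU side you would have to verify it separately for every chip you ship on; on the CPU you get consistent IEEE 754 single precision everywhere.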