Monday, February 24, 2020

FPU double precision mathematical errors: Thoughts and questions


https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5317

Double-precision maths is, fortunately, a standard feature of the modern CPU core's FPU.
An error rate of two per 1,000,000 results would mean only one or two errors per trillion calculations, since each result involves many individual operations.
However feasible perfectly error-free results may be,
the net result is believably within the error margin set by Intel's guidelines.
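To make the error-margin point concrete, here is a minimal Python sketch (not SixTrack code): Python floats are IEEE 754 binary64, the same double-precision format the FPU uses, so even a single addition can carry a one-ULP rounding error, and validation has to compare within a tolerance rather than bit-for-bit.

```python
# A minimal sketch of why exact double-precision results are elusive:
# Python floats are IEEE 754 binary64, the format the FPU computes in.
import math

a = 0.1 + 0.2   # neither 0.1 nor 0.2 is exactly representable in binary
b = 0.3
print(a == b)         # False: the sum carries a rounding error
print(abs(a - b))     # on the order of 1e-17, roughly one ULP

# Validation therefore compares within a tolerance, not bit-for-bit:
print(math.isclose(a, b, rel_tol=1e-12))   # True
```

A mismatch this small is a rounding artefact, not a genuine computational error; the tolerance chosen here (1e-12) is illustrative only.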

Error-free operation invites perfect prefetch and first-class stability,
but the facts on the error count remain the same. The plausible causes are:

AVX float buffer overrun,
FPU buffer overrun,
Rounded results.

Most plausible is a buffer overrun.
Consistently very-high-precision numbers require a 180-bit buffer...
A stack buffer of around 16 KB per result should prevent overflow.
Compressing the resulting data into an in-memory zip saves data space.
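As a sketch of the in-memory compression idea above (the sample values and batch size are hypothetical, and zlib stands in for whatever codec would actually be used), a batch of doubles can be packed at 8 bytes each and zipped losslessly, so no precision is traded for space:

```python
# Hypothetical sketch: pack a batch of double-precision results and
# compress them in memory. Values and sizes are illustrative only.
import struct
import zlib

results = [1.0 / (i + 1) for i in range(10_000)]
raw = struct.pack(f"<{len(results)}d", *results)   # little-endian doubles
packed = zlib.compress(raw, level=9)
print(len(raw), "->", len(packed), "bytes")

# The round trip is lossless, so no precision is sacrificed for space:
restored = struct.unpack(f"<{len(results)}d", zlib.decompress(packed))
assert list(restored) == results
```

How much space this saves depends entirely on the data; the point is only that the decompressed doubles are bit-identical to the originals.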

(c)Rupert S

Perhaps this will help: http://science.n-helix.com/2018/01/integer-floats-with-remainder-theory.html

http://science.n-helix.com/2019/05/compiler-optimisation.html

http://science.n-helix.com/2018/06/compression-libraries-index-prime.html

http://science.n-helix.com/2020/01/float-hlsl-spir-v-compiler-role.html

"My priority is to publish "IEEE 754 as intended - or how to
obtain identical double precision floating-point results".

SixTrack LHC@home is running well but tasks are
still being taken rather slowly, and I reckon we use,
at best, one third of your provided capacity. Still, your
support is vital for the High Luminosity LHC studies.

Personally I have a couple of issues:
I need to access old migrated data to define SixTrack
performance and this is a problem right now.
While the invalid-result error rate is in general very low
(24 invalid against over one million valid today), I need to
identify the genuine computational errors.

You can find my recent CERN Open Days presentation at
http://mcintosh.web.cern.ch/mcintosh/
along with the state of my research.
Eric McIntosh"
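The goal Eric describes, identical double-precision results across machines, can be illustrated in a few lines (a minimal Python sketch, not SixTrack code): the same correctly rounded IEEE 754 doubles, summed with a different association, can give bitwise-different answers on two equally correct machines.

```python
# Sketch of the reproducibility problem behind "IEEE 754 as intended":
# reordering a sum changes where rounding happens, hence the result.
a = (1e16 + 1.0) + 1.0   # each +1.0 is rounded away (the ULP of 1e16 is 2)
b = 1e16 + (1.0 + 1.0)   # the 2.0 survives: 1e16 + 2 is representable
print(a == b)            # False
print(b - a)             # 2.0
```

Both sums are correctly rounded at every step; only the evaluation order differs. This is why obtaining identical results needs a fixed evaluation order and rounding mode, not just compliant hardware.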