[MLton] Experimental 64-bit binary package (& SVN sources)

skaller skaller at users.sourceforge.net
Wed Mar 7 19:05:41 PST 2007


On Wed, 2007-03-07 at 21:43 -0500, Stephen Weeks wrote:
> > We generally ignore such differences on x86, because it all depends
> > upon how much of the floating-point computation gets done at 80bits,
> ...
> > perhaps we should take the x86_64 results as "correct" and ignore
> > differences on x86.
> 
> Shouldn't we just have two different platform-specific regression
> outputs?

Actually no, I don't think this is the way forward.

I think you have two options here.

1) Change the test code to output portable results,
for example by rounding to 3 decimal places.

2) Change the test harness so it can compare
floating-point numbers with some kind of precision
control.
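Option 1 might look like this. The tests themselves are SML, so
take this as a minimal Python sketch of the idea, not the actual
test code:

```python
import math

# A computation whose low bits may differ between x86 (80-bit x87
# intermediates) and x86_64 (64-bit SSE arithmetic), depending on
# how much of the work is done in extended precision.
x = math.sin(0.5) * math.exp(1.3) / math.sqrt(2.0)

# Option 1: print the result rounded to 3 decimal places, so the
# regression output is identical on both architectures.
print(f"{x:.3f}")
```

The point is that the rounding happens in the test program itself,
so the harness can keep doing exact textual comparison.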

Option 2 is messy and theoretically unsound, IMHO.
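To show what I mean: option 2 amounts to an approximate comparison
in the harness, something like the hypothetical helper below (the
`rel`/`abs_tol` names and values are mine, purely illustrative):

```python
def fp_close(a: float, b: float,
             rel: float = 1e-9, abs_tol: float = 1e-12) -> bool:
    # Option 2 in a nutshell: approximate equality. The messiness is
    # that rel and abs_tol are arbitrary knobs -- too tight and
    # harmless 80-bit-vs-64-bit drift fails the test, too loose and
    # real bugs slip through.
    return abs(a - b) <= max(rel * max(abs(a), abs(b)), abs_tol)

print(fp_close(0.1 + 0.2, 0.3))  # last-bit drift is tolerated
```

There is no principled way to pick those tolerances for every test,
which is why I call it theoretically unsound.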

OTOH, option 1 restricts the tests to calculations that are
expected to give the same result on all architectures, so it
can't easily push a computation far enough to detect certain
kinds of bugs: for example, a failure to set the correct
hardware rounding mode would probably be swamped by the
output rounding.
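To make that concrete: a wrong rounding mode typically perturbs a
result by about one ulp, which vanishes once output is rounded.
A sketch, faking the one-ulp slip with Python's math.nextafter
(again, the real tests are SML; this just shows the effect):

```python
import math

near = 1.0 / 3.0
# Simulate a mis-set hardware rounding mode: the result lands one
# ulp away from the round-to-nearest value.
up = math.nextafter(near, 1.0)

# The bug is visible when the full values are compared...
print(near != up)                # True

# ...but disappears after rounding to 3 decimal places.
print(f"{near:.3f} {up:.3f}")    # 0.333 0.333
```

So option 1 trades away exactly this class of last-bit bug.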

But still, I'd go with option 1 and rewrite the tests,
since really you're trying to test the compiler,
not the FP math system.



-- 
John Skaller <skaller at users dot sf dot net>
Felix, successor to C++: http://felix.sf.net
