[MLton-user] Mostly MLton performance.

Matthew Fluet fluet@cs.cornell.edu
Sat, 25 Mar 2006 14:57:42 -0500 (EST)


> I am trying out MLton and am very impressed.

Glad to hear it.

> If I run the MLton compiled program several times, running times
> increase significantly, as in:
>
> My first guess was that the GC may behave differently in different
> runs, but fixed-heap did not eliminate the difference, and using
> gc-summary I can see that the GC only takes slightly more time in the
> longer runs (the differences in "total GC time" are less than one
> second). When the program is run on a computer that has "rested" (low
> load average last minute or so) it seems to always take roughly the same
> time.

My guess is that MLton's memory management is behaving poorly with respect 
to the OS memory management.  In general, MLton will try to grab a large 
block of virtual address space for the heap.

> The second point is that when removing checks for nan-ness on reals, I
> get a significant speedup.

Are you otherwise explicitly checking for nan-ness (with Real.isNan or 
Real.class)?
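
For reference, a minimal sketch of the NaN-as-flag idea (the names here
are hypothetical, not from the original poster's code), using Real.isNan
from the Basis Library:

   (* A hypothetical sketch of using NaN as an "absent" marker in place
    * of a real option.  0.0 / 0.0 evaluates to a NaN at runtime;
    * IEEE division does not raise Div for reals. *)
   val absent : real = 0.0 / 0.0

   (* Real.isNan is the Basis predicate for this; note that a direct
    * r = r comparison is not available, since real is not an eqtype
    * in SML. *)
   fun toOption (r : real) : real option =
      if Real.isNan r then NONE else SOME r

Every read of such a flagged value pays for the isNan test, which is one
place the cost can show up.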

> I would like to use nan-ness as a kind of flag (instead of real option) 
> as I expected it not to give me a large penalty. In C++ using gcc I feel 
> there is a much smaller penalty (around 4 seconds) compared to MLton 
> (around 15 seconds), and my gcc compiled program produces reasonable 
> results.

It's pretty hard to say what contributes to the difference between the SML 
and the C++ behavior without seeing the code.

> In a post to the devel-list Stephen Weeks wrote about a faster version
> of Real.abs. Could this be related to my question? Is there a faster way
> of doing things that is not used due to standard compliance? Is this
> discussed somewhere?

The issue with many of the floating point operations is that the Standard 
ML Basis Library (www.standardml.org/Basis) is quite specific on the 
behavior of these operations when the arguments are INF, NAN, etc. 
Unfortunately, the math libraries for C often do not match the semantics 
required by the Basis, so we resort to explicitly querying the class of 
the arguments and behaving accordingly.
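
To illustrate the kind of wrapper this forces (a sketch of the technique,
not MLton's actual implementation), one can query Real.class and dispatch
on the IEEEReal.float_class result before calling the C primitive:

   (* Sketch: query the class of the argument and handle the special
    * cases the Basis specifies, falling through to the math library
    * only for ordinary arguments. *)
   fun checkedCos (r : real) : real =
      case Real.class r of
         IEEEReal.NAN => r            (* Basis: cos nan = nan       *)
       | IEEEReal.INF => 0.0 / 0.0    (* Basis: cos (+-inf) = nan   *)
       | _ => Math.cos r

The extra class test on every call is where the overhead relative to a
bare C math-library call comes from.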

> I think I read in a recent message to the list that the ffi is being
> reworked.

Not significantly.

> I would be interested in knowing if it is an aim to extend the
> number of types that can be passed as parameters to c-functions. In
> particular I would be interested in records or tuples and functions
> (i.e., as c-function pointers). Specifically, what I have in mind are
> functions returning a result and an error estimate (as tuples or
> records) and minimizers (taking (among other things) a function as a
> parameter).

There are no immediate plans for adding the ability to allow FFI functions 
to return complex results.


You can export an SML function to C:
   http://mlton.org/CallingFromCToSML
and you can then take the address of that exported SML function as a 
C-function pointer.

Suppose you want to call a function that minimizes another function on a 
given interval.  So, in the C world, we have:

   double minimizer(double (*f)(double), double, double);

Now, the function we're interested in minimizing is expressed in SML as:

   fun myFun r = (r + 1.0) / (r + 2.0)

Essentially, you do the following:

  (* Export the SML function "myFun" as the C function "myCFun" *)
  val doExport = _export "myCFun" : (real -> real) -> unit;
  val _ = doExport myFun

  (* Take the address of the C function "myCFun" *)
  val addr = _address "myCFun": MLton.Pointer.t;

  (* Import the minimizer function;
   * we can't specify that the first argument is a function pointer
   * argument, so we just use "MLton.Pointer.t", which corresponds
   * to "void*".
   *)
  val minimizer = _import "minimizer" : MLton.Pointer.t * real * real -> real;

  (* Run the minimizer. *)
  val z = minimizer (addr, 1.0, 2.0)

> Ps. In the "fold" page on the MLton homepage (linked from the for-loop
> page), the term "eta expand" is used without any reference. Is this the
> same as the eta-reduction mentioned in the wikipedia article on lambda
> calculus? In that case I think that a link to the wikipedia article
> might be helpful for beginners like me.

Yes, "eta expansion" is essentially the same as eta-conversion mentioned 
in the wikipedia article.  "Expansion" refers to the direction of 
conversion (from "f" to "\x. f x").
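
A concrete SML illustration (the identifiers here are just examples):

   (* f names an existing function value ... *)
   val f : real -> real = Math.sin

   (* ... and f' is its eta-expansion: the "\x. f x" form.
    * Eta-reduction is the reverse rewrite, from f' back to f. *)
   val f' : real -> real = fn x => Math.sin x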