[MLton] IntInf_to_WordVector semantics

Matthew Fluet fluet@cs.cornell.edu
Fri, 9 Jun 2006 20:21:53 -0400 (EDT)


> We've implemented a variety of the primitive operations. But we are unsure
> about IntInf_to_WordVector. Looking at the signature of WordXVector it
> looks like the only way to create one is to use the fromString function.
> We are wondering how exactly any given number should be translated to a
> string to produce the correct word vector. Assuming that is the right
> thing to do at all. It would seem logical that we would just produce the
> number as a string, but the IntInf -> string -> word vector sequence
> seems odd.

No, the IntInf_toWordVector primitive is simply a type-cast.  At
run-time it behaves as the identity function; note, however, that it is
the identity on the MLton representation of IntInf.int values:

   IntInf_toVector : IntInf.int -> Word32.word vector
   IntInf_toWord : IntInf.int -> Word32.word
   Word_toIntInf : Word32.word -> IntInf.int
   WordVector_toIntInf : Word32.word vector -> IntInf.int
     cast between representations; the invariant is that an IntInf.int
     denoting a value in [-2^30, 2^30 - 1] is represented by a
     Word32.word, where the upper 31 bits are the twos complement
     representation and the low bit is 1, while an IntInf.int denoting a
     value in (-inf, -2^30-1] union [2^30, inf) is represented by a
     (pointer to a) Word32.word vector in which the 0th element is the sign
     (0 for positive, 1 for negative) and the remaining elements are the
     twos complement representation of the absolute value of the integer.
     The Basis Library and runtime maintain this invariant.  Note that
     the Basis Library performs IntInf_toWord even on integers
     represented by a (pointer to a) Word32.word vector, checking the low
     bit of the result to determine whether or not the integer is 'small'
     (a pointer is word-aligned, so its low bit is 0).
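
For concreteness, here is a rough sketch of that tagging arithmetic in
Standard ML, written against the Basis Library.  It is only an
illustration of the invariant above, not MLton's actual implementation,
and the function names (tagSmall, untagSmall, isSmall) are made up:

   (* A value v in [-2^30, 2^30 - 1] is represented by the Word32.word
    * 2*v + 1: v in the upper 31 bits, tag bit 1 in the low bit. *)
   fun tagSmall (v : Int32.int) : Word32.word =
      Word32.orb (Word32.<< (Word32.fromLargeInt (Int32.toLarge v), 0w1),
                  0w1)

   fun untagSmall (w : Word32.word) : Int32.int =
      Int32.fromLarge (Word32.toLargeIntX (Word32.~>> (w, 0w1)))

   (* Mirrors the low-bit check the Basis Library does after an
    * IntInf_toWord cast: a pointer is word-aligned, so its low bit is 0,
    * while a tagged small integer has low bit 1. *)
   fun isSmall (w : Word32.word) : bool =
      Word32.andb (w, 0w1) = 0w1

For example, tagSmall 1 evaluates to 0wx3 and tagSmall ~1 to
0wxFFFFFFFF, both of which have the low bit set, whereas any heap
pointer viewed as a Word32.word has the low bit clear.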