On Mon, Oct 6, 2008 at 2:58 PM, Cale Gibbard <[EMAIL PROTECTED]> wrote:
> 2008/10/6 Don Stewart <[EMAIL PROTECTED]>:
>> dagit:
>>>    data and newtype vary in one more subtle way, and that's how/when they
>>>    evaluate to bottom.  Most of the time they behave identically, but in the
>>>    right cases they act slightly differently.  newtype is usually regarded as
>>>    more efficient than data.  This is because the compiler can choose to
>>>    optimize away the newtype so that it only exists at type check time.  I
>>>    think this is also possible with data in some, but not all, uses.
>>
>> The compiler *must* optimise away the use. They're sort of 'virtual'
>> data, guaranteed to have no runtime cost.
>
> I'm not sure that I'd want to be that emphatic about what an
> implementation *must* do regarding something so operational.
>
> [..]
>
> We can say however that newtypes have no additional runtime cost in
> GHC regardless of the optimisation level you pick.
>
Not even that is true in general: one can still end up doing
unnecessary work at runtime just for the sake of converting between types.

Suppose you have a newtype Price = Price Int and you're given an [Int]
that you want to turn into a [Price]. This is simple to do: just 'map
Price'. Since Price and Int are represented identically, this ought to
be nothing more than the identity function. But it is in general very
difficult for a compiler to figure out that this traversal of the list
is in fact the identity. Simple type conversions like these can
unfortunately force you to do real work even though the representations
are identical.
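A small sketch of the situation described above (the Price type is the
hypothetical one from the paragraph, not anything from a real library):

```haskell
-- Hypothetical newtype from the discussion above: a wrapper around Int
-- that shares Int's runtime representation.
newtype Price = Price Int deriving (Show, Eq)

-- Converting [Int] to [Price] the obvious way. Semantically this is the
-- identity on the underlying representation, but 'map' still traverses
-- and rebuilds the entire list at runtime, allocating a fresh spine.
toPrices :: [Int] -> [Price]
toPrices = map Price

main :: IO ()
main = print (toPrices [1, 2, 3])  -- prints [Price 1,Price 2,Price 3]
```

Unless the compiler can prove that 'map Price' is the identity, this
pays a full list traversal for a conversion that changes nothing about
the in-memory data.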

Cheers,

Josef
_______________________________________________
Haskell-Cafe mailing list
[email protected]
http://www.haskell.org/mailman/listinfo/haskell-cafe