It's great to be discussing the possibility of enhanced primitive support.

I like most of the proposals.  But this part troubles me:

# For bigints, specify bigints
    * new literal bigint format 42N
# bigints contagious, no more auto-reductions
    * (+ 0N 21 21) => 42N
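
For contrast, if I understand today's behavior correctly, the same
expression auto-reduces back to a plain number:

(+ (bigint 0) 21 21)  ;=> 42, an ordinary boxed integer, not a bigint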

One of the big reasons I prefer Clojure over languages like Scala/F#
is that in Clojure, numbers "just work".  I don't have to worry about
overflow errors, and the use of numbers in sets/hash tables is
generally pretty intuitive (it gets a little wonky if you're using
numbers produced by Java methods, but at least you're good if you stay
within Clojure).

As a case in point, consider writing a naive recursive factorial function.

(defn fact [n]
  (if (zero? n) 1 (* n (fact (dec n)))))

In Clojure, (fact 3) produces the Integer 6.  (fact 40) produces the
BigInteger 815915283247897734345611269596115894272000000000.  It just
works.  The answers produced by fact can be used in collections,
compared for equality with other numbers.
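
The exact boxed types may vary, but roughly, a REPL session today
looks like:

user=> (fact 3)
6
user=> (fact 21)
51090942171709440000
user=> (= (fact 3) 6)
true
user=> (contains? #{6} (fact 3))
true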

In F# or Scala, if you want factorial not to break on an input like
40, you essentially have to use the bigint literal syntax, much like
what you're proposing (shown here in Clojure syntax):
(defn fact [n]
  (if (zero? n) 1N (* n (fact (dec n)))))

Now the results of this function are "contagious" bigints, which
can't be used as effectively with other Clojure data structures.  As
you pointed out, it causes problems when the same number can have two
different type representations depending on the sequence of
computations used to reach it.  Java interop means this will always be
somewhat of a problem, but it's important to me that all numbers
arrived at through Clojure computations just work with respect to
equality and use in sets/hash tables.  The lack of auto-reduction,
combined with the proposal to make equality take numeric type into
account, would really cripple the utility of bigints.
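
If I'm reading the proposal right, the bigint-flavored version above
would instead behave something like this (a hypothetical sketch,
assuming type-aware equality and no auto-reduction):

(fact 3)          ;=> 6N
(= (fact 3) 6)    ;=> false  (6N and 6 differ in numeric type)
(#{6} (fact 3))   ;=> nil    (the set lookup misses)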

Even more frustrating, in my experience with languages like F# and
Scala, functions written like this are way, way slower.  The reason is
that Java's bigint arithmetic is sloooooow.  To protect yourself from
overflow, you've forced the function to use bigint arithmetic on
1N*2N*3N*..., well before you even reach a bigint.  For this
particular function, it reaches bigint status quite quickly, so the
performance degradation is fairly minimal.  But I have found that in
general, requiring programmers to explicitly use bigint arithmetic for
small numbers in order to protect from overflow is an absolute
performance killer.  Clojure's technique of auto-promotion to bigint
upon overflow, and then auto-reduction when appropriate, is way, way
faster than using bigints all the time.
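
To make the cost concrete, here's a rough sketch (not a careful
benchmark; sum-longs and sum-bigs are names invented for this
example).  It sums the integers below a million twice: once with
ordinary arithmetic, and once forcing BigInteger arithmetic at every
step via interop, since auto-reduction would otherwise collapse the
small bigints right back down:

(defn sum-longs [n]
  (loop [i 0, acc 0]
    (if (< i n)
      (recur (inc i) (+ acc i))
      acc)))

(defn sum-bigs [n]
  (loop [i   java.math.BigInteger/ZERO
         acc java.math.BigInteger/ZERO]
    (if (< (.longValue i) n)
      (recur (.add i java.math.BigInteger/ONE) (.add acc i))
      acc)))

(time (sum-longs 1000000))  ; small numbers stay small
(time (sum-bigs 1000000))   ; every op allocates a BigInteger

The second version pays BigInteger allocation and dispatch costs on
every single operation, even though none of the intermediate values
actually needs a bigint.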

In summary:
* No auto-reduction would make bigints second-class citizens and make
  them difficult to use in other functions and collections.
* No auto-promotion to bigints means that programmers who want to
  protect against overflow have to use bigints everywhere, which
  destroys performance.

So what would be the downside of implementing most of these proposals
(e.g., 1 is a literal for long 1), but retaining auto-promotion and
auto-reduction rather than throwing an error on primitive long
overflow?  It seems to me this would be the best of both worlds.  What
am I missing?
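
Concretely, the semantics I'm hoping for would look something like
this (hypothetical):

1                                ;=> a long
(+ 9223372036854775807 1)        ;=> 9223372036854775808N
                                 ;   (auto-promote on overflow, don't throw)
(- (+ 9223372036854775807 1) 1)  ;=> 9223372036854775807
                                 ;   (auto-reduce back to a long)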
