On Thu, May 30, 2019 at 9:03 AM Laurent <laurent.ors...@gmail.com> wrote:
If no one is really relying on them as of today, then I would strongly support
allowing Matthew to break things and move fast. If anyone has a real need for
such a data structure, it can probably still be implemented later as a
third-party library, possibly extended to user-specified-precision flonums.
I suspect that if Matthew feels the need to ask about this, the price of
backward compatibility for all of us may be quite significant.
Personally, I have supported them in the past but not really used them, and I'm
happy to revise my code accordingly.
On Thu, May 30, 2019 at 11:37 AM Hendrik Boom <hend...@topoi.pooq.com> wrote:
On Thu, May 30, 2019 at 12:10:37PM +0200, Konrad Hinsen wrote:
Am 29.05.19 um 17:52 schrieb Matthew Flatt:
Does anyone use single-flonums in Racket?
Right now, no, but I have used them briefly in a past project, for testing
the impact of single-precision on a numerical algorithm.
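(Purely as an illustration, not the actual code from that project: a toy C
sketch of that kind of comparison is to run the same accumulation in float and
in double and look at the difference. The harmonic sum and the iteration count
below are arbitrary choices of mine.)

#include <stdio.h>

int main(void) {
    float  sf = 0.0f;
    double sd = 0.0;
    /* Sum 1/i in both precisions; the float sum stagnates once the
       terms fall below its precision. */
    for (int i = 1; i <= 10000000; i++) {
        sf += 1.0f / (float)i;
        sd += 1.0  / (double)i;
    }
    printf("single: %.9f\n", sf);
    printf("double: %.15f\n", sd);
    printf("difference: %g\n", sd - (double)sf);
    return 0;
}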
The main reason to use single-precision floats nowadays is cutting memory
use in half, both because it is sometimes a scarce resource and because a
smaller memory footprint means better cache utilisation. Single-precision
arrays thus matter more than individual numbers. I have even seen
half-precision floats being used for the same reason. With the current
interest in "big data" and machine learning, I expect this tendency to
increase.
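(To put a rough number on the footprint argument: the array length below is an
arbitrary choice, but it shows the factor of two.)

#include <stdio.h>

int main(void) {
    size_t n = 1000000;  /* arbitrary element count, just for illustration */
    printf("double array: %zu bytes\n", n * sizeof(double));  /* typically 8000000 */
    printf("float array:  %zu bytes\n", n * sizeof(float));   /* typically 4000000 */
    return 0;
}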
Way back in the '60s, on a decimal computer, when memories were small, a
friend reduced floating-point precision to two digits in order to save
space. Two digits isn't much, but it was enough.
-- hendrik
Adding high performance number crunching to an existing compiler and
runtime would be very hard.
Traditionally, people who need high-performance floating point use a
BLAS library. Those are highly tuned to each specific architecture,
because they use platform-specific techniques.
https://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprograms
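For example, a single-precision matrix multiply through the CBLAS C interface
looks roughly like this (assuming some CBLAS implementation such as OpenBLAS
is installed; the 2x2 matrices are only for illustration):

#include <stdio.h>
#include <cblas.h>

int main(void) {
    /* C = alpha*A*B + beta*C, all 2x2, row-major, single precision */
    float A[] = {1, 2, 3, 4};
    float B[] = {5, 6, 7, 8};
    float C[] = {0, 0, 0, 0};

    cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                2, 2, 2,        /* M, N, K */
                1.0f, A, 2,     /* alpha, A, lda */
                B, 2,           /* B, ldb */
                0.0f, C, 2);    /* beta, C, ldc */

    printf("%g %g\n%g %g\n", C[0], C[1], C[2], C[3]);   /* 19 22 / 43 50 */
    return 0;
}

The point is that the tuning lives inside the library; the calling code stays
the same on every platform.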
Think of memory as a slow device that is far away from the CPU. Modern
processors are horribly I/O bound - cache effects dominate everything.
Just for fun, try timing a simple C program that reads
progressively larger blocks of consecutive memory locations (a rough
sketch follows below). There are huge drops in speed near the limits of
each cache level.
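Something along these lines, say (the working-set sizes, the amount of work
per size, and the use of clock() are all arbitrary choices; compile with
optimizations, e.g. cc -O2 sweep.c):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void) {
    const size_t max_bytes = 64 * 1024 * 1024;  /* well past a typical L3 */
    const size_t total_reads = 1 << 28;         /* keep the work constant per size */
    long *buf = malloc(max_bytes);
    if (buf == NULL) return 1;

    for (size_t size = 4 * 1024; size <= max_bytes; size *= 2) {
        size_t n = size / sizeof(long);
        for (size_t i = 0; i < n; i++) buf[i] = (long)i;  /* touch the pages first */

        volatile long *vbuf = buf;  /* volatile so the reads aren't optimized away */
        long sum = 0;
        clock_t start = clock();
        for (size_t done = 0; done < total_reads; done += n)
            for (size_t i = 0; i < n; i++)
                sum += vbuf[i];     /* sequential reads over the working set */
        double secs = (double)(clock() - start) / CLOCKS_PER_SEC;
        (void)sum;

        double mbytes = (double)total_reads * sizeof(long) / (1024.0 * 1024.0);
        printf("%8zu KB working set: %8.1f MB/s\n", size / 1024, mbytes / secs);
    }
    free(buf);
    return 0;
}

The bandwidth should stay roughly flat while the working set fits in a cache
level and drop sharply once it spills into the next one.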
--
Josh Rubin
jlru...@gmail.com
Hi to all my friends at NSA