I've already figured out how it works and have found the same 2:1
ratio. (This time on my 1.4 GHz MacBook Air; the previous tests were on
a 2.4 GHz Core 2 Duo running Linux.)

When I did the quick-and-dirty benchmarking this afternoon I used
larger random inputs (1 to 8 MiB), allowing me to calculate a MiB/s
value. I found performance dropped off noticeably when the input size
increased from 2 to 4 MiB. (Processor cache effect?)
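For the curious, the measurement was nothing fancier than timing an encode over random buffers of each size and dividing; a minimal sketch of that approach, using java.util.Base64 purely as a stand-in encoder (the actual code under test was the Clojure implementation, and the class name here is invented for illustration):

```java
import java.util.Base64;
import java.util.Random;

public class ThroughputSketch {
    public static void main(String[] args) {
        Random rnd = new Random(42);
        Base64.Encoder enc = Base64.getEncoder();
        // Input sizes from 1 to 8 MiB, doubling each time.
        for (int mib : new int[] {1, 2, 4, 8}) {
            byte[] input = new byte[mib * 1024 * 1024];
            rnd.nextBytes(input);
            enc.encode(input); // one throwaway pass to let the JIT warm up
            long t0 = System.nanoTime();
            byte[] out = enc.encode(input);
            long t1 = System.nanoTime();
            double seconds = (t1 - t0) / 1e9;
            System.out.printf("%d MiB: %.1f MiB/s (%d output bytes)%n",
                              mib, mib / seconds, out.length);
        }
    }
}
```

A single timed pass like this is noisy, of course -- good enough to spot a 2x falloff between 2 and 4 MiB, not for anything finer.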

Just now, I wrote a round-trip unit test and have begun poking at an
implementation of a decode function. (As an exercise to better
understand the techniques you used in your encode function; if it
turns out not to suck and can compete with whatever you must already
be writing, I'll offer it for inclusion -- my C.A. is on file.)
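The round-trip property being tested is just decode(encode(x)) == x over a spread of lengths. A sketch of that test, again with java.util.Base64 standing in for the Clojure encode/decode pair under development:

```java
import java.util.Arrays;
import java.util.Base64;
import java.util.Random;

public class RoundTripSketch {
    public static void main(String[] args) {
        Random rnd = new Random(0);
        Base64.Encoder enc = Base64.getEncoder();
        Base64.Decoder dec = Base64.getDecoder();
        // Lengths around the 3-byte block boundary (padding cases)
        // plus a couple of larger buffers.
        for (int len : new int[] {0, 1, 2, 3, 4, 57, 1024, 1 << 20}) {
            byte[] original = new byte[len];
            rnd.nextBytes(original);
            byte[] roundTripped = dec.decode(enc.encode(original));
            if (!Arrays.equals(original, roundTripped)) {
                throw new AssertionError("round trip failed at length " + len);
            }
        }
        System.out.println("all round trips ok");
    }
}
```

The short lengths matter most: 1 and 2 mod 3 exercise the '=' padding paths where a decoder is likeliest to get the last block wrong.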

// Ben

On Mon, Oct 10, 2011 at 20:37, Alexander Taggart <m...@ataggart.ca> wrote:
> I see about a 50% increased throughput over apache commons-codec as well. I
> use the perf-base64 ns to generate input data and output timing files to keep
> track of changes to the performance over time, lest a regression creep in.
> I'll add some documentation if you want to play with it.
>
> --
> You received this message because you are subscribed to the Google
> Groups "Clojure" group.
> To post to this group, send email to clojure@googlegroups.com
> Note that posts from new members are moderated - please be patient with your
> first post.
> To unsubscribe from this group, send email to
> clojure+unsubscr...@googlegroups.com
> For more options, visit this group at
> http://groups.google.com/group/clojure?hl=en
