It’s hard to predict cache behavior without an actual workload, so I would
recommend using cachegrind with a real program (not a benchmark) to
evaluate the cost of doing things one way or the other.
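
Something along these lines (with ./myprog as a stand-in for your real
program) will give per-function miss counts:

  valgrind --tool=cachegrind ./myprog
  cg_annotate cachegrind.out.<pid>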

On Tue, Apr 28, 2020 at 05:24 Guido Stepken <gstep...@gmail.com> wrote:

> Certainly. But how is it *implemented* internally? In most cases you
> suffer a massive performance loss when prepending, because the complete
> linked list gets moved to a new place in memory. If the internal
> representation is a double cell, one value plus a pointer to the next,
> then you quickly suffer CPU cache misses: wild jumps across memory with
> up to 18 CPU wait states per random access. That means your proud 4 GHz
> machine ends up slower than a 250 MHz embedded ESP32 CPU. Python's deque,
> for example, shows no performance loss here.
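>
> A rough sketch in plain C (not PicoLisp's actual internals, just an
> illustration of such a "double cell") showing the pointer chasing that a
> traversal does:
>
>    #include <stdio.h>
>    #include <stdlib.h>
>
>    /* one cell: a value plus a pointer to the next cell */
>    typedef struct cell {
>        long         car;   /* the value */
>        struct cell *cdr;   /* rest of the list */
>    } cell;
>
>    /* traversal follows cdr pointers; when the cells are scattered
>       across the heap, every hop can be a cache miss */
>    long sum(cell *p) {
>        long s = 0;
>        for (; p; p = p->cdr)
>            s += p->car;
>        return s;
>    }
>
>    int main(void) {
>        cell *head = NULL;
>        for (long i = 0; i < 1000000; i++) {  /* build by prepending */
>            cell *c = malloc(sizeof *c);
>            c->car = i;
>            c->cdr = head;
>            head = c;
>        }
>        printf("%ld\n", sum(head));
>        return 0;
>    }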
>
> Have fun!
>
> On Tuesday, April 28, 2020, Wilhelm Fitzpatrick <raf...@well.com> wrote:
> > On 4/27/20 2:42 PM, Guido Stepken wrote:
> >
> >> In most Lisp languages, you can only "append" to a list, never
> >> "prepend".
> >
> > "Prepend", aka "add to the beginning" seems the natural (and
> non-destructive) operation of Lisp, e.g.
> >
> > (cons 9 '(1 2 3)) -> (9 1 2 3)
> >
> > ...perhaps that is what you meant?
> >
> > -wilhelm
> >
> >

-- 
John Duncan
