since that would
minimize the kind of performance problems discussed here.
--
Linus Björnstam
On Fri, 12 Jun 2020, at 22:13, Ludovic Courtès wrote:
> Hi,
>
> Linus Björnstam skribis:
>
> > You can cut another 15-ish % from that loop by making an inline loop, btw
e-3.0.so.1.1.1 [.] scm_call_1
2.31%  guile  libguile-3.0.so.1.1.1  [.] scm_string_for_each
--8<---cut here---end--->8---
Indeed, we get better performance when turning off JIT:
--8<---cut here---start--->8---
$
string-for-each is in the default environment, and is probably the same as
the srfi-13 C implementation.
--
Linus Björnstam
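[Editor's note: the "inline loop" mentioned above can be sketched as a hand-rolled equivalent of string-for-each; the names here are illustrative, not from the original code.]

```scheme
;; Hand-inlined character loop, roughly equivalent to
;; (string-for-each proc s), avoiding the call into string-for-each.
(define (inline-string-for-each proc s)
  (let ((len (string-length s)))
    (let loop ((i 0))
      (when (< i len)
        (proc (string-ref s i))
        (loop (+ i 1))))))
```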
On Sun, 7 Jun 2020, at 08:27, Aleix Conchillo Flaqué wrote:
> Hi,
>
> in the latest guile-json, 4.1.0. I changed some code to use
> for-each+string->list. The
optimization).
--
Linus Björnstam
On Sun, 7 Jun 2020, at 08:27, Aleix Conchillo Flaqué wrote:
> Hi,
>
> in the latest guile-json, 4.1.0. I changed some code to use
> for-each+string->list. The performance seemed nice and I released it.
>
> Christopher Lam pointed out that I could
On Sat, Jun 6, 2020 at 11:27 PM Aleix Conchillo Flaqué
wrote:
> Hi,
>
> in the latest guile-json, 4.1.0. I changed some code to use
> for-each+string->list. The performance seemed nice and I released it.
>
> Christopher Lam pointed out that I could have used string-for-each
Hi,
in the latest guile-json, 4.1.0. I changed some code to use
for-each+string->list. The performance seemed nice and I released it.
Christopher Lam pointed out that I could have used string-for-each instead.
I made the change but the performance degraded a lot:
string-for-each:
scheme@(j
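[Editor's note: the two variants being compared can be sketched as follows; `do-something` stands in for the real per-character work and is illustrative.]

```scheme
;; Variant 1: allocates an intermediate list of characters.
(for-each (lambda (c) (do-something c)) (string->list s))

;; Variant 2: iterates over the string in place, no intermediate list.
(string-for-each (lambda (c) (do-something c)) s)
```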
Hi,
I think I found a gc leak in guile 3.0
Isn't it so that the continuation keeps a copy of the stack? The issue is
that in the stack a raw integer or float may be present, and so the GC
properties are less than ideal, as those may be interpreted as pointers by
the GC and lead to parts of the heap being retained.
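[Editor's note: a minimal way to probe this kind of retention, sketched under the assumption that Guile's (gc-stats) alist carries a heap-size entry, as it does in Guile 3.0.]

```scheme
;; Sketch: keep a continuation per iteration and watch heap statistics.
;; If captured stacks pin heap objects through false pointers, heap-size
;; stays high even after an explicit (gc).
(define kept '())

(do ((i 0 (+ i 1)))
    ((= i 1000))
  (call-with-current-continuation
   (lambda (k) (set! kept (cons k kept)))))

(gc)
(format #t "heap-size after gc: ~a~%" (assq-ref (gc-stats) 'heap-size))
```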
> On 3 Dec 2017, at 23:15, Arne Babenhauserheide wrote:
>
> Hans Åberg writes:
>
>> I see the expected behavior when turning off the GC altogether,
>> using malloc and free without cleanup: a third of the time on one
>> thread, decreasing up to hardware concurrency.
>
> I also saw that -
Hans Åberg writes:
> I see the expected behavior when turning off the GC altogether,
> using malloc and free without cleanup: a third of the time on one
> thread, decreasing up to hardware concurrency.
I also saw that - I think we tracked it down to the GC heuristics
miscalculating when to r
On Sat, 02 Dec 2017 10:50:29 +0200
Marko Rauhamaa wrote:
> Linas Vepstas :
> > I cannot speak to GC, but I frequently encounter situations in guile
> > where using the parallel constructs e.g. par-for-each, end up
> > running slower than the single-threaded version. For example, using
> > 2 or 3
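[Editor's note: the par-for-each slowdown described above is easy to reproduce when the per-element work is small; a sketch using par-for-each from (ice-9 threads) with a deliberately tiny thunk.]

```scheme
(use-modules (ice-9 threads))

;; When the work per element is tiny, thread coordination and
;; allocation dominate, and the parallel version can run slower
;; than the sequential one.
(define data (iota 100000))

(for-each     (lambda (x) (* x x)) data)  ; sequential
(par-for-each (lambda (x) (* x x)) data)  ; parallel, often slower here
```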
excessive locking: all data access needs to take
> place in a critical section.
> These and other issues with multithreading have caused many of us to
> reject multithreading as a paradigm altogether. Use event-driven
> programming for multiplexing and multiprocessing for performance.
he CPU
hardware to flush its memory cache.
The same issue hampers C developers as well, although the newest C
standards explicitly shift to the coder the responsibility for the
consistency of the data model. Thus, in principle, a C programmer can
try clever tricks to eke out performance with mu
> On 1 Dec 2017, at 20:49, Linas Vepstas wrote:
>
> On Mon, Nov 27, 2017 at 5:44 PM, Hans Åberg wrote:
>
>>
>>
>>> On 28 Nov 2017, at 00:23, Marko Rauhamaa wrote:
>>>
>>> Hans Åberg :
I saw overhead also for the small allocations, 20-30% maybe. This is
in a program that makes a
On Mon, Nov 27, 2017 at 5:44 PM, Hans Åberg wrote:
>
>
> > On 28 Nov 2017, at 00:23, Marko Rauhamaa wrote:
> >
> > Hans Åberg :
> >> I saw overhead also for the small allocations, 20-30% maybe. This is
> >> in a program that makes a lot of allocations relative to other
> >> computations. So that ma
Hans Åberg :
> I saw overhead also for the small allocations, 20-30% maybe. This is
> in a program that makes a lot of allocations relative to other
> computations. So that made me wonder about Guile.
I don't have an answer to your question although I would imagine GC
cannot be effectively scaled acr
> On 28 Nov 2017, at 00:23, Marko Rauhamaa wrote:
>
> Hans Åberg :
>> I saw overhead also for the small allocations, 20-30% maybe. This is
>> in a program that makes a lot of allocations relative to other
>> computations. So that made me wonder about Guile.
>
> I don't have an answer to your ques
> On 28 Nov 2017, at 00:05, Stefan Israelsson Tampe
> wrote:
>
> There are lists of free memory kept in each thread, so usually a small
> allocation is one multithreaded large allocation and then
> a bunch of small allocations in the thread consuming that large allocation. So for
> small memory segme
There are lists of free memory kept in each thread, so usually a small allocation
is one multithreaded large allocation and then
a bunch of small allocations in the thread consuming that large allocation. So for
small memory segments I would say that the
overhead probably is negligible.
On Mon, Nov 27, 2
With the Boehm GC in Guile, do multithreaded allocations take longer than
the same amount in a single thread?
On Wed 12 Apr 2017 09:28, Vijay Pratap Chaurasia writes:
> Thanks for the input and for putting the code in an attachment. I am consistently
> getting bad performance with guile-2.2 even though it is reasonably good
> with guile2.1 compared to guile2.0. I have built guile on the RHEL5 platform
ter compilation.
So 2.2 is three times faster than 2.0, right? That sounds like good
news. :-)
Note that if you’re looking for performance, you definitely need to
compile the code beforehand, both with 2.0 and 2.2.
Ludo’.
Hi Thomas,
Thanks for the input and for putting the code in an attachment. I am consistently
getting bad performance with guile-2.2 even though it is reasonably good
with guile2.1 compared to guile2.0. I have built guile on the RHEL5 platform
using gcc4.4 with large file support (CFLAGS='-D_FILE_OFFSET
Hi,
On 4/11/17, Thomas Morley wrote:
...
> Reformated versions are attached, if someone else want to check, too.
Thank you for those.
Results on my machine are as follows:
Guile 2.2:
8<
$ /usr/local/bin/guile --version
guile (GNU
Hi Thomas!
Has the lilypond team retried the benchmarks with the newly released guile
2.2? One of the most recent improvements was fixing a gc issue that caused
performance issues with large files. Could be, with some luck,
that that fix helped the lilypond benchmark that underperforms as
t. Almost all calls were taking more than double the time compared to
> guile-2.0.11. It is contrary to the claim of a 30% performance boost with
> the guile-2.2 release. Can someone point out the possible reason for the slowness?
>
> I have created a simple test program which reports the diff of
guile-2.0.11. It is contrary to the claim of a 30% performance boost with
the guile-2.2 release. Can someone point out the possible reason for the slowness?
I have created a simple test program which reports the diff of two alists.
*time /home/guile-2.2/bin/guile -s ./performanceTest.scm*
real    0m34.375s
user
e
value I put into the methods.
What I cannot give is performance information about the methods: There
are some important implications of having many methods, but I just don’t
know them.
>> I just added this to guile-basics:
>> http://www.draketo.de/proj/guile-basics/#sec-2-5
>
Andy Wingo writes:
> Hello,
>
> On Mon 21 Jun 2010 11:48, Cecil Westerhof writes:
>
>> standard input:2:2: In expression (unquote profile):
>> standard input:2:2: Unbound variable: unquote
>> ABORT: (unbound-variable)
>
> Ah, I didn't know you were using Guile 1.8. The Guile 2.0 snap
- Original message -
> On Mon 21 Jun 2010 11:48, Cecil Westerhof writes:
>
> > standard input:2:2: In expression (unquote profile):
> > standard input:2:2: Unbound variable: unquote
> > ABORT: (unbound-variable)
>
> Ah, I didn't know you were using Guile 1.8. The Guile 2.0 snapshot
Hello,
On Mon 21 Jun 2010 11:48, Cecil Westerhof writes:
> standard input:2:2: In expression (unquote profile):
> standard input:2:2: Unbound variable: unquote
> ABORT: (unbound-variable)
Ah, I didn't know you were using Guile 1.8. The Guile 2.0 snapshots are
faster, and they have a
On Saturday 19 Jun 2010 20:16 CEST, Andy Wingo wrote:
>> On Saturday 19 Jun 2010 11:16 CEST, Andy Wingo wrote:
>>
>>> ,profile (call-my-function)
>>
>> (main ("temp/input" "dummy.log" "^ +" "1234567890"))
>
> Almost. At the repl, type:
>
> (load "dummy.scm")
>
> Then:
>
> ,profile (main '("dummy
Again, because the first one I sent only to Thien-Thi instead of to the
mailing list.
On Saturday 19 Jun 2010 17:44 CEST, Thien-Thi Nguyen wrote:
> Re performance, take a look at the lower-level procedures used to
> implement the high-level ‘read-line’. The lowest ones require an
> explic
Hello,
On Sat 19 Jun 2010 17:05, Cecil Westerhof writes:
> On Saturday 19 Jun 2010 11:16 CEST, Andy Wingo wrote:
>
>> ,profile (call-my-function)
>
> (main ("temp/input" "dummy.log" "^ +" "1234567890"))
Almost. At the repl, type:
(load "dummy.scm")
Then:
,profile (main '("dummy.scm" "
() Cecil Westerhof
() Sat, 19 Jun 2010 17:05:50 +0200
(main ("temp/input" "dummy.log" "^ +" "1234567890"))
To answer this, you can try the following experiment:
$ cat > program <
On Saturday 19 Jun 2010 11:16 CEST, Andy Wingo wrote:
> On Fri 18 Jun 2010 22:50, Cecil Westerhof writes:
>
>> Why is this so expensive?
>
> The general answer to this question can be found by profiling. You
> should factor your code into a function, then from the repl:
>
> ,profile (call-my-fun
On Fri 18 Jun 2010 22:50, Cecil Westerhof writes:
> Why is this so expensive?
The general answer to this question can be found by profiling. You
should factor your code into a function, then from the repl:
,profile (call-my-function)
I wonder, perhaps we should have a --profile command-line
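[Editor's note: outside the REPL, the same sampling profiler is available programmatically via the (statprof) module shipped with Guile; a sketch with a throwaway workload.]

```scheme
(use-modules (statprof))

;; Profile a thunk and print a flat profile to the current output port.
(statprof
 (lambda ()
   (let loop ((i 0) (acc '()))
     (if (< i 100000)
         (loop (+ i 1) (cons i acc))
         (length acc)))))
```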
I have the following code:
#!/usr/bin/guile \
-e main -s
!#
(use-modules (ice-9 rdelim))
(use-modules (ice-9 regex))
(define (main args)
  (let* ((arg-vector (list->vector args))
         (input-file-name (vector-ref arg-vector 1))
         (output-file
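[Editor's note: the snippet above is cut off by the archive. For readers who want something runnable, here is a hypothetical completion; every name past `output-file` is a guess based on the argument list quoted elsewhere in the thread (input file, output file, pattern, replacement).]

```scheme
#!/usr/bin/guile \
-e main -s
!#
;; Hypothetical completion of the truncated script: read each line of
;; the input file, substitute every match of the pattern, and write the
;; result to the output file.
(use-modules (ice-9 rdelim))
(use-modules (ice-9 regex))

(define (main args)
  (let* ((arg-vector (list->vector args))
         (input-file-name (vector-ref arg-vector 1))
         (output-file-name (vector-ref arg-vector 2))
         (pattern (vector-ref arg-vector 3))
         (replacement (vector-ref arg-vector 4)))
    (call-with-output-file output-file-name
      (lambda (out)
        (call-with-input-file input-file-name
          (lambda (in)
            (let loop ((line (read-line in)))
              (unless (eof-object? line)
                (display (regexp-substitute/global
                          #f pattern line 'pre replacement 'post)
                         out)
                (newline out)
                (loop (read-line in))))))))))
```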
I think it would be good if we could track Guile's performance better,
and how it changes over time. But...
1. We don't currently have many benchmarks. There are just 3 in the
repo, and they're all pretty trivial.
2. I have no experience in, and no immediate feel for, how w