On Sun, May 25, 2014 at 1:26 PM, Dimitris Karampinas wrote:
> My deployment is "NUMA-aware". I allocate cores that reside on the same
> socket. Once I reach the maximum number of cores, I start allocating cores
> from a neighbouring socket.
I'm not sure if it solves your issue, but on a NUMA env
Increasing the shared_buffers size improved the performance by 15%. The
trend remains the same though: steep drop in performance after a certain
number of clients.
My deployment is "NUMA-aware". I allocate cores that reside on the same
socket. Once I reach the maximum number of cores, I start allocating cores
from a neighbouring socket.
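
For what it's worth, a minimal sketch of what such a socket-bound launch could look like with numactl (the node number, data directory and shared_buffers value are illustrative assumptions, not taken from this thread):

# assumption: NUMA node 0 holds the cores the benchmark should use first;
# bind CPU placement and memory allocation of the postmaster (and, by
# inheritance, its backend processes) to that node
numactl --cpunodebind=0 --membind=0 \
    pg_ctl -D /path/to/data -l logfile start

# shared_buffers raised in postgresql.conf before starting, e.g.
#   shared_buffers = 8GB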
On Fri, May 23, 2014 at 10:25 AM, Dimitris Karampinas
wrote:
> I want to bypass any disk bottleneck so I store all the data in ramfs (the
> purpose the project is to profile pg so I don't care for data loss if
> anything goes wrong).
> Since my data are memory resident, I thought the size of the shared buffers
> wouldn't play much role.
I want to bypass any disk bottleneck so I store all the data in ramfs (the
purpose the project is to profile pg so I don't care for data loss if
anything goes wrong).
Since my data are memory resident, I thought the size of the shared buffers
wouldn't play much role, yet I have to admit that I saw
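
A rough sketch of a RAM-resident cluster of the kind described above (the mount point, ownership and log path are illustrative, not from the thread):

# everything below is lost on reboot, which is fine for profiling runs
sudo mkdir -p /mnt/pgram
sudo mount -t ramfs ramfs /mnt/pgram   # no size cap; tmpfs with -o size=16G is an alternative
sudo chown postgres:postgres /mnt/pgram
initdb -D /mnt/pgram/data
pg_ctl -D /mnt/pgram/data -l /mnt/pgram/logfile start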
On Fri, May 23, 2014 at 7:40 AM, Dimitris Karampinas wrote:
> Thanks for your answers. A script around pstack worked for me.
>
> (I'm not sure if I should open a new thread, I hope it's OK to ask another
> question here)
>
> For the workload I run it seems that PostgreSQL scales with the number of
> concurrent clients up to the point that these reach the number of cores.
On 23.5.2014 16:41, "Dimitris Karampinas" wrote:
>
> Thanks for your answers. A script around pstack worked for me.
>
> (I'm not sure if I should open a new thread, I hope it's OK to ask
another question here)
>
> For the workload I run it seems that PostgreSQL scales with the number of
concurrent clients up to the point that these reach the number of cores.
Thanks for your answers. A script around pstack worked for me.
(I'm not sure if I should open a new thread, I hope it's OK to ask another
question here)
For the workload I run it seems that PostgreSQL scales with the number of
concurrent clients up to the point that these reach the number of cores.
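
The pstack script itself is not shown in the thread; a minimal sketch of the idea, assuming pstack (or gstack) is installed and a backend PID is known, could be:

#!/bin/sh
# usage: ./sample_stacks.sh <backend_pid> [samples]
# repeatedly dump the backend's stack and count how often each function shows up
PID=$1
N=${2:-100}
for i in $(seq "$N"); do
    pstack "$PID"
    sleep 0.1
done | awk '/^#/ {print $4}' | sort | uniq -c | sort -rn | head -20

The awk field assumes the usual "#0  0xADDR in function ()" frame format; adjust it if your pstack prints something different.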
On Thu, May 22, 2014 at 10:48 PM, Tom Lane wrote:
> Call graph data usually isn't trustworthy unless you built the program
> with -fno-omit-frame-pointer ...
This page is full of ideas as well:
https://wiki.postgresql.org/wiki/Profiling_with_perf
--
Michael
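
A sketch of the frame-pointer route described above (the paths and the 60-second window are placeholders):

# build PostgreSQL so perf can walk the stack via frame pointers
./configure --enable-debug CFLAGS="-O2 -fno-omit-frame-pointer"
make && make install

# profile system-wide while the benchmark is running, keeping call graphs
perf record -a -g -- sleep 60
perf report -g        # expand the spinlock symbol to see its callers

# if rebuilding is not an option, DWARF-based unwinding is an alternative:
#   perf record -a --call-graph dwarf -- sleep 60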
Dimitris Karampinas writes:
> Is there any way to get the call stack of a function when profiling
> PostgreSQL with perf ?
> I configured with --enable-debug, I run a benchmark against the system and
> I'm able to identify a bottleneck.
> 40% of the time is spent on a spinlock yet I cannot find out the codepath
> that gets me there.
On 5/22/2014 7:27 AM, Dimitris Karampinas wrote:
Is there any way to get the call stack of a function when profiling
PostgreSQL with perf ?
I configured with --enable-debug, I run a benchmark against the system
and I'm able to identify a bottleneck.
40% of the time is spent on a spinlock yet I cannot find out the codepath
that gets me there.
Is there any way to get the call stack of a function when profiling
PostgreSQL with perf ?
I configured with --enable-debug, I run a benchmark against the system and
I'm able to identify a bottleneck.
40% of the time is spent on a spinlock yet I cannot find out the codepath
that gets me there.
Usi
On Tue, Apr 12, 2005 at 08:43:59AM -0600, Michael Fuhr wrote:
> 8.1devel changes frequently (sometimes requiring initdb) and isn't
> suitable for production, but if the trigger statistics would be
> helpful then you could set up a test server and load a copy of your
> database into it. Just beware
On Tue, Apr 12, 2005 at 12:46:43PM +0200, hubert lubaczewski wrote:
>
> the problem is that both the inserts and updates operate on
> heavily-triggered tables.
> and it made me wonder - is there a way to tell how much of the backend's time
> was spent on triggers, index updates and so on?
> like:
> total
On Tue, Apr 12, 2005 at 10:18:31AM -0400, Alex Turner wrote:
> Speaking of triggers...
> Is there any plan to speed up plpgsql triggers? Fairly simple
> crosstable insert triggers seem to slow my inserts to a crawl.
plpgsql is quite fast actually. if some triggers slow inserts too much,
i guess yo
Speaking of triggers...
Is there any plan to speed up plpgsql triggers? Fairly simple
crosstable insert triggers seem to slow my inserts to a crawl.
Is the best thing just to write triggers in C (I really don't want to
put this stuff in the application logic because it really doesn't
belong there)?
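
For reference, a "fairly simple crosstable insert trigger" of the kind being described might look like the sketch below; the table and function names are invented, and the tables orders(id, amount) and orders_archive(id, amount) are assumed to exist already:

psql -d test <<'SQL'
-- copy every newly inserted row into a second table, once per row
CREATE OR REPLACE FUNCTION copy_to_archive() RETURNS trigger AS $$
BEGIN
    INSERT INTO orders_archive (id, amount) VALUES (NEW.id, NEW.amount);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER orders_archive_trg
    AFTER INSERT ON orders
    FOR EACH ROW EXECUTE PROCEDURE copy_to_archive();
SQL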
hubert lubaczewski writes:
> and it made me wonder - is there a way to tell how much of the backend's time
> was spent on triggers, index updates and so on?
In CVS tip, EXPLAIN ANALYZE will break out the time spent in each
trigger. This is not in any released version, but if you're
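
In versions where this is available, the per-trigger breakdown shows up at the end of the EXPLAIN ANALYZE output; a hedged example (table, trigger name and timings are invented):

psql -d test -c "EXPLAIN ANALYZE INSERT INTO orders (amount) VALUES (42);"
# note that EXPLAIN ANALYZE actually executes the INSERT; the plan output
# ends with one line per fired trigger, roughly of the form:
#   Trigger orders_archive_trg: time=0.123 calls=1
#   Total runtime: 0.456 ms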
hi
i'm not totally sure i should ask on this mailing list - so if you think
i should better ask someplace else, please let me know.
the problem i have is that specific queries (inserts and updates) take a
long time to run.
of course i do vacuum analyze frequently. i also use explain analyze on
queries.