> I think all of this data cannot fit in shared_buffers, you might want
> to increase shared_buffers to a larger size (not 30GB but close to your
> data size) to see how it behaves.
When I use a shared_buffers setting larger than my data size, such as 10 GB,
results scale nearly as expected, at least for this
> Maybe you could help test this patch:
> http://www.postgresql.org/message-id/20131115194725.gg5...@awork2.anarazel.de
To which repository should I apply these patches? I tried the main repository,
the 9.3 stable branch, and the 9.3.1 source code, and in my trials at least
one of the patches failed to apply. What patch co
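For reference, one way to try such patches against a 9.3 checkout (a sketch; the clone URL is the community git mirror, the branch name is REL9_3_STABLE, and the patch file name is hypothetical):

    git clone git://git.postgresql.org/git/postgresql.git
    cd postgresql
    git checkout REL9_3_STABLE
    # dry runs report which hunks would fail without touching the tree
    git apply --check ../lwlock-scalability.patch
    patch -p1 --dry-run < ../lwlock-scalability.patch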
On Wed, Dec 4, 2013 at 9:19 AM, Metin Doslu wrote:
>
> Here are the results of "vmstat 1" while running 8 parallel TPC-H Simple
> (#6) queries: Although there is no need for I/O, "wa" fluctuates between 0
> and 1.
>
> procs ---memory-- ---swap-- -io --system-- -cpu--
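For reference, a sketch of collecting the same counters while the workload runs in another session:

    # one sample per second, 60 samples; watch "wa" (I/O wait) and "sy" (kernel time)
    vmstat 1 60 | tee vmstat.log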
On 2013-12-04 14:27:10 -0200, Claudio Freire wrote:
> On Wed, Dec 4, 2013 at 9:19 AM, Metin Doslu wrote:
> >
> > Here are the results of "vmstat 1" while running 8 parallel TPC-H Simple
> > (#6) queries: Although there is no need for I/O, "wa" fluctuates between 0
> > and 1.
> >
> > procs ---
> Notice the huge %sy
> What kind of VM are you using? HVM or paravirtual?
This instance is paravirtual.
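One way to check the virtualization mode from inside the guest (a sketch; the exact dmesg wording varies by kernel version):

    cat /sys/hypervisor/type                          # reports "xen" on EC2 Xen guests
    dmesg | grep -i -e 'booting paravirtualized' -e 'xen hvm'
    # paravirtual instances boot a PV kernel; HVM instances typically mention "Xen HVM"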
> I'd strongly suggest doing a "perf record -g -a ;
> perf report" run to check what's eating up the time.
Here is one example:
+ 38.87% swapper [kernel.kallsyms] [k] hypercall_page
+ 9.32% postgres [kernel.kallsyms] [k] hypercall_page
+ 6.80% postgres [kernel.kallsyms] [k] xen_
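A bounded variant of the suggested profiling run (a sketch; the 30-second window is arbitrary, and perf usually needs root or a relaxed kernel.perf_event_paranoid):

    # record call graphs on every CPU while the queries run, then inspect the report
    perf record -g -a -- sleep 30
    perf report --sort comm,dso,symbol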
On 2013-12-04 18:43:35 +0200, Metin Doslu wrote:
> > I'd strongly suggest doing a "perf record -g -a ;
> > perf report" run to check what's eating up the time.
>
> Here is one example:
>
> + 38.87% swapper [kernel.kallsyms] [k] hypercall_page
> + 9.32% postgres [kernel.kallsyms] [k] h
On Wed, Dec 4, 2013 at 1:54 PM, Andres Freund wrote:
> On 2013-12-04 18:43:35 +0200, Metin Doslu wrote:
>> > I'd strongly suggest doing a "perf record -g -a ;
>> > perf report" run to check what's eating up the time.
>>
>> Here is one example:
>>
>> + 38.87% swapper [kernel.kallsyms] [k] hyp
> You could try HVM. I've noticed it fare better under heavy CPU load,
> and it's not fully-HVM (it still uses paravirtualized network and
> I/O).
I already tried HVM (a cc2.8xlarge instance on Amazon EC2) and observed the
same problem.
On 2013-12-04 16:00:40 -0200, Claudio Freire wrote:
> On Wed, Dec 4, 2013 at 1:54 PM, Andres Freund wrote:
> > All that time is spent in your virtualization solution. One thing to try
> > is to look at the host system; sometimes profiles there can be more
> > meaningful.
>
> You cannot profile th
> Didn't follow the thread from the start. So, this is EC2? Have you
> checked, with a recent enough version of top or whatever, how much time
> is reported as "stolen"?
Yes, this is EC2. "stolen" is occasionally reported as 1, but mostly as 0.
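For reference, steal time appears as "st" in top's CPU line and as %steal in mpstat (a sketch):

    top -b -n 1 | grep -i '^%\?cpu'     # the last field, "st", is steal time
    mpstat -P ALL 1 5                   # per-CPU %steal, sampled five times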
Here is some extra information:
- When we increased NUM_BUFFER_PARTITIONS to 1024, this problem
disappeared for 8-core machines and came back with 16-core machines on
Amazon EC2. Would it be related to PostgreSQL's locking mechanism? (A
rebuild sketch follows this message.)
- I tried this test with 4-core machines including my perso
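NUM_BUFFER_PARTITIONS is a compile-time constant (defined in src/include/storage/lwlock.h in 9.3), so trying other values means rebuilding the server. A minimal sketch, assuming a source build:

    # edit src/include/storage/lwlock.h, e.g.
    #   #define NUM_BUFFER_PARTITIONS  1024
    make clean
    make -j8 && make install
    pg_ctl -D /path/to/data restart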
On 2013-12-04 20:19:55 +0200, Metin Doslu wrote:
> - When we increased NUM_BUFFER_PARTITIONS to 1024, this problem
> disappeared for 8-core machines and came back with 16-core machines on
> Amazon EC2. Would it be related to PostgreSQL's locking mechanism?
You could try my lwlock-scalability im
> You could try my lwlock-scalability improvement patches - for some
> workloads here, the improvements have been rather noticeable. Which
> version are you testing?
I'm testing with PostgreSQL 9.3.1.
We have several independent tables on a multi-core machine serving Select
queries. These tables fit into memory, and each Select query goes over
one table's pages sequentially. In this experiment, there are no indexes or
table joins.
When we send concurrent Select queries to these tables, query
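A sketch of the kind of workload being described, with hypothetical table and column names modeled on TPC-H query #6; each client scans its own table sequentially:

    # eight concurrent clients, each running a sequential-scan aggregate on its own table
    for i in $(seq 1 8); do
        psql -d tpch -c "SELECT sum(l_extendedprice * l_discount)
                         FROM lineitem_$i
                         WHERE l_shipdate >= date '1994-01-01'
                           AND l_shipdate <  date '1995-01-01';" &
    done
    wait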
On Wed, Dec 4, 2013 at 11:49 PM, Metin Doslu wrote:
> Here is some extra information:
>
> - When we increased NUM_BUFFER_PARTITIONS to 1024, this problem
> disappeared for 8-core machines and came back with 16-core machines on
> Amazon EC2. Would it be related to PostgreSQL's locking mechanism