Yes, I ran
VACUUM VERBOSE ANALYZE sym_data;
VACUUM VERBOSE ANALYZE sym_data_gap;
after creating the index.
If I remove the ORDER BY, the query runs faster.
explain analyze select d.data_id, d.table_name, d.event_type, d.row_data,
d.pk_data, d.old_data, d.create_time, d.trigger_hist_id, d.channel_id,
d.trans
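If the ORDER BY is on data_id while the filter is on channel_id, a composite index can return rows already sorted so the planner skips the sort step. A minimal sketch, with the index name made up and the columns assumed from the query above:

CREATE INDEX idx_d_channel_data
ON sym_data
USING btree
(channel_id, data_id);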
Tuan --
>
> Thanks for your help.
> I created indexes on channel_id and data_id as you suggested.
...
<...>
>
> explain analyze select d.data_id, d.table_name, d.event_type, d.row_data,
> d.pk_data, d.old_data, d.create_time, d.trigger_hist_id, d.channel_id,
> d.transaction_id, d.source_node_
Thanks for your help.
I created indexes on channel_id and data_id as you suggested.
-- Index: idx_d_channel_id2
-- DROP INDEX idx_d_channel_id2;
CREATE INDEX idx_d_channel_id2
ON sym_data
USING btree
(channel_id);
-- Index: idx_d_channel_id3
-- DROP INDEX idx_d_channel_id3;
CREATE INDEX id
tuanhoanganh wrote:
> I have postgresql 9.0.1
http://www.postgresql.org/support/versioning
> 6GB ram
> work_mem = 2097151
I think that has the potential to push you into swapping:
cc=> set work_mem = 2097151;
SET
cc=> show work_mem;
 work_mem
-----------
 2097151kB
(1 row)
That's 2GB
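Note that work_mem is a per-sort (or per-hash) limit and each backend can allocate it for every such operation in a query, so a few concurrent sorts at 2GB apiece can exceed 6GB of RAM. A more conservative session-level override might look like this (the value here is only illustrative):

SET work_mem = '64MB';  -- illustrative value, not a recommendation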
On 08/05/2011 09:58 AM, Kevin Grittner wrote:
What I'm saying is that if processes are blocked waiting for disk
they are not going to be using CPU, and there is room for that many
additional processes to be useful, as the CPUs and other drives
would otherwise be sitting idle.
Haha. The way you
I have PostgreSQL 9.0.1 on Windows 2003 Enterprise with 6GB RAM and 4 SATA disks in
RAID 10.
I am running SymmetricDS for replication over a WAN. But yesterday there was a
big problem: I updated a lot of rows, and the SymmetricDS gap-data query ran
very, very slowly.
Here is my postgresql.conf to tune PostgreSQL
"Kevin Grittner" wrote:
> which would typically be running with 26 blocked waiting
> on a read for a cache miss,
I meant 24 there.
-Kevin
Shaun Thomas wrote:
> On 08/05/2011 09:00 AM, Kevin Grittner wrote:
>
>> optimal pool size = ((2 * actual core count) + effective spindle
>> count)
>
> How does that work? If your database fits in memory, your optimal
> TPS is only constrained by CPU. Any fetches from disk reduce your
> throughp
On 08/05/2011 09:00 AM, Kevin Grittner wrote:
optimal pool size = ((2 * actual core count) + effective spindle
count)
How does that work? If your database fits in memory, your optimal TPS is
only constrained by CPU. Any fetches from disk reduce your throughput
from IO Waits. How do you accou
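As a worked example of the formula, using numbers mentioned elsewhere in the thread (a dual X5675 gives 12 cores; assume roughly 4 effective spindles for the 4-disk RAID 10):

optimal pool size = (2 * 12) + 4 = 28 connections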
Shaun Thomas wrote:
> So with a dual X5675, that's 12 cores. My numbers peaked at
> 24-concurrency. At that concurrency, HT was 60% faster than
> non-HT. Sorry if I mixed my terminology. :)
No problem -- I appreciate the information. I just wanted to be
sure I was understanding it correctly
On 05.08.11 at 11:44, Robert Ayrapetyan wrote:
Yes, you are right. Performance became even more awful.
Can some techniques from pg_bulkload be implemented in postgres core?
Current performance is not suitable for any enterprise-wide production system.
BTW: I was thinking this morning about index
On 7/29/11, lars hofhansl wrote:
> Thanks Pavan!
>
>
> I think the most important points are still that:
> 1. The WAL write should be happening asynchronously (if that is possible)
I think it is agreed that this is a "todo"; but since you reported
that turning off synchronous commit did not impro
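For reference, the setting being discussed can be changed without a restart; a minimal sketch:

SET synchronous_commit = off;  -- per session; or set it in postgresql.conf for all sessions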
In my tests it depends greatly on whether index writes are random or sequential.
My test time goes down from a few hours to seconds if I append to the end of the
index.
To me, the best comparison would be to make two int4 columns with the same data
as the int8 column, build two indexes, then perform the test. My bet it wil
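A sketch of one way to set up that comparison (table, column, and index names are made up, and the split of the int8 data into two int4 columns is assumed):

-- baseline: one int8 column with one index
CREATE TABLE t_int8 (v int8);
CREATE INDEX t_int8_idx ON t_int8 USING btree (v);

-- comparison: two int4 columns carrying the same data, one index each
CREATE TABLE t_int4 (v_a int4, v_b int4);
CREATE INDEX t_int4_a_idx ON t_int4 USING btree (v_a);
CREATE INDEX t_int4_b_idx ON t_int4 USING btree (v_b);

-- load both tables with the same values and compare load times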