Hi All;
I'm looking for tips / ideas for performance tuning some specific queries.
These are generally large tables on a highly active OLTP system
(100,000 - 200,000 plus queries per day)
First off, any thoughts on tuning inserts into large tables? I have a large
table with an insert like thi
"Greg Jaman" <[EMAIL PROTECTED]> writes:
> I have a problem with partitioning and I'm wondering if anyone can provide
> some insight. I'm trying to find the max value of a column across multiple
> partitions. The query against the partition set is quite slow while queries
> against child partit
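A workaround that comes up for this kind of problem (table and column names below are invented for illustration, and each child partition has to be listed explicitly) is to take the max from each child separately, so each child can use its own index, and then take the greatest of those:

  SELECT max(ts) FROM (
      SELECT max(ts) AS ts FROM measurements_2008_10
      UNION ALL
      SELECT max(ts) AS ts FROM measurements_2008_11
  ) AS per_child;

Assuming each child has an index on the column, each inner max() becomes a cheap index lookup instead of a scan over the whole partition set.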
Swappiness optimization is going to vary. Definitely test on your own.
For a bulk load database with a large page cache, swappiness = 60 (the default) is
_GUARANTEED_ to force the OS to swap out some of Postgres while in heavy use.
This is heavily dependent on the page cache size, work_mem size, an
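For reference, a minimal sketch of how swappiness is usually changed (standard Linux paths; whether 0 or 1 is right depends on the workload, as noted above):

  # take effect immediately
  echo 1 > /proc/sys/vm/swappiness
  # persist across reboots: add to /etc/sysctl.conf, then run sysctl -p
  vm.swappiness = 1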
>>> Scott Carey <[EMAIL PROTECTED]> wrote:
> Set swappiness to 0 or 1.
We recently converted all 72 remote county databases from 8.2.5 to
8.3.4. In preparation we ran a test conversion of a large county over
and over with different settings to see what got us the best
performance. Setting swap
Tuning for bulk loading:
Make sure the Linux kernel parameters in /proc/sys/vm related to the page cache
are set well.
Set swappiness to 0 or 1.
Make sure you understand and configure /proc/sys/vm/dirty_background_ratio
and /proc/sys/vm/dirty_ratio well.
With enough RAM the default on some kernel
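As a rough illustration only (the right numbers depend on how much RAM you have and how fast the storage can absorb writes), a bulk-load box might end up with something along these lines in /etc/sysctl.conf, applied with sysctl -p:

  vm.swappiness = 1
  vm.dirty_background_ratio = 1
  vm.dirty_ratio = 10

Lower dirty ratios make the kernel start writing out dirty page-cache data earlier, instead of letting many gigabytes pile up and then flushing them all at once.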
"Ryan Hansen" <[EMAIL PROTECTED]> writes:
> I have a fairly robust server running Ubuntu Hardy Heron, 24 GB of memory,
> and I've tried to commit half the memory to PG's shared buffer, but it seems
> to fail. I'm setting the kernel shared memory accordingly using sysctl,
> which seems to work fine
Ryan Hansen wrote:
>
> Hey all,
>
> This may be more of a Linux question than a PG question, but I’m
> wondering if any of you have successfully allocated more than 8 GB of
> memory to PG before.
>
> I have a fairly robust server running Ubuntu Hardy Heron, 24 GB of
> memory, and I’ve tried to comm
> I used 1000 since the docs say the max value is 1000.
> The rid table contains 3.5 million rows, will grow by 1 million rows per
> year, and is updated frequently, mostly by adding.
> Is it OK to leave the
> SET STATISTICS 1000;
> setting for this column on this table, or should I try to decrease it?
On Wednesday 26 November 2008, "Ryan Hansen"
<[EMAIL PROTECTED]> wrote:
> This may be more of a Linux question than a PG question, but I'm
> wondering if any of you have successfully allocated more than 8 GB of
> memory to PG before.
>
CentOS 5, 24GB shared_buffers on one server here. No problems
Hey all,
This may be more of a Linux question than a PG question, but I'm wondering
if any of you have successfully allocated more than 8 GB of memory to PG
before.
I have a fairly robust server running Ubuntu Hardy Heron, 24 GB of memory,
and I've tried to commit half the memory to PG's sh
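For what it's worth, the usual ingredients for a large shared_buffers setting on a 24 GB Linux box look roughly like this (the numbers are illustrative, not recommendations, and assume a 4 kB page size):

  # /etc/sysctl.conf, applied with sysctl -p
  kernel.shmmax = 13958643712    # max single shared memory segment, in bytes (13 GB here)
  kernel.shmall = 3407872        # total shared memory allowed, in pages (shmmax / 4096)

  # postgresql.conf
  shared_buffers = 12GB

shmmax has to be somewhat larger than shared_buffers because PostgreSQL's shared segment also holds structures beyond the buffer pool.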
Andrus wrote:
> Richard,
>
>>> Results are provided at the bottom of the message to which you replied.
>>
>> No - the explains there were contrasting a date test BETWEEN versus =.
>
> I changed the rid.toode statistics target to 100:
>
> ALTER TABLE firma2.rid ALTER COLUMN toode SET STATISTICS 100;
> ana
Richard,
Results are provided at the bottom of the message to which you replied.
No - the explains there were contrasting a date test BETWEEN versus =.
I changed the rid.toode statistics target to 100:
ALTER TABLE firma2.rid ALTER COLUMN toode SET STATISTICS 100;
analyze firma2.rid;
Analyze takes 3
Is it OK to run
ALTER TABLE rid ALTER COLUMN toode SET STATISTICS 1000
in the prod database, or should I try to decrease 1000 to a smaller value?
rid is a big, growing table and is changed frequently, mostly by adding
rows.
pgAdmin shows that default_statistics_target has its default value of 10 in
p
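A minimal way to try it on one column without raising the database-wide default (table and column names from the thread; the pg_stats check is just an illustration):

  ALTER TABLE rid ALTER COLUMN toode SET STATISTICS 1000;
  ANALYZE rid;
  -- see what the planner now believes about the column
  SELECT attname, n_distinct
    FROM pg_stats
   WHERE tablename = 'rid' AND attname = 'toode';

The main cost of a higher target is a longer ANALYZE and slightly more planning work per query, so it is reasonable to try 1000 on just this column and back it off if ANALYZE time becomes a problem.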
Andrus wrote:
> Richard,
>
>> And the results were?
>
> Results are provided at the bottom of the message to which you replied.
No - the explains there were contrasting a date test BETWEEN versus =.
>> One problem at a time. Let's get the pattern-matching speed problems on
>> your live server sorte
Hello,
I have the following common situation:
Category IDs: about 50 000
Document IDs: about 3 000 000
Many to many relationship.
A document id has a relation with anywhere from 10 up to 1000 category ids.
One query: given an input set of document ids, return the set of category ids
having a relation with the input ids. (v
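A minimal sketch of one common way to model and query this (table, column, and index names are invented for illustration):

  CREATE TABLE doc_category (
      doc_id int NOT NULL,
      cat_id int NOT NULL,
      PRIMARY KEY (doc_id, cat_id)
  );

  -- given a set of document ids, return the distinct category ids they relate to
  SELECT DISTINCT cat_id
    FROM doc_category
   WHERE doc_id = ANY (ARRAY[1001, 1002, 1003]);

The composite primary key gives the planner an index ordered on (doc_id, cat_id), so each input document id becomes a cheap index range scan.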
Richard,
And the results were?
Results are provided at the bottom of the message to which you replied.
One problem at a time. Let's get the pattern-matching speed problems on
your live server sorted, then we can look at different queries.
The first message in this thread described the issue with q
Scott,
My first thought on the query with a pattern being faster than the query
with an exact value is that the planner does not have good enough
statistics on that column. Without looking at the explain plans further, I
would suggest trying something simple. The fact that it is faster on
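Something simple along those lines is to compare the two forms with EXPLAIN ANALYZE and look at estimated versus actual row counts (table and column names are from the thread, the literal value is a placeholder):

  EXPLAIN ANALYZE SELECT count(*) FROM rid WHERE toode = 'SOMEVALUE';
  EXPLAIN ANALYZE SELECT count(*) FROM rid WHERE toode LIKE 'SOMEVALUE%';

If the planner's row estimate for the equality test is far from the actual count, raising that column's statistics target and re-running ANALYZE is the usual next step.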
Andrus wrote:
> Richard and Mario,
>
>> You can't use xxx_pattern_ops indexes for non-pattern tests.
>
> I missed the regular index. Sorry for that. Now the issue with the testcase
> is solved. Thank you very much.
>
> I researched the issue in the live 8.1.4 db a bit more.
> Performed vacuum and whole db reindex.
Richard and Mario,
You can't use xxx_pattern_ops indexes for non-pattern tests.
I missed the regular index. Sorry for that. Now the issue with the testcase is solved.
Thank you very much.
I researched the issue in the live 8.1.4 db a bit more.
Performed vacuum and whole db reindex.
Tried several times to run t
Chavdar Kopoev wrote:
> I want to use postgresql as a data store. Tried several data structures,
> testing btree, gin, gist indices over them, but the best performance achieved,
> for a dataset 10 times smaller (10k cat ids, 100k doc ids, 1m relations), is
> more than 5 times slower.
Can you post
Hello,
I have the following common situation:
Category IDs: about 50 000
Document IDs: about 3 000 000
Many to many relationship.
A document id has a relation with anywhere from 10 up to 1000 category ids.
One query: given an input set of document ids, return the set of category ids
having a relation with the input ids. (v
Andrus wrote:
Richard,
These are the same but the times are different. I'd be very surprised if
you can reproduce these times reliably.
I re-tried again today and got the same results: in the production database
the pattern query is many times slower than the equality query.
toode and rid base contain o
Andrus wrote:
>
> So the pattern index is always 10 .. 20 times (!) faster.
> No idea why.
Because you don't have a normal index on the product_id column? You
can't use xxx_pattern_ops indexes for non-pattern tests.
> Test data creation script:
The only change to the script was the obvious char(nn)
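For reference, a minimal sketch of the two-index setup being described (table and index names are invented, and the column is assumed to be text here; xxx_pattern_ops only matters when the database locale is not C):

  -- ordinary comparisons: product_id = '...', <, >, ORDER BY
  CREATE INDEX products_product_id_idx ON products (product_id);
  -- left-anchored pattern tests: product_id LIKE 'ABC%'
  CREATE INDEX products_product_id_like_idx ON products (product_id text_pattern_ops);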