On 10/04/2011 07:50 PM, Venkat Balaji wrote:
I was thinking of increasing checkpoint_segments to around 16 or 20.
I think 50 is a bit high.
Don't be afraid to increase that a lot. You could set it to 1000 and
that would probably turn out fine; checkpoints will still happen
every 5 minutes …
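For reference, these settings live in postgresql.conf; an illustrative combination along the lines being discussed (the exact values are assumptions, not recommendations from this thread):
checkpoint_segments = 50            # up from the default of 3; the original poster had 8
checkpoint_timeout = 5min           # default; time-based checkpoints still occur
checkpoint_completion_target = 0.9  # spread each checkpoint's writes over more of the interval
log_checkpoints = on                # log when checkpoints happen, and why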
I was thinking of increasing checkpoint_segments to around 16 or 20.
I think 50 is a bit high.
Greg,
Sure. I would collect the info from pg_stat_bgwriter at regular intervals.
As we have a lot of transactions going on, I am thinking of collecting the info
every 6 or 8 hrs.
Thanks
VB
On Wed, Oct 5 …
Nowak Michał wrote:
> Lowering random_page_cost didn't help -- I've tried values 2.0 and
> 1.5.
First off, I don't remember you saying how much RAM is on the
system, but be sure to set effective_cache_size to the sum of your
shared_buffers and OS cache. I've often found that the optimizer
under…
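To make that concrete (the numbers are purely illustrative assumptions, not figures from this thread): on a machine with, say, 16 GB of RAM and shared_buffers = 4GB, where the OS caches most of the rest of the database, the settings might look like:
shared_buffers = 4GB
effective_cache_size = 12GB   # roughly shared_buffers plus the OS cache available to the database
random_page_cost = 2.0        # one of the values already tried above; lower values favour index scans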
On Tue, Oct 4, 2011 at 4:32 PM, Greg Smith wrote:
> On 10/04/2011 03:50 AM, Venkat Balaji wrote:
>>
>> I had a look at the pg_stat_bgwriter as well.
>
> Try saving it like this instead:
>
> select now(),* from pg_stat_bgwriter;
>
> And collect two data points, spaced a day or more apart. That gives …
On 10/04/2011 03:50 AM, Venkat Balaji wrote:
I had a look at the pg_stat_bgwriter as well.
Try saving it like this instead:
select now(),* from pg_stat_bgwriter;
And collect two data points, spaced a day or more apart. That gives a
lot more information about the rate at which things are actually …
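A sketch of one way to keep those data points for later comparison (the table name bgwriter_snapshot is made up for illustration; the pg_stat_bgwriter counters are cumulative, so the difference between two snapshots gives the activity over the interval):
create table bgwriter_snapshot as
  select now() as snapped_at, * from pg_stat_bgwriter;   -- first snapshot
insert into bgwriter_snapshot
  select now(), * from pg_stat_bgwriter;                 -- later snapshots, e.g. from cron
select max(snapped_at) - min(snapped_at)                 as elapsed,
       max(checkpoints_timed)  - min(checkpoints_timed)  as timed_checkpoints,
       max(checkpoints_req)    - min(checkpoints_req)    as requested_checkpoints,
       max(buffers_checkpoint) - min(buffers_checkpoint) as buffers_written_at_checkpoints
from (select * from bgwriter_snapshot order by snapped_at desc limit 2) s;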
Hi,
On 4 October 2011 21:25, Venkat Balaji wrote:
> I got hold of two catalog tables "pg_stats" and "pg_class".
> Columns "avg_width" and "n_distinct" in pg_stats get me a lot of useful
> information regarding column values and the size of the column.
> Can someone help me understand when the values in these …
On Tue, Oct 4, 2011 at 2:16 AM, Gavin Flower
wrote:
> On 01/10/11 01:23, Vitalii Tymchyshyn wrote:
>>
>> Since you are using except and not except all, you are not looking at
>> arrays with duplicates.
>> For this case the following function was the fastest for me:
>>
>> create or replace function array_except2 …
On Tue, Oct 4, 2011 at 4:06 PM, Anssi Kääriäinen
wrote:
> On 10/04/2011 04:27 PM, Robert Klemme wrote:
>>
>> On Tue, Oct 4, 2011 at 11:39 AM, Anssi Kääriäinen
>> wrote:
>>>
>>> I have the following setup:
>>>
>>> create table test(id integer, seq integer);
>>> insert into test select generate_series(0, 100), generate_series(0, 1000); …
On 10/04/2011 05:52 PM, Robert Klemme wrote:
But then why do you require using the second index column in the first
place? If the data set is small then the query is likely fast if the
selection via id can use any index.
I mean the fetched dataset is not large; I didn't mean the dataset in
total is …
On 10/04/2011 05:36 PM, Tom Lane wrote:
The cost estimates I get are 806 for bitmap scan and sort, 2097 for
seqscan and sort, 4890 for indexscan without sort. It *can* use the
index for that query ... it just doesn't think it's a good idea. It's
probably right, too. At least, the actual runtime …
Anssi Kääriäinen writes:
> I have the following setup:
> create table test(id integer, seq integer);
> insert into test select generate_series(0, 100), generate_series(0, 1000);
> create unique index test_idx on test(id, seq);
> analyze test;
> Now I try to fetch the latest 5 values per id, ordered by seq, from the table …
8 checkpoint segments is very small, try 50
2011/10/4, Venkat Balaji :
> Hello,
>
> Sorry. I should have put some more details in the email.
>
> I have got a situation wherein I see the production system loaded with
> checkpoints, and at least 1000+ buffers are being written for every
> checkpoint …
On 10/04/2011 04:27 PM, Robert Klemme wrote:
On Tue, Oct 4, 2011 at 11:39 AM, Anssi Kääriäinen
wrote:
I have the following setup:
create table test(id integer, seq integer);
insert into test select generate_series(0, 100), generate_series(0, 1000);
create unique index test_idx on test(id, seq); …
On Tue, Oct 4, 2011 at 11:39 AM, Anssi Kääriäinen
wrote:
> I have the following setup:
>
> create table test(id integer, seq integer);
> insert into test select generate_series(0, 100), generate_series(0, 1000);
> create unique index test_idx on test(id, seq);
> analyze test;
>
> Now I try to fetch …
On Mon, Oct 3, 2011 at 11:48 AM, Soporte @ TEKSOL S.A.
wrote:
> Hi,
>
> I need help understanding an issue on a production database with a select
> that takes more time than expected.
> 1- On a development database I ran the query (select) and I can see in
> pgAdmin's Explain Analyze that it uses all the indexes and primary keys defined …
On Mon, Oct 03, 2011 at 02:48:10PM -0300, Soporte @ TEKSOL S.A. wrote:
> Hi,
>
>
>
> I need help understanding an issue on a production database with a select
> that takes more time than expected.
>
> 1- On a development database I ran the query (select) and I can see in
> pgAdmin's Explain Analyze …
On 01/10/11 01:23, Vitalii Tymchyshyn wrote:
Since you are using except and not except all, you are not looking at
arrays with duplicates.
For this case the following function was the fastest for me:
create or replace function array_except2(anyarray, anyarray) returns
anyarray as $$
select ARRAY( ( sele…
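The function body is cut off in this preview. A sketch of the usual unnest/EXCEPT approach (not necessarily the exact function that was posted) looks like this:
create or replace function array_except2(anyarray, anyarray) returns anyarray as $$
  select ARRAY(
    (select unnest($1))
    except
    (select unnest($2))
  );
$$ language sql strict immutable;
-- example: elements of the first array that are not in the second
select array_except2(ARRAY[1,2,3,4], ARRAY[2,4]);   -- {1,3}; EXCEPT removes duplicates and does not guarantee order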
Hi,
I need help understanding an issue on a production database with a select
that takes more time than expected.
1- On a development database I ran the query (select) and I can see in
pgAdmin's Explain Analyze that it uses all the indexes and primary keys defined. The dev DB
has almost 10% of the production …
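The follow-up replies in this thread are truncated; regardless of what was suggested there, a generic first step in a dev-versus-production comparison is to capture EXPLAIN ANALYZE output on both systems and check that planner statistics are current (a sketch; my_table is an illustrative table name):
explain analyze select 1;    -- replace "select 1" with the slow query, on both databases
analyze my_table;            -- refresh optimizer statistics
select relname, last_analyze, last_autoanalyze, n_live_tup
from pg_stat_user_tables
where relname = 'my_table';  -- when were stats last gathered, and roughly how many live rows?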
Thanks, Heikki!
Regards,
VB
On Tue, Oct 4, 2011 at 4:38 PM, Heikki Linnakangas <heikki.linnakan...@enterprisedb.com> wrote:
> On 04.10.2011 13:50, Venkat Balaji wrote:
>
>> I have got a situation wherein I see the production system loaded with
>> checkpoints, and at least 1000+ buffers are …
On 04.10.2011 13:50, Venkat Balaji wrote:
I have got a situation wherein I see the production system loaded with
checkpoints, and at least 1000+ buffers are being written for every
checkpoint.
1000 buffers isn't very much, that's only 8 MB, so that's not alarming in
itself.
I am thinking …
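A quick check of that figure, assuming the default 8 kB block size: 1000 buffers × 8 kB per buffer = 8,000 kB ≈ 8 MB.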
Hello,
Sorry. I should have put some more details in the email.
I have got a situation wherein I see the production system loaded with
checkpoints, and at least 1000+ buffers are being written for every
checkpoint.
Checkpoints occur every 3 to 4 minutes and every checkpoint takes 150
seconds …
Hello Everyone,
Generally when it comes to query performance, I check how vacuuming and
statistics collection are performed on the tables and indexes hit by the query.
Apart from the above, I check the code logic (for any bad joins) and column
statistics as well.
I got hold of two catalog tables, pg_stats and pg_class …
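For example, the per-column statistics from pg_stats and the per-table size estimates from pg_class can be pulled like this (a sketch; the table name my_table is illustrative):
select attname, null_frac, avg_width, n_distinct, correlation
from pg_stats
where schemaname = 'public' and tablename = 'my_table';   -- per-column stats gathered by ANALYZE
select relname, relpages, reltuples,
       pg_size_pretty(pg_total_relation_size(oid)) as total_size
from pg_class
where relname = 'my_table';                                -- per-table page/row estimates and size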
I have the following setup:
create table test(id integer, seq integer);
insert into test select generate_series(0, 100), generate_series(0, 1000);
create unique index test_idx on test(id, seq);
analyze test;
Now I try to fetch the latest 5 values per id, ordered by seq from the
table:
select …
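The query itself is cut off in this preview; one common way to express "latest 5 values per id" (a sketch, not necessarily the exact query from the original post) uses a window function:
select id, seq
from (
    select id, seq,
           row_number() over (partition by id order by seq desc) as rn
    from test
) ranked
where rn <= 5
order by id, seq desc;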
Lowering random_page_cost didn't help -- I've tried values 2.0 and 1.5.
Then I tried the "order by id - 1" hack Marcin Mańk proposed...
a9-dev=> create index foo on records(source_id, (id - 1));
CREATE INDEX
a9-dev=> explain analyze select * from records where source_id =
'http://www.wbc.poznan.pl/dl…
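The EXPLAIN ANALYZE output is truncated above. The point of the hack is to stop the planner from satisfying an ORDER BY with the plain index on id and let it use the new (source_id, (id - 1)) index instead; the query then has to order by the same expression, roughly like this (a sketch; the URL value and the LIMIT are placeholders, not from the original message):
explain analyze
select *
from records
where source_id = 'http://...'   -- full URL truncated in the original
order by id - 1
limit 100;                       -- limit value is illustrative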
2011/10/4 Nowak Michał :
>
> a9-dev=> select attname, null_frac, avg_width, n_distinct, correlation from
> pg_stats where tablename = 'records';
> attname | null_frac | avg_width | n_distinct | correlation
> ---------+-----------+-----------+------------+-------------
> …
On 3 Oct 2011, at 17:12, Tom Lane wrote:
> I'm thinking it probably sees the pkey index as cheaper because that's
> highly correlated with the physical order of the table. (It would be
> useful to see pg_stats.correlation for these columns.) With a
> sufficiently unselective …