8 checkpoint segments is very small; try 50 (a sketch of the change follows the quote below).
2011/10/4, Venkat Balaji :
> Hello,
>
> Sorry. I should have put some more details in the email.
>
> I have got a situation where in i see the production system is loaded with
> the checkpoints and at-least 1000+ buffers are being written for every
> che
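A minimal sketch of the suggested change in postgresql.conf; the 50 comes from the advice above, and the completion target is an illustrative companion setting:

checkpoint_segments = 50            # was 8; fewer, larger checkpoints
checkpoint_completion_target = 0.9  # assumption: spread checkpoint I/O over the interval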
It may be difficult, I think. When an unsorted recordset is stored in a
temp table, the number of records can be saved and used later; otherwise it
is unknown (see the sketch after the quote).
2011/9/30, Josh Berkus :
> All,
>
> Here's a case which it seems like we ought to be able to optimize for:
>
> datamart-# ORDER BY txn_timestamp DESC
> d
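One way to give the planner that saved record count today is to ANALYZE the temp table after filling it. A minimal sketch, with a hypothetical source table; txn_timestamp is taken from the quoted query:

CREATE TEMP TABLE tmp_txn AS
  SELECT * FROM transactions WHERE txn_timestamp > now() - interval '1 day';
ANALYZE tmp_txn;  -- stores the actual row count for the planner
SELECT * FROM tmp_txn ORDER BY txn_timestamp DESC;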
I think you could add a RAM disk as a tablespace for temporary tables
(sketch after the quote). This should work similarly to a larger work_mem.
2011/9/12, Robert Schnabel :
>
> On 9/12/2011 12:57 PM, Shaun Thomas wrote:
>> On 09/12/2011 12:47 PM, Andy Colson wrote:
>>
>>> work_mem is not the total a query can use. I believe eac
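A minimal sketch of the RAM-disk tablespace idea, assuming a RAM disk is already mounted at /mnt/ramdisk (the mount point and tablespace name are illustrative):

CREATE TABLESPACE ramtemp LOCATION '/mnt/ramdisk/pgtemp';
-- route temporary tables and sort/hash spill files there:
SET temp_tablespaces = 'ramtemp';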
For a 10 TB table scanned in 3 hours, the disks would need a transfer rate of about 1 GB/s for a seqscan (10 000 GB / 10 800 s ≈ 0.93 GB/s).
2011/9/11, Scott Marlowe :
> On Sun, Sep 11, 2011 at 6:35 AM, Igor Chudov wrote:
>> I have a server with about 18 TB of storage and 48 GB of RAM, and 12
>> CPU cores.
>
> 1 or 2 fast cores is plenty for what you're do
Look at the developer FAQ.
2011/9/9, Antonio Rodriges :
> Hello,
>
> Does anyone know whether PostgreSQL uses DMA (Direct Memory Access) in
> certain cases to improve networking IO performance?
>
> I mean "simple" query is which doesn't require any CPU processing, for ex
> SELECT column_a FROM table_b
I think the WAL segment count (checkpoint_segments) is too low; try 30.
2011/9/5, Andy Colson :
> On 09/05/2011 05:28 AM, Richard Shaw wrote:
>>
>> Hi,
>>
>> I have a database server that's part of a web stack and is experiencing
>> prolonged load average spikes of up to 400+ when the db is restarted and
>> first accessed
Hi.
The auto_explain module allows logging the plans and statistics of live queries (sketch after the quote). Try it.
2011/9/3, Gerhard Wohlgenannt :
> Dear list,
>
> we are encountering serious performance problems with our database.
> Queries which took around 100ms or less last week now take several seconds.
>
> The database runs
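A minimal sketch of enabling auto_explain in postgresql.conf; the threshold values are illustrative assumptions:

shared_preload_libraries = 'auto_explain'   # change requires restart
custom_variable_classes = 'auto_explain'    # needed before 9.2
auto_explain.log_min_duration = '500ms'     # log plans of queries slower than this
auto_explain.log_analyze = on               # include actual rows/timings (adds overhead)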
Try to use a single regular expression (see the sketch after the quote).
2011/8/10, Grzegorz Blinowski :
> Dear All,
>
> I have some problems with regexp queries performance - common sense tells me
> that my queries should run faster than they do.
>
> The database - table in question has 590 K records, table's size is 3.5GB. I
> am
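A minimal sketch of the idea, with a hypothetical table and patterns: several regexp matches can often be folded into one alternation so each row is scanned once:

-- instead of scanning the column once per pattern:
SELECT count(*) FROM docs WHERE body ~ 'foo' OR body ~ 'bar' OR body ~ 'baz';
-- scan it once with a single pattern:
SELECT count(*) FROM docs WHERE body ~ '(foo|bar|baz)';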
Hi. I have an idea: hints attached to a function. For example, instead of
WHERE table1.field1=table2.field2
write:
WHERE specificeq(table1.field1,table2.field2)
and attach the hints to the declaration of the specificeq function (sketch after the quote).
2011/2/23, Robert Haas :
> On Wed, Feb 16, 2011 at 4:22 PM, Bruce Momjian wrote:
>> I
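A minimal sketch of what such a declaration could look like; specificeq is hypothetical, and the only "hints" PostgreSQL actually accepts on a function today are planner attributes such as COST and volatility:

CREATE OR REPLACE FUNCTION specificeq(integer, integer)
RETURNS boolean AS $$
  SELECT $1 = $2;  -- same semantics as plain equality
$$ LANGUAGE sql IMMUTABLE
COST 1;  -- the declaration is where per-call hints would live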
Hi.
>I think you're missing the point, which is that all the hash work is
>just pure overhead in such a case (and it is most definitely not
>zero-cost overhead). You might as well just do a nestloop join.
>Hashing is only beneficial to the extent that it allows a smaller subset
>of the inner rela
2011/3/30, Lars Feistner :
> Hello Kevin,
>
>
> On 03/29/2011 09:28 PM, Kevin Grittner wrote:
>> Lars Feistner wrote:
>>
>>> The log tells me that certain update statements take sometimes
>>> about 3-10 minutes. But we are talking about updates on tables
>>> with 1000 to 1 rows and updates tha
Hi.
I did a small test of PL/pgSQL and Perl. The result is that Perl may be
2x faster in simple loops.
CREATE OR REPLACE FUNCTION _.test1() RETURNS void AS
$BODY$
declare
  i integer;
  j bigint := 0;
begin
  for i in 1..100 loop
    j := j + i;
  end loop;
end;
$BODY$ LANGUAGE plpgsql VOLATILE COST 100;
"Result
Try
  ORDER BY created_at+0
The +0 turns the sort key into an expression, so the planner can no longer
walk an index on created_at to satisfy the LIMIT (sketch after the quote).
On 1/6/11, Mike Broers wrote:
> Thanks for the assistance.
>
> Here is an explain analyze of the query with the problem limit:
>
> production=# explain analyze select * from landing_page.messages where
> ((messages.topic = E'x') AND (messages.processed = 'f')) ORDER BY
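A sketch of the rewrite applied to the quoted query; the sort direction and LIMIT are illustrative, since the original statement is truncated above:

SELECT * FROM landing_page.messages
WHERE messages.topic = E'x' AND messages.processed = 'f'
ORDER BY created_at+0
LIMIT 10;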
Hi.
I installed the auto_explain module for monitoring queries.
By the way, is there any tool to tune the planner automatically?
pasman
Thanks for the reply.
I tested random changes and the query runs fastest after:
set seq_page_cost = 0.1;
set random_page_cost = 0.1;
set cpu_operator_cost = 0.01;
pasman
Hello.
I have the table NumeryA with 3 indexes. The query below uses the wrong index.
SELECT
A."NKA",
A."NTA",
Min("PołączeniaMin") || ',' || Max("PołączeniaMax") AS "Biling",
Sum("Ile")::text AS "Ilość CDR",
R."LP"::text AS "Sprawa",
R."Osoba weryfikująca" AS "Osoba",
to_char(min("Warto
> UNION will remove all duplicates, so that the result additionally requires to
> be sorted.
>Right, to avoid the SORT and UNIQUE - operation you can use UNION ALL
By the way, maybe applying hashing to compute the UNION would be better (sketch below)?
pasman
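A minimal sketch of the difference, with hypothetical tables t_a and t_b:

-- UNION must de-duplicate, so the planner adds a Sort+Unique
-- (or, where supported, a HashAggregate) on top:
SELECT id FROM t_a UNION SELECT id FROM t_b;
-- UNION ALL simply concatenates the inputs, with no extra step:
SELECT id FROM t_a UNION ALL SELECT id FROM t_b;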
Hello.
I tested how the xmin and xmax values are distributed across pages
in my tables. Typically there are no more than 80 records
per page.
Maybe it's possible to compress the xmin and xmax values to
1 byte per record (plus a table of transactions per page)?
That would reduce the space when more than one record on a page
is from the same transaction (sketch below).
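A sketch of how such a distribution can be inspected; some_table is hypothetical, the ctid-to-point trick extracts the page number, and the text cast works around xid having no ordering for DISTINCT:

SELECT (ctid::text::point)[0]::int AS page,
       count(DISTINCT xmin::text)  AS distinct_txns,
       count(*)                    AS n_rows
FROM some_table
GROUP BY 1
ORDER BY 1
LIMIT 10;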
>FWIW, it isn't going to happen anyway, because the TID scan mechanism
>doesn't support scanning based on a join condition. That hasn't gotten
>to the top of the to-do list because the use case is almost vanishingly
>small. ctids generally aren't stable enough for it to be useful to
>store refere
Hello.
How do I use TID scans? The example below does not work :-(
A merge join is always used.
DROP TABLE IF EXISTS test1;
CREATE TABLE test1 as select i,hashint4(i)::text from
generate_series(1,1) as a(i);
DROP TABLE IF EXISTS test2;
CREATE TABLE test2 as select j,j%1 as i,null::tid as ct from
gener
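As the quoted reply above notes, the TID scan machinery only fires on a direct ctid comparison with a constant (or parameter), not on a join condition. A minimal sketch of the supported case; the block/offset value is illustrative:

EXPLAIN SELECT * FROM test1 WHERE ctid = '(0,1)'::tid;
-- the plan should show a Tid Scan node with TID Cond: (ctid = '(0,1)'::tid)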
Thanks for the reply.
The first query:
SELECT
R."Osoba weryfikująca" AS "Osoba",
R."LP"::text AS "Sprawa",
A."NKA",
A."NTA",
Sum(A."Ile")::text AS "Ilość CDR"
FROM
ONLY "NumeryA" A
LEFT JOIN
"Rejestr stacji do naprawy" R
ON
A."NKA" = R."Numer kierunkowy"
and A."NTA" like R."Numer stacji"
and
Hello.
I have a query which runs a bit slowly.
It runs on a desktop computer: AMD Athlon X2 2 GHz, Win XP SP2, 1 GB RAM.
Postgres 8.4.5 with some changes in the config:
shared_buffers = 200MB # min 128kB
# (change requires restart)
temp_buffers = 8MB