.pgh.pa.us]
Sent: Friday, October 12, 2012 05:39 p.m.
To: Anibal David Acosta
CC: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Do cast affects index usage?
"Anibal David Acosta" writes:
> I have a table with a column of type timestamp with time zone, this
> c
I have a table with a column of type timestamp with time zone, and this column
has an index.
If I do a select like this:
select * from mytable where cast(my_date as timestamp without time zone) >
'2012-10-12 20:00:00'
will this query use the index on the my_date column?
Thanks
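A note on the question above: in general the planner cannot match a plain index on my_date to a predicate written over cast(my_date as timestamp without time zone), because the indexed column and the compared expression no longer line up. A minimal sketch of two workarounds, keeping the table and column names from the question (the index name is made up):

    -- 1) Compare the raw column against a timestamptz value, so the existing
    --    index on my_date is usable as-is.
    SELECT * FROM mytable
    WHERE my_date > timestamptz '2012-10-12 20:00:00';

    -- 2) Index the expression the query uses. The bare cast to timestamp
    --    depends on the session time zone and is not immutable, so an
    --    AT TIME ZONE form with an explicit zone is indexed instead; the
    --    query must then use exactly the same expression.
    CREATE INDEX mytable_my_date_utc_idx
        ON mytable ((my_date AT TIME ZONE 'UTC'));
    SELECT * FROM mytable
    WHERE (my_date AT TIME ZONE 'UTC') > '2012-10-12 20:00:00';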
-Original Message-
From: Claudio Freire [mailto:klaussfre...@gmail.com]
Sent: Friday, October 5, 2012 10:27 a.m.
To: Jeff Janes
CC: Anibal David Acosta; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] how to avoid deadlock on masive update with multiples
delete
On Thu, Oct 4, 2012 at 1
Hi,
I have a table with about 10 million records; this table is updated and
inserted into very often during the day (approx. 200 per second). At night
the activity is a lot lower, so in the first seconds of the day (00:00:01) a
batch process updates some columns (used as counters) of this table
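One common way to keep a batch like that from deadlocking against the daytime writers, assuming the deadlocks come from row locks being taken in different orders, is to lock the target rows in a fixed order before updating them. A rough sketch with hypothetical column names (is_counter_row, counter):

    BEGIN;
    -- Take the row locks in primary-key order, so no two sessions can end up
    -- holding them in opposite orders.
    SELECT id FROM mytable
    WHERE is_counter_row
    ORDER BY id
    FOR UPDATE;

    UPDATE mytable
    SET counter = 0
    WHERE is_counter_row;
    COMMIT;

Splitting the batch into several smaller transactions also shortens the window in which it can conflict with the ~200 writes per second.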
Using explain analyze I saw that many of my queries run really fast, less
than 1 millisecond; for example, the analyze output of a simple query over a
table with 5 million records returns "Total runtime: 0.078 ms"
But the real time is a lot more, about 15 ms; in fact pgAdmin shows this
v
on for the server
condition
Thanks!
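Worth noting when comparing those numbers: the "Total runtime" printed by EXPLAIN ANALYZE covers executor time only; parse/plan time, the network round trip and the client's rendering of the result sit on top of it, which can easily account for the missing ~15 ms. A quick way to see the client-side elapsed time from psql (the query is a hypothetical stand-in):

    \timing on
    EXPLAIN ANALYZE SELECT * FROM mytable WHERE id = 42;
    -- "Total runtime" is the executor time; the "Time:" line added by \timing
    -- is what the client actually waited, including planning and the network.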
-Original Message-
From: Kevin Grittner [mailto:kevin.gritt...@wicourts.gov]
Sent: Thursday, August 16, 2012 04:52 p.m.
To: Anibal David Acosta; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] best practice to avoid table bloat?
"Anib
Hi,
if I have a table from which about 8 million rows (the table has maybe 9
million) are deleted daily at night, is it recommended to do a vacuum analyze
after the delete completes, or can I leave this job to autovacuum?
This table is very active during the day but less active at night
I think that
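If the nightly job runs the pass itself right after the delete, the dead space becomes reusable and the statistics are fresh before the daytime load starts, instead of waiting for autovacuum's thresholds to trigger. A minimal sketch, with the table name as a placeholder:

    -- Last step of the nightly batch:
    VACUUM ANALYZE mytable;
    -- Plain VACUUM does not shrink the files on disk; it marks the freed space
    -- as reusable so the next day's inserts fill it instead of growing the table.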
More information.
After many "WARNING: pgstat wait timeout" in the log also appear "ERROR:
canceling autovacuum task "
From: Anibal David Acosta [mailto:a...@devshock.com]
Sent: Friday, July 27, 2012 06:04 p.m.
To: pgsql-performance@postgresq
In my postgres log I see a lot of warnings like this:
WARNING: pgstat wait timeout
Approximately every 10 seconds since yesterday, after one year of working
without any warning.
I have postgres 9.0.3 on Windows Server 2008 R2.
I have only one big table with approx. 1,300,000,000 (yes 1,300
Hi,
yesterday I deleted about 200 million rows from a table (about 150GB of data);
after the delete completed, the autovacuum process started.
The autovacuum has been running for about 11 hours but no space has been released.
Autovacuum parameters have their default values in postgresql.conf
The postgres version is
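That behaviour is expected: plain (auto)vacuum marks the space left by the deleted rows as reusable inside the table's files, but it rarely shrinks the files themselves. Giving the ~150GB back to the operating system requires rewriting the table, roughly as below (names are placeholders; both commands take an exclusive lock and need enough free disk for the new copy):

    VACUUM FULL VERBOSE mytable;           -- rewrite the table compactly
    -- or, rewriting in the order of an existing index:
    CLUSTER mytable USING mytable_pkey;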
...@ringerc.id.au]
Sent: Monday, December 12, 2011 11:45 a.m.
To: Anibal David Acosta
CC: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] autovacuum, exclude table
Top-posting because this is context free:
You need to provide more info for anybody to help you. Are the tables
I have a couple of tables with about 400 million records, increasing by about
5 million per day.
I think that disabling autovacuum on those tables and running a daily manual
vacuum (at some idle hour) would be better.
Am I right?
Is it possible to exclude some tables from autovacuum?
Tha
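Per-table control does exist: autovacuum can be switched off in a table's storage parameters, and a scheduled job can then run the manual pass in the idle hour. A sketch with a placeholder table name (the toast line only matters if the table has TOAST-able columns; anti-wraparound vacuums will still run when needed):

    ALTER TABLE big_table
        SET (autovacuum_enabled = false,
             toast.autovacuum_enabled = false);

    -- from the nightly job:
    VACUUM ANALYZE big_table;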
Hello, I have a postgres 9.0.2 installation.
Everything works fine, but at some hours of the day I get several timeouts in my
application (my application waits X seconds before throwing a timeout).
Normally these are not hours of intensive use, so I think that autovacuum
could be the problem.
Is there any l
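One way to confirm or rule out autovacuum before changing anything is to look at what is running while the application is timing out; the workers show up in pg_stat_activity (the column names below are the 9.0 ones, renamed in later releases):

    SELECT datname, procpid, query_start, current_query
    FROM pg_stat_activity
    WHERE current_query LIKE 'autovacuum:%';

Setting log_autovacuum_min_duration = 0 in postgresql.conf also logs every autovacuum run with its timings, which can then be matched against the timeout times.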
n Grittner [mailto:kevin.gritt...@wicourts.gov]
Sent: Monday, November 14, 2011 02:27 p.m.
To: 'Richard Huxton'; Anibal David Acosta; 'Sergey Konoplev'
CC: pgsql-performance@postgresql.org; 'Stephen Frost'
Subject: Re: [PERFORM] unlogged tables
"Anibal D
Sent: Monday, November 14, 2011 07:39 a.m.
To: Richard Huxton
CC: Stephen Frost; Anibal David Acosta; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] unlogged tables
On 14 November 2011 14:17, Richard Huxton wrote:
> On 14/11/11 10:08, Sergey Konoplev wrote:
>>
>> On 14 November 2
Hello, just for clarification.
Unlogged tables are not in-memory tables, are they?
If we stop the postgres server (normal stop) and start it again, does all the
information in unlogged tables still remain?
So, can I expect data loss only in case of a crash, power failure or OS
crash?
In case of cras
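The understanding in the question is right: unlogged tables are ordinary on-disk tables that skip WAL, so they survive a clean stop/start, but after a crash (power failure, OS crash) they are truncated to empty during recovery. They are declared like any table, just with the extra keyword; the table below is only an illustrative example:

    CREATE UNLOGGED TABLE session_cache (
        session_id text PRIMARY KEY,
        payload    text,
        created_at timestamptz NOT NULL DEFAULT now()
    );
    -- Survives: pg_ctl stop / start (clean shutdown).
    -- Emptied:  crash recovery after a power failure or OS crash.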
For example:
Table A
-id (PK)
-name
Table B
-table_a_id (PK, FK)
-address
When I do an insert on table B, the database checks whether the value for column
"table_a_id" exists in table A.
But if I do an update of column "address" of table B, does the database
check again?
My question is due
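For later readers: the foreign-key check is only re-run when the referencing column itself changes; an update that touches only "address" leaves table A alone. Spelled out with the schema from the example (column types are assumptions):

    CREATE TABLE table_a (
        id   integer PRIMARY KEY,
        name text
    );
    CREATE TABLE table_b (
        table_a_id integer PRIMARY KEY REFERENCES table_a (id),
        address    text
    );

    INSERT INTO table_a (id, name) VALUES (1, 'first');
    -- Checked against table_a: the FK column receives a value.
    INSERT INTO table_b (table_a_id, address) VALUES (1, 'somewhere');
    -- Not re-checked: table_a_id is unchanged, only address is written.
    UPDATE table_b SET address = 'elsewhere' WHERE table_a_id = 1;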
Sent: Saturday, September 10, 2011 02:30 p.m.
To: Anibal David Acosta
CC: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] should i expected performance degradation over time
On Sat, Sep 10, 2011 at 10:55 AM, Anibal David Acosta
wrote:
> Sometimes I read that postgres perfor
I have a lot of wasted bytes in some tables.
Somewhere I read that maybe autovacuum can't release space due to a low
max_fsm_pages setting.
I want to increase it, but I can't find the parameter in postgresql.conf.
Does this parameter exist? If not, how can I deal with bloated tables?
I have ma
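A note for the archives: max_fsm_pages only exists up to 8.3; from 8.4 on the free space map is kept per relation on disk, which is why the parameter is missing from postgresql.conf. A rough way to see which tables carry the most dead rows before choosing between plain VACUUM and a rewrite (VACUUM FULL or CLUSTER):

    SELECT relname, n_live_tup, n_dead_tup,
           round(n_dead_tup * 100.0 / nullif(n_live_tup + n_dead_tup, 0), 1)
               AS dead_pct
    FROM pg_stat_user_tables
    ORDER BY n_dead_tup DESC
    LIMIT 20;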
Sometimes I read that postgres performance degrades over time, and some
people say that a backup and restore of the database solves the problem.
Is it really true?
I have postgres 9.0 on a Windows machine with autovacuum ON.
I have some configuration tables
And a couple of tr
-performance-ow...@postgresql.org] On Behalf Of Greg Smith
Sent: Thursday, September 8, 2011 09:29 p.m.
To: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] how delete/insert/update affects select performace?
On 09/08/2011 12:40 PM, Anibal David Acosta wrote:
> Postgres 9.0 on windows
it that index "reindex" or rebuild or
something? Or does the select just view another "version" of the table?
Thanks
-Original Message-
From: Kevin Grittner [mailto:kevin.gritt...@wicourts.gov]
Sent: Thursday, September 8, 2011 01:01 p.m.
To: Anibal David Acosta; p
ed='T')
So, do you think I should remove "enabled" from the index?
Thanks
-Original Message-
From: Kevin Grittner [mailto:kevin.gritt...@wicourts.gov]
Sent: Thursday, September 8, 2011 10:51 a.m.
To: Anibal David Acosta; pgsql-performance@postgresql.org
Subject: Re: [PERF
Hi!
I have a table that is not too big but has approx. 5 million rows; this table
must handle 300 to 400 selects per second, but it also has 10~20
deletes/inserts/updates per second.
So, I need to know if the inserts/deletes/updates really affect select
performance and how to deal with it.
Th
, Anibal David Acosta wrote:
Hi everyone,
My question is: if I have a table with 500,000 rows, and a SELECT of one row
is returned in 10 milliseconds, then if the table has 6,000,000 rows and
everything is OK (statistics, vacuum, etc.),
can I suppose that the elapsed time will be near to 10
Hi everyone,
My question is: if I have a table with 500,000 rows, and a SELECT of one row
is returned in 10 milliseconds, then if the table has 6,000,000 rows and
everything is OK (statistics, vacuum, etc.),
can I suppose that the elapsed time will be near to 10?
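Roughly yes, provided the single-row SELECT goes through a b-tree index: the index depth grows with the logarithm of the row count, so going from 500,000 to 6,000,000 rows adds at most a level or so rather than a 12x slowdown. The thing to verify is that the plan really is an index scan (names below are placeholders):

    EXPLAIN ANALYZE
    SELECT * FROM mytable WHERE id = 12345;
    -- Expected: "Index Scan using mytable_pkey on mytable ..."
    -- A "Seq Scan" here would instead scale with the table size.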
mith
Sent: Monday, August 1, 2011 03:53 p.m.
To: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] synchronous_commit off
On 08/01/2011 09:29 AM, Anibal David Acosta wrote:
Can a transaction committed asynchronously report an error, duplicate key or
something like that, c
Can a transaction committed asynchronously report an error, a duplicate key or
something like that, leaving the client with an OK transaction but the server with
a FAILED transaction?
Thanks
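For the record: with synchronous_commit = off every check (unique keys, foreign keys, triggers) still runs before COMMIT returns, so the client never gets an OK for a transaction that later fails with a duplicate key. The only exposure is that the most recently reported commits can be lost if the server crashes before the delayed WAL flush. Usage sketch (table and value are placeholders):

    SET synchronous_commit = off;   -- per session, or per transaction

    BEGIN;
    INSERT INTO mytable (id) VALUES (1);   -- a duplicate key still errors here
    COMMIT;                                -- returns before the WAL is flushed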
:50 p.m.
To: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] how much postgres can scale up?
On 06/10/2011 07:29 AM, Anibal David Acosta wrote:
> When 1 client connected postgres do 180 execution per second With 2
> clients connected postgres do 110 execution per second With 3 c
Excellent.
Thanks, I'll buy and read that book :)
Thanks!
-Original Message-
From: Craig Ringer [mailto:cr...@postnewspapers.com.au]
Sent: Friday, June 10, 2011 09:13 a.m.
To: Anibal David Acosta
CC: t...@fuzzy.cz; pgsql-performance@postgresql.org
Subject: Re: [PE
, is it possible under excellent conditions that two connections double the
number of transactions per second?
Thanks!
-Original Message-
From: t...@fuzzy.cz [mailto:t...@fuzzy.cz]
Sent: Friday, June 10, 2011 08:10 a.m.
To: Anibal David Acosta
CC: pgsql-performance
I have a function in PL/pgSQL; this function does some selects on some
tables to verify some conditions and then does one insert into a table with NO
index. Updates are not performed in the function.
When 1 client is connected postgres does 180 executions per second
With 2 clients connected postgres does 11
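When throughput drops as clients are added, the first thing worth checking is whether the extra sessions are actually waiting on each other (locks, the same target rows) rather than doing work. On 9.0 a quick look is (column names are the 9.0 ones):

    SELECT procpid, waiting, current_query
    FROM pg_stat_activity
    WHERE current_query <> '<IDLE>';
    -- Rows with waiting = true are blocked on a lock; pg_locks shows which one.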
I have a strange situation.
I have a detail table with millions of rows and an items table with
thousands of rows.
When I do:
select count(*) from wiz_application_response where application_item_id in
(select id from wiz_application_item where application_id=110)
This query does NOT use the inde
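A rewrite that is sometimes worth trying when an IN (subquery) plan ignores the index on the outer column is the equivalent EXISTS (or join) form, which gives the planner another shape to consider; same tables and columns as in the query above:

    SELECT count(*)
    FROM wiz_application_response r
    WHERE EXISTS (
        SELECT 1
        FROM wiz_application_item i
        WHERE i.id = r.application_item_id
          AND i.application_id = 110
    );

Comparing the EXPLAIN ANALYZE output of both forms shows whether the index on application_item_id gets picked up.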
Hello,
How does fillfactor impact query performance?
I have two cases.
One is an operational table; for each insert there is an update. This table
must handle approx. 1,000 inserts per second and 1,000 updates per second (on the
same inserted row).
Is it necessary to change the fillfactor?
The other case is a tab
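For the operational table described above (an update following every insert), a fillfactor below 100 leaves free space in each heap page so the update can stay on the same page as a HOT update and avoid extra index maintenance. The value and table below are only illustrative:

    CREATE TABLE operational (
        id     bigint PRIMARY KEY,
        status text
    ) WITH (fillfactor = 70);

    -- For an existing table; newly freed space is used as pages are rewritten:
    ALTER TABLE operational SET (fillfactor = 70);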