## Robert Sosinski (rsosin...@ticketevolution.com):
> When using array_agg on a large table, memory usage seems to spike up until
> Postgres crashes with the following error:
This sounds like bug #7916.
http://www.postgresql.org/message-id/e1uceeu-0004hy...@wrigleys.postgresql.org
As noted in tha
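For context, array_agg keeps separate per-group state, so aggregating over a large number of groups in a single hashed pass can use far more memory than work_mem would suggest. Whether or not that is exactly what the referenced bug covers, a minimal sketch of the pattern and of two workarounds sometimes used while waiting for a fix (the table and column names below are made up for illustration):

-- Hypothetical schema: one array per user over a very large events table.
SELECT user_id, array_agg(event_id) AS event_ids
FROM events
GROUP BY user_id;

-- Possible mitigations (assumptions, not advice given in this thread):
SET enable_hashagg = off;  -- prefer a sorted GroupAggregate plan over hashing
-- ...or aggregate in smaller slices, one range at a time:
SELECT user_id, array_agg(event_id)
FROM events
WHERE user_id BETWEEN 1 AND 100000
GROUP BY user_id;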
Hi,
Thanks. So we are close to the tentative release date. Good.
Regards,
Jayadevan
On Mon, Aug 19, 2013 at 10:16 AM, Sandro CAZZANIGA <cazzaniga.san...@gmail.com> wrote:
> On 19/08/2013 06:38, Jayadevan M wrote:
> > Hello all,
> > Is the release date for PostgreSQL 9.3 production decided? We are going
> > live in a couple of weeks with a portal and if possible, would like to
> > go with 9.3, Materialized Views being the key feature that will add value.
On 19/08/2013 06:38, Jayadevan M wrote:
> Hello all,
> Is the release date for PostgreSQL 9.3 production decided? We are going
> live in a couple of weeks with a portal and if possible, would like to
> go with 9.3, Materialized Views being the key feature that will add value.
> Regards,
> Jayadevan
Thanks for your help. This was the requirement that was assigned to us, so I had to
ask even though we have many options. Thanks again.
Hello all,
Is the release date for PostgreSQL 9.3 production decided? We are going
live in a couple of weeks with a portal and if possible, would like to go
with 9.3, Materialized Views being the key feature that will add value.
Regards,
Jayadevan
On Sun, Aug 18, 2013 at 10:33 PM, Kevin Grittner wrote:
> Tyler Reese wrote:
> > Kevin Grittner wrote:
> >> Tyler Reese wrote:
>
> >>> mydb=> explain analyze SELECT * FROM "cdr" WHERE
> lower("CallingPartyNumber") = '9725551212' order by "key" limit 100;
> >>>
> >>> Limit (cost=0.00..72882.05
Tyler Reese wrote:
> Kevin Grittner wrote:
>> Tyler Reese wrote:
>>> mydb=> explain analyze SELECT * FROM "cdr" WHERE
>>> lower("CallingPartyNumber") = '9725551212' order by "key" limit 100;
>>>
>>> Limit (cost=0.00..72882.05 rows=100 width=757) (actual
>>> time=20481.083..30464.960 rows=11
When using array_agg on a large table, memory usage seems to spike up until
Postgres crashes with the following error:
2013-08-17 18:41:02 UTC [2716]: [2] WARNING: terminating connection because
of crash of another server process
2013-08-17 18:41:02 UTC [2716]: [3] DETAIL: The postmaster has commanded this server
process to roll back the current transaction and exit, because another server process
exited abnormally and possibly corrupted shared memory.
(Including the typo mentioned in the 2nd email)
On Sat, Aug 17, 2013 at 8:47 PM, Piotr Gasidło wrote:
> All on 9.3beta2. Current setup:
>
> server1 (MASTER) -> server2 (SLAVE) -> server3 (SLAVE)
>
> server2 is hot_standby and gets WALs from server1
> server3 is hot_standby and gets WALs from server2
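The rest of this message is cut off, so only the topology is visible here. A hedged sketch of how a cascade like this is usually verified, using standard 9.3 monitoring views and functions (nothing specific to this report):

-- On server1 and on server2 (each feeds the next standby in the cascade),
-- pg_stat_replication lists the directly attached standbys:
SELECT application_name, state, sent_location, replay_location
FROM pg_stat_replication;

-- On server2 and server3, these report recovery status and WAL progress:
SELECT pg_is_in_recovery(),
       pg_last_xlog_receive_location(),
       pg_last_xlog_replay_location();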
So, the fact that it thinks it needs to read 1/412th of the table is the reason
the query planner chooses to use the primary key index instead of the
callingpartynumber index, like it does in the first 3 cases? I'm curious
as to why it says "rows=41212". Is that the estimate of the number of rows
tha
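If the existing callingpartynumber index is on the raw column rather than on the lower(...) expression, one standard option (sketched here as an assumption about the schema, not something already tried in this thread) is an expression index that matches the predicate exactly, followed by a fresh ANALYZE so the row estimate improves:

-- Expression index matching the WHERE clause:
CREATE INDEX cdr_lower_callingpartynumber_idx
    ON "cdr" (lower("CallingPartyNumber"));

-- Refresh statistics so the planner's estimate for the predicate is updated:
ANALYZE "cdr";

-- The original query can then use the new index rather than walking the
-- "key" index and filtering:
EXPLAIN ANALYZE
SELECT * FROM "cdr"
WHERE lower("CallingPartyNumber") = '9725551212'
ORDER BY "key" LIMIT 100;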
On 08/18/2013 01:14 PM, Janek Sendrowski wrote:
Hi,
How can I do a query on a record variable in a function?
I want to do a dirty full-text search on a table and then choose the strings which
have a low Levenshtein distance.
I wanted to do it like this, but it doesn't work:
v_query := 'SELECT col FR
Hi,
How can I do a query on a record variable in a function?
I want to do a dirty full-text search on a table and then choose the strings which
have a low Levenshtein distance.
I wanted to do it like this, but it doesn't work:
v_query := 'SELECT col FROM table WHERE LENGTH(dede) BETWEEN x AND y AND
p
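Since the dynamic query above is cut off, here is a hedged sketch of one way to do this in PL/pgSQL: loop over the result of an EXECUTE into a record variable, then keep only rows whose Levenshtein distance is small. The table and column names (mytable, col) and the distance threshold are placeholders; levenshtein() comes from the fuzzystrmatch extension.

CREATE EXTENSION IF NOT EXISTS fuzzystrmatch;  -- provides levenshtein()

CREATE OR REPLACE FUNCTION fuzzy_lookup(p_search text, p_max_dist int)
RETURNS SETOF text
LANGUAGE plpgsql AS
$$
DECLARE
    v_rec record;
BEGIN
    -- Dirty full-text pass first; EXECUTE ... USING passes the search term safely.
    FOR v_rec IN
        EXECUTE 'SELECT col FROM mytable
                 WHERE to_tsvector(''simple'', col) @@ plainto_tsquery(''simple'', $1)'
        USING p_search
    LOOP
        -- Keep only strings close to the search term.
        IF levenshtein(v_rec.col, p_search) <= p_max_dist THEN
            RETURN NEXT v_rec.col;
        END IF;
    END LOOP;
END;
$$;

-- Example call with a distance threshold of 3:
SELECT * FROM fuzzy_lookup('some search term', 3);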
Tyler Reese wrote:
> I don't understand why the performance of case 4 is so much slower
> case 4:
> mydb=> explain analyze SELECT * FROM "cdr" WHERE lower("CallingPartyNumber") =
> '9725551212' order by "key" limit 100;
> Limit (cost=0.00..72882.05 rows=100 width=757) (actual
> time=20481.083..