Aleksander Kmetec writes:
> there's another bug I've been seeing from time to time, but was unable to
> reproduce it until today.
Hmm, I think you meant to send this to the Slony lists? It looks like
a Slony problem not a core-PG problem.
regards, tom lane
Accidentally sent this to pgsql-general instead of slony1-general. Sorry about
that. :(
-- Aleksander
Hi,
there's another bug I've been seeing from time to time, but was unable to
reproduce it until today.
We have statement_timeout set to 10 seconds and we would sometimes get timeouts on extremely simple statements, like
single-row inserts. I believe this is caused by a waiting TRUNCATE in log
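A waiting TRUNCATE requests an ACCESS EXCLUSIVE lock, and every later statement on the table, even a single-row insert, queues behind that request until statement_timeout fires. A query along these lines shows who is stuck (a minimal sketch against the 8.x catalogs; adjust column names to your version):

```sql
-- Sessions waiting on ungranted locks, with the relation involved.
SELECT l.pid, l.mode, l.granted, c.relname
FROM pg_locks l
LEFT JOIN pg_class c ON c.oid = l.relation
WHERE NOT l.granted;
```

Running this while the timeouts are happening should show the insert backends queued behind the TRUNCATE's lock request.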
On Tue, Jul 21, 2009 at 11:25 PM, Ow Mun Heng wrote:
> RETURN NULL;
From the docs:
"It can return NULL to skip the operation for the current row."
-- http://www.postgresql.org/docs/current/static/trigger-definition.html
Just make your trigger return NEW and it won't kill the insert to the
child …
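The distinction Greg is pointing at: a BEFORE row trigger that returns NULL silently discards the row, so it is only appropriate when the trigger has already stored the row somewhere itself. A minimal sketch of the redirect pattern (table and function names here are made up, not from the thread):

```sql
-- Hypothetical partitioning trigger: route each row into a child table,
-- then return NULL so the row is NOT also stored in the parent.
-- Returning NEW instead would let the insert proceed on the parent.
CREATE OR REPLACE FUNCTION master_insert_router() RETURNS trigger AS $$
BEGIN
    INSERT INTO child1 VALUES (NEW.*);
    RETURN NULL;  -- row already written to child1; skip the parent insert
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER route_insert BEFORE INSERT ON master
    FOR EACH ROW EXECUTE PROCEDURE master_insert_router();
```

If the trigger returns NULL without doing the INSERT itself, the row simply vanishes, which is the behavior the original poster was seeing.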
I think I'm doing this wrongly. Before I go out re-invent the wheel, I
thought I'll just check w/ the list. (I previously got the idea from IRC)
Table "Master"
--> Table "Child1"
--> Table "Child2"
--> Table "Child2"
Table "Update" --> Table to update some key items from source table.
The …
2009/7/21 Bjørn T Johansen :
> I have had some problem with my filesystem and now I get these error messages
> when trying to access my database..:
>
> ERROR: could not access status of transaction 2495690984
> Could not open file "pg_clog/094C": No such file or directory.
got backups?
One way …
On Mon, Jul 20, 2009 at 9:56 PM, Dennis Gearon wrote:
>
> I once talked to a company that made a custom version of Postgres. It split
> tables up on columns and also by rows, had some other custom features. It
> was enormously faster from what I gathered.
Faster at some things :-). Column-ori…
On Tue, Jul 21, 2009 at 1:47 PM, David Kerr wrote:
> On Tue, Jul 21, 2009 at 01:13:18PM -0400, Tom Lane wrote:
> - David Kerr writes:
> - I tried it on a table with 81 random values. It took frickin'
> - forever, but seemed to be willing to respond to cancels anywhere
> - along the line. I'm …
> - type, and access into a varwidth array is O(n), so the sorting
> - step you've got here is O(n^2). It might help to use unnest()
> - instead of this handmade version of it ...)
>
> unnest() is 8.4 only, right?
>
> I'm actually probably just going to install R and use the median
> function from …
On Tue, Jul 21, 2009 at 01:13:18PM -0400, Tom Lane wrote:
- David Kerr writes:
- I tried it on a table with 81 random values. It took frickin'
- forever, but seemed to be willing to respond to cancels anywhere
- along the line. I'm not sure why you're seeing differently.
Hehe, yeah. For me …
> -Original Message-
> From: Tom Lane [mailto:t...@sss.pgh.pa.us]
> Sent: Tuesday, July 21, 2009 12:16 PM
> To: Chris Spotts
> Cc: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] array_agg crash?
>
> "Chris Spotts" writes:
> >> How many groups are you expecting in that query? Does the plan for the
> >> array_agg query show hash or group aggregation?
"Chris Spotts" writes:
>> How many groups are you expecting in that query? Does the plan for the
>> array_agg query show hash or group aggregation?
> GroupAggregate
Huh, there's no reason it should take much memory then. Maybe you've
found a memory leak. Can you put together a self-contained test …
David Kerr writes:
> On Mon, Jul 20, 2009 at 11:14:22PM -0400, Tom Lane wrote:
> - Huh. How big an array were you trying to invoke it on?
> there are 81 records in the table it's just
> test=# \d test2
>        Table "public.test2"
>  Column | Type | Modifiers
> --------+------+-----------
> >> How big were the arrays you were trying to push around here?
>
> > Like I'd said originally, there were no arrays that ended up being more
> > than 4 elements long - all integers. The vast majority of them were 1 or
> > 2 long.
>
> Hm, maybe the problem is not so much array size as too many arrays …
I have had some problem with my filesystem and now I get these error messages
when trying to access my database..:
ERROR: could not access status of transaction 2495690984
Could not open file "pg_clog/094C": No such file or directory.
Is there a way to fix this or do I have to find a backup?
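A stopgap often suggested for a missing clog segment (this is not advice from the replies above, and it should only ever be tried on a copy of the data directory, after a filesystem-level backup) is to recreate the segment filled with zeroes:

```shell
# Run from the cluster's data directory ($PGDATA).
# A zero-filled clog segment marks every transaction it covers as aborted,
# so rows written by those transactions become invisible -- data loss is
# possible. Only try this on a copy, after taking a backup.
mkdir -p pg_clog
dd if=/dev/zero of=pg_clog/094C bs=262144 count=1
```

Each clog segment is 256 kB; after creating it, dump whatever data is reachable and restore into a fresh cluster rather than trusting the damaged one.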
Janning Vygen writes:
> On Tuesday 21 July 2009 15:49:36 Tom Lane wrote:
>> Well, you could turn it off during the peak times.
> It requires a server restart which is not a good idea.
Changing logging options does not require a server restart.
regards, tom lane
On Mon, Jul 20, 2009 at 11:14:22PM -0400, Tom Lane wrote:
- David Kerr writes:
- > But, I don't see any coded loop or way for me to insert a signal check.
- > (I'm not much of a programmer.) The function was just:
-
- > CREATE OR REPLACE FUNCTION array_median(anyarray)
- > RETURNS anyelement AS …
"Chris Spotts" writes:
> That seems to have fixed it in the sense that it doesn't crash, but I killed
> it after 15 minutes and no results...rather I tried to kill it, but it looks
> like I'm going to have to -9 it...
How big were the arrays you were trying to push around here? I tried
interrupting a similar query and it came right back; but if you were
building some really …
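Following Tom's unnest() suggestion upthread (on 8.4, where unnest() exists), a set-based median avoids the O(n^2) cost of indexing into a varwidth array element by element. The original array_median body is cut off above; this is only a sketch of the unnest() approach, not the poster's function, and it assumes a 1-based float8 array, taking the lower middle element for even counts:

```sql
-- Median of a 1-based float8 array via unnest() + ORDER BY (8.4+).
-- Sorting happens once, in the executor, instead of via repeated
-- arr[i] subscripting inside a plpgsql loop.
CREATE OR REPLACE FUNCTION array_median(float8[]) RETURNS float8 AS $$
    SELECT x
    FROM unnest($1) AS x
    ORDER BY x
    OFFSET (array_upper($1, 1) - 1) / 2
    LIMIT 1;
$$ LANGUAGE sql IMMUTABLE;
```

Being a plain SQL query, it also responds to cancels normally, which was the other complaint in this thread.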
Thanks Bricklen
Dennis Gearon
--- On Tue, 7/21/09, bricklen wrote:
> From: bricklen
> Subject: Re: [GENERAL] commercial adaptation of postgres
> To: "Dennis Gearon"
> Cc: pgsql-general@postgresql.org
> Date: Tuesday, July 21, 2009, 7:33 AM
> Greenplum uses a modified version of
> PostgreSQL for their MPP product.
Craig Ringer writes:
> On Tue, 2009-07-21 at 10:13 -0400, Tom Lane wrote:
>> It's not unusual for "top" to show the postmaster's child processes as
>> "postmaster" as well. Depends on the platform and the options given
>> to top.
> Ah. Thanks for clearing that one up. That'd make more sense, since …
On Tue, Jul 21, 2009 at 04:08:51PM +0600, Murat Kabilov wrote:
> I would like to know if there is a function that extracts an element by its
> index from an array?
No, the syntax most people expect and use would be arr[ndx]. If you
want a function, it would be easy to do:
CREATE FUNCTION array…
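Sam's function is cut off above; his point stands that subscripting is built in (`arr[ndx]`), so a function is only a thin wrapper. A sketch of what such a wrapper could look like (the name array_get is an assumption, not from the message):

```sql
-- Thin wrapper around built-in subscripting: returns arr[ndx].
-- Polymorphic, so it works for any array type; 1-based indexing.
CREATE OR REPLACE FUNCTION array_get(arr anyarray, ndx int)
RETURNS anyelement AS $$
    SELECT $1[$2];
$$ LANGUAGE sql IMMUTABLE;

-- SELECT array_get(ARRAY[10, 20, 30], 2);  -- 20
```

Out-of-range subscripts return NULL, matching the behavior of direct subscripting.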
Greenplum uses a modified version of PostgreSQL for their MPP product.
http://www.greenplum.com/
On Mon, Jul 20, 2009 at 6:56 PM, Dennis Gearon wrote:
>
> I once talked to a company that made a custom version of Postgres. It split
> tables up on columns and also by rows, had some other custom …
I had one simple query that kept crashing the connection. It crashes
after several minutes.
Tried restarting, it still error'd at the same place.
Tried recreating the table it was selecting from, it still error'd at
the same place.
I rewrote the query with an ARRAY subselect and it finished fl…
Hello,
I would like to know if there is a function that extracts an element by its
index from an array?
Craig Ringer writes:
> I'm a bit puzzled about why you have three "postmaster" instances shown
> as running.
It's not unusual for "top" to show the postmaster's child processes as
"postmaster" as well. Depends on the platform and the options given
to top.
regards, tom lane
"Chris Spotts" writes:
> LOG: 0: autovacuum launcher process (PID 10264) was terminated by
> signal 9: Killed
Looks like the OOM killer is loose on your system. Disable memory
overcommit in the kernel and things will get better.
http://www.postgresql.org/docs/8.4/static/kernel-resources.html
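The kernel setting Tom refers to is a one-liner on Linux; a sketch of the sysctl.conf fragment (see the linked docs for the full discussion, including overcommit_ratio):

```
# /etc/sysctl.conf (Linux): disable memory overcommit so allocations
# fail up front instead of the OOM killer SIGKILLing backends later
vm.overcommit_memory = 2
```

Apply it without a reboot via `sysctl -p` (or `sysctl -w vm.overcommit_memory=2`).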
Janning Vygen writes:
> On Monday 20 July 2009 19:24:13 Bill Moran wrote:
>> Have you benchmarked the load it creates under your workload?
> Yes, it takes up to 15% of our workload in an average use case. But we have
> peak times where we can not afford 15% lost for logging!
Well, you could turn it off during the peak times.
Phoenix Kiula wrote:
> > I wonder: why do you spend so much time complaining instead of
> > simply locating the buggy data and fixing them?
>
>
> I'd love to fix them. But if I do a search for
>
> SELECT * FROM xyz WHERE col like '%0x80%'
>
> it doesn't work. How should I search for these characters?
On Tue, 2009-07-21 at 19:39 +0900, tanjunhua wrote:
> I'm sorry for sending this twice; the mail server rejected my response.
> I compressed it with cipher code (11), and the executable program is
> compressed as well.
When I build your example from source I see no indication of anything
wrong …
Hello,
The problem was with locale. To fix it I changed it to ru_RU.CP1251 and now
everything works as expected.
---
Konstantin Pavlov
On Tuesday 21 July 2009 04:36:41 Phoenix Kiula wrote:
> On Mon, Jul 20, 2009 at 2:04 PM, Peter Eisentraut wrote:
> > On Monday 20 July 2009 06:45:40 Phoenix Kiula wrote:
> >> explain analyze select * from sites where user_id = 'phoenix' order by
> >> id desc limit 10;
On Monday 20 July 2009 19:46:01 Radek Novotný wrote:
> query_to_xml('select Nazev as "TITLE", Datum as "DATE", Autor_Akce as "meta
> rel=''action_author''",
>
> gave me
>
> [XML output; the markup was stripped by the archive. The row carried the
> values "Test", "2009-07-20" and "test", under the
> http://www.w3.org/2001/XMLSchema-instance namespace.]
>
> How can I transcode this well gen…
I didn't find a definitive answer on how to check for the existence
of a temporary table.
I did an unscientific test to see if I could see a temp table from
another connection in the information_schema... and I can't.
The schema system is more direct (so cleaner) but it seems to rely on
some "be
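One direct way to test for a temp table in the current session, sidestepping the cross-connection visibility issue noted above, is to look in the session's own temporary schema. A sketch assuming pg_my_temp_schema(), which was added in 8.3 (the table name 'mytemp' is a placeholder):

```sql
-- Does a temp table named 'mytemp' exist in THIS session? (8.3+)
-- pg_my_temp_schema() returns 0 when the session has no temp schema,
-- in which case nothing matches and this yields false.
SELECT EXISTS (
    SELECT 1
    FROM pg_class
    WHERE relname = 'mytemp'
      AND relnamespace = pg_my_temp_schema()
);
```

Unlike a bare `relname` lookup, the schema check cannot be fooled by another session's temp table or by a permanent table of the same name.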
On Monday 20 July 2009 19:24:13 Bill Moran wrote:
> > > It is not possible for us. Logging millions of statements takes too much
> > > time.
>
> This is a ridiculous statement. In actual practice, full query logging
> is 1/50 the amount of disk I/O as the actual database activity. If your
> systems are so stressed that they can't handle another 2% increase …
Phoenix Kiula wrote:
I'd love to fix them. But if I do a search for
SELECT * FROM xyz WHERE col like '%0x80%'
it doesn't work. How should I search for these characters?
In 8.2, try: WHERE strpos(col, E'\x80') > 0
Note that this may find valid data as well, because the error you get …
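Putting Daniel's expression into a complete query, using the table and column names from the earlier message (on 8.2, the E'' escape-string syntax keeps this independent of standard_conforming_strings):

```sql
-- Rows whose text column contains the raw byte 0x80
SELECT * FROM xyz WHERE strpos(col, E'\x80') > 0;
```

As Daniel notes, this matches the byte wherever it occurs, so some hits may be legitimate multi-byte data rather than the corrupt values being hunted.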
On Monday 20 July 2009 18:58:21 Greg Sabino Mullane wrote:
> Perhaps, but I don't think you've quite overcome the 'log everything'
> counter-argument.
Not everybody can afford a system with lots of RAID arrays or dedicated logging
boxes. Many people log to the same disk. I do it in some projects …