Alvaro Herrera wrote:
I think you can do this very easily with PL/Tcl. For a somewhat
unrelated example, see General Bits issue #47,
http://www.varlena.com/GeneralBits/47.php
_I think_ there are examples closer to what you want to achieve in the
archives. The array of column names in a trigger is particularly handy
for this sort of thing.
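For illustration, here is a minimal sketch of that generic approach in
PL/Tcl: one trigger function shared by all monitored tables, walking the
column-name array. The changelog table and all other names here are
assumptions, not from the thread, and the dollar quoting needs
PostgreSQL 8.0 or later:

    CREATE TABLE changelog (
        logged_at  timestamptz NOT NULL DEFAULT now(),
        tablename  name NOT NULL,
        colname    name NOT NULL,
        newval     text
    );

    CREATE FUNCTION log_change() RETURNS trigger AS $$
        # TG_relatts is a Tcl list of column names (leading element is empty)
        foreach col [lrange $TG_relatts 1 end] {
            # NULL columns do not appear in the NEW array, so skip them
            if {[info exists NEW($col)]} {
                spi_exec "INSERT INTO changelog (tablename, colname, newval)
                          VALUES ('[quote $TG_relname]', '[quote $col]',
                                  '[quote $NEW($col)]')"
            }
        }
        return OK
    $$ LANGUAGE pltcl;

    CREATE TRIGGER t_log AFTER INSERT OR UPDATE ON monitored_table
        FOR EACH ROW EXECUTE PROCEDURE log_change();

Since the same function works against any table, adding a table to the
audit scheme is just one more CREATE TRIGGER.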
"Carl E. McMillin" <[EMAIL PROTECTED]> writes:
> BTW, it might be nice to give the backends a diagnostic port for "remote
> debugging", similar to what the Java Virtual Machine provides, so that
> some degree of live workflow examination could happen. What do you
> think?
It's called gdb ;-)
Együd Csaba <[EMAIL PROTECTED]> writes:
> Limit (cost=30.28..30.28 rows=1 width=58) (actual time=0.19..0.19 rows=1 loops=1)
>   ->  Sort (cost=30.28..30.30 rows=7 width=58) (actual time=0.18..0.18 rows=2 loops=1)
>         Sort Key: stockid, productid, changeid, date, "time"
In article <[EMAIL PROTECTED]>,
Tom Lane <[EMAIL PROTECTED]> wrote:
>Depends which startup script you are using. I know that up till
>recently the Red Hat init script did
>
> su -l postgres -s /bin/sh -c "pg_ctl start ..."
>
>and because it forced /bin/sh, anything you might have put in, say,
>~/.bash_profile would not be read.
On Tue, Jun 29, 2004 at 01:59:11PM +1000, Justin Clift wrote:
Justin,
> I'm creating a centralised table to keep a log of changes in other tables.
>
> In thinking about the PL/pgSQL trigger to write and attach to the
> monitored tables (probably a row level AFTER trigger), I can see two
> approaches:
Hi Tom,
I did the modifications you suggested on the t_stockchanges_fullindex and
the result tells everything:
-
explain analyze select date,time from t_stockchanges where stockid='1' and
productid='234' and date<='2004.06.29' and changeid=1 order by stockid,
productid, changeid, date, time
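An index that matches both the WHERE clause and the ORDER BY column order
lets a query like this skip the sort step entirely. This is only a guess
at what the modified t_stockchanges_fullindex might look like:

    CREATE INDEX t_stockchanges_fullindex
        ON t_stockchanges (stockid, productid, changeid, date, "time");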
Hi all,
I'm creating a centralised table to keep a log of changes in other tables.
In thinking about the PL/pgSQL trigger to write and attach to the
monitored tables (probably a row level AFTER trigger), I can see two
approaches:
a) Write a separate PL/pgSQL function for each table, with the hard-coded
column names that implies, or b) write one generic trigger function and
attach it to every monitored table.
Could you do this by setting the debug level higher and doing log analysis?
BTW, it might be nice to give the backends a diagnostic port for "remote
debugging", similar to what the Java Virtual Machine provides, so that some
degree of live workflow examination could happen. What do you think?
Carl
Tom Lane wrote:
It's a *really* bad idea to expose that to users of the PL.
Alvaro Herrera wrote:
You want to abort the transaction on the callback? What for? You could
have aborted it earlier.
Of course, in a function you could save the mails you are going to send
and register a callback for the commit, so they only go out if the
transaction actually commits.
On Mon, Jun 28, 2004 at 01:33:33PM -0700, Steve Atkins wrote:
> Is there any way to look at the database as though you were inside another
> session's transaction?
Not currently.
Maybe actually you _could_ do it with a C function, but it will require
a lot of backend internal knowledge.
--
Alvaro Herrera
On Mon, Jun 28, 2004 at 01:54:20PM -0700, Patrick Hatcher wrote:
> I'm about to update a server that is currently using 7.4.1 to 7.4.3. I see
> that in the instructions for upgrading to 7.4.2 from 7.4.1 it said to
> either do a dump or follow a set of special instructions. Should I still
> follow these instructions when I upgrade?
perhaps this is why you've received no answer?
Sorry, unable to deliver your message to [EMAIL PROTECTED] for
the following reason:
552 Quota violation for admin at econ dot com
A copy of the original message below this line:
I'm about to update a server that is currently using 7.4.1 to 7.4.3. I see
that in the instructions for upgrading to 7.4.2 from 7.4.1 it said to
either do a dump or follow a set of special instructions. Should I still
follow these instructions when I upgrade?
TIA
Patrick Hatcher
Macys.Com
On Mon, 2004-06-28 at 14:37, Jaime Casanova wrote:
> Hi all,
>
> Is there a way to actually eliminate those dropped columns so they
> don't affect the 1600-column limit? I know it's very difficult to end
> up with this problem but apparently "it is" possible.
>
You may want to try recreating the table and copying the data over; the
dropped-column entries go away when the table is rewritten.
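A sketch of that rebuild (the table name is illustrative; note that a
CREATE TABLE ... AS copy does not carry over indexes, constraints, or
defaults, so those have to be recreated by hand):

    BEGIN;
    CREATE TABLE mytable_new AS SELECT * FROM mytable;
    DROP TABLE mytable;
    ALTER TABLE mytable_new RENAME TO mytable;
    COMMIT;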
Hi all,
Is there a way to actually eliminate those dropped columns so they don't affect the 1600-column limit? I know it's very difficult to end up with this problem but apparently "it is" possible.
Thanx in advance,
Jaime Casanova
Is there any way to look at the database as though you were inside another
session's transaction?
I've had two cases recently where this would have been somewhat useful. In one,
a select into query ran for several hours and it would have been nice to
see that it was running correctly. In the other
On Mon, 2004-06-28 at 09:15, UMPA Development wrote:
> Hello all!
>
> Is it possible to set up a group by to be case-insensitive and if so how?
group by lower(field)
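Spelled out with assumed table and column names:

    SELECT lower(name) AS name_ci, count(*)
    FROM customers
    GROUP BY lower(name);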
In the last exciting episode, [EMAIL PROTECTED] (UMPA Development) wrote:
> Is it possible to set up a group by to be case-insensitive and if so how?
Well, you could presumably canonicalize the field to one case or the
other, thereby forcing the issue. That's not _exactly_ the same thing
as "case-insensitive" grouping, but it may be close enough.
Tom Lane <[EMAIL PROTECTED]> writes:
> "Philippe Lang" <[EMAIL PROTECTED]> writes:
>
> > Another solution would be to use cron every 5 minutes, and read the
> > content of a table.
>
> This would probably be better because the cron job could only see the
> results of committed transactions.
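A sketch of that pattern, with all names assumed: the function queues
outgoing mail in a table, and the cron job, running in its own later
transaction, can only ever see rows whose inserting transaction committed:

    CREATE TABLE mail_queue (
        id    serial PRIMARY KEY,
        rcpt  text NOT NULL,
        body  text NOT NULL,
        sent  boolean NOT NULL DEFAULT false
    );

    -- every 5 minutes the cron job would run something like:
    --   SELECT id, rcpt, body FROM mail_queue WHERE NOT sent;
    --   UPDATE mail_queue SET sent = true WHERE id = ...;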
Együd Csaba <[EMAIL PROTECTED]> writes:
>> I'd also suggest dropping the EXECUTE approach, as this is costing you
>> a re-plan on every call without buying much of anything.
> Do you mean I should use PERFORM instead? Or what else?
> Do you mean the "for R in execute" statements?
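For illustration, a plain FOR loop over a static query is planned once per
session and the plan reused, while FOR ... IN EXECUTE re-plans the query
string on every call. The function, table, and column names here are
assumptions, and the dollar quoting needs 8.0 or later:

    CREATE OR REPLACE FUNCTION sum_changes(p_stockid integer)
    RETURNS bigint AS $$
    DECLARE
        r record;
        total bigint := 0;
    BEGIN
        -- static query: prepared on first execution, reused afterwards
        FOR r IN
            SELECT quantity FROM t_stockchanges WHERE stockid = p_stockid
        LOOP
            total := total + r.quantity;
        END LOOP;
        RETURN total;
    END;
    $$ LANGUAGE plpgsql;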
Richard Huxton <[EMAIL PROTECTED]> writes:
> W.B.Hill wrote:
>> SELECT d+'45 days ago'::interval FROM test;
>>
>> Why the different times??? Why the times???
> At a guess, the date is being converted into a timestamp with timezone
> so you can add the interval to it.
Yeah, I think that will be what's happening here.
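The promotion is easy to see with a self-contained variant of the query
(the table is inlined here, so nothing else is assumed):

    -- date + interval is evaluated as a timestamp addition, which is
    -- why a time-of-day component shows up in the result
    SELECT d, d + '45 days ago'::interval AS shifted
    FROM (SELECT CURRENT_DATE AS d) AS test;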
> The major time sink is clearly here:
>
>>   ->  Index Scan using t_stockchanges_fullindex on t_stockchanges
>>         (cost=0.00..28.74 rows=7 width=46)
>>         (actual time=0.14..9.03 rows=6 loops=1)
>>         Index Cond: ((date <= '2004.06.28'::bpchar) AND (stockid = 1)
On Mon, Jun 28, 2004 at 05:32:55PM +0200, Thomas Hallgren wrote:
> Thomas Hallgren wrote:
> >I would like to make it possible to add a callback that's called just at
> >the start of a transaction as well. Such callbacks would have the
> >ability to generate an error and abort the transaction. Would that be
> >acceptable?