I have two tables, software.orders and software.products, and I created two views:
CREATE OR REPLACE VIEW software.o1 AS
SELECT orders.orderid, orders.productid, orders.amount
FROM software.orders;
CREATE OR REPLACE VIEW software.o2 AS
SELECT o1.orderid, o1.productid, o1.amount, products.productn
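The CREATE VIEW for software.o2 is cut off above. A rough sketch of what it presumably looks like, assuming the column is products.productname and that the relations join on productid (both assumptions, since the original statement is truncated):

-- hypothetical completion of the second view, built on top of software.o1
CREATE OR REPLACE VIEW software.o2 AS
SELECT o1.orderid, o1.productid, o1.amount, products.productname
FROM software.o1
JOIN software.products ON products.productid = o1.productid;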
David Salisbury writes:
> Actually, figured I'd post the whole function, painful as it
> might be for anyone to read. If anyone sees something that's a bit
> of a risk ( like perhaps the whole thing ;)
Well, I don't know exactly what's causing your issue, but I see a few
things that seem rather
On Mon, May 21, 2012 at 6:12 AM, John R Pierce wrote:
> On 05/20/12 12:52 PM, John Townsend wrote:
>>
>> By by-passing the "dll" (or "so" on Linux) library I mean you write
>> function or procedure calls to the server that is running as a service on
>> Windows. You don't use the library with its 160 exported functions.
I also needed to grant non-superusers the privilege to execute
pg_terminate_backend, and I put it in a separate schema, too. But to avoid
users killing other users' connections, I made another function that only
allows killing connections made by the same user that's executing the
function.
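A minimal sketch of that kind of wrapper, assuming the 9.1-era pg_stat_activity column names (procpid rather than pid, which arrives in 9.2) and hypothetical schema, function, and role names; the SECURITY DEFINER function runs with its owner's (superuser) rights but refuses to touch backends that don't belong to the calling role:

CREATE OR REPLACE FUNCTION tools.terminate_own_backend(target_pid integer)
RETURNS boolean AS $$
BEGIN
    -- session_user is still the logged-in role inside a SECURITY DEFINER function
    IF EXISTS (SELECT 1 FROM pg_stat_activity
               WHERE procpid = target_pid          -- "pid" from 9.2 on
                 AND usename = session_user) THEN
        RETURN pg_terminate_backend(target_pid);
    END IF;
    RAISE EXCEPTION 'backend % is not owned by %', target_pid, session_user;
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;

REVOKE ALL ON FUNCTION tools.terminate_own_backend(integer) FROM PUBLIC;
GRANT EXECUTE ON FUNCTION tools.terminate_own_backend(integer) TO app_users;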
On 05/20/12 12:52 PM, John Townsend wrote:
By by-passing the "dll" (or "so" on Linux) library I mean you write
function or procedure calls to the server that is running as a service
on Windows. You don't use the library with its 160 exported functions.
You connect directly to the server thus saving one layer of protocols.
By by-passing the "dll" (or "so" on Linux) library I mean you write
function or procedure calls to the server that is running as a service
on Windows. You don't use the library with its 160 exported functions.
You connect directly to the server thus saving one layer of protocols.
To do this, y
Hi all,
Many times I have to dump all objects from a schema (a single schema holding
only functions and views) in plain-text format. I find that pg_dump
includes a SET search_path statement at the beginning and drops all
occurrences of the schema name (to which each dumped object belongs) from
funct
Yes, it is useful, but the situation is different. Most of the time, objects
are first created and tested, and only after finalization can they go into the
extension update file. It can also be difficult to avoid getting the object
definitions from an extension in the backup from pg_dump. A customer could
have to execu
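If the goal is just to get schema-qualified function definitions out (rather than pg_dump's SET search_path style), pg_get_functiondef (available since 8.4) emits a full CREATE OR REPLACE FUNCTION schema.name(...) statement per function. A rough sketch, reusing the software schema from the first message on this page purely as a placeholder:

-- One CREATE OR REPLACE FUNCTION statement per plain function in the schema;
-- the function name comes out schema-qualified, unlike pg_dump's output.
SELECT pg_get_functiondef(p.oid) || ';'
FROM pg_proc p
JOIN pg_namespace n ON n.oid = p.pronamespace
WHERE n.nspname = 'software'
  AND NOT p.proisagg           -- pg_get_functiondef cannot print aggregates
ORDER BY p.proname;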
On Sunday, May 20, 2012, Martijn van Oosterhout wrote:
> On Sun, May 20, 2012 at 12:26:26AM -0700, Ian Harding wrote:
> > I have a situation where an increase in volume of inserts into the
> > main transaction table causes a huge slowdown. The table has lots of
> > indexes and foreign keys and a trigger.
On Sun, May 20, 2012 at 8:39 AM, Poul Møller Hansen wrote:
>> Anyway, if you are seeing no activity at the end of the restore for quite
>> a while you may want to see if large objects are the reason.
>>
>
> The dump is from version 9.0.7 and it's being restored into version
> 9.1.3. There are
Josh Kupershmidt writes:
> ... Notice that pg_backup_archiver.c and
> pg_backup_tar.c use an inconsistent log level for this same message,
> which might explain where you saw the message previously.
Seems like that ought to be fixed.
> ... Or change this bit in pg_restore.c to assign a 2
> inste
One thing you may want to look at (if this is PG 8.4.x) is the number of
large objects in pg_largeobject. If your apps don't use large objects
this is not relevant. If they do, then it may be. I've noticed that
pg_restore no longer reports the restoration of individual LO items. It
used to b
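A quick way to check whether large objects are even in play (loid in pg_largeobject is the object's OID; pg_largeobject_metadata only exists from 9.0 on):

-- 8.4: pg_largeobject holds one row per ~2kB chunk, so count distinct OIDs
SELECT count(DISTINCT loid) AS large_objects FROM pg_largeobject;

-- 9.0 and later: one metadata row per large object
SELECT count(*) AS large_objects FROM pg_largeobject_metadata;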
Chris Angelico writes:
> On Sat, May 19, 2012 at 3:09 AM, Steve Crawford
> wrote:
>> I also don't understand the "xcrm.channel_id in (1)" instead of
>> "xcrm.channel_id = 1" unless this is a generated query and there could be
>> multiple ids in that condition.
> Side point from this thread. Is t
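For what it's worth, a single-element IN list is turned into a plain equality at parse time, so the two spellings end up with the same plan; xcrm and channel_id here are just the identifiers from the quoted query:

-- EXPLAIN shows the same condition, channel_id = 1, for both forms
SELECT * FROM xcrm WHERE channel_id IN (1);
SELECT * FROM xcrm WHERE channel_id = 1;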
On Sun, May 20, 2012 at 02:51:42PM +0200, zeljko wrote:
> John Townsend wrote:
>
> > It appears that some developers (Davart) are by-passing the standard
> > client library, “libpq.dll”, and directly accessing the server using
> > Delphi or FPC. I am not sure of the advantage here. All libpq.dll
On Sun, May 20, 2012 at 12:26:26AM -0700, Ian Harding wrote:
> I have a situation where an increase in volume of inserts into the
> main transaction table causes a huge slowdown. The table has lots of
> indexes and foreign keys and a trigger.
>
> Clearly, something is causing a resource contention issue.
John Townsend wrote:
> It appears that some developers (Davart) are by-passing the standard
> client library, “libpq.dll”, and directly accessing the server using
> Delphi or FPC. I am not sure of the advantage here. All libpq.dll
I'm an FPC user and I use libpq.so (.dll, .dylib) via zeoslib.
Those w
On Sat, May 19, 2012 at 10:57 PM, Bosco Rama wrote:
> Hey Josh,
>
> I found the message I was seeing. It was/is(?) in StartRestoreBlob() and it
> looks like this:
>
> ahlog(AH, 2, "restoring large object with OID %u\n", oid);
>
> But I don't know how to find it in the current git tree or how t
I have a situation where an increase in volume of inserts into the
main transaction table causes a huge slowdown. The table has lots of
indexes and foreign keys and a trigger.
Clearly, something is causing a resource contention issue, but here's
my main question:
I have log_lock_waits = on and
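The log is the authoritative record once log_lock_waits fires, but the waiters are also visible live; a minimal check using the pre-9.2 pg_stat_activity column names (procpid, waiting, current_query):

-- Backends currently blocked on a lock, and what they are trying to run
SELECT procpid, usename, waiting, current_query
FROM pg_stat_activity
WHERE waiting;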
Hi,
On 19/05/12 19:09, c k wrote:
similar tasks and already using it. This problem arises when I have to
issue an update script to the client containing mostly functions. As
most of the functions depend on others, and there are more than
1100 of them, it becomes hard to write a li