Paul Tillotson wrote:
Does anyone know a safe way to shut down just one backend?
Sending it a SIGTERM via kill(1) should be safe.
-Neil
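A minimal sketch of how one might pick out the backend to signal, assuming the statistics collector is enabled so pg_stat_activity is populated (column names are from the 7.4/8.0 era):
SELECT procpid, usename, current_query FROM pg_stat_activity;
-- then, from the shell, signal only that backend:
--   kill -TERM <pid>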
On Fri, Feb 04, 2005 at 09:27:08AM +0200, Ben-Nes Yonatan wrote:
> Hi all,
>
> Does anyone know if PostgreSQL has a function which works like
> load_file() in MySQL?
I am not quite sure what load_file() does, but check the COPY command
and the analogous \copy in psql. As with many other PostgreSQL
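For illustration, a rough sketch of the server-side and client-side forms (the table and file names are made up):
COPY mytable FROM '/tmp/data.txt';      -- server-side; the file must be readable by the server
\copy mytable from '/tmp/data.txt'      -- psql client-side equivalent, reads the local file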
From: "Joshua D. Drake" <[EMAIL PROTECTED]>
> >Since pgpool has this capability, how about including a hook that allows a
> >script to be run when pgpool detects a problem with the master? That would
> >allow action to be taken to investigate further and, if required, switchover
> >or failover an
Good morning, everybody!
I have a problem that I don't seem to be able to solve by myself,
that's why I kindly ask the list now…
I have a database containing some tables, containing different receipts.
Every receipt has an unknown number of ingredients linked to it, and
every ingredient's name is
Hello Tom,
On Feb 4, 2005, at 12:37 AM, Tom Lane wrote:
Joseph Kiniry <[EMAIL PROTECTED]> writes:
Does anyone have any suggestions on this problem? How can I recreate
pg_user?
Sure, just run the CREATE VIEW command executed by initdb; it's in the
initdb shell script. Note that all the objects cre
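For reference, the view definition is roughly the following in the 7.4-era initdb (check the initdb script shipped with your version for the exact text):
CREATE VIEW pg_user AS
    SELECT usename, usesysid, usecreatedb, usesuper, usecatupd,
           '********'::text AS passwd, valuntil, useconfig
    FROM pg_shadow;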
> I'm not suggesting that it's the place of pgpool to *force* a failover. I
> am suggesting that one of the criteria that is likely to be useful is the
> inability to connect to the master, and that's something that pgpool,
> apparently, detects. It seems unnecessary to use completely different
>
Doh, sorry - you're completely correct! Silly me...
Can you not add a serial or sequence column to the table for the
purposes of the de-dupe?
Then create an index on that column in one operation at the end and use
that in the way that you would use Oracle's rowid from the examples?
John Sidney-
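A rough sketch of that approach (the table and column names are made up, and for a table this size the DELETE would probably be batched):
ALTER TABLE bigtable ADD COLUMN dedup_id integer;
CREATE SEQUENCE bigtable_dedup_seq;
UPDATE bigtable SET dedup_id = nextval('bigtable_dedup_seq');
CREATE INDEX bigtable_dedup_idx ON bigtable (dedup_id);
DELETE FROM bigtable
 WHERE dedup_id NOT IN (SELECT min(dedup_id) FROM bigtable GROUP BY key1, key2);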
Bricklen Anderson wrote:
Any ideas on what I should try next? Considering that this db is not in
production yet, I _do_ have the liberty to rebuild the database if
necessary. Do you have any further recommendations?
I recall reading something in this ML about problems with the way that
Ext3 FS r
> If you have so much update load that one server cannot accommodate that
> load, then you should wonder why you'd expect that causing every one
> of these updates to be applied to (say) 3 servers would "diminish"
> this burden.
The update/query load isn't the real issue here, it's that these two
s
<[EMAIL PROTECTED]> wrote:
>
> Can you not add a serial or sequence column to the table for the
> purposes of the de-dupe?
>
> Then create an index on that column in one operation at the end and use
> that in the way that you would use Oracle's rowid from the examples?
Yes. It could work. I hav
I'm trying to fill a table with several million rows that are obtained
directly from a complex query.
For whatever reason, Postgres at one point starts using several
gigabytes of memory, which eventually slows down the system until it no
longer responds.
At first I assumed I had unintentionally
Martha Stewart called it a Good Thing when [EMAIL PROTECTED] (David Fetter)
wrote:
> On Thu, Feb 03, 2005 at 07:03:36PM -0600, Mike Nolan wrote:
>> > Slony-1 is perfectly capable of replicating to a slave database,
>> > then letting you decide to promote it to master, which is just
>> > what you'd
On Fri, 4 Feb 2005 09:44:15 +0100, Victor Spång Arthursson
<[EMAIL PROTECTED]> wrote:
> Good morning, everybody!
>
> I have a problem that I don't seem to be able to solve by myself,
> that's why I kindly ask the list now…
>
> I have a database containing some tables, containing different receipts
On Fri, 4 Feb 2005, Mike Nolan wrote:
If you have so much update load that one server cannot accommodate that
load, then you should wonder why you'd expect that causing every one
of these updates to be applied to (say) 3 servers would "diminish"
this burden.
The update/query load isn't the real issu
Hello everyone,
I'm building a postgresql db which will have to get lots of data
from "the outside" (customers, that is). The db has lots of
constraints, and I'm sure that our customers will offer lots of
invalid information. We receive the information in csv format. My
first thought was to read t
On Fri, 4 Feb 2005 13:32:40 +0100 (CET), Joolz
<[EMAIL PROTECTED]> wrote:
> Hello everyone,
>
> I'm building a postgresql db which will have to get lots of data
> from "the outside" (customers, that is). The db has lots of
> constraints, and I'm sure that our customers will offer lots of
> invalid
Victor Spång Arthursson wrote:
The tables are linked according to the following:
receipts <- related_ingredients <- ingredients <- languages
If I just do JOINs, I will not be able to find out if only one or all of
the ingredients are translated. What I need is something that, for
example, returns
Mike Rylander zei:
> On Fri, 4 Feb 2005 13:32:40 +0100 (CET), Joolz
> <[EMAIL PROTECTED]> wrote:
>> Hello everyone,
>>
>> I'm building a postgresql db which will have to get lots of data
>> from "the outside" (customers, that is). The db has lots of
>> constraints, and I'm sure that our customers w
On Feb 4, 2005, at 21:32, Joolz wrote:
What I need is an import where all valid lines from the csv files
are read into the db, and I also get a logfile for all invalid
lines, stating the line number plus the pg error message so I can
see which constraint was violated.
I can't think of a direct, ele
Michael Glaesemann zei:
>
> On Feb 4, 2005, at 21:32, Joolz wrote:
>
>> What I need is an import where all valid lines from the csv files
>> are read into the db, and I also get a logfile for all invalid
>> lines, stating the line number plus the pg error message so I can
>> see which constraint wa
I use a trigger on tables with foreign key references to either ignore
the insert row or insert an appropriate matching row in the referenced
table, if it does not exist. In the function, I just raise a notice
that I am doing this. This is a simple example:
create or replace function tgf_inser
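The function itself is cut off above; a minimal sketch of what such a trigger might look like, with made-up table names ("child" referencing "parent"), is:
create or replace function tgf_insert_parent() returns trigger as '
begin
    perform 1 from parent where id = new.parent_id;
    if not found then
        insert into parent (id) values (new.parent_id);
        raise notice ''inserted missing parent %'', new.parent_id;
    end if;
    return new;
end;
' language plpgsql;

create trigger tg_insert_parent before insert on child
    for each row execute procedure tgf_insert_parent();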
[snip]
> I'm afraid this is a bit too indirect IMHO. As I want to know the
> line number in which an error occurs, I would have to traverse the
> error-tolerant table with limit 1 offset N, and report N when an
> error occurs, hoping that the row order is identical to the line
> order in the csv fi
Csaba Nagy zei:
> [snip]
>> I'm afraid this is a bit too indirect IMHO. As I want to know the
>> line number in which an error occurs, I would have to traverse the
>> error-tolerant table with limit 1 offset N, and report N when an
>> error occurs, hoping that the row order is identical to the lin
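One possible shape of the staging-table idea being discussed, with a made-up target table customers(code integer, name text); the serial column is filled by its default as COPY loads the rows in file order, so the original CSV line number is available for error reporting:
CREATE TABLE staging (
    line_no serial,
    code    text,
    name    text
);
COPY staging (code, name) FROM '/tmp/customers.csv' WITH DELIMITER ',';
-- a script or function can then try each staging row against the real table
-- one at a time, logging staging.line_no plus the error message for every
-- row that violates a constraint.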
>On Fri, Feb 04, 2005 at 09:27:08AM +0200, Ben-Nes Yonatan wrote:
>> Hi all,
>>
>> Does anyone know if PostgreSQL has a function which works like
>> load_file() in MySQL?
>
> I am not quite sure what load_file() does, but check the COPY command
> and the analogous \copy in psql. As with many other P
Sean Davis zei:
> I use a trigger on tables with foreign key references to either
> ignore
> the insert row or insert an appropriate matching row in the
> referenced
> table, if it does not exist. In the function, I just raise a notice
> that I am doing this. This is a simple example:
> create or
On Feb 4, 2005, at 8:30 AM, Joolz wrote:
Sean Davis zei:
I use a trigger on tables with foreign key references to either
ignore
the insert row or insert an appropriate matching row in the
referenced
Thanks Sean, but in my situation I don't want the database to be so
versatile as to handle all the e
On Feb 4, 2005, at 8:27 AM, Joolz wrote:
Csaba Nagy zei:
[snip]
I'm afraid this is a bit too indirect IMHO. As I want to know the
line number in which an error occurs, I would have to traverse the
error-tolerant table with limit 1 offset N, and report N when an
error occurs, hoping that the row ord
Hello,
We have a table cm_quotastates which has exactly
4624564 rows and 25 columns and 9 indexes... Out of
these, our code retrieves 75262 rows and modifies just
one column in each row... but updating these to the
database is taking some significant time (around 20
minutes)... Tried the following wit
On Fri, 4 Feb 2005, Eric Jain wrote:
> I'm trying to fill a table with several million rows that are obtained
> directly from a complex query.
>
> For whatever reason, Postgres at one point starts using several
> gigabytes of memory, which eventually slows down the system until it no
> longer resp
On Thu, Feb 03, 2005 at 23:04:57 -0200,
Clodoaldo Pinto <[EMAIL PROTECTED]> wrote:
> This one must be obvious for most here.
>
> I have a 170-million-row table from which I want to eliminate
> duplicate "would-be" keys and leave only uniques.
>
> I found a query in http://www.jlcomp.demon.co.u
OK, I can uninstall and then reinstall, but I still do not have an answer to
my question. Where is there a step-by-step setup procedure for Windows
after install, like there is for Linux?
Art
Dann Corbit wrote:
You cannot install the service as administrator, because of security
risks.
Try this thing
On Friday 04 Feb 2005 7:04 pm, Ben-Nes Yonatan wrote:
> First, thanks for your answer David, but I'm afraid that I still have a problem
> with this solution... I'm not trying to upload a big file which contains data
> that is supposed to be divided into plenty of rows; I want to upload a big
> file (wav, pp
On Fri, Feb 04, 2005 at 05:59:26 -0800,
Stephan Szabo <[EMAIL PROTECTED]> wrote:
> On Fri, 4 Feb 2005, Eric Jain wrote:
>
> > I'm trying to fill a table with several million rows that are obtained
> > directly from a complex query.
> >
> > For whatever reason, Postgres at one point starts using
On Feb 4, 2005, at 8:34 AM, Ben-Nes Yonatan wrote:
On Fri, Feb 04, 2005 at 09:27:08AM +0200, Ben-Nes Yonatan wrote:
Hi all,
Does anyone know if PostgreSQL has a function which works like
load_file() in MySQL?
I am not quite sure what load_file() does, but check the COPY command
and the analogous \cop
http://pginstaller.projects.postgresql.org could be what you are looking
for.
//Magnus
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Art Fore
> Sent: Friday, February 04, 2005 3:09 PM
> To: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] P
"Mike Cox" <[EMAIL PROTECTED]> wrote in message
Well, yes. You have to be a member of the mailing list you want to
post to even if you are posting through Usenet. Otherwise your post
will bounce to your *email* account. Half of those who respond, even
when you are a member of the list, respond via
Stephan Szabo wrote:
Explain output would also be useful. I would wonder if it's a problem
with a hash that misestimated the necessary size; you might see if
analyzing the tables involved changes its behavior.
I executed ANALYZE just before running the problematic statement. Will
post the output
Alban Hertroys wrote:
Bricklen Anderson wrote:
Any ideas on what I should try next? Considering that this db is not
in production yet, I _do_ have the liberty to rebuild the database if
necessary. Do you have any further recommendations?
I recall reading something in this ML about problems with
Bruno Wolff III wrote:
I think deferred triggers can also use a lot of memory.
I do indeed have several columns with REFERENCES x DEFERRABLE INITIALLY
DEFERRED...
Next time I run the procedure, I will try dropping the foreign key
constraints first.
Incidentally, it would be nice if Postgres had some
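A sketch of that approach, with made-up constraint and table names:
ALTER TABLE big_table DROP CONSTRAINT big_table_parent_fkey;
-- ... run the large INSERT ... SELECT ...
ALTER TABLE big_table ADD CONSTRAINT big_table_parent_fkey
    FOREIGN KEY (parent_id) REFERENCES parent (id)
    DEFERRABLE INITIALLY DEFERRED;
Re-adding the constraint validates the existing rows at ALTER time instead of queuing a deferred trigger event for every inserted row.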
On Fri, 4 Feb 2005 13:56:23 +0100 (CET), Joolz
<[EMAIL PROTECTED]> wrote:
> If is has to be perl, so be it, although I'm not a big fan. Do you
> think this is possible in python?
>
Sure. I just suggested Perl since that's my QnD tool of choice.
--
Mike Rylander
[EMAIL PROTECTED]
GPLS -- PINES
> We could decree that a contrib module's script should create a schema
> and shove everything it makes into that schema. Then "DROP SCHEMA CASCADE"
> is all you need to get rid of it. However, you'd probably end up having
> to add this schema to your search path to use the module conveniently.
>
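For concreteness, a sketch of that scheme with a hypothetical module name:
CREATE SCHEMA mymodule;
SET search_path TO mymodule, public;   -- needed to use the module conveniently
-- ... the module's CREATE FUNCTION / CREATE TABLE statements run here ...
DROP SCHEMA mymodule CASCADE;          -- later removes everything the module created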
Tom Lane wrote:
Bricklen Anderson <[EMAIL PROTECTED]> writes:
Tom Lane wrote:
But anyway, the evidence seems pretty clear that in fact the end of WAL is
in the 73 range, and so those page LSNs with 972 and 973 have to be
bogus. I'm back to thinking about dropped bits in RAM or on disk.
memtest86+ ran
On Fri, 2005-02-04 at 00:08, Tope Akinniyi wrote:
> Hi,
>
> Is there a replication solution for PostgreSQL? I learnt Slony 1 is
> for Linux OS.
Actually, it will work on any flavor of Unix as far as I know. And
apparently at least some folks on the slony mailing list are interested
in / work
Joseph Kiniry <[EMAIL PROTECTED]> writes:
> I'm currently blocked on the system catalog schema "pg_catalog";
> whence is it initialized?
That row in pg_namespace is missing, you mean? That's very odd ... what
rows do you see in pg_namespace? That should be loaded as part of the
basic bootstrap
Sorry about the last email; I sent it before adding a comment.
Will try this. thanks.
Art
Magnus Hagander wrote:
http://pginstaller.projects.postgresql.org could be what you are looking
for.
//Magnus
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Art Fore
Sen
Magnus Hagander wrote:
http://pginstaller.projects.postgresql.org could be what you are looking
for.
//Magnus
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Art Fore
Sent: Friday, February 04, 2005 3:09 PM
To: pgsql-general@postgresql.org
Subject: Re: [
Hi All,
I have stored event records in PostgreSQL 7.3.4 and now need to
calculate the duration between each event in succession. I have
a "record_id" and a "timestamp without time zone" column for each event.
What is a good way to calculate the difference in timestamps and store it
in the record
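One way that works without window functions on 7.3 (the table and column names below are stand-ins for the real ones):
SELECT e.record_id,
       e.event_time - (SELECT max(p.event_time)
                         FROM events p
                        WHERE p.event_time < e.event_time) AS time_since_previous
  FROM events e
 ORDER BY e.event_time;
The computed interval could then be written back with an UPDATE keyed on record_id, if a duration column is added.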
Hello.
I have been in this list for a very short period of time so, if my
questions have been answered before, please tell me and I will browse again in
the archives.
1. Is there anything in Postgre or third-party solutions similar to
Oracle’s SQL Loader, to upload a flat file into a tabl
On Fri, Feb 04, 2005 at 08:56:37 -0600,
Shawn Harrison <[EMAIL PROTECTED]> wrote:
> >"Mike Cox" <[EMAIL PROTECTED]> wrote in message
> >>Well, yes. You have to be a member of the mailing list you want to
> >>post to even if you are posting through usenet. Otherwise your post
> >>will bounce to
On Fri, Feb 04, 2005 at 10:24:44AM -0600, Bruno Wolff III wrote:
> For the benefit of other people reading this thread, you don't have to
> be a member of the lists to post. Nonsubscriber messages are moderated
> (which adds a delay), but on topic messages will get through.
Additionally, you can
Martin,
This looks really good. I wish it were going into Sarge, though of
course the timing isn't right for that. :)
A couple things I noticed about the automated upgrade procedure
(pg_version_upgrade):
1) Since it uses pg_dumpall, it doesn't seem to be capable of handling
databases with l
Venkatesh Babu <[EMAIL PROTECTED]> writes:
> We have a table cm_quotastates which has exactly
> 4624564 rows and 25 columns and 9 indexes... Out of
> these, our code retrieves 75262 rows and modifies just
> one column in each row... but updating these to the
> database is taking some significant time (
On 2005-02-04, at 13.00, Mike Rylander wrote:
Can you send the table structure and the query that does this? It may
just be a matter of adding a subselect with a HAVING clause, but we
won't know until we have more information.
Sure - coming up!
First table is the main receipt table:
tostipippitest=# s
Hi,
I'm doing some complicated joining and am getting error
messages about unknown relations and can't figure out
what's up. I'm wondering if "as" aliasing gives
an alias to the product of a join, not just the
one table that appears immediately in front of the
"as"?
Rather than try to describe
On Thu, 20 Jan 2005, Joshua D. Drake wrote:
Is there any evidence of the above claim? I've seen a link to a l-k
bug report about ext3, but apparently it was totally unconfirmed
(and a single bug does not mean a FS is not good - I remember XFS
being hammered heavily before being accepted into Linux)
Subject: plpgsql function errors
Hi Everyone -
I am new to this list, although I have been using PostgreSQL on and off for about a year now. I am trying to develop a webapp using Perl and CGI with PostgreSQL 7.4.6 as a backend database. One of the things I need is to create a transaction
On 02/04/2005 10:06:49 AM, Ignacio Colmenero wrote:
Hello.
I have been in this list for a very short period of time so, if my questions
have been answered before, please tell me and I will browse again in the
archives.
1. Is there anything in Postgre or third-party solutions similar to Oracle's
SQL
Hi,
On Thu, 03 Feb 2005 10:03:34 -0600, Pam Eggler <[EMAIL PROTECTED]> wrote:
> I noticed I was running low on space on my system, so I found this vacuum
> command. I ran it and it failed because it ran out of space:
>
> vacuum mytable;
> FATAL 2: ZeroFill failed to write
> /var/lib/pgsql/data/pg
Karl O. Pinc wrote:
4. Can I query an object in another database, like in Oracle's dblink?
I'm no expert. I don't believe so. You can query across schemas
in the same database but not across databases. You could do
something (anything!) by writing an external function in C or
whatever, but I c
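For what it's worth, the contrib/dblink module provides something along these lines (it has to be installed from contrib); a sketch, with made-up connection string and columns:
SELECT *
  FROM dblink('dbname=otherdb', 'SELECT id, name FROM remote_table')
       AS t(id integer, name text);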
On Fri, Feb 04, 2005 at 10:37:52AM -0500, phil campaigne wrote:
>
> Hi All,
> I have stored event records in Postgresql 7.3.4 and now need to
> calculate the duration between each event in succession. I have
> a "record_id" and a "timestamp without time zone" column for each event.
>
> What is
On Fri, 2005-02-04 at 11:59, Karl O. Pinc wrote:
> On 02/04/2005 10:06:49 AM, Ignacio Colmenero wrote:
> > 4. Can I query an object in another database, like in Oracle's
> > dblink?
>
> I'm no expert. I don't believe so. You can query across schemas
> in the same database but not across datab
On Fri, Feb 04, 2005 at 09:06:49AM -0700, Ignacio Colmenero wrote:
> 1. Is there anything in Postgre or third-party solutions similar to Oracle's
> SQL Loader, to upload a flat file into a table, according to certain rules?
> Any solutions you have tried before to solve this issue?
PostgreSQL (or
On Fri, Feb 04, 2005 at 11:40:50AM -0600, Juan Casero (FL FLC) wrote:
> Hi Everyone -
>
> I am new to this list, although I have been using PostgreSQL on and
> off for about a year now. I am trying to develop a webapp using Perl
> and CGI with PostgreSQL 7.4.6 as a backend database. One of th
"Karl O. Pinc" <[EMAIL PROTECTED]> writes:
> I'm doing some complicated joining and am getting error
> messages about unknown relations and can't figure out
> what's up. I'm wondering if "as" aliasing gives
> an alias to the product of a join, not just the
> one table that appears immediately in f
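For reference, the behaviour described in the documentation is that an alias applied to a parenthesized JOIN covers the whole join and hides the original table names within it; a small illustration with made-up tables:
SELECT j.*
  FROM (t1 JOIN t2 ON t2.t1_id = t1.id) AS j;
-- once the join is aliased as "j", t1 and t2 are no longer visible to the
-- outer query, so a reference such as t1.id in the SELECT list would fail.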
I tried putting those values into strings like you describe below but
then the server bombs. e.g...
customer_service=# select
trx_id('JUANCASERO3055128218','CREDIT','02/02/05','1','1','Aventura','02
/01/05','Tom');
ERROR: function trx_id("unknown", "unknown", "unknown", "unknown",
"unknown", "un
I have a postgresql 7.5.1 database with one table on a SuSE 9.2 server.
I also have the same database on my Windows XP laptop. I did a
backup-restore operation to get the original database onto the Windows
machine, but it seems I cannot update using that. It comes up with duplicate
primary key errors
On Fri, Feb 04, 2005 at 12:22:43PM -0600, Juan Casero (FL FLC) wrote:
> I tried putting those values into strings like you describe below but
> then the server bombs. e.g...
>
> customer_service=# select
> trx_id('JUANCASERO3055128218','CREDIT','02/02/05','1','1','Aventura','02
> /01/05','Tom');
Let me chime in here for a moment. My objective is not to bash any
filesystem technology. I have been using XFS consistently now for a
couple of years on all my Linux boxes. It works great. I love the
speed and performance. A number of years ago when it was first being
ported to Linux I tried
On Fri, Feb 04, 2005 at 12:44:35PM -0600, Juan Casero (FL FLC) wrote:
> Sorry about that. I did forget one parameter...
>
> customer_service=# select
> trx_id('JUANCASERO3055128218',805,'CREDIT','02/02/05','1','1','Aventura'
> ,'02/01/05','Tom');
> ERROR: function trx_id("unknown", integer, "unk
On Fri, Feb 04, 2005 at 09:06:49AM -0700, Ignacio Colmenero wrote:
> Hello.
>
> I have been in this list for a very short period of time so, if my
> questions have been answered before, please tell me and I will
> browse again in the archives.
>
> 1. Is there anything in Postgre or third-party so
Here is the output of that command. I ran it in a unix shell and
redirected the psql output to a file so I haven't touched it...
 Result data type | Schema | Name | Argument data types | Owner | Language | Source code | Description
------------------+--------+------+---------------------+-------+----------+-------------+-------------
By the way, I took your advice and redesigned the tables and the
function so that locking the table is not needed at all. I assume
this works because of MVCC.
Thanks,
juan
-Original Message-
From: Martijn van Oosterhout [mailto:[EMAIL PROTECTED]
Sent: Friday, February 04, 2005 1:56
On Fri, 4 Feb 2005 17:52:45 +0100, Victor Spång Arthursson
<[EMAIL PROTECTED]> wrote:
>
> 2005-02-04 kl. 13.00 skrev Mike Rylander:
>
> > Can you send the table structure and the query that does this? It may
> > just be a matter of adding a subselect with a HAVING clause, but we
> > won't know u
Is there any stronger medicine that's available (for instance, when a
backend won't respond to SIGTERM) and has no unfortunate side effects?
I just ran into this situation the other day (and made the unfortunate
discovery that SIGABRT is as bad as SIGKILL as far as a postmaster is
concerned).
oops -- forgot to send this back to list.
-- Forwarded Message --
Subject: Re: [GENERAL] Updating a table on local machine from remote
linux server
Date: Friday 04 February 2005 01:34 pm
From: "Andrew L. Gould" <[EMAIL PROTECTED]>
To: Art Fore <[EMAIL PROTECTED]>
On Friday 04
On Fri, Feb 04, 2005 at 01:44:10PM -0600, Thomas F.O'Connell wrote:
> Is there any stronger medicine that's available (for instance, when a
> backend won't respond to SIGTERM) and has no unfortunate side effects?
> I just ran into this situation the other day (and made the unfortunate
> discover
Which brings up a follow-up question: is it documented anywhere exactly
what goes on in recovery mode? If so, I've not found it.
When I've experienced this, it has seemed quicker just to stop and
restart postgres than to let recovery mode complete. Is that unsafe?
-tfo
--
Thomas F. O'Connell
Co
> On Fri, Feb 04, 2005 at 01:44:10PM -0600, Thomas F.O'Connell wrote:
> > Is there any stronger medicine that's available (for instance, when a
> > backend won't respond to SIGTERM) and has no unfortunate side effects?
> > I just ran into this situation the other day (and made the unfortunate
Jim Wilson <[EMAIL PROTECTED]> writes:
> If you are not very careful about how you handle orphaned connections
> in Postgres you will likely lose data...not "maybe" like a long
> shot...but "likely".
[ raised eyebrow ... ] Say again? I don't know of any reason why a
lost connection would cause
On Fri, Feb 04, 2005 at 01:14:44PM -0600, Juan Casero (FL FLC) wrote:
> Here is the output of that command. I ran it in a unix shell and
> redirected the psql output to a file so I haven't touched it...
Well, here's the problem. Your definition is:
> integer | public | trx_id | charact
> Jim Wilson <[EMAIL PROTECTED]> writes:
> > If you are not very careful about how you handle orphaned connections
> > in Postgres you will likely lose data...not "maybe" like a long
> > shot...but "likely".
>
> [ raised eyebrow ... ] Say again? I don't know of any reason why a
> lost connecti
On Fri, Feb 04, 2005 at 05:01:43PM -0500, Jim Wilson wrote:
> Rather than getting into the raised eyebrow thing ;-), I'd suggest
> checking your "qualifiers". Consider that with Postgres, if killing a
> single connection brings the whole server down, you will lose _all_
> uncommitted data. If
Jim Wilson <[EMAIL PROTECTED]> writes:
> Rather than getting into the raised eyebrow thing ;-), I'd suggest
> checking your "qualifiers". Consider that with Postgres, if killing a
> single connection brings the whole server down, you will lose _all_
> uncommitted data. If you did not, then I wo
[EMAIL PROTECTED] ("Joolz") writes:
> Hello everyone,
>
> I'm building a postgresql db which will have to get lots of data
> from "the outside" (customers, that is). The db has lots of
> constraints, and I'm sure that our customers will offer lots of
> invalid information. We receive the informati
Steve Crawford wrote:
On Friday 04 February 2005 7:37 am, you wrote:
Hi All,
I have stored event records in Postgresql 7.3.4 and now need to
calculate the duration between each event in succession. I have
a "record_id" and a "timestamp without time zone" column for each
event.
What is a good way
Correct me if I am wrong, but doesn't the postmaster notice that
something killed a backend and cause all the other ones to roll back?
Paul Tillotson
Neil Conway wrote:
Paul Tillotson wrote:
Does anyone know a safe way to shutdown just one backend
Sending it a SIGTERM via kill(1) should be safe.
> Jim Wilson <[EMAIL PROTECTED]> writes:
> > Rather than getting into the raised eyebrow thing ;-), I'd suggest
> > checking your "qualifiers". Consider that with Postgres, if killing a
> > single connection brings the whole server down, you will lose _all_
> > uncommitted data. If you did not
> On Fri, Feb 04, 2005 at 05:01:43PM -0500, Jim Wilson wrote:
>
> > Rather than getting into the raised eyebrow thing ;-), I'd suggest
> > checking your "qualifiers". Consider that with Postgres, if killing a
> > single connection brings the whole server down, you will lose _all_
> > uncommit
Martijn -
Thank you so much for your help. I finally got the stored procedure to
work as I wanted and your advice on nextval() and currval() helped me
get around the expected problem of how to address two transactions
trying to acquire a lock on the same table.
Best Regards,
Juan
-Original
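The usual shape of that pattern, with made-up table and sequence names:
INSERT INTO orders (id, customer)
    VALUES (nextval('orders_id_seq'), 'Smith');
-- currval() returns the value nextval() produced earlier in this session,
-- so the new id can be reused without taking any lock on the table:
INSERT INTO order_items (order_id, item)
    VALUES (currval('orders_id_seq'), 'widget');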
I am just a newbie, but logically:
Maybe the answer to that is much simpler.
Ask your network officer to tell you what bandwidth you
have at your current office and remote office.
What's the average:
a. website bandwidth.
b. current PostgreSQL office bandwidth.
I never used replication but it seem
Hi!
Some of my tables include table check constraints like this:
CREATE TABLE t1 (
CHECK(MyCheckFun(c1,c2)),
c1 int,
c2 int
) without oids;
Function MyCheckFun() references another table and checks whether the
related rows exist. MyCheckFun() raises an exception and aborts pg_restore
due to the uncon
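A sketch of what such a check function might look like (the real MyCheckFun is not shown above; "other_table" and its columns are made up):
CREATE OR REPLACE FUNCTION MyCheckFun(int, int) RETURNS boolean AS '
BEGIN
    PERFORM 1 FROM other_table WHERE a = $1 AND b = $2;
    IF NOT FOUND THEN
        RAISE EXCEPTION ''no related row for (%, %)'', $1, $2;
    END IF;
    RETURN true;
END;
' LANGUAGE plpgsql;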