Alvaro Herrera wrote:
> It is consistent only if it was taken when the postmaster was down.
OR if you are able to take a point-in-time snapshot at the volume level,
say using LVM or some SAN management tool. The effect is the same as if
the server is hardware reset unexpectedly, in that it requires crash recovery (WAL replay) on startup.
William Garrison wrote:
> Wait... there really is a pgfsck...? I just made that up as an example
> of something I wanted.
I had no idea either, but it does look like it:
http://svana.org/kleptog/pgsql/pgfsck.html
It's a perl script, so you may have a decent chance of getting it going
on win32.
Regarding: row-wise update.
In a previous thread:
http://archives.postgresql.org/pgsql-sql/2008-08/msg00122.php
it was mentioned that the "UPDATE ... SET ... FROM ... WHERE" syntax
is a PostgreSQL extension (one also employed by many other db flavors).
This creates a problem since each db flavor is
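For reference, a minimal sketch of both forms (table and column names here are hypothetical, not from the thread):

    -- PostgreSQL's extended form:
    UPDATE products
    SET price = s.new_price
    FROM staging s
    WHERE s.product_id = products.product_id;

    -- roughly equivalent standard SQL, without the FROM clause:
    UPDATE products
    SET price = (SELECT s.new_price FROM staging s
                 WHERE s.product_id = products.product_id)
    WHERE EXISTS (SELECT 1 FROM staging s
                  WHERE s.product_id = products.product_id);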
Given table T(c1 int) and function F(arg int) create an index on T using
F(c1). It appears that if you execute "create or replace function F" and
provide a different implementation, the index still contains the results
from the original implementation; thus if you execute something like "selec
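A minimal sketch of the hazard (names hypothetical); the fix is to REINDEX after replacing an IMMUTABLE function that an index depends on:

    CREATE TABLE t (c1 int);
    CREATE FUNCTION f(arg int) RETURNS int
        AS 'SELECT $1 * 2' LANGUAGE sql IMMUTABLE;
    CREATE INDEX t_f_idx ON t (f(c1));
    -- replacing f changes its behavior, but existing index entries still
    -- hold values computed by the old implementation:
    CREATE OR REPLACE FUNCTION f(arg int) RETURNS int
        AS 'SELECT $1 * 3' LANGUAGE sql IMMUTABLE;
    REINDEX INDEX t_f_idx;  -- rebuild so stored results match the new f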
Steve Clark <[EMAIL PROTECTED]> writes:
> explain insert into myevents select * from t_unit_event_log a where exists
> (select b.event_log_no from myevents b
> where a.event_status = 1 and a.event_ref_log_no IS NOT NULL
> and a.event_ref_log_no = b.event_log_no and
Jeff Davis <[EMAIL PROTECTED]> writes:
> This is in version 8.3.1 (I also tried 8.3.3).
> It looks like the sort is producing more rows than the input. The hash
> aggregate produces 10k, but the sort produces 10M.
> Merge Join (cost=199211.12..660979.37 rows=9998773 width=12) (actual
> time=8887
Hi all!
I'm playing with client thread abort issues in Npgsql. With a test
sample one of our users provided, we're seeing that even after the
client finishes its processing, some server processes are still stuck
processing the query.
The problem is that those server processes seem not t
Thanks all. Turns out I didn't have any problems at all. I just ran
this command, all rpms together as Devrim Gunduz suggested -
--
rpm -Uvh postgresql-8.2.9-1PGDG.rhel4.i386.rpm
postgresql-contrib-8.2.9-1PGDG.rhel4.i386.rpm
postgresql-devel-8.2.9-1PGDG.rhel4.i386.rpm
postgresql-libs-8.2.9-1PGDG.rhel4.i386.rpm
On Thu, Aug 28, 2008 at 3:00 AM, Phoenix Kiula <[EMAIL PROTECTED]> wrote:
> Now, is it correct that i don't have to stop my server and I can just
> execute these commands:
>
> rpm -Uvh --test postgresql-libs-8.2.9-1PGDG.rhel4.i386.rpm
> rpm -Uvh --test postgresql-8.2.9-1PGDG.rhel4.i386.rpm
> rpm -
Hi,
On Thu, 2008-08-28 at 09:00 +0800, Phoenix Kiula wrote:
> I hope this is a simple question. I checked the PG wiki but didn't
> find an answer:
> http://wiki.postgresql.org/wiki/RPM_Installation
That document may not be up2date. I'll check. Until then,
http://pgfoundry.org/docman/view.php/10
William Garrison wrote:
[snip]
> A database is not just tables - it is tables and
> transaction logs. Why on earth would PostgreSQL put the tables
> separately from the transaction logs?
Because you told it to. If you want everything on Z:\postgresql you just
initdb that location and point PG at
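To illustrate the split (names and paths hypothetical): a tablespace relocates only table and index storage, while the transaction log stays under the cluster's own data directory unless it is explicitly relocated.

    -- table and index files land on the SAN volume...
    CREATE TABLESPACE sanspace LOCATION 'Z:/postgresql/tablespace';
    CREATE TABLE events (id bigint, payload text) TABLESPACE sanspace;
    -- ...but pg_xlog (the WAL) stays under the original data directory,
    -- so a snapshot of Z: alone is not a complete, recoverable cluster.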
I hope this is a simple question. I checked the PG wiki but didn't
find an answer:
http://wiki.postgresql.org/wiki/RPM_Installation
I have my 8.2.3 RPMs installed. I would like to upgrade to 8.2.9.
Because this is a minor version upgrade (within 8.2) I understand that
I should be able to upgrade t
I have just come to a horrible realization about PostgreSQL that I'm
sure is supposed to be pre-requisite knowledge even going into this. So
everyone may laugh at me now.
We have a SAN volume, and we created a tablespace that points to
that SAN volume (Z: drive). This put all the table
Wait... there really is a pgfsck...? I just made that up as an example
of something I wanted. Great! And... how would I tell postgres to
start without using any indexes?
Martijn van Oosterhout wrote:
> On Wed, Aug 27, 2008 at 01:45:43PM -0400, William Garrison wrote:
>> Are there any kind of
Is there a mechanism to check exactly which row may have characters
that are not UTF8? I am trying with this kind of a command:
select id from employee
where modify_date between '2008-03-01' and '2008-07-01'
and joint_field_of_name_etc <> convert(joint_field_of_name_etc, 'UTF8')
This is of course
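One way to localize the bad rows (a sketch, assuming the 8.2-style convert(text, name, name), which raises an error on bytes invalid in the target encoding; the is_valid_utf8 helper is hypothetical, not a built-in):

    CREATE OR REPLACE FUNCTION is_valid_utf8(t text) RETURNS boolean AS $$
    BEGIN
        PERFORM convert(t, 'UTF8', 'UTF8');  -- round-trip; bad bytes error out
        RETURN true;
    EXCEPTION WHEN OTHERS THEN
        RETURN false;
    END;
    $$ LANGUAGE plpgsql;

    SELECT id FROM employee
    WHERE modify_date BETWEEN '2008-03-01' AND '2008-07-01'
      AND NOT is_valid_utf8(joint_field_of_name_etc);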
On Wed, Aug 27, 2008 at 01:45:43PM -0400, William Garrison wrote:
> Since I made my post, we found that we can't do a pg_dump. :( Every
> time this error appears in the logs, postgres forcibly closes any
> connections (including any running instances of pgadmin or pg_dump) when
> it runs this l
William Garrison wrote:
> 1) I have a file system backup that *IS* consistent. So I should not
> need any WAL files at all right?
It is consistent only if it was taken when the postmaster was down.
> **update**
> I got it working. Here's how
> 1) I have a file system snapshot. No WAL files
Hi,
Phoenix Kiula wrote:
> Hello. I have a database dump. With data and schema, which includes
> all the constraints and rules. But it seems the pg_dumpall command
> does not copy data in such a way that foreign keys are satisfied upon
> restoring. Because tables are inter-related, importing them keeps
> giving errors and eventually
Alvaro Herrera wrote:
> William Garrison wrote:
>> I have a PostgreSQL database on Windows Server 2003, and the database is
>> kept on a SAN that has the ability to make instantaneous snapshots.
>> Once I have made such a snapshot, I am unclear how to re-attach it to
>> another postgres database on a
Hi,
Does anybody have experience running a Postgres database server in a
Solaris zone? Do you see any problems with such a setup? Any traps?
Thanks,
-'\/\/\/`v^v^v^v^v^v^v
Nina Markova,
Database Analyst
Sorry Alvaro, I sent the message to you directly ;P
It's the same approach (restoring from a snapshot) that MySQL uses on ZFS.
If you don't change anything in the database, why wouldn't it work?
You just restart the service with the same path.
The catch is that you need to stop Postgres BEFORE taking the snapshot.
This is in version 8.3.1 (I also tried 8.3.3).
It looks like the sort is producing more rows than the input. The hash
aggregate produces 10k, but the sort produces 10M.
Am I just misinterpreting this output? Even the optimizer thinks that
the output of the hashagg and the output of the sort should be the same.
William Garrison wrote:
> I have a PostgreSQL database on Windows Server 2003, and the database is
> kept on a SAN that has the ability to make instantaneous snapshots.
> Once I have made such a snapshot, I am unclear how to re-attach it to
> another postgres database on another machine. Po
> Any suggestions? Is my procedure correct? Would I need to also copy
> the transaction logs or something like that?
the 'by the book' procedure for this operation is to use
pg_dumpall > dump_file.sql
and later
psql -f dump_file.sql postgres
pg_dumpall gives you a transaction-consistent dump of each database.
I have a PostgreSQL database on Windows Server 2003, and the database is
kept on a SAN that has the ability to make instantaneous snapshots.
Once I have made such a snapshot, I am unclear how to re-attach it to
another postgres database on another machine. Postgres seems to create
a directory
"Phoenix Kiula" <[EMAIL PROTECTED]> writes:
> This is the dump command being used on a 8.2.3 database on Linux:
>$ pg_dumpall > mydb.sql
>$ umask 077
>$ gzip mydb.sql
> Then I download the mydb.sql.gz file into my local computer (Mac OSX
> with 8.3.3) and unzip it to mydb.sql.
That's
Craig Ringer wrote:
> William Garrison wrote:
>> I fear I have a corrupted database, and I'm not sure what to do.
> First, make sure you have a recent backup. If your backups rotate, stop
> the rotation so that all currently available historical copies of the
> database are preserved from now on
On Wed, Aug 27, 2008 at 8:40 AM, Phoenix Kiula <[EMAIL PROTECTED]> wrote:
> Thanks. But there seems to be a tangible slowdown of DB operations
> during the time that pg_dump is running. Perhaps some of my
> postgresql.conf variables are not geared to this. (I vaguely recall
> setting it up so that
Martijn van Oosterhout wrote:
> On Wed, Aug 27, 2008 at 11:00:54AM -0400, Steve Clark wrote:
>> So I did an explain and I see it is doing a seq scan in the last sub plan -
>> there are about 375,000 rows
>> in myevent - why isn't it using the index instead of doing a seq scan?
>> create unique index indx1myevents on myevents (eve
On Wed, Aug 27, 2008 at 05:14:46PM +0200, Ivan Sergio Borgonovo wrote:
> On Wed, 27 Aug 2008 14:47:24 +0100 Sam Mason <[EMAIL PROTECTED]> wrote:
> > OK, so it's the UPDATE that's taking the time. I'm not experienced
> > enough to say why though.
>
> I can't get it either.
> I'd say that:
> - use
On Wed, Aug 27, 2008 at 11:54:27PM +0800, Phoenix Kiula wrote:
> OLD DB:
> Since 2002. May contain non-UTF8 data. But I thought I had modified it
> all when I changed it to a UTF-8 pgsql database (it was originally
> MySQL). The database works very well on a very busy website.
> Everything on that w
On Wed, Aug 27, 2008 at 11:00:54AM -0400, Steve Clark wrote:
> So I did an explain and I see it is doing a seq scan in the last sub plan -
> there are about 375,000 rows
> in myevent - why isn't it using the index instead of doing a seq scan?
>
> create unique index indx1myevents on myevents (eve
Ok. Here is what it is.
OLD DB:
Since 2002. May contain non-UTF8 data. But I thought I had modified it
all when I changed it to a UTF-8 pgsql database (it was originally
MySQL). The database works very well on a very busy website.
Everything on that website is now UTF-8. I wish to mirror this
databa
In response to "Phoenix Kiula" <[EMAIL PROTECTED]>:
> On Wed, Aug 27, 2008 at 10:53 PM, Tom Lane <[EMAIL PROTECTED]> wrote:
>
> >
> > No, you were right the first time: just feed the dump script to psql.
> >
>
> Ok. Tried that. Because there's no database, I have to execute the
> psql command as
On Wed, Aug 27, 2008 at 11:20:55PM +0800, Phoenix Kiula wrote:
> This seems to be it. This is what I should be executing, except that
> it spews out many errors, like:
>
>ERROR: invalid byte sequence for encoding "UTF8": 0x80
You have bad data in your database. Apparently, you have an encoding problem.
On Wed, Aug 27, 2008 at 10:40:41PM +0800, Phoenix Kiula wrote:
> Thanks. But there seems to be a tangible slowdown of DB operations
> during the time that pg_dump is running.
Yes. Pg_dump copies all the data out, so it puts load on your
database and disks.
--
Andrew Sullivan
[EMAIL PROTECTED]
On Wed, Aug 27, 2008 at 10:53 PM, Tom Lane <[EMAIL PROTECTED]> wrote:
>
> No, you were right the first time: just feed the dump script to psql.
>
Ok. Tried that. Because there's no database, I have to execute the
psql command as "postgres" otherwise it doesn't work.
So here's my command:
$
On Wed, 27 Aug 2008 14:47:24 +0100
Sam Mason <[EMAIL PROTECTED]> wrote:
> On Wed, Aug 27, 2008 at 02:58:18PM +0200, Ivan Sergio Borgonovo
> wrote:
> > On Wed, 27 Aug 2008 12:45:42 +0100 Sam Mason <[EMAIL PROTECTED]>
> > wrote:
> > > Have you been vacuuming (non-full) between runs? and as always,
>
Hello List,
I have inherited an existing application. It has a table of events and some
events may reference an earlier
event. We have a script that saves the first N number of events for each
device, but we also want to save
any additional event that is referenced by one of the first N events.
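A sketch built from the INSERT quoted elsewhere in this digest (table and column names myevents, t_unit_event_log, event_log_no, event_ref_log_no are from that message; the NOT EXISTS guard against duplicate inserts is an assumption, not from the original script):

    INSERT INTO myevents
    SELECT a.*
    FROM t_unit_event_log a
    WHERE a.event_status = 1
      AND a.event_ref_log_no IS NOT NULL
      AND EXISTS (SELECT 1 FROM myevents b
                  WHERE b.event_log_no = a.event_ref_log_no)
      AND NOT EXISTS (SELECT 1 FROM myevents c
                      WHERE c.event_log_no = a.event_log_no);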
"Phoenix Kiula" <[EMAIL PROTECTED]> writes:
> Then I tried this:
> $ pg_restore -h localhost mydb.sql
> This tells me:
> pg_restore: [archiver] input file does not appear to be a valid archive
No, you were right the first time: just feed the dump script to psql.
regards, tom lane
On Wed, Aug 27, 2008 at 10:42 PM, Tom Lane <[EMAIL PROTECTED]> wrote:
> For pg_dumpall, you shouldn't even have created the
> databases, just start from a virgin installation.
Should I have CREATEd the db at least without the table structure?
I dropped the database locally. Entirely. Gone.
T
On Wed, Aug 27, 2008 at 10:34 PM, Bill Moran
<[EMAIL PROTECTED]> wrote:
>>
>>
>> Dumb question. Will this kind of pg_dumpall lead to downtime, I mean
>> is there a database lock during this time?
>
> No.
Thanks. But there seems to be a tangible slowdown of DB operations
during the time that pg_dump is running. Perhaps some of my
postgresql.conf variables are not geared to this.
In response to "Phoenix Kiula" <[EMAIL PROTECTED]>:
> Thanks Bill for this recommendation:
>
>
> > *) In the early step dump the data only with pg_dumpall --disable-triggers
> > -a
>
>
> Dumb question. Will this kind of pg_dumpall lead to downtime, I mean
> is there a database lock during this time?
Thanks Bill for this recommendation:
> *) In the early step dump the data only with pg_dumpall --disable-triggers -a
Dumb question. Will this kind of pg_dumpall lead to downtime, I mean
is there a database lock during this time?
Thanks!
In response to "Phoenix Kiula" <[EMAIL PROTECTED]>:
> > The recommended solution is to use a regular schema-and-data dump.
> > If you really have to separate schema and data, the --disable-triggers
> > switch might help, though you open yourself to the possibility of
> > loading inconsistent data that way.
On Wed, Aug 27, 2008 at 9:52 AM, Michael Nolan <[EMAIL PROTECTED]> wrote:
>> This is a nice touch. With a little bash-fu you could do a find |
>> xargs rm and list/kill the files in one pass. In the standby setups
>> I've done I usually script the whole process, a prep on the main and a
>> startu
On Wednesday 27 August 2008 09:36, Phoenix Kiula wrote:
> On Wed, Aug 27, 2008 at 9:29 PM, Terry Lee Tucker <[EMAIL PROTECTED]> wrote:
> > We have all sorts of constraints and foreign keys and we have never had
> > any problem with pg_restore related to dumping such that foreign keys are
> > satisfied. You must have data already in the database that violates the constraints.
> The recommended solution is to use a regular schema-and-data dump.
> If you really have to separate schema and data, the --disable-triggers
> switch might help, though you open yourself to the possibility of
> loading inconsistent data that way.
Thanks Tom.
This is the dump command being used on a 8.2.3 database on Linux:
$ pg_dumpall > mydb.sql
$ umask 077
$ gzip mydb.sql
--- On Tue, 8/26/08, Richard Broersma <[EMAIL PROTECTED]> wrote:
> From: Richard Broersma <[EMAIL PROTECTED]>
> Subject: [GENERAL] PITR - base backup question
> To: "pgsql-general@postgresql.org", [EMAIL PROTECTED]
> Date: Tuesday, August 26, 2008, 10:53 PM
> From the following link:
> http
"Phoenix Kiula" <[EMAIL PROTECTED]> writes:
> Hello. I have a database dump. With data and schema, which includes
> all the constraints and rules. But it seems the pg_dumpall command
> does not copy data in such a way that foreign keys are satisfied upon
> restoring. Because tables are inter-related,
On Wed, Aug 27, 2008 at 8:32 AM, Merlin Moncure <[EMAIL PROTECTED]> wrote:
> > 3. Shut down the Postgresql server running on the backup server, if any
> > pg_ctl stop
> > (Use 'ps ax' to make sure the server is stopped.)
>
> probably pg_ctl -m fast stop or -m immediate...since we a
On Wed, Aug 27, 2008 at 02:58:18PM +0200, Ivan Sergio Borgonovo wrote:
> On Wed, 27 Aug 2008 12:45:42 +0100 Sam Mason <[EMAIL PROTECTED]> wrote:
> > Have you been vacuuming (non-full) between runs? and as always,
> > are the stats reasonably up to date?
>
> there is autovacuum running regularly an
On Wed, Aug 27, 2008 at 9:29 PM, Terry Lee Tucker <[EMAIL PROTECTED]> wrote:
> We have all sorts of constraints and foreign keys and we have never had any
> problem with pg_restore related to dumping such that foreign keys are
> satisfied. You must have data already in the database that violates the constraints.
On Wed, Aug 27, 2008 at 9:18 AM, Michael Nolan <[EMAIL PROTECTED]> wrote:
> I have what I have sometimes called a 'tepid spare' backup. Once a week I
> copy the physical files over to another system (actually to two of them) and
> every few hours I make sure the archived WAL log files are in sync (using rsync).
On Wednesday 27 August 2008 09:19, Phoenix Kiula wrote:
> Hello. I have a database dump. With data and schema, which includes
> all the constraints and rules. But it seems the pg_dumpall command
> does not copy data in such a way that foreign keys are satisfied upon
> restoring. Because tables are
Hello. I have a database dump. With data and schema, which includes
all the constraints and rules. But it seems the pg_dumpall command
does not copy data in such a way that foreign keys are satisfied upon
restoring. Because tables are inter-related, importing them keeps
giving errors and eventually
I have what I have sometimes called a 'tepid spare' backup. Once a week I
copy the physical files over to another system (actually to two of them) and
every few hours I make sure the archived WAL log files are in sync (using
rsync.)
Anyway, here's the cookbook guide I wrote for updating one of th
On Wed, 27 Aug 2008 12:45:42 +0100
Sam Mason <[EMAIL PROTECTED]> wrote:
> On Wed, Aug 27, 2008 at 01:19:30PM +0200, Ivan Sergio Borgonovo
> wrote:
> > but this looks much slower than the function:
> > function: 113sec
> > vs.
> > single statement: 488sec
> > I repeated the test 3 times with similar results.
On Tue, Aug 26, 2008 at 9:04 PM, Richard Broersma
<[EMAIL PROTECTED]> wrote:
> On Tue, Aug 26, 2008 at 5:38 PM, Merlin Moncure <[EMAIL PROTECTED]> wrote:
>
>
>> If you ever want to mess around with log
>> shipping I strongly suggest you go through the motions of setting up a
>> warm standby via the
On Wed, Aug 27, 2008 at 01:19:30PM +0200, Ivan Sergio Borgonovo wrote:
> but this looks much slower than the function:
> function: 113sec
> vs.
> single statement: 488sec
> I repeated the test 3 times with similar results.
Have you been vacuuming (non-full) between runs? and as always, are the
stats reasonably up to date?
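For example, refreshing statistics between the timed runs (table names from the thread):

    VACUUM ANALYZE catalog_items;
    VACUUM ANALYZE catalog_itemauthor;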
Teemu Juntunen wrote:
> I think this has been asked before, but I can't find the
> answer in the archive or on Google. How to disable/enable all the
> triggers in a database? I have a problem with disabled triggers
> after database restore. If there is no simple way, could this
> be made in a function
On Wed, 27 Aug 2008 10:32:43 +0200
Ivan Sergio Borgonovo <[EMAIL PROTECTED]> wrote:
> create or replace function UpdateAuthors()
> returns void
> as
> $$
> declare
> _row record;
> _ItemID bigint;
> _Authors varchar(1024);
> _Name varchar(50);
> begin
> _Authors:='';
> _ItemID:=null;
>
Oops, both my statements were horribly broken. They needed a WHERE
clause for the UPDATE.
On Wed, Aug 27, 2008 at 11:44:20AM +0100, Sam Mason wrote:
> UPDATE catalog_items SET authors=array_to_string(x.authors,', ')
> FROM (
> SELECT ia.itemid, array_accum(trim(' \t]' from a.name)) AS authors
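A sketch of the repaired statement (the subquery's join, filter, and GROUP BY are reconstructed from Ivan's query further down this digest; array_accum is the user-defined aggregate from the PostgreSQL docs' aggregate example, not a built-in):

    UPDATE catalog_items SET authors = array_to_string(x.authors, ', ')
    FROM (
        SELECT ia.itemid, array_accum(trim(' \t]' from a.name)) AS authors
        FROM catalog_itemauthor ia
        JOIN catalog_author a ON a.authorid = ia.authorid
        WHERE a.name IS NOT NULL
        GROUP BY ia.itemid
    ) x
    WHERE catalog_items.itemid = x.itemid;  -- the WHERE clause that was missing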
On Wed, Aug 27, 2008 at 10:32:43AM +0200, Ivan Sergio Borgonovo wrote:
> How can I update catalog_items.Authors
>
> create table catalog_items(
> ItemID bigint primary key,
> Authors varchar(1024)
> );
The type "varchar(1024)" looks a little awkward, wouldn't an unadorned
TEXT be easier? if y
On Wednesday 27 August 2008 04:24, Teemu Juntunen wrote:
> Hi,
>
> I think this has been asked before, but I can't find the answer in the archive
> or on Google. How to disable/enable all the triggers in a database? I have a
> problem with disabled triggers after database restore. If there is no
> simple w
Hi,
I made the function myself. Here it is, if anyone else has a need for this.
Teemu
/* Enable/disable all the triggers in database */
CREATE OR REPLACE FUNCTION fn_triggerall(DoEnable boolean) RETURNS integer AS
$BODY$
DECLARE
mytables RECORD;
BEGIN
FOR mytables IN SELECT relname FROM pg_class
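A minimal sketch of such a function, assuming ALTER TABLE ... ENABLE/DISABLE TRIGGER ALL (available since 8.1) is the intended mechanism and that only ordinary tables in the public schema should be touched:

    CREATE OR REPLACE FUNCTION fn_triggerall(DoEnable boolean) RETURNS integer AS
    $BODY$
    DECLARE
        mytables RECORD;
    BEGIN
        FOR mytables IN
            SELECT c.relname
            FROM pg_class c
            JOIN pg_namespace n ON n.oid = c.relnamespace
            WHERE c.relkind = 'r' AND n.nspname = 'public'
        LOOP
            EXECUTE 'ALTER TABLE ' || quote_ident(mytables.relname)
                || CASE WHEN DoEnable THEN ' ENABLE' ELSE ' DISABLE' END
                || ' TRIGGER ALL';
        END LOOP;
        RETURN 1;
    END;
    $BODY$ LANGUAGE plpgsql;

Run it as a superuser; note that DISABLE TRIGGER ALL also disables the system triggers that enforce foreign keys.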
2008/8/27 Ivan Sergio Borgonovo <[EMAIL PROTECTED]>:
> How can I update catalog_items.Authors
>
> create table catalog_items(
> ItemID bigint primary key,
> Authors varchar(1024)
> );
>
> taking results from
>
> select ia.ItemID, array_accum(trim(' \t]' from a.Name))
> from catalog_itemauthor ia
How can I update catalog_items.Authors
create table catalog_items(
ItemID bigint primary key,
Authors varchar(1024)
);
taking results from
select ia.ItemID, array_accum(trim(' \t]' from a.Name))
from catalog_itemauthor ia
join catalog_author a on a.AuthorID=ia.AuthorID
where a.Name is not null
Hi,
I think this has been asked before, but I can't find the answer in the archive
or on Google. How to disable/enable all the triggers in a database? I have a
problem with disabled triggers after database restore. If there is no simple
way, could this be made in a function where you find the table nam