[GENERAL] Serial key mismatch in master and slave, while using slony

2010-12-18 Thread Vishnu S.
Hi,

I am using Slony-I 2.0.2 on Windows. I have a master and a slave
machine, and replication works fine. But when the master switches to
the slave (I use the failover command to promote the slave to master),
there is a serial-key mismatch between the master and slave machines,
so insertions fail on the promoted slave. The error message shown is
'duplicate key violation'. Selecting the next value of the serial key
returns a value different from the actual one, i.e. much smaller than
the number of records in the table.
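The symptom is a sequence that was left behind the table data after promotion. A common hand fix, shown here only as an illustrative sketch (the table and column names are placeholders), is to advance each serial sequence past the largest key already present in its table:

```sql
-- After failover, resynchronize the sequence behind mytable.id
-- (mytable/id are hypothetical names; repeat for each serial column).
SELECT setval(
    pg_get_serial_sequence('mytable', 'id'),       -- the owned sequence
    COALESCE((SELECT max(id) FROM mytable), 1)     -- largest existing key
);
```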

Thanks & Regards,

Vishnu S



Re: [GENERAL] DB files, sizes and cleanup

2010-12-18 Thread Merlin Moncure
On Fri, Dec 17, 2010 at 5:22 PM, Gauthier, Dave  wrote:
> max_fsm_pages = 20
> max_fsm_relations = 12000
>
> There are 12 DBs with roughly 30 tables+indexes each.
>
> There are apparently 2 "bad" DBs.  Both identical in terms of data models 
> (clones with different data).  I've pg_dump'ed one of them to a file, dropped 
> the DB (took a long time as millions of files were deleted) and recreated it. 
>  It now has 186 files.
>
> ls -1 | wc took a while for the other bad one but eventually came up with 
> exactly 7,949,911 files, so yes, millions.  The other one had millions too 
> before I dropped it.  Something is clearly wrong.  But, since the DB recreate 
> worked for the other one, I'll do the same thing to fix this one too.
>
> What I will need to know then is how to prevent this in the future.  It's 
> very odd because the worst of the 2 bad DBs was a sister DB to one that's no 
> problem at all.  Here's the picture...
>
> I have a DB, call it "foo", that gets loaded with a ton of data at night.  
> The users query the thing readonly all day.  At midnight, an empty DB called 
> "foo_standby", which is identical to "foo" in terms of data model is reloaded 
> from scratch.  It takes hours.  But when it's done, I do a few rename 
> databases to swap "foo" with "foo_standby" (really just a name swap).  
> "foo_standby" serves as a live backup of yesterday's data.  Come the next 
> midnight, I truncate all the tables and start the process all over again.
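For reference, the rename swap Dave describes is presumably along these lines (database names taken from his message; the renames require that no sessions are connected to either database):

```sql
ALTER DATABASE foo RENAME TO foo_tmp;          -- move yesterday's data aside
ALTER DATABASE foo_standby RENAME TO foo;      -- freshly loaded DB goes live
ALTER DATABASE foo_tmp RENAME TO foo_standby;  -- old live DB becomes the backup
```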

Maybe something in this process is leaking files. If I were in your
shoes, I'd recreate the database from scratch, then watch the file
count carefully and look for unusual growth. This is probably not the
case, but if it is in fact a backend bug it will turn up again right
away.

Does anything else interesting jump out about these files? For example,
are there a lot of 0-byte files?

merlin
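The zero-byte check is easy to script. Below is a self-contained demo of the counting technique; on a real cluster you would point find at $PGDATA/base/&lt;db-oid&gt; instead of a scratch directory (that path is an assumption, adjust for your install):

```shell
# Build a scratch directory with two empty files and one non-empty file,
# then count total files and zero-byte files the way you would on a cluster.
dir=$(mktemp -d)
: > "$dir/a"                                              # zero-byte file
: > "$dir/b"                                              # zero-byte file
echo data > "$dir/c"                                      # non-empty file
total=$(find "$dir" -type f | wc -l | tr -d ' ')          # all files
empty=$(find "$dir" -type f -size 0 | wc -l | tr -d ' ')  # empty files only
echo "$total files, $empty zero-byte"
rm -r "$dir"
```

A sudden excess of files over the expected table+index count, or a large population of zero-byte files, is the kind of unusual growth Merlin suggests watching for.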

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Copy From suggestion

2010-12-18 Thread Adrian Klaver
On Friday 17 December 2010 7:46:12 am Mark Watson wrote:
> Hello all,
> Firstly, I apologise if this is not the correct list for this subject.
> Lately, I've been working on a data conversion, importing into Postgres
> using Copy From. The text file I'm copying from is produced from an ancient
> program and produces either a tab or semi-colon delimited file. One file
> contains about 1.8M rows and has a 'comments' column. The exporting
> program, which I am forced to use, does not surround this column with
> quotes and this column contains cr/lf characters, which I must deal with
> (and have dealt with) before I can import the file via Copy. Hence my
> suggestion: I was envisioning a parameter DELIMITER_COUNT which, if one was
> 100% confident that all columns are accounted for in the input file, could
> be used to alleviate the need to deal with cr/lf's in varchar and text
> columns. i.e., if copy loaded a line with fewer delimiters than
> delimiter_count, the next line from the text file would be read and the
> assignment of columns would continue for the current row/column.
> Just curious as to the thoughts out there.
> Thanks to all for this excellent product, and a merry Christmas/holiday
> period to all.
>
> Mark Watson

A suggestion: give pgloader a look:
http://pgloader.projects.postgresql.org/

If I am following you, it may already have the solution to the multi-line 
problem. In particular, read the History section of the docs.
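Mark's delimiter-count idea can also be prototyped outside the database, e.g. with awk. This sketch assumes a ';' delimiter and 4 columns per record (both assumptions, adjust for the real file) and joins continuation lines with a space:

```shell
# Sample input: record 1 has an embedded newline inside its third column.
printf 'a;b;comment line one\nline two;d\ne;f;single;g\n' > /tmp/demo.txt

# Accumulate physical lines until the expected field count is reached,
# then emit the logical record (the embedded newline becomes a space).
result=$(awk -v cols=4 '
{
    line = (line == "") ? $0 : line " " $0
    if (split(line, a, ";") >= cols) { print line; line = "" }
}' /tmp/demo.txt)
echo "$result"
```

One caveat of the count-based approach: it only detects a break that occurs before the final delimiter of a record; a break inside the last column leaves the first physical line already at the full field count.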


Thanks,
-- 
Adrian Klaver
adrian.kla...@gmail.com



[GENERAL] Any tricks to avoid recreating a standby server when switching back to the former primary?

2010-12-18 Thread vvoody
Hello, guys ;-)

Env: PostgreSQL 8.4.5 + CentOS 4.8

I have two servers, one primary and one standby, doing warm standby.
Everything works fine at the beginning: the primary generates archived
WAL files and the standby fetches and replays them.

Then I want the standby to become the primary. So I stop PostgreSQL
manually (service postgresql stop) and copy the last WAL file and the
files under pg_xlog/ to the standby. The former standby now becomes
the primary and the data is fine. By the way, it is still started in
archive mode.

That is not the end, though. After the new primary has been running
for a while and generating data, I want to switch back to the former
primary. I repeat the steps above: stop the PostgreSQL service
manually and copy the WAL files and the files under pg_xlog/ to the
former primary, after deleting its pg_xlog/ and the files under the
archive directory. PostgreSQL then starts, but the data generated
since the last switch does not come over.

The documentation (24.4.3, paragraph 5) says "To return to normal
operation we must fully recreate a standby server". I understand that,
but it covers failover, not a normal switchover like mine. Recreating
a standby server costs too much for us (the initial one is fine). We
are building a high-availability system, so I hope PostgreSQL can
switch back and forth quickly without data loss or inconsistency.
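For reference, one way to make the mandated recreate cheap on 8.4 is to rebuild the standby from an rsync-based base backup instead of a full dump/restore; only unchanged-block comparison and shipping of the differences is paid for. A sketch only, assuming a live cluster (the hostname and paths are hypothetical):

```shell
# Run on the current primary. Ships only changed data to the old
# primary, which then restarts as the new standby.
psql -c "SELECT pg_start_backup('resync', true)"
rsync -a --delete --exclude=pg_xlog/ /var/lib/pgsql/data/ oldprimary:/var/lib/pgsql/data/
psql -c "SELECT pg_stop_backup()"
# Then place a recovery.conf (with restore_command) on oldprimary
# and start it in recovery mode.
```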

Any ideas are appreciated ;-)
Best regards.

-- 
Free as freedom, slack as Slackware.
vvoody
