>> >I see that my Master and Slave replication process is not
>> >even running so at this point I am not sure how to start
>> >them running.  
>
>Are the slons running? They should be started from the Services control
>panel applet after the engines are registered. When they are, check the
>connections on each server to ensure they have actually connected. If
>not, check the log files/event log for error messages to see 
>why not. If
>required, modify the config files or pgpass files and
>restart the slons.
>
>If they are connected and running, check that you have listens defined
>on each node in the cluster for every other node in the cluster (except
>the admin node).
>
>Regards, Dave.
>

Sigh, I am not getting anywhere, so it seems.

1) I have been able to completely reconstruct the replication
   structure using pgAdmin3
2) I was able to manually run on the Master server, in a command window:
   slon MasterCluster dbname=MyTest user=postgres host=copper.cdkkt.com

   But I noticed the error:
   "2007-09-19 17:42:39 Pacific Daylight Time ERROR  remoteWorkerThread_2:
    "select "_MasterCluster".setAddTable_int(1, 3, '"public"."cars"',
    'cars_pkey', ''); " PGRES_FATAL_ERROR ERROR:  Slony-I: setAddTable_int:
    table id 3 has already been assigned!"

    Ignoring this error for now,
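
    (To see which table ids are already assigned, I assume I can query the
    Slony catalog directly from psql; a rough sketch, assuming the cluster
    schema is "_MasterCluster" and the catalog table is sl_table:

       -- list the tables Slony already knows about, by id
       SELECT * FROM "_MasterCluster".sl_table ORDER BY tab_id;
    )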

3) I was able to manually run on the Slave server, in a command window:
   slon MasterCluster dbname=MyTest user=postgres host=copper.cdkkt.com
   [No errors reported]

4) Changed a value in MyTest.cars and it was successfully replicated to the slave

5) Feeling that all was working, I proceeded on the master to run slonik:
   > slonik slonyReplication.txt
     [No errors reported]

   [File configuration is:
    #--  This defines which namespace the replication system uses
    cluster name = MasterCluster;

    #-- Admin conninfo's are used by the slonik program to connect
    #-- to the node databases.  So these are the PQconnectdb arguments
    #-- that connect from the administrators workstation (where
    #-- slonik is executed).
    node 1 admin conninfo = 'dbname=MyTest host=copper.cdkkt.com user=postgres';
    node 2 admin conninfo = 'dbname=MyTest host=raider.cdkkt.com user=postgres';

    #-- Node 2 subscribes set 1
    subscribe set ( id = 1, provider = 1, receiver = 2, forward = yes);
   ]
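
   (Re the earlier advice about listens: do I also need explicit store
   listen statements in a script like this? My guess at a sketch for a
   two-node cluster, assuming the two nodes are 1 and 2 and the paths are
   already stored, would be:

    # tell each node where to listen for events from the other node
    store listen (origin = 1, provider = 1, receiver = 2);
    store listen (origin = 2, provider = 2, receiver = 1);
   )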

6) Proceeded on the slave to run slonik:
   > slonik slonyReplication.txt
   [No errors reported]

   [Same file configuration as above]

7) Restarted master and slave services

8) Ran pgAdmin3 and noticed:

   on MASTER:
   a) Master Node: Running PID: not running
   b)  Slave Node: Running PID: administrative node

   on SLAVE:
   a) Master Node: Running PID: not running
   b)  Slave Node: Running PID: not running

   And of course, replication failed when a value was changed in MyTest.cars.

9) So, I tried manually running slon on both servers again to see if
   that would work, which showed:

   on MASTER:
   a) Master Node: Running PID: 1528
   b)  Slave Node: Running PID: administrative node

   on SLAVE:
   a) Master Node: Running PID: 1528
   b)  Slave Node: Running PID: 1752

   And of course, replication still failed.

   It seems that running slonik messed up the configuration, as shown
   by the slave node's Running PID reading 'administrative node'.
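
   (To double-check what the cluster configuration actually contains at
   this point, my plan is to look at the Slony catalogs from psql, roughly
   along these lines, assuming the schema name and catalog tables are right:

      -- nodes, paths, listens, and subscriptions as Slony sees them
      SELECT * FROM "_MasterCluster".sl_node;
      SELECT * FROM "_MasterCluster".sl_path;
      SELECT * FROM "_MasterCluster".sl_listen;
      SELECT * FROM "_MasterCluster".sl_subscribe;
   )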

So -- what now?

What am I doing wrong with the slonik configuration file that messed it
all up so badly, and is it possible to remove this configuration using psql?
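
(My guess, and it is only a guess, is that the brute-force way to clear the
Slony configuration from a node with psql is to drop the cluster schema in
that node's database, something like:

   -- remove the Slony schema and everything that depends on it
   DROP SCHEMA "_MasterCluster" CASCADE;

but I would rather hear the proper way before trying that.)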

I know I can tear everything down again and start over, but I would end
up running the slons manually.  Perhaps I do not have a good handle on
the proper configuration files for slonik?
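
(For what it is worth, my understanding is that the registered slon engines
read a runtime config file, so services versus manual slons should mostly be
a matter of what that file contains; a sketch of what I think mine should
look like, assuming the parameter names are cluster_name and conn_info and
using a made-up file name:

   # slon_MyTest.conf -- one per node, with host changed to
   # raider.cdkkt.com for the slave's engine
   cluster_name = 'MasterCluster'
   conn_info = 'dbname=MyTest host=copper.cdkkt.com user=postgres'
)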

Thanks -
Dan
