jimbob palmer wrote:
> Hello,
>
> I have a cluster that is all working perfectly. Time to break it.
>
> This is a two-node master/slave cluster with DRBD. Failover between
> the nodes works backwards and forwards. Everything is happier than a
> well-fed cat.
>
> I wanted to see what would happen if the DRBD device couldn't be
> mounted, so on the slave node I deleted the mountpoint, then failed
> over.
>
> Oh dear. I broke things so badly that I had to fail back, shut down
> corosync on the slave, delete the config files, and start it again.
> Since that's not the right way to do it, I thought I should ask the
> list for the right way.
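The "not installed" failure described below is normally recovered without touching corosync: recreate the mountpoint and clear the failed-action record so Pacemaker is willing to retry the start. A sketch, using the resource name from the thread and an assumed mountpoint path:

```shell
# On the node where the Filesystem resource failed to start.
# /mnt/BLAH is an assumed path -- use the directory= value from the
# fs_BLAH resource definition.
mkdir -p /mnt/BLAH

# Clear the recorded start failure so the cluster re-evaluates the
# resource (crm shell syntax; equivalent to crm_resource --cleanup):
crm resource cleanup fs_BLAH
```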
A "cleanup" for the resource/node pair should have done it.

Regards
Dominik

> Here are the errors I get on the slave when the fs_BLAH resource tries
> to start with a missing mountpoint:
>
> Failed actions:
>     fs_BLAH_start_0 (node=X, call=X, status=complete): not installed
>
> The logs tell me that the mount point doesn't exist, so I create it
> and try to tell Pacemaker:
>
> crm(live)resource# start gr_GROUPNAME
> Multiple attributes match name=target-role
> (group members listed here)
>
> Okay, so starting a group doesn't work. I try to start the filesystem member:
>
> crm(live)resource# start fs_BLAH
>
> That didn't work. I expected it to, but it didn't.
>
> Failover/failback doesn't work either.
>
> What am I doing wrong here? This seems fairly sensible.
>
> Thanks
>
> J
>
> _______________________________________________
> Pacemaker mailing list
> Pacemaker@oss.clusterlabs.org
> http://oss.clusterlabs.org/mailman/listinfo/pacemaker
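On the "Multiple attributes match name=target-role" error quoted above: that message usually means target-role ended up set both on the group and on individual members (e.g. after stopping/starting members individually), so the crm shell refuses to guess which one to change. A sketch of one way to get back to a single authoritative attribute, assuming the member name fs_BLAH from the thread:

```shell
# Delete the per-member copy of the target-role meta attribute so only
# the group-level one remains (crm_resource long options shown):
crm_resource --resource fs_BLAH --meta --delete-parameter target-role

# Then manage the group as a whole again:
crm resource start gr_GROUPNAME
```

Alternatively, `crm configure edit` lets you remove the duplicate meta attributes from the CIB by hand.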