Note that a slave does not have to run on the same machine as its master.
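For the consumer side, here is a minimal sketch (not from this thread) of what failover looks like from a JMS client, assuming a pair of brokers; the host names brokerA/brokerB and the queue name LOG.MESSAGES are placeholders for illustration only. The failover: transport reconnects the client to whichever listed broker is reachable, and randomize=true spreads connections across the listed brokers, which gives you some client-side load balancing.

import javax.jms.Connection;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;

import org.apache.activemq.ActiveMQConnectionFactory;

public class FailoverConsumer {
    public static void main(String[] args) throws Exception {
        // Placeholder broker hosts; the failover: transport retries the listed
        // brokers, so the consumer reconnects to a surviving broker if one dies.
        String url = "failover:(tcp://brokerA:61616,tcp://brokerB:61616)?randomize=true";

        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(url);
        Connection connection = factory.createConnection();
        connection.start();

        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue("LOG.MESSAGES"); // placeholder queue name
        MessageConsumer consumer = session.createConsumer(queue);

        // Blocks until a message arrives (or the connection is interrupted).
        System.out.println("Received: " + consumer.receive());

        connection.close();
    }
}

The same URL also works against a master/slave pair: only the master accepts connections, so the client simply fails over to whichever broker is currently the master.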
ktecho wrote:
>
> We need to receive JMS log messages from several machines. Initially,
> we're going to set up 2 machines, but this is expected to grow as the
> load grows. That's why I cannot use the Master/Slave1/Slave2/Slave3
> scheme. I need all the brokers to be up and running, and if one of them
> fails (the machine shuts down), the other brokers should process the
> messages of the broker that has failed.
>
> What would be the best approach to these needs?
>
> Thanks.
>
>
> James.Strachan wrote:
>>
>> Note that a "network of brokers" traditionally means a
>> store-and-forward network, which is not a master/slave cluster, and a
>> master/slave cluster is probably what you want.
>>
>> BTW, what kind of load do you need to handle? I suspect a single
>> master/slave cluster of brokers is all you need.
>>
>>
>> On 19/11/2007, ktecho <[EMAIL PROTECTED]> wrote:
>>>
>>> Hi,
>>>
>>> I have been reading the ActiveMQ documentation, but something isn't
>>> entirely clear to me.
>>>
>>> I need to set up a network of brokers with the following requirements:
>>> - I need all of them to be able to handle requests, to provide load
>>> balancing across all the brokers / servers.
>>> - I need that, if one of them fails, its messages can be consumed by
>>> the other brokers.
>>>
>>> Can this actually be done with ActiveMQ? If not, what is the closest
>>> approach to the scenario I have in mind?
>>>
>>> Thanks a lot in advance,
>>>
>>> Luis Miguel García
>>>
>>
>>
>> --
>> James
>> -------
>> http://macstrac.blogspot.com/
>>
>> Open Source Integration
>> http://open.iona.com
>>
>