On Wed, Feb 06, 2019 at 11:49:28AM +0200, Maged Mokhtar wrote:
> It could be used for sending cluster maps or other configuration in a
> push model; I believe corosync uses this by default. For use in sending
> actual data during write ops, a primary OSD can send to its replicas,
> they do not h[...]
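As a very rough sketch of that push model on the wire, assuming a plain UDP
multicast group (address, port and payload below are made up, and this is not
how Ceph's TCP-based messenger actually works, it only illustrates the idea):

  import socket
  import struct

  MCAST_GRP = "239.1.1.1"    # hypothetical administratively-scoped group
  MCAST_PORT = 5007

  def push_once(payload: bytes) -> None:
      """Transmit one datagram; every replica that joined the group receives it."""
      sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
      # TTL 1 keeps the datagram on the local segment; raise it to cross routers.
      sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, struct.pack("b", 1))
      sock.sendto(payload, (MCAST_GRP, MCAST_PORT))
      sock.close()

  push_once(b"osdmap epoch 1234 ...")   # e.g. a cluster-map update in a push model

The point being that the sender transmits the payload exactly once, no matter
how many replicas have joined the group.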
On 06/02/2019 11:14, Marc Roos wrote:
Yes indeed, but for OSDs writing the replication or erasure objects you
get a sort of parallel processing, no?
For EC-coded stuff, at 10+4 with 13 others needing data apart from the
primary, they are specifically NOT getting the same data: they are getting
either one of the 10 data pieces or one of the 4 different checksum (coding)
chunks, so it would be nasty to send the full data to all OSDs when each
only expects a 14th of it.
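To put rough numbers on that (the 40 MB object size is made up; k=10, m=4 is
the 10+4 profile above):

  k, m = 10, 4                   # data chunks, coding chunks (the 10+4 profile)
  object_mb = 40                 # hypothetical object size
  chunk_mb = object_mb / k       # 4 MB per chunk

  unicast_sent = (k + m - 1) * chunk_mb   # 13 peers x 4 MB = 52 MB on the wire
  mcast_received_per_peer = object_mb     # 40 MB delivered to each of the 13 peers
  useful_per_peer = chunk_mb              # of which at most 4 MB would be stored
  print(unicast_sent, mcast_received_per_peer, useful_per_peer)

And the 4 coding chunks are computed by the primary rather than cut out of the
object, so multicasting the raw object would not even give those peers the
bytes they need to store.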
Hi,
We have a Compuverde cluster, and AFAIK it uses multicast for node
discovery, not for data distribution.
If you need more information, feel free to contact me either by email or
via IRC (-> Be-El).
Regards,
Burkhard
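For what it's worth, multicast node discovery on its own is a fairly simple
announce/listen pattern, roughly like this (group, port and message format
are invented for the example, not anything Compuverde has published):

  import json
  import socket
  import struct

  GRP, PORT = "239.2.2.2", 6000          # made-up discovery group and port

  def announce(node_id: str) -> None:
      """Tell whoever is listening on the group that this node exists."""
      s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
      s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, struct.pack("b", 1))
      s.sendto(json.dumps({"node": node_id, "data_port": 7000}).encode(), (GRP, PORT))
      s.close()

  def listen() -> None:
      """Collect announcements; each one reveals a peer without a central registry."""
      s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
      s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
      s.bind(("", PORT))
      mreq = struct.pack("4sl", socket.inet_aton(GRP), socket.INADDR_ANY)
      s.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
      while True:
          msg, addr = s.recvfrom(4096)
          print("discovered", json.loads(msg), "from", addr[0])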
Multicast traffic from storage has a point in things like the old Windows
provisioning software Ghost, where you could netboot a room full of
computers, have them listen to a mcast stream of the same data/image and
all apply it at the same time, and perhaps re-sync potentially missing
stuff at the end.
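The receive side of that Ghost-style scenario is essentially: join the group,
write the one shared stream, and remember which blocks were missed so they can
be fetched afterwards. A minimal sketch, assuming the sender prepends a 4-byte
block number to each datagram (group, port and format are invented here):

  import socket
  import struct

  GRP, PORT, BLOCK = "239.1.1.1", 5007, 8192   # made-up group, port, block size

  sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
  sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
  sock.bind(("", PORT))
  sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP,
                  struct.pack("4sl", socket.inet_aton(GRP), socket.INADDR_ANY))

  seen = set()
  with open("disk.img", "wb") as img:
      for _ in range(100_000):                   # bounded so the sketch terminates
          pkt, _ = sock.recvfrom(4 + BLOCK)
          seq = struct.unpack("!I", pkt[:4])[0]  # block number prepended by the sender
          img.seek(seq * BLOCK)
          img.write(pkt[4:])                     # the same block lands on every machine
          seen.add(seq)
  # any block number not in `seen` would be fetched over unicast afterwards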
I am still mostly testing with ceph, so my apologies for bringing up
something totally useless. But I just had a chat about Compuverde storage;
they seem to implement multicast in a scale-out solution. I was wondering
if there is any experience here with Compuverde and how it compares to
ceph.