For EC-coded stuff, at 10+4 with 13 other OSDs needing data apart from the
primary, they are specifically NOT getting the same data: each one gets
either one of the 10 data pieces or one of the 4 different checksum (coding)
chunks, so it would be nasty to send the full data to all OSDs when each
one only expects a 14th of it.
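
To make the numbers concrete, here is a rough sketch in Python (not Ceph
code; the chunk sizes, the xor_parity() helper and the osd.N names are just
illustrative stand-ins for the real jerasure/ISA-L plugins) of how a 10+4
write fans out, with each OSD receiving only its own chunk:

    from functools import reduce

    K, M = 10, 4  # 10 data chunks + 4 coding chunks, as in a 10+4 profile

    def split_into_chunks(obj, k):
        """Zero-pad and stripe the object into k equal-sized data chunks."""
        size = -(-len(obj) // k)            # ceiling division
        obj = obj.ljust(size * k, b"\0")
        return [obj[i * size:(i + 1) * size] for i in range(k)]

    def xor_parity(chunks, m):
        """Toy stand-in for the erasure code: XOR of the data chunks,
        repeated m times. Ceph really uses Reed-Solomon (jerasure/ISA-L)."""
        x = reduce(lambda a, b: bytes(p ^ q for p, q in zip(a, b)), chunks)
        return [x] * m

    obj = b"example rados object payload " * 1000
    data = split_into_chunks(obj, K)
    coding = xor_parity(data, M)

    # Every OSD gets a *different* chunk of roughly len(obj)/K bytes, so
    # multicasting the whole object to all 14 would send ~14x too much.
    for osd_id, chunk in enumerate(data + coding):
        print("send %d bytes to osd.%d" % (len(chunk), osd_id))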


On Wed 6 Feb 2019 at 10:14, Marc Roos <m.r...@f1-outsourcing.eu> wrote:

>
> Yes indeed, but for OSDs writing the replication or erasure objects you
> get a sort of parallel processing, no?
>
>
>
> Multicast traffic from storage has a point in things like the old
> Windows provisioning software Ghost, where you could netboot a room
> full of computers, have them all listen to one mcast stream of the
> same data/image and apply it at the same time, and perhaps re-sync any
> potentially missing pieces at the end, which would be far less data
> overall than having each client ask the server(s) for the same image
> over and over (a rough sketch of that idea follows after the quoted
> text). In the case of ceph, I would say it is much less probable that
> many clients would ask for exactly the same data in the same order, so
> it would just mean all clients hear all traffic (or at least more
> traffic than they asked for) and need to skip past a lot of it.
>
>
> On Tue 5 Feb 2019 at 22:07, Marc Roos <m.r...@f1-outsourcing.eu> wrote:
>
>
>
>
>         I am still testing with ceph mostly, so my apologies for
>         bringing up something totally useless. But I just had a chat
>         about compuverde storage. They seem to implement multicast in
>         a scale-out solution.
>
>         I was wondering if there is any experience here with
>         compuverde and how it compares to ceph. And maybe this
>         multicast approach could be interesting to use with ceph?
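
For reference, a minimal sketch of that Ghost-style multicast idea
mentioned in the quote above, using plain Python UDP sockets. Nothing here
is Ceph- or Compuverde-specific; the group address, port and chunk count
are made-up examples, and the "re-sync missing pieces at the end" step is
only hinted at by the sequence numbers:

    import socket
    import struct

    GROUP, PORT = "239.1.2.3", 5007     # arbitrary example multicast group
    EXPECTED_CHUNKS = 1024              # receivers learn this out of band

    def send_image(chunks):
        """Push each chunk to the group once, no matter how many
        receivers have joined -- one copy of the image on the wire."""
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
        for seq, chunk in enumerate(chunks):
            # Sequence number lets receivers spot gaps and ask for the
            # missing pieces afterwards (the re-sync step, not shown).
            s.sendto(struct.pack("!I", seq) + chunk, (GROUP, PORT))

    def receive_image():
        """Join the group and collect whatever chunks we hear."""
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind(("", PORT))
        mreq = socket.inet_aton(GROUP) + socket.inet_aton("0.0.0.0")
        s.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
        got = {}
        while len(got) < EXPECTED_CHUNKS:
            data, _ = s.recvfrom(65535)
            seq, = struct.unpack("!I", data[:4])
            got[seq] = data[4:]
        return b"".join(got[i] for i in sorted(got))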

-- 
May the most significant bit of your life be positive.
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
