Ceph already does this by default. For each replicated pool you can set
the 'size', which is the number of copies you want Ceph to maintain. The
accepted norm is 3 replicas, but you can set it higher if you are willing
to incur the performance penalty.
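
For example, you can raise the replica count on a pool with 'ceph osd
pool set mypool size 4' ('mypool' being a placeholder name). After that,
a single client write is all that crosses the network; the OSDs fan out
the copies internally. A minimal python-rados sketch, assuming a
reachable cluster, a default config/keyring, and a pool named 'mypool':

    import rados

    # Connect using the default config and client keyring.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('mypool')
        # One write over the wire; Ceph stores 'size' copies server-side.
        ioctx.write_full('myobject', b'payload')
        ioctx.close()
    finally:
        cluster.shutdown()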

On Mon, Jul 1, 2019, 6:01 AM nokia ceph <[email protected]> wrote:

> Hi Brad,
>
> Thank you for your response; we will check out this video as well.
> Our requirement is that, when writing an object into the cluster, we can
> specify the number of copies to be made, so that the network traffic
> between client and cluster is only a single object write. The cluster
> itself would then clone the object and store the copies internally.
>
> Thanks,
> Muthu
>
> On Fri, Jun 28, 2019 at 9:23 AM Brad Hubbard <[email protected]> wrote:
>
>> On Thu, Jun 27, 2019 at 8:58 PM nokia ceph <[email protected]>
>> wrote:
>> >
>> > Hi Team,
>> >
>> > We have a requirement to create multiple copies of an object.
>> > Currently we handle this on the client side by writing separate
>> > objects, which causes huge network traffic between client and cluster.
>> > Is there a possibility of cloning an object into multiple copies
>> > using the librados API?
>> > Please share the relevant documentation if this is feasible.
>>
>> It may be possible to use an object class to accomplish what you want
>> to achieve, but the more we understand about what you are trying to do,
>> the better the advice we can offer (at the moment your description
>> sounds like replication, which, as you know, is already part of RADOS).
>>
>> More on object classes from Cephalocon Barcelona in May this year:
>> https://www.youtube.com/watch?v=EVrP9MXiiuU
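>>
>> As a rough sketch of how a client invokes an object class (assuming the
>> python-rados Ioctx.execute() binding for rados_exec(), the cls_hello
>> demo class that ships with Ceph, and a placeholder pool 'mypool'):
>>
>>     import rados
>>
>>     cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
>>     cluster.connect()
>>     ioctx = cluster.open_ioctx('mypool')
>>     # Runs the 'say_hello' method of the 'hello' class server-side,
>>     # on the OSD that hosts 'myobject'; only the request and the
>>     # reply cross the network.
>>     ret, out = ioctx.execute('myobject', 'hello', 'say_hello', b'')
>>     print(out)
>>     ioctx.close()
>>     cluster.shutdown()
>>
>> The key point is that the method executes on the OSD, next to the
>> data, rather than on the client.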
>>
>> >
>> > Thanks,
>> > Muthu
>>
>>
>>
>> --
>> Cheers,
>> Brad
>>
>
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
