[ceph-users] Is it possible to stripe rados object?

2022-01-26 Thread lin yunfan
Hi,
I know that with RBD and CephFS there is a stripe setting to stripe data across multiple RADOS objects. Is it possible to use the librados API to stripe a large object into many small ones?

linyunfan
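librados itself writes each object whole, but Ceph ships libradosstriper, a layer on top of librados that stripes one logical object across many fixed-size RADOS objects (C/C++ API in radosstriper/libradosstriper.h, e.g. rados_striper_create() and rados_striper_write()). The rados CLI exposes the same layer via --striper. A minimal sketch, assuming a pool named "testpool" and a local file ./bigfile (both made-up names):

rados --striper -p testpool put bigobject ./bigfile
# the striper stores the data as sub-objects named
# bigobject.0000000000000000, bigobject.0000000000000001, ...
rados -p testpool ls | head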

[ceph-users] Re: RGW resharding

2020-05-25 Thread lin yunfan
> I'm using only Swift, not S3. We have a container for every customer.
> Right now there are thousands of containers.
>
> On 5/25/2020 9:02 AM, lin yunfan wrote:
> > Can you store your data in different buckets?
> >
> > linyunfan

[ceph-users] Re: RGW resharding

2020-05-24 Thread lin yunfan
Can you store your data in different buckets?

linyunfan

On Tue, May 19, 2020 at 3:32 PM, Adrian Nicolae wrote:
>
> Hi,
>
> I have the following Ceph Mimic setup:
>
> - a bunch of old servers with 3-4 SATA drives each (74 OSDs in total)
>
> - index/leveldb is stored on each OSD (so no SSD drives, just SATA)
>
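If splitting data across more buckets is not an option, the bucket index can also be resharded manually with radosgw-admin. A minimal sketch, assuming a bucket named "customer-bucket" and a target of 128 shards (both made-up values); pick the shard count so that each shard stays around the default rgw_max_objs_per_shard of 100000 objects:

radosgw-admin bucket reshard --bucket=customer-bucket --num-shards=128
# or queue it for the background resharding process:
radosgw-admin reshard add --bucket=customer-bucket --num-shards=128
radosgw-admin reshard list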

[ceph-users] Re: Cluster network and public network

2020-05-14 Thread lin yunfan
That is correct. I didn't explain it clearly. I said that because in some write-only scenarios the public network and the cluster network will both be saturated at the same time.

linyunfan

On Thu, May 14, 2020 at 3:42 PM, Janne Johansson wrote:
>
> On Thu, May 14, 2020 at 08:42, lin yunfan wrote:

[ceph-users] Re: Cluster network and public network

2020-05-13 Thread lin yunfan
Besides the recovery scenario, in a write-only scenario the cluster network will use almost the same bandwidth as the public network.

linyunfan

On Sat, May 9, 2020 at 4:32 PM, Anthony D'Atri wrote:
>
> > Hi,
> >
> > I deployed a few clusters with two networks as well as only one network.
> > There has little
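For reference, the split is just two ceph.conf options; a minimal sketch with example subnets:

[global]
public network = 192.168.1.0/24
cluster network = 192.168.2.0/24

With 2x replication the primary OSD forwards one copy over the cluster network, roughly matching the client traffic on the public network; with 3x replication the cluster network carries about twice the client bandwidth, plus all recovery and backfill traffic.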

[ceph-users] Re: Bluestore - How to review config?

2020-05-05 Thread lin yunfan
Is there a way to get the block, block.db and block.wal paths and sizes? What if all or some of them are colocated on one disk?
I can get the info from an OSD with colocated wal/db/block like below:

ceph-bluestore-tool show-label --path /var/lib/ceph/osd/ceph-0/
{
    "/var/lib/ceph/osd/ceph-0//block"
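One quick way to check, assuming the default OSD directory layout: the block.db and block.wal symlinks only exist next to block when DB/WAL live on separate devices, and show-label can be pointed at each device directly:

ls -l /var/lib/ceph/osd/ceph-0/block*
ceph-bluestore-tool show-label --dev /var/lib/ceph/osd/ceph-0/block.db
ceph osd metadata 0   # also reports bluefs_dedicated_db / bluefs_dedicated_wal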

[ceph-users] Re: Dear Abby: Why Is Architecting CEPH So Hard?

2020-04-23 Thread lin yunfan
Hi Martin,
How is the performance of the D120-C21 HDD cluster? Can it utilize the full performance of the 16 HDDs?

linyunfan

On Thu, Apr 23, 2020 at 6:12 PM, Martin Verges wrote:
>
> Hello,
>
> simpler systems tend to be cheaper to buy per TB storage, not on a
> theoretical but practical quote.
>
> For example 1U

[ceph-users] Re: Dear Abby: Why Is Architecting CEPH So Hard?

2020-04-22 Thread lin . yunfan
Big nodes are mostly for HDD clusters, and with a 40G or 100G NIC I don't think the network would be the bottleneck.

lin.yunfan
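A rough back-of-the-envelope, assuming ~180 MB/s of sequential throughput per HDD:

16 HDDs x 180 MB/s = 2880 MB/s = ~23 Gbit/s

So a single 40G NIC already covers the aggregate bandwidth of 16 HDDs, and 100G leaves headroom for replication and recovery traffic.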

[ceph-users] Re: Dear Abby: Why Is Architecting CEPH So Hard?

2020-04-22 Thread lin . yunfan
I have seen a lot of people saying not to go with big nodes. What is the exact reason for that? I can understand that if the cluster is not big enough then the total node count could be too small to withstand a node failure, but if the cluster is big enough, would that still be a problem?
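To put made-up numbers on the failure-domain argument: with 4 nodes of 60 OSDs each and 3x replication, losing one node means re-replicating up to a quarter of all data with only 3 failure domains left, while in a 20-node cluster the same failure touches far less. Roughly:

4 nodes:  1 lost node = 25% of capacity to recover, 3 failure domains remain
20 nodes: 1 lost node =  5% of capacity to recover, 19 failure domains remain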