[ceph-users] Internals of RGW data store

2017-05-23 Thread Anton Dmitriev
Hi. Correct me if I am wrong: when uploading a file to RGW, it is split into stripe units, and these stripe units are mapped to RADOS objects. These RADOS objects are files on the OSD filestore. What happens under the hood when I delete an RGW object? If a RADOS object consists of multiple stripe units belon
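
For anyone wanting to see this mapping for themselves, the RADOS-level layout of an RGW object can be inspected from the command line. A minimal sketch - the pool name ".rgw.buckets" is the Jewel-era default data pool, and the bucket/object names and marker are placeholders:

  # RGW-level metadata for one object, including its striping manifest
  $ radosgw-admin object stat --bucket=mybucket --object=myfile

  # The underlying RADOS objects in the data pool
  $ rados -p .rgw.buckets ls | grep '<bucket-marker>'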

Re: [ceph-users] Large OSD omap directories (LevelDBs)

2017-05-23 Thread Gregory Farnum
On Tue, May 23, 2017 at 6:28 AM wrote: > Hi Wido, > > I see your point. I would expect OMAPs to grow with the number of objects > but multiple OSDs getting to multiple tens of GBs for their omaps seems > excessive. I find it difficult to believe that not sharding the index for a > bucket of 500k
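
As a quick way to quantify this on filestore OSDs, the leveldb omap directory can be sized directly on each OSD host. A sketch assuming the default data path:

  $ du -sh /var/lib/ceph/osd/ceph-*/current/omap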

[ceph-users] How does rbd preserve the consistency of WRITE requests that span across multiple objects?

2017-05-23 Thread 许雪寒
Hi, thanks for the explanation :-) On the other hand, I wonder if the following scenario could happen: a program in a virtual machine that uses "libaio" to access a file continuously submits "write" requests to the underlying file system, which translates the requests into rbd requests. Say,

Re: [ceph-users] Scuttlemonkey signing off...

2017-05-23 Thread Dan Mick
On 05/22/2017 07:36 AM, Patrick McGarry wrote: > I'm writing to you today to share that my time in the Ceph community > is coming to an end this year. You'll leave a big hole, Patrick. It's been great having you along for the ride. -- Dan Mick Red Hat, Inc. Ceph docs: http://ceph.com/docs

[ceph-users] mds slow request, getattr currently failed to rdlock. Kraken with Bluestore

2017-05-23 Thread Daniel K
Have a 20 OSD cluster - "my first ceph cluster" - that has another 400 OSDs en route. I was "beating up" on the cluster, and had been writing to a 6TB file in CephFS for several hours, during which I changed the crushmap to better match my environment, generating a bunch of recovery IO. After about 5.
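
When the MDS reports slow requests like this, the admin socket can show which operations are in flight and what they are blocked on (such as the rdlock above). A sketch - replace the MDS id with your own:

  $ ceph daemon mds.<id> dump_ops_in_flight
  $ ceph daemon mds.<id> dump_historic_ops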

[ceph-users] Object store backups

2017-05-23 Thread Sean Purdy
Hi, Another newbie question. Do people using radosgw mirror their buckets to AWS S3 or compatible services as a backup? We're setting up a small cluster and are thinking of ways to mitigate total disaster. What do people recommend? Thanks, Sean Purdy
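
One common approach (an illustration, not advice from this thread) is a periodic sync with an S3-capable copy tool such as rclone, with one remote configured against the local radosgw endpoint and one against the backup target. Remote and bucket names below are placeholders:

  $ rclone sync ceph:mybucket aws:mybucket-backup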

Re: [ceph-users] MDS Question

2017-05-23 Thread John Spray
On Tue, May 23, 2017 at 4:27 PM, James Wilkins wrote: > Thanks :-) > > If we are seeing this rise unnaturally high (e.g. >140K - which corresponds > with slow access to CephFS) do you have any recommendations of where we > should be looking - is this related to the messenger service and its > d

Re: [ceph-users] ceph-mon and existing zookeeper servers

2017-05-23 Thread Joao Eduardo Luis
On 05/23/2017 04:04 PM, Sean Purdy wrote: Hi, This is my first ceph installation. It seems to tick our boxes. Will be using it as an object store with radosgw. I notice that ceph-mon uses zookeeper behind the scenes. Is there a way to point ceph-mon at an existing zookeeper cluster, using a

Re: [ceph-users] MDS Question

2017-05-23 Thread James Wilkins
Thanks :-) If we are seeing this rise unnaturally high (e.g. >140K - which corresponds with slow access to CephFS) do you have any recommendations of where we should be looking - is this related to the messenger service and its dispatch/throttle bytes? -Original Message- From: John S

Re: [ceph-users] ceph-mon and existing zookeeper servers

2017-05-23 Thread John Spray
On Tue, May 23, 2017 at 4:04 PM, Sean Purdy wrote: > Hi, > > > This is my first ceph installation. It seems to tick our boxes. Will be > using it as an object store with radosgw. > > I notice that ceph-mon uses zookeeper behind the scenes. Is there a way to > point ceph-mon at an existing zooke

[ceph-users] ceph-mon and existing zookeeper servers

2017-05-23 Thread Sean Purdy
Hi, This is my first ceph installation. It seems to tick our boxes. Will be using it as an object store with radosgw. I notice that ceph-mon uses zookeeper behind the scenes. Is there a way to point ceph-mon at an existing zookeeper cluster, using a zookeeper chroot? Alternatively, might cep
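
For context: ceph-mon does not actually use zookeeper; the monitors implement their own Paxos-based consensus internally, so no external coordination service is involved. The monitor quorum can be inspected directly - a sketch:

  $ ceph mon stat
  $ ceph quorum_status --format json-pretty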

Re: [ceph-users] Scuttlemonkey signing off...

2017-05-23 Thread John Wilkins
Sorry to see you go Patrick. You've been at this as long as I have. Best of luck to you! On Tue, May 23, 2017 at 6:01 AM, Wido den Hollander wrote: > Hey Patrick, > > Thanks for all your work in the last 5 years! Sad to see you leave, but > again, your effort is very much appreciated! > > Wido >

Re: [ceph-users] Large OSD omap directories (LevelDBs)

2017-05-23 Thread george.vasilakakos
Hi Wido, I see your point. I would expect OMAPs to grow with the number of objects but multiple OSDs getting to multiple tens of GBs for their omaps seems excessive. I find it difficult to believe that not sharding the index for a bucket of 500k objects in RGW causes the 10 largest OSD omaps to

Re: [ceph-users] Scuttlemonkey signing off...

2017-05-23 Thread Wido den Hollander
Hey Patrick, Thanks for all your work in the last 5 years! Sad to see you leave, but again, your effort is very much appreciated! Wido > On 22 May 2017 at 16:36, Patrick McGarry wrote: > > > Hey cephers, > > I'm writing to you today to share that my time in the Ceph community > is coming t

Re: [ceph-users] Large OSD omap directories (LevelDBs)

2017-05-23 Thread Wido den Hollander
> On 23 May 2017 at 13:01, george.vasilaka...@stfc.ac.uk wrote: > > > > Your RGW buckets, how many objects in them, and do they have the index > > sharded? > > > I know we have some very large & old buckets (10M+ RGW objects in a > > single bucket), with correspondingly large OMAPs wherever th
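
A bucket's current shard count can be read from its instance metadata. A sketch - the bucket name and bucket id are placeholders, and num_shards = 0 means an unsharded index:

  $ radosgw-admin metadata get bucket:mybucket
  $ radosgw-admin metadata get bucket.instance:mybucket:<bucket_id>

On Jewel, sharding for newly created buckets is controlled by the rgw_override_bucket_index_max_shards option; existing bucket indexes are not resharded by changing it.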

Re: [ceph-users] MDS Question

2017-05-23 Thread John Spray
On Tue, May 23, 2017 at 1:42 PM, James Wilkins wrote: > Quick question on CephFS/MDS but I can’t find this documented (apologies if > it is) > > > > What does the q: in a ceph daemon perf dump mds > represent? mds]$ git grep "\"q\"" MDSRank.cc:mds_plb.add_u64(l_mds_dispatch_queue_l

[ceph-users] MDS Question

2017-05-23 Thread James Wilkins
Quick question on CephFS/MDS but I can't find this documented (apologies if it is) What does the q: in a ceph daemon perf dump mds represent? [root@hp3-ceph-mds2 ~]# ceph daemon /var/run/ceph/ceph-mds.hp3-ceph-mds2.ceph.hostingp3.local.asok perf dump mds { "mds": { "requ
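
To pull just that counter out of the dump, the JSON can be filtered - a sketch reusing the asok path from the question above:

  $ ceph daemon /var/run/ceph/ceph-mds.hp3-ceph-mds2.ceph.hostingp3.local.asok \
      perf dump mds | python -m json.tool | grep '"q"'

Per the git grep in John Spray's reply above, the q counter appears to be the MDS dispatch queue length (l_mds_dispatch_queue_len).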

Re: [ceph-users] Large OSD omap directories (LevelDBs)

2017-05-23 Thread george.vasilakakos
> Your RGW buckets, how many objects in them, and do they have the index > sharded? > I know we have some very large & old buckets (10M+ RGW objects in a > single bucket), with correspondingly large OMAPs wherever that bucket > index is living (sufficiently large that trying to list the entire thin

Re: [ceph-users] Some monitors have still not reached quorum

2017-05-23 Thread Shambhu Rajak
Hi Alfredo, This is solved - all the listening ports were blocked in my setup; the monitors reached quorum after allowing the monitor/OSD ports. Thanks, Shambhu -Original Message- From: Shambhu Rajak Sent: Tuesday, May 23, 2017 10:33 AM To: 'Alfredo Deza' Cc: ceph-users@lists.ceph.com Subject: RE: [ceph-users] Some
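
For reference, the defaults involved: monitors listen on 6789/tcp and OSDs use the 6800-7300/tcp range, so on a firewalld host the fix looks roughly like this (a sketch; the zone name is an assumption):

  $ firewall-cmd --zone=public --permanent --add-port=6789/tcp
  $ firewall-cmd --zone=public --permanent --add-port=6800-7300/tcp
  $ firewall-cmd --reload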

Re: [ceph-users] RGW 10.2.5->10.2.7 authentication fail?

2017-05-23 Thread Ingo Reimann
Hi Ben! Thanks for your advice. I had included the names of our gateways, but omitted the external name of the service itself. Now everything is working again. And yes, this change is worth a note :-) Best regards, Ingo From: Ben Hines [mailto:bhi...@gmail.com] Sent: Tuesday, May 23
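
The fix described sounds like the rgw dns name setting (or the zonegroup hostnames list), which needs to include every hostname clients use to reach the service for virtual-hosted-style S3 requests to resolve and authenticate. A hedged ceph.conf sketch; the section name and hostname are examples:

  [client.rgw.gateway1]
  rgw dns name = s3.example.com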