Re: [ceph-users] What is recommended ceph docker image for use

2019-05-08 Thread Stefan Kooman
Quoting Ignat Zapolsky (ignat.zapol...@ammeon.com):
> Hi,
>
> Just a question: what is the recommended docker container image to use
> for ceph?
>
> The Ceph website says that 12.2.x is the LTR, but there are at least 2
> more releases on Docker Hub – 13 and 14.
>
> Would there be any advice on selection
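For anyone weighing the options, a minimal sketch of pinning a specific release line rather than a floating "latest" (the repository and tag names are assumptions; check Docker Hub for what is actually published):

  # pin a release line explicitly instead of relying on "latest"
  docker pull ceph/ceph:v12.2
  docker pull ceph/ceph:v13.2
  docker pull ceph/ceph:v14.2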

Re: [ceph-users] Nautilus: significant increase in cephfs metadata pool usage

2019-05-08 Thread Dietmar Rieder
On 5/8/19 10:52 PM, Gregory Farnum wrote:
> On Wed, May 8, 2019 at 5:33 AM Dietmar Rieder wrote:
>>
>> On 5/8/19 1:55 PM, Paul Emmerich wrote:
>>> Nautilus properly accounts metadata usage, so nothing changed, it just
>>> shows up correctly now ;)
>>
>> OK, but then I'm not sure I understand why

[ceph-users] OSDs failing to boot

2019-05-08 Thread Rawson, Paul L.
Hi Folks,

I'm having trouble getting some of my OSDs to boot. At some point, these
disks got very full. I fixed the rule that was causing that, and they are
on average ~30% full now. I'm getting the following in my logs:

    -1> 2019-05-08 16:05:18.956 7fdc7adbbf00 -1 /home/jenkins-build/bui
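If the OSDs are still refusing to start because they previously hit the full mark, one common first step is to check utilization and, if needed, temporarily loosen the full thresholds while recovery drains them — a sketch (the ratio values are examples only; revert them once the cluster is healthy):

  # per-OSD utilization, reflecting the fixed rule
  ceph osd df tree
  # temporarily raise the full thresholds (example values)
  ceph osd set-full-ratio 0.97
  ceph osd set-backfillfull-ratio 0.95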

Re: [ceph-users] Prioritized pool recovery

2019-05-08 Thread Gregory Farnum
On Mon, May 6, 2019 at 6:41 PM Kyle Brantley wrote:
>
> On 5/6/2019 6:37 PM, Gregory Farnum wrote:
> > Hmm, I didn't know we had this functionality before. It looks to be
> > changing quite a lot at the moment, so be aware this will likely
> > require reconfiguring later.
>
> Good to know, and not
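For context, the functionality being discussed is typically exposed as a pool option — a minimal sketch, assuming the recovery_priority option is available in your release (pool name and value are placeholders, and as Greg notes the interface is in flux):

  ceph osd pool set <pool-name> recovery_priority 5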

Re: [ceph-users] ceph mimic and samba vfs_ceph

2019-05-08 Thread Gregory Farnum
On Wed, May 8, 2019 at 10:05 AM Ansgar Jazdzewski wrote:
>
> hi folks,
>
> we are trying to build a new NAS using the vfs_ceph module from samba 4.9.
>
> If I try to open the share I receive the error:
>
> May 8 06:58:44 nas01 smbd[375700]: 2019-05-08 06:58:44.732830
> 7ff3d5f6e700 0 -- 10.100.219.51:0/3

Re: [ceph-users] Nautilus: significant increase in cephfs metadata pool usage

2019-05-08 Thread Gregory Farnum
On Wed, May 8, 2019 at 5:33 AM Dietmar Rieder wrote:
>
> On 5/8/19 1:55 PM, Paul Emmerich wrote:
> > Nautilus properly accounts metadata usage, so nothing changed, it just
> > shows up correctly now ;)
>
> OK, but then I'm not sure I understand why the increase was not sudden
> (with the update) bu

Re: [ceph-users] Data moved pools but didn't move osds & backfilling+remapped loop

2019-05-08 Thread Gregory Farnum
On Wed, May 8, 2019 at 2:37 AM Marco Stuurman wrote:
>
> Hi,
>
> I've got an issue with the data in our pool. An RBD image containing 4TB+
> of data has moved over to a different pool after a crush rule set change,
> which should not be possible. Besides that, it loops over and over to
> start remap
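One way to check where the image's data actually lives is to map one of its objects to a PG and OSD set — a sketch (the object name is a placeholder; take the real block_name_prefix from `rbd info`):

  # which crush rule each pool currently uses
  ceph osd pool ls detail
  # map a sample object of the image to its PG and OSDs
  ceph osd map <pool> rbd_data.<block-name-prefix>.0000000000000000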

Re: [ceph-users] Ceph Bucket strange issues rgw.none + id and marker diferent.

2019-05-08 Thread EDH - Manuel Rios Fernandez
Eric,

Yes, we do: `time s3cmd ls s3://[BUCKET]/ --no-ssl`, and it takes nearly
2 min 30 s to list the bucket. If we immediately run the same query again,
it normally times out.

Could you explain a little more: "With respect to your earlier message in
which you included the output of `ceph df`, I believe
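For a listing that slow, it is worth checking the bucket's object count and index sharding — a sketch, assuming a reasonably recent radosgw-admin:

  # object counts and index shard layout for the bucket
  radosgw-admin bucket stats --bucket=<BUCKET>
  # flag buckets over the recommended objects-per-shard limit
  radosgw-admin bucket limit check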

Re: [ceph-users] Ceph Bucket strange issues rgw.none + id and marker diferent.

2019-05-08 Thread J. Eric Ivancich
Hi Manuel,

My response is interleaved.

On 5/7/19 7:32 PM, EDH - Manuel Rios Fernandez wrote:
> Hi Eric,
>
> This looks like something the software developer must do, not something
> that the storage provider must allow, no?

True -- so you're using `radosgw-admin bucket list --bucket=XYZ` to list th

[ceph-users] ceph mimic and samba vfs_ceph

2019-05-08 Thread Ansgar Jazdzewski
hi folks,

we are trying to build a new NAS using the vfs_ceph module from samba 4.9.

If I try to open the share I receive the error:

May 8 06:58:44 nas01 smbd[375700]: 2019-05-08 06:58:44.732830
7ff3d5f6e700 0 -- 10.100.219.51:0/3414601814 >> 10.100.219.11:6789/0
pipe(0x7ff3cc00c350 sd=6 :45626 s=1 pgs
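For reference, a minimal smb.conf share sketch for vfs_ceph (the share name, cephx user, and paths are assumptions; adjust to your setup):

  [cephfs]
      path = /
      vfs objects = ceph
      ceph:config_file = /etc/ceph/ceph.conf
      ceph:user_id = samba
      kernel share modes = no
      read only = no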

[ceph-users] Delta Lake Support

2019-05-08 Thread Scottix
Hey Cephers,

There is a new OSS project called Delta Lake: https://delta.io/ It is
compatible with HDFS but seems ripe to add Ceph support as backend storage.
Just want to put this on the radar for any feelers.

Best
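Since Delta Lake runs on Spark, one plausible integration path today is Hadoop's S3A connector pointed at an RGW endpoint — a sketch (the endpoint, credentials, and job name are placeholders):

  spark-submit \
    --conf spark.hadoop.fs.s3a.endpoint=http://rgw.example.com:7480 \
    --conf spark.hadoop.fs.s3a.access.key=<access-key> \
    --conf spark.hadoop.fs.s3a.secret.key=<secret-key> \
    --conf spark.hadoop.fs.s3a.path.style.access=true \
    my_delta_job.py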

Re: [ceph-users] Clients failing to respond to cache pressure

2019-05-08 Thread Patrick Donnelly
On Wed, May 8, 2019 at 4:10 AM Stolte, Felix wrote:
>
> Hi folks,
>
> we are running a luminous cluster and using the cephfs for file services.
> We use Tivoli Storage Manager to back up all data in the ceph filesystem
> to tape for disaster recovery. Backup runs on two dedicated servers,
> which mo

Re: [ceph-users] Clients failing to respond to cache pressure

2019-05-08 Thread Stolte, Felix
Hi Paul,

we are using kernel 4.15.0-47.

Regards
Felix

IT-Services
Telefon 02461 61-9243
E-Mail: f.sto...@fz-juelich.de
- - Fors

Re: [ceph-users] Stalls on new RBD images.

2019-05-08 Thread Jason Dillaman
On Wed, May 8, 2019 at 7:26 AM wrote:
>
> Hi.
>
> I'm fishing a bit here.
>
> What we see is that with new VM/RBD/SSD-backed images, performance can be
> lousy until they have been "fully written" for the first time. Sort of
> like they are thin-provisioned and the subsequent growing of the

Re: [ceph-users] Nautilus: significant increase in cephfs metadata pool usage

2019-05-08 Thread Dietmar Rieder
On 5/8/19 1:55 PM, Paul Emmerich wrote:
> Nautilus properly accounts metadata usage, so nothing changed, it just
> shows up correctly now ;)

OK, but then I'm not sure I understand why the increase was not sudden
(with the update) but kept growing steadily over days.

~Dietmar

[ceph-users] What is recommended ceph docker image for use

2019-05-08 Thread Ignat Zapolsky
Hi,

Just a question: what is the recommended docker container image to use for
ceph?

The Ceph website says that 12.2.x is the LTR, but there are at least 2 more
releases on Docker Hub – 13 and 14. Would there be any advice on selecting
between the 3 releases?

Sent from Mail for Windows 10

Re: [ceph-users] Clients failing to respond to cache pressure

2019-05-08 Thread Stolte, Felix
smime.p7m Description: S/MIME encrypted message

Re: [ceph-users] Clients failing to respond to cache pressure

2019-05-08 Thread Paul Emmerich
Which kernel are you using on the clients?

Paul

--
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90

On Wed, May 8, 2019 at 1:10 PM Stolte, Felix wrote:
>
> Hi folks,
>
> we are

Re: [ceph-users] Nautilus: significant increase in cephfs metadata pool usage

2019-05-08 Thread Paul Emmerich
Nautilus properly accounts metadata usage, so nothing changed, it just
shows up correctly now ;)

Paul

--
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90

On Wed, May 8, 2019 at

[ceph-users] Stalls on new RBD images.

2019-05-08 Thread jesper
Hi.

I'm fishing a bit here.

What we see is that with new VM/RBD/SSD-backed images, performance can be
lousy until they have been "fully written" for the first time. Sort of like
they are thin-provisioned and the subsequent growing of the images in Ceph
delivers a performance hit.

Does anyo
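If the cost really is first-touch allocation, one workaround is to write the image through once before handing it to the VM — a sketch using rbd bench (sizes are examples; only do this on an image not yet in use):

  rbd bench --io-type write --io-size 4M --io-threads 4 \
      --io-total 20G <pool>/<image>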

[ceph-users] Clients failing to respond to cache pressure

2019-05-08 Thread Stolte, Felix
Hi folks,

we are running a luminous cluster and using the cephfs for file services.
We use Tivoli Storage Manager to back up all data in the ceph filesystem
to tape for disaster recovery. Backup runs on two dedicated servers, which
mount the cephfs via kernel mount. In order to complete the Ba
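One knob that often matters for cache-pressure warnings is the MDS cache size — a sketch for luminous (the 4 GiB value is an example; size it to the MDS host's RAM):

  # current limit (luminous defaults to 1 GiB)
  ceph daemon mds.<id> config get mds_cache_memory_limit
  # raise it at runtime, e.g. to 4 GiB
  ceph tell mds.* injectargs '--mds_cache_memory_limit=4294967296'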

[ceph-users] Nautilus: significant increase in cephfs metadata pool usage

2019-05-08 Thread Dietmar Rieder
Hi,

we just recently upgraded our cluster from luminous 12.2.10 to nautilus
14.2.1, and I noticed a massive increase in the space used by the cephfs
metadata pool, although the used space in the 2 data pools basically did
not change. See the attached graph (NOTE: log10 scale on the y-axis).

Is there an
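For comparing the pools before and after the upgrade, the per-pool view is a one-liner; note that, per the replies in this thread, nautilus counts metadata (omap) in the metadata pool's USED figure where older releases did not:

  ceph df detail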

[ceph-users] Data moved pools but didn't move osds & backfilling+remapped loop

2019-05-08 Thread Marco Stuurman
Hi,

I've got an issue with the data in our pool. An RBD image containing 4TB+
of data has moved over to a different pool after a crush rule set change,
which should not be possible. Besides that, it loops over and over,
starting to remap and backfill (goes up to 377 pg active+clean, then
suddenly dro

[ceph-users] clients failing to respond to cache pressure

2019-05-08 Thread Stolte, Felix
smime.p7m Description: S/MIME encrypted message

Re: [ceph-users] Ceph Bucket strange issues rgw.none + id and marker diferent.

2019-05-08 Thread Burkhard Linke
Hi,

just a comment (and please correct me if I'm wrong): there are no "folders"
in S3. A bucket is a plain list of objects. What you recognize as a folder
is an artificial construct, e.g. the usual path delimiter used by S3 access
tools to create "folders". As a result, listing a bucket wit
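A quick way to see the difference is to compare a delimited listing with a recursive one — a sketch (bucket and prefix are placeholders):

  # one "folder" level, using the / delimiter convention
  s3cmd ls s3://mybucket/some/prefix/
  # every key in the bucket, no folder illusion
  s3cmd ls --recursive s3://mybucket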