[ceph-users] Re: nautilus radosgw fails with pre jewel buckets - index objects not at right place

2019-12-10 Thread Jacek Suchenia
Ingo, We were able to shut down the S3 interface for a couple of minutes while we fixed the buckets - the whole operation took ~3-5 minutes, so we didn't need to lock the buckets. Jacek Mon, 9 Dec 2019 at 15:31 Ingo Reimann wrote: > Hi Jacek, > > thanks! I wanted to follow exactly that plan with the m

[ceph-users] Re: Size and capacity calculations questions

2019-12-10 Thread Georg F
Glad to see I am not the only one with unexpectedly increased disk usage. I have had a case for a few months now where the reported size on disk is 10 times higher than it should be; unfortunately, no solution so far. Therefore, I am very curious whether the min alloc size will solve your problem I
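A minimal sketch of checking the value in question, assuming a BlueStore OSD with id 0 and admin-socket access on the OSD host (the on-disk allocation size is fixed when the OSD is created, so this only shows the running configuration):

  # running config value, in bytes, for HDD-backed BlueStore OSDs
  ceph daemon osd.0 config get bluestore_min_alloc_size_hdd
  # and the SSD-backed variant
  ceph daemon osd.0 config get bluestore_min_alloc_size_ssd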

[ceph-users] v14.2.5 Nautilus released

2019-12-10 Thread Abhishek Lekshmanan
This is the fifth release of the Ceph Nautilus release series. Among the many notable changes, this release fixes a critical BlueStore bug that was introduced in 14.2.3. All Nautilus users are advised to upgrade to this release. For the complete changelog entry, please visit the release blog at ht

[ceph-users] Re: Prometheus endpoint hanging with 13.2.7 release?

2019-12-10 Thread Jan Fajerski
On Mon, Dec 09, 2019 at 05:01:04PM -0800, Paul Choi wrote: > Hello, > Anybody seeing the Prometheus endpoint hanging with the new 13.2.7 > release? > With 13.2.6 the endpoint would respond with a payload of 15MB in less > than 10 seconds. I'd guess it's not the prometheus module itself: $
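A quick way to reproduce this outside of Prometheus itself, sketched under the assumption that the mgr prometheus module listens on its default port 9283 (replace mgr-host with the active mgr):

  # time a single scrape of the mgr prometheus endpoint
  curl -s -o /dev/null -w 'HTTP %{http_code}, %{size_download} bytes in %{time_total}s\n' \
    http://mgr-host:9283/metrics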

[ceph-users] Re: v14.2.5 Nautilus released

2019-12-10 Thread Simon Ironside
Thanks all! On 10/12/2019 09:45, Abhishek Lekshmanan wrote: This is the fifth release of the Ceph Nautilus release series. Among the many notable changes, this release fixes a critical BlueStore bug that was introduced in 14.2.3. All Nautilus users are advised to upgrade to this release. For th

[ceph-users] getfattr problem on ceph-fs

2019-12-10 Thread Frank Schilder
I have a strange problem with CephFS and extended attributes. I have two CentOS machines where I mount CephFS in exactly the same way (I manually executed the exact same mount command on both machines). On one of the machines, getfattr returns this: [root@ceph-01 ~]# getfattr -d -m 'ceph.*' /m
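For reference, a sketch of the comparison being described, assuming CephFS is mounted at /mnt/cephfs on both machines (the path is illustrative):

  # dump all ceph.* virtual xattrs for a directory and compare across hosts
  getfattr -d -m 'ceph.*' /mnt/cephfs/somedir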

[ceph-users] Re: getfattr problem on ceph-fs

2019-12-10 Thread Yan, Zheng
On Tue, Dec 10, 2019 at 8:06 PM Frank Schilder wrote: > > I have a strange problem with ceph fs and extended attributes. I have two > Centos machines where I mount cephfs in exactly the same way (I manually > executed the exact same mount command on both machines). On one of the > machines, get

[ceph-users] Re: getfattr problem on ceph-fs

2019-12-10 Thread Frank Schilder
Thanks for the fast answer! Is there any (other) way to get a complete list of extended attributes? Is there something documented - meaning what can I rely on in the future? Best regards, Frank Schilder, AIT Risø Campus, Bygning 109, rum S14
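Until this is documented, one workaround sketch is to query the known virtual xattrs by name instead of relying on the pattern match (paths and the attribute selection are illustrative only):

  # recursive directory statistics
  getfattr -n ceph.dir.entries /mnt/cephfs/dir
  getfattr -n ceph.dir.rfiles  /mnt/cephfs/dir
  getfattr -n ceph.dir.rbytes  /mnt/cephfs/dir
  # layout of a file
  getfattr -n ceph.file.layout /mnt/cephfs/dir/file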

[ceph-users] [object gateway] setting storage class does not move object to correct backing pool?

2019-12-10 Thread Gerdriaan Mulder
Hi, If I change the storage class of an object via s3cmd, the object's storage class is reported as being changed. However, when inspecting where the objects are placed (via `rados -p ls`, see further on), the object seems to be retained in the original pool. The idea behind this test setup
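A sketch of the kind of test being described, with hypothetical bucket and storage-class names (tier2-hdd standing in for the storage class data pool, default.rgw.buckets.data for the default placement):

  # change the storage class of an existing object
  s3cmd modify --storage-class=TIER2 s3://mybucket/git-tree.png
  # check which pool the RADOS objects actually live in
  rados -p default.rgw.buckets.data ls | grep git-tree
  rados -p tier2-hdd ls | grep git-tree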

[ceph-users] Re: [object gateway] setting storage class does not move object to correct backing pool?

2019-12-10 Thread Matt Benjamin
Hi Gerdriaan, I think actually moving an already-stored object requires a lifecycle transition policy. Assuming such a policy exists and matches the object by prefix/tag/time, it would migrate during an (hopefully the first) eligible lc processing window. Matt On Tue, Dec 10, 2019 at 7:44 AM Ge
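A minimal sketch of such a transition rule, assuming a storage class named TIER2 (rule contents and bucket name are illustrative only). Contents of lifecycle.xml:

  <LifecycleConfiguration>
    <Rule>
      <ID>move-to-tier2</ID>
      <Filter><Prefix></Prefix></Filter>
      <Status>Enabled</Status>
      <Transition>
        <Days>1</Days>
        <StorageClass>TIER2</StorageClass>
      </Transition>
    </Rule>
  </LifecycleConfiguration>

Applying it with s3cmd and confirming the gateway registered it:

  s3cmd setlifecycle lifecycle.xml s3://mybucket
  radosgw-admin lc list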

[ceph-users] Re: [object gateway] setting storage class does not move object to correct backing pool?

2019-12-10 Thread Gerdriaan Mulder
Hi Matt, On 12/10/19 1:52 PM, Matt Benjamin wrote: I think actually moving an already-stored object requires a lifecycle transition policy. Assuming such a policy exists and matches the object by prefix/tag/time, it would migrate during an (hopefully the first) eligible lc processing window.

[ceph-users] Re: v14.2.5 Nautilus released

2019-12-10 Thread Ilya Dryomov
On Tue, Dec 10, 2019 at 10:45 AM Abhishek Lekshmanan wrote: > > This is the fifth release of the Ceph Nautilus release series. Among the many > notable changes, this release fixes a critical BlueStore bug that was > introduced > in 14.2.3. All Nautilus users are advised to upgrade to this release

[ceph-users] Re: [object gateway] setting storage class does not move object to correct backing pool?

2019-12-10 Thread Casey Bodley
On 12/10/19 8:10 AM, Gerdriaan Mulder wrote: Hi Matt, On 12/10/19 1:52 PM, Matt Benjamin wrote: I think actually moving an already-stored object requires a lifecycle transition policy. Assuming such a policy exists and matches the object by prefix/tag/time, it would migrate during an (hopefu

[ceph-users] Re: [object gateway] setting storage class does not move object to correct backing pool?

2019-12-10 Thread Gerdriaan Mulder
Hi Casey, On 12/10/19 3:00 PM, Casey Bodley wrote: whereas I would expect "git-tree.png" to only reside in the pool tier2-hdd. This suggests that I made an error in configuring the storage_class->pool association. git-tree.png is the 'head' object, which stores the object's a
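One way to confirm this from the gateway side, sketched with the object name from this thread (the bucket name is hypothetical):

  # shows the object manifest, including its storage class and tail placement
  radosgw-admin object stat --bucket=mybucket --object=git-tree.png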

[ceph-users] Re: v14.2.5 Nautilus released

2019-12-10 Thread Sage Weil
> > If you are not comfortable sharing device metrics, you can disable that > > channel first before re-opting-in: > > > > ceph config set mgr mgr/telemetry/channel_crash false > > This should be channel_device, right? Yep! https://github.com/ceph/ceph/pull/32148 Thanks, sage
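In other words, the corrected sketch would be:

  # disable the device metrics channel, then re-opt-in to telemetry
  ceph config set mgr mgr/telemetry/channel_device false
  ceph telemetry on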

[ceph-users] Re: getfattr problem on ceph-fs

2019-12-10 Thread David Disseldorp
Hi, On Tue, 10 Dec 2019 12:40:35 +, Frank Schilder wrote: > Thanks for the fast answer! > > Is there any (other) way to get a complete list of extended attributes? Not at the moment, as far as I can tell. > Is there something documented - meaning what can I rely on in the future? When the

[ceph-users] Cephalocon 2020

2019-12-10 Thread Sage Weil
Hi everyone, The next Cephalocon is coming up on March 3-5 in Seoul! The CFP is open until Friday (get your talks in!). We expect to have the program ready for the first week of January. Registration (early bird) will be available soon. We're also looking for sponsors for the conference. T

[ceph-users] Re: Cephalocon 2020

2019-12-10 Thread Sage Weil
On Tue, 10 Dec 2019, Sage Weil wrote: > Hi everyone, > > The next Cephalocon is coming up on March 3-5 in Seoul! The CFP is open > until Friday (get your talks in!). We expect to have the program > ready for the first week of January. Registration (early bird) will be > available soon. ...a

[ceph-users] Shouldn't Ceph's documentation be "per version"?

2019-12-10 Thread Rodrigo Severo - Fábrica
Hi, Shouldn't Ceph's documentation be presented "per version"? I believe there might be documentation for Ceph per version, but I can't see on Ceph's documentation site how to easily find each version's docs. Regards, Rodrigo Severo

[ceph-users] Re: Shouldn't Ceph's documentation be "per version"?

2019-12-10 Thread Nathan Fish
When reading, e.g., https://docs.ceph.com/docs/nautilus/radosgw/, you can simply change "nautilus" to any codename. This is perhaps less obvious than it should be since many links go to "master". On Tue, Dec 10, 2019 at 12:47 PM Rodrigo Severo - Fábrica wrote: > > Hi, > > > Shouldn't Ceph's docume
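For example, the same page for two releases differs only in the codename component of the URL:

  https://docs.ceph.com/docs/nautilus/radosgw/
  https://docs.ceph.com/docs/mimic/radosgw/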

[ceph-users] Re: RESEND: Re: PG Balancer Upmap mode not working

2019-12-10 Thread Rich Bade
I'm finding the same thing. The balancer used to work flawlessly, giving me a very even distribution with about 1% variance. Some time between 12.2.7 (maybe) and 12.2.12 it stopped working. Here's a small selection of my OSDs showing a 47%-62% spread. 210 hdd 7.27739 1.0 7.28TiB 3.43TiB
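For anyone comparing notes, a sketch of the commands typically used to look at this (nothing cluster-specific assumed):

  # per-OSD utilisation and variance
  ceph osd df tree
  # what the balancer module thinks it is doing
  ceph balancer status
  ceph balancer eval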

[ceph-users] Re: RESEND: Re: PG Balancer Upmap mode not working

2019-12-10 Thread Bryan Stillwell
Rich, What's your failure domain (osd? host? chassis? rack?) and how big is each of them? For example I have a failure domain of type rack in one of my clusters with mostly even rack sizes: # ceph osd crush rule dump | jq -r '.[].steps' [ { "op": "take", "item": -1, "item_name":
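The same check, spelled out as commands (jq is only used for readability):

  # failure domain used by each crush rule
  ceph osd crush rule dump | jq -r '.[].steps'
  # bucket weights per failure domain
  ceph osd tree | grep -E 'rack|host'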

[ceph-users] Re: RESEND: Re: PG Balancer Upmap mode not working

2019-12-10 Thread Rich Bade
Thanks Brian. My failure domain is Host and they're very even. 01-06 have 24x6TB and 08/09 24x8TB. -3 131.35547 host bstor01  -5 131.32516 host bstor02  -7 131.35547 host bstor03  -9

[ceph-users] Re: RESEND: Re: PG Balancer Upmap mode not working

2019-12-10 Thread Rich Bade
Also, sorry for misspelling your name Bryan :-/

[ceph-users] Re: RESEND: Re: PG Balancer Upmap mode not working

2019-12-10 Thread David Zafman
If you send me your OSDMap from "ceph osd getmap," I can test it against the latest code. David On 12/10/19 4:30 PM, Rich Bade wrote: I'm finding the same thing. The balancer used to work flawlessly, giving me a very even distribution with about 1% variance. Some time between 12.2.7 (maybe
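For reference, a sketch of exporting the map to a file that can be shared or fed to osdmaptool:

  # dump the current osdmap to a binary file
  ceph osd getmap -o osdmap.bin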

[ceph-users] Re: RESEND: Re: PG Balancer Upmap mode not working

2019-12-10 Thread David Zafman
Bryan, Try setting the config osd_calc_pg_upmaps_aggressively=false and see if that helps with mgr getting wedged. David On 12/10/19 4:41 PM, Bryan Stillwell wrote: Rich, What's your failure domain (osd? host? chassis? rack?) and how big is each of them? For example I have a failure
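A sketch of applying that, assuming a release with the centralized config database (on older releases the same option would go into ceph.conf or the running mgr's admin socket; mgr.$(hostname -s) assumes the mgr id matches the short hostname):

  # persist the setting for mgr daemons
  ceph config set mgr osd_calc_pg_upmaps_aggressively false
  # or, on the node running the active mgr, change it without a restart
  ceph daemon mgr.$(hostname -s) config set osd_calc_pg_upmaps_aggressively false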

[ceph-users] Re: RESEND: Re: PG Balancer Upmap mode not working

2019-12-10 Thread Rich Bade
Thanks David, I've sent it to you directly. Rich

[ceph-users] Re: RESEND: Re: PG Balancer Upmap mode not working

2019-12-10 Thread David Zafman
Rich, Using your OSDMap, the code in https://github.com/ceph/ceph/pull/31992, and some additional changes to osdmaptool, I was able to balance your cluster. The osdmaptool changes simulate the active mgr balancer behavior. It never took more than 0.388674 seconds to calculate more upmaps. A
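For context, the stock osdmaptool can already do an offline run of the same calculation against such a map, e.g. (flag values are illustrative; the PR changes the algorithm, not these flags):

  # compute a batch of upmaps offline; the output file contains
  # "ceph osd pg-upmap-items ..." commands ready to review and apply
  osdmaptool osdmap.bin --upmap upmaps.sh --upmap-max 100
  sh upmaps.sh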

[ceph-users] Re: RESEND: Re: PG Balancer Upmap mode not working

2019-12-10 Thread Rich Bade
That's good news, thanks David. What's my way forward on this? Is there a point release for Luminous coming? Or will I need to push ahead with my Nautilus upgrade to get it working again? Or build something custom from the Git code? I don't think a custom build is an option in this case as this is

[ceph-users] Re: RESEND: Re: PG Balancer Upmap mode not working

2019-12-10 Thread David Zafman
Rich, The final Luminous release is going through integration testing. Assuming my pull request passes, it will be there. The equivalent change for Nautilus is also waiting for its next release, so that wouldn't help. David On 12/10/19 5:52 PM, Rich Bade wrote: That's good news, than