Ingo,
We had the luxury of being able to shut down the S3 interface for a couple of
minutes while we fixed the buckets - the whole operation took ~3-5 minutes, so
we didn't lock the buckets.
Jacek
Mon, 9 Dec 2019 at 15:31, Ingo Reimann wrote:
> Hi Jacek,
>
> thanks! I wanted to follow exactly that plan with the m
Glad to see I am not the only one with unexpectedly increased disk usage. I have
had a case for a few months now where the reported size on disk is 10 times
higher than it should be. Unfortunately, no solution so far. Therefore, I am
very curious whether the min alloc size will solve your problem I
This is the fifth release of the Ceph Nautilus release series. Among the many
notable changes, this release fixes a critical BlueStore bug that was introduced
in 14.2.3. All Nautilus users are advised to upgrade to this release.
For the complete changelog entry, please visit the release blog at
ht
On Mon, Dec 09, 2019 at 05:01:04PM -0800, Paul Choi wrote:
> Hello,
> Anybody seeing the Prometheus endpoint hanging with the new 13.2.7
> release?
> With 13.2.6 the endpoint would respond with a payload of 15MB in less
> than 10 seconds.
I'd guess it's not the Prometheus module itself:
$
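For reference, a minimal way to time the endpoint yourself, assuming the default
mgr prometheus port 9283 (mgr-host is a placeholder):
$ curl -s -o /dev/null -w 'HTTP %{http_code}, %{size_download} bytes in %{time_total}s\n' \
      http://mgr-host:9283/metrics
If that hangs as well, the slowdown is on the mgr side rather than in the scraper.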
Thanks all!
On 10/12/2019 09:45, Abhishek Lekshmanan wrote:
This is the fifth release of the Ceph Nautilus release series. Among the many
notable changes, this release fixes a critical BlueStore bug that was introduced
in 14.2.3. All Nautilus users are advised to upgrade to this release.
For th
I have a strange problem with CephFS and extended attributes. I have two
CentOS machines where I mount CephFS in exactly the same way (I manually
executed the exact same mount command on both machines). On one of the
machines, getfattr returns this:
[root@ceph-01 ~]# getfattr -d -m 'ceph.*' /m
On Tue, Dec 10, 2019 at 8:06 PM Frank Schilder wrote:
>
> I have a strange problem with CephFS and extended attributes. I have two
> CentOS machines where I mount CephFS in exactly the same way (I manually
> executed the exact same mount command on both machines). On one of the
> machines, get
Thanks for the fast answer!
Is there any (other) way to get a complete list of extended attributes?
Is there something documented - meaning what can I rely on in the future?
Best regards,
=
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
Hi,
If I change the storage class of an object via s3cmd, the object's
storage class is reported as being changed. However, when inspecting
where the object is actually placed (via `rados -p <pool> ls`, see further on),
the object seems to be retained in the original pool.
The idea behind this test setup
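As a minimal sketch of that kind of placement check (pool names are placeholders;
default.rgw.buckets.data is just the usual default RGW data pool name, tier2-hdd
the target pool from this thread):
$ rados -p default.rgw.buckets.data ls | grep <object-name>
$ rados -p tier2-hdd ls | grep <object-name>
If the object only shows up in the first pool, the data has not actually moved.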
Hi Gerdriaan,
I think actually moving an already-stored object requires a lifecycle
transition policy. Assuming such a policy exists and matches the
object by prefix/tag/time, it would migrate during an (hopefully the
first) eligible lc processing window.
Matt
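For illustration, such a transition rule could look roughly like this (bucket name
and the TIER2 storage class are placeholders; standard S3 lifecycle XML, applied
here with s3cmd):
$ cat > lc.xml <<'EOF'
<LifecycleConfiguration>
  <Rule>
    <ID>move-to-tier2</ID>
    <Prefix></Prefix>
    <Status>Enabled</Status>
    <Transition>
      <Days>1</Days>
      <StorageClass>TIER2</StorageClass>
    </Transition>
  </Rule>
</LifecycleConfiguration>
EOF
$ s3cmd setlifecycle lc.xml s3://mybucket
$ radosgw-admin lc list       # confirm RGW picked up the rule
$ radosgw-admin lc process    # optionally trigger processing rather than waiting for the window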
On Tue, Dec 10, 2019 at 7:44 AM Ge
Hi Matt,
On 12/10/19 1:52 PM, Matt Benjamin wrote:
I think actually moving an already-stored object requires a lifecycle
transition policy. Assuming such a policy exists and matches the
object by prefix/tag/time, it would migrate during an (hopefully the
first) eligible lc processing window.
On Tue, Dec 10, 2019 at 10:45 AM Abhishek Lekshmanan wrote:
>
> This is the fifth release of the Ceph Nautilus release series. Among the many
> notable changes, this release fixes a critical BlueStore bug that was
> introduced
> in 14.2.3. All Nautilus users are advised to upgrade to this release
On 12/10/19 8:10 AM, Gerdriaan Mulder wrote:
Hi Matt,
On 12/10/19 1:52 PM, Matt Benjamin wrote:
I think actually moving an already-stored object requires a lifecycle
transition policy. Assuming such a policy exists and matches the
object by prefix/tag/time, it would migrate during an (hopefu
Hi Casey,
On 12/10/19 3:00 PM, Casey Bodley wrote:
whereas I would expect "git-tree.png" to only reside in the pool
tier2-hdd. This suggests to me that I made an error in
configuring the storage_class->pool association.
git-tree.png is the 'head' object, which stores the object's a
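A hedged way to see that head/tail split is to dump the object's manifest
(bucket name is a placeholder):
$ radosgw-admin object stat --bucket=mybucket --object=git-tree.png
The manifest in the output shows which placement/storage class the tail data was
written to; small objects may live entirely in the head.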
> > If you are not comfortable sharing device metrics, you can disable that
> > channel first before re-opting-in:
> >
> > ceph config set mgr mgr/telemetry/channel_crash false
>
> This should be channel_device, right?
Yep!
https://github.com/ceph/ceph/pull/32148
Thanks,
sage
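So, putting the corrected command together (channel name as confirmed above):
$ ceph config set mgr mgr/telemetry/channel_device false
$ ceph telemetry show    # preview what would be sent
$ ceph telemetry on      # re-opt-in with the device channel disabled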
Hi,
On Tue, 10 Dec 2019 12:40:35 +, Frank Schilder wrote:
> Thanks for the fast answer!
>
> Is there any (other) way to get a complete list of extended attributes?
Not at the moment, as far as I can tell.
> Is there something documented - meaning what can I rely on in the future?
When the
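In the meantime, a workaround sketch is to query the known virtual xattrs one by
one (mount point and paths are placeholders):
$ getfattr -n ceph.dir.rbytes  /mnt/cephfs/somedir    # recursive byte count
$ getfattr -n ceph.dir.rfiles  /mnt/cephfs/somedir    # recursive file count
$ getfattr -n ceph.dir.rctime  /mnt/cephfs/somedir    # latest change time in the subtree
$ getfattr -n ceph.file.layout /mnt/cephfs/somefile   # file layout (pool, striping)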
Hi everyone,
The next Cephalocon is coming up on March 3-5 in Seoul! The CFP is open
until Friday (get your talks in!). We expect to have the program
ready for the first week of January. Registration (early bird) will be
available soon.
We're also looking for sponsors for the conference. T
On Tue, 10 Dec 2019, Sage Weil wrote:
> Hi everyone,
>
> The next Cephalocon is coming up on March 3-5 in Seoul! The CFP is open
> until Friday (get your talks in!). We expect to have the program
> ready for the first week of January. Registration (early bird) will be
> available soon.
...a
Hi,
Shouldn't Ceph's documentation be presented "per version"?
I believe there might be documentation for each Ceph version, but I can't
see on the Ceph documentation site how to easily find each version's
docs.
Regards,
Rodrigo Severo
When reading, eg, https://docs.ceph.com/docs/nautilus/radosgw/ , you
can simply change "nautilus" to any codename. This is perhaps less
obvious than it should be since many links go to "master".
On Tue, Dec 10, 2019 at 12:47 PM Rodrigo Severo - Fábrica
wrote:
>
> Hi,
>
>
> Shouldn't Ceph's docume
I'm finding the same thing. The balancer used to work flawlessly, giving me a
very even distribution with about 1% variance. Some time between 12.2.7 (maybe)
and 12.2.12 it stopped working.
Here's a small selection of my OSDs showing a 47%-62% spread.
210 hdd 7.27739 1.0 7.28TiB 3.43TiB
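For comparison, the spread and the balancer state can be dumped with (a sketch):
$ ceph osd df tree       # per-OSD utilisation
$ ceph balancer status   # mode and whether the balancer is active
$ ceph balancer eval     # current distribution score (lower is better)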
Rich,
What's your failure domain (osd? host? chassis? rack?) and how big is each of
them?
For example I have a failure domain of type rack in one of my clusters with
mostly even rack sizes:
# ceph osd crush rule dump | jq -r '.[].steps'
[
  {
    "op": "take",
    "item": -1,
    "item_name":
Thanks Brian. My failure domain is host and they're very even. Hosts 01-06 have
24x6TB drives and 08/09 have 24x8TB.
-3   131.35547   host bstor01
-5   131.32516   host bstor02
-7   131.35547   host bstor03
-9
Also, sorry for misspelling your name Bryan :-/
If you send me your OSDMap from "ceph osd getmap," I can test it against
the latest code.
David
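For anyone following along, exporting the map for offline testing is just (a sketch):
$ ceph osd getmap -o /tmp/osdmap
The resulting binary can then be fed to osdmaptool.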
On 12/10/19 4:30 PM, Rich Bade wrote:
I'm finding the same thing. The balancer used to work flawlessly, giving me a
very even distribution with about 1% variance. Some time between 12.2.7 (maybe
Bryan,
Try setting the config osd_calc_pg_upmaps_aggressively=false and
see if that helps with the mgr getting wedged.
David
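Presumably via something like this (the option name is from David's message; setting
it on the mgr via the centralized config store assumes a Mimic-or-newer cluster, on
Luminous it would go in ceph.conf instead):
$ ceph config set mgr osd_calc_pg_upmaps_aggressively false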
On 12/10/19 4:41 PM, Bryan Stillwell wrote:
Rich,
What's your failure domain (osd? host? chassis? rack?) and how big is each of
them?
For example I have a failure
Thanks David, I've sent it to you directly.
Rich
Rich,
Using your OSDMap, the code in https://github.com/ceph/ceph/pull/31992,
and some additional changes to osdmaptool, I was able to balance your
cluster. The osdmaptool changes simulate the active mgr balancer
behavior. It never took more than 0.388674 seconds to calculate more
upmaps. A
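For reference, something close to that offline simulation can be done with stock
osdmaptool as well (a sketch; pool name and limits are placeholders):
$ osdmaptool /tmp/osdmap --upmap /tmp/upmap.sh \
      --upmap-pool <pool> --upmap-deviation 1 --upmap-max 100
$ cat /tmp/upmap.sh    # review the generated "ceph osd pg-upmap-items" commands before applying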
That's good news, thanks David. What's my way forward on this? Is there a point
release for Luminous coming? Or will I need to push ahead with my Nautilus
upgrade to get it working again? Or build something custom from the Git code?
I don't think a custom build is an option in this case as this is
Rich,
The final Luminous release is going through integration testing.
Assuming my pull request passes, it will be there. The equivalent
change for Nautilus is also waiting for its next release, so that
wouldn't help.
David
On 12/10/19 5:52 PM, Rich Bade wrote:
That's good news, than