OK,
I changed the value to
"metadata_heap": "",
but it is still used.
Any ideas how to stop this?
On Wed, 10 Mar 2021 at 08:14, Boris Behrens wrote:
> Found it.
> [root@s3db1 ~]# radosgw-admin zone get --rgw-zone=eu-central-1
> {
> "id": "ff7a8b0c-07e6-463a-861b-78f0adeba8ad",
>
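For reference, the full sequence I would expect to be needed here, assuming the zone is eu-central-1 and the change still has to reach the running daemons (the period commit only applies if multisite is configured):

radosgw-admin zone get --rgw-zone=eu-central-1 > zone.json
# edit zone.json so that "metadata_heap" is set to ""
radosgw-admin zone set --rgw-zone=eu-central-1 --infile=zone.json
radosgw-admin period update --commit    # multisite only
# then restart the radosgw daemons so they pick up the new zone config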
Hello, everyone,
in the scenario where the OS disk is blocked, the mon service is still running but can't
work normally, and soon the mon falls out of quorum, but some OSDs were still marked down after
mon_osd_report_timeout*2 seconds,
which makes the cluster unavailable.
At this time, the OS is very slow, ma
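If it helps with debugging, this is roughly how one could inspect (and temporarily raise) the timeouts involved; the value below is only an example:

ceph config get mon mon_osd_report_timeout
ceph config get osd osd_heartbeat_grace
# temporarily raise the report timeout while the OS disk is degraded
ceph config set mon mon_osd_report_timeout 1800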
Found it.
[root@s3db1 ~]# radosgw-admin zone get --rgw-zone=eu-central-1
{
"id": "ff7a8b0c-07e6-463a-861b-78f0adeba8ad",
"name": "eu-central-1",
...SNIP...,
"metadata_heap": " ",
"realm_id": "5d6f2ea4-b84a-459b-bce2-bccac338b3ef"
}
On Wed, 10 Mar 2021 at 07:37, Bo
> On 10 Mar 2021, at 09:50, Norman.Kern wrote:
>
> I have used Ceph RBD with OpenStack for some time, and I ran into a problem while
> destroying a VM. OpenStack tried to
>
> delete the RBD image but failed. I tested deleting an image with the rbd command;
> it takes a lot of time (image size 512 GB or more)
Hi Guys,
I have used Ceph RBD with OpenStack for some time, and I ran into a problem while
destroying a VM. OpenStack tried to
delete the RBD image but failed. I tested deleting an image with the rbd command; it
takes a lot of time (image size 512 GB or more).
Has anyone run into the same problem?
Thanks,
No
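In case it helps, a rough sketch of taking the slow part out of the delete path; pool and image names are placeholders:

# moving the image to the trash returns quickly
rbd trash mv volumes/volume-1234
rbd trash ls volumes
# the real cleanup can then run later, outside the OpenStack request
rbd trash rm volumes/<image-id-from-trash-ls>
# delete parallelism is governed by rbd_concurrent_management_ops
ceph config set client rbd_concurrent_management_ops 20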
Good morning ceph people,
I have a pool whose name is a single whitespace, and I want to know what
creates the pool.
I already renamed it, but something recreates the pool.
Is there a way to find out what created the pool and what its content is?
When I checked its content I got:
[root@s3db1 ~]# ra
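For the record, what I would look at to identify the creator (the quoted space is the literal pool name):

ceph osd pool ls detail                # pool id, flags and application tags
ceph osd pool application get ' '      # which application (rgw/rbd/cephfs/mgr) claimed the pool
rados -p ' ' ls | head                 # object names usually hint at who writes there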
Hi everyone,
In case you missed the Ceph Tech Talk and Code Walkthrough for
February, the recordings are now available to watch:
Jason Dillaman | Librbd Part 2: https://www.youtube.com/watch?v=nVjYVmqNClM
Sage Weil | What's New In the Pacific Release: https://youtu.be/PVtn53MbxTc
On Tue, Feb 16,
Hi everyone,
We are approaching our deadline of April 2nd for the Ceph User Survey
to be filled out.
Thank you to everyone for the feedback so far. Please send further
feedback for this survey here:
https://pad.ceph.com/p/user-survey-2021-feedback
On Tue, Feb 16, 2021 at 2:20 PM Mike Perez wro
Hi,
we replaced some of our OSDs a while ago, and while everything
recovered as planned, one PG is still stuck at active+clean+remapped with
no backfilling taking place.
Mapping the PG in question shows me that one OSD is missing:
$ ceph pg map 35.1fe
osdmap e1265760 pg 35.1fe (35.1fe) ->
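This is roughly what I would check next for a PG stuck like this (PG id taken from the example above):

ceph pg 35.1fe query           # look at "up", "acting" and the recovery_state section
ceph osd df tree               # is a candidate OSD down, full, or reweighted to 0?
ceph osd dump | grep upmap     # any pg_upmap entries involving 35.1fe?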
I have a node down and PGs are remapping/backfilling. I also have a lot of
PGs in backfill_wait.
I was wondering if there is a specific order in which this is being executed. E.g. I
have a large 'garbage' pool ec21 that is stuck. I could resolve that by changing
the min size. However, I would rather hav
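As far as I know the order is driven by per-pool and per-PG recovery priorities, so a rough sketch of how one could nudge it (pool name and PG id are placeholders):

# raise the recovery priority of the pool that matters
ceph osd pool set important-pool recovery_priority 5
# or push individual PGs to the front of the backfill queue
ceph pg force-backfill 21.3a
# and optionally allow more concurrent backfills per OSD
ceph config set osd osd_max_backfills 2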
OK, in the interface, when I create a bucket the index is created automatically:
1 device_health_metrics
2 cephfs_data
3 cephfs_metadata
4 .rgw.root
5 default.rgw.log
6 default.rgw.control
7 default.rgw.meta
8 default.rgw.buckets.index
* I think I just could not make an insertion using s3cmd
List
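For completeness, a minimal s3cmd round trip one could use to test an insertion (bucket and file names are just examples):

s3cmd mb s3://test-bucket
echo hello > hello.txt
s3cmd put hello.txt s3://test-bucket/
s3cmd ls s3://test-bucket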
Hi everyone,
I just rebuilt a (test) cluster using:
OS : Ubuntu 20.04.2 LTS
CEPH : ceph version 15.2.9 (357616cbf726abb779ca75a551e8d02568e15b17) octopus
(stable)
3 nodes : monitor/storage
1. The cluster looks good:
# ceph -s
cluster:
id: 9a89aa5a-1702-4f87-a99c-f94c9f2cdabd
Hello,
I haven't needed to replace a disk in a while, and it seems that I have misplaced
my quick little guide on how to do it.
When searching the docs, they now recommend using ceph-volume
to create OSDs; when doing that, it creates an LV:
Disk /dev/sde: 4000.2 GB, 4000225165312 byt
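In case it is useful, a rough outline of the replacement steps as I understand them; the OSD id (17) and device (/dev/sde) are examples, so double-check against the current docs:

ceph osd out 17
ceph osd safe-to-destroy osd.17            # wait until this reports it is safe
ceph osd purge 17 --yes-i-really-mean-it
ceph-volume lvm zap /dev/sde --destroy     # wipe old LVs on the replacement slot
ceph-volume lvm create --data /dev/sde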
Just confirming: crashes are gone with gperftools-libs-2.7-8.el8.x86_64.rpm.
Cheers,
Andrej
On 09/03/2021 16:52, Andrej Filipcic wrote:
Hi,
I was checking that bug yesterday, yes, and it smells the same.
I will give the EPEL one a try,
Thanks
Andrej
On 09/03/2021 16:44, Dan van der Ster
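For anyone else hitting this on CentOS/RHEL 8, roughly how to check and roll the package back (assuming the fixed build is the one in epel-testing mentioned below):

rpm -q gperftools-libs
# if it reports 2.8, pull the downgraded build back in
dnf --enablerepo=epel-testing downgrade gperftools-libs
systemctl restart ceph-osd.target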
This week's meeting will focus on the ongoing rewrite of the cephadm
documentation and making certain that the documentation addresses the rough
edges in the Pacific release.
Meeting: https://bluejeans.com/908675367
Etherpad: https://pad.ceph.com/p/Ceph_Documentation
For those who aren't on the bug tracker, this was brought up (and has
follow-up) here: https://tracker.ceph.com/issues/49618
On Thu, Mar 4, 2021 at 9:55 PM Szabo, Istvan (Agoda)
wrote:
>
> Hi,
>
> I have a 3 DC multisite setup.
>
> The replication is directional like HKG->SGP->US so the bucket is
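Not a fix, but the commands one could use to see where replication stops in such a directional setup (the bucket name is a placeholder):

radosgw-admin sync status
radosgw-admin bucket sync status --bucket=mybucket
radosgw-admin sync error list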
Hi,
I was checking that bug yesterday, yes, and it smells the same.
I will give the EPEL one a try,
Thanks
Andrej
On 09/03/2021 16:44, Dan van der Ster wrote:
Hi Andrej,
I wonder if this is another manifestation of the gperftools-libs
v2.8 bug, e.g. https://tracker.ceph.com/issues
Hi Andrej,
I wonder if this is another manifestation of the gperftools-libs
v2.8 bug, e.g. https://tracker.ceph.com/issues/49618
If so, there is a fixed (downgraded) version in epel-testing now.
Cheers, Dan
On Tue, Mar 9, 2021 at 4:36 PM Andrej Filipcic wrote:
>
>
> Hi,
>
> under heav
Hi,
under heavy load our cluster is experiencing frequent OSD crashes. Is
this a known bug, or should I report it? Any workarounds? It looks to be
highly correlated with memory tuning.
It happens with both Nautilus 14.2.16 and Octopus 15.2.9. I have forced
the bitmap bluefs and bluestore all
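To spell out what forcing the bitmap allocator means here, roughly (both settings need an OSD restart to take effect):

ceph config set osd bluestore_allocator bitmap
ceph config set osd bluefs_allocator bitmap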
Dear Ceph’ers
I am about to upgrade the MDS nodes for CephFS in the Ceph cluster (erasure code
8+3) that I am administrating.
Since they will get plenty of memory and CPU cores, I was wondering if it would
be a good idea to move the metadata OSDs (NVMes, currently on OSD nodes together
with the cephfs_data OD
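If it comes to that, pinning the metadata pool to a dedicated device class could look roughly like this; the rule name and the nvme class are assumptions about the setup:

ceph osd crush rule create-replicated meta-nvme default host nvme
ceph osd pool set cephfs_metadata crush_rule meta-nvme
# the metadata PGs will then backfill onto the nvme-class OSDs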
We use the S3 API to upload files to our Ceph cluster with a non-versioned bucket, and
we overwrote many files that we now want to recover. Is there any way to recover them?
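As far as I know, objects overwritten in a non-versioned bucket cannot be recovered; to protect future uploads one could enable versioning on the bucket, e.g. with the AWS CLI against the RGW endpoint (endpoint and bucket name are placeholders):

aws s3api put-bucket-versioning \
    --endpoint-url http://rgw.example.com:8080 \
    --bucket mybucket \
    --versioning-configuration Status=Enabled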
Good day
I'm currently decommissioning a cluster that runs EC 3+1 (rack failure
domain, with 5 racks); however, the cluster still has some production items
on it, since I'm in the process of moving them to our new EC 8+2 cluster.
Running Luminous 12.2.13 on Ubuntu 16 HWE, containerized with ceph-ansib
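A rough sketch of draining one OSD at a time during such a decommission (the id is a placeholder):

ceph osd crush reweight osd.42 0     # let the data move off first
ceph -s                              # wait for misplaced objects to clear
ceph osd safe-to-destroy osd.42      # then it can be stopped and purged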