Dear all,
ceph-mgr-dashboard-15.2.13-0.el7.noarch contains three RPM dependencies
that cannot be resolved here (they are not part of CentOS 7 or EPEL 7):
python3-cherrypy
python3-routes
python3-jwt
Does anybody know where they are expected to come from?
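For reference, this is roughly how I'm checking what the enabled repos can
satisfy (repoquery needs yum-utils; package names taken from above):

  # list everything the package requires and which repo (if any) provides it
  yum deplist ceph-mgr-dashboard-15.2.13-0.el7.noarch

  # or just dump the raw requirements
  repoquery --requires ceph-mgr-dashboard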
Thanks,
Andreas
I had a similar problem with Pacific when using the build from CentOS; I switched
to the RPMs directly from Ceph and it went fine.
> On 31 May 2021 at 10:29, Andreas Haupt wrote:
>
> Dear all,
>
> ceph-mgr-dashboard-15.2.13-0.el7.noarch contains three rpm dependencies
> that cannot be resolved
Hi,
CentOS 7 is only partially supported for Octopus:
"Note that the dashboard, prometheus, and restful manager modules will
not work on the CentOS 7 build due to Python 3 module dependencies that
are missing in CentOS 7."
https://docs.ceph.com/en/latest/releases/octopus/
cheers
wolfgang
Hello.
I have a multisite RGW environment.
When I create a new bucket, the bucket is immediately created on both
the master and the secondary.
If I don't want a bucket to sync, I have to stop the sync after creation.
Is there any global option like "Do not sync automatically; only start syncing when I want to"?
Yes, you're right. I have a global sync rule in the zonegroup:
"sync_from_all": "true",
"sync_from": [],
"redirect_zone": ""
If I need to stop/start the sync after creation I use the command:
radosgw-admin bucket sync enable/disable --bucket=$newbucket
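A rough sketch of the full workflow (bucket name is just an example):

  # create the bucket as usual on the master zone, then immediately
  # stop replication for it
  radosgw-admin bucket sync disable --bucket=$newbucket

  # verify the per-bucket sync state
  radosgw-admin bucket sync status --bucket=$newbucket

  # re-enable later if the bucket should be replicated after all
  radosgw-admin bucket sync enable --bucket=$newbucket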
I develop
Hi,
Any way to clean up large-omap warnings in the index pool?
A PG deep-scrub didn't help.
I know how to clean them up in the log pool, but I have no idea how in the index pool :/
It's an octopus deployment 15.2.10.
Thank you
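For context, this is roughly how I'm locating the offending objects (default
log path assumed):

  # the health warning names the affected pool
  ceph health detail | grep -i large_omap

  # the cluster log names the exact object and PG
  grep 'Large omap object found' /var/log/ceph/ceph.log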
On 5/31/21 3:02 PM, mhnx wrote:
> Yes, you're right. I have a global sync rule in the zonegroup:
> "sync_from_all": "true",
> "sync_from": [],
> "redirect_zone": ""
> If I need to stop/start the sync after creation I use the command:
> radosgw-admin bucket sync enable/d
The bucket is created, but if no sync rule is set, the data will not be synced across.
Istvan Szabo
Senior Infrastructure Engineer
---
Agoda Services Co., Ltd.
e: istvan.sz...@agoda.com
---
Yeah, this would be interesting for me as well.
Istvan Szabo
Senior Infrastructure Engineer
---
Agoda Services Co., Ltd.
e: istvan.sz...@agoda.com
---
All the index data will be in OMAP, which you can see a listing of with
`ceph osd df tree`
Do you have large buckets (many, many objects in a single bucket) with
few shards? You may have to reshard one (or some) of your buckets.
It'll take some reading if you're using multisite, in order to
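As a rough sketch of the reshard side (bucket name hypothetical; on multisite
you'd want to stop sync for the bucket before resharding):

  # see which buckets are over the per-shard object limit
  radosgw-admin bucket limit check

  # manual reshard; rule of thumb is roughly 100k objects per shard
  radosgw-admin bucket reshard --bucket=big-bucket --num-shards=101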
Unfortunately Ceph 16.2.4 is still not working for us. We continue to have
issues where the 26th OSD is not fully created and started. We've
confirmed that we do get the flock as described in:
https://tracker.ceph.com/issues/50526
-
I have verified in our labs a way to easily reproduce th
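In the meantime, a generic way to see which process is holding an flock
(nothing cephadm-specific; the path is just a placeholder):

  # advisory locks with the holding PIDs
  cat /proc/locks

  # or point lsof at the suspected file
  sudo lsof /path/to/locked/file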
Yeah, I found a bucket with a delete in progress at the moment; I will pre-shard it.
Istvan Szabo
Senior Infrastructure Engineer
---
Agoda Services Co., Ltd.
e: istvan.sz...@agoda.com
---
Does the image we built fix the problem for you? That's how we worked
around it. Unfortunately, it even bites you with fewer OSDs if you have
DB/WAL on other devices: we have 24 rotational drives/OSDs but split
DB/WAL onto multiple NVMes. We're hoping the remoto fix (since it's
merged upstream and
David,
What I can confirm is that if this fix is already in 16.2.4 and 15.2.13,
then there's another issue resulting in the same situation, as it continues
to happen in the latest available images.
We are going to try and see if we can install a 15.2.x release and
subsequently upgrade using a fixe
So the bucket has been deleted on the master zone, and it has been removed from
the other zones as well. On the master zone the omap disappeared after a deep
scrub, but on the secondary zone it's still there.
There were 3 large-omap objects at the beginning; after I scrubbed the affected
OSDs (not just the PGs) I have 6.
Here is the command output:
ID  CLASS  WEIGHT     REWEIGHT  SIZE     RAW USE  DATA    OMAP    META     AVAIL    %USE  VAR   PGS  STATUS  TYPE NAME
-1         530.89032         -  531 TiB   29 TiB  11 TiB  31 GiB  338 GiB  502 TiB  5.43  1.00    -          root default
-5          85.5