Hello Ceph Users,
Has anyone been successful in getting the SSL cert for the Beast frontend into the config
database?
Octopus 15.2.8
tail -f /var/log/ceph/ceph-client.rgw.*.log
2021-01-05T18:38:35.008+1100 7f7cd6ac9100 1 radosgw_Main not setting numa affinity
2021-01-05T18:38:35.321+1100 7f7cd6ac9100
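For reference, the documented approach is to store the combined PEM in the monitor
config-key store and reference it from rgw_frontends with a config:// URI; whether
that works cleanly on 15.2.8 is exactly the open question here. A minimal sketch,
where the realm/zone key name and file path are placeholders:

  # store certificate + key (one concatenated PEM) in the config database
  ceph config-key set rgw/cert/default/default.crt -i /path/to/combined.pem
  # point the Beast frontend at the stored cert
  ceph config set client.rgw rgw_frontends \
      "beast ssl_port=443 ssl_certificate=config://rgw/cert/default/default.crt"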
Hi,
Using Ceph 15.2.8 installed with cephadm. Trying to get RadosGW to work.
I have managed to get the RadosGW working. I can manage it through the
dashboard and use the aws s3 client to create new buckets etc. When trying to
use Swift I get errors.
Not sure how to continue tracking the problem here
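A quick way to reproduce the failure from the command line, assuming Keystone v3
auth (the endpoint and credentials below are placeholders):

  swift --auth-version 3 \
        --os-auth-url https://keystone.example:5000/v3 \
        --os-project-name demo --os-username demo --os-password secret \
        stat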
Hi,
Which version of OpenStack do you have? I guess, since Ussuri (or maybe
even before), Swift authentication through Keystone requires the account in the
URL. You have to add this option in "/etc/ceph/ceph.conf", section rgw: "rgw
swift account in url = true", or set it directly via the config database.
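For example, setting it directly in the config database (the rgw section/service
name may differ per deployment):

  ceph config set client.rgw rgw_swift_account_in_url true
  # restart the rgw service afterwards, e.g. with cephadm:
  ceph orch restart rgw.<service_name>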
Hello ,
Looking for information about a timeout which occurs once a week for a ceph rbd
image mounted on a machine using rbd-nbd (Linux Ubuntu machine).
The error found in 'dmesg' is below:
[798016.401469] block nbd0: Connection timed out
[798016.401506] block nbd0: shutting down sockets
Many Thanks
Hi,
I have set up a ceph cluster with cephadm with a docker backend.
I want to move /var/lib/docker to a separate device to get better
performance and less load on the OS device.
I tried that by stopping docker, copying the content of /var/lib/docker to
the new device, and mounting the new device to /v
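A generic sketch of that procedure (device name, filesystem, and temporary mount
point are assumptions; preserving ownership and xattrs matters for container
images):

  systemctl stop docker
  # new device temporarily mounted at /mnt/newdisk (placeholder)
  rsync -aHAX /var/lib/docker/ /mnt/newdisk/
  mv /var/lib/docker /var/lib/docker.old && mkdir /var/lib/docker
  umount /mnt/newdisk
  # /dev/sdb1 is a placeholder for the new device
  echo '/dev/sdb1 /var/lib/docker xfs defaults 0 0' >> /etc/fstab
  mount /var/lib/docker
  systemctl start docker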
You can try using the "--timeout X" option for "rbd-nbd" to increase
the timeout. Some kernels treat the default as infinity, but there
were some >=4.9 kernels that switched behavior and started defaulting
to 30 seconds. There are also known issues with attempting to place XFS
file systems on top
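For example (the pool/image names are placeholders):

  # map with a 120-second timeout instead of the kernel default
  rbd-nbd map --timeout 120 mypool/myimage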
Hi ,
Thank you for your feedback. It seems the error is related to the fstrim run
once a week (default).
Do you have more information about the NBD/XFS memory pressure issues?
Thanks
-Original Message-
From: Jason Dillaman
Sent: Tuesday, January 5, 2021 14:42
To: Wissem MIMOUNA
C
On Tue, Jan 5, 2021 at 9:01 AM Wissem MIMOUNA
wrote:
>
> Hi ,
>
> Thank you for your feedback. It seems the error is related to the fstrim run
> once a week (default).
Do you have object-map enabled? If not, the FS will gladly send huge
discard extents which, if you have a large volume, could
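A quick check, in case it helps others hitting this (the image name is a
placeholder):

  rbd info mypool/myimage | grep features
  # if object-map is missing, it can be enabled together with fast-diff:
  rbd feature enable mypool/myimage object-map fast-diff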
Yes, we do have object-map enabled.
Regards
-Original Message-
From: Jason Dillaman
Sent: Tuesday, January 5, 2021 15:08
To: Wissem MIMOUNA
Cc: ceph-users@ceph.io
Subject: Re: [ceph-users] Timeout ceph rbd-nbd mounted image
On Tue, Jan 5, 2021 at 9:01 AM Wissem MIMOUNA
wrote:
>
> Hi
Hi,
I am indeed using the OpenStack Ussuri release. I changed "rgw swift
account in url = true" directly with the ceph config set ... command. Also
checked that rgw_keystone_accepted_roles is correctly set and not the admin
one. Also tested disabling rgw_keystone_verify_ssl.
Should radosgw communi
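A few things worth verifying on the rgw side (these are the standard option
names; the client.rgw section may differ per deployment):

  ceph config get client.rgw rgw_swift_account_in_url
  ceph config get client.rgw rgw_keystone_accepted_roles
  # raise rgw verbosity to watch the Keystone round-trip in the logs
  ceph config set client.rgw debug_rgw 20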
Any comments?
Thanks!
Tony
> -Original Message-
> From: Tony Liu
> Sent: Tuesday, December 29, 2020 5:22 PM
> To: ceph-users@ceph.io
> Subject: [ceph-users] logging to stdout/stderr causes huge container log
> file
>
> Hi,
>
> With ceph 15.2.5 octopus, mon, mgr and rgw dump logging on
If you are using ceph-container images you should update your image. This
feature was introduced in v5.0.5:
https://github.com/ceph/ceph-container/releases/tag/v5.0.5
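As a stopgap until the image is updated, Docker's own log rotation also caps the
container log size (standard daemon.json options; the size limits below are just
examples):

  # /etc/docker/daemon.json
  {
    "log-driver": "json-file",
    "log-opts": { "max-size": "10m", "max-file": "3" }
  }
  # then: systemctl restart docker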
On Wed, Jan 6, 2021 at 1:22 AM Tony Liu wrote:
> Any comments?
>
> Thanks!
> Tony
> > -Original Message-
> > From: T