[ceph-users] RGW SSL key in config database

2021-01-05 Thread Glen Baars
Hello Ceph Users,

Has anyone been successful in getting the SSL cert for the Beast frontend into
the config database?

Octopus 15.2.8

tail -f /var/log/ceph/ceph-client.rgw.*.log

2021-01-05T18:38:35.008+1100 7f7cd6ac9100  1 radosgw_Main not setting numa 
affinity
2021-01-05T18:38:35.321+1100 7f7cd6ac9100  0 framework: beast
2021-01-05T18:38:35.321+1100 7f7cd6ac9100  0 framework conf key: 
ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
2021-01-05T18:38:35.321+1100 7f7cd6ac9100  0 framework conf key: 
ssl_private_key, val: config://rgw/cert/$realm/$zone.key
2021-01-05T18:38:35.321+1100 7f7cd6ac9100  0 starting handler: beast
2021-01-05T18:38:35.322+1100 7f7c9826d700  0 RGWReshardLock::lock failed to 
acquire lock on reshard.00 ret=-16
2021-01-05T18:38:35.333+1100 7f7cd6ac9100 -1 ssl_private_key was not found: 
rgw/cert/OURREALM/OURZONE.key
2021-01-05T18:38:35.334+1100 7f7cd6ac9100 -1 ssl_private_key was not found: 
rgw/cert/OURREALM/OURZONE.crt
2021-01-05T18:38:35.334+1100 7f7cd6ac9100 -1 no ssl_certificate configured for 
ssl_port
2021-01-05T18:38:35.334+1100 7f7cd6ac9100 -1 ERROR: failed initializing frontend

Certs are in the key store:

ceph config-key ls | grep rgw/cert/OURREALM/OURZONE.key
"rgw/cert/OURREALM/OURZONE ",

ceph config-key ls | grep rgw/cert/OURREALM/OURZONE.crt
"rgw/cert/OURREALM/OURZONE ",

I haven't had any luck with the many different options I've tried.

Glen


[ceph-users] Ceph RadosGW & OpenStack swift problem

2021-01-05 Thread Mika Saari
Hi,

  Using Ceph 15.2.8, installed with cephadm, and trying to get RadosGW to work.
I have managed to get RadosGW running: I can manage it through the dashboard
and use the aws s3 client to create new buckets, etc. When I try to use swift,
I get errors.

  Not sure how to continue tracking the problem from here. Any tips are welcome.

Thank you very much,
  -Mika

--- What I have done and what are the results. Some data changed
manually  ---
  What I have done:
At OpenStack Side:
  1) openstack user create --domain default --password-prompt swift
  2) openstack role add --project service --user swift admin
  3) openstack endpoint create --region RegionOne object-store public
http://ceph1/swift/v1/AUTH_%\(project_id\)s
  4) openstack endpoint create --region RegionOne object-store internal
http://ceph1/swift/v1/AUTH_%\(project_id\)s
  5) openstack endpoint create --region RegionOne object-store admin
http://ceph1/swift/v1

  At Ceph side:
1) ceph config set mgr rgw_keystone_api_version 3
2) ceph config set mgr rgw_keystone_url http://controller:5000
3) ceph config set mgr rgw_keystone_accepted_admin_roles admin
4) ceph config set mgr rgw_keystone_admin_user swift
5) ceph config set mgr rgw_keystone_admin_password swift_test
6) ceph config set mgr rgw_keystone_admin_domain default
7) ceph config set mgr rgw_keystone_admin_project service
  For the admin project I have tested different values, e.g. service and admin.

  Now when testing the API using the swift client, I get the following:
1) swift post test3 --debug

DEBUG:keystoneclient.auth.identity.v3.base:Making authentication request to
http://controller:5000/v3/auth/tokens
DEBUG:urllib3.connectionpool:Starting new HTTP connection (1):
controller:5000
DEBUG:urllib3.connectionpool:http://controller:5000 "POST /v3/auth/tokens
HTTP/1.1" 201 7032

. some openstack data here .

DEBUG:urllib3.connectionpool:Starting new HTTP connection (1): ceph1:80
DEBUG:urllib3.connectionpool:http://ceph1:80 "POST
/swift/v1/AUTH_adsfasdfasdfasdfasdfasdf/test3 HTTP/1.1" 401 12
INFO:swiftclient:REQ: curl -i
http://ceph1/swift/v1/AUTH_adsfasdfasdfasdfasdfasdf/test3 -X POST -H
"X-Auth-Token: " -H "Content-Length: 0"
INFO:swiftclient:RESP STATUS: 401 Unauthorized

and finally I get
Container POST failed:
http://ceph1/swift/v1/AUTH_adsfasdfasdfasdfasdfasdf/test3 401 Unauthorized
  b'AccessDenied'


[ceph-users] Re: Ceph RadosGW & OpenStack swift problem

2021-01-05 Thread Wissem MIMOUNA
Hi,

Which version of OpenStack do you have? I guess that since Ussuri (or maybe
even before), swift authentication through keystone requires the account in
the URL. You have to add this option in "/etc/ceph/ceph.conf", in the rgw
section: "rgw swift account in url = true", or set it directly in the config
database. Also, I noticed you did this ==> 3) ceph config set mgr
rgw_keystone_accepted_admin_roles ... I think you should use the option
"rgw keystone accepted roles" instead.
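
A minimal sketch of the equivalent config-database commands, mirroring the
"ceph config set mgr ..." form used in your steps:

ceph config set mgr rgw_swift_account_in_url true     # puts the AUTH_... account in the URL
ceph config set mgr rgw_keystone_accepted_roles admin # instead of ..._accepted_admin_roles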

Regards

-Original Message-
From: Mika Saari
Sent: Tuesday, January 5, 2021 10:03
To: ceph-users@ceph.io
Subject: [ceph-users] Ceph RadosGW & OpenStack swift problem

Hi,

  Using Ceph 15.2.8 installed with cephadm. Trying to get RadosGW to work.
I have managed to get the RadosGW working. I can manage it through a dashboard 
and use aws s3 client to create new buckets etc. When trying to use swift I get 
errors.

  Not sure how to continue to track the problem here. Any tips are welcome.

Thank you very much,
  -Mika

--- What I have done and what are the results. Some data changed manually  
---
  What I have done:
At OpenStack Side:
  1) openstack user create --domain default --password-prompt swift
  2) openstack role add --project service --user swift admin
  3) openstack endpoint create --region RegionOne object-store public
http://ceph1/swift/v1/AUTH_%\(project_id\)s
  4) openstack endpoint create --region RegionOne object-store internal
http://ceph1/swift/v1/AUTH_%\(project_id\)s
  5) openstack endpoint create --region RegionOne object-store admin
http://ceph1/swift/v1

  At Ceph side:
1) ceph config set mgr rgw_keystone_api_version 3
2) ceph config set mgr rgw_keystone_url http://controller:5000
3) ceph config set mgr rgw_keystone_accepted_admin_roles admin
4) ceph config set mgr rgw_keystone_admin_user swift
5) ceph config set mgr rgw_keystone_admin_password swift_test
6) ceph config set mgr rgw_keystone_admin_domain default
7) ceph config set mgr rgw_keystone_admin_project service
  for project I have tested different projects e.g. service and admin

  Now when testing the API using swift client I get next:
1) swift post test3 --debug

DEBUG:keystoneclient.auth.identity.v3.base:Making authentication request to
http://controller:5000/v3/auth/tokens
DEBUG:urllib3.connectionpool:Starting new HTTP connection (1):
controller:5000
DEBUG:urllib3.connectionpool:http://controller:5000 "POST /v3/auth/tokens 
HTTP/1.1" 201 7032

. some openstack data here .

DEBUG:urllib3.connectionpool:Starting new HTTP connection (1): ceph1:80
DEBUG:urllib3.connectionpool:http://ceph1:80 "POST
/swift/v1/AUTH_adsfasdfasdfasdfasdfasdf/test3 HTTP/1.1" 401 12
INFO:swiftclient:REQ: curl -i
http://ceph1/swift/v1/AUTH_adsfasdfasdfasdfasdfasdf/test3 -X POST -H
"X-Auth-Token: " -H "Content-Length: 0"
INFO:swiftclient:RESP STATUS: 401 Unauthorized

and finally I get
Container POST failed:
http://ceph1/swift/v1/AUTH_adsfasdfasdfasdfasdfasdf/test3 401 Unauthorized
  b'AccessDenied'


[ceph-users] Timeout ceph rbd-nbd mounted image

2021-01-05 Thread Wissem MIMOUNA
Hello ,

Looking for information about a timeout which occurs once a week for a ceph
rbd image mounted on a machine using rbd-nbd (a Linux Ubuntu machine).
The error found in 'dmesg' is below:
[798016.401469] block nbd0: Connection timed out
[798016.401506] block nbd0: shutting down sockets

Many Thanks


[ceph-users] cephadm cluster move /var/lib/docker to separate device fails

2021-01-05 Thread Karsten Nielsen

Hi,

I have set up a Ceph cluster with cephadm, using the Docker backend.

I want to move /var/lib/docker to a separate device to get better 
performance and less load on the OS device.


I tried that by stopping Docker, copying the contents of /var/lib/docker to
the new device, and mounting the new device at /var/lib/docker.
The other containers started and continue to run as expected.
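
For reference, a sketch of the move procedure described above (device and
mount point names are placeholders):

systemctl stop docker
mount /dev/sdX1 /mnt/new                 # the new device
rsync -aHAX /var/lib/docker/ /mnt/new/   # preserve hardlinks, ACLs, xattrs
umount /mnt/new
mount /dev/sdX1 /var/lib/docker          # plus an /etc/fstab entry
systemctl start docker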

But the ceph containers seem to be broken, and I am not able to get them back
into a working state.
I have tried to remove the host with `ceph orch host rm itcnchn-bb4067` and
re-add it, but with no effect.

The strange thing is that 2 of the 4 containers come up as expected.

ceph orch ps itcnchn-bb4067
NAME                                  HOST            STATUS         REFRESHED  AGE  VERSION  IMAGE NAME               IMAGE ID      CONTAINER ID
crash.itcnchn-bb4067                  itcnchn-bb4067  running (18h)  10m ago    4w   15.2.7   docker.io/ceph/ceph:v15  2bc420ddb175  2af28c4571cf
mds.cephfs.itcnchn-bb4067.qzoshl      itcnchn-bb4067  error          10m ago    4w            docker.io/ceph/ceph:v15
mon.itcnchn-bb4067                    itcnchn-bb4067  error          10m ago    18h           docker.io/ceph/ceph:v15
rgw.ikea.dc9-1.itcnchn-bb4067.gtqedc  itcnchn-bb4067  running (18h)  10m ago    4w   15.2.7   docker.io/ceph/ceph:v15  2bc420ddb175  00d000aec32b


Docker logs from the active manager do not say much about what is wrong:
debug 2021-01-05T09:57:52.537+0000 7fdb69691700  0 log_channel(cephadm) log [INF] : Reconfiguring mds.cephfs.itcnchn-bb4067.qzoshl (unknown last config time)...
debug 2021-01-05T09:57:52.541+0000 7fdb69691700  0 log_channel(cephadm) log [INF] : Reconfiguring daemon mds.cephfs.itcnchn-bb4067.qzoshl on itcnchn-bb4067
debug 2021-01-05T09:57:52.973+0000 7fdb64e88700  0 log_channel(cluster) log [DBG] : pgmap v347: 241 pgs: 241 active+clean; 18 GiB data, 50 GiB used, 52 TiB / 52 TiB avail; 18 KiB/s rd, 78 KiB/s wr, 24 op/s
debug 2021-01-05T09:57:53.085+0000 7fdb69691700  0 log_channel(cephadm) log [INF] : Reconfiguring mon.itcnchn-bb4067 (unknown last config time)...
debug 2021-01-05T09:57:53.085+0000 7fdb69691700  0 log_channel(cephadm) log [INF] : Reconfiguring daemon mon.itcnchn-bb4067 on itcnchn-bb4067
debug 2021-01-05T09:57:53.625+0000 7fdb69691700  0 log_channel(cephadm) log [INF] : Reconfiguring rgw.ikea.dc9-1.itcnchn-bb4067.gtqedc (unknown last config time)...
debug 2021-01-05T09:57:53.629+0000 7fdb69691700  0 log_channel(cephadm) log [INF] : Reconfiguring daemon rgw.ikea.dc9-1.itcnchn-bb4067.gtqedc on itcnchn-bb4067
debug 2021-01-05T09:57:54.141+0000 7fdb69691700  0 log_channel(cephadm) log [INF] : Reconfiguring crash.itcnchn-bb4067 (unknown last config time)...
debug 2021-01-05T09:57:54.141+0000 7fdb69691700  0 log_channel(cephadm) log [INF] : Reconfiguring daemon crash.itcnchn-bb4067 on itcnchn-bb4067


- Karsten


[ceph-users] Re: Timeout ceph rbd-nbd mounted image

2021-01-05 Thread Jason Dillaman
You can try using the "--timeout X" option for "rbd-nbd" to increase
the timeout. Some kernels treat the default as infinity, but there
were some >=4.9 kernels that switched behavior and started defaulting
to 30 seconds. There are also known issues with attempting to place XFS
file systems on top of NBD due to memory pressure issues.

On Tue, Jan 5, 2021 at 4:36 AM Wissem MIMOUNA
 wrote:
>
> Hello ,
>
> Looking for information about a timeout which occur once a week for a ceph 
> rbd image mounted on a machine using rbd-nbd (Linux Ubuntu machine).
> The error found in 'dmseg' is below :
> [798016.401469] block nbd0: Connection timed out [798016.401506] block nbd0: 
> shutting down sockets
>
> Many Thanks


-- 
Jason


[ceph-users] Re: Timeout ceph rbd-nbd mounted image

2021-01-05 Thread Wissem MIMOUNA
Hi,

Thank you for your feedback. It seems the error is related to the fstrim run
once a week (the default).
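
On Ubuntu the weekly trim normally comes from the fstrim systemd timer; as a
quick sketch, its schedule can be checked against the dmesg timestamps with:

systemctl list-timers fstrim.timer   # next/last run of the weekly trim
journalctl -u fstrim.service         # when it actually ran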
Do you have more information about the NBD/XFS memory pressure issues?

Thanks

-Original Message-
From: Jason Dillaman
Sent: Tuesday, January 5, 2021 14:42
To: Wissem MIMOUNA
Cc: ceph-users@ceph.io
Subject: Re: [ceph-users] Timeout ceph rbd-nbd mounted image

You can try using the "--timeout X" optional for "rbd-nbd" to increase the 
timeout. Some kernels treat the default as infinity, but there were some >=4.9 
kernels that switched behavior and started defaulting to 30 seconds. There is 
also known issues with attempting to place XFS file systems on top of NBD due 
to memory pressure issues.

On Tue, Jan 5, 2021 at 4:36 AM Wissem MIMOUNA  
wrote:
>
> Hello ,
>
> Looking for information about a timeout which occur once a week for a ceph 
> rbd image mounted on a machine using rbd-nbd (Linux Ubuntu machine).
> The error found in 'dmseg' is below :
> [798016.401469] block nbd0: Connection timed out [798016.401506] block 
> nbd0: shutting down sockets
>
> Many Thanks


--
Jason



[ceph-users] Re: Timeout ceph rbd-nbd mounted image

2021-01-05 Thread Jason Dillaman
On Tue, Jan 5, 2021 at 9:01 AM Wissem MIMOUNA
 wrote:
>
> Hi ,
>
> Thank you for your feedback  . It seems the error related to the fstrim run 
> once a week ( default ) .

Do you have object-map enabled? If not, the FS will gladly send huge
discard extents which, if you have a large volume, could result in
hundreds of thousands of ops to the cluster. That's a great way to
hang IO.
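
If it were disabled, enabling it on an existing image would look roughly like
this (the image spec is a placeholder; object-map requires exclusive-lock,
and fast-diff is optional but usually wanted alongside it):

rbd feature enable rbd/myimage exclusive-lock
rbd feature enable rbd/myimage object-map fast-diff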

> Do you have more information about the NBD/XFS memeory pressure issues ?

See [1].

> Thanks
>
> -Original Message-
> From: Jason Dillaman
> Sent: Tuesday, January 5, 2021 14:42
> To: Wissem MIMOUNA
> Cc: ceph-users@ceph.io
> Subject: Re: [ceph-users] Timeout ceph rbd-nbd mounted image
>
> You can try using the "--timeout X" optional for "rbd-nbd" to increase the 
> timeout. Some kernels treat the default as infinity, but there were some 
> >=4.9 kernels that switched behavior and started defaulting to 30 seconds. 
> There is also known issues with attempting to place XFS file systems on top 
> of NBD due to memory pressure issues.
>
> On Tue, Jan 5, 2021 at 4:36 AM Wissem MIMOUNA 
>  wrote:
> >
> > Hello ,
> >
> > Looking for information about a timeout which occur once a week for a ceph 
> > rbd image mounted on a machine using rbd-nbd (Linux Ubuntu machine).
> > The error found in 'dmseg' is below :
> > [798016.401469] block nbd0: Connection timed out [798016.401506] block
> > nbd0: shutting down sockets
> >
> > Many Thanks
>
>
> --
> Jason
>

[1] https://tracker.ceph.com/issues/40822

-- 
Jason


[ceph-users] Re: Timeout ceph rbd-nbd mounted image

2021-01-05 Thread Wissem MIMOUNA
Yes, we have object-map enabled.

Regards

-Original Message-
From: Jason Dillaman
Sent: Tuesday, January 5, 2021 15:08
To: Wissem MIMOUNA
Cc: ceph-users@ceph.io
Subject: Re: [ceph-users] Timeout ceph rbd-nbd mounted image

On Tue, Jan 5, 2021 at 9:01 AM Wissem MIMOUNA  
wrote:
>
> Hi ,
>
> Thank you for your feedback  . It seems the error related to the fstrim run 
> once a week ( default ) .

Do you have object-map enabled? If not, the FS will gladly send huge discard 
extents which, if you have a large volume, could result in hundreds of 
thousands of ops to the cluster. That's a great way to hang IO.

> Do you have more information about the NBD/XFS memeory pressure issues ?

See [1].

> Thanks
>
> -Original Message-
> From: Jason Dillaman
> Sent: Tuesday, January 5, 2021 14:42
> To: Wissem MIMOUNA
> Cc: ceph-users@ceph.io
> Subject: Re: [ceph-users] Timeout ceph rbd-nbd mounted image
>
> You can try using the "--timeout X" optional for "rbd-nbd" to increase the 
> timeout. Some kernels treat the default as infinity, but there were some 
> >=4.9 kernels that switched behavior and started defaulting to 30 seconds. 
> There is also known issues with attempting to place XFS file systems on top 
> of NBD due to memory pressure issues.
>
> On Tue, Jan 5, 2021 at 4:36 AM Wissem MIMOUNA 
>  wrote:
> >
> > Hello ,
> >
> > Looking for information about a timeout which occur once a week for a ceph 
> > rbd image mounted on a machine using rbd-nbd (Linux Ubuntu machine).
> > The error found in 'dmseg' is below :
> > [798016.401469] block nbd0: Connection timed out [798016.401506] 
> > block
> > nbd0: shutting down sockets
> >
> > Many Thanks
>
>
> --
> Jason
>

[1] https://tracker.ceph.com/issues/40822

--
Jason



[ceph-users] Re: Ceph RadosGW & OpenStack swift problem

2021-01-05 Thread Mika Saari
Hi,

  I am indeed using the OpenStack Ussuri release. I changed "rgw swift
account in url = true" directly with a ceph config set ... command. I also
checked that rgw_keystone_accepted_roles is correctly set, and not the admin
one. I also tested disabling rgw_keystone_verify_ssl.

  Should radosgw communicate with keystone somehow? I cannot see my ceph
cluster requesting anything from keystone on any interface (checked with
tcpdump). I have tested restarting the radosgw with the command "ceph orch
restart rgw.default.ou", and it seems to bring the container down and up.
Not sure, though, whether that is enough to bring the settings into use.
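
For reference, a sketch of how the values can be checked in the config
database, using the same mgr target as in the earlier steps:

ceph config get mgr rgw_swift_account_in_url    # expect: true
ceph config get mgr rgw_keystone_accepted_roles # expect: admin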

  Current status is:
1) The swift command seems to be able to authenticate with keystone at the
very beginning; this is done on the client side.
2) The swift command makes a request to radosgw and gets a 401:
   INFO:swiftclient:REQ: curl -i /swift/v1/AUTH_/test3 -X POST -H "X-Auth-Token: " -H "Content-Length: 0"
   INFO:swiftclient:RESP STATUS: 401 Unauthorized

  Thanks a lot again,
 -Mika

On Tue, Jan 5, 2021 at 11:19 AM Wissem MIMOUNA <
wissem.mimo...@fiducialcloud.fr> wrote:

> Hi,
>
> Which version of OpenStack do you have ? I guess , since Usurri ( or may
> be even before ) swift authentification through keystone require the
> account in url . You have to add this option in "/etc/ceph/ceph.conf" ,
> section rgw "rgw swift account in url = true" or do it via setting directly
> . Also , I noticed you did  this ==> 3) ceph config set mgr
> rgw_keystone_accepted_admin_roles  ||  I think , you should use the
> option "rgw keystone accepted roles " instead.
>
> Regards
>
> -Original Message-
> From: Mika Saari
> Sent: Tuesday, January 5, 2021 10:03
> To: ceph-users@ceph.io
> Subject: [ceph-users] Ceph RadosGW & OpenStack swift problem
>
> Hi,
>
>   Using Ceph 15.2.8 installed with cephadm. Trying to get RadosGW to work.
> I have managed to get the RadosGW working. I can manage it through a
> dashboard and use aws s3 client to create new buckets etc. When trying to
> use swift I get errors.
>
>   Not sure how to continue to track the problem here. Any tips are welcome.
>
> Thank you very much,
>   -Mika
>
> --- What I have done and what are the results. Some data changed
> manually  ---
>   What I have done:
> At OpenStack Side:
>   1) openstack user create --domain default --password-prompt swift
>   2) openstack role add --project service --user swift admin
>   3) openstack endpoint create --region RegionOne object-store public
> http://ceph1/swift/v1/AUTH_%\(project_id\)s
>   4) openstack endpoint create --region RegionOne object-store internal
> http://ceph1/swift/v1/AUTH_%\(project_id\)s
>   5) openstack endpoint create --region RegionOne object-store admin
> http://ceph1/swift/v1
>
>   At Ceph side:
> 1) ceph config set mgr rgw_keystone_api_version 3
> 2) ceph config set mgr rgw_keystone_url http://controller:5000
> 3) ceph config set mgr rgw_keystone_accepted_admin_roles admin
> 4) ceph config set mgr rgw_keystone_admin_user swift
> 5) ceph config set mgr rgw_keystone_admin_password swift_test
> 6) ceph config set mgr rgw_keystone_admin_domain default
> 7) ceph config set mgr rgw_keystone_admin_project service
>   for project I have tested different projects e.g. service and admin
>
>   Now when testing the API using swift client I get next:
> 1) swift post test3 --debug
>
> DEBUG:keystoneclient.auth.identity.v3.base:Making authentication request to
> http://controller:5000/v3/auth/tokens
> DEBUG:urllib3.connectionpool:Starting new HTTP connection (1):
> controller:5000
> DEBUG:urllib3.connectionpool:http://controller:5000 "POST /v3/auth/tokens
> HTTP/1.1" 201 7032
>
> . some openstack data here .
>
> DEBUG:urllib3.connectionpool:Starting new HTTP connection (1): ceph1:80

[ceph-users] Re: logging to stdout/stderr causes huge container log file

2021-01-05 Thread Tony Liu
Any comments?

Thanks!
Tony
> -Original Message-
> From: Tony Liu 
> Sent: Tuesday, December 29, 2020 5:22 PM
> To: ceph-users@ceph.io
> Subject: [ceph-users] logging to stdout/stderr causes huge container log
> file
> 
> Hi,
> 
> With ceph 15.2.5 octopus, mon, mgr and rgw dump logs at debug level
> to stdout/stderr. It causes a huge container log file
> (/var/lib/docker/containers//-json.log).
> Is there any way to stop dumping logs or change the logging level?
> 
> BTW, I tried "ceph config set  log_to_stderr false".
> It doesn't help.
> 
> 
> Thanks!
> 
> Tony


[ceph-users] Re: logging to stdout/stderr causes huge container log file

2021-01-05 Thread Seena Fallah
If you are using ceph-container images, you should update your image. This
feature was introduced in v5.0.5:
https://github.com/ceph/ceph-container/releases/tag/v5.0.5
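
Independently of the image fix, Docker itself can cap json-file log growth; a
sketch for /etc/docker/daemon.json (assuming the default json-file driver;
dockerd needs a restart to pick it up):

{
  "log-driver": "json-file",
  "log-opts": { "max-size": "10m", "max-file": "3" }
}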

On Wed, Jan 6, 2021 at 1:22 AM Tony Liu  wrote:

> Any comments?
>
> Thanks!
> Tony
> > -Original Message-
> > From: Tony Liu 
> > Sent: Tuesday, December 29, 2020 5:22 PM
> > To: ceph-users@ceph.io
> > Subject: [ceph-users] logging to stdout/stderr causes huge container log
> > file
> >
> > Hi,
> >
> > With ceph 15.2.5 octopus, mon, mgd and rgw dump loggings on debug level
> > to stdout/stderr. It causes huge container log file
> > (/var/lib/docker/containers//-json.log).
> > Is there any way to stop dumping logs or change the logging level?
> >
> > BTW, I tried "ceph config set  log_to_stderr false".
> > It doesn't help.
> >
> >
> > Thanks!
> >
> > Tony