[ceph-users] Re: Ceph iSCSI rbd-target.api Failed to Load

2022-09-09 Thread duluxoz
Hi Bailey, Sorry for the delay in getting back to you (I had a few unrelated issues to resolve) - and thanks for replying. The results from `gwcli -d`: ~~~ Adding ceph cluster 'ceph' to the UI Fetching ceph osd information Querying ceph for state information REST API failure, code : 500 Una
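When gwcli reports a 500 from the REST API, the traceback behind it normally lands in the rbd-target-api service log on the gateway node; a minimal sketch, assuming the standard systemd unit name:

~~~
# Sketch: pull the rbd-target-api traceback behind the gwcli 500 (unit name assumed)
journalctl -u rbd-target-api --since "15 min ago"
~~~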

[ceph-users] Re: [SPAM] radosgw-admin-python

2022-09-09 Thread Danny Abukalam
Hi Istvan, There’s an example of how you can do this here - https://github.com/SoftIron/rgw-usage-example HTH, Danny > On 9 Sept 2022, at 04:32, Szabo, Istvan (Agoda) > wrote: > > Hi, > > Anybody using radosgw-admin-python? I’m struggling to
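Assuming the goal is usage reporting (as the linked repo's name suggests), the same data is also reachable from the plain CLI; a sketch with placeholder uid and dates:

~~~
# Sketch: per-user usage report from the command line (uid and dates are placeholders)
radosgw-admin usage show --uid=johndoe --start-date=2022-09-01 --end-date=2022-09-09
~~~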

[ceph-users] CEPH Balancer EC Pool

2022-09-09 Thread ashley
I had a 5 node EC (8+3) pool of 6TB disks (some hosts with 12 SAS disks, some with 14). With the 5 nodes I had the CRUSH map rule set to 3 OSDs across 4 hosts; this worked perfectly and I ended up with a perfect balance of PGs and disk space % across all the hosts/OSDs. I recently added
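As a sketch of how to confirm what the rule actually does (file names are placeholders), decompile the CRUSH map and check the EC rule's steps:

~~~
# Dump and decompile the CRUSH map to inspect the EC rule (file names are placeholders)
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# An 8+3 rule spreading 3 OSDs across 4 hosts would contain steps like:
#   step take default
#   step choose indep 4 type host
#   step chooseleaf indep 3 type osd
#   step emit
~~~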

[ceph-users] Re: Ceph iSCSI rbd-target.api Failed to Load

2022-09-09 Thread Xiubo Li
On 07/09/2022 17:37, duluxoz wrote: Hi All, I've followed the instructions on the Ceph documentation website on Configuring the iSCSI Target. Everything went AOK up to the point where I tried to start the rbd-target-api service, which failed (the rbd-target-gw service started OK). A `systemctl status
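As a rough sketch of the usual first checks when the service refuses to start (unit name assumes the standard ceph-iscsi package):

~~~
systemctl status rbd-target-api
journalctl -xeu rbd-target-api   # shows the Python traceback from the failed start
~~~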

[ceph-users] Re: just-rebuilt mon does not join the cluster

2022-09-09 Thread Jan Kasprzak
TL;DR: my cluster is working now. Details and further problems below: Jan Kasprzak wrote: : I did : : ceph tell mon.* config set mon_sync_max_payload_size 4096 : ceph config set mon mon_sync_max_payload_size 4096 : : and added "mon_sync_max_payload_size = 4096" into the [global] section : of the
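For readability, the two commands quoted above (runtime injection plus persistent setting of the smaller mon sync payload size) are:

~~~
ceph tell mon.* config set mon_sync_max_payload_size 4096
ceph config set mon mon_sync_max_payload_size 4096
~~~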

[ceph-users] Re: Ceph iSCSI rbd-target.api Failed to Load

2022-09-09 Thread Matthew J Black
Hi Li, Yeah, that's what I thought (about having the api_secure), so I checked for the iscsi-gateway.cfg file and there's only one on the system, in the /etc/ceph/ folder. Any other ideas? Cheers On 09/09/2022 18:35, Xiubo Li wrote: On 07/09/2022 17:37, duluxoz wrot
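A quick sketch for confirming what the gateway is actually reading from that file (key names as in the standard iscsi-gateway.cfg):

~~~
# Verify the API settings in the config the gateway daemons read
grep -E 'api_secure|api_port|trusted_ip_list' /etc/ceph/iscsi-gateway.cfg
~~~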

[ceph-users] Ceph User + Dev Monthly September Meetup

2022-09-09 Thread Neha Ojha
Hi everyone, This month's Ceph User + Dev Monthly meetup is on September 15, 14:00-15:00 UTC. We would like to get some feedback on a POC that we are working on to measure availability in Ceph. Please add other topics to the agenda: https://pad.ceph.com/p/ceph-user-dev-monthly-minutes. Hope to se

[ceph-users] Re: Ceph iSCSI rbd-target.api Failed to Load

2022-09-09 Thread duluxoz
Hi Guys, So, I finally got things sorted :-) Time to eat some crow-pie :-P Turns out I had two issues, both of which involved typos (don't they always?). The first was that I had transposed two digits of an IP address in the `iscsi-gateway.cfg` -> `trusted_ip_list`. The second was that I had c
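For reference, a minimal sketch of the relevant part of `/etc/ceph/iscsi-gateway.cfg` (the addresses are placeholders; the point is that every gateway IP in `trusted_ip_list` has to be exact):

~~~
# /etc/ceph/iscsi-gateway.cfg (sketch - IPs are placeholders)
[config]
cluster_name = ceph
gateway_keyring = ceph.client.admin.keyring
api_secure = false
trusted_ip_list = 192.168.0.10,192.168.0.11
~~~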

[ceph-users] External RGW always down

2022-09-09 Thread Monish Selvaraj
Hi all, I have one critical issue in my prod cluster. When the customer's data comes in at around 600 MiB, between 8 and 20 of my 238 OSDs go down. Then I manually bring the OSDs back up. After a few minutes, all my RGWs crash. We did some troubleshooting but nothing works. When we upgraded Ceph from 17.2.0 to 17.2.1
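Not part of the original report, but a sketch of the usual first diagnostics for flapping OSDs and crashing RGW daemons:

~~~
# Sketch: gather crash and down-OSD details before digging into logs
ceph -s
ceph osd tree down            # which OSDs are currently down, and where
ceph crash ls                 # recent daemon crashes, including radosgw
ceph crash info <crash-id>    # backtrace for a specific crash (id is a placeholder)
~~~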

[ceph-users] Re: External RGW always down

2022-09-09 Thread Monish Selvaraj
FYI On Sat, Sep 10, 2022 at 11:23 AM Monish Selvaraj wrote: > Hi all, > > I have one critical issue in my prod cluster. When the customer's data > comes in at around 600 MiB. > > Between 8 and 20 of my 238 OSDs go down. Then I manually bring the OSDs back up. After > a few minutes, all my RGWs crash. > > We did som

[ceph-users] Re: External RGW always down

2022-09-09 Thread Monish Selvaraj
On Sat, Sep 10, 2022 at 11:25 AM Monish Selvaraj wrote: > FYI > > On Sat, Sep 10, 2022 at 11:23 AM Monish Selvaraj > wrote: > >> Hi all, >> >> I have one critical issue in my prod cluster. When the customer's data >> comes in at around 600 MiB. >> >> Between 8 and 20 of my 238 OSDs go down. Then I manuall

[ceph-users] Re: OSDs slow to start: No Valid allocation info on disk (empty file)

2022-09-09 Thread Igor Fedotov
You might want to do the following (with osd_fast_shutdown still set to false): Set debug-bluestore to 20 (via ceph config set ... command), restart an osd and share the resulting log. Then undo the setting. Thanks, Igor On 9/8/2022 9:52 PM, Vladimir Brik wrote: > You may try to set osd_f