[ceph-users] Re: radosgw not working after upgrade to Quincy

2022-12-29 Thread Andrei Mikhailovsky
Thanks, Konstantin. Will try 

> From: "Konstantin Shalygin" 
> To: "Andrei Mikhailovsky" 
> Cc: "ceph-users" 
> Sent: Thursday, 29 December, 2022 03:42:56
> Subject: Re: [ceph-users] radosgw not working after upgrade to Quincy

> Hi,
> Just try to read your logs:

>> 2022-12-29T02:07:38.953+ 7f5df868ccc0 0 WARNING: skipping unknown framework: civetweb

> You are trying to use `civetweb`, which was removed in the Quincy release.
> You need to update your config and use `beast` instead
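>
> For example, a minimal frontend setting (the section name and port here
> are only illustrative):
>
>   [client.rgw.gateway-node1]
>   rgw_frontends = beast port=7480
>
> or via the config database:
>
>   ceph config set client.rgw.gateway-node1 rgw_frontends "beast port=7480"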

> k

>> On 29 Dec 2022, at 09:20, Andrei Mikhailovsky  wrote:

>> Please let me know how to fix the problem?
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Increase the recovery throughput

2022-12-29 Thread E Taka
Ceph 17.2.5, dockerized, Ubuntu 20.04, OSD on HDD with WAL/DB on SSD.

Hi all,

An old topic, but the problem still exists. I tested it extensively,
with osd_op_queue set either to mclock_scheduler (and the profile set to
high recovery) or to wpq with the well-known options (sleep_time,
max_backfill) from
https://docs.ceph.com/en/quincy/rados/configuration/osd-config-ref/
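
For reference, a sketch of the kind of settings meant above (values are
only illustrative, not recommendations):

  # classic scheduler plus its tuning knobs (osd_op_queue needs an OSD restart)
  ceph config set osd osd_op_queue wpq
  ceph config set osd osd_max_backfills 4
  ceph config set osd osd_recovery_sleep_hdd 0
  # or the mclock scheduler with a recovery-oriented profile
  ceph config set osd osd_op_queue mclock_scheduler
  ceph config set osd osd_mclock_profile high_recovery_ops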

When removing an OSD with `ceph orch osd rm X`, the backfilling always ends
with a large number of misplaced objects at a low recovery rate (right now
"120979/336643536 objects misplaced (0.036%); 10 KiB/s, 2 objects/s
recovering"). The rate drops significantly when only a few PGs are
involved. I wonder whether anyone with an installation similar to ours
(see above) does not experience this problem.

Thanks, Erich


Am Mo., 12. Dez. 2022 um 12:28 Uhr schrieb Frank Schilder :

> Hi Monish,
>
> you are probably on the mclock scheduler, which ignores these settings. You
> might want to set them back to defaults, change the scheduler to wpq and
> then try again to see if it needs adjusting. There were several threads
> about "broken" recovery-op scheduling with mclock in the latest versions.
>
> So, back to Eugen's answer: go through this list and try solutions of
> earlier cases.
>
> Best regards,
> =
> Frank Schilder
> AIT Risø Campus
> Bygning 109, rum S14
>
> 
> From: Monish Selvaraj 
> Sent: 12 December 2022 11:32:26
> To: Eugen Block
> Cc: ceph-users@ceph.io
> Subject: [ceph-users] Re: Increase the recovery throughput
>
> Hi Eugen,
>
> We tried that already. osd_max_backfills is set to 24 and
> osd_recovery_max_active is set to 20.
>
> On Mon, Dec 12, 2022 at 3:47 PM Eugen Block  wrote:
>
> > Hi,
> >
> > there are many threads discussing recovery throughput; have you tried
> > any of the solutions? The first thing to try is to increase
> > osd_recovery_max_active and osd_max_backfills. What are the current
> > values in your cluster?
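> >
> > For example, to check and, if needed, raise them cluster-wide (the
> > value below is only illustrative):
> >
> >   ceph config get osd osd_max_backfills
> >   ceph config get osd osd_recovery_max_active
> >   ceph config set osd osd_max_backfills 4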
> >
> >
> > Zitat von Monish Selvaraj :
> >
> > > Hi,
> > >
> > > Our ceph cluster consists of 20 hosts and 240 osds.
> > >
> > > We use an erasure-coded pool with the cache-pool (cache tiering) concept.
> > >
> > > Some time back, 2 hosts went down and the PGs went into a degraded
> > > state. We got the 2 hosts back up after some time. The PGs then
> > > started recovering, but it is taking a long time (months). While this
> > > was happening, the cluster held 664.4 M objects and 987 TB of data.
> > > The recovery status has not changed; it remains at 88 pgs degraded.
> > >
> > > During this period, we increased the PG count from 256 to 512 for the
> > > data pool (the erasure-coded pool).
> > >
> > > We have also observed (over one week) that recovery is very slow; the
> > > current recovery rate is around 750 MiB/s.
> > >
> > > Is there any way to increase this recovery throughput?
> > >
> > > *Ceph-version : quincy*
> > >
> >
> >
> >
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] ceph failing to write data - MDSs read only

2022-12-29 Thread Amudhan P
Hi,

We are suddenly facing an issue with our Ceph cluster (version 16.2.6).
I couldn't find any solution for the issue below.
Any suggestions?


health: HEALTH_WARN
1 clients failing to respond to capability release
1 clients failing to advance oldest client/flush tid
1 MDSs are read only
1 MDSs report slow requests
1 MDSs behind on trimming

  services:
mon: 3 daemons, quorum strg-node1,strg-node2,strg-node3 (age 9w)
mgr: strg-node1.ivkfid(active, since 9w), standbys: strg-node2.unyimy
mds: 1/1 daemons up, 1 standby
osd: 32 osds: 32 up (since 9w), 32 in (since 5M)

  data:
volumes: 1/1 healthy
pools:   3 pools, 321 pgs
objects: 13.19M objects, 45 TiB
usage:   90 TiB used, 85 TiB / 175 TiB avail
pgs: 319 active+clean
 2   active+clean+scrubbing+deep
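
A possible first step, assuming the client that is failing to release
caps can be identified from its session (the MDS name is a placeholder):

  ceph health detail
  ceph tell mds.<mds-name> session ls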
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] cephadm ls / ceph orch ps => where does it get its information?

2022-12-29 Thread Ml Ml
Hello,

I seem to not have removed some old OSDs. Now I have:

root@ceph07:/tmp# ceph orch ps |grep -e error -e stopped |grep ceph07
_osd.33   ceph07  stopped   2h ago 2y  quay.io/ceph/ceph:v15.2.17
mon.ceph01ceph07  error 2h ago 2y  quay.io/ceph/ceph:v15.2.17
osd.0 ceph07  error 2h ago 2y  quay.io/ceph/ceph:v15.2.17
osd.1 ceph07  error 2h ago 2y  quay.io/ceph/ceph:v15.2.17
osd.11ceph07  error 2h ago 2y  quay.io/ceph/ceph:v15.2.17
osd.12ceph07  error 2h ago 2y  quay.io/ceph/ceph:v15.2.17
osd.14ceph07  error 2h ago 2y  quay.io/ceph/ceph:v15.2.17
osd.18ceph07  error 2h ago 2y  quay.io/ceph/ceph:v15.2.17
osd.22ceph07  error 2h ago 2y  quay.io/ceph/ceph:v15.2.17
osd.30ceph07  error 2h ago 2y  quay.io/ceph/ceph:v15.2.17
osd.4 ceph07  error 2h ago 2y  quay.io/ceph/ceph:v15.2.17
osd.64ceph07  error 2h ago 2y  quay.io/ceph/ceph:v15.2.17
osd.8 ceph07  error 2h ago 2y  quay.io/ceph/ceph:v15.2.17


These daemons no longer exist on node "ceph07", and I cannot remove them:
cephadm rm-daemon --fsid=5436dd5d-83d4-4dc8-a93b-60ab5db145df --name=osd.0 --force
cephadm rm-daemon --fsid=5436dd5d-83d4-4dc8-a93b-60ab5db145df --name=osd.1 --force
cephadm rm-daemon --fsid=5436dd5d-83d4-4dc8-a93b-60ab5db145df --name=osd.12 --force
cephadm rm-daemon --fsid=5436dd5d-83d4-4dc8-a93b-60ab5db145df --name=osd.14 --force
cephadm rm-daemon --fsid=5436dd5d-83d4-4dc8-a93b-60ab5db145df --name=osd.18 --force
cephadm rm-daemon --fsid=5436dd5d-83d4-4dc8-a93b-60ab5db145df --name=osd.30 --force
cephadm rm-daemon --fsid=5436dd5d-83d4-4dc8-a93b-60ab5db145df --name=osd.4 --force
cephadm rm-daemon --fsid=5436dd5d-83d4-4dc8-a93b-60ab5db145df --name=osd.64 --force
cephadm rm-daemon --fsid=5436dd5d-83d4-4dc8-a93b-60ab5db145df --name=osd.8 --force

root@ceph07:/tmp# ls /var/lib/ceph/5436dd5d-83d4-4dc8-a93b-60ab5db145df/
crash  home  osd.66  osd.67  osd.68  osd.69  osd.999  removed
  => that's correct.

root@ceph07:/tmp# ls /var/lib/ceph/5436dd5d-83d4-4dc8-a93b-60ab5db145df/removed/
mon.ceph01_2020-09-02T07:11:30.232540
mon.ceph07_2020-11-20T14:17:56.122749
osd.12_2022-12-29T13:17:47.855132  osd.22_2022-12-29T13:13:47.233379
osd.64_2022-12-29T13:17:50.732467   osd.73_2022-12-29T09:54:58.009039Z
mon.ceph01_2022-12-29T13:18:33.702553
osd.0_2022-12-29T13:17:46.661637
osd.14_2022-12-29T13:17:48.485548  osd.30_2022-12-29T13:17:49.685540
osd.70_2022-12-29T09:56:15.014346Z  osd.74_2022-12-29T09:54:59.529058Z
mon.ceph02_2020-09-01T12:07:11.808391
osd.11_2022-12-29T13:15:39.944974
osd.18_2022-12-29T13:17:49.145034  osd.32_2020-07-30T09:44:23.252102
osd.71_2022-12-29T09:54:55.157744Z  osd.75_2022-12-29T09:55:02.647709Z
mon.ceph03_2020-09-01T13:26:34.704724
osd.1_2022-12-29T13:17:47.233991
osd.20_2022-12-29T12:58:27.511277  osd.4_2022-12-29T13:17:50.199486
osd.72_2022-12-29T09:54:56.537846Z  osd.8_2022-12-29T13:17:51.372638


My first try was to rename the old/non-active OSD from osd.33 to
_osd.33, but now I have a failing module:

root@ceph07:/tmp# ceph -s
  cluster:
id: 5436dd5d-83d4-4dc8-a93b-60ab5db145df
health: HEALTH_ERR
mons are allowing insecure global_id reclaim
20 failed cephadm daemon(s)
Module 'cephadm' has failed: '_osd'
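
A cleanup sketch, assuming the leading underscore in _osd.33 is what the
cephadm module chokes on when parsing daemon names (the fsid path and
mgr name below are placeholders):

  # move the renamed directory out of the daemon dir, on whichever host it lives
  mv /var/lib/ceph/<fsid>/_osd.33 /var/lib/ceph/<fsid>/removed/
  # restart the active mgr so the cephadm module reloads
  ceph mgr fail <active-mgr-name>
  # force the orchestrator to refresh its host inventory
  ceph orch ps --refresh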

Any hints on how to clean up my node? :)

Cheers,
Mario
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Cannot create CephFS subvolume

2022-12-29 Thread Milind Changire
Daniel,
Could you, for a brief moment, turn on debug logging for the mgr and mds
and then attempt to create the subvolume?
I'd like to see what gets dumped in the logs when the EINVAL is
returned.
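
For example (a sketch; the levels are illustrative, and the overrides
can be removed again afterwards):

  ceph config set mgr debug_mgr 20
  ceph config set mds debug_mds 20
  # reproduce the failure, collect the logs, then revert:
  ceph config rm mgr debug_mgr
  ceph config rm mds debug_mds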

On Wed, Dec 28, 2022 at 10:13 PM Daniel Kovacs 
wrote:

> We are on: 17.2.4
>
> Ceph fs volume ls output:
> [
>  {
>  "name": "k8s_ssd"
>  },
>  {
>  "name": "inclust"
>  },
>  {
>  "name": "inclust_ssd"
>  }
> ]
>
>
> I'd like to create a subvolume in the inclust_ssd volume. I can create a
> subvolume with the same name in inclust without any problems.
>
>
> Best regards,
>
> Daniel
>
> On 2022. 12. 28. 4:42, Milind Changire wrote:
> > Also, please list the volumes available on your system:
> >
> > $ ceph fs volume ls
> >
> >
> > On Wed, Dec 28, 2022 at 9:09 AM Milind Changire 
> wrote:
> >
> >> What ceph version are you using?
> >>
> >> $ ceph versions
> >>
> >>
> >> On Wed, Dec 28, 2022 at 3:17 AM Daniel Kovacs <
> daniel.kov...@inclust.com>
> >> wrote:
> >>
> >>> Hello!
> >>>
> >>> I'd like to create a CephFS subvolume with this command: ceph fs
> >>> subvolume create cephfs_ssd subvol_1
> >>> I got this error: Error EINVAL: invalid value specified for
> >>> ceph.dir.subvolume
> >>> If I use another CephFS volume, no error is reported.
> >>>
> >>> What did I do wrong?
> >>>
> >>> Best regards,
> >>>
> >>> Daniel
> >>>
> >>>
> >> --
> >> Milind
> >>
> >>


-- 
Milind
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io