[ceph-users] Re: Different behaviors for ceph kernel client in limiting IOPS when data pool enters `nearfull`?

2023-11-16 Thread Matt Larson
…wrote: > On Thu, Nov 16, 2023 at 3:21 AM Xiubo Li wrote: > > Hi Matt, > > On 11/15/23 02:40, Matt Larson wrote: > > > On CentOS 7 systems with the CephFS kernel client, if the data pool has a `nearfull` status there is a slight reductio…
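For background on the behavior discussed in this thread: once the OSD map carries the nearfull flag, CephFS clients are generally described as falling back to synchronous writes, which shows up as reduced IOPS. The flag and the ratios that trigger it can be checked with standard commands (a minimal sketch; output varies by cluster):

    # Show which OSDs or pools are flagged nearfull
    ceph health detail
    # Show the configured full/backfillfull/nearfull ratios
    ceph osd dump | grep -i ratio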

[ceph-users] Different behaviors for ceph kernel client in limiting IOPS when data pool enters `nearfull`?

2023-11-14 Thread Matt Larson
…or to have behavior more similar to the CentOS 7 CephFS clients? Do different OSes or Linux kernels respond to or limit IOPS in greatly different ways? Are there any options to adjust how they limit IOPS? Thanks, Matt -- Matt Larson, PhD Madison, WI 53705 U.S.A.
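On the question of adjusting the throttling: one knob that exists on the cluster side is the nearfull threshold itself (a sketch, not a recommendation; raising it postpones the nearfull state at the cost of safety margin, and the default is 0.85):

    # Raise the nearfull threshold so clients are throttled later
    ceph osd set-nearfull-ratio 0.90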

[ceph-users] Re: Moving devices to a different device class?

2023-10-26 Thread Matt Larson
ssd"-named > OSDs to land on, and move themselves if possible. It is a fairly safe > operation where > they continue to work, but will try to evacuate the PGs which should not > be there. > > Worst case, your planning is wrong, and the "ssd" O

[ceph-users] Re: Moving devices to a different device class?

2023-10-24 Thread Matt Larson
…scripting <https://www.youtube.com/watch?v=w91e0EjWD6E> > On Oct 24, 2023, at 11:42, Matt Larson wrote: > I am looking to create…

[ceph-users] Moving devices to a different device class?

2023-10-24 Thread Matt Larson
…assign to a new device class? Should they be moved one by one? What is the way to safely protect the data in the existing pool that they are mapped to? Thanks, Matt -- Matt Larson, PhD Madison, WI 53705 U.S.A.
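For reference, reassigning a device class is a two-step CRUSH operation, since an existing class must be removed before a new one can be set (a sketch; osd.12 is a placeholder ID):

    # Clear the current device class, then assign the new one
    ceph osd crush rm-device-class osd.12
    ceph osd crush set-device-class ssd osd.12

Doing this one OSD at a time limits how much data is remapped at once if the pool's CRUSH rule filters on class.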

[ceph-users] Removing failing OSD with cephadm?

2023-02-17 Thread Matt Larson
…keep it down? There is an example to stop the OSD on the server using systemctl, outside of cephadm: ssh {osd-host} sudo systemctl stop ceph-osd@{osd-num} Thanks, Matt -- Matt Larson, PhD Madison, WI 53705 U.S.A.
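The cephadm-managed equivalent keeps the stop and removal inside the orchestrator (a sketch; osd.12 / ID 12 are placeholders):

    # Stop the daemon through the orchestrator instead of systemctl
    ceph orch daemon stop osd.12
    # Schedule removal; --replace preserves the OSD ID for a future disk
    ceph orch osd rm 12 --replace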

[ceph-users] Re: What to expect on rejoining a host to cluster?

2022-12-05 Thread Matt Larson
…ess capacity. Maybe someone else can help here? Best regards, Frank Schilder, AIT Risø Campus, Bygning 109, rum S14. From: Matt Larson, Sent: 04 December 2022 02:00:11, To: Eneko Lacu…

[ceph-users] Re: What to expect on rejoining a host to cluster?

2022-12-03 Thread Matt Larson
…t, destroy the OSDs but keep the IDs intact (ceph osd destroy). Then, no further re-balancing will happen and you can re-use the OSD IDs later when adding a new host. That's a stable situation from an operations point of view. Hope that helps. Best regards, …
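A minimal sketch of the destroy-and-reuse flow described here (placeholder ID 12; the device name is illustrative):

    # Destroy the OSD but keep its ID reserved in the CRUSH map
    ceph osd destroy 12 --yes-i-really-mean-it
    # Later, recreate an OSD on a new disk reusing the same ID
    ceph-volume lvm create --osd-id 12 --data /dev/sdX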

[ceph-users] What to expect on rejoining a host to cluster?

2022-11-26 Thread Matt Larson
…Will this be problematic? Thanks for any advice, Matt -- Matt Larson, PhD Madison, WI 53705 U.S.A.

[ceph-users] Best practice for removing failing host from cluster?

2022-11-09 Thread Matt Larson
…as 0? Can I do this while the host is offline, or should I bring it online first before setting weights or using `ceph orch osd rm`? Thanks, Matt -- Matt Larson, PhD Madison, WI 53705 U.S.A.
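For reference, draining before removal typically looks like this (a sketch with a placeholder ID; the CRUSH reweight only triggers data movement while the OSD is up and in):

    # Drain the OSD by zeroing its CRUSH weight
    ceph osd crush reweight osd.12 0
    # Remove it via the orchestrator once its PGs have moved
    ceph orch osd rm 12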

[ceph-users] Upgrading ceph to latest version, skipping minor versions?

2021-06-14 Thread Matt Larson
…several minor versions behind? ceph orch upgrade start --ceph-version 15.2.13 -Matt -- Matt Larson, PhD Madison, WI 53705 U.S.A.
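The upgrade command quoted in the message can be monitored while it runs; a minimal sketch:

    ceph orch upgrade start --ceph-version 15.2.13
    # Check progress, or watch the cephadm event stream
    ceph orch upgrade status
    ceph -W cephadm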

[ceph-users] Re: Updated ceph-osd package, now get -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [2,1]

2021-06-14 Thread Matt Larson
…, the MON was able to get back into quorum with the other monitors. I think the issue was that when the MON was out of quorum, the ceph client could no longer connect when it had only that MON as an option. Problem is solved. -Matt On Mon, Jun 14, 2021 at 2:07 PM Matt Larson wrote: > I…

[ceph-users] Updated ceph-osd package, now get -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [2,1]

2021-06-14 Thread Matt Larson
…containerized daemons. How can I restore the ability to connect with the command-line `ceph` client to check the status and all other interactions? Thanks, Matt -- Matt Larson, PhD Madison, WI 53705 U.S.A.
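Given the resolution in the follow-up (the client only knew about the out-of-quorum MON), the usual safeguard is listing every monitor in the client's ceph.conf. A sketch with hypothetical addresses:

    [global]
        # List all monitors so the client can fail over if one leaves quorum
        mon_host = 10.0.0.1, 10.0.0.2, 10.0.0.3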

[ceph-users] Building ceph clusters with 8TB SSD drives?

2021-05-07 Thread Matt Larson
…-latest-storage-reliability-figures-add-ssd-boot.html). Are there any major caveats to consider when working with larger SSDs for data pools? Thanks, Matt -- Matt Larson, PhD Madison, WI 53705 U.S.A.

[ceph-users] Re: Unable to clarify error using vfs_ceph (Samba gateway for CephFS)

2020-11-12 Thread Matt Larson
…of the cephfs. Best regards, Frank Schilder, AIT Risø Campus, Bygning 109, rum S14. From: Matt Larson, Sent: 12 November 2020 00:40:21, To: ceph-users, Subject: [ceph-users] Unab…

[ceph-users] Unable to clarify error using vfs_ceph (Samba gateway for CephFS)

2020-11-11 Thread Matt Larson
…good guide that describes not just the Samba smb.conf, but also what should be in /etc/ceph/ceph.conf, and how to provide the key for the ceph:user_id? I am really struggling to find good first-hand documentation for this. Thanks, Matt -- Matt Larson, PhD Madison, WI 53705 U.S.A.
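A minimal sketch of the pieces the question asks about, with a hypothetical client.samba user; the caps, pool name, and share name are illustrative, not a vetted configuration:

    # Create a CephX user for Samba and write its keyring where libcephfs can find it
    ceph auth get-or-create client.samba mon 'allow r' mds 'allow rw' osd 'allow rw pool=cephfs_data' \
        -o /etc/ceph/ceph.client.samba.keyring

    # smb.conf share using vfs_ceph
    [cephfs]
        path = /
        vfs objects = ceph
        ceph:config_file = /etc/ceph/ceph.conf
        ceph:user_id = samba
        read only = no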

[ceph-users] Re: ceph-volume quite buggy compared to ceph-disk

2020-10-01 Thread Matt Larson
…tmpfs manually to inspect this? Maybe put this in the manual[1]? [1] https://docs.ceph.com/en/latest/ceph-volume/lvm/activate/
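For inspecting what the manual activation does (per the link above), a sketch:

    # Show the LVs ceph-volume knows about and their OSD metadata
    ceph-volume lvm list
    # Activate one OSD: mounts a tmpfs at /var/lib/ceph/osd/ceph-<id> and links in the block device
    ceph-volume lvm activate <osd-id> <osd-fsid>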

[ceph-users] Re: objects misplaced jumps up at 5%

2020-09-30 Thread Matt Larson
…ately. "ceph osd pool ls detail" shows that pgp is currently 11412. Each time we hit 5.000% misplaced, the pgp number increases by 1 or 2, and this causes the % misplaced to increase again to ~5.1%. -- Dr Jake Grimmett, Head Of Scientific Computing, MRC…

[ceph-users] Re: objects misplaced jumps up at 5%

2020-09-29 Thread Matt Larson
…re's a PG change ongoing (either pg autoscaler or a manual change; both obey the target misplaced ratio). You can check this by running "ceph osd pool ls detail" and checking the value of pg target. Also: it looks like you've set osd_scrub_during_recovery = false; this setting can be annoying on large erasure-coded setups on HDDs that see long recovery times. It's better to get IO priorities right; search the mailing list for "osd op queue cut off high". Paul -- Dr Jake Grimmett, Head Of Scientific Computing, MRC Laboratory of Molecular Biology, Francis Crick Avenue, Cambridge CB2 0QH, UK. -- Matt Larson, PhD Madison, WI 53705 U.S.A.
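The two settings referenced in this reply map to config options along these lines (a sketch; values are illustrative, and osd_op_queue_cut_off only takes effect after an OSD restart):

    # The ratio pg autoscaler / pgp changes try to stay under (default 0.05)
    ceph config set mgr target_max_misplaced_ratio 0.05
    # Prioritize client IO over recovery ops, per the "osd op queue cut off high" advice
    ceph config set osd osd_op_queue_cut_off high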

[ceph-users] Unable to restart OSD assigned to LVM partition on Ceph 15.1.2?

2020-09-24 Thread Matt Larson
…`ceph-volume lvm create --bluestore --data /dev/sd --block.db /dev/nvme0n1` Is there a workaround for this problem, where the container process is unable to read the label of the LVM partition and fails to start the OSD? Thanks, Matt -- Matt Larson, PhD Madison, WI 53705 U.S.A.

[ceph-users] Re: Troubleshooting stuck unclean PGs?

2020-09-21 Thread Matt Larson
…see if that is sufficient to let the cluster catch up. The commands are from http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-January/023844.html -Matt On Mon, Sep 21, 2020 at 7:20 PM Matt Larson wrote: > Hi Wout, > None of the OSDs are greater than 20% full. Howev…
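The linked post's exact commands are not reproduced in this preview; throttling knobs commonly used for letting a cluster catch up look like this (a sketch, not the verified contents of the link):

    # Slow backfill/recovery so client IO and peering can catch up
    ceph tell 'osd.*' injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'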

[ceph-users] Re: Troubleshooting stuck unclean PGs?

2020-09-21 Thread Matt Larson
…tinue once the PGs are "active+clean". Kind regards, Wout, 42on. From: Matt Larson, Sent: Monday, September 21, 2020 6:22 PM, To: ceph-users@ceph.io, Subject: [ceph-users] Troubleshooting stuck unclean…

[ceph-users] Troubleshooting stuck unclean PGs?

2020-09-21 Thread Matt Larson
…using `cephadm` tools? At last check, a server running 16 OSDs and 1 MON is using 39G of disk space for its running containers. Can restarting containers help to start with a fresh slate or reduce the disk use? Thanks, Matt. Matt Larson, Associate Scientist, Computer Scienti…
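For the disk-usage question, cephadm deployments on podman can be inspected directly on the host (a sketch; docker has equivalent commands, and note that cephadm retains old container images across upgrades):

    # Break down image/container/volume disk usage
    podman system df
    # Remove images no longer referenced by any container
    podman image prune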

[ceph-users] Error with zabbix module on Ceph Octopus

2020-05-06 Thread Matt Larson
…x86_64/zabbix-release-4.4-1.el8.noarch.rpm) - Python version 3.6.8. Any suggestions? I am wondering if this could require Python 2.7 to run. -- Matt Larson, PhD Madison, WI 53705 U.S.A.
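For reference, the mgr zabbix module setup looks like this (the hostname is a placeholder; the module shells out to the zabbix_sender binary, which must be installed on the active mgr host):

    ceph mgr module enable zabbix
    ceph zabbix config-set zabbix_host zabbix.example.com
    # Push metrics once to test the pipeline
    ceph zabbix send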

[ceph-users] Re: Benefits of high RAM on a metadata server?

2020-02-06 Thread Matt Larson
…me warnings I have: "client X is failing to respond to cache pressure." Besides that there are no complaints, but I think you would need the 256GB of RAM, especially if the datasets will increase... just my 2 cents. Will you have SSDs? On Fri, Feb…
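The cache-pressure warnings quoted here are usually tuned via the MDS cache size, which is where the extra RAM goes; a sketch (64 GiB shown; the default is 4 GiB, and MDS RSS typically runs above the configured limit):

    # Let the MDS cache grow to ~64 GiB
    ceph config set mds mds_cache_memory_limit 68719476736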

[ceph-users] Benefits of high RAM on a metadata server?

2020-02-06 Thread Matt Larson
…processing of the images. Thanks! -Matt -- Matt Larson, PhD Madison, WI 53705 U.S.A.