wrote:
> On Thu, Nov 16, 2023 at 3:21 AM Xiubo Li wrote:
> >
> > Hi Matt,
> >
> > On 11/15/23 02:40, Matt Larson wrote:
> > > On CentOS 7 systems with the CephFS kernel client, if the data pool has a
> > > `nearfull` status there is a slight reduction
or to have
behavior more similar to the CentOS 7 CephFS clients?
Do different OSes or Linux kernel versions differ greatly in how they respond
to this or throttle IOPS? Are there any options to adjust how they limit
IOPS?
Thanks,
Matt
--
Matt Larson, PhD
Madison, WI 53705 U.S.A.
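The throttling described above is most likely the kernel client falling back to synchronous writes once the cluster raises the nearfull flag. The thresholds that trigger that flag can be inspected and, as a stopgap, adjusted cluster-wide; a minimal sketch (the 0.87 value is an arbitrary example, not a recommendation):
  ceph osd dump | grep -E 'full_ratio|backfillfull_ratio|nearfull_ratio'
  ceph osd set-nearfull-ratio 0.87   # temporary relief only; adding capacity or rebalancing is the real fix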
ssd"-named
> OSDs to land on, and move themselves if possible. It is a fairly safe
> operation where
> they continue to work, but will try to evacuate the PGs which should not
> be there.
>
> Worst case, your planning is wrong, and the "ssd" O
cripting <https://www.youtube.com/watch?v=w91e0EjWD6E>
> youtube.com <https://www.youtube.com/watch?v=w91e0EjWD6E>
> <https://www.youtube.com/watch?v=w91e0EjWD6E>
>
>
>
>
> On Oct 24, 2023, at 11:42, Matt Larson wrote:
>
> I am looking to create
gn to a new device class? Should they be moved one by one? What is
the way to safely protect data from the existing pool that they are mapped
to?
Thanks,
Matt
--
Matt Larson, PhD
Madison, WI 53705 U.S.A.
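For reference, the device-class mechanics discussed in this thread look roughly like the following; a sketch with a hypothetical OSD id (12), rule name (ssd_rule), and pool name (mypool):
  ceph osd crush rm-device-class osd.12          # clear any auto-detected class first
  ceph osd crush set-device-class ssd osd.12     # assign the OSD to the "ssd" class
  ceph osd crush rule create-replicated ssd_rule default host ssd
  ceph osd pool set mypool crush_rule ssd_rule   # PGs then migrate onto the "ssd" OSDs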
keep it down? There is an example to stop the OSD on the server using
systemctl, outside of cephadm:
ssh {osd-host}
sudo systemctl stop ceph-osd@{osd-num}
Thanks,
Matt
--
Matt Larson, PhD
Madison, WI 53705 U.S.A.
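Under cephadm, where the OSDs run as containers, the equivalent would be along these lines; a sketch assuming OSD id 12:
  ceph orch daemon stop osd.12   # stop the containerized daemon via the orchestrator
  ceph osd out 12                # optionally mark it out so data rebalances away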
ess capacity.
>
> Maybe someone else can help here?
>
> Best regards,
> =
> Frank Schilder
> AIT Risø Campus
> Bygning 109, rum S14
>
>
> From: Matt Larson
> Sent: 04 December 2022 02:00:11
> To: Eneko Lacu
t, destroy the OSDs but keep the IDs
> intact (ceph osd destroy). Then, no further re-balancing will happen and you
> can re-use the OSD ids later when adding a new host. That's a stable
> situation from an operations point of view.
>
> Hope that helps.
>
> Best regards,
ill
this be problematic?
Thanks for any advice,
Matt
--
Matt Larson, PhD
Madison, WI 53705 U.S.A.
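The destroy-but-keep-the-ID approach described above would look roughly like this; a sketch with a hypothetical OSD id (12) and replacement device:
  ceph osd destroy 12 --yes-i-really-mean-it   # keeps the OSD id reserved in the map
  # later, when a replacement disk/host is available:
  ceph-volume lvm create --bluestore --osd-id 12 --data /dev/sdX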
as 0? Can I do this while
the host is offline, or should I bring it online first before setting
weights or using `ceph orch osd rm`?
Thanks,
Matt
--
Matt Larson, PhD
Madison, WI 53705 U.S.A.
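A sketch of the two operations being asked about, with a hypothetical OSD id (12):
  ceph osd crush reweight osd.12 0   # drain: data migrates off this OSD
  ceph orch osd rm 12                # cephadm then removes the OSD (add --zap on recent releases to also wipe the device)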
ral minor versions behind?
ceph orch upgrade start --ceph-version 15.2.13
-Matt
--
Matt Larson, PhD
Madison, WI 53705 U.S.A.
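Once started, a cephadm upgrade can be monitored and, if needed, interrupted with the standard orchestrator commands:
  ceph orch upgrade status   # shows the target version and progress
  ceph orch upgrade pause    # or: ceph orch upgrade stop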
, the MON was able to get back into quorum
with the other monitors.
I think the issue was that when the MON was out of quorum, the ceph client
could no longer connect if that MON was its only option.
Problem is solved.
-Matt
On Mon, Jun 14, 2021 at 2:07 PM Matt Larson wrote:
> I
containerized daemons.
How can I restore the ability to connect with the command-line `ceph`
client to check the status and all other interactions?
Thanks,
Matt
--
Matt Larson, PhD
Madison, WI 53705 U.S.A.
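The usual safeguard against this failure mode is listing every monitor in mon_host so the client can fall back to another one; a minimal /etc/ceph/ceph.conf sketch (the fsid and addresses are placeholders):
  [global]
  fsid = <cluster-fsid>
  mon_host = 10.0.0.1,10.0.0.2,10.0.0.3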
-latest-storage-reliability-figures-add-ssd-boot.html
).
Are there any major caveats to considering working with larger SSDs for
data pools?
Thanks,
Matt
--
Matt Larson, PhD
Madison, WI 53705 U.S.A.
; of the cephfs.
>
> Best regards,
> =
> Frank Schilder
> AIT Risø Campus
> Bygning 109, rum S14
>
>
> From: Matt Larson
> Sent: 12 November 2020 00:40:21
> To: ceph-users
> Subject: [ceph-users] Unab
good guide that describes not just the Samba smb.conf,
but also what should be in /etc/ceph/ceph.conf, and how to provide the
key for the ceph:user_id? I am really struggling to find good
first-hand documentation for this.
Thanks,
Matt
--
Matt Larson, PhD
Madison, WI 53705 U.S.A.
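A minimal sketch of the Samba-on-CephFS wiring being asked about, using the vfs_ceph module; the share name and the "samba" CephX user are hypothetical:
  # smb.conf
  [cephfs]
      path = /
      vfs objects = ceph
      ceph:config_file = /etc/ceph/ceph.conf
      ceph:user_id = samba
      kernel share modes = no
  # /etc/ceph/ceph.conf, so vfs_ceph can locate the key for client.samba
  [client.samba]
      keyring = /etc/ceph/ceph.client.samba.keyring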
pfs manually to inspect this? Maybe put in the
> manual[1]?
>
>
> [1]
> https://docs.ceph.com/en/latest/ceph-volume/lvm/activate/
>
ately.
> >>
> >> "ceph osd pool ls detail" shows that pgp is currently 11412
> >>
> >> Each time we hit 5.000% misplaced, the pgp number increases by 1 or 2,
> >> which causes the % misplaced to increase again to ~5.1%.
> --
> Dr Jake Grimmett
> Head Of Scientific Computing
> MRC
re's a PG change ongoing (either pg autoscaler or
> > > a manual change, both obey the target misplaced ratio).
> > > You can check this by running "ceph osd pool ls detail" and checking for
> > > the value of pg target.
> > >
> > > Also: Looks like you've set osd_scrub_during_recovery = false, this
> > > setting can be annoying on large erasure-coded setups on HDDs that see
> > > long recovery times. It's better to get IO priorities right; search
> > > mailing list for osd op queue cut off high.
> > >
> > > Paul
> >
> > --
> > Dr Jake Grimmett
> > Head Of Scientific Computing
> > MRC Laboratory of Molecular Biology
> > Francis Crick Avenue,
> > Cambridge CB2 0QH, UK.
> >
> >
--
Matt Larson, PhD
Madison, WI 53705 U.S.A.
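Paul's point about IO priorities and the target misplaced ratio maps to these settings; a sketch using the central config (an OSD restart is needed for the queue options to take effect):
  ceph config set osd osd_op_queue wpq
  ceph config set osd osd_op_queue_cut_off high
  ceph config get mgr target_max_misplaced_ratio   # the ~5% ceiling seen above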
`ceph-volume lvm create --bluestore --data /dev/sd --block.db /dev/nvme0n1`
Is there a workaround for this problem where the container process is
unable to read the label of the LVM partition and fails to start the
OSD?
Thanks,
Matt
--
Matt Larson, PhD
Madison, WI 53705 U.S.A.
see if that
is sufficient to let the cluster catch up.
The commands are from
(http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-January/023844.html)
-Matt
On Mon, Sep 21, 2020 at 7:20 PM Matt Larson wrote:
>
> Hi Wout,
>
> None of the OSDs are greater than 20% full. Howev
tinue once the PGs are "active+clean"
>
> Kind regards,
>
> Wout
> 42on
>
>
> From: Matt Larson
> Sent: Monday, September 21, 2020 6:22 PM
> To: ceph-users@ceph.io
> Subject: [ceph-users] Troubleshooting stuck unclean
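A couple of standard commands for this kind of troubleshooting (not necessarily the ones from the linked thread, which is not reproduced here):
  ceph health detail           # lists the stuck/unclean PGs and why
  ceph pg dump_stuck unclean   # shows which OSDs the stuck PGs map to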
using `cephadm` tools? At last check, a
server running 16 OSDs and 1 MON is using 39G of disk space for its
running containers. Can restarting containers help to start with a
fresh slate or reduce the disk use?
Thanks,
Matt
Matt Larson
Associate Scientist
Computer Scienti
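To see where the container-related space is actually going before restarting anything, the container runtime's own accounting helps; a sketch assuming podman (the default runtime under cephadm on recent distributions):
  podman system df      # breaks usage down by images, containers and volumes
  podman image prune    # removes dangling images only; review the "df" output first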
86_64/zabbix-release-4.4-1.el8.noarch.rpm)
- Python version 3.6.8
Any suggestions? I am wondering whether this requires Python 2.7 to run.
--
Matt Larson, PhD
Madison, WI 53705 U.S.A.
me warnings I have
> "client X is failing to respond to cache pressure."
> Besides that, there are no complaints, but I think you would need the 256 GB of
> RAM, especially if the datasets will increase... just my 2 cents.
>
> Will you have SSDs?
>
>
>
> On Fri, Feb
processing of the images.
Thanks!
-Matt
--
Matt Larson, PhD
Madison, WI 53705 U.S.A.
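The "failing to respond to cache pressure" warnings mentioned above are often addressed by adjusting the MDS cache size rather than by adding client RAM; a hedged sketch (the 8 GiB value is an arbitrary example):
  ceph config set mds mds_cache_memory_limit 8589934592   # 8 GiB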