Hello Josh, thanks for your feedback!
On 9/22/22 14:44, Josh Baergen wrote:
Hi Fulvio,
https://docs.ceph.com/en/quincy/dev/osd_internals/backfill_reservation/
describes the prioritization and reservation mechanism used for
recovery and backfill. AIUI, unless a PG is below min_size, all
backfills
Well, if that issue occurs it will be at the beginning of the
recovery, so you may not notice it until you get inactive PGs. We hit
that limit when we rebuilt all OSDs on one server with many EC chunks.
Setting osd_max_pg_per_osd_hard_ratio to 5 (default 3) helped avoid
inactive PGs for all
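(Not from the original mail, but for reference: assuming a release with the centralized config store, that hard ratio can be raised at runtime with something like the following; 5 is simply the value mentioned above, not a general recommendation.)
$ ceph config set osd osd_max_pg_per_osd_hard_ratio 5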
Hello All!
just to bring this knowledge to a wider audience...
Under some circumstances OSDs/clusters might report (and even suffer
from) spurious disk read errors. The re-posted comment below sheds light
on the root cause. Many thanks to Canonical's folks for that.
Originally posted at:
Hi everyone,
while evaluating different config options at our Ceph cluster, I discovered
that there are multiple ways to apply (ephemeral) config changes to specific
running daemons. But even after researching docs and manpages, and doing some
experiments, I fail to understand when to use which
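(For context, and not part of the original question: the alternatives usually being compared look roughly like this, using debug_osd purely as an example option.)
$ ceph config set osd.0 debug_osd 10        # stored in the mon config database, survives restarts
$ ceph tell osd.0 config set debug_osd 10   # ephemeral override, sent over the network
$ ceph daemon osd.0 config set debug_osd 10 # ephemeral override, via the local admin socket on the daemon's host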
Hi Fulvio,
> leads to a much shorter and less detailed page, and I assumed Nautilus
> was far behind Quincy in managing this...
The only major change I'm aware of between Nautilus and Quincy is that
in Quincy the mClock scheduler is able to automatically tune up/down
backfill parameters to achieve
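(Aside, not from the original reply: with mClock in Quincy the usual knob is the scheduler profile rather than the individual backfill options, e.g. the following, where high_recovery_ops is one of the built-in profiles.)
$ ceph config set osd osd_mclock_profile high_recovery_ops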
Thank you for your reply,
discard is not enabled in our configuration as it is mainly the default
conf. Are you suggesting to enable it?
On 9/22/22 14:20, Stefan Kooman wrote:
Just guessing here: have you configured "discard":
bdev enable discard
bdev async discard
We've seen monitor slow ops
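(Not from the original mail: the current values of these options can be checked via the centralized config store; the underscored names are the config-database form of the settings quoted above.)
$ ceph config get osd bdev_enable_discard
$ ceph config get osd bdev_async_discard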
I found in some articles on the net that in their ceph.ko it depends on the
fscache module.
root@client:~# lsmod | grep ceph
ceph 376832 1
libceph 315392 1 ceph
fscache 65536 1 ceph
libcrc32c 16384 3 xfs,raid456,libceph
root@client:~# modinfo ceph
filename: /lib/modules/4.15.0-112-generic/kernel
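(As an aside, not part of the quoted output: the dependency can also be read directly from the module metadata; the exact output will vary by kernel build.)
$ modinfo -F depends ceph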
When doing manual remapping/rebalancing with tools like pgremapper and
placementoptimizer, what are the recommended settings for norebalance,
norecover, nobackfill?
Should the balancer module be disabled if we are manually issuing the pg remap
commands generated by those scripts so it doesn't
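(For readers following along, and not part of the original question: the flags and balancer toggle referred to here are set and cleared as follows; whether all of them are needed with these tools is exactly what is being asked.)
$ ceph osd set norebalance
$ ceph osd set nobackfill
$ ceph osd set norecover
$ ceph balancer off
# ...and the reverse once done:
$ ceph osd unset norebalance && ceph osd unset nobackfill && ceph osd unset norecover
$ ceph balancer on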
On 9/23/22 15:22, J-P Methot wrote:
Thank you for your reply,
discard is not enabled in our configuration as it is mainly the default
conf. Are you suggesting to enable it?
No. There is no consensus on whether enabling it is a good idea (it
depends on proper implementation, among other things). From my
On 9/23/22 17:05, Wyll Ingersoll wrote:
When doing manual remapping/rebalancing with tools like pgremapper and
placementoptimizer, what are the recommended settings for norebalance,
norecover, nobackfill?
Should the balancer module be disabled if we are manually issuing the pg remap
commands
Hey Wyll,
> $ pgremapper cancel-backfill --yes # to stop all pending operations
> $ placementoptimizer.py balance --max-pg-moves 100 | tee upmap-moves
> $ bash upmap-moves
>
> Repeat the above 3 steps until balance is achieved, then re-enable the
> balancer and unset the "no" flags set earlier?
Understood, that was a typo on my part.
Definitely don't cancel-backfill after generating the moves from
placementoptimizer.
From: Josh Baergen
Sent: Friday, September 23, 2022 11:31 AM
To: Wyll Ingersoll
Cc: Eugen Block ; ceph-users@ceph.io
Subject: Re: [ceph-
We just got a reply from Intel telling us that there's a new firmware
coming out soon to fix an issue where S4510 and S4610 drives get IO
timeouts that may lead to drive drops when under heavy load. This might
very well be the source of our issue.
On 9/23/22 11:12, Stefan Kooman wrote:
On 9/2
Hi,
The below fstab entry works, so that is a given.
But how do I specify which Ceph filesystem I want to mount in this fstab format?
192.168.1.11,192.168.1.12,192.168.1.13:/ /media/ceph_fs/
name=james_user, secretfile=/etc/ceph/secret.key
I have tried different ways, but always get the error
Try adding mds_namespace option like so:
192.168.1.11,192.168.1.12,192.168.1.13:/ /media/ceph_fs/
name=james_user,secretfile=/etc/ceph/secret.key,mds_namespace=myfs
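(Not part of the original reply: on newer kernels and mount.ceph versions, mds_namespace has been superseded by the fs= option, so the equivalent line, assuming the filesystem is still called myfs, would be:)
192.168.1.11,192.168.1.12,192.168.1.13:/ /media/ceph_fs/ ceph name=james_user,secretfile=/etc/ceph/secret.key,fs=myfs 0 0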
On Fri, Sep 23, 2022 at 6:41 PM Sagittarius-A Black Hole <
nigrat...@gmail.com> wrote:
> Hi,
>
> The below fstab entry works, s
On Fri, Sep 23, 2022 at 6:41 PM Sagittarius-A Black Hole
wrote:
>
> Hi,
>
> The below fstab entry works, so that is a given.
> But how do I specify which Ceph filesystem I want to mount in this fstab
> format?
>
> 192.168.1.11,192.168.1.12,192.168.1.13:/ /media/ceph_fs/
> name=james_user, sec
This is what I tried, following the link:
{name}@.{fs_name}=/ {mount}/{mountpoint} ceph
[mon_addr={ipaddress},secret=secretkey|secretfile=/path/to/secretfile
does not work, it reports: source mount path was not specified, unable
to parse mount source:-22
why is mount and mountpoint specified lik
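(A sketch, not a confirmed fix: {mount}/{mountpoint} in the docs appears to be just a placeholder for the local mount directory, not two literal tokens, and in the new device-string syntax the monitor list goes into mon_addr with '/' separators. Reusing the names from the working entry above, a concrete line might look like the following; whether it works also depends on the kernel and mount.ceph versions supporting the new syntax.)
james_user@.myfs=/ /media/ceph_fs ceph mon_addr=192.168.1.11/192.168.1.12/192.168.1.13,secretfile=/etc/ceph/secret.key 0 0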
Hi,
thanks for the suggestion of the namespace. I'm trying to find any
documentation on it: how do you set a namespace for a filesystem /
pool?
Thanks,
Daniel
On Fri, 23 Sept 2022 at 16:01, Wesley Dillingham wrote:
>
> Try adding mds_namespace option like so:
>
> 192.168.1.11,192.168.1.12,1
Ah, I found it: mds_namespace IS in this case the name of the filesystem.
Why not call it filesystem name instead of namespace, a term that is,
as far as I could find, not defined in Ceph.
Thanks,
Daniel
On Fri, 23 Sept 2022 at 17:09, Sagittarius-A Black Hole
wrote:
>
> Hi,
>
> thanks for the sug