The filestore_split_multiple setting does indeed require a restart of the OSD
daemon to take effect, and the same is true of filestore_merge_threshold.
Note that these settings only apply to FileStore; if you're using BlueStore,
they have no effect at all.
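Since these are restart-required options, the durable way to change them is in ceph.conf followed by an OSD restart. A minimal sketch (the values here are illustrative, not recommendations; FileStore splits a subfolder once it holds roughly filestore_split_multiple * abs(filestore_merge_threshold) * 16 objects):

```ini
[osd]
# Raising the split multiple delays subfolder splitting until
# directories hold more objects (illustrative values)
filestore_split_multiple = 24
filestore_merge_threshold = 40
```

After editing ceph.conf, restart the OSDs for the new values to be observed.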

You can also use ceph-objectstore-tool to split subfolders while the OSD
is offline.  I use the following script to split the subfolders on
my clusters while the OSDs on a node are offline.  It finds all of the
OSDs on a node and splits their subfolders to match the settings in the
ceph.conf file.  Make sure you understand what the commands do, and test
them in your environment before running this (or your personalized version
of it).


ceph osd set noout
sudo systemctl stop ceph-osd.target
for osd in $(mount | grep -Eo 'ceph-[0-9]+' | cut -d- -f2 | sort -nu); do
  # Single-iteration loop so each OSD's whole work unit can be
  # backgrounded as one block (note the trailing "&" below)
  for run_in_background in true; do
    echo "Starting osd.${osd}"
    # Flush the journal before operating on the store offline
    sudo -u ceph ceph-osd -i ${osd} --flush-journal
    for pool in $(ceph osd lspools | gawk 'BEGIN {RS=","} {print $2}'); do
      sudo -u ceph ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-${osd} \
        --journal-path /var/lib/ceph/osd/ceph-${osd}/journal \
        --log-file=/var/log/ceph/objectstore_tool.${osd}.log \
        --op apply-layout-settings \
        --pool ${pool} \
        --debug
    done
    echo "Finished osd.${osd}"
    sudo systemctl start ceph-osd@${osd}.service
  done &
done
wait
sudo systemctl start ceph-osd.target
ceph osd unset noout
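As an aside, the pool loop above assumes the comma-separated `ceph osd lspools` output format (e.g. `0 rbd,1 cephfs_data,`). The snippet below shows what the gawk expression extracts from a hypothetical sample of that output (plain awk behaves the same here):

```shell
# Hypothetical `ceph osd lspools` output in the comma-separated format
lspools_output='0 rbd,1 cephfs_data,2 cephfs_metadata,'

# RS="," splits the line into "id name" records; $2 is the pool name
echo "$lspools_output" | awk 'BEGIN {RS=","} {print $2}'
```

If your Ceph version prints pools one per line instead, adjust the parsing accordingly before running the script.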

On Thu, Nov 16, 2017 at 9:19 AM Piotr Dałek <piotr.da...@corp.ovh.com>
wrote:

> On 17-11-16 02:44 PM, Jaroslaw Owsiewski wrote:
> > HI,
> >
> > What exactly does this message mean:
> >
> > filestore_split_multiple = '24' (not observed, change may require restart)
> >
> > This happened after running the command:
> >
> > # ceph tell osd.0 injectargs '--filestore-split-multiple 24'
>
> It means that "filestore split multiple" is not observed for runtime
> changes: the new value is stored in the osd.0 process memory, but it is
> not used at all.
>
> > Do I really need to restart the OSD to make the change take effect?
> >
> > ceph version 12.2.1 () luminous (stable)
>
> Yes.
>
> --
> Piotr Dałek
> piotr.da...@corp.ovh.com
> https://www.ovh.com/us/
> _______________________________________________
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>