Hi John Spray,
I am now able to update the max_misplaced parameter successfully and have
validated it.
We are using the balancer in upmap mode, and it has started redistributing PGs.
We observed that the backfill wait grows a lot. Can we create a plan in the
balancer to restrict PG backfilling? (One way to throttle it is sketched below.)
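Not a balancer plan as such, but a common way to limit the impact is to lower
the balancer's max_misplaced fraction and the per-OSD backfill limit; a
minimal sketch, assuming Luminous-style config-key settings (the values are
illustrative):
$ ceph config-key set mgr/balancer/max_misplaced 0.01
$ ceph tell osd.* injectargs '--osd-max-backfills 1'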
On Thu, May 3, 2018 at 1:33 AM, wrote:
>
> Hi all.
Hi Anton,
>
> We try to setup our first CentOS 7.4.1708 CEPH cluster, based on Luminous
> 12.2.5. What we get is:
>
>
> Error: Package: 2:ceph-selinux-12.2.5-0.el7.x86_64 (Ceph-Luminous)
> Requires: selinux-policy-base >= 3.13.1-166
Anton
if you still cannot install the ceph RPMs because of that dependency,
do as Ruben suggests - install selinux-policy-targeted.
Then use the RPM option --nodeps, which ignores the dependency
requirements.
Do not be afraid to use this option - and do not use it blindly either.
Someti
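For reference, a minimal sketch of those two steps (the exact RPM filename is
illustrative):
$ yum install selinux-policy-targeted
$ rpm -ivh --nodeps ceph-selinux-12.2.5-0.el7.x86_64.rpm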
Hi All,
after every reboot the superblock and the other files disappear from
/var/lib/ceph/osd/ceph-0 (1, etc.).
I have to prepare and activate the OSDs after every reboot. Any suggestions?
ceph.target and ceph-osd are enabled.
Thanks in advance!
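One thing worth checking is whether the OSD data directories are mount points
that come back after a reboot; if the underlying partition is not remounted,
the directory looks wiped. For example (OSD id 0 is assumed here):
$ mount | grep /var/lib/ceph/osd
$ systemctl status ceph-osd@0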
-----Original Message-----
From: Alex Gorbachev
Sent: 02 May 2018 22:05
To: Nick Fisk
Cc: ceph-users
Subject: Re: [ceph-users] Bluestore on HDD+SSD sync write latency experiences
Hi Nick,
On Tue, May 1, 2018 at 4:50 PM, Nick Fisk wrote:
> Hi all,
>
>
>
> Slowly getting round to migrating clu
Hi Ruben and community.
Thanks a lot for all the help and hints. I finally figured out that "base" is
also provided by, e.g., "selinux-policy-minimum". After installing this package
via "yum install", the usual ceph installation continues...
Seems like the ceph packaging is too RHEL-oriented ;)
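For anyone hitting the same dependency, yum can list which packages provide it
(the output will vary with the configured repos):
$ yum provides 'selinux-policy-base'
$ yum install selinux-policy-minimum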
Hi Nick,
On 5/1/2018 11:50 PM, Nick Fisk wrote:
Hi all,
Slowly getting round to migrating clusters to Bluestore, but I am interested
in how people are handling the potential change in write latency coming from
Filestore. Or maybe nobody is really seeing much difference?
As we all know, in
Which version of ceph? Filestore or bluestore? Did you use ceph-disk,
ceph-volume, or something else to configure the OSDs? Did you use LVM? Is
there encryption or any other layer involved? (The commands below gather most
of this.)
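If it helps, something like the following reports most of that when run on an
OSD host (ceph-disk on Jewel-era deployments, ceph-volume on Luminous and
later):
$ ceph --version
$ ceph-disk list
$ ceph-volume lvm list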
On Thu, May 3, 2018, 6:45 AM Akshita Parekh
wrote:
> Hi All,
>
>
> after every reboot the current superblo
On Thu, May 3, 2018 at 6:54 AM, Nick Fisk wrote:
> -----Original Message-----
> From: Alex Gorbachev
> Sent: 02 May 2018 22:05
> To: Nick Fisk
> Cc: ceph-users
> Subject: Re: [ceph-users] Bluestore on HDD+SSD sync write latency experiences
>
> Hi Nick,
>
> On Tue, May 1, 2018 at 4:50 PM, Nick F
Hi Nick,
Our latency probe results (4kB rados bench) didn't change noticeably
after converting a test cluster from FileStore (sata SSD journal) to
BlueStore (sata SSD db). Those 4kB writes take 3-4ms on average from a
random VM in our data centre. (So bluestore DB seems equivalent to
FileStore jou
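For reference, a single-threaded 4kB write probe similar to the one described
can be run with rados bench (the pool name is an assumption):
$ rados bench -p testpool 30 write -b 4096 -t 1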
Hi Dan,
Quoting Dan van der Ster :
Hi Nick,
Our latency probe results (4kB rados bench) didn't change noticeably
after converting a test cluster from FileStore (sata SSD journal) to
BlueStore (sata SSD db). Those 4kB writes take 3-4ms on average from a
random VM in our data centre. (So bluest
Creating an encrypted bluestore OSD with dmcrypt is very simple (literally
just add --dmcrypt to the exact same command you would normally run to
create the OSD). The gotcha is that I had to find the option by using
--help with ceph-volume from the CLI. I
was unable t
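A minimal sketch of that invocation (the device path is an assumption):
$ ceph-volume lvm create --bluestore --dmcrypt --data /dev/sdb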
On Thu, May 3, 2018 at 1:22 PM, David Turner wrote:
> Creating an encrypted bluestore OSD with dmcrypt is very simple (literally
> just add --dmcrypt to the exact same command you would normally run to create
> the OSD). The gotcha is that I had to find the
> option b
Please keep the mailing list in your responses. What steps did you follow
when configuring your OSDs?
On Fri, May 4, 2018, 12:14 AM Akshita Parekh
wrote:
> Ceph v10.2.0 (Jewel). Why are ceph-disk or ceph-volume required to
> configure disks? Where does the encryption come in?
>
> On Thu, May 3, 2018 at 6:24 PM, Da
Steps followed while installing Ceph:
1) Installed the RPMs
2) Followed the steps given in
http://docs.ceph.com/docs/master/start/quick-ceph-deploy/ , apart from
steps 2 and 3
3) Then:
ceph-deploy osd prepare osd1:/dev/sda1
ceph-deploy osd activate osd1:/dev/sda1
It said conf files were
My ceph status says:
  cluster:
    id:     b2b00aae-f00d-41b4-a29b-58859aa41375
    health: HEALTH_OK
  services:
    mon: 3 daemons, quorum ceph01,ceph03,ceph07
    mgr: ceph01(active), standbys: ceph-ceph07, ceph03
    osd: 78 osds: 78 up, 78 in
  data:
    pools: 4 pools, 3240 pgs
    ob
Hello all,
I can seemingly enable the balancer ok:
$ ceph mgr module enable balancer
but if I try to check its status:
$ ceph balancer status
Error EINVAL: unrecognized command
or turn it on:
$ ceph balancer on
Error EINVAL: unrecognized command
$ which ceph
/bin/ceph
$ rpm -qf /bin/ceph
cep
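In cases like this it is worth confirming that the module is actually loaded
and that the CLI matches the cluster version (an older ceph client will not
know commands registered by a newer mgr); for example:
$ ceph mgr module ls
$ ceph versions
$ rpm -q ceph-common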