Hi,
Yes, my rbd-mirror is collocated with my mon/osd. It only affects nodes where
they are collocated, as they all use the "/etc/sysconfig/ceph" configuration
file.
Best
Jocelyn Thode
-----Original Message-----
From: Vasu Kulkarni [mailto:vakul...@redhat.com]
Sent: Friday, 20 July 2018 17:2
Hi.
An S3 retention policy is applied to a bucket used by a backup application:
at 04:00, backups older than 2 days are deleted from the bucket.
At this time I see very high usage of the default.rgw.log pool. The usage log
is enabled, the ops log is disabled, and the index pool is on NVMe:
- https://ibb.co/dozqPJ
- https://ibb.c
I don't even have a fancy kernel or device, just plain standard Debian. The
uptime was 6 days since the upgrade from 12.2.6...
Nicolas, you should upgrade your 12.2.6 to 12.2.7 due to bugs in that release.
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-July/028153.html
k
Hello Ceph Users,
We added more SSD storage to our Ceph cluster last night: 4 x 1TB drives, and
the available space went from 1.6TB to 0.6TB (in `ceph df` for the SSD pool).
I would assume that the weight needs to be changed, but I didn't think I would
need to? Should I change t
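For what it's worth, the MAX AVAIL figure in `ceph df` is not a simple sum of free space: roughly speaking, it is bounded by the fullest OSD relative to its CRUSH weight. The sketch below is a simplified model of that calculation (an assumption on my part; the real computation in the mons also accounts for full/nearfull ratios), just to illustrate how misweighted new drives can make the projected available space shrink instead of grow. All numbers are illustrative.

```python
# Simplified model (assumption) of how `ceph df` projects MAX AVAIL for a
# replicated pool: the OSD with the least free space per unit of CRUSH
# weight bounds the whole pool.

def max_avail(osds, replicas):
    """osds: list of (free_tb, crush_weight). Returns projected MAX AVAIL in TB."""
    weighted = [(free / w, w) for free, w in osds if w > 0]
    if not weighted:
        return 0
    # The fullest OSD (least free space per unit weight) is the bottleneck.
    limit = min(fpw for fpw, _ in weighted)
    total_weight = sum(w for _, w in weighted)
    return limit * total_weight / replicas

# Four existing OSDs, 2 TB free each, weight 1.0, replica size 2:
print(max_avail([(2.0, 1.0)] * 4, 2))  # 4.0

# Add four 1 TB drives mistakenly weighted 4.0: projected space *drops*,
# because the new OSDs now bound the pool.
print(max_avail([(2.0, 1.0)] * 4 + [(1.0, 4.0)] * 4, 2))  # 2.5
```

If something like this is what happened, checking `ceph osd df tree` for the new OSDs' weights would be the first thing I'd look at.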
I am using openstack-ansible with ceph-ansible to deploy my Ceph
cluster, and here is my config in the yml file:
---
osd_objectstore: bluestore
osd_scenario: lvm
lvm_volumes:
- data: /dev/sdb
- data: /dev/sdc
- data: /dev/sdd
- data: /dev/sde
This is the error I am getting:
TASK [ceph-os
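One thing worth checking: depending on the ceph-ansible version, `lvm_volumes` entries may be expected to reference pre-created logical volumes (with a matching `data_vg`), while bare block devices go under `devices:` instead. A hedged sketch of both forms follows; the VG/LV names are purely illustrative, not from any real setup:

```yaml
# Variant A (assumption: your ceph-ansible version lets ceph-volume
# consume raw devices directly)
osd_objectstore: bluestore
osd_scenario: lvm
devices:
  - /dev/sdb
  - /dev/sdc

# Variant B (assumption: lvm_volumes must point at pre-created LVs)
lvm_volumes:
  - data: osd-data-1      # illustrative LV name
    data_vg: ceph-vg-1    # illustrative VG name
```

Without the full TASK output it's hard to say which applies, but mixing the two conventions is a common source of ceph-ansible failures.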
>Kernel 3.16 is not *the* LTS kernel but *an* LTS kernel. The current LTS
>kernel is 4.14
Thanks for clarifying that. I guess I forgot how long I've been trying to
get Ceph to work. When I started, 3.16 was the current LTS kernel!
Had I known that it's so stable that serious bugs are left in i
2018-07-22 22:02 GMT+02:00 Bryan Henderson :
> Linux kernel 3.16 (the current long term stable Linux kernel) and so far
>
> So what are other people using? A less stable kernel? An out-of-tree
> driver?
> FUSE? Is there a working process for getting known bugs fixed in 3.16?
>
>
Kernel 3.16 is
FUSE
On 07/22/2018 10:02 PM, Bryan Henderson wrote:
> Is there some better place to get a filesystem driver for the longterm
> stable Linux kernel (3.16) than the regular kernel.org source distribution?
>
> The reason I ask is that I have been trying to get some clients running
> Linux kernel 3.
Is there some better place to get a filesystem driver for the longterm
stable Linux kernel (3.16) than the regular kernel.org source distribution?
The reason I ask is that I have been trying to get some clients running
Linux kernel 3.16 (the current long term stable Linux kernel) and so far
I have
On Sunday, 22 July 2018 at 02:44 +0200, Oliver Freyermuth wrote:
> Since all services are running on these machines - are you by any
> chance running low on memory?
> Do you have a monitoring of this?
I have Munin monitoring on all hosts, but nothing special to notice,
except for a +3°C te
I read that post, and that's why I opened this thread for a few more questions
and clarification.
When you say the OSD doesn't come up, what does that actually mean? After a
reboot of the node, after a service restart, or after installation of a new disk?
You said you are using a manual method; what is that?
I'm building new c
I don’t think it will get any more basic than that. Or maybe this: if
the doctor diagnoses you, you can either accept this, get a 2nd opinion,
or study medicine to verify it.
In short, LVM has been introduced to solve some issues related to
starting OSDs (which I did not have, probably bec
Generally the recommendation is: if your redundancy is X, you should have at
least X+1 entities in your failure domain to allow Ceph to automatically
self-heal.
Given your setup of 6 servers and a failure domain of host, you should
select k+m=5 at most. So 3+2 should make for a good profile in your c
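The arithmetic behind that recommendation can be sketched as follows (a minimal illustration, assuming the rule of thumb above with failure domain = host; the function name is mine, not from any Ceph tooling):

```python
# Sketch: storage efficiency and spare capacity of a k+m erasure-coded
# pool with failure domain = host, per the X+1 rule of thumb above.

def ec_profile_stats(k: int, m: int, hosts: int):
    """Return (usable_fraction, hosts_spare) for a k+m EC profile.

    usable_fraction: share of raw capacity holding real data, k / (k + m)
    hosts_spare: hosts left over for self-healing, hosts - (k + m)
    """
    if k + m > hosts:
        raise ValueError("k+m chunks cannot exceed the number of hosts")
    return k / (k + m), hosts - (k + m)

# 6 servers, 3+2: k+m = 5 leaves one spare host so Ceph can backfill
# after a host failure, and 60% of raw capacity is usable.
usable, spare = ec_profile_stats(k=3, m=2, hosts=6)
print(usable)  # 0.6
print(spare)   # 1
```

A 4+2 profile on the same 6 hosts would be more space-efficient, but leaves no spare host, so a failed server could not be healed onto the remaining ones.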