On Mon, Oct 24, 2016 at 3:31 AM, Ranjan Ghosh wrote:
Thanks JC & Greg, I've changed the "mon osd min down reporters" to 1.
According to this:
http://docs.ceph.com/docs/jewel/rados/configuration/mon-osd-interaction/
the default is already 1, though. I don't remember the value before I
changed it everywhere, so I can't say for sure now. But I think [...]
mon_osd_min_down_reporters is set to 2 by default.
I guess you'll have to set it to 1 in your case.
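Something along these lines should do it (the mon id is usually the
hostname, so treat "uhu2" below as a placeholder for whichever monitor
runs on the node):

  # check what the monitor is actually running with
  ceph daemon mon.uhu2 config show | grep mon_osd_min_down_reporters

  # make it permanent: add to /etc/ceph/ceph.conf on both nodes, e.g.
  #   [global]
  #   mon osd min down reporters = 1
  # then restart the monitors, or change it on the fly on each mon host:
  ceph daemon mon.uhu2 config set mon_osd_min_down_reporters 1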
JC
On Sep 29, 2016, at 08:16, Gregory Farnum wrote:
I think the problem is that Ceph requires failure reports from a certain
number of OSDs, or a certain total number of reports, before it marks an
OSD down. These thresholds are not tuned for a 2-OSD cluster; you
probably want to set them to 1.
Also keep in mind that the OSDs provide a grace period of 20-30 seconds [...]
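(To see what those thresholds and the heartbeat grace are currently set
to, something like this works; the daemon ids are placeholders for
whatever mon/OSD runs on the node:)

  # monitor-side failure-reporting thresholds
  ceph daemon mon.uhu2 config show | grep mon_osd_min_down
  # OSD-side heartbeat grace (20 s by default)
  ceph daemon osd.0 config get osd_heartbeat_grace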
On 09/29/16 14:07, Ranjan Ghosh wrote:
Wow. Amazing. Thanks a lot!!! This works. 2 (hopefully) last questions
on this issue:
1) When the first node is coming back up, I can just call "ceph osd up
0" and Ceph will start auto-repairing everything everything, right? That
is, if there are e.g. new files that were created during the tim
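(For what it's worth, an OSD normally marks itself up on its own once
its daemon is running and has reported back to the monitors, so there
shouldn't be a need for an explicit "up" command. A rough sketch,
assuming systemd-managed OSDs and OSD id 0 on the returning node:)

  # start the OSD daemon again on the node that came back
  systemctl start ceph-osd@0
  # it should show as up/in here shortly afterwards
  ceph osd tree
  # watch recovery/backfill progress until HEALTH_OK
  ceph -s        # or "ceph -w" to follow it continuously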
Ranjan,
If you unmount the file system on both nodes and then gracefully stop the
Ceph services (or even yank the network cable for that node), what state is
your cluster in? Are you able to do a basic rados bench write and read?
How are you mounting CephFS, through the kernel or FUSE client? Have [...]
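(A quick way to run those checks; the pool name, monitor address, mount
point and secret file path below are only examples:)

  # 10-second write benchmark into an existing pool, keeping the objects
  rados bench -p cephfs_data 10 write --no-cleanup
  # sequential read of what was just written, then remove the bench objects
  rados bench -p cephfs_data 10 seq
  rados -p cephfs_data cleanup

  # kernel client mount
  mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
  # FUSE client mount
  ceph-fuse -m 192.168.0.1:6789 /mnt/cephfs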
On 09/29/16 12:08, Ranjan Ghosh wrote:
Hi Vasu,
thank you for your answer.
Yes, all the pools have min_size 1:
root@uhu2 /scripts # ceph osd lspools
0 rbd,1 cephfs_data,2 cephfs_metadata,
root@uhu2 /scripts # ceph osd pool get cephfs_data min_size
min_size: 1
root@uhu2 /scripts # ceph osd pool get cephfs_metadata min_size
min_size: 1
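(So both CephFS pools are already at min_size 1. If one of them had
still been at 2, it could be lowered like this, using the pool names
from the listing above; with only two OSDs the replica count would
normally stay at 2:)

  ceph osd pool set cephfs_data min_size 1
  ceph osd pool set cephfs_metadata min_size 1
  # check the replica count as well
  ceph osd pool get cephfs_data size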
On Wed, Sep 28, 2016 at 8:03 AM, Ranjan Ghosh wrote:
Hi everyone,
Up until recently, we were using GlusterFS to have two web servers in
sync so we could take one down and switch back and forth between them -
e.g. for maintenance or failover. Usually, both were running, though.
The performance was abysmal, unfortunately. Copying many small files [...]