Hi list,
The documentation page for Jewel lists the filestore_split_rand_factor config option,
but I can't find it using 'ceph daemon osd.x config show'.
ceph version 10.2.10 (5dc1e4c05cb68dbf62ae6fce3f0700e4654fdbbe)
ceph daemon osd.0 config show|grep split
"mon_osd_max_split_count": "32",
"journaler_
No, it is supported in the next version of Jewel
http://tracker.ceph.com/issues/22658
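Once you are on a Jewel build that includes that backport, a quick way to confirm the
option is actually present on a running OSD (osd.0 here purely as an example) is to
query it directly over the admin socket:

ceph daemon osd.0 config get filestore_split_rand_factor

On a build without the backport this should return an error rather than a value, which
is consistent with 'config show | grep split' not listing it on 10.2.10.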
From: ceph-users on behalf of shadow_lin
Date: Sunday, April 1, 2018 at 3:53 AM
To: ceph-users
Subject: EXT: [ceph-users] Does jewel 10.2.10 support
filestore_split_rand_factor?
Thanks.
Is there any workaround for 10.2.10 to avoid all OSDs starting to split at the same
time?
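For what it's worth, a possible workaround (only a sketch; the values below are made
up for illustration): a filestore subdirectory splits once it holds more than
filestore_split_multiple * abs(filestore_merge_threshold) * 16 objects, so giving
different OSDs slightly different split multiples in ceph.conf and restarting them in
a rolling fashion spreads the split points out instead of having every OSD hit the
same threshold at the same time, e.g.

[osd.0]
filestore split multiple = 8
[osd.1]
filestore split multiple = 9
[osd.2]
filestore split multiple = 10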
2018-04-01
shadowlin
From: Pavan Rallabhandi
Sent: 2018-04-01 22:39
Subject: Re: [ceph-users] Does jewel 10.2.10 support filestore_split_rand_factor?
To: "shadow_lin", "ceph-users"
Cc:
Hello,
I have a small cluster with an inconsistent pg. I've tried 'ceph pg repair'
multiple times with no luck. 'rados list-inconsistent-obj 49.11c' returns:
# rados list-inconsistent-obj 49.11c
No scrub information available for pg 49.11c
error 2: (2) No such file or directory
I'm a bit at a loss here.
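One thing worth checking, assuming the pg simply has no scrub results stored:
list-inconsistent-obj works off the results recorded by the last scrub and prints that
'No such file or directory' error when none are available, so it can help to kick off
a fresh deep scrub and run the query again once it has finished:

ceph pg deep-scrub 49.11c
rados list-inconsistent-obj 49.11c --format=json-pretty

If the second command then lists the inconsistent object(s), the reported errors
(read_error, data_digest_mismatch, etc.) usually point at which copy is bad before
trying another repair.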
Hello,
Firstly, Jack pretty much correctly correlated my issues to Mark's points;
more below.
On Sat, 31 Mar 2018 08:24:45 -0500 Mark Nelson wrote:
> On 03/29/2018 08:59 PM, Christian Balzer wrote:
>
> > Hello,
> >
> > my crappy test cluster was rendered inoperable by an IP renumbering
> >
> A long time ago I was responsible for validating the performance of CXFS
> on an SGI Altix UV distributed shared-memory supercomputer. As it turns
> out, we could achieve about 22GB/s writes with XFS (a huge number at the
> time), but CXFS was 5-10x slower. A big part of that turned out to be the
> ke
Christian, you mention single-socket systems for storage servers.
I've often thought that the Xeon-D would be ideal as a building block for
storage servers:
https://www.intel.com/content/www/us/en/products/processors/xeon/d-processors.html
Low power, and a complete System-On-Chip with 10gig Ethernet.