Stanislav Butkeev
15.10.2015, 21:49, "John Spray" :
> On Thu, Oct 15, 2015 at 5:11 PM, Butkeev Stas wrote:
>> Hello all,
>> Has anybody tried to use cephfs?
>>
>> I have two servers with RHEL 7.1 (latest kernel 3.10.0-229.14.1.el7.x86_64).
>> Each server
Best Regards,
Stanislav Butkeev
15.10.2015, 23:05, "John Spray" :
> On Thu, Oct 15, 2015 at 8:46 PM, Butkeev Stas wrote:
>> Thank you for your comment. I know what the oflag=direct option means and
>> other things about stress testing.
>> Unfortunately the speed is very slow
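For reference, a direct-I/O write test of the kind mentioned above is usually run along these lines (a sketch; the mount point /mnt/cephfs, block size, and count are assumptions, not values from the thread):
# dd if=/dev/zero of=/mnt/cephfs/testfile bs=4M count=256 oflag=direct
oflag=direct bypasses the client page cache, so the measured rate reflects the cluster rather than local caching.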
Stanislav Butkeev
15.10.2015, 23:26, "Max Yehorov" :
> Stas,
>
> as you said: "Each server has 15G flash for ceph journal and 12*2Tb
> SATA disk for"
>
> What is this 15G flash and is it used for all 12 SATA drives?
>
> On Thu, Oct 15, 2015 at 1:05 PM, John Spray
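One way to see how that flash device is shared among the OSD journals (a sketch, assuming the default /var/lib/ceph layout and the filestore backend used by ceph 0.94):
# ls -l /var/lib/ceph/osd/ceph-*/journal    # each symlink shows which partition an OSD journals to
# ceph-disk list                            # summarises data and journal partitions per disk
If a single 15 GB device holds all 12 journals, it can easily become the write bottleneck, since every write passes through the journal first.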
9 MB/s
I hope that I have missed some options during configuration or something else.
--
Best Regards,
Stanislav Butkeev
15.10.2015, 22:36, "John Spray" :
> On Thu, Oct 15, 2015 at 8:17 PM, Butkeev Stas wrote:
>> Hello John
>>
>> Yes, of course, write speed is rising
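A useful cross-check at this point is to benchmark the underlying RADOS pool directly, independent of CephFS (a sketch; the pool name rbd and the 30-second duration are assumptions):
# rados bench -p rbd 30 write --no-cleanup
# rados bench -p rbd 30 seq
If the raw pool is also slow, the problem is below CephFS (journals, disks, network); if it is fast, the CephFS client side deserves a closer look.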
Hello all,
Has anybody tried to use cephfs?
I have two servers with RHEL 7.1 (latest kernel 3.10.0-229.14.1.el7.x86_64). Each
server has a 15 GB flash device for the ceph journal and 12×2 TB SATA disks for data.
I have an InfiniBand (IPoIB) 56 Gb/s interconnect between the nodes.
Cluster version:
# ceph -v
ceph version 0.94
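For context, mounting CephFS with the kernel client on a setup like this is normally done along these lines (a sketch; the monitor address, mount point, and secret file path are assumptions):
# mount -t ceph 192.168.1.10:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret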
Hello everybody,
We have a ceph cluster that consists of 8 hosts with 12 OSDs per host. They are
2 TB SATA disks.
[13:23]:[root@se087 ~]# ceph osd tree
ID WEIGHT    TYPE NAME    UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 182.99203 root default
Hello, all
I have a ceph+RGW installation and have some problems with "shadow" objects.
For example:
#rados ls -p .rgw.buckets|grep "default.4507.1"
...
default.4507.1__shadow_test_s3.2/2vO4WskQNBGMnC8MGaYPSLfGkhQY76U.1_5
default.4507.1__shadow_test_s3.2/2vO4WskQNBGMnC8MGaYPSLfGkhQY76U.2_2
defa
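Shadow objects like these are normally removed by the RGW garbage collector some time after the corresponding S3 objects are deleted; one way to inspect and trigger it (a sketch, assuming a standard radosgw-admin setup):
# radosgw-admin gc list --include-all    # pending garbage-collection entries
# radosgw-admin gc process               # run garbage collection now
Shadow objects that never show up in the gc list may be leftovers from interrupted multipart uploads or failed deletes.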
Thank you Lionel,
Indeed, I had forgotten about size > min_size. I have set min_size to 1 and my
cluster is UP now. I have deleted the crashed OSD and set size to 3 and min_size
to 2.
---
With regards,
Stanislav
01.12.2014, 19:15, "Lionel Bouton" :
> On 01/12/2014 17:08, Lionel Bouton wrote:
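The size/min_size change described above is applied per pool, roughly like this (a sketch; <poolname> is a placeholder for each affected pool):
# ceph osd pool set <poolname> min_size 1    # temporary, lets I/O resume while only one replica is available
# ceph osd pool set <poolname> size 3
# ceph osd pool set <poolname> min_size 2
min_size must stay at or below the number of replicas actually up, otherwise I/O to the affected PGs blocks.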
Hi all,
I have a Ceph cluster + RGW. Now I have a problem with one of the OSDs; it is down. I
checked the ceph status and see this information:
[root@node-1 ceph-0]# ceph -s
cluster fc8c3ecc-ccb8-4065-876c-dc9fc992d62d
health HEALTH_WARN 4 pgs incomplete; 4 pgs stuck inactive; 4 pgs stuck
unclean
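The usual next steps for incomplete/stuck PGs like these (a sketch; <pgid> is a placeholder for a PG id taken from the output):
# ceph health detail        # lists the incomplete and stuck PGs by id
# ceph pg <pgid> query      # shows which OSDs the PG needs and why it is incomplete
# ceph osd tree             # confirms which OSD is down
Whether the PGs can recover depends on the pool's size/min_size settings and on whether the down OSD held the only copy.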