On 07/24/2014 10:58 PM, Robert Fantini wrote:
Hello.
In this set up:
PowerEdge R720
Raid: Perc H710 eight-port, 6Gb/s
OSD drives: qty 4: Seagate Constellation ES.3 ST2000NM0023 2TB 7200 RPM
128MB Cache SAS 6Gb/s
Would it make sense to use these good SAS drives in RAID-1 for the journal?
Western Digital XE WD3001BKHG 300GB 10,000 RPM 32MB Cache
Hi Matt,
I'd recommend setting the RAID controller to JBOD mode and letting Ceph
handle the drives directly. Since Ceph handles replication and
distribution of data, I don't see a real need for RAID behind the OSDs.
In some cases it even results in worse overall performance and will
definitely
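As a minimal sketch of what the JBOD approach looks like on one node, assuming the controller exposes the disks as plain /dev/sdX block devices (the device names below are placeholders only):

# let ceph-disk partition the raw disk and create the OSD filesystem
ceph-disk prepare /dev/sdb
# activate the data partition so the new OSD registers and starts
ceph-disk activate /dev/sdb1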
Hi all,
I am in the process of installing and setting up Ceph on a group of
Allwinner A20 SoC mini computers. They are armhf devices and I have
installed Cubian (http://cubian.org/), which is a port of Debian Wheezy. I
tried to follow the instructions at:
http://ceph.com/docs/master/install/b
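For reference, the generic Debian steps from those install docs look roughly like the following; the firefly release and wheezy codename are assumptions here, and prebuilt armhf packages may not be available from ceph.com, in which case the distribution's own packages or a source build are the fallback:

# add the release key and the repository, then install
wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | sudo apt-key add -
echo deb http://ceph.com/debian-firefly/ wheezy main | sudo tee /etc/apt/sources.list.d/ceph.list
sudo apt-get update && sudo apt-get install ceph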
Hi,
I’ve purchased a couple of 45Drives enclosures and would like to figure out the
best way to configure them for Ceph.
Mainly I was wondering whether it is better to set up multiple RAID groups and
put an OSD on each, rather than an OSD for each of the 45 drives in the chassis?
Regards,
Ma
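If the per-drive route is chosen, a rough sketch with ceph-deploy (host and device names are placeholders) is simply one OSD per data disk:

# one OSD per disk; in practice the list would be generated for all 45 drives
ceph-deploy osd create store1:sdb store1:sdc store1:sdd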
Erik,
I updated the doc per your suggestion.
As for networks, you can specify a logical "public" network and a logical
"cluster" network. You may specify comma-delimited subnets on them. See
http://ceph.com/docs/master/rados/configuration/network-config-ref/#id1 for
details. I've never actually d
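For illustration, a minimal ceph.conf sketch with comma-delimited subnets on the public network (the subnets themselves are just placeholders):

[global]
public network = 192.168.1.0/24, 192.168.2.0/24
cluster network = 10.10.0.0/24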
Thanks for the information!
Based on my reading of http://ceph.com/docs/next/rbd/rbd-config-ref I
was under the impression that rbd cache options wouldn't apply, since
presumably the kernel is handling the caching. I'll have to toggle some
of those values and see if they make a difference in my se
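In case it helps when toggling: those settings only affect librbd clients (e.g. QEMU/KVM), not the kernel rbd driver, and would be sketched in ceph.conf roughly as follows (the sizes are arbitrary example values):

[client]
rbd cache = true
rbd cache size = 67108864        # 64 MB cache per librbd client, example value
rbd cache max dirty = 50331648   # dirty-data limit; 0 means write-through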
Hello.
In this set up:
PowerEdge R720
Raid: Perc H710 eight-port, 6Gb/s
OSD drives: qty 4: Seagate Constellation ES.3 ST2000NM0023 2TB 7200 RPM
128MB Cache SAS 6Gb/s
Would it make sense to use these good SAS drives in RAID-1 for the journal?
Western Digital XE WD3001BKHG 300GB 10,000 RPM 32MB Cache
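If the mirrored pair ends up holding the journals, one sketch is to let ceph-disk carve a journal partition per OSD on it, assuming the controller presents the RAID-1 virtual disk as, say, /dev/sdf and the data disks as /dev/sdb and /dev/sdc (all placeholder names):

# one OSD per data disk, journal partition created on the mirrored device
ceph-disk prepare /dev/sdb /dev/sdf
ceph-disk prepare /dev/sdc /dev/sdf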
Hi,
the next Ceph MeetUp in Berlin, Germany, happens on July 28.
http://www.meetup.com/Ceph-Berlin/events/195107422/
Regards
--
Robert Sander
Heinlein Support GmbH
Linux: Akademie - Support - Hosting
http://www.heinlein-support.de
Tel: 030-405051-43
Fax: 030-405051-19
Mandatory disclosures per §35a G
What is your kernel version? On kernels >= 3.11, sysctl -w
"net.ipv4.tcp_window_scaling=0" seems to improve the situation a lot. It
also helped considerably to keep processes from going (and sticking) into 'D' state.
On 24/07/2014 22:08, Udo Lembke wrote:
Hi again,
forgot to say - I'm still on 0.72.2!
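If that tunable helps, one way to keep it across reboots (the file name here is arbitrary) is roughly:

# apply immediately
sysctl -w net.ipv4.tcp_window_scaling=0
# persist it
echo "net.ipv4.tcp_window_scaling = 0" > /etc/sysctl.d/90-tcp-window-scaling.conf
sysctl -p /etc/sysctl.d/90-tcp-window-scaling.conf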
Hi again,
forgot to say - I'm still on 0.72.2!
Udo
Hi Steve,
I'm also looking for improvements in single-threaded reads.
Somewhat higher values (maybe twice as high?) should be possible with your config.
I have 5 nodes with 60 4 TB HDDs and got the following:
rados -p test bench -b 4194304 60 seq -t 1 --no-cleanup
Total time run:       60.066934
Total reads made
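For anyone reproducing this: the seq pass only has objects to read if a write pass with --no-cleanup ran first, roughly like this (pool name as above):

# write 4 MB objects for 60 s and keep them, then read them back single-threaded
rados -p test bench -b 4194304 60 write -t 1 --no-cleanup
rados -p test bench 60 seq -t 1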
Hi Joao,
In the meantime I have done the following:
$ ceph osd crush move ceph-osd15 rack=rack1-pdu1
moved item id -17 name 'ceph-osd15' to location {rack=rack1-pdu1} in crush map
$ ceph osd crush rm rack2-pdu3
removed item id -23 name 'rack2-pdu3' from crush map
But it does not solve
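To double-check what the map looks like after those moves, the usual inspection commands are something like:

$ ceph osd tree
$ ceph osd crush dump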
I found this article very interesting:
http://techreport.com/review/26523/the-ssd-endurance-experiment-casualties-on-the-way-to-a-petabyte
I've got Samsung 840 Pros, and while I don't think I would go with them
again, I am interested in the fact that (in this anecdotal experiment) it seemed
It doesn't currently support that. ceph-rest-api only wraps commands
that are sent to the mon cluster, whereas the "ceph daemon" operations
use the local admin socket (/var/run/ceph/*.asok) of the service.
There has been some discussion of enabling calls to admin socket
operations via the mon thou
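For comparison, the admin-socket form of that call, run locally on the OSD host (socket path per the default naming), looks like:

ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok dump_ops_in_flight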
Hi all,
I want to use ceph-rest-api to view some debug details from ceph daemons.
From a Linux shell I can get the output below:
# ceph daemon osd.0 dump_ops_in_flight | python -m json.tool
{ "num_ops": 0,
"ops": []}
This is my question:
Can I get this output from ceph-rest-api?
Until no
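For reference, what ceph-rest-api does serve today are the mon-level commands; assuming the default bind address (0.0.0.0:5000) and base URL (/api/v0.1), querying it would look roughly like:

curl http://localhost:5000/api/v0.1/health
curl http://localhost:5000/api/v0.1/osd/tree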