Re: [ceph-users] Cluster network slower than public network

2017-11-16 Thread Jake Young
On Wed, Nov 15, 2017 at 1:07 PM Ronny Aasen wrote: > On 15.11.2017 13:50, Gandalf Corvotempesta wrote: > > As 10gb switches are expensive, what would happen by using a gigabit > cluster network and a 10gb public network? > > Replication and rebalance should be slow, but what about public I/O ? >

Re: [ceph-users] Ceph-ISCSI

2017-10-11 Thread Jake Young
On Wed, Oct 11, 2017 at 8:57 AM Jason Dillaman wrote: > On Wed, Oct 11, 2017 at 6:38 AM, Jorge Pinilla López > wrote: > >> As far as I am able to understand there are 2 ways of setting iscsi for >> ceph >> >> 1- using kernel (lrbd) only able on SUSE, CentOS, fedora... >> > > The target_core_rbd

Re: [ceph-users] tunable question

2017-10-03 Thread Jake Young
On Tue, Oct 3, 2017 at 8:38 AM lists wrote: > Hi, > > What would make the decision easier: if we knew that we could easily > revert the > > "ceph osd crush tunables optimal" > once it has begun rebalancing data? > > Meaning: if we notice that impact is too high, or it will take too long, > that
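
A minimal sketch of checking and switching tunables profiles; the available profile names depend on your release, and note that reverting triggers another round of data movement rather than cancelling the first one:

    # show the currently active tunables
    ceph osd crush show-tunables

    # apply the optimal profile (starts rebalancing)
    ceph osd crush tunables optimal

    # revert to an older profile, e.g. hammer, if the impact is too high
    ceph osd crush tunables hammer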

Re: [ceph-users] What HBA to choose? To expand or not to expand?

2017-09-20 Thread Jake Young
ow how to turn on the drive identification > lights? > storcli64 /c0/e8/s1 start locate Where c is the controller id, e is the enclosure id and s is the drive slot Look for the PD List section in the output to see the enclosure id / slot id list. storcli64 /c0 show > > > >
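
A short sketch of the locate workflow, assuming the controller is /c0 and the binary is installed as storcli64:

    # turn the locate LED on for enclosure 8, slot 1
    storcli64 /c0/e8/s1 start locate

    # turn it back off when done
    storcli64 /c0/e8/s1 stop locate

    # list enclosure ids / slot ids (see the PD LIST section)
    storcli64 /c0 show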

Re: [ceph-users] What HBA to choose? To expand or not to expand?

2017-09-19 Thread Jake Young
On Tue, Sep 19, 2017 at 9:38 AM Kees Meijs wrote: > Hi Jake, > > On 19-09-17 15:14, Jake Young wrote: > > Ideally you actually want fewer disks per server and more servers. > > This has been covered extensively in this mailing list. Rule of thumb > > is that each ser

Re: [ceph-users] What HBA to choose? To expand or not to expand?

2017-09-19 Thread Jake Young
On Tue, Sep 19, 2017 at 7:34 AM Kees Meijs wrote: > Hi list, > > It's probably something to discuss over coffee in Ede tomorrow but I'll > ask anyway: what HBA is best suitable for Ceph nowadays? > > In an earlier thread I read some comments about some "dumb" HBAs running > in IT mode but still b

Re: [ceph-users] Ceph re-ip of OSD node

2017-08-30 Thread Jake Young
Hey Ben, Take a look at the osd log for another OSD whose IP you did not change. What errors does it show related to the re-IP'd OSD? Is the other OSD trying to communicate with the re-IP'd OSD's old IP address? Jake On Wed, Aug 30, 2017 at 3:55 PM Jeremy Hanmer wrote: > This is simply not tru

Re: [ceph-users] Ceph and IPv4 -> IPv6

2017-06-27 Thread Jake Young
On Tue, Jun 27, 2017 at 2:19 PM Wido den Hollander wrote: > > > Op 27 juni 2017 om 19:00 schreef george.vasilaka...@stfc.ac.uk: > > > > > > Hey Ceph folks, > > > > I was wondering what the current status/roadmap/intentions etc. are on > the possibility of providing a way of transitioning a cluste

Re: [ceph-users] CentOS7 Mounting Problem

2017-04-10 Thread Jake Young
I've had this issue as well. In my case some or most osds on each host do mount, but a few don't mount or start. (I have 9 osds on each host). My workaround is to run partprobe on the device that isn't mounted. This causes the osd to mount and start automatically. The osds then also mount on subs
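
A minimal sketch of that workaround; /dev/sdX is a placeholder for whichever disk holds the OSD that did not come up:

    # list ceph-disk managed partitions and note which OSDs are not mounted
    ceph-disk list

    # re-read the partition table; udev then activates the OSD,
    # which mounts and starts automatically
    partprobe /dev/sdX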

Re: [ceph-users] Analysing ceph performance with SSD journal, 10gbe NIC and 2 replicas -Hammer release

2017-01-07 Thread Jake Young
I use 2U servers with 9x 3.5" spinning disks in each. This has scaled well for me, in both performance and budget. I may add 3 more spinning disks to each server at a later time if I need to maximize storage, or I may add 3 SSDs for journals/cache tier if we need better performance. Another cons

Re: [ceph-users] tgt+librbd error 4

2016-12-18 Thread Jake Young
2016 at 8:37 AM Bruno Silva wrote: > But FreeNAS is based on FreeBSD. > > > > On Sun, 18 Dec 2016 at 00:40, ZHONG wrote: > > Thank you for your reply. > > On 17 Dec 2016, at 22:21, Jake Young wrote: > > FreeNAS running in KVM Linux hypervisor

Re: [ceph-users] tgt+librbd error 4

2016-12-17 Thread Jake Young
I don't have the specific crash info, but I have seen crashes with tgt when the ceph cluster was slow to respond to IO. It was things like this that pushed me to using another iSCSI to Ceph solution (FreeNAS running in KVM Linux hypervisor). Jake On Fri, Dec 16, 2016 at 9:16 PM ZHONG wrote: >

Re: [ceph-users] Looking for a definition for some undocumented variables

2016-12-12 Thread Jake Young
fault. On Mon, Dec 12, 2016 at 12:26 PM, John Spray wrote: > On Mon, Dec 12, 2016 at 5:23 PM, Jake Young wrote: > > I've seen these referenced a few times in the mailing list, can someone > > explain what they do exactly? > > > > What are the defaults for the

[ceph-users] Looking for a definition for some undocumented variables

2016-12-12 Thread Jake Young
I've seen these referenced a few times in the mailing list, can someone explain what they do exactly? What are the defaults for these values? osd recovery sleep and osd recover max single start Thanks! Jake
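
One way to see what a running OSD is actually using for these options (run on the host that owns the OSD; option names as written above):

    ceph daemon osd.0 config get osd_recovery_sleep
    ceph daemon osd.0 config get osd_recover_max_single_start

    # or dump the built-in defaults and grep for them
    ceph --show-config | grep -E 'osd_recovery_sleep|osd_recover_max_single_start'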

Re: [ceph-users] problem after reinstalling system

2016-12-08 Thread Jake Young
Hey Dan, I had the same issue that Jacek had after changing my OS and Ceph version from Ubuntu 14 - Hammer to Centos 7 - Jewel. I was also able to recover from the failure by renaming the .ldb files to .sst files. Do you know why this works? Is it just because leveldb changed the file naming st
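
A sketch of the rename, assuming a filestore OSD whose LevelDB omap store lives under current/omap (the OSD id is a placeholder):

    cd /var/lib/ceph/osd/ceph-12/current/omap

    # give every .ldb file the older .sst extension, then start the OSD again
    for f in *.ldb; do mv "$f" "${f%.ldb}.sst"; done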

Re: [ceph-users] Ceph + VMWare

2016-10-07 Thread Jake Young
Hey Patrick, I work for Cisco. We have a 200TB cluster (108 OSDs on 12 OSD Nodes) and use the cluster for both OpenStack and VMware deployments. We are using iSCSI now, but it really would be much better if VMware did support RBD natively. We present a 1-2TB Volume that is shared between 4-8 ES

Re: [ceph-users] ceph + vmware

2016-07-26 Thread Jake Young
On Thursday, July 21, 2016, Mike Christie wrote: > On 07/21/2016 11:41 AM, Mike Christie wrote: > > On 07/20/2016 02:20 PM, Jake Young wrote: > >> > >> For starters, STGT doesn't implement VAAI properly and you will need to > >> disable VAAI in ESXi.

Re: [ceph-users] Ceph + VMware + Single Thread Performance

2016-07-21 Thread Jake Young
I think the answer is that with 1 thread you can only ever write to one journal at a time. Theoretically, you would need 10 threads to be able to write to 10 nodes at the same time. Jake On Thursday, July 21, 2016, w...@globe.de wrote: > What i not really undertand is: > > Lets say the Intel P3

Re: [ceph-users] Ceph + VMware + Single Thread Performance

2016-07-21 Thread Jake Young
My workaround to your single threaded performance issue was to increase the thread count of the tgtd process (I added --nr_iothreads=128 as an argument to tgtd). This does help my workload. FWIW below are my rados bench numbers from my cluster with 1 thread: This first one is a "cold" run. This
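
For reference, the thread-count change and a single-threaded rados bench run look roughly like this (pool name is a placeholder):

    # start tgtd with a larger I/O thread pool
    tgtd --nr_iothreads=128

    # single-threaded write and sequential-read benchmarks for comparison
    rados bench -p rbd 30 write -t 1 --no-cleanup
    rados bench -p rbd 30 seq -t 1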

Re: [ceph-users] ceph + vmware

2016-07-20 Thread Jake Young
On Wednesday, July 20, 2016, Jan Schermer wrote: > > > On 20 Jul 2016, at 18:38, Mike Christie > wrote: > > > > On 07/20/2016 03:50 AM, Frédéric Nass wrote: > >> > >> Hi Mike, > >> > >> Thanks for the update on the RHCS iSCSI target. > >> > >> Will RHCS 2.1 iSCSI target be compliant with VMWare

Re: [ceph-users] ceph + vmware

2016-07-16 Thread Jake Young

Re: [ceph-users] ceph + vmware

2016-07-15 Thread Jake Young
ze. > > Any idea with that issue ?

Re: [ceph-users] 40Gb fileserver/NIC suggestions

2016-07-13 Thread Jake Young
We use all Cisco UCS servers (C240 M3 and M4s) with the PCIE VIC 1385 40G NIC. The drivers were included in Ubuntu 14.04. I've had no issues with the NICs or my network whatsoever. We have two Cisco Nexus 5624Q that the OSD servers connect to. The switches are just switching two VLANs (ceph c

Re: [ceph-users] 40Gb fileserver/NIC suggestions

2016-07-13 Thread Jake Young
My OSDs have dual 40G NICs. I typically don't use more than 1Gbps on either network. During heavy recovery activity (like if I lose a whole server), I've seen up to 12Gbps on the cluster network. For reference my cluster is 9 OSD nodes with 9x 7200RPM 2TB OSDs. They all have RAID cards with 4GB o

Re: [ceph-users] ceph + vmware

2016-07-11 Thread Jake Young
I'm using this setup with ESXi 5.1 and I get very good performance. I suspect you have other issues. Reliability is another story (see Nick's posts on tgt and HA to get an idea of the awful problems you can have), but for my test labs the risk is acceptable. One change I found helpful is to run

Re: [ceph-users] Mounting Ceph RBD image to XenServer 7 as SR

2016-06-30 Thread Jake Young
See https://www.mail-archive.com/ceph-users@lists.ceph.com/msg17112.html On Thursday, June 30, 2016, Mike Jacobacci wrote: > So after adding the ceph repo and enabling the centos-7 repo… It fails > trying to install ceph-common: > > Loaded plugins: fastestmirror > Loading mirror speeds from cach
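
A sketch of the yum repo that normally makes ceph-common installable on a CentOS 7 based XenServer; the release in the URL (jewel here) is an assumption and should match the Ceph version you need:

    # /etc/yum.repos.d/ceph.repo
    [ceph]
    name=Ceph packages
    baseurl=http://download.ceph.com/rpm-jewel/el7/x86_64/
    enabled=1
    gpgcheck=1
    gpgkey=https://download.ceph.com/keys/release.asc

    # then retry
    yum clean all && yum install ceph-common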

Re: [ceph-users] Mounting Ceph RBD image to XenServer 7 as SR

2016-06-30 Thread Jake Young
s with just an i/o error… I am looking into now. My cluster > health is OK, so I am hoping I didn’t miss a configuration or something. > > > On Jun 29, 2016, at 3:28 PM, Jake Young > wrote: > > > > On Wednesday, June 29, 2016, Mike Jacobacci > wrote: > >> Hi a

Re: [ceph-users] Mounting Ceph RBD image to XenServer 7 as SR

2016-06-29 Thread Jake Young
On Wednesday, June 29, 2016, Mike Jacobacci wrote: > Hi all, > > Is there anyone using rbd for xenserver vm storage? I have XenServer 7 > and the latest Ceph, I am looking for the the best way to mount the rbd > volume under XenServer. There is not much recent info out there I have > found exce

Re: [ceph-users] RBD with iSCSI

2015-09-10 Thread Jake Young
On Wed, Sep 9, 2015 at 8:13 AM, Daleep Bais wrote: > Hi, > > I am following steps from URL > http://www.sebastien-han.fr/blog/2014/07/07/start-with-the-rbd-support-for-tgt/ > to create a RBD pool and share t

Re: [ceph-users] Ceph 0.94 (and lower) performance on >1 hosts ??

2015-07-29 Thread Jake Young
On Wed, Jul 29, 2015 at 11:23 AM, Mark Nelson wrote: > On 07/29/2015 10:13 AM, Jake Young wrote: > >> On Tue, Jul 28, 2015 at 11:48 AM, SCHAER Frederic >> mailto:frederic.sch...@cea.fr>> wrote: >> > >> > Hi again, >> > >> > So I ha

Re: [ceph-users] Ceph 0.94 (and lower) performance on >1 hosts ??

2015-07-29 Thread Jake Young
On Tue, Jul 28, 2015 at 11:48 AM, SCHAER Frederic wrote: > > Hi again, > > So I have tried > - changing the cpus frequency : either 1.6GHZ, or 2.4GHZ on all cores > - changing the memory configuration, from "advanced ecc mode" to "performance mode", boosting the memory bandwidth from 35GB/s to 40G

Re: [ceph-users] Cisco UCS Blades as MONs? Pros cons ...?

2015-05-14 Thread Jake Young
uster size? > > Regards . Götz > > > Am 13.05.15 um 15:20 schrieb Jake Young: > > I run my mons as VMs inside of UCS blade compute nodes. > > > > Do you use the fabric interconnects or the standalone blade chassis? > > > > Jake > >

Re: [ceph-users] Cisco UCS Blades as MONs? Pros cons ...?

2015-05-13 Thread Jake Young
I run my mons as VMs inside of UCS blade compute nodes. Do you use the fabric interconnects or the standalone blade chassis? Jake On Wednesday, May 13, 2015, Götz Reinicke - IT Koordinator < goetz.reini...@filmakademie.de> wrote: > Hi Christian, > > currently we do get good discounts as an Univ

Re: [ceph-users] Using RAID Controller for OSD and JNL disks in Ceph Nodes

2015-05-04 Thread Jake Young
On Monday, May 4, 2015, Christian Balzer wrote: > On Mon, 13 Apr 2015 10:39:57 +0530 Sanjoy Dasgupta wrote: > > > Hi! > > > > This is an often discussed and clarified topic, but Reason why I am > > asking is because > > > > If We use a RAID controller with Lot of Cache (FBWC) and Configure each >

Re: [ceph-users] Cost- and Powerefficient OSD-Nodes

2015-04-28 Thread Jake Young
On Tuesday, April 28, 2015, Nick Fisk wrote: > > > > > > -Original Message- > > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com > ] On Behalf Of > > Dominik Hannen > > Sent: 28 April 2015 15:30 > > To: Jake Young > > Cc: ceph

Re: [ceph-users] Cost- and Powerefficient OSD-Nodes

2015-04-28 Thread Jake Young
On Tuesday, April 28, 2015, Dominik Hannen wrote: > Hi ceph-users, > > I am currently planning a cluster and would like some input specifically > about the storage-nodes. > > The non-osd systems will be running on more powerful system. > > Interconnect as currently planned: > 4 x 1Gbit LACP Bonds

Re: [ceph-users] Ceph on Solaris / Illumos

2015-04-17 Thread Jake Young
but it requires some time and effort to do it right. > > Cheers, > > Michal Kozanecki | Linux Administrator | E: mkozane...@evertz.com > > > Thank you for taking the time to share that, Michal! Jake > > -Original Message- > From: ceph-user

Re: [ceph-users] Ceph on Solaris / Illumos

2015-04-15 Thread Jake Young
On Wednesday, April 15, 2015, Mark Nelson wrote: > > > On 04/15/2015 08:16 AM, Jake Young wrote: > >> Has anyone compiled ceph (either osd or client) on a Solaris based OS? >> >> The thread on ZFS support for osd got me thinking about using solaris as >> an o

Re: [ceph-users] Ceph on Solaris / Illumos

2015-04-15 Thread Jake Young
On Wednesday, April 15, 2015, Alexandre Marangone wrote: > The LX branded zones might be a way to run OSDs on Illumos: > https://wiki.smartos.org/display/DOC/LX+Branded+Zones > > For fun, I tried a month or so ago, managed to have a quorum. OSDs > wouldn't start, I didn't look further as far as d

[ceph-users] Ceph on Solaris / Illumos

2015-04-15 Thread Jake Young
Has anyone compiled ceph (either osd or client) on a Solaris based OS? The thread on ZFS support for osd got me thinking about using solaris as an osd server. It would have much better ZFS performance and I wonder if the osd performance without a journal would be 2x better. A second thought I had

Re: [ceph-users] Cores/Memory/GHz recommendation for SSD based OSD servers

2015-04-02 Thread Jake Young
On Thursday, April 2, 2015, Nick Fisk wrote: > I'm probably going to get shot down for saying this...but here goes. > > As a very rough guide, think of it more as you need around 10Mhz for every > IO, whether that IO is 4k or 4MB it uses roughly the same amount of CPU, as > most of the CPU usage

Re: [ceph-users] tgt and krbd

2015-03-06 Thread Jake Young
On Friday, March 6, 2015, Steffen W Sørensen wrote: > > On 06/03/2015, at 16.50, Jake Young > > wrote: > > > > After seeing your results, I've been considering experimenting with > that. Currently, my iSCSI proxy nodes are VMs. > > > > I would lik

Re: [ceph-users] tgt and krbd

2015-03-06 Thread Jake Young
On Fri, Mar 6, 2015 at 10:18 AM, Nick Fisk wrote: > On Fri, Mar 6, 2015 at 9:04 AM, Nick Fisk wrote: > > > > > > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of > Jake Young > Sent: 06 March 2015 12:52 > To: Nick Fisk > Cc: ceph-users@li

Re: [ceph-users] tgt and krbd

2015-03-06 Thread Jake Young
re this IO (zeroing) is always 1MB writes, so I don't think this caused my write size to change. Maybe it did something to the iSCSI packets? Jake On Fri, Mar 6, 2015 at 9:04 AM, Nick Fisk wrote: > > > > > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of

Re: [ceph-users] tgt and krbd

2015-03-06 Thread Jake Young
On Thursday, March 5, 2015, Nick Fisk wrote: > Hi All, > > > > Just a heads up after a day’s experimentation. > > > > I believe tgt with its default settings has a small write cache when > exporting a kernel mapped RBD. Doing some write tests I saw 4 times the > write throughput when using tgt ai

Re: [ceph-users] Ceph, LIO, VMWARE anyone?

2015-01-23 Thread Jake Young
ver method it manipulates the ALUA states to present active/standby > paths. It’s very complicated and am close to giving up. > > > > What do you reckon accept defeat and go with a much simpler tgt and > virtual IP failover solution for time being until the Redhat patches make > the

Re: [ceph-users] Ceph, LIO, VMWARE anyone?

2015-01-23 Thread Jake Young
u shed any light on this? As tempting as having active/active is, > I’m wary about using the configuration until I understand how the locking > is working and if fringe cases involving multiple ESXi hosts writing to the > same LUN on different targets could spell disaster. > >

Re: [ceph-users] Ceph, LIO, VMWARE anyone?

2015-01-16 Thread Jake Young
si > locking/reservations…etc between the two targets? > > > > I can see the advantage to that configuration as you reduce/eliminate a > lot of the troubles I have had with resources failing over. > > > > Nick > > > > *From:* Jake Young [mailto:jak3...@gmai

Re: [ceph-users] Ceph, LIO, VMWARE anyone?

2015-01-14 Thread Jake Young
Nick, Where did you read that having more than 1 LUN per target causes stability problems? I am running 4 LUNs per target. For HA I'm running two linux iscsi target servers that map the same 4 rbd images. The two targets have the same serial numbers, T10 address, etc. I copy the primary's confi
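
Roughly, the tgt configuration for such a setup looks like the sketch below, kept identical on both gateways so the initiators see the same target either way; IQN, pool and image names are placeholders:

    # /etc/tgt/targets.conf
    <target iqn.2015-01.com.example:rbd-gateway>
        driver iscsi
        bs-type rbd
        backing-store rbd/vmware-lun1
        backing-store rbd/vmware-lun2
        backing-store rbd/vmware-lun3
        backing-store rbd/vmware-lun4
        initiator-address ALL
    </target>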

Re: [ceph-users] rbd resize (shrink) taking forever and a day

2015-01-06 Thread Jake Young
; written have taken place in the shrinking area(that means there is no > object created in these area), they can use this flag to skip the time > consuming trimming. > > > > How do you think? > > That sounds like a good solution. Like doing "undo grow image" &

Re: [ceph-users] rbd resize (shrink) taking forever and a day

2015-01-05 Thread Jake Young
> > which seem to bear no resemblance to the actual image names that the rbd > command line tools understands? > > Regards, > Edwin Peer > > On 01/04/2015 08:48 PM, Jake Young wrote: > > > > > > On Sunday, January 4, 2015, Dyweni - Ceph-Users > > <

Re: [ceph-users] rbd resize (shrink) taking forever and a day

2015-01-04 Thread Jake Young
On Sunday, January 4, 2015, Dyweni - Ceph-Users <6exbab4fy...@dyweni.com> wrote: > Hi, > > If it's the only thing in your pool, you could try deleting the pool > instead. > > I found that to be faster in my testing; I had created 500TB when I meant > to create 500GB. > > Note for the Devs: I would
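
For reference, deleting the pool is a one-liner (pool name is a placeholder, and this destroys everything in it):

    ceph osd pool delete mypool mypool --yes-i-really-really-mean-it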

Re: [ceph-users] Double-mounting of RBD

2014-12-17 Thread Jake Young
On Wednesday, December 17, 2014, Josh Durgin wrote: > On 12/17/2014 03:49 PM, Gregory Farnum wrote: > >> On Wed, Dec 17, 2014 at 2:31 PM, McNamara, Bradley >> wrote: >> >>> I have a somewhat interesting scenario. I have an RBD of 17TB formatted >>> using XFS. I would like it accessible from tw

Re: [ceph-users] tgt / rbd performance

2014-12-13 Thread Jake Young
On Friday, December 12, 2014, Mike Christie wrote: > On 12/11/2014 11:39 AM, ano nym wrote: > > > > there is a ceph pool on a hp dl360g5 with 25 sas 10k (sda-sdy) on a > > msa70 which gives me about 600 MB/s continous write speed with rados > > write bench. tgt on the server with rbd backend uses

Re: [ceph-users] running as non-root

2014-12-06 Thread Jake Young
On Saturday, December 6, 2014, Sage Weil wrote: > While we are on the subject of init systems and packaging, I would *love* > to fix things up for hammer to > > - create a ceph user and group > - add various users to ceph group (like qemu or kvm user and > apache/www-data?) Maybe a calamari u

Re: [ceph-users] Giant osd problems - loss of IO

2014-12-06 Thread Jake Young
>> net.ipv4.tcp_wmem = 4096 65536 4194304 >> net.ipv4.tcp_mem = 4194304 4194304 4194304 >> net.ipv4.tcp_low_latency = 1 >> >> >> which is what I have. Not sure if these are optimal. >> >> I can see that the values are pretty conservative compared to yours. I >> guess my values should
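
For anyone wanting to try the values quoted above, dropping them into a sysctl file keeps them across reboots (the values are the poster's, not a recommendation):

    # /etc/sysctl.d/90-ceph-net.conf
    net.ipv4.tcp_wmem = 4096 65536 4194304
    net.ipv4.tcp_mem = 4194304 4194304 4194304
    net.ipv4.tcp_low_latency = 1

    # apply without rebooting
    sysctl --system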

Re: [ceph-users] Giant osd problems - loss of IO

2014-12-04 Thread Jake Young
On Fri, Nov 14, 2014 at 4:38 PM, Andrei Mikhailovsky wrote: > > Any other suggestions why several osds are going down on Giant and causing IO to stall? This was not happening on Firefly. > > Thanks > > I had a very similar problem to yours which started after upgrading from Firefly to Giant and th

Re: [ceph-users] Admin Node Best Practices

2014-10-31 Thread Jake Young
On Friday, October 31, 2014, Massimiliano Cuttini wrote: > Any hint? > > > Il 30/10/2014 15:22, Massimiliano Cuttini ha scritto: > > Dear Ceph users, > > I just received 2 fresh new servers and i'm starting to develop my Ceph > Cluster. > The first step is: create the admin node in order to cont

Re: [ceph-users] PERC H710 raid card

2014-07-17 Thread Jake Young
There are two command line tools for Linux for LSI cards: megacli and storcli You can do pretty much everything from those tools. Jake On Thursday, July 17, 2014, Dennis Kramer (DT) wrote: > -BEGIN PGP SIGNED MESSAGE- > Hash: SHA1 > > Hi, > > What do you recommend in case of a disk fai
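
A couple of the usual status invocations, assuming the vendor binaries are installed as storcli64 and megacli:

    # storcli: controller, enclosure, VD and PD summary
    storcli64 /c0 show

    # megacli: list physical drives and their firmware state
    megacli -PDList -aALL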

Re: [ceph-users] Poor performance on all SSD cluster

2014-06-24 Thread Jake Young
On Mon, Jun 23, 2014 at 3:03 PM, Mark Nelson wrote: > Well, for random IO you often can't do much coalescing. You have to bite > the bullet and either parallelize things or reduce per-op latency. Ceph > already handles parallelism very well. You just throw more disks at the > problem and so lo

Re: [ceph-users] Moving Ceph cluster to different network segment

2014-06-13 Thread Jake Young
I recently changed IP and hostname of an osd node running dumpling and had no problems. You do need to have your ceph.conf file built correctly or your osds won't start. Make sure the new IPs and new hostname are in there before you change the IP. The crushmap showed a new bucket (host name) cont
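
A sketch of the kind of per-daemon section this relies on in a dumpling-era ceph.conf; hostname and addresses are placeholders:

    [osd.12]
        host = new-hostname
        public addr = 192.168.10.21
        cluster addr = 192.168.20.21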

Re: [ceph-users] Ceph with VMWare / XenServer

2014-05-12 Thread Jake Young
Hello Andrei, I'm trying to accomplish the same thing with VMWare. So far I'm still doing lab testing, but we've gotten as far as simulating a production workload. Forgive the lengthy reply, I happen to be sitting on an airplane. My existing solution is using NFS servers running in ESXi VMs. Ea

Re: [ceph-users] Manually mucked up pg, need help fixing

2014-05-05 Thread Jake Young
I was in a similar situation where I could see the PGs data on an osd, but there was nothing I could do to force the pg to use that osd's copy. I ended up using the rbd_restore tool to create my rbd on disk and then I reimported it into the pool. See this thread for info on rbd_restore: http://ww
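
Once rbd_restore has written the image out to a flat file, re-importing it is roughly (file, pool and image names are placeholders):

    rbd import ./recovered-image.raw rbd/recovered-image
    rbd info rbd/recovered-image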

Re: [ceph-users] Cannot create a file system on the RBD

2014-04-08 Thread Jake Young
Maybe different kernel versions between the box that can format and the box that can't. When you created the rbd image, was it format 1 or 2? Jake On Thursday, April 3, 2014, Thorvald Hallvardsson < thorvald.hallvards...@gmail.com> wrote: > Hi, > > I have found that problem is somewhere within
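
Checking the image format only takes a second (image name is a placeholder); format 2 images need a newer kernel rbd client than format 1:

    # the "format:" line shows 1 or 2
    rbd info rbd/myimage

    # creating an image with an explicit format
    rbd create rbd/newimage --size 102400 --image-format 1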

Re: [ceph-users] No more Journals ?

2014-03-14 Thread Jake Young
You should take a look at this blog post: http://ceph.com/community/ceph-performance-part-2-write-throughput-without-ssd-journals/ The test results shows that using a RAID card with a write-back cache without journal disks can perform better or equivalent to using journal disks with XFS. *As to

[ceph-users] Running a mon on a USB stick

2014-03-08 Thread Jake Young
I was planning to set up a small Ceph cluster with 5 nodes. Each node will have 12 disks and run 12 osds. I want to run 3 mons on 3 of the nodes. The servers have an internal SD card that I'll use for the OS and an internal 16GB USB port that I want to mount the mon files to. From what I understa