[ceph-users] Fwd: Preparing Ceph for CBT, disk labels by-id

2015-10-21 Thread David Burley
> …$sp and $ep hold for us? Or what may have been the author's intent?
> BTW, although cross-posted, I tried to set a reply-to for the CBT list only. We'll see how it goes. Thanks in advance.
> -az

Re: [ceph-users] How to get RBD volume to PG mapping?

2015-09-25 Thread David Burley
…limit this find to only the PGs in question, which from what you have described is just one. So figure out which OSDs are active for that PG, and run the find in that placement group's subdirectory on one of them. It should run really fast unless you have tons of tiny objects in the PG.
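
A minimal sketch of that procedure, assuming a FileStore-era on-disk layout; the PG id, OSD number, and paths below are examples, not values from the thread:

    ceph pg map 3.7f   # prints the up/acting OSD set for the PG, e.g. acting [12,5,31]
    # On one of the acting OSD hosts, list that PG's objects:
    find /var/lib/ceph/osd/ceph-12/current/3.7f_head/ -type f -name '*rbd*data*' | head
    # Each object name embeds the image's block_name_prefix (compare with
    # "rbd info <image>"), which maps the objects back to a specific RBD volume.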

Re: [ceph-users] How to get RBD volume to PG mapping?

2015-09-25 Thread David Burley
> …every object.
> Thanks!
> Megov Igor, CIO, Yuterra

Re: [ceph-users] ceph failure on sf.net?

2015-07-20 Thread David Burley

Re: [ceph-users] External XFS Filesystem Journal on OSD

2015-07-10 Thread David Burley
> In a similar direction, one could try using bcache on top of the actual spinner. Have you tried that, too?

We haven't tried bcache/flashcache/...

Re: [ceph-users] External XFS Filesystem Journal on OSD

2015-07-09 Thread David Burley
> …spinners, and it seems the xfs journaling process is eating a lot of my IO. My queues on my OSD drives frequently get into the 500 ballpark, which makes for sad VMs.

[Measured via] ceph tell bench and also via some mixed-IO fio runs on the OSD partition while the OSD it hosted was offline.
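
A hedged sketch of that kind of measurement; the OSD id, scratch-file path, and fio job parameters below are illustrative assumptions, not the exact runs from the thread:

    ceph tell osd.12 bench      # quick write benchmark from inside the (running) OSD
    # With the OSD stopped, run a mixed-IO fio job against a scratch file on its
    # XFS mount (remove the file afterwards):
    fio --name=mixed --filename=/var/lib/ceph/osd/ceph-12/fio-test --size=4G \
        --rw=randrw --rwmixread=70 --bs=4k --ioengine=libaio --iodepth=32 \
        --direct=1 --runtime=60 --time_based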

Re: [ceph-users] External XFS Filesystem Journal on OSD

2015-07-09 Thread David Burley
…to dig into deeper, and stick with the simpler configuration of just using the NVMe drives for OSD journaling and leaving the XFS journals on the partition. --David

On Thu, Jun 4, 2015 at 2:23 PM, Lars Marowsky-Bree wrote:
> On 2015-06-04T12:42:42, David Burley wrote:
>> Are there any…

Re: [ceph-users] Real world benefit from SSD Journals for a more read than write cluster

2015-07-09 Thread David Burley

Re: [ceph-users] NVME SSD for journal

2015-07-07 Thread David Burley
Further clarification: 12:1 with SATA spinners as the OSD data drives.

On Tue, Jul 7, 2015 at 9:11 AM, David Burley wrote:
> There is at least one benefit: you can go more dense. In our testing of real workloads, you can get a 12:1 OSD-to-journal-drive ratio (or even higher) using…
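
A back-of-envelope check of why a ratio in that range is plausible; the bandwidth figures are assumptions for illustration, not measurements from the thread:

    # Assume the NVMe journal device sustains ~1800 MB/s of sequential writes and
    # each SATA spinner pushes at most ~150 MB/s of journal traffic:
    echo $(( 1800 / 150 ))   # -> 12, i.e. roughly a 12:1 OSD-to-journal ratio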

Re: [ceph-users] NVME SSD for journal

2015-07-07 Thread David Burley
> …units as journals for Ceph.

[ceph-users] External XFS Filesystem Journal on OSD

2015-06-04 Thread David Burley
Are there any safety/consistency or other reasons we wouldn't want to try using an external XFS log device for our OSDs? I realize if that device fails the filesystem is pretty much lost, but beyond that?
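
For reference, the configuration being asked about looks roughly like the sketch below; the device names and log size are illustrative assumptions, not a recommendation from the thread:

    mkfs.xfs -l logdev=/dev/nvme0n1p2,size=128m /dev/sdb1
    mount -o logdev=/dev/nvme0n1p2,noatime /dev/sdb1 /var/lib/ceph/osd/ceph-12
    # The external log must also be named at mount time, and losing the log device
    # effectively loses the filesystem, as acknowledged above.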

Re: [ceph-users] apply/commit latency

2015-06-03 Thread David Burley
> …see in a lightly-loaded SSD cluster are ~2ms commit times for writes, or just a bit less. Anything over 10 is definitely wrong, although that's close to correct for an SSD-journaled hard drive cluster — probably more like 5-7.
> -Greg
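
To check these values on a live cluster, the per-OSD commit and apply latencies (in milliseconds) can be read with the command below; the exact column layout varies a bit between releases of this era:

    ceph osd perf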

Re: [ceph-users] Cost- and Powerefficient OSD-Nodes

2015-04-28 Thread David Burley
> …if you have enough of them?
> Dominik Hannen

Re: [ceph-users] running Qemu / Hypervisor AND Ceph on the same nodes

2015-03-26 Thread David Burley

Re: [ceph-users] Server Specific Pools

2015-03-19 Thread David Burley

Re: [ceph-users] pool distribution quality report script

2015-03-05 Thread David Burley
>>>> | Avg Deviation from Most Subscribed OSD: 19.7% |
>>>> +------------------------------------------------+
>>>> | OSDs in All Roles (Acting)                     |
>>>> | Expected…