> ...$sp and $ep, hold for us? Or what may have been the author's intent?
>
> BTW, although cross-posted, I tried to set a reply-to for the CBT list
> only. We'll see how it goes. Thanks in advance.
> -az
...limit this find to only the PGs in question, which from what you have
described is just one. So figure out which OSDs are active for that PG, and
run the find in the subdirectory for the placement group on one of those
OSDs. It should run really fast unless you have tons of tiny objects in the
PG.
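A minimal sketch of that approach, assuming a FileStore OSD with the default
data path (the PG id, OSD id, and object-name pattern below are placeholders):

    # Which OSDs are up/acting for the PG?
    ceph pg map 3.1f

    # On one of the acting OSDs, search only that PG's directory
    find /var/lib/ceph/osd/ceph-12/current/3.1f_head/ -name '*rb.0.1234*'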
--
David Burley
> > ...every object.
> >
> >
> > Thanks!
> >
> > Megov Igor
> > CIO, Yuterra
--
David Burley
NOC Manager, Sr. Systems Programmer/Analyst
Slashdot Media
e: da...@slashdotmedia.com
> In a similar direction, one could try using bcache on top of the actual
> spinner. Have you tried that, too?
>
>
We haven't tried bcache/flashcache/...
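For anyone who does want to try it, a rough, untested sketch of what the
bcache layering would look like (device names are placeholders; the spinner
becomes the backing device and an SSD/NVMe partition the cache):

    make-bcache -C /dev/nvme0n1p4    # format the cache (SSD/NVMe) device
    make-bcache -B /dev/sdc          # format the backing device (the spinner)

    # after registration (udev usually handles it), attach and enable writeback;
    # the cache set UUID appears under /sys/fs/bcache/
    echo <cache-set-uuid> > /sys/block/bcache0/bcache/attach
    echo writeback        > /sys/block/bcache0/bcache/cache_mode

    # the OSD filesystem then goes on /dev/bcache0 instead of the raw spinner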
--
David Burley
> ...spinners, and it seems the XFS journaling process is eating a lot of my
> I/O. My queues on my OSD drives frequently get into the 500 ballpark, which
> makes for sad VMs.
>
>
...ceph tell bench, and also via some mixed-I/O fio runs on the OSD partition
while the OSD it hosted was offline.
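Concretely, the two kinds of runs above look roughly like this (OSD id, paths,
and fio parameters are illustrative, not the exact ones used):

    # Ceph's built-in per-OSD write benchmark
    ceph tell osd.0 bench

    # Mixed random read/write fio run against a file on the (offline) OSD's partition
    fio --name=mixed --filename=/var/lib/ceph/osd/ceph-0/fio-test --size=4G \
        --rw=randrw --rwmixread=70 --bs=4k --ioengine=libaio --direct=1 \
        --iodepth=32 --runtime=60 --time_based

    # Watching queue depth on the data drive under load
    iostat -x 1 /dev/sdb    # avgqu-sz (aqu-sz in newer sysstat) is the "500 ballpark" figure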
--
David Burley
...to dig into more deeply; we'll stick with the simpler configuration of just
using the NVMe drives for OSD journaling and leave the XFS journals on the
partition.
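With the ceph-disk tooling of that era, that layout is prepared roughly like
so (device names are placeholders; the journal partition size comes from
"osd journal size" in ceph.conf):

    # Data on the spinner, journal carved out of the shared NVMe device
    ceph-disk prepare /dev/sdb /dev/nvme0n1
    ceph-disk activate /dev/sdb1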
--David
On Thu, Jun 4, 2015 at 2:23 PM, Lars Marowsky-Bree wrote:
> On 2015-06-04T12:42:42, David Burley wrote:
>
> > Are there any safety/consistency or other reasons we wouldn't want to
> > try using an external XFS log device for our OSDs?
Further clarification, 12:1 with SATA spinners as the OSD data drives.
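As a rough, back-of-the-envelope check on that ratio (the throughput figures
below are illustrative assumptions, not measurements from this thread):

    12 spinners x ~100-150 MB/s sustained write  ~=  1.2-1.8 GB/s aggregate (worst case)
    1 NVMe journal device at ~2 GB/s sequential write  >=  that aggregate

In other words, at 12:1 the journal device only saturates if every data drive
streams writes flat out at the same time, which sustained real workloads
rarely do.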
On Tue, Jul 7, 2015 at 9:11 AM, David Burley
wrote:
> There is at least one benefit, you can go more dense. In our testing of
> real workloads, you can get a 12:1 OSD to Journal drive ratio (or even
> higher) using ... units as journals for Ceph.
>
> Regards,
--
David Burley
Are there any safety/consistency or other reasons we wouldn't want to try
using an external XFS log device for our OSDs? I realize if that device
fails the filesystem is pretty much lost, but beyond that?
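For context, the configuration under discussion looks roughly like this
(device names and the log size are placeholders, not a recommendation):

    # Put the XFS log on a small NVMe partition, data on the spinner
    mkfs.xfs -l logdev=/dev/nvme0n1p5,size=128m /dev/sdb1

    # The filesystem must always be mounted with the same external log device
    mount -o logdev=/dev/nvme0n1p5,noatime /dev/sdb1 /var/lib/ceph/osd/ceph-0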
--
David Burley
> ...see in a lightly-loaded SSD cluster are ~2 ms commit times for writes,
> or just a bit less. Anything over 10 ms is definitely wrong, although
> that's close to correct for an SSD-journaled hard drive cluster; probably
> more like 5-7 ms there.)
> -Greg
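For anyone chasing similar numbers, one place to read per-OSD commit/apply
latency is shown below (osd.0 is a placeholder):

    # Cluster-wide per-OSD commit and apply latency, in ms
    ceph osd perf

    # More detailed counters from a single OSD's admin socket
    ceph daemon osd.0 perf dump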
> ...if you have enough of them?
>
> ___
> Dominik Hannen
--
David Burley
>>>> [truncated table fragment: "OSDs in All Roles (Acting)";
>>>> Avg Deviation from Most Subscribed OSD: 19.7%; "Expected ..." row cut off]