We have seen similarly poor performance with Intel S3700 and S3710 drives on LSI
SAS3008 with the CFQ scheduler on the 3.13, 3.18 and 3.19 kernels.
Switching to the noop scheduler fixed the problem for us.
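For reference, a minimal sketch of how the scheduler can be switched (the device
name is just an example, and the udev rule is only one way to make it persistent):

    # one-off change for a single device
    echo noop > /sys/block/sdb/queue/scheduler

    # persistent variant, e.g. /etc/udev/rules.d/60-io-scheduler.rules:
    # apply noop to all non-rotational sd* devices
    ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", \
        ATTR{queue/scheduler}="noop"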
On Fri, Jul 10, 2015 at 4:30 AM, Alexandre DERUMIER wrote:
> >> That’s very strange. Is nothing else using the disks?
> no, only [...]
>> [...] using the P3700. This assumes you are willing to accept the impact
>> of losing 12 OSDs when a journal croaks.
>>
>> On Tue, Jul 7, 2015 at 8:33 AM, Andrew Thrift wrote:
>>
>>> We are running NVMe Intel P3700's as journals for about 8 months now. [...]

We are running NVMe Intel P3700s as journals for about 8 months now, 1x
P3700 per 6x OSD.
So far they have been reliable.
We are using S3700, S3710 and P3700 as journals, and there is _currently_ no
real benefit of the P3700 over the SATA units as journals for Ceph.
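For context, a rough sketch of how a single P3700 can back six OSD journals with
the ceph-disk tooling of that era (device names are hypothetical, and the journal
partition size is taken from "osd journal size" in ceph.conf):

    # /dev/nvme0n1 is the P3700, /dev/sd{b..g} are the six OSD data SSDs.
    # ceph-disk carves a journal partition out of the NVMe device for each
    # OSD it prepares.
    for osd in /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg; do
        ceph-disk prepare "$osd" /dev/nvme0n1
    done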
Regards,
Andrew
On Tue, [...] wrote:
> > * /dev/sda2 ceph data
> > * /dev/sda3 ceph journal
> > * /dev/sda4 ceph data
> >
> Yup, the limitations are in the Ceph OSD code right now.
>
> However a setup like this will of course kill multiple OSDs if a single
> SSD fails, not that it matters all that [...]
Hi All,
We have a bunch of shiny new hardware we are ready to configure for an
all-SSD cluster.
I am wondering what other people are doing for their journal configuration
on all-SSD clusters?
- Separate journal partition and OSD partition on each SSD (see the sketch below)
or
- Journal on OSD
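To make the two options concrete, a rough sketch using the ceph-disk tooling of
the time (device name and paths are hypothetical, values illustrative only):

    # Option 1: separate journal and data partitions on the same SSD.
    # Given a whole device and no journal argument, ceph-disk creates
    # both a data partition and a journal partition on it.
    ceph-disk prepare /dev/sdb

    # Option 2: journal as a plain file inside the OSD data directory,
    # configured in ceph.conf:
    #   [osd]
    #   osd journal = /var/lib/ceph/osd/ceph-$id/journal
    #   osd journal size = 10240    # MB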
Thanks,
Andrew
Hi Mark,
Would you see any benefit in using an Intel P3700 NVMe drive as a journal
for, say, 6x Intel S3700 OSDs?

On Fri, Oct 3, 2014 at 6:58 AM, Mark Nelson wrote:
> On 10/02/2014 12:48 PM, Adam Boyhan wrote:
>
>> Hey everyone, loving Ceph so far!
>
> Hi!
>
>> We are looking to roll out [...]
I have recently been wondering the same thing.
Does anyone have any experience with this?

On Fri, Sep 5, 2014 at 12:18 AM, Andrei Mikhailovsky wrote:
> Hello guys,
>
> I was wondering if there is a benefit of using a journal-less btrfs file
> system on the cache pool OSDs? Would it speed up the [...]
"[...] recovery handling by preventing recovering OSDs from using up system
resources so that up and in OSDs aren't available or are otherwise slow,"
which seems to describe the slowness we are experiencing. I was wondering
what version of Ceph this behavior was resolved in?
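For reference, the settings usually pointed at for this kind of recovery-induced
slowness are the OSD recovery/backfill throttles; a ceph.conf sketch with purely
illustrative values (the option names are standard Ceph options, the numbers are
not a recommendation):

    [osd]
    osd max backfills = 1          # concurrent backfills per OSD
    osd recovery max active = 1    # concurrent recovery ops per OSD
    osd recovery op priority = 1   # deprioritise recovery vs client I/O
    osd client op priority = 63    # keep client ops at the front of the queue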
[...] It's more than enough currently, but we'd
like to improve RBD read performance.
Cheers,
On Sat, Mar 9, 2013 at 7:27 AM, Andrew Thrift <and...@networklabs.co.nz> wrote:
Mark,
I would just like to add, we too are seeing the same behavior with
QEMU/KVM/RBD. Maybe it is a common symptom of high IO with this setup.
Regards,
Andrew
On 3/8/2013 12:46 AM, Mark Nelson wrote:
On 03/07/2013 05:10 AM, Wolfgang Hennerbichler wrote:
On 03/06/2013 02:31 PM, M[...]