Hello Dimitri,
> On 23 May 2014 at 22:33, Dimitri Maziuk wrote:
>
>> On 05/23/2014 03:06 PM, Craig Lewis wrote:
>>
>> 1: ZFS or Btrfs snapshots could do this, but neither one are recommended
>> for production.
>
> Out of curiosity, what's the current beef with zfs? I know what problems are
I would think that RBD objects are like stripes for RAID or blocks for hard
drives: even if you only need to read or write 1k, the full stripe has to be
read or written.
Cheers
--
Cédric Lemarchand
> On 5 June 2014 at 22:56, Timofey Koolin wrote:
>
> Do for every read/write rbd r
Hi Sebastian,
> On 2 September 2014 at 10:41, Sebastien Han wrote:
>
> Hey,
>
> Well I ran an fio job that simulates (more or less) what Ceph is doing
> (journal writes with dsync and O_DIRECT) and the SSD gave me 29K IOPS too.
> I could do this, but for me it definitely looks like a major w
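A journal-style fio job of the kind Sebastien describes is commonly written as a job file along these lines (a sketch only: the device path is a placeholder, and `sync=1` is used here to approximate the journal's O_DSYNC writes):

```ini
[journal-test]
; approximate the Ceph journal write pattern: small sequential
; writes that bypass the page cache and sync on every write
filename=/dev/sdX   ; SSD under test -- placeholder, destroys data on it
direct=1            ; O_DIRECT, bypass the page cache
sync=1              ; sync each write, approximating O_DSYNC behaviour
rw=write            ; sequential writes, like journal appends
bs=4k
numjobs=1
iodepth=1
runtime=60
time_based
```

Run it with `fio journal-test.fio`; SSDs that handle this pattern well are the ones worth using as journal devices.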
> On 26 March 2014 at 00:30, Andrei Mikhailovsky wrote:
>
> The OSD fragmentation level of ZFS is at 8% at the moment; not sure if this
> should impact the performance by this much. I will defrag it overnight and
> check tomorrow to see if it makes a difference.
Sorry if this is a little
Hello there,
> On 20 April 2014 at 12:20, Loic Dachary wrote:
>
> Hi Sébastien,
>
> I'm available to help setup the ceph-brag machine.
Just curious ;-), could you be more specific about that?
> When would it be more convenient for you to work on this with me? The
> brag.ceph.com machine hoste
Yes, just apt-get install ceph ;-)
Cheers
--
Cédric Lemarchand
> On 25 April 2014 at 21:07, Drew Weaver wrote:
>
> You can actually just install it using the Ubuntu packages. I did it
> yesterday on Trusty.
>
> Thanks,
> -Drew
>
>
> From: cep
> On 28 April 2014 at 17:59, Timofey Koolin wrote:
>
> Now if I upgrade the monitors and the upgrade fails on the second (of three)
> monitors, the cluster will go down, because it will have:
> 1 new monitor
> 1 down monitor
> 1 old monitor
From my understanding, a migration path is planned between cep
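For reference, the scenario quoted above comes down to monitor quorum arithmetic: Ceph monitors need a strict majority of the monitor map to be up. A minimal sketch of that rule (illustrative code, not anything from Ceph itself):

```python
# Monitor quorum rule: a strict majority of all monitors in the
# monmap must be up for the cluster to keep serving requests.

def has_quorum(total_mons: int, up_mons: int) -> bool:
    # strict majority: more than half of the monmap
    return up_mons >= total_mons // 2 + 1

print(has_quorum(3, 2))  # True: two of three monitors can form quorum
print(has_quorum(3, 1))  # False: a lone monitor cannot
print(has_quorum(5, 3))  # True: three of five is a majority
```

This is why three monitors tolerate only one failure, and five tolerate two.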
ents, thanks in advance.
--
Cédric Lemarchand
> On 7 May 2014 at 22:10, Cedric Lemarchand wrote:
>
> Some more details: the I/O pattern will be around 90% write / 10% read,
> mainly sequential.
> Recent posts show that max_backfills, recovery_max_active and
> recovery_op_pri
could support such thing.
Thanks !
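For context, the recovery throttles mentioned in the quote above are usually tuned in ceph.conf; a hedged sketch (the values here are illustrative only, not recommendations, and should be tested against your workload):

```ini
[osd]
; throttle backfill/recovery so client I/O keeps priority
osd max backfills = 1
osd recovery max active = 1
osd recovery op priority = 1
```

The same options can also be changed at runtime with `ceph tell osd.* injectargs`.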
--
Cédric Lemarchand
> On 10 May 2014 at 04:30, Craig Lewis wrote:
>
> I'm still a noob too, so don't take anything I say with much weight. I was
> hoping that somebody with more experience would reply.
>
>
> I see a few pot
Thanks for your answers Craig; it seems this is a niche use case for Ceph,
judging by the few replies from the ML.
Cheers
--
Cédric Lemarchand
> On 11 May 2014 at 00:35, Craig Lewis wrote:
>
>> On 5/10/14 12:43, Cédric Lemarchand wrote:
>> Hi Craig,
>>
>> Thanks
On Mon, 2017-04-10 at 12:13 -0500, Mike Christie wrote:
>
> > LIO-TCMU+librbd-iscsi [1] [2] looks really promising and seems to be
> > the way to go. It would be great if somebody has insight about the
> > maturity of the project: is it ready for testing purposes?
> >
>
> It is not mature