On 13/05/2013 17:01, Gandalf Corvotempesta wrote:
2013/5/13 Greg :
Thanks a lot for pointing this out, it indeed makes a *huge* difference!
# dd if=/mnt/t/1 of=/dev/zero bs=4M count=100
100+0 records in
100+0 records out
419430400 bytes (419 MB) copied, 5.12768 s, 81.8 MB/s
(caches dropped before each test of course)
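For reference, the cold-read test quoted above boils down to something like this sketch, assuming an RBD-backed filesystem mounted at /mnt/t; the quoted command writes to /dev/zero, which does work as a sink on Linux, but /dev/null is the more conventional choice:

# create a 400 MB test file on the RBD-backed filesystem and flush it out
dd if=/dev/zero of=/mnt/t/1 bs=4M count=100 conv=fsync

# drop the page cache (as root) so the next read really hits the OSDs
sync
echo 3 > /proc/sys/vm/drop_caches

# cold sequential read, as in the quoted test
dd if=/mnt/t/1 of=/dev/null bs=4M count=100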
that helps. thx
CC: pi...@pioto.org; ceph-users@lists.ceph.com
From: j.michael.l...@gmail.com
Subject: Re: [ceph-users] RBD vs RADOS benchmark performance
Date: Sat, 11 May 2013 13:16:18 -0400
To: ws...@hotmail.com
Hmm, try searching the libvirt git for josh as an author; you should see the
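For anyone wanting to follow that pointer, a minimal sketch of the search being suggested, assuming a local clone of libvirt.git; the commit hash used here is the one linked a couple of messages below:

# inside a local clone of libvirt.git
git log -i --author=josh --oneline                    # commits whose author matches "josh"
git show 78290b1641e95304c862062ee0aca95395c5926c     # the commit referenced below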
The reference Mike provided is not valid for me. Does anyone else have the same
problem? --weiguo
From: j.michael.l...@gmail.com
Date: Sat, 11 May 2013 08:45:41 -0400
To: pi...@pioto.org
CC: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] RBD vs RADOS benchmark performance
I believe that this is fixed in the most recent versions of libvirt; sheepdog
and rbd were marked erroneously as unsafe.
http://libvirt.org/git/?p=libvirt.git;a=commit;h=78290b1641e95304c862062ee0aca95395c5926c
Sent from my iPad
On May 11, 2013, at 8:36 AM, Mike Kelly wrote:
(Sorry for sending this twice... Forgot to reply to the list)
Is rbd caching safe to enable when you may need to do a live migration of
the guest later on? It was my understanding that it wasn't, and that
libvirt prevented you from doing the migration if it knew about the caching
setting.
If it i
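For context, a hedged sketch of how the librbd cache under discussion is usually enabled; pool and image names are placeholders, and the libvirt commit linked above is the fix mentioned for rbd being marked unsafe:

# ceph.conf on the client: turn on the librbd cache
#   [client]
#       rbd cache = true
#
# or enable it per drive when launching qemu directly; cache=writeback
# lets guest flushes reach librbd ("rbd" / "vm-disk" are placeholder names)
qemu-system-x86_64 -m 1024 \
    -drive format=raw,file=rbd:rbd/vm-disk:rbd_cache=true,cache=writeback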
On 05/10/2013 07:21 PM, Yun Mao wrote:
Hi Mark,
Given the same hardware and an optimal configuration (I have no idea what
that means exactly, but feel free to specify), which is supposed to perform
better: kernel rbd or qemu/kvm? Thanks,
Yun
Hi Yun,
I'm in the process of actually running some tests
Hi Mark,
Given the same hardware and an optimal configuration (I have no idea what
that means exactly, but feel free to specify), which is supposed to perform
better: kernel rbd or qemu/kvm? Thanks,
Yun
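To make the question concrete, a rough sketch of the two access paths being compared; pool and image names are made up:

# kernel rbd: the host maps the image and sees it as a block device
rbd map rbd/testimage            # shows up as e.g. /dev/rbd0
mkfs.ext4 /dev/rbd0
mount /dev/rbd0 /mnt/t

# qemu/kvm: the guest disk goes through librbd in userspace,
# with no kernel mapping on the host
qemu-system-x86_64 -m 1024 \
    -drive format=raw,file=rbd:rbd/testimage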
Hello folks,
I'm in the process of testing CEPH and RBD. I have set up a small
cluster of hosts, each running a MON and an OSD with both journal and
data on the same SSD (OK, this is stupid, but it is simple to verify that the
disks are not the bottleneck for 1 client). All nodes are connected on a
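The original message is cut off here, but given the thread subject the comparison is presumably between a raw RADOS benchmark and an RBD-level test; a minimal sketch of both, with pool name, image name and sizes as placeholders:

# RADOS level: benchmark the pool directly
rados bench -p rbd 30 write --no-cleanup   # 30 s of 4 MB object writes
rados bench -p rbd 30 seq                  # sequential reads of those objects
rados -p rbd cleanup                       # remove the benchmark objects

# RBD level: create and map an image, then read it with dd as quoted above
rbd create rbd/benchimage --size 4096      # size is in MB, so a 4 GB image
rbd map rbd/benchimage
dd if=/dev/rbd0 of=/dev/null bs=4M count=100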