On 28 May 2013, at 06:50, Wolfgang Hennerbichler wrote:
> for anybody who's interested, I've packaged the latest qemu-1.4.2 (not 1.5,
> it didn't work nicely with libvirt) which includes important fixes to RBD for
> ubuntu 12.04 AMD64. If you want to save some time, I can share the packages
>
Hi,
Like most on this list, I also see the future of storage in Ceph. I
think it is a great system and overall design, and Sage, the rest of
Inktank, and the community are doing their best to make Ceph great. Being
a part-time developer myself I know how awesome new features are, and
how great
Hi Wolfgang,
Can you elaborate on the issue with 1.5 and libvirt? I wonder whether that will
impact usage with Grizzly. I did a quick compile of 1.5 with RBD support enabled,
and so far it seems to be OK for OpenStack with a few simple tests. But I
definitely want to be cautious if there is a known integration
I believe that the async_flush fix got in after 1.4.1 release. Unless someone
had backported the patch to 1.4.0, it is unlikely that 1.4.0 package would
contain the fix.
--weiguo
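(For anyone double-checking their own builds, a hedged way to confirm that a
given qemu binary was actually compiled with RBD support; the binary path below
is an assumption and may differ per distribution:)

ldd /usr/bin/qemu-system-x86_64 | grep -i rbd   # librbd should appear if RBD is enabled
qemu-img --help | grep -i rbd                   # 'rbd' should be listed among supported formats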
> From: a...@alex.org.uk
> Date: Wed, 29 May 2013 08:59:14 +0100
> To: wolfgang.hennerbich...@risc-software.at
> CC:
Hello Igor,
Thanks for getting back to me.
> > You can map it to multiple hosts, but before doing dd if=/dev/zero
> > of=/media/tmp/test you have created a file system, right?
Correct, I can't mount /dev/rbd/rbd/test-device without first creating a
file system on the device. Now, I am creating an ext4
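For single-host use, formatting and mounting the mapped device looks roughly
like the sketch below (device path taken from the thread; the mount point and
dd sizes are illustrative, and ext4 here is only safe with a single writer):

mkfs.ext4 /dev/rbd/rbd/test-device                    # format the mapped RBD once
mkdir -p /media/tmp
mount /dev/rbd/rbd/test-device /media/tmp             # mount on ONE host only for ext4
dd if=/dev/zero of=/media/tmp/test bs=1M count=100    # the write test from the thread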
On 05/29/2013 05:26 AM, Ta Ba Tuan wrote:
Hi Majordomo,
I am TuanTB (full name: Tuan Ta Ba), and I come from Vietnam.
I'm working on cloud computing.
Of course, we are using Ceph, and I'm a new Ceph member,
so I hope to join the "ceph-devel" and "ceph-users" mailing lists.
Thank you so much
Couldn't agree more. When I try to promote Ceph to internal stakeholders, they
always complain about the stability of Ceph; especially when they evaluate Ceph
under high enough pressure, it cannot stay healthy during the test.
Sent from my iPhone
On 2013-5-29, at 19:13, "Wolfgang Hennerbichler" wrote:
> H
Hi,
Can I assume I am safe without this patch if I don't use any RBD cache?
Sent from my iPhone
On 2013-5-29, at 16:00, "Alex Bligh" wrote:
>
> On 28 May 2013, at 06:50, Wolfgang Hennerbichler wrote:
>
>> for anybody who's interested, I've packaged the latest qemu-1.4.2 (not 1.5,
>> it didn't work nicely with libvirt) which includes important fixes to RBD for
>> ubuntu 12.04 AMD64.
We are running Ubuntu 12.04 and Folsom. Compiling qemu 1.5 only caused
random complaints about 'qemu query-commands not found' or something like
that on the libvirt end. Upgrading libvirt to 1.0.5 fixed it. But that had
some problems with attaching RBD disks:
could not open disk
image rbd:vols/volume-foo:id
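For comparison, a minimal sketch of attaching such an RBD volume through
libvirt; the pool/image name is taken from the error above, while the monitor
address, auth user, domain name, and secret UUID are placeholders (the secret
is assumed to have been defined in libvirt beforehand):

cat > rbd-disk.xml <<'EOF'
<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <source protocol='rbd' name='vols/volume-foo'>
    <host name='192.168.0.1' port='6789'/>
  </source>
  <auth username='cinder'>
    <secret type='ceph' uuid='00000000-0000-0000-0000-000000000000'/>
  </auth>
  <target dev='vdb' bus='virtio'/>
</disk>
EOF
virsh attach-device my-domain rbd-disk.xml --persistent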
Instead of using ext4 for the file system, you need to use a clustered file
system on the RBD device.
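A hedged sketch of what that looks like with OCFS2, assuming the o2cb cluster
stack is already configured and running on every host that maps the image
(device path from the thread, label is illustrative):

mkfs.ocfs2 -L shared-rbd /dev/rbd/rbd/test-device   # format once, on one host only
mount -t ocfs2 /dev/rbd/rbd/test-device /media/tmp  # then mount on each host that maps it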
From: ceph-users-boun...@lists.ceph.com
[mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Jon
Sent: Wednesday, May 29, 2013 7:55 AM
To: Igor Laskovy
Cc: ceph-users
Subject: Re: [ceph-users
On 29.05.2013 18:18, Erdem Agaoglu wrote:
> We are running Ubuntu 12.04 and Folsom. Compiling qemu 1.5 only caused random
> complaints about 'qemu query-commands not found' or something like that on the
> libvirt end. Upgrading libvirt to 1.0.5 fixed it. But that had some problems
> with attaching RBD disks:
Hi,
On 29.05.2013 16:23, w sun wrote:
> I believe that the async_flush fix got in after 1.4.1 release. Unless someone
> had backported the patch to 1.4.0, it is unlikely that 1.4.0 package would
> contain the fix.
>
yes, qemu 1.4.2 has the AIO_FLUSH patch included.
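For anyone building from source rather than using packages, a rough sketch of
enabling RBD at build time (assumes the librbd/librados development headers are
installed; the target list is illustrative):

./configure --enable-rbd --target-list=x86_64-softmmu
make -j"$(nproc)"
./x86_64-softmmu/qemu-system-x86_64 --version   # confirm the freshly built binary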
--
Kind regards,
Hi Jon,
On 29.05.2013 03:24, Jon wrote:
> Hello,
>
> I would like to mount a single RBD on multiple hosts to be able to share the
> block device.
> Is this possible? I understand that it's not possible to share data between
> the
> different interfaces, e.g. CephFS and RBDs, but I don't see
Hello Bradley,
Please excuse my ignorance; I am new to Ceph, and what I thought was a good
understanding of file systems has clearly been shown to be inadequate.
Maybe I'm asking the question wrong because I keep getting the same
answer.
I guess I don't understand what a clustered filesystem is.
Awesome, thanks Florian!
I think this is exactly the information I needed.
Best Regards,
Jon A
On May 29, 2013 12:17 PM, "Smart Weblications GmbH - Florian Wiessner" <
f.wiess...@smart-weblications.de> wrote:
> Hi Jon,
>
> On 29.05.2013 03:24, Jon wrote:
> > Hello,
> >
> > I would like to mount a single RBD on multiple hosts to be able to share
> > the block device.
Hi,
> Maybe I'm asking the question wrong because I keep getting the same
> answer.
I think it would be wise if you take a step back and explain what you
want to accomplish.
If you want to mount a file system on multiple hosts simultaneously, you
need a distributed file system such as for exa
Hi Jon,
it really sounds like you should be using CephFS for this use case instead
of RBD. RBD presents as a local block device, similar to iSCSI.
If you wish to access that block device from multiple hosts, you have to
run something on it that is aware of those multiple hosts or you'll
ce
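For reference, mounting CephFS with the kernel client on each host is roughly
as follows (the monitor address and secret file path are placeholders):

mkdir -p /mnt/cephfs
mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret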
Jon;
For all intents and purposes, an RBD device is treated by the system/OS as a
physical disk, albeit one attached over the network, with the data stored across
multiple servers (the Ceph cluster). Once the RBD device is created, one needs
to format the device using any one of a number of file systems (ext4,
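As a rough sketch of that create/map/format flow (the image name and size
below are illustrative):

rbd create test-device --size 10240        # 10 GB image in the default 'rbd' pool
rbd map test-device                        # appears as /dev/rbd/rbd/test-device
mkfs.ext4 /dev/rbd/rbd/test-device         # format it like any local disk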
Hi,
I'm testing cephfs to serve video files.
Here is a screenshot from atop:
http://img835.imageshack.us/img835/7356/cephbandwidth.png
The server has 4 cores; even so, only 2 kworkers are working hard, and the
others do nothing.
On the webserver the concurrent connections are ~1000.
Is this a ceph or ker
We are currently testing Ceph with the OpenStack Grizzly release and are
looking for some insight on live migration [1]. Based on the documentation,
there are two options for shared storage used for Nova instances
(/var/lib/nova/instances): NFS and the OpenStack Gluster Connector.
Do you know if anyone i
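For the NFS option, the usual pattern is to mount one shared export at the
instances path on every compute node; a hypothetical /etc/fstab line (the
server name and export path are placeholders):

nfs-server:/srv/nova-instances  /var/lib/nova/instances  nfs  defaults,_netdev  0 0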
Hi Joao,
Thanks for replying. I hope I can contribute my knowledge to Ceph.
For me, Ceph is very nice!!
Thank you!
--TuanTB
On 05/29/2013 10:17 PM, Joao Eduardo Luis wrote:
On 05/29/2013 05:26 AM, Ta Ba Tuan wrote:
Hi Majordomo,
I am TuanTB (full name: Tuan Ta Ba), and I come from Vietnam.
subscribe ceph-users
Hi Greg,
> Oh, not the OSD stuff, just the CephFS stuff that goes on top. Look at
> http://www.mail-archive.com/ceph-users@lists.ceph.com/msg00029.html
> Although if you were re-creating pools and things, I think that would
> explain the crash you're seeing.
> -Greg
>
I was thinking about that
Completely agree as well. I'm very keen to see widespread adoption of Ceph, but
battling against the major vendors is a massive challenge not helped by even a
small amount of instability.
Douglas Youd
Direct +61 8 9488 9571
-Original Message-
From: ceph-users-boun...@lists.ceph.com
[