On 05/25/2010 08:12 PM, Sage Weil wrote:
On Tue, 25 May 2010, Avi Kivity wrote:
What's the reason for not having these drivers upstream? Do we gain
anything by hiding them from our users and requiring them to install the
drivers separately from somewhere else?
Six months.
FWIW, we (Ceph) aren't complaining about
On 05/25/2010 05:02 PM, Anthony Liguori wrote:
On 05/25/2010 08:57 AM, Avi Kivity wrote:
On 05/25/2010 04:54 PM, Anthony Liguori wrote:
On 05/25/2010 08:36 AM, Avi Kivity wrote:
We'd need a kernel-level generic snapshot API for this eventually.
or (2) implement BUSE to complement FUSE and CUSE to enable proper
userspace block devices.
At Tue, 25 May 2010 10:12:53 -0700 (PDT),
Sage Weil wrote:
>
> On Tue, 25 May 2010, Avi Kivity wrote:
> > > What's the reason for not having these drivers upstream? Do we gain
> > > anything by hiding them from our users and requiring them to install the
> > > drivers separately from somewhere else?
On Mon, May 24, 2010 at 2:17 AM, Yehuda Sadeh Weinraub
wrote:
> On Sun, May 23, 2010 at 12:59 AM, Blue Swirl wrote:
>> On Thu, May 20, 2010 at 11:02 PM, Yehuda Sadeh Weinraub
>> wrote:
>>> On Thu, May 20, 2010 at 1:31 PM, Blue Swirl wrote:
On Wed, May 19, 2010 at 7:22 PM, Christian Brunner
On Tue, 25 May 2010, Avi Kivity wrote:
> > What's the reason for not having these drivers upstream? Do we gain
> > anything by hiding them from our users and requiring them to install the
> > drivers separately from somewhere else?
> >
>
> Six months.
FWIW, we (Ceph) aren't complaining about
On 05/25/2010 07:21 PM, Anthony Liguori wrote:
On 05/25/2010 11:16 AM, Avi Kivity wrote:
On 05/25/2010 06:01 PM, Anthony Liguori wrote:
On 05/25/2010 10:00 AM, Avi Kivity wrote:
The latter. Why is it less important? If you don't inherit the
memory, you can't access it.
You can also pass /dev/shm fd's via SCM_RIGHTS to establish shared
memory segments dynamically.
On 05/25/2010 05:01 PM, Kevin Wolf wrote:
The current situation is that those block format drivers only exist in
qemu.git or as patches. Surely that's even more unhappiness.
The difference is that in the current situation these drivers will be
part of the next qemu release, so the patch
On 05/25/2010 11:16 AM, Avi Kivity wrote:
On 05/25/2010 06:01 PM, Anthony Liguori wrote:
On 05/25/2010 10:00 AM, Avi Kivity wrote:
The latter. Why is it less important? If you don't inherit the
memory, you can't access it.
You can also pass /dev/shm fd's via SCM_RIGHTS to establish shared
memory segments dynamically.
On 05/25/2010 06:01 PM, Anthony Liguori wrote:
On 05/25/2010 10:00 AM, Avi Kivity wrote:
The latter. Why is it less important? If you don't inherit the
memory, you can't access it.
You can also pass /dev/shm fd's via SCM_RIGHTS to establish shared
memory segments dynamically.
Doesn't work for anonymous memory.
On 05/25/2010 05:09 PM, Kevin Wolf wrote:
The first part of your argument may be true, but the second isn't. No
user can run upstream qemu.git. It's not tested or supported, and has
no backwards compatibility guarantees.
The second part was basically meant to say "developers don't count"
On 05/25/2010 05:03 PM, Anthony Liguori wrote:
On 05/25/2010 08:55 AM, Avi Kivity wrote:
On 05/25/2010 04:53 PM, Kevin Wolf wrote:
I'm still not convinced that we need either. I share Christoph's
concern
that we would make our life harder for almost no gain. It's probably a
very small group
On 05/25/2010 05:05 PM, Anthony Liguori wrote:
On 05/25/2010 09:01 AM, Avi Kivity wrote:
On 05/25/2010 04:55 PM, Anthony Liguori wrote:
On 05/25/2010 08:38 AM, Avi Kivity wrote:
On 05/25/2010 04:35 PM, Anthony Liguori wrote:
On 05/25/2010 08:31 AM, Avi Kivity wrote:
A protocol based mechanism
On 05/25/2010 10:00 AM, Avi Kivity wrote:
The latter. Why is it less important? If you don't inherit the
memory, you can't access it.
You can also pass /dev/shm fd's via SCM_RIGHTS to establish shared
memory segments dynamically.
Doesn't work for anonymous memory.
What's wrong with /dev/
At Mon, 24 May 2010 14:16:32 -0500,
Anthony Liguori wrote:
>
> On 05/24/2010 06:56 AM, Avi Kivity wrote:
> > On 05/24/2010 02:42 PM, MORITA Kazutaka wrote:
> >>
> >>> The server would be local and talk over a unix domain socket, perhaps
> >>> anonymous.
> >>>
> >>> nbd has other issues though, such as requiring a copy and no support
> >>> for metadata operations such as snapshot and file size extension.
Am 25.05.2010 15:25, schrieb Avi Kivity:
> On 05/25/2010 04:17 PM, Anthony Liguori wrote:
>> On 05/25/2010 04:14 AM, Avi Kivity wrote:
>>> On 05/24/2010 10:38 PM, Anthony Liguori wrote:
> - Building a plugin API seems a bit simpler to me, although I'm not
> sure if I'd get the idea correctly:
>
On 05/25/2010 04:54 PM, Anthony Liguori wrote:
On 05/25/2010 08:36 AM, Avi Kivity wrote:
We'd need a kernel-level generic snapshot API for this eventually.
or (2) implement BUSE to complement FUSE and CUSE to enable proper
userspace block devices.
Likely slow due to lots of copying. Also needs a snapshot API.
On 05/25/2010 08:25 AM, Avi Kivity wrote:
On 05/25/2010 04:17 PM, Anthony Liguori wrote:
On 05/25/2010 04:14 AM, Avi Kivity wrote:
On 05/24/2010 10:38 PM, Anthony Liguori wrote:
- Building a plugin API seems a bit simpler to me, although I'm not
sure if I'd get the
idea correctly:
The block layer has already some kind of api (.bdrv_file_open,
.bdrv_read).
On 05/25/2010 09:01 AM, Avi Kivity wrote:
On 05/25/2010 04:55 PM, Anthony Liguori wrote:
On 05/25/2010 08:38 AM, Avi Kivity wrote:
On 05/25/2010 04:35 PM, Anthony Liguori wrote:
On 05/25/2010 08:31 AM, Avi Kivity wrote:
A protocol based mechanism has the advantage of being more robust
in the
On 05/25/2010 08:38 AM, Avi Kivity wrote:
On 05/25/2010 04:35 PM, Anthony Liguori wrote:
On 05/25/2010 08:31 AM, Avi Kivity wrote:
A protocol based mechanism has the advantage of being more robust
in the face of poorly written block backends so if it's possible to
make it perform as well as a plugin, it's a preferable approach.
On 05/25/2010 08:57 AM, Avi Kivity wrote:
On 05/25/2010 04:54 PM, Anthony Liguori wrote:
On 05/25/2010 08:36 AM, Avi Kivity wrote:
We'd need a kernel-level generic snapshot API for this eventually.
or (2) implement BUSE to complement FUSE and CUSE to enable proper
userspace block devices.
On 05/25/2010 04:55 PM, Anthony Liguori wrote:
On 05/25/2010 08:38 AM, Avi Kivity wrote:
On 05/25/2010 04:35 PM, Anthony Liguori wrote:
On 05/25/2010 08:31 AM, Avi Kivity wrote:
A protocol based mechanism has the advantage of being more robust
in the face of poorly written block backends so if it's possible to
make it perform as well as a plugin, it's a preferable approach.
Am 25.05.2010 15:55, schrieb Avi Kivity:
> On 05/25/2010 04:53 PM, Kevin Wolf wrote:
>>
>> I'm still not convinced that we need either. I share Christoph's concern
>> that we would make our life harder for almost no gain. It's probably a
>> very small group of users (if it exists at all) that wants
On 05/25/2010 08:36 AM, Avi Kivity wrote:
We'd need a kernel-level generic snapshot API for this eventually.
or (2) implement BUSE to complement FUSE and CUSE to enable proper
userspace block devices.
Likely slow due to lots of copying. Also needs a snapshot API.
The kernel could use splice().
On 05/25/2010 08:55 AM, Avi Kivity wrote:
On 05/25/2010 04:53 PM, Kevin Wolf wrote:
I'm still not convinced that we need either. I share Christoph's concern
that we would make our life harder for almost no gain. It's probably a
very small group of users (if it exists at all) that wants to add new
block drivers themselves.
On 05/25/2010 04:53 PM, Kevin Wolf wrote:
I'm still not convinced that we need either. I share Christoph's concern
that we would make our life harder for almost no gain. It's probably a
very small group of users (if it exists at all) that wants to add new
block drivers themselves, but at the sam
Am 25.05.2010 15:25, schrieb Anthony Liguori:
> On 05/25/2010 06:25 AM, Avi Kivity wrote:
>> On 05/25/2010 02:02 PM, Kevin Wolf wrote:
>>>
> So could we not standardize a protocol for this that both sheepdog and
> ceph could implement?
The protocol already exists, nbd. It doesn't support snapshotting etc.
but we could extend it.
On 05/25/2010 04:25 PM, Anthony Liguori wrote:
Currently if someone wants to add a new block format, they have to
upstream it and wait for a new qemu to be released. With a plugin
API, they can add a new block format to an existing, supported qemu.
Whether we have a plugin or protocol based
On 05/25/2010 04:35 PM, Anthony Liguori wrote:
On 05/25/2010 08:31 AM, Avi Kivity wrote:
A protocol based mechanism has the advantage of being more robust in
the face of poorly written block backends so if it's possible to
make it perform as well as a plugin, it's a preferable approach.
May be hard due to difficulty of exposing guest memory.
On 05/25/2010 08:31 AM, Avi Kivity wrote:
A protocol based mechanism has the advantage of being more robust in
the face of poorly written block backends so if it's possible to make
it perform as well as a plugin, it's a preferable approach.
May be hard due to difficulty of exposing guest memory.
On 05/25/2010 04:29 PM, Anthony Liguori wrote:
The current situation is that those block format drivers only exist
in qemu.git or as patches. Surely that's even more unhappiness.
Confusion could be mitigated:
$ qemu -module my-fancy-block-format-driver.so
my-fancy-block-format-driver.so d
On 05/25/2010 04:17 PM, Anthony Liguori wrote:
On 05/25/2010 04:14 AM, Avi Kivity wrote:
On 05/24/2010 10:38 PM, Anthony Liguori wrote:
- Building a plugin API seems a bit simpler to me, although I'm not
sure if I'd get the
idea correctly:
The block layer has already some kind of api (.bdrv_file_open,
.bdrv_read).
On 05/25/2010 06:25 AM, Avi Kivity wrote:
On 05/25/2010 02:02 PM, Kevin Wolf wrote:
So could we not standardize a protocol for this that both sheepdog and
ceph could implement?
The protocol already exists, nbd. It doesn't support snapshotting etc.
but we could extend it.
But IMO what's needed is a plugin API for the block layer.
On 05/25/2010 04:14 AM, Avi Kivity wrote:
On 05/24/2010 10:38 PM, Anthony Liguori wrote:
- Building a plugin API seems a bit simpler to me, although I'm not
sure if I'd get the
idea correctly:
The block layer has already some kind of api (.bdrv_file_open,
.bdrv_read). We
could simply
On 05/25/2010 03:03 PM, Christoph Hellwig wrote:
On Tue, May 25, 2010 at 02:25:53PM +0300, Avi Kivity wrote:
Currently if someone wants to add a new block format, they have to
upstream it and wait for a new qemu to be released. With a plugin API,
they can add a new block format to an existing, supported qemu.
On Tue, May 25, 2010 at 02:25:53PM +0300, Avi Kivity wrote:
> Currently if someone wants to add a new block format, they have to
> upstream it and wait for a new qemu to be released. With a plugin API,
> they can add a new block format to an existing, supported qemu.
So? Unless we want a sta
On 05/25/2010 02:02 PM, Kevin Wolf wrote:
So could we not standardize a protocol for this that both sheepdog and
ceph could implement?
The protocol already exists, nbd. It doesn't support snapshotting etc.
but we could extend it.
But IMO what's needed is a plugin API for the block layer.
Am 23.05.2010 14:01, schrieb Avi Kivity:
> On 05/21/2010 12:29 AM, Anthony Liguori wrote:
>>
>> I'd be more interested in enabling people to build these types of
>> storage systems without touching qemu.
>>
>> Both sheepdog and ceph ultimately transmit I/O over a socket to a
>> central daemon, ri
On 05/24/2010 10:38 PM, Anthony Liguori wrote:
- Building a plugin API seems a bit simpler to me, although I'm not
sure if I'd get the
idea correctly:
The block layer has already some kind of api (.bdrv_file_open,
.bdrv_read). We
could simply compile the block-drivers as shared objects
On 05/24/2010 10:16 PM, Anthony Liguori wrote:
On 05/24/2010 06:56 AM, Avi Kivity wrote:
On 05/24/2010 02:42 PM, MORITA Kazutaka wrote:
The server would be local and talk over a unix domain socket, perhaps
anonymous.
nbd has other issues though, such as requiring a copy and no
support for metadata operations such as snapshot and file size extension.
On 05/24/2010 10:19 PM, Anthony Liguori wrote:
On 05/24/2010 06:03 AM, Avi Kivity wrote:
On 05/24/2010 11:27 AM, Stefan Hajnoczi wrote:
On Sun, May 23, 2010 at 1:01 PM, Avi Kivity wrote:
On 05/21/2010 12:29 AM, Anthony Liguori wrote:
I'd be more interested in enabling people to build these types of
storage systems without touching qemu.
On 05/24/2010 02:07 PM, Christian Brunner wrote:
2010/5/24 MORITA Kazutaka:
However, I don't think nbd would be a good protocol. My preference
would be for a plugin API, or for a new local protocol that uses
splice() to avoid copies.
Both would be okay for Sheepdog. I want to take a suitable approach
for qemu.
On 05/24/2010 06:03 AM, Avi Kivity wrote:
On 05/24/2010 11:27 AM, Stefan Hajnoczi wrote:
On Sun, May 23, 2010 at 1:01 PM, Avi Kivity wrote:
On 05/21/2010 12:29 AM, Anthony Liguori wrote:
I'd be more interested in enabling people to build these types of
storage
systems without touching qemu.
On 05/24/2010 06:56 AM, Avi Kivity wrote:
On 05/24/2010 02:42 PM, MORITA Kazutaka wrote:
The server would be local and talk over a unix domain socket, perhaps
anonymous.
nbd has other issues though, such as requiring a copy and no support
for
metadata operations such as snapshot and file size extension.
2010/5/24 MORITA Kazutaka :
>> However, I don't think nbd would be a good protocol. My preference
>> would be for a plugin API, or for a new local protocol that uses
>> splice() to avoid copies.
>>
>
> Both would be okay for Sheepdog. I want to take a suitable approach
> for qemu.
I think both
At Mon, 24 May 2010 14:56:29 +0300,
Avi Kivity wrote:
>
> On 05/24/2010 02:42 PM, MORITA Kazutaka wrote:
> >
> >> The server would be local and talk over a unix domain socket, perhaps
> >> anonymous.
> >>
> >> nbd has other issues though, such as requiring a copy and no support for
> > >> metadata operations such as snapshot and file size extension.
On Mon, 24 May 2010 14:56:29 +0300 Avi Kivity wrote:
> On 05/24/2010 02:42 PM, MORITA Kazutaka wrote:
> >
> >> The server would be local and talk over a unix domain socket, perhaps
> >> anonymous.
> >>
> >> nbd has other issues though, such as requiring a copy and no support for
> >> metadata operations such as snapshot and file size extension.
On 05/24/2010 02:42 PM, MORITA Kazutaka wrote:
The server would be local and talk over a unix domain socket, perhaps
anonymous.
nbd has other issues though, such as requiring a copy and no support for
metadata operations such as snapshot and file size extension.
Sorry, my explanation w
At Mon, 24 May 2010 14:05:58 +0300,
Avi Kivity wrote:
>
> On 05/24/2010 10:12 AM, MORITA Kazutaka wrote:
> > At Sun, 23 May 2010 15:01:59 +0300,
> > Avi Kivity wrote:
> >
> >> On 05/21/2010 12:29 AM, Anthony Liguori wrote:
> >>
> >>> I'd be more interested in enabling people to build these types of
> >>> storage systems without touching qemu.
On 05/24/2010 10:12 AM, MORITA Kazutaka wrote:
At Sun, 23 May 2010 15:01:59 +0300,
Avi Kivity wrote:
On 05/21/2010 12:29 AM, Anthony Liguori wrote:
I'd be more interested in enabling people to build these types of
storage systems without touching qemu.
Both sheepdog and ceph ultimately transmit I/O over a socket to a
central daemon, right?
On 05/24/2010 11:27 AM, Stefan Hajnoczi wrote:
On Sun, May 23, 2010 at 1:01 PM, Avi Kivity wrote:
On 05/21/2010 12:29 AM, Anthony Liguori wrote:
I'd be more interested in enabling people to build these types of storage
systems without touching qemu.
Both sheepdog and ceph ultimately transmit I/O over a socket to a
central daemon, right?
On Sun, May 23, 2010 at 1:01 PM, Avi Kivity wrote:
> On 05/21/2010 12:29 AM, Anthony Liguori wrote:
>>
>> I'd be more interested in enabling people to build these types of storage
>> systems without touching qemu.
>>
>> Both sheepdog and ceph ultimately transmit I/O over a socket to a central
>> daemon, right?
At Sun, 23 May 2010 15:01:59 +0300,
Avi Kivity wrote:
>
> On 05/21/2010 12:29 AM, Anthony Liguori wrote:
> >
> > I'd be more interested in enabling people to build these types of
> > storage systems without touching qemu.
> >
> > Both sheepdog and ceph ultimately transmit I/O over a socket to a
> > central daemon, right?
On Sun, May 23, 2010 at 12:59 AM, Blue Swirl wrote:
> On Thu, May 20, 2010 at 11:02 PM, Yehuda Sadeh Weinraub
> wrote:
>> On Thu, May 20, 2010 at 1:31 PM, Blue Swirl wrote:
>>> On Wed, May 19, 2010 at 7:22 PM, Christian Brunner wrote:
The attached patch is a block driver for the distributed file system
Ceph (http://ceph.newdream.net/).
On 05/21/2010 12:29 AM, Anthony Liguori wrote:
I'd be more interested in enabling people to build these types of
storage systems without touching qemu.
Both sheepdog and ceph ultimately transmit I/O over a socket to a
central daemon, right?
That incurs an extra copy.
So could we not standardize a protocol for this that both sheepdog and
ceph could implement?
On Thu, May 20, 2010 at 11:02 PM, Yehuda Sadeh Weinraub
wrote:
> On Thu, May 20, 2010 at 1:31 PM, Blue Swirl wrote:
>> On Wed, May 19, 2010 at 7:22 PM, Christian Brunner wrote:
>>> The attached patch is a block driver for the distributed file system
>>> Ceph (http://ceph.newdream.net/). This driver uses librados (which
>>> is part of the Ceph server) for direct access to the Ceph object store.
At Fri, 21 May 2010 06:28:42 +0100,
Stefan Hajnoczi wrote:
>
> On Thu, May 20, 2010 at 11:16 PM, Christian Brunner wrote:
> > 2010/5/20 Anthony Liguori :
> >> Both sheepdog and ceph ultimately transmit I/O over a socket to a central
> >> daemon, right? So could we not standardize a protocol for this that
> >> both sheepdog and ceph could implement?
At Fri, 21 May 2010 00:16:46 +0200,
Christian Brunner wrote:
>
> 2010/5/20 Anthony Liguori :
> >> With new approaches like Sheepdog or Ceph, things are getting a lot
> >> cheaper and you can scale your system without disrupting your service.
> >> The concepts are quite similar to what Amazon is doing in their EC2
> >> environment.
On Thu, May 20, 2010 at 11:16 PM, Christian Brunner wrote:
> 2010/5/20 Anthony Liguori :
>> Both sheepdog and ceph ultimately transmit I/O over a socket to a central
>> daemon, right? So could we not standardize a protocol for this that both
>> sheepdog and ceph could implement?
>
> There is no c
On Thu, May 20, 2010 at 1:31 PM, Blue Swirl wrote:
> On Wed, May 19, 2010 at 7:22 PM, Christian Brunner wrote:
>> The attached patch is a block driver for the distributed file system
>> Ceph (http://ceph.newdream.net/). This driver uses librados (which
>> is part of the Ceph server) for direct access to the Ceph object
>> store and is running entirely in userspace.
2010/5/20 Anthony Liguori :
>> With new approaches like Sheepdog or Ceph, things are getting a lot
>> cheaper and you can scale your system without disrupting your service.
>> The concepts are quite similar to what Amazon is doing in their EC2
>> environment, but they certainly won't publish it as
On 05/20/2010 04:18 PM, Christian Brunner wrote:
Thanks for your comments. I'll send an updated patch in a few days.
Having a central storage system is quite essential in larger hosting
environments, it enables you to move your guest systems from one node
to another easily (live-migration or dyn
2010/5/20 Blue Swirl :
> On Wed, May 19, 2010 at 7:22 PM, Christian Brunner wrote:
>> The attached patch is a block driver for the distributed file system
>> Ceph (http://ceph.newdream.net/). This driver uses librados (which
>> is part of the Ceph server) for direct access to the Ceph object
>> store and is running entirely in userspace.
On Wed, May 19, 2010 at 7:22 PM, Christian Brunner wrote:
> The attached patch is a block driver for the distributed file system
> Ceph (http://ceph.newdream.net/). This driver uses librados (which
> is part of the Ceph server) for direct access to the Ceph object
> store and is running entirely in userspace. Therefore it is
> called "rbd" - rados block device.
The attached patch is a block driver for the distributed file system
Ceph (http://ceph.newdream.net/). This driver uses librados (which
is part of the Ceph server) for direct access to the Ceph object
store and is running entirely in userspace. Therefore it is
called "rbd" - rados block device.
To