On 15/09/2016 17:23, Alex Bligh wrote:
> Paolo,
>
>> On 15 Sep 2016, at 15:07, Paolo Bonzini wrote:
>>
>> I don't think QEMU forbids multiple clients to a single server, and
>> guarantees consistency as long as there is no overlap between writes and
>> reads. These are the same guarantees you have for multiple commands on a
>> single connection.
On 09/15/2016 11:27 AM, Wouter Verhelst wrote:
> On Thu, Sep 15, 2016 at 05:08:21PM +0100, Alex Bligh wrote:
>> Wouter,
>>
>>> The server can always refuse to allow multiple connections.
>>
>> Sure, but it would be neater to warn the client of that at negotiation
>> stage (it would only be one flag, e.g. 'multiple connections unsafe').
Wouter,
> On 15 Sep 2016, at 17:27, Wouter Verhelst wrote:
>
> On Thu, Sep 15, 2016 at 05:08:21PM +0100, Alex Bligh wrote:
>> Wouter,
>>
>>> The server can always refuse to allow multiple connections.
>>
>> Sure, but it would be neater to warn the client of that at negotiation
>> stage (it would only be one flag, e.g. 'multiple connections unsafe').
On Thu, Sep 15, 2016 at 05:08:21PM +0100, Alex Bligh wrote:
> Wouter,
>
> > The server can always refuse to allow multiple connections.
>
> Sure, but it would be neater to warn the client of that at negotiation
> stage (it would only be one flag, e.g. 'multiple connections
> unsafe').
I suppose
Wouter,
> The server can always refuse to allow multiple connections.
Sure, but it would be neater to warn the client of that
at negotiation stage (it would only be one flag, e.g.
'multiple connections unsafe'). That way the connection
won't fail with a cryptic EBUSY or whatever, but will
just ne
Eric,
> I doubt that qemu-nbd would ever want to support the situation with more
> than one client connection writing to the same image at the same time;
> the implications of sorting out data consistency between multiple
> writers is rather complex and not worth coding into qemu. So I think
> qe
Paolo,
> On 15 Sep 2016, at 15:07, Paolo Bonzini wrote:
>
> I don't think QEMU forbids multiple clients to a single server, and
> guarantees consistency as long as there is no overlap between writes and
> reads. These are the same guarantees you have for multiple commands on
> a single connection.
Josef,
> On 15 Sep 2016, at 14:57, Josef Bacik wrote:
>
> This isn't an NBD problem, this is an application problem. The application
> must wait for all writes it cares about _before_ issuing a flush. This is
> the same for normal storage as it is for NBD. It is not NBD's
> responsibility to enforce this ordering.
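Josef's rule can be sketched as a tiny simulation. Everything here (the `Client` class and its method names) is illustrative, not a real NBD client API; it only models the bookkeeping an application needs: a FLUSH covers just the writes whose replies have already arrived.

```python
# Illustrative model of the ordering rule: FLUSH only covers writes the
# client has already seen replies for, so the application must wait for
# those replies before issuing the flush.

class Client:
    def __init__(self):
        self.in_flight = set()   # writes issued, no reply received yet
        self.acked = set()       # writes the server has replied to
        self.flushed = set()     # writes a completed FLUSH is known to cover

    def write(self, handle):
        self.in_flight.add(handle)

    def reply(self, handle):
        self.in_flight.discard(handle)
        self.acked.add(handle)

    def flush(self):
        # the flush guarantee extends only to already-acknowledged writes
        self.flushed |= self.acked

c = Client()
c.write("w1"); c.write("w2")
c.reply("w1")
c.flush()                  # issued too early: w2 is still in flight
assert "w2" not in c.flushed
c.reply("w2"); c.flush()   # correct pattern: wait for replies, then flush
assert c.flushed == {"w1", "w2"}
```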
On 15/09/2016 15:34, Eric Blake wrote:
> On 09/15/2016 06:09 AM, Alex Bligh wrote:
>>
>> I also wonder whether any servers that can do caching per
>> connection will always share a consistent cache between
>> connections. The one I'm worried about in particular here
>> is qemu-nbd - Eric Blake CC'd.
On 09/15/2016 09:17 AM, Wouter Verhelst wrote:
On Thu, Sep 15, 2016 at 01:44:29PM +0100, Alex Bligh wrote:
On 15 Sep 2016, at 13:41, Christoph Hellwig wrote:
On Thu, Sep 15, 2016 at 01:39:11PM +0100, Alex Bligh wrote:
That's probably right in the case of file-based back ends that
are running on a Linux OS.
On 09/15/2016 06:09 AM, Alex Bligh wrote:
>
> I also wonder whether any servers that can do caching per
> connection will always share a consistent cache between
> connections. The one I'm worried about in particular here
> is qemu-nbd - Eric Blake CC'd.
>
I doubt that qemu-nbd would ever want to support the situation with more
than one client connection writing to the same image at the same time.
On Thu, Sep 15, 2016 at 01:44:29PM +0100, Alex Bligh wrote:
>
> > On 15 Sep 2016, at 13:41, Christoph Hellwig wrote:
> >
> > On Thu, Sep 15, 2016 at 01:39:11PM +0100, Alex Bligh wrote:
> >> That's probably right in the case of file-based back ends that
> >> are running on a Linux OS. But gonbdserver for instance supports
> >> (e.g.) Ceph based backends, where each connection might be talking
> >> to a completely separate ceph node,
> On 15 Sep 2016, at 13:41, Christoph Hellwig wrote:
>
> On Thu, Sep 15, 2016 at 01:39:11PM +0100, Alex Bligh wrote:
>> That's probably right in the case of file-based back ends that
>> are running on a Linux OS. But gonbdserver for instance supports
>> (e.g.) Ceph based backends, where each connection might be talking
>> to a completely separate ceph node,
On Thu, Sep 15, 2016 at 01:39:11PM +0100, Alex Bligh wrote:
> That's probably right in the case of file-based back ends that
> are running on a Linux OS. But gonbdserver for instance supports
> (e.g.) Ceph based backends, where each connection might be talking
> to a completely separate ceph node,
> On 15 Sep 2016, at 13:36, Christoph Hellwig wrote:
>
> On Thu, Sep 15, 2016 at 01:33:20PM +0100, Alex Bligh wrote:
>> At an implementation level that is going to be a little difficult
>> for some NBD servers, e.g. ones that fork() a different process per
>> connection. There is in general no IPC to speak of between server
>> instances.
On Thu, Sep 15, 2016 at 01:33:20PM +0100, Alex Bligh wrote:
> At an implementation level that is going to be a little difficult
> for some NBD servers, e.g. ones that fork() a different process per
> connection. There is in general no IPC to speak of between server
> instances. Such servers would t
> On 15 Sep 2016, at 13:23, Christoph Hellwig wrote:
>
> On Thu, Sep 15, 2016 at 02:21:20PM +0200, Wouter Verhelst wrote:
>> Right. So do I understand you correctly that blk-mq currently doesn't
>> look at multiple queues, and just assumes that if a FLUSH is sent over
>> any one of the queues, it applies to all queues?
> On 15 Sep 2016, at 13:18, Christoph Hellwig wrote:
>
> Yes, please do that. A "barrier" implies draining of the queue.
Done
--
Alex Bligh
On Thu, Sep 15, 2016 at 02:26:31PM +0200, Wouter Verhelst wrote:
> Yes. I think the kernel nbd driver should probably filter out FUA on
> READ. It has no meaning in the case of nbd, and whatever expectations
> the kernel may have cannot be provided for by nbd anyway.
The kernel never sets FUA on reads.
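The filtering suggested above could look something like this sketch. The constants are illustrative placeholders, not the real NBD wire encoding; the point is only that the driver would mask FUA off before sending a READ.

```python
# Hypothetical flag filtering: strip FUA from READ commands, where it has
# no defined meaning, and leave it intact on writes. Constant values are
# placeholders, not the actual NBD protocol encoding.
NBD_CMD_READ = 0
NBD_CMD_WRITE = 1
NBD_CMD_FLAG_FUA = 1 << 0

def effective_flags(cmd, flags):
    """Return command flags with FUA removed when the command is a READ."""
    if cmd == NBD_CMD_READ:
        return flags & ~NBD_CMD_FLAG_FUA
    return flags

assert effective_flags(NBD_CMD_READ, NBD_CMD_FLAG_FUA) == 0
assert effective_flags(NBD_CMD_WRITE, NBD_CMD_FLAG_FUA) == NBD_CMD_FLAG_FUA
```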
On Thu, Sep 15, 2016 at 05:20:08AM -0700, Christoph Hellwig wrote:
> On Thu, Sep 15, 2016 at 02:01:59PM +0200, Wouter Verhelst wrote:
> > Yes. There was some discussion on that part, and we decided that setting
> > the flag doesn't hurt, but the spec also clarifies that using it on READ
> > does nothing, semantically.
On Thu, Sep 15, 2016 at 05:01:25AM -0700, Christoph Hellwig wrote:
> On Thu, Sep 15, 2016 at 01:55:14PM +0200, Wouter Verhelst wrote:
> > If that's not a write barrier, then I was using the wrong terminology
> > (and offer my apologies for the confusion).
>
> It's not a write barrier - a write barrier was a command that ensured
> that all previous writes were completed to the host/client, that all
> previous writes were on non-volatile storage, and that the actual write
> with the barrier bit was on non-volatile storage.
On Thu, Sep 15, 2016 at 02:21:20PM +0200, Wouter Verhelst wrote:
> Right. So do I understand you correctly that blk-mq currently doesn't
> look at multiple queues, and just assumes that if a FLUSH is sent over
> any one of the queues, it applies to all queues?
Yes. The same is true at the protocol level.
On Thu, Sep 15, 2016 at 01:11:24PM +0100, Alex Bligh wrote:
> > NBD_CMD_FLUSH (3)
> >
> > A flush request; a write barrier.
>
> I can see that's potentially confusing as it isn't meant to mean 'an
> old-style linux kernel block device write barrier'. I think in general
> terms it probably is some f
On Thu, Sep 15, 2016 at 02:01:59PM +0200, Wouter Verhelst wrote:
> Yes. There was some discussion on that part, and we decided that setting
> the flag doesn't hurt, but the spec also clarifies that using it on READ
> does nothing, semantically.
>
>
> The problem is that there are clients in the wi
On Thu, Sep 15, 2016 at 12:49:35PM +0200, Wouter Verhelst wrote:
> A while back, we spent quite some time defining the semantics of the
> various commands in the face of the NBD_CMD_FLUSH and NBD_CMD_FLAG_FUA
> write barriers. At the time, we decided that it would be unreasonable
> to expect server
Christoph,
> It's not a write barrier - a write barrier was a command that ensured that
>
> a) all previous writes were completed to the host/client
> b) all previous writes were on non-volatile storage
>
> and
>
> c) the actual write with the barrier bit was on non-volatile storage
Ah! the bit
> On 15 Sep 2016, at 12:52, Christoph Hellwig wrote:
>
> On Thu, Sep 15, 2016 at 12:46:07PM +0100, Alex Bligh wrote:
>> Essentially NBD does support FLUSH/FUA like this:
>>
>> https://www.kernel.org/doc/Documentation/block/writeback_cache_control.txt
>>
>> IE supports the same FLUSH/FUA primitives as other block drivers (AIUI).
On Thu, Sep 15, 2016 at 01:55:14PM +0200, Wouter Verhelst wrote:
> Maybe I'm not using the correct terminology here. The point is that
> after a FLUSH, the server asserts that all write commands *for which a
> reply has already been sent to the client* will also have reached
> permanent storage. No
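Wouter's guarantee can be modelled server-side as follows. The `Server` class is a hypothetical sketch, not any real NBD implementation; it shows only that the FLUSH reply must not be sent until every previously acknowledged write is on stable storage.

```python
# Minimal model of the FLUSH guarantee: by the time the server replies to
# FLUSH, every write it had already replied to must be on stable storage.
# Writes without a reply yet are not covered.

class Server:
    def __init__(self):
        self.cache = []     # acknowledged writes still in a volatile cache
        self.stable = []    # writes known to be on permanent storage

    def handle_write(self, data):
        self.cache.append(data)
        return "reply"      # reply sent; data may still be volatile here

    def handle_flush(self):
        # drain the volatile cache before replying (a real server would
        # fsync() or equivalent at this point)
        self.stable.extend(self.cache)
        self.cache.clear()
        return "reply"

s = Server()
s.handle_write("a")
s.handle_write("b")
s.handle_flush()
assert s.stable == ["a", "b"] and s.cache == []
```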
On Thu, Sep 15, 2016 at 04:52:17AM -0700, Christoph Hellwig wrote:
> On Thu, Sep 15, 2016 at 12:46:07PM +0100, Alex Bligh wrote:
> > Essentially NBD does support FLUSH/FUA like this:
> >
> > https://www.kernel.org/doc/Documentation/block/writeback_cache_control.txt
> >
> > IE supports the same FLUSH/FUA primitives as other block drivers (AIUI).
> On 15 Sep 2016, at 12:46, Christoph Hellwig wrote:
>
> On Thu, Sep 15, 2016 at 12:43:35PM +0100, Alex Bligh wrote:
>> Sure, it's at:
>>
>> https://github.com/yoe/nbd/blob/master/doc/proto.md#ordering-of-messages-and-writes
>>
>> and that link takes you to the specific section.
>>
>> The treatment of FLUSH and FUA is meant to mirror exactly the
>> Linux block layer.
On Thu, Sep 15, 2016 at 04:38:07AM -0700, Christoph Hellwig wrote:
> On Thu, Sep 15, 2016 at 12:49:35PM +0200, Wouter Verhelst wrote:
> > A while back, we spent quite some time defining the semantics of the
> > various commands in the face of the NBD_CMD_FLUSH and NBD_CMD_FLAG_FUA
> > write barriers.
On Thu, Sep 15, 2016 at 12:46:07PM +0100, Alex Bligh wrote:
> Essentially NBD does support FLUSH/FUA like this:
>
> https://www.kernel.org/doc/Documentation/block/writeback_cache_control.txt
>
> IE supports the same FLUSH/FUA primitives as other block drivers (AIUI).
>
> Link to protocol (per l
On Thu, Sep 15, 2016 at 12:43:35PM +0100, Alex Bligh wrote:
> Sure, it's at:
>
> https://github.com/yoe/nbd/blob/master/doc/proto.md#ordering-of-messages-and-writes
>
> and that link takes you to the specific section.
>
> The treatment of FLUSH and FUA is meant to mirror exactly the
> Linux block layer.
> On 15 Sep 2016, at 12:40, Christoph Hellwig wrote:
>
> On Thu, Sep 15, 2016 at 01:29:36PM +0200, Wouter Verhelst wrote:
>> Yes, and that is why I was asking about this. If the write barriers
>> are expected to be shared across connections, we have a problem. If,
>> however, they are not, then it doesn't matter that the commands may be
>> processed out of order.
Christoph,
> On 15 Sep 2016, at 12:38, Christoph Hellwig wrote:
>
> On Thu, Sep 15, 2016 at 12:49:35PM +0200, Wouter Verhelst wrote:
>> A while back, we spent quite some time defining the semantics of the
>> various commands in the face of the NBD_CMD_FLUSH and NBD_CMD_FLAG_FUA
>> write barriers
On Thu, Sep 15, 2016 at 01:29:36PM +0200, Wouter Verhelst wrote:
> Yes, and that is why I was asking about this. If the write barriers
> are expected to be shared across connections, we have a problem. If,
> however, they are not, then it doesn't matter that the commands may be
> processed out of order.
On Thu, Sep 15, 2016 at 12:09:28PM +0100, Alex Bligh wrote:
> A more general point is that with multiple queues requests
> may be processed in a different order even by those servers that
> currently process the requests in strict order, or in something
> similar to strict order. The server is perm
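The reordering Alex describes can be illustrated by enumerating global execution orders, assuming each connection is a FIFO but the server interleaves connections arbitrarily (illustrative code only, not part of any NBD server):

```python
# Each connection preserves its own order, but requests on different
# connections can interleave in any way: the global order is any merge of
# the per-connection sequences.

def possible_orders(conn_a, conn_b):
    """Enumerate every interleaving that keeps each connection's FIFO order."""
    if not conn_a:
        return [list(conn_b)]
    if not conn_b:
        return [list(conn_a)]
    return ([[conn_a[0]] + rest for rest in possible_orders(conn_a[1:], conn_b)] +
            [[conn_b[0]] + rest for rest in possible_orders(conn_a, conn_b[1:])])

orders = possible_orders(["w1", "w2"], ["w3"])
# w1 always precedes w2 (same connection), but w3 may land anywhere
assert all(o.index("w1") < o.index("w2") for o in orders)
assert len(orders) == 3
```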
On Thu, Sep 15, 2016 at 12:09:28PM +0100, Alex Bligh wrote:
> Wouter, Josef, (& Eric)
>
> > On 15 Sep 2016, at 11:49, Wouter Verhelst wrote:
> >
> > Hi,
> >
> > On Fri, Sep 09, 2016 at 10:02:03PM +0200, Wouter Verhelst wrote:
> >> I see some practical problems with this:
> > [...]
> >
> > One more that I didn't think about earlier:
Wouter, Josef, (& Eric)
> On 15 Sep 2016, at 11:49, Wouter Verhelst wrote:
>
> Hi,
>
> On Fri, Sep 09, 2016 at 10:02:03PM +0200, Wouter Verhelst wrote:
>> I see some practical problems with this:
> [...]
>
> One more that I didn't think about earlier:
>
> A while back, we spent quite some time defining the semantics of the
> various commands in the face of the NBD_CMD_FLUSH and NBD_CMD_FLAG_FUA
> write barriers.
Hi,
On Fri, Sep 09, 2016 at 10:02:03PM +0200, Wouter Verhelst wrote:
> I see some practical problems with this:
[...]
One more that I didn't think about earlier:
A while back, we spent quite some time defining the semantics of the
various commands in the face of the NBD_CMD_FLUSH and NBD_CMD_FLAG_FUA
write barriers.
On 09/09/2016 05:00 PM, Josef Bacik wrote:
Right. Alternatively, you could perhaps make it so that the lost
connection is removed, unack'd requests on that connection are resent,
and the session moves on with one less connection (unless the lost
connection is the last one, in which case we die as
On 09/09/2016 04:55 PM, Wouter Verhelst wrote:
On Fri, Sep 09, 2016 at 04:36:07PM -0400, Josef Bacik wrote:
On 09/09/2016 04:02 PM, Wouter Verhelst wrote:
[...]
I see some practical problems with this:
- You removed the pid attribute from sysfs (unless you added it back and
I didn't notice, in which case just ignore this part).
On Fri, Sep 09, 2016 at 04:36:07PM -0400, Josef Bacik wrote:
> On 09/09/2016 04:02 PM, Wouter Verhelst wrote:
[...]
> > I see some practical problems with this:
> > - You removed the pid attribute from sysfs (unless you added it back and
> > I didn't notice, in which case just ignore this part).
On 09/09/2016 04:02 PM, Wouter Verhelst wrote:
Hi Josef,
On Thu, Sep 08, 2016 at 05:12:05PM -0400, Josef Bacik wrote:
Apologies if you are getting this a second time, it appears vger ate my last
submission.
--
This is a patch series aimed at bringing NBD into 2016.
Hi Josef,
On Thu, Sep 08, 2016 at 05:12:05PM -0400, Josef Bacik wrote:
> Apologies if you are getting this a second time, it appears vger ate my last
> submission.
>
> --
>
> This is a patch series aimed at bringing NBD into 2016.
Apologies if you are getting this a second time, it appears vger ate my last
submission.
--
This is a patch series aimed at bringing NBD into 2016. The two big components
of this series are converting nbd over to using blkmq and