On 23/08/2013 10:32, Asias He wrote:
> On Thu, Aug 22, 2013 at 11:51:00AM +0200, Paolo Bonzini wrote:
>> On 22/08/2013 11:50, Asias He wrote:
>>> On Wed, Aug 21, 2013 at 10:16:02AM +0200, Paolo Bonzini wrote:
On 21/08/2013 04:02, Asias He wrote:
> In block/gluster.c, we have
On Thu, Aug 22, 2013 at 11:51:00AM +0200, Paolo Bonzini wrote:
> On 22/08/2013 11:50, Asias He wrote:
> > On Wed, Aug 21, 2013 at 10:16:02AM +0200, Paolo Bonzini wrote:
> >> On 21/08/2013 04:02, Asias He wrote:
> >>> In block/gluster.c, we have
> >>>
> >>> gluster_finish_aiocb
> >>> {
> >
On Fri, Aug 23, 2013 at 09:33:21AM +0200, Paolo Bonzini wrote:
> > (gdb) p *bh
> > $1 = {ctx = 0x0, cb = 0x555ffdcd , opaque =
> > 0x7fffd00419c0, next = 0x56345e70, scheduled = false, idle = false,
> > deleted = true}
>
> This looks like a use-after-free, with bh->ctx corrupted wh
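For context on why the bh can end up freed under the scheduler's feet: in QEMU trees from before the BH list was protected by its own lock, creating a bottom half was an unlocked insertion into the AioContext's list, which the iothread walks concurrently in aio_bh_poll(). A simplified sketch of that era's aio_bh_new (reconstructed, so treat the details as approximate):

QEMUBH *aio_bh_new(AioContext *ctx, QEMUBHFunc *cb, void *opaque)
{
    QEMUBH *bh = g_malloc0(sizeof(QEMUBH));

    bh->ctx = ctx;
    bh->cb = cb;
    bh->opaque = opaque;
    /* Unlocked insertion into ctx->first_bh: calling this from a
     * non-iothread (e.g. the gluster callback thread) races with
     * aio_bh_poll() walking the same list and reaping deleted BHs,
     * which is consistent with the corrupted bh printed above. */
    bh->next = ctx->first_bh;
    ctx->first_bh = bh;
    return bh;
}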
On 23/08/2013 08:48, Bharata B Rao wrote:
> On Wed, Aug 21, 2013 at 05:40:11PM +0200, Paolo Bonzini wrote:
>>
>> We could just use a bottom half, too. Add a bottom half to acb,
>> schedule it in gluster_finish_aiocb, delete it in the bottom half's own
>> callback.
>
> I tried this approach (
On Wed, Aug 21, 2013 at 05:40:11PM +0200, Paolo Bonzini wrote:
>
> We could just use a bottom half, too. Add a bottom half to acb,
> schedule it in gluster_finish_aiocb, delete it in the bottom half's own
> callback.
I tried this approach (the patch at the end of this mail), but see this
occasio
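For reference, the shape of the approach being proposed, assuming the QEMU API of the time (qemu_bh_new/qemu_bh_schedule/qemu_bh_delete); the GlusterAIOCB layout shown is illustrative, not the exact patch:

typedef struct GlusterAIOCB {
    BlockDriverAIOCB common;
    int ret;
    QEMUBH *bh;    /* bottom half added to the acb for this approach */
} GlusterAIOCB;

/* Runs in the iothread once the BH is dispatched. */
static void qemu_gluster_complete_aio(void *opaque)
{
    GlusterAIOCB *acb = opaque;

    qemu_bh_delete(acb->bh);    /* delete the BH in its own callback */
    acb->bh = NULL;
    acb->common.cb(acb->common.opaque, acb->ret);
    qemu_aio_release(acb);
}

/* Called by gluster from its own thread when a request completes. */
static void gluster_finish_aiocb(struct glfs_fd *fd, ssize_t ret, void *arg)
{
    GlusterAIOCB *acb = arg;

    acb->ret = ret;
    acb->bh = qemu_bh_new(qemu_gluster_complete_aio, acb);
    qemu_bh_schedule(acb->bh);
}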
On 22/08/2013 16:01, Bharata B Rao wrote:
EFAULT means the buffer address is invalid, I/O error would be EIO, but...
> I remember this was one of the motivations to
> handle this failure.
... this write is on the pipe, not on a disk.
>>>
>>> Right. Failure to complete
On Thu, Aug 22, 2013 at 03:27:35PM +0200, Paolo Bonzini wrote:
> Looking at write(2), it looks like it is impossible
>
> EAGAIN or EWOULDBLOCK
> can't happen, blocking file descriptor
>
> EBADF, EPIPE
> shouldn't happen
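Applied to the code in question, the argument is that a failed completion write is a QEMU bug rather than an I/O condition to handle gracefully; a sketch against the gluster.c of that era (qemu_write_full and s->fds[GLUSTER_FD_WRITE] as in the then-current driver, other details assumed):

static void gluster_finish_aiocb(struct glfs_fd *fd, ssize_t ret, void *arg)
{
    GlusterAIOCB *acb = arg;
    BDRVGlusterState *s = acb->common.bs->opaque;
    ssize_t retval;

    acb->ret = ret;
    retval = qemu_write_full(s->fds[GLUSTER_FD_WRITE], &acb, sizeof(acb));
    if (retval != sizeof(acb)) {
        /* Blocking pipe fd: EAGAIN/EWOULDBLOCK cannot occur; EBADF/EPIPE
         * should not; EFAULT would mean &acb is itself a bad address,
         * i.e. a bug in QEMU, so abort rather than soldier on. */
        abort();
    }
}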
On 22/08/2013 15:25, Bharata B Rao wrote:
> On Thu, Aug 22, 2013 at 01:15:59PM +0200, Paolo Bonzini wrote:
>> On 22/08/2013 12:28, Bharata B Rao wrote:
>>> On Thu, Aug 22, 2013 at 12:00:48PM +0200, Paolo Bonzini wrote:
On 22/08/2013 11:55, Bharata B Rao wrote:
> This was the
On Thu, Aug 22, 2013 at 01:15:59PM +0200, Paolo Bonzini wrote:
> On 22/08/2013 12:28, Bharata B Rao wrote:
> > On Thu, Aug 22, 2013 at 12:00:48PM +0200, Paolo Bonzini wrote:
> >> On 22/08/2013 11:55, Bharata B Rao wrote:
> >>> This was the first approach I had. I used to abort when writes
On 22/08/2013 12:28, Bharata B Rao wrote:
> On Thu, Aug 22, 2013 at 12:00:48PM +0200, Paolo Bonzini wrote:
>> On 22/08/2013 11:55, Bharata B Rao wrote:
>>> This was the first approach I had. I used to abort when writes to pipe
>>> fail. But there were concerns raised about handling the failures
On Wed, Aug 21, 2013 at 10:16:02AM +0200, Paolo Bonzini wrote:
> On 21/08/2013 04:02, Asias He wrote:
> > In block/gluster.c, we have
> >
> > gluster_finish_aiocb
> > {
> >     if (retval != sizeof(acb)) {
> >         qemu_mutex_lock_iothread(); /* We are in gluster thread context */
> >         ...
On Thu, Aug 22, 2013 at 12:00:48PM +0200, Paolo Bonzini wrote:
> On 22/08/2013 11:55, Bharata B Rao wrote:
> > This was the first approach I had. I used to abort when writes to pipe
> > fail. But there were concerns raised about handling the failures gracefully
> > and hence we ended up doing
On 22/08/2013 11:55, Bharata B Rao wrote:
> This was the first approach I had. I used to abort when writes to pipe
> fail. But there were concerns raised about handling the failures gracefully
> and hence we ended up doing all that error handling of completing the aio
> with -EIO, closing the
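The graceful path being described looks roughly like this (a reconstruction of that error-handling block; the exact qemu_aio_set_fd_handler signature and teardown order are assumptions):

if (retval != sizeof(acb)) {
    qemu_mutex_lock_iothread();               /* we are in gluster thread context */
    acb->common.cb(acb->common.opaque, -EIO); /* complete the aio with -EIO */
    qemu_aio_release(acb);
    close(s->fds[GLUSTER_FD_READ]);           /* close the pipe... */
    close(s->fds[GLUSTER_FD_WRITE]);
    qemu_aio_set_fd_handler(s->fds[GLUSTER_FD_READ], NULL, NULL, NULL);
    bs->drv = NULL;                           /* ...and make the disk inaccessible */
    qemu_mutex_unlock_iothread();
}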
On Thu, Aug 22, 2013 at 11:06:47AM +0200, Paolo Bonzini wrote:
> On 22/08/2013 09:48, Stefan Hajnoczi wrote:
> >> > gluster_finish_aiocb gets called from gluster thread, is it safe to
> >> > create
> >> > and schedule a bh from such a thread ?
> >> >
> >> > In my first implementation
> >> >
On 22/08/2013 11:50, Asias He wrote:
> On Wed, Aug 21, 2013 at 10:16:02AM +0200, Paolo Bonzini wrote:
>> On 21/08/2013 04:02, Asias He wrote:
>>> In block/gluster.c, we have
>>>
>>> gluster_finish_aiocb
>>> {
>>>     if (retval != sizeof(acb)) {
>>>         qemu_mutex_lock_iothread(); /* We are in gluster thread context */
On 22/08/2013 09:48, Stefan Hajnoczi wrote:
>> > gluster_finish_aiocb gets called from gluster thread, is it safe to create
>> > and schedule a bh from such a thread ?
>> >
>> > In my first implementation
>> > (http://lists.gnu.org/archive/html/qemu-devel/2012-06/msg01748.html), I
>> > was
On Thu, Aug 22, 2013 at 11:29:47AM +0530, Bharata B Rao wrote:
> On Wed, Aug 21, 2013 at 05:40:11PM +0200, Paolo Bonzini wrote:
> > Il 21/08/2013 17:24, Stefan Hajnoczi ha scritto:
> > > On Wed, Aug 21, 2013 at 10:02:47AM +0800, Asias He wrote:
> > >> In block/gluster.c, we have
> > >>
> > >> gluster_finish_aiocb
On Wed, Aug 21, 2013 at 05:40:11PM +0200, Paolo Bonzini wrote:
> On 21/08/2013 17:24, Stefan Hajnoczi wrote:
> > On Wed, Aug 21, 2013 at 10:02:47AM +0800, Asias He wrote:
> >> In block/gluster.c, we have
> >>
> >> gluster_finish_aiocb
> >> {
> >>     if (retval != sizeof(acb)) {
> >>         qemu_mutex_lock_iothread(); /* We are in gluster thread context */
On 21/08/2013 17:24, Stefan Hajnoczi wrote:
> On Wed, Aug 21, 2013 at 10:02:47AM +0800, Asias He wrote:
>> In block/gluster.c, we have
>>
>> gluster_finish_aiocb
>> {
>>     if (retval != sizeof(acb)) {
>>         qemu_mutex_lock_iothread(); /* We are in gluster thread context */
>>         ...
>>
On Wed, Aug 21, 2013 at 10:02:47AM +0800, Asias He wrote:
> In block/gluster.c, we have
>
> gluster_finish_aiocb
> {
>     if (retval != sizeof(acb)) {
>         qemu_mutex_lock_iothread(); /* We are in gluster thread context */
>         ...
>         qemu_mutex_unlock_iothread();
>     }
> }
>
> qemu tools, e.g. qemu-img, might race here because
On 21/08/2013 04:02, Asias He wrote:
> In block/gluster.c, we have
>
> gluster_finish_aiocb
> {
>     if (retval != sizeof(acb)) {
>         qemu_mutex_lock_iothread(); /* We are in gluster thread context */
>         ...
>         qemu_mutex_unlock_iothread();
>     }
> }
>
> qemu tools, e.g. qemu-img, might race here because
In block/gluster.c, we have

gluster_finish_aiocb
{
    if (retval != sizeof(acb)) {
        qemu_mutex_lock_iothread(); /* We are in gluster thread context */
        ...
        qemu_mutex_unlock_iothread();
    }
}

qemu tools, e.g. qemu-img, might race here because
qemu_mutex_{lock,unlock}_iothread are
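The race Asias refers to exists because the tool binaries (qemu-img, qemu-io, ...) link against stub versions of these functions that do nothing, so taking the "iothread lock" there excludes nobody; roughly (the function names are real, their exact stub location in the 2013 tree is an assumption):

void qemu_mutex_lock_iothread(void)
{
    /* no-op in the tool builds; only the full emulator has a real lock */
}

void qemu_mutex_unlock_iothread(void)
{
    /* no-op */
}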