Re: [PATCH 1/4] dma-buf: add dma_fence_describe and dma_resv_describe

2021-10-29 Thread kernel test robot
Hi Christian,

I love your patch! Yet something to improve:

[auto build test ERROR on drm-tip/drm-tip]
[also build test ERROR on next-20211028]
[cannot apply to drm/drm-next drm-intel/for-linux-next 
drm-exynos/exynos-drm-next tegra-drm/drm/tegra/for-next linus/master 
airlied/drm-next v5.15-rc7]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patches, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:
https://github.com/0day-ci/linux/commits/Christian-K-nig/dma-buf-add-dma_fence_describe-and-dma_resv_describe/20211028-171805
base:   git://anongit.freedesktop.org/drm/drm-tip drm-tip
config: m68k-allyesconfig (attached as .config)
compiler: m68k-linux-gcc (GCC) 11.2.0
reproduce (this is a W=1 build):
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# https://github.com/0day-ci/linux/commit/80ae7cf414dbdb7fa9f48a46cc1bfa25b0a4fda7
git remote add linux-review https://github.com/0day-ci/linux
git fetch --no-tags linux-review Christian-K-nig/dma-buf-add-dma_fence_describe-and-dma_resv_describe/20211028-171805
git checkout 80ae7cf414dbdb7fa9f48a46cc1bfa25b0a4fda7
# save the attached .config to linux build tree
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-11.2.0 make.cross ARCH=m68k

If you fix the issue, kindly add the following tag as appropriate:
Reported-by: kernel test robot 

All errors (new ones prefixed by >>):

   drivers/dma-buf/dma-fence.c: In function 'dma_fence_describe':
>> drivers/dma-buf/dma-fence.c:919:9: error: implicit declaration of function 'seq_printf'; did you mean 'bstr_printf'? [-Werror=implicit-function-declaration]
     919 | seq_printf(seq, "%s %s seq %llu %ssignalled\n",
         | ^~
         | bstr_printf
   cc1: all warnings being treated as errors


vim +919 drivers/dma-buf/dma-fence.c

   909  
   910  /**
   911   * dma_fence_describe - Dump fence description into seq_file
   912   * @fence: the fence to describe
   913   * @seq: the seq_file to put the textual description into
   914   *
   915   * Dump a textual description of the fence and its state into the seq_file.
   916   */
   917  void dma_fence_describe(struct dma_fence *fence, struct seq_file *seq)
   918  {
 > 919  seq_printf(seq, "%s %s seq %llu %ssignalled\n",
   920 fence->ops->get_driver_name(fence),
   921 fence->ops->get_timeline_name(fence), fence->seqno,
   922 dma_fence_is_signaled(fence) ? "" : "un");
   923  }
   924  EXPORT_SYMBOL(dma_fence_describe);
   925  

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-...@lists.01.org


___
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization

Re: [PATCH 2/2] i2c: virtio: fix completion handling

2021-10-29 Thread Vincent Whitchurch
On Wed, Oct 20, 2021 at 12:47:09PM +0200, Viresh Kumar wrote:
> On 20-10-21, 12:38, Vincent Whitchurch wrote:
> > I don't quite understand how that would be safe since
> > virtqueue_add_sgs() can fail after a few iterations and all queued
> > request buffers can have FAIL_NEXT set.  In such a case, we would end up
> > waiting forever with your proposed change, wouldn't we?
> 
> Good point. I didn't think of that earlier.
> 
> I think a good simple way of handling this is counting the number of
> buffers sent and received. Once they match, we are done. That
> shouldn't break anything else I believe.

That could work, but it's not so straightforward since you would have to
introduce locking to prevent races since the final count is only known
after virtio_i2c_prepare_reqs() completes, while the callback could be
called before that.  Please do not hesitate to send out a patch to fix
it that way if that is what you prefer.


Re: [PATCH v3 0/2] x86/xen: simplify irq pvops

2021-10-29 Thread Boris Ostrovsky



On 10/28/21 3:27 AM, Juergen Gross wrote:

The pvops function for Xen PV guests handling the interrupt flag are
much more complex than needed.

With the supported Xen hypervisor versions they can be simplified a
lot, especially by removing the need for disabling preemption.

Juergen Gross (2):
   x86/xen: remove xen_have_vcpu_info_placement flag
   x86/xen: switch initial pvops IRQ functions to dummy ones

  arch/x86/include/asm/paravirt_types.h |   2 +
  arch/x86/kernel/paravirt.c            |  13 ++-
  arch/x86/xen/enlighten.c              | 116 ++
  arch/x86/xen/enlighten_hvm.c          |   6 +-
  arch/x86/xen/enlighten_pv.c           |  28 ++-
  arch/x86/xen/irq.c                    |  61 +-
  arch/x86/xen/smp.c                    |  24 --
  arch/x86/xen/xen-ops.h                |   4 +-
  8 files changed, 53 insertions(+), 201 deletions(-)



Applied to for-linus-5.16


-boris



Re: [PATCH 1/2] i2c: virtio: disable timeout handling

2021-10-29 Thread Vincent Whitchurch
On Thu, Oct 21, 2021 at 05:30:28AM +0200, Jie Deng wrote:
> On 2021/10/20 19:03, Viresh Kumar wrote:
> > On 20-10-21, 12:55, Vincent Whitchurch wrote:
> >> If the timeout cannot be disabled, then the driver should be fixed to
> >> always copy buffers and hold on to them to avoid memory corruption in
> >> the case of timeout, as I mentioned in my commit message.  That would be
> >> quite a substantial change to the driver so it's not something I'm
> >> personally comfortable with doing, especially not this late in the -rc
> >> cycle, so I'd leave that to others.
> > Or we can avoid clearing up and freeing the buffers here until the
> > point where the buffers are returned by the host. Until that happens,
> > we can avoid taking new requests but return to the earlier caller with
> > a timeout failure. That would avoid corruption (by not freeing buffers
> > prematurely) and avoid hanging the kernel.
> 
> It seems similar to use "wait_for_completion". If the other side is
> hacked, the guest may never get the buffers returned by the host,
> right ?

Note that it is trivial for the host to DoS the guest.  All the host has
to do is stop responding to I/O requests (I2C or others), then the guest
will not be able to perform its intended functions, regardless of
whether this particular driver waits forever or not.  Even TDX (which
Greg mentioned) does not prevent that, see:

 https://lore.kernel.org/virtualization/?q=tdx+dos

> For this moment, we can solve the problem by using a hardcoded big
> value or disabling the timeout.

Is that an Acked-by on this patch which does the latter?

> Over the long term, I think the backend should provide that timeout
> value and guarantee that its processing time should not exceed that
> value.

If you mean that the spec should be changed to allow the virtio driver
to be able to program a certain timeout for I2C transactions in the
virtio device, yes, that does sound reasonable.


Re: vDPA bus driver selection

2021-10-29 Thread Stefano Garzarella

On Fri, Oct 29, 2021 at 10:31:22AM +0800, Jason Wang wrote:

On Thu, Oct 28, 2021 at 5:47 PM Stefano Garzarella  wrote:


On Thu, Oct 28, 2021 at 10:24:47AM +0800, Jason Wang wrote:
>On Thu, Oct 28, 2021 at 4:16 AM Michael S. Tsirkin  wrote:
>>
>> On Wed, Oct 27, 2021 at 03:21:15PM +, Parav Pandit wrote:
>> > Hi Stefano,
>> >
>> > > From: Stefano Garzarella 
>> > > Sent: Wednesday, October 27, 2021 8:04 PM
>> > >
>> > > Hi folks,
>> > > I was trying to understand if we have a way to specify which vDPA
>> > > bus driver (e.g. vhost-vdpa, virtio-vdpa) a device should use.
>> > > IIUC we don't have it, and the first registered driver is used
>> > > when a new device is registered.
>> > >
>> > > I was thinking if it makes sense to extend the management API to
>> > > specify which bus driver to use for a device.
>
>Actually, we want to support this in the first version of vDPA bus.
>But for some reason it was dropped. The idea is to specify the device
>type 'virtio' or 'vhost'. But a concern is that, it may encourage
>vendor to implement e.g virtio specific device (without DMA
>isolation).

Yep, I see the issue with specifying the device type, so I think it makes
sense to require support for both, as it basically is now.

So instead of defining the type of the device, we could provide the
possibility to choose which bus to connect it to,


I think you meant the "bus driver" here?


Yep, sorry!




in this way we
continue to require that both are supported.

As Michael suggested, instead of specifying it at creation time as in
my original idea, we can provide an API to attach/detach a device to
a specific vDPA bus.


Does such an API exist in driver core?


I need to check better, Parav showed something with sysfs to bind a 
device to a driver, so maybe yes.


I just tried the following and it worked:

$ vdpa dev add mgmtdev vdpasim_net name vdpa0

$ readlink -f /sys/bus/vdpa/devices/vdpa0/driver
/sys/bus/vdpa/drivers/vhost_vdpa

$ echo vdpa0 > /sys/bus/vdpa/devices/vdpa0/driver/unbind

$ echo vdpa0 > /sys/bus/vdpa/drivers/virtio_vdpa/bind
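The unbind/bind sequence above can be wrapped in a small helper. This is only a sketch following the commands above; the `SYSFS` variable and the `vdpa_rebind` name are my own additions (the sysfs root is parameterised so the function can be exercised against a fake tree):

```shell
# Rebind a vdpa device to a different bus driver via sysfs.
# Usage: vdpa_rebind <device> <driver>   e.g. vdpa_rebind vdpa0 virtio_vdpa
# SYSFS defaults to /sys but may point at a test tree.
vdpa_rebind() {
    dev="$1" drv="$2"
    SYSFS="${SYSFS:-/sys}"
    devdir="$SYSFS/bus/vdpa/devices/$dev"

    # Unbind from the current driver, if any.
    if [ -e "$devdir/driver/unbind" ]; then
        echo "$dev" > "$devdir/driver/unbind"
    fi
    # Bind to the requested driver.
    echo "$dev" > "$SYSFS/bus/vdpa/drivers/$drv/bind"
}
```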





Of course, providing a default behaviour like now, which connects to the
first registered.


If we want to change this, we can introduce "driver_override".


Yep, I'll take a look.

Thanks,
Stefano



Re: vDPA bus driver selection

2021-10-29 Thread Stefano Garzarella

On Fri, Oct 29, 2021 at 10:34:00AM +0800, Jason Wang wrote:

On Thu, Oct 28, 2021 at 5:48 PM Parav Pandit  wrote:




> From: Stefano Garzarella 
> Sent: Thursday, October 28, 2021 3:08 PM

> >> >$ vdpa/vdpa dev add mgmtdev vdpasim_net name vdpa0 mac 00:11:22:33:44:55
> >> >$ echo 0 > /sys/bus/vdpa/drivers_autoprobe
> >> >
> >> >And after vdpa device creation, it manually binds to the desired
> >> >driver such as,
> >> >
> >> >$ echo vdpa0 > /sys/bus/vdpa/drivers/virtio_vdpa/bind
> >> >Or
> >> >$ echo vdpa0 > /sys/bus/vdpa/drivers/vhost_vdpa/bind
> >>
> >> Cool, I didn't know that. This is very useful, but do you think it
> >> might be better to integrate it with the netlink API and specify at
> >> creation which bus driver to use?
> >I think it is useful; for the vduse case we need the ability to say "none"
> >as well, and when nothing is specified it should be the default driver.
>
> Yep, the default can be the actual behaviour.
>
> >
> >More than netlink, I think we need the ability in the core kernel to
> >make this choice.
>
> Okay, but I think we can include that in the vdpa tool.
If the functionality and interface already exist elsewhere, it is hard to
justify wrapping them in the vdpa tool, which would just duplicate the work.


Got it.



>
> >I haven't explored what is already available to make that happen.
>
> Me too, I only saw the .match() callback of "struct bus_type" that could
> be used for this purpose.
>
> >If/once its available, I am sure it has more users than just vdpa.
>
> Make sense. I'll spend some time next week.

Ok, yeah. Maybe a desired driver can be stored in the vdpa_device for the
match() routine to use. And if that driver is not available, it returns
EPROBE_DEFER, so whenever such a driver is loaded the device can be bound.

But I think before wrapping something in vdpa, we need to find a reason why
the current interface doesn't solve the problem, and also to figure out the
plumbing.




Yep, when I started this thread I wasn't aware of that APIs available 
through sysfs.


It could be useful to start documenting vDPA (life cycle, management 
API, etc.).  I have a plan to add some vDPA docs in linux/Documentation, 
maybe we can include also these things.



I agree. If it's something that can easily be addressed by the management
code (just a matter of extra steps for manual setup), it's probably not
worth bothering.


Yep, I agree too. It seems we can easily switch the vDPA bus driver at 
runtime.


Maybe the only missing point is a way to specify the default bus driver 
to use for a device. Of course the workaround is to unbind it and bind 
to the desired one.


Thank you so much for this very helpful discussion,
Stefano



RE: vDPA bus driver selection

2021-10-29 Thread Parav Pandit via Virtualization
Hi Stefano,

> From: Stefano Garzarella 
> Sent: Friday, October 29, 2021 8:11 PM
> 
> Maybe the only missing point is a way to specify the default bus driver to use
> for a device. Of course the workaround is to unbind it and bind to the desired
> one.
> 
Unbind/bind can be done, but it is slower for deployment at scale.

echo 0 > /sys/bus/vdpa/drivers_autoprobe
The above command disables automatically binding a vdpa device to a driver,
so the user can choose the driver explicitly without affecting performance.



Re: futher decouple DAX from block devices

2021-10-29 Thread Dan Williams
On Thu, Oct 28, 2021 at 4:52 PM Stephen Rothwell  wrote:
>
> Hi Dan,
>
> On Wed, 27 Oct 2021 13:46:31 -0700 Dan Williams  
> wrote:
> >
> > My merge resolution is here [1]. Christoph, please have a look. The
> > rebase and the merge result are both passing my test and I'm now going
> > to review the individual patches. However, while I do that and collect
> > acks from DM and EROFS folks, I want to give Stephen a heads up that
> > this is coming. Primarily I want to see if someone sees a better
> > strategy to merge this, please let me know, but if not I plan to walk
> > Stephen and Linus through the resolution.
>
> It doesn't look too bad to me (however it is a bit late in the cycle :-(
> ).  Once you are happy, just put it in your tree (some of the conflicts
> are against the current -rc3 based version of your tree anyway) and I
> will cope with it on Monday.

Christoph, Darrick, Shiyang,

I'm losing my nerve to try to jam this into v5.16 this late in the
cycle. I do want to get dax+reflink squared away as soon as possible,
but that looks like something that needs to build on top of a
v5.16-rc1 at this point. If Linus does a -rc8 then maybe it would have
enough soak time, but otherwise I want to take the time to collect the
acks and queue up some more follow-on cleanups to prepare for
block-less-dax.


Re: futher decouple DAX from block devices

2021-10-29 Thread Dan Williams
On Fri, Oct 29, 2021 at 8:55 AM Darrick J. Wong  wrote:
>
> On Fri, Oct 29, 2021 at 08:42:29AM -0700, Dan Williams wrote:
> > On Thu, Oct 28, 2021 at 4:52 PM Stephen Rothwell  
> > wrote:
> > >
> > > Hi Dan,
> > >
> > > On Wed, 27 Oct 2021 13:46:31 -0700 Dan Williams 
> > >  wrote:
> > > >
> > > > My merge resolution is here [1]. Christoph, please have a look. The
> > > > rebase and the merge result are both passing my test and I'm now going
> > > > to review the individual patches. However, while I do that and collect
> > > > acks from DM and EROFS folks, I want to give Stephen a heads up that
> > > > this is coming. Primarily I want to see if someone sees a better
> > > > strategy to merge this, please let me know, but if not I plan to walk
> > > > Stephen and Linus through the resolution.
> > >
> > > It doesn't look too bad to me (however it is a bit late in the cycle :-(
> > > ).  Once you are happy, just put it in your tree (some of the conflicts
> > > are against the current -rc3 based version of your tree anyway) and I
> > > will cope with it on Monday.
> >
> > Christoph, Darrick, Shiyang,
> >
> > I'm losing my nerve to try to jam this into v5.16 this late in the
> > cycle.
>
> Always a solid choice to hold off for a little more testing and a little
> less anxiety. :)
>
> I don't usually accept new code patches for iomap after rc4 anyway.
>
> > I do want to get dax+reflink squared away as soon as possible,
> > but that looks like something that needs to build on top of a
> > v5.16-rc1 at this point. If Linus does a -rc8 then maybe it would have
> > enough soak time, but otherwise I want to take the time to collect the
> > acks and queue up some more follow-on cleanups to prepare for
> > block-less-dax.
>
> I think that hwpoison-calls-xfs-rmap patchset is a prerequisite for
> dax+reflink anyway, right?  /me had concluded both were 5.17 things.

Ok, cool, sounds like a plan.