On Tue, Sep 05, 2017 at 04:09:19PM +0200, Willem de Bruijn wrote:
> On Mon, Sep 4, 2017 at 5:03 AM, Jason Wang wrote:
> > On 2017-09-02 00:17, Willem de Bruijn wrote:
> > > This is not a 50/50 split, which implies that some packets from the
> > > large packet flow are still converted to copying. Without the change
> > > the rate without queue was 80k zerocopy vs 80k copy, so this choice
> > > of (vq->num >> 2) appears too conservative.
> > >
> > > However, testing wit
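
A minimal sketch of the heuristic being tuned here, assuming vhost-net-style
bookkeeping where upend_idx/done_idx track in-flight zerocopy buffers modulo
UIO_MAXIOV (names illustrative, not the exact patch):

/* Sketch only: fall back to copy transmission once in-flight zerocopy
 * buffers occupy more than a quarter of the ring (vq->num >> 2). The
 * numbers quoted above suggest this cutoff may be too conservative.
 */
static bool tx_may_zerocopy(struct vhost_virtqueue *vq,
			    unsigned int upend_idx, unsigned int done_idx)
{
	/* In-flight zerocopy buffers, accounting for index wraparound. */
	unsigned int inflight =
		(upend_idx + UIO_MAXIOV - done_idx) % UIO_MAXIOV;

	return inflight < (vq->num >> 2);
}
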
On Thu, Aug 31, 2017 at 11:25 PM, Jason Wang wrote:
> On 2017-08-31 22:30, Willem de Bruijn wrote:
> > Incomplete results at this stage, but I do see this correlation between
> > flows. It occurs even while not running out of zerocopy descriptors,
> > which I cannot yet explain.
> >
> > Running two threads in a guest, each with a udp socket, each
> > sending up to 100
On 2017-08-30 11:11, Willem de Bruijn wrote:
> On Tue, Aug 29, 2017 at 9:45 PM, Jason Wang wrote:
> > On 2017-08-30 03:35, Willem de Bruijn wrote:
> > > On Fri, Aug 25, 2017 at 9:03 PM, Willem de Bruijn wrote:
> > > > On Fri, Aug 25, 2017 at 7:32 PM, Michael S. Tsirkin wrote:
> > > > > On Fri, Aug 25, 2017 at 06:44:36PM -0400, Willem de Bruijn wrote:
> > > > > > > We don't enable network watchdog on virtio but we could and
> > > > > > > maybe should.
On Tue, Aug 29, 2017 at 4:40 PM, Michael S. Tsirkin wrote:
> On Tue, Aug 29, 2017 at 03:53:08PM -0400, Willem de Bruijn wrote:
> > On Tue, Aug 29, 2017 at 3:42 PM, Michael S. Tsirkin wrote:
> > > On Tue, Aug 29, 2017 at 03:35:38PM -0400, Willem de Bruijn wrote:
> > > > By the way, I have had an unrelated patch outstanding for a while
> > > > to have virtio-net support the VIRTIO_CONFIG_S_NEEDS_RESET
> > > > command. Will send that as RFC.
> > >
> > > Oh nice. One needs to be careful about locking there which is why
> > > no
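
As a rough sketch, handling VIRTIO_CONFIG_S_NEEDS_RESET in virtio-net's
config-changed path could look like the following; vi->reset_work is a
hypothetical worker, and the locking concern raised above is exactly what
this glosses over:

/* Sketch only: if the device signals NEEDS_RESET, schedule a full
 * device reset instead of carrying on with a wedged device.
 */
static void virtnet_config_changed_work(struct work_struct *work)
{
	struct virtnet_info *vi =
		container_of(work, struct virtnet_info, config_work);
	u8 status = vi->vdev->config->get_status(vi->vdev);

	if (status & VIRTIO_CONFIG_S_NEEDS_RESET) {
		/* Hypothetical worker; must not race with the data path. */
		schedule_work(&vi->reset_work);
		return;
	}
	/* ... existing link status handling ... */
}
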
On Fri, Aug 25, 2017 at 9:03 PM, Willem de Bruijn wrote:
> On Fri, Aug 25, 2017 at 7:32 PM, Michael S. Tsirkin wrote:
> > On Fri, Aug 25, 2017 at 06:44:36PM -0400, Willem de Bruijn wrote:
> > > > > > We don't enable network watchdog on virtio but we could and maybe
> > > > > > should.
> > > > >
> > > > > Can you elaborate?
> > > >
> > > > The issue is that holding onto buffers for very long times makes guests
> > > > think they are stuck. This is fundamentally because from the guest
> > > > point of view this is a
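
For concreteness, "enabling the network watchdog" would mean giving
virtio-net a tx-timeout handler and a watchdog interval, roughly as sketched
below; virtnet_tx_timeout is hypothetical, since the driver had no such
handler at the time of this thread:

/* Sketch only: the stack's tx watchdog calls this when a started queue
 * has made no transmit progress within dev->watchdog_timeo, which is
 * how a guest would notice buffers being held for very long times.
 */
static void virtnet_tx_timeout(struct net_device *dev)
{
	netdev_warn(dev, "tx timeout: host may be holding tx buffers\n");
	/* A real handler might schedule a device reset here. */
}

/* In probe, hypothetically:
 *	.ndo_tx_timeout = virtnet_tx_timeout in virtnet_netdev, plus
 */
dev->watchdog_timeo = 5 * HZ;
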
On Thu, Aug 24, 2017 at 04:20:39PM -0400, Willem de Bruijn wrote:
> > Traffic shaping can introduce msec timescale latencies.
> >
> > The delay may actually be a useful signal. If the guest does not
> > orphan skbs early, TSQ will throttle the socket causing host
> > queue build up.
> >
> > But, if completions are queued in-order, unrelated flows may be
> > throttled as
On Wed, Aug 23, 2017 at 11:28:24PM -0400, Willem de Bruijn wrote:
> > > * as a generic solution, if we were to somehow overcome the safety
> > > issue, tracking the delay and doing a copy once some threshold is
> > > reached could be an answer, but it's hard for now.
> > > * so things like the current vhost-net implementation of deciding
> > > whether or not to do
On Wed, Aug 23, 2017 at 11:20:45AM -0400, Willem de Bruijn wrote:
> Please let me make sure I understand it correctly:
> * always doing a copy with skb_orphan_frags_rx, as Willem mentioned in the
> earlier post, before xmit_skb (as opposed to my original patch), is safe
> but too costly, so it cannot be adopted.

One more point about msg_zerocopy in the guest. This do
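
The "always copy" option weighed above amounts to something like the
following in the guest xmit path; a sketch only (xmit_skb as named above),
not the referenced patch:

/* Sketch only: release references to user pages before the skb enters
 * a path that may hold it for an unbounded time. skb_orphan_frags_rx()
 * copies user frags into kernel memory and fires the zerocopy
 * completion callback; nonzero means the copy allocation failed.
 */
if (unlikely(skb_orphan_frags_rx(skb, GFP_ATOMIC))) {
	dev_kfree_skb_any(skb);
	return NETDEV_TX_OK;
}
err = xmit_skb(sq, skb);
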
On Wed, 2017-08-23 at 23:28 +0900, Koichiro Den wrote:
> On Tue, 2017-08-22 at 20:55 +0300, Michael S. Tsirkin wrote:
> > On Tue, Aug 22, 2017 at 10:50:41AM +0800, Jason Wang wrote:
> > > > Perhaps the descriptor pool should also be
> > > > revised to allow out of order completions. Then there is no need to
> > > > copy zerocopy packets whenever they may experience delay.
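
A sketch of that idea: the virtio used ring already permits returning
descriptor heads in any order, so each completion could be reported the
moment the lower layer releases the buffer. All names below except the vhost
calls are hypothetical, and locking against the tx handler is ignored:

/* Sketch only: report one descriptor per completion, independent of
 * submission order, instead of advancing an in-order done_idx.
 */
static void zerocopy_done(struct ubuf_info *ubuf, bool success)
{
	struct zc_ctx *ctx = container_of(ubuf, struct zc_ctx, ubuf);

	/* Return just this head; no waiting for earlier submissions. */
	vhost_add_used_and_signal(ctx->dev, ctx->vq, ctx->desc_head, 0);
}
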
On Tue, 2017-08-22 at 13:19 -0400, Willem de Bruijn wrote:
> > > > > /* Don't wait up for transmitted skbs to be freed. */
> > > > > if (!use_napi) {
> > > > > + if (skb_shinfo(skb)->tx_flags & SKBTX_DEV_ZEROCOPY) {
> > > > > + struct ubuf_info *u
On Tue, 2017-08-22 at 13:16 -0400, Willem de Bruijn wrote:
> > > > An issue of the referenced patch is that sndbuf could be smaller
> > > > than the low watermark.
> >
> > We cannot determine the low watermark properly because of not only the
> > sndbuf size issue but also the fact that the up
On Tue, Aug 22, 2017 at 11:28:28AM -0700, Eric Dumazet wrote:
> On Tue, 2017-08-22 at 11:01 -0700, David Miller wrote:
> > From: "Michael S. Tsirkin"
> > Date: Tue, 22 Aug 2017 20:55:56 +0300
> >
> > > Which reminds me that skb_linearize in net core seems to be
> > > fundamentally racy - I suspect that if skb is cloned, and someone is
> > > trying to use the shared frags while another thread calls skb_linearize,
> > > we get some use after
On Tue, Aug 22, 2017 at 10:50:41AM +0800, Jason Wang wrote:
> > Perhaps the descriptor pool should also be
> > revised to allow out of order completions. Then there is no need to
> > copy zerocopy packets whenever they may experience delay.
>
> Yes, but as replied in the referenced thread, windows
>> > > /* Don't wait up for transmitted skbs to be freed. */
>> > > if (!use_napi) {
>> > > + if (skb_shinfo(skb)->tx_flags & SKBTX_DEV_ZEROCOPY) {
>> > > + struct ubuf_info *uarg;
>> > > + uarg = skb_shinfo(skb)->destructor_arg;
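
The snippet is cut off above; a hedged reconstruction of the idea, not
necessarily the exact patch, looks like this (whether to report success or
failure to the producer is part of the debate in this thread):

/* Sketch only: on the non-napi path the skb is about to be orphaned,
 * so fire the zerocopy completion first; otherwise the upper layer
 * waits on a callback that may be delayed without bound.
 */
if (!use_napi) {
	if (skb_shinfo(skb)->tx_flags & SKBTX_DEV_ZEROCOPY) {
		struct ubuf_info *uarg = skb_shinfo(skb)->destructor_arg;

		if (uarg && uarg->callback)
			uarg->callback(uarg, true);
		skb_shinfo(skb)->tx_flags &= ~SKBTX_DEV_ZEROCOPY;
		skb_shinfo(skb)->destructor_arg = NULL;
	}
	skb_orphan(skb);
}
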
>> > An issue of the referenced patch is that sndbuf could be smaller than
>> > the low watermark.
> We cannot determine the low watermark properly because of not only the
> sndbuf size issue but also the fact that the upper vhost-net cannot directly
> see how many descriptors are currently available at the
On Tue, 2017-08-22 at 08:11 -0400, Willem de Bruijn wrote:
> On Sun, Aug 20, 2017 at 4:49 PM, Willem de Bruijn wrote:
> > On Sat, Aug 19, 2017 at 2:38 AM, Koichiro Den wrote:
> > > Facing the possible unbounded delay of relying on freeing on the xmit
> > > path, we had better also invoke and clear the upper layer zerocopy
> > > callback beforehand to keep it from waiting f
On Mon, 2017-08-21 at 23:10 -0400, Willem de Bruijn wrote:
> > > > Interesting, deadlock could be treated as a radical case of the
> > > > discussion here https://patchwork.kernel.org/patch/3787671/.
> > > >
> > > > git grep turns up more similar skb_orphan() cases. Do we need to
> > > > change them all (or part)?
On 2017-08-22 11:10, Willem de Bruijn wrote:
> > > Interesting, deadlock could be treated as a radical case of the
> > > discussion here https://patchwork.kernel.org/patch/3787671/.
> > >
> > > git grep turns up more similar skb_orphan() cases. Do we need to change
> > > them all (or part)?
> >
> > Most skb_orphan calls are not relevant to the issue o
On 2017-08-21 23:41, Willem de Bruijn wrote:
> On Mon, Aug 21, 2017 at 8:33 AM, Jason Wang wrote:
> > On 2017-08-19 14:38, Koichiro Den wrote:
> > > Facing the possible unbounded delay of relying on freeing on the xmit
> > > path, we had better also invoke and clear the upper layer zerocopy
> > > callback beforehand to keep it from waiting for an unbounded
On Sun, 2017-08-20 at 16:49 -0400, Willem de Bruijn wrote:
> On 2017-08-19 14:38, Koichiro Den wrote:
> > Facing the possible unbounded delay of relying on freeing on the xmit
> > path, we had better also invoke and clear the upper layer zerocopy
> > callback beforehand to keep it from waiting for an unbounded duration
> > in vain. For instance, this removes the possible deadlo
On Sat, Aug 19, 2017 at 2:38 AM, Koichiro Den wrote:
> Facing the possible unbounded delay of relying on freeing on the xmit
> path, we had better also invoke and clear the upper layer zerocopy
> callback beforehand to keep it from waiting for an unbounded duration
> in vain.

Good point.

> For instance, this removes the possible deadlock in the case that the
> upper layer is a zeroc