On Wed, Jan 04, 2017 at 11:03:32AM +0800, Jason Wang wrote:
> On 2017-01-03 21:33, Stefan Hajnoczi wrote:
> > On Wed, Dec 28, 2016 at 04:09:31PM +0800, Jason Wang wrote:
> > > +static int tun_rx_batched(struct tun_file *tfile, struct sk_buff *skb,
> > > +			  int more)
> > > +{
> > > +	struct sk_buff_head *queue = &tfile->sk.sk_write_queue;
> > > +	struct sk_buff_head process_queue;
> > > +	int qlen;
On 2016-12-30 00:35, David Miller wrote:
> From: Jason Wang
> Date: Wed, 28 Dec 2016 16:09:31 +0800
>
> > +	spin_lock(&queue->lock);
> > +	qlen = skb_queue_len(queue);
> > +	if (qlen > rx_batched)
> > +		goto drop;
> > +	__skb_queue_tail(queue, skb);
> > +	if (!more || qlen + 1 > rx_batched) {
> > +		__skb_queue_head_init(&process_queue);
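The quotes above break off mid-function. For readability, here is a sketch of the complete helper, reconstructed from the fragments quoted in this thread; treat it as an approximation of the posted patch rather than the verbatim diff. rx_batched is the module parameter the patch introduces to cap the batch size.

static int tun_rx_batched(struct tun_file *tfile, struct sk_buff *skb,
			  int more)
{
	struct sk_buff_head *queue = &tfile->sk.sk_write_queue;
	struct sk_buff_head process_queue;
	int qlen;
	bool rcv = false;

	spin_lock(&queue->lock);
	qlen = skb_queue_len(queue);
	if (qlen > rx_batched)
		goto drop;

	/* Queue the skb; flush only when the producer says no more
	 * packets are coming (!more) or the batch limit is reached.
	 */
	__skb_queue_tail(queue, skb);
	if (!more || qlen + 1 > rx_batched) {
		__skb_queue_head_init(&process_queue);
		skb_queue_splice_tail_init(queue, &process_queue);
		rcv = true;
	}
	spin_unlock(&queue->lock);

	if (rcv) {
		/* Hand the whole batch to the host stack in one go,
		 * which is where the cache-utilization win comes from.
		 */
		local_bh_disable();
		while ((skb = __skb_dequeue(&process_queue)))
			netif_receive_skb(skb);
		local_bh_enable();
	}
	return 0;

drop:
	spin_unlock(&queue->lock);
	kfree_skb(skb);
	return -EFAULT;
}

Note the design choice visible here: packets are parked on the socket's sk_write_queue under its own lock, then spliced to a private list so that netif_receive_skb() runs without the queue lock held.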
We can only process one packet at a time during sendmsg(). This often
leads to bad cache utilization under heavy load, so this patch does
some batching during rx before submitting packets to the host network
stack. This is done by accepting MSG_MORE as a hint from the sendmsg()
caller: if it is set, the packet is batched temporarily in a linked
list, and the whole batch is submitted to the network stack once
MSG_MORE is cleared.
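On the producer side, the in-kernel consumer of this hint is the sendmsg() path used by vhost_net. Below is a minimal sketch of how tun_sendmsg() could translate the caller's MSG_MORE flag into the new batching hint, assuming tun_get_user() gains a final "more" parameter as the patch suggests; the exact plumbing is an illustration, not the verbatim diff.

static int tun_sendmsg(struct socket *sock, struct msghdr *m,
		       size_t total_len)
{
	struct tun_file *tfile = container_of(sock, struct tun_file, socket);
	struct tun_struct *tun = __tun_get(tfile);
	int ret;

	if (!tun)
		return -EBADFD;

	/* MSG_MORE from the caller (e.g. vhost_net, which knows whether
	 * more descriptors are pending in the vring) becomes the hint
	 * that lets tun_rx_batched() keep accumulating packets.
	 */
	ret = tun_get_user(tun, tfile, m->msg_control, &m->msg_iter,
			   m->msg_flags & MSG_DONTWAIT,
			   m->msg_flags & MSG_MORE);
	tun_put(tun);
	return ret;
}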