[Bug 224795] vlan interfaces created off tap devices do not work

2018-01-01 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=224795

Mark Linimon  changed:

   What            |Removed                  |Added
   Assignee        |freebsd-b...@freebsd.org |freebsd-net@FreeBSD.org



Re: Linux netmap memory allocation

2018-01-01 Thread Vincenzo Maffione
Hi,
  If you have 32 NICs you should open 32 netmap file descriptors (and you
should not specify 64 in nr_arg1 or 256 in nr_arg3; those are for different
use cases). Also, since you want to do zero-copy you must not specify a
separate memory area (nr_arg2), but use the same one for all ports.
You may want to use the high-level API nm_open():
https://github.com/luigirizzo/netmap/blob/master/sys/net/netmap_user.h#L307

You may also want to look at the netmap tutorial to get a better idea of
how the API works (https://github.com/vmaffione/netmap-tutorial).
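
As an illustration, here is a minimal sketch (not code from this thread;
NUM_NICS, the interface-name array and the open_ports() helper are
assumptions made for the example) of the usual nm_open() pattern for
opening several NICs so that they all share one memory region, which is
what zero-copy forwarding requires:

#include <stdio.h>
#include <net/if.h>
#define NETMAP_WITH_LIBS
#include <net/netmap_user.h>

#define NUM_NICS 32

struct nm_desc *ports[NUM_NICS];

int open_ports(char *ifnames[NUM_NICS])
{
    char name[IFNAMSIZ + 16];

    for (int i = 0; i < NUM_NICS; i++) {
        snprintf(name, sizeof(name), "netmap:%s", ifnames[i]);
        /* The first port creates the mmap; the others reuse it via
         * NM_OPEN_NO_MMAP, so all rings and buffers end up in the
         * same memory region. */
        ports[i] = nm_open(name, NULL,
                           i ? NM_OPEN_NO_MMAP : 0,
                           i ? ports[0] : NULL);
        if (ports[i] == NULL)
            return -1;
    }
    return 0;
}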

Cheers,
  Vincenzo

2017-12-28 18:34 GMT+01:00 Charlie Smurthwaite:

> Hi,
>
> I'm just starting to use netmap and it is my intention to do zero-copy
> forwarding of frames between a large number of NICs. I am using Intel
> i350 (igb) on Linux. I therefore require a large memory area for rings
> and buffers.
>
> My calculation:
> 32 NICs * 2 rings (TX+RX) * 256 frames * 2048 bytes = 32MB
>
> I am currently having a problem (or perhaps just a misunderstanding)
> regarding allocation of this memory. I am attempting to use the
> following code:
>
> void thread_main(int thread_id) {
>   struct nmreq req; // A struct for the netmap request
>   int fd;   // File descriptor for netmap socket
>   void * mem;   // Pointer to allocated memory area
>
>   fd = open("/dev/netmap", 0); // Open a generic netmap socket
>   strcpy(req.nr_name, "enp8s0f0"); // Copy NIC name into request
>   req.nr_version = NETMAP_API; // Set version number
>   req.nr_flags = NR_REG_ONE_NIC;   // We will be using a single hw ring
>
>   // Select ring 0, disable TX on poll
>   req.nr_ringid = NETMAP_NO_TX_POLL | NETMAP_HW_RING | 0;
>
>   // Ask for 64 additional rings to be allocated (32 * (TX+RX))
>   req.nr_arg1 = 64;
>
>   // Allocate a separate memory area for each thread
>   req.nr_arg2 = 10 + thread_id;
>
>   // Ask for additional buffers (256 per ring)
>   req.nr_arg3 = 64*256;
>
>   // Initialize port
>   ioctl(fd, NIOCREGIF, &req);
>
>   // Check the allocated memory size
>   printf("memsize: %u\n", req.nr_memsize);
>   // Check the allocated memory area
>   printf("nr_arg2: %u\n", req.nr_arg2);
> }
>
> The output is as follows:
>
> memsize: 4206859
> nr_arg2: 10
>
> This is far short of the amount of memory I am hoping to be allocated.
> Am I doing something wrong, or is this simply an indication that the
> driver is unwilling to allocate more than 4MB?
>
> A secondary (related) problem is that if I don't set arg1,arg2,arg3 in
> my code (ie they will be zero), then I get varying output (it varies
> between each of the following):
>
> memsize: 4206843
> nr_arg2: 0
>
> memsize: 343019520
> nr_arg2: 1
>
> Any pointers would be appreciated. Thanks!
>
> Charlie
>
>
> Charlie Smurthwaite
> Technical Director
>
> tel.  email. charlie@atech.media web.
> https://atech.media
>
> This e-mail has been sent by aTech Media Limited (or one of its associated
> group companies, Dial 9 Communications Limited or Viaduct Hosting Limited).
> Its contents are confidential; therefore, if you have received this message
> in error, we would appreciate it if you could let us know and delete the
> message. aTech Media Limited is a UK limited company, registration number
> 5523199. Dial 9 Communications Limited is a UK limited company,
> registration number 7740921. Viaduct Hosting Limited is a UK limited
> company, registration number 8514362. All companies are registered at Unit
> 9 Winchester Place, North Street, Poole, Dorset, BH15 1NX.



-- 
Vincenzo Maffione


Re: Linux netmap memory allocation

2018-01-01 Thread Charlie Smurthwaite

Hi,

Thank you for your reply. I was able to resolve this.

1) I do indeed open one FD per NIC.
2) I no longer specify nr_arg1, nr_arg2, or nr_arg3. Instead I just
verify that all NICs return the same nr_arg2, so that the memory is
shared between them.
3) I properly initialized my memory; my failure to do so was causing me
a lot of confusion.


The resulting memory space is large enough for all the NICs, and 
everything works perfectly with zero-copy forwarding, great!
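
For reference, a minimal sketch of the corrected registration described
in points 1-3 above (an assumed reconstruction, not the actual code; the
open_nic() helper and its error handling are purely illustrative):

#include <fcntl.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <net/if.h>
#include <net/netmap.h>

/* Register one NIC; returns the fd, or -1 on error. *arg2 receives the
 * allocator id so the caller can verify it is the same for every NIC. */
static int open_nic(const char *ifname, uint16_t *arg2)
{
    struct nmreq req;
    int fd = open("/dev/netmap", O_RDWR);
    if (fd < 0)
        return -1;

    memset(&req, 0, sizeof(req));            /* the missing initialization */
    strncpy(req.nr_name, ifname, sizeof(req.nr_name) - 1);
    req.nr_version = NETMAP_API;
    req.nr_flags   = NR_REG_ONE_NIC;         /* a single hw ring pair */
    req.nr_ringid  = 0;                      /* ring 0; nr_arg1/2/3 stay 0 */

    if (ioctl(fd, NIOCREGIF, &req) < 0) {    /* check the return value too */
        close(fd);
        return -1;
    }
    *arg2 = req.nr_arg2;                     /* must match across all NICs */
    return fd;
}

Every call should come back with the same nr_arg2; if it does not, the
ports sit in different memory regions and zero-copy between them will
not work.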


The only thing I am still having trouble with is the ability to
simultaneously trigger a TX and an RX sync on all NICs. I have tried
select, poll, and epoll, and in all cases the RX rings are updated but
the TX rings are not, and TX packets are not pushed out (this occurs in
both native and emulated netmap modes). I notice the documentation says
"Note that on epoll and kqueue, NETMAP_NO_TX_POLL and NETMAP_DO_RX_POLL
only have an effect when some event is posted for the file descriptor.",
but the behaviour seems to be the same with poll and select as well as
epoll; perhaps this is a Linux-specific implementation detail?


I have also found that all of these mechanisms seem to incur a very high 
cost in terms of CPU time (making them no more efficient than busy 
waiting at 1Mpps+). My current approach is as follows, but I feel like 
there should be a better option:


    for (int n = 0; n < num_nics; n++) {
      // usleep(10); // More CPU time seems to be saved with a careful
      // sleep than with select/poll/epoll
      ioctl(fds[n], NIOCTXSYNC);
      ioctl(fds[n], NIOCRXSYNC);
      rxring = rxrings[n];
      while (!nm_ring_empty(rxring)) {
        // Forward any packets waiting in this NIC's RX ring to the
        // appropriate TX ring
      }
    }

Thanks again,

Charlie



Re: Linux netmap memory allocation

2018-01-01 Thread Vincenzo Maffione
If you are using poll() or select(), you should not call ioctl(NIOC*XSYNC):
the txsync/rxsync operations are performed automatically inside the
poll()/select() syscall (at least assuming you did not specify
NETMAP_NO_TX_POLL).
Also, whether netmap does or does not call txsync/rxsync on certain rings
depends on the parameters passed to nm_open().
Make sure you check nm_ring_space(txring) when forwarding.
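
To make that concrete, here is an illustrative poll()-driven forwarding
loop (a sketch, not code from the thread; ports[], num_ports and
dst_port() are assumed to exist, and the single-ring setup is
simplified). With the default registration (no NETMAP_NO_TX_POLL), a
POLLIN poll both updates the RX rings and flushes the TX rings, so the
NIOC*XSYNC ioctls are unnecessary, and nm_ring_space() guards the
destination ring:

#include <poll.h>
#include <stdint.h>
#define NETMAP_WITH_LIBS
#include <net/netmap_user.h>

extern struct nm_desc *ports[];     /* descriptors from nm_open() */
extern int dst_port(int src);       /* hypothetical forwarding decision */

void forward_loop(int num_ports)
{
    struct pollfd pfd[num_ports];

    for (int i = 0; i < num_ports; i++) {
        pfd[i].fd = ports[i]->fd;
        pfd[i].events = POLLIN;     /* RX sync; TX rings are flushed too,
                                       since NETMAP_NO_TX_POLL is not set */
    }

    for (;;) {
        poll(pfd, num_ports, 1000);             /* replaces NIOC*XSYNC */

        for (int i = 0; i < num_ports; i++) {
            struct netmap_ring *rx = NETMAP_RXRING(ports[i]->nifp, 0);
            struct netmap_ring *tx =
                NETMAP_TXRING(ports[dst_port(i)]->nifp, 0);

            /* forward while there is input and room in the tx ring */
            while (!nm_ring_empty(rx) && nm_ring_space(tx) > 0) {
                struct netmap_slot *rs = &rx->slot[rx->cur];
                struct netmap_slot *ts = &tx->slot[tx->cur];
                uint32_t idx = ts->buf_idx;

                ts->buf_idx = rs->buf_idx;      /* zero-copy: swap buffers */
                ts->len     = rs->len;
                ts->flags  |= NS_BUF_CHANGED;
                rs->buf_idx = idx;
                rs->flags  |= NS_BUF_CHANGED;

                rx->head = rx->cur = nm_ring_next(rx, rx->cur);
                tx->head = tx->cur = nm_ring_next(tx, tx->cur);
            }
        }
    }
}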

Cheers,
  Vincenzo



Re: Linux netmap memory allocation

2018-01-01 Thread Charlie Smurthwaite


My code is here:
https://github.com/catphish/netmap-router/blob/58a9b957c19b0a012088c491bd58bc3161a56ff1/router.c

Specifically, if the ioctl call at line 92 is removed, the code does not work
(packets are not transmitted, or are only transmitted when the buffer is full;
which of the two behaviours occurs seems to be random). However, I would expect
it to work because I do not specify NETMAP_NO_TX_POLL, and I would therefore
hope that the poll() call on line 80 would have the same effect.

I hope this all makes sense, and again, I hope I have simply missed something
in the nmreq I pass to NIOCREGIF.

It is worth mentioning that, with the exception of this problem, I am getting
extremely good results from this code and netmap in general.

Charlie




Multiple instances of hostapd?

2018-01-01 Thread Victor Sudakov
Dear Colleagues,

I would like to run multiple instances of hostapd, one per wlanX
interface. I see some provisions for multiple instances inside the
/etc/rc.d/hostapd file, but I cannot work out the correct rc.conf syntax
beyond the single-instance hostapd_enable="YES".

I don't see it documented anywhere either.

Thank you for any input.

-- 
Victor Sudakov,  VAS4-RIPE, VAS47-RIPN
AS43859


Re: VLANing between jails not segmenting traffic

2018-01-01 Thread Julian Elischer

On 31/10/17 5:26 am, Eugene Grosbein wrote:

31.10.2017 4:08, Farhan Khan wrote:

Hi all,

I am trying to experiment with setting up two jails on different VLANs, but 
have not been able to segment traffic.

My configuration was to create vlan1 for jail1 and vlan2 for jail2.

I did the following commands:
ifconfig vlan1 create vlan 1 vlandev em0
ifconfig vlan1 10.1.0.1/24
ifconfig vlan2 create vlan 2 vlandev em0
ifconfig vlan2 10.2.0.1/24

Within each jail, I set the interface to be vlan1 and vlan2 and assigned them 
the IP addresses 10.1.0.2/24 and 10.2.0.2/24, respectively.

I still have connectivity between the two VLANs.

Oddly enough, jail1 with IP 10.1.0.2 does not even have an outbound static route at all.
An `ifconfig` shows a netmask of 0xffffff00 (/24), so I would expect it to be "unable
to route". Yet it can even connect to the external interface's IP address. At a minimum
it should not even know how to reach the 10.2.0.0/24 network at all.

I was advised that its connectivity is because Jails use the base system's 
routing table. If so, how could one possibly separate network traffic? That's 
the entire purpose of VLANing.

I have been advised to use pf to prevent that, but shouldn't VLANing provide 
that separation mechanism? I do not know what I might be doing wrong here.

It seems you are looking for isolated network stacks for the jails, each with
its own routing table etc.
You need "options VIMAGE" in your kernel and have to create the jails with the
vnet option (man jail) to obtain this feature.

So, a couple of months later, did you try out VIMAGE?
It's designed to give you EXACTLY what you are looking for.
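
For completeness, one common way to set that up (the paths, names and
addresses below are hypothetical, not from this thread) is to build a
kernel with "options VIMAGE" and give each jail its own vnet plus its
vlan interface in /etc/jail.conf:

jail1 {
    host.hostname = "jail1";
    path = "/usr/jails/jail1";
    vnet;                       # give the jail its own network stack
    vnet.interface = "vlan1";   # move vlan1 into the jail's vnet
    exec.start = "/bin/sh /etc/rc";
    exec.stop = "/bin/sh /etc/rc.shutdown";
}

jail2 {
    host.hostname = "jail2";
    path = "/usr/jails/jail2";
    vnet;
    vnet.interface = "vlan2";
    exec.start = "/bin/sh /etc/rc";
    exec.stop = "/bin/sh /etc/rc.shutdown";
}

Each jail then configures its own address inside its vnet (e.g.
ifconfig vlan1 10.1.0.2/24 inside jail1) against its own routing table,
so traffic is no longer short-circuited through the host's stack.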






Re: Multiple instances of hostapd?

2018-01-01 Thread Jim Thompson
https://lists.freebsd.org/pipermail/freebsd-wireless/2015-January/005345.html
