Re: ipsec with ipfw

2017-03-12 Thread Slawa Olhovchenkov
On Sat, Mar 11, 2017 at 09:53:39PM -0800, Ermal Luçi wrote:

> On Sat, Mar 11, 2017 at 2:16 PM, Slawa Olhovchenkov  wrote:
> 
> > On Sun, Mar 12, 2017 at 12:53:44AM +0330, Hooman Fazaeli wrote:
> >
> > > Hi,
> > >
> > > As you know, ipsec/setkey provides only a limited syntax for defining
> > > security policies: a single subnet/host, protocol number and optional
> > > port may be used to specify the traffic's source and destination.
> > >
> > > I was thinking about the idea of using ipfw as the packet selector
> > > for ipsec, much like it is used with dummynet. Something like:
> > >
> > > ipfw add 100 ipsec 2 tcp from  to  80,443,110,139
> > >
> > > What do you think? Are you interested in such a feature?
> > > Is it worth the effort? What are the implementation challenges?
> >
> > Security policies are the subject of the IKE protocol exchange; do you
> > plan to extend this protocol too?
> >
> 
> With the introduction of if_ipsec you can implement such tricks through
> routing.

1. Routing doesn't distribute port/protocol info.

2. A connected client doesn't have any preconfigured security policies;
it gets them from the server via the IKE protocol. How do you plan to
implement this for Windows/iOS/Android clients?
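
To make the setkey limitation and the if_ipsec suggestion above concrete,
here is a minimal sketch in setkey(8)/ifconfig(8) terms. All addresses,
the reqid and the selectors are hypothetical placeholders, and the
if_ipsec(4) options should be checked against the FreeBSD version in use;
this only illustrates the idea, and it still leaves key management (IKE)
to install the actual SAs, which is exactly the point raised above.

  # Today's SPD syntax: one subnet/host, one protocol and at most one
  # port per side of the selector (addresses are placeholders).
  setkey -c <<'EOF'
  spdadd 192.0.2.0/24[any] 198.51.100.0/24[80] tcp -P out ipsec
      esp/tunnel/203.0.113.1-203.0.113.2/require;
  EOF

  # Routing-based alternative with if_ipsec(4): whole prefixes are steered
  # into the tunnel by the routing table, but there are still no
  # port/protocol selectors.
  ifconfig ipsec0 create reqid 100
  ifconfig ipsec0 tunnel 203.0.113.1 203.0.113.2
  ifconfig ipsec0 inet 10.100.0.1 10.100.0.2
  route add -net 198.51.100.0/24 10.100.0.2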


Problem reports for freebsd-net@FreeBSD.org that need special attention

2017-03-12 Thread bugzilla-noreply
To view an individual PR, use:
  https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=(Bug Id).

The following is a listing of current problems submitted by FreeBSD users,
which need special attention. These represent problem reports covering
all versions including experimental development code and obsolete releases.

Status  |Bug Id | Description
+---+---
In Progress |165622 | [ndis][panic][patch] Unregistered use of FPU in k 
In Progress |203422 | mpd/ppoe not working with re(4) with revision 285 
In Progress |206581 | bxe_ioctl_nvram handler is faulty 
New |204438 | setsockopt() handling of kern.ipc.maxsockbuf limi 
New |205592 | TCP processing in IPSec causes kernel panic   
New |206053 | kqueue support code of netmap causes panic
New |213410 | [carp] service netif restart causes hang only whe 
New |215874 | [patch] [icmp] [mbuf_tags] teach icmp_error() opt 
Open|148807 | [panic] "panic: sbdrop" and "panic: sbsndptr: soc 
Open|193452 | Dell PowerEdge 210 II -- Kernel panic bce (broadc 
Open|194485 | Userland cannot add IPv6 prefix routes
Open|194515 | Fatal Trap 12 Kernel with vimage  
Open|199136 | [if_tap] Added down_on_close sysctl variable to t 
Open|202510 | [CARP] advertisements sourced from CARP IP cause  
Open|206544 | sendmsg(2) (sendto(2) too?) can fail with EINVAL; 
Open|211031 | [panic] in ng_uncallout when argument is NULL 
Open|211962 | bxe driver queue soft hangs and flooding tx_soft_ 
Open|212018 | Enable IPSEC_NAT_T in GENERIC kernel configuratio 

18 problems total for which you should take action.


bad throughput performance on multiple systems: Re: Fwd: Re: Disappointing packets-per-second performance results on a Dell PE R530

2017-03-12 Thread John Jasen
I think I am able to confirm Mr. Caraballo's findings.

I pulled a Dell PowerEdge 720 out of production, and upgraded it to
11-RELEASE-p8.

Currently, as in the R530, it has a single Chelsio T5-580, but has two
v2 Intel E5-26xx CPUs versus the newer ones in the R530.

Both ports are configured for jumbo frames, and lro/tso are off. One is
pointed at 172.16.2.0/24 as the load receivers; the other is pointed to
172.16.1.0/24 where the generators reside. Each side has 24 systems.

I've played around a little with the number of queues, cpuset interrupt
binding, and net.isr values -- the only difference was going from
pathetic scores (1.7 million packets-per-second) to absolutely pathetic
(1.3 million when QPI was hit).

In these runs, it seems that no matter what we try on the system, not
all the CPUs are engaged, and the receive queues are also unbalanced. As
an example, in the last run, only 4 of the CPUs were engaged, and
tracking rx queues using
https://github.com/ocochard/BSDRP/blob/master/BSDRP/Files/usr/local/bin/nic-queue-usage,
they range from 800k/second to 0/second, depending on the queues (this
run used Chelsio defaults of 8 rx queues/16 tx queues). Interrupts also
seem to confirm there is an imbalance, as current totals on the
'receive' Chelsio port range from 935,000 to 9,200,000 (vmstat -ai).

Any idea what's going on?
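
(A rough sketch of the knobs mentioned above, assuming a cxgbe(4)/cxl
interface; the sysctl and cpuset invocations are standard FreeBSD ones,
but the IRQ number, device instance and values are illustrative only and
need to be checked against vmstat -ai and cxgbe(4) on the actual box.)

  # netisr dispatch policy (default is 'direct'; 'hybrid' is another option):
  sysctl net.isr.dispatch=deferred
  # net.isr.maxthreads and net.isr.bindthreads are boot-time tunables
  # set in /boot/loader.conf.

  # Pin one NIC interrupt to one CPU; the IRQ number comes from vmstat -ai:
  cpuset -x 264 -l 4        # example: IRQ 264 -> CPU 4

  # Re-check how evenly the interrupts land afterwards:
  vmstat -ai | grep -i t5nex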


On 02/27/2017 09:13 PM, Caraballo-vega, Jordan A. (GSFC-6062)[COMPUTER
SCIENCE CORP] wrote:
> As a summary, we have a Dell R530 with a Chelsio T580 card with -CURRENT.




Re: bad throughput performance on multiple systems: Re: Fwd: Re: Disappointing packets-per-second performance results on a Dell PE R530

2017-03-12 Thread Slawa Olhovchenkov
On Sun, Mar 12, 2017 at 06:13:46PM -0400, John Jasen wrote:

> I think I am able to confirm Mr. Caraballo's findings.
> 
> I pulled a Dell PowerEdge 720 out of production, and upgraded it to
> 11-RELEASE-p8.
> 
> Currently, as in the R530, it has a single Chelsio T5-580, but has two
> v2 Intel E5-26xx CPUs versus the newer ones in the R530.
> 
> Both ports are configured for jumbo frames, and lro/tso are off. One is
> pointed at 172.16.2.0/24 as the load receivers; the other is pointed to
> 172.16.1.0/24 where the generators reside. Each side has 24 systems.
> 
> I've played around a little with the number of queues, cpuset interrupt
> binding, and net.isr values -- the only difference was going from
> pathetic scores (1.7 million packets-per-second) to absolutely pathetic
> (1.3 million when QPI was hit).
> 
> In these runs, it seems that no matter what we try on the system, not
> all the CPUs are engaged, and the receive queues are also unbalanced. As
> an example, in the last run, only 4 of the CPUs were engaged, and
> tracking rx queues using
> https://github.com/ocochard/BSDRP/blob/master/BSDRP/Files/usr/local/bin/nic-queue-usage,
> they range from 800k/second to 0/second, depending on the queues (this
> run used Chelsio defaults of 8 rx queues/16 tx queues). Interrupts also
> seem to confirm there is an imbalance, as current totals on the
> 'receive' Chelsio port range from 935,000 to 9,200,000 (vmstat -ai).
> 
> Any idea what's going on?

What traffic did you generate (TCP? UDP? ICMP? other?), and what is
reported in dmesg | grep txq?


Re: bad throughput performance on multiple systems: Re: Fwd: Re: Disappointing packets-per-second performance results on a Dell PE R530

2017-03-12 Thread John Jasen
On 03/12/2017 07:18 PM, Slawa Olhovchenkov wrote:
> On Sun, Mar 12, 2017 at 06:13:46PM -0400, John Jasen wrote:

>
> what traffic you generated (TCP? UDP? ICMP? other?), what reported in
> dmesg | grep txq ?

UDP traffic. dmesg reports 16 txq, 8 rxq -- which is the default for
Chelsio.
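
(For reference, a sketch of how the Chelsio queue count could be raised
so the rx queues can cover more of the CPUs, and how to re-check the
spread afterwards. The tunable names below are the 10G-port ones
documented in cxgbe(4) around 11.x; treat them, and the device names, as
assumptions to verify rather than a known-good recipe.)

  # /boot/loader.conf
  hw.cxgbe.nrxq10g="16"
  hw.cxgbe.ntxq10g="16"

  # After a reboot, confirm what the driver actually allocated:
  dmesg | grep -E 'rxq|txq'

  # ...and whether interrupts now spread more evenly across the queues:
  vmstat -ai | grep -i t5nex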

