Re: tcpdump filter not ignoring jail subnet

2015-03-06 Thread Harrison Grundy

On 03/05/15 23:09, Beeblebrox wrote:
> Hi. Thanks for the input.
> 
>> 192.168.2.97 is not a net. Any /32 is a host... even if it is 
>> anycast. So filter on "host 192.168.2.97".
> 
> I assume that specifying one of {src | dst} is not required and
> that "host 192.168.2.97" will remove all (in and out) from that
> IP?
> 
>> The real issue is that, while hostnames are allowed, I am not
>> sure whether they can be wildcards. That would require lookups at
>> capture time, and I don't think that is possible. At the very least,
>> the delays would make it fail. You could look up the addresses of
>> FreeBSD systems, or build a list of freebsd.org names. That
>> might work, but it would be a bit painful, especially since there
>> may be multiple addresses for a single name.
> 
> That's an excellent point - I had not considered that. The solution
> then would be to pipe the output through awk or a ready-made tool like
> sysutils/ccze, I think. I was planning on looking into
> smart-colorization anyway (for easy flagging), but as the second
> step of my little project. With this, I would have awk check
> against the whitelist, so that URLs would get included but
> filtered out by the awk pipe.
> 
> Thanks also to Ian for the off-list input. I do have a bit of a
> "brain-fart" problem with getting the filter to work however. What
> I posted is the 5th or 6th variation, and at this point I'm just
> chasing my tail. Here's what I'd like to monitor:
> 
> * I want none of the traffic from these displayed:
>   src net not 192.168.1.0/24 (the outward-facing NIC is on this subnet)
>   not ip6 (the above net pumps IPv6 chatter which I don't need)
>   host not 192.168.2.97 (my DNS jail running unbound + dnscrypt on 443)
> 
> * I don't need to monitor any of the traffic on these ports:
>   not port imap and not port imaps and not port 6667 (irc)
> 
> * With the exception of the above, I want to see all remaining traffic
> on host mybsd (src and dst; normally not necessary to specify, since
> we're listening on re0, which is the outward-facing NIC, but we also
> requested "net not" the entire subnet this NIC belongs to).
> 
> Thanks and Regards
> 

This seems to do what you want:

root@bsddt1241:/home/astrodog # tcpdump -w - src net not 192.168.1.0/24 | \
  tcpdump -r - -w - not ip6 | \
  tcpdump -r - -w - host not 192.168.2.97 | \
  tcpdump -r - not port imap and not port imaps and not port 6667

Terrible as it is...
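For what it's worth, the same predicates can also be combined into a single filter expression, so one tcpdump invocation compiles one BPF program instead of re-parsing the stream three extra times. A sketch using the interface and addresses from the thread (not tested against the original setup; the command is printed rather than run):

```shell
# Combine the thread's negated predicates with "and" into one filter.
FILTER='not src net 192.168.1.0/24 and not ip6 and not host 192.168.2.97 and not port imap and not port imaps and not port 6667'

# Print the full command; re0 is the outward-facing NIC from the thread.
echo tcpdump -i re0 "$FILTER"
```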

--- Harrison

___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "freebsd-net-unsubscr...@freebsd.org"


cpsw/atphy network drivers

2015-03-06 Thread Matt Dooner
Hello,

I am having some trouble configuring the network driver on a TI
AM335x-based CoM system
(http://www.compulab.co.il/products/computer-on-modules/cm-t335/). It
uses "the AM335x integrated Ethernet MAC coupled with the AR8033
RGMII Ethernet PHY from Atheros". U-Boot is able to find the device as
expected:

CM-T335w # mii device
MII devices: 'cpsw'
Current device: 'cpsw'

CM-T335w # mdio list
cpsw:
0 - AR8031/AR8033 <--> cpsw

CM-T335w # dhcp
link up on port 0, speed 100, half duplex
BOOTP broadcast 1
DHCP client bound to address 10.1.192.67
CM-T335w # ping 8.8.8.8
link up on port 0, speed 100, half duplex
Using cpsw device
host 8.8.8.8 is alive

And devinfo(8) reports the correct modules being loaded:

root@beaglebone:~ # devinfo
nexus0
  ofwbus0
simplebus0
  aintc0
  ti_scm0
  am335x_prcm0
  am335x_dmtimer0
  ti_adc0
  gpio0
gpioc0
gpiobus0
  uart0
  ti_edma30
  sdhci_ti0
mmc0
  mmcsd0
  cpsw0
miibus0
  atphy0
  ...

The interface does not appear to send or receive any traffic over the
physical link, and the driver does not report receiving any packets
at all. I have enabled debug mode on the cpsw driver:

root@beaglebone:~ # ifconfig cpsw0 debug
root@beaglebone:~ # ifconfig cpsw0 up
09:54:45 cpsw_ioctl SIOCSIFFLAGS: UP but not RUNNING; starting up
09:54:45 cpsw_init_locked
root@beaglebone:~ # dhclient cpsw0
09:54:56 cpsw_ifmedia_sts
09:54:56 cpsw_ioctl SIOCSIFFLAGS: UP & RUNNING (changed=0x0)
09:54:56 cpsw_init
09:54:56 cpsw_init_locked
DHCPDISCOVER on cpsw0 to 255.255.255.255 port 67 interval 5
09:54:56 cpsw_tx_enqueue Queueing TX packet: 1 segments + 0 pad bytes
09:54:57 cpsw_tx_dequeue TX removing completed packet
...

A DHCP address is never negotiated.

root@beaglebone:~ # ifconfig
cpsw0: flags=8847 metric 0 mtu 1500
options=8000b
ether 1c:ba:8c:ed:40:99
inet 0.0.0.0 netmask 0xff00 broadcast 255.255.255.255
09:58:57 cpsw_ifmedia_sts
09:58:57 cpsw_ifmedia_sts
09:58:57 cpsw_ifmedia_sts
media: Ethernet autoselect (100baseTX )
status: active
nd6 options=29
lo0: flags=8049 metric 0 mtu 16384
options=63
inet6 ::1 prefixlen 128
inet6 fe80::1%lo0 prefixlen 64 scopeid 0x2
inet 127.0.0.1 netmask 0xff00
nd6 options=21

When the board is connected to another computer running Wireshark, no
frames are recorded as having been transmitted over the interface. The
cpsw driver never reports receiving any packets, even when I use a tool
like Ostinato to craft frames addressed to the MAC of the NIC on the
board.

The network interface works perfectly in Debian Linux:

root@cm-debian:~# ethtool eth2

Settings for eth2:

Supported ports: [ TP AUI BNC MII FIBRE ]
Supported link modes:   10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Supported pause frame use: No
Supports auto-negotiation: Yes
Advertised link modes:  10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Advertised pause frame use: No
Advertised auto-negotiation: Yes
Speed: 100Mb/s
Duplex: Half
Port: MII
PHYAD: 0
Transceiver: external
Auto-negotiation: on
Current message level: 0x00000000 (0)
Link detected: yes

root@cm-debian:~# ethtool -i eth2
driver: TI CPSW Driver v1.0
version: 1.0
firmware-version:
bus-info: cpsw
supports-statistics: no
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: no

Can anyone recommend some next steps for debugging this network
interface in FreeBSD? I seem to have exhausted the options I've found
in the handbook, man pages, and Google searches.
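A few next steps worth trying, sketched as a checklist. These commands only make sense on the board itself (cpsw0 must exist), so they are collected in a string here rather than executed, and the exact sysctl nodes are an assumption about what the driver exposes:

```shell
# Candidate diagnostics; kept in a string because they need the real
# cpsw0 hardware to be meaningful.
CHECKLIST='
ifconfig cpsw0 media       # what the driver thinks the PHY negotiated
sysctl dev.cpsw            # per-device knobs/stats, if the driver exposes any
tcpdump -i cpsw0 -e -n     # does any frame reach BPF at all?
devinfo -rv                # confirm atphy0 attached with sane resources
'
echo "$CHECKLIST"
```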

This may be more relevant to freebsd-embedded, but in case anyone is
curious, my freebsd-crochet board config is forked at
https://github.com/MattDooner/crochet-freebsd/ and my u-boot changes
against 2014.04 are at https://github.com/MattDooner/u-boot

Cheers,
Matt


RE: Network interrupt and NAPI in FreeBSD?

2015-03-06 Thread Wei Hu
Many thanks, Luigi! We are measuring network performance in a VM (Hyper-V), 
using the netvsc virtual NIC device and its own driver. The Linux VM uses a 
similar virtual device. The drivers on both Linux and FreeBSD have TSO/LRO 
support. With just one network queue, we found the throughput is higher on 
Linux (around 2.5 - 3 Gbps) than on FreeBSD (just around 1.6 Gbps) with a 10Gb 
NIC. If the INVARIANTS option is disabled, FreeBSD can achieve 2 - 2.3 Gbps. We 
also observed a much higher interrupt rate on FreeBSD.

Thanks for all the suggestions. Do you think netmap could help in this case?

Wei


From: rizzo.un...@gmail.com [mailto:rizzo.un...@gmail.com] On Behalf Of Luigi 
Rizzo

> Hi,
>
> I am working on network driver performance for Hyper-V. I noticed the network 
> interrupt rate on FreeBSD is significantly higher than on Linux in the same 
> Hyper-V environment. The iperf test also shows that FreeBSD's performance is 
> not as good as Linux's. Linux has NAPI built in, which can avoid a lot of 
> interrupts on a heavily loaded system. I am wondering whether FreeBSD also 
> supports NAPI in its network stack?
>
> Also any thought on the network performance in general?

I suppose you are referring to network performance in a VM, since the factors 
that impact performance there are different from those on bare metal.
The behaviour of course depends a lot on the NIC and backend that you are using, 
so if you could be more specific (e1000? virtio?), that would help.

Please talk to me (even privately, if you prefer), because we have done a lot of 
work on enhancing performance in VMs, covering qemu, xen, and bhyve, and it is 
surely applicable to Hyper-V as well. And while the use of netmap/VALE gives up 
to a 5-10x performance boost, there is another factor of 2-5 that can be gained 
even without netmap. Details at info.iet.unipi.it/~luigi/research.html

On the specific NAPI question:
we do not have NAPI, but in some NIC drivers the interrupt service routine will 
spin until it is out of work, which helps reduce load.
We often rely on interrupt moderation on the NIC to reduce interrupt rates and 
give work to the ISR in batches. Unfortunately, moderation is often not 
emulated in hypervisors (e.g. we pushed it into qemu a couple of years ago for 
the e1000).

An alternative mechanism (supported in some of our network drivers, and trivial 
to add to others) is "device polling", which I introduced some 15 years ago and 
which finds new meaning in a VM world because it removes device interrupts and 
polls the NIC on timer interrupts instead.
This circumvents the lack of interrupt moderation and gives surprisingly good 
results. The caveat is that you need a reasonably high HZ value to avoid 
excessive latency, and the default HZ=1000 is sometimes turned down to 100 in a 
VM. You should probably override that.
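As a sketch of the knobs involved (assuming a custom kernel build; "hn0" is a guess at the Hyper-V netvsc interface name, and polling is only effective if the driver actually implements it):

```
# Kernel config: polling must be compiled in
options DEVICE_POLLING
options HZ=1000

# /boot/loader.conf: keep HZ high enough for acceptable polling latency
kern.hz="1000"

# /etc/rc.conf: enable polling on the interface at boot
ifconfig_hn0="DHCP polling"
```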

Depending on the performance tests you run, there might be other things that 
cause performance differences, such as support for TSO/LRO offloading on the 
backend (usually with virtio, or whatever your backend is), which lets the 
guest VM ship large 64 KB segments through the software switch.

cheers
luigi

