pcap & bpf

2002-09-21 Thread Petri Helenius

(I'm sending a copy here since I'm running this on FreeBSD and got 
 no reply so far from the tcpdump folks)

Function pcap_open_live in pcap-bpf.c contains the code snippet below.

To me, this does not make much sense, because:
- if v is too big to be accommodated (either by configuration or by
  resources), BIOCSBLEN will fail; however, the code ignores the
  return code
- it then proceeds to BIOCSETIF, which will succeed with either the
  bufsize of 32768 or whatever the OS default is.

Suggestions:
- Do not touch the buffer size (at least without giving the option
  to specify the size)
- If some operating systems really need the buffer size touched,
  do BIOCGBLEN first to figure out what you got, and in any case
  don't make the bufsize smaller than it was
  (reason: doing high-speed capture with a 32k buffer is futile;
  a sketch of this appears after the code snippet below)

I statically linked with a patched library using large buffers and
it works happily; before that, the system dropped a few thousand
packets a minute.

Pete


	/*
	 * Try finding a good size for the buffer; 32768 may be too
	 * big, so keep cutting it in half until we find a size
	 * that works, or run out of sizes to try.
	 *
	 * XXX - there should be a user-accessible hook to set the
	 * initial buffer size.
	 */
	for (v = 32768; v != 0; v >>= 1) {
		/* Ignore the return value - this is because the call fails
		 * on BPF systems that don't have kernel malloc.  And if
		 * the call fails, it's no big deal, we just continue to
		 * use the standard buffer size.
		 */
		(void) ioctl(fd, BIOCSBLEN, (caddr_t)&v);

		(void)strncpy(ifr.ifr_name, device, sizeof(ifr.ifr_name));
		if (ioctl(fd, BIOCSETIF, (caddr_t)&ifr) >= 0)
			break;	/* that size worked; we're done */

		if (errno != ENOBUFS) {
			snprintf(ebuf, PCAP_ERRBUF_SIZE, "BIOCSETIF: %s: %s",
			    device, pcap_strerror(errno));
			goto bad;
		}
	}
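
For illustration, here is a minimal standalone sketch of the second
suggestion above: ask the kernel what it already gave you with BIOCGBLEN
and only ever grow the buffer before BIOCSETIF. Everything apart from the
ioctls is just an example (the 512k target, /dev/bpf0 and fxp0 are
arbitrary), so treat it as an illustration rather than a proposed patch:

	/*
	 * Sketch: open a BPF device, check the current buffer size with
	 * BIOCGBLEN, try to grow it with BIOCSBLEN, and only then bind
	 * the interface with BIOCSETIF.  Error handling is minimal.
	 */
	#include <sys/types.h>
	#include <sys/time.h>
	#include <sys/ioctl.h>
	#include <sys/socket.h>
	#include <net/if.h>
	#include <net/bpf.h>

	#include <err.h>
	#include <fcntl.h>
	#include <stdio.h>
	#include <string.h>
	#include <unistd.h>

	int
	main(int argc, char **argv)
	{
		const char *device = argc > 1 ? argv[1] : "fxp0";
		struct ifreq ifr;
		u_int cur, want, v;
		int fd;

		if ((fd = open("/dev/bpf0", O_RDWR)) < 0)
			err(1, "/dev/bpf0");

		if (ioctl(fd, BIOCGBLEN, &cur) < 0)
			err(1, "BIOCGBLEN");

		/* Ask for more, but never for less than the kernel already set. */
		want = 512 * 1024;		/* arbitrary example target */
		if (want < cur)
			want = cur;

		for (v = want; v >= cur && v != 0; v >>= 1)
			if (ioctl(fd, BIOCSBLEN, &v) >= 0)
				break;		/* this size was accepted */

		memset(&ifr, 0, sizeof(ifr));
		(void)strncpy(ifr.ifr_name, device, sizeof(ifr.ifr_name));
		if (ioctl(fd, BIOCSETIF, &ifr) < 0)
			err(1, "BIOCSETIF: %s", device);

		/* Report the buffer size the kernel actually granted. */
		if (ioctl(fd, BIOCGBLEN, &v) < 0)
			err(1, "BIOCGBLEN");
		printf("%s: bpf buffer size is %u bytes\n", device, v);

		close(fd);
		return (0);
	}

The same idea dropped into pcap_open_live() would simply replace the
hard-coded 32768 with the BIOCGBLEN/grow-only logic.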




Re: pcap & bpf

2002-09-21 Thread Richard A Steenbergen

On Sat, Sep 21, 2002 at 12:03:30PM +0300, Petri Helenius wrote:
> (I'm sending a copy here since I'm running this on FreeBSD and got 
>  no reply so far from the tcpdump folks)
> 
> Function pcap_open_live in pcap-bpf.c contains the code snippet below.
> 
> To me, this does not make much sense, because:
> - if v is too big to be accommodated (either by configuration or by
>   resources), BIOCSBLEN will fail; however, the code ignores the
>   return code

Read the comments and the rest of the code in the section you pasted.

/*
 * Try finding a good size for the buffer; 32768 may be too
 * big, so keep cutting it in half until we find a size
 * that works, or run out of sizes to try.
 *
 * XXX - there should be a user-accessible hook to set the
 * initial buffer size.
 */

It couldn't get any blunter if they used a hammer. :)

> - it then proceeds to BIOCSETIF, which will succeed with either the 
>   bufsize of 32768 or whatever the OS default is.
> 
> Suggestions:
> - Do not touch the buffer size (at least without giving the option 
>   to specify the size)

debug.bpf_bufsize: 4096
debug.bpf_maxbufsize: 524288

32k is already a bump up from the default of 4k, which, at the time it
was set (and hard coded), probably seemed "good enough". Obviously, as
interfaces have gotten faster, that number has become out of date. Yes,
they SHOULD make it pcap-user tunable (the comment even says so), but until
they do...  Well, it should be really simple to add a hook for
changing it, if you wanted to try submitting it to the pcap folks. :)
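
To illustrate one possible shape for such a hook (a hypothetical sketch,
not an existing libpcap interface; the PCAP_BUFSIZE environment variable
below is made up): read debug.bpf_maxbufsize with sysctlbyname(3) and use
that, or a user override, as the starting point of the existing halving
loop.

	/*
	 * Hypothetical helper (not part of libpcap): pick an initial BPF
	 * buffer size.  Honors a made-up PCAP_BUFSIZE environment variable
	 * and never asks for more than the kernel's debug.bpf_maxbufsize.
	 */
	#include <sys/types.h>
	#include <sys/sysctl.h>

	#include <stdlib.h>

	static u_int
	initial_bpf_bufsize(void)
	{
		u_int want = 32768;		/* pcap's current default */
		int maxbuf;
		size_t len = sizeof(maxbuf);
		char *env;

		if ((env = getenv("PCAP_BUFSIZE")) != NULL && atoi(env) > 0)
			want = (u_int)atoi(env);

		if (sysctlbyname("debug.bpf_maxbufsize", &maxbuf, &len,
		    NULL, 0) == 0 && maxbuf > 0 && want > (u_int)maxbuf)
			want = (u_int)maxbuf;

		return (want);
	}

The loop in pcap_open_live() would then start at
for (v = initial_bpf_bufsize(); v != 0; v >>= 1) instead of the
hard-coded 32768.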

-- 
Richard A Steenbergen <[EMAIL PROTECTED]>   http://www.e-gerbil.net/ras
PGP Key ID: 0x138EA177  (67 29 D7 BC E8 18 3E DA  B2 46 B3 D8 14 36 FE B6)




Re: pcap & bpf

2002-09-21 Thread Petri Helenius


> 32k is already a bump up from the default of 4k, which at the time that
> was set (and hard coded) probably seemed "good enough". Obviously as
> interfaces have gotten faster, that number has become out of date. Yes
> they SHOULD make it pcap-user tunable, the comment even says so, but until
> they do...  Well it should be really really simple to add a hook for
> changing it, if you wanted to try submitting it to the pcap folks. :)
>
My hope was that (since I didn't get a reply from the pcap folks)
this could be fixed in the FreeBSD repository, since that's the only
platform I care about :-)

Pete






Re: pcap & bpf

2002-09-21 Thread Neelkanth Natu

Hi,

--- Petri Helenius <[EMAIL PROTECTED]> wrote:
> (I'm sending a copy here since I'm running this on FreeBSD and got 
>  no reply so far from the tcpdump folks)
> 
> Function pcap_open_live in pcap-bpf.c contains the code snippet below.
> 
> To me, this does not make much sense, because:
> - if v is too big to be accommodated (either by configuration or by
>   resources), BIOCSBLEN will fail; however, the code ignores the
>   return code

BIOCSBLEN can fail only if the bpf is already attached to an
interface. Otherwise the code clamps the requested bufsize
between BPF_MINBUFSIZE and bpf_maxbufsize, so there is no way this could
really "fail".

> - it then proceeds to BIOCSETIF which will succeed either with the 
>   bufsize of 32768 or whatever is default in the OS.

On the other hand, it is possible for BIOCSETIF to fail in bpf_allocbufs(),
and the pcap code does check for ENOBUFS. So what the code snippet is doing
is entirely reasonable.

Making the initial bufsize requested by pcap user-configurable might be a
solution to your problem.
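
One more detail worth keeping in mind: whatever size the kernel ends up
granting, the reader has to fetch it back with BIOCGBLEN before calling
read(2), because bpf insists on a read buffer of exactly that size. A
rough fragment (variable names are mine) that would follow a successful
BIOCSETIF:

	/*
	 * Fragment (continues after a successful BIOCSETIF): read(2) on a
	 * bpf descriptor requires a buffer of exactly the size reported by
	 * BIOCGBLEN, so fetch it and allocate the read buffer to match.
	 */
	u_int bufsize;
	u_char *readbuf;
	ssize_t cc;

	if (ioctl(fd, BIOCGBLEN, (caddr_t)&bufsize) < 0)
		goto bad;		/* same error path as above */

	if ((readbuf = malloc(bufsize)) == NULL)
		goto bad;

	/* One read may return several packets, each prefixed by a bpf_hdr. */
	cc = read(fd, readbuf, bufsize);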

best
Neel

> 
> Suggestions:
> - Do not touch the buffer size (at least without giving the option 
>   to specify the size)
> - If some operating systems really need touching the buffersize,
>   do BIOCGBLEN first to figure out what you got and in any case
>   don't make the bufsize smaller than it was
>   (reason: doing highspeed capture with 32k buffer is futile)
> 
> I staticly linked with patched library with large buffers and 
> it works happily, before that the system dropped a few thousand
> packets a minute.
> 
> Pete
> 
> 
> /*
>  * Try finding a good size for the buffer; 32768 may be too
>  * big, so keep cutting it in half until we find a size
>  * that works, or run out of sizes to try.
>  *
>  * XXX - there should be a user-accessible hook to set the
>  * initial buffer size.
>  */
> for (v = 32768; v != 0; v >>= 1) {
> /* Ignore the return value - this is because the call fails
>  * on BPF systems that don't have kernel malloc.  And if
>  * the call fails, it's no big deal, we just continue to
>  * use the standard buffer size.
>  */
> (void) ioctl(fd, BIOCSBLEN, (caddr_t)&v);
> 
> (void)strncpy(ifr.ifr_name, device, sizeof(ifr.ifr_name));
> if (ioctl(fd, BIOCSETIF, (caddr_t)&ifr) >= 0)
> break;  /* that size worked; we're done */
> 
> if (errno != ENOBUFS) {
> snprintf(ebuf, PCAP_ERRBUF_SIZE, "BIOCSETIF: %s: %s",
> device, pcap_strerror(errno));
> goto bad;
> }
> }
> 





Re: ppp client-callback

2002-09-21 Thread Archie Cobbs

Michael Bretterklieber writes:
> do you have the intention to implement this in the near future?
> 
> >>Does mpd support client-callback?
> > 
> > No, sorry.

No, sorry.

-Archie

__
Archie Cobbs * Packet Design * http://www.packetdesign.com




Latency spike over VPN using SSH (delayed ack problem)

2002-09-21 Thread Dima Dorfman

I have a VPN setup where the client opens an SSH connection to the VPN
router and runs "ppp -direct client-vpn" (i.e., I'm tunneling a PPP
connection over SSH).  My configuration looks very similar to the
example of how to do this in share/examples/ppp/ppp.conf.sample.

Now, there are three computers: C is the VPN client, R is the VPN
router, and S is a server on the other side of the VPN.  After
establishing a VPN connection, if I SSH from C to S and run "ping C",
the first response time will be ~190 ms more than it should be.  Note
that this *only* happens if I connect *from* C to S and *then* run
ping; if I connect to S in another way and run ping, the latency spike
isn't present (I'm not sure how or if this is relevant, but I thought
I'd add it anyway).

C and R are usually connected over 802.11b (wireless), but the
symptoms are present regardless of how they're connected (I've tried
fast ethernet and WAN (Internet)).  Originally I suspected the
"Secure" (CPU-intensive crypto) part of SSH and PPP compression, but
neither turned out to be the cause; I turned off all PPP compression and
replaced ssh with rsh, and the problem remained.

Now, if I turn off delayed acks on C xor R, the latency spike drops
to ~95 ms.  If I turn them off on both C *and* R, the latency spike
disappears--hence the "delayed ack problem" part of the subject.
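
For reference, delayed ACKs on FreeBSD are controlled by the
net.inet.tcp.delayed_ack sysctl; a minimal sketch of flipping it
programmatically (just a convenience wrapper around running
"sysctl net.inet.tcp.delayed_ack=0" as root) looks like this:

	/*
	 * Sketch: turn FreeBSD's delayed-ACK behaviour off at run time.
	 * Equivalent to "sysctl net.inet.tcp.delayed_ack=0" (needs root).
	 */
	#include <sys/types.h>
	#include <sys/sysctl.h>

	#include <err.h>
	#include <stdio.h>

	int
	main(void)
	{
		int prev, off = 0;
		size_t len = sizeof(prev);

		if (sysctlbyname("net.inet.tcp.delayed_ack", &prev, &len,
		    &off, sizeof(off)) < 0)
			err(1, "net.inet.tcp.delayed_ack");

		printf("delayed_ack: %d -> %d\n", prev, off);
		return (0);
	}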

Just for reference, here's what the symptom looks like *with* delayed
acks:

dima@SERVER% ping CLIENT
PING CLIENT (192.168.4.193): 56 data bytes
64 bytes from 192.168.4.193: icmp_seq=0 ttl=63 time=193.025 ms
64 bytes from 192.168.4.193: icmp_seq=1 ttl=63 time=3.376 ms
64 bytes from 192.168.4.193: icmp_seq=2 ttl=63 time=3.420 ms
64 bytes from 192.168.4.193: icmp_seq=3 ttl=63 time=4.003 ms
64 bytes from 192.168.4.193: icmp_seq=4 ttl=63 time=5.393 ms
^C
--- CLIENT ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max/stddev = 3.376/41.843/193.025/75.594 ms

Note also that this isn't just for ICMP; the spike can occasionally be
"felt" in interactive sessions.

Now, my question is: Is this a known bug, and if it is, is there a
fix?  If someone wants tcpdumps, just let me know where (on which
machine), on what (which interface--do you want to see the ICMP
packets (inside the tunnel) or the SSH packets (outside the tunnel)),
and when to run them.

Thanks in advance,

Dima.




Intel Gigabit NIC questions

2002-09-21 Thread Vincent Poy

Greetings everyone:

I have a question about the Intel Gigabit NICs.  Other than price,
is there any difference in performance (full wire speed) between the
Pro/1000T and the Pro/1000MT?  Thanks.


Cheers,
Vince - [EMAIL PROTECTED] - Vice President    __ 
Unix Networking Operations - FreeBSD-Real Unix for Free / / / / |  / |[__  ]
WurldLink Corporation  / / / /  | /  | __] ]
San Francisco - Honolulu - Hong Kong  / / / / / |/ / | __] ]
HongKong Stars/Gravis UltraSound Mailing Lists Admin /_/_/_/_/|___/|_|[]
Almighty1@IRC - oahu.DAL.NET Hawaii's DALnet IRC Network Server Admin

