On 05/24/12 18:55, Kevin Oberman wrote:
This is, of course, on a 10G interface. On 7.3 there is little
Hi Kevin,
What you're seeing looks almost like a checksum is bad, or
there is some other packet damage. Do you see any
error counters increasing if you run netstat -s before
and after the
On 05/30/12 10:59, Colin Percival wrote:
Hi all,
The Xen virtual network interface has an issue (ok, really the issue is with
the linux back-end, but that's what most people are using) where it can't
handle scatter-gather writes with lots of pieces, aka. long mbuf chains.
This currently bites us
On 05/30/12 18:35, Colin Percival wrote:
On 05/30/12 08:30, Andrew Gallatin wrote:
On 05/30/12 10:59, Colin Percival wrote:
The Xen virtual network interface has an issue (ok, really the issue is with
the linux back-end, but that's what most people are using) where it can't
hand
On 05/28/12 12:12, Luigi Rizzo wrote:
I am doing some experiments with implementing a software bridge
between virtual machines, using netmap as the communication API.
I have a first prototype up and running and it is quite fast (10 Mpps
with 60-byte frames, 4 Mpps with 1500 byte frames, compared
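For readers unfamiliar with the netmap API mentioned above, here is a minimal
receive-loop sketch using the nm_open()/nm_nextpkt() convenience wrappers from
<net/netmap_user.h>. The interface name and the wrapper calls are assumptions
(the wrappers vary across netmap versions), so treat this as illustrative, not
as Luigi's actual bridge code.

#define NETMAP_WITH_LIBS
#include <net/netmap_user.h>
#include <poll.h>
#include <stdio.h>

int
main(void)
{
	struct nm_desc *d;
	struct nm_pkthdr h;
	struct pollfd pfd;
	unsigned char *buf;

	d = nm_open("netmap:em0", NULL, 0, NULL);	/* attach em0 in netmap mode */
	if (d == NULL)
		return (1);
	pfd.fd = d->fd;
	pfd.events = POLLIN;
	for (;;) {
		poll(&pfd, 1, -1);			/* block until frames arrive */
		while ((buf = nm_nextpkt(d, &h)) != NULL)
			printf("received %u bytes\n", h.len);	/* a bridge would forward here */
	}
	/* not reached */
	nm_close(d);
	return (0);
}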
On 06/03/12 01:18, Kevin Oberman wrote:
What can I say but that you are right. When I looked at the interface
stats I found that the link overflow drops were through the roof! This
confuses me a bit since the traffic is outbound and I would assume
Indeed, link overflow is incoming traffic that
On 06/03/12 12:51, Colin Percival wrote:
On 05/30/12 08:30, Andrew Gallatin wrote:
On 05/30/12 10:59, Colin Percival wrote:
The Xen virtual network interface has an issue (ok, really the issue is with
the linux back-end, but that's what most people are using) where it can't
hand
lini...@freebsd.org wrote:
Synopsis: [mxge] [panic] panics since mxge(4) update
Responsible-Changed-From-To: freebsd-net->gallatin
Responsible-Changed-By: linimon
Responsible-Changed-When: Sat Mar 13 19:56:17 UTC 2010
Responsible-Changed-Why:
Drew wants these PRs.
http://www.freebsd.org/cgi/q
Murat Balaban [mu...@enderunix.org] wrote:
>
> Much of the FreeBSD networking stack has been made parallel in order to
> cope with high packet rates at 10 Gig/sec operation.
>
> I've seen good numbers (near 10 Gig) in my tests involving TCP/UDP
> send/receive. (latest Intel driver).
>
> As far
David Malone wrote:
On Mon, May 10, 2010 at 11:02:41AM -0400, Andrew Gallatin wrote:
I think something may be holding onto an mbuf after free,
then re-freeing it. But only after somebody else allocated
it. I was hoping that the mbuf double free referenced
above was the smoking gun, but it
Alexander Sack wrote:
<...>
>> Using this driver/firmware combo, we can receive minimal packets at
>> line rate (14.8Mpps) to userspace. You can even access this using a
>> libpcap interface. The trick is that the fast paths are OS-bypass,
>> and don't suffer from OS overheads, like lock content
Alexander Sack wrote:
> On Fri, May 14, 2010 at 10:07 AM, Andrew Gallatin
wrote:
>> Alexander Sack wrote:
>> <...>
>>>> Using this driver/firmware combo, we can receive minimal packets at
>>>> line rate (14.8Mpps) to userspace. You can even acc
Alexander Sack wrote:
To use DCA you need:
- A DCA driver to talk to the IOATDMA/DCA pcie device, and obtain the tag
table
- An interface that a client device (eg, NIC driver) can use to obtain
either the tag table, or at least the correct tag for the CPU
that the interrupt
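To make the shape of that interface concrete, here is a hypothetical sketch of
the kind of KPI being described. The names (dca_get_tag(), dca_enable()) and
signatures are invented for illustration; nothing like this exists in the tree.

struct dca_softc;			/* driver for the IOATDMA/DCA PCIe device */

/* Return the prefetch-hint tag the chipset expects for packets whose
 * interrupt will be serviced on 'cpu'. */
uint8_t	dca_get_tag(struct dca_softc *sc, int cpu);

/* Ask the DCA device to enable hints on behalf of a client NIC. */
int	dca_enable(struct dca_softc *sc, device_t nic);

/* A NIC driver would then stamp the tag into its RX descriptors, e.g.
 *	rxd->flags |= DCA_TAG(dca_get_tag(sc, intr_cpu));
 * where DCA_TAG() and the descriptor layout are, again, hypothetical. */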
Bjoern A. Zeeb wrote:
This is kind of a heads up that from the time 8 will be branched off
and HEAD will be 9 all new code should
1) have feature parity for INET and INET6 where applicable
As a sort of side-note, what about feature parity for INET6 for
existing IPv4 features like TSO? Who is
Bjoern A. Zeeb wrote:
As a sort of side-note, what about feature parity for INET6 for
existing IPv4 features like TSO? Who is working on that?
Ok, maybe we should write down the big list now. What all can we have?
What do we already have? What do we need? What needs to be changed?
IPv4 CSUM
Bjoern A. Zeeb wrote:
if_mxge:
mxge_rx_csum() has one in_pseudo(). The function and callers
already seem to know how to deal with results in case the csum can't
be validated. So this should be a simple #ifdef INET wrapping here;
side note: the tcpudp_csum
(apologies if this is the second copy you get, I've been having
email issues)
Bjoern A. Zeeb wrote:
>> As a sort of side-note, what about feature parity for INET6 for
>> existing IPv4 features like TSO? Who is working on that?
>
> Ok, maybe we should write down the big list now. What all can we
Bjoern A. Zeeb wrote:
On Fri, 12 Jun 2009, Navdeep Parhar wrote:
On Fri, Jun 12, 2009 at 10:56:31AM +, Bjoern A. Zeeb wrote:
On Fri, 12 Jun 2009, Pyun YongHyeon wrote:
Hi,
Yeah, there are no checksum offloading support for IPv6 under
FreeBSD so there are no cases the frames are IPv6 whe
Bjoern A. Zeeb wrote:
>>> if there is no INET there should be no LRO for now, the capabilities
>>> not advertised, etc. Be prepared in case LRO will arrive for IPv6.
>>
>> As to LRO & IPV6... I was going to port our LRO for IPv6,
>> but discovered the state of IPv6 in FreeBSD is so disgraceful
>
Michael Tuexen wrote:
> I'm not sure if we need additional IFCAP_RXCSUM6 IFCAP_TXCSUM6
> capabilities... Why would we want to enable IPv4 offloading and
> not IPv6 or vice versa?
I'd assume that some older hardware supports IPv4 offloads, but
might not have support for IPv6 offloads.
Drew
Does anybody have a free, in-kernel tool to generate packets quickly
and send them out a particular ethernet interface on FreeBSD?
Something similar to pktgen on linux?
I'm trying to exercise just the send-side of a programmable firmware-based
NIC. The receive side of the NIC firmware is not yet w
Andre Oppermann writes:
>
> netgraph/ng_source.c
>
> Doesn't have a man page though.
>
Actually it does! It's just never installed..
I'm glad there is a manpage, as I'm a netgraph newbie.
cvs status ng_source.4
===
File: ng
Don Bowman writes:
> From: [EMAIL PROTECTED]
> > [mailto:[EMAIL PROTECTED] Behalf Of Andrew Gallatin
> > Sent: September 10, 2004 19:08 PM
> > To: [EMAIL PROTECTED]
> > Subject: packet generator
> >
> > Does anybody have a free, in-kernel tool to gen
Don Bowman writes:
> From: [EMAIL PROTECTED]
> > [mailto:[EMAIL PROTECTED] Behalf Of Andrew Gallatin
> > Sent: September 10, 2004 19:08 PM
> > To: [EMAIL PROTECTED]
> > Subject: packet generator
> >
> > Does anybody have a free, in-kernel tool to gen
Andrew Gallatin writes:
> xmit routine was called 683441 times. This means that the queue was
> only a little over two packets deep on average, and vmstat shows idle
> time. I've tried piping additional packets to nghook mx0:orphans
> input, but that does not seem to i
Andre Oppermann writes:
>
> Regarding your measurements, did you measure the bandwidth as reported
> by Netperf? Is a FreeBSD box on both sides (you mentioned Linux)?
Yes, all the numbers were in Mb/sec. The sender was running
linux-2.6.6 (also SMP on a single HTT P4).
Drew
Andre Oppermann writes:
>
> I've got some excellent review feedback from Mike Spengler and he found
> an off-by-one queue limit tracking error.
>
> http://www.nrg4u.com/freebsd/tcp_reass-20041213.patch
>
Here are the same tests running your new patch in comparison to a
stock 6.x kernel
Andre Oppermann writes:
>
> I have already the next round in the works which is optimized even more
> by merging consecutive mbuf chains together (at the moment I have packet
> segment chains which have a direct pointer to the mbuf at the end of the
> chain) and which get passed in one go t
Andre Oppermann writes:
> I've totally rewritten the TCP reassembly function to be a lot more
> efficient. In tests with normal bw*delay products and packet loss
> plus severe reordering I've measured an improvement of at least 30% in
> performance. For high and very high bw*delay product lin
Speaking of net.isr, is there any reason why if_simloop() calls
netisr_queue() rather than netisr_dispatch()?
Drew
Robert Watson writes:
>
> On Wed, 12 Oct 2005, Andrew Gallatin wrote:
>
> > Speaking of net.isr, is there any reason why if_simloop() calls
> > netisr_queue() rather than netisr_dispatch()?
>
> Yes -- it's basically to prevent recursion for loopback
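For context, these are the two netisr entry points being contrasted; the
function names are the real KPI, but the snippet is only a sketch of how a
caller uses them, not the if_simloop() source.

	/* Deferred: enqueue the mbuf for the netisr software-interrupt thread
	 * to pick up later.  Safe even when the caller is already inside the
	 * stack, which is why if_simloop() goes this way. */
	netisr_queue(NETISR_IP, m);

	/* Direct dispatch: run ip_input() in the caller's context.  On the
	 * loopback path the caller is ip_output() itself, so dispatching
	 * directly would recurse back into the stack. */
	netisr_dispatch(NETISR_IP, m);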
Garrett Wollman writes:
> <[EMAIL PROTECTED]> said:
>
> > Right now, at least, it seems to work OK. I haven't tried witness,
> > but a non-debug kernel shows a big speedup from enabling it. Do
> > you think there is a chance that it could be made to work in FreeBSD?
>
> I did this ten years a
Poul-Henning Kamp writes:
> The best compromise solution therefore is to change the scheduler
> to make decisions based on the TSC ticks (or equivalent on other
> archs) and at regular intervals figure out how fast the CPU ran in
> the last period and convert the TSC ticks accumulated to a tim
Poul-Henning Kamp writes:
> In message <[EMAIL PROTECTED]>, Andrew Gallatin
> writes:
> >
> >Poul-Henning Kamp writes:
> > > The best compromise solution therefore is to change the scheduler
> > > to make decisions based on the TSC ticks (or equiva
Poul-Henning Kamp writes:
> The solution is not faster but less reliable timekeeping, the
> solution is to move the scheduler(s) away from using time as an
> approximation of cpu cycles.
So you mean rather than use binuptime() in mi_switch(), use some
per-cpu cycle counter (like rdtsc)?
Heck,
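For reference, reading the TSC directly is cheap; a minimal sketch follows
(x86-specific, and ignoring the per-CPU skew and frequency-change problems
that the rest of this thread is about).

#include <stdint.h>

static inline uint64_t
read_tsc(void)
{
	uint32_t lo, hi;

	/* rdtsc returns the 64-bit cycle counter in edx:eax */
	__asm __volatile("rdtsc" : "=a" (lo), "=d" (hi));
	return (((uint64_t)hi << 32) | lo);
}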
Bruce Evans writes:
> On Fri, 14 Oct 2005, Andrew Gallatin wrote:
>
> > Bear in mind that I have no clue about timekeeping. I got into this
> > just because I noticed using a TSC timecounter reduces context switch
> > latency by 40% or more on all the SMP pl
On 07/14/2016 16:06, Adrian Chadd wrote:
I'd appreciate any other feedback/comments/suggestions. If you're
using RSS and you haven't told me then please let me know!
Hi Adrian,
I'm a huge fan of your RSS work. In fact, I did a backport of RSS to
Netflix's stable-10 about 6 months ago. I was
Steve Shorter writes:
> The following is from netstat -s on a busy NFS client (udp mounts)
> I was wondering about the line
>
> 172764 dropped due to full socket buffers
A 0.02% drop rate is really not so bad. You should try increasing
vfs.nfs.bufpackets to a higher value.
Drew
Julian Elischer writes:
>
>
> On Wed, 27 Mar 2002, Andrew Gallatin wrote:
>
> >
> > Archie Cobbs writes:
> > > Luigi Rizzo writes:
> > > > > Is if_tx_rdy() something that can be used generally or does it only
> > > > &g
Archie Cobbs writes:
> Luigi Rizzo writes:
> > > Is if_tx_rdy() something that can be used generally or does it only
> > > work with dummynet ?
> >
> > well, the function is dummynet-specific, but I would certainly like
> > a generic callback list to be implemented in ifnet which is
> > i
Kenneth D. Merry writes:
>
> I have released a new set of zero copy sockets patches, against -current
> from today (May 17th, 2002).
>
> The main change is to deal with the vfs_ioopt changes that Alan Cox made in
> kern_subr.c. (They conflicted a bit with the zero copy receive code.)
>
Terry Lambert writes:
> To do the work, you'd have to do it on your own, after licensing
> the firmware, after signing an NDA. Unlike the rather public
> Tigon II firmware, the Tigon III doesn't have a lot of synergy
> or interesting work going for it. Most people doing interesting
> work
Kenneth D. Merry writes:
> > As a related question, will this work with the broadcom gigabit (bge)
> > driver, which is the Tigon III? If not, what would it take to get
> > it working?
>
> Unfortunately, it won't work with the Tigon III.
>
> If you can get firmware source for the Tigon I
Archie Cobbs writes:
> Re: the -stable patch. I agree we need a more general MFC/cleanup
> of some of the mbuf improvements from -current into -stable.
> If I find time perhaps I'll do that as well, but in a separate patch.
> For the present time, I'll commit this once 4.6-REL is done.
The b
Bosko Milekic writes:
> > Years ago, I used Wollman's MCLBYTES > PAGE_SIZE support (introduced
> > in rev 1.20 of uipc_mbuf.c) and it seemed to work OK then. But having
> > 16K clusters is a huge waste of space. ;).
>
> Since then, the mbuf allocator in -CURRENT has totally changed. It
Bosko Milekic writes:
> >
> > I'm a bit worried about other devices.. Traditionally, mbufs have
> > never crossed page boundaries so most drivers never bother to check
> > for a transmit mbuf crossing a page boundary. Using physically
> > discontiguous mbufs could lead to a lot of subtle d
John Polstra writes:
> Something is wrong with the hardware checksum offloading for
<..>
> +#if 0
> ifp->if_hwassist = BGE_CSUM_FEATURES;
> ifp->if_capabilities = IFCAP_HWCSUM;
> ifp->if_capenable = ifp->if_capabilities;
> +#endif
<...>
> Note, the bug may not be in the d
Bosko Milekic writes:
>
> [ -current trimmed ]
>
> On Fri, Jul 05, 2002 at 08:08:47AM -0400, Andrew Gallatin wrote:
> > Would this be easier or harder than simple, physically contiguous
> > buffers? I think that it's only worth doing if it's easier to manage
Bosko Milekic writes:
>
> On Fri, Jul 05, 2002 at 10:14:05AM -0400, Andrew Gallatin wrote:
> > I think this would be fine, But we'd need to know more about the
> > hardware limitations of the popular GiGE boards out there. We know
> > Tigon-II can handle 4 sca
Bosko Milekic writes:
>
> On Fri, Jul 05, 2002 at 10:45:50AM -0400, Andrew Gallatin wrote:
> >
> > Bosko Milekic writes:
> > >
> > > On Fri, Jul 05, 2002 at 10:14:05AM -0400, Andrew Gallatin wrote:
> > > > I think this wou
John Polstra writes:
> In article <[EMAIL PROTECTED]>,
> Bosko Milekic <[EMAIL PROTECTED]> wrote:
> >
> > On Fri, Jul 05, 2002 at 09:45:01AM -0700, John Polstra wrote:
> > > The BCM570x chips (bge driver) definitely need a single physically
> > > contiguous buffer for each received packet
John Polstra writes:
> In article <[EMAIL PROTECTED]>,
> Andrew Gallatin <[EMAIL PROTECTED]> wrote:
> > > WHOOPS, I'm afraid I have to correct myself. The BCM570x chips do
> > > indeed support multiple buffers for jumbo packets. I'
John Polstra writes:
> In article <[EMAIL PROTECTED]>,
> Andrew Gallatin <[EMAIL PROTECTED]> wrote:
> > > Without the docs it would take a lot of trial & error to
> > > figure out how to make it work.
> >
> > Not necessarily.
Alfred Perlstein writes:
> Some time ago I noticed that there appeared to be several members
> of struct socket that were either only used by listen sockets or
> only used by data sockets.
>
> I've taken a stab at unionizing the members and we wind up saving
> 28 bytes per socket on i386,
Mike Silbersack writes:
>
> Speaking of competition, someone should go look at this:
>
> http://mail-index.netbsd.org/current-users/2002/07/03/0011.html
>
It's very worthwhile. Tru64 has had this for years. I think there may
be a Jeff Mogul paper on it somewhere (but I don't have time t
Julian Elischer writes:
>
>
> On Fri, 12 Jul 2002, Giorgos Keramidas wrote:
>
> > On 2002-07-12 07:45 +, Bosko Milekic wrote:
> > >
> > > So I guess that what we're dealing with isn't really a
> > > "monodirectional" ring. Right?
> >
> > No it isn't. It looks more like the "di
Bosko Milekic writes:
<...>
> If we decide to allocate jumbo bufs from their own separate map as
> well then we have no wastage for the counters for clusters if we keep
> them in a few pages, like in -STABLE, and it should all work out fine.
That sounds good.
> For the jumbo bufs I
John Baldwin writes:
> Would people be open to renaming the 'MSIZE' kernel option to something
> more specific such as 'MBUF_SIZE' or 'MBUFSIZE'? Using 'MSIZE' can
No.
MSIZE is a traditional BSDism. Everybody else still uses it.
Even AIX and MacOS. I really don't like the idea of changing
Robert Watson writes:
> tear-down magic. What Solaris does here, FYI, is basically add a lock
> around
> entering the device driver via their mac layer in order to prevent it from
> "disappearing" while in use via the ifnet interface. I'm not sure if we
> want
At least for GLDv2, this
Robert Watson writes:
>
> One of the ideas that I, Scott Long, and a few others have been bouncing
> around for some time is a restructuring of the network interface packet
> transmission API to reduce the number of locking operations and allow
> network
> device drivers increased con
Robert Watson writes:
> The immediate practical benefit is
> clear: if the queueing at the ifnet layer is unnecessary, it is entirely
> avoided, skipping enqueue, dequeue, and four mutex operations.
This is indeed nice, but for TCP I think the benefit would be far
greater if somebody wo
Robert Watson writes:
>
> Jack Vogel at Intel has previously talked about having TSO patches for
> FreeBSD
> to use with if_em, but was running into stability/correctness problems on
> 7.x.
> I e-mailed him a few minutes ago to ask to take a look at the patches.
> Since
> I've not
Robert Watson writes:
>
> On Tue, 1 Aug 2006, Andrew Gallatin wrote:
>
> > > - The ifnet send queue is a separately locked object from the device
> > > driver,
> > >meaning that for a single enqueue/dequeue pair, we pay an extra four
> > &
Jack Vogel writes:
> We are making our development driver for the I/OAT engine available for
> download, experimentation, and comment at:
>
>
> http://sourceforge.net/project/showfiles.php?group_id=42302&package_id=202220
>
> This includes a core driver for the dma har
Between TSO and your sendfile changes, things are looking up!
Here are some Myri10GbE 1500 byte results from a 1.8GHz UP
FreeBSD/amd64 machine (AMD Athlon(tm) 64 Processor 3000+) sending to a
2.0GHz SMP Linux/x86_64 machine (AMD Athlon(tm) 64 X2 Dual Core Processor
3800+) running 2.6.17.7smp and
Andre Oppermann writes:
> Andrew Gallatin wrote:
> >
> > Between TSO and your sendfile changes, things are looking up!
> >
> > Here are some Myri10GbE 1500 byte results from a 1.8GHz UP
> > FreeBSD/amd64 machine (AMD Athlon(tm) 64 Processor 3000+) s
Andre Oppermann writes:
> I have rewritten m_getm() to be simpler and to allocate PAGE_SIZE sized
> jumbo mbuf clusters (4k on most architectures) as well as m_uiotombuf()
> to use the new m_getm() to obtain all mbuf space in one go. It then loops
> over it and copies the data into the mbufs by
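The approach being described, in rough sketch form (this is not Andre's
patch; the helper name is made up, but m_getm(), uiomove() and the mbuf
macros are the stock KPI):

static struct mbuf *
uio_to_mbuf_sketch(struct uio *uio, int how)
{
	struct mbuf *m, *top;
	int error, len, resid = uio->uio_resid;

	/* One call builds the whole chain, now out of 4k jumbo clusters. */
	top = m_getm(NULL, resid, how, MT_DATA);
	if (top == NULL)
		return (NULL);
	for (m = top; m != NULL && resid > 0; m = m->m_next) {
		len = min(M_TRAILINGSPACE(m), resid);
		error = uiomove(mtod(m, void *), len, uio);	/* copy user data in */
		if (error != 0) {
			m_freem(top);
			return (NULL);
		}
		m->m_len = len;
		resid -= len;
	}
	return (top);
}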
Andre,
I meant to ask: Did you try 16KB jumbos? Did they perform
any better than page-sized jumbos?
Also, if we're going to change how mbufs work, let's add something
like Linux's skb_frag_t frags[MAX_SKB_FRAGS]. In FreeBSD parlance,
this embeds something like an array of sf_bufs pointers in m
Andre Oppermann writes:
> Andrew Gallatin wrote:
> > Andre,
> >
> > I meant to ask: Did you try 16KB jumbos? Did they perform
> > any better than page-sized jumbos?
>
> No, I didn't try 16K jumbos. The problem with anything larger than
> p
Randall Stewart writes:
> nmbclusters = 1024 + maxusers * 64;
> +nmbjumbop = 100 + (maxusers * 4);
The limit on page-size jumbos seems far too small. Since the socket
buffer code now uses page-sized jumbos, I'd expect to see its limit be
the same as nmbclusters.
Drew
Randall Stewart writes:
> Andrew Gallatin wrote:
> > Randall Stewart writes:
> > > nmbclusters = 1024 + maxusers * 64;
> > > +nmbjumbop = 100 + (maxusers * 4);
> >
> > The limit on page-size jumbos seems far too small. Since th
Andre Oppermann writes:
> This patch solves the problem by maintaining an offset pointer in the socket
> buffer to give tcp_output() the closest mbuf right away avoiding the
> traversal
> from the beginning.
>
> With this patch we should be able to compete nicely for the Internet land
> s
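The idea, sketched out below. The sb_sndptr/sb_sndptroff names match what
eventually went into the socket buffer, but the code is illustrative, not the
committed patch.

static struct mbuf *
sbsndptr_sketch(struct sockbuf *sb, u_int off, u_int *moff)
{
	struct mbuf *m = sb->sb_sndptr;
	u_int cur = sb->sb_sndptroff;

	if (m == NULL || off < cur) {		/* cache useless: restart */
		m = sb->sb_mb;
		cur = 0;
	}
	while (m != NULL && off >= cur + m->m_len) {	/* walk forward only */
		cur += m->m_len;
		m = m->m_next;
	}
	sb->sb_sndptr = m;			/* remember for the next call */
	sb->sb_sndptroff = cur;
	*moff = off - cur;			/* offset within this mbuf */
	return (m);
}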
Andre Oppermann writes:
> Instead of the unlock-lock dance, soreceive_stream() pulls a properly sized
> chunk (relative to the receive system call buffer space) from the socket
> buffer, drops the lock and gives copyout as much time as it needs. In the
> meantime the lower half can happily a
Andre Oppermann writes:
> The patch is here:
>
> http://people.freebsd.org/~andre/soreceive_stream-20070302.diff
>
> Any testing, especially on 10Gig cards, and feedback appreciated.
I just tested with my standard mxge setup (details and data below).
This is *awesome*. Before your patch
Robert Watson writes:
> On Mon, 5 Mar 2007, Andrew Gallatin wrote:
>
> > With the patch, we finally seem to be performance competitive on the
> > receive
> > side with Linux x86_64 and Solaris/amd64 on this same hardware. Both of
> > those OSes do much
Robert Watson writes:
> On Mon, 5 Mar 2007, Andrew Gallatin wrote:
>
> > With the patch, we finally seem to be performance competitive on the
> > receive
> > side with Linux x86_64 and Solaris/amd64 on this same hardware. Both of
> > those OSes do much
One last note.. It looks like SCHED_4BSD does a decent job (on my
setup) w/o CPU binding, but SCHED_ULE requires CPU binding to get good
performance. W/o CPU binding, the best bandwidth I see using
SCHED_ULE is around 5.3Gb/s with one CPU mostly idle.. With CPU
binding, it is roughly 9Gb/s.
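For anyone reproducing this, one way to do the CPU binding from userland on
later FreeBSD releases is cpuset_setaffinity(2); a small sketch follows (the
tests above may well have used a different mechanism, e.g. pinning interrupt
threads).

#include <sys/param.h>
#include <sys/cpuset.h>
#include <err.h>

static void
bind_to_cpu(int cpu)
{
	cpuset_t set;

	CPU_ZERO(&set);
	CPU_SET(cpu, &set);
	/* id -1 means the calling process */
	if (cpuset_setaffinity(CPU_LEVEL_WHICH, CPU_WHICH_PID, -1,
	    sizeof(set), &set) != 0)
		err(1, "cpuset_setaffinity");
}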
John Baldwin writes:
> > John has a patch that pins interrupt threads, etc, not sure what the
> > status
> of
> > that is. CC'd.
>
> Tested and around for over a year. Sent to people several times but no
> benchmarking has resulted. It lives in p4 in //depot/user/jhb/intr/...
>
>
Hyong-Youb Kim writes:
>
> I have been recently testing 3C996B-T board in an Athlon system.
> The system has TigerMP board and a single Athlon 2000+ and runs FreeBSD
> 4.7-RC. With bge driver, every thing works fine except that the NIC piles
> up bad checksums on TCP receive packets. For ins
Petri Helenius writes:
> options upped the performance to ~300Mbps:ish while 4.7-STABLE gives twice
> that using the same application. The machine is 2.4GHz Dual P4.
>
Yes, -current is much, much slower for networking than -STABLE. This
is because at this point, network drivers have paid all
Jonathan Disher writes:
> On Wed, 11 Dec 2002, Long Le wrote:
>
> > Hi all,
> >
> > We installed FreeBSD 4.7 on an IBM eServer machine with an on-board
> > copper Broadcom BCM5703X and put two more fiber Broadcom BCM5703X NICs
> > into the machine. We used cvs to get the latest update on t
Just a quick question.. Where do we stand on bringing the networking
subsystem out from under Giant?
The mbuf system is soon to be safe, thanks to Alan Cox, so this allows
INTR_MPSAFE drivers. However, swi:net is still under Giant, as are
many of the important socket functions (sendto(), recvfr
M. Warner Losh writes:
<..>
> However in if_slowtimo we have:
>
> if_slowtimo(arg)
> {
> ... IFNET_RLOCK();
> ... if (ifp->if_watchdog)
> (*ifp->if_watchdog)(ifp);
> ... IFNET_RUNLOCK();
> }
>
> and dc_watchdog does a DC_LOCK/UNLOCK pair). This is a Lo
M. Warner Losh writes:
> In message: <[EMAIL PROTECTED]>
> Andrew Gallatin <[EMAIL PROTECTED]> writes:
> : The IFNET_RLOCK() called in if_slowtimo() is a global lock for the
> : list of ifnet structs to ensure that no devices are removed or added
> : w
Bruce Evans writes:
> > Is there an issue on non-x86 architectures?
>
> Not AFAIK. Not far, but SMP and FreeBSD's ithread implementation need
> something like an x86 ICU to work right. The interrupt mask must be
> global, and per-cpu ipls don't (naturally) work right even in the 1-cpu
> c
At my company, some bonehead (not sure if it was maliciousness or just
a stupid customer) opened 60 simultaneous connections to our ftp
server and totally swamped our T1. This is the second or third time
this has happened recently.
So I'm looking for some way to limit the number of connection
Simon L. Nielsen writes:
> On 2003.05.30 09:25:31 -0400, Andrew Gallatin wrote:
> >
> > At my company, some bonehead (not sure if it was maliciousness or just
> > a stupid customer), opened 60 simultaneous connections to our ftp
> > server and totally swamped ou
Maxim Konovalov writes:
> a) run ftpd from inetd -s, man inetd;
Duh! Thanks! Works fine.
Drew
Luigi Rizzo writes:
> On Fri, May 30, 2003 at 09:33:56AM -0400, Andrew Gallatin wrote:
> ...
> > As for adding it to the server itself, it's an alpha, and I don't think
> > dummynet/ipfw are production quality on alpha...
>
> actually the ipfw1/dummynet code is t
Bosko Milekic writes:
>
> On Sun, Jun 22, 2003 at 12:46:26PM -0700, George V. Neville-Neil wrote:
> > Hi,
> >
> >I'm reading over the internals of the network stack in
> >-CURRENT and I'm wondering if the Zero Copy stuff is actually
> >in use yet.
> >
> > Thanks,
> > Georg
In FreeBSD 5 are IP frags guaranteed to be passed to a network driver
with no intermediary IP packets between the fragments? Or can
multiple streams get queued at once?
Specifically, if a driver claims to be able to CSUM_IP_FRAGS, can it
count on getting all the frags, one right after another (p
I've been reading a little about TCP Segmentation Offload (aka TSO).
We don't appear to support it, but at least 2 of our supported NICs
(e1000 and bge) apparently could support it.
The gist is that TCP pretends the nic has a large mtu, and passes a
large (> the mtu on the link layer) packet down
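For reference, the driver-visible half of TSO as it later landed in FreeBSD
looks roughly like the sketch below; none of these flags existed when this was
written, but IFCAP_TSO4, CSUM_TSO and tso_segsz are the names that were
eventually adopted.

	/* at attach time: advertise the capability */
	ifp->if_capabilities |= IFCAP_TSO4;
	ifp->if_hwassist |= CSUM_TSO;

	/* at transmit time: honor the stack's request */
	if (m->m_pkthdr.csum_flags & CSUM_TSO) {
		/* program the hardware to slice the oversized payload into
		 * TCP segments of m->m_pkthdr.tso_segsz bytes each */
	}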
Luigi Rizzo writes:
> On Fri, Sep 05, 2003 at 04:47:22PM -0400, Andrew Gallatin wrote:
> >
> > I've been reading a little about TCP Segmentation Offload (aka TSO).
> > We don't appear to support it, but at least 2 of our supported nics
> > (e1000
For the case where the mtu is larger than MCLBYTES (2048), FreeBSD's
TCP implementation restricts the mss to a multiple of MCLBYTES. This
appears to have been inherited from 4.4BSD-lite.
On adapters with 9000 byte jumbo frames, this limits the mss to 8192
bytes, and wastes nearly 1KB out of each
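The rounding in question looks roughly like this in tcp_mss() (illustrative):
with a 9000-byte MTU the roughly 8960 usable bytes get rounded down to
4 * 2048 = 8192, so each segment gives up several hundred bytes.

	if (mss > MCLBYTES)
		mss = (mss / MCLBYTES) * MCLBYTES;	/* MCLBYTES == 2048 */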
Andre Oppermann writes:
> When I was implementing the tcp_hostcache I reorganized/redid the
> tcp_mss() function and wondered about that too. I don't know if
> this rounding to MCLBYTES is still the right thing to do.
I have the feeling it's something from ancient days on VAXes. ;)
> > Would
David Borman writes:
> On the sending side, you'll tend to get your best performance when the
> socket buffer is a multiple of the amount of TCP data per packet, and
> the user's writes are a multiple of the socket buffer. This keeps
> everything neatly aligned, minimizing the number of dat
Andre Oppermann writes:
>
> Could you run some benchmarks with the current MCLBYTES rounding
> and without it on 100Mbit 1.5kMTU and GigE with 9k MTU?
David Borman is totally right. Clipping the mss is really worth it,
especially with zero-copy sockets. Forget I said anything.
Here is som
Christophe Prevotaux writes:
> Hi,
>
> Is anyone working or planning to work on these babies support
> for FreeBSD ?
>
> http://www.astutenetworks.com/content/product/superhba.htm
Not that I know of, but if you need 4Gb/sec, have you considered
Myrinet?
I do the driver support for FreeB
gallatin added a comment.
It might be nice to make these general tunables that could be done centrally
and apply to all drivers, but that's probably outside the scope of the review.
INLINE COMMENTS
sys/netinet/tcp_lro.c:655 Can you just initialize ack_append_limit to the max
value for what
gallatin accepted this revision.
gallatin added a comment.
Thanks for addressing my concerns.. Does anybody else want to comment?
REVISION DETAIL
https://reviews.freebsd.org/D5185
EMAIL PREFERENCES
https://reviews.freebsd.org/settings/panel/emailpreferences/
To: sepherosa_gmail.com, net
gallatin added a comment.
The tcp_lro_entry_get() abstraction adds an extra compare to the critical
path (the compare against NULL in the function itself, in addition to the same
compare in the main routine). At least it does at the C level. Have you
verified that the compiler is smart en