In article <[EMAIL PROTECTED]>,
Andrew Gallatin <[EMAIL PROTECTED]> wrote:
> > Without the docs it would take a lot of trial & error to
> > figure out how to make it work.
>
> Not necessarily. I just looked at the struct def. To me, it looks
> *exactly* like the equivalent tigon-2 struct.
In article <[EMAIL PROTECTED]>,
Andrew Gallatin <[EMAIL PROTECTED]> wrote:
> > WHOOPS, I'm afraid I have to correct myself. The BCM570x chips do
> > indeed support multiple buffers for jumbo packets. I'm sorry for the
> > earlier misinformation!
>
> Are programming docs for this board available?
In article <[EMAIL PROTECTED]>,
Bosko Milekic <[EMAIL PROTECTED]> wrote:
>
> On Fri, Jul 05, 2002 at 09:45:01AM -0700, John Polstra wrote:
> > The BCM570x chips (bge driver) definitely need a single physically
> > contiguous buffer for each received packet.
>
> This is totally ridiculous for gigE hardware, IMO. Do you know of
> other cards that can't do scatter gather DMA?
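To make the scatter/gather point concrete: hardware with scatter/gather
receive DMA lets a driver describe one jumbo frame as several small
buffers instead of one physically contiguous region. A minimal sketch,
with hypothetical names (this is not the bge or ti descriptor layout):

    #include <stdint.h>

    #define RX_MAX_SEGS 4            /* Tigon-II handles 4 scatters */
    #define RX_CHUNK    4096         /* one page-sized chunk */

    struct rx_seg {
            uint64_t paddr;          /* physical address of this chunk */
            uint32_t len;            /* bytes valid in this chunk */
    };

    struct rx_desc {
            struct rx_seg seg[RX_MAX_SEGS];
            uint16_t nsegs;          /* number of valid segments */
    };

    /*
     * Post one jumbo receive as up to RX_MAX_SEGS page-sized chunks,
     * so no buffer must be physically contiguous beyond a page.  The
     * caller supplies enough physical page addresses in pages[].
     */
    static void
    rx_post_jumbo(struct rx_desc *d, const uint64_t pages[], uint32_t total)
    {
            int i;

            d->nsegs = 0;
            for (i = 0; i < RX_MAX_SEGS && total > 0; i++) {
                    uint32_t chunk = total < RX_CHUNK ? total : RX_CHUNK;

                    d->seg[i].paddr = pages[i];
                    d->seg[i].len = chunk;
                    total -= chunk;
                    d->nsegs++;
            }
    }

A 9000-byte frame then fits in three such chunks: two full pages plus
an 808-byte remainder.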
In article <[EMAIL PROTECTED]>,
Andrew Gallatin <[EMAIL PROTECTED]> wrote:
> Kenneth D. Merry writes:
> > I suppose it would be good to see what NIC drivers in the tree can receive
> > into or send from multiple chunks of data, and what their requirements are.
> > (how many scatter/gather segments
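Were someone to do that survey, the results might be recorded along
these lines (a sketch; the two entries reflect only what this thread
establishes, with unknowns left at zero):

    /* Per-driver scatter/gather capabilities, as a survey might record them. */
    struct nic_sg_caps {
            const char *driver;     /* driver name, e.g. "ti" */
            int rx_sg;              /* can receive into multiple chunks */
            int tx_sg;              /* can send from multiple chunks */
            int max_rx_segs;        /* receive segment limit; 0 = unknown */
    };

    static const struct nic_sg_caps nic_caps[] = {
            { "ti",  1, 1, 4 },     /* Tigon-II: 4 scatters */
            { "bge", 1, 1, 0 },     /* BCM570x: multiple buffers supported
                                       per the correction above; limit not
                                       stated in this thread */
    };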
On Fri, Jul 05, 2002 at 10:14:05AM -0400, Andrew Gallatin wrote:
> I think this would be fine, but we'd need to know more about the
> hardware limitations of the popular GiGE boards out there. We know
> Tigon-II can handle 4 scatters, but are there any that can handle 3
> but not four?
Why wo
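Whatever the exact limit, a driver has to compare it against the number
of physical segments a packet actually needs. A hedged sketch of that
count for a transmit mbuf chain, assuming any mbuf may span page
boundaries:

    #include <sys/param.h>
    #include <sys/mbuf.h>

    /*
     * Count the physical segments an mbuf chain needs if each mbuf's
     * data can be split at every page boundary.  A driver could then
     * reject or copy any chain that needs more segments than its
     * hardware supports (4 for Tigon-II, per above).
     */
    static int
    mchain_count_segs(struct mbuf *m)
    {
            int segs = 0;

            for (; m != NULL; m = m->m_next) {
                    vm_offset_t va = mtod(m, vm_offset_t);

                    if (m->m_len == 0)
                            continue;
                    /* pages touched by [va, va + m_len) */
                    segs += ((va + m->m_len - 1) >> PAGE_SHIFT) -
                        (va >> PAGE_SHIFT) + 1;
            }
            return (segs);
    }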
Bosko Milekic writes:
>
> [ -current trimmed ]
>
> On Fri, Jul 05, 2002 at 08:08:47AM -0400, Andrew Gallatin wrote:
> > Would this be easier or harder than simple, physically contiguous
> > buffers? I think that it's only worth doing if it's easier to manage at
> > the system level, otherwise you might as well use physically
> > contiguous buffers.
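For comparison, the physically contiguous alternative is what FreeBSD's
contigmalloc(9) provides. A sketch, with illustrative DMA bounds not
taken from any particular driver:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/malloc.h>

    /*
     * One physically contiguous 9K receive buffer, as hardware that
     * cannot scatter on receive would require.
     */
    static void *
    jumbo_contig_alloc(void)
    {
            return (contigmalloc(9 * 1024, M_DEVBUF, M_NOWAIT,
                0, 0xffffffffUL,    /* stay below 4GB for 32-bit DMA */
                PAGE_SIZE, 0));     /* page-aligned, no boundary limit */
    }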
On Thu, Jun 20, 2002 at 11:45:11 -0400, Bosko Milekic wrote:
> On Thu, Jun 20, 2002 at 11:24:05AM -0400, Andrew Gallatin wrote:
> > Bosko Milekic writes:
>
> > > By the way, my other two comments have been deleted, but reading the
> > > page that Ken maintains I noticed that Alfred already po
Bosko Milekic writes:
> >
> > I'm a bit worried about other devices. Traditionally, mbufs have
> > never crossed page boundaries so most drivers never bother to check
> > for a transmit mbuf crossing a page boundary. Using physically
> > discontiguous mbufs could lead to a lot of subtle d
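The check in question is cheap; most drivers simply never had a reason
to make it. A sketch:

    #include <sys/param.h>
    #include <sys/mbuf.h>

    /*
     * Does this mbuf's data cross a page boundary?  Historically the
     * answer was always no, so a driver that DMAs from mtod(m, ...)
     * directly could break silently on physically discontiguous mbufs.
     */
    static int
    mbuf_crosses_page(struct mbuf *m)
    {
            vm_offset_t start = mtod(m, vm_offset_t);

            return (trunc_page(start) != trunc_page(start + m->m_len - 1));
    }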
On Thu, Jun 20, 2002 at 12:25:58PM -0400, Andrew Gallatin wrote:
[...]
> > > Do you think it would be feasible to glue in a new jumbo (10K?)
> > > allocator on top of the existing mbuf and mcl allocators using the
> > > existing mechanisms and the existing MCLBYTES > PAGE_SIZE support
> > > (
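The gluing itself could be as simple as a free list of fixed 10K
buffers handed to mbufs as external storage. A userland-testable sketch
of just the free-list part (names hypothetical, no locking shown):

    #include <stdlib.h>

    #define JUMBO_SIZE (10 * 1024)

    struct jumbo_buf {
            struct jumbo_buf *next;         /* free-list linkage */
    };

    static struct jumbo_buf *jumbo_free_list;

    static void *
    jumbo_alloc(void)
    {
            struct jumbo_buf *jb = jumbo_free_list;

            if (jb != NULL) {
                    jumbo_free_list = jb->next;
                    return (jb);
            }
            return (malloc(JUMBO_SIZE));    /* grow the pool on demand */
    }

    static void
    jumbo_free(void *p)
    {
            struct jumbo_buf *jb = p;

            jb->next = jumbo_free_list;
            jumbo_free_list = jb;
    }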
Bosko Milekic writes:
> > Years ago, I used Wollman's MCLBYTES > PAGE_SIZE support (introduced
> > in rev 1.20 of uipc_mbuf.c) and it seemed to work OK then. But having
> > 16K clusters is a huge waste of space. ;).
>
> Since then, the mbuf allocator in -CURRENT has totally changed. It
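The waste is easy to quantify; a standard 9000-byte jumbo frame in a
16K cluster leaves 7384 bytes (about 45%) unused:

    #include <stdio.h>

    int
    main(void)
    {
            int cluster = 16 * 1024;    /* 16K cluster */
            int frame = 9000;           /* common jumbo MTU */

            printf("%d bytes unused per frame (%.0f%%)\n",
                cluster - frame, 100.0 * (cluster - frame) / cluster);
            return (0);
    }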
I've released a new zero copy sockets snapshot, against -current from June
18th, 2002.
http://people.FreeBSD.org/~ken/zero_copy
The fixes that went into this snapshot:
- Take mutex locking out of ti_attach(), it isn't really needed.
As long as we can assume that probes of succe
On Tue, Jun 11, 2002 at 04:37:04 -0400, John Baldwin wrote:
> On 10-Jun-2002 Kenneth D. Merry wrote:
> > 3. ti_attach() calls bus_alloc_resource(), which through a ton of calls
> > ends up calling vm_map_entry_create(), same problem as above.
> >
> > 4. ti_attach() calls bus_setup_intr(), which through various calls ends up
> > calling ithread_create(), wh
> 1. sf_buf_init() calls kmem_alloc_pageable(), which through several calls
>ends up calling vm_map_entry_create(). vm_map_entry_create() calls
>uma_zalloc() with M_WAITOK.
Alan Cox and Tor Egge just fixed this in -current in rev 1.247 of vm_map.c.
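The underlying issue in item 1 is allocation context: M_WAITOK
allocations may sleep, which is unsafe on paths that must not. A sketch
of the non-sleeping alternative (zone name hypothetical):

    #include <sys/param.h>
    #include <sys/malloc.h>
    #include <vm/uma.h>

    static void *
    map_entry_alloc_nosleep(uma_zone_t zone)
    {
            void *item;

            /* M_NOWAIT never sleeps, but may fail where M_WAITOK blocks. */
            item = uma_zalloc(zone, M_NOWAIT);
            if (item == NULL) {
                    /* caller must be able to back out on failure */
            }
            return (item);
    }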
I have released a new zero copy sockets snapshot; the code and a brief
update on what has been fixed are here:
http://people.FreeBSD.org/~ken/zero_copy
In short, I fixed the following things, which were found by Alfred
Perlstein:
- fix a race in the vm object allocation in jumbo_vm_init
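The classic shape of such a race, and of its fix, is a check-then-create
on a shared object serialized by a lock; the actual change in the
snapshot may differ, and all names here are hypothetical:

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>
    #include <vm/vm.h>
    #include <vm/vm_object.h>

    static struct mtx jumbo_lock;        /* hypothetical */
    static vm_object_t jumbo_object;     /* hypothetical */

    static void
    jumbo_object_init(vm_pindex_t npages)
    {
            mtx_lock(&jumbo_lock);
            /*
             * Re-check under the lock so two racing callers cannot
             * both allocate the object and leak one of them.
             */
            if (jumbo_object == NULL)
                    jumbo_object = vm_object_allocate(OBJT_DEFAULT, npages);
            mtx_unlock(&jumbo_lock);
    }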