Re: High volume proxy server configuration.
On Tue, 21 May 2002, Scott Hess wrote:
> Setup: 2x SMP server running FreeBSD 4.5. Apache 1.3.x. 2 Gig of memory.
>
> When stress-testing, I am able to cause the kernel messages:
>
>   m_clalloc failed, consider increase NMBCLUSTERS value
>   fxp0: cluster allocation failed, packet dropped!
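For reference, a sketch of how NMBCLUSTERS is usually raised on a 4.x box; the value below is purely illustrative, not a recommendation for this particular workload:

    # /boot/loader.conf -- boot-time tunable on recent 4.x kernels
    kern.ipc.nmbclusters="32768"

    # older kernels need it compiled in via the kernel config file instead:
    options NMBCLUSTERS=32768

    # afterwards, watch cluster usage and "requests for memory denied":
    netstat -m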
Re: Interface statistics
On Tue, 21 May 2002 14:58:22 +0300, Ivailo Tanusheff wrote:
>Hi,
>
>Can you tell me a way to collect per network interface statistics on
>my FreeBSD box?
>At this moment I'm using IPFilter accounting to collect the needed
>information, but I think this way I'm collecting only information
>related to t
Question about Dummynet and Diffserv
Hi, I am trying to set up a network testbed where I can offer different levels of service to different streams of traffic marked with different Diffserv codepoints. I have two FreeBSD routers (4.6 RC1) in my testbed, compiled with Dummynet, ALTQ, and IPFIREWALL. Dummynet works great for changin
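As a rough illustration of the kind of setup being described (pipe numbers, bandwidths and the lowdelay match below are made up; plain ipfw on 4.x can only match the old TOS bits, so classifying on arbitrary Diffserv codepoints is exactly where ALTQ or another classifier has to come in):

    # two dummynet pipes with different service levels (values illustrative)
    ipfw pipe 1 config bw 2Mbit/s delay 10
    ipfw pipe 2 config bw 512Kbit/s delay 50

    # delay is in milliseconds; classification here is on the TOS bits only
    ipfw add 100 pipe 1 ip from any to any iptos lowdelay
    ipfw add 200 pipe 2 ip from any to any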
Re: High volume proxy server configuration.
Hello Scott,
> Here's my theory: When the amount of space used for user processes and
> kernel usage fills all of memory, and a burst of packets is received from
> the backend servers, the kernel isn't able to allocate pages and drops the
> packets, with the message. The sender resends, and thi
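One way to watch for this while the stress test is running, using nothing but the stock tools (no setup-specific assumptions here):

    # mbuf/cluster usage, plus "requests for memory denied/delayed"
    netstat -m

    # overall memory, paging and interrupt picture during the test
    systat -vmstat 1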
High volume proxy server configuration.
Background: I'm working on an intelligent Apache-based proxy server for backend servers running a custom Apache module. The server does some inspection of the incoming request to determine how to direct it, and passes the response directly back to the client. Thus, I'd like to be able to set th
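For readers unfamiliar with the shape of such a front end, a minimal Apache 1.3 reverse-proxy stanza looks roughly like the following; these are stock mod_proxy directives, not the custom module described above, and the backend host name is made up:

    # httpd.conf -- plain mod_proxy version of the front end
    MaxClients       256
    KeepAlive        Off

    ProxyPass        /app http://backend1.example.com/app
    ProxyPassReverse /app http://backend1.example.com/app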
hfa0 PCA200E more information [Help]
Hi,

Fore PCA-200E AAL 5 Statistics
                        CRC/Len                                 CRC   Proto    PDU
  Cells In   Cells Out   Errs  Drops     PDUs In   PDUs Out    Errs    Errs  Drops
 147895872   220929747   4757      9    20005779   23507681       9       0      9

As can be seen I get
Re: Multicast problem with "wi" driver in promiscuous mode - any resolution?
> I don't think anybody has applied fixes to the wi driver in that time
> frame for this purpose. Have fun :-(.

The problem is that the wavelan/orinoco cards, at least, only have space for 16 multicast addresses and don't have an "all multicast" bit, so if you go over 16 addresses or want to catc
Re: Multicast problem with "wi" driver in promiscuous mode - any resolution?
I don't think anybody has applied fixes to the wi driver in that time frame for this purpose. Have fun :-(.

Warner
RE: "dynamic" ipfw
Scott must have meant to type http://www.bsdshell.net which does list the EtherFirewall project.

Best regards,
Frans

On Tue, 21 May 2002, Mire, John wrote:
> nice project page, does it do anything?
>
> -----Original Message-----
> From: Scott Ullrich [mailto:[EMAIL PROTECTED]]
> Sent: Monday,
RE: "dynamic" ipfw
Title: RE: "dynamic" ipfw a search on google did not turn up anything for me and the webpage is just a page with seiki on it and no other links:seikititle> head> p>
RE: "dynamic" ipfw
Title: RE: "dynamic" ipfw John, What do you mean by does it do anything? Currently all three projects are working and we are in the process of finishing new verisons. ;) -Scott -Original Message-From: Mire, John [mailto:[EMAIL PROTECTED]]Sent: Tuesday, May 21, 2002 10:19
RE: "dynamic" ipfw
Title: RE: "dynamic" ipfw nice project page, does it do anything? -Original Message-From: Scott Ullrich [mailto:[EMAIL PROTECTED]]Sent: Monday, May 20, 2002 5:23 PMTo: 'John Angelmo'; [EMAIL PROTECTED]Subject: RE: "dynamic" ipfw Check out http://www.bsdshell.com 's EtherFi
Interface statistics
Hi, Can you tell me a way to collect per network interface statistics on my FreeBSD box? At this moment I'm using IPFilter accounting to collect the needed information, but I think this way I'm collecting only information related to tcp, udp and icmp traffic. My purpose is to visualize this data in MR
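For what it's worth, the kernel already keeps per-interface counters that can be read without any firewall rules at all; the interface name below is only an example:

    # per-interface packet and byte counters
    netstat -ibn -I fxp0

MRTG normally graphs the same counters (ifInOctets/ifOutOctets) polled over SNMP, e.g. via the net-snmp port, which covers all traffic on the interface, not just tcp/udp/icmp.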
Re: HEADS UP: ALTQ integration developer preview
Hello,
> > When using a 32768-byte MTU I can get around 190 Mbps out of a PIII 450.
> > (and only 190 Mbps because the two frontends have fast ethernet cards)
> > So why is this so bad? If the other end can keep up, it will increase
> > throughput.
> And you could get even better by getting rid
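A rough back-of-the-envelope on why the large MTU helps, using the quoted figure and assuming a standard 1500-byte Ethernet MTU for comparison:

    190 Mbit/s / (32768 bytes * 8 bits)  ~=    725 frames/s
    190 Mbit/s / (1500 bytes * 8 bits)   ~= 15,800 frames/s

so per-frame interrupt and header overhead drops by a factor of roughly 20.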