> This does look odd... maybe there's a leak somewhere... does "in use"
> go back down to a much lower number eventually? What kind of test are
> you running? "in pool" means that that's the number in the cache
> while "in use" means that that's the number out of the cache
> currently being used by the system; but if you're telling me that
> there's no way usage could be that high while you ran the netstat,
> either there's a serious leak somewhere or I got the stats wrong
> (anyone else notice irregular stats?)

I think I figured this out: the "em" driver allocates an mbuf for each
receive descriptor whether it is needed or not. Does this cause a
performance problem when there are 8000 mbufs in use and we get
100k-150k frees and allocations per second (one for every packet?)
(I have the em driver configured for 4096 receive descriptors.)

> Another thing I find odd about those stats is that you've set the high
> watermark to 8192, which means that in the next free, you should be
> moving buckets to the general cache... see if that's really
> happening... The low watermark doesn't affect anything right now.

Nothing seems to be moving to the GEN pool.

> Can you give me more details on the exact type of test you're running?
> Let's move this to -current instead of -current and -net please (feel
> free to trim the one you want); getting 3 copies of the same
> message all the time is kinda annoying. :-(

I'm running a snort-like application with the interface receiving
packets only. It can either connect to a netgraph node or use bpf;
both seem to have similar performance (most CPU is used elsewhere), as
listed in the email I sent previously.

Pete

To Unsubscribe: send mail to [EMAIL PROTECTED]
with "unsubscribe freebsd-current" in the body of the message