Julian Elischer wrote:
Jung-uk Kim wrote:
On Tuesday 15 March 2005 01:14 am, Jeff Behl wrote:
Julian Elischer wrote:
Jeff wrote:
I'm not sure what you mean by in band. The IP address of the
BMC is assigned via the bios and is different from what the OS
later assigns. With imi
Michael Vince wrote:
Just out of interest has any one got serial console to work with this
IPMI stuff?
I was looking at regular 9pin serial alternatives since Dell machines
normally only have 1 serial port and I prefer 2.
yep, we've gotten this to work, but again only with linux. it looks
just
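The thread above is about getting a serial console via IPMI. For reference, a minimal sketch of Serial-over-LAN with ipmitool — assuming an IPMI 2.0 BMC; the address and credentials here are hypothetical placeholders, not from the thread:

```shell
# Hypothetical BMC address and credentials; substitute your own.
BMC=10.0.0.5

# Serial-over-LAN requires the lanplus interface (IPMI 2.0).
ipmitool -I lanplus -H "$BMC" -U admin -P secret sol activate

# Detach with the ~. escape sequence, or from another shell:
ipmitool -I lanplus -H "$BMC" -U admin -P secret sol deactivate
```

These commands talk to remote hardware, so treat them as a template rather than something to paste verbatim.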
Julian Elischer wrote:
Jeff wrote:
I'm not sure what you mean by in band. The IP address of the BMC is
assigned via the bios and is different from what the OS later
assigns. With ipmitool we can turn on/powercycle/monitor via the BMC
assigned address up until the point where the kernel l
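The power on/power-cycle/monitor operations mentioned above map to ipmitool chassis subcommands. A sketch, again with a hypothetical BMC address and credentials:

```shell
# Hypothetical BMC address and credentials; substitute your own.
BMC=10.0.0.5

ipmitool -I lan -H "$BMC" -U admin -P secret chassis power status
ipmitool -I lan -H "$BMC" -U admin -P secret chassis power cycle
ipmitool -I lan -H "$BMC" -U admin -P secret sdr   # sensor readings
```

Because the BMC has its own BIOS-assigned address, these keep working even when the kernel is wedged.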
Yes, I believe the kqueue version of squid would show much better
results. Unfortunately it fails to compile and I have not yet had the
time to try mucking with it more. I'll get back to the list when I am able to
get it up and running...
jeff
Mohan Srinivasan wrote:
Following up to a mail from
affic
(around 180 Mb/s) at < 50% CPU utilization. Seems like something in the
network stack is responsible for the high %system cpu util...
jeff
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Jeff Behl
Sent: Tuesday, December 07, 2004 9:17 AM
I saw some threads that seemed to relate to the bge driver in -net, so i
thought i'd post
here as well...
FreeBSD blade7-bc2.sjc 4.8-RC2 FreeBSD 4.8-RC2 #1: Wed Mar 26 20:17:42
GMT 2003
i've had two reboots in the last 30 mins on a fairly heavily loaded web
server (apache). the following immediatel
single client could easily tie everything up in fin_wait_1...
anyone think of a workaround (besides not serving pop-ups :)
jeff
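To see how badly FIN_WAIT_1 is piling up, the per-state counts can be pulled out of `netstat -n`. A sketch using a canned sample (the sample lines and addresses are made up; on a live box, pipe `netstat -n` in directly):

```shell
# Made-up sample of `netstat -n` output; on a live system pipe netstat -n in.
sample='tcp4 0 0 10.0.0.1.80 192.0.2.10.51234 FIN_WAIT_1
tcp4 0 0 10.0.0.1.80 192.0.2.11.44321 ESTABLISHED
tcp4 0 0 10.0.0.1.80 192.0.2.12.39876 FIN_WAIT_1'

# Count connections per TCP state (the state is the 6th column).
printf '%s\n' "$sample" | awk '{print $6}' | sort | uniq -c | sort -rn
```

On the sample this prints FIN_WAIT_1 with a count of 2, ahead of ESTABLISHED with 1.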
Mike Silbersack wrote:
On Mon, 30 Dec 2002, Jeff Behl wrote:
5066/52544/256000 mbufs in use (current/peak/max):
5031/50612/64000 mbuf clusters in use (current/pe
running apache-2.0.42 we're running into mbuf cluster exhaustion. going
by what 'man tuning' says:
We recommend values between 1024 and 4096 for machines with moderate
amounts of memory, and between 4096 and 32768 for machines with
greater amounts of memory. Under no circumstances should you s
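Following the tuning(7) advice quoted above, the cluster limit is a boot-time loader tunable. A sketch — the value 32768 is simply the upper figure from the man page excerpt, not a recommendation for any particular box:

```shell
# /boot/loader.conf -- raise the mbuf cluster limit at boot
kern.ipc.nmbclusters="32768"
```

After a reboot, confirm the new limit with `sysctl kern.ipc.nmbclusters` and watch actual usage with `netstat -m`.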
Great! I've installed rev. 1.110.2.27 so we'll see how it fares.
Thanks much!
Jeff
Guy Helmer wrote:
Jeff Behl wrote:
FreeBSD rack1-5.nwk 4.7-RELEASE-p1 FreeBSD 4.7-RELEASE-p1 #1: Tue Nov 12
10:37:37 PST 2002 [EMAIL PROTECTED]:/usr/src/sys/compile/GENERIC2 i386
Has anyone had problems with polling on a 4.7 box? It worked for about
24 hours then blew up with the below. While it worked, it worked
fantastically
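For reference, polling on this branch of FreeBSD needs kernel options plus a runtime sysctl. A sketch of the usual setup (the HZ value is a commonly suggested one, not taken from this thread):

```shell
# Kernel config additions for polling (then rebuild the kernel):
#   options DEVICE_POLLING
#   options HZ=1000

# Enable at runtime:
sysctl kern.polling.enable=1
```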
I have 4.3, and soon to be 4.4, boxes dedicated to a single app which
basically 'bounces' traffic between two incoming TCP connections. After
around 240 sessions (each session consisting of two incoming connections
with traffic being passed between them), I started getting ENOBUFS
errors. ne
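ENOBUFS under load usually points at mbuf/cluster exhaustion, and `netstat -m` reports current/peak/max. A sketch that turns such a line into a peak-usage percentage — the sample line reuses the numbers quoted earlier on this page; pipe real `netstat -m` output in practice:

```shell
# Sample `netstat -m` line (numbers from the message above); live: netstat -m
line='5031/50612/64000 mbuf clusters in use (current/peak/max)'

# Fields are current/peak/max; report peak as a percentage of max.
printf '%s\n' "$line" | awk -F'[/ ]' '{printf "peak: %.0f%% of max clusters\n", 100*$2/$3}'
```

A peak near 100% of max means the limit is being hit and ENOBUFS is expected; raise the cluster limit as discussed above.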