Edwin Sanjoto wrote:
Hi guys,
I want to gain access to the Internet via IPv4 (with the public IP) as my
gateway, and I am using pure IPv6 (not dual stack)...
I just want to know how to set up a DNS server in FreeBSD so I can gain
access to the Internet via IPv4...
I've never used the faith driver
Rahman, Md Sazzadur wrote:
Hi, I would like to get the values of the SCTP congestion control
algorithm variables (cwnd, ssthresh, flightsize and pba) from an
SCTP-based application at runtime, for research purposes. Does any API
exist in SCTP for that? Do I need to dig into the SCTP code in the
kernel to get them?
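[For what it's worth, cwnd at least is reachable from userland: the SCTP
sockets API draft (later RFC 6458) defines the SCTP_GET_PEER_ADDR_INFO
socket option, whose sctp_paddrinfo carries the per-path cwnd along with
srtt, rto and mtu. ssthresh, flightsize and pba are not exposed there, so
those would still mean digging in the kernel. A minimal sketch under
those assumptions:

#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/sctp.h>
#include <stdio.h>
#include <string.h>

/*
 * Sketch: query the per-path cwnd of a connected SCTP socket via
 * the SCTP_GET_PEER_ADDR_INFO socket option from the sockets API
 * draft.  ssthresh, flightsize and pba are not exposed this way.
 */
int
print_path_cwnd(int sd, const struct sockaddr *peer, socklen_t peerlen)
{
        struct sctp_paddrinfo info;
        socklen_t len = sizeof(info);

        memset(&info, 0, sizeof(info));
        memcpy(&info.spinfo_address, peer, peerlen);
        if (getsockopt(sd, IPPROTO_SCTP, SCTP_GET_PEER_ADDR_INFO,
            &info, &len) < 0) {
                perror("getsockopt(SCTP_GET_PEER_ADDR_INFO)");
                return (-1);
        }
        printf("cwnd=%u srtt=%u rto=%u mtu=%u\n",
            info.spinfo_cwnd, info.spinfo_srtt,
            info.spinfo_rto, info.spinfo_mtu);
        return (0);
}
]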
Vadim:
Sorry I have not chimed in earlier... I tend to
"not look" often at some of my boxes :-)
Glad Michael helped out here... thanks, Michael
Vadim Goncharov wrote:
Hi Michael Tuexen!
On Thu, 6 Mar 2008 09:34:13 +0100; Michael Tuexen wrote about 'Re: SCTP using
questions (API etc.)':
"s
Hello:
Sorry for cross-posting, but this seems to be both a driver and a
network/kernel issue, so I thought all of these lists seemed
appropriate.
I'm investigating an issue we are seeing with 6.1-RELEASE and the bge
driver dropping packets sporadically at 100Mbps. The machine is
a 2-
> I'm investigating an issue we are seeing with 6.1-RELEASE and the bge
> driver dropping packets sporadically at 100Mbps.
> It gets mainly aggravated when heavy disk I/O occurs.
> Has anyone seen this problem before with bge? Am I barking up the
> wrong tree with my initial investigation?
Dieter: Thanks, at 20Mbps! That's pretty awful.
JK: Thanks again. Wow, I searched the list and didn't see much
discussion with respect to bge and packet loss! I will try the rest
of that patch, including pushing the TCP receive buffer up (though I
don't think that's going to help in this case).
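[If it helps anyone reproducing this, a minimal sketch of "pushing the
TCP receive buffer up" for a single socket, assuming that advice maps to
the standard SO_RCVBUF option; the system-wide knobs would be sysctls
like net.inet.tcp.recvspace, with kern.ipc.maxsockbuf as the cap:

#include <sys/types.h>
#include <sys/socket.h>
#include <stdio.h>

/*
 * Sketch: raise the TCP receive buffer on one socket.  The kernel
 * may clamp the request (see the kern.ipc.maxsockbuf sysctl), so
 * read the effective value back.
 */
int
grow_rcvbuf(int sd, int bytes)
{
        socklen_t len = sizeof(bytes);

        if (setsockopt(sd, SOL_SOCKET, SO_RCVBUF, &bytes, len) < 0) {
                perror("setsockopt(SO_RCVBUF)");
                return (-1);
        }
        if (getsockopt(sd, SOL_SOCKET, SO_RCVBUF, &bytes, &len) < 0)
                return (-1);
        printf("effective receive buffer: %d bytes\n", bytes);
        return (0);
}
]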
[CC trimmed]
On Wednesday 16 April 2008 02:20 pm, Alexander Sack wrote:
> Dieter: Thanks, at 20Mbps! That's pretty awful.
>
> JK: Thanks again. Wow, I searched the list and didn't see much
> discussion with respect to bge and packet loss! I will try the
> rest of that patch including pushing
Hello,
the patches inlined give ng_iface(4) and the mpd4 port the ability
to rename their interfaces.
I.e., if you create a new PPPoE connection, instead of the line
new -i ng0 pppoe pppoe
you can enter
new -i pppoe0 pppoe pppoe
so when mpd starts it will create an ngX interface with a ngX: hook
a
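[For context only (the inlined patches set the name inside ng_iface at
creation time instead): an already-created interface can be renamed from
userland with the SIOCSIFNAME ioctl, which is what "ifconfig ng0 name
pppoe0" does. A hedged sketch:

#include <sys/types.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/*
 * Sketch: rename an existing interface via SIOCSIFNAME, the same
 * mechanism ifconfig's "name" option uses.
 */
int
rename_iface(const char *oldname, const char *newname)
{
        struct ifreq ifr;
        char buf[IFNAMSIZ];
        int s, error;

        if ((s = socket(AF_LOCAL, SOCK_DGRAM, 0)) < 0)
                return (-1);
        memset(&ifr, 0, sizeof(ifr));
        strlcpy(ifr.ifr_name, oldname, sizeof(ifr.ifr_name));
        strlcpy(buf, newname, sizeof(buf));
        ifr.ifr_data = buf;
        error = ioctl(s, SIOCSIFNAME, &ifr);
        if (error < 0)
                perror("SIOCSIFNAME");
        close(s);
        return (error);
}
]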
On Wed, Apr 16, 2008 at 2:56 PM, Jung-uk Kim <[EMAIL PROTECTED]> wrote:
> [CC trimmed]
>
>
> On Wednesday 16 April 2008 02:20 pm, Alexander Sack wrote:
> > Dieter: Thanks, at 20Mbps! That's pretty awful.
> >
> > JK: Thanks again. Wow, I searched the list and didn't see much
> > discussion
On Wednesday 16 April 2008 04:28 pm, Alexander Sack wrote:
> On Wed, Apr 16, 2008 at 2:56 PM, Jung-uk Kim <[EMAIL PROTECTED]>
wrote:
> > [CC trimmed]
> >
> > On Wednesday 16 April 2008 02:20 pm, Alexander Sack wrote:
> > > Dieter: Thanks, at 20Mbps! That's pretty awful.
> > >
> > > JK: Than
On Wed, Apr 16, 2008 at 4:54 PM, Jung-uk Kim <[EMAIL PROTECTED]> wrote:
> On Wednesday 16 April 2008 04:28 pm, Alexander Sack wrote:
> > On Wed, Apr 16, 2008 at 2:56 PM, Jung-uk Kim <[EMAIL PROTECTED]>
> wrote:
> > > [CC trimmed]
> > >
> > > On Wednesday 16 April 2008 02:20 pm, Alexander Sack
On Wednesday 16 April 2008 05:02 pm, Alexander Sack wrote:
> On Wed, Apr 16, 2008 at 4:54 PM, Jung-uk Kim <[EMAIL PROTECTED]>
wrote:
> > On Wednesday 16 April 2008 04:28 pm, Alexander Sack wrote:
> > > On Wed, Apr 16, 2008 at 2:56 PM, Jung-uk Kim
> > > <[EMAIL PROTECTED]>
> >
> > wrote:
> > >
Hi,
FreeBSD amd64 7.0-RELEASE, ULE, SMP.
On heavy loads, the bfe network driver, after a few messages of
Serious error: bfe failed to map RX buffer
Serious error: bfe failed to map RX buffer
Serious error: bfe failed to map RX buffer
...
makes the kernel panic.
Here is a patch.
--
Best
The following reply was made to PR amd64/122780; it has been noted by GNATS.
From: Paul <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED], [EMAIL PROTECTED]
Cc:
Subject: Re: amd64/122780: [lagg] tcpdump on lagg interface during high pps
wedges netcode
Date: Wed, 16 Apr 2008 18:40:53 -0400
It seems to
On Thu, Apr 17, 2008 at 12:43:53AM +0300, quad wrote:
> Hi,
>
> FreeBSD amd64 7.0-RELEASE, ULE, SMP.
>
> On heavy loads, the bfe network driver, after a few messages of
>
> Serious error: bfe failed to map RX buffer
> Serious error: bfe failed to map RX buffer
> Serious error: bfe failed to map RX buffer
I plan on committing the generic kernel-level RDMA verbs and iWARP
infrastructure from OFED, as well as the Chelsio iWARP driver, to HEAD
next week. The RDMA infrastructure doesn't require any kernel changes,
so I don't foresee any need for a lengthy discussion. For the most
part this does not include
This change allows one to type
ipfw table 2 add 1.1.1.1:255.255.255.0 0
in addition to the currently acceptable 1.1.1.1/24 0.
The reason is that some programs supply the netmask in
that (mask) form, and a shell script trying to add it to a table
has a hard time converting it to the currently accepted form.
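[To illustrate the conversion involved, a hypothetical helper (not the
patch code itself) that turns a dotted-quad mask such as 255.255.255.0
into the prefix length (/24) the table ABI stores:

#include <sys/types.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <stdint.h>

/*
 * Hypothetical helper: convert a dotted-quad netmask into a prefix
 * length.  Returns -1 for an unparsable or non-contiguous mask,
 * which the ABI cannot represent.
 */
int
mask_to_prefixlen(const char *mask)
{
        struct in_addr a;
        uint32_t m;
        int len = 0;

        if (inet_pton(AF_INET, mask, &a) != 1)
                return (-1);
        m = ntohl(a.s_addr);
        while (len < 32 && (m & 0x80000000U) != 0) {
                m <<= 1;
                len++;
        }
        return (m == 0 ? len : -1);
}

This is also why non-contiguous masks are a problem, as noted further
down in the thread: only the length survives in the ABI.]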
I received the following question in a private e-mail that I think
others might be asking.
> What are RDMA and iWARP? Google didn't help much here :(
RDMA in general is the ability to directly DMA to and from the memory
of a remote host. In this particular context it is the ability to do
so wit
Old Synopsis: FreeBSD 7 multicast routing problem
New Synopsis: [multicast] FreeBSD 7 multicast routing problem
Responsible-Changed-From-To: freebsd-bugs->freebsd-net
Responsible-Changed-By: linimon
Responsible-Changed-When: Thu Apr 17 05:45:46 UTC 2008
Responsible-Changed-Why:
Over to maintainer
Hi All,
I am running squid as a reverse proxy on FreeBSD 7.0-R amd64.
After running for a while (~8 hours), the throughput degrades to a
very, very low rate.
I found squid is in the "zoneli" state, and there is already a bug
report at
http://www.freebsd.org/cgi/query-pr.cgi?pr=106317
But after more investig
Julian Elischer wrote:
I do know it won't handle non-contiguous masks well, but as the
ipfw ABI code only accepts a network mask length instead of a
mask, there's not much that can be done.
I may suggest a later fix for that, but it will break the ABI.
Comments?
What do you think about my patch?
-