Jon Dama wrote:
> Yes, but surely you weren't bridging gigabit and 100Mbit before?
> Did you try my suggestion about binding the IP address of the NFS server
> to the 100Mbit side?

Yeah. Unfortunately networking on the server fell apart when I did that.
Traffic was still passed and I could g
Yes, but surely you weren't bridging gigabit and 100Mbit before?
Did you try my suggestion about binding the IP address of the NFS server
to the 100Mbit side?
-Jon
On Tue, 31 May 2005, Skylar Thompson wrote:
> Jon Dama wrote:
>
> >Try switching to TCP NFS.
> >
> >a 100MBit interface cannot keep
Oh, something else to try:
I checked through my notes and discovered that I had gotten UDP to work in
a similar configuration before. What I did was bind the IP address to
fxp0 instead of em0. By doing this, the kernel seems to send the data at
a pace suitable for the slow interface.
-Jon
On
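For anyone wanting to try this, a minimal sketch of what it might look like in /etc/rc.conf, assuming (as in the message above) fxp0 is the 100Mbit NIC and em0 the gigabit NIC; the address and netmask here are made up for illustration:

```shell
# /etc/rc.conf -- bind the NFS server's address to the slow (100Mbit)
# interface so the kernel paces outbound data to that interface.
# fxp0/em0 are the interface names from the thread; the address
# itself is hypothetical.
ifconfig_fxp0="inet 192.168.234.1 netmask 255.255.255.0"
ifconfig_em0="up"    # gigabit side stays up, but carries no address
```

After a reboot (or re-running the ifconfig commands by hand), `ifconfig fxp0` should show the address on the 100Mbit side.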
Jon Dama wrote:
Try switching to TCP NFS.
a 100MBit interface cannot keep up with a 1GBit interface in a bridge
configuration. Therefore, in the long run, at full-bore you'd expect to
drop 9 out of every 10 ethernet frames.
MTU is 1500 therefore 1K works (it fits in one frame), 2K doesn't (your
NFS transactions
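The framing arithmetic behind the "1K works, 2K doesn't" point is easy to check. A back-of-the-envelope sketch (assuming a 20-byte IPv4 header and an 8-byte UDP header; this is not the kernel's exact accounting):

```shell
# How many Ethernet frames does one UDP datagram of a given payload
# size need with MTU 1500? Each frame carries at most 1500-20=1480
# bytes of IP payload; the 8-byte UDP header rides in the first
# fragment.
mtu=1500; ip_hdr=20; udp_hdr=8
frags() {
    # ceiling division: (payload + udp_hdr) / (mtu - ip_hdr)
    echo $(( ($1 + udp_hdr + mtu - ip_hdr - 1) / (mtu - ip_hdr) ))
}
frags 1024    # 1 frame:  a 1K NFS block fits in one frame
frags 2048    # 2 frames: a 2K block fragments -- drop either frame
              # and the whole datagram (the NFS RPC) is lost
```

With TCP, a dropped frame costs one retransmitted segment rather than a resend of the entire RPC, which is why TCP NFS degrades more gracefully across a lossy bridge.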
I'm having some problems with NFS serving on a FreeBSD 5.4-RELEASE
machine. The FreeBSD machine is the NFS/NIS server for a group of four
Linux clusters. The network architecture looks like this:
234/24 234/24
Cluster 1 ---|