There is one more point to check:
From your mount information, on the server, the directories are on DIFFERENT
drives.
Assume one of the drives is very "INTELLIGENT" about saving power.
During local reading, due to the reading speed, it may not go to "SLEEP",
but during network access, it may go to "SLEEP".
On 03/16/2013 05:43 PM, Mehmet Erol Sanliturk wrote:
Michael W. Lucas, in Absolute FreeBSD, 2nd Edition (ISBN
978-1-59327-151-0),
suggests the following (p. 248):
On the client (in mount, or fstab), use the options (-o tcp, intr, soft,
-w=32768, -r=32768).
The tcp option will…
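The options quoted above translate directly into a client fstab entry. A minimal sketch for a FreeBSD client; the server name and paths (nfsserver, /usr1, /mnt/usr1) are illustrative, not from the thread:

```
# /etc/fstab on the FreeBSD client (names illustrative)
nfsserver:/usr1  /mnt/usr1  nfs  rw,tcp,intr,soft,rsize=32768,wsize=32768  0  0
```

Note that soft mounts can return I/O errors to applications when the server is slow to respond, so some administrators prefer intr without soft on writable mounts.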
Just slap a NetApp 8.x with an Avere flash box in front if you want
NFS performance... or Isilon.
On Sat, Mar 16, 2013 at 3:07 PM, Tim Daneliuk wrote:
On 03/16/2013 04:20 PM, Mehmet Erol Sanliturk wrote:
With respect to your mount points: /usr1 is spanning TWO different partitions:
/dev/ad4s1f  390G  127G  231G  35%  /usr1
/dev/ad6s1d  902G  710G  120G  86%  /usr1/BKU
because /usr1/BKU is a sub-directory of /usr1.
…the switch into which it connects to
1000Base, because the LM12 machine had a built-in 1000Base NIC. I also changed
the cables on both machines to ensure they were not the problem. Prior
to this, I was bandwidth-constrained by the 100Base, so I never saw NFS
performance as an issue. When I upgraded, I expected faster transfers,
and when I didn't get them, I started this whole investigation.
So ... I'm stumped:
- It's not the drive or SATA ports, because both drives show comparable
performance…
On Fri, Mar 15, 2013 at 5:09 PM, Tim Daneliuk wrote:
I have a FreeBSD 9.1-STABLE machine exhibiting weird NFS performance issues,
and I'd appreciate any suggestions.
I have several different directories exported from the same filesystem.
The machine that mounts them (a Linux Mint 12 desktop) writes
nice and fast to one of them, but writes to the other…
In the last episode (Feb 02), Tim Daneliuk said:
Server: FBSD 8.2-STABLE / MTU set to 15000
Client: Linux Mint 12 / MTU set to 8192
NFS Mount Options: rw,soft,intr
Problem:
Throughput copying from Server to Client is about 2x that when
copying a file from Client to Server. The client does have
an SSD, whereas the server h…
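One thing that stands out in the report above is the mismatched MTUs (15000 on the server vs. 8192 on the client); asymmetric throughput is a classic symptom when jumbo-frame settings disagree along a path. A hedged sketch of aligning both ends on a common jumbo MTU; the interface names (em0, eth0) and the 9000-byte value are illustrative, and every device in between, including the switch, must support the chosen size:

```
# FreeBSD server, /etc/rc.conf (interface name and MTU illustrative):
ifconfig_em0="DHCP mtu 9000"

# Linux client, one-off change (persist it via the distribution's
# own network configuration):
#   ip link set dev eth0 mtu 9000
```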
…for
mount_nfs, no kernel tuning), performance is very sluggish: I've got
~250Mbit/sec performance, with peaks around 400Mbit/sec.
Sure enough, neither CPU (server and NetApp) nor network performance
is the problem here; it must be something NFS-related.
Any ideas on how to increase my NFS performance? (Special mount
parameters, kernel tuning, ...)
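For the "kernel tuning" half of that question, socket-buffer limits are a common starting point on FreeBSD. A sketch; the values are illustrative starting points only, not recommendations, and any change should be measured before and after:

```
# /etc/sysctl.conf (values illustrative; tune and re-measure)
kern.ipc.maxsockbuf=16777216
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.recvbuf_max=16777216
```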
On Sunday 18 November 2007 05:59:12 am Kris Kennaway wrote:
> Jonathan Horne wrote:
> > i updated my workstation to beta3, and then got on a 6.2-p8 machine and
> > mounted /usr/src and /usr/obj from the beta3. tried to installkernel,
> > but it moved at a painful pace. would get to the point where it moves
> > kernel to kernel.old, and would just pause for a long time. file transfer
> > showed a…
On Wed, May 10, 2006 at 02:54:39PM +0200, Valerio daelli wrote:
Hi all,
we have a FreeBSD 5.4 box exporting some NFS filesystems to a cluster of Gentoo
boxes (kernel 2.6.12).
Our exported storage disk is an Apple XRaid.
We have Gigabit Ethernet on both the client and the server.
We would like to improve our read performance.
This is our performance:
about 10Mb read…
On Thursday 27 November 2003 10:28, Kris Kennaway wrote:
> Try reading the basic documentation that comes with 5.2-BETA, for
> example the /usr/src/UPDATING file, which tells you clearly that
> performance is not expected to be good unless you disable the standard…
I've been running CURRENT on test…
On Thu, Nov 27, 2003 at 10:12:21AM +0100, Antoine Jacoutot wrote:
Hi :)
I upgraded two boxes to FreeBSD 5.2-BETA a week ago and I noticed that NFS
performance is very slow compared to 4.x-RELEASE.
Before, NFS transfers were between 10 and 12 MB/s, and now I don't go past 7
MB/s.
My exports/mount settings did not change, and the hardware is obviously the…
On Thu, May 29, 2003 at 04:54:00PM -0400, Tom Limoncelli wrote:
I have an NFS server with (so far) a single NFS client. Things work
fine; however, if (on the client) I do an "rm -rf foo" on a large (deep
and wide) directory tree, the tty receives "NFS server not
responding"/"NFS server ok" messages.
I don't think the network is at fault, nor is the server rea…
On Monday, December 30, 2002, at 01:12 PM, Scott Ballantyne wrote:
> I could never get NFS to work reliably on Linux.
I'd like to chime in on this.
There are some serious problems with Linux NFS support. At the company
where I work, we use Solaris NFS servers on Sun hardware, with a mix…
Hi list,
recently I discovered problems with my FreeBSD NFS server.
I mount my /home/user from my Linux box via automounter/NFS from my server.
They are connected with a switch on a 100baseTX Ethernet. Now, whenever
I copy large files from a local drive to my home dir, or do anything
else that involves…
On Tue, Nov 05, 2002 at 06:44:47PM +0100, Lasse Laursen wrote:
> How is the optimum number of nfsd processes determined on the server? On
> our current setup we have 4 nfs daemons running, serving 3 clients
> (webservers).
>
> Is the number of daemons to start determined by the number of clients, or…
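For reference, the nfsd count on a FreeBSD server is set through rc.conf. A minimal sketch; the thread count (-n 8) is purely an illustrative starting point, not an answer to the sizing question above:

```
# /etc/rc.conf on the NFS server (-n value illustrative)
nfs_server_enable="YES"
nfs_server_flags="-u -t -n 8"   # serve UDP and TCP with 8 daemons
```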
On Wed, 2002-11-06 at 19:52, BigBrother wrote:
> Although the man page says this, I *think* that the communication is done
> like this:
>
> CLIENT <=> NFSIOD(CLIENT) <=> NFSIOD(SERVER) <=> NFSD
>
> which means that the NFSIODs 'speak' with each other and then they pass the
> requests to NFS.
On Tue, 5 Nov 2002, Lasse Laursen wrote:
Hi,
Thanks for your reply. I have some additional questions:
> Well, the only rule for selecting the number of nfsiods and nfsds is the
> maximum number of threads that are going to request an NFS operation on
> the server. For example, assume that your web server has a typical number
> of httpd daemons…
Hi,
> In my experience, UDP is much preferred as the NFS transport
> protocol. Also try to have the nfsiod daemon running on every
> machine by putting this in /etc/rc.conf:
>
> nfs_client_enable="YES"
> nfs_client_flags="-n 10"
>
> [You may use more than 10 instances if you suspect that…
Howdy!
I have done some simulations with NFS servers: an Intel SCB2 (4G RAM)
serving files from 500G RAID devices. I created a treed directory structure
with 300G of 32k files that approximates our "home directory" structure.
I had about 6 diskless front ends (Tyan 2518 with…
I recently did some research into NFS performance tuning and came across
the suggestion, in an article on onlamp.com by Michael Lucas, that 32768
is a good value for the read and write buffers. His suggested
flags are:
tcp,intr,nfsv3,-r=32768,-w=32768
I used these options (I found tcp was…
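On a Linux client (such as the Mint box discussed elsewhere in these threads), the equivalent of those flags is expressed with rsize/wsize and nfsvers mount options. A sketch of a client fstab entry; the server name and paths are hypothetical:

```
# /etc/fstab on a Linux NFS client (names illustrative)
nfsserver:/export  /mnt/export  nfs  tcp,intr,nfsvers=3,rsize=32768,wsize=32768  0  0
```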