Claus Guttesen wrote:
we have a FreeBSD 7.0 NFS client (csup today, built world and kernel).
It mounts a Solaris 10 NFS share.
We see poor performance with 7.0 (3 MB/s).
We have tried both UDP and TCP mounts, both sync and async.
This is our mount:
nest.xx.xx:/data/export/hosts/bsd7.xx.xx/ /mnt/n
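When chasing this kind of slowdown, the usual first step is to compare transports and transfer sizes explicitly. A minimal sketch, using the poster's mount source; the rsize/wsize values are illustrative starting points to benchmark, not recommendations:

```shell
# TCP mount with explicit transfer sizes (values are examples to measure):
# mount_nfs -o tcp,rsize=32768,wsize=32768 nest.xx.xx:/data/export/hosts/bsd7.xx.xx /mnt/n
# UDP mount for comparison; smaller sizes often behave better over UDP:
# mount_nfs -o udp,rsize=8192,wsize=8192 nest.xx.xx:/data/export/hosts/bsd7.xx.xx /mnt/n
```

Running the same large sequential dd through each variant makes the difference visible quickly.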
Valerio Daelli wrote:
On Feb 19, 2008 8:53 PM, Kris Kennaway <[EMAIL PROTECTED]> wrote:
Valerio Daelli wrote:
Hi list
we have a FreeBSD 7.0 NFS client (csup today, built world and kernel).
It mounts a Solaris 10 NFS share.
We see poor performance with 7.0 (3 MB/s).
We have tried both UDP and TCP
Steven Hartland wrote:
- Original Message - From: "Eric Anderson" <[EMAIL PROTECTED]>
I saw this once before, a long time back, and every time I went
through a debugging session, it came to some kind of lock on the
sysctl tree with regards to the geom info (maybe the
Dieter wrote:
What *exactly* do you mean by
machine still locks up with no activity for anywhere from 20 to 30 seconds.
Is there disk activity? (e.g. activity light(s) flashing if you have them)
Can't tell if there is disk activity; it's in a DC miles away ;)
Does top continue to update the screen?
Erik Cederstrand wrote:
Brooks Davis wrote:
On Tue, Sep 25, 2007 at 08:59:44AM +0200, Erik Cederstrand wrote:
Brooks Davis wrote:
On Mon, Sep 24, 2007 at 01:34:34PM +0200, Erik Cederstrand wrote:
>> [...]
If I ignore documentation distfiles (will this affect benchmarks
in any way?), AFAI
Decibel! wrote:
On Sep 13, 2007, at 3:24 PM, Palle Girgensohn wrote:
--On Thursday, 13 Sep 2007 15.07.17 -0400 Francisco Reyes
<[EMAIL PROTECTED]> wrote:
Palle Girgensohn writes:
Now, I hear rumors that AMD is to be preferred over Intel for
performance
From what I have read in the
On 07/17/07 17:10, Jack Toering wrote:
>I'm very delighted with our four-way Woodcrest at 3 GHz. It's an HP DL380
G5. I have two four-way Opterons at 2 GHz, but clock for clock the Woodcrest is
(way) faster.<
These are things I need to hear because it doesn't make sense for me to
watch these things un
On 05/16/07 11:08, Andrew Edwards wrote:
I have a system running dual Intel Xeon 2.8 GHz CPUs with 4 GB of RAM,
using an Intel RAID controller, model SRCU42X, which uses the amr driver.
I have had this server running 5.4, upgraded to 6.2, and it ran fine
for several months, and then after a normal r
On 05/15/07 11:30, Kevin Kobb wrote:
Tom Judge wrote:
Randy Schultz wrote:
On Tue, 15 May 2007, Kevin Kobb spaketh thusly:
-}These reports on poor performance using mpt seem to be on SATA
-}drives. Has anybody been seeing this using SAS drives? -}
-}We are testing Dell PE840s with hot swap SAS
On 05/11/07 08:42, Bill Moran wrote:
In response to Randy Schultz <[EMAIL PROTECTED]>:
Hi there,
We just purchased a Dell 860 with these specifics:
- dual core pentium 915, 2.8 GHz, 800MHz FSB
- 2x512 MB 533 MHz DDR2 RAM
- Dell's SAS/SATA drive controller(which is actually an LSILogic
On 03/22/07 08:20, Roman Gorohov. wrote:
Hi, Oliver.
Thanks for explanations.
Roman Gorohov. <[EMAIL PROTECTED]> wrote:
>> Hello list.
>> There is a server with FreeBSD 4.11-STABLE (can't upgrade for now) on
HP ProLiant DL140.
>> Disk system: at scbus0 target 0 lun0(pass0,da0)
on a
On 03/02/07 09:28, Brooks Davis wrote:
On Fri, Mar 02, 2007 at 10:38:35AM +0100, O. Hartmann wrote:
The last days I tried to figure out why some of my lab's FreeBSD boxes
and also mine at home seem to be outperformed by some Linux setups
around here and I saw something interesting.
On my lab'
On 03/02/07 06:03, Alexander Leidinger wrote:
Quoting Cheffo <[EMAIL PROTECTED]> (from Fri, 02 Mar 2007 13:38:45 +0200):
Hi,
Ted Mittelstaedt wrote:
- Original Message - From: "O. Hartmann"
<[EMAIL PROTECTED]>
To: ;
Sent: Friday, March 02, 2007 1:38 AM
Subject: (S)ATA performance
On 02/28/07 03:06, Peter Losher wrote:
Ivan Voras wrote:
I agree in general, but MySQL performance is very exposed as an advocacy
issue - it has traditionally been the source of statements like
"FreeBSD's threading implementation is weak/bad/broken".
And these days ISC can't consciously recom
not much of any
though.
Eric
--
----
Eric Anderson   Sr. Systems Administrator   Centaur Technology
An undefined problem has an infinite number of solutions.
top/iostat/gstat may help you in these cases.
Eric
also to the individual disks. That will at least tell us where to look.
Eric
for
the 4.x tree if they want it. That's what people do, and that's the
beauty of open source.
Eric
/mounting the filesystem in question in between each test.
Also, I recommend using a block device, instead of a file on a
filesystem, since the filesystem could allocate blocks for the file
differently each time, causing varying results.
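A concrete sketch of that kind of repeated-run comparison; the file path, size, and block sizes below are arbitrary examples, and for a raw-device test you would substitute the device node and remount the filesystem between runs as described above:

```shell
# Write a 16 MB scratch file, then read it back with several block sizes.
dd if=/dev/zero of=/tmp/ddtest bs=64k count=256 2>/dev/null
for bs in 8k 16k 64k 512k; do
  echo "bs=$bs:"
  # dd prints its throughput summary on the last line of stderr
  dd if=/tmp/ddtest of=/dev/null bs=$bs 2>&1 | tail -1
done
rm /tmp/ddtest
```

Beware caching: after the first pass the file sits in the buffer cache, so later passes measure memory speed, not the disk, unless the cache is flushed (e.g. by the unmount/remount mentioned above).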
Eric
r real).
Eric
pages to be served, as long as you aren't
actually IO bound somewhere. I don't know how much memory you have in
the current machine, but adding more might be a good quick upgrade (that
is fairly cheap probably).
Eric
didn't mention anything about which filesystems these were
using on both occasions.
Eric
etalk etalk wrote:
From: Eric Anderson <[EMAIL PROTECTED]>
To: etalk etalk <[EMAIL PROTECTED]>
CC: [EMAIL PROTECTED], freebsd-performance@freebsd.org
Subject: Re: about ufs filesystem io performance!
Date: Thu, 25 May 2006 07:46:44 -0500
etalk etalk wrote:
5.3 vs 6.0 The
What are the 2, 4, 8 numbers referencing? How many times did you
run the tests?
Eric
single core processors with the 8M cache on them.
Also, make sure that your database is set up with indexes correctly and
is pruned (if that needs to be done for your database server).
Eric
Eric Anderson wrote:
Mark Bucciarelli wrote:
On Sat, Feb 18, 2006 at 11:06:57PM -0500, Mark Bucciarelli wrote:
I'm curious how fast stat is.
I generated a list of 200,000 file names
# find / | head -200000 > files.statspeed
then ran a million iterations of randomly picking a f
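A rough sketch of that procedure; the directory, list length, and single pass below are placeholder choices, not the poster's actual parameters:

```shell
# Build a list of file names, then force a stat() on each one.
find /usr/include -type f 2>/dev/null | head -20 > /tmp/files.statspeed
while read f; do
  ls -i "$f" > /dev/null 2>&1   # ls -i stat()s the path to print its inode
done < /tmp/files.statspeed
# Wrap the loop in time(1) and repeat it many times for a real measurement.
rm /tmp/files.statspeed
```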
ore you might see higher
numbers than sampling from the entire disk (since the speed is probably
mostly dominated by disk seeks I believe).
What exactly are you trying to determine?
Eric
the local disks, but any real high performance data
storage I connect via Fibre channel array built as a 16 disk RAID0+1 (or
RAID10 depending who you ask). Changing out dead drives (or even live
ones) has never been an issue.
Eric
Have you tried a smaller block size? What do 8k, 16k, or 512k do for
you? There really isn't much room for improvement here on a single device.
Eric
Francisco Reyes wrote:
On Wed, 28 Sep 2005, Eric Anderson wrote:
Keep in mind that 5-STABLE, and 6.x (and -CURRENT) have a max of 256
nfsd's, so if you want to go higher, you have to modify a line in nfsd.c.
So far only a handful of clients are expected. I am going to start a
Francisco Reyes wrote:
On Fri, 23 Sep 2005, Eric Anderson wrote:
Use the -n flag to nfsd, so in /etc/rc.conf:
nfs_server_flags="-u -t -n 1024"
Working on the nfs server today.
How about the "-r" flag? It is the default. Is it not needed?
The man page says "-r&
Francisco Reyes wrote:
On Fri, 23 Sep 2005, Eric Anderson wrote:
You should also increase the rsize and wsize parameters on the mount
options for better efficiency.
On the server?
On the client (in /etc/fstab or on the command line with -o).
Eric
overkill, but you should experiment to find out.
You should also increase the rsize and wsize parameters on the mount
options for better efficiency.
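On the client that might look like the following /etc/fstab fragment; the server name and 32768-byte sizes are examples to experiment with, not known-good values:

```shell
# /etc/fstab fragment -- rsize/wsize are in bytes:
# server:/export  /mnt/nfs  nfs  rw,tcp,rsize=32768,wsize=32768  0  0
# Or as a one-off on the command line:
# mount -t nfs -o rw,tcp,rsize=32768,wsize=32768 server:/export /mnt/nfs
```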
Eric
Eric Anderson wrote:
Francisco Reyes wrote:
On Tue, 13 Sep 2005, mariano benedettini wrote:
91.3% idle
CPU is not the problem. :
e improvements for very little cost.
Eric
In another shell:
ktrace -tni -ip 1268
In ktraced shell:
cd /
cd /tmp
touch t
cat t
rm t
In ktrace shell window:
ktrace -C
kdump | less
That should give you a quick idea of how to use it. The man page is pretty
decent.
Eric
Francisco Reyes wrote:
On Thu, 22 Sep 2005, Eric Anderson wrote:
Also, if it is an NFS server, one should check the cpu times on the
nfsd processes. I've found that many times there aren't enough nfsd
processes to take the load from many clients. Increasing the number
(double
load from many clients. Increasing the number
(double it) often helps this. The max in 5.3 is 20, but you can easily
change it and get around it.
Eric
:
netstat -m
uname -a
Eric
rock solid. You'll pay a little more for them, but
there is a reason for it.
Eric
about disk arrays that support RAID3?
Poul-Henning Kamp wrote:
In message <[EMAIL PROTECTED]>, Eric Anderson writes:
Poul-Henning Kamp wrote:
In message <[EMAIL PROTECTED]>, Eric Anderson writes:
Don't mean to be terse here, but I'm talking about the same test done on
two different RAID5 configurations, with di
Poul-Henning Kamp wrote:
In message <[EMAIL PROTECTED]>, Eric Anderson writes:
Don't mean to be terse here, but I'm talking about the same test done on
two different RAID5 configurations, with different disks, and not just
me - other users in this very thread see the same issu
Poul-Henning Kamp wrote:
In message <[EMAIL PROTECTED]>, Eric Anderson writes:
I'll be honest here, I don't care much if the speed difference between
4.X and 5.X is measurable, or whatever. What I find a little
telling of an issue somewhere is that READS are slower than
es RAM heavily,
- and which can benefit from 64-bit registers (floating point),
will get an _extreme_ boost compared to Intel's EM64T (or Itanium2).
Agreed..
Eric
Maybe GEOM is
busy doing a check or some routine when data is being accessed directly from
the disk device instead of through a filesystem? I don't know, but it
is an issue, and I'm sure we'll get nailed up to a fence in some
benchmark somewhere if we don't fix it..
Eric
earlier also, I'm seeing the same behavior to a Fibre
Channel RAID array (15 400GB SATA150 disks in RAID5 config). I'm using
a QLogic HBA connected directly to the SATA array.
Eric
ID5.
Eric
card. My reads were 1/2 of my writes..
Eric
Steven Hartland wrote:
- Original Message - From: "Eric Anderson" <[EMAIL PROTECTED]>
Where do I start looking?
First, understand that RAID 5 depends on fast hardware to
perform the XOR operations. A single disk without any RAID can
easily outperform a RAID a
decent performance.
Eric
ed on the same bus.
Eric
em (a different one).
Thank you for your guidance.
You could use the atabeast to do two RAID 5s, then use vinum to stripe
those two.
Eric
you some
better performance, probably not as close as the amr device, but I would guess
somewhere in the 80-90 MB/s range.
Eric
).
Eric
er..
Eric
e best performance, you might try a RAID 0+1 (or 10
possibly) instead of RAID 5.
Eric
A lost ounce of gold may be found, a lost mo
could mean a lot of things. Is this a single
drive, or a RAID subsystem?
Eric
CPU you were using for this,
you may or may not gain. How busy was the server during that time? Is this to
a single IDE disk? If so, you are probably bottlenecked by that IDE drive.
Eric
58 matches