Hi,
compression is off.
I've checked rw-performance with 20 simultaneous cp processes using the following...
#!/usr/bin/bash
for ((i=1; i<=20; i++))
do
cp lala$i lulu$i &
done
(lala1 to lala20 are 2 GB files)
...and ended up with 546 MB/s. Not too bad at all.
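If anyone wants to repeat this, below is a minimal variation of the script above that also times the whole run itself, assuming the same lala1..lala20 files; the arithmetic just divides the 20 x 2 GB copied by the elapsed seconds.

#!/usr/bin/bash
# same 20 parallel copies as above, plus a wait and a rough throughput estimate
# assumes lala1..lala20 are the 2 GB files mentioned above
SECONDS=0
for ((i=1; i<=20; i++))
do
cp lala$i lulu$i &
done
wait    # block until every background cp has exited
echo "copied $((20 * 2048)) MB in ${SECONDS}s, ~$((20 * 2048 / SECONDS)) MB/s"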
>
> That all said - we don't have a simple dd benchmark for random
> seeking.
Feel free to try out randomread.f and randomwrite.f - or combine them
into a new workload of your own to get mixed random reads and writes.
eric
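In case it helps, a combined random read/write personality could look roughly like the sketch below. It is modelled on the shape of the stock randomread.f/randomwrite.f; the file name, variable defaults and process/thread names here are illustrative, not the shipped files:

# randrw.f - illustrative mixed random read/write workload (not a stock personality)
set $dir=/tmp
set $nthreads=20
set $iosize=8k
set $filesize=2g

define file name=largefile1,path=$dir,size=$filesize,prealloc,reuse

define process name=randrw,instances=1
{
  thread name=randrwthread,memsize=5m,instances=$nthreads
  {
    flowop read name=rand-read1,filename=largefile1,iosize=$iosize,random
    flowop write name=rand-write1,filename=largefile1,iosize=$iosize,random
  }
}

Load it at the filebench> prompt and run it for a fixed time (e.g. run 60), as with the other personalities.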
Hi Eric,
On 10/10/07 12:50 AM, "eric kustarz" <[EMAIL PROTECTED]> wrote:
> Since you were already using filebench, you could use the
> 'singlestreamwrite.f' and 'singlestreamread.f' workloads (with
> nthreads set to 20, iosize set to 128k) to achieve the same things.
Yes but once again we see th
Hi Eric,
>Are you talking about the documentation at:
>http://sourceforge.net/projects/filebench
>or:
>http://www.opensolaris.org/os/community/performance/filebench/
>and:
>http://www.solarisinternals.com/wiki/index.php/FileBench
>?
I was talking about the solarisinternals wiki. I can't find any
Since you were already using filebench, you could use the
'singlestreamwrite.f' and 'singlestreamread.f' workloads (with
nthreads set to 20, iosize set to 128k) to achieve the same things.
With the latest version of filebench, you can then use the '-c'
option to compare your results in a nic
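To save a lookup, at the interactive prompt that would be along these lines (assuming the singlestream personalities expose $nthreads and $iosize as tunables, as described above):

filebench> load singlestreamwrite
filebench> set $nthreads=20
filebench> set $iosize=128k
filebench> run 60

and the same again with singlestreamread for the read side.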
Do you have compression turned on? If so, dd'ing from /dev/zero isn't very
useful as a benchmark. (I don't recall if all-zero blocks are always detected
if checksumming is turned on, but I seem to recall that they are, even if
compression is off.)
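For what it's worth, a one-liner to check what the dataset under test is actually doing (substitute your own pool/dataset name):

zfs get compression,checksum,compressratio <pool>/<dataset>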
I wanted to test some simultaneous sequential writes and wrote this little
snippet:
#!/bin/bash
for ((i=1; i<=20; i++))
do
dd if=/dev/zero of=lala$i bs=128k count=32768 &   # 128k x 32768 = 4 GB per file
done
While the script was running I watched zpool iostat and measured the time
between starting and stopping of the writes.
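The zpool iostat side of that is just something like the line below in a second terminal; the pool name is a placeholder and 5 is the sampling interval in seconds:

zpool iostat <pool> 5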
On Oct 9, 2007, at 4:25 AM, Thomas Liesner wrote:
> Hi,
>
> i checked with $nthreads=20 which will roughly represent the
> expected load and these are the results:
Note, here is the description of the 'fileserver.f' workload:
"
define process name=filereader,instances=1
{
thread name=filere
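The quoted description is cut off above. From memory, and judging by the flowop names that appear in the per-operation summaries further down the thread (statfile1, deletefile1, closefile2, readfile1), the fileserver personality is shaped roughly as follows; this is a paraphrase for orientation, not the verbatim shipped file:

define process name=filereader,instances=1
{
  thread name=filereaderthread,memsize=10m,instances=$nthreads
  {
    flowop openfile name=openfile1,filesetname=bigfileset,fd=1
    flowop appendfilerand name=appendfilerand1,iosize=$iosize,fd=1
    flowop closefile name=closefile1,fd=1
    flowop openfile name=openfile2,filesetname=bigfileset,fd=1
    flowop readwholefile name=readfile1,fd=1
    flowop closefile name=closefile2,fd=1
    flowop deletefile name=deletefile1,filesetname=bigfileset
    flowop statfile name=statfile1,filesetname=bigfileset
  }
}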
Hi,
I checked with $nthreads=20, which will roughly represent the expected load, and
these are the results:
IO Summary: 7989 ops, 7914.2 ops/s, (996/979 r/w), 142.7 MB/s, 255 us cpu/op,
0.2 ms latency
BTW, smpatch is still running; further tests will be done once the system
is rebooted.
The fig
Hi Thomas,
the point I was making was that you'll see low performance figures
with 100 concurrent threads. If you set nthreads to something closer
to your expected load, you'll get a more accurate figure.
Also, there's a new filebench out now, see
http://blogs.sun.com/erickustarz/entry/filebench
Hi again,
I did not want to compare the filebench test with the single mkfile command.
Still, I was hoping to see similar numbers in the filebench stats.
Any hints on what I could do to further improve the performance?
Would a raid1 over two stripes be faster?
TIA,
Tom
On 08/10/2007, Thomas Liesner <[EMAIL PROTECTED]> wrote:
> [EMAIL PROTECTED] # ./filebench
> filebench> load fileserver
> filebench> run 60
> IO Summary: 8088 ops, 8017.4 ops/s, (997/982 r/w), 155.6 MB/s, 508 us
> cpu/op, 0.2 ms latency
> 12746: 65.266: Shutting down processes
> filebench>
>
> statfile1      988 ops/s    0.0 MB/s   0.0 ms/op   22 us/op-cpu
> deletefile1    991 ops/s    0.0 MB/s   0.0 ms/op   48 us/op-cpu
> closefile2     997 ops/s    0.0 MB/s   0.0 ms/op    4 us/op-cpu
> readfile1      997 ops/s  139.8 MB/s   0.2 ms/op
Hi all,
I want to replace a bunch of Apple Xserves with Xraids and HFS+ (brr) by Sun
x4200s with SAS JBODs and ZFS. The application will be the Helios UB+ fileserver
suite.
I installed the latest Solaris 10 on an x4200 with 8 GB of RAM and two Sun SAS
controllers, attached two SAS JBODs with 8 SA