> Doesn't this mean that if you enable write back, and you have
> a single, non-mirrored raid-controller, and your raid controller
> dies on you so that you lose the contents of the NVRAM, you have
> a potentially corrupt file system?
It is understood that any single point of failure could result in data loss
or corruption.
> ZFS has intelligent prefetching. AFAIK, Solaris disk drivers do not
> prefetch.
Can you point me to any reference? I didn't find anything stating yea or
nay for either of these.
If I understand correctly, ZFS nowadays will only flush data to
non-volatile storage (such as a RAID controller's NVRAM), and not
all the way out to the disks. (This was done to solve performance problems
with some storage systems, and I believe it is also the right thing
to do under normal circumstances.)
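If the controller cache really is non-volatile, the flush requests themselves
can still be a bottleneck on arrays that drain their cache on every
SYNCHRONIZE CACHE. The workaround commonly cited at the time is sketched
below; it is an assumption that your Solaris release honors this tunable, and
it is unsafe if any device in the pool has a volatile write cache:

  # /etc/system -- tells ZFS to stop issuing cache-flush requests (takes effect after reboot)
  set zfs:zfs_nocacheflush = 1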
On 19 feb 2010, at 17.35, Edward Ned Harvey wrote:
> The PERC cache measurably and significantly accelerates small disk writes.
> However, for read operations, it is insignificant compared to system ram,
> both in terms of size and speed. There is no significant performance
> improvement by enabling adaptive readahead.
On Feb 19, 2010, at 8:35 AM, Edward Ned Harvey wrote:
> One more thing I’d like to add here:
>
> The PERC cache measurably and significantly accelerates small disk writes.
> However, for read operations, it is insignificant compared to system ram,
> both in terms of size and speed. There is no significant performance
> improvement by enabling adaptive readahead.
Hello,
I have made some benchmarks with my napp-it ZFS server:
http://www.napp-it.org/bench.pdf
-> 2 GB vs 4 GB vs 8 GB RAM
-> mirror vs raidz vs raidz2 vs raidz3
-> dedup and compress enabled
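For reference, dedup and compression are per-dataset properties; the runs
above were presumably enabled with something like this sketch (the pool name
"tank" is an assumption):

  zfs set dedup=on tank          # block-level deduplication; the dedup table wants plenty of RAM
  zfs set compression=on tank    # default (lzjb) compression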
One more thing I'd like to add here:
The PERC cache measurably and significantly accelerates small disk writes.
However, for read operations, it is insignificant compared to system ram,
both in terms of size and speed. There is no significant performance
improvement by enabling adaptive readahead
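For anyone wanting to verify this on their own hardware, the PERC read-ahead
policy can be changed per logical disk. A sketch, assuming an LSI-based PERC
managed with MegaCli (the -LAll/-aAll selectors simply mean all logical disks
on all adapters):

  MegaCli -LDGetProp -Cache -LAll -aAll   # show current read/write cache policy
  MegaCli -LDSetProp ADRA -LAll -aAll     # adaptive read-ahead
  MegaCli -LDSetProp NORA -LAll -aAll     # disable read-ahead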
On Thu, Feb 18, 2010 at 10:39:48PM -0600, Bob Friesenhahn wrote:
> This sounds like an initial 'silver' rather than a 'resilver'.
Yes, in particular it will be entirely sequential.
ZFS resilver is in txg order and involves seeking.
> What I am interested in is the answer to these sorts of questions ...
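On the sequential-versus-seeking point above, a rough way to watch it from
the outside, as a sketch (the pool name "tank" is an assumption):

  zpool status tank   # reports whether a resilver/scrub is in progress and its scan rate
  iostat -xn 5        # per-device stats every 5 seconds; asvc_t climbs sharply when the disks are seeking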
On Thu, 18 Feb 2010, Edward Ned Harvey wrote:
Actually, that's easy. Although the "zpool create" happens instantly, all
the hardware raid configurations required an initial resilver, and they
were exactly what you'd expect: write 1 Gbit/s until you reach the size of
the drive. I watched the progress.
> A most excellent set of tests. We could use some units in the PDF
> file though.
Oh, by the way, you originally requested the 12G file to be used in the
benchmark, and later changed to 4G. But by that time, two of the tests had
already completed on the 12G, and I didn't throw away those results, but
included them anyway.
> A most excellent set of tests. We could use some units in the PDF
> file though.
Oh, hehehe. ;-) The units are written in the raw txt files. On your
tests, the units were ops/sec, and in mine, they were Kbytes/sec. If you
like, you can always grab the xlsx and modify it to your tastes.
On Thu, 18 Feb 2010, Edward Ned Harvey wrote:
Ok, I've done all the tests I plan to complete. For highest performance, it
seems:
· The measure I think is the most relevant for typical operation is the
fastest random read / write / mix. (Thanks Bob, for suggesting I do this
test.)
The winner is clearly striped mirrors in ZFS.
Ok, I've done all the tests I plan to complete. For highest performance, it
seems:
. The measure I think is the most relevant for typical operation is
the fastest random read / write / mix. (Thanks Bob, for suggesting I do this
test.)
The winner is clearly striped mirrors in ZFS.
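For readers who want to reproduce the comparison, the two layouts under
discussion would be created roughly as follows; a sketch only, with
hypothetical device and pool names:

  # stripe of three mirrors (the layout that won the random read/write tests)
  zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0 mirror c1t4d0 c1t5d0

  # single raidz vdev across the same six disks, for comparison
  zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0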
Richard Elling wrote:
...
As you can see, so much has changed, hopefully for the better, that running
performance benchmarks on old software just isn't very interesting.
NB. Oracle's Sun OpenStorage systems do not use Solaris 10 and if they did, they
would not be competitive in the market.
On Feb 14, 2010, at 6:45 PM, Thomas Burgess wrote:
>
> Whatever. Regardless of what you say, it does show:
>
> · Which is faster, raidz, or a stripe of mirrors?
>
> · How much does raidz2 hurt performance compared to raidz?
>
> · Which is faster, raidz, or hardware raid 5?
On Sun, 14 Feb 2010, Thomas Burgess wrote:
Solaris 10 has a really old version of ZFS. I know there are some
pretty big differences in zfs versions from my own non-scientific
benchmarks. It would make sense that people wouldn't be as
interested in benchmarks of Solaris 10 ZFS, seeing as there have been
so many changes since then.
On Sun, 14 Feb 2010, Edward Ned Harvey wrote:
iozone -m -t 8 -T -O -r 128k -o -s 12G
Actually, it seems that this is more than sufficient:
iozone -m -t 8 -T -r 128k -o -s 4G
Good news, cuz I kicked off the first test earlier today, and it seems like
it will run till Wednesday. ;-) The first run, on a single disk, took 6.5
hours.
On Sun, 14 Feb 2010, Edward Ned Harvey wrote:
> Never mind. I have no interest in performance tests for Solaris 10.
> The code is so old, that it does not represent current ZFS at all.
Whatever. Regardless of what you say, it does show:
Since Richard abandoned Sun (in favor of gmail), he ha
> Whatever. Regardless of what you say, it does show:
>
> · Which is faster, raidz, or a stripe of mirrors?
>
> · How much does raidz2 hurt performance compared to raidz?
>
> · Which is faster, raidz, or hardware raid 5?
>
> · Is a mirror twice as fast as a single disk?
> > iozone -m -t 8 -T -O -r 128k -o -s 12G
>
> Actually, it seems that this is more than sufficient:
>
> iozone -m -t 8 -T -r 128k -o -s 4G
Good news, cuz I kicked off the first test earlier today, and it seems like
it will run till Wednesday. ;-) The first run, on a single disk, took 6.5
hours.
> Never mind. I have no interest in performance tests for Solaris 10.
> The code is so old, that it does not represent current ZFS at all.
Whatever. Regardless of what you say, it does show:
. Which is faster, raidz, or a stripe of mirrors?
. How much does raidz2 hurt performance compared to raidz?
On Feb 13, 2010, at 10:54 AM, Edward Ned Harvey wrote:
> > Please add some raidz3 tests :-) We have little data on how raidz3
> > performs.
>
> Does this require a specific version of OS? I'm on Solaris 10 10/09, and
> "man zpool" doesn't seem to say anything about raidz3 ... I haven't tried
On Sat, 13 Feb 2010, Edward Ned Harvey wrote:
> kind as to collect samples of "iosnoop -Da" I would be eternally
> grateful :-)
I'm guessing iosnoop is an OpenSolaris thing? Is there an equivalent for
Solaris?
Iosnoop is part of the DTrace Toolkit by Brendan Gregg, which does
work on Solaris 10.
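A sketch of collecting the requested trace, assuming the DTrace Toolkit has
been unpacked under /opt/DTT (the install path is an assumption):

  # run as root while the benchmark is active; -D adds the elapsed time per I/O
  # in microseconds, -a prints all of the standard fields
  /opt/DTT/iosnoop -Da > /var/tmp/iosnoop.out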
> IMHO, sequential tests are a waste of time. With default configs, it
> will be
> difficult to separate the "raw" performance from prefetched
> performance.
> You might try disabling prefetch as an option.
Let me clarify:
Iozone does a nonsequential series of sequential tests, specifi
On Sat, 13 Feb 2010, Bob Friesenhahn wrote:
Make sure to also test with a command like
iozone -m -t 8 -T -O -r 128k -o -s 12G
Actually, it seems that this is more than sufficient:
iozone -m -t 8 -T -r 128k -o -s 4G
since it creates a 4GB test file for each thread, with 8 threads.
Bob
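For anyone unfamiliar with the iozone options, an annotated version of the
throughput-mode invocation discussed above (a sketch; flag meanings are per
the iozone documentation, and the per-thread file size is the 4G value
settled on above):

  # -m       use multiple internal buffers
  # -t 8 -T  throughput mode with 8 threads, implemented with POSIX threads
  # -O       report results in operations per second rather than KB/s
  # -r 128k  record (I/O) size
  # -o       open files O_SYNC, so every write is synchronous
  # -s 4G    file size per thread (8 threads x 4G = 32G touched per pass)
  iozone -m -t 8 -T -O -r 128k -o -s 4G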
On Sat, 13 Feb 2010, Edward Ned Harvey wrote:
Will test, including the time to flush(), various record sizes inside file
sizes up to 16G,
sequential write and sequential read. Not doing any mixed read/write
requests. Not doing any
random read/write.
iozone -Reab somefile.wks -g 17G -i 1 -i
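The quoted command line is cut off in the archive. As a sketch only, a
complete invocation along those lines might look like the following; the
tests selected with -i in the original are not recoverable, so -i 0 and -i 1
are assumptions matching the "sequential write and sequential read"
description:

  # -R               generate an Excel-compatible report
  # -e               include flush (fsync, fflush) time in the measurements
  # -a               auto mode: step through record sizes and file sizes
  # -b somefile.wks  also write Excel-compatible output to this file
  # -g 17G           cap the maximum file size used in auto mode
  # -i 0, -i 1       run only the write/rewrite and read/reread tests
  iozone -Reab somefile.wks -g 17G -i 0 -i 1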
Some thoughts below...
On Feb 13, 2010, at 6:06 AM, Edward Ned Harvey wrote:
> I have a new server, with 7 disks in it. I am performing benchmarks on it
> before putting it into production, to substantiate claims I make, like
> “striping mirrors is faster than raidz” and so on. Would anybody like me to
> test any particular configuration?
I have a new server, with 7 disks in it. I am performing benchmarks on it
before putting it into production, to substantiate claims I make, like
"striping mirrors is faster than raidz" and so on. Would anybody like me to
test any particular configuration? Unfortunately I don't have any SSD, so I
can't test any configurations that use a separate log or cache device.