> Doesn't this mean that if you enable write back, and you have
> a single, non-mirrored raid-controller, and your raid controller
> dies on you so that you lose the contents of the NVRAM, you have
> a potentially corrupt file system?
It is understood that any single point of failure could result
> ZFS has intelligent prefetching. AFAIK, Solaris disk drivers do not
> prefetch.
Can you point me to any reference? I didn't find anything stating yea or
nay for either of these.
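ZFS file-level prefetch is at least tunable, which gives one way to see its effect directly. A sketch, assuming the zfs_prefetch_disable kernel tunable present in Solaris/OpenSolaris builds of this era (verify the name on your build):

    # In /etc/system: disable ZFS file-level prefetch at the next boot
    set zfs:zfs_prefetch_disable = 1

    # Or toggle it on a live system via mdb (takes effect immediately)
    echo zfs_prefetch_disable/W0t1 | mdb -kw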
If I understand correctly, ZFS nowadays will only flush data to
non-volatile storage (such as a RAID controller's NVRAM), and not
all the way out to the disks. (This was done to solve performance problems
with some storage systems, and I believe it is also the right thing
to do under normal circumstances.)
D
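If the controller cache really is non-volatile on every device, the flush requests can also be suppressed outright for an experiment. A minimal sketch, assuming the zfs_nocacheflush tunable from the Solaris/OpenSolaris kernels of this era (unsafe if any device lacks battery-backed cache):

    # In /etc/system: stop ZFS from sending SYNCHRONIZE CACHE to the devices
    set zfs:zfs_nocacheflush = 1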
On 19 feb 2010, at 17.35, Edward Ned Harvey wrote:
> The PERC cache measurably and significantly accelerates small disk writes.
> However, for read operations, it is insignificant compared to system ram,
> both in terms of size and speed. There is no significant performance
> improvement by
Richard Elling wrote:
...
As you can see, so much has changed, hopefully for the better, that running
performance benchmarks on old software just isn't very interesting.
NB. Oracle's Sun OpenStorage systems do not use Solaris 10 and if they did, they
would not be competitive in the market. The n
On Feb 14, 2010, at 6:45 PM, Thomas Burgess wrote:
>
> Whatever. Regardless of what you say, it does show:
>
> · Which is faster, raidz, or a stripe of mirrors?
>
> · How much does raidz2 hurt performance compared to raidz?
>
> · Which is faster, raidz, or hardware raid 5?
>
> · Is a mirror twice as fast as a single disk?
> Never mind. I have no interest in performance tests for Solaris 10.
> The code is so old, that it does not represent current ZFS at all.
Whatever. Regardless of what you say, it does show:
· Which is faster, raidz, or a stripe of mirrors?
· How much does raidz2 hurt performance compared to raidz?
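For reference, the layouts behind those questions might be created along these lines; a sketch only, with "tank" and the c#t#d# device names as placeholders:

    # Stripe of three 2-way mirrors
    zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0 \
        mirror c1t4d0 c1t5d0

    # Single-parity raidz over the same six disks
    zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0

    # Double-parity raidz2 over the same six disks
    zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0

Destroying and recreating the pool between runs keeps each layout starting from the same state.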
On Feb 13, 2010, at 10:54 AM, Edward Ned Harvey wrote:
> > Please add some raidz3 tests :-) We have little data on how raidz3
> > performs.
>
> Does this require a specific version of OS? I'm on Solaris 10 10/09, and
> "man zpool" doesn't seem to say anything about raidz3 ... I haven't tried
> IMHO, sequential tests are a waste of time. With default configs, it
> will be
> difficult to separate the "raw" performance from prefetched
> performance.
> You might try disabling prefetch as an option.
Let me clarify:
Iozone does a nonsequential series of sequential tests, specifi
On Sat, 13 Feb 2010, Edward Ned Harvey wrote:
Will test, including the time to flush(), various record sizes inside file
sizes up to 16G, sequential write and sequential read. Not doing any mixed
read/write requests. Not doing any random read/write.

iozone -Reab somefile.wks -g 17G -i 1 -i
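For anyone reading that command line cold, the options break down roughly as follows (from memory of the iozone man page, so worth verifying against your copy):

    # iozone -R -e -a -b somefile.wks -g 17G -i <n> ...
    #   -R               produce an Excel-compatible report
    #   -e               include flush (fsync/fflush) time in the measurements
    #   -a               auto mode: sweep record sizes and file sizes
    #   -b somefile.wks  also write results to a binary spreadsheet file
    #   -g 17G           upper bound on file size in auto mode
    #   -i <n>           select individual tests (0 = write/rewrite, 1 = read/re-read)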
Some thoughts below...
On Feb 13, 2010, at 6:06 AM, Edward Ned Harvey wrote:
> I have a new server, with 7 disks in it. I am performing benchmarks on it
> before putting it into production, to substantiate claims I make, like
> “striping mirrors is faster than raidz” and so on. Would anybody