> > Summary (1.8" form factor): write: 35 MB/sec, read: 62 MB/sec, IOPS: 7,000
> >
> That is on par with a 5400 rpm disk, except for roughly 100x more small,
> random-read IOPS. The biggest issue is the pricing, which should start to
> become competitive for mortals this year.
$600+ for a 32 GB de
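(Rough arithmetic for context, not from the original post: at 5400 rpm a half-rotation costs about 5.6 ms, and with roughly 10 ms of average seek on top of that a drive manages on the order of 60-80 small random reads per second. Against that baseline 7,000 IOPS really is about a 100x win for random reads, while 35-62 MB/sec of sequential throughput is ordinary disk territory.)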
Peter Schuller wrote:
I've been using a simple model for small, random reads. In that model,
the performance of a raidz[12] set will be approximately equal to a single
disk. For example, if you have 6 disks, then the performance for the
6-disk raidz2 set will be normalized to 1, and the perform
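A worked example of that model, using an assumed ~80 random-read IOPS per spindle (my number, not Peter's): a 6-disk raidz2 vdev has to touch every data disk to read and checksum each block, so it delivers roughly 1 x 80 = 80 small random-read IOPS no matter how wide it is, whereas the same six disks arranged as three 2-way mirrors can service independent reads in parallel and scale toward 6 x 80 = 480 IOPS.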
Darren Dunham wrote:
That would be useless, and not provide anything extra.
I think it's useless if a (disk) block of data holding RAIDZ parity
never has silent corruption, or if scrubbing was a lightweight operation
that could be run often.
The problem is that you will still need to
Matthew Ahrens wrote:
Robert Milkowski wrote:
Hello zfs-discuss,
zfs recv -v at the end reported:
received 928Mb stream in 6346 seconds (150Kb/sec)
I'm not sure but shouldn't it be 928MB and 150KB ?
Or perhaps we're counting bits?
That's correct, it is in bytes and should use capital B.
> It's not about the checksum but about how a fs block is stored in
> raid-z[12] case - it's spread out to all non-parity disks so in order
> to read one fs block you have to read from all disks except parity
> disks.
However, if we didn't need to verify the checksum, we wouldn't
have to read the
Al Hopper wrote:
On Fri, 5 Jan 2007, Anton B. Rang wrote:
If [SSD or Flash] devices become more prevalent and/or cheaper, I'm curious what
ways ZFS could be made to best take advantage of them?
The intent log is a possibility, but this would work better with SSD than
Flash; Flash wr
> >> ... If the block checksums
> >> show OK, then reading the parity for the corresponding data yields no
> >> additional useful information.
> >
> > It would yield useful information about the status of the parity
> > information on disk.
> >
> > The read would be done because you're already payi
... If the block checksums
show OK, then reading the parity for the corresponding data yields no
additional useful information.
It would yield useful information about the status of the parity
information on disk.
The read would be done because you're already paying the penalty for
reading all
Hello Chris,
Wednesday, December 13, 2006, 12:25:40 PM, you wrote:
CG> Robert Milkowski wrote:
>> Hello Chris,
>>
>> Wednesday, December 6, 2006, 6:23:48 PM, you wrote:
>>
>> CG> One of our file servers internal to Sun that reproduces this is
>> CG> running nv53; here is the dtrace output:
>>
>>
Ok, now I'm getting somewhere.
vault:/#dd if=/dev/zero of=/dev/dsk/c5t6d0 bs=512 count=64000
64000+0 records in
64000+0 records out
vault:/#dd if=/dev/zero of=/dev/dsk/c5t6d0 bs=512 count=64000 oseek=976174591
64000+0 records in
64000+0 records out
vault:/#zpool replace pool c5t6d0
vault:/#
Looks
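For context, ZFS keeps four 256 KB labels on every leaf vdev, two at the front of the device and two at the end, so a stale pool signature has to be wiped from both regions before zpool replace will accept the disk. A generic sketch of the procedure above, with the device name and the end offset as placeholders rather than the values from this thread:

# wipe the two labels at the start of the device
dd if=/dev/zero of=/dev/dsk/cXtYdZ bs=512 count=64000
# wipe the two labels at the end (oseek = device size in 512-byte sectors, minus 64000)
dd if=/dev/zero of=/dev/dsk/cXtYdZ bs=512 count=64000 oseek=<sectors-64000>
# then retry the replace
zpool replace pool cXtYdZ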
On 05 January, 2007 - Mark Maybee sent me these 2,9K bytes:
> Tomas Ögren wrote:
> >On 05 January, 2007 - Mark Maybee sent me these 1,5K bytes:
> >
> >>So it looks like this data does not include ::kmastat info from *after*
> >>you reset arc_reduce_dnlc_percent. Can I get that?
> >
> >Yeah, attac
On 05 January, 2007 - Tomas Ögren sent me these 33K bytes:
> On 05 January, 2007 - Mark Maybee sent me these 1,5K bytes:
>
> > So it looks like this data does not include ::kmastat info from *after*
> > you reset arc_reduce_dnlc_percent. Can I get that?
>
> Yeah, attached. (although about 18 ho
Tomas Ögren wrote:
On 05 January, 2007 - Mark Maybee sent me these 1,5K bytes:
So it looks like this data does not include ::kmastat info from *after*
you reset arc_reduce_dnlc_percent. Can I get that?
Yeah, attached. (although about 18 hours after the others)
Excellent, this confirms #3 b
And to add more fuel to the fire, an fmdump -eV shows the following:
Jan 05 2007 11:30:38.030057310 ereport.fs.zfs.vdev.open_failed
nvlist version: 0
class = ereport.fs.zfs.vdev.open_failed
ena = 0x88c01b571200801
detector = (embedded nvlist)
nvlist version: 0
On Jan 5, 2007, at 11:10, Anton B. Rang wrote:
DIRECT IO is a set of performance optimisations to circumvent
shortcomings of a given filesystem.
Direct I/O as generally understood (i.e. not UFS-specific) is an
optimization which allows data to be transferred directly between
user data bu
Hi Bill,
vault:/#zpool replace pool c5t6d0
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c5t6d0s0 is part of active ZFS pool pool. Please see zpool(1M).
vault:/#zpool replace -f pool c5t6d0
invalid vdev specification
the following errors must be manually repaired:
Could this ability (separate ZIL device) coupled with an SSD give
something like a Thumper the write latency benefit of battery-backed
write cache?
Best Regards,
Jason
On 1/5/07, Neil Perrin <[EMAIL PROTECTED]> wrote:
Robert Milkowski wrote On 01/05/07 11:45,:
> Hello Neil,
>
> Friday, Januar
On 05 January, 2007 - Mark Maybee sent me these 1,5K bytes:
> So it looks like this data does not include ::kmastat info from *after*
> you reset arc_reduce_dnlc_percent. Can I get that?
Yeah, attached. (although about 18 hours after the others)
> What I suspect is happening:
> 1 with you
So it looks like this data does not include ::kmastat info from *after*
you reset arc_reduce_dnlc_percent. Can I get that?
What I suspect is happening:
1 with your large ncsize, you eventually ran the machine out
of memory because (currently) the arc is not accounting for
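For anyone collecting the same data on their own machine, the numbers being discussed in this thread come straight from the live kernel; a generic sketch using standard mdb idioms, not the exact commands behind the attachments:

echo "::memstat" | mdb -k    # system-wide memory usage summary
echo "::kmastat" | mdb -k    # per-kmem-cache usage, shows meta-data cache growth
echo "ncsize/D" | mdb -k     # current DNLC size (the ncsize tunable)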
Robert Milkowski wrote On 01/05/07 11:45,:
Hello Neil,
Friday, January 5, 2007, 4:36:05 PM, you wrote:
NP> I'm currently working on putting the ZFS intent log on separate devices
NP> which could include separate disks and nvram/solid state devices.
NP> This would help any application using fs
On Fri, 5 Jan 2007, Anton B. Rang wrote:
> > If [SSD or Flash] devices become more prevalent and/or cheaper, I'm curious
> > what ways ZFS could be made to best take advantage of them?
>
> The intent log is a possibility, but this would work better with SSD than
> Flash; Flash writes can act
Hello Neil,
Friday, January 5, 2007, 4:36:05 PM, you wrote:
NP> I'm currently working on putting the ZFS intent log on separate devices
NP> which could include separate disks and nvram/solid state devices.
NP> This would help any application using fsync/O_DSYNC - in particular
NP> DB and NFS. Fro
On Fri, Jan 05, 2007 at 10:14:21AM -0800, Eric Hill wrote:
> I have a pool of 48 500GB disks across four SCSI channels (12 per
> channel). One of the disks failed, and was replaced. The pool is now
> in a degraded state, but I can't seem to get the pool to be happy with
> the replacement. I did
I have a pool of 48 500GB disks across four SCSI channels (12 per channel).
One of the disks failed, and was replaced. The pool is now in a degraded
state, but I can't seem to get the pool to be happy with the replacement. I
did a resilver and the pool is error free with the exception of this
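Not the resolution of this thread (the message is cut off here), just the usual checklist when a replace leaves a pool degraded; pool and device names below are placeholders:

zpool status -v tank          # look for a leftover "replacing" vdev or the old disk still listed
zpool detach tank <old-disk>  # a replacing vdev behaves like a mirror, so the stale half can be detached
zpool clear tank              # reset error counters once the resilver has completed cleanly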
On 05 January, 2007 - Mark Maybee sent me these 0,8K bytes:
> Thomas,
>
> This could be fragmentation in the meta-data caches. Could you
> print out the results of ::kmastat?
http://www.acc.umu.se/~stric/tmp/zfs-dumps.tar.bz2
memstat, kmastat and dnlc_nentries from 10 minutes after boot up unt
Thomas,
This could be fragmentation in the meta-data caches. Could you
print out the results of ::kmastat?
-Mark
Tomas Ögren wrote:
On 05 January, 2007 - Robert Milkowski sent me these 3,8K bytes:
Hello Tomas,
I saw the same behavior here when ncsize was increased from default.
Try with de
> If [SSD or Flash] devices become more prevalent and/or cheaper, I'm curious
> what ways ZFS could be made to best take advantage of them?
The intent log is a possibility, but this would work better with SSD than
Flash; Flash writes can actually be slower than sequential writes to a real
disk.
> DIRECT IO is a set of performance optimisations to circumvent shortcomings of
> a given filesystem.
Direct I/O as generally understood (i.e. not UFS-specific) is an optimization
which allows data to be transferred directly between user data buffers and
disk, without a memory-to-memory copy.
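To make that general definition concrete, here is how direct I/O is typically requested on Solaris UFS; this illustrates the concept only and is not a ZFS option (ZFS has no such switch, which is what prompts the question in this thread):

mount -F ufs -o forcedirectio /dev/dsk/c0t0d0s6 /export/data   # whole filesystem
# or per file, from the application, via directio(3C):
#   directio(fd, DIRECTIO_ON);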
> > Ah, that's a major misconception on my part then. I'd thought I'd read
> > that unlike any other RAID implementation, ZFS checked and verified
> > parity on normal data access.
> That would be useless, and not provide anything extra.
I think it's useless if a (disk) block of data holding R
I'm currently working on putting the ZFS intent log on separate devices
which could include separate disks and nvram/solid state devices.
This would help any application using fsync/O_DSYNC - in particular
DB and NFS. From prototyping, considerable performance improvements have
been seen.
Neil.
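A sketch of the dedicated log-device syntax that this work eventually shipped as (not yet available at the time of this thread; pool and device names are placeholders):

zpool create tank mirror c1t0d0 c1t1d0 log c2t0d0   # create a pool with a separate intent-log device
zpool add tank log c2t0d0                           # or add a log device to an existing pool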
Ky
I know there's been much discussion on the list lately about getting HW
arrays to use (or not use) their caches in a way that helps ZFS the most.
Just yesterday I started seeing articles on NAND Flash Drives, and I
know other Solid State Drive technologies have been around for a while
and many
On 05 January, 2007 - Robert Milkowski sent me these 3,8K bytes:
> Hello Tomas,
>
> I saw the same behavior here when ncsize was increased from default.
> Try with default and let's see what will happen - if it works then it's
> better than hanging every hour or so.
That's still not the point.. I
Hello Tomas,
Friday, January 5, 2007, 4:00:53 AM, you wrote:
TÖ> On 04 January, 2007 - Tomas Ögren sent me these 1,0K bytes:
>> On 03 January, 2007 - [EMAIL PROTECTED] sent me these 0,5K bytes:
>>
>> >
>> > >Hmmm, so there is lots of evictable cache here (mostly in the MFU
>> > >part of the ca
> >Hmmm, so there is lots of evictable cache here (mostly in the MFU
> >part of the cache)... could you make your core file available?
> >I would like to take a look at it.
>
> Isn't this just like:
> 6493923 nfsfind on ZFS filesystem quickly depletes memory in a 1GB system
>
> Which was introduc
DIRECT IO is a set of performance optimisations to circumvent
shortcomings of a given filesystem.
Check out
http://blogs.sun.com/roch/entry/zfs_and_directio
Then I would be interested to know what the expectation is for ZFS/DIO.
On Jan 5, 2007, at 06:39, dudekula mastan wrote:
Hi