Dana H. Myers wrote:
> Phil Brown wrote:
>> Pawel Wojcik wrote:
>>> Only SATA drives that operate under SATA framework and SATA HBA
>>> drivers have this option available to them via format -e. That's
>>> because they are treated and controlled by the system as scsi drives.
>>> From your e-mail
Dana H. Myers wrote:
Phil Brown wrote:
hmm. well I hope sun will fix this bug, and add in the long-missing
write_cache control for "regular" ata drives too.
Actually, I believe such ata drives by default enable the write cache.
some do, some don't. regardless, the toggle functionality bel
Phil Brown wrote:
> Pawel Wojcik wrote:
>> Only SATA drives that operate under SATA framework and SATA HBA
>> drivers have this option available to them via format -e. That's
>> because they are treated and controlled by the system as scsi drives.
>> >From your e-mail it appears that you are talk
Pawel Wojcik wrote:
Only SATA drives that operate under SATA framework and SATA HBA drivers
have this option available to them via format -e. That's because they
are treated and controlled by the system as scsi drives.
From your e-mail it appears that you are talking about SATA drives
connec
I don't believe ZFS toggles write cache on disks on the fly. Rather,
write caching is enabled on disks which support this functionality.
Then at appropriate points in the code an ioctl is called to flush the
cache, thereby providing the appropriate data guarantees.
However this by no means
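For anyone curious what that flush looks like from the kernel side, here is a minimal sketch, assuming a consumer that already holds an LDI handle on the disk. The wrapper name below is made up for illustration; the ioctl itself, DKIOCFLUSHWRITECACHE from dkio, is (as far as I can tell) what ZFS issues at its commit points:

#include <sys/dkio.h>
#include <sys/sunldi.h>
#include <sys/cred.h>
#include <sys/file.h>		/* FKIOCTL */

/*
 * Illustrative only: ask the disk behind an LDI handle to flush its
 * write cache.  A NULL argument requests a plain synchronous flush.
 */
static int
flush_write_cache(ldi_handle_t lh)
{
	return (ldi_ioctl(lh, DKIOCFLUSHWRITECACHE, (intptr_t)NULL,
	    FKIOCTL, kcred, NULL));
}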
Subject: [zfs-discuss] Re: disk write cache, redux
From: Gregory Shaw <[EMAIL PROTECTED]>
Date: Thu, 15 Jun 2006 21:36:52 -0600
To: zfs-discuss@opensolaris.org
I've got a pretty dumb question regarding SATA and write cache. I
don't see options in 'format -e' on SATA drives for checking/setting
write cache.
I've seen the options for SCSI drives, but not SATA.
I'd like to help on the SATA write cache enable/disable problem, if I
can.
What am I missing?
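If format -e doesn't expose a cache menu for a drive, one way to poke at it directly is through the dkio write-cache ioctls, assuming the driver underneath implements them (which, per the rest of this thread, the legacy ata path does not). A rough user-level sketch; the device path is only an example:

#include <sys/types.h>
#include <sys/dkio.h>
#include <fcntl.h>
#include <stdio.h>
#include <stropts.h>
#include <unistd.h>

int
main(int argc, char **argv)
{
	const char *dev = (argc > 1) ? argv[1] : "/dev/rdsk/c0t0d0s2";
	int fd, wce;

	if ((fd = open(dev, O_RDONLY)) < 0) {
		perror(dev);
		return (1);
	}
	if (ioctl(fd, DKIOCGETWCE, &wce) < 0) {
		perror("DKIOCGETWCE");	/* driver doesn't support it */
		(void) close(fd);
		return (1);
	}
	(void) printf("%s: write cache %s\n", dev,
	    wce ? "enabled" : "disabled");

	/* To enable: wce = 1; ioctl(fd, DKIOCSETWCE, &wce);  (needs privilege) */

	(void) close(fd);
	return (0);
}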
Roch wrote:
Check here:
http://cvs.opensolaris.org/source/xref/on/usr/src/uts/common/fs/zfs/vdev_disk.c#157
distilled version:
vdev_disk_open(vdev_t *vd, uint64_t *psize, uint64_t *ashift)
/*...*/
/*
* If we own the whole disk, try to enable disk write caching.
* We ignore error
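The excerpt gets cut off right at the interesting part; going by the vdev_disk.c link above, the continuation is roughly the following (a paraphrase, not a verbatim copy): when ZFS owns the whole disk it turns the cache on with DKIOCSETWCE and deliberately ignores any error.

	if (vd->vdev_wholedisk == 1) {
		int wce = 1;

		/* Errors ignored: it is fine if the drive refuses. */
		(void) ldi_ioctl(dvd->vd_lh, DKIOCSETWCE, (intptr_t)&wce,
		    FKIOCTL, kcred, NULL);
	}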
Check here:
http://cvs.opensolaris.org/source/xref/on/usr/src/uts/common/fs/zfs/vdev_disk.c#157
-r
Phil Brown writes:
> Roch Bourbonnais - Performance Engineering wrote:
> > I'm puzzled by 2 things.
> >
> > Naively I'd think a write_cache should not help throughput
> > test since the c
Roch Bourbonnais - Performance Engineering wrote:
I'm puzzled by 2 things.
Naively I'd think a write_cache should not help throughput
test since the cache should fill up after which you should still be
throttled by the physical drain rate. You clearly show that
it helps; Anyone knows why/how a
I was just on the phone with Andy Bowers. He cleared up that
our SATA device drivers need some work. We basically do not
have the necessary I/O concurrency at this stage. So the
write_cache is actually a good substitute for tag queuing.
So that explains why we get more throughput _on SATA_ drives
On Jun 15, 2006, at 06:23, Roch Bourbonnais - Performance Engineering
wrote:
Naively I'd think a write_cache should not help throughput
test since the cache should fill up after which you should still be
throttled by the physical drain rate. You clearly show that
it helps; Anyone knows why
I'm puzzled by 2 things.
Naively I'd think a write_cache should not help throughput
test since the cache should fill up after which you should still be
throttled by the physical drain rate. You clearly show that
it helps; Anyone knows why/how a cache helps throughput?
And the second thing...
> I previously wrote about my scepticism on the claims that zfs selectively
> enables and disables write cache, to improve throughput over the usual
> solaris defaults prior to this point.
I have snv_38 here. With a zpool thus:
bash-3.1# zpool status
  pool: zfs0
 state: ONLINE
 scrub: scrub c