>>>>> "edm" == Eric D Mudama <edmud...@bounceswoosh.org> writes:
edm> If, instead of having ZFS manage these differences, a user
edm> simply created slices that were, say, 98%

If you're willing to manually create slices, you should be able to manually enable the write cache, too, while you're in there, so I wouldn't worry about that.

I'd worry a little about the confusion over this write cache bit in general---where the write cache setting is stored, when it's enabled and when (if?) it's disabled, whether the rules differ for each type of disk attachment, and whether plugging the disk into Linux will screw up the setting by auto-enabling it at boot or auto-disabling it at shutdown, or whether Linux uses stateless versions (analogous to sdparm without --save) when it prints that boot-time message about enabling write caches.

For example weirdness, on iSCSI I get this, on a disk to which I've let ZFS write a GPT/EFI label:

  write_cache> display
  Write Cache is disabled
  write_cache> enable
  Write cache setting is not changeable

so is that a bug in my iSCSI target, and is there another implicit write cache inside the iSCSI initiator or not?

The Linux hdparm man page says:

  -W   Disable/enable the IDE drive's write-caching feature
       (default state is undeterminable; manufacturer/model specific).

so is the write_cache 'display' feature in 'format -e' actually reliable?  Or is it impossible to reliably read this setting on an ATA drive, and 'format -e' is making stuff up?

With Linux I can get all kinds of crazy caching data from a SATA disk:

  r...@node0 ~ # sdparm --page=ca --long /dev/sda
      /dev/sda: ATA       WDC WD1000FYPS-0  02.0
  Caching (SBC) [PS=0] mode page:
    IC      0  Initiator control
    ABPF    0  Abort pre-fetch
    CAP     0  Caching analysis permitted
    DISC    0  Discontinuity
    SIZE    0  Size (1->CSS valid, 0->NCS valid)
    WCE     1  Write cache enable
    MF      0  Multiplication factor
    RCD     0  Read cache disable
    DRRP    0  Demand read retension priority
    WRP     0  Write retension priority
    DPTL    0  Disable pre-fetch transfer length
    MIPF    0  Minimum pre-fetch
    MAPF    0  Maximum pre-fetch
    MAPFC   0  Maximum pre-fetch ceiling
    FSW     0  Force sequential write
    LBCSS   0  Logical block cache segment size
    DRA     0  Disable read ahead
    NV_DIS  0  Non-volatile cache disable
    NCS     0  Number of cache segments
    CSS     0  Cache segment size

but what's actually coming from the drive, and what's fabricated by the SCSI-to-SATA translator built into Garzik's libata?  Because I think Solaris has such a translator, too, if it's attaching sd to SATA disks.  I'm guessing it's all a fantasy, because:

  r...@node0 ~ # sdparm --clear=WCE /dev/sda
      /dev/sda: ATA       WDC WD1000FYPS-0  02.0
  change_mode_page: failed setting page: Caching (SBC)

But never mind the write cache: I'd be happy saying ``just round down disk sizes using the labeling tool instead of giving ZFS the whole disk, if you care,'' IF the following things were true:

 * doing so were written up as a best practice.  I think it is a best practice if the rest of the storage industry, from EMC down to $15 Promise cards, is doing it, though maybe it's not important any more because of IDEMA.  And right now very few people are likely to have done it, because of the way they've been guided through the setup process.

 * it were possible to do this label-sizing on bootable mirrors in the various traditional/IPS/flar/jumpstart installers.

 * there weren't a proliferation of >= 4 labeling tools in Solaris, each riddled with assertion bailouts and slightly different capabilities.
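(To make the slice-rounding option at the top concrete, here is a minimal sketch, with made-up device names, of what doing it by hand might look like; the write_cache menu items are the same ones quoted above, and since ZFS is being handed slices rather than whole disks it won't touch the write cache, so you enable it yourself while you're in format:

  # format -e c0t1d0
  format> partition
   ... shrink slice 0 to a bit less than the whole disk, then label ...
  partition> quit
  format> cache
  cache> write_cache
  write_cache> display
  write_cache> enable
  format> quit
  # zpool create tank mirror c0t1d0s0 c0t2d0s0

and the same caveat as everywhere else in this thread applies: whether that 'enable' actually sticks depends on the attachment type and maybe on the drive itself.)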
Linux also has a mess of labeling tools, but they're less assertion-riddled, and usually you can pick one and use it for everything---you don't have to drag out a different tool for USB sticks because they're considered ``removeable.''  Also, it's always possible to write to the unpartitioned block device with 'dd' on Linux (and FreeBSD and Mac OS X), no matter what label is on the disk, while Solaris doesn't seem to have an unpartitioned device.  And finally, the Linux formatting tools work by writing to this unpartitioned device, not by calling into a rat's nest of ioctl's, so they're much easier for me to get along with.

Part of the attraction of ZFS should be avoiding this messy part of Solaris, but we still have to use format/fmthard/fdisk/rmformat: to swap label types, because ZFS won't; to frob the write cache, because ZFS's user interface is too simple and does that semi-automatically, though I'm not sure of all the rules it's using; to enumerate the installed disks; and to determine which of the several states the iSCSI initiator is in: working, connected-but-not-identified, disconnected, or disconnected-but-refcounted.

And while ZFS will do special things to an UNlabeled disk, I'm not sure there is a documented procedure for removing the label from a disk---Sun seems to imagine all disks will ship with labels that can never be removed, only ``converted,'' and removing a GPT/EFI label is tricky because of that backup label at the end, which some tools respect and others don't.

I would prefer cleaning up the mess of labelers and removing bogus assertions about ``removeable'' disks and similar cruft, over adding the equivalent of a fifth also-extremely-limited labeling tool to zpool.
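(Since I brought up the backup label: on the Linux side, where you do get an unpartitioned block device, removing an EFI label by hand is a rough sketch like this, with a placeholder device name, assuming 512-byte sectors and the usual GPT layout of 34 sectors at the front and 33 at the end:

  # SECTORS=$(blockdev --getsz /dev/sdX)
  # dd if=/dev/zero of=/dev/sdX bs=512 count=34
  # dd if=/dev/zero of=/dev/sdX bs=512 seek=$((SECTORS - 33)) count=33

blockdev --getsz reports the size in 512-byte sectors; the first dd clears the protective MBR plus the primary GPT header and partition entries, and the second clears the backup entries and header at the end of the disk.  Skip the second dd and you hit exactly the problem above: some tools see a clean disk, others resurrect the label from the backup copy.)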