Re: [zfs-discuss] Wrong rpool used after reinstall!

2011-08-05 Thread Bill
rives to any other box because they are consumer drives and my servers all have ultras. Ian wrote: > Most modern boards will boot from a live USB stick. True, but I haven't found a way to get an ISO onto a USB that my

[zfs-discuss] ZFS web admin - No items found.

2006-08-23 Thread Bill
art the service (smcwebserver), no use. Anyone have the experience on it, is it a bug? Regards, Bill This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

[zfs-discuss] Re: ZFS web admin - No items found.

2006-08-24 Thread Bill
When I run the command, it prompts: # /usr/lib/zfs/availdevs -d Segmentation Fault - core dumped.

[zfs-discuss] Re: ZFS web admin - No items found.

2006-08-27 Thread Bill
# /usr/lib/zfs/availdevs -d
Segmentation Fault - core dumped.
# pstack core
core 'core' of 2350: ./availdevs -d
- lwp# 1 / thread# 1
d2d64b3c strlen (0) + c
d2fa2f82 get_device_name (8063400, 0, 804751c, 1c) + 3e
d2fa3015 get_disk (8063400, 0, 804751c

Re: [zfs-discuss] compressed root pool at installation time with flash archive predeployment script

2010-03-02 Thread Bill Sommerfeld
ks can read lzjb-compressed blocks in zfs. I have compression=on (and copies=2) for both sparc and x86 roots; I'm told that grub's zfs support also knows how to fall back to ditto blocks if the first copy fails to be readable or has a bad checksum.

Re: [zfs-discuss] swap across multiple pools

2010-03-03 Thread Bill Sommerfeld
f swap on these systems. (when migrating one such system from Nevada to Opensolaris recently I forgot to add swap to /etc/vfstab). - Bill

Re: [zfs-discuss] Snapshot recycle freezes system activity

2010-03-08 Thread Bill Sommerfeld
On 03/08/10 12:43, Tomas Ögren wrote: So we tried adding 2x 4GB USB sticks (Kingston Data Traveller Mini Slim) as metadata L2ARC and that seems to have pushed the snapshot times down to about 30 seconds. Out of curiosity, how much physical memory does this system have?

Re: [zfs-discuss] terrible ZFS performance compared to UFS on ramdisk (70% drop)

2010-03-08 Thread Bill Sommerfeld
press and checksum metadata. the evil tuning guide describes an unstable interface to turn off metadata compression, but I don't see anything in there for metadata checksums. if you have an actual need for an in-memory filesystem, will tmpfs fit

Re: [zfs-discuss] Scrub not completing?

2010-03-17 Thread Bill Sommerfeld
routinely see scrubs last 75 hours which had claimed to be "100.00% done" for over a day. - Bill

Re: [zfs-discuss] sympathetic (or just multiple) drive failures

2010-03-20 Thread Bill Sommerfeld
On 03/19/10 19:07, zfs ml wrote: What are peoples' experiences with multiple drive failures? 1985-1986. DEC RA81 disks. Bad glue that degraded at the disk's operating temperature. Head crashes. No more need be said.

Re: [zfs-discuss] Proposition of a new zpool property.

2010-03-22 Thread Bill Sommerfeld
for the better). - Bill

Re: [zfs-discuss] Tuning the ARC towards LRU

2010-04-05 Thread Bill Sommerfeld
update activity like atime updates, mtime updates on pseudo-terminals, etc.? I'd want to start looking more closely at I/O traces (dtrace can be very helpful here) before blaming any specific system component for the unexpected I/O.

Re: [zfs-discuss] SSD sale on newegg

2010-04-06 Thread Bill Sommerfeld
h roughly half the space, 1GB in s3 for slog, and the rest of the space as L2ARC in s4. That may actually be overly generous for the root pool, but I run with copies=2 on rpool/ROOT and I tend to keep a bunch of BE's around.

Re: [zfs-discuss] Secure delete?

2010-04-11 Thread Bill Sommerfeld
s completely -- probably the biggest single assumption, given that the underlying storage devices themselves are increasingly using copy-on-write techniques. The most paranoid will replace all the disks and then physically destroy the old ones. - Bill

Re: [zfs-discuss] Secure delete?

2010-04-11 Thread Bill Sommerfeld
stem encryption only changes the size of the problem we need to solve. - Bill

Re: [zfs-discuss] Suggestions about current ZFS setup

2010-04-14 Thread Bill Sommerfeld
ata pool(s). - Bill

Re: [zfs-discuss] dedup screwing up snapshot deletion

2010-04-14 Thread Bill Sommerfeld
m. If you can, try adding more RAM to the system. Adding a flash-based SSD as a cache/L2ARC device is also very effective; random I/O to SSD is much faster than random I/O to spinning rust. - Bill

Re: [zfs-discuss] Is it safe/possible to idle HD's in a ZFS Vdev to save wear/power?

2010-04-16 Thread Bill Sommerfeld
k you may need to add an "autopm enable" if the system isn't recognized as a known desktop. the disks spin down when the system is idle; there's a delay of a few seconds when they spin back up. - Bill

Re: [zfs-discuss] SSD best practices

2010-04-17 Thread Bill Sommerfeld
her it reduces the risk depends on precisely *what* caused your system to crash and reboot; if the failure also causes loss of the write cache contents on both sides of the mirror, mirroring won't help. - Bill

Re: [zfs-discuss] Single-disk pool corrupted after controller failure

2010-05-01 Thread Bill Sommerfeld
he pool. I think #2 is somewhat more likely. - Bill

[zfs-discuss] confused about zpool import -f and export

2010-05-07 Thread Bill McGonigle
oesn't seem to clear this up. Or maybe it does, but I'm not understanding the other thing that's supposed to be cleared up. This worked back on a 20081207 build, so perhaps something has changed? I'm adding format's view of the disks and a zdb list below. Thanks,

Re: [zfs-discuss] Mirroring USB Drive with Laptop for Backup purposes

2010-05-07 Thread Bill McGonigle
get fixed. His use case is very compelling - I know lots of SOHO folks who could really use a NAS where this 'just worked'. The ZFS team has done well by thinking liberally about conventional assumptions. -Bill -- Bill McGonigle, Owner BFC Computing, LLC http://bfccomputing.com/ Te

Re: [zfs-discuss] ZFS - USB 3.0 SSD disk

2010-05-07 Thread Bill McGonigle
at's about double what I usually get out of a cheap 'desktop' SATA drive with OpenSolaris. Slower than a RAID-Z2 of 10 of them, though. Still, the power savings could be appreciable. -Bill

Re: [zfs-discuss] ZFS root ARC memory usage on VxFS system...

2010-05-07 Thread Bill Sommerfeld
irect blocks for the swap device get cached). - Bill

Re: [zfs-discuss] New SSD options

2010-05-20 Thread Bill Sommerfeld
oss from ~30 seconds to a sub-second value. - Bill

Re: [zfs-discuss] Dedup... still in beta status

2010-06-15 Thread Bill Sommerfeld
55% 1.25x ONLINE -
# zdb -D z
DDT-sha256-zap-duplicate: 432759 entries, size 304 on disk, 156 in core
DDT-sha256-zap-unique: 1094244 entries, size 298 on disk, 151 in core
dedup = 1.25, compress = 1.44, copies = 1.00, dedup * compress / copies = 1.80
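The zdb -D summary above packs the dedup economics into one formula. A quick sketch, using only the numbers quoted in that output, of the in-core DDT footprint and the net space multiplier:

```python
# Sketch: reproduce the zdb -D summary arithmetic from the quoted output.
# Per-entry "in core" sizes (156 and 151 bytes) come from the zdb lines above;
# the formula dedup * compress / copies is the one zdb itself prints.

duplicate_entries, dup_core_bytes = 432_759, 156
unique_entries, uniq_core_bytes = 1_094_244, 151

# In-core dedup table (DDT) footprint: entries * per-entry in-core size.
ddt_core_mib = (duplicate_entries * dup_core_bytes +
                unique_entries * uniq_core_bytes) / 2**20

# Net space-saving multiplier reported by zdb.
dedup, compress, copies = 1.25, 1.44, 1.00
overall = dedup * compress / copies

print(f"DDT in-core size: ~{ddt_core_mib:.0f} MiB")   # ~222 MiB of RAM
print(f"overall ratio: {overall:.2f}")                 # matches the 1.80
```

This also shows why a modest pool (~1.5M unique+duplicate blocks) already wants a couple hundred MiB of RAM just for the DDT.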

Re: [zfs-discuss] zpool throughput: snv 134 vs 138 vs 143

2010-07-20 Thread Bill Sommerfeld
_size=0x1000 * Work around 6965294 set zfs:metaslab_smo_bonus_pct=0xc8 -cut here- no guarantees, but it's helped a few systems.. - Bill

Re: [zfs-discuss] L2ARC and ZIL on same SSD?

2010-07-22 Thread Bill Sommerfeld
I've been very happy with the results. - Bill

Re: [zfs-discuss] Increase resilver priority

2010-07-23 Thread Bill Sommerfeld
ence between snapshots. Turning off atime updates (if you and your applications can cope with this) may also help going forward. - Bill

Re: [zfs-discuss] Resilvering, amount of data on disk, etc.

2009-10-26 Thread Bill Sommerfeld
On Mon, 2009-10-26 at 10:24 -0700, Brian wrote: > Why does resilvering an entire disk, yield different amounts of data that was > resilvered each time. > I have read that ZFS only resilvers what it needs to, but in the case of > replacing an entire disk with another formatted clean disk, you woul

Re: [zfs-discuss] sched regularily writing a lots of MBs to the pool?

2009-11-04 Thread Bill Sommerfeld
zfs groups writes together into transaction groups; the physical writes to disk are generally initiated by kernel threads (which appear in dtrace as threads of the "sched" process). Changing the attribution is not going to be simple, as a single physical write to the pool may contain data and metad

Re: [zfs-discuss] dedupe question

2009-11-07 Thread Bill Sommerfeld
locks might overwhelm the savings from deduping a small common piece of the file. - Bill

Re: [zfs-discuss] This is the scrub that never ends...

2009-11-10 Thread Bill Sommerfeld
On Fri, 2009-09-11 at 13:51 -0400, Will Murnane wrote: > On Thu, Sep 10, 2009 at 13:06, Will Murnane wrote: > > On Wed, Sep 9, 2009 at 21:29, Bill Sommerfeld wrote: > >>> Any suggestions? > >> > >> Let it run for another day. > > I'll let it ke

Re: [zfs-discuss] zfs eradication

2009-11-11 Thread Bill Sommerfeld
ocols to themselves implement the TRIM command -- freeing the underlying storage). - Bill

Re: [zfs-discuss] Resilver/scrub times?

2009-11-22 Thread Bill Sommerfeld
early version of the fix, and saw one pool go from an elapsed time of 85 hours to 20 hours; another (with many fewer snapshots) went from 35 to 17. - Bill

[zfs-discuss] USB sticks show on one set of devices in zpool, different devices in format

2009-12-04 Thread Bill Hutchison
hows my two USB sticks of the rpool being at c8t0d0 and c11t0d0... ! How is this system even working? What do I need to do to clear this up...? Thanks for your time, -Bill

Re: [zfs-discuss] zfs on ssd

2009-12-11 Thread Bill Sommerfeld
On Fri, 2009-12-11 at 13:49 -0500, Miles Nordin wrote: > > "sh" == Seth Heeren writes: > > sh> If you don't want/need log or cache, disable these? You might > sh> want to run your ZIL (slog) on ramdisk. > > seems quite silly. why would you do that instead of just disabling > the ZIL

[zfs-discuss] zpool fragmentation issues?

2009-12-15 Thread Bill Sprouse
to avoid a directory walk? Thanks, bill

[zfs-discuss] force 4k writes?

2009-12-15 Thread Bill Sprouse
This is most likely a naive question on my part. If recordsize is set to 4k (or a multiple of 4k), will ZFS ever write a record that is less than 4k or not a multiple of 4k? This includes metadata. Does compression have any effect on this? thanks for the help, bill

Re: [zfs-discuss] zpool fragmentation issues?

2009-12-15 Thread Bill Sommerfeld
On Tue, 2009-12-15 at 17:28 -0800, Bill Sprouse wrote: > After > running for a while (couple of months) the zpool seems to get > "fragmented", backups take 72 hours and a scrub takes about 180 > hours. Are there periodic snapshots being created in this pool? C

Re: [zfs-discuss] force 4k writes

2009-12-16 Thread Bill Sprouse
Hi Richard, How's the ranch? ;-) This is most likely a naive question on my part. If recordsize is set to 4k (or a multiple of 4k), will ZFS ever write a record that is less than 4k or not a multiple of 4k? Yes. The recordsize is the upper limit for a file record. This includes metadata

Re: [zfs-discuss] zpool fragmentation issues?

2009-12-16 Thread Bill Sprouse
On Dec 15, 2009, at 6:24 PM, Bill Sommerfeld wrote: On Tue, 2009-12-15 at 17:28 -0800, Bill Sprouse wrote: After running for a while (couple of months) the zpool seems to get "fragmented", backups take 72 hours and a scrub takes about 180 hours. Are there periodic snapshots being

Re: [zfs-discuss] zpool fragmentation issues?

2009-12-16 Thread Bill Sprouse
Hi Bob, On Dec 15, 2009, at 6:41 PM, Bob Friesenhahn wrote: On Tue, 15 Dec 2009, Bill Sprouse wrote: Hi Everyone, I hope this is the right forum for this question. A customer is using a Thumper as an NFS file server to provide the mail store for multiple email servers (Dovecot). They

Re: [zfs-discuss] zpool fragmentation issues?

2009-12-16 Thread Bill Sprouse
Thanks Michael, Useful stuff to try. I wish we could add more memory, but the x4500 is limited to 16GB. Compression was a question. It's currently off, but they were thinking of turning it on. bill On Dec 15, 2009, at 7:02 PM, Michael Herf wrote: I have also had slow scrubbing on

Re: [zfs-discuss] zpool fragmentation issues?

2009-12-16 Thread Bill Sprouse
couple of alternatives without success. Do you have a pointer to the "block/parity rewrite" tool mentioned below? bill On Dec 15, 2009, at 9:38 PM, Brent Jones wrote: On Tue, Dec 15, 2009 at 5:28 PM, Bill Sprouse wrote: Hi Everyone, I hope this is the right forum for this ques

Re: [zfs-discuss] zpool fragmentation issues?

2009-12-16 Thread Bill Sprouse
Just checked w/customer and they are using the MailDir functionality with Dovecot. On Dec 16, 2009, at 11:28 AM, Toby Thain wrote: On 16-Dec-09, at 10:47 AM, Bill Sprouse wrote: Hi Brent, I'm not sure why Dovecot was chosen. It was most likely a recommendation by a fellow Unive

Re: [zfs-discuss] ZFS write bursts cause short app stalls

2010-01-02 Thread Bill Werner
Thanks for this thread! I was just coming here to discuss this very same problem. I'm running 2009.06 on a Q6600 with 8GB of RAM. I have a Windows system writing multiple OTA HD video streams via CIFS to the 2009.06 system running Samba. I then have multiple clients reading back other HD vid

Re: [zfs-discuss] Disks and caches

2010-01-07 Thread Bill Sommerfeld
filesystems won't hit the SSD unless the system is short on physical memory. - Bill

Re: [zfs-discuss] Degrated pool menbers excluded from writes ?

2010-01-24 Thread Bill Sommerfeld
-level vdevs if there are healthy ones available. - Bill

Re: [zfs-discuss] zvol being charged for double space

2010-01-27 Thread Bill Sommerfeld
s were actually used. If you want to allow for overcommit, you need to delete the refreservation. - Bill

Re: [zfs-discuss] server hang with compression on, ping timeouts from remote machine

2010-01-31 Thread Bill Sommerfeld
ratio. - Bill

Re: [zfs-discuss] most of my space is gone

2010-02-06 Thread Bill Sommerfeld
On 02/06/10 08:38, Frank Middleton wrote: AFAIK there is no way to get around this. You can set a flag so that pkg tries to empty /var/pkg/downloads, but even though it looks empty, it won't actually become empty until you delete the snapshots, and IIRC you still have to manually delete the conte

Re: [zfs-discuss] Reading ZFS config for an extended period

2010-02-11 Thread Bill Sommerfeld
propagate. - Bill

Re: [zfs-discuss] ZFS ZIL + L2ARC SSD Setup

2010-02-12 Thread Bill Sommerfeld
On 02/12/10 09:36, Felix Buenemann wrote: given I've got ~300GB L2ARC, I'd need about 7.2GB RAM, so upgrading to 8GB would be enough to satisfy the L2ARC. But that would only leave ~800MB free for everything else the server needs to do.
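The ~7.2GB figure quoted above follows from the per-block header that each L2ARC-resident buffer keeps in RAM. A rough back-of-envelope sketch, where the ~200-byte header size and the 8 KiB average record size are assumptions (both vary by ZFS release and workload), chosen to roughly reproduce the thread's estimate:

```python
# Back-of-envelope sketch of L2ARC RAM overhead. Every block cached on the
# L2ARC device keeps a small header in main memory; the header size and the
# average cached record size below are assumptions, not fixed constants.

l2arc_bytes = 300e9          # ~300 GB of L2ARC, as in the message above
avg_record = 8192            # assumed average cached block size (8 KiB)
header_bytes = 200           # assumed in-RAM header per L2ARC block

headers = l2arc_bytes / avg_record
ram_gb = headers * header_bytes / 1e9
print(f"~{ram_gb:.1f} GB of RAM just to index the L2ARC")  # ~7.3 GB
```

The point of the arithmetic: a huge L2ARC on a small-RAM box can consume most of the ARC with headers, leaving little room for actual cached data.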

Re: [zfs-discuss] Who is using ZFS ACL's in production?

2010-02-26 Thread Bill Sommerfeld
filesystem tunables for ZFS which allow the system to escape the confines of POSIX (noatime, for one); I don't see why a "chmod doesn't truncate acls" option couldn't join it so long as it was off by default and left off while conformance tests were run.

Re: [zfs-discuss] Freeing unused space in thin provisioned zvols

2010-02-26 Thread Bill Sommerfeld
? It's in there. Turn on compression to use it. - Bill

Re: [zfs-discuss] Who is using ZFS ACL's in production?

2010-02-26 Thread Bill Sommerfeld
personal experience with both systems, AFS had it more or less right and POSIX got it more or less wrong -- once you step into the world of acls, the file mode should be mostly ignored, and an accidental chmod should *not* destroy carefully crafted acls.

Re: [zfs-discuss] ZFS compression and deduplication on root pool on SSD

2010-02-28 Thread Bill Sommerfeld
ive upgrade BE with nevada build 130, and a beadm BE with opensolaris build 130, which is mostly the same) - Bill

Re: [zfs-discuss] Who is using ZFS ACL's in production?

2010-03-01 Thread Bill Sommerfeld
" rights on the filesystem the file is in, they'll be able to read every bit of the file. - Bill

Re: [zfs-discuss] Who is using ZFS ACL's in production?

2010-03-02 Thread Bill Sommerfeld
to continue to use it? While we're designing on the fly: Another possibility would be to use an additional umask bit or two to influence the mode-bit - acl interaction. - Bill

Re: [zfs-discuss] zfs receive slowness - lots of systime spent in genunix`list_next ?

2011-12-05 Thread Bill Sommerfeld
)] = count(); }' (let it run for a bit then interrupt it). should show who's calling list_next() so much. - Bill

Re: [zfs-discuss] Advanced Format HDD's - are we there yet? (or - how to buy a drive that won't be teh sux0rs on zfs)

2012-05-28 Thread Bill Sommerfeld
On 05/28/12 17:13, Daniel Carosone wrote: There are two problems using ZFS on drives with 4k sectors: 1) if the drive lies and presents 512-byte sectors, and you don't manually force ashift=12, then the emulation can be slow (and possibly error prone). There is essentially an interna

Re: [zfs-discuss] "shareiscsi" and COMSTAR

2012-06-26 Thread Bill Pijewski
age appliance, it may be useful as you're thinking about how to proceed: https://blogs.oracle.com/wdp/entry/comstar_iscsi - Bill -- Bill Pijewski, Joyent http://dtrace.org/blogs/wdp/

Re: [zfs-discuss] New fast hash algorithm - is it needed?

2012-07-11 Thread Bill Sommerfeld
On 07/11/12 02:10, Sašo Kiselkov wrote: > Oh jeez, I can't remember how many times this flame war has been going > on on this list. Here's the gist: SHA-256 (or any good hash) produces a > near uniform random distribution of output. Thus, the chances of getting > a random hash collision are around
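The "near uniform random distribution" claim is exactly what makes the birthday bound applicable. A sketch of the arithmetic, with the pool size chosen purely for illustration:

```python
# Birthday-bound sketch: for a well-distributed hash, the probability of any
# collision among n random block hashes is approximately n*(n-1)/2 / 2**bits
# while that value is << 1. The pool size here is an assumption for scale.
import math

def collision_probability(n_blocks: int, hash_bits: int = 256) -> float:
    # Standard birthday approximation, valid for small probabilities.
    return n_blocks * (n_blocks - 1) / 2 / 2**hash_bits

n = 2**53  # ~1 ZB of data stored as 128 KiB blocks (illustrative)
p = collision_probability(n)
print(f"collision probability ≈ 2^{math.log2(p):.0f}")  # ≈ 2^-151
```

Even at zettabyte scale, an accidental SHA-256 collision is around 2^-151: vastly less likely than undetected hardware corruption.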

Re: [zfs-discuss] Very poor small-block random write performance

2012-07-20 Thread Bill Sommerfeld
On 07/19/12 18:24, Traffanstead, Mike wrote: iozone doesn't vary the blocksize during the test, it's a very artificial test but it's useful for gauging performance under different scenarios. So for this test all of the writes would have been 64k blocks, 128k, etc. for that particular step. Just

Re: [zfs-discuss] Zvol vs zfs send/zfs receive

2012-09-14 Thread Bill Sommerfeld
On 09/14/12 22:39, Edward Ned Harvey (opensolarisisdeadlongliveopensolaris) wrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Dave Pooser Unfortunately I did not realize that zvols require disk space sufficient to duplicate the zvol, and

Re: [zfs-discuss] ZFS with Equallogic storage

2010-08-21 Thread Bill Sommerfeld
On 08/21/10 10:14, Ross Walker wrote: I am trying to figure out the best way to provide both performance and resiliency given the Equallogic provides the redundancy. (I have no specific experience with Equallogic; the following is just generic advice) Every bit stored in zfs is checksummed

Re: [zfs-discuss] resilver = defrag?

2010-09-09 Thread Bill Sommerfeld
hots only copy the blocks that change, and receiving an incremental send does the same). And if the destination pool is short on space you may end up more fragmented than the source. - Bill

[zfs-discuss] How do you use >1 partition on x86?

2010-10-25 Thread Bill Werner
So when I built my new workstation last year, I partitioned the one and only disk in half, 50% for Windows, 50% for 2009.06. Now, I'm not using Windows, so I'd like to use the other half for another ZFS pool, but I can't figure out how to access it. I have used fdisk to create a second Solari

Re: [zfs-discuss] ZFS Crypto in Oracle Solaris 11 Express

2010-12-02 Thread Bill Sommerfeld
Knowing Darren, it's very likely that he got it right, but in crypto, all the details matter and if a spec detailed enough to allow for interoperability isn't available, it's safest to assume that some of the details are wrong.

Re: [zfs-discuss] Looking for 3.5" SSD for ZIL

2010-12-23 Thread Bill Werner
>> got it attached to a UPS with very conservative shut-down timing. Or are there other host failures aside from power a ZIL would be vulnerable to (system hard-locks?)? > Correct, a system hard-lock is another example... How about comparing a non-battery backed ZIL to running a Z

[zfs-discuss] BOOT, ZIL, L2ARC on one SSD?

2010-12-23 Thread Bill Werner
60GB SSD drives using the SF 1222 controller can be had now for around $100. I know ZFS likes to use the entire disk to do its magic, but under x86, is the entire disk the entire disk, or is it one physical x86 partition? In the past I have created 2 partitions with FDISK, but format will only

Re: [zfs-discuss] BOOT, ZIL, L2ARC on one SSD?

2010-12-25 Thread Bill Werner
Understood Edward, and if this was a production data center, I wouldn't be doing it this way. This is for my home lab, so spending hundreds of dollars on SSD devices isn't practical. Can several datasets share a single ZIL and a single L2ARC, or must each dataset have its own?

Re: [zfs-discuss] ZFS advice for laptop

2011-01-04 Thread Bill Sommerfeld
yourself with the format command and ZFS won't disable it. - Bill

Re: [zfs-discuss] Understanding directio, O_DSYNC and zfs_nocacheflush on ZFS

2011-02-07 Thread Bill Sommerfeld
ed them to be durable, why does it matter that it may buffer data while it is doing so? - Bill

Re: [zfs-discuss] Understanding directio, O_DSYNC and zfs_nocacheflush on ZFS

2011-02-07 Thread Bill Sommerfeld
On 02/07/11 12:49, Yi Zhang wrote: If buffering is on, the running time of my app doesn't reflect the actual I/O cost. My goal is to accurately measure the time of I/O. With buffering on, ZFS would batch up a bunch of writes and change both the original I/O activity and the time. if batching ma

Re: [zfs-discuss] ZFS send/recv initial data load

2011-02-16 Thread Bill Sommerfeld
storage to something faster/better/..., then after the mirror completes zpool detach to free up the removable storage. - Bill

[zfs-discuss] time-sliderd doesn't remove snapshots

2011-02-18 Thread Bill Shannon
In the last few days my performance has gone to hell. I'm running: # uname -a SunOS nissan 5.11 snv_150 i86pc i386 i86pc (I'll upgrade as soon as the desktop hang bug is fixed.) The performance problems seem to be due to excessive I/O on the main disk/pool. The only things I've changed recent

Re: [zfs-discuss] time-sliderd doesn't remove snapshots

2011-02-18 Thread Bill Shannon
One of my old pools was version 10, another was version 13. I guess that explains the problem. Seems like time-sliderd should refuse to run on pools that aren't of a sufficient version. Cindy Swearingen wrote on 02/18/11 12:07 PM: Hi Bill, I think the root cause of this problem is that

Re: [zfs-discuss] Format returning bogus controller info

2011-02-26 Thread Bill Sommerfeld
erent physical controllers). see stmsboot(1m) for information on how to turn that off if you don't need multipathing and don't like the longer device names. - Bill

[zfs-discuss] Old posts to zfs-discuss

2011-05-10 Thread Bill Rushmore
realized what was happening and was able to kill the process. Bill Rushmore

[zfs-discuss] not sure how to make filesystems

2011-05-31 Thread Bill Palin
I'm migrating some filesystems from UFS to ZFS and I'm not sure how to create a couple of them. I want to migrate /, /var, /opt, /export/home and also want swap and /tmp. I don't care about any of the others. The first disk, and the one with the UFS filesystems, is c0t0d0 and the 2nd disk is

Re: [zfs-discuss] Is another drive worth anything?

2011-05-31 Thread Bill Sommerfeld
it is a noticeable improvement from a 2-disk mirror. I used an 80G intel X25-M, with 1G for zil, with the rest split roughly 50:50 between root pool and l2arc for the data pool. - Bill

Re: [zfs-discuss] Available space confusion

2011-06-06 Thread Bill Sommerfeld
On 06/06/11 08:07, Cyril Plisko wrote: zpool reports space usage on disks, without taking into account RAIDZ overhead. zfs reports net capacity available, after RAIDZ overhead accounted for. Yup. Going back to the original numbers: nebol@filez:/$ zfs list tank2 NAMEUSED AVAIL REFER MOU
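The zpool/zfs discrepancy described above can be sketched with a hypothetical 4-disk raidz1 (disk count and sizes made up for illustration; real pools also lose a little to metadata and allocation overhead):

```python
# Why `zpool list` and `zfs list` disagree on raidz: zpool reports raw
# capacity summed across all disks; zfs reports usable capacity after
# parity is subtracted. Hypothetical: 4 x 1 TB disks in one raidz1 vdev.

disks, disk_tb, parity = 4, 1.0, 1   # raidz1 = one parity disk's worth per vdev

zpool_size_tb = disks * disk_tb               # raw: what zpool list shows
zfs_avail_tb = (disks - parity) * disk_tb     # net: what zfs list shows

print(zpool_size_tb, zfs_avail_tb)  # 4.0 3.0
```

So a "missing" quarter of the pool in this example is simply the raidz1 parity, counted by one tool and not the other.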

Re: [zfs-discuss] Wired write performance problem

2011-06-08 Thread Bill Sommerfeld
On 06/08/11 01:05, Tomas Ögren wrote: And if pool usage is >90%, then there's another problem (change of finding free space algorithm). Another (less satisfying) workaround is to increase the amount of free space in the pool, either by reducing usage or adding more storage. Observed behavior i

Re: [zfs-discuss] Disk replacement need to scan full pool ?

2011-06-14 Thread Bill Sommerfeld
allocated data (and in the case of raidz, know precisely how it's spread and encoded across the members of the vdev). And it's reading all the data blocks needed to reconstruct the disk to be replaced. - Bill

Re: [zfs-discuss] OpenIndiana | ZFS | scrub | network | awful slow

2011-06-16 Thread Bill Sommerfeld
e wrong. if you're using dedup, you need a large read cache even if you're only doing application-layer writes, because you need fast random read access to the dedup tables while you write. - Bill

Re: [zfs-discuss] Encryption accelerator card recommendations.

2011-06-27 Thread Bill Sommerfeld
On 06/27/11 15:24, David Magda wrote: > Given the amount of transistors that are available nowadays I think > it'd be simpler to just create a series of SIMD instructions right > in/on general CPUs, and skip the whole co-processor angle. see: http://en.wikipedia.org/wiki/AES_instruction_set Prese

Re: [zfs-discuss] "zfs diff" performance disappointing

2011-09-26 Thread Bill Sommerfeld
's metadata will diverge between the writeable filesystem and its last snapshot. - Bill

[zfs-discuss] checksum errors on root pool after upgrade to snv_94

2008-07-17 Thread Bill Sommerfeld
ever increasing error count and maybe: 6437568 ditto block repair is incorrectly propagated to root vdev Any way to dig further to determine what's going on? - Bill

Re: [zfs-discuss] checksum errors on root pool after upgrade to snv_94

2008-07-20 Thread Bill Sommerfeld
ug as soon as I can (I'm travelling at the moment with spotty connectivity), citing my and your reports. - Bill

Re: [zfs-discuss] Can I trust ZFS?

2008-08-03 Thread Bill Sommerfeld
On Sun, 2008-08-03 at 11:42 -0500, Bob Friesenhahn wrote: > Zfs makes human error really easy. For example > >$ zpool destroy mypool Note that "zpool destroy" can be undone by "zpool import -D" (if you get to it before the disks are overwritten).
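A minimal sketch of that recovery path, using a throwaway file-backed pool so nothing real is at risk. Illustrative only: it needs a ZFS-capable host and root privileges, and the pool/file names are made up.

```shell
# Illustrative only -- requires a ZFS-capable host and root privileges.
# Build a disposable file-backed pool so no real data is involved.
mkfile 128m /var/tmp/vdev0            # Solaris mkfile; elsewhere use truncate
zpool create mypool /var/tmp/vdev0

zpool destroy mypool                  # the "human error" from the message
zpool import -D                       # lists destroyed pools still on disk
zpool import -D -f mypool             # undoes the destroy, as long as the
                                      # vdevs have not been overwritten
```

The key caveat from the message stands: import -D only works while the old labels and data survive, so the window closes as soon as the disks are reused.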

Re: [zfs-discuss] Checksum error: which of my files have failed scrubbing?

2008-08-05 Thread Bill Sommerfeld
ed ZIL This bug is fixed in build 95. One workaround is to mount the filesystems and then unmount them to apply the intent log changes. - Bill

Re: [zfs-discuss] Block unification in ZFS

2008-08-05 Thread Bill Sommerfeld
See the long thread titled "ZFS deduplication", last active approximately 2 weeks ago.

Re: [zfs-discuss] more ZFS recovery

2008-08-07 Thread Bill Sommerfeld
ether subtrees of the filesystem and recover as much as you can even if many upper nodes in the block tree have had holes shot in them by a miscreant device. - Bill

Re: [zfs-discuss] Best layout for 15 disks?

2008-08-22 Thread Bill Sommerfeld
On Thu, 2008-08-21 at 21:15 -0700, mike wrote: > I've seen 5-6 disk zpools are the most recommended setup. This is incorrect. Much larger zpools built out of striped redundant vdevs (mirror, raidz1, raidz2) are recommended and also work well. raidz1 or raidz2 vdevs of more than a single-digit nu

Re: [zfs-discuss] Availability: ZFS needs to handle disk removal / driver failure better

2008-08-28 Thread Bill Sommerfeld
lares a response overdue after (SRTT + K * variance). I think you'd probably do well to start with something similar to what's described in http://www.ietf.org/rfc/rfc2988.txt and then tweak based on experience. - Bill

Re: [zfs-discuss] Sidebar to ZFS Availability discussion

2008-09-02 Thread Bill Sommerfeld
no substitute for just retrying for a long time. - Bill

Re: [zfs-discuss] Sidebar to ZFS Availability discussion

2008-09-02 Thread Bill Sommerfeld
On Sun, 2008-08-31 at 15:03 -0400, Miles Nordin wrote: > It's sort of like network QoS, but not quite, because: > > (a) you don't know exactly how big the ``pipe'' is, only > approximately, In an ip network, end nodes generally know no more than the pipe size of the first hop -- and in
