Re: [zfs-discuss] what have you been buying for slog and l2arc?

2012-08-06 Thread Brandon High
> If that -- ignoring cache flush requests -- is the whole reason why > SSDs are so fast, I'm glad I haven't got one yet. They're fast for random reads and writes because they don't have seek latency. They're fast for sequential IO because they aren't limited

Re: [zfs-discuss] Can the ZFS "copies" attribute substitute HW disk redundancy?

2012-07-30 Thread Brandon High
On Mon, Jul 30, 2012 at 7:11 AM, GREGG WONDERLY wrote: > I thought I understood that copies would not be on the same disk, I guess I > need to go read up on this again. ZFS attempts to put copies on separate devices, but there's no guarantee. -B -- Brandon High : bh...

Re: [zfs-discuss] Persistent errors?

2012-06-22 Thread Brandon High
mdump -eV', it should have some (rather extensive) information. -B -- Brandon High : bh...@freaks.com ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] Migration of a Thumper to bigger HDDs

2012-05-23 Thread Brandon High
ixes and new features added between snv_117 and snv_134 (the last OpenSolaris release). It might be worth updating to snv_134 at the very least. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] checking/fixing busy locks for zfs send/receive

2012-03-16 Thread Brandon High
wondering if this sort of thing can mean interference between > some combination of multiple send/receives at the same time, on the > same filesystem? Look at 'zfs hold', 'zfs holds', and 'zfs release'. Sends and receives will place holds on snapshots to pr

Re: [zfs-discuss] Compatibility of Hitachi Deskstar 7K3000 HDS723030ALA640 with ZFS

2012-03-06 Thread Brandon High
g 8 x 3TB 5k3000 in a raidz2 for about a year without issue. The Deskstar 3TB come off the same production line as the Ultrastar 5k3000. I would avoid the 2TB and smaller 5k3000; they come off a separate production line. -B -- Brandon High : bh...@f

Re: [zfs-discuss] Compatibility of Hitachi Deskstar 7K3000 HDS723030ALA640 with ZFS

2012-03-05 Thread Brandon High
000 and 5K3000 drives have 512B physical sectors. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Server upgrade

2012-02-15 Thread Brandon High
uture, so it's a somewhat important decision. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] grrr, How to get rid of mis-touched file named `-c'

2011-11-26 Thread Brandon High
t to know how it might be done from a > shell prompt. rm ./-c ./-O ./-k -- Brandon High : bh...@freaks.com
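The `./` prefix works because `rm` only treats arguments beginning with a bare `-` as options; a minimal runnable sketch, using the filenames from the thread:

```shell
# Files whose names begin with '-' look like options to rm.
# Prefixing './' makes them unambiguous operands:
touch ./-c ./-O ./-k
rm ./-c ./-O ./-k

# Alternatively, '--' marks the end of options for most POSIX utilities:
touch -- -c
rm -- -c
```

Both forms work in any POSIX shell; the `--` form relies on the utility honoring the end-of-options convention, which `rm` does.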

Re: [zfs-discuss] Replacement for X25-E

2011-09-22 Thread Brandon High
$100 though. The 100GB Intel 710 costs ~ $650. The 311 is a good choice for home or budget users, and it seems that the 710 is much bigger than it needs to be for slog devices. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Deskstars and CCTL (aka TLER)

2011-09-22 Thread Brandon High
B and 2TB drives are not manufactured on the same line as the Ultrastar and seem to have lower reliability. Only the 3TB 5k3000 shares specs with the Ultrastar 5k3000. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Replacement for X25-E

2011-09-22 Thread Brandon High
device with the Z68 chipset. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Deskstars and CCTL (aka TLER)

2011-09-07 Thread Brandon High
o a startup script. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] ZFS raidz on top of hardware raid0

2011-08-26 Thread Brandon High
rror detection and recovery by using several top-level raidz. 20 x 5-disk raidz would give you very good read and write performance with decent resilver times and 20% overhead for redundancy. 10 x 10-disk raidz2 would give more protection, but a little less performance, and higher resilver times. -

Re: [zfs-discuss] Intel 320 as ZIL?

2011-08-15 Thread Brandon High
The 710 is MLC-HET (high endurance) and will be in 100/200/300GB capacities. The 720 is SLC, but with a PCIe interface, and will be in 200/400GB capacities. I don't imagine either will be very cheap. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Intel 320 as ZIL?

2011-08-15 Thread Brandon High
y to 80%). Intel recently added the 311, a small SLC-based drive for use as a temp cache with their Z68 platform. It's limited to 20GB, but it might be a better fit for use as a ZIL than the 320. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Disk IDs and DD

2011-08-09 Thread Brandon High
/rebuilding-the-solaris-device-tree -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] ZFS Fragmentation issue - examining the ZIL

2011-08-03 Thread Brandon High
if this was involved here. Using dedup on a pool that houses an Oracle DB is Doing It Wrong in so many ways... -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Exapnd ZFS storage.

2011-08-03 Thread Brandon High
vdev. You can create another vdev to add to your pool though. If you're adding another vdev, it should have the same geometry as your current (ie: 4 drives). The zpool command will complain if you try to add a vdev with different geometry or redundancy, though you can force it with -f.

Re: [zfs-discuss] ZFS Fragmentation issue - examining the ZIL

2011-08-01 Thread Brandon High
tation is a real issue with pools that are (or have been) very full. The data gets written out in fragments and has to be read back in the same order. If the mythical bp_rewrite code ever shows up, it will be possible to defrag a pool. But not yet. -B -- Bran

Re: [zfs-discuss] recover zpool with a new installation

2011-07-26 Thread Brandon High
stall > is complete, just import the pool. > You can also use the Live CD or Live USB to access your pool or possibly fix your existing installation. You will have to force the zpool import with either a reinstall or a Live boot. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] SSD vs "hybrid" drive - any advice?

2011-07-26 Thread Brandon High
's right. TRIM just gives hints to the garbage collector that sectors are no longer in use. When the GC runs, it can more easily find flash blocks that aren't used, or combine several mostly-empty blocks and erase or otherwise free them for reuse later. -B --

Re: [zfs-discuss] SSD vs "hybrid" drive - any advice?

2011-07-26 Thread Brandon High
erase block, or 4k with a 512k erase block. It's also due to ECC reasons, since a larger block size allows more efficient ECC over a larger block of data. This is similar to the move to 4k sectors in magnetic drives. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Large scale performance query

2011-07-25 Thread Brandon High
too much, and you will still have more bandwidth from your main storage pools than from the cache devices. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Replacing failed drive

2011-07-22 Thread Brandon High
tition the replacement drive. Since you've physically replaced the drive, you should just have to do: # zpool replace tank c10t0d0 The pool should resilver, and I think the spare should automatically detach. If not # zpool remove tank c10t6d0 should take care of it. -B --

Re: [zfs-discuss] SSD vs "hybrid" drive - any advice?

2011-07-21 Thread Brandon High
e GT, Patriot Wildfire, etc). -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] latest zpool version in solaris 11 express

2011-07-20 Thread Brandon High
new certificates for public products. Please try again later" -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Zil on multiple usb keys

2011-07-17 Thread Brandon High
te a ZFS mirror. But it would be a really bad idea. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Replacement disks for Sun X4500

2011-07-15 Thread Brandon High
s card uses the same Marvell controller as the x4500. Performance is fine if not slightly better than the WD10EADS drives that I replaced. Of course, the pool was about 92% full with the smaller drives ... -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Pure SSD Pool

2011-07-12 Thread Brandon High
te nearly 5x as much data to fill it even once. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Pure SSD Pool

2011-07-12 Thread Brandon High
arbage collection when the right criteria are met. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Pure SSD Pool

2011-07-11 Thread Brandon High
hould be fine until the volume gets very full. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Cannot format 2.5TB ext disk (EFI)

2011-06-24 Thread Brandon High
d to use 2TB drives on an Atom N270-based board and they were not recognized, but they worked fine under FreeBSD. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] JBOD recommendation for ZFS usage

2011-05-30 Thread Brandon High
AS device, or with SATA drives. A single port cable can be used with a single- or dual-ported SAS device (although it will only use one port) or with a SATA drive. A SATA cable can be used with a SATA device. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] optimal layout for 8x 1 TByte SATA (consumer)

2011-05-26 Thread Brandon High
blems with an 8-drive raidz2, though my usage is fairly light. The system is more than fast enough to saturate gigabit ethernet for sequential reads and writes. My drives were WD10EADS "Green" drives. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] offline dedup

2011-05-26 Thread Brandon High
uld require bp_rewrite. Offline (or deferred) dedup certainly seems more attractive given the current real-time performance. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] ZFS, Oracle and Nexenta

2011-05-24 Thread Brandon High
probably know for certain. There will probably be a fork at some point to an OSS ZFS and an Oracle ZFS. Hopefully neither side will actively try to break compatibility. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] ZFS, Oracle and Nexenta

2011-05-24 Thread Brandon High
On Tue, May 24, 2011 at 12:41 PM, Richard Elling wrote: > There are many ZFS implementations, each evolving as the contributors desire. > Diversity and innovation is a good thing. ... unless Oracle's zpool v30 is different than Nexenta's v30. -B -- Brandon High :

Re: [zfs-discuss] Monitoring disk seeks

2011-05-19 Thread Brandon High
) % 1000, args[1]->dev_statname, args[0]->b_lblkno, (args[0]->b_flags & B_WRITE ? "W" : "R"), args[0]->b_bcount ); } For every completed IO, this should give you the timestamp, device name, start LBA, "R"ead o

Re: [zfs-discuss] Solaris vs FreeBSD question

2011-05-18 Thread Brandon High
can feed it the output of 'lspci -vv -n'. You may have to disable some on-board devices to get through the installer, but I couldn't begin to guess which. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Reboots when importing old rpool

2011-05-17 Thread Brandon High
On Tue, May 17, 2011 at 11:10 AM, Hung-ShengTsao (Lao Tsao) Ph.D. wrote: > > may be do > zpool import -R /a rpool 'zpool import -N' may work as well. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Still no way to recover a "corrupted" pool

2011-05-16 Thread Brandon High
ly when I'm not sure it was the only solution, it's just the one you followed. > What's most frustrating is that this is the third time I've built this > pool due to corruption like this, within three months.  :( You may have an underlying hardware problem, or there could be

Re: [zfs-discuss] 350TB+ storage solution

2011-05-16 Thread Brandon High
e environments. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] 350TB+ storage solution

2011-05-16 Thread Brandon High
e more reliable. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] 350TB+ storage solution

2011-05-15 Thread Brandon High
thought that having data disks that were a power of two was still recommended, due to the way that ZFS splits records/blocks in a raidz vdev. Or are you responding to some other point? -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Tuning disk failure detection?

2011-05-10 Thread Brandon High
ou may be able to enable the feature on your drives, depending on the manufacturer and firmware revision. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] primarycache=metadata seems to force behaviour of secondarycache=metadata

2011-05-10 Thread Brandon High
wasn't that long ago when 66MB/s ATA was considered a waste because no drive could use that much bandwidth. These days a "slow" drive has max throughput greater than 110MB/s. (OK, looking at some online reviews, it was about 13 years ago. Maybe I'm just old.) -B --

Re: [zfs-discuss] ZFS on HP MDS 600

2011-05-09 Thread Brandon High
evice then you shouldn't need to adjust the max_pending. If you're exporting larger RAID10 luns from the MDS, then increasing the value might help for read workloads. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Deduplication Memory Requirements

2011-05-06 Thread Brandon High
e overlap in functionality but sometimes very different implementations. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Deduplication Memory Requirements

2011-05-05 Thread Brandon High
smaller. You'll have to worry about the guests' block alignment in the context of the image file, since two identical files may not create identical blocks as seen from ZFS. This means you may get only fractional savings and have an enormous DDT. -B -- Brandon High : bh...@freaks.com
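The alignment point above can be illustrated outside ZFS by hashing fixed-size blocks directly. This is only a sketch with made-up filenames, using 4 KiB chunks to stand in for ZFS records: shifting identical payload by a single byte changes every block a block-level deduper would see.

```shell
# Build two files with identical payload, one shifted by one byte.
seq 1 100000 | head -c 8192 > payload.bin
cat payload.bin > aligned.bin                     # payload at offset 0
{ printf 'X'; cat payload.bin; } > shifted.bin    # same payload at offset 1

# Hash fixed 4 KiB "blocks" the way block-level dedup would see them:
split -b 4096 aligned.bin a_
split -b 4096 shifted.bin s_
sha256sum a_* s_*   # no a_* hash matches any s_* hash
```

The same effect inside a guest image file is why two identical guest files may dedup poorly unless their blocks happen to land on the same alignment.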

Re: [zfs-discuss] Deduplication Memory Requirements

2011-05-05 Thread Brandon High
property. ufs is 4k or 8k on x86 and 8k on sun4u. As with ext4, block alignment is determined by partitioning and slices. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Quick zfs send -i performance questions

2011-05-05 Thread Brandon High
0% and 96% full. This could also be why the full sends perform better than incremental sends. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Deduplication Memory Requirements

2011-05-04 Thread Brandon High
Wouldn't you have been better off cloning datasets that contain an unconfigured install and customizing from there? -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Quick zfs send -i performance questions

2011-05-04 Thread Brandon High
ontinue working when the send is stalled. You will have to fiddle with the buffer size and other options to tune it for your use. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Deduplication Memory Requirements

2011-05-04 Thread Brandon High
mitations, and it sucks when you hit them. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Faster copy from UFS to ZFS

2011-05-03 Thread Brandon High
ete than the original copy, since files on both sides need to be read and checksummed. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Faster copy from UFS to ZFS

2011-05-03 Thread Brandon High
ecify --whole-file, it's implied when copying on the same system. --inplace can play badly with hard links and shouldn't be used. It will probably be slower than other options, but it may be more accurate, especially with -H. -B -- Brandon High : bh...@freaks.com
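As background for the -H flag mentioned above: rsync without -H copies each hard-linked name as an independent file, while -H re-creates the links in the destination. A minimal illustration of what a hard link actually is, with hypothetical filenames:

```shell
# A hard link is one inode with two directory entries:
mkdir -p linkdemo
echo data > linkdemo/a
ln linkdemo/a linkdemo/b          # link, not a copy
stat -c %i linkdemo/a linkdemo/b  # prints the same inode number twice
```

Detecting these shared inodes across the whole transfer is what makes rsync -H slower but more faithful to the source tree.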

Re: [zfs-discuss] ls reports incorrect file size

2011-05-02 Thread Brandon High
e. NTFS supports sparse files. http://www.flexhex.com/docs/articles/sparse-files.phtml -B -- Brandon High : bh...@freaks.com
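The size mismatch in the thread title is easy to reproduce with a sparse file on any POSIX filesystem; ls reports the logical size (st_size) while du reports allocated blocks:

```shell
# Logical size vs. allocated space for a sparse file:
truncate -s 100M sparse.img   # 100 MiB hole, no data written
ls -l sparse.img              # reports 104857600 bytes (logical size)
du -k sparse.img              # reports close to 0 KiB actually allocated
```

On ZFS, compression can produce the same mismatch even for non-sparse files, since du reflects the compressed on-disk allocation.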

Re: [zfs-discuss] Dedup and L2ARC memory requirements (again)

2011-04-29 Thread Brandon High
f deleting datasets and/or snapshots. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Still no way to recover a "corrupted" pool

2011-04-29 Thread Brandon High
On Fri, Apr 29, 2011 at 1:23 PM, Freddie Cash wrote: > Running ZFSv28 on 64-bit FreeBSD 8-STABLE. I'd suggest trying to import the pool into snv_151a (Solaris 11 Express), which is the reference and development platform for ZFS. -B -- Brandon High : bh...@fr

Re: [zfs-discuss] Faster copy from UFS to ZFS

2011-04-29 Thread Brandon High
ou will probably want to set it back to default after you're done. -B -- Brandon High : bh...@freaks.com ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] Dedup and L2ARC memory requirements (again)

2011-04-29 Thread Brandon High
for non-dedup datasets, and is in fact the default. As an aside: Erik, any idea when the 159 bits will make it to the public? -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Finding where dedup'd files are

2011-04-28 Thread Brandon High
ce I have some datasets with dedup'd data, I'm a little paranoid about tanking the system if they are destroyed. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Dedup and L2ARC memory requirements (again)

2011-04-28 Thread Brandon High
h, it will increment the refcount for the on-disk block. If the zpool property dedupditto is set and the refcount for the on-disk block exceeds the threshold, it will write another copy of the block to disk. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Finding where dedup'd files are

2011-04-28 Thread Brandon High
On Thu, Apr 28, 2011 at 3:48 PM, Ian Collins wrote: > Dedup is at the block, not file level. Files are usually composed of blocks. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Dedup and L2ARC memory requirements (again)

2011-04-28 Thread Brandon High
ly > set a checksum algorithm specific to dedup (i.e. there's no way to > override the default for dedup). That's my understanding as well. The initial release used fletcher4 or sha256, but there was either a bug in the fletcher4 code or a hash collision that required removing it as an

Re: [zfs-discuss] Dedup and L2ARC memory requirements (again)

2011-04-28 Thread Brandon High
hecksum used for deduplication is sha256 (subject to change). When dedup is enabled, the dedup checksum algorithm overrides the checksum property." -B -- Brandon High : bh...@freaks.com

[zfs-discuss] Finding where dedup'd files are

2011-04-28 Thread Brandon High
duplicated block] sha256 uncompressed LE contiguous unique unencrypted 1-copy size=2L/2P birth=236799L/236799P fill=1 cksum=55c9f21af6399be:11f9d4f5ff4cb109:2af8b798671e47ba:d19caf78da295df5 How can I translate this into datasets or files? -B -- Brandon High :

Re: [zfs-discuss] Spare drives sitting idle in raidz2 with failed drive

2011-04-27 Thread Brandon High
On Wed, Apr 27, 2011 at 12:51 PM, Lamp Zy wrote: > Any ideas how to identify which drive is the one that failed so I can > replace it? Try the following: # fmdump -eV # fmadm faulty -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Drive replacement speed

2011-04-26 Thread Brandon High
slow down, but at 13 hours in, the resilver has been managing ~ 100M/s and is 70% done. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Drive replacement speed

2011-04-25 Thread Brandon High
On Mon, Apr 25, 2011 at 5:26 PM, Brandon High wrote: > Setting zfs_resilver_delay seems to have helped some, based on the > iostat output. Are there other tunables? I found zfs_resilver_min_time_ms while looking. I've tried bumping it up considerably, without much change. '

Re: [zfs-discuss] How does ZFS dedup space accounting work with quota?

2011-04-25 Thread Brandon High
1:1. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Spare drives sitting idle in raidz2 with failed drive

2011-04-25 Thread Brandon High
ll replace the failed drive with the first spare. (I'd suggest verifying the device names before running it.) # zpool replace fwgpool0 c4t5000C5001128FE4Dd0 c4t5000C50014D70072d0 -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Drive replacement speed

2011-04-25 Thread Brandon High
1.5 1 1 c0t1d0 -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Dedup and L2ARC memory requirements (again)

2011-04-25 Thread Brandon High
by its shortened column name, volblock. -- Brandon High : bh...@freaks.com

[zfs-discuss] Drive replacement speed

2011-04-25 Thread Brandon High
0.5 0.21.00.4 17 21 c2t7d0 0.00.00.00.0 0.0 0.00.00.0 0 0 c0t0d0 0.00.0 0.0 0.0 0.0 0.00.00.0 0 0 c0t1d0 -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] just can't import

2011-04-11 Thread Brandon High
of use, I suspect from the constant writes.) -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] just can't import

2011-04-11 Thread Brandon High
omplete the destroy and seem to hang until it's completed. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] just can't import

2011-04-10 Thread Brandon High
ever versions of Open Solaris or Solaris 11 Express may complete it faster. > Any tips greatly appreciated, Just wait... -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] ZFS Going forward after Oracle - Let's get organized, let's get started.

2011-04-09 Thread Brandon High
. I'd be excited to hear that there's a new feature being worked on, rather than the radio silence we've had. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] How to rename rpool. Is that recommended ?

2011-04-08 Thread Brandon High
'. Can > this be done? Yes, you can do it; no, it is not recommended. I had a need to do something similar to what you're attempting and ended up using a Live CD (which doesn't have an rpool to have a naming conflict) to do the manipulations. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] ZFS send/receive to Solaris/FBSD/OpenIndiana/Nexenta VM guest?

2011-04-07 Thread Brandon High
the dataset version is correct though. You should test this, however. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] ZFS send/receive to Solaris/FBSD/OpenIndiana/Nexenta VM guest?

2011-04-06 Thread Brandon High
hat the VM can manage redundancy on its zfs storage, and not just multiple vdsk on the same host disk / lun. Either give it access to the raw devices, or use iSCSI, or create your vdsk on different luns and raidz them, etc. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] ZFS send/receive to Solaris/FBSD/OpenIndiana/Nexenta VM guest?

2011-04-06 Thread Brandon High
e them with a lower version. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] NTFS on NFS and iSCSI always generates small IO's

2011-03-10 Thread Brandon High
o you will need to create a new VM store after the > recordsize is tuned. You can change the recordsize and copy the vmdk files on the nfs server, which will re-write them with a smaller recordsize. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] NTFS on NFS and iSCSI always generates small IO's

2011-03-10 Thread Brandon High
shouldn't make a huge difference with the zil disabled, but it certainly won't hurt. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Slices and reservations Was: Re: How long should an empty destroy take? snv_134

2011-03-07 Thread Brandon High
e some accounting bits. I seem to remember seeing a fix for 100% full pools a while ago so this may not be as critical as it used to be, but it's a nice safety net to have. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Format returning bogus controller info

2011-03-01 Thread Brandon High
(outdated) instructions here: http://spiralbound.net/blog/2005/12/21/rebuilding-the-solaris-device-tree . I think you can do this all with a new boot environment, rather than boot from a CD. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] ZFS send/recv horribly slow on system with 1800+ filesystems

2011-03-01 Thread Brandon High
of filesystems? No. Incremental sends might take longer, as I mentioned above. > 2.) Why do we see 4MB-8MB/s of *writes* to the filesystem when we do a > 'zfs send' to /dev/null ? Is anything else using the filesystems in the pool? -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] ZFS Performance

2011-02-28 Thread Brandon High
On Sun, Feb 27, 2011 at 7:35 PM, Brandon High wrote: > It moves from "best fit" to "any fit" at a certain point, which is at > ~ 95% (I think). Best fit looks for a large contiguous space to avoid > fragmentation while any fit looks for any free space. I got the term

Re: [zfs-discuss] ZFS Performance

2011-02-27 Thread Brandon High
"any fit" at a certain point, which is at ~ 95% (I think). Best fit looks for a large contiguous space to avoid fragmentation while any fit looks for any free space. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] External SATA drive enclosures + ZFS?

2011-02-27 Thread Brandon High
lower minimum receive power. An internal power might work with a SATA to eSATA cable or adapter, but it's not guaranteed to. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] External SATA drive enclosures + ZFS?

2011-02-27 Thread Brandon High
roLiant Microserver. It's about $320 and holds 4 drives, with an expansion slot for an additional controller. I think some people have reported success with these on the list. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] What drives?

2011-02-26 Thread Brandon High
but I'm sure they exist. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] External SATA drive enclosures + ZFS?

2011-02-25 Thread Brandon High
rformance to be slightly lower, and to use slightly more CPU. Most USB controllers don't support DMA, so all I/O requires CPU time. What about an inexpensive SAS card (eg: Supermicro AOC-USAS-L4i) and external SAS enclosure (eg: Sans Digital TowerRAID TR4X). It would cost about $350 for t

Re: [zfs-discuss] ZFS/Drobo (Newbie) Question

2011-02-08 Thread Brandon High
ou assertion doesn't seem to hold up. I think he meant that if one drive in a mirror dies completely, then any single read error on the remaining drive is not recoverable. With raidz2 (or a 3-way mirror for that matter), if one drive dies completely, you still have redundancy. -B --

Re: [zfs-discuss] Understanding directio, O_DSYNC and zfs_nocacheflush on ZFS

2011-02-07 Thread Brandon High
o worry about whether writes are being cached, because any data that is written synchronously will be committed to stable storage before the write returns. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] Understanding directio, O_DSYNC and zfs_nocacheflush on ZFS

2011-02-07 Thread Brandon High
hing reads. ZFS is a very different beast than UFS and doesn't require the same tuning. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] ZFS/Drobo (Newbie) Question

2011-02-07 Thread Brandon High
ended to use different levels of redundancy in a pool, so you may want to consider using mirrors for everything. This also makes it easier to add or upgrade capacity later. -B -- Brandon High : bh...@freaks.com

Re: [zfs-discuss] ZFS and spindle speed (7.2k / 10k / 15k)

2011-02-06 Thread Brandon High
tween sectors is less likely with the lower density? More platters lead to more heat and higher power consumption. Most drives are 3 or 4 platters, though Hitachi usually manufactures 5-platter drives as well. -B -- Brandon High : bh...@freaks.com
