> If that -- ignoring cache flush requests -- is the whole reason why
> SSDs are so fast, I'm glad I haven't got one yet.
They're fast for random reads and writes because they don't have seek
latency. They're fast for sequential IO because they aren't limited
On Mon, Jul 30, 2012 at 7:11 AM, GREGG WONDERLY wrote:
> I thought I understood that copies would not be on the same disk, I guess I
> need to go read up on this again.
ZFS attempts to put copies on separate devices, but there's no guarantee.
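For reference, the behavior being discussed comes from the per-dataset
copies property (the names below are placeholders):
# zfs get copies tank/data
# zfs set copies=2 tank/data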
-B
--
Brandon High : bh...
fmdump -eV', it should have some (rather extensive) information.
-B
--
Brandon High : bh...@freaks.com
fixes and new features added between
snv_117 and snv_134 (the last OpenSolaris release). It might be worth
updating to snv_134 at the very least.
-B
--
Brandon High : bh...@freaks.com
wondering if this sort of thing can mean interference between
> some combination of multiple send/receives at the same time, on the
> same filesystem?
Look at 'zfs hold', 'zfs holds', and 'zfs release'. Sends and receives
will place holds on snapshots to prevent them from being destroyed
while the operation is in progress.
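For example, to see what is holding a snapshot and to release a hold
you placed yourself (the dataset and tag names here are made up):
# zfs holds tank/data@migrate
# zfs release mytag tank/data@migrate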
g 8 x 3TB 5k3000 in a raidz2 for about a year without issue.
The Deskstar 3TB come off the same production line as the Ultrastar
5k3000. I would avoid the 2TB and smaller 5k3000; they come off a
separate production line.
-B
--
Brandon High : bh...@f
000 and 5K3000 drives have 512B physical sectors.
-B
--
Brandon High : bh...@freaks.com
uture,
so it's a somewhat important decision.
-B
--
Brandon High : bh...@freaks.com
t to know how it might be done from a
> shell prompt.
rm ./-c ./-O ./-k
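If your rm supports the conventional end-of-options marker, this should
be equivalent:
rm -- -c -O -k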
--
Brandon High : bh...@freaks.com
$100 though. The 100GB Intel 710 costs ~ $650.
The 311 is a good choice for home or budget users, and it seems that
the 710 is much bigger than it needs to be for slog devices.
-B
--
Brandon High : bh...@freaks.com
B and 2TB drives are
not manufactured on the same line as the Ultrastar and seem to have
lower reliability. Only the 3TB 5k3000 shares specs with the Ultrastar
5k3000.
-B
--
Brandon High : bh...@freaks.com
device with the Z68 chipset.
-B
--
Brandon High : bh...@freaks.com
o a startup script.
-B
--
Brandon High : bh...@freaks.com
rror detection and recovery by
using several top-level raidz. 20 x 5-disk raidz would give you very
good read and write performance with decent resilver times and 20%
overhead for redundancy. 10 x 10-disk raidz2 would give more
protection, but a little less performance, and higher resilver times.
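As a rough sketch (device names are placeholders), the first two of the
twenty 5-disk raidz vdevs could be created like this, with the rest
added the same way:
# zpool create tank \
    raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 \
    raidz c1t5d0 c1t6d0 c1t7d0 c1t8d0 c1t9d0
# zpool add tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0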
-
The 710 is MLC-HET (high endurance) and will be in 100/200/300GB
capacities. The 720 is SLC with a PCIe interface and will come in
200/400GB capacities.
I don't imagine either will be very cheap.
-B
--
Brandon High : bh...@freaks.com
y to 80%).
Intel recently added the 311, a small SLC-based drive for use as a
temp cache with their Z68 platform. It's limited to 20GB, but it might
be a better fit for use as a ZIL than the 320.
-B
--
Brandon High : bh...@freaks.com
/rebuilding-the-solaris-device-tree
-B
--
Brandon High : bh...@freaks.com
if this was involved here.
Using dedup on a pool that houses an Oracle DB is Doing It Wrong in so
many ways...
-B
--
Brandon High : bh...@freaks.com
vdev. You can create
another vdev to add to your pool though.
If you're adding another vdev, it should have the same geometry as
your current (ie: 4 drives). The zpool command will complain if you
try to add a vdev with different geometry or redundancy, though you
can force it with -f.
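For example, adding a second 4-drive raidz vdev (device names are
hypothetical):
# zpool add tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0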
Fragmentation is a real issue with pools that are (or have
been) very full. The data gets written out in fragments and has to be
read back in the same order.
If the mythical bp_rewrite code ever shows up, it will be possible to
defrag a pool. But not yet.
-B
--
Bran
stall
> is complete, just import the pool.
>
You can also use the Live CD or Live USB to access your pool or possibly fix
your existing installation.
You will have to force the zpool import with either a reinstall or a Live
boot.
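A minimal example, assuming the pool is named rpool and you want it
under an alternate root:
# zpool import -f -R /a rpool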
-B
--
Brandon High : bh...@freaks.com
That's right. TRIM just gives hints to the garbage collector that
sectors are no longer in use. When the GC runs, it can more easily find
flash blocks that aren't in use, or combine several mostly-empty blocks,
and erase or otherwise free them for reuse later.
-B
--
erase block, or 4k with a 512k erase block.
It's also due to ECC reasons, since a larger block size allows more
efficient ECC over a larger block of data. This is similar to the move
to 4k sectors in magnetic drives.
-B
--
Brandon High : bh...@freaks.com
too much, and you will still have more bandwidth from your main storage
pools than from the cache devices.
-B
--
Brandon High : bh...@freaks.com
tition the replacement drive.
Since you've physically replaced the drive, you should just have to do:
# zpool replace tank c10t0d0
The pool should resilver, and I think the spare should automatically
detach. If not
# zpool remove tank c10t6d0
should take care of it.
-B
--
e GT, Patriot Wildfire,
etc).
-B
--
Brandon High : bh...@freaks.com
new certificates for public products.
Please try again later"
-B
--
Brandon High : bh...@freaks.com
te a
ZFS mirror. But it would be a really bad idea.
-B
--
Brandon High : bh...@freaks.com
s card uses the same Marvell
controller as the x4500.
Performance is fine if not slightly better than the WD10EADS drives
that I replaced. Of course, the pool was about 92% full with the
smaller drives ...
-B
--
Brandon High : bh...@freaks.com
te nearly 5x as much data to fill it even once.
-B
--
Brandon High : bh...@freaks.com
garbage collection when the right criteria are met.
-B
--
Brandon High : bh...@freaks.com
should be fine until the volume gets very full.
-B
--
Brandon High : bh...@freaks.com
d to use 2TB drives on
an Atom N270-based board and they were not recognized, but they worked
fine under FreeBSD.
-B
--
Brandon High : bh...@freaks.com
AS device, or with SATA drives. A single
port cable can be used with a single- or dual-ported SAS device
(although it will only use one port) or with a SATA drive. A SATA
cable can be used with a SATA device.
-B
--
Brandon High : bh...@freaks.com
blems with an 8-drive raidz2,
though my usage is fairly light. The system is more than fast enough
to saturate gigabit ethernet for sequential reads and writes. My
drives were WD10EADS "Green" drives.
-B
--
Brandon High : bh...@freaks.com
uld require bp_rewrite.
Offline (or deferred) dedup certainly seems more attractive given the
current real-time performance.
-B
--
Brandon High : bh...@freaks.com
probably know for certain.
There will probably be a fork at some point to an OSS ZFS and an
Oracle ZFS. Hopefully neither side will actively try to break
compatibility.
-B
--
Brandon High : bh...@freaks.com
On Tue, May 24, 2011 at 12:41 PM, Richard Elling
wrote:
> There are many ZFS implementations, each evolving as the contributors desire.
> Diversity and innovation is a good thing.
... unless Oracle's zpool v30 is different than Nexenta's v30.
-B
--
Brandon High :
) % 1000,
args[1]->dev_statname,
args[0]->b_lblkno,
(args[0]->b_flags & B_WRITE ? "W" : "R"),
args[0]->b_bcount
);
}
For every completed IO, this should give you the timestamp, device
name, start LBA, "R"ead or "W"rite flag, and the byte count.
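For reference, a sketch of what the complete probe might have looked
like, assuming the truncated part printed a seconds.milliseconds
timestamp:
io:::done
{
        printf("%d.%03d %s %d %s %d\n",
            timestamp / 1000000000,
            (timestamp / 1000000) % 1000,
            args[1]->dev_statname,
            args[0]->b_lblkno,
            (args[0]->b_flags & B_WRITE ? "W" : "R"),
            args[0]->b_bcount);
}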
can feed it the output of 'lspci -vv -n'.
You may have to disable some on-board devices to get through the
installer, but I couldn't begin to guess which.
-B
--
Brandon High : bh...@freaks.com
On Tue, May 17, 2011 at 11:10 AM, Hung-ShengTsao (Lao Tsao) Ph.D.
wrote:
>
> may be do
> zpool import -R /a rpool
'zpool import -N' may work as well.
-B
--
Brandon High : bh...@freaks.com
ly when
I'm not sure it was the only solution; it's just the one you followed.
> What's most frustrating is that this is the third time I've built this
> pool due to corruption like this, within three months. :(
You may have an underlying hardware problem, or there could be
e environments.
-B
--
Brandon High : bh...@freaks.com
e more reliable.
-B
--
Brandon High : bh...@freaks.com
thought that having data disks that were a power of two was still
recommended, due to the way that ZFS splits records/blocks in a raidz
vdev. Or are you responding to some other point?
-B
--
Brandon High : bh...@freaks.com
You may be able to enable the feature on your drives, depending on the
manufacturer and firmware revision.
-B
--
Brandon High : bh...@freaks.com
wasn't that long ago when 66MB/s ATA was considered a waste because
no drive could use that much bandwidth. These days a "slow" drive has
max throughput greater than 110MB/s.
(OK, looking at some online reviews, it was about 13 years ago. Maybe
I'm just old.)
-B
--
evice then you shouldn't need to
adjust the max_pending. If you're exporting larger RAID10 luns from
the MDS, then increasing the value might help for read workloads.
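Assuming the tunable in question is zfs_vdev_max_pending, the usual
place to set it on OpenSolaris-era systems is /etc/system, followed by
a reboot (the value below is only illustrative):
set zfs:zfs_vdev_max_pending = 64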
-B
--
Brandon High : bh...@freaks.com
e overlap
in functionality but sometimes very different implementations.
-B
--
Brandon High : bh...@freaks.com
smaller. You'll have to worry about the guests' block
alignment in the context of the image file, since two identical files
may not create identical blocks as seen from ZFS. This means you may
get only fractional savings and have an enormous DDT.
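One way to gauge whether it is paying off is to look at the DDT
statistics and dedup ratio (the pool name is a placeholder):
# zdb -DD tank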
-B
--
Brandon High : bh...@freaks.com
property.
ufs is 4k or 8k on x86 and 8k on sun4u. As with ext4, block alignment
is determined by partitioning and slices.
-B
--
Brandon High : bh...@freaks.com
0% and 96% full. This could also be why the full sends
perform better than incremental sends.
-B
--
Brandon High : bh...@freaks.com
Wouldn't you have been better off cloning datasets that contain an
unconfigured install and customizing from there?
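For example (dataset names are invented), snapshot a golden,
unconfigured install once and clone it for each new instance:
# zfs snapshot tank/golden@base
# zfs clone tank/golden@base tank/vm01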
-B
--
Brandon High : bh...@freaks.com
continue working when the send is stalled. You will have to fiddle
with the buffer size and other options to tune it for your use.
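Assuming the buffering program under discussion is mbuffer, a pipeline
might look like this (snapshot names and sizes are placeholders):
# zfs send tank/fs@snap | mbuffer -s 128k -m 1G | zfs receive backup/fs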
-B
--
Brandon High : bh...@freaks.com
mitations, and it sucks when you hit them.
-B
--
Brandon High : bh...@freaks.com
ete than the original copy, since files on both sides
need to be read and checksummed.
-B
--
Brandon High : bh...@freaks.com
You don't need to specify --whole-file; it's implied when copying on
the same system. --inplace can play badly with hard links and
shouldn't be used.
It will probably be slower than other options, but it may be more
accurate, especially with -H.
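A local copy along those lines might look like this (paths are
placeholders):
# rsync -aH --whole-file /tank/old/ /tank/new/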
-B
--
Brandon High : bh...@freaks.com
e.
NTFS supports sparse files.
http://www.flexhex.com/docs/articles/sparse-files.phtml
-B
--
Brandon High : bh...@freaks.com
f deleting
datasets and/or snapshots.
-B
--
Brandon High : bh...@freaks.com
On Fri, Apr 29, 2011 at 1:23 PM, Freddie Cash wrote:
> Running ZFSv28 on 64-bit FreeBSD 8-STABLE.
I'd suggest trying to import the pool into snv_151a (Solaris 11
Express), which is the reference and development platform for ZFS.
-B
--
Brandon High : bh...@fr
You will probably want to set it back to the default after you're done.
-B
--
Brandon High : bh...@freaks.com
for non-dedup
datasets, and is in fact the default.
As an aside: Erik, any idea when the 159 bits will make it to the public?
-B
--
Brandon High : bh...@freaks.com
Since I have some datasets with dedup'd data, I'm a little paranoid
about tanking the system if they are destroyed.
-B
--
Brandon High : bh...@freaks.com
h, it will increment the refcount for
the on-disk block.
If the zpool property dedupditto is set and the refcount for the
on-disk block exceeds the threshold, it will write another copy of the
block to disk.
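For example, to write an extra copy once a block's refcount exceeds 100
(the pool name and threshold are arbitrary):
# zpool set dedupditto=100 tank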
-B
--
Brandon High : bh...@freaks.com
On Thu, Apr 28, 2011 at 3:48 PM, Ian Collins wrote:
> Dedup is at the block, not file level.
Files are usually composed of blocks.
-B
--
Brandon High : bh...@freaks.com
ly
> set a checksum algorithm specific to dedup (i.e. there's no way to
> override the default for dedup).
That's my understanding as well. The initial release used fletcher4 or
sha256, but there was either a bug in the fletcher4 code or a hash
collision that required removing it as an
hecksum used for deduplication is sha256 (subject to
change). When dedup is enabled, the dedup checksum algorithm overrides
the checksum property."
-B
--
Brandon High : bh...@freaks.com
duplicated block] sha256 uncompressed LE contiguous unique
unencrypted 1-copy size=2L/2P birth=236799L/236799P fill=1
cksum=55c9f21af6399be:11f9d4f5ff4cb109:2af8b798671e47ba:d19caf78da295df5
How can I translate this into datasets or files?
-B
--
Brandon High :
On Wed, Apr 27, 2011 at 12:51 PM, Lamp Zy wrote:
> Any ideas how to identify which drive is the one that failed so I can
> replace it?
Try the following:
# fmdump -eV
# fmadm faulty
-B
--
Brandon High : bh...@freaks.com
slow down, but at 13 hours in, the resilver has been
managing ~ 100M/s and is 70% done.
-B
--
Brandon High : bh...@freaks.com
On Mon, Apr 25, 2011 at 5:26 PM, Brandon High wrote:
> Setting zfs_resilver_delay seems to have helped some, based on the
> iostat output. Are there other tunables?
I found zfs_resilver_min_time_ms while looking. I've tried bumping it
up considerably, without much change.
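For anyone following along, these can be changed on a live system with
mdb (the values here are only examples, and the defaults should be
restored afterwards):
# echo zfs_resilver_delay/W0t0 | mdb -kw
# echo zfs_resilver_min_time_ms/W0t6000 | mdb -kw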
1:1.
-B
--
Brandon High : bh...@freaks.com
will replace the failed drive with the first spare. (I'd suggest
verifying the device names before running it.)
# zpool replace fwgpool0 c4t5000C5001128FE4Dd0 c4t5000C50014D70072d0
-B
--
Brandon High : bh...@freaks.com
1.5 1 1 c0t1d0
--
Brandon High : bh...@freaks.com
by its shortened
column name, volblock.
--
Brandon High : bh...@freaks.com
0.5  0.2  1.0  0.4  17  21  c2t7d0
0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0   0   0  c0t0d0
0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0   0   0  c0t1d0
--
Brandon High : bh...@freaks.com
of use, I
suspect from the constant writes.)
-B
--
Brandon High : bh...@freaks.com
omplete the destroy and seem
to hang until it's completed.
-B
--
Brandon High : bh...@freaks.com
Newer versions of OpenSolaris or Solaris 11 Express may
complete it faster.
> Any tips greatly appreciated,
Just wait...
-B
--
Brandon High : bh...@freaks.com
.
I'd be excited to hear that there's a new feature being worked on,
rather than the radio silence we've had.
-B
--
Brandon High : bh...@freaks.com
'. Can
> this be done?
Yes, you can do it; no, it is not recommended.
I had a need to do something similar to what you're attempting and
ended up using a Live CD (which doesn't have an rpool to have a naming
conflict) to do the manipulations.
-B
--
Brandon High : bh...@freaks.com
the dataset version is correct. You should test this, however.
-B
--
Brandon High : bh...@freaks.com
hat the VM can manage
redundancy on its zfs storage, and not just multiple vdsk on the same
host disk / lun. Either give it access to the raw devices, or use
iSCSI, or create your vdsk on different luns and raidz them, etc.
-B
--
Brandon High : bh...@freaks.com
e them with a lower version.
-B
--
Brandon High : bh...@freaks.com
o you will need to create a new VM store after the
> recordsize is tuned.
You can change the recordsize and copy the vmdk files on the nfs
server, which will re-write them with a smaller recordsize.
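Something along these lines (the dataset name and block size are
placeholders); only data written after the property change gets the new
recordsize, so the copy is what actually rewrites the file:
# zfs set recordsize=8K tank/vmstore
# cp vm.vmdk vm.vmdk.new && mv vm.vmdk.new vm.vmdk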
-B
--
Brandon High : bh...@freaks.com
shouldn't make a huge
difference with the zil disabled, but it certainly won't hurt.
-B
--
Brandon High : bh...@freaks.com
e some accounting bits.
I seem to remember seeing a fix for 100% full pools a while ago so
this may not be as critical as it used to be, but it's a nice safety
net to have.
-B
--
Brandon High : bh...@freaks.com
(outdated) instructions here:
http://spiralbound.net/blog/2005/12/21/rebuilding-the-solaris-device-tree
. I think you can do this all with a new boot environment, rather than
boot from a CD.
-B
--
Brandon High : bh...@freaks.com
of filesystems?
No. Incremental sends might take longer, as I mentioned above.
> 2.) Why do we see 4MB-8MB/s of *writes* to the filesystem when we do a
> 'zfs send' to /dev/null ?
Is anything else using the filesystems in the pool?
-B
--
Brandon High : bh...@freaks.com
On Sun, Feb 27, 2011 at 7:35 PM, Brandon High wrote:
> It moves from "best fit" to "any fit" at a certain point, which is at
> ~ 95% (I think). Best fit looks for a large contiguous space to avoid
> fragmentation while any fit looks for any free space.
I got the term
It moves from "best fit" to "any fit" at a certain point, which is at
~ 95% (I think). Best fit looks for a large contiguous space to avoid
fragmentation while any fit looks for any free space.
-B
--
Brandon High : bh...@freaks.com
lower minimum receive power. An internal port might work with a SATA
to eSATA cable or adapter, but it's not guaranteed to.
-B
--
Brandon High : bh...@freaks.com
ProLiant
Microserver. It's about $320 and holds 4 drives, with an expansion
slot for an additional controller. I think some people have reported
success with these on the list.
-B
--
Brandon High : bh...@freaks.com
but I'm sure
they exist.
-B
--
Brandon High : bh...@freaks.com
rformance to
be slightly lower, and to use slightly more CPU. Most USB controllers
don't support DMA, so all I/O requires CPU time.
What about an inexpensive SAS card (eg: Supermicro AOC-USAS-L4i) and
external SAS enclosure (eg: Sans Digital TowerRAID TR4X)? It would
cost about $350 for t
your assertion doesn't seem to hold up.
I think he meant that if one drive in a mirror dies completely, then
any single read error on the remaining drive is not recoverable.
With raidz2 (or a 3-way mirror for that matter), if one drive dies
completely, you still have redundancy.
-B
--
o worry about whether writes are being cached, because
any data that is written synchronously will be committed to stable
storage before the write returns.
-B
--
Brandon High : bh...@freaks.com
hing reads.
ZFS is a very different beast than UFS and doesn't require the same tuning.
-B
--
Brandon High : bh...@freaks.com
ended to use different levels of redundancy in a pool,
so you may want to consider using mirrors for everything. This also
makes it easier to add or upgrade capacity later.
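For example, growing a pool of mirrors later is a single command
(device names are invented):
# zpool add tank mirror c5t0d0 c5t1d0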
-B
--
Brandon High : bh...@freaks.com
tween sectors is less likely with the lower density?
More platters leads to more heat and higher power consumption. Most
drives have 3 or 4 platters, though Hitachi usually manufactures
5-platter drives as well.
-B
--
Brandon High : bh...@freaks.com