If no pools are specified, statistics for every pool in the system are
shown. If count is specified, the command exits after count reports are
printed.
:D
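For example (hypothetical pool name 'tank'), the interval/count form prints
a per-vdev report every 5 seconds and exits after 3 reports:
# zpool iostat -v tank 5 3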
--
Freddie Cash
fjwc...@gmail.com
On Dec 13, 2012 8:02 PM, "Fred Liu" wrote:
>
> Assuming in a secure and trusted env, we want to get the maximum transfer
> speed without the overhead from ssh.
Add the HPN patches to OpenSSH and enable the NONE cipher. We can saturate
a gigabit link (980 Mbps) between two FreeBSD hosts using that.
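As a rough sketch (host and dataset names are hypothetical; the NoneEnabled
and NoneSwitch options come from the HPN patch set and must also be enabled
in the server's sshd_config):
# zfs send tank/data@snap | \
    ssh -o NoneEnabled=yes -o NoneSwitch=yes otherhost zfs recv -d backup
Authentication still happens over an encrypted channel; only the bulk data
transfer switches to the NONE cipher.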
--
Freddie Cash
fjwc...@gmail.com
Every disk in a ZFS pool has metadata on it, which includes which pool it's
part of, which vdev it's part of, etc. Thus, if you do an export followed
by an import, ZFS will read the metadata off the disks and sort things out
automatically.
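For instance (hypothetical pool name 'tank'):
# zpool export tank
(recable / reorder the disks as needed)
# zpool import tank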
--
Freddie Cash
fjwc...@gmail.com
And you can try 'zpool online' on the failed drive to see if it comes back
online.
On Nov 27, 2012 6:08 PM, "Freddie Cash" wrote:
> You don't use replace on mirror vdevs.
>
> 'zpool detach' the failed drive. Then 'zpool attach' the new drive
You don't use replace on mirror vdevs.
'zpool detach' the failed drive. Then 'zpool attach' the new drive.
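A minimal sketch of that sequence, with hypothetical pool and device names
(da3 is the failed half of the mirror, da2 the surviving half, da4 the
replacement):
# zpool detach tank da3
# zpool attach tank da2 da4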
On Nov 27, 2012 6:00 PM, "Chris Dunbar - Earthside, LLC" <
cdun...@earthside.net> wrote:
> Hello,
>
>
> I have a degraded mirror set and this has happened a few times (not a
Create new filesystem.
rsync data from /path/to/filesystem/.zfs/snapshot/snapname/ to new filesystem
Snapshot new filesystem.
rsync data from /path/to/filesystem/.zfs/snapshot/snapname+1/ to new filesystem
Snapshot new filesystem
See if zfs diff works.
If it does, repeat the rsync/snapshot steps f
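A sketch of those steps with hypothetical dataset and snapshot names
(tank/old is the source, tank/new the destination):
# zfs create tank/new
# rsync -a /tank/old/.zfs/snapshot/snap1/ /tank/new/
# zfs snapshot tank/new@snap1
# rsync -a --delete /tank/old/.zfs/snapshot/snap2/ /tank/new/
# zfs snapshot tank/new@snap2
# zfs diff tank/new@snap1 tank/new@snap2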
Anandtech.com has a thorough review of it. Performance is consistent
(within 10-15% IOPS) across the lifetime of the drive, it has capacitors to
flush the RAM cache to disk, and it doesn't store user data in the cache.
It's also cheaper per GB than the 710 it replaces.
On 2012-11-13 3:32 PM, "Jim Klimov" wr
Ah, okay, that makes sense. I wasn't offended, just confused. :)
Thanks for the clarification
On Oct 13, 2012 2:01 AM, "Jim Klimov" wrote:
> 2012-10-12 19:34, Freddie Cash wrote:
>
>> On Fri, Oct 12, 2012 at 3:28 AM, Jim Klimov wrote:
>>
>>> In fact
My home file server ran with mixed vdevs for a while (a 2 IDE-disk
mirror vdev with a 3 SATA-disk raidz1 vdev) as it was built using
scrounged parts.
But all my work file servers have matched vdevs.
--
Freddie Cash
fjwc...@gmail.com
-
gpt/log 1.98G 460K 1.98G -
cache - - - - - -
gpt/cache1 32.0G 32.0G 8M -
--
Freddie Cash
fjwc...@gmail.com
On Thu, Oct 4, 2012 at 9:45 AM, Jim Klimov wrote:
> 2012-10-04 20:36, Freddie Cash wrote:
>>
>> On Thu, Oct 4, 2012 at 9:14 AM, Richard Elling
>> wrote:
>>>
>>> On Oct 4, 2012, at 8:58 AM, Jan Owoc wrote:
>>> The return code for zpool is ambigu
before.
Not sure why I didn't see "health" in the list of pool properties all
the times I've read the zpool man page.
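The health property is handy for scripting; for example (hypothetical pool
name 'tank'), this prints a single word such as ONLINE or DEGRADED with no
headers:
# zpool list -H -o health tank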
--
Freddie Cash
fjwc...@gmail.com
If you're willing to try FreeBSD, there's HAST (aka high availability
storage) for this very purpose.
You use hast to create mirror pairs using 1 disk from each box, thus
creating /dev/hast/* nodes. Then you use those to create the zpool on the
'primary' box.
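A rough sketch, assuming /etc/hast.conf already defines resources disk0 and
disk1 on both boxes (resource and pool names are hypothetical):
# hastctl create disk0
# hastctl create disk1
# service hastd onestart
# hastctl role primary disk0
# hastctl role primary disk1
# zpool create tank /dev/hast/disk0 /dev/hast/disk1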
All writes to the pool on the primar
Query the size of the other drives in the vdev, obviously. ;) So long as
the replacement is larger than the smallest remaining drive, it'll work.
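On FreeBSD, one way to check the sizes (hypothetical device name):
# diskinfo -v /dev/da2 | grep mediasize
On Solaris-derived systems, 'format' or 'iostat -En' will show the sizes.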
On Sep 5, 2012 8:57 AM, "Yaverot" wrote:
>
>
> --- skiselkov...@gmail.com wrote:
> >On 09/05/2012 05:06 AM, Yaverot wrote:
> > "What is the smallest si
the -c option.
-D  Imports destroyed pools only. The -f option is also required.
-f  Forces import, even if the pool appears to be potentially active.
-m  Enables import with missing log devices.
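For example, to bring in a pool whose separate log device has died
(hypothetical pool name):
# zpool import -m tank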
--
Freddie Cash
fjwc...@gmail.com
> encryption) affect zfs specific features like data Integrity and
> deduplication?
If you are using FreeBSD, why not use GELI to provide the block
devices used for the ZFS vdevs? That's the "standard" way to get
encryption and ZFS working on
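A minimal sketch of that layering, with hypothetical device and pool names
(geli prompts for the passphrase on attach):
# geli init -s 4096 /dev/da2
# geli attach /dev/da2
# zpool create tank /dev/da2.eli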
orce a new import, though, but it didn't boot up
> normally, and told me it couldn't import its pool due to lack of SLOG devices.
Positive. :) I tested it with ZFSv28 on FreeBSD 9-STABLE a month or
two ago. See the updated man page for zpool, especially the bit about
"import -m".
arate device), and
> then lose this SLOG (disk crash etc), you will probably lose the pool. So if
> you want/need SLOG, you probably want two of them in a mirror…
That's only true on older versions of ZFS. ZFSv19 (or 20?) includes
the ability to import a pool with a failed/missing log dev
y size of sectors you want. This can be
used to create ashift=12 vdevs on top of 512B, pseudo-512B, or 4K
drives.
# gnop create -S 4096 da{0,1,2,3,4,5,6,7}
# zpool create pool raidz2 da{0,1,2,3,4,5,6,7}.nop
# zpool export pool
# gnop destroy da{0,1,2,3,4,5,6,7}.nop
# zpool
On Tue, May 8, 2012 at 10:24 AM, Freddie Cash wrote:
> I have an interesting issue with one single ZFS filesystem in a pool.
> All the other filesystems are fine, and can be mounted, snapshoted,
> destroyed, etc. But this one filesystem, if I try to do any operation
> on it (zf
sratio 5.93x
--
Freddie Cash
fjwc...@gmail.com
On Thu, Apr 26, 2012 at 4:34 AM, Deepak Honnalli
wrote:
> cachefs is present in Solaris 10. It is EOL'd in S11.
And for those who need/want to use Linux, the equivalent is FSCache.
--
Freddie Cash
fjwc...@gmail.com
hey have encryption and we don't?
Can it be backported to illumos ..."
It's too bad Oracle hasn't followed through (yet?) with their promise
to open-source the ZFS (and other CDDL-licensed?) code in Solaris 11.
:(
--
Freddie Cash
fjwc...@gmail.com
ing added a "if the blockcount is within 10%,
then allow the replace to succeed" feature, to work around this issue?
--
Freddie Cash
fjwc...@gmail.com
set, so it's limited to 2 TB harddrives:
http://www.supermicro.com/products/accessories/addon/AOC-USAS-L4i_R.cfm
You could always check if there's an IT-mode firmware for the 921204i4e
card available on the LSI website, and flash that onto the card. That
"disables"/removes the RAI
erver
temporarily to get things working on this box again.
> # sysctl hw.physmem
> hw.physmem: 6363394048
>
> # sysctl vfs.zfs.arc_max
> vfs.zfs.arc_max: 5045088256
>
> (I lowered arc_max to 1GB but hasn't helped)
>
DO NOT LOWER THE ARC WHEN DEDUPE IS ENABLED!!
--
Freddi
o ZFS, and create a pool using a mirror vdev.
File-backed ZFS vdevs really should only be used for testing purposes.
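For quick testing, a throwaway file-backed pool can be built like this
(paths and pool name are hypothetical):
# truncate -s 1g /tmp/vdev0 /tmp/vdev1
# zpool create testpool mirror /tmp/vdev0 /tmp/vdev1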
--
Freddie Cash
fjwc...@gmail.com
On Mon, Oct 17, 2011 at 10:50 AM, Harry Putnam wrote:
> Freddie Cash writes:
>
> > If you only want RAID0 or RAID1, then btrfs is okay. There's no support
> for
> > RAID5+ as yet, and it's been "in development" for a couple of years now.
>
> [...
tly only in Solaris 11)
- built-in CIFS/NFS sharing (on Solaris-based systems; FreeBSD uses normal
nfsd and Samba for this)
- automatic hot-spares (on Solaris-based systems; FreeBSD only supports
manual spares)
- and more
Maybe in another 5 years or so, Btrfs will be up to the po
8-STABLE/9-BETA. And whether or not "zfs send" is faster/better/easier/more
reliable than rsyncing snapshots (which is what we do currently).
Thanks for the info.
--
Freddie Cash
fjwc...@gmail.com
Just curious if anyone has looked into the relationship between zpool
dedupe, zfs send dedupe, memory use, and network throughput.
For example, does 'zfs send -D' use the same DDT as the pool? Or does it
require more memory for its own DDT, thus impacting performance of both?
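For reference, the deduplicated-stream form being asked about looks like
this (dataset, snapshot, and target names are hypothetical):
# zfs send -D tank/data@snap | ssh otherhost zfs recv -d backup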
If you have a dedup
dancy or anything like that, but does
include some compression and other info (I believe).
There's an excellent post in the archives that shows how "ls -l", du, df,
"zfs list", and "zpool list" work, and what each sees as "d
th that as long as the rest of my zpool remains intact.
>
Note: you will have 0 redundancy on the ENTIRE POOL, not just that one
vdev. If that non-redundant vdev dies, you lose the entire pool.
Are you willing to take that risk, if one of the new drives is already DoA?
--
Freddie Cash
fjwc...@gmail.com
On Wed, Jun 1, 2011 at 2:34 PM, Freddie Cash wrote:
> On Wed, Jun 1, 2011 at 12:45 PM, Eric Sproul wrote:
>
>> On Wed, Jun 1, 2011 at 2:54 PM, Matt Harrison
>> wrote:
>> > Hi list,
>> >
>> > I've got a pool thats got a single raidz1 vdev.
evs (raidz*, mirror, single) from a
pool, so you can't "add" a new vdev and "remove" the old vdev to convert
between vdev types.
The only solution to the OP's question is to create a new pool, transfer the
data, and destroy the old pool. There are several ways to do t
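One way, sketched with hypothetical pool and snapshot names (a recursive
snapshot plus a replicated send/recv):
# zfs snapshot -r oldpool@migrate
# zfs send -R oldpool@migrate | zfs recv -d newpool
# zpool destroy oldpool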
http://people.freebsd.org/~mm/patches/zfs/v28/
ZFS-on-FUSE for Linux currently only supports ZFSv23.
So you can "safely" use Illumos, Nexenta, FreeBSD, etc with ZFSv28. You can
also use Solaris 11 Express, so long as you don't upgrade the pool version
(SolE includes ZFSv31
c Intel motherboard
- 2.8 GHz P4 CPU
- 3 SATA1 harddrives connected to motherboard, in a raidz1 vdev
- 2 IDE harddrives connected to a Promise PCI controller, in a mirror vdev
- 2 GB non-ECC SDRAM
- 2 GB USB stick for the OS install
- FreeBSD 8.2
--
Freddie Cas
On Fri, Apr 29, 2011 at 5:17 PM, Brandon High wrote:
> On Fri, Apr 29, 2011 at 1:23 PM, Freddie Cash wrote:
>> Running ZFSv28 on 64-bit FreeBSD 8-STABLE.
>
> I'd suggest trying to import the pool into snv_151a (Solaris 11
> Express), which is the reference and devel
I can tell, this is due almost
> exclusively to the fact that rsync needs to build an in-memory table of all
> work being done *before* it starts to copy.
rsync 2.x works that way, building a complete list of files/directories to
copy before starting the copy.
rsync 3.x doesn't. 3.x
On Fri, Apr 29, 2011 at 5:00 PM, Alexander J. Maidak wrote:
> On Fri, 2011-04-29 at 16:21 -0700, Freddie Cash wrote:
>> On Fri, Apr 29, 2011 at 1:23 PM, Freddie Cash wrote:
>> > Is there anyway, yet, to import a pool with corrupted space_map
>> > errors, or &qu
On Fri, Apr 29, 2011 at 1:23 PM, Freddie Cash wrote:
> Is there any way, yet, to import a pool with corrupted space_map
> errors, or "zio->io_type != ZIO_TYPE_WRITE" assertions?
>
> I have a pool comprised of 4 raidz2 vdevs of 6 drives each. I have
> almost 10 TB of data
nning, which were not killed by the
shutdown process for some reason, which prevented 8 ZFS filesystems
from being unmounted, which prevented the pool from being exported
(even though I have a "zfs unmount -f" and "zpool export -f"
fail-safe), which
e-file --inplace (and other options), works
extremely fast for updates.
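A common combination for updating large files in place (the exact flag set
is my assumption here, since the original line is cut off; paths are
hypothetical):
# rsync -a --inplace --no-whole-file /data/src/ /tank/dst/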
--
Freddie Cash
fjwc...@gmail.com
On Mon, Apr 25, 2011 at 10:55 AM, Erik Trimble wrote:
> Min block size is 512 bytes.
Technically, isn't the minimum block size 2^(ashift value)? Thus, on
4 KB disks where the vdevs have an ashift=12, the minimum block size
will be 4 KB.
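The ashift in use can be checked with zdb, for example (hypothetical pool
name):
# zdb -C tank | grep ashift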
--
Freddie Cash
fjwc...@g
s each, and then it just started taking longer and
longer for each drive.
--
Freddie Cash
fjwc...@gmail.com
28.
And there are patches available for testing ZFSv28 on FreeBSD 8-STABLE.
Let's keep the OS pot shots to a minimum, eh?
--
Freddie Cash
fjwc...@gmail.com
as the last block or three
of the file will be different.
Repeat changing different lines in the file, and watch as disk usage
only increases a little, since the files still "share" (or have in
common) a lot of blocks.
ZFS dedupe happens at the block layer, not the file layer.
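A quick way to see this, with hypothetical dataset and file names: copy a
large file twice, append a line to one copy, and the DEDUP column of
'zpool list' will show that most blocks are still shared:
# zfs create -o dedup=on tank/test
# cp /tank/test/bigfile /tank/test/copy1
# cp /tank/test/bigfile /tank/test/copy2
# echo "one changed line" >> /tank/test/copy2
# zpool list tank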
, SAS connectors).
Some consider those enterprise-grade (after all, it's 6 Gbps SAS,
multilaned, multipathed, but not multi-), some don't (it's not
IBM/Oracle/HP/etc, oh noes!!).
Chenbro also has similar setups to SuperMicro. Again, it's not
"big-name storage company"
ormance won't be as good as it could be due to the uneven
striping, especially when the smaller vdevs get to be full. But it
works.
--
Freddie Cash
fjwc...@gmail.com
Creating 1 pool gives you the best performance and the most
flexibility. Use separate filesystems on top of that pool if you want
to tweak all the different properties.
Going with 1 pool also increases your chances for dedupe, as dedupe is
done at the pool level.
--
Freddi
... Are they all screwed?
ZFSv28 is available for FreeBSD 9-CURRENT.
We won't know until after Oracle releases Solaris 11 whether or not
they'll live up to their promise to open the source to ZFSv31. Until
Solaris 11 is released, there's really not much point in debating
e I am afraid.
>
> .. or add a mirror to that drive, to keep some redundancy.
And to ad4s1d as well, since it's also a stand-alone, non-redundant vdev.
Since there are two drives that are non-redundant, it would probably
be best to re-do the
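For the attach route, the syntax is 'zpool attach <pool> <existing-device>
<new-device>'; for example (the pool name and the new disk below are
hypothetical):
# zpool attach tank ad4s1d ad6s1d
That turns the stand-alone ad4s1d vdev into a two-way mirror.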
e to a pool
is to take the pool offline via zpool export.
One more reason to stop using hardware storage systems and just let
ZFS handle the drives directly. :)
--
Freddie Cash
fjwc...@gmail.com
rimental patches
available for ZFSv28.
--
Freddie Cash
fjwc...@gmail.com
On Mon, Oct 18, 2010 at 8:51 AM, Darren J Moffat
wrote:
> On 18/10/2010 16:48, Freddie Cash wrote:
>>
>> On Mon, Oct 18, 2010 at 6:34 AM, Edward Ned Harvey
>> wrote:
On Mon, Oct 18, 2010 at 6:34 AM, Edward Ned Harvey wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Freddie Cash
>>
>> If you lose 1 vdev, you lose the pool.
>
> As long as 1 vdev is striped and not mi
AID-0 is lost.
Similar for the pool.
--
Freddie Cash
fjwc...@gmail.com
've avoided any vdev with more than 8 drives in it.
--
Freddie Cash
fjwc...@gmail.com
> mirror-0 ONLINE 0 0 0
> c1t2d0 ONLINE 0 0 0
> c1t3d0 ONLINE 0 0 0
> mirror-1 ONLINE 0 0 0
> c1t4d0 ONLINE 0 0 0
> c1t5d0 ONLINE 0 0 0
--
Freddie Cash
fjwc...@gmail.com
can be used and
> keep the arc cache warm with metadata. Any suggestions?
Would adding a cache device (L2ARC) and setting primarycache=metadata
and secondarycache=all on the root dataset do what you need?
That way ARC is used strictly for metadata, and L2ARC is used for metad
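A sketch of that setup, with hypothetical pool and device names:
# zpool add tank cache da6
# zfs set primarycache=metadata tank
# zfs set secondarycache=all tank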
e from a disk while doing normal
reads/writes is also fun.
Using the controller software (if a RAID controller) to delete
LUNs/disks is also fun.
--
Freddie Cash
fjwc...@gmail.com
l name to the drive you are removing. You can
then use that drive to create a new pool, thus creating a duplicate of
the original pool.
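If this is the 'zpool split' feature being described, the usage looks like
this (pool names are hypothetical; the detached half of each mirror becomes
the new pool, which is then imported separately):
# zpool split tank newtank
# zpool import newtank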
--
Freddie Cash
fjwc...@gmail.com
ly written data. Any existing data is not affected
until it is re-written or copied.
--
Freddie Cash
fjwc...@gmail.com
On Thu, Sep 9, 2010 at 1:26 PM, Freddie Cash wrote:
> On Thu, Sep 9, 2010 at 1:04 PM, Orvar Korvar
> wrote:
>> A) Resilver = Defrag. True/false?
>
> False. Resilver just rebuilds a drive in a vdev based on the
> redundant data stored on the other drives in the vdev. Simil
I buy larger drives and resilver, does defrag happen?
No.
> C) Does zfs send zfs receive mean it will defrag?
No.
ZFS doesn't currently have a defragmenter. That will come when the
legendary block pointer rewrite feature is committed.
--
Freddie Cash
fjw
don't think you'd be able to
get a 500 GB SATA disk to resilver in a 24-disk raidz vdev (even a
raidz1) in a 50% full pool. Especially if you are using the pool for
anything at the same time.
--
Freddie Cash
fjwc...@gmail.com
e-only.
-M uses MLC flash, which is optimised for fast reads. Ideal for an
L2ARC which is (basically) read-only.
-E tends to have smaller capacities, which is fine for ZIL.
-M tends to have larger capacities, which is perfect for L2ARC.
--
Freddie Cash
fj
ND opensolaris commands when a command is shown.
I haven't finished reading it yet (okay, barely read through the
contents list), but would you be interested in the FreeBSD equivalents
for the commands, if they differ?
--
Freddie Cash
fjwc...@gmail.com
reWire.
If there's any way to run cables from inside the case, you can "make
do" with plain SATA and longer cables.
Otherwise, you'll need to look into something other than a MacMini for
your storage box.
--
Freddie Cash
fjwc...@gmail.com
On Wed, Aug 25, 2010 at 11:34 AM, Mike DeMarco wrote:
> Is it currently or near future possible to shrink a zpool "remove a disk"
Short answer: no.
Long answer: search the archives for "block pointer rewrite" for all
the gory details. :)
--
Freddie
nd the ones in the middle have "simple" XOR engines for doing the
RAID stuff in hardware.
--
Freddie Cash
fjwc...@gmail.com
u
really don't want to use the ports tree, there's pkg_upgrade (part of
the bsdadminscripts port).
IOW, if you don't want to compile things on FreeBSD, you don't have to. :)
--
Freddie Cash
fjwc...@gmail.com
raidz3 ?
Backup the data in the pool, destroy the pool, create a new pool
(consider using multiple raidz vdevs instead of one giant raidz vdev),
copy the data back.
There's no other way.
--
Freddie Cash
fjwc...@gmail.com
ey on disks to have the same usable space.
And, adding multiple raidz vdevs (each with under 10 disks) to a
single pool (aka stripe of raidz) will give better performance than a
single large raidz vdev.
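For example, two 6-drive raidz2 vdevs in one pool (pool and device names
are hypothetical):
# zpool create tank raidz2 da0 da1 da2 da3 da4 da5 \
    raidz2 da6 da7 da8 da9 da10 da11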
--
Freddie Cash
fjwc...@gmail.com
rmance, use 2x 2-drive mirrors.
For best redundancy, use 1x 4-drive raidz2.
For middle-of-the-road performance/redundancy, use 1x 4-drive raidz1.
Note: newegg.ca has a sale on right now. WD Caviar Black 1 TB drives
are only $85 CDN.
--
Freddie Cash
fjwc...@gmail.com
#x27;t wait for FreeBSD to get ZFSv20+). But the
zfs-fuse system was just too unstable to be usable for even simple
testing.
--
Freddie Cash
fjwc...@gmail.com
ol
zpool export newpool
zpool import newpool oldpool
The commands are not exact, read the man pages to get the exact syntax
for the send/recv part.
However, doing so will make the pool extremely fragile. Any issues
with any of the 8 LUNs, and the whole pool dies as there is no
redundancy.
--
Freddie
e able to saturate a 10G link using zfs
send/recv, so long as both the systems can read/write that fast.
http://www.psc.edu/networking/projects/hpn-ssh/
--
Freddie Cash
fjwc...@gmail.com
laptops).
However, the "rule of thumb" for ZFS is 2 GB of RAM as a bare minimum,
using the 64-bit version of FreeBSD. The "sweet spot" is 4 GB of RAM.
But, more is always better.
--
Freddie Cash
fjwc...@gmail.com
hard to keep it running.
You definitely want to do the ZFS bits from within FreeBSD.
--
Freddie Cash
fjwc...@gmail.com
ontroller into a "dumb" SATA
controller).
--
Freddie Cash
fjwc...@gmail.com
k03 disk04
Replace one of the drives with a larger one (this may not be perfectly
correct, going from memory):
zpool attach poolname disk01 disk05
zpool detach poolname disk01
Carry on with the add and replace methods as needed until you have
your 6-mirror pool.
No vdev removals required.
--
F
s.
Attached to 3Ware 9550SXU and 9650SE RAID controllers, configured as
Single Drive arrays.
There's also 8 WD Caviar Green 1.5 TB drives in there, which are not
very good (even after twiddling the idle timeout setting via wdidle3).
Definitely avoid the Green/GP line of drives.
-
On Sat, Jun 26, 2010 at 12:20 AM, Ben Miles wrote:
> What supporting applications are there on Ubuntu for RAIDZ?
None. Ubuntu doesn't officially support ZFS.
You can kind of make it work using the ZFS-FUSE project. But it's not
stable, nor recommended.
--
Freddie Cash
fjwc
ith patches available for ZFSv15 and ZFSv16. You'll
get a more stable, better-performing system than trying to shoehorn
ZFS-FUSE into Ubuntu (we've tried with Debian, and ZFS-FUSE is good
for short-term testing, but not production use).
--
Freddie Cash
fjwc...@gmail.com
o be made up of as few physical
disks as possible (for your size and redundancy requirements), and
your pool to be made up of as many vdevs as possible.
--
Freddie Cash
fjwc...@gmail.com
bly of the same configuration (all
mirrors, all raidz1, all raidz2, etc).
You can add vdevs to the pool at anytime.
You cannot expand a raidz vdev by adding drives, though (convert a 4-drive
raidz1 to a 5-drive raidz1). Nor can you convert between raidz types
(4-drive raidz1 to
he same way you access any harddrive over the network:
- NFS
- SMB/CIFS
- iSCSI
- etc
It just depends at what level you want to access the storage (files, shares,
block devices, etc).
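For the NFS case, ZFS has a property that handles the export for you; for
example (hypothetical dataset name):
# zfs set sharenfs=on tank/data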
--
Freddie Cash
fjwc...@gmail.com
8 drives of the first vdev
are on the first controller, all 8 drives of the third vdev are on the
second controller, with the second vdev being split across both controllers.
Everything is still running smoothly.
--
Freddie Cash
fjwc...@gmail.com
exity that makes everything super simple and easy for them
... and a royal pain for everyone else (kinda like Windows). :)
In the end, it all comes down to user education.
--
Freddie Cash
fjwc...@gmail.com
On Fri, Jun 11, 2010 at 12:25 PM, Bob Friesenhahn <
bfrie...@simple.dallas.tx.us> wrote:
> On Fri, 11 Jun 2010, Freddie Cash wrote:
>
>>
>>
For the record, the following paragraph was incorrectly quoted by Bob. This
paragraph was originally written by Erik Trimble:
>
an then call
> from userland. Which is essentially what the ZFS FUSE folks have been
> reduced to doing.
>
> The nvidia shim is only needed to be able to ship the non-GPL binary driver
with the GPL binary kernel. If you don't use the binaries, you don't use
the shim.
--
Freddie Cash
fjwc...@gmail.com
"space available" output of various tools (like zfs
list, df, etc).
--
Freddie Cash
fjwc...@gmail.com
ittle and therefore only resilver 200-300Gb of data.
>
When in doubt, read the man page. :)
zpool iostat -v
--
Freddie Cash
fjwc...@gmail.com
available in a raidz vdev, by
replacing each drive in the raidz vdev with a larger drive. We just did
this, going from 8x 500 GB drives in a raidz2 vdev, to 8x 1.5 TB drives in a
raidz2 vdev.
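The per-drive loop looks roughly like this (pool and device names are
hypothetical; wait for each resilver to finish before starting the next
replace):
# zpool replace tank da0 da8
# zpool status tank
After the last drive is swapped, an export/import (or autoexpand, where
available) makes the extra space visible.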
--
Freddie Cash
fjwc...@gmail.com
o
export/import the pool for the space to become available).
We've used both of the above quite successfully, both at home and at work.
Not sure what your buddy was talking about. :)
--
Freddie Cash
fjwc...@gmail.com