> If I'm not mistaken, a 3-way mirror is not
> implemented behind the scenes in
> the same way as a 3-disk raidz3. You should use a
> 3-way mirror instead of a
> 3-disk raidz3.
RAIDZ2 requires at least 4 drives, and RAIDZ3 requires at least 5 drives. But,
yes, a 3-way mirror is implemented totally differently [...]
there are no writes in the queue).
Perhaps you are saying that they act like stripes for bandwidth purposes, but
not for read ops/sec?
-Rob
-Original Message-
From: Bob Friesenhahn [mailto:bfrie...@simple.dallas.tx.us]
Sent: Saturday, August 06, 2011 11:41 AM
To: Rob Cohen
Cc: zfs-dis
> I may have RAIDZ reading wrong here. Perhaps someone
> could clarify.
>
> For a read-only workload, does each RAIDZ drive act
> like a stripe, similar to RAID5/6? Do they have
> independent queues?
>
> It would seem that there is no escaping
> read/modify/write operations for sub-block writes
RAIDZ has to rebuild data by reading all drives in the group, and
reconstructing from parity. Mirrors simply copy a drive.
Compare 3TB mirrors vs. a 9x3TB RAIDZ2.
Mirrors:
Read 3TB
Write 3TB
RAIDZ2:
Read 24TB
Reconstruct data on CPU
Write 3TB
In this case, RAIDZ is at least 8x slower to resilver.
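(For reference, the mechanics of kicking off and watching a resilver are the same
for either layout; only how much the surviving disks must read differs. A sketch
with made-up pool and device names:)
# replace the failed disk and watch resilver progress
zpool replace tank c1t5d0 c1t9d0
zpool status tank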
I may have RAIDZ reading wrong here. Perhaps someone could clarify.
For a read-only workload, does each RAIDZ drive act like a stripe, similar to
RAID5/6? Do they have independent queues?
It would seem that there is no escaping read/modify/write operations for
sub-block writes, forcing the RA
Generally, mirrors resilver MUCH faster than RAIDZ, and you only lose
redundancy on that stripe, so combined, you're much closer to RAIDZ2 odds than
you might think, especially with hot spare(s), which I'd recommend.
When you're talking about IOPS, each stripe can support 1 simultaneous user.
Try mirrors. You will get much better multi-user performance, and you can
easily split the mirrors across enclosures.
If your priority is performance over capacity, you could experiment with n-way
mirrors, since more mirrors will load balance reads better than more stripes.
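(For example, a pool of mirrors split across two enclosures with a hot spare
might be built like this; device names are made up, with c1 = enclosure 1 and
c2 = enclosure 2:)
zpool create tank \
  mirror c1t0d0 c2t0d0 \
  mirror c1t1d0 c2t1d0 \
  mirror c1t2d0 c2t2d0 \
  spare c1t3d0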
plus VirtualBox 4.1 with "network in a box" would like snv_159.
From http://www.virtualbox.org/wiki/Changelog:
"Solaris hosts: New Crossbow based bridged networking driver for Solaris 11
build 159 and above"
Rob
References:
Thread: ZFS effective short-stroking and connection to thin provisioning?
http://opensolaris.org/jive/thread.jspa?threadID=127608
Confused about consumer drives and zfs can someone help?
http://opensolaris.org/jive/thread.jspa?threadID=132253
Recommended RAM for ZFS on various platf
as a hardware problem, or a
Solaris bug.
- Rob
> I have 15x SAS drives in a Dell MD1000 enclosure,
> attached to an LSI 9200-16e. This has been working
> well. The system is booting off of internal drives,
> on a Dell SAS 6ir.
>
> I just tried to add a second storag
Markus,
I'm pretty sure that I have the MD1000 plugged in properly, especially since
the same connection works on the 9280 and Perc 6/e. It's not in split mode.
Thanks for the suggestion, though.
Is there a
special way to configure one of these LSI boards?
Thanks,
Rob
age, even though my cache should be warm by now, and my SSDs are far from
full.
set zfs:l2arc_noprefetch = 0
Am I setting this wrong? Am I misunderstanding this option?
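(For reference: that line belongs in /etc/system and, as far as I know, only
takes effect after a reboot. A sketch of how to check whether the L2ARC is
actually being hit, via the arcstats kstat; exact counter names can vary by build:)
kstat -p zfs:0:arcstats | grep l2_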
Thanks,
Rob
15 23:05 /kernel/drv/amd64/mpt
-rwxr-xr-x 1 root sys 399952 Nov 15 23:06 /kernel/drv/amd64/mpt_sas
and mpt_sas has a new printf:
"reset was running, this event can not be handled this time"
Rob
Thanks, Richard. Your answers were very helpful.
d the rest fits in L2ARC,
performance will be good.
Thanks,
Rob
Thanks, Ian.
If I understand correctly, the performance would then drop to the same level as
if I set them up as separate volumes in the first place.
So, I get double the performance for 75% of my data, and equal performance for
25% of my data, and my L2ARC will adapt to my working set across b
could share
L2ARC and ZIL devices, rather than buy two sets.
It appears possible to set up 7x450GB mirrored sets and 7x600GB mirrored sets
in the same volume, without losing capacity. Is that a bad idea? Is there a
problem with having different stripe sizes like this?
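(For reference, a sketch of such a mixed pool, with made-up device names and only
two pairs of each size shown; ZFS spreads writes across the vdevs, weighted by
free space, and the cache/log devices are shared pool-wide:)
zpool create tank \
  mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0 \
  mirror c2t0d0 c2t1d0 mirror c2t2d0 c2t3d0
zpool add tank cache c3t0d0
zpool add tank log mirror c3t1d0 c3t2d0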
Thanks,
Rob
in reality it would be OK.
If it is not OK (for you) then you have open memory slots in which to add more
chips (which you are certain to want to do in the future).
Rob
> I wanted to build a small back up (maybe also NAS) server using
A common question that I am trying to get answered (and have a few) here:
http://www.opensolaris.org/jive/thread.jspa?threadID=102368&tstart=0
Rob
> I'm building my new storage server, all the parts should come in this week.
> ...
Another answer is here:
http://eonstorage.blogspot.com/2010/03/whats-best-pool-to-build-with-3-or-4.html
Rob
doubt I can
afford as many as 10 drives, nor could I stuff them
into my box, so please suggest options that use less than that many (most
preferably less than 7).
A: ?
Thanks,
Rob
'
standpoint - as opposed to over three dozen tiny Drives).
Thanks for your reply,
Rob
Roy,
Thanks for your reply.
I did get a new drive and attempted the approach (as you suggested, prior to
your reply); however, once booted off the OpenSolaris Live CD (or the rebuilt new
drive), I was not able to import the rpool (which I had established had sector
errors). I expect I should hav
Folks, I posted this question on (OpenSolaris - Help) without any replies
http://opensolaris.org/jive/thread.jspa?threadID=129436&tstart=0 and am
re-posting here in the hope someone can help. I have updated the wording a
little too (in an attempt to clarify).
I currently use OpenSolaris on a T
when the file system
gets above 80% we seem to have quite a number of issues, much the same as what
you've had in the past: ps and prstat hanging.
Are you able to tell me the IDR number that you applied?
Thanks,
Rob
3(ff02fa4b0058)
ff000f4efbb0 smb_session_worker+0x6e(ff02fa4b0058)
ff000f4efc40 taskq_d_thread+0xb1(ff02e51b9e90)
ff000f4efc50 thread_start+8()
>
I can provide any other info that may be need. Thank you in advance for your
help!
Rob
--
Rob Cherveny
Manager of Information Techn
your file or zvol will not be there
when the box comes back, even though your program had finished seconds
before the crash.
Rob
> I like the idea of swapping on SSD too, but why not make a zvol for the L2ARC
> so you're not limited by the hard partitioning?
it lives through a reboot..
zpool create -f test c9t3d0s0 c9t4d0s0
zfs create -V 3G rpool/cache
zpool add test cache /dev/zvol/dsk/rpool/cache
reboot
if you're asking for an L2ARC on rpool, well, yeah, it's not mounted soon enough,
but the point is to put rpool, swap, and L2ARC for your storage pool all on a
single SSD..
Rob
Can a ZFS send stream become corrupt when piped between two hosts across a WAN
link using 'ssh'?
For example a host in Australia sends a stream to a host in the UK as follows:
# zfs send tank/f...@now | ssh host.uk zfs receive tank/bar
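(One way to hedge against link corruption, sketched with illustrative names:
land the stream in a file, compare checksums on both ends, and only receive it
once they match. As I understand it, the stream also carries its own checksums,
so zfs receive should fail on a corrupt stream rather than silently accept it.)
# sender
zfs send tank/foo@now > /var/tmp/foo.zfs
digest -a sha256 /var/tmp/foo.zfs
scp /var/tmp/foo.zfs host.uk:/var/tmp/
# receiver: confirm the sha256 matches, then
zfs receive tank/bar < /var/tmp/foo.zfs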
nts, this small loss might be the loss of their
entire dataset.
Rob
oop!
Rob
spx?Item=N82E16820139050
But we are still stuck at 8G without going to expensive ram or
a more expensive CPU.
Rob
d if one uses all 16 slots, that 667MHz DIMM runs at 533MHz
with AMD. The same is true for Lynnfield: if one uses registered
DDR3, one only gets 800MHz with all 6 slots (single or dual rank).
> Regardless, for zfs, memory is more important than raw CPU
agreed! but everything must be balanced.
clusions.
Rob
ECC, this close $$
http://www.newegg.com/Product/Product.aspx?Item=N82E16819115214
Now, this gets one to 8G ECC easily... AMD's unfair advantage is all those
RAM slots on their multi-die MBs... A slow AMD CPU with 64G RAM
might be better depending on your working set / dedup requirements.
r".
I'm thankful Sun shares their research and we can build on it.
(btw, NetApp ONTAP 8 is FreeBSD, and runs on standard hardware
after a little BIOS work :-)
Rob
> a 1U or 2U JBOD chassis for 2.5" drives,
from http://supermicro.com/products/nfo/chassis_storage.cfm
the E1 (single) or E2 (dual) options have a SAS expander so
http://supermicro.com/products/chassis/2U/?chs=216
fits your build, or build it yourself with
http://supermicro.com/products/accessori
> By partitioning the first two drives, you can arrange to have a small
> zfs-boot mirrored pool on the first two drives, and then create a second
> pool as two mirror pairs, or four drives in a raidz to support your data.
agreed..
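(A sketch of that layout with illustrative disk and slice names: a small
mirrored boot pool on s0 of the first two disks, then either two mirror pairs or
a raidz over the remaining four disks for data:)
zpool create rpool mirror c0t0d0s0 c0t1d0s0
zpool create data mirror c0t2d0 c0t3d0 mirror c0t4d0 c0t5d0
# ...or, for more capacity at lower IOPS:
zpool create data raidz c0t2d0 c0t3d0 c0t4d0 c0t5d0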
2 % zpool iostat -v
capacity operations
data or do some other stuff to give the
system some load, it hangs. This happens after 5 minutes, or after 30 minutes or
later, but it hangs. Then we get the problems shown in the attached pictures.
I have also emailed Areca. I hope they can fix it.
Regards,
Rob
This one has me a little confused. Ideas?
j...@opensolaris:~# zpool import z
cannot mount 'z/nukeme': mountpoint or dataset is busy
cannot share 'z/cle2003-1': smb add share failed
j...@opensolaris:~# zfs destroy z/nukeme
internal error: Bad exchange descriptor
Abort (core dumped)
j...@opensolaris
I can report I/O errors with Chenbro-based LSI SASx36 IC based
expanders tested with 111b/121/128a/129. The HBA was LSI 1068 based.
If I bypass the expander by adding more HBA controllers, mpt does not have
I/O errors.
-nola
On Dec 8, 2009, at 6:48 AM, Bruno Sousa wrote:
Hi James,
Thank yo
How can we help with what is outlined below? I can reproduce these at will, so
if anyone at Sun would like an environment to test this situation let me know.
What is the best info to grab for you folks to help here?
Thanks - nola
This is in regard to these threads:
http://www.opensolaris.or
t fun one
might make a tiny slice on all the disks of the raidz2
and list six log devices (6 way stripe) and not bother
adding the other two disks.
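(The syntax for that would look something like the line below, assuming a small
s1 slice was set aside on each raidz2 disk; listing the slices without "mirror"
stripes the log across them:)
zpool add tank log c0t0d0s1 c0t1d0s1 c0t2d0s1 c0t3d0s1 c0t4d0s1 c0t5d0s1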
Rob
onnection.
Wonder if there is an LSI issue with too many links in HBA mode?
Rob
CIE.
Rob
stripe
266/6 MB with 6 disks on shared PCI in a raidz
we know disks don't go that fast anyway, but going from an 8h to 15h
scrub is very reasonable depending on vdev config.
Rob
> from a two disk (10krpm) mirror layout to a three disk raidz-1.
writes will be unnoticeably slower for raidz1 because of the parity calculation
and the latency of a third spindle, but reads will be about half the speed
of the mirror, because the mirror can split reads between two disks.
another way to say the s
> Maybe to create snapshots "after the fact"
how does one quiesce a drive "after the fact"?
frequent snapshots offer outstanding "oops" protection.
Rob
e help
of Victor Latushkin to attempt to recover your pool using painstaking manual
manipulation.
Recent putbacks seem to indicate that future releases will provide a mechanism
to allow mere mortals to recover from some of the errors caused by dropped
writes.
cheers,
Rob
ome fragmentation, 1/4 of c_max wasn't
enough metadata ARC space for the number of files in /var/pkg/download
good find Henrik!
Rob
786.html
Rob
Action: Restore the file in question if possible. Otherwise restore
the
entire pool from backup.
:<0x0>
:<0x15>
I bet it's in a snapshot that looks to have been destroyed already. Try
zpool clear POOL01
zpool scrub POOL01
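(Then, once the scrub finishes, check whether the error is gone; pool name as in
the commands above:)
zpool status -v POOL01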
ull
0.41u 0.07s 0:00.50 96.0%
perhaps your ARC is too small?
Rob
em RAM in hopes of increasing the ARC.
If m?u_ghost is a small %, there is no point in adding an L2ARC.
If you do add an L2ARC, one must have RAM between c and zfs_arc_max for its
pointers.
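(For reference, a sketch of how to read those numbers from the arcstats kstat;
the ghost-list hit counters indicate reads that recently fell out of the ARC and
so would likely have been L2ARC hits:)
kstat -p zfs:0:arcstats | egrep 'ghost_hits|c_max|size'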
Rob
Folks,
Need help with ZFS recovery following zfs create ...
We recently received new laptops (hardware refresh) and I simply transferred the
multiboot HDD (using OpenSolaris 2008.11 as the primary production OS) from the
old laptop to the new one (used the live DVD to do the zpool import, updat
I'm sure this has been discussed in the past. But it's very hard to
understand, or even patch, incredibly advanced software such as ZFS
without a deep understanding of the internals.
It will take quite a while before anyone can start understanding a
file system which was developed behind closed door
uptime of 1116 days) so the finger is still
pointed at VirtualBox's "hardware" implementation.
As for ZFS requiring "better" hardware, you could turn
off checksums and other protections so one isn't notified
of issues, making it act like the others.
1 user, load average: 0.07, 0.05, 0.05
r...@pdm # date
Mon Jul 20 09:33:07 EDT 2009
r...@pdm # uname -a
SunOS pdm 5.9 Generic_112233-12 sun4u sparc SUNW,Ultra-250
Rob
> c4 scsi-bus connected configured unknown
> c4::dsk/c4t15d0 disk connected configured unknown
:
> c4::dsk/c4t33d0 disk connected configured unknown
> c4::es/ses0 ESI connected
>> We have a SC846E1 at work; it's the 24-disk, 4u version of the 826e1.
>> It's working quite nicely as a SATA JBOD enclosure.
> use the LSI SAS 3442e which also gives you an external SAS port.
I'm confused, I thought expanders only worked with SAS disks, and SATA disks
took an entire SAS port. c
> CPU is smoothed out quite a lot
Yes, but the area under the CPU graph is less, so the
rate of real work performed is less, so the entire
job took longer (albeit "smoother").
Rob
___
zfs-discuss mai
94G 14 1 877K 94.2K
c1t1d0s7 244G 200G 15 2 948K 96.5K
c0d0 193G 39.1G 10 1 689K 80.2K
note that c0d0 is basically full, but still serving 10
of every 15 reads, and 82% of the writes.
This appears to be the fix related to the ACLs, under which they seem to file all
of the ASSERT panics in zfs_fuid.c, even if they have nothing to do with
ACLs; my case being one of those.
Thanks for the pointer though!
-Rob
on this issue was done on the S10 side of the house and
there is a stealthy patch ID that can fix the issue.
Thanks,
-Rob
> correct ratio of arc to l2arc?
from http://blogs.sun.com/brendan/entry/l2arc_screenshots
"It costs some DRAM to reference the L2ARC, at a rate proportional to record
size.
For example, it currently takes about 15 Gbytes of DRAM to reference 600 Gbytes
of
L2ARC - at an 8 Kbyte ZFS record size
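(Working backwards from those figures, that's roughly 200 bytes of DRAM per
L2ARC record, so the cost scales with L2ARC size divided by record size. Just
arithmetic on the quoted numbers, not an official formula:)
# bytes of header per record implied by 15 GB DRAM for 600 GB of L2ARC at 8 KB records
echo 'scale=1; (15*2^30) / ((600*2^30)/8192)' | bc
# ~204.8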
MB/s
one raidz2 set of 8 disks can't be faster than the slowest
disk in the set, as it's one vdev... I would have expected
the 8 vdev set to be 8x faster than the single raidz[12]
set, but like Richard said, there is another bottleneck
in there that iostat will show
ach
disk in the same port too as you go.
> It is still the same size. I would expect it to go to 9G.
A reboot or export/import would have fixed this.
> cannot import 'grow': no such pool available
You meant to type:
zpool import -d /var/tmp grow
of one tray.
i.e., please don't discount how one arranges the vdevs
in a given configuration.
Rob
When I type `zpool import` to see what pools are out there, it gets to
/1: open("/dev/dsk/c5t2d0s0", O_RDONLY) = 6
/1: stat64("/usr/local/apache2/lib/libdevid.so.1", 0x08042758) Err#2 ENOENT
/1: stat64("/usr/lib/libdevid.so.1", 0x08042758)= 0
/1: d=0x02D90002 i
Not. Intel decided we don't need ECC memory on the Core i7
I thought that was a Core i7 vs. Xeon E55xx distinction for socket
LGA-1366, so that's why this X58 MB claims ECC support:
http://supermicro.com/products/motherboard/Xeon3000/X58/X8SAX.cfm
Thanks Nathan,
I want to test the underlying performance; of course, the problem is I want
to test the 16 or so disks in the stripe, rather than individual devices.
Thanks
Rob
On 28/01/2009 22:23, "Nathan Kroenert" wrote:
> Also - My experience with a very small ARC is that you
Solaris 10U4 which doesn't have them - can I disable it?
Many thanks
Rob
| Robert Brown - ioko Professional Services |
| Mobile: +44 (0)7769 711 885 |
Wow. I will read further into this. That seems like it could have great
applications. I assume the same is true of FCoE?
I am not experienced with iSCSI. I understand it's block level disk access via
TCP/IP. However I don't see how using it eliminates the need for virtualization.
Are you saying that a Windows Server can access a ZFS drive via iSCSI and store
NTFS files?
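(As I understand it, yes: the Windows server sees a raw block device over iSCSI
and formats it as NTFS, while ZFS supplies checksumming and snapshots underneath.
A sketch using the OpenSolaris-era shareiscsi property; names and sizes are
illustrative, and newer builds use COMSTAR instead:)
# create a 100 GB zvol and export it as an iSCSI target
zfs create -V 100G tank/winlun
zfs set shareiscsi=on tank/winlun
# then point the Windows iSCSI initiator at this host and format the LUN as NTFS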
ZFS is the bomb. It's a great file system. What are its real world
applications besides Solaris userspace? What I'd really like is to utilize the
benefits of ZFS across all the platforms we use. For instance, we use Microsoft
Windows Servers as our primary platform here. How might I utilize ZFS
> (with iostat -xtc 1)
It sure would be nice to know if actv > 0, so
we would know if the LUN was busy because
its queue is full or just slow (svc_t > 200).
For tracking errors try `iostat -xcen 1`
and `iostat -E`
Rob
The SATA framework uses the sd driver, so it's:
4 % smartctl -d scsi -a /dev/rdsk/c4t2d0s0
smartctl version 5.36 [i386-pc-solaris2.8] Copyright (C) 2002-6 Bruce Allen
Home page is http://smartmontools.sourceforge.net/
Device: ATA WDC WD1001FALS-0 Version: 0K05
Serial number:
Device type: disk
c5t5d0p0 ATA WDC WD3200JD-00K 5J08 0 C (32 F) Solaris2
Do you know of a Solaris tool to get SMART data?
Rob
Bump.
Some of the threads on this were last posted to over a year ago. I checked
6485689 and it is not fixed yet; is there any work being done in this area?
Thanks,
Rob
> There may be some work being done to fix this:
>
> zpool should support raidz of mirrors
> http://bugs.ope
he should be expected with 16G
filled for months. (Still, it might not be an issue for a single home user,
but if you're married it might be :-)
the Enterprise version of the above drive is
http://www.wdc.com/en/products/Products.asp?DriveID=503
possibly with a desirable faster timeout.
Rob
> WD Caviar Black drive [...] Intel E7200 2.53GHz 3MB L2
> The P45 based boards are a no-brainer
16G of DDR2-1066 with P45 or
8G of ECC DDR2-800 with 3210 based boards
That is the question.
Rob
ered ECC /
non-ECC SDRAM.
http://www.intel.com/products/server/chipsets/3200-3210/3200-3210-overview.htm
Rob
change paths. i.e., the disk label devid
and /etc/zfs/zpool.cache are unnecessary. Both will remain
wrong until a scrub.
So, perhaps the issue is with an EFI labeled disk with old
pool info getting converted to VTOC label for zfs root install.
> ECC?
$60 unbuffered 4GB 800MHz DDR2 ECC CL5 DIMM (Kit Of 2)
http://www.provantage.com/kingston-technology-kvr800d2e5k2-4g~7KIN90H4.htm
for Intel 32x0 north bridge like
http://www.provantage.com/supermicro-x7sbe~7SUPM11K.htm
> The other changes that will appear in 0.11 (which is
> nearly done) are:
Still looking forward to seeing .11 :)
Think we can expect a release soon? (or at least svn access so that others can
check out the trunk?)
> Rob wrote:
> > Hello All!
> >
> > Is there a command to force a re-inheritance/reset
> of ACLs? e.g., if i have a directory full of folders
> that have been created with inherited ACLs, and i
> want to change the ACLs on the parent folder, how can
&
Hello All!
Is there a command to force a re-inheritance/reset of ACLs? E.g., if I have a
directory full of folders that have been created with inherited ACLs, and I
want to change the ACLs on the parent folder, how can I force a reapply of all
ACLs?
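(I'm not aware of a single "reapply inheritance" command; a sketch of doing it by
hand with Solaris chmod's NFSv4 ACL syntax, using an illustrative path and ACE:)
# strip non-trivial ACEs everywhere below the parent
chmod -R A- /tank/parent
# set the desired inheritable ACE on the parent (example user and permissions)
chmod A+user:webteam:read_data/write_data/execute:file_inherit/dir_inherit:allow /tank/parent
# existing directories don't pick up inheritance retroactively, so push it down;
# plain files would want the same ACE without the inherit flags
find /tank/parent -type d -exec chmod \
  A+user:webteam:read_data/write_data/execute:file_inherit/dir_inherit:allow {} \;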
There may be some work being done to fix this:
zpool should support raidz of mirrors
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6485689
Discussed in this thread:
Mirrored Raidz ( Posted: Oct 19, 2006 9:02 PM )
http://opensolaris.org/jive/thread.jspa?threadID=15854&tstart=0
on", the ability to do this over a period of days is also useful.
Indeed the Plan9 filesystem simply snapshots to WORM and has no delete - nor
are they able to fill their drives faster than they can afford to buy new ones:
Venti Filesystem
http://www.cs.bell-labs.com/who/seanq/p9trace.html
R
wap, OCFS2, NTFS, FAT -- so it might be better to suggest adding ZFS
there instead of focusing on non-ZFS solutions in this ZFS discussion group.
Rob
nd watch from the sidelines -- returning to the OS
when you thought you were 'safe' (and if not, jumping back out).
Thus, Mertol, it is possible (and could work very well).
Rob
> Solaris will allow you to do this, but you'll need to use SVM instead of ZFS.
>
> Or, I suppose, you could use SVM for RAID-5 and ZFS to mirror those.
> -- richard
Or run Linux ...
Richard, the ZFS Best Practices Guide says not to:
"Do not use the same disk or slice in both an SVM and ZFS con
> -Peter Tribble wrote:
>> On Sun, Jul 6, 2008 at 8:48 AM, Rob Clark wrote:
>> I have eight 10GB drives.
>> ...
>> I have 6 remaining 10 GB drives and I desire to
>> "raid" 3 of them and "mirror" them to the other 3 to
>> give me raid s
he "." is part of the URL (NMF) - so add it or you'll 404).
Rob
this link:
Using PPMD for compression
http://www.codeproject.com/KB/recipes/ppmd.aspx
Rob