Can a ZFS send stream become corrupt when piped between two hosts across a WAN
link using 'ssh'?
For example a host in Australia sends a stream to a host in the UK as follows:
# zfs send tank/f...@now | ssh host.uk zfs receive tank/bar
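One way to check for corruption in transit is to checksum the stream
independently on each end and compare the digests afterwards. A minimal sketch,
assuming bash on both hosts (for the process substitution), a sha256sum binary
on both, and a hypothetical snapshot name:
# checksum the stream as it leaves and as it arrives; the two digest files
# should match once the receive completes
zfs send tank/fs@now \
  | tee >(sha256sum > /tmp/send.sha256) \
  | ssh host.uk 'tee >(sha256sum > /tmp/recv.sha256) | zfs receive tank/bar'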
when the file system
gets above 80% full we seem to have quite a number of issues, much the same as what
you've had in the past: ps and prstat hanging.
are you able to tell me the IDR number that you applied?
Thanks,
Rob
data or do some other stuff to give the
system some load, it hangs. This happens after 5 minutes, or after 30 minutes, or
later, but it hangs. Then we get the problems shown in the attached pictures.
I have also emailed Areca. I hope they can fix it.
Regards,
Rob
Hello All!
Is there a command to force a re-inheritance/reset of ACLs? E.g., if I have a
directory full of folders that have been created with inherited ACLs, and I
want to change the ACLs on the parent folder, how can I force a reapply of all
ACLs?
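I don't know of a single built-in command for it. One workaround (a sketch
only, assuming the NFSv4 ACL syntax of Solaris chmod; the ACL entries and path
here are made up) is to set the desired inheritable ACL on the parent and then
push the same entries down onto the existing subdirectories:
# replace the ACL on the parent, then re-apply the same spec to every existing
# directory below it; the fd flags make it inherit to anything created later
chmod A=owner@:full_set:fd:allow,group@:read_set:fd:allow /tank/parent
find /tank/parent -type d -exec chmod A=owner@:full_set:fd:allow,group@:read_set:fd:allow {} +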
> Rob wrote:
> > Hello All!
> >
> > Is there a command to force a re-inheritance/reset
> > of ACLs? E.g., if I have a directory full of folders
> > that have been created with inherited ACLs, and I
> > want to change the ACLs on the parent folder, how can
> > I force a reapply of all ACLs?
> The other changes that will appear in 0.11 (which is
> nearly done) are:
Still looking forward to seeing .11 :)
Think we can expect a release soon? (or at least svn access so that others can
check out the trunk?)
change paths, i.e., the disk label devid
and /etc/zfs/zpool.cache are unnecessary. Both will remain
wrong until a scrub.
So, perhaps the issue is an EFI-labeled disk with old
pool info getting converted to a VTOC label for the zfs root install.
> WD Caviar Black drive [...] Intel E7200 2.53GHz 3MB L2
> The P45 based boards are a no-brainer
16G of DDR2-1066 with P45 or
8G of ECC DDR2-800 with 3210 based boards
That is the question.
Rob
he should be expected with 16G
filled for months. (Still, it might not be an issue for a single home user,
but if you're married it might be :-)
the Enterprise version of the above drive is
http://www.wdc.com/en/products/Products.asp?DriveID=503
possibly with a desirable faster timeout.
Rob
c5t5d0p0  ATA WDC WD3200JD-00K  5J08  0 C (32 F)  Solaris2
Do you know of a solaris tool to get SMART data?
Rob
> (with iostat -xtc 1)
it sure would be nice to know if actv > 0, so
we would know if the LUN was busy because
its queue is full or just slow (svc_t > 200).
for tracking errors try `iostat -xcen 1`
and `iostat -E`
Rob
ZFS is the bomb. It's a great file system. What are its real-world
applications besides Solaris userspace? What I'd really like is to utilize the
benefits of ZFS across all the platforms we use. For instance, we use Microsoft
Windows Servers as our primary platform here. How might I utilize ZFS
I am not experienced with iSCSI. I understand it's block level disk access via
TCP/IP. However I don't see how using it eliminates the need for virtualization.
Are you saying that a Windows Server can access a ZFS drive via iSCSI and store
NTFS files?
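For what it's worth, the usual pattern is to carve a zvol out of the pool and
export it as an iSCSI LUN; the Windows initiator then sees a raw block device
and formats it NTFS itself, while ZFS still provides checksums and snapshots
underneath. A rough sketch for an older OpenSolaris build with the legacy
iscsitgt framework (names and size are hypothetical; newer builds use COMSTAR
instead of shareiscsi):
# create a 100G block volume and export it as an iSCSI target
zfs create -V 100G tank/winlun
zfs set shareiscsi=on tank/winlun
iscsitadm list target -v     # confirm the target was created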
Wow. I will read further into this. That seems like it could have great
applications. I assume the same is true of FCoE?
each
node and failed to reproduce the problem.
I didn't try the SVM + ZFS combo.
Rob
how does one free segment(offset=77984887808 size=66560)
on a pool that won't import?
looks like I found
http://bugs.opensolaris.org/view_bug.do?bug_id=6580715
http://mail.opensolaris.org/pipermail/zfs-discuss/2007-September/042541.html
when I luupgrade a ufs partition with a
dvd-b62 that was bf
          orEdge 3510-421F-545.91GB>
          /scsi_vhci/[EMAIL PROTECTED]
       6. c8t600C0FF008266812A0877700d0 <SUN-StorEdge 3510-421F-545.91GB>
          /scsi_vhci/[EMAIL PROTECTED]
       7. c8t600C0FF0082668310F838000d0 <SUN-StorEdge 3510-421F-545.91GB>
          /
> Since no specific file or directory is mentioned
install newer bits and get better info automatically,
but for now type:
zdb -vvv zpool1 17
zdb -vvv zpool1 18
zdb -vvv zpool1 19
echo remove those objects
zpool clear zpool1
zpool scrub zpool1
echo "set sata:sata_func_enable = 0x7" >> /etc/system
but of course fixing the drive FW is the answer.
ref:
http://mail.opensolaris.org/pipermail/storage-discuss/2008-January/004428.html
Rob
and in all cases it's not a zfs issue, but a disk, controller
or [EMAIL PROTECTED] issue.
Rob
> cannot share 'tank/software': smb add share failed
you meant to post this in storage-discuss
but type:
chmod 777 /tank/software
zfs set sharesmb=name=software tank/software
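and, if it helps, a quick sanity check afterwards (nothing more than reading
the property back and listing the active shares):
zfs get sharesmb tank/software
sharemgr show -vp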
> Memory: 3072M phys mem, 31M free mem, 2055M swap, 1993M free swap
perhaps this might help:
mkfile 4g /usr/swap
swap -a /usr/swap
http://blogs.sun.com/realneel/entry/zfs_arc_statistics
Rob
ram rather than an SSD cache
device would be better? unless you have really slow iSCSI vdevs :-)
Rob
cted until
you attach a mirror to that single disk. one can't (currently)
remove a vdev (shrink a pool), but one can increase each element
of a vdev, increasing the size of the pool while maintaining the
number of elements (disk count).
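a toy illustration of that last point with file-backed vdevs (file names and
sizes are made up; a real pool would use whole disks, and whether the extra
space shows up automatically depends on the build):
# replace each member of a mirror with a larger backing file; once the last
# small member is gone the pool can grow to the new size
mkfile 100m /var/tmp/s1 /var/tmp/s2
mkfile 200m /var/tmp/b1 /var/tmp/b2
zpool create demo mirror /var/tmp/s1 /var/tmp/s2
zpool replace demo /var/tmp/s1 /var/tmp/b1
zpool replace demo /var/tmp/s2 /var/tmp/b2
zpool list demo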
't horribly
http://mail.opensolaris.org/pipermail/zfs-discuss/2007-July/041956.html
perhaps adding ram to the system would be more flexible?
Rob
> Thus, if you have a 2GB, a 3GB, and a 5GB device in a pool,
> the pool's capacity is 3 x 2GB = 6GB
If you put the three into one raidz vdev it will be 2+2,
until you replace the 2G disk with a 5G, at which point
it will be 3+3, and then when you replace the 3G with a 5G
it will be 5+5G. and if yo
't change with zfs, the system with the most
vdevs wins :-)
Rob
it's 128k; with a 4+1 raidz set
each disk will see 32k, so the 9+1 would get 14.2k.
and what if the block is less than 128k? wouldn't
it be better to have two sets of 4+1 and go
twice as fast, splitting the blocks less in the
process? (two vdevs)
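for reference, the two-vdev layout being described would look something like
this (device names are hypothetical):
# one pool, two 4+1 raidz vdevs; zfs stripes blocks across the two vdevs
zpool create tank \
    raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 \
    raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0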
your "Type" "sata-port" will change to "disk" when you put
a disk on it. like:
1 % cfgadm
Ap_Id Type Receptacle Occupant Condition
sata0/0::dsk/c2t0d0disk connectedconfigured ok
sata0/1::dsk/c2t1d0cd/dvd connected
You can add more disks to a pool that is in raid-z; you just can't
add disks to the existing raid-z vdev. For example:
cd /usr/tmp
mkfile -n 100m 1 2 3 4 5 6 7 8 9 10
zpool create t raidz /usr/tmp/1 /usr/tmp/2 /usr/tmp/3
zpool status t
zfs list t
zpool add -f t raidz2 /usr/tmp/4 /usr/tmp/5 /usr/tmp/6 /usr
0     0
            c3t9d0    ONLINE       0     0     0
          spares
            c2t8d0    AVAIL
            c3t10d0   AVAIL
Why doesn't ZFS automatically use one of the hot spares? Is this expected
behavior or a bug?
Rob
> ill have to take a closer
> look at the details :-)
Ok, let me try to reproduce the problem and get you more info.
Rob
d partitioning?
Rob
> I like the idea of swapping on SSD too, but why not make a zvol for the L2ARC
> so you're not limited by the hard partitioning?
it lives through a reboot..
zpool create -f test c9t3d0s0 c9t4d0s0
zfs create -V 3G rpool/cache
zpool add test cache /dev/zvol/dsk/rpool/cache
reboot
if you're asking for an L2ARC on rpool, well, yeah, it's not mounted soon enough,
but the point is to put rpool, swap, and L2ARC for your storage pool all on a
single SSD.
your file or zvol will not be there
when the box comes back, even though your program had finished seconds
before the crash.
Rob
3(ff02fa4b0058)
ff000f4efbb0 smb_session_worker+0x6e(ff02fa4b0058)
ff000f4efc40 taskq_d_thread+0xb1(ff02e51b9e90)
ff000f4efc50 thread_start+8()
>
I can provide any other info that may be needed. Thank you in advance for your
help!
Rob
--
Rob Cherveny
Manager of Information Techn
Folks, I posted this question on (OpenSolaris - Help) without any replies
http://opensolaris.org/jive/thread.jspa?threadID=129436&tstart=0 and am
re-posting here in the hope someone can help ... I have updated the wording a
little too (in an attempt to clarify).
I currently use OpenSolaris on a T
Roy,
Thanks for your reply.
I did get a new drive and attempted the approach (as you suggested, prior to
your reply); however, once booted off the OpenSolaris Live CD (or the rebuilt new
drive), I was not able to import the rpool (which I had established had sector
errors). I expect I should hav
standpoint - as opposed to over three dozen tiny Drives).
Thanks for your reply,
Rob
doubt I can
afford as many as 10 Drives nor could I stuff them
into my Box, so please suggest options that use less than that many (most
preferably less than 7).
A: ?
Thanks,
Rob
> I'm building my new storage server, all the parts should come in this week.
> ...
Another answer is here:
http://eonstorage.blogspot.com/2010/03/whats-best-pool-to-build-with-3-or-4.html
Rob
> I wanted to build a small back up (maybe also NAS) server using
A common question that I am trying to get answered (and have a few) here:
http://www.opensolaris.org/jive/thread.jspa?threadID=102368&tstart=0
Rob
in reality it would be OK.
If it is not OK (for you) then you have open Memory Slots in which to add more
Chips (which you are certain to want to do in the future).
Rob
em ram in hopes of increasing arc.
if m?u_ghost (mru_ghost or mfu_ghost) is a small %, there is no point in adding an L2ARC.
if you do add an L2ARC, one must have ram between c and zfs_arc_max for its
pointers.
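a quick way to eyeball those ghost lists on a live system (assuming the usual
zfs:0:arcstats kstat; statistic names can vary a little by build):
# ghost-list hits are blocks that were evicted from ARC but wanted again --
# lots of them suggests an L2ARC could help
kstat -p zfs:0:arcstats | egrep ghost
kstat -p zfs:0:arcstats:size
kstat -p zfs:0:arcstats:c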
Rob
ull
0.41u 0.07s 0:00.50 96.0%
perhaps your ARC is too small?
Rob
Action: Restore the file in question if possible. Otherwise restore the
entire pool from backup.
:<0x0>
:<0x15>
bet it's in a snapshot that looks to have been destroyed already. try:
zpool clear POOL01
zpool scrub POOL01
786.html
Rob
ome fragmentation, 1/4 of c_max wasn't
enough metadata arc space for the number of files in /var/pkg/download.
good find Henrik!
Rob
e help
of Victor Latushkin to attempt to recover your pool using painstaking manual
manipulation.
Recent putbacks seem to indicate that future releases will provide a mechanism
to allow mere mortals to recover from some of the errors caused by dropped
writes.
cheers,
Rob
frequent snapshots offer outstanding "oops" protection.
Rob
> Maybe to create snapshots "after the fact"
how does one quiesce a drive "after the fact"?
> from a two disk (10krpm) mirror layout to a three disk raidz-1.
writes will be unnoticeably slower for raidz1 because of parity calculation
and latency of a third spindle. but reads will be 1/2 the speed
of the mirror, because the mirror can split the reads between two disks.
another way to say the s
stripe
266/6 MB with 6 disks on shared PCI in a raidz.
we know disks don't go that fast anyway, but going from an 8h to 15h
scrub is very reasonable depending on vdev config.
Rob
CIE.
Rob
onnection.
wonder if there is an LSI issue with too many links in HBA mode?
Rob
t fun. one
might make a tiny slice on all the disks of the raidz2
and list six log devices (6-way stripe) and not bother
adding the other two disks.
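a sketch of what that might look like, assuming a small slice (s1 here, purely
hypothetical) was carved on each of six raidz2 members:
# six small slices added as log devices; zfs spreads synchronous writes
# across all of them
zpool add tank log c1t0d0s1 c1t1d0s1 c1t2d0s1 c1t3d0s1 c1t4d0s1 c1t5d0s1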
Rob
How can we help with what is outlined below? I can reproduce these at will, so
if anyone at Sun would like an environment to test this situation, let me know.
What is the best info to grab for you folks to help here?
Thanks - nola
This is in regard to these threads:
http://www.opensolaris.or
I can report I/O errors with Chenbro-based LSI SASx36 IC-based
expanders tested with 111b/121/128a/129. The HBA was LSI 1068 based.
If I bypass the expander by adding more HBA controllers, mpt does not have
I/O errors.
-nola
On Dec 8, 2009, at 6:48 AM, Bruno Sousa wrote:
Hi James,
Thank yo
this one has me a little confused. ideas?
j...@opensolaris:~# zpool import z
cannot mount 'z/nukeme': mountpoint or dataset is busy
cannot share 'z/cle2003-1': smb add share failed
j...@opensolaris:~# zfs destroy z/nukeme
internal error: Bad exchange descriptor
Abort (core dumped)
j...@opensolaris
> By partitioning the first two drives, you can arrange to have a small
> zfs-boot mirrored pool on the first two drives, and then create a second
> pool as two mirror pairs, or four drives in a raidz to support your data.
agreed..
2 % zpool iostat -v
capacity operations
> a 1U or 2U JBOD chassis for 2.5" drives,
from http://supermicro.com/products/nfo/chassis_storage.cfm
the E1 (single) or E2 (dual) options have a SAS expander, so
http://supermicro.com/products/chassis/2U/?chs=216
fits your build, or build it yourself with
http://supermicro.com/products/accessori
r".
I'm thankful Sun shares their research and we can build on it.
(btw, netapp ontap 8 is freebsd, and runs on std hardware
after a little bios work :-)
Rob
ECC, this close $$
http://www.newegg.com/Product/Product.aspx?Item=N82E16819115214
Now, this gets one to 8G ECC easily...AMD's unfair advantage is all those
ram slots on their multi-die MBs... A slow AMD cpu with 64G ram
might be better depending on your working set / dedup requirements.
clusions.
Rob
d if one uses all 16 slots, that 667MHz simm runs at 533MHz
with AMD. The same is true for Lynnfield: if one uses Registered
DDR3, one only gets 800MHz with all 6 slots (single or dual rank).
> Regardless, for zfs, memory is more important than raw CPU
agreed! but everything must be balanced.
spx?Item=N82E16820139050
But we are still stuck at 8G without going to expensive ram or
a more expensive CPU.
Rob
oop!
Rob
nts, this small loss might be the loss of their
entire dataset.
Rob
could share
L2ARC and ZIL devices, rather than buy two sets.
It appears possible to set up 7x450gb mirrored sets and 7x600gb mirrored sets
in the same volume, without losing capacity. Is that a bad idea? Is there a
problem with having different stripe sizes, like this?
Thanks,
Rob
Thanks, Ian.
If I understand correctly, the performance would then drop to the same level as
if I set them up as separate volumes in the first place.
So, I get double the performance for 75% of my data, and equal performance for
25% of my data, and my L2ARC will adapt to my working set across b
d the rest fits in L2ARC,
performance will be good.
Thanks,
Rob
Thanks, Richard. Your answers were very helpful.
15 23:05 /kernel/drv/amd64/mpt
-rwxr-xr-x 1 root sys 399952 Nov 15 23:06 /kernel/drv/amd64/mpt_sas
and mpt_sas has a new printf:
"reset was running, this event can not be handled this time"
Rob
age, even though my cache should be warm by now, and my SSDs are far from
full.
set zfs:l2arc_noprefetch = 0
Am I setting this wrong? Am I misunderstanding this option?
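one way to confirm whether the setting actually took after boot (a sketch;
needs root and the mdb kernel debugger, and assumes the tunable keeps its
l2arc_noprefetch name in your build):
# print the live value; 0 means prefetched reads are allowed into the L2ARC
echo "l2arc_noprefetch/D" | mdb -k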
Thanks,
Rob
ere a
special way to configure one of these LSI boards?
Thanks,
Rob
Markus,
I'm pretty sure that I have the MD1000 plugged in properly, especially since
the same connection works on the 9280 and Perc 6/e. It's not in split mode.
Thanks for the suggestion, though.
as a hardware problem, or a
Solaris bug.
- Rob
> I have 15x SAS drives in a Dell MD1000 enclosure,
> attached to an LSI 9200-16e. This has been working
> well. The system is boothing off of internal drives,
> on a Dell SAS 6ir.
>
> I just tried to add a second storag
References:
Thread: ZFS effective short-stroking and connection to thin provisioning?
http://opensolaris.org/jive/thread.jspa?threadID=127608
Confused about consumer drives and zfs can someone help?
http://opensolaris.org/jive/thread.jspa?threadID=132253
Recommended RAM for ZFS on various platf
plus virtualbox 4.1 with "network in a box" would like snv_159
from http://www.virtualbox.org/wiki/Changelog
Solaris hosts: New Crossbow based bridged networking driver for Solaris 11
build 159 and above
Rob
Try mirrors. You will get much better multi-user performance, and you can
easily split the mirrors across enclosures.
If your priority is performance over capacity, you could experiment with n-way
mirrors, since more mirrors will load-balance reads better than more stripes.
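for example, a pool of 3-way mirrors (hypothetical device names) might be
created like this:
# two 3-way mirror vdevs: reads load-balance across three copies per vdev,
# writes go to all three
zpool create tank \
    mirror c1t0d0 c1t1d0 c1t2d0 \
    mirror c1t3d0 c1t4d0 c1t5d0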
Generally, mirrors resilver MUCH faster than RAIDZ, and you only lose
redundancy on that stripe, so combined, you're much closer to RAIDZ2 odds than
you might think, especially with hot spare(s), which I'd recommend.
When you're talking about IOPS, each stripe can support 1 simultaneous user.
I may have RAIDZ reading wrong here. Perhaps someone could clarify.
For a read-only workload, does each RAIDZ drive act like a stripe, similar to
RAID5/6? Do they have independent queues?
It would seem that there is no escaping read/modify/write operations for
sub-block writes, forcing the RA
RAIDZ has to rebuild data by reading all drives in the group, and
reconstructing from parity. Mirrors simply copy a drive.
Compare 3tb mirrors vs. 9x3tb RAIDZ2.
Mirrors:
Read 3tb
Write 3tb
RAIDZ2:
Read 24tb
Reconstruct data on CPU
Write 3tb
In this case, RAIDZ is at least 8x slower to resilver
> I may have RAIDZ reading wrong here. Perhaps someone
> could clarify.
>
> For a read-only workload, does each RAIDZ drive act
> like a stripe, similar to RAID5/6? Do they have
> independent queues?
>
> It would seem that there is no escaping
> read/modify/write operations for sub-block writes
here are no writes in the queue).
Perhaps you are saying that they act like stripes for bandwidth purposes, but
not for read ops/sec?
-Rob
-Original Message-
From: Bob Friesenhahn [mailto:bfrie...@simple.dallas.tx.us]
Sent: Saturday, August 06, 2011 11:41 AM
To: Rob Cohen
Cc: zfs-dis
> If I'm not mistaken, a 3-way mirror is not
> implemented behind the scenes in
> the same way as a 3-disk raidz3. You should use a
> 3-way mirror instead of a
> 3-disk raidz3.
RAIDZ2 requires at least 4 drives, and RAIDZ3 requires at least 5 drives. But,
yes, a 3-way mirror is implemented tota
ly) and will be
testing that now.
System load is definitely going to factor into my configuration choice.
Thanks for all the replies (this post seems to go to the
zfs-discuss@opensolaris.org mailing list but posts there don't seem to end up
here).
Sincerely,
Rob
> On July 14, 2008 7:49:58 PM -0500 Bob Friesenhahn
> <[EMAIL PROTECTED]> wrote:
> > With ZFS and modern CPUs, the parity calculation is
> surely in the noise to the point of being unmeasurable.
>
> I would agree with that. The parity calculation has *never* been a
> factor in and of itself. T
this link:
Using PPMD for compression
http://www.codeproject.com/KB/recipes/ppmd.aspx
Rob
he "." is part of the URL (NMF) - so add it or you'll 404).
Rob
> -Peter Tribble wrote:
>> On Sun, Jul 6, 2008 at 8:48 AM, Rob Clark wrote:
>> I have eight 10GB drives.
>> ...
>> I have 6 remaining 10 GB drives and I desire to
>> "raid" 3 of them and "mirror" them to the other 3 to
>> give me raid s
> Solaris will allow you to do this, but you'll need to use SVM instead of ZFS.
>
> Or, I suppose, you could use SVM for RAID-5 and ZFS to mirror those.
> -- richard
Or run Linux ...
Richard, the ZFS Best Practices Guide says not to:
"Do not use the same disk or slice in both an SVM and ZFS con
nd watch from the sidelines -- returning to the OS
when you thought you were 'safe' (and if not, jumping back out).
Thus, Mertol, it is possible (and could work very well).
Rob
This message posted from opensolaris.org
wap, OCFS2, NTFS, FAT -- so it might be better to suggest adding ZFS
there instead of focusing on non-ZFS solutions in this ZFS discussion group.
Rob
on", the ability to do this over a period of days is also useful.
Indeed the Plan9 filesystem simply snapshots to WORM and has no delete - nor
are they able to fill their drives faster than they can afford to buy new ones:
Venti Filesystem
http://www.cs.bell-labs.com/who/seanq/p9trace.html
R
There may be some work being done to fix this:
zpool should support raidz of mirrors
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6485689
Discussed in this thread:
Mirrored Raidz ( Posted: Oct 19, 2006 9:02 PM )
http://opensolaris.org/jive/thread.jspa?threadID=15854&tstart=0
> ECC?
$60 unbuffered 4GB 800MHz DDR2 ECC CL5 DIMM (Kit Of 2)
http://www.provantage.com/kingston-technology-kvr800d2e5k2-4g~7KIN90H4.htm
for Intel 32x0 north bridge like
http://www.provantage.com/supermicro-x7sbe~7SUPM11K.htm
ered ECC / non-ECC SDRAM.
http://www.intel.com/products/server/chipsets/3200-3210/3200-3210-overview.htm
Rob