[zfs-discuss] ZFS send and receive corruption across a WAN link?

2010-03-18 Thread Rob
Can a ZFS send stream become corrupt when piped between two hosts across a WAN link using 'ssh'? For example, a host in Australia sends a stream to a host in the UK as follows: # zfs send tank/f...@now | ssh host.uk receive tank/bar
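As quoted, the remote command omits the zfs binary itself; a corrected sketch of the pipeline (hypothetical dataset names, since the original is truncated):

# zfs send tank/foo@now | ssh host.uk "zfs receive tank/bar"

The send stream carries its own checksums, so a stream damaged in transit should make zfs receive abort with a checksum mismatch rather than silently commit corrupt data.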

Re: [zfs-discuss] Does ZFS use large memory pages?

2010-05-06 Thread Rob
when the file system gets above 80% full we seem to have quite a number of issues, much the same as what you've had in the past: ps and prstat hanging. Are you able to tell me the IDR number that you applied? Thanks, Rob

Re: [zfs-discuss] zfs, raidz, spare and jbod

2010-01-10 Thread Rob
data or do some other stuff to give the system some load, it hangs. This happens after 5 minutes, or after 30 minutes, or later, but it hangs. Then we get the problems shown in the attached pictures. I have also emailed Areca. I hope they can fix it. Regards, Rob

[zfs-discuss] force a reset/reinheit zfs acls?

2008-08-05 Thread Rob
Hello All! Is there a command to force a re-inheritance/reset of ACLs? E.g., if I have a directory full of folders that have been created with inherited ACLs, and I want to change the ACLs on the parent folder, how can I force a reapply of all ACLs?
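One approach is to stamp the desired inheritable ACL over the whole tree with a recursive chmod; a hypothetical sketch (the ACL spec below is invented for illustration):

# ls -dV /tank/parent
# chmod -R A=owner@:full_set:fd:allow,group@:read_set:fd:allow /tank/parent

ls -dV shows the parent's current ACEs; the fd inheritance flags make the ACEs inherit to new files and directories, while -R rewrites the ACL on everything already present.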

Re: [zfs-discuss] force a reset/reinheit zfs acls?

2008-08-05 Thread Rob
> Rob wrote: > > Hello All! > > > > Is there a command to force a re-inheritance/reset > of ACLs? e.g., if i have a directory full of folders > that have been created with inherited ACLs, and i > want to change the ACLs on the parent folder, how can &

Re: [zfs-discuss] zfs-auto-snapshot 0.11 work (was Re: zfs-auto-snapshot with at scheduling )

2008-08-06 Thread Rob
> The other changes that will appear in 0.11 (which is > nearly done) are: Still looking forward to seeing .11 :) Think we can expect a release soon? (or at least svn access so that others can check out the trunk?)

Re: [zfs-discuss] Ended up in GRUB prompt after the installation on ZFS

2008-11-09 Thread Rob
change paths, i.e., the disk label devid and /etc/zfs/zpool.cache are unnecessary. Both will remain wrong until a scrub. So, perhaps the issue is an EFI-labeled disk with old pool info getting converted to a VTOC label for the ZFS root install.

Re: [zfs-discuss] Still more questions WRT selecting a mobo for small ZFS RAID

2008-11-14 Thread Rob
> WD Caviar Black drive [...] Intel E7200 2.53GHz 3MB L2 > The P45 based boards are a no-brainer 16G of DDR2-1066 with P45 or 8G of ECC DDR2-800 with 3210 based boards That is the question. Rob

Re: [zfs-discuss] Still more questions WRT selecting a mobo for small ZFS RAID

2008-11-15 Thread Rob
he should be expected with 16G filled for months. (still, might not be an issue for a single home user, but if you're married it might be :-) the Enterprise version of the above drive is http://www.wdc.com/en/products/Products.asp?DriveID=503 possibly with a desirable faster timeout. Rob

Re: [zfs-discuss] Is SUNWhd for Thumper only?

2008-12-01 Thread Rob
c5t5d0p0 ATA WDC WD3200JD-00K 5J08 0 C (32 F) Solaris2 Do you know of a Solaris tool to get SMART data? Rob

Re: [zfs-discuss] zfs & iscsi sustained write performance

2008-12-08 Thread Rob
> (with iostat -xtc 1) it sure would be nice to know if actv > 0 so we would know if the lun was busy because its queue is full or just slow (svc_t > 200) for tracking errors try `iostat -xcen 1` and `iostat -E` Rob

[zfs-discuss] Practical Application of ZFS

2009-01-06 Thread Rob
ZFS is the bomb. It's a great file system. What are its real-world applications besides Solaris userspace? What I'd really like is to utilize the benefits of ZFS across all the platforms we use. For instance, we use Microsoft Windows Servers as our primary platform here. How might I utilize ZFS

Re: [zfs-discuss] Practical Application of ZFS

2009-01-06 Thread Rob
I am not experienced with iSCSI. I understand it's block-level disk access via TCP/IP. However, I don't see how using it eliminates the need for virtualization. Are you saying that a Windows Server can access a ZFS drive via iSCSI and store NTFS files?
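That is the usual pattern: export a zvol as an iSCSI LUN and let Windows format it NTFS. A hedged sketch using the legacy shareiscsi property of that era (names and size invented; later builds use COMSTAR instead):

# zfs create -V 100G tank/winlun
# zfs set shareiscsi=on tank/winlun

The Windows iSCSI initiator then sees a raw 100GB disk, while ZFS provides checksums and snapshots underneath the block device.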

Re: [zfs-discuss] Practical Application of ZFS

2009-01-06 Thread Rob
Wow. I will read further into this. That seems like it could have great applications. I assume the same is true of FCoE?

Re: [zfs-discuss] Strange behavior zfs and soalris cluster

2007-09-17 Thread Rob
each node and failed to reproduce the problem. I didn't try the SVM + ZFS combo. Rob

[zfs-discuss] zfs: allocating allocated segment(offset=77984887808 size=66560)

2007-10-11 Thread Rob
how does one free segment(offset=77984887808 size=66560) on a pool that won't import? looks like I found http://bugs.opensolaris.org/view_bug.do?bug_id=6580715 http://mail.opensolaris.org/pipermail/zfs-discuss/2007-September/042541.html when I luupgrade a ufs partition with a dvd-b62 that was bf

[zfs-discuss] zpool attach problem

2008-01-22 Thread Rob
orEdge 3510-421F-545.91GB> /scsi_vhci/[EMAIL PROTECTED] 6. c8t600C0FF008266812A0877700d0 <SUN-StorEdge 3510-421F-545.91GB> /scsi_vhci/[EMAIL PROTECTED] 7. c8t600C0FF0082668310F838000d0 <SUN-StorEdge 3510-421F-545.91GB> /

Re: [zfs-discuss] zfs data corruption

2008-04-23 Thread Rob
> Since no specific file or directory is mentioned, install newer bits and get better info automatically, but for now type:
zdb -vvv zpool1 17
zdb -vvv zpool1 18
zdb -vvv zpool1 19
echo remove those objects
zpool clear zpool1
zpool scrub zpool1

Re: [zfs-discuss] cp -r hanged copying a directory

2008-05-03 Thread Rob
ata:sata_func_enable = 0x7" >> /etc/system but of course fixing the drive FW is the answer. ref: http://mail.opensolaris.org/pipermail/storage-discuss/2008-January/004428.html Rob

Re: [zfs-discuss] cp -r hanged copying a directory

2008-05-03 Thread Rob
and in all cases it's not a zfs issue, but a disk, controller or [EMAIL PROTECTED] issue. Rob

Re: [zfs-discuss] sharesmb settings not working with some filesystems

2008-05-05 Thread Rob
> cannot share 'tank/software': smb add share failed you meant to post this in storage-discuss but type:
chmod 777 /tank/software
zfs set sharesmb=name=software tank/software

Re: [zfs-discuss] help with a BIG problem, can't import my zpool anymore

2008-05-23 Thread Rob
> Memory: 3072M phys mem, 31M free mem, 2055M swap, 1993M free swap perhaps this might help..
mkfile -n 4g /usr/swap
swap -a /usr/swap
http://blogs.sun.com/realneel/entry/zfs_arc_statistics Rob

Re: [zfs-discuss] ZFS in S10U6 vs openSolaris 05/08

2008-05-23 Thread Rob
ram rather than an SSD cache device would be better? unless you have really slow iscsi vdevs :-) Rob

Re: [zfs-discuss] ZFS: A general question

2008-05-24 Thread Rob
cted until you attach a mirror to that single disk. one can't (currently) remove a vdev (shrink a pool), but one can increase each element of a vdev, increasing the size of the pool while maintaining the number of elements (disk count)

Re: [zfs-discuss] ZFS in S10U6 vs openSolaris 05/08

2008-05-24 Thread Rob
't horribly http://mail.opensolaris.org/pipermail/zfs-discuss/2007-July/041956.html perhaps adding ram to the system would be more flexible? Rob

Re: [zfs-discuss] ZFS: A general question

2008-05-25 Thread Rob
> Thus, if you have a 2GB, a 3GB, and a 5GB device in a pool, > the pool's capacity is 3 x 2GB = 6GB If you put the three into one raidz vdev it will be 2+2 until you replace the 2G disk with a 5G, at which point it will be 3+3, and then when you replace the 3G with a 5G it will be 5+5G. and if yo

Re: [zfs-discuss] ZFS Project Hardware

2008-05-25 Thread Rob
't change with zfs, the system with the most vdevs wins :-) Rob

Re: [zfs-discuss] What is a vdev?

2008-05-25 Thread Rob
its 128k, with a 4+1 raidz set each disk will see 32k. so the 9+1 would get 14.2k. and what if the block is less than 128k? wouldn't it be better to have two sets of 4+1 and go twice as fast splitting the blocks less in the process? (two vdevs)

Re: [zfs-discuss] disk names?

2008-06-03 Thread Rob
your "Type" "sata-port" will change to "disk" when you put a disk on it. like: 1 % cfgadm Ap_Id Type Receptacle Occupant Condition sata0/0::dsk/c2t0d0disk connectedconfigured ok sata0/1::dsk/c2t1d0cd/dvd connected

Re: [zfs-discuss] Netapp to Solaris/ZFS issues

2006-12-06 Thread Rob
You can add more disks to a pool that is in raid-z, you just can't add disks to the existing raid-z vdev.
cd /usr/tmp
mkfile -n 100m 1 2 3 4 5 6 7 8 9 10
zpool create t raidz /usr/tmp/1 /usr/tmp/2 /usr/tmp/3
zpool status t
zfs list t
zpool add -f t raidz2 /usr/tmp/4 /usr/tmp/5 /usr/tmp/6 /usr

[zfs-discuss] ZFS Hot Spare Behavior

2007-01-08 Thread Rob
0 0 c3t9d0 ONLINE 0 0 0
spares
  c2t8d0   AVAIL
  c3t10d0  AVAIL
Why doesn't ZFS automatically use one of the hot spares? Is this expected behavior or a bug? Rob

[zfs-discuss] Re: ZFS Hot Spare Behavior

2007-01-09 Thread Rob
ill have a take a closer > look at the details :-) Ok, let me try to reproduce the problem and get you more info. Rob

Re: [zfs-discuss] SSD As ARC

2010-03-28 Thread Rob Logan
d partitioning? Rob

Re: [zfs-discuss] SSD As ARC

2010-03-28 Thread Rob Logan
> I like the idea of swapping on SSD too, but why not make a zvol for the L2ARC > so you're not limited by the hard partitioning? it lives through a reboot..
zpool create -f test c9t3d0s0 c9t4d0s0
zfs create -V 3G rpool/cache
zpool add test cache /dev/zvol/dsk/rpool/cache
reboot

Re: [zfs-discuss] sharing a ssd between rpool and l2arc

2010-03-30 Thread Rob Logan
ol/cache zpool add test cache /dev/zvol/dsk/rpool/cache reboot if you're asking for an L2ARC on rpool, well, yea, it's not mounted soon enough, but the point is to put rpool, swap, and L2ARC for your storage pool all on a single SSD..

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-03-30 Thread Rob Logan
your file or zvol will not be there when the box comes back, even though your program had finished seconds before the crash. Rob

[zfs-discuss] Snv_126 Kernel PF Panic

2010-04-09 Thread Rob Cherveny
3(ff02fa4b0058) ff000f4efbb0 smb_session_worker+0x6e(ff02fa4b0058) ff000f4efc40 taskq_d_thread+0xb1(ff02e51b9e90) ff000f4efc50 thread_start+8() > I can provide any other info that may be needed. Thank you in advance for your help! Rob -- Rob Cherveny Manager of Information Techn

[zfs-discuss] reconstruct recovery of rpool zpool and zfs file system with bad sectors

2010-05-20 Thread Rob Levy
Folks I posted this question on (OpenSolaris - Help) without any replies http://opensolaris.org/jive/thread.jspa?threadID=129436&tstart=0 and am re-posting here in the hope someone can help ... I have updated the wording a little too (in an attempt to clarify) I currently use OpenSolaris on a T

Re: [zfs-discuss] reconstruct recovery of rpool zpool and zfs file system with bad sectors

2010-05-25 Thread Rob Levy
Roy, Thanks for your reply. I did get a new drive and attempted the approach (as you had suggested prior to your reply), however once booted off the OpenSolaris Live CD (or the rebuilt new drive), I was not able to import the rpool (which I had established had sector errors). I expect I should hav

Re: [zfs-discuss] [?] - What is the recommended number of disks for a consumer PC with ZFS

2010-07-18 Thread Rob Clark
' standpoint - as opposed to over three dozen tiny Drives). Thanks for your reply, Rob

Re: [zfs-discuss] Performance advantages of spool with 2x raidz2 vdev"s vs. Single vdev

2010-07-22 Thread Rob Clark
doubt I can afford as many as 10 Drives nor could I stuff them into my Box so please suggest options that use less than that many (most preferably less than 7). A: ? Thanks, Rob

Re: [zfs-discuss] [?] - What is the recommended number of disks for a consumer PC with ZFS

2010-07-22 Thread Rob Clark
> I'm building my new storage server, all the parts should come in this week. > ... Another answer is here: http://eonstorage.blogspot.com/2010/03/whats-best-pool-to-build-with-3-or-4.html Rob

Re: [zfs-discuss] Confused about consumer drives and zfs can someone help?

2010-07-22 Thread Rob Clark
> I wanted to build a small back up (maybe also NAS) server using A common question that I am trying to get answered (and have a few) here: http://www.opensolaris.org/jive/thread.jspa?threadID=102368&tstart=0 Rob

Re: [zfs-discuss] Recommended RAM for ZFS on various platforms

2010-07-22 Thread Rob Clark
in reality it would be OK. If it is not OK (for you) then you have open Memory Slots in which to add more Chips (which you are certain to want to do in the future). Rob

Re: [zfs-discuss] bigger zfs arc

2009-10-02 Thread Rob Logan
em ram in hopes of increasing arc. if m?u_ghost is a small %, there is no point in adding an L2ARC. if you do add an L2ARC, one must have ram between c and zfs_arc_max for its pointers. Rob
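For reference, the ghost-list and ARC sizing numbers mentioned here can be read from the ARC kstats (standard kstat usage; the egrep pattern is just illustrative):

# kstat -n arcstats | egrep 'ghost|c_max|size'

High mru_ghost/mfu_ghost hit counts suggest a larger ARC, or an L2ARC, would actually have absorbed misses.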

Re: [zfs-discuss] million files in single directory

2009-10-04 Thread Rob Logan
ull 0.41u 0.07s 0:00.50 96.0% perhaps your ARC is too small? Rob

Re: [zfs-discuss] ZPOOL Metadata / Data Error - Help

2009-10-04 Thread Rob Logan
Action: Restore the file in question if possible. Otherwise restore the entire pool from backup. :<0x0> :<0x15> bet it's in a snapshot that looks to have been destroyed already. try
zpool clear POOL01
zpool scrub POOL01

Re: [zfs-discuss] zfs code and fishworks "fork"

2009-10-27 Thread Rob Logan
786.html Rob

Re: [zfs-discuss] sub-optimal ZFS performance

2009-10-29 Thread Rob Logan
ome fragmentation, 1/4 of c_max wasn't enough metadata arc space for the number of files in /var/pkg/download good find Henrik! Rob

Re: [zfs-discuss] ZFS + fsck

2009-11-04 Thread Rob Warner
e help of Victor Latushkin to attempt to recover your pool using painstaking manual manipulation. Recent putbacks seem to indicate that future releases will provide a mechanism to allow mere mortals to recover from some of the errors caused by dropped writes. cheers, Rob

Re: [zfs-discuss] PSARC recover files?

2009-11-09 Thread Rob Logan
frequent snapshots offer outstanding "oops" protection. Rob

Re: [zfs-discuss] PSARC recover files?

2009-11-09 Thread Rob Logan
> Maybe to create snapshots "after the fact" how does one quiesce a drive "after the fact"?

Re: [zfs-discuss] raidz-1 vs mirror

2009-11-11 Thread Rob Logan
> from a two disk (10krpm) mirror layout to a three disk raidz-1. writes will be unnoticeably slower for raidz1 because of parity calculation and latency of a third spindle. but reads will be 1/2 the speed of the mirror, because the mirror can split the reads between two disks. another way to say the s

Re: [zfs-discuss] scrub differs in execute time?

2009-11-14 Thread Rob Logan
stripe 266/6 MB with 6 disks on shared PCI in a raidz we know disks don't go that fast anyway, but going from an 8h to a 15h scrub is very reasonable depending on vdev config. Rob

Re: [zfs-discuss] scrub differs in execute time?

2009-11-14 Thread Rob Logan
CIE. Rob

Re: [zfs-discuss] Workaround for mpt timeouts in snv_127

2009-11-30 Thread Rob Logan
onnection. wonder if there is an LSI issue with too many links in HBA mode? Rob

Re: [zfs-discuss] Separate Zil on HDD ?

2009-12-02 Thread Rob Logan
t fun one might make a tiny slice on all the disks of the raidz2 and list six log devices (6 way stripe) and not bother adding the other two disks. Rob
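A hypothetical sketch of that layout (pool and slice names invented); zpool add accepts multiple log devices on one line, and ZFS then allocates ZIL blocks across all of them:

# zpool add tank log c0t0d0s7 c0t1d0s7 c0t2d0s7 c0t3d0s7 c0t4d0s7 c0t5d0s7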

[zfs-discuss] How can we help fix MPT driver post build 129

2009-12-05 Thread Rob Nelson
How can we help with what is outlined below? I can reproduce these at will, so if anyone at Sun would like an environment to test this situation let me know. What is the best info to grab for you folks to help here? Thanks - nola This is in regard to these threads: http://www.opensolaris.or

Re: [zfs-discuss] Update - mpt errors on snv 101b

2009-12-08 Thread Rob Nelson
I can report io errors with Chenbro-based LSI SASx36 IC-based expanders tested with 111b/121/128a/129. The HBA was LSI 1068 based. If I bypass the expander by adding more HBA controllers, mpt does not have io errors. -nola On Dec 8, 2009, at 6:48 AM, Bruno Sousa wrote: Hi James, Thank yo

[zfs-discuss] unable to zfs destroy

2010-01-08 Thread Rob Logan
this one has me a little confused. ideas?
j...@opensolaris:~# zpool import z
cannot mount 'z/nukeme': mountpoint or dataset is busy
cannot share 'z/cle2003-1': smb add share failed
j...@opensolaris:~# zfs destroy z/nukeme
internal error: Bad exchange descriptor
Abort (core dumped)
j...@opensolaris

Re: [zfs-discuss] 4 Internal Disk Configuration

2010-01-14 Thread Rob Logan
> By partitioning the first two drives, you can arrange to have a small > zfs-boot mirrored pool on the first two drives, and then create a second > pool as two mirror pairs, or four drives in a raidz to support your data. agreed.. 2 % zpool iostat -v capacity operations
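A hypothetical sketch of that layout (device names invented; s0/s1 are slices on the two partitioned boot drives):

# zpool create rpool mirror c0t0d0s0 c0t1d0s0
# zpool create data mirror c0t0d0s1 c0t1d0s1 mirror c0t2d0 c0t3d0

The boot pool has to be a single disk or a mirror, so the raidz alternative applies only to the data pool.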

Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-24 Thread Rob Logan
> a 1U or 2U JBOD chassis for 2.5" drives, from http://supermicro.com/products/nfo/chassis_storage.cfm the E1 (single) or E2 (dual) options have a SAS expander, so http://supermicro.com/products/chassis/2U/?chs=216 fits your build, or build it yourself with http://supermicro.com/products/accessori

Re: [zfs-discuss] verging OT: how to buy J4500 w/o overpriced drives

2010-02-02 Thread Rob Logan
r". I'm thankful Sun shares their research and we can build on it. (btw, netapp ontap 8 is freebsd, and runs on std hardware after alittle bios work :-) Rob ___ zfs-discuss mailing list zfs-discuss@opensolaris.org

Re: [zfs-discuss] Cores vs. Speed?

2010-02-04 Thread Rob Logan
ECC, this close $$ http://www.newegg.com/Product/Product.aspx?Item=N82E16819115214 Now, this gets one to 8G ECC easily...AMD's unfair advantage is all those ram slots on their multi-die MBs... A slow AMD cpu with 64G ram might be better depending on your working set / dedup requirements.

Re: [zfs-discuss] Cores vs. Speed?

2010-02-05 Thread Rob Logan
clusions. Rob

Re: [zfs-discuss] Cores vs. Speed?

2010-02-05 Thread Rob Logan
d if one uses all 16 slots, that 667MHz simm runs at 533MHz with AMD. The same is true for Lynnfield: if one uses Registered DDR3, one only gets 800MHz with all 6 slots (single or dual rank). > Regardless, for zfs, memory is more important than raw CPU agreed! but everything must be balanced.

Re: [zfs-discuss] Cores vs. Speed?

2010-02-06 Thread Rob Logan
spx?Item=N82E16820139050 But we are still stuck at 8G without going to expensive ram or a more expensive CPU. Rob

Re: [zfs-discuss] Reading ZFS config for an extended period

2010-02-15 Thread Rob Logan
oop! Rob

Re: [zfs-discuss] Poor ZIL SLC SSD performance

2010-02-19 Thread Rob Logan
nts, this small loss might be the loss of their entire dataset. Rob

[zfs-discuss] stripes of different size mirror groups

2010-10-28 Thread Rob Cohen
could share L2ARC and ZIL devices, rather than buy two sets. It appears possible to set up 7x450gb mirrored sets and 7x600gb mirrored sets in the same volume, without losing capacity. Is that a bad idea? Is there a problem with having different stripe sizes, like this? Thanks, Rob
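A hypothetical sketch of mixing the two mirror sizes in one pool (device names invented):

# zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0
# zpool add tank mirror c2t0d0 c2t1d0

ZFS permits top-level vdevs of different sizes and spreads writes across them in proportion to each vdev's free space, so no capacity is wasted.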

Re: [zfs-discuss] stripes of different size mirror groups

2010-10-28 Thread Rob Cohen
Thanks, Ian. If I understand correctly, the performance would then drop to the same level as if I set them up as separate volumes in the first place. So, I get double the performance for 75% of my data, and equal performance for 25% of my data, and my L2ARC will adapt to my working set across b

[zfs-discuss] zfs record size implications

2010-11-04 Thread Rob Cohen
d the rest fits in L2ARC, performance will be good. Thanks, Rob

Re: [zfs-discuss] zfs record size implications

2010-11-10 Thread Rob Cohen
Thanks, Richard. Your answers were very helpful.

Re: [zfs-discuss] WarpDrive SLP-300

2010-11-17 Thread Rob Logan
15 23:05 /kernel/drv/amd64/mpt -rwxr-xr-x 1 root sys 399952 Nov 15 23:06 /kernel/drv/amd64/mpt_sas and mpt_sas has a new printf: "reset was running, this event can not be handled this time" Rob

[zfs-discuss] l2arc_noprefetch

2010-11-21 Thread Rob Cohen
age, even though my cache should be warm by now, and my SSDs are far from full. set zfs:l2arc_noprefetch = 0 Am I setting this wrong? Am I misunderstanding this option? Thanks, Rob
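For what it's worth, that line is /etc/system syntax and only takes effect after a reboot. A sketch of inspecting and flipping the tunable on a live system (standard mdb usage; treating l2arc_noprefetch as a plain kernel integer is an assumption):

# echo "l2arc_noprefetch/D" | mdb -k
# echo "l2arc_noprefetch/W 0" | mdb -kw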

[zfs-discuss] problem adding second MD1000 enclosure to LSI 9200-16e

2010-11-21 Thread Rob Cohen
ere a special way to configure one of these LSI boards? Thanks, Rob

Re: [zfs-discuss] problem adding second MD1000 enclosure to LSI 9200-16e

2010-11-21 Thread Rob Cohen
Markus, I'm pretty sure that I have the MD1000 plugged in properly, especially since the same connection works on the 9280 and Perc 6/e. It's not in split mode. Thanks for the suggestion, though.

Re: [zfs-discuss] problem adding second MD1000 enclosure to LSI 9200-16e

2011-01-10 Thread Rob Cohen
as a hardware problem, or a Solaris bug. - Rob > I have 15x SAS drives in a Dell MD1000 enclosure, > attached to an LSI 9200-16e. This has been working > well. The system is booting off of internal drives, > on a Dell SAS 6ir. > > I just tried to add a second storag

Re: [zfs-discuss] [?] - What is the recommended number of disks for a consumer PC with ZFS

2011-02-07 Thread Rob Clark
References: Thread: ZFS effective short-stroking and connection to thin provisioning? http://opensolaris.org/jive/thread.jspa?threadID=127608 Confused about consumer drives and zfs can someone help? http://opensolaris.org/jive/thread.jspa?threadID=132253 Recommended RAM for ZFS on various platf

Re: [zfs-discuss] latest zpool version in solaris 11 express

2011-07-20 Thread Rob Logan
plus virtualbox 4.1 with "network in a box" would like snv_159 from http://www.virtualbox.org/wiki/Changelog Solaris hosts: New Crossbow based bridged networking driver for Solaris 11 build 159 and above Rob

Re: [zfs-discuss] Large scale performance query

2011-08-04 Thread Rob Cohen
Try mirrors. You will get much better multi-user performance, and you can easily split the mirrors across enclosures. If your priority is performance over capacity, you could experiment with n-way mirrors, since more mirrors will load-balance reads better than more stripes.

Re: [zfs-discuss] Large scale performance query

2011-08-05 Thread Rob Cohen
Generally, mirrors resilver MUCH faster than RAIDZ, and you only lose redundancy on that stripe, so combined, you're much closer to RAIDZ2 odds than you might think, especially with hot spare(s), which I'd recommend. When you're talking about IOPS, each stripe can support 1 simultaneous user.

Re: [zfs-discuss] Large scale performance query

2011-08-06 Thread Rob Cohen
I may have RAIDZ reading wrong here. Perhaps someone could clarify. For a read-only workload, does each RAIDZ drive act like a stripe, similar to RAID5/6? Do they have independent queues? It would seem that there is no escaping read/modify/write operations for sub-block writes, forcing the RA

Re: [zfs-discuss] Large scale performance query

2011-08-06 Thread Rob Cohen
RAIDZ has to rebuild data by reading all drives in the group and reconstructing from parity. Mirrors simply copy a drive. Compare 3tb mirrors vs. 9x3tb RAIDZ2. Mirrors: read 3tb, write 3tb. RAIDZ2: read 24tb, reconstruct data on CPU, write 3tb. In this case, RAIDZ is at least 8x slower to resilver

Re: [zfs-discuss] Large scale performance query

2011-08-06 Thread Rob Cohen
> I may have RAIDZ reading wrong here. Perhaps someone > could clarify. > > For a read-only workload, does each RAIDZ drive act > like a stripe, similar to RAID5/6? Do they have > independent queues? > > It would seem that there is no escaping > read/modify/write operations for sub-block writes

Re: [zfs-discuss] Large scale performance query

2011-08-06 Thread Rob Cohen
here are no writes in the queue). Perhaps you are saying that they act like stripes for bandwidth purposes, but not for read ops/sec? -Rob -Original Message- From: Bob Friesenhahn [mailto:bfrie...@simple.dallas.tx.us] Sent: Saturday, August 06, 2011 11:41 AM To: Rob Cohen Cc: zfs-dis

Re: [zfs-discuss] Large scale performance query

2011-08-06 Thread Rob Cohen
> If I'm not mistaken, a 3-way mirror is not > implemented behind the scenes in > the same way as a 3-disk raidz3. You should use a > 3-way mirror instead of a > 3-disk raidz3. RAIDZ2 requires at least 4 drives, and RAIDZ3 requires at least 5 drives. But, yes, a 3-way mirror is implemented tota

Re: [zfs-discuss] ? SX:CE snv_91 - ZFS - raid and mirror - drive

2008-07-19 Thread Rob Clark
ly) and will be testing that now. System load is definitely going to factor into my configuration choice. Thanks for all the replies (this post seems to go to the zfs-discuss@opensolaris.org mailing list but posts there don't seem to end up here). Sincerely, Rob

Re: [zfs-discuss] Raid-Z with N^2+1 disks

2008-07-19 Thread Rob Clark
> On July 14, 2008 7:49:58 PM -0500 Bob Friesenhahn > <[EMAIL PROTECTED]> wrote: > > With ZFS and modern CPUs, the parity calculation is > surely in the noise to the point of being unmeasurable. > > I would agree with that. The parity calculation has *never* been a > factor in and of itself. T

Re: [zfs-discuss] Adding my own compression to zfs

2008-07-20 Thread Rob Clark
this link: Using PPMD for compression http://www.codeproject.com/KB/recipes/ppmd.aspx Rob

Re: [zfs-discuss] How to delete hundreds of emtpy snapshots

2008-07-20 Thread Rob Clark
he "." is part of the URL (NMF) - so add it or you'll 404). Rob This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] ? SX:CE snv_91 - ZFS - raid and mirror - drive

2008-07-20 Thread Rob Clark
> -Peter Tribble wrote: >> On Sun, Jul 6, 2008 at 8:48 AM, Rob Clark wrote: >> I have eight 10GB drives. >> ... >> I have 6 remaining 10 GB drives and I desire to >> "raid" 3 of them and "mirror" them to the other 3 to >> give me raid s

Re: [zfs-discuss] ? SX:CE snv_91 - ZFS - raid and mirror - drive

2008-07-21 Thread Rob Clark
> Solaris will allow you to do this, but you'll need to use SVM instead of ZFS. > > Or, I suppose, you could use SVM for RAID-5 and ZFS to mirror those. > -- richard Or run Linux ... Richard, The ZFS Best Practices Guide says not. "Do not use the same disk or slice in both an SVM and ZFS con

Re: [zfs-discuss] ZFS deduplication

2008-07-22 Thread Rob Clark
nd watch from the sidelines -- returning to the OS when you thought you were 'safe' (and if not, jumping back out). Thus, Mertol, it is possible (and could work very well). Rob

Re: [zfs-discuss] ? SX:CE snv_91 - ZFS - raid and mirror - drive

2008-07-22 Thread Rob Clark
wap, OCFS2, NTFS, FAT -- so it might be better to suggest adding ZFS there instead of focusing on non-ZFS solutions in this ZFS discussion group. Rob

Re: [zfs-discuss] ZFS deduplication

2008-07-22 Thread Rob Clark
on", the ability to do this over a period of days is also useful. Indeed the Plan9 filesystem simply snapshots to WORM and has no delete - nor are they able to fill their drives faster than they can afford to buy new ones: Venti Filesystem http://www.cs.bell-labs.com/who/seanq/p9trace.html R

Re: [zfs-discuss] ? SX:CE snv_91 - ZFS - raid and mirror - drive sizes don't add correctl

2008-07-29 Thread Rob Clark
There may be some work being done to fix this: zpool should support raidz of mirrors http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6485689 Discussed in this thread: Mirrored Raidz ( Posted: Oct 19, 2006 9:02 PM ) http://opensolaris.org/jive/thread.jspa?threadID=15854&tstart=0

Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?

2008-10-29 Thread Rob Logan
> ECC? $60 unbuffered 4GB 800MHz DDR2 ECC CL5 DIMM (Kit Of 2) http://www.provantage.com/kingston-technology-kvr800d2e5k2-4g~7KIN90H4.htm for Intel 32x0 north bridge like http://www.provantage.com/supermicro-x7sbe~7SUPM11K.htm

Re: [zfs-discuss] Inexpensive ZFS home server

2008-11-12 Thread Rob Logan
ered ECC / non-ECC SDRAM. http://www.intel.com/products/server/chipsets/3200-3210/3200-3210-overview.htm Rob
