Re: [zfs-discuss] Large scale performance query

2011-08-06 Thread Rob Cohen
> If I'm not mistaken, a 3-way mirror is not implemented behind the scenes in the same way as a 3-disk raidz3. You should use a 3-way mirror instead of a 3-disk raidz3.
RAIDZ2 requires at least 4 drives, and RAIDZ3 requires at least 5 drives. But, yes, a 3-way mirror is implemented tota
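
For concreteness, a minimal sketch of the 3-way mirror alternative (device names hypothetical):

  zpool create tank mirror c0t0d0 c0t1d0 c0t2d0

Like a 3-disk raidz3 would, this survives any two drive failures, but every disk holds a full copy, so reads load-balance across all three and a resilver is a straight copy.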

Re: [zfs-discuss] Large scale performance query

2011-08-06 Thread Rob Cohen
there are no writes in the queue). Perhaps you are saying that they act like stripes for bandwidth purposes, but not for read ops/sec? -Rob

Re: [zfs-discuss] Large scale performance query

2011-08-06 Thread Rob Cohen
> I may have RAIDZ reading wrong here. Perhaps someone could clarify.
> For a read-only workload, does each RAIDZ drive act like a stripe, similar to RAID5/6? Do they have independent queues?
> It would seem that there is no escaping read/modify/write operations for sub-block writes

Re: [zfs-discuss] Large scale performance query

2011-08-06 Thread Rob Cohen
RAIDZ has to rebuild data by reading all drives in the group and reconstructing from parity. Mirrors simply copy a drive. Compare 3TB mirrors vs. 9x3TB RAIDZ2.
Mirrors: read 3TB, write 3TB.
RAIDZ2: read 24TB, reconstruct data on CPU, write 3TB.
In this case, RAIDZ is at least 8x slower to resilver

Re: [zfs-discuss] Large scale performance query

2011-08-06 Thread Rob Cohen
I may have RAIDZ reading wrong here. Perhaps someone could clarify. For a read-only workload, does each RAIDZ drive act like a stripe, similar to RAID5/6? Do they have independent queues? It would seem that there is no escaping read/modify/write operations for sub-block writes, forcing the RA

Re: [zfs-discuss] Large scale performance query

2011-08-05 Thread Rob Cohen
Generally, mirrors resilver MUCH faster than RAIDZ, and you only lose redundancy on that stripe, so combined, you're much closer to RAIDZ2 odds than you might think, especially with hot spare(s), which I'd recommend. When you're talking about IOPS, each stripe can support 1 simultaneous user.
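
A minimal sketch of the hot-spare setup (pool and device names hypothetical); with autoreplace on, a failed mirror half is rebuilt without operator action:

  zpool add tank spare c9t9d0
  zpool set autoreplace=on tank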

Re: [zfs-discuss] Large scale performance query

2011-08-04 Thread Rob Cohen
Try mirrors. You will get much better multi-user performance, and you can easily split the mirrors across enclosures. If your priority is performance over capacity, you could experiment with n-way mirrors, since more mirrors will load balance reads better than more stripes.
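
As a sketch of that suggestion (enclosure and device names are hypothetical placeholders), each mirror pair spans both enclosures, so losing an entire enclosure still leaves every vdev one working half:

  zpool create tank mirror encA-d0 encB-d0 mirror encA-d1 encB-d1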

Re: [zfs-discuss] latest zpool version in solaris 11 express

2011-07-20 Thread Rob Logan
plus virtualbox 4.1 with "network in a box" would like snv_159 from http://www.virtualbox.org/wiki/Changelog
Solaris hosts: New Crossbow based bridged networking driver for Solaris 11 build 159 and above
Rob

Re: [zfs-discuss] [?] - What is the recommended number of disks for a consumer PC with ZFS

2011-02-07 Thread Rob Clark
References:
ZFS effective short-stroking and connection to thin provisioning? http://opensolaris.org/jive/thread.jspa?threadID=127608
Confused about consumer drives and zfs can someone help? http://opensolaris.org/jive/thread.jspa?threadID=132253
Recommended RAM for ZFS on various platf

Re: [zfs-discuss] problem adding second MD1000 enclosure to LSI 9200-16e

2011-01-10 Thread Rob Cohen
as a hardware problem, or a Solaris bug. - Rob
> I have 15x SAS drives in a Dell MD1000 enclosure, attached to an LSI 9200-16e. This has been working well. The system is booting off of internal drives, on a Dell SAS 6ir.
> I just tried to add a second storag

Re: [zfs-discuss] problem adding second MD1000 enclosure to LSI 9200-16e

2010-11-21 Thread Rob Cohen
Markus, I'm pretty sure that I have the MD1000 plugged in properly, especially since the same connection works on the 9280 and Perc 6/e. It's not in split mode. Thanks for the suggestion, though.

[zfs-discuss] problem adding second MD1000 enclosure to LSI 9200-16e

2010-11-21 Thread Rob Cohen
Is there a special way to configure one of these LSI boards? Thanks, Rob

[zfs-discuss] l2arc_noprefetch

2010-11-21 Thread Rob Cohen
age, even though my cache should be warm by now, and my SSDs are far from full.
set zfs:l2arc_noprefetch = 0
Am I setting this wrong? Am I misunderstanding this option? Thanks, Rob
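
For reference, the standard Solaris mechanisms for this tunable are an /etc/system line applied at boot, or a live poke with mdb (a sketch; whether disabling prefetch caching helps depends on the workload):

  # in /etc/system, applied at the next reboot:
  set zfs:l2arc_noprefetch = 0
  # or live, and to verify the current value:
  echo l2arc_noprefetch/W0 | mdb -kw
  echo l2arc_noprefetch/D | mdb -k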

Re: [zfs-discuss] WarpDrive SLP-300

2010-11-17 Thread Rob Logan
15 23:05 /kernel/drv/amd64/mpt
-rwxr-xr-x 1 root sys 399952 Nov 15 23:06 /kernel/drv/amd64/mpt_sas
and mpt_sas has a new printf: "reset was running, this event can not be handled this time"
Rob

Re: [zfs-discuss] zfs record size implications

2010-11-10 Thread Rob Cohen
Thanks, Richard. Your answers were very helpful.

[zfs-discuss] zfs record size implications

2010-11-04 Thread Rob Cohen
and the rest fits in L2ARC, performance will be good. Thanks, Rob

Re: [zfs-discuss] stripes of different size mirror groups

2010-10-28 Thread Rob Cohen
Thanks, Ian. If I understand correctly, the performance would then drop to the same level as if I set them up as separate volumes in the first place. So, I get double the performance for 75% of my data, and equal performance for 25% of my data, and my L2ARC will adapt to my working set across b

[zfs-discuss] stripes of different size mirror groups

2010-10-28 Thread Rob Cohen
could share L2ARC and ZIL devices, rather than buy two sets. It appears possible to set up 7x450gb mirrored sets and 7x600gb mirrored sets in the same volume, without losing capacity. Is that a bad idea? Is there a problem with having different stripe sizes, like this? Thanks, Rob
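
A sketch of such a pool (device names hypothetical; say the first pair is 450GB drives and the second 600GB):

  zpool create tank mirror c1t0d0 c2t0d0 mirror c1t1d0 c2t1d0

Each mirror is its own vdev, and ZFS stripes writes across vdevs in proportion to their free space, so the size mismatch wastes no capacity.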

Re: [zfs-discuss] Recommended RAM for ZFS on various platforms

2010-07-22 Thread Rob Clark
in reality it would be OK. If it is not OK (for you) then you have open Memory Slots in which to add more Chips (which you are certain to want to do in the future). Rob

Re: [zfs-discuss] Confused about consumer drives and zfs can someone help?

2010-07-22 Thread Rob Clark
> I wanted to build a small back up (maybe also NAS) server using
A common question that I am trying to get answered (and have a few) here: http://www.opensolaris.org/jive/thread.jspa?threadID=102368&tstart=0 Rob

Re: [zfs-discuss] [?] - What is the recommended number of disks for a consumer PC with ZFS

2010-07-22 Thread Rob Clark
> I'm building my new storage server, all the parts should come in this week. ...
Another answer is here: http://eonstorage.blogspot.com/2010/03/whats-best-pool-to-build-with-3-or-4.html Rob

Re: [zfs-discuss] Performance advantages of spool with 2x raidz2 vdev"s vs. Single vdev

2010-07-22 Thread Rob Clark
doubt I can afford as many as 10 drives nor could I stuff them into my box, so please suggest options that use less than that many (most preferably less than 7). A: ? Thanks, Rob
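
With a six-drive budget, the two layouts usually weighed against each other (a sketch; device names hypothetical):

  zpool create tank raidz2 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0
  zpool create tank raidz c0t1d0 c0t2d0 c0t3d0 raidz c0t4d0 c0t5d0 c0t6d0

Both yield roughly four drives of usable space; the single raidz2 vdev survives any two failures, while the two raidz vdevs survive one failure each but deliver about twice the random-read IOPS.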

Re: [zfs-discuss] [?] - What is the recommended number of disks for a consumer PC with ZFS

2010-07-18 Thread Rob Clark
' standpoint - as opposed to over three dozen tiny Drives). Thanks for your reply, Rob

Re: [zfs-discuss] reconstruct recovery of rpool zpool and zfs file system with bad sectors

2010-05-25 Thread Rob Levy
Roy, thanks for your reply. I did get a new drive and attempted the approach (as you suggested, prior to your reply); however, once booted off the OpenSolaris Live CD (or the rebuilt new drive), I was not able to import the rpool (which I had established had sector errors). I expect I should hav

[zfs-discuss] reconstruct recovery of rpool zpool and zfs file system with bad sectors

2010-05-20 Thread Rob Levy
Folks, I posted this question on (OpenSolaris - Help) without any replies http://opensolaris.org/jive/thread.jspa?threadID=129436&tstart=0 and am re-posting here in the hope someone can help ... I have updated the wording a little too (in an attempt to clarify). I currently use OpenSolaris on a T

Re: [zfs-discuss] Does ZFS use large memory pages?

2010-05-06 Thread Rob
when the file system gets above 80% we seem to have quite a number of issues, much the same as what you've had in the past: ps and prstat hanging. Are you able to tell me the IDR number that you applied? Thanks, Rob

[zfs-discuss] Snv_126 Kernel PF Panic

2010-04-09 Thread Rob Cherveny
3(ff02fa4b0058)
ff000f4efbb0 smb_session_worker+0x6e(ff02fa4b0058)
ff000f4efc40 taskq_d_thread+0xb1(ff02e51b9e90)
ff000f4efc50 thread_start+8()
> I can provide any other info that may be needed. Thank you in advance for your help! Rob

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-03-30 Thread Rob Logan
your file or zvol will not be there when the box comes back, even though your program had finished seconds before the crash. Rob

Re: [zfs-discuss] sharing a ssd between rpool and l2arc

2010-03-30 Thread Rob Logan
zfs create -V 3G rpool/cache
zpool add test cache /dev/zvol/dsk/rpool/cache
reboot
If you're asking for an L2ARC on rpool, well, yeah, it's not mounted soon enough, but the point is to put rpool, swap, and L2ARC for your storage pool all on a single SSD..

Re: [zfs-discuss] SSD As ARC

2010-03-28 Thread Rob Logan
> I like the idea of swapping on SSD too, but why not make a zvol for the L2ARC so you're not limited by the hard partitioning?
It lives through a reboot..
zpool create -f test c9t3d0s0 c9t4d0s0
zfs create -V 3G rpool/cache
zpool add test cache /dev/zvol/dsk/rpool/cache
reboot

Re: [zfs-discuss] SSD As ARC

2010-03-28 Thread Rob Logan
hard partitioning? Rob

[zfs-discuss] ZFS send and receive corruption across a WAN link?

2010-03-18 Thread Rob
Can a ZFS send stream become corrupt when piped between two hosts across a WAN link using 'ssh'? For example a host in Australia sends a stream to a host in the UK as follows:
# zfs send tank/f...@now | ssh host.uk zfs receive tank/bar
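
(In principle the pipe is well protected: ssh's transport is integrity-checked, and zfs receive verifies the checksums embedded in the send stream, so silent corruption should be detected rather than written out.) To double-check a suspect link, one can checksum the stream on both ends; a sketch using the Solaris digest utility, with a hypothetical unmasked dataset name:

  # Australia: capture and fingerprint the stream
  zfs send tank/data@now > /tmp/stream
  digest -a sha256 /tmp/stream
  # UK: copy /tmp/stream across, compare the digest, then
  zfs receive tank/bar < /tmp/stream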

Re: [zfs-discuss] Poor ZIL SLC SSD performance

2010-02-19 Thread Rob Logan
nts, this small loss might be the loss of their entire dataset. Rob

Re: [zfs-discuss] Reading ZFS config for an extended period

2010-02-15 Thread Rob Logan
oop! Rob

Re: [zfs-discuss] Cores vs. Speed?

2010-02-06 Thread Rob Logan
spx?Item=N82E16820139050 But we are still stuck at 8G without going to expensive RAM or a more expensive CPU. Rob

Re: [zfs-discuss] Cores vs. Speed?

2010-02-05 Thread Rob Logan
and if one uses all 16 slots, that 667MHz simm runs at 533MHz with AMD. The same is true for Lynnfield: if one uses Registered DDR3, one only gets 800MHz with all 6 slots (single or dual rank).
> Regardless, for zfs, memory is more important than raw CPU
Agreed! But everything must be balanced.

Re: [zfs-discuss] Cores vs. Speed?

2010-02-05 Thread Rob Logan
clusions. Rob

Re: [zfs-discuss] Cores vs. Speed?

2010-02-04 Thread Rob Logan
ECC, this close $$ http://www.newegg.com/Product/Product.aspx?Item=N82E16819115214 Now, this gets one to 8G ECC easily...AMD's unfair advantage is all those ram slots on their multi-die MBs... A slow AMD cpu with 64G ram might be better depending on your working set / dedup requirements.

Re: [zfs-discuss] verging OT: how to buy J4500 w/o overpriced drives

2010-02-02 Thread Rob Logan
r". I'm thankful Sun shares their research and we can build on it. (btw, netapp ontap 8 is freebsd, and runs on std hardware after alittle bios work :-) Rob ___ zfs-discuss mailing list zfs-discuss@opensolaris.org

Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-24 Thread Rob Logan
> a 1U or 2U JBOD chassis for 2.5" drives,
From http://supermicro.com/products/nfo/chassis_storage.cfm the E1 (single) or E2 (dual) options have a SAS expander, so http://supermicro.com/products/chassis/2U/?chs=216 fits your build, or build it yourself with http://supermicro.com/products/accessori

Re: [zfs-discuss] 4 Internal Disk Configuration

2010-01-14 Thread Rob Logan
> By partitioning the first two drives, you can arrange to have a small zfs-boot mirrored pool on the first two drives, and then create a second pool as two mirror pairs, or four drives in a raidz to support your data.
Agreed..
2 % zpool iostat -v
capacity operations
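
A sketch of that layout on four disks (slice and device names hypothetical; in practice the installer lays down the boot pool):

  zpool create rpool mirror c0t0d0s0 c0t1d0s0
  zpool create data mirror c0t0d0s1 c0t1d0s1 mirror c0t2d0 c0t3d0

The small s0 slices on the first two drives hold the mirrored root; the large s1 slices plus the two whole disks form the two mirror pairs for data.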

Re: [zfs-discuss] zfs, raidz, spare and jbod

2010-01-10 Thread Rob
data or do some other stuff to give the system some load, it hangs. This happens after 5 minutes or after 30 minutes or later, but it hangs. Then we get the problems of the attached pictures. I have also emailed Areca. I hope they can fix it.. Regards, Rob

[zfs-discuss] unable to zfs destroy

2010-01-08 Thread Rob Logan
this one has me a little confused. Ideas?
j...@opensolaris:~# zpool import z
cannot mount 'z/nukeme': mountpoint or dataset is busy
cannot share 'z/cle2003-1': smb add share failed
j...@opensolaris:~# zfs destroy z/nukeme
internal error: Bad exchange descriptor
Abort (core dumped)
j...@opensolaris

Re: [zfs-discuss] Update - mpt errors on snv 101b

2009-12-08 Thread Rob Nelson
I can report I/O errors with Chenbro-based LSI SASx36 IC based expanders, tested with 111b/121/128a/129. The HBA was LSI 1068 based. If I bypass the expander by adding more HBA controllers, mpt does not have I/O errors. -nola
On Dec 8, 2009, at 6:48 AM, Bruno Sousa wrote: Hi James, Thank yo

[zfs-discuss] How can we help fix MPT driver post build 129

2009-12-05 Thread Rob Nelson
How can we help with what is outlined below? I can reproduce these at will, so if anyone at Sun would like an environment to test this situation, let me know. What is the best info to grab for you folks to help here? Thanks - nola
This is in regard to these threads: http://www.opensolaris.or

Re: [zfs-discuss] Separate Zil on HDD ?

2009-12-02 Thread Rob Logan
t fun one might make a tiny slice on all the disks of the raidz2 and list six log devices (6-way stripe) and not bother adding the other two disks. Rob

Re: [zfs-discuss] Workaround for mpt timeouts in snv_127

2009-11-30 Thread Rob Logan
onnection. Wonder if there is an LSI issue with too many links in HBA mode? Rob

Re: [zfs-discuss] scrub differs in execute time?

2009-11-14 Thread Rob Logan
CIE. Rob

Re: [zfs-discuss] scrub differs in execute time?

2009-11-14 Thread Rob Logan
stripe 266/6 MB with 6 disks on shared PCI in a raidz; we know disks don't go that fast anyway, but going from an 8h to a 15h scrub is very reasonable depending on vdev config. Rob

Re: [zfs-discuss] raidz-1 vs mirror

2009-11-11 Thread Rob Logan
> from a two disk (10krpm) mirror layout to a three disk raidz-1.
Writes will be unnoticeably slower for raidz1 because of the parity calculation and the latency of a third spindle, but reads will be 1/2 the speed of the mirror, because the mirror can split the reads between two disks. Another way to say the s

Re: [zfs-discuss] PSARC recover files?

2009-11-09 Thread Rob Logan
> Maybe to create snapshots "after the fact"
How does one quiesce a drive "after the fact"?

Re: [zfs-discuss] PSARC recover files?

2009-11-09 Thread Rob Logan
Frequent snapshots offer outstanding "oops" protection. Rob

Re: [zfs-discuss] ZFS + fsck

2009-11-04 Thread Rob Warner
e help of Victor Latushkin to attempt to recover your pool using painstaking manual manipulation. Recent putbacks seem to indicate that future releases will provide a mechanism to allow mere mortals to recover from some of the errors caused by dropped writes. Cheers, Rob

Re: [zfs-discuss] sub-optimal ZFS performance

2009-10-29 Thread Rob Logan
ome fragmentation; 1/4 of c_max wasn't enough metadata ARC space for the number of files in /var/pkg/download. Good find, Henrik! Rob

Re: [zfs-discuss] zfs code and fishworks "fork"

2009-10-27 Thread Rob Logan
786.html Rob

Re: [zfs-discuss] ZPOOL Metadata / Data Error - Help

2009-10-04 Thread Rob Logan
Action: Restore the file in question if possible. Otherwise restore the entire pool from backup.
:<0x0> :<0x15>
Bet it's in a snapshot that looks to have been destroyed already. Try:
zpool clear POOL01
zpool scrub POOL01

Re: [zfs-discuss] million files in single directory

2009-10-04 Thread Rob Logan
ull 0.41u 0.07s 0:00.50 96.0% Perhaps your ARC is too small? Rob

Re: [zfs-discuss] bigger zfs arc

2009-10-02 Thread Rob Logan
em RAM in hopes of increasing the ARC. If m?u_ghost is a small %, there is no point in adding an L2ARC. If you do add an L2ARC, one must have RAM between c and zfs_arc_max for its pointers. Rob

[zfs-discuss] zpool create over old pool recovery

2009-08-24 Thread Rob Levy
Folks, need help with ZFS recovery following zfs create ... We recently received new laptops (hardware refresh) and I simply transferred the multiboot hdd (using OpenSolaris 2008.11 as the primary production OS) from the old laptop to the new one (used the live DVD to do the zpool import, updat

Re: [zfs-discuss] Fed up with ZFS causing data loss

2009-07-30 Thread Rob Terhaar
I'm sure this has been discussed in the past. But it's very hard to understand, or even patch, incredibly advanced software such as ZFS without a deep understanding of the internals. It will take quite a while before anyone can start understanding a file system which was developed behind closed door

Re: [zfs-discuss] The importance of ECC RAM for ZFS

2009-07-24 Thread Rob Logan
uptime of 1116 days) so the finger is still pointed at VirtualBox's "hardware" implementation. As for ZFS requiring "better" hardware: you could turn off checksums and other protections so one isn't notified of issues, making it act like the others.

Re: [zfs-discuss] Another user looses his pool (10TB) in this case and 40 days work

2009-07-20 Thread Rob Logan
1 user, load average: 0.07, 0.05, 0.05
r...@pdm # date
Mon Jul 20 09:33:07 EDT 2009
r...@pdm # uname -a
SunOS pdm 5.9 Generic_112233-12 sun4u sparc SUNW,Ultra-250
Rob

Re: [zfs-discuss] Understanding SAS/SATA Backplanes and Connectivity

2009-07-16 Thread Rob Logan
> c4               scsi-bus  connected  configured  unknown
> c4::dsk/c4t15d0  disk      connected  configured  unknown
:
> c4::dsk/c4t33d0  disk      connected  configured  unknown
> c4::es/ses0      ESI       connected

Re: [zfs-discuss] Understanding SAS/SATA Backplanes and Connectivity

2009-07-16 Thread Rob Logan
>> We have a SC846E1 at work; it's the 24-disk, 4u version of the 826e1. It's working quite nicely as a SATA JBOD enclosure.
> use the LSI SAS 3442e which also gives you an external SAS port.
I'm confused; I thought expanders only worked with SAS disks, and SATA disks took an entire SAS port. c

Re: [zfs-discuss] ZFS write I/O stalls

2009-06-30 Thread Rob Logan
> CPU is smoothed out quite a lot
Yes, but the area under the CPU graph is less, so the rate of real work performed is less, so the entire job took longer (albeit "smoother"). Rob

Re: [zfs-discuss] ZFS and "Dinamic Stripe"

2009-06-29 Thread Rob Logan
                 94G    14   1   877K  94.2K
c1t1d0s7  244G  200G    15   2   948K  96.5K
c0d0      193G  39.1G   10   1   689K  80.2K
Note that c0d0 is basically full, but still serving 10 of every 15 reads, and 82% of the writes.

Re: [zfs-discuss] BugID formally known as 6746456

2009-06-26 Thread Rob Healey
This appears to be the fix related to the ACLs, under which they seem to throw all of the ASSERT panics in zfs_fuid.c, even if they have nothing to do with ACLs; my case being one of those. Thanks for the pointer though! -Rob

[zfs-discuss] BugID formally known as 6746456

2009-06-24 Thread Rob Healey
on this issue was done on the S10 side of the house and there is a stealthy patch ID that can fix the issue. Thanks, -Rob

Re: [zfs-discuss] problems with l2arc in 2009.06

2009-06-18 Thread Rob Logan
> correct ratio of arc to l2arc?
From http://blogs.sun.com/brendan/entry/l2arc_screenshots: "It costs some DRAM to reference the L2ARC, at a rate proportional to record size. For example, it currently takes about 15 Gbytes of DRAM to reference 600 Gbytes of L2ARC - at an 8 Kbyte ZFS record size
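
Working that quote through (a back-of-envelope sketch, not an exact accounting of ARC header sizes):

  600 GB / 8 KB records ≈ 75 million cached records
  15 GB DRAM / 75 million ≈ 200 bytes of DRAM per record

So the cost scales with record count: the same 600 GB of L2ARC at a 128 KB record size would take only around 1 GB of DRAM.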

Re: [zfs-discuss] RAIDZ2: only half the read speed?

2009-05-22 Thread Rob Logan
MB/s one raidz2 set of 8 disks can't be faster than the slowest disk in the set, as it's one vdev... I would have expected the 8 vdev set to be 8x faster than the single raidz[12] set, but like Richard said, there is another bottleneck in there that iostat will show

Re: [zfs-discuss] Replacing HDD with larger HDD..

2009-05-22 Thread Rob Logan
each disk in the same port too as you go.
> It is still the same size. I would expect it to go to 9G.
A reboot or export/import would have fixed this.
> cannot import 'grow': no such pool available
You meant to type: zpool import -d /var/tmp grow

Re: [zfs-discuss] SAS 15K drives as L2ARC

2009-05-05 Thread Rob Logan
of one tray. I.e., please don't discount how one arranges the vdevs in a given configuration. Rob

[zfs-discuss] zpool import crash, import degraded mirror?

2009-04-29 Thread Rob Logan
When I type `zpool import` to see what pools are out there, it gets to
/1: open("/dev/dsk/c5t2d0s0", O_RDONLY) = 6
/1: stat64("/usr/local/apache2/lib/libdevid.so.1", 0x08042758) Err#2 ENOENT
/1: stat64("/usr/lib/libdevid.so.1", 0x08042758) = 0
/1: d=0x02D90002 i

Re: [zfs-discuss] Motherboard for home zfs/solaris file server

2009-02-24 Thread Rob Logan
> Not. Intel decided we don't need ECC memory on the Core i7
I thought that was a Core i7 vs Xeon E55xx distinction for socket LGA-1366, so that's why this X58 MB claims ECC support: http://supermicro.com/products/motherboard/Xeon3000/X58/X8SAX.cfm

Re: [zfs-discuss] Is Disabling ARC on SolarisU4 possible?

2009-02-12 Thread Rob Brown
Thanks, Nathan. I want to test the underlying performance; of course, the problem is I want to test the 16 or so disks in the stripe, rather than individual devices. Thanks, Rob
On 28/01/2009 22:23, "Nathan Kroenert" wrote:
> Also - My experience with a very small ARC is that you

[zfs-discuss] Is Disabling ARC on SolarisU4 possible?

2009-01-28 Thread Rob Brown
Solaris 10U4 which doesn't have them; can I disable it? Many thanks, Rob

Re: [zfs-discuss] Practical Application of ZFS

2009-01-06 Thread Rob
Wow. I will read further into this. That seems like it could have great applications. I assume the same is true of FCoE?

Re: [zfs-discuss] Practical Application of ZFS

2009-01-06 Thread Rob
I am not experienced with iSCSI. I understand it's block-level disk access via TCP/IP. However, I don't see how using it eliminates the need for virtualization. Are you saying that a Windows Server can access a ZFS drive via iSCSI and store NTFS files?
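
That is exactly the model. A sketch (dataset name and size hypothetical; shareiscsi was the pre-COMSTAR property for exporting a LUN on OpenSolaris of this era):

  zfs create -V 500G tank/winlun
  zfs set shareiscsi=on tank/winlun

The Windows server connects with its iSCSI initiator and formats the LUN as NTFS; underneath, ZFS still provides checksums, snapshots, and clones of the volume.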

[zfs-discuss] Practical Application of ZFS

2009-01-06 Thread Rob
ZFS is the bomb. It's a great file system. What are its real-world applications besides Solaris userspace? What I'd really like is to utilize the benefits of ZFS across all the platforms we use. For instance, we use Microsoft Windows Servers as our primary platform here. How might I utilize ZFS

Re: [zfs-discuss] zfs & iscsi sustained write performance

2008-12-08 Thread Rob
> (with iostat -xtc 1)
It sure would be nice to know if actv > 0, so we would know if the LUN was busy because its queue is full or just slow (svc_t > 200). For tracking errors, try `iostat -xcen 1` and `iostat -E`. Rob

Re: [zfs-discuss] SMART data

2008-12-08 Thread Rob Logan
the sata framework uses the sd driver, so it's:
4 % smartctl -d scsi -a /dev/rdsk/c4t2d0s0
smartctl version 5.36 [i386-pc-solaris2.8] Copyright (C) 2002-6 Bruce Allen
Home page is http://smartmontools.sourceforge.net/
Device: ATA WDC WD1001FALS-0 Version: 0K05
Serial number:
Device type: disk

Re: [zfs-discuss] Is SUNWhd for Thumper only?

2008-12-01 Thread Rob
c5t5d0p0  ATA WDC WD3200JD-00K  5J08  0 C (32 F)  Solaris2
Do you know of a Solaris tool to get SMART data? Rob

Re: [zfs-discuss] ? SX:CE snv_91 - ZFS - raid and mirror - drive sizes don't add correctl

2008-11-29 Thread Rob Clark
Bump. Some of the threads on this were last posted to over a year ago. I checked 6485689 and it is not fixed yet; is there any work being done in this area? Thanks, Rob
> There may be some work being done to fix this:
> zpool should support raidz of mirrors
> http://bugs.ope

Re: [zfs-discuss] Still more questions WRT selecting a mobo for small ZFS RAID

2008-11-15 Thread Rob
he should be expected with 16G filled for months. (Still, might not be an issue for a single home user, but if you're married it might be :-) The Enterprise version of the above drive is http://www.wdc.com/en/products/Products.asp?DriveID=503 possibly with a desirable faster timeout. Rob

Re: [zfs-discuss] Still more questions WRT selecting a mobo for small ZFS RAID

2008-11-14 Thread Rob
> WD Caviar Black drive [...] Intel E7200 2.53GHz 3MB L2
> The P45 based boards are a no-brainer
16G of DDR2-1066 with P45, or 8G of ECC DDR2-800 with 3210 based boards: that is the question. Rob

Re: [zfs-discuss] Inexpensive ZFS home server

2008-11-12 Thread Rob Logan
ered ECC / non-ECC SDRAM. http://www.intel.com/products/server/chipsets/3200-3210/3200-3210-overview.htm Rob

Re: [zfs-discuss] Ended up in GRUB prompt after the installation on ZFS

2008-11-09 Thread Rob
change paths. I.e., the disk label devid and /etc/zfs/zpool.cache are unnecessary. Both will remain wrong until a scrub. So, perhaps the issue is with an EFI labeled disk with old pool info getting converted to a VTOC label for the zfs root install.

Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?

2008-10-29 Thread Rob Logan
> ECC?
$60 unbuffered 4GB 800MHz DDR2 ECC CL5 DIMM (Kit Of 2): http://www.provantage.com/kingston-technology-kvr800d2e5k2-4g~7KIN90H4.htm for an Intel 32x0 north bridge like http://www.provantage.com/supermicro-x7sbe~7SUPM11K.htm

Re: [zfs-discuss] zfs-auto-snapshot 0.11 work (was Re: zfs-auto-snapshot with at scheduling )

2008-08-06 Thread Rob
> The other changes that will appear in 0.11 (which is nearly done) are:
Still looking forward to seeing .11 :) Think we can expect a release soon? (Or at least svn access so that others can check out the trunk?)

Re: [zfs-discuss] force a reset/reinheit zfs acls?

2008-08-05 Thread Rob
> Rob wrote:
>> Hello All!
>> Is there a command to force a re-inheritance/reset of ACLs? E.g., if I have a directory full of folders that have been created with inherited ACLs, and I want to change the ACLs on the parent folder, how can

[zfs-discuss] force a reset/reinheit zfs acls?

2008-08-05 Thread Rob
Hello All! Is there a command to force a re-inheritance/reset of ACLs? E.g., if I have a directory full of folders that have been created with inherited ACLs, and I want to change the ACLs on the parent folder, how can I force a reapply of all ACLs?
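
One approach (a sketch, not a built-in command; the path and the ACL spec are hypothetical) is to set the desired inheritable ACL on the parent and then push the same entries onto existing subdirectories with find. Note that chmod A=... replaces the whole ACL on each object:

  chmod A=owner@:full_set:fd:allow /tank/parent
  find /tank/parent -type d -exec chmod A=owner@:full_set:fd:allow {} \;

The f and d inheritance flags only take effect on directories; existing files would need the same entries applied without those flags.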

Re: [zfs-discuss] ? SX:CE snv_91 - ZFS - raid and mirror - drive sizes don't add correctl

2008-07-29 Thread Rob Clark
There may be some work being done to fix this:
zpool should support raidz of mirrors
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6485689
Discussed in this thread: Mirrored Raidz (Posted: Oct 19, 2006 9:02 PM)
http://opensolaris.org/jive/thread.jspa?threadID=15854&tstart=0 Thi

Re: [zfs-discuss] ZFS deduplication

2008-07-22 Thread Rob Clark
on", the ability to do this over a period of days is also useful. Indeed the Plan9 filesystem simply snapshots to WORM and has no delete - nor are they able to fill their drives faster than they can afford to buy new ones: Venti Filesystem http://www.cs.bell-labs.com/who/seanq/p9trace.html R

Re: [zfs-discuss] ? SX:CE snv_91 - ZFS - raid and mirror - drive

2008-07-22 Thread Rob Clark
swap, OCFS2, NTFS, FAT -- so it might be better to suggest adding ZFS there instead of focusing on non-ZFS solutions in this ZFS discussion group. Rob

Re: [zfs-discuss] ZFS deduplication

2008-07-22 Thread Rob Clark
and watch from the sidelines -- returning to the OS when you thought you were 'safe' (and if not, jumping back out). Thus, Mertol, it is possible (and could work very well). Rob

Re: [zfs-discuss] ? SX:CE snv_91 - ZFS - raid and mirror - drive

2008-07-21 Thread Rob Clark
> Solaris will allow you to do this, but you'll need to use SVM instead of ZFS.
> Or, I suppose, you could use SVM for RAID-5 and ZFS to mirror those.
> -- richard
Or run Linux ... Richard, the ZFS Best Practices Guide says not to: "Do not use the same disk or slice in both an SVM and ZFS con

Re: [zfs-discuss] ? SX:CE snv_91 - ZFS - raid and mirror - drive

2008-07-20 Thread Rob Clark
> Peter Tribble wrote:
>> On Sun, Jul 6, 2008 at 8:48 AM, Rob Clark wrote:
>> I have eight 10GB drives.
>> ...
>> I have 6 remaining 10 GB drives and I desire to "raid" 3 of them and "mirror" them to the other 3 to give me raid s

Re: [zfs-discuss] How to delete hundreds of emtpy snapshots

2008-07-20 Thread Rob Clark
he "." is part of the URL (NMF) - so add it or you'll 404). Rob This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] Adding my own compression to zfs

2008-07-20 Thread Rob Clark
this link: Using PPMD for compression http://www.codeproject.com/KB/recipes/ppmd.aspx Rob
