[zfs-discuss] Acme WX22B-TR?

2007-02-26 Thread Nicholas Lee
Has anyone run Solaris on one of these: http://acmemicro.com/estore/merchant.ihtml?pid=4014&step=4 ? It's a 2U chassis with 12 hot-swap SATA disks on a Supermicro motherboard. I would have to add a second Supermicro SATA2 controller to cover all the disks, since the onboard Intel controller can only handle 6. Nicholas

Re: [zfs-discuss] File System Filter Driver??

2007-02-26 Thread Toby Thain
On 26-Feb-07, at 11:32 PM, Richard Elling wrote: Rayson Ho wrote: NT kernel has the filter driver framework: http://www.microsoft.com/whdc/driver/filterdrv/default.mspx It seems to be useful for things like FS encryption and compression... is there any plan to implement something similar in Sol

Re: [zfs-discuss] File System Filter Driver??

2007-02-26 Thread Jim Dunham
Rayson, NT kernel has the filter driver framework: http://www.microsoft.com/whdc/driver/filterdrv/default.mspx It seems to be useful for things like FS encryption and compression... is there any plan to implement something similar in Solaris?? The Availability Suite product set (http://www.open

Re: [zfs-discuss] File System Filter Driver??

2007-02-26 Thread Richard Elling
Rayson Ho wrote: NT kernel has the filter driver framework: http://www.microsoft.com/whdc/driver/filterdrv/default.mspx It seems to be useful for things like FS encryption and compression... is there any plan to implement something similar in Solaris?? If you google "stacking file system" you'

[zfs-discuss] Re: solaris - ata over ethernet - zfs - HPC

2007-02-26 Thread Pascal Gauthier
AoE and ZFS do not work together, as Coraid does not support this yet. REF: http://www.coraid.com/support/solaris/aoe-1.3.1/doc/aoe-guide.html (search for ZFS). --- Pascal Gauthier http://www.nihilisme.ca/

[zfs-discuss] File System Filter Driver??

2007-02-26 Thread Rayson Ho
NT kernel has the filter driver framework: http://www.microsoft.com/whdc/driver/filterdrv/default.mspx It seems to be useful for things like FS encryption and compression... is there any plan to implement something similar in Solaris?? Rayson

Re: [zfs-discuss] understanding zfs/thumper "bottlenecks"?

2007-02-26 Thread Richard Elling
Jens Elkner wrote: Currently I'm trying to figure out the best zfs layout for a thumper wrt. read AND write performance. First things first. What is the expected workload? Random, sequential, lots of little files, few big files, 1-byte iops, synchronous data, constantly changing access tim

Re: [zfs-discuss] Re: .zfs snapshot directory in all directories

2007-02-26 Thread Thomas Garner
> for what purpose ? Darren's correct: it's simply a case of ease of use. Not show-stopping by any means, but it would be nice to have. Thomas

Re: [zfs-discuss] zfs received vol not appearing on iscsi target list

2007-02-26 Thread Adam Leventhal
On Sat, Feb 24, 2007 at 09:29:48PM +1300, Nicholas Lee wrote: > I'm not really a Solaris expert, but I would have expected vol4 to appear on > the iscsi target list automatically. Is there a way to refresh the target > list? Or is this a bug? Hi Nicholas, This is a bug either in ZFS or in the iS
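One thing to try while the bug stands: toggling the shareiscsi property on the received volume may re-register it with the target daemon. A minimal sketch, assuming a hypothetical pool/volume name:
  zfs set shareiscsi=off tank/vol4
  zfs set shareiscsi=on tank/vol4    # re-share; the target should reappear
  iscsitadm list target              # verify the target is now listed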

[zfs-discuss] understanding zfs/thumper "bottlenecks"?

2007-02-26 Thread Jens Elkner
Currently I'm trying to figure out the best zfs layout for a thumper wrt. read AND write performance. I did some simple mkfile 512G tests and found that on average ~500 MB/s seems to be the maximum one can reach (tried the initial default setup, all 46 HDDs as R0, etc.). According to h
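A minimal sketch of the kind of sequential-write test described, assuming hypothetical device names; running zpool iostat alongside shows where the bandwidth goes:
  zpool create tank raidz c0t0d0 c1t0d0 c2t0d0 c3t0d0 c4t0d0 c5t0d0
  mkfile 512g /tank/bigfile          # sequential write test, as in the post
  zpool iostat -v tank 5             # in another shell: per-vdev throughput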

Re: [zfs-discuss] Re: .zfs snapshot directory in all directories

2007-02-26 Thread Darren Dunham
> > for what purpose ? For me, I'd say ease of use. Using Netapp .snapshot directories, it's often easier to find files in a snapshot by relative path from the directory in question rather than from all the way back at the top of the filesystem. In addition, others have mentioned zone mounts: h
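A sketch of the difference, with hypothetical paths: on a NetApp the snapshot is reachable relative to any directory, while on ZFS today you must walk back to the filesystem root:
  # NetApp-style: relative lookup from the current directory
  ls .snapshot/nightly.0/report.txt
  # ZFS: snapshots appear only at the root of the filesystem
  ls /export/home/.zfs/snapshot/nightly/alice/project/report.txt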

[zfs-discuss] Re: .zfs snapshot directory in all directories

2007-02-26 Thread roland
for what purpose ?

Re: [zfs-discuss] Re: zfs bogus (10 u3)?

2007-02-26 Thread Richard Elling
mkfile files also compress rather nicely when you have ZFS compression enabled. -- richard Jens Elkner wrote: Hi Wire ;-), What's the output of zpool list zfs list ? Ooops, already destroyed the pool. Anyway, slept a night over it and found a "possible explanation": files were created wit

Re[2]: [zfs-discuss] Performance of "zpool import"?

2007-02-26 Thread Robert Milkowski
Hello Paul, Monday, February 26, 2007, 8:28:43 PM, you wrote: >> From: Eric Schrock [mailto:[EMAIL PROTECTED] >> Sent: Monday, February 26, 2007 12:05 PM >> >> The slow part of zpool import is actually discovering the >> pool configuration. This involves examining every device on >> the syst

[zfs-discuss] Re: Samba ACLs en ZFS

2007-02-26 Thread Jiri Sasek
This does not concern ACLs on ZFS; rather, it covers the mapping of ZFS snapshots to Samba shares.

[zfs-discuss] Re: Samba and ZFS ACL Question

2007-02-26 Thread Jiri Sasek
The root cause is in the acl(2) call. The ZFS implementation team did not implement backward compatibility for the SETACL/GETACL/GETACLCNT functions of this syscall; only the extended functions ACE_SETACL/ACE_GETACL/ACE_GETACLCNT are implemented on ZFS. The old ones return (errno == ENOTSUP) on ZFS (
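A quick way to see the difference from the shell (getfacl drives the old GETACL interface, while ls -v and chmod use the ACE interfaces; file and user names here are hypothetical):
  getfacl /tank/file       # old acl(2) GETACL path: fails with ENOTSUP on ZFS
  ls -v /tank/file         # ACE_GETACL path: works on ZFS
  chmod A+user:alice:read_data:allow /tank/file   # ACE-style ACL set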

[zfs-discuss] Re: Samba ACLs en ZFS

2007-02-26 Thread Jiri Sasek
I am currently working on this issue (among other problems). I hope the module will be finished before 3.0.25 is released.

RE: [zfs-discuss] Performance of "zpool import"?

2007-02-26 Thread Paul Fisher
> From: Eric Schrock [mailto:[EMAIL PROTECTED] > Sent: Monday, February 26, 2007 12:05 PM > > The slow part of zpool import is actually discovering the > pool configuration. This involves examining every device on > the system (or every device within a 'import -d' directory) > and seeing if i

Re: [zfs-discuss] Re: ARGHH. An other panic!!

2007-02-26 Thread Jason J. W. Williams
Hi Gino, Was there more than one LUN in the RAID-Z using the port you disabled? -J On 2/26/07, Gino Ruopolo <[EMAIL PROTECTED]> wrote: Hi Jason, Saturday we made some tests and found that disabling an FC port under heavy load (MPXio enabled) often leads to a panic (using a RAID-Z!). No prob

[zfs-discuss] Re: zfs bogus (10 u3)?

2007-02-26 Thread Jens Elkner
Hi Wire ;-), > What's the output of > zpool list > zfs list > ? Ooops, already destroyed the pool. Anyway, I slept a night over it and found a "possible explanation": the files were created with mkfile, and mkfile has an option -n. It was not used to create the files; however, I interrupted mkfile (^C).
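A minimal sketch of how one might check what an interrupted mkfile left behind (hypothetical path): the apparent length and the blocks actually allocated can differ, which would skew space accounting:
  mkfile 512g /tank/t1 &    # start the write, then interrupt it as with ^C
  kill %1
  ls -l /tank/t1            # apparent file length
  du -k /tank/t1            # blocks actually allocated on disk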

Re: [zfs-discuss] Performance of "zpool import"?

2007-02-26 Thread Nicolas Williams
On Mon, Feb 26, 2007 at 10:32:22AM -0800, Eric Schrock wrote: > On Mon, Feb 26, 2007 at 12:27:48PM -0600, Nicolas Williams wrote: > > > > What is slow, BTW? The open(2)s of the devices? Or the label reading? > > And is there a way to do async open(2)s w/o a thread per-open? The > > open(2) man

Re: [zfs-discuss] Performance of "zpool import"?

2007-02-26 Thread Eric Schrock
On Mon, Feb 26, 2007 at 12:27:48PM -0600, Nicolas Williams wrote: > > What is slow, BTW? The open(2)s of the devices? Or the label reading? > And is there a way to do async open(2)s w/o a thread per-open? The > open(2) man page isn't very detailed about O_NONBLOCK/O_NDELAY behaviour > on device

Re: [zfs-discuss] Performance of "zpool import"?

2007-02-26 Thread Nicolas Williams
On Mon, Feb 26, 2007 at 10:10:15AM -0800, Eric Schrock wrote: > On Mon, Feb 26, 2007 at 12:06:14PM -0600, Nicolas Williams wrote: > > Couldn't all that tasting be done in parallel? > > Yep, that's certainly possible. Sounds like a perfect feature for > someone in the community to work on :-) Sim

Re: [zfs-discuss] Re: Efficiency when reading the same file blocks

2007-02-26 Thread Wade . Stuart
[EMAIL PROTECTED] wrote on 02/26/2007 11:36:18 AM: > Jeff Davis wrote: > >> Given your question, are you about to come back with a case where you are not seeing this? > > Actually, the case where I saw the bad behavior was in Linux using the CFQ I/O scheduler. When reading the

Re: [zfs-discuss] Re: Efficiency when reading the same file blocks

2007-02-26 Thread Roch Bourbonnais
On 26 Feb 2007, at 18:30, Frank Cusack wrote: On February 26, 2007 9:05:21 AM -0800 Jeff Davis <[EMAIL PROTECTED]> wrote: That got me worried about the project I'm working on, and I wanted to understand ZFS's caching behavior better to prove to myself that the problem wouldn't happen under Z

Re: [zfs-discuss] Performance of "zpool import"?

2007-02-26 Thread Eric Schrock
On Mon, Feb 26, 2007 at 12:06:14PM -0600, Nicolas Williams wrote: > On Mon, Feb 26, 2007 at 10:05:08AM -0800, Eric Schrock wrote: > > The slow part of zpool import is actually discovering the pool > > configuration. This involves examining every device on the system (or > > every device within a '

Re: [zfs-discuss] Performance of "zpool import"?

2007-02-26 Thread Nicolas Williams
On Mon, Feb 26, 2007 at 10:05:08AM -0800, Eric Schrock wrote: > The slow part of zpool import is actually discovering the pool > configuration. This involves examining every device on the system (or > every device within a 'import -d' directory) and seeing if it has any > labels. Internally, the

Re: [zfs-discuss] Performance of "zpool import"?

2007-02-26 Thread Eric Schrock
The slow part of zpool import is actually discovering the pool configuration. This involves examining every device on the system (or every device within a 'import -d' directory) and seeing if it has any labels. Internally, the import action itself should be quite fast, and is essentially the same
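One practical consequence: pointing import at a directory that contains only the pool's device links keeps the tasting phase small. A sketch with hypothetical paths:
  zpool import -d /dev/dsk tank          # scan only /dev/dsk
  zpool import -d /mypool-devs tank      # or a directory holding just this pool's links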

Re: [zfs-discuss] How zpool import command works ?

2007-02-26 Thread Eric Schrock
It's perfectly reasonable to have multiple exported/destroyed pools with the same name. Pool names are unique only when active on the system. This is why 'zpool import' also prints out the pool GUID and allows import by ID, instead of just names. In your output below, you'd see that each pool has
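A sketch of importing by ID rather than by name, with a hypothetical GUID:
  zpool import                              # lists importable pools with name and id (GUID)
  zpool import 4866286604500395542 tank2    # import by id, giving the pool a new name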

Re: [zfs-discuss] Re: Efficiency when reading the same file blocks

2007-02-26 Thread Bart Smaalders
Jeff Davis wrote: Given your question, are you about to come back with a case where you are not seeing this? Actually, the case where I saw the bad behavior was in Linux using the CFQ I/O scheduler. When reading the same file sequentially, adding processes drastically reduced total disk throu

Re: [zfs-discuss] Re: Efficiency when reading the same file blocks

2007-02-26 Thread Frank Cusack
On February 26, 2007 9:05:21 AM -0800 Jeff Davis <[EMAIL PROTECTED]> wrote: That got me worried about the project I'm working on, and I wanted to understand ZFS's caching behavior better to prove to myself that the problem wouldn't happen under ZFS. Clearly the block will be in cache on the secon

[zfs-discuss] Re: What SATA controllers are people using for ZFS?

2007-02-26 Thread Wes Williams
After no luck with a pair of Syba SD-SATA-4P PCI-X SATA II controllers (Sil3114 chipset), I've now successfully used a Tekram TR-834A 4-port SATA-II controller (Sil3124-2 chipset) at the full PCI-X 133 MHz bus speed on b50. Since my disk mirror on the previous SATA controller (built-in W1100

[zfs-discuss] page rates

2007-02-26 Thread Rob Logan
This is a lightly loaded v20z, but it has zfs across its two disks. It's hung (requiring a power cycle) twice since running 5.11 opensol-20060904; the last time, I had a `vmstat 1` running... nice page rates right before death :-) [truncated vmstat 1 output: kthr / memory / page / disk / faults column headers]

[zfs-discuss] Re: Efficiency when reading the same file blocks

2007-02-26 Thread Jeff Davis
> Given your question, are you about to come back with a case where you are not seeing this? Actually, the case where I saw the bad behavior was in Linux using the CFQ I/O scheduler. When reading the same file sequentially, adding processes drastically reduced total disk throughput (single d

[zfs-discuss] Re: Does running redundancy with ZFS use as much disk

2007-02-26 Thread Eric Haycraft
In other words, say you have four 500 GB drives. In a standard raidz configuration, you should yield (4-1) * 500 GB of space, or 1.5 TB. In your case, I will mention one caveat: say you have 8 drives in raidz. If you have seven 500 GB drives and one 20 GB drive, you will only yield (8-1) * 20 GB, or 140 GB of s

Re: [zfs-discuss] zfs and iscsi: cannot open : I/O error

2007-02-26 Thread cedric briner
>> devfsadm -i iscsi  # to create the device on sf3
>> iscsiadm list target -Sv | egrep 'OS Device|Peer|Alias'  # not empty
>>   Alias: vol-1
>>   IP address (Peer): 10.194.67.111:3260
>>   OS Device Name: /dev/rdsk/c1t014005A267C12A0045E2F524d0s2
this i

Re: [zfs-discuss] How zpool import command works ?

2007-02-26 Thread Francois Dion
On Mon, 2007-02-26 at 07:00 -0800, dudekula mastan wrote: > Hi All, > I have a zpool (named testpool) on /dev/dsk/c0t0d0. > The command $zpool import testpool imports the testpool (i.e., mounts it). > How does the import command know testpool was created on /dev/dsk/

[zfs-discuss] Re: Does running redundancy with ZFS use as much disk

2007-02-26 Thread Eric Haycraft
Also note that you may not need new hardware. While Solaris is not as hardware-compatible as Linux, it will run on a variety of hardware, so you may just be able to get by with new disks.

[zfs-discuss] How zpool import command works ?

2007-02-26 Thread dudekula mastan
Hi All, I have a zpool (named testpool) on /dev/dsk/c0t0d0. The command $zpool import testpool imports the testpool (i.e., mounts it). How does the import command know that testpool was created on /dev/dsk/c0t0d0? And also, the command $zpool import lists out all

Re: [zfs-discuss] zfs and iscsi: cannot open : I/O error

2007-02-26 Thread Matty
On 2/26/07, cedric briner <[EMAIL PROTECTED]> wrote: hello, I'm trying to consolidate my HDs in a cheap but (I hope) reliable manner. To do so, I was thinking of using zfs over iscsi. Unfortunately, I'm having some issues with it when I do: # iscsi server (nexenta alpha 5) # svcadm e

[zfs-discuss] zfs and iscsi: cannot open : I/O error

2007-02-26 Thread cedric briner
hello, I'm trying to consolidate my HDs in a cheap but (I hope) reliable manner. To do so, I was thinking of using zfs over iscsi. Unfortunately, I'm having some issues with it when I do:
  # iscsi server (nexenta alpha 5)
  svcadm enable iscsitgt
  iscsitadm delete target --lun 0 vol-1
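For reference, a hedged sketch of the overall target/initiator flow being attempted here, with hypothetical volume names and addresses:
  # target (server) side
  svcadm enable iscsitgt
  iscsitadm create target -b /dev/zvol/rdsk/store/vol-1 vol-1
  # initiator side
  iscsiadm add discovery-address 10.194.67.111
  iscsiadm modify discovery --sendtargets enable
  devfsadm -i iscsi              # create the device nodes
  zpool create tank c1t...d0     # device name as reported by iscsiadm list target -Sv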

[zfs-discuss] Performance of "zpool import"?

2007-02-26 Thread Paul Fisher
Has anyone done benchmarking on the scalability and performance of zpool import in terms of the number of devices in the pool on recent opensolaris builds? In other words, what would the relative performance be for "zpool import" for the following three pool configurations on multi-pathed 4G FC

Re: [zfs-discuss] Does running redundancy with ZFS use as much disk space as doubling drives?

2007-02-26 Thread Jeff Bonwick
> My plan was to have 8-10 cheap drives, most of them IDE drives from > 120 gig and up to 320 gig. Does that mean that I can get 7-9 drives > with data plus full redundancy from the last drive? It sounds almost > like magic to me to be able to have the data on maybe 1 TB of drives > and have one dr

Re: [zfs-discuss] Does running redundancy with ZFS use as much disk space as doubling drives?

2007-02-26 Thread Jeff Bonwick
On Mon, Feb 26, 2007 at 01:53:17AM -0800, Tor wrote: > [...] if using redundancy on ZDF The ZFS Document Format? ;-) > uses less disk space than simply getting extra drives and doing identical copies, > with periodic CRC checks of the source material to check the health. If you create a 2-disk mirro
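A sketch of the two layouts with four hypothetical 320 GB drives, showing the capacity trade-off:
  zpool create tank mirror c0d0 c1d0 mirror c2d0 c3d0   # ~640 GB usable (N/2)
  zpool create tank raidz c0d0 c1d0 c2d0 c3d0           # ~960 GB usable (N-1)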

[zfs-discuss] Does running redundancy with ZFS use as much disk space as doubling drives?

2007-02-26 Thread Tor
If I'm gonna use OpenSolaris, I will have to buy new hardware, which I can't really defend at the moment. But I may be able to defend it in the near future if using redundancy on ZDF uses less disk space than simply getting extra drives and doing identical copies, with periodic CRC checks of the sour

[zfs-discuss] Re: Are media files compressable with ZFS?

2007-02-26 Thread Tor
Dang, I think I'm dead as far as Solaris goes. I checked the HCL and the Java compatibility check, and neither of the two controllers I would need to use, one PCI IDE and one S-ATA on the KT-4 motherboard, will work with OpenSolaris. Annoying as heck, but it looks like I'm gonna have to stick with

[zfs-discuss] Re: Is there an "idiot's guide" to creating network access for users?

2007-02-26 Thread Tor
Dang, I think I'm dead as far as Solaris goes. I checked the HCL and the Java compatibility check, and neither of the two controllers I would need to use, one PCI IDE and one S-ATA on the KT-4 motherboard, will work with OpenSolaris. Annoying as heck, but it looks like I'm gonna have to stick with

Re: [zfs-discuss] .zfs snapshot directory in all directories

2007-02-26 Thread Jeremy Teo
On 2/26/07, Thomas Garner <[EMAIL PROTECTED]> wrote: Since I have been unable to find the answer online, I thought I would ask here. Is there a knob to turn on a zfs filesystem to put the .zfs snapshot directory into all of the child directories of the filesystem, like the .snapshot directori
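For the filesystem root there is at least the snapdir property, though it does not extend into child directories as asked. A sketch with a hypothetical dataset:
  zfs set snapdir=visible tank/home    # make .zfs show up in directory listings
  ls /tank/home/.zfs/snapshot          # snapshots still appear only at the fs root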

[zfs-discuss] Re: ARGHH. An other panic!!

2007-02-26 Thread Gino Ruopolo
Hi Jason, Saturday we made some tests and found that disabling an FC port under heavy load (MPXio enabled) often leads to a panic (using a RAID-Z!). No problems with UFS ... later, Gino