[zfs-discuss] SSDs with a SCSI SCA interface?

2009-12-03 Thread Erik Trimble
Hey folks. I've looked around quite a bit, and I can't find something like this: I have a bunch of older systems which use Ultra320 SCA hot-swap connectors for their internal drives. (e.g. v20z and similar) I'd love to be able to use modern flash SSDs with these systems, but I have yet to fi

Re: [zfs-discuss] b128a available w/deduplication

2009-12-03 Thread Dennis Clarke
> Dennis Clarke wrote: >>> FYI, >>> OpenSolaris b128a is available for download or image-update from the >>> dev repository. Enjoy. >> >> I thought that dedupe has been out for weeks now ? > > The source has, yes. But what Richard was referring to was the > respun build now available via IPS. Oh

Re: [zfs-discuss] file concatenation with ZFS copy-on-write

2009-12-03 Thread Michael Schuster
Nicolas Williams wrote: On Thu, Dec 03, 2009 at 12:44:16PM -0800, Per Baatrup wrote: if any of f2..f5 have different block sizes from f1 This restriction does not sound so bad to me if this only refers to changes to the blocksize of a particular ZFS filesystem or copying between different ZFSes

Re: [zfs-discuss] mpt errors on snv 127

2009-12-03 Thread Chad Cantwell
I eventually performed a few more tests, adjusting some zfs tuning options which had no effect, and trying the itmpt driver which someone had said would work, and regardless my system would always freeze quite rapidly in snv 127 and 128a. Just to double check my hardware, I went back to the ope

Re: [zfs-discuss] ZIL corrupt, not recoverable even with logfix

2009-12-03 Thread James Risner
It was created on AMD64 FreeBSD with 8.0RC2 (which was version 13 of ZFS iirc.) At some point I knocked it out (export) somehow, I don't remember doing so intentionally. So I can't do commands like zpool replace since there are no pools. It says it was last used by the FreeBSD box, but the Fre

Re: [zfs-discuss] b128a available w/deduplication

2009-12-03 Thread Eric D. Mudama
On Fri, Dec 4 at 1:12, Dennis Clarke wrote: FYI, OpenSolaris b128a is available for download or image-update from the dev repository. Enjoy. I thought that dedupe has been out for weeks now ? Dedupe has been out, but there were some accounting issues scheduled to be fixed in 128. --eric

Re: [zfs-discuss] b128a available w/deduplication

2009-12-03 Thread James C. McPherson
Dennis Clarke wrote: FYI, OpenSolaris b128a is available for download or image-update from the dev repository. Enjoy. I thought that dedupe has been out for weeks now ? The source has, yes. But what Richard was referring to was the respun build now available via IPS. cheers, James C. McPhe

Re: [zfs-discuss] b128a available w/deduplication

2009-12-03 Thread Dennis Clarke
> FYI, > OpenSolaris b128a is available for download or image-update from the > dev repository. Enjoy. I thought that dedupe has been out for weeks now ? Dennis

[zfs-discuss] b128a available w/deduplication

2009-12-03 Thread Richard Elling
FYI, OpenSolaris b128a is available for download or image-update from the dev repository. Enjoy. -- richard
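For anyone wanting to pull the respun build, a minimal sketch of updating an image from the dev repository follows; the publisher name and repository URL are the usual OpenSolaris defaults and are assumptions here, not taken from Richard's note.

    # point the image at the dev repository (assumes the default publisher name)
    pfexec pkg set-publisher -O http://pkg.opensolaris.org/dev/ opensolaris.org
    # update to the newest build and reboot into the new boot environment
    pfexec pkg image-update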

Re: [zfs-discuss] Supermicro AOC-USAS-L8i

2009-12-03 Thread Tim Cook
On Thu, Dec 3, 2009 at 8:02 PM, steven wrote: > It will work in a standard 8x or 16x slot. The bracket is backward. Not one > for subtlety, I took the bracket off, grabbed some pliers, and reversed all > the bends. Not exactly ideal... but I was then able to get it in the case > and get some scre

Re: [zfs-discuss] Supermicro AOC-USAS-L8i

2009-12-03 Thread steven
It will work in a standard 8x or 16x slot. The bracket is backward. Not one for subtlety, I took the bracket off, grabbed some pliers, and reversed all the bends. Not exactly ideal... but I was then able to get it in the case and get some screw tension on it to hold it snugly to the case. I had

Re: [zfs-discuss] possible mega_sas issue sol10u8 (Re: Workaround for mpt timeouts in snv_127)

2009-12-03 Thread James C. McPherson
Tru Huynh wrote: follow up, another crash today. On Mon, Nov 30, 2009 at 11:35:07AM +0100, Tru Huynh wrote: 1) OS SunOS xargos.bis.pasteur.fr 5.10 Generic_141445-09 i86pc i386 i86pc You should be logging a support call for this issue. James C. McPherson -- Senior Kernel Software Engineer, S

Re: [zfs-discuss] L2ARC in clusters

2009-12-03 Thread Erik Trimble
Robert Milkowski wrote: Robert Milkowski wrote: Robert Milkowski wrote: Hi, When deploying ZFS in a cluster environment it would be nice to be able to have some SSDs as local drives (not on the SAN), and when a pool switches over to the other node ZFS would pick up that node's local disk drives as L2

Re: [zfs-discuss] file concatenation with ZFS copy-on-write

2009-12-03 Thread A Darren Dunham
On Thu, Dec 03, 2009 at 12:44:16PM -0800, Per Baatrup wrote: > >any of f1..f5's last blocks are partial > Does this mean that f1,f2,f3,f4 need to be an exact multiple of the ZFS > blocksize? This is a severe restriction that will fail except in very > special cases. Is this related to the disk form

Re: [zfs-discuss] zpool import - device names not always updated?

2009-12-03 Thread Ragnar Sundblad
Thank you Cindy for your reply! On 3 dec 2009, at 18.35, Cindy Swearingen wrote: > A bug might exist but you are building a pool based on the ZFS > volumes that are created in another pool. This configuration > is not supported and possible deadlocks can occur. I had absolutely no idea that ZFS

[zfs-discuss] Quota information from nfs mounting linux client

2009-12-03 Thread Len Zaifman
We are using zfs (solaris 10u9) to serve disk to a couple of hundred linux clients via nfs. We would like users on the linux clients to be able to monitor their disk space on the zfs file system. They do not have shell accounts on the fileserver. Is the quota information on the fileserver (use
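One possible answer, sketched here rather than taken from a reply in this thread: when each user has a dedicated dataset with a quota, the quota is reported as the filesystem size, so a plain df on the NFS client already shows it. The dataset, user, and mount-point names below are placeholders.

    # on the fileserver (placeholder names)
    zfs create tank/home/alice
    zfs set quota=10G tank/home/alice
    zfs set sharenfs=on tank/home/alice
    # on the Linux client, once the share is mounted, df reports the 10G quota
    df -h /home/alice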

Re: [zfs-discuss] L2ARC in clusters

2009-12-03 Thread Wes Felter
Robert Milkowski wrote: Robert Milkowski wrote: Robert Milkowski wrote: When deploying ZFS in a cluster environment it would be nice to be able to have some SSDs as local drives (not on the SAN), and when a pool switches over to the other node ZFS would pick up that node's local disk drives as L2ARC.

Re: [zfs-discuss] file concatenation with ZFS copy-on-write

2009-12-03 Thread Daniel Carosone
> > Isn't this only true if the file sizes are such that the concatenated > > blocks are perfectly aligned on the same zfs block boundaries they used > > before? This seems unlikely to me. > > Yes that would be the case. While eagerly awaiting b128 to appear in IPS, I have been giving this iss

Re: [zfs-discuss] file concatenation with ZFS copy-on-write

2009-12-03 Thread Nicolas Williams
On Thu, Dec 03, 2009 at 12:44:16PM -0800, Per Baatrup wrote: > >if any of f2..f5 have different block sizes from f1 > > This restriction does not sound so bad to me if this only refers to > changes to the blocksize of a particular ZFS filesystem or copying > between different ZFSes in the same poo

Re: [zfs-discuss] file concatenation with ZFS copy-on-write

2009-12-03 Thread Per Baatrup
>Btw. I would be surprised to hear that this can be implemented >with current APIs; I agree. However it looks like an opportunity to dive into the Z-source code.

Re: [zfs-discuss] file concatenation with ZFS copy-on-write

2009-12-03 Thread Per Baatrup
>if any of f2..f5 have different block sizes from f1 This restriction does not sound so bad to me if this only refers to changes to the blocksize of a particular ZFS filesystem or copying between different ZFSes in the same pool. This can properly be managed with a "-f" switch on the userland app

Re: [zfs-discuss] Petabytes on a budget - blog

2009-12-03 Thread Trevor Pretty
Just thought I would let everybody know I saw one at a local ISP yesterday. They hadn't started testing; the metal had only arrived the day before and they were waiting for the drives to arrive. They had also changed the design to give it more network capacity. I will try to find out more as the custom

[zfs-discuss] EON ZFS Storage 0.59.5 based on snv 125 released!

2009-12-03 Thread Andre Lue
Embedded Operating system/Networking (EON), a RAM-based live ZFS NAS appliance, is released on Genunix! Many thanks to Al Hopper and Genunix.org for download hosting and serving the opensolaris community. EON ZFS storage is available in 32/64-bit CIFS and Samba versions: tryit EON 64-bit x86 CIFS

Re: [zfs-discuss] file concatenation with ZFS copy-on-write

2009-12-03 Thread Roland Rambau
Per, Per Baatrup wrote: Roland, Clearly an extension of "cp" would be very nice when managing large files. Today we are relying heavily on snapshots for this, but this requires discipline in storing files in separate zfs'es, avoiding snapshotting too many files that change frequently. The re

Re: [zfs-discuss] possible mega_sas issue sol10u8 (Re: Workaround for mpt timeouts in snv_127)

2009-12-03 Thread Tru Huynh
follow up, another crash today. On Mon, Nov 30, 2009 at 11:35:07AM +0100, Tru Huynh wrote: > 1) OS > SunOS xargos.bis.pasteur.fr 5.10 Generic_141445-09 i86pc i386 i86pc > > it's only sharing through NFS v3 to linux clients running > 20x CentOS-5 x86_64 2.6.18-164.6.1.el5 x86_64/i386 > 78x CentOS-3

Re: [zfs-discuss] file concatenation with ZFS copy-on-write

2009-12-03 Thread Nicolas Williams
On Thu, Dec 03, 2009 at 03:57:28AM -0800, Per Baatrup wrote: > I would like to concatenate N files into one big file taking > advantage of ZFS copy-on-write semantics so that the file > concatenation is done without actually copying any (large amount of) > file content. > cat f1 f2 f3 f4 f5 >

[zfs-discuss] L2ARC re-uses new device if it is in the same "place"

2009-12-03 Thread Robert Milkowski
Hi,

mi...@r600:/rpool/tmp# zpool status test
  pool: test
 state: ONLINE
 scrub: none requested
config:

        NAME             STATE  READ WRITE CKSUM
        test             ONLINE    0     0     0
          /rpool/tmp/f1  ONLINE    0     0     0

errors: No known data errors

lets add a cache devic
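The preview cuts off at "lets add a cache devic"; the step presumably being shown is adding an L2ARC device to the file-backed test pool, roughly like the sketch below (the device name is a placeholder, not from the original message).

    # add a cache (L2ARC) device to the test pool and confirm it appears
    zpool add test cache c1t1d0
    zpool status test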

Re: [zfs-discuss] L2ARC in clusters

2009-12-03 Thread Robert Milkowski
Robert Milkowski wrote: Robert Milkowski wrote: Hi, When deploying ZFS in a cluster environment it would be nice to be able to have some SSDs as local drives (not on the SAN), and when a pool switches over to the other node ZFS would pick up that node's local disk drives as L2ARC. To better clarify

Re: [zfs-discuss] file concatenation with ZFS copy-on-write

2009-12-03 Thread A Darren Dunham
On Thu, Dec 03, 2009 at 09:36:23AM -0800, Per Baatrup wrote: > The reason I was speaking about "cat" in stead of "cp" is that in > addition to copying a single file I would like also to concatenate > several files into a single file. Can this be accomplished with your > "(z)cp"? Unless you have s

Re: [zfs-discuss] L2ARC in clusters

2009-12-03 Thread Robert Milkowski
Robert Milkowski wrote: Robert Milkowski wrote: Hi, When deploying ZFS in a cluster environment it would be nice to be able to have some SSDs as local drives (not on the SAN), and when a pool switches over to the other node ZFS would pick up that node's local disk drives as L2ARC. To better clarify

Re: [zfs-discuss] L2ARC in clusters

2009-12-03 Thread Robert Milkowski
Robert Milkowski wrote: Hi, When deploying ZFS in a cluster environment it would be nice to be able to have some SSDs as local drives (not on the SAN), and when a pool switches over to the other node ZFS would pick up that node's local disk drives as L2ARC. To better clarify what I mean let's assume t

[zfs-discuss] L2ARC in clusters

2009-12-03 Thread Robert Milkowski
Hi, When deploying ZFS in a cluster environment it would be nice to be able to have some SSDs as local drives (not on the SAN), and when a pool switches over to the other node ZFS would pick up that node's local disk drives as L2ARC. To better clarify what I mean let's assume there is a 2-node cluster
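A manual version of what is being asked for might look like the sketch below after a failover; the pool and device names are hypothetical, and this is not something the cluster framework does automatically today.

    # on the node that has just taken over the pool
    zpool import tank
    zpool add tank cache c3t0d0      # this node's local SSD as L2ARC
    # before failing back, drop the local-only cache device again
    zpool remove tank c3t0d0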

Re: [zfs-discuss] file concatenation with ZFS copy-on-write

2009-12-03 Thread Per Baatrup
Roland, Clearly an extension of "cp" would be very nice when managing large files. Today we are relying heavily on snapshots for this, but this requires discipline in storing files in separate zfs'es, avoiding snapshotting too many files that change frequently. The reason I was speaking about "ca

Re: [zfs-discuss] zpool import - device names not always updated?

2009-12-03 Thread Cindy Swearingen
Hi Ragnar, A bug might exist but you are building a pool based on the ZFS volumes that are created in another pool. This configuration is not supported and possible deadlocks can occur. If you can retry this example without building a pool on another pool, like using files to create a pool and c
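Cindy's file-backed alternative might look like the following sketch (file paths, sizes, and the pool name are placeholders):

    # create backing files and build a throwaway pool on them instead of on zvols
    mkfile 256m /var/tmp/vdev1 /var/tmp/vdev2
    zpool create testpool mirror /var/tmp/vdev1 /var/tmp/vdev2
    zpool status testpool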

Re: [zfs-discuss] ZIL corrupt, not recoverable even with logfix

2009-12-03 Thread Anon Y Mous
Was the zpool originally created by a FreeBSD operating system or by an OpenSolaris operating system, and which version of FreeBSD, SXCE, or OpenSolaris Indiana was it originally created by? The reason I'm asking this is because there are different versions of ZFS in different versions of OpenSol

Re: [zfs-discuss] file concatenation with ZFS copy-on-write

2009-12-03 Thread Roland Rambau
Michael, michael schuster wrote: Roland Rambau wrote: gang, actually a simpler version of that idea would be a "zcp": if I just cp a file, I know that all blocks of the new file will be duplicates; so the cp could take full advantage of the dedup without a need to check/read/write any actu

Re: [zfs-discuss] Separate Zil on HDD ?

2009-12-03 Thread Neil Perrin
On 12/03/09 09:21, mbr wrote: Hello, Bob Friesenhahn wrote: On Thu, 3 Dec 2009, mbr wrote: What about the data that were on the ZILlog SSD at the time of failure, is a copy of the data still in the machines memory from where it can be used to put the transaction to the stable storage poo

Re: [zfs-discuss] Separate Zil on HDD ?

2009-12-03 Thread Bob Friesenhahn
On Thu, 3 Dec 2009, mbr wrote: Has the following error no consequences? Bug ID: 6538021; Synopsis: Need a way to force pool startup when zil cannot be replayed; State: 3-Accepted (Yes, that is a problem); Link: http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6538021 I don't kn

Re: [zfs-discuss] file concatenation with ZFS copy-on-write

2009-12-03 Thread Bob Friesenhahn
On Thu, 3 Dec 2009, Jason King wrote: Well it could be done in a way such that it could be fs-agnostic (perhaps extending /bin/cat with a new flag such as -o outputfile, or detecting if stdout is a file vs tty, though corner cases might get tricky). If a particular fs supported such a feature,

Re: [zfs-discuss] Separate Zil on HDD ?

2009-12-03 Thread mbr
Hello, Bob Friesenhahn wrote: On Thu, 3 Dec 2009, mbr wrote: What about the data that were on the ZILlog SSD at the time of failure, is a copy of the data still in the machines memory from where it can be used to put the transaction to the stable storage pool? The intent log SSD is used as

Re: [zfs-discuss] file concatenation with ZFS copy-on-write

2009-12-03 Thread Jason King
On Thu, Dec 3, 2009 at 9:58 AM, Bob Friesenhahn wrote: > On Thu, 3 Dec 2009, Erik Ableson wrote: >> >> Much depends on the contents of the files. Fixed size binary blobs that >> align nicely with 16/32/64k boundaries, or variable sized text files. > > Note that the default zfs block size is 128K a

Re: [zfs-discuss] file concatenation with ZFS copy-on-write

2009-12-03 Thread Darren J Moffat
Bob Friesenhahn wrote: On Thu, 3 Dec 2009, Erik Ableson wrote: Much depends on the contents of the files. Fixed size binary blobs that align nicely with 16/32/64k boundaries, or variable sized text files. Note that the default zfs block size is 128K, so that will be the defaul

Re: [zfs-discuss] file concatenation with ZFS copy-on-write

2009-12-03 Thread Bob Friesenhahn
On Thu, 3 Dec 2009, Erik Ableson wrote: Much depends on the contents of the files. Fixed size binary blobs that align nicely with 16/32/64k boundaries, or variable sized text files. Note that the default zfs block size is 128K, so that will be the default dedup block size. Mos
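Since dedup matches whole ZFS blocks, the dataset recordsize sets the granularity; a quick sketch of inspecting and changing it (the dataset name is a placeholder, and a new recordsize only applies to files written afterwards):

    # show the block size used for new files on this dataset (default 128K)
    zfs get recordsize tank/data
    # a smaller recordsize makes it more likely that block boundaries line up
    zfs set recordsize=64K tank/data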

Re: [zfs-discuss] file concatenation with ZFS copy-on-write

2009-12-03 Thread Seth
michael schuster wrote: Roland Rambau wrote: gang, actually a simpler version of that idea would be a "zcp": if I just cp a file, I know that all blocks of the new file will be duplicates; so the cp could take full advantage of the dedup without a need to check/read/write any actual data I

Re: [zfs-discuss] file concatenation with ZFS copy-on-write

2009-12-03 Thread michael schuster
Per Baatrup wrote: Actually 'ln -s source target' would not be the same as "zcp source target", as writing to the source file after the operation would change the target file as well, whereas for "zcp" this would only change the source file due to copy-on-write semantics of ZFS. I actually was thin

Re: [zfs-discuss] Separate Zil on HDD ?

2009-12-03 Thread Bob Friesenhahn
On Thu, 3 Dec 2009, mbr wrote: What about the data that were on the ZILlog SSD at the time of failure, is a copy of the data still in the machines memory from where it can be used to put the transaction to the stable storage pool? The intent log SSD is used as 'write only' unless the system re
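For reference, dedicating a device to the intent log is a single command; the pool and device names below are placeholders, not taken from this thread.

    # add a separate log (slog) device; it is written during normal operation
    # and only read back if the pool is imported after an unclean shutdown
    zpool add tank log c2t0d0
    zpool status tank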

Re: [zfs-discuss] file concatenation with ZFS copy-on-write

2009-12-03 Thread Per Baatrup
Actually 'ln -s source target' would not be the same as "zcp source target", as writing to the source file after the operation would change the target file as well, whereas for "zcp" this would only change the source file due to copy-on-write semantics of ZFS.

Re: [zfs-discuss] file concatenation with ZFS copy-on-write

2009-12-03 Thread Darren J Moffat
Bob Friesenhahn wrote: On Thu, 3 Dec 2009, Darren J Moffat wrote: The answer to this is likely deduplication which ZFS now has. The reason dedup should help here is that after the 'cat' f15 will be made up of blocks that match the blocks of f1 f2 f3 f4 f5. Copy-on-write isn't what helps you

Re: [zfs-discuss] file concatenation with ZFS copy-on-write

2009-12-03 Thread michael schuster
Roland Rambau wrote: gang, actually a simpler version of that idea would be a "zcp": if I just cp a file, I know that all blocks of the new file will be duplicates; so the cp could take full advantage of the dedup without a need to check/read/write any actual data I think they call it 'ln' ;

Re: [zfs-discuss] file concatenation with ZFS copy-on-write

2009-12-03 Thread Roland Rambau
gang, actually a simpler version of that idea would be a "zcp": if I just cp a file, I know that all blocks of the new file will be duplicates; so the cp could take full advantage of the dedup without a need to check/read/write any actual data -- Roland Per Baatrup wrote: "dedup" operate

Re: [zfs-discuss] file concatenation with ZFS copy-on-write

2009-12-03 Thread Per Baatrup
"zcat" was my acronym for a special ZFS aware version of "cat" and the name was obviously a big mistake as I did not know it was an existing command and simply forgot to check. Should rename if to "zfscat" or something similar? -- This message posted from opensolaris.org ___

Re: [zfs-discuss] file concatenation with ZFS copy-on-write

2009-12-03 Thread Michael Schuster
Per Baatrup wrote: "dedup" operates on the block level leveraging the existing FFS checksums. Read "What to dedup: Files, blocks, or bytes" here http://blogs.sun.com/bonwick/entry/zfs_dedup The trick should be that the zcat userland app already knows that it will generate duplicate files so data

Re: [zfs-discuss] file concatenation with ZFS copy-on-write

2009-12-03 Thread Per Baatrup
"dedup" operates on the block level leveraging the existing FFS checksums. Read "What to dedup: Files, blocks, or bytes" here http://blogs.sun.com/bonwick/entry/zfs_dedup The trick should be that the zcat userland app already knows that it will generate duplicate files so data read and writes c

Re: [zfs-discuss] file concatenation with ZFS copy-on-write

2009-12-03 Thread Erik Ableson
On 3 Dec 2009, at 13:29, Bob Friesenhahn wrote: On Thu, 3 Dec 2009, Darren J Moffat wrote: The answer to this is likely deduplication which ZFS now has. The reason dedup should help here is that after the 'cat' f15 will be made up of blocks that match the blocks of f1 f2 f3 f4 f5. Co

Re: [zfs-discuss] Separate Zil on HDD ?

2009-12-03 Thread mbr
Hello, Edward Ned Harvey wrote: Yes, I have SSD for ZIL. Just one SSD. 32G. But if this is the problem, then you'll have the same poor performance on the local machine that you have over NFS. So I'm curious to see if you have the same poor performance locally. The ZIL does not need to be re

Re: [zfs-discuss] file concatenation with ZFS copy-on-write

2009-12-03 Thread Bob Friesenhahn
On Thu, 3 Dec 2009, Darren J Moffat wrote: The answer to this is likely deduplication which ZFS now has. The reason dedup should help here is that after the 'cat' f15 will be made up of blocks that match the blocks of f1 f2 f3 f4 f5. Copy-on-write isn't what helps you here it is dedup. Isn

Re: [zfs-discuss] file concatenation with ZFS copy-on-write

2009-12-03 Thread Darren J Moffat
Peter Tribble wrote: On Thu, Dec 3, 2009 at 12:08 PM, Darren J Moffat wrote: Per Baatrup wrote: I would like to concatenate N files into one big file taking advantage of ZFS copy-on-write semantics so that the file concatenation is done without actually copying any (large amount of) file co

Re: [zfs-discuss] file concatenation with ZFS copy-on-write

2009-12-03 Thread Peter Tribble
On Thu, Dec 3, 2009 at 12:08 PM, Darren J Moffat wrote: > Per Baatrup wrote: >> >> I would like to concatenate N files into one big file taking advantage >> of ZFS copy-on-write semantics so that the file concatenation is done >> without actually copying any (large amount of) file content. >>

Re: [zfs-discuss] file concatenation with ZFS copy-on-write

2009-12-03 Thread Darren J Moffat
Per Baatrup wrote: I would like to concatenate N files into one big file taking advantage of ZFS copy-on-write semantics so that the file concatenation is done without actually copying any (large amount of) file content. cat f1 f2 f3 f4 f5 > f15 Is this already possible when source and tar
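A sketch of the dedup-based approach Darren describes, assuming build 128 or later; the pool and dataset names are placeholders, and as the rest of the thread notes, blocks only dedup when the inputs line up on recordsize boundaries.

    # enable dedup on the dataset holding the files, then concatenate as usual
    zfs set dedup=on tank/data
    cat f1 f2 f3 f4 f5 > f15
    # the pool-wide ratio shows how much data was shared rather than copied
    zpool get dedupratio tank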

[zfs-discuss] file concatenation with ZFS copy-on-write

2009-12-03 Thread Per Baatrup
I would like to concatenate N files into one big file taking advantage of ZFS copy-on-write semantics so that the file concatenation is done without actually copying any (large amount of) file content. cat f1 f2 f3 f4 f5 > f15 Is this already possible when source and target are on the same Z
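Whether the result can share blocks with the originals depends on alignment; a rough precondition check, based on the constraints raised later in the thread, is that every input except the last must be an exact multiple of the dataset recordsize (dataset and file names below are placeholders).

    # recordsize of the dataset holding the files (128K by default)
    zfs get -H -o value recordsize tank/data
    # compare against the file sizes; f1..f4 must be exact multiples of it
    ls -l f1 f2 f3 f4 f5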

Re: [zfs-discuss] Separate Zil on HDD ?

2009-12-03 Thread Auke Folkerts
On Wed, Dec 02, 2009 at 03:57:47AM -0800, Brian McKerr wrote: > I previously had a linux NFS server that I had mounted 'ASYNC' and, as one > would expect, NFS performance was pretty good getting close to 900gb/s. Now > that I have moved to opensolaris, NFS performance is not very good, I'm > gu
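For context on the speed difference: a Linux knfsd export marked async acknowledges writes before they reach stable storage, while a ZFS-backed NFS server honours the client's synchronous semantics, so the old numbers are not directly comparable. A typical async export line on the Linux side looked roughly like the sketch below (path and network are illustrative only).

    # /etc/exports on the old Linux server: async = reply before data is on disk
    /export/data  192.168.0.0/24(rw,async,no_subtree_check)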