Hey folks.
I've looked around quite a bit, and I can't find something like this:
I have a bunch of older systems which use Ultra320 SCA hot-swap
connectors for their internal drives. (e.g. v20z and similar)
I'd love to be able to use modern flash SSDs with these systems, but I
have yet to fi
> Dennis Clarke wrote:
>>> FYI,
>>> OpenSolaris b128a is available for download or image-update from the
>>> dev repository. Enjoy.
>>
>> I thought that dedupe has been out for weeks now?
>
> The source has, yes. But what Richard was referring to was the
> respun build now available via IPS.
Oh
Nicolas Williams wrote:
On Thu, Dec 03, 2009 at 12:44:16PM -0800, Per Baatrup wrote:
> > if any of f2..f5 have different block sizes from f1
> This restriction does not sound so bad to me if this only refers to
> changes to the blocksize of a particular ZFS filesystem or copying
> between different ZFSes in the same pool
I eventually performed a few more tests, adjusting some zfs tuning options
(which had no effect) and trying the itmpt driver which someone had said
would work; regardless, my system would always freeze quite rapidly on
snv_127 and 128a. Just to double-check my hardware, I went back to the ope
It was created on AMD64 FreeBSD 8.0-RC2 (which was ZFS version 13, IIRC).
At some point I knocked it out (exported it) somehow; I don't remember doing
so intentionally. So I can't run commands like 'zpool replace' since there
are no pools.
It says it was last used by the FreeBSD box, but the Fre
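For what it's worth, the usual way back from this state is 'zpool import';
a minimal sketch, assuming the disks are still attached and the pool name
is 'tank' (hypothetical):

  zpool import          # scan attached devices for importable pools
  zpool import -f tank  # -f overrides the "last used by another system" check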
On Fri, Dec 4 at 1:12, Dennis Clarke wrote:
>> FYI,
>> OpenSolaris b128a is available for download or image-update from the
>> dev repository. Enjoy.
> I thought that dedupe has been out for weeks now?
Dedupe has been out, but there were some accounting issues scheduled
to be fixed in 128.
--eric
Dennis Clarke wrote:
>> FYI,
>> OpenSolaris b128a is available for download or image-update from the
>> dev repository. Enjoy.
> I thought that dedupe has been out for weeks now?
The source has, yes. But what Richard was referring to was the
respun build now available via IPS.
cheers,
James C. McPherson
> FYI,
> OpenSolaris b128a is available for download or image-update from the
> dev repository. Enjoy.
I thought that dedupe has been out for weeks now?
Dennis
FYI,
OpenSolaris b128a is available for download or image-update from the
dev repository. Enjoy.
-- richard
On Thu, Dec 3, 2009 at 8:02 PM, steven wrote:
> It will work in a standard 8x or 16x slot. The bracket is backward. Not one
> for subtlety, I took the bracket off, grabbed some pliers, and reversed all
> the bends. Not exactly ideal... but I was then able to get it in the case
> and get some scre
It will work in a standard 8x or 16x slot. The bracket is backward. Not one for
subtlety, I took the bracket off, grabbed some pliers, and reversed all the
bends. Not exactly ideal... but I was then able to get it in the case and get
some screw tension on it to hold it snugly to the case.
I had
Tru Huynh wrote:
> follow up, another crash today.
> On Mon, Nov 30, 2009 at 11:35:07AM +0100, Tru Huynh wrote:
>> 1) OS
>> SunOS xargos.bis.pasteur.fr 5.10 Generic_141445-09 i86pc i386 i86pc
You should be logging a support call for this issue.
James C. McPherson
--
Senior Kernel Software Engineer, S
Robert Milkowski wrote:
Robert Milkowski wrote:
Robert Milkowski wrote:
Hi,
When deploying ZFS in a cluster environment it would be nice to be
able to have some SSDs as local drives (not on SAN) and when the pool
switches over to the other node ZFS would pick up the node's local
disk drives as L2
On Thu, Dec 03, 2009 at 12:44:16PM -0800, Per Baatrup wrote:
> > any of f1..f5's last blocks are partial
> Does this mean that f1,f2,f3,f4 need to be an exact multiple of the ZFS
> blocksize? This is a severe restriction that will fail except in very
> special cases. Is this related to the disk form
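A worked example of the alignment problem, assuming the default 128K
recordsize:

  f1 = 300K  ->  stored as 128K + 128K + 44K (partial last block)

For f15 = f1+f2, f2's data would have to start 44K into a block, so every
block of f2 is shifted and none of its existing on-disk blocks can be
referenced as-is. The boundaries only line up when each of f1..f4 is an
exact multiple of the blocksize.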
Thank you Cindy for your reply!
On 3 dec 2009, at 18.35, Cindy Swearingen wrote:
> A bug might exist but you are building a pool based on the ZFS
> volumes that are created in another pool. This configuration
> is not supported and possible deadlocks can occur.
I had absolutely no idea that ZFS
We are using ZFS (Solaris 10u9) to serve disk to a couple of hundred Linux
clients via NFS. We would like users on the Linux clients to be able to
monitor their disk space on the ZFS file system. They do not have shell
accounts on the fileserver. Is the quota information on the fileserver (use
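One thing that should work without shell accounts, assuming per-dataset
quotas are set on the server: ZFS reports a dataset's quota as the
filesystem size through NFS, so a plain df on the client reflects it
(names below are made up):

  # on the Solaris server:
  zfs set quota=10G tank/home/alice
  # on a Linux client, against the NFS mount:
  df -h /home/alice    # Size shows the 10G quota; Used/Avail track it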
Robert Milkowski wrote:
Robert Milkowski wrote:
Robert Milkowski wrote:
When deploying ZFS in a cluster environment it would be nice to be able
to have some SSDs as local drives (not on SAN) and when the pool switches
over to the other node ZFS would pick up the node's local disk drives
as L2ARC.
> > Isn't this only true if the file sizes are such that the concatenated
> > blocks are perfectly aligned on the same zfs block boundaries they used
> > before? This seems unlikely to me.
>
> Yes that would be the case.
While eagerly awaiting b128 to appear in IPS, I have been giving this iss
On Thu, Dec 03, 2009 at 12:44:16PM -0800, Per Baatrup wrote:
> >if any of f2..f5 have different block sizes from f1
>
> This restriction does not sound so bad to me if this only refers to
> changes to the blocksize of a particular ZFS filesystem or copying
> between different ZFSes in the same pool
>Btw. I would be surprised to hear that this can be implemented
>with current APIs;
I agree. However it looks like an opportunity to dive into the Z-source code.
> if any of f2..f5 have different block sizes from f1
This restriction does not sound so bad to me if this only refers to changes to
the blocksize of a particular ZFS filesystem or copying between different
ZFSes in the same pool. This can properly be managed with a "-f" switch on the
userland app
Just thought I would let everybody know I saw one at a local ISP
yesterday. They hadn't started testing; the metal had only arrived the
day before and they were waiting for the drives to arrive. They had
also changed the design to give it more network. I will try to find out
more as the custom
Embedded Operating system/Networking (EON), a RAM-based live ZFS NAS appliance,
has been released on Genunix! Many thanks to Al Hopper and Genunix.org for
download hosting and serving the OpenSolaris community.
EON ZFS storage is available in 32- and 64-bit CIFS and Samba versions:
EON 64-bit x86 CIFS
Per,
Per Baatrup wrote:
> Roland,
> Clearly an extension of "cp" would be very nice when managing large files.
> Today we are relying heavily on snapshots for this, but this requires
> discipline in storing files in separate zfs'es, avoiding snapshotting too
> many files that change frequently.
The re
follow up, another crash today.
On Mon, Nov 30, 2009 at 11:35:07AM +0100, Tru Huynh wrote:
> 1) OS
> SunOS xargos.bis.pasteur.fr 5.10 Generic_141445-09 i86pc i386 i86pc
>
> it's only sharing through NFS v3 to Linux clients running
> 20x CentOS-5 x86_64 2.6.18-164.6.1.el5 x86_64/i386
> 78x CentOS-3
On Thu, Dec 03, 2009 at 03:57:28AM -0800, Per Baatrup wrote:
> I would like to concatenate N files into one big file, taking
> advantage of ZFS copy-on-write semantics so that the file
> concatenation is done without actually copying any (large amount of)
> file content.
> cat f1 f2 f3 f4 f5 > f15
Hi,
mi...@r600:/rpool/tmp# zpool status test
  pool: test
 state: ONLINE
 scrub: none requested
config:

        NAME             STATE     READ WRITE CKSUM
        test             ONLINE       0     0     0
          /rpool/tmp/f1  ONLINE       0     0     0

errors: No known data errors

let's add a cache device
Robert Milkowski wrote:
Robert Milkowski wrote:
Hi,
When deploying ZFS in a cluster environment it would be nice to be able
to have some SSDs as local drives (not on SAN) and when the pool switches
over to the other node ZFS would pick up the node's local disk drives
as L2ARC.
To better clarify
On Thu, Dec 03, 2009 at 09:36:23AM -0800, Per Baatrup wrote:
> The reason I was speaking about "cat" in stead of "cp" is that in
> addition to copying a single file I would like also to concatenate
> several files into a single file. Can this be accomplished with your
> "(z)cp"?
Unless you have s
Robert Milkowski wrote:
Hi,
When deploying ZFS in a cluster environment it would be nice to be able
to have some SSDs as local drives (not on SAN) and when the pool switches
over to the other node ZFS would pick up the node's local disk drives
as L2ARC.
To better clarify what I mean, let's assume t
Hi,
When deploying ZFS in a cluster environment it would be nice to be able
to have some SSDs as local drives (not on SAN) and when the pool switches
over to the other node ZFS would pick up the node's local disk drives as
L2ARC.
To better clarify what I mean, let's assume there is a 2-node cluster
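Until something like that exists, the manual equivalent after each failover
would presumably look like this (pool and device names are hypothetical):

  # on node A, while it owns pool "tank":
  zpool add tank cache c1t2d0     # node A's local SSD
  # after failover to node B:
  zpool remove tank cache c1t2d0  # drop node A's now-unreachable SSD
  zpool add tank cache c2t5d0     # add node B's local SSD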
Roland,
Clearly an extension of "cp" would be very nice when managing large files.
Today we are relying heavily on snapshots for this, but this requires
discipline in storing files in separate zfs'es, avoiding snapshotting too
many files that change frequently.
The reason I was speaking about "ca
Hi Ragnar,
A bug might exist but you are building a pool based on the ZFS
volumes that are created in another pool. This configuration
is not supported and possible deadlocks can occur.
If you can retry this example without building a pool on another
pool, like using files to create a pool and c
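A minimal sketch of the file-backed variant being suggested (paths and
sizes are arbitrary):

  mkfile 200m /var/tmp/f1 /var/tmp/f2
  zpool create testpool /var/tmp/f1 /var/tmp/f2
  zpool status testpool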
Was the zpool originally created by a FreeBSD operating system or by an
OpenSolaris operating system? And what version of FreeBSD, SXCE, or
OpenSolaris Indiana was it originally created by? The reason I'm asking is
that there are different versions of ZFS in different versions of OpenSol
Michael,
michael schuster wrote:
> Roland Rambau wrote:
>> gang,
>> actually a simpler version of that idea would be a "zcp":
>> if I just cp a file, I know that all blocks of the new file
>> will be duplicates; so the cp could take full advantage of
>> the dedup without a need to check/read/write any actual data
On 12/03/09 09:21, mbr wrote:
> Hello,
> Bob Friesenhahn wrote:
>> On Thu, 3 Dec 2009, mbr wrote:
>>> What about the data that were on the ZIL log SSD at the time of failure?
>>> Is a copy of the data still in the machine's memory, from where it can
>>> be used to commit the transaction to the stable storage pool?
On Thu, 3 Dec 2009, mbr wrote:
> Has the following error no consequences?
> Bug ID:   6538021
> Synopsis: Need a way to force pool startup when zil cannot be replayed
> State:    3-Accepted (Yes, that is a problem)
> Link:     http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6538021
I don't kn
On Thu, 3 Dec 2009, Jason King wrote:
Well it could be done in a way such that it could be fs-agnostic
(perhaps extending /bin/cat with a new flag such as -o outputfile, or
detecting if stdout is a file vs tty, though corner cases might get
tricky). If a particular fs supported such a feature,
Hello,
Bob Friesenhahn wrote:
> On Thu, 3 Dec 2009, mbr wrote:
>> What about the data that were on the ZIL log SSD at the time of failure?
>> Is a copy of the data still in the machine's memory, from where it can be
>> used to commit the transaction to the stable storage pool?
> The intent log SSD is used as
On Thu, Dec 3, 2009 at 9:58 AM, Bob Friesenhahn wrote:
> On Thu, 3 Dec 2009, Erik Ableson wrote:
>>
>> Much depends on the contents of the files. Fixed size binary blobs that
>> align nicely with 16/32/64k boundaries, or variable sized text files.
>
> Note that the default zfs block size is 128K a
Bob Friesenhahn wrote:
> On Thu, 3 Dec 2009, Erik Ableson wrote:
>> Much depends on the contents of the files. Fixed size binary blobs
>> that align nicely with 16/32/64k boundaries, or variable sized text
>> files.
> Note that the default zfs block size is 128K, so that will
> be the defaul
On Thu, 3 Dec 2009, Erik Ableson wrote:
> Much depends on the contents of the files. Fixed size binary blobs that
> align nicely with 16/32/64k boundaries, or variable sized text files.
Note that the default zfs block size is 128K, so that will
be the default dedup block size.
Mos
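For reference, the block size in question is the per-dataset recordsize
property; it can be checked or lowered (dataset name is made up), though
it only affects files written after the change:

  zfs get recordsize tank/data
  zfs set recordsize=64K tank/data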
michael schuster wrote:
> Roland Rambau wrote:
>> gang,
>> actually a simpler version of that idea would be a "zcp":
>> if I just cp a file, I know that all blocks of the new file
>> will be duplicates; so the cp could take full advantage of
>> the dedup without a need to check/read/write any actual data
I
Per Baatrup wrote:
> Actually 'ln -s source target' would not be the same as "zcp source target",
> as writing to the source file after the operation would change the
> target file as well, whereas for "zcp" this would only change the source
> file due to the copy-on-write semantics of ZFS.
I actually was thin
On Thu, 3 Dec 2009, mbr wrote:
> What about the data that were on the ZIL log SSD at the time of failure?
> Is a copy of the data still in the machine's memory, from where it can be
> used to commit the transaction to the stable storage pool?
The intent log SSD is used as 'write only' unless the system re
Actually 'ln -s source target' would not be the same as "zcp source target", as
writing to the source file after the operation would change the target file as
well, whereas for "zcp" this would only change the source file due to the
copy-on-write semantics of ZFS.
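A tiny sketch of the difference ('zcp' being the hypothetical command under
discussion, not an existing one):

  ln -s f1 f2     # f2 is an alias: a later append to f1 shows up in f2
  echo x >> f1

  zcp f1 f2       # hypothetical COW copy: f1 and f2 start out sharing
  echo x >> f1    # blocks but diverge; f2 is unaffected by this append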
Bob Friesenhahn wrote:
> On Thu, 3 Dec 2009, Darren J Moffat wrote:
>> The answer to this is likely deduplication, which ZFS now has.
>> The reason dedup should help here is that after the 'cat', f15 will be
>> made up of blocks that match the blocks of f1 f2 f3 f4 f5.
>> Copy-on-write isn't what helps you
Roland Rambau wrote:
> gang,
> actually a simpler version of that idea would be a "zcp":
> if I just cp a file, I know that all blocks of the new file
> will be duplicates; so the cp could take full advantage of
> the dedup without a need to check/read/write any actual data
I think they call it 'ln' ;
gang,
actually a simpler version of that idea would be a "zcp":
if I just cp a file, I know that all blocks of the new file
will be duplicates; so the cp could take full advantage of
the dedup without a need to check/read/write any actual data
-- Roland
Per Baatrup wrote:
> "dedup" operate
"zcat" was my acronym for a special ZFS aware version of "cat" and the name was
obviously a big mistake as I did not know it was an existing command and simply
forgot to check.
Should rename if to "zfscat" or something similar?
Per Baatrup wrote:
> "dedup" operates on the block level, leveraging the existing ZFS
> checksums. Read "What to dedup: Files, blocks, or bytes" here:
> http://blogs.sun.com/bonwick/entry/zfs_dedup
> The trick should be that the zcat userland app already knows that it
> will generate duplicate files, so data
"dedup" operates on the block level leveraging the existing FFS checksums. Read
"What to dedup: Files, blocks, or bytes" here
http://blogs.sun.com/bonwick/entry/zfs_dedup
The trick should be that the zcat userland app already knows that it will
generate duplicate files so data read and writes c
On 3 Dec 2009, at 13:29, Bob Friesenhahn wrote:
> On Thu, 3 Dec 2009, Darren J Moffat wrote:
>> The answer to this is likely deduplication, which ZFS now has.
>> The reason dedup should help here is that after the 'cat', f15 will
>> be made up of blocks that match the blocks of f1 f2 f3 f4 f5.
>> Co
Hello,
Edward Ned Harvey wrote:
>> Yes, I have SSD for ZIL. Just one SSD. 32G.
> But if this is the problem,
> then you'll have the same poor performance on the local machine that you
> have over NFS. So I'm curious to see if you have the same poor performance
> locally. The ZIL does not need to be re
On Thu, 3 Dec 2009, Darren J Moffat wrote:
> The answer to this is likely deduplication, which ZFS now has.
> The reason dedup should help here is that after the 'cat', f15 will be
> made up of blocks that match the blocks of f1 f2 f3 f4 f5.
> Copy-on-write isn't what helps you here, it is dedup.
Isn
Peter Tribble wrote:
> On Thu, Dec 3, 2009 at 12:08 PM, Darren J Moffat wrote:
>> Per Baatrup wrote:
>>> I would like to concatenate N files into one big file, taking advantage
>>> of ZFS copy-on-write semantics so that the file concatenation is done
>>> without actually copying any (large amount of) file co
On Thu, Dec 3, 2009 at 12:08 PM, Darren J Moffat wrote:
> Per Baatrup wrote:
>>
>> I would like to concatenate N files into one big file, taking advantage
>> of ZFS copy-on-write semantics so that the file concatenation is done
>> without actually copying any (large amount of) file content.
>>
Per Baatrup wrote:
> I would like to concatenate N files into one big file, taking advantage of
> ZFS copy-on-write semantics so that the file concatenation is done without
> actually copying any (large amount of) file content.
> cat f1 f2 f3 f4 f5 > f15
Is this already possible when source and tar
I would like to concatenate N files into one big file, taking advantage of
ZFS copy-on-write semantics so that the file concatenation is done without
actually copying any (large amount of) file content.
cat f1 f2 f3 f4 f5 > f15
Is this already possible when source and target are on the same ZFS?
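Absent such a tool, dedup (build 128 and later) already approximates this,
at the cost of still reading and writing all the data once ('tank/data' is
a hypothetical dataset):

  zfs set dedup=on tank/data
  cat f1 f2 f3 f4 f5 > f15   # f15's blocks dedup against f1..f5 on disk,
                             # subject to the block-alignment caveats
                             # discussed elsewhere in this thread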
On Wed, Dec 02, 2009 at 03:57:47AM -0800, Brian McKerr wrote:
> I previously had a Linux NFS server that I had mounted 'async' and, as one
> would expect, NFS performance was pretty good, getting close to 900gb/s. Now
> that I have moved to OpenSolaris, NFS performance is not very good; I'm
> gu