Edward wrote:
> So does that mean ZFS is not for consumer computers?
> If ZFS requires 4GB of RAM to operate, does that mean I will need 8GB+ of RAM if
> I want to use Photoshop or any other memory-intensive application?
>
>
No. It works fine on desktops - I'm writing this on an older Athlon64
with
Edward wrote:
> So does that mean ZFS is not for consumer computers?
Not at all. "Consumer" computers are plenty powerful enough
to use ZFS with.
> If ZFS requires 4GB of RAM to operate, does that mean I will
> need 8GB+ of RAM if I want to use Photoshop or any other
> memory-intensive application?
So does that mean ZFS is not for consumer computers?
If ZFS requires 4GB of RAM to operate, does that mean I will need 8GB+ of RAM if I
want to use Photoshop or any other memory-intensive application?
And it seems ZFS memory usage scales with the amount of HDD space?
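(ZFS's ARC cache normally gives memory back when applications ask for it, but it can also be capped explicitly. A minimal sketch, assuming Solaris/OpenSolaris; the 1 GB value below is only an example, not a recommendation:)

    # /etc/system - cap the ZFS ARC at 1 GB; takes effect after a reboot
    set zfs:zfs_arc_max = 0x40000000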
Just a note:
Setting compression to gzip on a zpool breaks the GUI with a similar type of
error -
Application Error
com.iplanet.jato.NavigationException: Exception encountered during forward
Root cause = [java.lang.IllegalArgumentException: No enum const class
com.sun.zfs.common.model.Compressi
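(For reference, the setting that triggers this comes from the CLI; a minimal sketch, assuming an example pool named tank:)

    # gzip compression works from the command line, but the webconsole GUI
    # apparently does not know the gzip value and throws the enum error above
    zfs set compression=gzip tank
    zfs get compression tank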
On Sun, 22 Jun 2008, kevin williams wrote:
>
> The article says that ZFS eliminates the need for a RAID card and is
> faster because the striping is running on the main CPU rather than
> an old chipset on a card. My question is, is this true? Can I
Ditto what the other guys said. Since ZFS ma
Marcelo Leal <[EMAIL PROTECTED]> writes:
> Hello all,
>
> [..]
>
> 1) What is the difference between the SMB server in Solaris/OpenSolaris
> and the "new" CIFS project?
What you refer to as the "SMB server in Solaris/OpenSolaris" is in fact
Samba, which sits on top of a plain Unix system. This ha
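(The in-kernel CIFS server, by contrast, is integrated with ZFS itself and is enabled per dataset. A minimal sketch, assuming a build with the CIFS service installed and an example dataset tank/share:)

    # start the kernel CIFS service and share a dataset natively
    svcadm enable -r smb/server
    zfs set sharesmb=on tank/share
    sharemgr show -vp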
On Mon, Jun 23, 2008 at 11:13:49AM +1200, [EMAIL PROTECTED] wrote:
>
> The cache may give RAID cards an edge, but ZFS gives near platter speeds for
> its various configurations. The Thumper is a perfect example of a ZFS
> appliance.
I get very acceptable performance out of my Sun Ultra-80 with
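(If you want numbers rather than a feel, per-vdev throughput is easy to watch; a minimal sketch, assuming an example pool named tank:)

    # bandwidth and IOPS per vdev, refreshed every 5 seconds
    zpool iostat -v tank 5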
It is indeed true, and you can.
On 6/22/08, kevin williams <[EMAIL PROTECTED]> wrote:
> digg linked to an article related to the apple port of ZFS
> (http://www.dell.com/content/products/productdetails.aspx/print_1125?c=us&cs=19&l=en&s=dhss).
> I don't have a Mac but was interested in ZFS.
>
> Th
kevin williams wrote:
> digg linked to an article related to the apple port of ZFS
> (http://www.dell.com/content/products/productdetails.aspx/print_1125?c=us&cs=19&l=en&s=dhss).
> I don't have a Mac but was interested in ZFS.
>
> The article says that ZFS eliminates the need for a RAID card and is
kevin williams writes:
> digg linked to an article related to the apple port of ZFS
> (http://www.dell.com/content/products/productdetails.aspx/print_1125?c=us&cs=19&l=en&s=dhss).
> I don't have a Mac but was interested in ZFS.
>
> The article says that ZFS eliminates the need for a RAID card
digg linked to an article related to the apple port of ZFS
(http://www.dell.com/content/products/productdetails.aspx/print_1125?c=us&cs=19&l=en&s=dhss).
I don't have a Mac but was interested in ZFS.
The article says that ZFS eliminates the need for a RAID card and is faster
because the striping
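(The short answer is that ZFS does the striping and redundancy itself, so a plain HBA or onboard controller is enough. A minimal sketch, assuming four example disk names:)

    # single-parity raidz across four plain disks - no hardware RAID involved
    zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0
    zpool status tank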
Thanks for the reference.
I read that thread to the end, and saw there are some complex considerations
regarding changing st_dev on an open file, but no decision. Despite this
complexity, I think the situation is quite brain damaged - I'm moving large
files between ZFS filesystems all the time, otherwise
Samba CIFS has been in OpenSolaris from day 1.
No, it cannot be used to meet Sun's end goal, which is CIFS INTEGRATION
with the core kernel. The Sun CIFS server supports Windows ACLs from the
kernel up; Samba does not.
On 6/22/08, Marcelo Leal <[EMAIL PROTECTED]> wrote:
> Hello all,
> i would like to c
Hello all,
I would like to continue with this topic. After doing some "research"
on it, I have some (many) doubts, and maybe we can use this thread to
give some answers to me and to other users who may have the same questions...
First, sorry for CC'ing so many forums, but I think I
[EMAIL PROTECTED] writes:
> ..sorry, there was a misconfiguration in our email system. I've fixed it
> just now...
> We apologize for any problems you had
>
> Andreas Gaida
Wow, that was fast! And on a Sunday evening, too...
So, everything is fixed, and we are all happy now :-)
Regards -
Yaniv Aknin wrote:
> Hi,
>
> Obviously, moving ('renaming') files between ZFS filesystems in the same zpool is
> just like a move between any other two filesystems, requiring a full copy of the
> data and deletion of the old file.
>
> I was wondering if there is (and why there isn't) an optimization inside Z
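(The reason mv degenerates to copy + unlink is that each dataset is a separate filesystem with its own st_dev, so rename(2) across them fails with EXDEV. A minimal sketch, using example dataset names:)

    zfs create tank/src
    zfs create tank/dst
    mkfile 100m /tank/src/bigfile
    df -h /tank/src /tank/dst      # two separate filesystems
    truss -t rename mv /tank/src/bigfile /tank/dst/
    # truss shows rename() failing with EXDEV, after which mv copies the
    # data and unlinks the original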
If you really need the inode number, you should use the semi-public interface
to retrieve it and call VOP_GETATTR. This is what the rest of the kernel does
when it needs attributes of a vnode.
See for example
http://cvs.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/syscall/stat
On 21 June, 2008 - Victor Pajor sent me these 0,9K bytes:
> Another thing
>
> config:
>
>         zfs         FAULTED  corrupted data
>           raidz1    ONLINE
>             c1t1d0  ONLINE
>             c7t0d0  UNAVAIL  corrupted data
>             c7t1d0  UNAVAIL  corrupted data
>
> c70d
> > Everyone who posts gets this autoreply.
>
> So what do the rest of you do? Ignore it?
I for one do ignore it. :-)
> > > From: [EMAIL PROTECTED]
> >
> > These people are not spoofing your domain; they set a "From:" header
> > with no "@domain". Many MTAs append the local domain in this case.
On Sun, 22 Jun 2008, Will Murnane wrote:
>
>> Perhaps the solution is to install more RAM in the system so that the
>> stripe is fully cached and ZFS does not need to go back to disk prior
>> to writing an update.
> I don't think the problem is that the stripe is falling out of cache,
> but that it
On Sun, Jun 22, 2008 at 06:11:21PM +0200, Volker A. Brandt wrote:
>
> Everyone who posts gets this autoreply.
So what do the rest of you do? Ignore it?
> > From: [EMAIL PROTECTED]
>
> These people are not spoofing your domain; they set a "From:" header
> with no "@domain". Many MTAs append the
On Sun, 22 Jun 2008, Brian Hechinger wrote:
> On Sun, Jun 22, 2008 at 10:37:34AM -0500, Bob Friesenhahn wrote:
>>
>> Perhaps the solution is to install more RAM in the system so that the
>> stripe is fully cached and ZFS does not need to go back to disk prior
>> to writing an update. The need to
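(Whether the data is actually staying in cache is visible from the ARC statistics; a minimal sketch:)

    # current ARC size plus hit/miss counters
    kstat -m zfs -n arcstats | egrep 'size|hits|misses'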
Hello Brian!
> Every time I post to this list, I get an AUTOREPLY from somebody who, if
> you ask me, is up to no good; otherwise they would set a proper From: address
> instead of spoofing my domain.
Everyone who posts gets this autoreply.
> From: [EMAIL PROTECTED]
These people are not spoofing
On Sun, Jun 22, 2008 at 15:37, Bob Friesenhahn
<[EMAIL PROTECTED]> wrote:
> Keep in mind that ZFS checksums all data, that the checksum is stored in a
> different block than the data, and that if ZFS were to checksum at the
> stripe-segment level, a lot more checksums would need to be stored.
> All thes
Every time I post to this list, I get an AUTOREPLY from somebody who, if
you ask me, is up to no good; otherwise they would set a proper From: address
instead of spoofing my domain.
> Received: from mail01.csw-datensysteme.de ([62.153.225.98])
> by wiggum.4amlunch.net
> (Sun Java(tm) System Messag
On Sun, Jun 22, 2008 at 2:06 PM, Cesare <[EMAIL PROTECTED]> wrote:
> Hi,
>
> I'm facing a problem when I configure and create a zpool on my
> test bed. The hardware is a T-5120 running Solaris 10 with the latest patches
> and a Clariion CX3 attached via 2 HBAs. In this type of configuration every
> LUN export
On Sun, Jun 22, 2008 at 10:37:34AM -0500, Bob Friesenhahn wrote:
>
> Perhaps the solution is to install more RAM in the system so that the
> stripe is fully cached and ZFS does not need to go back to disk prior
> to writing an update. The need to read prior to write is clearly what
> kills ZFS
On Sun, 22 Jun 2008, Ralf Bertling wrote:
>
> Now let's see if this really has to be this way (this implies no, doesn't it
> ;-)
> When reading small blocks of data (as opposed to the streams discussed earlier),
> the requested data resides on a single disk, and thus reading it does not
> require to se
Hi,
I'm facing a problem when I configure and create a zpool on my
test bed. The hardware is a T-5120 running Solaris 10 with the latest patches
and a Clariion CX3 attached via 2 HBAs. In this type of configuration every
LUN exported by the Clariion is seen 4 times by the operating system.
If I configure the late
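(Seeing each LUN four times usually means the paths aren't being coalesced; Solaris 10's built-in MPxIO can do that. A minimal sketch using standard Solaris 10 tools; check the array's supported failover mode before enabling it:)

    # enable MPxIO for fibre-channel devices; it asks for a reboot
    stmsboot -e
    # after the reboot each LUN should appear once, with multiple paths behind it
    mpathadm list lu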
Hi list,
as this matter pops up every now and then in posts on this list, I just
want to clarify that the real performance of RaidZ (in its current
implementation) does NOT simply follow from raidz-style, space-efficient
redundancy or from the copy-on-write design used in ZFS.
In an M-way mir
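(A rough illustration of the point, with purely example numbers: because every raidz block is spread across all the data disks and the whole block must be read back to verify its checksum, a small random read occupies the entire group.)

    Example: 5 disks, each capable of ~150 random reads/sec
      raidz1 (4 data + 1 parity): each read touches the whole group
          -> roughly 150 reads/sec for the pool
      striped 2-way mirrors:      each read is served by a single disk
          -> roughly 4 x 150 = 600 reads/sec
    Sequential bandwidth, in contrast, scales with the number of data
    disks in both layouts.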