2009/9/17 Brandon High :
> 2009/9/11 "C. Bergström" :
>> Can we make a FAQ on this somewhere?
>>
>> 1) There is some legal bla bla between Sun and green-bytes that's tying up
>> the IP around dedup... (someone knock some sense into green-bytes please)
> 2) there's an acquisition that's got all sorts of delays..
Hi,
I posted this on cifs-discuss, but got no reply.
I've just added the CIFS service to some of our ZFS filesystems, so
they are now shared via NFS and CIFS.
We've got a lot of Mac users, who can quite happily create files and
directories with names containing characters not allowed in Windows.
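For reference, sharing a dataset over both protocols is typically done with
the sharesmb and sharenfs properties; a minimal sketch, with tank/data as a
hypothetical dataset name (casesensitivity can only be set at creation time):
# zfs create -o casesensitivity=mixed -o sharesmb=on tank/data
# zfs set sharenfs=on tank/data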
With ZFS it is worth putting a little thought into your system when you START.
If you want to be able to easily add a couple of disks at a time, just use
mirrors. I use raidz vdevs of 4, and when I need to expand I have 2
options: I add a new raidz vdev of 4 disks, OR I replace all 4 disks in one
vdev with larger ones.
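Both options are one-liners; a sketch with hypothetical device names:
# zpool add tank raidz c5t0d0 c5t1d0 c5t2d0 c5t3d0
(adds a new 4-disk raidz vdev)
# zpool replace tank c1t0d0 c6t0d0
(swaps in a larger disk; repeated for all 4 disks, the vdev grows once the
last resilver finishes)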
On Sep 16, 2009, at 7:17 PM, Ross Walker wrote:
more resilient to temporary path failures.
As another list member pointed out you could also avoid the issue by
having a raidz disk per controller. But if I'm buying that kind of
big iron I might just opt for a 3PAR or EMC and save myself the
On Sep 16, 2009, at 6:43 PM, Bob Friesenhahn wrote:
On Wed, 16 Sep 2009, Ross Walker wrote:
There is another type of failure that mirrors help with and that is
controller or path failures. If one side of a mirror set is on one
controller or path and the other on another then a failure of
On Sep 16, 2009, at 6:50 PM, Marion Hakanson wrote:
rswwal...@gmail.com said:
There is another type of failure that mirrors help with and that is
controller or path failures. If one side of a mirror set is on one
controller or path and the other on another then a failure of one
will not take down the set.
If anyone is interested in tackling this project, I found a blog spelling out
how to go about it at http://blogs.sun.com/ahl/entry/expand_o_matic_raid_z.
That also spells out their position on the priorities related to this project.
I'm not in an enterprise situation, nor in a home situation, bu
On Sun, Sep 13, 2009 at 7:45 PM, Owen Davies wrote:
> Is there a better way to do this than manually editing each file (or db)? I
> don't think there is much of this sort of integration yet so that tools
> update things in a consistent way on both the UNIX side and the CIFS side.
You could use
2009/9/11 "C. Bergström" :
> Can we make a FAQ on this somewhere?
>
> 1) There is some legal bla bla between Sun and green-bytes that's tying up
> the IP around dedup... (someone knock some sense into green-bytes please)
> 2) there's an acquisition that's got all sorts of delays.. which may very
>
rswwal...@gmail.com said:
> There is another type of failure that mirrors help with and that is
> controller or path failures. If one side of a mirror set is on one
> controller or path and the other on another then a failure of one will not
> take down the set.
>
> You can't get that with RAIDZ
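As a sketch of that layout, with hypothetical device names where c1* hangs
off one controller and c2* off the other:
# zpool create tank mirror c1t0d0 c2t0d0 mirror c1t1d0 c2t1d0
Losing controller c1 entirely still leaves one live side of every mirror.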
On Wed, 16 Sep 2009, Ross Walker wrote:
There is another type of failure that mirrors help with and that is
controller or path failures. If one side of a mirror set is on one controller
or path and the other on another then a failure of one will not take down the
set.
You can't get that with RAIDZ.
On Sep 16, 2009, at 4:29 PM, "Marty Scholes" wrote:
Yes. This is a mathematical way of saying
"lose any P+1 of N disks."
I am hesitant to beat this dead horse, yet it is a nuance that
either I have completely misunderstood or many people I've met have
completely missed.
Whether a str
On 09/16/09 14:19, Richard Elling wrote:
On Sep 16, 2009, at 1:09 PM, Bob Friesenhahn wrote:
On Wed, 16 Sep 2009, Thomas Burgess wrote:
hrm, i always thought raidz took longer... learn something every day =)
And you were probably right, in spite of Richard's lack of knowledge
of a study or
On Sep 16, 2009, at 1:29 PM, Marty Scholes wrote:
Yes. This is a mathematical way of saying
"lose any P+1 of N disks."
I am hesitant to beat this dead horse, yet it is a nuance that
either I have completely misunderstood or many people I've met have
completely missed.
Whether a stripe of
On Sep 16, 2009, at 1:09 PM, Bob Friesenhahn wrote:
On Wed, 16 Sep 2009, Thomas Burgess wrote:
hrm, i always thought raidz took longer... learn something every day =)
And you were probably right, in spite of Richard's lack of knowledge
of a study or the feeling in his gut. Just look at t
> Yes. This is a mathematical way of saying
> "lose any P+1 of N disks."
I am hesitant to beat this dead horse, yet it is a nuance that either I have
completely misunderstood or many people I've met have completely missed.
Whether a stripe of mirrors or a mirror of stripes, any single failure m
On Wed, 16 Sep 2009, Thomas Burgess wrote:
hrm, i always thought raidz took longer... learn something every day =)
And you were probably right, in spite of Richard's lack of knowledge
of a study or the feeling in his gut. Just look at the many postings
here about resilvering and you will see
On Sep 16, 2009, at 12:50 PM, Marty Scholes wrote:
This line of reasoning doesn't get you very far.
It is much better to take a look at
the mean time to data loss (MTTDL) for the various
configurations. I wrote a
series of blogs to show how this is done.
http://blogs.sun.com/relling/tags/mttdl
> This line of reasoning doesn't get you very far.
> It is much better to take a look at
> the mean time to data loss (MTTDL) for the various
> configurations. I wrote a
> series of blogs to show how this is done.
> http://blogs.sun.com/relling/tags/mttdl
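(For a rough sense of what those posts compute -- a sketch of the standard
first-order model, assumed here rather than quoted from the blogs: for a
single-parity group of N disks, data loss requires a second disk failure
during the MTTR-long rebuild window, giving approximately

    MTTDL \approx \frac{\mathrm{MTBF}^2}{N\,(N-1)\,\mathrm{MTTR}}

With illustrative numbers MTBF = 100,000 h, N = 8, MTTR = 24 h, that is
10^10 / (8*7*24), roughly 7.4 million hours. Halving the resilver time
doubles MTTDL, which is why resilver speed keeps coming up in this thread.)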
I know... this has been asked a lot around here. I just wanted to pop in and
see if there were any plans on implementing this soon?
Adam describes it wonderfully here, but has anything come about after this post:
http://blogs.sun.com/ahl/entry/expand_o_matic_raid_z
This would make it a killer f
hrm, i always thought raidz took longer... learn something every day =)
On Wed, Sep 16, 2009 at 2:14 PM, Richard Elling wrote:
> On Sep 16, 2009, at 10:42 AM, Thomas Burgess wrote:
>
>> Mirrors are much quicker to replace if one DOES fail though...so i would
>> think that bad stuff could happen
On Sep 16, 2009, at 10:56 AM, Erik Trimble wrote:
Lori Alt wrote:
On 09/16/09 10:48, Marty Scholes wrote:
Lori Alt wrote:
As for being able to read streams of a later format
on an earlier version of ZFS, I don't think that will ever be
supported. In that case, we really would have to someh
On 09/16/09 11:56, Erik Trimble wrote:
Lori Alt wrote:
On 09/16/09 10:48, Marty Scholes wrote:
Lori Alt wrote:
As for being able to read streams of a later format
on an earlier version of ZFS, I don't think that will ever be
supported. In that case, we really would have to somehow convert t
On Sep 16, 2009, at 10:42 AM, Thomas Burgess wrote:
Mirrors are much quicker to replace if one DOES fail though...so i
would think that bad stuff could happen with EITHER solution... If
you buy a bunch of hard drives for a raidz and they are all from the
same batch they might all fail aroun
Lori Alt wrote:
On 09/16/09 10:48, Marty Scholes wrote:
Lori Alt wrote:
As for being able to read streams of a later format
on an earlier
version of ZFS, I don't think that will ever be
supported. In that
case, we really would have to somehow convert the
format of the objects
stored with
Marty Scholes wrote:
Lori Alt wrote:
As for being able to read streams of a later format
on an earlier
version of ZFS, I don't think that will ever be
supported. In that
case, we really would have to somehow convert the
format of the objects
stored within the send stream and we have no plans
Mirrors are much quicker to replace if one DOES fail though...so i would
think that bad stuff could happen with EITHER solution... If you buy a bunch
of hard drives for a raidz and they are all from the same batch they might
all fail around the same time...what if you have a raidz2 group and 2 driv
On Sep 16, 2009, at 9:38 AM, Marty Scholes wrote:
Generally speaking, striping mirrors will be faster
than raidz or raidz2,
but it will require a higher number of disks and
therefore higher cost to
The main reason to use
raidz or raidz2 instead
of striping mirrors would be to keep the cost down,
On 09/16/09 10:49, David Magda wrote:
On Wed, September 16, 2009 11:53, Lori Alt wrote:
So we're considering a refinement of the current policy of not
guaranteeing future readability of streams generated by earlier versions
of ZFS. The time may have come where we know enough about how send
streams fit into overall ZFS versioning
On 09/16/09 10:48, Marty Scholes wrote:
Lori Alt wrote:
As for being able to read streams of a later format
on an earlier
version of ZFS, I don't think that will ever be
supported. In that
case, we really would have to somehow convert the
format of the objects
stored within the send strea
At the end of the day, it TOTALLY depends on your needs. raidz may be the
best bet for you if you simply do not need the speed of mirrors, and as
another user mentioned, it DOES offer better fault tolerance. Figure out
what your needs are for your workload THEN ask.
These types of loaded questions
On Wed, September 16, 2009 10:35, cindy.swearin...@sun.com wrote:
> Detaching disks from a mirror isn't ideal but if you absolutely have
> to reuse a disk temporarily then go with mirrors. See the output below.
> You can replace disks in either configuration if you want to switch
> smaller disks
On Wed, September 16, 2009 09:29, Alan Coopersmith wrote:
> The installer used in Solaris 2.0 through the original release of 10
> required
> UFS as the root filesystem - that wasn't a design bug, just the way it was
> designed.
If there are multiple filesystems available, an installer that forc
It's possible to do 3-way (or more) mirrors too, so you may achieve better
redundancy than raidz2/3
Yours
Markus Kovero
-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Marty Scholes
Sent: 16 September 2009 19:38
To:
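To illustrate the three-way mirror point, with hypothetical device names:
create one outright, or attach a third side to an existing two-way mirror:
# zpool create tank mirror c1t0d0 c2t0d0 c3t0d0
# zpool attach tank c1t0d0 c3t0d0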
On Wed, September 16, 2009 11:53, Lori Alt wrote:
> So we're considering a refinement of the current policy of not
> guaranteeing future readability of streams generated by earlier versions
> of ZFS. The time may have come where we know enough about how send
> streams fit into overall ZFS versioning
Lori Alt wrote:
> As for being able to read streams of a later format
> on an earlier
> version of ZFS, I don't think that will ever be
> supported. In that
> case, we really would have to somehow convert the
> format of the objects
> stored within the send stream and we have no plans to
> impl
> Generally speaking, striping mirrors will be faster
> than raidz or raidz2,
> but it will require a higher number of disks and
> therefore higher cost to
> The main reason to use
> raidz or raidz2 instead
> of striping mirrors would be to keep the cost down,
> or to get higher usable
> space out
On Wed, 16 Sep 2009, en...@businessgrade.com wrote:
Hi. If I am using slightly more reliable SAS drives versus SATA, SSDs for
both L2Arc and ZIL and lots of RAM, will a mirrored pool of say 24 disks hold
any significant advantages over a RAIDZ pool?
A mirrored pool will support more IOPS. Th
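(Rule-of-thumb arithmetic behind that, assuming ~150 random IOPS per drive,
an assumed figure rather than one from the thread:

    12 x 2-way mirrors: random reads ~ 24 x 150 = 3600 IOPS,
                        random writes ~ 12 x 150 = 1800 IOPS
    3 x 8-disk raidz2:  small random reads ~ 3 x 150 = 450 IOPS,

since each raidz vdev serves roughly one spindle's worth of small random
reads.)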
Erik Trimble wrote:
Lori Alt wrote:
On 09/15/09 06:27, Luca Morettoni wrote:
On 09/15/09 02:07 PM, Mark J Musante wrote:
zfs create -o version=N pool/filesystem
is it possible to implement, in a future version of ZFS, a "released"
send command, like:
# zfs send -r2 ...
to send a specific
I think in theory the ZIL/L2ARC should make things nice and fast if your
workload includes sync requests (database, iscsi, nfs, etc.), regardless of the
backend disks. But the only sure way to know is to test with your workload.
-Scott
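For reference, both device classes are added per pool; a sketch with
hypothetical SSD device names:
# zpool add tank log c4t0d0
(dedicated ZIL, a.k.a. slog)
# zpool add tank cache c4t1d0
(L2ARC)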
In addition, if you need the flexibility of moving disks around until
the device removal CR integrates, then mirrored pools are more flexible.
Detaching disks from a mirror isn't ideal but if you absolutely have
to reuse a disk temporarily then go with mirrors. See the output below.
You can repla
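The mechanics, with hypothetical device names: detach frees one side of a
mirror for temporary reuse, and a later attach re-mirrors it and starts a
resilver:
# zpool detach tank c1t1d0
# zpool attach tank c1t0d0 c1t1d0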
Joerg Schilling wrote:
> Alan Coopersmith wrote:
>
>> If the test suite is going to be running on nv_128 or later, then
>> you are guaranteed to have a zfs filesystem, since root must be
>> zfs then (since the only install method will be IPS, which requires
>> zfs root). Until then you could ju
Roland Mainz wrote:
Robert Thurlow wrote:
Roland Mainz wrote:
Ok... does that mean that I have to create a ZFS filesystem to actually
test ([1]) an application which modifies ZFS/NFSv4 ACLs or are there any
other options?
By all means, test with ZFS. But it's easy to do that:
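A minimal sketch of one common approach, using a file-backed pool (paths and
names hypothetical):
# mkfile 256m /var/tmp/ztest.img
# zpool create ztest /var/tmp/ztest.img
# touch /ztest/foo ; /usr/bin/ls -V /ztest/foo
(ls -V shows the NFSv4 ACL; zpool destroy ztest cleans up afterwards)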
Roland Mainz wrote:
> Robert Thurlow wrote:
>> Roland Mainz wrote:
>>
>>> Ok... does that mean that I have to create a ZFS filesystem to actually
>>> test ([1]) an application which modifies ZFS/NFSv4 ACLs or are there any
>>> other options?
>> By all means, test with ZFS. But it's easy to do that:
Roland Mainz wrote:
Umpf... the matching code is linked with -Bdirect ... AFAIK I can't
interpose library functions linked with this option, right ?
You could set LD_NODIRECT to defeat direct bindings --- see ld.so.1(1).
However, I agree with the thought that it would be easier to just
have a
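For reference, it is just a runtime environment variable; a sketch with
hypothetical interposer and program names:
$ LD_NODIRECT=yes LD_PRELOAD=./interposer.so ./test_prog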
I have some vague recollection that tmpfs doesn't support ACLs and it
appears to be so...
ZFS
opensolaris% touch /var/tmp/bar
opensolaris% chmod A=user:lp:r:deny /var/tmp/bar
opensolaris%
TMPFS
opensolaris% touch /tmp/bar
opensolaris% chmod A=user:lp:r:deny /tmp/bar
chmod:
Quoting David Magda :
On Wed, September 16, 2009 10:31, Edward Ned Harvey wrote:
Hi. If I am using slightly more reliable SAS drives versus SATA, SSDs
for both L2Arc and ZIL and lots of RAM, will a mirrored pool of say 24
disks hold any significant advantages over a RAIDZ pool?
Generally spea
On Wed, September 16, 2009 10:31, Edward Ned Harvey wrote:
>> Hi. If I am using slightly more reliable SAS drives versus SATA, SSDs
>> for both L2Arc and ZIL and lots of RAM, will a mirrored pool of say 24
>> disks hold any significant advantages over a RAIDZ pool?
>
> Generally speaking, striping
> Hi. If I am using slightly more reliable SAS drives versus SATA, SSDs
> for both L2Arc and ZIL and lots of RAM, will a mirrored pool of say 24
> disks hold any significant advantages over a RAIDZ pool?
Generally speaking, striping mirrors will be faster than raidz or raidz2,
but it will require
On Wed, September 16, 2009 02:11, Carson Gaspar wrote:
> "zfs recv" of a full
> stream will create a new filesystem of the appropriate version, which you
> may
> then "zfs upgrade" if you wish. And restoring incrementals to a different
> fs rev
> doesn't make sense. As long as support for older f
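In command form (pool and path names hypothetical): receive the full stream,
check the filesystem version it produced, then upgrade in place if desired:
# zfs receive tank/restored < /backup/full.zstream
# zfs get version tank/restored
# zfs upgrade tank/restored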
Hi, I managed to test this out; it seems iscsitgt performance is suboptimal
with this setup, but COMSTAR maxes out GigE easily, with no performance
issues there.
Yours
Markus Kovero
-Original Message-
From: Maurice Volaski [mailto:maurice.vola...@einstein.yu.edu]
Sent: 11 September
it should be faster. It really depends on what you are using it for, though;
I've been using raidz for my system and I'm very happy with it.
On Wed, Sep 16, 2009 at 8:55 AM, wrote:
> Hi. If I am using slightly more reliable SAS drives versus SATA, SSDs for
> both L2Arc and ZIL and lots of RAM,
Hi. If I am using slightly more reliable SAS drives versus SATA, SSDs
for both L2Arc and ZIL and lots of RAM, will a mirrored pool of say 24
disks hold any significant advantages over a RAIDZ pool?
Joerg Schilling wrote:
Alan Coopersmith wrote:
If the test suite is going to be running on nv_128 or later, then
you are guaranteed to have a zfs filesystem, since root must be
zfs then (since the only install method will be IPS, which requires
zfs root). Until then you could just document t
Alan Coopersmith wrote:
> If the test suite is going to be running on nv_128 or later, then
> you are guaranteed to have a zfs filesystem, since root must be
> zfs then (since the only install method will be IPS, which requires
> zfs root). Until then you could just document to run it on a
> sy
On Wed, Sep 16, 2009 at 09:34, Erik Trimble wrote:
> Carson Gaspar wrote:
>>
>> Erik Trimble wrote:
>>> I haven't seen this specific problem, but it occurs to me thus:
>>>
>>> For the reverse of the original problem, where (say) I back up a 'zfs
>>> send' stream to tape, then later on, after upgr
Erik Trimble wrote:
You are correct in that restoring a full stream creates the appropriate
versioned filesystem. That's not the problem.
The /much/ more likely scenario is this:
(1) Let's say I have a 2008.11 server. I back up the various ZFS
filesystems, with both incremental and full stre
Carson Gaspar wrote:
Erik Trimble wrote:
> I haven't seen this specific problem, but it occurs to me thus:
For the reverse of the original problem, where (say) I back up a 'zfs
send' stream to tape, then later on, after upgrading my system, I
want to get that stream back.
Does 'zfs receive'
Erik Trimble wrote:
> I haven't seen this specific problem, but it occurs to me thus:
For the reverse of the original problem, where (say) I back up a 'zfs
send' stream to tape, then later on, after upgrading my system, I want
to get that stream back.
Does 'zfs receive' support reading a ver