First, this is under FreeBSD, but it isn't specific to that OS, and it involves
some technical details beyond normal use, so I'm trying my luck here.
I have a pool (around version 14) with a corrupted log device that's
irrecoverable. I found a tool called logfix, but I don't have the GUID of th
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Simon Breden
>
> So are we all agreed then, that a vdev failure will cause pool loss?
Yes. When I said you could mirror a raidzN vdev, it was based on nothing
more credible than assumption b
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Marty Scholes
>
> Would it make sense for scrub/resilver to be more aware of operating in
> disk order instead of zfs order?
It would certainly make sense. As mentioned, even if you do the en
On 2010-Oct-18 17:45:34 +0800, "casper@sun.com" wrote:
> Write-lock (wlock) the specified file-system. wlock
> suspends writes that would modify the file system.
> Access times are not kept while a file system is write-
> locked.
>
>
>All the applica
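For reference, driving that from the shell on Solaris UFS looks roughly like this (the mount point is a placeholder, not taken from the thread):

    # Write-lock the file system: suspend writes that would modify it
    lockfs -w /export/home

    # ... take the block-level backup or snapshot here ...

    # Release the write lock
    lockfs -u /export/home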
> Richard wrote:
> Yep, it depends entirely on how you use the pool. As soon as you
> come up with a credible model to predict that, then we can optimize
> accordingly :-)
You say that somewhat tongue-in-cheek, but Edward's right. If the resilver
code progresses in slab/transaction-group/whatev
A workaround is to create two pools, each with one of your vdevs, make a zvol in
each of them, and export the zvols via iSCSI through the localhost interface. Then
make a third, mirrored pool out of those two iSCSI-backed zvols.
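A rough sketch of that workaround on OpenSolaris-era bits, assuming the legacy shareiscsi property is still present (newer builds would use COMSTAR instead); every pool, volume, and device name below is a placeholder:

    # One single-vdev pool per LUN
    zpool create tank1 c2t0d0
    zpool create tank2 c2t1d0

    # One zvol in each pool
    zfs create -V 100g tank1/vol
    zfs create -V 100g tank2/vol

    # Export both zvols over iSCSI
    zfs set shareiscsi=on tank1/vol
    zfs set shareiscsi=on tank2/vol

    # Point the local initiator at the loopback address
    iscsiadm add discovery-address 127.0.0.1
    iscsiadm modify discovery --sendtargets enable
    devfsadm -i iscsi

    # Mirror the two iSCSI devices into a third pool
    # (real device names come from format or iscsiadm list target)
    zpool create tank3 mirror c3t0d0 c3t1d0

The extra iSCSI hop costs performance, but it gives ZFS a redundant copy to self-heal from.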
On Mon, Oct 18, 2010 at 3:28 AM, Habony, Zsolt wrote:
> In many large datacenters, a different storage team handles LUN requests
> and assignment.
> We ask for a LUN of a specific size, and we get one.
>
> It might happen that the first vdev (LUN) is at the beginning of a RAID set
> on the storage,
> and the second vdev is at the end of the same RAID set on the same p
So are we all agreed then, that a vdev failure will cause pool loss?
On Mon, Oct 18, 2010 at 8:51 AM, Darren J Moffat
wrote:
> On 18/10/2010 16:48, Freddie Cash wrote:
>>
>> On Mon, Oct 18, 2010 at 6:34 AM, Edward Ned Harvey
>> wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Freddie Cash
>>
On 18/10/2010 16:48, Freddie Cash wrote:
On Mon, Oct 18, 2010 at 6:34 AM, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Freddie Cash
If you lose 1 vdev, you lose the pool.
As long as 1 vdev is striped and not mirrored, that's true.
On Mon, Oct 18, 2010 at 6:34 AM, Edward Ned Harvey wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Freddie Cash
>>
>> If you lose 1 vdev, you lose the pool.
>
> As long as 1 vdev is striped and not mirrored, that's true.
> You can only afford to lose a vdev, if your vdev itself is mirrored.
On Mon, 18 Oct 2010, Edward Ned Harvey wrote:
sec to resilver = 133min. So whenever people have resilver times longer
than that ... It's because ZFS resilver code for raidzN is inefficient.
You keep using the term "code" and using terms like "code is
inefficient" when it seems that you are ta
Thank you all for the comments.
You should imagine a datacenter with
- standards that do not depend entirely on me,
- a SAN serving many OSes, Solaris being one of them (and not the majority),
- level 2 engineers usually doing the filesystem increases,
- hundreds of physical boxes, dozens of virtuals on o
On 18/10/2010 15:12, Peter Tribble wrote:
On Mon, Oct 18, 2010 at 2:34 PM, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Freddie Cash
If you lose 1 vdev, you lose the pool.
As long as 1 vdev is striped and not mirrored, that's true.
On 10/18/2010 5:40 AM, Habony, Zsolt wrote:
> (I do not mirror, as the storage gives redundancy behind LUNs.)
>
By not enabling redundancy (Mirror or RAIDZ[123]) at the ZFS level,
you are opening yourself to corruption problems that the underlying
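A minimal sketch of ZFS-level redundancy on top of SAN LUNs (device names are placeholders): mirroring two LUNs, ideally from different back-end RAID groups, lets ZFS repair checksum errors instead of merely reporting them:

    # Mirror two SAN LUNs at the ZFS level
    zpool create apppool mirror c4t1d0 c4t2d0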
On 10/18/2010 4:28 AM, Habony, Zsolt wrote:
>
> I worry about head thrashing.
Why?
If your SAN group gives you a LUN that is at the opposite end of the
array, I would think that was because they had already assigned the
space in the middle to othe
> From: Richard Elling [mailto:richard.ell...@gmail.com]
>
> > This is one of the reasons the raidzN resilver code is inefficient.
> > Since you end up waiting for the slowest seek time of any one disk in
> > the vdev, and when that's done, the amount of data you were able to
> > process was at mo
On Mon, Oct 18, 2010 at 2:34 PM, Edward Ned Harvey wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Freddie Cash
>>
>> If you lose 1 vdev, you lose the pool.
>
> As long as 1 vdev is striped and not mirrored, that's true.
> You can only afford to lose a vdev, if your vdev itself is mirrored.
On Oct 18, 2010, at 6:52 AM, Edward Ned Harvey wrote:
>> From: Richard Elling [mailto:richard.ell...@gmail.com]
>>
>>> http://www.mail-archive.com/zfs-discuss@opensolaris.org/msg41998.html
>>
>> Slabs don't matter. So the rest of this argument is moot.
>
> Tell it to Erik. He might want to know.
> From: Richard Elling [mailto:richard.ell...@gmail.com]
>
> On Oct 17, 2010, at 6:17 AM, Edward Ned Harvey wrote:
>
> >> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> >> boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
> >>
> >> If scrub is operating at a block-level (a
> From: Richard Elling [mailto:richard.ell...@gmail.com]
>
> > http://www.mail-archive.com/zfs-discuss@opensolaris.org/msg41998.html
>
> Slabs don't matter. So the rest of this argument is moot.
Tell it to Erik. He might want to know. Or maybe he knows better than you.
> 2. Each slab is sprea
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Habony, Zsolt
>
> If I use a zpool that is one LUN from the SAN, then when
> it becomes full I add a new LUN to it.
> But I cannot guarantee that the LUN will not come from the s
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Freddie Cash
>
> If you lose 1 vdev, you lose the pool.
As long as 1 vdev is striped and not mirrored, that's true.
You can only afford to lose a vdev, if your vdev itself is mirrored.
You co
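Put differently, redundancy lives inside each top-level vdev; there is no zpool syntax for mirroring one raidzN vdev against another. A pool built from mirrored vdevs (placeholder device names) looks like:

    zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0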
OK, thanks Freddie, that's pretty clear.
Cheers,
Simon
>>> You have an application filesystem from one LUN. (vxfs is expensive,
>>> ufs/svm is not really able to handle online filesystem increase. Thus
>>> we plan to use zfs for application filesystems.)
>
>>What do you mean by "not really"?
>...
>>Use growfs to grow UFS on the grown device.
>
>I know
>> Is there a way to avoid it, or can we be sure that the problem does not
>> exist at all?
>Grow the existing LUN rather than adding another one.
>The only way to have ZFS not stripe is to not give it devices to stripe
>over. So stick with simple mirrors ...
(I do not mirror, as the storage
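A hedged sketch of the grow-the-existing-LUN route, assuming a build recent enough to have the autoexpand pool property and zpool online -e (pool and device names are placeholders):

    # Let the pool pick up added LUN capacity automatically
    zpool set autoexpand=on apppool

    # After the storage team grows the LUN, trigger a size re-read
    # (on older builds, an export/import of the pool does the same)
    zpool online -e apppool c4t1d0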
>> You have an application filesystem from one LUN. (vxfs is expensive, ufs/svm
>> is not really able to handle online filesystem increase. Thus we plan to use
>> zfs for application filesystems.)
>What do you mean by "not really"?
...
>Use growfs to grow UFS on the grown device.
I know it's off
On 10/18/10 2:13 AM, Rainer J.H. Brandt wrote:
Habony, Zsolt writes:
You have an application filesystem from one LUN. (vxfs is
expensive, ufs/svm is not really able to handle online filesystem
increase. Thus we plan to use zfs for application filesystems.)
What do you mean by "not really"? Us
Hi,
Habony, Zsolt writes:
> You have an application filesystem from one LUN. (vxfs is expensive, ufs/svm
> is not really able to handle online filesystem increase. Thus we plan to use
> zfs for application filesystems.)
What do you mean by "not really"?
Use metattach to grow a metadevice or soft partition.
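For comparison, the UFS/SVM online-grow sequence being referred to is roughly the following (metadevice, slice, and mount point are placeholders):

    # Attach another slice to the metadevice
    # (for a soft partition, metattach takes a size instead, e.g. metattach d10 10g)
    metattach d10 c1t2d0s0

    # Grow the mounted UFS file system into the enlarged device
    growfs -M /app /dev/md/rdsk/d10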
On Mon, Oct 18, 2010 at 1:28 AM, Habony, Zsolt wrote:
> Is there a way to avoid it, or can we be sure that the problem does not exist
> at all?
ZFS will coalesce asynchronous writes, which should help with most of
the head thrash on write. Using a log device will convert sync writes
to async.
Fo
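For instance, adding a (preferably mirrored) log device looks like this, with placeholder names:

    zpool add apppool log c5t0d0
    # or, mirrored:
    # zpool add apppool log mirror c5t0d0 c5t1d0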
On 18/10/2010 10:01, Habony, Zsolt wrote:
If I can force concatenation, then I do not have to investigate where the
existing parts of the filesystems are.
You can't; the code for concatenation rather than striping does not
exist, and there are no plans to add it.
Instead of assuming you ha
On 18/10/2010 09:28, Habony, Zsolt wrote:
I worry about head thrashing. Though the memory cache of a large storage array should make
the problem
Is that really something you should be worried about with all the other
software and hardware between ZFS and the actual drives?
If that is a problem then i
>No. The basic principle of the zpool is dynamic striping across vdevs in order
>to ensure that all available spindles are contributing to the workload. If
>you want/need more granular control over what data goes to which disk, then
>you'll need to create multiple pools.
>Just create a new po
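In other words, rather than adding the new LUN as a second top-level vdev, give it its own pool (placeholder names):

    zpool create apppool2 c4t2d0
    zfs create apppool2/appdata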
In many large datacenters, a different storage team handles LUN requests and
assignment.
We ask for a LUN of a specific size, and we get one.
It might happen that the first vdev (LUN) is at the beginning of a RAID set on
the storage,
and the second vdev is at the end of the same RAID set on the same p
On 18/10/2010 07:44, Habony, Zsolt wrote:
I have seen a similar question in this list's archive but haven't
seen an answer.
Can I avoid striping across top-level vdevs?
If I use a zpool that is one LUN from the SAN, then when it becomes full
I add a new LUN to it.
But I cannot guarantee that the LUN will not come from the s
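The operation in question is simply the following (placeholder names); once the second LUN is added as a new top-level vdev, subsequent writes are dynamically striped across both:

    zpool add apppool c4t2d0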