Hi all,
Ben Rockwood wrote:
> You want to keep stripes wide to reduce wasted disk space but you
> also want to keep them narrow to reduce the elements involved in parity
> calculation.
I second Ben's argument, and the main point IMHO is how the RAID behaves in the
degraded state. When a disk fails,
Hi Peter,
Sorry, I read your post only after posting a reply myself.
Peter Tribble wrote:
> No. The number of spindles is constant. The snag is that for random reads,
> the performance of a raidz1/2 vdev is essentially that of a single disk. (The
> writes are fast because they're always full-strip
(Correction to my previous post: "I Ben's argument" should of course have read
"I second Ben's argument".)
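To put rough numbers on Peter's point (assuming ~100 random IOPS per 7,200 rpm
drive, purely for illustration): a 6-disk raidz1 vdev delivers on the order of
100 random-read IOPS in total, because every block read has to touch all of its
data disks, whereas the same six disks arranged as three 2-way mirrors give
roughly 300-600 random-read IOPS, since each mirror vdev serves reads
independently and either side of a mirror can satisfy a read.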
Hello Nils,
Thursday, September 18, 2008, 11:15:37 AM, you wrote:
NG> Hi Peter,
NG> Sorry, I read your post only after posting a reply myself.
NG> Peter Tribble wrote:
>> No. The number of spindles is constant. The snag is that for random reads,
>> the performance of a raidz1/2 vdev is essential
Hi Robert,
> Basically, the way RAID-Z works is that it spreads an FS block across all
> disks in a given VDEV (minus the parity/checksum disks). Because when you
> read data back from ZFS, before it gets to the application ZFS will check
> its checksum (the FS checksum, not a RAID-Z one), so it needs the entire FS
> blo
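A concrete illustration of that point (using the default 128 KB recordsize and
a 4+1 raidz1 vdev, numbers picked only for the example): a 128 KB file system
block is laid out as roughly 32 KB on each of the four data disks plus a 32 KB
parity column, and the block's checksum covers the whole 128 KB. Even a small
read from that block therefore has to fetch the columns from all four data
disks before the checksum can be verified, which is why a raidz vdev behaves
like a single disk for random reads.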
(not sure if this has already been answered)
> I have a similar situation and would love some concise suggestions:
>
> Had a working version of 2008.05 running snv_93 with the updated grub. I did
> a pkg-update to snv_95 and ran the zfs update when it was suggested. System
> ran fine until I di
Not knowing of a better place to put this, I have created
http://www.genunix.org/wiki/index.php/ZFS_rpool_Upgrade_and_GRUB
Please make any corrections there.
Thanks, Nils
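For reference, the classic pitfall in this area (and presumably one of the
things that page covers) is that after a zpool upgrade of the root pool the
GRUB boot blocks have to be reinstalled before the next reboot, along the
lines of:
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t0d0s0
where the device name above is only a placeholder for the actual boot disk
slice.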
Hi,
> It is important to remember that ZFS is ideal for writing new files from
> scratch.
IIRC, maildir MTAs never overwrite mail files, but courier-imap does maintain
some additional index files which are overwritten in place, and I guess other
IMAP servers probably do the same.
Nils
On Thu, 18 Sep 2008, Nils Goroll wrote:
>
> On the other hand, isn't there room for improvement here? If it were possible
> to break large writes into smaller blocks with individual checksums (for
> instance those which are larger than a preferred_read_size parameter), we
> could still write all of
On Thu, Sep 18, 2008 at 01:26:09PM +0200, Nils Goroll wrote:
> Thank you very much for correcting my long-time misconception.
>
> On the other hand, isn't there room for improvement here? If it were
> possible to break large writes into smaller blocks with individual
> checksums (for instance those w
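To make the idea concrete (reusing the 4+1 raidz1 / 128 KB example from above,
purely as an illustration, with preferred_read_size as the hypothetical tunable
mentioned in the quote): if each 32 KB column carried its own checksum instead
of one checksum covering the whole 128 KB block, a random 32 KB read could be
verified after reading a single data disk, so the vdev could service several
independent random reads in parallel instead of behaving like one disk.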
Nils Goroll wrote:
> Hi Robert,
>
>
>> Basically, the way RAID-Z works is that it spreads an FS block across all
>> disks in a given VDEV (minus the parity/checksum disks). Because when you
>> read data back from ZFS, before it gets to the application ZFS will check
>> its checksum (the FS checksum, not a RAID-Z o
All;
I'm sure I'm missing something basic here. I need to do the following
things, and can't for the life of me figure out how:
1. Export a zfs filesystem over NFS, but restrict access to a limited set of
hosts and/or subnets, i.e. 10.9.8.0/24 and 10.9.9.5.
2. give root access to a zfs file sys
Try something like this:
zfs set sharenfs=options mypool/mydata
where options is something like:
sharenfs="[EMAIL PROTECTED]/24:@10.9.9.5/32,[EMAIL PROTECTED]/24:@10.9.9.5/32"
--
Dave
Michael Stalnaker wrote:
> All;
>
> I’m sure I’m missing something basic here. I need to do the following
> things, and ca
I believe this is just:
zfs set sharenfs='root=host1:host2,[EMAIL PROTECTED]/24:@10.9.9.5' filesystem
See the man pages for zfs(1M) (especially the last example) and share_nfs(1M).
- Johnson
Michael Stalnaker wrote:
> All;
>
> I'm sure I'm missing something basic here. I need to do the follow
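Putting the two replies together, and guessing at the option names that the
list archive mangled (the networks are the ones from the original question;
split the rw= and root= lists however you need):
zfs set sharenfs='rw=@10.9.8.0/24:@10.9.9.5,root=@10.9.8.0/24:@10.9.9.5' mypool/mydata
For item 2, named hosts work in the root= list as well, e.g. root=host1:host2,
as in Johnson's example.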
On Tue, Sep 16, 2008 at 11:51 PM, Ralf Ramge <[EMAIL PROTECTED]> wrote:
> Jorgen Lundman wrote:
>
>> If we were interested in finding a method to replicate data to a 2nd
>> x4500, what other options are there for us?
>
> If you already have an X4500, I think the best option for you is a cron
> job
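The suggestion is cut off here, but the cron job presumably does snapshot-based
replication. A minimal sketch of that approach, assuming ssh access between the
two X4500s and placeholder pool, file system and host names:
# one-off initial replication
zfs snapshot tank/data@rep1
zfs send tank/data@rep1 | ssh thumper2 zfs receive -F tank/data
# periodically, from cron: send only what changed since the last snapshot
zfs snapshot tank/data@rep2
zfs send -i tank/data@rep1 tank/data@rep2 | ssh thumper2 zfs receive -F tank/data
A real script would rotate the snapshot names; zfs receive -F rolls the target
back to the last replicated snapshot before applying the increment.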
I had a disk that contained a zpool. For reasons that we won't go in
to, that disk had zeros written all over it (at least enough to cover
the entirety of the zpool space). Now when I run zpool status the
command hangs when it tries to display information about the now
non-existent pool. Simila
Hi Glenn,
Where is it hanging? Could you provide a stack trace? It's possible
that it's just a bug and not a configuration issue.
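In case it helps, a couple of stock OpenSolaris ways to capture that (assuming
the hung process is zpool itself):
pstack `pgrep -x zpool`
echo "::threadlist -v" | mdb -k
The first gives the userland stack of the hung zpool status, the second (run
as root) dumps kernel thread stacks.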
On 18 Sep, 2008, at 16.12, Glenn Lagasse wrote:
I had a disk that contained a zpool. For reasons that we won't go in
to, that disk had zeros written all over
Glenn Lagasse wrote:
> Hey Mark,
>
> * Mark J Musante ([EMAIL PROTECTED]) wrote:
>
>> Hi Glenn,
>>
>> Where is it hanging? Could you provide a stack trace? It's possible
>> that it's just a bug and not a configuration issue.
>>
>
> I'll have to recreate the situation (won't be able to d
Hey Mark,
* Mark J Musante ([EMAIL PROTECTED]) wrote:
> Hi Glenn,
>
> Where is it hanging? Could you provide a stack trace? It's possible
> that it's just a bug and not a configuration issue.
I'll have to recreate the situation (won't be able to do so until next
week). I had a zpool status (
I apologize if this has been answered already, but I've tried to RTFM and
haven't found much. I'm trying to get HDS shadow copy to work for zpool
replication. We do this with VXVM by modifying each target disk ID after
it's been shadowed from the source LUN. This allows us to import each
ta