itable when you could have only 7 or 15 or so devices
on a single bus. With modern buses you can have many thousands of
devices on the same fabric, so we address them by WWN.
- Garrett
>
> If you'd like some info about how we use devids and guids,
> please refer to my presen
Hey all,
I have a 10 TB root pool setup like so:
pool: s78
state: ONLINE
scrub: resilver completed after 2h0m with 0 errors on Wed Jan 19 22:04:39 2011
config:
NAME STATE READ WRITE CKSUM
s78 ONLINE 0 0 0
mirror ONLINE 0
Thanks for your replies.
Regards
Victor
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Thanks Darren.
-
WBR, Andrey V. Elsukov
Hi all,
In Jeff's blog (http://blogs.sun.com/bonwick/entry/raid_z)
he mentions that the original RAID-Z code was 599 lines. Where can I find that
version to study? The current code is rather large.
regards
Victor
Hi,
for ZFS raidz1, I know that for random I/O the IOPS of a raidz1 vdev equal
those of one physical disk. Since raidz1 is similar to RAID-5, does RAID-5 have
the same performance characteristic as raidz1, i.e. random IOPS equal to one physical disk's IOPS?
Regards
Victor
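To make the arithmetic behind the raidz1 IOPS question concrete, here is a back-of-the-envelope sketch; the per-disk IOPS figure and the vdev count are made-up illustration values, not measurements:

```shell
# Back-of-the-envelope sketch (all numbers hypothetical):
DISK_IOPS=150          # random IOPS of one spinning disk, roughly
DISKS_PER_VDEV=5       # 4 data + 1 parity raidz1 vdev

# Every raidz1 read touches all data disks in the stripe, so a raidz1
# vdev delivers about the random-read IOPS of a single disk:
RAIDZ1_VDEV_IOPS=$DISK_IOPS

# Random IOPS scale with the number of vdevs, not the number of disks:
VDEVS=4
POOL_IOPS=$((RAIDZ1_VDEV_IOPS * VDEVS))
echo "pool random IOPS ~ $POOL_IOPS"
```

The takeaway of the sketch is that adding disks to one raidz1 vdev does not raise random IOPS; adding vdevs does.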
Hi,
A basic question regarding how the ZIL works:
For an asynchronous write, is the ZIL used at all?
For a synchronous write, if the I/O is small, is the whole I/O placed in the ZIL,
or is just a pointer saved in the ZIL? What about large I/Os?
Regards
Victor
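Not an authoritative answer, but a sketch of the knobs involved may help frame the question. My understanding: asynchronous writes are not logged in the ZIL at all (they are committed at the next transaction-group sync), while synchronous writes are; small sync writes are copied into the log record itself, and large ones are written to their final on-disk location with the ZIL keeping only a pointer. The `sync` and `logbias` properties below are real ZFS dataset properties; the dataset name is hypothetical:

```shell
# Inspect the properties that influence ZIL behavior (dataset name made up):
zfs get sync,logbias tank/fs

# sync=standard  -> only synchronous writes (O_DSYNC, fsync) go to the ZIL
# sync=always    -> every write is logged
# sync=disabled  -> the ZIL is bypassed (unsafe for apps that rely on fsync)

# logbias=latency    -> favor copying small sync writes into the log record
# logbias=throughput -> write data in place and log only a pointer
zfs set logbias=throughput tank/fs
```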
Hi,
Another question, regarding snapshots.
If there is no free space in a ZFS pool, will a write to ZFS fail? Is there a way to
reserve space in the pool for use by snapshots or clones?
Regards
Victor
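On the reservation part of the question above, a sketch of the relevant dataset properties: `reservation` and `refreservation` are real ZFS properties, while the pool and dataset names are hypothetical.

```shell
# Guarantee 20G to tank/data, counting its snapshots and clones:
zfs set reservation=20G tank/data

# Guarantee 20G to the live dataset only, excluding snapshot space:
zfs set refreservation=20G tank/data

# Verify what is set:
zfs get reservation,refreservation tank/data
```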
Hi,
Thanks for the reply.
So the sequential-read-after-random-write problem does exist in ZFS.
I wonder whether it is a problem in practice, e.g. whether it causes longer
backup times, and whether it will be addressed in the future.
So I should ask another question: is ZFS suitable for an environment that has
lots of da
Hi experts,
I am new to ZFS and have a question regarding its sequential performance: I
have read some blogs saying that NetApp's WAFL can suffer a "sequential read after
random write" (SRARW) performance penalty. Since ZFS also does no update in
place, can ZFS have the same problem?
Thanks
Victor
s ok, then destroy old FS
That's just IMHO...
--
WBR, Andrey V. Elsukov
not to all
descendants. Or maybe this functionality is already implemented?
--
WBR, Andrey V. Elsukov
The POSIX specification of rename(2) provides a very nice property
for building atomic transactions:
If the old argument points to the pathname of a file that is not a
directory, the new argument shall not po
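The property being quoted, that rename(2) atomically replaces an existing target, is what makes the classic write-temp-then-rename idiom safe. A minimal sketch (file names hypothetical; `mv` within one filesystem is a rename(2) under the hood):

```shell
dir=$(mktemp -d)
printf 'old contents\n' > "$dir/app.conf"

# 1. Write the complete new version to a temp file on the SAME filesystem.
printf 'new contents\n' > "$dir/app.conf.tmp"

# 2. rename(2) atomically replaces the target: a concurrent reader sees
#    either the old file or the new one in full, never a mixture.
mv "$dir/app.conf.tmp" "$dir/app.conf"

cat "$dir/app.conf"
```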
On Thu, 2009-07-30 at 09:33 +0100, Darren J Moffat wrote:
> Roman V Shaposhnik wrote:
> > On the read-only front: wouldn't it be cool to *not* run zfs sends
> > explicitly but have:
> > .zfs/send/
> > .zfs/sendr/-
> > give you the same data automagica
On Wed, 2009-07-29 at 15:06 +0300, Andriy Gapon wrote:
> What do you think about the following feature?
>
> "Subdirectory is automatically a new filesystem" property - an administrator
> turns
> on this magic property of a filesystem, after that every mkdir *in the root*
> of
> that filesystem c
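As a rough sketch of what such a property would amount to, expressed as an explicit shell wrapper rather than in-kernel magic (the function name and dataset names are hypothetical):

```shell
# Hypothetical wrapper: in the root of the "magic" filesystem, creating
# a directory becomes creating a child filesystem with its own properties.
newdir() {
  parent_fs=$1    # e.g. "tank/projects"
  name=$2
  zfs create "${parent_fs}/${name}"
}
```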
I must admit that this question originates in the context of Sun's
Storage 7210 product, which imposes additional restrictions on the
kinds of knobs I can turn.
But here's the question: suppose I have an installation where ZFS
is the storage for user home directories. Since I need quotas, each
direc
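For reference, the usual one-filesystem-per-home-directory pattern described above looks roughly like this; `quota` and `refquota` are real ZFS properties, while the pool and user names are hypothetical:

```shell
zfs create tank/home
zfs create tank/home/alice
zfs set quota=10G tank/home/alice      # hard cap, including snapshot space
zfs set refquota=10G tank/home/alice   # cap on live data only
```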
On Wed, 2009-02-11 at 09:49 +1300, Ian Collins wrote:
> These posts do sound like someone who is blaming their parents after
> breaking a new toy before reading the instructions.
It looks like there's a serious denial of the fact that "bad things
do happen to even the best of people" on this thre
Regards,
Mark V. Dalton
I'm hoping that this is simpler than I think it is. :-)
We routinely clone our boot disks using a fairly simple script that:
1) Copies the source disk's partition layout to the target disk using
prtvtoc, fmthard and installboot.
2) Using a list, runs newfs against the