Richard,
thanks a lot for that answer. It can be argued back and forth what is right, but
it helps knowing the reason behind the problem. Again, thanks a lot...
//Mike
Hi,
it was actually shared both as a dataset and an NFS share.
We had zonedata/prodlogs set up as a dataset, and then
we had zonedata/tmp mounted as an NFS filesystem within the zone.
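Rough sketch of that kind of setup below; the zone name and NFS server are only
placeholders, not our actual ones.

--- rough sketch (placeholder names) ---
# global zone: delegate the dataset to the zone
zonecfg -z myzone
zonecfg:myzone> add dataset
zonecfg:myzone:dataset> set name=zonedata/prodlogs
zonecfg:myzone:dataset> end
zonecfg:myzone> commit

# inside the zone: zonedata/tmp comes in over NFS instead
mount -F nfs nfsserver:/zonedata/tmp /zonedata/tmp
---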
//Mike
define a lot :-)
We are doing about 7-8 MB per second, which I don't think is a lot, but perhaps it
is enough to screw up the estimates? Anyhow, the resilvering completed about
4386h earlier than expected, so everything is OK now, but I still feel that the
way it figures out the number is wrong.
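Just to put rough numbers on it (assuming there was somewhere around 1 TB to
resilver, which is only my guess): 1,000,000 MB at 7.5 MB/s is about 133,000
seconds, i.e. roughly 37 hours, so an estimate that is thousands of hours off
can't be coming from the throughput alone.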
Any
Hi,
I've searched without luck, so I'm asking instead.
I have a Solaris 10 box,
# cat /etc/release
Solaris 10 11/06 s10s_u3wos_10 SPARC
Copyright 2006 Sun Microsystems, Inc. All Rights Reserved.
Use is subject to license terms.
Hi,
thanks for the reply. But there must be a better explanation than that?
Otherwise it seems kinda harsh to "lose" 20GB per 1TB, and I will most likely
have to answer this question when we discuss whether to migrate
to zfs from vxfs.
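(For scale, 20GB out of 1TB works out to roughly 2% of the raw capacity, so
that is the overhead figure I would need to be able to justify.)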
Hi,
sorry if I am bringing up old news, but I couldn't find a good answer searching
the previous posts (my mom always says I am bad at finding things :)
However, I noticed a difference in the available size when creating a zfs
filesystem compared with a vxfs filesystem, i.e.:
ZFS
zonedata/zfs
Jeff,
thanks for your answer, and I almost wish I did type it wrong (the easy
explanation being that I messed up :-) but from what I can tell I did get it right.
--- zpool commands I ran ---
bash-3.00# grep zpool /.bash_history
zpool
zpool create data raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c2t0d0 c2
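For what it's worth, the generic way I'd double-check how the vdevs actually
ended up; nothing below is specific to my pool other than the name "data":

--- verifying the layout ---
zpool status data   # shows the raidz vdev and its member disks
zpool list data     # size/used/available at the pool level
zfs list -r data    # space as the filesystems see it
---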
Hi,
so it happened...
I have a 10-disk raidz pool running Solaris 10 U2, and after a reboot the whole
pool became unavailable after apparently losing a disk drive. (The drive is
seemingly OK as far as I can tell from other commands.)
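By "other commands" I mean ordinary checks along these lines (nothing exotic):

--- the kind of checks I mean ---
format              # does the drive still show up at all?
iostat -En          # any hard/soft/transport error counters for it?
zpool status -xv    # what ZFS itself thinks of the pool
---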
--- bootlog ---
Jul 17 09:57:38 expprd fmd: [ID 441519 daemon