Boyd and all,
Just an update on what happened and what the customer found out
regarding the issue.
===========================
It does appear that the disk filled up by 140G.
I think I now know what happened. I created a raidz pool and pulled a
disk before writing any data to it. So I believe the zfs filesystem had
not initialized yet, and this is why my zfs filesystem was unusable. Can
you confirm this?
But when I created a zfs filesystem and wrote data to it first, it could
then lose a disk and just go DEGRADED. I tested this part by removing
the disk's partition in format.
I will try this same test to reproduce my issue, but can you confirm
whether a raidz zfs filesystem requires data to be written to it before
it is really ready?
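Roughly the steps, in case anyone wants to try the same thing (this
assumes the pool was built from the three disks shown in the status
output below, and that the test file was made with mkfile -- the exact
commands are from memory):

zpool create pool raidz c1t2d0 c1t3d0 c1t4d0
# failing case: pull c1t4d0 right here, before any writes --
#   the pool came up unusable
# surviving case: write data first, then remove c1t4d0's
#   partition in format:
mkfile 160g /pool/nullfile
zpool status pool    # pool reports DEGRADED, data still readable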
[EMAIL PROTECTED] df -k
Filesystem            kbytes      used    avail capacity  Mounted on
/dev/dsk/c1t0d0s0    4136995   2918711  1176915    72%    /
/devices                   0         0        0     0%    /devices
ctfs                       0         0        0     0%    /system/contract
proc                       0         0        0     0%    /proc
mnttab                     0         0        0     0%    /etc/mnttab
swap                 5563996       616  5563380     1%    /etc/svc/volatile
objfs                      0         0        0     0%    /system/object
/usr/lib/libc/libc_hwcap2.so.1
                     4136995   2918711  1176915    72%    /lib/libc.so.1
fd                         0         0        0     0%    /dev/fd
/dev/dsk/c1t0d0s5    4136995     78182  4017444     2%    /var
/dev/dsk/c1t0d0s7    4136995      4126  4091500     1%    /tmp
swap                 5563400        20  5563380     1%    /var/run
/dev/dsk/c1t0d0s6    4136995     38674  4056952     1%    /opt
pool               210567315 210566773        0   100%    /pool
/
[EMAIL PROTECTED] cd /pool
/pool
[EMAIL PROTECTED] ls -la
total 421133452
drwxr-xr-x   2 root     sys             3 Aug 23 17:19 .
drwxr-xr-x  25 root     root          512 Aug 23 20:34 ..
-rw-------   1 root     root 171798691840 Aug 23 17:43 nullfile
/pool
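(A side note on the numbers: nullfile is 171798691840 bytes, which is
exactly 160 GiB (160 x 2^30), while df above shows about 200 GiB used
on the pool. I assume that gap is the raidz space-accounting issue from
bug 6288488 mentioned in the second email below.)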
[EMAIL PROTECTED]
[EMAIL PROTECTED] zpool status
  pool: pool
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas
        exist for the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-D3
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        pool        DEGRADED     0     0     0
          raidz     DEGRADED     0     0     0
            c1t2d0  ONLINE       0     0     0
            c1t3d0  ONLINE       0     0     0
            c1t4d0  UNAVAIL  15.12 10.27     0  cannot open

errors: No known data errors
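If I read the action line right, once the disk is physically back the
recovery should just be the standard online command (pool and device
names as in the config above):

zpool online pool c1t4d0
zpool status pool    # vdev should resilver and return to ONLINE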
AND SECOND EMAIL:
I'm unable to reproduce my failed zfs pool using raidz. As for the
disk size bugs (6288488 and 2140116), I have a few questions. The
developer said it would be fixed in u3. When is u3 supposed to be
released? U2 just came out. Also, can or will this be fixed in u2?
=============================================
Any ideas when Solaris 10 update 3 (11/06) will be released? And would
this be fixed in Solaris 10 update 2 (6/06)?
Thanks to all of you.
Arlina-