[zfs-discuss] Re: where has all my space gone? (with zfs mountroot + b38)
James C. McPherson wrote:
> Hi all,
> I got a new 300GB SATA disk last Friday for my u20. After figuring out
> that the current u20 BIOS barfs on EFI labels, and after creating a
> whole-disk-sized slice 0, I used TimF's script to implement zfsroot.
> All seemed well and good until this morning when I did a df:
>
> pieces: $ df -k /
> Filesystem                 kbytes      used     avail capacity  Mounted on
> root_pool/root_filesystem 286949376 183288496 103652571   64%   /
>
> Now, try as I might, I'm unable to account for more than about 26GB
> on this disk. Where has all my space gone?

Ok, I thought things were going ok until today, when I realised that not
only has the space in use not gone down, but that even after acquiring
more disks and moving 22GB of data *off* root_pool, the in-use count on
root_pool has increased.

I've bfu'd to the 20/May archives and done update_nonON to build 40 as
well (though I noticed this yesterday, before I updated).

Using du -sk on the local directories under /, root_pool/root_filesystem
shows (kb):

     919  bfu.child
      99  bfu.conflicts
     796  bfu.parent
       0  bin
   53330  boot
       5  Desktop
     445  dev
     217  devices
   85831  etc
 1697108  export
   56977  kernel
  578545  opt/netbeans
    4633  opt/onbld
  336935  opt/openoffice.org2.0
    1788  opt/OSOLopengrok
    4983  opt/PM
    6018  opt/schily
       7  opt/SUNWits
      49  opt/SUNWmlib
  441595  opt/SUNWspro
   37954  platform
    1489  sbin
 3149821  usr
 3202887  var

for a grand total of ~9.2GB, whereas my new pool, inout, has these
filesystems:

  18G   /scratch
  739M  /opt/local
  945M  /opt/csw
  2.4G  /opt/hometools

pieces: # df -k
Filesystem                 kbytes      used     avail capacity  Mounted on
root_pool/root_filesystem 286949376 234046924  52865843   82%   /
/devices                          0         0         0    0%   /devices
ctfs                              0         0         0    0%   /system/contract
proc                              0         0         0    0%   /proc
mnttab                            0         0         0    0%   /etc/mnttab
swap                        8097836       768   8097068    1%   /etc/svc/volatile
objfs                             0         0         0    0%   /system/object
/usr/lib/libc/libc_hwcap2.so.1
                          286912767 234046924  52865843   82%   /lib/libc.so.1
fd                                0         0         0    0%   /dev/fd
swap                        8097148        80   8097068    1%   /tmp
swap                        8097104        40   8097064    1%   /var/run
inout/scratch             132120576  18536683 109397087   15%   /scratch
/dev/dsk/c1d0s0             8266719   6941900   1242152   85%   /ufsroot
inout/optlocal            132120576    752264 109397087    1%   /opt/local
inout/csw                 132120576    959117 109397087    1%   /opt/csw
inout/hometools           132120576   2470511 109397087    3%   /opt/hometools
inout                     132120576        24 109397087    1%   /inout
root_pool                 286949376        24  52865843    1%   /root_pool
/export/home/jmcp         286912767 234046924  52865843   82%   /home/jmcp

I've had a "zdb -bv root_pool" running for about 30 minutes now... it
just finished, and of course told me that everything adds up:

Traversing all blocks to verify nothing leaked ...
No leaks (block sum matches space maps exactly)

        bp count:             4575397
        bp logical:      384796715008    avg:  84101
        bp physical:     238705154048    avg:  52171    compression: 1.61
        bp allocated:    239698446336    avg:  52388    compression: 1.61
        SPA allocated:   239698446336    used: 80.30%

Blocks  LSIZE   PSIZE   ASIZE     avg    comp   %Total  Type
    12   132K   32.5K   89.0K   7.42K    4.06     0.00  deferred free
     1    512     512      1K      1K    1.00     0.00  object directory
     3  1.50K   1.50K   3.00K      1K    1.00     0.00  object array
     1    16K   1.50K   3.00K   3.00K   10.67     0.00  packed nvlist
     -      -       -       -       -       -       -   packed nvlist size
     2    32K   17.0K   35.0K   17.5K    1.88     0.00  bplist
     -      -       -       -       -       -       -   bplist header
     -      -       -       -       -       -       -   SPA space map header
 5.78K  24.8M   17.6M   35.6M   6.16K    1.41     0.02  SPA space map
     1    16K     16K     16K     16K    1.00     0.00  ZIL intent log
 59.5K   952M    237M    477M   8.01K    4.02     0.21  DMU dnode
     3  3.00K   1.50K   4.50K   1.50K    2.00     0.00  DMU objset
     -      -       -       -       -       -       -   DSL directory
     3  1.50K   1.50K   3.00K      1K    1.00     0.00  DSL directory child map
     2     1K      1K      2K      1K    1.00     0.00  DSL dataset snap map
     5  64.5K   7.50K   15.0K   3.00K    8.60     0.00  DSL props
     -      -       -       -       -       -       -   DSL dataset
     -      -       -       -       -       -       -   ZFS znode
     -      -       -       -       -       -       -   ZFS ACL
 4.18M   357G    222G    223G   53.2K    1.61    99.68  ZFS plain file
 8.07K   129M   47.2M   94.9M   11.8K    2.74     0.04  ZFS delete queue
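A quick way to cross-check df against ZFS's own accounting is to compare
the pool- and dataset-level numbers directly. A minimal sketch, using the
pool name from this thread (exact flags and column names vary by build,
so check zfs(1M) and zpool(1M) on yours):

  # zpool list root_pool
  # zfs list -r -o name,used,available,referenced root_pool

If the pool-level used figure agrees with df but the per-dataset totals
don't, the space is being held by something outside the visible file
tree: snapshots, clones, or pending deletions.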
Re: [zfs-discuss] Re: where has all my space gone? (with zfs mountroot + b38)
On 5/21/06, James C. McPherson <[EMAIL PROTECTED]> wrote:
> Using du -sk on the local directories under /:

I assume that you have also looked through /proc/*/fd/* to be sure that
there aren't any big files with a link count of zero (file has been rm'd
but process still has it open). The following commands may be helpful.

Find the largest files that are open:

# du -h /proc/*/fd/* | sort -n

or

Find all open files that have a link count of zero (have been removed):

# ls -l /proc/*/fd/* \
    | nawk 'BEGIN { print "Size_MB File" }
            $2 == 0 && $1 ~ /^-/ {
                printf "%7d %s\n", $5/1024/1024, $NF
            }'

Mike

--
Mike Gerdts
http://mgerdts.blogspot.com/
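One caveat worth noting about the first command: sort -n on du -h output
can missort entries once sizes cross unit boundaries (512K sorts after
1.2M, for example, despite being smaller). A variant of the same idea
that sorts reliably, using plain kilobyte counts (a sketch, not Mike's
original):

  # du -k /proc/*/fd/* | sort -n | tail

The tail leaves the largest open files at the bottom of the output.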
Re: [zfs-discuss] Re: where has all my space gone? (with zfs mountroot + b38)
Mike Gerdts wrote:
> On 5/21/06, James C. McPherson <[EMAIL PROTECTED]> wrote:
>> Using du -sk on the local directories under /:
>
> I assume that you have also looked through /proc/*/fd/* to be sure
> that there aren't any big files with a link count of zero (file has
> been rm'd but process still has it open). The following commands may
> be helpful:
>
> Find the largest files that are open:
>
> # du -h /proc/*/fd/* | sort -n
>
> or
>
> Find all open files that have a link count of zero (have been removed)
>
> # ls -l /proc/*/fd/* \
>     | nawk 'BEGIN { print "Size_MB File" }
>             $2 == 0 && $1 ~ /^-/ {
>                 printf "%7d %s\n", $5/1024/1024, $NF
>             }'

Hi Mike,
Thanks for the script, but no, it produced only this:

Size_MB File
/proc/102286/fd/3: No such file or directory
/proc/102286/fd/63: No such file or directory

The disk usage has persisted across several reboots.


--
James C. McPherson
--
Solaris Datapath Engineering
Data Management Group
Sun Microsystems
Re: [zfs-discuss] Re: where has all my space gone? (with zfs mountroot + b38)
> Find the largest files that are open:
>
> # du -h /proc/*/fd/* | sort -n
>
> or
>
> Find all open files that have a link count of zero (have been removed)
>
> # ls -l /proc/*/fd/* \
>     | nawk 'BEGIN { print "Size_MB File" }
>             $2 == 0 && $1 ~ /^-/ {
>                 printf "%7d %s\n", $5/1024/1024, $NF
>             }'

Nah:

find /proc/*/fd -type f -links 0

(Add a -size +1000000c for good measure, to find unlinked files over one
SI MB.)

Casper
Re: [zfs-discuss] Re: where has all my space gone? (with zfs mountroot + b38)
> I've had a "zdb -bv root_pool" running for about 30 minutes now.. it
> just finished and of course told me that everything adds up:

This is definitely the delete queue problem:

> Blocks  LSIZE   PSIZE   ASIZE     avg    comp   %Total  Type
>  4.18M   357G    222G    223G   53.2K    1.61    99.68  ZFS plain file
>  8.07K   129M   47.2M   94.9M   11.8K    2.74     0.04  ZFS delete queue

Under normal circumstances, the delete queue should be empty. Here, the
delete queue *itself* is 129M, which means that it's probably describing
many GB of data to be deleted.

The problem is described here:

6420204 root filesystem's delete queue is not running

Tabriz, any update on this?

Jeff
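For anyone who wants to confirm this on their own pool: the delete queue
is just another object in the dataset, so zdb's per-dataset object
listing can show it. A sketch (zdb is a private interface and the -d
listing format is build-dependent, so treat the exact invocation as an
assumption rather than a recipe):

  # zdb -dddd root_pool/root_filesystem | grep -i 'delete queue'

A delete queue entry of substantial size, on a filesystem where nothing
should be pending deletion, is the signature of 6420204.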
Re: [zfs-discuss] Re: where has all my space gone? (with zfs mountroot + b38)
Hi Jeff,

Jeff Bonwick wrote:
>> I've had a "zdb -bv root_pool" running for about 30 minutes now.. it
>> just finished and of course told me that everything adds up:
>
> This is definitely the delete queue problem:
>
>> Blocks  LSIZE   PSIZE   ASIZE     avg    comp   %Total  Type
>>  4.18M   357G    222G    223G   53.2K    1.61    99.68  ZFS plain file
>>  8.07K   129M   47.2M   94.9M   11.8K    2.74     0.04  ZFS delete queue
>
> Under normal circumstances, the delete queue should be empty. Here,
> the delete queue *itself* is 129M, which means that it's probably
> describing many GB of data to be deleted.

Aha... I'll go and study the source a bit more carefully. I know that
zdb is private and totally unstable, but could we get the manpage for it
to at least say what the LSIZE, PSIZE and ASIZE columns mean, please?

> The problem is described here:
>
> 6420204 root filesystem's delete queue is not running
>
> Tabriz, any update on this?

I guess I discounted this bug (of course I knew of it) because, if you
look at the %Total column, it only shows up as 0.04%. Time for me to
UTSL.

Thanks heaps,
James C. McPherson
--
Solaris Datapath Engineering
Data Management Group
Sun Microsystems
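For readers following along, the usual reading of those columns (an
editor's gloss, since the manpage indeed doesn't document them): LSIZE
is the logical size before compression, PSIZE the physical size after
compression, and ASIZE the space actually allocated on disk, including
overheads such as ditto copies or RAID-Z parity. The comp column is
simply LSIZE/PSIZE, which the figures above bear out:

  357G / 222G  = 1.61   (ZFS plain file)
  129M / 47.2M = 2.73   (ZFS delete queue; the table's 2.74 reflects
                         rounding in the displayed sizes)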
[zfs-discuss] Re: ZFS Web administration interface
To expand on the original question: in nv 38 and 39, I start the Java
Web Console (https://localhost:6789) and log in as root. Instead of the
available applications (including ZFS admin), I get this page:

  You Do Not Have Access to Any Application
  No application is registered with this Sun Java(TM) Web Console, or
  you have no rights to use any applications that are registered. See
  your system administrator for assistance.

I've tried smreg and wcadmin, but I don't know the location/name of the
ZFS app to register. Any help is appreciated; google and sunsolve come
up empty. On the same note, are there any other apps that can be
registered in the Sun Java Web Console?

Ron Halstead
Technical Consultant
TAOS
Re: [zfs-discuss] Re: ZFS Web administration interface
On 22/05/2006, at 6:41 AM, Ron Halstead wrote:
> To expand on the original question: in nv 38 and 39, I start the Java
> Web Console (https://localhost:6789) and log in as root. Instead of
> the available applications (including ZFS admin), I get this page:
>
>   You Do Not Have Access to Any Application
>   No application is registered with this Sun Java(TM) Web Console, or
>   you have no rights to use any applications that are registered. See
>   your system administrator for assistance.
>
> I've tried smreg and wcadmin, but I don't know the location/name of
> the ZFS app to register. Any help is appreciated; google and sunsolve
> come up empty. On the same note, are there any other apps that can be
> registered in the Sun Java Web Console?

I've noticed the same problem on b37 and followed the same path. I also
have no answer :(

Boyd
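A few generic checks that may help narrow this down (a sketch: smreg and
wcadmin are the tools already named above, while the smcwebserver
command and the webconsole SMF service are assumptions based on
contemporary Solaris builds, so verify the exact names and subcommands
against your build's manpages):

  # svcs -a | grep webconsole      is the console service online?
  # smcwebserver restart           restart the console after any
                                   registration change
  # wcadmin list -a                list deployed applications (3.x)

If the ZFS admin application doesn't appear in that list at all, the
registration step presumably never ran, or failed silently, during the
build upgrade.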
Re: [zfs-discuss] tracking error to file
On Fri, May 19, 2006 at 01:23:02PM -0600, Gregory Shaw wrote:
> DATASET  OBJECT  RANGE
> 1b       2402    lvl=0 blkid=1965
>
> I haven't found a way to report in human terms what the above object
> refers to. Is there such a method?

There isn't any great method currently, but you can use 'zdb' to find
this information. The quickest way would be to first determine the name
of dataset 0x1b (= 27):

# zdb local | grep "ID 27,"
Dataset local/ahrens [ZPL], ID 27, ...

Then get info on that particular object in that filesystem:

# zdb -vvv local/ahrens 2402
...
    Object  lvl   iblk   dblk  lsize  asize  type
      2402    1    16K  3.50K  3.50K  2.50K  ZFS plain file
                                 264  bonus  ZFS znode
    path    /raidz/usr/src/uts/common/fs/zfs/dmu.c
...

The "path" listed is relative to the filesystem's mountpoint.

--matt
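One small convenience for the first step: the DATASET field in the error
report is hexadecimal, while zdb prints the dataset ID in decimal. Any
POSIX shell's printf can do the conversion (a generic sketch, nothing
zdb-specific):

  $ printf '%d\n' 0x1b
  27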