Re: /var getting full

2012-04-30 Thread Peter Maloney
On 27.04.2012 17:26, Efraín Déctor wrote:
> Thank you all.
>
> I found out that a Java process was using all this space. I restarted
> it and voilà, problem solved.
Did you write this Java program?

If so, you probably need a finally block:

File f = ...
InputStream in = null;
try {
    in = new FileInputStream(f);
    // whatever you do here, such as create a Reader, you keep a
    // reference to the InputStream
} finally {
    // A finally block runs regardless of what happens in the try. For
    // example, if an Exception is thrown, the finally block still runs;
    // code at the end of the try is not reached when an exception is thrown.
    if (in != null) {
        // You must wrap this in a try/catch (IOException), otherwise
        // the rest of your finally block is not run if close() throws.
        try {
            in.close();
        } catch (IOException e) {
            logger.log(Level.SEVERE, "Failed to close InputStream", e);
        }
    }
}

>
>
> Thanks.
> -Original Message- From: Tom Evans
> Sent: Friday, April 27, 2012 10:22 AM
> To: Damien Fleuriot
> Cc: freebsd-stable@freebsd.org
> Subject: Re: /var getting full
>
> On Fri, Apr 27, 2012 at 4:19 PM, Damien Fleuriot  wrote:
>> Type:
>> sync
>>
>>
>> Then:
>> df -h
>>
>> Then:
>> cd /var && du -hd 1
>>
>>
>> Post results.
>>
>
> As well as this, any unlinked files that have file handles open by
> running processes will not be accounted for in du, but will be counted
> in df. You could try restarting services that write to /var.
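The unlinked-but-open effect Tom describes is easy to demonstrate in a plain POSIX shell (the file name is whatever mktemp picks; sizes are arbitrary):

```shell
# Reproduce the unlinked-but-open effect: df counts the space, du does not.
tmp=$(mktemp) || exit 1
dd if=/dev/zero of="$tmp" bs=1024 count=1024 2>/dev/null  # ~1 MiB of data
exec 3<"$tmp"   # a process (this shell) now holds the file open
rm "$tmp"       # directory entry gone: du no longer sees the file...
df .            # ...but df still counts the blocks while fd 3 is open
exec 3<&-       # closing the last descriptor actually frees the space
```

Restarting a service has the same effect as the last line: it closes the descriptors, so the kernel can finally release the unlinked file's blocks.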
>
> Cheers
>
> Tom
> ___
> freebsd-stable@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-stable
> To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"



Re: 9-STABLE, ZFS, NFS, ggatec - suspected memory leak

2012-04-30 Thread Daniel Braniss
> Daniel Braniss wrote:
> > > Daniel Braniss wrote:
> > > > > Daniel Braniss wrote:
> > > > > > >
> > > > > > > Rick Macklem  wrote in
> > > > > > > <1527622626.3418715.1335445225510.javamail.r...@erie.cs.uoguelph.ca>:
> > > > > > >
> > > > > > > rm> Steven Hartland wrote:
> > > > > > > rm> >  Original Message -
> > > > > > > rm> > From: "Rick Macklem" 
> > > > > > > rm> > > At a glance, it looks to me like 8.x is affected. Note that
> > > > > > > rm> > > the bug only affects the new NFS server (the experimental one
> > > > > > > rm> > > for 8.x) when exporting ZFS volumes. (UFS exported volumes
> > > > > > > rm> > > don't leak)
> > > > > > > rm> > >
> > > > > > > rm> > > If you are running a server that might be affected, just:
> > > > > > > rm> > > # vmstat -z | fgrep -i namei
> > > > > > > rm> > > on the server and see if the 3rd number shown is increasing.
> > > > > > > rm> >
> > > > > > > rm> > Many thanks Rick, wasn't aware we had anything experimental
> > > > > > > rm> > enabled, but I think that would be a yes looking at these numbers:
> > > > > > > rm> >
> > > > > > > rm> > vmstat -z | fgrep -i namei
> > > > > > > rm> > NAMEI: 1024, 0, 1, 1483, 25285086096, 0
> > > > > > > rm> > vmstat -z | fgrep -i namei
> > > > > > > rm> > NAMEI: 1024, 0, 0, 1484, 25285945725, 0
> > > > > > > rm> >
> > > > > > > rm> ^
> > > > > > > rm> I don't think so, since the 3rd number (USED) is 0 here.
> > > > > > > rm> If that # is increasing over time, you have the leak. You are
> > > > > > > rm> probably running the old (default in 8.x) NFS server.
> > > > > > >
> > > > > > >  Just a report: I confirmed it affected 8.x servers running
> > > > > > >  newnfs.
> > > > > > >
> > > > > > >  Actually, I have been suffering from a memory starvation symptom
> > > > > > >  on that server (24GB RAM) for a long time, watching vmstat -z
> > > > > > >  periodically. It stopped working once a week. I investigated the
> > > > > > >  vmstat log again and found the amount of the NAMEI leak was
> > > > > > >  11,543,956 entries (about 11GB!) just before the lock-up. After
> > > > > > >  applying the patch, the leak disappeared. Thank you for fixing it!
> > > > > > >
> > > > > > > -- Hiroki
> > > > > And thanks Hiroki for testing it on 8.x.
> > > > >
> > > > > > this is on 8.2-STABLE/amd64 from around August:
> > > > > > same here, this zfs+newnfs has been hanging every few months, and
> > > > > > I can see now the leak, it's slowly increasing:
> > > > > > NAMEI: 1024, 0, 122975, 529, 15417248, 0
> > > > > > NAMEI: 1024, 0, 122984, 520, 15421772, 0
> > > > > > NAMEI: 1024, 0, 123002, 502, 15424743, 0
> > > > > > NAMEI: 1024, 0, 123008, 496, 15425464, 0
> > > > > >
> > > > > > cheers,
> > > > > > danny
> > > > > Maybe you could try the patch, too.
> > > > >
> > > > > It's at:
> > > > >http://people.freebsd.org/~rmacklem/namei-leak.patch
> > > > >
> > > > > I'll commit it to head soon with a 1 month MFC, so that hopefully
> > > > > Oliver will have a chance to try it on his production server before
> > > > > the MFC.
> > > > >
> > > > > Thanks everyone, for your help with this, rick
> > > >
> > > > I haven't applied the patch yet, but in the meantime I have been
> > > > running some experiments on a zfs/nfs server running 8.3-STABLE,
> > > > and don't see any leaks. What triggers the leak?
> > > >
> > > Fortunately Oliver isolated this. It should leak when you do a
> > > successful "rm" or "rmdir" while running the new/experimental server.
> > >
> > but that's what I did, I'm running the new/experimental nfs server
> > (or so I think :-), and did a huge rm -rf and nothing, nada, no leak.
> > To check the patch, I have to upgrade the production server, the one
> > with the leak, but I wanted to test it on a non-production server
> > first. Anyway, I'll patch the kernel and try it on the leaking
> > production server tomorrow.
> > 
> Well, I think the patch should be harmless.
> 
> You can check which server you are running by doing:
> # nfsstat -e -s
> - and see if the numbers are increasing; if they're zero or not
>   increasing, you are running the old (default on 8.x) server
was running the wrong nfsd, now all is ok, and the patch works (obviously :-)
BTW, if the experimental server is not running, then
# nfsstat -e -s
nfsstat: experimental client/server not loaded
#
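For anyone else checking for the leak, a trivial watch loop along these lines works (the interval is arbitrary):

```shell
# Sample the NAMEI UMA zone once a minute; a 3rd column (USED) that
# climbs without ever falling back indicates the leak discussed above.
while :; do
    date
    vmstat -z | fgrep -i namei
    sleep 60
done
```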

danny






Re: High load event idl.

2012-04-30 Thread Albert Shih
On 28/04/2012 at 09:55:41+0300, Alexander Motin wrote:
> >>>
> >>> last pid: 61010;  load averages:  0.00,  0.00,  0.00 up 2+11:02:42
> >>> 22:29:08
> >>> 126 processes: 1 running, 125 sleeping
> >>> CPU: % user, % nice, % system, % interrupt, % idle
> >>> Mem: 803M Active, 2874M Inact, 1901M Wired, 112M Cache, 620M Buf, 202M 
> >>> Free
> >>> Swap: 6144M Total, 36M Used, 6107M Free
> >>>
> >>
> >> http://lists.freebsd.org/pipermail/freebsd-bugs/2012-April/048213.html
> >
> > What I understand of your message (I'm definitely not a dev) is that
> > it's only a small accounting problem.
> >
> > I'm not absolutely sure of that, because my laptop fan never stops...
> >
> > If you want any more information...
> 
> Definitely, because here I don't see much.
> 
> Generally, all CPU loads and load averages are now calculated via
> sampling, so theoretically, with a spiky load, the numbers may vary for
> many reasons. I would start by collecting information about running
> processes. To find fast-switching processes that could hide from
> accounting, try `top -SH -m io -o vcsw`. To get more information about
> scheduler work, use /usr/src/tools/sched/schedgraph.py (instructions
> are inside it).
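For reference, the usual capture workflow (as sketched in schedgraph.py's own header; option spellings here are from memory, so verify against the script before relying on them):

```shell
# Kernel config additions (then rebuild and boot the debug kernel):
#   options KTR
#   options KTR_ENTRIES=32768
#   options KTR_COMPILE=(KTR_SCHED)
#   options KTR_MASK=(KTR_SCHED)

# At run time, dump the trace buffer and feed it to the viewer:
ktrdump -ct > ktr.out
python /usr/src/tools/sched/schedgraph.py ktr.out
```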

I rebuilt my kernel with KTR.

But I'm not a dev, so I have no idea what the schedgraph.py output shows... :-(

If this can help to solve the problem, you can find my KTR dump and my dmesg here:

http://dl.free.fr/csycL43ad
http://dl.free.fr/j0XQFimPM

Hope that can help you. 

If you need anything else, let me know.

Thanks. 

Regards. 

JAS


-- 
Albert SHIH
DIO bâtiment 15
Observatoire de Paris
5 Place Jules Janssen
92195 Meudon Cedex
Téléphone : 01 45 07 76 26/06 86 69 95 71
xmpp: j...@obspm.fr
Heure local/Local time:
lun 30 avr 2012 12:10:13 CEST


Process for getting data to report LoRs

2012-04-30 Thread Freddie Cash
Is it possible to get the backtrace for a LoR from any of the system
logs or anything like that, after the fact?  Especially after
rebooting into a non-debug kernel?

I compiled a custom kernel for our ZFS boxes that have been locking up
on me lately, adding INVARIANTS and WITNESS.  But, to be safe, I
booted the debug kernel using nextboot on Friday.  The boxes locked up
over the weekend, and we restarted, reverting them back to the
non-debug kernel.

Going through /var/log/messages, I see a couple of LoRs that aren't
listed on http://ipv4.sources.zabbadoz.net/freebsd/lor.html

However, as I'm not running the debug kernel anymore, I can't go
through sysctl to grab the backtrace for it.  Is there any other way
to get the info?  Is there any way to configure the system to log the
backtrace to a file?
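(Aside, hedged: on a kernel built with WITNESS, backtrace printing for LORs is normally controlled roughly as follows. The option and sysctl names are from memory, not from this thread, so double-check them against your source tree before relying on them.)

```shell
# Kernel config (debug kernel only):
#   options WITNESS
#   options STACK     # believed necessary for witness to print stack traces
# Run time: ask witness to print a backtrace with each LOR report
sysctl debug.witness.trace=1
# The trace then goes to the console and, via syslog, to
# /var/log/messages; without it only the two lock lines are logged.
```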

Here's the info from /var/log/messages, which I'm guessing is not
enough to track down the cause of the LoR:
lock order reversal:
1st 0xfe0019415098 zfs (zfs) @
/usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vfsops.c:1704
2nd 0xfe00191669f8 ufs (ufs) @ /usr/src/sys/kern/vfs_syscalls.c:1665

lock order reversal:
1st 0xfe00194d0eb8 rtentry (rtentry) @ /usr/src/sys/net/route.c:374
2nd 0xfe000adb2bc0 if_afdata (if_afdata) @
/usr/src/sys/netinet6/scope6.c:417

System info:
FreeBSD betadrive.sd73.bc.ca 9.0-STABLE FreeBSD 9.0-STABLE #0 r234466:
Fri Apr 20 10:57:30 PDT 2012
r...@betadrive.sd73.bc.ca:/usr/obj/usr/src/sys/ZFSHOST90  amd64

-- 
Freddie Cash
fjwc...@gmail.com
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


FreeBSD 9 "gptboot: invalid backup GPT header" error (boots fine though)

2012-04-30 Thread Adam Strohl
I've been deploying FreeBSD 9 without issue on a number of 
near-identical servers for a client, but have run into an interesting 
annoyance when I hit the two DB servers.


These DB servers have an LSI 3ware 9750-8i (running a 6-disk RAID 10 in 
a single 3TB virtual volume), which sets them apart from the other two 
servers in this cluster (which don't show either issue I am about to 
discuss).  Otherwise the hardware is identical (dual Xeon E5620s, 16GB 
RAM).  I've also never seen this before on other physical (or VM) 
FreeBSD 9 instances, and I've probably done 50+ FreeBSD 9 VM and physical 
installs at this point (and run through the installer process probably 
over 150 times :P).


Before I get into the GPT error, I want to mention this in case it's 
relevant:


I found I had to partition the disks via the shell (gpart create/gpart 
add/etc.) during install, or the kernel would fail to re-mount the 
root disk after booting into the new OS.   If I used the default layout, 
or the partition GUI at all (i.e., 'manual mode'), the new OS wouldn't 
remount root on boot.


I could manually specify the proper root device, i.e., ufs:/dev/da0p3, and 
continue booting without issue, so this is an installer thing.   I'm 
sure I could have fixed this in /boot/loader.conf or similar, but wanted 
to figure out what was breaking (now I know it's something the 
installer is doing, since it doesn't happen when I do it manually).  So I 
kept re-installing, trying different things, and ultimately found that 
shell-based manual partitioning worked fine.
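For reference, the shell-based layout was along these lines (partition sizes and the swap/root split are illustrative, not a record of the exact commands used):

```shell
# Illustrative GPT layout from the installer shell (da0 = the RAID volume)
gpart create -s gpt da0
gpart add -t freebsd-boot -s 64k da0    # da0p1: area for gptboot
gpart add -t freebsd-swap -s 4g da0     # da0p2: swap (size illustrative)
gpart add -t freebsd-ufs da0            # da0p3: root, rest of the volume
gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 da0
newfs -U /dev/da0p3                     # UFS with soft updates on root
```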


However, I see the following error right before BTX comes up (and did 
previously when using the installer's partition GUI):


gptboot: invalid backup GPT header

The machine boots fine, so I'm not stuck, but it is an annoyance for 
a type-A sysadmin like myself.  Even if it's superficial, I dislike 
setting up a client's machine to generate "errors" on boot, especially 
without an explanation or understanding behind it.   I also obviously 
wanted to raise the issue here in case there is actually a rare problem 
or this is a symptom of one.


I could find nothing that related specifically to this issue, so I was 
wondering if anyone else had seen this or had thoughts.


My suspicion is that the large size of the volume (3TB, or 2.7TB 
formatted) makes it too large for the boot loader to address fully, so it 
can't get to the end of the disk where the backup GPT header lives in 
order to validate it.


Or maybe the RAID adapter is doing something weird at the end of the 
disk.  This seems unlikely, though, since the controller presents the RAID 
as a single volume, so I'd assume it hides any tagging or RAID metadata 
from the OS's view of the virtual volume.
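One way to check (and, if appropriate, fix) where the backup header sits, hedged since I haven't tried it on this controller:

```shell
# Compare where the GPT thinks the disk ends vs. what the kernel reports
gpart show da0      # last LBA covered by the partition table
diskinfo -v da0     # media size / sector count as the OS sees it
# If the sizes disagree or the backup header is misplaced, gpart can
# rewrite the backup table at the actual end of the device:
gpart recover da0
```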


That's about all I can think of.

Selected dmesg output:
LSI 3ware device driver for SAS/SATA storage controllers, version: 
10.80.00.003
tws0:  port 0x1000-0x10ff mem 
0xb194-0xb1943fff,0xb190-0xb193 irq 32 at device 0.0 on pci4

tws0: Using legacy INTx
tws0: Controller details: Model 9750-8i, 8 Phys, Firmware FH9X 
5.12.00.007, BIOS BE9X 5.11.00.006


da0 at tws0 bus 0 scbus0 target 0 lun 0
da0:  Fixed Direct Access SCSI-5 device
da0: 6000.000MB/s transfers
da0: 2860992MB (5859311616 512 byte sectors: 255H 63S/T 364725C)


Let me know if anyone wants to see anything else/has seen this/has any 
theories!


--

Adam Strohl
A-Team Systems
http://ateamsystems.com/
