Re: bce reporting fantom input errors?
Danny Braniss wrote:
> Hi,
> After changing cables, switches, and ports, I came to the conclusion
> that bce is reporting input errors that are not there, or creating them.
> I checked this with 3 different boxes, all Dell-2950/Broadcom NetXtreme II
> BCM5708 1000Base-T (B2), and one of them, while running Solaris, reported
> 0 errors after a week, while under FreeBSD its count was > 100 after a
> few minutes. The errors appear under 7.1-PRERELEASE, but not under 7.0.
>
> Anybody else seeing this?

Please apply this patch; it was committed as revision 186169 about 3 hours ago against -HEAD. I'll MFC it after 3 days.

Cheers,
--
Xin LI    http://www.delphij.net/
FreeBSD - The Power to Serve!

Index: if_bce.c
===================================================================
--- if_bce.c	(revision 186076)
+++ if_bce.c	(working copy)
@@ -7408,7 +7408,6 @@
 	    (u_long) sc->stat_IfInMBUFDiscards +
 	    (u_long) sc->stat_Dot3StatsAlignmentErrors +
 	    (u_long) sc->stat_Dot3StatsFCSErrors +
-	    (u_long) sc->stat_IfInFramesL2FilterDiscards +
 	    (u_long) sc->stat_IfInRuleCheckerDiscards +
 	    (u_long) sc->stat_IfInFTQDiscards +
 	    (u_long) sc->com_no_buffers;

_______________________________________________
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"
Re: 7.1-PRERELEASE: arcmsr write performance problem
At 05:29 PM 12/15/2008, Paul MacKenzie wrote:

> The next thing I am doing is going to be removing the QUOTA feature to see
> if this has any bearing on this problem. It does not appear to be even
> writing at a heavy load as you can see (almost nothing) but the processes
> are mostly in UFS when it spirals out of control.

What's strange is that the output from gstat shows the disks hardly active
at all. Yet why is the syncer at 100%? Do you have write caching disabled
on the array? What does the raw throughput look like to the disks? e.g. if
you try a simple dd if=/dev/zero of=/var/tmp/test bs=1024k count=1000?

> I moved the processing of amavisd-new into a memory drive to at least take
> that off the IO and this seems to have helped a bit. There is not a lot of
> mail going through the system but every little bit helps. I suspect this
> is one other reason that is bringing the problem to the forefront, as
> amavisd-new can use the disks a bit to process each e-mail.

Is the high load average simply a function of processes blocking on network
IO? Our av/spam scanners, for example, show a high load average because
there are many processes waiting on network IO to complete (e.g. talking to
RBL lists, waiting for DCC servers to complete, etc.)

Also, is it really related to the arcmsr driver? i.e. if you did the same
tasks on a single IDE drive, would the performance profile be the same?

---Mike
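To make the suggested raw-throughput check concrete, here is a minimal sketch of the dd test plus monitoring; the file path is illustrative, and the monitoring commands (gstat, top -IS) are FreeBSD-specific:

```shell
# Sequential write test: 1000 chunks of 1 MiB from /dev/zero.
# Write to a regular file on the suspect array (not a directory or raw device).
dd if=/dev/zero of=/var/tmp/ddtest bs=1024k count=1000

# In a second terminal while dd runs, watch per-provider disk activity:
#   gstat
# and per-process CPU/state, to see whether the syncer pegs:
#   top -IS

# Remove the test file afterwards.
rm -f /var/tmp/ddtest
```

If gstat shows the disks nearly idle while dd throughput is poor and the syncer spins, the bottleneck is above the disks (filesystem or driver), not the spindles themselves.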
more zfs/nfs panics
Hi,
I'm trying to tar a rather big directory via NFS (some 800GB); it has many
subdirectories, some of them with many files (close to 10^6 :-)

Just before the server panics, the tar (on the client) starts complaining
about lost files or permission denied, but not in the pathological
directories.

panic: kmem_malloc(-1661382656): kmem_map too small: 645009408 total allocated
cpuid = 3
KDB: enter: panic
[thread pid 881 tid 100112 ]
Stopped at kdb_enter_why+0x3d: movq $0,0x5ef3e8(%rip)
db> tr
Tracing pid 881 tid 100112 td 0xff0004ba2000
kdb_enter_why() at kdb_enter_why+0x3d
panic() at panic+0x17b
kmem_malloc() at kmem_malloc+0x565
uma_large_malloc() at uma_large_malloc+0x4a
malloc() at malloc+0xd7
nfsrv_readdir() at nfsrv_readdir+0x4e1
nfssvc() at nfssvc+0x400
syscall() at syscall+0x1bb
Xfast_syscall() at Xfast_syscall+0xab
--- syscall (155, FreeBSD ELF64, nfssvc), rip = 0x8006885cc, rsp = 0x7fffea28, rbp = 0 ---

I have increased:

vm.kmem_size_max="1024M"
vm.kmem_size="1024M"
vfs.zfs.arc_max="800M"

It just seems to delay the panic, though; it smells like some memory leak ...

The host is running amd64, quad core, 7.1-PRERELEASE, and 8GB.

danny
Re: bce reporting fantom input errors?
> Danny Braniss wrote:
> > Hi,
> > After changing cables, switches, and ports, I came to the conclusion
> > that bce is reporting input errors that are not there, or creating them.
> > I checked this with 3 different boxes, all Dell-2950/Broadcom NetXtreme II
> > BCM5708 1000Base-T (B2), and one of them, while running Solaris, reported
> > 0 errors after a week, while under FreeBSD its count was > 100 after a
> > few minutes. The errors appear under 7.1-PRERELEASE, but not under 7.0.
> >
> > Anybody else seeing this?
>
> Please apply this patch; it was committed as revision 186169 about 3
> hours ago against -HEAD. I'll MFC it after 3 days.
>
> Cheers,
> --
> Xin LI    http://www.delphij.net/
> FreeBSD - The Power to Serve!
>
> Index: if_bce.c
> ===================================================================
> --- if_bce.c	(revision 186076)
> +++ if_bce.c	(working copy)
> @@ -7408,7 +7408,6 @@
>  	(u_long) sc->stat_IfInMBUFDiscards +
>  	(u_long) sc->stat_Dot3StatsAlignmentErrors +
>  	(u_long) sc->stat_Dot3StatsFCSErrors +
> -	(u_long) sc->stat_IfInFramesL2FilterDiscards +
>  	(u_long) sc->stat_IfInRuleCheckerDiscards +
>  	(u_long) sc->stat_IfInFTQDiscards +
>  	(u_long) sc->com_no_buffers;

Thanks! So it was actually counting IfInFramesL2FilterDiscards. BTW, the patch worked; it's now 0 input errors.
danny
Re: more zfs/nfs panics
Danny Braniss wrote:
> Hi,
> I'm trying to tar a rather big directory via NFS (some 800GB); it has many
> subdirectories, some of them with many files (close to 10^6 :-)
>
> Just before the server panics, the tar (on the client) starts complaining
> about lost files or permission denied, but not in the pathological
> directories.
>
> panic: kmem_malloc(-1661382656): kmem_map too small: 645009408 total allocated
> cpuid = 3
> KDB: enter: panic
> [thread pid 881 tid 100112 ]
> Stopped at kdb_enter_why+0x3d: movq $0,0x5ef3e8(%rip)
> db> tr
> Tracing pid 881 tid 100112 td 0xff0004ba2000
> kdb_enter_why() at kdb_enter_why+0x3d
> panic() at panic+0x17b
> kmem_malloc() at kmem_malloc+0x565
> uma_large_malloc() at uma_large_malloc+0x4a
> malloc() at malloc+0xd7
> nfsrv_readdir() at nfsrv_readdir+0x4e1
> nfssvc() at nfssvc+0x400
> syscall() at syscall+0x1bb
> Xfast_syscall() at Xfast_syscall+0xab
> --- syscall (155, FreeBSD ELF64, nfssvc), rip = 0x8006885cc, rsp = 0x7fffea28, rbp = 0 ---
>
> I have increased:
> vm.kmem_size_max="1024M"
> vm.kmem_size="1024M"
> vfs.zfs.arc_max="800M"
>
> It just seems to delay the panic, though; it smells like some memory leak ...

Well, the canonical fix seems to be to DECREASE vfs.zfs.arc_max to something
like 100M and keep decreasing until it works.
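These tunables live in /boot/loader.conf. A sketch of the direction suggested here (the values are this thread's starting points for an 8GB amd64 7.x box, not universal recommendations):

```
# /boot/loader.conf -- ZFS-as-NFS-server tuning, FreeBSD 7.x era.
# Enlarge the kernel memory map (7.x amd64 defaults were small for ZFS):
vm.kmem_size="1024M"
vm.kmem_size_max="1024M"
# Cap the ARC well below kmem_size; if kmem_map panics persist,
# keep decreasing (per the advice above: try 100M and go down from there).
vfs.zfs.arc_max="100M"
```

The trade-off is a smaller ARC and thus less read caching, but on 7.x that was routinely preferable to exhausting kmem_map under NFS readdir load.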
Install issues with 7.x
Ok, the message didn't send the entire post history; this should be it. Hopefully it's readable enough.

Hello,

I purchased a new Clevo M860TU on account of reports that it ran Linux very well, and I was hoping it would fare the same on FreeBSD. Not so much; a little help? I posted this in -mobile originally but thought -stable would be a better choice; I don't know if it is more appropriate here or in ACPI. I'm giving you as much information as I know how to get. As I cannot get sysinstall to load, I am having to type all these dmesg snippets by hand. The boot process is hanging. This is all with 7.x; I can give 6.x if needed.

Hardware:
Intel P9500
4GB DDR3-1066
Nvidia 9800M GT
Atheros AR5006e
FreeBSD 7.1-BETA2

These snippets of dmesg happen around the end, where it hangs.

1. Default
...
cpu0: on acpi0
ACPI Error (dsopcode-0350): No pointer back to NS node in buffer obj 0xc6a02d40 [20070320]
ACPI Exception (dswexec-0556): AE_AML_INTERNAL, While resolving operands for [OpcodeName unavailable] [20070320]
ACPI Error (psparse-0626): Method parse/execution failed [\_PR_.CPU0._OSC] (Node 0xc68556e0), AE_AML_INTERNAL
est0: on cpu0
p4tcc0: on cpu0
cpu1: on acpi0
ACPI Error (dsopcode-0350): No pointer back to NS node in buffer obj 0xc6a0e300 [20070320]
ACPI Exception (dswexec-0556): AE_AML_INTERNAL, While resolving operands for [OpcodeName unavailable] [20070320]
ACPI Error (psparse-0626): Method parse/execution failed [\_PR_.CPU1._OSC] (Node 0xc685560), AE_AML_INTERNAL
est1: on cpu1
p4tcc1: on cpu1
...
cpu0: Cx states changed
cpu1: Cx states changed
unknown: timeout waiting for read DRQ
unknown: timeout waiting for read DRQ
acd0: DVDR at ata3-master UDMA33
GEOM_LABEL: Label for provider acd0 is iso9660/FreeBSD_Install
run_interrupt_driven_hooks: still waiting after 60 seconds for xpt_config
run_interrupt_driven_hooks: still waiting after 120 seconds for xpt_config
run_interrupt_driven_hooks: still waiting after 180 seconds for xpt_config
run_interrupt_driven_hooks: still waiting after 240 seconds for xpt_config
run_interrupt_driven_hooks: still waiting after 300 seconds for xpt_config

Then just stalls.

2. No ACPI
...
unknown: timeout waiting for read DRQ
unknown: timeout waiting for read DRQ
acd0: DVDR at ata3-master UDMA33
GEOM_LABEL: Label for provider acd0 is iso9660/FreeBSD_Install
run_interrupt_driven_hooks: still waiting after 60 seconds for xpt_config
run_interrupt_driven_hooks: still waiting after 120 seconds for xpt_config
run_interrupt_driven_hooks: still waiting after 180 seconds for xpt_config
run_interrupt_driven_hooks: still waiting after 240 seconds for xpt_config
run_interrupt_driven_hooks: still waiting after 300 seconds for xpt_config

Then just stalls.

3. Safe Mode

I can only tell you a little because the console is spammed. It is the same as No ACPI, but with an interrupt storm.
...
unknown: timeout waiting for read DRQ
unknown: timeout waiting for read DRQ
acd0: DVDR at ata3-master UDMA33
GEOM_LABEL: Label for provider acd0 is iso9660/FreeBSD_Install
run_interrupt_driven_hooks: still waiting after 60 seconds for xpt_config
run_interrupt_driven_hooks: still waiting after 120 seconds for xpt_config
run_interrupt_driven_hooks: still waiting after 180 seconds for xpt_config
run_interrupt_driven_hooks: still waiting after 240 seconds for xpt_config
run_interrupt_driven_hooks: still waiting after 300 seconds for xpt_config

When it gets to the unknowns, this is spammed.
interrupt storm detected on "irq10:"; throttling interrupt source

Other than the interrupt storm spam, it is halted like the others.

4. Single User Mode

Same as 1, Default.

5. Verbose

All I can tell you is what is spammed at the end:

acpi: bad write to port 0x080 (32), val hex

where hex is ever increasing and loops when it hits 0xff01. I can also see the run_interrupt_driven_hooks message in all the spam.

From some googling: if you add the tunable before boot

debug.acpi.block_bad_io=1

it might be of some help. This just leads to a never-ending loop of ACPI errors; they scroll very fast and are difficult to record, might I add!
...
acpi: bad write to port 0x080 (32), val hex
ACPI Exception (evregion-0529): AE_BAD_PARAMETER, Returned by handler for [SystemIO] [20070320]
ACPI Error (psparse-0626): Method parse/execution failed [\P8XH] (Node 0xc6850a60), AE_BAD_PARAMETER
ACPI Error (psparse-0626): Method parse/execution failed [\_GPE._L01] [20070320]
ACPI Exception (evgpe-0687): AE_BAD_PARAMETER, while evaluating GPE method [_L01] [20070320]
--repeat--
...

FreeBSD 7.0-RELEASE

7.0 is a little different than 7.1. The messages are somewhat the same, but they happen near the beginning of dmesg instead of around the end. The run_interrupt_driven_hooks issue is nonexistent as well, but it still hangs; I'm guessing that's a debug tool more than an error.

1. Default
...
cpu0: on acpi0
ACPI Error (dsopcode-0350): No pointer back to NS node in buffer obj 0xc6862580 [20070320]
ACPI Exception (dswexec-0556): AE_AML_INTERNAL, While resolving operands for [OpcodeName unavailable] [2
Re: 7.1-PRERELEASE: arcmsr write performance problem
>> The next thing I am doing is going to be removing the QUOTA feature
>> to see if this has any bearing on this problem. It does not appear to
>> be even writing at a heavy load as you can see (almost nothing) but
>> the processes are mostly in UFS when it spirals out of control.
>
> What's strange is that the output from gstat shows the disks hardly
> active at all. Yet why is the syncer at 100%? Do you have write
> caching disabled on the array? What does the raw throughput look
> like to the disks? e.g. if you try a simple dd if=/dev/zero
> of=/var/tmp bs=1024k count=1000?
>
>> I moved the processing of amavisd-new into a memory drive to at least
>> take that off the IO and this seems to have helped a bit. There is
>> not a lot of mail going through the system but every little bit
>> helps. I suspect this is one other reason that is bringing the
>> problem to the forefront as amavisd-new can use the disks a bit to
>> process each e-mail.
>
> Is the high load average simply a function of processes blocking on
> network IO? Our av/spam scanners for example show a high load avg
> because there are many processes waiting on network IO to complete
> (e.g. talking to RBL lists, waiting for DCC servers to complete etc)
>
> Also, is it really related to the arcmsr driver? i.e. if you did the
> same tasks on a single IDE drive, is the performance profile going to
> be the same?
>
> ---Mike

Hi Mike,

Well, I tried to remove both the USB com port drivers and QUOTA from the kernel last night; this has not solved it, but it seems a bit more stable today. The HTTP only had a problem two times last night.

I am not sure if it is specifically related to the arcmsr driver, but unfortunately I am unable to try a single IDE setup at the moment. If I can get to the bottom of why it is locking, then it might point us in the right direction. I was told that Jan downgraded to 6.4 as she could never resolve her issue, and she worked on it for a very long time.
Write caching is enabled on the array, which was the first thing I checked; I have the battery backup installed, and I confirm it shows up in areca-cli as 100% charged. Do you think it may be hardware related even though there are no errors at all? I have checked the event log in the S5PAL, which in my experience is very sensitive to errors, as well as the event log in areca-cli. Both are error free.

With regards to the e-mail scanning waiting for RBL completion: there is usually only approximately one e-mail per minute, to give you an idea of the load, so this is not really a reliable test, and I don't see how it is an overall contributing factor, as there seem to be many ways to bring the locking forward, including, in my experience, running a dump.

What I have found is that the more load I take off the system, the better writing seems to go, so I have been doing everything I can to help with this until I can find a workable solution. I have recompiled all the ports a few times over the past month in hopes that something might get fixed if it was a port issue; all ports are as up-to-date as possible using portupgrade and tracking the ports tree.

The primary problem is, as you said above, that gstat shows very little activity but the system seems to be "stuck". The syncer is not always at 100%; it comes and goes. I grabbed that figure at one point while watching, but it did show how "little" activity there was from the reports.

Here is the output of dd on the WORKING server:

dd if=/dev/zero of=/usr/test bs=1024k count=1000
1000+0 records in
1000+0 records out
1048576000 bytes transferred in 17.874501 secs (58663232 bytes/sec)

Here is the output of dd on the one not working right but NOT "locked" right now. I need to wait for it to "lock" again before I can test this again.
dd if=/dev/zero of=/usr/test bs=1024k count=1000
1000+0 records in
1000+0 records out
1048576000 bytes transferred in 34.270080 secs (30597419 bytes/sec)

The numbers are pretty reasonable, albeit halved on one compared to the other. I did notice that the system numbers were always much higher on the non-working server when I ran the dd. This could be a coincidence, but the syncer was not seen in the top -IS list on the working server while the dd was running, whereas it was on the list on the one with the problem.

I am waiting for the system to lock again to try and see what it shows when it is locked. I suppose I will work on getting 7.0 running (a downgrade) next, to see whether I have the same problem on that version, as another clue.

Thanks again for your help so far.

Paul
Re: 7.1-PRERELEASE: arcmsr write performance problem
> What does top -S show ? Most of the load is in system. Does the
> machine in question have a rather large master.passwd file by chance ?
> (http://www.freebsd.org/cgi/query-pr.cgi?pr=75855)
> ---Mike

Thanks for your quick reply: master.passwd is only 9467 bytes (per ls -l).

top -ISM at times shows syncer at the top, but this varies and it is not always near the top:

last pid: 55084;  load averages: 17.74, 10.08, 5.58   up 0+10:19:24  15:05:23
290 processes: 50 running, 218 sleeping, 14 waiting, 8 lock
CPU: 15.4% user, 0.0% nice, 68.3% system, 3.0% interrupt, 13.2% idle
Mem: 795M Active, 3279M Inact, 492M Wired, 6116K Cache, 214M Buf, 11G Free
Swap: 8192M Total, 8192M Free

  PID USERNAME  VCSW  IVCSW  READ  WRITE  FAULT  TOTAL  PERCENT  COMMAND
   54 root         2      0     0      7      0      7  100.00%  syncer

Here is a top with it not fully locked but with high system usage:

last pid: 55468;  load averages: 9.93, 11.31, 8.99    up 0+10:32:58  15:18:57
259 processes: 19 running, 215 sleeping, 14 waiting, 11 lock
CPU: 19.1% user, 0.0% nice, 58.2% system, 1.9% interrupt, 20.8% idle
Mem: 635M Active, 3258M Inact, 481M Wired, 6856K Cache, 214M Buf, 11G Free
Swap: 8192M Total, 8192M Free

  PID USERNAME  THR PRI NICE  SIZE     RES    STATE   C   TIME   WCPU  COMMAND
   18 root        1 171 ki31    0K     16K    RUN     0 439:32 31.15%  idle: cpu0
55422 www         1 102    0  193M  59632K    RUN     5   0:26 30.96%  httpd
   12 root        1 171 ki31    0K     16K    RUN     6 522:14 28.37%  idle: cpu6
   54 root        1  20    -    0K     16K    syncer  2  81:19 28.37%  syncer
   15 root        1 171 ki31    0K     16K    RUN     3 465:15 26.56%  idle: cpu3
55411 www         1  -4    0  157M  33704K    *vnode  1   0:21 26.17%  httpd
55388 www         1  -4    0  160M  35940K    *vnode  1   0:14 26.17%  httpd
   13 root        1 171 ki31    0K     16K    RUN     5 509:35 25.98%  idle: cpu5
   11 root        1 171 ki31    0K     16K    RUN     7 525:53 25.88%  idle: cpu7
   14 root        1 171 ki31    0K     16K    RUN     4 491:32 25.29%  idle: cpu4
55453 www         1 101    0  157M  33608K    CPU7    7   0:08 24.76%  httpd
55365 www         1  -4    0  157M  33408K    ufs     3   0:23 24.56%  httpd
55440 www         1  69    0  154M  31180K    CPU2    7   0:09 24.37%  httpd
55412 www         1  -4    0  153M  30156K    *vnode  3   0:07 23.97%  httpd
   16 root        1 171 ki31    0K     16K    CPU2    2 444:38 23.88%  idle: cpu2
55376 www         1  -4    0  158M  34776K    *vnode  0   0:26 23.88%  httpd
55459 www         1  -4    0  145M  23920K    *vnode  1   0:07 23.49%  httpd
55467 www         1  70    0  154M  31056K    *vnode  7   0:09 22.66%  httpd
   17 root        1 171 ki31    0K     16K    CPU1    1 443:27 20.90%  idle: cpu1
55374 www         1  -4    0  146M  25312K    *vnode  7   0:09 13.38%  httpd
55418 www         1  -4    0  145M  24192K    ufs     0   0:18 12.89%  httpd
55400 www         1  58    0  146M  25460K    select  5   0:20 12.79%  httpd
55443 www         1  -4    0  148M  25788K    *vnode  1   0:03 12.50%  httpd
55410 www         1  -4    0  147M  25700K    *vnode  7   0:05 12.26%  httpd
55438 www         1  -4    0  145M  24148K    RUN     4   0:08 11.96%  httpd
   21 root        1 -44    -    0K     16K    WAIT    0  34:45 11.77%  swi1: net
55451 www         1  -4    0  144M  22704K    *vnode  7   0:02 10.99%  httpd
55447 www         1  60    0  145M  24008K    select  2   0:07 10.50%  httpd
55406 www         1  53    0  146M  25324K    select  2   0:19  9.77%  httpd
55433 www         1  49    0  146M  24912K    select  2   0:11  8.06%  httpd
55448 www         1  52    0  144M  22972K    RUN     6   0:03  8.06%  httpd
55383 www         1  45    0  145M  24284K    select  2   0:12  7.96%  httpd
55446 www         1  44    0  146M  24988K    select  3   0:09  7.96%  httpd
55430 www         1   4    0  145M  24136K    kqread  0   0:03  6.69%  httpd
55432 www         1  20    0  146M  24324K    lockf   3   0:04  6.05%  httpd
55464 www         1  -4    0  145M  23464K    RUN     0   0:02  5.66%  httpd
55424 www         1  45    0  146M  24876K    select  6   0:08  3.66%  httpd
55442 www         1  47    0  145M  23852K    select  3   0:03  3.56%  httpd
55373 www         1  48    0  146M  25364K    select  5   0:07  3.17%  httpd
55375 www         1  46    0  146M  25420K    select  2   0:15  3.08%  httpd
   19 root        1 -32    -    0K     16K    *Giant  2   9:02  2.98%  swi4: clock sio
48518 wusage      1  46    0 10424K   2632K   select  4   2:50  2.78%  wusage
 1490 mysql      97   4   -5  402M    184M    sbwait  4   0:29  2.78%  mysqld
55372 www         1  47    0  144M  22136K    CPU6    0   0:01  2.59%  httpd
55437 root        1 -32    0 9136K   2940K    CPU4    4   0:01  2.59%  top
55387 www         1  45    0  144M  22196K    CPU5    1   0:02
Heimdal Breakage
After installing 6.4-RELEASE on my secondary KDC, I decided to test the secondary KDC. When trying kinit I get this error:

j...@w17 ~ $ kinit
j...@stradamotorsports.com's Password:
kinit: krb5_get_init_creds: Key size is incompatible with encryption type

One post on the net says that Heimdal changed the key format to add some padding or somesuch. I haven't gone about fixing the problem yet, so maybe that post is not applicable to FreeBSD. Just the same, I thought I would let folks know that their key databases are probably not forward compatible with 6.4-RELEASE. This would be a pretty big deal for some users. It would be nice to see this in UPDATING.

Thanks,
Jason
Re: more zfs/nfs panics
> > it just seems to delay the panic though, it smells like some memory leak ...
>
> Well, the canonical fix seems to be to DECREASE vfs.zfs.arc_max to
> something like 100M and keep decreasing until it works.

More info here: http://wiki.freebsd.org/ZFSQuickStartGuide

Once you tune, your problems will go away. The default install should not need tuning (so people stop posting this panic problem) ... will that be fixed in the next stable release?

Rudy