can't build sysutils/apcupsd on 9.1-RC3

2012-11-13 Thread Thomas Eberhardt
I just tested FreeBSD 9.1-RC3, and I can't build sysutils/apcupsd because of

http://www.freebsd.org/cgi/query-pr.cgi?pr=169901.

Error messages are:

[...]
  LD      src/apcupsd
/usr/ports/sysutils/apcupsd/work/apcupsd-3.14.10/src/lib/libapc.a(astring.o): 
In function `astring::assign(char const*, int)':
astring.cpp:(.text+0x6c): undefined reference to `operator new[](unsigned long)'
/usr/ports/sysutils/apcupsd/work/apcupsd-3.14.10/src/lib/libapc.a(astring.o): 
In function `astring::realloc(unsigned int)':
astring.cpp:(.text+0x1de): undefined reference to `operator new[](unsigned 
long)'
/usr/ports/sysutils/apcupsd/work/apcupsd-3.14.10/src/lib/libapc.a(astring.o): 
In function `astring::vformat(char const*, __va_list_tag*)':
astring.cpp:(.text+0x48e): undefined reference to `operator new[](unsigned 
long)'
gmake[2]: *** [apcupsd] Error 1
gmake[1]: *** [all] Error 2
gmake: *** [src_DIR] Error 2
*** [do-build] Error code 1

Stop in /usr/ports/sysutils/apcupsd.
*** [build] Error code 1

Stop in /usr/ports/sysutils/apcupsd.


There is also an older report by me

http://www.freebsd.org/cgi/query-pr.cgi?pr=168631

that can be closed if this gets fixed.

Best regards,
Thomas Eberhardt

___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: nomenclature for conf files

2012-11-13 Thread ill...@gmail.com
On 12 November 2012 00:12, Zoran Kolic  wrote:
> It might sound stupid, but I'd like to know if there's
> any difference. Are those 3 line the same?
>
> WITH_KMS=YES
> WITH_KMS="YES"
> WITH_KMS=yes
>
> Best regards

In /etc/make.conf it shouldn't matter: they should all
be treated as synonyms for:
WITH_KMS=



Re: nomenclature for conf files

2012-11-13 Thread Jakub Lach
> Also, the FreeBSD makefiles and sources test all WITH_* variables with 
> .ifdef or #ifdef so the value doesn't matter and can even be empty.

This is exactly the point. But I still use 'yes', purely as a mnemonic.
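To illustrate (a sketch of the make.conf semantics described above, based on the .ifdef behaviour; not a recommended config by itself):

```make
# /etc/make.conf -- any one of these satisfies an .ifdef WITH_KMS test,
# because only the variable's existence is checked, never its value:
WITH_KMS=YES     # defined
WITH_KMS="YES"   # defined (make keeps the quotes as part of the value)
WITH_KMS=yes     # defined
WITH_KMS=        # defined, even though the value is empty
```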





Re: nomenclature for conf files

2012-11-13 Thread Kurt Buff
OK - I figured it out.

I have always followed the examples in the Handbook. I have also been
bitten more than once when I've typoed and left out one of the quote
marks.

That tends to leave a lasting impression, as it can be painful to fix,
sometimes requiring a drop into single-user mode to clean up.

Kurt


On Mon, Nov 12, 2012 at 7:51 AM, Chris Rees  wrote:

>
> On 12 Nov 2012 15:35, "Kurt Buff"  wrote:
> >
> > On Mon, Nov 12, 2012 at 12:29 AM, Chris Rees  wrote:
> > >
> > > On 12 Nov 2012 05:20, "Kurt Buff"  wrote:
> > >>
> > >> On Sun, Nov 11, 2012 at 9:12 PM, Zoran Kolic  wrote:
> > >> > It might sound stupid, but I'd like to know if there's
> > >> > any difference. Are those 3 line the same?
> > >> >
> > >> > WITH_KMS=YES
> > >> > WITH_KMS="YES"
> > >> > WITH_KMS=yes
> > >>
> > >> With regard to their use in /etc/rc.conf, no, absolutely not.
> > >>
> > >> In general, from my experience, only the second one will work.
> > >>
> > >> This might, or might not, be true for other uses, but rc.conf is
> > >> pretty picky about this.
> > >
> > > All three are fine in make.conf and rc.conf
> > >
> > > The issue with rc.conf is when people put spaces around the = sign.
> > >
> > > Chris
> >
> > This has not been my experience - but I will experiment soon and see
> > if I can verify.
>
> Anything that complains about any of those syntaxes is a bug.  Please file
> a PR if you find any examples.
>
> Chris
>
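Since rc.conf(5) is just sourced by /bin/sh, the quoting rules under discussion can be checked directly with a throwaway file (a quick sketch; paths are arbitrary):

```shell
# All three spellings from the original question are valid sh assignments:
for v in 'WITH_KMS=YES' 'WITH_KMS="YES"' 'WITH_KMS=yes'; do
    printf '%s\n' "$v" > /tmp/rc_demo.conf
    ( . /tmp/rc_demo.conf && echo "ok: $v" )
done

# Spaces around '=' break it: sh parses this as a command named WITH_KMS.
printf 'WITH_KMS = YES\n' > /tmp/rc_demo.conf
( . /tmp/rc_demo.conf ) 2>/dev/null || echo "broken: spaces around ="

# A missing closing quote is worse, since it can swallow the rest of the
# file -- the kind of typo that forces a single-user-mode cleanup.
printf 'WITH_KMS="YES\nsshd_enable="YES"\n' > /tmp/rc_demo.conf
( . /tmp/rc_demo.conf ) 2>/dev/null || echo "broken: unbalanced quote"
```

This prints three "ok" lines followed by the two "broken" lines, matching both Chris's claim and Kurt's experience.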


thread taskq / unp_gc() using 100% cpu and stalling unix socket IPC

2012-11-13 Thread Markus Gebert
Hi there

We have a pair of servers running FreeBSD 9.1-RC3 that act as a transparent layer 
7 load balancer (relayd) and POP/IMAP proxy (dovecot). Only one of them is 
active at a given time; it's a failover setup. From time to time the active one 
gets into a state in which the 'thread taskq' kernel thread uses up 100% of one 
cpu on its own, like here:


  PID USERNAME  PRI NICE   SIZERES STATE   C   TIME   WCPU COMMAND
0 root80 0K  3968K CPU12  12   9:49 100.00% 
kernel{thread taskq}


Most of the userland processes get stalled, and if the situation is not resolved, 
their network services stop responding altogether after some time. This 
affects relayd and dovecot for sure, but not sshd.

When processes are slow, they often block in states like '*unp_l' or 'pipewr' 
according to top, and overall cpu time spent in the kernel (and system load) 
grows severalfold. See [0], [1], [2] for top screenshots from when things go 
badly. Under normal circumstances load is below 5 and system time below 5%. 
Daytime load makes this situation more likely to occur. At the same time it does 
_not_ seem to be triggered by any sudden load peaks (e.g. more network 
connections), so I guess it's some kind of race condition.

Today I had the chance to get a few ddb traces of the 'thread taskq' thread while 
the problem was occurring. Every few seconds I ran a ddb script and continued. I 
got five samples; after that, ddb became unresponsive. All of them show the 
thread executing in unp_gc(), like here:


unp_gc() at unp_gc+0x81
taskqueue_run_locked() at taskqueue_run_locked+0x85
taskqueue_thread_loop() at taskqueue_thread_loop+0x46
fork_exit() at fork_exit+0x11f
fork_trampoline() at fork_trampoline+0xe

See all samples under [3].

This is consistent with some textdumps and ddb sessions I've collected. Also, the 
sysctl 'net.local.taskcount' seems to increase at a much higher rate when the 
problem occurs.
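A cheap way to quantify that would be to watch the sysctl's growth rate. A small sketch (the live pipeline in the comment is an untested assumption; the delta filter itself is plain awk):

```shell
# Print the per-sample increase of a counter stream read on stdin,
# one value per line.
deltas() {
    awk 'NR > 1 { print $1 - prev } { prev = $1 }'
}

# Intended usage on the affected box (not run here):
#   while :; do sysctl -n net.local.taskcount; sleep 1; done | deltas
printf '100\n160\n900\n' | deltas    # prints 60, then 740
```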

To me it looks like the unix socket GC is triggered way too often and/or 
running too long, which burns cpu and, worse, causes a lot of contention around 
the unp_list_lock, which in turn delays all processes relying on 
unix sockets for IPC.

I don't know why the unp_gc() is called so often and what's triggering this.

Sometimes this condition only lasts for a few seconds and then resolves itself 
before the thread taskq reaches 100% cpu usage. Sometimes it persists, renders 
dovecot/relayd unresponsive, and can only be resolved by taking load off the 
system (forcefully restarting services).

Booting a kernel with INVARIANTS and WITNESS did not yield many more clues. The 
system ends up panicking with 'soabort: so_count' after a few minutes of load, 
but the trace[4] does not seem related to unix sockets, so that might be 
another problem.

System info, boot dmesg and some configs can be found under [5].

I hope somebody can help me debug this further.


Markus



[0]
last pid: 10227;  load averages: 18.81, 13.02,  8.09
   up 0+18:14:55  13:07:53
136 processes: 4 running, 53 sleeping, 1 waiting, 78 lock
CPU 0:   5.9% user,  0.0% nice, 54.5% system,  3.5% interrupt, 36.1% idle
CPU 1:   5.9% user,  0.0% nice, 56.5% system,  3.1% interrupt, 34.5% idle
CPU 2:   4.3% user,  0.0% nice, 58.8% system,  3.9% interrupt, 32.9% idle
CPU 3:   6.3% user,  0.0% nice, 60.4% system,  2.0% interrupt, 31.4% idle
CPU 4:   3.5% user,  0.0% nice, 59.2% system,  2.7% interrupt, 34.5% idle
CPU 5:   3.9% user,  0.0% nice, 60.0% system,  1.6% interrupt, 34.5% idle
CPU 6:   4.3% user,  0.0% nice, 62.0% system,  1.2% interrupt, 32.5% idle
CPU 7:   8.6% user,  0.0% nice, 56.9% system,  0.8% interrupt, 33.7% idle
CPU 8:   5.1% user,  0.0% nice, 59.1% system,  0.0% interrupt, 35.8% idle
CPU 9:   7.1% user,  0.0% nice, 56.3% system,  0.0% interrupt, 36.6% idle
CPU 10:  4.7% user,  0.0% nice, 61.0% system,  0.0% interrupt, 34.3% idle
CPU 11:  5.5% user,  0.0% nice, 60.8% system,  0.0% interrupt, 33.7% idle
CPU 12:  6.3% user,  0.0% nice, 61.8% system,  0.4% interrupt, 31.5% idle
CPU 13:  8.3% user,  0.0% nice, 57.1% system,  0.0% interrupt, 34.6% idle
CPU 14:  9.1% user,  0.0% nice, 58.3% system,  0.0% interrupt, 32.7% idle
CPU 15:  7.9% user,  0.0% nice, 55.9% system,  0.0% interrupt, 36.2% idle
CPU 16: 13.4% user,  0.0% nice, 52.8% system,  0.0% interrupt, 33.9% idle
CPU 17:  9.8% user,  0.0% nice, 55.3% system,  0.0% interrupt, 34.9% idle
CPU 18:  5.9% user,  0.0% nice, 62.0% system,  0.0% interrupt, 32.2% idle
CPU 19:  5.1% user,  0.0% nice, 62.6% system,  0.0% interrupt, 32.3% idle
CPU 20:  5.5% user,  0.0% nice, 60.8% system,  0.0% interrupt, 33.7% idle
CPU 21: 10.6% user,  0.0% nice, 57.1% system,  0.0% interrupt, 32.3% idle
CPU 22:  5.9% user,  0.0% nice, 58.8% system,  0.0% interrupt, 35.3% idle
CPU 23:  6.3% user,  0.0% nice, 59.6% system,  0.0% interrupt, 34.1% idle
Mem: 3551M Active, 1351M Inact, 2905M Wired, 8K Cache, 7488K Buf, 85G Free
ARC: 1073M Total, 107M MRU, 828M MFU, 

Problems with ZFS's user quota.

2012-11-13 Thread Derek Kulinski
Hi everyone,

I'm having a problem using user quotas in ZFS. I think something is broken, but 
it's very possible that I'm just doing something wrong.

I have two problems actually:

1. When trying to define a quota for a filesystem that has sub-filesystems, zfs 
userspace behaves weirdly:

[chinatsu]:/tank/system# zfs userspace tank/system
TYPENAME   USED  QUOTA
POSIX User  root  9,16K   none
[chinatsu]:/tank/system# zfs set userquota@takeda=7G tank/system
[chinatsu]:/tank/system# zfs userspace tank/system
TYPENAME USED  QUOTA
POSIX User  root9,16K   none
POSIX User  takeda 7G
[chinatsu]:/tank/system# zfs set userquota@takeda=1PB tank/system
[chinatsu]:/tank/system# zfs userspace tank/system
TYPENAME USED  QUOTA
POSIX User  root9,16K   none
POSIX User  takeda 1P
[chinatsu]:/tank/system# zfs set userquota@takeda=none tank/system

Is it possible to set a quota that is also inherited by sub-filesystems? For 
example, if I have 2 filesystems under tank/system, I want them to share the 
quota, so when I set 7GB the total data use would be 7GB max (and not 7GB per 
filesystem).

2. Setting a quota works fine on a filesystem that has files owned by the given 
user, but it does not seem to be enforced (I enabled quota support in the kernel 
even though I don't believe it is necessary).

[chinatsu]:/tank/system# zfs userspace tank/system/home
TYPENAME  USED  QUOTA
[...]
POSIX User  takeda   6,06G   none
[...]
POSIX User  www  1,34G   none
[chinatsu]:/tank/system# zfs set userquota@takeda=7G tank/system/home
[chinatsu]:/tank/system# zfs userspace tank/system/home
TYPENAME  USED  QUOTA
[...]
POSIX User  takeda   6,06G 7G
[...]
POSIX User  www  1,34G   none
[chinatsu]:/tank/system# sudo su - takeda
chinatsu :: ~ » dd if=/dev/zero of=bigfile bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes transferred in 7.882992 secs (136209934 bytes/sec)
chinatsu :: ~ »
[chinatsu]:/tank/system# zfs userspace tank/system/home
TYPENAME  USED  QUOTA
[...]
POSIX User  takeda   7,06G 7G
[...]
POSIX User  www  1,34G   none
[chinatsu]:/tank/system# 

It also looks like ZFS does not allow me to set a quota in fractions (for example 
6.5GB), but I guess that's not that big of a deal.
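If fractional suffixes are rejected, one workaround sketch is to compute the quota in plain bytes, which the zfs property parser accepts as a bare number (the zfs invocation itself is shown only as a comment, untested here):

```shell
# 6.5 GiB expressed in bytes, using integer arithmetic (tenths of a GiB):
tenths_gib=65
bytes=$(( tenths_gib * 1024 * 1024 * 1024 / 10 ))
echo "$bytes"    # prints 6979321856
# zfs set userquota@takeda=$bytes tank/system/home
```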

Thank you,
Derek



Re: thread taskq / unp_gc() using 100% cpu and stalling unix socket IPC

2012-11-13 Thread Markus Gebert

On 13.11.2012, at 19:30, Markus Gebert  wrote:

> To me it looks like the unix socket GC is triggered way too often and/or 
> running too long, which uses cpu and worse, causes a lot of contention around 
> the unp_list_lock which in turn causes delays for all processes relaying on 
> unix sockets for IPC.
> 
> I don't know why the unp_gc() is called so often and what's triggering this.

I have a guess now. Dovecot and relayd both use unix sockets heavily. According 
to dtrace, uipc_detach() gets called quite often by dovecot closing unix 
sockets. Each time uipc_detach() is called, unp_gc_task is taskqueue_enqueue()d 
if fds are in flight.

in uipc_detach():
682 if (local_unp_rights)   
683 taskqueue_enqueue(taskqueue_thread, &unp_gc_task);

We use relayd in a way that keeps the source address of the client when 
connecting to the backend server (transparent load balancing). This requires 
IP_BINDANY on the socket which cannot be set by unprivileged processes, so 
relayd sends the socket fd to the parent process just to set the socket option 
and send it back. This means an fd gets transferred twice for every new backend 
connection.

So we have dovecot calling uipc_detach() often and relayd making it likely that 
fds are in flight (unp_rights > 0). With a certain amount of load this could 
cause unp_gc_task to be added to the thread taskq too often, slowing down 
everything unix-socket related by holding global locks in unp_gc().

I don't know if the slowdown can even cause a negative feedback loop at some 
point by increasing the chance of fds being in flight. That would explain why 
sometimes the condition goes away by itself and sometimes requires intervention 
(taking load away for a moment).

I'll look into a way to (dis)prove all this tomorrow. Ideas still welcome :-).


Markus




Re: thread taskq / unp_gc() using 100% cpu and stalling unix socket IPC

2012-11-13 Thread Adrian Chadd
Oh lordie, just hack the kernel to make IP_BINDANY usable by any uid,
not just root.

I was hoping that capabilities would actually be useful these days,
but apparently not. :(

Then you can stop this FD exchange nonsense and this problem should go away. :)


Adrian


On 13 November 2012 16:41, Markus Gebert  wrote:
>
> On 13.11.2012, at 19:30, Markus Gebert  wrote:
>
>> To me it looks like the unix socket GC is triggered way too often and/or 
>> running too long, which uses cpu and worse, causes a lot of contention 
>> around the unp_list_lock which in turn causes delays for all processes 
>> relaying on unix sockets for IPC.
>>
>> I don't know why the unp_gc() is called so often and what's triggering this.
>
> I have a guess now. Dovecot and relayd both use unix sockets heavily. 
> According to dtrace uipc_detach() gets called quite often by dovecot closing 
> unix sockets. Each time uipc_detach() is called unp_gc_task is 
> taskqueue_enqueue()d if fds are inflight.
>
> in uipc_detach():
> 682 if (local_unp_rights)
> 683 taskqueue_enqueue(taskqueue_thread, &unp_gc_task);
>
> We use relayd in a way that keeps the source address of the client when 
> connecting to the backend server (transparent load balancing). This requires 
> IP_BINDANY on the socket which cannot be set by unprivileged processes, so 
> relayd sends the socket fd to the parent process just to set the socket 
> option and send it back. This means an fd gets transferred twice for every 
> new backend connection.
>
> So we have dovecot calling uipc_detach() often and relayd making it likely 
> that fds are inflight (unp_rights > 0). With a certain amount of load this 
> could cause unp_gc_task to be added to the thread taskq too often, slowing 
> everything unix socket related down by holding global locks in unp_gc().
>
> I don't know if the slowdown can even cause a negative feedback loop at some 
> point by inreasing the chance of fds being inflight. This would explain why 
> sometimes the condition goes away by itself and sometimes requires 
> intervention (taking load away for a moment).
>
> I'll look into a way to (dis)prove all this tomorrow. Ideas still welcome :-).
>
>
> Markus
>
>


Compilation error while compiling FreeBSD9-STABLE with Clang

2012-11-13 Thread Alie Tan
Hi.


I got a compilation error while compiling FreeBSD 9-STABLE with Clang:
   -m32 -c /usr/src/sys/boot/i386/boot2/sio.S
clang: warning: the clang compiler does not support '-fno-unit-at-a-time'
ld -static -N --gc-sections -nostdlib -m elf_i386_fbsd -Ttext 0x2000 -o
boot2.out /usr/obj/usr/src/sys/boot/i386/boot2/../btx/lib/crt0.o boot2.o
sio.o
objcopy -S -O binary boot2.out boot2.bin
btxld -v -E 0x2000 -f bin -b
/usr/obj/usr/src/sys/boot/i386/boot2/../btx/btx/btx -l boot2.ldr  -o
boot2.ld -P 1 boot2.bin
kernel: ver=1.02 size=690 load=9000 entry=9010 map=16M pgctl=1:1
client: fmt=bin size=1575 text=0 data=0 bss=0 entry=0
output: fmt=bin size=1e05 text=200 data=1c05 org=0 entry=0
-5 bytes available
*** [boot2] Error code 1


And here is my src.conf
CFLAGS+= -O3 -fno-strict-aliasing -pipe -funroll-loops
CXXFLAGS+= -O3 -fno-strict-aliasing -pipe -funroll-loops
COPTFLAGS+= -O3 -pipe -ffast-math -funroll-loops

CC=clang
CXX=clang++
CPP=clang-cpp

WITH_CLANG="YES"
WITH_CLANG_EXTRAS="YES"
WITH_CLANG_IS_CC="YES"
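One observation on the error rather than on clang: btxld's "-5 bytes available" means boot2 overflowed its fixed-size slot by 5 bytes, and size-heavy flags like -O3/-funroll-loops in the global CFLAGS are a plausible cause. A hedged make.conf sketch (untested; the .CURDIR guard is purely illustrative) that keeps the custom flags away from the size-constrained boot code:

```make
# Apply the aggressive flags everywhere except under sys/boot, which
# must fit in a fixed number of bytes (sketch, not a verified recipe):
.if empty(.CURDIR:M*/sys/boot*)
CFLAGS+=   -O3 -fno-strict-aliasing -pipe -funroll-loops
CXXFLAGS+= -O3 -fno-strict-aliasing -pipe -funroll-loops
.endif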


Re: thread taskq / unp_gc() using 100% cpu and stalling unix socket IPC

2012-11-13 Thread Alfred Perlstein

On 11/13/12 4:41 PM, Markus Gebert wrote:

> On 13.11.2012, at 19:30, Markus Gebert  wrote:
>
>> To me it looks like the unix socket GC is triggered way too often and/or
>> running too long, which uses cpu and worse, causes a lot of contention around
>> the unp_list_lock which in turn causes delays for all processes relaying on
>> unix sockets for IPC.
>>
>> I don't know why the unp_gc() is called so often and what's triggering this.
>
> I have a guess now. Dovecot and relayd both use unix sockets heavily. According
> to dtrace uipc_detach() gets called quite often by dovecot closing unix
> sockets. Each time uipc_detach() is called unp_gc_task is taskqueue_enqueue()d
> if fds are inflight.
>
> in uipc_detach():
> 682 if (local_unp_rights)
> 683 taskqueue_enqueue(taskqueue_thread, &unp_gc_task);
>
> We use relayd in a way that keeps the source address of the client when
> connecting to the backend server (transparent load balancing). This requires
> IP_BINDANY on the socket which cannot be set by unprivileged processes, so
> relayd sends the socket fd to the parent process just to set the socket option
> and send it back. This means an fd gets transferred twice for every new backend
> connection.
>
> So we have dovecot calling uipc_detach() often and relayd making it likely that
> fds are inflight (unp_rights > 0). With a certain amount of load this could
> cause unp_gc_task to be added to the thread taskq too often, slowing everything
> unix socket related down by holding global locks in unp_gc().
>
> I don't know if the slowdown can even cause a negative feedback loop at some
> point by inreasing the chance of fds being inflight. This would explain why
> sometimes the condition goes away by itself and sometimes requires intervention
> (taking load away for a moment).
>
> I'll look into a way to (dis)prove all this tomorrow. Ideas still welcome :-).



A couple of ideas:

1) Convert the taskqueue to a callout, but only allow one to be queued 
at a time, and set the granularity.

2) I think you only need to actually run garbage collection on the 
off-chance that you pass unix file descriptors; otherwise you can get 
away with refcounting. It's hard for me to express the exact logic 
needed for this, though. I think you would need some way to simply do 
refcounting until there is a unix socket descriptor in flight, then 
switch to gc. Either that, or add a sysctl that lets an administrator 
deny passing of unix descriptors entirely and just use refcounting.


Or just use Adrian's hack. :)

-Alfred




Re: thread taskq / unp_gc() using 100% cpu and stalling unix socket IPC

2012-11-13 Thread Konstantin Belousov
On Wed, Nov 14, 2012 at 01:41:04AM +0100, Markus Gebert wrote:
> 
> On 13.11.2012, at 19:30, Markus Gebert  wrote:
> 
> > To me it looks like the unix socket GC is triggered way too often and/or 
> > running too long, which uses cpu and worse, causes a lot of contention 
> > around the unp_list_lock which in turn causes delays for all processes 
> > relaying on unix sockets for IPC.
> > 
> > I don't know why the unp_gc() is called so often and what's triggering this.
> 
> I have a guess now. Dovecot and relayd both use unix sockets heavily. 
> According to dtrace uipc_detach() gets called quite often by dovecot closing 
> unix sockets. Each time uipc_detach() is called unp_gc_task is 
> taskqueue_enqueue()d if fds are inflight.
> 
> in uipc_detach():
> 682   if (local_unp_rights)   
> 683   taskqueue_enqueue(taskqueue_thread, &unp_gc_task);
> 
> We use relayd in a way that keeps the source address of the client when 
> connecting to the backend server (transparent load balancing). This requires 
> IP_BINDANY on the socket which cannot be set by unprivileged processes, so 
> relayd sends the socket fd to the parent process just to set the socket 
> option and send it back. This means an fd gets transferred twice for every 
> new backend connection.
> 
> So we have dovecot calling uipc_detach() often and relayd making it likely 
> that fds are inflight (unp_rights > 0). With a certain amount of load this 
> could cause unp_gc_task to be added to the thread taskq too often, slowing 
> everything unix socket related down by holding global locks in unp_gc().
> 
> I don't know if the slowdown can even cause a negative feedback loop at some 
> point by inreasing the chance of fds being inflight. This would explain why 
> sometimes the condition goes away by itself and sometimes requires 
> intervention (taking load away for a moment).
> 
> I'll look into a way to (dis)prove all this tomorrow. Ideas still welcome :-).
> 

If the only issue is indeed too aggressive scheduling of the taskqueue,
then postponing it to the next tick could do it. The patch below
tries to schedule the gc taskqueue at the next tick if it is not yet
scheduled. Could you try it?

diff --git a/sys/kern/subr_taskqueue.c b/sys/kern/subr_taskqueue.c
index 90c6ffc..3bf62f9 100644
--- a/sys/kern/subr_taskqueue.c
+++ b/sys/kern/subr_taskqueue.c
@@ -252,9 +252,13 @@ taskqueue_enqueue_timeout(struct taskqueue *queue,
} else {
queue->tq_callouts++;
timeout_task->f |= DT_CALLOUT_ARMED;
+   if (ticks < 0)
+   ticks = -ticks; /* Ignore overflow. */
+   }
+   if (ticks > 0) {
+   callout_reset(&timeout_task->c, ticks,
+   taskqueue_timeout_func, timeout_task);
}
-   callout_reset(&timeout_task->c, ticks, taskqueue_timeout_func,
-   timeout_task);
}
TQ_UNLOCK(queue);
return (res);
diff --git a/sys/kern/uipc_usrreq.c b/sys/kern/uipc_usrreq.c
index cc5360f..ed92e90 100644
--- a/sys/kern/uipc_usrreq.c
+++ b/sys/kern/uipc_usrreq.c
@@ -131,7 +131,7 @@ static const struct sockaddr sun_noname = { sizeof(sun_noname), AF_LOCAL };
  * reentrance in the UNIX domain socket, file descriptor, and socket layer
  * code.  See unp_gc() for a full description.
  */
-static struct task unp_gc_task;
+static struct timeout_task unp_gc_task;
 
 /*
  * The close of unix domain sockets attached as SCM_RIGHTS is
@@ -672,7 +672,7 @@ uipc_detach(struct socket *so)
if (vp)
vrele(vp);
if (local_unp_rights)
-   taskqueue_enqueue(taskqueue_thread, &unp_gc_task);
+   taskqueue_enqueue_timeout(taskqueue_thread, &unp_gc_task, -1);
 }
 
 static int
@@ -1783,7 +1783,7 @@ unp_init(void)
LIST_INIT(&unp_shead);
LIST_INIT(&unp_sphead);
SLIST_INIT(&unp_defers);
-   TASK_INIT(&unp_gc_task, 0, unp_gc, NULL);
+   TIMEOUT_TASK_INIT(taskqueue_thread, &unp_gc_task, 0, unp_gc, NULL);
TASK_INIT(&unp_defer_task, 0, unp_process_defers, NULL);
UNP_LINK_LOCK_INIT();
UNP_LIST_LOCK_INIT();

