Re: PIC32 build fail

2021-01-30 Thread Brennan Ashton
If we can get a build secret set up by infra, we can probably just make a
pipeline that we can trigger to build and upload the CI toolchain to a blob
store like S3.  I can file a ticket.

On Fri, Jan 29, 2021, 9:30 PM Xiang Xiao  wrote:

> How about we generate the toolchain binary and publish it on some github
> repo?
>
> > -Original Message-
> > From: Brennan Ashton 
> > Sent: Friday, January 29, 2021 11:49 PM
> > To: dev@nuttx.apache.org
> > Subject: Re: PIC32 build fail
> >
> > On Fri, Jan 29, 2021, 4:39 AM Barbiani  wrote:
> >
> > > I was able to use crosstool-ng to build gcc 10.2 (static) and NuttX.
> > >
> > > I see that the automated test downloads a pre-built toolchain. Do I
> > > create an archive of the toolchain and upload somewhere?
> > >
> > >
> > I think it's helpful to know that it works with the latest toolchain, but
> > we should hold off on updating the ones in CI until we have a better
> > story for hosting them instead of requiring the image to build
> > gcc/binutils. We are doing this for RX and it adds quite a bit of time.
> >
> > Did you just use this configuration?
> >
> > https://www.github.com/crosstool-ng/crosstool-ng/tree/master/samples/mips-unknown-elf/crosstool.config
> >
> > Thanks!
> > --Brennan
>
>


RE: timerfd

2021-01-30 Thread Xiang Xiao
timerfd is a very nice feature, but is it better to call the wd_* API rather
than the timer_* API?

> -Original Message-
> From: Matias N. 
> Sent: Saturday, January 30, 2021 10:01 AM
> To: dev@nuttx.apache.org
> Subject: timerfd
> 
> Hi,
> 
> I would like to implement the timerfd interface to overcome some of the
> issues around handling signals and threads and the limitation of
> SIGEV_THREAD we discussed. I see eventfd is supported, and looking at the
> implementation I think it can be done relatively simply using most of the
> existing timer_* functionality. I was wondering if anyone already did this
> work outside of mainline or had any thoughts about what to consider when
> doing so.
> 
> Best,
> Matias



Re: timerfd

2021-01-30 Thread Matias N.
But that isn't a userspace API, is it? It runs the handler in interrupt context.

I also realize now that timerfd has a limitation: you will not know about the
timer expiration until you actually poll, so it loses a bit on the real-time
aspect.

Best,
Matias

On Sat, Jan 30, 2021, at 09:41, Xiang Xiao wrote:
> timerfd is a very nice feature, but is it better to call the wd_* API
> rather than the timer_* API?
> 
> > -Original Message-
> > From: Matias N. 
> > Sent: Saturday, January 30, 2021 10:01 AM
> > To: dev@nuttx.apache.org
> > Subject: timerfd
> > 
> > Hi,
> > 
> > I would like to implement the timerfd interface to overcome some of the
> > issues around handling signals and threads and the limitation of
> > SIGEV_THREAD we discussed. I see eventfd is supported, and looking at the
> > implementation I think it can be done relatively simply using most of the
> > existing timer_* functionality. I was wondering if anyone already did
> > this work outside of mainline or had any thoughts about what to consider
> > when doing so.
> > 
> > Best,
> > Matias
> 
> 


Re: timerfd

2021-01-30 Thread Xiang Xiao
On Sat, Jan 30, 2021 at 7:00 AM Matias N.  wrote:

> But that isn't a userspace API, is it? It runs the handler in interrupt
> context.
>
>
The core logic for timerfd is part of the kernel (actually, timerfd/eventfd
implements file_operations like a normal driver). The libc timerfd API is
just a simple wrapper around ioctl.


> I also realize now that timerfd has a limitation: you will not know about
> the timer expiration until you actually poll, so it loses a bit on the
> real-time aspect.
>

Yes, you have to poll timerfd to know when the timer expires. Basically,
there are two programming styles:

   1. The push mode: the library/kernel calls your callback once something
   happens.
   2. The pull mode: your thread blocks in a poll/select or read/write call
   and wakes up when something happens.


> Best,
> Matias
>
> On Sat, Jan 30, 2021, at 09:41, Xiang Xiao wrote:
> > timerfd is a very nice feature, but is it better to call wd_* api than
> timer_ api?
> >
> > > -Original Message-
> > > From: Matias N. 
> > > Sent: Saturday, January 30, 2021 10:01 AM
> > > To: dev@nuttx.apache.org
> > > Subject: timerfd
> > >
> > > Hi,
> > >
> > > I would like to implement timerfd interface to overcome some of the
> issues around handling signals and threads and the limitation
> > of
> > > SIGEV_THREAD we discussed. I see eventfd is supported and looking at
> the implementation I think it can be done relatively simple
> > using
> > > most of the existing timer_* functionality. I was wondering if anyone
> already did this work outside of mainline or had any
> > thoughts about
> > > what to consider when doing so.
> > >
> > > Best,
> > > Matias
> >
> >
>


Re: timerfd

2021-01-30 Thread Matias N.
On Sat, Jan 30, 2021, at 12:26, Xiang Xiao wrote:
> On Sat, Jan 30, 2021 at 7:00 AM Matias N.  wrote:
> 
> > But that isn't a userspace API, is it? It runs the handler in interrupt
> > context.
> >
> >
> The core logic for timerfd is part of the kernel (actually, timerfd/eventfd
> implements file_operations like a normal driver). The libc timerfd API is
> just a simple wrapper around ioctl.

I was referring to the watchdog you suggested.

Best,
Matias

Re: limitation in SIGEV_THREAD?

2021-01-30 Thread Matias N.
I'm thinking again about this. Why wouldn't it be possible to make functions
using SIGEV_THREAD (such as timer_settime) create a pthread behind
the scenes (only the first time a SIGEV_THREAD is set up)? The underlying
watchdog would go to a handler that posts a semaphore/condition variable
that the helper thread is waiting on. When this thread unblocks, it calls the
user handler directly.

The thread would be created per-process in KERNEL mode, so that shouldn't
be a problem (inside same address space as user handler). I suspect the
unblocking of the thread should also be possible in multi-process somehow
(named semaphore?).

This is essentially what I'm doing myself around a call to timer_settime.

Best,
Matias

On Wed, Jan 27, 2021, at 15:26, Gregory Nutt wrote:
> 
> > Perhaps you could use a pool of application threads as is done with 
> > the kernel threads for the low-priority work queue.  So you could have 
> > a small number of threads that service all tasks.  When a user-space 
> > thread is needed, it could be removed from the pool and be assigned to 
> > the task to run the event.  When the event processing completes, the 
> > thread is returned to the pool until it is again needed.  Tasks could 
> > wait for availability if there are no available threads in the pool. 
> Nevermind!  This would not work in KERNEL mode.  In that case, each task 
> (now better called a process) has its own separate protected address 
> environment and threads could never be shared across processes.  It 
> would work fine in FLAT and PROTECTED modes where all tasks share the 
> same address space.
> 


Re: limitation in SIGEV_THREAD?

2021-01-30 Thread Gregory Nutt

I'm thinking again about this. Why wouldn't it be possible to make functions
using SIGEV_THREAD (such as timer_settime) create a pthread behind
the scenes (only the first time a SIGEV_THREAD is set up)? The underlying
watchdog would go to a handler that posts a semaphore/condition variable
that the helper thread is waiting on. When this thread unblocks, it calls the
user handler directly.

The thread would be created per-process in KERNEL mode, so that shouldn't
be a problem (inside same address space as user handler). I suspect the
unblocking of the thread should also be possible in multi-process somehow
(named semaphore?).

This is essentially what I'm doing myself around a call to timer_settime.

Best,
Matias


I think a proper approach would be to see how Linux/GLIBC accomplish this.  I 
think that would give you some ideas for proper implementation.



Re: limitation in SIGEV_THREAD?

2021-01-30 Thread Brennan Ashton
On Sat, Jan 30, 2021, 1:58 PM Gregory Nutt  wrote:

> > I'm thinking again about this. Why wouldn't it be possible to make
> > functions using SIGEV_THREAD (such as timer_settime) create a pthread
> > behind the scenes (only the first time a SIGEV_THREAD is set up)? The
> > underlying watchdog would go to a handler that posts a
> > semaphore/condition variable that the helper thread is waiting on. When
> > this thread unblocks, it calls the user handler directly.
> >
> > The thread would be created per-process in KERNEL mode, so that shouldn't
> > be a problem (inside same address space as user handler). I suspect the
> > unblocking of the thread should also be possible in multi-process somehow
> > (named semaphore?).
> >
> > This is essentially what I'm doing myself around a call to timer_settime.
> >
> > Best,
> > Matias
>
> I think a proper approach would be to see how Linux/GLIBC accomplish
> this.  I think that would give you some ideas for proper implementation.
>

This is actually fairly in line with how the Musl libc implements this (at
least from a quick look).  There are a few important details in there, but
it looks quite clean.

http://git.musl-libc.org/cgit/musl/tree/src/time/timer_create.c

While Musl is usually not as fast as glibc, I find its implementations
usually more straightforward and smaller in code size.

>


Re: limitation in SIGEV_THREAD?

2021-01-30 Thread Matias N.

> This is actually fairly in line with how the Musl libc implements this (at
> least from a quick look).  There are a few important details in there, but
> it looks quite clean.
> 
> http://git.musl-libc.org/cgit/musl/tree/src/time/timer_create.c
> 
> While Musl is usually not as fast as glibc, I find its implementations
> usually more straightforward and smaller in code size.
> 
> >
> 

From what I see, glibc does a similar thing: it has one thread that expects a
signal (and waits for it with sigwait) and spawns a new thread to run the
user-supplied callback. I think that using a single thread to run the
callback should be enough (although it could easily be made an option to
spawn a new thread each time). In this scenario SIGEV_THREAD_ID amounts to
simply skipping the thread creation and letting the user specify which thread
to send the signal to.

What I'm unsure about is if using a signal is really necessary. It seems that 
using a semaphore/condition variable would be simpler and maybe faster.

Do you think we could do something like this? It would be great to make 
SIGEV_THREAD more usable.

Best,
Matias

Re: limitation in SIGEV_THREAD?

2021-01-30 Thread Brennan Ashton
On Sat, Jan 30, 2021 at 6:11 PM Matias N.  wrote:
>
>
> > This is actually fairly in line with how the Musl libc implements this (at
> > least from a quick look).  There are a few important details in there, but
> > it looks quite clean.
> >
> > http://git.musl-libc.org/cgit/musl/tree/src/time/timer_create.c
> >
> > While Musl is usually not as fast as glibc, I find its implementations
> > usually more straightforward and smaller in code size.
> >
> > >
> >
>
> From what I see, glibc does a similar thing: it has one thread that expects
> a signal (and waits for it with sigwait) and spawns a new thread to run the
> user-supplied callback. I think that using a single thread to run the
> callback should be enough (although it could easily be made an option to
> spawn a new thread each time). In this scenario SIGEV_THREAD_ID amounts to
> simply skipping the thread creation and letting the user specify which
> thread to send the signal to.
>
> What I'm unsure about is if using a signal is really necessary. It seems that 
> using a semaphore/condition variable would be simpler and maybe faster.
>

I think I must not be understanding what you are asking here.  Are you
talking about somehow replacing the sigwaitinfo with
sem_wait/sem_timedwait?  I'm not sure how that would actually be faster
or simpler; with sem_wait you will be interrupted by any signal and
then will have to figure out why and possibly wait again, with a race
condition (the timer could have expired).  But it is very possible I am
just not understanding how you would be using the semaphore.

--Brennan