Off-topic: Rare RTOS joke

2023-04-17 Thread Sebastien Lorquet
Originating here: 
https://mastodon.social/@marnanel@queer.party/110192469727439582


Copied for archival purposes:


Fun fact: the Jupiter Icy Moons Explorer can't use a real-time operating 
system.


This is because it's Io bound.


That's all for me :-)

Sebastien



RE: [VOTE] Apache NuttX 12.1.0 RC0 release

2023-04-17 Thread alin.jerpe...@sony.com
With my +1 I am closing the vote 

Thanks for supporting this release

Best regards
Alin


-Original Message-
From: Tomek CEDRO  
Sent: den 13 april 2023 13:38
To: dev@nuttx.apache.org
Subject: Re: [VOTE] Apache NuttX 12.1.0 RC0 release

On Wed, Apr 12, 2023 at 8:49 AM alin.jerpe...@sony.com wrote:
> @Tomek
> We need +1 or -1
> Does the "ERROR undefined symbol: backtrace " error appear on master?

Sorry for the delay, Alin. This also happens for me in 12.0 and on
master, so it is probably a local setup problem :-)

+1 :-)

--
CeDeROM, SQ7MHZ, http://www.tomek.cedro.info 


Re: V NuttX International Workshop

2023-04-17 Thread Roberto Bucher

Hi Alan

my idea is to present my contributions to microros-NuttX, but after my
retirement from my professor position at SUPSI (31.08.2023!), it will be
quite difficult for me to be in Brazil. Would an online presentation be
possible?


Best regards

Roberto

On 4/16/23 16:35, Alan C. Assis wrote:

On 4/16/23, Nathan Hartman  wrote:
...

Perhaps the simplest low-tech way is to videotape the presentations and
post the videos online afterwards.

In fact, not only will this make it possible for people who can't
attend to see the event, it will also let people who do attend refer
back to the presentations, and the videos can be added to our online
video repertoire for people who discover NuttX in the future.

So, if it is possible, I hope the presentations can be videotaped and
put online.


Yes, this is an option, but if we can do a live stream to let people
participate as in the previous Online Workshop, it will be awesome! :-)

Tiago told me that Unicamp has some equipment to do that; let's see if
we can use it.

BR,

Alan




RE: Hardcoded Pin mux, pad control and Drive Strength (AKA Slew-rate and Frequency) Settings #1570

2023-04-17 Thread David Sidrane
The PR to make these changes is here
https://github.com/apache/nuttx/pull/8992

There is a tool that will provide you with the changes that are needed
in board.h.

Two boards have been converted to provide examples of the conversion of
the different pinmap types (the STM32F1 series is different from all the
other STM32s):

board olimexino-stm32:Rework board.h not use CONFIG_STM32_USE_LEGACY_PINMAP

board nucleo-h743zi:Rework board.h not use CONFIG_STM32_USE_LEGACY_PINMAP

Please have a look at the PR and test any out-of-tree boards you may
have so that we can bring this in soon.

Thank you,

David
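
For readers not following the PR, the shape of the board.h change is that the chip pinmap stops encoding a speed and the board selects it explicitly. A hypothetical fragment, following the _0 suffix scheme from the proposal (the exact macro names here are illustrative, not copied from the PR):

```c
/* Hypothetical board.h fragment (macro names illustrative, not copied
 * from the PR): the chip pinmap now provides speed-free, suffixed
 * definitions such as GPIO_SDMMC1_CK_0, and the board picks the drive
 * speed explicitly. */

#define GPIO_SDMMC1_CK   (GPIO_SDMMC1_CK_0  | GPIO_SPEED_50MHz)
#define GPIO_SDMMC1_CMD  (GPIO_SDMMC1_CMD_0 | GPIO_SPEED_50MHz)
#define GPIO_SDMMC1_D0   (GPIO_SDMMC1_D0_0  | GPIO_SPEED_50MHz)
```

The two converted boards in the PR should show the real names for each pinmap type.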

-Original Message-
From: David Sidrane 
Sent: Wednesday, April 12, 2023 2:49 AM
To: 'dev@nuttx.apache.org' 
Subject: RE: Hardcoded Pin mux, pad control and Drive Strength (AKA
Slew-rate and Frequency) Settings #1570

Nathan, no worries. I ended up doing the STM32G families yesterday.

David



-Original Message-
From: Nathan Hartman 
Sent: Tuesday, April 11, 2023 10:40 PM
To: dev@nuttx.apache.org
Subject: Re: Hardcoded Pin mux, pad control and Drive Strength (AKA
Slew-rate and Frequency) Settings #1570

@davids5, I saw the request on GitHub for help with the STM32G families but
unfortunately something has come up and I won't be able to work on it this
week. Hopefully someone else can volunteer, otherwise I'll try to help next
week...

Thanks,
Nathan

On Tue, Apr 11, 2023 at 7:58 AM David Sidrane 
wrote:

> @slorquet Please have a look at #8992. Let me know if it addresses all
> the concerns you have.
>
> -Original Message-
> From: Sebastien Lorquet 
> Sent: Friday, April 7, 2023 9:58 AM
> To: dev@nuttx.apache.org
> Subject: Re: Hardcoded Pin mux, pad control and Drive Strength (AKA
> Slew-rate and Frequency) Settings #1570
>
> Thanks for the notification.
>
> Your proposal is mostly OK for me, I hope others will send reactions
> too. I have just one concern.
>
>
> If I attempt to rephrase the proposal: starting from some future
> commit, stm32h7 GPIO definitions will not include speed indications
> anymore, and these will have to be added manually in board.h, but ONLY
> if LEGACY_PINMAP is not set?
>
>
> Here is my concern: what will happen if a user (me, probably) builds
> NuttX with this new commit from a full stored defconfig, but does not
> regenerate their config prior to rebuilding? The LEGACY_PINMAP setting
> will not be present when building in that case.
>
> Can we force a config update before starting the build, so that the
> LEGACY_PINMAP setting is set to Y automatically in all cases?
>
>
> Also, this has to be documented very clearly, not just the official
> release notes for the next release!
>
> Additionally, if LEGACY_PINMAP is set in the user config, maybe we can
> add a compile-time warning in stm32h7/stm32_gpio.c that in the future
> users will be required to update their board.h and, once done, disable
> LEGACY_PINMAP?
>
> Sebastien
>
>
> Le 07/04/2023 à 15:34, David Sidrane a écrit :
> > Opening the discussion for this issue on the list. See
> > https://github.com/apache/nuttx/issues/1570
> >
> >
> >
> > I would like to get feedback on the approach and see if we can move
> > forward on this.
> >
> >
> >
> >
> >
> > While some solutions were discussed in
> >
> > - Revert "stm32h7 sdmmc: set SDMMC_CK pin to high speed (50 MHz)
> > mode."
> >  #5012 
> >
> > I would like to propose a solution for this issue as a request for
> > comment:
> >
> > 1. That will not affect any existing boards
> > 2. Will allow us to fix the issues without forcing massive changes.
> > 3. Eventually after N more releases of NuttX deprecate the solution.
> >
> > Steps to get there:
> >
> > 1. Kconfig for all affected arches will have
> >STM32xxx_USE_LEGACY_PINMAP set to yes as a default.
> > 2. Rework the top-level pinmap files, e.g. hardware/stm32_pinmap.h.
> > 3. The current pinmap files will be renamed with _legacy, e.g.
> >hardware/stm32h7x3xx_pinmap_legacy.h.
> > 4. Rework chip-specific files, removing speeds and adding _0 to the
> >previously non-selectable pins that had speeds.
> > 5. Rework chip-specific files, adding _0 to the previously
> >non-selectable pins.
> >
> > The hardware/stm32_pinmap.h will have the following structure:
> >
> > #if defined(STM32H7_USE_LEGACY_PINMAP)
> > #  if defined(CONFIG_STM32H7_STM32H7X3XX)
> > #    include "hardware/stm32h7x3xx_pinmap_legacy.h"
> > #  elif defined(CONFIG_STM32H7_STM32H7X7XX)
> > #    include "hardware/stm32h7x3xx_pinmap_legacy.h"
> > #  else
> > #    error "Unsupported STM32 H7 Legacy Pin map"
> > #  endif
> > #else
> > #  if defined(CONFIG_STM32H7_STM32H7X3XX)
> > #    include "hardware/stm32h7x3xx_pinmap.h"
> > #  elif defined(CONFIG_STM32H7_STM32H7X7XX)
> > #    include "hardware/stm32h7x3xx_pinmap.h"
> > #  else
> > #    error "Unsupported STM32 H7 Pin map"
> > #  endif
> > #endif
> >
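
Sebastien's compile-time warning suggestion could be sketched as follows (the option name follows the thread's naming; the exact placement in stm32h7/stm32_gpio.c and the message text are assumptions, not taken from the actual PR):

```c
/* Sketch of the deprecation warning discussed in this thread; emitted
 * once per build when the legacy pinmap is still selected. */

#ifdef CONFIG_STM32H7_USE_LEGACY_PINMAP
#  warning "Legacy pinmaps are deprecated: add pin speeds to board.h, then disable CONFIG_STM32H7_USE_LEGACY_PINMAP"
#endif
```

This would give out-of-tree board maintainers a visible nudge on every build during the N-release deprecation window.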

[RESULT] Release Apache NuttX 12.1.0 [RC0]

2023-04-17 Thread Alin Jerpelea
Hi,

The vote closes now as over 72 hours have passed. The vote PASSES with
4 (+4 binding) votes from the PPMC,
1 (+0 non-binding) vote from the developer community,
No further +1, 0 or -1 votes.

The vote thread:
[1] https://lists.apache.org/list.html?dev@nuttx.apache.org

Thanks,
Alin Jerpelea


Re: V NuttX International Workshop

2023-04-17 Thread Alan C. Assis
Hi Roberto,

Yes, we will find some way to allow remote speakers, but the priority
will be live presentations.

Since you are already a NuttX contributor, maybe we can find some way
to bring you here with Apache's help, as Nathan suggested.

BR,

Alan

On 4/17/23, Roberto Bucher  wrote:
> Hi Alan
>
> my idea is to present my contributions to microros-NuttX, but after my
> retirement from my professor position at SUPSI (31.08.2023!), it will be
> quite difficult for me to be in Brazil. Would an online presentation be
> possible?
>
> Best regards
>
> Roberto
>
> On 4/16/23 16:35, Alan C. Assis wrote:
>> On 4/16/23, Nathan Hartman  wrote:
>> ...
>>> Perhaps the simplest low-tech way is to video tape the presentations and
>>> post the videos online afterwards.
>>>
>>> In fact, not only will this make it possible to see the event for people
>>> who can't attend, but it will also become possible to refer back to the
>>> presentations for people who do attend, and can be added to our online
>>> video repertoire for people who discover NuttX in the future.
>>>
>>> So, if it is possible , I hope the presentations can be video taped and
>>> put
>>> online.
>>>
>> Yes, this is an option, but if we can do a live stream to let people
>> participate as in the previous Online Workshop, it will be awesome! :-)
>>
>> Tiago told me that Unicamp has some equipment to do that; let's see if
>> we can use it.
>>
>> BR,
>>
>> Alan
>
>


Re: V NuttX International Workshop

2023-04-17 Thread Tomek CEDRO
On Sun, Apr 16, 2023, 19:12 Alin Jerpelea wrote:

> 1 laptop streaming and at least 1 online moderator would bridge the gap
> between physical and online.
> I offer myself to be one of the moderators.
>
> Best regards
> Alin


count me in :-)

--
CeDeROM, SQ7MHZ, http://www.tomek.cedro.info


Re: [Breaking change] Move nxmutex to sched

2023-04-17 Thread Gregory Nutt
Linux uses functions like copy_to_user() and copy_from_user() to get
information to/from user space.


But I think there is an easier way in NuttX.  All user memory comes from
a pool of pages that are also mapped in kernel space.  I think that is
true for all architectures.  And there should be a function to convert a
user virtual address to a physical address.  I am not sure what all is
in place.


Couldn't accessing user memory through the kernel address alias avoid
the problem you describe?  Of course, you would have to be careful at
page boundaries because contiguous virtual pages may not be physically
contiguous.


On 4/14/2023 6:18 AM, Jukka Laitinen wrote:

Hi,

I am not sure whether it is necessary to separate mutex and semaphore 
(although I do see the performance gain that it would give for mutex), 
but there is another related topic I would like to raise.


Currently, the semaphores don't work (at all) for CONFIG_BUILD_KERNEL.
The simple reason is that the semaphores are allocated from the
user-mapped memory, which is not always available to the kernel while
scheduling or in interrupts. At the time when it is needed, another
memory map may be active in the MMU.


There is also an issue with performance; every semaphore access needs
to go to the kernel through a syscall, although in principle the
semaphore counter handling alone doesn't need that if the compiler and
hardware have the necessary atomic support.


We are especially interested in having real-time behaviour (priority 
based scheduling, priority inheritance...) AND running 
CONFIG_BUILD_KERNEL. We have used some methods to circumvent the 
issue, but for those I am not going into details as we don't have a 
publishable implementation ready.


A tempting way to fix the problem (which we didn't try out yet) would 
be separating the semaphores in two parts, kernel side structure and 
the user side structure. Something that zyfeier also did with the 
"futex" linux-like implementation. But, also this kind of 
implementation should be real-time - so when there is access to the 
semaphore via syscall (e.g. when the semaphore blocks), or when 
scheduling, the kernel must have O(1) access to the kernel side 
structure - no hashing / allocating etc. at runtime.


So to summarize, for CONFIG_BUILD_KERNEL the semaphores could 
*perhaps* work like this (this is not yet tried out, so please forgive 
me if something is forgotten):
- User-side semaphore handle would have the counter and a direct 
pointer (handle) to the kernel side structure (which can be passed to 
kernel in syscall).
- Kernel side structure would have the needed wait queue and sem 
holder structures (and flags?)
- Kernel side structure would be allocated at sem_init (AND if it was 
not initialized, allocate it at the time when it is needed?). To 
achieve real-time behaviour one should just call sem_init properly at 
startup of the application.
- Kernel side structures would be listed in the tcb and cleaned up at
task_group exit. Also, some hard limit/management is needed for how
much kernel memory one process can consume from the kernel heap.
- Counter manipulation can be handled directly in libc in case 
compiler supports proper atomic operations, or syscall to kernel when 
there is no support available (this would be just performance 
optimization - next phase)


Whether it is feasible to do this only for CONFIG_BUILD_KERNEL, or as a
common implementation for all build modes, I haven't thought through
yet. I am also not sure whether the re-design of the semaphore could
also lead to better wrapping of it for mutex use, but this is also
possible. In that case it could *maybe* solve the performance issue
zyfeier tried to tackle.


This is just one idea, but somehow the problem of not working 
semaphores in CONFIG_BUILD_KERNEL should be tackled. I wonder if this 
is something we should experiment with? If someone is interested in 
such an experiment, please let me know. Or if someone is interested in 
doing this experiment, please let me know as well, so we don't end up 
doing duplicate work :)


Br,
Jukka

P.S. I think that in the current implementation the nxmutex code is
inlined everywhere, increasing code size. Not a huge issue for me, but
the increase in code size should be managed.


On 7.4.2023 5.18, zyfeier wrote:


Thank you very much for the example you provided. What I want to
point out is that this is not just about "just delete / replace what
is already out there working fine". Due to the multi-holder handling of
the counting semaphore, the performance of the mutex is much worse than
in other RTOSes (with a performance gap of 10%), but these operations
are not necessary for a mutex. That's why there is an idea to separate
the mutex and semaphore.


However, if everyone thinks that separating the mutex and semaphore 
is a bad idea, then we need to think of other methods. Do you have 
any better methods to offer?


Sent from Mail for Windows

Re: [Breaking change] Move nxmutex to sched

2023-04-17 Thread Ville Juven
Hi all.

Greg, yes, the page pool is virtually addressable by the kernel, but as
you said, the page boundaries are an issue. I am in fact exploring this
route right now, but Jukka's suggestion is also a valid topic to
discuss; what I'm doing does not exclude what Jukka and zyfeier are
proposing.

In order to get cross page access I need to implement a dynamic vma area
for the kernel. This requires a bit of infrastructure but I already have
most of it up and running. What I have now is a mechanism to obtain the
kernel addressable pointer for the semaphore and this really does fix the
semaphore issue. But as you mentioned, the page boundaries are a problem,
and getting that to work is not trivial, but doable.

This "dynamic kernel mapping"-API is generally useful e.g. for I/O remap,
so I will implement it regardless.

However I would still like to discuss the option to split the semaphore
structure, as it avoids doing a page directory walk every time the
semaphore is accessed.

Btw, the semaphore memory must be mapped when calling sem_wait(),
because tcb->waitobj is used by the scheduler, and using dynamic
mappings or put/get_user() for that (they are what they say: they COPY
data rather than refer to it) would simply destroy the scheduling
performance. I also thought that mqueues would need a similar fix, but
I think I'm wrong on that one: the mqueue memory is kernel memory and
the user accesses it only via a file descriptor handle.

Br,
Ville Juven / pussuw on github

On Mon, Apr 17, 2023 at 6:50 PM Gregory Nutt  wrote:

> Linux uses functions like copy_to_user() and copy_from_user() to get
> information to/from user space.
>
> But I think there is an easier way in NuttX.  All user memory comes from
> a pool of pages that are also mapped in kernel space.  I think that is
> true for all architectures.  And there should be a function to convert a
> user virtual address to a physical address.  I am not sure what all is
> in place.
>
> Couldn't accessing user memory through the kernel address alias avoid
> the problem you describe?  Of course, you would have to be careful at
> page boundaries because contiguous virtual pages may not be physically
> contiguous.
>
> On 4/14/2023 6:18 AM, Jukka Laitinen wrote:
> > Hi,
> >
> > I am not sure whether it is necessary to separate mutex and semaphore
> > (although I do see the performance gain that it would give for mutex),
> > but there is another related topic I would like to raise.
> >
> > Currently, the semaphores don't work (at all) for CONFIG_BUILD_KERNEL.
> > The simple reason is that the semaphores are allocated from the
> > user-mapped memory, which is not always available for the kernel while
> > scheduling or in interrupts. At the time when it is needed, there may
> > be another memory map active for mmu.
> >
> > There is also an issue with performance; every semaphore access needs
> > to go to the kernel through syscall, although in principle the
> > semaphore counter handling alone doesn't need that if the compiler &
> > hw has the necessary atomic support.
> >
> > We are especially interested in having real-time behaviour (priority
> > based scheduling, priority inheritance...) AND running
> > CONFIG_BUILD_KERNEL. We have used some methods to circumvent the
> > issue, but for those I am not going into details as we don't have a
> > publishable implementation ready.
> >
> > A tempting way to fix the problem (which we didn't try out yet) would
> > be separating the semaphores in two parts, kernel side structure and
> > the user side structure. Something that zyfeier also did with the
> > "futex" linux-like implementation. But, also this kind of
> > implementation should be real-time - so when there is access to the
> > semaphore via syscall (e.g. when the semaphore blocks), or when
> > scheduling, the kernel must have O(1) access to the kernel side
> > structure - no hashing / allocating etc. at runtime.
> >
> > So to summarize, for CONFIG_BUILD_KERNEL the semaphores could
> > *perhaps* work like this (this is not yet tried out, so please forgive
> > me if something is forgotten):
> > - User-side semaphore handle would have the counter and a direct
> > pointer (handle) to the kernel side structure (which can be passed to
> > kernel in syscall).
> > - Kernel side structure would have the needed wait queue and sem
> > holder structures (and flags?)
> > - Kernel side structure would be allocated at sem_init (AND if it was
> > not initialized, allocate it at the time when it is needed?). To
> > achieve real-time behaviour one should just call sem_init properly at
> > startup of the application.
> > - Kernel side structures would be listed in tcb and cleaned up at
> > task_group exit. Also some hard limit/management for how much kernel
> > memory can one process eat from kernel heap is needed.
> > - Counter manipulation can be handled directly in libc in case
> > compiler supports proper atomic operations, or syscall to kernel when
> > there i

Re: [Breaking change] Move nxmutex to sched

2023-04-17 Thread Jukka Laitinen
Hi, thanks for your contribution to the discussion!

Gregory Nutt wrote on Monday, 17 April 2023:
> Linux uses functions like copy_to_user() and copy_from_user() to get
> information to/from user space.
>

I'm afraid that using this for semaphores would become quite heavy...
 
> But I think there is an easier way in NuttX.  All user memory comes from
> a pool of pages that are also mapped in kernel space.  I think that is
> true for all architectures.  And there should be a function to convert a
> user virtual address to a physical address.  I am not sure what all is
> in place.
>
> Couldn't accessing user memory through the kernel address alias avoid
> the problem you describe?  Of course, you would have to be careful at
> page boundaries because contiguous virtual pages may not be physically
> contiguous.

Yes, this is exactly the workaround which we have used. And as you
stated, it breaks when the needed structure happens to be on a page
boundary. Another way around would be to map the pages into kernel
memory. This is not complex, but it still requires finding a free
virtual memory area for the purpose, and mapping two full pages for
just one semaphore. So it is not so nice.

> 
> On 4/14/2023 6:18 AM, Jukka Laitinen wrote:
> > Hi,
> >
> > I am not sure whether it is necessary to separate mutex and semaphore 
> > (although I do see the performance gain that it would give for mutex), 
> > but there is another related topic I would like to raise.
> >
> > Currently, the semaphores don't work (at all) for CONFIG_BUILD_KERNEL. 
> > The simple reason is that the semaphores are allocated from the 
> > user-mapped memory, which is not always available for the kernel while 
> > scheduling or in interrupts. At the time when it is needed, there may 
> > be another memory map active for mmu.
> >
> > There is also an issue with performance; every semaphore access needs 
> > to go to the kernel through syscall, although in principle the 
> > semaphore counter handling alone doesn't need that if the compiler & 
> > hw has the necessary atomic support.
> >
> > We are especially interested in having real-time behaviour (priority 
> > based scheduling, priority inheritance...) AND running 
> > CONFIG_BUILD_KERNEL. We have used some methods to circumvent the 
> > issue, but for those I am not going into details as we don't have a 
> > publishable implementation ready.
> >
> > A tempting way to fix the problem (which we didn't try out yet) would 
> > be separating the semaphores in two parts, kernel side structure and 
> > the user side structure. Something that zyfeier also did with the 
> > "futex" linux-like implementation. But, also this kind of 
> > implementation should be real-time - so when there is access to the 
> > semaphore via syscall (e.g. when the semaphore blocks), or when 
> > scheduling, the kernel must have O(1) access to the kernel side 
> > structure - no hashing / allocating etc. at runtime.
> >
> > So to summarize, for CONFIG_BUILD_KERNEL the semaphores could 
> > *perhaps* work like this (this is not yet tried out, so please forgive 
> > me if something is forgotten):
> > - User-side semaphore handle would have the counter and a direct 
> > pointer (handle) to the kernel side structure (which can be passed to 
> > kernel in syscall).
> > - Kernel side structure would have the needed wait queue and sem 
> > holder structures (and flags?)
> > - Kernel side structure would be allocated at sem_init (AND if it was 
> > not initialized, allocate it at the time when it is needed?). To 
> > achieve real-time behaviour one should just call sem_init properly at 
> > startup of the application.
> > - Kernel side structures would be listed in tcb and cleaned up at 
> > task_group exit. Also some hard limit/management for how much kernel 
> > memory can one process eat from kernel heap is needed.
> > - Counter manipulation can be handled directly in libc in case 
> > compiler supports proper atomic operations, or syscall to kernel when 
> > there is no support available (this would be just performance 
> > optimization - next phase)
> >
> > Whether it is feasible to do it only for CONFIG_BUILD_KERNEL, or as a 
> > common implementation for all  build modes, I didn't think of yet. I 
> > am also not sure whether the re-design of semaphore could also lead to 
> > better wrapping of it for mutex use, but this is also possible. In 
> > that case it could *maybe* solve the performance issue zyfeier tried 
> > to tackle.
> >
> > This is just one idea, but somehow the problem of not working 
> > semaphores in CONFIG_BUILD_KERNEL should be tackled. I wonder if this 
> > is something we should experiment with? If someone is interested in 
> > such an experiment, please let me know. Or if someone is interested in 
> > doing this experiment, please let me know as well, so we don't end up 
> > doing duplicate work :)
> >
> > Br,
> > Jukka
> >
> > Ps. I think that in the current implementation 

Re: [VOTE] Apache NuttX 12.1.0 RC0 release

2023-04-17 Thread Tomek CEDRO
yeah!! :-)

--
CeDeROM, SQ7MHZ, http://www.tomek.cedro.info


Re: [Breaking change] Move nxmutex to sched

2023-04-17 Thread Tomek CEDRO
Is it possible to add the new functionality as an optional feature?

--
CeDeROM, SQ7MHZ, http://www.tomek.cedro.info