Hi all,
I guess this is kind of a newbie topic, sorry about that.
I am using the current NuttX 8.x master branch with the stm32f0l0g0
configs (ARM Cortex-M0) on my own custom board with ARCH_CHIP_STM32F030CC,
i.e. 256 KB flash and 32 KB RAM, the same amounts as the CPU used in the
nucleo-f091rc config (the same problem appears there; I tried that one
because I also have a nucleo-f091rc board here).
I am using CONFIG_NFILE_DESCRIPTORS=6, the same as all of the stm32f0l0g0
board configs.
I created a task as a builtin app in NSH. The task needs to open more
than 4 device files, but when it tries to open the 4th device file, going
through fs/vfs/fs_open.c : open() -> nx_vopen() -> files_allocate(),
the struct filelist is already full of device inodes
(> CONFIG_NFILE_DESCRIPTORS), so open() returns the EMFILE error,
because the first 3 descriptors are already taken by stdin, stdout and
stderr (correct?), and then the next 3 by my first device nodes.
I am using 4 buttons plus one other device, with a device node per
button; this is why I need that "many" file descriptors for that single task.
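
For context, this is roughly what the task does (just a minimal sketch;
the name mytask_main and the device paths /dev/btn0../dev/btn3 and
/dev/mydev are placeholders for my actual app and drivers):

  #include <fcntl.h>
  #include <errno.h>
  #include <stdio.h>

  static const char *g_paths[5] =
  {
    "/dev/btn0", "/dev/btn1", "/dev/btn2", "/dev/btn3", "/dev/mydev"
  };

  int mytask_main(int argc, char *argv[])
  {
    int fds[5];
    int i;

    for (i = 0; i < 5; i++)
      {
        fds[i] = open(g_paths[i], O_RDONLY);
        if (fds[i] < 0)
          {
            /* With CONFIG_NFILE_DESCRIPTORS=6 this already fails on the
             * 4th open() with errno == EMFILE, because descriptors 0..2
             * are taken by stdin, stdout and stderr.
             */

            printf("open %s failed: errno=%d\n", g_paths[i], errno);
            return 1;
          }
      }

    /* ... main loop, reads the buttons and the other device every 100 ms ... */

    return 0;
  }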
So, now my questions are:
1) Am I doing something that is bad practice by opening that many file
descriptors per task? If so, I could open and close them on each access,
which happens every 100 ms, if that would actually mean less overhead
(I guess not); see the small sketch below the questions for what I mean.
2) Of course I tried setting CONFIG_NFILE_DESCRIPTORS=8, but when I do that,
the system jumps into arch/arm/src/armv6-m/up_assert.c : _up_assert()
before even finishing nx_start().
Is there anything else that must be considered when increasing
CONFIG_NFILE_DESCRIPTORS?
It looks like something along the lines of stack corruption.
I also increased
CONFIG_USERMAIN_STACKSIZE=2048
and the stack size of my task,
but with no success; the exact lines I changed are shown below.
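
For reference, these are the only .config lines I changed for this test
(CONFIG_MYAPP_STACKSIZE is just a placeholder for my builtin app's actual
stack size option):

  CONFIG_NFILE_DESCRIPTORS=8
  CONFIG_USERMAIN_STACKSIZE=2048
  # placeholder name for my app's stack size option:
  CONFIG_MYAPP_STACKSIZE=2048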
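
And regarding question 1): the open-and-close-on-each-access variant I
have in mind would look roughly like this (again only a sketch; it needs
<unistd.h> and <stdint.h> in addition to the includes above):

  for (;;)
    {
      int i;

      for (i = 0; i < 4; i++)
        {
          int fd = open(g_paths[i], O_RDONLY);
          if (fd >= 0)
            {
              uint8_t state;

              /* Read the button state, then give the descriptor back
               * right away so that only one is in use at a time.
               */

              (void)read(fd, &state, sizeof(state));
              close(fd);
            }
        }

      usleep(100 * 1000);
    }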
I am really not using a lot of resources:
nsh> free
             total       used       free    largest
Umem:        26144      10088      16056      14120
flow@flowI3:~/work/nuttx/nx8.2git/nuttx$ arm-none-eabi-size nuttx
   text    data     bss     dec     hex filename
  67539     156    4340   72035   11963 nuttx
Thanks a lot for any help or suggestions.
--
Florian Wehmeyer
TFW Tech Solutions