Hi Sangam,

Please let me confirm that I understood it correctly: you are using a dual-core MCU running NuttX on one core and bare metal on the other, with the same flash shared between both systems, correct?
I don't know if someone has tried this idea before, but it seems very dangerous, because: 1) you need to guarantee that both systems (NuttX and bare metal) never access the SPI bus at the same time; and, more importantly, 2) once one of them modifies the content of the flash, the filesystem state that the other one has cached in its RAM becomes stale. Someone asked a similar question, but using FatFS: https://www.reddit.com/r/embedded/comments/13918ve/is_sharing_flash_with_fatfs_between_multiple_mcus/

A more complex but robust option would be to implement a distributed or virtualized file system, where NuttX acts as the file server and the bare-metal side accesses the flash indirectly through NuttX. Maybe you can use the V9FS support that was integrated into NuttX recently to implement it.

Best Regards,

Alan

On Fri, Aug 23, 2024 at 1:21 AM Sangam Thapa <sangam.thapa...@outlook.com> wrote:

> Hey folks! I have a flash memory shared between two MCUs. One MCU has
> been programmed with NuttX and the other with bare-metal (embedded C)
> code. The code works well in both cases. The bare-metal MCU also detects
> the file created and written by NuttX, and the written file is easily
> readable by the littlefs implementation on the bare-metal side. We can
> append to the file and save it, and the content of the file changes.
> When read back, the same file shows the appended content; however, the
> changes are not visible from NuttX. The littlefs configuration of both
> subsystems is the same. Has anyone faced the same issue? Also, two
> partitions have been created using NuttX, and I am not able to mount the
> second partition from the bare-metal code. A snapshot of the littlefs
> configuration parameters has been attached to this mail. Your
> cooperation would be greatly appreciated.
>
> Thank you in advance for your support!
>
> Regards,
> Sangam Thapa