Andrew,
> EXT email: be mindful of links/attachments.
>
> On 31/07/2025 11:52 pm, Choi, Anderson wrote:
>> Dario,
>>
>>> On Thu, 2025-07-24 at 21:28 -0400, Stewart Hildebrand wrote:
>>>> On 7/24/25 18:33, Anderson Choi wrote:
>>>>> Fixes
Dario,
> On Thu, 2025-07-24 at 21:28 -0400, Stewart Hildebrand wrote:
>> On 7/24/25 18:33, Anderson Choi wrote:
>>>
>>> Fixes: 22787f2e107c ("ARINC 653 scheduler")
>>> Suggested-by: Stewart Hildebrand
>>> Suggested-by: Nathan Studer
>>> Signed-off-by: Anderson Choi
>>
>> Reviewed-by: Stewart
Stewart,
> Hi,
>
> It largely looks OK to me, just a few small comments below.
>
> On 7/18/25 05:16, Anderson Choi wrote:
>> ARINC653 specificaion requires partition scheduling to be
>> deterministic
>
> Typo: s/specificaion/specification/
>
Addr
Stewart,
>>> Stewart,
>>>
>>> I appreciate your suggestion to eliminate the while loop.
>>> What about initializing major_frame and schedule[0].runtime to
>>> DEFAULT_TIMESLICE at a653sched_init() and use them until the real
>>> parameters are set as below to eliminate the if branch?
>>
>> It wo
> On 7/17/25 9:21, Hildebrand, Stewart wrote:
>>> else
>>> +{
>>> +    sched_priv->next_switch_time = sched_priv->next_major_frame +
>>> +                                   sched_priv->schedule[0].runtime;
>>> +    sched_priv->next_major_frame += sched_
Jan,
>> Signed-off-by: Anderson Choi
>> Suggested-by: Nathan Studer
>
> Nit: (Most) tags in chronological order, please.
>
Sorry, I'm not fully familiar with the revision / upstreaming process yet.
In this case, what would be the most appropriate action?
1. Create a v3 patch?
2. Send the
Nathan,
> I'm not sure this will work if the first minor frame is also missed (which can
> happen in some odd cases). In that scenario, you need to iterate through the
> schedule after resyncing the expected next major frame.
>
> Building off your changes, this should work:
>
> -if ( sched_
Stewart,
> On 6/25/25 23:50, Choi, Anderson wrote:
>> We are observing a slight delay in the start of major frame with the current
> implementation of ARINC653 scheduler, which breaks the determinism in the
> periodic execution of domains.
>>
>> This seems to resu
We are observing a slight delay in the start of major frame with the current
implementation of ARINC653 scheduler, which breaks the determinism in the
periodic execution of domains.
This seems to result from the logic where the variable "next_major_frame" is
calculated based on the current time
Andrew,
> On 13/03/2025 9:27 am, Choi, Anderson wrote:
>> May I know when you think it would be mainlined? And will it be applied to
> all branches, like 4.19 and 4.20?
>
> FYI, backports of this and the xfree() bug hav
Jan,
> On 18.03.2025 05:00, Anderson Choi wrote:
>> xen panic is observed with the following configuration.
>>
>> 1. Debug xen build (CONFIG_DEBUG=y)
>> 2. dom1 of an ARINC653 domain
>> 3. shutdown dom1 with xl command
>>
>> $ xl shutdown
>>
>>
Jürgen,
> On 17.03.25 06:07, Choi, Anderson wrote:
>> I'd like to report xen panic when shutting down an ARINC653 domain
>> with the following setup. Note that this is only observed when
>> CONFIG_DEBUG is enabled.
>>
>> [Test environment]
>> Yoct
I'd like to report xen panic when shutting down an ARINC653 domain with the
following setup.
Note that this is only observed when CONFIG_DEBUG is enabled.
[Test environment]
Yocto release : 5.05
Xen release : 4.19 (hash = 026c9fa29716b0ff0f8b7c687908e71ba29cf239)
Target machine : QEMU ARM64
Numbe
Juergen,
> On 13.03.25 07:51, Choi, Anderson wrote:
>> We are observing an incorrect or unexpected behavior with ARINC653
> scheduler when we set up multiple ARINC653 CPU pools and assign a
> different number of domains to each CPU pool.
>
> ...
>
>> It seems
We are observing an incorrect or unexpected behavior with ARINC653 scheduler
when we set up multiple ARINC653 CPU pools and assign a different number of
domains to each CPU pool.
Here's the test configuration to reproduce the issue.
[Test environment]
Yocto release : 5.05
Xen release : 4.19 (ha