Hi Meng,

Thank you again for your kind reply!
I am currently using "litmus-rt-2014.2.patch" together with my labmate's
patch for the IPI interrupt (https://github.com/LITMUS-RT/liblitmus/pull/1/files).
I asked my labmate about it; he said he was not sure whether there are any
other IPI interrupt bugs beyond the one he fixed.

I am currently using "st_trace" to check the success rate of the tasks, but
it only tells me by how much each job misses its deadline.
Which TRACE facility in LITMUS do you mean?

litmus_log? ft_cpu_traceX? ft_msg_traceX? sched_trace?
or
st_trace?

https://wiki.litmus-rt.org/litmus/Tracing
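For reference, here is a rough sketch of how I understand the two tracing
paths from that wiki page. This assumes a LITMUS^RT kernel built with the
debug trace enabled (CONFIG_SCHED_DEBUG_TRACE) and the feather-trace-tools
installed; the exact file names produced by st-trace-schedule are my
assumption, so please correct me if I have this wrong:

```shell
# Path 1: the TRACE() debug log (litmus_log) -- human-readable scheduler
# messages exposed by the kernel at /dev/litmus/log.
cat /dev/litmus/log > debug-trace.txt &   # start capturing the debug log
# ... run the experiment that shows the bad performance ...
kill %1                                   # stop the capture

# Path 2: sched_trace binary events, recorded per CPU.
st-trace-schedule my-experiment           # stop with Ctrl+C after the run
st-job-stats st-my-experiment-*.bin       # per-job stats, incl. deadline misses
```

Is the TRACE() debug log (path 1) the one you wanted me to collect?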


At this point, I think the best way forward is to install the newest
LITMUS-RT patch (litmus-rt-2015.1.patch). Does that sound right?
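If so, my plan would be the usual patch-and-build workflow, roughly as
below. The base kernel version is a placeholder here; I would take the
exact version that litmus-rt-2015.1.patch targets from the LITMUS^RT
release page:

```shell
# Placeholder: substitute the vanilla kernel version that the
# 2015.1 patch is based on (per the LITMUS^RT release page).
BASE=linux-x.y.z
tar xf ${BASE}.tar.xz
cd ${BASE}
patch -p1 < ../litmus-rt-2015.1.patch   # apply the LITMUS^RT patch
make menuconfig                          # enable the LITMUS^RT plugins + tracing
make -j"$(nproc)"                        # build the patched kernel
```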

Again, thank you very much, I really appreciate your help!

Victor

On Mon, Nov 23, 2015 at 6:23 PM, Meng Xu <xumengpa...@gmail.com> wrote:

> Hi,
>
> 2015-11-23 11:35 GMT-05:00 Yu-An(Victor) Chen <chen...@usc.edu>:
> > Hi Meng,
> >
> > Thank you very much for replying!
> >
> > The RT tasks I am running for each trial at a certain utilization rate
> is a
> > collection of real-time tasks, and each real-time task is a sequence of
> jobs
> > that are released periodically. All jobs are periodic, where each job is
> > defined by a period (and deadline) and a worse-case execution time. Each
> job
> > is just running of a number of iterations of floating point operations.
> This
> > is based on the base task.c provided with the LITMUSRT userspace
> library. So
> > Yes they are independent and not I/O or memory intensive.
>
> Ah, I see. Which version of LITMUSRT did you use? LITMUS^RT has a bug
> in Xen environment. The IPI interrupt is not handled properly in
> LITMUS on Xen.  With the bug, the system performance is worse than
> when the system is loaded, because LITMUS scheduler just fails to
> respond due to the IPI ignorance.
>
> IIRC, this bug was fixed in the latest LITMUSRT code.
> Did you use the latest LITMUS code?
>
> One way to debug the issue is:
> Can you enable the TRACE in LITMUS and collect the scheduling log in
> the scenario when you see the bad performance?
>
> >
> > The period distribution I have for each task is (10ms,100ms) which is
> bigger
> > than the VM period I specified using xl sched-rtds (10000us), but I guess
> > the lower bound is too close to the VM period. So by your suggestion,
> maybe
> > shorten the period for VM can improve the performance?
>
> Yes. At least it shortens the starvation interval.
>
> Meng
>
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
