Hi Meng,

So I will describe my setup here again, but this time I have shortened the
period and budget for RTDS as you suggested:
-----------------------------------------------------------------------------------------------------------------------------

for xen-credit: 2 VMs (both VMs are given 8 vCPUs) sharing 8 cores (cpu 0-7),
using the credit scheduler (both with a weight of 800 and a cap of 400)
for xen-rtds: 2 VMs (both VMs are given 8 vCPUs) sharing 8 cores (cpu 0-7),
using RTDS (both with a period of 4000 us (4 ms) and a budget of 2000 us (2 ms))
In both setups, dom0 uses 1 core from cpus 8-15
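
For reference, the scheduler parameters were set with xl commands roughly like
the ones below (the domain names vm1/vm2 are placeholders, and the vcpu-pin
lines are only one way to restrict the guests and dom0 to the cpus mentioned
above, so treat this as a sketch rather than the exact commands):

    # credit scheduler setup: weight 800, cap 400 for each guest
    xl sched-credit -d vm1 -w 800 -c 400
    xl sched-credit -d vm2 -w 800 -c 400

    # RTDS setup: period and budget are in microseconds
    xl sched-rtds -d vm1 -p 4000 -b 2000
    xl sched-rtds -d vm2 -p 4000 -b 2000

    # pinning (illustrative): guests on cpus 0-7, dom0 on cpus 8-15
    xl vcpu-pin vm1 all 0-7
    xl vcpu-pin vm2 all 0-7
    xl vcpu-pin Domain-0 all 8-15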

In both setups:

I loaded VM2 with constantly running tasks with a total utilization of 4 cores,
and in VM1 I ran iterations of task sets with total utilizations of 1 core,
2 cores, 3 cores, and 4 cores, and then recorded their schedulability.
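
Inside VM1 and VM2 the real-time tasks are launched with the LITMUS^RT
userspace tools. Roughly, the steps look like the sketch below; rtspin is used
here only to illustrate the task parameters (the two tasks shown are the ones
from trial #3 below), the plugin choice and run length are just examples, and
my actual tasks are based on base_task.c:

    # inside the guest, on the LITMUS^RT kernel
    setsched GSN-EDF            # pick a global EDF plugin (example choice)
    rtspin -w 11.6 16 60 &      # task with exe ~11.6ms, period 16ms, run 60s
    rtspin -w 12.6 46 60 &      # task with exe ~12.6ms, period 46ms, run 60s
    release_ts                  # synchronously release the waiting tasks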
-----------------------------------------------------------------------------------------------------------------------------

So the st_jobs_stats results for the jobs that missed their deadlines are:

trial #1, composed of 2 tasks, total task utilization = 1:

(period, exe, deadline) = (21ms, 12.023ms, 21ms) -> misses all deadlines
(period, exe, deadline) = (100ms, 37.985ms, 100ms) -> no misses

trial #2, composed of 2 tasks, total task utilization = 1:

(period, exe, deadline) = (68ms, 40.685ms, 68ms) -> misses all deadlines
(period, exe, deadline) = (70ms, 28.118ms, 70ms) -> no misses

trial #3, composed of 2 tasks, total task utilization = 1:

(period, exe, deadline) = (16ms, 11.613ms, 16ms) -> misses all deadlines
(period, exe, deadline) = (46ms, 12.612ms, 46ms) -> no misses

I do notice that, within the task that misses its deadlines, the completion
time gets progressively longer. For example, for trial #3, a snapshot of the
st_jobs_stats output for that task shows the completion times of successive
jobs as 79ms, then 87ms, and then 95ms.
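
(For what it is worth, if I compute the per-task utilizations, the tasks that
miss all their deadlines are exactly the ones whose utilization is above a
single VCPU's bandwidth of 2000/4000 = 50%: 12.023/21 ≈ 0.57, 40.685/68 ≈ 0.60,
and 11.613/16 ≈ 0.73, while the tasks with no misses are all below 50%:
37.985/100 ≈ 0.38, 28.118/70 ≈ 0.40, and 12.612/46 ≈ 0.27. I am not sure yet
whether that is the whole explanation, though.)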

OK, I will look into the references you provided. I just started my research
in the real-time field, so there is still a lot for me to learn. Thank you
again for your reply!

Thank you!

On Sat, Nov 28, 2015 at 7:09 AM, Meng Xu <xumengpa...@gmail.com> wrote:

> 2015-11-28 7:20 GMT-05:00 Yu-An(Victor) Chen <chen...@usc.edu>:
> > Hi Meng,
> >
> > Thank you so much for being this patient.
> >
> > So a task set is composed of a collection of real-time tasks, and each
> > real-time task is a sequence of jobs that are released periodically...
> > All jobs are periodic, where each task Ti is defined by a period (and
> > deadline) pi and a worst-case execution time ei, with pi ≥ ei ≥ 0 and
> > pi, ei ∈ integers. Each job is comprised of a number of iterations of
> > floating-point operations. This is based on the base_task.c provided
> > with the LITMUS^RT userspace library.
>
> I knew this information. Basically, I did experiments with this kind of
> configuration before. What I want to know is: what is the range of
> period, execution time and deadline you have for the taskset, and, when
> a taskset is unschedulable under your experiment, what is the
> st_jobs_stats result from LITMUS, which will show which job of which
> task misses its deadline. If you try to service a task with period =
> 100, exe = 50, deadline = 100, specified as (100, 50, 100), on a VCPU
> with period = 100 and budget = 50, you will never be able to guarantee
> this task. The reason is that the VCPU can be unavailable when the task
> is released.
> A theoretical analysis can be found in the paper "Periodic Resource
> Model for Compositional Real-Time Guarantees"
>
> http://repository.upenn.edu/cgi/viewcontent.cgi?article=1033&context=cis_reports
>
> BTW, I'm assuming you are familiar with schedulability tests for
> real-time systems, since you are talking about schedulability. If my
> assumption is wrong, you should have a look at the survey paper "A
> survey of hard real-time scheduling for multiprocessor systems"
> (http://dl.acm.org/citation.cfm?id=1978814). This will at least tell
> you when you should expect a taskset to be schedulable in theory. If a
> taskset is claimed unschedulable in theory, it is very likely
> unschedulable in practice. In addition, it will tell you that taskset
> utilization is not the only factor that affects schedulability.
> Other factors, such as the highest task utilization in a taskset, the
> scheduling algorithm, and task period relations, can also affect
> schedulability.
>
> Best,
>
> Meng
>
>
> >
> >
> > On Fri, Nov 27, 2015 at 4:17 PM, Meng Xu <xumengpa...@gmail.com> wrote:
> >>
> >> 2015-11-27 14:50 GMT-05:00 Yu-An(Victor) Chen <chen...@usc.edu>:
> >> > Hi Dario & Meng,
> >> >
> >> > Thanks for your analysis!
> >> >
> >> > VM1 and VM2 are both given 8 vCPUs and share physical CPUs 0-7. So
> >> > in theory, "VM1 can get the services of 400%".
> >> > And yes, Dario, your explanation of the task utilization is
> >> > correct.
> >> >
> >> > So the resource configuration, as I mentioned before, is:
> >> >
> >> > for xen-credit: 2 VMs (both VMs are given 8 vCPUs) sharing 8 cores
> >> > (cpu 0-7), using the credit scheduler (both with a weight of 800
> >> > and a cap of 400)
> >> > for xen-rtds: 2 VMs (both VMs are given 8 vCPUs) sharing 8 cores
> >> > (cpu 0-7), using RTDS (both with a period of 10000 and a budget of
> >> > 5000)
> >> > In both setups, dom0 uses 1 core from cpus 8-15
> >> >
> >> > In both setups:
> >> >
> >> > I loaded VM2 with constantly running tasks with a total utilization
> >> > of 4 cores, and in VM1 I ran iterations of task sets with total
> >> > utilizations of 1 core, 2 cores, 3 cores, and 4 cores, and then
> >> > recorded their schedulability.
> >> >
> >> > Attached is the result plot.
> >> >
> >> >
> >> > I have tried with the newest LITMUS^RT, and rtxen is still
> >> > performing poorly.
> >>
> >> What are the characteristics of the tasks you generated? When a
> >> taskset misses deadlines, which task inside it misses its deadline?
> >>
> >> Meng
> >>
> >>
> >> >
> >> > Thank you both very much again. If any part is unclear, please let
> >> > me know, thanks!
> >> >
> >> > Victor
> >> >
> >> >
> >> >
> >> > On Fri, Nov 27, 2015 at 9:41 AM, Meng Xu <xumengpa...@gmail.com> wrote:
> >> >>
> >> >> 2015-11-27 12:23 GMT-05:00 Dario Faggioli <dario.faggi...@citrix.com>:
> >> >> > On Fri, 2015-11-27 at 08:36 -0800, Yu-An(Victor) Chen wrote:
> >> >> >> Hi Dario,
> >> >> >>
> >> >> > Hi,
> >> >> >
> >> >> >> Thanks for the reply!
> >> >> >>
> >> >> > You're welcome. :-)
> >> >> >
> >> >> > I'm adding Meng to Cc...
> >> >> >
> >> >>
> >> >> Thanks! :-)
> >> >>
> >> >> >> My goal for the experiment is to show that the Xen RTDS
> >> >> >> scheduler is better than the credit scheduler when it comes to
> >> >> >> real-time tasks. So my setup is:
> >> >> >>
> >> >> >> for xen-credit: 2 VMs sharing 8 cores (cpu 0-7) using the credit
> >> >> >> scheduler (both with a weight of 800 and a cap of 400)
> >> >>
> >> >> So you set up a 400% CPU cap for each VM. In other words, each VM
> >> >> will have computation capacity almost equal to 4 cores. Because
> >> >> VCPUs are also scheduled, this four-core capacity is not equal to 4
> >> >> physical cores on bare metal, because the resource supplied to
> >> >> tasks by the VCPUs also depends on the scheduling pattern (which
> >> >> affects the resource supply pattern) of the VCPUs.
> >> >>
> >> >> >> for xen-rtds: 2 VMs sharing 8 cores (cpu 0-7) using RTDS (both
> >> >> >> with a period of 10000 and a budget of 5000)
> >> >>
> >> >> How many VCPUs does each VM have? If each VM has 4 VCPUs, each VM
> >> >> has only 200% CPU capacity, which is only half of the
> >> >> configuration you made for the credit scheduler.
> >> >>
> >> >> >> in both setups, dom0 uses 1 core from cpus 8-15
> >> >>
> >> >> Do you have a quick evaluation report (similar to the evaluation
> >> >> section in academic papers) that describes how you did the
> >> >> experiments, so that we can have a better guess at where things go
> >> >> wrong?
> >> >>
> >> >> Right now, I'm guessing that the resources configured for each VM
> >> >> under the credit and rtds schedulers are not the same, and it is
> >> >> possible that some parameters are not configured correctly.
> >> >>
> >> >> Another thing is that the credit scheduler is work-conserving,
> >> >> while RTDS is not. So in an under-loaded situation, you may see the
> >> >> credit scheduler work better because it tries to use as much
> >> >> resource as it can. You can make the comparison fairer by setting
> >> >> the cap for the credit scheduler as you did, and by running some
> >> >> background VM or tasks to consume the idle resource.
> >> >>
> >> >> Meng
> >> >
> >> >
> >>
> >>
> >>
> >> --
> >>
> >>
> >> -----------
> >> Meng Xu
> >> PhD Student in Computer and Information Science
> >> University of Pennsylvania
> >> http://www.cis.upenn.edu/~mengxu/
> >
> >
>
>
>
> --
>
>
> -----------
> Meng Xu
> PhD Student in Computer and Information Science
> University of Pennsylvania
> http://www.cis.upenn.edu/~mengxu/
>
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
