I think I could have a better
understanding of option b) if I understood the underlying
principles/reasons/rationale for why the current design is not good. (I'm
not arguing that the current design is good. I just tried to
understand its weaknesses so that we could know how to handle it
b
, this transition toward event driven-ness has two goals:
>> > * improve the scheduler behavior [check, at least to some extent]
>> > * improve the code (readability, maintainability, etc.)
>> >[not check at all :-(]
>>
>> I see. We did consi
ost imminent replenishment event, so it seems to me that you need
> something that will tell you, when servicing replenishment X, what's the
> next time instant you want the timer to fire, to perform the next
> replenishment.
>
> Actually, the depleted queue you have now can well become the
> replenishment queue (it will have to be kept sorted, though, I think).
> Whether you want to keep it as the tail of the actual runqueue, or split
> the two of them, it does not matter much, IMO.
Right. Once the depleted queue is changed to be kept sorted, the runq
and the depleted queue together hold the next replenishment time for
all VCPUs.
Best regards,
Meng
---
Meng Xu
PhD Student in Computer and Information Science
University of Pennsylvania
http://www.cis.upenn.edu/~mengxu/
___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
>
> and it's not really *that* far off from
> what we have now. It's
> a little restructuring so that replenishments occur before any
> scheduling activity and
> the handler checks if switching is needed (basically acting as the
> scheduler) and then
> calls tickle. Sounds like what you had in mind?
Thanks and best regards,
Meng
dea of what he wants done now.
Great! It would be better to summarize your understanding of the design and
send it to the ML to check whether your understanding is correct, as Chong did
for the improved toolstack of the RTDS scheduler. This will save rounds of
patches.
Thanks,
Meng
? (Probably I'm wrong. :-))
Thanks,
Meng
suggestion or advice is really appreciated.
Thank you very much for your time on this question!
Best regards,
Meng
[1] http://parsec.cs.princeton.edu/
no later than the last posting
> date. All patches posted after that date will be automatically queued
> into next release.
>
> RCs will be arranged immediately after freeze.
>
> = Projects =
>
> == Hypervisor ==
>
> * Convert RTDS from time to event-driven model
> -
On Mon, Feb 29, 2016 at 11:06 AM, Konrad Rzeszutek Wilk
wrote:
> On Fri, Feb 26, 2016 at 12:02:50AM -0500, Meng Xu wrote:
>> Hi,
>>
>
> Hey!
>
> CC-ing Elena.
I think you forgot to cc her...
Anyway, let's cc her now... :-)
>
>> We are measuring
On Mon, Feb 29, 2016 at 12:59 PM, Konrad Rzeszutek Wilk
wrote:
>> > Hey!
>> >
>> > CC-ing Elena.
>>
>> I think you forgot you cc.ed her..
>> Anyway, let's cc. her now... :-)
>>
>> >
>> >> We are measuring the execution time between native machine environment
>> >> and xen virtualization environmen
Hi Elena,
Thank you very much for sharing this! :-)
On Tue, Mar 1, 2016 at 1:20 PM, Elena Ufimtseva
wrote:
>
> On Tue, Mar 01, 2016 at 08:48:30AM -0500, Meng Xu wrote:
> > On Mon, Feb 29, 2016 at 12:59 PM, Konrad Rzeszutek Wilk
> > wrote:
> > >> > He
Hi Elena,
On Tue, Mar 1, 2016 at 3:39 PM, Elena Ufimtseva
wrote:
> On Tue, Mar 01, 2016 at 02:52:14PM -0500, Meng Xu wrote:
>> Hi Elena,
>>
>> Thank you very much for sharing this! :-)
>>
>> On Tue, Mar 1, 2016 at 1:20 PM, Elena Ufimtseva
>> wrote:
>&
On Tue, Mar 1, 2016 at 4:51 PM, Sander Eikelenboom wrote:
>
> Tuesday, March 1, 2016, 9:39:25 PM, you wrote:
>
>> On Tue, Mar 01, 2016 at 02:52:14PM -0500, Meng Xu wrote:
>>> Hi Elena,
>>>
>>> Thank you very much for sharing this! :-)
>>>
[spin_unlock]
>
> sched_rt.c TIMER_SOFTIRQ
> replenishment_timer_handler()
> [spin_lock]
> {
> replenish(i)
> runq_tickle(i)
> }>
> program_timer()
> [spin_lock]
>
> Signed-off-by: Tianyang
On Wed, Mar 9, 2016 at 10:46 AM, Dario Faggioli
wrote:
> On Tue, 2016-03-08 at 23:33 -0500, Meng Xu wrote:
>> I didn't mark out all repeated style issues. I think you can correct
>> all of the style issues, such as the spaces in the code, in the next
>> version.
>
On Thu, Mar 10, 2016 at 9:42 AM, Dario Faggioli
wrote:
> Meng Xu is one of the maintainers of the RT-Xen project,
> which is from where the RTDS scheduler comes. He also
> is the main author of the version of RTDS that we currently
> have here upstream.
>
> Since the upstre
On Thu, Mar 10, 2016 at 5:38 AM, Dario Faggioli
wrote:
> On Wed, 2016-03-09 at 23:00 -0500, Meng Xu wrote:
>> On Wed, Mar 9, 2016 at 10:46 AM, Dario Faggioli
>> wrote:
>> >
>> > Basically, by doing all the replenishments (which includes updating
>> > all
On Thu, Mar 10, 2016 at 11:43 AM, Dario Faggioli
wrote:
> On Thu, 2016-03-10 at 10:28 -0500, Meng Xu wrote:
>> On Thu, Mar 10, 2016 at 5:38 AM, Dario Faggioli
>> wrote:
>> >
>> > I don't think we really need to count anything. In fact, what I had
>
[My bad, Dario, I somehow only sent to you my reply... I'm resending
to everyone.]
On Thu, Mar 10, 2016 at 6:53 PM, Dario Faggioli
wrote:
> On Thu, 2016-03-10 at 13:08 -0500, Meng Xu wrote:
>> On Thu, Mar 10, 2016 at 11:43 AM, Dario Faggioli
>> wrote:
>> >
>>
ock?
Would you mind pointing me to somewhere I can find the reason, or
enlightening me?
Thank you very much!
Best,
Meng
Hi Wei and Quan,
On Fri, Mar 11, 2016 at 8:14 AM, Xu, Quan wrote:
> On March 10, 2016 11:56pm, Wei Liu wrote:
>> On Thu, Mar 10, 2016 at 10:08:04AM -0500, Meng Xu wrote:
>> > On Thu, Mar 10, 2016 at 9:42 AM, Dario Faggioli
>> > wrote:
>> > > Meng Xu
| Yes
| Yes
| Can run in IRQ-disabled context? | No | No | Yes
Why may a deadlock occur if we mix spin_lock and spin_lock_irq(save)?
If we mix spin_lock and spin_lock_irq(save), and a group of CPUs is
rendezvousing in an IPI handler
on't have the
rendezvous condition.
>
>
> For deadlock, I think the key problems are:
> - A lock can be acquired from IRQ context
> - The interrupt is delivered to the _same_ CPU that already holds the lock.
>
>
This is one type of deadlock, not the one due to re
On Fri, Mar 11, 2016 at 10:55 AM, Dario Faggioli
wrote:
> On Fri, 2016-03-11 at 09:49 -0500, Meng Xu wrote:
>> > Yes.
>> > Consistency may be helpful to avoid some easy-to-avoid lock errors.
>> > Moreover, without my fix, I think it would not lead to deadlock, as
>
> +    }
> +
> +    /* Iterate through the list of updated vcpus. */
> +    list_for_each_safe( iter, tmp, &tmp_replq )
> +    {
> +        struct vcpu *vc;
> +        svc = replq_elem(iter);
> +        vc = svc->vcpu;
On Sat, Mar 12, 2016 at 6:34 AM, Dario Faggioli
wrote:
> such as deadline and budget. Packing is necessary to make
> it possible for xentrace_format to properly interpreet the
> records.
>
> Signed-off-by: Dario Faggioli
> ---
> Cc: George Dunlap
> Cc: Meng Xu
>
On Sat, Mar 12, 2016 at 6:34 AM, Dario Faggioli
wrote:
> so the trace will show properly decoded info,
> rather than just a bunch of hex codes.
>
> Signed-off-by: Dario Faggioli
> Reviewed-by: Konrad Rzeszutek Wilk
> ---
> Cc: George Dunlap
> Cc: Meng Xu
> Cc: Tiany
, mask 0x000200
> (XEN) Additional IRQ 106 (DISPLAY B)
> (XEN) TEGRA: Routing IRQ106 to dom0, ICTLR2, mask 0x000400
> (XEN) Loading zImage from 8100 to
> 8fa0-8ff8
> (XEN) Allocating PPI 16 for event channel interrupt
> (XEN) Loading dom0 DTB to 0x
On Sat, Mar 12, 2016 at 3:23 PM, Dushyant Behl
wrote:
> Hi Meng,
>
> On Sat, Mar 12, 2016 at 8:57 PM, Meng Xu wrote:
>>
>> On Sat, Mar 12, 2016 at 9:20 AM, Dushyant Behl
>> wrote:
>> > Hi Julien,
>> >
>> > Thanks for the quick reply.
&g
On Sat, Mar 12, 2016 at 5:21 PM, Chen, Tianyang wrote:
>
>
> On 03/11/2016 11:54 PM, Meng Xu wrote:
>>
>> I'm focusing on the style and the logic in the replenish handler:
>>
>>> /*
>>> @@ -160,6 +180,7 @@ struct rt_private {
>>>
On Mon, Mar 14, 2016 at 7:48 AM, Dario Faggioli
wrote:
> On Sun, 2016-03-13 at 11:43 -0400, Meng Xu wrote:
>> On Sat, Mar 12, 2016 at 5:21 PM, Chen, Tianyang > > wrote:
>> > On 03/11/2016 11:54 PM, Meng Xu wrote:
>> > > One more thing we should think about is
Hi Dario,
On Mon, Mar 14, 2016 at 7:58 AM, Dario Faggioli
wrote:
> On Fri, 2016-03-11 at 23:54 -0500, Meng Xu wrote:
>>
>> > @@ -1150,6 +1300,101 @@ rt_dom_cntl(
>> > return rc;
>> > }
>> >
>> > +/*
>> > + * The replenishment
On Mon, Mar 14, 2016 at 11:38 AM, Meng Xu wrote:
> Hi Dario,
>
> On Mon, Mar 14, 2016 at 7:58 AM, Dario Faggioli
> wrote:
>> On Fri, 2016-03-11 at 23:54 -0500, Meng Xu wrote:
>>>
>>> > @@ -1150,6 +1300,101 @@ rt_dom_cntl(
>>> > return rc;
&
don't think it should have difference in x86 and in ARM. However,
perviously, I remembered that RTDS does not work in ARM because the
critical section in context switch in ARM is much longer than that in
x86. That's why RTDS reports error in ARM in terms of locks and was
fixed by global
On Mon, Mar 14, 2016 at 12:35 PM, Dario Faggioli
wrote:
> On Mon, 2016-03-14 at 12:03 -0400, Meng Xu wrote:
>> On Mon, Mar 14, 2016 at 11:38 AM, Meng Xu
>> wrote:
>> >
>> > I'm ok that we keep using spin_lock_irqsave() for now. But maybe
>> >
(scurr)
> snext = runq_pick()
> [spin_unlock]
>
> sched_rt.c TIMER_SOFTIRQ
> replenishment_timer_handler()
> [spin_lock]
> {
> replenish(i)
> runq_tickle(i)
> }>
> program_timer()
> [spin_lock]
>
Chong,
I don't think creating a global variable just for the warning is
a good idea. Even if we do want such a variable, it should only
appear in the rt_dom_cntl() function, since it is only used in
rt_dom_cntl().
A global variable should be used "globally", shouldn't it? ;-)
Thanks,
Meng
>> ret.migrated = 1;
>>> }
>>> +ret.time = snext->budget; /* invoke the scheduler next time */
>>
>>
>> Ah, this is incorrect, although this is easy to fix.
>>
>> The ret.time is the relative time when
On Tue, Mar 15, 2016 at 11:32 PM, Chong Li wrote:
> On Tue, Mar 15, 2016 at 10:14 PM, Meng Xu wrote:
>> On Tue, Mar 15, 2016 at 1:22 PM, Chong Li wrote:
>>> On Tue, Mar 15, 2016 at 11:41 AM, Dario Faggioli
>>> wrote:
>>>> On Tue, 2016-03-15 at 11:22 -050
a
> scheduling decision is necessary, such as when the currently running
> vcpu runs out of budget.
>
> Finally, when waking up a vcpu, it is now enough to tickle the various
> CPUs appropriately, like all other schedulers also do.
>
> Signed-off-by: Tianyang Chen
> Signed-of
iggering ASSERT-s.
>
> Signed-off-by: Dario Faggioli
> ---
> Cc: George Dunlap
> Cc: Meng Xu
> Cc: Tianyang Chen
> ---
> xen/common/sched_credit.c |9 +
> xen/common/sched_credit2.c | 28 ++--
> xen/common/sched_r
hook for Credit2"), applies
> to it to.
>
> This patch, therefore, introduces the switch_sched hook
> for RTDS, as done already for Credit2 and Credit1.
>
> Signed-off-by: Dario Faggioli
> ---
> Cc: George Dunlap
> Cc: Meng Xu
> Cc: Tianyang Chen
> ---
Reviewe
On Wed, Mar 16, 2016 at 10:44 AM, Dario Faggioli
wrote:
> On Wed, 2016-03-16 at 10:20 -0400, Meng Xu wrote:
>> As to the comment, I will suggest:
>>
>> /*
>> * RTDS_was_depleted: Is a vcpus budget depleted?
>>
>> * + Set in burn_budget() when a vcpus budg
o mark whitespace in color.
One approach can be found at
http://stackoverflow.com/questions/5257553/coloring-white-space-in-git-diffs-output
Thanks,
Meng
tch the most relevant question is: Is this
> an issue with the patch, or one that existed already before?
No, I don't think this is an issue introduced by this patch. (I saw
that Dario replied as well.)
Thanks and Best Regards,
Meng
tting
The toolstack change for the RTDS scheduler is in good shape, but hasn't
collected all the Reviewed-by tags yet...
> 6. Event driven RTDS
The event-driven RTDS has received Reviewed-by tags from Dario and me.
It will be ready once it gets an Acked-by.
Thanks,
Meng
> - change the mapping of the lock to the RTDS one;
> - release the lock (the one that has actually been
>taken!)
>
> Signed-off-by: Dario Faggioli
> ---
> Cc: Meng Xu
> Cc: George Dunlap
> Cc: Tianyang Chen
Reviewed-by: Meng Xu
update it sooner or later.
>
> Thanks for the review Dario. I will put everything together soon.
You can just register an account on the Xen wiki and update it directly...
It's always good to have more detailed, correct information on the wiki.
Thanks and best regards,
Meng
sched_rtds_def = {
>      .deinit         = rt_deinit,
>      .alloc_pdata    = rt_alloc_pdata,
>      .free_pdata     = rt_free_pdata,
> +    .init_pdata     = rt_init_pdata,
>      .alloc_domdata  = rt_alloc_domdata,
>      .free_domdata   = rt_free_domdata,
>      .init_dom
ny per-pCPU data, can avoid implementing the
> hook. In fact, the artificial implementation of
> .alloc_pdata in the ARINC653 is removed (and, while there,
> nuke .free_pdata too, as it is equally useless).
>
> Signed-off-by: Dario Faggioli
> ---
> Cc: George Dunlap
> Cc: Robe
> - it's an even bigger issue than that ASSERT triggering (i.e., there
>are potential races even when things works)
Ah-ha, I didn't know this before. :-)
> - I'm taking care of it.
Thank you so much! I'm looking fo
andler is not presented in the old RTDS code?
>
Yes. You can work on the latest staging commit and rebase later.
BTW, I'm testing the patch tonight and will send out my Reviewed-by
tag once I've tested it...
Thanks,
Meng
do_sysctl+0x1eb/0x68d
(XEN)[] do_sysctl+0x615/0x102c
(XEN)[] lstar_enter+0xe2/0x13c
(XEN)
(XEN)
(XEN)
(XEN) Panic on CPU 5:
(XEN) Assertion 'sd->schedule_lock == &prv->lock' failed at sched_rt.c:690
Thanks,
Meng
On Wed, Mar 16, 2016 at 4:23 AM, Dario Faggioli
wrote:
> On Tue, 2016-03-15 at 23:43 -0400, Meng Xu wrote:
>> On Tue, Mar 15, 2016 at 11:32 PM, Chong Li
>> wrote:
>> > > > How about:
>> > > >
>> > > > We create a global variable
On Wed, Mar 16, 2016 at 6:23 AM, Dario Faggioli
wrote:
> On Tue, 2016-03-15 at 23:40 -0400, Meng Xu wrote:
>> > > > @@ -115,6 +118,18 @@
>> > > > #define RTDS_delayed_runq_add (1<<__RTDS_delayed_runq_add)
>> > > >
>> > > >
>>> without any VE support at all.
> It was another Samsung's SoC in my experience.
Yes. I just started looking at ARM and am trying to evaluate the
RTDS scheduler on it. That's why I'm very interested in the use case
here and tried to run it. :-)
I will keep an
On Fri, May 20, 2016 at 4:52 AM, Olaf Hering wrote:
> On Thu, May 19, Meng Xu wrote:
>
>> Does anyone try to install two version of Xen toolstack on the same machine?
>
> I do that. See the INSTALL file which has examples at the end:
>
> * To build a private copy of to
d sharing your
script wrapper? I could learn from it and customize it for my machine.
Once I have it set up successfully, I can write a Xen wiki page to
describe how to do it.
Thank you again for your time and help!
Best Regards,
Meng
On Sat, May 21, 2016 at 10:32 AM, Julien Grall wrote:
> Hello Meng,
Hi Julien,
>
> On 20/05/2016 16:21, Meng Xu wrote:
>>
>> On Thu, May 19, 2016 at 5:53 PM, Andrii Anisov
>> wrote:
>>>>>
>>>>> If the board is not supported by
On Tue, May 24, 2016 at 11:06 AM, Dario Faggioli
wrote:
> which was overlooked in 779511f4bf5ae ("sched: avoid
> races on time values read from NOW()").
>
> Reported-by: Jan Beulich
> Signed-off-by: Dario Faggioli
> ---
> Cc: Meng Xu
> Cc: George Dunlap
Hi Olaf,
Thank you very much for your suggestion!
On Fri, May 20, 2016 at 4:52 AM, Olaf Hering wrote:
> On Thu, May 19, Meng Xu wrote:
>
>> Does anyone try to install two version of Xen toolstack on the same machine?
>
> I do that. See the INSTALL file which has examples at
Hi Wei,
On Wed, May 25, 2016 at 6:53 AM, Wei Liu wrote:
> On Tue, May 24, 2016 at 04:47:38PM -0400, Meng Xu wrote:
>> Hi Olaf,
>>
>> Thank you very much for your suggestion!
>>
>> On Fri, May 20, 2016 at 4:52 AM, Olaf Hering wrote:
>> > On Thu, May 19,
On Tue, May 24, 2016 at 11:16 AM, Dario Faggioli
wrote:
> [trimmed To/Cc]
>
> On Fri, 2016-05-20 at 13:56 -0400, Meng Xu wrote:
>> On Fri, May 20, 2016 at 6:20 AM, Jan Beulich
>> wrote:
>> > Or, as an alternative to Olaf's reply, don't install the
ay. Xen Project hackathons have evolved in
> format into a series of structured problem-solving sessions that scale up to
> 50 people.
I'm wondering if it's possible to record the hackathon as video or
audio, which would be really helpful for people who cannot make the
summ
our time and help in this question!
Best Regards,
Meng
On Mon, Jun 13, 2016 at 2:28 PM, Boris Ostrovsky
wrote:
> On 06/13/2016 01:43 PM, Meng Xu wrote:
>> Hi,
>>
>> I have a quick question about using the Linux spin_lock() in Xen
>> environment to protect some host-wide shared (memory) resource among
>> VMs.
>
On Mon, Jun 13, 2016 at 5:17 PM, Boris Ostrovsky
wrote:
> On 06/13/2016 04:46 PM, Meng Xu wrote:
>> On Mon, Jun 13, 2016 at 2:28 PM, Boris Ostrovsky
>> wrote:
>>> On 06/13/2016 01:43 PM, Meng Xu wrote:
>>>> Hi,
>>>>
>>>> I ha
On Mon, Jun 13, 2016 at 6:54 PM, Andrew Cooper
wrote:
> On 13/06/2016 18:43, Meng Xu wrote:
>> Hi,
>>
>> I have a quick question about using the Linux spin_lock() in Xen
>> environment to protect some host-wide shared (memory) resource among
>> VMs.
>
On Tue, Jun 14, 2016 at 12:01 PM, Andrew Cooper
wrote:
>
> On 14/06/16 03:13, Meng Xu wrote:
> > On Mon, Jun 13, 2016 at 6:54 PM, Andrew Cooper
> > wrote:
> >> On 13/06/2016 18:43, Meng Xu wrote:
> >>> Hi,
> >>>
> >>> I
x27;t find the
> solution after spending a long time googling
> i I’ll appreciate your help
>
There is a video about how to install RT-Xen at [1]. Since the
installation of RT-Xen and Xen is exactly the same, you may want to
have a look and follow the video
t; While there, fix style of an "out:" label in sched_rt.c.
>
> Signed-off-by: Dario Faggioli
> ---
> Cc: George Dunlap
> Cc: Meng Xu
> Cc: Anshul Makkar
> Cc: David Vrabel
> ---
> xen/common/sched_credit.c| 10 +++---
> xen/common/sched_credit2.c
1.0) entered disabled state
[ 326.509849] device vif1.0 left promiscuous mode
[ 326.509878] xenbr0: port 1(vif1.0) entered disabled state
[ 340.651538] IPv6: ADDRCONF(NETDEV_UP): vif2.0: link is not ready
[ 340.808801] device vif2.0 entered promiscuous mode
[ 340.816121] IPv6: ADDR
since then, and I now feel
> comfortable to be the one that will (N)Ack their patches! :-)
I'm not sure what I should reply, but I'm raising my hands and feet to
vote for it. :-)
Thanks,
Meng
2015-06-25 5:44 GMT-07:00 Dario Faggioli :
> Signed-off-by: Dario Faggioli
> ---
> Cc: George Dunlap
> Cc: Meng Xu
> ---
> MAINTAINERS |5 +
> 1 file changed, 5 insertions(+)
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 6b1068e..e6616d2 1006
e to fix the issue.
Thanks,
Meng
what we are after, and accommodates some sanity checking;
> * replaces some of the calls to cpupool_online_cpumask()
>with calls to the new functions too.
>
> Signed-off-by: Dario Faggioli
> ---
> Cc: George Dunlap
> Cc: Juergen Gross
> Cc: Robert VanVossen
> C
es not say
much about the design. As you can see in our discussion with Dario,
there are several design choices to achieve this. Explicitly stating
it here can help people understand what this patch is doing.
>
> Signed-off-by: Dagaen Golomb
> Signed-off-by: Meng Xu
>
restarts the timer.
>
> This may have some issues with corner cases that were discussed
> earlier, such as unexpected
> behavior if the two timers are armed for the same time. It should be
> correct for the common case.
Could you elaborate more on when the two timers can be armed for the same time?
If rt_schedule runs first and schedules a VCPU to run, rt_schedule will
be invoked again when the replenishment runs.
I'm not sure whether this is the only kind of scenario that may arm both
timers for the same time.
Thanks,
Meng
ke one more rt_schedule
> > is:
> > VCPU j currently runs out of budget and will have top priority once it
> > get budget replenishment.
> > If replenishment runs first, rt_schedule will be invoked for only once.
> > If rt_schedule runs first and schedule a VCPU to run,
Hi Ian and Dario,
Thank you very much for your explanation!
2015-06-29 2:33 GMT-07:00 Dario Faggioli :
>
> On Mon, 2015-06-29 at 09:53 +0100, Ian Campbell wrote:
> > On Sat, 2015-06-27 at 12:05 -0700, Meng Xu wrote:
>
> > > I want/hope to know when/how the RTDS schedule
>
> == Hypervisor ==
>
> * Improve RTDS scheduler (ok)
>Change RTDS from quantum driven to event driven
> - Dagaen Golomb, Meng Xu, Chong Li
As to changing the RTDS scheduler from quantum driven to event driven
(in Hypervisor):
Dagaen sent the second version of patch tha
t;
> Note that this field is not the same as the others in this struct, it is
> in effect part of the "key" while the others are the "values".
>
The vcpuid in libxl is not the key but the value.
When it is passed into the hypervisor, the vcpuid acts as the key to identify
which
ould be awesome. If you don't have time, I will have a
> look myself, but only in a few days.
>
Hmm, this is another bug for RTDS on ARM. :-(
I don't have an ARM board set up right now. I'm not sure if I can
run/test it on ARM. I'm curious if this bug is similar wi
s as it did before.
>
> Signed-off-by: Dagaen Golomb
> Signed-off-by: Meng Xu
> ---
> xen/common/sched_rt.c | 100
> +
> 1 file changed, 93 insertions(+), 7 deletions(-)
>
> diff --git a/xen/common/sched_rt.c b/xen/comm
priority vcpus after their budgets get replenished?
If so, doesn't that mean we would "duplicate" (part of) the
runq_tickle code to find the pCPUs to preempt? Is that better than the
approach of just calling runq_tickle whenever a high-priority
"ready" VCPU is found?
>
> (let me know if I've explained myself...)
Sure. I'm not sure I really got the idea of how to handle the
multiple-tickle scenario you mentioned above. ;-(
>
> Allow me to say this: thanks (to both Daegan and Meng) for doing this
> work. It's been a pleasure to have so much an interesting architectural
> discussion, and it's great to see results already. :-D
Thank you very much for your always helpful suggestions/advice! :-D
Best regards,
Meng
Hi Dario,
(I understand and agree with most of your comments, but have one
concern, on which I will comment below.)
Hi Dagaen,
Please comment on my comment if you have any concern/idea.
2015-07-07 7:03 GMT-07:00 Dario Faggioli :
>
> On Mon, 2015-07-06 at 22:51 -0700, Meng Xu
nvocation of the scheduler.
As to the minimum value of the period, I think it should be >= 100us. The
scheduler overhead on a large box could be 1us if the runq is
long and contention on the runq lock is heavy. If the scheduler is
potentially invoked every 10us, the scheduler overhead will be 10
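The arithmetic behind the minimum-period argument can be made explicit (a hedged reading of the truncated sentence, assuming a 1us per-invocation scheduler cost):

```latex
\[
\text{overhead ratio} \;=\; \frac{t_{\text{sched}}}{T_{\text{invoke}}},
\qquad
\frac{1\,\mu\text{s}}{10\,\mu\text{s}} = 10\%,
\qquad
\frac{1\,\mu\text{s}}{100\,\mu\text{s}} = 1\%.
\]
```

Hence a floor of around 100us keeps the worst-case scheduling overhead near one percent.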
2015-07-08 1:33 GMT-07:00 Dario Faggioli :
> On Tue, 2015-07-07 at 23:06 -0700, Meng Xu wrote:
>> 2015-07-07 7:39 GMT-07:00 Dario Faggioli :
>> > On Tue, 2015-07-07 at 09:59 +0100, Jan Beulich wrote:
>> >> >>> On 29.06.15 at 04:44, wrote:
>> >> &
2015-07-08 1:01 GMT-07:00 Dario Faggioli :
> [Trimming the Cc-list a bit, to avoid bothering Wei and Jan]
>
> On Tue, 2015-07-07 at 22:56 -0700, Meng Xu wrote:
>> Hi Dario,
>>
> Hi,
>
>> 2015-07-07 7:03 GMT-07:00 Dario Faggioli :
>> >
>> &g
comment or concerns on the current software-based
cache management work?
I hope to hear your opinions and incorporate them into
my ongoing work instead of diverting too
far away from Xen mainstream ideas. :-)
Thank you very much!
Best regards,
Meng
it.
>
> Such scratch area can be used to kill most of the
> cpumasks{_var}_t local variables in other functions
> in the file, but that is *NOT* done in this chage.
>
> Finally, convert the file to use keyhandler scratch,
> instead of open coded string buffers.
>
>
need that #define, and if I kill it from
> here, you'll have to introduce it yourself. As said, I like it being
> introduced here better, but can live with you adding it with your
> patch(es), if that's what everyone else prefer.
I'm ok with either way and prefer the way y
2015-05-12 18:59 GMT-04:00 Dario Faggioli :
> On Sun, 2015-05-10 at 22:36 -0400, Meng Xu wrote:
>> Hi Dario and George,
>>
> Hi Meng,
Hi Dario,
>
> I gave a quick look at the slides. Nice work.
Thanks for your encouragement! :-)
>
> Although I don't have
h at
http://lists.xenproject.org/archives/html/xen-devel/2015-05/msg00750.html
.
> - Dagaen Golomb, Meng Xu
If you agree that the toolstack improvement for RTDS scheduler is also
for Xen 4.6, which is actually marked in the Xen Scheduler's plans
(http://wiki.xen.org/wiki/Xen_Project_Schedulers), I think
budget and vcpuID\n");
> +        return 1;
> +    }
> +    flag_v = 1;
> +    if (flag_p && flag_b && flag_v) {
> +        flag_p = 0;
> +        flag_b = 0;
> +        flag_v = 0;
> +        index++;
> +    }
&g
Hi Dario,
2015-05-14 10:39 GMT-04:00 Dario Faggioli :
> On Thu, 2015-05-14 at 10:24 -0400, Meng Xu wrote:
>> > @@ -5744,6 +5749,7 @@ static int sched_rtds_pool_output(uint32_t poolid)
>> > return 0;
>> > }
>> >
>> > +
>> &
hts?
I think option 2) is better than the other two choices. SEDF does
not use a strictly EDF scheduling policy as RTDS does, so the
scheduling sequence of VCPUs won't be the same even if we tune RTDS
into a single-core EDF scheduler. This implicit transformation may give users
toolstack functionality of RTDS scheduler is
called in the test. So I'm wondering about the first question. :-)
(probably I missed something?)
Thanks,
Meng
t ok to use "git send-email --reply-to" to attach all four patches to
the cover letter (that is this email thread) of the patch set?
Thanks,
Meng