so maybe this will finally make it work as expected!
Aha! Seems to work now! Thanks for all the useful feedback and quick responses!
Regards,
Dagaen Golomb
Ph.D. Student, University of Pennsylvania
Something else to think about -- it was compiled with one vCPU, so if this is
done at compile time, that could be the issue. I doubt it's done at boot, but
if so, I would presume there is a way to disable it?
Below is the config file grepped for "SMP".
CONFIG_X86_64_SMP=y
CONFIG_GENERIC_SMP_IDLE_THREAD=y
CONFIG_SMP=y
# CONFIG_X86_VSMP is not set
# CONFIG_MAXSMP is not set
CONFIG_PM_SLEEP_SMP=y
See anything problematic? It seems PV spinlocks are not set and SMP is
enabled... or is something else required to prevent stripping of the
spinlocks? I'm also not sure whether any of the set SPIN config items could
mess with this. If this is done at boot, a pointer in the right direction for
preventing it would be appreciated!
Regards,
Dagaen Golomb
Ph.D. Student, University of Pennsylvania
>> >> If spin_lock() in Linux can provide the host-wide atomicity (which would
>> >> surprise me, though), that will be great. Otherwise, we probably have
>> >> to expose the spin_lock in Xen to Linux?
>>
>> > I'd think this has to be via the hypervisor (or some other third party).
>> > Otherwise what happens if one of the guests dies while holding the lock?
>> > -boris
>>
>> This is a valid point against locking in the guests, but by itself it won't
>> prevent a spinlock implementation from working! We may move in this
>> direction for several reasons, but I am interested in why the above is
>> not working when I've disabled the PV part that sleeps vcpus.
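For the record, the locking I'm experimenting with on top of the shared page
is nothing more than a raw test-and-set lock built on the C11 atomics, rather
than the kernel's spin_lock(). A sketch only: it assumes both domains have the
same page mapped and, as Boris points out, it does nothing to recover the lock
if the holder dies.

#include <stdatomic.h>

/* Lock word living at a fixed offset inside the shared (granted) page. */
struct shared_lock {
    atomic_flag locked;    /* cleared once at setup, by exactly one side */
};

static void shared_lock_acquire(struct shared_lock *l)
{
    /* Spin until the flag atomically flips from clear to set. */
    while (atomic_flag_test_and_set_explicit(&l->locked, memory_order_acquire))
        ;                  /* busy-wait; a dead holder leaves us spinning forever */
}

static void shared_lock_release(struct shared_lock *l)
{
    atomic_flag_clear_explicit(&l->locked, memory_order_release);
}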
Regards,
Dagaen Golomb
On May 16, 2016 09:13, "Jonathan Creekmore"
wrote:
>
>
> Dagaen Golomb writes:
>
> > It does, being the custom kernel on version 4.1.0. But Dom0 uses this same
> > exact kernel and reads/writes just fine! The only solution if this is
> >>> mismatches or something similar? I'm using the xen/xenstore.h header
> >>> file for all of my xenstore interactions. I'm running Xen 4.7 so it
> >>> should be in /dev/, and the old kernel is before 3.14 but the new one
> >>> is after,
> On 5/15/16 11:40 AM, Dagaen Golomb wrote:
> > Hi All,
> >
> > I'm having an interesting issue. I am working on a project that
> > requires me to share memory between dom0 and domUs. I have this
> > successfully working using the grant table and the XenStore.
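For context, the mapping side in the domU userspace looks roughly like the
following (a sketch assuming the libxengnttab interface shipped with Xen 4.7;
the remote domid and grant reference are placeholders I'd normally read out of
the XenStore):

#include <stdint.h>
#include <string.h>
#include <sys/mman.h>
#include <xengnttab.h>

/* Map one page granted by the remote domain and scribble a marker into it. */
static int map_shared_page(uint32_t remote_domid, uint32_t gref)
{
    xengnttab_handle *xgt = xengnttab_open(NULL, 0);
    void *page;

    if (!xgt)
        return -1;

    page = xengnttab_map_grant_ref(xgt, remote_domid, gref,
                                   PROT_READ | PROT_WRITE);
    if (!page) {
        xengnttab_close(xgt);
        return -1;
    }

    strcpy(page, "hello from the mapper");   /* visible to the granting domain */

    xengnttab_unmap(xgt, page, 1);
    xengnttab_close(xgt);
    return 0;
}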
>> stock kernels. They all work with the
>> read. However, I do not see why the kernel modification would be the
>> issue as described above. I also have the dom0 running this kernel and
>> it reads and writes the XenStore just dandy. Are there any kernel
>> config issues that could do this?
>
> What if you use the .config of the kernel
still waiting.
>
> The only issue I could see is the 88007B2F3390. This is supposed
> to be the key name, I presume (such as gref). Maybe it's an issue with
> the compiled binary, and it ends up watching on a key that doesn't
> exist (nor ever will). I will look into this as it looks promising!
> Thanks!
Update: I checked further up the logs and the other working kernels
produce values like this as well. Also, I am using the same exact
binary between kernel versions, not recompiling.
>
> I don't see any other logs for xenstore; if there are more, please
> point me to them. xenstored.log in the same directory is recognized as
> binary, and when I open it anyway all I see is "Xen Storage Daemon,
> version 1.0" repeatedly.
Regards,
Dagaen Golomb
Ph.D. Student, University of Pennsylvania
in which case it blocks on
the read itself.
I have an inkling this may be something as simple as a configuration
issue, but I can't seem to find anything. Also, the fact that writes
work fine but reads do not is perplexing me.
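In case it helps someone spot what's different about the read path, the guest
side is essentially just this (a sketch using plain libxenstore calls;
"data/gref" is a made-up key name for illustration):

#include <stdio.h>
#include <stdlib.h>
#include <xenstore.h>

int main(void)
{
    struct xs_handle *xsh = xs_open(0);
    unsigned int len = 0;
    char *val;

    if (!xsh) {
        perror("xs_open");
        return 1;
    }

    /* The write side behaves on both kernels... */
    if (!xs_write(xsh, XBT_NULL, "data/gref", "42", 2))
        fprintf(stderr, "xs_write failed\n");

    /* ...but this read is what hangs/fails for me on the modified kernel. */
    val = xs_read(xsh, XBT_NULL, "data/gref", &len);
    if (val) {
        printf("read back: %.*s\n", (int)len, val);
        free(val);
    }

    xs_close(xsh);
    return 0;
}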
Any help would be appreciated!
Regards,
Dagaen Golomb
Ph.D. Student, University of Pennsylvania
> runq to runq and depletedq. I was thinking it might be the case here
> as well. (Yes, it is different here since we can get more useful
> information to tickle cpu if we put vCPUs into runq instead of adding
> one more queue.) :-)
I think this is straightforward to simply use
> e I see the need for another list. Again, why
> not just leave them in runq? I appreciate this is a rather big change
> (although, perhaps it looks bigger said than done), but I think it could
> be worth pursuing.
>
> For double checking, asserting, and making sure that we are able to
> identify the running svc-s, we have the __RTDS_scheduled flag.
I also don't feel we need another list.
Regards,
~Dagaen Golomb
___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
> a patch series! :-D
>
> Sure! Dagaen, what do you think?
Yes, this may become a series with several changes like this. For now
I am going to get it working with the running vcpus in the runq.
I thought returning the inserted index was a good way of checking if
>
> So, as you wish. If you just wanna know, ask again. If you fancy going down
> in the code and checking, that, IMO, would be best. :-D
I will research myself. :)
Regards,
~Dagaen Golomb
>> > + ...->NUM_CPUS);
>> > +}
>> > +
>>
>> This is incorrect: for_each_present_cpu() lists all cpus in the
>> system; it enumerates the cpu_present_map.
>>
>> What you want here is the number of cpus for the scheduler in the
>> current cpupool.
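Right, something along these lines is presumably what's wanted instead --
count only the pool's pCPUs. Just a sketch: I'm assuming the pool mask is
reachable via cpupool_scheduler_cpumask(), and that helper's exact name may
differ in the current tree.

#include <xen/sched.h>
#include <xen/cpumask.h>

/* Number of pCPUs this scheduler instance actually owns, i.e. the weight of
 * its cpupool's mask, rather than a walk of cpu_present_map. */
static unsigned int rt_pool_ncpus(const struct vcpu *vc)
{
    cpumask_t *online = cpupool_scheduler_cpumask(vc->domain->cpupool);

    return cpumask_weight(online);
}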
> If rt_schedule runs first and schedules a VCPU to run, rt_schedule will
> be invoked again when replenishment is invoked.
This is a good point. The ordering in this case doesn't seem to cause
any functional/behavioral problems, but it will cause rt_schedule to run twice
when it could have run once. So, even as a corner case, it would seem
that it's a performance corner case and not a behavioral one.
deal. However, if rt_schedule goes first, it may kick a vcpu that is
about to get a replenishment that would cause it to remain a top priority.
One easy option is to check replenishments before kicking a vcpu, but
that's exactly the kind of thing we wanted to avoid with this
restructuring.
if the two timers are armed for the same time. It should be
correct for the common case.
Dario, let me know if this is closer to what you envisioned.
~Dagaen
To do this, we create a new list that holds, for each
vcpu, the time least into the future that it may need to be
rescheduled. The scheduler chooses the lowest time off of this
list and waits until the specified time instead of running every
1 ms as it did before.
Signed-off-by: Dagaen Golomb
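Stripped of the sched_rt.c details, the core of it is just: keep the per-vcpu
next-event times ordered and program a one-shot timer for the earliest one (a
sketch with made-up names, not the actual patch):

#include <stdint.h>

typedef uint64_t s_time_t;                /* nanoseconds, as in Xen */

struct vcpu_event {
    struct vcpu_event *next;              /* list kept sorted by 'when' */
    s_time_t when;                        /* earliest time this vcpu needs attention */
};

/* Insert in ascending order of 'when', so the head is always the next event. */
static void event_list_insert(struct vcpu_event **head, struct vcpu_event *ev)
{
    while (*head && (*head)->when <= ev->when)
        head = &(*head)->next;
    ev->next = *head;
    *head = ev;
}

/* Instead of a fixed 1 ms quantum, tell the core when to invoke us again:
 * the head of the list if there is one, otherwise "far in the future". */
static s_time_t next_wakeup(const struct vcpu_event *head,
                            s_time_t now, s_time_t max_idle)
{
    if (!head)
        return now + max_idle;
    return head->when > now ? head->when : now;
}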
>
>> > Anyway, I've zero interest in turning this into a fight over
>> > terminology... If you want to call runq_tickle() "the scheduler", go
>> > ahead, it would just make communication a bit more difficult, but I'm up
>> Yes, this is an option. However, I thought this would actually be an
>> option you would
>> not like.
>>
> How so... I've been arguing for this the whole time?!?! :-O
>
> I'm sure I've put down a sketch of what I think the replenishment
>> function should do in my first or second email in the thread.
Thanks for the reply, budget enforcement in the scheduler timer makes
sense. I think I have an idea of what he wants done now.
~Dagaen
On Jun 17, 2015 1:45 AM, "Meng Xu" wrote:
> Hi Dagaen,
>
> I just commented on the summary of scheduler design you proposed at the
> end of the email. I'm looking
> Thanks for this actually... I love discussing these things; it reminds me
> of the time when I was doing this stuff myself, and makes me feel
> young! :-P
And thank you for the very detailed and well-thought-out response!
>
>> Separating the replenishment from the scheduler may be problematic. T
Let me know if I'm missing some key insight into how the behavior could be
implemented correctly and beautifully using the multiple-timer approach. I
simply don't see how it can be done without heavy interaction and information
sharing between them, which really defeats the purpose.
Regards,
~Dagaen
> No HTML, please.
Got it, sorry.
>> And note that, when I say "timer", I mean an actual Xen timer, i.e.,
>> those things that are started, stopped, and with a timer handling
>> routine being called when they expire. For an example, you can
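Right -- for my own notes, the pattern with an actual Xen timer looks roughly
like this (a sketch from my reading of the common timer code; the handler body
and the 1 ms re-arm value are placeholders, not the real patch):

#include <xen/timer.h>
#include <xen/time.h>

static struct timer repl_timer;

/* Timer handling routine: runs when the timer expires, does the work that is
 * due now, then re-arms itself for the next interesting time. */
static void replenishment_handler(void *data)
{
    /* ... perform the replenishments due at NOW() ... */
    set_timer(&repl_timer, NOW() + MILLISECS(1));   /* placeholder re-arm time */
}

static void repl_timer_start(unsigned int cpu, void *data)
{
    init_timer(&repl_timer, replenishment_handler, data, cpu);
    set_timer(&repl_timer, NOW() + MILLISECS(1));
}

static void repl_timer_stop(void)
{
    stop_timer(&repl_timer);
    kill_timer(&repl_timer);
}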
s and run the same workload on each, making sure each
runs for a long time to remove biases.
...
> == Hypervisor ==
[...]
>
> * Improve RTDS scheduler (none)
>   Change RTDS from quantum-driven to event-driven
> - Dagaen Golomb, Meng Xu, Chong Li
>
...
Ok.
The patch for this is out:
http://osdir.com/ml/general/2015-06/msg10265.html
Looking forward to comments
All,
I expect to have a patch out soon for the RTDS scheduler improvement.
Regards,
Dagaen Golomb
On Thu, Mar 12, 2015 at 12:01 PM, Olaf Hering wrote:
> On Thu, Mar 12, Ian Campbell wrote:
>
> > dist/install/var/xen/dump
> > which all seems proper and correct to
events are received.
This improvement will only require changes to the RTDS scheduler file
(sched_rt.c) and will not require changes to any other Xen subsystems.
Discussion, comments, and suggestions are welcome.
Regards,
Dagaen Golomb