On 06.12.2011, at 08:39, 陳韋任 wrote:
>> If you want to be more exotic (minix found a lot of bugs for me back in the
>> day!) you can try the os zoo:
>>
>> http://www.oszoo.org/
>
> The website seems down?
Yeah, looks like it's down :(. Too bad.
Alex
> If you want to be more exotic (minix found a lot of bugs for me back in the
> day!) you can try the os zoo:
>
> http://www.oszoo.org/
The website seems down?
Regards,
chenwj
--
Wei-Ren Chen (陳韋任)
Computer Systems Lab, Institute of Information Science,
Academia Sinica, Taiwan (R.O.C.)
Tel:886-2-2788-3799 #1667
Hi Max,
> If your code is available online I can try it myself, the question is
> where is it hosted then.
> If not, then link to kernel binary and qemu exec trace would help me to start.
Personally, I really want to make our work public, but I am not the decision
maker. I'll push it toward open [...]
> > We ask TCG to disassemble the guest binary, starting from where the trace
> > begins, _again_ to get a set of TCG blocks, then send them to the LLVM translator.
>
> So you have two TCG backends? One to generate real host code and one that
> goes into your LLVM generator?
Ah..., I should say we as [...]
On 04.12.2011, at 07:14, 陳韋任 wrote:
>>> 3. Then a trace composed of TCG blocks is sent to an LLVM translator. The
>>> translator generates the host binary for the trace into an LLVM code cache,
>>> and patches the [...]
>>
>> I don't fully understand this part. Do you disassemble the x86 blob that TCG emitted?
> > 3. Then a trace composed of TCG blocks is sent to an LLVM translator. The
> > translator generates the host binary for the trace into an LLVM code cache,
> > and patches the [...]
>
> I don't fully understand this part. Do you disassemble the x86 blob that TCG
> emitted?
We ask TCG to disassemble the guest binary, starting from where the trace
begins, _again_ to get a set of TCG blocks, then send them to the LLVM translator.
On 1 December 2011 03:50, 陳韋任 wrote:
> We use QEMU 0.13
Oops, I missed this. 0.13 is over a year old now. There is zero point
in doing any kind of engineering work of this scale on such an old
codebase. You need to be tracking the head of git master, generally,
if you want (a) any hope of getting [...]
> The IO thread is always enabled in QEMU these days.
We use QEMU 0.13; I think the IO thread is not enabled by default there.
Regards,
chenwj
On 01.12.2011, at 04:50, 陳韋任 wrote:
> Hi Alex,
>
>> Very cool! I was thinking about this for a while myself now. It's especially
>> appealing these days since you can do the hotspot optimization in a separate
>> thread :).
>>
>> Especially in system mode, you also need to flush when tb_flush() is called
>> though. And you have to make sure [...]
On 1 December 2011 09:03, 陳韋任 wrote:
> I read the thread talking about the broken tb_unlink [1], and I'm surprised
> that tb_unlink is broken even under single-threaded mode and system mode. You
> mentioned (b) could be the IO thread in [1]. I think we don't enable IO thread
> in system mode, right?
>> There's no attachment in this mail. I can try to help you resolving it
>> if you provide more information.
>
> Sorry about that, see the attachment please. What kind of information you
> want
> to know?
If your code is available online I can try it myself, the question is
where is it hosted then.
Hi Stefan,
> It would be interesting to use an optimized interpreter instead of TCG,
> then go to LLVM for hot traces. This is more HotSpot-like with the idea
> being that the interpreter runs through initialization and rarely
> executed code without a translation overhead. For the hot paths LLVM [...]
On 1 December 2011 07:46, Stefan Hajnoczi wrote:
> It would be interesting to use an optimized interpreter instead of TCG,
> then go to LLVM for hot traces. This is more HotSpot-like with the idea
> being that the interpreter runs through initialization and rarely
> executed code without a translation overhead. For the hot paths LLVM [...]
> Misgenerated code might not be an issue now, since we have tested our
> framework in LLVM-only mode. I think the problem is still the link/unlink
> stuff. The first problem I hit while lowering the threshold is that the
> broken build generates a few traces (2, actually) that a working one does [...]
On Thu, Dec 01, 2011 at 11:50:24AM +0800, 陳韋任 wrote:
> > I don't see any better approach to debugging this than the one you're
> > already taking. Try to run as many workloads as you can and see if they
> > break :). Oh and always make the optimization optional, so that you can
> narrow it down [...]
Hi Peter,
> > 1. cpu_unlink_tb (exec.c)
>
> This function is broken even for pure TCG -- we know it has a race condition.
> As I said on IRC, I think that the right thing to do is to start
> by overhauling the current TCG code so that it is:
> (a) properly multithreaded (b) race condition free
Hi Alex,
> Very cool! I was thinking about this for a while myself now. It's especially
> appealing these days since you can do the hotspot optimization in a separate
> thread :).
>
> Especially in system mode, you also need to flush when tb_flush() is called
> though. And you have to make sure [...]
On 29 November 2011 07:03, 陳韋任 wrote:
>
> 1. cpu_unlink_tb (exec.c)
This function is broken even for pure TCG -- we know it has a race condition.
As I said on IRC, I think that the right thing to do is to start
by overhauling the current TCG code so that it is:
(a) properly multithreaded (b) race condition free
On 29.11.2011, at 08:03, 陳韋任 wrote:
> Hi all,
>
> Our team is working on a project similar to llvm-qemu [1], which is also
> based on QEMU and LLVM. Currently, process mode works fine [2], and
> we're moving forward to system mode.
>
> Let me briefly introduce our framework here and state what problem we encounter.
Hi all,
Our team is working on a project similar to llvm-qemu [1], which is also
based on QEMU and LLVM. Currently, process mode works fine [2], and
we're moving forward to system mode.
Let me briefly introduce our framework here and state what problem we encounter.
What we do is tr [...]