As a follow-up to my previous post: I'm debugging data transfer over the host
parallel port to some custom hardware. The guest OS (which I don't have source
for) sets up the RTC for a very fast 0.25 ms (4096 Hz) tick and runs
exclusively off the interrupt it generates, i.e. all of its subsequent timing
and functions are driven by that interrupt.
Prior to working with the real host parallel port, I hooked the port calls
myself and emulated the custom hardware in qemu. The guest OS runs nearly
perfectly that way.
When the real port is hooked up, the guest OS crashes with a stack overflow.
When I added printf calls and ran against the emulated port again, it also
crashed in the same place.
So I realized it must be that the interrupt is occurring far too often during
the port access, which is why the crash only appeared with the real port, and
with the emulated port once the added printf calls happened to slow things
down a bit.
Does this sound like a reasonable explanation? I'm assuming the RTC callback
that qemu sets up can interrupt even the port access calls to Linux; is that
true, especially at such a high rate?
If so, the question is: what's the best way to slow down, or possibly stop,
the interrupt from occurring while the ports are being read and written?
My thought was to set a global flag while the port call is busy and check it
in the SIGALRM handler, eating the interrupts while the flag is set.
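Roughly what I have in mind, as a minimal standalone sketch (host_port_busy,
pending_ticks and do_port_io() are placeholder names of mine, not anything in
qemu; in practice this logic would go into qemu's existing SIGALRM handler
rather than a new one):

#include <signal.h>

/* Set while the real parallel port is being read or written. */
static volatile sig_atomic_t host_port_busy;
/* Ticks eaten while busy; could optionally be replayed afterwards. */
static volatile sig_atomic_t pending_ticks;

static void alarm_handler(int sig)
{
    (void)sig;
    if (host_port_busy) {
        pending_ticks++;   /* eat the tick instead of injecting the guest IRQ */
        return;
    }
    /* ... normal path: deliver the 4096 Hz RTC tick to the guest ... */
}

static void do_port_io(void)
{
    host_port_busy = 1;
    /* inb()/outb() or ppdev ioctl()s on the real parallel port go here */
    host_port_busy = 0;
}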
Does this sound like the best approach, or am I way off track?
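Or would it be simpler to just block the signal for the duration of the port
access? Something like this (again only a sketch, and it assumes the tick
really is delivered to this thread as SIGALRM):

#include <signal.h>

static void port_io_blocked(void)
{
    sigset_t block, old;

    sigemptyset(&block);
    sigaddset(&block, SIGALRM);

    /* Hold the RTC tick off while the real port is touched... */
    sigprocmask(SIG_BLOCK, &block, &old);
    /* inb()/outb() or ppdev ioctl()s here */
    /* ...and let any pending tick through afterwards. */
    sigprocmask(SIG_SETMASK, &old, NULL);
}

The downside I can see is that ticks blocked this way don't queue up (only one
stays pending), so the guest clock would lose time during long port accesses.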
Thanks for all your help!!
-Steve