> > You're much better off using a bulk-data transfer API that relaxes
> > coherency requirements. IOW, shared memory doesn't make sense for TCG
>
> Rather, tcg doesn't make sense for shared memory smp. But we knew that
> already.
I think TCG SMP is a hard, but soluble problem, especially when
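For illustration, the bulk-data transfer idea above amounts to copy-then-notify rather than coherent concurrent access. A minimal sketch, assuming a hypothetical channel layout (shm_channel and shm_channel_send are made-up names, not an existing API):

/* Hypothetical bulk-transfer channel over a shared region: the sender
   copies the payload in, then raises a doorbell, so the peer never needs
   coherent concurrent access to the buffer. */
#include <stdint.h>
#include <string.h>

struct shm_channel {
    volatile uint32_t doorbell;   /* peer polls this, or gets an interrupt */
    volatile uint32_t len;
    uint8_t data[4096];
};

static void shm_channel_send(struct shm_channel *ch, const void *buf, uint32_t len)
{
    if (len > sizeof(ch->data))
        return;                   /* caller error; a real API would report it */
    memcpy(ch->data, buf, len);
    ch->len = len;
    __sync_synchronize();         /* make the payload visible before the kick */
    ch->doorbell = 1;             /* the notification; an interrupt in a real device */
}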
> >> As of March 2009[1] Intel guarantees that memory reads occur in order
> >> (they may only be reordered relative to writes). It appears AMD do not
> >> provide this guarantee, which could be an interesting problem for
> >> heterogeneous migration.
> >
> > Interesting, but what ordering would c
On 03/10/2010 06:38 AM, Cam Macdonell wrote:
On Tue, Mar 9, 2010 at 5:03 PM, Paul Brook wrote:
In a cross environment that becomes extremely hairy. For example the x86
architecture effectively has an implicit write barrier before every
store, and an implicit read barrier before every load.
On 03/09/2010 11:44 PM, Anthony Liguori wrote:
Ah yes. For cross tcg environments you can map the memory using mmio
callbacks instead of directly, and issue the appropriate barriers there.
Not good enough unless you want to severely restrict the use of shared
memory within the guest.
For i
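Schematically, the mmio-callback approach means every guest access to the shared region is trapped and goes through a handler where the emulator can insert the ordering the guest architecture promises. A rough sketch, assuming full barriers as the conservative choice; these handlers and their signatures are illustrative only, not QEMU's actual callback API:

#include <stdint.h>

static uint8_t *shared_region;   /* host-side mapping of the shared memory */

/* Illustrative access handlers, not QEMU's real callback signatures. */
static uint32_t shm_mmio_read(uint64_t offset)
{
    uint32_t v = *(volatile uint32_t *)(shared_region + offset);
    __sync_synchronize();        /* keep later accesses after this load (acquire) */
    return v;
}

static void shm_mmio_write(uint64_t offset, uint32_t value)
{
    __sync_synchronize();        /* keep earlier accesses before this store (release) */
    *(volatile uint32_t *)(shared_region + offset) = value;
}

The cost is the point made in the reply: every load and store to the region becomes a trap plus a helper call, which is why it severely restricts how the guest can use the shared memory.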
> > In a cross environment that becomes extremely hairy. For example the x86
> > architecture effectively has an implicit write barrier before every
> > store, and an implicit read barrier before every load.
>
> Btw, x86 doesn't have any implicit barriers due to ordinary loads.
> Only stores and
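Rather than depending on the strong ordering x86 happens to give, a guest-visible protocol can make the required ordering explicit, which then works on weakly ordered hosts and under emulation. A minimal flag-plus-payload sketch using GCC/Clang __atomic builtins (the struct and field names are made up; older code would spell the same thing with explicit barrier macros):

#include <stdint.h>

struct mailbox {
    uint32_t payload;
    uint32_t ready;              /* 0 = empty, 1 = payload is valid */
};

/* Producer: publish the payload, then set the flag with release semantics,
   so the payload is globally visible before the flag is. */
static void publish(struct mailbox *m, uint32_t v)
{
    m->payload = v;
    __atomic_store_n(&m->ready, 1, __ATOMIC_RELEASE);
}

/* Consumer: read the flag with acquire semantics, so the payload read
   below cannot be satisfied before the flag is seen set. */
static int try_consume(struct mailbox *m, uint32_t *out)
{
    if (!__atomic_load_n(&m->ready, __ATOMIC_ACQUIRE))
        return 0;
    *out = m->payload;
    return 1;
}

On x86 both the release store and the acquire load compile to plain accesses, which is the implicit ordering discussed above; on other architectures the builtins emit the fences that would otherwise be missing.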
On Tue, Mar 9, 2010 at 3:29 AM, Avi Kivity wrote:
> On 03/08/2010 07:57 PM, Cam Macdonell wrote:
>>
>>> Can you provide a spec that describes the device? This would be useful
>>> for maintaining the code, writing guest drivers, and as a framework
>>> for review.
>>>
>>
>> I'm not sure if you
> However, coherence could be made host-type-independent by the host
> mapping and unmapping pages, so that each page is only mapped into one
> guest (or guest CPU) at a time. Just like some clustering filesystems
> do to maintain coherence.

You're assuming that a TLB flush implies a write barrier
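A rough host-side illustration of the map/unmap idea, assuming the host keeps a separate mapping of the page per guest and revokes one before enabling the other (the function and its name are hypothetical); note that this only buys coherence if the revocation itself is fully ordered, which is exactly the TLB-flush-versus-write-barrier assumption being questioned here:

#include <stddef.h>
#include <sys/mman.h>

/* Hypothetical: ownership of a shared page moves between two guests by
   revoking access through one host mapping before granting it through
   the other, so at most one guest can touch the page at a time. */
static int move_page_ownership(void *from_mapping, void *to_mapping, size_t pagesize)
{
    if (mprotect(from_mapping, pagesize, PROT_NONE) < 0)
        return -1;               /* revoke: the old owner now faults on access */
    /* a real implementation must also flush the old guest's stale TLB
       entries before the new owner is allowed to proceed */
    return mprotect(to_mapping, pagesize, PROT_READ | PROT_WRITE);
}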
Alexander Graf wrote:
> Or we could put in some code that tells the guest the host shm
> architecture and only accept x86 on x86 for now. If anyone cares for
> other combinations, they're free to implement them.
>
> Seriously, we're looking at an interface designed for kvm here. Let's
> plea
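One way to read the "tell the guest the host shm architecture" suggestion is a read-only architecture-ID field in the device that the guest driver checks before trusting the region's ordering. Everything below (the register layout, the ID values) is hypothetical, purely to show the shape of the check:

#include <stdint.h>

/* Hypothetical register block and ID values; nothing like this is part of
   the proposed device. */
enum { SHM_HOST_ARCH_X86 = 1 };

struct shm_dev_regs {
    uint32_t host_arch;          /* set by the host, read-only to the guest */
    /* doorbell / interrupt registers would follow */
};

static int shm_guest_probe(volatile const struct shm_dev_regs *regs)
{
#if defined(__i386__) || defined(__x86_64__)
    return regs->host_arch == SHM_HOST_ARCH_X86 ? 0 : -1;  /* accept x86 on x86 only */
#else
    return -1;                   /* other combinations left unimplemented, as suggested */
#endif
}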
> Support an inter-vm shared memory device that maps a shared-memory object
> as a PCI device in the guest. This patch also supports interrupts between
> guests by communicating over a unix domain socket. This patch applies to
> the qemu-kvm repository.
No. All new devices should be fully qdev based.
Support an inter-vm shared memory device that maps a shared-memory object
as a PCI device in the guest. This patch also supports interrupts between
guests by communicating over a unix domain socket. This patch applies to the
qemu-kvm repository.
This device now creates a qemu character device and
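For context, the "shared-memory object" being mapped is a POSIX shared memory object on the host. A minimal host-side sketch of creating and mapping one (the name and size are examples; the interrupt path over the unix domain socket is not shown); each QEMU instance maps the same object and exposes it to its guest as a PCI BAR:

/* Create and map a POSIX shared memory object. Compile with -lrt on older
   glibc. The object name and size are examples only. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const size_t size = 1 << 20;                       /* 1 MB region */
    int fd = shm_open("/ivshmem-example", O_CREAT | O_RDWR, 0600);
    if (fd < 0 || ftruncate(fd, size) < 0) {
        perror("shm_open/ftruncate");
        return 1;
    }
    void *p = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    printf("shared region mapped at %p\n", p);
    munmap(p, size);
    close(fd);
    return 0;
}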