> > -/* FIXME: load/save binding. */
> > -//pci_device_save(&vdev->pci_dev, f);
> > -//msix_save(&vdev->pci_dev, f);
>
> qdev regressed save/restore? What else is broken right now from the
> qdev commit?
>
> I'm beginning to think committing in the state it was in was a mistake.
Paul
On Monday 08 June 2009, Weidong Han wrote:
> When hot-removing an assigned device, a segmentation fault was triggered
> by qemu_free(&pci_dev->qdev) in pci_unregister_device().
> pci_register_device() doesn't initialize or set pci_dev->qdev. For an
> assigned device, the qdev variable isn't touched at all.
On Tuesday 09 June 2009, Han, Weidong wrote:
> Paul Brook wrote:
> > On Monday 08 June 2009, Weidong Han wrote:
> >> When hot-removing an assigned device, a segmentation fault was triggered
> >> by qemu_free(&pci_dev->qdev) in pci_unregister_device().
> >>
On Monday 25 May 2009, Michael S. Tsirkin wrote:
> Add functions implementing MSI-X support. First user will be virtio-pci.
> Note that platform must set a flag to declare MSI supported.
> For PC this will be set by APIC.
This sounds wrong. The device shouldn't know or care whether the system has
an MSI capable interrupt controller. That's for the guest OS to figure out.
> > > Note that platform must set a flag to declare MSI supported.
> > > For PC this will be set by APIC.
> >
> > This sounds wrong. The device shouldn't know or care whether the system
> > has a MSI capable interrupt controller. That's for the guest OS to figure
> > out.
>
> You are right of course.
> > If we really need to avoid MSI-X capable devices then that should be done
> > explicity per-device. i.e. you have a different virtio-net device that
> > does not use MSI-X.
> >
> > Paul
>
> Why should it be done per-device?
Because otherwise you end up with the horrible hacks that you're curr
> > That's seems just plain wrong to me.
> > Loading a VM shouldn't
> > do anything that can't happen during normal operation.
>
> At least wrt pci, we are very far from this state: load just overwrites
> all registers, readonly or not, which can never happen during normal
> operation.
IMO tha
> > caps can be anywhere, but we don't expect it to change during machine
> > execution lifetime.
> >
> > Or I am just confused by the name "pci_device_load" ?
>
> Right. So I want to load an image and it has capability X at offset Y.
> wmask has to match. I don't want to assume that we never chang
On Wednesday 10 June 2009, Michael S. Tsirkin wrote:
> On Wed, Jun 10, 2009 at 04:15:04PM +0100, Paul Brook wrote:
> > > > That's seems just plain wrong to me.
> > > > Loading a VM shouldn't
> > > > do anything that can't happen during
> > If you can't create an identical machine from scratch then I don't
> > consider snapshot/migration to be a useful feature. i.e. as soon as you
> > shutdown and restart the guest it is liable to break anyway.
>
> Why is it liable to break?
A VM booted on an old version of qemu and migrated to a ne
On Wednesday 10 June 2009, Michael S. Tsirkin wrote:
> On Wed, Jun 10, 2009 at 05:46:03PM +0100, Paul Brook wrote:
> > > > If you can't create an identical machine from scratch then I don't
> > > > consider snapshot/migration to be a useful feature. i.e. as soo
> > If we can't start a new qemu with the same hardware configuration then we
> > should not be allowing migration or loading of snapshots.
>
> OK, so I'll add an option in virtio-net to disable msi-x, and such
> an option will be added in any device with msi-x support.
> Will that address your con
On Thursday 18 June 2009, Michael S. Tsirkin wrote:
> Make it possible to resize PCI regions. This will be used by virtio
> with MSI-X, where the region size depends on whether MSI-X is enabled,
> and can change across load/save.
I thought we'd agreed we shouldn't be doing this.
i.e. if the user
On Tuesday 23 June 2009, Avi Kivity wrote:
> On 06/23/2009 12:47 AM, Andre Przywara wrote:
> > KVM defaults to the hypervisor CPUID bit to be set, whereas pure QEMU
> > clears it. On some occasions one want to set or clear it the other way
> round (for instance to get HyperV running inside a guest).
> I agree it's pointless, but it is a Microsoft requirement for passing
> their SVVP tests. Enabling it by default makes life a little easier for
> users who wish to validate their hypervisor and has no drawbacks.
I wasn't arguing against setting it by default (for QEMU CPU types), just
against
> Here are two patches. One implements a virtio-serial device in qemu
> and the other is the driver for a guest kernel.
So I'll ask again. Why is this separate from virtio-console?
Paul
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.ke
On Tuesday 23 June 2009, Amit Shah wrote:
> On (Tue) Jun 23 2009 [13:55:52], Paul Brook wrote:
> > > Here are two patches. One implements a virtio-serial device in qemu
> > > and the other is the driver for a guest kernel.
> >
> > So I'll ask again. Wh
On Tuesday 23 June 2009, Christian Bornträger wrote:
> Am Dienstag 23 Juni 2009 14:55:52 schrieb Paul Brook:
> > > Here are two patches. One implements a virtio-serial device in qemu
> > > and the other is the driver for a guest kernel.
> >
> > So I'll ask a
> >>> The qcow block driver format is no longer maintained and likely
> >>> contains
> >>> serious data corruptors. Urge users to stay away from it, and advertise
> >>> the new and improved replacement.
> >
> > I'm not sure how I feel about this. Can we prove qcow is broken? Is
> > it only broken
> currently SMP guests happen to see vCPUs as different sockets.
> Some guests (Windows comes to mind) have license restrictions and refuse
> to run on multi-socket machines.
> So lets introduce a "cores=" parameter to the -cpu option to let the user
> specify the number of _cores_ the guest shou
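The arithmetic implied by such a "cores=" option can be sketched as follows; the helper name is hypothetical and not taken from the actual patch, and it simply derives how many sockets to expose from the total vCPU count:

```c
#include <assert.h>

/* Hypothetical helper (not from the patch): split a total vCPU count
 * into sockets given a cores-per-socket value.  Any remainder spills
 * into one extra, partially populated socket. */
static int smp_sockets(int vcpus, int cores_per_socket)
{
    return (vcpus + cores_per_socket - 1) / cores_per_socket;
}
```

With cores=4 a guest with 8 vCPUs sees 2 sockets instead of 8 single-core ones, which is what the license-restricted guests care about.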
On Saturday 04 July 2009, Andre Przywara wrote:
> Paul Brook wrote:
> >> currently SMP guests happen to see vCPUs as different sockets.
> >> Some guests (Windows comes to mind) have license restrictions and refuse
> >> to run on multi-socket machines.
> >>
> Right, that part I'm okay with. But the vCont based gdb model presumes
> a unified address space which while usually true for kernel address
> spaces, isn't universally true and certainly not true when PC is in
> userspace. That's what I understood to be the major objection to vCont.
The threa
> As pointed out before, it doesn't break anything but adds a workaround
> for scenarios which are _now_ broken (16/32 bit target code exported as
> 64 bit is widely useless for gdb today). Sorry, but you never explained
> to me how users are _currently_ supposed to debug under those conditions,
> na
On Wednesday 15 July 2009, Anthony Liguori wrote:
> Blue Swirl wrote:
> > I bet this won't compile on win32.
> >
> > Instead of this (IMHO doomed) escape approach, maybe the filename
> > parameter could be specified as the next argument, for example:
> > -hda format=qcow2,blah,blah,filename_is_next
> Instead of using '-drive if=none' we could use some other syntax where
> the filename can be passed as separate argument. Can switches have two
> arguments? If so, maybe this:
>
> -hostdrive $file $options
This only works for a single mandatory argument that needs to contain awkward
characters.
> So I propose this as a universal quoting scheme:
>
> \<char> where <char> is not ASCII alphanumeric.
No thank you. This sounds dangerously like the Windows command shell quoting
rules. At first glance they appear to "just work", but when you actually
try to figure out what's going on it gets horri
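For reference, the proposed rule amounts to roughly the following decoder; this is a hypothetical sketch of the scheme as stated (backslash followed by any non-alphanumeric character stands for that literal character), not code from any patch:

```c
#include <assert.h>
#include <ctype.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical decoder for the proposed scheme: "\X" stands for the
 * literal character X, where X is not ASCII alphanumeric.  Returns the
 * number of bytes written, or -1 on a malformed escape (backslash
 * before an alphanumeric, which the proposal leaves undefined) or on
 * output overflow. */
static int unquote(const char *in, char *out, size_t out_size)
{
    size_t n = 0;
    while (*in) {
        char c = *in++;
        if (c == '\\') {
            if (!*in || isalnum((unsigned char)*in)) {
                return -1;
            }
            c = *in++;   /* take the escaped character literally */
        }
        if (n + 1 >= out_size) {
            return -1;
        }
        out[n++] = c;
    }
    out[n] = '\0';
    return (int)n;
}
```

The objection in the thread is precisely that such rules look simple here but interact badly with every other layer that also interprets backslashes.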
> > In a nutshell, I don't know what a SHPC is (nor OSHP), so I'm looking
> > for an additional Ack.
>
> No problem, I'll get an Ack :)
> Meanwhile - here's a summary, as far as I understand it.
>
> Originally PCI SIG only defined the electrical
> and mechanical requirements for hotplug, no stan
> > > Now an OS can have a standard driver and use it
> > > to activate hotplug functionality. This is OS hotplug (OSHP).
> >
> > So presumably this will work on targets that don't have ACPI?
> > Assuming a competent guest OS of course. Have you tested this?
>
> This being the qemu side of thing
> @@ -682,10 +733,18 @@ void virtio_init_pci(VirtIOPCIProxy *proxy, VirtIODevice *vdev)
>     if (size & (size-1))
>         size = 1 << qemu_fls(size);
>
> +    proxy->bar0_mask = size - 1;
You'll get better performance if you use page-sized mappings. You're already
creating a mapping bigger tha
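The rounding in the quoted hunk can be illustrated in isolation; `fls_like` below stands in for qemu's `qemu_fls` (position of the highest set bit, counting from 1), and the mask derivation mirrors the `size - 1` line:

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for qemu_fls(): 1-based position of the highest set bit. */
static int fls_like(uint32_t v)
{
    int n = 0;
    while (v) {
        v >>= 1;
        n++;
    }
    return n;
}

/* Sketch of the BAR sizing in virtio_init_pci: if the size is not a
 * power of two, round it up to the next one.  The BAR offset mask is
 * then simply size - 1. */
static uint32_t bar_round(uint32_t size)
{
    if (size & (size - 1)) {
        size = 1u << fls_like(size);
    }
    return size;
}
```

A BAR must be a power of two in size for the mask trick to be valid, which is why the rounding happens before the mask is computed.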
> This patch enables USB UHCI global suspend/resume feature. The OS will
> stop the HC once all ports are suspended. If there is activity on the
> port(s), an interrupt signalling remote wakeup will be triggered.
I'm pretty sure this is wrong. Suspend/resume works based on physical
topology, i.e
> On 11/26/10 03:15, Marcelo Tosatti wrote:
> > On Fri, Nov 26, 2010 at 12:38:28AM +0000, Paul Brook wrote:
> >>> This patch enables USB UHCI global suspend/resume feature. The OS will
> >>> stop the HC once all ports are suspended. If there is activity o
> One question I have about Kemari is whether it adds new constraints to
> the QEMU codebase? Fault tolerance seems like a cross-cutting concern
> - everyone writing device emulation or core QEMU code may need to be
> aware of new constraints. For example, "you are not allowed to
> release I/O op
> >> Could you formulate the constraints so developers are aware of them in
> >> the future and can protect the codebase. How about expanding the
> >> Kemari wiki pages?
> >
> > If you like the idea above, I'm happy to make the list also on
> > the wiki page.
>
> Here's a different question: wha
> 2010/11/29 Paul Brook :
> >> >> Could you formulate the constraints so developers are aware of them
> >> >> in the future and can protect the codebase. How about expanding the
> >> >> Kemari wiki pages?
> >> >
> >> > I
> >> To answer Stefan's question, there shouldn't be any requirement
> >> for a device, but must be tested with Kemari. If it doesn't work
> >> correctly, the problems must be fixed before adding to the list.
> >
> > What exactly are the problems? Is this a device bug or a Kemari bug?
> > If it's
> > If devices incorrectly claim support for live migration, then that should
> > also be fixed, either by removing the broken code or by making it work.
>
> I totally agree with you.
>
> > AFAICT your current proposal is just feeding back the results of some
> > fairly specific QA testing. I'd
> >> Sorry, I didn't get what you're trying to tell me. My plan would
> >> be to initially start from a subset of devices, and gradually
> >> grow the number of devices that Kemari works with. During this
> >> process, it'll include what you said above: file a bug and/or fix
> >> the code. Am I m
> > Is this a fair summary: any device that supports live migration works
> > under Kemari?
>
> It might be a fair summary but practically we barely have live migration
> working w/o Kemari. In addition, last I checked Kemari needs additional
> hooks and it will be too hard to keep that out of tree
> On 11/29/2010 10:53 AM, Paul Brook wrote:
> >>> Is this a fair summary: any device that supports live migration works
> >>> under Kemari?
> >>
> >> It might be fair summary but practically we barely have live migration
> >> working w/o Kemar
> This adds a minimum chunk of Anthony's RAM API support so that we
> can identify actual VM RAM versus all the other things that make
> use of qemu_ram_alloc.
Why do we care? How are you defining "actual VM RAM"?
Surely the whole point of qemu_ram_alloc is to allocate a chunk of memory that
can
> However, as I've mentioned repeatedly, the reason I won't merge
> virtio-serial is that it duplicates functionality with virtio-console.
> If the two are converged, I'm happy to merge it. I'm not opposed to
> having more functionality.
I strongly agree.
Paul
> Does this mean that virtio-blk supports all three combinations?
>
> 1. FLUSH that isn't a barrier
> 2. FLUSH that is also a barrier
> 3. Barrier that is not a flush
>
> 1 is good for fsync-like operations;
> 2 is good for journalling-like ordered operations.
> 3 sounds like it doesn't
> The offset given to a block created via qemu_ram_alloc/map() is arbitrary,
> let the caller specify a name so we can make a positive match.
> @@ -1924,7 +1925,9 @@ static int pci_add_option_rom(PCIDevice *pdev)
> +    snprintf(name, sizeof(name), "pci:%02x.%x.rom",
> +             PCI_SLOT(pdev-
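The naming scheme in the quoted hunk can be sketched as below; the `PCI_SLOT`/`PCI_FUNC` macros mirror the standard devfn encoding (slot in the top five bits, function in the low three), and the helper name is illustrative rather than the patch's own:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Standard PCI devfn decoding: 5-bit slot, 3-bit function. */
#define PCI_SLOT(devfn) (((devfn) >> 3) & 0x1f)
#define PCI_FUNC(devfn) ((devfn) & 0x07)

/* Sketch of building a stable RAM block name for a device's option
 * ROM, in the spirit of the quoted snprintf (the exact format in the
 * final patch may differ). */
static void rom_block_name(char *name, size_t size, int devfn)
{
    snprintf(name, size, "pci:%02x.%x.rom",
             PCI_SLOT(devfn), PCI_FUNC(devfn));
}
```

A devfn of slot 4, function 0 yields "pci:04.0.rom", giving migration a name that doesn't depend on allocation order.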
> On 06/08/2010 09:30 PM, Paul Brook wrote:
> >> The offset given to a block created via qemu_ram_alloc/map() is
> >> arbitrary, let the caller specify a name so we can make a positive
> >> match.
> >>
> >>
> >> @@ -1924,7
> > > Not all ram is associated with a device.
> >
> > Maybe not, but where it is we should be using that information.
> > Absolute minimum we should be using the existing qdev address rather than
> > inventing a new one. Duplicating this logic inside every device seems
> > like a bad idea so I s
> * Alex Williamson (alex.william...@redhat.com) wrote:
> > +// XXX check duplicates
>
> Yes, definitely. You created a notion of a hierarchical namespace,
> can this be formalized any more?
We already have one: The qdev tree.
Paul
> Keep in mind, this has to be a stable string across versions of qemu
> since this is savevm/migration. Are we absolutely confident that the
> full qdev path isn't going to change? I'm more confident that a unique
> device name is going to be static across qemu versions.
The actual representati
> > Not really. This identifier is device and bus independent, which is why
> > I suggested passing the device to qemu_ram_alloc. This can then figure
> > out how to identify the device. It should probably do this the same
> > way that we identify the saved state for the device. Currently I
> On Thu, 2010-06-10 at 10:23 +0200, Gerd Hoffmann wrote:
> > > I may have been a bit misleading here. What we really want to do is use
> > > the same matching algorithm as is used by the rest of the device
> > > state. Currently this is a vmstate name and [arbitrary] numeric id. I
> > > don't reme
> > > to identify the device. It should probably do this the same way
> > > that we identify the saved state for the device. Currently I think
> > > this is an arbitrary vmstate name/id, but I expect this to change to a
> > > qdev address (e.g. /i440FX-pcihost/pci.0/_addr_04.0").
> >
> > Ok,
> The trouble I'm running into is that the SaveStateEntry.instance_id is
> effectively private, and there's no easy way to associate a
> SaveStateEntry to a device since it passes an opaque pointer, which
> could be whatever the driver decides it wants. I'm wondering if we
> should pass the Device
> I'm actually liking bdrv_flush_all() less and less. If there are any
> outstanding IO requests, it will increase the down time associated with
> live migration. I think we definitely need to add a live save handler
> that waits until there are no outstanding IO requests to converge. I'm
> conc
On Friday 03 October 2008, Ryan Harper wrote:
> The default buffer size breaks up larger read/write requests unnecessarily.
> When we encounter requests larger than the default dma buffer, reallocate
> the buffer to support the request.
Allocating unboundedly large host buffers based on guest inpu
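The objection is that the guest controls the request size, so a naive reallocation lets the guest drive host memory use. One hedged way to bound it, purely as an illustration of the concern (the cap value and helper are invented, not from the patch under review):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Illustrative cap; an arbitrary example value, not qemu's. */
#define DMA_BUF_MAX ((size_t)1 << 20)   /* 1 MiB */

/* Grow the DMA bounce buffer on demand, but clamp the size so
 * guest-controlled requests cannot force arbitrarily large host
 * allocations.  Oversized requests must be split by the caller. */
static void *dma_buf_realloc(void *buf, size_t *cur, size_t wanted)
{
    if (wanted <= *cur) {
        return buf;
    }
    if (wanted > DMA_BUF_MAX) {
        wanted = DMA_BUF_MAX;
    }
    void *p = realloc(buf, wanted);
    if (p) {
        *cur = wanted;
    }
    return p;
}
```

This keeps the common case (a handful of growths up to the cap) cheap while removing the unbounded-allocation path entirely.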
On Saturday 04 October 2008, Anthony Liguori wrote:
> Paul Brook wrote:
> > On Friday 03 October 2008, Ryan Harper wrote:
> >> The default buffer size breaks up larger read/write requests
> >> unnecessarily. When we encounter requests larger than the default dma
> &g
On Saturday 04 October 2008, Ryan Harper wrote:
> In all, it seems silly to worry about this sort of thing since the
> entire process could be contained with process ulimits if this is really
> a concern. Are we any more concerned that by splitting the requests
> into many smaller requests that we
On Wednesday 15 October 2008, Ryan Harper wrote:
> This patch places the qemu-test framework and tests into the qemu source
> tree. There are a number of components to this patch:
Is there any point having this in the qemu repository?
AFAICS it gains nothing from being "integrated" with qemu. It
On Tuesday 07 April 2009, Daniel Jacobowitz wrote:
> On Tue, Apr 07, 2009 at 08:52:46AM -0500, Anthony Liguori wrote:
> > I think that's going to lead to even more confusion. While I'm inclined
> > to not greatly mind 0.10.99 for the development tree, when we do release
> > candidates for the next
On Wednesday 29 April 2009, Christoph Hellwig wrote:
> On Tue, Apr 28, 2009 at 11:37:01AM -0500, Anthony Liguori wrote:
> > Ah, excellent. I think that's a great thing to do. So do you think
> > virtio-scsi would deprecate virtio-blk?
>
> I don't think so. If you have an image format or a non-sc
On Wednesday 29 April 2009, Christian Borntraeger wrote:
> Am Wednesday 29 April 2009 13:11:19 schrieb Paul Brook:
> > On Wednesday 29 April 2009, Christoph Hellwig wrote:
> > > On Tue, Apr 28, 2009 at 11:37:01AM -0500, Anthony Liguori wrote:
> > > > Ah, excellent. I
On Wednesday 29 April 2009, Christoph Hellwig wrote:
> On Wed, Apr 29, 2009 at 12:11:19PM +0100, Paul Brook wrote:
> > Is this actually measurably faster, or just infinitesimally faster in
> > theory?
>
> On normal disks it's rather theoretical. On high-end SSDs
On Thursday 30 April 2009, Christoph Hellwig wrote:
> On Wed, Apr 29, 2009 at 12:37:20PM +0100, Paul Brook wrote:
> > How exactly does it introduce additional latency? A scsi command block is
> > hardly large or complicated. Are you suggesting that a 16/32byte scsi
> > comman
> I think we need to use the output of 'make headers-install', which
> removes things like __user and CONFIG_*.
Yes. Assuming we do decide to import a set of headers, they should definitely
be the sanitised version created by make headers-install.
Paul
On Tuesday 12 May 2009, Alex Williamson wrote:
> Bit 0 is the enable bit, which we not only don't want to set, but
> it will stick and make us think it's an I/O port resource.
Why is the ROM slot special? Doesn't the same apply to all BARs?
Paul
On Wednesday 20 May 2009, Michael S. Tsirkin wrote:
> define api for allocating/setting up msi-x irqs, and for updating them
> with msi-x vector information, supply implementation in ioapic. Please
> comment on this API: I intend to port my msi-x patch to work on top of
> it.
I thought the point of
> The PCI bus doesn't need any special support (I think) but something on
> the other end needs to interpret those writes.
Sure. But there's definitely nothing PCI specific about it. I assumed this
would all be contained within the APIC.
> In any case we need some internal API for this, and qemu
> >> The PCI bus doesn't need any special support (I think) but something on
> >> the other end needs to interpret those writes.
> >
> > Sure. But there's definitely nothing PCI specific about it. I assumed
> > this would all be contained within the APIC.
>
> MSIs are defined by PCI and their confi
On Thursday 21 May 2009, Avi Kivity wrote:
> Paul Brook wrote:
> >>>> In any case we need some internal API for this, and qemu_irq looks
> >>>> like a good choice.
> >>>
> >>> What do you expect to be using this API?
> >>
> &
> > A tight coupling between PCI devices and the APIC is just going to cause
> > us problems later one. I'm going to come back to the fact that these are
> > memory writes so once we get IOMMU support they will presumably be
> > subject to remapping by that, just like any other memory access.
>
> I
> > MSI provides multiple edge triggered interrupts, whereas traditional mode
> > provides a single level triggered interrupt. My guess is most devices
> > will want to treat these differently anyway.
>
> So, is qemu_send_msi better than qemu_set_irq?
Neither. pci_send_msi, which is a trivial wrapper around stl_phys.
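The point being argued is that an MSI is nothing more than a 32-bit store of the message data to the message address, so the helper reduces to the equivalent of `stl_phys()`. A self-contained sketch (the callback stands in for the physical-memory store, and the function name is the one proposed in the thread, not an existing API):

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for the physical store; in qemu this would be stl_phys(). */
typedef void (*stl_phys_fn)(uint64_t addr, uint32_t val, void *opaque);

/* Sketch of the proposed pci_send_msi(): delivering an MSI is just a
 * 32-bit write of msg_data to msg_addr.  Whatever sits at that address
 * (the APIC on PC) decodes the write into an interrupt. */
static void pci_send_msi_sketch(uint64_t msg_addr, uint32_t msg_data,
                                stl_phys_fn stl, void *opaque)
{
    stl(msg_addr, msg_data, opaque);
}
```

Because the delivery is an ordinary memory write, it is naturally subject to anything else on the memory path, which is exactly the IOMMU argument made later in the thread.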
On Thursday 21 May 2009, Paul Brook wrote:
> > > MSI provides multiple edge triggered interrupts, whereas traditional
> > > mode provides a single level triggered interrupt. My guess is most
> > > devices will want to treat these differently anyway.
> >
>
> > which is a trivial wrapper around stl_phys.
>
> OK, but I'm adding another level of indirection in the middle,
> to allow us to tie in a kvm backend.
kvm has no business messing with the PCI device code.
Paul
On Thursday 21 May 2009, Avi Kivity wrote:
> Paul Brook wrote:
> >>> which is a trivial wrapper around stl_phys.
> >>
> >> OK, but I'm adding another level of indirection in the middle,
> >> to allow us to tie in a kvm backend.
> >
> > kv
> >>> kvm has no business messing with the PCI device code.
> >>
> >> kvm has a fast path for irq injection. If qemu wants to support it we
> >> need some abstraction here.
> >
> > Fast path from where to where? Having the PCI layer bypass/re-implement
> > the APIC and inject the interrupt directl
On Thursday 21 May 2009, Avi Kivity wrote:
> Paul Brook wrote:
> >> kvm implements the APIC in the host kernel (qemu upstream doesn't
> >> support this yet). The fast path is wired to the in-kernel APIC, not
> >> the cpu core directly.
> >>
> >>
> +    /*
> +     * ftruncate is not supported by hugetlbfs in older
> +     * hosts, so don't bother checking for errors.
> +     * If anything goes wrong with it under other filesystems,
> +     * mmap will fail.
> +     */
> +    if (ftruncate(fd, memory))
> +        perror("ftruncate");
Code does not match the comment.
> Support an inter-vm shared memory device that maps a shared-memory object
> as a PCI device in the guest. This patch also supports interrupts between
> guests by communicating over a unix domain socket. This patch applies to
> the qemu-kvm repository.
No. All new devices should be fully qdev based.
> On 03/08/2010 12:53 AM, Paul Brook wrote:
> >> Support an inter-vm shared memory device that maps a shared-memory
> >> object as a PCI device in the guest. This patch also supports
> >> interrupts between guest by communicating over a unix domain socket.
> >
> However, coherence could be made host-type-independent by the host
> mapping and unmapping pages, so that each page is only mapped into one
> guest (or guest CPU) at a time. Just like some clustering filesystems
> do to maintain coherence.
You're assuming that a TLB flush implies a write barrier.
> > In a cross environment that becomes extremely hairy. For example the x86
> > architecture effectively has an implicit write barrier before every
> > store, and an implicit read barrier before every load.
>
> Btw, x86 doesn't have any implicit barriers due to ordinary loads.
> Only stores and
> >> As of March 2009[1] Intel guarantees that memory reads occur in order
> >> (they may only be reordered relative to writes). It appears AMD do not
> >> provide this guarantee, which could be an interesting problem for
> >> heterogeneous migration..
> >
> > Interesting, but what ordering would c
> > You're much better off using a bulk-data transfer API that relaxes
> > coherency requirements. IOW, shared memory doesn't make sense for TCG
>
> Rather, tcg doesn't make sense for shared memory smp. But we knew that
> already.
I think TCG SMP is a hard, but soluble, problem, especially when
> On 03/10/2010 07:41 PM, Paul Brook wrote:
> >>> You're much better off using a bulk-data transfer API that relaxes
> >>> coherency requirements. IOW, shared memory doesn't make sense for TCG
> >>
> >> Rather, tcg doesn't make
> Where does the translator need access to this original code? I was
> just thinking about this problem today, wondering how much overhead
> there is with this SMC page protection thing.
When an MMU fault occurs qemu re-translates the TB with additional annotations
to determine which guest instr
> Oh, well, yes, I remember. qemu is more strict on ISA irq sharing now.
> A bit too strict.
>
> /me goes dig out an old patch which never made it upstream for some
> reason I forgot. Attached.
This is wrong. Two devices should never be manipulating the same qemu_irq
object. If you want mult
> On 03/16/2010 10:10 PM, Blue Swirl wrote:
> >> Yes, and is what tlb_protect_code() does and it's called from
> >> tb_alloc_page() which is what's code when a TB is created.
> >
> > Just a tangential note: a long time ago, I tried to disable self
> > modifying code detection for Sparc. On most R
> Wenhao Xu wrote:
> > Hi, Juan,
> >I am fresh to both QEMU and KVM. But so far, I notice that QEMU
> > uses "KVM_SET_USER_MEMORY_REGION" to set memory region that KVM can
> > use and uses cpu_register_physical_memory_offset to register the same
> > memory to QEMU emulator, which means QEMU an
> > Looks like the tablet is set to 100 Hz polling rate. We may be able
> > to get away with 30 Hz or even less (ep_bInterval, in ms, in
> > hw/usb-wacom.c).
>
> Changing the USB tablet polling interval from 10ms to 100ms in both
> hw/usb-wacom.c and hw/usb-hid.c made no difference except the an i
> > The USB HID devices implement the SET_IDLE command, so the polling
> > interval will have no real effect on performance.
>
> On a Linux guest (F12), I see 125 USB interrupts per second with no
> mouse movement, so something is broken (on the guest or host).
Turns out to be a bug in the UHCI
> > My guess is that the overhead you're seeing is entirely from the USB host
> > adapter having to wake up and check the transport descriptor lists. This
> > will only result in the guest being woken if a device actually responds
> > (as mentioned above it should not).
>
> A quick profile on the
> I immediately reproduced the problem locally. It turns out that
> kvm reflects packets coming from one guest NIC on another guest
> NIC, and since both are connected to the same bridge we're getting
> endless packet storm. To a level when kvm process becomes 100%
> busy and does not respond to
> But I certainly do _not_ want to update the SCSI disk
> emulation, as this is really quite tied to the SCSI parallel
> interface used by the old lsi53c895a.c.
This is completely untrue. scsi-disk.c contains no transport-specific code. It
is deliberately designed to be independent of both the tr
> > > "/main-system-bus/pci.0,addr=09.0/virtio-blk-pci"
There's a device missing between the main system bus and the pci bus. Should
be something like:
/main-system-bus/piix4-pcihost/pci.0/_09.0
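A path like that is just the device names accumulated while walking parent links up to the system bus root. A toy model of the composition (the struct and names are illustrative; real qdev paths are built from bus and device state, not a bare parent pointer):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Toy device node: a name plus a link to the parent in the tree. */
typedef struct Dev {
    const char *name;
    const struct Dev *parent;
} Dev;

/* Build the canonical path by recursing to the root first, then
 * appending "/<name>" on the way back down. */
static void dev_path(const Dev *d, char *buf, size_t size)
{
    if (!d) {
        buf[0] = '\0';
        return;
    }
    dev_path(d->parent, buf, size);
    size_t len = strlen(buf);
    snprintf(buf + len, size - len, "/%s", d->name);
}
```

For the chain main-system-bus → piix4-pcihost → pci.0 → _09.0 this produces exactly the shape of path quoted above.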
> > Could you explain why you add "identified properties of the immediate
> > parent bus and device
> On Mon, 2010-06-14 at 14:09 +0100, Paul Brook wrote:
> > > > > "/main-system-bus/pci.0,addr=09.0/virtio-blk-pci"
> >
> > There's a device missing between the main system bus and the pci bus.
> > Should be something like:
> >
> > /
> On Mon, 2010-06-14 at 18:49 +0200, Jan Kiszka wrote:
> > Alex Williamson wrote:
> > > On Mon, 2010-06-14 at 18:00 +0200, Jan Kiszka wrote:
> > >> And instead of introducing another hierarchy level with the bus
> > >> address, I would also prefer to add this as prefix or suffix to the
> > >> devic
> > > Ok, I can get it down to something like:
> > >
> > > /i440FX-pcihost/pci.0/virtio-blk-pci,09.0
> > >
> > > The addr on the device is initially a little non-intuitive to me since
> > > it's a property of the bus, but I guess it make sense if we think of
> > > that level as slot, which includ
> > In fact what you really want to do is transfer the device tree
> > (including properties), and create the machine from scratch, not load
> > state into a pre-supplied device tree.
>
> Well, I agree, but that's a lot more of an overhaul, and once again
> we're changing the problem.
I think it'
> > Alex proposed to disambiguate by adding "identified properties of the
> > immediate parent bus and device" to the path component. For PCI, these
> > are dev.fn. Likewise for any other bus where devices have unambigous
> > bus address. The driver name carries no information!
>
> From user PO
> >> From user POV, driver names are very handy to address a device
> >> intuitively - except for the case where you have tons of devices on the same
> >> bus that are handled by the same driver. For that case we need to
> >> augment the device name with a useful per-bus ID, derived from the bus
> >> a
> Paul Brook wrote:
> >>>> From user POV, driver names are very handy to address a device
> >>>> intuitively - except for the case where you have tons of devices on the
> >>>> same bus that are handled by the same driver. For that case we need
> >&
> >> Works for serial, but fails for ISA devices not occupying an address.
> >
> > An ISA device without IO/MMIO capabilities seems extremely unlikely.
> > What exactly would such a device do?
>
> Inject interrupts via that bus (while exposing registers in some other
> way). The m48t59 seems t