On 08/15/2014 02:20 PM, David Gibson wrote:
> On Fri, Aug 15, 2014 at 01:09:20PM +1000, Alexey Kardashevskiy wrote:
>> On 08/15/2014 10:04 AM, David Gibson wrote:
>>> On Thu, Aug 14, 2014 at 06:29:50PM +1000, Alexey Kardashevskiy wrote:
>>>> On 08/13/2014 01:27 PM, David Gibson wrote:
>>>>> On Tue, Aug 12, 2014 at 05:25:29PM +1000, Alexey Kardashevskiy wrote:
>>>>>> On 08/12/2014 11:45 AM, David Gibson wrote:
>>>>>>> On Thu, Jul 31, 2014 at 07:34:10PM +1000, Alexey Kardashevskiy wrote:
>>>>> [snip]
>>>>>>> The function of this is kind of unclear. I'm assuming this is
>>>>>>> filtering the supported page sizes reported by the PHB by the possible
>>>>>>> page sizes based on host page size or other constraints. Is that
>>>>>>> right?
>>>>>>>
>>>>>>> I think you'd be better off folding the whole double loop into the
>>>>>>> fixmask function.
>>>>>>>
>>>>>>>> +
>>>>>>>> +    rtas_st(rets, 0, RTAS_OUT_SUCCESS);
>>>>>>>> +    rtas_st(rets, 1, windows_available);
>>>>>>>> +    /* Return maximum number as all RAM was 4K pages */
>>>>>>>> +    rtas_st(rets, 2, ram_size >> SPAPR_TCE_PAGE_SHIFT);
>>>>>>>
>>>>>>> I'm assuming this is the allowed size of the dynamic windows.
>>>>>>> Shouldn't that be reported by a PHB callback, rather than hardcoded
>>>>>>> here?
>>>>>>
>>>>>> Why PHB? This is DMA memory. @ram_size is the upper limit, we can make more
>>>>>> only when we have memory hotplug (which we do not have) and the guest can
>>>>>> create smaller windows if it wants so I do not really follow you here.
>>>>>
>>>>> What I'm not clear on is what this RTAS return actually means. Is it
>>>>> saying the maximum size of the DMA window, or the maximum address
>>>>> which can be mapped by that window? Remember I don't have access to
>>>>> PAPR documentation any more - nor do others reading these patches.
>>>>
>>>>
>>>> It is literally "Largest contiguous block of TCEs allocated specifically
>>>> for (that is, are reserved for) this PE". Which I understand as the maximum
>>>> number of TCEs.
>>>
>>> Ok, so essentially it's a property of the IOMMU. Hrm, I guess
>>> ram_size is good enough for now then.
>>>
>>> [snip]
>>>>> [snip]
>>>>>>>> +static void rtas_ibm_create_pe_dma_window(PowerPCCPU *cpu,
>>>>>>>> +                                          sPAPREnvironment *spapr,
>>>>>>>> +                                          uint32_t token, uint32_t nargs,
>>>>>>>> +                                          target_ulong args,
>>>>>>>> +                                          uint32_t nret, target_ulong rets)
>>>>>>>> +{
>>>>>>>> +    sPAPRPHBState *sphb;
>>>>>>>> +    sPAPRPHBClass *spc;
>>>>>>>> +    sPAPRTCETable *tcet = NULL;
>>>>>>>> +    uint32_t addr, page_shift, window_shift, liobn;
>>>>>>>> +    uint64_t buid;
>>>>>>>> +    long ret;
>>>>>>>> +
>>>>>>>> +    if ((nargs != 5) || (nret != 4)) {
>>>>>>>> +        goto param_error_exit;
>>>>>>>> +    }
>>>>>>>> +
>>>>>>>> +    buid = ((uint64_t)rtas_ld(args, 1) << 32) | rtas_ld(args, 2);
>>>>>>>> +    addr = rtas_ld(args, 0);
>>>>>>>> +    sphb = spapr_pci_find_phb(spapr, buid);
>>>>>>>> +    if (!sphb) {
>>>>>>>> +        goto param_error_exit;
>>>>>>>> +    }
>>>>>>>> +
>>>>>>>> +    spc = SPAPR_PCI_HOST_BRIDGE_GET_CLASS(sphb);
>>>>>>>> +    if (!spc->ddw_create) {
>>>>>>>> +        goto hw_error_exit;
>>>>>>>> +    }
>>>>>>>> +
>>>>>>>> +    page_shift = rtas_ld(args, 3);
>>>>>>>> +    window_shift = rtas_ld(args, 4);
>>>>>>>> +    liobn = sphb->dma_liobn + 0x10000;
>>>>>>>
>>>>>>> Isn't using a fixed LIOBN here assuming you can only have a single DDW
>>>>>>> per PHB? That's true for now, but in theory shouldn't it be reported
>>>>>>> by the PHB code itself?
>>>>>>
>>>>>>
>>>>>> This should be a unique LIOBN so it is not up to PHB to choose. And we
>>>>>> cannot make it completely random for migration purposes. I'll make it
>>>>>> something like
>>>>>>
>>>>>> #define SPAPR_DDW_LIOBN(sphb, windownum) ((sphb)->dma_liobn | windownum)
>>>>>
>>>>> Ok.
>>>>>
>>>>> Really, the assigned liobns should be included in the migration stream
>>>>> if they're not already.
>>>>
>>>> LIOBNs already migrate, liobn itself is an instance id of a TCE table
>>>> object in the migration stream.
>>>
>>> Ok, so couldn't we just add an alloc_liobn() function instead of
>>> hardcoding how the liobns are constructed?
>>
>>
>> No. If we did so, exact numbers would depend on the device order in the
>> QEMU command line - QEMU command line produced by libvirt from handmade XML
>> and QEMU command line produced by libvirt from XML printed by "dumpxml" can
>> have devices in different order, so interrupt numbers, LIOBNs - all of this
>> gets broken.
>
> Ah, duh. Clearly I'm still swapping back in my knowledge of qemu
> migration.
>
> What I was meaning before is that qemu should reconstruct the TCE
> tables / liobns based on the info in the migration stream, which
> remains true, but it can't really be done without more broadly
> addressing the fact that qemu doesn't transmit the hardware
> configuration in the migration stream.
>
> Ok, so what should probably be done here is to explicitly assign
> fields in the LIOBN to "device type", "device id" (vio reg or PHB
> buid), and per-device instance (DDW number for PCI).
>
Yes, this is my new queue:

------------------------------ hw/ppc/spapr_pci.c ------------------------------
index 5c46c0d..17eb0d8 100644
@@ -527,11 +527,11 @@ static void spapr_phb_realize(DeviceState *dev, Error **errp)
                        " be specified for PAPR PHB, not both");
             return;
         }
 
         sphb->buid = SPAPR_PCI_BASE_BUID + sphb->index;
-        sphb->dma_liobn = SPAPR_PCI_BASE_LIOBN + sphb->index;
+        sphb->dma_liobn = SPAPR_PCI_LIOBN(sphb->index, 0);
 
         windows_base = SPAPR_PCI_WINDOW_BASE
             + sphb->index * SPAPR_PCI_WINDOW_SPACING;
         sphb->mem_win_addr = windows_base + SPAPR_PCI_MMIO_WIN_OFF;
         sphb->io_win_addr = windows_base + SPAPR_PCI_IO_WIN_OFF;

---------------------------- include/hw/ppc/spapr.h ----------------------------
index c9d6c6c..92c6e2c 100644
@@ -441,11 +441,11 @@ int spapr_rtas_device_tree_setup(void *fdt, hwaddr rtas_addr,
 #define SPAPR_TCE_PAGE_SHIFT   12
 #define SPAPR_TCE_PAGE_SIZE    (1ULL << SPAPR_TCE_PAGE_SHIFT)
 #define SPAPR_TCE_PAGE_MASK    (SPAPR_TCE_PAGE_SIZE - 1)
 
 #define SPAPR_VIO_BASE_LIOBN    0x00000000
-#define SPAPR_PCI_BASE_LIOBN    0x80000000
+#define SPAPR_PCI_LIOBN(i, n)   (0x80000000 | ((i) << 8) | (n))
 
 #define RTAS_ERROR_LOG_MAX      2048

--
Alexey
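
For illustration of the encoding discussed above, here is a minimal sketch (not part of the posted patch) of how a per-window LIOBN could be derived from the new SPAPR_PCI_LIOBN(i, n) macro, assuming window 0 remains the default 32-bit window and higher window numbers are used for dynamically created windows; the SPAPR_DDW_LIOBN name mirrors the macro suggested earlier in the thread and is illustrative only:

/* Illustrative sketch: build a DDW LIOBN from the PHB index and the window
 * number, reusing the SPAPR_PCI_LIOBN(i, n) encoding from the diff above. */
#define SPAPR_DDW_LIOBN(sphb, windownum) \
        SPAPR_PCI_LIOBN((sphb)->index, (windownum))

With such a helper, rtas_ibm_create_pe_dma_window() could pick SPAPR_DDW_LIOBN(sphb, 1) for the first dynamic window instead of the hardcoded sphb->dma_liobn + 0x10000 shown in the quoted patch, keeping LIOBNs stable across QEMU invocations independently of device ordering on the command line.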