On Tue, Sep 11, 2012 at 01:00:13PM -0400, Don Slutz wrote:
> +        if (next_chain_offset) {
> +            MptSGEntryChain sgec;
> +            cpu_physical_memory_read(seg_start_pa + next_chain_offset,
> +                                     &sgec, sizeof(MptSGEntryChain));
> +            assert(sgec.u2ElementType == MPTSGENTRYTYPE_CHAIN);
> +            next_sge_pa = sgec.u32SegmentAddressLow;
> +            if (sgec.f64BitAddress) {
> +                next_sge_pa |=
> +                    ((uint64_t)sgec.u32SegmentAddressHigh) << 32;
> +            }
> +            seg_start_pa = next_sge_pa;
> +            next_chain_offset = sgec.u8NextChainOffset *
> +                sizeof(uint32_t);
BTW all this logic seems wrong on big-endian hosts. Maybe we don't care short term, but we do long term. I think you need to fix it up using le_to_cpu or something. In particular, this likely means bitfields cannot be used cleanly, so you will not be able to resync lsilogic.h from VirtualBox. The implication, I guess, is that we should just fix up the style to match QEMU.

--
MST
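
For illustration, a minimal sketch of what a bitfield-free, explicitly little-endian parse of that chain element might look like. The raw struct, mask names, and bit positions below are assumptions made up for the sketch, not taken from the MPI spec or from lsilogic.h; le32_to_cpu is QEMU's (from "qemu/bswap.h"), with a local stand-in here so the snippet builds on its own.

#include <assert.h>
#include <stdint.h>

/* Stand-in for QEMU's le32_to_cpu() so the sketch compiles standalone. */
static inline uint32_t sketch_le32_to_cpu(uint32_t v)
{
#if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
    return __builtin_bswap32(v);
#else
    return v;
#endif
}

/* Assumed wire layout: three little-endian 32-bit words, read verbatim
 * from guest memory (e.g. via cpu_physical_memory_read). */
typedef struct {
    uint32_t dword0;     /* length, next-chain offset, flags */
    uint32_t addr_low;
    uint32_t addr_high;  /* only meaningful with 64-bit addressing */
} MptSGEntryChainRaw;

/* Assumed field positions within dword0; purely illustrative. */
#define SGE_TYPE_SHIFT        28
#define SGE_TYPE_MASK         0x3u
#define SGE_TYPE_CHAIN        0x3u
#define SGE_64BIT_ADDR        (1u << 25)
#define SGE_NEXT_CHAIN_SHIFT  16
#define SGE_NEXT_CHAIN_MASK   0xffu

/* Decode one chain element: return the segment address and write the
 * byte offset of the next chain element through *next_chain_offset. */
static uint64_t parse_chain_sge(const MptSGEntryChainRaw *raw,
                                uint32_t *next_chain_offset)
{
    uint32_t dword0 = sketch_le32_to_cpu(raw->dword0);
    uint64_t addr   = sketch_le32_to_cpu(raw->addr_low);

    assert(((dword0 >> SGE_TYPE_SHIFT) & SGE_TYPE_MASK) == SGE_TYPE_CHAIN);

    if (dword0 & SGE_64BIT_ADDR) {
        addr |= (uint64_t)sketch_le32_to_cpu(raw->addr_high) << 32;
    }
    *next_chain_offset =
        ((dword0 >> SGE_NEXT_CHAIN_SHIFT) & SGE_NEXT_CHAIN_MASK) *
        sizeof(uint32_t);
    return addr;
}

The idea is that the caller reads sizeof(MptSGEntryChainRaw) bytes into the struct first and then lets the parser do all byte-order handling in one place, so nothing else in the SG-walk path depends on host endianness.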