Hi Jason,

See below...
> On 7 Apr 2016, at 10:24 AM, Jason Wang <jasow...@redhat.com> wrote:
>
> […]
>
>>>> +Device properties:
>>>> +
>>>> ++----------------+--------------------------------+--------+---------+
>>>> +| Property name  | Description                    | Type   | Default |
>>>> ++----------------+--------------------------------+--------+---------+
>>>> +| subsys_ven     | PCI device Subsystem Vendor ID | UINT16 | 0x8086  |
>>>> +|                |                                |        |         |
>>>> +| subsys         | PCI device Subsystem ID        | UINT16 | 0       |
>>>> +|                |                                |        |         |
>>>> +| vnet           | Use virtio headers or perform  | BOOL   | TRUE    |
>>>> +|                | SW offloads emulation          |        |         |
>>>> ++----------------+--------------------------------+--------+---------+
>>>
>>> You could use PropertyInfo, which has a "description" field, to do this
>>> better (e.g. qdev_prop_vlan). Then there is no need for this file at all.
>>
>> We replaced this file with source code comments.
>> As for the PropertyInfo description, I can't see any way to pass a
>> description to the DEFINE_PROP_* macros. Could you please elaborate on this?
>
> You could use DEFINE_PROP(), which accepts a PropertyInfo parameter.

I see now. Done.

>>>> […]
>>>> +    if (!nc->peer || !qemu_has_vnet_hdr(nc->peer)) {
>>>> +        s->core.has_vnet = false;
>>>
>>> So in fact we're probing the vnet support instead of doing vnet on our
>>> own when the backend does not support it. It seems to me that there's
>>> no need for a "vnet" property.
>>
>> This property is intended for forcibly disabling virtio header support.
>> Possible use cases are verification of the SW offloads logic and working
>> around theoretical interoperability problems.
>
> Ok, so maybe we'd better rename it to "disable_vnet".

Ok. Renamed.

>>>> +        trace_e1000e_cfg_support_virtio(false);
>>>> +        return;
>>>> +    }
>>>> +}
>>>> +

>>>> +    VMSTATE_UINT8(core.tx[1].sum_needed, E1000EState),
>>>> +    VMSTATE_UINT8(core.tx[1].ipcss, E1000EState),
>>>> +    VMSTATE_UINT8(core.tx[1].ipcso, E1000EState),
>>>> +    VMSTATE_UINT16(core.tx[1].ipcse, E1000EState),
>>>> +    VMSTATE_UINT8(core.tx[1].tucss, E1000EState),
>>>> +    VMSTATE_UINT8(core.tx[1].tucso, E1000EState),
>>>> +    VMSTATE_UINT16(core.tx[1].tucse, E1000EState),
>>>> +    VMSTATE_UINT8(core.tx[1].hdr_len, E1000EState),
>>>> +    VMSTATE_UINT16(core.tx[1].mss, E1000EState),
>>>> +    VMSTATE_UINT32(core.tx[1].paylen, E1000EState),
>>>> +    VMSTATE_INT8(core.tx[1].ip, E1000EState),
>>>> +    VMSTATE_INT8(core.tx[1].tcp, E1000EState),
>>>> +    VMSTATE_BOOL(core.tx[1].tse, E1000EState),
>>>> +    VMSTATE_BOOL(core.tx[1].cptse, E1000EState),
>>>> +    VMSTATE_BOOL(core.tx[1].skip_cp, E1000EState),
>>>
>>> You can move the tx state into another structure and use VMSTATE_ARRAY()
>>> here.
>>
>> Not sure I got your point. Could you please provide more details?
>
> I mean, e.g., move skip_cp into e1000e_txd_props and move tx_pkt out of
> e1000_tx. Then you can use VMSTATE_ARRAY to save and load the
> e1000_txd_props array.

I got your point. Done via VMSTATE_STRUCT_ARRAY.

>>>> +    VMSTATE_END_OF_LIST()
>>>> […]
>>>
>>> Should we stop/start the above timers during vm stop/start through the
>>> vm state change handler?
>>
>> Not sure. Is there any reason for this?
>
> Otherwise we may raise an irq during vm stop?

Right. Done.
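For reference, the change is along these lines (just a sketch, not the exact
v4 code; the e1000e_intrmgr_pause()/e1000e_intrmgr_resume() helpers stand in
for whatever stops and re-arms the interrupt moderation timers):

    /* qemu_add_vm_change_state_handler() comes from "sysemu/sysemu.h" */
    static void e1000e_vm_state_change(void *opaque, int running, RunState state)
    {
        E1000ECore *core = opaque;

        if (running) {
            /* re-arm the moderation timers when the VM resumes */
            e1000e_intrmgr_resume(core);
        } else {
            /* stop the timers so no irq can be raised while the VM is stopped */
            e1000e_intrmgr_pause(core);
        }
    }

    /* registered once when the core is initialized */
    qemu_add_vm_change_state_handler(e1000e_vm_state_change, core);
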
> […]

>>>> +
>>>> +static inline void
>>>> +e1000e_tx_ring_init(E1000ECore *core, E1000E_TxRing *txr, int idx)
>>>> +{
>>>> +    static const E1000E_RingInfo i[E1000E_NUM_QUEUES] = {
>>>> +        { TDBAH,  TDBAL,  TDLEN,  TDH,  TDT,  0 },
>>>> +        { TDBAH1, TDBAL1, TDLEN1, TDH1, TDT1, 1 }
>>>> +    };
>>>
>>> Instead of using a static inside a function, why not just use a global
>>> array? Then there's no need for this function and callers can access
>>> it directly.
>>
>> Either way there should be a function that combines the two assignments
>> below. And since this array is used by this function only, it is better
>> kept hidden.
>
> I mean you can avoid the assignment before each transmission by just
> introducing arrays like:
>
>     int tdt[] = {TDT, TDT1};
>
> and then accessing them through qidx, e.g. core->mac[tdt[qidx]].

I'm not sure about this. The current approach requires assigning a single
pointer. The alternative requires assigning a queue index in the same place,
or passing the queue index as a function parameter everywhere. Also, to my
eye the current approach makes the code look simpler (see the P.S. below for
a sketch of both options). What do you think?

>>>> + […]

I'm sending v4,

Thanks,
Dmitry
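P.S. To make the comparison concrete, the two options look roughly like this
(only a sketch with simplified identifiers, not the actual patch code):

    /* Option 1 (global index arrays): every caller needs the queue index */
    static const int tdt_reg[E1000E_NUM_QUEUES] = { TDT, TDT1 };

    core->mac[tdt_reg[qidx]] = tail;

    /* Option 2 (current): resolve the ring once, then pass one pointer around;
     * callees read the register indices through txr */
    E1000E_TxRing txr;
    e1000e_tx_ring_init(core, &txr, qidx);
    e1000e_start_xmit(core, &txr);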