Hi

On Fri, Nov 20, 2020 at 1:28 PM Lin Ma <l...@suse.de> wrote:

> On 2020-11-19 14:46, Marc-André Lureau wrote:
> > Hi
> >
> > On Thu, Nov 19, 2020 at 12:48 PM Lin Ma <l...@suse.com> wrote:
> >
> >> The guest-get-vcpus returns incorrect vcpu info in case we hotunplug
> >> vcpus(not
> >> the last one).
> >> e.g.:
> >> A VM has 4 VCPUs: cpu0 + 3 hotunpluggable online vcpus(cpu1, cpu2 and
> >> cpu3).
> >> Hotunplug cpu2,  Now only cpu0, cpu1 and cpu3 are present & online.
> >>
> >> ./qmp-shell /tmp/qmp-monitor.sock
> >> (QEMU) query-hotpluggable-cpus
> >> {"return": [
> >> {"props": {"core-id": 0, "thread-id": 0, "socket-id": 3},
> >> "vcpus-count": 1,
> >>  "qom-path": "/machine/peripheral/cpu3", "type": "host-x86_64-cpu"},
> >> {"props": {"core-id": 0, "thread-id": 0, "socket-id": 2},
> >> "vcpus-count": 1,
> >>  "qom-path": "/machine/peripheral/cpu2", "type": "host-x86_64-cpu"},
> >> {"props": {"core-id": 0, "thread-id": 0, "socket-id": 1},
> >> "vcpus-count": 1,
> >>  "qom-path": "/machine/peripheral/cpu1", "type": "host-x86_64-cpu"},
> >> {"props": {"core-id": 0, "thread-id": 0, "socket-id": 0},
> >> "vcpus-count": 1,
> >>  "qom-path": "/machine/unattached/device[0]", "type":
> >> "host-x86_64-cpu"}
> >> ]}
> >>
> >> (QEMU) device_del id=cpu2
> >> {"return": {}}
> >>
> >> (QEMU) query-hotpluggable-cpus
> >> {"return": [
> >> {"props": {"core-id": 0, "thread-id": 0, "socket-id": 3},
> >> "vcpus-count": 1,
> >>  "qom-path": "/machine/peripheral/cpu3", "type": "host-x86_64-cpu"},
> >> {"props": {"core-id": 0, "thread-id": 0, "socket-id": 2},
> >> "vcpus-count": 1,
> >>  "type": "host-x86_64-cpu"},
> >> {"props": {"core-id": 0, "thread-id": 0, "socket-id": 1},
> >> "vcpus-count": 1,
> >>  "qom-path": "/machine/peripheral/cpu1", "type": "host-x86_64-cpu"},
> >> {"props": {"core-id": 0, "thread-id": 0, "socket-id": 0},
> >> "vcpus-count": 1,
> >>  "qom-path": "/machine/unattached/device[0]", "type":
> >> "host-x86_64-cpu"}
> >> ]}
> >>
> >> Before:
> >> ./qmp-shell -N /tmp/qmp-ga.sock
> >> Welcome to the QMP low-level shell!
> >> Connected
> >> (QEMU) guest-get-vcpus
> >> {"return": [
> >> {"online": true, "can-offline": false, "logical-id": 0},
> >> {"online": true, "can-offline": true, "logical-id": 1}]}
> >>
> >> After:
> >> ./qmp-shell -N /tmp/qmp-ga.sock
> >> Welcome to the QMP low-level shell!
> >> Connected
> >> (QEMU) guest-get-vcpus
> >> {"execute":"guest-get-vcpus"}
> >> {"return": [
> >> {"online": true, "can-offline": false, "logical-id": 0},
> >> {"online": true, "can-offline": true, "logical-id": 1},
> >> {"online": true, "can-offline": true, "logical-id": 3}]}
> >>
> >> Signed-off-by: Lin Ma <l...@suse.com>
> >> ---
> >>  qga/commands-posix.c | 8 +++++---
> >>  1 file changed, 5 insertions(+), 3 deletions(-)
> >>
> >> diff --git a/qga/commands-posix.c b/qga/commands-posix.c
> >> index 3bffee99d4..accc893373 100644
> >> --- a/qga/commands-posix.c
> >> +++ b/qga/commands-posix.c
> >> @@ -2182,15 +2182,15 @@ GuestLogicalProcessorList
> >> *qmp_guest_get_vcpus(Error **errp)
> >>  {
> >>      int64_t current;
> >>      GuestLogicalProcessorList *head, **link;
> >> -    long sc_max;
> >> +    long max_loop_count;
> >>      Error *local_err = NULL;
> >>
> >>      current = 0;
> >>      head = NULL;
> >>      link = &head;
> >> -    sc_max = SYSCONF_EXACT(_SC_NPROCESSORS_CONF, &local_err);
> >> +    max_loop_count = SYSCONF_EXACT(_SC_NPROCESSORS_CONF, &local_err);
> >>
> >> -    while (local_err == NULL && current < sc_max) {
> >> +    while (local_err == NULL && current < max_loop_count) {
> >>          GuestLogicalProcessor *vcpu;
> >>          GuestLogicalProcessorList *entry;
> >>          int64_t id = current++;
> >> @@ -2206,6 +2206,8 @@ GuestLogicalProcessorList
> >> *qmp_guest_get_vcpus(Error
> >> **errp)
> >>              entry->value = vcpu;
> >>              *link = entry;
> >>              link = &entry->next;
> >> +        } else {
> >> +            max_loop_count += 1;
> >>
> >
> > This looks like a recipe for infinite loop on error.
> Emm... it is possible.
> >
> > Shouldn't we loop over all the /sys/devices/system/cpu/cpu#/ instead?
> Originally I wanted to use fnmatch() to handle the cpu# pattern and
> loop over all of the /sys/devices/system/cpu/cpu#/ entries, but that
> pulls in the header fnmatch.h and complicates things a little.
>
>
Why use fnmatch?
g_dir_open & g_dir_read_name, then you can sscanf for the matching entries.


> >
> > (possibly parse /sys/devices/system/cpu/present, but I doubt it's
> > necessary)
> IMO the 'present' won't help.
>
> I'm about to post the v2; I made a tiny change in it. Please help to
> review.
>
> BTW, local_err will be set in case of error, right? That would avoid
> the infinite loop.
>
> I think it should.


-- 
Marc-André Lureau
