On 8/5/2019 10:58 AM, David Gibson wrote:
On Mon, Aug 05, 2019 at 08:56:40AM +0800, Tao Xu wrote:
On 8/2/2019 2:55 PM, David Gibson wrote:
On Thu, Aug 01, 2019 at 03:52:58PM +0800, Tao Xu wrote:
Introduce MachineClass::auto_enable_numa for machines that want one
implicit NUMA node, and enable it for spapr to fix a broken check in
spapr_validate_node_memory(): when spapr_populate_memory() creates an
implicit node, the validation code still uses the global nb_numa_nodes,
which is 0.

Suggested-by: Igor Mammedov <imamm...@redhat.com>
Suggested-by: Eduardo Habkost <ehabk...@redhat.com>
Signed-off-by: Tao Xu <tao3...@intel.com>

The change here looks fine so,

Acked-by: David Gibson <da...@gibson.dropbear.id.au>

However, I'm not following what check in spapr is broken and why.

Sorry, may be I should update the commit message.

Because in spapr_populate_memory(), if nb_numa_nodes is 0:

     if (!nb_nodes) {
         nb_nodes = 1;
         ramnode.node_mem = machine->ram_size;
         nodes = &ramnode;
     }

it uses a local 'nb_nodes' set to 1 and updates the node info, but
spapr_validate_node_memory() uses the global nb_numa_nodes:

    for (i = 0; i < nb_numa_nodes; i++) {
        if (numa_info[i].node_mem % SPAPR_MEMORY_BLOCK_SIZE) {

so the global count is 0 and the node_mem check is skipped.

Well, not really.  That loop checks that each node's memory size is a
multiple of 256MiB.  But we've already checked that the whole memory
size is a multiple of 256MiB, so in the case of one NUMA node, the
per-node check doesn't actually do anything extra.

And in the "non-NUMA" case, nb_numa_nodes == 0, then I don't believe
numa_info[] is populated anyway, so we couldn't do the check like
this.

Thank you David, I understand.  I will modify the commit message.  Can I keep this patch as a feature, since it lets spapr reuse the generic NUMA code?
