On 03/22/2014 12:11 AM, Bjorn Helgaas wrote:
[+cc Rafael, linux-acpi for _PXM questions]
On Thu, Mar 20, 2014 at 9:38 PM, Daniel J Blueman wrote:
On 21/03/2014 06:07, Bjorn Helgaas wrote:
On Thu, Mar 13, 2014 at 5:43 AM, Daniel J Blueman wrote:
For systems with multiple servers and routed fabric ...

On 03/22/2014 01:16 AM, Suravee Suthikulpanit wrote:
On 3/20/2014 10:38 PM, Daniel J Blueman wrote:
On 21/03/2014 06:07, Bjorn Helgaas wrote:
[+cc linux-pci, Myron, Suravee, Kim, Aravind]
On Thu, Mar 13, 2014 at 5:43 AM, Daniel J Blueman wrote:
For systems with multiple servers and routed fabric ...

On 3/20/2014 10:38 PM, Daniel J Blueman wrote:
On 21/03/2014 06:07, Bjorn Helgaas wrote:
[+cc linux-pci, Myron, Suravee, Kim, Aravind]
On Thu, Mar 13, 2014 at 5:43 AM, Daniel J Blueman wrote:
For systems with multiple servers and routed fabric, all northbridges get
assigned to the first server ...

[+cc Rafael, linux-acpi for _PXM questions]
On Thu, Mar 20, 2014 at 9:38 PM, Daniel J Blueman wrote:
> On 21/03/2014 06:07, Bjorn Helgaas wrote:
>> On Thu, Mar 13, 2014 at 5:43 AM, Daniel J Blueman wrote:
>>>
>>> For systems with multiple servers and routed fabric, all northbridges get
>>> assigned to the first server ...

On 21/03/2014 11:51, Suravee Suthikulpanit wrote:
Bjorn,
On a typical AMD system, there are two types of host bridges:
* PCI Root Complex Host bridge (e.g. RD890, SR56xx, etc.)
* CPU Host bridge
Here is an example from a 2-socket system:
$ lspci
[...]
The host bridge 00:00.0 is basically the ...

Bjorn,
On a typical AMD system, there are two types of host bridges:
* PCI Root Complex Host bridge (e.g. RD890, SR56xx, etc.)
* CPU Host bridge
Here is an example from a 2-socket system:
$ lspci
00:00.0 Host bridge: Advanced Micro Devices [AMD] nee ATI RD890 PCI to PCI bridge (external gfx0 ...

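[For reference, a minimal kernel-style sketch of locating the CPU host bridges Suravee describes (the 00:18.x northbridge devices) by vendor/device ID, and printing which PCI bus and NUMA node each sits on. This is not code from the thread; the family 15h function 3 device ID is only an illustrative choice, and real code matches a per-family table of IDs.]

/*
 * Illustrative sketch only: walk the AMD CPU host bridges by vendor and
 * device ID.  PCI_DEVICE_ID_AMD_15H_NB_F3 is used as one example device;
 * production code keys off a table of IDs per CPU family.
 */
#include <linux/pci.h>
#include <linux/topology.h>

static void list_cpu_host_bridges(void)
{
	struct pci_dev *nb = NULL;

	while ((nb = pci_get_device(PCI_VENDOR_ID_AMD,
				    PCI_DEVICE_ID_AMD_15H_NB_F3, nb))) {
		pr_info("CPU host bridge %s: bus %02x, node %d\n",
			pci_name(nb), nb->bus->number,
			pcibus_to_node(nb->bus));
	}
}
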
On 21/03/2014 06:07, Bjorn Helgaas wrote:
[+cc linux-pci, Myron, Suravee, Kim, Aravind]
On Thu, Mar 13, 2014 at 5:43 AM, Daniel J Blueman wrote:
For systems with multiple servers and routed fabric, all northbridges get
assigned to the first server. Fix this by also using the node reported from the PCI bus. ...

[+cc linux-pci, Myron, Suravee, Kim, Aravind]
On Thu, Mar 13, 2014 at 5:43 AM, Daniel J Blueman wrote:
> For systems with multiple servers and routed fabric, all northbridges get
> assigned to the first server. Fix this by also using the node reported from
> the PCI bus. For single-fabric systems, ...

Hi Boris,
On 14/03/2014 17:06, Borislav Petkov wrote:
On Thu, Mar 13, 2014 at 07:43:01PM +0800, Daniel J Blueman wrote:
For systems with multiple servers and routed fabric, all northbridges get
assigned to the first server. Fix this by also using the node reported from
the PCI bus. For single-fabric systems, ...

On Thu, Mar 13, 2014 at 07:43:01PM +0800, Daniel J Blueman wrote:
> For systems with multiple servers and routed fabric, all northbridges get
> assigned to the first server. Fix this by also using the node reported from
> the PCI bus. For single-fabric systems, the northbridges are on PCI bus 0
> by definition, ...

For systems with multiple servers and routed fabric, all northbridges get
assigned to the first server. Fix this by also using the node reported from
the PCI bus. For single-fabric systems, the northbridges are on PCI bus 0
by definition, which are on NUMA node 0 by definition, so this is invariant ...
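
[A minimal sketch of the idea Daniel describes, not the patch itself: take the NUMA node from the PCI bus the northbridge device sits on, and only fall back to a fixed assignment when the firmware reports no proximity information. The fallback to the enumeration index is an assumption for illustration.]

/*
 * Sketch of the approach described above: derive a northbridge's NUMA
 * node from the PCI bus it is attached to.
 */
#include <linux/pci.h>
#include <linux/topology.h>
#include <linux/numa.h>

static int northbridge_node(struct pci_dev *nb_misc, int index)
{
	int node = pcibus_to_node(nb_misc->bus);

	/*
	 * Firmware reported no proximity info (e.g. no usable _PXM):
	 * keep the old behaviour of using the enumeration index.
	 */
	if (node == NUMA_NO_NODE)
		node = index;

	return node;
}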