On Sat, 2005-02-19 at 20:02 -0800, Linus Torvalds wrote:
> 
> On Sat, 19 Feb 2005, Steven Rostedt wrote:
> >
> > On Sat, 2005-02-19 at 18:10 -0800, Linus Torvalds wrote:
> > 
> > > I _think_ it's the code in arch/i386/pci/fixup.c that does this. See the
> > > 
> > >   static void __devinit pci_fixup_transparent_bridge(struct pci_dev *dev)
> > > 
> > > thing, and try to disable it. Maybe that rule is wrong, and triggers much 
> > > too often?
> > > 
> > 
> > Linus,
> > 
> > Thank you very much! That was it.  The following patch made everything
> > look good.
> 
> Ok. I've fired off an email to some Intel people asking what the
> real rules are wrt Intel PCI-PCI bridges. It may be that it's not that 
> particular chip, but some generic rule (like "all Intel bridges act like 
> they are subtractive decode _except_ if they actually have the IO 
> start/stop ranges set" or something like that).
> 
> If anybody on the list can figure the Intel bridge decoding rules out, 
> please holler..
> 
>                       Linus

Actually, I've run into a similar situation on my hardware.  After
looking into it for a while, I'm pretty sure it's actually a transparent
bridge (despite it not indicating such in the programming interface
class code).  Have you heard anything more?

Thanks,
Adam


