David S. Miller writes:
> What are these "devices", and what drivers "just program the cards to
> start the dma on those hundred mbyte of ram"?
Hmmm, I have a few cards that are used that way. They are used
for communication between nodes of a cluster.
One might put 16 cards in a system. The ca
At 10:24 PM +0100 2001-05-22, Alan Cox wrote:
>> On the main board, and not just the old ones. These days it's
>> typically in the chipset's south bridge. "Third-party DMA" is
>> sometimes called "fly-by DMA". The ISA card is a slave, as is memory,
>> and the DMA chip reads from one and writ
At 2:02 PM -0700 2001-05-22, Richard Henderson wrote:
>On Tue, May 22, 2001 at 01:48:23PM -0700, Jonathan Lundell wrote:
>> 64KB for 8-bit DMA; 128KB for 16-bit DMA. [...] This doesn't
>> apply to bus-master DMA, just the legacy (8237) stuff.
>
>Would this 8237 be something on the ISA card, or
> On Tue, May 22, 2001 at 05:00:16PM +0200, Andrea Arcangeli wrote:
> > I'm also wondering if ISA needs the sg to start on a 64k boundary,
> Traditionally, ISA could not do DMA across a 64k boundary.
The ISA DMA controller (8237) on the x86 cannot cross a 64K boundary (128K for
16-bit) because it did not carry the 16 bit
> ISA cards can do sg?
AHA1542 SCSI for one. It wasn't that uncommon.
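For illustration, a minimal sketch of the boundary rule described above: a
buffer handed to the legacy 8237 must not cross a 64K physical boundary (128K
for the 16-bit channels), because the address counter cannot carry into the
page register. The helper name and the explicit boundary argument are
invented for this sketch, not taken from the kernel.

/* Sketch only: does [bus_addr, bus_addr+len) cross a 64K (or 128K)
 * boundary?  If so, legacy ISA DMA cannot transfer it in one shot. */
static int isa_dma_crosses_boundary(unsigned long bus_addr,
				    unsigned long len,
				    unsigned long boundary)
{
	/* start and last byte must fall in the same boundary-sized
	 * region, since the 8237 cannot carry out of its counter */
	return (bus_addr & ~(boundary - 1)) !=
	       ((bus_addr + len - 1) & ~(boundary - 1));
}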
Hi!
> > > [..] Even sparc64's fancy
> > > iommu-based pci_map_single() always succeeds.
> >
> > Whatever sparc64 does to hide the driver bugs you can break it if you
> > pci_map 4G+1 bytes of physical memory.
>
> Which is an utterly stupid thing to do.
>
> Please construct a plausible sit
On Tue, May 22, 2001 at 01:48:23PM -0700, Jonathan Lundell wrote:
> 64KB for 8-bit DMA; 128KB for 16-bit DMA. [...] This doesn't
> apply to bus-master DMA, just the legacy (8237) stuff.
Would this 8237 be something on the ISA card, or something on
the old pc mainboards? I'm wondering if we can
On Tue, May 22, 2001 at 04:40:17PM -0400, Jeff Garzik wrote:
> ISA cards can do sg?
No, but the host iommu can. The ISA card sees whatever
view of memory is presented to it by the iommu.
r~
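To make the point concrete, a hedged sketch of how a driver would use the
2.4 pci_map_sg() interface here: the card itself has no scatter-gather
ability, but on an IOMMU platform the returned segment count can collapse,
and the card is simply programmed with the resulting bus range. Error
handling and the rest of the driver are omitted; the function name is
invented.

#include <linux/pci.h>
#include <asm/scatterlist.h>

static int map_for_dumb_card(struct pci_dev *pdev,
			     struct scatterlist *sg, int nents)
{
	int count = pci_map_sg(pdev, sg, nents, PCI_DMA_TODEVICE);

	/* With an IOMMU the mapping may be fully coalesced: count can be
	 * 1, and the card only needs sg_dma_address(&sg[0]) and
	 * sg_dma_len(&sg[0]). */
	return count;
}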
At 1:28 PM -0700 2001-05-22, Richard Henderson wrote:
>On Tue, May 22, 2001 at 05:00:16PM +0200, Andrea Arcangeli wrote:
>> I'm also wondering if ISA needs the sg to start on a 64k boundary,
>
>Traditionally, ISA could not do DMA across a 64k boundary.
>
>The only ISA card I have (a soundblaster
On Tue, May 22, 2001 at 05:00:16PM +0200, Andrea Arcangeli wrote:
> I'm also wondering if ISA needs the sg to start on a 64k boundary,
Traditionally, ISA could not do DMA across a 64k boundary.
The only ISA card I have (a soundblaster compatible) appears
to work without caring for this, but I su
At 11:12 PM +1200 2001-05-22, Chris Wedgwood wrote:
>On Mon, May 21, 2001 at 03:19:54AM -0700, David S. Miller wrote:
>
> Electrically (someone correct me, I'm probably wrong) PCI is
> limited to 6 physical plug-in slots I believe, let's say it's 8
> to choose an arbitrary larger numbe
On Tue, May 22, 2001 at 07:55:18PM +0400, Ivan Kokshaysky wrote:
> Yes. Though those races more likely would cause silent data
> corruption, but not immediate crash.
Ok. I wasn't sure if it was crashing or not for you.
Andrea
On Tue, May 22, 2001 at 06:44:09PM +0400, Ivan Kokshaysky wrote:
> On Tue, May 22, 2001 at 04:29:16PM +0200, Andrea Arcangeli wrote:
> > Ivan could you test the above fix on the platforms that needs the
> > align_entry hack?
>
> That was one of the first things I noticed, and I've tried exactly
>
On Tue, May 22, 2001 at 04:29:16PM +0200, Andrea Arcangeli wrote:
> Ivan could you test the above fix on the platforms that needs the
> align_entry hack?
That was one of the first things I noticed, and I've tried exactly
that (2 instead of ~1UL).
No, it wasn't the cause of the crashes on pyxis, s
While merging all the recent fixes into my tree, and while reserving the
pci32 space above -1M to get a dynamic window of almost 1G without
dropping the direct window, I noticed and fixed a severe bug, and
so now I started to wonder whether the real reason for the crash when an
invalid entry is cache
On Mon, May 21, 2001 at 10:53:39AM -0700, Richard Henderson wrote:
> diff -ruNp linux/arch/alpha/kernel/pci_iommu.c
>linux-new/arch/alpha/kernel/pci_iommu.c
> --- linux/arch/alpha/kernel/pci_iommu.c Fri Mar 2 11:12:07 2001
> +++ linux-new/arch/alpha/kernel/pci_iommu.c Mon May 21 01:25:25
On Mon, May 21, 2001 at 10:53:39AM -0700, Richard Henderson wrote:
> should probably just go ahead and allocate the 512M or 1G
> scatter-gather arena.
I just got a bug report in my mailbox about pci_map failures even after
I enlarged the window to 1G argghh (at first it looked apparently stable
by
On Mon, May 21 2001, Andi Kleen wrote:
> On Mon, May 21, 2001 at 03:00:24AM -0700, David S. Miller wrote:
> > > That's currently the case, but at least on IA32 the block layer
> > > must be fixed soon because it's a serious performance problem in
> > > some cases (and fixing it is not very hard
On Mon, May 21, 2001 at 03:51:51PM +0400, Ivan Kokshaysky wrote:
> I'm unable to reproduce it with *8Mb* window, so I'm asking.
Me neither. But Tom Vier, the guy who started this thread,
was able to use up the 8MB. Which is completely believable.
The following should alleviate the situation on these
On Mon, May 21, 2001 at 06:55:29AM -0700, Jonathan Lundell wrote:
> 8 slots (and you're right, 6 is a practical upper limit, fewer for
> 66 MHz) *per bus*. Buses can proliferate like crazy, so the slot
> limit becomes largely irrelevant.
True, but the bandwidth limit is highly relevant. That's
At 3:19 AM -0700 2001-05-21, David S. Miller wrote:
>This is totally wrong in two ways.
>
>Let me fix this, the IOMMU on these machines is per PCI bus, so this
>figure should be drastically lower.
>
>Electrically (someone correct me, I'm probably wrong) PCI is limited
>to 6 physical plug-in slots
Andrea Arcangeli wrote:
> On Mon, May 21, 2001 at 04:04:28AM -0700, David S. Miller wrote:
> > How many physical PCI slots on a Tsunami system? (I know the
>
> on tsunamis probably not many, but on a Typhoon (the one in the es40
> that is the 4-way extension) I don't know, but certainly the box
On Mon, May 21, 2001 at 01:19:59PM +0200, Andrea Arcangeli wrote:
> Alpha in mainline is just screwed up if a single pci bus tries to dynamically
> map more than 128mbyte, changing it to 512mbyte is trivial, growing more
Could you just describe the configuration where increasing sg window
from 128 to
Andi Kleen writes:
> How about a new function (pci_nonrepresentable_address() or whatever)
> that returns true when page cache contains pages that are not representable
> physically as void *. On IA32 it would return true only if CONFIG_PAE is
> true and there is memory >4GB.
No, if we're
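A sketch of what the proposed predicate might look like on IA32, going only
by the description quoted above; the function does not exist in the tree,
and the body (using max_pfn to detect memory above 4GB) is a guess, not code
from any patch in this thread.

#include <asm/page.h>

#ifdef CONFIG_X86_PAE
extern unsigned long max_pfn;	/* one past the highest page frame present */

static inline int pci_nonrepresentable_address(void)
{
	/* physical memory above 4GB exists iff some pfn is at or beyond
	 * the 4GB mark */
	return max_pfn > (1UL << (32 - PAGE_SHIFT));
}
#else
static inline int pci_nonrepresentable_address(void)
{
	return 0;
}
#endif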
On Mon, May 21, 2001 at 04:04:28AM -0700, David S. Miller wrote:
> How many physical PCI slots on a Tsunami system? (I know the
on tsunamis probably not many, but on a Typhoon (the one in the es40
that is the 4-way extension) I don't know, but certainly the box is
large.
Andrea
On Mon, May 21, 2001 at 03:59:58AM -0700, David S. Miller wrote:
> This still leaves around 800MB IOMMU space free on that sparc64 PCI
> controller.
if it was 400mbyte you were screwed too, the point here is that the
margin is way too small to allow ignoring the issue completely, furthermore there
can
Andrea Arcangeli writes:
> On Mon, May 21, 2001 at 03:19:54AM -0700, David S. Miller wrote:
> > max bytes per bttv: max_gbuffers * max_gbufsize
> >64 * 0x208000 == 133.12MB
> >
> > 133.12MB * 8 PCI slots == ~1.06 GB
> >
> > Which is still only half of the t
On Mon, May 21, 2001 at 03:19:54AM -0700, David S. Miller wrote:
> max bytes per bttv: max_gbuffers * max_gbufsize
> 64 * 0x208000 == 133.12MB
>
> 133.12MB * 8 PCI slots == ~1.06 GB
>
> Which is still only half of the total IOMMU space available per
> controller.
Andrea Arcangeli writes:
> On Mon, May 21, 2001 at 03:11:52AM -0700, David S. Miller wrote:
> > I think such designs which gobble up a gig or so of DMA mappings on
>
> they map something like 200mbyte I think. I have also seen other cards doing
> the same kind of stuff again for the distributed
On Mon, May 21, 2001 at 03:11:52AM -0700, David S. Miller wrote:
> I think such designs which gobble up a gig or so of DMA mappings on
they map something like 200mbyte I think. I have also seen other cards doing
the same kind of stuff again for the distributed computing.
> to be using dual address c
Andi Kleen writes:
> On Mon, May 21, 2001 at 03:34:50AM -0700, David S. Miller wrote:
> > egrep illegal_highdma net/core/dev.c
>
> There is just no portable way for the driver to figure out if it should
> set this flag or not. e.g. acenic.c gets it wrong: it is unconditionally
> set even o
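As a sketch of the check being asked for (and that acenic-style drivers
skip), using only the 2.4-era pci_dma_supported() predicate: only advertise
NETIF_F_HIGHDMA when the device claims it can reach any physical address.
Whether that predicate is actually portable across ports, and what mask
width it takes on each of them, is exactly the problem raised here, so treat
this as an illustration rather than a recipe.

#include <linux/pci.h>
#include <linux/netdevice.h>

static void maybe_enable_highdma(struct net_device *dev, struct pci_dev *pdev)
{
	/* Only set the flag if the card can address highmem pages,
	 * including physical memory above 4GB on PAE systems. */
	if (pci_dma_supported(pdev, 0xffffffffffffffffULL))
		dev->features |= NETIF_F_HIGHDMA;
}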
> This without considering bttv and friends are not even trying to use the
> pci_map_* yet, I hope you don't watch TV on your sparc64 if you have
> enough ram.
The bttv devel versions[1] are fixed already, they should work
out-of-the box on sparc too. Just watching TV is harmless (needs
lots
On Mon, May 21, 2001 at 03:34:50AM -0700, David S. Miller wrote:
>
> Andi Kleen writes:
> > [BTW, the 2.4.4 netstack does not seem to make any attempt to handle the
> > pagecache > 4GB case on IA32 for sendfile, as the pci_* functions are dummies
> > here. It probably needs bounce buffers th
On Mon, May 21, 2001 at 03:00:24AM -0700, David S. Miller wrote:
> > That's currently the case, but at least on IA32 the block layer
> > must be fixed soon because it's a serious performance problem in
> > some cases (and fixing it is not very hard).
>
> If such a far reaching change goes into
> I may well be wrong on this because I haven't checked anything about
> the GART yet but I suspect you cannot use the GART for this stuff on
> ia32 in 2.4 because I think I recall it does not provide a huge margin of
> mapping entries, and so would far too easily trigger the bugs in the
> device d
David S. Miller writes:
>
> 1) I showed you in a private email that I calculated the
>    maximum possible IOMMU space that one could allocate
>    to bttv cards in a fully loaded Sunfire sparc64 system
>    to be between 300MB and 400MB. This is assuming that
>    every PCI slot containe
Andrea Arcangeli writes:
> I just given you a test case that triggers on sparc64 in earlier email.
If you are talking about the bttv card:
1) I showed you in a private email that I calculated the
maximum possible IOMMU space that one could allocate
to bttv cards in a fully loaded Sunfire
On Mon, May 21, 2001 at 11:42:16AM +0200, Andi Kleen wrote:
> [actually most IA32 boxes already have one in form of the AGP GART, it's just
> not commonly used for serious things yet]
I may well be wrong on this because I haven't checked anything about
the GART yet but I suspect you cannot use t
Andi Kleen writes:
> > Certainly, when this changes, we can make the interfaces adapt to
> > this.
>
> I am just curious why you didn't consider that case when designing the
> interfaces. Was that a deliberate decision or just an oversight?
> [I guess the first, but why?]
I didn't want th
On Mon, May 21, 2001 at 02:30:09AM -0700, David S. Miller wrote:
>
> Andi Kleen writes:
> > On the topic of to the PCI DMA code: one thing I'm missing
> > are pci_map_single()/pci_map_sg() that take struct page * instead of
> > of direct pointers. Currently I don't see how you would implement
Andi Kleen writes:
> On the topic of to the PCI DMA code: one thing I'm missing
> are pci_map_single()/pci_map_sg() that take struct page * instead
> of direct pointers. Currently I don't see how you would implement IO-MMU IO
> on a 32bit box with more than 4GB of memory, because the addre
On the topic of to the PCI DMA code: one thing I'm missing
are pci_map_single()/pci_map_sg() that take struct page * instead
of direct pointers. Currently I don't see how you would implement IO-MMU IO
on a 32bit box with more than 4GB of memory, because the address won't
fit into the pointer.
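For concreteness, a sketch of the interface being asked for: a mapping call
that takes a struct page * plus an offset instead of a kernel virtual
pointer, so a highmem page living above 4GB (which has no permanent void *
mapping) can still be fed to an IO-MMU. This is illustration only; the
prototype mirrors what mainline eventually added as pci_map_page(), which
did not exist at the time of this thread.

#include <linux/pci.h>
#include <linux/mm.h>

/* hypothetical at the time of this thread */
dma_addr_t pci_map_page(struct pci_dev *hwdev, struct page *page,
			unsigned long offset, size_t size, int direction);

/* a driver would then map a page-cache page for sendfile roughly so: */
static dma_addr_t map_fragment(struct pci_dev *pdev, struct page *page,
			       unsigned long off, size_t len)
{
	return pci_map_page(pdev, page, off, len, PCI_DMA_TODEVICE);
}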
On Mon, May 21, 2001 at 12:05:40AM -0700, David S. Miller wrote:
> together. And it was agreed upon that the routines will not allow
> failure in 2.4.x and we would work on resolving this in 2.5.x and no
> sooner.
I'm glad you at least considered fixing all those bugs for 2.5 but
that won't
Andrea Arcangeli writes:
> Tell me a best way to get rid of those bugs all together if you can.
Please give me a test case that triggers the bug on sparc64
and I will promptly work on a fix, ok?
I mean a test case you _actually_ trigger, not some fantasy case.
In theory it can happen, but nob
> > Look at the history of kernel API's over time. Everything that can
> > go wrong eventually does.
>
> I agree, and it will be dealt with in 2.5.x
>
> The scsi layer in 2.4.x is simply not able to handle failure in these
> code paths, as Gerard Roudier has mentioned.
On that I am unconvince
Alan Cox writes:
> Pages allocated in main memory and mapped for access by PCI devices. On some
> HP systems there is no way for such a page to stay coherent. It is quite
> possible to sync the view but there is no sane way to allow any
> pci_alloc_consistent to succeed
This is not what the
Alan Cox writes:
> Ok how about a PIV Xeon with 64Gb of memory and 5 AMI Megaraids, which are
> limited to the low 2Gb range for pci mapping and otherwise need bounce buffers.
> Or how about any consistent alloc on certain HP machines which totally lack
> coherency - also I suspect the R10K o
> Alan Cox writes:
> > And how do you propose to implement cache coherent pci allocations
> > on machines which lack the ability to have pages coherent between
> > I/O and memory space ?
>
> Pages, being in memory space, are never in I/O space.
Ok my fault. Let me try that again with clearer L
> What are these "devices", and what drivers "just program the cards to
> start the dma on those hundred mbyte of ram"?
>
> Are we designing Linux for hypothetical systems with hypothetical
> devices and drivers, or for the real world?
Ok how about a PIV Xeon with 64Gb of memory and 5 AMI Megara
> Andrew Morton writes:
> > Well this is news to me. No drivers understand this.
> > How long has this been the case? What platforms?
>
> The DMA interfaces may never fail and I've discussed this over and
> over with port maintainers a _long_ time ago.
And how do you propose to implement cach
Andrea Arcangeli writes:
> Assume I have a dozen PCI cards that do DMA using SG tables that
> can map up to some hundred mbytes of ram each, so I can just program
> the cards to start the dma on those hundred mbyte of ram, most of the
> time the I/O is not simultaneous, but very rarely
On Sun, May 20, 2001 at 06:03:44PM -0700, David S. Miller wrote:
> But for the time being, everyone assumes address zero is not valid and
> it shouldn't be too painful to reserve the first page of DMA space
> until we fix this issue.
Indeed, virtually all PCI systems have legacy PeeCee compatibil
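A sketch of the workaround being suggested: if bus address 0 is going to
double as the pci_map_single() error return, the IOMMU arena simply never
hands out the entry that would translate to bus address 0. The structure and
helper below are loosely modeled on the alpha arena code but are invented
for this sketch, not copied from it.

struct iommu_arena_sketch {
	unsigned long dma_base;		/* bus address of PTE 0 */
	unsigned long *ptes;		/* software copy of the SG PTEs */
	long next_entry;		/* next allocation hint */
};

static void arena_reserve_page_zero(struct iommu_arena_sketch *arena)
{
	if (arena->dma_base == 0) {
		/* burn the first PTE so no mapping ever comes back as 0 */
		arena->ptes[0] = ~0UL;
		arena->next_entry = 1;
	}
}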
On Sun, May 20, 2001 at 06:01:40PM -0700, David S. Miller wrote:
>
> Andrea Arcangeli writes:
> > > Well this is news to me. No drivers understand this.
> >
> > Yes, almost all drivers are buggy.
>
> No, the interface says that the DMA routines may not return failure.
The alpha returns a f
On Sun, May 20, 2001 at 06:07:17PM -0700, David S. Miller wrote:
>
> Andrea Arcangeli writes:
> > > [..] Even sparc64's fancy
> > > iommu-based pci_map_single() always succeeds.
> >
> > Whatever sparc64 does to hide the driver bugs you can break it if you
> > pci_map 4G+1 bytes of physical
Andrea Arcangeli writes:
> > [..] Even sparc64's fancy
> > iommu-based pci_map_single() always succeeds.
>
> Whatever sparc64 does to hide the driver bugs you can break it if you
> pci_map 4G+1 bytes of physical memory.
Which is an utterly stupid thing to do.
Please construct a plausible
Andrea Arcangeli writes:
> > Well this is news to me. No drivers understand this.
>
> Yes, almost all drivers are buggy.
No, the interface says that the DMA routines may not return failure.
If you want to change the DMA api to act some other way, then fine
please propose it, but do not act
On Sun, May 20, 2001 at 04:05:18PM +0400, Ivan Kokshaysky wrote:
> Ok. What do you think about reorg like this:
> basically leave the old code as is, and add
> 	if (is_pyxis)
> 		alpha_mv.mv_pci_tbi = cia_pci_tbi_try2;
> 	else
> 		tbia test
>
On Sun, May 20, 2001 at 01:16:25PM -0400, Jeff Garzik wrote:
> Andrea Arcangeli wrote:
> >
> > On Sun, May 20, 2001 at 03:49:58PM +0200, Andrea Arcangeli wrote:
> > > they returned zero. You either have to drop the skb or to try again later
> > > if they return zero.
> >
> > BTW, pci_map_single
On Sun, 20 May 2001, Ivan Kokshaysky wrote:
> On Sun, May 20, 2001 at 04:40:13AM +0200, Andrea Arcangeli wrote:
> > I was only talking about when you get the "pci_map_sg failed" because
> > you have not 3 but 300 scsi disks connected to your system and you are
> > writing to all them at the sam
Andrea Arcangeli wrote:
>
> On Sun, May 20, 2001 at 03:49:58PM +0200, Andrea Arcangeli wrote:
> > they returned zero. You either have to drop the skb or to try again later
> > if they return zero.
>
> BTW, pci_map_single is not a nice interface, it cannot return bus
> address 0,
who says?
A
On Mon, May 21, 2001 at 02:54:16AM +1000, Andrew Morton wrote:
> No. Most of the pci_map_single() implementations just
> use virt_to_bus()/virt_to_phys(). [..]
then you are saying that on the platforms without an iommu the pci_map_*
cannot fail, of course, furthermore even a missing pci_unmap cann
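For reference, this is roughly what the no-IOMMU pci_map_single() looks like
on i386 in 2.4: a straight virt_to_bus() with no failure path at all, which
is why the "it cannot fail" assumption holds on such platforms and only
becomes dangerous where an IOMMU arena can run out of entries. Paraphrased
from memory of the i386 header, not quoted from it.

/* virt_to_bus() and flush_write_buffers() come from <asm/io.h> */
static inline dma_addr_t pci_map_single(struct pci_dev *hwdev, void *ptr,
					size_t size, int direction)
{
	if (direction == PCI_DMA_NONE)
		BUG();
	flush_write_buffers();		/* order CPU stores vs. the DMA */
	return virt_to_bus(ptr);	/* 1:1 mapping, nothing can fail */
}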
Andrea Arcangeli wrote:
>
> > I can't find *any* pci_map_single() in the 2.4.4-ac9 tree
> > which can fail, BTW.
>
> I assume you mean that no one single caller of pci_map_single is
> checking if it failed or not (because all pci_map_single can fail).
No. Most of the pci_map_single() implement
On Mon, May 21, 2001 at 02:21:18AM +1000, Andrew Morton wrote:
> Andrea Arcangeli wrote:
> Would it not be sufficient to define a machine-specific
> macro which queries it for error? On x86 it would be:
>
> #define BUS_ADDR_IS_ERR(addr) ((addr) == 0)
that would be more flexible at least, howeve
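A sketch of why a per-architecture macro is the more flexible option: each
port can pick whatever cookie its mapping code can genuinely never return.
The x86 line is the one from the quoted mail; the alternative definition is
a made-up example for a port that wants to keep bus address 0 usable.

#ifdef CONFIG_SOME_PORT_THAT_CAN_DMA_TO_0	/* hypothetical */
#define BUS_ADDR_IS_ERR(addr)	((addr) == ~(dma_addr_t)0)
#else
#define BUS_ADDR_IS_ERR(addr)	((addr) == 0)	/* x86, as quoted above */
#endif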
Andrea Arcangeli wrote:
>
> On Sun, May 20, 2001 at 03:49:58PM +0200, Andrea Arcangeli wrote:
> > they returned zero. You either have to drop the skb or to try again later
> > if they return zero.
>
> BTW, pci_map_single is not a nice interface, it cannot return bus
> address 0, so once we star
On Sun, May 20, 2001 at 03:49:58PM +0200, Andrea Arcangeli wrote:
> they returned zero. You either have to drop the skb or to try again later
> if they return zero.
BTW, pci_map_single is not a nice interface, it cannot return bus
address 0, so once we start the fixage it is probably better to c
On Mon, May 21, 2001 at 12:05:20AM +1000, Andrew Morton wrote:
> Andrea Arcangeli wrote:
> >
> > [ cc'ed to l-k ]
> >
> > > DMA-mapping.txt assumes that it cannot fail.
> >
> > DMA-mapping.txt is wrong. Both pci_map_sg and pci_map_single failed if
> > they returned zero. You either have to drop
Andrea Arcangeli wrote:
>
> [ cc'ed to l-k ]
>
> > DMA-mapping.txt assumes that it cannot fail.
>
> DMA-mapping.txt is wrong. Both pci_map_sg and pci_map_single failed if
> they returned zero. You either have to drop the skb or to try again later
> if they return zero.
>
Well this is news to
[ cc'ed to l-k ]
> DMA-mapping.txt assumes that it cannot fail.
DMA-mapping.txt is wrong. Both pci_map_sg and pci_map_single failed if
they returned zero. You either have to drop the skb or to try again later
if they return zero.
Andrea
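A minimal sketch of the rule stated above, in a 2.4-style hard_start_xmit
path: treat a zero return from pci_map_single() as failure and drop the skb
instead of handing the card a bogus bus address. The function and the use of
dev->priv are invented for illustration; drivers of this period did not
actually perform the check, which is the whole complaint.

#include <linux/pci.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>

static int sketch_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
	struct pci_dev *pdev = dev->priv;	/* assume priv holds our pci_dev */
	dma_addr_t mapping;

	mapping = pci_map_single(pdev, skb->data, skb->len, PCI_DMA_TODEVICE);
	if (mapping == 0) {
		/* mapping failed (e.g. IOMMU arena exhausted): drop it */
		dev_kfree_skb(skb);
		return 0;
	}

	/* ... program a descriptor with 'mapping' and kick the card ... */
	return 0;
}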
On Sun, May 20, 2001 at 04:12:34PM +0400, Ivan Kokshaysky wrote:
> On Sun, May 20, 2001 at 04:40:13AM +0200, Andrea Arcangeli wrote:
> > I was only talking about when you get the "pci_map_sg failed" because
> > you have not 3 but 300 scsi disks connected to your system and you are
> > writing to a
On Sun, May 20, 2001 at 04:40:13AM +0200, Andrea Arcangeli wrote:
> I was only talking about when you get the "pci_map_sg failed" because
> you have not 3 but 300 scsi disks connected to your system and you are
> writing to all of them at the same time allocating zillions of ptes, and one
> of your dri
On Sat, May 19, 2001 at 06:11:27PM -0700, Richard Henderson wrote:
> I'd rather keep this around. It should be possible to use on CIA2.
Ok. What do you think about reorg like this:
basically leave the old code as is, and add
	if (is_pyxis)
		alpha_mv.mv_pci_tbi = cia_pci_tbi
On Sat, May 19, 2001 at 11:11:31PM +0400, Ivan Kokshaysky wrote:
> On Sat, May 19, 2001 at 03:55:02PM +0200, Andrea Arcangeli wrote:
> > Reading the tsunami specs I learnt 1 TLB entry caches 8 page table entries (not 1)
> > so the tlb flush will be invalidated immediately by any PCI DMA run after
> > the fl
On Fri, May 18, 2001 at 09:46:17PM +0400, Ivan Kokshaysky wrote:
> -void
> -cia_pci_tbi(struct pci_controller *hose, dma_addr_t start, dma_addr_t end)
> -{
> -	wmb();
> -	*(vip)CIA_IOC_PCI_TBIA = 3;	/* Flush all locked and unlocked. */
> -	mb();
> -	*(vip)CIA_IOC_PCI_TBIA;
> -
On Sat, May 19, 2001 at 02:48:15PM +0400, Ivan Kokshaysky wrote:
> This is incorrect. If you want directly mapped PCI window then you don't
> need the iommu_arena for it. If you want scatter-gather mapping, you
> should write address of the SG page table into the T3_BASE register.
I've tried both
On Sat, May 19, 2001 at 03:55:02PM +0200, Andrea Arcangeli wrote:
> Reading the tsunami specs I learnt 1 TLB entry caches 8 page table entries (not 1)
> so the tlb flush will be invalidated immediately by any PCI DMA run after
> the flush on any of the other 7 mappings cached in the same tlb entry.
I have
On Fri, May 18, 2001 at 09:46:17PM +0400, Ivan Kokshaysky wrote:
> The most interesting thing here is the pyxis "tbia" fix.
> Whee! I can now copy files from SCSI to bus-master IDE, or
> between two IDE drives on separate channels, or do other nice
> things without hanging lx/sx164. :-)
> The pyxi
On Fri, May 18, 2001 at 10:34:36PM -0400, Tom Vier wrote:
> hose->sg_pci = iommu_arena_new(hose, 0xc000, 0x0800, 32768);
> *(vip)CIA_IOC_PCI_W3_BASE = 0xc000 | 1;
> *(vip)CIA_IOC_PCI_W3_MASK = (0x0800 - 1) & 0xfff0;
> *(vip)CIA_IOC_PCI_T3_BASE = 0x80
Maybe this third window stuff in cia_enable_broken_tbia() is why I can't
seem to get the third window to open up. From my reading of the 21174 docs,
my code should work. Since T2_BASE is at 0x4000 for 1gig, I'd think
T3_BASE should be at 0x8000. Am I missing something?
hose->sg_pc
The most interesting thing here is the pyxis "tbia" fix.
Whee! I can now copy files from SCSI to bus-master IDE, or
between two IDE drives on separate channels, or do other nice
things without hanging lx/sx164. :-)
The pyxis "tbia" turned out to be broken in an even nastier way
than one could expec