On Fri, May 04, 2007 at 05:13:09PM -0500, Linas Vepstas wrote:
> On Wed, May 02, 2007 at 03:40:20PM -0500, Scott Wood wrote:
> > Well, Segher doesn't want me to use iobarrier (because it's not I/O).
> > Andy doesn't want me to use wmb() (because it's sync). I don't think
> > something like gfar_wmb() would be appropriate. So the remaining
> > options are either eieio(), ...
>
> ? Just curious... the original intent of eieio was to order I/O ...
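For reference, a minimal sketch of the I/O-ordering role Linas refers to: the classic ppc MMIO accessors follow each device store with an eieio so that accesses to device memory stay in program order. This is illustrative only, not the exact kernel definition (which has varied between trees):

#define eieio() __asm__ __volatile__ ("eieio" : : : "memory")

/* Sketch of an eieio-ordered MMIO store in the spirit of ppc's
 * out_be32(); see asm-powerpc/io.h for the real definition. */
static inline void sketch_out_be32(volatile unsigned int *addr,
                                   unsigned int val)
{
        *addr = val;    /* store to a device register (big-endian host) */
        eieio();        /* order this store against surrounding I/O     */
}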
Kumar Gala wrote:

So what about something like this where we do the read only once?

- k

diff --git a/drivers/net/gianfar.c b/drivers/net/gianfar.c
index a06d8d1..9cd7d1e 100644
--- a/drivers/net/gianfar.c
+++ b/drivers/net/gianfar.c
@@ -1438,31 +1438,35 @@ int gfar_clean_rx_ring(struct net_device *dev, int rx_work_limit)
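The archived copy cuts the patch body off after the hunk header, but a hypothetical sketch of the single-read idea Kumar describes might look like this (descriptor field names as in gianfar.h; ring wrap-around omitted):

/* Snapshot bdp->status once per iteration instead of re-reading the
 * buffer descriptor at every test. */
u16 status = bdp->status;               /* one load of the BD status    */

while (!(status & RXBD_EMPTY)) {        /* we own this buffer           */
        /* ... process the completed buffer, hand skb up the stack ... */

        bdp++;                          /* advance to the next BD       */
        status = bdp->status;           /* one fresh load per iteration */
}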
Kumar Gala wrote:
> Why doesn't marking the bdp pointer volatile resolve the issue in
> gfar_clean_rx_ring() to ensure load ordering?

Because that only addresses compiler reordering (and does so in a rather
clumsy way -- not all accesses need to be strongly ordered), not
hardware reordering.
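To make that distinction concrete, a hedged sketch (assuming gianfar's struct rxbd8 and RXBD_EMPTY, with a generic smp_rmb() standing in for whichever ppc primitive the thread settles on): volatile would only pin the compiler's ordering of the two loads below; the CPU can still reorder them, and only a barrier instruction between them constrains that.

/* volatile constrains the compiler, not the core: an explicit
 * barrier is still needed between the ownership check and the
 * reads of data the hardware wrote. */
static int sketch_poll_rxbd(volatile struct rxbd8 *bdp)
{
        u16 status = bdp->status;       /* load 1: who owns the BD?   */

        if (status & RXBD_EMPTY)
                return -1;              /* hardware still owns it     */

        smp_rmb();                      /* keep load 1 before load 2  */
        return bdp->length;             /* load 2: data the HW wrote  */
}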
Kumar Gala wrote:
> On May 2, 2007, at 3:12 PM, Scott Wood wrote:
> > wmb() is a sync, smp_wmb() is an eieio. Andy told me he would not
> > accept a sync in those spots.
>
> Sorry, was looking at the iobarrier code.
> And the driver is already ppc-specific; it uses in/out_be32.

True, but it's hidden behind the gfar_read/write accessors.
Your change is a bit more blatant.
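For context, the accessors in question look roughly like this (compare the real ones in drivers/net/gianfar.h):

/* The ppc-specific in/out_be32 are wrapped so that driver code never
 * touches them directly. */
static inline u32 gfar_read(volatile unsigned *addr)
{
        return in_be32(addr);   /* big-endian MMIO load  */
}

static inline void gfar_write(volatile unsigned *addr, u32 val)
{
        out_be32(addr, val);    /* big-endian MMIO store */
}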
Kumar Gala wrote:
> I'd rather see a wmb() instead of eieio() to keep this code non-ppc
> specific. (also, we implement wmb as eieio, so I don't get the comment
> about it being too heavy, unless you mean generically).

wmb() is a sync, smp_wmb() is an eieio. Andy told me he would not
accept a sync in those spots.
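For reference, the mapping Scott describes, roughly as the arch/powerpc tree of that era defined it (arch/ppc differed, which is exactly the source of the confusion -- hence Kumar's "we implement wmb as eieio"; check asm-powerpc/system.h for the authoritative versions):

#define wmb()      __asm__ __volatile__ ("sync"  : : : "memory") /* full store barrier */
#define smp_wmb()  __asm__ __volatile__ ("eieio" : : : "memory") /* lighter, SMP-only  */
#define eieio()    __asm__ __volatile__ ("eieio" : : : "memory")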
On May 2, 2007, at 2:57 PM, Scott Wood wrote:

The hardware must not see that it is given ownership of a buffer until
it is completely written, and when the driver receives ownership of a
buffer, it must ensure that any other reads to the buffer reflect its
final state. Thus, I/O barriers are added where required.

Without this patch, I have observed ...
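A hedged sketch of the producer-side rule stated above: finish every store to the descriptor, execute a barrier, and only then flip the ownership bit (field names as in gianfar.h; which barrier primitive to use is precisely what this thread debates -- eieio() is shown since it does order cacheable stores):

/* The EMPTY/ownership bit must become visible to the controller only
 * after every other store to the buffer descriptor has completed. */
static void sketch_give_rxbd_to_hw(volatile struct rxbd8 *bdp,
                                   dma_addr_t buf, u16 flags)
{
        bdp->bufPtr = buf;          /* stores the hardware must see...   */
        bdp->length = 0;            /* ...before it owns the buffer      */

        eieio();                    /* order the stores above before the */
                                    /* ownership handoff below           */

        bdp->status = flags | RXBD_EMPTY;   /* hardware now owns the BD  */
}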