Hello Alexey,

In general, a very nice, clean patch.

> +     /* Flush modified buffer descriptor */
> +     flush_dcache_range((unsigned long)desc_p,
> +                        (unsigned long)desc_p + sizeof(struct dmamacdescr));
> +

If I remember correctly, there is a bit that tells you whether a DMA descriptor 
is owned by the CPU or by the GMAC. What if the descriptor size is smaller than 
the CPU's cache line size? How do you prevent the CPU from overwriting adjacent 
DMA descriptors that may still be owned by the GMAC?
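
One way I could imagine dealing with that, if we can spare a little memory, is 
to pad every descriptor out to a full cache line, so a flush or invalidate of 
one descriptor can never hit its neighbour. Very rough sketch, not tested: 
struct dmamacdescr and flush_dcache_range() are taken from your patch, 
ARCH_DMA_MINALIGN is assumed to come from <asm/cache.h> and to be (at least) 
the cache line size, and the wrapper struct plus TX_DESC_NUM are made up:

/*
 * Sketch only.  Aligning the wrapper struct also rounds its size up to a
 * multiple of the alignment, so every element of an array of these starts
 * on its own cache line and a flush of one descriptor cannot touch the
 * descriptors next to it.
 */
struct dmamacdescr_padded {
	struct dmamacdescr desc;
} __attribute__((aligned(ARCH_DMA_MINALIGN)));

static struct dmamacdescr_padded tx_ring[TX_DESC_NUM];	/* TX_DESC_NUM: placeholder */

static void dw_flush_tx_desc(struct dmamacdescr_padded *desc_p)
{
	/* the flush from your patch then only touches this descriptor's line(s) */
	flush_dcache_range((unsigned long)&desc_p->desc,
			   (unsigned long)&desc_p->desc +
			   sizeof(struct dmamacdescr_padded));
}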

As far as I can remember, Linux solves this by mapping the descriptors (not the 
packet buffers, which are always cache-line aligned) in uncached memory, but we 
cannot do that in U-Boot, as the MMU is still disabled. OTOH, since we probably 
don't need the performance benefit of the CPU and the GMAC accessing the 
descriptor table concurrently, we may be able to work around it by handing off 
multiple descriptors at once from the GMAC to the CPU and vice versa (maybe 
depending on the cache line size?).
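
To make that idea a bit more concrete, roughly the following, completely 
untested; the desc_give_to_dma() helper is made up, and this assumes 
ARCH_DMA_MINALIGN is an exact multiple of sizeof(struct dmamacdescr):

#define DESCS_PER_LINE	(ARCH_DMA_MINALIGN / sizeof(struct dmamacdescr))

/*
 * Hand a whole cache line worth of RX descriptors back to the GMAC in one
 * go.  'first' must be a multiple of DESCS_PER_LINE.  Because the CPU only
 * writes to a line once *all* descriptors in it have been processed, the
 * flush can never clobber an entry the GMAC still owns.
 */
static void dw_give_back_rx_line(struct dmamacdescr *ring, unsigned int first)
{
	unsigned int i;

	for (i = 0; i < DESCS_PER_LINE; i++)
		desc_give_to_dma(&ring[first + i]);	/* set the OWN bit back */

	flush_dcache_range((unsigned long)&ring[first],
			   (unsigned long)&ring[first] + ARCH_DMA_MINALIGN);
}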

I remember that a similar patch (which looked a lot uglier, BTW) solved this by 
doing uncached accesses to the descriptors, but that would require using 
arch-specific accessor macros (and I'm not sure whether all architectures 
support an 'uncached access' attribute/flag on load/store instructions).
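
Just to illustrate what I mean, purely hypothetical (none of these names exist 
today): the driver would go through accessors like the ones below, and each 
architecture that has an uncached load/store variant would override them. Note 
that the fallback does not actually bypass the D-cache, which is exactly the 
part that would have to be supplied per architecture:

#ifndef desc_readl
/*
 * Fallback: a plain volatile access.  This keeps the compiler from
 * reordering the access or caching the value in a register, but it does
 * NOT bypass the D-cache -- an architecture would have to override these
 * with a real uncached load/store, if it has one at all.
 */
#define desc_readl(addr)	(*(volatile u32 *)(addr))
#define desc_writel(val, addr)	(*(volatile u32 *)(addr) = (val))
#endif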

Mischa
