Hi all,

> > Several fixes are possible; I'll try the most generic one next.
>
> I am curious what the fix will be (with regard to that remark of Geert's).
I'm getting a bit confused - the good old mach_max_dma_address variable does not seem to be used at all in recent kernels anymore. Could a mm expert please unravel this mystery?

I've cooked up the following patch to make the m68k mm code honor the DMA limit as it used to:

--- arch/m68k/mm/motorola.c.maxdma.org	2007-12-31 20:10:18.000000000 +1300
+++ arch/m68k/mm/motorola.c	2007-12-31 20:12:00.000000000 +1300
@@ -43,6 +43,8 @@
 EXPORT_SYMBOL(mm_cachebits);
 #endif
 
+extern unsigned long mach_max_dma_address;
+
 /* size of memory already mapped in head.S */
 #define INIT_MAPPED_SIZE	(4UL<<20)
 
@@ -296,7 +298,15 @@
 	printk ("before free_area_init\n");
 #endif
 	for (i = 0; i < m68k_num_memory; i++) {
-		zones_size[ZONE_DMA] = m68k_memory[i].size >> PAGE_SHIFT;
+		/* MSch Hack */
+		if (m68k_memory[i].addr < mach_max_dma_address
+		    && (m68k_memory[i].addr + m68k_memory[i].size) <= mach_max_dma_address) {
+			zones_size[ZONE_DMA] = m68k_memory[i].size >> PAGE_SHIFT;
+			zones_size[ZONE_NORMAL] = 0;
+		} else {
+			zones_size[ZONE_DMA] = 0;
+			zones_size[ZONE_NORMAL] = m68k_memory[i].size >> PAGE_SHIFT;
+		}
 		free_area_init_node(i, pg_data_map + i, zones_size,
 				    m68k_memory[i].addr >> PAGE_SHIFT, NULL);
 	}

I'm blatantly assuming that setting the DMA vs. normal zone sizes for each memory node here is the right way to flag certain memory as unavailable for DMA.

Using this patch on my kernel results in a panic when atafb_init attempts to allocate screen memory, with the following diagnostics from the allocator:

Mem-info:
DMA per-cpu:
CPU    0: Hot: hi:    0, btch:   1 usd:   0   Cold: hi:    0, btch:   1 usd:   0
Normal per-cpu:
CPU    0: Hot: hi:   90, btch:  15 usd:   0   Cold: hi:   30, btch:   7 usd:   0
Active:128 inactive:1897 dirty:0 writeback:0 unstable:0 free:65505 slab:109 mapped:0 pagetables:0 bounce:0
DMA free:0kB min:108kB low:132kB high:160kB active:512kB inactive:7580kB present:14212kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 0
Normal free:262020kB min:1980kB low:2472kB high:2968kB active:0kB inactive:8kB present:259840kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 0
DMA: 0*4kB 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 0kB
Normal: 1*4kB 4*8kB 2*16kB 0*32kB 1*64kB 0*128kB 1*256kB 1*512kB 1*1024kB 1*2048kB 63*4096kB = 262020kB
Swap cache: add 0, delete 0, find 0/0, race 0+0
Free swap  = 0kB
Total swap = 0kB
Free swap:            0kB
69120 pages of RAM
65535 free pages
1389 reserved pages
0 pages shared
0 pages swap cached
Kernel panic - not syncing: Cannot allocate screen memory

(I am using 256 MB of FastRAM here.)

From the diagnostics it appears that 14 MB of ST-RAM are present, 7.5 MB of which are inactive (is that the ramdisk, perhaps?). Why is there no free RAM in the DMA zone?

Booting with no ramdisk does still work with the patched kernel, so I cannot have totally messed up memory management.

Anyway, what would be the correct way to set up the FastRAM zone as non-DMA memory? I must have made some mistake in the code above.
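One detail of the hack above that I'm not sure about: a memory chunk that straddles mach_max_dma_address ends up entirely in ZONE_NORMAL, so its DMA-capable part would be lost. Assuming the per-node zones_size[] array really is the right knob (which is exactly what I'd like the mm experts to confirm), and ignoring for the moment that mach_max_dma_address may not be page-aligned, a variant that splits such a chunk instead would look roughly like this - untested, just to illustrate what I mean:

	for (i = 0; i < m68k_num_memory; i++) {
		unsigned long start = m68k_memory[i].addr;
		unsigned long end = start + m68k_memory[i].size;
		unsigned long dma_end;

		/* portion of this chunk below the DMA limit, clamped to the chunk */
		dma_end = mach_max_dma_address;
		if (dma_end < start)
			dma_end = start;
		if (dma_end > end)
			dma_end = end;

		zones_size[ZONE_DMA] = (dma_end - start) >> PAGE_SHIFT;
		zones_size[ZONE_NORMAL] = (end - dma_end) >> PAGE_SHIFT;

		free_area_init_node(i, pg_data_map + i, zones_size,
				    start >> PAGE_SHIFT, NULL);
	}

A chunk entirely below the limit still gets only a DMA zone, a FastRAM chunk entirely above it gets only a normal zone, and anything in between is split at the limit.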
Looking at the bright side: we now do at least get the framebuffer to confess that it's not going to work :-)

Regarding booter options: is there an option to load the kernel into FastRAM as opposed to ST-RAM, Petr? What is the default here?

As to your question regarding the most generic fix: if there really is not enough ST-RAM (i.e. the available space is taken up by the kernel and the ramdisk, after 'unpacking' the ramdisk into the buffer cache), we'd need to either make the ramdisk unpack go to non-DMA memory (no idea how to do that; ideally the buffer cache should not prefer DMA memory in this case), or reserve a chunk of memory up front (I tried that in a hackish way; see the sketch below my sig).

Awaiting the verdict of the mm experts...

Happy New Year to y'all,

	Michael
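PS: by "reserve a chunk of memory up front" I mean something along the lines of the sketch below - grab a lump of ST-RAM from the bootmem allocator before the buffer cache can eat it, and hand that to atafb later instead of letting it allocate from the general pool. The 1 MB size, the function name and the exact place to call it from are placeholders I made up for illustration, the bootmem interfaces keep shifting between kernel versions, and it assumes it lives in arch/m68k/mm/motorola.c where pg_data_map is already in scope - so please read it as a rough, untested sketch, not the actual diff I used:

#include <linux/init.h>
#include <linux/bootmem.h>

#include <asm/setup.h>		/* m68k_memory[], m68k_num_memory */

extern unsigned long mach_max_dma_address;

/* placeholder: enough ST-RAM for the video modes I care about */
#define STRAM_FB_RESERVE	(1UL << 20)

void *stram_fb_pool;	/* to be handed to atafb instead of a late allocation */

/*
 * Grab a chunk from the bootmem allocator of the node that lies below
 * the DMA limit, before the ramdisk gets unpacked into the buffer cache.
 * Where exactly to call this from (late in paging_init()?) is part of
 * what I'm asking the mm experts about.
 */
void __init stram_reserve_fb_pool(void)
{
	int i;

	for (i = 0; i < m68k_num_memory; i++) {
		if (m68k_memory[i].addr >= mach_max_dma_address)
			continue;	/* FastRAM chunk, skip */
		stram_fb_pool = alloc_bootmem_node(pg_data_map + i,
						   STRAM_FB_RESERVE);
		break;
	}
}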