In dcr_write_dma(), there is code that uses cpu_physical_memory_map()
to implement a DMA transfer. That function takes a 'plen' argument,
which points to a hwaddr that is used for both input and output: the
caller must set it to the size of the range it wants to map, and on
return it is updated to the actual length mapped. The dcr_write_dma()
code fails to initialize rlen and wlen, so it will end up mapping an
unpredictable amount of memory.
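(For illustration only, not part of the patch: a minimal sketch of that
in/out 'plen' contract. The helper try_map_read() and its parameters are
hypothetical; the includes are the usual QEMU-internal headers that
declare cpu_physical_memory_map()/unmap().)

    #include "qemu/osdep.h"
    #include "exec/cpu-common.h"

    /*
     * Hypothetical helper: attempt to map xferlen bytes at addr for
     * reading, and report whether the whole range was mapped.
     */
    static bool try_map_read(hwaddr addr, hwaddr xferlen)
    {
        hwaddr len = xferlen;   /* input: the length we want mapped */
        void *p = cpu_physical_memory_map(addr, &len, false);
        /* on return, len is the length actually mapped (may be shorter) */
        bool ok = p && len == xferlen;

        if (p) {
            /* unmap what was mapped, using the returned length */
            cpu_physical_memory_unmap(p, len, false, 0);
        }
        return ok;
    }

Only if the returned length equals the requested length is it safe to
treat the mapping as one contiguous buffer, which is why the fix below
gates the fast-path memmove() on that check.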
Initialize the length values correctly, and check that we managed
to map the entire range before using the fast-path memmove().

This was spotted by Coverity, which points out that we never
initialized the variables before using them.

Fixes: Coverity CID 1487137
Signed-off-by: Peter Maydell <peter.mayd...@linaro.org>
---
This seems totally broken, so I presume we just don't have any
guest code that actually exercises this...
---
 hw/ppc/ppc440_uc.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/hw/ppc/ppc440_uc.c b/hw/ppc/ppc440_uc.c
index a1ecf6dd1c2..11fdb88c220 100644
--- a/hw/ppc/ppc440_uc.c
+++ b/hw/ppc/ppc440_uc.c
@@ -904,14 +904,17 @@ static void dcr_write_dma(void *opaque, int dcrn, uint32_t val)
                 int width, i, sidx, didx;
                 uint8_t *rptr, *wptr;
                 hwaddr rlen, wlen;
+                hwaddr xferlen;
 
                 sidx = didx = 0;
                 width = 1 << ((val & DMA0_CR_PW) >> 25);
+                xferlen = count * width;
+                wlen = rlen = xferlen;
                 rptr = cpu_physical_memory_map(dma->ch[chnl].sa, &rlen,
                                                false);
                 wptr = cpu_physical_memory_map(dma->ch[chnl].da, &wlen,
                                                true);
-                if (rptr && wptr) {
+                if (rptr && rlen == xferlen && wptr && wlen == xferlen) {
                     if (!(val & DMA0_CR_DEC) &&
                         val & DMA0_CR_SAI && val & DMA0_CR_DAI) {
                         /* optimise common case */
-- 
2.25.1