On Monday 16 November 2009, Gerd Hoffmann wrote:
> On 11/11/09 17:38, Paul Brook wrote:
> >>> That cap is important.
> >>> For scsi-generic you probably don't have a choice because of the way
> >>> the kernel interface works.
> >>
> >> Exactly. And why is the cap important for scsi-disk if scsi-generic
> >> does fine without?
> >
> > With scsi-generic you're at the mercy of what the kernel API gives you,
> > and if the guest hardware/OS isn't cooperative then you lose.
>
> The guest will lose with unreasonably large requests. qemu_malloc() ->
> oom() -> abort() -> guest is gone.
Exactly. This lossage is not acceptable.

> We can also limit the amount of host memory we allow the guest to
> consume, so uncooperative guests can't push the host into swap. This is
> not implemented today, indicating that it hasn't been a problem so far.

Capping the amount of memory required for a transfer *is* implemented, in
both LSI and virtio-blk. The exception is SCSI passthrough, where the
kernel API makes it impossible.

> And with zerocopy it will be even less of a problem as we don't need
> host memory to buffer the data ...

Zero-copy isn't possible in many cases. You must handle the other cases
gracefully.

> >> It doesn't. The disconnect and thus the opportunity to submit more
> >> commands while the device is busy doing the actual I/O is there.
> >
> > Disconnecting on the first DMA request (after switching to a data phase
> > and transferring zero bytes) is bizarre behavior, but probably
> > allowable.
>
> The new lsi code doesn't. The old code could do that under certain
> circumstances. And what is bizarre about that? A real hard drive will
> most likely do exactly that on reads (unless it has the data cached and
> can start the transfer instantly).

No. The old code goes directly from the command phase to the message
(disconnect) phase.

> > However by my reading DMA transfers must be performed synchronously by
> > the SCRIPTS engine, so you need to do a lot of extra checking to prove
> > that you can safely continue execution without actually performing the
> > transfer.
>
> I'll happily add a 'strict' mode which does data transfers synchronously
> in case any compatibility issues show up.
>
> Such a mode would be slower of course. We'll have to either do the I/O
> in lots of little chunks or lose zerocopy. Large transfers + memcpy is
> probably the faster option.

But as you agreed above, large transfers + memcpy is not a realistic
option because it can have excessive memory requirements.

Paul