It seems to occur only with O_DIRECT (so cache=none and cache=directsync). This bugzilla was about CD-ROMs (with large sectors), but I don't know if it applies to 4K HDDs:
https://bugzilla.redhat.com/show_bug.cgi?id=608548

"Technical note added. If any revisions are required, please edit the "Technical Notes" field accordingly. All revisions will be proofread by the Engineering Content Services team.

New Contents:
Cause: qemu did not align memory properly for O_DIRECT support
Fix: qemu was changed to use properly aligned memory for I/O requests
Consequence: I/O to devices with large sector sizes like CD-ROMs did not work in cache=none mode
Result: I/O to devices with large sector sizes like CD-ROMs works in cache=none mode"

----- Original Message -----
From: "Alexandre DERUMIER" <[email protected]>
To: "Dietmar Maurer" <[email protected]>
Cc: [email protected]
Sent: Thursday, November 8, 2012 6:06:47 PM
Subject: Re: [pve-devel] new cache benchmark results

>>Another problem with cache=none is that it does not work with 4K sector
>>drives.
>>
>>User reported problems with 4K iSCSI and 4K local disks.

Yes, I remember the post in the forum. It doesn't work with cache=none or cache=writeback.

>>Any idea how to solve that?

This is strange, because it seems to have been fixed a long time ago (I have seen some Red Hat bugzillas from 2011 about it).
But I don't have the hardware to test it :(
I'll look in my archives tomorrow ;)

----- Original Message -----
From: "Dietmar Maurer" <[email protected]>
To: "Alexandre DERUMIER" <[email protected]>, [email protected]
Sent: Thursday, November 8, 2012 5:50:58 PM
Subject: RE: [pve-devel] new cache benchmark results

> Note: with shared storage, with writeback, I generally see a big spike (faster
> than cache=none), but then a big slowdown (near zero) for 5-10s.
> So I think this is when the host needs to flush the data, which adds more
> overhead. (Maybe network latency has an impact...)

Another problem with cache=none is that it does not work with 4K sector drives.

User reported problems with 4K iSCSI and 4K local disks.

Any idea how to solve that?
_______________________________________________
pve-devel mailing list
[email protected]
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
