For 4K drives, I don't know if this is related: 

http://www.linuxtopia.org/online_books/rhel6/rhel_6_technical_notes/rhel_6_technotes_virt.html
 

3.1. Known Issues
"
Direct Asynchronous IO (AIO) that is not issued on filesystem block boundaries, 
and falls into a hole in a sparse file on ext4 or xfs filesystems, may corrupt 
file data if multiple I/O operations modify the same filesystem block. 
Specifically, if qemu-kvm is used with the aio=native IO mode over a sparse 
device image hosted on the ext4 or xfs filesystem, guest filesystem corruption 
will occur if partitions are not aligned with the host filesystem block size. 
Generally, do not use aio=native option along with cache=none for QEMU. This 
issue can be avoided by using one of the following techniques:

    Align AIOs on filesystem block boundaries, or do not write to sparse files 
using AIO on xfs or ext4 filesystems.
    KVM: Use a non-sparse system image file or allocate the space by zeroing 
out the entire file.
    KVM: Create the image using an ext3 host filesystem instead of ext4.
    KVM: Invoke qemu-kvm with aio=threads (this is the default).
    KVM: Align all partitions within the guest image to the host's filesystem 
block boundary (default 4k).
"


----- Original Message ----- 

From: "Alexandre DERUMIER" <[email protected]> 
To: "Dietmar Maurer" <[email protected]> 
Cc: [email protected] 
Sent: Thursday, 8 November 2012 18:40:21 
Subject: Re: [pve-devel] new cache benchmark results 

It seems to occur only with O_DIRECT (so cache=none and cache=directsync). 

This bugzilla was about CD-ROMs (with large sectors), but I don't know if it 
applies to 4K HDDs: 

https://bugzilla.redhat.com/show_bug.cgi?id=608548 
" Technical note added. If any revisions are required, please edit the 
"Technical Notes" field 
accordingly. All revisions will be proofread by the Engineering Content 
Services team. 

New Contents: 
Cause: qemu did not align memory properly for O_DIRECT support 

Fix: qemu was changed to use properly aligned memory for I/O requests 

Consequence: I/O to devices with large sector sizes like CD-ROMs did not work in 
cache=none mode 
Result: I/O to devices with large sector sizes like CD-ROMs works in cache=none 
mode" 
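The alignment problem described in that note can be illustrated with a small Python sketch (an analogy, not qemu's actual fix): anonymous `mmap` allocations are always page-aligned, which satisfies the buffer-alignment rule O_DIRECT imposes, whereas an ordinary `bytearray` gives no alignment guarantee. `SECTOR` and the sizes below are assumptions for illustration:

```python
import mmap

SECTOR = 4096  # assumed device sector size; 512 is the classic value

def aligned_buffer(size):
    """Round `size` up to a sector multiple and allocate it via an
    anonymous mmap, which the kernel page-aligns. That meets O_DIRECT's
    requirement that buffer address and length be sector-aligned."""
    padded = -(-size // SECTOR) * SECTOR  # ceiling to a sector multiple
    return mmap.mmap(-1, padded)

buf = aligned_buffer(5000)
print(len(buf))  # 8192
```

In C the equivalent would be `posix_memalign()`, which is the usual way user space obtains O_DIRECT-safe buffers.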

----- Original Message ----- 

From: "Alexandre DERUMIER" <[email protected]> 
To: "Dietmar Maurer" <[email protected]> 
Cc: [email protected] 
Sent: Thursday, 8 November 2012 18:06:47 
Subject: Re: [pve-devel] new cache benchmark results 

>>Another problem with cache=none is that it does not work with 4K sector 
>>drives. 
>> 
>>Users reported problems with 4K iSCSI and 4K local disks. 
>> 
Yes, I remember the forum post. It doesn't work with cache=none or 
cache=writeback. 

>>Any idea how to solve that? 
This is strange, because it seems to have been fixed a long time ago (I have 
seen some Red Hat bugzilla entries from 2011 about it), 
but I don't have the hardware to test it :( 


I'll look in my archives tomorrow ;) 


----- Original Message ----- 

From: "Dietmar Maurer" <[email protected]> 
To: "Alexandre DERUMIER" <[email protected]>, [email protected] 
Sent: Thursday, 8 November 2012 17:50:58 
Subject: RE: [pve-devel] new cache benchmark results 

> Note: with shared storage and writeback, I generally see a big spike (faster 
> than cache=none), but then a big slowdown (near zero) for 5-10 s. 
> So I think this is when the host needs to flush the data, which adds 
> overhead. (Maybe network latency has an impact...) 

Another problem with cache=none is that it does not work with 4K sector drives. 

Users reported problems with 4K iSCSI and 4K local disks. 

Any idea how to solve that? 
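One part of the answer is the partition-alignment check mentioned in the RHEL notes. A quick arithmetic sketch (values are illustrative; start sectors are in the traditional 512-byte units that fdisk prints):

```python
def partition_aligned(start_sector, sector_size=4096):
    """True if a partition starting at `start_sector` (in 512-byte
    units) begins on a boundary of the drive's real sector size.
    Misaligned partitions are what break cache=none on 4K drives."""
    return (start_sector * 512) % sector_size == 0

print(partition_aligned(63))    # old DOS-era offset -> False
print(partition_aligned(2048))  # modern 1 MiB offset -> True
```

The DOS-era start at sector 63 is misaligned on anything but 512-byte sectors, while the 1 MiB offset modern partitioners default to is aligned for every common sector size.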
_______________________________________________ 
pve-devel mailing list 
[email protected] 
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel 