Hi,

Not sure if it's related, but with O_DIRECT I think the writes need to be
aligned to a multiple of the block size (4k, or 512 bytes).
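
A minimal C sketch of what I mean (my own illustration, assuming a 4k block
size; an unaligned buffer or length usually fails with EINVAL once the file
is opened with O_DIRECT):

/* sketch: write one aligned 4 KiB block with O_DIRECT */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    const size_t align = 4096;                /* assumed block size */
    void *buf;

    if (argc < 2) {
        fprintf(stderr, "usage: %s <block device>\n", argv[0]);
        return 1;
    }
    if (posix_memalign(&buf, align, align) != 0) { /* buffer address aligned to 4k */
        fprintf(stderr, "posix_memalign failed\n");
        return 1;
    }
    memset(buf, 0xAB, align);

    int fd = open(argv[1], O_WRONLY | O_DIRECT);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    if (write(fd, buf, align) != (ssize_t)align)  /* length also a multiple of 4k */
        perror("write");
    close(fd);
    free(buf);
    return 0;
}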

And I remember a bug with qemu and 512b-logical/4k-physical disks:

http://pve.proxmox.com/pipermail/pve-devel/2012-November/004530.html

I'm not an expert so I can't confirm.

----- Original Message -----
From: "Stanislav German-Evtushenko" <ginerm...@gmail.com>
To: "dietmar" <diet...@proxmox.com>
Cc: "aderumier" <aderum...@odiso.com>, "pve-devel" <pve-devel@pve.proxmox.com>
Sent: Thursday, 28 May 2015 09:22:12
Subject: Re: [pve-devel] Default cache mode for VM hard drives

Hi Dietmar, 

I have done that a couple of times already, and every time I got the same
answer: "upper layer problem". Well, since we have come this far already, I
would like to continue.

I have just done the same test with mdadm instead of DRBD, and I found that
the problem is reproducible on software RAID too, just as Lars Ellenberg
claimed. This means the problem is not specific to DRBD but applies to
O_DIRECT in general: when the host cache is not used, the block device reads
the data directly from the userspace buffer (see the sketch under step 2 below).

The test case is below.

1. Prepare 

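# two 100 MB files attached as loop devices, mirrored as md RAID1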
dd if=/dev/zero of=/tmp/mdadm1 bs=1M count=100 
dd if=/dev/zero of=/tmp/mdadm2 bs=1M count=100 
losetup /dev/loop1 /tmp/mdadm1 
losetup /dev/loop2 /tmp/mdadm2 
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/loop{1,2} 

2. Write data with O_DIRECT 

./a.out /dev/md0 
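
(For reference, since the a.out source is not in this mail: below is only a
minimal sketch of a program along these lines, assuming the trigger is a
buffer that keeps being modified while the O_DIRECT write is in flight, so
each RAID1 leg may pick up a different snapshot of it. Names and constants
are mine, not the original test. Build with something like: gcc -O2 -pthread test.c)

/* hypothetical reconstruction: overwrite one block with O_DIRECT while
 * another thread keeps changing the very same buffer */
#define _GNU_SOURCE
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define BLK   4096
#define LOOPS 10000

static volatile int running = 1;
static char *buf;

static void *scribbler(void *arg)
{
    unsigned char c = 0;
    (void)arg;
    while (running)
        memset(buf, c++, BLK);    /* keep changing the in-flight buffer */
    return NULL;
}

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <block device>\n", argv[0]);
        return 1;
    }
    if (posix_memalign((void **)&buf, BLK, BLK) != 0) {
        fprintf(stderr, "posix_memalign failed\n");
        return 1;
    }
    int fd = open(argv[1], O_WRONLY | O_DIRECT);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    pthread_t tid;
    pthread_create(&tid, NULL, scribbler, NULL);

    for (int i = 0; i < LOOPS; i++)
        if (pwrite(fd, buf, BLK, 0) != BLK) {  /* rewrite the same block */
            perror("pwrite");
            break;
        }

    running = 0;
    pthread_join(tid, NULL);
    close(fd);
    return 0;
}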

3. Check consistency with vbindiff 

vbindiff /tmp/mdadm{1,2}   # press Enter multiple times to skip the metadata

And here we find that the data on the two "physical devices" differs, and md
RAID did not catch it.


On Thu, May 28, 2015 at 7:40 AM, Dietmar Maurer < diet...@proxmox.com > wrote: 


> What this means? 

I still think you should discuss that on the DRBD list. 





Best regards, 
Stanislav German-Evtushenko 
_______________________________________________
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
