Hi all,

David said: users will simply try to get rid of their volumes ALL at the same 
time, and this puts a lot of pressure on the SAN servicing those volumes; since 
the hardware isn't replying fast enough, the processes fall into D state waiting 
for I/Os to complete, which slows down everything.
The system must tolerate this kind of behavior: under heavy SAN pressure, the 
"dd" processes will inevitably fall into D state.

In my opinion, we should rethink the way we wipe the data in the volumes. 
Filling the device from /dev/zero with the "dd" command is the most primitive 
method. The standard SCSI command WRITE SAME could be taken into consideration.

Once the LBA range is provided and the command is sent to the SAN, the storage 
device (SAN) writes the same data pattern into the LUN or volume by itself. In 
other words, the work currently done by "dd" can be offloaded to the storage 
array.
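
As a rough illustration of what I mean (just a sketch, not Cinder code): the 
zeroing could be issued as a SCSI WRITE SAME(16) with sg_write_same from 
sg3_utils, assuming the target is a real SCSI device that accepts SG_IO (the 
LUN itself, not a device-mapper LV) and that the array supports WRITE SAME over 
the whole range. The function name and parameters below are made up for the 
example.

# Hypothetical sketch: offload zeroing to the array with WRITE SAME(16).
import subprocess

def write_same_zero(device, start_lba, num_blocks, block_size=512):
    """Ask the device itself to overwrite an LBA range with zeros."""
    cmd = [
        "sg_write_same",
        "--16",                       # WRITE SAME(16), for large ranges
        "--lba=%d" % start_lba,       # first logical block to overwrite
        "--num=%d" % num_blocks,      # number of blocks to overwrite
        "--in=/dev/zero",             # data pattern: one block of zeros
        "--xferlen=%d" % block_size,  # bytes of pattern to send (one block)
        device,
    ]
    subprocess.check_call(cmd)

# e.g. zero the first 1 GiB of a LUN with 512-byte blocks:
# write_same_zero("/dev/sdX", 0, 1024 * 1024 * 1024 // 512)

Note that many arrays cap the number of blocks accepted per WRITE SAME command, 
so in practice the range may have to be split into several calls.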

Thanks,

Qi


References:

1) http://manpages.ubuntu.com/manpages/karmic/man8/sg_write_same.8.html
2) http://storagegaga.wordpress.com/2012/01/06/why-vaai/


________________________________
Qi Xiaozhen
CLOUD OS PDU, IT Product Line, Huawei Enterprise Business Group
Mobile: +86 13609283376    Tel: +86 29-89191578
Email: qixiaoz...@huawei.com


From: David Hill [mailto:david.h...@ubisoft.com]
Sent: Saturday, November 02, 2013 6:21 AM
To: openstack@lists.openstack.org
Subject: [Openstack] Wiping of old cinder volumes

Hi guys,

                I was wondering whether there was a better way of wiping the content 
of an old EBS volume before actually deleting the logical volume in cinder?  
Or perhaps, could we configure, or add the possibility to configure, the number 
of parallel "dd" processes that are spawned at the same time...
Sometimes, users will simply try to get rid of their volumes ALL at the same 
time, and this puts a lot of pressure on the SAN servicing those volumes; since 
the hardware isn't replying fast enough, the processes fall into D state waiting 
for I/Os to complete, which slows down everything.
Since this process isn't (in my opinion) as critical as an EBS write or read, 
perhaps we should be able to throttle the speed of disk wiping, or the number of 
parallel wipes, to something that wouldn't affect the other reads/writes, which 
are most probably more critical.
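
Something along these lines, for instance (untested, just to illustrate the 
idea, and the knob name is made up): run each zeroing dd under "ionice -c 3" 
(idle class, assuming the util-linux ionice and an I/O scheduler that honours 
it) and cap how many wipes run at once. If I remember correctly the LVM driver 
already has volume_clear and volume_clear_size in cinder.conf, but nothing for 
concurrency or I/O priority.

# Hypothetical sketch, not Cinder code: idle-priority dd, limited parallelism.
import subprocess
import threading

MAX_PARALLEL_WIPES = 2                    # made-up knob, not an existing option
_wipe_slots = threading.BoundedSemaphore(MAX_PARALLEL_WIPES)

def wipe_volume(dev_path, size_mb):
    with _wipe_slots:                     # at most MAX_PARALLEL_WIPES at a time
        cmd = ["ionice", "-c", "3",       # idle class: yield to normal I/O
               "dd", "if=/dev/zero", "of=" + dev_path,
               "bs=1M", "count=%d" % size_mb, "conv=fdatasync"]
        subprocess.check_call(cmd)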

Here is a small capture of the processes:
cinder   23782  0.7  0.2 248868 20588 ?        S    Oct24  94:23 
/usr/bin/python /usr/bin/cinder-volume --config-file /etc/cinder/cinder.conf 
--logfile /var/log/cinder/volume.log
cinder   23790  0.0  0.5 382264 46864 ?        S    Oct24   9:16  \_ 
/usr/bin/python /usr/bin/cinder-volume --config-file /etc/cinder/cinder.conf 
--logfile /var/log/cinder/volume.log
root     32672  0.0  0.0 175364  2648 ?        S    21:48   0:00  |   \_ sudo 
cinder-rootwrap /etc/cinder/rootwrap.conf dd if=/dev/zero 
of=/dev/mapper/cinder--volumes-volume--2e86d686--de67--4ee4--992d--72818c70d791 
count=102400 bs=1M co
root     32675  0.0  0.1 173636  8672 ?        S    21:48   0:00  |   |   \_ 
/usr/bin/python /usr/bin/cinder-rootwrap /etc/cinder/rootwrap.conf dd 
if=/dev/zero 
of=/dev/mapper/cinder--volumes-volume--2e86d686--de67--4ee4--992d--72818c70d7
root     32681  3.2  0.0 106208  1728 ?        D    21:48   0:47  |   |       
\_ /bin/dd if=/dev/zero 
of=/dev/mapper/cinder--volumes-volume--2e86d686--de67--4ee4--992d--72818c70d791 
count=102400 bs=1M conv=fdatasync
root     32674  0.0  0.0 175364  2656 ?        S    21:48   0:00  |   \_ sudo 
cinder-rootwrap /etc/cinder/rootwrap.conf dd if=/dev/zero 
of=/dev/mapper/cinder--volumes-volume--d54a1c96--63ca--45cb--a597--26194d45dcdf 
count=102400 bs=1M co
root     32676  0.0  0.1 173636  8672 ?        S    21:48   0:00  |   |   \_ 
/usr/bin/python /usr/bin/cinder-rootwrap /etc/cinder/rootwrap.conf dd 
if=/dev/zero 
of=/dev/mapper/cinder--volumes-volume--d54a1c96--63ca--45cb--a597--26194d45dc
root     32683  3.2  0.0 106208  1724 ?        D    21:48   0:47  |   |       
\_ /bin/dd if=/dev/zero 
of=/dev/mapper/cinder--volumes-volume--d54a1c96--63ca--45cb--a597--26194d45dcdf 
count=102400 bs=1M conv=fdatasync
root     32693  0.0  0.0 175364  2656 ?        S    21:48   0:00  |   \_ sudo 
cinder-rootwrap /etc/cinder/rootwrap.conf dd if=/dev/zero 
of=/dev/mapper/cinder--volumes-volume--048dae36--b225--4266--b21e--af4b66eae6cd 
count=102400 bs=1M co
root     32694  0.0  0.1 173632  8668 ?        S    21:48   0:00  |   |   \_ 
/usr/bin/python /usr/bin/cinder-rootwrap /etc/cinder/rootwrap.conf dd 
if=/dev/zero 
of=/dev/mapper/cinder--volumes-volume--048dae36--b225--4266--b21e--af4b66eae6
root     32707  3.2  0.0 106208  1728 ?        D    21:48   0:46  |   |       
\_ /bin/dd if=/dev/zero 
of=/dev/mapper/cinder--volumes-volume--048dae36--b225--4266--b21e--af4b66eae6cd 
count=102400 bs=1M conv=fdatasync
root       342  0.0  0.0 175364  2648 ?        S    21:48   0:00  |   \_ sudo 
cinder-rootwrap /etc/cinder/rootwrap.conf dd if=/dev/zero 
of=/dev/mapper/cinder--volumes-volume--45251e8e--0c54--4e8f--9446--4e92801976ab 
count=102400 bs=1M co
root       343  0.0  0.1 173636  8672 ?        S    21:48   0:00  |   |   \_ 
/usr/bin/python /usr/bin/cinder-rootwrap /etc/cinder/rootwrap.conf dd 
if=/dev/zero 
of=/dev/mapper/cinder--volumes-volume--45251e8e--0c54--4e8f--9446--4e92801976
root       347  3.2  0.0 106208  1728 ?        D    21:48   0:45  |   |       
\_ /bin/dd if=/dev/zero 
of=/dev/mapper/cinder--volumes-volume--45251e8e--0c54--4e8f--9446--4e92801976ab 
count=102400 bs=1M conv=fdatasync
root       380  0.0  0.0 175364  2656 ?        S    21:48   0:00  |   \_ sudo 
cinder-rootwrap /etc/cinder/rootwrap.conf dd if=/dev/zero 
of=/dev/mapper/cinder--volumes-volume--1d9dfb31--dc06--43d5--bc1f--93b6623ff8c4 
count=102400 bs=1M co
root       382  0.0  0.1 173632  8668 ?        S    21:48   0:00  |   |   \_ 
/usr/bin/python /usr/bin/cinder-rootwrap /etc/cinder/rootwrap.conf dd 
if=/dev/zero 
of=/dev/mapper/cinder--volumes-volume--1d9dfb31--dc06--43d5--bc1f--93b6623ff8
root       388  3.2  0.0 106208  1724 ?        R    21:48   0:45  |   |       
\_ /bin/dd if=/dev/zero 
of=/dev/mapper/cinder--volumes-volume--1d9dfb31--dc06--43d5--bc1f--93b6623ff8c4 
count=102400 bs=1M conv=fdatasync
root       381  0.0  0.0 175364  2648 ?        S    21:48   0:00  |   \_ sudo 
cinder-rootwrap /etc/cinder/rootwrap.conf dd if=/dev/zero 
of=/dev/mapper/cinder--volumes-volume--60971d47--d3c5--44ef--9d43--d461c364d148 
count=102400 bs=1M co
root       384  0.0  0.1 173636  8672 ?        S    21:48   0:00  |   |   \_ 
/usr/bin/python /usr/bin/cinder-rootwrap /etc/cinder/rootwrap.conf dd 
if=/dev/zero 
of=/dev/mapper/cinder--volumes-volume--60971d47--d3c5--44ef--9d43--d461c364d1
root       391  3.2  0.0 106208  1728 ?        D    21:48   0:45  |   |       
\_ /bin/dd if=/dev/zero 
of=/dev/mapper/cinder--volumes-volume--60971d47--d3c5--44ef--9d43--d461c364d148 
count=102400 bs=1M conv=fdatasync
root       383  0.0  0.0 175364  2648 ?        S    21:48   0:00  |   \_ sudo 
cinder-rootwrap /etc/cinder/rootwrap.conf dd if=/dev/zero 
of=/dev/mapper/cinder--volumes-volume--856080db--4f8c--4063--9c47--69acb8460e50 
count=102400 bs=1M co
root       386  0.0  0.1 173632  8668 ?        S    21:48   0:00  |   |   \_ 
/usr/bin/python /usr/bin/cinder-rootwrap /etc/cinder/rootwrap.conf dd 
if=/dev/zero 
of=/dev/mapper/cinder--volumes-volume--856080db--4f8c--4063--9c47--69acb8460e
root       389  3.1  0.0 106208  1724 ?        D    21:48   0:45  |   |       
\_ /bin/dd if=/dev/zero 
of=/dev/mapper/cinder--volumes-volume--856080db--4f8c--4063--9c47--69acb8460e50 
count=102400 bs=1M conv=fdatasync
root       385  0.0  0.0 175364  2652 ?        S    21:48   0:00  |   \_ sudo 
cinder-rootwrap /etc/cinder/rootwrap.conf dd if=/dev/zero 
of=/dev/mapper/cinder--volumes-volume--f8f98d80--044f--4d4a--983f--d1186556f886 
count=102400 bs=1M co
root       387  0.0  0.1 173632  8668 ?        S    21:48   0:00  |   |   \_ 
/usr/bin/python /usr/bin/cinder-rootwrap /etc/cinder/rootwrap.conf dd 
if=/dev/zero 
of=/dev/mapper/cinder--volumes-volume--f8f98d80--044f--4d4a--983f--d1186556f8
root       392  3.1  0.0 106208  1728 ?        D    21:48   0:45  |   |       
\_ /bin/dd if=/dev/zero 
of=/dev/mapper/cinder--volumes-volume--f8f98d80--044f--4d4a--983f--d1186556f886 
count=102400 bs=1M conv=fdatasync
root       413  0.0  0.0 175364  2652 ?        S    21:48   0:00  |   \_ sudo 
cinder-rootwrap /etc/cinder/rootwrap.conf dd if=/dev/zero 
of=/dev/mapper/cinder--volumes-volume--0e89696a--492b--494c--81fa--7e834b9f31f4 
count=102400 bs=1M co
root       414  0.0  0.1 173636  8672 ?        S    21:48   0:00  |       \_ 
/usr/bin/python /usr/bin/cinder-rootwrap /etc/cinder/rootwrap.conf dd 
if=/dev/zero 
of=/dev/mapper/cinder--volumes-volume--0e89696a--492b--494c--81fa--7e834b9f31
root       420  3.1  0.0 106208  1728 ?        D    21:48   0:45  |           
\_ /bin/dd if=/dev/zero 
of=/dev/mapper/cinder--volumes-volume--0e89696a--492b--494c--81fa--7e834b9f31f4 
count=102400 bs=1M conv=fdatasync
cinder   23791  0.0  0.5 377464 41968 ?        S    Oct24   7:46  \_ 
/usr/bin/python /usr/bin/cinder-volume --config-file /etc/cinder/cinder.conf 
--logfile /var/log/cinder/volume.log

iostat output (columns as in the Device header below: rrqm/s wrqm/s r/s w/s 
rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util):
dm-23             0.00     0.00    0.00 18408.00     0.00    71.91     8.00   503.06   28.83   0.05 100.00
dm-25             0.00     0.00    0.00 20544.00     0.00    80.25     8.00   597.24   30.56   0.05 100.10
dm-29             0.00     0.00    0.00 19232.00     0.00    75.12     8.00   531.80   27.62   0.05 100.10
dm-34             0.00     0.00    0.00 20128.00     0.00    78.62     8.00   498.10   24.92   0.05 100.00
dm-39             0.00     0.00    0.00 18355.00     0.00    71.70     8.00   534.77   28.98   0.05 100.00
dm-59             0.00     0.00    0.00 18387.00     0.00    71.82     8.00   587.79   32.10   0.05 100.00
dm-96             0.00     0.00    0.00 16480.00     0.00    64.38     8.00   467.96   27.51   0.06 100.00
dm-97             0.00     0.00    0.00 17024.00     0.00    66.50     8.00   502.25   29.21   0.06 100.00
dm-98             0.00     0.00    0.00 20704.00     0.00    80.88     8.00   655.67   31.37   0.05 100.00

parent dm:
Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
dm-0            142.00 74394.00  100.00 2812.00     1.00   302.41   213.38   156.74   52.84   0.34 100.00

Thank you very much,

Dave
