On Fri 14 Sep 2018 04:19:01 AM CEST, lampahome wrote:
> I figure out one way is to *divide data of device into 1TB chunk* and
> save every chunk into qcow2 image cuz I don't change filesystem, and
> connect with backing chain.
Let's see if I understood what you want:

1) You have for example a 30TB image that you want to split because
   the filesystem won't allow files larger than 16TB.

2) You create 3 images of 10TB each (let's call them [A], [B] and
   [C]). [A] contains the range 0-10TB, [B] 10-20TB and [C] 20-30TB.
   They are connected in a backing chain ([A] <- [B] <- [C]).

3) There's an empty image [D] on top of those, which is where you
   write ([A] <- [B] <- [C] <- [D]).

4) You want a way to move the data that you write into [D] to the
   appropriate backing image. So if you write 10MB at offset 4TB and
   4MB at offset 22TB, the first 10MB should go to image [A] and the
   other 4MB should go to image [C].

Is my description correct?

Some possible solutions:

a) A new driver (or an extension to the Quorum driver) that would
   allow concatenating several disk images (like RAID-0 but without
   the striping part).

b) An extension to the block-commit command in which you would
   specify the range of data that you want to copy.

c) You could also merge the files offline. That is: first you create
   a new image that contains [A] plus its modifications, then you
   replace [A] with this new image. The first part can already be
   done with the existing tools, and the replacement can be done with
   the "blockdev-reopen" command I'm working on.

Berto
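P.S. In case it helps to make the offset mapping in point 4 concrete,
here's a quick Python sketch (the names and 10TB chunk size are my own
illustration, not QEMU code) of how a write in [D] would be routed to
whichever backing image covers that offset, splitting it if it crosses
a chunk boundary:

```python
CHUNK = 10 * 1024**4  # assumed size covered by each backing image: 10 TiB

# [A] covers 0-10TB, [B] 10-20TB, [C] 20-30TB, as in the description above
IMAGES = ["A", "B", "C"]

def route_write(offset, length):
    """Return a list of (image, relative_offset, length) tuples.

    A single guest write may cross a chunk boundary, in which case
    it is split between two (or more) backing images."""
    result = []
    while length > 0:
        idx = offset // CHUNK          # which backing image covers this offset
        rel = offset % CHUNK           # offset relative to that image's start
        n = min(length, CHUNK - rel)   # bytes that fit before the next boundary
        result.append((IMAGES[idx], rel, n))
        offset += n
        length -= n
    return result

TB = 1024**4
MB = 1024**2
# The two writes from the example: 10MB at 4TB lands in [A],
# 4MB at 22TB lands in [C] (2TB into that image).
print(route_write(4 * TB, 10 * MB))
print(route_write(22 * TB, 4 * MB))
```

The same per-range bookkeeping is what a range-aware block-commit
(option b) would have to do internally.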