On Wed, Aug 26, 2020 at 08:34:32PM +0200, Alberto Garcia wrote:
> On Tue 25 Aug 2020 09:47:24 PM CEST, Brian Foster wrote:
> > My fio fallocates the entire file by default with this command. Is that
> > the intent of this particular test? I added --fallocate=none to my test
> > runs to incorporate the allocation cost in the I/Os.
>
> That wasn't intentional, you're right.
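For reference, a fio invocation along these lines exercises the allocation
cost in the I/O path; file name, size and runtime here are placeholders,
not the exact job used in the thread:

  $ fio --name=randwrite --filename=/mnt/test/file --rw=randwrite --bs=4k \
        --size=2G --ioengine=libaio --direct=1 --fallocate=none

Without --fallocate=none, fio preallocates the test file before the run,
so every write hits already-allocated blocks and the allocator never shows
up in the numbers.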
On Tue, Aug 25, 2020 at 07:18:19PM +0200, Alberto Garcia wrote:
> On Tue 25 Aug 2020 06:54:15 PM CEST, Brian Foster wrote:
> > If I compare this 5m fio test between XFS and ext4 on a couple of my
> > systems (with either no prealloc or full file prealloc), I end up seeing
> > ext4 run slightly faster on my vm and XFS slightly faster on bare metal.
> > Either way, I don't
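A minimal A/B setup for this kind of filesystem comparison might look like
the following (device, mount point and job parameters are hypothetical):

  $ mkfs.xfs -f /dev/sdb1 && mount /dev/sdb1 /mnt/test
  $ fio --name=test --filename=/mnt/test/file --rw=randwrite --bs=4k --size=2G
  $ umount /mnt/test
  $ mkfs.ext4 -F /dev/sdb1 && mount /dev/sdb1 /mnt/test
  $ fio --name=test --filename=/mnt/test/file --rw=randwrite --bs=4k --size=2G

Running both filesystems on the same device keeps the storage variable
fixed, which matters given the vm vs bare metal difference reported above.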
On Tue, Aug 25, 2020 at 02:24:58PM +0200, Alberto Garcia wrote:
> On Fri 21 Aug 2020 07:02:32 PM CEST, Brian Foster wrote:
> >> I was running fio with --ramp_time=5 which ignores the first 5 seconds
> >> of data in order to let performance settle, but if I remove that I can
> >> see the effect more clearly. I can observe it with raw files (in 'off'
> >> and 'prealloc' modes)
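The effect of that option is easy to see side by side: with --ramp_time=5
the first five seconds, which is where most of the one-off allocation cost
lands, are excluded from the reported IOPS (job parameters below are
assumptions):

  $ fio --name=job --filename=/mnt/test/img --rw=randwrite --bs=4k \
        --runtime=60 --time_based --ramp_time=5   # allocation cost mostly hidden
  $ fio --name=job --filename=/mnt/test/img --rw=randwrite --bs=4k \
        --runtime=60 --time_based                 # allocation cost included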
On Sun 23 Aug 2020 11:59:07 PM CEST, Dave Chinner wrote:
>> >> Option 4 is described above as initial file preallocation whereas
>> >> option 1 is per 64k cluster prealloc. Prealloc mode mixup aside, Berto
>> >> is reporting that the initial file preallocation mode is slower than
>> >> the per cluster
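If option 4 corresponds to qemu-img's preallocation=falloc and option 1 to
the default allocate-on-write behaviour (my reading of the thread, not
stated explicitly here), the two images would be created like this:

  $ qemu-img create -f qcow2 -o preallocation=falloc image.qcow2 40G  # initial file prealloc
  $ qemu-img create -f qcow2 image.qcow2 40G                          # per-cluster alloc at write time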
On Fri, Aug 21, 2020 at 08:59:44AM -0400, Brian Foster wrote:
> On Fri, Aug 21, 2020 at 01:42:52PM +0200, Alberto Garcia wrote:
> > On Fri 21 Aug 2020 01:05:06 PM CEST, Brian Foster wrote:
> > And yes, (4) is a bit slower than (1) in my tests. On ext4 I get 10%
> > more IOPS.
> >
> > I just
On Fri, Aug 21, 2020 at 02:12:32PM +0200, Alberto Garcia wrote:
> On Fri 21 Aug 2020 01:42:52 PM CEST, Alberto Garcia wrote:
> > On Fri 21 Aug 2020 01:05:06 PM CEST, Brian Foster wrote:
> >>> > 1) off: for every write request QEMU initializes the cluster (64KB)
> >>> > with fallocate(ZERO_RANGE) and then writes the 4KB of data.
On Thu 20 Aug 2020 11:58:11 PM CEST, Dave Chinner wrote:
>> The virtual drive (/dev/vdb) is a freshly created qcow2 file stored on
>> the host (on an xfs or ext4 filesystem as the table above shows), and
>> it is attached to QEMU using a virtio-blk-pci device:
>>
>> -drive if=virtio,file=image.qcow2
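The rest of that command line is cut off in the archive; a typical
virtio-blk attachment of such an image looks something like the following
(every option beyond those quoted above is an assumption):

  $ qemu-system-x86_64 -enable-kvm -m 4G \
        -drive if=virtio,file=image.qcow2,format=qcow2,cache=none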
On Fri 21 Aug 2020 02:59:44 PM CEST, Brian Foster wrote:
>> > Option 4 is described above as initial file preallocation whereas
>> > option 1 is per 64k cluster prealloc. Prealloc mode mixup aside, Berto
>> > is reporting that the initial file preallocation mode is slower than
>> > the per cluster
On Fri, Aug 21, 2020 at 01:42:52PM +0200, Alberto Garcia wrote:
> On Fri 21 Aug 2020 01:05:06 PM CEST, Brian Foster wrote:
> >> > 1) off: for every write request QEMU initializes the cluster (64KB)
> >> > with fallocate(ZERO_RANGE) and then writes the 4KB of data.
> >> >
> >> > 2) off w/o ZERO_RANGE
On Fri 21 Aug 2020 01:05:06 PM CEST, Brian Foster wrote:
>> > 1) off: for every write request QEMU initializes the cluster (64KB)
>> > with fallocate(ZERO_RANGE) and then writes the 4KB of data.
>> >
>> > 2) off w/o ZERO_RANGE: QEMU writes the 4KB of data and fills the rest
>> > of the cluster with zeroes.
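Outside of QEMU, the difference between those two write patterns can be
reproduced directly with xfs_io (path and data pattern are arbitrary):

  # mode 1: zero the 64KB cluster with fallocate(ZERO_RANGE), then write 4KB
  $ xfs_io -f -c "fzero 0 64k" -c "pwrite -S 0xab 0 4k" /mnt/test/cluster

  # mode 2: write the 4KB of data, then write zeroes over the remaining 60KB
  $ xfs_io -f -c "pwrite -S 0xab 0 4k" -c "pwrite -S 0x00 4k 60k" /mnt/test/cluster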
On Fri, Aug 21, 2020 at 07:58:11AM +1000, Dave Chinner wrote:
> On Thu, Aug 20, 2020 at 10:03:10PM +0200, Alberto Garcia wrote:
> > Cc: linux-xfs
> >
> > On Wed 19 Aug 2020 07:53:00 PM CEST, Brian Foster wrote:
> > > In any event, if you're seeing unclear or unexpected performance
> > > deltas between certain XFS configurations or other fs', I think the
> > > best thing to do is post a more complete description of the workload,
> > > filesystem/stora
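The sort of description being asked for usually starts with the kernel
version and the filesystem geometry, e.g. (mount point is a placeholder):

  $ uname -r
  $ xfs_info /mnt/test
  $ lsblk -o NAME,SIZE,ROTA,SCHED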
On Wed, Aug 19, 2020 at 05:07:11PM +0200, Kevin Wolf wrote:
> On 19.08.2020 at 16:25, Alberto Garcia wrote:
> > On Mon 17 Aug 2020 05:53:07 PM CEST, Kevin Wolf wrote:
> > >> > Or are you saying that ZERO_RANGE + pwrite on a sparse file (=
> > >> > cluster allocation) is faster for you than just the pwrite alone (=
> > >> > writing to already allocated cluster)?
On Wed 19 Aug 2020 05:37:12 PM CEST, Alberto Garcia wrote:
> I ran the test again on a newly created filesystem just to make sure,
> here are the full results (numbers are IOPS):
>
> |---------------+------+-----|
> | preallocation | ext4 | xfs |
> |---------------+------+-----|
On Wed 19 Aug 2020 05:07:11 PM CEST, Kevin Wolf wrote:
>> I checked with xfs on my computer. I'm not very familiar with that
>> filesystem so I was using the default options and I didn't tune
>> anything.
>>
>> What I got with my tests (using fio):
>>
>> - Using extent_size_hint didn't make any difference
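For anyone reproducing this, the XFS extent size hint can be set and
queried with xfs_io; note it only takes effect on files that have no
extents allocated yet (path and value are examples):

  $ xfs_io -c "extsize 1m" /mnt/test/image.qcow2   # set a 1 MB hint
  $ xfs_io -c "extsize" /mnt/test/image.qcow2      # query the current hint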
On Mon 17 Aug 2020 05:53:07 PM CEST, Kevin Wolf wrote:
>> > Or are you saying that ZERO_RANGE + pwrite on a sparse file (=
>> > cluster allocation) is faster for you than just the pwrite alone (=
>> > writing to already allocated cluster)?
>>
>> Yes, 20% faster in my tests (4KB random writes), but
On 17.08.2020 at 20:18, Alberto Garcia wrote:
> On Mon 17 Aug 2020 05:53:07 PM CEST, Kevin Wolf wrote:
> > Maybe the difference is in allocating 64k at once instead of doing a
> > separate allocation for every 4k block? But with the extent size hint
> > patches to file-posix, we should allocate 1 MB at once by default now
> > (if your test image was new
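Assuming those file-posix patches expose the hint as an image creation
option named extent_size_hint (my reading; check the final patches for the
exact name), the default could be overridden at creation time with
something like:

  $ qemu-img create -f qcow2 -o extent_size_hint=1M image.qcow2 40G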
On Mon 17 Aug 2020 12:10:19 PM CEST, Kevin Wolf wrote:
>> Since commit c8bb23cbdbe / QEMU 4.1.0 (and if the storage backend
>> allows it) writing to an image created with preallocation=metadata
>> can be slower (20% in my tests) than writing to an image with no
>> preallocation at all.
>
> A while
On 17.08.2020 at 17:31, Alberto Garcia wrote:
> On Mon 17 Aug 2020 12:10:19 PM CEST, Kevin Wolf wrote:
> >> Since commit c8bb23cbdbe / QEMU 4.1.0 (and if the storage backend
> >> allows it) writing to an image created with preallocation=metadata
> >> can be slower (20% in my tests) than writing to an image with no
> >> preallocation at all.
On 14.08.2020 at 16:57, Alberto Garcia wrote:
> Hi,
>
> the patch is self-explanatory, but I'm using the cover letter to raise
> a couple of related questions.
>
> Since commit c8bb23cbdbe / QEMU 4.1.0 (and if the storage backend
> allows it) writing to an image created with preallocation=metadata can
> be slower (20% in my tests) than writing to an image with no
> preallocation at all.
Hi!

On 14.08.2020 17:57, Alberto Garcia wrote:
> Hi,
>
> the patch is self-explanatory, but I'm using the cover letter to raise
> a couple of related questions.
>
> Since commit c8bb23cbdbe / QEMU 4.1.0 (and if the storage backend
> allows it) writing to an image created with preallocation=metadata can
> be slower (20% in my tests) than writing to an image with no
> preallocation at all.
Hi,

the patch is self-explanatory, but I'm using the cover letter to raise
a couple of related questions.

Since commit c8bb23cbdbe / QEMU 4.1.0 (and if the storage backend
allows it) writing to an image created with preallocation=metadata can
be slower (20% in my tests) than writing to an image with no
preallocation at all.
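For context, such an image is created with (name and size are
placeholders):

  $ qemu-img create -f qcow2 -o preallocation=metadata image.qcow2 40G

With preallocation=metadata the qcow2 metadata is fully allocated up
front, but the host file itself remains sparse, so cluster data still has
to be allocated by the host filesystem on first write.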