On Tuesday, 29 September 2020 15:49:42 CEST Vivek Goyal wrote:
> > Depends on what's randomized. If the read chunk size is randomized, then
> > yes, you would probably see less of a performance increase compared to a
> > simple 'cat foo.dat'.
>
> We are using "fio" for testing, and the read chunk size is not randomized. [..]
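If we ever do want to test the randomized variant, a minimal fio sketch
might look like this (the block-size range is chosen purely for
illustration and is not part of the current test matrix):

  # Hypothetical variant with a randomized block size per I/O instead of
  # the fixed --bs=4k used today (range values are illustrative only).
  fio --direct=1 --ioengine=libaio --name=randbs-test \
      --filename=random_read.fio --bsrange=4k-64k --iodepth=64 \
      --size=4G --readwrite=randread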
On Tue, Sep 29, 2020 at 3:18 PM Vivek Goyal wrote:
> - virtiofs cache=none mode is faster than cache=auto mode for this
> workload.
Not sure why. One cause could be that readahead is not perfect at
detecting the random pattern. Could we compare total I/O on the
server vs. total I/O by fio?
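One possible way to answer that, sketched under the assumption that a
single virtiofsd process serves the export (using /proc/<pid>/io here is
a suggestion, not an agreed-on method):

  # On the host: snapshot virtiofsd's cumulative I/O counters before and
  # after the fio run, then compare the delta with fio's reported totals.
  PID=$(pidof virtiofsd)
  grep -E '^(read|write)_bytes' /proc/$PID/io > io.before
  # ... run fio in the guest ...
  grep -E '^(read|write)_bytes' /proc/$PID/io > io.after
  diff io.before io.after

read_bytes in /proc/<pid>/io counts what actually hit the storage layer,
so a delta much larger than fio's total would point at readahead
amplification.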
On Fri, Sep 25, 2020 at 01:41:39PM +0100, Dr. David Alan Gilbert wrote:
[..]
> So I'm still beating 9p; the thread-pool-size=1 seems to be great for
> read performance here.
>
Hi Dave,
I spent some time making changes to virtiofs-tests so that I can test
a mix of random read and random write workloads. [..]
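For anyone who wants to approximate that mix outside the harness, a
rough fio job along these lines should do (virtiofs-tests generates its
own job files; the names and sizes here are placeholders):

  # randrw-mix.fio -- standalone approximation of the mixed workload.
  # Run with: fio randrw-mix.fio
  [randrw-mix]
  filename=/mnt/random_read_write.fio
  rw=randrw
  rwmixread=75
  bs=4k
  iodepth=64
  ioengine=libaio
  direct=1
  size=4G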
On Friday, 25 September 2020 15:05:38 CEST Dr. David Alan Gilbert wrote:
> > > 9p ( mount -t 9p -o trans=virtio kernel /mnt
> > > -oversion=9p2000.L,cache=mmap,msize=1048576 ) test: (g=0): rw=randrw,
> >
> > Bottleneck --^
> >
> > By increasing 'msize' you would encounter [..]
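For illustration, requesting a larger transfer size would look like this
(4 MiB is an arbitrary example; the kernel may clamp msize depending on
transport and version):

  # The same 9p mount with a larger message size; 4 MiB is illustrative.
  mount -t 9p -o trans=virtio,version=9p2000.L,cache=mmap,msize=4194304 \
        kernel /mnt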
On Friday, 25 September 2020 14:41:39 CEST Dr. David Alan Gilbert wrote:
> > Hi Carlos,
> >
> > So you are running the following test:
> >
> > fio --direct=1 --gtod_reduce=1 --name=test
> > --filename=random_read_write.fio --bs=4k --iodepth=64 --size=4G
> > --readwrite=randrw --rwmixread=75 --output=[..]
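As an aside, if we want the totals machine-readable for the
server-vs-fio comparison suggested earlier in the thread, the same run
can emit JSON (the jq paths assume fio's standard JSON schema, which is
worth double-checking):

  # Same workload with machine-readable output; jq paths follow fio's
  # JSON schema (jobs[].read/write.io_bytes).
  fio --direct=1 --gtod_reduce=1 --name=test \
      --filename=random_read_write.fio --bs=4k --iodepth=64 --size=4G \
      --readwrite=randrw --rwmixread=75 \
      --output-format=json --output=fio.json
  jq '.jobs[0].read.io_bytes, .jobs[0].write.io_bytes' fio.json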
Hi Folks,
Sorry for the delay on how to reproduce the `fio` data.
I have some code to automate testing for multiple kata configs and to
collect info like:
- Kata-env, kata configuration.toml, qemu command, virtiofsd command.
See:
https://github.com/jcvenegas/mrunner/
Last time we agreed to narrow [..]
On Tue, Sep 22, 2020 at 12:09:46PM +0100, Dr. David Alan Gilbert wrote:
>
> Do you have the numbers for:
>    epool
>    epool thread-pool-size=1
>    spool
Hi David,
Ok, I re-ran my numbers after upgrading to the latest qemu and also
upgrading the host kernel to the latest upstream. Apart from comparing [..]
Hi,
I've been doing some of my own perf tests and I think I agree
about the thread pool size; my test is a kernel build
and I've tried a bunch of different options.
My config:
Host: 16 core AMD EPYC (32 thread), 128G RAM,
5.9.0-rc4 kernel, rhel 8.2ish userspace,
5.1.0 qemu/virtiofsd build. [..]
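For completeness, the guest side of such a setup is typically wired up
along the following lines (the socket path, tag, and memory size are
placeholders, not Dave's exact command line; virtiofs needs shared guest
memory, hence the memfd backend):

  # Illustrative qemu fragment for attaching a virtiofsd export.
  qemu-system-x86_64 ... \
      -object memory-backend-memfd,id=mem,size=4G,share=on \
      -numa node,memdev=mem \
      -chardev socket,id=char0,path=/tmp/vhostqemu \
      -device vhost-user-fs-pci,queue-size=1024,chardev=char0,tag=myfs
  # Inside the guest:  mount -t virtiofs myfs /mnt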
On Fri, Sep 18, 2020 at 05:34:36PM -0400, Vivek Goyal wrote:
> And here are the comparison results. To me it seems that by default
> we should switch to 1 thread (till we can figure out how to make
> multi-thread performance better even when a single process is doing
> I/O in the client).
Let's understand [..]
Hi All,
The virtiofsd default thread pool size is 64. To me it feels that in most
cases thread pool size 1 performs better than thread pool size 64.
I ran virtiofs-tests:
https://github.com/rhvgoyal/virtiofs-tests
And here are the comparison results. To me it seems that by default we
should switch to 1 thread (till we can figure out how to make multi-thread
performance better even when a single process is doing I/O in the client). [..]
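For reference, the knob in question is a virtiofsd command-line option;
a minimal sketch (socket and source paths are placeholders):

  # Start virtiofsd with a single request-handling thread instead of
  # the default pool of 64.
  ./virtiofsd --socket-path=/tmp/vhostqemu \
      -o source=/path/to/shared/dir -o cache=auto \
      --thread-pool-size=1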