Hi Chris,

On Mon, Jan 08, 2018 at 02:03:53AM +0000, Chris Mi wrote:
> > On Thu, Jan 04, 2018 at 04:34:51PM +0900, Chris Mi wrote:
> > > The insertion rate is improved more than 10%.
> > 
> > Did you measure the effect of increasing batch sizes?
> Yes. Even if we enlarge the batch size beyond 10, there is no big
> improvement. I think that's because the current kernel doesn't process
> the requests in parallel. If the kernel processed the requests in
> parallel, I believe specifying a bigger batch size would give a better
> result.

But throughput doesn't regress at some point, right? I think that's the
critical aspect when considering an "unlimited" batch size.

On Mon, Jan 08, 2018 at 08:00:00AM +0000, Chris Mi wrote:
> After testing, I find that the message passed to the kernel should not be
> too big. If it is bigger than about 64K, sendmsg returns -1 with errno 90
> (EMSGSIZE). That is about 400 commands. So how about setting the batch
> size to 128, which is big enough?

If that's the easiest way, why not. At first, I thought one could maybe
send the collected messages in chunks of suitable size, but that's
probably not worth the effort.

Cheers, Phil
