I am also confused. Is the striped pool something that is attached to the NIO threads? Can someone describe the sequence of events here? For example:
1. Message is received.
2. Message is deserialized by a NIO thread.
3. ???

Thanks,
D.

On Fri, Dec 16, 2016 at 9:37 AM, Denis Magda <dma...@apache.org> wrote:
> Well, it looks like I'm a bit confused about what is meant by the "striped pool".
>
> Vladimir, referring to your explanation
>
> > Yes, "striped pool" is a thread pool where every thread operates on a
> > single queue.
>
> please answer the following:
>
> - Does a queue belong to a specific cache partition? In other words, will
> every partition have its own processing thread, which is similar to what HZ
> and VoltDB do?
>
> - If the answer to the question above is negative, then how do we decide
> where a specific cache message should fall for processing?
>
> —
> Denis
>
> > On Dec 16, 2016, at 3:39 AM, Vladimir Ozerov <voze...@gridgain.com> wrote:
> >
> > Andrey,
> >
> > Yes, "striped pool" is a thread pool where every thread operates on a
> > single queue. This is the only similarity with FJP, as we do not perform
> > real fork-join in our pools. Having said that, I do not see why we might
> > want to compare our pool with FJP.
> >
> > As for throughput - it is improved because instead of accessing a single
> > *BlockingQueue* based on a *ReentrantLock*, every thread has a separate
> > queue. And it is designed in such a way that under load NIO threads
> > usually enqueue tasks and worker threads dequeue tasks without blocking,
> > and even without CAS loops, thanks to MPSC semantics.
> >
> > Vladimir.
> >
> > On Fri, Dec 16, 2016 at 2:21 PM, Andrey Mashenkov <
> > andrey.mashen...@gmail.com> wrote:
> >
> >> Vladimir,
> >>
> >> As I understand it, a "striped pool" is an array of single-threaded pools.
> >> Do you have an understanding of why throughput increased by up to 40%?
> >> It looks like it is because every thread has its own task queue.
> >>
> >> As far as I know, there is a ForkJoinPool in the JDK; FJP implements the
> >> ExecutorService interface, has striped task queues, and has task-stealing
> >> mechanics.
> >>
> >> Can we run Ignite performance tests comparing the "striped pool" and FJP?
> >>
> >> On Fri, Dec 16, 2016 at 11:17 AM, Vladimir Ozerov <voze...@gridgain.com>
> >> wrote:
> >>
> >>> Folks,
> >>> Can we move all discussions aside from the pool config to separate
> >>> threads, please? :-)
> >>>
> >>> Dima,
> >>> I heard your concern about configuration complexity. But I do not see
> >>> any problem at all. Whether to use the striped pool or not is a matter of
> >>> fine-grained tuning. We will set sensible defaults (i.e. striped pool
> >>> enabled), so 99% of users will never know that the concept of a "striped
> >>> pool" even exists.
> >>>
> >>> Striped and non-striped approaches have their own pros and cons.
> >>>
> >>> Striped:
> >>> + Better overall throughput (up to +40% compared to non-striped);
> >>> - Less predictable latency - a user can see bad numbers even under
> >>> moderate load (e.g. consider updates on a large affinity co-located
> >>> object graph). Benchmarks demonstrate this clearly.
> >>> - Higher memory footprint, because in striped mode we pre-start all the
> >>> threads. On high-end machines this may end up wasting hundreds of
> >>> megabytes of RAM.
> >>> - As a result of the previous point, it is not well-suited for clients,
> >>> which may require a small memory footprint and low startup time.
> >>>
> >>> Non-striped:
> >>> - Worse throughput due to very high contention on the BlockingQueue.
> >>> + No waste on idle threads.
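For illustration, here is a minimal sketch of the idea discussed above: an array of single-consumer workers, each draining its own queue, with producers (e.g. NIO threads) choosing a stripe by an index such as a cache partition. This is not Ignite's actual StripedExecutor; the class name, the modulo routing rule, and the queue type are assumptions made for brevity. A real implementation would use a dedicated MPSC queue so the single consumer can dequeue without CAS loops, whereas ConcurrentLinkedQueue still does CAS on both ends.

import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.locks.LockSupport;

/**
 * Illustrative sketch of a "striped pool": N single-consumer workers,
 * each with its own queue. Not Ignite's real implementation.
 */
public class StripedPoolSketch {
    private final ConcurrentLinkedQueue<Runnable>[] queues;
    private final Thread[] workers;

    @SuppressWarnings("unchecked")
    public StripedPoolSketch(int stripes) {
        queues = new ConcurrentLinkedQueue[stripes];
        workers = new Thread[stripes];

        for (int i = 0; i < stripes; i++) {
            final ConcurrentLinkedQueue<Runnable> q = queues[i] = new ConcurrentLinkedQueue<>();

            // One dedicated consumer per stripe: workers never contend with each other on dequeue.
            workers[i] = new Thread(() -> {
                while (!Thread.currentThread().isInterrupted()) {
                    Runnable task = q.poll();

                    if (task != null)
                        task.run();
                    else
                        LockSupport.parkNanos(1000); // Idle back-off; a real pool would park/unpark properly.
                }
            }, "stripe-" + i);

            workers[i].start();
        }
    }

    /**
     * Producers (e.g. NIO threads) enqueue into the stripe chosen by an index,
     * for instance a cache partition number.
     */
    public void execute(int idx, Runnable task) {
        queues[Math.floorMod(idx, queues.length)].offer(task);
    }

    public void shutdown() {
        for (Thread t : workers)
            t.interrupt();
    }
}

Because all tasks for a given index land on the same stripe, updates for the same partition are processed by one thread in order; it is also why a long task on a hot stripe shows up as the latency outliers mentioned in the pros and cons above.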
> >>>
> >>> I would propose the following final design for this:
> >>>
> >>> - Introduce a *"boolean systemThreadPoolStriped"* property;
> >>> - Set it to *"true"* for servers by default;
> >>> - Set it to *"false"* for clients by default.
> >>>
> >>> Yakov,
> >>> I still do not get your point about the (system + public) pools approach.
> >>>
> >>> Vladimir.
> >>>
> >>> On Fri, Dec 16, 2016 at 9:29 AM, Yakov Zhdanov <yzhda...@apache.org>
> >>> wrote:
> >>>
> >>>>> In my view, we should hide most of such configuration intricacies from
> >>>>> users, and select an appropriate thread pool for each task ourselves.
> >>>>> Will this be possible?
> >>>>
> >>>> We already do this. We make the decision internally on where to route
> >>>> the request.
> >>>>
> >>>> SoE, my answers below.
> >>>>
> >>>>> How did the striped pool show itself for transactional updates that
> >>>>> involve 2-phase commit? Any benefits or degradation?
> >>>>
> >>>> Even better than for atomic - up to ~50% improvement.
> >>>>
> >>>>> Here we should decide what's more important for us - throughput or
> >>>>> latency. If we execute SQL queries in the striped pool using multiple
> >>>>> partition threads then, for sure, it will affect the latency of other
> >>>>> operations that are sitting in the queue waiting for their turn to be
> >>>>> processed, but, on the other hand, the overall throughput of the
> >>>>> platform should improve because the operations will be halted less by
> >>>>> synchronization needs.
> >>>>
> >>>>> VoltDB decided to go with throughput while our current architecture is
> >>>>> mostly latency-based.
> >>>>
> >>>> What do you mean by "using several partition threads"? I am pretty sure
> >>>> that if we do as you suggest we will have degradation here instead of a
> >>>> boost:
> >>>>
> >>>> sql-query-put
> >>>> after:  77,344.83
> >>>> before: 53,578.78
> >>>> delta:  30.73%
> >>>>
> >>>> sql-query-put-offheap
> >>>> after:  32,212.30
> >>>> before: 25,322.43
> >>>> delta:  21.39%
> >>>>
> >>>> --Yakov
> >>>
> >>
> >> --
> >> Best regards,
> >> Andrey V. Mashenkov
> >> Cell: +7-921-932-61-82
> >
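To show how the proposed default-per-node-type flag might look to a user, here is a hypothetical configuration sketch. Only the "boolean systemThreadPoolStriped" property name comes from the proposal above; the setter shown in the comment is an assumption, not a released Ignite API at the time of this thread.

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class StripedPoolConfigSketch {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // Server node: per the proposal, the striped system pool would be
        // enabled by default here, so nothing would need to be set explicitly.
        cfg.setClientMode(false);

        // Hypothetical setter mirroring the proposed
        // "boolean systemThreadPoolStriped" property (NOT a released API):
        // cfg.setSystemThreadPoolStriped(true);

        try (Ignite ignite = Ignition.start(cfg)) {
            // Node runs with defaults; most users never need to touch the flag.
            System.out.println("Started node: " + ignite.name());
        }
    }
}

Clients would get the opposite default (non-striped), matching the memory-footprint and startup-time concerns listed in the pros and cons above.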