I agree that initial loading and real-time streaming should be treated as different use cases.
For the loading part, I would borrow ideas from the direct data load IEP [1]. Ignite should assume that no app works with the cluster until it is preloaded. So, no global locks or anything like that. Just fasten your seat belt and feed data to your nodes. For the streaming part, I would consider options 2 or 3 proposed by Igor.

-- Denis

[1] https://cwiki.apache.org/confluence/display/IGNITE/IEP-22%3A+Direct+Data+Load

On Fri, Jul 13, 2018 at 10:03 AM Seliverstov Igor <[email protected]> wrote:

> Ivan,
>
> In any case, the DataStreamer is the fastest way to deliver data to a data node; the question is how to apply it correctly.
>
> I don't think we need one more tool that is 90% the same as the DataStreamer.
>
> All we need is to implement a couple of new stream receivers.
>
> Regards,
> Igor
>
> > On 13 July 2018, at 9:56, Павлухин Иван <[email protected]> wrote:
> >
> > Hi Igniters,
> >
> > I had a look into IgniteDataStreamer. As far as I understand, it currently works incorrectly for MVCC tables, which makes it a blocker for releasing MVCC. The simplest option is to refuse to create a streamer for MVCC tables.
> >
> > The next step could be a careful separation of the related use cases. To me, initial load and continuous streaming look like quite different cases, and it is better to keep them separate, at least at the API level. Perhaps the API should be split based on user experience. For example, DataStreamer could be considered a tool without surprises (meaning it always leaves data consistent and respects transactions), while, say, a BulkLoader would be a beast for the fastest possible data loading but full of surprises. Such surprises could include locking tables, rolling back user transactions, and so on, so it would be of very limited use (such as initial load). Keeping the API entities separate looks better to me than introducing multiple modes, because separate entities are easier to understand and thus less prone to user mistakes.
> >
> > --
> > Best regards,
> > Ivan Pavlukhin
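
[Editor's note: Igor's suggestion above is that the existing DataStreamer can cover both use cases if the per-batch apply logic is made pluggable via new stream receivers. The sketch below models that extension point only. `StreamReceiver` here is a simplified stand-in, not the real `org.apache.ignite.stream.StreamReceiver` (whose `receive` method takes an `IgniteCache` and a collection of entries); the two receiver classes and the `Map`-as-cache are hypothetical illustrations of how an "initial load" policy and a more careful "streaming" policy could differ behind one streamer API.]

```java
import java.util.AbstractMap;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Simplified stand-in for Ignite's StreamReceiver: the streamer delivers a
// batch to the data node, and the receiver decides how it is applied.
interface StreamReceiver<K, V> {
    void receive(Map<K, V> cache, List<Map.Entry<K, V>> batch);
}

// "Initial load" policy: blindly write every entry, assuming no concurrent
// application traffic (no locking, no conflict handling -- maximum speed).
class PutAllReceiver<K, V> implements StreamReceiver<K, V> {
    @Override
    public void receive(Map<K, V> cache, List<Map.Entry<K, V>> batch) {
        for (Map.Entry<K, V> e : batch)
            cache.put(e.getKey(), e.getValue());
    }
}

// "Streaming" policy: never overwrite an existing value, modeling a receiver
// that respects concurrent writers instead of silently clobbering their data.
class KeepExistingReceiver<K, V> implements StreamReceiver<K, V> {
    @Override
    public void receive(Map<K, V> cache, List<Map.Entry<K, V>> batch) {
        for (Map.Entry<K, V> e : batch)
            cache.putIfAbsent(e.getKey(), e.getValue());
    }
}

public class ReceiverSketch {
    public static void main(String[] args) {
        Map<String, Integer> cache = new HashMap<>();
        cache.put("a", 1); // written by a concurrent application

        List<Map.Entry<String, Integer>> batch = List.of(
            new AbstractMap.SimpleEntry<>("a", 100),
            new AbstractMap.SimpleEntry<>("b", 2));

        new KeepExistingReceiver<String, Integer>().receive(cache, batch);

        System.out.println(cache.get("a")); // existing value kept: 1
        System.out.println(cache.get("b")); // new key streamed in: 2
    }
}
```

The point of the sketch is Igor's: both behaviors live behind the same streamer entry point, and only the receiver changes, so no second 90%-identical tool is needed.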
