On Sun, May 3, 2015 at 12:44 AM, David Faure <fa...@kde.org> wrote:
> On Thursday 08 January 2015 10:45:13 Mark Gaiser wrote:
> > (wow, time flies)
>
> > The issue I see here is different processes. KIO::listDir is a process,
> > KDirListerCache lives in the client process.
>
> I guess you didn't mean this literally, because in that case it's wrong;
> both happen in the client process.
>
> > You want to do sorting in both processes.
> > 1. When the client initially asks for a listing, you want to do the
> > sorting as quickly as possible to send back as little data as possible,
> > so sorting on the SlaveBase side. As long as not all data is known in
> > KDirListerCache, sorting would have to be done on the SlaveBase side.
> > That would be the case when changing the sort order _while_ data is
> > dripping in.
> > 2. When all data is fetched, you don't want to go over a socket to
> > request a different sorting method, so then you would need to have the
> > same sorting operations on the KIO client side, with data fed from
> > KDirListerCache.
>
> Obviously more complex than the current solution, but OK, let's see where
> that leads us.
>
> > I do think that KIO would need a new class for this that can handle a
> > set of predefined sorting and filtering operations if this entire
> > "slave side sorting" is to be considered for KIO.
> >
> > Just to be clear, when I said "slave side sorting" I meant sorting in -
> > for instance - SlaveBase, not in the actual slave plugins.
> > Where exactly, I don't know, but certainly before data is sent back to
> > the application that requested a KIO::listDir.
> > Sorry if I confused you here.
>
> Well, this requires the actual kioslave to first give all items to
> SlaveBase, doesn't it? Otherwise it can't sort them. So instead of
> incremental listing, this would wait until everything is available and
> then send everything; not sure it would appear faster to the user.
> (This depends on whether the time is spent doing the actual listing and
> creating of UDSEntries, or if more time is spent sending the stuff over
> the socket --- but you optimized that :-)
>
> > The advantage I see with doing this is allowing data to be visible to
> > the user as soon as data is available, which will give the user a
> > smoother and faster experience.
>
> Interesting you should say that; I think the suggested approach actually
> goes in the opposite direction from this goal.
> You need all udsentries to exist before sorting, so it might take longer
> for the first items to appear, compared to the current solution, where a
> first set of items gets sent over as soon as it is available, and then
> the other batch comes in and the two get sorted together.
> Which can make items move down, so I'm not saying it's ideal, but it
> technically does reach the above goal better :)
>
> > It allows for a mechanism of "show me the first 100 items of this
> > massive folder in this particular sort order". Something that isn't
> > fully possible right now, since the client would have to wait till all
> > data has arrived over the socket, then sort it, then present it. All
> > data would still have to be fetched on the slave side, but only X
> > number of entries would have to be sent to the client, allowing it to
> > immediately present the data. The rest of the data can then be fetched
> > on a "need to know" basis, e.g. when scrolling down for the next batch
> > of 100 items.
> > By that time the slave would probably (depending on the folder size and
> > sort order) be done fetching all items and would be ready to
> > immediately send those pre-sorted batches to the client.
>
> So the very first step is to find out how long it takes to create
> udsentries in the slave, and how long it takes to transfer them over the
> socket. If the first one is much bigger than the second one, then this
> idea would delay quite a lot the time until the first items appear...
>
> > An added advantage is that the client could then just use KIO as a
> > streaming API, just like a YouTube API or whatever other streaming API.
> > The client would just have to implement "canFetchMore" and "fetchMore"
> > if the Qt classes are used. It should prevent the client from needing
> > to implement complicated threading and UI tricks to keep the UI smooth.
>
> fetchMore is pull ("get me more now"), while entries get listed
> asynchronously and pushed via a signal, so I don't see how that would
> work; what would fetchMore really do?
>
> --
> David Faure, fa...@kde.org, http://www.davidfaure.fr
> Working on KDE Frameworks 5

Time flies indeed! I completely forgot about this thread.
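To answer the fetchMore question first: what I had in mind is the usual
canFetchMore/fetchMore pull pattern sitting on top of batches that KIO has
already pushed to the client. Something along these lines - only a sketch;
the PagedDirModel class and its members are invented for illustration, not
existing KIO or Qt API, and the batch size of 100 is arbitrary:

// Rough sketch: PagedDirModel, m_shown and m_pending are made-up names.
// KIO pushes (in this idea: already sorted) batches via the entries()
// signal; the view pulls them into the model on demand through fetchMore().

#include <QAbstractListModel>
#include <KIO/Job>
#include <KIO/UDSEntry>

class PagedDirModel : public QAbstractListModel
{
    Q_OBJECT
public:
    int rowCount(const QModelIndex &parent = QModelIndex()) const override
    {
        return parent.isValid() ? 0 : m_shown.count();
    }

    QVariant data(const QModelIndex &index, int role = Qt::DisplayRole) const override
    {
        if (!index.isValid() || role != Qt::DisplayRole)
            return QVariant();
        return m_shown.at(index.row()).stringValue(KIO::UDSEntry::UDS_NAME);
    }

    // The view asks whether scrolling further could reveal more rows.
    bool canFetchMore(const QModelIndex &parent) const override
    {
        return !parent.isValid() && !m_pending.isEmpty();
    }

    // Pull the next batch that the slave already sent us into the view.
    void fetchMore(const QModelIndex &parent) override
    {
        if (parent.isValid() || m_pending.isEmpty())
            return;
        const int batch = qMin(100, m_pending.count());
        beginInsertRows(QModelIndex(), m_shown.count(),
                        m_shown.count() + batch - 1);
        for (int i = 0; i < batch; ++i)
            m_shown.append(m_pending.takeFirst());
        endInsertRows();
    }

public Q_SLOTS:
    // Connect to KIO::ListJob::entries(); nothing becomes visible until
    // the view actually pulls it.
    void slotEntries(KIO::Job *, const KIO::UDSEntryList &entries)
    {
        m_pending += entries;
    }

private:
    KIO::UDSEntryList m_shown;    // rows currently exposed to the view
    KIO::UDSEntryList m_pending;  // batches received but not yet pulled
};

The pull side only makes sense if the batches landing in m_pending are
already in the final sort order, which is exactly the part I'm starting to
doubt.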
For the rest, I'm just going to respond here instead of inline. I fully
agree with your comments and am actually starting to doubt the usefulness
(again) of a SlaveBase-side sorting approach. In theory it sounds neat to
have a streaming API, but in practice the SlaveBase side would just have to
wait till all items are fetched before it can sort.

There would be a measurable time difference between:
1. SlaveBase fetching all items and sending a sorted batch to the client, and
2. SlaveBase sending all items over the socket, after which the client waits
till it has everything and then sorts - aka the current situation.

Point 1 will certainly be faster, but will it be worth the effort? For this
case - filesystems - it's probably just not worth it. The difference would be
seconds at most (rough guess).

Let's take a step back to the actual goal I'm trying to achieve here. Perhaps
there is another way that I haven't thought of yet? There is basically just
one goal:
1. Present data as fast as possible, in the sort order provided by the user.

How can we get there besides extreme optimizations on the SlaveBase +
UDSEntry side (which I have done; more is on my todo list)?
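One direction that comes to mind builds on what the current solution already
does (your "the other batch comes in and the two get sorted together"): keep
the listing fully incremental, but merge every incoming batch into an already
sorted list on the client, so the view can stay in the requested order the
whole time instead of sorting once at the end. Very rough sketch - sorting by
UDS_NAME only, while a real version would have to honour the user's chosen
sort role; mergeBatch and lessByName are made-up names:

#include <algorithm>
#include <iterator>
#include <KIO/UDSEntry>

// Hypothetical helper: compare two entries by display name only.
static bool lessByName(const KIO::UDSEntry &a, const KIO::UDSEntry &b)
{
    return a.stringValue(KIO::UDSEntry::UDS_NAME)
         < b.stringValue(KIO::UDSEntry::UDS_NAME);
}

// Called for every entries() batch; 'sorted' holds everything listed so far
// and is kept in order, so the view can be shown at any moment.
void mergeBatch(KIO::UDSEntryList &sorted, KIO::UDSEntryList batch)
{
    std::sort(batch.begin(), batch.end(), lessByName);

    KIO::UDSEntryList merged;
    merged.reserve(sorted.count() + batch.count());
    std::merge(sorted.constBegin(), sorted.constEnd(),
               batch.constBegin(), batch.constEnd(),
               std::back_inserter(merged), lessByName);
    sorted = merged;
}

That doesn't make the complete listing any faster, but the first batch is
visible, and already in order, the moment it arrives; later batches only
shift items around, which is the "items move down" effect you mentioned.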