Hi,

OK, as a quick test I created a default Spark DataGrid with 100,000 rows and 
ten columns of simple numbers and tried sorting a column. It's slow: about 10 
seconds on my machine.
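
Roughly the test setup, as a sketch (the field names, makeData and the 
random values are just my example, nothing from the framework):

    import mx.collections.ArrayCollection;

    // 100,000 rows of dynamic Objects with ten numeric fields.
    private function makeData():ArrayCollection
    {
        var rows:Array = [];
        for (var i:int = 0; i < 100000; i++)
        {
            var row:Object = {};
            for (var j:int = 0; j < 10; j++)
                row["col" + j] = Math.random() * 100000;
            rows.push(row);
        }
        return new ArrayCollection(rows);
    }

with the result assigned to the DataGrid's dataProvider and one 
<s:GridColumn dataField="colN"/> per field.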

Looking in Scout, 30+% of the time is garbage collection in the sort routines 
and 20+% is the sorting itself, so that's where most of the time is going. So, 
as I suspected, a lot of it is the temporary objects created by the default 
sorting algorithms, in particular the code that handles complex fields.

There is however a workaround:
- Changing the data grid to use simple custom sort functions cuts the sort 
time to around 1 second, with very little garbage collection (see the sketch 
after this list).
- Changing to use a named class rather than dynamic Objects improved the 
performance a little more.
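
By "simple custom sort function" I mean a per-column sortCompareFunction 
along these lines (a sketch; compareCol0 and the col0 field are my names):

    import spark.components.gridClasses.GridColumn;

    // Plain numeric comparison: no generic complex-field handling
    // and no temporary objects allocated per comparison.
    private function compareCol0(obj1:Object, obj2:Object,
                                 column:GridColumn):int
    {
        var a:Number = obj1.col0;
        var b:Number = obj2.col0;
        if (a < b) return -1;
        if (a > b) return 1;
        return 0;
    }

hooked up with <s:GridColumn dataField="col0" 
sortCompareFunction="compareCol0"/>.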

Alternatively, removing the custom sort and just using the named data class 
reduced the column sort time to about 2 seconds (again, down from 10 seconds).
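
The named class is nothing special, just a sealed (non-dynamic) value class, 
so property access during the sort avoids dynamic lookups (again a sketch; 
Row and the field names are mine):

    // Sealed class: the AVM can access these properties directly
    // instead of doing hash lookups on a dynamic Object.
    public class Row
    {
        public var col0:Number;
        public var col1:Number;
        // ... col2 through col9 declared the same way
        public var col9:Number;
    }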

Double checking with an mx DataGrid, the default performance was significantly 
faster and usable without any custom sort routines or named classes, so I can 
see there may be an expectation that the Spark DataGrid performs just as well.

The mx DataGrid calls the Sort.sort method directly; it may be possible to 
optimise the Spark DataGrid to do this as well. However, for large amounts of 
data the defaults may not be the most performant, and given they are coded for 
all cases rather than a known simple case, that's probably not that 
surprising.
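
For reference, calling it directly looks something like this (a sketch; rows 
is my array of items from the earlier test):

    import mx.collections.Sort;
    import mx.collections.SortField;

    var sort:Sort = new Sort();
    // SortField args: name, caseInsensitive, descending, numeric
    sort.fields = [new SortField("col0", false, false, true)];
    sort.sort(rows); // sorts the Array in place, no collection events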

Thanks,
Justin
