Hi Mark, could you please explain how you did that? How do you chunk the data into groups?
That is not clear to me.

Regards,
Matthias

On 07.06.2011 at 20:46, Mark Talluto wrote:

> On Jun 7, 2011, at 6:27 AM, Mark Schonewille wrote:
>
>> Since the data is already in memory, there is no reason to process it in
>> steps of 50. Also, using repeat with x = ... is very slow. Use repeat for
>> each with a counter instead:
>
>
> I believe there is a major speed benefit to chunking the data into smaller
> groups. We optimized a data processing app with this technique and brought
> tasks from minutes of processing down to milliseconds.
>
> The technique chunks the data into groups of 600 lines. Then you use a basic
> repeat -- no "with" or "for each". Then you read just the first line inside the
> repeat, then delete the first line of the dataset. You will see a major
> improvement in speed.
>
>
> Best regards,
>
> Mark Talluto
> http://www.canelasoftware.com
>
>
> _______________________________________________
> use-livecode mailing list
> use-livecode@lists.runrev.com
> Please visit this url to subscribe, unsubscribe and manage your subscription
> preferences:
> http://lists.runrev.com/mailman/listinfo/use-livecode
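For anyone following along, here is a rough sketch of how I read Mark's chunk-and-delete idea. It is in Python rather than LiveCode, just to show the shape of the loop; the function name `process_in_chunks`, the chunk size of 600, and the uppercase step are my own placeholders, not anything from Mark's app. The point is that reading and deleting line 1 of a short chunk stays cheap, while doing the same against the whole dataset gets slower as the data grows:

```python
def process_in_chunks(data, chunk_size=600):
    """Split the dataset into groups of chunk_size lines, then repeatedly
    read and delete the first line of each small group (the analogue of
    "read line 1, delete line 1" in the quoted description)."""
    lines = data.split("\n")
    results = []
    for start in range(0, len(lines), chunk_size):
        # One group of up to chunk_size lines.
        chunk = lines[start:start + chunk_size]
        while chunk:
            first = chunk.pop(0)            # read, then delete, the first line
            results.append(first.upper())   # placeholder per-line work
    return results

print(process_in_chunks("alpha\nbeta\ngamma", chunk_size=2))
```

Whether the same trick pays off in LiveCode presumably depends on the engine version and the size of the data, so it is worth timing on your own dataset.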