Nice, the proof is in the pudding. That's why they say a good programmer
spends most of their time staring at the ceiling ... thinking. Who
"they" are ... I do not know.
On Tue, Jun 19, 2018 at 12:47 AM, Curry Kenworthy via use-livecode <
use-livecode@lists.runrev.com> wrote:
>
> Geoff wrote:
>
> This is certainly not identical to the original method, but it's
> close enough, and runs in a small fraction of a second.
Good approach! Optimized code rules.
Best wishes,
Curry K.
_______________________________________________
use-livecode mailing list
use-livecode@lists.runrev.com
Thanks for all the suggestions -- I tried multiple ideas without much
improvement, then decided to rethink the problem. Roughly:
I have a bunch of source data that I categorized by two numbers, the first
from 1-N where N might be anywhere from 100-5,000, and the second from
1-100. For any given fi
Thanks for the tip, Ralph -- love the sound of that filter function.
On Tue, Jun 12, 2018 at 7:00 PM, Curry Kenworthy via use-livecode <
use-livecode@lists.runrev.com> wrote:
>
> Optimizing scripts in LC is not the same as running reports in other
> software suites. If you only need 10 results, you
Optimizing scripts in LC is not the same as running reports in other
software suites. If you only need 10 results, you probably don't want to
handle all items twice.
I hate to loop through all items even once. But if I do, I may be done!
I'm not making a full report to print out; I'm just g
-----Original Message-----
From: use-livecode [mailto:use-livecode-boun...@lists.runrev.com] On Behalf
Of hh via use-livecode
Sent: Tuesday, June 12, 2018 4:39 PM
To: use-livecode@lists.runrev.com
Cc: hh
Subject: Re: Optimization can be tricky
Then there's something about Geoff's problem (or problem statement) that
I don't understand.
What on earth is in those records for sorting a mere 2000 of them to be
noticeable? (I had thought he must be talking about magnitudes more
lines than that.)
LC (9.0) sorts 20,000 lines of >1000 cha
The scenario Geoff described is roughly to get the
top ten (a handful) of 2000 records comparing a certain
numeric value of each.
To get that unknown value of each one has to go once through
all the records *and then sort* for that ranking.
(LiveCode is very fast with a simple numeric sort!)
Any ot
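That sort-and-take-ten approach might look like the following in LiveCode (variable names are hypothetical; each line of tRanking is assumed to hold "recordID,score"):

```
-- one line per record: "recordID,score"
-- a plain numeric sort on a simple item is very fast in LiveCode
sort lines of tRanking descending numeric by item 2 of each
put line 1 to 10 of tRanking into tTopTen
```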
Put yourself in the computer's shoes, and also clarify what you need to
accomplish. You are asking it to sort the entire list of 2000 records,
but (if I understand) you only want a handful of those.
And it has already gone through all the records once before the sort. If
you asked a human t
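One way to act on that, sketched with hypothetical names: keep a ten-line "best so far" shortlist during the single pass, and only touch it when a record beats the current tenth entry, so the 2000-line sort never happens:

```
-- single pass; tTopTen holds the 10 best lines, sorted descending by item 2
repeat for each line tRecord in tData
   if the number of lines of tTopTen = 10 and \
         item 2 of tRecord <= item 2 of line 10 of tTopTen then
      next repeat -- most records are rejected with one comparison, no sort
   end if
   if tTopTen is empty then
      put tRecord into tTopTen
   else
      put cr & tRecord after tTopTen
   end if
   sort lines of tTopTen descending numeric by item 2 of each
   if the number of lines of tTopTen > 10 then delete line 11 of tTopTen
end repeat
```

Sorting an eleven-line variable occasionally is far cheaper than sorting the whole list once.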
I don't know if these are good ideas or BAD ideas and without some
suitable data, not sure if I could find out - so I'll throw the ideas
over and let you decide if they are worth trying :-)
1. Two ideas combined
- avoid scanning for the "item 1" of each line
- use the (hopefully optimi
Sorry, I forgot to mention Jerry who already proposed to pull out computations
from the inner loop and also to use variables in the inner loop. My experience
says to avoid getting items as often as possible. And to use the random() in the
inner repeat loop instead of in the final sort may be worth
You could try the following:
repeat for each key T in interestArray[uID]
   put item 1 of interestArray[uID][T] into i1
   put item 2 of interestArray[uID][T] into i2
   repeat for each line S in storyArray[T]
      put userSeenArray[uID][item 1 of S] into s1
      put abs(item 2 of S - i1) into s2
      i
Hi Geoff,
One thing to try in your original code, which should be significantly
faster if the array is big, is using
> repeat for each key T in interestArray[uID]
instead of
> repeat for each line T in the keys of interestArray[uID]
The latter has to allocate memory for a string containing all
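The two loop forms, side by side (array names as used earlier in the thread; loop bodies elided):

```
-- builds one big string of all the keys first, then walks its lines
repeat for each line T in the keys of interestArray[uID]
   -- ...
end repeat

-- iterates the keys directly, with no intermediate string
repeat for each key T in interestArray[uID]
   -- ...
end repeat
```

Note that in neither form is the iteration order of the keys guaranteed, so nothing is lost by switching.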
I know that this might be a different use case, but I have a CA Lottery
Fantasy Five parser that collects lottery data, loads it into a matrix,
and provides the total number of hits for each number. It also has 4
optimization buttons to show the differences. This application was o
Do you need the entire list sorted randomly or are you just needing to select N
random entries from the list (weighted)? How does the speed change if you do a
simple numeric sort?
You could use a function on a random number to weight things - would need to
work out the right math to get what yo
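One concrete weighting function (my suggestion, not necessarily what was meant above): give each line the sort key random^(1/weight) and keep the N largest keys, which yields a weighted sample without replacement (the Efraimidis-Spirakis method). A sketch assuming each line of tList holds "id,weight":

```
repeat for each line tLine in tList
   -- random(1000000)/1000000 approximates a uniform value in (0,1]
   put (random(1000000) / 1000000) ^ (1 / item 2 of tLine) into tKey
   put tKey & comma & tLine & cr after tScored
end repeat
delete last char of tScored
sort lines of tScored descending numeric by item 1 of each
put line 1 to 10 of tScored into tSample -- ten weighted picks
```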
At first glance, it looks like you might save some time by grabbing
interestArray[uID][T] as soon as you have T, and then use what you grabbed in
the 3 later places instead of re-figuring interestArray[uID][T] each time.
As in:
repeat for each line T in the keys of interestArray[uID]
put in
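A sketch of that caching (everything past the array fetch is hypothetical; the grab-once idea is the point):

```
repeat for each line T in the keys of interestArray[uID]
   put interestArray[uID][T] into tInterest -- fetch the element once
   put item 1 of tInterest into i1 -- reuse the cached copy from here on
   put item 2 of tInterest into i2
   -- ... rest of the loop body uses tInterest, i1, i2 ...
end repeat
```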
Please do, Geoff -- this subject is very interesting to me. I have some
problems where I need to optimize in a similar way. Which kind of repeat
loop have you found works fastest? Have you tried "repeat for each key
this_key in array"? Is that slower?
I love saving milliseconds. :) Makes a big diff