On Dec 9, 2012, at 4:31 PM, Glen Bojsza wrote:
> That's it!!
>
> 500,000 points in 419 milliseconds.
>
> This scales perfectly for what I need.
>
> thanks Dick.
>
You're welcome, Glen. Now I've tried optimizing for speed and legibility.
Would you check the timing for your data, please?
On Sun, Dec 9, 2012 at 7:07 PM, Dick Kriesel wrote:
> Hi, Glen. I've reread your replies. Here's a new version that returns
> the original values, as needed.
>
> Please test and report again.
-- Dick
function digest @pLines
local tGroupSize, tLineNumber, tArray, tKeys, tMin, tMax, tResult
put number of lines in pLines div 1000 into tGroupSize
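Dick's LiveCode function is truncated above, but the grouping idea it opens with can be sketched in Python (purely illustrative; the names and details here are assumptions, not the LiveCode original): split the data into roughly 1,000 groups and keep only each group's minimum and maximum rows, preserving the original line numbers and values.

```python
def digest(pLines, target_groups=1000):
    """Downsample "lineNumber value" rows, keeping each group's min and max rows.

    A Python sketch of the grouping idea only; it is not Dick's routine.
    """
    rows = [(int(n), float(v)) for n, v in
            (line.split() for line in pLines.strip().splitlines())]
    group_size = max(1, len(rows) // target_groups)
    kept = []
    for start in range(0, len(rows), group_size):
        group = rows[start:start + group_size]
        lo = min(group, key=lambda r: r[1])   # row holding the group minimum
        hi = max(group, key=lambda r: r[1])   # row holding the group maximum
        kept.extend(sorted({lo, hi}))         # original order; dedupe if lo == hi
    return kept
```

Because only two rows survive per group, a 500,000-point dataset collapses to about 2,000 plot points while the extremes, the part a chart actually shows, are preserved.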
I removed my code that gives the actual item 1 value... setting it back to
your original code.
Then I removed your line "put 0 into tLineNumber".
The results only gave a couple of data points and not the actual value of
item 1.
===
on mouseUp
   put fld mydata into plines
   put
I have a curiosity question about this type of thing. Is there a way to
adjust things so that posting data to a URL is not blocking? If so, you
could take advantage of a webserver's multithreading (or several webservers)
to hand off "jobs" and then get the returned result.
Otherwise it should be po
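The hand-off idea can be sketched in Python (illustrative only; the thread is about LiveCode, and the worker URLs and `post_job` helper below are hypothetical stand-ins for a real HTTP POST): submit each chunk to a worker thread and block only once, when collecting results.

```python
from concurrent.futures import ThreadPoolExecutor

def post_job(server_url, payload):
    # Hypothetical stand-in for a blocking HTTP POST (e.g. via urllib.request);
    # each call runs on its own thread, so the posts don't serialize.
    return f"{server_url} processed {len(payload)} points"

servers = ["http://worker1/digest", "http://worker2/digest"]  # hypothetical
chunks = [list(range(5000)), list(range(5000, 10000))]        # split the dataset

with ThreadPoolExecutor(max_workers=len(servers)) as pool:
    futures = [pool.submit(post_job, s, c) for s, c in zip(servers, chunks)]
    results = [f.result() for f in futures]  # block here, once, for all jobs
```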
On Dec 9, 2012, at 12:37 PM, Glen Bojsza wrote:
> So there must be a better way to get the item 1 value and still stay in the
> millisecond range...or not?
Did you remove the line I mentioned, verify the results, and check the timing?
-- Dick
Hi Dick,
My mistake... my solution was actually just putting in the correct line
numbers and NOT the item value for the associated line.
So the solution I indicated does NOT work.
I still need the original item 1 value of the associated max and min lines.
I quickly tried to pull the item 1 value
On Dec 9, 2012, at 3:17 AM, Glen Bojsza wrote:
> In other words, the two values found in column 2 of the group data must then
> get the corresponding column 1 values from the original dataset.
Hi, Glen. You'll get that if you remove the line "put 0 into tLineNumber" near
the end of the repeat l
Hi Mike,
Yes, the numbers are used... they will always be sequential and unique, but
they may be much larger than in my example.
On Sun, Dec 9, 2012 at 9:59 AM, Michael Kann wrote:
> Glen,
> In your example do you use the numbers in the left column for anything?
> Are they just the line numbers?
>
D'oh, 100k lines takes 800+ milliseconds. Me and my brain don't always talk.
On Sun, Dec 9, 2012 at 8:55 AM, Mike Bonner wrote:
> Here's one more possibility. If it doesn't matter which duplicate is used
> when there are duplicates, then the following will do 100k lines in 167
> milliseconds. This is using
Here's one more possibility. If it doesn't matter which duplicate is used
when there are duplicates, then the following will do 100k lines in 167
milliseconds. This is using Google, sorry if the blasted asterisks show up.
(To clean up yours, I pasted into a field and replaced * with empty, but it's
still a pai
Glen,
In your example do you use the numbers in the left column for anything? Are
they just the line numbers?
Mike
Example:
1 23
2 12
3 9
4 77
5 2
6 13
7 44
8 83
9 2
10 37
In this example the result would be... Note: if one or more values are the
min or max, the
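Working Mike's example through in Python (illustrative only, not the LiveCode solution): find the min and max of column 2 and report every line number carrying them, which is presumably what the truncated note about duplicated min/max values is getting at.

```python
data = """1 23
2 12
3 9
4 77
5 2
6 13
7 44
8 83
9 2
10 37"""

rows = [(int(n), int(v)) for n, v in (ln.split() for ln in data.splitlines())]
lo = min(v for _, v in rows)                  # smallest column-2 value
hi = max(v for _, v in rows)                  # largest column-2 value
min_lines = [n for n, v in rows if v == lo]   # every line holding the min
max_lines = [n for n, v in rows if v == hi]   # every line holding the max
```

Here the minimum 2 occurs on lines 5 and 9, and the maximum 83 on line 8, so the duplicate case in the note is exercised by the sample data itself.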
Hi Dick,
I have adjusted your routine so it reports back the values from the
original dataset... note the "x" factor that has been added.
Also, for 50,000 data points I was at approximately 9 seconds; with your
routine it is now in the low milliseconds!
Now I have to really review and underst
Yes, I always thought that "repeat with" was the only slow repeat method, and as
you point out it is only "repeat for each" that increases the speed, not "repeat
for".
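The reason "repeat for each" wins is that it streams through the text in a single pass, while "repeat with i = 1 to n" combined with "line i of tText" rescans the text from the top on every access, which is quadratic overall. A rough Python analogy (not LiveCode):

```python
text = "\n".join(str(i) for i in range(2000))

def sum_by_index(t):
    # like "repeat with i = 1 to the number of lines in t ... line i of t":
    # each lookup re-splits the whole text, so the loop is O(n^2) overall
    lines_count = t.count("\n") + 1
    return sum(int(t.split("\n")[i]) for i in range(lines_count))

def sum_streaming(t):
    # like "repeat for each line tLine in t": a single O(n) pass
    return sum(int(line) for line in t.split("\n"))
```

Both return the same total, but the indexed version's cost grows with the square of the line count, which matches the seconds-to-milliseconds jump reported in the thread.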
Sorry about not pasting in plain text as I would have liked to have seen your
solution.
Glen
On Dec 9, 2012, at 4:49 AM, Richar
Hi Dick,
I tried your solution assuming that plines is my original data in fld
mydata as described.
I get a resulting output incredibly fast, but the column 1 values don't
work... i.e., I think they are putting in the line number associated with the
group size and not the original column data?
In other
On Dec 9, 2012, at 12:33 AM, Glen Bojsza wrote:
> I believe that this should be doable in less than 1 second
Hi, Glen. Here's a draft you could try. It works for the sample data you
posted.
If you have questions, please ask. If you try it, please report your timings.
-- Dick
function foo
Glen Bojsza wrote:
> Once again I hope the algorithm experts can speed up the following...
> currently on just 50,000 points it takes 10 seconds... as I indicate
> below I know why, so this is why I am looking for better solutions
...
> on mouseUp
>   put the seconds into startTime
>   put the
Once again I hope the algorithm experts can speed up the following...
currently on just 50,000 points it takes 10 seconds... as I indicate below
I know why, so this is why I am looking for better solutions.
Key points are:
- datasets are a minimum of 10,000 lines
- datasets are always a multiple of