On Sat, 11 Mar 2006 13:12:30 +1100
"Steven D'Aprano" <[EMAIL PROTECTED]> wrote:
> On Fri, 10 Mar 2006 23:24:46 +1100, Dave wrote:
> > Hi. I am learning PyOpenGL and I am working with a
> > largish fixed scene composed of several thousand
> > GLtriangles. I plan to store the coords and normals in
> > a NumPy array.
> >
> > Is this the fastest solution in python?
> Optimization without measurement is at best a waste of
> time and at worst counter-productive. Why don't you time
> your code and see if it is fast enough?
>
> See the timeit module, and the profiler.

Talk about knee-jerk reactions. ;-) It's a *3D animation*
module -- of course it's going to be time-critical. Sheesh.
Now *that* is stating the obvious.

The obvious solution is actually a list of tuples, but it's
quite possible that won't be fast enough, so the NumPy
approach may be a significant speedup. I doubt you need
anything more than that, though.

I think the real question is not how fast your code handles
the data, but how fast you can get that data into PyOpenGL
and back. So the real fastest format is going to be
"whatever PyOpenGL uses" -- I'd look that up.

For comparison, SDL uses "surfaces" to store 2D data, so
when programming in PyGame, your first step is to load every
image into a surface. Once there, display to the screen is
very, very fast -- but moving from image to surface is
typically slow, no matter how spiffy your image format may
be internally. I suspect something similar applies to
PyOpenGL. (Rough sketches of both are below, after my sig.)

--
Terry Hancock ([EMAIL PROTECTED])
Anansi Spaceworks http://www.AnansiSpaceworks.com
--
http://mail.python.org/mailman/listinfo/python-list
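
P.S. Here's roughly what I mean by feeding PyOpenGL "whatever
it uses": an untested sketch, assuming your PyOpenGL build will
take NumPy arrays directly in the gl*Pointerf calls (older
builds may want Numeric/numarray instead), and with made-up
names (verts, norms, num_triangles) standing in for your scene:

    # Untested sketch.  Assumes PyOpenGL accepts NumPy arrays
    # directly in the gl*Pointerf calls; the array names and
    # num_triangles are made up for illustration.
    from OpenGL.GL import (glEnableClientState, glVertexPointerf,
                           glNormalPointerf, glDrawArrays,
                           GL_VERTEX_ARRAY, GL_NORMAL_ARRAY,
                           GL_TRIANGLES)
    import numpy

    num_triangles = 5000              # "several thousand" triangles
    num_vertices = num_triangles * 3  # three vertices per triangle

    # One row of three floats per vertex, plus a matching normal array.
    verts = numpy.zeros((num_vertices, 3), dtype=numpy.float32)
    norms = numpy.zeros((num_vertices, 3), dtype=numpy.float32)
    # ... fill verts and norms with the scene data ...

    def draw_scene():
        # Hand each whole array to OpenGL once, then draw all the
        # triangles with a single glDrawArrays call -- no Python
        # loop over individual vertices.
        glEnableClientState(GL_VERTEX_ARRAY)
        glEnableClientState(GL_NORMAL_ARRAY)
        glVertexPointerf(verts)
        glNormalPointerf(norms)
        glDrawArrays(GL_TRIANGLES, 0, num_vertices)

The point being that once the data sits in one contiguous
array, the per-frame work is a handful of calls rather than a
Python loop over thousands of triangles.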
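
P.P.S. And the PyGame version of "convert to the native format
once, then reuse it every frame" looks something like this --
again just a sketch, with a made-up filename:

    # Untested sketch; "player.png" is a made-up filename.
    import pygame

    pygame.init()
    screen = pygame.display.set_mode((640, 480))

    # Slow, do-it-once step: decode the file and convert the
    # pixels to the display's format.
    player = pygame.image.load("player.png").convert()

    # Fast, every-frame step: blit the prepared surface and flip.
    screen.blit(player, (100, 100))
    pygame.display.flip()

All the expensive decoding and pixel-format conversion happens
in the load()/convert() step; the per-frame blit is the cheap
part.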