Re: More efficient array processing

2008-10-24 Thread sturlamolden
On Oct 23, 8:11 pm, "John [H2O]" <[EMAIL PROTECTED]> wrote:
> datagrid = numpy.zeros(360,180,3,73,20)

On a 32 bit system, try this instead:

datagrid = numpy.zeros((360,180,3,73,20), dtype=numpy.float32)

(if you can use single precision, that is.)
--
http://mail.python.org/mailman/lis
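[A back-of-the-envelope sketch, not part of the original thread, of why the float32 suggestion matters: at the shape John posted, the default float64 array is just over 2 GiB, which a 32-bit process cannot address, while float32 halves it.]

```python
import numpy as np
from math import prod

# Shape from the thread: (lon, lat, z, fields, timesteps).
shape = (360, 180, 3, 73, 20)
n_elements = prod(shape)  # 283,824,000 elements

# Bytes per element via dtype itemsize, without allocating anything.
bytes_f64 = n_elements * np.dtype(np.float64).itemsize  # the default dtype
bytes_f32 = n_elements * np.dtype(np.float32).itemsize  # single precision

print(f"float64: {bytes_f64:,} bytes (~{bytes_f64 / 2**30:.2f} GiB)")
print(f"float32: {bytes_f32:,} bytes (~{bytes_f32 / 2**30:.2f} GiB)")
```

The float64 total (~2.11 GiB) lands right on the 2 GB per-process limit discussed later in the thread; float32 (~1.06 GiB) fits comfortably under it.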

Re: More efficient array processing

2008-10-23 Thread John [H2O]
No secret at all... As you might have guessed, it is global model fields that I am working with:

360x180 (lon,lat)

I have three 'z' levels: (360,180,3)

Then I have different 'fields', usually on the order of ~50-80: (360,180,3,60)

Lastly, I have output for several timesteps, then those times

Re: More efficient array processing

2008-10-23 Thread Ivan Reborin
On Fri, 24 Oct 2008 00:32:11 +0200, Ivan Reborin <[EMAIL PROTECTED]> wrote:
>On Thu, 23 Oct 2008 11:44:04 -0700 (PDT), "John [H2O]"
><[EMAIL PROTECTED]> wrote:
>
>>
>>Thanks for the clarification.
>>
>>What is strange though, is that I have several Fortran programs that create
>>the exact same arr

Re: More efficient array processing

2008-10-23 Thread Ivan Reborin
On Thu, 23 Oct 2008 11:44:04 -0700 (PDT), "John [H2O]" <[EMAIL PROTECTED]> wrote:
>
>Thanks for the clarification.
>
>What is strange though, is that I have several Fortran programs that create
>the exact same array structure... wouldn't they be restricted to the 2Gb
>limit as well?

Depends on lo

Re: More efficient array processing

2008-10-23 Thread Marc 'BlackJack' Rintsch
On Thu, 23 Oct 2008 13:56:22 -0700, John [H2O] wrote:

> I'm using zeros with type np.float, is there a way to define the data
> type to be 4 byte floats?

Yes:

In [13]: numpy.zeros(5, numpy.float32)
Out[13]: array([ 0.,  0.,  0.,  0.,  0.], dtype=float32)

Ciao,
Marc 'BlackJack' Rintsch
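[A small sketch, added here, confirming the dtype actually takes effect: the array's `itemsize` attribute reports bytes per element, so 4-byte floats are easy to verify.]

```python
import numpy as np

# Passing the dtype as the second argument gives 4-byte floats.
a = np.zeros(5, np.float32)

print(a.dtype)     # float32
print(a.itemsize)  # 4 bytes per element
print(a.nbytes)    # 20 bytes total for the whole array
```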

Re: More efficient array processing

2008-10-23 Thread Robert Kern
John [H2O] wrote:
> I'm using zeros with type np.float, is there a way to define the data
> type to be 4 byte floats?

np.float32. np.float is not part of the numpy API. It's just Python's builtin float type, which corresponds to C doubles.

--
Robert Kern

"I have come to believe that the whole wo
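[To illustrate Robert Kern's point with a check you can run today: Python's builtin `float` resolves to a 64-bit dtype, the same size as a C double. Note that the old `np.float` alias has since been removed from NumPy entirely; `np.float32` and `np.float64` are the supported names.]

```python
import numpy as np

# The builtin float maps to the 8-byte float64 dtype (a C double),
# so np.zeros(shape, float) never gives 4-byte elements.
print(np.dtype(float))                # float64
print(np.dtype(np.float64).itemsize)  # 8
print(np.dtype(np.float32).itemsize)  # 4
```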

Re: More efficient array processing

2008-10-23 Thread John [H2O]
I'm using zeros with type np.float, is there a way to define the data type to be 4 byte floats?

Marc 'BlackJack' Rintsch wrote:
>
> On Thu, 23 Oct 2008 11:44:04 -0700, John [H2O] wrote:
>
>> What is strange though, is that I have several Fortran programs that
>> create the exact same array srt

Re: More efficient array processing

2008-10-23 Thread Marc 'BlackJack' Rintsch
On Thu, 23 Oct 2008 11:44:04 -0700, John [H2O] wrote:

> What is strange though, is that I have several Fortran programs that
> create the exact same array structure... wouldn't they be restricted to
> the 2Gb limit as well?

They should be. What about the data type of the elements? Any chance t

Re: More efficient array processing

2008-10-23 Thread John [H2O]
Thanks for the clarification.

What is strange though, is that I have several Fortran programs that create the exact same array structure... wouldn't they be restricted to the 2Gb limit as well?

Thoughts on a more efficient workaround?

Marc 'BlackJack' Rintsch wrote:
>
> On Thu, 23 Oct 2008 1

Re: More efficient array processing

2008-10-23 Thread Marc 'BlackJack' Rintsch
On Thu, 23 Oct 2008 11:11:32 -0700, John [H2O] wrote:

> I'm trying to do the following:
>
> datagrid = numpy.zeros(360,180,3,73,20)
>
> But I get an error saying that the dimensions are too large? Is there a
> memory issue here?

Let's see: You have: 360 * 180 * 3 * 73 * 20 * 8 bytes You want:
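[A sketch of the other bug hiding in the original call, separate from the memory question: `numpy.zeros` takes the shape as a single tuple, so passing the dimensions as five separate arguments makes NumPy treat the second number as the dtype. In current NumPy this raises a TypeError rather than the "dimensions too large" message John saw in 2008; the error text has changed, but the fix is the same.]

```python
import numpy as np

# Wrong: the second positional argument of numpy.zeros is the dtype,
# so 180 here is rejected as an invalid data type.
try:
    np.zeros(360, 180)
except TypeError as exc:
    print("TypeError:", exc)

# Right: pass the shape as one tuple (small dims used for illustration).
grid = np.zeros((4, 5, 6), dtype=np.float32)
print(grid.shape, grid.dtype)
```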