Re: More efficient array processing

2008-10-24 Thread sturlamolden
On Oct 23, 8:11 pm, "John [H2O]" <[EMAIL PROTECTED]> wrote:
> datagrid = numpy.zeros(360,180,3,73,20)

On a 32 bit system, try this instead:

datagrid = numpy.zeros((360,180,3,73,20), dtype=numpy.float32)

(if you can use single precision, that is.)
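A rough back-of-the-envelope check of what single precision buys here (a minimal sketch; the shape is the one from the thread, and the ~2.1 GiB figure matches the units(1) calculation quoted further down):

    import numpy as np

    shape = (360, 180, 3, 73, 20)
    n = 360 * 180 * 3 * 73 * 20           # 283,824,000 elements

    print(n * 8 / 2.0**30)                # ~2.11 GiB as 8-byte float64 (the default)
    print(n * 4 / 2.0**30)                # ~1.06 GiB as 4-byte float32

    # Half the footprint may fit within a 32-bit process's address space:
    datagrid = np.zeros(shape, dtype=np.float32)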

Re: More efficient array processing

2008-10-23 Thread John [H2O]
[...] smaller 'chunks', say creating separate arrays for each field, or day.
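One way the "separate arrays for each day" idea might look (a minimal sketch; treating the last axis of the grid as days is an assumption, and the per-day processing is left as a placeholder):

    import numpy as np

    n_days = 20                           # assuming the last axis is days
    chunk_shape = (360, 180, 3, 73)       # one day's worth of the grid

    for day in range(n_days):
        chunk = np.zeros(chunk_shape, dtype=np.float32)
        # ... fill `chunk` with this day's data and process it here ...
        # Only ~54 MiB is live at any one time instead of the full >1 GiB grid.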

Re: More efficient array processing

2008-10-23 Thread Ivan Reborin
On Fri, 24 Oct 2008 00:32:11 +0200, Ivan Reborin <[EMAIL PROTECTED]> wrote:

>On Thu, 23 Oct 2008 11:44:04 -0700 (PDT), "John [H2O]"
><[EMAIL PROTECTED]> wrote:
>
>>Thanks for the clarification.
>>
>>What is strange though, is that I have several Fortran programs that create
>>the exact same arr[...]

Re: More efficient array processing

2008-10-23 Thread Ivan Reborin
On Thu, 23 Oct 2008 11:44:04 -0700 (PDT), "John [H2O]" <[EMAIL PROTECTED]> wrote:
>
>Thanks for the clarification.
>
>What is strange though, is that I have several Fortran programs that create
>the exact same array structure... wouldn't they be restricted to the 2Gb
>limit as well?

Depends on lo[...]

Re: More efficient array processing

2008-10-23 Thread Marc 'BlackJack' Rintsch
On Thu, 23 Oct 2008 13:56:22 -0700, John [H2O] wrote:
> I'm using zeros with type np.float, is there a way to define the data
> type to be 4 byte floats?

Yes:

In [13]: numpy.zeros(5, numpy.float32)
Out[13]: array([ 0., 0., 0., 0., 0.], dtype=float32)

Ciao,
	Marc 'BlackJack' Rintsch
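The element size and total footprint can also be confirmed directly from the array's itemsize and nbytes attributes (a small sketch):

    import numpy as np

    a = np.zeros(5, np.float32)
    print(a.dtype.itemsize)               # 4 bytes per element
    print(a.nbytes)                       # 20 bytes in total for 5 elements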

Re: More efficient array processing

2008-10-23 Thread Robert Kern
John [H2O] wrote:
> I'm using zeros with type np.float, is there a way to define the data
> type to be 4 byte floats?

np.float32. np.float is not part of the numpy API. It's just Python's builtin
float type, which corresponds to C doubles.

-- 
Robert Kern
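A quick way to see the distinction (a minimal sketch; float here is Python's builtin type):

    import numpy as np

    print(np.dtype(float))                # float64 -- the builtin float is a C double
    print(np.dtype(float).itemsize)       # 8 bytes
    print(np.dtype(np.float32).itemsize)  # 4 bytes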

Re: More efficient array processing

2008-10-23 Thread John [H2O]
> [...] the default in `numpy`?
>
> Ciao,
> 	Marc 'BlackJack' Rintsch

Re: More efficient array processing

2008-10-23 Thread Marc 'BlackJack' Rintsch
On Thu, 23 Oct 2008 11:44:04 -0700, John [H2O] wrote:
> What is strange though, is that I have several Fortran programs that
> create the exact same array structure... wouldn't they be restricted to
> the 2Gb limit as well?

They should be. What about the data type of the elements? Any chance t[...]

Re: More efficient array processing

2008-10-23 Thread John [H2O]
> [...] Let's see:
>
> You have: 360 * 180 * 3 * 73 * 20 * 8 bytes
> You want: GiB
> 	* 2.1146536
> 	/ 0.47289069
>
> Do you have a 32 bit system? Then 2 GiB is too much for a process.
>
> Ciao,
> 	Marc 'BlackJack' Rintsch

Re: More efficient array processing

2008-10-23 Thread Marc 'BlackJack' Rintsch
On Thu, 23 Oct 2008 11:11:32 -0700, John [H2O] wrote:
> I'm trying to do the following:
>
> datagrid = numpy.zeros(360,180,3,73,20)
>
> But I get an error saying that the dimensions are too large? Is there a
> memory issue here?

Let's see:

You have: 360 * 180 * 3 * 73 * 20 * 8 bytes
You want: GiB
	* 2.1146536
	/ 0.47289069

Do you have a 32 bit system? Then 2 GiB is too much for a process.

Ciao,
	Marc 'BlackJack' Rintsch
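The same arithmetic as that units(1) call, done directly (a minimal sketch):

    nbytes = 360 * 180 * 3 * 73 * 20 * 8  # elements times 8 bytes per float64
    print(nbytes / 2.0**30)               # ~2.1147 GiB, more than a 32-bit
                                          # process can typically address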

More efficient array processing

2008-10-23 Thread John [H2O]
I'm trying to do the following:

datagrid = numpy.zeros(360,180,3,73,20)

But I get an error saying that the dimensions are too large? Is there a
memory issue here?