Looping using iterators with fractional values
Hello,

Making the transition from Perl to Python, I have a question about constructing a loop that uses a counter of type float. How does one do this in Python? In Perl this construct is quite easy:

for (my $i=0.25; $i<=2.25; $i+=0.25) {
    printf "%9.2f\n", $i;
}

Thanks in advance for your help.

Daran Rife
[EMAIL PROTECTED]
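For reference, a minimal sketch of two common Python idioms for this; the start, stop, and step values simply mirror the Perl snippet above, and are not special.

# A plain while loop. The counter accumulates floating-point rounding error,
# which is usually acceptable for simple stepping like this (0.25 happens to
# be exactly representable, so this particular loop is exact).
i = 0.25
while i <= 2.25:
    print "%9.2f" % i
    i = i + 0.25

# Alternatively, generate the values up front with Numeric's arange.
# arange uses a half-open interval, so the stop value is padded by half a step.
import Numeric
for i in Numeric.arange(0.25, 2.25 + 0.125, 0.25):
    print "%9.2f" % i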
Reading Fortran binary files
Hello,

I need to read a Fortran binary data file in Python. The Fortran data file is organized thusly:

nx, ny, nz, ilog_scale    # Record 1 (header)
ihour, data3D_array       # Record 2

where every value above is a 2-byte int. The first record is a header containing the dimensions of the data that follow, as well as the scaling factor of the data (log base 10). The second record contains the hour, followed by the 3D array of data, which is dimensioned nx by ny by nz. I also need to convert all the 2-byte int values to "regular" ints.

I realize that similar questions have previously been posted to the group, but the most recent inquiries date back to 2000 and 2001, and I thought there may be newer and easier ways to do this.

Thanks in advance for your help.

Daran Rife
[EMAIL PROTECTED]
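One possible approach is to peel the records apart by hand with the struct module. This sketch assumes a Fortran *sequential unformatted* file, where each record is wrapped in a byte-count marker; the marker size (4 bytes here), the endianness, and the file name 'data.bin' are assumptions that depend on the compiler and must be adjusted.

import struct

def read_record(f, marker_fmt='<i'):
    """Read one Fortran sequential unformatted record and return its raw bytes."""
    marker_size = struct.calcsize(marker_fmt)
    head = f.read(marker_size)
    if not head:
        return None                       # end of file
    (nbytes,) = struct.unpack(marker_fmt, head)
    data = f.read(nbytes)
    (trailer,) = struct.unpack(marker_fmt, f.read(marker_size))
    if trailer != nbytes:
        raise IOError("record markers disagree; wrong endianness or marker size?")
    return data

f = open('data.bin', 'rb')

# Record 1: four 2-byte ints (the header).
nx, ny, nz, ilog_scale = struct.unpack('<4h', read_record(f))

# Record 2: ihour followed by nx*ny*nz 2-byte ints.
raw = read_record(f)
ihour = struct.unpack('<h', raw[:2])[0]
values = struct.unpack('<%dh' % (nx * ny * nz), raw[2:])   # tuple of plain Python ints

f.close()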
Rotation of eigenvector matrix using the varimax method
Hello,

Does anyone have a Python script for rotating an eigenvector matrix using the varimax (or quartimax, or another) method?

Thanks in advance for your help.

Daran
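For what it's worth, a minimal sketch of an orthogonal rotation under Kaiser's varimax criterion (gamma=1.0; gamma=0.0 gives quartimax), using repeated SVDs of the criterion's gradient. It is written against Numeric, MLab, and LinearAlgebra; the function name, iteration cap, and tolerance are arbitrary choices, not any package's interface.

import MLab
import Numeric as N
import LinearAlgebra as LA

def varimax(Phi, gamma=1.0, max_iter=50, tol=1.0e-6):
    """Rotate the columns of the p x k loading matrix Phi.
    Returns (rotated loadings, rotation matrix)."""
    p, k = Phi.shape
    R = N.identity(k).astype(N.Float)
    d_old = 0.0
    for it in range(max_iter):
        Lam = N.matrixmultiply(Phi, R)
        # Gradient of the varimax/quartimax criterion with respect to R.
        D = MLab.diag(N.diagonal(N.matrixmultiply(N.transpose(Lam), Lam)))
        B = Lam**3 - (gamma / p) * N.matrixmultiply(Lam, D)
        u, s, vt = LA.singular_value_decomposition(
            N.matrixmultiply(N.transpose(Phi), B))
        R = N.matrixmultiply(u, vt)
        d = N.sum(s)
        if d_old != 0.0 and d / d_old < 1.0 + tol:
            break
        d_old = d
    return N.matrixmultiply(Phi, R), R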
LinearAlgebra incredibly slow for eigenvalue problems
Hello,

I need to calculate the eigenvectors and eigenvalues of a 3600 x 3600 covariance matrix. The LinearAlgebra package in Python is incredibly slow at this calculation (about 1.5 hours), in spite of the fact that I have installed Numeric with the full ATLAS and LAPACK libraries. Also note that my computer has dual Pentium IV (3.1 GHz) processors and 2 GB of RAM.

Every Web discussion I have seen about such issues indicates that one can expect huge speed-ups if one compiles and installs Numeric linked against the ATLAS and LAPACK libraries. Even more perplexing is that the same calculation takes a mere 7 minutes in Matlab 6.5, which uses both ATLAS and LAPACK. Moreover, the above calculation takes the same amount of time for Numeric to complete with --and-- without ATLAS and LAPACK. I am certain that I have done the install correctly.

Can anyone provide some insight?

Thanks in advance for your help.

Daran
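One thing worth checking: if I recall correctly, Numeric's LinearAlgebra module also provides Heigenvalues/Heigenvectors, which assume a symmetric (Hermitian) input such as a covariance matrix and use LAPACK's symmetric driver rather than the general one. A rough timing sketch on a small random stand-in matrix (the size and names here are placeholders, not the real problem):

import time
import RandomArray
import Numeric as N
import LinearAlgebra as LA

# Build a small random symmetric "covariance-like" matrix as a stand-in
# for the real 3600 x 3600 case.
n = 500
X = RandomArray.standard_normal((n, n))
C = N.matrixmultiply(X, N.transpose(X)) / float(n)

t0 = time.time()
w_gen, v_gen = LA.eigenvectors(C)        # general (nonsymmetric) driver
t1 = time.time()
w_sym, v_sym = LA.Heigenvectors(C)       # symmetric/Hermitian driver
t2 = time.time()

print "general solver:   %.2f s" % (t1 - t0)
print "symmetric solver: %.2f s" % (t2 - t1)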
Installing Numeric with ATLAS and LAPACK
Hello,

Could someone please provide instructions for installing Numeric with ATLAS and LAPACK? I think I have actually done this correctly, but I don't see any difference in speed. I'm calculating the eigenvalues of a 3600 x 3600 covariance matrix; that calculation requires a mere 7 minutes in Matlab 6.5, which uses ATLAS and LAPACK.

Thanks,

Daran
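For reference, these are the BLAS/LAPACK-related settings near the top of Numeric's setup.py that I edit for this (the full file appears in a later post in this thread); the paths are from my own layout and are only examples.

import os

# Keep only lapack_litemodule.c when linking against an external LAPACK/BLAS.
sourcelist = [os.path.join('Src', 'lapack_litemodule.c')]

# Where the ATLAS/LAPACK archives (liblapack.a, libatlas.a, ...) live.
library_dirs_list = ['/d2/lib/atlas']
libraries_list = ['lapack', 'ptcblas', 'ptf77blas', 'atlas', 'g2c']

# Also route matrixmultiply/dot/innerproduct through BLAS (_dotblas).
use_dotblas = 1

# Directory containing cblas.h.
include_dirs = ['/d2/include']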
Re: Installing Numeric with ATLAS and LAPACK
Thanks John. Those are the steps I followed, and to no avail.

Interestingly, I downloaded and installed SciPy and ran the same eigenvector problem. SciPy greatly speeds up the calculation: it was 1.5 hours using Numeric and is now only 15 minutes with SciPy. Unfortunately, SciPy only solves ordinary and generalized eigenvalue problems of a square matrix; it does not test whether the matrix is symmetric and then call the appropriate routine from LAPACK.

Daran
Re: Installing Numeric with ATLAS and LAPACK
Hi John,

When I built Numeric with ATLAS and LAPACK, the eigenvalue calculation took the same amount of time. Per your suggestion, I will capture the output of the build and post it to the Numpy discussion group.

Thanks,

Daran
Re: LinearAlgebra incredibly slow for eigenvalue problems
Hi David,

I performed the above check, and sure enough, Numeric is --not-- linked to the ATLAS libraries. I followed each of your steps outlined above, and Numeric still is not linking to the ATLAS libraries. My setup.py file is attached below.

Thanks,

Daran

--
#!/usr/bin/env python

# To use:
#     python setup.py install
# or:
#     python setup.py bdist_rpm  (you'll end up with RPMs in dist)
#
import os, sys, string, re
from glob import glob

if not hasattr(sys, 'version_info') or sys.version_info < (2,0,0,'alpha',0):
    raise SystemExit, "Python 2.0 or later required to build Numeric."

import distutils
from distutils.core import setup, Extension

# Get all version numbers
execfile(os.path.join('Lib', 'numeric_version.py'))
numeric_version = version

execfile(os.path.join('Packages', 'MA', 'Lib', 'MA_version.py'))
MA_version = version

headers = glob(os.path.join("Include", "Numeric", "*.h"))
extra_compile_args = []  # You could put "-O4" etc. here.
mathlibs = ['m']
define_macros = [('HAVE_INVERSE_HYPERBOLIC', None)]
undef_macros = []

# You might need to add a case here for your system
if sys.platform in ['win32']:
    mathlibs = []
    define_macros = []
    undef_macros = ['HAVE_INVERSE_HYPERBOLIC']
elif sys.platform in ['mac', 'beos5']:
    mathlibs = []

# delete all but the first one in this list if using your own LAPACK/BLAS
sourcelist = [os.path.join('Src', 'lapack_litemodule.c')]
# set these to use your own BLAS
library_dirs_list = ['/d2/lib/atlas']
libraries_list = ['lapack', 'ptcblas', 'ptf77blas', 'atlas', 'g2c']
# set to true (1), if you also want BLAS optimized matrixmultiply/dot/innerproduct
use_dotblas = 1
include_dirs = ['/d2/include']
# You may need to set this to find cblas.h
# e.g. on UNIX using ATLAS this should be ['/usr/include/atlas']
extra_link_args = []

# for MacOS X to link against vecLib if present
VECLIB_PATH = '/System/Library/Frameworks/vecLib.framework'
if os.path.exists(VECLIB_PATH):
    extra_link_args = ['-framework', 'vecLib']
    include_dirs = [os.path.join(VECLIB_PATH, 'Headers')]

# The packages are split in this way to allow future optional inclusion

# Numeric package
packages = ['']
package_dir = {'': 'Lib'}
include_dirs.append('Include')
ext_modules = [
    Extension('_numpy',
              [os.path.join('Src', '_numpymodule.c'),
               os.path.join('Src', 'arrayobject.c'),
               os.path.join('Src', 'ufuncobject.c')],
              extra_compile_args = extra_compile_args),
    Extension('multiarray',
              [os.path.join('Src', 'multiarraymodule.c')],
              extra_compile_args = extra_compile_args),
    Extension('umath',
              [os.path.join('Src', 'umathmodule.c')],
              libraries = mathlibs,
              define_macros = define_macros,
              undef_macros = undef_macros,
              extra_compile_args = extra_compile_args),
    Extension('arrayfns',
              [os.path.join('Src', 'arrayfnsmodule.c')],
              extra_compile_args = extra_compile_args),
    Extension('ranlib',
              [os.path.join('Src', 'ranlibmodule.c'),
               os.path.join('Src', 'ranlib.c'),
               os.path.join('Src', 'com.c'),
               os.path.join('Src', 'linpack.c')],
              extra_compile_args = extra_compile_args),
    Extension('lapack_lite',
              sourcelist,
              library_dirs = library_dirs_list,
              libraries = libraries_list,
              extra_link_args = extra_link_args,
              extra_compile_args = extra_compile_args)
    ]

# add FFT package (optional)
packages.append('FFT')
package_dir['FFT'] = os.path.join('Packages', 'FFT', 'Lib')
include_dirs.append(os.path.join('Packages', 'FFT', 'Include'))
ext_modules.append(Extension('FFT.fftpack',
                             [os.path.join('Packages', 'FFT', 'Src', 'fftpackmodule.c'),
                              os.path.join('Packages', 'FFT', 'Src', 'fftpack.c')],
                             extra_compile_args = extra_compile_args))

# add MA package (optional)
packages.append('MA')
package_dir['MA'] = os.path.join('Packages', 'MA', 'Lib')

# add RNG package (optional)
packages.append('RNG')
package_dir['RNG'] = os.path.join('Packages', 'RNG', 'Lib')
include_dirs.append(os.path.join('Packages', 'RNG', 'Include'))
ext_modules.append(Extension('RNG.RNG',
                             [os.path.join('Packages', 'RNG', 'Src', 'RNGmodule.c'),
                              os.path.join('Packages', 'RNG', 'Src', 'ranf.c'),
                              os.path.join('Packages', 'RNG', 'Src', 'pmath_rng.c')],
                             extra_compile_args = extra_compile_args))

# add dotblas package (optional)
if use_dotblas:
    packages.append('dotblas')
    package_dir['dotblas'] = os.path.join('Packages', 'dotblas', 'dotblas')
    ext_modules.append(Extension('_dotblas',
                                 [os.path.join('Packages', 'dotblas', 'dotblas', '_dotblas.c')],
                                 library_dirs = library_dirs_list,
                                 libraries = libraries_list,
                                 extra_compile_args = extra_compile_args))

long_description = """
Numerical Extension to Python with subpackages.

The authors and maintainers of the subpackages are:

FFTPACK-3.1
maintainer = "Numerical Python Developers"
maintainer_email = "[EMAIL PROTECTED]"
description = "Fast Fourier Transforms"
url = "http://numpy.sourceforge.net"

MA-%s
author = "Paul F. Dubois"
description = "Masked Array facility"
maintainer = "Paul F. Dubois"
maintainer_email = "[EMAIL PROTECTED]"
url = "http://sourceforge.net/projects/numpy"

RNG-3.1
author = "Lee Busby, Paul F. Dubois, Fred Fri
Re: LinearAlgebra incredibly slow for eigenvalue problems
Hi John,

I do have more than one version of Python lying around. To do the build and install I am typing:

/d2/python/bin/python setup.py build >&! build.out
/d2/python/bin/python setup.py install >&! install.out

Should I be doing something different?

Daran
Re: LinearAlgebra incredibly slow for eigenvalue problems
Hi David,

Yes, when Numeric compiles it does look like the linking is being done properly. I captured the build to a file and see a few lines similar to:

gcc -pthread -shared build/temp.linux-i686-2.4/Src/lapack_litemodule.o -L/d2/lib/atlas -llapack -lptcblas -lptf77blas -latlas -lg2c -o build/lib.linux-i686-2.4/lapack_lite.so

I checked the files in build/lib.linux-i686-2.4 and none have dependencies on ATLAS. When I run the Python interpreter with:

>>> import Numeric
>>> Numeric.__file__

I get the answer I expect. I am totally baffled. Any other ideas?

Thanks for your help,

Daran
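As a quick cross-check, here is a small sketch (assuming a Linux box with ldd on the PATH) that inspects the lapack_lite.so that actually gets imported, rather than the copy left under build/. Note that if ATLAS was linked statically from .a archives, its symbols are embedded in the extension and will not show up in ldd output at all.

import os
import Numeric

# Locate the installed package directory from the module that gets imported,
# then list the shared libraries its lapack_lite extension is linked against.
pkg_dir = os.path.dirname(Numeric.__file__)
so_path = os.path.join(pkg_dir, 'lapack_lite.so')
print 'Checking:', so_path
os.system('ldd %s' % so_path)   # look for libatlas / liblapack here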
Re: LinearAlgebra incredibly slow for eigenvalue problems
David,

One more thing: I checked to see whether the SciPy libraries have dependencies on ATLAS. They do not; however, the eigenvector calculation is still much faster than Numeric's. This is very strange.

Daran
Re: LinearAlgebra incredibly slow for eigenvalue problems
David,

I noticed that the libraries ATLAS builds are not shared objects (e.g., liblapack.a). Should these be shared objects? I see nothing in the ATLAS documentation about building them as shared objects. I am wondering whether this is why the Numeric install is failing.

Daran
Re: Installing Numeric with ATLAS and LAPACK
Could you clarify this, please? Let's say that I want to make a call to the LAPACK routine sspevd, pass it a matrix, and get the result. How do I accomplish this?

Thanks,

Daran
Suggestions for optimizing my code
Hello,

I am looking for suggestions on how I might optimize my code (make it execute faster) and make it more streamlined and elegant. But before describing my code, I want to state that I am not a computer scientist (I am an atmospheric scientist) and have but a rudimentary understanding of OO principles.

My problem is this: I want to find the maximum off-diagonal element of a correlation matrix, and the -position- of that element within the matrix. In case you are unfamiliar with correlation matrices, they are square and symmetric, and contain the mutual correlations among data collected at different points in space or time.

I write much Python code and absolutely love the language. To do the task outlined above I twiddled around with the "argsort" methods available in both Numeric and numarray. In the end, I decided to accomplish this with the following algorithm.

<
import MLab
import Numeric as N

def FindMax(R):
    """Find row (column) where the max off-diagonal element occurs in matrix [R]."""
    # First get the elements in the lower triangle of [R],
    # since the diagonal elements are uninformative and
    # the upper triangle contains redundant information.
    Y = MLab.tril(R)
    offMax = -1.0   # correlations can never be smaller than -1
    rowMax = 0
    colMax = 0
    for row in range(len(Y)):
        for col in range(0, row):
            if Y[row][col] > offMax:
                offMax = Y[row][col]
                rowMax = row
                colMax = col
    return (rowMax, colMax, offMax)
>

Now, this algorithm runs sufficiently fast on "small" matrices, but I worry that performance will not scale well when the dimensions of the matrix grow "large" (say 1000-1500 elements on a side, or larger).

So, on to my question: could someone please provide a suggestion for making this code more efficient and elegant? Again, I have but a rudimentary understanding of OO principles.

Thanks very much for your help,

Daran
[EMAIL PROTECTED]
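One possible vectorized alternative is sketched below. It assumes MLab.tril accepts a sub-diagonal offset and uses Numeric's where/ravel/argmax, so the search stays in compiled loops rather than Python-level loops; the -2.0 fill value is just any number below the smallest possible correlation, and the function name is my own.

import MLab
import Numeric as N

def find_max_offdiag(R):
    """Return (row, col, value) of the largest off-diagonal element of R."""
    n = R.shape[0]
    # 1 strictly below the diagonal, 0 elsewhere.
    mask = MLab.tril(N.ones((n, n)), -1)
    # Replace the diagonal and upper triangle with -2.0, which is smaller
    # than any valid correlation, so argmax can only pick a lower-triangle cell.
    Y = N.where(mask, R, -2.0)
    flat = N.argmax(N.ravel(Y))
    row, col = divmod(flat, n)
    return row, col, Y[row][col]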
Trouble with numpy-0.9.4 and numpy-0.9.5
Hello,

I use the Python Numeric package extensively and had been an avid user of the "old" scipy. In my view, both pieces of software are truly first rate, and they have greatly improved my productivity in the area of scientific analysis. Thus, I was excited to make the transition to the new scipy core (numpy). Unfortunately, I am experiencing a problem that I cannot sort out.

I am running Python 2.4.2 on a Debian box (V3.1), using gcc version 3.3.5, and ATLAS, BLAS, and LAPACK libraries built from scratch. When building numpy everything seems to go AOK, but trouble soon follows. As a sanity check, I run the diagnostic tests that come as part of the numpy package:

from numpy import *
from scipy import *
test(level=N)   # N can vary from 1 to 10

No matter which test I run, Python crashes hard with only the following output: "Floating exception".

I thought perhaps numpy would still work OK, even though it seemed to have failed this "sanity" test. Thus, I tried using the numpy.linalg.eig functionality, and upon calling numpy.linalg.eig I get the same behavior: Python crashes hard with the same output, "Floating exception".

Can someone help me sort out what might be wrong? Perhaps I have overlooked a crucial step in the build/install of numpy.

Thanks very much,

Daran
Structure function
Hello,

Does anyone have an efficient Python routine for calculating the structure function?

Thanks very much,

Daran
[EMAIL PROTECTED]
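To help frame the question, here is a minimal sketch of the second-order structure function, D(r) = <(f(x+r) - f(x))**2>, for an evenly sampled 1-D series; the function name and interface are only illustrative, and max_lag must be smaller than the series length.

import Numeric as N

def structure_function(f, max_lag):
    """Second-order structure function of a 1-D, evenly sampled series,
    for lags r = 1 .. max_lag."""
    f = N.asarray(f, N.Float)
    D = N.zeros(max_lag, N.Float)
    for r in range(1, max_lag + 1):
        diff = f[r:] - f[:-r]
        D[r - 1] = N.sum(diff * diff) / len(diff)
    return D

# Example use: D = structure_function(series, 50)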