locate items in matrix (index of lists of lists)
Hello there, let's suppose I have the following matrix:

mat = [[1,2,3], [3,2,4], [7,8,9], [6,2,9]]

where [.., .., ..] are the rows. I am interested in getting the "row index" of every matrix row in which a certain number occurs. For example, for 9 I should get 2 and 3 (counting from 0). For 10 I should get an error message (item not found) and handle it. How do I get the "row indexes" of found items? In practice I am looking for an equivalent of "list.index(x)" for the case of "lists of lists". Many Thanks! Alex

PS: this is just a simplified example, but I actually have to deal with large matrices [~50 * 4]

-- http://mail.python.org/mailman/listinfo/python-list
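A minimal sketch of such a "lists of lists" index (the helper name find_rows is my own; raising ValueError mirrors what list.index does for a missing item):

```python
def find_rows(mat, x):
    """Return the indexes of all rows of mat that contain x."""
    rows = [i for i, row in enumerate(mat) if x in row]
    if not rows:
        # same failure mode as list.index(x)
        raise ValueError("item not found: %r" % (x,))
    return rows

mat = [[1, 2, 3], [3, 2, 4], [7, 8, 9], [6, 2, 9]]
print(find_rows(mat, 9))   # -> [2, 3]
```

Wrapping the call in try/except ValueError handles the "10 is nowhere" case.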
get rid of duplicate elements in list without set
Hello there, I'd like to get the same result as set() but as an indexable object. How can I get this efficiently? Example using set:

A = [1, 2, 2, 2, 3, 4]
B = set(A)    # set([1, 2, 3, 4])
B[2]          # TypeError: set is not indexable

Many thanks, alex
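A sketch of an order-preserving, indexable de-duplication (the helper name unique is my own; for sortable data, sorted(set(A)) also yields an indexable list):

```python
def unique(seq):
    """Return a list with duplicates removed, keeping first-seen order."""
    seen = set()
    out = []
    for x in seq:
        if x not in seen:   # set membership test is O(1) on average
            seen.add(x)
            out.append(x)
    return out

A = [1, 2, 2, 2, 3, 4]
B = unique(A)
print(B)      # -> [1, 2, 3, 4]
print(B[2])   # -> 3, indexing now works
```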
replacing numbers in LARGE MATRIX with criterium in 2 columns (a--> b)
Hello, I have this matrix [20*4, but it could be n*4 with n ~ 100,000] in file "EL_list":

1, 1, 2, 3
2, 4, 1, 5
3, 5, 1, 6
4, 7, 5, 6
5, 8, 7, 9
6, 8, 5, 7
7, 10, 9, 7
8, 10, 11, 12
9, 7, 13, 10
10, 14, 15, 16
11, 14, 17, 15
12, 17, 14, 18
13, 13, 16, 10
14, 16, 15, 11
15, 16, 11, 10
16, 19, 20, 21
17, 22, 15, 20
18, 17, 20, 15
19, 23, 20, 24
20, 25, 24, 20

I would like to replace some numbers in "EL_list", but only in columns 2, 3 and 4, using the per-line criterium in file "criterium" (column 1 holds the new numbers, column 2 the old ones):

1 1
2 3
3 5
4 12
5 13
6 14
7 15
8 16
9 17
10 18
11 19
12 10
13 21
14 22
15 23
16 24
17 25

For example all the 7 have to be replaced by 15, and so on. How can I implement this in fast and efficient code? many thanks, Alex
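A sketch of one fast approach: build an old -> new dictionary once, then map every value in columns 2-4 through it in a single pass. The helper name renumber is mine, and I follow the stated convention that column 1 of "criterium" is the new number and column 2 the old one:

```python
def renumber(el_rows, crit_pairs):
    """Replace old numbers with new ones in columns 2-4 of EL_list rows."""
    # crit_pairs holds (new, old) per line of "criterium"
    old_to_new = {old: new for new, old in crit_pairs}
    # column 1 (the row id) is left untouched; unknown values pass through
    return [[row[0]] + [old_to_new.get(v, v) for v in row[1:]]
            for row in el_rows]

crit = [(1, 1), (2, 3), (3, 5), (4, 12), (5, 13), (6, 14), (7, 15),
        (8, 16), (9, 17), (10, 18), (11, 19), (12, 10), (13, 21),
        (14, 22), (15, 23), (16, 24), (17, 25)]
el = [[1, 1, 2, 3], [2, 4, 1, 5]]
print(renumber(el, crit))   # -> [[1, 1, 2, 2], [2, 4, 1, 3]]
```

The dictionary lookup makes each replacement O(1), so the whole pass stays linear in n even for n ~ 100,000 rows.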
writing on file not until the end
Hello, I am a newbie with Python. I wrote the following code to extract a piece of text from a file and write it to another file:

linestring = open(path, 'r').read()  # read the whole inp file into linestring
i = linestring.index("*NODE")
i = linestring.index("E", i)
e = linestring.index("*", i + 10)
textN = linestring[i+2:e-1]  # crop the element+nodes list
Nfile = open("N.txt", "w")
Nfile.write(textN)

Unfortunately, when I check N.txt some lines are missing (it only crops up to "57, 0.231749688431, 0.0405121944142", but I expect the final line to be "242, 0.2979675, 0.224605896461". I checked textN and it has all the lines up to "242..."). When I try Nfile.write(textN) again, it writes some more lines but still not all! What is wrong with this? Many thanks for your help! Alex

The original file to crop (in path) looks like this:

*HEADING
ABAQUS-style file created by OOF2 on 2009-03-05 03:23:49.204607 from a mesh of the microstructure A.
** Materials defined by OOF2:
** C:
** W:
** Master elements used in OOF2:
** 2: D2_2, Isoparametric 2-noded edge element
** 3: T3_3, Isoparametric 3 noded triangle with linear interpolation for both fields and positions.
** 4: Q4_4, Isoparametric 4 noded quadrilateral with bilinear interpolation for both positions and fields
** Boundary Conditions:
** Notes:
** The set of nodes and elements may be different from the set created from a skeleton
** depending on the element type and if the mesh was refined.
** The materials and boundary conditions provided by OOF2 may be translated into ABAQUS by the user.
** The element type provided below should be verified and modified accordingly.
** Only elements (and nodes of such elements) that have an associated material are included in this file.
*NODE 6, 0.0, 0.0 1, 0.031365, 0.0 9, 0.06273, 0.0 10, 0.094095, 0.0 17, 0.12546, 0.0 18, 0.156825, 0.0 29, 0.18819, 0.0 33, 0.219555, 0.0 37, 0.25092, 0.0 38, 0.282285, 0.0 245, 0.31365, 0.0 4, 0.0, 0.027505735912 7, 0.0283153247289, 0.0247564240049 8, 0.06273, 0.0236123 13, 0.126933916405, 0.0219324392255 22, 0.156825, 0.0236123 25, 0.185442602873, 0.0174188364874 35, 0.216158167035, 0.0260087067213 41, 0.25092, 0.0236123 65, 0.31365, 0.0265806479154 45, 0.0, 0.0472246 44, 0.0270022417013, 0.0457250093507 46, 0.06273, 0.0472246 47, 0.098090467, 0.0476663143118 48, 0.123432920369, 0.0493041533953 50, 0.157517619392, 0.0491485665468 54, 0.18819, 0.0472246 56, 0.21829479153, 0.0489914989989 59, 0.255396164365, 0.053642386456 62, 0.283534894472, 0.0482125495477 66, 0.31365, 0.0473757108379 68, 0.0, 0.0708369 71, 0.031365, 0.0708369 73, 0.0667269869261, 0.0724057787737 74, 0.094095, 0.0708369 75, 0.12203525, 0.0683265441516 77, 0.154069643632, 0.067833220225 79, 0.192676294482, 0.0697610582702 80, 0.218433034079, 0.0733140035725 81, 0.250592391777, 0.0705659765909 115, 0.31365, 0.0749934030765 86, 0.0, 0.0937712452856 87, 0.0327416679167, 0.0988320724937 92, 0.0642181016634, 0.0996865210439 94, 0.0939583966251, 0.0960413983543 98, 0.12277254344, 0.0956157005759 101, 0.156825, 0.0944492 107, 0.190080813107, 0.0989004943145 109, 0.219555, 0.0944492 111, 0.25092, 0.0944492 116, 0.31365, 0.0913902112346 117, 0.0, 0.118185969919 118, 0.031365, 0.1180615 119, 0.0657061477957, 0.117247512024 120, 0.095407726, 0.117579706786 122, 0.120457073874, 0.118852698163 125, 0.150412085671, 0.119805228459 128, 0.188401966746, 0.118578758447 132, 0.219555, 0.1180615 131, 0.25092, 0.1180615 134, 0.282285, 0.1180615 138, 0.0, 0.1416738 137, 0.031365, 0.1416738 139, 0.06273, 0.1416738 140, 0.098131408, 0.141731507791 160, 0.156825, 0.1416738 145, 0.18819, 0.1416738 144, 0.21849787291, 0.144222847202 146, 0.25214522485, 0.139721206044 147, 0.287968204391, 0.138882348861 151, 0.0, 0.1652861 
152, 0.031365, 0.1652861 153, 0.06273, 0.1652861 154, 0.094095, 0.1652861 156, 0.124683249222, 0.165861357563 189, 0.156825, 0.1652861 163, 0.189978244061, 0.168552116135 193, 0.219555, 0.1652861 170, 0.251953134991, 0.166067888686 175, 0.31365, 0.1652861 179, 0.0, 0.1888984 178, 0.031365, 0.1888984 180, 0.06273, 0.1888984 181, 0.0948683944207, 0.186105991855 184, 0.12546, 0.1888984 185, 0.156825, 0.1888984 191, 0.185226863639, 0.189033619679 192, 0.21459450643, 0.190219519385 196, 0.248868626255, 0.189540639649 197, 0.277613280215, 0.188996385139 200, 0.31365, 0.188840621897 201, 0.0, 0.2125107 202, 0.031365, 0.2125107 203, 0.061560970472, 0.211435761567 206, 0.094095, 0.2125107 208, 0.123520746115, 0.21724383 212, 0.156824305885, 0.210136954906 215, 0.19080765368, 0.21182785412 216, 0.219555, 0.2125107 217, 0.24652142594, 0.211374186361 219, 0.282347639617, 0.2121001649 220, 0.31365, 0.213088792923 222, 0.0, 0.236123 221, 0.031365, 0.236123 224, 0.06273, 0.236123 226, 0.094095, 0.236123 230, 0.12546, 0.236123 236, 0.156825, 0.236123 237, 0.18819, 0.236123 238, 0.219555, 0.236123 246, 0.25092, 0.236123 239, 0.282285, 0.236123 244, 0.31365, 0.236123 2, 0.0150409063102, 0.0175241129844 5, 0.0, 0.0143072034694 3, 0.0156
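The missing tail in the post above is almost certainly a buffering symptom: the write sits in memory and Nfile is never closed, so the last buffer only reaches disk when the interpreter exits. A minimal sketch of the fix (textN here is a stand-in for the cropped text from the post):

```python
# stand-in for the cropped node list from the post
textN = "242, 0.2979675, 0.224605896461\n" * 1000

# a with-block closes (and therefore flushes) the file automatically,
# so nothing is left sitting in the write buffer
with open("N.txt", "w") as nfile:
    nfile.write(textN)

print(open("N.txt").read() == textN)   # -> True, the file is complete
```

An explicit Nfile.close() (or Nfile.flush()) after the write achieves the same thing.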
improving a huge double-for cycle
Hello there :) , I am a Python newbie and need to run the following code for a task in an external simulation program called "Abaqus", which uses Python to access the mesh (ensemble of nodes with x-y coordinates) of a certain geometrical model. [IN is the starting input containing the nodes to be checked; there are some double nodes with the same x and y coordinates which need to be removed. SN is the output containing such double nodes.]

for i in range(len(IN)):  # scan all elements of the list IN
    for j in range(len(IN)):
        if i <> j:
            if IN[i].coordinates[0] == IN[j].coordinates[0]:
                if IN[i].coordinates[1] == IN[j].coordinates[1]:
                    SN.append(IN[i].label)

Unfortunately my len(IN) is about 100,000 and the running time about 15h :( Any idea how to improve it? I have already tried to group the "if" statements into a single one:

if i <> j and IN[i].coordinates[0] == IN[j].coordinates[0] and IN[i].coordinates[1] == IN[j].coordinates[1]:

but no improvement. Many thanks, Alex
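The 15 h come from the O(n^2) pairwise comparison, not from how the ifs are grouped. A sketch of an O(n) alternative: one pass that groups node labels by their (x, y) pair in a dictionary. The mock Node below only mimics the .label / .coordinates attributes of the Abaqus objects, and like the original code this relies on exact floating-point equality of the coordinates:

```python
from collections import defaultdict, namedtuple

def find_double_nodes(IN):
    """Return labels of all nodes sharing an (x, y) pair with another node."""
    by_xy = defaultdict(list)
    for node in IN:
        # one dictionary insertion per node instead of n comparisons
        by_xy[(node.coordinates[0], node.coordinates[1])].append(node.label)
    SN = []
    for labels in by_xy.values():
        if len(labels) > 1:        # more than one node at this coordinate
            SN.extend(labels)
    return SN

Node = namedtuple('Node', 'label coordinates')   # stand-in for Abaqus nodes
IN = [Node(1, (0.0, 0.0)), Node(2, (1.0, 0.5)), Node(3, (0.0, 0.0))]
print(find_double_nodes(IN))   # -> [1, 3]
```

If the "doubles" are only nearly equal rather than bit-identical, the coordinates would need rounding to a tolerance before being used as keys.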
how to improve this cycle (extracting data from structured array)?
Hello guys, I am just wondering if there is a quick way to improve this algorithm [N is a structured array which holds info about the nodes n of a finite element mesh, and n is about 300,000]. I need to extract info from N, put it into a 3*n matrix NN, and then reshape it with numpy. I think "append" is quite inefficient:

***
N = odb.rootAssembly.instances['PART-1-1'].nodes

NN = []

B = [0, 0, 0]  # auxiliary vector
for i in range(len(N)):
    B[0] = N[i].label
    B[1] = N[i].coordinates[0]
    B[2] = N[i].coordinates[1]
    NN = append(NN, B)
NN = NN.reshape(-1, 3)
***

Many Thanks in advance! Alex
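One append-free sketch: allocate the (n, 3) array once and fill it in place, which avoids numpy.append copying the whole array on every iteration. The mock node below only mimics the .label / .coordinates attributes of the odb objects:

```python
from collections import namedtuple
import numpy as np

def nodes_to_array(N):
    """Pack node labels and x-y coordinates into a preallocated (n, 3) array."""
    NN = np.empty((len(N), 3))      # single allocation, no per-step copying
    for i, node in enumerate(N):
        NN[i, 0] = node.label
        NN[i, 1] = node.coordinates[0]
        NN[i, 2] = node.coordinates[1]
    return NN

Node = namedtuple('Node', 'label coordinates')   # stand-in for the odb nodes
N = [Node(6, (0.0, 0.0)), Node(1, (0.031365, 0.0))]
print(nodes_to_array(N).shape)   # -> (2, 3)
```

No reshape is needed afterwards because the array already has its final shape.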
Re: how to improve this cycle (extracting data from structured array)?
On Feb 11, 1:08 pm, Tim Chase wrote:
> Alexzive wrote:
> > I am just wondering if there is a quick way to improve this algorithm
> > [N is a structured array which hold info about the nodes n of a finite
> > element mesh, and n is about 300.000). I need to extract info from N
> > and put it in to a 3*n matrix NN which I reshape then with numpy. I
> > think to "append" is quite unefficient:
> >
> > ***
> > N = odb.rootAssembly.instances['PART-1-1'].nodes
> >
> > NN=[]
> >
> > B=[0,0,0]
> > #auxsiliar vector
> > for i in range(len(N)):
> >     B[0] = N[i].label
> >     B[1] = N[i].coordinates[0]
> >     B[2] = N[i].coordinates[1]
> >     NN = append(NN,B)
>
> Usually this would be written with a list-comprehension, something like
>
>     nn = [(x.label, x.coordinates[0], x.coordinates[1])
>           for x in N]
>
> or if you really need lists-of-lists instead of lists-of-tuples:
>
>     nn = [[x.label, x.coordinates[0], x.coordinates[1]]
>           for x in N]
>
> -tkc

yeah, much better thanks! Alex
speed up a numpy code with huge array
Hello Pythonguys! is there a way to improve the performance of the attached code? It takes about 5 h on a dual-core (using only one core) when len(V) ~ 1MIL. V is an array which is supposed to store all the volumes of the tetrahedral elements of a grid whose coordinates are stored in NN (accessed through the list of tetra elements --> EL). Thanks in advance! Alex

print 'start ' + nameodb
#path = '/windows/D/SIM-MM/3D/E_ortho/' + nameodb + '.odb'
path = pt + nameodb + '.odb'
odb = openOdb(path)
N = odb.rootAssembly.instances['PART-1-1'].nodes
if loadV == 1:
    pathV = pt + vtet
    V = numpy.loadtxt(pathV)
    VTOT = V[0]
    L3 = V[1]
    print 'using ' + vtet
else:
    NN = []
    B = [0, 0, 0, 0]
    for i in range(len(N)):
        B[0] = N[i].label
        B[1] = N[i].coordinates[0]
        B[2] = N[i].coordinates[1]
        B[3] = N[i].coordinates[2]
        NN = append(NN, B)
    NN = NN.reshape(-1, 4)
    EL = odb.rootAssembly.instances['PART-1-1'].elements
    L1 = max(NN[:,1]) - min(NN[:,1])
    L2 = max(NN[:,2]) - min(NN[:,2])
    L3 = max(NN[:,3]) - min(NN[:,3])
    VTOT = L1*L2*L3
    print 'VTOT: [mm³] ' + str(VTOT)
    V = array([])
    print 'calculating new Vtet '
    V = range(len(EL)+2)
    V[0] = VTOT
    V[1] = L3
    for j in range(0, len(EL)):
        Va = EL[j].connectivity[0]
        Vb = EL[j].connectivity[1]
        Vc = EL[j].connectivity[2]
        Vd = EL[j].connectivity[3]
        ix = where(NN[:,0] == Va)
        Xa = NN[ix,1][0][0]
        Ya = NN[ix,2][0][0]
        Za = NN[ix,3][0][0]
        ix = where(NN[:,0] == Vb)
        Xb = NN[ix,1][0][0]
        Yb = NN[ix,2][0][0]
        Zb = NN[ix,3][0][0]
        ix = where(NN[:,0] == Vc)
        Xc = NN[ix,1][0][0]
        Yc = NN[ix,2][0][0]
        Zc = NN[ix,3][0][0]
        ix = where(NN[:,0] == Vd)
        Xd = NN[ix,1][0][0]
        Yd = NN[ix,2][0][0]
        Zd = NN[ix,3][0][0]
        a = [Xa, Ya, Za]
        b = [Xb, Yb, Zb]
        c = [Xc, Yc, Zc]
        d = [Xd, Yd, Zd]
        aa = numpy.diff([b,a], axis=0)[0]
        bb = numpy.diff([c,b], axis=0)[0]
        cc = numpy.diff([d,c], axis=0)[0]
        D = array([aa, bb, cc])
        det = numpy.linalg.det(D)
        V[j+2] = abs(det)/6
    pathV = pt + vtet
    savetxt(pathV, V, fmt='%.3e')
###
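A sketch of the two main speedups for the loop above: (1) replace the four per-element where() scans of NN with a label -> row dictionary built once, and (2) compute all determinants in a single stacked numpy.linalg.det call. NN is assumed to be the (n, 4) array [label, x, y, z] from the post, and conn is assumed to hold the 4-label connectivity of each tetrahedron; the function name tet_volumes is mine:

```python
import numpy as np

def tet_volumes(NN, conn):
    """Volumes of tetrahedra given [label, x, y, z] rows and 4-label tuples."""
    # build the label -> row index map once, instead of where() per corner
    idx = {int(lab): i for i, lab in enumerate(NN[:, 0])}
    rows = np.array([[idx[a], idx[b], idx[c], idx[d]]
                     for a, b, c, d in conn])
    P = NN[:, 1:4][rows]             # (n_elem, 4, 3) corner coordinates
    M = P[:, 1:] - P[:, :1]          # edge vectors b-a, c-a, d-a
    # det over the whole stack of 3x3 matrices in one call
    return np.abs(np.linalg.det(M)) / 6.0

# unit tetrahedron: corners at the origin and the three unit axes
NN = np.array([[1, 0, 0, 0],
               [2, 1, 0, 0],
               [3, 0, 1, 0],
               [4, 0, 0, 1]], float)
V = tet_volumes(NN, [(1, 2, 3, 4)])   # volume should be 1/6
```

The |det(b-a, c-a, d-a)|/6 form is algebraically the same volume as the diff-based determinant in the post, just computed from a common corner.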
Re: speed up a numpy code with huge array
thank you all for the tips. I'll try them soon.

I also noticed another bottleneck, when Python tries to access some array data stored in the odb files (---> in the text below), even before starting the algorithm:

###
EPS_nodes = range(len(frames))
for f in frames:
    ... sum = 0
    ---> UN = F[f].fieldOutputs['U'].getSubset(region=TOP).values <---
    ... EPS_nodes[f] = UN[10].data[Scomp-1]/L3
###

unfortunately I don't have time to learn Cython. Using dictionaries sounds promising. Thanks! Alex

On May 26, 8:14 am, Stefan Behnel wrote:
> Alexzive, 25.05.2010 21:05:
> > is there a way to improve the performance of the attached code ? it
> > takes about 5 h on a dual-core (using only one core) when len(V)
> > ~1MIL. V is an array which is supposed to store all the volumes of
> > tetrahedral elements of a grid whose coord. are stored in NN (accessed
> > trough the list of tetraelements --> EL)
>
> Consider using Cython for your algorithm. It has direct support for NumPy
> arrays and translates to fast C code.
>
> Stefan
Re: speed up a numpy code with huge array
sorry, what I wrote about the second bottleneck was nonsense: it seemed to hang up, but it was just me forgetting to double-enter during debugging after the "for" cycle.

On May 26, 1:43 pm, Alexzive wrote:
> thank you all for the tips.
> I'll try them soon.
>
> I also notice another bottleneck, when python tries to access some
> array data stored in the odb files (---> in text below), even before
> starting the algoritm:
>
> ###
> EPS_nodes = range(len(frames))
> for f in frames:
>     ... sum = 0
>     ---> UN = F[f].fieldOutputs['U'].getSubset(region=TOP).values <---
>     ... EPS_nodes[f] = UN[10].data[Scomp-1]/L3
> ###
>
> unfortunately I don't have time to learn cython. Using dictionaries
> sounds promising.
> Thanks!
> Alex
>
> On May 26, 8:14 am, Stefan Behnel wrote:
> > Alexzive, 25.05.2010 21:05:
> > > is there a way to improve the performance of the attached code ? it
> > > takes about 5 h on a dual-core (using only one core) when len(V)
> > > ~1MIL. V is an array which is supposed to store all the volumes of
> > > tetrahedral elements of a grid whose coord. are stored in NN (accessed
> > > trough the list of tetraelements --> EL)
> >
> > Consider using Cython for your algorithm. It has direct support for NumPy
> > arrays and translates to fast C code.
> >
> > Stefan
how to build with 2.4 having 2.6 as main python
Hello there, my Mandriva has Python 2.6.4 pre-installed (in /usr/lib64/python2.6/). I need to install numpy 1.4 for Python 2.4.3 (which I installed separately from source in /usr/local/lib/python2.4/), but typing "python" still gets me:

Python 2.6.4 (r264:75706, Jan 8 2010, 18:59:59)
[GCC 4.4.1] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>>

What should I change so that "python" calls Python 2.4.3 instead of 2.6.4 (at least during python setup.py build)? I suppose I need something like changing the link to /usr/local/bin/python.. but I fear doing something bad by myself.. please help!
Re: how to build with 2.4 having 2.6 as main python
thanks guys, the solution for me was

python2.4 setup.py install --prefix=/usr/local

cheers, AZ

On Jun 14, 11:00 am, Steven D'Aprano wrote:
> On Mon, 14 Jun 2010 01:30:09 -0700, Alexzive wrote:
> > what to change in order to get "python" calling python 2.4.3 instead of
> > 2.6.4 (at least during python setup.py build)?
>
> That will do bad things to your system, which will be expecting the
> system Python to be 2.6 and instead will be 2.4. You will probably find
> system tools will start to fail.
>
> > I suppose I need something like changing the link to /usr/local/bin/
> > python..
> > but I fear to do something bad by myself.. please help!
>
> Yes, that will do it, but if you do, you will probably break things. Best
> to just call the python2.4 binary directly.
>
> If you call
>
> python2.4
>
> from the command line, what happens?
>
> --
> Steven