Help on slow attribute copy

2005-05-18 Thread wout
Hi there,

I am fairly new to Python; the problem is as follows:

newnodes = {}
for i in nodes:
    newnodes[i.label] = i.coordinate

is very slow, and the slowness comes from the dots. I know that for functions 
the dot lookup can be done outside the loop; can the same be done for 
attributes in some way that is actually fast?
The coordinate is a tuple in this case, and filling a dictionary of that size 
is no problem when no dots are involved.
I use Python 2.0 as shipped with Abaqus. There are about 10 nodes in the 
current case, with more to come, and the system can be stuck on this for 
15+ minutes :-(
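
To be clear, the trick I mean for functions is hoisting the bound method out 
of the loop. The closest thing I can think of for attributes is 
operator.attrgetter, but as far as I know that only appeared in Python 2.4, 
not the 2.0 that ships with Abaqus. A rough sketch of both, using the same 
nodes sequence as above:

# the function trick: look up the bound method once, outside the loop
labels = []
append = labels.append
for i in nodes:
    append(i.label)

# the nearest attribute equivalent (needs a newer Python than 2.0)
from operator import attrgetter
get_label = attrgetter('label')
get_coordinate = attrgetter('coordinate')
newnodes = {}
for i in nodes:
    newnodes[get_label(i)] = get_coordinate(i)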

Greetings
Wout




 




Re: Help on slow attribute copy

2005-05-18 Thread wout
bgs wrote:

>There's no way that loop takes fifteen minutes just because of the dot
>operator.  I mean, 180,000 dots in 15 minutes is 200 dots/second.  On a
>1 GHz machine, that would be 5 million cycles per dot.  That does not
>seem reasonable (assuming you haven't overridden the dot operator to do
>something more complicated than normal).
>
>Check that i.label does not have __hash__ overridden in a bad way.  I
>tried a test case with __hash__ overridden to always return the same
>integer, and I got performance about what you are describing when I
>tried your example loop with 10 iterations.
>
>And, of course, make sure you are not low on memory and relying too
>heavily on swap space.
>
>  
>
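
(For reference, the degenerate __hash__ case described above can be reproduced 
with a toy class like the following; every key hashes to the same value, so 
each insert has to compare against the keys already in the dictionary, and the 
loop slows down as the dictionary grows. This is only an illustration, not 
bgs's actual test:

class BadKey:
    def __init__(self, label):
        self.label = label
    def __hash__(self):
        return 0                       # every key collides
    def __eq__(self, other):
        return self.label == other.label

d = {}
for n in range(20000):
    d[BadKey(n)] = n                   # each insert probes past all earlier keys,
                                       # so the total work grows quadratically
)
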
bgs,
thanks for the quick reply.
I don't think it's due to the .label (maybe that contributes too), but when I 
take it out the loop is still slow, so the .coordinate access is a problem on 
its own.
here's the abaqus stuff:
>>> print mdb.model['Model-1'].rootAssembly.instance['blok'].nodes[1]
({'coordinate': (3.0122888184, 5.0122888184, 0.53123539686203),
'coordinates': (3.0122888184, 5.0122888184, 0.53123539686203),
'instanceName': 'blok', 'label': 2})
>>> nodes = mdb.model['Model-1'].rootAssembly.instance['blok'].nodes
>>> for i in nodes:
...     dummy = i.coordinate
and then wait for half an hour.
The machine is a P4 with 1 GB of RAM running Fedora 3, so memory should not 
be the problem.
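
To pin it down a bit further I can time the attribute access on its own, 
separate from the plain loop overhead (nothing Abaqus-specific here, just the 
time module and the same nodes sequence as above):

import time

t0 = time.time()
for i in nodes:
    dummy = i.coordinate       # loop plus attribute access
t1 = time.time()
for i in nodes:
    dummy = 0                  # loop overhead only
t2 = time.time()
print 'attribute access alone took roughly', (t1 - t0) - (t2 - t1), 'seconds'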

Thanks
Wout




Re: Help on slow attribute copy

2005-05-18 Thread wout
bgs wrote:

>Hmm, it looks like the dot operator has been overloaded to do something
>complicated.  (although if you haven't already, try "for i in nodes:
>pass" just to make sure).  Is it retrieving the data from the network
>  
>
I tried that (the bare pass loop) and it runs fast, as it should.

>somewhere?  If so, then it looks like it is probably retrieving each
>coordinate individually on each iteration of the loop.  Perhaps there
>is some way of retrieving them all in one bunch?
>  
>
I guess it is looking up the node locations in the database with individual 
queries. Since the source for the interface is not provided (only .pyc files), 
I will not be able to find out.

>It's difficult to say more without knowing anything about abaqus and
>its interface.
>
I'll look for a 'heap lookup'; otherwise I'll have to fall back on text parsing...
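
If it does come to text parsing, something along these lines is what I have in 
mind. It assumes the coordinates sit in the *NODE blocks of the input deck as 
'label, x, y, z' lines, and the file name is made up:

def read_nodes(filename):
    # collect {label: (x, y, z)} from the *NODE blocks of an Abaqus input deck
    newnodes = {}
    in_node_block = 0
    f = open(filename)
    for line in f.readlines():
        line = line.strip()
        if not line or line[:2] == '**':     # skip blanks and comment lines
            continue
        if line[0] == '*':                   # a keyword line starts a new block
            keyword = line.split(',')[0].strip().upper()
            in_node_block = (keyword == '*NODE')
            continue
        if in_node_block:
            fields = line.split(',')
            newnodes[int(fields[0])] = tuple(map(float, fields[1:4]))
    f.close()
    return newnodes

newnodes = read_nodes('blok.inp')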
Thanks loads anyway
Wout


OpenMP uses only 1 core as soon as numpy is loaded

2013-05-30 Thread Wout Megchelenbrink
I use OpenMP in a C extension that has a Python interface.

In its simplest form I do this:

== code ==
#pragma omp parallel
{
    #pragma omp for
    for (int i = 0; i < 10; i++)
    {
        // multiply some matrices in C
    }
}
== end of code ==


This all works fine, and it uses all the cores I have. But if I import numpy 
in my Python session BEFORE I run the code, then only 1 core is used (and 
omp_get_num_procs() also reports 1 core, instead of the maximum of 8).

So how does numpy affect OpenMP? Does it have anything to do with the GIL or 
something? I don't use any Python objects in my parallel region.
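
One thing I still want to rule out is the CPU affinity mask, since some BLAS 
builds are known to pin the process to particular cores when they are loaded. 
A quick Linux-only check I plan to run; it just reads the kernel's view from 
/proc/self/status:

def allowed_cpus():
    # the kernel reports the affinity mask of the current process here
    for line in open('/proc/self/status'):
        if line.startswith('Cpus_allowed_list'):
            return line.split(':')[1].strip()

print(allowed_cpus())    # e.g. '0-7' on an 8-core box
import numpy
print(allowed_cpus())    # if this shrinks to a single CPU, the import changed the affinity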

Any help would be appreciated!
Wout