On 3/28/2023 1:50 PM, a a wrote:
On Tuesday, 28 March 2023 at 18:12:40 UTC+2, Thomas Passin wrote:
On 3/28/2023 8:47 AM, a a wrote:
Ok, I can export bookmarks to html file and open it in Firefox to get
a long list of clickable urls but icon of the bookmarked web page is missing.
When I open Bo
On Tuesday, 28 March 2023 at 18:12:40 UTC+2, Thomas Passin wrote:
> On 3/28/2023 8:47 AM, a a wrote:
> > Ok, I can export bookmarks to html file and open it in Firefox to get
> > a long list of clickable urls but icon of the bookmarked web page is
> > missing.
> >
> > When I open Bookmarks as
On 3/28/2023 8:47 AM, a a wrote:
Ok, I can export bookmarks to html file and open it in Firefox to get
a long list of clickable urls but icon of the bookmarked web page is missing.
When I open Bookmarks as a right side-bar I can view and identify an individual
Bookmark by icon,
so I would like
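The exported bookmarks.html keeps each favicon as a base64 ICON attribute on the <A> tag, so the icons are recoverable from the export. A minimal sketch with the standard library (the inline sample markup is a stand-in for a real Firefox export file):

```python
from html.parser import HTMLParser

class BookmarkParser(HTMLParser):
    """Collect (url, icon) pairs from a Firefox bookmarks.html export."""
    def __init__(self):
        super().__init__()
        self.bookmarks = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            d = dict(attrs)  # HTMLParser lowercases attribute names
            if "href" in d:
                # ICON, when present, is a base64 data URI for the favicon
                self.bookmarks.append((d["href"], d.get("icon")))

# Small inline sample standing in for a real export file
sample = ('<DL><DT><A HREF="https://www.python.org/" '
          'ICON="data:image/png;base64,AAAA">Python</A></DL>')
p = BookmarkParser()
p.feed(sample)
print(p.bookmarks)
```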
On Tuesday, 28 March 2023 at 06:33:44 UTC+2, Thomas Passin wrote:
> On 3/27/2023 8:37 PM, a a wrote:
> >> To save the tabs, right click any one of them and select the "Select All
> >> Tabs" item. They will all highlight. Right click on one of them and
> >> select the "Bookmark Tabs" item. A dial
On 3/27/2023 8:37 PM, a a wrote:
I can select All Opened Tabs (as from the given link)
and get 1,000+ Opened Tabs (I am afraid this is the number of all saved
bookmarks in the past)
I go to menu, Bookmarks, Manage Bookmarks and copy Tabs
and
https://www.textfixer.com/html/convert-url-to-html-lin
On 3/27/2023 8:37 PM, a a wrote:
To save the tabs, right click any one of them and select the "Select All
Tabs" item. They will all highlight. Right click on one of them and
select the "Bookmark Tabs" item. A dialog box will open with an entry
line for the Name to use (like "Tabset1") and a locat
On Tuesday, 28 March 2023 at 02:07:43 UTC+2, Thomas Passin wrote:
> On 3/27/2023 4:02 PM, Thomas Passin wrote:
> > On 3/27/2023 3:07 PM, a a wrote:
> >> On Monday, 27 March 2023 at 19:19:41 UTC+2, Thomas Passin wrote:
> >>> On 3/27/2023 10:07 AM, a a wrote:
> Ok, I know, I need to switch to
On 3/27/2023 4:02 PM, Thomas Passin wrote:
On 3/27/2023 3:07 PM, a a wrote:
On Monday, 27 March 2023 at 19:19:41 UTC+2, Thomas Passin wrote:
On 3/27/2023 10:07 AM, a a wrote:
Ok, I know, I need to switch to Windows 10 run on another PC next to
me.
I need to learn how to copy and move every w
On 3/27/2023 3:07 PM, a a wrote:
On Monday, 27 March 2023 at 19:19:41 UTC+2, Thomas Passin wrote:
On 3/27/2023 10:07 AM, a a wrote:
Ok, I know, I need to switch to Windows 10 run on another PC next to me.
I need to learn how to copy and move every web page opened in Firefox as a
reference to
On Monday, 27 March 2023 at 19:19:41 UTC+2, Thomas Passin wrote:
> On 3/27/2023 10:07 AM, a a wrote:
> > Ok, I know, I need to switch to Windows 10 run on another PC next to me.
> >
> > I need to learn how to copy and move every web page opened in Firefox as a
> > reference to social media, web
On 3/27/2023 10:07 AM, a a wrote:
Ok, I know, I need to switch to Windows 10 run on another PC next to me.
I need to learn how to copy and move every web page opened in Firefox as a
reference to social media, web sites for Python, chat and more (about 50 web
pages live opened 😉
This sounds l
On Thursday, 23 March 2023 at 22:15:10 UTC+1, Thomas Passin wrote:
> On 3/23/2023 3:38 PM, Mats Wichmann wrote:
> > On 3/23/23 09:48, Thomas Passin wrote:
> >
> >> I didn't realize that Christoph Gohlke is still maintaining this site.
> >
> > Unless the the last-changed stuff stopped working,
On 3/23/2023 3:38 PM, Mats Wichmann wrote:
On 3/23/23 09:48, Thomas Passin wrote:
I didn't realize that Christoph Gohlke is still maintaining this site.
Unless the last-changed stuff stopped working, it's in a static state:
by Christoph Gohlke. Updated on 26 June 2022 at 07:27 UTC
I di
On 3/23/23 09:48, Thomas Passin wrote:
I didn't realize that Christoph Gohlke is still maintaining this site.
Unless the last-changed stuff stopped working, it's in a static state:
by Christoph Gohlke. Updated on 26 June 2022 at 07:27 UTC
--
https://mail.python.org/mailman/listinfo/pyt
On 3/18/2023 3:05 PM, Thomas Passin wrote:
I downloaded and ran HWiNFO and AVX is not supported, not greened out
That's too bad; you may be out of luck. It's possible that someone
has compiled the .pyd library in such a way that it does not need the
instruction set extensions. I'm sorry but I don
On 3/22/2023 8:09 AM, a a wrote:
On Saturday, 18 March 2023 at 20:12:22 UTC+1, Thomas Passin wrote:
On 3/17/2023 11:52 AM, a a wrote:
On Friday, 17 March 2023 at 16:32:53 UTC+1, a a wrote:
On Friday, 17 March 2023 at 16:03:14 UTC+1, Thomas Passin wrote:
On 3/16/2023 8:07 PM, a a wrote:
Crash
On Saturday, 18 March 2023 at 20:12:22 UTC+1, Thomas Passin wrote:
> On 3/17/2023 11:52 AM, a a wrote:
> > On Friday, 17 March 2023 at 16:32:53 UTC+1, a a wrote:
> >> On Friday, 17 March 2023 at 16:03:14 UTC+1, Thomas Passin wrote:
> >>> On 3/16/2023 8:07 PM, a a wrote:
> Crash report:
>
On 3/17/2023 11:52 AM, a a wrote:
On Friday, 17 March 2023 at 16:32:53 UTC+1, a a wrote:
On Friday, 17 March 2023 at 16:03:14 UTC+1, Thomas Passin wrote:
On 3/16/2023 8:07 PM, a a wrote:
Crash report:
Problem Caption:
Problem Event Name: APPCRASH
Application name: python.exe
Application versi
On 3/17/2023 11:32 AM, a a wrote:
On Friday, 17 March 2023 at 16:03:14 UTC+1, Thomas Passin wrote:
It would be worth trying to downgrade the multiarray version to an
earlier one and see if that fixes the problem.
Thank you Thomas for your kind reply.
I am fully aware to be living on an old
On Friday, 17 March 2023 at 16:03:14 UTC+1, Thomas Passin wrote:
> On 3/16/2023 8:07 PM, a a wrote:
> > Crash report:
> >
> > Problem Caption:
> > Problem Event Name: APPCRASH
> > Application name: python.exe
> > Application version: 3.8.7150.1013
> > Application time signature: 5fe0df5a
>
On Friday, 17 March 2023 at 16:32:53 UTC+1, a a wrote:
> On Friday, 17 March 2023 at 16:03:14 UTC+1, Thomas Passin wrote:
> > On 3/16/2023 8:07 PM, a a wrote:
> > > Crash report:
> > >
> > > Problem Caption:
> > > Problem Event Name: APPCRASH
> > > Application name: python.exe
> > > Applicat
On 3/16/2023 8:07 PM, a a wrote:
Crash report:
Problem Caption:
Problem Event Name: APPCRASH
Application name: python.exe
Application version: 3.8.7150.1013
Application time signature: 5fe0df5a
Error module name: _multiarray_umath.cp38-win32.pyd
Version of the module with t
On 29.09.21 at 18:16, Jorge Conforte wrote:
Hi,
I have a netcdf file "uwnd_850_1981.nc" and I'm using the commands to
read it:
Your code is incomplete:
from numpy import dtype
 fileu ='uwnd_850_1981.nc'
ncu = Dataset(fileu,'r')
Where is "Dataset" defined?
uwnd=ncu.variables['uwnd'][:
On Sun, 3 Jan 2021, Rich Shepard wrote:
I'm trying to rebuild numpy-1.18.2 using the newly installed Python-3.9.1.
The script fails when running setup.py:
Traceback (most recent call last):
File "setup.py", line 32, in
raise RuntimeError("Python version >= 3.5 required.")
RuntimeError: Pyth
At 05:55 on 09/12/20, Paulo da Silva wrote:
> Hi!
>
> I am looking at some code, that I found somewhere in the internet, to
> compute DCT for each 8x8 block in an gray (2D) image (512x512).
>
> This is the code:
>
> def dct2(a):
> return scipy.fft.dct(scipy.fft.dct(a,axis=0,norm='ortho'
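The truncated dct2 above applies scipy.fft.dct along each axis with norm='ortho'. A pure-NumPy sketch of the same orthonormal 2-D DCT-II, useful as a cross-check when scipy is not at hand (the matrix construction is my reconstruction, not code from the thread):

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II matrix: entry (k, j) = sqrt(2/n) cos(pi (2j+1) k / 2n),
    # with the k = 0 row scaled by 1/sqrt(2) for orthonormality
    k = np.arange(n)[:, None]
    j = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * j + 1) * k / (2 * n))
    c[0, :] /= np.sqrt(2.0)
    return c

def dct2(block):
    # 2-D DCT-II: transform the columns, then the rows
    c = dct_matrix(block.shape[0])
    return c @ block @ c.T

block = np.arange(64, dtype=float).reshape(8, 8)
coeffs = dct2(block)
# An orthonormal transform preserves energy (Parseval)
print(np.allclose((coeffs ** 2).sum(), (block ** 2).sum()))
```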
Hi
https://www.lfd.uci.edu/~gohlke/pythonlibs/#numpy also has a numpy wheel
1.19.4+vanilla‑cp39‑cp39‑win_amd64.whl
"Vanilla is a minimal distribution, which does not include any optimized
BLAS library or C runtime DLLs."
Have not tried this.
cheers
Malcolm
On 30/11/2020 7:19 am, MRAB wrote
On 2020-11-29 18:33, Dennis Lee Bieber wrote:
On Sat, 28 Nov 2020 17:28:50 -0600, Larry Burford
declaimed the following:
when trying to run the tutorial program standardplot.py I get a msg that
says my numpy won't pass a sanity check due to a problem in the Win runtime
Wait for M
Hi
Just had the same problem.
The solution that worked for me was:
pip uninstall numpy
then
pip install numpy==1.19.3
The latest update to Windows has an error in the BLAS library causing the
error. It's a known problem.
hope this helps
Malcolm
On 29/11/2020 10:28 am, Larry Burford wro
jagmit sandhu wrote:
> python newbie. I can't understand the following about numpy arrays:
>
> x = np.array([[0, 1],[2,3],[4,5],[6,7]])
> x
> array([[0, 1],
>[2, 3],
>[4, 5],
>[6, 7]])
> x.shape
> (4, 2)
> y = x[:,0]
> y
> array([0, 2, 4, 6])
> y.shape
> (4,)
>
> Why is t
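The behaviour being asked about is NumPy's rule that an integer index removes an axis while a slice keeps it; a minimal illustration:

```python
import numpy as np

x = np.array([[0, 1], [2, 3], [4, 5], [6, 7]])

# An integer index selects one position along axis 1 and *removes* that axis...
y = x[:, 0]
print(y.shape)   # (4,)

# ...while a length-1 slice keeps the axis, giving a column vector instead
y2 = x[:, 0:1]
print(y2.shape)  # (4, 1)
```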
On Thursday, 2 April 2020 at 06:30:22 UTC+2, jagmit sandhu wrote:
> python newbie. I can't understand the following about numpy arrays:
>
> x = np.array([[0, 1],[2,3],[4,5],[6,7]])
> x
> array([[0, 1],
>[2, 3],
>[4, 5],
>[6, 7]])
> x.shape
> (4, 2)
> y = x[:,0]
> y
use:
num_arr1 = numpy.array(tgt_arr1, dtype=int)
num_arr2 = numpy.array(tgt_arr2, dtype=int)
On Mon, Sep 16, 2019 at 5:36 PM Pradeep Patra
wrote:
> Yes it is crashing in the hackerrank site and the testcases fails with
> segmentation fault. I tried to install numpy with 3.7.3 and it is for som
Thomas Jollans wrote:
> Please reply on-list. (both of you)
>
>
> Forwarded Message
> Subject: Re: numpy results in segmentation fault
> Date: Mon, 16 Sep 2019 17:04:57 +0530
> From: Test Bot
> To: Pradeep Patra
> CC: Thomas Jollans
>
> Firstly,
Yes it is crashing in the hackerrank site and the testcases fails with
segmentation fault. I tried to install numpy with 3.7.3 and it is for some
reason not working and after import when I run import numpy at python
console and press enter I get >>?, i.e. it's not working properly.
Can you please hel
On 12/09/2019 15.53, Pradeep Patra wrote:
> Hi ,
>
> I was trying to solve the hackerrank and was using python 3.7.x.
> https://www.hackerrank.com/challenges/np-concatenate/problem
>
> While running the code sometimes I get success result and sometimes it
> fails with "Segmentation Fault" at Hacker
Sharan Basappa writes:
> On Sunday, 8 September 2019 11:16:52 UTC-4, Luciano Ramalho wrote:
>> >>> int('C0FFEE', 16)
>> 12648430
>>
>> There you go!
>>
>> On Sun, Sep 8, 2019 at 12:02 PM Sharan Basappa
>> wrote:
>> >
>> > I have a numpy array that has data in the form of hex.
>> > I would li
On Sunday, 8 September 2019 11:16:52 UTC-4, Luciano Ramalho wrote:
> >>> int('C0FFEE', 16)
> 12648430
>
> There you go!
>
> On Sun, Sep 8, 2019 at 12:02 PM Sharan Basappa
> wrote:
> >
> > I have a numpy array that has data in the form of hex.
> > I would like to convert that into decimal/integ
>>> int('C0FFEE', 16)
12648430
There you go!
On Sun, Sep 8, 2019 at 12:02 PM Sharan Basappa wrote:
>
> I have a numpy array that has data in the form of hex.
> I would like to convert that into decimal/integer.
> Need suggestions please.
> --
> https://mail.python.org/mailman/listinfo/python-lis
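The int(s, 16) one-liner generalizes to a whole array of hex strings with a comprehension (a small sketch; the array contents are made up):

```python
import numpy as np

hex_values = np.array(["C0FFEE", "FF", "10"])

# int(s, 16) parses one hex string; apply it across the array element-wise
ints = np.array([int(s, 16) for s in hex_values])
print(ints.tolist())  # [12648430, 255, 16]
```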
On 05/18/2018 09:50 PM, Sharan Basappa wrote:
This is regarding numpy array. I am a bit confused how parts of the array are
being accessed in the example below.
1 import scipy as sp
2 data = sp.genfromtxt("web_traffic.tsv", delimiter="\t")
3 print(data[:10])
4 x = data[:,0]
5 y = data[:,1]
App
The "indexing" page of the documentation might help you with this:
https://docs.scipy.org/doc/numpy-1.14.0/reference/arrays.indexing.html
On 05/18/2018 09:50 PM, sharan.basa...@gmail.com wrote:
This is regarding numpy array. I am a bit confused how parts of the array are
being accessed in the
On Jan 2, 2018 18:27, Rustom Mody wrote:
>
> Someone who works in hadoop asked me:
>
> If our data is in terabytes can we do statistical (ie numpy pandas etc)
> analysis on it?
>
> I said: No (I dont think so at least!) ie I expect numpy (pandas etc)
> to not work if the data does not fit in memo
On Wednesday, January 3, 2018 at 1:43:40 AM UTC+5:30, Paul Moore wrote:
> On 2 January 2018 at 17:24, Rustom Mody wrote:
> > Someone who works in hadoop asked me:
> >
> > If our data is in terabytes can we do statistical (ie numpy pandas etc)
> > analysis on it?
> >
> > I said: No (I dont think so
On 2 January 2018 at 17:24, Rustom Mody wrote:
> Someone who works in hadoop asked me:
>
> If our data is in terabytes can we do statistical (ie numpy pandas etc)
> analysis on it?
>
> I said: No (I dont think so at least!) ie I expect numpy (pandas etc)
> to not work if the data does not fit in m
I've never heard or done that type of testing for a large dataset solely on
python, so I don't know what's the cap from the memory standpoint that
python can handle base on memory availability. Now, if I understand what
you are trying to do, you can achieve that by leveraging Apache Spark and
invo
I'm not sure if I'll be laughed at, but a statistical sampling of a randomized
sample should resemble the whole.
If you need min/max then min ( min(each node) )
If you need average then you need sum(sum(each node)) / sum(count(each node))*
*You'll likely need to use log here, as you'll probably o
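The per-node reductions described above (min of mins, max of maxes, sum of sums over sum of counts) can be sketched with in-memory chunks standing in for nodes:

```python
import numpy as np

def combine_stats(chunks):
    """Reduce per-chunk partial results the way the post describes:
    min(min(chunk)), max(max(chunk)), sum(sum(chunk)) / sum(count(chunk))."""
    total, count = 0.0, 0
    lo, hi = np.inf, -np.inf
    for chunk in chunks:
        total += chunk.sum()
        count += chunk.size
        lo = min(lo, chunk.min())
        hi = max(hi, chunk.max())
    return lo, hi, total / count

data = np.arange(1000, dtype=float)
chunks = np.array_split(data, 7)  # stand-in for per-node shards
print(combine_stats(chunks))      # matches data.min(), data.max(), data.mean()
```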
On Tuesday, August 15, 2017 at 8:13:19 PM UTC+1, Poul Riis wrote:
> On Tuesday, 15 August 2017 at 19:19:15 UTC+2, bream...@gmail.com wrote:
> > On Tuesday, August 15, 2017 at 5:23:29 PM UTC+1, Poul Riis wrote:
> > > On Tuesday, 15 August 2017 at 07:29:05 UTC+2, dieter wrote:
> > > > Poul
On Tuesday, 15 August 2017 at 19:19:15 UTC+2, bream...@gmail.com wrote:
> On Tuesday, August 15, 2017 at 5:23:29 PM UTC+1, Poul Riis wrote:
> > On Tuesday, 15 August 2017 at 07:29:05 UTC+2, dieter wrote:
> > > Poul Riis writes:
> > > > ...
> > > > For some time I have been using python 3.
On Tuesday, August 15, 2017 at 5:23:29 PM UTC+1, Poul Riis wrote:
> On Tuesday, 15 August 2017 at 07:29:05 UTC+2, dieter wrote:
> > Poul Riis writes:
> > > ...
> > > For some time I have been using python 3.6.0 on a windows computer.
> > > Suddenly, my numpy does not work any more.
> > > This
On Tuesday, 15 August 2017 at 07:29:05 UTC+2, dieter wrote:
> Poul Riis writes:
> > ...
> > For some time I have been using python 3.6.0 on a windows computer.
> > Suddenly, my numpy does not work any more.
> > This one-liner program:
> > import numpy as np
> > results in the long error messa
Poul Riis writes:
> ...
> For some time I have been using python 3.6.0 on a windows computer.
> Suddenly, my numpy does not work any more.
> This one-liner program:
> import numpy as np
> results in the long error message below.
> ...
> Traceback (most recent call last):
> File
> "C:\Users\pr\A
Olaf Dietrich :
> This is a simplified example of a Monte Carlo
> simulation where random vectors (here 2D vectors,
> which are all zero) are summed (the result is in
> r1 and r2 or r, respectively):
>
> def case1():
> import numpy as np
> M = 10
> N = 1
> r1 = np.zeros(M)
> ImportError:
> /home/conrado/Canopy/appdata/canopy-1.5.5.3123.rh5-x86_64/lib/
> libgfortran.so.3:
> version `GFORTRAN_1.4' not found (required by /lib64/liblapack.so.3)
Looks like you need to install the 'GFORTRAN_1.4' plugin into Canopy. I don't
know where you'll find it, but Canopy's main web
On 22/11/2016 16:48, Steve D'Aprano wrote:
On Tue, 22 Nov 2016 11:45 pm, BartC wrote:
I will have a look. Don't forget however that all someone is trying to
do is to multiply two vectors. They're not interested in axes
transformation or making them broadcastable, whatever that means.
You don
On Tue, 22 Nov 2016 11:45 pm, BartC wrote:
> I will have a look. Don't forget however that all someone is trying to
> do is to multiply two vectors. They're not interested in axes
> transformation or making them broadcastable, whatever that means.
You don't know that.
Bart, you have a rather di
On 22/11/2016 12:45, BartC wrote:
On 22/11/2016 12:34, Skip Montanaro wrote:
I'm simply suggesting there is plenty of room for improvement. I even
showed a version that did *exactly* what numpy does (AFAIK) that was
three
times the speed of numpy even executed by CPython. So there is some
myste
On Tue, Nov 22, 2016 at 1:06 PM, BartC wrote:
>> In this specific example, the OP is comparing two radically different
>> pieces of code that clearly and obviously perform differently. He's doing
>> the equivalent of timing the code with his heartbeat, and getting 50 beats
>> for one and 150 beats
On 22/11/2016 03:00, Steve D'Aprano wrote:
On Tue, 22 Nov 2016 12:45 pm, BartC wrote:
You get to know after while what kinds of processes affect timings. For
example, streaming a movie at the same time.
Really, no.
py> with Stopwatch():
... x = math.sin(1.234)
...
elapsed time is very
On 22/11/2016 12:34, Skip Montanaro wrote:
I'm simply suggesting there is plenty of room for improvement. I even
showed a version that did *exactly* what numpy does (AFAIK) that was three
times the speed of numpy even executed by CPython. So there is some mystery
there.
As I indicated in my ear
> I'm simply suggesting there is plenty of room for improvement. I even
showed a version that did *exactly* what numpy does (AFAIK) that was three
times the speed of numpy even executed by CPython. So there is some mystery
there.
As I indicated in my earlier response, your version doesn't pass all
On 22/11/2016 02:44, Steve D'Aprano wrote:
On Tue, 22 Nov 2016 05:43 am, BartC wrote:
The fastest I can get compiled, native code to do this is at 250 million
cross-products per second.
(Actually 300 million using 64-bit code.)
Yes, yes, you're awfully clever, and your secret private langua
Steven D'Aprano writes:
> if we knew we should be doing it, and if we could be bothered to run
> multiple trials and gather statistics and keep a close eye on the
> deviation between measurements. But who wants to do that by hand?
You might like this, for Haskell:
http://www.serpentine.com/cr
On Tuesday 22 November 2016 14:00, Steve D'Aprano wrote:
> Running a whole lot of loops can, sometimes, mitigate some of that
> variation, but not always. Even when running in a loop, you can easily get
> variation of 10% or more just at random.
I think that needs to be emphasised: there's a lot
On Tue, 22 Nov 2016 12:45 pm, BartC wrote:
> On 21/11/2016 14:50, Steve D'Aprano wrote:
>> On Mon, 21 Nov 2016 11:09 pm, BartC wrote:
>
>> Modern machines run multi-tasking operating systems, where there can be
>> other processes running. Depending on what you use as your timer, you may
>> be mea
On Tue, 22 Nov 2016 05:43 am, BartC wrote:
> The fastest I can get compiled, native code to do this is at 250 million
> cross-products per second.
Yes, yes, you're awfully clever, and your secret private language is so much
more efficient than even C that the entire IT industry ought to hang the
On 21/11/2016 14:50, Steve D'Aprano wrote:
On Mon, 21 Nov 2016 11:09 pm, BartC wrote:
Modern machines run multi-tasking operating systems, where there can be
other processes running. Depending on what you use as your timer, you may
be measuring the time that those other processes run. The OS c
On 21/11/2016 17:04, Nobody wrote:
On Mon, 21 Nov 2016 14:53:35 +, BartC wrote:
Also that the critical bits were not implemented in Python?
That is correct. You'll notice that there aren't any loops in numpy.cross.
It's just a wrapper around a bunch of vectorised operations (*, -, []).
I
On Mon, 21 Nov 2016 14:53:35 +, BartC wrote:
> Also that the critical bits were not implemented in Python?
That is correct. You'll notice that there aren't any loops in numpy.cross.
It's just a wrapper around a bunch of vectorised operations (*, -, []).
If you aren't taking advantage of vect
Perhaps your implementation isn't as general as numpy's? I pulled out
the TestCross class from numpy.core.tests.test_numeric and replaced
calls to np.cross with calls to your function. I got an error in
test_broadcasting_shapes:
ValueError: operands could not be broadcast together with shapes (1,2
On 21/11/2016 12:44, Peter Otten wrote:
After a look into the source this is no longer a big surprise (numpy 1.8.2):
if axis is not None:
axisa, axisb, axisc=(axis,)*3
a = asarray(a).swapaxes(axisa, 0)
b = asarray(b).swapaxes(axisb, 0)
The situation may be different when
On Mon, 21 Nov 2016 11:09 pm, BartC wrote:
> On 21/11/2016 02:48, Steve D'Aprano wrote:
[...]
>> However, your code is not a great way of timing code. Timing code is
>> *very* difficult, and can be effected by many things, such as external
>> processes, CPU caches, even the function you use for ge
On Mon, Nov 21, 2016 at 1:38 AM, BartC wrote:
> On 20/11/2016 20:46, DFS wrote:
>>
>> import sys, time, numpy as np
>> loops=int(sys.argv[1])
>>
>> x=np.array([1,2,3])
>> y=np.array([4,5,6])
>> start=time.clock()
In Unix, time.clock doesn't measure wall-clock time, but rather an
approximation to
Steve D'Aprano wrote:
> On Mon, 21 Nov 2016 07:46 am, DFS wrote:
>
>> import sys, time, numpy as np
>> loops=int(sys.argv[1])
>>
>> x=np.array([1,2,3])
>> y=np.array([4,5,6])
>> start=time.clock()
>> for i in range(loops):
>> np.cross(x,y)
>> print "Numpy, %s loops: %.2g seconds" %(loops,ti
On 21/11/2016 02:48, Steve D'Aprano wrote:
On Mon, 21 Nov 2016 07:46 am, DFS wrote:
start=time.clock()
for i in range(loops):
np.cross(x,y)
print "Numpy, %s loops: %.2g seconds" %(loops,time.clock()-start)
However, your code is not a great way of timing code. Timing code is *very*
diff
On Mon, 21 Nov 2016 07:46 am, DFS wrote:
> import sys, time, numpy as np
> loops=int(sys.argv[1])
>
> x=np.array([1,2,3])
> y=np.array([4,5,6])
> start=time.clock()
> for i in range(loops):
> np.cross(x,y)
> print "Numpy, %s loops: %.2g seconds" %(loops,time.clock()-start)
[...]
> $ python
On 20/11/2016 20:46, DFS wrote:
import sys, time, numpy as np
loops=int(sys.argv[1])
x=np.array([1,2,3])
y=np.array([4,5,6])
start=time.clock()
for i in range(loops):
np.cross(x,y)
print "Numpy, %s loops: %.2g seconds" %(loops,time.clock()-start)
x=[1,2,3]
y=[4,5,6]
z=[0,0,0]
start=time.clo
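time.clock was removed in Python 3.8; the same comparison can be redone with timeit (a sketch, with the pure-Python cross product reconstructed from the thread's description rather than copied from it):

```python
import timeit
import numpy as np

x = np.array([1, 2, 3]); y = np.array([4, 5, 6])
xl = [1, 2, 3]; yl = [4, 5, 6]

def cross_list(a, b):
    # Hand-written 3-vector cross product, as in the pure-Python version
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

n = 1000
t_np = timeit.timeit(lambda: np.cross(x, y), number=n)
t_py = timeit.timeit(lambda: cross_list(xl, yl), number=n)
print(f"np.cross: {t_np:.4f}s  pure python: {t_py:.4f}s for {n} calls")
```

For tiny fixed-size vectors the pure-Python version often wins, since np.cross pays per-call overhead for broadcasting machinery; numpy's advantage appears when crossing large arrays of vectors in one call.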
On Tue, Jul 26, 2016 at 6:31 PM, sth wrote:
>
> The restype is a ctypes Structure instance with a single __fields__ entry
> (coords), which
Watch the underscores with ctypes attributes. Your code spells it
correctly as "_fields_".
> is a Structure with two fields (len and data) which are the F
On Tuesday, 26 July 2016 19:10:46 UTC+1, eryk sun wrote:
> On Tue, Jul 26, 2016 at 12:06 PM, sth wrote:
> > I'm using ctypes to interface with a binary which returns a void pointer
> > (ctypes c_void_p) to a nested 64-bit float array:
>
> If this comes from a function result, are you certain th
On Tue, Jul 26, 2016 at 12:06 PM, wrote:
> I'm using ctypes to interface with a binary which returns a void pointer
> (ctypes c_void_p) to a nested 64-bit float array:
If this comes from a function result, are you certain that its restype
is ctypes.c_void_p? I commonly see typos here such as s
On Tuesday, 26 July 2016 16:36:33 UTC+1, Christian Gollwitzer wrote:
> Am 26.07.16 um 17:09 schrieb sth:
> > it's difficult to test a .dylib / .so using valgrind
>
> Why is it difficult? If you have a python script such that
>
> python mytests.py
>
> loads the .so and runs the tests, then
Am 26.07.16 um 17:09 schrieb sth:
it's difficult to test a .dylib / .so using valgrind
Why is it difficult? If you have a python script such that
python mytests.py
loads the .so and runs the tests, then
valgrind --tool=memcheck python mytests.py
should work. This should imme
On Tuesday, 26 July 2016 15:21:14 UTC+1, Peter Otten wrote:
>
> > I'm using ctypes to interface with a binary which returns a void pointer
> > (ctypes c_void_p) to a nested 64-bit float array:
> > [[1.0, 2.0], [3.0, 4.0], … ]
> > then return the pointer so it can be freed
> >
> > I'm using the f
ursch...@gmail.com wrote:
> I'm using ctypes to interface with a binary which returns a void pointer
> (ctypes c_void_p) to a nested 64-bit float array:
> [[1.0, 2.0], [3.0, 4.0], … ]
> then return the pointer so it can be freed
>
> I'm using the following code to de-reference it:
>
> # a 10-ele
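The usual way to de-reference such a pointer is to cast the c_void_p to a typed pointer and slice it. A self-contained sketch, using a Python-allocated buffer in place of the binary's return value:

```python
import ctypes

# Stand-in for a pointer returned by a C library: a ctypes array of doubles
buf = (ctypes.c_double * 4)(1.0, 2.0, 3.0, 4.0)
void_ptr = ctypes.cast(buf, ctypes.c_void_p)

# De-reference: cast the void pointer to a typed pointer, then slice n items
n = 4
typed = ctypes.cast(void_ptr, ctypes.POINTER(ctypes.c_double))
values = typed[:n]
print(values)  # [1.0, 2.0, 3.0, 4.0]
```

Note that n must come from the library's own length field; slicing a ctypes pointer does no bounds checking.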
On Mon, May 23, 2016 at 9:12 AM wrote:
> > On 23 mei 2016, at 14:19, Peter Otten <__pete...@web.de> wrote:
> > li...@onemanifest.net wrote:
> >
> >> I've got a 2D array
> >> And an array of indexes that for shows which row to keep for each column
> >> of values:
> >>
> >> keep = np.array([2, 3, 1
>
> On 23 mei 2016, at 14:19, Peter Otten <__pete...@web.de> wrote:
>
> li...@onemanifest.net wrote:
>
>> I've got a 2D array with values:
>>
>> values = np.array(
>> [[ 20, 38, 4, 45, 65],
>> [ 81, 44, 38, 57, 92],
>> [ 92, 41, 16, 77, 44],
>> [ 53, 62, 9, 75, 12],
>> [ 58, 2, 60, 100,
li...@onemanifest.net wrote:
> I've got a 2D array with values:
>
> values = np.array(
> [[ 20, 38, 4, 45, 65],
> [ 81, 44, 38, 57, 92],
> [ 92, 41, 16, 77, 44],
> [ 53, 62, 9, 75, 12],
> [ 58, 2, 60, 100, 29],
> [ 63, 15, 48, 43, 71],
> [ 80, 97, 87, 64, 60],
> [ 16, 16, 70, 88,
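Selecting one row per column from such an array is a job for advanced (fancy) indexing; a sketch using a truncated copy of the values above and a made-up keep array:

```python
import numpy as np

values = np.array(
    [[20, 38,  4, 45, 65],
     [81, 44, 38, 57, 92],
     [92, 41, 16, 77, 44],
     [53, 62,  9, 75, 12]])

# keep[j] says which row to take for column j
keep = np.array([2, 3, 1, 0, 2])

# Pair each chosen row index with its column index (advanced indexing)
picked = values[keep, np.arange(values.shape[1])]
print(picked)  # [92 62 38 45 44]
```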
As you said, this did the trick.
sortedVal=np.array(val[ind]).reshape((xcoord.size,ycoord.size,zcoord.size))
Only val[ind] instead of val[ind,:] as val is 1D.
Thanks Oscar,
--
https://mail.python.org/mailman/listinfo/python-list
Thanks Oscar,
In my case this did the trick.
sortedVal=np.array(val[ind]).reshape((xcoord.size,ycoord.size,zcoord.size))
--
https://mail.python.org/mailman/listinfo/python-list
On 7 April 2016 at 15:31, Heli wrote:
>
> Thanks a lot Oscar,
>
> The lexsort you suggested was the way to go.
Glad to hear it.
> import h5py
> import numpy as np
> f=np.loadtxt(inputFile,delimiter=None)
> xcoord=np.sort(np.unique(f[:,0]))
> ycoord=np.sort(np.unique(f[:,1]))
> zcoord=np.sort(np.
Thanks a lot Oscar,
The lexsort you suggested was the way to go.
import h5py
import numpy as np
f=np.loadtxt(inputFile,delimiter=None)
xcoord=np.sort(np.unique(f[:,0]))
ycoord=np.sort(np.unique(f[:,1]))
zcoord=np.sort(np.unique(f[:,2]))
x=f[:,0]
y=f[:,1]
z=f[:,2]
val=f[:,3]
ind = np.lexsort(
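The truncated np.lexsort call above likely passes the keys as (z, y, x), since lexsort treats the *last* key as the primary sort key. A small synthetic sketch of that idea:

```python
import numpy as np

# Synthetic (x, y, z, value) rows in scrambled order, as in the post
f = np.array([[1, 1, 2, 20.0],
              [1, 1, 1, 10.0],
              [1, 2, 1, 30.0],
              [1, 2, 2, 40.0]])
x, y, z, val = f[:, 0], f[:, 1], f[:, 2], f[:, 3]

# lexsort sorts by the last key first: here x, then y, then z
ind = np.lexsort((z, y, x))
print(val[ind])  # [10. 20. 30. 40.]
```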
On 6 April 2016 at 17:26, Heli wrote:
>
> Thanks for your replies. I have a question in regard with my previous
> question. I have a file that contains x,y,z and a value for that coordinate
> on each line. Here I am giving an example of the file using a numpy array
> called f.
>
> f=np.array([[
Thanks for your replies. I have a question in regard with my previous question.
I have a file that contains x,y,z and a value for that coordinate on each line.
Here I am giving an example of the file using a numpy array called f.
f=np.array([[1,1,1,1],
[1,1,2,2],
[1,1,3
> What you want is called *transposing* the array:
>
> http://docs.scipy.org/doc/numpy/reference/generated/numpy.transpose.html
>
> That should be a sufficiently fast operation.
Transposing itself is fast, as it just swaps the strides and dimensions
without touching the data (i.e. it returns a n
On 23 March 2016 10:06:56 GMT+00:00, Heli wrote:
>Hi,
>
>I have a 2D numpy array like this:
>
>[[1,2,3,4],
> [1,2,3,4],
> [1,2,3,4]
> [1,2,3,4]]
>
>Is there any fast way to convert this array to
>
>[[1,1,1,1],
> [2,2,2,2]
> [3,3,3,3]
> [4,4,4,4]]
Use the transpose() method:
http://docs.scipy
On Wednesday, March 23, 2016 at 11:07:27 AM UTC+1, Heli wrote:
> Hi,
>
> I have a 2D numpy array like this:
>
> [[1,2,3,4],
> [1,2,3,4],
> [1,2,3,4]
> [1,2,3,4]]
>
> Is there any fast way to convert this array to
>
> [[1,1,1,1],
> [2,2,2,2]
> [3,3,3,3]
> [4,4,4,4]]
>
> In general I wo
On Wed, Mar 23, 2016 at 9:06 PM, Heli wrote:
> I have a 2D numpy array like this:
>
> [[1,2,3,4],
> [1,2,3,4],
> [1,2,3,4]
> [1,2,3,4]]
>
> Is there any fast way to convert this array to
>
> [[1,1,1,1],
> [2,2,2,2]
> [3,3,3,3]
> [4,4,4,4]]
What you want is called *transposing* the array:
h
On 03/23/16 at 03:06am, Heli wrote:
> I have a 2D numpy array like this:
>
> [[1,2,3,4],
> [1,2,3,4],
> [1,2,3,4]
> [1,2,3,4]]
>
> Is there any fast way to convert this array to
>
> [[1,1,1,1],
> [2,2,2,2]
> [3,3,3,3]
> [4,4,4,4]]
You don't mean just transposing your original array, as
On Wed, 23 Mar 2016 09:06 pm, Heli wrote:
> Hi,
>
> I have a 2D numpy array like this:
>
> [[1,2,3,4],
> [1,2,3,4],
> [1,2,3,4]
> [1,2,3,4]]
>
> Is there any fast way to convert this array to
>
> [[1,1,1,1],
> [2,2,2,2]
> [3,3,3,3]
> [4,4,4,4]]
Mathematically, this is called the "tran
On Friday, 13 November 2015 18:17:59 UTC+1, Ian wrote:
> On Fri, Nov 13, 2015 at 8:37 AM, PythonDude wrote:
> > 3) I DON'T understand why the code doesn't look like this:
> >
> > means, stds = np.column_stack([
> > for _ in xrange(n_portfolios):
> > getMuSigma_from_PF(return_vec) ])
>
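The reason the quoted code cannot work is that a for clause only exists inside a comprehension; the comprehension itself goes inside the column_stack call. A sketch with a stand-in for getMuSigma_from_PF, which is not shown in the thread:

```python
import numpy as np

rng = np.random.default_rng(0)

def mu_sigma(returns):
    # Stand-in for the post's getMuSigma_from_PF:
    # mean and std of one randomly weighted portfolio
    w = rng.random(returns.shape[0])
    w /= w.sum()
    port = returns.T @ w
    return port.mean(), port.std()

return_vec = rng.normal(size=(4, 100))  # 4 assets, 100 observations
n_portfolios = 10

# The comprehension goes *inside* the call; column_stack stacks the
# (mu, sigma) tuples as columns, giving a (2, n_portfolios) array to unpack
means, stds = np.column_stack([mu_sigma(return_vec) for _ in range(n_portfolios)])
print(means.shape, stds.shape)  # (10,) (10,)
```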