On 08/03/2016 01:47, BartC wrote:
> On 08/03/2016 01:12, Mark Lawrence wrote:
>> On 08/03/2016 01:00, BartC wrote:
>>> If your efforts manage to double the speed of reading file A, then
>>> probably the reading of file B is also going to be improved! In
>>> practice you use a variety of files, but one at a time will do.
>> What is the difference in your timing when you first read the file,
>> and then read it a second time when it's been cached by the OS? In
>> other words, you are probably measuring more of the response time of
>> the disk than the code that does the reading, hence making your
>> figures useless.
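A minimal sketch of the measurement being asked about, assuming a
hypothetical test file named 'test.dat'; the second read should come
straight from the OS cache, so a large gap between the two numbers
would point at disk response time rather than the Python code:

    import time

    def timed_read(path):
        # Read the whole file and return (elapsed seconds, size in bytes).
        start = time.perf_counter()
        with open(path, 'rb') as f:
            data = f.read()
        return time.perf_counter() - start, len(data)

    # First read: may have to hit the disk if the file isn't cached yet.
    t1, size = timed_read('test.dat')
    # Second read: almost certainly served from the OS page cache.
    t2, _ = timed_read('test.dat')
    print('first read : %.4fs for %d bytes' % (t1, size))
    print('second read: %.4fs (cached)' % t2)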
> It's not going to be significant. My hard drive is going to read at,
> what, 100MB per second? Probably more.
>
> One test file is 0.2MB. Load time is going to be negligible whether
> cached or not.
>
> The Python timing for that file is around 20 seconds, time enough to
> read 10000 copies from the disk.
>
> And a C program reads /and decodes/ the same file from the same disk
> in between 0.1 and 0.2 seconds.
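One way to settle how much of those 20 seconds is disk rather than
Python is to take the read out of the measurement entirely: slurp the
bytes into memory first, then time only the processing step. The
sketch below assumes a hypothetical 'test.dat' and a stand-in decode()
loop in place of whatever the real program does:

    import time

    def decode(data):
        # Stand-in for the real per-byte decoding work (hypothetical).
        total = 0
        for b in data:
            total += b
        return total

    with open('test.dat', 'rb') as f:
        data = f.read()            # disk or cache read, deliberately untimed

    start = time.perf_counter()
    result = decode(data)          # only the pure-Python work is timed
    elapsed = time.perf_counter() - start
    print('decode: %.3fs for %d bytes' % (elapsed, len(data)))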
So how much of that time is Python startup time, compared to C, where
it is effectively zero? Or are you suggesting that C code is always
100 times faster than Python? I'd like to see you /write/ the C code
100 times faster than the Python, but of course that's where Python
shines, which is why it is so popular.
--
My fellow Pythonistas, ask not what our language can do for you, ask
what you can do for our language.
Mark Lawrence