Re: Pause and Resuming of a file upload process to AWS S3

2015-10-28 Thread dieter
ashw...@nanoheal.com writes:
> I wanted to know whether it is possible in python to pause and resume the 
> file upload process to AWS S3

Have you checked that "AWS S3" supports it?

If so, you will be able to do it with Python -- however, you likely
need to use low level modules (such as "httplib", maybe "ftplib")
where you can fit in the technical details you learn from your
"AWS S3" analysis.

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: how to get python socket to use a specific interface

2015-10-28 Thread Robin Becker

..


binding to the local IP seems to be a windows only thing.


No, it is a pretty standard BSD socket layer thing. (Windows got its original
TCP stack from there too). I just tested a Linux RHEL6 host binding to a
specific address just now using telnet:

  /usr/bin/telnet -b x.x.x.193 x.x.x.174 22

where the .193 is not the primary address - it is an additional local address.
The connection was correctly received by the target as from the alias address,
not the base address:

I don't think I'll be able to do all I need with telnet :(

Please show me the exact code you're using. This really should work without
annoying "device" binding.

The counter examples in the articles you cite are for particularly weird
circumstances, such as where the routing table cannot correctly deduce the
interface (distinct attached networks with the _same_ network numbering -
ghastly). They don't say "binding to the local IP seems to be a windows only
thing" that I can see.

Please post your failing code. I suspect you're missing something.

...
Well originally I was hacking on miproxy to try and get it to use a specific ip 
address. I must have messed up somewhere there as when I try this more obvious code



from socket import socket, SOL_SOCKET
BIND_DEVICE='eth0.0'
sock = socket()
sock.settimeout(10)
sock.bind(('xx.xx.xx.13', 0))
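# 25 below is SO_BINDTODEVICE on Linux; binding the source address
# above is usually all that's needed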
#sock.setsockopt(SOL_SOCKET, 25, BIND_DEVICE)
sock.connect(("int.hh.com", 80))
sock.send("GET / HTTP/1.0\r\n\r\n")
print sock.recv(20)
sock.close()


it does work as intended and I can see the .13 address hitting the remote 
server. I guess my hack of the miproxy code didn't work as intended.


Anyhow my upstream provider has taken over the problem so hopefully I will get 
the address cleared at some point.

--
Robin Becker

--
https://mail.python.org/mailman/listinfo/python-list


Re: Windows 10 pip2.7.exe uninstall fails

2015-10-28 Thread Robin Becker



A message box is displayed:-

"This app can't run on your PC
To find a version for your PC, check with the software publisher".

Close the message box and:-

"Access is denied."

Searching hasn't thrown up a single reference to uninstall errors like this; any
ideas?



FWIW on my 64bit windows 7


C:\tmp>\python27\Scripts\pip.exe install tox
Downloading/unpacking tox
  Downloading tox-2.1.1-py2.py3-none-any.whl
Downloading/unpacking pluggy>=0.3.0,<0.4.0 (from tox)
  Downloading pluggy-0.3.1-py2.py3-none-any.whl
Requirement already satisfied (use --upgrade to upgrade): virtualenv>=1.11.2 in 
c:\python27\lib\site-packages\virtualenv
-1.11.6-py2.7.egg (from tox)
Downloading/unpacking py>=1.4.17 (from tox)
Installing collected packages: tox, pluggy, py
Successfully installed tox pluggy py
Cleaning up...

C:\tmp>\python27\Scripts\pip.exe uninstall tox
Uninstalling tox:
  c:\python27\lib\site-packages\tox-2.1.1.dist-info\description.rst
  c:\python27\lib\site-packages\tox-2.1.1.dist-info\entry_points.txt
  c:\python27\lib\site-packages\tox-2.1.1.dist-info\metadata
  c:\python27\lib\site-packages\tox-2.1.1.dist-info\metadata.json
  c:\python27\lib\site-packages\tox-2.1.1.dist-info\record
  c:\python27\lib\site-packages\tox-2.1.1.dist-info\top_level.txt
  c:\python27\lib\site-packages\tox-2.1.1.dist-info\wheel
  c:\python27\lib\site-packages\tox\__init__.py
  c:\python27\lib\site-packages\tox\__init__.pyc
  c:\python27\lib\site-packages\tox\__main__.py
  c:\python27\lib\site-packages\tox\__main__.pyc
  c:\python27\lib\site-packages\tox\_pytestplugin.py
  c:\python27\lib\site-packages\tox\_pytestplugin.pyc
  c:\python27\lib\site-packages\tox\_quickstart.py
  c:\python27\lib\site-packages\tox\_quickstart.pyc
  c:\python27\lib\site-packages\tox\_verlib.py
  c:\python27\lib\site-packages\tox\_verlib.pyc
  c:\python27\lib\site-packages\tox\config.py
  c:\python27\lib\site-packages\tox\config.pyc
  c:\python27\lib\site-packages\tox\hookspecs.py
  c:\python27\lib\site-packages\tox\hookspecs.pyc
  c:\python27\lib\site-packages\tox\interpreters.py
  c:\python27\lib\site-packages\tox\interpreters.pyc
  c:\python27\lib\site-packages\tox\result.py
  c:\python27\lib\site-packages\tox\result.pyc
  c:\python27\lib\site-packages\tox\session.py
  c:\python27\lib\site-packages\tox\session.pyc
  c:\python27\lib\site-packages\tox\venv.py
  c:\python27\lib\site-packages\tox\venv.pyc
  c:\python27\scripts\tox-quickstart.exe
  c:\python27\scripts\tox.exe
Proceed (y/n)? y
  Successfully uninstalled tox

C:\tmp>


so perhaps yours is an exceptional case
--
Robin Becker

--
https://mail.python.org/mailman/listinfo/python-list


Re: Pause and Resuming of a file upload process to AWS S3

2015-10-28 Thread Michiel Overtoom

Hi,

> I wanted to know whether it is possible in python to pause and resume the 
> file upload process to AWS S3

Have a look at s3tools/s3cmd at http://s3tools.org/s3cmd, in 
http://s3tools.org/usage I read:

   --continue-put   Continue uploading partially uploaded files
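
For example (bucket and file names are placeholders):

   s3cmd put --continue-put ./bigfile s3://mybucket/bigfile

If you'd rather drive it from Python, the mechanism underneath is S3's
multipart upload API.  A rough sketch with the boto3 client -- assuming that
library is an option; bucket/key names are illustrative, and you'd persist
upload_id and the parts list somewhere to survive a pause:

    import boto3

    s3 = boto3.client("s3")
    BUCKET, KEY, PATH = "my-bucket", "big.file", "big.file"

    # Starting a multipart upload returns an UploadId; keeping it around is
    # what makes pause/resume possible.
    upload_id = s3.create_multipart_upload(Bucket=BUCKET, Key=KEY)["UploadId"]

    parts = []
    with open(PATH, "rb") as f:
        part_number = 1
        while True:
            chunk = f.read(8 * 1024 * 1024)   # parts must be >= 5 MB, except the last
            if not chunk:
                break
            resp = s3.upload_part(Bucket=BUCKET, Key=KEY, UploadId=upload_id,
                                  PartNumber=part_number, Body=chunk)
            parts.append({"PartNumber": part_number, "ETag": resp["ETag"]})
            part_number += 1
            # To "pause", stop here after saving upload_id and parts; to
            # "resume", seek past the bytes already sent and carry on.

    s3.complete_multipart_upload(Bucket=BUCKET, Key=KEY, UploadId=upload_id,
                                 MultipartUpload={"Parts": parts})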

Greetings,

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: UNABLE TO GET IDLE TO RUN

2015-10-28 Thread Peter Otten
Terry Reedy wrote:

Thank you for your patience.

> Why do you think it a misfeature for IDLE to execute code the way Python
> does?

Sadly I wasn't aware that the interactive interpreter is also vulnerable.
I should have been, but failed to add one and one.

Until now I have often started python in a directory with unknown contents, 
to use it as a calculator or to explore the files in that directory.

I will stop doing so.

-- 
https://mail.python.org/mailman/listinfo/python-list


List comprehension with if-else

2015-10-28 Thread Larry Martell
I'm trying to do a list comprehension with an if and that requires an
else, but in the else case I do not want anything added to the list.

For example, if I do this:

white_list = [l.control_hub.serial_number if l.wblist == wblist_enum['WHITE']
              else None for l in wblist]

I end up with None in my list for the else cases. Is there a way I can
do this so for the else cases nothing is added to the list?
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: List comprehension with if-else

2015-10-28 Thread Zachary Ware
On Wed, Oct 28, 2015 at 11:25 AM, Larry Martell  wrote:
> I'm trying to do a list comprehension with an if and that requires an
> else, but in the else case I do not want anything added to the list.
>
> For example, if I do this:
>
> white_list = [l.control_hub.serial_number if l.wblist == wblist_enum['WHITE'] 
>  else None for l in wblist]

Switch the 'if' and the 'for':

   white_list = [l.control_hub.serial_number
                 for l in wblist
                 if l.wblist == wblist_enum['WHITE']]

-- 
Zach
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: List comprehension with if-else

2015-10-28 Thread Carl Meyer
Hi Larry,

On 10/28/2015 10:25 AM, Larry Martell wrote:
> I'm trying to do a list comprehension with an if and that requires an
> else, but in the else case I do not want anything added to the list.
> 
> For example, if I do this:
> 
> white_list = [l.control_hub.serial_number if l.wblist ==
> wblist_enum['WHITE']  else None for l in wblist]
> 
> I end up with None in my list for the else cases. Is there a way I can
> do this so for the else cases nothing is added to the list?

You're not really using the if clause of the list comprehension here,
you're just using a ternary if-else in the result expression. List
comprehension if clauses go at the end, and don't require an else:

[l.foo for l in wblist if l.bar == "baz"]
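
For instance, with some made-up sample data, the difference looks like this:

    >>> nums = [1, 2, 3, 4]
    >>> [n * 10 if n % 2 == 0 else None for n in nums]   # ternary in the expression
    [None, 20, None, 40]
    >>> [n * 10 for n in nums if n % 2 == 0]              # if clause at the end
    [20, 40]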

Carl



-- 
https://mail.python.org/mailman/listinfo/python-list


Re: UNABLE TO GET IDLE TO RUN

2015-10-28 Thread Michael Torrie
On 10/28/2015 10:10 AM, Peter Otten wrote:
> Terry Reedy wrote:
> 
> Thank you for your patience.
> 
>> Why do you think it a misfeature for IDLE to execute code the way Python
>> does?
> 
> Sadly I wasn't aware that the interactive interpreter is also vulnerable.
> I should have been, but failed to add one and one.
> 
> Until now I have often started python in a directory with unknown contents, 
> to use it as a calculator or to explore the files in that directory.
> 
> I will stop doing so.

I'm curious what behavior you would suggest?

In the case of the bare interactive interpreter, since there's no script
loaded, the current directory is added so you can import modules you are
working on.  I do this all the time to help with testing and development
of my projects' modules. This behavior makes perfect sense to me and I
don't see any other practical alternative that is useful, except for
some syntax that differentiates between "local" imports and system ones.
 Not being able to easily import local modules would make the
interactive interpreter next to useless for me.

Given that this is only the behavior for interactive Python anyway, I
don't see this as a significant vulnerability. If a bad guy is littering
your working directories with malicious python programs you might
import, you've already lost. No amount of Python tweaks are going to
save you.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: UNABLE TO GET IDLE TO RUN

2015-10-28 Thread Peter Otten
Michael Torrie wrote:

> On 10/28/2015 10:10 AM, Peter Otten wrote:
>> Terry Reedy wrote:
>> 
>> Thank you for your patience.
>> 
>>> Why do you think it a misfeature for IDLE to execute code the way Python
>>> does?
>> 
>> Sadly I wasn't aware that the interactive interpreter is also vulnerable.
>> I should have been, but failed to add one and one.
>> 
>> Until now I have often started python in a directory with unknown
>> contents, to use it as a calculator or to explore the files in that
>> directory.
>> 
>> I will stop doing so.
> 
> I'm curious what behavior you would suggest?

I didn't suggest anything, because I didn't see a practical remedy. 
 
> In the case of the bare interactive interpreter, since there's no script
> loaded, the current directory is added so you can import modules you are
> working on.  I do this all the time to help with testing and development
> of my projects' modules. This behavior makes perfect sense to me and I
> don't see any other practical alternative that is useful, except for
> some syntax that differentiates between "local" imports and system ones.
>  Not being able to easily import local modules would make the
> interactive interpreter next to useless for me.
> 
> Given that this is only the behavior for interactive Python anyway, I
> don't see this as a significant vulnerability. If a bad guy is littering
> your working directories with malicious python programs you might
> import, you've already lost. No amount of Python tweaks are going to
> save you.

The problematic module might not even be malicious, it could just lack the 

if __name__ == "__main__": ...

guard.

And I am the bad guy I have in mind ;)

When I download a Python project, have a look at it and then fire up an 
editor...

$ hg clone http://www.example.com/whatever
$ cd whatever
$ ls -1
interesting_stuff.py
...
string.py
...
also_interesting.py
...
readline.py
...
$ idle  # or $ python

I don't want to check if there are any modules in the project that have 
names that will cause idle or python to import them instead of those it 
actually needs.
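
For illustration (assuming a stray string.py is sitting in such a checkout),
the mechanism is easy to see:

    $ python
    >>> import sys
    >>> sys.path[0]    # '' means "current directory", searched first
    ''
    >>> import string
    >>> string.__file__    # points at the local string.py, not the stdlib one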

Safer behaviour might be achieved by deferring the addition of the current 
directory to the path until idle or the interactive interpreter is 
completely set up or even by limiting import during the interpreter startup 
to built-in modules or a whitelist.

PS: The shell people have learned their lesson and no longer include the 
working directory in the PATH: 
$ ls # the real thing
$ ./ls # use at your own risk

So maybe

>>> import string  # stdlib
>>> from . import string  # whatever you dropped into your working directory

OK, probably not (just brainstorming).

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: List comprehension with if-else

2015-10-28 Thread Larry Martell
On Wed, Oct 28, 2015 at 12:36 PM, Zachary Ware
 wrote:
> On Wed, Oct 28, 2015 at 11:25 AM, Larry Martell  
> wrote:
>> I'm trying to do a list comprehension with an if and that requires an
>> else, but in the else case I do not want anything added to the list.
>>
>> For example, if I do this:
>>
>> white_list = [l.control_hub.serial_number if l.wblist == 
>> wblist_enum['WHITE']  else None for l in wblist]
>
> Switch the 'if' and the 'for':
>
>white_list = [l.control_hub.serial_number for l in wblist if
> l.wblist == wblist_enum['WHITE']]

Perfect. Thanks
-- 
https://mail.python.org/mailman/listinfo/python-list


Most space-efficient way to store log entries

2015-10-28 Thread Marc Aymerich
Hi,
I'm writing an application that saves historical state in a log file.
I want to be really efficient in terms of used bytes.

What I'm doing now is:

1) First use zlib.compress
2) And then remove all new lines using binascii.b2a_base64, so I have
a log entry per line.
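
Roughly, a minimal sketch of those two steps (the helper names are just
illustrative):

    import binascii, zlib

    def encode_entry(data):
        # compress, then base64-encode; b2a_base64 emits a single line with
        # no embedded newlines, just a trailing "\n"
        return binascii.b2a_base64(zlib.compress(data))

    def decode_entry(line):
        return zlib.decompress(binascii.a2b_base64(line))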

but b2a_base64 is far from ideal: adds lots of bytes to the compressed
log entry. So, I wonder if perhaps there is a better way to remove new
lines from the zlib output? or maybe a different approach?

Anyone?

Thanks!!
-- 
Marc
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Most space-efficient way to store log entries

2015-10-28 Thread Chris Angelico
On Thu, Oct 29, 2015 at 9:30 AM, Marc Aymerich  wrote:
> I'm writing an application that saves historical state in a log file.
> I want to be really efficient in terms of used bytes.

Why, exactly?

By zipping the state, you make it utterly opaque. It'll require some
sort of tool to tease it apart before you can read anything. Much more
useful would be to have some sort of textual delimiter, followed by
the content - then when you come to read it, all you need is a text
viewer.

Disk space is not expensive. Even if you manage to cut your file by a
factor of four (75% compression, which is entirely possible if your
content is plain text, but far from guaranteed), that's maybe three
years of Moore's Law at most. You can get 3-4 terabytes of storage for
roughly $100-$200, depending on exactly where you buy it, which dollar
you're using, etc. How long will your program have to run to generate
that much data? If you can't do that in, say, two years, don't bother
compressing.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: how to get python socket to use a specific interface

2015-10-28 Thread Cameron Simpson

On 28Oct2015 10:41, Robin Becker  wrote:

binding to the local IP seems to be a windows only thing.

No, it is a pretty standard BSD socket layer thing. (Windows got its original
TCP stack from there too). I just tested a Linux RHEL6 host binding to a
specific address just now using telnet:

 /usr/bin/telnet -b x.x.x.193 x.x.x.174 22

where the .193 is not the primary address - it is an additional local address.
The connection was correctly received by the target as from the alias address,
not the base address:

I don't think I'll be able to do all I need with telnet :(


Indeed:-( But it is very handy as a test for basic connection stuff in the 
field.



Please show me the exact code you're using. This really should work without
annoying "device" binding. [...]


Well originally I was hacking on miproxy to try and get it to use a specific 
ip address. I must have messed up somewhere there as when I try this more 
obvious code [...]



from socket import socket, SOL_SOCKET
BIND_DEVICE='eth0.0'
sock = socket()
sock.settimeout(10)
sock.bind(('xx.xx.xx.13', 0))
#sock.setsockopt(SOL_SOCKET, 25, BIND_DEVICE)
sock.connect(("int.hh.com", 80))
sock.send("GET / HTTP/1.0\r\n\r\n")
print sock.recv(20)
sock.close()


it does work as intended and I can see the .13 address hitting the 
remote server. I guess my hack of the miproxy code didn't work as 
intended.


That is reassuring to me. Thanks for checking. Reaching for a particular device 
is annoying and weird and possibly even pointless.


Anyhow my upstream provider has taken over the problem so hopefully I will get 
the address cleared at some point.


Ok.

Cheers,
Cameron Simpson 
--
https://mail.python.org/mailman/listinfo/python-list


Re: Most space-efficient way to store log entries

2015-10-28 Thread Tim Chase
On 2015-10-29 09:38, Chris Angelico wrote:
> On Thu, Oct 29, 2015 at 9:30 AM, Marc Aymerich
>  wrote:
> > I'm writing an application that saves historical state in a log
> > file. I want to be really efficient in terms of used bytes.
> 
> Why, exactly?
> 
> By zipping the state, you make it utterly opaque.

If it's only zipped, it's not opaque.  Just `zcat` or `zgrep` and
process away.  The whole base64+minus_newlines thing does opaquify
and doesn't really save all that much for the trouble.

> Disk space is not expensive. Even if you manage to cut your file by
> a factor of four (75% compression, which is entirely possible if
> your content is plain text, but far from guaranteed)

Though one also has to consider the speed of reading it off the drive
for processing.  If you have spinning-rust drives, it's pretty slow
(and SSD is still not like accessing RAM), and reading zipped
content can shovel a LOT more data at your CPU than if it is coming
off the drive uncompressed.  Logs aren't much good if they aren't
being monitored and processed for the information they contain.  If
nobody is monitoring the logs, just write them to /dev/null for 100%
compression. ;-)

-tkc


-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Most space-efficient way to store log entries

2015-10-28 Thread Mark Lawrence

On 28/10/2015 22:53, Tim Chase wrote:

On 2015-10-29 09:38, Chris Angelico wrote:

On Thu, Oct 29, 2015 at 9:30 AM, Marc Aymerich
 wrote:

I'm writing an application that saves historical state in a log
file. I want to be really efficient in terms of used bytes.


Why, exactly?

By zipping the state, you make it utterly opaque.


If it's only zipped, it's not opaque.  Just `zcat` or `zgrep` and
process away.  The whole base64+minus_newlines thing does opaquify
and doesn't really save all that much for the trouble.


Disk space is not expensive. Even if you manage to cut your file by
a factor of four (75% compression, which is entirely possible if
your content is plain text, but far from guaranteed)


Though one also has to consider the speed of reading it off the drive
for processing.  If you have spinning-rust drives, it's pretty slow
(and SSD is still not like accessing RAM), and reading zipped
content can shovel a LOT more data at your CPU than if it is coming
off the drive uncompressed.  Logs aren't much good if they aren't
being monitored and processed for the information they contain.  If
nobody is monitoring the logs, just write them to /dev/null for 100%
compression. ;-)

-tkc



Can you get better than 100% compression if you write them to somewhere 
other than /dev/null/ ?


--
My fellow Pythonistas, ask not what our language can do for you, ask
what you can do for our language.

Mark Lawrence

--
https://mail.python.org/mailman/listinfo/python-list


Re: Most space-efficient way to store log entries

2015-10-28 Thread Chris Angelico
On Thu, Oct 29, 2015 at 9:53 AM, Tim Chase
 wrote:
> On 2015-10-29 09:38, Chris Angelico wrote:
>> On Thu, Oct 29, 2015 at 9:30 AM, Marc Aymerich
>>  wrote:
>> > I'm writing an application that saves historical state in a log
>> > file. I want to be really efficient in terms of used bytes.
>>
>> Why, exactly?
>>
>> By zipping the state, you make it utterly opaque.
>
> If it's only zipped, it's not opaque.  Just `zcat` or `zgrep` and
> process away.  The whole base64+minus_newlines thing does opaquify
> and doesn't really save all that much for the trouble.

If you zip the whole file as a whole, yes. If you zip individual
pieces, you can't zcat it (at least, I don't think so?). Conversely,
zipping the whole file means you have no choice but to sequentially
scan it - you can't pull up the last section of the file. It's still a
binary blob to many tools - we as humans may have handy tools around,
but it's still going to be an extra step for any tool that doesn't
intrinsically support it.

>> Disk space is not expensive. Even if you manage to cut your file by
>> a factor of four (75% compression, which is entirely possible if
>> your content is plain text, but far from guaranteed)
>
> Though one also has to consider the speed of reading it off the drive
> for processing.  If you have spinning-rust drives, it's pretty slow
> (and SSD is still not like accessing RAM), and reading zipped
> content can shovel a LOT more data at your CPU than if it is coming
> off the drive uncompressed.  Logs aren't much good if they aren't
> being monitored and processed for the information they contain.  If
> nobody is monitoring the logs, just write them to /dev/null for 100%
> compression. ;-)

Yeah. There are lots of considerations, but frankly, I don't think
disk _capacity_ is a big one. Sometimes you _might_ get some benefit
from compression (writing less sectors might save you time), but I
almost never fill up my hard drives, and when I do, it's usually with
already-compressed data (movies and stuff).

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Most space-efficient way to store log entries

2015-10-28 Thread Chris Angelico
On Thu, Oct 29, 2015 at 11:21 AM, Mark Lawrence  wrote:
>> Though one also has to consider the speed of reading it off the drive
>> for processing.  If you have spinning-rust drives, it's pretty slow
>> (and SSD is still not like accessing RAM), and reading zipped
>> content can shovel a LOT more data at your CPU than if it is coming
>> off the drive uncompressed.  Logs aren't much good if they aren't
>> being monitored and processed for the information they contain.  If
>> nobody is monitoring the logs, just write them to /dev/null for 100%
>> compression. ;-)
>>
>> -tkc
>>
>
> Can you get better than 100% compression if you write them to somewhere
> other than /dev/null/ ?

If you write them to /dev/sda, you might be able to create free space
where there was none before. It all depends on the exact content of
your logs :)

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Most space-efficient way to store log entries

2015-10-28 Thread Cameron Simpson

On 29Oct2015 11:39, Chris Angelico  wrote:

If it's only zipped, it's not opaque.  Just `zcat` or `zgrep` and
process away.  The whole base64+minus_newlines thing does opaquify
and doesn't really save all that much for the trouble.


If you zip the whole file as a whole, yes. If you zip individual
pieces, you can't zcat it (at least, I don't think so?).


If it is pure gzip, then yes you can. So this:

 gunzip < file1.gz; gunzip < file2.gz

and this:

 cat file1.gz file2.gz | gunzip

should produce the same output. I think this works at the record level too.
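
A quick sketch of the same idea in Python with the stdlib gzip module (data
made up):

    import gzip

    # Two records compressed independently, then simply concatenated...
    blob = gzip.compress(b"first record\n") + gzip.compress(b"second record\n")
    # ...still decompress as a single stream, because a gzip file may
    # contain multiple members.
    print(gzip.decompress(blob))   # b'first record\nsecond record\n'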

Of course all bets are off once you wrap the records in some outer layer (I
have a file format made of little records which may have their data sections
zipped).



Conversely,
zipping the whole file means you have no choice but to sequentially
scan it - you can't pull up the last section of the file. It's still a
binary blob to many tools - we as humans may have handy tools around,
but it's still going to be an extra step for any tool that doesn't
intrinsically support it.


Yes. But if you're keeping a lot of data or you're using a very constrained 
system you probably do want compression somewhere in there. Maybe the OP is 
optimising prematurely, but again, maybe not.


However it sounds like the OP wants a text log encoding some test state, and is
just compressing to gain a little room; I suspect that with the kind of short
record you might put on a line, the compression obtained will be small and the
loss from any base64 post step will undo it all.  He may be better off keeping
conventional text logs and just rotating them and compressing the rotated
copies.


Cheers,
Cameron Simpson 

Hoping to shave precious seconds off the time it would take me to get through 
the checkout process and on my way home, I opted for the express line ("9 Items 
Or Less [sic]"  Why nine items?  Where do they come up with these rules, 
anyway?  It's the same way at most stores -- always some oddball number like 
that, instead of a more understandable multiple of five.  Like "five.")

- Geoff Miller, geo...@purplehaze.corp.sun.com
--
https://mail.python.org/mailman/listinfo/python-list


Re: Most space-efficient way to store log entries

2015-10-28 Thread Chris Angelico
On Thu, Oct 29, 2015 at 12:09 PM, Cameron Simpson  wrote:
> On 29Oct2015 11:39, Chris Angelico  wrote:
>>>
>>> If it's only zipped, it's not opaque.  Just `zcat` or `zgrep` and
>>> process away.  The whole base64+minus_newlines thing does opaquify
>>> and doesn't really save all that much for the trouble.
>>
>>
>> If you zip the whole file as a whole, yes. If you zip individual
>> pieces, you can't zcat it (at least, I don't think so?).
>
>
> If it is pure gzip, then yes you can. So this:
>
>  gunzip < file1.gz; gunzip < file2.gz
>
> and this:
>
>  cat file1.gz file2.gz | gunzip
>
> should produce the same output. I think this works at the record level too.
>
> Of course all bets are off once you wrap the records in some outer layer (I
> have a file format made of little records which may have their data sections
> zipped).

I was thinking in terms of having them wrapped, yes. Though I didn't
think of the possibility of merely abutting compressed streams; with a
bit of seeking and footling around, you could possibly make the file
more tailable. Lots of options, but I still think uncompressed text is
best.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: UNABLE TO GET IDLE TO RUN

2015-10-28 Thread Michael Torrie
On 10/28/2015 12:21 PM, Peter Otten wrote:
> PS: The shell people have learned their lesson and no longer include the 
> working directory in the PATH: 
> $ ls # the real thing
> $ ./ls # use at your own risk

Sure but this is a somewhat different genre.

> 
> So maybe
> 
 import string  # stdlib
 from . import string  # whatever you dropped into your working directory
> 
> OK, probably not (just brainstorming).
> 

It's actually not just interactive Python sessions where people end up
attacking themselves by shadowing other modules on the path.  It
happens in programs too, like recently on the list where someone called
their python program turtle.py and managed to stick it in a weird path,
but one that is normally in the Python search path, and then when
someone wanted to import turtle from the stdlib, they got the one in
C:\windows\system32.  Granted that is really a misconfiguration problem
on the part of the OS for allowing a normal user to write to a system
location.

So yeah I dunno.




-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Most space-efficient way to store log entries

2015-10-28 Thread Tim Chase
On 2015-10-29 00:21, Mark Lawrence wrote:
> On 28/10/2015 22:53, Tim Chase wrote:
>> If nobody is monitoring the logs, just write them to /dev/null
>> for 100% compression. ;-)
> 
> Can you get better than 100% compression if you write them to
> somewhere other than /dev/null/ ?

Well, /dev/null is a device. I don't know what happens if you remove
it and make it a sub-directory.

But sure, you can use the "rm -rf /*" utility to compress your files
and you'll get more space on your disk than you had before you
started.  It works by commenting out all of your drive's
content...note the beginning of the C-style comment token.  You
might also have to run it as root or prefixed with `sudo` because
getting >100% compression requires super-user permissions.

If-you-wipe-your-drive-don't-blame-me'ly yers,

-tkc


-- 
https://mail.python.org/mailman/listinfo/python-list


system error - api-ms-win-crt-runtime-l1-1-0.dll

2015-10-28 Thread Nagu Koppula
Hi 

 

Could you help me to resolve the below error on my Windows 7 laptop?

I have tried re-installing / repairing, but the error still persists.

 

 

Error - screenshot (image attachment not included)

Regards,

Nagu

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Most space-efficient way to store log entries

2015-10-28 Thread Martin A. Brown

Hello Marc,

I think you have gotten quite a few answers already, but I'll add my 
voice.

> I'm writting an application that saves historical state in a log 
> file.

If I were in your shoes, I'd probably use the logging module rather 
than saving state in my own log file.  That allows the application 
to send all historical state to the system log.  Then, it could be 
captured, recorded, analyzed and purged (or neglected) along with 
all of the other logging.

But, this may not be appropriate for your setup.  See also my final 
two questions at the bottom.

> I want to be really efficient in terms of used bytes.

It is good to want to be efficient.  Don't cost your (future) self 
or some other poor schlub future working or computational 
efficiency, though!

Somebody may one day want to extract utility out of the 
application's log data.  So, don't make that data too hard to read.

> What I'm doing now is:
> 
> 1) First use zlib.compress

... assuming you are going to write your own files, then, certainly.

If you also want better compression (quantified in a table below) at 
a higher CPU cost, try bz2 or lzma (Python3).  Note that there is 
not a symmetric CPU cost for compression and decompression.  
Usually, decompression is much cheaper.

  # compress = bz2.compress
  # compress = lzma.compress
  compress = zlib.compress

To read the logging data, then the programmer, application analyst 
or sysadmin will need to spend CPU to uncompress.  If it's rare, 
that's probably a good tradeoff.

Here's my small comparison matrix of the time it takes to transform 
a sample log file that was roughly 33MB (in memory, no I/O costs 
included in timing data).  The chart also shows the size of the 
compressed data, in bytes and percentage (to demonstrate compression 
efficiency).

   format             bytes    pct    walltime
   raw             34311602   100%     0.0s
   base64-encode   46350762   135%     0.43066s
   zlib-compress    3585508    10%     0.54773s
   bz2-compress     2704835     8%     4.15996s
   lzma-compress    2243172     7%    15.89323s
   base64-decode   34311602   100%     0.18933s
   bz2-decompress  34311602   100%     0.62733s
   lzma-decompress 34311602   100%     0.22761s
   zlib-decompress 34311602   100%     0.07396s

The point of a sample matrix like this is to examine the tradeoff 
between time (for compression and decompression) and to think about 
how often you, your application or your users will decompress the 
historical data.  Also consider exactly how sensitive you are to 
bytes on disk.  (N.B. Data from a single run of the code.)
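
If you want to reproduce a rough version of this matrix on your own data,
something along these lines will do; "sample.log" stands in for whatever
log file you have lying around:

    import bz2, time, zlib

    with open("sample.log", "rb") as f:
        raw = f.read()

    for name, compress in [("zlib", zlib.compress), ("bz2", bz2.compress)]:
        t0 = time.time()
        out = compress(raw)
        print("%-6s %10d bytes  %5.1f%%  %8.5fs"
              % (name, len(out), 100.0 * len(out) / len(raw), time.time() - t0))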

Finally, simply make a choice for one of the compression algorithms.

> 2) And then remove all new lines using binascii.b2a_base64, so I 
> have a log entry per line.

I'd also suggest that you resist the base64 temptation.  As others 
have pointed out, there's a benefit to keeping the logs compressed 
using one of the standard compression tools (zgrep, zcat, bzgrep, 
lzmagrep, xzgrep, etc.)

Also, see the statistics above for proof--base64 encoding is not 
compression.  Rather, it usually expands input data to the tune of 
one third (see above, the base64 encoded string is 135% of the raw 
input).

That's not compression.  So, don't do it.  In this case, it's 
expansion and obfuscation.  If you don't need it, don't choose it.

In short, base64 is actively preventing you from shrinking your 
storage requirement.

> but b2a_base64 is far from ideal: adds lots of bytes to the 
> compressed log entry. So, I wonder if perhaps there is a better 
> way to remove new lines from the zlib output? or maybe a different 
> approach?

Suggestion:  Don't worry about the single-byte newline terminator.  
Look at a whole logfile and choose your best option.

Lastly, I have one other pair of questions for you to consider.

Question one:  Will your application later read or use the logging 
data?  If no, and it is intended only as a record for posterity, 
then, I'd suggest sending that data to the system logs (see the 
'logging' module and talk to your operational people).

If yes, then question two is:  What about resilience?  Suppose your 
application crashes in the middle of writing a (compressed) logfile.  
What does it do?  Does it open the same file?  (My personal answer 
is always 'no.')  Does it open a new file?  When reading the older 
logfiles, how does it know where to resume?  Perhaps you can see my 
line of thinking.

Anyway, best of luck,

-Martin

P.S. The exact compression ratio is dependent on the input.  I have 
  rarely seen zlib at 10% or bz2 at 8%.  I conclude that my sample 
  log data must have been more homogeneous than the data on which I 
  derived my mental bookmarks for textual compression efficiencies 
  of around 15% for zlib and 12% for bz2.  I have no mental bookmark 
  for lzma yet, but 7% is an outrageously good compression ratio.

-- 
Martin A. Brown
http://linux-ip.net/
-- 
https://mail.python.org

Re: system error - api-ms-win-crt-runtime-l1-1-0.dll

2015-10-28 Thread Terry Reedy

On 10/29/2015 1:02 AM, Nagu Koppula wrote:

Hi

Could you help me to resolve below error in my windows 7 laptop?

I had tried re-installing / repair, still  error persists.

Error - screenshot


Copy the relevant parts into your text message.

Ever heard of a search bar?  Google?

This error message is not specific to Python.  Here is one hit.
http://blog.spreendigital.de/2015/09/01/how-to-fix-the-api-ms-win-crt-runtime-l1-1-0-dll-is-missing-error-for-delphi-10-seattle/

Many others.

--
Terry Jan Reedy

--
https://mail.python.org/mailman/listinfo/python-list