New submission from Sworddragon :
If a client is reconnected and receives a new IP address from the provider, the
methods of ftplib can't handle this and hang in an infinite loop. For example,
if a file is transferred with storbinary() and the client gets a new IP address,
the script will never end
Sworddragon added the comment:
If the connection is lost and reconnected but the IP address doesn't change,
storbinary() continues the data transfer. But if the IP address was changed due
to the reconnect, storbinary() hangs in a loop.
I expect either that storbinary() detect
Sworddragon added the comment:
The problem is that, here in Germany for example, it is very common for the
provider to disconnect the client every 24 hours and assign a new IP address
when the router reconnects. This makes it very difficult to send big files with
ftplib.
For example for daily
Sworddragon added the comment:
If I set the timeout argument, an exception is thrown if the IP address is
changed. At least it's a workaround, but we should think about whether Python
shouldn't try to detect changes of the IP address.
It would be nicer to continue the file transfer like
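For illustration, a minimal sketch of the timeout workaround described above; host, credentials and file name are placeholders, not taken from the report:
---
import ftplib

# With a timeout set, a stalled control or data connection raises an
# exception (socket.timeout, a subclass of OSError) instead of hanging.
ftp = None
try:
    ftp = ftplib.FTP('ftp.example.com', 'user', 'password', timeout=60)
    with open('big_file.bin', 'rb') as fp:
        ftp.storbinary('STOR big_file.bin', fp)
except OSError as exc:
    print('Transfer aborted:', exc)
finally:
    if ftp is not None:
        ftp.close()
---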
New submission from Sworddragon:
Positional arguments which have no dest attribute don't replace any '-' with '_'.
The attachments contain an example script which demonstrates this. The output
looks like this:
sworddragon@ubuntu:~$ ./args.py foo
Namespace(foo-bar2='foo'
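For reference, a minimal sketch of the reported behavior (an assumed reconstruction, not the attached script):
---
import argparse

# For a positional argument whose name contains a dash, argparse keeps the
# dash in the Namespace attribute, so plain attribute access is impossible
# and getattr() (or vars()) is needed.
parser = argparse.ArgumentParser()
parser.add_argument('foo-bar')
args = parser.parse_args(['foo'])

print(vars(args))                # {'foo-bar': 'foo'}
print(getattr(args, 'foo-bar'))  # 'foo'
---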
Sworddragon added the comment:
I have found another report about this: http://bugs.python.org/issue15125
--
resolution: -> duplicate
status: open -> closed
Changes by Sworddragon :
--
type: -> behavior
Python tracker <http://bugs.python.org/issue18074>
New submission from Sworddragon:
Currently Python 3 has some problems handling files with an unknown encoding.
In this example we have a file encoded as ISO-8859-1 with the content "ä" which
we try to read. Let's see what Python 3 can currently do here:
1. We can s
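For illustration, a hedged sketch of the usual options for such a file; the test file is created here so the example is self-contained:
---
# Create the ISO-8859-1 test file described above.
with open('test.txt', 'wb') as f:
    f.write('ä'.encode('iso-8859-1'))

# 1. Guess an encoding and handle the failure:
try:
    with open('test.txt', encoding='utf-8') as f:
        print(f.read())
except UnicodeDecodeError:
    print('not UTF-8')

# 2. Use an error handler so reading never fails (data may be mangled):
with open('test.txt', encoding='utf-8', errors='replace') as f:
    print(f.read())        # prints the replacement character

# 3. Read the raw bytes and decode later, once the encoding is known:
with open('test.txt', 'rb') as f:
    print(f.read().decode('iso-8859-1'))   # 'ä'
---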
New submission from Sworddragon:
When configuring a logger with logging.basicConfig() and using
logging.exception(), the traceback is always written implicitly at the end. This
makes it impossible to create a format that writes something after the
traceback. For example it could be
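For illustration, a minimal sketch of the behavior being described (the format string and message are made up):
---
import logging

# logging.exception() appends the traceback after the formatted record, so
# nothing produced by the format string can appear below the traceback.
logging.basicConfig(format='%(levelname)s: %(message)s | trailing text')

try:
    1 / 0
except ZeroDivisionError:
    logging.exception('division failed')
    # Output: "ERROR: division failed | trailing text", then the traceback;
    # the "| trailing text" part cannot be moved after the traceback.
---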
New submission from Sworddragon:
For logging.exception() and similar variants the msg argument must be passed,
but in a format string the LogRecord "message" is not mandatory. In this case
wouldn't it be better to make the msg argument optional? By default it co
New submission from Sworddragon:
In a try/except block, if an exception is raised (for example KeyboardInterrupt),
the except block could cause another exception, and if this block tries to catch
it too, the nested except block could cause another exception again. This goes
into an unlimited
Changes by Sworddragon :
Added file: http://bugs.python.org/file31470/race_condition_slow.py
Python tracker <http://bugs.python.org/issue18836>
Sworddragon added the comment:
> Are you saying that if the user keeps hitting ctrl+c you would need an
> endless chain of nested try/except in order to catch them all?
Correct. For example if I want to show the user the message "Aborted" instead
of a huge exception if
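For illustration, a sketch of the scenario being described; the work loop is a stand-in:
---
import sys
import time

# A second Ctrl+C that arrives while the first handler is still running
# escapes that handler, so the user sees a traceback despite the try/except.
try:
    try:
        while True:            # stand-in for the real work
            time.sleep(1)
    except KeyboardInterrupt:
        print('Aborted')       # a Ctrl+C right here raises again
        sys.exit(1)
except KeyboardInterrupt:      # every extra level only narrows the window
    sys.exit(1)
---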
Sworddragon added the comment:
The problem is simple: It is not possible to catch every exception in an
application. Even if you try to print a message and exit on an exception it is
always possible that the user will see a traceback.
Sworddragon added the comment:
> but what if there is a bug in your code?
Bugs in a Python application can be fixed by the user, while a specific behavior
of the interpreter can't.
Maybe you are also thinking in the wrong direction. Nobody wants a solution
that traps the user forev
Sworddragon added the comment:
> Unless I'm completely misunderstanding (which I don't think I am), this is
> not a race condition, it is how the language is designed to operate.
If it is intended not to be able to catch all exceptions and prevent a
traceback being showe
Sworddragon added the comment:
> You may want to have a look at sys.excepthook.
This would not solve the race condition.
New submission from Sworddragon :
For file objects the read() function has an optional size argument to limit
the data that will be read. I'm wondering why there isn't such an argument for
readline(). Theoretically lines in a file could have millions of characters and
even muc
New submission from Sworddragon:
I have made some tests with encoding/decoding in conjunction with
unicode-escape and got some strange results:
>>> print('ä')
ä
>>> print('ä'.encode('utf-8'))
b'\xc3\xa4'
>>> print('ä
Sworddragon added the comment:
The documentation says that unicode_internal is deprecated since Python 3.3, but
not unicode_escape. Also, isn't unicode_escape different from utf-8? For
example, my original intention was to convert 2-byte string characters to their
control characters
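For reference, a quick check of how unicode_escape differs from UTF-8 (it treats bytes outside escape sequences as Latin-1):
---
print('ä'.encode('unicode_escape'))            # b'\\xe4'
print(b'\\xe4'.decode('unicode_escape'))       # 'ä'  (decoded escape sequence)
print(b'\xe4'.decode('unicode_escape'))        # 'ä'  (raw byte, read as Latin-1)
print('ä'.encode('utf-8').decode('unicode_escape'))  # 'Ã¤' (mojibake)
---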
New submission from Sworddragon:
tarfile.open() optionally supports a compression method in the mode argument in
the form 'filemode[:compression]', but tarfile.TarFile() only supports 'a', 'r'
and 'w'. Is there a special reason that tarfil
Sworddragon added the comment:
The TarFile class provides more options. Alternatively a file object could be
used but this means additional code (and maybe IO overhead).
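For illustration, a hedged sketch of the file-object alternative mentioned above; file names are placeholders:
---
import gzip
import tarfile

open('example.txt', 'w').close()    # placeholder input so the sketch runs

# TarFile itself only knows 'r'/'a'/'w'; the compression comes from wrapping
# a GzipFile and passing it as fileobj.
with gzip.GzipFile('archive.tar.gz', mode='wb', compresslevel=9) as gz:
    with tarfile.TarFile(fileobj=gz, mode='w') as tar:
        tar.add('example.txt')

# The shortcut form does the same wrapping internally:
# tarfile.open('archive.tar.gz', 'w:gz', compresslevel=9)
---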
Sworddragon added the comment:
Interesting, after reading the documentation again I would now assume that is
what **kwargs is for.
Changes by Sworddragon :
--
type: -> enhancement
Python tracker <http://bugs.python.org/issue21404>
New submission from Sworddragon:
The tarfile/zipfile libraries don't seem to provide a direct way to specify
the compression level. I have now ported my code from subprocess to
tarfile/zipfile to achieve platform independence but would be happy if I could
also control the compression
Sworddragon added the comment:
Could it be that compress_level is not documented?
Python tracker <http://bugs.python.org/issue21404>
Sworddragon added the comment:
Then this one is easy: the documentation just needs an update. But then there
is still zipfile, which doesn't provide (or at least document) a compression
level.
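For reference, a hedged sketch of setting the compression level; tarfile forwards compresslevel to the compressor, and zipfile gained its own compresslevel parameter later (Python 3.7+, to the best of my knowledge). File names are placeholders:
---
import tarfile
import zipfile

open('example.txt', 'w').close()    # placeholder input file

with tarfile.open('archive.tar.gz', 'w:gz', compresslevel=9) as tar:
    tar.add('example.txt')

# On older Python versions only the compression *method* can be chosen here.
with zipfile.ZipFile('archive.zip', 'w',
                     compression=zipfile.ZIP_DEFLATED,
                     compresslevel=9) as zf:
    zf.write('example.txt')
---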
New submission from Sworddragon:
This is a fork from this ticket: http://bugs.python.org/issue21404
tarfile has a compression level and now seems to be getting the missing
documentation for it. But a compression level is still missing for zipfile.
--
components: Library (Lib)
messages
Changes by Sworddragon :
--
type: -> enhancement
Python tracker <http://bugs.python.org/issue21417>
Sworddragon added the comment:
Sure, here is the new ticket: http://bugs.python.org/issue21417
Python tracker <http://bugs.python.org/issue21404>
New submission from Sworddragon:
The mode 'br' on open() can cause an exception with the following message:
"ValueError: mode string must begin with one of 'r', 'w', 'a' or 'U', not
'br'". Curiously, most times the mode
Sworddragon added the comment:
> Is this issue just for 2.7? 3.3 was selected as the affected version, but
> the error message text seems limited to 2.7
You have given me a good hint. My script is running on python3 with the shebang
line "#!/usr/bin/python3 -OOtt". But it ma
New submission from Sworddragon:
On my system (Linux 64-bit) I figured out that Python 3 needs a little more
memory than Python 2 and is a little bit slower. Here are some examples:
sworddragon@ubuntu:~$ execution-time 'python2 -c print\("0"\)'
0.21738
sworddragon@ubun
New submission from Sworddragon:
If a command gets too long, os.system() will return 32512. As I have figured out
from Google, this normally happens if the command can't be found. The
attachments contain an example command which will fail on os.system() (it was
generated as test d
Sworddragon added the comment:
I have figured out that system() in C can only take up to 65533 arguments after
a command (so it is a 16-bit issue). Giving one more argument will result in
the return code 32512 (which implies the exit code 127).
--
resolution: -> invalid
status: o
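For reference, a quick check of the relationship between that return value and the exit code (POSIX only; the command name is a deliberately non-existent placeholder):
---
import os

# os.system() returns a wait status on POSIX; 32512 is exit code 127
# ("command not found") shifted into the high byte.
status = os.system('definitely-not-a-real-command 2>/dev/null')
print(status)                              # typically 32512
if os.WIFEXITED(status):
    print(os.WEXITSTATUS(status))          # 127
print(127 << 8)                            # 32512
---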
New submission from Sworddragon:
Python 2 provided this command line option:
"-t Issue a warning when a source file mixes tabs and spaces for
indentation in a way that makes it depend on the worth of a tab expressed in
spaces. Issue an error when the option is given twice.&q
Sworddragon added the comment:
Thanks for the example, this is what I had in mind. Python 3 does also still
provide the -t option (I'm assuming for compatibility reasons), but python3 -h
and the manpage don't mention it.
--
New submission from Sworddragon:
The documentation says that -OO removes docstrings, so applications should be
aware of it. But there is also a case where a validly declared docstring isn't
accessible anymore if -O is given. First the testcase:
test1.py:
import test2
def
New submission from Sworddragon:
The force option of compileall seems not to rebuild the bytecode files if
they already exist. Here is an example of 2 calls:
root@ubuntu:~# python3 -m compileall -f
Skipping current directory
Listing '/usr/lib/python3.3'...
Compiling '/us
New submission from Sworddragon:
Using -OO on a script will remove the __doc__ attributes, but the docstrings
will still be in the process memory. The attachments contain an example script
which demonstrates this with a docstring of ~10 MiB (opening the file in an
editor may take some time
Sworddragon added the comment:
> Do realize this is a one-time memory cost, though, because next execution
> will load from the .pyo and thus will never load the docstring into memory.
Except in 2 cases:
- The bytecode was previously generated with -O.
- The bytecode couldn't be w
New submission from Sworddragon:
All functions of compileall provide a maxlevels argument which defaults
to 10. But it is currently not possible to disable this recursion limitation.
Maybe it would be useful to have a special value like -1 to disable this
limitation and allow compiling
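For illustration, a hedged workaround for the limitation described above; the directory path is a placeholder:
---
import compileall
import sys

# maxlevels is only a recursion counter, so passing a very large value
# effectively disables the limit.
compileall.compile_dir('src', maxlevels=sys.maxsize, quiet=1)
---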
New submission from Sworddragon:
Currently, when calling one of the compileall functions, it is not possible to
pass the optimization level as an argument. The bytecode will be created
depending on the optimization level of the current script instance. But if a
script wants to compile .pyc files for
New submission from Sworddragon:
Currently the documentation sometimes mentions specific exceptions, but most of
the time it doesn't. As I'm often catching exceptions to ensure high stability,
this gets a little difficult. For example, print() can trigger a BrokenPipeError
and most file func
Sworddragon added the comment:
I'm fine with this decision as it would be a lot of work. But this also means
programming with Python can't be considered for high-stability applications,
due to the lack of important information in the documentation.
An alternate way would be to rel
Sworddragon added the comment:
Correct, but the second part of my last message was just my opinion that I
would prefer error codes over exceptions, because they already imply complete
documentation for this part through return codes/error arguments/other potential
ways
Sworddragon added the comment:
> Hi. Since Python 3.2, compileall functions supports the optimization level
> through the `optimize` parameter.
> There is no command-line option to control the optimization level used by the
> compile() function, because the Python interpreter it
Sworddragon added the comment:
After checking it: Yes it does, thanks for the hint. In this case I'm closing
this ticket now.
--
resolution: -> invalid
status: open -> closed
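For reference, a hedged sketch of the optimize parameter mentioned above (available since Python 3.2); the paths are placeholders:
---
import compileall

# The bytecode is written with the requested optimization level regardless
# of how the calling interpreter itself was started.
compileall.compile_dir('src', optimize=2, quiet=1)   # like running with -OO
# compileall.compile_file('module.py', optimize=1)   # per-file variant
---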
New submission from Sworddragon:
The attachments contain a testcase which concatenates a string 10 times
and then a bytes object 10 times. Here is my result:
sworddragon@ubuntu:~/tmp$ ./test.py
String: 0.03165316581726074
Bytes : 0.5805566310882568
--
components: Benchmarks
New submission from Sworddragon:
socket(7) does contain SO_PRIORITY but trying to use this value will result in
this error: AttributeError: 'module' object has no attribute 'SO_PRIORITY'
--
components: Library (Lib)
messages: 204506
nosy: Sworddragon
priority: nor
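For illustration, a hedged workaround for the missing constant: setsockopt() also accepts the raw option number. The value 12 is an assumption taken from <asm-generic/socket.h> and should be checked against the local headers:
---
import socket

SO_PRIORITY = getattr(socket, 'SO_PRIORITY', 12)   # 12 on most Linux ports

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, SO_PRIORITY, 6)      # bands 0-6 need no privilege
print(sock.getsockopt(socket.SOL_SOCKET, SO_PRIORITY))  # 6
sock.close()
---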
New submission from Sworddragon:
It seems that print() and write() (and maybe other such I/O functions) are
relying on sys.getfilesystemencoding(). But these functions are not operating
on filenames but on their content. The attachments contain an example script
which demonstrates this
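For illustration, a hedged sketch of how the content encoding can be pinned explicitly instead of being inherited from the interpreter's default:
---
import io
import sys

# Files: pass the encoding for the *content* explicitly.
with open('out.txt', 'w', encoding='utf-8') as f:
    f.write('ä\n')

# Already-open streams such as stdout: re-wrap the underlying binary buffer.
utf8_stdout = io.TextIOWrapper(sys.stdout.buffer, encoding='utf-8')
utf8_stdout.write('ä\n')
utf8_stdout.flush()
---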
New submission from Sworddragon:
The documentation of sys.getfilesystemencoding() says for Unix: On Unix, the
encoding is the user's preference according to the result of
nl_langinfo(CODESET), or 'utf-8' if nl_langinfo(CODESET) failed.
In my opinion relying on the locale environment is risky since
Sworddragon added the comment:
It is nice that you could fix the documentation due to this report, but this
was just a side effect, so closing this report and moving it to "Documentation"
was maybe wrong.
Sworddragon added the comment:
> This idea was already proposed in issue #8622, but it was a big fail.
Not completely: if your locale is UTF-8 and you want to operate on a UTF-8
filesystem, all is fine. But what if you then want to operate on an NTFS
(non-UTF-8) partition? As far as I know there
Sworddragon added the comment:
I have extended the benchmark a little and here are my new results:
concatenate_string() : 0.037489
concatenate_bytes(): 2.920202
concatenate_bytearray(): 0.157311
concatenate_string_io(): 0.035397
concatenate_bytes_io
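For reference, a hedged reconstruction of roughly what such a benchmark looks like; the iteration count and exact variants are assumptions, not the attached testcase:
---
import io
import timeit

N = 50000   # assumed iteration count

def concatenate_string():
    s = ''
    for _ in range(N):
        s += 'x'            # benefits from CPython's in-place str resize

def concatenate_bytes():
    b = b''
    for _ in range(N):
        b += b'x'           # generally copies the whole buffer each time

def concatenate_bytearray():
    b = bytearray()
    for _ in range(N):
        b += b'x'           # mutable, resized in place

def concatenate_bytes_io():
    buf = io.BytesIO()
    for _ in range(N):
        buf.write(b'x')

for func in (concatenate_string, concatenate_bytes,
             concatenate_bytearray, concatenate_bytes_io):
    print(func.__name__, timeit.timeit(func, number=1))
---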
Sworddragon added the comment:
> We aren't going to add the optimization shortcut for bytes
There is still the question: Why isn't this going to be optimized?
Sworddragon added the comment:
Using an environment variable is not the holy grail for this. When writing a
non-single-user application you can't expect the user to set extra environment
variables.
If compatibility is the only reason, in my opinion it would be much better to
include some
Sworddragon added the comment:
You should keep things simpler:
- Python and the operating system/filesystem are in a client-server
relationship and Python should validate everything.
- It doesn't matter what you finally decide to be the default encoding in
various places: all will pr
Sworddragon added the comment:
> I'm closing the issue as invalid, because Python 3 behaviour is correct and
> must not be changed.
The fact that write() uses sys.getfilesystemencoding() is either a defect or a
bad design (I leave the decision to you).
But I'm still mi
Sworddragon added the comment:
> If the environment variable is not enough
There is a big difference between environment variables and internal calls:
Environment variables are user-space while builtin/library functions are
developer-space.
> I have good news for you. write() does n
New submission from Sworddragon:
If I'm receiving data from a socket (several bytes) and making the first call
to socket.recv(1), all is fine, but the second call won't get any further data.
However, doing this again with socket.recv(2) instead will successfully get the
2 bytes. Here is
Sworddragon added the comment:
> and if you try to receive less bytes than the datagram size, the rest will be
> discarded, like UDP.
I'm wondering how it would then be possible to fetch packets of an unknown size
without using an extremely
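For illustration, a hedged sketch of handling datagrams of unknown size: the whole packet has to be requested in one recv(), and MSG_PEEK allows looking at it without consuming it:
---
import socket

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(('127.0.0.1', 0))
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b'hello', receiver.getsockname())

# Each recv() consumes one whole datagram, so the buffer must be at least as
# large as the biggest expected packet (65535 covers any UDP payload).
peeked = receiver.recv(65535, socket.MSG_PEEK)   # inspect without consuming
data = receiver.recv(65535)                      # the actual read
print(peeked, data)                              # b'hello' b'hello'

sender.close()
receiver.close()
---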
Sworddragon added the comment:
> It is too late to change the unicode-escape encoding.
So it will stay at ISO-8859-1? If yes, I think this ticket can be closed as
"won't fix".
--
status: pending -> open
Sworddragon added the comment:
I have retested this with the correctly linked version and it is working fine
now, so I'm closing this ticket.
--
resolution: -> not a bug
status: open -> closed
New submission from Sworddragon:
When sending something to stdin of a process that was called with subprocess
(for example diff), I have figured out that everything works fine if stdin is
closed, but only flushing stdin causes a hang (the same as if nothing had been
done). In the attachments is a
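For illustration, a hedged sketch of the situation; the second input file is a placeholder:
---
import subprocess

# diff compares whole inputs, so it only produces output once stdin reaches
# EOF. Flushing keeps the pipe open; closing it ends the apparent hang.
proc = subprocess.Popen(['diff', '-', '/etc/hostname'],
                        stdin=subprocess.PIPE, stdout=subprocess.PIPE)
proc.stdin.write(b'some text\n')
proc.stdin.flush()        # not enough on its own: diff still waits for EOF
proc.stdin.close()        # EOF -> diff runs and closes its stdout
print(proc.stdout.read().decode())
proc.wait()

# proc.communicate(b'some text\n') handles the write/close/read dance itself.
---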
New submission from Sworddragon:
The application apt-get on Linux scales its output depending on the size of
the terminal, but I have noticed that there are differences between calling
apt-get directly and calling it with a subprocess without shell and
creationflags set (so that creationflags shou
New submission from Sworddragon:
When reading the output of an application (for example "apt-get download
firefox") that dynamically changes a line (possibly with the terminal control
character \r), I have noticed that read(1) does not read the output until it
has finished with a new
Changes by Sworddragon :
Removed file: http://bugs.python.org/file36661/test.py
Python tracker <http://bugs.python.org/issue22443>
Sworddragon added the comment:
Edit: Updated the testcase as I forgot to flush the output (in case somebody
points it out).
--
Added file: http://bugs.python.org/file36662/test.py
Sworddragon added the comment:
> The buffering of stdout and/or stderr of your application probably
> changes if the application runs in a terminal (TTY) or if the output is
> redirected to a pipe (not a TTY). Set the setvbuf() function.
This means in the worst case there is cur
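For reference, a hedged alternative to the LD_PRELOAD/stdbuf approaches discussed here: running the child on a pseudo-terminal keeps its interactive (line-buffered) output. The command is a stand-in, not apt-get:
---
import os
import pty
import subprocess

master, slave = pty.openpty()
proc = subprocess.Popen(['ping', '-c', '3', '127.0.0.1'],
                        stdout=slave, stderr=slave)
os.close(slave)

try:
    while True:
        chunk = os.read(master, 1024)    # returns as soon as data arrives
        if not chunk:
            break
        print(chunk.decode(errors='replace'), end='', flush=True)
except OSError:
    pass          # Linux raises EIO on the master once the child exits
proc.wait()
os.close(master)
---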
Sworddragon added the comment:
> You don't need to compile Python. Just compile nobuffer.c to
> libnobuffer.so. See the "documentation" in nobuffer.c.
Strictly following the documentation does not work:
sworddragon@ubuntu:~/tmp$ gcc -shared -o nobuffer.so int
Sworddragon added the comment:
Why must stdin of the subprocess be closed so that a read() on stdout can
return?
Sworddragon added the comment:
But this also happens on read(1). I'm not even getting partial output.
1. I'm calling diff in a way where it expects input to compare.
2. I'm writing and flushing to diff's stdin.
3. diff seems to not get this content unt
Sworddragon added the comment:
Ah, now I see it. Thanks for your hint.
Python tracker <http://bugs.python.org/issue22439>
Sworddragon added the comment:
I was able to compile the library but after executing
"LD_PRELOAD=./libnobuffer.so ./test.py" I'm seeing no difference. The unflushed
output is still not being read with read(1).
Sworddragon added the comment:
"stdbuf -o 0 ./test.py" and "unbuffer ./test.py" doesn't change the result too.
Or is something wrong with my testcase?
Changes by Sworddragon :
Removed file: http://bugs.python.org/file36660/test.py
Python tracker <http://bugs.python.org/issue22441>
Changes by Sworddragon :
Added file: http://bugs.python.org/file36667/test.py
Python tracker <http://bugs.python.org/issue22441>
Sworddragon added the comment:
Edit: Updated testcase as it contained an unneeded argument from an older
testcase (in case it confuses somebody).
Sworddragon added the comment:
It works if "-q 0" is given without the need of a workaround. So this was just
a feature of apt that was causing this behavior. I think here is nothing more
to do so I'm closing this ticket.
--
resolution: -> not a bug
stat
New submission from Sworddragon:
There is currently shlex.split(), which is for example useful to split a command
string and pass it to subprocess.Popen with shell=False. But I'm missing a
function that does the opposite: building the command string from a list that
could for example th
Sworddragon added the comment:
Yes, it is possible to do this with a few other commands. But I think it would
still be a nice enhancement to have a direct function for it.
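For reference, a hedged sketch of the inverse operation built from existing pieces (Python 3.8 later added exactly this helper as shlex.join()):
---
import shlex

args = ['ls', '-l', 'file with spaces', "it's"]

# The inverse of shlex.split(), safe to pass to a shell:
command = ' '.join(shlex.quote(a) for a in args)
print(command)                        # ls -l 'file with spaces' 'it'"'"'s'
print(shlex.split(command) == args)   # True
---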
New submission from Sworddragon:
From the documentation: "The '*', '+', and '?' qualifiers are all greedy;"
But this is not the case for '?'. In the attachments is an example which shows
this: re.search(r'1?', '01') shou
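For reference, a quick check of the behavior in question (an assumed reconstruction, not the attached example):
---
import re

# '?' is still greedy, but re.search() tries positions from left to right:
# at index 0 the optional '1' cannot match, so a zero-width match is
# returned there and the '1' at index 1 is never reached.
m = re.search(r'1?', '01')
print(m.span(), repr(m.group()))     # (0, 0) ''

print(re.findall(r'1?', '01'))       # ['', '1', '']
print(re.search(r'1', '01').span())  # (1, 2)
---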
Sworddragon added the comment:
>> The fact that write() uses sys.getfilesystemencoding() is either
>> a defect or a bad design (I leave the decision to you).
> I have good news for you. write() does not call sys.getfilesystemencoding(),
> because the encoding is set at the
Sworddragon added the comment:
> Instead, open() determines the default encoding by calling the same function
> that's used to initialize Py_FileSystemDefaultEncoding: get_locale_encoding()
> in Python/pythonrun.c. Which on POSIX systems calls the POSIX function
> nl_langinf
Sworddragon added the comment:
By the way, I have found a valid use case for LANG=C. udev and Upstart are not
setting LANG, which will result in the ASCII encoding for invoked Python
scripts. This could be a problem since these applications commonly deal
with non-ASCII filesystems
Sworddragon added the comment:
> https://bugs.launchpad.net/ubuntu/+source/upstart/+bug/1235483
After opening many hundreds of tickets I would say: with luck this ticket will
get a response within the next year. But in the worst case it will simply be
refused.
> I found examples using
Sworddragon added the comment:
What would happen if we call this example script with LANG=C once the patch is
applied?
---
import os
for name in sorted(os.listdir('ä')):
    print(name)
---
Would it throw an exception on os.listdir('ä')?
New submission from Sworddragon:
With Python 3.4.0 RC1, using the command "unoconv -o test.pdf test.odt" gives
me a segmentation fault. The attachments contain the LibreOffice document used
and a GDB backtrace. The used version of unoconv was 0.6-6 from Ubuntu
14.04
Changes by Sworddragon :
Added file: http://bugs.python.org/file34207/test.odt
Python tracker <http://bugs.python.org/issue20756>
Sworddragon added the comment:
> Was it rebuilt linked against Python 3.4, instead of Python 3.3?
I don't know. Isn't ../Python/pystate.c, which throws the error, part of Python?
New submission from Sworddragon:
The following was tested on Linux. The attachments contain the example code and
here is my output:
sworddragon@ubuntu:/tmp$ ./test.py
1
I'm deleting the list of directories on every recursion and skipping if I'm
directly in /proc (which is the
Sworddragon added the comment:
It sounds to me like "del dir_list" only deletes the copied list while
"del dir_list[:]" accesses the reference and deletes this list. If I'm not
wrong with this assumption, I think you meant dir_list instead of root_dir
in y
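For illustration, a hedged sketch of the in-place idiom being discussed; the walked path and the pruned directory name are placeholders:
---
import os

# os.walk() keeps iterating over the very list object it yields as `dirs`,
# so pruning has to modify that list in place. `del dirs` would only unbind
# the local name and the walk would be unaffected.
for root, dirs, files in os.walk('/tmp'):
    if 'skipme' in dirs:
        dirs.remove('skipme')   # in-place change: the subtree is skipped
    # del dirs[:]  would prune every subdirectory of `root`
    print(root)
---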
New submission from Sworddragon:
I have noticed that since Python 3.4 the interactive mode logs all commands
to ~/.python_history. This caused me to switch into "normal user mode" and look
for a solution. With Google I have found the related entry in the documentation:
On sy
New submission from Sworddragon :
In Python 3.1.2, sys.argv doesn't contain any arguments except the file name in
sys.argv[0]. For example: build.py test
sys.argv[1] will be empty. I even tried the first example from the
documentation, section 15.4 (optparse), but the filename is None. In Python
2.6.5 everything is working
Sworddragon added the comment:
Example script test.py:
import sys
print(sys.argv[1])
Call this script now with an argument, for example: test.py 1234
I expect to see the string 1234 in the console but Python 3 says "IndexError:
list index out of range". With Python 2.6.5 I am able
Sworddragon added the comment:
I'm using Windows XP Professional SP3. I downloaded Python 3.1.2 from this
site. Even Python 3.0.1 didn't work.
--
components: +Library (Lib) -Interpreter Core
Sworddragon added the comment:
I have already installed Python 3.1.2 a second time. During the installation I
selected that the files should be compiled into bytecode.
--
components: +Library (Lib) -Interpreter Core
Sworddragon added the comment:
assoc .py
.py=Python.File
I tried this now with Ubuntu and Python 3.1.2 and everything works fine. But
under Windows XP it doesn't work.
Sworddragon added the comment:
ftype Python.File
Python.File="E:\Python31\python.exe" "%1" %*
Python tracker <http://bugs.python.org/issue8984>