Tony Rice added the comment:
I would argue that PEP 20 should win over backward compatibility here; in addition to
the points I hinted at above:
practicality beats purity.
--
___
Python tracker
<https://bugs.python.org/issue12
Tony Rice added the comment:
This enhancement request should be reconsidered.
Yes, it is the documented behavior, but that doesn't mean it's the right
behavior. Functions should work as expected not just in the context of the
module they are implemented in but in the context of t
New submission from Tony Rice :
datetime.datetime.utcnow()
returns a timezone-naive datetime; this is counter-intuitive, since you are
logically dealing with a known timezone. I suspect this was implemented this
way for fidelity with the rest of datetime.datetime (which returns timezone
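A minimal sketch of the report's point, contrasted with the timezone-aware spelling the datetime module also offers:

```python
from datetime import datetime, timezone

# utcnow() returns the right UTC wall-clock numbers but a *naive* object:
naive = datetime.utcnow()
print(naive.tzinfo)            # None -- no timezone attached

# An aware alternative that carries UTC explicitly:
aware = datetime.now(timezone.utc)
print(aware.tzinfo)            # UTC
```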
Tony Zhou added the comment:
ok i see, I found the pdf. thank you for that anyway
New submission from Tony Zhou :
3.10.0 Documentation » The Python Tutorial » 15. Floating Point Arithmetic:
Issues and Limitations
The link "The Perils of Floating Point" takes the user to https://www.hmbags.tw/
I don't think this is right. Please check.
New submission from Tony :
On the >>> prompt, type:
>>> 717161 * 0.01
7171.610000000001
The same goes for:
>>> 717161.0 * 0.01
7171.610000000001
You can easily find more numbers with a similar problem:
for i in range(100):
    if len(str(i * 0.01)) > 12:
        print(i, i * 0.01)
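This is expected binary floating-point behaviour rather than a bug: 0.01 has no exact double representation, and repr() prints just enough digits to round-trip the stored value. A short sketch:

```python
from decimal import Decimal

# Decimal(float) shows the exact binary value actually stored for 0.01:
print(Decimal(0.01))    # 0.01000000000000000020816681711721685...

# The same representation error is behind classics like:
print(0.1 + 0.2)        # 0.30000000000000004

# repr() emits the shortest decimal string that round-trips the stored double:
assert float(repr(717161 * 0.01)) == 717161 * 0.01
```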
Tony Martin Berbel added the comment:
My system crashed completely. I reinstalled Ubuntu. Sorry I couldn't help
more ... :(
Tony Martin Berbel added the comment:
I found lastlog and attached it !
--
Added file: https://bugs.python.org/file49800/lastlog
Tony Martin Berbel added the comment:
I had the same error.
I ran the make test command with >&log,
but I don't know where to look for the log file.
--
nosy: +wingarmac
Tony Lykke added the comment:
Sorry, there's a typo in my last comment.
--store --foo a
Namespace(foo=['a', 'b', 'c'])
from the first set of examples should have been
--store --foo c
Tony Lykke added the comment:
Perhaps the example I added to the docs isn't clear enough and should be
changed because you're right, that specific one can be served by store_const.
Turns out coming up with examples that are minimal but not too contrived is
hard! Let me try ag
Change by Tony Lykke :
--
keywords: +patch
pull_requests: +23269
stage: -> patch review
pull_request: https://github.com/python/cpython/pull/24478
New submission from Tony Lykke :
I submitted this to the python-ideas mailing list early last year:
https://mail.python.org/archives/list/python-id...@python.org/thread/7ZHY7HFFQHIX3YWWCIJTNB4DRG2NQDOV/.
Recently I had some time to implement it (it actually turned out to be pretty
trivial
Tony Ladd added the comment:
Dennis
Thanks for the explanation. Sorry to post a fake report. Python is relentlessly
logical but sometimes confusing.
New submission from Tony Ladd :
The expression "1 and 2" evaluates to 2. Actually, for most combinations of data
type it returns the second object. Of course it's a senseless construction (a
beginning student made it), but why no exception?
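For reference, this is documented short-circuit behaviour rather than a bug: `and` and `or` return one of their operands unchanged, so no exception is raised. A quick sketch:

```python
# `x and y` evaluates x first; if x is falsy the result is x (y never runs),
# otherwise the result is y -- no conversion to bool, no error.
print(1 and 2)           # 2
print(0 and 2)           # 0
print(1 or 2)            # 1
print([] or "fallback")  # fallback -- a common idiom for defaults
```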
--
components: Interpreter Core
Tony Albers added the comment:
No no no, please don't.
Apart from FreeBSD, illumos distros are the only really hard-core UNIX OSes
still freely available, considering the features.
SMF, dtrace and several hypervisor types make illumos really stand out.
I understand that
Change by Tony Wu :
--
nosy: +ghaering
New submission from Tony Wu :
Supplying a sequence of sqlite3.Row objects to sqlite3.Connection.executemany
will cause the Row objects to be interpreted as Sequences instead of Mappings
even if the statement to be executed uses named parameter substitution.
That is, values in the Rows are
New submission from Tony DiLoreto :
The following code does not work on many OSX installations of Python via
homebrew:
>>> import webbrowser
>>> webbrowser.open("http://www.google.com")
And throws the following error stack trace:
File
"/us
Tony added the comment:
bump
Tony added the comment:
bump
Tony added the comment:
bump
--
title: Convert StreamReaderProtocol to a BufferedProtocol -> Add a
StreamReaderBufferedProtocol
Tony Reix added the comment:
Hi Stefan,
In your message https://bugs.python.org/issue41540#msg375462 , you said:
"However, instead of freezing the machine, the process gets a proper SIGKILL
almost instantly."
That's probably due to a very small size of the Paging Space of
Tony Reix added the comment:
I forgot to say that this behavior was not present in stable version 3.8.5 .
Sorry.
On 2 machines AIX 7.2, testing Python 3.8.5 with:
+ cd /opt/freeware/src/packages/BUILD/Python-3.8.5
+ ulimit -d unlimited
+ ulimit -m unlimited
+ ulimit -s unlimited
+ export
Tony Reix added the comment:
Is it a 64-bit AIX? Yes: AIX has been 64-bit by default for ages, but it
runs 32-bit applications as well as 64-bit applications.
The experiments were done with 64-bit Python executables on both AIX and Linux.
The AIX machine has 16GB Memory and 16GB Paging
Tony Reix added the comment:
Hi Pablo,
I'm only surprised that the maximum size generated in the test is always lower
than the PY_SSIZE_T_MAX. And this appears both on AIX and on Linux, which both
compute the same values.
On AIX, it appears (I've just discovered this now) that mal
Tony Reix added the comment:
Some more explanations.
On AIX, the memory is controlled by the ulimit command.
"Global memory" comprises the physical memory and the paging space, associated
with the Data Segment.
By default, both Memory and Data Segment are limited:
# ulimit -a
dat
New submission from Tony Reix :
Python master of 2020/08/11
Test test_maxcontext_exact_arith (test.test_decimal.CWhitebox) checks that
Python correctly handles a case where an object of size 421052631578947376 is
created.
maxcontext = Context(prec=C.MAX_PREC, Emin=C.MIN_EMIN, Emax
Change by Tony :
--
keywords: +patch
pull_requests: +20974
stage: -> patch review
pull_request: https://github.com/python/cpython/pull/21847
New submission from Tony :
When calling a function a stack is allocated via va_build_stack.
There is a leak that happens if do_mkstack fails in it.
--
messages: 375267
nosy: tontinton
priority: normal
severity: normal
status: open
title: Bugfix: va_build_stack leaks the stack if
Tony Reix added the comment:
I do agree that the example with memchr is not correct.
About your suggestion, I've done it. With 32. And that works fine.
All 3 values are passed by value.
# cat Pb-3.8.5.py
#!/usr/bin/env python3
from ctypes import *
mine = CDLL('./MemchrAr
Tony Reix added the comment:
After more investigations, we (Damien and I) think that there are several
issues in Python 3.8.5 :
1) Documentation.
a) AFAIK, the only place in the Python ctypes documentation where it talks
about how arrays in a structure are managed appears at:
https
Tony Reix added the comment:
Fedora32/x86_64 : Python v3.8.5 : optimized : uint type.
If, instead of using ulong type, the Pb.py program makes use of uint, the issue
is different: see below.
This means that the issue depends on the length of the data.
BUILD=optimized
TYPE=int
export
Change by Tony Reix :
--
versions: +Python 3.8 -Python 3.7
Tony Reix added the comment:
Fedora32/x86_64 : Python v3.8.5 has been built.
Issue is still there, but different in debug or optimized mode.
Thus, change done in https://bugs.python.org/issue22273 did not fix this issue.
./Pb-3.8.5-debug.py :
#!/opt/freeware/src/packages/BUILD/Python-3.8.5
Tony Reix added the comment:
After adding traces and after rebuilding Python and libffi with -O0 -g -gdwarf,
it appears that, still in 64bit, the bug is still there, but that ffi_call_AIX
is called now instead of ffi_call_DARWIN from ffi_call() routine of
../src/powerpc/ffi_darwin.c (lines
Tony Reix added the comment:
On AIX 7.2, with libffi compiled with -O0 -g, I have:
1) Call to memchr thru memchr_args_hack
#0 0x091b0d60 in memchr () from /usr/lib/libc.a(shr_64.o)
#1 0x0900058487a0 in ffi_call_DARWIN () from
/opt/freeware/lib/libffi.a(libffi.so.6)
#2
Tony Reix added the comment:
# pwd
/opt/freeware/src/packages/BUILD/libffi-3.2.1
# grep -R ffi_closure_ASM *
powerpc-ibm-aix7.2.0.0/.libs/libffi.exp: ffi_closure_ASM
powerpc-ibm-aix7.2.0.0/include/ffitarget.h:void * code_pointer; /*
Pointer to ffi_closure_ASM */
src/powerpc
Tony Reix added the comment:
AIX: difference between 32bit and 64bit.
After the second print, the stack is:
32bit:
#0 0xd01407e0 in memchr () from /usr/lib/libc.a(shr.o)
#1 0xd438f480 in ffi_call_AIX () from /opt/freeware/lib/libffi.a(libffi.so.6)
#2 0xd438effc in ffi_call () from /opt
Tony Reix added the comment:
On AIX in 32bit, we have:
Thread 2 hit Breakpoint 2, 0xd01407e0 in memchr () from /usr/lib/libc.a(shr.o)
(gdb) where
#0 0xd01407e0 in memchr () from /usr/lib/libc.a(shr.o)
#1 0xd438f480 in ffi_call_AIX () from /opt/freeware/lib/libffi.a(libffi.so.6)
#2
Tony Reix added the comment:
On Fedora/PPC64LE, where it is OK, the same debug with gdb gives:
(gdb) where
#0 0x77df03b0 in __memchr_power8 () from /lib64/libc.so.6
#1 0x7fffea167680 in ?? () from /lib64/libffi.so.6
#2 0x7fffea166284 in ffi_call () from /lib64/libffi.so.6
Tony Reix added the comment:
On Fedora/x86_64, in order to get the core, one must do:
coredumpctl -o /tmp/core dump /usr/bin/python3.8
Tony Reix added the comment:
On AIX:
root@castor4## gdb /opt/freeware/bin/python3
...
(gdb) run -m pdb Pb.py
...
(Pdb) n
b'def'
> /home2/freeware/src/packages/BUILD/Python-3.8.5/32bit/Pb.py(35)()
-> print(
(Pdb) n
> /home2/freeware/src/packages/BUILD/Python-3
Tony Reix added the comment:
Fedora32/x86_64
[root@destiny10 tmp]# gdb /usr/bin/python3.8 core
...
Core was generated by `python3 ./Pb.py'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0 0x7f898a02a1d8 in __memchr_sse2 () from /lib64/libc.so.6
Missing separate debug
Tony Reix added the comment:
On Fedora32/PPC64LE (5.7.9-200.fc32.ppc64le), with little change:
libc = CDLL('/usr/lib64/libc.so.6')
I get the correct answer:
b'def'
b'def'
b'def'
# python3 --version
Python 3.8.3
libffi : 3.1-24
On Fedora32/x86_6
Tony added the comment:
If the error is not resolved yet, I would prefer that we revert this change.
The new PR is kinda big; I don't know when it will be merged.
Tony added the comment:
Ok, so I checked, and the PR I currently have a CR on fixes this issue:
https://github.com/python/cpython/pull/21446
Do you want me to make a different PR tomorrow that fixes this specific issue
to get it to master faster, or is it ok to wait a bit
Tony added the comment:
I see, I'll start working on a fix soon
Tony added the comment:
By the way if we will eventually combine StreamReader and StreamWriter won't
this function (readinto) be useful then?
Maybe we should consider adding it right now.
Tell me your thoughts on this.
Tony added the comment:
> Which brings me to the most important point: what we need it not coding it
> (yet), but rather drafting the actual proposal and posting it to
> https://discuss.python.org/c/async-sig/20. Once a formal proposal is there
> we can proceed with the im
Tony added the comment:
Ok actually that sounds really important, I am interested.
But to begin doing something like this I need to know what's the general design.
Is it simply combining stream reader and stream writer into a single object and
changing the write() function to always
Tony added the comment:
Ah it's trio...
--
___
Python tracker
<https://bugs.python.org/issue41305>
___
___
Python-bugs-list mailing list
Unsubscr
Tony added the comment:
Ok.
I'm interested in learning about the new API.
Is it documented somewhere?
Change by Tony :
--
pull_requests: +20633
pull_request: https://github.com/python/cpython/pull/21491
Change by Tony :
--
keywords: +patch
pull_requests: +20634
stage: -> patch review
pull_request: https://github.com/python/cpython/pull/21491
New submission from Tony :
Add a StreamReader.readinto(buf) function.
Exactly like StreamReader.read() with *n* being equal to the length of buf.
Instead of allocating a new buffer, copy the read buffer into buf.
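The proposed method has not been added to asyncio, but its intended semantics can be sketched at the Python level (the helper name below is hypothetical; note this sketch still copies via read(), whereas the proposal's point was to fill `buf` directly from the internal read buffer):

```python
import asyncio

async def readinto(reader: asyncio.StreamReader, buf: bytearray) -> int:
    """Hypothetical stand-in for the proposed StreamReader.readinto():
    read at most len(buf) bytes and place them at the front of buf."""
    data = await reader.read(len(buf))
    n = len(data)
    buf[:n] = data
    return n          # 0 signals EOF, mirroring read()

async def demo():
    reader = asyncio.StreamReader()
    reader.feed_data(b"hello")
    reader.feed_eof()
    buf = bytearray(8)
    n = await readinto(reader, buf)
    print(n, bytes(buf[:n]))   # 5 b'hello'

asyncio.run(demo())
```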
--
messages: 373702
nosy: tontinton
priority: normal
severity: normal
Tony added the comment:
I feel like the metadata is not really a concern here. I like when there is no
code duplication :)
Change by Tony :
--
keywords: +patch
pull_requests: +20594
stage: -> patch review
pull_request: https://github.com/python/cpython/pull/21446
Change by Tony :
--
pull_requests: +20593
pull_request: https://github.com/python/cpython/pull/21446
New submission from Tony :
This will greatly increase performance: from my internal tests it was about
150% on Linux.
Using read_into instead of read will make it so we do not allocate a new buffer
each time data is received.
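The allocation difference driving those numbers can be seen with plain blocking sockets (a sketch of the mechanism, not the asyncio patch itself):

```python
import socket

# recv() allocates a fresh bytes object on every call; recv_into() fills a
# caller-owned buffer instead, which is the allocation being avoided here.
a, b = socket.socketpair()

a.sendall(b"ping")
data = b.recv(1024)          # new bytes object each time
print(data)                  # b'ping'

buf = bytearray(1024)        # allocated once, reused across reads
a.sendall(b"pong")
n = b.recv_into(buf)
print(bytes(buf[:n]))        # b'pong'

a.close()
b.close()
```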
--
messages: 373526
nosy: tontinton
priority: normal
Change by Tony :
--
pull_requests: +20589
pull_request: https://github.com/python/cpython/pull/21442
Change by Tony :
--
pull_requests: +20590
pull_request: https://github.com/python/cpython/pull/21442
Change by Tony :
--
keywords: +patch
pull_requests: +20588
stage: -> patch review
pull_request: https://github.com/python/cpython/pull/21439
Change by Tony :
--
nosy: +tontinton
nosy_count: 3.0 -> 4.0
pull_requests: +20585
pull_request: https://github.com/python/cpython/pull/21439
New submission from Tony :
Using recv_into instead of recv in the transport _loop_reading will speed up
the process.
From what I checked it's about 120% performance increase.
This is only because there should not be a new buffer allocated each time we
call recv, it's re
Change by Tony :
--
pull_requests: +20555
pull_request: https://github.com/python/cpython/pull/21406
Tony added the comment:
bump
Change by Tony :
--
title: asyncio module better caching for set and get_running_loop ->
asyncio.set_running_loop() cache running loop holder
Change by Tony :
--
keywords: +patch
pull_requests: +20550
stage: -> patch review
pull_request: https://github.com/python/cpython/pull/21401
New submission from Tony :
There is a cache variable for the running loop holder, but once
set_running_loop is called the variable is set to NULL, so the next time
get_running_loop is called it has to query a dictionary to get the running loop
holder.
I thought why not always cache the latest
Change by Tony :
--
keywords: +patch
pull_requests: +20547
stage: -> patch review
pull_request: https://github.com/python/cpython/pull/21399
New submission from Tony :
In IocpProactor I saw that the callbacks to the functions recv, recv_into,
recvfrom, sendto, send and sendfile all give the same callback function for
when the overlapped operation is done.
I just wanted cleaner code so I made a static function inside the class
Tony added the comment:
poke
Tony added the comment:
This still leaves the open issue of UDPServer not shutting down immediately
though
Tony added the comment:
Just want to note that this fixes an issue in all TCPServers and not only
http.server
--
title: BaseServer's server_forever() shutdown immediately when calling
shutdown() -> TCPServer's server_forever() shutdown immediately when call
Change by Tony :
--
pull_requests: +20260
pull_request: https://github.com/python/cpython/pull/21094
Change by Tony :
--
keywords: +patch
pull_requests: +20259
stage: -> patch review
pull_request: https://github.com/python/cpython/pull/21093
Tony added the comment:
By the way I have to ask, if I want this feature to be merged (this is my first
PR) should I make a PR to 3.6/3.7/3.8/3.9 and master?
Or should I create a PR to master only?
thanks
New submission from Tony :
Currently calling BaseServer's shutdown() function will not make
serve_forever() return immediately from its select().
I suggest adding a new function called server_shutdown() that will make
serve_forever() shut down immediately.
Then in TCPServer(BaseS
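For context, a sketch of why shutdown() is not immediate today: serve_forever() only checks the shutdown flag once per poll_interval, so shutdown() can block for up to that long (this uses only the existing socketserver API, not the proposed server_shutdown()):

```python
import socketserver
import threading
import time

class EchoHandler(socketserver.BaseRequestHandler):
    def handle(self):
        self.request.sendall(self.request.recv(1024))

srv = socketserver.TCPServer(("127.0.0.1", 0), EchoHandler)
t = threading.Thread(target=srv.serve_forever, kwargs={"poll_interval": 0.5})
t.start()

start = time.monotonic()
srv.shutdown()                 # only honoured on the next poll tick
t.join()
srv.server_close()
print(f"shutdown took {time.monotonic() - start:.2f}s")  # up to ~poll_interval
```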
Tony added the comment:
Hi Steve,
Thank you for this.
I know about the working of WOW64 and the redirection to the
(HKEY_LOCAL_MACHINE) ..\Wow6432Node, that is explained on md.docs.
The HKEY_CURRENT_USER redirection is not well explained, and so it appears I’m
not the only one (Google) who
Tony added the comment:
The attachment I forgot..
Greetings, Tony.
From: Steve Dower
Sent: Saturday, 11 January 2020 17:30
To: factoryx.c...@gmail.com
Subject: [issue39296] Windows register keys
Steve Dower added the comment:
Have you read PEP 514? Does that help?
If not, can you
Tony added the comment:
Hello Steve,
I just read PEP 514.
Thank you for pointing this out.
However, when installing the latest version (3.8.1), the multi-user install is
registered under key
“HKEY_LOCAL_MACHINE\SOFTWARE\WOW6432Node\” as the PEP describes.
The key “HKEY_LOCAL_MACHINE
New submission from Tony :
It would be more practical to name the Windows main registry keys 'python',
with for example 'python32' or 'python64'. This would make searching the
registry for registered Python versions (single- and/or multi-user) a lot
easier.
New submission from Tony :
After installing Python 3.8.1 64-bit on Windows 10 64-bit version 1909, the
system needs to be rebooted to validate all settings in the registry; otherwise
it will cause a lot of exceptions, like Path not found etc.
--
components: Installation
Tony Hirst added the comment:
Argh: the previous comment linked the wrong associated issue; the correct one is:
https://bugs.python.org/issue24313
Tony Hirst added the comment:
Previously posted issue: https://bugs.python.org/issue22107
Tony Hirst added the comment:
Apols - this is probably strictly a numpy issue.
See: https://github.com/numpy/numpy/issues/12481
New submission from Tony Hirst :
import json
import numpy as np
json.dumps({'int64': np.int64(1)})
TypeError: Object of type int64 is not JSON serializable
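The usual workaround is `json.dumps(..., default=...)`, sketched below with a minimal stand-in class so the example runs without numpy (np.int64 itself is not an int subclass, which is why the encoder rejects it):

```python
import json

class Int64:
    """Minimal stand-in for np.int64: int-like but not an int subclass."""
    def __init__(self, v):
        self.v = v
    def __int__(self):
        return self.v

try:
    json.dumps({"int64": Int64(1)})
except TypeError as e:
    print(e)   # Object of type Int64 is not JSON serializable

# Workaround: a `default` hook coerces anything the encoder doesn't know.
print(json.dumps({"int64": Int64(1)}, default=int))  # {"int64": 1}
```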
Change by Tony Hirst :
--
components: Library (Lib)
nosy: Tony Hirst
priority: normal
severity: normal
status: open
title: json serialiser errors with numpy int64
versions: Python 3.7
Tony Cappellini added the comment:
Using Python 3.7.4, I'm calling subprocess.run() with the following arguments.
.run() still hangs even though a timeout is being passed in.
subprocess.run(cmd_list,
stdout=subprocess
Tony Cappellini added the comment:
I'm still seeing hangs with subprocess.run() in Python 3.7.4
Unfortunately, it involves talking to an NVME SSD on Linux, so I cannot
easily submit code to duplicate it.
--
nosy: +cappy
New submission from Tony Hammack :
ThreadPoolExecutor(max_workers=None) throws an exception when it should not,
which is inconsistent with the 3.4 documentation. If max_workers=None, it
should use the number of CPUs as the thread count.
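For reference, max_workers=None has been accepted since Python 3.5, defaulting to a value derived from the CPU count (the exact formula has changed across versions); a quick check:

```python
from concurrent.futures import ThreadPoolExecutor

# Omitting max_workers (or passing None) derives the pool size from the CPU
# count, e.g. min(32, os.cpu_count() + 4) on recent versions.
with ThreadPoolExecutor(max_workers=None) as pool:
    print(pool._max_workers >= 1)             # _max_workers is an internal detail
    print(list(pool.map(abs, [-1, -2, -3])))  # [1, 2, 3]
```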
--
components: Library (Lib)
messages: 336354
nosy: Tony
Tony Roberts added the comment:
Sure, that's reasonable :)
For my case I have a usable workaround, so not backporting it to < 3.8 is fine
for me. My workaround will just leak the thread state if another thread is in
__import__, which happens so rarely that it's not really a pro
Tony Roberts added the comment:
GetProcAddress and GetModuleHandle do block in the same way as LoadLibrary and
FreeLibrary - they acquire the loader lock too.
Yes, ideally the application would terminate its threads cleanly, however when
Python is embedded in another application it may not
Change by Tony Roberts :
--
keywords: +patch
pull_requests: +7393
stage: needs patch -> patch review
Tony Roberts added the comment:
Sure, I'll get that done in the next couple of days.
New submission from Tony Roberts :
In dynload_win.c LoadLibraryExW is called with the GIL held.
This can cause a deadlock in an uncommon case where the GIL also needs to be
acquired when another thread is being detached.
Both LoadLibrary and FreeLibrary acquire the Windows loader-lock. If
REIX Tony added the comment:
OK.
However, compiling ONLY the file Objects/longobject.c with -qalias=noansi did
fix the issue on AIX. That could be the same on Linux.
I haven't tried to use Py_SIZE() in all places where it should be used. Now
trying to figure out why GCC behaves worse
REIX Tony added the comment:
With XLC v13 -O2, using -qalias=noansi for building Objects/longobject.o only
and not for all the other .o files did fix the 10 more failed tests I see with
-O2 compared to -O0 (7-8 failed tests).
So, ANSI-aliasing in Objects/longobject.c is the issue.
About
REIX Tony added the comment:
Thanks a lot Stefan, that should completely explain my issues.
-fno-strict-aliasing -fwrapv for gcc
So, that means that you would get better performance if you applied to Python
v2.7 what Python v3.5 did about Py_SIZE(x).
However, there are probably other places