Re: Format list of list sub elements keeping structure.
On 24/07/18 06:41, Sayth Renshaw wrote:
> On Tuesday, 24 July 2018 14:25:48 UTC+10, Rick Johnson wrote:
>> Sayth Renshaw wrote:
>> elements = [['[{0}]'.format(element) for element in elements] for elements in data]
>>
>> I would suggest you avoid list comprehensions until you master
>> long-form loops.
>
> I actually have the answer except for a glitch where one list element
> is an int.
>
> My code
>
> for item in data:
>     out = '[{0}]'.format("][".join(item))
>     print(out)
>
> which prints out
>
> [glossary]
> [glossary][title]
> [glossary][GlossDiv]
> [glossary][GlossDiv][title]
> [glossary][GlossDiv][GlossList]
>
> However, in my source I have two lines like this
>
> ['glossary', 'GlossDiv', 'GlossList', 'GlossEntry', 'GlossDef', 'GlossSeeAlso', 0],
> ['glossary', 'GlossDiv', 'GlossList', 'GlossEntry', 'GlossDef', 'GlossSeeAlso', 1],
>
> when it hits these lines I get
>
> TypeError: sequence item 6: expected str instance, int found
>
> Do I need to do an explicit check for these 2 cases or is there a
> simpler way?
>
> Cheers
>
> Sayth

out = '[{0}]'.format("][".join(str(item)))

--
My fellow Pythonistas, ask not what our language can do for you, ask
what you can do for our language.

Mark Lawrence
--
https://mail.python.org/mailman/listinfo/python-list
Re: Format list of list sub elements keeping structure.
2018-07-24 3:52 GMT+02:00, Sayth Renshaw :
> I have data which is a list of lists of all the full paths in a json
> document.
>
> How can I change the format to be usable when selecting elements?
>
> data = [['glossary'],
> ['glossary', 'title'],
> ['glossary', 'GlossDiv'],
> ['glossary', 'GlossDiv', 'title'],
> ['glossary', 'GlossDiv', 'GlossList'],
> ['glossary', 'GlossDiv', 'GlossList', 'GlossEntry'],
> ['glossary', 'GlossDiv', 'GlossList', 'GlossEntry', 'ID'],
> ['glossary', 'GlossDiv', 'GlossList', 'GlossEntry', 'SortAs'],
> ['glossary', 'GlossDiv', 'GlossList', 'GlossEntry', 'GlossTerm'],
> ['glossary', 'GlossDiv', 'GlossList', 'GlossEntry', 'Acronym'],
> ['glossary', 'GlossDiv', 'GlossList', 'GlossEntry', 'Abbrev'],
> ['glossary', 'GlossDiv', 'GlossList', 'GlossEntry', 'GlossDef'],
> ['glossary', 'GlossDiv', 'GlossList', 'GlossEntry', 'GlossDef', 'para'],
> ['glossary', 'GlossDiv', 'GlossList', 'GlossEntry', 'GlossDef', 'GlossSeeAlso'],
> ['glossary', 'GlossDiv', 'GlossList', 'GlossEntry', 'GlossDef', 'GlossSeeAlso', 0],
> ['glossary', 'GlossDiv', 'GlossList', 'GlossEntry', 'GlossDef', 'GlossSeeAlso', 1],
> ['glossary', 'GlossDiv', 'GlossList', 'GlossEntry', 'GlossSee']]
>
> I am trying to change it to be.
>
> [['glossary'],
> ['glossary']['title'],
> ['glossary']['GlossDiv'],
> ]
>
> [...]
>
> Cheers
>
> Sayth
> --
> https://mail.python.org/mailman/listinfo/python-list

Hi,

You may try to experiment with pprint
https://docs.python.org/3/library/pprint.html

I don't use it very often, but it seems to suit needs like this. If you
increase the default width=80 (intended for reading and source code) to a
larger value, the formatting might come close to your specification. For
the sample data, the width can be set to a value slightly smaller than the
length of the whole string representation of the nested list; the
formatting then follows the next "level" of the nesting, and these are
formatted on separate lines.

cf.:

>>> import pprint
>>> len(str(data))
957
>>> pprint.pprint(data, width=956, compact=False)
[['glossary'],
 ['glossary', 'title'],
 ['glossary', 'GlossDiv'],
 ['glossary', 'GlossDiv', 'title'],
 ['glossary', 'GlossDiv', 'GlossList'],
 ['glossary', 'GlossDiv', 'GlossList', 'GlossEntry'],
 ['glossary', 'GlossDiv', 'GlossList', 'GlossEntry', 'ID'],
 ['glossary', 'GlossDiv', 'GlossList', 'GlossEntry', 'SortAs'],
 ['glossary', 'GlossDiv', 'GlossList', 'GlossEntry', 'GlossTerm'],
 ['glossary', 'GlossDiv', 'GlossList', 'GlossEntry', 'Acronym'],
 ['glossary', 'GlossDiv', 'GlossList', 'GlossEntry', 'Abbrev'],
 ['glossary', 'GlossDiv', 'GlossList', 'GlossEntry', 'GlossDef'],
 ['glossary', 'GlossDiv', 'GlossList', 'GlossEntry', 'GlossDef', 'para'],
 ['glossary', 'GlossDiv', 'GlossList', 'GlossEntry', 'GlossDef', 'GlossSeeAlso'],
 ['glossary', 'GlossDiv', 'GlossList', 'GlossEntry', 'GlossDef', 'GlossSeeAlso', 0],
 ['glossary', 'GlossDiv', 'GlossList', 'GlossEntry', 'GlossDef', 'GlossSeeAlso', 1],
 ['glossary', 'GlossDiv', 'GlossList', 'GlossEntry', 'GlossSee']]
>>>

hth,
 vbr
--
https://mail.python.org/mailman/listinfo/python-list
Re: Format list of list sub elements keeping structure.
On 24/07/18 08:25, Mark Lawrence wrote:
> On 24/07/18 06:41, Sayth Renshaw wrote:
>> On Tuesday, 24 July 2018 14:25:48 UTC+10, Rick Johnson wrote:
>>> Sayth Renshaw wrote:
>>> elements = [['[{0}]'.format(element) for element in elements] for elements in data]
>>>
>>> I would suggest you avoid list comprehensions until you master
>>> long-form loops.
>>
>> I actually have the answer except for a glitch where one list element
>> is an int.
>>
>> My code
>>
>> for item in data:
>>     out = '[{0}]'.format("][".join(item))
>>     print(out)
>>
>> which prints out
>>
>> [glossary]
>> [glossary][title]
>> [glossary][GlossDiv]
>> [glossary][GlossDiv][title]
>> [glossary][GlossDiv][GlossList]
>>
>> However, in my source I have two lines like this
>>
>> ['glossary', 'GlossDiv', 'GlossList', 'GlossEntry', 'GlossDef',
>> 'GlossSeeAlso', 0],
>> ['glossary', 'GlossDiv', 'GlossList', 'GlossEntry', 'GlossDef',
>> 'GlossSeeAlso', 1],
>>
>> when it hits these lines I get
>>
>> TypeError: sequence item 6: expected str instance, int found
>>
>> Do I need to do an explicit check for these 2 cases or is there a
>> simpler way?
>>
>> Cheers
>>
>> Sayth
>
> out = '[{0}]'.format("][".join(str(item)))

No.

>>> item = ['a', 'b', 1]
>>> '[{0}]'.format("][".join(item))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: sequence item 2: expected str instance, int found
>>> '[{0}]'.format("][".join(str(item)))
"[[]['][a]['][,][ ]['][b]['][,][ ][1][]]"
>>>

You'll want to use map(), a generator comprehension, or a different
approach.
--
https://mail.python.org/mailman/listinfo/python-list
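[A minimal sketch of the map()/generator approach mentioned above; the sample data is abridged from the thread:]

    # Convert every path element to str before joining, so integer
    # indices such as 0 and 1 no longer raise TypeError.
    data = [
        ['glossary'],
        ['glossary', 'GlossDiv', 'GlossList', 'GlossEntry', 'GlossDef', 'GlossSeeAlso', 0],
    ]

    for item in data:
        out = '[{0}]'.format(']['.join(map(str, item)))
        # equivalent generator form:
        # out = '[{0}]'.format(']['.join(str(elem) for elem in item))
        print(out)

    # [glossary]
    # [glossary][GlossDiv][GlossList][GlossEntry][GlossDef][GlossSeeAlso][0]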
Re: Format list of list sub elements keeping structure.
Sayth Renshaw wrote:

> I have data which is a list of lists of all the full paths in a json
> document.
>
> How can I change the format to be usable when selecting elements?

How do you want to select these elements?

myjson = ...
path = "['foo']['bar'][42]"
print(eval("myjson" + path))

?

Wouldn't it be better to keep 'data' as is and use a helper function like

def get_value(myjson, path):
    for key_or_index in path:
        myjson = myjson[key_or_index]
    return myjson

path = ['foo', 'bar', 42]
print(get_value(myjson, path))

?

> data = [['glossary'],
> ['glossary', 'title'],
> ['glossary', 'GlossDiv'],
> ['glossary', 'GlossDiv', 'title'],
> ['glossary', 'GlossDiv', 'GlossList'],
> ['glossary', 'GlossDiv', 'GlossList', 'GlossEntry'],
> ['glossary', 'GlossDiv', 'GlossList', 'GlossEntry', 'ID'],
> ['glossary', 'GlossDiv', 'GlossList', 'GlossEntry', 'SortAs'],
> ['glossary', 'GlossDiv', 'GlossList', 'GlossEntry', 'GlossTerm'],
> ['glossary', 'GlossDiv', 'GlossList', 'GlossEntry', 'Acronym'],
> ['glossary', 'GlossDiv', 'GlossList', 'GlossEntry', 'Abbrev'],
> ['glossary', 'GlossDiv', 'GlossList', 'GlossEntry', 'GlossDef'],
> ['glossary', 'GlossDiv', 'GlossList', 'GlossEntry', 'GlossDef', 'para'],
> ['glossary', 'GlossDiv', 'GlossList', 'GlossEntry', 'GlossDef', 'GlossSeeAlso'],
> ['glossary', 'GlossDiv', 'GlossList', 'GlossEntry', 'GlossDef', 'GlossSeeAlso', 0],
> ['glossary', 'GlossDiv', 'GlossList', 'GlossEntry', 'GlossDef', 'GlossSeeAlso', 1],
> ['glossary', 'GlossDiv', 'GlossList', 'GlossEntry', 'GlossSee']]
>
> I am trying to change it to be.
>
> [['glossary'],
> ['glossary']['title'],
> ['glossary']['GlossDiv'],
> ]
>
> Currently when I am formatting I am flattening the structure (accidentally).
>
> for item in data:
>     for elem in item:
>         out = ("[{0}]").format(elem)
>         print(out)
>
> Which gives
>
> [glossary]
> [title]
> [GlossDiv]
> [title]
> [GlossList]
> [GlossEntry]
> [ID]
> [SortAs]
> [GlossTerm]
> [Acronym]
> [Abbrev]
> [GlossDef]
> [para]
> [GlossSeeAlso]
> [0]
> [1]
> [GlossSee]
>
> Cheers
>
> Sayth
--
https://mail.python.org/mailman/listinfo/python-list
Re: Non-GUI, single processor inter process messaging - how?
Dennis Lee Bieber wrote:
> On Mon, 23 Jul 2018 22:14:22 +0100, Chris Green declaimed the
> following:
>
> >Anders Wegge Keller wrote:
> >>
> >> If your update frequency is low enough that it won't kill the
> >> filesystem and the amount of data is reasonably small, atomic
> >> writes to a file is easy to work with:
> >>
> >Yes, I think you're right, using a file would seem to be the best
> >answer. Sample rate is only once a second or slower and there's not a
> >huge amount of data involved.
> >
> If the data is small enough, putting the file into the small shared
> memory (forget if that is /dev/shm or /run/shm on the BBB) would even
> avoid wearing out the eMMC/SD card.

That is also a very good idea; there's /dev/shm and /run on my BBB, and
/dev/shm seems to have more space.

--
Chris Green
·
--
https://mail.python.org/mailman/listinfo/python-list
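[A rough sketch of the atomic-write idea quoted above, for Python 3 on Linux; the tmpfs path and file name are illustrative:]

    import json
    import os
    import tempfile

    def write_atomic(path, payload):
        """Write payload to path so readers never see a half-written file."""
        # Create the temp file in the same directory so that os.replace()
        # stays on one filesystem and therefore remains atomic.
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path))
        try:
            with os.fdopen(fd, 'w') as f:
                json.dump(payload, f)
            os.replace(tmp, path)  # atomic rename on POSIX
        except BaseException:
            os.unlink(tmp)
            raise

    write_atomic('/dev/shm/sensor_readings.json', {'temperature': 21.5})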
Re: Format list of list sub elements keeping structure.
> myjson = ...
> path = "['foo']['bar'][42]"
> print(eval("myjson" + path))
>
> ?
>
> Wouldn't it be better to keep 'data' as is and use a helper function like
>
> def get_value(myjson, path):
>     for key_or_index in path:
>         myjson = myjson[key_or_index]
>     return myjson
>
> path = ['foo', 'bar', 42]
> print(get_value(myjson, path))
>
> ?

Currently I do leave the data as is; I extract the keys out as a full path.

If I use pprint as suggested I get close.

['glossary'],
['glossary', 'title'],
['glossary', 'GlossDiv'],
['glossary', 'GlossDiv', 'title'],
['glossary', 'GlossDiv', 'GlossList'],
['glossary', 'GlossDiv', 'GlossList', 'GlossEntry'],
['glossary', 'GlossDiv', 'GlossList', 'GlossEntry', 'ID'],
['glossary', 'GlossDiv', 'GlossList', 'GlossEntry', 'SortAs'],
['glossary', 'GlossDiv', 'GlossList', 'GlossEntry', 'GlossTerm'],
['glossary', 'GlossDiv', 'GlossList', 'GlossEntry', 'Acronym'],
...]

But to select elements from the json I need the format
json['elem1']['elem2'].

I want to be able to take any json in future, pass it to my function and
return a list of all json key elements.

Then, using this cool answer on SO
https://stackoverflow.com/a/14692747/461887

from functools import reduce  # forward compatibility for Python 3
import operator

def getFromDict(dataDict, mapList):
    return reduce(operator.getitem, mapList, dataDict)

def setInDict(dataDict, mapList, value):
    getFromDict(dataDict, mapList[:-1])[mapList[-1]] = value

Then get the values from the keys

>>> getFromDict(dataDict, ["a", "r"])
1

That would mean that, if I get my function right, I could feed it any
json, get all the full paths nicely printed and then feed them back to
the SO formula and get the values.

It would essentially process itself and let me get a summary of all keys
and their data.

Thanks

Sayth
--
https://mail.python.org/mailman/listinfo/python-list
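[A quick illustration (not from the thread) of why the reduce()/operator.getitem recipe copes with the integer entries in those paths: operator.getitem works for dict keys and list indices alike. The nested document here is made up.]

    from functools import reduce
    import operator

    def getFromDict(dataDict, mapList):
        # Each step applies dataDict[key]; keys may be dict keys or list indices.
        return reduce(operator.getitem, mapList, dataDict)

    document = {
        'glossary': {
            'GlossDiv': {
                'GlossList': {
                    'GlossEntry': {
                        'GlossDef': {'GlossSeeAlso': ['GML', 'XML']}
                    }
                }
            }
        }
    }

    path = ['glossary', 'GlossDiv', 'GlossList', 'GlossEntry', 'GlossDef', 'GlossSeeAlso', 0]
    print(getFromDict(document, path))   # -> GML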
Re: Format list of list sub elements keeping structure.
Sayth Renshaw wrote:

>> myjson = ...
>> path = "['foo']['bar'][42]"
>> print(eval("myjson" + path))
>>
>> ?
>>
>> Wouldn't it be better to keep 'data' as is and use a helper function like
>>
>> def get_value(myjson, path):
>>     for key_or_index in path:
>>         myjson = myjson[key_or_index]
>>     return myjson
>>
>> path = ['foo', 'bar', 42]
>> print(get_value(myjson, path))
>>
>> ?
>
> Currently I do leave the data as is; I extract the keys out as a full path.
>
> If I use pprint as suggested I get close.
>
> ['glossary'],
> ['glossary', 'title'],
> ['glossary', 'GlossDiv'],
> ['glossary', 'GlossDiv', 'title'],
> ['glossary', 'GlossDiv', 'GlossList'],
> ['glossary', 'GlossDiv', 'GlossList', 'GlossEntry'],
> ['glossary', 'GlossDiv', 'GlossList', 'GlossEntry', 'ID'],
> ['glossary', 'GlossDiv', 'GlossList', 'GlossEntry', 'SortAs'],
> ['glossary', 'GlossDiv', 'GlossList', 'GlossEntry', 'GlossTerm'],
> ['glossary', 'GlossDiv', 'GlossList', 'GlossEntry', 'Acronym'],
> ...]
>
> But to select elements from the json I need the format
> json['elem1']['elem2'].
>
> I want to be able to take any json in future, pass it to my function and
> return a list of all json key elements.
>
> Then, using this cool answer on SO
> https://stackoverflow.com/a/14692747/461887
>
> from functools import reduce  # forward compatibility for Python 3
> import operator
>
> def getFromDict(dataDict, mapList):
>     return reduce(operator.getitem, mapList, dataDict)

Note that my -- not so cool ;) -- function

>> def get_value(myjson, path):

does the same in a way that I expected to be easier to understand than
the functional idiom.

> def setInDict(dataDict, mapList, value):
>     getFromDict(dataDict, mapList[:-1])[mapList[-1]] = value
>
> Then get the values from the keys
>
> >>> getFromDict(dataDict, ["a", "r"])
> 1
>
> That would mean that, if I get my function right, I could feed it any
> json, get all the full paths nicely printed and then feed them back to
> the SO formula and get the values.

OK, if this is really just about printing

>>> path = ["foo", "bar", 42]
>>> print("".join("[{!r}]".format(key) for key in path))
['foo']['bar'][42]

> It would essentially process itself and let me get a summary of all keys
> and their data.
>
> Thanks
>
> Sayth
--
https://mail.python.org/mailman/listinfo/python-list
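[A sketch of that formatting applied over the whole list of paths from earlier in the thread (abridged here); the {!r} conversion also handles the integer entries:]

    data = [
        ['glossary'],
        ['glossary', 'title'],
        ['glossary', 'GlossDiv', 'GlossList', 'GlossEntry', 'GlossDef', 'GlossSeeAlso', 0],
    ]

    for path in data:
        print("".join("[{!r}]".format(key) for key in path))

    # ['glossary']
    # ['glossary']['title']
    # ['glossary']['GlossDiv']['GlossList']['GlossEntry']['GlossDef']['GlossSeeAlso'][0]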
Re: Tracking a memory leak in C extension - interpreting the output of PYTHONMALLOCSTATS
2018-07-24 12:09 GMT+02:00 Bartosz Golaszewski :
> 2018-07-23 21:51 GMT+02:00 Thomas Jollans :
>> On 23/07/18 20:02, Bartosz Golaszewski wrote:
>>> Hi!
>>
>> Hey!
>>
>>> A user recently reported a memory leak in python bindings (C extension
>>> module) to a C library[1] I wrote. I've been trying to fix it since
>>> but so far without success. Since I'm probably dealing with a space
>>> leak rather than actual memory leak, valgrind didn't help much even
>>> when using malloc as allocator. I'm now trying to use
>>> PYTHONMALLOCSTATS but need some help on how to interpret the output
>>> emitted when it's enabled.
>>
>> Oh dear.
>>
>>> [snip]
>>>
>>> The number of pools in arena 53 continuously grows. Its size column
>>> says: 432. I couldn't find any documentation on what it means but I
>>> assume it's an allocation of 432 bytes. [...]
>>
>> I had a quick look at the code (because what else does one do for fun);
>> I don't understand much, but what I can tell you is that
>> (a) yes, that is an allocation size in bytes, and
>> (b) as you can see, it uses intervals of 8. This means that pool 53
>> is used for allocations of 424 < nbytes <= 432 bytes. Maybe your
>> breakpoint needs tweaking.
>> (c) Try breaking on _PyObject_Malloc or pymalloc_alloc. I think they're
>> called by both PyMem_Malloc and PyObject_Malloc.
>>
>> int _PyObject_DebugMallocStats(FILE *out)
>>
>> https://github.com/python/cpython/blob/b18f8bc1a77193c372d79afa79b284028a2842d7/Objects/obmalloc.c#L2435
>>
>> static int pymalloc_alloc(void *ctx, void **ptr_p, size_t nbytes)
>>
>> https://github.com/python/cpython/blob/b18f8bc1a77193c372d79afa79b284028a2842d7/Objects/obmalloc.c#L1327
>>
>> Have fun debugging!
>>
>> -- Thomas
>>
>> [snip]
>
> I don't see any other allocation of this size. Can this be some bug in
> the interpreter?
>
> Bart

Ok so this is strange: I can fix the leak if I explicitly call
PyObject_Free() on the leaking object which is created by "calling"
its type. Is this normal? Shouldn't Py_DECREF() be enough? The
relevant dealloc callback is called from Py_DECREF() but the object's
memory is not freed.

Bart
--
https://mail.python.org/mailman/listinfo/python-list
Re: Tracking a memory leak in C extension - interpreting the output of PYTHONMALLOCSTATS
2018-07-23 21:51 GMT+02:00 Thomas Jollans :
> On 23/07/18 20:02, Bartosz Golaszewski wrote:
>> Hi!
>
> Hey!
>
>> A user recently reported a memory leak in python bindings (C extension
>> module) to a C library[1] I wrote. I've been trying to fix it since
>> but so far without success. Since I'm probably dealing with a space
>> leak rather than actual memory leak, valgrind didn't help much even
>> when using malloc as allocator. I'm now trying to use
>> PYTHONMALLOCSTATS but need some help on how to interpret the output
>> emitted when it's enabled.
>
> Oh dear.
>
>> [snip]
>>
>> The number of pools in arena 53 continuously grows. Its size column
>> says: 432. I couldn't find any documentation on what it means but I
>> assume it's an allocation of 432 bytes. [...]
>
> I had a quick look at the code (because what else does one do for fun);
> I don't understand much, but what I can tell you is that
> (a) yes, that is an allocation size in bytes, and
> (b) as you can see, it uses intervals of 8. This means that pool 53
> is used for allocations of 424 < nbytes <= 432 bytes. Maybe your
> breakpoint needs tweaking.
> (c) Try breaking on _PyObject_Malloc or pymalloc_alloc. I think they're
> called by both PyMem_Malloc and PyObject_Malloc.
>
> int _PyObject_DebugMallocStats(FILE *out)
>
> https://github.com/python/cpython/blob/b18f8bc1a77193c372d79afa79b284028a2842d7/Objects/obmalloc.c#L2435
>
> static int pymalloc_alloc(void *ctx, void **ptr_p, size_t nbytes)
>
> https://github.com/python/cpython/blob/b18f8bc1a77193c372d79afa79b284028a2842d7/Objects/obmalloc.c#L1327
>
> Have fun debugging!
>
> -- Thomas
>
>> How do I use the info produced by PYTHONMALLOCSTATS to get to the
>> culprit of the leak? Is there anything wrong in my reasoning here?
>>
>> Best regards,
>> Bartosz Golaszewski
>>
>> [1] https://git.kernel.org/pub/scm/libs/libgpiod/libgpiod.git/
>
> --
> https://mail.python.org/mailman/listinfo/python-list

Thanks for the hints!

I've been able to pinpoint the allocation in question to this line:
https://git.kernel.org/pub/scm/libs/libgpiod/libgpiod.git/tree/bindings/python/gpiodmodule.c?h=next#n1238

with the following stack trace:

#0  _PyObject_Malloc (ctx=0x0, nbytes=432) at Objects/obmalloc.c:1523
#1  0x55614c38 in _PyMem_DebugRawAlloc (ctx=0x55a3c340 <_PyMem_Debug+96>, nbytes=400, use_calloc=0) at Objects/obmalloc.c:1998
#2  0x556238c5 in PyType_GenericAlloc (type=0x76e06820 , nitems=0) at Objects/typeobject.c:972
#3  0x55627ba5 in type_call (type=0x76e06820 , args=0x76e21910, kwds=0x0) at Objects/typeobject.c:929
#4  0x555cc666 in PyObject_Call (kwargs=0x0, args=, callable=0x76e06820 ) at Objects/call.c:245
#5  PyEval_CallObjectWithKeywords (kwargs=0x0, args=, callable=0x76e06820 ) at Objects/call.c:826
#6  PyObject_CallObject (callable=0x76e06820 , args=) at Objects/call.c:834
#7  0x76c008dd in gpiod_LineToLineBulk (line=line@entry=0x75bbd240) at gpiodmodule.c:1238
#8  0x76c009af in gpiod_Line_set_value (self=0x75bbd240, args=) at gpiodmodule.c:442
#9  0x555c9ef8 in _PyMethodDef_RawFastCallKeywords (method=0x76e06280 , self=self@entry=0x75bbd240, args=args@entry=0x55b15e18, nargs=nargs@entry=1, kwnames=kwnames@entry=0x0) at Objects/call.c:694
#10 0x55754db9 in _PyMethodDescr_FastCallKeywords (descrobj=0x76e344d0, args=args@entry=0x55b15e10, nargs=nargs@entry=2, kwnames=kwnames@entry=0x0) at Objects/descrobject.c:288
#11 0x555b7fcd in call_function (kwnames=0x0, oparg=2, pp_stack=) at Python/ceval.c:4581
#12 _PyEval_EvalFrameDefault (f=, throwflag=) at Python/ceval.c:3176
#13 0x55683b7c in PyEval_EvalFrameEx (throwflag=0, f=0x55b15ca0) at Python/ceval.c:536
#14 _PyEval_EvalCodeWithName (_co=_co@entry=0x77e50460, globals=globals@entry=0x77f550e8, locals=locals@entry=0x77e50460, args=args@entry=0x0, argcount=argcount@entry=0, kwnames=kwnames@entry=0x0, kwargs=0x0, kwcount=0, kwstep=2, defs=0x0, defcount=0, kwdefs=0x0, closure=0x0, name=0x0, qualname=0x0) at Python/ceval.c:3941
#15 0x55683ca3 in PyEval_EvalCodeEx (closure=0x0, kwdefs=0x0, defcount=0, defs=0x0, kwcount=0, kws=0x0, argcount=0, args=0x0, locals=locals@entry=0x77e50460, globals=globals@entry=0x77f550e8, _co=_co@entry=0x77e50460) at Python/ceval.c:3970
#16 PyEval_EvalCode (co=co@entry=0x77e50460, globals=globals@entry=0x77efcc50, locals=locals@entry=0x77efcc50) at Python/ceval.c:513
#17 0x556bb099 in run_mod (arena=0x77f550e8, flags=0x7fffe1a0, locals=0x77efcc50, globals=0x77efcc50, filename=0x77e5b4c0, mod=0x55afc2f8) at Python/pythonrun.c:1035
#18 PyRun_FileExFlags (fp=0x55b26010, filename_str=, start=, globals=0x77efcc50, locals=0x77efcc50, closeit=1, flags=0x7fffe1a0) at Python/pythonrun.c:988
#19 0x556bb2b5 in PyRun_SimpleFileExFlags
Re: Tracking a memory leak in C extension - interpreting the output of PYTHONMALLOCSTATS
2018-07-24 13:30 GMT+02:00 Bartosz Golaszewski :
> 2018-07-24 12:09 GMT+02:00 Bartosz Golaszewski :
>> 2018-07-23 21:51 GMT+02:00 Thomas Jollans :
>>> On 23/07/18 20:02, Bartosz Golaszewski wrote:
>>>> Hi!
>>>
>>> Hey!
>>>
>>>> A user recently reported a memory leak in python bindings (C extension
>>>> module) to a C library[1] I wrote. I've been trying to fix it since
>>>> but so far without success. Since I'm probably dealing with a space
>>>> leak rather than actual memory leak, valgrind didn't help much even
>>>> when using malloc as allocator. I'm now trying to use
>>>> PYTHONMALLOCSTATS but need some help on how to interpret the output
>>>> emitted when it's enabled.
>>>
>>> Oh dear.
>>>
>>>> [snip]
>>>>
>>>> The number of pools in arena 53 continuously grows. Its size column
>>>> says: 432. I couldn't find any documentation on what it means but I
>>>> assume it's an allocation of 432 bytes. [...]
>>>
>>> I had a quick look at the code (because what else does one do for fun);
>>> I don't understand much, but what I can tell you is that
>>> (a) yes, that is an allocation size in bytes, and
>>> (b) as you can see, it uses intervals of 8. This means that pool 53
>>> is used for allocations of 424 < nbytes <= 432 bytes. Maybe your
>>> breakpoint needs tweaking.
>>> (c) Try breaking on _PyObject_Malloc or pymalloc_alloc. I think they're
>>> called by both PyMem_Malloc and PyObject_Malloc.
>>>
>>> int _PyObject_DebugMallocStats(FILE *out)
>>>
>>> https://github.com/python/cpython/blob/b18f8bc1a77193c372d79afa79b284028a2842d7/Objects/obmalloc.c#L2435
>>>
>>> static int pymalloc_alloc(void *ctx, void **ptr_p, size_t nbytes)
>>>
>>> https://github.com/python/cpython/blob/b18f8bc1a77193c372d79afa79b284028a2842d7/Objects/obmalloc.c#L1327
>>>
>>> Have fun debugging!
>>>
>>> -- Thomas
>
> [snip!]
>
>> I don't see any other allocation of this size. Can this be some bug in
>> the interpreter?
>>
>> Bart
>
> Ok so this is strange: I can fix the leak if I explicitly call
> PyObject_Free() on the leaking object which is created by "calling"
> its type. Is this normal? Shouldn't Py_DECREF() be enough? The
> relevant dealloc callback is called from Py_DECREF() but the object's
> memory is not freed.
>
> Bart

Ok I've found the problem and it's my fault. From tp_dealloc's documentation:

---
The destructor function should free all references which the instance
owns, free all memory buffers owned by the instance (using the freeing
function corresponding to the allocation function used to allocate the
buffer), and finally (as its last action) call the type's tp_free
function.
---

I'm not calling the tp_free function...

Best regards,
Bartosz Golaszewski
--
https://mail.python.org/mailman/listinfo/python-list
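[Not part of the thread, but for anyone chasing a similar leak: the standard-library tracemalloc module can often point at the allocation site more directly than PYTHONMALLOCSTATS, provided the extension allocates through Python's allocators. A minimal sketch:]

    import tracemalloc

    tracemalloc.start(25)                # keep up to 25 frames per allocation
    before = tracemalloc.take_snapshot()

    # ... exercise the suspected leak here, e.g. call into the extension in a loop ...

    after = tracemalloc.take_snapshot()
    for stat in after.compare_to(before, 'traceback')[:10]:
        print(stat)                      # entries that keep growing show where the memory went
        for line in stat.traceback.format():
            print('   ', line)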
Re: curses, ncurses or something else
On Mon, 23 Jul 2018 23:24:18 +0100, John Pote wrote:

> I recently wrote a command line app to take a stream of numbers, do some
> signal processing on them and display the results on the console. There
> may be several output columns of data so a title line is printed first.
> But the stream of numbers may be several hundred long and the title line
> disappears off the top of the console.
>
> So I thought it might be quick and easy to do something with curses to
> keep the title line visible while the numbers roll up the screen. But
> alas I'm a Windows user and the 'curses' module is not in the Windows
> standard library for Python.
>
> It occurred to me that I could create a simple tkinter class but I
> haven't tinkered for some time and would have to refresh my knowledge of
> the API. Just wondered if there was any other simple way I could keep
> the title line on the console, preferably without having to install
> another library.

Browsergui is designed to simplify GUI-building by mooching off your web
browser. I like it.

sudo pip3 install browsergui
python3 -m browsergui.examples

Enjoy!

--
To email me, substitute nowhere->runbox, invalid->com.
--
https://mail.python.org/mailman/listinfo/python-list
Can pip install packages for all users (on a Linux system)?
I've been using "sudo pip3 install" to add packages from the PyPI
repository. I have multiple user accounts on the computer in question.
My goal is to install packages that are accessible to all user accounts.
I know that using the Synaptic Package Manager in Ubuntu will install for
all users, but not every Python package is included in the Canonical
repository.

I hadn't noticed any discrepancies until recently. I upgraded from Ubuntu
17.10 to 18.04. In parallel, I upgraded tensorflow-gpu 1.4.0 to 1.8.0.
Everything worked on my main account. However, attempting to import
tensorflow from Python on a secondary account failed. Eventually I
checked the pip lists in each account, and I found a reference to the old
tensorflow 1.4 on the secondary account. Uninstalling that, and
reinstalling tensorflow-gpu 1.8 on the secondary account, fixed the
problem.

I believe that I now have tensorflow 1.8 installed twice on my system,
once for each user. If anyone can share how to convince pip to behave
like Synaptic, I would appreciate it. Thanks.
--
https://mail.python.org/mailman/listinfo/python-list
Checking whether type is None
Consider:

>>> type({}) is dict
True
>>> type(3) is int
True
>>> type(None) is None
False

Obvious I guess, since the type object is not None.
So what would I compare type(None) to?

>>> type(None)
<class 'NoneType'>
>>> type(None) is NoneType
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'NoneType' is not defined

I know I can ask whether:

>>> thing is None

but I wanted a generic test.
I'm trying to get away from things like:

>>> type(thing) is type(None)

because of something I read somewhere preferring
my original test method.

Thanks
--
https://mail.python.org/mailman/listinfo/python-list
Re: Can pip install packages for all users (on a Linux system)?
On 24.07.2018 20:07, John Ladasky wrote:
> I've been using "sudo pip3 install" to add packages from the PyPI
> repository. I have multiple user accounts on the computer in question.
> My goal is to install packages that are accessible to all user accounts.
> I know that using the Synaptic Package Manager in Ubuntu will install
> for all users, but not every Python package is included in the Canonical
> repository.
>
> I hadn't noticed any discrepancies until recently. I upgraded from
> Ubuntu 17.10 to 18.04. In parallel, I upgraded tensorflow-gpu 1.4.0 to
> 1.8.0. Everything worked on my main account. However, attempting to
> import tensorflow from Python on a secondary account failed. Eventually
> I checked the pip lists in each account, and I found a reference to the
> old tensorflow 1.4 on the secondary account. Uninstalling that, and
> reinstalling tensorflow-gpu 1.8 on the secondary account, fixed the
> problem.

One possible explanation for your finding: user installs normally take
precedence over system-wide installs, both at import time and for pip
(list, uninstall, etc.). So if you, or your users, have installed
tensorflow 1.4.0 using pip install --user before, then a system-wide pip
install of tensorflow 1.8.0 won't override this previous version (though
if your admin account has the user install, too, pip would warn you).
Otherwise, a pip install without --user is effectively a system-wide
install as long as your Python is a system-wide install.

> I believe that I now have tensorflow 1.8 installed twice on my system,
> once for each user. If anyone can share how to convince pip to behave
> like Synaptic, I would appreciate it. Thanks.

If a user has a user install of tensorflow, it will always shadow the
system-wide version. The only solution I know (except manipulating
Python's import path list) is to pip uninstall the per-user version.

Best,
Wolfgang
--
https://mail.python.org/mailman/listinfo/python-list
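[A quick way to check which of the two installs a given account actually picks up; a sketch, with the module name taken from this thread:]

    import importlib.util
    import site

    spec = importlib.util.find_spec('tensorflow')
    print('imported from:', spec.origin if spec else 'not found')

    # The per-user site-packages directory shadows the system-wide one
    # because site.py puts it earlier on sys.path when it exists.
    print('user site dir :', site.getusersitepackages())
    print('user site used:', site.ENABLE_USER_SITE)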
Re: Checking whether type is None
In Python 2, you can import NoneType from the types module. In Python 3,
the best you can do is:

NoneType = type(None)

Iwo Herka
https://github.com/IwoHerka

‐‐‐ Original Message ‐‐‐
On 24 July 2018 7:33 PM, Tobiah wrote:

> Consider:
>
> >>> type({}) is dict
> True
> >>> type(3) is int
> True
> >>> type(None) is None
> False
>
> Obvious I guess, since the type object is not None.
> So what would I compare type(None) to?
>
> >>> type(None)
> <class 'NoneType'>
> >>> type(None) is NoneType
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
> NameError: name 'NoneType' is not defined
>
> I know I can ask whether:
>
> >>> thing is None
>
> but I wanted a generic test.
>
> I'm trying to get away from things like:
>
> >>> type(thing) is type(None)
>
> because of something I read somewhere preferring
> my original test method.
>
> Thanks
>
> --
> https://mail.python.org/mailman/listinfo/python-list
--
https://mail.python.org/mailman/listinfo/python-list
Re: Checking whether type is None
On Wed, Jul 25, 2018 at 5:33 AM, Tobiah wrote:
> Consider:
>
> >>> type({}) is dict
> True
> >>> type(3) is int
> True
> >>> type(None) is None
> False
>
> Obvious I guess, since the type object is not None.
> So what would I compare type(None) to?
>
> >>> type(None)
> <class 'NoneType'>
> >>> type(None) is NoneType
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
> NameError: name 'NoneType' is not defined
>
> I know I can ask whether:
>
> >>> thing is None
>
> but I wanted a generic test.
> I'm trying to get away from things like:
>
> >>> type(thing) is type(None)
>
> because of something I read somewhere preferring
> my original test method.

There is nothing more generic in a type test than in simply saying "is
None". There are no other instances of NoneType. Don't try type-checking
None; just check if the object is None.

ChrisA
--
https://mail.python.org/mailman/listinfo/python-list
hello from a very excited and totally blind python programmer and game player
Hi there everyone, my name is Daniel Perry and I'm a totally blind new
Python user. I've only just recently started picking up Python and
playing with it, and I intend on building some unique audio computer
games for the blind: mostly simulation games like farming and building
type games, and even eventually a virtual world built completely with
audio. I'd build it in such a way that both we as totally blind colonists
can live inside of it and our fully sighted counterparts could live there
as well, with neither group needing any sort of guidance from the other.
That is, the blind would not need any help from our sighted counterparts,
and you in turn would not need any guidance from us as to how to live in
the world or grow in it.

Of course this virtual world idea is down the road; I've got other game
ideas first of all, but I would eventually like to get to the point I've
just described above, preferably building my own server on which to park
not only my virtual world and games but also the web site I would most
likely need in order to put these items up to be downloaded.

Have a wonderful day to you all, and I look forward to your feedback and
advice.

Also, when I opened up the first message that I had gotten from this
list, I got a prompt that popped up asking if I wanted to make Windows
Live Mail my default news client and I answered no. From that point on,
I've been getting an error message and the message would not open. How
must I fix this? Or am I able to correct this situation?

Have a wonderful day and I look forward to hearing from you soon.
--
https://mail.python.org/mailman/listinfo/python-list
RE: Checking whether type is None
https://docs.python.org/3.7/library/constants.html
"None
The sole value of the type NoneType..."

"x is None" and "type(x) is type(None)" are equivalent because of that.

I think though that the better way to do the first tests would be to use
isinstance
https://docs.python.org/3.7/library/functions.html#isinstance

isinstance({}, dict)
isinstance(3, int)

And I suppose if you really wanted:

isinstance(None, type(None))

-Original Message-
From: Python-list [mailto:python-list-bounces+david.raymond=tomtom@python.org] On Behalf Of Tobiah
Sent: Tuesday, July 24, 2018 3:33 PM
To: python-list@python.org
Subject: Checking whether type is None

Consider:

>>> type({}) is dict
True
>>> type(3) is int
True
>>> type(None) is None
False

Obvious I guess, since the type object is not None.
So what would I compare type(None) to?

>>> type(None)
<class 'NoneType'>
>>> type(None) is NoneType
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'NoneType' is not defined

I know I can ask whether:

>>> thing is None

but I wanted a generic test.
I'm trying to get away from things like:

>>> type(thing) is type(None)

because of something I read somewhere preferring
my original test method.

Thanks
--
https://mail.python.org/mailman/listinfo/python-list
--
https://mail.python.org/mailman/listinfo/python-list
Re: Checking whether type is None
On Tue, 24 Jul 2018 12:33:27 -0700, Tobiah wrote:

[...]

> So what would I compare type(None) to?

Why would you need to? The fastest, easiest, most reliable way to check
if something is None is:

    if something is None

> >>> type(None)
> <class 'NoneType'>
> >>> type(None) is NoneType
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
> NameError: name 'NoneType' is not defined

You can do:

    from types import NoneType

or if you prefer:

    NoneType = type(None)

but why bother?

> I know I can ask whether:
>
> >>> thing is None
>
> but I wanted a generic test.

That *is* a generic test.

> I'm trying to get away from things like:
>
> >>> type(thing) is type(None)

That is a good move.

> because of something I read somewhere preferring my original test
> method.

Oh, you read "something" "somewhere"? Then it must be good advice!
*wink*

Writing code like:

    type(something) is dict

was the standard way to do a type check back in the Python 1.5 days.
That's about 20 years ago now. These days, that is rarely what we need.
The usual way to check a type is:

    isinstance(something, dict)

but even that should be rare. If you find yourself doing lots of type
checking, using isinstance() or type(), then you're probably writing
slow, inconvenient Python code.

--
Steven D'Aprano
"Ever since I learned about confirmation bias, I've been seeing it
everywhere." -- Jon Ronson
--
https://mail.python.org/mailman/listinfo/python-list
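[A small illustration, not from the post, of why isinstance() is usually preferred over comparing type() directly: subclasses still count.]

    from collections import OrderedDict

    d = OrderedDict(a=1)

    print(type(d) is dict)        # False: the exact type is OrderedDict
    print(isinstance(d, dict))    # True: OrderedDict is a dict subclass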
Re: Checking whether type is None
On 07/24/2018 01:07 PM, Chris Angelico wrote:
> On Wed, Jul 25, 2018 at 5:33 AM, Tobiah wrote:
>> Consider:
>>
>> >>> type({}) is dict
>> True
>> >>> type(3) is int
>> True
>> >>> type(None) is None
>> False
>>
>> Obvious I guess, since the type object is not None.
>> So what would I compare type(None) to?
>>
>> >>> type(None)
>> <class 'NoneType'>
>> >>> type(None) is NoneType
>> Traceback (most recent call last):
>>   File "<stdin>", line 1, in <module>
>> NameError: name 'NoneType' is not defined
>>
>> I know I can ask whether:
>>
>> >>> thing is None
>>
>> but I wanted a generic test.
>> I'm trying to get away from things like:
>>
>> >>> type(thing) is type(None)
>>
>> because of something I read somewhere preferring
>> my original test method.
>
> There is nothing more generic in a type test than in simply saying "is
> None". There are no other instances of NoneType. Don't try
> type-checking None; just check if the object is None.
>
> ChrisA

I suppose one valid usage would be this sort of thing:

fn = {
    int: dispatchInt,
    str: dispatchStr,
    list: dispatchList,
    type(None): dispatchNone
}[type(x)]
fn(x)

--
Rob Gaddi, Highland Technology -- www.highlandtechnology.com
Email address domain is currently out of order. See above to fix.
--
https://mail.python.org/mailman/listinfo/python-list
Re: Checking whether type is None
On Wed, Jul 25, 2018 at 9:18 AM, Rob Gaddi wrote:
> On 07/24/2018 01:07 PM, Chris Angelico wrote:
>> On Wed, Jul 25, 2018 at 5:33 AM, Tobiah wrote:
>>> Consider:
>>>
>>> >>> type({}) is dict
>>> True
>>> >>> type(3) is int
>>> True
>>> >>> type(None) is None
>>> False
>>>
>>> Obvious I guess, since the type object is not None.
>>> So what would I compare type(None) to?
>>>
>>> >>> type(None)
>>> <class 'NoneType'>
>>> >>> type(None) is NoneType
>>> Traceback (most recent call last):
>>>   File "<stdin>", line 1, in <module>
>>> NameError: name 'NoneType' is not defined
>>>
>>> I know I can ask whether:
>>>
>>> >>> thing is None
>>>
>>> but I wanted a generic test.
>>> I'm trying to get away from things like:
>>>
>>> >>> type(thing) is type(None)
>>>
>>> because of something I read somewhere preferring
>>> my original test method.
>>
>> There is nothing more generic in a type test than in simply saying "is
>> None". There are no other instances of NoneType. Don't try
>> type-checking None; just check if the object is None.
>>
>> ChrisA
>
> I suppose one valid usage would be this sort of thing:
>
> fn = {
>     int: dispatchInt,
>     str: dispatchStr,
>     list: dispatchList,
>     type(None): dispatchNone
> }[type(x)]
> fn(x)

True, but that would be useful only in a very few situations, where you
guarantee that you'll never get any subclasses. So if you're walking
something that was decoded from JSON, and you know for certain that
you'll only ever get those types (add float to the list and it's
basically covered), then yes, you might do this; and then I would say
that using "type(None)" is the correct spelling of it.

ChrisA
--
https://mail.python.org/mailman/listinfo/python-list
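[A sketch of the kind of dispatch table being described, assuming the input comes straight from json.loads() so only the known types can appear; the handler bodies are made up:]

    import json

    def transform(x):
        # Exact-type dispatch is safe here because json.loads() only ever
        # produces dict, list, str, int, float, bool and None.
        handlers = {
            dict: lambda v: {k: transform(val) for k, val in v.items()},
            list: lambda v: [transform(item) for item in v],
            str: lambda v: v,
            int: lambda v: v,
            float: lambda v: v,
            bool: lambda v: v,
            type(None): lambda v: None,
        }
        return handlers[type(x)](x)

    doc = json.loads('{"a": [1, 2.5, null, true], "b": "text"}')
    print(transform(doc))   # {'a': [1, 2.5, None, True], 'b': 'text'}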
Re: Format list of list sub elements keeping structure.
>> Then using this cool answer on SO [...]
>
> Oh. I thought you wanted to learn how to solve problems. I had no idea you
> were auditioning for the James Dean part. My bad.

Awesome response burn lol. I am trying to solve problems.

Getting tired of dealing with JSON and having to figure out the structure
each time. I just want to automate that part so I can move through the
munging part and spend more time on higher value tasks.

Cheers

Sayth
--
https://mail.python.org/mailman/listinfo/python-list
Re: Format list of list sub elements keeping structure.
> Well, your code was close. All you needed was a little tweak
> to make it work like you requested. So keep working at it,
> and if you have a specific question, feel free to ask on the
> list.
>
> Here's a tip. Try to simplify the problem. Instead of
> looping over a list of lists, and then attempting to do a
> format in the middle of an iteration, a format that you
> really don't know how to do in a vacuum (no pressure,
> right???), pull one of the sublists out and try to format it
> _first_. IOWs: isolate the problem.
>
> And, when you can format _one_ list the way you want --
> spoiler alert! -- you can format an infinite number of lists
> the way you want. Loops are cool like that. Well, most of
> the time.
>
> The key to solving most complex problems is to (1) break
> them down into small parts, (2) solve each small part, and
> (3) assemble the whole puzzle. This is a skill you must
> master. And it's really not difficult. It just requires a
> different way of thinking about tasks.

Thank you Rick, good advice. I really am enjoying coding at the moment,
got myself and life in a good headspace.

Cheers

Sayth
--
https://mail.python.org/mailman/listinfo/python-list