I tried your popen script before, but it runs fine here - no core dump. I might try to compile a debuggable Python over the weekend...
Chris.

On 13 May 2011, at 18:09, Artur Wroblewski <wrob...@pld-linux.org> wrote:

> On Fri, May 13, 2011 at 03:49:49PM +0000, Christian Marquardt wrote:
>> Hello,
>>
>> I experienced seg faults and core dumps related to rpy with several
>> recent versions of rpy, and reported some of them on this list. Here's
>> another one, this time with rpy 2.2.0beta3 (and python 2.7.1),
>> occurring during one of the tests coming with the source code. After
>> unpacking,
>
> [...]
>
>> in my setup, which is admittedly somewhat special (Intel 11.1 compilers
>> used for compiling python and all modules, for example, on an OpenSuse
>> 11.1 Linux).
>
> [...]
>
>> Sorry - my intention is not to complain. I just feel terribly
>> frustrated that I cannot even provide the slightest idea where that
>> problem comes from, and even less how to drill it down further.
>
> From my experience, segfaults with different compilers or architectures
> expose problems with bad memory management (which might not be the case
> in our situation). Actually, I find it very reassuring that there are
> people who use rpy on different architectures or with different
> compilers. Hopefully, it will be made rock solid soon.
>
> Anyway, what does the backtrace look like in your case?
>
> On mine (Fedora 14, Python 3.2), while running the attached script (t.py):
>
> (gdb) set args t.py
> (gdb) r
> Starting program: /home/wrobell/opt/bin/python3 t.py
> [Thread debugging using libthread_db enabled]
> Detaching after fork from child process 8221.
> Detaching after fork from child process 8222.
> end
>
> Program received signal SIGSEGV, Segmentation fault.
> 0x00000070006d0063 in ?? ()
> (gdb) bt
> #0  0x00000070006d0063 in ?? ()
> #1  0x000000000051e447 in insertdict (mp=0xadc580, key=0xa58570,
>        hash=953885456351248617, value=0x7959c0) at Objects/dictobject.c:538
> #2  0x00000000005200ce in PyDict_SetItem (op=0xadc580,
>        key=<value optimized out>, value=0x7959c0) at Objects/dictobject.c:810
> #3  0x0000000000524a72 in _PyModule_Clear (m=<value optimized out>)
>        at Objects/moduleobject.c:297
> #4  0x0000000000477e5a in PyImport_Cleanup () at Python/import.c:520
> #5  0x000000000048553e in Py_Finalize () at Python/pythonrun.c:430
> #6  0x000000000049cc53 in Py_Main (argc=<value optimized out>,
>        argv=<value optimized out>) at Modules/main.c:711
> #7  0x0000000000415551 in main (argc=2, argv=0x7fffffffe6a8)
>        at ./Modules/python.c:59
> (gdb) x/i 0x00000070006d0063
> => 0x70006d0063:  Cannot access memory at address 0x70006d0063
>
> Best regards,
>
> Artur
>
> <t.py>
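The t.py attachment itself is not reproduced in the archive. As a rough, hypothetical stand-in, a minimal sketch of the kind of script being discussed (import rpy2, spawn a child process via subprocess/popen, then let the interpreter exit, which is where the quoted backtrace places the crash) might look like the following; the R expression and the child command are guesses, not the contents of the actual attachment:

    # Hypothetical stand-in for the kind of script discussed above;
    # this is NOT the actual t.py attachment.
    import subprocess

    import rpy2.robjects as robjects  # initializes the embedded R interpreter

    # Touch R once so the embedded interpreter is actually exercised.
    robjects.r("sum(1:10)")

    # Spawn a child process; gdb reported "Detaching after fork ..." messages.
    proc = subprocess.Popen(["echo", "hello"], stdout=subprocess.PIPE)
    out, _ = proc.communicate()

    print("end")  # matches the "end" printed just before the SIGSEGV above

If the crash reproduces under a debug interpreter (CPython built with ./configure --with-pydebug), the extra assertions in the dict and module teardown code may fire earlier and closer to the real culprit than the SIGSEGV shown in the backtrace.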