Charles-Francois Natali <neolo...@free.fr> added the comment:

> .. even with a self-compiled 1.2.3, INT_MAX/1000 ... nothing.
> The problem is not crc32(), but the buffer itself:
> 
>    if (pbuf.len > 1024*5) {
>         unsigned char *buf = pbuf.buf;
>         Py_ssize_t len = pbuf.len;
>         Py_ssize_t i;
> fprintf(stderr, "CRC 32 2.1\n");
> for(i=0; (size_t)i < (size_t)len;++i)
>     *buf++ = 1;
> fprintf(stderr, "CRC 32 2.2\n");

Unless I'm mistaken, in the test the file is mapped with PROT_READ, so it's 
normal to get SIGSEGV when writing to it:

    def setUp(self):
        with open(support.TESTFN, "wb+") as f:
            f.seek(_4G)
            f.write(b"asdf")
            f.flush()
            self.mapping = mmap.mmap(f.fileno(), 0,
                                     access=mmap.ACCESS_READ)
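
For what it's worth, here's a minimal standalone C sketch (not from the test
suite; "testfile" is a placeholder for any existing, non-empty file) showing
the same thing: map the file PROT_READ, reading works, but the first store
through the mapping faults with SIGSEGV, just like the debug loop above:

    /* sketch: writing to a PROT_READ mapping -> SIGSEGV */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("testfile", O_RDONLY);   /* placeholder file */
        if (fd < 0) { perror("open"); return 1; }

        char *buf = mmap(NULL, 4096, PROT_READ, MAP_SHARED, fd, 0);
        if (buf == MAP_FAILED) { perror("mmap"); return 1; }

        printf("read ok: %d\n", buf[0]);   /* reading the mapping is fine */
        buf[0] = 1;                        /* writing a read-only page -> SIGSEGV */

        munmap(buf, 4096);
        close(fd);
        return 0;
    }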

> for(i=0; (size_t)i < (size_t)len;++i)
>     *buf++ = 1;

But it seems you're also getting segfaults when only reading it, right?

I've got a stupid question: how much memory do you have?
Because there seem to be some issues with the page cache when reading mmapped 
files on OS X:
http://lists.apple.com/archives/darwin-development/2003/Jun/msg00141.html

On Linux, the page cache won't grow forever, so you don't need enough free 
memory to accommodate the whole file (the page cache grows, but only up to a 
point). But on OS X, the page replacement algorithm seems to retain mmapped 
pages in the page cache much longer, which could eventually trigger an OOM 
(because of overcommitting, mmap can very well return a valid address range 
that leads to a segfault when accessed later).
I'm not sure why it would segfault on the first page, though.
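
To illustrate the overcommit point, here's a hedged, Linux-flavoured sketch
(the 64 GiB size is just an arbitrary "bigger than RAM" value, MAP_ANONYMOUS
is spelled MAP_ANON on older OS X, and MAP_NORESERVE may not exist or have any
effect there):

    /* sketch: overcommit lets mmap() succeed; the failure comes later */
    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
        /* assumes a 64-bit build */
        size_t len = 64ULL * 1024 * 1024 * 1024;   /* 64 GiB, assumed > RAM + swap */
        char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }
        printf("mmap returned a valid range at %p\n", (void *)p);

        /* With overcommit, the failure only shows up when the pages are
         * actually touched, e.g. memset(p, 1, len), long after mmap()
         * reported success. */
        munmap(p, len);
        return 0;
    }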

----------
nosy: +neologix

_______________________________________
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue11277>
_______________________________________