On Friday, 21 April 2017 at 17:40:03 UTC, Era Scarecrow wrote:
I think I'll just go with full memory compression and make a quick, simple filter to collapse the large blocks of 0's into something more manageable. That will reduce the memory allocation issues.
Done and I'm happy with the results.
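Era doesn't post the filter itself, so purely as an illustration of the idea: a zero-run pre-filter might look something like the sketch below. The encodeZeroRuns name and the marker-byte scheme are made up here, not his actual code.
[code]
import std.array : appender;

// Illustrative sketch only: collapse runs of 0x00 into a marker byte plus a
// run length so huge blocks of zeroes shrink before they reach the compressor.
ubyte[] encodeZeroRuns(const(ubyte)[] data, ubyte marker = 0xFF)
{
    auto result = appender!(ubyte[]);
    size_t i = 0;
    while (i < data.length)
    {
        if (data[i] == 0)
        {
            // count the run of zeroes, capped so the length fits in a ubyte
            size_t run = 0;
            while (i + run < data.length && data[i + run] == 0 && run < 255)
                run++;
            result.put(marker);
            result.put(cast(ubyte) run);
            i += run;
        }
        else if (data[i] == marker)
        {
            // escape a literal marker byte as "marker, zero-length run"
            result.put(marker);
            result.put(cast(ubyte) 0);
            i++;
        }
        else
        {
            result.put(data[i]);
            i++;
        }
    }
    return result.data;
}
[/code]
Decoding is the mirror image: on seeing the marker, read the next byte; zero means a literal marker byte, anything else means that many zero bytes.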
On Friday, 21 April 2017 at 12:57:25 UTC, Adam D. Ruppe wrote:
But I didn't realize your thing was a literal example from the
docs. Ugh, can't even trust that.
Which was a bigger part of why I was confused by it all than anything else.
Still, it's much easier to salvage if I knew how the memo
On Friday, 21 April 2017 at 11:18:55 UTC, Era Scarecrow wrote:
So that's what's going on. But if I have to dup the blocks
then I have the same problem as before with limited memory
issues. I kinda wish there was something more like the gz_open that is in the C interface, to let it deal with the decompression
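Incidentally, the gz_* functions from the C interface that Era is wishing for are reachable from D through the etc.c.zlib binding, so zlib itself can own the buffering. A rough sketch under that assumption (minimal error handling, hypothetical dumpGz name):
[code]
import etc.c.zlib;               // D binding of the C zlib interface (gzopen and friends)
import std.stdio : write;
import std.string : toStringz;

void dumpGz(string fileName)
{
    // gzopen/gzread manage their own internal buffer, so nothing here hands
    // a reused GC buffer to the decompressor.
    gzFile f = gzopen(fileName.toStringz, "rb");
    if (f is null)
        throw new Exception("could not open " ~ fileName);
    scope(exit) gzclose(f);

    ubyte[4096] buf;
    int n;
    while ((n = gzread(f, buf.ptr, cast(uint) buf.length)) > 0)
    {
        // do something with the decompressed bytes; here we just echo them
        write(cast(const(char)[]) buf[0 .. n]);
    }
}
[/code]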
On Thursday, 20 April 2017 at 20:24:15 UTC, Adam D. Ruppe wrote:
In short, byChunk reuses its buffer, and std.zlib holds on to
the pointer. That combination leads to corrupted data.
Easiest fix is to .dup the chunk...
So that's what's going on. But if I have to dup the blocks then I have the same problem as before with limited memory issues. I kinda wish there was something more like the gz_open that is in the C interface, to let it deal with the decompression
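For the record, the fix Adam is pointing at is just one .dup per chunk, so std.zlib never keeps a reference into byChunk's recycled buffer. A minimal sketch of the docs example with that change (the echo-to-stdout body is assumed, not Era's actual program):
[code]
import std.algorithm : map;
import std.stdio : stdin, write;
import std.zlib : UnCompress;

void main()
{
    auto decmp = new UnCompress;
    // x.dup copies each chunk out of byChunk's reused buffer before
    // std.zlib stores a pointer to it, which avoids the corrupted data.
    foreach (chunk; stdin.byChunk(4096).map!(x => decmp.uncompress(x.dup)))
    {
        // do something with the decompressed data; here we just print it
        write(cast(const(char)[]) chunk);
    }
}
[/code]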
On Thursday, 20 April 2017 at 20:19:31 UTC, Era Scarecrow wrote:
I took the UnCompress example and tried to make use of it,
however it breaks midway through my program with nothing more
than 'Data Error'.
See the tip of the week here:
http://arsdnet.net/this-week-in-d/2016-apr-24.html
In short, byChunk reuses its buffer, and std.zlib holds on to the pointer. That combination leads to corrupted data. Easiest fix is to .dup the chunk...
I took the UnCompress example and tried to make use of it,
however it breaks midway through my program with nothing more
than 'Data Error'.
[code]
// shamelessly taken from the docs for experimenting with
UnCompress decmp = new UnCompress;
foreach (chunk; stdin.byChunk(4096).map!(x => decmp.uncompress(x)))