Thanks for the comments on the random file generation. Regarding the line-at-a-time writes: that's just an artifact of adapting this from my larger program, which has to process each line before it's written back to disk.
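For the archive, here is one way Neil's `dd` suggestion could look. Since every 3 input bytes encode to 4 Base64 characters, reading 750000 random bytes yields exactly 1000000 characters of well-formed Base64. The exact flag spellings (`iflag=fullblock`, `base64 -w 0`) assume GNU coreutils; this is a sketch, not the only way to combine the options he mentions:

```shell
# Read exactly one full 750000-byte block from /dev/urandom
# (iflag=fullblock guards against short reads), then Base64-encode
# it with line wrapping disabled. 750000 input bytes -> exactly
# 1000000 Base64 characters, with no padding and no wasted entropy.
dd if=/dev/urandom ibs=750000 count=1 iflag=fullblock 2>/dev/null \
  | base64 -w 0 > foo3
```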
On 30/10/2018 17:04, Neil Van Dyke wrote:
> Two small comments in addition to what Matthew said...
>
> 'Paulo Matos' via Racket Users wrote on 10/30/18 11:32 AM:
>> $ base64 /dev/urandom | head -c 1000000 > foo3
>
> Even though these are just test files, you might normally want to make
> them by instead `dd if=/dev/urandom` piped to `base64 -`, along with
> `dd` options like `ibs`, `obs`, and `count`. That gets you exactly the
> size of Base64-encoded content you want, with well-formed Base64
> encoding, without consuming unnecessary bytes from "/dev/urandom", and
> with possibly more efficient I/O blocking factors.
>
> Also, in the exact illustrative example code you gave, I wonder whether
> you'd get better performance and simplicity by having
> `gunzip-through-ports` write direct to the "foo.txt" output-port, with
> no line-by-line processing, and whether you need the separate thread.
> (Of course, your actual application code might need to do line-by-line
> processing, and/or have a separate thread, so I'm mainly mentioning this
> for the list, and for future copy&paste code reusers.)

-- 
Paulo Matos