Re: Snappy compression with Pig

2018-05-03 Thread Rohini Palaniswamy
Can you give the full stack trace?

On Tue, May 1, 2018 at 6:35 AM, Alex Soto wrote:
> Hello,
>
> I am using Pig 0.17.0 and I am trying to enable Snappy compression for
> temporary files.
> I installed Snappy on all the Hadoop nodes:
>
> sudo yum install snappy snappy-devel
> ln -

Snappy compression with Pig

2018-05-01 Thread Alex Soto
Hello,

I am using Pig 0.17.0 and I am trying to enable Snappy compression for temporary files.
I installed Snappy on all the Hadoop nodes:

sudo yum install snappy snappy-devel
ln -sf /usr/lib64/libsnappy.so /opt/hadoop/lib/native/

Yum installed the following packages: Installe
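For temporary (intermediate) files specifically, Pig exposes its own properties, separate from the Hadoop map-output settings discussed later in this thread. A minimal sketch of what enabling temp-file compression looks like, assuming the native Snappy library is correctly installed on every node; note that the Pig documentation lists gz and lzo as the tmpfile codecs, so Snappy here is an assumption that depends on your Pig/Hadoop build:

```pig
-- Compress Pig's intermediate files written between MapReduce jobs.
-- Documented codec values are gz and lzo; snappy is assumed here and
-- requires libsnappy to be available in the native library path on all nodes.
SET pig.tmpfilecompression true;
SET pig.tmpfilecompression.codec snappy;
```

If Snappy is rejected by your version, falling back to `gz` or `lzo` confirms whether the problem is the codec choice or the native library setup.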

Re: Snappy compression with pig

2012-04-30 Thread Prashant Kommireddi
Line

On Mon, Apr 30, 2012 at 4:15 PM, Mohit Anchlia wrote:
> Thanks! It worked just fine. But now my question is when compressing a text
> file is it compressed line by line or the entire file is compressed as one?
>
> On Sun, Apr 29, 2012 at 7:33 PM, Prashant Kommireddi wrote:
>
> > By blocks

Re: Snappy compression with pig

2012-04-30 Thread Mohit Anchlia
Thanks! It worked just fine. But now my question is: when compressing a text file, is it compressed line by line or is the entire file compressed as one?

On Sun, Apr 29, 2012 at 7:33 PM, Prashant Kommireddi wrote:
> By blocks do you mean you would be using Snappy to write SequenceFile? Yes,
> you ca

Re: Snappy compression with pig

2012-04-29 Thread Prashant Kommireddi
By blocks do you mean you would be using Snappy to write SequenceFile? Yes, you can do that by setting compression at BLOCK level for the sequence file.

On Sun, Apr 29, 2012 at 1:41 PM, Mohit Anchlia wrote:
> Thanks! Is this compressing every line or in blocks? Is it possible to set
> it to compres
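The BLOCK-level setting mentioned above maps to the standard Hadoop property `mapred.output.compression.type`, which only takes effect when the output format writes SequenceFiles (it is ignored for plain text output). A hedged sketch of how that could look from Pig:

```pig
-- Enable Snappy-compressed job output.
SET output.compression.enabled true;
SET output.compression.codec org.apache.hadoop.io.compress.SnappyCodec;

-- For SequenceFile output only: compress a block of records at a time
-- rather than each record individually (values: NONE, RECORD, BLOCK).
SET mapred.output.compression.type BLOCK;
```

BLOCK compression generally gives much better ratios than RECORD for small records, since the codec sees more data per compression call.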

Re: Snappy compression with pig

2012-04-29 Thread Mohit Anchlia
Thanks! Is this compressing every line or in blocks? Is it possible to set it to compress per block?

On Sun, Apr 29, 2012 at 1:12 PM, Prashant Kommireddi wrote:
> The ones you mentioned are for map output compression, not job output.
>
> On Apr 29, 2012, at 1:07 PM, Mohit Anchlia wrote:
>
> > I t

Re: Snappy compression with pig

2012-04-29 Thread Prashant Kommireddi
The ones you mentioned are for map output compression, not job output.

On Apr 29, 2012, at 1:07 PM, Mohit Anchlia wrote:
> I tried these and they didn't work with STORE? Is this different than the one
> you mentioned?
>
> SET mapred.compress.map.output true;
>
> SET mapred.output.compression org.apac
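The distinction drawn here is between two different stages of a job: intermediate map output (the shuffle data) and the job's final output written by STORE. A sketch of the two property families side by side, using the old-style (pre-YARN) Hadoop property names that match this 2012 thread:

```pig
-- 1) Map output: intermediate shuffle data between map and reduce tasks.
--    Only affects in-flight data, never what STORE writes to HDFS.
SET mapred.compress.map.output true;
SET mapred.map.output.compression.codec org.apache.hadoop.io.compress.SnappyCodec;

-- 2) Job output: what STORE actually writes (the PigStorage properties).
SET output.compression.enabled true;
SET output.compression.codec org.apache.hadoop.io.compress.SnappyCodec;
```

Setting only the first pair, as in the quoted message, leaves STORE output uncompressed, which is consistent with the behavior reported in the thread.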

Re: Snappy compression with pig

2012-04-29 Thread Mohit Anchlia
I tried these and they didn't work with STORE. Are these different from the ones you mentioned?

SET mapred.compress.map.output true;
SET mapred.output.compression org.apache.hadoop.io.compress.SnappyCodec;

On Sun, Apr 29, 2012 at 11:57 AM, Prashant Kommireddi wrote:
> Have you tried setting output com

Re: Snappy compression with pig

2012-04-29 Thread Prashant Kommireddi
Have you tried setting output compression to Snappy for Store?

grunt> set output.compression.enabled true;
grunt> set output.compression.codec org.apache.hadoop.io.compress.SnappyCodec;

You should be able to read and write Snappy compressed files with PigStorage, which uses Hadoop TextInputFormat
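Putting the two settings above into a complete script makes the flow clearer. The input path, output path, and schema below are hypothetical placeholders, not from the thread:

```pig
-- Snappy-compress the output of STORE via the PigStorage properties.
SET output.compression.enabled true;
SET output.compression.codec org.apache.hadoop.io.compress.SnappyCodec;

-- Hypothetical input path and schema for illustration.
raw = LOAD '/data/events' USING PigStorage('\t') AS (id:int, msg:chararray);

-- Each part file in the output directory is written Snappy-compressed.
STORE raw INTO '/data/events_snappy' USING PigStorage('\t');

-- PigStorage reads the compressed result back transparently,
-- since the codec is resolved from Hadoop's compression framework.
back = LOAD '/data/events_snappy' USING PigStorage('\t') AS (id:int, msg:chararray);
```

No custom store or load function is needed for this; PigStorage delegates decompression to Hadoop's input format, as Prashant notes above.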

Re: Snappy compression with pig

2012-04-26 Thread Mohit Anchlia
I think I need to write both store and load functions. It appears that only intermediate output that is stored in a temp location can be compressed using:

SET mapred.compress.map.output true;
SET mapred.output.compression org.apache.hadoop.io.compress.SnappyCodec;

Any pointers as to how I can st
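As the rest of the thread works out, there are really three independent compression knobs in a Pig job, and the properties in this opening message touch only the first. A hedged summary sketch, assuming Snappy native libraries are installed cluster-wide:

```pig
-- 1) Intermediate map output (shuffle data):
SET mapred.compress.map.output true;
SET mapred.map.output.compression.codec org.apache.hadoop.io.compress.SnappyCodec;

-- 2) Pig's own temporary files between chained MapReduce jobs
--    (documented codecs are gz and lzo; snappy is an assumption):
SET pig.tmpfilecompression true;
SET pig.tmpfilecompression.codec snappy;

-- 3) Final job output written by STORE (the setting the thread converges on):
SET output.compression.enabled true;
SET output.compression.codec org.apache.hadoop.io.compress.SnappyCodec;
```

Only the third group changes what STORE writes, which is why the settings quoted in this message appeared to have no effect on the stored output.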