compiling hadoop offline

2010-07-12 Thread Ahmad Shahzad
Hi all, Is it possible to compile Hadoop using Ant without being connected to the internet? Whenever I compile Hadoop, it checks for some Ivy dependencies online. If it is possible, then what should I change to compile Hadoop? I am just changing some of the Hadoop code. Also, if I am

Re: compiling hadoop offline

2010-07-12 Thread Ashish
Can you try passing the following params to the JVM: -DproxyHost= -DproxyPort= It should work for most Java apps :) On Mon, Jul 12, 2010 at 8:58 PM, Ahmad Shahzad wrote: > Hi all, >           Is it possible to compile Hadoop using Ant without being > connected to the internet? Whenever I compile Hado

proxy settings for ivy

2010-07-12 Thread Ahmad Shahzad
Hi all, Can anyone tell me where I set the proxy settings for Ivy? I am unable to build Hadoop using Ant; it says BUILD FAILED java.net.ConnectException: Connection refused. The reason is that I am connected to the internet through a proxy. So, where should I tell Hadoop to use the proxy? Reg

Re: proxy settings for ivy

2010-07-12 Thread Harsh J
Ensure you've set your ANT_OPTS for this before issuing the ant command. For example: set ANT_OPTS=-Dhttp.proxyHost=kaboom -Dhttp.proxyPort=2888 There are similar options available for authenticated proxies too :) On Mon, Jul 12, 2010 at 9:41 PM, Ahmad Shahzad wrote: > Hi all, >           Can a
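On a Unix-style shell the same thing can be sketched as below (the Windows `set` form is shown in the message above); `proxy.example.com` and port `3128` are placeholder values, not anything from the thread:

```shell
# Hypothetical proxy host/port; substitute your site's values.
export ANT_OPTS="-Dhttp.proxyHost=proxy.example.com -Dhttp.proxyPort=3128"
echo "$ANT_OPTS"   # sanity-check the options before running ant
# ant compile     # then build as usual; ant's JVM picks up ANT_OPTS
```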

Re: compiling hadoop offline

2010-07-12 Thread Konstantin Boudnik
On Mon, Jul 12, 2010 at 08:28AM, Ahmad Shahzad wrote: > Hi all, > Is it possible to compile Hadoop using Ant without being > connected to the internet? Whenever I compile Hadoop, it checks for some > Ivy dependencies online. If it is possible, then what should I change to > compile Ha

Hadoop Compression - Current Status

2010-07-12 Thread Stephen Watt
Please let me know if any of these assertions are incorrect; I'm going to be adding any feedback to the Hadoop Wiki. It seems well documented that the LZO codec is the most performant codec (http://blog.oskarsson.nu/2009/03/hadoop-feat-lzo-save-disk-space-and.html), but it is GPL infected and thus it
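The speed-versus-ratio trade-off underlying the "most performant codec" point can be illustrated with gzip alone, since its compression levels trade the same way (LZO sits at the fast/less-compact end). A rough sketch; the file name, contents, and line count are arbitrary:

```shell
# gzip -1 (fastest) compresses less than gzip -9 (best ratio).
yes 'hadoop compression test line' | head -n 5000 > sample.txt
gzip -1 -c sample.txt | wc -c   # larger output, less CPU
gzip -9 -c sample.txt | wc -c   # smaller output, more CPU
```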

RE: Hadoop Compression - Current Status

2010-07-12 Thread Segel, Mike
How can you say zip files are the 'best codecs' to use? Call me silly, but I seem to recall that if you're using a zipped file for input, you can't really use a file splitter? (Going from memory, which isn't the best thing to do...) -Mike -Original Message- From: Stephen Watt [mailto:sw...@us
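Mike's recollection is right for gzip-style streams. A quick sketch of why (file names and contents are arbitrary): a plain file can be read starting at any byte offset, which is exactly what an input split does, but a gzip stream only decompresses from its start, so a mapper handed the middle of a .gz file has no way in:

```shell
printf 'line1\nline2\nline3\n' > plain.txt
gzip -c plain.txt > plain.txt.gz
# Plain text: reading from an arbitrary offset (byte 7) just works.
tail -c +7 plain.txt
# Gzip: the same mid-stream slice is not independently decompressible.
tail -c +7 plain.txt.gz | gunzip 2>/dev/null || echo "cannot decompress mid-stream"
```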

Re: [VOTE] Release Hadoop 0.21.0 (candidate 0)

2010-07-12 Thread Stephen Watt
This is likely a result of how things are now being built post project-split, but previously, for the hadoop-0.20.x releases, there was a top-level build.xml file which would orchestrate building the sub-projects that were split underneath the src directory, resulting in a final hadoop-20.x-cor

Re: Hadoop Compression - Current Status

2010-07-12 Thread Patrick Angeles
Also, FWIW, the use of codecs and SequenceFiles is somewhat orthogonal. You'll have to compress the SequenceFile with a codec, be it gzip, bz2, or lzo. SequenceFiles do get you splittability, which you won't get with just gzip (until we get MAPREDUCE-491) or the hadoop-lzo InputFormats. cheers, -
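The splittability that SequenceFiles provide comes from compressing records or blocks independently, so each compressed chunk stands alone. The idea can be sketched with plain gzip members (file names are arbitrary, and this is only an analogy for block compression, not the actual SequenceFile format):

```shell
# Compress two "blocks" independently, then concatenate them.
printf 'record-a\n' | gzip > block1.gz
printf 'record-b\n' | gzip > block2.gz
cat block1.gz block2.gz > container.gz
# Each member decompresses on its own (what a split needs)...
gunzip -c block2.gz
# ...and gzip happily decompresses the concatenation as a whole.
gunzip -c container.gz
```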

Re: Hadoop Compression - Current Status

2010-07-12 Thread Greg Roelofs
Stephen Watt wrote: > Please let me know if any of these assertions are incorrect. I'm going to be > adding any feedback to the Hadoop Wiki. It seems well documented that the > LZO codec is the most performant codec ( > http://blog.oskarsson.nu/2009/03/hadoop-feat-lzo-save-disk-space-and.html) Spee

proxy settings for compiling hadoop

2010-07-12 Thread Ahmad Shahzad
Hi all, Can anyone tell me where to set the proxy so that I can compile Hadoop with Ant? Which file should I edit for the proxy settings? Regards, Ahmad Shahzad

Re: proxy settings for ivy

2010-07-12 Thread Ahmad Shahzad
Thanks for the help. Setting ANT_OPTS="-Dhttp.proxyHost=proxy -Dhttp.proxyPort=port" worked. Ahmad

Re: [VOTE] Release Hadoop 0.21.0 (candidate 0)

2010-07-12 Thread Felix Halim
Hi Tom, Just want to let you know that back when I tried to circumvent this problem, I used job.submit() and retrieved the counters asynchronously. I found that the counter values were always zero during execution (i.e., while the job was still running halfway). So, in this case, the job is not retire

common-dev@hadoop.apache.org

2010-07-12 Thread Zhang Jianfeng
common-dev@hadoop.apache.org