For a test I would suggest yes. The issue isn't a CPU issue; it depends only on 
memory. 
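
A minimal sketch of such a test from the Hive CLI (property names assumed from
Hadoop 0.20-era MapReduce, so treat them as assumptions):

  -- hint at a single map task; the split count can still override this:
  SET mapred.map.tasks=1;
  -- more reliable: raise the minimum split size (in bytes) so fewer splits,
  -- and therefore fewer mappers, are created:
  SET mapred.min.split.size=1073741824;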

--
Alexander Alten-Lorenz
http://mapredit.blogspot.com
German Hadoop LinkedIn Group: http://goo.gl/N8pCF

On May 23, 2012, at 11:58 AM, Debarshi Basak wrote:

> I have 16 cores on each machine.
> Should I still set mappers to 1?
> 
> 
> Debarshi Basak
> Tata Consultancy Services
> Mailto: debarshi.ba...@tcs.com
> Website: http://www.tcs.com
> ____________________________________________
> Experience certainty. IT Services
> Business Solutions
> Outsourcing
> ____________________________________________
> 
> -----alo alt wrote: -----
> To: user@hive.apache.org
> From: alo alt <wget.n...@googlemail.com>
> Date: 05/23/2012 03:25PM
> Subject: Re:
> 
> Ah, 24 mappers is really high. Did you try using only one mapper? 
> 
> --
> Alexander Alten-Lorenz
> http://mapredit.blogspot.com
> German Hadoop LinkedIn Group: http://goo.gl/N8pCF
> 
> On May 23, 2012, at 11:50 AM, Debarshi Basak wrote:
> 
> > Actually yes. I changed java opts to 2g, mapred.child.java.opts is 400m, and I 
> > have max mappers set to 24. My memory is 64GB. My problem is that the size of 
> > the index created is around 22GB. How does the index in Hive work? Does it load 
> > the complete index into memory?
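> > 
> > In Hive CLI terms, those settings are roughly the following (a sketch; the
> > slot count is really a tasktracker-side property, shown only for reference):
> > 
> >   SET mapred.child.java.opts=-Xmx400m;   -- heap of each task JVM
> >   -- in mapred-site.xml on the worker nodes:
> >   -- mapred.tasktracker.map.tasks.maximum = 24   (map slots per node)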
> > 
> > 
> > 
> > -----alo alt wrote: -----
> > To: user@hive.apache.org
> > From: alo alt <wget.n...@googlemail.com>
> > Date: 05/23/2012 02:51PM
> > Subject: Re:
> > 
> > Use the memory management options, as described in the link in my previous 
> > mail. You got an OOM - out of memory - and that could be caused by a 
> > misconfiguration. Did you try playing with mapred.child.ulimit and with 
> > mapred.child.java.opts?
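> > 
> > For example (a sketch; mapred.child.ulimit is given in kilobytes and should
> > be larger than the -Xmx of the child JVM, or tasks get killed):
> > 
> >   SET mapred.child.java.opts=-Xmx1024m;
> >   SET mapred.child.ulimit=3145728;   -- ~3 GB virtual memory cap, in KB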
> > 
> > 
> > 
> > --
> > Alexander Alten-Lorenz
> > http://mapredit.blogspot.com
> > German Hadoop LinkedIn Group: http://goo.gl/N8pCF
> > 
> > On May 23, 2012, at 11:12 AM, Debarshi Basak wrote:
> > 
> > > But what I am doing is creating an index, then setting the path of the 
> > > index, and running a SELECT <columns> FROM table_name WHERE <condition>.
> > > How can I resolve this issue?
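> > > 
> > > Concretely, the sequence is roughly the following (a sketch of Hive
> > > 0.7-style compact index usage; table, column, value, and path names are
> > > placeholders):
> > > 
> > >   CREATE INDEX idx ON TABLE t (key_col) AS 'COMPACT' WITH DEFERRED REBUILD;
> > >   ALTER INDEX idx ON t REBUILD;
> > >   -- materialize the index entries matching the predicate:
> > >   INSERT OVERWRITE DIRECTORY '/tmp/idx_result'
> > >     SELECT `_bucketname`, `_offsets` FROM default__t_idx__ WHERE key_col = 42;
> > >   SET hive.index.compact.file=/tmp/idx_result;
> > >   SET hive.input.format=org.apache.hadoop.hive.ql.index.compact.HiveCompactIndexInputFormat;
> > >   SELECT col1, col2 FROM t WHERE key_col = 42;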
> > > 
> > > 
> > > 
> > > -----alo alt wrote: -----
> > > To: user@hive.apache.org
> > > From: alo alt <wget.n...@googlemail.com>
> > > Date: 05/23/2012 02:08PM
> > > Subject: Re:
> > > 
> > > Hi,
> > > 
> > > http://hadoop.apache.org/common/docs/r0.20.2/mapred_tutorial.html#Memory+management
> > > 
> > > This message means that for some reason the garbage collector is taking 
> > > an excessive amount of time (by default 98% of all CPU time of the 
> > > process) and recovers very little memory in each run (by default 2% of 
> > > the heap).
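> > > 
> > > If a job genuinely needs that much GC time, the check can be switched off
> > > with a HotSpot flag, though fixing the memory shortage itself is usually
> > > the better route - a sketch:
> > > 
> > >   SET mapred.child.java.opts=-Xmx2048m -XX:-UseGCOverheadLimit;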
> > > 
> > > --
> > > Alexander Alten-Lorenz
> > > http://mapredit.blogspot.com
> > > German Hadoop LinkedIn Group: http://goo.gl/N8pCF
> > > 
> > > On May 23, 2012, at 10:13 AM, Debarshi Basak wrote:
> > > 
> > > > When I am trying to run a query with an index, I am getting this 
> > > > exception. My Hive version is 0.7.1.
> > > >  
> > > > java.lang.OutOfMemoryError: GC overhead limit exceeded
> > > >         at java.nio.ByteBuffer.wrap(ByteBuffer.java:369)
> > > >         at org.apache.hadoop.io.Text.decode(Text.java:327)
> > > >         at org.apache.hadoop.io.Text.toString(Text.java:254)
> > > >         at org.apache.hadoop.hive.ql.index.compact.HiveCompactIndexResult.add(HiveCompactIndexResult.java:118)
> > > >         at org.apache.hadoop.hive.ql.index.compact.HiveCompactIndexResult.<init>(HiveCompactIndexResult.java:107)
> > > >         at org.apache.hadoop.hive.ql.index.compact.HiveCompactIndexInputFormat.getSplits(HiveCompactIndexInputFormat.java:89)
> > > >         at org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:971)
> > > >         at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:963)
> > > >         at org.apache.hadoop.mapred.JobClient.access$500(JobClient.java:170)
> > > >         at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:880)
> > > >         at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:833)
> > > >         at java.security.AccessController.doPrivileged(Native Method)
> > > >         at javax.security.auth.Subject.doAs(Subject.java:415)
> > > >         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1157)
> > > >         at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:833)
> > > >         at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:807)
> > > >         at org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:671)
> > > >         at org.apache.hadoop.hive.ql.exec.MapRedTask.execute(MapRedTask.java:123)
> > > >         at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:131)
> > > >         at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:57)
> > > >         at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1063)
> > > >         at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:900)
> > > >         at org.apache.hadoop.hive.ql.Driver.run(Driver.java:748)
> > > >         at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:209)
> > > >         at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:286)
> > > >         at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:516)
> > > >         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > > >         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> > > >         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > > >         at java.lang.reflect.Method.invoke(Method.java:601)
> > > >         at org.apache.hadoop.util.RunJar.main(RunJar.java:197)
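> > > > 
> > > > Note where it is thrown: HiveCompactIndexInputFormat.getSplits runs in 
> > > > the client JVM at job-submission time, so it is the Hive CLI process, 
> > > > not a mapper, that runs out of heap while reading the index. A sketch of 
> > > > raising the client heap, assuming the stock Hadoop launcher scripts 
> > > > honor HADOOP_HEAPSIZE:
> > > > 
> > > >   export HADOOP_HEAPSIZE=4096   # MB, for the JVM that runs the hive client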
> > > > 
> > > > 
> > > > 
> > > > 
> > > > 
> > > 
> > > 
> > 
> > 
> 
> 
