Re: Hi, Hive People urgent question about [Distribute By] function

2015-10-27 Thread Gopal Vijayaraghavan
> I want to override the partitionByHash function on Flink in the same way >as DBY on Hive. > I am working on implementing a benchmark system for these two systems, >which could be a contribution to Hive as well. I would be very disappointed if Flink fails to outperform Hive with a Distribute BY,

Re: Hi, Hive People urgent question about [Distribute By] function

2015-10-24 Thread Philip Lee
Hello, the same question about DISTRIBUTE BY on Hive. According to you, you do not use the hashCode of the Object class in DBY (Distribute By). I tried to understand how ObjectInspectorUtils works for distribution, but it involves a lot of the Hive API and is not easy to understand. I want to override pa

Re: Hi, Hive People urgent question about [Distribute By] function

2015-10-22 Thread Gopal Vijayaraghavan
> so do you think, if we want the same result from Hive and Spark or >another framework, how could we try this one? There's a special backwards-compat slow codepath that gets triggered if you do set mapred.reduce.tasks=199; (or any number). This will produce the exact same hash-code as the jav
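
For illustration, a minimal sketch of the setting described above, as run from the Hive CLI (the table and column names here are hypothetical):

    -- Fixing the reducer count triggers the backwards-compatible hash codepath
    set mapred.reduce.tasks=199;

    -- Rows with equal user_id values are then routed by the legacy (Java-style)
    -- hash-code, which another engine can reproduce to match Hive's layout
    SELECT user_id, amount
    FROM sales                -- hypothetical table
    DISTRIBUTE BY user_id;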

Re: Hi, Hive People urgent question about [Distribute By] function

2015-10-22 Thread Philip Lee
Thanks for your help. So do you think, if we want the same result from Hive and Spark or another framework, how could we try this one? Could you tell me in detail? Regards, Philip On Thu, Oct 22, 2015 at 6:25 PM, Gopal Vijayaraghavan wrote: > > > When applying [Distribute By] on Hive to the

Re: Hi, Hive People urgent question about [Distribute By] function

2015-10-22 Thread Gopal Vijayaraghavan
> When applying [Distribute By] on Hive to the framework, the function >should be partitionByHash on Flink. This is to spread out all the rows >distributed by a hash key from the Object class in Java. Hive does not use the Object hashCode - the identityHashCode is inconsistent, so Object.hashCode()
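
As a minimal HiveQL sketch of the behaviour under discussion (table and column names are hypothetical): DISTRIBUTE BY routes rows to reducers by a hash of the key computed through ObjectInspectorUtils, not Object.hashCode().

    -- All rows sharing a key land on the same reducer (one output file each);
    -- SORT BY is needed if rows should also be ordered within each reducer
    SELECT key, value
    FROM src                  -- hypothetical table
    DISTRIBUTE BY key
    SORT BY key;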

Hi, Hive People urgent question about [Distribute By] function

2015-10-22 Thread Philip Lee
Hello, I am working on Flink and Spark, majoring in Computer Science in Berlin. I have an important question. Well, this question comes from what I am doing these days, which is translating Hive queries to Flink. When applying [Distribute By] on Hive to the framework, the function should be partitionByHash

Re: hi all

2012-07-11 Thread Mapred Learn
You can create an external table to make your data visible in Hive. Sent from my iPhone On Jul 11, 2012, at 7:39 AM, shaik ahamed wrote: > Hi All, > > As I have 100GB of data in HDFS and I want this 100 GB file to > move or copy to the Hive directory or path, how c

Re: hi all

2012-07-11 Thread Bejoy KS
Hi Shaik If you already have the data in HDFS, then just create an external table with that HDFS location; you'll have the data in your Hive table. Or if you want a managed table, then it is also good to use a LOAD DATA statement. It'd be faster as well, since it is an HDFS move
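
Hedged sketches of the two options described above, assuming the data already sits under a hypothetical HDFS path:

    -- Option 1: external table - Hive just points at the existing HDFS directory
    CREATE EXTERNAL TABLE sales_ext (vender STRING, supplier STRING, quantity INT)
    ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
    STORED AS TEXTFILE
    LOCATION '/user/shaik/data/sales';   -- hypothetical path

    -- Option 2: managed table - LOAD DATA is a fast HDFS move, not a copy
    -- (assumes sales_managed was created earlier with a matching schema)
    LOAD DATA INPATH '/user/shaik/data/sales' INTO TABLE sales_managed;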

Re: hi all

2012-07-11 Thread Mohammad Tariq
Try it out using the "distcp" command. Regards, Mohammad Tariq On Wed, Jul 11, 2012 at 8:09 PM, shaik ahamed wrote: > Hi All, > > As I have 100GB of data in HDFS and I want this 100 GB file to > move or copy to the Hive directory or path, how can I achieve t

hi all

2012-07-11 Thread shaik ahamed
Hi All, I have 100GB of data in HDFS and I want to move or copy this 100 GB file to the Hive directory or path; how can I achieve this? Is there any command to run for this? Please provide me a solution where I can load it fast... Thanks in advance, Shaik

Re: hi

2012-07-06 Thread Nguyễn Minh Kha
"No space left on device" this mean your HDD is full. On Fri, Jul 6, 2012 at 10:25 PM, shaik ahamed wrote: > Hi all, > > As im trying to insert the data in to the hive table as im > getting the below error > > Total MapReduce jobs = 2 > Launching Job 1 ou

hi

2012-07-06 Thread shaik ahamed
Hi all, As I'm trying to insert data into the Hive table, I'm getting the below error: Total MapReduce jobs = 2 Launching Job 1 out of 2 Number of reduce tasks is set to 0 since there's no reduce operator org.apache.hadoop.ipc.RemoteException: java.io.IOExce

Re: hi all

2012-07-06 Thread Nitin Pawar
Have the machines changed their IPs? Thanks, Nitin On Fri, Jul 6, 2012 at 6:17 PM, shaik ahamed wrote: > Hi , > > Below is the error I found in the JobTracker log file : > > > *Shuffle Error: Exceeded MAX_FAILED_UNIQUE_FETCHES; bailing-out* > > Please help me with this

Re: hi all

2012-07-06 Thread shaik ahamed
Hi, Below is the error I found in the JobTracker log file: *Shuffle Error: Exceeded MAX_FAILED_UNIQUE_FETCHES; bailing-out* Please help me with this... *Thanks in advance* *Shaik.* On Fri, Jul 6, 2012 at 5:22 PM, Bejoy KS wrote: > ** > Hi Shaik > > There is some error w

Re: hi all

2012-07-06 Thread Bejoy KS
Hi Shaik There is some error while the MR jobs are running. To get the root cause, please post the error log from the failed task. You can browse the JobTracker web UI, choose the right job ID, and drill down to the failed tasks to get the error logs. Regards Bejoy KS Sent from handheld

hi all

2012-07-06 Thread shaik ahamed
*Hi users,* *As I'm selecting a distinct column from the vender Hive table,* *I'm getting the below error; please help me with this:* *hive> select distinct supplier from vender_sample;* Total MapReduce jobs = 1 Launching Job 1 out of 1 Number of reduce tasks not specif

Re: hi users

2012-07-05 Thread Mohammad Tariq
As for the second one, I'm not able to connect. Is this the >> problem for not retrieving the data, or something other than this? >> >> >> >> On Thu, Jul 5, 2012 at 12:23 PM, Nitin Pawar >> wrote: >>> >>> can you check dfs health?

Re: hi users

2012-07-05 Thread shaik ahamed
Thanks for your reply, Nitin. On Thu, Jul 5, 2012 at 12:30 PM, Nitin Pawar wrote: > read up on the hadoop fsck command > > > On Thu, Jul 5, 2012 at 12:29 PM, shaik ahamed wrote: >> Hi Nitin, >> >> How can I check the dfs health? Could you please guide me through the st

Re: hi users

2012-07-05 Thread Nitin Pawar
Read up on the hadoop fsck command. On Thu, Jul 5, 2012 at 12:29 PM, shaik ahamed wrote: > Hi Nitin, > > How can I check the dfs health? Could you please guide me through the steps... > > On Thu, Jul 5, 2012 at 12:23 PM, Nitin Pawar wrote: >> can you check dfs health? >>

Re: hi users

2012-07-05 Thread Nitin Pawar
n this. > > > > On Thu, Jul 5, 2012 at 12:23 PM, Nitin Pawar wrote: > >> can you check dfs health? >> >> I think a few of your nodes are down >> >> >> On Thu, Jul 5, 2012 at 12:17 PM, shaik ahamed wrote: >> >>> Hi All, >>>

Re: hi users

2012-07-04 Thread shaik ahamed
Hi Nitin, How can I check the dfs health? Could you please guide me through the steps... On Thu, Jul 5, 2012 at 12:23 PM, Nitin Pawar wrote: > can you check dfs health? > > I think a few of your nodes are down > > > On Thu, Jul 5, 2012 at 12:17 PM, shaik ahamed

Re: hi users

2012-07-04 Thread shaik ahamed
this. On Thu, Jul 5, 2012 at 12:23 PM, Nitin Pawar wrote: > can you check dfs health? > > I think a few of your nodes are down > > > On Thu, Jul 5, 2012 at 12:17 PM, shaik ahamed wrote: > >> Hi All, >> >> >> I'm not able to fetch the d

Re: hi users

2012-07-04 Thread Mohammad Tariq
Hello Shaik, Were you able to fetch the data earlier? I mean, is it happening for the first time, or were you not able to fetch the data even once? Regards, Mohammad Tariq On Thu, Jul 5, 2012 at 12:17 PM, shaik ahamed wrote: > Hi All, > > > I'm not a

Re: hi users

2012-07-04 Thread Nitin Pawar
Can you check dfs health? I think a few of your nodes are down. On Thu, Jul 5, 2012 at 12:17 PM, shaik ahamed wrote: > Hi All, > > > I'm not able to fetch the data from the Hive table; I'm getting > the below error > > FAILED: Error in semantic analysis:

hi users

2012-07-04 Thread shaik ahamed
Hi All, I'm not able to fetch the data from the Hive table; I'm getting the below error: FAILED: Error in semantic analysis: hive> select * from vender; OK Failed with exception java.io.IOException:java.io.IOException: Could not obtain block: blk_-3328791500929854839_1178 f

Re: Hi

2012-07-04 Thread Bejoy KS
Hi Shaik Updates are not supported in Hive. Still, you can accomplish updates by overwriting either a whole table or a partition. In short, updates are not directly supported in Hive, and doing them indirectly is really expensive as well. Regards Bejoy KS Sent from handheld, please excuse typos
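
A hedged sketch of the overwrite-style "update" described above, using hypothetical table and column names:

    -- Rewrite the whole table, substituting new values only for the target rows
    INSERT OVERWRITE TABLE customers
    SELECT id,
           CASE WHEN id = 42 THEN 'new address' ELSE address END AS address
    FROM customers;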

Hi

2012-07-04 Thread shaik ahamed
Hi All, Can we update the records in a Hive table? If so, please tell me the syntax in Hive. Regards, Shaik.

Re: hi all

2012-06-26 Thread Bejoy KS
Hi Shaik On first look, since you are using a dynamic partition insert, the partition column should be the last column in the SELECT query used in the INSERT OVERWRITE. Modify your insert as INSERT OVERWRITE TABLE vender_part PARTITION (order_date) SELECT vender,supplier,quantity,order_date FROM
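
A fuller, hedged version of the corrected statement, including the session settings a dynamic partition insert typically requires (the source table name is hypothetical; the target layout is taken from the original message below):

    -- Enable dynamic partitioning for this session
    set hive.exec.dynamic.partition=true;
    set hive.exec.dynamic.partition.mode=nonstrict;

    -- The partition column (order_date) must come last in the SELECT list
    INSERT OVERWRITE TABLE vender_part PARTITION (order_date)
    SELECT vender, supplier, quantity, order_date
    FROM vender_staging;     -- hypothetical source table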

hi all

2012-06-26 Thread shaik ahamed
Hi Users, I created a Hive table with the below syntax: CREATE EXTERNAL TABLE vender_part(vender string, supplier string, quantity int) PARTITIONED BY (order_date string) row format delimited fields terminated by ',' stored as textfile; And inserted the 100GB of dat

Re: Hi Everybody, one question about the integration of hbase and hive

2011-10-13 Thread Jean-Daniel Cryans
That configuration was removed from HBase two years ago, so no. You need to point to the ZooKeeper ensemble. You can copy your hbase-site.xml file into Hive's conf directory so that it picks up all the configurations you have on your cluster. J-D On Wed, Oct 12, 2011 at 12:28 AM, liming liu wrot

Hi Everybody, one question about the integration of hbase and hive

2011-10-12 Thread liming liu
Is there any way to start the Hive server with hbase.master specified? I know about hive -hiveconf hbase.master=ubuntu3:6 and hive --service hiveserver, but how do I specify hbase.master when I start up the Hive server? Thanks very much~ -- Liu Liming (Andy) Tel: +86-134-7253-4429

Re: hi .... anyone know how to solve the different results between hive and a hive script

2011-09-07 Thread Jasper Knulst
Hi, 2011/9/7 Harold(陳春宏) > Hello: > > I have been analyzing Apache logs with Hive, but there is a problem. > > When I write the Hive commands in a script file and use crontab to schedule it, > > the result is different from running it in the Hive container > >

hi .... anyone know how to solve the different results between hive and a hive script

2011-09-07 Thread 陳春宏
Hello: I have been analyzing Apache logs with Hive, but there is a problem. When I write the Hive commands in a script file and use crontab to schedule it, the result is different from running it in the Hive container. The attached files show the two processes in detail: Hive_error.txt is from running the hive command in a script; Hiv

Re: Hi, all. Is it possible to generate multiple records in one SerDe?

2011-03-21 Thread 幻
Wow, is there any other way to do that? 2011/3/21 Ted Yu > I don't think so: > Object deserialize(Writable blob) throws SerDeException; > > > > On Mon, Mar 21, 2011 at 4:55 AM, 幻 wrote: > >> Hi, all. Is it possible to generate multiple records in one SerDe? I m

Re: Hi, all. Is it possible to generate multiple records in one SerDe?

2011-03-21 Thread Ted Yu
I don't think so: Object deserialize(Writable blob) throws SerDeException; On Mon, Mar 21, 2011 at 4:55 AM, 幻 wrote: > Hi, all. Is it possible to generate multiple records in one SerDe? I mean, can I > return more than one row in deserialize? > > Thanks! >