You can create an external table to make your data visible in Hive.
On Jul 11, 2012, at 7:39 AM, shaik ahamed wrote:
> Hi All,
>
> As I have data of 100 GB in HDFS, I want to move or copy this 100 GB file
> to the Hive directory or path. How can I achieve this?
Hi Shaik
If you already have the data in HDFS, then just create an external table with
that HDFS location; you'll have the data in your Hive table.
Or, if you want a managed table, you can use a LOAD DATA statement.
It'd be faster as well, since it is an HDFS move operation under the hood.
Try it out using "distcp" command.
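A minimal sketch of both approaches; the table names, column layout, and HDFS path below are hypothetical placeholders, not from Shaik's cluster:

```sql
-- External table: Hive reads the files in place; nothing is moved.
-- '/user/shaik/data' stands in for wherever the 100 GB actually lives.
CREATE EXTERNAL TABLE my_data (col1 STRING, col2 INT)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
LOCATION '/user/shaik/data';

-- Managed table: LOAD DATA ... INPATH moves the files from their current
-- HDFS location into Hive's warehouse directory for this table.
CREATE TABLE my_data_managed (col1 STRING, col2 INT)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';

LOAD DATA INPATH '/user/shaik/data' INTO TABLE my_data_managed;
```

Note that LOAD DATA with INPATH (as opposed to LOCAL INPATH) is a rename within HDFS rather than a copy, which is why it stays fast even at 100 GB.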
Regards,
Mohammad Tariq
On Wed, Jul 11, 2012 at 8:09 PM, shaik ahamed wrote:
> Hi All,
>
> As I have data of 100 GB in HDFS, I want to move or copy this 100 GB file
> to the Hive directory or path. How can I achieve this?
>
> Is there any command to do this?
Can you tell us:
1) How many nodes are there in the cluster?
2) Are there any connectivity problems if the number of nodes > 3?
3) If you have just one slave, do you have a higher replication factor?
4) What compression are you using for the tables?
5) If you have a DHCP-based network, did your slave machine's IP address change?
Hi ,
Below is the error I found in the JobTracker log file:
*Shuffle Error: Exceeded MAX_FAILED_UNIQUE_FETCHES; bailing-out*
Please help me in this ...
*Thanks in Advance*
*Shaik.*
On Fri, Jul 6, 2012 at 5:22 PM, Bejoy KS wrote:
> Hi Shaik
>
> There is some error while MR jobs are running.
Hi Shaik
There is some error while the MR jobs are running. To get the root cause, please
post the error log from the failed task.
You can browse the JobTracker web UI, choose the right job ID, and drill
down to the failed tasks to get the error logs.
Regards
Bejoy KS
Hi Shaik
At first glance: since you are using a dynamic partition insert, the partition
column should be the last column of the SELECT query used in the INSERT OVERWRITE.
Modify your insert as:
INSERT OVERWRITE TABLE vender_part PARTITION (order_date) SELECT
vender, supplier, quantity, order_date FROM ve
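A sketch of the complete sequence; the source table name vender_details is a guess, since the actual name is cut off above:

```sql
-- Dynamic partition inserts require these settings to be enabled first;
-- nonstrict mode allows all partition columns to be dynamic.
SET hive.exec.dynamic.partition = true;
SET hive.exec.dynamic.partition.mode = nonstrict;

-- The partition column (order_date) must come last in the SELECT list,
-- because Hive maps the trailing SELECT columns to the PARTITION clause
-- by position, not by name.
INSERT OVERWRITE TABLE vender_part PARTITION (order_date)
SELECT vender, supplier, quantity, order_date
FROM vender_details;
```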