You can follow the link below:
https://stackoverflow.com/questions/21329856/how-to-use-hive-without-hadoop
Hope it will resolve all your queries.
Thanks,
Balajee Venkatesh
On Mon 26 Nov, 2018, 12:28 PM Ravinder Bahadur wrote:
> Hi,
>
> I am a newbie in this field and need your help. I have the
It's far easier to perform the file write through Hive itself than to
resort to unnecessary Excel and Power operations. Please refer to the link
below for reference.
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DML
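For instance, writing query results straight to a directory looks roughly
like this (a minimal sketch; the path and table name are hypothetical):

  -- Write the result set out as comma-delimited files under the given path
  INSERT OVERWRITE DIRECTORY '/tmp/my_export'
  ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
  SELECT * FROM my_table;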
Thanks,
Balajee
On Tue 15 May, 2018, 11:38 PM Grim, Paul wrote:
OK. There are 8 tables. Do you see any conditional match based on some
keys? Maybe a well-designed left/right join would help you tackle your use
case. You can also look at the MERGE or UNION options. Once you are able to
correlate those tables and design a query which would give you
Overwrite your table by taking a left join with another table, as sketched
below. Let me know if you want any syntactical help.
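For example, something along these lines (a sketch only; the table and
column names are hypothetical):

  -- Rebuild the target, taking the source's value wherever the keys match
  INSERT OVERWRITE TABLE target
  SELECT
    t.id,
    COALESCE(s.new_value, t.value) AS value
  FROM target t
  LEFT OUTER JOIN source s
    ON (t.id = s.id);

Since Hive tables are typically not transactional, rewriting the table with
INSERT OVERWRITE is often more practical than a row-level UPDATE.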
On 14-Feb-2018 12:54 PM, "Andy Srine" wrote:
> Hi Team,
>
> What's the best way to do an update on one table from another? Variations
> of the syntax below don't seem to work:
>
> UPDATE
Thanks, guys!
It was as simple as running a simple Unix command to clean the source data:
sed -i 's/\r//g' filename
Assuming that the special characters were added by the Windows platform, as
mentioned by Shakti Singh, one easy way to clean up the file is the
command “*dos2unix filename*”.
difficult to manually traverse the file to find
the root cause.
I suspected the presence of some '\n' characters in the file but couldn't
spot any such character.
Any insights on what my approach should be to resolve this issue?
Thanks,
Balajee Venkatesh
Try Flume.
I hope it helps you move the data efficiently.
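If Flume turns out to be heavyweight for a one-off copy, Hive's built-in
EXPORT/IMPORT statements are another option (a sketch; the table name and
HDFS paths are hypothetical):

  -- On prod: dump the table's data and metadata to an HDFS location
  EXPORT TABLE my_table TO '/user/hive/exports/my_table';

  -- On dev: recreate the table from the exported copy
  IMPORT TABLE my_table FROM '/user/hive/exports/my_table';

The exported directory can be copied between clusters with distcp before
running the import on the dev side.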
Thanks,
Balajee Venkatesh
On 17-Feb-2017 10:15 pm, "rakesh sharma" wrote:
Hi
I have a peculiar problem of moving Hive data from one Hive environment,
say prod, to another, say dev. The data runs into crores of rows. How can
Hi Divya,
If you don't specify the join type, then by default Hive performs an inner
join on the tuples under action.
So there is nothing specific to point to as a difference between JOIN and
INNER JOIN in Hive.
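For example, these two queries are equivalent (the table and column names
are hypothetical):

  SELECT a.id, b.name FROM a JOIN b ON (a.id = b.id);        -- defaults to an inner join
  SELECT a.id, b.name FROM a INNER JOIN b ON (a.id = b.id);  -- same result, stated explicitly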
Hope it helps.
Thanks,
Balajee Venkatesh
On 12-Feb-2017 9:32 am, "Divya Gehl
This behaviour is limited to partitioned column/columns only. I
can see automatic type conversion and data loading for non-partitioned
tables.
Can you please look into this and suggest whether this is a bug or
standard behaviour for partitioned columns where the data types of the
source and target partitioned fields are different?
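For context, a minimal sketch of the kind of setup being described (all
table, column, and type names are hypothetical):

  -- Partition column types differ between source and target
  CREATE TABLE src (id INT, dt INT);
  CREATE TABLE tgt (id INT) PARTITIONED BY (dt STRING);

  SET hive.exec.dynamic.partition=true;
  SET hive.exec.dynamic.partition.mode=nonstrict;

  -- Reported: non-partition columns convert implicitly on load,
  -- but the partition column does not get the same treatment
  INSERT OVERWRITE TABLE tgt PARTITION (dt)
  SELECT id, dt FROM src;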
Thanks & Regards,
Balajee Venkatesh