Hi,
We are getting the below error when connecting to HiveServer2 via beeline.
Connecting to jdbc:hive2://myaddress:1
Error: Could not open client transport with JDBC Uri:
jdbc:hive2://myaddress:1: Peer indicated failure: Error validating the
login (state=08S01,code=0)
No current connection
Hi Mich,
Thank you for your response.
My question is very simple. How do you process huge read-only data in
HDFS using Hive?
Regards,
Sandeep Giri,
+1 347 781 4573 (US)
+91-953-899-8962 (IN)
www.CloudxLab.com
Phone: +1 (412) 568-3901 <+1+(412)+568-3901> (Office)
beeline -u jdbc:hive2://rhes564:10010/default -d org.apache.hive.jdbc.HiveDriver -n hduser -p
>
> When I look at the permissions I see that only hdfs can write to it, not user
> Sandeep?
>
> HTH
>
> Dr Mich Talebzadeh
>
>
>
> LinkedIn
> <https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw>
>
>
>
> http://talebzadehmich.wordpress.com
>
>
>
> On 31 May 2016 at 08:50, Sandeep Giri wrote:
>
>> Hi Hive Team,
>>
>> As per my understanding, in Hive
It throws the following error:
FAILED: Execution Error, return code 1 from
org.apache.hadoop.hive.ql.exec.DDLTask.
MetaException(message:java.security.AccessControlException: Permission
denied: user=sandeep, access=WRITE,
inode="/data/SentimentFiles/SentimentFiles/upload/data/tweets_raw":hdfs:hdfs
Is it worth raising a bug in Hive?
On Thu, Mar 24, 2016 at 3:37 PM, Sandeep Khurana
wrote:
> Hello
>
> Hive provides a table-sampling approach for selecting a number of rows. The
> documentation is at
>
> https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Sampling#Lang
/HIVE-3401 .
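For reference, row-count sampling per the documentation linked above could look like the following sketch; the table name and sampling column here are hypothetical:

```sql
-- Sample a fixed number of rows (row-count sampling):
SELECT * FROM my_table TABLESAMPLE(10 ROWS) t;

-- Bucketized sampling, here over a random expression rather than a real column:
SELECT * FROM my_table TABLESAMPLE(BUCKET 1 OUT OF 32 ON rand()) t;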
--
Thanks and regards
Sandeep Khurana
hbase org.apache.hadoop.hbase.snapshot.SnapshotInfo -snapshot test_snapshot
-stats -schema
On Thu, Sep 24, 2015 at 3:43 PM, Sandeep Nemuri
wrote:
> You can check whether the snapshot is healthy or not using the below command.
>
>
> On Thu, Sep 24, 2015 at 2:55 PM, 核弹头す <510688..
/hbase-huser/hbase/.hbase-snapshot/goods_v3_hbase_snap0/.snapshotinfo
>
> at
> org.apache.hadoop.hbase.snapshot.SnapshotDescriptionUtils.readSnapshotInfo(SnapshotDescriptionUtils.java:307)
>
> at
> org.apache.hadoop.hbase.snapshot.RestoreSnapshotHelper.copySnapshotForScanner(RestoreSnapshotHelper.java:727)
>
> at
> org.apache.hadoop.hbase.mapreduce.TableSnapshotInputFormatImpl.setInput(TableSnapshotInputFormatImpl.java:364)
>
> at
> org.apache.hadoop.hive.hbase.HBaseTableSnapshotInputFormatUtil.configureJob(HBaseTableSnapshotInputFormatUtil.java:77)
>
> at
> org.apache.hadoop.hive.hbase.HBaseStorageHandler.configureTableJobProperties(HBaseStorageHandler.java:387)
>
> ... 29 more
>
--
Regards,
Sandeep Nemuri
org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
Stage: Stage-0
Fetch Operator
limit: -1
Thanks,
-sandeep
--
>>
>>
>>
>
> _
> The information contained in this communication is intended solely for the
> use of the individual or entity to whom it is addressed and others
> authorized to receive it. It may contain confidential or legally privileged
> information. If you are not the intended recipient you are hereby notified
> that any disclosure, copying, distribution or taking any action in reliance
> on the contents of this information is strictly prohibited and may be
> unlawful. If you have received this communication in error, please notify
> us immediately by responding to this email and then delete it from your
> system. The firm is neither liable for the proper and complete transmission
> of the information contained in this communication nor for any delay in its
> receipt.
>
--
--Regards
Sandeep Nemuri
On Thu, Sep 12, 2013 at 6:23 PM, Nitin Pawar wrote:
> try creating a table with your existing Mongo DB and collection and see
> whether the data can be read by the user or not.
> What you need to do is map the Mongo collection columns, with exactly the
> same names, into the Hive column definitions.
>
> if
;
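A minimal sketch of such a column mapping, assuming the mongo-hadoop Hive connector is on the classpath (table, database, and column names here are hypothetical):

```sql
CREATE EXTERNAL TABLE tweets (
  id STRING,
  user_name STRING,
  text STRING
)
STORED BY 'com.mongodb.hadoop.hive.MongoStorageHandler'
WITH SERDEPROPERTIES (
  -- Hive column -> Mongo field, names matching the collection exactly
  'mongo.columns.mapping' = '{"id":"_id","user_name":"user_name","text":"text"}'
)
TBLPROPERTIES ('mongo.uri' = 'mongodb://localhost:27017/mydb.tweets');
```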
On Thu, Sep 12, 2013 at 5:02 PM, Nitin Pawar wrote:
> If you are importing from Hive to Mongo, why can't you just select from the
> Mongo table and insert into the Hive table?
>
>
> On Thu, Sep 12, 2013 at 4:24 PM, Sandeep Nemuri wrote:
>
>> Hi Nitin Pawar,
>>
ive-mongo).
>
> It's pretty easy to use as well, if you want to start with analytics
> directly.
>
>
> On Thu, Sep 12, 2013 at 2:02 PM, Sandeep Nemuri wrote:
>
>> Thanks, all.
>> I am trying to import data with this program,
>> but when I compiled this code I got an error
use the Pig + MongoDB combination to get
>> the data from MongoDB through Pig; afterwards you can create a table
>> in Hive that points to the Pig output file on HDFS.
>>
>> https://github.com/mongodb/mongo-hadoop/blob/master/pig/README.md
>>
>>
Hi everyone,
I am trying to import data from MongoDB to Hive. I
got some jar files to connect Mongo and Hive.
Now, how do I import the data from MongoDB to Hive?
Thanks in advance.
--
--Regards
Sandeep Nemuri
some kind of
> function between the two "ids" in the two tables. That way you could join
> on A.id1 = function(B.id2); otherwise the only other thing I can think of
> is to use the ROW_NUMBER() analytics function in Hive 0.11 and join on that,
> if it is indeed random.
>
>
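The ROW_NUMBER() approach mentioned above could be sketched like this, pairing rows by position (assuming Hive 0.11+ windowing; the ordering columns are a hypothetical choice):

```sql
SELECT a.id1, a.var1, a.var2, b.id2, b.var3, b.var4
FROM (SELECT id1, var1, var2,
             ROW_NUMBER() OVER (ORDER BY id1) AS rn
      FROM table_A) a
JOIN (SELECT id2, var3, var4,
             ROW_NUMBER() OVER (ORDER BY id2) AS rn
      FROM table_B) b
  ON a.rn = b.rn;
```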
: Re: How to perform arithmetic operations in hive
>
> Try
>
> select emp_name, (emp_no * 10) from emp_table;
>
> Sent from my iPhone
>
> On Aug 22, 2013, at 8:14 AM, Sandeep Nemuri wrote:
>
> Hi all ,
> Can we perform arithmetic operator on *select
.
--Regards
Sandeep Nemuri
Hi all,
I want to join two tables.

I have table_A:
id1  var1  var2
1    a     b
2    c     d

Table_B:
id2  var3  var4
3    e     f
4    g     h

Expected output is:
id1  var1  var2  id2  var3  var4
1    a     b     3    e     f
2    c     d     4    g     h

Thanks in advance.
--
--Regards
Sandeep Nemuri
Hi,
Thank you all for your help. I'll try both ways and I'll get back to you.
On Fri, Sep 7, 2012 at 11:02 AM, Mohammad Tariq wrote:
> I said this assuming that a Hadoop cluster is available since Sandeep is
> planning to use Hive. If that is the case then MapReduce would be f
R&D Data Team
>
> Burlington, MA
>
> 781-565-4611
>
> ** **
>
> *From:* Sandeep Reddy P [mailto:sandeepreddy.3...@gmail.com]
> *Subject:* How to load csv data into HIVE
>
> ** **
>
> Hi,
> Here is the sample data
> "174969274",
org.apache.hadoop.hive.ql.metadata.HiveException: Hive
Runtime Error while processing row {"foo":98,"bar":"abc"}
at
org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:546)
at org.apache.hadoop.hive.ql.exec.ExecMapper.map(ExecMapper.java:143)
... 8 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java
--
Thanks,
sandeep
Hi,
Can we consider using HBase for the same?
On Thu, Aug 9, 2012 at 1:19 PM, Sandeep Reddy P wrote:
> Thank you all for the info.
>
>
> On Thu, Aug 9, 2012 at 12:30 PM, Bob Gause wrote:
>
>> Hive has no update & delete statements.
>>
>> You can drop a tab
ons out into
> intermediate temp tables. We have a lot more tables in our Hive process
> than we had in our MySQL/Postgres process.
>
> Hope this helps….
> Bob
>
> Robert Gause
> Senior Systems Engineer
> ZyQuest, Inc.
> bob.ga...@zyquest.com
>
> On Aug 9, 2012, a
Hi Bejoy,
Thanks for the link. When you say updates are not supported directly, is
there any other way we can update data in HDFS/Hive?
On Thu, Aug 9, 2012 at 10:30 AM, Bejoy Ks wrote:
> Hi Sandeep
>
> If you are looking at inserting more data into existing tables that has
> data, t
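Since this version of Hive has no UPDATE statement, the usual workaround is to rewrite the whole table (or partition) with the changed rows; a sketch, with table and column names hypothetical:

```sql
-- Rewrite the table, substituting the new value for the matching row.
INSERT OVERWRITE TABLE target_table
SELECT id,
       CASE WHEN id = 42 THEN 'new_value' ELSE value END AS value
FROM target_table;
```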
>>>>>
>>>>> COLLECTION ITEMS TERMINATED BY '\002'
>>>>>
>>>>> MAP KEYS TERMINATED BY '\003'
>>>>>
>>>>> STORED AS INPUTFORMAT
>>>>> 'com.hadoop.mapred.DeprecatedLzoTextInputFormat' OUTPUTFORMAT
>>>>> 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat';
>>>>>
>>>>>
>>>>>
>>>>> Or like this:
>>>>>
>>>>> ROW FORMAT DELIMITED
>>>>>
>>>>> FIELDS TERMINATED BY '\t'
>>>>>
>>>>> STORED AS INPUTFORMAT 'org.apache.hadoop.mapred.TextInputFormat'
>>>>> OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat';
>>>>>
>>>>>
>>>>> where can i find that info?
>>>>>
>>>>>
>>>>> Thanks in advance.
>>>>>
>>>>>
>>>>
>>>
>>
>
--
Thanks,
sandeep
te it!
_
From: Sreekanth Ramakrishnan [mailto:sreer...@yahoo-inc.com]
Sent: Monday, January 03, 2011 2:42 PM
To: user@hive.apache.org; sandeep
Subject: Re: Regarding Number of Jobs Created for One Hive-Query
Hi Sandeep,
If you try running the explain query it will show you number of stages wh
Hi,
While executing a query from Hive, a job gets created.
Please let me know the mapping between a Hive query and the jobs that are
created for that query. Is it one-to-one?
Thanks
sandeep
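As Sreekanth notes above, EXPLAIN shows how a query breaks into stages, so the mapping is generally one query to one or more jobs rather than strictly one-to-one. A sketch (table and column names hypothetical):

```sql
EXPLAIN
SELECT dept, COUNT(*) FROM emp_table GROUP BY dept;
-- The plan lists stages (e.g. a map/reduce stage and a fetch stage);
-- each map/reduce stage runs as its own MapReduce job.
```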