Hive integration is a big effort, I would guess.
On Tue, Apr 10, 2018 at 12:46 AM, Ashutosh Chauhan
wrote:
> Hi Amit,
>
> Yes, only MySQL and Postgres are supported for Druid metadata storage.
> That's because Druid only supports these. You mentioned that Hive and Druid
> are working
Hive Druid Integration:
I have Hive and Druid working independently.
But I am having trouble connecting the two.
I don't have Hortonworks.
I have Druid using sqlserver as metadata store database.
When I try setting this property in Beeline,
set hive.druid.metadata.db.type=sqlserver;
I get
test
Thanks & Regards,
Amit Kumar,
Scientist B,
Mob: 9910611621
(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
INFO cli.LlapServiceDriver: LLAP service driver finished
Thanks & Regards,
Amit Kumar,
Mob: 9910611621
On Sat, Jul 22, 2017 at 5:00 PM, Amit Kumar wrote:
> Hi,
>
> I have installed hadoop 2.7.2 and hive 2.1.1. Successful
mpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Thanks & Regards,
Amit Kumar,
Mob: 9910611621
$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Query:
insert into table tableB select col1, col2, col3, col4, col5, col6, col7, col8
from tableA
Thanks
Amit
Legal Disclaimer:
The information contained in this message may be privileged and confidential.
It is
ample - GUAD,MONT,DF,CAN,OXA
Thanks
Amit
Hi,
I am trying to understand how Hive reads its configuration from
hive-site.xml: where the structure of the XML file is defined, and which
code reads hive-site.xml.
Thanks
Amit
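As an aside, hive-site.xml follows Hadoop's standard `<configuration>`/`<property>` XML layout, and Hive's `HiveConf` class (which extends Hadoop's `Configuration`) parses it at startup. A minimal sketch of that layout and of pulling out the name/value pairs, using plain ElementTree only to illustrate the format (the property values below are illustrative):

```python
# Illustrative hive-site.xml content; the values are examples only.
import xml.etree.ElementTree as ET

HIVE_SITE = """\
<configuration>
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/user/hive/warehouse</value>
  </property>
  <property>
    <name>hive.execution.engine</name>
    <value>mr</value>
  </property>
</configuration>
"""

def parse_hive_site(xml_text):
    """Return a dict of property name -> value from hive-site.xml text."""
    root = ET.fromstring(xml_text)
    return {p.findtext("name"): p.findtext("value")
            for p in root.findall("property")}

props = parse_hive_site(HIVE_SITE)
print(props["hive.execution.engine"])  # mr
```

In the real code path, Hive does this parsing through Hadoop's `Configuration` machinery rather than by hand.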
e.printStackTrace();
}
From: Markovitz, Dudu [mailto:dmarkov...@paypal.com]
Sent: Friday, August 05, 2016 3:04 PM
To: user@hive.apache.org
Subject: RE: Error running SQL query through Hive JDBC
Can you please share the query?
From: Amit Bajpai [mai
Hi,
I am getting the below error when running a SQL query through Hive JDBC.
Can you suggest how to fix it?
org.apache.hive.service.cli.HiveSQLException: Error while compiling statement:
FAILED: SemanticException UDF = is not allowed
at org.apache.hive.jdbc.Utils.verifySuccess(
You need to increase the value of the Hive property below in Ambari:
hive.server2.tez.sessions.per.default.queue
If this does not fix the issue then you need to update the capacity scheduler
property values.
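For reference, outside Ambari the same setting lives in hive-site.xml; a sketch (the value shown is illustrative, not a recommendation):

```xml
<property>
  <name>hive.server2.tez.sessions.per.default.queue</name>
  <value>2</value>
</property>
```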
From: Raj hadoop [mailto:raj.had...@gmail.com]
Sent: Wednesday, August 03, 2016 8
MySQL's "SHOW PROCESSLIST"
(and equivalent commands in most other databases).
From: Amit Bajpai [mailto:amit.baj...@flextronics.com]
Sent: Thursday, July 14, 2016 10:22 PM
To: user@hive.apache.org<mailto:user@hive.apache.org>
Subject: Yarn Application ID for Hive query
Hi,
# Reconstructed preamble (the original message was truncated); this follows
# the pyhs2 connection pattern, with placeholder host/port values.
import pyhs2

with pyhs2.connect(host='localhost', port=10000, authMechanism='PLAIN',
                   user='amit',
                   password='amit',
                   database='default') as conn:
    with conn.cursor() as cur:
        # Execute query
        cur.execute("SELECT COMP_ID, COUNT(1) FROM tableA GROUP BY COMP_ID")
        # Fetch table results
        print(cur.fetchall())
ity.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
Not sure why it is failing on the server. If anyone can kindly point it
out, that would be great.
Thanks,
Amit
to django
> see this on how to clear sessions from django
>
> http://www.opencsw.org/community/questions/289/how-to-clear-the-django-session-cache
>
> On Fri, May 15, 2015 at 12:24 PM, amit kumar wrote:
>
>> Yes, it is happening for Hue only. Can you please suggest how I clea
om server. (this may
> clean all users active sessions from hue so be careful while doing it)
>
>
>
> On Fri, May 15, 2015 at 11:31 AM, amit kumar wrote:
>
>> i am using CDH 5.2.1,
>>
>> Any pointers will be of immense help.
>>
>>
>>
>>
I am using CDH 5.2.1.
Any pointers will be of immense help.
Thanks
On Fri, May 15, 2015 at 9:43 AM, amit kumar wrote:
> Hi,
>
> After re-create my account in Hue, i receives “User matching query does
> not exist” when attempting to perform hive query.
>
> The query
Hi,
After re-creating my account in Hue, I receive “User matching query does
not exist” when attempting to run a Hive query.
The query succeeds on the Hive command line.
Please advise.
Thanks you
Amit
Thank you Jason, I will upgrade to Hive 0.14 and verify that the bug is fixed.
On Fri, May 8, 2015 at 1:43 AM, Jason Dere wrote:
> The javaXML issue referenced by that bug should be fixed by hive-0.14 ..
> note the original poster was using hive-0.13
>
>
> On May 7, 2015, at 12:48 PM, am
Jason,
The last comment is "This has been fixed in 0.14 release. Please open new
jira if you see any issues."
is this issue resolved in hive 0.14 ?
On Tue, May 5, 2015 at 11:36 PM, Jason Dere wrote:
> Looks like you are running into
> https://issues.apache.org/jira/browse/HIVE-8321, fixed i
What error are you getting after specifying javaXML in place of kryo?
On Wed, May 6, 2015 at 12:44 AM, Bhagwan S. Soni
wrote:
> Please find attached error log for the same.
>
> On Tue, May 5, 2015 at 11:36 PM, Jason Dere wrote:
>
>> Looks like you are running into
>> https://issues.apache.org/j
Doug,
Do I need any configuration changes to resolve this issue?
Thanks
On Tue, May 5, 2015 at 4:46 AM, amit kumar wrote:
> Do you have any suggestion to resolve this issue,
>
> I am looking for a resolution.
>
> On Tue, May 5, 2015 at 4:42 AM, Moore, Douglas
nks for the update!
>
> - Douglas
>From: amit kumar
> Reply-To:
> Date: Tue, 5 May 2015 04:40:18 +0530
> To:
> Subject: Re: Unable to move files on Hive/Hdfs
>
> Hi Doug,
>
> I have use CDH 5.2.1
>
> I performed the below task, and getting the error,
:36 AM, amit kumar wrote:
> Hi Doug,
>
> I have use CDH 5.2.1
>
> Disable ACLs on Name Nodes
>
>
> Set Enable Access Control Lists = False
>
> Save Changes
>
> Restart Hadoop Cluster
>
>
>
> Stack trace:
>
> 2015-05-04 10:38:18,820 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAclStatus(FSNamesystem.java:8553)
After rolling those same changes out, the problem resolved itself.
On Tue, May 5, 2015 at 4:28 AM, Moore, Douglas <
douglas.mo...@thinkbiganalytics.com> wrote:
> Hi Amit,
>
> We've seen the same error on MoveTask with
While moving data from Hive/HDFS we get the error below.
Please advise.
Moving data to:
hdfs://nameservice1/tmp/hive-srv-hdp-edh-d/hive_2015-05-04_10-02-39_841_5305383954203911235-1/-ext-1
Failed with exception Unable to move
sourcehdfs://nameservice1/tmp/hive-srv-hdp-edh-d/hive_2015
Hi User,
I want to know the difference in Hive query execution time between using
SSDs and HDDs for HDFS storage.
Thanks,
Amit
2014-11-26 20:21:53,923 Stage-1 map = 99%, reduce = 10%, Cumulative CPU
29516.21 sec
2014-11-26 20:22:53,935 Stage-1 map = 99%, reduce = 10%, Cumulative CPU
29552.95 sec
Please help me find a solution.
Thanks
Amit
Hi Daniel,
Thank you, it's running fine.
*Another question:*
Could you please tell me what to do if I get a *Shuffle Error*?
I once got this type of error while running a join query between a 300GB
table and a 20GB table.
Thanks
Amit
On Mon, Nov 24, 2014 at 11:13 PM, Daniel Haviv <
daniel
achines and try again.
>
> Daniel
>
> On 24 בנוב׳ 2014, at 19:26, Amit Behera wrote:
>
> I did not modify in all the slaves. except slave
>
> will it be a problem ?
>
> But for small data (up to 20 GB table) it is running and for 300GB table
> only count(*) running
* except slave6, slave7, slave8
On Mon, Nov 24, 2014 at 10:56 PM, Amit Behera wrote:
> I did not modify in all the slaves. except slave
>
> will it be a problem ?
>
> But for small data (up to 20 GB table) it is running and for 300GB table
> only count(*) running sometimes an
I did not modify it on all the slaves, except slave
Will it be a problem?
For small data (up to a 20 GB table) it runs, but for the 300GB table
only count(*) runs sometimes, and sometimes it fails.
Thanks
Amit
On Mon, Nov 24, 2014 at 10:37 PM, Daniel Haviv <
daniel.ha...@veracity-group.
hi Daniel,
This stack trace is the same for other queries.
On different runs I get slave7, sometimes slave8...
I have also registered all machine IPs in /etc/hosts.
Regards
Amit
On Mon, Nov 24, 2014 at 10:22 PM, Daniel Haviv <
daniel.ha...@veracity-group.com> wrote:
> It seems
otal MapReduce CPU Time Spent: 10 minutes 25
seconds 190 mse
Please help me to fix the issue.
Thanks
Amit
hi Devopam,
Thank you for replying.
I am using Hue on top of Hive. Can you please explain how Oozie would help
me, and how I can integrate Oozie with this setup?
Thanks
Amit
On Fri, Nov 7, 2014 at 7:58 PM, Devopam Mittra wrote:
> hi Amit,
> Please try to see if Hive CLI (client) ins
Hi users,
I have Hive set up on a multi-node Hadoop cluster.
I want to run multiple queries on a table from different machines.
Please advise on how to allow multiple clients to access Hive and run
queries simultaneously.
Thanks
Amit
://hortonworks.com/blog/how-to-plan-and-configure-yarn-in-hdp-2-0/
Thanks
On Tue, Sep 9, 2014 at 8:53 PM, Amit Dutta wrote:
Thanks a lot for your reply..I changed the following parameters from Cloudera
manager
mapred.tasktracker.map.tasks.maximum = 2 (it was 1 before)
mapred.tasktracker.reduce.tasks.maximum
,Amit
Subject: Re: PIG heart beat freeze using hue + cdh 5.1
From: zenon...@gmail.com
Date: Tue, 9 Sep 2014 20:34:19 -0400
To: user@hive.apache.org
It use Yarn now you need to set your container resource memory and CPU then set
the mapreduce physical memory and CPU cores the number of mapper and
I think one of the issues is the number of MapReduce slots for the cluster.
Can anyone please let me know how to increase the MapReduce slots?
From: amitkrdu...@outlook.com
To: user@hive.apache.org
Subject: PIG heart beat freeze using hue + cdh 5.1
Date: Tue, 9 Sep 2014 17:55:01 -0500
Hi I have
Hi
Could anyone please let me know how to increase the MapReduce slots? I am
getting an infinite heartbeat when I run a Pig script from Hue (Cloudera
CDH 5.1).
Thanks, Amit
Hi, I have only 604 rows in the Hive table.
When I use A = LOAD 'revenue' USING org.apache.hcatalog.pig.HCatLoader(); DUMP
A; it starts printing "heart beat" repeatedly and never leaves this state.
Can someone please help? I am getting the following exception:
2014-09-09 17:27:45,844 [JobControl] IN
Make sure there are no primary key clashes. HBase will overwrite the row if
you upload data with the same primary key. That's one reason you can end up
with fewer rows than you uploaded.
Sent from my mobile device, please excuse the typos
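The overwrite behaviour described above can be illustrated with a plain Python dict standing in for a table keyed by row key (an analogy only, not HBase client code; the keys and values are made up):

```python
# A dict standing in for an HBase table keyed by row key: a put with
# an existing key silently overwrites the stored row, so N uploaded
# records with K distinct keys leave only K rows behind.
table = {}

uploads = [("row1", "a"), ("row2", "b"), ("row1", "c")]  # 3 records, 2 keys
for key, value in uploads:
    table[key] = value  # same key -> overwrite, as in an HBase put

print(len(table))     # 2
print(table["row1"])  # c  (the last write wins)
```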
> On May 1, 2014, at 3:34 PM, "Kennedy, Sean C." wro
ut 25%, yeah, but nowhere close to the multiples I was hoping for. I changed
the striping to 4MB and tried creating an index every 10k rows. I inserted 6
million rows and ran many different types of queries. Any ideas what I might
be missing?
Amit
Sent from my mobile device, please excuse the typos
h ORC storage..
Any pointers would be very helpful.
Amit
Error: java.lang.RuntimeException: Hive Runtime Error while closing
operators
at
org.apache.hadoop.hive.ql.exec.mr.ExecMapper.close(ExecMapper.java:240)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:61)
cluded the step below.
Any pointers would be appreciated.
Amit
I have a single node setup with minimal settings. JPS output is as follows
$ jps
9823 NameNode
12172 JobHistoryServer
9903 DataNode
14895 Jps
11796 ResourceManager
12034 NodeManager
*Running Hadoop 0.2.2 with Yarn.*
Step1
CREATE T
are not used, and the table names somehow map to similar hashcode values?
Also, is changing the alias the only workaround for this problem, or is
another workaround possible?
Thanks,
Amit
On Sun, Aug 11, 2013 at 9:22 PM, Navis류승우 wrote:
> Hi,
>
> Hive is notorious making
On Thu, Sep 27, 2012 at 10:56 AM, Amit Sangroya wrote:
> Hello everyone,
>
> I am experiencing that Hive v-0.9.0 works with hadoop 0.20.0 only in
> default scheduling mode. But when I try to use the "Fair" scheduler using
> this configuration, I see that map reduce do
?
Thanks,
Amit
On Sun, Apr 1, 2012 at 12:35 PM, Bejoy Ks wrote:
> Hi
> On a first look, it seems like map join is happening in your case
> other than bucketed map join. The following conditions need to hold for
> bucketed map join to work
> 1) Both the tables are bucketed on the j
Is that expected behaviour? Should
it not create these hash maps on the corresponding mappers in parallel?
Thanks,
Amit
On Thu, Jan 19, 2012 at 9:22 AM, Bejoy Ks wrote:
> Hi Avrila
>AFAIK the bucketed map join is not default in hive and it happens
> only when the values is set to tr
the User with which the hive server is running, and connects to
the default database.
What version of Toad for Cloud are you using?
Thanks,
Amit
On Tue, Jan 31, 2012 at 10:59 AM, Sriram Krishnan wrote:
> I was under the impression that Toad uses JDBC – and AFAIK there is no
> way to authen
Do you know any way this can be done in "Hive Server"?
Amit
On Tue, Aug 23, 2011 at 11:21 AM, Chinna Rao Lalam 72745 <
chinna...@huawei.com> wrote:
>
> Hi Amit,
>
> Pls check issue HIVE-1405; it will help you. This issue targets the same
> scenario
Hi Chinna,
That worked, thanks a lot. So once the jar is picked up, is there a way
to create a temporary function that is retained even if I quit the
interactive shell and start it again? Or do I have to use the CREATE
command to register the function every time?
Thanks.
Amit
On Mon, Aug 22
Hi Vaibhav,
Excuse my ignorance, as I'm a little new to Hive. What do you mean by
restarting the Hive Server? I am using the Hive interactive shell for my
work, so I start the shell after modifying the config variable. Which
server do I need to restart?
Amit
On Mon, Aug 22, 2011 at 2:49 PM
r the
function using "create temporary function  as 'function'", it
cannot find the jar. Any idea what's going on here?
Here is the snippet from hive-site.xml:
~/Documents/workspace/Hive_0_7_1/build/dist/conf$grep aux hive-site.xml
hive.aux.jars.path/Users/amsharma/dev/Perforce/development/depot/dataeng/hive/dist
Amit
Hi,
I am also trying the same but don't know the exact build steps. Could
someone please share them?
-regards
Amit
From: Jean-Charles Thomas
To: Hive mailing list
Sent: Tue, 22 March, 2011 11:40:18 AM
Subject: Hive/hbase integration - Rebuild the St
And hive replaces the MIN_AGE parameter automatically.
-amit
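The substitution mentioned above can be sketched as follows. This is a hedged illustration of Hive-style `${hivevar:...}` expansion implemented in plain Python for clarity; the query text and the MIN_AGE variable are examples, not Hive's actual implementation:

```python
# Sketch: replace ${hivevar:NAME} tokens in a query with supplied values,
# mimicking what Hive's variable substitution does before execution.
import re

def substitute_hivevars(query, hivevars):
    """Expand ${hivevar:NAME} references using the hivevars dict."""
    return re.sub(r"\$\{hivevar:(\w+)\}",
                  lambda m: hivevars[m.group(1)], query)

q = "SELECT * FROM users WHERE age >= ${hivevar:MIN_AGE}"
print(substitute_hivevars(q, {"MIN_AGE": "25"}))
# SELECT * FROM users WHERE age >= 25
```

In actual Hive usage the variable would be supplied on the command line, e.g. with `--hivevar MIN_AGE=25`.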
Hi,
I am also facing the same issue (hive-0.7, hbase-0.90.1, hadoop-0.20.2).
Any help?
-amit
From: Bennie Schut
To: "user@hive.apache.org"
Sent: Wed, 9 March, 2011 4:39:49 AM
Subject: hive hbase handler metadata NullPointerException
Hi All,
I