Hello Ravi,
When you wget this URL (quoted so the shell does not treat the ampersands as background operators)
wget 'http://:9091/schema?name=ed&store=parquet&isMutated=true&table=ed&secbypass=testing'
do you get the .avsc file?
Regards,
Jagat Singh
On Sat, 27 Jun 2020, 7:01 am ravi kanth, wrote:
> Just want to follow up on the below email.
>
Hi,
Is it possible to do a bulk load using files into a Hive table backed by
transactions, instead of using UPDATE statements?
Thanks
> be the problem you're having. Have you tried your query with MapReduce?
>
> On Sun, Nov 1, 2015 at 5:32 PM, Jagat Singh wrote:
>
>> One interesting message here , *No plan file found: *
>>
>> 15/11/01 23:55:36 INFO exec.Utilities: No plan file found: hdfs://
One interesting message here , *No plan file found: *
15/11/01 23:55:36 INFO exec.Utilities: No plan file found: hdfs://
sandbox.hortonworks.com:8020/tmp/hive/root/119652ff-3158-4cce-b32d-b300bfead1bc/hive_2015-11-01_23-54-47_767_5715642849033319370-1/-mr-10003/40878ced-7985-40d9-9b1d-27f06acb1bef
Hi,
I am trying to run Hive on Spark on the HDP 2.3 virtual machine,
following this wiki:
https://cwiki.apache.org/confluence/display/Hive/Hive+on+Spark%3A+Getting+Started
I have replaced all occurrences of hdp.version with 2.3.0.0-2557.
I start Hive with the following:
set hive.execution.engine=spark;
set
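For reference, the remaining properties that wiki walks through look roughly
like this (a sketch only; the master URL and memory value below are
assumptions, not taken from this message):
set spark.master=yarn-client;
set spark.eventLog.enabled=true;
set spark.executor.memory=512m;
set spark.serializer=org.apache.spark.serializer.KryoSerializer;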
We are using Hive 0.14.
Our input file size is around 100 GB uncompressed.
We are inserting this data into a Hive table that is ORC based, with ZLIB compression.
While inserting we are also using the following two parameters:
SET hive.exec.reducers.max=10;
SET mapred.reduce.tasks=5;
The output ORC file produced
Did you do table column stats?
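For context, column stats are gathered with ANALYZE TABLE; a minimal sketch
(the table name here is hypothetical):
ANALYZE TABLE orders COMPUTE STATISTICS FOR COLUMNS;
With stats in place, setting hive.compute.query.using.stats=true lets Hive
answer some count/min/max queries from the metastore without launching a job.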
On 30 May 2015 9:04 am, "sreejesh s" wrote:
> Hi,
>
> I am new to Hive; please help me understand the benefit of the ORC file
> format storing Sum, Min, Max values.
> Whenever we try to find the sum of values in a particular column, it still
> runs a MapReduce job.
>
> s
Hi,
How do I monitor logs for Hive Tez jobs?
In the shell I can see the progress of the Hive job.
If I click the Application Master link on the RM I get the following error:
Thanks,
HTTP ERROR 500
Problem accessing /proxy/application_1431929898495_21650/. Reason:
Connection refused
Caused by:
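One way to get at the logs directly, assuming YARN log aggregation is
enabled, is to pull them with the application id from the proxy URL above:
yarn logs -applicationId application_1431929898495_21650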
Can you please share the command you are trying to run?
Thanks
On Thu, Jul 10, 2014 at 10:32 AM, wenlong...@changhong.com <
wenlong...@changhong.com> wrote:
> Hi guys,
>
> Can anybody tell me why my Hive (0.12.0) cannot load data from HDFS
> when the filename is the same?
> But I ca
Hi,
Does the user running the query have rights to access the file?
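One quick check is to list the jar as the same user that runs the query; a
sketch using the path quoted below:
hadoop fs -ls hdfs://10.37.83.117:9000/user/ipg_intg_user/AP/scripts/lib/wsaUtils.jar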
Thanks
On Wed, Jul 2, 2014 at 1:51 PM, wrote:
> Hi,
>
> Cannot add a jar to hive classpath.
>
> Once I launch HIVE, I type -> ADD JAR hdfs://10.37.83.117:9000/user/ipg_intg_user/AP/scripts/lib/wsaUtils.jar;
>
>
>
> I
It's defined in build.properties.
You can try changing it there and rebuilding.
http://svn.apache.org/viewvc/hive/trunk/build.properties?revision=1521520&view=markup
On 21/09/2013 8:19 PM, "wesley dias" wrote:
> Hello Everyone,
>
> I am new to hive and I had a query related to building the Hive package
rk. We need to use Java APIs
>
>
> 2013/9/13 Jagat Singh
>
>> Hi
>>
>> You can use distributed cache and hive add file command
>>
>> See here for example syntax
>>
>>
>> http://stackoverflow.com/questions/15429040/add-multiple-file
Hi
You can use the distributed cache and the Hive ADD FILE command.
See here for example syntax:
http://stackoverflow.com/questions/15429040/add-multiple-files-to-distributed-cache-in-hive
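A minimal sketch of that approach (the file path is hypothetical):
ADD FILE /home/user/lookup.txt;
LIST FILES;
The added file is shipped to each task node via the distributed cache and is
readable there by its basename.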
Regards,
Jagat
On Sat, Sep 14, 2013 at 9:57 AM, Stephen Boesch wrote:
>
> We have a UDF that is configur
Adding to Sanjay's reply
The only thing left after Flume has added the partitions is to tell the Hive
metastore to update its partition information, which you can do via the
ADD PARTITION command. Then you can read the data via Hive straight away.
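A sketch of that command (table name, partition key, and location are
hypothetical):
ALTER TABLE flume_logs ADD PARTITION (dt='2013-09-14')
LOCATION '/flume/events/dt=2013-09-14';
On later Hive versions, MSCK REPAIR TABLE flume_logs; can discover all such
directories in one pass.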
On Sat, Sep 14, 2013 at 10:00 AM, Sanjay Subramanian <
sanjay.subr
Thanks in advance for your help.
Regards,
Jagat Singh
On Fri, Mar 29, 2013 at 4:48 PM, Owen O'Malley wrote:
> Actually, Hive already has the ability to have different schemas for
> different partitions. (Although of course it would be nice to have the
> alter table be more flex
Hello Nitin,
Thanks for sharing.
Do we have more details on the versioned metadata feature of ORC? Is it
like handling varying schemas in Hive?
Regards,
Jagat Singh
On Fri, Mar 29, 2013 at 4:16 PM, Nitin Pawar wrote:
>
> Hi,
>
> Here is a nice presentation from Owen from Ho
Yes, just wait for some time.
We have awesome people here; they will suggest wonderful solutions to you.
On Mon, Mar 4, 2013 at 8:18 PM, Sai Sai wrote:
> Thanks again Jagat. Just wanted to get a second opinion about my Excel
> question.
> Thanks again for the inp
Hi,
There are many reporting tools that can read from the Hive server.
All you need to do is start the Hive server and then point the tool at it.
Pentaho, Talend, and iReport are a few.
Just search over here.
Thanks.
Jagat Singh
On Mon, Mar 4, 2013 at 7:58 PM, Sai Sai wrote:
> Just wondering if th
Hi,
$hive -e 'select * from myTable' > MyResultsFile.txt
Then you can use this file to import into Excel.
If you want to use HUE, it has functionality to export to Excel
directly.
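The CLI output is tab-delimited by default; if Excel prefers CSV, a rough
(untested) variant is the following, though it will mangle fields that
themselves contain commas:
hive -e 'select * from myTable' | sed 's/\t/,/g' > MyResultsFile.csv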
Thanks,
Jagat Singh
On Mon, Mar 4, 2013 at 7:56 PM, Sai Sai wrote:
> Just wondering
Hi,
See this
http://svn.apache.org/repos/asf/hive/trunk/conf/hive-default.xml.template
There is one property; if you set it to true, the column headers will show:
hive.cli.print.header (default: false)
Whether to print the names of the columns in query output.
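For example, at the CLI (table name hypothetical):
set hive.cli.print.header=true;
select * from myTable;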
Thanks,
Jagat Singh
On Mon, Mar 4, 2013 at 7
You might want to read this
https://cwiki.apache.org/Hive/languagemanual-auth.html
On Fri, Feb 22, 2013 at 9:44 PM, Sachin Sudarshana <
sachin.sudarsh...@gmail.com> wrote:
> Hi,
>
> I have just started learning about hive.
> I have configured Hive to use mysql as the metastore instead of derby
If all files are in the same partition then they satisfy the condition of
having the same value for the partition column.
You cannot do this with Hive alone, but you can have one intermediate table
and then move the required files using a glob pattern.
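A sketch of the file-move step (the warehouse paths and the glob itself are
hypothetical):
hadoop fs -mv '/user/hive/warehouse/staging/part-2013-01*' /user/hive/warehouse/target/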
---
Sent from Mobile, short and crisp.
On 07-Jan-2013 1:07 AM, "Oded Poncz" wro
Jagat Singh
On Thu, Dec 13, 2012 at 7:15 PM, Manish Malhotra <
manish.hadoop.w...@gmail.com> wrote:
>
> Ideally, push the aggregated data to some RDBMS like MySQL and have REST
> API or some API to enable the UI to build reports or queries out of it.
>
> If the use case is ad-hoc
Hive structure information is in the metastore, which is by default a Derby
database (which I doubt you would be using) or MySQL or something similar.
Point your Hive at MySQL and try.
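A sketch of the relevant hive-site.xml entries (the host, database name, and
credentials below are placeholders):
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://localhost:3306/metastore?createDatabaseIfNotExist=true</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hiveuser</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>hivepass</value>
</property>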
---
Sent from Mobile, short and crisp.
On 09-Sep-2012 5:29 AM, "yogesh dhari" wrote:
> Hi all,
>
> I ha
Hi,
I had the same error a few days back.
The difficulty we have is finding which gz file is corrupt. It is not corrupt
as such, but somehow Hadoop says it is. If you made the file on Windows and
then transferred it to Hadoop, that can give this error. If you want to see which
file is corrupt, do a select count que
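To test each gz file directly instead, assuming the standard gzip tool is
available (the path is hypothetical):
hadoop fs -cat /data/input/part-0001.gz | gzip -t && echo OK
A non-zero exit status (no OK) points at the corrupt file.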
Hi Anurag,
How much space is there for the /user and /tmp directories on the client?
Did you check that part? Anything there might stop the move task from
finishing.
---
Sent from Mobile, short and crisp.
On 11-Aug-2012 1:37 PM, "Anurag Tangri" wrote:
> Hi,
> We are facing this issue where we run a hiv
From the code here
http://svn.apache.org/viewvc/hive/branches/branch-0.7/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDAFSum.java?view=markup
For float, double, and string the implementation points to the common function
GenericUDAFSumDouble()
if (parameters[0].getCategory() != ObjectIn
In one similar use case that I worked on, the record timestamps were not
guaranteed to arrive in any order. So we used Pig to do some processing
similar to what your custom code is doing, and after the records were in the
required order of timestamp we pushed them to Hive.
---
Sent from Mobile, s
e.blogspot.in/2012/05/hive-mysql-setup-configuration.html> for
storing Hive metadata.
That's it.
Please let me know if you need any detailed help.
Thanks,
Jagat Singh
On Wed, Jun 6, 2012 at 3:42 AM, Rafael Maffud Carlini
wrote:
> Hello everyone, I am doing scientific research for my co
Hello Sreenath,
Besides the tools mentioned by Bejoy, you can also look at Pentaho; Pentaho
and Hive play well together.
Regards,
Jagat Singh
On Mon, Jun 4, 2012 at 3:49 PM, Bejoy Ks wrote:
> Hi Sreenath
>
> If you are looking at a UI for queries then Cloudera's hue is the
> best ch
Okay
Just export JAVA_HOME also
export JAVA_HOME="path to your java folder"
On Sat, Jun 2, 2012 at 7:35 PM, Babak Bastan wrote:
> I have checked this place but there is no *j2sdk1.5-sun* :(
> many Java files and folders but no *j2sdk1.5-sun*
>
>
> On Sat, Jun 2, 2012 at 4:00 P
Can you check if you have *Java* at the place shown by the path in the
error?
On Sat, Jun 2, 2012 at 7:26 PM, Babak Bastan wrote:
> hey Jagat,
> Thank you! Something has happened :)
> but a new error, about Java, like this:
>
> */usr/lib/j2sdk1.5-sun/bin/java*
> *file o
and hive was not found (translated from German)
>
>
> On Sat, Jun 2, 2012 at 3:26 PM, Jagat wrote:
>
>> Hi
>>
>> When you say
>>
>> I suppose you have entered correct path to your hadoop below
>>
>> *export HADOOP_HOME=/path_of_your_hadoop_fo
working hive then
To start the Hive Thrift server just type
# hive --service hiveserver
It would show a message like
Starting Hive Thrift Server
To check whether the Hive server has started successfully,
type
#netstat -nl | grep 1
Some service must be running there.
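HiveServer listens on port 10000 by default, so the check is presumably:
netstat -nl | grep 10000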
Hope it helps
Regards,
Jagat
Maybe you can look at RazorSQL to convert schemas.
---
Sent from Mobile, short and crisp.
On 12-May-2012 11:58 AM, "Xiaobo Gu" wrote:
> I can't find it in the release package.
>
> --
> Xiaobo Gu
>
Hello
Try to keep the set of records which you need for a particular analysis in the
same table. Generally we use Pig to feed data to Hive tables, and we have
arranged our tables such that all the data required for a
particular report is present right in that table. This helps to improve
Hive perfo
Have you checked the Oozie Hive action?
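A sketch of what that looks like in an Oozie workflow (the action name and
script are placeholders):
<action name="run-hive">
  <hive xmlns="uri:oozie:hive-action:0.2">
    <job-tracker>${jobTracker}</job-tracker>
    <name-node>${nameNode}</name-node>
    <script>query.q</script>
  </hive>
  <ok to="end"/>
  <error to="fail"/>
</action>
The "fire at a particular time" part comes from wrapping the workflow in an
Oozie coordinator.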
---
Sent from Mobile
On 05-May-2012 6:09 PM, "Chandan B.K" wrote:
> Hi ,
> Does Hive internally have any scheduling feature? Do the latest releases
> of Hive expose any APIs to schedule a query to fire at a particular time?
> Thanks
>
> --
>
> -Rega
Hi, can you do
#netstat -nl | grep 1
Hive is compatible with the 0.20.2 series, not with the 1.x series of Hadoop.
If you start the Hive server with Hadoop 0.20 it would work.
- Original Message -
From: ylyy-1985
Sent: 04/10/12 08:33 AM
To: user
Subject: cannot start the thrift server
hi all,
,
org.apache.hadoop.io.compress.BZip2Codec,
org.apache.hadoop.io.compress.SnappyCodec
3. Restart Hadoop.
More details
http://code.google.com/p/hadoop-snappy/
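For reference, the codec list those lines come from typically sits in
core-site.xml like this (a sketch; the exact list depends on your install):
<property>
  <name>io.compression.codecs</name>
  <value>org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.BZip2Codec,org.apache.hadoop.io.compress.SnappyCodec</value>
</property>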
Thanks
Jagat
On Thu, Mar 22, 2012 at 5:00 PM, hadoop hive wrote:
> HI Folks,
>
> I followed all the steps and built an
environment.
Regards,
Jagat
On Thu, Mar 15, 2012 at 12:24 AM, Chalcy Raja wrote:
> I have issues setting up a development environment for Hive. So far, I
> just got the jar file modified and working, and am now trying to get it into svn,
> so I can contribute code back.
>
parameter mapred.reduce.tasks is
negative, Hive will use this one as the max number of reducers when it
automatically determines the number of reducers.
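In other words, to let Hive pick the reducer count itself but cap it (the cap
value here is just an example):
set mapred.reduce.tasks=-1;
set hive.exec.reducers.max=32;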
Thanks and Regards
Jagat
On Tue, Mar 13, 2012 at 9:54 PM, Bruce Bian wrote:
> Hi there,
> when I'm using Hive to do a qu
Dear Keith
Please delete $HADOOP_HOME/build, the build directory in your Hadoop home,
and try again.
Thanks
Jagat
On Sat, Mar 10, 2012 at 5:07 AM, Keith Wiley wrote:
> Considering that I don't even know what the metastore is, I doubt I did
> anything specifically asi