You might want to read this
https://cwiki.apache.org/Hive/languagemanual-auth.html
On Fri, Feb 22, 2013 at 9:44 PM, Sachin Sudarshana <
sachin.sudarsh...@gmail.com> wrote:
> Hi,
>
> I have just started learning about hive.
> I have configured Hive to use mysql as the metastore instead of derby
Hi,
See this
http://svn.apache.org/repos/asf/hive/trunk/conf/hive-default.xml.template
There is one property; if you set it to true, the column headers are shown:
hive.cli.print.header (default: false)
Whether to print the names of the columns in query output.
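The property can also be toggled per session from the CLI, without editing the config file. A minimal sketch (the table name is hypothetical):

```
-- in the Hive CLI; the setting lasts only for this session
set hive.cli.print.header=true;
select * from myTable limit 5;  -- column names now appear as the first output row
```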
Thanks,
Jagat Singh
On Mon, Mar 4, 2013 at 7
Hi,
$hive -e 'select * from myTable' > MyResultsFile.txt
Then you can use this file to import into Excel.
If you want to use HUE, it has functionality to export to Excel directly.
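Note that `hive -e` writes tab-separated values. A small sketch of turning that into a CSV that Excel opens directly (the sample file below stands in for the real query output; real data containing tabs or commas inside fields would need proper quoting):

```shell
# Stand-in for the file produced by: hive -e 'select * from myTable' > MyResultsFile.txt
printf '1\talice\t10\n2\tbob\t20\n' > MyResultsFile.txt
# The Hive CLI separates columns with tabs; swap them for commas to get a CSV.
tr '\t' ',' < MyResultsFile.txt > MyResults.csv
cat MyResults.csv
```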
Thanks,
Jagat Singh
On Mon, Mar 4, 2013 at 7:56 PM, Sai Sai wrote:
> Just wondering
Hi,
There are many reporting tools which can read from the Hive server.
All you need to do is start the Hive server and then point the tool at it.
Pentaho, Talend and iReport are a few.
Just search over here.
Thanks.
Jagat Singh
On Mon, Mar 4, 2013 at 7:58 PM, Sai Sai wrote:
> Just wondering if th
> Sai.
>
>
> --
> *From:* Jagat Singh
> *To:* user@hive.apache.org; Sai Sai
> *Sent:* Monday, 4 March 2013 1:01 AM
> *Subject:* Re: hive light weight reporting tool
>
> Hi,
>
>
> There are many reporting tool which can read from Hive server.
Hello Nitin,
Thanks for sharing.
Do we have more details on the versioned metadata feature of ORC? Is it like
handling varying schemas in Hive?
Regards,
Jagat Singh
On Fri, Mar 29, 2013 at 4:16 PM, Nitin Pawar wrote:
>
> Hi,
>
> Here is a nice presentation from Owen from Ho
Thanks in advance for your help.
Regards,
Jagat Singh
On Fri, Mar 29, 2013 at 4:48 PM, Owen O'Malley wrote:
> Actually, Hive already has the ability to have different schemas for
> different partitions. (Although of course it would be nice to have the
> alter table be more flex
Adding to Sanjay's reply:
The only thing left after Flume has added the partitions is to tell the Hive
metastore to update the partition information, which you can do via the
add partition command.
Then you can read the data via Hive straight away.
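For example, the add partition command looks like this (table name, partition column and path are hypothetical, assuming Flume wrote the directory under the table's location):

```
-- run in the Hive CLI after Flume has written the directory
ALTER TABLE weblogs ADD IF NOT EXISTS
  PARTITION (dt='2013-09-14')
  LOCATION '/flume/weblogs/dt=2013-09-14';
```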
On Sat, Sep 14, 2013 at 10:00 AM, Sanjay Subramanian <
sanjay.subr
Hi
You can use the distributed cache and the Hive ADD FILE command.
See here for example syntax:
http://stackoverflow.com/questions/15429040/add-multiple-files-to-distributed-cache-in-hive
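The syntax boils down to something like this (the file name and path are hypothetical):

```
-- in the Hive CLI; the file is shipped to each task node via the
-- distributed cache and is readable from the task's working directory
ADD FILE /local/path/udf-config.properties;
LIST FILES;  -- verify what has been added
```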
Regards,
Jagat
On Sat, Sep 14, 2013 at 9:57 AM, Stephen Boesch wrote:
>
> We have a UDF that is configured via
rk. we need to use Java api's
>
>
> 2013/9/13 Jagat Singh
>
>> Hi
>>
>> You can use distributed cache and hive add file command
>>
>> See here for example syntax
>>
>>
>> http://stackoverflow.com/questions/15429040/add-multiple-file
It's defined in build.properties.
You can try changing it there and rebuilding.
http://svn.apache.org/viewvc/hive/trunk/build.properties?revision=1521520&view=markup
On 21/09/2013 8:19 PM, "wesley dias" wrote:
> Hello Everyone,
>
> I am new to hive and I had a query related to building the Hive package
Hi, can you do
#netstat -nl | grep 1
Hive is compatible with the 0.20.2 series, not with the 1.x series of Hadoop.
If you start the Hive server with Hadoop 0.20 it should work.
- Original Message -
From: ylyy-1985
Sent: 04/10/12 08:33 AM
To: user
Subject: cannot start the thrift server
hi all,
In a similar use case I worked on, the record timestamps were not
guaranteed to arrive in any particular order. So we used Pig to do processing
similar to what your custom code is doing, and once the records were in the
required timestamp order we pushed them to Hive.
---
Sent from Mobile, short and crisp.
From the code here
http://svn.apache.org/viewvc/hive/branches/branch-0.7/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDAFSum.java?view=markup
For float, double and string the implementation points to the common function
GenericUDAFSumDouble()
if (parameters[0].getCategory() != ObjectIn
Hi Anurag,
How much space is there for the /user and /tmp directories on the client?
Did you check that part? Is there anything that might stop the move task from
finishing?
---
Sent from Mobile, short and crisp.
On 11-Aug-2012 1:37 PM, "Anurag Tangri" wrote:
> Hi,
> We are facing this issue where we run a hiv
Hi,
I had the same error a few days back.
The difficulty we have is finding which gz file is corrupt. It may not be corrupt
as such, but somehow Hadoop says it is. If you made the file in Windows and
then transferred it to Hadoop, that can give this error. If you want to see which
file is corrupt, do a select count query.
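Besides a select count query, gzip integrity can be checked outside Hive with `gzip -t`. A sketch with synthetic files (for files already on HDFS, something like `hadoop fs -cat <path> | gzip -t` would be the equivalent):

```shell
# Illustration with synthetic files: one valid gzip, one mislabeled plain file.
echo "good data" | gzip > good.gz
echo "not really gzip" > bad.gz
# gzip -t tests archive integrity and exits non-zero on a broken file.
for f in good.gz bad.gz; do
  if gzip -t "$f" 2>/dev/null; then
    echo "$f OK"
  else
    echo "$f CORRUPT"
  fi
done
```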
Hive structure information is in the metastore, which is by default in a Derby
database (which I doubt you would be using) or in MySQL or similar.
Point your Hive to MySQL and try.
---
Sent from Mobile, short and crisp.
On 09-Sep-2012 5:29 AM, "yogesh dhari" wrote:
> Hi all,
>
> I ha
Jagat Singh
On Thu, Dec 13, 2012 at 7:15 PM, Manish Malhotra <
manish.hadoop.w...@gmail.com> wrote:
>
> Ideally, push the aggregated data to some RDBMS like MySQL and have REST
> API or some API to enable ui to build report or query out of it.
>
> If the use case is ad-hoc
If all files are in the same partition then they satisfy the condition of
having the same value for the partition column.
You cannot do this with Hive alone, but you can have one intermediate table
and then move the required files using a glob pattern.
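A plain-shell illustration of the glob idea (directory and file names are made up; against HDFS the same pattern works with `hadoop fs -mv 'staging/part-2013-01-*' final/`):

```shell
# Set up a staging area with a mix of matching and non-matching files.
mkdir -p staging final
touch staging/part-2013-01-05.txt staging/part-2013-01-06.txt staging/other.log
# Move only the files matching the glob into the final location.
mv staging/part-2013-01-*.txt final/
ls final
```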
---
Sent from Mobile, short and crisp.
On 07-Jan-2013 1:07 AM, "Oded Poncz" wro
Hi,
Does the user running the query have rights to access the file?
Thanks
On Wed, Jul 2, 2014 at 1:51 PM, wrote:
> Hi,
>
> Cannot add a jar to hive classpath.
>
> Once I launch HIVE, I type -> ADD JAR hdfs://10.37.83.117
> :9000/user/ipg_intg_user/AP/scripts/lib/wsaUtils.jar;
>
>
>
> I
Can you please share the command which you are trying to run.
Thanks
On Thu, Jul 10, 2014 at 10:32 AM, wenlong...@changhong.com <
wenlong...@changhong.com> wrote:
> Hi guys,
>
> Can anybody tell me why my Hive (0.12.0) cannot load data from HDFS
> when the filename is the same?
> But I ca
Hi,
How do I monitor logs for Hive Tez jobs?
In the shell I can see the progress of the Hive job.
If I click the application master link on the RM I get the following error.
Thanks,
HTTP ERROR 500
Problem accessing /proxy/application_1431929898495_21650/. Reason:
Connection refused
Caused by:
Did you compute the table column stats?
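For reference, column stats are gathered with an ANALYZE statement; with them in place and the stats-based optimization enabled, some aggregates like min/max/count can be answered from the metastore without launching a MapReduce job (a sketch with hypothetical table and column names, assuming a reasonably recent Hive):

```
-- gather column-level statistics
ANALYZE TABLE my_table COMPUTE STATISTICS FOR COLUMNS col1;
-- let the optimizer answer eligible aggregates from the metastore
set hive.compute.query.using.stats=true;
select min(col1), max(col1) from my_table;
```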
On 30 May 2015 9:04 am, "sreejesh s" wrote:
> Hi,
>
> I am new to Hive, please help me understand the benefit of ORC file format
> storing Sum, Min, Max values.
> Whenever we try to find a sum of values in a particular column, it still
> runs the MapReduce job.
>
> s
We are using Hive 0.14
Our input file size is around 100 GB uncompressed
We are inserting this data into a Hive table which is ORC based, with ZLIB
compression. While inserting we are also using the following two parameters.
SET hive.exec.reducers.max=10;
SET mapred.reduce.tasks=5;
The output ORC file produced
Hi,
I am trying to run Hive on Spark on HDP Virtual machine 2.3
Following wiki
https://cwiki.apache.org/confluence/display/Hive/Hive+on+Spark%3A+Getting+Started
I have replaced all the occurrences of hdp.version with 2.3.0.0-2557
I start hive with following
set hive.execution.engine=spark;
set
One interesting message here: *No plan file found:*
15/11/01 23:55:36 INFO exec.Utilities: No plan file found: hdfs://
sandbox.hortonworks.com:8020/tmp/hive/root/119652ff-3158-4cce-b32d-b300bfead1bc/hive_2015-11-01_23-54-47_767_5715642849033319370-1/-mr-10003/40878ced-7985-40d9-9b1d-27f06acb1bef
> be the problem you're having. Have you tried your query with MapReduce?
>
> On Sun, Nov 1, 2015 at 5:32 PM, Jagat Singh wrote:
>
>> One interesting message here , *No plan file found: *
>>
>> 15/11/01 23:55:36 INFO exec.Utilities: No plan file found: hdfs://
Hi,
Is it possible to do a bulk load using files into a Hive table backed by
transactions, instead of using update statements?
Thanks
Hello Ravi,
When you wget this url
wget 'http://:9091/schema?name=ed&store=parquet&isMutated=true&table=ed&secbypass=testing'
Do you get avsc file?
Regards,
Jagat Singh
On Sat, 27 Jun 2020, 7:01 am ravi kanth, wrote:
> Just want to follow up on the below email.
>