may generate sub-optimal splits resulting in less map-side
> parallelism.
> This config is just provided as a workaround and is suitable when all orc
> files
> are small (
> Thanks
> Prasanth
>
>
> On Apr 18, 2016, at 7:44 PM, Biswajit Nayak
> wrote:
>
>
Hi All,
I seriously need help on this aspect. Any reference or pointer to
troubleshoot or fix this would be helpful.
Regards
Biswa
On Fri, Mar 25, 2016 at 11:24 PM, Biswajit Nayak
wrote:
> Prashanth,
>
> Apologies for the delay in response.
>
> Below is the orcfiledump of the
> Prasanth
>
> On Mar 10, 2016, at 5:11 PM, Prasanth Jayachandran <
> pjayachand...@hortonworks.com> wrote:
>
> Could you attach the empty orc files from one of the broken partitions
> somewhere? I can run some tests on them to see why it's happening.
>
> Thanks
> Prasa
Both the parameters are set to false by default.
*hive> set hive.optimize.index.filter;*
*hive.optimize.index.filter=false*
*hive> set hive.orc.splits.include.file.footer;*
*hive.orc.splits.include.file.footer=false*
*hive> *
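Later in the thread the reporter notes that disabling "the optimization" in the CLI made the query work. Which setting actually mattered is not fully clear from the truncated messages, so this session-scoped sketch lists both candidates discussed in the thread:

```sql
-- Workaround sketch: apply per-session in the Hive CLI. Which of these
-- settings resolved the failure is ambiguous in the truncated thread,
-- so both candidates mentioned are shown.
set hive.optimize.index.filter=false;   -- ORC predicate pushdown off
set hive.fetch.task.conversion=none;    -- force a real job for LIMIT queries
```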
>>>I suspect this might be related to having 0 row files in the buc
Hi Gopal,
I had already pasted the table format in this thread. Will repeat it again.
*hive> desc formatted *testdb.table_orc*;*
*OK*
*# col_name    data_type    comment *
*row_id bigint *
*a int
Has anyone seen this?
On Tue, Mar 1, 2016 at 11:07 AM, Biswajit Nayak
wrote:
> The fix in https://issues.apache.org/jira/browse/HIVE-7164 does not
> work.
>
> On Tue, Mar 1, 2016 at 10:51 AM, Richa Sharma wrote:
>
>> Great!
>>
>> So what is
Does anyone have any idea about this? I am really stuck with it.
On Tue, Mar 1, 2016 at 4:09 PM, Biswajit Nayak
wrote:
> Hi,
>
> It works for MR engine, while in TEZ it fails.
>
> *hive> set hive.execution.engine=tez;*
>
> *hive> set hive.fetch.task.conversion=none;*
>
rcInputFormat.java:836)*
* at
org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.call(OrcInputFormat.java:702)*
* ... 4 more*
*]DAG did not succeed due to VERTEX_FAILURE. failedVertices:1
killedVertices:0*
*hive> *
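Since the split-generation failure above is Tez-specific while the MR engine works, a session-level fallback is possible; a sketch (the `limit 10` query is illustrative, as the original query is truncated):

```sql
-- Fallback sketch: the thread reports the query succeeds on MR while
-- Tez fails during ORC split generation.
set hive.execution.engine=mr;
select * from testdb.table_orc limit 10;  -- illustrative; original query truncated
```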
On Tue, Mar 1, 2016 at 1:09 PM, Biswajit Nayak
wrote:
>
Gopal,
Any plan to provide the fix for Hive 1.x versions, or to backport it?
Regards
Biswa
On Tue, Mar 1, 2016 at 11:44 AM, Biswajit Nayak
wrote:
> Thanks Gopal for the details .. happy to know it has been counted and
> fixed.
>
> Biswa
>
>
> On Tue, Mar 1, 20
Thanks Gopal for the details .. happy to know it has been counted and
fixed.
Biswa
On Tue, Mar 1, 2016 at 11:37 AM, Gopal Vijayaraghavan
wrote:
>
> > Yes it is kerberos cluster.
> ...
> > After disabling the optimization in hive cli, it works with the limit
> > option.
>
> Alright, then it is fixed
Thanks Gopal for the response.
Yes it is kerberos cluster.
After disabling the optimization in hive cli, it works with limit option.
Below is the DESC details of the table that you asked for.
*hive> desc formatted *testdb.table_orc*;*
*OK*
*# col_name    data_type    comment
The fix in https://issues.apache.org/jira/browse/HIVE-7164 does not
work.
On Tue, Mar 1, 2016 at 10:51 AM, Richa Sharma
wrote:
> Great!
>
> So what is the interim fix you are implementing?
>
> Richa
> On Mar 1, 2016 4:06 PM, "Biswajit Nayak" wrote:
>
>&
es should still persist if partition column data type in Hive is a
> string.
>
> I am checking HCatalog documentation for support of int data type in
> partition column.
>
> Cheers
> Richa
>
> On Tue, Mar 1, 2016 at 3:06 PM, Biswajit Nayak
> wrote:
>
>> Hi Ric
Hi All,
I am trying to run a simple SELECT query with a LIMIT option, and it
fails. Below are the details.
Versions:-
Hadoop :- 2.7.1
Hive :- 1.2.0
Sqoop :- 1.4.5
Query:-
The table table_orc is partitioned by year, month, and day, and the
table uses ORC storage.
hive> select date f
ng key type for column salary : int. Only string
> fields are allowed in partition columns in Catalog
>
>
> On Tue, Mar 1, 2016 at 2:19 PM, Biswajit Nayak
> wrote:
>
>> Hi All,
>>
>> I am trying to do a SQOOP export from hive( integer type partition) to
>
Hi All,
I am trying to do a SQOOP export from Hive (integer type partition) to
mysql through HCAT, and it fails with the following error.
Versions:-
Hadoop :- 2.7.1
Hive :- 1.2.0
Sqoop :- 1.4.5
Table in Hive :-
hive> use default;
OK
Time taken: 0.028 seconds
hive> describe emp_detail
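The HCatalog error earlier in the thread ("Only string fields are allowed in partition columns in Catalog") suggests one workaround: declare the partition key as a string on the Hive side. A sketch with hypothetical column names (only `emp_detail` and the partition key `salary` come from the thread):

```sql
-- Workaround sketch: HCatalog (per the error quoted above) rejects
-- non-string partition columns, so declare the partition key as STRING.
-- Column list is hypothetical apart from `salary`.
CREATE TABLE emp_detail_str (
  id   BIGINT,
  name STRING
)
PARTITIONED BY (salary STRING)  -- was INT; STRING satisfies HCatalog
STORED AS TEXTFILE;
```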
eMethodAccessorImpl.java:57)
>> at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>> at java.lang.reflect.Method.invoke(Method.java:622)
>> at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:289)
>> a
Could you check the oozie.log and catalina.out log files? They will give
you an idea of what is wrong.
~Biswa
On Sun, Feb 22, 2015 at 1:22 PM, Rahul Channe
wrote:
> Hi All,
>
> I configured the oozie build successfully and prepared oozie war file.
> After starting oozie when i tried to checkstatus i
Hi All,
I had metastore(0.12) running previously. But after upgrading to 0.13 it is
failing with below error message. The upgrade was a clean new setup.
Additional details:-
Mysql Version:- 5.6
Mysql Connector :- 5.1.30
Starting Hive Metastore Server
log4j:WARN No such property [maxBackupIndex
Congrats...
~Biswa
-oThe important thing is not to stop questioning o-
On Mon, Apr 14, 2014 at 11:32 PM, Sergey Shelukhin
wrote:
> Congrats!
>
>
> On Mon, Apr 14, 2014 at 10:55 AM, Prasanth Jayachandran <
> pjayachand...@hortonworks.com> wrote:
>
>> Congratulations everyone!!
>>
>> Than
r one
> reports it to be 35GB. What are the factors that can cause this difference?
> And why is just 35GB data causing DFS to hit its limits?
>
>
>
>
> On 14-Apr-2014, at 8:31 am, Biswajit Nayak
> wrote:
>
> Hi Saumitra,
>
> Could you please check the non-df
Hi Saumitra,
Could you please check the non-dfs usage. They also contribute to filling
up the disk space.
~Biswa
-oThe important thing is not to stop questioning o-
On Mon, Apr 14, 2014 at 1:24 AM, Saumitra wrote:
> Hello,
>
> We are running HDFS on 9-node hadoop cluster, hadoop vers
Congrats Xuefu..
With Best Regards
Biswajit
~Biswa
-oThe important thing is not to stop questioning o-
On Fri, Feb 28, 2014 at 2:50 PM, Carl Steinbach wrote:
> I am pleased to announce that Xuefu Zhang has been elected to the Hive
> Project Management Committee. Please join me in cong
This one is for monitoring of the metastore only. It has to be changed
for server monitoring.
Biswa
On 21 Feb 2014 22:44, wrote:
> Thanks Biswajit.
>
>
>
> I will try it and let you know.
>
>
>
>
>
>
>
> Thanks,
>
> Shouvanik
>
>
>
>
"%s -n
hive.metastore.current_heap_usage -v %d -t int8 -d 60 -g HiveHeapstats
\n",gmetric, sum;}' |sh
fi
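For context, the truncated script above appears to build a Ganglia `gmetric` invocation from the metastore JVM's heap usage and pipe it to `sh`. A minimal reconstruction sketch, where the sample line, the jstat-style column layout, and the summed columns are assumptions; it prints the command instead of executing it:

```shell
# Sketch reconstructing the truncated pipeline above: turn a JVM heap-usage
# sample into a Ganglia gmetric command. Column layout mimics `jstat -gc`
# output (values in KB) and is an assumption; adjust for your JVM tooling.
GMETRIC="gmetric"   # assumed to be on PATH on a Ganglia-enabled host

# Stand-in for something like `jstat -gc <metastore_pid> | tail -1`
sample="1024.0 1024.0 0.0 512.0 8192.0 4096.0 65536.0 30000.0"

cmd=$(echo "$sample" | awk -v gmetric="$GMETRIC" \
  '{ sum = $3 + $4 + $6 + $8;  # assumed S0U + S1U + EU + OU columns
     printf "%s -n hive.metastore.current_heap_usage -v %d -t int8 -d 60 -g HiveHeapstats",
            gmetric, sum }')
echo "$cmd"
# Pipe to sh only when you actually want to publish the metric:
# echo "$cmd" | sh
```
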
Please let me know if it works for you. Apologies for the delayed response.
Thanks
BIswa
On Fri, Feb 21, 2014 at 10:41 AM, wrote:
> That will be great! Thanks
;
>
>
>
> Thanks,
>
> Shouvanik
>
>
>
> *From:* Biswajit Nayak [mailto:biswajit.na...@inmobi.com]
> *Sent:* Thursday, February 20, 2014 8:30 PM
> *To:* user@hive.apache.org
> *Subject:* Re: Is there any monitoring tool available for hiveserver2
>
>
>
> I ha
I have built up a customized script for alerting and monitoring.
Could not find any default way to do it.
Thanks
Biswajit
On 21 Feb 2014 05:17, wrote:
> Hi,
>
>
>
> It might happen that hiveserver2 memory gets exhausted. Similarly there
> would be many other things to monitor for hiveserver2.
Congratulations Gunther...
On Fri, Dec 27, 2013 at 7:20 PM, Prasanth Jayachandran <
pjayachand...@hortonworks.com> wrote:
> Congrats Gunther!!
>
> Sent from my iPhone
>
> > On Dec 27, 2013, at 4:46 PM, Lefty Leverenz
> wrote:
> >
> > Congratulations Gunther, well deserved!
> >
> > -- Lefty
> >
Hi All,
Does anyone have any idea how to make the Hive server emit metrics to
Ganglia? I tried adding some properties in the env file, but it does not work.
Thanks
Biswajit
Hi All,
Could anyone help me in identifying the data points for monitoring the
Hive server and metastore, or any tool that could help? I saw the tool
named "HAWK" on SlideShare, but could not find anywhere that its source
code has been shared.
Thanks
Biswajit
Hi All,
I had set up Hive in my home directory, but today I moved it to /opt.
After that, starting it throws an error:-
*Exception in thread "main" javax.jdo.JDOFatalDataStoreException: Unable to
open a test connection to the given database. JDBC url =
jdbc:derby:;databaseName=metastore_db;create=
> assuming that you have all directory permissions in place you can start
> hive metastore service by running
>
> nohup hive --service metastore > somefile &
>
> that should do it.
>
> *Note* this is hive metastore service and not WebHcatalog Service.
>
>
>
>
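After starting the metastore with `nohup` as above, a quick port probe can confirm the service actually came up; a sketch (9083 is Hive's default metastore Thrift port; adjust if `hive.metastore.uris` says otherwise):

```shell
# Probe the metastore's Thrift port. 9083 is the Hive default; override
# METASTORE_PORT if your hive.metastore.uris points elsewhere.
METASTORE_PORT=${METASTORE_PORT:-9083}
if command -v nc >/dev/null 2>&1; then
  if nc -z localhost "$METASTORE_PORT"; then
    echo "metastore is listening on port $METASTORE_PORT"
  else
    echo "nothing listening on port $METASTORE_PORT"
  fi
fi
```
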
sing pure apache
> distribution) (just in case something missed in documentation)
> and keep raising your doubts :)
>
>
>
>
> On Wed, Dec 4, 2013 at 6:23 PM, Biswajit Nayak
> wrote:
>
>> Thanks a lot.. I am new to hive..
>> On 4 Dec 2013 18:13, "Nitin
Thanks a lot.. I am new to hive..
On 4 Dec 2013 18:13, "Nitin Pawar" wrote:
> Exception clearly says "You are not allowed to drop the default database"
>
>
> On Wed, Dec 4, 2013 at 6:09 PM, Biswajit Nayak
> wrote:
>
>> Hi All,
>>
>>
Hi All,
I was trying to drop a database named "default", but every time it fails
with the below error message, while other commands work fine, like "show
databases".
hive> DROP DATABASE IF EXISTS default;
FAILED: Execution Error, return code 1 from
org.apache.hadoop.hive.ql.exec.DDLTask. MetaExcept
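As the reply in this thread states, Hive refuses to drop the reserved `default` database; any other database can be dropped normally. A sketch with a hypothetical database name:

```sql
-- `default` is reserved and cannot be dropped; non-default databases can be.
-- `scratch_db` is a hypothetical name for illustration.
CREATE DATABASE IF NOT EXISTS scratch_db;
DROP DATABASE IF EXISTS scratch_db;
```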
Dont think so..
On 23 Nov 2013 01:20, wrote:
> Has no one raised a Jira ticket ?
>
>
> Dr. Simon Thompson
>
> ____
> From: Biswajit Nayak [biswajit.na...@inmobi.com]
> Sent: 22 November 2013 19:45
> To: user@hive.apache.org
>
Hi Echo,
I don't think there is any way to prevent this. I had the same concern in
HBase, but found out that it is assumed that users of the system are very
much aware of it. I have been into Hive for the last 3 months and was
looking for some kind of way here, but no luck till now.
Thanks
Biswa
On 23 Nov 2013
Congrats to both of you..
On Fri, Nov 22, 2013 at 1:26 PM, Lefty Leverenz wrote:
> Congratulations, Jitendra and Eric! The more the merrier.
>
> -- Lefty
>
>
> On Thu, Nov 21, 2013 at 6:31 PM, Jarek Jarcec Cecho wrote:
>
>> Congratulations, good job!
>>
>> Jarcec
>>
>> On Thu, Nov 21, 2013 at 0
Hi All,
I was trying to start the Hive web server. The web server came up, but the
database connectivity is not happening, throwing the errors below. Any
help will be much appreciated.
File: NucleusJDOHelper.java Line:425 method:
getJDOExceptionForNucleusException
class: org.datan