tement (hive2)
>
>
>
>
> ---
> If your project is set up for it, you can reply to this email and have your
> reply appear on GitHub as well. If your project does not have this feature
> enabled and wishes so, or if the feature is enabled but not working, please
> contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
> with INFRA.
> ---
>
--
Nitin Pawar
at it takes few seconds while Hive's Orc format takes
> fraction of seconds.
>
> Regards,
> Amey
>
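For anyone who wants to reproduce that comparison, a minimal sketch, assuming hypothetical table names `t` and `t_orc` (STORED AS ORC is available from Hive 0.11 onward):

```shell
# Create an ORC copy of an existing table, then time a scan against it.
# Table names are placeholders, not taken from this thread.
hive -e "CREATE TABLE t_orc STORED AS ORC AS SELECT * FROM t;"
time hive -e "SELECT count(*) FROM t_orc;"
```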
--
Nitin Pawar
pom" does not exist in the project "hcatalog".
>
> Was curious to know if I'm only one facing this or Is there anyother way to
> publish maven artifacts locally?
>
> Thanks
> Amareshwari
>
--
Nitin Pawar
com> wrote:
> Nitin,
>
> Hive does not compile with jdk7. You have to use jdk6 for compiling
>
>
> On Wed, Jun 12, 2013 at 9:42 PM, Nitin Pawar wrote:
>
> > I tried the build on trunk
> >
> > I did not hit the issue of make-pom, but I hit the issue of jdbc
build it
On Mon, Jul 29, 2013 at 8:07 PM, Rodrigo Trujillo <
rodrigo.truji...@linux.vnet.ibm.com> wrote:
> Hi,
>
> Is it possible to build Hive 0.11 and HCatalog with Hadoop 2 (2.0.4-alpha)?
>
> Regards,
>
> Rodrigo
>
>
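For the 0.11-era ant build, one commonly used invocation pointed the build at the Hadoop 2 line through the hadoop-0.23 shims. A hedged sketch; the property names are from memory of the 0.11 build files and should be verified against build.properties in your checkout:

```shell
# Build Hive against a Hadoop 2.x release using the 0.23 shims.
ant clean package -Dhadoop.version=2.0.4-alpha -Dhadoop.mr.rev=23
```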
--
Nitin Pawar
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
> Caused by: java.net.SocketTimeoutException: Read timed out
> at java.net.SocketInputStream.socketRead0(Native Method)
> at java.net.SocketInputStream.read(SocketInputStream.java:129)
> at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:127)
> ... 31 more
>
--
Nitin Pawar
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:157)
> >>> at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:2092)
> >>> at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:2102)
> >>> at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:888)
> >>> at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:830)
> >>> at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getMetaData(SemanticAnalyzer.java:954)
> >>> at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:7524)
> >>> at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:243)
> >>> at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:431)
> >>> at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:336)
> >>> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:909)
> >>> at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:258)
> >>> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:215)
> >>> at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:406)
> >>> at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:341)
> >>> at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:642)
> >>> at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:557)
> >>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >>> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> >>> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >>> at java.lang.reflect.Method.invoke(Method.java:597)
> >>> at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
> >>> Caused by: java.net.SocketTimeoutException: Read timed out
> >>> at java.net.SocketInputStream.socketRead0(Native Method)
> >>> at java.net.SocketInputStream.read(SocketInputStream.java:129)
> >>> at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:127)
> >>> ... 31 more
> >>>
> >>
> >>
> >>
> >> --
> >> Nitin Pawar
> >>
> >
> >
>
--
Nitin Pawar
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.alter_partition(HiveMetaStoreClient.java:834)
> at org.apache.hadoop.hive.ql.metadata.Hive.alterPartition(Hive.java:425)
> ... 17 more
> 2014-06-08 20:16:34,852 ERROR ql.Driver (SessionState.java:printError(403)) - FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask
>
>
> --
> Fernando Agudo Tarancón
> Big Data Software Engineer
>
> Telf.: +34 917 680 490
> Fax: +34 913 833 301
> C/ Manuel Tovar, 49-53 - 28034 Madrid - Spain
>
> http://www.bidoop.es
>
>
--
Nitin Pawar
>
> This is relevant for the problem?
>
> Thanks,
>
>
> On 10/06/14 10:53, Nitin Pawar wrote:
>
>> Hive 0.9.0 with CDH4.1 <--- This is a very old release.
>>
>> I would recommend upgrading to hive-0.13, or at least 0.12, and seeing if the problem persists.
>>
>> Error
anickam P wrote:
>
> Hi,
>
> I'm getting the below error while loading data into a Hive table:
> return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask
>
> I used this query to load the data into the table:
> LOAD DATA INPATH '/home/storage/mount1/tabled.txt' INTO TABLE TEST;
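One frequent cause of this particular MoveTask failure is worth checking: LOAD DATA INPATH (without LOCAL) expects the path to exist on HDFS, while a /home/... path usually lives on the client's local filesystem. A sketch of the LOCAL variant, reusing the path from the message above:

```shell
# LOCAL makes Hive copy the client-side file into the warehouse,
# instead of trying to move a (possibly nonexistent) HDFS file.
hive -e "LOAD DATA LOCAL INPATH '/home/storage/mount1/tabled.txt' INTO TABLE TEST;"
```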
>
>
> Thanks,
> Manickam P
>
>
>
>
> --
> Nitin Pawar
>
--
Nitin Pawar
e to create temporary tables and make sure I clean
them up myself after the jobs are over.
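Later Hive releases made that manual cleanup unnecessary: CREATE TEMPORARY TABLE (added in Hive 0.14, HIVE-7090) creates a session-scoped table that is dropped automatically when the session ends. A sketch with hypothetical table and column names:

```shell
# Session-scoped table; it vanishes when the hive session exits.
hive -e "CREATE TEMPORARY TABLE tmp_results AS
         SELECT id, count(*) AS cnt FROM events GROUP BY id;"
```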
Thanks,
Nitin Pawar
then the entire large table
> goes through the second mapper.
>
> Is there something that I am doing wrong? There are currently three nodes in
> the Hadoop cluster, and I was expecting at least 6 mappers to be used.
>
> Thanks and Regards,
> Gourav
>
--
Nitin Pawar
Hi all,
>
> Is there an existing way to drop Hive tables without having the deleted
> files hitting trash? If not, can we add something similar to Hive for this?
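For reference, Hive 0.14 added exactly this, if I recall correctly via HIVE-7100: a PURGE clause on DROP TABLE that deletes the data files outright instead of moving them to .Trash. A sketch with a hypothetical table name:

```shell
# Data files are removed immediately; they cannot be recovered from Trash.
hive -e "DROP TABLE my_table PURGE;"
```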
>
>
> Thanks a lot.
>
--
Nitin Pawar
e of a scheduling
> conflict
> > please let us know. If enough people fall into this category we will try
> to
> > reschedule.
> >
> > Thanks.
> >
> > Carl
> >
>
--
Nitin Pawar
?
What would be the recommended JDK version going forward for development
activities?
--
Nitin Pawar
; Mark
>
> On Thu, Apr 11, 2013 at 2:42 AM, Nitin Pawar wrote:
>
> > Hello,
> >
> > I am trying to build hive on both trunk and branch-0.10
> >
> > I have tried both Sun JDK6 and JDK7.
> > With both versions I run into different issues.
> >
e in the code this is currently
> handled.
>
> Thanks,
> Steve
>
--
Nitin Pawar
E_PATH?
Currently this is set to
String historyDirectory = System.getProperty("user.home");
String historyFile = historyDirectory + File.separator + HISTORYFILE;
Any ideas on what will be the default path then ?
--
Nitin Pawar
> CONFIDENTIALITY NOTICE
> ==
> This email message and any attachments are for the exclusive use of the
> intended recipient(s) and may contain confidential and privileged
> information. Any unauthorized review, use, disclosure or distribution is
> prohibited. If you are not the intended recipient, please contact the
> sender by reply email and destroy all copies of the original message along
> with any attachments, from your computer system. If you are the intended
> recipient, please be advised that the content of this message is subject to
> access, review and disclosure by the sender's Email System Administrator.
>
--
Nitin Pawar
tPool.java:1148)
> at org.apache.commons.dbcp.PoolingDataSource.getConnection(PoolingDataSource.java:106)
> at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:521)
> at org.datanucleus.store.rdbms.RDBMSStoreManager.<init>(RDBMSStoreManager.java:290)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
> at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
> at org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:593)
> at org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:300)
> at org.datanucleus.ObjectManagerFactoryImpl.initialiseStoreManager(ObjectManagerFactoryImpl.java:161)
> at org.datanucleus.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:583)
>
--
Nitin Pawar
Nitin Pawar created HIVE-2980:
-
Summary: Show a warning or an error when the data directory is
empty or not existing
Key: HIVE-2980
URL: https://issues.apache.org/jira/browse/HIVE-2980
Project: Hive
Nitin Pawar created HIVE-5432:
-
Summary: self join for a table with serde definition fails with
classNotFoundException, single queries work fine
Key: HIVE-5432
URL: https://issues.apache.org/jira/browse/HIVE-5432
[
https://issues.apache.org/jira/browse/HIVE-4231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13628789#comment-13628789
]
Nitin Pawar commented on HIVE-4231:
---
Even I am running into same issue when tryin
[
https://issues.apache.org/jira/browse/HIVE-1708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13630609#comment-13630609
]
Nitin Pawar commented on HIVE-1708:
---
I did add a new setting to hive-site.xml and
Project: Hive
Issue Type: Bug
Reporter: Nitin Pawar
Priority: Minor
When we create buckets on larger datasets, it is not often that all the
partitions have the same number of buckets, so we choose the largest possible
number to capture most of the buckets
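The usual way to keep the layout uniform is to declare a single bucket count on the table and enforce it on insert, so every partition is written with the same number of buckets. A sketch with hypothetical table and column names:

```shell
hive -e "
SET hive.enforce.bucketing=true;
CREATE TABLE user_buckets (id INT, name STRING)
PARTITIONED BY (dt STRING)
CLUSTERED BY (id) INTO 32 BUCKETS;
"
```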