at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:722)
Regards,
Manish
On Tue, Jan 20, 2015 at 5:58 PM, Manish Malhotra <
manish.hadoop.w...@gmail.com> wrote:
> Hi All,
>
> I'm using Hive Thrift Server in Pro
Hi All,
I'm using the Hive Thrift Server in production, which at peak handles around 500
req/min.
After a certain point, the Hive Thrift Server goes into a no-response mode and
throws the following exception:
"org.apache.hadoop.hive.ql.metadata.HiveException:
org.apache.thrift.transport.TTransportExcepti
What Sanjay and Swagatika replied is spot on.
Fundamentally, if you are able to run the Hive query from the
CLI or some internal API like HiveDriver, the flow will be this:
>> Compile the query
>> Get the info from Hive Metastore using Thrift or JDBC, Optimize it ( if
required and ca
When you are using the CLI library ... it internally uses ZK or the configured /
supported locking service, so no extra effort is required to do that.
Though there is a patch for a HiveServer ZooKeeper leak, HIVE-3723, which
people are trying on 0.9 and 0.10.
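The ZooKeeper-backed locking mentioned above is switched on through a few
hive-site.xml properties; a minimal fragment might look like this (the quorum
hostnames are placeholders):

```xml
<!-- hive-site.xml: enable ZooKeeper-based query locking -->
<property>
  <name>hive.support.concurrency</name>
  <value>true</value>
</property>
<property>
  <name>hive.lock.manager</name>
  <value>org.apache.hadoop.hive.ql.lockmgr.zookeeper.ZooKeeperHiveLockManager</value>
</property>
<property>
  <name>hive.zookeeper.quorum</name>
  <value>zk1:2181,zk2:2181,zk3:2181</value>
</property>
```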
Regards,
Manish
On Thu, Jan 10, 2013 at 11:23 P
Hi,
I'm looking for the HiveServer2 implementation in ASF.
I followed this link:
https://cwiki.apache.org/Hive/hiveserver2-thrift-api.html
JIRA: https://issues.apache.org/jira/browse/HIVE-2935
Is this the same HiveServer2 that CDH4 has released, and is it available under
ASF, or is this not being implemente
I, does it report a compilation problem synchronously, instead of scheduling a
query that is not correct?
Regards,
Manish
On Sun, Jan 6, 2013 at 10:45 AM, Manish Malhotra <
manish.hadoop.w...@gmail.com> wrote:
> Thanks Edward for explaining
> I'm also very much interested in buil
ink this HWI is not good enough yet.
> > 3. ThreadPool is used to run queries. So synchronous mode can be
> achieved by setting its size to 1.
> > Regards,
> > Qiang
> >
> > 2013/1/6 Manish Malhotra
> >>
> >> Thanks Quiang,
> >>
> >
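The single-worker pool trick quoted above (point 3) can be sketched as follows;
the class and the stand-in "queries" are illustrative, not Hive's actual code:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class SingleWorkerDemo {
    // A pool of size 1 makes execution effectively synchronous: submitted
    // tasks queue up and run one at a time, in submission order.
    public static List<String> runQueries(List<String> queries) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(1); // size 1 => serial execution
        List<String> completed = new ArrayList<>();
        for (String q : queries) {
            pool.submit(() -> {
                synchronized (completed) {
                    completed.add(q); // stand-in for actually running the query
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return completed;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runQueries(List.of("q1", "q2", "q3")));
    }
}
```

With pool size 1 the completion order always matches submission order, which is
what makes the behaviour synchronous from the caller's point of view.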
er all, have a try and give some advice if you're interested in it.
>
> Thanks
>
> Qiang
>
>
> 2013/1/5 Manish Malhotra
>
>>
>> Hi All,
>>
>> We are exploring HWI to be used in PROD environment for adhoc queries
>> etc.
>> Want
Hi All,
We are exploring HWI to be used in a PROD environment for ad-hoc queries etc.
Want to check in the Hive community whether somebody can share their
experience using HWI in prod (or any environment) in terms of its
stability and performance.
Also evaluating to enhance to make it more u
Hi,
As mentioned by Nitin and the other fellows,
there are a few points you need to consider.
1. Hive is currently built for OLAP apps and not for OLTP (real-time
RDBMS-like systems such as MySQL, Oracle).
2. Though you can connect to Hive Thrift using the JDBC implementation, it's
still not a production grade
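As a minimal sketch of point 2 (JDBC over the Thrift server): assuming the
Hive JDBC driver jar is on the classpath, with the host, port, database, and
table name below all being placeholders:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveJdbcSketch {
    // HiveServer1-era JDBC URLs use the jdbc:hive:// prefix.
    public static String hiveUrl(String host, int port, String db) {
        return "jdbc:hive://" + host + ":" + port + "/" + db;
    }

    public static void main(String[] args) throws Exception {
        // Requires the Hive JDBC jar at runtime; this only sketches the flow.
        Class.forName("org.apache.hadoop.hive.jdbc.HiveDriver");
        try (Connection con = DriverManager.getConnection(
                     hiveUrl("localhost", 10000, "default"), "", "");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM my_table")) {
            while (rs.next()) {
                System.out.println(rs.getLong(1));
            }
        }
    }
}
```

Note the query runs synchronously on the server and the connection handles
one statement at a time, which is the single-threaded behaviour discussed here.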
Looks like https://issues.apache.org/jira/browse/HCATALOG-541 is also related,
though that issue appears when dealing with a large number of partitions.
Regards,
Manish
On Wed, Dec 12, 2012 at 5:59 PM, Shreepadma Venugopalan <
shreepa...@cloudera.com> wrote:
>
>
>
> On Tue, Dec 11, 2012 at 12:
Ideally, push the aggregated data to some RDBMS like MySQL and have a REST
API (or some API) to enable the UI to build reports or queries out of it.
If the use case is an ad-hoc query, then once that query is submitted and the
result is generated in batch mode, a REST API can be provided to get the results
from HDF
Thanks Ruslan,
Please see my inline comments,
Why do you need metadata backup? Can't you just store all the table create
statements in an init file?
MM: Because I don't want to depend on the init script that will have all
the entries for all the tables.
And this backup tool should be independent
Sending again, as I got no response.
Can somebody from Hive dev group please review my approach and reply?
Cheers,
Manish
On Thu, Dec 6, 2012 at 11:17 PM, Manish Malhotra <
manish.hadoop.w...@gmail.com> wrote:
> Hi,
>
> I'm building / designing a back-up and restore to
Hi,
I'm building / designing a backup-and-restore tool for Hive data for
Disaster Recovery scenarios.
I'm trying to understand the locking behavior of Hive, which currently
supports ZooKeeper for locking.
My thought process is like this (early design):
1. Backing up the meta-data of hive.
Hi,
As per my understanding, the JDBC driver for Hive is not scalable; it's a
single-threaded model.
Even if you get a handle on the Data Access API, the latency to generate a
report would be high!
If you are OK with that, then please check out the Thrift API and the CLI
Driver class code:
http://hi
think dedicating 3 nodes for ZK for metastore is an overhead then you would
> need https://issues.apache.org/jira/browse/HIVE-3255 With that patch,
> tokens are stored in same backend db, so there would be no need to bring up
> ZK cluster.
>
> Hopefully, both of these patches gets in
Hi,
I need to build a failover/LB solution for the Hive services.
MySQL DB is fine and can work out.
But for the Hive Metastore service, can I simply put a load balancer like
HAProxy in between the clients and the service and achieve this?
Thrift servers are stateless by default; not sure about the Hive one.
I read very
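If the metastore instances do turn out to be stateless, a plain TCP balancer in
front of them would look roughly like this haproxy.cfg fragment (hostnames are
placeholders; 9083 is the default metastore port):

```
# haproxy.cfg sketch: round-robin two metastore instances behind one address
listen hive-metastore
    bind *:9083
    mode tcp
    balance roundrobin
    server ms1 metastore1:9083 check
    server ms2 metastore2:9083 check
```

This only covers connection-level failover; whether in-flight metastore calls
survive a backend failure still depends on the client's retry behaviour.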