Are you pointing to one of the Impala daemon nodes?
What do the logs say?
On Wed, Dec 12, 2018 at 12:15 PM Fawze Abujaber wrote:
> I'm using Impala JDBC, not ODBC, and it's not working
>
> On Wed, Dec 12, 2018 at 8:13 PM Abhi Basu <9000r...@gmail.com> wrote:
>
Correct. Using JDBC to connect to Impala from Zeppelin.
ODBC is mainly for Windows (I think); all our nodes run CentOS 7.
On Wed, Dec 12, 2018 at 12:07 PM Fawze Abujaber wrote:
> So you are using Hive JDBC and not Impala JDBC?
>
>
>
> On Wed, Dec 12, 2018 at 7:47 PM A
On Wed, Dec 12, 2018 at 11:18 AM Fawze Abujaber wrote:
> When you run the Impala query, does it run as your user? Do you see the
> user in Cloudera Manager?
>
> Do you mind sharing your Impala ODBC string?
>
> On Wed, 12 Dec 2018 at 19:06 Abhi Basu <9000r...@gmail.com> wrote:
>
>> Yes, w
>>
>> --
> Take Care
> Fawze Abujaber
>
--
Abhi Basu
> proxy_set_header X-Forwarded-For $proxy_protocol_addr;
> proxy_http_version 1.1;
> proxy_set_header Upgrade websocket;
> proxy_set_header Connection upgrade;
> proxy_read_timeout 86400;
> }
>
>
> On Thu, Nov 29, 2018 at 9:22 AM Abhi Basu <9000r...@gmail.com> wrote:
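For reference, a complete nginx server block for fronting a single Zeppelin
instance usually looks like the sketch below; zeppelin-host and the ports are
placeholders, and /ws is the notebook websocket endpoint.

# Hedged sketch of an nginx reverse proxy for one Zeppelin server.
server {
    listen 80;
    location / {
        proxy_pass http://zeppelin-host:8080;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
    # Zeppelin streams notebook results over a websocket at /ws; without
    # the Upgrade/Connection headers the UI hangs while "connecting".
    location /ws {
        proxy_pass http://zeppelin-host:8080/ws;
        proxy_http_version 1.1;
        proxy_set_header Upgrade websocket;
        proxy_set_header Connection upgrade;
        proxy_read_timeout 86400;
    }
}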
So, if I want to use nginx only as a reverse proxy and Shiro for Zeppelin
authentication, can I skip step 3 here:
https://zeppelin.apache.org/docs/0.6.2/security/authentication.html
Thanks.
On Thu, Nov 29, 2018 at 5:16 AM Xun Liu wrote:
> Hi, Abhi Basu
>
> First you need to explain your deployment:
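For what it's worth, a minimal conf/shiro.ini for form-based authentication,
along the lines of the Zeppelin security docs, looks roughly like this (the
accounts are placeholders):

[users]
# placeholder accounts; replace with your own
admin = password1, admin
user1 = password2, role1

[main]
sessionManager = org.apache.shiro.web.session.mgt.DefaultWebSessionManager
securityManager.sessionManager = $sessionManager
securityManager.sessionManager.globalSessionTimeout = 86400000
shiro.loginUrl = /api/login

[urls]
# keep the version endpoint open, require login for everything else
/api/version = anon
/** = authc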
Yes, this helps. We have a single Zeppelin service.
Thanks,
Abhi
On Thu, Nov 29, 2018 at 5:16 AM Xun Liu wrote:
> Hi, Abhi Basu
>
> First you need to explain your deployment:
> How many zeppelin-servers do you have through nginx reverse proxy?
>
> 1)If there is only one zep
Shiro, or am I missing something?
Thanks,
Abhi
--
Abhi Basu
Never mind, my Zeppelin binaries were somehow corrupt. After a fresh download
and install, all is good. :}
On Wed, Sep 19, 2018 at 3:44 PM, Abhi Basu <9000r...@gmail.com> wrote:
> Do I need to use the IP address (public) of my EC2 node here for this to
> work? I have a 5 node CDH 5.15
see a 403 error.
Logs are attached.
Thanks,
Abhi
--
Abhi Basu
[Attachments: zeppelin-centos-ip-172-31-81-167.ec2.internal.log and
zeppelin-centos-ip-172-31-81-167.ec2.internal.out]
Yes, correct.
On Jan 22, 2018 11:44 PM, "Jeff Zhang" wrote:
>
> Just curious, does Impala use HiveDriver?
>
> Abhi Basu <9000r...@gmail.com> wrote on Tue, Jan 23, 2018 at 12:50 AM:
>
>> Seems like a 0.7.3 bug, please verify.
>>
>> The same configurations worked fine in 0.7.2.
Seems like a 0.7.3 bug, please verify.
The same configurations worked fine in 0.7.2.
Thanks,
Abhi
On Mon, Jan 22, 2018 at 10:04 AM, Abhi Basu <9000r...@gmail.com> wrote:
> Need some help. Not sure why the connection is failing. Using Zeppelin
> 0.7.3 binary with CDH 5.13.
>
>
)
Caused by: org.apache.thrift.transport.TTransportException:
java.net.ConnectException: Connection refused (Connection refused)
at org.apache.thrift.transport.TSocket.open(TSocket.java:185)
at
org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:190)
--
Abhi Basu
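A hedged sanity check for a "Connection refused" like the one above: confirm
that something is actually listening on the host and port in the interpreter
URL, from the machine running Zeppelin (the hostname below is a placeholder;
21050 is Impala's default JDBC port):

# run from the Zeppelin host; replace the hostname with your Impala daemon
nc -vz impala-host.example.com 21050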
I thought it was the port I had typed in earlier, but I get the same
exception with 21050.
Thanks,
Abhi
On Fri, Nov 17, 2017 at 10:39 AM, Abhi Basu <9000r...@gmail.com> wrote:
> Hive works fine but impala does not.
>
> Interpreter setting:
> common.max_count 100
java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:580)
at org.apache.thrift.transport.TSocket.open(TSocket.java:180)
Thanks,
Abhi
--
Abhi Basu
> Hello Zeppelin users,
>
> I have been using Zeppelin for quite some time and have never used Tableau.
> If anyone has experience with both tools, please list the
> reasons why Zeppelin is better than Tableau, or the other way around.
>
> Thanks
>
--
Abhi Basu
Or does Zeppelin use the existing Hive interpreter for LLAP without any
changes?
Thanks,
Abhi
On Thu, Aug 17, 2017 at 4:00 PM, Abhi Basu <9000r...@gmail.com> wrote:
> Is there any additional config needed for the Hive interpreter to talk to
> Hive LLAP on HDP cluster?
>
>
Is there any additional config needed for the Hive interpreter to talk to
Hive LLAP on HDP cluster?
Thanks,
Abhi
--
Abhi Basu
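In case it is useful: LLAP is reached through the same Hive JDBC driver, so
the usual change is only the interpreter URL, pointed at the HiveServer2
Interactive endpoint instead of classic HiveServer2. A hedged sketch (the
host is a placeholder; 10500 is the HDP default port for HS2 Interactive,
and the hive.* property prefix assumes the %jdbc interpreter):

hive.url = jdbc:hive2://llap-host.example.com:10500/default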
java.lang.Thread.run(Thread.java:745)
Thanks,
Abhi
--
Abhi Basu
Can you please point me to documentation for setting up a Zeppelin install
on Windows that connects to a remote Hadoop cluster?
Thanks,
Abhi
--
Abhi Basu
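A hedged sketch of the usual approach: unpack the Zeppelin binary on Windows,
copy the cluster's client configs into a local folder, and point Zeppelin at
that folder before starting (C:\hadoop-conf is a placeholder):

REM folder holding core-site.xml, hdfs-site.xml, yarn-site.xml, hive-site.xml
REM copied over from the cluster
set HADOOP_CONF_DIR=C:\hadoop-conf
bin\zeppelin.cmd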
use zeppelin on my local computer and use it to run spark
>>>> executors on a distant yarn cluster since I can't easily install zeppelin
>>>> on the cluster gateway.
>>>>
>>>> I installed the correct hadoop version (2.6), and compiled zeppelin
>>>> (from the master branch) as following:
>>>>
>>>> *mvn clean package -DskipTests -Phadoop-2.6
>>>> -Dhadoop.version=2.6.0-cdh5.5.0 -Pyarn -Pspark-2.0 -Pscala-2.11*
>>>>
>>>> I also set HADOOP_HOME_DIR to /usr/local/lib/hadoop where my hadoop is
>>>> installed (I also tried with /usr/local/lib/hadoop/etc/hadoop/ where
>>>> the conf files such as yarn-site.xml are). I set
>>>> yarn.resourcemanager.hostname to the resource manager of the cluster (I
>>>> copied the value from the config file on the cluster) but when I start a
>>>> spark command it still tries to connect to 0.0.0.0:8032 as one can see
>>>> in the logs:
>>>>
>>>> *INFO [2016-11-01 20:48:26,581] ({pool-2-thread-2}
>>>> Client.java[handleConnectionFailure]:862) - Retrying connect to server:
>>>> 0.0.0.0/0.0.0.0:8032. Already tried 9
>>>> time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10,
>>>> sleepTime=1000 MILLISECONDS)*
>>>>
>>>> Am I missing something? Are there any additional parameters to
>>>> set?
>>>>
>>>> Thanks!
>>>>
>>>> Benoit
>>>>
>>>>
>>>>
>>>>
>>
>
--
Abhi Basu
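A hedged note on the 0.0.0.0:8032 retries above: that address is the YARN
default, so it usually means the client never found yarn-site.xml. Pointing
Zeppelin at the Hadoop conf directory (not HADOOP_HOME) in
conf/zeppelin-env.sh is the usual fix; the path below is a placeholder:

# conf/zeppelin-env.sh -- adjust the path to wherever yarn-site.xml lives
export HADOOP_CONF_DIR=/usr/local/lib/hadoop/etc/hadoop
export MASTER=yarn-client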
Is there a specific binary for CDH 5.8.0, Hadoop 2.6, and Spark 1.6?
Or is the best method to compile the source code with the appropriate switches?
Thanks,
Abhi
--
Abhi Basu
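For what it's worth, there is no CDH-specific binary; the usual route is a
source build with the vendor-repo profile. A hedged sketch for CDH 5.8 /
Spark 1.6 (flags follow the Zeppelin build docs; the exact hadoop.version
string is an assumption):

mvn clean package -DskipTests -Pspark-1.6 -Phadoop-2.6 \
    -Dhadoop.version=2.6.0-cdh5.8.0 -Pvendor-repo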
…and run the paragraph; it should work.
>
> On 19 September 2016 at 09:56, Abhi Basu <9000r...@gmail.com> wrote:
>
>> Built from source (0.61) today for CDH 5.8 and configured impala and hive
>> interpreters in Zeppelin along with other requirements.
>>
>> See the
zeppelin.jdbc.auth.type
zeppelin.jdbc.concurrent.max_connection 10
zeppelin.jdbc.concurrent.use true
zeppelin.jdbc.keytab.location
zeppelin.jdbc.principal

Dependencies (artifact / exclude):
org.apache.hive:hive-jdbc:0.14.0
org.apache.hadoop:hadoop-common:2.6.0
--
Abhi Basu
I think there is a Scala compatibility issue; I will try compiling with
the right switches.
On Wed, Sep 14, 2016 at 1:54 PM, Abhi Basu <9000r...@gmail.com> wrote:
> Yes that fixed some of the problems.
>
> I am using Zeppelin 0.6.1 binaries against CDH 5.8 (Spark 1.6.0). Wou
y? not the bin directory.
>
> On Wed, Sep 14, 2016 at 10:19 AM Abhi Basu <9000r...@gmail.com> wrote:
>
>> Tried pyspark command on same machine which uses Anaconda python and
>> sc.version returned value.
>>
>> Zeppelin:
>> zeppelin.python /home/c
.sh?
> Could you verify the same code works with ${SPARK_HOME}/bin/pyspark, on
> the same machine that zeppelin runs?
>
> Thanks,
> moon
>
>
> On Wed, Sep 14, 2016 at 8:07 AM Abhi Basu <9000r...@gmail.com> wrote:
>
>> Oops, sorry. The above code generated this
spark.concurrentSQL false
zeppelin.spark.importImplicit true
zeppelin.spark.maxResult 1000
zeppelin.spark.printREPLOutput true
zeppelin.spark.sql.stacktrace false
zeppelin.spark.useHiveContext true
On Wed, Sep 14, 2016 at 10:05 AM, Abhi Basu <9000r...@gmail.com> wrote:
> %pyspark
>
> input_file =
%pyspark
input_file = "hdfs:///tmp/filename.gz"  # HDFS path; the name is a placeholder
raw_rdd = sc.textFile(input_file)
Using this URL made it work:
jdbc:hive2://myhost.example.com:21050/;auth=noSasl
On Wed, Aug 31, 2016 at 11:13 AM, Abhi Basu <9000r...@gmail.com> wrote:
> Except spark-sql is geared more towards developers and our users are
> looking for a SQL engine like hive (except faster). :)
>
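Summarizing the working setup as a hedged sketch of the JDBC interpreter
properties (the host is a placeholder; Impala is reached through the Hive
driver on its JDBC port, 21050):

default.driver = org.apache.hive.jdbc.HiveDriver
default.url    = jdbc:hive2://myhost.example.com:21050/;auth=noSasl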
…through JDBC.
>
> http://www.cloudera.com/documentation/archive/impala/2-x/2-1-x/topics/impala_jdbc.html
>
> Thanks,
> Pradeep
>
>
> On Wed, Aug 31, 2016 at 10:45 AM, Abhi Basu <9000r...@gmail.com> wrote:
>
>> How do I set up a connection to Impala? Do I need to point to the
>> impala-jdbc jar in the dependencies?
How do I set up a connection to Impala? Do I need to point to the impala-jdbc
jar in the dependencies?
Thanks,
Abhi
On Wed, Aug 31, 2016 at 10:36 AM, Abhi Basu <9000r...@gmail.com> wrote:
> OK, got it. Added the hadoop jar to dependencies and it started working.
>
> Thanks.
>
> On
OK, got it. Added the hadoop jar to dependencies and it started working.
Thanks.
On Wed, Aug 31, 2016 at 10:24 AM, Abhi Basu <9000r...@gmail.com> wrote:
> So, path to the jars like /usr/lib/hive/* ?
>
> On Wed, Aug 31, 2016 at 9:53 AM, Jeff Zhang wrote:
>
>> You don't need to copy these jars manually, just specify them in the
>> interpreter setting page.
So, path to the jars like /usr/lib/hive/* ?
On Wed, Aug 31, 2016 at 9:53 AM, Jeff Zhang wrote:
> You don't need to copy these jars manually, just specify them in the
> interpreter setting page.
>
> On Wed, Aug 31, 2016 at 9:52 PM, Abhi Basu <9000r...@gmail.com> wrote:
>
…in the interpreter setting page.
>
> https://zeppelin.apache.org/docs/0.6.1/interpreter/hive.html#dependencies
>
> org.apache.hive:hive-jdbc:0.14.0
> org.apache.hadoop:hadoop-common:2.6.0
>
>
> On Wed, Aug 31, 2016 at 2:39 AM, Abhi Basu <9000r...@gmail.com> wrote:
>
>> Folks:
>
configure for Impala within the JDBC
section of interpreters.
Thanks,
Abhi
--
Abhi Basu
…Hive/Impala.
>
> You mean this?
>
> master spark://master:7077
>
>
>
> Then spark will connect to hdfs
>
--
Abhi Basu
In the past I have used Zeppelin on an edge node of a CDH cluster. I am
trying to figure out how to connect Zeppelin running on a CentOS node to a
remote Hadoop cluster, to be able to use Spark and Hive/Impala.
Thanks,
Abhi
--
Abhi Basu