YMMV, and I’m not sure my approach will work for your use case, but here is a
suggestion based on what I’ve done. In the first paragraph you can register
tables with code like this:
%spark
// dbtable and the temp-table name below are placeholders
val example = sqlContext.read.format("jdbc").options(
  Map("url" -> "jdbc:postgresql://localhost:5432/db_name",
      "dbtable" -> "schema.table_name")).load()
example.registerTempTable("example")
We are using the JDBC interpreter. The business analysts only know SQL and run
ad-hoc queries for their report exports to CSV.
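Once the table is registered, a follow-up paragraph like the one below keeps
the analysts in plain SQL (the table name assumes the registration above, and
the query itself is just an illustration):

%sql
SELECT * FROM example LIMIT 100

From there they can download the result grid as CSV from the Zeppelin UI.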
Cheers,
Ben
> On Jan 5, 2017, at 2:21 PM, t p wrote:
>
> Are you using JDBC or the PSQL interpreter? I had encountered something
> similar while using the PSQL interpreter and I had to restart Zeppelin.
Are you using JDBC or the PSQL interpreter? I had encountered something similar
while using the PSQL interpreter and I had to restart Zeppelin.
My experience using PSQL (PostgreSQL, HAWQ) was not as good as using
spark/scala wrappers (JDBC data source) to connect via JDBC and then register
temp tables.
We are getting “out of shared memory” errors when multiple users are running
SQL queries against our PostgreSQL DB, either simultaneously or throughout the
day. When this happens, Zeppelin 0.6.0 becomes unresponsive for any more SQL
queries. It looks like this is being caused by too many locks being held.
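One way to confirm the lock build-up (assuming direct access to the database;
this query is only an illustration, not something from Zeppelin) is to count
entries in pg_locks per backend:

SELECT pid, count(*) AS locks
FROM pg_locks
GROUP BY pid
ORDER BY locks DESC;

If the error comes with a hint about max_locks_per_transaction, it is the
server’s lock table that is filling up.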
Hello.
AFAIK the connections are not closed until the JDBC interpreter is restarted,
so https://github.com/apache/zeppelin/pull/1396 uses a connection pool to
control sessions.
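For anyone unfamiliar with the approach, here is a minimal sketch of the idea
using Apache Commons DBCP2; this is only an illustration of what pooling buys
you, not the PR’s actual code, and every setting below is a made-up value:

import org.apache.commons.dbcp2.BasicDataSource

// Illustrative pool; Zeppelin's real configuration will differ.
val pool = new BasicDataSource()
pool.setDriverClassName("org.postgresql.Driver")
pool.setUrl("jdbc:postgresql://localhost:5432/db_name")
pool.setMaxTotal(10)  // hard cap on concurrent physical connections
pool.setMaxIdle(2)    // idle connections beyond this are closed

// Callers borrow a connection; close() returns it to the pool instead of
// leaking a session on the PostgreSQL side.
val conn = pool.getConnection()
try {
  val rs = conn.createStatement().executeQuery("SELECT 1")
  while (rs.next()) println(rs.getInt(1))
} finally {
  conn.close()
}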
2016-10-19 2:43 GMT+09:00 Benjamin Kim :
> We are using Zeppelin 0.6.0 as a self-service for our clients to query our
> PostgreSQL databases.
We are using Zeppelin 0.6.0 as a self-service for our clients to query our
PostgreSQL databases. We are noticing that the connections are not closing
after each one of them is done. What is the normal operating procedure to have
these connections close when idle? Our scope for the JDBC interpreter