Hi All,
I've created test tables in HiveCLI (druid1, druid2) and test tables in
Beeline (beeline1, beeline2).
I want to be able to access Hive tables in Beeline and Beeline tables in
Hive. Is that possible?
I've set up hive-site.xml for both Hive and Spark to use the same warehouse
thinking
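For the shared-warehouse setup, a minimal sketch of the hive-site.xml properties involved, assuming both Hive and Spark are given the same copy of this file (the thrift URI and warehouse path below are placeholders for your environment):

```xml
<configuration>
  <!-- Both Hive and Spark should point at the same metastore service,
       so tables created in one show up in the other -->
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://localhost:9083</value>
  </property>
  <!-- Shared warehouse directory for managed tables -->
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/user/hive/warehouse</value>
  </property>
</configuration>
```

If the two engines use different metastores (e.g. Spark falling back to a local Derby metastore because it can't find hive-site.xml on its classpath), each will only see its own tables.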
Hi All,
How do you create an external Druid table via Spark?
I know that you can do it like this:
https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.1.0/using-druid/content/druid_anatomy_of_hive_to_druid.html
But the issue is that Spark was built on Hive 1.2.1:
https://spark.apache.org/docs/la
requirements are to use Thrift Server, which for some reason uses Spark SQL
instead of HiveQL. HiveQL should be the default behavior, since Thrift
Server is built on Hive.
Thanks,
Val
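For context, the Thrift Server / Beeline combination mentioned above is started and connected to roughly like this (a sketch per the Spark docs; host, port, and master are placeholders):

```shell
# Start the Spark Thrift JDBC/ODBC server (queries run through Spark SQL)
./sbin/start-thriftserver.sh --master local[2]

# Connect to it with Beeline over JDBC
beeline -u jdbc:hive2://localhost:10000
```

Statements submitted through this session are parsed by Spark SQL, not by Hive's own parser, which is the behavior being questioned above.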
On Tue, Jul 2, 2019 at 4:34 PM Valeriy Trofimov
wrote:
> Hi All,
>
> I'm trying to create an external table using Th
Hi All,
I'm trying to create an external table using Thrift Server, to which I'm
connected via Beeline. To do this, I run the following Hive SQL query, as
described here:
https://cwiki.apache.org/confluence/display/Hive/Druid+Integration
CREATE EXTERNAL TABLE druid_table_1
STORED BY 'org.
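For reference, the statement on the linked wiki page uses the Druid storage handler; a sketch of the full DDL, with the datasource name as a placeholder:

```sql
CREATE EXTERNAL TABLE druid_table_1
STORED BY 'org.apache.hadoop.hive.druid.DruidStorageHandler'
TBLPROPERTIES ("druid.datasource" = "wikiticker");
```

Whether Spark's Thrift Server accepts this depends on it supporting the `STORED BY` clause, which is a Hive-specific extension.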
Hi All,
I want to use Spark SQL to access multiple DBs, including inter-dataset
joins. I use Thrift Server to access the Hive DB, because that is what it's
designed to do. How can I use Thrift Server to access other DBs?
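One possible approach, sketched here under the assumption that the external DB is reachable over JDBC: register it as a Spark SQL table via the `jdbc` data source, so it becomes queryable (and joinable with Hive tables) in the same Thrift Server session. The connection URL, credentials, and the `hive_customers` table below are placeholders:

```sql
CREATE TABLE pg_orders
USING jdbc
OPTIONS (
  url 'jdbc:postgresql://dbhost:5432/sales',
  dbtable 'public.orders',
  user 'spark',
  password 'secret'
);

-- Join the JDBC-backed table against a Hive-managed table
SELECT h.customer_id, p.total
FROM hive_customers h
JOIN pg_orders p ON h.customer_id = p.customer_id;
```

The JDBC driver jar for the external database would need to be on the Thrift Server's classpath for this to work.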
Thanks,
Val
Hi All,
What Java version should I use to build Spark on Ubuntu, and what are the
instructions for installing it on Ubuntu?
The official doc is missing this info:
https://spark.apache.org/docs/latest/building-spark.html
If I use the default JDK, I get a build error; Googling it shows that I need
to
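For what it's worth, Spark 2.x is documented against Java 8, and newer Ubuntu releases default to a later JDK, which is a common source of build errors. A sketch for Ubuntu (package name and JAVA_HOME path per Ubuntu's OpenJDK packaging):

```shell
# Install OpenJDK 8 (Spark 2.x builds are documented against Java 8)
sudo apt-get update
sudo apt-get install -y openjdk-8-jdk

# Point JAVA_HOME at it before building
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64

# Build Spark with the bundled Maven wrapper
./build/mvn -DskipTests clean package
```

If several JDKs are installed, `update-alternatives --config java` can switch the system default instead of setting JAVA_HOME per shell.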