Depending on the size of the data, I recommend scheduling a regular extract in Tableau. Tableau converts the data into its own in-memory representation outside of Spark (it can also spill to disk if memory is too small) and then works against that extract. Querying the database directly over a live connection is less efficient. Also, always use the newest version of Tableau.
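For reference, a live Tableau connection to Spark SQL goes through Spark's Thrift server. A minimal sketch of bringing it up and sanity-checking it before pointing Tableau (or a scheduled extract refresh) at it — the master URL, port, and table assumptions are illustrative, adjust for your cluster:

```shell
# Start the Spark Thrift (HiveServer2-compatible) server so BI tools
# such as Tableau can connect over ODBC/JDBC.
# --master and the port are assumptions; adjust for your deployment.
$SPARK_HOME/sbin/start-thriftserver.sh \
  --master yarn \
  --hiveconf hive.server2.thrift.port=10000

# Verify the endpoint with beeline before configuring Tableau:
$SPARK_HOME/bin/beeline \
  -u jdbc:hive2://localhost:10000 \
  -e "SHOW TABLES;"
```

Tableau's Spark SQL connector (via the Simba ODBC driver) then points at that host/port; with a scheduled extract, the data is pulled out of Spark once per refresh, so dashboards read from Tableau's extract rather than issuing live Spark queries on every interaction.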
> On 30 Jan 2017, at 21:57, Mich Talebzadeh <mich.talebza...@gmail.com> wrote:
>
> Hi,
>
> Has anyone tried using Tableau on Spark SQL?
>
> Specifically, how does Tableau handle the in-memory capabilities of Spark?
>
> As I understand it, Tableau uses its own proprietary SQL against, say, Oracle. That is well established. So for each product, Tableau will try to use its own version of SQL against that product, like Spark or Hive.
>
> However, when I last tried Tableau on Hive, the mapping and performance were not that good in comparison with the same tables and data in Hive.
>
> My approach has been to take the Oracle 11g sh schema containing a star schema, then create and ingest the same tables and data into Hive tables, run Tableau against these tables, and do the performance comparison. Given that Oracle is widely used with Tableau, does this test make sense?
>
> Thanks.
>
>
> Dr Mich Talebzadeh
>
> LinkedIn
> https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>
> http://talebzadehmich.wordpress.com
>
> Disclaimer: Use it at your own risk. Any and all responsibility for any loss, damage or destruction of data or any other property which may arise from relying on this email's technical content is explicitly disclaimed. The author will in no case be liable for any monetary damages arising from such loss, damage or destruction.