Just out of curiosity, does anyone know what kind of account it is?
https://issues.apache.org/jira/secure/ViewProfile.jspa?name=Thincrs
Was wondering if it's a bot used for some purpose.
Some automated tool or something. Unclear from
https://www.linkedin.com/company/thincrs
I'll reply and ask them not to add automated comments to JIRA.
On Sat, Dec 1, 2018 at 8:22 AM Hyukjin Kwon wrote:
>
> Just out of curiosity, does anyone know what kind of account it is?
> https://issues.apach
Hi, Ryan,
I have to emphasize that the catalog is a really important component of Spark
SQL, or of any analytics platform. Thus, a careful design is needed to ensure
it works as expected. Based on my previous discussions with many community
members, Spark SQL needs a catalog interface so that we can mount mul
Xiao,
I do have opinions about how multi-catalog support should work, but I don't
think we are at a point where there is consensus. That's why I've started
discussion threads and added the CatalogTableIdentifier PR instead of a
comprehensive design doc. You have opinions about how users should int
Jackey,
The proposal to add a sql-api module was based on the need to have the SQL
API classes, like `Table`, available to Catalyst so that we can have logical
plans and analyzer rules in that module. But nothing in Catalyst is public,
so it doesn't contain user-implemented APIs. There are 3 options
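To make the shape of this concrete, here is a rough Scala sketch of the kind
of public table abstraction in question; the names and methods below are
illustrative assumptions, not the API actually proposed:

import org.apache.spark.sql.types.StructType

// Illustrative sketch: a minimal public table abstraction that a sql-api
// module could expose to Catalyst. The names are hypothetical.
trait Table {
  def name: String
  def schema: StructType
}

// A catalog that resolves identifiers to Table instances; also a sketch.
trait TableCatalog {
  def loadTable(ident: String): Table
  def createTable(ident: String, schema: StructType): Table
}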
Hi, Ryan,
I am trying to avoid discussing each specific topic about catalog federation
before we decide on the framework for multi-catalog support.
- *CatalogTableIdentifier*: The PR
https://github.com/apache/spark/pull/21978 does nothing but add an
interface. In the PR, we did not discuss h
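For anyone who has not opened the PR, the rough idea is an identifier that can
carry a catalog name in addition to the database and table names; a
hypothetical sketch for illustration, not the code from the PR:

// Hypothetical sketch only; not the code in the PR.
case class CatalogTableIdentifier(
    table: String,
    database: Option[String] = None,
    catalog: Option[String] = None) {
  // Renders as `catalog`.`db`.`table`, omitting parts that are not set.
  def quotedString: String =
    (catalog.toSeq ++ database.toSeq :+ table).map(p => s"`$p`").mkString(".")
}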
> I am trying to avoid discussing each specific topic about catalog federation
> before we decide on the framework for multi-catalog support.
I’ve tried to open discussions on this for the last 6+ months because we
need it. I understand that you’d like a comprehensive plan for supporting
more than one ca
Dear all,
I used the command "*./build/mvn -Pyarn -Phadoop-2.7 -Dhadoop.version=2.7.6
-DskipTests clean package*" to
compile Spark 2.4, but it failed on Spark Project Tags, which throws the error:
*Cannot run program
"/Library/Java/JavaVirtualMachines/jdk1.8.0_131.jdk/Contents/Home/jre/bin/javac":
error=2, No s
javac is at $JAVA_HOME/bin/javac on Mac OS installations. It has
always worked fine on my Mac and for many other developers. You
probably have an env problem: either that's not actually where Java is,
or that isn't the JAVA_HOME that is actually reaching your build.
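If it helps, here is a small Scala check (my own sketch, not part of the Spark
build) that verifies JAVA_HOME is set and points at a JDK with javac in its
bin directory:

import java.nio.file.{Files, Paths}

object CheckJavaHome {
  def main(args: Array[String]): Unit = {
    sys.env.get("JAVA_HOME") match {
      case None =>
        println("JAVA_HOME is not set")
      case Some(home) =>
        // The Maven build ultimately invokes $JAVA_HOME/bin/javac.
        val javac = Paths.get(home, "bin", "javac")
        if (Files.isExecutable(javac)) println(s"javac found at $javac")
        else println(s"no javac at $javac; JAVA_HOME may point at a JRE instead of a JDK")
    }
  }
}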
On Sat, Dec 1, 2018 at 9:53 PM wuyi wrot
Hi, Ryan,
Let us first focus on answering the most fundamental question before
discussing the various related topics: what is a catalog in Spark SQL?
My definition of a catalog is based on the database catalog. Basically, the
catalog provides a service that manages the metadata/definitions of database
ob
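To illustrate what "a service that manages the metadata/definitions of
database objects" could look like as an interface, here is a rough Scala
sketch; the trait and method names are assumptions for illustration, not an
agreed-upon design:

trait CatalogService {
  // Metadata-only operations: the catalog answers questions about
  // definitions, it does not read or write table data itself.
  def listDatabases(): Seq[String]
  def listTables(database: String): Seq[String]
  def tableExists(database: String, table: String): Boolean
  def dropTable(database: String, table: String): Unit
}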
Just curious about the need for a catalog within Spark.
Spark interfaces with other systems, many of which have a catalog of their
own (e.g. RDBMSes, HBase, Cassandra), while some don't (e.g. HDFS and other
filesystems).
So what is the purpose of having this catalog within Spark for tables defin
Hi Owen, thanks for your suggestion.
I rechecked my env and did not find anything wrong with JAVA_HOME, but I
agree with you that there must be something wrong with the system env.
For now, I created a symlink (named javac) under $JAVA_HOME/jre/bin
pointing to $JAVA_HOME/bin/javac to work around thi