Hello,
I agree that Spark should check whether the underlying data source
supports default values or not, and adjust its behavior accordingly.
If we follow this direction, do you see the default-values capability
as being in scope of the "DataSourceV2 capability API"?
Best regards,
Alessandro
On Fri, 21 De
Hi Ryan,
That's a good point. Since in this case Spark is just a channel that passes
the user's actions on to the data source, we should think in terms of what
actions the data source supports.
Following this direction, it makes more sense to delegate everything to
data sources.
As the first step, maybe we should no
I think it is good to know that not all sources support default values.
That makes me think that we should delegate this behavior to the source,
have a way for sources to signal that they accept default values in DDL (a
capability), and assume by default that they do not.
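To make the idea concrete, here is a rough sketch of what that could look
like. The SupportsDefaultValues trait and the validateDefaultClause check
below are hypothetical, just to illustrate the shape of the capability; they
are not part of the existing DataSourceV2 API.

// Hypothetical marker trait: a v2 source mixes this in to signal that it
// can store and apply column default values declared in DDL.
trait SupportsDefaultValues {
  def acceptsDefaultValues: Boolean = true
}

// A minimal stand-in for a v2 table, just for this sketch.
trait Table {
  def name: String
}

// How the DDL path could consult the capability: unless the table explicitly
// opts in, a DEFAULT clause in CREATE/ALTER TABLE is rejected.
def validateDefaultClause(table: Table, columnsWithDefaults: Seq[String]): Unit = {
  val supported = table match {
    case t: SupportsDefaultValues => t.acceptsDefaultValues
    case _ => false // assume no support unless the source says otherwise
  }
  if (columnsWithDefaults.nonEmpty && !supported) {
    throw new UnsupportedOperationException(
      s"Table ${table.name} does not accept DEFAULT values for columns: " +
        columnsWithDefaults.mkString(", "))
  }
}

With something like this, a source that does not opt in keeps today's
behavior, and the failure surfaces at DDL time instead of at write time.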
On Thu, Dec 20, 2018
I guess my question is: why is this a Spark-level behavior? Say the user has
an underlying source that has different behavior at the source level. If in
Spark they set a new default and it's added to the catalog, is the source
expected to propagate this? Or does the user have to be
srowen commented on a change in pull request #163: Announce the schedule of
2019 Spark+AI summit at SF
URL: https://github.com/apache/spark-website/pull/163#discussion_r243353800
File path: site/sitemap.xml
@@ -139,657 +139,661 @@
- https://spark.apache.org/re