t, true, false
> +-
> Relation[id#4,stat_repository_type#5,stat_repository_id#6,stat_holder_type#7,stat_holder_id#8,stat_coverage_type#9,stat_coverage_id#10,stat_membership_type#11,stat_membership_id#12,context#13]
> JDBCRelation(stats) (state=,code=0)
>
JDBCRelation also extends BaseRelation. Is there something I need to
implement in the SolrRelation class to be able to create Parquet tables from
Solr tables?
Looking forward to your suggestions.
Thanks,
--
Kiran Chitturi
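[For context, a minimal sketch of what a Spark data source relation looks like under the Spark 2.0 data source (v1) API. The class name and constructor parameters below are hypothetical, not the actual spark-solr classes; implementing BaseRelation with TableScan is typically enough for Spark to materialize the relation, e.g. when writing it out as Parquet.]

```scala
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.{Row, SQLContext}
import org.apache.spark.sql.sources.{BaseRelation, TableScan}
import org.apache.spark.sql.types.StructType

// Hypothetical sketch of a relation backed by Solr; SolrRelationSketch
// and `collection` are placeholder names for illustration.
class SolrRelationSketch(
    override val sqlContext: SQLContext,
    collection: String,
    override val schema: StructType)
  extends BaseRelation with TableScan {

  // TableScan is the simplest scan interface: return every row.
  override def buildScan(): RDD[Row] = {
    // Placeholder: a real implementation would query Solr here.
    sqlContext.sparkContext.emptyRDD[Row]
  }
}
```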
Nevermind, there is already a Jira open for this
https://issues.apache.org/jira/browse/SPARK-16698
On Fri, Aug 5, 2016 at 5:33 PM, Kiran Chitturi <
kiran.chitt...@lucidworks.com> wrote:
> Hi,
>
> During our upgrade to 2.0.0, we found this issue with one of our failing
> tests
or someone else. Would it make sense to update so
that the Hive metastore and the Spark package are on the same Derby version?
Thanks,
--
Kiran Chitturi
(QueryExecution.scala:83)
> at
> org.apache.spark.sql.execution.QueryExecution.executedPlan(QueryExecution.scala:83)
> at org.apache.spark.sql.Dataset.withTypedCallback(Dataset.scala:2558)
> at org.apache.spark.sql.Dataset.head(Dataset.scala:1924)
> at org.apache.spark.sql.Dataset.take(Dataset.scala:2139)
> ... 48 elided
> scala>
The same happens for JSON files too. Is this a known issue in 2.0.0?
Removing the field with dots from the CSV/JSON file fixes the issue :)
Thanks,
--
Kiran Chitturi
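[As an aside: the usual workaround for column names containing dots is to backtick-quote them, since Spark otherwise parses "a.b" as field b of struct a. A small sketch, assuming a SparkSession named `spark` and a hypothetical input file; whether this works in 2.0.0 itself depends on the bug tracked in the linked JIRA.]

```scala
// Backtick-quote column names that contain dots, e.g. for input
// records like {"user.name": "kiran"} (file name is hypothetical).
val df = spark.read.json("people.json")
df.select("`user.name`").show()
```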
oved from Spark, and can be
> found at the Apache Bahir project: http://bahir.apache.org/
>
> I don't think there's a release for Spark 2.0.0 yet, though (only for
> the preview version).
>
>
> On Wed, Aug 3, 2016 at 8:40 PM, Kiran Chitturi
> wrote:
> > Hi,
>
ssing streaming packages?
If so, how can we get someone to release and publish new versions
officially?
I would like to help in any way possible to get these packages released and
published.
Thanks,
--
Kiran Chitturi
> (executor 2 exited caused by one of the running tasks) Reason: Remote RPC
> client di
Is it possible for an executor to die when the jobs in the SparkContext are
cancelled? Apart from https://issues.apache.org/jira/browse/SPARK-14234, I
could not find any JIRAs that report this error.
Sometimes,
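[For reference, job cancellation in Spark is normally scoped to a job group and interrupts running tasks rather than killing the executor JVM. A minimal sketch using the standard SparkContext APIs; the group id and description are made up.]

```scala
import org.apache.spark.SparkContext

// Sketch: cancelling a group of jobs. Cancellation interrupts the
// group's running tasks; it is not expected to kill executors.
def cancelGroup(sc: SparkContext): Unit = {
  sc.setJobGroup("my-group", "cancellable work", interruptOnCancel = true)
  // ... submit actions here ...
  sc.cancelJobGroup("my-group")
}
```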
>>>> On 14 April 2016 at 19:26, Josh Rosen wrote:
Thanks Hyukjin for the suggestion. I will take a look at implementing the
Solr datasource with CatalystScan.
e ranges, I would like the timestamp filters to
be pushed down to the Solr query.
Are there limitations on the types of filters that are pushed down for
Timestamp types?
Is there something that I should do in my code to fix this ?
Thanks,
--
Kiran Chitturi
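[For reference, when the source-level Filter API does receive timestamp predicates, they arrive as java.sql.Timestamp values and could be mapped to Solr range clauses roughly as below. This is a hypothetical sketch (class and variable names invented); CatalystScan, as suggested in the thread, instead exposes the raw Catalyst expressions when the Filter API does not pass the predicates through.]

```scala
import java.sql.Timestamp
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.{Row, SQLContext}
import org.apache.spark.sql.sources._
import org.apache.spark.sql.types.StructType

// Hypothetical sketch: translating pushed-down timestamp filters
// into Solr range-query syntax ([..] inclusive, {..} exclusive).
class TimestampPushdownSketch(
    override val sqlContext: SQLContext,
    override val schema: StructType)
  extends BaseRelation with PrunedFilteredScan {

  override def buildScan(requiredColumns: Array[String],
                         filters: Array[Filter]): RDD[Row] = {
    val fq = filters.collect {
      case GreaterThan(attr, v: Timestamp)        => s"$attr:{${v.toInstant} TO *]"
      case GreaterThanOrEqual(attr, v: Timestamp) => s"$attr:[${v.toInstant} TO *]"
      case LessThan(attr, v: Timestamp)           => s"$attr:[* TO ${v.toInstant}}"
      case LessThanOrEqual(attr, v: Timestamp)    => s"$attr:[* TO ${v.toInstant}]"
    }
    // Placeholder: a real implementation would send `fq` to Solr.
    sqlContext.sparkContext.emptyRDD[Row]
  }
}
```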
g if spark-packages.org can support AsciiDoc files in addition to
README.md files.
Thanks,
--
Kiran Chitturi