Hi,
In the past, the Apache Spark community has created preview packages (not
official releases) and used those as opportunities to ask community members
to test upcoming versions of Apache Spark. Several people in the Apache
community have suggested we conduct votes for these preview packages.
Considering the references, it seems *RedundantIfChecker* is okay then.
Could I try to add this one?
*RedundantIfChecker* (see
http://www.scalastyle.org/rules-dev.html#org_scalastyle_scalariform_RedundantIfChecker
)
It seems there are two usages of this. The rule simply checks for
if (cond) true else false
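For illustration, a small sketch of the pattern the rule flags and the
simplification it suggests (my own example, not taken from the Scalastyle
docs):

  // flagged by RedundantIfChecker: the if/else adds nothing
  def isAdult(age: Int): Boolean = if (age >= 18) true else false

  // equivalent without the redundant if
  def isAdult2(age: Int): Boolean = age >= 18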
Yeah, can you open a JIRA with that reproduction please? You can ping me
on it.
On Tue, May 17, 2016 at 4:55 PM, Reynold Xin wrote:
It seems like the problem here is that we are not using unique names
for mapelements_isNull?
On Tue, May 17, 2016 at 3:29 PM, Koert Kuipers wrote:
hello all, we are slowly expanding our test coverage for spark
2.0.0-SNAPSHOT to more in-house projects. today i ran into this issue...
this runs fine:
import org.apache.spark.sql.catalyst.encoders.RowEncoder

// identity map over Rows, passing an explicit encoder built from the schema
val df = sc.parallelize(List(("1", "2"), ("3", "4"))).toDF("a", "b")
df
  .map(row => row)(RowEncoder(df.schema))
  .select("a", "b")
  .show
how
Hi,
I saw a replay of a talk about what’s coming in Spark 2.0 and the
performance improvements…
I am curious about indexing of data sets.
In HBase/MapRDB you can create ordered sets of indexes through an inverted
table.
Here, you can take the intersection of the indexes to find the result set of
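A rough Scala sketch of that index-intersection idea, with hypothetical
index contents (sets of row keys per attribute value):

  // hypothetical inverted indexes: attribute value -> set of row keys
  val colorIndex: Map[String, Set[Long]] = Map("red" -> Set(1L, 2L, 5L))
  val sizeIndex: Map[String, Set[Long]] = Map("large" -> Set(2L, 3L, 5L))

  // rows matching color = "red" AND size = "large": intersect the key sets
  val matches = colorIndex.getOrElse("red", Set.empty[Long])
    .intersect(sizeIndex.getOrElse("large", Set.empty[Long]))
  // matches == Set(2L, 5L)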
Perhaps you need to make the "compile" task of the appropriate module
depend on the task that generates the resource file?
Sorry but my knowledge of sbt doesn't really go too far.
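A minimal build.sbt sketch of that idea, assuming the generator task is
named generateBuildInfo (a hypothetical name):

  // hypothetical task that writes the version resource file
  lazy val generateBuildInfo = taskKey[Unit]("generate the build info resource")

  generateBuildInfo := {
    val file = (resourceManaged in Compile).value / "build-info.properties"
    IO.write(file, s"version=${version.value}\n")
  }

  // make compile depend on the generator so the file always exists
  compile in Compile := ((compile in Compile) dependsOn generateBuildInfo).value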
On Tue, May 17, 2016 at 11:58 AM, dhruve ashar wrote:
We are trying to pick the Spark version automatically from the pom instead
of manually modifying the files. This also includes richer pieces of
information, like the last commit, the version, and the user who built the
code, to better identify the framework that is running.
The setup is as follows:
- A shell script genera
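As an illustration, a minimal Scala sketch of reading such a generated
properties file at runtime; the resource name and keys here are assumptions:

  import java.util.Properties

  // load the generated resource from the classpath (name is hypothetical)
  val props = new Properties()
  val in = getClass.getResourceAsStream("/spark-version-info.properties")
  if (in != null) {
    try props.load(in) finally in.close()
  }
  println(s"version=${props.getProperty("version", "<unknown>")} " +
    s"commit=${props.getProperty("revision", "<unknown>")}")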
Hi Yang,
I think it was deleted accidentally while we were working on the API
migration. We will add it back (
https://issues.apache.org/jira/browse/SPARK-15367).
Thanks,
Yin
On Fri, May 13, 2016 at 2:47 AM, 汪洋 wrote:
> Hi all,
>
> I notice that HiveContext used to have a refreshTable() method,
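For context, a sketch of how refreshTable was typically called on a
HiveContext in Spark 1.x, with a hypothetical table name:

  // invalidate cached metadata so Spark picks up changes made outside it
  hiveContext.refreshTable("my_table")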