On Jun 22, 2016, at 1:14 PM, Michael Armbrust <mich...@databricks.com> wrote:
+1
On Wed, Jun 22, 2016 at 11:33 AM, Jonathan Kelly <jonathaka...@gmail.com> wrote:
+1
On Wed, Jun 22, 2016 at 10:41 AM Tim Hunter <timhun...@databricks.com> wrote:
+1. This release passes all tests on the graphframes and tensorframes packages.
On Wed, Jun 22, 2016 at 7:19 AM, Cody Koeninger <c...@koeninger.org> wrote:
If we're considering backporting changes for the 0.8 Kafka integration, I am
sure there are people who would like to get
https://issues.apache.org/jira/browse/SPARK-10963 into 1.6.x as well.
On Wed, Jun 22, 2016 at 7:41 AM, Sean Owen <so...@cloudera.com> wrote:
> Good call, probably worth back-porting; I'll try to do that. I don't
> think it blocks a release, but it would be good to get into a next RC,
> if any.
>
> On Wed, Jun 22, 2016 at 11:38 AM, Pete Robbins <robbin...@gmail.com> wrote:
>> This has failed regularly on our 1.6 stream builds
>> (https://issues.apache.org/jira/browse/SPARK-6005). It looks fixed in 2.0?
>>
>> On Wed, 22 Jun 2016 at 11:15 Sean Owen <so...@cloudera.com> wrote:
>>>
>>> Oops, one more in the "does anybody else see this" department:
>>>
>>> - offset recovery *** FAILED ***
>>>   recoveredOffsetRanges.forall(((or: (org.apache.spark.streaming.Time,
>>>     Array[org.apache.spark.streaming.kafka.OffsetRange])) =>
>>>     earlierOffsetRangesAsSets.contains(scala.Tuple2.apply[org.apache.spark.streaming.Time,
>>>     scala.collection.immutable.Set[org.apache.spark.streaming.kafka.OffsetRange]](or._1,
>>>     scala.this.Predef.refArrayOps[org.apache.spark.streaming.kafka.OffsetRange](or._2).toSet[org.apache.spark.streaming.kafka.OffsetRange]))))
>>>   was false Recovered ranges are not the same as the ones generated
>>>   (DirectKafkaStreamSuite.scala:301)
>>>
>>> This actually fails consistently for me too in the Kafka integration
>>> code. Not timezone related, I think.
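
For anyone decoding the desugared predicate in the failure message above: it
checks that every recovered (batch time, offset ranges) pair was among the
ones generated before the restart, comparing the ranges as sets so ordering
within a batch does not matter. A minimal, self-contained sketch of that
check, where `Time` and `OffsetRange` are simplified stand-ins for the real
Spark classes and the data is invented for illustration:

```scala
// Simplified stand-ins for org.apache.spark.streaming.Time and
// org.apache.spark.streaming.kafka.OffsetRange (hypothetical, for illustration).
case class Time(milliseconds: Long)
case class OffsetRange(topic: String, partition: Int, fromOffset: Long, untilOffset: Long)

// Offset ranges generated before the (simulated) driver restart.
val earlierOffsetRanges: Seq[(Time, Array[OffsetRange])] = Seq(
  (Time(1000L), Array(OffsetRange("topic", 0, 0L, 10L))),
  (Time(2000L), Array(OffsetRange("topic", 0, 10L, 20L)))
)

// Convert each Array to a Set so comparison ignores ordering within a batch.
val earlierOffsetRangesAsSets: Seq[(Time, Set[OffsetRange])] =
  earlierOffsetRanges.map { case (t, ors) => (t, ors.toSet) }

// Offset ranges recovered from the checkpoint after restart (order may differ).
val recoveredOffsetRanges: Seq[(Time, Array[OffsetRange])] = Seq(
  (Time(2000L), Array(OffsetRange("topic", 0, 10L, 20L))),
  (Time(1000L), Array(OffsetRange("topic", 0, 0L, 10L)))
)

// Readable form of the desugared predicate: every recovered pair must have
// been generated earlier.
val allRecovered = recoveredOffsetRanges.forall { case (time, ranges) =>
  earlierOffsetRangesAsSets.contains((time, ranges.toSet))
}

println(allRecovered)
```

The reported `was false` means at least one recovered pair had no matching
earlier pair, which is why the suite prints "Recovered ranges are not the
same as the ones generated".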
>
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
> For additional commands, e-mail: dev-h...@spark.apache.org
>