You can also point out that there's a whole "Migrating from 1.6 to
2.0" section missing there:
https://spark.apache.org/docs/2.0.0-preview/sql-programming-guide.html#migration-guide
*Romi Kuntsman*, *Big Data Engineer*
http://www.totango.com
On Tue, Jul 5, 2016 at 12:24 PM, nihe
lt back to you
On Thu, Jan 15, 2015 at 1:52 PM, preeze wrote:
> From the official spark documentation
> (http://spark.apache.org/docs/1.2.0/running-on-yarn.html):
>
> "In yarn-cluster mode, the Spark driver runs i
retrying).
On Wed, Apr 1, 2015 at 12:58 PM, Gil Vernik wrote:
> I actually saw the same issue, where we analyzed a container with a few
> hundred GBs of zip files - one was corrupted and Spark exited with an
> Exception on the entire
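The failure mode described above, one corrupt archive aborting the whole job, can be guarded against by catching read errors per file instead of letting them propagate. A minimal plain-Python sketch of the pattern (no Spark involved; the standard `zipfile` module stands in for whatever reader the job uses):

```python
import io
import zipfile

def count_members(name, payload):
    """Return the number of entries in a zip payload, or None if it is corrupt.

    Catching BadZipFile per file keeps one bad archive from failing the batch.
    """
    try:
        with zipfile.ZipFile(io.BytesIO(payload)) as zf:
            return len(zf.namelist())
    except zipfile.BadZipFile:
        return None  # log and skip instead of aborting the whole run

# one valid archive and one corrupt blob
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("a.txt", "hello")

results = {name: count_members(name, payload)
           for name, payload in [("good.zip", buf.getvalue()),
                                 ("bad.zip", b"this is not a zip")]}
print(results)  # {'good.zip': 1, 'bad.zip': None}
```

In a Spark job the same try/except would sit inside the function passed to a per-record or per-partition transformation, so one unreadable input yields a sentinel value instead of a task failure.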
$SocketTask.run(ControllerThreadSocketFactory.java:158)
at java.lang.Thread.run(Thread.java:745)
On Wed, Apr 1, 2015 at 6:46 PM, Ted Yu wrote:
> bq. writing the output (to Amazon S3) failed
>
> What's the value of "fs
a whole new RDD again.
On Thu, Sep 17, 2015 at 10:07 AM, Gil Vernik wrote:
> Hi,
>
> I have the following case, which I am not sure how to resolve.
>
> My code uses HadoopRDD and creates various RDDs on top of it
> (MapP
sparkContext is available on the driver, not on executors.
To read from Cassandra, you can use something like this:
https://github.com/datastax/spark-cassandra-connector/blob/master/doc/2_loading.md
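The point generalizes beyond Spark: a handle that exists only on the driver (a context, a connection pool, a lock) usually cannot be serialized into a closure shipped to workers, which is why per-worker resources must be created on the worker side. A small plain-Python illustration of why (this is not the Spark serializer; `threading.Lock` stands in for any driver-only resource):

```python
import pickle
import threading

class DriverSideHandle:
    """Stands in for a context/connection that exists only in one process."""
    def __init__(self):
        self._lock = threading.Lock()  # holds an OS resource: not picklable

handle = DriverSideHandle()
try:
    pickle.dumps(handle)  # what shipping the handle to another process would need
    shippable = True
except TypeError:
    shippable = False

print(shippable)  # False - create such handles on the worker side instead
```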
On Mon, Sep 21, 2015 at 2:27 PM
(SparkContext.scala:103)
at org.apache.spark.SparkContext.getSchedulingMode(SparkContext.scala:1501)
at org.apache.spark.SparkContext.postEnvironmentUpdate(SparkContext.scala:2005)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:543)
at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:61)
Th
https://cwiki.apache.org/confluence/display/SPARK/Contributing+to+Spark
On Fri, Oct 30, 2015 at 1:25 PM, Saurabh Shah
wrote:
> Hello, my name is Saurabh Shah and I am a second year undergraduate
> student at DA-IICT, Gandh
wait, this is an identical email to the one sent by "Aadi Thakar <
thakkar.aa...@gmail.com>" a day before.
Could it be a spambot?
On Mon, Nov 2, 2015 at 10:12 AM, Romi Kuntsman wrote:
> https://cwiki.apache.org/confl
can/should be in Spark 2.0)
On Fri, Nov 6, 2015 at 2:53 PM, Jean-Baptiste Onofré
wrote:
> Hi Sean,
>
> Happy to see this discussion.
>
> I'm working on PoC to run Camel on Spark Streaming. The purpose is to have
n - multiple
levels of aggregations, iterative machine learning algorithms etc.
Sending the whole "workplan" to the Spark framework would be, as I see it,
the next step of its evolution, much as stored procedures send logic with
many SQL queries to the database.
Was it more clear t
e fundamentally different, and building the framework around that will
benefit each of those flows (like events instead of microbatches in
streaming, worker-side intermediate processing in batch, etc).
So where is the best way to have a full Spark 2.0 discussion?
If they have a problem managing memory, shouldn't there be an OOM?
Why does AppClient throw an NPE?
On Mon, Nov 9, 2015 at 4:59 PM, Akhil Das
wrote:
> Is that all you have in the executor logs? I suspect some of those jo
ay be a network timeout etc)
On Mon, Nov 9, 2015 at 6:00 PM, Akhil Das
wrote:
> Did you find anything regarding the OOM in the executor logs?
>
> Thanks
> Best Regards
>
> On Mon, Nov 9, 2015 at 8:44 PM, Romi Kun
Hi Michael,
What about the memory leak bug?
https://issues.apache.org/jira/browse/SPARK-11293
Even after the memory rewrite in 1.6.0, it still happens in some cases.
Will it be fixed for 1.6.1?
Thanks,
On Mon, Feb 1, 2016 at 9:59 PM
Is it possible to make RC versions available via Maven? (many projects do
that)
That will make integration much easier, so many more people can test the
version before the final release.
Thanks!
On Tue, Feb 23, 2016 at 8:07 AM, Luciano
Sounds fair. Is it to avoid cluttering Maven Central with too many
intermediate versions?
What do I need to add in my pom.xml <repositories> section to make it work?
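For reference, a staged release can be made resolvable with a repository entry like the following. This is a generic sketch assuming the standard Apache Nexus staging area; the exact repository URL for a given RC is announced in its release vote thread:

```xml
<!-- sketch: let Maven resolve release-candidate artifacts from staging;
     replace the URL with the per-RC one posted in the vote thread -->
<repositories>
  <repository>
    <id>apache-staging</id>
    <url>https://repository.apache.org/content/repositories/staging/</url>
  </repository>
</repositories>
```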
On Tue, Feb 23, 2016 at 9:34 AM, Reynold Xin wrote:
> We usually publish t
+1 for Java 8 only
I think it will make it easier to build a unified API for Java and Scala,
instead of Java wrappers over the Scala API.
On Mar 24, 2016 11:46 AM, "Stephen Boesch" wrote:
> +1 for java8 only, +1 for 2.11+ only. At this point scala libraries
> supporting only 2.10 are typicall