Hi All,
Just wanted to know if there is any workaround or resolution for the issue
below in standalone mode:
https://issues.apache.org/jira/browse/SPARK-9559
Ashish
YARN may be a workaround.
On Thu, Feb 18, 2016 at 4:13 PM, Ashish Soni wrote:
> Hi All,
>
> Just wanted to know if there is any workaround or resolution for the issue
> below in standalone mode:
>
> https://issues.apache.org/jira/browse/SPARK-9559
>
> Ashish
>
I saw this slide:
http://image.slidesharecdn.com/east2016v2matei-160217154412/95/2016-spark-summit-east-keynote-matei-zaharia-5-638.jpg?cb=1455724433
Didn't see the talk - was this just referring to the existing work on the
spark-streaming-kafka subproject, or is someone actually working on making
I think Matei was referring to the Kafka direct streaming source added in
2015.
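For reference, that source lives in the spark-streaming-kafka subproject; here is a minimal sketch of using it with the 0.8 connector (the broker address, topic name, and batch interval below are placeholders, not anything from the talk):

    import kafka.serializer.StringDecoder
    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}
    import org.apache.spark.streaming.kafka.KafkaUtils

    // Direct stream: no receiver; Spark computes Kafka offset ranges per batch.
    val ssc = new StreamingContext(new SparkConf().setAppName("kafka-direct"), Seconds(10))
    val kafkaParams = Map("metadata.broker.list" -> "broker1:9092")  // placeholder broker
    val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
      ssc, kafkaParams, Set("mytopic"))  // placeholder topic
    stream.map(_._2).count().print()     // just count messages per batch
    ssc.start()
    ssc.awaitTermination()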
On Thu, Feb 18, 2016 at 11:59 AM, Cody Koeninger wrote:
> I saw this slide:
> http://image.slidesharecdn.com/east2016v2matei-160217154412/95/2016-spark-summit-east-keynote-matei-zaharia-5-638.jpg?cb=1455724433
>
> D
You are correct and we should document that.
Any suggestions on where we should document this? In DoubleType and
FloatType?
On Tuesday, February 16, 2016, Maciej Szymkiewicz
wrote:
> I am not sure if I've missed something obvious, but as far as I can tell the
> DataFrame API doesn't provide a clear
Hi,
I'm trying to finish up a PR (https://github.com/apache/spark/pull/10089)
which is currently failing PySpark tests. The instructions to run the test
suite seem a little dated. I was able to find these:
https://cwiki.apache.org/confluence/display/SPARK/PySpark+Internals
http://spark.apache.org/
Awesome! Congrats and welcome!!
2016-02-18 11:26 GMT+08:00 Cheng Lian:
> Awesome! Congrats and welcome!!
>
> Cheng
>
> On Tue, Feb 9, 2016 at 2:55 AM, Shixiong(Ryan) Zhu <
> shixi...@databricks.com> wrote:
>
>> Congrats!!! Herman and Wenchen!!!
>>
>>
>> On Mon, Feb 8, 2016 at 10:44 AM, Luciano R
Hi all,
I am planning to submit a PR for
https://issues.apache.org/jira/browse/SPARK-8000.
Currently, the file format is not detected from the file extension, although
compression codecs are detected that way.
I am thinking of introducing another method on DataSourceRegister, just like
shortName().
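Roughly along these lines (just a sketch to illustrate the idea; the trait and method names below are placeholders, not what the PR would necessarily add):

    import org.apache.spark.sql.sources.DataSourceRegister

    // Sketch only: let a data source advertise the file extensions it handles,
    // next to the existing shortName(), so a path like "data.csv" maps to a format.
    trait FileExtensionRegister extends DataSourceRegister {
      def fileExtensions(): Seq[String]  // hypothetical addition
    }

    // Illustrative registration for a CSV source.
    class CsvExtensionRegister extends FileExtensionRegister {
      override def shortName(): String = "csv"
      override def fileExtensions(): Seq[String] = Seq("csv", "tsv")
    }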
Thanks for the email.
Don't make it that complicated. We just want to simplify the common cases
(e.g. csv/parquet), and don't need this to work for everything out there.
On Thu, Feb 18, 2016 at 9:25 PM, Hyukjin Kwon wrote:
> Hi all,
>
> I am planning to submit a PR for
> https://issues.apache.
I've run into problems with the Python tests in the past when I hadn't built
with Hive support; you might want to build your assembly with Hive support and
see if that helps.
On Thursday, February 18, 2016, Jason White wrote:
> Hi,
>
> I'm trying to finish up a PR (https://github.com/apach
Compiling with `build/mvn -Pyarn -Phadoop-2.4 -Phive -Dhadoop.version=2.4.0
-DskipTests clean package` followed by `python/run-tests` seemed to do the
trick! Thanks!
Great - I'll update the wiki.
On Thu, Feb 18, 2016 at 8:34 PM, Jason White
wrote:
> Compiling with `build/mvn -Pyarn -Phadoop-2.4 -Phive -Dhadoop.version=2.4.0
> -DskipTests clean package` followed by `python/run-tests` seemed to do the
> trick! Thanks!
Hi All,
When running concurrent Spark jobs on YARN (Spark 1.5.2) that share a
single SparkContext, the jobs take more time to complete than when they
run with separate SparkContexts.
The Spark jobs are submitted on different threads.
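For reference, the submission looks roughly like this (a simplified sketch; the app name and the RDD workload are placeholders for the real jobs):

    import org.apache.spark.{SparkConf, SparkContext}

    object ConcurrentJobsTest {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("shared-context-test"))
        // Each thread submits an independent job against the same SparkContext.
        val threads = (1 to 3).map { _ =>
          new Thread(new Runnable {
            override def run(): Unit = {
              sc.parallelize(1 to 1000000, 100).map(_ * 2).count()  // placeholder workload
            }
          })
        }
        threads.foreach(_.start())
        threads.foreach(_.join())
        sc.stop()
      }
    }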
Test Case:
A. 3 spark jobs submitted seri
How did you configure YARN queues? What scheduler? Preemption?
> On 19 Feb 2016, at 06:51, Prabhu Joseph wrote:
>
> Hi All,
>
>    When running concurrent Spark jobs on YARN (Spark 1.5.2) that share a
> single SparkContext, the jobs take more time to complete than when
> they run
Fair Scheduler; the YARN queue has the entire cluster's resources as its
maxResources. Preemption does not come into the picture during the test case;
all the Spark jobs got the resources they requested.
The concurrent jobs run fine with separate SparkContexts, so resource
contention does not look like the right explanation.