Look at the yarn-default.xml configuration file.
>
> Check your log-related settings to see whether log aggregation is enabled, and
> also the log retention duration to see if it is too small and files are being
> deleted.
>
> On Wed, Jun 29, 2016 at 4:47 PM, prateek arora wrote:
>
Hi
My Spark application crashed and shows the following information:
LogType:stdout
Log Upload Time:Wed Jun 29 14:38:03 -0700 2016
LogLength:1096
Log Contents:
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGILL (0x4) at pc=0x7f67baa0d221, pid=12207, tid=140083473176320
#
#
on YARN or Standalone?
>
> Pozdrawiam,
> Jacek Laskowski
>
> https://medium.com/@jaceklaskowski/
> Mastering Apache Spark http://bit.ly/mastering-apache-spark
> Follow me at https://twitter.com/jaceklaskowski
>
>
> On Wed, Jun 1, 2016 at 7:55 PM, prateek arora
> wrote:
Please help me to solve my problem.
Regards
Prateek
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/How-to-enable-core-dump-in-spark-tp27065p27081.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
---
Hi
I am using Cloudera to set up Spark 1.6.0 on Ubuntu 14.04.
I set the core dump limit to unlimited on all nodes by editing the
/etc/security/limits.conf file and adding the line " * soft core unlimited ".
I rechecked using: $ ulimit -all
core file size (blocks, -c) unlimited
data seg size
Please help me to solve my problem.
Regards
Prateek
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/How-to-get-and-save-core-dump-of-native-library-in-executors-tp26945p26967.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
-
I am running my cluster on Ubuntu 14.04
Regards
Prateek
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/How-to-get-and-save-core-dump-of-native-library-in-executors-tp26945p26952.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
Ubuntu 14.04
On Thu, May 12, 2016 at 2:40 PM, Ted Yu wrote:
> Which OS are you using ?
>
> See http://en.linuxreviews.org/HOWTO_enable_core-dumps
>
> On Thu, May 12, 2016 at 2:23 PM, prateek arora wrote:
>
>> Hi
>>
>> I am running my spark application
Hi
I am running my Spark application with some third-party native libraries,
but it sometimes crashes and shows the error "Failed to write core dump. Core
dumps have been disabled. To enable core dumping, try "ulimit -c unlimited"
before starting Java again".
Below are the logs:
A fatal error h
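One thing that can be set from the application side while the core dump limits are sorted out is where HotSpot writes its fatal error log; under YARN the executor inherits its ulimits from the NodeManager, so limits.conf changes only reach containers after the NodeManagers are restarted. A rough sketch, assuming a Spark 1.x SparkConf-based setup and a writable /tmp on every worker (the app name is a placeholder):

import org.apache.spark.{SparkConf, SparkContext}

// Sketch only: redirect the JVM fatal error log ("A fatal error has been detected...")
// to a predictable per-pid file on each executor host. This does not by itself enable
// core dumps; those still depend on the ulimit of the process launching the executor.
val conf = new SparkConf()
  .setAppName("native-lib-app") // placeholder
  .set("spark.executor.extraJavaOptions", "-XX:ErrorFile=/tmp/hs_err_pid%p.log")
val sc = new SparkContext(conf)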
Hi
My Spark Streaming application receives data from one Kafka topic (one
partition) and the RDD has 30 partitions,
but the scheduler schedules the tasks between executors running on the same host
(where the Kafka topic partition was created) with the NODE_LOCAL locality level.
Below are the logs:
16/05/06
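A common workaround when all the data enters through a single Kafka partition is to repartition the stream right after it is created, so each batch's records are shuffled across every executor instead of staying local to the receiving host; lowering spark.locality.wait is the other usual lever. A sketch against the Spark 1.5 spark-streaming-kafka direct API, assuming an existing SparkContext sc (spark-shell style) and placeholder broker/topic names:

import kafka.serializer.StringDecoder
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

// Requires the spark-streaming-kafka artifact on the classpath.
val ssc = new StreamingContext(sc, Seconds(1))
val kafkaParams = Map("metadata.broker.list" -> "broker1:9092") // placeholder
val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
  ssc, kafkaParams, Set("mytopic")) // placeholder topic

// Shuffle each batch into 30 partitions so tasks can land on every executor,
// at the cost of one shuffle per batch. Lowering spark.locality.wait instead
// makes the scheduler give up on NODE_LOCAL slots sooner without a shuffle.
val spread = stream.repartition(30)
spread.foreachRDD(rdd => println(s"partitions this batch: ${rdd.partitions.length}"))

ssc.start()
ssc.awaitTermination()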
, Mar 25, 2016 at 10:50 AM, Ted Yu wrote:
> See this thread:
>
> http://search-hadoop.com/m/q3RTtAvwgE7dEI02
>
> On Fri, Mar 25, 2016 at 10:39 AM, prateek arora <prateek.arora...@gmail.com> wrote:
>
>> Hi
>>
>> I want to submit spark application from ou
Hi
I want to submit a Spark application from outside of the Spark cluster, so
please help me with some information regarding this.
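For reference, one programmatic route is the SparkLauncher API that ships with Spark 1.4+, which can submit from any gateway machine that has a Spark client install and the cluster's Hadoop/YARN configuration. A rough sketch; every path, class name and master URL below is a placeholder:

import org.apache.spark.launcher.SparkLauncher

object SubmitFromOutside {
  def main(args: Array[String]): Unit = {
    // Launches spark-submit as a child process using the local Spark client install.
    val proc = new SparkLauncher()
      .setSparkHome("/opt/spark")            // placeholder client install
      .setAppResource("/path/to/my-app.jar") // placeholder application jar
      .setMainClass("com.example.MyApp")     // placeholder entry point
      .setMaster("yarn-cluster")             // or spark://master:7077
      .launch()
    proc.waitFor()
  }
}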
Regards
Prateek
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/is-there-any-way-to-submit-spark-application-from-outside-o
e to give you help on that.
> >
> > If you’re not using a dependency manager - well, you should be. Trying
> to manage this manually is a pain that you do not want to get in the way of
> your project. There are perfectly good tools to do this for you; use them.
> >
>
Hi
Thanks for the information,
but my problem is this: if I want to write a Spark application which depends on
third-party libraries like OpenCV, then what is the best approach to
distribute all the .so and jar files of OpenCV across the whole cluster?
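For what it's worth, one pattern is to push the .so files to a fixed directory on every node with a config-management tool and point the executors at that directory, while letting Spark ship the OpenCV wrapper jar with the application. A sketch only; every path below is an assumption about where OpenCV was installed:

import org.apache.spark.{SparkConf, SparkContext}

// The native .so files must already exist at this directory on every worker
// (e.g. pushed out by Chef/Puppet), because they are loaded on the executor host.
val conf = new SparkConf()
  .setAppName("opencv-app")                                  // placeholder
  .setJars(Seq("/opt/opencv/java/opencv-2411.jar"))          // placeholder wrapper jar, shipped with the app
  .set("spark.executor.extraLibraryPath", "/opt/opencv/lib") // placeholder .so directory
val sc = new SparkContext(conf)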
Regards
Prateek
--
View this message in context:
http
Hi
I have a multi-node cluster and my Spark jobs depend on a native
library (.so files) and some jar files.
Can someone please explain the best ways to distribute the dependent
files across nodes?
Right now I copy the dependent files to all nodes using the Chef tool.
Regards
Prateek
--
V
et to be released) onwards."
On Thu, Dec 17, 2015 at 3:24 PM, Vikram Kone wrote:
> Hi Prateek,
> Were you able to figure why this is happening? I'm seeing the same error
> on my spark standalone cluster.
>
> Any pointers anyone?
>
> On Fri, Dec 11, 2015 at 2:05 PM, p
Hi
I am trying to access Spark using the REST API but got the error below:
Command:
curl http://:18088/api/v1/applications
Response:
Error 503 Service Unavailable
HTTP ERROR 503
Problem accessing /api/v1/applications. Reason:
Service Unavailable
Caused by:
org.spark-project.jetty.ser
Hi, thanks.
In my scenario the batches are independent, so is it safe to use in a production
environment?
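Assuming the setting being discussed here is spark.streaming.concurrentJobs (the usual, undocumented knob for letting independent batches run in parallel), it is just a conf entry; the trade-off is that batch completion order is no longer guaranteed, which is only acceptable when the batches really are independent. A sketch:

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

// Sketch only: spark.streaming.concurrentJobs is undocumented. A value of 2 allows
// up to two batch jobs to run at the same time, so later batches may finish before
// earlier ones.
val conf = new SparkConf()
  .setAppName("independent-batches") // placeholder
  .set("spark.streaming.concurrentJobs", "2")
val ssc = new StreamingContext(conf, Seconds(1))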
Regards
Prateek
On Wed, Dec 9, 2015 at 11:39 AM, Ted Yu wrote:
> Have you seen this thread ?
>
> http://search-hadoop.com/m/q3RTtgSGrobJ3Je
>
> On Wed, Dec 9, 2015 at 11:12 AM
Hi
When I run my Spark Streaming application, the following information shows on
the application's streaming UI.
I am using Spark 1.5.0.
Batch Time Input Size Scheduling Delay (?) Processing Time (?)
Status
2015/12/09 11:00:42 107 events - -
que
Hi
Is it possible in Spark to write only an RDD transformation into HDFS or any
other storage system?
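If the goal is simply to persist the output of a chain of transformations, the usual answer is that transformations are lazy and nothing is written until an action runs, so the chain has to end in a save action. A minimal sketch, assuming an existing SparkContext sc and placeholder HDFS paths:

// saveAsTextFile is an action: it triggers the lazy transformations above it
// and writes the result to HDFS.
val transformed = sc.textFile("hdfs:///input/path") // placeholder input
  .map(_.toUpperCase)                               // example transformation
transformed.saveAsTextFile("hdfs:///output/path")   // placeholder output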
Regards
Prateek
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/can-i-write-only-RDD-transformation-into-hdfs-or-any-other-storage-system-tp25637.html
Sen
10:17 AM, prateek arora wrote:
>
>> Hi Ted
>> Thanks for the information.
>> Is there any way that two different Spark applications can share their data?
>>
>> Regards
>> Prateek
>>
>> On Fri, Dec 4, 2015 at 9:54 AM, Ted Yu wrote:
on+about+yarn+cluster+mode+and+spark+driver+allowMultipleContexts
>
> Cheers
>
> On Fri, Dec 4, 2015 at 9:46 AM, prateek arora
> wrote:
>
>> Hi
>>
>> I want to create multiple SparkContexts in my application.
>> I read many articles and they suggest &quo
Hi
I want to create multiple SparkContexts in my application.
I read many articles and they suggest "usage of multiple contexts is
discouraged, since SPARK-2243 is still not resolved."
I want to know whether Spark 1.5.0 supports creating multiple contexts
without error,
and if supported then are
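For reference, the escape hatch that exists in 1.5 is the spark.driver.allowMultipleContexts flag, but since SPARK-2243 is unresolved it only suppresses the multiple-contexts check rather than making this a supported mode. A sketch:

import org.apache.spark.{SparkConf, SparkContext}

// Sketch only: this silences the "multiple SparkContexts detected" error; it does
// not make several contexts in one JVM a supported or well-tested configuration.
val conf2 = new SparkConf()
  .setAppName("second-context") // placeholder
  .set("spark.driver.allowMultipleContexts", "true")
val sc2 = new SparkContext(conf2)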
Hi
I am using Spark Streaming with Kafka. The Spark version is 1.5.0 and the batch
interval is 1 sec.
In my scenario the algorithm takes 7-10 sec to process one batch period's data, so
Spark Streaming starts processing the next batch only after completing the previous
one.
I want my Spark Streaming applicatio
spread currently. Do you want to compute something per day, per week etc.
> Based on that, return a partition number. You could use mod 30 or some such
> function to get the partitions.
> On Nov 18, 2015 5:17 AM, "prateek arora"
> wrote:
>
>> Hi
>> I am trying to i
pi/python/PythonPartitioner.scala
> ./core/src/main/scala/org/apache/spark/Partitioner.scala
>
> Cheers
>
> On Tue, Nov 17, 2015 at 9:24 AM, prateek arora wrote:
>
>> Hi
>> Thanks
>> I am new to Spark development, so can you provide some help to write a
>> custo
wrote:
> You can write your own custom partitioner to achieve this
>
> Regards
> Sab
> On 17-Nov-2015 1:11 am, "prateek arora"
> wrote:
>
>> Hi
>>
>> I have an RDD with 30 records (key/value pairs) and am running 30 executors. I
>> want to repar
Hi
I have an RDD with 30 records (key/value pairs) and am running 30 executors. I
want to repartition this RDD into 30 partitions so that every partition gets one
record and is assigned to one executor.
When I use rdd.repartition(30) it repartitions my RDD into 30 partitions, but
some partitions get 2 records, s
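Following the custom-partitioner suggestion from the thread above, a minimal sketch would map each distinct key to its own partition id and use partitionBy. The stand-in RDD below just mimics 30 integer-keyed records (it assumes an existing SparkContext sc); the real key type only has to be usable as a map key:

import org.apache.spark.Partitioner

// Route every distinct key to its own partition, assuming the number of distinct
// keys equals the desired number of partitions (30 here).
class ExactKeyPartitioner(keyToPart: Map[Any, Int]) extends Partitioner {
  override def numPartitions: Int = keyToPart.size
  override def getPartition(key: Any): Int = keyToPart(key)
}

val rdd = sc.parallelize((1 to 30).map(i => (i, s"record-$i"))) // stand-in for the real 30-record RDD
// Build the key -> partition mapping once on the driver (30 keys is tiny).
val keyToPart = rdd.keys.distinct().collect().zipWithIndex.toMap[Any, Int]
val onePerPartition = rdd.partitionBy(new ExactKeyPartitioner(keyToPart))
// Each of the 30 partitions now holds exactly one record.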
Hi
In my scenario:
I have an RDD with key/value pairs. I want to combine elements that have
approximately the same keys,
like
(144,value)(143,value)(142,value)...(214,value)(213,value)(212,value)(313,value)(314,value)...
I want to combine elements that have keys 144, 143, 142..., meaning the keys have
+-2 r
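A simple way to approximate "keys within +-2 of each other belong together" is to bucket the integer keys into fixed-width ranges before combining; with a width of 5, 142..144 share a bucket, as do 212..214 and 313..314. This is only a sketch and an approximation (a true sliding +-2 grouping is not a plain reduceByKey), and the stand-in data below just mirrors the example keys, assuming an existing SparkContext sc:

// Stand-in pair RDD with integer keys.
val rdd = sc.parallelize(Seq((144, "a"), (143, "b"), (142, "c"),
                             (214, "d"), (213, "e"), (212, "f"),
                             (313, "g"), (314, "h")))
// Integer-divide the key by the bucket width so nearby keys collapse to one bucket key,
// then combine everything that landed in the same bucket.
val combined = rdd.map { case (k, v) => (k / 5, v) }.groupByKey()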
Hi
I am trying to write a simple program using the addFile function, but I am getting
an error on my worker node that the file does not exist:
tage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost
task 0.3 in stage 0.0 (TID 3, slave2.novalocal):
java.io.FileNotFoundException: File
file:/tmp/
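The usual cause of that FileNotFoundException is reading the file on the executors with the original driver-side path; files shipped with addFile have to be resolved through SparkFiles.get on each executor. A sketch, assuming an existing SparkContext sc and a placeholder /tmp path on the driver:

import org.apache.spark.SparkFiles

sc.addFile("/tmp/lookup.txt") // placeholder: must exist where the driver runs (or be an hdfs:// URI)

// On the executors, resolve the local copy by file NAME via SparkFiles.get.
// Reusing the driver's /tmp/... path here is what triggers FileNotFoundException.
val firstLines = sc.parallelize(1 to 10).map { i =>
  val localPath = SparkFiles.get("lookup.txt")
  scala.io.Source.fromFile(localPath).getLines().next()
}.collect()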
I can also switch to MongoDB if Spark has support for that.
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/connector-for-CouchDB-tp18630p21429.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
-
Yes please, but I am new to Spark and CouchDB.
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/connector-for-CouchDB-tp18630p21428.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
-
I am also looking for a connector for CouchDB in Spark. Did you find anything?
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/connector-for-CouchDB-tp18630p21422.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
I am looking for a Spark connector for CouchDB. Please help me.
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/spark-connector-for-CouchDB-tp21421.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.