sparkers,
Is there a better way to control memory usage when the streaming input arrives
faster than Spark Streaming can process it?
Thanks,
Francis.Hu
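(For readers hitting the same issue on newer versions: this thread is about Spark 0.9.1, where no built-in rate limiting exists, but later releases expose receiver-side throttling and backpressure as configuration. A minimal sketch, assuming Spark 1.5 or newer:

```
# Cap each receiver at roughly the rate the cluster can sustain
spark.streaming.receiver.maxRate        500

# Let Spark adapt the ingestion rate to the observed scheduling delay
spark.streaming.backpressure.enabled    true
```

With backpressure enabled, Spark shrinks the per-batch intake automatically when batches start taking longer than the batch interval, which bounds the memory held by queued blocks.)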
Hi, all
I encountered OOM when streaming.
I send data to Spark Streaming through ZeroMQ at a rate of 600 records per
second, but Spark Streaming handles only 10 records per 5 seconds (set in
the streaming program).
My two workers each have a 4-core CPU and 1 GB of RAM.
These workers always run out of memory.
Hi, all
When I run the ZeroMQWordCount example on the cluster, the worker log says: Caused
by: com.typesafe.config.ConfigException$Missing: No configuration setting
found for key 'akka.zeromq'
Actually, I can see that the reference.conf in
spark-examples-assembly-0.9.1.jar contains the configuration below
I have just resolved the problem by running the master and worker daemons
individually on their respective machines.
If I execute the script sbin/start-all.sh, the problem always persists.
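For reference, the workaround above amounts to starting each daemon by hand instead of via start-all.sh; a sketch, assuming the 0.9.x standalone script layout (the arguments accepted by start-slave.sh vary between releases):

```
# On the master machine:
./sbin/start-master.sh

# On each worker machine, pointing the worker at the master URL
# (the leading worker-instance number is required by the 0.9.x script):
./sbin/start-slave.sh 1 spark://192.168.219.129:7077
```

Starting the daemons locally this way avoids whatever environment start-all.sh picks up over SSH, which is often the real cause when the per-machine start works and the combined script does not.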
From: Francis.Hu [mailto:francis...@reachjunction.com]
Sent: Tuesday, May 06, 2014 10:31
To: user@spark.apache.org
In fact, the file does not exist, and there is no permission issue.
francis@ubuntu-4:/test/spark-0.9.1$ ll work/app-20140505053550-/
total 24
drwxrwxr-x 6 francis francis 4096 May 5 05:35 ./
drwxrwxr-x 11 francis francis 4096 May 5 06:18 ../
drwxrwxr-x 2 francis francis 4096 May 5 05:35 2/
Hi, all
We run a Spark cluster with three workers.
I created a Spark Streaming application,
then ran the project using the command below:
shell> sbt run spark://192.168.219.129:7077 tcp://192.168.20.118:5556 foo
We looked at the web UI of the workers; the jobs failed without any error or in
Thanks, Prashant Sharma
It works now after downgrading ZeroMQ from 4.0.1 to 2.2.
Do you know whether the next release of Spark will upgrade ZeroMQ?
Many of our programs use ZeroMQ 4.0.1, so if in the next release Spark
Streaming can ship with a newer ZeroMQ, that would be better.
Hi, all
I installed spark-0.9.1 and zeromq 4.0.1, and then ran the examples below:
./bin/run-example org.apache.spark.streaming.examples.SimpleZeroMQPublisher
tcp://127.0.1.1:1234 foo.bar
./bin/run-example org.apache.spark.streaming.examples.ZeroMQWordCount
local[2] tcp://127.0.1.1:1234 foo
Great!
When I built it on another disk formatted as ext4, it works now.
hadoop@ubuntu-1:~$ df -Th
Filesystem Type     Size  Used Avail Use% Mounted on
/dev/sdb6  ext4     135G  8.6G  119G   7% /
udev       devtmpfs 7.7G  4.0K  7.7G   1% /dev
tmpfs
Hi, All
I am stuck on a NoClassDefFoundError. Any help would be appreciated.
I downloaded the Spark 0.9.0 source, and then ran this command to build it:
SPARK_HADOOP_VERSION=2.2.0 SPARK_YARN=true sbt/sbt assembly
There was no error during the build of Spark.
After that, I ran the spark-shell for