Hi Gabor,

Please find the logs attached. They are truncated.

Command used:
spark-submit --verbose --packages
org.apache.spark:spark-sql-kafka-0-10_2.12:3.0.1,com.typesafe:config:1.4.0
--master yarn --deploy-mode cluster --class com.stream.Main --num-executors
2 --driver-memory 2g --executor-cores 1 --executor-memory 4g --files
gs://x/jars_application.conf,gs://x/log4j.properties
gs://x/a-synch-r-1.0-SNAPSHOT.jar
For this I used a snapshot jar, not a fat jar.


Regards
Amit

On Mon, Dec 7, 2020 at 10:15 PM Gabor Somogyi <gabor.g.somo...@gmail.com>
wrote:

> Well, I can't do miracles without cluster and log access.
> What I don't understand is why you need a fat jar?! Spark libraries normally
> need provided scope because they must exist on all machines...
> I would take a look at the driver and executor logs, which contain the
> consumer configs, and I would also check the exact version of the consumer
> (this is printed in the same log).
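> As a purely illustrative sketch (not from the attached logs), the loaded
> kafka-clients version and the jar it came from can also be printed from the
> application itself:
>
> // Illustrative debugging snippet: prints which kafka-clients version is on
> // the classpath and which jar the KafkaConsumer class was loaded from.
> import org.apache.kafka.clients.consumer.KafkaConsumer
> import org.apache.kafka.common.utils.AppInfoParser
>
> val kafkaVersion = AppInfoParser.getVersion
> val consumerJar  = classOf[KafkaConsumer[_, _]]
>   .getProtectionDomain.getCodeSource.getLocation
> println(s"kafka-clients $kafkaVersion loaded from $consumerJar")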
>
> G
>
>
> On Mon, Dec 7, 2020 at 5:07 PM Amit Joshi <mailtojoshia...@gmail.com>
> wrote:
>
>> Hi Gabor,
>>
>> The code is very simple Kafka consumption of data (roughly of the shape
>> sketched below).
>> I guess it may be the cluster.
>> Can you please point out the possible problems to look for in the cluster?
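>> A minimal sketch of that kind of consumer (broker, topic and checkpoint path
>> here are placeholders, not the real job):
>>
>> import org.apache.spark.sql.SparkSession
>>
>> // Minimal structured-streaming Kafka consumer; placeholder broker/topic names.
>> val spark = SparkSession.builder.appName("event-synch").getOrCreate()
>>
>> val df = spark.readStream
>>   .format("kafka")
>>   .option("kafka.bootstrap.servers", "broker-1:9092")
>>   .option("subscribe", "events")
>>   .load()
>>   .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
>>
>> df.writeStream
>>   .format("console")
>>   .option("checkpointLocation", "/tmp/checkpoint")
>>   .start()
>>   .awaitTermination()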
>>
>> Regards
>> Amit
>>
>> On Monday, December 7, 2020, Gabor Somogyi <gabor.g.somo...@gmail.com>
>> wrote:
>>
>>> + Adding back user list.
>>>
>>> I've had a look at the Spark code and it's not
>>> modifying "partition.assignment.strategy" so the problem
>>> must be either in your application or in your cluster setup.
>>>
>>> G
>>>
>>>
>>> On Mon, Dec 7, 2020 at 12:31 PM Gabor Somogyi <gabor.g.somo...@gmail.com>
>>> wrote:
>>>
>>>> It's super interesting because that field has a default value:
>>>> *org.apache.kafka.clients.consumer.RangeAssignor*
>>>>
>>>> On Mon, 7 Dec 2020, 10:51 Amit Joshi, <mailtojoshia...@gmail.com>
>>>> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> Thanks for the reply.
>>>>> I did try removing the client version,
>>>>> but got the same exception.
>>>>>
>>>>>
>>>>> Thanks
>>>>>
>>>>> On Monday, December 7, 2020, Gabor Somogyi <gabor.g.somo...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> +1 on the mentioned change, Spark uses the following kafka-clients
>>>>>> library:
>>>>>>
>>>>>> <kafka.version>2.4.1</kafka.version>
>>>>>>
>>>>>> G
>>>>>>
>>>>>>
>>>>>> On Mon, Dec 7, 2020 at 9:30 AM German Schiavon <
>>>>>> gschiavonsp...@gmail.com> wrote:
>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> I think the issue is that you are overriding the kafka-clients that
>>>>>>> comes with <artifactId>spark-sql-kafka-0-10_2.12</artifactId>.
>>>>>>>
>>>>>>>
>>>>>>> I'd try removing the kafka-clients dependency and see if it works.
>>>>>>>
>>>>>>>
>>>>>>> On Sun, 6 Dec 2020 at 08:01, Amit Joshi <mailtojoshia...@gmail.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Hi All,
>>>>>>>>
>>>>>>>> I am running the Spark Structured Streaming along with Kafka.
>>>>>>>> Below is the pom.xml
>>>>>>>>
>>>>>>>> <properties>
>>>>>>>>     <maven.compiler.source>1.8</maven.compiler.source>
>>>>>>>>     <maven.compiler.target>1.8</maven.compiler.target>
>>>>>>>>     <encoding>UTF-8</encoding>
>>>>>>>>     <!-- Put the Scala version of the cluster -->
>>>>>>>>     <scalaVersion>2.12.10</scalaVersion>
>>>>>>>>     <sparkVersion>3.0.1</sparkVersion>
>>>>>>>> </properties>
>>>>>>>>
>>>>>>>> <dependency>
>>>>>>>>     <groupId>org.apache.kafka</groupId>
>>>>>>>>     <artifactId>kafka-clients</artifactId>
>>>>>>>>     <version>2.1.0</version>
>>>>>>>> </dependency>
>>>>>>>>
>>>>>>>> <dependency>
>>>>>>>>     <groupId>org.apache.spark</groupId>
>>>>>>>>     <artifactId>spark-core_2.12</artifactId>
>>>>>>>>     <version>${sparkVersion}</version>
>>>>>>>>     <scope>provided</scope>
>>>>>>>> </dependency>
>>>>>>>> <!-- https://mvnrepository.com/artifact/org.apache.spark/spark-sql -->
>>>>>>>> <dependency>
>>>>>>>>     <groupId>org.apache.spark</groupId>
>>>>>>>>     <artifactId>spark-sql_2.12</artifactId>
>>>>>>>>     <version>${sparkVersion}</version>
>>>>>>>>     <scope>provided</scope>
>>>>>>>> </dependency>
>>>>>>>> <!-- 
>>>>>>>> https://mvnrepository.com/artifact/org.apache.spark/spark-sql-kafka-0-10
>>>>>>>>  -->
>>>>>>>> <dependency>
>>>>>>>>     <groupId>org.apache.spark</groupId>
>>>>>>>>     <artifactId>spark-sql-kafka-0-10_2.12</artifactId>
>>>>>>>>     <version>${sparkVersion}</version>
>>>>>>>> </dependency>
>>>>>>>>
>>>>>>>> I am building the fat jar with the shade plugin. The jar runs as expected 
>>>>>>>> in my local setup with the command:
>>>>>>>>
>>>>>>>> *spark-submit --master local[*] --class com.stream.Main 
>>>>>>>> --num-executors 3 --driver-memory 2g --executor-cores 2 
>>>>>>>> --executor-memory 3g prism-event-synch-rta.jar*
>>>>>>>>
>>>>>>>> But when I run the same jar on the Spark cluster using YARN with the 
>>>>>>>> command:
>>>>>>>>
>>>>>>>> *spark-submit --master yarn --deploy-mode cluster --class 
>>>>>>>> com.stream.Main --num-executors 4 --driver-memory 2g --executor-cores 
>>>>>>>> 1 --executor-memory 4g  gs://jars/prism-event-synch-rta.jar*
>>>>>>>>
>>>>>>>> I am getting this exception:
>>>>>>>>
>>>>>>>>        
>>>>>>>>
>>>>>>>>
>>>>>>>> at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:355)
>>>>>>>> at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:245)
>>>>>>>> Caused by: org.apache.kafka.common.config.ConfigException: Missing required 
>>>>>>>> configuration "partition.assignment.strategy" which has no default value.
>>>>>>>> at org.apache.kafka.common.config.ConfigDef.parse(ConfigDef.java:124)
>>>>>>>>
>>>>>>>> I have tried setting "partition.assignment.strategy" explicitly, but it is 
>>>>>>>> still not working.
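>>>>>>>> (For reference, a sketch of how that option can be passed through the Spark
>>>>>>>> Kafka source; options prefixed with "kafka." are forwarded to the underlying
>>>>>>>> consumer. Broker and topic names are placeholders.)
>>>>>>>>
>>>>>>>> // Illustrative only: explicitly passing the assignor to the Kafka consumer.
>>>>>>>> val df = spark.readStream
>>>>>>>>   .format("kafka")
>>>>>>>>   .option("kafka.bootstrap.servers", "broker-1:9092")
>>>>>>>>   .option("subscribe", "events")
>>>>>>>>   .option("kafka.partition.assignment.strategy",
>>>>>>>>     "org.apache.kafka.clients.consumer.RangeAssignor")
>>>>>>>>   .load()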
>>>>>>>>
>>>>>>>> Please help.
>>>>>>>>
>>>>>>>>
>>>>>>>> Regards
>>>>>>>>
>>>>>>>> Amit Joshi
>>>>>>>>
>>>>>>>>
20/12/07 20:32:46 INFO ResourceUtils: 
==============================================================
20/12/07 20:32:46 INFO ResourceUtils: Resources for spark.executor:

20/12/07 20:32:46 INFO ResourceUtils: 
==============================================================
20/12/07 20:32:46 INFO YarnCoarseGrainedExecutorBackend: Successfully 
registered with driver
20/12/07 20:32:46 INFO Utils: Successfully started service 
'org.apache.spark.network.netty.NettyBlockTransferService' on port 42411.

20/12/07 20:32:46 INFO BlockManager: Using 
org.apache.spark.storage.RandomBlockReplicationPolicy for block replication 
policy
20/12/07 20:32:46 INFO BlockManager: Registering executor with local external 
shuffle service.

20/12/07 20:32:51 INFO YarnCoarseGrainedExecutorBackend: Driver commanded a 
shutdown
20/12/07 20:32:51 INFO MemoryStore: MemoryStore cleared
20/12/07 20:32:51 INFO BlockManager: BlockManager stopped
20/12/07 20:32:51 INFO ShutdownHookManager: Shutdown hook called

End of LogType:stderr
***********************************************************************

Container: container_1607359766658_0016_02_000002 on 
LogAggregationType: AGGREGATED
===========================================================================================================
LogType:prelaunch.out
LogLastModifiedTime:Mon Dec 07 20:32:42 +0000 2020
LogLength:100
LogContents:
Setting up env variables
Setting up job resources
Copying debugging information
Launching container

End of LogType:prelaunch.out
******************************************************************************

Container: container_1607359766658_0016_02_000002 on 
LogAggregationType: AGGREGATED
===========================================================================================================
LogType:directory.info
LogLastModifiedTime:Mon Dec 07 20:32:42 +0000 2020
LogLength:8474
LogContents:
ls -l:
total 80
lrwxrwxrwx 1 yarn yarn   96 Dec  7 20:32 __app__.jar -> 
/hadoop/yarn/nm-local-dir/usercache/ta-1.0-SNAPSHOT.jar
lrwxrwxrwx 1 yarn yarn   76 Dec  7 20:32 __spark_conf__ -> 
/hadoop/yarn/nm-local-dir/usercache/filecache/170/__spark_conf__.zip
lrwxrwxrwx 1 yarn yarn   95 Dec  7 20:32 com.github.luben_zstd-jni-1.4.4-3.jar 
-> 
/hadoop/yarn/nm-local-dir/usercache/""filecache/173/com.github.luben_zstd-jni-1.4.4-3.jar
lrwxrwxrwx 1 yarn yarn   87 Dec  7 20:32 com.typesafe_config-1.4.0.jar -> 
/hadoop/yarn/nm-local-dir/usercache/""filecache/175/com.typesafe_config-1.4.0.jar
-rw-r--r-- 1 yarn yarn   92 Dec  7 20:32 container_tokens
-rwx------ 1 yarn yarn  710 Dec  7 20:32 default_container_executor.sh
-rwx------ 1 yarn yarn  655 Dec  7 20:32 default_container_executor_session.sh
lrwxrwxrwx 1 yarn yarn   79 Dec  7 20:32 jars_application.conf -> 
/hadoop/yarn/nm-local-dir/usercache/""filecache/169/jars_application.conf
-rwx------ 1 yarn yarn 7609 Dec  7 20:32 launch_container.sh
lrwxrwxrwx 1 yarn yarn   74 Dec  7 20:32 log4j.properties -> 
/hadoop/yarn/nm-local-dir/usercache/""filecache/179/log4j.properties
lrwxrwxrwx 1 yarn yarn  100 Dec  7 20:32 
org.apache.commons_commons-pool2-2.6.2.jar -> 
/hadoop/yarn/nm-local-dir/usercache/""filecache/166/org.apache.commons_commons-pool2-2.6.2.jar
lrwxrwxrwx 1 yarn yarn   98 Dec  7 20:32 
org.apache.kafka_kafka-clients-2.4.1.jar -> 
/hadoop/yarn/nm-local-dir/usercache/""filecache/174/org.apache.kafka_kafka-clients-2.4.1.jar
lrwxrwxrwx 1 yarn yarn  110 Dec  7 20:32 
org.apache.spark_spark-sql-kafka-0-10_2.12-3.0.1.jar -> 
/hadoop/yarn/nm-local-dir/usercache/""filecache/168/org.apache.spark_spark-sql-kafka-0-10_2.12-3.0.1.jar
lrwxrwxrwx 1 yarn yarn  121 Dec  7 20:32 
org.apache.spark_spark-token-provider-kafka-0-10_2.12-3.0.1.jar -> 
/hadoop/yarn/nm-local-dir/usercache/""filecache/178/org.apache.spark_spark-token-provider-kafka-0-10_2.12-3.0.1.jar
lrwxrwxrwx 1 yarn yarn   84 Dec  7 20:32 org.lz4_lz4-java-1.7.1.jar -> 
/hadoop/yarn/nm-local-dir/usercache/""filecache/177/org.lz4_lz4-java-1.7.1.jar
lrwxrwxrwx 1 yarn yarn   88 Dec  7 20:32 org.slf4j_slf4j-api-1.7.30.jar -> 
/hadoop/yarn/nm-local-dir/usercache/""filecache/171/org.slf4j_slf4j-api-1.7.30.jar
lrwxrwxrwx 1 yarn yarn   98 Dec  7 20:32 
org.spark-project.spark_unused-1.0.0.jar -> 
/hadoop/yarn/nm-local-dir/usercache/""filecache/167/org.spark-project.spark_unused-1.0.0.jar

LogType:launch_container.sh
LogLastModifiedTime:Mon Dec 07 20:32:42 +0000 2020
LogLength:7609
LogContents:
#!/bin/bash

set -o pipefail -e
export 
PRELAUNCH_OUT="/var/log/hadoop-yarn/userlogs/application_160734/container_1607359766658_0016_02_000002/prelaunch.out"
exec >"${PRELAUNCH_OUT}"
export 
PRELAUNCH_ERR="/var/log/hadoop-yarn/userlogs/application_160734/container_1607359766658_0016_02_000002/prelaunch.err"
exec 2>"${PRELAUNCH_ERR}"
echo "Setting up env variables"
export JAVA_HOME=${JAVA_HOME:-"/usr/lib/jvm/adoptopenjdk-8-hotspot-amd64"}
export HADOOP_COMMON_HOME=${HADOOP_COMMON_HOME:-"/usr/lib/hadoop"}
export HADOOP_HDFS_HOME=${HADOOP_HDFS_HOME:-"/usr/lib/hadoop-hdfs"}
export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-"/etc/hadoop/conf"}
export HADOOP_YARN_HOME=${HADOOP_YARN_HOME:-"/usr/lib/hadoop-yarn"}
export HADOOP_MAPRED_HOME=${HADOOP_MAPRED_HOME:-"/usr/lib/hadoop-mapreduce"}
export 
HADOOP_TOKEN_FILE_LOCATION="/hadoop/yarn/nm-local-dir/usercache/""/appcache/application_160734/container_1607359766658_0016_02_000002/container_tokens"
export CONTAINER_ID="container_1607359766658_0016_02_000002"
export NM_PORT="8026"
export NM_HOST="*-spark30-deb-w-0.c.xxxxxx"
export NM_HTTP_PORT="8042"
export 
LOCAL_DIRS="/hadoop/yarn/nm-local-dir/usercache/""/appcache/application_160734"
export LOCAL_USER_DIRS="/hadoop/yarn/nm-local-dir/usercache/""/"
export 
LOG_DIRS="/var/log/hadoop-yarn/userlogs/application_160734/container_1607359766658_0016_02_000002"
export USER=""""
export LOGNAME=""""
export HOME="/home/"
export 
PWD="/hadoop/yarn/nm-local-dir/usercache/""/appcache/application_160734/container_1607359766658_0016_02_000002"
export JVM_PID="$$"
export MALLOC_ARENA_MAX="4"
export NM_AUX_SERVICE_spark_shuffle=""
export 
NM_AUX_SERVICE_mapreduce_shuffle="AAA0+gAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA="
export 
SPARK_YARN_STAGING_DIR="hdfs://*-spark30-deb-m/user/""/.sparkStaging/application_160734"
export 
SPARK_DIST_CLASSPATH=":/etc/hive/conf:/usr/share/java/mysql.jar:/usr/local/share/google/dataproc/lib/*"
export OPENBLAS_NUM_THREADS="1"
export 
CLASSPATH="$PWD:$PWD/__spark_conf__:$PWD/__spark_libs__/*:/usr/lib/spark/jars/*:$HADOOP_CONF_DIR:$HADOOP_COMMON_HOME/*:$HADOOP_COMMON_HOME/lib/*:$HADOOP_HDFS_HOME/*:$HADOOP_HDFS_HOME/lib/*:$HADOOP_MAPRED_HOME/*:$HADOOP_MAPRED_HOME/lib/*:$HADOOP_YARN_HOME/*:$HADOOP_YARN_HOME/lib/*:/usr/local/share/google/dataproc/lib/*:$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*:$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*:/usr/local/share/google/dataproc/lib/*::/etc/hive/conf:/usr/share/java/mysql.jar:/usr/local/share/google/dataproc/lib/*:$PWD/__spark_conf__/__hadoop_conf__"
export SPARK_USER=""""
echo "Setting up job resources"
ln -sf -- 
"/hadoop/yarn/nm-local-dir/usercache/""/filecache/175/com.typesafe_config-1.4.0.jar"
 "com.typesafe_config-1.4.0.jar"
ln -sf -- 
"/hadoop/yarn/nm-local-dir/usercache/""/filecache/171/org.slf4j_slf4j-api-1.7.30.jar"
 "org.slf4j_slf4j-api-1.7.30.jar"
ln -sf -- 
"/hadoop/yarn/nm-local-dir/usercache/""/filecache/179/log4j.properties" 
"log4j.properties"
ln -sf -- 
"/hadoop/yarn/nm-local-dir/usercache/""/filecache/168/org.apache.spark_spark-sql-kafka-0-10_2.12-3.0.1.jar"
 "org.apache.spark_spark-sql-kafka-0-10_2.12-3.0.1.jar"
ln -sf -- 
"/hadoop/yarn/nm-local-dir/usercache/""/filecache/176/*-*-synch-1.0-SNAPSHOT.jar"
 "__app__.jar"
ln -sf -- 
"/hadoop/yarn/nm-local-dir/usercache/""/filecache/178/org.apache.spark_spark-token-provider-kafka-0-10_2.12-3.0.1.jar"
 "org.apache.spark_spark-token-provider-kafka-0-10_2.12-3.0.1.jar"
ln -sf -- 
"/hadoop/yarn/nm-local-dir/usercache/""/filecache/172/org.xerial.snappy_snappy-java-1.1.7.5.jar"
 "org.xerial.snappy_snappy-java-1.1.7.5.jar"
ln -sf -- 
"/hadoop/yarn/nm-local-dir/usercache/""/filecache/177/org.lz4_lz4-java-1.7.1.jar"
 "org.lz4_lz4-java-1.7.1.jar"
ln -sf -- 
"/hadoop/yarn/nm-local-dir/usercache/""/filecache/167/org.spark-project.spark_unused-1.0.0.jar"
 "org.spark-project.spark_unused-1.0.0.jar"
ln -sf -- 
"/hadoop/yarn/nm-local-dir/usercache/""/filecache/174/org.apache.kafka_kafka-clients-2.4.1.jar"
 "org.apache.kafka_kafka-clients-2.4.1.jar"
ln -sf -- 
"/hadoop/yarn/nm-local-dir/usercache/""/filecache/166/org.apache.commons_commons-pool2-2.6.2.jar"
 "org.apache.commons_commons-pool2-2.6.2.jar"
ln -sf -- 
"/hadoop/yarn/nm-local-dir/usercache/""/filecache/173/com.github.luben_zstd-jni-1.4.4-3.jar"
 "com.github.luben_zstd-jni-1.4.4-3.jar"
ln -sf -- 
"/hadoop/yarn/nm-local-dir/usercache/""/filecache/169/jars_application.conf" 
"jars_application.conf"
ln -sf -- 
"/hadoop/yarn/nm-local-dir/usercache/""/filecache/170/__spark_conf__.zip" 
"__spark_conf__"
echo "Copying debugging information"
# Creating copy of launch script
cp "launch_container.sh" 
"/var/log/hadoop-yarn/userlogs/application_160734/container_1607359766658_0016_02_000002/launch_container.sh"
chmod 640 
"/var/log/hadoop-yarn/userlogs/application_160734/container_1607359766658_0016_02_000002/launch_container.sh"
# Determining directory contents
echo "ls -l:" 
1>"/var/log/hadoop-yarn/userlogs/application_160734/container_1607359766658_0016_02_000002/directory.info"
ls -l 
1>>"/var/log/hadoop-yarn/userlogs/application_160734/container_1607359766658_0016_02_000002/directory.info"
echo "find -L . -maxdepth 5 -ls:" 
1>>"/var/log/hadoop-yarn/userlogs/application_160734/container_1607359766658_0016_02_000002/directory.info"
find -L . -maxdepth 5 -ls 
1>>"/var/log/hadoop-yarn/userlogs/application_160734/container_1607359766658_0016_02_000002/directory.info"
echo "broken symlinks(find -L . -maxdepth 5 -type l -ls):" 
1>>"/var/log/hadoop-yarn/userlogs/application_160734/container_1607359766658_0016_02_000002/directory.info"
find -L . -maxdepth 5 -type l -ls 
1>>"/var/log/hadoop-yarn/userlogs/application_160734/container_1607359766658_0016_02_000002/directory.info"
echo "Launching container"
exec /bin/bash -c "$JAVA_HOME/bin/java -server -Xmx4096m 
-Djava.io.tmpdir=$PWD/tmp '-Dspark.driver.port=43347' '-Dspark.ui.port=0' 
'-Dspark.rpc.message.maxSize=512' 
-Dspark.yarn.app.container.log.dir=/var/log/hadoop-yarn/userlogs/application_160734/container_1607359766658_0016_02_000002
 -XX:OnOutOfMemoryError='kill %p' 
org.apache.spark.executor.YarnCoarseGrainedExecutorBackend --driver-url 
spark://CoarseGrainedScheduler@*-spark30-deb-w-1.c.xxxxxx:43347 --executor-id 1 
--hostname *-spark30-deb-w-0.c.xxxxxx --cores 1 --app-id application_160734 
--resourceProfileId 0 --user-class-path file:$PWD/__app__.jar --user-class-path 
file:$PWD/org.apache.spark_spark-sql-kafka-0-10_2.12-3.0.1.jar 
--user-class-path file:$PWD/com.typesafe_config-1.4.0.jar --user-class-path 
file:$PWD/org.apache.spark_spark-token-provider-kafka-0-10_2.12-3.0.1.jar 
--user-class-path file:$PWD/org.apache.kafka_kafka-clients-2.4.1.jar 
--user-class-path file:$PWD/org.apache.commons_commons-pool2-2.6.2.jar 
--user-class-path file:$PWD/org.spark-project.spark_unused-1.0.0.jar 
--user-class-path file:$PWD/com.github.luben_zstd-jni-1.4.4-3.jar 
--user-class-path file:$PWD/org.lz4_lz4-java-1.7.1.jar --user-class-path 
file:$PWD/org.xerial.snappy_snappy-java-1.1.7.5.jar --user-class-path 
file:$PWD/org.slf4j_slf4j-api-1.7.30.jar 
1>/var/log/hadoop-yarn/userlogs/application_160734/container_1607359766658_0016_02_000002/stdout
 
2>/var/log/hadoop-yarn/userlogs/application_160734/container_1607359766658_0016_02_000002/stderr"

End of LogType:launch_container.sh
************************************************************************************


End of LogType:prelaunch.err
******************************************************************************
lrwxrwxrwx 1 yarn yarn   99 Dec  7 20:32 
org.xerial.snappy_snappy-java-1.1.7.5.jar -> 
/hadoop/yarn/nm-local-dir/usercache/""filecache/172/org.xerial.snappy_snappy-java-1.1.7.5.jar
drwx--x--- 2 yarn yarn 4096 Dec  7 20:32 tmp
find -L . -maxdepth 5 -ls:
 51610350      4 drwx--x---   3 yarn     yarn         4096 Dec  7 20:32 .
 51610577     44 -r-x------   1 yarn     yarn        41472 Dec  7 20:32 
./org.slf4j_slf4j-api-1.7.30.jar
 51610357      4 -rw-r--r--   1 yarn     yarn           16 Dec  7 20:32 
./.default_container_executor_session.sh.crc
 51610592     44 -r-x------   1 yarn     yarn        41007 Dec  7 20:32 
./__app__.jar
 51610583   4112 -r-x------   1 yarn     yarn      4210625 Dec  7 20:32 
./com.github.luben_zstd-jni-1.4.4-3.jar
 51610580   1892 -r-x------   1 yarn     yarn      1934320 Dec  7 20:32 
./org.xerial.snappy_snappy-java-1.1.7.5.jar
 51610355      4 -rw-r--r--   1 yarn     yarn           68 Dec  7 20:32 
./.launch_container.sh.crc
 51610589    288 -r-x------   1 yarn     yarn       294174 Dec  7 20:32 
./com.typesafe_config-1.4.0.jar
 51610540      4 -r-x------   1 yarn     yarn         2777 Dec  7 20:32 
./org.spark-project.spark_unused-1.0.0.jar
 51610351      4 drwx--x---   2 yarn     yarn         4096 Dec  7 20:32 ./tmp
 51610353      4 -rw-r--r--   1 yarn     yarn           12 Dec  7 20:32 
./.container_tokens.crc
 51610359      4 -rw-r--r--   1 yarn     yarn           16 Dec  7 20:32 
./.default_container_executor.sh.crc
 51610595    636 -r-x------   1 yarn     yarn       649950 Dec  7 20:32 
./org.lz4_lz4-java-1.7.1.jar
 51610543    344 -r-x------   1 yarn     yarn       348857 Dec  7 20:32 
./org.apache.spark_spark-sql-kafka-0-10_2.12-3.0.1.jar
 51610546      4 -r-x------   1 yarn     yarn         1065 Dec  7 20:32 
./jars_application.conf
 51610601      4 -r-x------   1 yarn     yarn          238 Dec  7 20:32 
./log4j.properties
 51610549      4 drwx------   3 yarn     yarn         4096 Dec  7 20:32 
./__spark_conf__
 51610575      4 -r-x------   1 yarn     yarn         2832 Dec  7 20:32 
./__spark_conf__/__spark_dist_cache__.properties
 51610550      4 -r-x------   1 yarn     yarn          918 Dec  7 20:32 
./__spark_conf__/log4j.properties
 51610573    140 -r-x------   1 yarn     yarn       142579 Dec  7 20:32 
./__spark_conf__/__spark_hadoop_conf__.xml
 51610574      4 -r-x------   1 yarn     yarn         2501 Dec  7 20:32 
./__spark_conf__/__spark_conf__.properties
 51610551      4 drwx------   2 yarn     yarn         4096 Dec  7 20:32 
./__spark_conf__/__hadoop_conf__
 51610567      4 -r-x------   1 yarn     yarn         2316 Dec  7 20:32 
./__spark_conf__/__hadoop_conf__/ssl-client.xml.example
 51610566      4 -r-x------   1 yarn     yarn         2163 Dec  7 20:32 
./__spark_conf__/__hadoop_conf__/mapred-env.sh
 51610569     12 -r-x------   1 yarn     yarn        11392 Dec  7 20:32 
./__spark_conf__/__hadoop_conf__/hadoop-policy.xml
 51610564      8 -r-x------   1 yarn     yarn         7152 Dec  7 20:32 
./__spark_conf__/__hadoop_conf__/yarn-env.sh
 51610562      4 -r-x------   1 yarn     yarn         1335 Dec  7 20:32 
./__spark_conf__/__hadoop_conf__/configuration.xsl
 51610571      4 -r-x------   1 yarn     yarn         2697 Dec  7 20:32 
./__spark_conf__/__hadoop_conf__/ssl-server.xml.example
 51610553     12 -r-x------   1 yarn     yarn         8713 Dec  7 20:32 
./__spark_conf__/__hadoop_conf__/mapred-site.xml
 51610554     20 -r-x------   1 yarn     yarn        17233 Dec  7 20:32 
./__spark_conf__/__hadoop_conf__/hadoop-env.sh
 51610558      8 -r-x------   1 yarn     yarn         5246 Dec  7 20:32 
./__spark_conf__/__hadoop_conf__/core-site.xml
 51610565      4 -r-x------   1 yarn     yarn         1940 Dec  7 20:32 
./__spark_conf__/__hadoop_conf__/container-executor.cfg
 51610555      4 -r-x------   1 yarn     yarn         3321 Dec  7 20:32 
./__spark_conf__/__hadoop_conf__/hadoop-metrics2.properties
 51610572      8 -r-x------   1 yarn     yarn         4113 Dec  7 20:32 
./__spark_conf__/__hadoop_conf__/mapred-queues.xml.template
 51610563      8 -r-x------   1 yarn     yarn         7064 Dec  7 20:32 
./__spark_conf__/__hadoop_conf__/hdfs-site.xml
 51610557     12 -r-x------   1 yarn     yarn         8506 Dec  7 20:32 
./__spark_conf__/__hadoop_conf__/yarn-site.xml
 51610570      4 -r-x------   1 yarn     yarn          977 Dec  7 20:32 
./__spark_conf__/__hadoop_conf__/fairscheduler.xml
 51610559     12 -r-x------   1 yarn     yarn         8468 Dec  7 20:32 
./__spark_conf__/__hadoop_conf__/capacity-scheduler.xml
 51610552     16 -r-x------   1 yarn     yarn        13326 Dec  7 20:32 
./__spark_conf__/__hadoop_conf__/log4j.properties
 51610568      4 -r-x------   1 yarn     yarn         1537 Dec  7 20:32 
./__spark_conf__/__hadoop_conf__/distcp-default.xml
 51610561      0 -r-x------   1 yarn     yarn            0 Dec  7 20:32 
./__spark_conf__/__hadoop_conf__/nodes_include
 51610556      4 -r-x------   1 yarn     yarn           82 Dec  7 20:32 
./__spark_conf__/__hadoop_conf__/yarn-timelineserver.logging.properties
 51610560      0 -r-x------   1 yarn     yarn            0 Dec  7 20:32 
./__spark_conf__/__hadoop_conf__/nodes_exclude
 51610586   3196 -r-x------   1 yarn     yarn      3269712 Dec  7 20:32 
./org.apache.kafka_kafka-clients-2.4.1.jar
 51610356      4 -rwx------   1 yarn     yarn          655 Dec  7 20:32 
./default_container_executor_session.sh
 51610358      4 -rwx------   1 yarn     yarn          710 Dec  7 20:32 
./default_container_executor.sh
 51610598     56 -r-x------   1 yarn     yarn        56317 Dec  7 20:32 
./org.apache.spark_spark-token-provider-kafka-0-10_2.12-3.0.1.jar
 51610352      4 -rw-r--r--   1 yarn     yarn           92 Dec  7 20:32 
./container_tokens
 51610536    128 -r-x------   1 yarn     yarn       129174 Dec  7 20:32 
./org.apache.commons_commons-pool2-2.6.2.jar
 51610354      8 -rwx------   1 yarn     yarn         7609 Dec  7 20:32 
./launch_container.sh
broken symlinks(find -L . -maxdepth 5 -type l -ls):

End of LogType:directory.info
*******************************************************************************


End of LogType:prelaunch.err
******************************************************************************


End of LogType:stdout
***********************************************************************

===========================================================================================================
LogType:stderr
LogLastModifiedTime:Mon Dec 07 20:32:34 +0000 2020
LogLength:59088
LogContents:
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in 
[jar:file:/usr/lib/spark/jars/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in 
[jar:file:/usr/lib/hadoop/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
20/12/07 20:32:20 INFO SignalUtils: Registered signal handler for TERM
20/12/07 20:32:20 INFO SignalUtils: Registered signal handler for HUP
20/12/07 20:32:20 INFO SignalUtils: Registered signal handler for INT
20/12/07 20:32:20 INFO SecurityManager: Changing view acls to: yarn,""
20/12/07 20:32:20 INFO SecurityManager: Changing modify acls to: yarn,""
20/12/07 20:32:20 INFO SecurityManager: Changing view acls groups to:
20/12/07 20:32:20 INFO SecurityManager: Changing modify acls groups to:
20/12/07 20:32:20 INFO SecurityManager: SecurityManager: authentication 
disabled; ui acls disabled; users  with view permissions: Set(yarn, ""); groups 
with view permissions: Set(); users  with modify permissions: Set(yarn, ""); 
groups with modify permissions: Set()
20/12/07 20:32:20 WARN NativeCodeLoader: Unable to load native-hadoop library 
for your platform... using builtin-java classes where applicable
20/12/07 20:32:21 INFO ApplicationMaster: ApplicationAttemptId: 
appattempt_1607359766658_0016_000001
20/12/07 20:32:21 INFO ApplicationMaster: Starting the user application in a 
separate Thread
20/12/07 20:32:21 INFO ApplicationMaster: Waiting for spark context 
initialization...
20/12/07 20:32:21 INFO Main: Starting the streaming consumer !!!
20/12/07 20:32:21 INFO SparkContext: Running Spark version 3.0.1
20/12/07 20:32:21 INFO ResourceUtils: 
==============================================================
20/12/07 20:32:21 INFO ResourceUtils: Resources for spark.driver:

20/12/07 20:32:21 INFO ResourceUtils: 
==============================================================
20/12/07 20:32:21 INFO SparkContext: Submitted application: event-synch
20/12/07 20:32:21 INFO SecurityManager: Changing view acls to: yarn,""
20/12/07 20:32:21 INFO SecurityManager: Changing modify acls to: yarn,""
20/12/07 20:32:21 INFO SecurityManager: Changing view acls groups to:
20/12/07 20:32:21 INFO SecurityManager: Changing modify acls groups to:
20/12/07 20:32:21 INFO SecurityManager: SecurityManager: authentication 
disabled; ui acls disabled; users  with view permissions: Set(yarn, ""); groups 
with view permissions: Set(); users  with modify permissions: Set(yarn, ""); 
groups with modify permissions: Set()
20/12/07 20:32:21 INFO Utils: Successfully started service 'sparkDriver' on 
port 42803.
20/12/07 20:32:21 INFO SparkEnv: Registering MapOutputTracker
20/12/07 20:32:21 INFO SparkEnv: Registering BlockManagerMaster
20/12/07 20:32:21 INFO BlockManagerMasterEndpoint: Using 
org.apache.spark.storage.DefaultTopologyMapper for getting topology information
20/12/07 20:32:21 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
20/12/07 20:32:21 INFO SparkEnv: Registering BlockManagerMasterHeartbeat
20/12/07 20:32:22 INFO DiskBlockManager: Created local directory at 
/hadoop/yarn/nm-local-dir/usercache/""/appcache/application_160734/blockmgr-832ab202-8c21-4927-b336-da218fd0dfd6
20/12/07 20:32:22 INFO MemoryStore: MemoryStore started with capacity 912.3 MiB
20/12/07 20:32:22 INFO SparkEnv: Registering OutputCommitCoordinator
20/12/07 20:32:22 INFO Utils: Successfully started service 'SparkUI' on port 
32979.
20/12/07 20:32:22 INFO ServerInfo: Adding filter to /jobs: 
org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
20/12/07 20:32:22 INFO ServerInfo: Adding filter to /jobs/json: 
org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
20/12/07 20:32:22 INFO ServerInfo: Adding filter to /jobs/job: 
org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
20/12/07 20:32:22 INFO ServerInfo: Adding filter to /jobs/job/json: 
org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
20/12/07 20:32:22 INFO ServerInfo: Adding filter to /stages: 
org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
20/12/07 20:32:22 INFO ServerInfo: Adding filter to /stages/json: 
org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
20/12/07 20:32:22 INFO ServerInfo: Adding filter to /stages/stage: 
org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
20/12/07 20:32:22 INFO ServerInfo: Adding filter to /stages/stage/json: 
org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
20/12/07 20:32:22 INFO ServerInfo: Adding filter to /stages/pool: 
org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
20/12/07 20:32:22 INFO ServerInfo: Adding filter to /stages/pool/json: 
org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
20/12/07 20:32:22 INFO ServerInfo: Adding filter to /storage: 
org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
20/12/07 20:32:22 INFO ServerInfo: Adding filter to /storage/json: 
org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
20/12/07 20:32:22 INFO ServerInfo: Adding filter to /storage/rdd: 
org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
20/12/07 20:32:22 INFO ServerInfo: Adding filter to /storage/rdd/json: 
org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
20/12/07 20:32:22 INFO ServerInfo: Adding filter to /environment: 
org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
20/12/07 20:32:22 INFO ServerInfo: Adding filter to /environment/json: 
org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
20/12/07 20:32:22 INFO ServerInfo: Adding filter to /executors: 
org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
20/12/07 20:32:22 INFO ServerInfo: Adding filter to /executors/json: 
org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
20/12/07 20:32:22 INFO ServerInfo: Adding filter to /executors/threadDump: 
org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
20/12/07 20:32:22 INFO ServerInfo: Adding filter to /executors/threadDump/json: 
org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
20/12/07 20:32:22 INFO ServerInfo: Adding filter to /static: 
org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
20/12/07 20:32:22 INFO ServerInfo: Adding filter to /: 
org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
20/12/07 20:32:22 INFO ServerInfo: Adding filter to /api: 
org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
20/12/07 20:32:22 INFO ServerInfo: Adding filter to /jobs/job/kill: 
org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
20/12/07 20:32:22 INFO ServerInfo: Adding filter to /stages/stage/kill: 
org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
20/12/07 20:32:22 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at 
http://*-spark30-deb-w-0xxxxxx
20/12/07 20:32:22 INFO YarnClusterScheduler: Created YarnClusterScheduler
20/12/07 20:32:22 INFO FairSchedulableBuilder: Creating Fair Scheduler pools 
from default file: fairscheduler.xml
20/12/07 20:32:22 INFO FairSchedulableBuilder: Created pool: default, 
schedulingMode: FAIR, minShare: 0, weight: 1
20/12/07 20:32:22 INFO Utils: Using initial executors = 4, max of 
spark.dynamicAllocation.initialExecutors, spark.dynamicAllocation.minExecutors 
and spark.executor.instances
20/12/07 20:32:22 INFO Utils: Successfully started service 
'org.apache.spark.network.netty.NettyBlockTransferService' on port 38883.
20/12/07 20:32:22 INFO NettyBlockTransferService: Server created on 
*-spark30-deb-w-0.c.xxxxxx:38883
20/12/07 20:32:22 INFO BlockManager: Using 
org.apache.spark.storage.RandomBlockReplicationPolicy for block replication 
policy
20/12/07 20:32:22 INFO BlockManagerMaster: Registering BlockManager 
BlockManagerId(driver, *-spark30-deb-w-0.c.xxxxxx, 38883, None)
20/12/07 20:32:22 INFO BlockManagerMasterEndpoint: Registering block manager 
*-spark30-deb-w-0.c.xxxxxx:38883 with 912.3 MiB RAM, BlockManagerId(driver, 
*-spark30-deb-w-0.c.xxxxxx, 38883, None)
20/12/07 20:32:22 INFO BlockManagerMaster: Registered BlockManager 
BlockManagerId(driver, *-spark30-deb-w-0.c.xxxxxx, 38883, None)
20/12/07 20:32:22 INFO BlockManager: external shuffle service port = 7337
20/12/07 20:32:22 INFO BlockManager: Initialized BlockManager: 
BlockManagerId(driver, *-spark30-deb-w-0.c.xxxxxx, 38883, None)
20/12/07 20:32:23 INFO ServerInfo: Adding filter to /metrics/json: 
org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter

20/12/07 20:32:24 INFO Utils: Using initial executors = 4, max of 
spark.dynamicAllocation.initialExecutors, spark.dynamicAllocation.minExecutors 
and spark.executor.instances
20/12/07 20:32:24 WARN YarnSchedulerBackend$YarnSchedulerEndpoint: Attempted to 
request executors before the AM has registered!
20/12/07 20:32:24 INFO RMProxy: Connecting to ResourceManager at 
*-spark30-deb-m/a.b.c.d:8030
20/12/07 20:32:25 INFO YarnRMClient: Registering the ApplicationMaster
20/12/07 20:32:25 INFO ApplicationMaster: Preparing Local resources
20/12/07 20:32:25 WARN DomainSocketFactory: The short-circuit local reads 
feature cannot be used because libhadoop cannot be loaded.
20/12/07 20:32:25 INFO ApplicationMaster:
===============================================================================
Default YARN executor launch context:
  env:
    CLASSPATH -> 
{{PWD}}<CPS>{{PWD}}/__spark_conf__<CPS>{{PWD}}/__spark_libs__/*<CPS>/usr/lib/spark/jars/*<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/*<CPS>$HADOOP_COMMON_HOME/lib/*<CPS>$HADOOP_HDFS_HOME/*<CPS>$HADOOP_HDFS_HOME/lib/*<CPS>$HADOOP_MAPRED_HOME/*<CPS>$HADOOP_MAPRED_HOME/lib/*<CPS>$HADOOP_YARN_HOME/*<CPS>$HADOOP_YARN_HOME/lib/*<CPS>/usr/local/share/google/dataproc/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*<CPS>/usr/local/share/google/dataproc/lib/*<CPS>:/etc/hive/conf:/usr/share/java/mysql.jar:/usr/local/share/google/dataproc/lib/*<CPS>{{PWD}}/__spark_conf__/__hadoop_conf__
    SPARK_DIST_CLASSPATH -> 
:/etc/hive/conf:/usr/share/java/mysql.jar:/usr/local/share/google/dataproc/lib/*
    SPARK_YARN_STAGING_DIR -> 
hdfs://*-spark30-deb-m/user/""/.sparkStaging/application_160734
    SPARK_USER -> ""
    OPENBLAS_NUM_THREADS -> 1

  command:
    {{JAVA_HOME}}/bin/java \
      -server \
      -Xmx4096m \
      -Djava.io.tmpdir={{PWD}}/tmp \
      '-Dspark.driver.port=42803' \
      '-Dspark.ui.port=0' \
      '-Dspark.rpc.message.maxSize=512' \
      -Dspark.yarn.app.container.log.dir=<LOG_DIR> \
      -XX:OnOutOfMemoryError='kill %p' \
      org.apache.spark.executor.YarnCoarseGrainedExecutorBackend \
      --driver-url \
      spark://CoarseGrainedScheduler@*-spark30-deb-w-0.c.xxxxxx:42803 \
      --executor-id \
      <executorId> \
      --hostname \
      <hostname> \
      --cores \
      1 \
      --app-id \
      application_160734 \
      --resourceProfileId \
      0 \
      --user-class-path \
      file:$PWD/__app__.jar \
      --user-class-path \
      file:$PWD/org.apache.spark_spark-sql-kafka-0-10_2.12-3.0.1.jar \
      --user-class-path \
      file:$PWD/com.typesafe_config-1.4.0.jar \
      --user-class-path \
      file:$PWD/org.apache.spark_spark-token-provider-kafka-0-10_2.12-3.0.1.jar 
\
      --user-class-path \
      file:$PWD/org.apache.kafka_kafka-clients-2.4.1.jar \
      --user-class-path \
      file:$PWD/org.apache.commons_commons-pool2-2.6.2.jar \
      --user-class-path \
      file:$PWD/org.spark-project.spark_unused-1.0.0.jar \
      --user-class-path \
      file:$PWD/com.github.luben_zstd-jni-1.4.4-3.jar \
      --user-class-path \
      file:$PWD/org.lz4_lz4-java-1.7.1.jar \
      --user-class-path \
      file:$PWD/org.xerial.snappy_snappy-java-1.1.7.5.jar \
      --user-class-path \
      file:$PWD/org.slf4j_slf4j-api-1.7.30.jar \
      1><LOG_DIR>/stdout \
      2><LOG_DIR>/stderr

  resources:
    com.github.luben_zstd-jni-1.4.4-3.jar -> resource { scheme: "hdfs" host: 
"*-spark30-deb-m" port: -1 file: 
"/user/""/.sparkStaging/application_160734/com.github.luben_zstd-jni-1.4.4-3.jar"
 } size: 4210625 timestamp: 1607373137580 type: FILE visibility: PRIVATE
    org.apache.commons_commons-pool2-2.6.2.jar -> resource { scheme: "hdfs" 
host: "*-spark30-deb-m" port: -1 file: 
"/user/""/.sparkStaging/application_160734/org.apache.commons_commons-pool2-2.6.2.jar"
 } size: 129174 timestamp: 1607373137516 type: FILE visibility: PRIVATE
    org.lz4_lz4-java-1.7.1.jar -> resource { scheme: "hdfs" host: 
"*-spark30-deb-m" port: -1 file: 
"/user/""/.sparkStaging/application_160734/org.lz4_lz4-java-1.7.1.jar" } size: 
649950 timestamp: 1607373137604 type: FILE visibility: PRIVATE
    org.slf4j_slf4j-api-1.7.30.jar -> resource { scheme: "hdfs" host: 
"*-spark30-deb-m" port: -1 file: 
"/user/""/.sparkStaging/application_160734/org.slf4j_slf4j-api-1.7.30.jar" } 
size: 41472 timestamp: 1607373137653 type: FILE visibility: PRIVATE
    org.apache.kafka_kafka-clients-2.4.1.jar -> resource { scheme: "hdfs" host: 
"*-spark30-deb-m" port: -1 file: 
"/user/""/.sparkStaging/application_160734/org.apache.kafka_kafka-clients-2.4.1.jar"
 } size: 3269712 timestamp: 1607373137492 type: FILE visibility: PRIVATE
    __app__.jar -> resource { scheme: "hdfs" host: "*-spark30-deb-m" port: -1 
file: "/user/""/.sparkStaging/application_160734/*-*-synch-1.0-SNAPSHOT.jar" } 
size: 41007 timestamp: 1607373137307 type: FILE visibility: PRIVATE
    __spark_conf__ -> resource { scheme: "hdfs" host: "*-spark30-deb-m" port: 
-1 file: "/user/""/.sparkStaging/application_160734/__spark_conf__.zip" } size: 
260633 timestamp: 1607373138075 type: ARCHIVE visibility: PRIVATE
    org.apache.spark_spark-token-provider-kafka-0-10_2.12-3.0.1.jar -> resource 
{ scheme: "hdfs" host: "*-spark30-deb-m" port: -1 file: 
"/user/""/.sparkStaging/application_160734/org.apache.spark_spark-token-provider-kafka-0-10_2.12-3.0.1.jar"
 } size: 56317 timestamp: 1607373137449 type: FILE visibility: PRIVATE
    org.spark-project.spark_unused-1.0.0.jar -> resource { scheme: "hdfs" host: 
"*-spark30-deb-m" port: -1 file: 
"/user/""/.sparkStaging/application_160734/org.spark-project.spark_unused-1.0.0.jar"
 } size: 2777 timestamp: 1607373137544 type: FILE visibility: PRIVATE
    log4j.properties -> resource { scheme: "hdfs" host: "*-spark30-deb-m" port: 
-1 file: "/user/""/.sparkStaging/application_160734/log4j.properties" } size: 
238 timestamp: 1607373137921 type: FILE visibility: PRIVATE
    org.apache.spark_spark-sql-kafka-0-10_2.12-3.0.1.jar -> resource { scheme: 
"hdfs" host: "*-spark30-deb-m" port: -1 file: 
"/user/""/.sparkStaging/application_160734/org.apache.spark_spark-sql-kafka-0-10_2.12-3.0.1.jar"
 } size: 348857 timestamp: 1607373137376 type: FILE visibility: PRIVATE
    com.typesafe_config-1.4.0.jar -> resource { scheme: "hdfs" host: 
"*-spark30-deb-m" port: -1 file: 
"/user/""/.sparkStaging/application_160734/com.typesafe_config-1.4.0.jar" } 
size: 294174 timestamp: 1607373137425 type: FILE visibility: PRIVATE
    org.xerial.snappy_snappy-java-1.1.7.5.jar -> resource { scheme: "hdfs" 
host: "*-spark30-deb-m" port: -1 file: 
"/user/""/.sparkStaging/application_160734/org.xerial.snappy_snappy-java-1.1.7.5.jar"
 } size: 1934320 timestamp: 1607373137630 type: FILE visibility: PRIVATE
    jars_application.conf -> resource { scheme: "hdfs" host: "*-spark30-deb-m" 
port: -1 file: 
"/user/""/.sparkStaging/application_160734/jars_application.conf" } size: 1065 
timestamp: 1607373137789 type: FILE visibility: PRIVATE

===============================================================================
20/12/07 20:32:25 INFO Utils: Using initial executors = 4, max of 
spark.dynamicAllocation.initialExecutors, spark.dynamicAllocation.minExecutors 
and spark.executor.instances
20/12/07 20:32:25 INFO Configuration: resource-types.xml not found
20/12/07 20:32:25 INFO ResourceUtils: Unable to find 'resource-types.xml'.
20/12/07 20:32:25 INFO YarnSchedulerBackend$YarnSchedulerEndpoint: 
ApplicationMaster registered as 
NettyRpcEndpointRef(spark://YarnAM@*-spark30-deb-w-0.c.xxxxxx:42803)
20/12/07 20:32:25 INFO YarnAllocator: Will request 4 executor container(s), 
each with 1 core(s) and 4505 MB memory (including 409 MB of overhead)
20/12/07 20:32:25 INFO YarnAllocator: Submitted 4 unlocalized container 
requests.
20/12/07 20:32:25 INFO ApplicationMaster: Started progress reporter thread with 
(heartbeat : 3000, initial allocation : 200) intervals
20/12/07 20:32:25 INFO YarnClusterSchedulerBackend: SchedulerBackend is ready 
for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
20/12/07 20:32:25 INFO YarnClusterScheduler: YarnClusterScheduler.postStartHook 
done
20/12/07 20:32:25 INFO SharedState: loading hive config file: 
file:/etc/hive/conf.dist/hive-site.xml
20/12/07 20:32:25 INFO SharedState: Setting hive.metastore.warehouse.dir 
('null') to the value of spark.sql.warehouse.dir 
('file:/hadoop/yarn/nm-local-dir/usercache/""/appcache/application_160734/container_1607359766658_0016_01_000001/spark-warehouse').
20/12/07 20:32:25 INFO SharedState: Warehouse path is 
'file:/hadoop/yarn/nm-local-dir/usercache/""/appcache/application_160734/container_1607359766658_0016_01_000001/spark-warehouse'.
20/12/07 20:32:25 INFO ServerInfo: Adding filter to /SQL: 
org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
20/12/07 20:32:25 INFO ServerInfo: Adding filter to /SQL/json: 
org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
20/12/07 20:32:25 INFO ServerInfo: Adding filter to /SQL/execution: 
org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
20/12/07 20:32:25 INFO ServerInfo: Adding filter to /SQL/execution/json: 
org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
20/12/07 20:32:25 INFO ServerInfo: Adding filter to /static/sql: 
org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
20/12/07 20:32:26 INFO YarnAllocator: Launching container 
container_1607359766658_0016_01_000002 on host *-spark30-deb-w-1.c.xxxxxx for 
executor with ID 1
20/12/07 20:32:26 INFO YarnAllocator: Received 1 containers from YARN, 
launching executors on 1 of them.
20/12/07 20:32:27 INFO YarnAllocator: Launching container 
container_1607359766658_0016_01_000003 on host *-spark30-deb-w-0.c.xxxxxx for 
executor with ID 2
20/12/07 20:32:27 INFO YarnAllocator: Launching container 
container_1607359766658_0016_01_000004 on host *-spark30-deb-w-1.c.xxxxxx for 
executor with ID 3
20/12/07 20:32:27 INFO YarnAllocator: Received 2 containers from YARN, 
launching executors on 2 of them.
20/12/07 20:32:28 INFO YarnAllocator: Launching container 
container_1607359766658_0016_01_000005 on host *-spark30-deb-w-0.c.xxxxxx for 
executor with ID 4
20/12/07 20:32:28 INFO YarnAllocator: Received 1 containers from YARN, 
launching executors on 1 of them.
20/12/07 20:32:29 INFO StateStoreCoordinatorRef: Registered 
StateStoreCoordinator endpoint
20/12/07 20:32:29 INFO ServerInfo: Adding filter to /StreamingQuery: 
org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
20/12/07 20:32:29 INFO ServerInfo: Adding filter to /StreamingQuery/json: 
org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
20/12/07 20:32:29 INFO ServerInfo: Adding filter to /StreamingQuery/statistics: 
org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
20/12/07 20:32:29 INFO ServerInfo: Adding filter to 
/StreamingQuery/statistics/json: 
org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
20/12/07 20:32:29 INFO ServerInfo: Adding filter to /static/sql: 
org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
20/12/07 20:32:29 INFO MicroBatchExecution: Checkpoint root 
/Users/""/Desktop/helper/checkpoint resolved to 
hdfs://*-spark30-deb-m/Users/""/Desktop/helper/checkpoint.
20/12/07 20:32:30 INFO MicroBatchExecution: Starting event-synch [id = 
c29e258d-1a86-4a32-b319-d1bad92ca74a, runId = 
eaeb6dc5-0729-4a2d-926d-77afffd40b9c]. Use 
hdfs://*-spark30-deb-m/Users/""/Desktop/helper/checkpoint to store the query 
checkpoint.
20/12/07 20:32:30 WARN SparkSession$Builder: Using an existing SparkSession; 
some spark core configurations may not take effect.
20/12/07 20:32:30 INFO MicroBatchExecution: Reading table 
[org.apache.spark.sql.kafka010.KafkaSourceProvider$KafkaTable@46a6d7e2] from 
DataSourceV2 named 'kafka' 
[org.apache.spark.sql.kafka010.KafkaSourceProvider@4370aa1a]
20/12/07 20:32:30 INFO MicroBatchExecution: Starting new streaming query.
20/12/07 20:32:30 INFO MicroBatchExecution: Stream started from {}
20/12/07 20:32:30 WARN KafkaOffsetReader: Error in attempt 1 getting Kafka 
offsets:
org.apache.kafka.common.config.ConfigException: Missing required configuration 
"partition.assignment.strategy" which has no default value.
        at org.apache.kafka.common.config.ConfigDef.parse(ConfigDef.java:124)
        at 
org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:48)
        at 
org.apache.kafka.clients.consumer.ConsumerConfig.<init>(ConsumerConfig.java:194)
        at 
org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:380)
        at 
org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:363)
        at 
org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:350)
        at 
org.apache.spark.sql.kafka010.SubscribeStrategy.createConsumer(ConsumerStrategy.scala:76)
        at 
org.apache.spark.sql.kafka010.KafkaOffsetReader.consumer(KafkaOffsetReader.scala:88)
        at 
org.apache.spark.sql.kafka010.KafkaOffsetReader.$anonfun$partitionsAssignedToConsumer$2(KafkaOffsetReader.scala:538)
        at 
org.apache.spark.sql.kafka010.KafkaOffsetReader.$anonfun$withRetriesWithoutInterrupt$1(KafkaOffsetReader.scala:600)
        at 
scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
        at 
org.apache.spark.util.UninterruptibleThread.runUninterruptibly(UninterruptibleThread.scala:77)
        at 
org.apache.spark.sql.kafka010.KafkaOffsetReader.withRetriesWithoutInterrupt(KafkaOffsetReader.scala:599)
        at 
org.apache.spark.sql.kafka010.KafkaOffsetReader.$anonfun$partitionsAssignedToConsumer$1(KafkaOffsetReader.scala:536)
        at 
org.apache.spark.sql.kafka010.KafkaOffsetReader.runUninterruptibly(KafkaOffsetReader.scala:567)
        at 
org.apache.spark.sql.kafka010.KafkaOffsetReader.partitionsAssignedToConsumer(KafkaOffsetReader.scala:536)
        at 
org.apache.spark.sql.kafka010.KafkaOffsetReader.fetchLatestOffsets(KafkaOffsetReader.scala:316)
        at 
org.apache.spark.sql.kafka010.KafkaMicroBatchStream.$anonfun$getOrCreateInitialPartitionOffsets$1(KafkaMicroBatchStream.scala:153)
        at scala.Option.getOrElse(Option.scala:189)
        at 
org.apache.spark.sql.kafka010.KafkaMicroBatchStream.getOrCreateInitialPartitionOffsets(KafkaMicroBatchStream.scala:148)
        at 
org.apache.spark.sql.kafka010.KafkaMicroBatchStream.initialOffset(KafkaMicroBatchStream.scala:76)
        at 
org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$constructNextBatch$5(MicroBatchExecution.scala:378)
        at scala.Option.getOrElse(Option.scala:189)
        at 
org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$constructNextBatch$3(MicroBatchExecution.scala:378)
        at 
org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken(ProgressReporter.scala:352)
        at 
org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken$(ProgressReporter.scala:350)
        at 
org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:69)
        at 
org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$constructNextBatch$2(MicroBatchExecution.scala:371)
        at 
scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:285)
        at scala.collection.immutable.Map$Map1.foreach(Map.scala:179)
        at scala.collection.TraversableLike.map(TraversableLike.scala:285)
        at scala.collection.TraversableLike.map$(TraversableLike.scala:278)
        at scala.collection.AbstractTraversable.map(Traversable.scala:108)
        at 
org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$constructNextBatch$1(MicroBatchExecution.scala:368)
        at 
scala.runtime.java8.JFunction0$mcZ$sp.apply(JFunction0$mcZ$sp.java:23)
        at 
org.apache.spark.sql.execution.streaming.MicroBatchExecution.withProgressLocked(MicroBatchExecution.scala:597)
        at 
org.apache.spark.sql.execution.streaming.MicroBatchExecution.constructNextBatch(MicroBatchExecution.scala:364)
        at 
org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runActivatedStream$2(MicroBatchExecution.scala:208)
        at 
scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
        at 
org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken(ProgressReporter.scala:352)
        at 
org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken$(ProgressReporter.scala:350)
        at 
org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:69)
        at 
org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runActivatedStream$1(MicroBatchExecution.scala:191)
        at 
org.apache.spark.sql.execution.streaming.ProcessingTimeExecutor.execute(TriggerExecutor.scala:57)
        at 
org.apache.spark.sql.execution.streaming.MicroBatchExecution.runActivatedStream(MicroBatchExecution.scala:185)
        at 
org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:334)
        at 
org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:245)
20/12/07 20:32:30 INFO ResourceProfile: Default ResourceProfile created, 
executor resources: Map(cores -> name: cores, amount: 1, script: , vendor: , 
memory -> name: memory, amount: 4096, script: , vendor: ), task resources: 
Map(cpus -> name: cpus, amount: 1.0)
20/12/07 20:32:31 WARN KafkaOffsetReader: Error in attempt 2 getting Kafka 
offsets:
org.apache.kafka.common.config.ConfigException: Missing required configuration 
"partition.assignment.strategy" which has no default value.
        at org.apache.kafka.common.config.ConfigDef.parse(ConfigDef.java:124)
        at 
org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:48)
        at 
org.apache.kafka.clients.consumer.ConsumerConfig.<init>(ConsumerConfig.java:194)
        at 
org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:380)
        at 
org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:363)
        at 
org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:350)
        at 
org.apache.spark.sql.kafka010.SubscribeStrategy.createConsumer(ConsumerStrategy.scala:76)
        at 
org.apache.spark.sql.kafka010.KafkaOffsetReader.consumer(KafkaOffsetReader.scala:88)
        at 
org.apache.spark.sql.kafka010.KafkaOffsetReader.$anonfun$partitionsAssignedToConsumer$2(KafkaOffsetReader.scala:538)
        at 
org.apache.spark.sql.kafka010.KafkaOffsetReader.$anonfun$withRetriesWithoutInterrupt$1(KafkaOffsetReader.scala:600)
        at 
scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
        at 
org.apache.spark.util.UninterruptibleThread.runUninterruptibly(UninterruptibleThread.scala:77)
        at 
org.apache.spark.sql.kafka010.KafkaOffsetReader.withRetriesWithoutInterrupt(KafkaOffsetReader.scala:599)
        at 
org.apache.spark.sql.kafka010.KafkaOffsetReader.$anonfun$partitionsAssignedToConsumer$1(KafkaOffsetReader.scala:536)
        at 
org.apache.spark.sql.kafka010.KafkaOffsetReader.runUninterruptibly(KafkaOffsetReader.scala:567)
        at 
org.apache.spark.sql.kafka010.KafkaOffsetReader.partitionsAssignedToConsumer(KafkaOffsetReader.scala:536)
        at 
org.apache.spark.sql.kafka010.KafkaOffsetReader.fetchLatestOffsets(KafkaOffsetReader.scala:316)
        at 
org.apache.spark.sql.kafka010.KafkaMicroBatchStream.$anonfun$getOrCreateInitialPartitionOffsets$1(KafkaMicroBatchStream.scala:153)
        at scala.Option.getOrElse(Option.scala:189)
        at 
org.apache.spark.sql.kafka010.KafkaMicroBatchStream.getOrCreateInitialPartitionOffsets(KafkaMicroBatchStream.scala:148)
        at 
org.apache.spark.sql.kafka010.KafkaMicroBatchStream.initialOffset(KafkaMicroBatchStream.scala:76)
        at 
org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$constructNextBatch$5(MicroBatchExecution.scala:378)
        at scala.Option.getOrElse(Option.scala:189)
        at 
org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$constructNextBatch$3(MicroBatchExecution.scala:378)
        at 
org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken(ProgressReporter.scala:352)
        at 
org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken$(ProgressReporter.scala:350)
        at 
org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:69)
        at 
org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$constructNextBatch$2(MicroBatchExecution.scala:371)
        at 
scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:285)
        at scala.collection.immutable.Map$Map1.foreach(Map.scala:179)
        at scala.collection.TraversableLike.map(TraversableLike.scala:285)
        at scala.collection.TraversableLike.map$(TraversableLike.scala:278)
        at scala.collection.AbstractTraversable.map(Traversable.scala:108)
        at 
org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$constructNextBatch$1(MicroBatchExecution.scala:368)
        at 
scala.runtime.java8.JFunction0$mcZ$sp.apply(JFunction0$mcZ$sp.java:23)
        at 
org.apache.spark.sql.execution.streaming.MicroBatchExecution.withProgressLocked(MicroBatchExecution.scala:597)
        at 
org.apache.spark.sql.execution.streaming.MicroBatchExecution.constructNextBatch(MicroBatchExecution.scala:364)
        at 
org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runActivatedStream$2(MicroBatchExecution.scala:208)
        at 
scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
        at 
org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken(ProgressReporter.scala:352)
        at 
org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken$(ProgressReporter.scala:350)
        at 
org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:69)
        at 
org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runActivatedStream$1(MicroBatchExecution.scala:191)
        at 
org.apache.spark.sql.execution.streaming.ProcessingTimeExecutor.execute(TriggerExecutor.scala:57)
        at 
org.apache.spark.sql.execution.streaming.MicroBatchExecution.runActivatedStream(MicroBatchExecution.scala:185)
        at 
org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:334)
        at 
org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:245)
20/12/07 20:32:31 INFO YarnSchedulerBackend$YarnDriverEndpoint: Registered 
executor NettyRpcEndpointRef(spark-client://Executor) (10.169.56.87:48702) with 
ID 1
20/12/07 20:32:31 INFO YarnSchedulerBackend$YarnDriverEndpoint: Registered 
executor NettyRpcEndpointRef(spark-client://Executor) (10.169.56.87:48704) with 
ID 3
20/12/07 20:32:31 INFO ExecutorMonitor: New executor 1 has registered (new 
total is 1)
20/12/07 20:32:31 INFO ExecutorMonitor: New executor 3 has registered (new 
total is 2)
20/12/07 20:32:32 INFO BlockManagerMasterEndpoint: Registering block manager 
*-spark30-deb-w-1.c.xxxxxx:43653 with 2004.6 MiB RAM, BlockManagerId(1, 
*-spark30-deb-w-1.c.xxxxxx, 43653, None)
20/12/07 20:32:32 INFO BlockManagerMasterEndpoint: Registering block manager 
*-spark30-deb-w-1.c.xxxxxx:45359 with 2004.6 MiB RAM, BlockManagerId(3, 
*-spark30-deb-w-1.c.xxxxxx, 45359, None)
20/12/07 20:32:32 INFO YarnSchedulerBackend$YarnDriverEndpoint: Registered 
executor NettyRpcEndpointRef(spark-client://Executor) (10.169.56.103:38596) 
with ID 2
20/12/07 20:32:32 INFO ExecutorMonitor: New executor 2 has registered (new 
total is 3)
20/12/07 20:32:32 INFO BlockManagerMasterEndpoint: Registering block manager 
*-spark30-deb-w-0.c.xxxxxx:38447 with 2004.6 MiB RAM, BlockManagerId(2, 
*-spark30-deb-w-0.c.xxxxxx, 38447, None)