Please attach all the thread dumps and log files for the investigation.
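If jstack is not handy on the nodes, a thread dump can also be captured from inside the JVM with the standard ThreadMXBean API. A minimal sketch (the output file name is just an example):

```scala
import java.lang.management.ManagementFactory
import java.nio.file.{Files, Paths}

object ThreadDumper {
  /** Renders a full thread dump (stack traces plus lock info) as a string. */
  def dump(): String = {
    val mx = ManagementFactory.getThreadMXBean
    mx.dumpAllThreads(true, true).map(_.toString).mkString("\n")
  }

  def main(args: Array[String]): Unit =
    Files.write(Paths.get("thread-dump.txt"), dump().getBytes("UTF-8"))
}
```

Taking a few dumps a few seconds apart on each node makes it much easier to see which threads are actually stuck rather than just momentarily busy.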

-
Denis


On Wed, Feb 5, 2020 at 8:03 AM pg31 <singhhoneyyo...@gmail.com> wrote:

> Hi
>
> Cluster Configuration:
> 3 Nodes (112 GB Memory / 512 GB Disk)
>
> Ignite Configuration:
> 1. Persistence Enabled
> 2. Version: 2.6.0
> 3. Configuration is as follows:
>             <property name="authenticationEnabled" value="true"/>
>             <property name="failureDetectionTimeout" value="30000"/>
>             <property name="workDirectory" value="/persistence/work"/>
>
>             <property name="dataStorageConfiguration">
>                 <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
>                     <property name="defaultDataRegionConfiguration">
>                         <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
>                             <property name="name" value="Default_Region"/>
>                             <property name="persistenceEnabled" value="true"/>
>                             <property name="maxSize" value="#{80 * 1024 * 1024 * 1024}"/>
>                         </bean>
>                     </property>
>                     <property name="storagePath" value="/persistence"/>
>                     <property name="walPath" value="/wal"/>
>                     <property name="walArchivePath" value="/wal/archive"/>
>                     <property name="walMode" value="LOG_ONLY"/>
>                     <property name="walCompactionEnabled" value="true"/>
>                     <property name="walHistorySize" value="2"/>
>                 </bean>
>             </property>
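For reference, the same settings expressed through Ignite's programmatic API (a sketch equivalent to the XML above, assuming Ignite 2.6 class and setter names):

```scala
import org.apache.ignite.configuration.{DataRegionConfiguration, DataStorageConfiguration, IgniteConfiguration, WALMode}

// Sketch: the XML bean configuration above, built programmatically.
val region = new DataRegionConfiguration()
  .setName("Default_Region")
  .setPersistenceEnabled(true)
  .setMaxSize(80L * 1024 * 1024 * 1024) // 80 GB; the L suffix keeps the arithmetic in long

val storage = new DataStorageConfiguration()
  .setDefaultDataRegionConfiguration(region)
  .setStoragePath("/persistence")
  .setWalPath("/wal")
  .setWalArchivePath("/wal/archive")
  .setWalMode(WALMode.LOG_ONLY)
  .setWalCompactionEnabled(true)
  .setWalHistorySize(2)

val cfg = new IgniteConfiguration()
  .setAuthenticationEnabled(true)
  .setFailureDetectionTimeout(30000)
  .setWorkDirectory("/persistence/work")
  .setDataStorageConfiguration(storage)
```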
>
>
> Issue: Data Loading with Spark gets stuck at the end.
> Description:
> I am trying to load 65M (million) rows into the Ignite cluster. Everything
> runs well until 64.5 million rows, and then, all of a sudden, the data
> ingestion just hangs. (I am able to ingest the first 64.5 million rows in
> about 10 minutes.)
>
> There is still plenty of free memory on all the nodes (approximately 80 GB
> remains free on each node).
>
> I am using the following code to ingest data into Ignite:
>
> dataFrame
>   .write.format(IgniteDataFrameSettings.FORMAT_IGNITE)
>   .option(IgniteDataFrameSettings.OPTION_TABLE, tableName)
>   .option(IgniteDataFrameSettings.OPTION_CONFIG_FILE, igniteConfig)
>   .option(IgniteDataFrameSettings.OPTION_CREATE_TABLE_PRIMARY_KEY_FIELDS, primaryKey)
>   .option("user", igniteUsername)
>   .option("password", ignitePassword)
>   .mode(SaveMode.Overwrite) // Overwriting the entire table.
>   .save()
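For completeness, this write is backed by Ignite's data streamer, and the same call can set the streamer's behaviour explicitly (a sketch with example values; it assumes these `OPTION_STREAMER_*` constants are present in the ignite-spark version in use). Changing the flush frequency can alter where a stalled load blocks, which helps narrow down a hang at the end of ingestion:

```scala
import org.apache.ignite.spark.IgniteDataFrameSettings
import org.apache.spark.sql.SaveMode

// Same write as above, with the underlying streamer options made explicit.
dataFrame
  .write.format(IgniteDataFrameSettings.FORMAT_IGNITE)
  .option(IgniteDataFrameSettings.OPTION_TABLE, tableName)
  .option(IgniteDataFrameSettings.OPTION_CONFIG_FILE, igniteConfig)
  .option(IgniteDataFrameSettings.OPTION_CREATE_TABLE_PRIMARY_KEY_FIELDS, primaryKey)
  .option(IgniteDataFrameSettings.OPTION_STREAMER_ALLOW_OVERWRITE, "true")
  .option(IgniteDataFrameSettings.OPTION_STREAMER_FLUSH_FREQUENCY, "10000") // flush every 10 s (example value)
  .option("user", igniteUsername)
  .option("password", ignitePassword)
  .mode(SaveMode.Overwrite)
  .save()
```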
>
> I am not sure, but it looks like an Ignite thread is hanging somewhere.
>
> This is what I can see in the thread dumps of all the executors:
>
>
> <http://apache-ignite-users.70518.x6.nabble.com/file/t2770/Spark_Thread_Stuck.png>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
