[ https://issues.apache.org/jira/browse/HIVE-18214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Sahil Takiar updated HIVE-18214:
--------------------------------
    Attachment: HIVE-18214.2.patch

> Flaky test: TestSparkClient
> ---------------------------
>
>                 Key: HIVE-18214
>                 URL: https://issues.apache.org/jira/browse/HIVE-18214
>             Project: Hive
>          Issue Type: Sub-task
>          Components: Spark
>            Reporter: Sahil Takiar
>            Assignee: Sahil Takiar
>         Attachments: HIVE-18214.1.patch, HIVE-18214.2.patch
>
>
> Looks like there is a race condition in {{TestSparkClient#runTest}}. Each test creates a {{RemoteDriver}} in memory, which in turn creates a {{JavaSparkContext}}, so a new {{JavaSparkContext}} is created for every test that runs. The {{RemoteDriver}} isn't always given enough time to shut down, so when the next test starts it creates another {{JavaSparkContext}}, which fails with an exception like {{org.apache.spark.SparkException: Only one SparkContext may be running in this JVM (see SPARK-2243)}}.
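
A minimal sketch of one way this kind of race is typically closed (an illustration only, not necessarily the approach taken in the attached patches): block at the end of each test until the previous context has fully stopped before the next test creates a new one. The {{SparkContextShutdownUtil}} class and {{awaitContextShutdown}} helper below are hypothetical names, and the sketch assumes {{SparkContext#isStopped}} is accessible from the test.

{code:java}
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

import org.apache.spark.api.java.JavaSparkContext;

/**
 * Hypothetical test helper (illustration only): blocks until a
 * JavaSparkContext has finished stopping, so the next test in the same
 * JVM can safely create a fresh context.
 */
public final class SparkContextShutdownUtil {

  private SparkContextShutdownUtil() {
  }

  public static void awaitContextShutdown(JavaSparkContext jsc, long timeoutMs)
      throws InterruptedException, TimeoutException {
    long deadline = System.currentTimeMillis() + timeoutMs;
    // Request shutdown; SparkContext.stop() is safe to call more than once.
    jsc.stop();
    // Poll the underlying SparkContext until it reports that it has stopped,
    // or give up after the timeout.
    while (!jsc.sc().isStopped()) {
      if (System.currentTimeMillis() > deadline) {
        throw new TimeoutException(
            "SparkContext did not stop within " + timeoutMs + " ms");
      }
      TimeUnit.MILLISECONDS.sleep(100);
    }
  }
}
{code}

In the test, such a helper would be invoked at the end of {{TestSparkClient#runTest}} (or in an @After method) before the next test's {{RemoteDriver}} and {{JavaSparkContext}} are created; whether the fix in HIVE-18214.2.patch actually takes this form is not shown here.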