Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/12150/
Java: 64bit/jdk1.9.0-ea-b54 -XX:-UseCompressedOops -XX:+UseSerialGC

1 test failed.
FAILED:  org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test

Error Message:
There were too many update fails (25 > 20) - we expect it can happen, but 
shouldn't easily

Stack Trace:
java.lang.AssertionError: There were too many update fails (25 > 20) - we 
expect it can happen, but shouldn't easily
        at 
__randomizedtesting.SeedInfo.seed([A8FAE17676D1BE93:20AEDEACD82DD36B]:0)
        at org.junit.Assert.fail(Assert.java:93)
        at org.junit.Assert.assertTrue(Assert.java:43)
        at org.junit.Assert.assertFalse(Assert.java:68)
        at 
org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test(ChaosMonkeyNothingIsSafeTest.java:230)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:502)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
        at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
        at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
        at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
        at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
        at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
        at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
        at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
        at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
        at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
        at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
        at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
        at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
        at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
        at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
        at java.lang.Thread.run(Thread.java:745)
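
For reference, the assertion tripping here (ChaosMonkeyNothingIsSafeTest.java:230) is the chaos-monkey tolerance check on failed updates: this run saw 25 failed updates against an allowed ceiling of 20. The following is only an illustrative sketch of that style of check, not the actual test source; the names FAIL_TOLERANCE and assertUpdateFailuresWithinTolerance are hypothetical.

import static org.junit.Assert.assertFalse;

// Illustrative sketch only: the real check lives in
// org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest (line 230 in this build).
// The names below (FAIL_TOLERANCE, failedUpdates) are hypothetical.
public class UpdateFailToleranceSketch {

    // Ceiling on update failures the chaos run is willing to tolerate.
    private static final int FAIL_TOLERANCE = 20;

    /** Fails the test when the observed failure count exceeds the tolerance. */
    static void assertUpdateFailuresWithinTolerance(int failedUpdates) {
        assertFalse("There were too many update fails (" + failedUpdates
                        + " > " + FAIL_TOLERANCE
                        + ") - we expect it can happen, but shouldn't easily",
                failedUpdates > FAIL_TOLERANCE);
    }

    public static void main(String[] args) {
        // 25 observed failures > 20 tolerated -> AssertionError, as in this build.
        assertUpdateFailuresWithinTolerance(25);
    }
}
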




Build Log:
[...truncated 10251 lines...]
   [junit4] Suite: org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest
   [junit4]   2> Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/init-core-data-001
   [junit4]   2> 674985 T5369 oas.BaseDistributedSearchTestCase.initHostContext 
Setting hostContext system property: /d_tw/vh
   [junit4]   2> 674988 T5369 oasc.ZkTestServer.run STARTING ZK TEST SERVER
   [junit4]   2> 674988 T5370 oasc.ZkTestServer$2$1.setClientPort client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 674988 T5370 oasc.ZkTestServer$ZKServerMain.runFromConfig 
Starting server
   [junit4]   2> 675088 T5369 oasc.ZkTestServer.run start zk server on 
port:41610
   [junit4]   2> 675096 T5369 oasc.AbstractZkTestCase.putConfig put 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test-files/solr/collection1/conf/solrconfig-tlog.xml
 to /configs/conf1/solrconfig.xml
   [junit4]   2> 675097 T5369 oasc.AbstractZkTestCase.putConfig put 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test-files/solr/collection1/conf/schema15.xml
 to /configs/conf1/schema.xml
   [junit4]   2> 675098 T5369 oasc.AbstractZkTestCase.putConfig put 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test-files/solr/collection1/conf/solrconfig.snippet.randomindexconfig.xml
 to /configs/conf1/solrconfig.snippet.randomindexconfig.xml
   [junit4]   2> 675099 T5369 oasc.AbstractZkTestCase.putConfig put 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test-files/solr/collection1/conf/stopwords.txt
 to /configs/conf1/stopwords.txt
   [junit4]   2> 675100 T5369 oasc.AbstractZkTestCase.putConfig put 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test-files/solr/collection1/conf/protwords.txt
 to /configs/conf1/protwords.txt
   [junit4]   2> 675100 T5369 oasc.AbstractZkTestCase.putConfig put 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test-files/solr/collection1/conf/currency.xml
 to /configs/conf1/currency.xml
   [junit4]   2> 675101 T5369 oasc.AbstractZkTestCase.putConfig put 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test-files/solr/collection1/conf/enumsConfig.xml
 to /configs/conf1/enumsConfig.xml
   [junit4]   2> 675101 T5369 oasc.AbstractZkTestCase.putConfig put 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test-files/solr/collection1/conf/open-exchange-rates.json
 to /configs/conf1/open-exchange-rates.json
   [junit4]   2> 675102 T5369 oasc.AbstractZkTestCase.putConfig put 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test-files/solr/collection1/conf/mapping-ISOLatin1Accent.txt
 to /configs/conf1/mapping-ISOLatin1Accent.txt
   [junit4]   2> 675103 T5369 oasc.AbstractZkTestCase.putConfig put 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test-files/solr/collection1/conf/old_synonyms.txt
 to /configs/conf1/old_synonyms.txt
   [junit4]   2> 675103 T5369 oasc.AbstractZkTestCase.putConfig put 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test-files/solr/collection1/conf/synonyms.txt
 to /configs/conf1/synonyms.txt
   [junit4]   2> 675146 T5369 oas.SolrTestCaseJ4.writeCoreProperties Writing 
core.properties file to 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/control-001/cores/collection1
   [junit4]   2> 675147 T5369 oejs.Server.doStart jetty-8.1.10.v20130312
   [junit4]   2> 675148 T5369 oejs.AbstractConnector.doStart Started 
[email protected]:51908
   [junit4]   2> 675148 T5369 oascse.JettySolrRunner$1.lifeCycleStarted Jetty 
properties: 
{solr.data.dir=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/tempDir-001/control/data, hostContext=/d_tw/vh, 
hostPort=51908, 
coreRootDirectory=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/control-001/cores}
   [junit4]   2> 675149 T5369 oass.SolrDispatchFilter.init 
SolrDispatchFilter.init()sun.misc.Launcher$AppClassLoader@3b764bce
   [junit4]   2> 675149 T5369 oasc.SolrResourceLoader.<init> new 
SolrResourceLoader for directory: 
'/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/control-001/'
   [junit4]   2> 675162 T5369 oasc.SolrXmlConfig.fromFile Loading container 
configuration from 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/control-001/solr.xml
   [junit4]   2> 675165 T5369 oasc.CorePropertiesLocator.<init> Config-defined 
core root directory: 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/control-001/cores
   [junit4]   2> 675165 T5369 oasc.CoreContainer.<init> New CoreContainer 
908649797
   [junit4]   2> 675165 T5369 oasc.CoreContainer.load Loading cores into 
CoreContainer 
[instanceDir=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/control-001/]
   [junit4]   2> 675166 T5369 oasc.CoreContainer.load loading shared library: 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/control-001/lib
   [junit4]   2> 675166 T5369 oasc.SolrResourceLoader.addToClassLoader WARN 
Can't find (or read) directory to add to classloader: lib (resolved as: 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/control-001/lib).
   [junit4]   2> 675170 T5369 oashc.HttpShardHandlerFactory.init created with 
socketTimeout : 90000,urlScheme : ,connTimeout : 15000,maxConnectionsPerHost : 
20,maxConnections : 10000,corePoolSize : 0,maximumPoolSize : 
2147483647,maxThreadIdleTime : 5,sizeOfQueue : -1,fairnessPolicy : 
false,useRetries : false,
   [junit4]   2> 675171 T5369 oasu.UpdateShardHandler.<init> Creating 
UpdateShardHandler HTTP client with params: 
socketTimeout=340000&connTimeout=45000&retry=true
   [junit4]   2> 675171 T5369 oasl.LogWatcher.createWatcher SLF4J impl is 
org.slf4j.impl.Log4jLoggerFactory
   [junit4]   2> 675171 T5369 oasl.LogWatcher.newRegisteredLogWatcher 
Registering Log Listener [Log4j (org.slf4j.impl.Log4jLoggerFactory)]
   [junit4]   2> 675172 T5369 oasc.CoreContainer.load Node Name: 127.0.0.1
   [junit4]   2> 675172 T5369 oasc.ZkContainer.initZooKeeper Zookeeper 
client=127.0.0.1:41610/solr
   [junit4]   2> 675172 T5369 oasc.ZkController.checkChrootPath zkHost includes 
chroot
   [junit4]   2> 675180 T5369 N:127.0.0.1:51908_d_tw%2Fvh 
oasc.ZkController.createEphemeralLiveNode Register node as live in 
ZooKeeper:/live_nodes/127.0.0.1:51908_d_tw%2Fvh
   [junit4]   2> 675182 T5369 N:127.0.0.1:51908_d_tw%2Fvh oasc.Overseer.close 
Overseer (id=null) closing
   [junit4]   2> 675182 T5369 N:127.0.0.1:51908_d_tw%2Fvh 
oasc.OverseerElectionContext.runLeaderProcess I am going to be the leader 
127.0.0.1:51908_d_tw%2Fvh
   [junit4]   2> 675183 T5369 N:127.0.0.1:51908_d_tw%2Fvh oasc.Overseer.start 
Overseer (id=93661436702490627-127.0.0.1:51908_d_tw%2Fvh-n_0000000000) starting
   [junit4]   2> 675186 T5369 N:127.0.0.1:51908_d_tw%2Fvh 
oasc.OverseerAutoReplicaFailoverThread.<init> Starting 
OverseerAutoReplicaFailoverThread autoReplicaFailoverWorkLoopDelay=10000 
autoReplicaFailoverWaitAfterExpiration=30000 
autoReplicaFailoverBadNodeExpiration=60000
   [junit4]   2> 675186 T5397 N:127.0.0.1:51908_d_tw%2Fvh 
oasc.OverseerCollectionProcessor.run Process current queue of collection 
creations
   [junit4]   2> 675187 T5396 N:127.0.0.1:51908_d_tw%2Fvh 
oasc.Overseer$ClusterStateUpdater.run Starting to work on the main queue
   [junit4]   2> 675188 T5369 N:127.0.0.1:51908_d_tw%2Fvh 
oasc.CorePropertiesLocator.discover Looking for core definitions underneath 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/control-001/cores
   [junit4]   2> 675189 T5369 N:127.0.0.1:51908_d_tw%2Fvh 
oasc.CoreDescriptor.<init> CORE DESCRIPTOR: {name=collection1, 
config=solrconfig.xml, transient=false, schema=schema.xml, loadOnStartup=true, 
instanceDir=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/control-001/cores/collection1, 
collection=control_collection, 
absoluteInstDir=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/control-001/cores/collection1/, coreNodeName=, 
dataDir=data/, shard=}
   [junit4]   2> 675189 T5369 N:127.0.0.1:51908_d_tw%2Fvh 
oasc.CorePropertiesLocator.discoverUnder Found core collection1 in 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/control-001/cores/collection1/
   [junit4]   2> 675189 T5369 N:127.0.0.1:51908_d_tw%2Fvh 
oasc.CorePropertiesLocator.discover Found 1 core definitions
   [junit4]   2> 675190 T5399 N:127.0.0.1:51908_d_tw%2Fvh C:control_collection 
c:collection1 oasc.ZkController.publish publishing core=collection1 state=down 
collection=control_collection
   [junit4]   2> 675190 T5399 N:127.0.0.1:51908_d_tw%2Fvh C:control_collection 
c:collection1 oasc.ZkController.publish numShards not found on descriptor - 
reading it from system property
   [junit4]   2> 675190 T5395 N:127.0.0.1:51908_d_tw%2Fvh 
oasc.DistributedQueue$LatchWatcher.process NodeChildrenChanged fired on path 
/overseer/queue state SyncConnected
   [junit4]   2> 675190 T5399 N:127.0.0.1:51908_d_tw%2Fvh 
oasc.ZkController.waitForCoreNodeName look for our core node name
   [junit4]   2> 675190 T5396 N:127.0.0.1:51908_d_tw%2Fvh 
oasc.Overseer$ClusterStateUpdater.run processMessage: queueSize: 1, message = {
   [junit4]   2>          "core":"collection1",
   [junit4]   2>          "roles":null,
   [junit4]   2>          "base_url":"http://127.0.0.1:51908/d_tw/vh";,
   [junit4]   2>          "node_name":"127.0.0.1:51908_d_tw%2Fvh",
   [junit4]   2>          "numShards":"1",
   [junit4]   2>          "state":"down",
   [junit4]   2>          "shard":null,
   [junit4]   2>          "collection":"control_collection",
   [junit4]   2>          "operation":"state"} current state version: 0
   [junit4]   2> 675191 T5396 N:127.0.0.1:51908_d_tw%2Fvh 
oasco.ReplicaMutator.updateState Update state numShards=1 message={
   [junit4]   2>          "core":"collection1",
   [junit4]   2>          "roles":null,
   [junit4]   2>          "base_url":"http://127.0.0.1:51908/d_tw/vh";,
   [junit4]   2>          "node_name":"127.0.0.1:51908_d_tw%2Fvh",
   [junit4]   2>          "numShards":"1",
   [junit4]   2>          "state":"down",
   [junit4]   2>          "shard":null,
   [junit4]   2>          "collection":"control_collection",
   [junit4]   2>          "operation":"state"}
   [junit4]   2> 675191 T5396 N:127.0.0.1:51908_d_tw%2Fvh 
oasco.ClusterStateMutator.createCollection building a new cName: 
control_collection
   [junit4]   2> 675191 T5396 N:127.0.0.1:51908_d_tw%2Fvh 
oasco.ReplicaMutator.updateState Assigning new node to shard shard=shard1
   [junit4]   2> 676191 T5399 N:127.0.0.1:51908_d_tw%2Fvh 
oasc.ZkController.waitForShardId waiting to find shard id in clusterstate for 
collection1
   [junit4]   2> 676191 T5399 N:127.0.0.1:51908_d_tw%2Fvh 
oasc.ZkController.createCollectionZkNode Check for collection 
zkNode:control_collection
   [junit4]   2> 676191 T5399 N:127.0.0.1:51908_d_tw%2Fvh 
oasc.ZkController.createCollectionZkNode Collection zkNode exists
   [junit4]   2> 676192 T5399 N:127.0.0.1:51908_d_tw%2Fvh 
oasc.SolrResourceLoader.<init> new SolrResourceLoader for directory: 
'/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/control-001/cores/collection1/'
   [junit4]   2> 676203 T5399 N:127.0.0.1:51908_d_tw%2Fvh 
oasc.ZkController.watchZKConfDir watch zkdir /configs/conf1
   [junit4]   2> 676205 T5399 N:127.0.0.1:51908_d_tw%2Fvh oasc.Config.<init> 
loaded config solrconfig.xml with version 0 
   [junit4]   2> 676209 T5399 N:127.0.0.1:51908_d_tw%2Fvh 
oasc.SolrConfig.refreshRequestParams current version of requestparams : -1
   [junit4]   2> 676212 T5399 N:127.0.0.1:51908_d_tw%2Fvh 
oasc.SolrConfig.<init> Using Lucene MatchVersion: 5.2.0
   [junit4]   2> 676220 T5399 N:127.0.0.1:51908_d_tw%2Fvh 
oasc.SolrConfig.<init> Loaded SolrConfig: solrconfig.xml
   [junit4]   2> 676221 T5399 N:127.0.0.1:51908_d_tw%2Fvh 
oass.IndexSchema.readSchema Reading Solr Schema from /configs/conf1/schema.xml
   [junit4]   2> 676225 T5399 N:127.0.0.1:51908_d_tw%2Fvh 
oass.IndexSchema.readSchema [collection1] Schema name=test
   [junit4]   2> 676295 T5399 N:127.0.0.1:51908_d_tw%2Fvh 
oass.IndexSchema.readSchema default search field in schema is text
   [junit4]   2> 676296 T5399 N:127.0.0.1:51908_d_tw%2Fvh 
oass.IndexSchema.readSchema unique key field: id
   [junit4]   2> 676297 T5399 N:127.0.0.1:51908_d_tw%2Fvh 
oass.FileExchangeRateProvider.reload Reloading exchange rates from file 
currency.xml
   [junit4]   2> 676299 T5399 N:127.0.0.1:51908_d_tw%2Fvh 
oass.FileExchangeRateProvider.reload Reloading exchange rates from file 
currency.xml
   [junit4]   2> 676305 T5399 N:127.0.0.1:51908_d_tw%2Fvh 
oasc.CoreContainer.create Creating SolrCore 'collection1' using configuration 
from collection control_collection
   [junit4]   2> 676305 T5399 N:127.0.0.1:51908_d_tw%2Fvh c:collection1 
oasc.SolrCore.initDirectoryFactory solr.StandardDirectoryFactory
   [junit4]   2> 676306 T5399 N:127.0.0.1:51908_d_tw%2Fvh c:collection1 
oasc.SolrCore.<init> [collection1] Opening new SolrCore at 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/control-001/cores/collection1/, 
dataDir=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/control-001/cores/collection1/data/
   [junit4]   2> 676306 T5399 N:127.0.0.1:51908_d_tw%2Fvh c:collection1 
oasc.JmxMonitoredMap.<init> JMX monitoring is enabled. Adding Solr mbeans to 
JMX Server: com.sun.jmx.mbeanserver.JmxMBeanServer@3425a966
   [junit4]   2> 676306 T5399 N:127.0.0.1:51908_d_tw%2Fvh c:collection1 
oasc.CachingDirectoryFactory.get return new directory for 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/control-001/cores/collection1/data
   [junit4]   2> 676307 T5399 N:127.0.0.1:51908_d_tw%2Fvh c:collection1 
oasc.SolrCore.getNewIndexDir New index directory detected: old=null 
new=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/control-001/cores/collection1/data/index/
   [junit4]   2> 676307 T5399 N:127.0.0.1:51908_d_tw%2Fvh c:collection1 
oasc.SolrCore.initIndex WARN [collection1] Solr index directory 
'/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/control-001/cores/collection1/data/index' doesn't exist. 
Creating new index...
   [junit4]   2> 676307 T5399 N:127.0.0.1:51908_d_tw%2Fvh c:collection1 
oasc.CachingDirectoryFactory.get return new directory for 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/control-001/cores/collection1/data/index
   [junit4]   2> 676308 T5399 N:127.0.0.1:51908_d_tw%2Fvh c:collection1 
oasu.RandomMergePolicy.<init> RandomMergePolicy wrapping class 
org.apache.lucene.index.TieredMergePolicy: [TieredMergePolicy: 
maxMergeAtOnce=40, maxMergeAtOnceExplicit=23, maxMergedSegmentMB=32.9423828125, 
floorSegmentMB=2.05859375, forceMergeDeletesPctAllowed=13.087659575459174, 
segmentsPerTier=15.0, maxCFSSegmentSizeMB=8.796093022207999E12, 
noCFSRatio=0.41692349822860775
   [junit4]   2> 676326 T5399 N:127.0.0.1:51908_d_tw%2Fvh c:collection1 
oasc.SolrDeletionPolicy.onCommit SolrDeletionPolicy.onCommit: commits: num=1
   [junit4]   2>                
commit{dir=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 
A8FAE17676D1BE93-001/control-001/cores/collection1/data/index,segFN=segments_1,generation=1}
   [junit4]   2> 676326 T5399 N:127.0.0.1:51908_d_tw%2Fvh c:collection1 
oasc.SolrDeletionPolicy.updateCommits newest commit generation = 1
   [junit4]   2> 676340 T5399 N:127.0.0.1:51908_d_tw%2Fvh c:collection1 
oasup.UpdateRequestProcessorChain.init creating updateRequestProcessorChain 
"nodistrib"
   [junit4]   2> 676341 T5399 N:127.0.0.1:51908_d_tw%2Fvh c:collection1 
oasup.UpdateRequestProcessorChain.init creating updateRequestProcessorChain 
"dedupe"
   [junit4]   2> 676341 T5399 N:127.0.0.1:51908_d_tw%2Fvh c:collection1 
oasup.UpdateRequestProcessorChain.init inserting 
DistributedUpdateProcessorFactory into updateRequestProcessorChain "dedupe"
   [junit4]   2> 676341 T5399 N:127.0.0.1:51908_d_tw%2Fvh c:collection1 
oasup.UpdateRequestProcessorChain.init creating updateRequestProcessorChain 
"stored_sig"
   [junit4]   2> 676341 T5399 N:127.0.0.1:51908_d_tw%2Fvh c:collection1 
oasup.UpdateRequestProcessorChain.init inserting 
DistributedUpdateProcessorFactory into updateRequestProcessorChain "stored_sig"
   [junit4]   2> 676341 T5399 N:127.0.0.1:51908_d_tw%2Fvh c:collection1 
oasup.UpdateRequestProcessorChain.init creating updateRequestProcessorChain 
"distrib-dup-test-chain-explicit"
   [junit4]   2> 676341 T5399 N:127.0.0.1:51908_d_tw%2Fvh c:collection1 
oasup.UpdateRequestProcessorChain.init creating updateRequestProcessorChain 
"distrib-dup-test-chain-implicit"
   [junit4]   2> 676341 T5399 N:127.0.0.1:51908_d_tw%2Fvh c:collection1 
oasup.UpdateRequestProcessorChain.init inserting 
DistributedUpdateProcessorFactory into updateRequestProcessorChain 
"distrib-dup-test-chain-implicit"
   [junit4]   2> 676342 T5399 N:127.0.0.1:51908_d_tw%2Fvh c:collection1 
oasc.SolrCore.loadUpdateProcessorChains no updateRequestProcessorChain defined 
as default, creating implicit default
   [junit4]   2> 676343 T5399 N:127.0.0.1:51908_d_tw%2Fvh c:collection1 
oashl.XMLLoader.init xsltCacheLifetimeSeconds=60
   [junit4]   2> 676343 T5399 N:127.0.0.1:51908_d_tw%2Fvh c:collection1 
oashl.XMLLoader.init xsltCacheLifetimeSeconds=60
   [junit4]   2> 676344 T5399 N:127.0.0.1:51908_d_tw%2Fvh c:collection1 
oashl.XMLLoader.init xsltCacheLifetimeSeconds=60
   [junit4]   2> 676344 T5399 N:127.0.0.1:51908_d_tw%2Fvh c:collection1 
oashl.XMLLoader.init xsltCacheLifetimeSeconds=60
   [junit4]   2> 676347 T5399 N:127.0.0.1:51908_d_tw%2Fvh c:collection1 
oasc.RequestHandlers.initHandlersFromConfig Registered paths: 
/admin/mbeans,standard,/update/csv,/update/json/docs,/admin/luke,/admin/segments,/get,/admin/system,/replication,/admin/properties,/config,/schema,/admin/plugins,/admin/logging,/update/json,/admin/threads,/admin/ping,/update,/admin/file
   [junit4]   2> 676347 T5399 N:127.0.0.1:51908_d_tw%2Fvh c:collection1 
oasc.SolrCore.initStatsCache Using default statsCache cache: 
org.apache.solr.search.stats.LocalStatsCache
   [junit4]   2> 676348 T5399 N:127.0.0.1:51908_d_tw%2Fvh c:collection1 
oasu.UpdateHandler.<init> Using UpdateLog implementation: 
org.apache.solr.update.UpdateLog
   [junit4]   2> 676348 T5399 N:127.0.0.1:51908_d_tw%2Fvh c:collection1 
oasu.UpdateLog.init Initializing UpdateLog: dataDir= defaultSyncLevel=FLUSH 
numRecordsToKeep=100 maxNumLogsToKeep=10
   [junit4]   2> 676348 T5399 N:127.0.0.1:51908_d_tw%2Fvh c:collection1 
oasu.CommitTracker.<init> Hard AutoCommit: disabled
   [junit4]   2> 676348 T5399 N:127.0.0.1:51908_d_tw%2Fvh c:collection1 
oasu.CommitTracker.<init> Soft AutoCommit: disabled
   [junit4]   2> 676349 T5399 N:127.0.0.1:51908_d_tw%2Fvh c:collection1 
oasu.RandomMergePolicy.<init> RandomMergePolicy wrapping class 
org.apache.lucene.index.LogDocMergePolicy: [LogDocMergePolicy: 
minMergeSize=1000, mergeFactor=50, maxMergeSize=9223372036854775807, 
maxMergeSizeForForcedMerge=9223372036854775807, calibrateSizeByDeletes=true, 
maxMergeDocs=2147483647, maxCFSSegmentSizeMB=8.796093022207999E12, 
noCFSRatio=0.5598927705693212]
   [junit4]   2> 676349 T5399 N:127.0.0.1:51908_d_tw%2Fvh c:collection1 
oasc.SolrDeletionPolicy.onInit SolrDeletionPolicy.onInit: commits: num=1
   [junit4]   2>                
commit{dir=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 
A8FAE17676D1BE93-001/control-001/cores/collection1/data/index,segFN=segments_1,generation=1}
   [junit4]   2> 676350 T5399 N:127.0.0.1:51908_d_tw%2Fvh c:collection1 
oasc.SolrDeletionPolicy.updateCommits newest commit generation = 1
   [junit4]   2> 676350 T5399 N:127.0.0.1:51908_d_tw%2Fvh c:collection1 
oass.SolrIndexSearcher.<init> Opening Searcher@326019f2[collection1] main
   [junit4]   2> 676351 T5399 N:127.0.0.1:51908_d_tw%2Fvh c:collection1 
oasr.ManagedResourceStorage.newStorageIO Setting up ZooKeeper-based storage for 
the RestManager with znodeBase: /configs/conf1
   [junit4]   2> 676351 T5399 N:127.0.0.1:51908_d_tw%2Fvh c:collection1 
oasr.ManagedResourceStorage$ZooKeeperStorageIO.configure Configured 
ZooKeeperStorageIO with znodeBase: /configs/conf1
   [junit4]   2> 676351 T5399 N:127.0.0.1:51908_d_tw%2Fvh c:collection1 
oasr.RestManager.init Initializing RestManager with initArgs: {}
   [junit4]   2> 676351 T5399 N:127.0.0.1:51908_d_tw%2Fvh c:collection1 
oasr.ManagedResourceStorage.load Reading _rest_managed.json using 
ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 676351 T5399 N:127.0.0.1:51908_d_tw%2Fvh c:collection1 
oasr.ManagedResourceStorage$ZooKeeperStorageIO.openInputStream No data found 
for znode /configs/conf1/_rest_managed.json
   [junit4]   2> 676352 T5399 N:127.0.0.1:51908_d_tw%2Fvh c:collection1 
oasr.ManagedResourceStorage.load Loaded null at path _rest_managed.json using 
ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 676352 T5399 N:127.0.0.1:51908_d_tw%2Fvh c:collection1 
oasr.RestManager.init Initializing 0 registered ManagedResources
   [junit4]   2> 676352 T5399 N:127.0.0.1:51908_d_tw%2Fvh c:collection1 
oash.ReplicationHandler.inform Commits will be reserved for  10000
   [junit4]   2> 676352 T5399 N:127.0.0.1:51908_d_tw%2Fvh c:collection1 
oasc.CoreContainer.registerCore registering core: collection1
   [junit4]   2> 676353 T5400 N:127.0.0.1:51908_d_tw%2Fvh c:collection1 
oasc.SolrCore.registerSearcher [collection1] Registered new searcher 
Searcher@326019f2[collection1] 
main{ExitableDirectoryReader(UninvertingDirectoryReader())}
   [junit4]   2> 676353 T5403 N:127.0.0.1:51908_d_tw%2Fvh C:control_collection 
S:shard1 c:collection1 oasc.ZkController.register Register replica - 
core:collection1 address:http://127.0.0.1:51908/d_tw/vh 
collection:control_collection shard:shard1
   [junit4]   2> 676353 T5369 N:127.0.0.1:51908_d_tw%2Fvh 
oass.SolrDispatchFilter.init 
user.dir=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0
   [junit4]   2> 676353 T5369 N:127.0.0.1:51908_d_tw%2Fvh 
oass.SolrDispatchFilter.init SolrDispatchFilter.init() done
   [junit4]   2> 676355 T5403 N:127.0.0.1:51908_d_tw%2Fvh C:control_collection 
S:shard1 c:collection1 oasc.ShardLeaderElectionContext.runLeaderProcess Running 
the leader process for shard shard1
   [junit4]   2> 676356 T5395 N:127.0.0.1:51908_d_tw%2Fvh 
oasc.DistributedQueue$LatchWatcher.process NodeChildrenChanged fired on path 
/overseer/queue state SyncConnected
   [junit4]   2> 676357 T5403 N:127.0.0.1:51908_d_tw%2Fvh C:control_collection 
S:shard1 c:collection1 oasc.ShardLeaderElectionContext.waitForReplicasToComeUp 
Enough replicas found to continue.
   [junit4]   2> 676357 T5403 N:127.0.0.1:51908_d_tw%2Fvh C:control_collection 
S:shard1 c:collection1 oasc.ShardLeaderElectionContext.runLeaderProcess I may 
be the new leader - try and sync
   [junit4]   2> 676357 T5396 N:127.0.0.1:51908_d_tw%2Fvh 
oasc.Overseer$ClusterStateUpdater.run processMessage: queueSize: 1, message = {
   [junit4]   2>          "operation":"leader",
   [junit4]   2>          "shard":"shard1",
   [junit4]   2>          "collection":"control_collection"} current state 
version: 1
   [junit4]   2> ASYNC  NEW_CORE C1176 name=collection1 
org.apache.solr.core.SolrCore@5e35a2c7 
url=http://127.0.0.1:51908/d_tw/vh/collection1 node=127.0.0.1:51908_d_tw%2Fvh 
C1176_STATE=coll:control_collection core:collection1 props:{core=collection1, 
base_url=http://127.0.0.1:51908/d_tw/vh, node_name=127.0.0.1:51908_d_tw%2Fvh, 
state=down}
   [junit4]   2> 676357 T5403 N:127.0.0.1:51908_d_tw%2Fvh C:control_collection 
S:shard1 c:collection1 C1176 oasc.SyncStrategy.sync Sync replicas to 
http://127.0.0.1:51908/d_tw/vh/collection1/
   [junit4]   2> 676357 T5403 N:127.0.0.1:51908_d_tw%2Fvh C:control_collection 
S:shard1 c:collection1 C1176 oasc.SyncStrategy.syncReplicas Sync Success - now 
sync replicas to me
   [junit4]   2> 676357 T5403 N:127.0.0.1:51908_d_tw%2Fvh C:control_collection 
S:shard1 c:collection1 C1176 oasc.SyncStrategy.syncToMe 
http://127.0.0.1:51908/d_tw/vh/collection1/ has no replicas
   [junit4]   2> 676357 T5369 oasc.ChaosMonkey.monkeyLog monkey: init - expire 
sessions:false cause connection loss:false
   [junit4]   2> 676357 T5403 N:127.0.0.1:51908_d_tw%2Fvh C:control_collection 
S:shard1 c:collection1 oasc.ShardLeaderElectionContext.runLeaderProcess I am 
the new leader: http://127.0.0.1:51908/d_tw/vh/collection1/ shard1
   [junit4]   2> 676359 T5395 N:127.0.0.1:51908_d_tw%2Fvh 
oasc.DistributedQueue$LatchWatcher.process NodeChildrenChanged fired on path 
/overseer/queue state SyncConnected
   [junit4]   2> 676360 T5396 N:127.0.0.1:51908_d_tw%2Fvh 
oasc.Overseer$ClusterStateUpdater.run processMessage: queueSize: 1, message = {
   [junit4]   2>          "operation":"leader",
   [junit4]   2>          "shard":"shard1",
   [junit4]   2>          "collection":"control_collection",
   [junit4]   2>          "base_url":"http://127.0.0.1:51908/d_tw/vh";,
   [junit4]   2>          "core":"collection1",
   [junit4]   2>          "state":"active"} current state version: 1
   [junit4]   2> 676399 T5369 oas.SolrTestCaseJ4.writeCoreProperties Writing 
core.properties file to 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/shard-1-001/cores/collection1
   [junit4]   2> 676400 T5369 oasc.AbstractFullDistribZkTestBase.createJettys 
create jetty 1 in directory 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/shard-1-001
   [junit4]   2> 676400 T5369 oejs.Server.doStart jetty-8.1.10.v20130312
   [junit4]   2> 676401 T5369 oejs.AbstractConnector.doStart Started 
[email protected]:37342
   [junit4]   2> 676402 T5369 oascse.JettySolrRunner$1.lifeCycleStarted Jetty 
properties: 
{solr.data.dir=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/tempDir-001/jetty1, solrconfig=solrconfig.xml, 
hostContext=/d_tw/vh, hostPort=37342, 
coreRootDirectory=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/shard-1-001/cores}
   [junit4]   2> 676402 T5369 oass.SolrDispatchFilter.init 
SolrDispatchFilter.init()sun.misc.Launcher$AppClassLoader@3b764bce
   [junit4]   2> 676402 T5369 oasc.SolrResourceLoader.<init> new 
SolrResourceLoader for directory: 
'/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/shard-1-001/'
   [junit4]   2> 676415 T5369 oasc.SolrXmlConfig.fromFile Loading container 
configuration from 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/shard-1-001/solr.xml
   [junit4]   2> 676418 T5369 oasc.CorePropertiesLocator.<init> Config-defined 
core root directory: 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/shard-1-001/cores
   [junit4]   2> 676418 T5369 oasc.CoreContainer.<init> New CoreContainer 
108202147
   [junit4]   2> 676418 T5369 oasc.CoreContainer.load Loading cores into 
CoreContainer 
[instanceDir=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/shard-1-001/]
   [junit4]   2> 676419 T5369 oasc.CoreContainer.load loading shared library: 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/shard-1-001/lib
   [junit4]   2> 676419 T5369 oasc.SolrResourceLoader.addToClassLoader WARN 
Can't find (or read) directory to add to classloader: lib (resolved as: 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/shard-1-001/lib).
   [junit4]   2> 676422 T5369 oashc.HttpShardHandlerFactory.init created with 
socketTimeout : 90000,urlScheme : ,connTimeout : 15000,maxConnectionsPerHost : 
20,maxConnections : 10000,corePoolSize : 0,maximumPoolSize : 
2147483647,maxThreadIdleTime : 5,sizeOfQueue : -1,fairnessPolicy : 
false,useRetries : false,
   [junit4]   2> 676423 T5369 oasu.UpdateShardHandler.<init> Creating 
UpdateShardHandler HTTP client with params: 
socketTimeout=340000&connTimeout=45000&retry=true
   [junit4]   2> 676424 T5369 oasl.LogWatcher.createWatcher SLF4J impl is 
org.slf4j.impl.Log4jLoggerFactory
   [junit4]   2> 676424 T5369 oasl.LogWatcher.newRegisteredLogWatcher 
Registering Log Listener [Log4j (org.slf4j.impl.Log4jLoggerFactory)]
   [junit4]   2> 676424 T5369 oasc.CoreContainer.load Node Name: 127.0.0.1
   [junit4]   2> 676424 T5369 oasc.ZkContainer.initZooKeeper Zookeeper 
client=127.0.0.1:41610/solr
   [junit4]   2> 676424 T5369 oasc.ZkController.checkChrootPath zkHost includes 
chroot
   [junit4]   2> 676510 T5403 N:127.0.0.1:51908_d_tw%2Fvh C:control_collection 
S:shard1 c:collection1 oasc.ZkController.register We are 
http://127.0.0.1:51908/d_tw/vh/collection1/ and leader is 
http://127.0.0.1:51908/d_tw/vh/collection1/
   [junit4]   2> 676510 T5403 N:127.0.0.1:51908_d_tw%2Fvh C:control_collection 
S:shard1 c:collection1 oasc.ZkController.register No LogReplay needed for 
core=collection1 baseURL=http://127.0.0.1:51908/d_tw/vh
   [junit4]   2> 676510 T5403 N:127.0.0.1:51908_d_tw%2Fvh C:control_collection 
S:shard1 c:collection1 oasc.ZkController.checkRecovery I am the leader, no 
recovery necessary
   [junit4]   2> 676510 T5403 N:127.0.0.1:51908_d_tw%2Fvh C:control_collection 
S:shard1 c:collection1 oasc.ZkController.publish publishing core=collection1 
state=active collection=control_collection
   [junit4]   2> 676510 T5403 N:127.0.0.1:51908_d_tw%2Fvh C:control_collection 
S:shard1 c:collection1 oasc.ZkController.publish numShards not found on 
descriptor - reading it from system property
   [junit4]   2> 676511 T5395 N:127.0.0.1:51908_d_tw%2Fvh 
oasc.DistributedQueue$LatchWatcher.process NodeChildrenChanged fired on path 
/overseer/queue state SyncConnected
   [junit4]   2> 676511 T5396 N:127.0.0.1:51908_d_tw%2Fvh 
oasc.Overseer$ClusterStateUpdater.run processMessage: queueSize: 1, message = {
   [junit4]   2>          "core":"collection1",
   [junit4]   2>          "core_node_name":"core_node1",
   [junit4]   2>          "roles":null,
   [junit4]   2>          "base_url":"http://127.0.0.1:51908/d_tw/vh";,
   [junit4]   2>          "node_name":"127.0.0.1:51908_d_tw%2Fvh",
   [junit4]   2>          "numShards":"1",
   [junit4]   2>          "state":"active",
   [junit4]   2>          "shard":"shard1",
   [junit4]   2>          "collection":"control_collection",
   [junit4]   2>          "operation":"state"} current state version: 2
   [junit4]   2> 676512 T5396 N:127.0.0.1:51908_d_tw%2Fvh 
oasco.ReplicaMutator.updateState Update state numShards=1 message={
   [junit4]   2>          "core":"collection1",
   [junit4]   2>          "core_node_name":"core_node1",
   [junit4]   2>          "roles":null,
   [junit4]   2>          "base_url":"http://127.0.0.1:51908/d_tw/vh";,
   [junit4]   2>          "node_name":"127.0.0.1:51908_d_tw%2Fvh",
   [junit4]   2>          "numShards":"1",
   [junit4]   2>          "state":"active",
   [junit4]   2>          "shard":"shard1",
   [junit4]   2>          "collection":"control_collection",
   [junit4]   2>          "operation":"state"}
   [junit4]   2> 677433 T5369 N:127.0.0.1:37342_d_tw%2Fvh 
oasc.ZkController.createEphemeralLiveNode Register node as live in 
ZooKeeper:/live_nodes/127.0.0.1:37342_d_tw%2Fvh
   [junit4]   2> 677434 T5369 N:127.0.0.1:37342_d_tw%2Fvh oasc.Overseer.close 
Overseer (id=null) closing
   [junit4]   2> 677435 T5369 N:127.0.0.1:37342_d_tw%2Fvh 
oasc.CorePropertiesLocator.discover Looking for core definitions underneath 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/shard-1-001/cores
   [junit4]   2> 677435 T5369 N:127.0.0.1:37342_d_tw%2Fvh 
oasc.CoreDescriptor.<init> CORE DESCRIPTOR: {name=collection1, 
config=solrconfig.xml, transient=false, schema=schema.xml, loadOnStartup=true, 
instanceDir=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/shard-1-001/cores/collection1, collection=collection1, 
absoluteInstDir=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/shard-1-001/cores/collection1/, coreNodeName=, 
dataDir=data/, shard=}
   [junit4]   2> 677435 T5369 N:127.0.0.1:37342_d_tw%2Fvh 
oasc.CorePropertiesLocator.discoverUnder Found core collection1 in 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/shard-1-001/cores/collection1/
   [junit4]   2> 677436 T5369 N:127.0.0.1:37342_d_tw%2Fvh 
oasc.CorePropertiesLocator.discover Found 1 core definitions
   [junit4]   2> 677436 T5422 N:127.0.0.1:37342_d_tw%2Fvh C:collection1 
c:collection1 oasc.ZkController.publish publishing core=collection1 state=down 
collection=collection1
   [junit4]   2> 677436 T5422 N:127.0.0.1:37342_d_tw%2Fvh C:collection1 
c:collection1 oasc.ZkController.publish numShards not found on descriptor - 
reading it from system property
   [junit4]   2> 677436 T5422 N:127.0.0.1:37342_d_tw%2Fvh 
oasc.ZkController.waitForCoreNodeName look for our core node name
   [junit4]   2> 677437 T5395 N:127.0.0.1:51908_d_tw%2Fvh 
oasc.DistributedQueue$LatchWatcher.process NodeChildrenChanged fired on path 
/overseer/queue state SyncConnected
   [junit4]   2> 677437 T5396 N:127.0.0.1:51908_d_tw%2Fvh 
oasc.Overseer$ClusterStateUpdater.run processMessage: queueSize: 1, message = {
   [junit4]   2>          "core":"collection1",
   [junit4]   2>          "roles":null,
   [junit4]   2>          "base_url":"http://127.0.0.1:37342/d_tw/vh";,
   [junit4]   2>          "node_name":"127.0.0.1:37342_d_tw%2Fvh",
   [junit4]   2>          "numShards":"1",
   [junit4]   2>          "state":"down",
   [junit4]   2>          "shard":null,
   [junit4]   2>          "collection":"collection1",
   [junit4]   2>          "operation":"state"} current state version: 3
   [junit4]   2> 677437 T5396 N:127.0.0.1:51908_d_tw%2Fvh 
oasco.ReplicaMutator.updateState Update state numShards=1 message={
   [junit4]   2>          "core":"collection1",
   [junit4]   2>          "roles":null,
   [junit4]   2>          "base_url":"http://127.0.0.1:37342/d_tw/vh";,
   [junit4]   2>          "node_name":"127.0.0.1:37342_d_tw%2Fvh",
   [junit4]   2>          "numShards":"1",
   [junit4]   2>          "state":"down",
   [junit4]   2>          "shard":null,
   [junit4]   2>          "collection":"collection1",
   [junit4]   2>          "operation":"state"}
   [junit4]   2> 677437 T5396 N:127.0.0.1:51908_d_tw%2Fvh 
oasco.ClusterStateMutator.createCollection building a new cName: collection1
   [junit4]   2> 677438 T5396 N:127.0.0.1:51908_d_tw%2Fvh 
oasco.ReplicaMutator.updateState Assigning new node to shard shard=shard1
   [junit4]   2> 678437 T5422 N:127.0.0.1:37342_d_tw%2Fvh 
oasc.ZkController.waitForShardId waiting to find shard id in clusterstate for 
collection1
   [junit4]   2> 678437 T5422 N:127.0.0.1:37342_d_tw%2Fvh 
oasc.ZkController.createCollectionZkNode Check for collection zkNode:collection1
   [junit4]   2> 678437 T5422 N:127.0.0.1:37342_d_tw%2Fvh 
oasc.ZkController.createCollectionZkNode Collection zkNode exists
   [junit4]   2> 678438 T5422 N:127.0.0.1:37342_d_tw%2Fvh 
oasc.SolrResourceLoader.<init> new SolrResourceLoader for directory: 
'/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/shard-1-001/cores/collection1/'
   [junit4]   2> 678446 T5422 N:127.0.0.1:37342_d_tw%2Fvh 
oasc.ZkController.watchZKConfDir watch zkdir /configs/conf1
   [junit4]   2> 678447 T5422 N:127.0.0.1:37342_d_tw%2Fvh oasc.Config.<init> 
loaded config solrconfig.xml with version 0 
   [junit4]   2> 678450 T5422 N:127.0.0.1:37342_d_tw%2Fvh 
oasc.SolrConfig.refreshRequestParams current version of requestparams : -1
   [junit4]   2> 678452 T5422 N:127.0.0.1:37342_d_tw%2Fvh 
oasc.SolrConfig.<init> Using Lucene MatchVersion: 5.2.0
   [junit4]   2> 678459 T5422 N:127.0.0.1:37342_d_tw%2Fvh 
oasc.SolrConfig.<init> Loaded SolrConfig: solrconfig.xml
   [junit4]   2> 678459 T5422 N:127.0.0.1:37342_d_tw%2Fvh 
oass.IndexSchema.readSchema Reading Solr Schema from /configs/conf1/schema.xml
   [junit4]   2> 678462 T5422 N:127.0.0.1:37342_d_tw%2Fvh 
oass.IndexSchema.readSchema [collection1] Schema name=test
   [junit4]   2> 678522 T5422 N:127.0.0.1:37342_d_tw%2Fvh 
oass.IndexSchema.readSchema default search field in schema is text
   [junit4]   2> 678523 T5422 N:127.0.0.1:37342_d_tw%2Fvh 
oass.IndexSchema.readSchema unique key field: id
   [junit4]   2> 678524 T5422 N:127.0.0.1:37342_d_tw%2Fvh 
oass.FileExchangeRateProvider.reload Reloading exchange rates from file 
currency.xml
   [junit4]   2> 678525 T5422 N:127.0.0.1:37342_d_tw%2Fvh 
oass.FileExchangeRateProvider.reload Reloading exchange rates from file 
currency.xml
   [junit4]   2> 678530 T5422 N:127.0.0.1:37342_d_tw%2Fvh 
oasc.CoreContainer.create Creating SolrCore 'collection1' using configuration 
from collection collection1
   [junit4]   2> 678531 T5422 N:127.0.0.1:37342_d_tw%2Fvh c:collection1 
oasc.SolrCore.initDirectoryFactory solr.StandardDirectoryFactory
   [junit4]   2> 678531 T5422 N:127.0.0.1:37342_d_tw%2Fvh c:collection1 
oasc.SolrCore.<init> [collection1] Opening new SolrCore at 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/shard-1-001/cores/collection1/, 
dataDir=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/shard-1-001/cores/collection1/data/
   [junit4]   2> 678531 T5422 N:127.0.0.1:37342_d_tw%2Fvh c:collection1 
oasc.JmxMonitoredMap.<init> JMX monitoring is enabled. Adding Solr mbeans to 
JMX Server: com.sun.jmx.mbeanserver.JmxMBeanServer@3425a966
   [junit4]   2> 678532 T5422 N:127.0.0.1:37342_d_tw%2Fvh c:collection1 
oasc.CachingDirectoryFactory.get return new directory for 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/shard-1-001/cores/collection1/data
   [junit4]   2> 678532 T5422 N:127.0.0.1:37342_d_tw%2Fvh c:collection1 
oasc.SolrCore.getNewIndexDir New index directory detected: old=null 
new=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/shard-1-001/cores/collection1/data/index/
   [junit4]   2> 678532 T5422 N:127.0.0.1:37342_d_tw%2Fvh c:collection1 
oasc.SolrCore.initIndex WARN [collection1] Solr index directory 
'/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/shard-1-001/cores/collection1/data/index' doesn't exist. 
Creating new index...
   [junit4]   2> 678532 T5422 N:127.0.0.1:37342_d_tw%2Fvh c:collection1 
oasc.CachingDirectoryFactory.get return new directory for 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/shard-1-001/cores/collection1/data/index
   [junit4]   2> 678533 T5422 N:127.0.0.1:37342_d_tw%2Fvh c:collection1 
oasu.RandomMergePolicy.<init> RandomMergePolicy wrapping class 
org.apache.lucene.index.TieredMergePolicy: [TieredMergePolicy: 
maxMergeAtOnce=40, maxMergeAtOnceExplicit=23, maxMergedSegmentMB=32.9423828125, 
floorSegmentMB=2.05859375, forceMergeDeletesPctAllowed=13.087659575459174, 
segmentsPerTier=15.0, maxCFSSegmentSizeMB=8.796093022207999E12, 
noCFSRatio=0.41692349822860775
   [junit4]   2> 678565 T5422 N:127.0.0.1:37342_d_tw%2Fvh c:collection1 
oasc.SolrDeletionPolicy.onCommit SolrDeletionPolicy.onCommit: commits: num=1
   [junit4]   2>                
commit{dir=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 
A8FAE17676D1BE93-001/shard-1-001/cores/collection1/data/index,segFN=segments_1,generation=1}
   [junit4]   2> 678565 T5422 N:127.0.0.1:37342_d_tw%2Fvh c:collection1 
oasc.SolrDeletionPolicy.updateCommits newest commit generation = 1
   [junit4]   2> 678569 T5422 N:127.0.0.1:37342_d_tw%2Fvh c:collection1 
oasup.UpdateRequestProcessorChain.init creating updateRequestProcessorChain 
"nodistrib"
   [junit4]   2> 678570 T5422 N:127.0.0.1:37342_d_tw%2Fvh c:collection1 
oasup.UpdateRequestProcessorChain.init creating updateRequestProcessorChain 
"dedupe"
   [junit4]   2> 678570 T5422 N:127.0.0.1:37342_d_tw%2Fvh c:collection1 
oasup.UpdateRequestProcessorChain.init inserting 
DistributedUpdateProcessorFactory into updateRequestProcessorChain "dedupe"
   [junit4]   2> 678570 T5422 N:127.0.0.1:37342_d_tw%2Fvh c:collection1 
oasup.UpdateRequestProcessorChain.init creating updateRequestProcessorChain 
"stored_sig"
   [junit4]   2> 678570 T5422 N:127.0.0.1:37342_d_tw%2Fvh c:collection1 
oasup.UpdateRequestProcessorChain.init inserting 
DistributedUpdateProcessorFactory into updateRequestProcessorChain "stored_sig"
   [junit4]   2> 678570 T5422 N:127.0.0.1:37342_d_tw%2Fvh c:collection1 
oasup.UpdateRequestProcessorChain.init creating updateRequestProcessorChain 
"distrib-dup-test-chain-explicit"
   [junit4]   2> 678570 T5422 N:127.0.0.1:37342_d_tw%2Fvh c:collection1 
oasup.UpdateRequestProcessorChain.init creating updateRequestProcessorChain 
"distrib-dup-test-chain-implicit"
   [junit4]   2> 678571 T5422 N:127.0.0.1:37342_d_tw%2Fvh c:collection1 
oasup.UpdateRequestProcessorChain.init inserting 
DistributedUpdateProcessorFactory into updateRequestProcessorChain 
"distrib-dup-test-chain-implicit"
   [junit4]   2> 678571 T5422 N:127.0.0.1:37342_d_tw%2Fvh c:collection1 
oasc.SolrCore.loadUpdateProcessorChains no updateRequestProcessorChain defined 
as default, creating implicit default
   [junit4]   2> 678572 T5422 N:127.0.0.1:37342_d_tw%2Fvh c:collection1 
oashl.XMLLoader.init xsltCacheLifetimeSeconds=60
   [junit4]   2> 678573 T5422 N:127.0.0.1:37342_d_tw%2Fvh c:collection1 
oashl.XMLLoader.init xsltCacheLifetimeSeconds=60
   [junit4]   2> 678573 T5422 N:127.0.0.1:37342_d_tw%2Fvh c:collection1 
oashl.XMLLoader.init xsltCacheLifetimeSeconds=60
   [junit4]   2> 678574 T5422 N:127.0.0.1:37342_d_tw%2Fvh c:collection1 
oashl.XMLLoader.init xsltCacheLifetimeSeconds=60
   [junit4]   2> 678589 T5422 N:127.0.0.1:37342_d_tw%2Fvh c:collection1 
oasc.RequestHandlers.initHandlersFromConfig Registered paths: 
/admin/mbeans,standard,/update/csv,/update/json/docs,/admin/luke,/admin/segments,/get,/admin/system,/replication,/admin/properties,/config,/schema,/admin/plugins,/admin/logging,/update/json,/admin/threads,/admin/ping,/update,/admin/file
   [junit4]   2> 678590 T5422 N:127.0.0.1:37342_d_tw%2Fvh c:collection1 
oasc.SolrCore.initStatsCache Using default statsCache cache: 
org.apache.solr.search.stats.LocalStatsCache
   [junit4]   2> 678590 T5422 N:127.0.0.1:37342_d_tw%2Fvh c:collection1 
oasu.UpdateHandler.<init> Using UpdateLog implementation: 
org.apache.solr.update.UpdateLog
   [junit4]   2> 678590 T5422 N:127.0.0.1:37342_d_tw%2Fvh c:collection1 
oasu.UpdateLog.init Initializing UpdateLog: dataDir= defaultSyncLevel=FLUSH 
numRecordsToKeep=100 maxNumLogsToKeep=10
   [junit4]   2> 678591 T5422 N:127.0.0.1:37342_d_tw%2Fvh c:collection1 
oasu.CommitTracker.<init> Hard AutoCommit: disabled
   [junit4]   2> 678591 T5422 N:127.0.0.1:37342_d_tw%2Fvh c:collection1 
oasu.CommitTracker.<init> Soft AutoCommit: disabled
   [junit4]   2> 678592 T5422 N:127.0.0.1:37342_d_tw%2Fvh c:collection1 
oasu.RandomMergePolicy.<init> RandomMergePolicy wrapping class 
org.apache.lucene.index.LogDocMergePolicy: [LogDocMergePolicy: 
minMergeSize=1000, mergeFactor=50, maxMergeSize=9223372036854775807, 
maxMergeSizeForForcedMerge=9223372036854775807, calibrateSizeByDeletes=true, 
maxMergeDocs=2147483647, maxCFSSegmentSizeMB=8.796093022207999E12, 
noCFSRatio=0.5598927705693212]
   [junit4]   2> 678593 T5422 N:127.0.0.1:37342_d_tw%2Fvh c:collection1 
oasc.SolrDeletionPolicy.onInit SolrDeletionPolicy.onInit: commits: num=1
   [junit4]   2>                
commit{dir=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 
A8FAE17676D1BE93-001/shard-1-001/cores/collection1/data/index,segFN=segments_1,generation=1}
   [junit4]   2> 678593 T5422 N:127.0.0.1:37342_d_tw%2Fvh c:collection1 
oasc.SolrDeletionPolicy.updateCommits newest commit generation = 1
   [junit4]   2> 678593 T5422 N:127.0.0.1:37342_d_tw%2Fvh c:collection1 
oass.SolrIndexSearcher.<init> Opening Searcher@3b68d484[collection1] main
   [junit4]   2> 678594 T5422 N:127.0.0.1:37342_d_tw%2Fvh c:collection1 
oasr.ManagedResourceStorage.newStorageIO Setting up ZooKeeper-based storage for 
the RestManager with znodeBase: /configs/conf1
   [junit4]   2> 678594 T5422 N:127.0.0.1:37342_d_tw%2Fvh c:collection1 
oasr.ManagedResourceStorage$ZooKeeperStorageIO.configure Configured 
ZooKeeperStorageIO with znodeBase: /configs/conf1
   [junit4]   2> 678594 T5422 N:127.0.0.1:37342_d_tw%2Fvh c:collection1 
oasr.RestManager.init Initializing RestManager with initArgs: {}
   [junit4]   2> 678594 T5422 N:127.0.0.1:37342_d_tw%2Fvh c:collection1 
oasr.ManagedResourceStorage.load Reading _rest_managed.json using 
ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 678594 T5422 N:127.0.0.1:37342_d_tw%2Fvh c:collection1 
oasr.ManagedResourceStorage$ZooKeeperStorageIO.openInputStream No data found 
for znode /configs/conf1/_rest_managed.json
   [junit4]   2> 678594 T5422 N:127.0.0.1:37342_d_tw%2Fvh c:collection1 
oasr.ManagedResourceStorage.load Loaded null at path _rest_managed.json using 
ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 678595 T5422 N:127.0.0.1:37342_d_tw%2Fvh c:collection1 
oasr.RestManager.init Initializing 0 registered ManagedResources
   [junit4]   2> 678595 T5422 N:127.0.0.1:37342_d_tw%2Fvh c:collection1 
oash.ReplicationHandler.inform Commits will be reserved for  10000
   [junit4]   2> 678595 T5423 N:127.0.0.1:37342_d_tw%2Fvh c:collection1 
oasc.SolrCore.registerSearcher [collection1] Registered new searcher 
Searcher@3b68d484[collection1] 
main{ExitableDirectoryReader(UninvertingDirectoryReader())}
   [junit4]   2> 678595 T5422 N:127.0.0.1:37342_d_tw%2Fvh c:collection1 
oasc.CoreContainer.registerCore registering core: collection1
   [junit4]   2> 678596 T5426 N:127.0.0.1:37342_d_tw%2Fvh C:collection1 
S:shard1 c:collection1 oasc.ZkController.register Register replica - 
core:collection1 address:http://127.0.0.1:37342/d_tw/vh collection:collection1 
shard:shard1
   [junit4]   2> 678596 T5369 N:127.0.0.1:37342_d_tw%2Fvh 
oass.SolrDispatchFilter.init 
user.dir=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0
   [junit4]   2> 678596 T5369 N:127.0.0.1:37342_d_tw%2Fvh 
oass.SolrDispatchFilter.init SolrDispatchFilter.init() done
   [junit4]   2> 678598 T5426 N:127.0.0.1:37342_d_tw%2Fvh C:collection1 
S:shard1 c:collection1 oasc.ShardLeaderElectionContext.runLeaderProcess Running 
the leader process for shard shard1
   [junit4]   2> 678598 T5395 N:127.0.0.1:51908_d_tw%2Fvh 
oasc.DistributedQueue$LatchWatcher.process NodeChildrenChanged fired on path 
/overseer/queue state SyncConnected
   [junit4]   2> 678598 T5426 N:127.0.0.1:37342_d_tw%2Fvh C:collection1 
S:shard1 c:collection1 oasc.ShardLeaderElectionContext.waitForReplicasToComeUp 
Enough replicas found to continue.
   [junit4]   2> 678599 T5426 N:127.0.0.1:37342_d_tw%2Fvh C:collection1 
S:shard1 c:collection1 oasc.ShardLeaderElectionContext.runLeaderProcess I may 
be the new leader - try and sync
   [junit4]   2> 678599 T5396 N:127.0.0.1:51908_d_tw%2Fvh 
oasc.Overseer$ClusterStateUpdater.run processMessage: queueSize: 1, message = {
   [junit4]   2>          "operation":"leader",
   [junit4]   2>          "shard":"shard1",
   [junit4]   2>          "collection":"collection1"} current state version: 4
   [junit4]   2> ASYNC  NEW_CORE C1177 name=collection1 
org.apache.solr.core.SolrCore@2bf1accf 
url=http://127.0.0.1:37342/d_tw/vh/collection1 node=127.0.0.1:37342_d_tw%2Fvh 
C1177_STATE=coll:collection1 core:collection1 props:{core=collection1, 
base_url=http://127.0.0.1:37342/d_tw/vh, node_name=127.0.0.1:37342_d_tw%2Fvh, 
state=down}
   [junit4]   2> 678599 T5426 N:127.0.0.1:37342_d_tw%2Fvh C:collection1 
S:shard1 c:collection1 C1177 oasc.SyncStrategy.sync Sync replicas to 
http://127.0.0.1:37342/d_tw/vh/collection1/
   [junit4]   2> 678599 T5426 N:127.0.0.1:37342_d_tw%2Fvh C:collection1 
S:shard1 c:collection1 C1177 oasc.SyncStrategy.syncReplicas Sync Success - now 
sync replicas to me
   [junit4]   2> 678599 T5426 N:127.0.0.1:37342_d_tw%2Fvh C:collection1 
S:shard1 c:collection1 C1177 oasc.SyncStrategy.syncToMe 
http://127.0.0.1:37342/d_tw/vh/collection1/ has no replicas
   [junit4]   2> 678599 T5426 N:127.0.0.1:37342_d_tw%2Fvh C:collection1 
S:shard1 c:collection1 oasc.ShardLeaderElectionContext.runLeaderProcess I am 
the new leader: http://127.0.0.1:37342/d_tw/vh/collection1/ shard1
   [junit4]   2> 678600 T5395 N:127.0.0.1:51908_d_tw%2Fvh 
oasc.DistributedQueue$LatchWatcher.process NodeChildrenChanged fired on path 
/overseer/queue state SyncConnected
   [junit4]   2> 678601 T5396 N:127.0.0.1:51908_d_tw%2Fvh 
oasc.Overseer$ClusterStateUpdater.run processMessage: queueSize: 1, message = {
   [junit4]   2>          "operation":"leader",
   [junit4]   2>          "shard":"shard1",
   [junit4]   2>          "collection":"collection1",
   [junit4]   2>          "base_url":"http://127.0.0.1:37342/d_tw/vh",
   [junit4]   2>          "core":"collection1",
   [junit4]   2>          "state":"active"} current state version: 4
   [junit4]   2> 678638 T5369 oas.SolrTestCaseJ4.writeCoreProperties Writing 
core.properties file to 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/shard-2-001/cores/collection1
   [junit4]   2> 678638 T5369 oasc.AbstractFullDistribZkTestBase.createJettys 
create jetty 2 in directory 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/shard-2-001
   [junit4]   2> 678639 T5369 oejs.Server.doStart jetty-8.1.10.v20130312
   [junit4]   2> 678640 T5369 oejs.AbstractConnector.doStart Started 
[email protected]:37743
   [junit4]   2> 678640 T5369 oascse.JettySolrRunner$1.lifeCycleStarted Jetty 
properties: 
{solr.data.dir=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/tempDir-001/jetty2, solrconfig=solrconfig.xml, 
hostContext=/d_tw/vh, hostPort=37743, 
coreRootDirectory=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/shard-2-001/cores}
   [junit4]   2> 678640 T5369 oass.SolrDispatchFilter.init 
SolrDispatchFilter.init()sun.misc.Launcher$AppClassLoader@3b764bce
   [junit4]   2> 678640 T5369 oasc.SolrResourceLoader.<init> new 
SolrResourceLoader for directory: 
'/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/shard-2-001/'
   [junit4]   2> 678653 T5369 oasc.SolrXmlConfig.fromFile Loading container 
configuration from 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/shard-2-001/solr.xml
   [junit4]   2> 678656 T5369 oasc.CorePropertiesLocator.<init> Config-defined 
core root directory: 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/shard-2-001/cores
   [junit4]   2> 678657 T5369 oasc.CoreContainer.<init> New CoreContainer 
279113930
   [junit4]   2> 678657 T5369 oasc.CoreContainer.load Loading cores into 
CoreContainer 
[instanceDir=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/shard-2-001/]
   [junit4]   2> 678657 T5369 oasc.CoreContainer.load loading shared library: 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/shard-2-001/lib
   [junit4]   2> 678657 T5369 oasc.SolrResourceLoader.addToClassLoader WARN 
Can't find (or read) directory to add to classloader: lib (resolved as: 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/shard-2-001/lib).
   [junit4]   2> 678661 T5369 oashc.HttpShardHandlerFactory.init created with 
socketTimeout : 90000,urlScheme : ,connTimeout : 15000,maxConnectionsPerHost : 
20,maxConnections : 10000,corePoolSize : 0,maximumPoolSize : 
2147483647,maxThreadIdleTime : 5,sizeOfQueue : -1,fairnessPolicy : 
false,useRetries : false,
   [junit4]   2> 678662 T5369 oasu.UpdateShardHandler.<init> Creating 
UpdateShardHandler HTTP client with params: 
socketTimeout=340000&connTimeout=45000&retry=true
   [junit4]   2> 678662 T5369 oasl.LogWatcher.createWatcher SLF4J impl is 
org.slf4j.impl.Log4jLoggerFactory
   [junit4]   2> 678662 T5369 oasl.LogWatcher.newRegisteredLogWatcher 
Registering Log Listener [Log4j (org.slf4j.impl.Log4jLoggerFactory)]
   [junit4]   2> 678662 T5369 oasc.CoreContainer.load Node Name: 127.0.0.1
   [junit4]   2> 678663 T5369 oasc.ZkContainer.initZooKeeper Zookeeper 
client=127.0.0.1:41610/solr
   [junit4]   2> 678663 T5369 oasc.ZkController.checkChrootPath zkHost includes 
chroot
   [junit4]   2> 678751 T5426 N:127.0.0.1:37342_d_tw%2Fvh C:collection1 
S:shard1 c:collection1 oasc.ZkController.register We are 
http://127.0.0.1:37342/d_tw/vh/collection1/ and leader is 
http://127.0.0.1:37342/d_tw/vh/collection1/
   [junit4]   2> 678751 T5426 N:127.0.0.1:37342_d_tw%2Fvh C:collection1 
S:shard1 c:collection1 oasc.ZkController.register No LogReplay needed for 
core=collection1 baseURL=http://127.0.0.1:37342/d_tw/vh
   [junit4]   2> 678751 T5426 N:127.0.0.1:37342_d_tw%2Fvh C:collection1 
S:shard1 c:collection1 oasc.ZkController.checkRecovery I am the leader, no 
recovery necessary
   [junit4]   2> 678751 T5426 N:127.0.0.1:37342_d_tw%2Fvh C:collection1 
S:shard1 c:collection1 oasc.ZkController.publish publishing core=collection1 
state=active collection=collection1
   [junit4]   2> 678751 T5426 N:127.0.0.1:37342_d_tw%2Fvh C:collection1 
S:shard1 c:collection1 oasc.ZkController.publish numShards not found on 
descriptor - reading it from system property
   [junit4]   2> 678752 T5395 N:127.0.0.1:51908_d_tw%2Fvh 
oasc.DistributedQueue$LatchWatcher.process NodeChildrenChanged fired on path 
/overseer/queue state SyncConnected
   [junit4]   2> 678753 T5396 N:127.0.0.1:51908_d_tw%2Fvh 
oasc.Overseer$ClusterStateUpdater.run processMessage: queueSize: 1, message = {
   [junit4]   2>          "core":"collection1",
   [junit4]   2>          "core_node_name":"core_node1",
   [junit4]   2>          "roles":null,
   [junit4]   2>          "base_url":"http://127.0.0.1:37342/d_tw/vh",
   [junit4]   2>          "node_name":"127.0.0.1:37342_d_tw%2Fvh",
   [junit4]   2>          "numShards":"1",
   [junit4]   2>          "state":"active",
   [junit4]   2>          "shard":"shard1",
   [junit4]   2>          "collection":"collection1",
   [junit4]   2>          "operation":"state"} current state version: 5
   [junit4]   2> 678753 T5396 N:127.0.0.1:51908_d_tw%2Fvh 
oasco.ReplicaMutator.updateState Update state numShards=1 message={
   [junit4]   2>          "core":"collection1",
   [junit4]   2>          "core_node_name":"core_node1",
   [junit4]   2>          "roles":null,
   [junit4]   2>          "base_url":"http://127.0.0.1:37342/d_tw/vh",
   [junit4]   2>          "node_name":"127.0.0.1:37342_d_tw%2Fvh",
   [junit4]   2>          "numShards":"1",
   [junit4]   2>          "state":"active",
   [junit4]   2>          "shard":"shard1",
   [junit4]   2>          "collection":"collection1",
   [junit4]   2>          "operation":"state"}
   [junit4]   2> 679671 T5369 N:127.0.0.1:37743_d_tw%2Fvh 
oasc.ZkController.createEphemeralLiveNode Register node as live in 
ZooKeeper:/live_nodes/127.0.0.1:37743_d_tw%2Fvh
   [junit4]   2> 679672 T5369 N:127.0.0.1:37743_d_tw%2Fvh oasc.Overseer.close 
Overseer (id=null) closing
   [junit4]   2> 679673 T5369 N:127.0.0.1:37743_d_tw%2Fvh 
oasc.CorePropertiesLocator.discover Looking for core definitions underneath 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/shard-2-001/cores
   [junit4]   2> 679673 T5369 N:127.0.0.1:37743_d_tw%2Fvh 
oasc.CoreDescriptor.<init> CORE DESCRIPTOR: {name=collection1, 
config=solrconfig.xml, transient=false, schema=schema.xml, loadOnStartup=true, 
instanceDir=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/shard-2-001/cores/collection1, collection=collection1, 
absoluteInstDir=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/shard-2-001/cores/collection1/, coreNodeName=, 
dataDir=data/, shard=}
   [junit4]   2> 679673 T5369 N:127.0.0.1:37743_d_tw%2Fvh 
oasc.CorePropertiesLocator.discoverUnder Found core collection1 in 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/shard-2-001/cores/collection1/
   [junit4]   2> 679674 T5369 N:127.0.0.1:37743_d_tw%2Fvh 
oasc.CorePropertiesLocator.discover Found 1 core definitions
   [junit4]   2> 679674 T5442 N:127.0.0.1:37743_d_tw%2Fvh C:collection1 
c:collection1 oasc.ZkController.publish publishing core=collection1 state=down 
collection=collection1
   [junit4]   2> 679674 T5442 N:127.0.0.1:37743_d_tw%2Fvh C:collection1 
c:collection1 oasc.ZkController.publish numShards not found on descriptor - 
reading it from system property
   [junit4]   2> 679675 T5442 N:127.0.0.1:37743_d_tw%2Fvh 
oasc.ZkController.waitForCoreNodeName look for our core node name
   [junit4]   2> 679675 T5395 N:127.0.0.1:51908_d_tw%2Fvh 
oasc.DistributedQueue$LatchWatcher.process NodeChildrenChanged fired on path 
/overseer/queue state SyncConnected
   [junit4]   2> 679675 T5396 N:127.0.0.1:51908_d_tw%2Fvh 
oasc.Overseer$ClusterStateUpdater.run processMessage: queueSize: 1, message = {
   [junit4]   2>          "core":"collection1",
   [junit4]   2>          "roles":null,
   [junit4]   2>          "base_url":"http://127.0.0.1:37743/d_tw/vh",
   [junit4]   2>          "node_name":"127.0.0.1:37743_d_tw%2Fvh",
   [junit4]   2>          "numShards":"1",
   [junit4]   2>          "state":"down",
   [junit4]   2>          "shard":null,
   [junit4]   2>          "collection":"collection1",
   [junit4]   2>          "operation":"state"} current state version: 6
   [junit4]   2> 679675 T5396 N:127.0.0.1:51908_d_tw%2Fvh 
oasco.ReplicaMutator.updateState Update state numShards=1 message={
   [junit4]   2>          "core":"collection1",
   [junit4]   2>          "roles":null,
   [junit4]   2>          "base_url":"http://127.0.0.1:37743/d_tw/vh",
   [junit4]   2>          "node_name":"127.0.0.1:37743_d_tw%2Fvh",
   [junit4]   2>          "numShards":"1",
   [junit4]   2>          "state":"down",
   [junit4]   2>          "shard":null,
   [junit4]   2>          "collection":"collection1",
   [junit4]   2>          "operation":"state"}
   [junit4]   2> 679675 T5396 N:127.0.0.1:51908_d_tw%2Fvh 
oasco.ReplicaMutator.updateState Collection already exists with numShards=1
   [junit4]   2> 679676 T5396 N:127.0.0.1:51908_d_tw%2Fvh 
oasco.ReplicaMutator.updateState Assigning new node to shard shard=shard1
   [junit4]   2> 680675 T5442 N:127.0.0.1:37743_d_tw%2Fvh 
oasc.ZkController.waitForShardId waiting to find shard id in clusterstate for 
collection1
   [junit4]   2> 680675 T5442 N:127.0.0.1:37743_d_tw%2Fvh 
oasc.ZkController.createCollectionZkNode Check for collection zkNode:collection1
   [junit4]   2> 680675 T5442 N:127.0.0.1:37743_d_tw%2Fvh 
oasc.ZkController.createCollectionZkNode Collection zkNode exists
   [junit4]   2> 680676 T5442 N:127.0.0.1:37743_d_tw%2Fvh 
oasc.SolrResourceLoader.<init> new SolrResourceLoader for directory: 
'/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/shard-2-001/cores/collection1/'
   [junit4]   2> 680685 T5442 N:127.0.0.1:37743_d_tw%2Fvh 
oasc.ZkController.watchZKConfDir watch zkdir /configs/conf1
   [junit4]   2> 680686 T5442 N:127.0.0.1:37743_d_tw%2Fvh oasc.Config.<init> 
loaded config solrconfig.xml with version 0 
   [junit4]   2> 680688 T5442 N:127.0.0.1:37743_d_tw%2Fvh 
oasc.SolrConfig.refreshRequestParams current version of requestparams : -1
   [junit4]   2> 680691 T5442 N:127.0.0.1:37743_d_tw%2Fvh 
oasc.SolrConfig.<init> Using Lucene MatchVersion: 5.2.0
   [junit4]   2> 680697 T5442 N:127.0.0.1:37743_d_tw%2Fvh 
oasc.SolrConfig.<init> Loaded SolrConfig: solrconfig.xml
   [junit4]   2> 680698 T5442 N:127.0.0.1:37743_d_tw%2Fvh 
oass.IndexSchema.readSchema Reading Solr Schema from /configs/conf1/schema.xml
   [junit4]   2> 680701 T5442 N:127.0.0.1:37743_d_tw%2Fvh 
oass.IndexSchema.readSchema [collection1] Schema name=test
   [junit4]   2> 680760 T5442 N:127.0.0.1:37743_d_tw%2Fvh 
oass.IndexSchema.readSchema default search field in schema is text
   [junit4]   2> 680761 T5442 N:127.0.0.1:37743_d_tw%2Fvh 
oass.IndexSchema.readSchema unique key field: id
   [junit4]   2> 680761 T5442 N:127.0.0.1:37743_d_tw%2Fvh 
oass.FileExchangeRateProvider.reload Reloading exchange rates from file 
currency.xml
   [junit4]   2> 680763 T5442 N:127.0.0.1:37743_d_tw%2Fvh 
oass.FileExchangeRateProvider.reload Reloading exchange rates from file 
currency.xml
   [junit4]   2> 680769 T5442 N:127.0.0.1:37743_d_tw%2Fvh 
oasc.CoreContainer.create Creating SolrCore 'collection1' using configuration 
from collection collection1
   [junit4]   2> 680769 T5442 N:127.0.0.1:37743_d_tw%2Fvh c:collection1 
oasc.SolrCore.initDirectoryFactory solr.StandardDirectoryFactory
   [junit4]   2> 680769 T5442 N:127.0.0.1:37743_d_tw%2Fvh c:collection1 
oasc.SolrCore.<init> [collection1] Opening new SolrCore at 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/shard-2-001/cores/collection1/, 
dataDir=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/shard-2-001/cores/collection1/data/
   [junit4]   2> 680769 T5442 N:127.0.0.1:37743_d_tw%2Fvh c:collection1 
oasc.JmxMonitoredMap.<init> JMX monitoring is enabled. Adding Solr mbeans to 
JMX Server: com.sun.jmx.mbeanserver.JmxMBeanServer@3425a966
   [junit4]   2> 680770 T5442 N:127.0.0.1:37743_d_tw%2Fvh c:collection1 
oasc.CachingDirectoryFactory.get return new directory for 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/shard-2-001/cores/collection1/data
   [junit4]   2> 680770 T5442 N:127.0.0.1:37743_d_tw%2Fvh c:collection1 
oasc.SolrCore.getNewIndexDir New index directory detected: old=null 
new=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/shard-2-001/cores/collection1/data/index/
   [junit4]   2> 680770 T5442 N:127.0.0.1:37743_d_tw%2Fvh c:collection1 
oasc.SolrCore.initIndex WARN [collection1] Solr index directory 
'/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/shard-2-001/cores/collection1/data/index' doesn't exist. 
Creating new index...
   [junit4]   2> 680770 T5442 N:127.0.0.1:37743_d_tw%2Fvh c:collection1 
oasc.CachingDirectoryFactory.get return new directory for 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/shard-2-001/cores/collection1/data/index
   [junit4]   2> 680771 T5442 N:127.0.0.1:37743_d_tw%2Fvh c:collection1 
oasu.RandomMergePolicy.<init> RandomMergePolicy wrapping class 
org.apache.lucene.index.TieredMergePolicy: [TieredMergePolicy: 
maxMergeAtOnce=40, maxMergeAtOnceExplicit=23, maxMergedSegmentMB=32.9423828125, 
floorSegmentMB=2.05859375, forceMergeDeletesPctAllowed=13.087659575459174, 
segmentsPerTier=15.0, maxCFSSegmentSizeMB=8.796093022207999E12, 
noCFSRatio=0.41692349822860775
   [junit4]   2> 680798 T5442 N:127.0.0.1:37743_d_tw%2Fvh c:collection1 
oasc.SolrDeletionPolicy.onCommit SolrDeletionPolicy.onCommit: commits: num=1
   [junit4]   2>                
commit{dir=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 
A8FAE17676D1BE93-001/shard-2-001/cores/collection1/data/index,segFN=segments_1,generation=1}
   [junit4]   2> 680798 T5442 N:127.0.0.1:37743_d_tw%2Fvh c:collection1 
oasc.SolrDeletionPolicy.updateCommits newest commit generation = 1
   [junit4]   2> 680802 T5442 N:127.0.0.1:37743_d_tw%2Fvh c:collection1 
oasup.UpdateRequestProcessorChain.init creating updateRequestProcessorChain 
"nodistrib"
   [junit4]   2> 680802 T5442 N:127.0.0.1:37743_d_tw%2Fvh c:collection1 
oasup.UpdateRequestProcessorChain.init creating updateRequestProcessorChain 
"dedupe"
   [junit4]   2> 680802 T5442 N:127.0.0.1:37743_d_tw%2Fvh c:collection1 
oasup.UpdateRequestProcessorChain.init inserting 
DistributedUpdateProcessorFactory into updateRequestProcessorChain "dedupe"
   [junit4]   2> 680802 T5442 N:127.0.0.1:37743_d_tw%2Fvh c:collection1 
oasup.UpdateRequestProcessorChain.init creating updateRequestProcessorChain 
"stored_sig"
   [junit4]   2> 680803 T5442 N:127.0.0.1:37743_d_tw%2Fvh c:collection1 
oasup.UpdateRequestProcessorChain.init inserting 
DistributedUpdateProcessorFactory into updateRequestProcessorChain "stored_sig"
   [junit4]   2> 680803 T5442 N:127.0.0.1:37743_d_tw%2Fvh c:collection1 
oasup.UpdateRequestProcessorChain.init creating updateRequestProcessorChain 
"distrib-dup-test-chain-explicit"
   [junit4]   2> 680803 T5442 N:127.0.0.1:37743_d_tw%2Fvh c:collection1 
oasup.UpdateRequestProcessorChain.init creating updateRequestProcessorChain 
"distrib-dup-test-chain-implicit"
   [junit4]   2> 680803 T5442 N:127.0.0.1:37743_d_tw%2Fvh c:collection1 
oasup.UpdateRequestProcessorChain.init inserting 
DistributedUpdateProcessorFactory into updateRequestProcessorChain 
"distrib-dup-test-chain-implicit"
   [junit4]   2> 680803 T5442 N:127.0.0.1:37743_d_tw%2Fvh c:collection1 
oasc.SolrCore.loadUpdateProcessorChains no updateRequestProcessorChain defined 
as default, creating implicit default
   [junit4]   2> 680804 T5442 N:127.0.0.1:37743_d_tw%2Fvh c:collection1 
oashl.XMLLoader.init xsltCacheLifetimeSeconds=60
   [junit4]   2> 680805 T5442 N:127.0.0.1:37743_d_tw%2Fvh c:collection1 
oashl.XMLLoader.init xsltCacheLifetimeSeconds=60
   [junit4]   2> 680805 T5442 N:127.0.0.1:37743_d_tw%2Fvh c:collection1 
oashl.XMLLoader.init xsltCacheLifetimeSeconds=60
   [junit4]   2> 680806 T5442 N:127.0.0.1:37743_d_tw%2Fvh c:collection1 
oashl.XMLLoader.init xsltCacheLifetimeSeconds=60
   [junit4]   2> 680821 T5442 N:127.0.0.1:37743_d_tw%2Fvh c:collection1 
oasc.RequestHandlers.initHandlersFromConfig Registered paths: 
/admin/mbeans,standard,/update/csv,/update/json/docs,/admin/luke,/admin/segments,/get,/admin/system,/replication,/admin/properties,/config,/schema,/admin/plugins,/admin/logging,/update/json,/admin/threads,/admin/ping,/update,/admin/file
   [junit4]   2> 680822 T5442 N:127.0.0.1:37743_d_tw%2Fvh c:collection1 
oasc.SolrCore.initStatsCache Using default statsCache cache: 
org.apache.solr.search.stats.LocalStatsCache
   [junit4]   2> 680822 T5442 N:127.0.0.1:37743_d_tw%2Fvh c:collection1 
oasu.UpdateHandler.<init> Using UpdateLog implementation: 
org.apache.solr.update.UpdateLog
   [junit4]   2> 680822 T5442 N:127.0.0.1:37743_d_tw%2Fvh c:collection1 
oasu.UpdateLog.init Initializing UpdateLog: dataDir= defaultSyncLevel=FLUSH 
numRecordsToKeep=100 maxNumLogsToKeep=10
   [junit4]   2> 680823 T5442 N:127.0.0.1:37743_d_tw%2Fvh c:collection1 
oasu.CommitTracker.<init> Hard AutoCommit: disabled
   [junit4]   2> 680823 T5442 N:127.0.0.1:37743_d_tw%2Fvh c:collection1 
oasu.CommitTracker.<init> Soft AutoCommit: disabled
   [junit4]   2> 680824 T5442 N:127.0.0.1:37743_d_tw%2Fvh c:collection1 
oasu.RandomMergePolicy.<init> RandomMergePolicy wrapping class 
org.apache.lucene.index.LogDocMergePolicy: [LogDocMergePolicy: 
minMergeSize=1000, mergeFactor=50, maxMergeSize=9223372036854775807, 
maxMergeSizeForForcedMerge=9223372036854775807, calibrateSizeByDeletes=true, 
maxMergeDocs=2147483647, maxCFSSegmentSizeMB=8.796093022207999E12, 
noCFSRatio=0.5598927705693212]
   [junit4]   2> 680824 T5442 N:127.0.0.1:37743_d_tw%2Fvh c:collection1 
oasc.SolrDeletionPolicy.onInit SolrDeletionPolicy.onInit: commits: num=1
   [junit4]   2>                
commit{dir=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 
A8FAE17676D1BE93-001/shard-2-001/cores/collection1/data/index,segFN=segments_1,generation=1}
   [junit4]   2> 680825 T5442 N:127.0.0.1:37743_d_tw%2Fvh c:collection1 
oasc.SolrDeletionPolicy.updateCommits newest commit generation = 1
   [junit4]   2> 680825 T5442 N:127.0.0.1:37743_d_tw%2Fvh c:collection1 
oass.SolrIndexSearcher.<init> Opening Searcher@75fb5053[collection1] main
   [junit4]   2> 680825 T5442 N:127.0.0.1:37743_d_tw%2Fvh c:collection1 
oasr.ManagedResourceStorage.newStorageIO Setting up ZooKeeper-based storage for 
the RestManager with znodeBase: /configs/conf1
   [junit4]   2> 680825 T5442 N:127.0.0.1:37743_d_tw%2Fvh c:collection1 
oasr.ManagedResourceStorage$ZooKeeperStorageIO.configure Configured 
ZooKeeperStorageIO with znodeBase: /configs/conf1
   [junit4]   2> 680826 T5442 N:127.0.0.1:37743_d_tw%2Fvh c:collection1 
oasr.RestManager.init Initializing RestManager with initArgs: {}
   [junit4]   2> 680826 T5442 N:127.0.0.1:37743_d_tw%2Fvh c:collection1 
oasr.ManagedResourceStorage.load Reading _rest_managed.json using 
ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 680826 T5442 N:127.0.0.1:37743_d_tw%2Fvh c:collection1 
oasr.ManagedResourceStorage$ZooKeeperStorageIO.openInputStream No data found 
for znode /configs/conf1/_rest_managed.json
   [junit4]   2> 680826 T5442 N:127.0.0.1:37743_d_tw%2Fvh c:collection1 
oasr.ManagedResourceStorage.load Loaded null at path _rest_managed.json using 
ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 680826 T5442 N:127.0.0.1:37743_d_tw%2Fvh c:collection1 
oasr.RestManager.init Initializing 0 registered ManagedResources
   [junit4]   2> 680826 T5442 N:127.0.0.1:37743_d_tw%2Fvh c:collection1 
oash.ReplicationHandler.inform Commits will be reserved for  10000
   [junit4]   2> 680827 T5443 N:127.0.0.1:37743_d_tw%2Fvh c:collection1 
oasc.SolrCore.registerSearcher [collection1] Registered new searcher 
Searcher@75fb5053[collection1] 
main{ExitableDirectoryReader(UninvertingDirectoryReader())}
   [junit4]   2> 680827 T5442 N:127.0.0.1:37743_d_tw%2Fvh c:collection1 
oasc.CoreContainer.registerCore registering core: collection1
   [junit4]   2> 680827 T5446 N:127.0.0.1:37743_d_tw%2Fvh C:collection1 
S:shard1 c:collection1 oasc.ZkController.register Register replica - 
core:collection1 address:http://127.0.0.1:37743/d_tw/vh collection:collection1 
shard:shard1
   [junit4]   2> 680828 T5369 N:127.0.0.1:37743_d_tw%2Fvh 
oass.SolrDispatchFilter.init 
user.dir=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0
   [junit4]   2> 680828 T5369 N:127.0.0.1:37743_d_tw%2Fvh 
oass.SolrDispatchFilter.init SolrDispatchFilter.init() done
   [junit4]   2> 680828 T5446 N:127.0.0.1:37743_d_tw%2Fvh C:collection1 
S:shard1 c:collection1 oasc.ZkController.register We are 
http://127.0.0.1:37743/d_tw/vh/collection1/ and leader is 
http://127.0.0.1:37342/d_tw/vh/collection1/
   [junit4]   2> 680828 T5446 N:127.0.0.1:37743_d_tw%2Fvh C:collection1 
S:shard1 c:collection1 oasc.ZkController.register No LogReplay needed for 
core=collection1 baseURL=http://127.0.0.1:37743/d_tw/vh
   [junit4]   2> 680829 T5446 N:127.0.0.1:37743_d_tw%2Fvh C:collection1 
S:shard1 c:collection1 oasc.ZkController.checkRecovery Core needs to 
recover:collection1
   [junit4]   2> 680829 T5446 N:127.0.0.1:37743_d_tw%2Fvh C:collection1 
S:shard1 c:collection1 oasu.DefaultSolrCoreState.doRecovery Running recovery - 
first canceling any ongoing recovery
   [junit4]   2> ASYNC  NEW_CORE C1178 name=collection1 
org.apache.solr.core.SolrCore@7b6a86a9 
url=http://127.0.0.1:37743/d_tw/vh/collection1 node=127.0.0.1:37743_d_tw%2Fvh 
C1178_STATE=coll:collection1 core:collection1 props:{core=collection1, 
base_url=http://127.0.0.1:37743/d_tw/vh, node_name=127.0.0.1:37743_d_tw%2Fvh, 
state=down}
   [junit4]   2> 680829 T5447 N:127.0.0.1:37743_d_tw%2Fvh C:collection1 
S:shard1 c:collection1 C1178 oasc.RecoveryStrategy.run Starting recovery 
process.  core=collection1 recoveringAfterStartup=true
   [junit4]   2> 680830 T5447 N:127.0.0.1:37743_d_tw%2Fvh C:collection1 
S:shard1 c:collection1 C1178 oasc.RecoveryStrategy.doRecovery ###### 
startupVersions=[]
   [junit4]   2> 680830 T5447 N:127.0.0.1:37743_d_tw%2Fvh C:collection1 
S:shard1 c:collection1 C1178 oasc.RecoveryStrategy.doRecovery Publishing state 
of core collection1 as recovering, leader is 
http://127.0.0.1:37342/d_tw/vh/collection1/ and I am 
http://127.0.0.1:37743/d_tw/vh/collection1/
   [junit4]   2> 680830 T5447 N:127.0.0.1:37743_d_tw%2Fvh C:collection1 
S:shard1 c:collection1 C1178 oasc.ZkController.publish publishing 
core=collection1 state=recovering collection=collection1
   [junit4]   2> 680830 T5447 N:127.0.0.1:37743_d_tw%2Fvh C:collection1 
S:shard1 c:collection1 C1178 oasc.ZkController.publish numShards not found on 
descriptor - reading it from system property
   [junit4]   2> 680830 T5395 N:127.0.0.1:51908_d_tw%2Fvh 
oasc.DistributedQueue$LatchWatcher.process NodeChildrenChanged fired on path 
/overseer/queue state SyncConnected
   [junit4]   2> 680831 T5396 N:127.0.0.1:51908_d_tw%2Fvh 
oasc.Overseer$ClusterStateUpdater.run processMessage: queueSize: 1, message = {
   [junit4]   2>          "core":"collection1",
   [junit4]   2>          "core_node_name":"core_node2",
   [junit4]   2>          "roles":null,
   [junit4]   2>          "base_url":"http://127.0.0.1:37743/d_tw/vh",
   [junit4]   2>          "node_name":"127.0.0.1:37743_d_tw%2Fvh",
   [junit4]   2>          "numShards":"1",
   [junit4]   2>          "state":"recovering",
   [junit4]   2>          "shard":"shard1",
   [junit4]   2>          "collection":"collection1",
   [junit4]   2>          "operation":"state"} current state version: 7
   [junit4]   2> 680831 T5447 N:127.0.0.1:37743_d_tw%2Fvh C:collection1 
S:shard1 c:collection1 C1178 oasc.RecoveryStrategy.sendPrepRecoveryCmd Sending 
prep recovery command to http://127.0.0.1:37342/d_tw/vh; WaitForState: 
action=PREPRECOVERY&core=collection1&nodeName=127.0.0.1%3A37743_d_tw%252Fvh&coreNodeName=core_node2&state=recovering&checkLive=true&onlyIfLeader=true&onlyIfLeaderActive=true
   [junit4]   2> 680831 T5396 N:127.0.0.1:51908_d_tw%2Fvh 
oasco.ReplicaMutator.updateState Update state numShards=1 message={
   [junit4]   2>          "core":"collection1",
   [junit4]   2>          "core_node_name":"core_node2",
   [junit4]   2>          "roles":null,
   [junit4]   2>          "base_url":"http://127.0.0.1:37743/d_tw/vh",
   [junit4]   2>          "node_name":"127.0.0.1:37743_d_tw%2Fvh",
   [junit4]   2>          "numShards":"1",
   [junit4]   2>          "state":"recovering",
   [junit4]   2>          "shard":"shard1",
   [junit4]   2>          "collection":"collection1",
   [junit4]   2>          "operation":"state"}
   [junit4]   2> 680831 T5415 N:127.0.0.1:37342_d_tw%2Fvh 
oasha.CoreAdminHandler.handleWaitForStateAction Going to wait for coreNodeName: 
core_node2, state: recovering, checkLive: true, onlyIfLeader: true, 
onlyIfLeaderActive: true
   [junit4]   2> 680832 T5415 N:127.0.0.1:37342_d_tw%2Fvh 
oasha.CoreAdminHandler.handleWaitForStateAction Will wait a max of 183 seconds 
to see collection1 (shard1 of collection1) have state: recovering
   [junit4]   2> 680832 T5415 N:127.0.0.1:37342_d_tw%2Fvh 
oasha.CoreAdminHandler.handleWaitForStateAction In WaitForState(recovering): 
collection=collection1, shard=shard1, thisCore=collection1, 
leaderDoesNotNeedRecovery=false, isLeader? true, live=true, checkLive=true, 
currentState=down, localState=active, nodeName=127.0.0.1:37743_d_tw%2Fvh, 
coreNodeName=core_node2, onlyIfActiveCheckResult=false, nodeProps: 
core_node2:{"core":"collection1","base_url":"http://127.0.0.1:37743/d_tw/vh","node_name":"127.0.0.1:37743_d_tw%2Fvh","state":"down"}
   [junit4]   2> 680893 T5369 oas.SolrTestCaseJ4.writeCoreProperties Writing 
core.properties file to 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/shard-3-001/cores/collection1
   [junit4]   2> 680894 T5369 oasc.AbstractFullDistribZkTestBase.createJettys 
create jetty 3 in directory 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/shard-3-001
   [junit4]   2> 680895 T5369 oejs.Server.doStart jetty-8.1.10.v20130312
   [junit4]   2> 680896 T5369 oejs.AbstractConnector.doStart Started 
[email protected]:37347
   [junit4]   2> 680897 T5369 oascse.JettySolrRunner$1.lifeCycleStarted Jetty 
properties: 
{solr.data.dir=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/tempDir-001/jetty3, solrconfig=solrconfig.xml, 
hostContext=/d_tw/vh, hostPort=37347, 
coreRootDirectory=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/shard-3-001/cores}
   [junit4]   2> 680897 T5369 oass.SolrDispatchFilter.init 
SolrDispatchFilter.init()sun.misc.Launcher$AppClassLoader@3b764bce
   [junit4]   2> 680897 T5369 oasc.SolrResourceLoader.<init> new 
SolrResourceLoader for directory: 
'/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/shard-3-001/'
   [junit4]   2> 680916 T5369 oasc.SolrXmlConfig.fromFile Loading container 
configuration from 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/shard-3-001/solr.xml
   [junit4]   2> 680921 T5369 oasc.CorePropertiesLocator.<init> Config-defined 
core root directory: 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/shard-3-001/cores
   [junit4]   2> 680921 T5369 oasc.CoreContainer.<init> New CoreContainer 
1464551542
   [junit4]   2> 680921 T5369 oasc.CoreContainer.load Loading cores into 
CoreContainer 
[instanceDir=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/shard-3-001/]
   [junit4]   2> 680921 T5369 oasc.CoreContainer.load loading shared library: 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/shard-3-001/lib
   [junit4]   2> 680922 T5369 oasc.SolrResourceLoader.addToClassLoader WARN 
Can't find (or read) directory to add to classloader: lib (resolved as: 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/shard-3-001/lib).
   [junit4]   2> 680928 T5369 oashc.HttpShardHandlerFactory.init created with 
socketTimeout : 90000,urlScheme : ,connTimeout : 15000,maxConnectionsPerHost : 
20,maxConnections : 10000,corePoolSize : 0,maximumPoolSize : 
2147483647,maxThreadIdleTime : 5,sizeOfQueue : -1,fairnessPolicy : 
false,useRetries : false,
   [junit4]   2> 680930 T5369 oasu.UpdateShardHandler.<init> Creating 
UpdateShardHandler HTTP client with params: 
socketTimeout=340000&connTimeout=45000&retry=true
   [junit4]   2> 680931 T5369 oasl.LogWatcher.createWatcher SLF4J impl is 
org.slf4j.impl.Log4jLoggerFactory
   [junit4]   2> 680931 T5369 oasl.LogWatcher.newRegisteredLogWatcher 
Registering Log Listener [Log4j (org.slf4j.impl.Log4jLoggerFactory)]
   [junit4]   2> 680931 T5369 oasc.CoreContainer.load Node Name: 127.0.0.1
   [junit4]   2> 680932 T5369 oasc.ZkContainer.initZooKeeper Zookeeper 
client=127.0.0.1:41610/solr
   [junit4]   2> 680932 T5369 oasc.ZkController.checkChrootPath zkHost includes 
chroot
   [junit4]   2> 681833 T5415 N:127.0.0.1:37342_d_tw%2Fvh 
oasha.CoreAdminHandler.handleWaitForStateAction In WaitForState(recovering): 
collection=collection1, shard=shard1, thisCore=collection1, 
leaderDoesNotNeedRecovery=false, isLeader? true, live=true, checkLive=true, 
currentState=recovering, localState=active, nodeName=127.0.0.1:37743_d_tw%2Fvh, 
coreNodeName=core_node2, onlyIfActiveCheckResult=false, nodeProps: 
core_node2:{"core":"collection1","base_url":"http://127.0.0.1:37743/d_tw/vh","node_name":"127.0.0.1:37743_d_tw%2Fvh","state":"recovering"}
   [junit4]   2> 681833 T5415 N:127.0.0.1:37342

[...truncated too long message...]

131 T5369 C:collection1 S:shard1 c:collection1 oasc.ZkController.publish 
numShards not found on descriptor - reading it from system property
   [junit4]   2> 718136 T5463 N:127.0.0.1:37347_d_tw%2Fvh 
oasc.DistributedQueue$LatchWatcher.process NodeChildrenChanged fired on path 
/overseer/queue state SyncConnected
   [junit4]   2> 718136 T5369 C:control_collection S:shard1 c:collection1 
oasc.Overseer.close Overseer 
(id=93661436702490634-127.0.0.1:37347_d_tw%2Fvh-n_0000000003) closing
   [junit4]   2> 718136 T5564 N:127.0.0.1:37347_d_tw%2Fvh 
oasc.Overseer$ClusterStateUpdater.run Overseer Loop exiting : 
127.0.0.1:37347_d_tw%2Fvh
   [junit4]   2> 718145 T5463 N:127.0.0.1:37347_d_tw%2Fvh 
oascc.ZkStateReader$3.process WARN ZooKeeper watch triggered, but Solr cannot 
talk to ZK
   [junit4]   2> 718921 T5543 N:127.0.0.1:37347_d_tw%2Fvh C1220 
oasc.SyncStrategy.sync WARN Closed, skipping sync up.
   [junit4]   2> 718922 T5543 N:127.0.0.1:37347_d_tw%2Fvh 
oasc.ShardLeaderElectionContext.rejoinLeaderElection Not rejoining election 
because CoreContainer is closed
   [junit4]   2> 718922 T5543 N:127.0.0.1:37347_d_tw%2Fvh oasc.SolrCore.close 
[collection1]  CLOSING SolrCore org.apache.solr.core.SolrCore@7bc72654
   [junit4]   2> 718922 T5543 N:127.0.0.1:37347_d_tw%2Fvh 
oasc.ZkController.unRegisterConfListener  a listener was removed because of 
core close
   [junit4]   2> 718922 T5543 N:127.0.0.1:37347_d_tw%2Fvh 
oasu.DirectUpdateHandler2.close closing 
DirectUpdateHandler2{commits=2,autocommits=0,soft 
autocommits=0,optimizes=0,rollbacks=0,expungeDeletes=0,docsPending=0,adds=0,deletesById=0,deletesByQuery=0,errors=0,cumulative_adds=858,cumulative_deletesById=420,cumulative_deletesByQuery=1,cumulative_errors=0,transaction_logs_total_size=111715,transaction_logs_total_number=1}
   [junit4]   2> 718923 T5543 N:127.0.0.1:37347_d_tw%2Fvh 
oasu.SolrCoreState.decrefSolrCoreState Closing SolrCoreState
   [junit4]   2> 718923 T5543 N:127.0.0.1:37347_d_tw%2Fvh 
oasu.DefaultSolrCoreState.closeIndexWriter SolrCoreState ref count has reached 
0 - closing IndexWriter
   [junit4]   2> 718923 T5543 N:127.0.0.1:37347_d_tw%2Fvh 
oasu.DefaultSolrCoreState.closeIndexWriter closing IndexWriter with 
IndexWriterCloser
   [junit4]   2> 718924 T5543 N:127.0.0.1:37347_d_tw%2Fvh 
oasc.SolrCore.closeSearcher [collection1] Closing main searcher on request.
   [junit4]   2> 718942 T5543 N:127.0.0.1:37347_d_tw%2Fvh 
oasc.CachingDirectoryFactory.close Closing StandardDirectoryFactory - 2 
directories currently being tracked
   [junit4]   2> 718943 T5543 N:127.0.0.1:37347_d_tw%2Fvh 
oasc.CachingDirectoryFactory.closeCacheValue looking to close 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/shard-3-001/cores/collection1/data 
[CachedDir<<refCount=0;path=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/shard-3-001/cores/collection1/data;done=false>>]
   [junit4]   2> 718943 T5543 N:127.0.0.1:37347_d_tw%2Fvh 
oasc.CachingDirectoryFactory.close Closing directory: 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/shard-3-001/cores/collection1/data
   [junit4]   2> 718943 T5543 N:127.0.0.1:37347_d_tw%2Fvh 
oasc.CachingDirectoryFactory.closeCacheValue looking to close 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/shard-3-001/cores/collection1/data/index 
[CachedDir<<refCount=0;path=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/shard-3-001/cores/collection1/data/index;done=false>>]
   [junit4]   2> 718943 T5543 N:127.0.0.1:37347_d_tw%2Fvh 
oasc.CachingDirectoryFactory.close Closing directory: 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001/shard-3-001/cores/collection1/data/index
   [junit4]   2> 718974 T5369 oejsh.ContextHandler.doStop stopped 
o.e.j.s.ServletContextHandler{/d_tw/vh,null}
   [junit4]   2> 719166 T5369 C:control_collection S:shard1 c:collection1 
oasc.ZkTestServer.send4LetterWord connecting to 127.0.0.1:41610 41610
   [junit4]   2> 719167 T5534 oasc.ZkTestServer.send4LetterWord connecting to 
127.0.0.1:41610 41610
   [junit4]   2> 719168 T5534 oasc.ZkTestServer$ZKServerMain.runFromConfig WARN 
Watch limit violations: 
   [junit4]   2>        Maximum concurrent children watches above limit:
   [junit4]   2>        
   [junit4]   2>                2       /solr/overseer/collection-queue-work
   [junit4]   2>        
   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=ChaosMonkeyNothingIsSafeTest -Dtests.method=test 
-Dtests.seed=A8FAE17676D1BE93 -Dtests.multiplier=3 -Dtests.slow=true 
-Dtests.locale=en_GB -Dtests.timezone=Pacific/Pago_Pago -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII
   [junit4] FAILURE 44.2s J0 | ChaosMonkeyNothingIsSafeTest.test <<<
   [junit4]    > Throwable #1: java.lang.AssertionError: There were too many 
update fails (25 > 20) - we expect it can happen, but shouldn't easily
   [junit4]    >        at 
__randomizedtesting.SeedInfo.seed([A8FAE17676D1BE93:20AEDEACD82DD36B]:0)
   [junit4]    >        at 
org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test(ChaosMonkeyNothingIsSafeTest.java:230)
   [junit4]    >        at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
   [junit4]    >        at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
   [junit4]    >        at java.lang.Thread.run(Thread.java:745)
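(For reference: the assertion that trips here is the update-failure ceiling check at ChaosMonkeyNothingIsSafeTest.java:230, per the trace above. The test's internals are not shown in this log, so the snippet below is only a minimal, hypothetical sketch of that style of check; failCount and maxUpdateFails are stand-in names, seeded with the 25-vs-20 values reported in this run.)

    import static org.junit.Assert.assertFalse;
    import org.junit.Test;

    public class UpdateFailThresholdSketch {
      // Hypothetical stand-ins: the real test tallies failed updates from its
      // chaos-monkey indexing threads; these constants just mirror this run.
      private final int failCount = 25;
      private final int maxUpdateFails = 20;

      @Test
      public void updateFailuresStayUnderThreshold() {
        // Fails exactly as reported above once failCount exceeds the ceiling.
        assertFalse("There were too many update fails (" + failCount + " > "
            + maxUpdateFails + ") - we expect it can happen, but shouldn't easily",
            failCount > maxUpdateFails);
      }
    }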
   [junit4]   2> 719172 T5369 C:control_collection S:shard1 c:collection1 
oas.SolrTestCaseJ4.deleteCore ###deleteCore
   [junit4]   2> NOTE: leaving temporary files on disk at: 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest
 A8FAE17676D1BE93-001
   [junit4]   2> 44189 T5368 ccr.ThreadLeakControl.checkThreadLeaks WARNING 
Will linger awaiting termination of 1 leaked thread(s).
   [junit4]   2> NOTE: test params are: codec=Asserting(Lucene50): 
{rnd_b=PostingsFormat(name=LuceneVarGapDocFreqInterval), 
_version_=PostingsFormat(name=MockRandom), a_t=PostingsFormat(name=MockRandom), 
a_i=PostingsFormat(name=LuceneVarGapDocFreqInterval), 
id=PostingsFormat(name=LuceneVarGapDocFreqInterval)}, docValues:{}, 
sim=DefaultSimilarity, locale=en_GB, timezone=Pacific/Pago_Pago
   [junit4]   2> NOTE: Linux 3.13.0-49-generic amd64/Oracle Corporation 
1.9.0-ea (64-bit)/cpus=12,threads=1,free=155355552,total=508952576
   [junit4]   2> NOTE: All tests run in this JVM: [PrimitiveFieldTypeTest, 
UniqFieldsUpdateProcessorFactoryTest, AddBlockUpdateTest, 
QueryElevationComponentTest, HdfsThreadLeakTest, HighlighterTest, 
TestStressLucene, TestTrackingShardHandlerFactory, TestRemoteStreaming, 
TestRandomMergePolicy, TestSweetSpotSimilarityFactory, SolrCoreTest, 
TestPostingsSolrHighlighter, TestShortCircuitedRequests, TestMacros, 
PeerSyncTest, CoreAdminRequestStatusTest, TestReplicaProperties, 
URLClassifyProcessorTest, TestManagedSchema, SaslZkACLProviderTest, 
HighlighterMaxOffsetTest, SolrRequestParserTest, TestSolr4Spatial2, 
TestSolrQueryParserDefaultOperatorResource, TestJmxIntegration, TestNRTOpen, 
TestBulkSchemaConcurrent, TestDistributedSearch, FastVectorHighlighterTest, 
SolrIndexConfigTest, SuggesterWFSTTest, TestInitQParser, 
DistributedQueryComponentOptimizationTest, TestAnalyzeInfixSuggestions, 
TriLevelCompositeIdRoutingTest, TestComplexPhraseQParserPlugin, 
TestDistribDocBasedVersion, DistributedFacetPivotWhiteBoxTest, 
TestCodecSupport, TestSchemaVersionResource, TestSolrConfigHandlerCloud, 
SignatureUpdateProcessorFactoryTest, OverseerStatusTest, TestSolr4Spatial, 
TestSimpleQParserPlugin, TestJsonRequest, UpdateRequestProcessorFactoryTest, 
TestDocumentBuilder, ConcurrentDeleteAndCreateCollectionTest, 
CSVRequestHandlerTest, TermVectorComponentTest, TestSolrIndexConfig, 
TestReversedWildcardFilterFactory, TestJmxMonitoredMap, TestRandomDVFaceting, 
SyncSliceTest, ChangedSchemaMergeTest, TestManagedSchemaDynamicFieldResource, 
CachingDirectoryFactoryTest, DateMathParserTest, SolrCloudExampleTest, 
TestBlobHandler, CloudExitableDirectoryReaderTest, SpatialHeatmapFacetsTest, 
TestInfoStreamLogging, ChaosMonkeyNothingIsSafeTest]
   [junit4] Completed [231/483] on J0 in 45.22s, 1 test, 1 failure <<< FAILURES!

[...truncated 798 lines...]
BUILD FAILED
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:536: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:484: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:61: The following error 
occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/extra-targets.xml:39: The 
following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build.xml:229: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/common-build.xml:511: The 
following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/lucene/common-build.xml:1434: The 
following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/lucene/common-build.xml:991: 
There were test failures: 483 suites, 1916 tests, 1 failure, 50 ignored (22 
assumptions)

Total time: 47 minutes 25 seconds
Build step 'Invoke Ant' marked build as failure
[description-setter] Description set: Java: 64bit/jdk1.9.0-ea-b54 
-XX:-UseCompressedOops -XX:+UseSerialGC
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
