See <https://builds.apache.org/job/Kafka-trunk/511/changes>

Changes:

[junrao] kafka-2101; Metric metadata-age is reset on a failed update; patched by Tim Brooks; reviewed by Jun Rao
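
(A minimal sketch of the pattern the KAFKA-2101 fix describes, using
hypothetical names rather than the actual client code: the metadata-age
metric should be derived from the last *successful* refresh, so that a
failed update no longer resets it.)

    public class MetadataTracker {
        // Last attempt, successful or not; used for refresh backoff.
        private long lastRefreshMs = 0L;
        // Advanced only when a metadata response is applied successfully.
        private long lastSuccessfulRefreshMs = 0L;

        // A metadata response was applied successfully.
        public synchronized void update(long nowMs) {
            this.lastRefreshMs = nowMs;
            this.lastSuccessfulRefreshMs = nowMs;
        }

        // A metadata request failed; note that the age below is NOT reset.
        public synchronized void failedUpdate(long nowMs) {
            this.lastRefreshMs = nowMs;
        }

        // The metadata-age metric: seconds since the last successful update.
        public synchronized double metadataAgeSeconds(long nowMs) {
            return (nowMs - lastSuccessfulRefreshMs) / 1000.0;
        }
    }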

------------------------------------------
[...truncated 1366 lines...]
kafka.log.LogTest > testTimeBasedLogRoll PASSED

kafka.log.LogTest > testTimeBasedLogRollJitter PASSED

kafka.log.LogTest > testSizeBasedLogRoll PASSED

kafka.log.LogTest > testLoadEmptyLog PASSED

kafka.log.LogTest > testAppendAndReadWithSequentialOffsets PASSED

kafka.log.LogTest > testAppendAndReadWithNonSequentialOffsets PASSED

kafka.log.LogTest > testReadAtLogGap PASSED

kafka.log.LogTest > testReadOutOfRange PASSED

kafka.log.LogTest > testLogRolls PASSED

kafka.log.LogTest > testCompressedMessages PASSED

kafka.log.LogTest > testThatGarbageCollectingSegmentsDoesntChangeOffset PASSED

kafka.log.LogTest > testMessageSetSizeCheck PASSED

kafka.log.LogTest > testCompactedTopicConstraints PASSED

kafka.log.LogTest > testMessageSizeCheck PASSED

kafka.log.LogTest > testLogRecoversToCorrectOffset PASSED

kafka.log.LogTest > testIndexRebuild PASSED

kafka.log.LogTest > testTruncateTo PASSED

kafka.log.LogTest > testIndexResizingAtTruncation PASSED

kafka.log.LogTest > testBogusIndexSegmentsAreRemoved PASSED

kafka.log.LogTest > testReopenThenTruncate PASSED

kafka.log.LogTest > testAsyncDelete PASSED

kafka.log.LogTest > testOpenDeletesObsoleteFiles PASSED

kafka.log.LogTest > testAppendMessageWithNullPayload PASSED

kafka.log.LogTest > testCorruptLog PASSED

kafka.log.LogTest > testCleanShutdownFile PASSED

kafka.log.LogTest > testParseTopicPartitionName PASSED

kafka.log.LogTest > testParseTopicPartitionNameForEmptyName PASSED

kafka.log.LogTest > testParseTopicPartitionNameForNull PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingSeparator PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingTopic PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingPartition PASSED

kafka.log.LogSegmentTest > testTruncate PASSED

kafka.log.LogSegmentTest > testReadOnEmptySegment PASSED

kafka.log.LogSegmentTest > testReadBeforeFirstOffset PASSED

kafka.log.LogSegmentTest > testMaxOffset PASSED

kafka.log.LogSegmentTest > testReadAfterLast PASSED

kafka.log.LogSegmentTest > testReadFromGap PASSED

kafka.log.LogSegmentTest > testTruncateFull PASSED

kafka.log.LogSegmentTest > testNextOffsetCalculation PASSED

kafka.log.LogSegmentTest > testChangeFileSuffixes PASSED

kafka.log.LogSegmentTest > testRecoveryFixesCorruptIndex PASSED

kafka.log.LogSegmentTest > testRecoveryWithCorruptMessage PASSED

kafka.api.ProducerSendTest > testAutoCreateTopic PASSED

kafka.api.ProducerSendTest > testSendOffset PASSED

kafka.api.ProducerSendTest > testSerializer PASSED

kafka.api.ProducerSendTest > testClose PASSED

kafka.api.ProducerSendTest > testSendToPartition PASSED

kafka.api.ProducerSendTest > testFlush PASSED

kafka.api.ProducerSendTest > testCloseWithZeroTimeoutFromCallerThread PASSED

kafka.api.ProducerSendTest > testCloseWithZeroTimeoutFromSenderThread PASSED

kafka.api.ApiUtilsTest > testShortStringNonASCII PASSED

kafka.api.ApiUtilsTest > testShortStringASCII PASSED

kafka.api.ProducerBounceTest > testBrokerFailure PASSED

kafka.api.ProducerFailureHandlingTest > testNotEnoughReplicas PASSED

kafka.api.ProducerFailureHandlingTest > testInvalidPartition PASSED

kafka.api.ProducerFailureHandlingTest > testTooLargeRecordWithAckZero PASSED

kafka.api.ProducerFailureHandlingTest > testTooLargeRecordWithAckOne PASSED

kafka.api.ProducerFailureHandlingTest > testNonExistentTopic PASSED

kafka.api.ProducerFailureHandlingTest > testWrongBrokerList PASSED

kafka.api.ProducerFailureHandlingTest > testNoResponse PASSED

kafka.api.ProducerFailureHandlingTest > testSendAfterClosed PASSED

kafka.api.ProducerFailureHandlingTest > testCannotSendToInternalTopic PASSED

kafka.api.ProducerFailureHandlingTest > testNotEnoughReplicasAfterBrokerShutdown PASSED

kafka.api.ConsumerTest > testSimpleConsumption PASSED

kafka.api.ConsumerTest > testAutoOffsetReset PASSED

kafka.api.ConsumerTest > testSeek PASSED

kafka.api.ConsumerTest > testGroupConsumption PASSED

kafka.api.ConsumerTest > testPositionAndCommit PASSED

kafka.api.ConsumerTest > testPartitionsFor PASSED

kafka.api.ConsumerTest > testPartitionReassignmentCallback PASSED

kafka.api.ConsumerBounceTest > testConsumptionWithBrokerFailures PASSED

kafka.api.ConsumerBounceTest > testSeekAndCommitWithBrokerFailures PASSED

kafka.api.RequestResponseSerializationTest > testSerializationAndDeserialization PASSED

kafka.api.test.ProducerCompressionTest > testCompression[0] PASSED

kafka.api.test.ProducerCompressionTest > testCompression[1] PASSED

kafka.api.test.ProducerCompressionTest > testCompression[2] PASSED

kafka.api.test.ProducerCompressionTest > testCompression[3] PASSED

kafka.tools.ConsoleProducerTest > testValidConfigsNewProducer PASSED

kafka.tools.ConsoleProducerTest > testValidConfigsOldProducer PASSED

kafka.tools.ConsoleProducerTest > testInvalidConfigs PASSED

kafka.tools.ConsoleProducerTest > testParseKeyProp PASSED

unit.kafka.KafkaTest > testGetKafkaConfigFromArgs PASSED

unit.kafka.KafkaTest > testGetKafkaConfigFromArgsWrongSetValue PASSED

unit.kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsAtTheEnd PASSED

unit.kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsOnly PASSED

unit.kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsAtTheBegging PASSED

unit.kafka.utils.CommandLineUtilsTest > testParseEmptyArg PASSED

unit.kafka.utils.CommandLineUtilsTest > testParseSingleArg PASSED

unit.kafka.utils.CommandLineUtilsTest > testParseArgs PASSED

unit.kafka.utils.ByteBoundedBlockingQueueTest > testByteBoundedBlockingQueue PASSED

unit.kafka.zk.ZKPathTest > testCreatePersistentPathThrowsException PASSED

unit.kafka.zk.ZKPathTest > testCreatePersistentPath PASSED

unit.kafka.zk.ZKPathTest > testMakeSurePersistsPathExistsThrowsException PASSED

unit.kafka.zk.ZKPathTest > testMakeSurePersistsPathExists PASSED

unit.kafka.zk.ZKPathTest > testCreateEphemeralPathThrowsException PASSED

unit.kafka.zk.ZKPathTest > testCreateEphemeralPathExists PASSED

unit.kafka.zk.ZKPathTest > testCreatePersistentSequentialThrowsException PASSED

unit.kafka.zk.ZKPathTest > testCreatePersistentSequentialExists PASSED

unit.kafka.common.TopicTest > testInvalidTopicNames PASSED

unit.kafka.common.ConfigTest > testInvalidClientIds PASSED

unit.kafka.common.ConfigTest > testInvalidGroupIds PASSED

unit.kafka.integration.MinIsrConfigTest > testDefaultKafkaConfig PASSED

unit.kafka.consumer.PartitionAssignorTest > testRoundRobinPartitionAssignor PASSED

unit.kafka.consumer.PartitionAssignorTest > testRangePartitionAssignor PASSED

unit.kafka.server.KafkaConfigConfigDefTest > testFromPropsDefaults PASSED

unit.kafka.server.KafkaConfigConfigDefTest > testFromPropsEmpty PASSED

unit.kafka.server.KafkaConfigConfigDefTest > testFromPropsToProps FAILED
    java.lang.AssertionError: expected:<{num.io.threads=2051678117, 
log.dir=/tmp/log, num.network.threads=442579598, 
offsets.topic.num.partitions=1996793767, log.cleaner.enable=true, 
inter.broker.protocol.version=0.8.3.X, host.name=?????????, 
log.cleaner.backoff.ms=2080497098, log.segment.delete.delay.ms=516834257, 
controller.socket.timeout.ms=444411414, queued.max.requests=673019914, 
controlled.shutdown.max.retries=1810738435, num.replica.fetchers=1160759331, 
socket.request.max.bytes=1453815395, log.flush.interval.ms=762170329, 
offsets.topic.replication.factor=1011, 
log.flush.offset.checkpoint.interval.ms=923125288, 
security.inter.broker.protocol=PLAINTEXT, 
zookeeper.session.timeout.ms=413974606, metrics.sample.window.ms=1000, 
offsets.topic.compression.codec=1, zookeeper.connection.timeout.ms=2068179601, 
fetch.purgatory.purge.interval.requests=1242197204, 
log.retention.bytes=692466534, log.dirs=/tmp/logs,/tmp/logs2, 
replica.fetch.min.bytes=1791426389, compression.type=lz4, 
log.roll.jitter.ms=356707666, log.cleaner.threads=2, 
replica.lag.time.max.ms=1073834162, advertised.port=4321, 
max.connections.per.ip.overrides=127.0.0.1:2, 127.0.0.2:3, 
socket.send.buffer.bytes=1319605180, metrics.num.samples=2, port=1234, 
replica.fetch.wait.max.ms=321, log.segment.bytes=468671022, 
log.retention.minutes=772707425, auto.create.topics.enable=true, 
replica.socket.receive.buffer.bytes=1923367476, 
log.cleaner.io.max.bytes.per.second=0.2, zookeeper.sync.time.ms=2072589946, 
log.roll.jitter.hours=2106718330, log.retention.check.interval.ms=906922522, 
reserved.broker.max.id=100, unclean.leader.election.enable=true, 
advertised.listeners=PLAINTEXT://:2909, log.cleaner.io.buffer.load.factor=1.0, 
consumer.min.session.timeout.ms=422104288, log.retention.ms=1496447411, 
replica.high.watermark.checkpoint.interval.ms=118464842, 
log.cleanup.policy=delete, log.cleaner.dedupe.buffer.size=3145729, 
offsets.commit.timeout.ms=2084609508, min.insync.replicas=963487957, 
zookeeper.connect=127.0.0.1:2181, 
leader.imbalance.per.broker.percentage=148038876, 
log.index.interval.bytes=242075900, 
leader.imbalance.check.interval.seconds=1376263302, 
offsets.retention.minutes=1781435041, socket.receive.buffer.bytes=369224522, 
log.cleaner.delete.retention.ms=898157008, replica.socket.timeout.ms=493318414, 
num.partitions=2, offsets.topic.segment.bytes=852590082, 
default.replication.factor=549663639, log.cleaner.io.buffer.size=905972186, 
offsets.commit.required.acks=-1, num.recovery.threads.per.data.dir=1012415473, 
log.retention.hours=1115262747, replica.fetch.max.bytes=2041540755, 
log.roll.hours=115708840, metric.reporters=, message.max.bytes=1234, 
log.cleaner.min.cleanable.ratio=0.6, offsets.load.buffer.size=1818565888, 
delete.topic.enable=true, listeners=PLAINTEXT://:9092, 
offset.metadata.max.bytes=1563320007, 
controlled.shutdown.retry.backoff.ms=1270013702, 
max.connections.per.ip=359602609, consumer.max.session.timeout.ms=2124317921, 
log.roll.ms=241126032, advertised.host.name=??????????, 
log.flush.scheduler.interval.ms=1548906710, auto.leader.rebalance.enable=false, 
producer.purgatory.purge.interval.requests=1640729755, 
controlled.shutdown.enable=false, log.index.size.max.bytes=1748380064, 
log.flush.interval.messages=982245822, broker.id=15, 
offsets.retention.check.interval.ms=593078788, 
replica.fetch.backoff.ms=394858256, background.threads=124969300, 
connections.max.idle.ms=554679959}> but was:<{num.io.threads=2051678117, 
log.dir=/tmp/log, num.network.threads=442579598, 
offsets.topic.num.partitions=1996793767, inter.broker.protocol.version=0.8.3.X, 
log.cleaner.enable=true, host.name=?????????, 
log.cleaner.backoff.ms=2080497098, log.segment.delete.delay.ms=516834257, 
controller.socket.timeout.ms=444411414, 
controlled.shutdown.max.retries=1810738435, queued.max.requests=673019914, 
num.replica.fetchers=1160759331, socket.request.max.bytes=1453815395, 
log.flush.interval.ms=762170329, offsets.topic.replication.factor=1011, 
log.flush.offset.checkpoint.interval.ms=923125288, 
security.inter.broker.protocol=PLAINTEXT, 
zookeeper.session.timeout.ms=413974606, metrics.sample.window.ms=1000, 
offsets.topic.compression.codec=1, zookeeper.connection.timeout.ms=2068179601, 
fetch.purgatory.purge.interval.requests=1242197204, 
log.retention.bytes=692466534, log.dirs=/tmp/logs,/tmp/logs2, 
compression.type=lz4, replica.fetch.min.bytes=1791426389, 
log.roll.jitter.ms=356707666, log.cleaner.threads=2, 
replica.lag.time.max.ms=1073834162, advertised.port=4321, 
max.connections.per.ip.overrides=127.0.0.1:2, 127.0.0.2:3, 
socket.send.buffer.bytes=1319605180, metrics.num.samples=2, port=1234, 
replica.fetch.wait.max.ms=321, log.segment.bytes=468671022, 
log.retention.minutes=772707425, auto.create.topics.enable=true, 
replica.socket.receive.buffer.bytes=1923367476, 
log.cleaner.io.max.bytes.per.second=0.2, zookeeper.sync.time.ms=2072589946, 
log.roll.jitter.hours=2106718330, log.retention.check.interval.ms=906922522, 
reserved.broker.max.id=100, unclean.leader.election.enable=true, 
advertised.listeners=PLAINTEXT://:2909, log.cleaner.io.buffer.load.factor=1.0, 
consumer.min.session.timeout.ms=422104288, log.retention.ms=1496447411, 
replica.high.watermark.checkpoint.interval.ms=118464842, 
log.cleanup.policy=delete, log.cleaner.dedupe.buffer.size=3145729, 
offsets.commit.timeout.ms=2084609508, min.insync.replicas=963487957, 
leader.imbalance.per.broker.percentage=148038876, 
zookeeper.connect=127.0.0.1:2181, offsets.retention.minutes=1781435041, 
leader.imbalance.check.interval.seconds=1376263302, 
log.index.interval.bytes=242075900, socket.receive.buffer.bytes=369224522, 
log.cleaner.delete.retention.ms=898157008, replica.socket.timeout.ms=493318414, 
num.partitions=2, offsets.topic.segment.bytes=852590082, 
default.replication.factor=549663639, offsets.commit.required.acks=-1, 
log.cleaner.io.buffer.size=905972186, 
num.recovery.threads.per.data.dir=1012415473, log.retention.hours=1115262747, 
replica.fetch.max.bytes=2041540755, log.roll.hours=115708840, 
metric.reporters=, message.max.bytes=1234, offsets.load.buffer.size=1818565888, 
log.cleaner.min.cleanable.ratio=0.6, delete.topic.enable=true, 
listeners=PLAINTEXT://:9092, offset.metadata.max.bytes=1563320007, 
controlled.shutdown.retry.backoff.ms=1270013702, 
max.connections.per.ip=359602609, consumer.max.session.timeout.ms=2124317921, 
log.roll.ms=241126032, advertised.host.name=??????????, 
log.flush.scheduler.interval.ms=1548906710, auto.leader.rebalance.enable=false, 
producer.purgatory.purge.interval.requests=1640729755, 
controlled.shutdown.enable=false, log.index.size.max.bytes=1748380064, 
log.flush.interval.messages=982245822, broker.id=15, 
offsets.retention.check.interval.ms=593078788, 
replica.fetch.backoff.ms=394858256, background.threads=124969300, 
connections.max.idle.ms=554679959}>
        at org.junit.Assert.fail(Assert.java:92)
        at org.junit.Assert.failNotEquals(Assert.java:689)
        at org.junit.Assert.assertEquals(Assert.java:127)
        at org.junit.Assert.assertEquals(Assert.java:146)
        at unit.kafka.server.KafkaConfigConfigDefTest.testFromPropsToProps(KafkaConfigConfigDefTest.scala:257)
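
(The expected and actual maps above appear to list the same entries in
different iteration orders, so the real mismatch is hard to spot by eye; it
is likely confined to the non-ASCII host.name / advertised.host.name values
that render as '?'. A hypothetical helper like the one below, not part of
the test, diffs two maps key by key in sorted order and makes this kind of
assertion failure readable:)

    import java.util.Map;
    import java.util.TreeSet;

    public class MapDiff {
        // Print only the keys whose values differ, or that exist on one
        // side only, in sorted order instead of full iteration-order dumps.
        public static void printDiff(Map<String, ?> expected, Map<String, ?> actual) {
            TreeSet<String> keys = new TreeSet<>(expected.keySet());
            keys.addAll(actual.keySet());
            for (String k : keys) {
                Object e = expected.get(k);
                Object a = actual.get(k);
                if (e == null ? a != null : !e.equals(a))
                    System.out.printf("%s: expected=%s actual=%s%n", k, e, a);
            }
        }
    }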

unit.kafka.server.KafkaConfigConfigDefTest > testFromPropsInvalid PASSED

unit.kafka.server.KafkaConfigConfigDefTest > testSpecificProperties PASSED

unit.kafka.log.LogConfigTest > testFromPropsDefaults PASSED

unit.kafka.log.LogConfigTest > testFromPropsEmpty PASSED

unit.kafka.log.LogConfigTest > testFromPropsToProps PASSED

unit.kafka.log.LogConfigTest > testFromPropsInvalid PASSED

452 tests completed, 1 failed
:core:test FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':core:test'.
> There were failing tests. See the report at: <https://builds.apache.org/job/Kafka-trunk/ws/core/build/reports/tests/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output.

BUILD FAILED

Total time: 8 mins 42.582 secs
Build step 'Execute shell' marked build as failure
Setting GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
