[ https://issues.apache.org/jira/browse/CASSANDRA-20570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=18008613#comment-18008613 ]
Nikhil commented on CASSANDRA-20570:
------------------------------------

Hello [~dcapwell], I have made the required changes and tested them on my side against JDK 8 and JDK 11. Apologies for the inconvenience caused. I ran the commands below and got BUILD SUCCESSFUL:
{code:java}
1. ant test -Dtest.name=CreateTableValidationTest -Duse.jdk11=true
2. ant test -Dtest.name=LeveledCompactionStrategyTest -Duse.jdk11=true
3. ant test -Duse.jdk11=true
4. And by running CQL queries via python3 bin/cqlsh.py{code}
However, I am not able to reproduce the errors below:
{code:java}
1. Test org.apache.cassandra.simulator.test.AccordHarrySimulationTest::test-cassandra.testtag_IS_UNDEFI
2. Error parsing jvm11-dtests/unitTestReports/TESTS-TestSuites.xml 'message'
3. Test org.apache.cassandra.fuzz.topology.AccordBootstrapTest::bootstrapFuzzTest-_jdk11 failed
4. Error parsing jvm11-utests/unitTestReports/TESTS-TestSuites.xml Unable to find failure/error for sui
5. Error parsing jvm17-dtests/unitTestReports/TESTS-TestSuites.xml 'message'
6. Test org.apache.cassandra.fuzz.sai.MultiNodeSAITest::mappertimedout had an error,Test org.apache.cas
7. Test org.apache.cassandra.cql3.ViewComplexLivenessTest::testStrictLivenessTombstone[0]-_jdk17 failed
8. Test replace_address_test.TestReplaceAddress::replace_address_test.py::TestReplaceAddress::test_fail
9.
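For context, the validation being tested can be sketched as below. This is a minimal, standalone sketch, not the actual patch: the class and method names here are hypothetical, while the overflow check itself mirrors the one quoted from LeveledManifest.maxBytesForLevel in the issue description, just run eagerly instead of during repair/compaction.
{code:java}
// Hypothetical sketch of validating LCS level sizing at CREATE/ALTER TABLE time.
public final class LeveledValidationSketch
{
    // LeveledManifest hard-codes the maximum level to 8 (per the issue description).
    static final int MAX_LEVEL = 8;

    // Rejects option combinations whose top-level byte budget would exceed
    // Long.MAX_VALUE, mirroring the runtime check in LeveledManifest.maxBytesForLevel.
    static void validateMaxBytesForLevel(int levelFanoutSize, long maxSSTableSizeInBytes)
    {
        double bytes = Math.pow(levelFanoutSize, MAX_LEVEL) * maxSSTableSizeInBytes;
        if (bytes > Long.MAX_VALUE)
            throw new IllegalArgumentException(
                "At most " + Long.MAX_VALUE + " bytes may be in a compaction level; " +
                "fanout_size=" + levelFanoutSize + " with max sstable size " +
                maxSSTableSizeInBytes + " bytes would require " + bytes + " bytes");
    }

    public static void main(String[] args)
    {
        // The fuzz-test inputs from the report: 90^8 * 1141899264 ≈ 4.9155E24,
        // far beyond Long.MAX_VALUE, so this must be rejected up front.
        try
        {
            validateMaxBytesForLevel(90, 1141899264L);
            System.out.println("accepted");
        }
        catch (IllegalArgumentException e)
        {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}{code}
With the fuzz-test inputs from the description (fanout 90, max sstable size 1141899264 bytes) this rejects at DDL time rather than surfacing as a RuntimeException during repair.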
Test upgrade_tests.paging_test.TestPagingWithDeletionsNodes3RF3_Upgrade_indev_4_0_x_To_indev_trunk::{code}


> Leveled Compaction doesn't validate maxBytesForLevel when the table is altered/created
> --------------------------------------------------------------------------------------
>
>                 Key: CASSANDRA-20570
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-20570
>             Project: Apache Cassandra
>          Issue Type: Improvement
>          Components: Local/Compaction
>            Reporter: David Capwell
>            Assignee: Nikhil
>            Priority: Normal
>         Attachments: ci_summary-cassandra-4.0-67a822379d00898af6071f444e43113e7c4a8f2a.html, ci_summary-cassandra-4.1-3c53118393f75ae3808ba553284cbd6c2a28fddf.html, ci_summary-cassandra-5.0-7515ff6ca5ec8ecd064e7e61f8eac3297288f8f7.html, ci_summary-trunk-9dadf7c0999632305578d1ca710f54b690fa4792.html, result_details-cassandra-4.0-67a822379d00898af6071f444e43113e7c4a8f2a.tar.gz, result_details-cassandra-5.0-7515ff6ca5ec8ecd064e7e61f8eac3297288f8f7.tar.gz, result_details-trunk-9dadf7c0999632305578d1ca710f54b690fa4792.tar.gz
>
>          Time Spent: 4h
>  Remaining Estimate: 0h
>
> In fuzz testing I hit this fun error message
> {code}
> java.lang.RuntimeException: Repair job has failed with the error message: Repair command #1 failed with error At most 9223372036854775807 bytes may be in a compaction level; your maxSSTableSize must be absurdly high to compute 4.915501902751334E24. Check the logs on the repair participants for further details
> 	at org.apache.cassandra.tools.RepairRunner.progress(RepairRunner.java:187)
> {code}
> I was able to create a table, write data to it, and it only ever had issues once I did a repair…
> The error comes from
> {code}
> INFO [node2_Repair-Task:1] 2025-04-18 12:08:14,920 SubstituteLogger.java:169 - ERROR 19:08:14 Repair 138143d4-1dd2-11b2-ba62-f56ce069092d failed: java.lang.RuntimeException: At most 9223372036854775807 bytes may be in a compaction level; your maxSSTableSize must be absurdly high to compute 4.915501902751334E24
> 	at org.apache.cassandra.db.compaction.LeveledManifest.maxBytesForLevel(LeveledManifest.java:191)
> 	at org.apache.cassandra.db.compaction.LeveledManifest.maxBytesForLevel(LeveledManifest.java:182)
> 	at org.apache.cassandra.db.compaction.LeveledManifest.getEstimatedTasks(LeveledManifest.java:651)
> 	at org.apache.cassandra.db.compaction.LeveledManifest.getEstimatedTasks(LeveledManifest.java:640)
> 	at org.apache.cassandra.db.compaction.LeveledManifest.getEstimatedTasks(LeveledManifest.java:635)
> 	at org.apache.cassandra.db.compaction.LeveledCompactionStrategy.getEstimatedRemainingTasks(LeveledCompactionStrategy.java:276)
> 	at org.apache.cassandra.db.compaction.CompactionStrategyManager.getEstimatedRemainingTasks(CompactionStrategyManager.java:1147)
> 	at org.apache.cassandra.metrics.CompactionMetrics$1.getValue(CompactionMetrics.java:81)
> 	at org.apache.cassandra.metrics.CompactionMetrics$1.getValue(CompactionMetrics.java:73)
> 	at org.apache.cassandra.db.compaction.CompactionManager.getPendingTasks(CompactionManager.java:2368)
> 	at org.apache.cassandra.service.ActiveRepairService.verifyCompactionsPendingThreshold(ActiveRepairService.java:659)
> 	at org.apache.cassandra.service.ActiveRepairService.prepareForRepair(ActiveRepairService.java:672)
> 	at org.apache.cassandra.repair.RepairCoordinator.prepare(RepairCoordinator.java:486)
> 	at org.apache.cassandra.repair.RepairCoordinator.runMayThrow(RepairCoordinator.java:329)
> 	at org.apache.cassandra.repair.RepairCoordinator.run(RepairCoordinator.java:280)
> {code}
> Which has this logic
> {code}
> double bytes = Math.pow(levelFanoutSize, level) * maxSSTableSizeInBytes;
> if (bytes > Long.MAX_VALUE)
>     throw new RuntimeException("At most " + Long.MAX_VALUE + " bytes may be in a compaction level; your maxSSTableSize must be absurdly high to compute " + bytes);
> {code}
> The fuzz test had the following inputs
> {code}
> level = 8
> levelFanoutSize = 90
> maxSSTableSizeInBytes = 1141899264
> {code}
> Given that the max level is known (it's 8, and is hard-coded), we can do this calculation during create table / alter table to make sure that we don't blow up later on



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org