[ https://issues.apache.org/jira/browse/CASSANDRA-20570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
David Capwell updated CASSANDRA-20570:
--------------------------------------
    Change Category: Operability
         Complexity: Low Hanging Fruit
             Status: Open  (was: Triage Needed)

Leveled Compaction doesn't validate maxBytesForLevel when the table is altered/created
---------------------------------------------------------------------------------------

                Key: CASSANDRA-20570
                URL: https://issues.apache.org/jira/browse/CASSANDRA-20570
            Project: Apache Cassandra
         Issue Type: Improvement
         Components: Local/Compaction
           Reporter: David Capwell
           Priority: Normal

In fuzz testing I hit this fun error message
{code}
java.lang.RuntimeException: Repair job has failed with the error message: Repair command #1 failed with error At most 9223372036854775807 bytes may be in a compaction level; your maxSSTableSize must be absurdly high to compute 4.915501902751334E24. Check the logs on the repair participants for further details
	at org.apache.cassandra.tools.RepairRunner.progress(RepairRunner.java:187)
{code}
I was able to create a table, write data to it, and it only ever had issues once I did a repair…

The error comes from
{code}
INFO [node2_Repair-Task:1] 2025-04-18 12:08:14,920 SubstituteLogger.java:169 - ERROR 19:08:14 Repair 138143d4-1dd2-11b2-ba62-f56ce069092d failed:
java.lang.RuntimeException: At most 9223372036854775807 bytes may be in a compaction level; your maxSSTableSize must be absurdly high to compute 4.915501902751334E24
	at org.apache.cassandra.db.compaction.LeveledManifest.maxBytesForLevel(LeveledManifest.java:191)
	at org.apache.cassandra.db.compaction.LeveledManifest.maxBytesForLevel(LeveledManifest.java:182)
	at org.apache.cassandra.db.compaction.LeveledManifest.getEstimatedTasks(LeveledManifest.java:651)
	at org.apache.cassandra.db.compaction.LeveledManifest.getEstimatedTasks(LeveledManifest.java:640)
	at org.apache.cassandra.db.compaction.LeveledManifest.getEstimatedTasks(LeveledManifest.java:635)
	at org.apache.cassandra.db.compaction.LeveledCompactionStrategy.getEstimatedRemainingTasks(LeveledCompactionStrategy.java:276)
	at org.apache.cassandra.db.compaction.CompactionStrategyManager.getEstimatedRemainingTasks(CompactionStrategyManager.java:1147)
	at org.apache.cassandra.metrics.CompactionMetrics$1.getValue(CompactionMetrics.java:81)
	at org.apache.cassandra.metrics.CompactionMetrics$1.getValue(CompactionMetrics.java:73)
	at org.apache.cassandra.db.compaction.CompactionManager.getPendingTasks(CompactionManager.java:2368)
	at org.apache.cassandra.service.ActiveRepairService.verifyCompactionsPendingThreshold(ActiveRepairService.java:659)
	at org.apache.cassandra.service.ActiveRepairService.prepareForRepair(ActiveRepairService.java:672)
	at org.apache.cassandra.repair.RepairCoordinator.prepare(RepairCoordinator.java:486)
	at org.apache.cassandra.repair.RepairCoordinator.runMayThrow(RepairCoordinator.java:329)
	at org.apache.cassandra.repair.RepairCoordinator.run(RepairCoordinator.java:280)
{code}
which has this logic
{code}
double bytes = Math.pow(levelFanoutSize, level) * maxSSTableSizeInBytes;
if (bytes > Long.MAX_VALUE)
    throw new RuntimeException("At most " + Long.MAX_VALUE + " bytes may be in a compaction level; your maxSSTableSize must be absurdly high to compute " + bytes);
{code}
The fuzz test had the following inputs
{code}
level = 8
levelFanoutSize = 90
maxSSTableSizeInBytes = 1141899264
{code}
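Plugging those fuzz inputs into the formula above shows how far past the limit they land (a rough back-of-the-envelope check using the values from the ticket, not output taken from the test itself):
{code}
// Rough check with the fuzz-test inputs above: 90^8 = 4.3046721e15, and
// 4.3046721e15 * 1141899264 ≈ 4.9155e24, versus Long.MAX_VALUE ≈ 9.2234e18.
double bytes = Math.pow(90, 8) * 1_141_899_264L; // ≈ 4.915501902751334E24
boolean overLimit = bytes > Long.MAX_VALUE;      // true, so the RuntimeException fires
{code}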
Given that the max level is known (it's 8, and is hard-coded), we can do this calculation during create table / alter table to make sure that we don't blow up later on.
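A minimal sketch of what that up-front check could look like follows; the class and method names, the MAX_LEVEL constant, and the use of IllegalArgumentException are illustrative assumptions, not existing Cassandra API:
{code}
// Hypothetical validation sketch mirroring the maxBytesForLevel logic; names and
// exception type are assumptions for illustration, not the actual Cassandra API.
public final class LeveledOptionsValidator
{
    // The maximum level is hard-coded at 8, per the description above.
    private static final int MAX_LEVEL = 8;

    public static void validate(int levelFanoutSize, long maxSSTableSizeInBytes)
    {
        double bytesAtMaxLevel = Math.pow(levelFanoutSize, MAX_LEVEL) * maxSSTableSizeInBytes;
        if (bytesAtMaxLevel > Long.MAX_VALUE)
            throw new IllegalArgumentException(
                "At most " + Long.MAX_VALUE + " bytes may be in a compaction level; "
                + "levelFanoutSize=" + levelFanoutSize + " with maxSSTableSizeInBytes=" + maxSSTableSizeInBytes
                + " would need " + bytesAtMaxLevel + " bytes at level " + MAX_LEVEL);
    }
}
{code}
Run against the compaction options at create/alter time, a check along these lines would reject the fuzz-test combination (fanout 90, ~1.1 GB sstable size) up front instead of surfacing during repair.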