dependabot[bot] opened a new pull request, #4643:
URL: https://github.com/apache/ignite-3/pull/4643

   Bumps [org.rocksdb:rocksdbjni](https://github.com/facebook/rocksdb) from 
9.6.1 to 9.7.3.
   <details>
   <summary>Release notes</summary>
   <p><em>Sourced from <a href="https://github.com/facebook/rocksdb/releases">org.rocksdb:rocksdbjni's releases</a>.</em></p>
   <blockquote>
   <h2>RocksDB 9.7.3</h2>
   <h2>9.7.3 (10/16/2024)</h2>
   <h3>Behavior Changes</h3>
   <ul>
   <li>The OPTIONS file to be loaded by a remote worker is now preserved so that it does not get purged by the primary host, using a technique similar to the one that keeps new SST files from being purged: min_options_file_numbers_ is tracked the same way pending_outputs_ is tracked.</li>
   </ul>
   <h2>9.7.2 (10/08/2024)</h2>
   <h3>Bug Fixes</h3>
   <ul>
   <li>Fix a bug for surfacing write unix time: 
<code>Iterator::GetProperty(&quot;rocksdb.iterator.write-time&quot;)</code> for 
non-L0 files.</li>
   </ul>
   <h2>9.7.1 (09/26/2024)</h2>
   <h3>Bug Fixes</h3>
   <ul>
   <li>Several DB option settings could be lost through <code>GetOptionsFromString()</code>, possibly elsewhere as well. Affected options, now fixed: <code>background_close_inactive_wals</code>, <code>write_dbid_to_manifest</code>, <code>write_identity_file</code>, <code>prefix_seek_opt_in_only</code></li>
   <li>Fix undercounting of allocated memory in the compressed secondary cache caused by looking at the compressed block size rather than the actual memory allocated, which could be larger due to internal fragmentation.</li>
   <li>Skip insertion of compressed blocks in the secondary cache if the 
lowest_used_cache_tier DB option is kVolatileTier.</li>
   </ul>
   <h2>9.7.0 (09/20/2024)</h2>
   <h3>New Features</h3>
   <ul>
   <li>Make Cache a customizable class that can be instantiated by the object 
registry.</li>
   <li>Add new option <code>prefix_seek_opt_in_only</code> that makes iterators 
generally safer when you might set a <code>prefix_extractor</code>. When 
<code>prefix_seek_opt_in_only=true</code>, which is expected to be the future 
default, prefix seek is only used when <code>prefix_same_as_start</code> or 
<code>auto_prefix_mode</code> are set. Also, <code>prefix_same_as_start</code> 
and <code>auto_prefix_mode</code> now allow prefix filtering even with 
<code>total_order_seek=true</code>.</li>
   <li>Add a new table property &quot;rocksdb.key.largest.seqno&quot; which records the largest sequence number of all keys in the file. It is verified to be zero during SST file ingestion.</li>
   </ul>
   <h3>Behavior Changes</h3>
   <ul>
   <li>Changed the semantics of the BlobDB configuration option 
<code>blob_garbage_collection_force_threshold</code> to define a threshold for 
the overall garbage ratio of all blob files currently eligible for garbage 
collection (according to <code>blob_garbage_collection_age_cutoff</code>). This 
can provide better control over space amplification at the cost of slightly 
higher write amplification.</li>
   <li>Set <code>write_dbid_to_manifest=true</code> by default. This means DB 
ID will now be preserved through backups, checkpoints, etc. by default. Also 
add <code>write_identity_file</code> option which can be set to false for 
anticipated future behavior.</li>
   <li>In FIFO compaction, compactions for changing file temperature (configured by option <code>file_temperature_age_thresholds</code>) will compact one file at a time, instead of merging multiple eligible files together (<a href="https://redirect.github.com/facebook/rocksdb/issues/13018">#13018</a>).</li>
   <li>Support ingesting DB-generated files using hard links, i.e. IngestExternalFileOptions::move_files/link_files and IngestExternalFileOptions::allow_db_generated_files.</li>
   <li>Add a new file ingestion option 
<code>IngestExternalFileOptions::link_files</code> to hard link input files and 
preserve original files links after ingestion.</li>
   <li>DB::Close now untracks files in SstFileManager, making available any space used by them. Prior to this change they would be orphaned until the DB is re-opened.</li>
   </ul>
   <h3>Bug Fixes</h3>
   <ul>
   <li>Fix a bug in CompactRange() where result files may not be compacted in 
any future compaction. This can only happen when users configure 
CompactRangeOptions::change_level to true and the change level step of manual 
   compaction fails (<a href="https://redirect.github.com/facebook/rocksdb/issues/13009">#13009</a>).</li>
   <li>Fix handling of dynamic change of <code>prefix_extractor</code> with 
memtable prefix filter. Previously, prefix seek could mix different prefix 
interpretations between memtable and SST files. Now the latest 
<code>prefix_extractor</code> at the time of iterator creation or refresh is 
respected.</li>
   <li>Fix a bug with manual_wal_flush and auto error recovery from WAL failure that may cause CFs to be inconsistent (<a href="https://redirect.github.com/facebook/rocksdb/issues/12995">#12995</a>). The fix treats a potential WAL write failure as a fatal error when manual_wal_flush is true, and disables auto error recovery from these errors.</li>
   </ul>
   </blockquote>
   </details>
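
   Of the changes above, the 9.7.0 prefix-seek rework is the one most likely to affect iterator behavior. Below is a minimal RocksJava sketch of opt-in prefix seek via <code>prefix_same_as_start</code> — not Ignite code, just an illustration; the class name, path, and keys are made up, and it assumes rocksdbjni is on the classpath:

   ```java
   import org.rocksdb.*;

   import java.nio.charset.StandardCharsets;
   import java.nio.file.Files;
   import java.util.ArrayList;
   import java.util.List;

   public class PrefixSeekExample {
       /** Returns all keys sharing the 4-byte prefix of {@code prefix}, in order. */
       public static List<String> scanPrefix(String dbPath, String prefix) throws RocksDBException {
           RocksDB.loadLibrary();
           try (Options options = new Options().setCreateIfMissing(true)) {
               // First 4 bytes of each key form its prefix ("user", "xact", ...).
               options.useFixedLengthPrefixExtractor(4);
               try (RocksDB db = RocksDB.open(options, dbPath)) {
                   for (String k : new String[] {"user:1", "user:2", "xact:9"}) {
                       db.put(k.getBytes(StandardCharsets.UTF_8), new byte[0]);
                   }
                   // prefix_same_as_start opts in to prefix seek: iteration stops
                   // once a key's prefix no longer matches the seek target's prefix.
                   try (ReadOptions ro = new ReadOptions().setPrefixSameAsStart(true);
                        RocksIterator it = db.newIterator(ro)) {
                       List<String> hits = new ArrayList<>();
                       for (it.seek(prefix.getBytes(StandardCharsets.UTF_8)); it.isValid(); it.next()) {
                           hits.add(new String(it.key(), StandardCharsets.UTF_8));
                       }
                       return hits;
                   }
               }
           }
       }

       public static void main(String[] args) throws Exception {
           String path = Files.createTempDirectory("rocks-prefix").toString();
           System.out.println(scanPrefix(path, "user")); // [user:1, user:2]
       }
   }
   ```

   With <code>prefix_seek_opt_in_only=true</code> (the anticipated future default), iterators created without <code>prefix_same_as_start</code> or <code>auto_prefix_mode</code> would fall back to total-order behavior even when a <code>prefix_extractor</code> is configured.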
   <details>
   <summary>Changelog</summary>
   <p><em>Sourced from <a href="https://github.com/facebook/rocksdb/blob/v9.7.3/HISTORY.md">org.rocksdb:rocksdbjni's changelog</a>.</em></p>
   <blockquote>
   <h2>9.7.3 (10/16/2024)</h2>
   <h3>Behavior Changes</h3>
   <ul>
   <li>The OPTIONS file to be loaded by a remote worker is now preserved so that it does not get purged by the primary host, using a technique similar to the one that keeps new SST files from being purged: min_options_file_numbers_ is tracked the same way pending_outputs_ is tracked.</li>
   </ul>
   <h2>9.7.2 (10/08/2024)</h2>
   <h3>Bug Fixes</h3>
   <ul>
   <li>Fix a bug for surfacing write unix time: 
<code>Iterator::GetProperty(&quot;rocksdb.iterator.write-time&quot;)</code> for 
non-L0 files.</li>
   </ul>
   <h2>9.7.1 (09/26/2024)</h2>
   <h3>Bug Fixes</h3>
   <ul>
   <li>Several DB option settings could be lost through <code>GetOptionsFromString()</code>, possibly elsewhere as well. Affected options, now fixed: <code>background_close_inactive_wals</code>, <code>write_dbid_to_manifest</code>, <code>write_identity_file</code>, <code>prefix_seek_opt_in_only</code></li>
   <li>Fix undercounting of allocated memory in the compressed secondary cache caused by looking at the compressed block size rather than the actual memory allocated, which could be larger due to internal fragmentation.</li>
   <li>Skip insertion of compressed blocks in the secondary cache if the 
lowest_used_cache_tier DB option is kVolatileTier.</li>
   </ul>
   <h2>9.7.0 (09/20/2024)</h2>
   <h3>New Features</h3>
   <ul>
   <li>Make Cache a customizable class that can be instantiated by the object 
registry.</li>
   <li>Add new option <code>prefix_seek_opt_in_only</code> that makes iterators 
generally safer when you might set a <code>prefix_extractor</code>. When 
<code>prefix_seek_opt_in_only=true</code>, which is expected to be the future 
default, prefix seek is only used when <code>prefix_same_as_start</code> or 
<code>auto_prefix_mode</code> are set. Also, <code>prefix_same_as_start</code> 
and <code>auto_prefix_mode</code> now allow prefix filtering even with 
<code>total_order_seek=true</code>.</li>
   <li>Add a new table property &quot;rocksdb.key.largest.seqno&quot; which records the largest sequence number of all keys in the file. It is verified to be zero during SST file ingestion.</li>
   </ul>
   <h3>Behavior Changes</h3>
   <ul>
   <li>Changed the semantics of the BlobDB configuration option 
<code>blob_garbage_collection_force_threshold</code> to define a threshold for 
the overall garbage ratio of all blob files currently eligible for garbage 
collection (according to <code>blob_garbage_collection_age_cutoff</code>). This 
can provide better control over space amplification at the cost of slightly 
higher write amplification.</li>
   <li>Set <code>write_dbid_to_manifest=true</code> by default. This means DB 
ID will now be preserved through backups, checkpoints, etc. by default. Also 
add <code>write_identity_file</code> option which can be set to false for 
anticipated future behavior.</li>
   <li>In FIFO compaction, compactions for changing file temperature (configured by option <code>file_temperature_age_thresholds</code>) will compact one file at a time, instead of merging multiple eligible files together (<a href="https://redirect.github.com/facebook/rocksdb/issues/13018">#13018</a>).</li>
   <li>Support ingesting DB-generated files using hard links, i.e. IngestExternalFileOptions::move_files/link_files and IngestExternalFileOptions::allow_db_generated_files.</li>
   <li>Add a new file ingestion option 
<code>IngestExternalFileOptions::link_files</code> to hard link input files and 
preserve original files links after ingestion.</li>
   <li>DB::Close now untracks files in SstFileManager, making available any space used by them. Prior to this change they would be orphaned until the DB is re-opened.</li>
   </ul>
   <h3>Bug Fixes</h3>
   <ul>
   <li>Fix a bug in CompactRange() where result files may not be compacted in 
any future compaction. This can only happen when users configure 
CompactRangeOptions::change_level to true and the change level step of manual 
   compaction fails (<a href="https://redirect.github.com/facebook/rocksdb/issues/13009">#13009</a>).</li>
   <li>Fix handling of dynamic change of <code>prefix_extractor</code> with 
memtable prefix filter. Previously, prefix seek could mix different prefix 
interpretations between memtable and SST files. Now the latest 
<code>prefix_extractor</code> at the time of iterator creation or refresh is 
respected.</li>
   <li>Fix a bug with manual_wal_flush and auto error recovery from WAL failure that may cause CFs to be inconsistent (<a href="https://redirect.github.com/facebook/rocksdb/issues/12995">#12995</a>). The fix treats a potential WAL write failure as a fatal error when manual_wal_flush is true, and disables auto error recovery from these errors.</li>
   </ul>
   <h2>9.6.0 (08/19/2024)</h2>
   <h3>New Features</h3>
   <ul>
   <li>Best-efforts recovery supports recovering to an incomplete Version with a clean seqno cut that presents a valid point-in-time view from the user's perspective, if the versioning history doesn't include atomic flush.</li>
   <li>New option 
<code>BlockBasedTableOptions::decouple_partitioned_filters</code> should 
improve efficiency in serving read queries because filter and index partitions 
can consistently target the configured <code>metadata_block_size</code>. This 
option is currently opt-in.</li>
   <li>Introduce a new mutable CF option <code>paranoid_memory_checks</code>. 
It enables additional validation on data integrity during reads/scanning. 
Currently, skip list based memtable will validate key ordering during look up 
and scans.</li>
   </ul>
   <h3>Public API Changes</h3>
   <ul>
   <li>Add ticker stats to count file read retries due to checksum mismatch</li>
   <li>Adds optional installation callback function for remote compaction</li>
   </ul>
   <h3>Behavior Changes</h3>
   <ul>
   <li>There may be less intra-L0 compaction triggered by total L0 size being 
too small. We now use compensated file size (tombstones are assigned some value 
size) when calculating L0 size and reduce the threshold for L0 size limit. This 
is to avoid accumulating too much data/tombstones in L0.</li>
   </ul>
   <h3>Bug Fixes</h3>
   <ul>
   <li>Make DestroyDB support slow deletion when it's configured in <code>SstFileManager</code>. The slow deletion is subject to the configured <code>rate_bytes_per_sec</code>, but not subject to the <code>max_trash_db_ratio</code>.</li>
   <li>Fixed a bug where we set unprep_seqs_ even when WriteImpl() fails. This was caught by stress test write fault injection in WriteImpl(). This may have incorrectly caused iterator creation failures for unvalidated writes or returned wrong results for WriteUnpreparedTxn::GetUnpreparedSequenceNumbers().</li>
   </ul>
   <!-- raw HTML omitted -->
   </blockquote>
   <p>... (truncated)</p>
   </details>
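
   The manual_wal_flush fix noted in 9.7.0 changes error handling for callers that defer WAL flushes, so a quick RocksJava sketch of that mode may be useful context — again illustrative only (class name and path are made up; assumes rocksdbjni on the classpath):

   ```java
   import org.rocksdb.*;

   import java.nio.charset.StandardCharsets;
   import java.nio.file.Files;

   public class ManualWalFlushExample {
       /** Writes one key under manual_wal_flush, flushes the WAL, and reads it back. */
       public static String roundTrip(String dbPath) throws RocksDBException {
           RocksDB.loadLibrary();
           try (Options options = new Options()
                   .setCreateIfMissing(true)
                   .setManualWalFlush(true); // WAL writes are buffered until flushWal()
                RocksDB db = RocksDB.open(options, dbPath)) {
               db.put("k".getBytes(StandardCharsets.UTF_8), "v".getBytes(StandardCharsets.UTF_8));
               // With manual_wal_flush the write above is not yet persisted in the WAL;
               // flush explicitly (sync=true also syncs the WAL file). Per the 9.7.0 fix,
               // a WAL write failure in this mode is now treated as fatal instead of
               // being auto-recovered, which could leave column families inconsistent.
               db.flushWal(true);
               return new String(db.get("k".getBytes(StandardCharsets.UTF_8)), StandardCharsets.UTF_8);
           }
       }

       public static void main(String[] args) throws Exception {
           System.out.println(roundTrip(Files.createTempDirectory("rocks-wal").toString())); // v
       }
   }
   ```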
   <details>
   <summary>Commits</summary>
   <ul>
   <li><a href="https://github.com/facebook/rocksdb/commit/0e2801ac30b3f283c3b14e523ba3667eca024f09"><code>0e2801a</code></a> Version and HISTORY.md update for 9.7.3 patch</li>
   <li><a href="https://github.com/facebook/rocksdb/commit/2647d5c661c8285598150f001ced96142c693269"><code>2647d5c</code></a> Fix Compaction Stats (<a href="https://redirect.github.com/facebook/rocksdb/issues/13071">#13071</a>)</li>
   <li><a href="https://github.com/facebook/rocksdb/commit/11f21cf86b519a6f33b44f3cd3e3fe3fb1d1e315"><code>11f21cf</code></a> Preserve Options File (<a href="https://redirect.github.com/facebook/rocksdb/issues/13074">#13074</a>)</li>
   <li><a href="https://github.com/facebook/rocksdb/commit/eca4f106bb9994045e31ca463187ecf5dc211b3f"><code>eca4f10</code></a> Add file_checksum from FileChecksumGenFactory and Tests for corrupted output ...</li>
   <li><a href="https://github.com/facebook/rocksdb/commit/5bb363edc72c31d57abe4c9eace5bb48d0e3bba3"><code>5bb363e</code></a> Print unknown writebatch tag (<a href="https://redirect.github.com/facebook/rocksdb/issues/13062">#13062</a>)</li>
   <li><a href="https://github.com/facebook/rocksdb/commit/b5cde68b8ab2b78b3364c23c566eee14d5cc488a"><code>b5cde68</code></a> Update HISTORY for 9.7.2</li>
   <li><a href="https://github.com/facebook/rocksdb/commit/d9787264a8eb0528966ded06499dff46a4f8739c"><code>d978726</code></a> Update version.h</li>
   <li><a href="https://github.com/facebook/rocksdb/commit/2fef013616ac8e474d19f6b0815155a37ae9350f"><code>2fef013</code></a> Fix a bug for surfacing write unix time (<a href="https://redirect.github.com/facebook/rocksdb/issues/13057">#13057</a>)</li>
   <li><a href="https://github.com/facebook/rocksdb/commit/a24567271031a16115c0ceb4abee03acd9831787"><code>a245672</code></a> Update HISTORY and version for 9.7.1</li>
   <li><a href="https://github.com/facebook/rocksdb/commit/786ac6a0e9fcf96a2a9680953c35c465a59ad965"><code>786ac6a</code></a> Bug fix and test BuildDBOptions (<a href="https://redirect.github.com/facebook/rocksdb/issues/13038">#13038</a>)</li>
   <li>Additional commits viewable in <a href="https://github.com/facebook/rocksdb/compare/v9.6.1...v9.7.3">compare view</a></li>
   </ul>
   </details>
   <br />
   
   
   [![Dependabot compatibility 
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=org.rocksdb:rocksdbjni&package-manager=gradle&previous-version=9.6.1&new-version=9.7.3)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
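
   For reference, the bump itself is the usual one-line change to the Gradle dependency declaration; the coordinates below match this PR's versions, though the configuration name is illustrative and Ignite's build may declare the version elsewhere (e.g. in a version catalog):

   ```gradle
   dependencies {
       // before: implementation 'org.rocksdb:rocksdbjni:9.6.1'
       implementation 'org.rocksdb:rocksdbjni:9.7.3'
   }
   ```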
   
   Dependabot will resolve any conflicts with this PR as long as you don't 
alter it yourself. You can also trigger a rebase manually by commenting 
`@dependabot rebase`.
   
   [//]: # (dependabot-automerge-start)
   [//]: # (dependabot-automerge-end)
   
   ---
   
   <details>
   <summary>Dependabot commands and options</summary>
   <br />
   
   You can trigger Dependabot actions by commenting on this PR:
   - `@dependabot rebase` will rebase this PR
   - `@dependabot recreate` will recreate this PR, overwriting any edits that 
have been made to it
   - `@dependabot merge` will merge this PR after your CI passes on it
   - `@dependabot squash and merge` will squash and merge this PR after your CI 
passes on it
   - `@dependabot cancel merge` will cancel a previously requested merge and 
block automerging
   - `@dependabot reopen` will reopen this PR if it is closed
   - `@dependabot close` will close this PR and stop Dependabot recreating it. 
You can achieve the same result by closing it manually
   - `@dependabot show <dependency name> ignore conditions` will show all of 
the ignore conditions of the specified dependency
   - `@dependabot ignore this major version` will close this PR and stop 
Dependabot creating any more for this major version (unless you reopen the PR 
or upgrade to it yourself)
   - `@dependabot ignore this minor version` will close this PR and stop 
Dependabot creating any more for this minor version (unless you reopen the PR 
or upgrade to it yourself)
   - `@dependabot ignore this dependency` will close this PR and stop 
Dependabot creating any more for this dependency (unless you reopen the PR or 
upgrade to it yourself)
   
   
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: notifications-unsubscr...@ignite.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
