dependabot[bot] opened a new pull request, #10102:
URL: https://github.com/apache/gravitino/pull/10102

   Bumps [llama-index](https://github.com/run-llama/llama_index) from 0.13.0 to 0.14.15.
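
   Since the bump crosses a minor-version boundary (0.13.x to 0.14.x), it can be worth confirming which version actually resolves in an environment once this PR merges. A minimal check, using only the Python standard library (the distribution name `llama-index` comes from this PR; nothing else is assumed):

   ```python
   # Print the installed version of the bumped dependency.
   # Assumes only that llama-index is installed in the current environment.
   from importlib.metadata import version

   print(version("llama-index"))  # expected: 0.14.15 once this PR is merged
   ```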
   <details>
   <summary>Release notes</summary>
   <p><em>Sourced from <a href="https://github.com/run-llama/llama_index/releases">llama-index's releases</a>.</em></p>
   <blockquote>
   <h2>v0.14.15</h2>
   <h1>Release Notes</h1>
   <h2>[2026-02-18]</h2>
   <h3>llama-index-agent-agentmesh [0.1.0]</h3>
   <ul>
   <li>[Integration] AgentMesh: Trust Layer for LlamaIndex Agents (<a href="https://redirect.github.com/run-llama/llama_index/pull/20644">#20644</a>)</li>
   </ul>
   <h3>llama-index-core [0.14.15]</h3>
   <ul>
   <li>Support basic operations for multimodal types (<a href="https://redirect.github.com/run-llama/llama_index/pull/20640">#20640</a>)</li>
   <li>Feat recursive llm type support (<a href="https://redirect.github.com/run-llama/llama_index/pull/20642">#20642</a>)</li>
   <li>fix: remove redundant metadata_seperator field from TextNode (<a href="https://redirect.github.com/run-llama/llama_index/pull/20649">#20649</a>)</li>
   <li>fix(tests): update mock prompt type in mock_prompts.py (<a href="https://redirect.github.com/run-llama/llama_index/pull/20661">#20661</a>)</li>
   <li>Feat multimodal template var formatting (<a href="https://redirect.github.com/run-llama/llama_index/pull/20682">#20682</a>)</li>
   <li>Feat multimodal prompt templates (<a href="https://redirect.github.com/run-llama/llama_index/pull/20683">#20683</a>)</li>
   <li>Feat multimodal chat prompt helper (<a href="https://redirect.github.com/run-llama/llama_index/pull/20684">#20684</a>)</li>
   <li>Add retry and error handling to BaseExtractor (<a href="https://redirect.github.com/run-llama/llama_index/pull/20693">#20693</a>) (see the retry sketch after these release notes)</li>
   <li>ensure at least one message/content block is returned by the old memory (<a href="https://redirect.github.com/run-llama/llama_index/pull/20729">#20729</a>)</li>
   </ul>
   <h3>llama-index-embeddings-ibm [0.6.0.post1]</h3>
   <ul>
   <li>chore: Remove persistent_connection parameter support, update (<a href="https://redirect.github.com/run-llama/llama_index/pull/20714">#20714</a>)</li>
   <li>docs: Update IBM docs (<a href="https://redirect.github.com/run-llama/llama_index/pull/20718">#20718</a>)</li>
   </ul>
   <h3>llama-index-llms-anthropic [0.10.9]</h3>
   <ul>
   <li>Sonnet 4-6 addition (<a href="https://redirect.github.com/run-llama/llama_index/pull/20723">#20723</a>)</li>
   </ul>
   <h3>llama-index-llms-bedrock-converse [0.12.10]</h3>
   <ul>
   <li>fix(bedrock-converse): ensure thinking_delta is populated in all chat modes (<a href="https://redirect.github.com/run-llama/llama_index/pull/20664">#20664</a>)</li>
   <li>feat(bedrock-converse): Add support for Claude Sonnet 4.6 (<a href="https://redirect.github.com/run-llama/llama_index/pull/20726">#20726</a>)</li>
   </ul>
   <h3>llama-index-llms-ibm [0.7.0.post1]</h3>
   <ul>
   <li>chore: Remove persistent_connection parameter support, update (<a href="https://redirect.github.com/run-llama/llama_index/pull/20714">#20714</a>)</li>
   <li>docs: Update IBM docs (<a href="https://redirect.github.com/run-llama/llama_index/pull/20718">#20718</a>)</li>
   </ul>
   <h3>llama-index-llms-mistralai [0.10.0]</h3>
   <ul>
   <li>Rrubini/mistral azure sdk (<a href="https://redirect.github.com/run-llama/llama_index/pull/20668">#20668</a>)</li>
   </ul>
   <h3>llama-index-llms-oci-data-science [1.0.0]</h3>
   <ul>
   <li>Add support for new OCI DataScience endpoint /predictWithStream for streaming use case (<a href="https://redirect.github.com/run-llama/llama_index/pull/20545">#20545</a>)</li>
   </ul>
   <h3>llama-index-observability-otel [0.3.0]</h3>
   <!-- raw HTML omitted -->
   </blockquote>
   <p>... (truncated)</p>
   </details>
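
   The core list above mentions retry and error handling added to BaseExtractor (#20693). These notes don't show the new keyword arguments, so the sketch below does not use them; it only illustrates the failure mode that change targets, with a hand-rolled retry around the existing `aextract()` call. `TitleExtractor`, `TextNode`, and `aextract` are part of the existing llama-index API; the retry and backoff policy here is an assumption for illustration.

   ```python
   # Hedged sketch: manual retry around metadata extraction, approximating the
   # resilience that #20693 builds into BaseExtractor itself. The retry/backoff
   # policy below is illustrative, not the library's new API.
   import asyncio

   from llama_index.core.extractors import TitleExtractor
   from llama_index.core.schema import TextNode

   async def extract_with_retry(extractor, nodes, attempts=3):
       for attempt in range(1, attempts + 1):
           try:
               return await extractor.aextract(nodes)
           except Exception:  # e.g., transient LLM or network errors
               if attempt == attempts:
                   raise
               await asyncio.sleep(2 ** attempt)  # simple exponential backoff

   # Usage (requires a configured LLM, e.g., an OpenAI key in the environment):
   # extractor = TitleExtractor()
   # metadata = asyncio.run(extract_with_retry(extractor, [TextNode(text="...")]))
   ```
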
   <details>
   <summary>Changelog</summary>
   <p><em>Sourced from <a href="https://github.com/run-llama/llama_index/blob/main/CHANGELOG.md">llama-index's changelog</a>.</em></p>
   <blockquote>
   <h3>llama-index-core [0.14.15]</h3>
   <ul>
   <li>Support basic operations for multimodal types (<a href="https://redirect.github.com/run-llama/llama_index/pull/20640">#20640</a>) (see the content-block sketch after this changelog)</li>
   <li>Feat recursive llm type support (<a href="https://redirect.github.com/run-llama/llama_index/pull/20642">#20642</a>)</li>
   <li>fix: remove redundant metadata_seperator field from TextNode (<a href="https://redirect.github.com/run-llama/llama_index/pull/20649">#20649</a>)</li>
   <li>fix(tests): update mock prompt type in mock_prompts.py (<a href="https://redirect.github.com/run-llama/llama_index/pull/20661">#20661</a>)</li>
   <li>Feat multimodal template var formatting (<a href="https://redirect.github.com/run-llama/llama_index/pull/20682">#20682</a>)</li>
   <li>Feat multimodal prompt templates (<a href="https://redirect.github.com/run-llama/llama_index/pull/20683">#20683</a>)</li>
   <li>Feat multimodal chat prompt helper (<a href="https://redirect.github.com/run-llama/llama_index/pull/20684">#20684</a>)</li>
   <li>Add retry and error handling to BaseExtractor (<a href="https://redirect.github.com/run-llama/llama_index/pull/20693">#20693</a>)</li>
   <li>ensure at least one message/content block is returned by the old memory (<a href="https://redirect.github.com/run-llama/llama_index/pull/20729">#20729</a>)</li>
   </ul>
   <h3>llama-index-embeddings-ibm [0.6.0.post1]</h3>
   <ul>
   <li>chore: Remove persistent_connection parameter support, update (<a href="https://redirect.github.com/run-llama/llama_index/pull/20714">#20714</a>)</li>
   <li>docs: Update IBM docs (<a href="https://redirect.github.com/run-llama/llama_index/pull/20718">#20718</a>)</li>
   </ul>
   <h3>llama-index-llms-anthropic [0.10.9]</h3>
   <ul>
   <li>Sonnet 4-6 addition (<a href="https://redirect.github.com/run-llama/llama_index/pull/20723">#20723</a>)</li>
   </ul>
   <h3>llama-index-llms-bedrock-converse [0.12.10]</h3>
   <ul>
   <li>fix(bedrock-converse): ensure thinking_delta is populated in all chat modes (<a href="https://redirect.github.com/run-llama/llama_index/pull/20664">#20664</a>)</li>
   <li>feat(bedrock-converse): Add support for Claude Sonnet 4.6 (<a href="https://redirect.github.com/run-llama/llama_index/pull/20726">#20726</a>)</li>
   </ul>
   <h3>llama-index-llms-ibm [0.7.0.post1]</h3>
   <ul>
   <li>chore: Remove persistent_connection parameter support, update (<a href="https://redirect.github.com/run-llama/llama_index/pull/20714">#20714</a>)</li>
   <li>docs: Update IBM docs (<a href="https://redirect.github.com/run-llama/llama_index/pull/20718">#20718</a>)</li>
   </ul>
   <h3>llama-index-llms-mistralai [0.10.0]</h3>
   <ul>
   <li>Rrubini/mistral azure sdk (<a href="https://redirect.github.com/run-llama/llama_index/pull/20668">#20668</a>)</li>
   </ul>
   <h3>llama-index-llms-oci-data-science [1.0.0]</h3>
   <ul>
   <li>Add support for new OCI DataScience endpoint /predictWithStream for streaming use case (<a href="https://redirect.github.com/run-llama/llama_index/pull/20545">#20545</a>)</li>
   </ul>
   <h3>llama-index-observability-otel [0.3.0]</h3>
   <ul>
   <li>improve otel data serialization by flattening dicts (<a href="https://redirect.github.com/run-llama/llama_index/pull/20719">#20719</a>)</li>
   <li>feat: support custom span processor; refactor: use llama-index-instrumentation instead of llama-index-core (<a href="https://redirect.github.com/run-llama/llama_index/pull/20732">#20732</a>)</li>
   </ul>
   <h3>llama-index-program-evaporate [0.5.2]</h3>
   <ul>
   <li>Sandbox LLM-generated code execution in EvaporateExtractor (<a href="https://redirect.github.com/run-llama/llama_index/pull/20676">#20676</a>)</li>
   </ul>
   <h3>llama-index-readers-bitbucket [0.4.2]</h3>
   <!-- raw HTML omitted -->
   </blockquote>
   <p>... (truncated)</p>
   </details>
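
   Several core items in the changelog above concern multimodal types and prompt templates (#20640, #20682 to #20684). The new 0.14.15 call signatures aren't shown in these notes, so the sketch below sticks to the pre-existing content-block API (`ChatMessage` with `TextBlock`/`ImageBlock`) that those changes build on; the image URL is a placeholder.

   ```python
   # Hedged sketch: a multimodal chat message built from content blocks.
   # ChatMessage, TextBlock, and ImageBlock predate 0.14.15; the release's new
   # multimodal operations and prompt-template helpers are not shown here.
   from llama_index.core.llms import ChatMessage, ImageBlock, TextBlock

   msg = ChatMessage(
       role="user",
       blocks=[
           TextBlock(text="Describe this chart in one sentence."),
           ImageBlock(url="https://example.com/chart.png"),  # placeholder URL
       ],
   )
   print(msg.blocks)
   ```
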
   <details>
   <summary>Commits</summary>
   <ul>
   <li><a href="https://github.com/run-llama/llama_index/commit/4937fc017cbf91d08c6beaadb790ae44745a87a1"><code>4937fc0</code></a> Release 0.14.15 (<a href="https://redirect.github.com/run-llama/llama_index/issues/20735">#20735</a>)</li>
   <li><a href="https://github.com/run-llama/llama_index/commit/98698936ec2cccaf8eb78018176d6d6da8daaee2"><code>9869893</code></a> feat(bedrock-converse): Add support for Nova 2 (<a href="https://redirect.github.com/run-llama/llama_index/issues/20736">#20736</a>)</li>
   <li><a href="https://github.com/run-llama/llama_index/commit/40da24454784980dd4d27135533a1fff779d6929"><code>40da244</code></a> fix(layoutir): restrict requires-python to &gt;=3.12 to match layoutir dependenc...</li>
   <li><a href="https://github.com/run-llama/llama_index/commit/6504188504a5070b43bb0d4633f000e129f51f87"><code>6504188</code></a> feat: support custom span processor; refactor: use llama-index-instrumentatio...</li>
   <li><a href="https://github.com/run-llama/llama_index/commit/dc716d159cf93c60dca31e2abaca1166877216a2"><code>dc716d1</code></a> chore: update issue classifier action to v0.2.0 (<a href="https://redirect.github.com/run-llama/llama_index/issues/20734">#20734</a>)</li>
   <li><a href="https://github.com/run-llama/llama_index/commit/6d0aff422db769014e384242f7a2130015d71fa5"><code>6d0aff4</code></a> ensure at least one message/content block is returned by the old memory (<a href="https://redirect.github.com/run-llama/llama_index/issues/20729">#20729</a>)</li>
   <li><a href="https://github.com/run-llama/llama_index/commit/fdcc72cc362e033a45db52af52be15dad2bab472"><code>fdcc72c</code></a> feat: add issue classifier gh action (<a href="https://redirect.github.com/run-llama/llama_index/issues/20720">#20720</a>)</li>
   <li><a href="https://github.com/run-llama/llama_index/commit/171ae830ad98c22cc69afa043678340536bc7dbe"><code>171ae83</code></a> fix: Update WhatsAppChatLoader to retrieve DataFrame in pandas format (<a href="https://redirect.github.com/run-llama/llama_index/issues/20722">#20722</a>)</li>
   <li><a href="https://github.com/run-llama/llama_index/commit/68c760a50d29844f96e56e189e91c676e6445bf9"><code>68c760a</code></a> fix(layoutir): hotfix for output_dir crash and Block extraction (<a href="https://redirect.github.com/run-llama/llama_index/issues/20708">#20708</a> follo...</li>
   <li><a href="https://github.com/run-llama/llama_index/commit/83f45ce5fcdd3a96c587ab3f86e527addda621f0"><code>83f45ce</code></a> Add retry and error handling to BaseExtractor (<a href="https://redirect.github.com/run-llama/llama_index/issues/20693">#20693</a>)</li>
   <li>Additional commits viewable in <a href="https://github.com/run-llama/llama_index/compare/v0.13.0...v0.14.15">compare view</a></li>
   </ul>
   </details>
   <br />
   
   
   [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=llama-index&package-manager=pip&previous-version=0.13.0&new-version=0.14.15)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
   
   Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
   
   [//]: # (dependabot-automerge-start)
   [//]: # (dependabot-automerge-end)
   
   ---
   
   <details>
   <summary>Dependabot commands and options</summary>
   <br />
   
   You can trigger Dependabot actions by commenting on this PR:
   - `@dependabot rebase` will rebase this PR
   - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
   - `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
   - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
   - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
   - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
   
   
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
