[PR #659] [CLOSED] Bump node-llama-cpp from 3.7.0 to 3.14.2 #663

Closed
opened 2026-03-03 13:55:35 +03:00 by kerem · 0 comments
Owner

📋 Pull Request Information

Original PR: https://github.com/jehna/humanify/pull/659
Author: @dependabot[bot]
Created: 10/27/2025
Status: Closed

Base: main ← Head: dependabot/npm_and_yarn/node-llama-cpp-3.14.2


📝 Commits (1)

  • 58a74d2 Bump node-llama-cpp from 3.7.0 to 3.14.2

📊 Changes

1 file changed (+458 additions, -291 deletions)


📝 package-lock.json (+458 -291)

📄 Description

Bumps node-llama-cpp from 3.7.0 to 3.14.2.
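
For reference, the manual equivalent of this bump is editing the version in package.json and regenerating the lockfile with npm install (a sketch; assumes node-llama-cpp is listed as a regular dependency pinned to an exact version, which may differ from this repository's actual range):

```json
{
  "dependencies": {
    "node-llama-cpp": "3.14.2"
  }
}
```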

Release notes

Sourced from node-llama-cpp's releases.

v3.14.2

3.14.2 (2025-10-26)

Bug Fixes

  • a new release due to a semantic-release failure in the previous release (#518) (e516e50)

Shipped with llama.cpp release b6845

To use the latest llama.cpp release available, run npx -n node-llama-cpp source download --release latest.

v3.14.1

3.14.1 (2025-10-26)

Bug Fixes

  • Vulkan: include integrated GPU memory (#516) (47475ac)
  • Vulkan: deduplicate the same device coming from different drivers (#516) (47475ac)
  • adapt Llama chat wrappers to breaking llama.cpp changes (#516) (47475ac)

Shipped with llama.cpp release b6843

To use the latest llama.cpp release available, run npx -n node-llama-cpp source download --release latest.

v3.14.0

3.14.0 (2025-10-02)

Features

  • Qwen3 Reranker support (#506) (00305f7) (see #506 for prequantized Qwen3 Reranker models you can use)

Bug Fixes

  • handle HuggingFace rate limit responses (#506) (00305f7)
  • adapt to llama.cpp breaking changes (#506) (00305f7)

Shipped with llama.cpp release b6673

... (truncated)

Commits

  • e516e50 fix: semantic-release retry (#518)
  • 47475ac fix(Vulkan): include integrated GPU memory (#516)
  • 02805ee test: fix tests (#509)
  • 142c91f test: fix tests (#508)
  • c1f26bf build: update CI Vulkan SDK version (#507)
  • 00305f7 feat: Qwen3 Reranker support (#506)
  • eefe78c feat: Seed OSS support (#502)
  • d33cc31 fix(Vulkan): read external memory usage (#500)
  • 76b505e fix: adapt to breaking llama.cpp changes (#501)
  • c5cd057 test: fix tests (#499)
  • Additional commits viewable in the v3.7.0...v3.14.2 compare view

Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot show <dependency name> ignore conditions will show all of the ignore conditions of the specified dependency
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.
