[PR #551] [CLOSED] Bump node-llama-cpp from 3.7.0 to 3.11.0 #561

Closed
opened 2026-03-03 13:55:06 +03:00 by kerem · 0 comments

📋 Pull Request Information

Original PR: https://github.com/jehna/humanify/pull/551
Author: @dependabot[bot]
Created: 7/30/2025
Status: Closed

Base: main ← Head: dependabot/npm_and_yarn/node-llama-cpp-3.11.0


📝 Commits (1)

  • 463e16b Bump node-llama-cpp from 3.7.0 to 3.11.0

📊 Changes

1 file changed (+362 additions, -210 deletions)


📝 package-lock.json (+362 -210)

📄 Description

Bumps node-llama-cpp from 3.7.0 to 3.11.0.

Release notes

Sourced from node-llama-cpp's releases.

v3.11.0

3.11.0 (2025-07-29)

Features

  • NUMA policy (#482) (a2ddaa2) (documentation: API: LlamaOptions["numa"]; see the sketch after these notes)
  • inspect gpu command: log prebuilt binaries and cloned source releases (#482) (a2ddaa2)

Bug Fixes

  • add missing GGUF metadata types (#482) (a2ddaa2)
  • level of some internal logs (#482) (a2ddaa2)
  • JSON schema grammar edge case (#482) (a2ddaa2)


Shipped with llama.cpp release b6018

To use the latest llama.cpp release available, run npx -n node-llama-cpp source download --release latest. (learn more)
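The NUMA policy above is set when initializing the library. A minimal sketch, assuming the option accepts llama.cpp's usual strategy names such as "distribute"; the model path is a placeholder, and the authoritative values are in the linked LlamaOptions["numa"] documentation:

```typescript
// Hedged sketch of the new NUMA option (v3.11.0).
// "distribute" is an assumed value mirroring llama.cpp's NUMA strategies;
// see the LlamaOptions["numa"] docs for the real accepted type.
import {getLlama} from "node-llama-cpp";

const llama = await getLlama({
    numa: "distribute" // assumption: spread work across NUMA nodes
});
const model = await llama.loadModel({
    modelPath: "path/to/model.gguf" // placeholder
});
```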

v3.10.0

3.10.0 (2025-06-12)

Features

  • JSON Schema Grammar: $defs and $ref support with full inferred types (#472) (9cdbce9) (a usage sketch follows this list)
  • inspect gguf command: format and print the Jinja chat template with --key .chatTemplate (#472) (9cdbce9)
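
A minimal sketch of the $defs/$ref grammar support, using the createGrammarForJsonSchema API; the model path, schema, and prompt are illustrative placeholders:

```typescript
// Sketch: constrain model output with a JSON schema that uses $defs/$ref.
// Paths, schema, and prompt are placeholders.
import {getLlama, LlamaChatSession} from "node-llama-cpp";

const llama = await getLlama();
const model = await llama.loadModel({modelPath: "path/to/model.gguf"});
const context = await model.createContext();
const session = new LlamaChatSession({contextSequence: context.getSequence()});

const grammar = await llama.createGrammarForJsonSchema({
    $defs: {
        person: {
            type: "object",
            properties: {name: {type: "string"}, age: {type: "number"}}
        }
    },
    type: "object",
    properties: {
        author: {$ref: "#/$defs/person"} // resolved against $defs above
    }
});

const answer = await session.prompt("Who wrote Hamlet? Answer as JSON.", {grammar});
console.log(grammar.parse(answer)); // parsed object, typed from the schema
```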

Bug Fixes

  • JinjaTemplateChatWrapper: first function call prefix detection (#472) (9cdbce9)
  • QwenChatWrapper: improve Qwen chat template detection (#472) (9cdbce9)
  • apply maxTokens on function calling parameters (#472) (9cdbce9) (see the sketch after these notes)
  • adjust default prompt completion length based on SWA size when relevant (#472) (9cdbce9)
  • improve thought segmentation syntax extraction (#472) (9cdbce9)
  • adapt to llama.cpp changes (#472) (9cdbce9)

Shipped with llama.cpp release b5640

To use the latest llama.cpp release available, run npx -n node-llama-cpp source download --release latest. (learn more)
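To illustrate the maxTokens fix above: a hedged sketch of a function-calling prompt, where the token budget now also bounds the generated function-call parameters rather than only the text reply. The getCurrentTime function and the prompt are hypothetical:

```typescript
// Sketch: function calling with a maxTokens budget. Per the v3.10.0 fix,
// the budget also applies while the model emits function-call parameters.
import {getLlama, LlamaChatSession, defineChatSessionFunction} from "node-llama-cpp";

const llama = await getLlama();
const model = await llama.loadModel({modelPath: "path/to/model.gguf"});
const context = await model.createContext();
const session = new LlamaChatSession({contextSequence: context.getSequence()});

const functions = {
    getCurrentTime: defineChatSessionFunction({
        description: "Returns the current time as an ISO string",
        handler() {
            return new Date().toISOString();
        }
    })
};

const answer = await session.prompt("What time is it?", {
    functions,
    maxTokens: 128 // bounds the reply and, after the fix, parameter generation too
});
console.log(answer);
```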

v3.9.0

... (truncated)

Commits

  • 5565614 build: fix CI config (#483)
  • a2ddaa2 feat: NUMA policy (#482)
  • 59cf309 test: fix a test (#473)
  • 9cdbce9 feat(JSON Schema Grammar): $defs and $ref support with full inferred type...
  • ea8d904 feat: reasoning budget (#468)
  • 1799127 fix(getLlamaGpuTypes): edge case (#463)
  • f2cb873 feat: save and restore a context sequence state (#460)
  • See full diff in compare view

Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot show <dependency name> ignore conditions will show all of the ignore conditions of the specified dependency
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.
