[PR #446] [CLOSED] Bump node-llama-cpp from 3.7.0 to 3.8.1 #463

Closed
opened 2026-03-03 13:54:40 +03:00 by kerem (Owner) · 0 comments

📋 Pull Request Information

Original PR: https://github.com/jehna/humanify/pull/446
Author: @dependabot[bot]
Created: 5/20/2025
Status: Closed

Base: main ← Head: dependabot/npm_and_yarn/node-llama-cpp-3.8.1


📝 Commits (1)

  • 6518d5e Bump node-llama-cpp from 3.7.0 to 3.8.1

📊 Changes

1 file changed (+191 additions, -197 deletions)


📝 package-lock.json (+191 -197)

📄 Description

Bumps node-llama-cpp from 3.7.0 to 3.8.1.

Release notes

Sourced from node-llama-cpp's releases.

v3.8.1

3.8.1 (2025-05-19)

Bug Fixes

  • getLlamaGpuTypes: edge case (#463) (1799127) (usage sketch after this list)
  • remove prompt completion from the cached context window (#463) (1799127)
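
The getLlamaGpuTypes edge-case fix concerns the function introduced in 3.8.0. A minimal usage sketch in TypeScript; the "supported" argument is an assumption taken from the function's API documentation page rather than from these notes:

```typescript
import {getLlamaGpuTypes} from "node-llama-cpp";

// List the GPU backend types relevant to this machine. The "supported"
// filter value is an assumption based on the getLlamaGpuTypes API docs,
// not something these release notes spell out.
const gpuTypes = await getLlamaGpuTypes("supported");
console.log(gpuTypes); // e.g. ["metal"] on macOS, ["cuda", "vulkan"] elsewhere
```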

Shipped with llama.cpp release b5415

To use the latest llama.cpp release available, run npx -n node-llama-cpp source download --release latest. (learn more)

v3.8.0

3.8.0 (2025-05-17)

Features

  • save and restore a context sequence state (#460) (f2cb873) (sketched below)
  • stream function call parameters (#460) (f2cb873)
  • configure Hugging Face remote endpoint for resolving URIs (#460) (f2cb873)
  • Qwen 3 support (#460) (f2cb873)
  • QwenChatWrapper: support discouraging the generation of thoughts (#460) (f2cb873)
  • getLlama: dryRun option (#460) (f2cb873)
  • getLlamaGpuTypes function (#460) (f2cb873)
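
Of these, saving and restoring a context sequence state is the headline feature. A hedged sketch of the flow described in the feature's guide page; the model path is a placeholder, and the saveStateToFile/loadStateFromFile names and options should be verified against the 3.8.x API reference:

```typescript
import {getLlama} from "node-llama-cpp";

const llama = await getLlama();
const model = await llama.loadModel({modelPath: "./model.gguf"}); // placeholder path
const context = await model.createContext();
const sequence = context.getSequence();

// After evaluation, persist the sequence's state so a later run can
// restore it instead of re-evaluating the same tokens. The method names
// follow the linked guide; treat the exact signatures as assumptions.
await sequence.saveStateToFile("./state.bin");

// ...in another run, after creating a fresh sequence the same way:
await sequence.loadStateFromFile("./state.bin", {acceptRisk: true});
```
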
Bug Fixes

  • adapt to breaking llama.cpp changes (#460) (f2cb873)
  • capture multi-token segment separators (#460) (f2cb873)
  • race condition when reading extremely long gguf metadata (#460) (f2cb873)
  • adapt memory estimation to newly added model architectures (#460) (f2cb873)
  • skip binary testing on certain problematic conditions (#460) (f2cb873)
  • improve GPU backend loading error description (#460) (f2cb873) (probed in the sketch below)
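
The improved backend error descriptions pair naturally with the new dryRun option. A sketch under stated assumptions: the "cuda" value is an illustrative choice, and the dryRun semantics (attempt the backend load, then discard it) are inferred from the LlamaOptions["dryRun"] documentation referenced in these notes:

```typescript
import {getLlama} from "node-llama-cpp";

// Probe whether a specific backend can load at all, without keeping a
// Llama instance around. Assumption: dryRun makes getLlama attempt the
// load and then discard it.
try {
    await getLlama({gpu: "cuda", dryRun: true});
    console.log("CUDA backend loads on this machine");
} catch (error) {
    // A failure here surfaces 3.8.0's improved backend error descriptions.
    console.error("GPU backend failed to load:", error);
}
```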

Shipped with llama.cpp release b5414

To use the latest llama.cpp release available, run npx -n node-llama-cpp source download --release latest. (learn more)

Commits

  • 1799127 fix(getLlamaGpuTypes): edge case (#463)
  • f2cb873 feat: save and restore a context sequence state (#460)
  • See full diff in compare view

Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot show <dependency name> ignore conditions will show all of the ignore conditions of the specified dependency
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.
