[PR #110] [CLOSED] Bump node-llama-cpp from 3.0.0-beta.44 to 3.0.0-beta.45 #171

Closed · opened 2026-03-03 13:53:23 +03:00 by kerem (Owner) · 0 comments

📋 Pull Request Information

Original PR: https://github.com/jehna/humanify/pull/110
Author: @dependabot[bot]
Created: 2024-09-20
Status: Closed

Base: main ← Head: dependabot/npm_and_yarn/node-llama-cpp-3.0.0-beta.45


📝 Commits (1)

  • 7fd6801 Bump node-llama-cpp from 3.0.0-beta.44 to 3.0.0-beta.45

📊 Changes

1 file changed (+285 additions, -147 deletions)

View changed files

📝 package-lock.json (+285 -147)

📄 Description

Bumps node-llama-cpp from 3.0.0-beta.44 to 3.0.0-beta.45.

Release notes

Sourced from node-llama-cpp's releases.

v3.0.0-beta.45

3.0.0-beta.45 (2024-09-19)

Bug Fixes

  • improve performance of parallel evaluation from multiple contexts (#309) (4b3ad61)
  • Llama 3.1 chat wrapper standard chat history (#309) (4b3ad61)
  • adapt to llama.cpp sampling refactor (#309) (4b3ad61)
  • Llama 3 Instruct function calling (#309) (4b3ad61)
  • don't preload prompt in the chat command when using --printTimings or --meter (#309) (4b3ad61)
  • more stable Jinja template matching (#309) (4b3ad61)

Features

  • inspect estimate command (#309) (4b3ad61)
  • move seed option to the prompt level (#309) (4b3ad61) (see the sketch after this list)
  • Functionary v3 support (#309) (4b3ad61)
  • Mistral chat wrapper (#309) (4b3ad61)
  • improve Llama 3.1 chat template detection (#309) (4b3ad61)
  • change autoDisposeSequence default to false (#309) (4b3ad61)
  • move download, build and clear commands to be subcommands of a source command (#309) (4b3ad61)
  • simplify TokenBias (#309) (4b3ad61)
  • better threads default value (#309) (4b3ad61)
  • make LlamaEmbedding an object (#309) (4b3ad61)
  • HF_TOKEN env var support for reading GGUF file metadata (#309) (4b3ad61) (see the sketch after the release notes)
  • TemplateChatWrapper: custom history template for each message role (#309) (4b3ad61)
  • more helpful inspect gpu command (#309) (4b3ad61)
  • all tokenizer tokens iterator (#309) (4b3ad61)
  • failed context creation automatic remedy (#309) (4b3ad61)
  • abort generation support in CLI commands (#309) (4b3ad61)
  • --gpuLayers max and --contextSize max flag support for inspect estimate command (#309) (4b3ad61)
  • extract all prebuilt binaries to external modules (#309) (4b3ad61)
  • updated docs (#309) (4b3ad61)
  • combine model downloaders (#309) (4b3ad61)
  • feat(electron example template): update badge, scroll anchoring, table support (#309) (4b3ad61)
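
For example, the prompt-level seed change means a reproducibility seed can now travel with each individual prompt call rather than being fixed when the context is created. A minimal sketch of how that might look, assuming the v3 beta API as documented (the model path is a placeholder, and the option names are hedged against the beta docs rather than verified against this exact release):

```typescript
import {getLlama, LlamaChatSession} from "node-llama-cpp";

const llama = await getLlama();
// Placeholder path — point this at any local GGUF model file
const model = await llama.loadModel({modelPath: "models/example.gguf"});
const context = await model.createContext();
const session = new LlamaChatSession({
    contextSequence: context.getSequence()
});

// Per the beta.45 notes, `seed` is accepted at the prompt level,
// so repeating a call with the same seed should sample deterministically.
const answer = await session.prompt("Why is the sky blue?", {seed: 42});
console.log(answer);
```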

Shipped with llama.cpp release b3785

To use the latest llama.cpp release available, run npx -n node-llama-cpp source download --release latest (learn more: https://node-llama-cpp.withcat.ai/guide/building-from-source#download-new-release).
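
The HF_TOKEN feature from the list above implies that GGUF metadata of gated Hugging Face repositories can be read by setting the token in the environment. A hedged sketch, assuming readGgufFileInfo accepts a remote URL as in the node-llama-cpp v3 documentation (the repository URL below is hypothetical):

```typescript
import {readGgufFileInfo} from "node-llama-cpp";

// Assumes `HF_TOKEN` is exported in the environment, e.g.
//   HF_TOKEN=hf_xxx node read-metadata.js
// Hypothetical URL — substitute a real GGUF file on Hugging Face.
const ggufInfo = await readGgufFileInfo(
    "https://huggingface.co/some-org/some-model/resolve/main/model.gguf"
);

// Prints the parsed GGUF metadata without downloading the whole file
console.log(ggufInfo.metadata);
```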

Commits

  • d0795c1 build: fix CI config (#318)
  • b98767c build: fix release bug (#317)
  • 253b5d6 build: fix release bug (#316)
  • d4a4284 build: fix release bug (#315)
  • b34dc8c build: fix release config (#314)
  • 5fadbc4 build: fix CI config (#313)
  • fd4f067 build: fix CI config (#312)
  • 4b3ad61 feat: new docs (#309)
  • See full diff in the compare view: https://github.com/withcatai/node-llama-cpp/compare/v3.0.0-beta.44...v3.0.0-beta.45

Dependabot compatibility score: https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=node-llama-cpp&package-manager=npm_and_yarn&previous-version=3.0.0-beta.44&new-version=3.0.0-beta.45

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot show <dependency name> ignore conditions will show all of the ignore conditions of the specified dependency
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.
