[PR #243] [CLOSED] Bump node-llama-cpp from 3.0.0-beta.44 to 3.3.1 #285

Closed
opened 2026-03-03 13:53:53 +03:00 by kerem · 0 comments
Owner

📋 Pull Request Information

Original PR: https://github.com/jehna/humanify/pull/243
Author: @dependabot[bot]
Created: 2024-12-09
Status: Closed

Base: main ← Head: dependabot/npm_and_yarn/node-llama-cpp-3.3.1


📝 Commits (1)

  • 5c10640 Bump node-llama-cpp from 3.0.0-beta.44 to 3.3.1

📊 Changes

1 file changed (+294 additions, -130 deletions)


📝 package-lock.json (+294 -130)

📄 Description

Bumps node-llama-cpp from 3.0.0-beta.44 to 3.3.1.

Release notes

Sourced from node-llama-cpp's releases.

v3.3.1

3.3.1 (2024-12-09)

Bug Fixes

  • align embedding input with WPM vocabulary type models (#393) (28c7984)

Shipped with llama.cpp release b4291

To use the latest llama.cpp release available, run npx -n node-llama-cpp source download --release latest. (learn more)

v3.3.0

3.3.0 (2024-12-02)

Bug Fixes

  • improve binary compatibility testing on Electron apps (#386) (97abbca)
  • too many abort signal listeners (#386) (97abbca)
  • log level of some lower level logs (#386) (97abbca)
  • context window missing response during generation on specific extreme conditions (#386) (97abbca)
  • adapt to breaking llama.cpp changes (#386) (97abbca)
  • automatically resolve compiler is out of heap space CUDA build error (#386) (97abbca)

Features

  • Llama 3.2 3B function calling support (#386) (97abbca)
  • use llama.cpp backend registry for GPUs instead of custom implementations (#386) (97abbca)
  • getLlama: build: "try" option (#386) (97abbca)
  • init command: --model flag (#386) (97abbca)
  • JSON Schema grammar: array prefixItems, minItems, maxItems support (#388) (4d387de)
  • JSON Schema grammar: object additionalProperties, minProperties, maxProperties support (#388) (4d387de)
  • JSON Schema grammar: string minLength, maxLength, format support (#388) (4d387de)
  • JSON Schema grammar: improve inferred types (#388) (4d387de)
  • function calling: params description support (#388) (4d387de)
  • function calling: document JSON Schema type properties on Functionary chat function types (#388) (4d387de)

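The JSON Schema grammar keywords listed above (`prefixItems`, `minItems`, `maxItems`, `additionalProperties`, `minLength`, `maxLength`, `format`, and friends) are standard JSON Schema constructs; per the release notes, 3.3.0 adds grammar-level support for them. A hypothetical schema exercising several of the newly supported keywords might look like this (illustrative only; exact enforcement semantics are as documented by node-llama-cpp, not guaranteed here):

```json
{
  "type": "object",
  "additionalProperties": false,
  "required": ["tags"],
  "properties": {
    "tags": {
      "type": "array",
      "prefixItems": [{ "type": "string" }],
      "minItems": 1,
      "maxItems": 5,
      "items": { "type": "string", "minLength": 1, "maxLength": 32 }
    },
    "createdAt": { "type": "string", "format": "date-time" }
  }
}
```

A grammar built from such a schema constrains model output to valid instances, e.g. forcing `tags` to contain between one and five short strings and rejecting unknown object keys.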
Shipped with llama.cpp release b4234

To use the latest llama.cpp release available, run npx -n node-llama-cpp source download --release latest. (learn more)

v3.2.0

... (truncated)

Commits

  • 6a54163 docs: choosing an embedding model (#396)
  • 28c7984 fix: align embedding input with WPM vocabulary type models (#393)
  • 4d387de feat: JSON Schema grammar enhancements (#388)
  • bc6cfe3 build: fix CI (#389)
  • 97abbca feat: Llama 3.2 3B function calling support (#386)
  • 6405ee9 fix: build warning on macOS (#377)
  • ff02ebd chore: update modules (#376)
  • ea12dc5 feat: chat session response prefix (#375)
  • 8145c94 feat(minor): reference common classes on the Llama instance (#360)
  • 51eab61 docs: improvements (#357)
  • Additional commits viewable in the compare view

Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot show <dependency name> ignore conditions will show all of the ignore conditions of the specified dependency
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.
