[PR #235] [CLOSED] Bump node-llama-cpp from 3.0.0-beta.44 to 3.3.0 #274

Closed
opened 2026-03-03 13:53:50 +03:00 by kerem · 0 comments
Owner

📋 Pull Request Information

Original PR: https://github.com/jehna/humanify/pull/235
Author: @dependabot[bot]
Created: 2024-12-03
Status: Closed

Base: main ← Head: dependabot/npm_and_yarn/node-llama-cpp-3.3.0


📝 Commits (1)

  • 1ca57e2 Bump node-llama-cpp from 3.0.0-beta.44 to 3.3.0

📊 Changes

1 file changed (+309 additions, -105 deletions)


📝 package-lock.json (+309 -105)

📄 Description

Bumps node-llama-cpp from 3.0.0-beta.44 to 3.3.0.
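For anyone applying this bump by hand instead of merging the PR, the change amounts to one npm command (package name and target version are from this PR; the lockfile diff above is what this command would regenerate):

```shell
# Update package.json and package-lock.json to the exact version this PR targets
npm install node-llama-cpp@3.3.0
```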

Release notes

Sourced from node-llama-cpp's releases.

v3.3.0

3.3.0 (2024-12-02)

Bug Fixes

  • improve binary compatibility testing on Electron apps (#386) (97abbca)
  • too many abort signal listeners (#386) (97abbca)
  • log level of some lower level logs (#386) (97abbca)
  • context window missing response during generation on specific extreme conditions (#386) (97abbca)
  • adapt to breaking llama.cpp changes (#386) (97abbca)
  • automatically resolve compiler is out of heap space CUDA build error (#386) (97abbca)

Features

  • Llama 3.2 3B function calling support (#386) (97abbca)
  • use llama.cpp backend registry for GPUs instead of custom implementations (#386) (97abbca)
  • getLlama: build: "try" option (#386) (97abbca)
  • init command: --model flag (#386) (97abbca)
  • JSON Schema grammar: array prefixItems, minItems, maxItems support (#388) (4d387de)
  • JSON Schema grammar: object additionalProperties, minProperties, maxProperties support (#388) (4d387de)
  • JSON Schema grammar: string minLength, maxLength, format support (#388) (4d387de)
  • JSON Schema grammar: improve inferred types (#388) (4d387de)
  • function calling: params description support (#388) (4d387de)
  • function calling: document JSON Schema type properties on Functionary chat function types (#388) (4d387de)

Shipped with llama.cpp release b4234

To use the latest llama.cpp release available, run npx -n node-llama-cpp source download --release latest. (learn more)

v3.2.0

3.2.0 (2024-10-31)

Bug Fixes

  • Electron crash with some models on macOS when not using Metal (#375) (ea12dc5)
  • adapt to llama.cpp breaking changes (#375) (ea12dc5)
  • support rejectattr in Jinja templates (#376) (ea12dc5)
  • build warning on macOS (#377) (6405ee9)

Features

... (truncated)

Commits

Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot show <dependency name> ignore conditions will show all of the ignore conditions of the specified dependency
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.
