[PR #119] [CLOSED] Bump node-llama-cpp from 3.0.0-beta.44 to 3.0.0 #177

Closed
opened 2026-03-03 13:53:24 +03:00 by kerem · 0 comments
Owner

📋 Pull Request Information

Original PR: https://github.com/jehna/humanify/pull/119
Author: @dependabot[bot]
Created: 9/24/2024
Status: Closed

Base: main ← Head: dependabot/npm_and_yarn/node-llama-cpp-3.0.0


📝 Commits (1)

  • 5c98582 Bump node-llama-cpp from 3.0.0-beta.44 to 3.0.0

📊 Changes

1 file changed (+300 additions, -147 deletions)


📝 package-lock.json (+300 -147)

📄 Description

Bumps node-llama-cpp from 3.0.0-beta.44 to 3.0.0.
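Note that only package-lock.json changed (+300/−147), which suggests the version declared in package.json is a range such as `^3.0.0-beta.44` that already admits the stable release: under semver precedence rules, a prerelease like `3.0.0-beta.44` sorts below `3.0.0`, so only the lockfile's resolved version needs updating. The ordering can be illustrated with a minimal hand-rolled comparator (a simplified sketch, not the real `semver` npm package; it ignores build metadata and hyphenated prerelease identifiers):

```javascript
// Compare two semver strings; negative means a < b in precedence.
// Simplified: assumes prerelease ids contain no hyphens and no "+build" part.
function compareSemver(a, b) {
  const parse = (v) => {
    const [core, pre] = v.split("-");
    return { nums: core.split(".").map(Number), pre: pre ? pre.split(".") : null };
  };
  const pa = parse(a), pb = parse(b);
  for (let i = 0; i < 3; i++) {
    if (pa.nums[i] !== pb.nums[i]) return pa.nums[i] - pb.nums[i];
  }
  // Same core version: a prerelease has lower precedence than the release.
  if (pa.pre && !pb.pre) return -1;
  if (!pa.pre && pb.pre) return 1;
  if (!pa.pre && !pb.pre) return 0;
  // Compare prerelease identifiers field by field (numeric < alphanumeric).
  for (let i = 0; i < Math.max(pa.pre.length, pb.pre.length); i++) {
    const x = pa.pre[i], y = pb.pre[i];
    if (x === undefined) return -1; // shorter prerelease sorts first
    if (y === undefined) return 1;
    const nx = Number(x), ny = Number(y);
    const xNum = !Number.isNaN(nx), yNum = !Number.isNaN(ny);
    if (xNum && yNum) { if (nx !== ny) return nx - ny; }
    else if (xNum) return -1;
    else if (yNum) return 1;
    else if (x !== y) return x < y ? -1 : 1;
  }
  return 0;
}

console.log(compareSemver("3.0.0-beta.44", "3.0.0") < 0); // prerelease sorts below stable
```

This is also why running plain `npm install node-llama-cpp@3.0.0` in the repo would reproduce this PR's lockfile-only diff, assuming the package.json range already covers 3.0.0.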

Release notes

Sourced from node-llama-cpp's releases.

v3.0.0

3.0.0 (2024-09-24)

Features

... (truncated)

Commits

Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot show <dependency name> ignore conditions will show all of the ignore conditions of the specified dependency
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

📄 Release notes (full list, from the untruncated PR body)

v3.0.0 (2024-09-24) — Features:

  • function calling (#139, 5fcdf9b)
  • get embedding for text (#144, 4cf1fba)
  • async model and context loading (#178, 315a3eb)
  • token biases (#196, 3ad4494)
  • automatic batching (#104, 4757af8)
  • prompt completion engine (#225, 95f4645)
  • model compatibility warnings (#225, 95f4645)
  • Vulkan support (#171, d161bcd)
  • Windows on Arm prebuilt binary (#181, f3b7f81)
  • change the default log level to warn (#191, b542b53)
  • `pull` command (#214, 453c162)
  • `inspect gpu` command (#175, 5a70576)
  • `inspect gguf` command (#182, 35e6f50)
  • `inspect estimate` command (#309, 4b3ad61)
  • `inspect measure` command (#182, 35e6f50)
  • `init` command to scaffold a new project from a template (with `node-typescript` and `electron-typescript-react` templates) (#217, d6a0f43)
  • move the `download`, `build` and `clear` commands to be subcommands of a `source` command (#309, 4b3ad61)
  • move the `seed` option to the prompt level (#309, 4b3ad61)
  • `TemplateChatWrapper`: custom history template for each message role (#309, 4b3ad61)
  • Llama 3.1 support (#273, e3e0994)
  • Mistral chat wrapper (#309, 4b3ad61)
  • Functionary v3 support (#309, 4b3ad61)
  • Phi-3 support (#273, e3e0994)
  • extract all prebuilt binaries to external modules (#309, 4b3ad61)
  • parallel function calling (#225, 95f4645)
  • preload prompt (#225, 95f4645)
  • `onTextChunk` option (#273, e3e0994)
  • flash attention (#264, c2e322c)
  • debug mode (#217, d6a0f43)
  • load LoRA adapters (#217, d6a0f43)
  • split gguf files support (#214, 453c162)
  • `stopOnAbortSignal` and `customStopTriggers` on `LlamaChat` and `LlamaChatSession` (#214, 453c162)
  • Llama 3 support (#205, ef501f9)
  • `--gpu` flag in generation CLI commands (#205, ef501f9)
  • `specialTokens` parameter on `model.detokenize` (#205, ef501f9)
  • interactively select a model from CLI commands (#191, b542b53)
  • automatically adapt to the current free VRAM state (#182, 35e6f50)
  • GGUF file metadata info on `LlamaModel` (#182, 35e6f50)
  • use the `tokenizer.chat_template` header from the `gguf` file when available — use it to find a better specialized chat wrapper, or use `JinjaTemplateChatWrapper` with it as a fallback (#182, 35e6f50)
  • simplify generation CLI commands: `chat`, `complete`, `infill` (#182, 35e6f50)
  • gguf parser (#168, bcaab4f)
  • use the best compute layer available by default (#175, 5a70576)
  • more guardrails to prevent loading an incompatible prebuilt binary (#175, 5a70576)
  • completion and infill (#164, ede69c1)

Commits:

  • 97b0d86 build: fix release job (#332)
  • fc0fca5 feat: version 3.0 (#105)
  • 8565b7c feat: v3.0 stable release (#331)
  • c35fcad Merge remote-tracking branch 'origin/master' into beta
  • 4b7ef5b fix: improve model downloader CI logs (#329)
  • ebc4e83 feat: `resetChatHistory` function on a `LlamaChatSession` (#327)
  • cf791f1 build: fix CI config (#326)
  • 1c720ca build: fix CI config (#325)
  • 7805fd5 docs: improve documentation (#324)
  • 6c644ff fix: revert the `electron-builder` version used in the Electron template (#323)
  • Additional commits are viewable in the v3.0.0-beta.44...v3.0.0 compare view