[PR #161] Update dependency @langchain/ollama to v1.2.5 #138

Open
opened 2026-03-03 13:59:04 +03:00 by kerem · 0 comments

📋 Pull Request Information

Original PR: https://github.com/Kuingsmile/word-GPT-Plus/pull/161
Author: @renovate[bot]
Created: 1/31/2026
Status: 🔄 Open

Base: master ← Head: renovate/langchain-ollama-1.x-lockfile


📝 Commits (1)

  • eb3e221 Update dependency @langchain/ollama to v1.2.5

📊 Changes

1 file changed (+3 additions, -3 deletions)

View changed files

📝 yarn.lock (+3 -3)

📄 Description

This PR contains the following updates:

| Package | Change | Age | Confidence |
|---|---|---|---|
| @langchain/ollama (source) | 1.2.1 → 1.2.5 | age | confidence |

Release Notes

langchain-ai/langchainjs (@langchain/ollama)

v1.2.5

Patch Changes

  • Updated dependencies [817fc9a]:
    • @langchain/core@1.1.10

v1.2.4

Patch Changes
  • #9887 1fa865b Thanks @Muhammad-Kamran-Khan! - Fix validation to allow file_url and file_id without filename metadata in Responses API, and prevent sending filename when not allowed.

  • #9873 28efb57 Thanks @hntrl! - Add reasoningEffort call option as a convenience shorthand for reasoning.effort

    • Adds reasoningEffort to BaseChatOpenAICallOptions for easier configuration of reasoning models
    • Automatically coalesces reasoningEffort into reasoning.effort when calling reasoning models (o1, o3, etc.)
    • If both reasoningEffort and reasoning.effort are provided, reasoning.effort takes precedence
    • Marked as @deprecated to encourage use of the full reasoning.effort option
  • #9876 4e42452 Thanks @sflanker! - fix(openai): pass runManager to responses._generate function in ChatOpenAI

  • #9900 a9b5059 Thanks @hntrl! - Improved abort signal handling for chat models:

    • Added ModelAbortError class in @langchain/core/errors that contains partial output when a model invocation is aborted mid-stream
    • invoke() now throws ModelAbortError with accumulated partialOutput when aborted during streaming (when using streaming callback handlers)
    • stream() throws a regular AbortError when aborted (since chunks are already yielded to the caller)
    • All provider implementations now properly check and propagate abort signals in both _generate() and _streamResponseChunks() methods
    • Added standard tests for abort signal behavior
  • #9900 a9b5059 Thanks @hntrl! - fix(providers): add proper abort signal handling for invoke and stream operations

    • Added early abort check (signal.throwIfAborted()) at the start of _generate methods to immediately throw when signal is already aborted
    • Added abort signal checks inside streaming loops in _streamResponseChunks to return early when signal is aborted
    • Propagated abort signals to underlying SDK calls where applicable (Google GenAI, Google Common/VertexAI, Cohere)
    • Added standard tests for abort signal behavior in @langchain/standard-tests

    This enables proper cancellation behavior for both invoke and streaming operations, and allows fallback chains to correctly proceed to the next runnable when the previous one is aborted.
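The reasoningEffort coalescing rule described above can be sketched in a few lines. This is an illustrative reduction of the documented behavior, not the actual ChatOpenAI source; the names `CallOptions` and `coalesceReasoning` are invented for the example:

```typescript
type ReasoningEffort = "low" | "medium" | "high";

// Hypothetical shape mirroring the documented call options:
// `reasoningEffort` is the deprecated shorthand, `reasoning.effort` the full option.
interface CallOptions {
  reasoningEffort?: ReasoningEffort;
  reasoning?: { effort?: ReasoningEffort };
}

// Coalesce the shorthand into `reasoning.effort`; when both are provided,
// the explicit `reasoning.effort` takes precedence, as the changelog states.
function coalesceReasoning(
  opts: CallOptions
): { effort?: ReasoningEffort } | undefined {
  const effort = opts.reasoning?.effort ?? opts.reasoningEffort;
  if (effort === undefined) return opts.reasoning;
  return { ...opts.reasoning, effort };
}
```

So `coalesceReasoning({ reasoningEffort: "low", reasoning: { effort: "high" } })` resolves to `"high"`, matching the stated precedence.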
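The abort semantics above can be illustrated with a self-contained sketch. Nothing below is the actual @langchain implementation: `ModelAbortError`, `fakeStream`, and `invokeWithAbort` are hypothetical stand-ins that reproduce the described behavior (early `throwIfAborted()`, in-loop abort checks, and partial output carried on the error):

```typescript
// Hypothetical stand-in for the ModelAbortError described in the notes:
// it carries the output accumulated before the abort fired.
class ModelAbortError extends Error {
  constructor(public partialOutput: string[]) {
    super("Model invocation aborted");
    this.name = "ModelAbortError";
  }
}

// Fake chat-model stream; checks the signal inside the loop, mirroring
// the in-loop checks described for _streamResponseChunks.
async function* fakeStream(signal: AbortSignal): AsyncGenerator<string> {
  for (const chunk of ["Hello", ", ", "world", "!"]) {
    if (signal.aborted) return; // return early when aborted mid-stream
    yield chunk;
  }
}

// invoke()-style wrapper: early abort check up front, then accumulate
// chunks and throw ModelAbortError with the partial output on abort.
async function invokeWithAbort(
  signal: AbortSignal,
  onChunk?: (chunk: string) => void
): Promise<string> {
  signal.throwIfAborted(); // immediate throw if already aborted
  const partial: string[] = [];
  for await (const chunk of fakeStream(signal)) {
    partial.push(chunk);
    onChunk?.(chunk);
    if (signal.aborted) throw new ModelAbortError(partial);
  }
  return partial.join("");
}
```

A caller can then catch `ModelAbortError` and inspect `partialOutput`, which is what lets a fallback chain hand the next runnable whatever was produced before cancellation.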

v1.2.3

Compare Source

Patch Changes

  • #10065 e2b3d90 Thanks @MatanTsach! - fix(ollama): preserve tool_calls when AIMessage content is a string

v1.2.2

Compare Source

Patch Changes

  • #9777 3efe79c Thanks @christian-bromann! - fix(core): properly elevate reasoning tokens

Configuration

📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box

This PR was generated by Mend Renovate. View the repository job log.


🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.
