Mirror of https://github.com/Kuingsmile/word-GPT-Plus.git
Synced 2026-04-25 16:25:49 +03:00
[PR #161] Update dependency @langchain/ollama to v1.2.5 #138
📋 Pull Request Information
Original PR: https://github.com/Kuingsmile/word-GPT-Plus/pull/161
Author: @renovate[bot]
Created: 1/31/2026
Status: 🔄 Open
Base: master ← Head: renovate/langchain-ollama-1.x-lockfile
📝 Commits (1)
- eb3e221 Update dependency @langchain/ollama to v1.2.5
📊 Changes
1 file changed (+3 additions, -3 deletions)
📝 yarn.lock (+3 -3)
📄 Description
This PR contains the following updates:
1.2.1 → 1.2.5
Release Notes
langchain-ai/langchainjs (@langchain/ollama)
v1.2.5
Patch Changes
- 817fc9a
v1.2.4
Patch Changes
- #9887 1fa865b Thanks @Muhammad-Kamran-Khan! - Fix validation to allow `file_url` and `file_id` without `filename` metadata in Responses API, and prevent sending `filename` when not allowed.
- #9873 28efb57 Thanks @hntrl! - Add `reasoningEffort` call option as a convenience shorthand for `reasoning.effort`:
  - Adds `reasoningEffort` to `BaseChatOpenAICallOptions` for easier configuration of reasoning models
  - Maps `reasoningEffort` into `reasoning.effort` when calling reasoning models (o1, o3, etc.)
  - When both `reasoningEffort` and `reasoning.effort` are provided, `reasoning.effort` takes precedence
  - Marked `@deprecated` to encourage use of the full `reasoning.effort` option
- #9876 4e42452 Thanks @sflanker! - fix(openai): pass runManager to `responses._generate` function in ChatOpenAI
- #9900 a9b5059 Thanks @hntrl! - Improved abort signal handling for chat models:
  - New `ModelAbortError` class in `@langchain/core/errors` that contains partial output when a model invocation is aborted mid-stream
  - `invoke()` now throws `ModelAbortError` with accumulated `partialOutput` when aborted during streaming (when using streaming callback handlers)
  - `stream()` throws a regular `AbortError` when aborted (since chunks are already yielded to the caller)
  - Applies to `_generate()` and `_streamResponseChunks()` methods
- #9900 a9b5059 Thanks @hntrl! - fix(providers): add proper abort signal handling for invoke and stream operations:
  - Added abort checks (`signal.throwIfAborted()`) at the start of `_generate` methods to immediately throw when the signal is already aborted
  - Updated `_streamResponseChunks` to return early when the signal is aborted
  - Corresponding changes in `@langchain/standard-tests`

  This enables proper cancellation behavior for both invoke and streaming operations, and allows fallback chains to correctly proceed to the next runnable when the previous one is aborted.
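The abort-handling entries above can be illustrated with a small, self-contained sketch. This is not LangChain's actual code; `generate` and its signature are hypothetical stand-ins showing the pattern the notes describe: check the signal up front with `signal.throwIfAborted()`, so an already-aborted call fails immediately instead of doing work.

```typescript
// Hypothetical stand-in for a provider's generate method (not LangChain's
// real implementation): honor an already-aborted AbortSignal up front.
async function generate(prompt: string, signal?: AbortSignal): Promise<string> {
  // Throws the signal's abort reason (an "AbortError" by default in Node)
  // if the caller aborted before any work started.
  signal?.throwIfAborted();
  // ...the real model call would stream here, re-checking the signal
  // between chunks so mid-stream aborts also stop generation early...
  return `echo: ${prompt}`;
}

async function demo(): Promise<void> {
  console.log(await generate("hi")); // normal path

  const controller = new AbortController();
  controller.abort(); // abort before calling
  try {
    await generate("hi", controller.signal);
  } catch (err) {
    console.log((err as Error).name); // "AbortError"
  }
}

demo();
```

With an early check like this, a fallback chain can catch the abort error and decide whether to proceed to the next runnable, which is the behavior the v1.2.4 notes call out.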
v1.2.3
Compare Source
Patch Changes
- e2b3d90 Thanks @MatanTsach! - fix(ollama): preserve `tool_calls` when AIMessage content is a string

v1.2.2
Compare Source
Patch Changes
- 3efe79c Thanks @christian-bromann! - fix(core): properly elevate reasoning tokens

Configuration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about this update again.
This PR was generated by Mend Renovate. View the repository job log.
🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.