[GH-ISSUE #419] Add better local model support for LM Studio + document current workaround using humanify openai --baseURL #84

Open
opened 2026-03-03 13:52:51 +03:00 by kerem · 0 comments

Originally created by @0xdevalias on GitHub (Apr 26, 2025).
Original GitHub issue: https://github.com/jehna/humanify/issues/419
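The workaround named in the title can be sketched as follows. This is a hedged sketch, not confirmed syntax: `--baseURL` comes from the issue title, while the `lms server start` command, the `--apiKey`/`--model` flags, the default port `1234`, and the model/file names are assumptions about LM Studio's CLI and humanify's options — check `humanify openai --help` and the LM Studio docs for your versions.

```shell
# 1. Start LM Studio's local server (assumed default: http://localhost:1234/v1).
#    This can also be started from the Developer tab in the LM Studio app.
lms server start

# 2. Point humanify's OpenAI provider at the local server instead of api.openai.com.
#    LM Studio typically ignores the API key, but a placeholder may still be required.
humanify openai \
  --baseURL http://localhost:1234/v1 \
  --apiKey lm-studio \
  --model "<model-loaded-in-lm-studio>" \
  obfuscated.js
```

Because LM Studio exposes an OpenAI compatibility endpoint, no humanify code changes should be needed for this path; a dedicated `lmstudio-js`-based provider (as proposed here) would add model load/unload and configuration on top of it.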

- https://lmstudio.ai/
  - > Your local AI toolkit.
  - > Download and run Llama, DeepSeek, Mistral, Phi on your computer.
- https://lmstudio.ai/models
  - > Model Catalog
- https://lmstudio.ai/docs/app
  - > Docs
- https://lmstudio.ai/docs/app/api
  - > **LM Studio as a Local LLM API Server**
    >
    > You can serve local LLMs from LM Studio's Developer tab, either on localhost or on the network.
    >
    > LM Studio's APIs can be used through an [OpenAI compatibility mode](https://lmstudio.ai/docs/api/rest-api/openai-api), enhanced [REST API](https://lmstudio.ai/docs/api/rest-api/endpoints/endpoints), or through a client library like [lmstudio-js](https://lmstudio.ai/docs/api/sdk).
- https://lmstudio.ai/docs/app/api/endpoints/openai
  - > OpenAI Compatibility API
- https://lmstudio.ai/docs/app/api/endpoints/rest
  - > LM Studio REST API (beta)
  - > LM Studio now has its own REST API, in addition to OpenAI compatibility mode.
    >
    > The REST API includes enhanced stats such as Token / Second and Time To First Token (TTFT), as well as rich information about models such as loaded vs unloaded, max context, quantization, and more.
- https://lmstudio.ai/docs/app/api/structured-output
  - > Structured Output
    >
    > You can enforce a particular response format from an LLM by providing a JSON schema to the `/v1/chat/completions` endpoint, via LM Studio's REST API (or via any OpenAI client).
- https://lmstudio.ai/docs/typescript
  - > lmstudio-js (TypeScript SDK)
    >
    > The SDK provides you a set of programmatic tools to interact with LLMs, embeddings models, and agentic flows.
- https://github.com/lmstudio-ai/lmstudio-js
  - > LM Studio TypeScript SDK
  - > `lmstudio-ts` is LM Studio's official JavaScript/TypeScript client SDK, it allows you to
    >
    > - Use LLMs to [respond in chats](https://lmstudio.ai/docs/typescript/llm-prediction/chat-completion) or predict [text completions](https://lmstudio.ai/docs/typescript/llm-prediction/completion)
    > - Define functions as tools, and turn LLMs into [autonomous agents](https://lmstudio.ai/docs/typescript/agent/act) that run completely locally
    > - [Load](https://lmstudio.ai/docs/typescript/manage-models/loading), [configure](https://lmstudio.ai/docs/typescript/llm-prediction/parameters), and [unload](https://lmstudio.ai/docs/typescript/manage-models/loading) models from memory
    > - Supports both browser and any Node-compatible environments
    > - Generate embeddings for text, and more!
- https://github.com/lmstudio-ai/lmstudio-js#why-use-lmstudio-js-over-openai-sdk
  - > **Why use `lmstudio-js` over `openai` sdk?**
    >
    > Open AI's SDK is designed to use with Open AI's proprietary models. As such, it is missing many features that are essential for using LLMs in a local environment, such as:
    >
    > - Managing loading and unloading models from memory
    > - Configuring load parameters (context length, gpu offload settings, etc.)
    > - Speculative decoding
    > - Getting information (such as context length, model size, etc.) about a model
    > - ... and more
    >
    > In addition, while `openai` sdk is automatically generated, `lmstudio-js` is designed from ground-up to be clean and easy to use for TypeScript/JavaScript developers.

## See Also

- https://github.com/jehna/humanify/issues/400
- https://github.com/jehna/humanify/issues/84
- https://github.com/jehna/humanify/issues/14
- https://github.com/jehna/humanify/issues/416
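The structured-output behaviour quoted from the LM Studio docs (a JSON schema supplied to `/v1/chat/completions`) is the piece that would let a local-model provider get reliably machine-parseable renames out of a model. A minimal sketch of such a request, assuming LM Studio's default port `1234`; the model name, schema, and prompt are illustrative, and the `response_format`/`json_schema` shape follows the OpenAI-style structured-output convention the docs reference:

```shell
# Ask a locally served model for a rename suggestion, constrained to a JSON schema.
# (Requires a running LM Studio server with a model loaded.)
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "<model-loaded-in-lm-studio>",
    "messages": [
      { "role": "user", "content": "Suggest a descriptive name for the minified variable `a` in: const a = items.filter(x => x.active)" }
    ],
    "response_format": {
      "type": "json_schema",
      "json_schema": {
        "name": "rename_suggestion",
        "schema": {
          "type": "object",
          "properties": { "newName": { "type": "string" } },
          "required": ["newName"]
        }
      }
    }
  }'
```

With the schema enforced, the response content should always parse as `{"newName": "..."}`, avoiding the free-text parsing failures that local models otherwise tend to produce.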