[GH-ISSUE #11] How to change model #10

Open
opened 2026-02-26 21:30:43 +03:00 by kerem · 7 comments

Originally created by @Dcerniy on GitHub (Feb 4, 2026).
Original GitHub issue: https://github.com/fspecii/ace-step-ui/issues/11

Hi! Maybe I'm a little stupid or blind, but I don't see a switch for models. I downloaded one more model besides turbo, for more inference steps (because on turbo it maxes at 8), but the API seems to use the turbo model for standard?

@Dcerniy commented on GitHub (Feb 4, 2026):

I was able to get around this by simply renaming the folder to match the turbo model's name, but that doesn't seem like a complete solution. And in the UI, for example, I can't select inference steps higher than 32, although logically I should be able to do more, right?

@GTManiK commented on GitHub (Feb 4, 2026):

Please add support for base model, it has potential

@mykeehu commented on GitHub (Feb 4, 2026):

And please add support for Qwen models (0.6B, 1.7B, 4B) too. :)

@TrumpetOfDeath commented on GitHub (Feb 5, 2026):

> [...] in the UI, for example, I can't select inference higher than 32, although logically, I should be able to do more, right?

fyi the official Gradio implementation also maxes at 32 inference steps, so I think it made sense to follow that.

@Marcus-Arcadius commented on GitHub (Feb 6, 2026):

> And please add support for Qwen models (0.6B, 1.7B, 4B) too. :)

I second this; it would be an amazing option to have, as I have enough VRAM to run the Qwen 4B.

@comfyubuntu commented on GitHub (Feb 10, 2026):

Maybe the author can give us an update? I really like the interface, but I would love to be able to use all the models. Will there be an update for this?

@kajan988 commented on GitHub (Feb 11, 2026):

> > [...] in the UI, for example, I can't select inference higher than 32, although logically, I should be able to do more, right?
>
> fyi the official Gradio implementation also maxes at 32 inference steps, so I think it made sense to follow that.

Gradio allows you to use from 0 to 200 steps, depending on the model you choose.

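The step-limit discussion above (turbo tuned for roughly 8 steps, this UI capped at 32, the official Gradio app reportedly allowing up to 200 depending on the model) suggests the cap should be per-model rather than global. A minimal sketch of that idea follows; the model names and ceilings are illustrative values inferred from this thread, not the project's actual configuration.

```python
# Hypothetical per-model step ceilings, inferred from the thread;
# the real project may use different model names and limits.
MAX_STEPS = {
    "turbo": 8,    # turbo variants are distilled for few-step inference
    "base": 200,   # the base model can benefit from many more steps
}
DEFAULT_MAX = 32   # fallback matching the UI's current global cap

def clamp_steps(model: str, requested: int) -> int:
    """Clamp a requested inference-step count to the ceiling for the
    chosen model, falling back to the global default for unknown models."""
    return max(1, min(requested, MAX_STEPS.get(model, DEFAULT_MAX)))
```

With a table like this, the UI slider's maximum could be updated whenever the selected model changes, instead of hard-coding 32 for everything.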